AWS Solutions Architecture Associate


Storage Gateway

* Is a software appliance deployed within your own data center that allows integration between your on-premises storage and AWS. * The software appliance can be downloaded from AWS as a virtual machine and installed on your VMware or Microsoft hypervisors. * Acts as a virtual file server * On-premises computing resources read and write to the storage gateway, which then synchronizes to S3 * You can access the storage gateway with SMB- or NFS-based shares. SMB is optimal for Windows devices, while NFS is optimal for Linux machines. * Perfect for disaster recovery or backup purposes * Great for hybrid clouds as well as migration to a full cloud environment

VPC Peering

* Is a technique to connect two or more VPCs without traversing the public internet. * Also mitigates the need for Direct Connect or VPN connections between organizations that are hosted on the AWS network * VPC peering is non-transitive, meaning that while it facilitates connectivity between two VPCs, it does not allow routing traffic through one VPC to reach another VPC. * Uses the AWS network backbone, so there is no need for internet connections, internet gateways, NAT gateways, or public IP addresses * Inter-region VPC peering traffic is encrypted

Interface Endpoints

* Is an elastic network interface that uses a private address from the VPC's address pool * The interface endpoint serves as an entry point from your organization to supported services, which include AWS services and services hosted in other VPCs. * Uses AWS PrivateLink, which keeps all traffic between the VPC and the supported service on the AWS network

AWS Athena

* Is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. * While Amazon Athena is ideal for quick, ad-hoc querying, it can also handle complex analysis, including large joins, window functions, and arrays.

Amazon Simple Storage Service (S3)

* Is high-security, high-availability, durable, and scalable object-based storage. * S3 offers 99.999999999% (eleven nines) durability and 99.99% availability. * Durability refers to the likelihood of a file being lost or corrupted. * Availability is the ability to access the system when you need your data. * Smallest supported object size is 0 bytes * Largest supported object size is 5 terabytes * Regional service * By default your account can have up to 100 buckets; a request can be made to AWS if more are needed
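The eleven-nines durability figure can be made concrete with a back-of-the-envelope calculation (a sketch, not an AWS formula):

```python
# Expected annual object loss at S3's design durability of 99.999999999%
# ("eleven nines"). This is a simple statistical illustration.
def expected_annual_loss(num_objects, durability=0.99999999999):
    """Average number of objects expected to be lost per year."""
    return num_objects * (1 - durability)

# Storing 10 million objects, you would statistically expect to lose a
# single object roughly once every 10,000 years.
loss_per_year = expected_annual_loss(10_000_000)
print(loss_per_year)      # ~0.0001 objects per year on average
print(1 / loss_per_year)  # ~10,000 years per lost object
```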

MySQL

* Is one of the original open-source relational databases and has been around since 1995. * MySQL is extremely popular and is used in a wide variety of web applications.

Subnetting is critical for 2 reasons:

* Every interface on a system needs to be on a different network or subnet. * There is a practical limit to how many hosts can be on a subnet due to system broadcasts.
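The subnet math can be illustrated with Python's standard `ipaddress` module; the 10.0.0.0/16 CIDR below is just an example VPC range:

```python
import ipaddress

# Split an example VPC CIDR into smaller subnets: each interface must land
# on a different subnet, and smaller subnets bound the broadcast domain.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Splitting a /16 into /24 subnets yields 2^(24-16) = 256 subnets.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))                  # 256
print(subnets[0])                    # 10.0.0.0/24

# Classically a /24 leaves 254 usable host addresses (network and
# broadcast addresses reserved). AWS reserves 5 per subnet, leaving 251.
print(subnets[0].num_addresses - 2)  # 254
```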

Root User

* Is the identity belonging to the person who created the AWS account. The root user has full system access: it can access the console and has programmatic access to AWS resources. * Since the root user can do anything, it's best to use the root account only for initial setup of the VPC and then immediately create an IAM user with appropriate access to the VPC.

Multi-account strategies are especially beneficial from a security perspective because of the following security enhancements:

* Isolation between organizational units. * The ability to share only the necessary information between units. * The ability to reduce the visibility of workloads between organizations. * The ability to reduce the blast radius (meaning that if a problem happens in a single OU it won't affect other OUs). * The ability to truly compartmentalize data.

Gateway Endpoints

* Is a private endpoint that provides high-security access to an AWS service. * Effectively places a route in the VPC's routing table for traffic destined to the AWS service. * Gateway endpoints are available for Amazon S3 and DynamoDB

CNAME Record

* Is a record that maps a domain to another domain * It can map to another CNAME record or an A record * It effectively redirects a request for one domain to another domain * E.g., map www.a.com to www.b.com

Securing EC2 Access:

* Keeping the instance patched with the latest security updates and turning off unnecessary services is a major part of securing EC2 instances. * The default security policy is to deny all traffic, so if you don't configure the security group to allow what you need, it won't work. E.g., if you spin up a new instance and it's not working as intended, check whether its security group correctly allows the traffic you want and nothing else. * To allow traffic into the compute instance you must explicitly permit desired traffic flows

Identity and Access Management (IAM)

* Key component of any security architecture * IAM is about identifying the user and giving the user access to the resources necessary to perform their functions * IAM is used to determine who can access the system * AWS uses standard IAM concepts like users, groups and access control policies * This allows scalability with regards to user management
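Access control policies in IAM are JSON documents. A minimal sketch of the standard policy shape, built as a Python dict; the bucket name `example-bucket` is hypothetical:

```python
import json

# A minimal IAM identity policy granting read-only access to one S3 bucket.
# The bucket name is a placeholder for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching a policy like this to a user or group is how IAM scales user management: permissions live in the policy, not on individual resources.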

Kinesis Data Analytics

* Kinesis Data Analytics is a managed service to transform and analyze streaming data in real time * Kinesis Data Analytics uses built-in Apache Flink to process data streams * Kinesis Data Analytics auto scales to meet the organization's needs * Data on Kinesis data streams can be queried with standard SQL queries

4 key types of DNS Records

* A Record * CNAME Record * NS Record * MX Record

What is a container?

* A container is a modern and lightweight alternative to a virtual machine * A virtual machine requires an entire copy of an OS per virtual machine to run * A container contains just enough of an OS for the application in the container to run * A container requires much less memory and CPU than a virtual machine on a server * Containers are logically isolated from each other, facilitating a secure environment

Automating First Boot of AMI

* A full AMI can launch new instances * Bootstrapping can get a new server up and running * Bootstrap scripts are used to install applications, manage software, and patch the operating system * Linux bootstrap scripts - simple shell scripts * Windows bootstrap scripts - PowerShell scripts

What is a Principal?

* A principal is an IAM entity that is permitted to access AWS resources * AWS further breaks down the principal concept into: - Root user (has access to EVERYTHING) - IAM users - Roles

AWS Budget

* AWS Budgets helps an organization meet its budgetary requirements * How it works: - You create a budget and set custom alerts - The budget is created in the AWS management console or the AWS billing console - When an organization gets close to exceeding its budget, an alert is sent

AWS Certificate Manager (ACM)

* AWS Certificate Manager is a service to centrally manage SSL/TLS certificates in the AWS cloud * Certificate Manager adds protection to establish secure and safe connections * Certificate Manager enables simple provisioning, management, and deployment of certificates (public or private) * Certificate Manager allows users to deploy certificates on AWS resources quickly and effectively * Certificate Manager provides free public and private certificates for AWS services like ELB and API Gateway

AWS CloudFront

* AWS CloudFront is an Amazon-branded Content Delivery Network * CloudFront can dramatically improve web hosting performance and is integrated with numerous AWS services * CloudFront is effectively a network of caching servers spread throughout the world * When a request is made to a webpage, the requester's location is determined and the web request is sent to the closest CloudFront server * Local CloudFront servers cache website content and speed its delivery to remote locations throughout the world * If the content is very dynamic and user requests are all for new data, then the caching server will not help * CloudFront is often used as a front end to static websites stored on S3 * CloudFront can also be a front end to an EC2-based website if an elastic load balancer is part of the architecture

Kinesis Data Firehose

* Kinesis Data Firehose is a managed service to load streaming data into stores, data lakes, and analytics services * Kinesis Data Firehose can capture streaming data and put it in S3, Redshift, and other services * Fully managed and scales to match the throughput of your data - Autoscaling - Monitoring * Note that the shard-based throughput model belongs to Kinesis Data Streams (a common Firehose source), not to Firehose itself: a shard is a throughput unit that equates to 1 MB/second of ingest * For Data Streams, pricing is based upon the number of shards; the organization determines the required capacity in shards prior to use, and shards are increased when additional capacity is required * If more shards are needed than the default per-region, per-account quota allows, you can request more from AWS * A policy can be set up to auto scale the number of shards based upon utilization * Kinesis Data Firehose can be set up from the console; setup is essentially specifying the sources and destinations for the data
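The shard arithmetic can be sketched as follows. This assumes the published Kinesis Data Streams per-shard ingest limits of 1 MB/s and 1,000 records/s (shards are a Data Streams concept, though Data Streams is a common Firehose source):

```python
import math

# Estimate the number of shards a stream needs, assuming the published
# per-shard ingest limits of 1 MB/s and 1,000 records/s.
def required_shards(mb_per_sec, records_per_sec):
    by_throughput = math.ceil(mb_per_sec / 1.0)
    by_records = math.ceil(records_per_sec / 1000.0)
    return max(by_throughput, by_records, 1)

# 4.5 MB/s of large records: data volume is the constraint -> 5 shards.
print(required_shards(4.5, 2000))   # 5
# 0.5 MB/s but 6,000 tiny records/s: record rate is the constraint -> 6.
print(required_shards(0.5, 6000))   # 6
```

Capacity planning takes the larger of the two constraints, which is why streams of many small records can need more shards than raw volume suggests.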

Kinesis Data Streams

* Kinesis Data Streams is a highly scalable platform for real-time data * Kinesis Data Streams can capture gigabytes per second from hundreds of thousands of sources * This includes financial transactions and location tracking * Kinesis Data Streams can ingest data and export it to business intelligence tools * Use cases: - Large event data collection - Real-time data analytics - Capturing gaming data - Capturing mobile data

4 Kinesis Platforms

* Kinesis Video Streams * Kinesis Data Streams * Kinesis Data Firehose * Kinesis Data Analytics

Kinesis Video Streams

* Kinesis Video Streams is a kinesis application specifically for video * Enables kinesis to collect video from multiple sources * Enables the ingestion, storage and indexing of multiple streams * Enables the videos obtained by kinesis streams to be sent for media processing or machine learning applications

Amazon Kinesis

* Kinesis is an AWS service for collecting, processing, and analyzing streaming data * Kinesis can collect and analyze streaming data in real time * Kinesis can collect data from real-time applications including: - Video - Audio - Application logs - Website clickstreams - IoT devices * Unlike traditional environments where you collect, store, and then analyze the data, Kinesis can do this in real time * By analyzing data in real time, an organization can gain a competitive advantage by not having to wait for data storage and processing.

AWS Lambda

* Lambda is a serverless computing service * You upload the code for the Lambda function; there is no need to manage servers or an OS * Lambda supports C#, Go, Java, Node.js, and Python, among other runtimes * AWS Lambda enables automation across an organization's infrastructure * Lambda is stateless - Once the function is performed, the function has completed - If another function needs to occur as a second step, it is necessary to set up a second function * AWS Lambda is useful in many situations where automation can increase the efficiency of technology and decrease manual intervention - Processing data across multiple systems - Remediation of security events - Patching the OS
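A Python Lambda function is just a handler with a standard signature; everything else (servers, OS, scaling) is managed by AWS. A minimal sketch:

```python
import json

# The standard shape of a Python Lambda handler: it receives a dict built
# from the invocation event plus a context object, and returns a
# JSON-serializable result.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally you can invoke the handler directly (context is unused here).
response = lambda_handler({"name": "AWS"}, None)
print(response["body"])   # {"message": "Hello, AWS!"}
```

Because the handler is stateless, anything that must survive between invocations has to live in an external store such as S3 or DynamoDB.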

2 versions of licensing supported by AWS for Oracle Databases:

* License included: For these the database is licensed by AWS - Standard Edition One - Standard Edition Two * Bring your own license: You have a license for Oracle and you host your database on AWS - More license flexibility - Standard - Enterprise - Standard Edition One - Standard Edition Two

AWS Config

* AWS Config is a service that enables assessment, auditing, and evaluation of configurations in AWS * Provides an opportunity to see what changes were made and who made those changes within the VPC * Provides constant monitoring of configurations and checks configurations against an organization's policies * Can track relationships between resources, so if changes are made it is easy to determine which systems will be affected * AWS Config can help with troubleshooting, as it integrates with CloudTrail to track changes made * When a change is made, AWS Config can send an SNS alert to systems administrators * How it works: - A configuration change is made - AWS Config notes the change and records it in a consistent format - AWS Config then checks the change against the organization's policies - AWS Config notifies the system administrator of the changes that occurred - This can occur as a CloudWatch Event, an SNS notification, or via another AWS service

Load Balancers

* Load Balancers are network devices that facilitate load sharing across servers * Load Balancers can help greatly with scalability by allowing the application to be deployed across multiple servers * This can substantially increase performance as it allows for scaling out instead of just scaling up * Load Balancers can also increase availability by removing single points of failure * Load Balancers use health checks to increase availability

AWS Directory Service

* AWS Directory Service provides hosted, dedicated-tenant Windows Active Directory (AD) servers * These are high-availability servers spread across 2 AZs by default * The AD servers are actual Microsoft AD servers hosted by AWS * This enables Microsoft-dependent workloads hosted in AWS to use Microsoft Directory Services * The hosted AD servers can also be used by EC2, RDS for SQL Server, AWS End User Computing, and AWS WorkSpaces for IAM functions

AWS Elastic Beanstalk

* AWS Elastic Beanstalk is a service for provisioning, deploying, and scaling web applications and services * Upload your code and Elastic Beanstalk automatically deploys the necessary infrastructure (EC2, containers, load balancers) * Infrastructure deployed by Elastic Beanstalk is autoscaling * Additionally, infrastructure deployed from Elastic Beanstalk is automatically load balanced * Elastic Beanstalk works with the following programming languages: Go, Java, .NET, Node.js, PHP, Python, and Ruby * Elastic Beanstalk provisions and, if desired, manages the environment after the computing platform is deployed * Elastic Beanstalk monitors the health of your applications * Elastic Beanstalk is integrated with CloudWatch Logs for performance monitoring

Elastic Container Service (ECS)

* AWS Elastic Container Service (ECS) is a fully managed container management service * ECS is a high-availability (99.99%) and high-security service * ECS is deployed in a VPC, which allows the use of NACLs and security groups * ECS manages containers on EC2 or AWS Fargate: if you need or want complete control, choose EC2; Fargate is serverless and manages all of the underlying hardware for you

What are the 4 options to managing a VPC?

* AWS Management Console - the easiest option in most cases. * AWS CLI via SSH - you have to know what you want to do before you do it, since compared to the console there is little guidance. * AWS SDK - for larger-scale deployments; you really need to know what you're doing. * If you are working from a Windows-based system such as a server, you'll need to use Remote Desktop Protocol (RDP), since Windows typically doesn't work as well with SSH

AWS Shield

* AWS Shield is enhanced DDoS protection * 2 versions: Standard and Advanced - AWS Shield Standard is provided automatically and at no additional cost to all AWS customers - AWS Shield Advanced is a paid protection service that protects EC2, ELB, CloudFront distributions, Route 53, and Global Accelerator; provides access to experts if you need assistance; adds protection against volumetric attacks; is a dynamic solution that examines traffic patterns to determine if an attack is occurring; can detect an attack and automatically deploy ACLs to mitigate it; and provides visibility and notification for layer 3/4/7 attacks

Amazon Step Functions

* AWS Step Functions provides a means to sequence multiple Lambda functions * AWS Step Functions is serverless, just like AWS Lambda * Step Functions allows multi-step workflows * Step Functions can even be designed to initiate a retry if an error occurred and processing was not performed * How it works: - Design the steps of the application - Create the individual Lambda functions - Configure the workflow in Step Functions - Connect the workflow components to individual tasks specified in the Lambda functions - Execute the Step Functions in normal use - Optimize and evolve the Lambda and Step Functions as needed
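Step Functions workflows are defined in the Amazon States Language (ASL), a JSON document. A minimal sketch sequencing two Lambda tasks with a retry on the first; the function ARNs below are placeholders:

```python
import json

# A minimal Amazon States Language (ASL) definition: two Lambda tasks in
# sequence, with automatic retry on the first. ARNs are hypothetical.
state_machine = {
    "StartAt": "ProcessData",
    "States": {
        "ProcessData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3}
            ],
            "Next": "NotifyDone",
        },
        "NotifyDone": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```

The `Retry` block is what gives Step Functions its built-in error handling: the failing step is re-run without any extra code in the Lambda itself.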

AWS Trusted Advisor

* AWS Trusted Advisor is an online tool that helps an organization optimize its AWS environment, including its spending * How it works: - AWS Trusted Advisor scans your infrastructure - Compares it to AWS best practices and provides recommendations - Recommendations cover performance, security, availability, and infrastructure costs

EBS Throughput Optimized HDD (st1)

* Low-cost magnetic storage with relatively good throughput, so if you have an application that needs to send and receive a lot of data but is not latency-sensitive, this is a good option. * Designed for frequently accessed data * Throughput-intensive workloads * Ideal when there is a lot of data to store and very low latency is NOT required * Ideally suited to large data sets requiring throughput-intensive workloads, such as data streaming, big data, and log processing * Great for sequential reads and writes * Throughput is measured in MB/s, with the ability to burst up to 250 MB/s per TB, a baseline throughput of 40 MB/s per TB, and a maximum throughput of 500 MB/s per volume * Cannot be a boot volume
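The st1 figures above (40 MB/s per TB baseline, 250 MB/s per TB burst, 500 MB/s per-volume cap) can be turned into a quick sizing sketch:

```python
# st1 throughput model from the figures above: baseline and burst both
# scale with volume size and are capped at 500 MB/s per volume.
def st1_throughput(size_tb):
    baseline = min(40 * size_tb, 500)
    burst = min(250 * size_tb, 500)
    return baseline, burst

print(st1_throughput(1))    # (40, 250)
print(st1_throughput(2))    # (80, 500)  - burst already at the cap
print(st1_throughput(16))   # (500, 500) - baseline hits the cap too
```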

EBS Cold HDD (sc1)

* Lowest cost * Suited for workloads that are large in size and accessed infrequently * Key performance attribute is its throughput capability in megabytes per second * Can't be a boot volume * These volumes can burst up to 80 MB/s per TB, with a baseline throughput of 12 MB/s per TB and a maximum throughput of 250 MB/s per volume.

AWS WAF

* AWS WAF is a Web Application Firewall that protects against attacks * WAF monitors HTTP(S) requests looking for exploits and can mitigate an attack if it occurs. * WAF can control requests to CloudFront distributions, Amazon API Gateway REST APIs, or Application Load Balancers * WAF provides granular control to protect an organization's resources. It provides a means to control access through web ACLs, rules, or rule groups * WAF controls access to content that the user specifies * How it works: - Enable WAF on the application/device - Create a policy that filters access to the device - WAF analyzes the traffic against the policies created - WAF permits or denies the traffic depending on the policy - If an attack occurs, new rules can be created to mitigate it - WAF integrates with CloudWatch to provide increased visibility into network traffic and potential or actual attacks

Roles enhance security by

* AWS credentials don't need to be stored on the instance * AWS services give the API a temporary token to allow access * Since temporary tokens expire and new tokens are generated frequently by rotation, security is enhanced: no password (key) needs to be passed to the application

Amazon Redshift

* AWS fully managed data warehouse solution * Gain actionable insights from data - Used for business analytics - Amazon Redshift spectrum can provide real time insights into your business when combined with other services i.e. S3 * Fast, powerful and fully managed * Petabyte scale data warehousing * Based on PostgreSQL which allows for SQL queries * Works with applications that perform SQL queries * Primary architecture built around clusters of computing nodes * The primary node is considered a leader node and the compute nodes support the leader node * Queries are directed at the leader node

SSE-S3 (AWS-Managed Keys)

* AWS managed keys * Complete key management solution * S3 manages all keys and handles secure storage of your encryption keys * Automatic key rotation * Every object is encrypted with a unique encryption key; all object keys are then encrypted by a separate master key. * (Some businesses, for security or regulatory reasons, may need to manage the keys themselves, and for those reasons may prefer customer-managed keys)

There are 2 types of policies available in AWS:

* AWS managed policies * Customer-managed policies

AWS Managed Policies

* AWS managed policies are standalone policies created by AWS * Provide permission for services and functions within AWS. * Are optimized for common use cases. * Can be attached and moved to different entities and accounts in AWS. * Can be based on job role to provide different levels of access. * Have two major predefined roles: administrator access and power user. * Administrator access provides full access to every service. * Power users essentially have full access with the exception of IAM and organization management.

Amazon S3 Intelligent-Tiering

* AWS monitors data access patterns * AWS places data on the most cost-effective tier, moving your data to the appropriate class for you * Based upon the customer's needs * Cost optimization managed by AWS * Ideal when the access frequency of objects is unknown

Routing Tables

* All VPCs effectively have a virtual router, which selects the most specific matching address in the routing table (longest-prefix match) * The virtual router directs traffic to its destination * Routers build a map of the network in the form of a routing table * Routes can be configured statically or learned dynamically. Static routes are user configured; dynamic routes are learned via a routing protocol. Static routes are ideal when there are very few paths to the ultimate destination; dynamic routes are excellent for large networks.
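Longest-prefix matching can be illustrated with Python's standard `ipaddress` module; the routes and target names below are hypothetical, not an AWS API:

```python
import ipaddress

# Longest-prefix match, the rule a VPC's implicit router applies: among
# all routes whose CIDR contains the destination, the most specific
# (longest) prefix wins.
def pick_route(route_table, destination):
    dest = ipaddress.ip_address(destination)
    candidates = [
        (net, target)
        for net, target in route_table
        if dest in ipaddress.ip_network(net)
    ]
    return max(candidates, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

routes = [
    ("0.0.0.0/0", "internet-gateway"),        # default route
    ("10.0.0.0/16", "local"),                 # the VPC itself
    ("172.16.0.0/16", "peering-connection"),  # hypothetical peered VPC
]

print(pick_route(routes, "10.0.9.1"))     # ('10.0.0.0/16', 'local')
print(pick_route(routes, "172.16.3.4"))   # ('172.16.0.0/16', 'peering-connection')
print(pick_route(routes, "8.8.8.8"))      # ('0.0.0.0/0', 'internet-gateway')
```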

Amazon Relational Databases (RDS) are available in several options

* Amazon Aurora * MariaDB * Microsoft SQL Server * MySQL * Oracle DB * PostgreSQL

AWS Route 53

* Amazon Route 53 is a highly available and highly scalable platform for DNS services * Route 53 provides name-to-IP-address mappings just like any other DNS platform. * AWS uses anycast, in which multiple servers with the same address are placed across the internet. * Anycast provides extremely high availability and low latency: a host connects to the closest DNS server based upon its IP address, and if a DNS server becomes unavailable, the host connects to the next closest anycast address of the DNS server * Route 53 supports most of the available DNS record types. * Route 53 uses TCP and UDP port 53. * Route 53 works with health checks and can be used to create a high-availability solution. * Route 53 has numerous options to optimize a web-based environment.

Amazon S3 storage classes:

* Amazon S3 Standard * Amazon S3 Standard-IA * Amazon S3 One Zone-IA * Amazon S3 Intelligent-Tiering * Amazon S3 Glacier * Amazon S3 Glacier Deep Archive

Amazon Aurora

* Amazon-managed fully relational database service * MySQL and PostgreSQL compatible * High performance and scalability - Up to 5x higher throughput than standard MySQL and 3x higher than standard PostgreSQL databases * Combines the benefits of commercial databases with the cost of open-source databases

Providing an IP Address

* An EC2 instance can have multiple network interfaces * Each interface requires a unique address from the VPC's address space, and each interface must be on a different subnet * All EC2 instances are automatically assigned a DNS name from AWS * Public IP addresses can be assigned to EC2 instances that require global internet reachability; they are used outside your VPC * Private IP addresses can be assigned to EC2 instances that require security and protection from the internet (anything you don't want on the public internet); they are used inside your VPC * Instances can be automatically assigned globally routable (public) IPv6 addresses at launch * This can be manually disabled upon launch

Multi-Account Strategies

* Another method to increase the security of the VPC is to partition the organization into multiple small accounts and share information between accounts * Each small account is placed into an Organizational Unit (OU) * The OUs are placed into a single billing organization * The organization receives a single bill, which enables volume pricing but with the protection of smaller organizations

AWS has 3 types of load balancers available:

* Application Load Balancer * Network Load Balancer * Classic Load Balancer (legacy can be network or application)

Application Load Balancer

* Application Load Balancers route based on many variables: - Path provided in the URL - Elements of HTTP/HTTPS headers - HTTP method-based routing (e.g., POST or GET) - Source address * Ideal for load balancing HTTP/HTTPS traffic * Ideal for balancing requests to microservices and container-based applications * Can load balance between an AWS VPC and an on-premises data center * Connections are stateful * Requires at least 2 Availability Zones, and you can distribute incoming traffic across targets in multiple Availability Zones.

Open Systems Interconnection (OSI) Model

* Application- Layer 7, (Function) User Interface, Ex: HTTP, DNS, SSH * Presentation- Layer 6, (Function) Presentation and Data Encryption, Ex: TLS * Session- Layer 5, (Function) Controls Connection, Ex: Sockets * Transport- Layer 4, (Function) Protocol Selection, Ex: TCP, UDP * Network- Layer 3, (Function) Logical Address, Ex: IP Address * Datalink- Layer 2, (Function) Hardware Connection, Ex: MAC Address * Physical layer- Layer 1, (Function) Physical Connection, Ex: Wire, Fiber

Database Scalability

* Architecturally the platform can be scaled up or scaled out * Scaling up is the simplest method - Move the database to a larger compute instance * Scaling out is adding additional computing instances - At some point it's just not possible to scale up anymore; just add more instances.

Data Warehousing Databases

* Are designed to assist with business intelligence and analytics. * Data warehouses are designed to perform analysis on large amounts of historical business data

IAM Users

* Are identities that have permissions to interact with specific AWS resources * IAM users are created by principals with administrative access. * IAM users can be created with the AWS management console, CLI, or SDKs. * IAM users are permanent unless deleted by an administrator.

Relational Databases adhere to the ACID Model

* Atomic - Transactions are all or nothing * Consistent - Data is consistent immediately after writing to the database * Isolated - Transactions do not affect each other * Durable - Data in the database will not be lost

Performance

* Auto scaling * Decouple architecture components * Caching * DNS * Load Balancers

Database Backups

* Automated backups are performed automatically by AWS * This backup is of the entire database instance * You can retain this backup from 1 day to 35 days * Backups happen during a defined window * During the backup the database storage may be temporarily unavailable * Database performance may be degraded during the backup process

Cross Region Replication

* Automatically copies the objects in an S3 bucket to another region. Therefore, all S3 buckets will be synchronized. After cross-region replication is turned on, all new files will be copied to the region for which cross-region replication has been enabled. * Objects in the bucket before turning on cross-region replication will need to be manually copied to the new bucket.

General DR scenarios

* Backup and Restore * Pilot Light * Warm Standby * Multi-site

Dedicated Hosts (tenancy option)

* Bare metal server (is a physical server without an operating system or applications installed) * Is dedicated to a single customer

AWS CloudWatch is available in 2 versions for EC2 instances:

* Basic Monitoring: Data is available automatically every 5 min at no charge * Detailed Monitoring: Data is available automatically every 1 min at an additional charge. Detailed monitoring must be enabled on the EC2 instance

DynamoDB by default uses the BASE model:

* Basically Available - The system should be available for queries. * Soft State - Data in the database may change over time. * Eventually Consistent - Writes to the database are eventually consistent. This means that a read immediately after a write may not return the latest data, but will over time.

Multi-Factor Authentication

* MFA can greatly increase IAM effectiveness - As even if a password is compromised a hacker cannot get into the system * Uses the principle of something you have and something you know * How it works: - The organization sets up an authenticator app or device with a key - The authenticator device will create a one-time password which changes every few seconds - When user logs in with their username and password, AWS will provide a challenge asking for the one-time password - If the user provides the correct one-time password they are authenticated
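The one-time password mechanism described above is typically TOTP (RFC 6238): a shared secret plus the current 30-second time step yields a short-lived code. A minimal sketch of the algorithm, verified against the RFC test vectors (this is an illustration, not AWS's implementation):

```python
import hmac
import hashlib
import struct

# Minimal TOTP (RFC 6238): HMAC-SHA1 over the time-step counter, then
# dynamic truncation to a short numeric code.
def totp(secret: bytes, timestamp: int, period: int = 30, digits: int = 6) -> str:
    counter = timestamp // period
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret; at t=59s (counter 1) the expected 6-digit code is 287082.
secret = b"12345678901234567890"
print(totp(secret, 59))   # 287082
```

Because the code changes every period and the secret never leaves the device, a captured code is useless moments later, which is why MFA blunts password compromise.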

IP addressing for the VPC

* Packet delivery is similar to mail delivery * Every destination must have a unique address, even if the only variation is the zip code * For the internet to work every device must have a unique IP address * There are 2 versions of addresses - IPv4 and IPv6 * IPv4 addresses are the original IP addresses and, for the most part, are still the primary addresses used on computing systems today

Industry Compliance

* Many industries have security requirements, data retention requirements, etc. * AWS supports critical industry compliance requirements * A full list can be seen at https://aws.amazon.com/compliance/programs - PCI DSS: for payment cards - ISO 9001, 27001, 27017, 27018 - FedRAMP - HIPAA: US healthcare privacy

Server-Access Logging Permissions

* Both the source and target buckets should be in the same region, and it's a best practice to use different buckets for each. * Permissions for the S3 Log Delivery group can only be assigned via Access Control Lists and not through bucket policies, so when manually setting permissions via an SDK, you must update the ACL. * If you have encryption enabled on your target bucket, access logs will only be delivered if it is set to SSE-S3 (server-side encryption managed by S3); encryption with KMS (Key Management Service) is not supported.

Securing data in S3

* Bucket policy - Preferred method - Very granular - Based upon IAM, i.e., determining who can access the data and what they can or can't do with it * Access Control List (ACL) - Read, write, full control; not as granular as bucket policies, which offer much more security control

MariaDB

* MariaDB is an open-source relational database * Created by the developers of MySQL * Additional features and enhanced functionality over MySQL for enterprise environments * Supports a larger connection pool and is comparatively faster than MySQL; it also adds features MySQL lacks, such as dynamic columns.

Amazon EFS has 2 different EFS throughput modes:

* Bursting Throughput - the default mode; the amount of throughput scales as your file system grows, so the more you store, the more throughput is available to you. With the standard storage class, every file system can burst to 100 mebibytes per second, and file systems larger than 1 tebibyte can burst to 100 mebibytes per second per tebibyte of storage used. * Provisioned Throughput - lets you provision throughput above the allowance your file system size would give you. If your file system is relatively small but its use case requires a high throughput rate, the default bursting throughput option may not process requests quickly enough. Provisioned Throughput incurs additional charges: you pay for any throughput provisioned above what the default bursting option would provide.
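The bursting-mode scaling rule can be sketched as simple arithmetic (a sketch of the figures above, using the 100 MiB/s floor and 100 MiB/s-per-TiB scaling):

```python
# EFS bursting-mode burst throughput: every file system can burst to at
# least 100 MiB/s, and beyond 1 TiB stored the limit scales at
# 100 MiB/s per TiB of storage used.
def efs_burst_throughput_mibs(stored_tib):
    return max(100, 100 * stored_tib)

print(efs_burst_throughput_mibs(0.1))   # 100 (floor for small file systems)
print(efs_burst_throughput_mibs(5))     # 500
```

This is why Provisioned Throughput exists: a small file system that needs more than the 100 MiB/s floor cannot get it from bursting mode alone.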

Instance types should be chosen based upon the need for the following computing options:

* CPU cores (virtual CPUs) * Memory (DRAM) * Storage (capacity and performance) * Network performance

Caching

* Caching can make a website much more scalable by offloading frequent requests to the cache * Caching is very helpful for frequently requested content * If the content is very dynamic and user requests are all for new data, then the caching server will not help to improve performance or scalability
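
The offloading idea above can be sketched with a cache-aside pattern (a toy illustration only; `fetch_from_origin` is a made-up stand-in for a slow database or web server, not an AWS API):

```python
import time

cache = {}

def fetch_from_origin(key):
    """Simulate a slow origin fetch (e.g. a database or backend server)."""
    time.sleep(0.01)
    return f"content-for-{key}"

def get(key):
    # Serve from the cache when possible; fall back to the origin on a miss.
    if key in cache:
        return cache[key]
    value = fetch_from_origin(key)
    cache[key] = value  # subsequent requests for this key skip the origin
    return value
```

The first `get("home")` pays the origin cost; every later request for the same key is served from memory, which is why caching helps most with frequently requested content and least with always-new dynamic data.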

Schema Conversion Tool (SCT)

* Can copy database schemas for homogeneous migrations (same database engine) and convert schemas for heterogeneous migrations (different database engines) * Is used for larger, more complex datasets like data warehouses

AWS Database Migration Service (DMS)

* Can migrate your data to and from most widely used commercial and open-source databases * The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database * You can continuously replicate your data with high availability and consolidate databases into a petabyte-scale data warehouse by streaming data to Amazon Redshift and Amazon S3 * Is used for smaller, simpler conversions and also supports MongoDB and DynamoDB

How SQS works:

* Messages are sent from the computing platform to the queue * After the message is inside the queue it can be scheduled for delivery * If the ultimate destination is busy the message can stay in the queue until it is processed or times out * The message can stay in the queue for up to 14 days based upon the SQS configuration * The message is pulled from the queue to be processed * After the message is completely processed the message is deleted from the queue
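
The lifecycle above can be modeled with a toy queue (this is an illustration, not the real SQS/boto3 API; the class and retention handling are simplified):

```python
import collections
import time

RETENTION_SECONDS = 14 * 24 * 3600  # SQS can retain messages up to 14 days

class ToyQueue:
    """A simplified model of the SQS message lifecycle."""

    def __init__(self):
        self._messages = collections.deque()

    def send(self, body):
        # Producer side: the message enters the queue with a timestamp.
        self._messages.append({"body": body, "sent_at": time.time()})

    def receive(self):
        # Consumer side: pull the next message; expired messages are dropped.
        while self._messages:
            msg = self._messages.popleft()
            if time.time() - msg["sent_at"] < RETENTION_SECONDS:
                return msg
        return None
```

In real SQS, a received message becomes invisible for a visibility timeout and must be explicitly deleted after processing; this sketch collapses those steps for brevity.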

FSx for Windows has 3 price points:

* Capacity - you pay for the amount of storage capacity that you consume. This is priced on the average storage provisioned per month, uses the metric of gigabyte-months, and offers varied pricing between a single or multi-AZ deployment. * Throughput - a cost for the amount of throughput that you configure for your file systems; this metric is based upon MBps-months. Cost variations exist between single and multi-AZ deployment. One point to bear in mind is that any data transfer costs when using multi-AZ are included in the pricing you see for the multi-AZ deployment. * Backups - Because Amazon FSx performs incremental backups (either manual or automatic) of your file systems, it optimizes your storage costs as only the changes since the last backup are saved. Much like your storage capacity, backup storage is also charged based on the average metric of gigabyte-months, meaning the average amount of capacity you have used in the month.

Subnetting

* Careful use of IP address space * Subnetting optimizes IP usage * Every interface needs an address * Practical limitation of subnet size (broadcasts)
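
Subnet planning can be worked out with Python's standard `ipaddress` module (the VPC CIDR here is an example value):

```python
import ipaddress

# Carve an example VPC CIDR block into subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into /24 subnets: 256 subnets of 256 addresses each.
subnets = list(vpc.subnets(new_prefix=24))

print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256 addresses per subnet
```

Note that in an AWS subnet, five of those addresses are reserved by AWS, so the usable count per /24 is 251.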

If you want to encrypt your data on S3 there are 2 ways to do it:

* Client-Side Encryption * Server-Side Encryption

CloudFormation

* CloudFormation is a means to template known-good configurations for your services * CloudFormation therefore helps you provision your application in a safe and repeatable manner * CloudFormation templates can be made with simple text files or via supported programming languages * AWS CloudFormation templates are available for a multitude of options * The code can also be written from scratch in either JSON or YAML format * Code is stored locally or on S3 * Code is used by CloudFormation via the console, CLI, or API * CloudFormation will provision your systems based upon the template * CloudFormation can deploy your templates across your infrastructure to rebuild old applications or for new applications
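
A minimal JSON-format template can be built as a plain dict and serialized, as a sketch of the structure (the AMI ID and resource name here are placeholders, not real values):

```python
import json

# A minimal CloudFormation template describing one EC2 instance.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance from a known-good configuration",
    "Resources": {
        "WebServer": {  # logical resource name (placeholder)
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000000000000",  # placeholder AMI ID
                "InstanceType": "t3.micro",
            },
        }
    },
}

body = json.dumps(template, indent=2)
print(body)
```

This `body` string is the kind of document you would store locally or on S3 and hand to CloudFormation to provision the stack.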

CloudFront can provide encryption in transit

* CloudFront can enforce SSL/TLS protocols * CloudFront integrates with AWS Certificate Manager * CloudFront supports Server Name Indication (SNI) and custom certificates * Caching behavior can be modified by changing the Time To Live (TTL) for objects in the cache * You can set the minimum, maximum, and default TTL for objects

CloudFront can help prevent DDoS Attacks

* CloudFront distributes requests through multiple points of presence * CloudFront only forwards legitimate HTTP/HTTPS requests to the server that aren't already in the cache * The attacker can't launch a DDoS attack by sending many invalid requests to the server

CloudTrail

* CloudTrail is an AWS service that assists with the auditing process * CloudTrail provides an audit log that assists with risk management and compliance - Especially useful in highly regulated industries * CloudTrail tracks changes made to an AWS account by user, role, or AWS service * CloudTrail is enabled when the AWS account is created - To start, create a trail with the CloudTrail console, CLI, or CloudTrail API * CloudTrail records events and these events are visible in the CloudTrail console under event history

CloudWatch

* CloudWatch is a monitoring service to monitor AWS resources as well as applications that you deploy on AWS * CloudWatch provides metrics to monitor performance and troubleshoot issues * CloudWatch can monitor applications that run on AWS * CloudWatch can work with built-in metrics and custom metrics * CloudWatch provides built-in default metrics such as CPU utilization, disk read/write ops, and network utilization * CloudWatch custom metrics can be set up to monitor factors critical to the application's performance, such as memory utilization, API performance, or other metrics * CloudWatch has a notification system that notifies customers and allows them to trigger actions when certain thresholds are met within AWS services
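
The threshold behavior described above can be sketched as a toy alarm evaluator (illustrative only; `alarm_state` is a made-up function, not the CloudWatch API, which evaluates alarms server-side):

```python
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all breach the threshold."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

# e.g. CPU utilization % samples, one per evaluation period
cpu = [31, 45, 82, 91, 88]
print(alarm_state(cpu, threshold=80, periods=3))  # ALARM
```

Requiring several consecutive breaching periods, as real CloudWatch alarms do, avoids alerting on a single momentary spike.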

Amazon Lambda@Edge

* Completely serverless * It works with the CloudFront Content Delivery Network - This enables the content to be closest to the customer and achieve higher performance * Lambda@Edge allows for running Lambda functions close to the user

AWS Service Catalog

* Controlling what is placed on the network is essential to an organization's security posture * AWS Service Catalog helps control what is placed on an organization's network * The Service Catalog is a means to create a list of approved services including: - AMIs - Servers - Software - Databases - Application architectures * This simplifies deployments as administrators can only choose from the approved services

Convertible Reserved Instances

* Convertible reserved instances are reserved instances with flexibility to change the size of computing instances. * With convertible reserved instances, an organization purchases a computing platform based upon need. If the organization needs to resize its computing instances, it has the flexibility to change. * Convertible reserved instances offer flexibility but with higher costs than standard reserved instances.

3 ways you can create an IAM Policy:

* Copy an AWS Managed Policy - Make changes to it for your specific needs * Use the policy generator - Provides an easy-to-use questionnaire for creating policies - Assign permissions to specific resources - You can create multiple permissions that appear as statements - A policy document is then created that can be edited - This document is what is used to generate the actual policy - Great for using a single policy to cover multiple services * Create one from scratch - You create the policy from scratch using the proper grammar and syntax - Ideal for people with JSON experience
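
A from-scratch policy document is just JSON with the required grammar, which can be sketched as follows (the `Sid` and bucket name are placeholders for illustration):

```python
import json

# An IAM policy document allowing read-only access to one S3 bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyAccess",          # placeholder statement ID
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # placeholder bucket ARN
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

Each entry in `Statement` is one of the statements the policy generator would produce; writing them by hand just requires matching this structure exactly.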

Import/Export Service

* Copy data to a hard drive(s) * Ship hard drive to AWS * AWS will copy to your cloud computing environment

Copying snapshots may be required for:

* Creating services in other regions. * DR - the ability to restore from snapshot in another region. * Migration to another region. * Applying encryption. * Data retention.

SSE-KMS (Customer-Managed Keys with AWS KMS)

* Customer-managed keys with AWS KMS i.e. the customer manages their own keys but uses KMS to do it * Complete key management solution * User manages the master key * KMS manages the data key * KMS provides an audit trail of how, by whom, and when data was accessed * Provides extra security by having separate permissions for using a customer-managed key, which provides added protection against unauthorized access of your data in Amazon S3

Database Snapshots

* DB snapshots are performed manually * DB snapshots are a point-in-time copy of your EBS volume * DB snapshots are retained until you delete them

Preventing Distributed Denial of Service Attacks (DDoS)

* DDoS attacks are a common assault against an organization's systems, designed to interrupt the normal function of a server, application, or network by overwhelming the service or its surrounding infrastructure * Preventing DDoS attacks takes a full security posture * Blocking unwanted traffic with Network ACLs (to reduce the options the attacker can use to attack the network or server) * Leverage AWS Shield, which provides enhanced DDoS protection * Security Groups on the network or server (to reduce the traffic that can hit the server) * Auto scaling can help mitigate DDoS attacks * Adding the WAF, which can recognize common attacks and dynamically apply policies to mitigate them

When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

* Data at rest inside the volume. * All data moving between the volume and the instance. * All snapshots created from the volume. * All volumes created from those snapshots. - Encryption is supported by all EBS volume types. - Expect the same IOPS performance on encrypted volumes as on unencrypted volumes. - All instance families support encryption

Object Storage

* Data is broken down to pieces called objects * Each object has a unique ID * Each object has metadata * Static content that does not require constant writes * Ex: S3 * Great for big data

VPN Connections offer:

* Data is encrypted and tunneled over the internet. Encrypted via IPSEC * No bandwidth guarantees i.e. the internet itself is comprised of multiple service providers, and just because you can guarantee service TO the internet doesn't mean those providers have to prioritize your traffic. * Variable latency i.e. this can work for some applications but not others. You need to know your latency requirements to make the right judgment call.

A data warehouse is comprised of the following components:

* Database to store data. * Tools for visualizing the data. * Tool for prepping and loading data.

What is a database?

* Databases allow for the storage of large amounts of information * Facilitates sorting, calculating, reporting and information sharing * Can help an organization find key performance metrics from their data in order to make strategic business decisions * Critical component to modern applications

EC2 instances can be accessed via the following methods:

* Directly from the EC2 console * Via Secure Shell (SSH) for Linux machines * Via Remote Desktop Protocol (RDP) for Windows systems * Some Management can be performed via the AWS SDK

EBS Volume Types

* EBS Provisioned IOPS (io1) * EBS General Purpose SSD (gp2) * EBS Throughput Optimized HDD (st1) * EBS Cold HDD (sc1)

Anything that needs high availability should be in multiple AZs. Examples:

* EC2 compute instances * Databases * Elastic Load Balancers * DNS i.e. Route 53

Dedicated Instances (tenancy option)

* EC2 instances on dedicated hardware * Also uses physically dedicated EC2 servers * Does not provide the additional visibility and controls of dedicated hosts (e.g. how instances are placed on a server) * Billing is per instance * Available as On-Demand, Reserved Instances, and Spot Instances * Costs an additional $2 per hour per region * This server can house multiple virtual machines. All virtual machines on the dedicated instance belong to the single customer who owns the instance

There are 3 types of roles in the AWS environment:

* EC2 roles * Cross account roles * Identity federations

Elastic Compute OS

* EC2 supports Linux and Windows * Options to create or use prebuilt VMs * Prebuilt VMs available as an Amazon Machine Image (AMI) * AMI will need a storage volume-Instance or EBS

Amazon EKS

* EKS is a fully managed Kubernetes container management service * EKS is a full Kubernetes service * Therefore, Kubernetes containers can be moved to EKS without modification * Kubernetes is an open-source container management service * Kubernetes is effectively the standard container orchestration platform * It is extremely similar to ECS, but it uses the Kubernetes container platform * EKS can also be used with EC2 and Fargate in the same manner that ECS can be

Internal only ELB:

* ELB nodes have private IPs * Routes traffic to the private IP addresses of the EC2 instances * ELB DNS name format: internal-&lt;name&gt;-&lt;id&gt;.&lt;region&gt;.elb.amazonaws.com * Internal-only load balancers do not need an Internet gateway.

Internet facing ELB:

* ELB nodes have public IPs * Routes traffic to the private IP addresses of the EC2 instances * Need one public subnet in each AZ where the ELB is defined * ELB DNS name format: &lt;name&gt;-&lt;id&gt;.&lt;region&gt;.elb.amazonaws.com

AWS EMR

* EMR is an AWS application for processing and analyzing large amounts of data * It is a managed cluster (and service) for managing big data frameworks * AWS EMR is built upon open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto * Often higher performance than traditional solutions * Often less expensive, as there is no management or heavy coding required

Egress-Only Internet Gateways

* Egress-only internet gateways allow internet connectivity for IPv6 systems. IPv6 does not really use private address space, so essentially all IPv6 addresses are unique and globally routable. * Therefore, IPv6 systems do not need NAT to connect to the internet, as there is no need to translate an internal address into an external address. * When using an egress-only internet gateway, systems will not be reachable from the internet. This type of internet gateway is stateful and allows traffic established by internal hosts to return to the AWS VPC. * This allows internal systems to download software patches and upgrades from the internet while keeping outsiders from connecting into the AWS VPC from the internet. * This is very similar in function to the NAT gateway in IPv4.

AWS ElastiCache, there are 2 options:

* ElastiCache for Redis offers a very rich feature set for a wide range of uses * ElastiCache for Memcached is designed for simplicity

Hybrid Cloud is ideal for these circumstances:

* Migration to Pure Cloud * Ultra-high performance i.e. some applications benefit from the reduced latency and higher network capacity available on-premises, and all other applications can be migrated to the cloud to reduce costs and increase flexibility. * Disaster Recovery i.e. run the organization's computing locally, with a backup data center in the cloud. * Standalone Requirements i.e. some organizations want to use certain things on their own network and use the cloud for the rest. Typical for a hybrid environment * On-demand capacity i.e. prepare for spikes in application traffic by routing extra traffic to the cloud. * Specialized workloads - Move certain workflows to the cloud that require substantial development time, i.e., machine learning, rendering, transcoding

Relational Databases

* Most common form of database * Provides storage and access to data that is related to each other * Data is stored similarly to a spreadsheet with rows and columns * Shows the relationships between different variables stored in the database * Each row has a unique ID which is called a key * The columns hold data attributes called values * This forms key-value pairs * AWS RDS databases are atomic i.e. a transaction either completes entirely or doesn't happen at all * As soon as data is written, it will be immediately available for query * Relational databases are best when the data is structured

Designing for High Availability

* Most organizations would consider high availability 99.99% or greater * Some organizations require much higher levels of availability * Service providers, banks, and healthcare organizations often require 99.999% availability or greater * No single points of failure - Redundant power, cooling, network connections, routers, switches, servers, DNS, storage, applications i.e. databases * Multi-AZ, as each AZ is effectively a datacenter * Multi-Region if availability over 99.99% is required * Redundant network connections * There should always be multiple connections from the organization to AWS * For most clients this will be a direct connection and a VPN backup

Network ACLs (NACL)

* NACLs keep unwanted traffic out of the subnet * Applied at the subnet level * NACLs allow or disallow traffic based upon the configured policy * Default policy = Deny all traffic * NACLs must specify the source and destination address, protocol, and port number * NACLs are stateless, so they must be configured both inbound and outbound i.e. they do not track connection state * NACL rules are processed in order - So order matters

Load Balancer Routing Policies

* Network Load Balancers - Operate at layer 4 of the OSI Model (TCP, UDP) * Application Load Balancers - Operate at layer 7 of the OSI Model (HTTP, HTTPS)

IPv6 Addresses

* Newer form of IP addresses * Instead of a 32-bit binary address like IPv4, IPv6 uses a 128-bit hexadecimal address space * Vastly more address space and much more scalable * Typically used by mobile phones * In AWS, IPv6 addresses assigned to interfaces are globally unique

Benefits of Instance Store Volumes

* No additional cost for storage, it's included in the price of the instance * Offer a very high I/O speed * Ideal as a cache or buffer for rapidly changing data without the need for retention * Often used within a load balancing group, where data is replicated and pooled between the fleet

Amazon S3 Glacier

* Often referred to as cold storage, ideally suited for long-term backup and archival requirements * Don't put data here that you're going to need frequently or even semi-frequently * Vault Lock option makes data immutable (meaning the data can't be modified but can be read when needed) * Low cost * Pay for data retrieval * Must request access to data when needed, since retrieval takes time depending on the retrieval option chosen * Data can be accessed sooner by paying for expedited retrievals

Instance Purchasing Options

* On Demand Instances * Reserved Instances * Scheduled Reserved Instances * Spot Instances * Dedicated Hosts

Principle of Least Privilege

* One of the most critical components of security * Provide the least amount of access necessary for individuals or systems to perform only the functions necessary to perform their role effectively * Privileges should be revoked when no longer needed, i.e., when an employee leaves the company

AMI components

* Operating system * Launch permissions * A block device mapping that specifies volumes to attach to the system

Oracle Database

* Oracle is one of the most popular relational databases in the world * Has an extensive feature set and functionality * Developed, licensed, and managed by Oracle

Restricting Network Access

* Part of a complete security posture is keeping unwanted traffic out of the network * This means limiting routing information to those who need it * Filtering traffic with NACLs, Security Groups, and Firewalls * Modern firewalls keep unwanted traffic out of the network in an adaptive manner * AWS has a modern firewall solution that can be used with CloudFront, Application Load Balancers, and API Gateway

Scaling Out for NoSQL Databases

* Partitioning the database involves chopping the database into multiple logical pieces called shards * The application has the intelligence to route to the correct shard * Effectively, sharding breaks down the database into smaller, more manageable pieces. * Partitioning the database is effective for NoSQL databases like DynamoDB and Cassandra.
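
The application-side routing can be sketched with hash-based shard selection (a toy example; the shard names are made up):

```python
import hashlib

# Four hypothetical shards holding partitions of the data.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key):
    # A stable hash keeps each key on the same shard across lookups.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user-42"))  # always maps to the same shard
```

This is the intelligence the bullet above refers to: given a key, the application (not the database) decides which smaller piece to query.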

For AWS keeping the cloud secure is really about managing the following functions:

* Physical security - Keeping the facility locked, keeping unauthorized users out of the AWS data centers. * Principle of least privilege - Limiting who from AWS can manage assets in the cloud. * Security of the cloud - Keeping the cloud secure (firewalls, system patching, routing, IDS/IPS, change management). * Keeping all AWS applications secure with patching and maintaining the underlying components of serverless applications offered by AWS. * Keeping the AWS network secure with secure routing, VLANs, route filtering, firewalls, and intrusion prevention and detection (IDS/IPS).

Customer Managed Policies

* Policies created and managed by the customer in the customer's own account * Policies are not visible outside the customer organization * Custom-made for the organization's specific needs * You can attach these policies to entities within the AWS account

PostgreSQL Database

* PostgreSQL is an open-source relational database * It has a very advanced feature set with enhanced functionality compared to MySQL

Organizations with higher availability requirements may need the following network setup

* Primary Direct Connection * Backup Direct Connection * VPN Backup

A failover to the backup database will be triggered in the following circumstances:

* Primary database instance fails * An AZ outage * The DB instance service type is changed * The DB instance is under maintenance (like an OS upgrade or patching) * A manual failover is initiated like a "reboot with failover"

Change Management

* Prior to making any changes across an organization's systems, all stakeholders need to be notified of prospective changes * All stakeholders need to verify that any changes made will not affect the systems they manage * All stakeholders need to agree on a time for configuration changes * A time that will not have any jobs running or other configuration changes being made

VPC Endpoints Benefits:

* Privacy and security - Sending data over the AWS network is much more private and secure than the internet. * Performance - Internet gateway speeds are limited to about 45 Mbps; the AWS network has dramatically higher performance. * The AWS network is fully managed by AWS, therefore it can have lower latency and lower congestion than the public internet. The internet has no performance guarantees across autonomous systems. * Cost control - AWS charges for internet use. Sending data over the AWS network will cost less than internet use. * Simplicity - VPC endpoints do not require a public IP address, internet gateway, or NAT gateway.

Security group variables include:

* Protocol (i.e. TCP/UDP) * Port number

Tape Gateway

* Provides a cloud backup solution. It essentially replaces backup tapes used by some enterprise environments for deep archival purposes. * Since it is virtual, there is no need to maintain physical tapes and the infrastructure to support a tape-based backup solution. * With the tape gateway, data is copied to Glacier or Deep Archive.

Federated IAM

* Provides a means to authenticate with an external identity provider. * Federated IAM enables significant and granular control over user functions.

Amazon S3 Standard IA

* Provides the same high availability, high performance, etc. as S3 Standard; the difference is the cost * Reduced pricing * Pay for data retrieval * Instantly available * Used for infrequent access

There are 3 types of IP address that can be assigned to an Amazon EC2 instance:

* Public - public address that is assigned automatically to instances in public subnets and reassigned if the instance is stopped/started * Private - private address assigned automatically to all instances * Elastic IP - public address that is static

Public IP address in regards to EC2 Instances

* Public IPv4 addresses are lost when the instance is stopped but private addresses (IPv4 and IPv6) are retained. * Public IPv4 addresses are retained if you restart the instance * Public IP addresses are assigned for instances in public subnets * All IP addresses (IPv4 and IPv6) remain attached to the network interface when detached or reassigned to another instance.

AMIs can be obtained from the following sources:

* Published by AWS - Prebuilt for a variety of needs * AWS Marketplace - Prebuilt machines by AWS partners, often for specific uses * AMIs from existing instances - This is generally a customer-generated AMI from an existing server * Uploaded virtual servers - Import of other virtual machines from physical-to-VM conversions, VMware, VirtualBox

Dedicated Hosts

* Purchase of a dedicated server * Runs on single-tenant dedicated hardware * Provides system-level information i.e. CPU cores, memory utilization * You then have control over which instances are deployed on that host * The organization can install any operating system or application required * Available as On-Demand or with a Dedicated Host Reservation * Useful if you have server-bound software licenses that use metrics like per-core, per-socket, or per-VM * Each dedicated host can only run one EC2 instance size and type * Optimal when an organization needs access to system-level information, such as actual CPU usage * Most expensive option

AWS has 4 primary forms of databases

* Relational databases * NoSQL Databases * Data Warehousing Databases * Data Lakes

Spot Instances

* Request unused EC2 instance capacity * Sold via bids like an auction * Can be terminated if the spot price increases above your current price * Charged by hour or second * Spot Instances are available at up to a 90% discount compared to On-Demand prices * You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads.

VPC Components

* Routing * Internet Gateway * Egress-only Internet gateway * NAT instances and NAT Gateways * Elastic IP Addresses (EIPs) * VPC endpoints * VPC Peering * Network Access Control Lists * Security groups

AWS RDS supports multiple versions of Microsoft SQL Server

* SQL Server 2008 * SQL Server 2012 * SQL Server 2014

AWS SWF

* SWF is a pre-built workflow management solution. So, all that's necessary is to tell SWF the necessary workflow steps and SWF handles all coordination * SWF enables the coordination of different tasks across distributed applications * SWF enables you to create a workflow of tasks that take multiple steps for completion * SWF coordinates the execution of tasks in distributed applications * SWF mitigates the need to develop code to coordinate tasks in multiple systems. This saves substantial development time

A pure cloud computing environment has several advantages:

* Scalability - The cloud provides incredible scalability. * Agility - Adding computing resources can occur in minutes versus weeks in a traditional environment. * Pay-as-you-go pricing - Instead of purchasing equipment for maximum capacity, which may be idle 90 percent of the time, exactly what is needed is purchased when needed. This can provide tremendous savings. * Professional management - Managing data centers is very complicated. Space, power, cooling, server management, database design, and many other components can easily overwhelm most IT organizations. With the cloud, most of these are managed for you by highly skilled individuals, which reduces the risk of configuration mistakes, security problems, and outages. * Self-healing - Cloud computing can be set up with health checks that can remediate problems before they have a significant effect on users. * Enhanced security - Most cloud organizations provide a highly secure environment. Most enterprises would not have access to this level of security due to the costs of the technology and the individuals to manage it

Security Groups

* Security Groups support "allow" rules only * All non-allowed traffic is denied * Stateful i.e. you only need to allow inbound traffic, and return traffic is automatically allowed * Applied at the instance level * Only inbound rules are necessary * All rules are evaluated prior to allowing or denying traffic * Rules include the source and destination IP address, protocol (TCP/UDP), and port numbers * Default policy is to deny; you must explicitly permit desired traffic for the security group, or all traffic will be denied.

Who is responsible for the AWS Cloud?

* Security and compliance are shared between AWS and the customer * Shared security model * AWS maintains the security OF the cloud * The customer maintains the security of their VPC * AWS manages the AWS infrastructure * AWS manages the underlying VPC technology

AWS has 3 choices for authentication with Identity Providers:

* Single sign on * Federated IAM * AWS Cognito

Range Gets

* Sometimes you need access to some of the data in the file, but not the entire object. A range get allows you to request a portion of the file * Highly useful when dealing with large files or when on a slow connection
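
What a range get retrieves can be simulated locally with byte slicing (the object here is a stand-in, not a real S3 call):

```python
# A 4 KiB stand-in for an S3 object.
obj = bytes(range(256)) * 16

# An inclusive byte range, as expressed in HTTP: "Range: bytes=0-1023".
start, end = 0, 1023
partial = obj[start:end + 1]

print(len(partial))  # 1024 bytes retrieved, not the whole 4096-byte object
```

Against the real S3 API, the same portion would be requested by adding a `Range: bytes=0-1023` header to the GET request, so only that slice crosses the network.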

Cloud agility benefits:

* Speed of deployment i.e. servers and databases * Auto-scaling * Serverless * Unlimited Capacity * Global Reach * Simplicity

AWS RDS supports multiple versions of the Oracle database

* Standard One * Standard * Enterprise

What are the 3 SQS Queues?

* Standard Queues * First In First Out Queues (FIFO) * Dead Letter Queues (DLQ)

Steps for the best Cost Management

* Step One - Only provision the resources that you need - Monitor systems for optimal resource planning * Step Two - Properly size your resources - Size your equipment based upon average use and not peak usage - Cloud computing allows autoscaling, so you don't need to over-provision in advance - Decouple systems in the architecture when possible * Step Three - Purchase the right platform - On demand - Spot Instances - Reserved (Standard, Scheduled, Convertible) - Optimized cost is often achieved with a mix of on-demand, reserved, and spot instances * Step Four - Leverage managed services and serverless * Step Five - Manage data transfer costs - S3 Cross-Region Replication - Use CloudFront to serve content locally - Data in and out of a VPC over a direct connection can be lower cost than VPN if large amounts of data are being transferred.

To take application-consistent snapshots of RAID arrays:

* Stop the application from writing to disk. * Flush all caches to the disk. * Freeze the filesystem. * Unmount the RAID array. * Shut down the associated EC2 instance.

Standard Queues

* Super-fast: supports a nearly unlimited number of requests per second * Standard queues offer the fastest throughput and at-least-once delivery * Every message is delivered at least once * Best-effort ordering: messages may be delivered out of order

Supernet - Route Summarization

* Supernets are the exact opposite of subnets * Supernets combine multiple smaller subnets into a bigger subnet * Therefore supernetting is really used more in routing than addressing * Specifically, this supernet function is used in route summarization * Instead of having multiple routes, which take memory and processing power from the network devices, we can add a summary route * AWS has a limitation on the number of routes in a VPC, so use route summarization
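
Route summarization can be demonstrated with the standard `ipaddress` module: four contiguous /24 routes collapse into one /22 summary (the CIDR blocks are example values):

```python
import ipaddress

# Four contiguous /24 subnets that would otherwise need four route entries.
subnets = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
    ipaddress.ip_network("10.0.3.0/24"),
]

# collapse_addresses merges contiguous networks into the fewest supernets.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # one /22 summary route instead of four /24 routes
```

One /22 entry in the route table covers the whole 10.0.0.0-10.0.3.255 range, which is exactly how summarization keeps a VPC under its route limit.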

Network Load Balancer

* The AWS Network Load Balancer routes traffic based upon the destination port * Network Load Balancers are very fast and are an excellent option when ultimate speed is needed * Can handle millions of requests per second * Excellent with rapidly changing traffic patterns * The connections are stateful, meaning that once a connection is established between the host and the server, it is maintained until the session is completed * Sticky sessions provide a mapping of the source and destination of the connections

Amazon S3 Glacier Deep Archive

* The cheapest option out of all the classes * Focuses on long-term storage like Glacier * Data retrieval is within 12 hours * Ideal for circumstances that require specific data retention regulation and compliance with minimal access, e.g. the health or financial industries

Customer Responsibility in the AWS Cloud

* The customer is responsible for securing all aspects of their VPC - Identity and access management - Determine who is allowed in the VPC and define their functions - Principle of least privilege - Grant the least privileges necessary for employees and partners to perform their functions effectively - Data security - Manage encryption - Maintenance of customer-designed applications - Management of the VPC routing tables - Managing traffic allowed into the VPC - Firewalls, NACLs, security groups - Maintenance of the operating systems and applications stored on EC2 compute instances - Physical security - Keep the devices that connect to the cloud secure from unauthorized users.

A Record

* The most fundamental record * It maps a name to an IP address * The IPv6 equivalent is AAAA Record

Storage Gateway Stored Volume

* The on-premises systems store your files to the storage gateway, just like any other file server. * The storage gateway then asynchronously backs up your data to S3. The data is backed up to S3 via point-in-time snapshots. * Since the data is on S3 via a snapshot, these snapshots can be used by EC2 instances should something happen to the on-premises data center. This provides an excellent and inexpensive offsite disaster recovery option. * Volumes are presented to on-premises systems via the iSCSI protocol

The bucket owner is charged for the request under the following conditions:

* The requester doesn't include the parameter x-amz-request-payer in the header (GET, HEAD, or POST) or as a parameter (REST) in the request (HTTP code 403). * Request authentication fails (HTTP code 403). * The request is anonymous (HTTP code 403). * The request is a SOAP request

Data in the instance store is lost under the following circumstances:

* The underlying disk drive fails * The instance stops * The instance terminates

AWS Simple Queue Service (SQS)

* There are 2 options available * Standard Queue - a simple queue to temporarily store messages prior to being written to the database. With Standard queues, there is no guarantee of the order of messages leaving the queue. This is the default option * First In First Out (FIFO) - a queue designed so that data exits the queue in the order it was received

A high availability database design uses a multi-AZ environment

* Therefore, the database has multiple copies in other AZs * Multi-AZ does not improve database performance * It is for redundancy, to enhance availability * With Multi-AZ, if the primary instance were to fail, the DB instance in the other AZ takes over * Data is synchronously copied from the primary DB to the standby DB in the other AZ

Object Lock property

* This feature is often used to meet a level of compliance known as WORM, meaning Write Once Read Many. * It offers a level of protection for the objects in your bucket, preventing them from being deleted either for a set period of time that is defined by you or, alternatively, indefinitely * Object Lock can only be enabled on a bucket at the time the bucket is created * Without first enabling versioning, it is NOT possible to enable Object Lock * Once you have created your bucket with Object Lock enabled, it is permanently enabled and can't be disabled.

Shared Tenancy

* This is a standard EC2 instance * The server will host virtual machines from several customers

When to use ENI:

* This is the basic adapter type for when you don't have any high performance requirements. * Can use with all instance types.

Database Encryption · Encryption at REST

* This means the data stored on the server is encrypted * Effectively, the EBS volume which houses the database is encrypted * This is performed by enabling AWS Key Management Service (KMS) * AWS KMS is a managed service that makes it easy to create and control Customer Master Keys

A Trail that Applies to All Regions

* This provides the most comprehensive logging and auditing options available. * This provides a record of all events that occur inside an organization's infrastructure. * This type of trail can help correlate events across an organization's global infrastructure and provide insight on fixing problems.

SQS is optimal to use in the following circumstances:

* To increase scalability when there are a lot of write requests to the system. * To decrease the load on the database behind the SQS queue. * When it's not known exactly how much performance is needed, but the organization wants to be able to account for large spikes in traffic. * When extra insurance is desired that critical messages won't be lost. * When you want to decouple your application to increase availability, modularity, and scalability.

Authorization

* To keep the VPC secure, the default policy is to deny access to all services. * Authorization occurs by using the specific privileges that have been defined in IAM * These policies are then associated with user groups, users, or roles * IAM policy documents are written in JavaScript Object Notation (JSON) * The default user policy is to deny access to services, so access must be explicitly configured

File Storage

* Traditional storage * Attached to a computer * Uses Standard File Systems i.e. NTFS * Can be mounted by the devices on the network

Amazon Simple Storage (S3) use cases:

* Typically used for backup and archival of an organization's data * Static website hosting * Distribution of content, media, or software * Disaster recovery planning * Big data analytics * Internet application hosting

Some SNS Use Case examples

* Use for application and system alerts - SNS can send a notification when a predefined event occurs - i.e. when CPU utilization is over 80%, notify system administrators * Use for email and text messages - SNS can send push notifications to people via email/text - i.e. a company's CEO is going to be on TV and a message is sent to all employees * Use for mobile notifications - SNS can send push notifications directly to mobile applications - i.e. notify users of a flash sale on your app

Monitoring

* Use logging and auditing functions * Monitor for system alerts * Monitor for security breaches * Monitor for usage * Monitor for performance

Elastic IP Addresses (EIPs)

* Elastic IP Addresses are public IP addresses for use with the AWS VPC * AWS maintains a pool of public addresses * With an Elastic IP you borrow a static public IP address from the pool and it becomes your Elastic IP address * You can keep it as long as you need, and when you're finished with the address you can return it to AWS and it goes back into the pool * An EIP's single public address can be mapped to multiple private IP addresses, with the main address being the primary address and the additional addresses being secondary addresses * Secondary IP addresses are useful during IP address migrations, as they allow for connectivity while IP addresses are changed. * Secondary addresses are often used when an organization merges with another organization and IP addresses need to be modified to allow for full connectivity. This often occurs when both organizations are using the same private IP address space.

Elastic IP addresses in regards to EC2 Instances

* Elastic IPs are retained when the instance is stopped. * Elastic IP addresses are static public IP addresses that can be remapped (moved) between instances. * All accounts are limited to 5 Elastic IPs per region by default. * AWS charges for Elastic IPs when they're not being used. * An Elastic IP address is for use in a specific region only

More Ways To Design for High Availability

* Use the principle of least privilege in IAM * Disable unnecessary services * Limit the blast radius of problems with AWS Organizations * Keep unwanted network traffic out with NACLs and security groups * Use AWS WAF for firewalling, AWS Shield for DDoS, and IDS/IPS for intrusion protection * Physical security * Strong passwords for when passwords are required * Template good configurations with CloudFormation templates * Consistent backup strategies will protect against data loss * Backups should be stored in at least one secure alternative location * Create images of production servers

Cross Account Roles

* Used to connect to external organizations * Connecting to other organizations can create significant business opportunities, but that connectivity also brings challenges * The partner company may need access to certain resources but shouldn't have access to anything else * While it's essential to provide access with the principle of least privilege everywhere, nowhere is it more critical than when connecting to external organizations

SSE-C (Customer-Provided Keys)

* User has complete autonomy over encryption keys * When using SSE-C, AWS S3 will perform all encryption and decryption of your data, but you have total control of your encryption keys

Authentication Options

* Username and Password - Log into the console providing a username and password - AWS verifies your identity and provides authorization based upon your IAM privileges * Access Key - An access key is a combination of a 20-character key ID and a 40-character secret key - The access key is used to connect to AWS via an API. This is generally performed using the SDK * Access Key and Session Token - When IAM authentication needs to occur under an assumed role, a secure token can be provided to the requesting application - The secure token then functions alongside the access key. This provides additional security over the other methods
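To show what the secret access key is actually for, here is a greatly simplified sketch: the real Signature Version 4 process derives a signing key over several HMAC rounds and a canonical request format, but the core idea is an HMAC-SHA256 of the request using the secret key. The key below is the well-known example key from the AWS documentation, and the string to sign is illustrative only:

```python
import hashlib
import hmac

# Hypothetical, simplified request signing: the secret key never travels over
# the wire; only the resulting signature does, so AWS can verify the caller.
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # AWS docs example key
string_to_sign = "GET\n/\nhost:s3.amazonaws.com"          # illustrative only
signature = hmac.new(secret_key.encode(), string_to_sign.encode(),
                     hashlib.sha256).hexdigest()
print(len(signature))  # 64 hex characters
```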

There are 2 types of storage:

* Volatile - something that goes away, i.e. data being stored in an instance store on an EC2 instance; if the instance is stopped or terminated, all the data is gone. Don't store anything critical on these types of storage. * Non-volatile - something that stays there when it's rebooted. So anything that is critical and matters, you'll put on this type of storage.

DynamoDB Use Cases

* When near unlimited scalability is required * When extremely low latency is required * When storing data from a large number of devices (IoT) * Ideal for gaming applications - game state, player data stores, and leaderboards * Ideal for large-scale financial applications - large numbers of user transactions * E-commerce - shopping carts, inventory tracking, and customer profiles and accounts.

Server-Access Logging

* When server-access logging is enabled on a bucket, it captures details of requests that are made to that bucket and its objects. * Logging is important when it comes to security and root-cause analysis following incidents, and it can also be required to conform to specific audit and governance certifications. * Delivery is not guaranteed and is conducted on a best-effort basis by S3. The logs themselves are collated and sent every few hours, or potentially sooner; there is no set rule regarding this.

Transfer Acceleration

* When transferring data into or out of Amazon S3 from your remote client, or to another AWS region, transfer acceleration can dramatically speed up the process by utilizing another AWS service, Amazon CloudFront. * When transferring data to S3 from your client with transfer acceleration enabled at the bucket level, the request will go via one of the CloudFront edge locations; from there, the transfer request is routed through a high-speed, optimized AWS network path to Amazon S3. * Be aware that there is a cost associated with this feature

Multi Part Uploads

* When trying to upload large files, many things can go wrong and interrupt the file transmission. It is a best practice to send files larger than 100 MB as a multipart upload * The file is broken into pieces, and each piece is sent to S3. When all the pieces are received, S3 puts the pieces back together into a single file. * This helps dramatically if anything goes wrong with the transmission; only a part of the file needs to be re-sent instead of the whole file. * This improves both the speed and reliability of transferring large files to S3.
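The part arithmetic behind multipart uploads can be sketched as below. The 100 MB part size is an assumption chosen to match the threshold above; the real S3 limits are a 5 MB minimum part size (except the last part) and 10,000 parts maximum:

```python
import math

MB = 1024 * 1024

# Hypothetical helper: how many parts would a multipart upload need?
# Files at or under 100 MB go up as a single PUT; larger files are split.
def plan_parts(file_size, part_size=100 * MB):
    if file_size <= 100 * MB:
        return 1
    return math.ceil(file_size / part_size)

print(plan_parts(950 * MB))  # 10 parts; only a failed part is re-sent
```

If one part's transfer fails, only that 100 MB part is retried rather than the whole 950 MB file, which is where the speed and reliability gain comes from.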

NoSQL databases are optimal under the following circumstances:

* When you need to store large amounts of unstructured data. * When the database schema may change. * When you need flexibility. * When an organization needs rapid deployment of the database.

Database Restoration

* When you restore a DB from backup it forms a new instance * The new instance will have a new IP address and new DNS name

Some key situations where SQS can make a notable difference:

* With capacity planning and application scalability. * To make sure messages (orders) are not lost if part of the system is overloaded (i.e., the database on a multitiered application). * For cost optimization, as it offers the ability to right-size the instances supporting an application. * For autoscaling, with its ability to trigger autoscaling based upon queue depth, as opposed to a less direct metric, i.e., CPU utilization. * To support an application's ability to handle large spikes in traffic, without having to scale or make changes to the platform. * To handle increased traffic destined for databases, often without the need to increase write capacity on the database.

S3 cost allocation tags

* You can assign key-value pairs at the bucket level to help with categorization * One point to note is that you must activate your cost allocation tags from within AWS Billing before they will show up on any reports.

Placement Groups come in 3 different configurations:

* Cluster * Spread * Partition

EBS Optimized Instances

* Dedicated capacity for Amazon EBS I/O. * EBS-optimized instances are designed for use with all EBS volume types. * Max bandwidth: 400 Mbps - 12,000 Mbps. * IOPS: 3,000 - 65,000. * GP-SSD within 10% of baseline and burst performance 99.9% of the time. * PIOPS within 10% of baseline and burst performance 99.9% of the time. * Additional hourly fee. * Available for select instance types. * Some instance types have EBS-optimized enabled by default.

S3 Versioning

* Protects data against accidental or malicious deletion by keeping multiple versions of each object in the bucket, identified by a unique version ID * Allows users to preserve, retrieve, and restore every version of every object stored in their Amazon S3 bucket * Once enabled, versioning cannot be removed from a bucket; it can only be suspended

Scheduled Reserved Instances

* Reserved for specific periods of time; accrue charges hourly, billed in monthly increments over the term (1 year) * Match your capacity reservation to a predictable recurring schedule * Optimal when you have a need for a specific amount of computing power on a scheduled basis.

Elastic Load Balancer

* Elastic Load Balancers automatically distribute traffic across multiple targets such as EC2 instances and web servers * Elastic Load Balancers auto scale * Elastic Load Balancers use an IP address - if auto scaling occurs, multiple IP addresses will be used * Elastic Load Balancers can load balance across AZs * Elastic Load Balancers support health checks * Elastic Load Balancers can terminate SSL connections * ELBs can be internet-facing or internal-only * EC2 instances and containers can be registered against an ELB. * ELB nodes use IP addresses within your subnets; ensure at least a /27 subnet and make sure there are at least 8 IP addresses available in order for the ELB to scale.

AWS VPN CloudHub

* Enables an organization to have transitive VPN connectivity between remote sites in a hub-and-spoke environment. * CloudHub uses BGP, specifically eBGP, to connect and share routing information across sites. * All connected sites will have access to all network layer reachability information

Single Sign On

* Enables the user to authenticate once to the Identity Provider; they will then not need to sign on again to access AWS services * When the user signs on, they are authenticated against the Identity Provider * Their groups are determined, and their privileges are assigned

VPC Endpoint

* Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Database Encryption · Encryption in Transit

* Encrypt data on the way to and from the AWS database * Effectively, a certificate is created that assists with authentication of the endpoints and encryption of the data * This uses the TLS protocol and SSL certificates

Uses of Aurora

* Enterprise applications * Software as a Service (SaaS) applications

Domain Name System (DNS)

* Every device on the network needs an IP address * The DNS system maps a domain name to an IP address * AWS has their own domain name system called Route 53

Amazon S3 Glacier Retrieval Options

* Expedited - used when you have an urgent requirement to retrieve your data, but the archive has to be less than 250 MB. The data is then made available in 1-5 minutes. Most expensive of the 3 * Standard - can be used to retrieve any of your archives no matter their size, but your data will be available in 3-5 hours. 2nd most expensive of the 3. * Bulk - used to retrieve petabytes of data at a time, taking 5-12 hours to complete. Cheapest of the options.
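The decision logic above can be summarized in a small helper. This is a hypothetical function, not part of any AWS SDK, and the petabyte threshold for Bulk is an illustrative simplification:

```python
# Hypothetical tier picker mirroring the rules above:
# Expedited only for urgent archives under 250 MB; Bulk for petabyte-scale
# retrievals; Standard otherwise.
def pick_retrieval_tier(archive_mb, urgent=False):
    if urgent and archive_mb < 250:
        return "Expedited"                 # available in 1-5 minutes
    if archive_mb >= 1_000_000_000:        # ~1 PB expressed in MB
        return "Bulk"                      # 5-12 hours, cheapest
    return "Standard"                      # 3-5 hours

print(pick_retrieval_tier(100, urgent=True))   # Expedited
print(pick_retrieval_tier(500, urgent=True))   # Standard (too big to expedite)
```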

There are 4 versions of Microsoft SQL Databases:

* Express * Web * Standard * Enterprise

Exterior Gateway Protocols (EGP)

* Exterior gateway protocols are used to exchange routing information across organizations. * Exterior gateway protocols provide extensive tuning tools, providing the means to engineer traffic and filter routes for scalability, security, and proper routing. * Exterior gateway protocols are slower to re-route traffic, as they are designed for scalability and tunability. * Border Gateway Protocol (BGP) is the exterior gateway protocol used by the internet and AWS. * Exterior gateway protocols are tuned to be able to store an enormous number of routes (assuming the routers have sufficient memory and CPU capacity). At the time of this writing, the internet routing table has over 800,000 routes.

Amazon S3 OneZone IA

* For people who need the data but are looking to save costs * Reduced availability storage * Reduced durability * This is great for backups of storage due to its low cost * Good for data you may not need frequent access to, because it's going to give you better pricing for the reduced availability

When NOT to use EBS Volumes Types:

* For temporary storage or multi-instance storage access (EBS volumes can only be accessed by one instance at a time) * For very high durability and availability of data storage (use Amazon S3 or EFS instead)

Applying IAM Policies

* For an IAM policy to work, it needs to be associated with an IAM principal * Can be applied with a user policy, managed policy, or group policy - User Policy: applied to individual users. Works well but quickly becomes unscalable when many users exist in an organization - Managed Policy: a policy is created and exists independently of the user. The policy can then be attached to users or groups. This method is effective and scalable - Group Policy: an organization creates a group and attaches a policy to the group. Users are then added to groups and inherit the policy from the group. This is highly effective and scalable

DynamoDB

* Fully managed by AWS * Highly available and in multiple AZs by default * DynamoDB is serverless, i.e. AWS manages the OS, servers, and security. Because it is serverless, there is near unlimited scalability * DynamoDB stores information on high-performance SSD storage * Low millisecond latency * DynamoDB Accelerator is an in-memory cache that can lower latency to sub-millisecond * Encrypts all data by default * Can be backed up with little to no effect on database performance * Can be set up for global (cross-region) replication to allow data to be globally available * Designed to work with name-value pairs (primary index) * Has a very flexible schema * Can also work with secondary indexes, which allow applications to have access to different query patterns * DynamoDB secondary indexes can be global or local * Local secondary indexes have the same partition key as the base table * Global indexes can span across all database partitions * Limitations on sizing: an item collection (all items sharing a partition key, when local secondary indexes are used) can't exceed 10 GB * To increase scalability, DynamoDB is eventually consistent by default * Can be configured with strongly consistent reads if required * It is required to provision read and write capacity during table creation * Provision read and write capacity prior to need * Autoscaling is available, but it won't scale back down on its own * Priced based upon throughput * On-demand capacity is available at a higher cost than provisioned capacity

Amazon FSx for Windows File Server

* Fully managed, high-availability Windows file system storage * Hosted on Windows servers * Supports Microsoft file system features (i.e. quotas and Active Directory), Server Message Block (SMB) protocol, Windows NTFS, and Distributed File System (DFS) * FSx provides encryption in transit and at rest * High-availability single and multiple availability zone options * Uses SSD storage for enhanced performance and throughput, providing sub-millisecond latencies * Operates as shared file storage * So for organizations that have highly Windows-dependent workloads that need Windows-dependent storage, this is the option to use * Provides fully managed backups

Amazon EFS has 2 different performance modes:

* General Purpose - the default performance mode, typically used for most use cases, i.e. home directories and general file-sharing environments. It offers all-round performance and low-latency file operations, with the limitation that this mode allows only up to 7,000 file system operations per second to your EFS file system. * Max I/O - this mode offers virtually unlimited amounts of throughput and IOPS. The downside, however, is that your file operation latency will take a negative hit compared to General Purpose

EBS General Purpose SSD (gp2)

* General purpose SSD storage * Good balance of price and performance * Great for boot volumes as they are persistent (the data stays after a reboot; the boot volume is the root volume, i.e. the volume created with the instance) * Great for transactional workloads because of its moderately low latency and relatively good throughput * Great for dev and test environments * Baseline of 3 IOPS per GiB with a minimum of 100 IOPS. * Burst up to 3,000 IOPS (for volumes smaller than 1 TiB) * Up to 16,000 IOPS per volume * AWS designs gp2 volumes to deliver 90% of the provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB
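The baseline rule above (3 IOPS per GiB, floored at 100 and capped at 16,000) is easy to express as a small sketch; the function name is just for illustration:

```python
# Sketch of the gp2 baseline IOPS rule: 3 IOPS per GiB of volume size,
# with a 100 IOPS minimum and a 16,000 IOPS per-volume cap.
def gp2_baseline_iops(size_gib):
    return min(max(3 * size_gib, 100), 16000)

print(gp2_baseline_iops(20))    # 100  (minimum floor applies)
print(gp2_baseline_iops(500))   # 1500
print(gp2_baseline_iops(8000))  # 16000 (per-volume cap applies)
```

This also shows why small gp2 volumes rely on burst credits: a 20 GiB volume's baseline is only the 100 IOPS floor, far below the 3,000 IOPS burst ceiling.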

When to use ENA:

* Good for use cases that require higher bandwidth and lower inter-instance latency. * Supported for limited instance types (HVM only)

Direct Connect offers:

* Guaranteed bandwidth * Consistent latency * Highest reliability

When to use EFA:

* High Performance Computing. * MPI and ML use cases. * Tightly coupled applications. * Can use with all instance types

Amazon S3 Standard

* High availability * High durability * High performance * Low latency * Use for frequently accessed data * Highest price of the storage classes, but still the best option if you're accessing your data frequently * Supports SSL to encrypt data in transit, plus encryption at rest

AWS Amazon Simple Queueing Service (SQS)

* High availability, high scalability applications demand modularity * Segmenting and decoupling application architectures can dramatically improve scalability by removing system bottlenecks * AWS Simple Queueing Service (SQS) can help decouple application architectures * SQS is a message queuing service that provides temporary storage * SQS enhances application availability by providing a means to keep messages from being lost * SQS is for transient storage; the default queue retention is 4 days, but it can be configured up to 14 days * SQS enables right-sizing of applications * SQS facilitates auto scaling * SQS mitigates the need for messaging middleware in multi-tiered applications
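The decoupling idea can be illustrated with a plain in-process queue (a toy stand-in for SQS, not the real service or its API): a front end bursts messages into the buffer, and a back-end worker drains them at its own pace, so neither tier has to be sized for the other's peak:

```python
from queue import Queue

# Toy decoupling sketch: the queue absorbs a traffic spike so the consumer
# is never overwhelmed. All names here are illustrative.
buffer = Queue()

for i in range(100):                 # producer: a burst of 100 writes
    buffer.put(f"message-{i}")

processed = []
while not buffer.empty():            # consumer: drains when it has capacity
    processed.append(buffer.get())

print(len(processed))  # 100 - nothing was dropped during the spike
```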

Elastic Block Storage (EBS)

* High performance block storage * Acts like a virtual hard drive * Typically used on EC2 instances when you need high performance storage or storage that doesn't go away on instance termination, and on AWS databases, because databases can need high performance storage * Ideal for databases, enterprise applications, containers, and big data applications. * Extremely scalable * Not deleted upon instance termination * Mission critical use: high availability, 99.999% * Optimal for high throughput and high transaction workloads * You can attach multiple EBS volumes to an instance. * You cannot attach an EBS volume to multiple instances (use Elastic File System instead). * Multiple performance options * Associated with a single AZ * Automatically replicated within its AZ * Backed up via a snapshot, i.e. an image is made of the data and is typically stored on S3

First In First Out Queues (FIFO)

* High throughput, but much slower than standard queues * First-in-first-out delivery guarantees the messages will be processed once, in the order they are received * Since messages are processed in the order they are received, the FIFO queue may increase latency, as all new messages wait for the previous message to be processed
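A toy model makes the two FIFO guarantees concrete: strict ordering (per message group, as in SQS FIFO) and deduplication of repeated sends. This is a simplified simulation, not the SQS API; the class and method names are invented for illustration:

```python
from collections import defaultdict, deque

class FifoQueueSim:
    """Toy model of SQS FIFO behavior: strict per-group ordering plus
    content-based deduplication (a simplification of the real dedup window)."""
    def __init__(self):
        self.groups = defaultdict(deque)
        self.seen = set()

    def send(self, group_id, body):
        if body in self.seen:
            return False                    # duplicate send is dropped
        self.seen.add(body)
        self.groups[group_id].append(body)
        return True

    def receive(self, group_id):
        q = self.groups[group_id]
        return q.popleft() if q else None

q = FifoQueueSim()
q.send("orders", "order-1")
q.send("orders", "order-2")
q.send("orders", "order-1")                 # duplicate: ignored
print([q.receive("orders"), q.receive("orders"), q.receive("orders")])
# ['order-1', 'order-2', None]
```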

Typically 2 types of VPC Peering architecture environments:

* Hub and spoke: a hub is created with connections to all remote VPCs. This enables the hub to communicate with each remote VPC, or spoke. However, since VPC peering is not transitive, the spoke VPCs will not be able to communicate with each other; communication is limited to hub-and-spoke. * Fully meshed environment: every VPC peers with every other VPC. This is a great option when all VPCs need to communicate with each other and you have a small number of VPCs to connect. Fully meshed environments do not scale when a large number of VPCs require connectivity with each other.
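The scaling problem with a full mesh is just combinatorics: n VPCs need n(n-1)/2 peering connections, which grows quadratically. A one-line sketch:

```python
# Number of VPC peering connections required for a full mesh of n VPCs.
def mesh_peerings(n):
    return n * (n - 1) // 2

for n in (3, 10, 50):
    print(n, mesh_peerings(n))  # 3 -> 3, 10 -> 45, 50 -> 1225
```

At 50 VPCs the mesh already needs 1,225 peerings, which is why hub-and-spoke (or AWS Transit Gateway) is preferred at scale.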

Pre-signed URL expiration:

* IAM Instance Profile - Up to 6 hours * AWS Security Token Service - Up to 36 hours * IAM User - Up to 7 days * Temporary Token - When the token expires

Routing Policies for Route 53

* Simple Routing: basic DNS that maps a domain name to a single location. This is the default policy * Failover Routing: send the traffic to the main server; if that is not available, send to a backup server. Used for active/passive configurations * Geolocation Routing: looks at the source IP address of the user (which will ultimately provide their location) and routes them to the closest geographic region * Geoproximity Routing: used when you have servers in multiple regions. Geoproximity routing will send the requester to the closest region * Latency-Based Routing: sends to the server with the lowest latency to optimize performance * Multivalue Answer: route to any available server * Weighted: provides a means to share traffic between servers at a percentage you choose (i.e. 75% to server A and 25% to server B). A great option when you want to test an application's functionality

Pure Cloud is ideal for these circumstances:

- Start-up organizations - Scalability i.e. the cloud computing environment provides near unlimited scalability - Speed of deployment i.e. with the cloud you can pretty much instantly deploy if needed - Connections to partner organizations i.e. you can peer them and enable them to talk to each other without having to build the network connections - Distributed environments - Cost concerns i.e. generally speaking, cloud computing can be cheaper than the traditional routes.

Capital Expenses (CAPEX)

One of the key drivers to move to the cloud is to reduce capital costs. Organizations that have their own data centers need their own routers, servers, switches, firewalls, generators, etc. With the cloud you have greatly reduced capital expenses, since you don't really have anything to buy; you're just purchasing or leasing services on an as-needed basis.

If you have looser requirements in regard to latency, availability, and performance what should you choose?

A VPN connection

Since network connections can fail, what is recommended to help with that?

A direct connection is generally combined with a VPN backup over the internet.

What are Key Pairs?

A key pair consists of a public key that AWS stores, and a private key file that you store. * For Windows AMIs, the private key file is required to obtain the password used to log into your instance. * For Linux AMIs, the private key file allows you to securely SSH (secure shell) into your instance.

Local Trail

A local trail is tied to a single region. CloudTrail logs are put into a single bucket. This is the default option when CloudTrail is configured by the CLI or API.

Snapshots

A point-in-time state of an instance * Can be used to migrate a system to a new AZ or region * Can be used to convert an unencrypted volume to an encrypted volume * Snapshots are stored on Amazon S3 * Does not provide granular backup (not a replacement for backup software). * EBS volumes are AZ specific but snapshots are region specific. * Volumes can be created from EBS snapshots that are the same size or larger * Snapshots can be taken of non-root EBS volumes while running

Private Virtual Interface

A private virtual interface should be used to access an Amazon VPC using private IP addresses

Public Virtual Interface

A public virtual interface can access all AWS public services using public IP addresses

Spread Placement Groups

A spread placement group is a group of instances that are each placed on distinct underlying hardware * Instances are spread across distinct racks * Protects against failures that could occur on a rack (rack power, network switch, etc.) * Provides high performance with reduced risk from simple component failure * Excellent performance with much lower risk of single points of failure than a clustered placement group * Recommended for applications that have a small number of critical instances that should be kept separate from each other * Can be spread across multiple availability zones. This design offers high availability, but with higher latency and lower network performance than clustered and partitioned placement groups

Transit virtual interface

A transit virtual interface should be used to access one or more Amazon VPC Transit Gateways associated with Direct Connect gateways

What is a "Key" for an Amazon S3 object?

A unique identifier for an object in a bucket

Some of these S3 security features, which can help you maintain a level of data protection are:

* IAM Policies. These are identity and access management policies that can be used to both allow and restrict access to S3 buckets and objects at a very granular level, depending on an identity's permissions. * Bucket Policies. These are JSON policies assigned to individual buckets, whereas IAM policies are permissions relating to an identity, a user group, or a role. Bucket policies can also define who or what has access to that bucket's contents. * Access Control Lists. These allow you to control which user or AWS account can access a bucket or object, using a range of permissions such as read, write, or full control. * Lifecycle Policies. Lifecycle policies allow you to automatically manage and move data between classes, allowing specific data to be relocated based on compliance and governance controls you might have in place. * MFA Delete. Multi-Factor Authentication Delete ensures that a user has to enter a 6-digit MFA code to delete an object, which prevents accidental deletion due to human error. * Versioning. Enabling versioning on an S3 bucket ensures you can recover from misuse of an object or accidental deletion, and revert back to an older version of the same data object. The consideration with versioning is that it will require additional space, as a separate object is created for each version, so that's something to bear in mind

Creating IAM Policies

* IAM policies determine who is allowed to access AWS resources and what actions they can perform. * When creating an IAM policy, permissions can be applied to a specific resource or to all resources. * Providing access to specific resources is based upon the Amazon Resource Name (ARN). * Providing access to all resources is accomplished with an asterisk (*) wildcard.

Authorization - Policy

* IAM policy documents are written in JavaScript Object Notation (JSON) and defined by the following attributes: * Effect - Allow or Deny * Service - The service for which access is being requested * Resource - The resource being made available - the full Amazon Resource Name * Action - Determines the permissions of the user - i.e. read only, write only * Condition - Optional - Enables very granular control - i.e. to allow access only from a specific IP subnet or time of day
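As an illustration, a minimal identity-based policy combining these attributes might look like the following sketch. The bucket name and source IP range are made-up placeholders, and note that in the actual JSON document the service appears as the prefix of each Action (e.g. `s3:`) rather than as a separate attribute:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```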

Private IP Addresses

* IETF RFC 1918 private ranges - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 * Private IP addresses are not globally routable
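These ranges can be checked programmatically; here is a minimal Python sketch using the standard-library `ipaddress` module (the function name `is_rfc1918` is my own, not an AWS API):

```python
import ipaddress

# RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if the address falls inside an RFC 1918 range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_rfc1918("172.31.0.1"))  # True  (inside 172.16.0.0/12)
print(is_rfc1918("172.32.0.1"))  # False (just past the /12 boundary)
print(is_rfc1918("8.8.8.8"))     # False (globally routable)
```

Note that the 172.31.0.0/16 block AWS uses for default VPCs sits inside the larger 172.16.0.0/12 range.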

There are several key components to addressing EC2 instances:

* IP addresses are assigned to network interfaces and not computing systems. * Depending upon the instance type, an instance can have multiple network interfaces. * Each network interface must be on a different subnet than the other interfaces on the EC2 instance. * Each IP address assigned to an interface must be unique within the VPC. * IP addresses can be IPv4 or IPv6. * All interfaces are automatically assigned an IPv6 globally unique address, which can be manually disabled. * EC2 instances with public IP addresses and an internet gateway are reachable from the internet. * EC2 instances with private IP addresses are not reachable from the internet unless a NAT instance and an internet gateway are used. * EC2 instances with private IP addresses and a NAT gateway without an internet gateway will not be reachable from the internet, but will be able to connect to the internet for operating system patches and other needed connectivity.

IPv4 Classes

* IPv4 has 5 classes of IP address space * Address classes are legacy - Class A: 1.0.0.0 - 126.255.255.255 /8 - Class B: 128.0.0.0 - 191.255.255.255 /16 - Class C: 192.0.0.0 - 223.255.255.255 /24 - Class D (Multicast): 224.0.0.0 - 239.255.255.255 - Class E (Experimental): 240.0.0.0 - 255.255.255.255

NS Record

* Identifies the DNS servers that are responsible for your DNS zone * These authoritative name servers propagate an organization's official DNS information to DNS servers across the internet * There can be several entries

Identity Federations

* Identity Federations can be used to enhance IAM scalability - This is accomplished by establishing a trust relationship between the AWS Account and the Identity Provider (IdP) - The connection is made with OpenID Connect (OIDC) or Security Assertion Markup Language 2.0 (SAML) * An external identity provider can leverage web-based identities - i.e. Google, Amazon, Facebook, Apple, Twitter, Linkedin * Additionally, Identity Federations enable an organization to connect their AWS VPC to internal identity management applications. This includes Active Directory or LDAP. The connection is made with SAML 2.0

Cognito

* Identity and data synchronization service that enables organizations to synchronize identity management and data across mobile devices * Provides authentication, authorization, and user management for your web and mobile apps * Cognito users can sign in directly with a username and password, or with a third party Identity Provider such as Facebook or Google * How it works: - The user or app authenticates against Cognito and gets a token. The token is then used to provide access to the AWS resources

Choosing Between EC2 and Fargate for Containers:

* If complete control over the device hosting the container is needed, choose EC2. * If there are specialized requirements and specific customization options required, choose EC2. * If a highly scalable platform with minimal management overhead is desired, choose Fargate. * If near limitless scalability is required, choose Fargate. * Overall, Fargate is likely a better solution for most customers' needs.

AMIs can be copied to different regions

* Image from a snapshot of the machine * Perfect for disaster recovery * Perfect for migrations to new regions

Internet Gateways (IGW)

* In order to connect to the internet, an internet connection and an internet gateway must be configured. * The internet gateway is a router with an internet service provider connection. AWS provides internet service to VPC customers when the customer sets up an internet gateway. * The AWS internet gateway is a high-availability, redundant internet router. When using the internet gateway, it will have a route to all internet destinations or a default route to an upstream provider. * Additionally, the internet gateway will translate private IP addresses into a public address for internet connectivity

Security group configuration for ELB:

* Inbound to ELB (allow) - Internet-facing ELB: Source: 0.0.0.0/0. Protocol: TCP. Port: ELB listener ports. - Internal-only ELB: Source: VPC CIDR. Protocol: TCP. Port: ELB Listener ports. * Outbound (allow, either type of ELB): Destination: EC2 registered instances security group. Protocol: TCP. Port: Health Check/Listener.

Security group configuration for registered instances:

* Inbound to registered instances (Allow, either type of ELB): Source: ELB Security Group. Protocol: TCP. Port: Health Check/Listener. * Outbound (Allow, for both types of ELB). Destination: ELB Security Group. Protocol: TCP. Port: Ephemeral

Partitioned Placement Groups

* Instances are grouped in partitions and spread across racks in the same AZ * Protects against failures that could occur on a rack - Rack power, network switch, etc. * No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within your application * Provides high performance with reduced risk from a simple component failure * Excellent performance with much lower risk of single points of failure than a Clustered Placement Group * Can be used to deploy large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct racks.

There are 2 types of VPC endpoints:

* Interface Endpoints * Gateway Endpoints

Interior Gateway Protocols (IGP)

* Interior gateway protocols are used to exchange routing information inside of an organization. * Interior gateway protocols provide a very detailed map of the organization's routes. * Interior gateway protocols can detect outages and re-route traffic very quickly. * Interior gateway protocols are tuned for performance at the expense of scalability. * Some examples of interior gateway routing protocols include OSPF, IS-IS, and EIGRP.

Amazon FSx for Lustre

* Is a fully managed file system designed for compute-intensive workloads i.e. Machine Learning and high-performance computing * Ability to process massive data sets * Performance can run up to hundreds of GB per second of throughput, millions of IOPS, and sub-millisecond latencies * Has integration with Amazon S3 * Supports cloud-bursting workloads from on-premises over Direct Connect and VPN connections.

MX Record

* An MX Record specifies which mail servers can accept mail for your domain * This is necessary to be able to receive email

Scaling Out for Relational Databases

* A Read Replica is a read-only (except MariaDB) copy of a database instance * It is synchronized in near real time to the master database * Provides additional servers to reduce load on the master database server - When there is a lot of read activity - When query traffic to a database is slowing things down - When you need extra DB capacity, and offloading reads to the Read Replica frees up the master DB server * Read Replicas are used for performance and NOT disaster recovery

Elastic Network Interface or (Network Interface)

* Is a logical networking component in a VPC that represents a virtual network card. * You can create and configure network interfaces in your account and attach them to instances in your VPC * By default, eth0 is the only Elastic Network Interface (ENI) created with an EC2 instance when launched * You can add additional interfaces to EC2 instances (number dependent on instances family/type). * An ENI is bound to an AZ and you can specify which subnet/AZ you want the ENI to be added in * If you add a second interface AWS will not assign a public IP address to eth0 (you would need to add an Elastic IP)

Systems Manager Parameter Store

* Is a scalable, hosted serverless environment optimized for storing passwords, database strings, license codes, and API keys. * Provides secure, hierarchical storage for configuration data management, and secrets management

Default AWS Certificate Manager (ACM)

AWS Certificate Manager is for customers who want encryption and security using TLS. The certificates are deployed through ELBs, CloudFront, and API Gateway in order to make communication secure.

There are 2 methods to send data directly to AWS instead of using a network connection:

AWS Snowball and the AWS Import/Export service

Database Storage Options

AWS databases are stored on EBS volumes * Provisioned IOPS (PIOPS) - use this when you need the lowest latency, high performance, and the highest throughput * General Purpose SSD - low latency, moderate throughput, and high performance. Could be used for databases in a lab environment or databases that aren't critically latency sensitive. * Magnetic Storage - moderate performance, moderate throughput, lowest cost, highest latency, and designed for light IO requirements - Great for storing large amounts of data; don't use it for latency-sensitive applications

How does AWS treat their IP addresses?

AWS reserves the first 4 IP addresses of a subnet and the last IP address (the broadcast address), so effectively out of any subnet you're losing 5 IP addresses. The smallest allowed subnet is a /28.
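The arithmetic can be checked with Python's standard-library `ipaddress` module; this is just an illustration of the counting, not an AWS API call:

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/28")  # smallest subnet AWS allows

total = subnet.num_addresses  # 2**(32-28) = 16 addresses
usable = total - 5            # AWS reserves the first 4 plus the last (broadcast)
print(total, usable)          # 16 11
```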

Pure Cloud

All computing resources are in the cloud i.e. servers, storage, applications, databases, and load balancers are all in the cloud. The organization is connected to the cloud with a direct connection or a VPN connection. The speed and reliability of the connection to the cloud provider will be the key determinant of the performance of this environment.

What is IAM also referred to as?

Also referred to as AAA * Authentication: Who is the user? * Authorization: Is the user allowed to access the resource? * Accounting: The ability to see what users have done

What are the prebuilt Linux images supported by AWS?

Amazon Linux 2 and Amazon Linux AMI

Target Groups

Are a logical grouping of targets (EC2 instances or ECS). * Targets are the endpoints and can be EC2 instances, ECS containers, or IP addresses. * Target groups can have up to 1000 targets. * A single target can be in multiple target groups. * Only one protocol and one port can be defined per target group. * You cannot use public IP addresses as targets. * You cannot use instance IDs and IP address targets within the same target group. * A target group can only be associated with one load balancer * Target groups are used for registering instances against an ALB or NLB. * Target groups are a regional construct

On Demand Instances

Are computing instances that are available when needed * Pay by the hour or second * Facilitates auto-scaling * Scaling out means adding more instances * Scaling up means changing the instance size - adding more memory, performance, and disk speed * Ideal for short-term needs or unpredictable workloads * Optimal when reliable computing capacity is needed without complete knowledge of how long the application will be used or its capacity requirements

Availability Zones (AZs)

Are datacenters within a region

Tags

Are just arbitrary name/value pairs that you can assign to virtually all AWS assets to serve as metadata.

Placement Groups

Are simply where an organization places their equipment. * Instances within a placement group can communicate with each other using private or public IP addresses. * Best performance is achieved when using private IP addresses. * Using public IP addresses, performance is limited to 5 Gbps or less * You can't merge placement groups * It is recommended to keep instance types homogeneous within a placement group. * You can use reserved instances at an instance level, but you cannot reserve capacity for the placement group. * An instance can be launched in one placement group at a time; it cannot span multiple placement groups

Why is Pure Cloud not optimal for 90% of organizations?

Because say if AWS goes down then they go down as well so it's very risky.

Why is Caching beneficial?

Caching is beneficial only when there are frequent requests for the same information, or queries.

CloudWatch Events

Can be used to trigger auto scaling, Lambda functions, SNS notifications, actions on containers, and many other functions

Hybrid Cloud

Combines a standard data center with outsourced cloud computing. In a hybrid architecture, the organization can run its applications and systems in its local data center and offload part of the computing to the cloud.

SWF controls the flow of tasks using what?

Deciders to keep track of the workflow. The decider receives decision tasks from SWF and then determines and schedules the next step to complete the task.

Recovery Point Objective (RPO)

Defined as the maximum amount of data, measured in time, that could be lost for a service.

Recovery Time Objective (RTO)

Defined as the maximum amount of time in which a service can remain unavailable before it's classed as damaging to the business.

If you have tight demands on latency and strict performance requirements what should you pick?

Direct Connect

EC2 Roles enable what?

EC2 computing resources to access AWS services * For example, EC2 to S3 and DynamoDB * To set up an EC2 Role, an IAM role is created and then applied to the EC2 instance.

Microsoft SQL Databases

Enables organizations to bring their Windows-based workflows to the cloud. * Offers tools like SQL Server Management Studio to help manage the infrastructure * AWS allows users to bring their Windows workloads to the cloud seamlessly * This allows for high reliability and easy migration * Microsoft SQL has different clustering and failover options than most AWS databases

AWS Batch Multi-node parallel jobs

Enables you to run single jobs that span multiple Amazon EC2 instances. You can run large-scale, tightly coupled, high performance computing applications and distributed GPU model training without the need to launch, configure, and manage Amazon EC2 resources directly

What is storage?

Environments to keep an organization's data. Storage is a critical component of the VPC

S3 Multi factor Authentication Delete (MFA Delete)

An excellent way to protect data the business relies on. If you try to delete a file, MFA Delete makes sure you're the right person and that you truly want to delete the file by requesting an authentication code

Dead Letter Queues (DLQ)

Feature to retain messages if a delivery error occurs

Block Storage is excellent for...

Files that change frequently, housing an OS, data that needs to be utilized by a computer, or anything mounted by a server

Cache Timeout

How quickly the cache ages out and gets rid of data. This timeout, referred to as the time to live (TTL), can be configured based upon an organization's needs
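A minimal Python sketch of the idea (the `TTLCache` class is illustrative, not an AWS or ElastiCache API):

```python
import time

class TTLCache:
    """Toy cache whose entries age out after `ttl` seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() >= expires:
            del self.store[key]  # the entry aged out; force a fresh fetch
            return None
        return value

cache = TTLCache(ttl=30)  # a 30-second TTL, tuned to the organization's needs
cache.put("user:1", {"name": "alice"})
print(cache.get("user:1"))  # {'name': 'alice'} while the entry is still fresh
```

Once the TTL elapses, `get` returns `None` and the caller goes back to the source for fresh data.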

EBS Provisioned IOPS (io1) is the best option when you need what?

If you have an application sensitive to latency, this is the volume type you're going to use. It also has the highest throughput of the options, so if you've got something that needs high throughput and low latency, this is your perfect volume type.

Pilot light

In pilot light, data is mirrored and the environment is scripted as a template, which can be built out and scaled in the unlikely event of a disaster * This option has a lower RTO than Backup and Restore, but will be more expensive to maintain

Proper Technique NACL that allows desired traffic example:

Inbound - Rule 110: Allow TCP Port 80, Source: any. Outbound - Rule 110: Allow TCP Port 80, Destination: any
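Because NACLs are stateless, rules are evaluated in ascending rule-number order and the first match wins, with an implicit deny at the end. A small Python model of that evaluation order (illustrative only; the rule table is a hypothetical in-memory stand-in, not an AWS API):

```python
# Hypothetical in-memory model of NACL rule evaluation.
rules = [
    {"num": 110, "action": "allow", "protocol": "tcp", "port": 80},
    {"num": 32767, "action": "deny", "protocol": "any", "port": "any"},  # implicit default deny
]

def evaluate(protocol: str, port):
    # Rules are checked lowest rule number first; the first match decides.
    for rule in sorted(rules, key=lambda r: r["num"]):
        if rule["protocol"] in (protocol, "any") and rule["port"] in (port, "any"):
            return rule["action"]
    return "deny"

print(evaluate("tcp", 80))   # allow - matches rule 110
print(evaluate("tcp", 443))  # deny  - falls through to the default rule
```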

There are essentially 2 kinds of routing protocols:

Interior Gateway Protocols (IGP) Exterior Gateway Protocols (EGP)

Region

Is a collection of availability zones that are geographically located close to one another

Amazon WorkDocs

Is a fully managed, secure content creation, storage, and collaboration service. It is similar in functionality to Google Drive or Dropbox. WorkDocs enables collaboration across projects and facilitates things like shared document editing. * Simple solution with pay as you go pricing * Accessed with a web interface or with client software

Block Storage

Is a highly efficient, high-performance form of storage. It is slower than instance storage because it is accessed over the network, and it decouples your storage from your compute environment. - Data is placed in blocks - Each block has a unique identifier - Blocks are placed where it's most efficient - Very scalable - Constant reads and writes

Amazon MQ

Is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud

Amazon Simple Notification Service (SNS)

Is a managed messaging service to deliver messages between systems or between systems and people * Used to decouple messages between microservice applications * Used to send SMS, email, and push messages to mobile devices * Facilitates communication between senders and recipients via a Publish/Subscribe model - think of a mailing list: you subscribe, and messages are delivered when they are ready * SNS is a high-availability platform that by default runs across multiple AZs * SNS can be used to fan out messages to a large number of subscriber systems or customer endpoints (i.e. SQS queues or Lambda functions) * SNS allows the creation of filter policies so that you only receive notifications for what you're interested in * SNS encrypts your messages immediately to protect them from unauthorized access

Database queuing

Is a means to schedule your data delivery * Can be used in multiple other applications as well * Offloads traffic destined for the database to the queuing system * Can make a significant improvement in the write performance of a database * Can significantly reduce CPU usage and other resource consumption in the database as well

The Publish-Subscribe (Pub-Sub)

Is a messaging model that enables notifications to be delivered to clients using a push mechanism, which notifies clients of message updates

Virtual Private Cloud (VPC)

Is a private network an organization purchases from the cloud provider on a shared cloud - really, a virtual private data center environment. * It is logically isolated from other AWS customers * IP addressing - both private and public * A VPC has its own routing tables

Data Lake

Is a repository that allows you to store structured and unstructured data in the same place at any scale * A data lake is a location that holds large amounts of raw data * Does not require you to structure the data as you would in a database * Allows an organization to store virtually any type of data at an almost unlimited scale. * Unlike a database, a data lake can store data in its native format until it is needed by another application. * Highly adaptable and can be changed at any time to meet an organization's requirements.

AWS Snowball

Is a rugged computer with substantial storage that can be rented from AWS. If you have tight time frames and a moderate amount of data to transfer, this is the best option * The customer copies the files they want moved to AWS (the files are encrypted) and ships the Snowball to AWS * Once the Snowball is received by AWS, AWS employees move the data from the Snowball to the organization's cloud computing environment. * Specialty computer with high storage capacity * Used for migration of large amounts of data to AWS

VM Import/Export

Is a tool for migrating VMware, Microsoft, XEN VMs to the Cloud. Can also be used to convert EC2 instances to VMware, Microsoft or XEN VMs.

AWS Cognito User Pool

Is a user directory in Amazon Cognito. With a user pool, users can sign in to web or mobile apps through Amazon Cognito, or federate through a third-party identity provider (IdP).

Amazon EMR

Is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.

Scaling Amazon Redshift

Is achieved by adding additional nodes. Amazon offers two types of nodes: * Dense compute nodes - Dense compute nodes are based upon high speed SSD RAID arrays. * Dense storage nodes - Dense storage nodes are based upon magnetic disk RAID arrays

Event Source

Is an AWS service or developer-created application that produces events that trigger an AWS Lambda function to run

AWS Server Migration Service (SMS)

Is an agent-less service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. * AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations

Reserved Instances

Is an instance where an organization commits to purchase a specified compute capacity for a specified period of time. * Commitment to purchase for a specified period of time - 1 or 3 years * Substantial discount for committed use * Charged by hour or second * Provides a capacity reservation when used in a specific AZ * Instance type modifications are supported for Linux only * Cannot change the instance size of Windows RIs * Billed whether running or not * Optimal when an organization knows how long the application will be used and its capacity requirements

Database Caching

Is another means to increase the scalability of a database * Caching is a service that takes frequently accessed information and places it in memory * Caching works by taking a request and temporarily storing the result of the request (a DB query) in memory * Future requests for the same information will be handled by the cache and will not be sent to the DB server * To prevent stale data, the cache will not keep requests in memory forever; it will time out data and allow it to expire so fresh data can be fetched periodically * You're using caching to reduce read activity

Local Zone

Is basically a new smaller data center that's even closer to your users

Server-Side Encryption

Is performed using S3 and KMS. Amazon S3 will automatically encrypt your data when storing it and decrypt it when you access it on S3

Latency

Is realistically speaking how long it takes for a disk access to complete. Disk activity is often measured in IOPS - the number of times the disk can be read from or written to per second

What is the The URL used to access your file?

Is really a pointer to the database where your files are stored. S3 functions a lot like a database behind the scenes, which enables you to do incredible things with data stored on S3, like SQL queries.

Elastic Fabric Adapter (EFA)

Is simply an Elastic Network Adapter (ENA) with added capabilities. Enables customers to run applications requiring high levels of inter-node communications at scale on AWS * It provides all of the functionality of an ENA, with additional OS bypass functionality. * This model allows the application (most commonly through some user-space middleware) access to the network interface without having to get the operating system involved with each message. * OS bypass capabilities of EFAs are not supported on Windows instances

What is the problem with VPN Connections?

Is that while the connection speed to the internet is guaranteed, there is no control of what happens on the internet. So, there can be substantial performance degradation based upon the availability, routing, and congestion on the internet.

Throughput

Is the amount of data that can actually be moved, measured in megabits per second.

Configuration Item (CI)

Is the configuration of a resource at a given point-in-time

Dynamic Routing with BGP

Is the routing protocol you're going to use when you connect to AWS * AWS supports connecting an organization to AWS with BGP * BGP is a highly tunable and scalable routing protocol * BGP runs on TCP port 179 * BGP enables an organization to have multiple connections to the internet or AWS * An autonomous system number is required to identify your organization * BGP is required when using a direct connection to connect to AWS * BGP has many tuning options * AWS supports the BGP community "No Export" * The AWS BGP implementation supports weight, local preference, AS path, and the specificity of routing information * AWS has a very light BGP implementation that allows for dynamically sharing only 100 routes * Address wisely so you can use summary routes

What's a simple way to encrypt data on the way to S3?

Is to use HTTPS, which uses SSL to encrypt your data on its way to the S3 bucket

Elastic Compute Cloud (EC2)

Is, for the most part, virtual machines. You're going to size your virtual machine based on CPU cores, memory, and storage requirements in terms of capacity and performance, as well as network performance - how fast the network access needs to be

AWS Direct Connect

It's going to be your highest-performance option and best for most customers. Realistically speaking, it's effectively a wire that connects the organization to the cloud. As long as the connection is available, performance is excellent.

What's the best way to determine which performance option you need for Amazon EFS?

It's to run tests alongside your application. If your application sits comfortably within the limit of 7,000 operations per second, then General Purpose will be best suited, with the added plus point of lower latency. However, if your testing confirms 7,000 operations per second may be reached or exceeded, then select Max I/O

Elastic File System (EFS)

It's network storage: whereas an EBS volume looks like a virtual hard drive to a system, this is network file storage. Since EFS is a network file system, it can be accessed simultaneously by many computing instances. * Highly scalable network file system * Very similar to Unix/Linux NFS * 2 versions of EFS - Standard: the normal, highest-performance option - Infrequent Access: lower cost for files that are NOT accessed frequently * POSIX compatible, i.e. it'll work with legacy systems * Scalable - high throughput, IOPS, and low latency * Elastic - will automatically adjust sizing to meet required capacity * Pricing - pay for what is used * Best used when a high-performance network file system is required

Resource Groups

Mappings of AWS assets defined by tags

Client-Side Encryption

Means encrypting the data files prior to sending to AWS. This means the files are already encrypted when transferred to S3 and will stay encrypted when stored on S3.

Cluster Placement Groups

Means placing an organization's servers extremely close to each other to reduce latency and optimize performance. * Instances are very close in physical proximity * Often the same rack * Often on the same physical server * This provides extremely low latency * It provides the absolute best network performance - a great option when you have an application that's extremely demanding of latency. If you're going to do this, you may want another clustered placement group in another AZ in case of some sort of failure of the physical rack

File Gateway

NFS for Linux and SMB for Windows

NAT Instance

NOT the preferred way of doing things anymore * A NAT Instance is a custom AWS instance that translates private IP addresses to a public IP address * It's available as an AMI from AWS and runs on an EC2 instance * A NAT Instance must be in a public subnet with a route to the Internet Gateway * Allows for egress-only internet access and its return traffic - i.e. when your devices request information on the internet, the data can come back, but external devices will not be able to connect to the inside of your subnet * Does not allow an inbound connection * There must be a default route to the NAT instance, which then must have a route to the Internet Gateway

Can computing systems boot from object-based storage or block-based storage?

No

Classic Load Balancer

Not recommended to be used anymore * The AWS Classic Load Balancers can be either network or application based * Legacy platform that can work with both EC2-Classic and VPCs * Autoscaling capabilities * Like the modern ALB and NLB, can support single or multiple AZs * Can terminate SSL connections * Provides logs to analyze traffic flows * Can be used with CloudTrail for auditing

Files stored in S3 are called what?

Objects. Every object stored in an S3 bucket has a unique identifier, also known as a key. Single files can be as small as zero bytes, all the way up to 5 TB per file. This provides ultimate flexibility. Objects in S3 can have metadata (information about the data), which can make S3 extremely flexible and can assist with searching for data.

Operational Expenses (OPEX)

Organizations with a big data center need a large IT team to manage it. A big data center also means big bills will be accumulating. OpEx is increased in a cloud computing environment.

When creating the bucket, to achieve the highest performance, where should you place the bucket?

Place the bucket in the region that is closest to your users.

It's easier to design for HA with cloud computing because of these factors:

Pretty much all of these are maintained by AWS * AWS maintains redundant power * AWS maintains redundant cooling * AWS has redundant connections to the internet and across their backbone * AWS has redundant routers and switches

What do Instance status checks (StatusCheckFailed_Instance) detect?

Problems that require your involvement to repair

What do System status checks (StatusCheckFailed_System) detect?

Problems with your instance that require AWS involvement to repair

Encrypting Your Data

Protects sensitive data and makes the data unusable without the encryption key to decrypt the data. Ideally, you encrypt your data on the way to S3, as well as when the data is stored (or resting) on S3

AWS Cognito Identity Pool

Provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identity data specific to your account.

VPN Connection

Provides a means to "tunnel" traffic over the internet in a secure manner. Encryption is provided by IPsec, which provides encryption (privacy), authentication (identifying the user), data authenticity (meaning the data has not been changed), and non-repudiation (meaning the user can't say they didn't send the message after the fact)

Lifecycle Management

Provides an effective means to automatically transition data to the best S3 tier for an organization's storage needs.

What does a Hybrid Cloud provide?

Provides an opportunity for the organization to leverage its investment in its current technology while moving to the cloud.

Elastic Network Adapter (ENA)

Provides enhanced networking capabilities on Linux EC2 instances. Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies

Instance Store Volumes

Provides temporary (non-persistent) block-level storage for your instance. * If the instance reboots (either intentionally or unintentionally) the data persists, but it is lost when the instance is stopped or terminated * Instance store storage is located on disks that are physically attached to the host computer. * Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. * You can specify instance store volumes for an instance only when you launch it. * You can't detach an instance store volume from one instance and attach it to a different instance. * Not available for all instances * Capacity for instance store volumes increases with the size of the EC2 instance

Amazon Machine Image (AMI)

Provides the information required to launch an instance * AMIs are regional. You can only launch an AMI from the region in which it is stored. However, you can copy AMIs to other regions using the console, command line, or the API

EC2 Compute Units (ECU)

Provides the relative measure of the integer processing power of an Amazon EC2 instance.

SNS consists of 2 components:

Publishers and subscribers * Publishers communicate by sending messages to a topic * A subscriber subscribes to the topic and then receives messages
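The publish/subscribe pattern behind SNS can be sketched in a few lines of plain Python. The `Topic` class and its methods below are illustrative only, not the boto3 API: publishers send to a topic, and the topic pushes a copy of each message to every subscriber.

```python
# Minimal sketch of the publish/subscribe pattern SNS implements.
# Names (Topic, subscribe, publish) are illustrative, not the real AWS API.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        # In SNS, a subscriber might be an SQS queue, Lambda, email, etc.
        self.subscribers.append(callback)

    def publish(self, message):
        # The topic pushes each published message to every subscriber
        for deliver in self.subscribers:
            deliver(message)

received = []
orders = Topic("order-events")
orders.subscribe(received.append)                       # e.g. an SQS queue
orders.subscribe(lambda m: received.append(m.upper()))  # e.g. a Lambda function
orders.publish("order placed")
# received now holds one copy of the message per subscriber
```

Note the fan-out: one `publish` call produced two deliveries, which is exactly how a single SNS topic can drive multiple downstream consumers.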

What are VPN-only connections ideal for?

Remote workers and small branches of a few workers, where if they lose connectivity, there will not be significant costs to the organization.

Config Rules

Represents desired configurations for a resource and is evaluated against configuration changes on the relevant resources, as recorded by AWS Config. AWS Config Rules can check resources for certain desired conditions and if violations are found the resources are flagged as "noncompliant".

How can you make S3 feel like the most organized structure for end users?

S3 allows the user to specify prefix and delimiter parameters so the user can organize their data in what feels like a folder structure. Essentially, the user could use a slash (/) or backslash (\) as a delimiter. * Ex: mike/2020/awsvideos/storage/S3.mp4, mike/2020/awsvideos/compute/ec2.mp4
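The folder illusion can be shown with a short sketch. The function below is a simplified, hypothetical stand-in for an S3 `ListObjectsV2` call: keys are flat strings, and the delimiter just groups everything up to the first delimiter after the prefix into a "common prefix" (the folder).

```python
# Sketch of how S3's prefix + delimiter parameters group flat keys into
# folder-like listings. list_keys is illustrative, not the boto3 API.

def list_keys(keys, prefix="", delimiter="/"):
    """Return (objects, common_prefixes), mimicking a ListObjectsV2 response."""
    objects, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter becomes a CommonPrefix ("folder")
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(common)

keys = [
    "mike/2020/awsvideos/storage/S3.mp4",
    "mike/2020/awsvideos/compute/ec2.mp4",
]
print(list_keys(keys, prefix="mike/2020/awsvideos/"))
# → ([], ['mike/2020/awsvideos/compute/', 'mike/2020/awsvideos/storage/'])
```

Listing with the prefix `mike/2020/awsvideos/` surfaces two "subfolders" even though S3 itself stores nothing but flat keys.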

CloudFront integrates with numerous AWS services

S3, EC2, Elastic Load Balancers and Route 53

EC2 has 3 RI types:

Standard, Convertible, and Scheduled

NoSQL Database

Stands for "Not only SQL" * Provides enhanced flexibility in the database schema structure * Can handle structured and non-structured data * Since the structure is more flexible, a NoSQL database can scale larger than a traditional relational database * Follows the BASE model, i.e. Basically Available, Soft state, Eventually consistent. Meaning that if you make a write to the database, someone reading from it immediately may not get the newest information; it may take a few seconds for everything to become consistent * NoSQL databases have a very flexible schema * They are not bound to the same table-and-column schema as SQL databases * They can work with less structured data than a relational database * More scalable than relational databases * AWS's managed NoSQL database is DynamoDB

EBS Volume Types SSD

Suited for scenarios with smaller blocks, i.e. databases with transactional workloads * Often used as boot volumes for EC2 instances

EBS Volume Types HDD

Suited for workloads that require higher throughput/large blocks of data i.e. processing big data, logging information

Tenancy Options

Tenancy is where the instances are located. It can make a substantial impact on performance and availability. * Shared tenancy * Dedicated instances * Dedicated hosts * Placement groups

What's the key thing about High Availability?

The highest availability architectures will include at least two direct connections to the cloud. Ideally, each connection uses a separate service provider and a dedicated router, with each router connected to a different power source. Realistically, most organizations achieve high availability by backing up their Direct Connect connections with a VPN connection.

Storage Gateway Cached Volume

The organization's data is stored on S3. The cache volume maintains frequently accessed data locally on the volume gateway. This provides low-latency file access for frequently accessed files to the on-premises systems. * iSCSI protocol

VPC Endpoints

The way to connect your organization's VPC to AWS services without traversing the internet is with a VPC endpoint. VPC endpoints are virtual devices used for routing within the AWS network. * Endpoints are highly available, highly redundant, and scalable

What's the difference between storage and databases?

Think about a closet as an example of storage: you throw EVERYTHING in a closet. Think about a database as a structured pantry, i.e. one shelf is all spices, one shelf is all sweeteners, etc. Databases store SPECIFIC things and are queryable (searchable)

ACM Private CA

This is for communication within the organization and gives the organization the ability to create a certificate hierarchy * It can issue certificates for users, computers, applications, services, servers, and other devices throughout the organization * Private CA certificates cannot be used on the internet * Private certificates are available at an additional cost

Why is a VPN connection typically cheaper?

This is typically cheaper because it's not a private connection between the data center and the cloud; all you're buying is a connection to the internet.

Multi-site

This option keeps a copy of your production environment live at all times, so you can failover to it very quickly, but it is the most expensive DR choice of the four listed

Warm Standby

This option keeps a scaled-down version of your complete environment on standby, so it has a lower RTO than Pilot Light, but is more expensive to maintain

The key to all high-availability designs is to?

To avoid any single point of failure

SQS is an ideal means to do what?

To increase database performance by buffering and smoothing write activity. The number of writes going to the database stays the same, but they are spread out over time, which smooths disk and CPU load and increases scalability.
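The buffering idea can be sketched with an in-memory queue. This is a pure-Python illustration of the pattern SQS enables, not the SQS API itself: producers enqueue a burst of writes instantly, and a consumer drains them in steady batches so the database never sees the spike.

```python
# Sketch of using a queue to smooth bursty writes (the decoupling SQS provides).
from collections import deque

class WriteBuffer:
    def __init__(self):
        self.queue = deque()   # stands in for the SQS queue
        self.applied = []      # stands in for writes applied to the database

    def enqueue(self, write):
        # Producers: fast and bursty; they never touch the database directly
        self.queue.append(write)

    def drain(self, batch_size):
        # Consumer: pulls a steady batch per tick and applies it to the database
        for _ in range(min(batch_size, len(self.queue))):
            self.applied.append(self.queue.popleft())

buf = WriteBuffer()
for i in range(100):       # a burst of 100 writes arrives at once
    buf.enqueue(i)
while buf.queue:           # the database processes only 10 per "tick"
    buf.drain(10)
# Same 100 writes reach the database, just spread over 10 ticks
```

The total work is unchanged; only its distribution over time differs, which is why queueing improves perceived database performance.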

User Data and Instance metadata

User Data is data that is supplied by the user at instance launch in the form of a script. Instance metadata is data about your instance that you can use to configure or manage the running instance. * User Data is limited to 16 KB * Neither is encrypted

AWS divides the IAM concept into what?

Users and Roles * An IAM user is a person accessing the AWS cloud * An IAM role is used by an AWS service to access another service i.e. EC2 accessing a DynamoDB

3 types of storage gateways are available for different applications:

File gateways, volume gateways (both cached volumes and stored volumes), and tape gateways

What is a major drawback to Cluster Placement Groups?

When it comes to availability, since everything is close together, often in the same server, rack, switch, and power source there are many single points of failure when compared with other architectures. This architecture is perfect for applications that are not tolerant of latency

Requester Pays

When this feature is configured any costs associated with requests and data transfer becomes the responsibility of the requester instead of the bucket owner. The bucket owner will still, however, pay for the storage costs associated with the objects stored in the bucket.

Backup and Restore

With backup and restore, data is stored as a virtual tape library using AWS Storage Gateway or another network appliance of a similar nature * This option has the highest RTO, but generally the lowest maintenance cost

EBS Provisioned IOPS (io1)

With this option you can provision ahead of time the number of input/output operations per second (IOPS) you'll receive, guaranteeing that level of performance and therefore lower latency * Highest-performance SSD storage * Lowest latency * Delivers enhanced, predictable performance for I/O-intensive workloads * Perfect for large databases, which are very latency sensitive * Perfect for apps requiring low latency * Up to 1,000 MiB/s throughput per volume * Use when you need more than 16,000 IOPS (the gp2 ceiling) * Up to 64,000 IOPS per volume * Up to 50 IOPS per provisioned GiB * Amazon EBS delivers the provisioned IOPS performance 99.9% of the time.
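The two io1 limits above (50 IOPS per GiB, 64,000 IOPS per volume) combine into a simple formula, sketched here; the helper name is illustrative.

```python
# Worked example of the io1 limits listed above:
# 50 IOPS per provisioned GiB, capped at 64,000 IOPS per volume.

def max_io1_iops(size_gib):
    return min(50 * size_gib, 64_000)

print(max_io1_iops(100))   # 100 GiB   -> 5,000 IOPS
print(max_io1_iops(2000))  # 2,000 GiB -> 64,000 IOPS (the per-volume cap)
```

Note the cap is reached at 64,000 / 50 = 1,280 GiB; provisioning a larger volume buys capacity, not additional IOPS.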

SWF manages workflows using what?

Workers that carry out the steps. These workers are programmed to perform, process, and confirm the completion of each step. Workers can be deployed using EC2, Lambda, or on a local system.

Can DynamoDB be set up for on-demand capacity?

Yes. However, on-demand capacity is more expensive than provisioned capacity. Therefore, for optimal pricing with DynamoDB, it's best to know the read and write capacity when the database is provisioned.

Amazon Redshift Spectrum

You can efficiently query and retrieve structured and semi-structured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to execute very fast against large datasets

Pre-Signed URLs

You can generate a pre-signed URL to share an S3-based object with others. When you generate the pre-signed URL, the owner's encryption key is used to sign the URL, which allows the receiver temporary access to the object. * All objects in S3 are private by default (Meaning, only the owner has access to the objects in their bucket) * You need to enable access * Simplest solution is a pre-signed URL * Create and sign URL with account owner's encryption key * Provides temporary access and secure access to the desired content
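The signing idea can be illustrated with stdlib HMAC. This is a deliberately simplified sketch of the concept, not S3's real Signature Version 4 scheme; the secret, helper names, and URL layout are all hypothetical. The owner's secret signs the object key plus an expiry time, and the service later recomputes the signature to grant or refuse access.

```python
# Simplified illustration of signature-based temporary access, the idea
# behind pre-signed URLs. NOT the real S3 SigV4 algorithm.
import hmac, hashlib, time

SECRET = b"owner-secret-key"  # hypothetical owner key

def presign(key, expires_at):
    payload = f"{key}:{expires_at}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # Hypothetical URL layout echoing the query parameters S3 uses
    return f"https://example-bucket.s3.amazonaws.com/{key}?Expires={expires_at}&Signature={sig}"

def verify(key, expires_at, sig, now):
    payload = f"{key}:{expires_at}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # Valid only if the signature matches AND the URL has not expired
    return hmac.compare_digest(sig, expected) and now < expires_at

exp = int(time.time()) + 3600            # valid for one hour
url = presign("report.pdf", exp)
sig = url.split("Signature=")[1]
print(verify("report.pdf", exp, sig, now=int(time.time())))  # True
print(verify("report.pdf", exp, sig, now=exp + 1))           # False (expired)
```

Because only the owner holds the secret, nobody can forge a valid URL or extend its expiry, which is what makes the access both temporary and secure.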

NAT Gateway

You can reach out to the internet and your return traffic is allowed, for say patching of an OS * A NAT Gateway is a fully managed NAT service * Provides egress-only internet connectivity, similar to a NAT instance. This means that inbound connections from the internet will be refused, but internal hosts will be able to connect to the internet. * NAT Gateways are redundant inside of an AZ * Must be created in a public subnet * Uses an Elastic IP for the life of the Gateway * There must be a default route to the NAT Gateway * For a high availability architecture each AZ should have its own NAT Gateway

How can you enable having a private connection over the public internet?

You can use your private IP addresses over the VPN tunnel; the VPN encrypts the traffic as it traverses the public internet.

