AWS Solutions Architect - Orion Paper Notes

authoritative server

A DNS server that holds a complete copy of a zone's resource records (typically a primary or secondary zone). Delegation hands ownership of part of the hierarchy to an organization, so the organization running the name server becomes authoritative for it.

ECS

*Elastic Container Service* lets you host Docker containers with less overhead than EC2. Involves scheduling and orchestration, a cluster manager, and a placement engine.
- Fargate mode: completely managed by AWS
- which role provides ECS containers permission to access other AWS services? the task role
- a highly scalable, high-performance container orchestration service that supports Docker containers and lets you easily run and scale containerized applications on AWS
- eliminates the need to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines
- page 111 of the orion papers can help

DNS health checks

- 3 types: monitor an endpoint, monitor the health of another health check, monitor a cloudwatch alarm
- the global health check system checks an endpoint in an agreed-upon way and frequency, e.g. if greater than 18% of checks report healthy -> healthy
- can do HTTP/HTTPS and TCP checks within 10 seconds
- records can be linked to health checks, so if the check is unhealthy the record is not used
- can be used for failover and other routing architectures too
- cost: roughly 50-75 cents per health check. checking more frequently than every 30 seconds costs more, as do extras like string matching or latency details
- health checks cannot be done for UDP ... only TCP and HTTP(S)

IP basics

- 8 bits = 1 byte; an IP = 32 bits
- has a network part and a node (host) part - network bits = 1s, node bits = 0s in the mask
- range: 0.0.0.0 to 255.255.255.255
- /16 -> first two octets = network, last two = node; /24 -> first three = network, last one = node
- the larger the suffix number (e.g. /24 vs /16), the smaller the subnet
- 10.0.0.0/16 splits into 10.0.0.0/17 and 10.0.128.0/17; 10.0.0.0/17 splits into 10.0.0.0/18 and 10.0.64.0/18
- CIDR = classless inter-domain routing, which allows for effective allocation and subnetting
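A quick way to sanity-check the subnetting math above is Python's standard ipaddress module; a minimal sketch using the CIDR blocks from these notes:

```python
import ipaddress

# A /16 VPC CIDR: 16 network bits, 16 host bits (65,536 addresses).
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)                  # 65536

# Splitting into /17s: the larger the suffix, the smaller each subnet.
print(list(vpc.subnets(new_prefix=17)))   # [10.0.0.0/17, 10.0.128.0/17]

half = ipaddress.ip_network("10.0.0.0/17")
print(list(half.subnets(new_prefix=18)))  # [10.0.0.0/18, 10.0.64.0/18]
```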

RDS - Relational Database Service

- DBaaS - a relational database without the admin overhead of traditional DB platforms
- can perform at scale, be made publicly accessible, and be configured for demanding availability and durability scenarios - fully managed by AWS
- runs in a VPC AZ. has a primary and a standby; the primary has a CNAME; the standby replicates synchronously from the primary and backs up to S3. can also have a read replica in a different region
- the standby instance is not read from or interacted with - it truly is just the standby
- automated backups (taken from the standby) go to S3 daily and can be retained for 0-35 days. manual snapshots exist until deleted and can create a new RDS instance with a new endpoint address (requires app or DNS changes)
- log backups go to S3 every 5 minutes, allowing point-in-time recovery
- a snapshot can be copied to another region, then used to create a new instance in that region
- no matter how you restore an instance, a new DB is created, meaning a new CNAME
- supports: mysql, mariadb, postgresql, oracle, microsoft sql server, and aurora (an in-house engine with feature and performance enhancements)
- can be single-AZ or multi-AZ for resilience (multi-AZ puts primary and standby in different AZs of the same region). no performance benefit from multi-AZ, but better RTO than restoring a snapshot
- maintenance is performed on the standby first, which is then promoted to minimize downtime. backups are taken from the standby, ensuring no performance impact. primary and standby each have their own storage
- read replicas are read-only copies that can be in the same or a different region as the primary. addressable independently (own DNS name) and used for read workloads, letting you scale reads. up to 5 read replicas per RDS instance. can create a read replica of a read replica, can promote a read replica to primary, and replicas can themselves be multi-AZ. reads from a replica are eventually consistent (normally within seconds)
- instance classes: general purpose, memory optimized, or burstable
- storage: general purpose (3 IOPS per GB, burst to 3,000 IOPS) OR provisioned (1,000 to 80,000 IOPS depending on engine and size)
- billing: instance size, provisioned storage (not usage), IOPS if using io1, data transferred out, and any backups/snapshots beyond the 100% of DB size that is free with each DB
- the DB endpoint is a CNAME, NOT an IP
- scalable, low admin overhead, publicly and privately accessible
- encryption: can be configured at creation; can take a snapshot, encrypt it, and create a new encrypted instance from it; encryption cannot be removed; read replicas must match the primary's encryption state
- network access is restricted by SGs

S3 fundamentals

- bucket names must be globally unique, 3-63 characters, no uppercase or underscores, must start with a lowercase letter or number, and cannot be formatted as an IP
- billing: you are billed for storage, but the cost is very minimal
- can be used for static html files or to host static content, e.g. front-end code for a serverless architecture. cloudfront can be added for speed and efficiency for global users as well as SSL
- unlimited objects per bucket, unlimited total bucket capacity
- object's key = name; object's value = data; object size must be 0 bytes to 5 TB
- by default, an S3 bucket trusts the account that created it; by default there is NO public access to a bucket
- identity policies can be created for users in the same account so they can interact with the bucket -- can be applied to users, groups, and roles (identity policies are not for anonymous access OR other accounts). use when granting to one identity: "what should this user be able to do?"
- a resource policy on a bucket or a specific object (aka bucket policy) applies to anyone accessing it - anonymous, public, or other accounts. "who should be able to access this bucket?" (sketch below)
- ACLs attach to objects and buckets and are the old way of doing this. the default rule is to trust the account that created the bucket. can allow public access this way, but not recommended for use by AWS
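A resource (bucket) policy sketch with boto3, assuming a hypothetical bucket called example-bucket; this is the "who should be able to access this bucket?" side, here granting anonymous read:

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket policy: allow public (anonymous) read of all objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",                             # anonymous/public access
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # every object key
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```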

presigned urls ~s3

- can be created by an identity in AWS, providing access to an object using the creator's access permissions ... when used, access is verified
- the URL is encoded with authentication built in and has an expiration time
- can be used to upload or download objects; any identity can create a presigned URL
- a temporary URL that lets users access specific S3 objects using the creator's credentials
- e.g. good for letting a client upload an image to an S3 bucket for processing; or a stock-images site where you pay and then get a link to use/view that image
- if there's an error, it could be: the URL expired (7-day max), the creator's permissions changed, or the URL was created using a role (36-hr max) and the temporary credentials have expired
- note: for that reason, you do not want to create presigned URLs using roles
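A minimal boto3 sketch, assuming a hypothetical bucket and key; the URL works with the creator's permissions until ExpiresIn elapses:

```python
import boto3

s3 = boto3.client("s3")

# Presigned GET: anyone holding this URL can download the object
# (with the creator's permissions) until it expires.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "photo.jpg"},
    ExpiresIn=3600,  # seconds; well under the 7-day maximum
)
print(url)
```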

custom vpc

- can be designed and configured in any valid way
- need to allocate IP ranges, create subnets, and provision gateways and networking, plus design and implement security
- good when multiple tiers or a more complex set of networking is needed
- best practice is not to use the default VPC for most production things
- if you want a public DNS name, you need to add it; a public IP is provided automatically

vpc endpoints

- can be used to connect to AWS public services without the VPC needing an attached internet gateway or being public
- endpoints are gateway endpoints (dynamo, s3) or interface endpoints (sns, sqs)
- use when the entire VPC is private with no IGW, when a resource has no public IP/NAT gateway but needs access to public resources, or to access resources restricted to specific VPCs or endpoints (like a private s3 bucket)
- gateway: used via route tables (a prefix list in the route table), HA across AZs in a region, controlled via policies NOT security groups; uses CIDR blocks, NOT DNS names
- interface: for HA, need one per AZ; controlled by SGs, and NACLs also impact traffic; they add or replace DNS for the service (no route table updates required)

NAT

- create a NAT gateway in each availability zone; must be in a public subnet
- provides outbound-only access from private to public
- static NAT: one private IP mapped to one public IP - this is what the internet gateway does. if something in a public subnet wants to talk to the internet, traffic goes to the IGW, which translates the private IP to a public IP on the way out and back again on the way in
- dynamic NAT = NAT gateway = a range of private IPs mapped to one public IP. sits in a public subnet; it takes the source IP of the private instance and substitutes its own private IP (backed by an elastic/static IP), then its own public IP
- note: the old version of this was the NAT instance. the only reason to use one over a NAT gateway is cost; otherwise use a NAT gateway plus a bastion host. if a NAT instance has issues -> check that its src/dest check is disabled
- note: NAT gateways scale to handle the load
- note: for high availability, you need one NAT gateway per availability zone
- note: default bandwidth 5 Gbps, scaling up to 45 Gbps -> if you need more, add more NAT gateways manually
- example: a home router ... no matter how many devices you connect, they all appear to come from the same public IP
- link comparing NAT gateway to NAT instance: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

vpc routing

- highly available, scalable; controls data entering and leaving the VPC and its subnets
- each VPC has a main route table, allocated to all subnets in the VPC by default; a subnet can have only one route table
- the route table controls what the VPC router does with traffic leaving the subnet
- 1 internet gateway per VPC for all public IPs to and from the internet. internet gateways provide both inbound and outbound access for public subnets
- every route table has a local route matching the CIDR of the VPC, letting traffic be routed between subnets
- a route has a destination and a target. the most specific route is used -> so /24 before /16 (see the sketch below)
- targets can be IPs or AWS networking gateways/objects
- a subnet is public if (a) public IPs are allocated, (b) the VPC has an internet gateway, and (c) the subnet has a default route to the internet gateway
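A conceptual sketch (not an AWS API) of "most specific route wins", using a toy route table with a local route and a default route to a hypothetical IGW:

```python
import ipaddress

# Toy route table: destination CIDR -> target.
routes = {
    ipaddress.ip_network("10.0.0.0/16"): "local",        # the VPC CIDR
    ipaddress.ip_network("0.0.0.0/0"): "igw-example",    # default route
}

def route_for(dest_ip: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [net for net in routes if dest in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(route_for("10.0.1.5"))  # local: /16 beats /0 for in-VPC traffic
print(route_for("8.8.8.8"))   # igw-example: only the default route matches
```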

NACL - private networking

- if communication stays inside a subnet -> no NACL is applied
- a different NACL per subnet: two subnets -> two NACL associations (either the VPC default or a custom NACL)
- operate at layer 4 (tcp/udp and below); only impact traffic crossing the boundary of a subnet
- can explicitly allow or deny based on protocol, port range, and source/destination
- rules are processed in number order, lowest first. when a match is found, that action is taken and processing stops (see the sketch below)
- "*" is processed last and is a catch-all deny for anything no numbered rule matched
- NACLs have two sets of rules (inbound and outbound)
- NACLs cannot reference a logical resource (note: security groups can)
- NACLs are stateless, so consider both initiating and response traffic ... the initiating side targets a well-known port, the response comes back to an ephemeral port
- default VPC: the default NACL has rule 100 allowing all traffic before the "*" deny, so it effectively does nothing. a custom NACL starts with only the "*" deny (NO allow rules)
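A conceptual sketch (again not an AWS API) of NACL rule evaluation: lowest rule number first, first match wins, and the trailing "*" denies whatever nothing matched:

```python
# (rule number, protocol, port, action) - rule numbers decide evaluation order.
rules = [
    (100, "tcp", 443, "ALLOW"),
    (200, "tcp", 22, "DENY"),
]

def evaluate(protocol: str, port: int) -> str:
    for _, proto, rule_port, action in sorted(rules):
        if proto == protocol and rule_port == port:
            return action          # first match wins, processing stops
    return "DENY"                  # the "*" catch-all rule

print(evaluate("tcp", 443))  # ALLOW - rule 100 matches first
print(evaluate("tcp", 80))   # DENY  - falls through to "*"
```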

reserved vs spot vs on demand

- each instance size/type has a spot price. bid more and the instance is provisioned for the spot price; bid less = termination
- spot fleets are containers for spot instances, allowing capacity to be managed
- reservations are zonal (AZ) or regional; zonal reserved instances include a capacity reservation
- for a reservation, you pay regardless of what you use
- regional is more flexible: buy an m5 large and split it into 2 m5 smalls, or buy a medium, use a large, and still get a discount ... you can't do that with a zonal reservation
- reserved -> baseline/consistent load, known and understood growth, critical systems/components
- spot instance/fleet -> bursty, cost-critical workloads that can cope with interruption. good when state is stored elsewhere (e.g. web servers). cheap EC2 for workloads that aren't time sensitive; great for coping with bursts and for massive parallel computations
- on demand -> default or unknown demand, anything in between reserved/spot, short-term workloads that cannot tolerate interruption

aurora architecture

- compatible with mysql, postgresql, and associated tools
- base configuration is a cluster containing a single primary instance and zero or more read replicas
- up to 15 read replicas, all reading from the same data source. you can choose their AZs and use them for failover
- all instances (primary and replicas) use the same shared storage - the cluster volume. cluster volumes are SSD based and scale up to 64 TiB
- data is replicated 6 times across three AZs. aurora tolerates two failures without impacting writes and three without impacting reads. aurora storage is self-healing
- the cluster volume scales automatically, is billed for consumed data only, and is constantly backed up to S3
- aurora replicas improve availability, can be promoted to primary quickly, and allow efficient read scaling
- cluster endpoint: can read and write. reader endpoint: used for reads, balancing connections over all replica instances
- to improve resilience, add replicas. to scale writes, scale up the instance size; to scale reads, scale out (more replicas)
- shared storage, addressable replicas, parallel queries
- can take DB snapshots to S3 and restore from them. automated backups can be retained 0-35 days (same as RDS)
- BUT aurora also has 'backtrack', which can revert the DB up to 72 hours to a point in time. it causes an outage, but you don't need to update the DB configuration or create a new instance. aurora also fails over much faster than other RDS options
- aurora parallel query: queries run on all nodes at once. great for large queries and massive datasets, with big performance benefits. must be configured when the cluster is first created
- aurora global DB: must also be added at cluster creation. 1 primary AWS region (master) and 1 read-only region; write operations go to the primary. use when you need global resilience or a separate region for low latency. more expensive, with limited instance size options
- aurora serverless: the same DB engine, but instead of provisioning a fixed resource allocation, capacity is handled as a service. you specify min and max ACUs (aurora capacity units), and aurora serverless can use the Data API. it's an on-demand auto-scaling service (can scale in capacity) with no disruption, thanks to a proxy fleet. least overhead -> no instances to worry about. in the backend AWS keeps instances hot, so even after scaling to zero it can quickly start/allocate one
- cons of serverless: slower failover; less highly available since it runs in only 1 AZ; cannot access a cluster via a VPN or an inter-region VPC. upside: can scale back and pay nothing for a period
- serverless use cases: unknown usage; light usage with occasional high-usage peaks -> define min/max; development/testing during work hours only. can use the query editor via an API (no need for a DB engine with a SQL editor)

dynamo DB

- NoSQL DB service. no schema required. full integration with cloudwatch
- low cost. fully managed by AWS. provides consistent single-digit-millisecond latency at any scale. supports both document and key-value store models
- when to use: unstructured data like keys/values or JSON. it's DB-as-a-service, serverless from the customer's perspective - you are just making tables. web scale, ID federation -> choose dynamodb
- global service, partitioned regionally; allows the creation of tables (tables must be created)
- great for: mobile, web, gaming, ad tech, IoT
- table: collection of items that share the same partition key aka hash key (PK), or partition key plus sort key aka range key (SK), together with other config and performance settings
- item: collection of attributes (up to 400 KB in size) inside a table, sharing the same key structure as every other item in the table. this is like a row. must be uniquely identified by its primary key (partition key)
- attribute: a key and a value (name and value)
- highly resilient; replicates data across multiple AZs in a region. an HTTP 200 response -> the write is complete and durable (which doesn't mean it has been written to all AZs yet)
- eventually consistent is the default for read operations
- strongly consistent reads return the most up-to-date copy of the data from the leader node (a little slower, but sometimes required for consistency). eventually consistent reads are faster
- table names must be unique within the region (dynamodb is regional)
- creating a table gives you replicas in 3 AZs by default. highly resilient
- you cannot apply resource policies directly to a table. use IAM policies or federation so identities can assume a role
- point-in-time recovery, if configured, goes up to 35 days back on a per-table basis and does NOT create a new table. manual backups are also possible; restoring via 'on demand' or a manual backup creates a new table name but carries over the configuration, and the encryption default is carried over
- global tables: enable streams and add the regions to replicate to; all replicas are read/write. the table must be empty when 'global tables' is enabled
- in the console, a 'scan' operation filters on a specific attribute (example: where rain is 3), but it consumes the read capacity of the entire table - not efficient for larger tables. a 'query' is more efficient, doing lookups without reading everything (based on a single partition key and one or more sort keys). query = preferred always
- two read/write capacity modes: provisioned throughput (the default) and on-demand. on-demand automatically scales to handle performance demands and bills per request; good for new applications where usage is unknown, or a pay-per-use model. provisioned mode configures each table with read capacity units (RCU) and write capacity units (WCU) and is normally cheaper
- there is also autoscaling: set a min and max, and dynamodb adjusts provisioned capacity automatically
- every operation consumes whole RCUs/WCUs
- RCU: one RCU = one read of up to 4 KB per second, strongly consistent. eventually consistent reads get 8 KB per RCU
- WCU: one WCU = one write of up to 1 KB per second (writes are always consistent - there is no eventually consistent discount for writes) (worked example below)
- streams, when enabled, provide an ordered list of changes to items in a table. a rolling 24-hr window of changes. enabled per table, containing data only from the point of being enabled (not retroactive). every stream has an ARN that identifies it. configured with one view: keys only (the key of any item added, updated, or deleted), new image (the entire item), old image, or new and old images
- streams are durable, scalable, and reliable, so highly available. dynamodb is highly resilient; lambda is highly scalable
- enable CRR (global tables) for live data migration, easier traffic management, and disaster recovery
- streams integrate with lambda, invoking a function when an item changes (streams are good for triggers). e.g. an event-driven pipeline that sends an email when something like a customer order is updated, or writes data to another table
- indexes are local secondary indexes (LSI) or global secondary indexes (GSI). indexes let you define alternative partition and/or sort keys, allowing Query rather than Scan operations - cheaper. additionally, you choose which attributes are projected into the index, so you read less data for each item retrieved
- LSI: must be created at the same time as the table. same partition key, different sort key. shares the table's RCU and WCU. up to 5 per table. can be strongly or eventually consistent
- GSI: can be created any time after the table. different partition and sort keys. has its own RCU and WCU, which can be auto-scaled. updated asynchronously, so always eventually consistent. default limit of 20, which can be raised with a support ticket
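A worked-example sketch of the capacity math above (4 KB per RCU strongly consistent, halved cost when eventually consistent; 1 KB per WCU), with hypothetical item sizes:

```python
import math

def rcus(item_kb: float, reads_per_sec: int, eventually_consistent: bool = False) -> int:
    per_read = math.ceil(item_kb / 4)           # whole 4 KB units per read
    total = per_read * reads_per_sec
    return math.ceil(total / 2) if eventually_consistent else total

def wcus(item_kb: float, writes_per_sec: int) -> int:
    return math.ceil(item_kb) * writes_per_sec  # whole 1 KB units per write

print(rcus(10, 5))        # 15: ceil(10/4) = 3 RCUs per read * 5 reads/sec
print(rcus(10, 5, True))  # 8: eventually consistent halves the cost
print(wcus(2.5, 4))       # 12: ceil(2.5) = 3 WCUs per write * 4 writes/sec
```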

lifecycle rules and intelligent tiering

- lifecycle rules: automated transition of objects from one storage class to another. rules are added at the bucket level and only move objects down the class hierarchy. can apply to the whole bucket or filter by name/tag, and to the current version of an object or to previous versions. example: permanently delete objects/versions after x amount of time (see the sketch below)
- note: lifecycle transitions have minimums - an object must typically sit in its original storage class for 30 days before transitioning, and a 128 KB minimum applies for the infrequent-access classes
- intelligent tiering: moves objects between tiers automatically - not accessed for 30 days -> moved to the lower tier, moved back up when accessed again. objects smaller than 128 KB are not auto-tiered. objects can also be set to be deleted after x amount of time. there is a monitoring fee, but it eliminates the retrieval fee
- note: intelligent tiering is good for data with unknown or changing access patterns, or when you don't want the admin overhead - no retrieval costs for objects, no admin overhead, and it automatically moves objects to the best storage class based on access patterns
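A lifecycle-rule sketch with boto3, assuming a hypothetical bucket and a logs/ prefix; it encodes the "only moves down" transitions plus an expiration:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical rule: logs/ objects move to STANDARD_IA after 30 days,
# GLACIER after 90, and are permanently deleted after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]
    },
)
```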

bastion host (aka jumpbox)

- must be deployed in a public subnet to give secure access to private resources (instances in a private subnet can't be reached from the internet without it)
- provides secure incoming access; a host that sits at the perimeter of the VPC
- functions as the entry point to the VPC for trusted admins
- allows updates and configuration to be done remotely while the VPC stays private and protected
- generally accessed with ssh (linux) or rdp (windows)
- must be kept updated, security hardened, and audited regularly
- harden with multifactor auth, ID federation, and IP blocks

NAT Instance

- must be provisioned in a public subnet and referenced as a target in the private subnet's route table

IPv6

- must opt in
- DNS names are not allocated to IPv6 addresses
- IPv6 addresses are all publicly routable
- the OS is configured with its public address via DHCPv6
- elastic IPs aren't relevant with IPv6
- not currently supported for VPNs, customer gateways, or VPC endpoints
- egress-only internet gateways provide IPv6 outgoing access to the public internet while preventing access from the internet ... NAT isn't required with IPv6, so NAT gateways aren't compatible either

private vs public zones (~ DNS)

- private zones are only accessible from the VPCs they are associated with
- public: created when a domain is registered with route53, transferred to route53, or created manually. the hosted zone gets name servers, and route53 becomes authoritative for the domain. example of a public zone: associatecat.com (www is NOT included in the naming format of a public zone)
- private: created manually and associated with one or more VPCs. requires enableDnsHostnames and enableDnsSupport enabled on the VPC. not all route53 health checks are supported. can do split-view DNS, where the private zone is preferred and public is used if no private record matches
- route53 is both a registrar and a DNS provider

Region Default VPC

- required for some services, the default for most
- preconfigured with all required networking/security
- configured with a /16 CIDR block (172.31.0.0/16)
- a /20 public subnet in each AZ, allocating public IPs by default
- 1 attached internet gateway, with the main route table sending traffic to it (1 IGW per VPC)
- default DHCP option set attached
- SG default: all traffic from itself, all outbound
- NACL default: allow all inbound and outbound
- when a default VPC is created, it also forms a public subnet inside every AZ in that region
- the default VPC has public DNS names created

s3 storage classes (aka storage tiers) ... these influence cost, availability, durability, and latency. can be changed manually or with lifecycle policies

- standard: default, all-purpose, good when usage is unknown. eleven 9s (99.999999999%) durability and four 9s availability; replicated across 3 AZs; no minimum object size or retrieval fee. general purpose, frequently accessed data
- standard infrequent access (standard-IA): objects needing real-time access but accessed infrequently. 99.9% availability, 3 AZs, cheaper than standard; 30-day and 128 KB minimum charges plus an object retrieval fee. good for long-lived but less frequently accessed data
- one zone-IA: non-critical or reproducible objects. 99.5% availability, 1 AZ, 30-day and 128 KB minimum charges, cheaper than standard and standard-IA; real-time access. use case: CRR destinations, non-mission-critical data, or data already replicated elsewhere (like thumbnails)
- glacier: long-term archival storage (warm or cold backup). retrieval takes minutes to hours (faster = higher cost); 3 AZs, 90-day and 40 KB minimum charges. long-term archive and digital preservation, e.g. disk files or file systems. secure, durable, extremely low cost for data archiving
- glacier deep archive: long-term archival (cold backup) with 180-day and 40 KB minimums. slower retrieval (always hours) but cheaper than glacier (replaces tape-style storage). long-term archive and digital preservation

S3 versioning and MFA Delete

- versioning is enabled on an S3 bucket (not on objects); once enabled, every modification creates a new version. it can never be fully switched off, only suspended. versioning is off by default
- with versioning, you're billed for all versions of objects ... deleting an object just adds a delete marker (nothing is actually deleted). to really delete, remove all versions in the versions view
- if you delete an object, a new version with a delete marker is added. THEN, if you delete the delete-marker version, the actual object reappears on the front end (it is 'undeleted')
- older versions can be accessed using the object name and version ID (see the sketch below)
- MFA delete: meant to prevent accidental deletion of objects. once enabled, a one-time password is required to delete an object version or to change the versioning state of a bucket
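A versioning sketch with boto3 on a hypothetical bucket; note the status can later only be set to "Suspended", never removed:

```python
import boto3

s3 = boto3.client("s3")

# Turn versioning on; from here on, every overwrite creates a new version.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Older versions stay addressable by key + version ID (ID here is hypothetical).
obj = s3.get_object(
    Bucket="example-bucket",
    Key="report.txt",
    VersionId="example-version-id",
)
```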

registering a domain name

1. check the domain name is available
2. purchase the domain via a registrar
3. host the domain (pay for or manage DNS hosting / name servers for the domain, and inform the .com operator to link those servers to you)
4. add records to the zone file - www, mail, ftp, etc. - on the name servers that are authoritative for / host the domain

four types of NoSQL databases and associated products

1. key-value: data in key and value pairs. fast queries and ability to scale. no relationships and a weak schema. e.g. dynamodb
2. document: data stored as structured key-value pairs called documents; highly performant; semi-structured data that can be grouped together. great for content management like blog articles or comments, and for user profiles with varying, perhaps changing information. e.g. mongodb
3. column: data stored in columns, not rows. fast queries against attribute sets, like all surnames. great for data warehousing, analytics, and looking into patterns in the data. e.g. redshift
4. graph: dynamic relationships stored as nodes and relationships between nodes. good for human-related data like social media; great for social media sites and tracking relationships that change over time. e.g. neptune or neo4j

Subdomain

A domain name under the control of a higher-level domain name. For example, pltw.org is a subdomain of .org.

DNS Record A, AAAA, CNAME, MX, NS, TXT, ALIAS

A or AAAA: for a given host (www), provides an IP
CNAME: allows an alias to be created; e.g. www, ftp, and images could all be CNAMEs pointing at allthethings.linux.com
MX: mail servers for a given domain
NS: sets the authoritative servers for a domain; .com would hold the NS records for linux.com
TXT: descriptive text in a domain; e.g. used by gmail or office
ALIAS: an AWS extension that can refer to AWS logical services like a load balancer or S3 bucket; AWS doesn't charge for queries of alias records against AWS resources

name server

A server that contains databases with domain names and the numeric IP addresses to which they correspond.

records

A, MX, AAAA, CNAME, TXT, NS. SOA is a default record of a zone; SOA stands for start of authority

BaaS vs Faas (serverless architecture)

BaaS: use third-party services instead of building your own, e.g. Auth0 or dynamodb
FaaS: provides application logic; only active or invoked when needed (e.g. lambda)

levels managed by the customer for each model type: IaaS, PaaS, SaaS

1. Data
2. Application
3. Runtime
4. OS
5. Virtualization
6. Host/Servers
7. Network and Storage
8. Data Center

IaaS: customer manages levels 1-4; PaaS: 1-2; SaaS: 1

EBS volume types

'basic' EBS volumes get automatic monitoring every 5 minutes at no charge
volume types: gp2 and io1 = solid state, focused on IOPS; sc1 and st1 = mechanical, focused on throughput
gp2: the default; a balance of IOPS and MiB/s. burst pool of IOPS per GB; baseline is based on volume size at 3 IOPS/GB, bursting up to 3,000 IOPS; range 100-16,000 IOPS; minimum size 1 GB
io1: highest performance; size and IOPS are adjustable separately; good for large database workloads. max throughput of 1,000 MiB/s; up to 64,000 IOPS/volume; minimum size 4 GB; maximum ratio of IOPS to volume size is 50:1 (so an 8 GiB volume maxes out at 400 IOPS)
st1: low cost, throughput-intensive, can't be a boot volume; good for big data and streaming
sc1: cold, lowest cost, infrequent access, can't be a boot volume

instance roles and instance profiles

EC2 instance roles are IAM roles that can be assumed by EC2 using an instance profile
the profile is either created automatically (console) or manually (CLI); it's a container for the role that is associated with an EC2 instance
the instance profile allows applications on the instance to access the role's credentials via instance metadata

access keys and their parts

access keys are a pair of values used by the CLI, applications, and SDKs
made up of an access key ID and a secret access key
the secret is the sensitive, private part; it is available only once, when the key is initially generated, and is stored by the owner
can't be used to log in to the console, only for programmatic access
can make programmatic calls with access keys from AWS tools for windows powershell, direct HTTP calls using the API, and the AWS CLI

undifferentiated heavy lifting

letting a vendor handle this part frees up staff; if an application, system, or platform is not specific to our business, it makes sense to go this route

ARN

amazon resource name
format: arn:partition:service:region:account-id:resource
partition = aws or aws-cn; service = s3, ec2, rds; region = us-east-1, ap-southeast-2
the resource portion can take different shapes, e.g. resourcetype/resource or resourcetype:resource, optionally with a qualifier
can contain wildcards (*); fields with :: omit a value (e.g. S3 ARNs omit region and account)

lambda

FaaS - functions as code
- stateless; can run a task for up to 15 minutes; you only pay while it executes (not all the time)
- no configuration, no patching, no maintenance
- can run on an event or on a schedule (e.g. nightly); good for one task or a small number of tasks
- statelessness means a lack of persistence - an entirely clean runtime environment every single invocation - so you need to store data somewhere else after it runs
- can run concurrently and scale to near-infinite levels; can be monitored with cloudwatch logs (errors, successes, etc.)
- must have a unique name per region, per account. requires a runtime (like python 3.6), function code, a runtime environment, and an execution role
- lambda assumes the function's execution role and is given temporary credentials via STS
- supports prebuilt runtimes, and custom ones can be created
- can be configured to run privately in a specific VPC with an IP; the default is public
- supported languages include node.js, java, and python
- timeouts and memory usage (min 128 MB) are configurable; the maximum timeout is 15 minutes
- bad fit: needing a fully fledged OS, consistent CPU, or something consistently running
- billing: minimum billed duration is 100 ms
- requires: code and libraries, a function name, a runtime, permissions
- can be event driven OR manually invoked
- has temporary local storage while running, up to 512 MB
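A minimal Python handler sketch; the event payload and return shape are hypothetical, but the structure (handler receives event + context, returns a value, persists nothing locally) is the standard pattern:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' exposes runtime info
    # such as remaining execution time. Nothing here survives the invocation,
    # so anything worth keeping must be written to S3/DynamoDB/etc.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```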

high availability VS fault tolerance

HA: hardware, software, and config allowing a system to recover quickly in the event of failure
fault tolerance: a system designed to operate through a failure with no user impact; more expensive and complex to achieve

high level flow of DNS when not known

ISP resolver -> DNS root server -> authoritative .com top-level domain servers -> individual authoritative server, e.g. the linux academy name server

OSI model - layers

L7: application -> protocols like HTTP, SSH, FTP
L6: presentation -> data conversion, encryption, compression; standards for L7
L5: session
L4: transport -> TCP (reliability) and UDP (speed); ports, data in correct order, error checking
L3: network -> IPs over interconnected networks; packets sent
L2: data link -> MAC addresses on the local network
L1: physical -> 0s and 1s; devices transmit signals and listen; how data gets received

S3 4 Types of encryption

Note: S3 does object encryption only, NOT bucket encryption
Note: a bucket can have a default encryption applied to every uploaded object, and can force every object to use a certain encryption type before it is allowed to be uploaded
1. client-side encryption: the client handles encryption/decryption and its keys. good for strict security compliance, though it has high admin and processing overhead
2. server-side encryption with customer-managed keys (SSE-C): S3 handles encryption/decryption; the customer manages the keys, which must be supplied with every PUT and GET request. S3 does NOT keep the keys
3. server-side encryption with S3-managed keys (SSE-S3): objects are encrypted with AES-256; keys are generated and managed by S3 on your behalf and stored with the object in encrypted form. read access to the object is enough for S3 to decrypt it for you
4. server-side encryption with AWS KMS-managed keys (SSE-KMS): objects are encrypted with an individual key from KMS; the encrypted key is stored with the encrypted object. decryption needs both S3 and KMS key permissions (role separation). during this process, you choose which KMS key to use (sketch below)
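A boto3 sketch of requesting SSE-S3 vs SSE-KMS per object; the bucket, keys, and KMS key alias are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 generates and manages the AES-256 key for you.
s3.put_object(
    Bucket="example-bucket", Key="a.txt", Body=b"data",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt under a specific KMS key; reading the object back
# requires both S3 and KMS permissions (role separation).
s3.put_object(
    Bucket="example-bucket", Key="b.txt", Body=b"data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-key",  # hypothetical key alias
)
```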

EBS snapshots

- point-in-time backups stored in S3
- the first snapshot is a full copy of the volume; future snapshots contain only the data changed since the last snapshot
- cheaper than EBS because storage is in S3; highly available and efficient; great for moving or copying instances between AZs
- it's recommended the instance is stopped/powered off, or its disks 'flushed', before taking a snapshot
- snapshots can be copied between regions, shared, and automated using data lifecycle manager (DLM)
- snapshots can be made public or private, or shared directly with specific other AWS accounts; a new volume can be created from one

EC2 instance types

T2 and T3: low-cost instance types that provide burst capability
M5: general workloads
C4: more capable CPU
X1 and R4: optimized for large amounts of memory
I3: fast IO
P2, G3, F1: GPUs and FPGAs
special-case letters: a = AMD CPUs; A = ARM based; n = high-speed networking; d = NVMe storage (advanced faster storage)

TLD (top-level domain)

The highest-level category used to distinguish domain names, for example .org, .com, and .net. A TLD is also known as the domain suffix.

security pillar

ability to protect information, systems, assets while delivering business value through risk assessments and mitigation strategies implement a strong identity foundation, enable traceability, apply security at all layers, automate security best practices, protect data in transit and at rest, prepare for security events

operational excellence pillar

ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures: perform operations as code, annotate documentation, make frequent small reversible changes, refine operations procedures frequently, anticipate failure, learn from all operational failures

performance efficiency pillar

ability to use computing resources efficiently to meet system requirements and to maintain efficiency as demand changes and technologies evolve democratize advanced technologies, go global in minutes, use serverless architecture, experiment more often, mechanical sympathy

reliability pillar

ability for a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions: test recovery procedures, automatically recover from failure, scale horizontally, stop guessing capacity, manage change in automation

security groups

- software firewalls that can be attached to network interfaces and, by association, to resources in AWS
- security groups are associated with VPCs
- stateful: if traffic is allowed in or out, the return traffic is automatically allowed
- up to 5 security groups per ENI (elastic network interface)
- there are inbound and outbound rules; a rule allows traffic to or from a source (like an IP, network, or named AWS entity) on a protocol/port
- there is an implicit default deny, BUT a security group cannot explicitly deny traffic
- when first created, the default rule allows either SSH or RDP depending on the instance type (sketch below)
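A boto3 sketch of adding an inbound rule, with a hypothetical group ID; note there is no "deny" equivalent, only allows:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from anywhere. Because security groups are stateful,
# the response traffic is permitted automatically - no outbound rule needed.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```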

CRR

cross-region replication
- one-way replication on an S3 bucket, from a source to a destination bucket in another region
- by default it carries over storage class, object name (key), owner, and object permissions
- can replicate the entire bucket OR filter by name/tag; can replicate to different AWS accounts
- replication does NOT work retroactively
- requires versioning enabled on both buckets, plus an IAM role with permissions to replicate objects
- once configured, you can override the storage class and object owner
- replication only occurs one way, from source to destination
- excluded from replication: system actions (lifecycle events), objects existing before replication was enabled, and SSE-C encrypted objects (only SSE-S3 and KMS-encrypted objects are supported)
- use for global resilience, better performance, or backup purposes

EC2 families great link: https://aws.amazon.com/ec2/instance-types/

general purpose, compute optimized, memory optimized, storage optimized, accelerated computing

IAM groups

groups are not true identities, meaning they cannot be a principal in a policy, so they cannot be used in resource policies
inline and managed policies can be applied to groups
groups have no credentials

IAM users

- hard limit of 5,000 users per account
- 10 group memberships per user
- default max of 10 managed policies per user
- no inline policy count limit, but all inline policies on an IAM user cannot exceed 2,048 characters combined
- 1 MFA per user
- 2 access keys per user

IAM roles

IAM roles are assumed by another identity allowed in the trust policy
when a role is assumed, STS generates a time-limited set of access keys (temporary security credentials)
IAM roles have no long-term credentials
identities that can assume a role: IAM users, AWS services, external accounts, web identities
trust policy = controls which identities can assume the role
permissions policy = what permissions are provided

geolocation routing ~ dns

routing based on geographic region; the most specific record wins - e.g. a US record before a north america record, falling back to the default record (e.g. for a user in the UK with no matching record)

cost optimization pillar

includes the ability to avoid or eliminate unneeded costs or suboptimal resources: adopt a consumption model, measure overall efficiency, stop spending money on data center operations, analyze and attribute expenditure, use managed services to reduce cost of ownership

aws console navigation

click on aws -> taken to the homepage
notifications -> info relevant to the aws account, like open support tickets, scheduled changes, etc.
account dropdown -> quick access to aws orgs, the billing dashboard, personal security credentials, switch role
support -> support center, forums, documentation
billing dashboard -> manage budgets and cost explorer

cluster vs partition vs spread

- cluster: instances placed physically near each other in one AZ; provides the best performance
- partition: for large infrastructure where you want visibility into which partition instances occupy. partition placement groups are separated into partitions, each occupying isolated racks; can span multiple AZs in a region
- spread: provides the best availability. max of 7 EC2 instances per AZ, each instance occupying its own partition with an isolated fault domain. designed for small numbers of instances that must be kept separate. good for email servers, file servers, application HA pairs, domain controllers
- placement group: a logical grouping of instances (a cluster group sits within a single AZ). good for applications that benefit from low network latency, high network throughput, and the highest packet-per-second performance; choose instance types that support enhanced networking to get the most out of one

cloudfront

content delivery network (CDN)
- default: publicly accessible, so anyone with the DNS endpoint address can access it. can be configured as private, where each access requires a cookie or signed URL (done with trusted signers)
- note: private access can be bypassed by going directly to the origin. an origin access identity (OAI) can restrict S3 to only allow the OAI (all other identities are denied). this only applies to S3 buckets. if an application uses signed URLs with cloudfront, you don't want people reaching the actual S3 bucket
- note: a default domain name is given for the CDN. you can change it, which also requires an SSL certificate and route53 DNS
- request flow: edge location (if it doesn't have the object) -> regional edge cache. if the regional cache doesn't have it either -> fetch from the origin (like S3) to the edge location, and also from the origin to the regional cache so other edge locations can get it from there next time
- a global cache storing copies of data at edge locations positioned closer to customers. benefits: lower latency, higher transfer speeds, reduced load on the content server
- origin: the server/service hosting your content, like S3, a web server, or amazon media store. must be public for it to work with cloudfront
- distribution: the config entity within cloudfront, where all aspects of the implementation are configured. can be for web content OR for RTMP (adobe flash media player RTMP protocol)
- edge location: local infrastructure hosting a cache of data. regional edge cache: a larger version of an edge location
- content can expire, be discarded, or be recached; you can also explicitly invalidate content to remove it from the cache
- note: you can select which countries/regions cloudfront serves from, which impacts cost - all edge locations (best performance), just US and canada, etc.

CORS

cross-origin resource sharing
a security measure allowing a web app running in one domain to reference resources in another
e.g. hannah.com wants a photo from the hannah bucket

EC2 volume encryption

data between EBS and EC2 is encrypted at rest and in transit
encryption generates a DEK (data encryption key) from a customer master key (CMK) in a region; a unique DEK encrypts each volume
snapshots of the volume are encrypted with the same DEK, as are volumes created from those snapshots
plaintext DEKs are stored in EC2 memory and used to encrypt/decrypt data ... EC2 and the OS see plaintext data, so there is no performance impact

S3 data in transit vs at rest

data between a client and S3 is encrypted in transit
encryption at rest is configured on a per-object basis

vpc peering

direct communication between VPCs. good for giving a vendor or partner VPC access (e.g. for a security audit), or for splitting an application into parts that stay connected
- can communicate via private IPs from VPC to VPC
- can span accounts and regions
- data is encrypted and transits via the AWS global backbone
- VPC peers link two VPCs at layer 3
- VPC CIDR blocks can't overlap
- routes are required on both sides
- NACLs and SGs can be used to control access
- SGs can be referenced across accounts but NOT cross-region
- IPv6 is not available for cross-region peering
- DNS resolution to private IPs can be enabled, but the setting is needed on both sides ... by default names resolve to the public IP instead of the private IPs
- enables routing traffic between two VPCs over private IPs, as if they were on the same network

EC2 public instance with IP - elastic v normal

ec2-X-X-X-X.compute-1.amazonaws.com ... another example: http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpeg (bucket-name.s3-website-region.amazonaws.com)
- elastic IPs are static. when allocated, they replace the normal public IP, which is then deallocated. valid for the entire region, not AZ specific
- if an elastic IP isn't associated with anything, release it from the account so you don't pay for it
- an elastic IP can be remapped and moved to a different instance; it remains allocated to your account until you release it
- when you first attach an elastic IP, it replaces the dynamic public IP; when you remove it, a new, different dynamic public IP is allocated
- elastic IPs cost nothing unless one is reserved but not assigned to anything
- public DNS resolves to the public address externally BUT the private address internally
- a public IPv4 address is allocated when the machine starts and deallocated when it stops
- normal public IP: doesn't change on reboot. reachable when connecting from the public internet ... connecting from private/local/internal -> you get the internal/private IP
- in general -> if you want instances in a subnet to get a public IP, modify the subnet's auto-assign public IP setting

EBS vs instance store volume

elastic block store
- can be attached to only one instance at a time. you can detach it and attach it to a different instance, but the new instance must be in the same AZ
- creates and manages volumes; tied to a particular AZ; provides reliability
- volumes are persistent, can be attached to and removed from EC2 instances, and are replicated within a single AZ
- EBS snapshots can protect against AZ failure, since snapshot data is replicated across AZs in the region
- max of 1,750 MiB/s and 80,000 IOPS per instance ... if you need more, use instance store volumes. throughput = block size * IOPS
- instance store volumes are directly connected to the hardware, giving the best performance, BUT if the host fails or changes, you lose the data. the cost is sometimes included with the EC2 instance itself
- instance store = ephemeral. reboot an EC2 instance -> the instance store is maintained because it's still the same host. stop it -> new hardware is assigned -> the instance store volume is lost
- instance store data persists through a reboot, BUT if the instance stops, the disk drive fails, or the instance is terminated -> it will not persist. don't use it for anything other than temporary data

EFS

elastic file system
- files can be created and mounted simultaneously on multiple linux instances (files accessed from multiple instances, like shared media, home folders, documentation). region resilient
- designed to work concurrently with multiple EC2 instances, mounted on each instance that uses it. can use lifecycle management. instances control access to an EFS via a security group
- used for: large-scale parallel access to data - potentially thousands of clients mounting the file system and accessing data concurrently (elastic or concurrent data). examples: shared media, wordpress instances, big data and analytics (provides strong data consistency), a shared home directory for linux systems, shared bespoke logging with tight security requirements. data could be backed up into something more resilient, or synced with DataSync
- not used for: serving data via cloudfront, single-instance setups, object storage, or temporary storage
- it is a file system; uses NFSv4 or NFSv4.1 to connect to mount targets
- lives in a VPC and is accessed via mount targets placed in subnets, each with an IP address (1 mount target per AZ)
- only supported on (mounted from) linux instances
- accessible from the VPC, or from on-prem via VPN or direct connect
- SGs are used to control access to mount targets
- 2 performance modes: general purpose (good for 99% of needs) or max I/O (for large numbers of instances, like hundreds)
- can encrypt at rest using KMS
- a DNS name is created for the EFS
- 2 throughput modes: bursting (throughput linked to file system size; the default) or provisioned (control throughput independently of file system size)
- 2 storage classes: standard and infrequent access. lifecycle management moves files between classes based on access patterns

FQDN

fully qualified domain name
host plus domain: www.linuxacademy.com (www = host, linuxacademy.com = domain)

proxy server

a gateway that sits between a private and public network; generally application-aware
the client connects to the proxy -> the proxy connects to the destination server ... often used as a caching server: for large files, the proxy can serve the client directly. can also filter (malware, adult content) and speed up access to content
can filter or act as a web cache, speeding up web access for a large organization or a remote site
benefits: can pass or block traffic based on things the network layer can't see, like username, elements of corporate identity (department, age), or DNS name instead of IP
often used with existing on-prem networks; can be installed on an EC2 instance
only handles traffic going out (a firewall handles traffic going in and out)
is configured in the web application/browser

cloudformation (CFN)

infrastructure as code (IaC) product. can create, manage, and remove infrastructure using JSON or YAML
effective if you frequently deploy the same infrastructure or require guaranteed consistent configuration
templates contain logical resources and configuration -> stacks take the logical resources from a template and create, update, or delete the physical resources in AWS
template -> stack -> physical resources
a template can create up to 200 resources
if a stack is deleted -> any resources it created are also deleted
a stack can be updated by uploading a new version of a template: new logical resources create new physical resources, removed logical resources delete physical resources, and changed logical resources update (with some disruption) or replace physical resources
costs nothing beyond the infrastructure you're deploying

throughput is calculated how?

throughput = IOPS * block size
IOPS = input/output operations per second (the number of read and write operations per second)
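A worked example with hypothetical numbers; 64,000 IOPS at a 16 KiB block size works out to the 1,000 MiB/s figure quoted for io1 above:

```python
iops = 64_000        # e.g. a maxed-out io1 volume
block_size_kib = 16  # hypothetical block size

throughput_mib_s = iops * block_size_kib / 1024
print(throughput_mib_s)  # 1000.0 MiB/s
```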

EC2 private instance with default internal DNS name format

ip-X-X-X-X.ec2.internal
a private IP is allocated when launching an instance; unchanged through stop/start, but released when terminated
can have more than one private IP allocated - the number of unique IPs depends on the size of the instance

API / API endpoint / API Gateway

an interface consumed by or accessed by another service or application
APIs require an API endpoint. APIs remain static; they are abstracted from what the code inside the service is doing. the AWS CLIs use the AWS APIs, which are REST or SOAP
API GATEWAY: a managed endpoint service; can use other AWS services, and store and recall data too. methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS
API GATEWAY pricing: based on the number of API calls, data transferred, and any caching required to improve performance. can front monolithic (EBS, EC2), microservice (aurora, fargate), and serverless (dynamo, lambda) backends
API GATEWAY types of APIs: REST (stateless) and WEBSOCKET (websocket is for high throughput)
API GATEWAY: uses HTTPS for endpoints. if you hit security errors -> enable CORS (cross-origin resource sharing)
API ENDPOINT: the location that allows API interaction; it sends and receives API requests

instance metadata

data relating to the instance that can be accessed from within the instance itself using HTTP
URL: http://169.254.169.254/latest/meta-data ... append /ami-id, /instance-id, /instance-type, etc. use curl for this
a way to get visibility of data like the external IPv4 address, AZ, security groups applied, and the approximate time to termination
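A sketch of querying the metadata service from inside an instance (stdlib only; curl works the same way against these paths):

```python
import urllib.request

# 169.254.169.254 is the link-local metadata service address.
base = "http://169.254.169.254/latest/meta-data/"

for path in ("ami-id", "instance-id", "instance-type"):
    with urllib.request.urlopen(base + path) as resp:
        print(path, "=", resp.read().decode())
```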

microservices

the inverse of monolithic: components operate independently
all direct communication happens between components and the end user
if one part needs more capacity -> that service alone can be scaled and updated as needed

IAM policy

known as an identity policy when attached to an identity, or a resource policy when attached to a resource
each statement matches a request to AWS. a request is matched on the action (the API call or operation attempted) and the resource the request is against, resulting in an allow or deny
evaluation order: explicit deny -> explicit allow -> implicit deny
only attached policies have an impact
AWS-managed policies are low overhead but lack flexibility
customer-managed policies are flexible but require admin
inline policies allow exceptions for individual identities
inline and managed policies can apply to users, groups, and roles

EBS-optimized mode

is now the default adds optimizations and dedicated communication paths for storage and networking allows for consistent utilization of both

zone and zone file

a mapping of hosts to IPs for a given domain; e.g. the zone file for linuxacademy.com holds the www record

role switching

a method of accessing one account from another using only one set of credentials; used within AWS organizations and between two unconnected accounts
1. a role in account B trusts account A
2. an identity in account A can assume the role in account B
3. using that role, it can operate inside account B
with two accounts where A is trying to access B ... B must trust A, and A uses STS to assume the role in B, picking up the permissions policy attached to that role (see the sketch below)
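A boto3 sketch of the A-assumes-role-in-B flow; the role ARN and session name are hypothetical:

```python
import boto3

sts = boto3.client("sts")  # called with account A's credentials

# Assume the role in account B (which must trust account A).
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ExampleAuditRole",
    RoleSessionName="example-session",
)
creds = resp["Credentials"]  # time-limited key, secret, and session token

# Operate inside account B using the temporary credentials.
s3_in_b = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```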

containers

the popular container engine is Docker, which is the basis for Elastic Container Service (ECS)
a container is a package containing an application plus the libraries and file system required to run it. containers run on a container engine that runs within 1 OS, usually linux
containers provide lightweight isolation: a faster start and denser packing within a host
infrastructure -> OS -> container engine -> containers holding libraries, runtime, application ... containers run side by side
think of it like an apartment building sharing one structure (the OS); something in one container can affect the others
Docker: the container engine. docker images: built from layers, e.g. a web server -> html -> entry point ... the docker container itself exposes ports to the public
container registry: e.g. docker hub, so other people can reuse images
WHEN TO USE: when components can share the same underlying OS kernel and need long-running compute. if you need different operating systems, or strict security/governance isolation -> DON'T use containers

access management basics: principal authentication identity authorization

principal: a person or application that can make an authenticated or anonymous request to perform an action on a system. authentication: authenticating a principal against an identity; could be a username and password, or API keys. identity: objects that require authentication and are authorized to access resources. authorization: the process of checking and allowing or denying access to a resource for an identity.

VPC - high level

private network in AWS; a private data center inside AWS - can be configured for public/private/both - regional (can't span regions), HA, and can be connected to physical data centers and corporate networks - isolated from other VPCs by default - VPC and subnet sizes: max /16 (65,536 IPs) and min /28 (16 IPs) - VPC subnets can't span AZs (1:1 mapping) - certain IPs are reserved in subnets. If you want instances to have DNS names, this is a setting configured at the VPC level, NOT the subnet level.
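A minimal sketch of carving a /24 subnet out of a /16 VPC and enabling DNS hostnames at the VPC level; the CIDR ranges and AZ are illustrative:

```python
# Minimal sketch: creating a VPC and a subnet with boto3.
# CIDR ranges and the AZ are illustrative.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")   # max VPC size is /16
vpc_id = vpc["Vpc"]["VpcId"]

# Each subnet lives in exactly one AZ (1:1 mapping)
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
print(subnet["Subnet"]["SubnetId"])

# DNS hostnames are a VPC-level setting, not per subnet
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```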

DNS hosts

records in a zone file, e.g. www, mail, catgifserver

RDBMS

relational database management system. Used when you have formal and fixed relationships. Data is stored in rows, and all of a row is parsed even if an individual attribute is all that is needed. Every table has a schema that defines the fixed layout of each row, set when the table is first created; every row must have all attributes with the correct data types. Conforms to ACID: atomicity, consistency, isolation, and durability. It is hard to achieve high performance levels and there are limits on scalability. Fixed relationships exist between tables based on keys; queries join tables to use information from both. Uses SQL (structured query language). Reading one attribute in 10,000 rows requires 10,000 rows to be read.
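A minimal sketch of a fixed schema and a key-based join, using Python's built-in sqlite3 as a stand-in for any relational engine; the tables and data are invented for illustration:

```python
# Minimal sketch: fixed schema and a key-based join, using sqlite3
# as a stand-in for any RDBMS. Tables and data are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),  -- fixed key relationship
        total REAL NOT NULL
    );
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (10, 1, 19.99);
""")

# Queries join tables on keys to combine information from both
for row in db.execute("""
    SELECT c.name, o.total
    FROM customers c JOIN orders o ON o.customer_id = c.id
"""):
    print(row)
```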

what are the pillars of a well-architected framework?

security, reliability, operational excellence, performance efficiency, cost optimization

step function

serverless visual workflow service that provides state machines. Uses ASL (Amazon States Language). State machines can orchestrate other AWS services using simple logic, branching, and parallel execution, and they maintain state. Workflow steps are known as states, which perform work via tasks. Allows for long-running serverless workflows. Step Functions replaces SWF (Simple Workflow Service) with a serverless version: SWF required an EC2 instance; Step Functions is serverless. Designed to overcome Lambda's execution time limit by using state machines. Executions can be started manually or automatically, can include user interactions, and can run for up to 1 year; a state machine can invoke multiple Lambda functions or run many in parallel. WHEN TO USE: a state machine can orchestrate parallel launching, branching, merges, decisions or choices; a state can be a task state (e.g. invoking a Lambda); long-running workflows driven by input; business processes that need manual approval.
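A minimal sketch of a two-state ASL definition created via boto3; the role ARN and Lambda ARN are hypothetical:

```python
# Minimal sketch: a two-state machine in ASL, created with boto3.
# The role ARN and Lambda ARN are hypothetical.
import json
import boto3

definition = {
    "StartAt": "DoWork",
    "States": {
        "DoWork": {                       # a Task state backed by a Lambda
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:work",
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},      # terminal state
    },
}

sfn = boto3.client("stepfunctions")
resp = sfn.create_state_machine(
    name="example-machine",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/StepFunctionsRole",
)
print(resp["stateMachineArn"])
```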

simple routing policy - ~ dns

a single record within a hosted zone that contains one or more values; when queried, it returns all values in random order. Cannot repeat DNS names with simple routing: if hannah.com points at 1.1.1.1, I cannot create a new simple record for hannah.com with IP 1.2.2.2. pros: simple, the default, even spread of requests. cons: no performance control, no granular health checks; an alias-type record can point at one and only one AWS resource (alias targets such as an Elastic Load Balancer). Note: weighted routing is NOT the same as a load balancer; it is used for testing features. Note: multivalue answer routing overcomes the cons of simple routing: you can have up to 8 of the same records with different IPs, you can do health checks, and each record must have a unique SetIdentifier; healthy values are returned at random.
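A minimal sketch of creating a simple-routing A record with boto3; the hosted zone ID and IPs are hypothetical:

```python
# Minimal sketch: a simple-routing A record in Route 53.
# Hosted zone ID and IPs are hypothetical.
import boto3

r53 = boto3.client("route53")
r53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "hannah.com.",
                "Type": "A",
                "TTL": 300,
                # one record, all values returned in random order
                "ResourceRecords": [{"Value": "1.1.1.1"},
                                    {"Value": "1.2.2.2"}],
            },
        }],
    },
)
```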

simple PUT upload vs multipart upload -- transferring data into s3

single PUT: a single stream of data with a limit of 5GB; if the stream fails, the whole upload fails. multipart: the object is broken into parts (up to 10,000 parts), each between 5MB and 5GB; is faster, and if a part fails only that part is retried; if over 100MB -> you should use multipart; it is required for anything over 5GB.
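A minimal sketch of forcing multipart uploads via boto3's transfer manager; the bucket and file names are hypothetical:

```python
# Minimal sketch: multipart uploads via boto3's transfer manager.
# Bucket and file names are hypothetical.
import boto3
from boto3.s3.transfer import TransferConfig

# Use multipart for anything over 100 MB, split into 64 MB parts;
# failed parts are retried individually instead of restarting the upload.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

s3 = boto3.client("s3")
s3.upload_file("big-file.bin", "example-bucket", "big-file.bin",
               Config=config)
```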

firewall

sits at the border between different networks and monitors traffic flow between them; can read packet data and allow or deny based on that data - it establishes a barrier between networks of different security levels - what it can inspect depends on the OSI layer it operates at - WAF: gives you control over which traffic to allow or block by defining customizable security rules. You can block common attack patterns like SQL injection or cross-site scripting. New rules can be deployed in minutes, so you can respond fast.

edge location

small pockets of AWS compute, storage, and networking located close to major populations; generally used for edge computing and content delivery

AMIs

they store snapshots of EBS volumes, permissions, and a block device mapping, which configures how the instance OS sees the attached volumes. Used for base installations or when using an immutable architecture --- you don't change things, you create a new version and then launch that. Also used for scaling and high availability. Can be shared, free, paid, or copied to other AWS regions. With appropriate launch permissions, instances can be created from an AMI: configure instance -> create image -> launch instance -> new instance. AMIs reference snapshots, permissions, and block device mappings. A golden image is an AMI that has been constructed from a customized instance; if you want to change it, you need to create a new one and have some sort of versioning convention.
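A minimal sketch of baking an AMI from a configured instance and copying it to another region; the instance ID, image name, and regions are hypothetical:

```python
# Minimal sketch: bake an AMI from a configured instance, then copy it.
# Instance ID, image name, and regions are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# configure instance -> create image
img = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="golden-image-v1",          # versioning convention in the name
)
ami_id = img["ImageId"]

# copy the AMI to another region for scaling / high availability
ec2_eu = boto3.client("ec2", region_name="eu-west-1")
copy = ec2_eu.copy_image(
    SourceImageId=ami_id,
    SourceRegion="us-east-1",
    Name="golden-image-v1",
)
print(copy["ImageId"])
```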

dedicated host

this is paid hourly; can be on-demand or reserved. You specify an AZ; it is NOT regional. This is physical hardware. You need to choose the type of EC2 host, which designates how many instances fit on it -- e.g. more .mediums than .larges fit on the same host. When to use: compliance and regulatory requirements for dedicated hardware, certain enterprise licensing tied to the number of CPUs and cores, strict instance placement requirements.
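A minimal sketch of allocating a dedicated host with boto3; the AZ and instance type are illustrative:

```python
# Minimal sketch: allocating a dedicated host in a specific AZ.
# AZ and instance type are illustrative.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",   # hosts are tied to an AZ, not a region
    InstanceType="m5.large",         # determines how many instances fit
    Quantity=1,
)
print(resp["HostIds"])
```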

tightly coupled VS loosely coupled/decoupled

tightly coupled: a type of architecture where components are directly linked AND dependent on each other. All components share the workload, so overall speed is dependent on the slowest part. decoupled: each component can operate independently; components communicate using an intermediary like a message queue. This process is asynchronous, and it allows for scaling or failure of one component without directly impacting the others.
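A minimal sketch of decoupling a producer and consumer with an SQS queue as the intermediary; the queue name and message body are hypothetical:

```python
# Minimal sketch: decoupling two components with an SQS queue.
# Queue name and message body are hypothetical.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="example-work-queue")["QueueUrl"]

# Producer: hands off work and moves on (asynchronous)
sqs.send_message(QueueUrl=queue_url, MessageBody="job-42")

# Consumer: polls independently; it can scale or fail
# without directly impacting the producer
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for m in msgs.get("Messages", []):
    print("processing", m["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```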

EC2 - when used and billing

traditional OS, compute that is consistent with CPU, a monolithic application that requires a server, or an application that is fairly bursty but needs to always run - e.g. an AD (Active Directory) server would be a good fit since it needs to be active and consistent. EC2 billing: is per second with a minimum of 60 seconds; after the first minute, you are charged by the second. Only charged when running. Allows underlying admin access to the OS.
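A small worked example of the per-second billing rule with a 60-second minimum; the hourly rate is made up:

```python
# Minimal sketch of per-second EC2 billing (60-second minimum).
# The hourly rate is a made-up example value.
HOURLY_RATE = 0.10  # USD/hour, hypothetical

def ec2_charge(seconds_running: float) -> float:
    billed = max(seconds_running, 60)   # bill at least 60 seconds
    return billed * HOURLY_RATE / 3600  # then per second

print(ec2_charge(30))    # a 30s run still bills the 60s minimum
print(ec2_charge(4500))  # 75 minutes, billed per second
```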

DNS Root servers

trust starts here. These are the servers that are authoritative to give answers about the root zone. TLDs are controlled by the root zone.

IAM dashboard - what can you manage

users, encryption keys, policies

horizontal vs vertical vs elastic scaling

vertical = traditional; you must forecast demand and not purchase too late or too early. horizontal: more, smaller servers, which tracks demand more closely. elastic: automation and horizontal scaling used in conjunction, scaling in or out to match demand.

vertical vs horizontal scaling

vertical: adding additional resources like memory or CPU to an existing machine; you will hit a maximum machine size, either on cost or technically. horizontal: scaling out, adding additional machines into a pool of resources, all of which provide the same service; has no size limitations and can be scaled to nearly infinite levels.

bootstrapping

where instructions are executed on an instance during its launch process. In EC2, user data can be used to run shell scripts (bash or PowerShell) or run cloud-init directives; it is used to configure the instance, perform software installation, and add application configuration. Compare with baking an AMI: an AMI reduces the time to provision an instance (choose it when configuration is time-intensive), but it cannot do dynamic configuration at provision time; bootstrapping is slower at launch but can adapt to the instance's size, type, subnet, or IP addresses. You can also do both AMI baking and bootstrapping together.
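A minimal sketch of bootstrapping an instance with a user-data shell script; the AMI ID and the installed package are hypothetical:

```python
# Minimal sketch: launching an instance with a user-data shell script.
# AMI ID and package name are hypothetical.
import boto3

user_data = """#!/bin/bash
yum install -y httpd           # software installation at launch
systemctl enable --now httpd   # application configuration
"""

ec2 = boto3.client("ec2")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,   # executed by cloud-init during the launch process
)
print(resp["Instances"][0]["InstanceId"])
```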

monolithic application

where the presentation, logic, and data tiers are implemented in the same code base and not separated. Is hard to scale and generally has to be scaled vertically. Ideal world: you want each tier isolated on a separate machine or pool of machines, with different CPU, memory, and disk I/O profiles.

failover routing ~ dns

you can create two records with the same name: one is primary and one is secondary. Queries go to the primary -> if the primary is unhealthy -> go to the secondary. It is used to provide emergency resources during failure.
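A minimal sketch of a primary/secondary failover pair in Route 53; the zone ID, health check ID, and IPs are hypothetical:

```python
# Minimal sketch: primary/secondary failover records in Route 53.
# Zone ID, health check ID, and IPs are hypothetical.
import boto3

r53 = boto3.client("route53")
common = {"Name": "app.example.com.", "Type": "A", "TTL": 60}

r53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={"Changes": [
        {"Action": "CREATE", "ResourceRecordSet": {
            **common,
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            # the linked health check decides whether the primary is used
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [{"Value": "1.1.1.1"}],
        }},
        {"Action": "CREATE", "ResourceRecordSet": {
            **common,
            "SetIdentifier": "secondary",
            "Failover": "SECONDARY",   # served when the primary is unhealthy
            "ResourceRecords": [{"Value": "2.2.2.2"}],
        }},
    ]},
)
```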

