AWS Solution Architecture


Replica table

(replica, for short) is a single DynamoDB table that functions as part of a global table. Each replica stores the same set of data items. Any given global table can only have one replica table per region. You can add replica tables to the global table, so that it can be available in additional AWS regions. It is important that each replica table and secondary index in your global table has identical write capacity settings to ensure proper replication of data.

Authentication methods: Access Keys

- A combination of an access key ID and a secret access key - You can assign two active access keys to a user at a time - These can be used to make programmatic calls to AWS when using the API in program code or at a command prompt when using the AWS CLI or the AWS Tools for PowerShell - You can create, modify, view, or rotate access keys - When created, IAM returns the access key ID and secret access key - The secret access key is returned only at creation time; if lost, a new key must be created - Ensure access keys and secret access keys are stored securely - Users can be given access to change their own keys through an IAM policy (not from the console) - You can disable a user's access key, which prevents it from being used for API calls

Authentication methods: console password

- A password that the user can enter to sign in to interactive sessions such as the AWS Management Console - You can allow users to change their own passwords - You can allow selected IAM users to change their passwords by disabling the option for all users and using an IAM policy to grant permissions for the selected users

IAM infrastructure elements: Authentication

- A principal sending a request must be authenticated to send a request to AWS - To authenticate from the console, you must sign in with your user name and password - To authenticate from the API or CLI, you must provide your access key and secret key

Technical requirements for connecting virtual interfaces:

- A public or private ASN. If you are using a public ASN, you must own it. If you are using a private ASN, it must be in the 64512 to 65535 range - A new unused VLAN tag that you select - Private connection (VPC): the VPC virtual private gateway (VGW) ID - Public connection: public IPs (/30) allocated by you for the BGP session

Directory sharing:

- AWS Directory Service for Microsoft Active Directory allows you to use a directory in one account and share it with multiple accounts and VPCs - There is an hourly sharing charge for each additional account with which you share a directory - There is no sharing charge for additional VPCs to which you share a directory, or for the account in which you install the directory

Updating stacks:

- AWS CloudFormation provides two methods for updating stacks: direct update or creating and executing change sets - When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them - Use direct updates when you want to quickly deploy your update - With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply those changes

The AWS STS API action returns

- An access key, which consists of an access key ID and a secret access key - A session token - Expiration or duration of validity - Users (or an application that the user runs) can use these credentials to access your resources
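
The credential bundle described above can be sketched in code. This is a minimal illustration of the response shape an STS call such as AssumeRole returns (field names follow the boto3 response format; the key and token values are placeholders, not real credentials):

```python
from datetime import datetime, timedelta, timezone

# Placeholder response mimicking the shape of an STS API result.
sts_response = {
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLEKEYID",      # access key ID
        "SecretAccessKey": "examplesecretkey",  # secret access key
        "SessionToken": "exampletoken",         # session token
        "Expiration": datetime.now(timezone.utc) + timedelta(hours=1),
    },
}

def credentials_valid(response, now=None):
    """Return True while the temporary credentials have not expired."""
    now = now or datetime.now(timezone.utc)
    return now < response["Credentials"]["Expiration"]

print(credentials_valid(sts_response))  # True until the hour is up
```

An application holding such a response would re-request credentials once `credentials_valid` turns false.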

IAM infrastructure elements: Principal

- An entity that can take an action on an AWS resource - Your administrative IAM user is your first principal - You can allow users and services to assume a role - IAM supports federated users - IAM supports programmatic access to allow an application to access your AWS account - IAM users, roles, federated users, and applications are all AWS principals

Stack creation errors:

- Automatic rollback on error is enabled by default - You will be charged for resources provisioned even if there is an error

VPC with public and private subnets and Hardware VPN access:

- This configuration adds an IPsec Virtual Private Network (VPN) connection between your Amazon VPC and your data center, effectively extending your data center to the cloud while also providing direct access to the internet for public subnet instances in your Amazon VPC - Creates a /16 network with two /24 subnets - One subnet is directly connected to the internet while the other subnet is connected to your corporate network via an IPsec VPN tunnel

Role Delegation

- Create an IAM role with two policies: a permissions policy, which grants the user of the role the required permissions on a resource, and a trust policy, which specifies the trusted accounts that are allowed to assume the role - Wildcards (*) cannot be specified as a principal - A permissions policy must also be attached to the user in the trusted account
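
The two policies can be sketched as JSON documents. This is a minimal example pair (the account ID and bucket ARN are placeholders):

```python
# Trust policy: who may assume the role. A wildcard (*) is not
# permitted as the principal here.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # trusted account
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: what the role holder may do on which resource.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

# Sanity check: no wildcard principal in the trust policy.
assert all(s["Principal"]["AWS"] != "*" for s in trust_policy["Statement"])
```

The trust policy answers "who", the permissions policy answers "what"; both are required for delegation to work.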

What key management functions can you perform in AWS KMS?

- Create keys with a unique alias and description - Import your own key material - Define which IAM users and roles can manage keys - Define which IAM users and roles can use keys to encrypt and decrypt data - Choose to have AWS KMS automatically rotate your keys on an annual basis - Temporarily disable keys so they cannot be used by anyone - Re-enable disabled keys - Delete keys that you no longer use - Audit use of keys by inspecting logs in AWS CloudTrail - Create custom key stores - Connect and disconnect custom key stores - Delete custom key stores (the use of custom key stores requires CloudHSM resources to be available in your account)

Simple AD does not support:

- DNS dynamic updates - Schema extensions - Multi-factor authentication - Communication over LDAPS - PowerShell AD cmdlets - FSMO role transfer - Not compatible with RDS SQL Server - Does not support trust relationships with other domains (use AWS MS AD)

Best practice for root accounts

- Don't use the root user credentials - Don't share the root user credentials - Create an IAM user and assign administrative permissions as required - Enable MFA

Cloudwatch retains metric data as follows:

- Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution custom metrics - Data points with a period of 60 seconds (1 minute) are available for 15 days - Data points with a period of 300 seconds (5 minutes) are available for 63 days - Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months)
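
The retention schedule above can be written as a small lookup helper, which is handy when deciding how far back a query can reach:

```python
def retention_days(period_seconds):
    """Days of retention for a CloudWatch metric with the given period."""
    if period_seconds < 60:
        return 3 / 24          # 3 hours: high-resolution custom metrics
    if period_seconds < 300:
        return 15              # 1-minute data points
    if period_seconds < 3600:
        return 63              # 5-minute data points
    return 455                 # 1-hour data points (~15 months)

print(retention_days(60))    # 15
print(retention_days(3600))  # 455
```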

Short polling:

- Does not wait for messages to appear in the queue - It queries only a subset of the available servers for messages (based on weighted random distribution) - Short polling is the default; ReceiveMessageWaitTimeSeconds is set to 0 - More requests are used, which implies higher cost

SWF applications include the following logical components:

- Domains - Workflows - Activities - Task Lists - Workers - Workflow Execution

RDS three storage types available

- General Purpose (SSD): use for database workloads with a moderate I/O requirement; cost effective; gp2; 3 IOPS/GB; burst up to 3,000 IOPS - Provisioned IOPS (SSD): use for I/O-intensive workloads; low latency and consistent I/O; user-specified IOPS - Magnetic: not recommended anymore; available for backwards compatibility; doesn't allow you to scale storage when using the SQL Server database engine; doesn't support elastic volumes; limited to a maximum size of 4 TiB; limited to a maximum of 1,000 IOPS

SNS supports notifications over multiple transport protocols:

- HTTP/HTTPS: subscribers specify a URL as part of the subscription registration - Email/Email-JSON: messages are sent to registered addresses as email (text-based or JSON object) - SQS: users can specify an SQS standard queue as the endpoint - SMS: messages are sent to registered phone numbers as SMS text messages

IAM roles with EC2 instances

- IAM roles can be used to grant applications running on EC2 instances permissions to make AWS API requests using instance profiles - Only one role can be assigned to an EC2 instance at a time - A role can be assigned at EC2 instance creation time or at any time afterwards - When using the AWS CLI or API, instance profiles must be created manually (it's automatic and transparent through the console) - Applications retrieve temporary security credentials from the instance metadata

Cross Account Access:

- Lets users from one AWS account access resources in another - To make a request in a different account, the resource in that account must have an attached resource-based policy with the permissions you need - Or you must assume a role (identity-based policy) within the account with the permissions you need

FIFO (First-in-first-out) queues

- Preserve the exact order in which messages are sent and received - FIFO queues are available in limited regions - If you use a FIFO queue, you don't have to place sequencing information in your messages - FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it - Supports up to 3,000 messages per second with batching, or 300 per second otherwise - The maximum message size is 256 KB
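
A toy model can make the FIFO semantics concrete: strict ordering plus deduplication by `MessageDeduplicationId`, with messages remaining until deleted. The names mirror the SQS API, but the logic is only an illustration, not how SQS is implemented:

```python
from collections import OrderedDict

class FifoQueue:
    """Toy FIFO queue: insertion order preserved, duplicates dropped."""

    def __init__(self):
        self._messages = OrderedDict()  # dedup_id -> body, insertion-ordered

    def send(self, body, dedup_id):
        # A duplicate deduplication ID is accepted but not re-queued,
        # so each message is delivered only once.
        self._messages.setdefault(dedup_id, body)

    def receive(self):
        # Messages come out in the exact order they were sent.
        dedup_id, body = next(iter(self._messages.items()))
        return dedup_id, body

    def delete(self, dedup_id):
        # A message stays available until a consumer deletes it.
        self._messages.pop(dedup_id)

q = FifoQueue()
q.send("order-created", dedup_id="msg-1")
q.send("order-paid", dedup_id="msg-2")
q.send("order-created", dedup_id="msg-1")  # duplicate: delivered only once
print([q._messages[k] for k in q._messages])  # ['order-created', 'order-paid']
```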

AD Connector comes in two sizes:

- Small - designed for organizations up to 500 users - Large -designed for organizations up to 5000 users.

Custom Key Store:

- The AWS KMS custom key store feature combines the control provided by AWS CloudHSM with the integration and ease of use of AWS KMS - You can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys, rather than the default KMS key store - When you create keys in KMS you can choose to generate the key material in your CloudHSM cluster - Master keys that are generated in your custom key store never leave the HSMs in the CloudHSM cluster in plaintext, and all KMS operations that use those keys are performed only in your HSMs - In all other respects, master keys stored in your custom key store are consistent with other KMS CMKs

For each cache behavior you can configure the following functionality:

- The path pattern (e.g. /images/*.jpg, /images/*.php) - The origin to forward requests to (if there are multiple origins) - Whether to forward query strings - Whether to require signed URLs - Allowed HTTP methods - Minimum amount of time to retain the files in the CloudFront cache (regardless of the value of any cache-control headers)

Glacier Charges

- There is no charge for data transfer between EC2 and Glacier in the same region - There is a charge if you delete data within 90 days - When you restore, you pay for: the Glacier retrieval, the requests, and the restored data stored on S3

SNS payment

- Users pay $0.50 per 1 million Amazon SNS requests, $0.06 per 100,000 notification deliveries over HTTP, and $2.00 per 100,000 notification deliveries over email
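
A back-of-the-envelope cost check using the rates quoted above (the traffic figures in the example are made up):

```python
def sns_monthly_cost(requests, http_deliveries, email_deliveries):
    """Monthly SNS cost in USD from the per-unit rates above."""
    return (requests / 1_000_000 * 0.50          # $0.50 per 1M requests
            + http_deliveries / 100_000 * 0.06   # $0.06 per 100k HTTP
            + email_deliveries / 100_000 * 2.00) # $2.00 per 100k email

# 2M requests, 500k HTTP notifications, 100k emails:
print(round(sns_monthly_cost(2_000_000, 500_000, 100_000), 2))  # 3.3
```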

Federation (typically AD):

- Uses SAML 2.0 - Grants temporary access based on the user's AD credentials - The user does not need to exist in IAM - Single sign-on allows users to log in to the AWS console without assigning IAM credentials

As an alternative to the AWS Directory Service, you can build your own Microsoft AD DCs in the AWS cloud (EC2)

- When you build your own, you can join an existing on-premises Active Directory domain (replication mode) - You must establish a VPN (on top of Direct Connect if you have it) - Replication mode is less secure than establishing trust relationships

Key deletion

- You can schedule a customer master key and associated metadata that you created in AWS KMS for deletion, with a configurable waiting period from 7 to 30 days - This waiting period allows you to verify the impact of deleting a key on your applications and users that depend on it - the default waiting period is 30 days - You can cancel key deletion during the waiting period
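
The configurable waiting period maps to the `PendingWindowInDays` parameter of the KMS ScheduleKeyDeletion API; a small guard function captures the allowed range:

```python
def pending_window(days=30):
    """Validate a KMS key-deletion waiting period (7-30 days, default 30)."""
    if not 7 <= days <= 30:
        raise ValueError("waiting period must be between 7 and 30 days")
    return days

print(pending_window())    # 30, the default
print(pending_window(7))   # 7, the shortest allowed
```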

VPC Wizard VPC with a single public subnet:

- Your instances run in a private, isolated section of the AWS cloud with direct access to the internet - Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances - Creates a /16 network with a /24 subnet - Public subnet instances use Elastic IPs or public IPs to access the internet

RDS encryption at rest: which elements are also encrypted?

- All DB snapshots - Backups - DB instance storage - Read replicas (You cannot encrypt an existing DB; you need to create a snapshot, copy it, encrypt the copy, then build an encrypted DB from the snapshot)

Systems Manager Inventory:

- AWS Systems Manager collects information about your instances and the software installed on them, helping you to understand your system configurations and installed applications - You can collect data about applications, files, network configurations, Windows services, registries, server roles, updates, and any other system properties - The gathered data enables you to manage application assets, track licenses, monitor file integrity, discover applications not installed by a traditional installer, and more

Origin access: set up control to your buckets using:

- Bucket policies - Access control lists - You can make objects publicly available or use CloudFront signed URLs - A custom origin server is an HTTP server, which can be an EC2 instance or an on-premises/non-AWS based web server - When using an on-premises or non-AWS based web server, you must specify the DNS name, ports, and protocols that you want CloudFront to use when fetching objects from your origin - Most CloudFront features are supported for custom origins except RTMP distributions (must be an S3 bucket)

Trails can be configured to log data events and management events:

- Data events: these events provide insight into the resource operations performed on or within a resource. These are also known as data plane operations - Management events: these events provide insight into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Management events can also include non-API events that occur in your account

Origin Protocol Policy:

- HTTPS only - Match Viewer: CloudFront matches the protocol with your custom origin - Use Match Viewer only if you specify Redirect HTTP to HTTPS or HTTPS Only for the viewer protocol policy - CloudFront caches the object only once, even if viewers make requests using both HTTP and HTTPS

High Availability approaches for databases

- If possible, choose DynamoDB over RDS because of its inherent fault tolerance - If DynamoDB can't be used, choose Aurora because of its redundancy and automatic recovery features - If Aurora can't be used, choose Multi-AZ RDS - Frequent RDS snapshots can protect against data corruption or failure, and they won't impact performance of a Multi-AZ deployment - Regional replication is also an option, but it will not be strongly consistent - If the database runs on EC2, you have to design the HA yourself

There are two types of hosted zones

- Public hosted zone: determines how traffic is routed on the internet - Private hosted zone for VPC: determines how traffic is routed within the VPC (resources are not accessible outside the VPC)

DynamoDB is integrated with Apache Hive on EMR. Hive can allow you to:

- Read and write data in DynamoDB tables, allowing you to query DynamoDB data using a SQL-like language (HiveQL) - Copy data from a DynamoDB table to an S3 bucket and vice versa - Copy data from a DynamoDB table into HDFS and vice versa - Perform join operations on DynamoDB tables

Available in two editions:

- Small: supports up to 500 users (approximately 2,000 objects) - Large: supports up to 5,000 users (approximately 20,000 objects) - AWS creates two directory servers and DNS servers on two different subnets within an AZ

Two editions:

- Standard Edition is optimized to be a primary directory for small and mid-size businesses with up to 5,000 employees. It provides enough storage capacity to support up to 30,000 directory objects, such as users, groups, and computers - Enterprise Edition is designed to support enterprise organizations with up to 500,000 directory objects

CloudTrail records account activity and service events from most AWS services and logs the following records:

- The identity of the API caller - The time of the API call - The source IP address of the API caller - The request parameters - The response elements returned by the AWS service - Not enabled by default - CloudTrail is per AWS account - Trails can be enabled per region, or a trail can be applied to all regions

An Internet gateway serves two purposes

- To provide a target in your VPC route tables for internet-routable traffic - To perform network address translation (NAT) for instances that have been assigned public IPv4 addresses - Internet gateways (IGWs) must be created and then attached to a VPC, added to a route table, and then associated with the relevant subnet(s) - If your subnet is associated with a route to the internet, then it is a public subnet - You can only attach one IGW to a VPC at a time - IGWs must be detached before they can be deleted

Key limits

- You can create up to 1,000 customer master keys per account per region - As both enabled and disabled customer master keys count towards the limit, AWS recommends deleting disabled keys that you no longer use - AWS managed master keys created on your behalf for use within supported AWS services do not count against this limit - There is no limit to the number of data keys that can be derived from a master key and used in your application or by AWS services to encrypt data on your behalf

AWS OpsWorks for Puppet Enterprise

- A fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management

Kinesis Data Analytics supports two types of inputs: streaming data sources and reference data sources:

- A streaming data source is continuously generated data that is read into your application for processing - A reference data source is static data that your application uses to enrich data coming in from streaming sources - You can configure destinations to persist the results - IAM can be used to provide Kinesis Analytics with permissions to read records from sources and write to destinations

Strongly consistent reads:

- A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read (may have higher latency than an eventually consistent read)

For serving both the media player and media files you need two types of distributions:

- A web distribution for the media player - An RTMP distribution for the media files - S3 buckets can be configured to create access logs and cookie logs, which log all requests made to the S3 bucket - Amazon Athena can be used to analyze access logs - CloudFront is integrated with CloudTrail - CloudTrail saves logs to the S3 bucket you specify - CloudTrail captures information about all requests, whether they were made using the CloudFront console, the CloudFront API, the AWS SDKs, the CloudFront CLI, or another service

Automation:

- AWS Systems Manager allows you to safely automate common and repetitive IT operations and management tasks across AWS resources - With Systems Manager, you can create JSON documents that specify a specific list of tasks, or use community-published documents - These documents can be executed directly through the AWS Management Console, CLIs, and SDKs, scheduled in a maintenance window, or triggered based on changes to AWS resources through Amazon CloudWatch Events - You can track the execution of each step in the documents as well as require approvals for each step - You can also incrementally roll out changes and automatically halt when errors occur
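
A minimal sketch of such a JSON document, here an Automation document (schemaVersion 0.3) that stops an EC2 instance. The step name and parameter names are illustrative placeholders:

```python
import json

# Minimal SSM Automation document shape; values are placeholders.
document = {
    "schemaVersion": "0.3",
    "description": "Example: stop an EC2 instance",
    "parameters": {"InstanceId": {"type": "String"}},
    "mainSteps": [{
        "name": "stopInstance",
        "action": "aws:changeInstanceState",   # built-in automation action
        "inputs": {
            "InstanceIds": ["{{ InstanceId }}"],
            "DesiredState": "stopped",
        },
    }],
}

# Serialize for registration with Systems Manager.
print(json.dumps(document, indent=2))
```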

Session Manager:

- AWS Systems Manager provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for bastion hosts, SSH, or remote PowerShell - It provides a simple way of automating common administrative tasks across groups of instances, such as registry edits, user management, and software and patch installations - Through integration with AWS Identity and Access Management (IAM), you can apply granular permissions to control the actions users can perform on instances - All actions taken with Systems Manager are recorded by AWS CloudTrail, allowing you to audit changes throughout your environment

Configuration compliance

- AWS Systems Manager lets you scan your managed instances for patch compliance and configuration inconsistencies - You can collect and aggregate data from multiple AWS accounts and regions, and then drill down into specific resources that aren't compliant - By default, AWS Systems Manager displays data about patching and associations - You can also customize the service and create your own compliance types based on your requirements

Options for connecting to a VPC are:

- Hardware-based VPN - Direct Connect - VPN CloudHub - Software VPN

Stack sets

- AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation - Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions - An administrator account is the AWS account in which you create stack sets - A stack set is managed by signing in to the AWS administrator account in which it was created - A target account is the account into which you create, update, or delete one or more stacks in your stack set

Routing policies: latency based routing:

- AWS maintains a database of latency from different parts of the world - Focused on improving performance by routing to the region with the lowest latency - You create latency records for your resources in multiple EC2 locations

CloudTrail logs API calls made via:

- AWS Management Console - AWS SDKs - Command line tools - Higher-level AWS services (such as CloudFormation)

Best practices for CloudFormation:

- AWS provides Python "helper scripts" which can help you install software and start services on your EC2 instances - Use CloudFormation to make changes to your landscape rather than going directly into the resources - Make use of change sets to identify potential trouble spots in your updates - Use stack policies to explicitly protect sensitive portions of your stack - Use a version control system such as CodeCommit or GitHub to track changes to templates

Parameter store:

- AWS Systems Manager provides a centralized store to manage your configuration data, whether plain-text data such as database strings or secrets such as passwords - This allows you to separate your secrets and configuration data from your code - Parameters can be tagged and organized into hierarchies, helping you manage parameters more easily - For example, you can use the same parameter name, "db-string", with a different hierarchical path, "dev/db-string" or "prod/db-string", to store different values - Systems Manager is integrated with AWS Key Management Service (KMS), allowing you to automatically encrypt the data you store - You can also control user and resource access to parameters using AWS Identity and Access Management (IAM) - Parameters can be referenced by other AWS services, such as Amazon Elastic Container Service, AWS Lambda, and AWS CloudFormation
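
The hierarchy idea can be illustrated with a toy in-memory store: the same leaf name, "db-string", lives under different environment paths, and a prefix lookup returns everything under a path (the parameter values are placeholders; the behavior loosely mirrors a GetParametersByPath-style query):

```python
# Toy parameter store keyed by hierarchical path.
parameters = {
    "/dev/db-string": "dev-db.example.internal:5432",
    "/prod/db-string": "prod-db.example.internal:5432",
}

def get_by_path(store, path):
    """Return every parameter beneath a hierarchy path."""
    prefix = path.rstrip("/") + "/"
    return {k: v for k, v in store.items() if k.startswith(prefix)}

print(get_by_path(parameters, "/dev"))  # only the dev value
```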

State manager:

- AWS Systems Manager provides configuration management, which helps you maintain consistent configuration of your Amazon EC2 or on-premises instances - With Systems Manager, you can control configuration details such as server configurations, anti-virus definitions, firewall settings, and more - You can define configuration policies for your servers through the AWS Management Console or use existing scripts, PowerShell modules, or Ansible playbooks directly from GitHub or Amazon S3 buckets - Systems Manager automatically applies your configuration across your instances at a time and frequency that you define - You can query Systems Manager at any time to view the status of your instance configurations, giving you on-demand visibility into your compliance status

Run Command:

- AWS Systems Manager provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for bastion hosts, SSH, or remote PowerShell - It provides a simple way of automating common administrative tasks across groups of instances, such as registry edits, user management, and software and patch installations - Through integration with AWS Identity and Access Management (IAM), you can apply granular permissions to control the actions users can perform on instances - All actions taken with Systems Manager are recorded by AWS CloudTrail, allowing you to audit changes throughout your environment

Patch Manager

- AWS Systems Manager helps you select and deploy operating system and software patches automatically across large groups of Amazon EC2 or on-premises instances - Through patch baselines, you can set rules to auto-approve select categories of patches to be installed, such as operating system or high-severity patches, and you can specify a list of patches that override these rules and are automatically approved or rejected - You can also schedule maintenance windows for your patches so that they are only applied during preset times - Systems Manager helps ensure that your software is up to date and meets your compliance policies

Maintenance Windows:

- AWS Systems Manager lets you schedule windows of time to run administrative and maintenance tasks across your instances - This ensures that you can select a convenient and safe time to install patches and updates or make other configuration changes, improving the availability and reliability of your services and applications

Cross-region replication allows you to replicate across regions:

- Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database - When you create a global table, you specify the AWS regions where you want the table to be available - DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagates ongoing data changes to all of them - Provides low read and write latency - Scale storage and throughput up or down without downtime - DynamoDB is schema-less - DynamoDB can be used for storing session state - Provides two read models

What are the two methods to back up and restore RDS DB instances?

- Amazon RDS automated backups - User-initiated manual backups - Both options back up the entire DB instance and not just the individual DBs - Both options create a storage volume snapshot of the entire DB instance - You can make copies of automated backups and manual snapshots - Automated backups back up data to multiple AZs to provide for data durability

Firehose destinations include:

- Amazon S3 - Amazon Redshift - Amazon Elasticsearch Service - Splunk

AWS OpsWorks Stacks

- An application and server management service that allows you to model your application as a stack containing different layers, such as load balancing, database, and application server - OpsWorks Stacks is an AWS creation and uses an embedded Chef Solo client installed on EC2 instances to run Chef recipes - OpsWorks Stacks supports EC2 instances and, via an agent, on-premises servers

Data types used with API gateway:

- Any payload sent over HTTP (always encrypted over HTTPS) - Data formats include JSON, XML, query string parameters, and request headers - You can declare any content type for your API responses, and then use transform templates to change the back-end response into your desired format

AWS PrivateLink access over Inter-Region VPC Peering:

- Applications in an AWS VPC can securely access AWS PrivateLink endpoints across AWS regions using inter-region VPC peering - AWS PrivateLink allows you to privately access services hosted on AWS in a highly available and scalable manner, without using public IPs and without requiring the traffic to traverse the internet - Customers can privately connect to a service even if the service endpoint resides in a different AWS region - Traffic using inter-region VPC peering stays on the global AWS backbone and never traverses the public internet

With STS you can request a session token using one of the following APIs:

- AssumeRole: can only be used by IAM users (can be used for MFA) - AssumeRoleWithSAML: can be used by any user who passes a SAML authentication response that indicates authentication from a known (trusted) identity provider - AssumeRoleWithWebIdentity: can be used by a user who passes a web identity token that indicates authentication from a known (trusted) identity provider - GetSessionToken: can be used by an IAM user or AWS account root user (can be used for MFA) - GetFederationToken: can be used by an IAM user or AWS account root user - AWS recommends using Cognito for identity federation with internet identity providers - Users can come from three sources

Restrictions

- Blacklists and whitelists can be used for geo-restriction; you can only use one at a time

Redshift charges

- Charged for compute node hours, 1 unit per hour (only compute nodes, not the leader node) - Backup storage: storage on S3 - Data transfer: no charge for data transfer between Redshift and S3 within a region, but for other scenarios you may pay charges

VPC with public and private subnets

- In addition to a public subnet, this configuration adds a private subnet whose instances are not addressable from the internet - Instances in the private subnet can establish outbound connections to the internet via the public subnet using Network Address Translation (NAT) - Creates a /16 network with two /24 subnets - Public subnet instances use Elastic IPs to access the internet - Private subnet instances access the internet via Network Address Translation (NAT)

High Availability for Redshift

- Currently Redshift does not support Multi-AZ deployments - The best HA option is to use a multi-node cluster, which supports data replication and node recovery - A single-node Redshift cluster does not support data replication, and you'll have to restore from a snapshot on S3 if a drive fails

RDS Billing and Provisioning

- DB instance hours (partial hours are charged as full hours) - Storage GB/month - I/O requests/month, for magnetic storage - Provisioned IOPS/month, for RDS provisioned IOPS SSD - Egress data transfer - Backup storage (DB backups and manual snapshots) - Backup storage for the automated RDS backup is free

Redshift provides fault tolerance for the following failures:

- Disk failures - Node failures - Network failures - AZ/region-level disasters

Distributor:

- Distributor is an AWS Systems Manager feature that enables you to securely store and distribute software packages in your organization - You can use Distributor with existing Systems Manager features like Run Command and State Manager to control the lifecycle of the packages running on your instances

Gateway endpoints are available for:

- DynamoDB and S3

CloudWatch monitors

- EC2 - DynamoDB tables - RDS DB instances - Custom metrics generated by applications and services - Any log files generated by your applications - CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing log files; use it for real-time application and system monitoring as well as long-term log retention - CloudWatch Logs keeps logs indefinitely by default - CloudTrail logs can be sent to CloudWatch Logs for real-time monitoring - Log metric filters can evaluate CloudTrail logs for specific terms, phrases, or values

Custom S3 static website:

- Enter the S3 static website hosting endpoint for your bucket in the configuration - Example: http://<bucketname>.s3-website-<region>.amazonaws.com - The expiration time is controlled through the TTL; the minimum expiration time is 0 - Static websites on Amazon S3 are considered custom origins - CloudFront keeps persistent connections open with origin servers - Files can also be uploaded to CloudFront
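
A small helper can assemble the dash-style website endpoint shown above (some newer regions use a dot before the region instead; the bucket name and region here are placeholders):

```python
def s3_website_endpoint(bucket, region):
    """Build the dash-style S3 static website hosting endpoint."""
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("my-site", "us-east-1"))
# http://my-site.s3-website-us-east-1.amazonaws.com
```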

Archive retrieval

-Expedited is 1-5 minutes retrieval (most expensive) - Standard is 3-5 hours retrieval (cheaper, 10 GB of data retrieval free per month) - Bulk retrieval is 5-12 hours (cheapest, use for large quantities of data). You can retrieve parts of an archive. When data is retrieved it is copied to S3; the archive remains in Glacier, so the storage class does not change. AWS SNS can send notifications when retrieval jobs are complete. Retrieved data is available for 24 hours by default (can be changed). To retrieve specific parts of an archive you can specify the byte range (Range) in the HTTP GET request (you need to maintain a DB of byte ranges).
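The byte-range technique above can be sketched as a small helper that splits an archive into inclusive HTTP Range header values (the function name and part size are illustrative, not part of any AWS SDK):

```python
def byte_range_headers(archive_size, part_size):
    """Split an archive of archive_size bytes into HTTP Range header values
    of at most part_size bytes each. Range values are inclusive on both ends."""
    headers = []
    start = 0
    while start < archive_size:
        end = min(start + part_size, archive_size) - 1  # inclusive end byte
        headers.append(f"bytes={start}-{end}")
        start = end + 1
    return headers

# e.g. a 10 MB archive retrieved in 4 MB parts
print(byte_range_headers(10 * 1024 * 1024, 4 * 1024 * 1024))
```

Each returned string would go into the Range header of a separate GET request.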

Field-level Encryption:

-Field-level encryption adds an additional layer of security on top of HTTPS that lets you protect specific data so that it is only visible to specific applications - Field-level encryption allows you to securely upload user-submitted sensitive information to your web servers - The sensitive information is encrypted at the edge close to the user and remains encrypted throughout application processing.

Define the allowed HTTP Method:

-GET, HEAD -GET, HEAD, OPTIONS -GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE

DynamoDB supports two kinds of secondary indexes:

-Global secondary index - an index with a partition key and sort key that can be different from those on the table - Local secondary index - an index that has the same partition key as the table, but a different sort key.

Direct Connect gateway:

-A grouping of virtual private gateways (VGWs) and private virtual interfaces (VIFs) that belong to the same AWS account - A Direct Connect gateway enables you to connect to VPCs in any AWS region (except the AWS China region) - You can share a private virtual interface to interface with more than one VPC, reducing the number of BGP sessions.

Define the viewer protocol policy:

-HTTP and HTTPS -Redirect HTTP to HTTPS -HTTPS only

Routing policies: Simple

-An A record is associated with one or more IP addresses - Uses round robin - Does not support health checks

IAM infrastructure elements: Authorization

-IAM uses values from the request context to check for matching policies and determines whether to allow or deny the request - IAM policies are stored in IAM as JSON documents and specify the permissions that are allowed or denied - IAM policies can be user (identity)-based policies or resource-based policies - IAM checks each policy that matches the context of your request; if a single policy has a deny action, IAM denies the request and stops evaluating (explicit deny). Evaluation logic: - By default all requests are denied (implicit deny) - An explicit allow overrides the implicit deny - An explicit deny overrides any explicit allows - Only the root user has access to all resources in the account by default.
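The evaluation logic above can be modeled as a toy function. This is only an illustration of the deny/allow ordering, not the real IAM engine, and the statement format is heavily simplified (no wildcards, conditions, or principals):

```python
def evaluate(policies, action, resource):
    """Toy model of IAM policy evaluation: explicit deny wins,
    then explicit allow, otherwise the implicit default deny."""
    decision = "implicit-deny"  # by default all requests are denied
    for statement in policies:
        if action in statement["Action"] and resource == statement["Resource"]:
            if statement["Effect"] == "Deny":
                return "explicit-deny"  # overrides any explicit allow
            decision = "allow"  # explicit allow overrides the implicit deny
    return decision

policies = [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::mybucket/*"},
    {"Effect": "Deny", "Action": ["s3:DeleteObject"], "Resource": "arn:aws:s3:::mybucket/*"},
]
print(evaluate(policies, "s3:PutObject", "arn:aws:s3:::mybucket/*"))
```

An action with no matching statement falls through to the implicit deny, matching the default-deny rule in the notes.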

A Kinesis data analytics application consists of three components:

-Input - the streaming source for your application - Application code - a series of SQL statements that process input and produce output - Output - one or more in-application streams to hold intermediate results

IAM infrastructure elements: Request

-Principals send requests via the console, CLI, SDKs, or APIs - Requests are: - Actions (or operations) that the principal wants to perform - Resources upon which the actions are performed - Principal information, including the environment from which the request was made - Request context - AWS gathers the request information: - Principal (requester) - Aggregate permissions associated with the principal - Environment data, such as IP address, user agent, SSL status, etc. - Resource data, or data that is related to the resource being requested.

Restrict access using methods?

-Restrict access to objects in Cloudfront edge caches using signed cookies or signed URLs -Restrict access to objects in your S3 bucket

Authentication methods: server certificates

-SSL/TLS certificates that you can use to authenticate with some AWS services - AWS recommends that you use AWS Certificate Manager (ACM) to provision, manage, and deploy your server certificates - Use IAM only when you must support HTTPS connections in a region that is not supported by ACM

OpsWorks consists of Stacks and Layers:

-Stacks are collections of resources needed to support a service or application - Stacks are containers of resources (EC2, RDS, etc.) that you want to manage collectively - Every stack contains one or more layers, and layers automate the deployment of packages - Layers represent different components of the application delivery hierarchy - EC2 instances, RDS instances, and ELBs are examples of layers

Multi-node consists of: Compute nodes

-Store data and perform queries and computations - Local columnar storage - Parallel/distributed execution of all queries, loads, backups, restores, and resizes - Up to 128 compute nodes

Default cache behavior

-The default cache behavior only allows a path pattern of /*. Additional cache behaviors need to be defined to change the path pattern following creation of the distribution.

Redshift always keeps three copies of your data:

-The original -A replica on compute nodes (within the cluster) - A backup copy on S3

CloudFormation charges

-There is no additional charge for AWS CloudFormation - You pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation in the same manner as if you created them manually - You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments

Must update subnet route table to point to IGW, either:

-To all destinations, e.g. 0.0.0.0/0 for IPv4 or ::/0 for IPv6 - To specific public IPv4 addresses, e.g. your company's public endpoints outside of AWS

Federation with Mobile Apps:

-Use Facebook/Amazon/Google or other OpenID providers to login

There are two options available for geo-restriction (geo-blocking):

-Use the CloudFront geo-restriction feature (use for restricting access to all files in a distribution and at the country level) - Use a 3rd-party geolocation service (use for restricting access to a subset of files in a distribution and for finer granularity than the country level)

PrivateLink interface endpoint

-What: an elastic network interface with a private IP - How: uses DNS entries to redirect traffic - Which services: API Gateway, CloudFormation, CloudWatch, etc. - Security: security groups

Object invalidation:

-You can remove an object from the cache by invalidating the object -you cannot cancel an invalidation after submission -You cannot invalidate media files in the Microsoft Smooth Streaming format when you have enabled Smooth Streaming for the corresponding cache behavior

When running a database on EC2, consider the following points:

-You can run any database you like with full control and ultimate flexibility - You must manage everything, like backups, redundancy, patching, and scaling - A good option if you require a database not yet supported by RDS, such as IBM DB2 or SAP HANA - A good option if it is not feasible to migrate to an AWS-managed database

VPC with a private subnet only and hardware VPN access:

-Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the internet. -You can connect this private subnet to your corporate data center via an IPsec Virtual Private Network tunnel -Creates a /16 network with a /24 subnet and provisions an IPsec VPN tunnel between your Amazon VPC and your corporate network

AWS OpsWorks for Chef Automate

-A fully managed configuration management service that hosts Chef Automate, a suite of automation tools from Chef for configuration management, compliance and security, and continuous deployment - Completely compatible with tooling and cookbooks from the Chef community and automatically registers new nodes with your Chef server - The Chef server stores recipes and configuration data - A Chef client (node) is installed on each server

Kinesis data stream common use cases:

-accelerated log and data feed intake -real-time metrics and reporting -real-time data analytics - complex stream processing

IAM infrastructure elements: Actions

-Actions are defined by services; actions are the things you can do to a resource, such as viewing, creating, editing, and deleting. Any actions on resources that are not explicitly allowed are denied. To allow a principal to perform an action you must include the necessary actions in a policy that applies to the principal or the affected resource.

To enable access to or from the internet for instances in a VPC subnet, you must do the following:

-Attach an internet gateway to your VPC - Ensure that your subnet's route table points to the internet gateway - Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address) - Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.

Public subnets are subnets that have:

-Auto-assign public IPv4 address set to "yes" - The subnet route table has an attached internet gateway. Instances in the default VPC always have both a public and private IP address. AZ names are mapped to different zones for different users (i.e. the AZ "ap-southeast-2a" may map to a different physical zone for a different user).

Routing policies: Geolocation

-Caters to different users in different countries speaking different languages - Contains users within a particular geography and offers them a customized version of the workload based on their specific needs - Geolocation can be used for localizing content and presenting some or all of your website in the language of your users - Can be used for spreading load evenly between regions - If you have multiple records for overlapping regions, Route 53 will route to the smallest geographic region - You can create a default record for IP addresses that do not map to a geographic location

Distributed Denial of service (DDoS) protection:

-CloudFront distributes traffic across multiple edge locations and filters requests to ensure that only valid HTTP(S) requests will be forwarded to backend hosts. CloudFront also supports geo-blocking, which you can use to prevent requests from particular geographic locations from being served.

Domain Names

-CloudFront typically creates a domain name such as a23223.cloudfront.net. Alternate domain names can be added using an alias record (Route 53).

Options for storing logs:

-CloudWatch Logs - A centralized logging system (e.g. Splunk) - A custom script and storage on S3. Do not store logs on non-persistent disks: best practice is to store logs in CloudWatch Logs or S3. Amazon CloudWatch uses Amazon SNS to send email.

What are the two types of distribution: RTMP

-Distributes streaming media files using Adobe Flash Media Server's RTMP protocol - Allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location - Files must be stored in an S3 bucket

Routing policies: Failover

-Failover to a secondary IP address - Associated with a health check - Used for active-passive failover - Routes only when the resource is healthy - Can be used with ELB - When used with alias records, set "Evaluate target health" to "yes" and do not use health checks

Multi-node consists of: Leader node

-Manages client connections and receives queries - Simple SQL endpoint - Stores metadata - Optimizes the query plan - Coordinates query execution

Redshift provides continuous/incremental backup

-multiple copies within a cluster -continuous and incremental backups to S3 -Continuous and incremental backups across regions -streaming restore

DynamoDB scalability (other regions)

-Per table - 10,000 read capacity units and 10,000 write capacity units - Per account - 20,000 read capacity units and 20,000 write capacity units. DynamoDB can throttle requests that exceed the provisioned throughput for a table. It can also throttle read requests for an index to prevent your application from consuming too many capacity units. When a request is throttled it fails with an HTTP 400 code (Bad Request) and a ProvisionedThroughputExceededException.
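The capacity-unit arithmetic behind these limits can be sketched in Python, using the standard DynamoDB sizing rules (one RCU covers a strongly consistent read of up to 4 KB per second, one WCU covers a write of up to 1 KB per second; the function names are mine):

```python
import math

def read_capacity_units(item_size_bytes, reads_per_sec, eventually_consistent=False):
    """RCUs needed: each strongly consistent read of an item costs
    ceil(size / 4 KB) units; eventually consistent reads cost half."""
    units = math.ceil(item_size_bytes / 4096) * reads_per_sec
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(item_size_bytes, writes_per_sec):
    """WCUs needed: each write of an item costs ceil(size / 1 KB) units."""
    return math.ceil(item_size_bytes / 1024) * writes_per_sec

# 10 strongly consistent reads/sec of 6 KB items
print(read_capacity_units(6000, 10))
```

If the table's provisioned units fall below these numbers for a sustained burst, requests start failing with the ProvisionedThroughputExceededException described above.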

Egress-only Internet Gateway:

-Provides outbound internet access for IPv6-addressed instances - Prevents inbound access to those IPv6 instances - IPv6 addresses are globally unique and are therefore public by default - Stateful - forwards traffic from the instance to the internet and then sends back the response - You must create a custom route for ::/0 to the egress-only internet gateway instead of NAT for IPv6

Redshift replication

-Redshift can asynchronously replicate your snapshots to S3 in another region for DR. Single-node clusters do not support data replication (in a failure scenario you would need to restore from a snapshot). Scaling requires a period of unavailability of a few minutes (during the maintenance window). Redshift moves data in parallel from the compute nodes in your existing data warehouse cluster to the compute nodes in your new cluster.

Direct Connect benefits

-Reduced cost when using large volumes of traffic - Increased reliability (predictable performance) - Increased bandwidth (predictable bandwidth) - Decreased latency. Each AWS Direct Connect connection can be configured with one or more virtual interfaces (VIFs).

Partition key and sort key

-Referred to as a composite primary key; composed of two attributes: partition key and sort key. An item is a collection of attributes. The aggregate size of an item cannot exceed 400 KB. Supports GET/PUT operations using a user-defined primary key. DynamoDB provides flexible querying by letting you query on non-primary key attributes using global secondary indexes and local secondary indexes.

Routing policies: Traffic flow

-Route 53 traffic flow provides global traffic management (GTM) services - Traffic flow policies allow you to create routing configurations for resources using routing types such as failover and geolocation - Route 53 traffic flow makes it easy for developers to create policies that route traffic based on the constraints they care most about, including latency, endpoint health, load, geo-proximity, and geography - You can use Amazon Route 53 traffic flow to assemble a wide range of routing scenarios, from adding a simple backup page in Amazon S3 for your website, to building sophisticated routing policies that consider an end user's geographic location, proximity to an AWS region, and the health of each of your endpoints - Amazon Route 53 traffic flow also includes a versioning feature that allows you to maintain a history of changes to your routing policies, and easily roll back to a previous policy version using the console or API.

Routing policies: Weighted

-Similar to simple, but you can specify a weight per IP address - You create records that have the same name and type and assign each record a relative weight - A numerical value that favors one IP over another - To stop sending traffic to a resource you can change the weight of the record to 0.
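Weighted selection can be modeled deterministically: given records with relative weights and a roll in [0, total weight), pick the record whose cumulative weight range covers the roll. This is only an illustrative model of the idea, not Route 53's implementation:

```python
from itertools import accumulate

def pick_record(records, roll):
    """records: list of (value, relative_weight); roll: int in [0, total weight).
    A record with weight 0 never matches, which models 'weight 0 stops traffic'."""
    weights = [w for _, w in records]
    for (value, _), cumulative in zip(records, accumulate(weights)):
        if roll < cumulative:
            return value
    raise ValueError("roll out of range")

records = [("1.1.1.1", 3), ("2.2.2.2", 1)]
# rolls 0-2 land on the first record, roll 3 on the second: a 3:1 split
print([pick_record(records, r) for r in range(4)])
```

In practice the roll would be a random draw per DNS query, so traffic converges to the weight ratio.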

What are the two types of distribution: Web Distribution

-Static and dynamic content including .html, .css, .php, and graphic files - Distributes files over HTTP and HTTPS - Add, update, or delete objects, and submit data from web forms - Use live streaming to stream an event in real time. To use CloudFront live streaming, create a web distribution.

Temporary security credentials work almost identically to long-term access key credentials that IAM users can use, with the following differences:

-Temporary security credentials are short-term - they can be configured to last anywhere from a few minutes to several hours - After the credentials expire, AWS no longer recognizes them or allows any kind of access to API requests made with them - Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested - When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permission to do so.

When using EC2 for custom origins Amazon recommends:

-use an AMI that automatically installs the software for a web server -use ELB to handle traffic across multiple EC2 instances -Specify the URL of your load balancer as the domain name of the origin server

Routing policies: Multi-value answer

-use for responding to DNS queries with up to eight healthy records selected at random

Routing policies: Geo-proximity (requires traffic flow):

-Use for routing traffic based on the location of resources and, optionally, shifting traffic from resources in one location to resources in another.

Moving Domain names between distributions:

-you can move subdomains yourself -for the root domain you need to use AWS support

VPC flow logs with peering

-You can't enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account - You can't tag a flow log - You can't change the configuration of a flow log after it's been created (you need to delete and re-create it).

How Application Auto scaling works:

-You create a scaling policy for a table or a global secondary index - The scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the minimum and maximum provisioned capacity unit settings for the table or index - The scaling policy also contains a target utilization - the percentage of consumed provisioned throughput at a point in time - Application Auto Scaling uses a target-tracking algorithm to adjust the provisioned throughput of the table (or index) upward or downward in response to actual workloads, so that the actual capacity utilization remains at or near your target utilization.

Advantages of STS are:

-You do not have to distribute or embed long-term AWS security credentials with an application - You can provide access to your AWS resources to users without having to define an AWS identity for them (temporary security credentials are the basis for IAM roles and ID federation) - The temporary security credentials have a limited lifetime, so you do not have to rotate them or explicitly revoke them when they're no longer needed - After temporary security credentials expire, they cannot be reused (you can specify how long the credentials are valid for, up to a maximum limit).

Routing policy charges

-You pay per hosted zone per month (no partial months) - A hosted zone deleted within 12 hours of creation is not charged (queries are charged) - You pay for queries; latency-based routing queries are more expensive, and Geo DNS and geo-proximity also have higher prices - Alias records are free of charge - Health checks are charged, with different prices for AWS vs non-AWS endpoints - You do not pay for the records that you add to your hosted zones.

Configuration Items: A configuration item (CI) is the configuration of a resource at a given point in time. A CI consists of 5 sections:

1. Basic information about the resource that is common across different resources (e.g. Amazon Resource Names, tags) 2. Configuration data specific to the resource (e.g. EC2 instance type) 3. Map of relationships with other resources (e.g. EC2 volume vol-3434df43 is "attached to instance" EC2 instance i-3432ee3a) 4. AWS CloudTrail event IDs that are related to this state 5. Metadata that helps you identify information about the CI, such as the version of this CI and when this CI was captured.
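The five sections might be sketched as a Python dict like the one below. The field names and values are illustrative only, not the exact AWS Config schema:

```python
# Illustrative shape of a configuration item (CI); keys mirror the five
# sections listed above, values are made up for the example.
configuration_item = {
    "basicInformation": {"arn": "arn:aws:ec2:us-east-1:111122223333:instance/i-3432ee3a",
                         "tags": {"env": "prod"}},                 # 1. common info
    "configuration": {"instanceType": "t3.micro"},                 # 2. resource-specific
    "relationships": [{"resourceId": "vol-3434df43",
                       "name": "attached to instance"}],           # 3. relationship map
    "relatedEvents": ["example-cloudtrail-event-id"],              # 4. CloudTrail event IDs
    "metadata": {"configurationItemVersion": "1.3",
                 "captureTime": "2020-01-01T00:00:00Z"},           # 5. CI metadata
}
print(sorted(configuration_item))
```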

AWS Step Functions: how does it work

1. Define the steps of your workflow in the JSON-based Amazon States Language. The visual console automatically graphs each step in the order of execution. 2. Start an execution to visualize and verify that the steps of your application are operating as intended. The console highlights the real-time status of each step and provides a detailed history of every execution. 3. AWS Step Functions operates and scales the steps of your application and the underlying compute for you to help ensure your application executes reliably under increasing demand. - Apps can interact with and update the workflow via the Step Functions API - The visual interface describes the flow and real-time status - Detailed logs of each step's execution
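A minimal Amazon States Language definition, expressed here as a Python dict for readability, might look like the following. The state names are made up; a Pass state simply forwards its input, and Succeed ends the execution:

```python
import json

# Minimal two-state workflow in the Amazon States Language (illustrative).
state_machine = {
    "Comment": "A trivial pass-through workflow",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {"Type": "Pass", "Next": "Done"},  # forwards input unchanged
        "Done": {"Type": "Succeed"},                       # terminal success state
    },
}
print(json.dumps(state_machine, indent=2))
```

The JSON produced here is what you would upload as the state machine definition; the console then renders the StartAt/Next chain as the visual graph.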

There are a couple of ways STS can be used: Scenario 2

1. Develop an Identity Broker to communicate with LDAP and AWS STS 2. The Identity Broker authenticates with LDAP first, then gets an IAM role associated with the user 3. The application then authenticates with STS and assumes that IAM role 4. The application uses that IAM role to interact with the service.

There are a couple of ways STS can be used: Scenario 1

1. Develop an Identity Broker to communicate with LDAP and AWS STS 2. The Identity Broker always authenticates with LDAP first, then with AWS STS 3. The application then gets temporary access to AWS resources.

You can consolidate logs from multiple accounts using an S3 bucket:

1. Turn on CloudTrail in the paying account 2. Create a bucket policy that allows cross-account access 3. Turn on CloudTrail in the other accounts and use the bucket in the paying account. You can integrate CloudTrail with CloudWatch Logs to deliver data events captured by CloudTrail to a CloudWatch Logs log stream.

Data is encrypted in one of the following three scenarios:

1. You can use KMS APIs directly to encrypt and decrypt data using your master keys stored in KMS 2. You can choose to have AWS services encrypt your data using your master keys stored in KMS; in this case data is encrypted using data keys that are protected by your master keys in KMS 3. You can use the AWS Encryption SDK, which is integrated with AWS KMS, to perform encryption within your own applications, whether they operate in AWS or not.

Amazon SQS max timeout

12 hours. An Amazon SQS message can contain up to 10 metadata attributes.

DynamoDB limits

256 tables per region, no limit on the size of the table. Read and write capacity unit limits vary per region.

4. An application requires a highly available relational database with an initial storage capacity of 8TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements? A. DynamoDB B. Amazon S3 C. Amazon Aurora D. Amazon Redshift

4) C - Amazon Aurora is a relational database that will automatically scale to accommodate data growth. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.

What is the granularity of point-in-time recovery?

5 mins

How many read replicas of a production DB can you have?

5. You cannot have more than four instances involved in a replication chain.

1) A customer relationship management (CRM) application runs on Amazon EC2 instances in multiple Availability Zones behind an Application Load Balancer. If one of these instances fails, what occurs? A. The load balancer will stop sending requests to the failed instance. B. The load balancer will terminate the failed instance. C. The load balancer will automatically replace the failed instance. D. The load balancer will return 504 Gateway Timeout errors until the instance is replaced.

A - An Application Load Balancer (ALB) sends requests to healthy instances only. An ALB performs periodic health checks on targets in a target group. An instance that fails health checks for a configurable number of consecutive times is considered unhealthy. The load balancer will no longer send requests to the instance until it passes another health check.

9) A company needs to maintain access logs for a minimum of 5 years due to regulatory requirements. The data is rarely accessed once stored, but must be accessible with 1 day's notice if it is needed. What is the MOST cost-effective data storage solution that meets these requirements? A. Store the data in Amazon S3 Glacier Deep Archive storage and delete the objects after 5 years using a lifecycle rule. B. Store the data in Amazon S3 Standard storage and transition to Amazon S3 Glacier after 30 days using a lifecycle rule. C. Store the data in logs using Amazon CloudWatch Logs and set the retention period to 5 years. D. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage and delete the objects after 5 years using a lifecycle rule.

A - Data can be stored directly in Amazon S3 Glacier Deep Archive. This is the cheapest S3 storage class.

5) A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application? A. EBS Provisioned IOPS SSD B. EBS Throughput Optimized HDD C. EBS General Purpose SSD D. EBS Cold HDD

A - EBS Provisioned IOPS SSD provides sustained performance for mission-critical low-latency workloads. EBS General Purpose SSD can provide bursts of performance up to 3,000 IOPS and has a maximum baseline performance of 10,000 IOPS for volume sizes greater than 3.3 TB. The two HDD options are lower-cost, high-throughput volumes.

7) An analytics company is planning to offer a site analytics service to its users. The service will require that the users' web pages embed a JavaScript file that is hosted in the company's Amazon S3 bucket. What must the Solutions Architect do to ensure that the script will successfully execute? A. Enable cross-origin resource sharing (CORS) on the S3 bucket. B. Enable S3 versioning on the S3 bucket. C. Provide the users with a signed URL for the script. D. Configure a bucket policy to allow public execute privileges.

A - Web browsers will block the execution of a script that originates from a server with a different domain name than the web page. Amazon S3 can be configured with CORS to send HTTP headers that allow the script execution.

10) A company uses Reserved Instances to run its data-processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would allow it to provide increased capacity as cost-effectively as possible. What should a Solutions Architect do to accomplish this? A. Deploy On-Demand Instances during periods of high demand. B. Create a second Amazon EC2 reservation for additional instances. C. Deploy Spot Instances during periods of high demand. D. Increase the instance size of the instances in the Amazon EC2 reservation to support the increased workload.

A - While Spot Instances would be the least costly option, they are not suitable for jobs that cannot be interrupted or must complete within a certain time period. On-Demand Instances would be billed for the number of seconds they are running.

Database instance (DB instance)

A DB instance is a database environment in the cloud with the compute and storage resources you specify. Database instances are accessed via endpoints. Endpoints can be retrieved via the DB instance description in the AWS Management Console, the DescribeDBInstances API, or the describe-db-instances command. By default, customers are allowed to have up to a total of 40 Amazon RDS DB instances (only 10 of these can be Oracle or MS SQL unless you have your own licenses).

DB Subnet Groups

A DB subnet group is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB instances.

Components of a VPC Hardware VPN Connection

A hardware-based VPN connection between your Amazon VPC and your data center, home network, or co-location facility.

AWS snowmobile

A literal shipping container full of storage (up to 100PB) and a truck to transport it. Exabyte scale with up to 100PB per snowmobile.

Components of a VPC A virtual private cloud:

A logically isolated virtual network in the AWS cloud. You define a VPC's IP address space from ranges you select.

Components of VPC Peering Connection

A peering connection enables you to route traffic via private IP addresses between two peered VPCs

What record in the Kinesis data stream consist of?

A record is the unit of data stored in an Amazon Kinesis data stream. A record is composed of a sequence number, partition key, and data blob. By default, records of a stream are accessible for up to 24 hours from the time they are added to the stream (this can be raised to 7 days by enabling extended data retention). A data blob is the data of interest your data producer adds to a data stream. The maximum size of a data blob within one record is 1 megabyte (MB). A stream is composed of one or more shards.

IAM infrastructure elements: Resources

A resource is an entity that exists within a service, e.g. EC2 instances, S3 buckets, IAM users, and DynamoDB tables. Each AWS service defines a set of actions that can be performed on the resource. - After AWS approves the actions in your request, those actions can be performed on the related resources within your account.

Roles and federation

A role can be assigned to a federated user who signs in using an external identity provider.

Components of a VPC Subnet

A segment of a VPC's IP address range where you can place groups of isolated resources (maps to an AZ, 1:1)

What is a shard in a Kinesis data stream?

A shard is the base throughput unit of an Amazon Kinesis data stream. One shard provides a capacity of 1 MB/sec data input and 2 MB/sec data output. Each shard can support up to 1,000 PUT records per second. When the data rate increases, add more shards to increase the size of the stream. Remove shards when the data rate decreases. Partition keys are used to group data by shard within a stream. Kinesis streams use KMS master keys for encryption. To read from or write to an encrypted stream the producer and consumer applications must have permission to access the master key.
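A back-of-the-envelope shard count follows directly from the per-shard limits above (1 MB/s in, 2 MB/s out, 1,000 PUT records/s); the helper below is an illustrative sizing calculation, not an AWS API:

```python
import math

def shards_needed(ingest_mb_per_sec, egress_mb_per_sec, records_per_sec):
    """Minimum shard count so that none of the three per-shard limits
    (1 MB/s in, 2 MB/s out, 1,000 PUT records/s) is exceeded."""
    return max(
        math.ceil(ingest_mb_per_sec / 1.0),   # ingest-bound
        math.ceil(egress_mb_per_sec / 2.0),   # egress-bound
        math.ceil(records_per_sec / 1000.0),  # record-rate-bound
    )

# 5 MB/s in, 4 MB/s out, 2,500 records/s -> the ingest limit dominates
print(shards_needed(5, 4, 2500))
```

Whichever limit dominates determines the stream size; resharding adjusts the count as rates change.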

Components of VPC Egress-only Internet Gateway

A stateful gateway to provide egress only access for IPv6 traffic from the VPC to the internet

AD connector vs. Simple AD

AD Connector: you must have an existing AD; existing AD users can access AWS assets via IAM roles. It supports MFA via your existing RADIUS-based MFA infrastructure. Simple AD: a standalone AD based on Samba; supports user accounts, groups, group policies, and domains, plus Kerberos-based SSO. MFA is not supported and trust relationships are not supported.

4) A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the instances cannot be directly accessible from the internet. Which actions should be taken to allow the instances to download the needed patches? (Select TWO.) A. Configure a NAT gateway in a public subnet. B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier. C. Assign Elastic IP addresses to the application instances. D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier. E. Configure a NAT instance in a private subnet.

A, B - A NAT gateway forwards traffic from the instances in the private subnet to the internet or other AWS services, and then sends the response back to the instances. After a NAT gateway is created, the route tables for private subnets must be updated to point internet traffic to the NAT gateway.

7. A company is launching a new application and expects it to be very popular. The company requires a database layer that can scale along with the application. The schema will change frequently and the application cannot afford any downtime for database changes. Which AWS service allows the company to achieve these requirements?

A. Amazon Aurora B. Amazon RDS MySQL C. Amazon DynamoDB D. Amazon Redshift DynamoDB is a NoSQL DB, which means you can change the schema easily. It's also the only DB in the list that you can scale without any downtime.

1. Your company is starting to use AWS to host new web-based applications. A new two-tier application will be deployed that provides customers with access to data records. It is important that the application is highly responsive and retrieval times are optimized. You're looking for a persistent data store that can provide the required performance. From the list below, what AWS service would you recommend?

A. ElastiCache with the Memcached engine B. ElastiCache with the Redis engine C. Kinesis Data Streams D. RDS in a multi-AZ configuration ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads. Redis supports persistence, which makes it the best fit for a persistent data store.

5. An application you manage exports data from a relational database into an S3 bucket. The data analytics team wants to import this data into a Redshift cluster in a VPC in the same account. Due to the data being sensitive the security team has instructed you to ensure that the data traverses the VPC without being routed via the public internet. Which combination of actions would meet this requirement? (Choose 2)

A. Enable Amazon Redshift enhanced VPC routing B. Create a cluster Security Group to allow the Amazon Redshift cluster to access Amazon S3 C. Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3 D. Set up a NAT gateway in a private subnet to allow the Amazon Redshift cluster to access Amazon S3 E. Create and configure an Amazon S3 VPC endpoint. Amazon Redshift enhanced VPC routing forces all COPY and UNLOAD traffic between clusters and data repositories through a VPC. Implementing an S3 VPC endpoint will allow S3 to be accessed from other AWS services without traversing the public network. Amazon S3 uses the Gateway Endpoint type of VPC endpoint, with which a target for a specified route is entered into the VPC route table and used for traffic destined to a supported AWS service.

8. You are a solutions architect at Digital Cloud Training. One of your clients runs an application that writes data to a DynamoDB table. The client has asked how they can implement a function that runs code in response to item-level changes that take place in the DynamoDB table. What would you suggest to the client?

A. Enable server access logging and create an event source mapping between AWS Lambda and the S3 bucket to which the logs are written. B. Enable DynamoDB Streams and create an event source mapping between AWS Lambda and the relevant stream C. Create a local secondary index that records item-level changes and write some custom code that responds to updates to the index. D. Use Kinesis Data Streams and configure DynamoDB as a producer. DynamoDB Streams help you to keep a list of item-level changes, or provide a list of item-level changes that have taken place in the last 24 hrs. Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB Streams. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
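A minimal sketch of the Lambda side of this pattern, assuming the standard DynamoDB Streams event shape; the table keys and values here are illustrative:

```python
# Sketch of a Lambda handler consuming a DynamoDB Streams batch. Each record
# carries an eventName (INSERT, MODIFY, or REMOVE) and the item's keys.

def handler(event, context=None):
    changes = []
    for record in event.get("Records", []):
        changes.append((record["eventName"],
                        record["dynamodb"]["Keys"]["pk"]["S"]))
    return changes

# Illustrative event, shaped like a DynamoDB Streams → Lambda invocation:
sample_event = {"Records": [
    {"eventName": "INSERT", "dynamodb": {"Keys": {"pk": {"S": "user#1"}}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"pk": {"S": "user#2"}}}},
]}
print(handler(sample_event))  # [('INSERT', 'user#1'), ('MODIFY', 'user#2')]
```

Because Lambda polls the stream for you, the handler only ever sees batches of already-ordered change records; no polling code is needed in the function itself.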

6. A solutions architect requires a highly available database that can deliver an extremely low RPO. Which of the following configurations uses synchronous replication?

A. RDS Read Replica across AWS regions B. DynamoDB Read Replica C. RDS DB instance using a Multi-AZ configuration D. EBS volume synchronization A Recovery Point Objective (RPO) relates to the amount of data loss that can be allowed; in this case a low RPO means that you need to minimize the amount of data lost, so synchronous replication is required. Out of the options presented, only Amazon RDS in a multi-AZ configuration uses synchronous replication.

1. The financial institution you are working for stores large amounts of historical transaction records. There are over 25TB of records and your manager has decided to move them into the AWS cloud. You are planning to use Snowball as copying the data over the network would take too long. Which of the statements below are true regarding Snowball? (Choose 2)

A. Snowball can import to S3 but cannot export from S3. B. Uses a secure storage device for physical transportation C. Can be used with multipart upload D. Petabyte-scale data transport solution for transferring data into or out of AWS E. Snowball can be used for on-premise to on-premise migration (Snowball is a petabyte-scale data transport solution for transferring data into or out of AWS. It uses a secure device for physical transportation. The AWS Snowball client is software that is installed on a local computer and is used to identify, compress, encrypt, and transfer data. It uses 256-bit encryption (managed with AWS KMS) and tamper-resistant enclosures with TPM. Snowball can import to S3 or export from S3. Snowball cannot be used with multipart upload.)

4. Your company runs a two-tier application on the AWS cloud that is composed of a web front-end and an RDS database. The web front-end uses multiple EC2 instances in multiple Availability Zones (AZs) in an Auto Scaling group behind an Elastic Load Balancer. Your manager is concerned about a single point of failure in the RDS database layer. What would be the most effective approach to minimizing the risk of an AZ failure causing an outage to your database layer?

A. Take a snapshot of the database B. Increase the DB instance size C. Create a Read Replica of the RDS DB instance in another AZ D. Enable multi-AZ for the RDS DB instance. Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it. This provides a DR solution: if the AZ in which the primary DB resides fails, multi-AZ will automatically fail over to the replica instance with minimal downtime.

3. A customer has asked you to recommend the best solution for a highly available database. The database is a relational OLTP type of database and the customer does not want to manage the operating system the database runs on. Failover between AZs must be automatic. Which of the below options would you suggest to the customer?

A. Use DynamoDB B. Use RDS in a multi-AZ configuration C. Install a relational database on EC2 instances in multiple AZs and create a cluster D. Use Redshift in a multi-AZ configuration Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. With RDS you can configure Multi-AZ, which creates a replica in another AZ and synchronously replicates to it (DR only).

What does API Gateway provide to developers?

API Gateway provides developers with a simple, flexible, fully managed, pay-as-you-go service that handles all aspects of creating and operating robust APIs for application back ends. API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. You can create and distribute API keys to developers. There is an option to use AWS SigV4 to authorize access to APIs.

Additional features and benefits of API gateway

API Gateway provides several features that assist with creating and managing APIs: - Metering: define plans that meter and restrict third-party developer access to APIs - Security: API Gateway provides multiple tools to authorize access to APIs and control service operation access - Resiliency: manage traffic with throttling so that backend operations can withstand traffic spikes - Operations monitoring: API Gateway provides a metrics dashboard to monitor calls to services - Lifecycle management: operate multiple API versions and multiple stages for each version simultaneously so that existing applications can continue to call previous versions after new API versions are published
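The throttling feature above is essentially a token bucket: requests spend tokens, tokens refill at a steady rate, and a short burst above the rate is absorbed up to the bucket size. A toy sketch in Python (the rate and burst values are made up, not API Gateway defaults):

```python
import time

# Toy token bucket, the mechanism behind rate/burst throttling.
class Throttle:
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst          # tokens/sec, bucket size
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would receive a 429 Too Many Requests

t = Throttle(rate=0.1, burst=2)
print([t.allow() for _ in range(3)])  # [True, True, False]
```

The burst of 2 is absorbed immediately; the third back-to-back call is rejected because tokens refill at only 0.1/sec.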

CloudWatch accessed via?

The API, command line interface, AWS SDKs, and the AWS Management Console. CloudWatch integrates with IAM.

AWS Database Migration Service

AWS Database Migration Service helps you migrate databases to AWS quickly and securely. Use it along with the Schema Conversion Tool (SCT) to migrate databases to AWS RDS or EC2-based databases.

Benefits and features: Automatic Scaling

AWS Step Functions automatically scales the operations and underlying compute to run the steps of your application for you in response to changing workloads. Step Functions scales automatically to help ensure the performance of your application workflow remains consistently high as the frequency of requests increases.

Benefits and features: administrative security

AWS Step Functions is integrated with AWS Identity and Access Management (IAM). IAM policies can be used to control access to the Step Functions APIs.

Benefits and features: Execution event history

AWS Step Functions creates a detailed event log for every execution, so when things do go wrong, you can quickly identify not only where, but why. All of the execution history is available visually and programmatically to quickly troubleshoot and remediate failures.

Internet gateway (IGW)

AWS VPC side of the connection to the public internet.

The Snowball family

AWS Import/Export, AWS Snowball, AWS Snowball Edge, AWS Snowmobile

Aurora

An AWS proprietary database. High performance, low price; scales in 10GB increments; scales up to 32 vCPUs and 244GB RAM. Can handle the loss of up to two copies of data without affecting DB write availability and up to three copies without affecting read availability.

Benefits and features: Built- in error handling

AWS Step Functions tracks the state of each step, so you can automatically retry failed or timed-out tasks, catch specific errors, and recover gracefully, whether the task takes seconds or months to complete.

Benefits and features: high availability

AWS Step Functions has built-in fault tolerance. Step Functions maintains service capacity across multiple Availability Zones in each region to help protect application workflows against individual machine or data center facility failures. There are no maintenance windows or scheduled downtimes.

Amazon MQ standby

Active/standby brokers are designed for high availability. In the event of a failure of the broker, or even a full AZ outage, Amazon MQ automatically fails over to the standby broker so you can continue sending and receiving messages.

What is an Amazon VPC endpoint for Amazon S3?

An Amazon VPC endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to S3. The VPC endpoint routes requests to S3 and routes responses back to the VPC.

AWS KMS: Cloudtrail

All requests to use your master keys are logged in AWS CloudTrail so you can understand who used which key, under which context, and when. You can audit the use of keys via CloudTrail.

Cache Behavior

Allows you to configure a variety of CloudFront functionality for a given URL path pattern.

Interface endpoints are available for:

Amazon API Gateway, Amazon CloudWatch Logs, AWS CodeBuild, Amazon EC2 API, Elastic Load Balancing API, AWS Key Management Service, Amazon Kinesis Data Streams, AWS Service Catalog, Amazon SNS, AWS Systems Manager, endpoint services hosted by other AWS accounts, and supported AWS Marketplace partner services.

Cross Region Replication with global tables

Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database. When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagate ongoing data changes to all of them.

RDS Encryption

You can encrypt Amazon RDS instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instance. When using encryption at rest the following elements are also encrypted: - All DB snapshots - Backups - DB instance storage - Read Replicas

How long does RDS retain backups?

Amazon RDS retains backups of a DB instance for a limited, user-specified period of time called the retention period, which by default is 7 days but can be up to 35 days.

RDS Performance

Amazon RDS uses EBS volumes (never uses instance store) for DB and log storage.

Amazon Redshift retains backups

Amazon Redshift retains backups for 1 day by default. You can configure this to be as long as 35 days. If you delete the cluster you can choose to have a final snapshot taken and retained. Manual backups are not automatically deleted when you delete a cluster.

Application integration: Amazon SNS

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It uses simple APIs and offers easy integration with applications. The data type is JSON.

Amazon SQS

Amazon Simple Queue Service is a web service that gives you access to message queues that store messages waiting to be processed. SQS is a hosted queue for storing messages in transit between computers. It is used for distributed/decoupled applications.

Amazon SWF

Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. It tracks the state of your workflow which you interact and update via API. AWS recommends that for new applications customers consider Step Functions instead of SWF.

Amazon VPC and VPN

Amazon VPC offers you the flexibility to fully manage both sides of your Amazon VPC connectivity by creating a VPN connection between your remote network and a software VPN appliance running in your Amazon VPC network.

Elastic Network interfaces and IP addresses

An Elastic Network Interface (ENI) is a logical networking component that represents a NIC. ENIs can be attached to and detached from EC2 instances and the configuration of the ENI will be maintained. Every EC2 instance has a primary interface known as eth0 which cannot be detached. An Elastic IP address is a static IPv4 address that is associated with an instance or network interface. Elastic IPs are retained in your account whereas auto-assigned public IPs are released. You can have up to 5 Elastic IPs per region by default.

DynamoDB application

An application can read and write data to any replica table. If your application only uses eventually consistent reads, and only issues reads against one AWS region, then it will work without any modification. If your application requires strongly consistent reads, then it must perform all of its strongly consistent reads and writes in the same region. DynamoDB does not support strongly consistent reads across AWS regions.

Edge locations Caches

An edge location is the location where content is cached (separate from AWS regions/AZs). Requests are automatically routed to the nearest edge location. Edge locations are not tied to Availability Zones or regions. Edge locations are not just read-only; you can write to them too.

Simple AD

An inexpensive Active Directory-compatible service with common directory features. Standalone, fully managed directory on the AWS cloud. It is the best choice when you have fewer than 5,000 users and don't need advanced AD features. Can create users and control access to applications on AWS.

VPC Endpoints

An interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service.

Origins

An origin is the source of the files that the CDN will distribute. Origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53 - they can also be external (non-AWS). By default all newly created buckets are private.

When does an outage occur for backups?

An outage occurs if you change the backup retention period from zero to a non-zero value or the other way around. The retention period is the period for which AWS keeps the automated backups before deleting them.

2. A customer is deploying services in a hybrid cloud model. The customer has mandated that data is transferred directly between cloud data centers, bypassing ISPs. Which AWS service can be used to enable hybrid cloud connectivity? A. IPsec VPN B. Amazon Route 53 C. AWS Direct Connect D. Amazon VPC

Answer C. With AWS Direct Connect, you can connect to all your AWS resources in an AWS region, transfer your business-critical data directly from your datacenter, office, or colocation environment into and from AWS, bypassing your internet service provider and removing network congestion.

3. You have just created a new security group in your VPC. You have not yet created any rules. Which of the statements below are correct regarding the default state of the security group? (Choose 2) A. There is an outbound rule that allows all traffic to all IP addresses B. There are no inbound rules that allow traffic from the Internet Gateway D. There is an inbound rule allowing traffic from the internet to port 22 for management E. There is an outbound rule allowing traffic to the internet gateway.

Answer A & B. Custom security groups do not have inbound allow rules (all inbound traffic is denied by default). Default security groups do have inbound allow rules (allowing traffic from within the group). All outbound traffic is allowed by default in both custom and default security groups. Security groups act like a stateful firewall at the instance level; specifically, security groups operate at the network interface level of an EC2 instance. You can only assign permit rules in a security group, you cannot assign deny rules, and there is an implicit deny rule at the end of the security group. All rules are evaluated until a permit is encountered or evaluation continues until the implicit deny. You can create ingress and egress rules.

3. Your company would like to restrict the ability of most users to change their own passwords whilst continuing to allow a select group of users within specific user groups to do so. What is the best way to achieve this? (Choose 2) A. Under the IAM Password Policy, deselect the option to allow users to change their own passwords B. Create an IAM policy that grants users the ability to change their own password and attach it to the groups that contain the users C. Create an IAM Role that grants users the ability to change their own password and attach it to the groups that contain the users D. Create an IAM Policy that grants users the ability to change their own password and attach it to the individual user accounts E. Disable the ability for all users to change their own passwords using the AWS Security Token Service

Answer A & B. A password policy can be defined for enforcing password length, complexity, etc. You can allow or disallow the ability to change passwords using an IAM policy, and you should attach this to the group that contains the users, not to the individual users themselves. You cannot use an IAM role to perform this function. The AWS STS is not used for controlling password policies.

5. A Solutions Architect is designing the messaging and streaming layers of a serverless application. The messaging layer will manage communications between components and the streaming layer will manage real-time analysis and processing of streaming data. The Architect needs to select the most appropriate AWS services for these functions. Which services should be used for the messaging and streaming layers? (Choose 2) A. Use Amazon Kinesis for collecting, processing and analyzing real-time streaming data B. Use Amazon EMR for collecting, processing and analyzing real-time streaming data C. Use Amazon SNS for providing a fully managed messaging service D. Use Amazon SWF for providing a fully managed messaging service E. Use Amazon CloudTrail for collecting, processing and analyzing real-time streaming data

Answer A & C. Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data. With Amazon Kinesis Analytics, you can run standard SQL or build entire streaming applications using SQL. Amazon Simple Notification Service (Amazon SNS) provides a fully managed messaging service for pub/sub patterns using asynchronous event notifications and mobile push notifications for microservices, distributed systems, and serverless applications.

4. You have just created a new Network ACL in your VPC. You have not yet created any rules. Which of the statements below are correct regarding the default state of the Network ACL? (Choose 2) A. There is a default inbound rule denying all traffic B. There is a default outbound rule allowing all traffic C. There is a default inbound rule allowing traffic from the VPC CIDR block D. There is a default outbound rule allowing traffic to the internet gateway E. There is a default outbound rule denying all traffic

Answer A & E. A VPC automatically comes with a default network ACL which allows all inbound/outbound traffic. A custom NACL denies all traffic both inbound and outbound by default. Network ACLs function at the subnet level and you can have permit and deny rules. Network ACLs have separate inbound and outbound rules and each rule can allow or deny traffic. Network ACLs are stateless, so responses are subject to the rules for the direction of traffic. NACLs only apply to traffic that is ingress or egress to the subnet, not to traffic within the subnet.
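The rule-ordering behavior of a NACL (rules evaluated in ascending rule-number order, first match wins, implicit deny at the end) can be sketched in a few lines of Python; the rule set below is hypothetical:

```python
import ipaddress

# Hypothetical inbound NACL rules: (rule number, source CIDR, port, action).
inbound_rules = [
    (100, "0.0.0.0/0", 443, "allow"),   # HTTPS from anywhere
    (200, "10.0.0.0/16", 22, "allow"),  # SSH only from inside the VPC
]

def evaluate(src_ip: str, port: int) -> str:
    # Rules are tried in ascending rule-number order; first match wins.
    for _, cidr, rule_port, action in sorted(inbound_rules):
        if port == rule_port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "deny"  # the implicit deny-all (*) at the end of every NACL

print(evaluate("203.0.113.9", 443))  # allow
print(evaluate("203.0.113.9", 22))   # deny (SSH not allowed from the internet)
```

Because NACLs are stateless, the return traffic for the allowed HTTPS session would be evaluated separately against the outbound rules, unlike a stateful security group.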

8. You have launched an EC2 instance into a VPC. You need to ensure that instances have both a private and public DNS hostname. Assuming you did not change any settings during creation of the VPC, how will DNS hostnames be assigned by default? (Choose 2) A. In a default VPC, instances will be assigned a public and private DNS hostname B. In a non-default VPC, instances will be assigned a public and private DNS hostname C. In a default VPC, instances will be assigned a private but not a public DNS hostname D. In all VPCs, no DNS hostnames will be assigned E. In a non-default VPC, instances will be assigned a private but not a public DNS hostname.

Answer A & E. When you launch an instance into a default VPC, AWS provides the instance with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses for the instance. When you launch an instance into a non-default VPC, AWS provides the instance with a private DNS hostname and might provide a public DNS hostname, depending on the DNS attributes you specify for the VPC and whether your instance has a public IPv4 address.

2. To improve security in your AWS account you have decided to enable multi-factor authentication (MFA). You can authenticate using an MFA device in which two ways? (Choose 2) A. Locally to EC2 instances B. Through the AWS Management Console C. Using biometrics D. Using a key pair E. Using the AWS API

Answer B & E. You can authenticate using an MFA device in the following ways: - Through the AWS Management Console: the user is prompted for a user name, password and authentication code - Using the AWS API: restrictions are added to IAM policies and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests - Using the AWS CLI by obtaining temporary security credentials from STS (aws sts get-session-token)

4. Your company has started using the AWS CloudHSM for secure key storage. A recent administrative error resulted in the loss of credentials to access the CloudHSM. You need access to data that was encrypted using keys stored on the hardware security module. How can you recover the keys that are no longer accessible? A. There is no way to recover your keys if you lose your credentials B. Log a case with AWS support and they will use MFA to recover the credentials C. Restore a snapshot of the CloudHSM D. Reset the CloudHSM device and create a new set of credentials

Answer A, Amazon does not have access to your keys or credentials and therefore has no way to recover your keys if you lose your credentials.

2. Which service provides a way to convert video and audio files from their source format into versions that will play back on devices like smartphones, tablets and PCs? A. Amazon Elastic Transcoder B. AWS Glue C. Amazon Rekognition D. Amazon Comprehend

Answer A. Amazon Elastic Transcoder is a highly scalable, easy-to-use and cost-effective way for developers and businesses to convert (or "transcode") video and audio files from their source format into versions that will play back on devices like smartphones, tablets and PCs.

6. A client is in the design phase of developing an application that will process orders for their online ticketing system. The application will use a number of front-end EC2 instances that pick up orders and place them in a queue for processing by another set of back-end EC2 instances. The client will have multiple options for customers to choose the level of service they want to pay for. The client has asked how he can design the application to process the orders in a prioritized way based on the level of service the customer has chosen? A. Create multiple SQS queues, configure the front-end application to place orders onto a specific queue based on the level of service requested, and configure the back-end instances to sequentially poll the queues in order of priority B. Create a combination of FIFO queues and Standard queues and configure the applications to place messages into the relevant queue based on priority C. Create a single SQS queue, configure the front-end application to place orders on the queue in order of priority and configure the back-end instances to poll the queue and pick up messages in the order they are presented D. Create multiple SQS queues, configure exactly-once processing and set the maximum visibility timeout to 12 hours

Answer A. The best option is to create multiple queues and configure the application to place orders onto a specific queue based on the level of service. You then configure the back-end instances to poll these queues in order of priority so they pick up the higher-priority jobs first. Creating a combination of FIFO and standard queues is incorrect as creating a mixture of queue types is not the best way to separate the messages, and there is nothing in this option that explains how the messages would be picked up in the right order. Creating a single queue and configuring the applications to place orders on the queue in order of priority would not work as standard queues offer best-effort ordering, so there's no guarantee that the messages would be picked up in the correct order. Creating multiple SQS queues and configuring exactly-once processing (only possible with FIFO) would not ensure that the order of the messages is prioritized.

1. There is expected to be a large increase in write-intensive traffic to a website you manage that registers users onto an online learning program. You are concerned about writes to the database being dropped and need to come up with a solution to ensure this does not happen. Which of the solution options below would be the best approach to take? A. Update the application to write data to an SQS queue and provision additional EC2 instances to process the data and write it to the database B. Use RDS in a multi-AZ configuration to distribute writes across AZs C. Update the application to write data to an S3 bucket and provision additional EC2 instances to process the data and write it to the database D. Use CloudFront to cache the writes and configure the database as a custom origin

Answer A. This is a great use for Amazon Simple Queue Service (Amazon SQS). SQS is a web service that gives you access to message queues that store messages waiting to be processed and offers a reliable, highly scalable, hosted queue for storing messages in transit between computers. SQS is used for distributed/decoupled applications. In this circumstance SQS will reduce the risk of writes being dropped and is the best option presented.

1. A company needs to deploy virtual desktops for its customers in an AWS VPC, and would like to leverage their existing on-premise security principals. AWS WorkSpaces will be used as the virtual desktop solution. Which set of AWS services and features will meet the company's requirements? A. A VPN connection, AWS Directory Services B. A VPN connection, VPC NACLs and Security Groups C. A VPN connection, VPC NACLs and Security Groups D. Amazon EC2, and AWS IAM

Answer A. A security principal is an individual identity such as a user account within a directory. The AWS Directory Service includes: AWS Directory Service for Microsoft Active Directory, Simple AD, and AD Connector. One of these services may be ideal depending on detailed requirements. The AWS Directory Service for Microsoft AD and AD Connector both require a VPN or Direct Connect connection.

3. You are developing a multi-tier application that includes loosely-coupled, distributed application components and need to determine a method of sending notifications instantaneously. Using SNS, which transport protocols are supported? (Choose 2) A. FTP B. Email-JSON C. HTTPS D. Amazon SWF E. AWS Lambda

Answer B & C. Note that the question asks which transport protocols are supported, NOT which subscribers - therefore Lambda is not a valid answer. SNS supports notifications over multiple transport protocols: - HTTP/HTTPS: subscribers specify a URL as part of the subscription registration - Email/Email-JSON: messages are sent to registered addresses as email (text-based or JSON object) - SQS: users can specify an SQS standard queue as the endpoint - SMS: messages are sent to registered phone numbers as SMS text messages

7. You are a developer working for Digital Cloud Training. You are planning to write some code that creates a URL that lets users who sign in to your organization's network securely access the AWS Management Console. The URL will include a sign-in token that you get from AWS that authenticates the user to AWS. You are using Microsoft Active Directory Federation Services as your identity provider (IdP), which is compatible with SAML 2.0. Which of the steps below will you need to include when developing your custom identity broker? (Choose 2) A. Generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET B. Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user C. Delegate access to the IdP through the "Configure Provider" wizard in the IAM console D. Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token E. Assume an IAM role through the console or programmatically with the AWS CLI, Tools for Windows PowerShell or API

Answer B & D. The aim of this solution is to create a single sign-on solution that enables users signed in to the organization's Active Directory service to connect to AWS resources. When developing a custom identity broker you use the AWS STS service. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). The steps performed by the custom identity broker to sign users into the AWS Management Console are: 1. Verify that the user is authenticated by your local identity system 2. Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user 3. Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token 4. Construct a URL for the console that includes the token 5. Give the URL to the user or invoke the URL on the user's behalf. You cannot generate a pre-signed URL for this purpose using SDKs, delegate access through the IAM console, or directly assume IAM roles.
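Step 4 above (constructing the console URL) can be sketched in Python. The federation endpoint and its Action/Destination/SigninToken parameters follow the flow described above; the token value and Issuer are placeholders, since in practice the token comes back from the federation endpoint call in step 3:

```python
from urllib.parse import urlencode

def build_console_url(signin_token: str,
                      destination: str = "https://console.aws.amazon.com/") -> str:
    """Build the final sign-in URL the identity broker hands to the user."""
    params = urlencode({
        "Action": "login",
        "Issuer": "idp.example.com",   # your identity broker's URL (placeholder)
        "Destination": destination,    # console page to land on
        "SigninToken": signin_token,   # token obtained in step 3 (placeholder here)
    })
    return "https://signin.aws.amazon.com/federation?" + params

url = build_console_url("PLACEHOLDER_TOKEN")
print(url.startswith("https://signin.aws.amazon.com/federation?Action=login"))  # True
```

The broker then gives this URL to the user (step 5); the embedded sign-in token is what authenticates the browser session to the console.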

5. An event in CloudTrail is the record of an activity in an AWS account. What are the two types of events that can be logged in CloudTrail? A. System events, which are also known as instance level operations B. Management events, which are also known as control plane operations C. Platform events, which are also known as hardware level operations D. Data events, which are also known as data plane operations E. API events, which are also known as CloudWatch events

Answer B & D. Trails can be configured to log data events and management events: - Data events: these provide insight into the resource operations performed on or within a resource. These are also known as data plane operations. - Management events: these provide insight into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Management events can also include non-API events that occur in your account.

7. The operations team in our company is looking for a method to automatically respond to failed system status check alarms that are being received from an EC2 instance. The system in question is experiencing intermittent problems with its operating system software. Which two steps will help you to automate the resolution of the operating system software issues? (Choose 2) A. Create a CloudWatch alarm that monitors the "StatusCheckFailed_System" metric B. Create a CloudWatch alarm that monitors the "StatusCheckFailed_Instance" metric C. Configure an EC2 action that recovers the instance D. Configure an EC2 action that terminates the instance E. Configure an EC2 action that reboots the instance

Answer B & E. EC2 status checks are performed every minute and each returns a pass or a fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. System status checks (StatusCheckFailed_System) detect problems with your instance that require AWS involvement to repair, whereas instance status checks (StatusCheckFailed_Instance) detect problems that require your involvement to repair. The action to recover the instance is only supported on specific instance types and can be used only with StatusCheckFailed_System. Configuring an action to terminate the instance would not help resolve system software issues as the instance would be terminated.
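As a hedged sketch, the alarm from options B and E might be expressed as the keyword arguments passed to CloudWatch's put_metric_alarm API (for example via boto3). The instance ID, region, thresholds and alarm name below are placeholders, not values from the question.

```python
# Build the parameter set for an alarm on StatusCheckFailed_Instance whose
# action reboots the instance. The arn:aws:automate:<region>:ec2:reboot ARN
# is the built-in EC2 reboot alarm action.
def reboot_alarm_params(instance_id, region="us-east-1"):
    return {
        "AlarmName": f"reboot-on-instance-check-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_Instance",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,                 # status checks run every minute
        "EvaluationPeriods": 3,       # placeholder: three consecutive failures
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:reboot"],
    }

params = reboot_alarm_params("i-0123456789abcdef0")
print(params["AlarmActions"])
```

In a real deployment this dictionary would be unpacked into `cloudwatch.put_metric_alarm(**params)`.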

1. You are a Solutions Architect at Digital Cloud Training. A client from a large multinational corporation is working on a deployment of a significant amount of resources into AWS. The client would like to be able to deploy resources across multiple AWS accounts and regions using a single toolset and template. You have been asked to suggest a toolset that can provide this functionality. A. Use a CloudFormation template that creates a stack and specify the logical IDs of each account and region B. Use a CloudFormation StackSet and specify the target accounts and regions in which the stacks will be created C. Use a third-party product such as Terraform that has support for multiple AWS accounts and regions D. This cannot be done; use separate CloudFormation templates per AWS account and region

Answer B. AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions. An administrator account is the AWS account in which you create stack sets. A stack set is managed by signing in to the AWS administrator account in which it was created. A target account is the account into which you create, update, or delete one or more stacks in your stack set. Before you can use a stack set to create stacks in a target account, you must set up a trust relationship between the administrator and target accounts.
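A StackSet deployment typically involves two API calls: one to create the stack set from a template, and one to create stack instances in each target account/region pair. The sketch below shows those two requests as parameter dictionaries; the stack set name, template URL, account IDs and regions are all placeholders.

```python
# Illustrative only: the shapes of CloudFormation's create_stack_set and
# create_stack_instances requests for a multi-account, multi-region rollout.
def stack_set_requests(name, template_url, accounts, regions):
    create_stack_set = {
        "StackSetName": name,
        "TemplateURL": template_url,   # template stored in S3
    }
    create_stack_instances = {
        "StackSetName": name,
        "Accounts": accounts,  # target accounts (trust relationship required first)
        "Regions": regions,    # a stack is created in every account/region pair
    }
    return create_stack_set, create_stack_instances

cs, ci = stack_set_requests(
    "web-tier", "https://s3.amazonaws.com/example-bucket/template.yaml",
    ["111111111111", "222222222222"], ["us-east-1", "eu-west-1"])
print(len(ci["Accounts"]) * len(ci["Regions"]), "stacks from one operation")
```

Two accounts across two regions yields four stacks, all managed from the single administrator account.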

8. A health club is developing a mobile fitness app that allows customers to upload statistics and view their progress. Amazon Cognito is being used for authentication, authorization and user management and users will sign in with Facebook IDs. In order to securely store data in DynamoDB, the design should use temporary AWS credentials. What feature of Amazon Cognito is used to obtain temporary credentials to access AWS services? A. User Pools B. Identity Pools C. SAML Identity Providers D. Key Pairs

Answer B. With an identity pool, users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. A user pool is a user directory in Amazon Cognito. With a user pool, users can sign in to web or mobile apps through Amazon Cognito, or federate through a third-party identity provider (IdP).

2. A Solutions Architect needs to monitor application logs and receive a notification whenever a specific number of occurrences of certain HTTP status code errors occur. Which tool should the Architect use? A. CloudWatch Events B. CloudWatch Logs C. CloudTrail trails D. CloudWatch metrics

Answer B. You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. This is the best tool for this requirement.
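The mechanism behind this is a metric filter: CloudWatch Logs matches a filter pattern against incoming log events and emits a custom metric you can alarm on. The sketch below assumes a hypothetical space-delimited access-log format, log group name and metric names; it shows the structure a put_metric_filter request would take.

```python
# Hypothetical metric filter that counts HTTP 5xx status codes in an
# access log; each matching log event increments the Http5xxCount metric,
# which an alarm can then compare against a notification threshold.
metric_filter = {
    "logGroupName": "/app/web",                     # placeholder log group
    "filterName": "http-5xx-errors",
    # space-delimited pattern: match lines whose status field starts with 5
    "filterPattern": "[ip, user, ts, request, status_code=5*, size]",
    "metricTransformations": [{
        "metricName": "Http5xxCount",
        "metricNamespace": "App/Web",
        "metricValue": "1",                          # count one per matching event
    }],
}
print(metric_filter["filterPattern"])
```

A CloudWatch alarm on `App/Web Http5xxCount` then delivers the notification once the error count crosses the configured threshold.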

2. Which AWS service can be used to prepare and load data for analytics using an extract, transform and load (ETL) process? A. AWS Lambda B. Amazon Athena C. AWS Glue D. Amazon EMR

Answer C, AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

4. You recently enabled access logs on your Application Load Balancer (ALB). One of your colleagues would like to process the log files using a hosted Hadoop service. What configuration changes and services can be leveraged to deliver this requirement? A. Configure access logs to be delivered to DynamoDB and use EMR for processing the log files B. Configure access logs to be delivered to S3 and use Kinesis for processing the log files C. Configure access logs to be delivered to S3 and use EMR for processing the log files D. Configure access logs to be delivered to EC2 and install Hadoop for processing the log files

Answer C, Access logs can be enabled on ALB and configured to store data in an S3 bucket. Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.

1. You are undertaking a project to make some audio and video files that your company uses for onboarding new staff members available via a mobile application. You are looking for a cost-effective way to convert the files from their current formats into formats that are compatible with smartphones and tablets. The files are currently stored in an S3 bucket. What AWS service can help with converting the files? A. MediaConvert B. Data Pipeline C. Elastic Transcoder D. Rekognition

Answer C. Amazon Elastic Transcoder is a highly scalable, easy to use and cost-effective way for developers and businesses to convert (or "transcode") video and audio files from their source format into versions that will play back on devices like smartphones, tablets, and PCs. MediaConvert converts file-based content for broadcast and multi-screen delivery. Data Pipeline helps you move, integrate, and process data across AWS compute and storage resources, as well as your on-premises resources. Rekognition is a deep learning-based visual analysis service.

4. A Solutions Architect is creating the business process workflows associated with an order fulfillment system. What AWS service can assist with coordinating tasks across distributed application components? A. Amazon STS B. Amazon SQS C. Amazon SWF D. Amazon SNS

Answer C, Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks.

5. The AWS Acceptable Use Policy describes permitted and prohibited behavior on AWS and includes descriptions of prohibited security violations and network abuse. According to the policy, what is AWS's position on penetration testing? A. AWS does not allow any form of penetration testing B. AWS allows penetration testing by customers on their own VPC resources C. AWS allows penetration testing for some resources with prior authorization D. AWS allows penetration testing for all resources

Answer C. Permission is required for all penetration tests. You must complete and submit the AWS Vulnerability/Penetration Testing Request Form to request authorization for penetration testing to or originating from any AWS resources. There is a limited set of resources on which penetration testing can be performed.

5. One of your clients is transitioning their web presence into the AWS cloud. As part of the migration the client will be running a web application both on-premises and in AWS for a period of time. During the period of co-existence the client would like 80% of the traffic to hit the AWS-based web servers and 20% to be directed to the on-premises web servers. What method can you use to distribute traffic as requested? A. Use a Network Load Balancer to distribute traffic based on Instance ID. B. Use an Application Load Balancer to distribute traffic based on IP address C. Use Route 53 with a weighted routing policy and configure the respective weights D. Use Route 53 with a simple routing policy

Answer C. The Route 53 weighted routing policy is similar to simple routing, but you can specify a weight per record. You create records that have the same name and type and assign each record a relative weight; Route 53 sends traffic to each record in proportion to its weight divided by the sum of all the weights (the weights are relative and do not need to total 100). To stop sending traffic to a resource you can change the weight of the record to 0.
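The proportional behaviour can be checked with a line of arithmetic. This sketch computes the share of DNS responses each record receives from its weight, using the 80/20 split from the question:

```python
# Route 53 sends each weighted record weight / (sum of all weights) of
# the traffic; weights are relative numbers, not percentages.
def traffic_share(weight, all_weights):
    return weight / sum(all_weights)

aws_share = traffic_share(80, [80, 20])      # AWS-based web servers
onprem_share = traffic_share(20, [80, 20])   # on-premises web servers
print(aws_share, onprem_share)  # 0.8 0.2
```

Weights of 8 and 2, or 4 and 1, would produce exactly the same 80/20 split.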

9. You are putting together an architecture for a new VPC on AWS. Your on-premises data center will be connected to the VPC by a hardware VPN and the VPC has public and VPN-only subnets. The security team has requested that traffic hitting public subnets on AWS that is destined for on-premises applications must be directed over the VPN to the corporate firewall. How can this be achieved? A. In the public subnet route table, add a route for your remote network and specify the customer gateway as the target B. Configure a NAT Gateway and configure all traffic to be directed via the virtual private gateway C. In the public subnet route table, add a route for your remote network and specify the virtual private gateway as the target D. In the VPN-only subnet route table, add a route that directs all internet traffic to the virtual private gateway.

Answer C. Route tables determine where network traffic is directed. In your route table, you must add a route for your remote network and specify the virtual private gateway as the target. This enables traffic from your VPC that is destined for your remote network to route via the virtual private gateway and over one of the VPN tunnels. You can enable route propagation for your route table to automatically propagate your network routes to the table for you. You must select the virtual private gateway (the AWS side of the VPN) as the target in the route table. NAT gateways are used to enable internet access for EC2 instances in private subnets; they cannot be used to direct traffic to a virtual private gateway. You must create the route in the route table attached to the public subnet, not the VPN-only subnet.

6. You have been asked to come up with a solution for providing single sign-on to existing staff in your company who manage on-premises web applications and now need access to the AWS Management Console to manage resources in the AWS cloud. Which product combination provides the best solution to achieve this requirement? A. Use your on-premises LDAP directory with IAM B. Use IAM and MFA C. Use the AWS Security Token Service (STS) and SAML D. Use IAM and Amazon Cognito

Answer C. Single sign-on using federation allows users to log in to the AWS console without assigning IAM credentials. The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (such as federated users from an on-premises directory). Federation (typically Active Directory) uses SAML 2.0 for authentication and grants temporary access based on the user's AD credentials. The user does not need to be a user in IAM.

2. You are using a series of Spot instances that process messages from an SQS queue and store results in a DynamoDB table. Shortly after picking up a message from the queue AWS terminated the Spot instance. The Spot instance had not finished processing the message. What will happen to the message? A. The message will be lost as it would have been deleted from the queue when processed B. The message will remain in the queue and be immediately picked up by another instance C. The message will become available for processing again after the visibility timeout expires D. The results may be duplicated in DynamoDB as the message will likely be processed multiple times.

Answer C. The visibility timeout is the amount of time a message is invisible in the queue after a reader picks up the message. If a job is processed within the visibility timeout the message will be deleted. If a job is not processed within the visibility timeout the message will become visible again (and could be delivered twice). The maximum visibility timeout for an Amazon SQS message is 12 hours. The message will not be lost and will not be immediately picked up by another instance. As mentioned above, it will be available for processing in the queue again after the timeout expires. As the instance had not finished processing the message it should only be fully processed once. Depending on your application process, however, it is possible some data was written to DynamoDB.
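The state machine described above can be modelled in a few lines. This is a minimal in-memory simulation with a logical clock, not real SQS behaviour (real SQS is distributed and also enforces a receive order and retention period); it only illustrates how a received-but-not-deleted message reappears after the visibility timeout.

```python
# Toy single-consumer queue that models SQS visibility timeouts.
class Queue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # body -> logical time at which it is visible again

    def send(self, body, now=0):
        self.messages[body] = now            # visible immediately

    def receive(self, now):
        for body, visible_at in self.messages.items():
            if now >= visible_at:
                # hide the message until the visibility timeout expires
                self.messages[body] = now + self.visibility_timeout
                return body
        return None

    def delete(self, body):
        self.messages.pop(body, None)        # normal path after processing

q = Queue(visibility_timeout=30)
q.send("order-42")
first = q.receive(now=0)    # Spot instance picks up the message
during = q.receive(now=10)  # hidden: nobody else can receive it yet
after = q.receive(now=31)   # instance was terminated: message reappears
print(first, during, after)
```

Had the consumer called `delete` before the timeout, the third receive would have returned `None` instead.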

7. A solutions architect has created a VPC and is in the process of formulating the subnet design. The VPC will be used to host a two-tier application that will include internet-facing web servers, and internal-only DB servers. Zonal redundancy is required. How many subnets are required to support this requirement? A. 1 subnet B. 2 subnets C. 4 subnets D. 6 subnets

Answer C. Zonal redundancy indicates that the architecture should be split across multiple Availability Zones. Each subnet is mapped to a single AZ. A public subnet should be used for the internet-facing web servers and a separate private subnet should be used for the internal-only DB servers. Therefore, you need 4 subnets: one public and one private subnet in each of 2 AZs.
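The four-subnet layout can be sketched with the standard library's ipaddress module. The /16 VPC CIDR and the /24 subnet size below are arbitrary placeholders chosen for illustration:

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC into four /24 subnets:
# one public and one private subnet per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]

layout = {
    "public-az-a":  subnets[0],  # internet-facing web servers, AZ a
    "public-az-b":  subnets[1],  # internet-facing web servers, AZ b
    "private-az-a": subnets[2],  # internal-only DB servers, AZ a
    "private-az-b": subnets[3],  # internal-only DB servers, AZ b
}
for name, cidr in layout.items():
    print(name, cidr)
```

In a real VPC the public pair would route 0.0.0.0/0 to an internet gateway while the private pair would have no internet route.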

6. Your company currently uses Puppet Enterprise for infrastructure and application management. You are looking to move some of your infrastructure onto AWS and would like to continue to use the same tools in the cloud. What AWS service provides a fully managed configuration management service that is compatible with Puppet Enterprise? A. Elastic Beanstalk B. CloudFormation C. OpsWorks D. CloudTrail

Answer C. The only service that would allow you to continue to use the same tools is OpsWorks. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

3. A Solutions Architect is designing the system monitoring and deployment layers of a serverless application. The system monitoring layer will manage system visibility through recording logs and metrics and the deployment layer will deploy the application stack and manage workload changes through a release management process. The Architect needs to select the most appropriate AWS services for these functions. Which services and frameworks should be used for the system monitoring and deployment layers? (Choose 2) A. Use AWS X-Ray to package, test, and deploy the serverless application stack B. Use Amazon CloudTrail for consolidating system and application logs and monitoring custom metrics C. Use AWS Lambda to package, test, and deploy the serverless application stack D. Use AWS SAM to package, test, and deploy the serverless application stack E. Use Amazon CloudWatch for consolidating system and application logs and monitoring custom metrics

Answer D & E. The AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package, test, and deploy serverless applications. With Amazon CloudWatch, you can access system metrics on all the AWS services you use, consolidate system and application level logs, and create business key performance indicators (KPIs) as custom metrics for your specific needs.

3. A Solutions Architect is designing a solution for a financial application that will receive trading data in large volumes. What is the best solution for ingesting and processing a very large number of data streams in near real time? A. Amazon EMR B. Amazon Kinesis Firehose C. Amazon Redshift D. Amazon Kinesis Data Streams

Answer D. Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. It enables real-time processing of streaming big data and can be used for rapidly moving data off data producers and then continuously processing the data. Kinesis Data Streams stores data for later processing by applications (a key difference from Firehose, which delivers data directly to AWS services).

4. A systems integration consultancy regularly deploys and manages multi-tiered web services for customers on AWS. The SysOps team is facing challenges in tracking changes that are made to the web services and rolling back when problems occur. Which of the approaches below would BEST assist the SysOps team? A. Use AWS Systems Manager to manage all updates to the web services B. Use CodeDeploy to manage version control for the web services C. Use Trusted Advisor to record updates made to the web services D. Use CloudFormation templates to deploy and manage the web services.

Answer D. When you provision your infrastructure with AWS CloudFormation, the CloudFormation template describes exactly what resources are provisioned and their settings. Because these templates are text files, you simply track differences in your templates to track changes to your infrastructure, similar to the way developers control revisions to source code. For example, you can use a version control system with your templates so that you know exactly what changes were made, who made them, and when. If at any point you need to reverse changes to your infrastructure, you can use a previous version of your template.

5. You are a developer at Digital Cloud Training. An application stack you are building needs a message bus to decouple the application components from each other. The application will generate up to 300 messages per second without using batching. You need to ensure that a message is only delivered once, and duplicates are not introduced into the queue. It is not necessary to maintain the order of the messages. Which SQS queue type will you use? A. Standard queues B. Long polling queues C. Auto Scaling queues D. FIFO queues

Answer D. The key fact you need to consider here is that duplicate messages cannot be introduced into the queue. For this reason alone you must use a FIFO queue. The statement about it not being necessary to maintain the order of the messages is meant to confuse you, as that might lead you to think you can use a standard queue, but standard queues don't guarantee that duplicates are not introduced into the queue. FIFO (first-in-first-out) queues preserve the exact order in which messages are sent and received - note that this is not required in the question, but exactly-once processing is. FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it.
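A hedged sketch of what a FIFO send_message request looks like: the queue URL is a placeholder, and the example assumes content-based deduplication is disabled, so an explicit MessageDeduplicationId (here derived from the body) is supplied. Within the deduplication interval, a retried send with the same ID is dropped rather than enqueued twice.

```python
import hashlib

# Build the parameters for sending one message to a FIFO queue.
def fifo_send_params(queue_url, body, group_id="default"):
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,  # required on FIFO queues
        # dedup id derived from the content; a duplicate send within the
        # 5-minute deduplication interval is silently discarded by SQS
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

url = "https://sqs.us-east-1.amazonaws.com/111111111111/app.fifo"  # placeholder
p1 = fifo_send_params(url, "msg-1")
p2 = fifo_send_params(url, "msg-1")  # a retry of the same message
print(p1["MessageDeduplicationId"] == p2["MessageDeduplicationId"])  # True
```

Since ordering is not required here, each message could use its own MessageGroupId to maximize throughput.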

1. A user is testing a new service that receives location updates from 5,000 rental cars every hour. Which service will collect data and automatically scale to accommodate production workload? A. Amazon EC2 B. Amazon Kinesis Firehose C. Amazon EBS D. Amazon API Gateway

Answer B. What we need here is a service that can collect streaming data. The only suitable option available is Kinesis Firehose, which captures, transforms, and loads streaming data into destinations such as S3, Redshift, Elasticsearch and Splunk. Amazon EC2 is not suitable for collecting streaming data.

1. Your company shares some HR videos stored in an Amazon S3 bucket via CloudFront. You need to restrict access to the private content so users coming from specific IP addresses can access the videos, and ensure direct access via the Amazon S3 bucket is not possible. How can this be achieved? A. Configure CloudFront to require users to access the files using a signed URL, create an origin access identity (OAI) and restrict access to files in the Amazon S3 bucket to the OAI. B. Configure CloudFront to require users to access the files using signed cookies, create an origin access identity (OAI) and instruct users to log in with the OAI. C. Configure CloudFront to require users to access the files using a signed URL, and configure the S3 bucket as a website endpoint. D. Configure CloudFront to require users to access the files using signed cookies, and move the files to an encrypted EBS volume.

Answer A. A signed URL includes additional information, for example an expiration date and time, that gives you more control over access to your content. You can also specify the IP address or range of IP addresses of the users who can access your content.
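The IP restriction comes from a custom policy embedded in the signed URL. The sketch below shows the JSON policy structure combining an expiry with an IpAddress condition; the distribution domain, epoch time and CIDR range are placeholders, and in practice the policy is signed with a CloudFront key pair before being attached to the URL.

```python
import json

# Build a CloudFront custom policy for a signed URL: the resource is
# accessible only before the expiry time and only from the given CIDR.
def custom_policy(resource_url, expires_epoch, source_ip_cidr):
    return json.dumps({
        "Statement": [{
            "Resource": resource_url,
            "Condition": {
                "DateLessThan": {"AWS:EpochTime": expires_epoch},
                "IpAddress": {"AWS:SourceIp": source_ip_cidr},
            },
        }]
    })

policy = custom_policy("https://d111111abcdef8.cloudfront.net/video.mp4",
                       1700000000, "203.0.113.0/24")
print(policy)
```

Combined with an OAI on the bucket, viewers can only reach the video through CloudFront with a valid, unexpired, IP-matched signature.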

6. You need to set up a distribution method for some static files. The requests will be mainly GET requests and you are expecting a high volume of GETs, often exceeding 2000 per second. The files are currently stored in an S3 bucket. According to AWS best practices, what can you do to optimize performance? A. Integrate CloudFront with S3 to cache the content B. Use cross-region replication to spread the load across regions C. Use ElastiCache to cache the content D. Use S3 Transfer Acceleration

Answer A. Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket, and there are no limits to the number of prefixes in a bucket. Integrating CloudFront with S3 additionally caches the static content at edge locations, reducing the request rate on the bucket and improving performance for users.

RDS anti-pattern

Anti-patterns are patterns in architecture or development that are considered bad or sub-optimal practices; there may be a better service or method that would produce a better result.

Glacier archive size limits

Archives can be from 1 byte up to 40 TB. Archives of 1 byte to 4 GB can be uploaded to Glacier in a single operation. Archives from 100 MB up to 40 TB should be uploaded using the multipart upload API. Uploading archives is synchronous and downloading archives is asynchronous.
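As a quick arithmetic sketch of the multipart path: Glacier multipart uploads use a fixed part size that is a power-of-two number of megabytes between 1 MB and 4 GB, so the number of upload-part calls for a given archive is just a ceiling division. The sizes below are illustrative.

```python
import math

MB = 1024 ** 2
GB = 1024 ** 3

def part_count(archive_bytes, part_bytes):
    # one multipart upload-part call per chunk, last part may be smaller
    return math.ceil(archive_bytes / part_bytes)

# a 10 GB archive uploaded in maximum-size 4 GB parts needs 3 parts
print(part_count(10 * GB, 4 * GB))
```

Smaller part sizes mean more calls but finer-grained retry on failure.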

Read replicas are available on what

Are available for MySQL, PostgreSQL, MariaDB and Aurora (not SQL Server or Oracle). - You can take snapshots of PostgreSQL read replicas but cannot enable automated backups. - You can enable automated backups on MySQL and MariaDB read replicas. - You can enable writes to MySQL and MariaDB read replicas.

Roles

Are created and then "assumed" by trusted entities and define a set of permissions for making AWS service requests.

NAT gateway

Are managed for you by AWS: a fully managed NAT service that replaces the need for NAT instances on EC2. Must be created in a public subnet. Uses an Elastic IP address for the public IP. Instances in private subnets must have a route to the NAT gateway, usually the default route with destination 0.0.0.0/0. Created in a specified AZ with redundancy in that zone.

Two types of replication in Aurora

Aurora Replicas (up to 15) and MySQL read replicas (up to 5). You can create read replicas for an Amazon Aurora database in up to five AWS regions. This capability is available for Amazon Aurora with MySQL compatibility.

Aurora automated backup

Automated backups allow point-in-time recovery to any point within the retention period, down to a second. When automated backups are turned on for your DB instance, Amazon RDS automatically performs a full daily snapshot of your data (during your preferred backup window) and captures transaction logs (as updates to your DB instance are made).
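A point-in-time restore always creates a new DB instance from the daily snapshot plus replayed transaction logs. The sketch below shows an illustrative parameter set for RDS's restore_db_instance_to_point_in_time call; the instance identifiers and timestamp are placeholders.

```python
from datetime import datetime, timezone

# Parameters for restoring a new instance to an exact second within the
# backup retention period (the source instance is left untouched).
restore_params = {
    "SourceDBInstanceIdentifier": "prod-db",           # placeholder
    "TargetDBInstanceIdentifier": "prod-db-restored",  # new instance created
    # any second within the retention period can be chosen
    "RestoreTime": datetime(2023, 6, 1, 12, 30, 15, tzinfo=timezone.utc),
}
print(restore_params["RestoreTime"].isoformat())
```

With boto3 this dictionary would be unpacked into `rds.restore_db_instance_to_point_in_time(**restore_params)`.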

When does the automated backups get deleted?

Automated backups are deleted when you delete the RDS DB instance.

3) A company currently stores data for on-premises applications on local drives. The Chief Technology Officer wants to reduce hardware costs by storing the data in Amazon S3 but does not want to make modifications to the applications. To minimize latency, frequently accessed data should be available locally. What is a reliable and durable solution for a Solutions Architect to implement that will reduce the cost of local storage? A. Deploy an SFTP client on a local server and transfer data to Amazon S3 using AWS Transfer for SFTP. B. Deploy an AWS Storage Gateway volume gateway configured in cached volume mode. C. Deploy an AWS DataSync agent on a local server and configure an S3 bucket as the destination. D. Deploy an AWS Storage Gateway volume gateway configured in stored volume mode.

B - An AWS Storage Gateway volume gateway connects an on-premises software application with cloud-backed storage volumes that can be mounted as Internet Small Computer System Interface (iSCSI) devices from on-premises application servers. In cached volumes mode, all the data is stored in Amazon S3 and a copy of frequently accessed data is stored locally.

7) An application saves the logs to an S3 bucket. A user wants to keep the logs for one month for troubleshooting purposes, and then purge the logs. What feature will enable this? A. Adding a bucket policy on the S3 bucket. B. Configuring lifecycle configuration rules on the S3 bucket. C. Creating an IAM policy for the S3 bucket. D. Enabling CORS on the S3 bucket.

B - Lifecycle configuration allows lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. Bucket policies and IAM define access to objects in an S3 bucket. CORS enables clients in one domain to interact with resources in a different domain.
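The lifecycle configuration the answer describes can be sketched as the structure S3's put_bucket_lifecycle_configuration expects: a rule that expires (purges) log objects 30 days after creation. The prefix and rule ID are placeholders.

```python
# One lifecycle rule: delete objects under the logs/ prefix roughly one
# month after upload, satisfying the keep-then-purge requirement.
lifecycle_config = {
    "Rules": [{
        "ID": "purge-old-logs",         # placeholder rule name
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},  # only applies to log objects
        "Expiration": {"Days": 30},
    }]
}
print(lifecycle_config["Rules"][0]["Expiration"]["Days"])
```

A second rule could instead transition the objects to a cheaper storage class before expiry if the logs needed to be retained longer at lower cost.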

2. A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO) A. CloudWatch B. DynamoDB C. Elastic Load Balancing D. ElastiCache E. Storage Gateway

B, D- Both DynamoDB and ElastiCache provide high performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.

3. Company salespeople upload their sales figures daily. A Solutions Architect needs a durable storage solution for these documents that also protects against users accidentally deleting important documents. Which action will protect against unintended user actions? A. Store data in an EBS volume and create snapshots once a week. B. Store data in an S3 bucket and enable versioning. C. Store data in two S3 buckets in different AWS regions. D. Store data on EC2 instance storage.

B - If a versioned object is deleted, it can still be recovered by retrieving the final version. Response A would lose any changes committed since the previous snapshot. Storing the data in 2 S3 buckets would provide slightly more protection, but a user could still delete the object from both buckets. EC2 instance storage is ephemeral and should never be used for data requiring durability.

When to use Amazon S3

BLOBs and static websites

Promoted read replicas retain?

Backup retention period, backup window, and DB parameter group. Existing read replicas continue to function as normal. Each read replica has its own DNS endpoint.

DB snapshots backups

Backups are taken within a defined window. I/O is briefly suspended while backups initialize and may increase latency (applicable to single-AZ RDS). DB snapshots that are performed manually will be stored even after the RDS instance is deleted.

AWS Directory Service for Microsoft Active Directory

Best choice if you have more than 5,000 users and/or need a trust relationship set up. Includes software patching, replication, automated backups, replacing failed DCs and monitoring. Runs on Windows Server. Can perform schema extensions. You can set up trust relationships to extend authentication from on-premises Active Directories into the AWS cloud. On-premises users and groups can access resources in either domain using SSO. Requires a VPN or Direct Connect connection. Can be used as a standalone AD in the AWS cloud.

High Availability approaches for Networking

By creating subnets in the available AZs, you create Multi-AZ presence for your VPC. Best practice is to create at least two VPN Tunnels into your Virtual Private Gateway. Direct Connect is not HA by default, so you need to establish a secondary connection via another Direct Connect (ideally with another provider) or use a VPN.

What is the retention period for backups?

By default the retention period is 7 days if configured from the console for all DB engines except Aurora. The default retention period is 1 day if configured from the API or CLI. The default retention period for Aurora is 1 day regardless of how it is configured. You can increase the retention period up to 35 days. During the backup window I/O may be suspended.

9) An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster? A. Run the cluster in a different VPC and connect through VPC peering. B. Create a database user inside the Amazon Redshift cluster only for users on the network. C. Define a cluster security group for the cluster that allows access from the allowed networks. D. Only allow access to networks that connect with the shared services network via VPN.

C - A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.

6. A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message to an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month traffic is expected to increase by a factor of 10 and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic? A. Lambda function B. SQS queue C. EC2 instance D. DynamoDB table

C - A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across 2 availability zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.

5) A Solutions Architect wants to design a solution to save costs for EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the instances store data in instance memory (RAM) that must be present when the instances resume operation. Which approach should the Solutions Architect recommend to shut down and resume the instances? A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them. B. Snapshot the instances before stopping them. Restore the snapshot after restarting the instances. C. Run the applications on instances enabled for hibernation. Hibernate the instances before the shutdown. D. Note the Availability Zone for each instance before stopping it. Restart the instances in the same Availability Zones after the shutdown.

C - Hibernating an instance saves the contents of RAM to the Amazon EBS root volume. When the instance restarts, the RAM contents are reloaded.

10) A Solutions Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements? A. Public subnets for both the application tier and the database cluster B. Public subnets for the application tier, and private subnets for the database cluster C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway

C - The online application must be in public subnets to allow access from clients' browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets

8) A company's Security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on-premises. Which encryption options meet these requirements? (Select TWO.) A. Use Server-Side Encryption with Amazon S3 Managed Keys (SSE-S3). B. Use Server-Side Encryption with AWS KMS Managed Keys (SSE-KMS). C. Use Server-Side Encryption with Customer Provided Keys (SSE-C). D. Use client-side encryption to provide at-rest encryption. E. Use an AWS Lambda function triggered by Amazon S3 events to encrypt the data using the customer's keys.

C, D - Server-Side Encryption with Customer-Provided Keys (SSE-C) enables Amazon S3 to encrypt objects server side using an encryption key provided in the PUT request. The same key must be provided in GET requests for Amazon S3 to decrypt the object. Customers also have the option to encrypt data client side before uploading it to Amazon S3 and decrypting it after downloading it. AWS SDKs provide an S3 encryption client that streamlines the process.

API gateway and cross origin resource sharing

Can enable cross-origin resource sharing (CORS) for multiple-domain use with JavaScript/AJAX: -can be used to enable requests from domains other than the API's domain -allows the sharing of resources between different domains -the method (GET, PUT, POST etc.) for which you will enable CORS must exist in the API Gateway API before you enable CORS -if CORS is not enabled, a request received by an API resource from another domain will be blocked -enable CORS on the API's resources using the selected methods in the API Gateway console
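What enabling CORS amounts to is returning the standard CORS response headers on the preflight (OPTIONS) response. A minimal sketch of those headers — the origin and method list here are hypothetical, not any real API's configuration:

```python
# Build the headers a CORS preflight (OPTIONS) response must carry for a
# browser to allow the subsequent cross-origin request. Values are examples.
def cors_preflight_headers(allowed_origin: str, allowed_methods: list) -> dict:
    return {
        "Access-Control-Allow-Origin": allowed_origin,
        "Access-Control-Allow-Methods": ",".join(allowed_methods),
        "Access-Control-Allow-Headers": "Content-Type,Authorization",
    }

headers = cors_preflight_headers("https://app.example.com", ["GET", "POST"])
print(headers["Access-Control-Allow-Methods"])  # GET,POST
```

API Gateway's "Enable CORS" action generates an equivalent OPTIONS method response for you on the selected resources.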

Name servers and route 53

Changes to name servers may not take effect for up to 48 hours due to the DNS record time to live (TTL) values. You can transfer domains to Route 53 only if the top-level domain (TLD) is supported. You can transfer a domain from Route 53 to another registrar by contacting AWS support; however, this does not migrate the hosted zone by default (optional).

CloudHSM generate

CloudHSM allows you to securely generate, store and manage cryptographic keys used for data encryption in a way that keys are accessible only by you. A hardware security module (HSM) provides secure key storage and cryptographic operations within a tamper-resistant hardware device.

AWS CloudHSM

CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS Cloud.

Monitoring

CloudWatch is integrated with SQS and you can view and monitor queue metrics. CloudWatch metrics are automatically collected every 5 minutes. - CloudWatch considers a queue to be active for up to 6 hours if it contains any messages or if any API action accesses it. - No charge for CloudWatch (no detailed monitoring) - CloudTrail captures API calls from SQS and logs to a specified S3 bucket.

General Cloudfront concepts

Cloudfront is a web service that gives businesses and web application developers an easy and cost-effective way to distribute content with low latency and high data transfer speeds.

What is Cloudfront good for?

CloudFront is a good choice for distribution of frequently accessed static content that benefits from edge delivery, like popular website images, videos, media files or software downloads. Use it for dynamic, static, streaming, and interactive content. It is a global service: -ingress to upload objects -egress to distribute content

Cloudtrail and S3 buckets

CloudTrail saves logs to the S3 bucket you specify. CloudTrail captures information about all requests, whether they were made using the CloudFront console, the CloudFront API, the AWS SDKs, the CloudFront CLI, or another service. CloudTrail can be used to determine which requests were made, the source IP address, who made the request, etc. To view CloudFront requests in CloudTrail logs you must update an existing trail to include global services. To delete a distribution it must first be disabled (can take up to 15 minutes).

AWS CloudHSM service link to CloudHSM Cluster

Clusters can contain multiple HSM instances, spread across multiple Availability Zones in a region. HSM instances in a cluster are automatically synchronized and load balanced. - After creating and initializing a CloudHSM cluster, you can configure a client on your EC2 instance that allows your applications to use the cluster over a secure, authenticated network connection. Must be within a VPC and can be accessed via VPC peering. - Applications don't need to be in the same VPC, but the server or instance on which your application and the HSM client are running must have network (IP) reachability to all HSMs in the cluster.

6) A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the instance using its private IPv4 address. The Solutions Architect needs to design a solution that will allow traffic to be quickly directed to a standby instance if the application fails and becomes unreachable. Which approach will meet these requirements? A. Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary instance with the load balancer. Upon failure, de-register the instance and register the secondary instance. B. Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the secondary instance when the primary instance fails. C. Attach a secondary elastic network interface (ENI) to the instance configured with the private IP address. Move the ENI to the standby instance if the primary instance becomes unreachable. D. Associate an Elastic IP address with the network interface of the primary instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a secondary instance.

D - A secondary ENI can be added to an instance. While primary ENIs cannot be detached from an instance, secondary ENIs can be detached and attached to a different instance.

8. An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern? A. Access the data through an Internet Gateway. B. Access the data through a VPN connection. C. Access the data through a NAT Gateway. D. Access the data through a VPC endpoint for Amazon S3

D - VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.

2) A company needs to perform asynchronous processing, and has implemented Amazon Simple Queue Service (Amazon SQS) as part of a decoupled architecture. The company wants to ensure that the number of empty responses from polling requests are kept to a minimum. What should a Solutions Architect do to ensure that empty responses are reduced? A. Increase the maximum message retention period for the queue. B. Increase the maximum receives for the redrive policy for the queue. C. Increase the default visibility timeout for the queue. D. Increase the receive message wait time for the queue.

D - When the ReceiveMessageWaitTimeSeconds property of a queue is set to a value greater than zero, long polling is in effect. Long polling reduces the number of empty responses by allowing Amazon SQS to wait until a message is available before sending a response to a ReceiveMessage request.
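Long polling is just one queue attribute. A small sketch of the parameter validation and the attribute map a `SetQueueAttributes` call would take (shown as plain data, boto3-style, so nothing is sent to AWS):

```python
# ReceiveMessageWaitTimeSeconds: 0 means short polling; 1-20 seconds
# enables long polling (20 is the maximum wait).
def long_polling_attributes(wait_seconds: int) -> dict:
    if not 0 <= wait_seconds <= 20:
        raise ValueError("ReceiveMessageWaitTimeSeconds must be 0-20")
    return {"ReceiveMessageWaitTimeSeconds": str(wait_seconds)}

print(long_polling_attributes(20))  # maximum long-polling wait
```

Setting the attribute on the queue applies long polling to every consumer; it can also be overridden per `ReceiveMessage` request with the `WaitTimeSeconds` parameter.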

Transient Data Store (storage type)

Data is just temporarily stored and passed along to another process or persistent store. E.g.: SQS and SNS

Ephemeral Data Store (storage type)

Data is lost when the system is stopped. E.g.: EC2 instance store, Memcached

1. A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which option meets the requirement? A. Put the access key in an S3 bucket, and retrieve the access key on boot from the instance. B. Pass the access key to the instances through instance user data. C. Obtain the access key from a key server launched in a private subnet. D. Create an IAM role with permissions to access the table, and launch all instances with the new role.

D. IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution involving the creation of an access key then introduces the complexity of managing that secret.

Aurora DB Snapshots

DB snapshots are user-initiated and enable you to back up your DB instance in a known state as frequently as you wish, and then restore to that specific state. They cannot be used for point-in-time recovery. Snapshots are stored on S3 and remain there until deleted.

VPC and routing

DNS is supported. You must update route tables to configure routing, and must update the inbound and outbound rules for the VPC security groups to reference security groups in the peered VPC. When creating a VPC peering connection with another account you need to enter the account ID and VPC ID from the other account.

CloudWatch dashboards

Dashboards allow you to create, customize, interact with, and save graphs of AWS resources and custom metrics. Alarms can be used to monitor any Amazon CloudWatch metric in your account. Events are a stream of system events describing changes in your AWS resources. Logs help you to aggregate, monitor and store logs. Basic monitoring - 5 mins (free for EC2 instances, EBS volumes, ELBs and RDS DBs). Detailed monitoring - 1 min (chargeable).

VPC in different region

Data sent between VPCs in different regions is encrypted (traffic charges apply). Can peer with other accounts within or between regions.

DynamoDB additional Charges include:

Data transfer out, Backups per GB (continuous or on-demand), Global tables, DynamoDB Accelerator (DAX), DynamoDB streams

Negative side of HSM

Unlike KMS, CloudHSM does not natively integrate with many AWS services; it instead requires custom application scripting

DynamoDB global tables are ideal

DynamoDB global tables are ideal for massively scaled applications, with globally dispersed users. Global tables provide automatic multi-master replication to AWS regions world-wide, so you can deliver low-latency data access to your users no matter where they are located.

What zones should a DB subnet group be in?

Each DB subnet group should have subnets in at least two Availability Zones in a given region. It is recommended to configure a subnet group with subnets in each AZ (even for standalone instances). During the creation of an RDS instance you can select the DB subnet group and the AZ within the group in which to place the RDS DB instance.

DynamoDB Integrations

ElastiCache can be used in front of DynamoDB to improve the performance of reads on infrequently changed data. DynamoDB Streams integrate with AWS Lambda so functions can respond to item-level changes (triggers).

Elasticache EC2

ElastiCache is billed by node size and hours of use. ElastiCache EC2 nodes cannot be accessed from the Internet, nor can they be accessed by EC2 instances in other VPCs. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations. Nodes can be on-demand or reserved instances (but not spot instances).

Components of VPC VPC Endpoints

Enables private connectivity to services hosted in AWS, from within your VPC without using an Internet Gateway, VPN, network address translation (NAT) devices, or firewall proxies

Multi-AZ deployments for MySQL, MariaDB, Oracle, and PostgreSQL use what engine

Engines utilize synchronous physical replication

when to use Amazon ElastiCache

Fast temporary storage for small amounts of data. Highly volatile data.

Provides a subset of the features provided by AWS MS AD

Features include: manage user accounts, manage groups, apply group policies, securely connect to EC2 instances, Kerberos-based SSO; supports joining Linux- or Windows-based EC2 instances. AWS provides monitoring, daily snapshots, and recovery services; manual snapshots are possible. It is also compatible with WorkSpaces, WorkDocs, WorkMail and QuickSight. You can also sign in to the AWS Management Console with Simple AD user accounts to manage AWS resources.

Firehose and Amazon S3

For an Amazon S3 destination, streaming data is delivered to your S3 bucket. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket.

What happens if a Redshift node fails?

For node failures, the data warehouse cluster will be unavailable for queries and updates until a replacement node is provisioned and added to the DB.

Amazon RDS

General RDS Concepts: Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud. · RDS is an Online Transaction Processing (OLTP) type of database · Best for structured, relational data store requirements · Aims to be a drop-in replacement for existing on-premise instances of the same databases · Automated backups and patching applied in customer-defined maintenance windows · Push-button scaling, replication and redundancy

SNS Subscribers

HTTP, HTTPS, Email, Email-JSON, SQS, Application, Lambda

AWS IAM

IAM is used to securely control individual and group access to AWS resources. It makes it easy to provide multiple users secure access to AWS resources. IAM can be used to manage: -Users, Groups, Access policies, Roles, User credentials, User password policies, Multi-factor authentication (MFA), API keys for programmatic access (CLI). It enables shared access to your AWS account. By default new users are created with NO access to any AWS services - they can only log in to the AWS console. Each IAM user has three main components: - A username, A password, Permissions to access various resources.

Service account

IAM users can be created to represent applications; these are known as service accounts. You can have up to 5,000 users per AWS account. A unique ID is also created, which is returned when you create the user using the API, Tools for Windows PowerShell, or the AWS CLI. - The access key ID and secret access key are not the same as a password and cannot be used to log in to the AWS console. You can allow or disallow the ability to change passwords using an IAM policy.

IAM users and roles

IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. You can delegate using roles.

DynamoDB is a web service that uses?

It uses HTTP over SSL (HTTPS) as a transport and JSON as a message serialization format.

Cloudformation physical IDs

Identify resources outside of AWS cloudformation templates, but only after the resources have been created.

Public subnet

If a subnet's traffic is routed to an internet gateway, the subnet is known as a public subnet.

High Availability approaches for database.

If possible, choose DynamoDB over RDS because of its inherent fault tolerance. If DynamoDB can't be used, choose Aurora because of its redundancy and automatic recovery features. If Aurora can't be used, choose Multi-AZ RDS. Frequent RDS snapshots can protect against data corruption or failure, and they won't impact performance of a Multi-AZ deployment. Regional replication is also an option, but it will not be strongly consistent. If the database runs on EC2, you have to design the HA yourself.

Using MQ

Use MQ if you want an easy, low-hassle path to migrate from existing message brokers to AWS. MQ also provides encryption of your messages at rest and in transit. Connections to the broker use SSL, and access can be restricted to a private endpoint within your Amazon VPC, which allows you to isolate your broker in your own virtual network. - You can configure security groups to control network access to your broker. - Amazon MQ is integrated with Amazon CloudWatch and AWS CloudTrail. With CloudWatch you can monitor metrics on your brokers, queues, and topics.

Multi-AZ failover read replicas

In a Multi-AZ failover the read replicas are switched to the new primary. Read replicas must be explicitly deleted. If a source DB instance is deleted without deleting the replicas, each replica becomes a standalone single-AZ DB instance.

Multi-AZ configurations

In multi-AZ configuration snapshots and automated backups are performed on the standby to avoid I/O suspension on the primary instance.

Limits

In-flight messages are messages that have been picked up by a consumer but not yet deleted from the queue. Standard queues have a limit of 120,000 in-flight messages per queue; FIFO queues have a limit of 20,000. Queue names can be up to 80 characters. Messages are retained for 4 days by default, up to 14 days.
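The name and retention limits above are easy to encode as a small validation sketch (the queue name is a made-up example; the numeric limits come from the figures in the note):

```python
# SQS limits from the note: queue names up to 80 characters; message
# retention from 60 seconds to 14 days (default is 4 days = 345,600 s).
MAX_NAME_LEN = 80
MIN_RETENTION_S = 60
MAX_RETENTION_S = 14 * 24 * 3600

def validate_queue(name: str, retention_s: int = 4 * 24 * 3600) -> bool:
    return (len(name) <= MAX_NAME_LEN
            and MIN_RETENTION_S <= retention_s <= MAX_RETENTION_S)

print(validate_queue("orders-queue"))  # default 4-day retention is valid
print(validate_queue("x" * 81))        # name too long
```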

AWS Microsoft AD supports AWS applications including WorkSpaces, WorkDocs, QuickSight, Chime, Amazon Connect, and RDS for Microsoft SQL Server

Includes security features such as: - Fine-grained password policy management - LDAP encryption through SSL/TLS - HIPAA and PCI DSS approved - Multi-factor authentication through integration with existing RADIUS-based MFA infrastructure. Monitoring is provided through CloudTrail, notifications through SNS, plus daily automated snapshots. Scalable service that scales by adding domain controllers. Deployed in an HA configuration across two AZs in the same region. AWS Microsoft AD does not support a replication mode where replication to an on-premise AD takes place.

Automated backups are only supported for?

The InnoDB storage engine for MySQL (not MyISAM).

VPC Peering in Instance

Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection). - AWS uses the existing infrastructure of a VPC to create a VPC peering connection. It is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. Can only have one peering connection between any two VPCs at a time. Cannot have overlapping CIDR ranges.
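The "no overlapping CIDR ranges" constraint can be checked directly with the stdlib `ipaddress` module; the CIDRs below are illustrative:

```python
import ipaddress

# Peering requires non-overlapping CIDR ranges between the two VPCs.
def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))   # overlap - cannot peer
print(cidrs_overlap("10.0.0.0/16", "172.31.0.0/16"))   # disjoint - can peer
```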

AWS config

Is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, resource change tracking, and troubleshooting. Allows you to assess, audit and evaluate configurations of your AWS resources. Creates a baseline of various configuration settings and files and can then track variations against that baseline.

What is an AWS region?

Is a geographic location where AWS provides multiple physically separated and isolated Availability Zones which are connected with low-latency, high-throughput, and highly redundant networking.

Internet Gateways

Is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet.

Cloudformation

Is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.

What is Amazon Macie?

Is an AI-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization. It continuously monitors data access activity for anomalies, and delivers alerts when it detects risk of unauthorized access or inadvertent data leaks.

Examples of Config Rules:

Is backup enabled on RDS? Is CloudTrail enabled on the AWS account? Are EBS volumes encrypted?

Amazon DynamoDB

Is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

Amazon States Language (declarative)

JSON

AWS KMS: Secret Manager

KMS differs from Secrets Manager as it is purpose-built for encryption key management. KMS is validated by many compliance schemes.

DynamoDB best practices

Keep item sizes small. If you are storing serial data in DynamoDB that will require actions based on date/time, use separate tables for days, weeks, and months. Store more frequently and less frequently accessed data in separate tables. If possible, compress larger attribute values. Store objects larger than 400KB in S3 and use pointers (S3 object IDs) in DynamoDB.
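The pointer-to-S3 pattern above can be sketched as a simple size check. The bucket key naming and item shape here are hypothetical, and the S3 write itself is elided:

```python
# Items at or under the 400 KB DynamoDB item limit are stored inline;
# larger payloads would be written to S3 (not shown) with only a
# pointer kept in the DynamoDB item.
ITEM_LIMIT_BYTES = 400 * 1024

def build_item(order_id: str, payload: bytes) -> dict:
    if len(payload) <= ITEM_LIMIT_BYTES:
        return {"order_id": order_id,
                "payload": payload.decode("utf-8", "replace")}
    # Too large for DynamoDB: keep a pointer to a hypothetical S3 object.
    return {"order_id": order_id, "s3_key": f"orders/{order_id}.bin"}

small = build_item("ord-1", b"tiny payload")
large = build_item("ord-2", b"x" * (500 * 1024))
print("s3_key" in small, "s3_key" in large)  # False True
```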

IAM Best Practices

Lock away the AWS root user's access keys. Create individual IAM users. Use AWS-defined policies to assign permissions whenever possible. Use groups to assign permissions to IAM users. Grant least privilege. Use access levels to review IAM permissions. Enable MFA for privileged users. Use roles for applications that run on AWS EC2 instances. Delegate by using roles instead of sharing credentials. Rotate credentials regularly. Use policy conditions for extra security. Monitor activity in your AWS account.

RDS failover may be triggered in the following circumstances

Loss of primary AZ or primary DB instance failure. Loss of network connectivity on primary. Compute (EC2) unit failure on primary. Storage (EBS) unit failure on primary. The primary DB instance is changed. Patching of the OS on the primary DB instance. Manual failover (reboot with failover selected on primary). - During failover RDS automatically updates configuration (including the DNS endpoint) to use the second node. It is recommended to use the endpoint rather than the IP address to point applications to the RDS DB.

Maintenance DB instance

Maintenance windows are configured to allow DB instance modifications to take place, such as scaling and software patching (some operations require the DB instance to be taken offline briefly). You can define the maintenance window or AWS will schedule a 30-minute window. · Windows integrated authentication for SQL Server only works with domains created using the AWS Directory Service - you need to establish a trust with an on-premise AD directory.

Amazon Kinesis

Makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. A collection of services for processing streams of various data. Data is processed in "shards" - with each shard able to ingest 1,000 records per second. There is a default limit of 500 shards, but you can request an increase to unlimited shards. A record consists of a partition key, sequence number, and data blob (up to 1MB). Transient data store - default retention of 24 hours, but can be configured for up to 7 days. There are four types of Kinesis service and these are detailed below.
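Given the 1,000 records/second per-shard ingest figure above, sizing a stream is a ceiling division. A minimal sketch (the target rates are made-up examples):

```python
import math

# Each shard ingests up to 1,000 records/second (per the note), so the
# shard count for a target rate is a simple ceiling division.
RECORDS_PER_SHARD = 1000

def shards_needed(records_per_second: int) -> int:
    return max(1, math.ceil(records_per_second / RECORDS_PER_SHARD))

print(shards_needed(2500))  # 3 shards
print(shards_needed(800))   # 1 shard
```

A real sizing exercise would also account for the per-shard data throughput limit, not just record count.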

When to use Amazon RedShift

Massive amounts of data. Primarily OLAP workloads.

Push button scaling

Means that you can scale the DB at any time without incurring downtime. Defaults to eventually consistent reads, but you can request strongly consistent reads via an SDK parameter. Priced on throughput, rather than compute. Can achieve ACID compliance with DynamoDB transactions. SSD-based and uses limited indexing on attributes for performance.

two types of Elasticache engine

Memcached - the simplest model; can run large nodes with multiple cores/threads, can be scaled in and out, and can cache objects such as DB query results. Redis - a more complex model; supports encryption, master/slave replication, cross-AZ (HA), automatic failover and backup/restore.

High Availability for Elasticache

Memcached: because Memcached does not support replication, a node failure will result in data loss. Use multiple nodes in each shard to minimize data loss on node failure. Launch multiple nodes across available AZs to minimize data loss on AZ failure. Redis: use multiple nodes in each shard and distribute the nodes across multiple AZs. Enable Multi-AZ on the replication group to permit automatic failover if the primary node fails. Schedule regular backups of your Redis cluster.

Multi-AZ and Read Replicas

Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it (DR only). AWS recommends the use of provisioned IOPS storage for Multi-AZ RDS DB instances. Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.

route 53 and NS server

NS servers are specified by fully qualified domain name (FQDN), but you can get the IP addresses from the command line (dig or nslookup).

When to use Amazon DynamoDB

Name/value pair data or unpredictable data structure. In-memory performance with persistence. High I/O needs. Scale dynamically.

NAT instances

NAT instances are managed by you (unlike NAT gateways, which are managed by AWS). Used to enable private subnet instances to access the Internet. A NAT instance must live in a public subnet with a route to an Internet gateway. Private instances in private subnets must have a route to the NAT instance, usually via the default route destination of 0.0.0.0/0. When creating NAT instances always disable the source/destination check on the instance. NAT instances must be in a single public subnet. NAT instances need to be assigned to security groups.

When to use Amazon RDS

Need traditional relational database for OLTP. Your data is well-formed and structured. Existing apps requiring RDBMS.

Network ACLs

Network ACLs function at the subnet level. The VPC router hosts the network ACL function. With NACLs you can have both permit and deny rules. Network ACLs contain a numbered list of rules that are evaluated in order from the lowest number, ending with a catch-all deny rule (*). It is recommended to leave spacing between network ACL rule numbers. Network ACLs have separate inbound and outbound rules, and each rule can allow or deny traffic.
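The evaluation order can be modeled as a toy sketch: rules are checked in ascending rule number, the first match wins, and anything unmatched falls through to the trailing deny. The rule contents below are illustrative, not a real NACL:

```python
# Toy NACL: (rule_number, action, port). Evaluated lowest number first;
# first matching rule wins; the trailing "*" rule denies the rest.
rules = [
    (100, "allow", 443),  # allow HTTPS
    (200, "deny", 443),   # never reached for 443 - rule 100 matches first
    (300, "allow", 22),   # allow SSH
]

def evaluate(port: int) -> str:
    for _, action, rule_port in sorted(rules):
        if rule_port == port:
            return action
    return "deny"  # the catch-all (*) rule

print(evaluate(443))  # allow
print(evaluate(80))   # deny
```

This first-match behavior is why leaving numeric gaps (100, 200, 300...) matters: it leaves room to insert a rule ahead of an existing one later.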

Egress and NAT Gateway

No need to disable source/destination checks. Egress-only Internet Gateways operate on IPv6, whereas NAT Gateways operate on IPv4. Port forwarding is not supported. Using the NAT Gateway as a Bastion host server is not supported. Traffic metrics are not supported.

Is S3 non-persistent ?

No, S3 is a persistent, highly durable data store. Persistent data stores are non-volatile storage devices that retain data when powered off. This is in contrast to transient data stores and ephemeral data stores. The following table provides a description of persistent, transient, and ephemeral data stores and which AWS service to use:

can you create stacks in a target account without a trust relationship between the administrator and target account?

No, before you can use a stack set to create stacks in a target account, you must set up a trust relationship between the administrator and target accounts.

Can the contents of a Glacier archive be modified after upload?

No, it cannot be modified

Do automated backups need to be enabled manually?

No, they are enabled by default; data is stored on S3 and the backup storage provided is equal to the size of the DB.

Is Redshift slower than a SQL DB?

No, Redshift is 10x faster than a traditional SQL DB. Redshift can store huge amounts of data but cannot ingest huge amounts of data in real time.

VPC flow logs traffic

Not all traffic is monitored; the following traffic is excluded: -traffic that goes to Route 53 -traffic generated for Windows license activation -traffic to and from 169.254.169.254 (instance metadata) -traffic to and from 169.254.169.123 for the Amazon Time Sync Service -DHCP traffic -traffic to the reserved IP address for the default VPC router
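The two fixed excluded addresses from the list above are easy to check with the stdlib `ipaddress` module (the other exclusions depend on more than the destination IP, so this sketch omits them):

```python
import ipaddress

# Link-local addresses excluded from VPC flow logs, per the list above.
EXCLUDED_IPS = {
    ipaddress.ip_address("169.254.169.254"),  # instance metadata service
    ipaddress.ip_address("169.254.169.123"),  # Amazon Time Sync Service
}

def is_flow_logged(dst_ip: str) -> bool:
    """Rough check: is traffic to this destination captured in flow logs?"""
    return ipaddress.ip_address(dst_ip) not in EXCLUDED_IPS

print(is_flow_logged("169.254.169.254"))  # metadata traffic is excluded
print(is_flow_logged("10.0.1.25"))        # ordinary VPC traffic is logged
```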

What is the object ACL limit?

Object ACLs are limited to 100 granted permissions per ACL.

what kind of DB can be restored

Only the default DB parameter and security groups are restored - you must manually associate all other DB parameter groups and SGs. It is recommended to take a final snapshot before deleting an RDS instance. Snapshots can be shared with other AWS accounts.

RDS Scalability

You can only scale RDS up (compute and storage). You cannot decrease the allocated storage for an RDS instance. You can scale storage and change the storage type for all DB engines except MS SQL. For MS SQL the workaround is to create a new instance from a snapshot with the new configuration.

OpsWorks service and region

OpsWorks is a global service, but when you create a stack, you must specify a region and that stack can only control resources in that region. There are three offerings: OpsWorks for Chef Automate, OpsWorks for Puppet Enterprise, and OpsWorks Stacks.

Security

PCI DSS compliant, but it is recommended not to cache credit card information at edge locations. HIPAA compliant as a HIPAA-eligible service.

What is the process for implementing maintenance activities ?

Perform operations on standby. Promote standby to primary. Perform operations on new standby (demoted primary).

API and IAM roles

Permissions to invoke a method are granted using IAM roles and policies or API Gateway custom authorizers. All of the APIs created with Amazon API Gateway expose HTTPS endpoints only (it does not support unencrypted endpoints).

Migration: AWS Snowball

Petabyte-scale data transport solution for transferring data into or out of AWS. Uses a secure storage device for physical transportation. The AWS Snowball client is software that is installed on a local computer and is used to identify, compress, encrypt, and transfer data. Uses 256-bit encryption (managed with AWS KMS) and tamper-resistant enclosures with TPM. The Snowball and the S3 bucket must be in the same region. To speed up data transfer it is recommended to run simultaneous instances of the AWS Snowball client in multiple terminals and transfer small files as batches. Snowball can import to S3 or export from S3.

Policies

Policies are documents that define permissions and can be applied to users, groups and roles. Policy documents are written in JSON (key-value pairs that consist of an attribute and a value). All permissions are implicitly denied by default. The most restrictive policy is applied. The IAM policy simulator is a tool to help you understand, test, and validate the effects of access control policies. The Condition element can be used to apply further conditional logic.
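A minimal sketch of the JSON policy structure described above, including a Condition element. The bucket name and IP range are made-up examples, not values from the notes:

```python
import json

# Hypothetical IAM policy document: key-value pairs, an explicit Allow,
# and a Condition element restricting access to a source IP range.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",  # example ARN
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Anything not matched by an Allow statement is implicitly denied, which is why the single statement above is enough to scope access.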

Domains and Route 53

It is possible to have the domain registered in one AWS account and the hosted zone in another AWS account. DNS primarily uses UDP port 53 (it can also use TCP). There is a default limit of 50 domain names, but this can be increased by contacting support. Private DNS is a Route 53 feature that lets you have authoritative DNS within your VPCs without exposing your DNS records. You can use the AWS Management Console or API to register new domain names with Route 53.

Redshift business intelligence tool

PostgreSQL compatible, with JDBC and ODBC drivers available; compatible with most business intelligence tools out of the box. Features parallel processing and columnar data stores, which are optimized for complex queries.

IAM and power user

Power user access allows all permissions except the management of groups and users in IAM. Temporary security credentials consist of the AWS access key ID, secret access key, and security token. The sign-in URL includes the account ID or account alias, e.g. https://My_AWS_Account_ID.signin.aws.amazon.com/console/ or https://console.aws.amazon.com

CLB (Classic Load Balancer)

Provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. - Operates at layer 4 and layer 7 - Supported protocols: TCP, SSL, HTTP, HTTPS - Does not support HTTP/2

What is a Provisioned Capacity Unit (PCU) and when should you use PCUs?

Provisioned Capacity guarantees that your retrieval capacity for expedited retrievals will be available when you need it. Each unit of capacity ensures that at least 3 expedited retrievals can be performed every 5 minutes and provides up to 150 MB/s of retrieval throughput. Retrieval capacity can be provisioned if you have specific Expedited retrieval rate requirements that need to be met. Without provisioned capacity, Expedited retrieval requests will be accepted if capacity is available at the time the request is made. You can purchase provisioned capacity using the console, SDK, or the CLI. Each unit of provisioned capacity costs $100 per month from the date of purchase.
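The per-unit figures above (3 expedited retrievals per 5 minutes, 150 MB/s, $100 per unit per month) can be turned into a simple sizing calculation. This is a sketch of that arithmetic only; the workload numbers passed in are made up:

```python
import math

def pcus_needed(retrievals_per_5_min, peak_throughput_mb_s):
    # Each PCU covers at least 3 expedited retrievals per 5 minutes
    # and up to 150 MB/s of retrieval throughput (figures from the notes).
    by_rate = math.ceil(retrievals_per_5_min / 3)
    by_throughput = math.ceil(peak_throughput_mb_s / 150)
    return max(by_rate, by_throughput)

units = pcus_needed(retrievals_per_5_min=7, peak_throughput_mb_s=200)
print(units, "units ->", units * 100, "USD/month")  # 3 units -> 300 USD/month
```

The binding constraint is whichever dimension (retrieval rate or throughput) demands more units.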

Scalability

Push-button scaling without downtime; you can scale down only 4 times per calendar day. AWS places some default limits on the throughput you can provision. These are the limits unless you request a higher amount.

DynamoDB, you can search using one of the following methods:

Query operation - finds items in a table or a secondary index using only the primary key attributes. Scan operation - reads every item in a table or a secondary index and by default returns all items. Use DynamoDB when relational features are not required and the DB is likely to need to scale.
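A local simulation of the difference described above (no DynamoDB calls; the table contents and `pk`/`sk` attribute names are made up): a Query touches only items under one partition key, while a Scan reads every item.

```python
items = [
    {"pk": "user#1", "sk": "order#1", "total": 10},
    {"pk": "user#1", "sk": "order#2", "total": 25},
    {"pk": "user#2", "sk": "order#1", "total": 40},
]

def query(table, partition_key):
    # Only items matching the primary key attribute are examined.
    return [i for i in table if i["pk"] == partition_key]

def scan(table, predicate=lambda i: True):
    # Every item in the table is read; filtering happens afterwards.
    return [i for i in table if predicate(i)]

print(len(query(items, "user#1")))                   # 2
print(len(scan(items, lambda i: i["total"] > 20)))   # 2
```

Note that the filtered scan still reads all 3 items (and consumes read capacity for them) even though only 2 are returned - which is why Query is preferred when the key is known.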

Queues

Queue names must be unique within a region. A queue can be either standard or first-in-first-out (FIFO). Standard queues provide a loose-FIFO capability that attempts to preserve the order of messages.

Kinesis data analytics and firehose

Quickly author and run powerful SQL code against streaming sources. Can ingest data from Kinesis Data Streams and Kinesis Data Firehose. Output to S3, Redshift, Elasticsearch, and Kinesis Data Firehose.

Read Replicas for RDS

Read replicas are used for read-heavy DBs and replication is asynchronous. Read replicas are for workload sharing and offloading; Read Replicas provide read-only DR. Read Replicas are created from a snapshot of the master instance. - Must have automated backups enabled on the primary (retention period > 0) - Only supported for transactional database storage engines (InnoDB, not MyISAM).

Direct Connect and Bidirectional Forwarding Detection

It is recommended to enable Bidirectional Forwarding Detection (BFD) for faster detection and failover. You cannot extend your on-premises VLANs into the AWS cloud using Direct Connect. Can aggregate up to 4 Direct Connect ports into a single connection using Link Aggregation Groups (LAG). AWS Direct Connect supports both single (IPv4) and dual-stack (IPv4/IPv6) configurations on public and private VIFs.

DynamoDB integration with RedShift:

Redshift complements DynamoDB with advanced business intelligence. - When copying data from a DynamoDB table into RedShift you can perform complex data analysis queries including joins with other tables. - A copy operation from a DynamoDB table counts against the table's read capacity - After data is copied, SQL queries do not affect the data in DynamoDB.

Redshift and EC2

Redshift uses EC2 instances so you need to choose your instance type/size for scaling compute vertically, but you can also scale horizontally by adding more nodes to the cluster. You cannot have direct access to your AWS Redshift cluster nodes as a user, but you can through applications. HDD and SSD storage options.

Regional Edge caches

Regional Edge caches are located between origin web servers and global edge locations and have a larger cache. Regional Edge caches have a larger cache-width than any individual edge location, so your objects remain in cache longer at these locations. Regional Edge caches aim to get content closer to users. Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin from the edge locations and do not proxy through Regional Edge caches. Regional Edge caches are used for custom origins, but not Amazon S3 origins. Dynamic content goes straight to the origin and does not flow through Regional Edge caches. Edge locations are not just read-only; you can write to them too.

When to use Amazon Neptune?

Use Neptune when relationships between objects are a major portion of the data's value.

Resource Groups

Resource groups allow you to group resources based on tags. The Tag editor assists with finding resources and adding tags. Resource groups contain information such as: - Region - Name - Health checks

Restored DBs

Restored DBs will always be a new RDS instance with a new DNS endpoint; you can restore up to the last 5 minutes. You cannot restore from a DB snapshot to an existing DB - a new instance is created when you restore.

AWS accounts ( AWS Organizations)

Root account with organizational units (OUs) and AWS accounts behind the OUs. Policies can be assigned at different points in the hierarchy. Available in two feature sets: - Consolidated billing - All features There is a limit of 20 linked accounts for consolidated billing (default). Can help with cost control through volume discounts. Billing alerts can be set up at the paying account, which shows billing for all linked accounts.

High availability networking and Route 53's health checks

Route 53's health checks provide a basic level of redirecting DNS resolutions. Elastic IPs give you the flexibility to change out backing assets without impacting name resolution. For Multi-AZ redundancy of NAT gateways, create gateways in each AZ with routes for private subnets to use the local gateway.

Components of VPC Router

Routers interconnect subnets and direct traffic between Internet gateways, virtual private gateways, NAT gateways, and subnets.

AWS Snowball

Ruggedized NAS in a box that AWS ships to you. You can copy up to 80 TB of data and ship it back to AWS. They copy the data over to S3. (The 50 TB model is available only in the USA.)

SNS supports

SNS topic names are limited to 256 characters. SNS supports CloudTrail auditing for authenticated calls.

SNS supports?

SNS supports a wide variety of needs including event notification, monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and any other application that generates or consumes notifications.

Charges SQS

The cost of SQS is calculated per request, plus data transfer charges for data transferred out of SQS.

SQS uses?

SQS can be used with Redshift, DynamoDB, EC2, ECS, RDS, S3 and Lambda. It uses a message-oriented API. It is also pull-based (polling), not push-based. Messages are up to 256 KB in size. Messages can be kept in the queue from 1 minute to 14 days (the default is 4 days). If a job is processed within the visibility timeout the message will be deleted. If a job is not processed within the visibility timeout the message will become visible again (and could be delivered twice).
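A toy model of the visibility-timeout behaviour described above: a received message is hidden until the timeout expires, and deleting it within the timeout prevents redelivery. The clock is simulated; no real SQS calls are made:

```python
class Queue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # message id -> time it becomes visible again

    def send(self, msg_id):
        self.messages[msg_id] = 0          # visible immediately

    def receive(self, now):
        for msg_id, invisible_until in self.messages.items():
            if now >= invisible_until:
                # Hide the message for the visibility timeout.
                self.messages[msg_id] = now + self.visibility_timeout
                return msg_id
        return None

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = Queue(visibility_timeout=30)
q.send("m1")
print(q.receive(now=0))    # m1   (first consumer gets the message)
print(q.receive(now=10))   # None (hidden during the visibility timeout)
print(q.receive(now=31))   # m1   (not deleted in time -> visible again)
```

The third receive is the "could be delivered twice" case: the consumer failed to delete the message before the timeout expired.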

Polling

SQS uses short polling and long polling.

Amazon SWF essentials

SWF allows a completion time of up to 1 year for workflow executions. SWF uses a task-oriented API. SWF ensures a task is assigned once and never duplicated. SWF keeps track of all the tasks and events in an application.

Security Groups

Security groups are stateful. By default, custom security groups do not have inbound allow rules (all inbound traffic is denied by default). By default, default security groups do have inbound allow rules (allowing traffic from within the group). All outbound traffic is allowed by default in custom and default security groups within a VPC. You can use security group names as the source or destination in other security groups. You can use the security group name as a source in its own inbound rules. Security group members can be within any AZ or subnet within the VPC. Security group membership can be changed whilst instances are running; any changes made will take effect immediately. Up to 5 security groups can be added per EC2 instance interface. There is no limit on the number of EC2 instances within a security group. You cannot block specific IP addresses using security groups; use NACLs instead.

Nat instances security

Security groups for NAT instances must allow HTTP/HTTPS inbound from the private subnet and outbound to 0.0.0.0/0. There needs to be a route from a private subnet to the NAT instance for it to work. The amount of traffic a NAT instance can support is based on the instance type. Using a NAT instance can lead to bottlenecks (not HA). HA can be achieved by using Auto Scaling groups, multiple subnets in different AZs, and a script to automate failover. Performance is dependent on instance size. Can scale up instance size or use enhanced networking. Can scale out by using multiple NATs in multiple subnets. Can be used as a bastion (jump) host. Can monitor traffic metrics. Not supported for IPv6 (use an egress-only internet gateway).

Server-side encryption (SSE) and queues

Server-side encryption (SSE) lets you transmit sensitive data in encrypted queues (AWS KMS): - SSE encrypts messages as soon as SQS receives them - Messages are stored in encrypted form and SQS decrypts messages only when they are sent to an authorized consumer - Uses AES 256-bit encryption - Not available in all regions - Works with standard and FIFO queues - The following are not encrypted: queue metadata, message metadata, per-queue metrics

AWS Import/Export and Snowball

Ship an external hard drive to AWS. Someone at AWS plugs it in and copies your data to S3. AWS Import/Export is when you send your own disks into AWS - this is being deprecated in favour of Snowball.

S3 Static Web Hosting

Simple and massively scalable static website hosting

Firehose and Splunk destinations

Streaming data is delivered to Splunk, and it can optionally be backed up to your S3 bucket concurrently

IAM root account

The "root" account is the account created when you setup the AWS account. It has complete admin access and is the only account that has this access by default. It is a best practice to not use the root account for anything other than billing.

STS

The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). You can optionally send your AWS STS requests to endpoints in any region (which can reduce latency). All regions are enabled for STS by default but can be disabled. The region in which temporary credentials are requested must be enabled. STS supports AWS CloudTrail, which records AWS calls for your AWS account and delivers log files to an S3 bucket.

VPC subnets and subnet sizing

The VPC is created with a master address range (a CIDR block, which can be anywhere from /16 to /28), and subnet ranges are created within that range. Once the VPC is created you cannot change the CIDR block. You cannot create additional CIDR blocks that overlap with existing CIDR blocks, nor can you create additional CIDR blocks in a different RFC 1918 range. The first 4 and last IP addresses in a subnet are reserved. Subnets are created within Availability Zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones and are connected with low-latency, high-throughput, highly redundant networking. You can only attach one internet gateway to a custom VPC.
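The reserved-address rule above can be checked with the standard library: with the first 4 and last addresses reserved, usable hosts = total addresses - 5. The CIDR below is an example value:

```python
import ipaddress

# Example subnet; AWS reserves the first 4 addresses and the last one.
subnet = ipaddress.ip_network("10.0.1.0/24")
usable = subnet.num_addresses - 5
print(subnet, "->", usable, "usable addresses")  # 10.0.1.0/24 -> 251 usable addresses
```

The same arithmetic explains why /28 is the smallest allowed subnet: 16 addresses minus 5 reserved leaves only 11 usable hosts.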

AD connector with VPC

The VPC must be connected to your on-premises network via VPN or Direct Connect. When users log in to AWS, AD Connector forwards sign-in requests to your on-premises AD DCs. You can also join EC2 instances to your on-premises AD through AD Connector, and log in to the AWS Management Console using your on-premises AD DCs for authentication. It is not compatible with RDS SQL Server. You can use AD Connector for multi-factor authentication using your RADIUS-based MFA infrastructure.

Routing

The VPC router performs routing between AZs within a region. The VPC router connects different AZs together and connects the VPC to the internet gateway. Each subnet has a route table the router uses to forward traffic within the VPC. You cannot delete the main route table, but you can manually set another route table to become the main route table. There is a default rule that allows all VPC subnets to communicate with one another; this cannot be deleted or modified. - Routing between subnets is always possible because of this rule - any problems communicating are more likely to be security groups or NACLs.

AWS Directory service: general

The following three types currently feature on the exam and will be covered on this page: - AWS Directory Service for Microsoft Active Directory - Simple AD - AD Connector

What method is used for RDS DB instance failover?

The method to initiate a manual RDS DB instance failover is to reboot, selecting the option to fail over. A DB instance reboot is required for changes to take effect when you change the DB parameter group or when you change a static DB parameter. The secondary DB in a multi-AZ configuration cannot be used as an independent read node (read or write). There is no charge for data transfer between the primary and secondary RDS instances.

Read replicas storage type and instance class?

The read replica's storage type and instance class can be different from the source, but the compute should be at least the performance of the source. You cannot change the DB engine.

S3 Requester pays

The requester rather than the bucket owner pays for requests and data transfer

What is the size of a single node

The size of a single node is 160 GB, and clusters can be created up to a petabyte or more.

Source database during migration

The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The migration service can migrate your data to and from most widely used commercial and open-source databases.

VPC connectivity

There are several methods of connecting to a VPC. These include: - AWS Managed VPN - AWS Direct Connect - AWS VPN CloudHub - Software VPN - Transit VPC - VPC peering - AWS PrivateLink - VPC endpoints

How is the VPN managed in the VPC

This option is recommended if you must manage both ends of the VPN connection either for compliance purposes or for leveraging gateway devices that are not currently supported by Amazon VPCs VPN solution.

Amazon DynamoDB stores three geographically distributed replicas

Three geographically distributed replicas of each table to enable high availability and data durability. Data is synchronously replicated across 3 facilities (AZs) in a region.

Distributions

To distribute content with Cloudfront you need to create a distribution. The distribution includes the configuration of the CDN including: -Content origins -Access (public or restricted) -Security (HTTP or HTTPS) -Cookie or query-string forwarding -Geo-restrictions -Access logs (record viewer activity)

DynamoDB is not ideal for the following situations:

Traditional relational apps, joins and/or complex transactions, BLOB data, and large data with a low I/O rate.

When to use Database on EC2

Ultimate control over the database, or the preferred DB is not available under RDS.

How many VPC routing tables ?

Up to 200 route tables per VPC. Up to 50 route entries per route table. Each subnet can only be associated with one route table. You can assign one route table to multiple subnets. If no route table is specified, a subnet will be assigned to the main route table at creation time.

Multi-AZ deployments for the SQL Server engine use what replication method?

Use synchronous logical replication (SQL Server-native mirroring technology)

Direct Connect Ethernet

Uses Ethernet trunking (802.1q). Each connection consists of a single dedicated connection between ports on the customer router and an Amazon router. For HA you must have 2 DX connections - these can be active/active or active/standby. Route tables need to be updated to point to a Direct Connect connection. A VPN can be maintained as a backup with a higher BGP priority.

DynamoDB Auto Scaling

Uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling.
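The target-tracking behaviour described above can be sketched with made-up numbers. The 70% target utilization and the traffic figures are illustrative assumptions, not values from the notes:

```python
import math

def scale(consumed, target_utilization=0.7):
    # Provision enough capacity so that consumed / provisioned
    # sits near the target utilization, never dropping below 1 unit.
    return max(math.ceil(consumed / target_utilization), 1)

print(scale(consumed=90))   # 129 -> capacity raised to absorb a burst
print(scale(consumed=10))   # 15  -> capacity lowered when traffic drops
```

The point of the sketch: provisioned throughput tracks actual traffic with headroom, so sudden increases are absorbed without throttling.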

Cloudfront and API

Using CloudFront behind the scenes, custom domains and SNI are supported. APIs can be published as products and monetized on the AWS Marketplace. Collections can be deployed in stages.

Virtual private gateway (VGW)

The VPN endpoint on the AWS side of a VPN connection.

Inter-region VPC Limitation

VPC peering has some limitations: - You cannot create a security group rule that references a peer security group - Cannot enable DNS resolution - Maximum MTU is 1500 bytes (no jumbo frames support) - Limited region support - You do not have any peering relationship with VPCs that your VPC is not directly peered with - Limits are 50 VPC peers per VPC, up to 125 by request

AWS Managed VPN

What: AWS-managed IPSec VPN connection over your existing internet connection. When: a quick and usually simple way to establish a secure tunnelled connection to a VPC; a redundant link for Direct Connect or another VPC VPN. Pros: supports static routes or BGP peering and routing. Cons: dependent on your internet connection. How: create a virtual private gateway (VGW) on AWS and a customer gateway on the on-premises side. A virtual private gateway (VGW) is required on the AWS side. A customer gateway is required on the customer side. An internet-routable IP address is required on the customer gateway.

AWS PrivateLink

What: AWS-provided network connection between VPCs and/or AWS services using interface endpoints. When: Keep Private subnets truly private by using the AWS backbone to reach other AWS or marketplace services rather than the public internet. Pros: redundant; uses the AWS backbone. Cons: How: Create endpoint for required: AWS or Marketplace service in all required subnets; access via the provided DNS hostname. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts ( VPC endpoint services), and supported AWS Marketplace partner services.

VPC Peering

What: AWS-provided network connectivity between two VPCs. When: Multiple VPCs need to communicate or access each other's resources Pros: Uses AWS backbone without traversing the internet Cons: transitive peering is not supported How: VPC peering request made; accepts request (either within or across accounts). A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. A VPC peering connection helps you to facilitate the transfer of data.

Transit VPC

What: common strategy for connecting geographically dispersed VPCs and locations in order to create a global network transit center. When: locations and VPC-deployed assets across multiple regions that need to communicate with one another. Pros: ultimate flexibility and manageability, plus an AWS-managed VPN hub-and-spoke between VPCs. Cons: you must design for any needed redundancy across the whole chain. How: providers like Cisco, Juniper Networks, and Riverbed have offerings which work with their equipment and AWS VPC.

AWS VPN CloudHub

What: connect locations in a hub-and-spoke manner using AWS's Virtual Private Gateway. When: link remote offices for backup or primary WAN access to AWS resources and each other. Pros: reuses existing internet connections; supports BGP routes to direct traffic. Cons: dependent on internet connections; no inherent redundancy. How: assign multiple customer gateways to a virtual private gateway, each with their own BGP ASN and unique IP ranges.

AWS Direct Connect

What: dedicated network connection over private lines straight into the AWS backbone. When: you require a large network link to AWS; lots of resources and services being provided on AWS to your corporate users. Pros: more predictable network performance; potential bandwidth cost reduction; up to 10 Gbps provisioned connections; supports BGP peering and routing. Cons: may require additional telecom and hosting provider relationships and/or network circuits; costly. How: work with your existing data networking provider; create virtual interfaces (VIFs) to connect to VPCs (private VIFs) or other AWS services like S3 or Glacier (public VIFs). It makes it easy to establish a dedicated connection from an on-premises network to an Amazon VPC. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocated environment.

Software VPN

What: you provide your own VPN endpoint and software. When: you must manage both ends of the VPN connection for compliance reasons or you want to use a VPN option not supported by AWS. Pros: ultimate flexibility and manageability. Cons: you must design for any needed redundancy across the whole chain. How: install VPN software via a Marketplace appliance or on an EC2 instance.

S3 supports the ACL permission "FULL_CONTROL":

When granted on a bucket: allows the grantee the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket. When granted on an object: allows the grantee the READ, READ_ACP, and WRITE_ACP permissions on the object.
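The expansion above can be written as a small lookup table, which makes the one difference obvious: WRITE applies to buckets but not to individual objects.

```python
# FULL_CONTROL expands to these permission sets (from the text above).
FULL_CONTROL = {
    "bucket": {"READ", "WRITE", "READ_ACP", "WRITE_ACP"},
    "object": {"READ", "READ_ACP", "WRITE_ACP"},
}

# The only bucket-level permission with no object-level counterpart:
print(FULL_CONTROL["bucket"] - FULL_CONTROL["object"])  # {'WRITE'}
```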

What happens when you restore a DB instance from a backup?

When you restore a DB instance, the default DB parameters and security groups are applied; you must then apply the custom DB parameters and security groups. You cannot restore from a DB snapshot into an existing DB instance. Following a restore, the new DB instance will have a new endpoint. The storage type can be changed when restoring a snapshot.

Benefits and features: Pay per use

With AWS Step Functions, you pay only for the transition from one step of your application workflow to the next, called a state transition. Billing is metered by state transition, regardless of how long each state persists (up to one year).

What can you associate a network ACL with?

With multiple subnets; however, a subnet can only be associated with one network ACL at a time. Network ACLs do not filter traffic between instances in the same subnet. NACLs are the preferred option for blocking specific IPs or ranges; security groups cannot be used to block specific ranges of IPs. The NACL is the first line of defense; security groups are the second line. It is also recommended to have software firewalls installed on your instances. Changes to NACLs take effect immediately.

Does Amazon S3 support data access auditing?

Yes, customers can optionally configure an Amazon S3 bucket to create access log records for all requests made against it. Alternatively, customers who need to capture IAM/user identity information in their logs can configure AWS CloudTrail Data Events. These access log records can be used for audit purposes and contain details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed.

Can you have read replicas of read replicas?

Yes, you can have read replicas of read replicas for MySQL and MariaDB but not for PostgreSQL. Read Replicas can be configured from the AWS console or the API.

Some features of IAM

You can assign users individual security credentials such as access keys, passwords, and multi-factor authentication devices. Identity federation (including AD, Facebook, etc.) can be configured, allowing secure access to resources in an AWS account and for individual users under the account. IAM supports PCI DSS compliance. AWS recommends that you use the AWS SDKs to make programmatic API calls to IAM. However, you can also use the IAM Query API to make direct calls to the IAM web service.

MFA device and IAM

You can authenticate using an MFA device in the following two ways: - through the AWS Management console: The user is prompted for a user name, password and authentication code. - Using the AWS API - restrictions are added to IAM policies and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests. - Using the AWS CLI by obtaining temporary security credentials from STS (aws sts get-session-token)

What options do I have for encrypting data stored on Amazon S3?

You can choose to encrypt data using SSE-S3, SSE-C, SSE-KMS, or a client library such as the Amazon S3 Encryption Client. All four enable you to store sensitive data encrypted at rest in Amazon S3.
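As a sketch, the three server-side options differ in which request headers select them on an S3 PUT. The header names below are the documented x-amz-* headers; the key values are placeholders, not real keys:

```python
# Request headers selecting each server-side encryption option.
sse_headers = {
    # SSE-S3: S3 manages the keys.
    "SSE-S3":  {"x-amz-server-side-encryption": "AES256"},
    # SSE-KMS: keys managed in AWS KMS (key ARN is a placeholder).
    "SSE-KMS": {"x-amz-server-side-encryption": "aws:kms",
                "x-amz-server-side-encryption-aws-kms-key-id": "<key-arn>"},
    # SSE-C: you supply the key on every request (values are placeholders).
    "SSE-C":   {"x-amz-server-side-encryption-customer-algorithm": "AES256",
                "x-amz-server-side-encryption-customer-key": "<base64-key>",
                "x-amz-server-side-encryption-customer-key-MD5": "<md5>"},
}

for name, headers in sse_headers.items():
    print(name, "->", sorted(headers))
```

The fourth option, client-side encryption, involves no special request headers: data is encrypted before it ever reaches S3.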

RDS Encryption

You can encrypt your Amazon RDS instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instance. Encryption at rest is supported for all DB types and uses AWS KMS.

What else can you establish with AWS Direct Connect?

You can establish a dedicated network connection between your network and AWS and create a logical connection to public AWS resources, such as an Amazon virtual private gateway IPsec endpoint. This solution combines the AWS-managed benefits of the VPN solution with the low latency, increased bandwidth, and more consistent network experience of the AWS Direct Connect solution, plus an end-to-end, secure IPsec connection.

Scalability and durability

You can have multiple queues with different priorities. Scaling is performed by creating more queues. SQS stores all message queues and messages within a single, highly-available AWS region with multiple redundant AZs.

Security

You can use IAM policies to control who can read/write messages. Authentication can be used to secure messages within queues (who can send and receive). SQS supports HTTPS and supports TLS versions 1.0, 1.1, and 1.2.

What is CloudHSM used for?

You can use the CloudHSM service to support a variety of use cases and applications, such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing. CloudHSM runs on a dedicated, single-tenant hardware device.

What cannot be encrypted in RDS?

You cannot encrypt an existing DB, you need to create a snapshot, copy it, encrypt the copy, then build an encrypted DB from the snapshot.

CloudFront: what you do not pay for

You do not pay for: - Data transfer between AWS regions and CloudFront - Regional Edge caches - AWS ACM SSL/TLS certificates - Shared CloudFront certificates

API gateway charge

You only pay when your APIs are in use. There are no minimum fees or upfront commitments. You pay only for the API calls you receive and the amount of data transferred out. API Gateway also provides optional data caching charged at an hourly rate that varies based on the cache size you select.

Long Polling

- Uses fewer requests and reduces cost - Eliminates false empty responses by querying all servers - SQS waits until a message is available in the queue before sending a response - Requests contain at least one of the available messages up to the maximum number of messages specified in the ReceiveMessage action - Shouldn't be used if your application expects an immediate response to receive message calls - ReceiveMessageWaitTime is set to a non-zero value (up to 20 seconds) - Same charge per million requests as short polling
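As a sketch, long polling is selected per ReceiveMessage request via the WaitTimeSeconds parameter described above. The queue URL is a placeholder and no real SQS call is made; the dict just shows the parameter shape:

```python
# Hypothetical ReceiveMessage parameters; the queue URL is a placeholder.
receive_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/example",
    "MaxNumberOfMessages": 10,   # up to the maximum number of messages
    "WaitTimeSeconds": 20,       # 0 = short polling, 1-20 = long polling
}

mode = "long polling" if receive_params["WaitTimeSeconds"] > 0 else "short polling"
print(mode)  # long polling
```

Setting WaitTimeSeconds back to 0 reverts the request to short polling, which may return a false empty response.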

PrivateLink gateway endpoint

A gateway endpoint is a gateway that is a target for a specified route in your route table, used for traffic destined to a supported AWS service. An interface VPC endpoint enables you to connect to services powered by AWS PrivateLink. - What: a gateway that is a target for a specific route - How: uses DNS entries to redirect traffic - Which services: Amazon S3, DynamoDB - Security: VPC endpoint policies - By default, IAM users do not have permission to work with endpoints - You can create an IAM user policy that grants users the permissions to create, modify, describe, and delete endpoints

Components of a VPC NAT Gateway

A highly available, managed Network Address Translation (NAT) service for your resources in a private subnet to access the Internet.

Hosted Zones

A hosted zone is a collection of records for a specified domain. A hosted zone is analogous to a traditional DNS zone file; it represents a collection of records that can be managed together.

ElastiCache node

A node is a fixed-sized chunk of secure, network-attached RAM and is the smallest building block. Each node runs an instance of the Memcached or Redis protocol-compliant service and has its own DNS name and port. Failed nodes are automatically replaced. Access to ElastiCache nodes is controlled by VPC security groups and subnet groups (when deployed in a VPC). ElastiCache nodes are deployed in clusters and can span more than one subnet of the same subnet group. A cluster is a collection of one or more nodes using the same caching engine.

API gateway allows you to manage what?

Allows you to manage requests to protect your backend. You can throttle and monitor requests to protect your backend. SDK generation for iOS, Android and JavaScript. Reduced latency and distributed denial of service protection through the use of CloudFront. It also provides Swagger support.

Partition Key

A simple primary key, composed of one attribute known as the partition key.

3: A legacy application running on-premises requires a Solutions Architect to be able to open a firewall to allow access to several Amazon S3 buckets. The Architect has a VPN connection to AWS in place. Which option represents the simplest method for meeting this requirement? a) Create an IAM role that allows access from the corporate network to Amazon S3 b) Configure a proxy on Amazon EC2 and use an Amazon S3 VPC endpoint c) Use Amazon API Gateway to do IP whitelisting d) Configure IP whitelisting on the customer's gateway

a) Create an IAM role that allows access from the corporate network to Amazon S3. The solutions architect can create an IAM role that provides access to the required S3 buckets. With the on-premises firewall opened to allow outbound access to S3 (over HTTPS), a secure connection can be made, and the files can be uploaded. This is the simplest solution. You can use a condition in the IAM role that restricts access to a list of source IP addresses (your on-premises routed IPs).
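The source-IP condition mentioned above can be expressed in the role's policy like this. The bucket name and CIDR block are placeholders (203.0.113.0/24 is a documentation range); `aws:SourceIp` and the `IpAddress` condition operator are the real IAM constructs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-claims-bucket",
        "arn:aws:s3:::example-claims-bucket/*"
      ],
      "Condition": {
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
      }
    }
  ]
}
```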

1. A company has an on-premises data warehouse that they would like to move to AWS where they will analyze large quantities of data. What is the most cost-efficient EBS storage volume type that is recommended for this use case? a) Throughput Optimized HDD (st1) b) EBS Provisioned IOPS SSD (io1) c) EBS General Purpose SSD (gp2) d) Cold HDD (sc1)

a) Throughput Optimized HDD (st1). Throughput Optimized HDD (st1) volumes are recommended for streaming workloads requiring consistent, fast throughput at a low price. Examples include Big Data warehouses and log processing. You cannot use these volumes as a boot volume.

6: You have implemented the AWS Elastic File System (EFS) to store data that will be accessed by a large number of EC2 instances. The data is sensitive and you are working on a design for implementing security measures to protect the data. You need to ensure that network traffic is restricted correctly based on firewall rules and access from hosts is restricted by user or group. How can this be achieved with EFS? (choose 2) a) Use EFS Security Groups to control network traffic b) Use AWS Web Application Firewall (WAF) to protect EFS c) Use POSIX permissions to control access from hosts by user or group d) Use IAM groups to control access by user or group e) Use Network ACLs to control the traffic

a) Use EFS Security Groups to control network traffic c) Use POSIX permissions to control access from hosts by user or group. You can control who can administer your file system using IAM. You can control access to files and directories with POSIX-compliant user and group-level permissions. POSIX permissions allow you to restrict access from hosts by user and group. EFS Security Groups act as a firewall, and the rules you add define the traffic flow. You cannot use AWS WAF to protect EFS data using users and groups. You do not use IAM to control access to files and directories by user and group, but you can use IAM to control who can administer the file system configuration. You use EFS Security Groups to control network traffic to EFS, not Network ACLs.

7: You launched an EBS-backed EC2 instance into your VPC. A requirement has come up for some high-performance ephemeral storage and so you would like to add an instance store-backed volume. How can you add the new instance store volume? a) You can specify the instance store volumes for your instance only when you launch an instance. b) You can use a block device mapping to specify additional instance store volumes when you launch your instance, or you can attach additional instance store volumes after your instance is running. c) You must shut down the instance in order to be able to add the instance store volume. d) You must use an Elastic Network Adapter (ENA) to add instance store volumes. First, attach an ENA, and then attach the instance store volume

a) You can specify the instance store volumes for your instance only when you launch an instance. You can't attach instance store volumes to an instance after you've launched it. You can use a block device mapping to specify additional EBS volumes when you launch your instance, or you can attach additional EBS volumes after your instance is running.

2. You are planning to launch a Redshift cluster for processing and analyzing a large amount of data. The Redshift cluster will be deployed into a VPC with multiple subnets. Which construct is used when provisioning the cluster to allow you to specify a set of subnets in the VPC that the cluster will be deployed into?

a. DB subnet group b. subnet group c. Availability Zone d. cluster subnet group You create a cluster subnet group if you are provisioning your cluster in your virtual private cloud (VPC). A cluster subnet group allows you to specify a set of subnets in your VPC. When provisioning a cluster, you provide the subnet group and Amazon Redshift creates the cluster on one of the subnets in the group.

Cross-region read replicas

allow you to improve your disaster recovery posture, scale read operations in regions closer to your application users, and easily migrate from one region to another. Self-healing and automatic failover are available for Aurora Replicas only.

AWS Systems Manager

allows you to centralize operational data from multiple AWS services and automate tasks across your AWS resources. You can create logical groups of resources such as applications, different layers of an application stack, or production versus development environments. With Systems Manager, you can select a resource group and view its recent API activity, resource configuration changes, related notifications, operational alerts, software inventory, and patch compliance status.

charge for amazon elasticache

there is no charge for data transfer between Amazon EC2 and Amazon Elasticache within the same Availability Zone. Node-hour consumed for each Node Type. Partial Node-hours are billed as full hours.

Aurora multi-master

amazon aurora multi-master is a new feature of Aurora MySQL compatible edition that adds the ability to scale out write performance across multiple Availability Zones, allowing applications to direct read/write workloads to multiple instances in a database cluster and operate with higher availability.

Amazon API gateway

an Amazon API Gateway is a collection of resources and methods that are integrated with back-end HTTP endpoints, Lambda functions or other AWS services. API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. It provides robust, secure, and scalable access to backend APIs and hosts multiple versions and release stages for your APIs. API Gateway can scale to any level of traffic received by an API.

application and elasticache

applications connect to elasticache clusters using endpoints. An endpoint is a node or cluster's unique address. Maintenance windows can be defined and allow software patching to occur.

Groups

are collections of users and have policies attached to them. A group is not an identity and cannot be identified as a principal in an IAM policy. Use groups to assign permissions to users. Use the principle of least privilege when assigning permissions. You cannot nest groups (groups within groups)

Temporary credentials

are primarily used with IAM roles and automatically expire. Roles can be assumed temporarily through the console or programmatically with the AWS CLI, Tools for Windows PowerShell or API

API Gateway and domain certificates

by default API Gateway assigns an internal domain that automatically uses the API Gateway certificates. When configuring your APIs to run under a custom domain name you can provide your own certificate. APIs created with Amazon API Gateway expose HTTPS endpoints only. Supported data formats include JSON, XML, query string parameters, and request headers.

5: A large quantity of data that is rarely accessed is being archived onto Amazon Glacier. Your CIO wants to understand the resilience of the service. Which of the statements below is correct about Amazon Glacier storage? (choose 2) a) Provides 99.9% availability of archives b) Data is resilient in the event of one entire region destruction c) Data is resilient in the event of one entire Availability Zone destruction d) Provides 99.999999999% durability of archives e) Data is replicated globally

c) Data is resilient in the event of one entire Availability Zone destruction d) Provides 99.999999999% durability of archives. Glacier is designed for 99.999999999% (eleven nines) durability of objects across multiple Availability Zones. Data is resilient in the event of one entire Availability Zone destruction. Glacier supports SSL for data in transit and encryption of data at rest. Glacier is extremely low cost and is ideal for long-term archival.

4: A Solutions Architect is designing a mobile application that will capture receipt images to track expenses. The Architect wants to store the images on Amazon S3. However, uploading the images through the web server will create too much traffic. What is the most efficient method to store images from a mobile application on Amazon S3? a) Upload to a second bucket, and have a Lambda event copy the image to the primary bucket b) Upload to a separate Auto Scaling Group of servers behind an ELB Classic Load Balancer, and have the server instances write to the Amazon S3 bucket. c) Upload directly to S3 using a pre-signed URL d) Expand the web server fleet with Spot instances to provide the resources to handle the images

c) Upload directly to S3 using a pre-signed URL. Uploading using a pre-signed URL allows you to upload the object without having any AWS security credentials/permissions. Pre-signed URLs can be generated programmatically and anyone who receives a valid pre-signed URL can then programmatically upload an object. This solution bypasses the web server, avoiding any performance bottlenecks.

8: You are a Solutions Architect for an insurance company. An application you manage is used to store photos and video files that relate to insurance claims. The application writes data using the iSCSI protocol to a storage array. The array currently holds 10TB of data and is approaching capacity. Your manager has instructed you that he will not approve further capital expenditure for on-premises infrastructure. Therefore, you are planning to migrate data into the cloud. How can you move data into the cloud whilst retaining low-latency access to frequently accessed data on-premises using the iSCSI protocol? a) Use an AWS Storage Gateway File Gateway in cached volume mode b) Use an AWS Storage Gateway Virtual Tape Library c) Use an AWS Storage Gateway Volume Gateway in cached volume mode d) Use an AWS Storage Gateway Volume Gateway in stored volume mode

c) Use an AWS Storage Gateway Volume Gateway in cached volume mode. The AWS Storage Gateway service enables hybrid storage between on-premises environments and the AWS Cloud. It provides low-latency performance by caching frequently accessed data on premises, while storing data securely and durably in Amazon cloud storage services. AWS Storage Gateway supports three storage interfaces: file, volume, and tape. File: a file gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. The question asks for an iSCSI (block) storage solution, so a file gateway is not the right solution. Volume: a volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored; it is iSCSI-based.

2: Your company keeps unstructured data on a filesystem. You need to provide access to employees via EC2 instances in your VPC. Which storage solution should you choose? a) Amazon S3 b) Amazon EBS c) Amazon EFS d) Amazon Snowball

c) Amazon EFS. EFS is the only storage system presented that provides a file system. EFS is accessed by mounting filesystems using the NFS v4.1 protocol from your EC2 instances. You can concurrently connect up to thousands of instances to a single EFS filesystem.

RDS read replica region

can be in another region (uses asynchronous replication). This configuration can be used for centralizing data from across different regions for analytics

Route Propagation

can be used to send customer-side routes to the VPC. You can only have one 0.0.0.0/0 (all IP addresses) entry per route table. You can bind multiple ports for higher bandwidth.

Schema Conversion Tool

can copy a database schema for homogeneous migrations (same database engine) and convert the schema for heterogeneous migrations (different database engines). DMS is used for smaller, simpler conversions and also supports MongoDB and DynamoDB. It has replication functions for on-premises to AWS or to Snowball or S3. SCT is used for larger, more complex datasets such as data warehouses.

Elasticache clustering mode enabled:

can have up to 15 shards; each shard can have one primary node and 0-5 read-only replicas. Taking snapshots can slow down nodes, so it is best to take them from the read replicas

cloudformation supported by:

can use bootstrap scripts, can define deletion policies, provides the WaitCondition function, can create roles in IAM, VPCs can be created and customized, VPC peering in the same AWS account can be performed. Route 53 is supported.

AWS WAF and CloudFront

CloudFront responds to requests with the requested content or an HTTP 403 status code (Forbidden). CloudFront can also be configured to deliver a custom error page. You need to associate the relevant distribution with the web ACL.

AWS Config vs CloudTrail

CloudTrail records user API activity on your account and allows you to access information about this activity. AWS Config records point-in-time configuration details for your AWS resources as Configuration Items (CIs). You can use an AWS Config CI to answer "what did my AWS resource look like?" at a point in time. You can use AWS CloudTrail to answer "who made an API call to modify this resource?"

Attributes in DynamoDB

consists of a name and a value or set of values. Attributes in DynamoDB are similar to fields or columns in other database systems. The primary key is the only required attribute for items in a table and it uniquely identifies each item.
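In Python terms, an item can be pictured as a dictionary of attributes; the attribute names and values below are hypothetical, and only the primary key attribute is mandatory:

```python
# A hypothetical DynamoDB item: a collection of name/value attributes.
item = {
    "OrderId": "order-1001",       # primary key attribute: the only required one
    "Customer": "Alice",           # scalar string attribute
    "Total": 42,                   # scalar number attribute
    "Tags": {"priority", "gift"},  # attribute holding a set of values
}
```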

NAT multi-AZ redundancy

create NAT Gateways in each AZ with routes for private subnets to use the local gateway. Up to 5 Gbps bandwidth that can scale up to 45 Gbps. You can't use a NAT gateway to access VPC peering, VPN or Direct Connect, so be sure to include specific routes to those in your route table. NAT gateways are highly available in each AZ into which they are deployed. There is no need to patch them and they are not associated with any security groups. They are automatically assigned a public IP address. Remember to update route tables and point them towards your gateway. More secure (e.g. you cannot access them with SSH and there are no security groups to maintain).

Amazon RDS creates what daily ?

creates a daily full storage volume snapshot and also captures transaction logs regularly. There is no additional charge for backups, but you will pay for storage costs on S3. You can disable automated backups by setting the retention period to zero.

Does DynamoDB auto scaling scale down?

currently, auto scaling does not scale down your provisioned capacity if your table's consumed capacity becomes zero. If you use the AWS Management console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default.

Redis

data is persistent, can be used as a datastore, not multi-threaded. Scales by adding shards, not nodes.

Redshift uses columnar data storage:

data is stored sequentially in columns instead of rows; a columnar-based DB is ideal for data warehousing and analytics and requires fewer I/O operations, which greatly enhances performance.

Redshift provides advanced compression

data is stored sequentially in columns which allows for much better performance and less storage space. Redshift automatically selects the compression scheme.

Routing policies

determine how Route 53 responds to queries.

Direct connect charge

direct connect is charged by port hours and data transfer. Available in 1 Gbps and 10 Gbps. Speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, and 500 Mbps can be purchased through an AWS Direct Connect Partner.

S3 Standard

durable, immediately available, frequently accessed.

Redis backups

during backup you cannot perform CLI or API operations on the cluster. Automated backups are enabled by default (automatically deleted with Redis deletion). You can only move snapshots between regions by exporting them from ElastiCache before moving them between regions (you can then populate a new cluster with the data). Multi-AZ is possible using read replicas in another AZ in the same region.

Kinesis Data Streams

enables you to build custom applications that process or analyze streaming data for specialized needs. This enables real-time processing of streaming big data.

Read Replica Amazon RDS

a Read Replica of an encrypted instance is also encrypted using the same key as the master instance when both are in the same region. If the master and Read Replica are in different regions, you encrypt using the encryption key for that region. You can't have an encrypted Read Replica of an unencrypted DB instance or an unencrypted Read Replica of an encrypted DB instance. RDS supports SSL encryption between applications and RDS DB instances. RDS generates a certificate for the instance.

Elasticache Multi-AZ failover:

failures are detected by ElastiCache; ElastiCache automatically promotes the replica that has the lowest replica lag. DNS records remain the same but point to the IP of the new primary. Other replicas start to sync with the new primary.

Alias records do not allow you to resolve a naked domain name to an ELBs DNS address. True or False?

false; Alias records can be used for resolving apex/naked domain names (e.g. example.com rather than sub.example.com). A CNAME record can't be used for resolving apex/naked domain names. Generally use an Alias record where possible. Route 53 supports wildcard entries for all record types, except NS records.

VPC Flow Logs

flow logs capture information about the IP traffic going to and from network interfaces in a VPC. Flow log data is stored using Amazon CloudWatch Logs. Flow logs can be created at the following levels: VPC, subnet, and network interface
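A default-format (version 2) flow log record is a single space-separated line. This sketch parses one into named fields; the sample record mirrors the documented field order, while the parser itself is just illustrative:

```python
# Field order of the default (version 2) VPC Flow Log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    # Split the space-separated record and pair each token with its field name.
    return dict(zip(FIELDS, line.split()))

record = parse_flow_log(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
```

For example, `record["action"]` tells you whether the security groups and network ACLs accepted or rejected the traffic.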

Firehose and Elasticsearch

for an Amazon Elasticsearch destination, streaming data is delivered to your Amazon ES cluster, and it can optionally be backed up to your S3 bucket concurrently.

Firehose and Amazon Redshift

for Amazon Redshift destinations, streaming data is delivered to your S3 bucket first. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load data from your S3 bucket to your Amazon Redshift cluster. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket

private hosted zone

for a private hosted zone you can see a list of VPCs in each region and must select one. For private hosted zones you must set the following VPC settings to "true": enableDnsHostnames and enableDnsSupport. You also need to create a DHCP options set. You also need to extend an on-premises DNS to the VPC. You cannot extend Route 53 to on-premises instances. You cannot automatically register EC2 instances with private hosted zones (this would need to be scripted)

Web distributions and cache behavior

for web distributions you can configure CloudFront to require that viewers use HTTPS.

General ElastiCache concepts

fully managed implementations of two popular in-memory data stores: Redis and Memcached. ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads. ElastiCache can be used for storing session state.

Health checks for hosted zone

health checks check the instance health by connecting to it. Health checks can be pointed at: endpoints, the status of other health checks, or the status of a CloudWatch alarm. Endpoints can be IP addresses or domain names

Route 53 health checks

health checks verify that internet-connected resources are reachable, available and functional. Route 53 can be used to route internet traffic for domains registered with another domain registrar (any domain)

DynamoDb Streams

help you to keep a list of item-level changes or provide a list of item-level changes that have taken place in the last 24 hours. Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers: pieces of code that automatically respond to events in DynamoDB Streams. If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write.
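A minimal sketch of such a trigger. The event shape follows the DynamoDB Streams record format (`Records`, `eventName`, `dynamodb.Keys`); the handler logic and table key names are hypothetical:

```python
# Sketch of a Lambda function triggered by a DynamoDB stream.
def handler(event, context):
    changes = []
    for record in event["Records"]:
        # Each stream record carries the event type and the item's keys.
        keys = record["dynamodb"].get("Keys", {})
        changes.append((record["eventName"], keys))
    return changes

# A hypothetical stream event for a single inserted item.
sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"Keys": {"Id": {"S": "item-1"}}}}
    ]
}
```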

VPN-ONLY subnet

if a subnet doesn't have a route to the internet gateway, but has its traffic routed to a virtual private gateway for a VPN connection, the subnet is known as a VPN-only subnet.

Private subnet

if a subnet doesn't have a route to the internet gateway, the subnet is known as a private subnet

web session store

in cases with load-balanced web servers, store web session information in Redis so if a server is lost, the session info is not lost and another web server can pick it up.

API calls

include traffic management, authorization and access control, monitoring, and API version management. Together with Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. CloudFront is used as the public endpoint for API Gateway.

Amazon MQ supports

industry-standard APIs and protocols so you can migrate messaging and applications without rewriting code.

Amazon MQ supports?

industry-standard APIs and protocols so you can migrate messaging and applications without rewriting code. It provides cost-efficient and flexible messaging capacity - you pay for broker instance and storage usage as you go. - Use SQS if you're creating a new application from scratch.

Alias record

is a Route 53-specific record type. Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, or Amazon S3 buckets that are configured as websites. The alias is pointed to the DNS name of the service. You cannot set the TTL of Alias records for ELB, S3, or Elastic Beanstalk environments (they use the service's default). Alias records work like a CNAME record in that you can map one DNS name (e.g. example.com) to another 'target' DNS name (e.g. elb1234.elb.amazonaws.com)

Global table

is a collection of one or more replica tables, all owned by a single AWS account. With global table, each replica table stores the same set of data items. DynamoDB does not support partial replication of only some of the items.

AWS OpsWorks

is a configuration management service that provides managed instances of Chef and Puppet, two very popular automation platforms. Automates how applications are configured, deployed and managed. Provides configuration management to deploy code, automate tasks, configure instances, and perform upgrades. OpsWorks is an automation platform that transforms infrastructure into code.

AD Connector

is a directory gateway for redirecting directory requests to your on-premises Active Directory. AD Connector eliminates the need for directory synchronization and the cost and complexity of hosting a federation infrastructure; it connects your existing on-premises AD to AWS. Best choice when you want to use an existing Active Directory with AWS services

Amazon RedShift

is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. Clustered, petabyte-scale data warehouse. Redshift is a SQL-based data warehouse used for analytics applications. It is an Online Analytical Processing (OLAP) type of DB, used for running complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. Provides good query performance and compression. It also provides massively parallel processing (MPP) by distributing data and queries across all nodes.

Amazon Redshift spectrum

is a feature of Amazon Redshift that enables you to run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required.

What is Access Analyzer for S3?

is a feature that monitors your access policies, ensuring that the policies provide only the intended access to your S3 resources. Access Analyzer for S3 evaluates your bucket access policies and enables you to discover and swiftly remediate buckets with potentially unintended access.

Amazon Elastic transcoder

is a highly scalable, easy to use and cost-effective way for developers and businesses to convert (or "transcode") video and audio files from their source format into versions that will play back on devices like smartphones, tablets and PCs. Also offers features for automatic video bit rate optimization, generation of thumbnails, overlay of visual watermarks, caption support, DRM packaging, progressive downloads, encryption and more. It picks up files from an input S3 bucket and saves the output to an output S3 bucket. It uses a JSON API, and SDKs are provided for Python, Node.js, Java, .NET, PHP, and Ruby. You are charged based on the duration of the content and the resolution or format of the media.

Amazon MQ

is a managed message broker service for ActiveMQ that makes it easy to set up and operate message brokers in the cloud, so you can migrate your messaging and applications without rewriting code.

AWS KMS ( Key Management Store)

is a managed service that enables you to easily encrypt your data. It provides a highly available key storage, management, and auditing solution for you to encrypt data within your own applications and control the encryption of stored data across AWS services. You can generate CMKs in KMS, in an AWS CloudHSM cluster, or import them from your own key management infrastructure. These master keys are protected by hardware security modules. You can submit data directly to KMS to be encrypted or decrypted using these master keys, and you can set usage policies on these keys. Data keys are not retained or managed by KMS. When a service needs to decrypt your data it requests KMS to decrypt the data key using your master key.

Amazon Cloudwatch

is a monitoring service for AWS cloud resources and the applications you run on AWS. Logs events across AWS services (think operations): higher-level, comprehensive monitoring and eventing. Logs from multiple accounts, logs stored indefinitely, and alarm history for 14 days. Used to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

AWS Direct Connect

is a network service that provides an alternative to using the internet to connect a customer's on-premises sites to AWS. Data is transmitted through a private network connection between AWS and a customer's datacenter or corporate network.

what is an AWS Availability Zone (AZ)

is a physically isolated location within an AWS Region. Within each AWS Region, S3 operates in a minimum of three AZs, each separated by miles to protect against local events like fires, floods, etc.

Redis shard

is a subset of the cluster's keyspace that can include a primary node and zero or more read replicas. Supports automatic and manual snapshots (S3). Backups include cluster data and metadata. You can restore your data by creating a new Redis cluster and populating it from a backup. Supports master/slave replication.

AWS WAF

is a web application firewall that lets you monitor HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content - you can shield access to content based on conditions in a web access control list (web ACL) such as: -Origin IP address -Values in query strings

Amazon EMR

is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3: a managed Hadoop framework for processing huge amounts of data. It also supports Apache Spark, HBase, Presto, and Flink. You can also launch Presto clusters. Presto is an open-source distributed SQL query engine designed for fast analytic queries against large datasets. EMR launches all nodes for a given cluster in the same Amazon EC2 Availability Zone. You can access Amazon EMR by using the AWS Management Console, command line tools, SDKs, or the EMR API

AWS Cloudtrail

is a web service that records activity made on your account and delivers log files to an Amazon S3 bucket. Auditing: logs API activity across AWS services - think activities, more low-level and granular. Logs from multiple accounts; logs stored to S3 or CloudWatch indefinitely. No native alarming; can use CloudWatch alarms.

cloudtrail potential

is about logging and saves a history of API calls for your AWS account. Provides visibility into user activity by recording actions taken on your account. API history enables security analysis, resource change tracking, and compliance auditing

IAM users

is an entity that represents a person or service. It can be assigned: - an access key ID and secret access key for programmatic access to the AWS API, CLI, SDK, and other development tools - a password for access to the Management Console. The account root user credentials are the email address used to create the account and a password. The root account has full administrative permissions and these cannot be restricted.

Default VPC

is automatically created for each AWS account the first time Amazon EC2 resources are provisioned. Default VPC has all-public subnets

General Route 53

is a highly available and scalable Domain Name System (DNS) service. Route 53 offers the following functions: domain name registry, DNS resolution, and health checking of resources. Route 53 is located alongside all edge locations.

Memcached

not persistent, cannot be used as a data store, supports large nodes with multiple cores or threads. Scales out and in by adding and removing nodes. Ideal front-end for data stores (RDS, DynamoDB)

Kinesis data firehose

is the easiest way to load streaming data into data stores and analytics tools. Captures, transforms, and loads streaming data. Enables near real-time analytics with existing business intelligence tools and dashboards. Kinesis Data Streams can be used as the source(s) to Kinesis Data Firehose. Firehose can batch, compress, and encrypt data before loading it. Firehose synchronously replicates data across three AZs as it is transported to destinations. It can invoke a Lambda function to transform data before delivering it to destinations. The max size of a record (before base64-encoding) is 1000 KB.
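A sketch of such a transformation Lambda. The event/response shape (`records`, `recordId`, `result`, base64-encoded `data`) follows the Firehose data transformation model; the uppercasing is a stand-in for a real transformation:

```python
import base64

def handler(event, context):
    out = []
    for record in event["records"]:
        # Firehose delivers each record's payload base64-encoded.
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()  # hypothetical transformation step
        out.append({
            "recordId": record["recordId"],        # must echo the incoming id
            "result": "Ok",                        # Ok / Dropped / ProcessingFailed
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": out}

# A hypothetical single-record event.
event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(b"hello").decode("utf-8")}]}
result = handler(event, None)
```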

Kinesis Data Analytics

is the easiest way to process and analyze real-time, streaming data. Can use standard SQL queries to process Kinesis data streams. Provides real-time analysis. Use cases: - Generate time-series analytics - Feed real-time dashboards - Create real-time alerts and notifications

VPN Cloudhub allow what?

is used for hardware-based VPNs and allows you to configure your branch offices to go into a VPC and then connect that to the corporate DC (hub-and-spoke topology with AWS as the hub). Can have up to 10 IPsec tunnels on a VGW by default. Uses eBGP; branches can talk to each other; can have Direct Connect connections; hourly rate + data egress charges.

CloudTrail log file integrity validation feature?

it allows you to determine whether a CloudTrail log file was unchanged, deleted, or modified since CloudTrail delivered it to the specified Amazon S3 bucket

Amazon SWF enables?

it enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks

Additional feature of multi-factor authentication

it is best practice to always set up multi-factor authentication on the root account. IAM is universal (global) and does not apply to regions. IAM is eventually consistent. IAM replicates data across multiple data centres around the world.

What can Systems Manager provide?

it provides a central place to view and manage your AWS resources, so you can have complete visibility and control over your operations. Designed for managing a large fleet of systems - tens or hundreds. The SSM agent enables Systems Manager features and supports the operating systems supported by EC2, as well as versions back to Windows Server 2003, and Raspbian.

Cloudformation features

it provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. It can be used to provision a broad range of AWS resources. Think of CloudFormation as deploying infrastructure as code. Elastic Beanstalk is more focused on deploying applications on EC2 (PaaS). CloudFormation can deploy Elastic Beanstalk-hosted applications; however, the reverse is not possible.

Amazon VPC (Virtual Private Cloud) networking and content delivery

lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. Analogous to having your own DC inside AWS. Provides complete control over the virtual networking environment including selection of IP ranges, creation of subnets, and configuration of route tables and gateways. A VPC is logically isolated from other VPCs on AWS. VPCs are region-wide. A default VPC is created in each region with a subnet in each AZ. By default you can create up to 5 VPCs per region.

S3 encryption: SSE-C?

leverages Amazon S3 to perform the encryption and decryption of your objects while retaining control of the keys used to encrypt them. With SSE-C, you don't need to implement or use a client-side library to perform the encryption and decryption of objects you store in Amazon S3, but you do need to manage the keys that you send to Amazon S3 to encrypt and decrypt objects. Use SSE-C if you want to maintain your own encryption keys but don't want to implement or leverage a client-side encryption library.
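With boto3 this looks roughly like the following sketch. The bucket and object names are placeholders, and the actual `put_object` call is left commented out since it requires live credentials; with SSE-C you must resend the same key on every subsequent GET, because S3 never stores it.

```python
import os

# Your own 256-bit AES key; S3 uses it to encrypt the object server-side
# but discards it afterwards -- losing this key means losing the object.
customer_key = os.urandom(32)

put_params = {
    "Bucket": "my-bucket",                 # placeholder bucket name
    "Key": "secret.txt",                   # placeholder object key
    "Body": b"hello",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": customer_key,        # must be supplied again on every GET
}
# With credentials configured this would be:
#   boto3.client("s3").put_object(**put_params)
print(sorted(put_params))
```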

Cloudtrail and encryption

log files are encrypted using S3 server-side encryption (SSE). You can also enable encryption using SSE-KMS for additional security. A single KMS key can be used to encrypt log files for trails applied to all regions.

AWS Step Functions

makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion. It can create tasks, sequential steps, parallel steps, branching paths, or timers.
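Step Functions state machines are defined in the Amazon States Language (JSON). A toy definition showing sequential steps and a timer might look like this; the state names are made up for illustration:

```python
import json

# Minimal Amazon States Language sketch: two sequential steps and a
# Wait state acting as a timer, ending in a terminal Succeed state.
definition = {
    "Comment": "Toy workflow: sequential steps plus a timer",
    "StartAt": "PrepareInput",
    "States": {
        "PrepareInput": {"Type": "Pass", "Next": "WaitTenSeconds"},
        "WaitTenSeconds": {"Type": "Wait", "Seconds": 10, "Next": "Finished"},
        "Finished": {"Type": "Succeed"},
    },
}
# You would pass json.dumps(definition) when creating the state machine.
print(json.dumps(definition, indent=2))
```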

Kinesis Video Stream

makes it easy to securely stream video from connected devices to AWS for analytics, machine learning (ML), and other processing. Durably stores, encrypts, and indexes video data streams, and allows access to data through easy-to-use APIs. Stores data for 24 hours by default, up to 7 days. Stores data in shards - 5 transactions per second for reads, up to a max read rate of 2 MB per second, and 1000 records per second for writes up to a max of 1 MB per second. Supports encryption at rest with server-side encryption (KMS) with a customer master key.

Memcached node

max 100 nodes per region, 1-20 nodes per cluster (soft limits). Can integrate with SNS for node failure/recovery notification. Supports auto-discovery for nodes added/removed from the cluster. Scales out/in (horizontally) by adding/removing nodes. Scales up/down (vertically) by changing the node family/type. Does not support multi-AZ failover or replication; does not support snapshots.

Multi-AZ deployments version upgrades

multi-AZ deployments version upgrades will be conducted on both the primary and standby at the same time, causing an outage of both DB instances. Ensure security groups and NACLs allow your application servers to communicate with both the primary and standby instances.

Can you pick your IP within the subnet allocated for an RDS instance?

no, you cannot pick the IP within the subnet that is allocated.

RDS services includes?

o Security and patching of the DB instances o Automated backup for the DB instances o Software updates for the DB engine o Easy scaling for storage and compute o Multi-AZ option with synchronous replication o Automatic failover for the Multi-AZ option o Read replicas option for read-heavy workloads

How does AWS VPN CloudHub operate?

on a simple hub-and-spoke model that you can use with or without a VPC. Use this design if you have multiple branch offices and existing internet connections and would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.

capacity units

one read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second for items up to 4KB. For items larger than 4KB, DynamoDB consumes additional read capacity units. One write capacity unit represents one write per second for an item up to 1KB.
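The rounding rules above can be sketched as a quick calculator (a minimal illustration; items are rounded up to 4 KB blocks for reads and 1 KB blocks for writes, and an eventually consistent read costs half a unit):

```python
import math

def read_capacity_units(item_size_bytes, strongly_consistent=True):
    """RCUs consumed by one read per second of an item of this size."""
    blocks = math.ceil(item_size_bytes / 4096)   # round up to 4 KB blocks
    return blocks if strongly_consistent else blocks / 2

def write_capacity_units(item_size_bytes):
    """WCUs consumed by one write per second of an item of this size."""
    return math.ceil(item_size_bytes / 1024)     # round up to 1 KB blocks

print(read_capacity_units(6000))         # 6 KB item, strong read -> 2
print(read_capacity_units(6000, False))  # eventually consistent -> 1.0
print(write_capacity_units(3500))        # 3.5 KB item write -> 4
```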

What does a producer create in a Kinesis data stream?

a producer creates the data that makes up the stream. Producers can be used through the following: - Kinesis Streams API - Kinesis Producer Library (KPL) - Kinesis Agent

Kinesis Firehose

producers provide data streams; no shards; totally automated. Can encrypt data with an existing AWS Key Management Service (KMS) key. Server-side encryption can be used if Kinesis Streams is used as the data source. Firehose can invoke an AWS Lambda function to transform incoming data before delivering it to a destination.

streaming data dashboards

provide a landing spot for streaming sensor data on the factory floor, providing live real-time dashboard displays.

S3 encryption: SSE-S3?

provides an integrated solution where Amazon handles key management and key protection using multiple layers of security. You should choose SSE-S3 if you prefer to have Amazon manage your keys.

direct connect and virtual interfaces

a public VIF (virtual interface) allows access to public services such as S3, EC2, and DynamoDB. Private VIFs allow access to your VPC. Must use public IP addresses on public VIFs and private IP addresses on private VIFs; cannot do layer 2 over Direct Connect (L3 only). You can establish IPsec connections over public VIFs to remote regions. Virtual interfaces are configured to connect to either AWS public services (EC2 or S3) or private services.

Kinesis Data Stream is useful for?

rapidly moving data off data producers and then continuously processing the data. It stores data for later processing by applications (key difference with firehose which delivers data directly to AWS services)

Redshift Availability and durability

Redshift uses replication and continuous backups to enhance availability and improve durability, and can automatically recover from component and node failures. You can run data warehouse clusters in multiple AZs by loading data into two Amazon Redshift data warehouse clusters in separate AZs from the same set of Amazon S3 input files. Redshift replicates your data within your data warehouse cluster and continuously backs up your data to Amazon S3.

Multi-AZ backups are taken from where?

the standby instance (for MariaDB, MySQL, Oracle, and PostgreSQL). The DB instance must be in an Active state for automated backups to happen. Only automated backups can be used for point-in-time DB instance recovery.

Customer gateway (CGW)

representation of the customer end of the connection

Config rules

represents desired configurations for a resource and is evaluated against configuration changes on the relevant resource, as recorded by AWS Config. It can check resources for certain desired conditions, and if violations are found the resources are flagged as "non-compliant"

Route 53 and the host zones relationship

Route 53 automatically creates the name server (NS) and start of authority (SOA) records for the hosted zone. You can create multiple hosted zones with the same name and different records. Route 53 creates a set of 4 unique name servers (a delegation set) within each hosted zone.

Records on Route 53

route 53 currently supports the following DNS record types: -A (address record) -AAAA (IPv6 address record) -CNAME (canonical name record) - CAA (certification authority authorization) -MX (mail exchange record) -NAPTR ( name authority pointer record) - NS (name server record) - PTR (pointer record) -SOA (start of authority record) -SPF (sender policy framework) -SRV (service locator) -Alias (an Amazon Route 53- specific virtual record)

AWS snowball edge

same as Snowball, but with onboard Lambda and clustering. Snowball Edge (100 TB) comes with onboard storage and compute capabilities.

how to do a scale request on RDS

scaling requests are applied during the specified maintenance window unless apply immediately is used. Amazon Aurora supports a maximum DB size of 64 TiB. All other RDS DB types support a maximum DB size of 16 TiB.

when to use memcached

simple no-frills, you need to scale-out and in as demand changes, you need to run multiple CPU cores and threads, you need to cache objects ( database queries). Use Cases: cache the contents of a DB, cache data from dynamically generated web pages, transient session data, high frequency counters for admission control in high volume web apps.

S3 Transfer Acceleration

speed up data uploads using CloudFront in reverse

Elasticache subnet group

subnet groups are a collection of subnets designated for your Amazon ElastiCache cluster. You cannot move an existing Amazon ElastiCache cluster from outside a VPC into a VPC. You need to configure subnet groups for ElastiCache for the VPC that hosts the EC2 instances and the ElastiCache cluster. When not using a VPC, Amazon ElastiCache allows you to control access to your cluster through cache security groups (you need to link the corresponding EC2 security groups)

Cloudformation templates, stacks, and change sets:

Templates: architectural designs; create, update, and delete templates; written in JSON or YAML. CloudFormation determines the order of provisioning, so you don't need to worry about dependencies, and modifies and updates templates in a controlled way (version control). Designer allows you to visualize templates using a drag-and-drop interface. Stacks: deployed resources based on templates; create, update, and delete stacks using templates; deployed through the Management Console, CLI, or APIs. Template elements - mandatory: file format and version, list of resources and associated configuration values; not mandatory: template parameters (limited to 60), output values (limited to 60), list of data tables.
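A minimal YAML template illustrating the mandatory elements (format version plus a resource list) and the optional parameters and outputs might look like this; the resource and parameter names are placeholders:

```yaml
# Minimal sketch: provisions a single S3 bucket as infrastructure as code.
AWSTemplateFormatVersion: '2010-09-09'
Description: Provision one S3 bucket
Parameters:                # optional template parameters
  BucketName:
    Type: String
Resources:                 # the mandatory resource list
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
Outputs:                   # optional output values
  BucketArn:
    Value: !GetAtt MyBucket.Arn
```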

Cloudfront provides a simple API

that lets you: - distribute content with low latency and high data transfer rates by serving requests using a network of edge locations around the world - get started without negotiating contracts and minimum commitments. CloudFront supports wildcard CNAME, wildcard SSL certificates, dedicated IP custom SSL and SNI custom SSL (cheaper), and perfect forward secrecy, which creates a new private key for each SSL session.

Components of a VPC Virtual Private Gateway

the Amazon VPC side of a VPN connection

Components of a VPC Internet Gateway

the Amazon VPC side of a connection to the public internet

Amazon MQ manages?

the administration and maintenance of ActiveMQ brokers and automatically provisions infrastructure for high availability. With this you can use the AWS Management Console, AWS CloudFormation, the command line interface (CLI), or simple API calls to launch a production-ready message broker in minutes. - It is a managed implementation of Apache ActiveMQ. - ActiveMQ API and support for JMS, NMS, MQTT, and WebSockets

Logging and monitoring API gateway

the Amazon API Gateway logs (near real-time) back-end performance metrics such as API calls, latency, and error rates to CloudWatch. You can monitor through the API Gateway dashboard (REST API), allowing you to visually monitor calls to the services. - API Gateway also meters utilization by third-party developers and the data is available in the API Gateway console and through APIs. - Amazon API Gateway is integrated with AWS CloudTrail to give a full auditable history of the changes to your REST APIs. All API calls made to the Amazon API Gateway APIs to create, modify, delete, or deploy REST APIs are logged to CloudTrail.

Eventually consistent reads (Default)

the eventual consistency option maximizes your read throughput. An eventually consistent read might not reflect the results of a recently completed write. Consistency across all copies is reached within 1 second.
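In the DynamoDB API the choice is a single flag on the read request. A sketch of the request parameters (table and key names are placeholders; the actual call is commented out since it needs live credentials):

```python
# Eventually consistent read: the default, costs half the read capacity.
eventual = {
    "TableName": "Users",                 # placeholder table name
    "Key": {"UserId": {"S": "42"}},
}
# Strongly consistent read: same request with ConsistentRead=True.
strong = dict(eventual, ConsistentRead=True)
# With credentials: boto3.client("dynamodb").get_item(**strong)
print(strong["ConsistentRead"])
```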

Additional important features of Network ACLs

they are stateless, so responses are subject to the rules for the direction of traffic. NACLs only apply to traffic that is ingress or egress to the subnet, not to traffic within the subnet. A VPC automatically comes with a default network ACL which allows all inbound/outbound traffic. A custom NACL denies all traffic both inbound/outbound until rules are added. Subnets must be associated with a network ACL.

How can Lambda run their code in response to HTTP?

they can run code in response to HTTP requests using Amazon API gateway or API calls made using the AWS SDKs

AWS Direct private connection

this private connection can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. You can establish a 1 Gbps or 10 Gbps dedicated network connection (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. It uses industry-standard VLANs to access Amazon Elastic Compute Cloud (Amazon EC2) instances running within an Amazon VPC using private IP addresses.

S3 encryption: SSE-KMS?

uses AWS KMS to manage your encryption keys. Using AWS KMS to manage your keys provides several additional benefits. There are separate permissions for the use of the master key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. It provides additional security controls to support customer efforts to comply with PCI-DSS, HIPAA/HITECH, and FedRAMP industry requirements.
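A boto3 upload sketch for SSE-KMS (bucket, key, and KMS key alias are placeholders; the call itself is commented out since it needs live credentials). SSE-S3 differs only in using `"ServerSideEncryption": "AES256"` with no key ID:

```python
put_params = {
    "Bucket": "my-bucket",                # placeholder bucket name
    "Key": "report.csv",                  # placeholder object key
    "Body": b"data",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/my-app-key",    # omit to use the default S3 KMS key
}
# With credentials: boto3.client("s3").put_object(**put_params)
print(put_params["ServerSideEncryption"])
```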

DynamoDB charges

two pricing models for DynamoDB: - On-demand capacity mode: DynamoDB charges you for the data reads and writes your application performs on your tables. You do not need to specify how much read and write throughput you expect your application to perform because DynamoDB instantly accommodates your workloads as they ramp up or down. - Provisioned capacity mode: you specify the number of reads and writes per second that you expect your application to require. You can use auto scaling to automatically adjust your table's capacity based on the specified utilization rate to ensure application performance while reducing cost.

database caching

use memcached in front of AWS RDS to cache popular queries to offload work from RDS and return results faster to users.
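This is the cache-aside pattern: check the cache first, and only on a miss query the database and populate the cache. A minimal sketch with a plain dict standing in for Memcached (in production you would use a Memcached client pointed at your ElastiCache endpoint, and the DB query function is a made-up placeholder):

```python
cache = {}  # stand-in for Memcached

def slow_db_query(sql):
    # placeholder for an expensive RDS query
    return f"rows for: {sql}"

def cached_query(sql):
    if sql in cache:              # cache hit: skip the database entirely
        return cache[sql]
    result = slow_db_query(sql)   # cache miss: hit RDS once...
    cache[sql] = result           # ...then store the result for next time
    return result

cached_query("SELECT * FROM products")  # miss -> queries the DB
cached_query("SELECT * FROM products")  # hit  -> served from cache
```

With a real Memcached client you would also set a TTL when storing, so popular queries refresh periodically instead of serving stale data forever.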

Leaderboards

use redis to provide a live leaderboard for millions of users of your mobile app.

DynamoDB Auto scaling decreases

when the workload decreases, Application auto scaling decreases the throughput so that you don't pay for unused provisioned capacity

Domain Route 53

when you register a domain with Route 53 it becomes the authoritative DNS server for that domain and creates a public hosted zone. - To make Route 53 the authoritative DNS for an existing domain without transferring the domain, create a Route 53 public hosted zone and change the DNS name servers on the existing provider to the Route 53 name servers

What are the S3 resources-based policies?

· Attached to buckets and objects · ACL-based policies define permissions · ACLs can be used to grant read/write permission to other accounts · Bucket policies can be used to grant other AWS accounts or IAM users permission to the bucket and objects
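A bucket policy granting another account read access might look like the following sketch (the account ID and bucket name are placeholders; the policy grammar here is the standard IAM policy document format):

```python
import json

# Hypothetical cross-account read policy for a bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # other account
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::my-bucket/*",                  # all objects
    }],
}
# Applied with: s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
print(json.dumps(policy))
```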

Config charges

with AWS Config, you are charged based on the number of configuration items (CIs) recorded for supported resources in your AWS account. AWS Config creates a configuration item whenever it detects a change to a resource type that it is recording.

Is it possible to add caching to API calls

yes, you can add caching to API calls by provisioning an Amazon API Gateway cache and specifying its size in gigabytes. A cache can be created and specified in gigabytes (not enabled by default). Caches are provisioned for a specific stage of your APIs. Caching features include customizable keys and time-to-live (TTL) in seconds for your API data, which enhances response times and reduces load on back-end services.

Can you upgrade DB instance manually?

yes, you can manually upgrade a DB instance to a supported DB engine version from the AWS Console. By default, upgrades will take effect during the next maintenance window. You can optionally force an immediate upgrade.

AWS Direct Connect Plus

you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.

building software VPN design

you can create a global transit network on AWS. A transit VPC is a common strategy for connecting multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center. A transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks

Custom ACL

you can create custom network ACLs. By default, each custom network ACL denies all inbound/outbound traffic until you add rules. Each subnet in your VPC must be associated with a network ACL. If you don't do this manually it will be associated with the default network ACL.

Elasticache-redis implementation

you can have a fully automated, fault-tolerant ElastiCache-Redis implementation by enabling both cluster mode and Multi-AZ failover.

Redshift security

you can load encrypted data from S3. It supports SSL encryption in transit between client applications and the Redshift data warehouse cluster. VPC for network isolation. Encryption for data at rest. Audit logging and AWS CloudTrail integration. Redshift takes care of key management, or you can manage your own keys through HSM or KMS

Elasticache clustering mode disabled:

you can only have one shard, one shard can have one read/write primary node and 0-5 read only replicas, you can distribute the replicas over multiple AZs in the same region. Replication from the primary node is asynchronous.

EFS-to-EFS backup solution

you can schedule automatic incremental backups of your Amazon EFS file system.

Glacier and CLI

you can upload data to Glacier using the CLI, SDKs, or APIs - you cannot use the AWS Console. Glacier adds 32-40 KB (indexing and archive metadata) to each object when transitioning from other classes using lifecycle policies. AWS recommends that if you have lots of small objects they are combined in an archive (e.g. a zip file) before uploading. - A description can be added to archives; no other metadata can be added. - Glacier archive IDs are added upon upload and are unique for each upload.

When to use Redis

you need encryption, you need HIPAA compliance, support for clustering, you need complex data types, you need HA (replication), pub/sub capability, geospatial indexing, backup and restore.

CloudFront charges

you pay for: -data transfer out to internet -data transfer out to origin -number of HTTP/HTTPS requests -invalidation requests -Dedicated IP custom SSL -Field level encryption requests

Components of VPC Customer Gateway

your side of a VPN connection

Amazon RDS support which database engines

· Amazon Aurora · MySQL · MariaDB · Oracle · SQL Server · PostgreSQL

RDS Events and Notifications

· Amazon RDS uses AWS SNS to send RDS events via SNS notifications · You can use API calls to the Amazon RDS service to list the RDS events in the last 14 days (DescribeEvents API) · You can view events from the last 14 days using the CLI · Using the AWS Console you can only view RDS events for the last 1 day.

Amazon S3

· Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the internet. · Amazon S3 is a distributed architecture and objects are redundantly stored on multiple devices across multiple facilities (AZs) in an Amazon S3 region. · Amazon S3 is a simple key-based object store · Keys can be any string, and they can be constructed to mimic hierarchical attributes · Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes · Amazon S3 provides a simple, standards-based REST web services interface that is designed to work with any Internet development toolkit · Files can be from 0 bytes to 5 TB · The largest object that can be uploaded in a single PUT is 5 gigabytes · For objects larger than 100 megabytes use the Multipart Upload capability · Updates to an object are atomic - when reading an updated object, you will either get the new object or the old one; you will never get partial or corrupt data. · There is unlimited storage available. It is recommended to access S3 through SDKs and APIs (the console uses APIs) · Event notifications for specific actions can send alerts or trigger actions

Examples of S3 Use Case?

· Backup and Storage - Provide data backup and storage services for others. · Application Hosting - Provide services that deploy, install, and manage web applications. · Media Hosting - Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads. · Software Delivery - Host your software applications that customers can download.

what are the S3 bucket naming consist of ?

· Bucket names must be at least 3 and no more than 63 characters in length. Bucket names must start and end with a lowercase letter or a number. Bucket names must be a series of one or more labels separated by periods. Bucket names can contain lowercase letters, numbers, and hyphens · Bucket names cannot be formatted as an IP address. For better performance, lower latency, and lower cost, create the bucket closer to your clients
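The rules above can be sketched as a small validator (a simplified illustration of the listed rules, not the full official rule set):

```python
import re

def looks_like_valid_bucket_name(name):
    """Check the naming rules listed above: length 3-63, lowercase
    letters/digits/hyphens, period-separated labels, not IP-formatted."""
    if not (3 <= len(name) <= 63):
        return False
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False  # must not look like an IP address
    # each label starts and ends with a lowercase letter or digit
    label = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
    return re.fullmatch(rf"{label}(\.{label})*", name) is not None

print(looks_like_valid_bucket_name("my-logs.2019"))  # True
print(looks_like_valid_bucket_name("192.168.1.1"))   # False (IP-formatted)
print(looks_like_valid_bucket_name("Ab"))            # False (short, uppercase)
```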

What are some S3 resources?

· By default a bucket, its objects, and related sub-resources are all private. · By default only a resource owner can access a bucket · The resource owner refers to the AWS account that creates the resource · With IAM the account owner rather than the IAM user is the owner. · Within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources. · Amazon Resource Names (ARN) are used for specifying resources in a policy.

How many file systems can you create in EFS?

· By default you can create up to 10 file systems per account

S3 Multipart upload

· Can be used to speed up uploads to S3 · Multipart upload uploads objects in parts independently, in parallel and in any order · Performed using the S3 Multipart upload API · It is recommended for objects of 100MB or larger o Can be used for objects from 5 MB up to 5TB o Must be used for objects larger than 5GB If transmission of any part fails it can be retransmitted. Improves throughput, can pause and resume object uploads, can begin upload before you know the final object size.
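With boto3, multipart behaviour is controlled through `TransferConfig`. A sketch of the settings as a plain dict (file and bucket names are placeholders; the real calls are commented out since they need boto3 and live credentials):

```python
# With boto3 installed and credentials configured:
#   from boto3.s3.transfer import TransferConfig
#   config = TransferConfig(**settings)
#   boto3.client("s3").upload_file("big.iso", "my-bucket", "big.iso", Config=config)
MB = 1024 * 1024
settings = {
    "multipart_threshold": 100 * MB,  # switch to multipart at 100 MB, as recommended
    "multipart_chunksize": 64 * MB,   # each part is uploaded independently
    "max_concurrency": 10,            # parts uploaded in parallel
}
print(settings["multipart_threshold"] // MB)
```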

What are the S3 user policies?

· Can use IAM to manage access to S3 resources · Using IAM you can create users, groups and roles and attach access policies to them granting them access to resources · You cannot grant anonymous permissions in an IAM user policy as the policy is attached to a user User policies can grant permissions to a bucket and the objects in it.

EFS file Sync

· EFS File Sync provides a fast and simple way to securely sync existing file systems into Amazon EFS · EFS File Sync copies files and directories into Amazon EFS at speeds up to 5x faster than standard Linux copy tools, with simple setup and management in the AWS console · EFS File Sync securely and efficiently copies files over the internet or an AWS Direct Connect connection · Copies file data and file system metadata such as ownership, timestamps, and access permissions · EFS File Sync provides the following benefits: § Efficient high-performance parallel data transfer that tolerates unreliable and high-latency networks § Encryption of data transferred from your IT environment to AWS § Data transfer rate up to five times faster than standard Linux copy tools § Full and incremental syncs for repetitive transfers · Note: EFS File Sync currently doesn't support syncing from an Amazon EFS source to an NFS destination · When deploying Amazon EFS File Sync on EC2, the instance size must be at least xlarge for your EFS File Sync to function · Recommended to use one of the memory-optimized r4.xlarge instance types · Can choose to run EFS File Sync either on-premises as a virtual machine (VM), or in AWS as an EC2 instance · Supports VMware ESXi

EFS compatibility

· EFS is integrated with a number of other AWS services, including CloudWatch, CloudFormation, CloudTrail, IAM, and Tagging services. · CloudWatch allows you to monitor file system activity using metrics · CloudFormation allows you to create and manage file systems using templates · CloudTrail allows you to record all Amazon EFS API calls in log files · IAM allows you to control who can administer your file system · Tagging services allows you to label your file systems with metadata that you define

EFS encryption

· EFS offers the ability to encrypt data at rest and in transit. Encryption keys are managed by the AWS Key Management Service (KMS) · Data encryption in transit uses industry-standard Transport Layer Security (TLS) · Enable encryption at rest in the EFS console or by using the AWS CLI or SDKs · Data can be encrypted in transit between your Amazon EFS file system and its clients by using the EFS mount helper

what are the S3 objects?

· Each object is stored and retrieved by a unique key (ID or name). An object in S3 is uniquely identified and addressed through: o Service endpoint o Bucket name o Object key (name) o Optionally, an object version. Objects stored in a bucket will never leave the region in which they are stored unless you move them to another region or enable cross-region replication. You can define permissions on objects when uploading and at any time afterwards using the AWS Management Console.

File Gateway

· File Gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3. Can be used for on-premises applications, and for Amazon EC2-resident applications that need file storage in S3 for object-based workloads · Use for flat files only, stored directly on S3 · File Gateway offers SMB- or NFS-based access to data in Amazon S3 with local caching. File Gateway supports Amazon S3 Standard, S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-IA · File Gateway supports clients connecting to the gateway using NFS v3 and v4.1 · Microsoft Windows clients that support NFS v3 can connect to File Gateway · The maximum size of an individual file is 5 TB

What is a buckets?

· Files are stored in buckets: a bucket can be viewed as a container for objects · A bucket is a flat container of objects · It does not provide a hierarchy of objects · You can use an object key name to mimic folders. 100 buckets per account by default. You can store unlimited objects in your buckets. You can create folders in your buckets (only available through the console). · You cannot create nested buckets · Bucket ownership is not transferable · Bucket names cannot be changed after they have been created. If a bucket is deleted its name becomes available again. Bucket names are part of the URL used to access the bucket. An S3 bucket is region specific. S3 is a universal namespace so names must be unique globally. The URL is in this format: https://s3-eu-west-1.amazonaws.com/<bucketname> · Can back up a bucket to another bucket in another account. Can enable logging to a bucket.

S3 User policies ?

· Granting permissions for all Amazon S3 operations · Managing permissions for users in your account · Granting object permissions to user within the account

S3 bucket policies for?

· Granting users permissions to a bucket owned by your account · Managing object permissions (where the object owner is the same account as the bucket owner) · Managing cross-account permissions for all Amazon S3 permissions

What are some of the access S3 buckets and object?

· Individual users · AWS accounts · Everyone (public/anonymous) · All authenticated users (AWS users) Access policies define access to resources and can be associated with resources (buckets and objects) and users You can use the AWS Policy Generator to create a bucket policy for your Amazon S3 bucket The categories of policy are resource-based policies and user policies.

AWS Elastic Beanstalk capabilities

· Integrates with VPC, IAM · Can provision most database instances · Allows full control of the underlying resources · Stores your application files and, optionally, server log files in Amazon S3 · Application data can also be stored on S3 · Multiple environments are supported to enable versioning · Changes from Git repositories are replicated · Linux and Windows 2008 R2 AMI support. Code is deployed using a WAR file or Git repository. Use the AWS Toolkit for Visual Studio and the AWS Toolkit for Eclipse to deploy to Elastic Beanstalk · Fault tolerance within a single region. By default, applications are publicly accessible. Provides integration with CloudWatch. Can adjust application server settings. Can access logs without logging into application servers. Can use CloudFormation to deploy Elastic Beanstalk.

What is Amazon S3 data consist of?

· Key (name), Value (data), Version ID, Metadata, Access Control Lists

Lambda Edge operations and monitoring

· Lambda automatically monitors Lambda functions and reports metrics through CloudWatch · Lambda tracks the number of requests, the latency per request, and the number of requests resulting in an error · You can view the request rate and error rates using the AWS Lambda console, the CloudWatch console, and other AWS resources · X-Ray is an AWS service that can be used to detect, analyze, and optimize performance issues with Lambda applications · X-Ray collects metadata from the Lambda service and any upstream and downstream services that make up your application · Lambda is integrated with CloudTrail for capturing API calls and can deliver log files to your S3 bucket

Lambda Functions configuration

· Lambda functions configured to access resources in a particular VPC will not have access to the internet in the default configuration. If you need access to external endpoints, you will need to create a NAT in your VPC to forward this traffic and configure your security group to allow this outbound traffic · Versioning can be used to run different versions of your code · Each Lambda function has a unique Amazon Resource Name (ARN) which cannot be changed after publishing

Lambda Edge

· Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end users at the lowest network latency · You just upload your Node.js code to AWS Lambda and configure your function to be triggered in response to an Amazon CloudFront request · The code is then ready to execute across AWS locations globally when a request for content is received, and scales with the volume of CloudFront requests globally.

Lambda limits

· Memory - minimum 128 MB, maximum 3008 MB, in 64 MB increments · Ephemeral disk capacity (/tmp space) per invocation - 512 MB · Number of file descriptors - 1024 · Number of processes and threads (combined) - 1024 · Maximum execution duration per request - 900 seconds · Concurrent executions per account - 1000 (soft limit)
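The memory rule above (128 MB to 3008 MB, in 64 MB steps) can be checked with a small helper. This is an illustrative sketch, not part of any AWS SDK; the function name is made up for the example.

```python
# Hypothetical validator for the Lambda memory limits quoted above:
# minimum 128 MB, maximum 3008 MB, allocated in 64 MB increments.
def is_valid_lambda_memory(mb: int) -> bool:
    return 128 <= mb <= 3008 and (mb - 128) % 64 == 0

print(is_valid_lambda_memory(128))   # minimum allowed
print(is_valid_lambda_memory(3008))  # maximum allowed
print(is_valid_lambda_memory(130))   # rejected: not a 64 MB step
```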

S3 charges

· No charge for data transferred between EC2 and S3 in the same region · Data transfer into S3 is free of charge · Data transferred to other regions is charged · Charges (data retrieval applies to S3 Standard-IA and S3 One Zone-IA) include: o Per GB/month storage fee o Data transfer out of S3 o Request charges (e.g. PUT and GET) o Retrieval requests (S3-IA or Glacier)

Lambda pricing

· Number of requests: the first 1 million per month are free, then $0.20 per 1 million · Duration: calculated from the time your code begins execution until it returns or terminates; the rate depends on the amount of memory allocated to the function.
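The request charge above is simple enough to compute directly. A minimal sketch of just the request component (the duration component depends on a per-GB-second rate not quoted in these notes, so it is omitted):

```python
def lambda_request_cost(requests: int) -> float:
    """Monthly Lambda request charge: first 1 million requests free,
    then $0.20 per additional 1 million (duration charges not included)."""
    billable = max(0, requests - 1_000_000)
    return billable / 1_000_000 * 0.20

print(lambda_request_cost(1_000_000))  # fully covered by the free tier
print(lambda_request_cost(3_000_000))  # 2 million billable requests
```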

S3: IAM user access to resources in another account

· Permission from the parent account through a user policy · Permission from the resource owner (or the parent account) to the IAM user through a bucket policy, bucket ACL, or object ACL. If an AWS account owns a resource it can grant permissions to another account; that account can then delegate those permissions, or a subset of them, to users in the account (permissions delegation). An account that receives permissions from another account cannot delegate those permissions cross-account to a third AWS account.

S3 log delivery group

· Providing WRITE permission to this group on a bucket enables S3 to write server access logs. · Not applicable to objects

S3 ACL

· S3 ACLs enable you to manage access to buckets and objects · Each bucket and object has an ACL attached to it as a sub-resource · Bucket and object permissions are independent of each other · The ACL defines which AWS accounts (grantees) or pre-defined S3 groups are granted access and the type of access. · A grantee can be an AWS account or one of the predefined Amazon S3 groups · When you create a bucket or an object, S3 creates a default ACL that grants the resource owner full control over the resource.

How can Amazon S3 notifications be sent?

· Notifications can be sent to SNS topics, SQS queues, or Lambda functions; the SNS/SQS/Lambda targets must be configured before S3. There are no extra charges from S3, but you pay for SNS, SQS, and Lambda · The Requester Pays feature causes the requester to pay (and removes anonymous access) · Can provide time-limited access to objects · Provides read-after-write consistency for PUTs of new objects · Provides eventual consistency for overwrite PUTs and DELETEs (changes take time to propagate) · You can only store files (objects) on S3; it is not possible to install an operating system on it. An HTTP 200 code indicates a successful write to S3.

AWS Elastic Beanstalk support what program languages?

· Supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications · Supports the following languages and development stacks: o Apache Tomcat for Java applications o Apache HTTP Server for PHP applications o Apache HTTP Server for Python applications o Nginx or Apache HTTP Server for Node.js applications o Passenger or Puma for Ruby applications o Microsoft IIS 7.5, 8.0, and 8.5 for .NET applications o Java SE o Docker o Go

AWS Storage Gateway

· The AWS Storage Gateway service enables hybrid storage between on-premises environments and the AWS Cloud · It provides low-latency performance by caching frequently accessed data on premises, while storing data securely and durably in Amazon cloud storage services · Implemented using a virtual machine that you run on-premises (VMware or Hyper-V virtual appliance) · Provides local storage resources backed by AWS S3 and Glacier · Often used in disaster recovery preparedness to sync data to AWS · Each gateway you have can provide one type of interface. All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL · By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3)

Requester pay

· The bucket owner will only pay for object storage fees · The requester will pay for requests (upload/downloads) and data transfers · Can only be enabled at the bucket level

Storage Gateway: Volume Gateway

· The volume gateway represents the family of gateways that support block-based volumes, previously referred to as gateway-cached and gateway-stored modes · Block storage - iSCSI based · Cached volume mode - the entire dataset is stored on S3 and a cache of the most frequently accessed data is kept on-site · Stored volume mode - the entire dataset is stored on-site and is asynchronously backed up to S3 (as EBS point-in-time snapshots). Snapshots are incremental and compressed. Each volume gateway can support up to 32 volumes · In cached mode, each volume can be up to 32 TB, for a maximum of 1 PB of data per gateway (32 volumes, each 32 TB in size) · In stored mode, each volume can be up to 16 TB, for a maximum of 512 TB of data per gateway (32 volumes, each 16 TB in size)
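The per-gateway maximums quoted above are just the volume count multiplied by the per-volume ceiling; a quick sanity check of the arithmetic:

```python
# Volume gateway capacity limits from the notes above:
# 32 volumes per gateway; 32 TB/volume cached, 16 TB/volume stored.
MAX_VOLUMES = 32

cached_max_tb = MAX_VOLUMES * 32  # cached mode ceiling
stored_max_tb = MAX_VOLUMES * 16  # stored mode ceiling

print(cached_max_tb)  # 1024 TB, i.e. 1 PB per gateway
print(stored_max_tb)  # 512 TB per gateway
```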

Storage Gateway: Tape Gateway (virtual tape library)

· Used for backup with popular backup software. Each gateway is preconfigured with a media changer and tape drives. Supported by NetBackup, Backup Exec, Veeam etc. When creating virtual tapes, you select one of the following sizes: 100 GB, 200 GB, 400 GB, 800 GB, 1.5 TB, and 2.5 TB · A tape gateway can have up to 1,500 virtual tapes with a maximum aggregate capacity of 1 PB.

EFS access control

· When you create a file system, you create endpoints in your VPC called "mount targets" · When mounting from an EC2 instance, your file system's DNS name, which you provide in your mount command, resolves to a mount target's IP address. · You can control who can administer your file system using IAM. · You can control access to files and directories with POSIX-compliant user and group-level permissions · POSIX permissions allow you to restrict access from hosts by user and group · EFS Security Groups act as a firewall, and the rules you add define the traffic flow

S3 Copy

· You can create a copy of objects up to 5 GB in size in a single atomic operation · For files larger than 5 GB you must use the multipart upload API · Can be performed using the AWS SDKs or REST API. The copy operation can be used to: · Generate additional copies of objects · Rename objects · Change the copy's storage class or encryption-at-rest status · Move objects across AWS locations/regions · Change object metadata. Once uploaded to S3, some object metadata cannot be changed; copying the object can allow you to modify this information

What are the limits to managing permissions using ACLs?

· You cannot grant permissions to individual users · You cannot grant conditional permissions · You cannot explicitly deny access

How does S3 cross-account access work?

· You grant permission to another AWS account using the email address or the canonical user ID · However, if you provide an email address in your grant request, Amazon S3 finds the canonical user ID for that account and adds it to the ACL · Grantee accounts can then delegate the access provided by other accounts to their individual users.

EFS Pricing and Billing

· You pay only for the amount of file system storage you use per month · When using the Provisioned Throughput mode, you pay for the throughput you provision per month · There is no minimum fee and there are no set-up charges · With EFS File Sync, you pay per GB for data copied to EFS

What is AWS Elastic Beanstalk

A web service for deploying and managing applications in the AWS Cloud without worrying about the infrastructure that runs those applications · Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto scaling, and application health monitoring · Considered a Platform as a Service (PaaS) solution

Your organization is planning to go serverless in the cloud. Which of the following combinations of services provides a fully serverless architecture?

A. Lambda, API Gateway, DynamoDB, S3, CloudFront

Where does AWS Lambda store its code?

AWS Lambda stores code in Amazon S3 and encrypts it at rest. Continuous scaling - Lambda scales out, not up, scaling concurrently executing functions up to your default limit (1000).
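A useful rule of thumb for reasoning about that concurrency limit: concurrent executions are roughly the invocation rate multiplied by the average execution duration. A small sketch (the function name is illustrative, not an AWS API):

```python
def estimated_concurrency(invocations_per_second: float, avg_duration_s: float) -> float:
    # Rule of thumb: concurrent executions ~= request rate x average duration.
    return invocations_per_second * avg_duration_s

# 100 requests/sec with a 2-second average duration needs ~200 concurrent
# executions -- well under the default 1000 soft limit.
print(estimated_concurrency(100, 2))
```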

EFS performance 2

Amazon EFS is designed to burst to allow high throughput levels for periods of time. Amazon EFS file systems are distributed across an unconstrained number of storage servers, enabling file systems to grow elastically to petabyte scale and allowing massively parallel access from Amazon EC2 instances to your data.

Amazon Glacier

Archived data, where you can wait 3-5 hours for access

S3 Tag

Assign tags to objects to use in costing, billing, security etc.

What are the S3 pre-defined groups?

Authenticated Users group: · This group represents all AWS accounts · Access permission to this group allows any AWS account access to the resource · All requests must be signed (authenticated) · Any authenticated user can access the resource

Elastic Load Balancing

Automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, providing fault tolerance for applications by balancing traffic across those targets.

5. A solutions Architect is designing the compute layer of a serverless application. The compute layer will manage requests from external systems, orchestrate serverless workflows, and execute the business logic. The Architect needs to select the most appropriate AWS services for these functions. Which services should be used for the compute layer? (choose 2)

B. Use Amazon API Gateway with AWS Lambda for executing the business logic. D. Use AWS Step Functions for orchestrating serverless workflows. With Amazon API Gateway, you can run a fully managed REST API that integrates with Lambda to execute your business logic and includes traffic management, authorization and access control, monitoring, and API versioning. AWS CloudFormation and Elastic Beanstalk are orchestrators that are used for describing and provisioning resources, not actually performing workflow functions within the application.

What is the bucket policies limitation in size?

Bucket policies are limited to 20 KB in size
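That 20 KB limit applies to the serialized JSON policy document, which makes it easy to check before attempting to attach a policy. An illustrative helper (the function name is made up; real attachment would go through the S3 API):

```python
import json

LIMIT_BYTES = 20 * 1024  # 20 KB bucket-policy size limit

def policy_fits(policy: dict) -> bool:
    # The limit applies to the serialized JSON document.
    return len(json.dumps(policy).encode("utf-8")) <= LIMIT_BYTES

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
    }],
}
print(policy_fits(policy))  # a small policy fits comfortably
```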

AWS Auto Scaling Launch Configuration

Created from the AWS console or CLI. You can create a new launch configuration or use an existing running EC2 instance to create the launch configuration. The AMI must exist on EC2. If you need to make changes to your launch configuration, you have to create a new one, make the required changes, and use that with your Auto Scaling groups. You can attach one or more classic ELBs to your existing Auto Scaling group (ASG). The ELBs must be in the same region. EC2 instances, whether existing or added by the ASG, will be automatically registered with the ASG-defined ELBs.

4. You are using the Elastic Container Service (ECS) to run a number of containers using the EC2 launch type. To gain more control over scheduling containers you have decided to utilize Blox to integrate a third-party scheduler. The third-party scheduler will use the StartTask API to place tasks on specific container instances. What type of ECS scheduler will you need to use to enable this configuration?

D. Custom Scheduler. Amazon ECS provides a service scheduler (for long-running tasks and applications), the ability to run tasks manually (for batch jobs or single run tasks) with Amazon ECS placing tasks on your cluster for you. The service scheduler is ideally suited for long running stateless services and applications. Amazon ECS allows you to create your own schedulers that meet the needs of your business, or to leverage third party schedulers.

S3 Standard-IA

Durable, immediately available, infrequently accessed

What does Glacier archive that it does not support?

Glacier does not archive object metadata, you need to maintain a client-side database to maintain this information.

AWS Auto Scaling

Helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You can create collections of EC2 instances called Auto Scaling groups. Automatically provides horizontal scaling for your instances. Availability, cost, and system metrics can all factor into scaling. Auto Scaling is region-specific. It works with ELB, CloudWatch, and CloudTrail. Auto Scaling will try to distribute EC2 instances evenly across AZs. You cannot edit a launch configuration once defined.

Network Load Balancer

Layer 4 load balancer that routes connections based on IP protocol data. It is architected to handle millions of requests/sec and sudden, volatile traffic patterns, and provides extremely low latencies. It also uses static IP addresses, and you can assign one Elastic IP address to the load balancer per AZ. It supports WebSockets and cross-zone load balancing. Does not support SSL termination. Uses the same API as the Application Load Balancer and uses target groups. CloudWatch reports Network Load Balancer metrics. It also has enhanced logging: you can use the flow logs feature to record all requests sent to your load balancer.

S3 One Zone-IA

Lower cost for infrequently accessed data with less resilience. · Objects stored in the S3 One Zone-IA storage class are stored redundantly within a single Availability Zone in the AWS Region you select.

What are the S3 Storage type?

Persistent data store (data is durable and sticks around after reboots, restarts, or power cycles), e.g. S3, Glacier, EBS, EFS

What are the S3 sub-resources fundamental ?

Sub-resources are subordinate to objects; they do not exist independently but are always associated with another entity such as an object or bucket. · Sub-resources (configuration containers) associated with buckets include: o Lifecycle - define an object's lifecycle o Website - configuration for hosting static websites o Versioning - retain multiple versions of objects as they are changed o Access Control Lists (ACLs) - control permissions and access to the bucket o Bucket Policies - control access to the bucket o Cross-Origin Resource Sharing (CORS) o Logging · Sub-resources associated with objects include: o ACLs - define permissions to access the object o Restore - restoring an archive

EFS Performance

There are two performance modes: · "General Purpose" performance mode is appropriate for most file systems · "Max I/O" performance mode is optimized for applications where tens, hundreds, or thousands of EC2 instances are accessing the file system

Amazon S3 additional capabilities

Transfer acceleration, Requester Pays, tags, events, static web hosting, BitTorrent

S3 Events

Trigger notifications to SNS, SQS, or Lambda when certain events happen in your bucket

BitTorrent

Use the BitTorrent protocol to retrieve any publicly available object by automatically generating a .torrent file.

S3 supports ACL permission "WRITE":

When granted on a bucket: allows grantee to create, overwrite, and delete any object in the bucket When granted on an object: not applicable

S3 supports ACL permission "READ":

When granted on a bucket: allows grantee to list the object in the bucket. When granted on an object: allows grantee to read the object data and its metadata.

S3 supports ACL permission "READ_ACP":

When granted on a bucket: allows grantee to read the bucket ACL. When granted on an object: allows grantee to read the object ACL

S3 supports ACL permission "WRITE_ACP":

When granted on a bucket: allows grantee to write the ACL for the applicable bucket. When granted on an object: allows grantee to write the ACL for the applicable object.

Does Amazon S3 scale to high request rates?

Yes. · For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket · There are no limits to the number of prefixes in a bucket, so it is simple to increase your read or write performance exponentially · For read-intensive requests you can also use CloudFront edge locations to offload traffic from S3.
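Because those baselines are per prefix, spreading keys across more prefixes multiplies the aggregate rate. A small sketch of the arithmetic (illustrative only; actual achievable rates depend on workload):

```python
# Per-prefix baselines quoted above.
PUT_PER_PREFIX = 3500  # PUT/POST/DELETE requests per second
GET_PER_PREFIX = 5500  # GET requests per second

def aggregate_rates(prefixes: int):
    # Keys spread evenly across N prefixes scale the baseline linearly.
    return PUT_PER_PREFIX * prefixes, GET_PER_PREFIX * prefixes

# With 2 prefixes: up to ~7,000 writes/sec and ~11,000 reads/sec.
print(aggregate_rates(2))
```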

Building Lambda Apps

You can deploy and manage your serverless applications using the AWS Serverless Application Model (AWS SAM) · AWS SAM is a specification that prescribes the rules for expressing serverless applications on AWS · This specification aligns with the syntax used by AWS CloudFormation today and is supported natively within AWS CloudFormation as a set of resource types (referred to as "serverless resources") · You can automate your serverless application's release process using AWS CodePipeline and AWS CodeDeploy · You can enable your Lambda function for tracing with AWS X-Ray.

8. You are a Solutions Architect at Digital Cloud Training. In your VPC you have a mixture of EC2 instances in production and non-production environments. You need to devise a way to segregate access permissions to different sets of users for instances in different environments. How can this be achieved? (choose 2)

A. Add a specific tag to the instances you want to grant the users or groups access to. D. Create an IAM policy with a conditional statement that matches the tag. You can use condition checking in IAM policies to look for a specific tag; IAM checks that the tag involved in the request matches the specified key name and value. You cannot achieve this outcome using environment variables stored in user data and conditional statements in a policy; you must use an IAM policy that grants access to instances based on the tag. You cannot use an IdP for this solution.

7. You just created a new subnet in your VPC and have launched an EC2 instance into it. You are trying to directly access the EC2 instance from the Internet and cannot connect. Which steps should you take to troubleshoot the issue? (choose 2)

A. Check that the instance has a public IP address. C. Check that the route table associated with the subnet has an entry for an Internet gateway. Public subnets are subnets that have: 1. Auto-assign public IPv4 address set to Yes, and 2. a route table entry to an attached Internet gateway. A NAT gateway is used for providing outbound Internet access for EC2 instances in private subnets. Checking that you can ping from another subnet does not relate to being able to access the instance remotely, as that uses different protocols and a different network path. Security groups are stateful and do not need a rule for outbound traffic; for this solution you would only need to create an inbound rule that allows the relevant protocol.

How does S3 format resource look like?

arn:aws:s3:::bucket_name arn:aws:s3:::bucket_name/key_name
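The two ARN shapes above can be built with trivial string formatting; helpers like this are handy when generating IAM or bucket policies programmatically (the function names are illustrative, not an AWS API):

```python
def bucket_arn(bucket: str) -> str:
    # S3 ARNs have no region or account-ID component.
    return f"arn:aws:s3:::{bucket}"

def object_arn(bucket: str, key: str) -> str:
    # Object ARNs append the key (or a wildcard) after the bucket name.
    return f"arn:aws:s3:::{bucket}/{key}"

print(bucket_arn("my-bucket"))                # arn:aws:s3:::my-bucket
print(object_arn("my-bucket", "logs/a.txt"))  # arn:aws:s3:::my-bucket/logs/a.txt
```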

2. An application stack includes an Elastic Load Balancer in a public subnet, a fleet of Amazon EC2 instances in an Auto Scaling Group, and an Amazon RDS MySQL cluster. Users connect to the application from the Internet. The application server and database must be secure. What is the most appropriate architecture for the application stack?

B. Create a private subnet for the Amazon EC2 instances and a private subnet for the Amazon RDS cluster. Typically the nodes of an internet-facing load balancer have public IP addresses and must therefore be in a public subnet. To keep your back-end instances secure you can place them in a private subnet. To do this you must associate a corresponding private subnet for each Availability Zone the ELB/instances are in. For RDS, you create a DB subnet group, which is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB instances.

6. A Solution Architect is creating a solution for an application that must be deployed on Amazon EC2 hosts that are dedicated to the client. Instance placement must be automatic and billing should be per instance. Which type of EC2 deployment model should be used?

B. Dedicated Instance. Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instances allow automatic instance placement and billing is per instance. An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses. With Dedicated Hosts, billing is on a per-host basis (not per instance).

3. A call center application consists of a three-tier application using Auto Scaling groups to automatically scale resources as needed. Users report that every morning at 9:00 am the system becomes very slow for about 15 minutes. A Solutions Architect determines that a large percentage of the call center staff starts work at 9:00 am, so Auto Scaling does not have enough time to scale to meet demand. How can the Architect fix the problem?

B. Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30 am each morning. Scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. This is ideal for situations where you know when and for how long you are going to need the additional capacity.

Amazon EFS (Elastic File System)

is a fully-managed service that makes it easy to set up and scale file storage in the Amazon Cloud. · Implementation of an NFS file share, accessed using the NFSv4.1 protocol. · Elastic storage capacity, and pay for what you use (in contrast to EBS, with which you pay for what you provision). · Multi-AZ metadata and data storage · Can configure mount points in one, or many, AZs · Can be mounted from on-premises systems ONLY if using Direct Connect or a VPN connection · Alternatively, use the EFS File Sync agent · Good for big data and analytics, media processing workflows, content management, web serving, home directories etc. · Pay for what you use (no pre-provisioning required) · Can scale up to petabytes · EFS is elastic and grows and shrinks as you add and remove data · Can concurrently connect 1 to 1000s of EC2 instances, from multiple AZs · A file system can be accessed concurrently from all AZs in the region where it is located · Data is stored across multiple AZs within a region · Read-after-write consistency

AWS Glacier

is an archiving storage solution for infrequently accessed data. Archived objects are not available for real-time access; you need to submit a retrieval request. Retrieval can take a few hours, and Glacier must complete a job before you can get its output. Requested archival data is copied to S3 One Zone-IA; following retrieval, you have 24 hours to download your data. You cannot specify Glacier as the storage class at the time you create an object. There is no SLA. Glacier is designed to sustain the loss of two facilities. Glacier automatically encrypts data at rest using AES-256 symmetric keys and supports secure transfer of data over SSL.

what are 4 mechanisms for controlling access to Amazon S3 resources?

o IAM policies o Bucket policies o Access Control Lists (ACLs) o Query string authentication (URL to an Amazon S3 object which is only valid for a limited time)

Note on S3 set of permission

o Permissions are assigned at the account level for authenticated users o You cannot assign permissions to individual IAM users o When READ is granted on a bucket it only provides the ability to list the objects in the bucket o When READ is granted on an object the data can be read o ACP means access control permissions, and READ_ACP/WRITE_ACP control who can read/write the ACLs themselves o WRITE is only applicable at the bucket level (except for ACP)

What are the S3 storage class?

There are 4 classes: S3 Standard, S3 Standard-IA, S3 One Zone-IA, and Amazon Glacier

Can a bucket owner grant access cross-account?

Yes, a bucket owner can grant cross-account permissions to another AWS account (or users in an account) to upload objects. · The AWS account that uploads the objects owns them · The bucket owner does not have permissions on objects that other accounts own; however: o The bucket owner pays the charges o The bucket owner can deny access to any objects regardless of ownership o The bucket owner can archive any objects, or restore archived objects, regardless of ownership

How to enable VPC support on AWS Lambda?

You need to specify one or more subnets in a single VPC and a security group as part of your function configuration. Lambda functions provide access only to a single VPC; if multiple subnets are specified, they must all be in the same VPC.

What are the components of AWS Lambda are?

· A Lambda function, which is comprised of your custom code and any dependent libraries · Event sources, such as SNS or a custom service, that trigger your function and execute its logic · Downstream resources, such as DynamoDB tables or Amazon S3 buckets, that your Lambda function calls once it is triggered · Log streams: custom logging statements that allow you to analyze the execution flow and performance of your Lambda function. Lambda is an event-driven compute service where AWS Lambda runs code in response to events, such as changes to data in an S3 bucket or a DynamoDB table.
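The pieces above fit together in a handler: the event source delivers an event dict, and your custom code acts on it. A minimal sketch for an S3-triggered function; the event below is a trimmed-down, hypothetical version of the structure S3 actually delivers, and the key name is made up:

```python
# Minimal Lambda handler sketch: extract the object keys from an
# S3-style event and report what was "processed".
def handler(event, context):
    keys = [record["s3"]["object"]["key"] for record in event["Records"]]
    return {"processed": keys}

# Simulated (simplified) S3 event for local testing -- no AWS calls involved.
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/photo.jpg"}}}]}
print(handler(fake_event, None))  # {'processed': ['uploads/photo.jpg']}
```

Testing the handler locally with a hand-built event like this is a common pattern, since the handler itself is just a plain function.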

AWS Lambda supports what program language?

· AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Ruby, Go, and PowerShell

How to access auditing on S3?

· Access auditing can be configured by configuring an Amazon S3 bucket to create access log records for all requests made against it. For capturing IAM/user identity information in logs, configure AWS CloudTrail data events. · By default a bucket, its objects, and related sub-resources are all private · By default only a resource owner can access a bucket · The resource owner refers to the AWS account that creates the resource · With IAM, the account owner rather than the IAM user is the owner · Within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources · Amazon Resource Names (ARNs) are used for specifying resources in a policy.

S3 All Users group

· Access permission to this group allows anyone in the world access to the resource · The requests can be signed (authenticated) or unsigned (anonymous) · Unsigned requests omit the authentication header in the request · AWS recommends that you never grant the All Users group WRITE, WRITE_ACP, or FULL_CONTROL permissions

How to enable EFS file systems from on-premises server?

· Access to EFS file systems from on-premises servers can be enabled via Direct Connect or AWS VPN · You mount an EFS file system on your on-premises Linux server using the standard Linux mount command for mounting a file system via the NFSv4.1 protocol · Can choose General Purpose or Max I/O (both SSD) · The VPC of the connecting instance must have DNS hostnames enabled · EFS is compatible with all Linux-based AMIs for Amazon EC2

What is Amazon ECS?

· Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. Using API calls you can launch and stop container-enabled applications, query the complete state of clusters, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles.

What event sources are supported for AWS Lambda?

· Amazon S3 · Amazon DynamoDB · Amazon Kinesis Data Streams · Amazon Simple Notification Service · Amazon Simple Email Service · Amazon Simple Queue Service · Amazon Cognito · AWS CloudFormation · Amazon CloudWatch Logs · Amazon CloudWatch Events · AWS CodeCommit · Scheduled Events (powered by Amazon CloudWatch Events) · AWS Config · Amazon Alexa · Amazon Lex · Amazon API Gateway · AWS IoT Button · Amazon CloudFront · Amazon Kinesis Data Firehose · Other event sources: invoking a Lambda function on demand. Other event sources can invoke Lambda functions on-demand (the application needs permissions to invoke the Lambda function).

