docs.aws.amazon.com - Migration Planning (15%)
Routing traffic to an ELB load balancer
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
Querying Archives with S3 Glacier Select
https://docs.aws.amazon.com/amazonglacier/latest/dev/glacier-select.html
Scaling based on Amazon SQS
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
Health checks for Auto Scaling instances
https://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html
Using high-level (s3) commands with the AWS CLI
https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html When you use aws s3 commands to upload large objects to an Amazon S3 bucket, the AWS CLI automatically performs a multipart upload. You can't resume a failed upload when using these aws s3 commands. If the multipart upload fails due to a timeout, or if you cancel it manually in the AWS CLI, the AWS CLI stops the upload and cleans up any files that were created. This process can take several minutes. If the multipart upload or cleanup process is canceled by a kill command or system failure, the created files remain in the Amazon S3 bucket. To clean up the multipart upload, use the s3api abort-multipart-upload command.
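A hedged boto3 sketch of the same cleanup (the bucket name is a placeholder): list the in-progress multipart uploads and abort each one.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# List any in-progress (or abandoned) multipart uploads in the bucket.
response = s3.list_multipart_uploads(Bucket=bucket)

for upload in response.get("Uploads", []):
    # Abort each one so the already-uploaded parts stop accruing storage charges.
    s3.abort_multipart_upload(
        Bucket=bucket,
        Key=upload["Key"],
        UploadId=upload["UploadId"],
    )
    print(f"Aborted multipart upload for {upload['Key']}")
```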
Health checks for your target groups
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
Cluster Configuration Guidelines and Best Practices
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html
Managing Linux Security Updates
https://docs.aws.amazon.com/opsworks/latest/userguide/workingsecurity-updates.html
AWS OpsWorks Stacks integrates with IAM to let you control user and resource permissions
https://docs.aws.amazon.com/opsworks/latest/userguide/workingsecurity.html
Configuring DNSSEC for a domain
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html
Security best practices for your VPC
- Use multiple Availability Zone deployments so you have high availability.
- Use security groups and network ACLs. For more information, see Security groups for your VPC and Network ACLs.
- Use IAM policies to control access.
- Use Amazon CloudWatch to monitor your VPC components and VPN connections.
- Use flow logs to capture information about IP traffic going to and from network interfaces in your VPC. For more information, see VPC Flow Logs.
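A hedged boto3 sketch of enabling flow logs for a VPC, delivering to CloudWatch Logs (the VPC ID, log group name, and IAM role ARN are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Enable VPC Flow Logs, delivering records to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                      # capture accepted and rejected traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```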
Site-to-Site VPN limitations
A Site-to-Site VPN connection has the following limitations. IPv6 traffic is not supported for VPN connections on a virtual private gateway. An AWS VPN connection does not support Path MTU Discovery. In addition, take the following into consideration when you use Site-to-Site VPN. When connecting your VPCs to a common on-premises network, we recommend that you use non-overlapping CIDR blocks for your networks.
Active Directory Connector
AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. AD Connector comes in two sizes, small and large. You can spread application loads across multiple AD Connectors to scale to your performance needs. There are no enforced user or connection limits. Once set up, AD Connector offers the following benefits: Your end users and IT administrators can use their existing corporate credentials to log on to AWS applications such as Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail. You can manage AWS resources like Amazon EC2 instances or Amazon S3 buckets through IAM role-based access to the AWS Management Console. You can consistently enforce existing security policies (such as password expiration, password history, and account lockouts) whether users or IT administrators are accessing resources in your on-premises infrastructure or in the AWS Cloud. You can use AD Connector to enable multi-factor authentication by integrating with your existing RADIUS-based MFA infrastructure to provide an additional layer of security when users access AWS applications. Continue reading the topics in this section to learn how to connect to a directory and make the most of AD Connector features.
How AWS Config Works
AWS Config only delivers the configuration history files and configuration snapshots to the specified S3 bucket; AWS Config doesn't modify the lifecycle policies for objects in the S3 bucket. You can use lifecycle policies to specify whether you want to delete or archive objects to Amazon S3 Glacier. https://docs.aws.amazon.com/config/latest/developerguide/how-does-config-work.html
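A hedged boto3 sketch of such a lifecycle rule on the delivery bucket (bucket name, prefix, and day counts are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Archive AWS Config history/snapshot objects to S3 Glacier after a year,
# then expire them after five years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-config-delivery-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-config-history",
                "Filter": {"Prefix": "AWSLogs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},
            }
        ]
    },
)
```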
What is AWS SMS?
AWS Server Migration Service automates the migration of your on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to the AWS Cloud. AWS SMS incrementally replicates your server VMs as cloud-hosted Amazon Machine Images (AMIs) ready for deployment on Amazon EC2. Working with AMIs, you can easily test and update your cloud-based images before deploying them in production.
Sending SSM Agent logs to CloudWatch Logs
AWS Systems Manager Agent (SSM Agent) is Amazon software that runs on your EC2 instances and your hybrid instances (on-premises instances and virtual machines) that are configured for Systems Manager. SSM Agent processes requests from the Systems Manager service in the cloud and configures your machine as specified in the request. https://docs.aws.amazon.com/systems-manager/latest/userguide/monitoring-ssm-agent.html
AWS Systems Manager Patch Manager
AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for Microsoft applications.) You can use Patch Manager to install Service Packs on Windows instances and perform minor version upgrades on Linux instances. You can patch fleets of EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Amazon Linux, Amazon Linux 2, CentOS, Debian Server, macOS, Oracle Linux, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), Ubuntu Server, and Windows Server. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.
[Important] AWS does not test patches for Windows Server or Linux before making them available in Patch Manager. Also, Patch Manager doesn't support upgrading major versions of operating systems, such as Windows Server 2016 to Windows Server 2019, or SUSE Linux Enterprise Server (SLES) 12.0 to SLES 15.0.
Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. (Tags are keys that help identify and sort your resources within your organization.) You can add tags to your patch baselines themselves when you create or update them. Patch Manager provides options to scan your instances and report compliance on a schedule, install available patches on a schedule, and patch or scan instances on demand whenever you need to. Patch Manager integrates with AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon EventBridge to provide a secure patching experience that includes event notifications and the ability to audit usage.
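A hedged boto3 sketch of a patch baseline with an auto-approval rule, registered to a patch group (names and values are illustrative):

```python
import boto3

ssm = boto3.client("ssm")

# Create a patch baseline that auto-approves critical/important security
# patches 7 days after release.
baseline = ssm.create_patch_baseline(
    Name="example-amazon-linux-2-baseline",
    OperatingSystem="AMAZON_LINUX_2",
    Description="Auto-approve security patches after 7 days",
    ApprovalRules={
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                "ApproveAfterDays": 7,
            }
        ]
    },
)

# Associate the baseline with instances tagged with Patch Group = "web-servers".
ssm.register_patch_baseline_for_patch_group(
    BaselineId=baseline["BaselineId"],
    PatchGroup="web-servers",
)
```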
AWS Shield
AWS provides AWS Shield Standard and AWS Shield Advanced for protection against DDoS attacks. AWS Shield Standard is automatically included at no extra cost beyond what you already pay for AWS WAF and your other AWS services. For added protection against DDoS attacks, AWS offers AWS Shield Advanced. AWS Shield Advanced provides expanded DDoS attack protection for your resources. You can add protection for any of the following resource types:
- Amazon CloudFront distributions
- Amazon Route 53 hosted zones
- AWS Global Accelerator accelerators
- Application Load Balancers
- Elastic Load Balancing (ELB) load balancers
- Amazon Elastic Compute Cloud (Amazon EC2) Elastic IP addresses
Amazon Data Lifecycle Manager
Amazon Data Lifecycle Manager cannot be used to manage snapshots or AMIs that are created by any other means. Amazon Data Lifecycle Manager cannot be used to automate the creation, retention, and deletion of instance store-backed AMIs. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
New Capacity-Optimized Allocation Strategy for Provisioning Amazon EC2 Spot Instances
Amazon EC2 now offers a new "Capacity Optimized" allocation strategy for provisioning Spot Instances via EC2 Auto Scaling, EC2 Fleet, and Spot Fleet. The "Capacity Optimized" allocation strategy automatically makes the most efficient use of available spare capacity while still taking advantage of the steep discounts offered by Spot Instances. One of the best practices for using EC2 Spot Instances effectively is to be flexible across a wide range of instance types. When customers configure their Auto Scaling group, EC2 Fleet, or Spot Fleet to use multiple instance types with Spot, they must choose a Spot allocation strategy. Spot allocation strategies determine how the Spot Instances in your fleet are fulfilled from Spot Instance pools. The capacity-optimized strategy automatically launches Spot Instances into the most available pools by looking at real-time capacity data and predicting which are the most available. This works well for workloads such as big data and analytics, image and media rendering, machine learning, and high performance computing that may have a higher cost of interruption. By offering the possibility of fewer interruptions, the capacity-optimized strategy can lower the overall cost of your workload. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity available in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads. Spot Instances are easy to launch, scale and manage through AWS services such as Amazon ECS and Amazon EMR, or integrated third parties such as Terraform and Jenkins. Spot Instances can be launched via RunInstances API with a single additional parameter. You can also provision compute capacity across Spot Instances, RIs and On-Demand instances to optimize performance and cost using EC2 Auto Scaling, EC2 Fleet, and Spot Fleet APIs.
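A hedged boto3 sketch of an Auto Scaling group that uses the capacity-optimized Spot allocation strategy; the launch template name, subnets, and instance types are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling group that launches Spot Instances across several instance
# types using the capacity-optimized allocation strategy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="spot-capacity-optimized-asg",
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0aaa11112222bbbb3,subnet-0ccc44445555dddd6",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-app-template",
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything above base is Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```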
Amazon ElastiCache
Amazon ElastiCache offers fully managed Redis and Memcached. With both ElastiCache for Redis and ElastiCache for Memcached, you:
- No longer need to perform management tasks such as hardware provisioning, software patching, setup, configuration, and failure recovery. This allows you to focus on high-value application development.
- Have access to monitoring metrics associated with your nodes, enabling you to diagnose and react to issues quickly.
- Can take advantage of cost-efficient and resizable hardware capacity.
Additionally, ElastiCache for Redis features an enhanced engine which improves on the reliability and efficiency of open source Redis while remaining Redis-compatible, so your existing Redis applications work seamlessly without changes. ElastiCache for Redis also features Online Cluster Resizing, supports encryption, and is HIPAA eligible and PCI DSS compliant. ElastiCache for Memcached features Auto Discovery, which helps developers save time and effort by simplifying the way an application connects to a cluster. Read the more detailed comparison between ElastiCache for Redis and ElastiCache for Memcached for further information about the differences between the two products.
Amazon RDS Read Replicas
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engines' native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.
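A minimal boto3 sketch of creating a read replica (identifiers and instance class are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of an existing MySQL source instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-replica-1",
    SourceDBInstanceIdentifier="myapp-primary",
    DBInstanceClass="db.r5.large",
    PubliclyAccessible=False,
)
```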
Multi-AZ Failover Condition
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:
- Loss of availability in primary Availability Zone
- Loss of network connectivity to primary
- Compute unit failure on primary
- Storage failure on primary
[Note] When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover (see the Aurora documentation for details on update behavior). As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
Q: What types of replication does Amazon RDS support for Oracle?
Amazon RDS for Oracle supports two types of replication technologies: Amazon RDS Multi-AZ and Oracle Read Replicas. Multi-AZ deployments are supported for both the License Included and Bring Your Own License (BYOL) licensing models, while Read Replicas are supported for the Bring Your Own License (BYOL) model only. Amazon RDS for Oracle provides Multi-AZ deployments to deliver enhanced availability and durability for database (DB) instances within a specific AWS Region, and this is often an effective disaster recovery (DR) solution for most use cases. For customers running mission-critical databases who have a business requirement for their DR configuration to span different AWS Regions, the Oracle Read Replicas feature is an ideal choice.
Choosing between alias and non-alias records
Amazon Route 53 alias records provide a Route 53-specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record. Unlike a CNAME record, you can create an alias record at the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You can't create a CNAME record for example.com, but you can create an alias record for example.com that routes traffic to www.example.com. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
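A hedged boto3 sketch of creating an alias A record at the zone apex that targets an ELB load balancer (zone IDs and DNS names are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Create an alias A record at the zone apex that points to a load balancer.
# The AliasTarget HostedZoneId must be the load balancer's canonical hosted zone ID.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLEZONE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z35EXAMPLE",  # example ELB hosted zone ID
                        "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```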
Constructing Amazon WorkDocs Content Manager
Amazon WorkDocs Content Manager can be used for both administrative and user applications. For user applications, a developer must construct Amazon WorkDocs Content Manager with anonymous AWS credentials and an authentication token. For administrative applications, the Amazon WorkDocs client must be initialized with AWS Identity and Access Management (IAM) credentials, and the authentication token must be omitted in subsequent API calls. The Amazon WorkDocs Developer Guide provides Java and C# examples that demonstrate how to initialize Amazon WorkDocs Content Manager for user applications.
What Is Amazon WorkSpaces?
Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as WorkSpaces. Amazon WorkSpaces eliminates the need to procure and deploy hardware or install complex software. You can quickly add or remove users as your needs change. Users can access their virtual desktops from multiple devices or web browsers. https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html
Macie Usecases
Assessing your data privacy and security
An important aspect in maintaining the right level of data security is to be able to continuously identify your sensitive data and evaluate security and access controls. Amazon Macie allows you to do this across your entire Amazon S3 environment, generating actionable findings that you can use to quickly respond where needed. Macie also gives you the flexibility to identify sensitive data residing in other data stores by temporarily moving it to S3. For example, you can initiate Amazon Relational Database Service (RDS) or Amazon Aurora snapshots to export data in these services to Amazon S3, where it can be evaluated for sensitive data using Macie. This allows you to utilize Macie to help you maintain data privacy and security.
Maintaining regulatory compliance
Compliance teams are required to monitor where sensitive data resides, protect it properly, and provide evidence that they are enforcing data security and privacy to meet regulatory compliance requirements. Amazon Macie provides different options for scheduling your data analysis, such as one-time, daily, weekly, or monthly sensitive data discovery jobs, to help you meet and maintain your data privacy and compliance requirements. Macie automatically sends all sensitive data discovery job outputs, including findings, evaluation results, time stamps, and a historical record of all buckets and objects scanned for sensitive data, to an S3 bucket you own. These sensitive data discovery detail reports can be used in data privacy and protection audits and for long-term retention.
Identifying sensitive data in data migrations
When migrating large volumes of data to AWS, you can set up a secure Amazon S3 environment to use as an initial staging area where you use Macie to discover sensitive data. You can also extract files from applications such as email, file shares, and collaboration tools, and transfer them to S3 for evaluation by Macie. The results can be used to inform where the migration data should be stored and what security controls, such as encryption and resource tagging, need to be applied. Using Macie's findings, you can automate the configuration of data protection and role-based access policies as your data moves into AWS.
What are AWS WAF, AWS Shield, and AWS Firewall Manager?
At the simplest level, AWS WAF lets you choose one of the following behaviors: allow all requests except the ones you specify, block all requests except the ones you specify, or count requests that match properties you specify. AWS Shield provides protection against DDoS attacks on your AWS resources, and AWS Firewall Manager simplifies administration and maintenance of AWS WAF, AWS Shield Advanced, and VPC security group rules across multiple accounts and resources. https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
Configure sticky sessions for your Classic Load Balancer
By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance. The key to managing sticky sessions is to determine how long your load balancer should consistently route the user's request to the same instance. If your application has its own session cookie, then you can configure Elastic Load Balancing so that the session cookie follows the duration specified by the application's session cookie. If your application does not have its own session cookie, then you can configure Elastic Load Balancing to create a session cookie by specifying your own stickiness duration. Elastic Load Balancing creates a cookie, named AWSELB, that is used to map the session to the instance. Requirements - An HTTP/HTTPS load balancer. - At least one healthy instance in each Availability Zone.
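A hedged boto3 sketch of duration-based stickiness on a Classic Load Balancer (the load balancer name and listener port are placeholders; an application-cookie policy would use create_app_cookie_stickiness_policy instead):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API
lb_name = "my-classic-load-balancer"  # placeholder

# Duration-based stickiness: ELB generates the AWSELB cookie itself and
# keeps the session pinned for one hour.
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName=lb_name,
    PolicyName="one-hour-stickiness",
    CookieExpirationPeriod=3600,
)

# Attach the policy to the HTTP listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName=lb_name,
    LoadBalancerPort=80,
    PolicyNames=["one-hour-stickiness"],
)
```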
What is AWS Site-to-Site VPN?
By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. Although the term VPN connection is a general term, in this documentation, a VPN connection refers to the connection between your VPC and your own on-premises network. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections. Your Site-to-Site VPN connection is either an AWS Classic VPN or an AWS VPN. For more information, see Site-to-Site VPN categories.
Elastic Load Balancing
High availability and elasticity
Elastic Load Balancing is part of the AWS network, with native awareness of failure boundaries like AZs to keep your applications available across a region, without requiring Global Server Load Balancing (GSLB). ELB is also a fully managed service, meaning you can focus on delivering applications and not installing fleets of load balancers. Capacity is automatically added and removed based on the utilization of the underlying application servers.
Security
Elastic Load Balancing works with Amazon Virtual Private Cloud (VPC) to provide robust security features, including integrated certificate management, user authentication, and SSL/TLS decryption. Together, they give you the flexibility to centrally manage TLS settings and offload CPU-intensive workloads from your applications. ALB also supports integration with AWS WAF, adding a level of protection before bad actors reach the application. Further, s2n and HTTP Guardian have been developed as open source solutions to reduce the potential for HTTP-based attacks.
Feature breadth
Elastic Load Balancing offers the breadth of features needed by businesses of all sizes, while delivering them in an AWS-native experience. Elastic Load Balancing includes support for features needed in container-based workloads, including HTTP/2, gRPC, TLS offload, advanced rule-based routing, and integration with container services as an ingress controller. ALB provides customers with a native HTTP endpoint for calling Lambda functions, removing the dependency on other solutions. Further, Gateway Load Balancer creates one gateway for routing traffic through fleets of third-party appliances.
Robust monitoring and visibility
Elastic Load Balancing allows you to monitor the health of your applications and their performance in real time with Amazon CloudWatch metrics, logging, and request tracing. This improves visibility into the behavior of your applications, uncovering issues and identifying performance bottlenecks in your application stack. ELB helps ensure compliance with application Service Level Agreements (SLAs).
Integration and global reach
As a native AWS service, ELB is tightly integrated with other AWS services like EC2, ECS/EKS, Global Accelerator and operational tools such as AWS CloudFormation and AWS Billing. Across the Amazon Global Infrastructure and customer data centers with AWS Outposts and on-premises target support, ELB is available everywhere you run your AWS workloads.
Creating Snapshots of Volumes in a RAID Array
If you want to back up the data on the EBS volumes in a RAID array using snapshots, you must ensure that the snapshots are consistent. This is because the snapshots of these volumes are created independently. To restore EBS volumes in a RAID array from snapshots that are out of sync would degrade the integrity of the array. To create a consistent set of snapshots for your RAID array, use EBS multi-volume snapshots. Multi-volume snapshots allow you to take point-in-time, data coordinated, and crash-consistent snapshots across multiple EBS volumes attached to an EC2 instance. You do not have to stop your instance to coordinate between volumes to ensure consistency because snapshots are automatically taken across multiple EBS volumes. For more information, see the steps for creating multi-volume snapshots under Creating Amazon EBS snapshots.
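A hedged boto3 sketch using the multi-volume snapshots API (the instance ID and tags are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Take crash-consistent snapshots of every EBS volume attached to the
# instance hosting the RAID array.
response = ec2.create_snapshots(
    Description="RAID array backup",
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": True,  # snapshot only the data/RAID volumes
    },
    TagSpecifications=[
        {"ResourceType": "snapshot", "Tags": [{"Key": "backup", "Value": "raid"}]}
    ],
)

for snap in response["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"])
```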
Using AWS Lambda with CloudFront Lambda@Edge
Lambda@Edge lets you run Node.js and Python Lambda functions to customize content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers. You can use Lambda functions to change CloudFront requests and responses at the following points:
- After CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards the request to the origin (origin request)
- After CloudFront receives the response from the origin (origin response)
- Before CloudFront forwards the response to the viewer (viewer response)
https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
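A hedged sketch of a Python viewer-request function (the header name and value are hypothetical):

```python
# Hypothetical viewer-request function: adds a header to every request before
# CloudFront processes it. Deploy in us-east-1 and associate it with the
# distribution's viewer-request event.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # CloudFront passes headers as a dict of lists keyed by lowercase name.
    request["headers"]["x-request-source"] = [
        {"key": "X-Request-Source", "value": "lambda-at-edge"}
    ]

    # Returning the (possibly modified) request lets CloudFront continue.
    return request
```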
Amazon S3 Glacier & S3 Glacier Deep Archive
MEDIA ASSET WORKFLOWS
Media assets such as video and news footage require durable storage and can grow to many petabytes over time. The Amazon S3 Glacier and S3 Glacier Deep Archive storage classes allow you to archive older media content affordably, then move it to Amazon S3 for distribution when needed.
HEALTHCARE INFORMATION ARCHIVING
Hospital systems need to retain petabytes of patient records (LIS, PACS, EHR, etc.) for decades to meet regulatory requirements. The Amazon S3 Glacier and S3 Glacier Deep Archive storage classes help you reliably archive patient record data securely at a very low cost.
REGULATORY AND COMPLIANCE ARCHIVING
Many enterprises, such as those in Financial Services and Healthcare, must retain regulatory and compliance archives for extended durations. Amazon S3 Object Lock helps you set compliance controls to meet your objectives, such as SEC Rule 17a-4(f).
SCIENTIFIC DATA STORAGE
Research organizations generate, analyze, and archive vast amounts of data. With the Amazon S3 Glacier and S3 Glacier Deep Archive storage classes, you avoid the complexities of hardware and facility management and capacity planning.
DIGITAL PRESERVATION
Libraries and government agencies face data-integrity challenges in their digital preservation efforts. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon S3 performs regular, systematic data integrity checks and is built to be automatically self-healing.
MAGNETIC TAPE REPLACEMENT
On-premises or offsite tape libraries can lower storage costs but require large upfront investments and specialized maintenance. The Amazon S3 Glacier and S3 Glacier Deep Archive storage classes have no upfront cost and eliminate the cost and burden of maintenance.
AWS Systems Manager Parameter Store
Note: To implement password rotation lifecycles, use AWS Secrets Manager. Secrets Manager allows you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
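A minimal boto3 sketch of storing and reading a SecureString parameter (the parameter name and value are placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Store a configuration value as an encrypted SecureString parameter.
ssm.put_parameter(
    Name="/myapp/prod/db-password",
    Value="example-password",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at runtime.
param = ssm.get_parameter(Name="/myapp/prod/db-password", WithDecryption=True)
print(param["Parameter"]["Value"])
```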
Step 2: Create an IAM service role for a hybrid environment
Note: Users in your company or organization who will use Systems Manager on your hybrid machines must be granted permission in IAM to call the SSM API. For more information, see Create non-Admin IAM users and groups for Systems Manager.
Note: If you use an on-premises firewall and plan to use Patch Manager, that firewall must also allow access to the patch baseline endpoint arn:aws:s3:::patch-baseline-snapshot-region/*. region represents the identifier for an AWS Region supported by AWS Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list of supported region values, see the Region column in Systems Manager service endpoints in the Amazon Web Services General Reference.
Note: The policies you add to the service role for managed instances in a hybrid environment are the same policies used to create an instance profile for EC2 instances.
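A hedged boto3 sketch of the service role and hybrid activation (the role name and registration limit are illustrative):

```python
import boto3
import json

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Service role that Systems Manager assumes on behalf of your on-premises machines.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ssm.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="SSMServiceRoleForHybrid",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="SSMServiceRoleForHybrid",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Create a hybrid activation; the returned ActivationId/ActivationCode are
# used when registering each on-premises machine with SSM Agent.
activation = ssm.create_activation(
    Description="On-premises servers",
    IamRole="SSMServiceRoleForHybrid",
    RegistrationLimit=10,
)
print(activation["ActivationId"], activation["ActivationCode"])
```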
Amazon Macie features
[Ongoing evaluation of your Amazon S3 environment] Amazon Macie continually evaluates your Amazon S3 environment and provides an S3 resource summary across all of your accounts. You can search, filter, and sort buckets by metadata variables, such as bucket names, tags, and security controls like encryption status or public accessibility. For any unencrypted buckets, publicly accessible buckets, or buckets shared with AWS accounts outside those you have defined in AWS Organizations, you can be alerted in order to take action.
[Scalable on-demand and automated sensitive data discovery jobs] Amazon Macie allows you to run one-time, daily, weekly, or monthly sensitive data discovery jobs for all, or a subset of, objects in an Amazon S3 bucket. For sensitive data discovery jobs, Amazon Macie automatically tracks changes to the bucket and only evaluates new or modified objects over time.
[Fully managed sensitive data types] Amazon Macie maintains a growing list of sensitive data types that include common personally identifiable information (PII) and other sensitive data types as defined by data privacy regulations, such as GDPR, PCI-DSS, and HIPAA. These data types use various data detection techniques, including machine learning, and are continually added to and improved upon over time.
[Custom-defined sensitive data types] Amazon Macie provides you the ability to add custom-defined data types using regular expressions to enable Macie to discover proprietary or unique sensitive data for your business.
[Detailed and actionable security and sensitive data discovery findings] Macie reduces alert volume and speeds up triage by consolidating findings by object or bucket. Based on severity level, Macie findings are prioritized, and each finding includes details such as the sensitive data type, tags, public accessibility, and encryption status. Findings are retained for 30 days and are available in the AWS Management Console or through the API. The full sensitive data discovery details are automatically written to a customer-owned S3 bucket for long-term retention.
[One-click deployment with no upfront data source integration] With one click in the AWS Management Console or a single API call, you can enable Amazon Macie in a single account. With a few more clicks in the console, you can enable Macie across multiple accounts. Once enabled, Macie generates an ongoing Amazon S3 resource summary across accounts that includes bucket and object counts as well as the bucket-level security and access controls.
[Multi-account support and integration with AWS Organizations] In the multi-account configuration, a single Macie administrator account can manage all member accounts, including the creation and administration of sensitive data discovery jobs across accounts. Amazon Macie supports multiple accounts through AWS Organizations integration as well as natively within Macie. Security and sensitive data discovery findings are aggregated in the Macie administrator account and sent to Amazon CloudWatch Events. Now using one account, you can integrate with event management, workflow, and ticketing systems or use Macie findings with AWS Step Functions to automate remediation actions.
Automation use cases
Perform common IT tasks Automation can simplify common IT tasks such as changing the state of one or more instances (using an approval automation) and managing instance states according to a schedule. Here are some examples: Use the AWS-StopEC2InstanceWithApproval runbook to request that one or more AWS Identity and Access Management (IAM) users approve the instance stop action. After the approval is received, Automation stops the instance. Use the AWS-StopEC2Instance runbook to automatically stop instances on a schedule by using Amazon EventBridge or by using a maintenance window task. For example, you can configure an automation to stop instances every Friday evening, and then restart them every Monday morning. Use the AWS- UpdateCloudFormationStackWithApproval runbook to update resources that were deployed by using CloudFormation template. The update applies a new template. You can configure the Automation to request approval by one or more IAM users before the update begins. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
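A hedged boto3 sketch of starting the AWS-StopEC2Instance runbook; the instance ID is a placeholder and the runbook's parameter name is assumed to be InstanceId:

```python
import boto3

ssm = boto3.client("ssm")

# Kick off the AWS-StopEC2Instance runbook for one instance. The same call
# can be wired to an EventBridge schedule or a maintenance window task to
# run it automatically.
execution = ssm.start_automation_execution(
    DocumentName="AWS-StopEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)
print(execution["AutomationExecutionId"])
```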
Amazon DynamoDB
Performance at scale DynamoDB supports some of the world's largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache. No servers to manage DynamoDB is serverless with no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities. DynamoDB provides both provisioned and on-demand capacity modes so that you can optimize costs by specifying capacity per workload, or paying for only the resources you consume. Enterprise ready DynamoDB supports ACID transactions to enable you to build business-critical applications at scale. DynamoDB encrypts all data by default and provides fine-grained identity and access control on all your tables. You can create full backups of hundreds of terabytes of data instantly with no performance impact to your tables, and recover to any point in time in the preceding 35 days with no downtime. You also can export your DynamoDB table data to your data lake in Amazon S3 to perform analytics at any scale. DynamoDB is also backed by a service level agreement for guaranteed availability.
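A minimal boto3 sketch of an on-demand (pay-per-request) table, so no capacity planning is required (table and attribute names are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand table with a composite primary key.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```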
Direct connect Benefits
REDUCES YOUR BANDWIDTH COSTS
If you have bandwidth-heavy workloads that you wish to run in AWS, AWS Direct Connect reduces your network costs into and out of AWS in two ways. First, by transferring data to and from AWS directly, you can reduce your bandwidth commitment to your Internet service provider. Second, all data transferred over your dedicated connection is charged at the reduced AWS Direct Connect data transfer rate rather than Internet data transfer rates.
CONSISTENT NETWORK PERFORMANCE
Network latency over the Internet can vary given that the Internet is constantly changing how data gets from point A to point B. With AWS Direct Connect, you choose the data that utilizes the dedicated connection and how that data is routed, which can provide a more consistent network experience than Internet-based connections.
COMPATIBLE WITH ALL AWS SERVICES
AWS Direct Connect is a network service, and works with all AWS services that are accessible over the Internet, such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Virtual Private Cloud (Amazon VPC).
PRIVATE CONNECTIVITY TO YOUR AMAZON VPC
You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high-bandwidth network connection between your network and your VPC. With multiple virtual interfaces, you can even establish private connectivity to multiple VPCs while maintaining network isolation.
ELASTIC
AWS Direct Connect makes it easy to scale your connection to meet your needs. AWS Direct Connect provides 1 Gbps and 10 Gbps connections, and you can easily provision multiple connections if you need more capacity. You can also use AWS Direct Connect instead of establishing a VPN connection over the Internet to your Amazon VPC, avoiding the need to utilize VPN hardware that frequently can't support data transfer rates above 4 Gbps.
SIMPLE
You can sign up for the AWS Direct Connect service quickly and easily using the AWS Management Console. The console provides a single view to efficiently manage all your connections and virtual interfaces. You can also download customized router templates for your networking equipment after configuring one or more virtual interfaces.
Amazon S3 Glacier
S3 Glacier provides a console, which you can use to create and delete vaults. However, all other interactions with S3 Glacier require that you use the AWS Command Line Interface (AWS CLI) or write code. For example, to upload data, such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests, by using either the REST API directly or by using the AWS SDKs.
AWS Systems Manager Automation
Systems Manager Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation enables you to do the following: - Build automations to configure and manage instances and AWS resources. - Create custom runbooks or use pre-defined runbooks maintained by AWS. - Receive notifications about Automation tasks and runbooks by using Amazon EventBridge. - Monitor Automation progress and details by using the AWS Systems Manager console. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
Working with private hosted zones
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html
Uploading an Archive in Amazon S3 Glacier
The S3 Glacier vault inventory is only updated once a day. When you upload an archive, you will not immediately see the new archive added to your vault (in the console or in your downloaded vault inventory list) until the vault inventory has been updated.
[Options for Uploading an Archive to Amazon S3 Glacier] Depending on the size of the data you are uploading, S3 Glacier offers the following options:
- Upload archives in a single operation - In a single operation, you can upload archives from 1 byte up to 4 GB in size. However, we encourage S3 Glacier customers to use multipart upload to upload archives greater than 100 MB. For more information, see Uploading an Archive in a Single Operation.
- Upload archives in parts - Using the multipart upload API, you can upload large archives, up to about 40,000 GB (10,000 * 4 GB). The multipart upload API call is designed to improve the upload experience for larger archives. You can upload archives in parts. These parts can be uploaded independently, in any order, and in parallel. If a part upload fails, you only need to upload that part again, not the entire archive. You can use multipart upload for archives from 1 byte to about 40,000 GB in size.
Duration-based session stickiness - Tutorial https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html
The load balancer uses a special cookie, AWSELB, to track the instance for each request to each listener. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the request is sent to the instance specified in the cookie. If there is no cookie, the load balancer chooses an instance based on the existing load balancing algorithm. A cookie is inserted into the response for binding subsequent requests from the same user to that instance. The stickiness policy configuration defines a cookie expiration, which establishes the duration of validity for each cookie. The load balancer does not refresh the expiry time of the cookie and does not check whether the cookie is expired before using it. After a cookie expires, the session is no longer sticky. The client should remove the cookie from its cookie store upon expiry. With CORS (cross-origin resource sharing) requests, some browsers require SameSite=None; Secure to enable stickiness. In this case, Elastic Load Balancing creates a second stickiness cookie, AWSELBCORS, which includes the same information as the original stickiness cookie plus this SameSite attribute. Clients receive both cookies. If an instance fails or becomes unhealthy, the load balancer stops routing requests to that instance, and chooses a new healthy instance based on the existing load balancing algorithm. The request is routed to the new instance as if there is no cookie and the session is no longer sticky. If a client switches to a listener with a different backend port, stickiness is lost.
Amazon ECS task networking - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
The networking behavior of Amazon ECS tasks hosted on Amazon EC2 instances is dependent on the network mode defined in the task definition. The following are the available network modes. Amazon ECS recommends using the awsvpc network mode unless you have a specific need to use a different network mode. [awsvpc] — The task is allocated its own elastic network interface (ENI) and a primary private IPv4 address. This gives the task the same networking properties as Amazon EC2 instances. [bridge] — The task utilizes Docker's built-in virtual network which runs inside each Amazon EC2 instance hosting the task. [host] — The task bypasses Docker's built-in virtual network and maps container ports directly to the ENI of the Amazon EC2 instance hosting the task. As a result, you can't run multiple instantiations of the same task on a single Amazon EC2 instance when port mappings are used. [none] — The task has no external network connectivity.
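A hedged boto3 sketch of registering a task definition with the awsvpc network mode (family, image, and sizes are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Task definition using the recommended awsvpc network mode, so each task
# gets its own ENI and primary private IPv4 address.
ecs.register_task_definition(
    family="web-task",
    networkMode="awsvpc",
    requiresCompatibilities=["EC2"],
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```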
Tagging your Amazon EC2 resources - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
To help you manage your instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. This topic describes tags and shows you how to create them. [Warning] Tag keys and their values are returned by many different API calls. Denying access to DescribeTags doesn't automatically deny access to tags returned by other APIs. As a best practice, we recommend that you do not include sensitive data in your tags. [Tagging your resources] You can tag most Amazon EC2 resources that already exist in your account. The table below lists the resources that support tagging. If you're using the Amazon EC2 console, you can apply tags to resources by using the Tags tab on the relevant resource screen, or you can use the Tags screen. Some resource screens enable you to specify tags for a resource when you create the resource; for example, a tag with a key of Name and a value that you specify. In most cases, the console applies the tags immediately after the resource is created (rather than during resource creation). The console may organize resources according to the Name tag, but this tag doesn't have any semantic meaning to the Amazon EC2 service. If you're using the Amazon EC2 API, the AWS CLI, or an AWS SDK, you can use the CreateTags EC2 API action to apply tags to existing resources. Additionally, some resource-creating actions enable you to specify tags for a resource when the resource is created. If tags cannot be applied during resource creation, we roll back the resource creation process. This ensures that resources are either created with tags or not created at all, and that no resources are left untagged at any time. By tagging resources at the time of creation, you can eliminate the need to run custom tagging scripts after resource creation.
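A minimal boto3 sketch of tagging at launch via TagSpecifications and tagging an existing resource with CreateTags (the AMI ID and tag values are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Tag an instance at launch time via TagSpecifications.
run = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": "web-01"},
                {"Key": "Environment", "Value": "dev"},
            ],
        }
    ],
)

# Add or update tags on an existing resource with CreateTags.
instance_id = run["Instances"][0]["InstanceId"]
ec2.create_tags(Resources=[instance_id], Tags=[{"Key": "Owner", "Value": "team-a"}])
```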
Restricting Access to Amazon S3 Content by Using an Origin Access Identity
To restrict access to content that you serve from Amazon S3 buckets, follow these steps: Create a special CloudFront user called an origin access identity (OAI) and associate it with your distribution. Configure your S3 bucket permissions so that CloudFront can use the OAI to access the files in your bucket and serve them to your users. Make sure that users can't use a direct URL to the S3 bucket to access a file there. After you take these steps, users can only access your files through CloudFront, not directly from the S3 bucket. In general, if you're using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there, or you can restrict access. If you restrict access by using, for example, CloudFront signed URLs or signed cookies, you also won't want people to be able to view files by simply using the direct Amazon S3 URL for the file. Instead, you want them to only access the files by using the CloudFront URL, so your protections work. For more information about using signed URLs and signed cookies, see Serving private content with signed URLs and signed cookies. This topic explains in detail how to set up the OAI and grant permissions to maintain secure access to your S3 files. [Important] If you use an Amazon S3 bucket configured as a website endpoint, you must set it up with CloudFront as a custom origin. You can't use the origin access identity feature described in this topic. However, you can restrict access to content on a custom origin by setting up custom headers and configuring your origin to require them.
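A hedged boto3 sketch of the bucket-permissions step above: a bucket policy that lets only the OAI read objects (the bucket name and OAI ID are placeholders):

```python
import boto3
import json

s3 = boto3.client("s3")

# Bucket policy granting read access only to the CloudFront OAI.
oai_id = "E2EXAMPLEOAI"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-private-content-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-private-content-bucket", Policy=json.dumps(policy))
```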
Key concepts for Site-to-Site VPN:
VPN connection: A secure connection between your on-premises equipment and your VPCs.
VPN tunnel: An encrypted link where data can pass from the customer network to or from AWS. Each VPN connection includes two VPN tunnels, which you can simultaneously use for high availability.
Customer gateway: An AWS resource which provides information to AWS about your customer gateway device.
Customer gateway device: A physical device or software application on your side of the Site-to-Site VPN connection.
Virtual private gateway: The VPN concentrator on the Amazon side of the Site-to-Site VPN connection. You use a virtual private gateway or a transit gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
Transit gateway: A transit hub that can be used to interconnect your VPCs and on-premises networks. You use a transit gateway or virtual private gateway as the gateway for the Amazon side of the Site-to-Site VPN connection.
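A hedged boto3 sketch that creates the customer gateway, virtual private gateway, and Site-to-Site VPN connection described above (the public IP, BGP ASN, and VPC ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway describing the on-premises device.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",
    BgpAsn=65000,
)

# Virtual private gateway for the Amazon side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0123456789abcdef0",
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The Site-to-Site VPN connection itself (two tunnels are created for you).
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
```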
Multivalue answer routing
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-multivalue
Mobile push notifications
With Amazon SNS, you have the ability to send push notification messages directly to apps on mobile devices. Push notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts.
Configure cross-zone load balancing for your Classic Load Balancer - Tutorials https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
With cross-zone load balancing, each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only. For more information, see Cross-zone load balancing in the Elastic Load Balancing User Guide. Cross-zone load balancing reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances. However, we still recommend that you maintain approximately equivalent numbers of instances in each enabled Availability Zone for higher fault tolerance. For environments where clients cache DNS lookups, incoming requests might favor one of the Availability Zones. Using cross-zone load balancing, this imbalance in the request load is spread across all available instances in the Region, reducing the impact of misbehaving clients. When you create a Classic Load Balancer, the default for cross-zone load balancing depends on how you create the load balancer. With the API or CLI, cross-zone load balancing is disabled by default. With the AWS Management Console, the option to enable cross-zone load balancing is selected by default. After you create a Classic Load Balancer, you can enable or disable cross-zone load balancing at any time.
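A minimal boto3 sketch of enabling cross-zone load balancing on an existing Classic Load Balancer (the name is a placeholder):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Enable cross-zone load balancing on an existing Classic Load Balancer.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-load-balancer",
    LoadBalancerAttributes={"CrossZoneLoadBalancing": {"Enabled": True}},
)
```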
Bring your own IP addresses (BYOIP) in Amazon EC2
You can bring part or all of your public IPv4 address range or IPv6 address range from your on-premises network to your AWS account. You continue to own the address range, but AWS advertises it on the internet by default. After you bring the address range to AWS, it appears in your account as an address pool. BYOIP is not available in all Regions and for all resources. For a list of supported Regions and resources, see the FAQ for Bring Your Own IP. [Note] The following steps describe how to bring your own IP address range for use in Amazon EC2 only. For steps to bring your own IP address range for use in AWS Global Accelerator, see Bring your own IP addresses (BYOIP) in the AWS Global Accelerator Developer Guide.
Self-managed Oracle RAC on Amazon EC2
You can now deploy scalable Oracle Real Application Clusters (RAC) on Amazon EC2 using the recently published tutorial and Amazon Machine Images (AMI) on AWS Marketplace. Oracle RAC is a shared-everything database cluster technology from Oracle that allows a single database (a set of data files) to be concurrently accessed and served by one or many database server instances. Deploying Oracle RAC on Amazon EC2 allows you to leverage the elasticity and scalability of Amazon Web Services. Adding a node to the Oracle RAC cluster on Amazon EC2 is as easy as a few API calls and commands, and can be done in minutes rather than weeks, as is the case with physical on-premises infrastructure. Developers building software that uses a production Oracle RAC database can now do their development and testing against their own Oracle RAC database deployed on Amazon EC2, which allows quick detection of performance and availability regressions in their software related to the RAC architecture.
Optimizing high availability with CloudFront origin failover - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
You can set up CloudFront with origin failover for scenarios that require high availability. To get started, you create an origin group with two origins: a primary and a secondary. If the primary origin is unavailable, or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin. To set up origin failover, you must have a distribution with at least two origins. Next, you create an origin group for your distribution that includes two origins, setting one as the primary. Finally, you create or update a cache behavior to use the origin group.
Multi-AZ Benefits
[Enhanced durability] Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Amazon Aurora uses an SSD-backed virtualized storage layer purpose-built for database workloads. All approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone. [Increased availability] You benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora (as little as 30 seconds when using the MariaDB Connector/J) and one to two minutes for other database engines (see the RDS FAQs for details). The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. [Protection of your database performance] Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, MariaDB, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments. On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically. [Automatic failover] If a storage volume on your primary instance fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby (or to a replica in the case of Amazon Aurora). Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available. DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.
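A minimal boto3 sketch of creating a Multi-AZ DB instance (identifiers, instance class, and credentials are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ MySQL instance; RDS provisions and keeps a synchronous standby
# in a second Availability Zone.
rds.create_db_instance(
    DBInstanceIdentifier="myapp-primary",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,
)
```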
IAM Usecases
[Fine-grained access control to AWS resources] IAM enables your users to control access to AWS service APIs and to specific resources. IAM also enables you to add specific conditions such as time of day to control how a user can use AWS, their originating IP address, whether they are using SSL, or whether they have authenticated with a multi-factor authentication device. [Multi-factor authentication] for highly privileged users Protect your AWS environment by using AWS MFA, a security feature available at no extra cost that augments user name and password credentials. MFA requires users to prove physical possession of a hardware MFA token or MFA-enabled mobile device by providing a valid MFA code. [Analyze access] IAM helps you analyze access across your AWS environment. Your security teams and administrators can quickly validate that your policies only provide the intended public and cross-account access to your resources. You can also easily identify and refine your policies to allow access to only the services being used. This helps you to better adhere to the principle of least privilege. [Integrate with your corporate directory] IAM can be used to grant your employees and applications federated access to the AWS Management Console and AWS service APIs, using your existing identity systems such as Microsoft Active Directory. You can use any identity management solution that supports SAML 2.0, or feel free to use one of our federation samples (AWS Console SSO or API federation).
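A hedged illustration of the kind of fine-grained policy described above; the bucket name and CIDR range are placeholders:

```python
import json

# Example identity policy: S3 read access is allowed only from a corporate
# CIDR range and only when the caller authenticated with MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-team-bucket",
                "arn:aws:s3:::my-team-bucket/*",
            ],
            "Condition": {
                "IpAddress": {"aws:SourceIp": "198.51.100.0/24"},
                "Bool": {"aws:MultiFactorAuthPresent": "true"},
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```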
ACM Benefits
[Free public certificates for ACM-integrated services] With AWS Certificate Manager, there is no additional charge for provisioning public or private SSL/TLS certificates you use with ACM-integrated services, such as Elastic Load Balancing and API Gateway. You pay for the AWS resources you create to run your application. For private certificates, ACM Private CA provides you the ability to pay monthly for the service and certificates you create. You pay less per certificate as you create more private certificates. [Managed certificate renewal] AWS Certificate Manager manages the renewal process for the certificates managed in ACM and used with ACM-integrated services, such as Elastic Load Balancing and API Gateway. ACM can automate renewal and deployment of these certificates. With ACM Private CA APIs, ACM enables you to automate creation and renewal of private certificates for on-premises resources, EC2 instances, and IoT devices. [Get certificates easily] AWS Certificate Manager removes many of the time-consuming and error-prone steps to acquire an SSL/TLS certificate for your website or application. There is no need to generate a key pair or certificate signing request (CSR), submit a CSR to a Certificate Authority, or upload and install the certificate once received. With a few clicks in the AWS Management Console, you can request a trusted SSL/TLS certificate from AWS. Once the certificate is created, AWS Certificate Manager takes care of deploying certificates to help you enable SSL/TLS for your website or application.
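A minimal boto3 sketch of requesting a public certificate with DNS validation (the domain names are placeholders):

```python
import boto3

acm = boto3.client("acm")

# Request a public certificate with DNS validation; ACM handles renewal
# once the validation CNAME records are in place.
cert = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],
    ValidationMethod="DNS",
)
print(cert["CertificateArn"])
```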
Common Uses for VM Import/Export
[Migrate Your Existing Applications and Workloads to Amazon EC2] Migrate your existing VM-based applications and workloads to Amazon EC2. Using VM Import, you can preserve the software and settings that you have configured in your existing VMs, while benefiting from running your applications and workloads in Amazon EC2. Once your applications and workloads have been imported, you can run multiple instances from the same image, and you can create snapshots to back up your data. You can use AMI and snapshot copy to replicate your applications and workloads around the world. You can change the instance types that your applications and workloads use as their resource requirements change. You can use CloudWatch to monitor your applications and workloads after you have imported them. And you can take advantage of Auto Scaling, Elastic Load Balancing, and all of the other Amazon Web Services to support your applications and workloads after you have migrated them to Amazon EC2. [Copy Your VM Image Catalog to Amazon EC2] Copy your existing VM image catalog to Amazon EC2. If you use a catalog of approved VM images, a common practice in enterprise computing environments, VM Import enables you to copy your image catalog to Amazon EC2, creating Amazon EC2 AMIs from your VMs that serve as your image catalog within Amazon EC2. Your existing software, including products that you have installed like anti-virus software, intrusion detection systems, and more, can all be imported along with your VM images. [Create a Disaster Recovery Repository for your VM images] Import your on-premises VM images to Amazon EC2 for backup and disaster recovery contingencies. VM Import stores the imported images as Elastic Block Store-backed AMIs so they're ready to launch in Amazon EC2 when you need them. In the event of a contingency, you can quickly launch your instances to preserve business continuity while simultaneously exporting them to rebuild your on-premises infrastructure. You pay only Elastic Block Store storage charges until you decide to launch the instances. Once launched, you pay normal Amazon EC2 service charges for your running instances. If you choose to export your instances, you pay normal Amazon S3 storage charges.
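A hedged boto3 sketch of a single VM import: it assumes the VM has already been exported as an OVA and uploaded to an S3 bucket, and that the vmimport service role required by VM Import/Export is already in place; the bucket, key, and description are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Import a VM image previously uploaded to S3 as an OVA.
task = ec2.import_image(
    Description="Imported web server VM",
    DiskContainers=[
        {
            "Description": "OVA exported from the on-premises hypervisor",
            "Format": "ova",
            "UserBucket": {
                "S3Bucket": "example-vm-import-bucket",
                "S3Key": "exports/web-server.ova",
            },
        }
    ],
)
print("Import task:", task["ImportTaskId"])

# Check the task later; when it completes it produces an EBS-backed AMI.
status = ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]])
print(status["ImportImageTasks"][0].get("Status"))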
RAID Configuration on Linux
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
[Note] You should avoid booting from a RAID volume. GRUB is typically installed on only one device in a RAID array, and if one of the mirrored devices fails, you may be unable to boot the operating system. [Important] RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes. Depending on the configuration of your RAID array, these RAID modes provide 20-30% fewer usable IOPS than a RAID 0 configuration. Increased cost is a factor with these RAID modes as well; when using identical volume sizes and speeds, a 2-volume RAID 0 array can outperform a 4-volume RAID 6 array that costs twice as much. [Important] Create volumes with identical size and IOPS performance values for your array. Make sure you do not create an array that exceeds the available bandwidth of your EC2 instance. Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume. A RAID 1 array offers a "mirror" of your data for extra redundancy. Before you perform this procedure, you need to decide how large your RAID array should be and how many IOPS you want to provision. The resulting size of a RAID 0 array is the sum of the sizes of the volumes within it, and the bandwidth is the sum of the available bandwidth of the volumes within it. The resulting size and bandwidth of a RAID 1 array is equal to the size and bandwidth of the volumes in the array. For example, two 500 GiB io1 volumes with 4,000 provisioned IOPS each create a 1,000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput, or a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput.
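The size and bandwidth arithmetic above can be summarized in a short, purely illustrative Python helper; it models only RAID 0 and RAID 1 from identical volumes, matching the io1 example.

# Illustrative arithmetic only: capacity/IOPS/throughput of RAID 0 vs RAID 1
# arrays built from identical EBS volumes.
def raid_profile(level, count, size_gib, iops, throughput_mib_s):
    if level == 0:      # striped: capacity and performance are summed
        factor = count
    elif level == 1:    # mirrored: same as a single volume
        factor = 1
    else:
        raise ValueError("only RAID 0 and RAID 1 are modeled here")
    return {
        "size_gib": size_gib * factor,
        "iops": iops * factor,
        "throughput_mib_s": throughput_mib_s * factor,
    }

# Two 500 GiB io1 volumes, 4,000 IOPS and 500 MiB/s each:
print(raid_profile(0, 2, 500, 4000, 500))  # 1,000 GiB, 8,000 IOPS, 1,000 MiB/s
print(raid_profile(1, 2, 500, 4000, 500))  # 500 GiB, 4,000 IOPS, 500 MiB/s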
Amazon CloudFront Support for Custom Origins
https://aws.amazon.com/blogs/aws/amazon-cloudfront-support-for-custom-origins/
Amazon RDS - Multi-AZ Deployments For Enhanced Availability & Reliability
https://aws.amazon.com/blogs/aws/amazon-rds-multi-az-deployment/
New Amazon EC2 Feature: Bring Your Own Keypair
https://aws.amazon.com/blogs/aws/new-amazon-ec2-feature-bring-your-own-keypair/
Patching your Windows EC2 instances using AWS Systems Manager Patch Manager
https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-manager/
Introducing Bring Your Own IP (BYOIP) for Amazon VPC
https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-bring-your-own-ip-byoip-for-amazon-vpc/
How to Use AWS Config to Monitor for and Respond to Amazon S3 Buckets Allowing Public Access
https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/
AWS Caching Solutions - ElastiCache, Amazon CloudFront, and Amazon Route 53
https://aws.amazon.com/caching/aws-caching/
ACM FAQ
https://aws.amazon.com/certificate-manager/faqs/
AWS Database Migration Service
https://aws.amazon.com/dms/
How can I resolve Route 53 private hosted zones from an on-premises network via an Ubuntu instance?
https://aws.amazon.com/premiumsupport/knowledge-center/r53-private-ubuntu/
Route 53 FAQs
https://aws.amazon.com/route53/faqs/
SWF FAQs
https://aws.amazon.com/swf/faqs/
Bring Your Own IP
https://aws.amazon.com/vpc/faqs/#Bring_Your_Own_IP
EC2 Fleet configuration strategies
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-configuration-strategies.html
Allocation strategy for Spot Instances
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy
Introduction to Amazon Mechanical Turk
https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkGettingStartedGuide/SvcIntro.html
Managing how long content stays in the cache (expiration)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
Increasing the proportion of requests that are served directly from the CloudFront caches (cache hit ratio)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cache-hit-ratio.html
Setting Up Field-Level Encryption - Tutorial
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up
Tutorial: Log Amazon S3 Object-Level Operations Using CloudWatch Events
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/log-s3-data-events.html
Analyzing Log Data with CloudWatch Logs Insights
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html [Important] If your network security team doesn't allow the use of web sockets, you can't currently access the CloudWatch Logs Insights portion of the CloudWatch console. You can still use the CloudWatch Logs Insights query capabilities through the APIs.
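If the console is unavailable, the same queries can be run through the CloudWatch Logs APIs; a minimal boto3 sketch follows, with a placeholder log group name and a simple query string.

import time
import boto3

logs = boto3.client("logs")

# Run a Logs Insights query over the last hour via the API.
query_id = logs.start_query(
    logGroupName="/example/app",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)["queryId"]

# Poll until the query finishes, then print each result row.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})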
What Is Amazon CloudWatch Logs? - FAQ
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
Container Network Mode
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#network_mode
Comparing Memcached and Redis
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/SelectEngine.html
Best practices for Amazon RDS
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html
Exporting data from a MySQL DB instance by using replication
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
Controlling access to AWS resources using resource tags
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html You can use tags to control access to your AWS resources that support tagging. You can also tag IAM users and roles to control what they can access. To learn how to tag IAM users and roles, see Tagging IAM users and roles. To view a tutorial for creating and testing a policy that allows IAM roles with principal tags to access resources with matching tags, see IAM Tutorial: Define permissions to access AWS resources based on tags. Use the information in the following section to control access to other AWS services without tagging IAM users or roles.
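A hedged boto3 sketch in the spirit of the tutorial referenced above: it creates a policy that lets principals start or stop only EC2 instances whose team resource tag matches the caller's own team principal tag; the policy name and tag key are examples.

import json
import boto3

iam = boto3.client("iam")

# Tag-based access control: the resource tag must match the principal tag.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleTeamTagEC2Control",
    PolicyDocument=json.dumps(policy_document),
)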
Security best practices in IAM
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Revoking IAM role temporary security credentials
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_revoke-sessions.html
Walkthrough: Run the EC2Rescue tool on unreachable instances - EC2Rescue
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-ec2rescue.html EC2Rescue can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server instances. You can run the tool manually, as described in Using EC2Rescue for Linux Server and Using EC2Rescue for Windows Server. Or, you can run the tool automatically by using Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue runbook. The AWSSupport-ExecuteEC2Rescue runbook is designed to perform a combination of Systems Manager actions, AWS CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue. You can use the AWSSupport-ExecuteEC2Rescue runbook to troubleshoot and potentially remediate different types of operating system (OS) issues; see the topics on that page for a complete list.
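A minimal boto3 sketch of starting the AWSSupport-ExecuteEC2Rescue runbook through Systems Manager Automation; the instance ID is a placeholder, and the parameter set should be checked against the runbook's documentation before use (additional parameters, such as an automation assume role, may apply in your account).

import boto3

ssm = boto3.client("ssm")

# Start the EC2Rescue automation against an unreachable instance.
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    Parameters={
        "UnreachableInstanceId": ["i-0123456789abcdef0"],  # placeholder
    },
)
print("Automation execution ID:", execution["AutomationExecutionId"])

# Check progress later.
status = ssm.get_automation_execution(
    AutomationExecutionId=execution["AutomationExecutionId"]
)
print(status["AutomationExecution"]["AutomationExecutionStatus"])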
Setting up AWS Systems Manager for hybrid environments
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html
What is AWS Systems Manager?
https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html
Internetwork traffic privacy in Amazon VPC
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html
VPCs and subnets
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html