AWS Launch Wizard

AWS Launch Wizard helps you reduce the time that it takes to deploy application and domain-controller solutions to the cloud. You provide your application or domain controller requirements, and AWS Launch Wizard identifies the right AWS resources to deploy and run your solution. AWS Launch Wizard estimates the cost of the deployment and lets you modify your resources and view the updated cost assessment. After you approve, AWS Launch Wizard provisions and configures the selected resources in a few hours to create fully functioning, production-ready applications or domain controllers. It also creates custom AWS CloudFormation templates, which you can reuse and customize for subsequent deployments. This section of the AWS Launch Wizard documentation provides guidance for deploying self-managed domain controllers and AWS Directory Service for Microsoft Active Directory using the Launch Wizard service.

Amazon Comprehend

Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents without the need for any special preprocessing. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, you can search social networking feeds for mentions of products, scan an entire document repository for key phrases, or determine the topics contained in a set of documents.

You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or the Amazon Comprehend APIs. You can run real-time analysis for small workloads, or start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or train your own custom models for classification and entity recognition. All of the Amazon Comprehend features can analyze UTF-8 text documents as input files. In addition, custom entity recognition can analyze image files, PDF files, and Word files. Amazon Comprehend can examine and analyze documents in a variety of languages, depending on the specific feature. For more information, see Languages supported in Amazon Comprehend. Amazon Comprehend's dominant language capability can examine documents and determine the dominant language for a far wider selection of languages.

Amazon Comprehend insights

Amazon Comprehend uses a pre-trained model to examine and analyze a document or set of documents to gather insights about it. This model is continuously trained on a large body of text so that there is no need for you to provide training data. Amazon Comprehend gathers the following types of insights:
- Entities - References to the names of people, places, items, and locations contained in a document.
- Key phrases - Phrases that appear in a document. For example, a document about a basketball game might return the names of the teams, the name of the venue, and the final score.
- Personally identifiable information (PII) - Personal data that can identify an individual, such as an address, bank account number, or phone number.
- Language - The dominant language of a document.
- Sentiment - The dominant sentiment of a document, which can be positive, neutral, negative, or mixed.
- Targeted sentiment - The sentiments associated with specific entities in a document. The sentiment for each entity occurrence can be positive, negative, neutral, or mixed.
- Syntax - The parts of speech for each word in the document.
For more information, see Insights.

Amazon Comprehend Custom

You can customize Amazon Comprehend for your specific requirements without the skillset required to build machine learning-based NLP solutions. Using automatic machine learning (AutoML), Amazon Comprehend Custom builds customized NLP models on your behalf, using data you already have.
- Custom classification - Create custom classification models (classifiers) to organize your documents into your own categories.
- Custom entity recognition - Create custom entity recognition models (recognizers) that can analyze text for your specific terms and noun-based phrases.
For more information, see Amazon Comprehend Custom.

Document clustering (topic modeling)

You can also use Amazon Comprehend to examine a corpus of documents and organize them based on similar keywords within them. Document clustering (topic modeling) is useful for organizing a large corpus of documents into topics or clusters that are similar based on word frequency.

Benefits

Some of the benefits of using Amazon Comprehend include:
- Integrate powerful natural language processing into your apps - Amazon Comprehend removes the complexity of building text analysis capabilities into your applications by making powerful and accurate natural language processing available with a simple API. You don't need textual analysis expertise to take advantage of the insights that Amazon Comprehend produces.
- Deep learning-based natural language processing - Amazon Comprehend uses deep learning technology to accurately analyze text. The models are constantly trained with new data across multiple domains to improve accuracy.
- Scalable natural language processing - Amazon Comprehend enables you to analyze millions of documents so that you can discover the insights that they contain.
- Integration with other AWS services - Amazon Comprehend is designed to work seamlessly with other AWS services like Amazon S3, AWS KMS, and AWS Lambda. Store your documents in Amazon S3, or analyze real-time data with Kinesis Data Firehose. Support for AWS Identity and Access Management (IAM) makes it easy to securely control access to Amazon Comprehend operations. Using IAM, you can create and manage AWS users and groups to grant the appropriate access to your developers and end users.
- Encryption of output results and volume data - Amazon S3 already enables you to encrypt your input documents, and Amazon Comprehend extends this even further. By using your own KMS key, you can encrypt not only the output results of your job, but also the data on the storage volume attached to the compute instance that processes the analysis job. The result is significantly enhanced security.
- Low cost - With Amazon Comprehend, there are no minimum fees or upfront commitments. You pay for the documents that you analyze and the custom models that you train.

Amazon Comprehend pricing

There is a usage charge for running real-time or asynchronous analysis jobs. You pay to train custom models, and you pay for custom model management. For real-time requests using custom models, you pay for the endpoint from the time that you start it until you delete it.
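As a rough illustration of the real-time analysis API, the following Python sketch uses boto3 to detect the dominant language, entities, and sentiment of a short UTF-8 string; the sample text and Region are placeholder assumptions.

import boto3

# Hypothetical example text; any UTF-8 string under the real-time size limit works.
text = "Amazon Comprehend makes it easy to find insights in customer emails."

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Detect the dominant language of the document.
language = comprehend.detect_dominant_language(Text=text)
lang_code = language["Languages"][0]["LanguageCode"]

# Detect entities and overall sentiment, passing the detected language code.
entities = comprehend.detect_entities(Text=text, LanguageCode=lang_code)
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode=lang_code)

print(lang_code)
print([e["Text"] for e in entities["Entities"]])
print(sentiment["Sentiment"])

For large document sets, the equivalent asynchronous operations (for example, start_entities_detection_job) read input from Amazon S3 instead of inline text.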

Amazon Inspector

Amazon Inspector is a vulnerability management service that continuously scans your AWS workloads for software vulnerabilities and unintended network exposure, and then produces detailed findings prioritized by level of severity. Amazon Inspector includes a knowledge base of rules mapped to common security standards and vulnerability definitions that are regularly updated by AWS security researchers. Amazon Inspector automatically discovers and scans Amazon EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR). When a software vulnerability or network issue is discovered, Amazon Inspector creates a finding. A finding describes the vulnerability, identifies the affected resource, rates the severity of the vulnerability, and provides remediation guidance. Details of a finding for your account can be analyzed in multiple ways using the Amazon Inspector console, or you can view and process your findings through other AWS services. For more information, see Understanding findings in Amazon Inspector.

Features of Amazon Inspector

Centrally manage multiple Amazon Inspector accounts - If your AWS environment has multiple accounts, you can centrally manage your environment through a single account by using AWS Organizations and designating an account as the delegated administrator account for Amazon Inspector. Amazon Inspector can be enabled for your entire organization with a single click. Additionally, you can automate enabling the service for future members whenever they join your organization. The Amazon Inspector delegated administrator account can manage findings data and certain settings for members of the organization. This includes viewing aggregated findings details for all member accounts, enabling or disabling scans for member accounts, and reviewing scanned resources within the AWS organization.

Continuously scan your environment for vulnerabilities and network exposure - With Amazon Inspector you do not need to manually schedule or configure assessment scans. Amazon Inspector automatically discovers and begins scanning your eligible resources, and continues to assess your environment throughout the lifecycle of your resources by automatically scanning them whenever you make changes. Unlike traditional security scanning software, Amazon Inspector has minimal impact on the performance of your fleet. When vulnerabilities or open network paths are identified, Amazon Inspector produces a finding that you can investigate. The finding includes comprehensive details about the vulnerability, the impacted resource, and remediation recommendations. If you appropriately remediate a finding, Amazon Inspector automatically detects the remediation and closes the finding.

Assess vulnerabilities accurately with the Amazon Inspector risk score - As Amazon Inspector collects information about your environment through scans, it provides severity scores specifically tailored to your environment. Amazon Inspector examines the security metrics that compose the National Vulnerability Database (NVD) base score for a vulnerability and adjusts them according to your compute environment. For example, the service may lower the Amazon Inspector score of a finding for an Amazon EC2 instance if the vulnerability is exploitable over the network but no open network path to the internet is available from the instance. This score is in CVSS format and is a modification of the base Common Vulnerability Scoring System (CVSS) score provided by NVD.

Identify high-impact findings with the Amazon Inspector dashboard - The Amazon Inspector dashboard offers a high-level view of findings from across your environment, and from the dashboard you can access the granular details of a finding. The dashboard contains streamlined information about scan coverage in your environment, your most critical findings, and which resources have the most findings. The risk-based remediation panel presents the findings that affect the largest number of instances and images, making it easier to identify the findings with the greatest impact on your environment, see finding details, and view suggested solutions.

Manage your findings using customizable views - In addition to the dashboard, the Amazon Inspector console offers a Findings view. This page lists all findings for your environment and provides the details of individual findings. You can view findings grouped by category or vulnerability type, and in each view you can further customize your results using filters. You can also use filters to create suppression rules that hide unwanted findings from your views. Any Amazon Inspector user can use filters and suppression rules to generate finding reports that show all findings or a customized selection of findings. Reports can be generated in CSV or JSON format.

Monitor and process findings with other services and systems - To support integration with other services and systems, Amazon Inspector publishes findings to Amazon EventBridge as finding events. EventBridge is a serverless event bus service that can route findings data to targets such as AWS Lambda functions and Amazon Simple Notification Service (Amazon SNS) topics. With EventBridge, you can monitor and process findings in near-real time as part of your existing security and compliance workflows. If you have enabled AWS Security Hub, Amazon Inspector also publishes findings to Security Hub. Security Hub provides a comprehensive view of your security posture across your AWS environment and helps you check your environment against security industry standards and best practices. With Security Hub, you can more easily monitor and process your findings as part of a broader analysis of your organization's security posture in AWS.

Accessing Amazon Inspector

Amazon Inspector is available in most AWS Regions. For a list of Regions where Amazon Inspector is currently available, see Amazon Inspector endpoints and quotas in the Amazon Web Services General Reference. To learn more about AWS Regions, see Managing AWS Regions in the Amazon Web Services General Reference. In each Region, you can work with Amazon Inspector in the following ways.
- AWS Management Console - The AWS Management Console is a browser-based interface that you can use to create and manage AWS resources. As part of that console, the Amazon Inspector console provides access to your Amazon Inspector account and resources, and you can perform Amazon Inspector tasks from it.
- AWS command line tools - With AWS command line tools, you can issue commands at your system's command line to perform Amazon Inspector tasks. Using the command line can be faster and more convenient than using the console, and the command line tools are also useful if you want to build scripts that perform tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for PowerShell, see the AWS Tools for PowerShell User Guide.
- AWS SDKs - AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms, including Java, Go, Python, C++, and .NET. The SDKs provide convenient, programmatic access to Amazon Inspector and other AWS services. They also handle tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about installing and using the AWS SDKs, see Tools to Build on AWS.
- Amazon Inspector REST API - The Amazon Inspector REST API gives you comprehensive, programmatic access to your Amazon Inspector account and resources. With this API, you can send HTTPS requests directly to Amazon Inspector. However, unlike the AWS command line tools and SDKs, use of this API requires your application to handle low-level details such as generating a hash to sign a request.
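To sketch how findings might be retrieved programmatically, the following Python example uses the boto3 inspector2 client to list active, critical findings; the filter values and Region shown are illustrative assumptions, not the only options.

import boto3

inspector = boto3.client("inspector2", region_name="us-east-1")

# List active findings rated CRITICAL (filter values are illustrative).
response = inspector.list_findings(
    filterCriteria={
        "severity": [{"comparison": "EQUALS", "value": "CRITICAL"}],
        "findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}],
    },
    maxResults=10,
)

for finding in response["findings"]:
    print(finding["title"], finding["severity"])

In practice most teams consume the same findings through the EventBridge and Security Hub integrations described above rather than polling the API.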

Amazon Neptune

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine that is optimized for storing billions of relationships and querying the graph with millisecond latency. Neptune supports the popular graph query languages Apache TinkerPop Gremlin, the W3C's SPARQL, and Neo4j's openCypher, enabling you to build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.

Neptune is highly available, with read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune provides data security features, with support for encryption at rest and in transit. Neptune is fully managed, so you no longer need to worry about database management tasks like hardware provisioning, software patching, setup, configuration, or backups.

Key Service Components

- Primary DB instance - Supports read and write operations, and performs all of the data modifications to the cluster volume. Each Neptune DB cluster has one primary DB instance that is responsible for writing (that is, loading or modifying) graph database contents.
- Neptune replica - Connects to the same storage volume as the primary DB instance and supports only read operations. Each Neptune DB cluster can have up to 15 Neptune replicas in addition to the primary DB instance. This provides high availability by locating Neptune replicas in separate Availability Zones and distributing load from reading clients.
- Cluster volume - Neptune data is stored in the cluster volume, which is designed for reliability and high availability. A cluster volume consists of copies of the data across multiple Availability Zones in a single AWS Region. Because your data is automatically replicated across Availability Zones, it is highly durable, and there is little possibility of data loss.

Supports Open Graph APIs

Amazon Neptune supports open graph APIs for both Gremlin and SPARQL, and it provides high performance for both of these graph models and their query languages. You can choose the Property Graph (PG) model and its open source query language, the Apache TinkerPop Gremlin graph traversal language. Or, you can use the W3C standard Resource Description Framework (RDF) model and its standard SPARQL query language.

Highly Secure

Neptune provides multiple levels of security for your database. Security features include network isolation using Amazon VPC, and encryption at rest using keys that you create and control through AWS Key Management Service (AWS KMS). On an encrypted Neptune instance, data in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster.

Fully Managed

With Amazon Neptune, you don't have to worry about database management tasks like hardware provisioning, software patching, setup, configuration, or backups. You can use Neptune to create sophisticated, interactive graph applications that can query billions of relationships in milliseconds. SQL queries for highly connected data are complex and hard to tune for performance. With Neptune, you can use the popular graph query languages TinkerPop Gremlin and SPARQL to execute powerful queries that are easy to write and perform well on connected data. This capability significantly reduces code complexity so that you can quickly create applications that process relationships. Neptune is designed to offer greater than 99.99 percent availability. It increases database performance and availability by tightly integrating the database engine with an SSD-backed virtualized storage layer that is built for database workloads. Neptune storage is fault-tolerant and self-healing. Disk failures are repaired in the background without loss of database availability. Neptune automatically detects database crashes and restarts without the need for crash recovery or rebuilding the database cache. If the entire instance fails, Neptune automatically fails over to one of up to 15 read replicas.
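As a rough sketch of querying Neptune with Gremlin from Python, the following example uses the open-source gremlinpython driver (installed separately with pip install gremlinpython); the cluster endpoint is a placeholder, and IAM authentication, if enabled on the cluster, would require additional request signing not shown here.

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder Neptune cluster endpoint; Gremlin is served over WebSocket on port 8182.
endpoint = "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin"

connection = DriverRemoteConnection(endpoint, "g")
g = traversal().withRemote(connection)

# Add a vertex, then count vertices with the same label.
g.addV("person").property("name", "alice").next()
count = g.V().hasLabel("person").count().next()
print(count)

connection.close()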

Amazon Timestream

With Amazon Timestream, you can easily store and analyze sensor data for IoT applications, metrics for DevOps use cases, and telemetry for application monitoring scenarios such as clickstream data analysis. Amazon Timestream is a fast, scalable, fully managed, purpose-built time series database that makes it easy to store and analyze trillions of time series data points per day. Timestream saves you time and cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost-optimized storage tier based upon user-defined policies. Timestream's purpose-built query engine lets you access and analyze recent and historical data together, without having to specify its location. Amazon Timestream has built-in time series analytics functions, helping you identify trends and patterns in your data in near real time. Timestream is serverless and automatically scales up or down to adjust capacity and performance. Because you don't need to manage the underlying infrastructure, you can focus on optimizing and building your applications.

Timestream also integrates with commonly used services for data collection, visualization, and machine learning. You can send data to Amazon Timestream using AWS IoT Core, Amazon Kinesis, Amazon MSK, and open source Telegraf. You can visualize data using Amazon QuickSight, Grafana, and business intelligence tools through JDBC. You can also use Amazon SageMaker with Timestream for machine learning.

Timestream key benefits

The key benefits of Amazon Timestream are:
- Serverless with auto-scaling - With Amazon Timestream, there are no servers to manage and no capacity to provision. As the needs of your application change, Timestream automatically scales to adjust capacity.
- Data lifecycle management - Amazon Timestream simplifies the complex process of data lifecycle management. It offers storage tiering, with a memory store for recent data and a magnetic store for historical data. Amazon Timestream automates the transfer of data from the memory store to the magnetic store based upon user-configurable policies.
- Simplified data access - With Amazon Timestream, you no longer need to use disparate tools to access recent and historical data. Amazon Timestream's purpose-built query engine transparently accesses and combines data across storage tiers without you having to specify the data location.
- Purpose-built for time series - You can quickly analyze time series data using SQL, with built-in time series functions for smoothing, approximation, and interpolation. Timestream also supports advanced aggregates, window functions, and complex data types such as arrays and rows.
- Always encrypted - Amazon Timestream ensures that your time series data is always encrypted, whether at rest or in transit. Amazon Timestream also enables you to specify an AWS KMS customer managed key (CMK) for encrypting data in the magnetic store.
- High availability - Amazon Timestream ensures high availability of your write and read requests by automatically replicating data and allocating resources across at least three different Availability Zones within a single AWS Region. For more information, see the Timestream Service Level Agreement.
- Durability - Amazon Timestream ensures durability of your data by automatically replicating your memory and magnetic store data across different Availability Zones within a single AWS Region. All of your data is written to disk before acknowledging your write request as complete.

Timestream use cases

Examples of a growing list of use cases for Timestream include:
- Monitoring metrics to improve the performance and availability of your applications.
- Storage and analysis of industrial telemetry to streamline equipment management and maintenance.
- Tracking user interaction with an application over time.
- Storage and analysis of IoT sensor data.
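As a minimal sketch of writing and querying a record with the AWS SDK for Python, the following example assumes a database and table already exist; the database, table, dimension, and measure names are placeholders.

import time
import boto3

write_client = boto3.client("timestream-write", region_name="us-east-1")
query_client = boto3.client("timestream-query", region_name="us-east-1")

# Write one CPU measurement; database and table names are placeholders.
write_client.write_records(
    DatabaseName="example_db",
    TableName="example_table",
    Records=[{
        "Dimensions": [{"Name": "host", "Value": "web-01"}],
        "MeasureName": "cpu_utilization",
        "MeasureValue": "35.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
    }],
)

# Query the most recent values back with SQL.
result = query_client.query(
    QueryString='SELECT host, measure_value::double, time '
                'FROM "example_db"."example_table" ORDER BY time DESC LIMIT 5'
)
for row in result["Rows"]:
    print(row)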

AWS CloudHSM

AWS CloudHSM offers secure cryptographic key storage for customers by providing managed hardware security modules (HSMs) in the AWS Cloud. A hardware security module is a computing device that processes cryptographic operations and provides secure storage for cryptographic keys. When you use an HSM from AWS CloudHSM, you can perform a variety of cryptographic tasks:
- Generate, store, import, export, and manage cryptographic keys, including symmetric keys and asymmetric key pairs.
- Use symmetric and asymmetric algorithms to encrypt and decrypt data.
- Use cryptographic hash functions to compute message digests and hash-based message authentication codes (HMACs).
- Cryptographically sign data (including code signing) and verify signatures.
- Generate cryptographically secure random data.
If you want a managed service for creating and controlling your encryption keys but you don't want or need to operate your own HSM, consider using AWS Key Management Service.

VPC Peering

- Connects one VPC to another.
- Instances behave as if they were on the same private network.
- You can peer VPCs with other AWS accounts as well as with other VPCs in the same account.
- Peering can be arranged in a star configuration, for example one central VPC peered with 4 others. Peering is not transitive, so each pair of VPCs that needs to communicate must be peered directly.
A minimal boto3 sketch of creating and accepting a peering connection follows this list.
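This is a rough sketch, assuming two existing VPCs in the same account and Region; the VPC IDs, route table ID, and CIDR block are placeholders. It creates the peering connection, accepts it, and adds a route so traffic for the peer's CIDR flows over the peering connection.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection between two VPCs (placeholder IDs).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the accepter VPC must accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route traffic destined for the peer VPC's CIDR through the peering connection.
ec2.create_route(
    RouteTableId="rtb-33333333",
    DestinationCidrBlock="10.1.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)

Both sides need a route like this (and matching security group rules) before instances can communicate across the peering connection.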

AWS CloudShell

AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console and use to manage AWS services with the AWS Command Line Interface (AWS CLI) and a range of pre-installed development tools. You can run AWS CLI commands against AWS services using your preferred shell (Bash, PowerShell, or Z shell), and you can do this without needing to download or install command line tools.

AWS CloudShell features

AWS Command Line Interface - You launch AWS CloudShell from the AWS Management Console, and the AWS credentials you used to sign in to the console are automatically available in a new shell session. This pre-authentication of AWS CloudShell users allows you to skip configuring credentials when interacting with AWS services using AWS CLI version 2 (pre-installed on the shell's compute environment). For more information on interacting with AWS services using the command-line interface, see Working with AWS services in AWS CloudShell.

Shells and development tools - With the shell that's created for AWS CloudShell sessions, you can switch seamlessly between your preferred command-line shells. More specifically, you can switch between Bash, PowerShell, and Z shell. You also have access to pre-installed tools and utilities such as git, make, pip, sudo, tar, tmux, vim, wget, and zip. The shell environment is pre-configured with support for leading software languages, enabling you to run Node.js and Python projects, for example, without first having to perform runtime installations. PowerShell users can use the .NET Core runtime. Files created in or uploaded to AWS CloudShell can also be committed to a local repository before being pushed to a remote repository managed by AWS CodeCommit. For more information, see AWS CloudShell compute environment: specifications and software.

Persistent storage - When using AWS CloudShell you have persistent storage of 1 GB for each AWS Region at no additional cost. The persistent storage is located in your home directory ($HOME) and is private to you. Unlike ephemeral environment resources that are recycled after each shell session ends, data in your home directory persists between sessions. For more information about the retention of data in persistent storage, see Persistent storage.

Security - The AWS CloudShell environment and its users are protected by specific security features such as IAM permissions management, shell session restrictions, and Safe Paste for text input.
- Permissions management with IAM - Administrators can grant and deny permissions to AWS CloudShell users using IAM policies. Administrators can also create policies that specify at a granular level the particular actions those users can perform with the shell environment. For more information, see Managing AWS CloudShell access and usage with IAM policies.
- Shell session management - Inactive and long-running sessions are automatically stopped and recycled. For more information, see Shell sessions.
- Safe Paste for text input - Enabled by default, Safe Paste is a security feature that asks you to verify that multiline text that you're about to paste into the shell doesn't contain malicious scripts. For more information, see Using Safe Paste for multiline text.

Customization options - Your AWS CloudShell experience can be customized by changing screen layouts (multiple tabs), text sizes, and light/dark interface themes. You can also extend your shell environment by installing your own software and modifying start-up shell scripts. For more information, see Customizing your AWS CloudShell experience.

Pricing - AWS CloudShell is an AWS service that's available at no additional charge. You pay for any other AWS resources that you run with AWS CloudShell. Standard data transfer rates also apply. For more information, see Service quotas and restrictions for AWS CloudShell.
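Because CloudShell sessions inherit your console credentials, a quick way to confirm which identity you are running as is a short Python snippet that calls AWS STS. Python 3 is pre-installed in the environment; if boto3 is not already present in your session, a pip install --user boto3 would be needed first (an assumption about your particular environment).

import boto3

# In CloudShell the console credentials are already available,
# so no access keys need to be configured before making this call.
sts = boto3.client("sts")
identity = sts.get_caller_identity()

print("Account:", identity["Account"])
print("Caller ARN:", identity["Arn"])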

AWS Pricing Calculator

AWS Pricing Calculator is a web service that you can use to create cost estimates that match your AWS use case. AWS Pricing Calculator is useful both for people who have never used AWS and for those who want to reorganize or expand their usage.

Auto Scaling

AWS provides multiple services that you can use to scale your application. Auto scaling is enabled by Amazon CloudWatch and is available at no additional charge beyond the service fees for CloudWatch and the other AWS resources that you use. Use a scaling plan to configure auto scaling for related or associated scalable resources in a matter of minutes. For example, you can use tags to group resources in categories such as production, testing, or development, and then search for and set up scaling plans for scalable resources that belong to each category. Or, if your cloud infrastructure includes AWS CloudFormation, you can define stack templates to create collections of resources, and then create a scaling plan for the scalable resources that belong to each stack.

Supported resources

AWS Auto Scaling supports the use of scaling plans for the following services and resources:
- Amazon Aurora - Increase or decrease the number of Aurora read replicas that are provisioned for an Aurora DB cluster.
- Amazon EC2 Auto Scaling - Launch or terminate EC2 instances by increasing or decreasing the desired capacity of an Auto Scaling group.
- Amazon Elastic Container Service - Increase or decrease the desired task count in Amazon ECS.
- Amazon DynamoDB - Increase or decrease the provisioned read and write capacity of a DynamoDB table or a global secondary index.
- Spot Fleet - Launch or terminate EC2 instances by increasing or decreasing the target capacity of a Spot Fleet.

Scaling plan features and benefits

Scaling plans provide the following features and benefits:
- Resource discovery - AWS Auto Scaling provides automatic resource discovery to help find resources in your application that can be scaled.
- Dynamic scaling - Scaling plans use the Amazon EC2 Auto Scaling and Application Auto Scaling services to adjust the capacity of scalable resources to handle changes in traffic or workload. Dynamic scaling metrics can be standard utilization or throughput metrics, or custom metrics.
- Built-in scaling recommendations - AWS Auto Scaling provides scaling strategies with recommendations that you can use to optimize for performance, costs, or a balance between the two.
- Predictive scaling - Scaling plans also support predictive scaling for Auto Scaling groups. This helps to scale your Amazon EC2 capacity faster when there are regularly occurring spikes.

Important: If you have been using scaling plans only for configuring predictive scaling for your Auto Scaling groups, we strongly recommend that you use the predictive scaling policies of Auto Scaling groups instead. This recently introduced option offers enhanced features, such as using metric aggregations to create new custom metrics or retaining historical metric data across blue/green deployments. For more information, see Predictive scaling for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.

How to get started

Use the following resources to help you create and use a scaling plan: How scaling plans work, Best practices for scaling plans, Getting started with scaling plans, and Work with scaling plans.

Work with scaling plans

You can create, access, and manage your scaling plans using any of the following interfaces:
- AWS Management Console - Provides a web interface that you can use to access your scaling plans. If you've signed up for an AWS account, you can access your scaling plans by signing into the AWS Management Console, using the search box on the navigation bar to search for AWS Auto Scaling, and then choosing AWS Auto Scaling.
- AWS Command Line Interface (AWS CLI) - Provides commands for a broad set of AWS services, and is supported on Windows, macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information, see autoscaling-plans in the AWS CLI Command Reference.
- AWS Tools for Windows PowerShell - Provides commands for a broad set of AWS products for those who script in the PowerShell environment. To get started, see the AWS Tools for Windows PowerShell User Guide. For more information, see the AWS Tools for PowerShell Cmdlet Reference.
- AWS SDKs - Provide language-specific API operations and take care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs.
- Query API - Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access AWS services. However, it requires your application to handle low-level details such as generating the hash to sign the request, and handling errors. For more information, see the AWS Auto Scaling API Reference.
- AWS CloudFormation - Supports creating scaling plans using CloudFormation templates. For more information, see the AWS::AutoScalingPlans::ScalingPlan reference in the AWS CloudFormation User Guide.
To connect programmatically to an AWS service, you use an endpoint. For information about endpoints for calls to AWS Auto Scaling, see AWS Auto Scaling endpoints and quotas in the AWS General Reference. This page also shows the Regional availability of scaling plans.

Pricing

All scaling plan features are enabled for your use. The features are provided at no additional charge beyond the service fees for CloudWatch and the other AWS Cloud resources that you use.
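As a hedged sketch of the API shape, the following Python example uses the boto3 autoscaling-plans client to create a tag-based scaling plan that target-tracks average CPU for an Auto Scaling group; the plan name, tag, resource ID, and capacity values are illustrative assumptions, not required values.

import boto3

plans = boto3.client("autoscaling-plans", region_name="us-east-1")

# Create a scaling plan for resources tagged environment=production (illustrative values).
plans.create_scaling_plan(
    ScalingPlanName="example-plan",
    ApplicationSource={
        "TagFilters": [{"Key": "environment", "Values": ["production"]}]
    },
    ScalingInstructions=[{
        "ServiceNamespace": "autoscaling",
        "ResourceId": "autoScalingGroup/my-asg",
        "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
        "MinCapacity": 1,
        "MaxCapacity": 10,
        "TargetTrackingConfigurations": [{
            "PredefinedScalingMetricSpecification": {
                "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        }],
    }],
)

# Check the plan's creation status.
status = plans.describe_scaling_plans(ScalingPlanNames=["example-plan"])
print(status["ScalingPlans"][0]["StatusCode"])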

Porting Assistant for .NET

Porting Assistant for .NET is a compatibility scanner that reduces the manual effort required to port Microsoft .NET Framework applications to .NET Core. The Porting Assistant for .NET assesses the .NET application source code and identifies incompatible APIs and third-party packages. Where applicable, the Porting Assistant for .NET also provides replacement suggestions that are compatible with .NET Core.

AWS Microservice Extractor for .NET

AWS Microservice Extractor for .NET lowers the skill bar and effort required to transform monolithic applications into smaller, independent services that improve uptime and reduce operational costs.

Business Support

We recommend AWS Business Support if you are running production workloads on AWS and want 24x7 access to technical support from engineers, access to the AWS Health API, and contextual architectural guidance for your use cases.

Developer Support

We recommend AWS Developer Support if you are testing or doing early development on AWS and want the ability to get technical support during business hours as well as general architectural guidance as you build and test.

Enterprise Support

We recommend Enterprise Support for 24x7 technical support from high-quality engineers, tools and technology to automatically manage health of your environment, consultative architectural guidance, and a designated Technical Account Manager (TAM) to coordinate access to proactive / preventative programs and AWS subject matter experts.

AWS Audit Manager

AWS Audit Manager helps you continually audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards. Audit Manager automates evidence collection so you can more easily assess whether your policies, procedures, and activities—also known as controls—are operating effectively. When it's time for an audit, Audit Manager helps you manage stakeholder reviews of your controls, which means that you can build audit-ready reports with much less manual effort.

AWS Audit Manager provides prebuilt frameworks that structure and automate assessments for a given compliance standard or regulation. Frameworks include a prebuilt collection of controls with descriptions and testing procedures. These controls are grouped according to the requirements of the specified compliance standard or regulation. You can also customize frameworks and controls to support internal audits according to your specific requirements.

You can create an assessment from any framework. When you create an assessment, AWS Audit Manager automatically runs resource assessments. These assessments collect data for both the AWS accounts and services that you define as in scope for your audit. The data that's collected is automatically transformed into audit-friendly evidence. Then, it's attached to the relevant controls to help you demonstrate compliance in security, change management, business continuity, and software licensing. This evidence collection process is ongoing, and starts when you create your assessment. After you complete an audit and you no longer need Audit Manager to collect evidence, you can stop evidence collection. To do this, change the status of your assessment to inactive.

Features of AWS Audit Manager

With AWS Audit Manager, you can do the following tasks:
- Get started quickly — Create your first assessment by selecting from a gallery of prebuilt frameworks that support a range of compliance standards and regulations. Then, initiate automatic evidence collection to audit your AWS service usage.
- Upload and manage evidence from hybrid or multicloud environments — In addition to the evidence that Audit Manager collects from your AWS environment, you can also upload and centrally manage evidence from your on-premises or multicloud environment.
- Support common compliance standards and regulations — Choose one of the AWS Audit Manager standard frameworks. These frameworks provide prebuilt control mappings for common compliance standards and regulations, including the CIS Foundations Benchmark, PCI DSS, GDPR, HIPAA, SOC 2, GxP, and AWS operational best practices.
- Monitor your active assessments — Use the Audit Manager dashboard to view analytics data for your active assessments, and quickly identify non-compliant evidence that needs to be remediated.
- Customize frameworks — Create your own frameworks with standard or custom controls based on your specific requirements for internal audits.
- Share custom frameworks — Share your custom AWS Audit Manager frameworks with another AWS account, or replicate them into another AWS Region under your own account.
- Support cross-team collaboration — Delegate control sets to subject matter experts who can review related evidence, add comments, and update the status of each control.
- Create reports for auditors — Generate assessment reports that summarize the relevant evidence that's collected for your audit and link to folders that contain the detailed evidence.
- Ensure evidence integrity — Store evidence in a secure location, where it remains unaltered.
Note: AWS Audit Manager assists in collecting evidence that's relevant for verifying compliance with specific compliance standards and regulations. However, it doesn't assess your compliance itself. The evidence that's collected through AWS Audit Manager therefore might not include all the information about your AWS usage that's needed for audits. AWS Audit Manager isn't a substitute for legal counsel or compliance experts.
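To sketch the programmatic side, this hedged Python example lists the prebuilt (standard) frameworks that could seed an assessment; it assumes only the boto3 auditmanager client and appropriate IAM permissions, and the Region is a placeholder.

import boto3

auditmanager = boto3.client("auditmanager", region_name="us-east-1")

# List the prebuilt frameworks that ship with Audit Manager.
response = auditmanager.list_assessment_frameworks(frameworkType="Standard")

for framework in response["frameworkMetadataList"]:
    print(framework["name"], "-", framework.get("complianceType", ""))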

Amazon Braket

Amazon Braket is a fully managed AWS service that helps researchers, scientists, and developers get started with quantum computing. It provides a development environment to explore and design quantum algorithms, test them on simulated quantum computers, and run them on your choice of different quantum hardware technologies. Quantum computing has the potential to solve computational problems that are beyond the reach of classical computers because it harnesses the laws of quantum mechanics to process information in new ways.

Gaining access to quantum computing hardware can be expensive and inconvenient. Limited access makes it difficult to run algorithms, optimize designs, evaluate the current state of the technology, and plan for when to invest your resources for maximum benefit. Amazon Braket helps you overcome these challenges by offering a single point of access to a variety of quantum computing technologies. It enables you to:
- Explore and design quantum and hybrid algorithms
- Test algorithms on different quantum circuit simulators
- Run algorithms on different types of quantum computers
- Create proof of concept applications

Defining quantum problems and programming quantum computers to solve them requires a new set of skills. To help you gain these skills, Amazon Braket offers different environments to simulate and run your quantum algorithms. You can find the approach that best suits your requirements, and you can get started quickly with a set of example environments called notebooks.

Amazon Braket development has three aspects — Build, Test, and Run:
- Build - Amazon Braket provides fully managed Jupyter notebook environments that make it easy to get started. Amazon Braket notebooks are pre-installed with sample algorithms, resources, and developer tools, including the Amazon Braket SDK. With the Amazon Braket SDK, you can build quantum algorithms and then test and run them on different quantum computers and simulators by changing a single line of code.
- Test - Amazon Braket provides access to fully managed, high-performance quantum circuit simulators, so you can test and validate your circuits. Amazon Braket handles all the underlying software components and EC2 clusters to take away the burden of simulating quantum circuits on classical HPC infrastructure.
- Run - Amazon Braket provides secure, on-demand access to different types of quantum computers. You have access to gate-based quantum computers from IonQ, OQC, Xanadu, and Rigetti, as well as a quantum annealer from D-Wave. You have no upfront commitment, and no need to procure access with individual providers.

About quantum computing and Amazon Braket

Quantum computing is in its early developmental stage. It's important to understand that no universal, fault-tolerant quantum computer exists at present. Therefore, certain types of quantum hardware are better suited for certain use cases, and it is crucial to have access to a variety of computing hardware. Amazon Braket offers a variety of hardware through third-party providers. Existing quantum hardware is limited due to noise, which introduces errors. The industry is in the Noisy Intermediate-Scale Quantum (NISQ) era. In the NISQ era, quantum computing devices are too noisy to sustain pure quantum algorithms, such as Shor's algorithm or Grover's algorithm. Until better quantum error correction is available, the most practical quantum computing requires the combination of classical (traditional) computing resources with quantum computers to create hybrid algorithms. Amazon Braket helps you work with hybrid quantum algorithms.

In hybrid quantum algorithms, quantum processing units (QPUs) are used as co-processors for CPUs, thus speeding up specific calculations in a classical algorithm. These algorithms use iterative processing, in which computation moves between classical and quantum computers. For example, current applications of quantum computing in chemistry, optimization, and machine learning are based on variational quantum algorithms, which are a type of hybrid quantum algorithm. In variational quantum algorithms, classical optimization routines adjust the parameters of a parameterized quantum circuit iteratively, much in the same way that the weights of a neural network are adjusted iteratively based on the error in a machine learning training set. Amazon Braket offers access to the PennyLane open source software library, which assists you with variational quantum algorithms.

Quantum computing is gaining traction for computations in four main areas:
- Number theory — including factoring and cryptography (for example, Shor's algorithm is a primary quantum method for number theory computations).
- Optimization — including constraint satisfaction, solving linear systems, and machine learning.
- Oracular computing — including search, hidden subgroups, and order finding (for example, Grover's algorithm is a primary quantum method for oracular computations).
- Simulation — including direct simulation, knot invariants, and quantum approximate optimization algorithm (QAOA) applications.
Applications for these categories of computations can be found in financial services, biotechnology, manufacturing, and pharmaceuticals, to name a few. Amazon Braket offers capabilities and notebook examples that can already be applied to many proof of concept problems, and to certain practical problems.
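A minimal sketch with the Amazon Braket SDK (pip install amazon-braket-sdk): build a two-qubit Bell circuit and run it on the free local simulator. Running the same circuit on a managed simulator or QPU would instead use an AwsDevice with a device ARN and incurs charges.

from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a Bell-state circuit: Hadamard on qubit 0, then CNOT from qubit 0 to qubit 1.
bell = Circuit().h(0).cnot(0, 1)

# Run it on the local state-vector simulator with 1000 shots.
device = LocalSimulator()
result = device.run(bell, shots=1000).result()

# Expect roughly half '00' and half '11' measurement outcomes.
print(result.measurement_counts)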

Amazon EBS

Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. EBS volumes are highly available and reliable storage volumes that behave like raw, unformatted block devices. You can mount these volumes as devices on your instances and use them like a hard drive. EBS volumes that are attached to an instance are exposed as storage volumes that persist independently from the life of the instance. You can create a file system on top of these volumes, or use them in any way you would use a block device. You can dynamically change the configuration of a volume attached to an instance.

We recommend Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well suited for use as the primary storage for file systems, databases, or any applications that require fine granular updates and access to raw, unformatted, block-level storage. Amazon EBS is well suited to both database-style applications that rely on random reads and writes, and to throughput-intensive applications that perform long, continuous reads and writes. With Amazon EBS, you pay only for what you use.

Features of Amazon EBS

You create an EBS volume in a specific Availability Zone, and then attach it to an instance in that same Availability Zone. To make a volume available outside of the Availability Zone, you can create a snapshot and restore that snapshot to a new volume anywhere in that Region. You can copy snapshots to other Regions and then restore them to new volumes there, making it easier to leverage multiple AWS Regions for geographical expansion, data center migration, and disaster recovery.

Amazon EBS provides the following volume types: General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD. For more information, see EBS volume types. The following is a summary of performance and use cases for each volume type.
- General Purpose SSD volumes (gp2 and gp3) balance price and performance for a wide variety of transactional workloads. These volumes are ideal for use cases such as boot volumes, medium-size single-instance databases, and development and test environments.
- Provisioned IOPS SSD volumes (io1 and io2) are designed to meet the needs of I/O-intensive workloads that are sensitive to storage performance and consistency. They provide a consistent IOPS rate that you specify when you create the volume. This enables you to predictably scale to tens of thousands of IOPS per instance. Additionally, io2 volumes provide the highest levels of volume durability.
- Throughput Optimized HDD volumes (st1) provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. These volumes are ideal for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing.
- Cold HDD volumes (sc1) provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. These volumes are ideal for large, sequential, cold-data workloads. If you require infrequent access to your data and are looking to save costs, these volumes provide inexpensive block storage.

You can create your EBS volumes as encrypted volumes, in order to meet a wide range of data-at-rest encryption requirements for regulated/audited data and applications. When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage. For more information, see Amazon EBS encryption.

You can create point-in-time snapshots of EBS volumes, which are persisted to Amazon S3. Snapshots protect data for long-term durability, and they can be used as the starting point for new EBS volumes. The same snapshot can be used to create as many volumes as needed. These snapshots can be copied across AWS Regions. For more information, see Amazon EBS snapshots.

Performance metrics, such as bandwidth, throughput, latency, and average queue length, are available through the AWS Management Console. These metrics, provided by Amazon CloudWatch, allow you to monitor the performance of your volumes to make sure that you are providing enough performance for your applications without paying for resources you don't need. For more information, see Amazon EBS volume performance on Linux instances.
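A rough boto3 sketch of the lifecycle described above (create an encrypted gp3 volume, attach it to an instance in the same Availability Zone, and snapshot it); the instance ID, Availability Zone, size, and device name are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an encrypted 100 GiB gp3 volume in a specific Availability Zone.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
)
volume_id = volume["VolumeId"]

# Wait until the volume is available, then attach it to an instance in the same AZ.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf")

# Take a point-in-time snapshot (stored durably in Amazon S3 behind the scenes).
snapshot = ec2.create_snapshot(VolumeId=volume_id, Description="example backup")
print(snapshot["SnapshotId"])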

Amazon FSx

Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. With Amazon FSx, you can choose between four widely used file systems: Lustre, NetApp ONTAP, OpenZFS, and Windows File Server.

Amazon File Cache is a fully managed, high-speed cache on AWS that makes it easier to process file data, regardless of where the data is stored. File Cache serves as a temporary, high-performance storage location for data stored in on-premises file systems, AWS file systems, and Amazon S3 buckets. Using this capability, you can make dispersed data sets available to file-based applications on AWS with a unified view and at high speeds: sub-millisecond latencies and high throughput. Amazon File Cache presents data from linked data sets as a unified set of files and directories. It serves data in the cache to applications running in AWS at high speeds, with consistent, sub-millisecond latencies, up to hundreds of GB/s of throughput, and up to millions of operations per second, helping speed up workload completion times and optimize compute resource consumption costs. File Cache automatically loads data into the cache when it's accessed for the first time and releases data when it's not in use.

Amazon File Cache is a fully managed service. With a few clicks in the AWS console, CLI, or API, you can create a high-performance cache. With Amazon File Cache, you don't have to worry about managing file servers and storage volumes, updating hardware, configuring software, running out of capacity, or tuning performance—Amazon File Cache automates these time-consuming administration tasks. Amazon File Cache is POSIX-compliant, so you can use your current Linux-based applications without having to make any changes. Amazon File Cache provides a native file system interface and works as any file system does with your Linux operating system. It also provides read-after-write consistency and supports file locking.

Amazon File Cache and data repositories

You can link your cache to data repositories on Amazon S3, or on NFS file systems that support the NFSv3 protocol. The NFS data repository can be on premises or in the cloud. You can link a maximum of 8 data repositories, but they must all be of the same repository type (either all S3 or all NFS). For more information on linking your cache to a data repository, see Linking your cache to a data repository. When linked to a data repository, a cache transparently presents S3 or NFS objects as files and directories. By default, File Cache automatically loads data into the cache when it's accessed for the first time. You can optionally pre-load data into the cache before starting your workload. For more information on importing data repository files and directories, see Importing files from your data repository. When the files on your cache are changed (either by users or by your workloads), you can write the cache data back to the data repository. You can use HSM commands to transfer the data and metadata between your cache and its linked data repositories. For more information, see Exporting changes to the data repository.

Deployment and storage type

Amazon File Cache supports the CACHE_1 deployment type. When you create a new cache on the AWS console, this deployment type is automatically preset for your cache. On caches using the CACHE_1 deployment type, data is automatically replicated, and file servers are replaced if they fail. Amazon File Cache is built on solid state drive (SSD) storage. The SSD storage type is suited for low-latency, IOPS-intensive workloads that typically have small, random file operations. For more information about cache performance, see Amazon File Cache performance.

Accessing Amazon File Cache

You can mix and match the compute instance types and Linux Amazon Machine Images (AMIs) that are connected to a single cache. Amazon File Cache is accessible from compute workloads running on Amazon Elastic Compute Cloud (Amazon EC2) instances and on Amazon Elastic Container Service (Amazon ECS) Docker containers.
- Amazon EC2 - You access your cache from your Amazon EC2 compute instances using the open-source Lustre client. Amazon EC2 instances can access your cache from other Availability Zones within the same Amazon Virtual Private Cloud (Amazon VPC), provided your networking configuration provides for access across subnets within the VPC. After your cache is mounted, you can work with its files and directories just as you do using a local file system.
- Amazon ECS - You access Amazon File Cache from Amazon ECS Docker containers on Amazon EC2 instances. For more information, see Mounting from Amazon Elastic Container Service.
Amazon File Cache is compatible with the most popular Linux-based AMIs, including Red Hat Enterprise Linux (RHEL), CentOS, Rocky Linux, Ubuntu, and SUSE Linux. For RHEL, CentOS, Rocky Linux, and Ubuntu, an AWS Lustre client repository provides clients that are compatible with these operating systems. For more information on the clients, compute instances, and environments from which you can access your cache, see Accessing caches.

Integrations with AWS services

Amazon File Cache integrates with AWS Batch using EC2 launch templates. AWS Batch enables you to run batch computing workloads on the AWS Cloud, including high performance computing (HPC), machine learning (ML), and other asynchronous workloads. AWS Batch automatically and dynamically sizes instances based on job resource requirements. For more information, see What Is AWS Batch? in the AWS Batch User Guide. Amazon File Cache also integrates with AWS Thinkbox Deadline. Deadline is a hassle-free administration and compute management toolkit for Windows, Linux, and macOS based render farms. Deadline offers a wide range of management options for render farms and compute clusters of all sizes, and allows users to easily access any combination of on-premises or cloud-based resources for their rendering and processing needs. For more information, see the Deadline User Guide.

Security and compliance

Amazon File Cache supports encryption at rest and in transit. Amazon File Cache automatically encrypts cache data at rest using keys managed in AWS Key Management Service (AWS KMS). Data in transit is also automatically encrypted on caches when accessed from supported Amazon EC2 instances. For more information about data encryption in Amazon File Cache, see Data encryption in Amazon File Cache. For more information about security, see Security in Amazon File Cache.

Pricing for Amazon File Cache

With Amazon File Cache, there are no upfront hardware or software costs. You pay only for the resources used, with no minimum commitments, setup costs, or additional fees.
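Amazon File Cache is managed through the Amazon FSx API namespace. As a hedged sketch (assuming your boto3 version exposes the File Cache operations on the fsx client and that the response fields are named as shown), the following Python example lists existing caches and prints the DNS name and Lustre mount name that the mount command on an EC2 instance would need.

import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# List Amazon File Cache resources in this Region (assumes File Cache API support in your SDK version).
response = fsx.describe_file_caches()

for cache in response.get("FileCaches", []):
    # The DNS name and Lustre mount name are what the Lustre mount command on EC2 uses.
    print(cache["FileCacheId"], cache["DNSName"], cache["LustreConfiguration"]["MountName"])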

Amazon Kendra

Amazon Kendra is a search service, powered by machine learning, that enables users to search unstructured text using natural language. Amazon Kendra is a highly accurate and intelligent search service that enables your users to search unstructured and structured data using natural language processing and advanced search algorithms. It returns specific answers to questions, giving users an experience that's close to interacting with a human expert. It is highly scalable and capable of meeting performance demands, tightly integrated with other AWS services such as Amazon S3 and Amazon Lex, and offers enterprise-grade security. For information on Amazon Kendra API operations, see the API Reference documentation. Amazon Kendra users can ask the following types of questions, or queries: Factoid questions—Simple who, what, when, or where questions, such as Where is the nearest service center to Seattle? Factoid questions have fact-based answers that can be returned in the form of a single word or phrase. The answer is retrieved from a FAQ or from your indexed documents. Descriptive questions—Questions where the answer could be a sentence, passage, or an entire document. For example, How do I connect my Echo Plus to my network? or How do I get tax benefits for lower income families? Keyword searches—Questions where the intent and scope are not clear. For example, keynote address. As 'address' can often have several meanings, Amazon Kendra can infer the user's intent behind the search query to return relevant information aligned with the user's intended meaning. Amazon Kendra uses deep learning models to handle this kind of query. Benefits of Amazon Kendra Amazon Kendra has the following benefits: Accuracy—Unlike traditional search services that use keyword searches where results are based on basic keyword matching and ranking, Amazon Kendra attempts to understand the context of the question. Amazon Kendra searches across your data and goes beyond traditional search to return the most relevant word, snippet, or document for your query. Amazon Kendra uses machine learning to improve search results over time. Simplicity—Amazon Kendra provides a console and API for managing the documents that you want to search. You can use a simple search API to integrate Amazon Kendra into your client applications, such as websites or mobile applications. Connectivity—Amazon Kendra can connect to third-party data repositories or data sources such as Microsoft SharePoint. You can easily index and search your documents using your data source. User access control—Amazon Kendra delivers highly secure enterprise search for your search applications. Your search results reflect the security model of your organization. Search results can be filtered based on the user or their group access to documents. Customers are responsible for authenticating and authorizing user access. Amazon Kendra Developer Edition The Amazon Kendra Developer Edition provides all of the features of Amazon Kendra at a lower cost. It includes a free tier that provides 750 hours of use. The Developer Edition is ideal for exploring how Amazon Kendra indexes your documents, trying out features, and developing applications that use Amazon Kendra. The Developer Edition provides the following: Up to 5 indexes with up to 5 data sources each. 10,000 documents or 3 GB of extracted text. Approximately 4,000 queries per day or 0.05 queries per second.
Runs in 1 availability zone (AZ)—see Availability Zones (data centers in AWS regions). You should not use the Developer Edition for a production application. The Developer Edition doesn't provide any guarantees of latency or availability. Amazon Kendra Enterprise Edition Use Amazon Kendra Enterprise Edition when you want to index your entire enterprise document library or when your application is ready for use in a production environment. The Enterprise Edition provides the following: Up to 5 indexes with up to 50 data sources each. 100,000 documents or 30 GB of extracted text. Approximately 8,000 queries per day or 0.1 queries per second. Runs in 3 availability zones (AZs)—see Availability Zones (data centers in AWS regions). You can increase this quota using the Service Quotas console. Pricing for Amazon Kendra You can get started for free with the Amazon Kendra Developer Edition, which provides usage of up to 750 hours for the first 30 days. After your trial expires, you are charged for all provisioned Amazon Kendra indexes, even if they are empty and no queries are executed. After the trial expires, there are additional charges for scanning and syncing documents using the Amazon Kendra data sources.
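To illustrate the query types described above, the following minimal sketch uses the AWS SDK for Python (Boto3) to run a natural-language query against an existing index; the index ID is a placeholder assumption.

import boto3

# Minimal sketch: ask a factoid question against an existing Kendra index.
kendra = boto3.client("kendra")
response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
    QueryText="Where is the nearest service center to Seattle?",
)
for item in response["ResultItems"]:
    # Results can be answers, question-and-answer matches, or documents.
    title = item.get("DocumentTitle", {}).get("Text", "")
    excerpt = item.get("DocumentExcerpt", {}).get("Text", "")
    print(item["Type"], title, excerpt[:120])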

AWS Command Line Interface

The AWS Command Line Interface (AWS CLI) is a unified tool that provides a consistent interface for interacting with all parts of Amazon Web Services. AWS CLI commands for different services are covered in the accompanying user guide, including descriptions, syntax, and usage examples. The AWS Command Line Interface (AWS CLI) is an open source tool that enables you to interact with AWS services using commands in your command-line shell. With minimal configuration, the AWS CLI enables you to start running commands that implement functionality equivalent to that provided by the browser-based AWS Management Console from the command prompt in your terminal program: Linux shells - Use common shell programs such as bash, zsh, and tcsh to run commands in Linux or macOS. Windows command line - On Windows, run commands at the Windows command prompt or in PowerShell. Remotely - Run commands on Amazon Elastic Compute Cloud (Amazon EC2) instances through a remote terminal program such as PuTTY or SSH, or with AWS Systems Manager. All IaaS (infrastructure as a service) AWS administration, management, and access functions in the AWS Management Console are available in the AWS API and AWS CLI. New AWS IaaS features and services provide full AWS Management Console functionality through the API and CLI at launch or within 180 days of launch. The AWS CLI provides direct access to the public APIs of AWS services. You can explore a service's capabilities with the AWS CLI, and develop shell scripts to manage your resources. In addition to the low-level, API-equivalent commands, several AWS services provide customizations for the AWS CLI. Customizations can include higher-level commands that simplify using a service with a complex API. About AWS CLI version 2 The AWS CLI version 2 is the most recent major version of the AWS CLI and supports all of the latest features. Some features introduced in version 2 are not backported to version 1 and you must upgrade to access those features. There are some "breaking" changes from version 1 that might require you to change your scripts. For a list of breaking changes in version 2, see Migrating from AWS CLI version 1 to version 2. The AWS CLI version 2 is available to install only as a bundled installer. While you might find it in package managers, these are unsupported and unofficial packages that are not produced or managed by AWS. We recommend that you install the AWS CLI from only the official AWS distribution points, as documented in this guide. To install the AWS CLI version 2, see Installing or updating the latest version of the AWS CLI. Maintenance and support for SDK major versions For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Reference Guide: AWS SDKs and tools maintenance policy AWS SDKs and tools version support matrix About Amazon Web Services Amazon Web Services (AWS) is a collection of digital infrastructure services that developers can leverage when developing their applications. The services include computing, storage, database, and application synchronization (messaging and queuing). AWS uses a pay-as-you-go service model. You are charged only for the services that you—or your applications—use. Also, to make AWS more approachable as a platform for prototyping and experimentation, AWS offers a free usage tier. On this tier, services are free below a certain level of usage. For more information about AWS costs and the Free Tier, see AWS Free Tier. 
To obtain an AWS account, open the AWS home page and then choose Create an AWS Account.
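Because the AWS CLI and the AWS SDKs call the same public service APIs, a CLI command can also be reproduced programmatically. The following minimal sketch uses the AWS SDK for Python (Boto3) to call the same DescribeInstances operation that backs the aws ec2 describe-instances command; the Region name is a placeholder assumption.

import boto3

# Minimal sketch: list EC2 instance IDs and states via the DescribeInstances API,
# the same public operation the AWS CLI calls for `aws ec2 describe-instances`.
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder Region
response = ec2.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])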

AWS Config

AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time. AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2) instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud (VPC). For a complete list of AWS resources supported by AWS Config, see Supported Resource Types. Features When you set up AWS Config, you can complete the following: Resource management Specify the resource types you want AWS Config to record. Set up an Amazon S3 bucket to receive a configuration snapshot on request and configuration history. Set up Amazon SNS to send configuration stream notifications. Grant AWS Config the permissions it needs to access the Amazon S3 bucket and the Amazon SNS topic. For more information, see Viewing AWS Resource Configurations and History and Managing AWS Resource Configurations and History. Rules and conformance packs Specify the rules that you want AWS Config to use to evaluate compliance information for the recorded resource types. Use conformance packs, which are collections of AWS Config rules and remediation actions that can be deployed and monitored as a single entity in your AWS account. For more information, see Evaluating Resources with AWS Config Rules and Conformance Packs. Aggregators Use an aggregator to get a centralized view of your resource inventory and compliance. An aggregator is an AWS Config resource type that collects AWS Config configuration and compliance data from multiple AWS accounts and AWS Regions into a single account and Region. For more information, see Multi-Account Multi-Region Data Aggregation. Advanced queries Use one of the sample queries or write your own query by referring to the configuration schema of the AWS resource. For more information, see Querying the Current Configuration State of AWS Resources. Ways to Use AWS Config When you run your applications on AWS, you usually use AWS resources, which you must create and manage collectively. As the demand for your application keeps growing, so does your need to keep track of your AWS resources. AWS Config is designed to help you oversee your application resources in the following scenarios: Resource Administration To exercise better governance over your resource configurations and to detect resource misconfigurations, you need fine-grained visibility into what resources exist and how these resources are configured at any time. You can use AWS Config to notify you whenever resources are created, modified, or deleted without having to monitor these changes by polling the calls made to each resource. You can use AWS Config rules to evaluate the configuration settings of your AWS resources. When AWS Config detects that a resource violates the conditions in one of your rules, AWS Config flags the resource as noncompliant and sends a notification. AWS Config continuously evaluates your resources as they are created, changed, or deleted.
Auditing and Compliance You might be working with data that requires frequent audits to ensure compliance with internal policies and best practices. To demonstrate compliance, you need access to the historical configurations of your resources. This information is provided by AWS Config. Managing and Troubleshooting Configuration Changes When you use multiple AWS resources that depend on one another, a change in the configuration of one resource might have unintended consequences on related resources. With AWS Config, you can view how the resource you intend to modify is related to other resources and assess the impact of your change. You can also use the historical configurations of your resources provided by AWS Config to troubleshoot issues and to access the last known good configuration of a problem resource. Security Analysis To analyze potential security weaknesses, you need detailed historical information about your AWS resource configurations, such as the AWS Identity and Access Management (IAM) permissions that are granted to your users, or the Amazon EC2 security group rules that control access to your resources. You can use AWS Config to view the IAM policy that was assigned to an IAM user, group, or role at any time in which AWS Config was recording. This information can help you determine the permissions that belonged to a user at a specific time: for example, you can view whether the user John Doe had permission to modify Amazon VPC settings on Jan 1, 2015. You can also use AWS Config to view the configuration of your EC2 security groups, including the port rules that were open at a specific time. This information can help you determine whether a security group blocked incoming TCP traffic to a specific port.
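The security analysis scenario above can also be scripted. The following minimal sketch uses the AWS SDK for Python (Boto3) to retrieve the recorded configuration history of a security group; the resource ID is a placeholder assumption.

import boto3

# Minimal sketch: view how a security group's recorded configuration changed over time.
config = boto3.client("config")
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",  # placeholder security group ID
    limit=5,
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])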

Amazon CloudFront

Amazon CloudFront speeds up distribution of your static and dynamic web content, such as .html, .css, .php, image, and media files. When users request your content, CloudFront delivers it through a worldwide network of edge locations that provide low latency and high performance. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. As an example, suppose that you're serving an image from a traditional web server, not from CloudFront. For example, you might serve an image, sunsetphoto.png, using the URL https://example.com/sunsetphoto.png. Your users can easily navigate to this URL and see the image. But they probably don't know that their request is routed from one network to another—through the complex collection of interconnected networks that comprise the internet—until the image is found. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file—and higher data transfer rates. You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world. How you set up CloudFront to deliver content You create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it. How you configure CloudFront to deliver your content You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins. You upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP. 
If you're using an Amazon S3 bucket as an origin server, you can make the objects in your bucket publicly readable, so that anyone who knows the CloudFront URLs for your objects can access them. You also have the option of keeping objects private and controlling who accesses them. See Serving private content with signed URLs and signed cookies. You create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application. At the same time, you specify details such as whether you want CloudFront to log all requests and whether you want the distribution to be enabled as soon as it's created. CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console, or that is returned in the response to a programmatic request, for example, an API request. If you like, you can add an alternate domain name to use instead. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files. As you develop your website or application, you use the domain name that CloudFront provides for your URLs. For example, if CloudFront returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server) is https://d111111abcdef8.cloudfront.net/logo.jpg. Or you can set up CloudFront to use your own domain name with your distribution. In that case, the URL might be https://www.example.com/logo.jpg. Optionally, you can configure your origin server to add headers to the files, to indicate how long you want the files to stay in the cache in CloudFront edge locations. By default, each file stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn't a maximum expiration time.
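To illustrate controlling cache expiration from the origin, the following minimal sketch uses the AWS SDK for Python (Boto3) to upload an object to an S3 origin with a Cache-Control header so that edge locations keep it for seven days instead of the default 24 hours; the bucket and key names are placeholder assumptions.

import boto3

# Minimal sketch: set Cache-Control on an object in an S3 origin so CloudFront
# edge locations cache it for 7 days (604800 seconds).
s3 = boto3.client("s3")
with open("logo.jpg", "rb") as body:
    s3.put_object(
        Bucket="my-origin-bucket",  # placeholder origin bucket
        Key="logo.jpg",
        Body=body,
        ContentType="image/jpeg",
        CacheControl="max-age=604800",
    )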

AWS AppConfig

Use AWS AppConfig to quickly deploy application configurations to applications of any size. AWS AppConfig supports controlled deployments and includes built-in validation checks and monitoring. Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations. A configuration is a collection of settings that influence the behavior of your application. You can use AWS AppConfig with applications hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, containers, mobile applications, or IoT devices. To view examples of the types of configurations you can manage by using AWS AppConfig, see Example configurations. Simplify tasks with AWS AppConfig AWS AppConfig helps simplify the following tasks: Configure Source your configurations from Amazon Simple Storage Service (Amazon S3), AWS AppConfig hosted configurations, Parameter Store, Systems Manager Document Store. Use AWS CodePipeline integration to source your configurations from Bitbucket Pipelines, GitHub, and AWS CodeCommit. Validate While deploying application configurations, a simple typo could cause an unexpected outage. Prevent errors in production systems using AWS AppConfig validators. AWS AppConfig validators provide a syntactic check using a JSON schema or a semantic check using an AWS Lambda function to ensure that your configurations deploy as intended. Configuration deployments only proceed when the configuration data is valid. Deploy and monitor Define deployment criteria and rate controls to determine how your targets retrieve the new configuration. Use AWS AppConfig deployment strategies to set deployment velocity, deployment time, and bake time. Monitor each deployment to proactively catch any errors using AWS AppConfig integration with Amazon CloudWatch. If AWS AppConfig encounters an error, the system rolls back the deployment to minimize impact on your application users. AWS AppConfig use cases AWS AppConfig can help you in the following use cases: Application tuning - Introduce changes carefully to your application that can be tested with production traffic. Feature toggle - Turn on new features that require a timely deployment, such as a product launch or announcement. Allow list - Allow premium subscribers to access paid content. Operational issues - Reduce stress on your application when a dependency or other external factor impacts the system. Benefits of using AWS AppConfig AWS AppConfig offers the following benefits for your organization: Reduce errors in configuration changes AWS AppConfig reduces application downtime by enabling you to create rules to validate your configuration. Configurations that aren't valid can't be deployed. AWS AppConfig provides the following two options for validating configurations: For syntactic validation, you can use a JSON schema. AWS AppConfig validates your configuration by using the JSON schema to ensure that configuration changes adhere to the application requirements. For semantic validation, you can call an AWS Lambda function that runs your configuration before you deploy it. Deploy changes across a set of targets quickly AWS AppConfig simplifies the administration of applications at scale by deploying configuration changes from a central location. AWS AppConfig supports configurations stored in Systems Manager Parameter Store, Systems Manager (SSM) documents, and Amazon S3. You can use AWS AppConfig with applications hosted on EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices. 
Targets don't need to be configured with the Systems Manager SSM Agent or the AWS Identity and Access Management (IAM) instance profile required by other Systems Manager capabilities. This means that AWS AppConfig works with unmanaged instances. Update applications without interruptions AWS AppConfig deploys configuration changes to your targets at runtime without a heavy-weight build process or taking your targets out of service. Control deployment of changes across your application When deploying configuration changes to your targets, AWS AppConfig enables you to minimize risk by using a deployment strategy. You can use the rate controls of a deployment strategy to determine how fast you want your application targets to retrieve a configuration change. How AWS AppConfig works At a high level, there are three processes for working with AWS AppConfig: Configure AWS AppConfig to work with your application. Enable your application code to periodically check for and retrieve configuration data from AWS AppConfig. Deploy a new or updated configuration.
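To illustrate the retrieval step, the following minimal sketch uses the AWS SDK for Python (Boto3) and the AWS AppConfig Data API to poll for the latest configuration at runtime; the application, environment, and configuration profile identifiers are placeholder assumptions.

import boto3

# Minimal sketch: start a configuration session and fetch the latest configuration data.
appconfigdata = boto3.client("appconfigdata")
session = appconfigdata.start_configuration_session(
    ApplicationIdentifier="my-app",                   # placeholder
    EnvironmentIdentifier="production",               # placeholder
    ConfigurationProfileIdentifier="feature-flags",   # placeholder
)
result = appconfigdata.get_latest_configuration(
    ConfigurationToken=session["InitialConfigurationToken"]
)
config_bytes = result["Configuration"].read()  # empty if unchanged since the last poll
if config_bytes:
    print(config_bytes.decode("utf-8"))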

Amazon Kinesis

Amazon Kinesis makes it easy to collect, process, and analyze video and data streams in real time. Amazon Kinesis Video Streams is a fully managed AWS service that you can use to stream live video from devices to the AWS Cloud, or build applications for real-time video processing or batch-oriented video analytics. Kinesis Video Streams isn't just storage for video data. You can use it to watch your video streams in real time as they are received in the cloud. You can either monitor your live streams in the AWS Management Console, or develop your own monitoring application that uses the Kinesis Video Streams API library to display live video. You can use Kinesis Video Streams to capture massive amounts of live video data from millions of sources, including smartphones, security cameras, webcams, cameras embedded in cars, drones, and other sources. You can also send non-video time-serialized data such as audio data, thermal imagery, depth data, RADAR data, and more. As live video streams from these sources into a Kinesis video stream, you can build applications that can access the data, frame-by-frame, in real time for low-latency processing. Kinesis Video Streams is source-agnostic; you can stream video from a computer's webcam using the GStreamer library, or from a camera on your network using the Real-Time Streaming Protocol (RTSP). You can also configure your Kinesis video stream to durably store media data for the specified retention period. Kinesis Video Streams automatically stores this data and encrypts it at rest. Additionally, Kinesis Video Streams time-indexes stored data based on both the producer time stamps and ingestion time stamps. You can build applications that periodically batch-process the video data, or you can create applications that require ad hoc access to historical data for different use cases. Your custom applications, real-time or batch-oriented, can run on Amazon EC2 instances. These applications might process data using open source deep-learning algorithms, or use third-party applications that integrate with Kinesis Video Streams. Benefits of using Kinesis Video Streams include the following: Connect and stream from millions of devices - Kinesis Video Streams enables you to connect and stream video, audio, and other data from millions of devices, including consumer smartphones, drones, dash cams, and more. You can use the Kinesis Video Streams producer libraries to configure your devices and reliably stream in real time, or as after-the-fact media uploads. Durably store, encrypt, and index data - You can configure your Kinesis video stream to durably store media data for custom retention periods. Kinesis Video Streams also generates an index over the stored data based on producer-generated or service-side time stamps. Your applications can easily retrieve specified data in a stream using the time-index. Focus on managing applications instead of infrastructure - Kinesis Video Streams is serverless, so there is no infrastructure to set up or manage. You don't need to worry about the deployment, configuration, or elastic scaling of the underlying infrastructure as your data streams and number of consuming applications grow and shrink. Kinesis Video Streams automatically does all the administration and maintenance required to manage streams, so you can focus on the applications, not the infrastructure.
Build real-time and batch applications on data streams - You can use Kinesis Video Streams to build custom real-time applications that operate on live data streams, and create batch or ad hoc applications that operate on durably persisted data without strict latency requirements. You can build, deploy, and manage custom applications: open source (Apache MXNet, OpenCV), homegrown, or third-party solutions via the AWS Marketplace to process and analyze your streams. Kinesis Video Streams Get APIs enable you to build multiple concurrent applications processing data on a real-time or batch-oriented basis. Stream data more securely - Kinesis Video Streams encrypts all data as it flows through the service and when it persists the data. Kinesis Video Streams enforces Transport Layer Security (TLS)-based encryption on data streaming from devices, and encrypts all data at rest using AWS Key Management Service (AWS KMS). Additionally, you can manage access to your data using AWS Identity and Access Management (IAM). Pay as you go - You pay only for the resources that you use, with no upfront commitments or minimum fees.
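For example, the following minimal sketch uses the AWS SDK for Python (Boto3) to create a video stream with a 24-hour retention period and look up the endpoint a producer would use to send media; the stream name is a placeholder assumption.

import boto3

# Minimal sketch: create a stream, then find the PutMedia endpoint a producer would use.
kvs = boto3.client("kinesisvideo")
kvs.create_stream(StreamName="front-door-camera", DataRetentionInHours=24)  # placeholder name
endpoint = kvs.get_data_endpoint(StreamName="front-door-camera", APIName="PUT_MEDIA")
print(endpoint["DataEndpoint"])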

Amazon QuickSight

Amazon QuickSight is a fast business analytics service to build visualizations, perform ad hoc analysis, and quickly get business insights from your data. Amazon QuickSight seamlessly discovers AWS data sources, enables organizations to scale to hundreds of thousands of users, and delivers fast and responsive query performance by using the Amazon QuickSight Super-fast, Parallel, In-Memory, Calculation Engine (SPICE). Amazon QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people who you work with, wherever they are. Amazon QuickSight connects to your data in the cloud and combines data from many different sources. In a single data dashboard, QuickSight can include AWS data, third-party data, big data, spreadsheet data, SaaS data, B2B data, and more. As a fully managed cloud-based service, Amazon QuickSight provides enterprise-grade security, global availability, and built-in redundancy. It also provides the user-management tools that you need to scale from 10 users to 10,000, all with no infrastructure to deploy or manage. QuickSight gives decision-makers the opportunity to explore and interpret information in an interactive visual environment. They have secure access to dashboards from any device on your network and from mobile devices. To learn more about the major components and processes of Amazon QuickSight and the typical workflow for creating data visualizations, see the following sections. Get started today to unlock the potential of your data and make the best decisions that you can. Topics Why QuickSight? Starting work with QuickSight Why QuickSight? Every day, the people in your organization make decisions that affect your business. When they have the right information at the right time, they can make the choices that move your company in the right direction. Here are some of the benefits of using Amazon QuickSight for analytics, data visualization, and reporting: The in-memory engine, called SPICE, responds with blazing speed. No upfront costs for licenses and a low total cost of ownership (TCO). Collaborative analytics with no need to install an application. Combine a variety of data into one analysis. Publish and share your analysis as a dashboard. Control features available in a dashboard. No need to manage granular database permissions—dashboard viewers can see only what you share. For advanced users, QuickSight Enterprise edition offers even more features: Saves you time and money with automated and customizable data insights, powered by machine learning (ML). This enables your organization to do the following, without requiring any knowledge of machine learning: Automatically make reliable forecasts. Automatically identify outliers. Find hidden trends. Act on key business drivers. Translate data into easy-to-read narratives, like headline tiles for your dashboard. Provides extra Enterprise security features, including the following: Federated users, groups, and single sign-on (SSO) with AWS Identity and Access Management (IAM) Federation, SAML, OpenID Connect, or AWS Directory Service for Microsoft Active Directory. Granular permissions for AWS data access. Row level security. Highly secure data encryption at rest. Access to AWS data and on-premises data in Amazon Virtual Private Cloud Offers pay-per-session pricing for the users that you place in the "reader" security role—readers are dashboard subscribers, people who view reports but don't create them. 
Empowers you to make QuickSight part of your own websites and applications by deploying embedded console analytics and dashboard sessions. Makes our business your business with multitenancy features for value-added resellers (VARs) of analytical services. Enables you to programmatically script dashboard templates that can be transferred to other AWS accounts. Simplifies access management and organization with shared and personal folders for analytical assets. Enables larger data import quotas for SPICE data ingestion and more frequently scheduled data refreshes.

Savings Plans

Savings Plans is a flexible pricing model that helps you save a significant percentage on Amazon EC2 and Fargate usage. Savings Plans provide low prices on EC2 and Fargate in exchange for a commitment to a consistent amount of usage for a one-year or three-year term.

AWS App Mesh

AWS App Mesh makes it easy to monitor and control microservices that are running on AWS.

AWS Application Cost Profiler

AWS Application Cost Profiler is an AWS service that provides granular cost insights for your multi-tenant applications.

AWS Batch Documentation

AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify the Docker container images used to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs. For more information, see Jobs. Job Definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual jobs. For more information, see Job definitions. Job Queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. Compute Environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify the desired compute type (Fargate or EC2) at several levels of detail.
You can set up compute environments that use a particular type of EC2 instance and a particular model, such as c5.2xlarge or m5.12xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute resources as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environment.
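To show how these components fit together, the following minimal sketch uses the AWS SDK for Python (Boto3) to register a container-based job definition and submit a job to an existing job queue; the queue name, container image, and resource values are placeholder assumptions.

import boto3

# Minimal sketch: register a job definition and submit a job to an existing queue.
batch = boto3.client("batch")

job_def = batch.register_job_definition(
    jobDefinitionName="hello-batch",  # placeholder
    type="container",
    containerProperties={
        "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",  # placeholder image
        "command": ["echo", "hello from AWS Batch"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},
        ],
    },
)

batch.submit_job(
    jobName="hello-batch-job",
    jobQueue="my-job-queue",  # placeholder queue name
    jobDefinition=job_def["jobDefinitionArn"],
)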

AWS Control Tower

AWS Control Tower is a service that enables you to enforce and manage governance rules for security, operations, and compliance at scale across all your organizations and accounts in the AWS Cloud. AWS Control Tower offers a straightforward way to set up and govern an AWS multi-account environment, following prescriptive best practices. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center (successor to AWS Single Sign-On), to build a landing zone in less than an hour. Resources are set up and managed on your behalf. AWS Control Tower orchestration extends the capabilities of AWS Organizations. To help keep your organizations and accounts from drift, which is divergence from best practices, AWS Control Tower applies preventive and detective controls (guardrails). For example, you can use guardrails to help ensure that security logs and necessary cross-account access permissions are created, and not altered. If you are hosting more than a handful of accounts, it's beneficial to have an orchestration layer that facilitates account deployment and account governance. You can adopt AWS Control Tower as your primary way to provision accounts and infrastructure. With AWS Control Tower, you can more easily adhere to corporate standards, meet regulatory requirements, and follow best practices. AWS Control Tower enables end users on your distributed teams to provision new AWS accounts quickly, by means of configurable account templates in Account Factory. Meanwhile, your central cloud administrators can monitor that all accounts are aligned with established, company-wide compliance policies. In short, AWS Control Tower offers the easiest way to set up and govern a secure, compliant, multi-account AWS environment based on best practices established by working with thousands of enterprises. For more information about working with AWS Control Tower and the best practices outlined in the AWS multi-account strategy, see AWS multi-account strategy: Best practices guidance. Features AWS Control Tower has the following features: Landing zone - A landing zone is a well-architected, multi-account environment that's based on security and compliance best practices. It is the enterprise-wide container that holds all of your organizational units (OUs), accounts, users, and other resources that you want to be subject to compliance regulation. A landing zone can scale to fit the needs of an enterprise of any size. Guardrails - A guardrail is a high-level rule that provides ongoing governance for your overall AWS environment. It's expressed in plain language. Two kinds of guardrails exist: preventive and detective. Three categories of guidance apply to the two kinds of guardrails: mandatory, strongly recommended, or elective. For more information about guardrails, see How Guardrails Work. Account Factory - An Account Factory is a configurable account template that helps to standardize the provisioning of new accounts with pre-approved account configurations. AWS Control Tower offers a built-in Account Factory that helps automate the account provisioning workflow in your organization. For more information, see Provision and manage accounts with Account Factory. Dashboard - The dashboard offers continuous oversight of your landing zone to your team of central cloud administrators.
Use the dashboard to see provisioned accounts across your enterprise, guardrails enabled for policy enforcement, guardrails enabled for continuous detection of policy non-conformance, and noncompliant resources organized by accounts and OUs. How AWS Control Tower interacts with other AWS services AWS Control Tower is built on top of trusted and reliable AWS services including AWS Service Catalog, AWS IAM Identity Center (successor to AWS Single Sign-On), and AWS Organizations. For more information, see Integrated services. You can incorporate AWS Control Tower with other AWS services into a solution that helps you migrate your existing workloads to AWS. For more information, see How to take advantage of AWS Control Tower and CloudEndure to migrate workloads to AWS. Configuration, Governance, and Extensibility Automated account configuration: AWS Control Tower automates account deployment and enrollment by means of an Account Factory (or "vending machine"), which is built as an abstraction on top of provisioned products in AWS Service Catalog. The Account Factory can create and enroll AWS accounts, and it automates the process of applying guardrails and policies to those accounts. Centralized governance: By employing the capabilities of AWS Organizations, AWS Control Tower sets up a framework that ensures consistent compliance and governance across your multi-account environment. The AWS Organizations service provides essential capabilities for managing a multi-account environment, including central governance and management of accounts, account creation from APIs, and service control policies (SCPs). Extensibility: You can build or extend your own AWS Control Tower environment by working directly in AWS Organizations, as well as in the AWS Control Tower console. You can see your changes reflected in AWS Control Tower after you register your existing organizations and enroll your existing accounts into AWS Control Tower. You can update your AWS Control Tower landing zone to reflect your changes. If your workloads require further advanced capabilities, you can leverage other AWS partner solutions along with AWS Control Tower.

Amazon Transcribe

Amazon Transcribe provides transcription services for your audio files and audio streams. It uses advanced machine learning technologies to recognize spoken words and transcribe them into text.
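For example, the following minimal sketch uses the AWS SDK for Python (Boto3) to start an asynchronous transcription job for an audio file stored in Amazon S3 and then check its status; the job name, bucket, and file name are placeholder assumptions.

import boto3

# Minimal sketch: transcribe an MP3 file stored in S3.
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="meeting-2024-01-15",                   # placeholder job name
    Media={"MediaFileUri": "s3://my-audio-bucket/meeting.mp3"},  # placeholder location
    MediaFormat="mp3",
    LanguageCode="en-US",
)
job = transcribe.get_transcription_job(TranscriptionJobName="meeting-2024-01-15")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])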

AWS Resource Management

Learn how to manage your AWS resources. Grouping your AWS resources Group your resources to manage them as a single unit instead of individually. You can create resource groups that consist of all of the resources that are part of an AWS CloudFormation stack. You can also create a group of all resources that are tagged with specific keys and values. You can use resource groups to organize your AWS resources. AWS Resource Groups is the service that lets you manage and automate tasks on large numbers of resources at one time. This guide shows you how to create and manage resource groups in AWS Resource Groups. The tasks that you can perform on a resource vary based on the AWS service you're using. For a list of the services that support AWS Resource Groups and a brief description of what each service allows you to do with a resource group, see AWS services that work with AWS Resource Groups. You can access Resource Groups through any of the following entry points. In the AWS Management Console, in the top navigation bar, choose Services. Then, under Management & Governance, choose Resource Groups & Tag Editor. Direct link: AWS Resource Groups console In the AWS Systems Manager console, from the left navigation pane entry for Resource Groups. By using the Resource Groups API, in AWS CLI commands or AWS SDK programming languages. To work with resource groups on the AWS Management Console home Sign in to the AWS Management Console. On the navigation bar, choose Services. Under Management & Governance, choose Resource Groups & Tag Editor. In the navigation pane on the left, choose Saved Resource Groups to work with an existing group, or Create a Group to create a new one. What are resource groups? In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task. If you manage large numbers of related resources, such as EC2 instances that make up an application layer, you likely need to perform bulk actions on these resources at one time. Examples of bulk actions include: Applying updates or security patches. Upgrading applications. Opening or closing ports to network traffic. Collecting specific log and monitoring data from your fleet of instances. A resource group is a collection of AWS resources that are all in the same AWS Region, and that match the criteria specified in the group's query. In Resource Groups, there are two types of queries you can use to build a group. Both query types include resources that are specified in the format AWS::service::resource. Tag-based Tag-based queries include lists of resources and tags. Tags are keys that help identify and sort your resources within your organization. Optionally, tags include values for keys. Important Do not store personally identifiable information (PII) or other confidential or sensitive information in tags. We use tags to provide you with billing and administration services. Tags are not intended to be used for private or sensitive data. AWS CloudFormation stack-based In an AWS CloudFormation stack-based query, you choose an AWS CloudFormation stack in your account in the current region, and then choose resource types within the stack that you want to be in the group. You can base your query on only one AWS CloudFormation stack. 
Resource groups can be nested; a resource group can contain existing resource groups in the same region. Use cases for resource groups By default, the AWS Management Console is organized by AWS service. But with Resource Groups, you can create a custom console that organizes and consolidates information based on criteria specified in tags, or the resources in an AWS CloudFormation stack. The following list describes some of the cases in which resource grouping can help organize your resources. An application that has different phases, such as development, staging, and production. Projects managed by multiple departments or individuals. A set of AWS resources that you use together for a common project or that you want to manage or monitor as a group. A set of resources related to applications that run on a specific platform, such as Android or iOS. For example, you are developing a web application, and you are maintaining separate sets of resources for your alpha, beta, and release stages. Each version runs on Amazon EC2 with an Amazon Elastic Block Store storage volume. You use Elastic Load Balancing to manage traffic and Route 53 to manage your domain. Without Resource Groups, you might have to access multiple consoles just to check the status of your services or modify the settings for one version of your application. With Resource Groups, you use a single page to view and manage your resources. For example, let's say you use the tool to create a resource group for each version—alpha, beta, and release—of your application. To check your resources for the alpha version of your application, open your resource group. Then view the consolidated information on your resource group page. To modify a specific resource, choose the resource's links on your resource group page to access the service console that has the settings that you need. AWS Resource Groups and permissions Resource Groups feature permissions are at the account level. As long as users who are sharing your account have the correct IAM permissions, they can work with resource groups that you create. For information about creating IAM users, see Creating an IAM User in the IAM User Guide. Tags are properties of a resource, so they are shared across your entire account. Users in a department or specialized group can draw from a common vocabulary (tags) to create resource groups that are meaningful to their roles and responsibilities. Having a common pool of tags also means that when users share a resource group, they don't have to worry about missing or conflicting tag information. AWS Resource Groups resources In Resource Groups, the only available resource is a group. Groups have unique Amazon Resource Names (ARNs) associated with them. For more information about ARNs, see Amazon Resource Names (ARN) and AWS Service Namespaces in the Amazon Web Services General Reference. A resource group ARN has the following format: arn:aws:resource-groups:region:account:group/group-name. How tagging works Tags are key and value pairs that act as metadata for organizing your AWS resources. With most AWS resources, you have the option of adding tags when you create the resource, whether it's an Amazon EC2 instance, an Amazon S3 bucket, or other resource. However, you can also add tags to multiple supported resources at once by using Tag Editor. You build a query for resources of various types, and then add, remove, or replace tags for the resources in your search results.
Tag-based queries assign an AND operator to tags, so any resource that matches the specified resource types and all specified tags is returned by the query. Important Do not store personally identifiable information (PII) or other confidential or sensitive information in tags. We use tags to provide you with billing and administration services. Tags are not intended to be used for private or sensitive data.
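To illustrate a tag-based query, the following minimal sketch uses the AWS SDK for Python (Boto3) to create a resource group that gathers the EC2 instances and EBS volumes tagged Stage=alpha, in line with the alpha/beta/release example above; the group name and tag values are placeholder assumptions.

import boto3
import json

# Minimal sketch: create a tag-based resource group and list its matching resources.
rg = boto3.client("resource-groups")

query = {
    "ResourceTypeFilters": ["AWS::EC2::Instance", "AWS::EC2::Volume"],
    "TagFilters": [{"Key": "Stage", "Values": ["alpha"]}],  # placeholder tag
}

rg.create_group(
    Name="webapp-alpha",  # placeholder group name
    ResourceQuery={"Type": "TAG_FILTERS_1_0", "Query": json.dumps(query)},
)

print(rg.list_group_resources(Group="webapp-alpha"))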

AWS Proton

AWS Proton creates and manages standardized infrastructure and deployment tooling for developers and their serverless and container-based applications.

Service Quotas

Service Quotas is a service for viewing and managing your quotas easily and at scale as your AWS workloads grow. Quotas, also referred to as limits, are the maximum number of resources that you can create in an AWS account.

AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access. AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you do not use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. IAM features IAM gives you the following features: Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. For other users, you can allow read-only access to just some S3 buckets, or permission to administer just some EC2 instances, or to access your billing information but nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor authentication (MFA) You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your account, but also a code from a specially configured device. If you already use a FIDO security key with other services and it has an AWS supported configuration, you can also use it as an MFA device with AWS. For more information, see Supported configurations for using FIDO security keys. Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to get temporary access to your AWS account. Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. PCI DSS Compliance IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Integrated with many AWS services For a list of AWS services that work with IAM, see AWS services that work with IAM. Eventually Consistent IAM, like many other AWS services, is eventually consistent.
IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. Free to use AWS Identity and Access Management (IAM) and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Accessing IAM You can work with AWS Identity and Access Management in any of the following ways. AWS Management Console The console is a browser-based interface to manage IAM and AWS resources. For more information about accessing IAM through the console, see Signing in to the AWS Management Console as an IAM user or root user. For a tutorial that guides you through using the console, see Creating your first IAM admin user and user group. AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for Windows PowerShell User Guide. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. IAM HTTPS API You can access IAM and AWS programmatically by using the IAM HTTPS API, which lets you issue HTTPS requests directly to the service. When you use the HTTPS API, you must include code to digitally sign requests using your credentials. For more information, see Calling the IAM API using HTTP query requests and the IAM API Reference.
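As an illustration of programmatic access, and of allowing for eventual consistency, the following minimal sketch uses the AWS SDK for Python (Boto3) to create an IAM user, wait until the user is visible, and attach an AWS managed read-only policy; the user name is a placeholder assumption.

import boto3

# Minimal sketch: create a user, wait for it to propagate, then attach a managed policy.
iam = boto3.client("iam")

iam.create_user(UserName="example-app-user")  # placeholder user name

# IAM is eventually consistent; wait until the new user is visible before depending on it.
iam.get_waiter("user_exists").wait(UserName="example-app-user")

iam.attach_user_policy(
    UserName="example-app-user",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)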

AWS License Manager

AWS License Manager streamlines the process of bringing software vendor licenses to the AWS Cloud. As you build out cloud infrastructure on AWS, you can save costs by repurposing your existing license inventory for use with cloud resources. License Manager reduces the risk of licensing overages and penalties with inventory tracking that is tied directly to AWS resources. AWS License Manager is a service that makes it easier for you to manage your software licenses from software vendors (for example, Microsoft, SAP, Oracle, and IBM) centrally across AWS and your on-premises environments. This provides control and visibility into the usage of your licenses, enabling you to limit licensing overages and reduce the risk of non-compliance and misreporting. As you build out your cloud infrastructure on AWS, you can save costs by using Bring Your Own License model (BYOL) opportunities. That is, you can re-purpose your existing license inventory for use with your cloud resources. License Manager reduces the risk of licensing overages and penalties with inventory tracking that is tied directly into AWS services. With rule-based controls on the consumption of licenses, administrators can set hard or soft limits on new and existing cloud deployments. Based on these limits, License Manager helps stop non-compliant server usage before it happens. License Manager's built-in dashboards provide ongoing visibility into license usage and assistance with vendor audits. License Manager supports tracking any software that is licensed based on virtual cores (vCPUs), physical cores, sockets, or number of machines. This includes a variety of software products from Microsoft, IBM, SAP, Oracle, and other vendors. With AWS License Manager, you can centrally track licenses and enforce limits across multiple Regions, by maintaining a count of all the checked out entitlements. License Manager also tracks the end-user identity and the underlying resource identifier, if available, associated with each check out, along with the check-out time. This time-series data can be tracked to the ISV through CloudWatch metrics and events. ISVs can use this data for analytics, auditing, and other similar purposes. AWS License Manager is integrated with AWS Marketplace and AWS Data Exchange, and with the following AWS services: AWS Identity and Access Management (IAM), AWS Organizations, Service Quotas, AWS CloudFormation, AWS resource tagging, and AWS X-Ray. Managed Entitlements With License Manager, a license administrator can distribute, activate, and track software licenses across accounts and throughout an organization. Independent software vendors (ISVs) can use AWS License Manager to manage and distribute software licenses and data to end users by means of managed entitlements. As an issuer, you can track the usage of your seller-issued licenses centrally using the License Manager dashboard. ISVs selling through AWS Marketplace benefit from automatic license creation and distribution as a part of the transaction workflow. ISVs can also use License Manager to create license keys and activate licenses for customers without an AWS account. License Manager uses open, secure, industry standards for representing licenses and allows customers to cryptographically verify their authenticity. License Manager supports a variety of different licensing models including perpetual licenses, floating licenses, subscription licenses, and usage-based licenses. 
If you have licenses that must be node-locked, License Manager provides mechanisms to consume your licenses in that way. You can create licenses in AWS License Manager and distribute them to end users using an IAM identity or through digitally signed tokens generated by AWS License Manager. End-users using AWS can further redistribute the license entitlements to AWS identities in their respective organizations. End users with distributed entitlements can check out and check in the required entitlements from that license through your software integration with AWS License Manager. Each license check out specifies the entitlements, the associated quantity, and check-out time period such as checking out 10 admin-users for 1 hour. This check out can be performed based on the underlying IAM identity for the distributed license or based on the long-lived tokens generated by AWS License Manager through the AWS License Manager service.
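As a rough illustration of the rule-based controls described above, the following Boto3 sketch creates a license configuration that counts vCPUs and enforces a hard limit so that launches exceeding the count are blocked. The name, description, and count are hypothetical.

import boto3

lm = boto3.client("license-manager")

response = lm.create_license_configuration(
    Name="example-database-license",        # hypothetical name
    Description="vCPU-based tracking with a hard limit",
    LicenseCountingType="vCPU",             # other options: Instance, Core, Socket
    LicenseCount=96,
    LicenseCountHardLimit=True,             # stop launches that would exceed the count
)
print(response["LicenseConfigurationArn"])

The resulting license configuration can then be associated with AMIs or launch templates so that License Manager tracks consumption automatically.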

AWS Network Firewall

AWS Network Firewall is a stateful, managed, network firewall and intrusion detection and prevention service for your virtual private cloud (VPC) that you created in Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect. Network Firewall uses the open source intrusion prevention system (IPS), Suricata, for stateful inspection. Network Firewall supports Suricata compatible rules. For more information, see Working with stateful rule groups in AWS Network Firewall. You can use Network Firewall to monitor and protect your Amazon VPC traffic in a number of ways, including the following: Pass traffic through only from known AWS service domains or IP address endpoints, such as Amazon S3. Use custom lists of known bad domains to limit the types of domain names that your applications can access. Perform deep packet inspection on traffic entering or leaving your VPC. Use stateful protocol detection to filter protocols like HTTPS, independent of the port used. To enable Network Firewall for your VPC, you perform steps in both Amazon VPC and in Network Firewall. For information about managing your Amazon Virtual Private Cloud VPC, see the Amazon Virtual Private Cloud User Guide. For more information about how Network Firewall works, see How AWS Network Firewall works. Network Firewall is supported by AWS Firewall Manager. You can use Firewall Manager to centrally configure and manage your firewalls across your accounts and applications in AWS Organizations. You can manage firewalls for multiple accounts using a single account in Firewall Manager. AWS Network Firewall AWS resources Network Firewall manages the following AWS resource types: Firewall - Provides traffic filtering logic for the subnets in a VPC. FirewallPolicy - Defines rules and other settings for a firewall to use to filter incoming and outgoing traffic in a VPC. RuleGroup - Defines a set of rules to match against VPC traffic, and the actions to take when Network Firewall finds a match. Network Firewall uses stateless and stateful rule group types, each with its own Amazon Resource Name (ARN). AWS Network Firewall concepts AWS Network Firewall is a firewall service for Amazon Virtual Private Cloud (Amazon VPC). For information about managing your Amazon Virtual Private Cloud VPC, see the Amazon Virtual Private Cloud User Guide. The following are the key concepts for Network Firewall: Virtual private cloud (VPC) - A virtual network dedicated to your AWS account. Internet gateway - A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet. Subnet - A range of IP addresses in your VPC. Network Firewall creates firewall endpoints in subnets inside your VPC, to filter network traffic. In a VPC architecture that uses Network Firewall, the firewall endpoints sit between your protected subnets and locations outside your VPC. Firewall subnet - A subnet that you've designated for exclusive use by Network Firewall for a firewall endpoint. A firewall endpoint can't filter traffic coming into or going out of the subnet in which it resides, so don't use your firewall subnets for anything other than Network Firewall.
Route table - A set of rules, called routes, that are used to determine where network traffic is directed. You modify your VPC route tables in Amazon VPC to direct traffic through your firewalls for filtering. Network Firewall firewall - An AWS resource that provides traffic filtering logic for the subnets in a VPC. Network Firewall firewall policy - An AWS resource that defines rules and other settings for a firewall to use to filter incoming and outgoing traffic in a VPC. Network Firewall rule group - An AWS resource that defines a set of rules to match against VPC traffic, and the actions to take when Network Firewall finds a match. Stateless rules - Criteria for inspecting a single network traffic packet, without the context of the other packets in the traffic flow, the direction of flow, or any other information that's not provided by the packet itself. Stateful rules - Criteria for inspecting network traffic packets in the context of their traffic flow. Accessing AWS Network Firewall You can create, access, and manage your firewall, firewall policy, and rule group resources in Network Firewall using any of the following methods: AWS Management Console - Provides a web interface for managing the service. The procedures throughout this guide explain how to use the AWS Management Console to perform tasks for Network Firewall. You can access the AWS Management Console at https://aws.amazon.com/console. To access Network Firewall using the console: https://<region>.console.aws.amazon.com/network-firewall/home AWS Command Line Interface (AWS CLI) - Provides commands for a broad set of AWS services, including Network Firewall. The CLI is supported on Windows, macOS, and Linux. For more information, see the AWS Command Line Interface User Guide. To access Network Firewall using the CLI endpoint: aws network-firewall AWS Network Firewall API - Provides a RESTful API. The REST API requires you to handle connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS APIs and the AWS Network Firewall API Reference. To access Network Firewall, use the following REST API endpoint: https://network-firewall.<region>.amazonaws.com AWS SDKs - Provide language-specific APIs. If you're using a programming language that AWS provides an SDK for, you can use the SDK to access AWS Network Firewall. The SDKs handle many of the connection details, such as calculating signatures, handling request retries, and handling errors. They integrate easily with your development environment, and provide easy access to Network Firewall commands. For more information, see Tools for Amazon Web Services. AWS CloudFormation - Helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want and AWS CloudFormation takes care of provisioning and configuring those resources for you. For more information, see Network Firewall resource type reference in the AWS CloudFormation User Guide. AWS Tools for Windows PowerShell - Let developers and administrators manage their AWS services and resources in the PowerShell scripting environment. For more information, see the AWS Tools for Windows PowerShell User Guide. 
Regions and endpoints for AWS Network Firewall To reduce data latency in your applications, AWS Network Firewall offers a regional endpoint to make your requests: https://network-firewall.<region>.amazonaws.com To view the complete list of AWS Regions where Network Firewall is available, see Service endpoints and quotas in the AWS General Reference. AWS Network Firewall quotas AWS Network Firewall defines maximum settings and other quotas on the number of Network Firewall resources that you can use. You can request an increase for some of these quotas.
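The firewall, firewall policy, and rule group resources described above can also be created programmatically. The following Boto3 sketch assumes an existing VPC and a dedicated firewall subnet (the IDs shown are placeholders) and uses a single Suricata-compatible stateful rule; it is an illustration under those assumptions, not a production configuration.

import boto3

nfw = boto3.client("network-firewall")

# Stateful rule group with one Suricata-compatible rule (hypothetical names).
rule_group = nfw.create_rule_group(
    RuleGroupName="block-outbound-telnet",
    Type="STATEFUL",
    Capacity=10,
    Rules='drop tcp any any -> any 23 (msg:"Block outbound Telnet"; sid:1000001; rev:1;)',
)

policy = nfw.create_firewall_policy(
    FirewallPolicyName="example-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": rule_group["RuleGroupResponse"]["RuleGroupArn"]}
        ],
    },
)

nfw.create_firewall(
    FirewallName="example-firewall",
    FirewallPolicyArn=policy["FirewallPolicyResponse"]["FirewallPolicyArn"],
    VpcId="vpc-0123456789abcdef0",                               # placeholder VPC
    SubnetMappings=[{"SubnetId": "subnet-0123456789abcdef0"}],   # dedicated firewall subnet
)

After the firewall endpoint becomes available, you still need to update the VPC route tables so traffic is directed through it, as described in the concepts above.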

Amazon Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. Amazon Elastic Container Service (Amazon ECS) is a highly scalable and fast container management service. You can use it to run, stop, and manage containers on a cluster. With Amazon ECS, your containers are defined in a task definition that you use to run an individual task or task within a service. In this context, a service is a configuration that you can use to run and maintain a specified number of tasks simultaneously in a cluster. You can run your tasks and services on a serverless infrastructure that's managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks and services on a cluster of Amazon EC2 instances that you manage. Amazon ECS provides the following features: A serverless option with AWS Fargate. With AWS Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. Fargate handles the infrastructure management aspects of your workload for you. You can schedule the placement of your containers across your cluster based on your resource needs, isolation policies, and availability requirements. Integration with AWS Identity and Access Management (IAM). You can assign granular permissions for each of your containers. This allows for a high level of isolation when building your applications. In other words, you can launch your containers with the security and compliance levels that you've come to expect from AWS. AWS managed container orchestration. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. This also means that you don't need to manage control plane, nodes, or add-ons. It's integrated with both AWS and third-party tools, such as Amazon Elastic Container Registry and Docker. This integration makes it easier for teams to focus on building the applications, not the environment. Continuous integration and continuous deployment (CI/CD). This is a common process for microservice architectures that are based on Docker containers. You can create a CI/CD pipeline that takes the following actions: Monitors changes to a source code repository Builds a new Docker image from that source Pushes the image to an image repository such as Amazon ECR or Docker Hub Updates your Amazon ECS services to use the new image in your application Support for service discovery. This is a key component of most distributed systems and service-oriented architectures. With service discovery, your microservice components are automatically discovered as they're created and terminated on a given infrastructure. Support for sending your container instance log information to CloudWatch Logs. After you send this information to Amazon CloudWatch, you can view the logs from your container instances in one convenient location. This prevents your container logs from taking up disk space on your container instances. The AWS container services team maintains a public roadmap on GitHub. The roadmap contains information about what the teams are working on and enables AWS customers to provide direct feedback. For more information, see AWS Containers Roadmap on the GitHub website. Launch types There are two models that you can use to run your containers: Fargate launch type - This is a serverless pay-as-you-go option. 
You can run containers without needing to manage your infrastructure. EC2 launch type - Configure and deploy EC2 instances in your cluster to run your containers. The Fargate launch type is suitable for the following workloads: Large workloads that need to be optimized for low overhead Small workloads that have occasional burst Tiny workloads Batch workloads The EC2 launch type is suitable for the following workloads: Workloads that require consistently high CPU core and memory usage Large workloads that need to be optimized for price Your applications need to access persistent storage You must directly manage your infrastructure Access Amazon ECS You can create, access, and manage your Amazon ECS resources using any of the following interfaces: AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. AWS Copilot — Provides an open-source tool for developers to build, release, and operate production ready containerized applications on Amazon ECS. For more information, see AWS Copilot on the GitHub website. Amazon ECS CLI — Provides a command line interface for you to run your applications on Amazon ECS and AWS Fargate using the Docker Compose file format. You can quickly provision resources, push and pull images using Amazon Elastic Container Registry, and monitor running applications on Amazon ECS or Fargate. You can also test containers that are running locally along with containers in the Cloud within the CLI. For more information, see Amazon ECS CLI on the GitHub website. AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. For more information, see Getting started with Amazon ECS using the AWS CDK. Pricing Amazon ECS pricing is dependent on whether you use AWS Fargate or Amazon EC2 infrastructure to host your containerized workloads. When using Amazon ECS on AWS Outposts, the pricing follows the same model that's used when you use Amazon EC2 directly. For more information, see Amazon ECS Pricing. Amazon ECS and Fargate also offer Savings Plans that provide significant savings based on your AWS usage. For more information, see the Savings Plans User Guide. To view your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide additional details about your bill. To learn more about AWS account billing, see AWS Account Billing. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. Trusted Advisor is a service that you can use to help optimize the costs, security, and performance of your AWS environment. For more information about Trusted Advisor, see AWS Trusted Advisor.
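As a minimal sketch of the task definition, cluster, and service concepts above, the following Boto3 example registers a small Fargate task definition and runs it as a service. The cluster name, image, subnet, and security group are placeholders; a real deployment typically also needs a task execution role for private registries or log delivery.

import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(clusterName="example-cluster")  # hypothetical cluster name

task_def = ecs.register_task_definition(
    family="example-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "web",
            "image": "public.ecr.aws/docker/library/nginx:latest",  # example public image
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)

ecs.create_service(
    cluster="example-cluster",
    serviceName="example-web-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],        # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],     # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)

With the Fargate launch type used here, Amazon ECS maintains the desired count of tasks without any EC2 instances for you to manage.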

Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Amazon EKS: Runs and scales the Kubernetes control plane across multiple AWS Availability Zones to ensure high availability. Automatically scales control plane instances based on load, detects and replaces unhealthy control plane instances, and it provides automated version updates and patching for them. Is integrated with many AWS services to provide scalability and security for your applications, including the following capabilities: Amazon ECR for container images Elastic Load Balancing for load distribution IAM for authentication Amazon VPC for isolation Runs up-to-date versions of the open-source Kubernetes software, so you can use all of the existing plugins and tooling from the Kubernetes community. Applications that are running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, no matter whether they're running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification. Amazon EKS control plane architecture Amazon EKS runs a single tenant Kubernetes control plane for each cluster. The control plane infrastructure isn't shared across clusters or AWS accounts. The control plane consists of at least two API server instances and three etcd instances that run across three Availability Zones within an AWS Region. Amazon EKS: Actively monitors the load on control plane instances and automatically scales them to ensure high performance. Automatically detects and replaces unhealthy control plane instances, restarting them across the Availability Zones within the AWS Region as needed. Leverages the architecture of AWS Regions in order to maintain high availability. Because of this, Amazon EKS is able to offer an SLA for API server endpoint availability. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster. Control plane components for a cluster can't view or receive communication from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies. This secure and highly available configuration makes Amazon EKS reliable and recommended for production workloads. How does Amazon EKS work? Getting started with Amazon EKS is easy: Create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of the AWS SDKs. Launch managed or self-managed Amazon EC2 nodes, or deploy your workloads to AWS Fargate. When your cluster is ready, you can configure your favorite Kubernetes tools, such as kubectl, to communicate with your cluster. Deploy and manage workloads on your Amazon EKS cluster the same way that you would with any other Kubernetes environment. You can also view information about your workloads using the AWS Management Console. To create your first cluster and its associated resources, see Getting started with Amazon EKS. To learn about other Kubernetes deployment options, see Deployment options. 
Pricing An Amazon EKS cluster consists of a control plane and the Amazon EC2 or AWS Fargate compute that you run pods on. For more information about pricing for the control plane, see Amazon EKS pricing. Both Amazon EC2 and Fargate provide: On-Demand Instances - Pay for the instances that you use by the second, with no long-term commitments or upfront payments. For more information, see Amazon EC2 On-Demand Pricing and AWS Fargate Pricing. Savings Plans - You can reduce your costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. For more information, see Pricing with Savings Plans.
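The "create a cluster, then point your tools at it" flow described above can be sketched with Boto3 as follows. The IAM role ARN and subnet IDs are placeholders, the cluster role must already have the required EKS permissions, and node groups or Fargate profiles would still be added separately.

import boto3

eks = boto3.client("eks")

eks.create_cluster(
    name="example-cluster",                                       # hypothetical name
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",      # placeholder role
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)

# Cluster creation takes several minutes; wait until the control plane is active.
eks.get_waiter("cluster_active").wait(name="example-cluster")

# Once active, configure kubectl, for example with:
#   aws eks update-kubeconfig --name example-cluster
endpoint = eks.describe_cluster(name="example-cluster")["cluster"]["endpoint"]
print(endpoint)

From that point on, you deploy workloads with standard Kubernetes tooling, exactly as the section above describes.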

Amazon Managed Service for Prometheus

Amazon Managed Service for Prometheus provides highly available, secure, and managed monitoring for your containers. It automatically scales as your ingestion and query needs grow, and gives you access to remote write metrics from existing Prometheus servers and to query metrics using PromQL.
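As a brief, hedged sketch of how this is typically used from the AWS SDK for Python (Boto3), assuming the amp client's create_workspace and describe_workspace operations and an illustrative alias: you create a workspace, then point existing Prometheus servers at its remote write endpoint and send PromQL queries to its query endpoint.

import boto3

amp = boto3.client("amp")

workspace = amp.create_workspace(alias="example-metrics")          # hypothetical alias
details = amp.describe_workspace(workspaceId=workspace["workspaceId"])

# Existing Prometheus servers remote-write to <prometheusEndpoint>api/v1/remote_write,
# and PromQL queries go to <prometheusEndpoint>api/v1/query.
print(details["workspace"]["prometheusEndpoint"])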

Amazon Lex

Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text. With Amazon Lex, the same deep learning engine that powers Amazon Alexa is now available to any developer, enabling you to build sophisticated, natural language chatbots into your new and existing applications. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) to enable you to build highly engaging user experiences with lifelike, conversational interactions and create new categories of products. Amazon Lex V2 is an AWS service for building conversational interfaces for applications using voice and text. Amazon Lex V2 provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) so you can build highly engaging user experiences with lifelike, conversational interactions, and create new categories of products. Amazon Lex V2 enables any developer to build conversational bots quickly. With Amazon Lex V2, no deep learning expertise is necessary—to create a bot, you specify the basic conversation flow in the Amazon Lex V2 console. Amazon Lex V2 manages the dialog and dynamically adjusts the responses in the conversation. Using the console, you can build, test, and publish your text or voice chatbot. You can then add the conversational interfaces to bots on mobile devices, web applications, and chat platforms (for example, Facebook Messenger). Amazon Lex V2 provides integration with AWS Lambda, and you can integrate with many other services on the AWS platform, including Amazon Connect, Amazon Comprehend, and Amazon Kendra. Integration with Lambda provides bots access to pre-built serverless enterprise connectors to link to data in SaaS applications such as Salesforce. For bots created after August 17, 2022, you can use conditional branching to control the conversation flow with your bot. With conditional branching you can create complex conversations without needing to write Lambda code. Amazon Lex V2 provides the following benefits: Simplicity - Amazon Lex V2 guides you through using the console to create your own bot in minutes. You supply a few example phrases, and Amazon Lex V2 builds a complete natural language model through which the bot can interact using voice and text to ask questions, get answers, and complete sophisticated tasks. Democratized deep learning technologies - Amazon Lex V2 provides ASR and NLU technologies to create a Speech Language Understanding (SLU) system. Through SLU, Amazon Lex V2 takes natural language speech and text input, understands the intent behind the input, and fulfills the user intent by invoking the appropriate business function. Speech recognition and natural language understanding are some of the most challenging problems to solve in computer science, requiring sophisticated deep learning algorithms to be trained on massive amounts of data and infrastructure. Amazon Lex V2 puts deep learning technologies within reach of all developers. Amazon Lex V2 bots convert incoming speech to text and understand the user intent to generate an intelligent response so you can focus on building your bots with added value for your customers and define entirely new categories of products made possible through conversational interfaces. Seamless deployment and scaling - With Amazon Lex V2, you can build, test, and deploy your bots directly from the Amazon Lex V2 console. 
Amazon Lex V2 enables you to publish your voice or text bots for use on mobile devices, web apps, and chat services (for example, Facebook Messenger). Amazon Lex V2 scales automatically. You don't need to worry about provisioning hardware and managing infrastructure to power your bot experience. Built-in integration with the AWS platform - Amazon Lex V2 operates natively with other AWS services, such as AWS Lambda and Amazon CloudWatch. You can take advantage of the power of the AWS platform for security, monitoring, user authentication, business logic, storage, and mobile app development. Cost-effectiveness - With Amazon Lex V2, there are no upfront costs or minimum fees. You are charged only for the text or speech requests that are made. The pay-as-you-go pricing and the low cost per request make the service a cost-effective way to build conversational interfaces. With the Amazon Lex V2 free tier, you can easily try Amazon Lex V2 without any initial investment.
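As a small illustration of sending user input to a published Amazon Lex V2 bot, the following Boto3 sketch calls the runtime RecognizeText operation. The bot ID, alias ID, session ID, and utterance are placeholders for a bot you have already built and published.

import boto3

lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="ABCDE12345",            # placeholder bot ID
    botAliasId="TSTALIASID",       # placeholder alias ID
    localeId="en_US",
    sessionId="user-123",          # any identifier for the conversation session
    text="I'd like to book a hotel in Chicago",
)

# Print the bot's reply messages for this turn of the conversation.
for message in response.get("messages", []):
    print(message["content"])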

Amazon Macie

Amazon Macie is a fully managed data security and data privacy service. Macie uses machine learning and pattern matching to help you discover, monitor, and protect your sensitive data in Amazon S3. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to help you discover, monitor, and protect sensitive data in your AWS environment. Macie automates the discovery of sensitive data, such as personally identifiable information (PII) and financial data, to provide you with a better understanding of the data that your organization stores in Amazon Simple Storage Service (Amazon S3). Macie also provides you with an inventory of your S3 buckets, and it automatically evaluates and monitors those buckets for security and access control. Within minutes, Macie can identify and report overly permissive or unencrypted buckets for your organization. If Macie detects sensitive data or potential issues with the security or privacy of your data, it creates detailed findings for you to review and remediate as necessary. You can review and analyze these findings directly in Macie, or monitor and process them by using other services, applications, and systems. Features of Amazon Macie Here are some of the key ways that Amazon Macie can help you discover, monitor, and protect your sensitive data in Amazon S3. Automate the discovery of sensitive data With Macie, you can automate discovery and reporting of sensitive data by creating and running sensitive data discovery jobs. A sensitive data discovery job analyzes objects in S3 buckets to determine whether they contain sensitive data. If Macie detects sensitive data in an object, it creates a sensitive data finding for you. The finding provides a detailed report of the sensitive data that Macie found. You can configure a job to run only once, for on-demand analysis and assessment, or on a recurring basis for periodic analysis, assessment, and monitoring. You can also choose various options to control the breadth and depth of a job's analysis—the S3 buckets to analyze, the sampling depth, and custom include and exclude criteria that derive from properties of S3 objects. With these scheduling and scope options, you can build and maintain a comprehensive view of the data that your organization stores in Amazon S3 and any security or compliance risks for that data. Discover a variety of sensitive data types When you create a sensitive data discovery job, you can configure the job to use built-in criteria and techniques, such as machine learning and pattern matching, to analyze objects in S3 buckets. These criteria and techniques, referred to as managed data identifiers, can detect a large and growing list of sensitive data types for many countries and regions, including multiple types of personally identifiable information (PII), financial data, and credentials data. You can also configure the job to use custom data identifiers. A custom data identifier is a set of criteria that you define to detect sensitive data—a regular expression (regex) that defines a text pattern to match and, optionally, character sequences and a proximity rule that refine the results. With this type of identifier, you can detect sensitive data that reflects your particular scenarios, intellectual property, or proprietary data, and supplement the managed data identifiers that Macie provides. To fine tune the analysis, a job can also use allow lists. 
Allow lists define specific text and text patterns that you want Macie to ignore in S3 objects—for example, the names of public representatives for your organization, public phone numbers for your organization, or sample data that your organization uses for testing. Evaluate and monitor data for security and access control When you enable Macie, Macie immediately generates and begins maintaining a complete inventory of your S3 buckets, and it begins evaluating and monitoring the buckets for security and access control. If Macie detects a potential issue with the security or privacy of a bucket, it creates a policy finding for you. In addition to specific findings, a dashboard gives you a snapshot of aggregated statistics for your buckets. This includes statistics that indicate how many of your buckets are publicly accessible, are shared with other AWS accounts, or don't encrypt objects by default. You can drill down on each statistic to review the supporting data. Macie also provides detailed information and statistics for individual buckets in your inventory. This data includes breakdowns of a bucket's public access and encryption settings, and the size and number of objects that Macie can analyze to detect sensitive data in the bucket. You can browse the inventory, or sort and filter the inventory by certain fields. When you choose a bucket, a panel displays the bucket's details. Review and analyze findings In Macie, a finding is a detailed report of sensitive data in an S3 object or a potential policy-related issue with the security or privacy of an S3 bucket. Each finding provides a severity rating, information about the affected resource, and additional details, such as when and how Macie found the issue. To review, analyze, and manage findings, you can use the Findings pages on the Amazon Macie console. These pages list your findings and provide the details of individual findings. They also provide multiple options for grouping, filtering, sorting, and suppressing findings. You can also use the Amazon Macie API to query, retrieve, and suppress findings. If you use the API, you can pass the data to another application, service, or system for deeper analysis, long-term storage, or reporting. Monitor and process findings with other services and systems To support integration with other services and systems, Macie publishes findings to Amazon EventBridge as finding events. EventBridge is a serverless event bus service that can route findings data to targets such as AWS Lambda functions and Amazon Simple Notification Service (Amazon SNS) topics. With EventBridge, you can monitor and process findings in near-real time as part of your existing security and compliance workflows. You can configure Macie to also publish findings to AWS Security Hub. Security Hub is a service that provides a comprehensive view of your security posture across your AWS environment and helps you check your environment against security industry standards and best practices. With Security Hub, you can more easily monitor and process your findings as part of a broader analysis of your organization's security posture in AWS. Centrally manage multiple Macie accounts If your AWS environment has multiple accounts, you can centrally manage Macie for accounts in your environment. You can do this in two ways, by integrating Macie with AWS Organizations or by sending membership invitations from Macie. 
In a multiple-account configuration, a designated Macie administrator can perform certain tasks and access certain Macie settings, data, and resources for accounts that are members of the same organization. Tasks include reviewing information about S3 buckets that are owned by member accounts, reviewing policy findings for those buckets, and running sensitive data discovery jobs to detect sensitive data in the buckets. If the accounts are associated through AWS Organizations, the Macie administrator can also enable Macie for member accounts in the organization. Develop and manage resources programmatically In addition to the Amazon Macie console, you can interact with Macie by using the Amazon Macie API. The Amazon Macie API gives you comprehensive, programmatic access to your Macie account and resources. To develop and manage resources with the Amazon Macie API, you can send HTTPS requests directly to Macie or use a current version of an AWS command line tool or an AWS SDK. AWS provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. Accessing Amazon Macie Amazon Macie is available in most AWS Regions. For a list of Regions where Macie is currently available, see Amazon Macie endpoints and quotas in the Amazon Web Services General Reference. To learn more about AWS Regions, see Managing AWS Regions in the Amazon Web Services General Reference. In each Region, you can work with Macie in any of the following ways. AWS Management Console The AWS Management Console is a browser-based interface that you can use to create and manage AWS resources. As part of that console, the Amazon Macie console provides access to your Macie account and resources. You can perform any Macie task by using the Macie console—review statistics and other information about your S3 buckets, run sensitive data discovery jobs, review and analyze findings, and more. AWS command line tools With AWS command line tools, you can issue commands at your system's command line to perform Macie tasks and AWS tasks. Using the command line can be faster and more convenient than using the console. The command line tools are also useful if you want to build scripts that perform tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for PowerShell, see the AWS Tools for PowerShell User Guide. AWS SDKs AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms—for example, Java, Go, Python, C++, and .NET. The SDKs provide convenient, programmatic access to Macie and other AWS services. They also handle tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about installing and using the AWS SDKs, see Tools to Build on AWS. Amazon Macie REST API The Amazon Macie REST API gives you comprehensive, programmatic access to your Macie account and resources. With this API, you can send HTTPS requests directly to Macie. However, unlike the AWS command line tools and SDKs, use of this API requires your application to handle low-level details such as generating a hash to sign a request. For information about this API, see the Amazon Macie API Reference. 
Pricing for Amazon Macie As with other AWS products, there are no contracts or minimum commitments for using Amazon Macie. Macie pricing is based on two dimensions—evaluating and monitoring S3 buckets for security and access control, and analyzing S3 objects to discover and report sensitive data in those objects. To help you understand and forecast the cost of using Macie, Macie provides estimated usage costs for your account. You can review these estimates on the Amazon Macie console and access them with the Amazon Macie API. Depending on how you use the service, you might incur additional costs for using other AWS services in combination with certain Macie features, such as retrieving bucket data from Amazon S3 and using customer managed AWS KMS keys to decrypt objects for analysis. For more information, see Amazon Macie pricing. When you enable Macie for the first time, your AWS account is automatically enrolled in the 30-day free trial of Macie. This includes individual accounts that are enabled as part of an organization in AWS Organizations. During the free trial, there's no charge for using Macie in the applicable AWS Region to evaluate and monitor your S3 data for security and access control. Note that the free trial doesn't include running sensitive data discovery jobs to discover and report sensitive data in S3 objects. To help you understand and forecast the cost of using Macie after the free trial ends, Macie provides you with estimated usage costs based on your use of Macie during the trial. Your usage data also indicates the amount of time that remains before your free trial ends. You can review this data on the Amazon Macie console and access it with the Amazon Macie API. Related services To further secure your data, workloads, and applications in AWS, consider using the following AWS services in combination with Amazon Macie. AWS Security Hub AWS Security Hub gives you a comprehensive view of the security state of your AWS resources and helps you check your AWS environment against security industry standards and best practices. It does this partly by consuming, aggregating, organizing, and prioritizing your security findings from multiple AWS services (including Macie) and supported AWS Partner Network (APN) products. Security Hub helps you analyze your security trends and identify the highest priority security issues across your AWS environment. To learn more about Security Hub, see the AWS Security Hub User Guide. To learn about using Macie and Security Hub together, see Amazon Macie integration with AWS Security Hub. Amazon GuardDuty Amazon GuardDuty is a security monitoring service that analyzes and processes certain types of AWS logs, such as AWS CloudTrail data event logs for Amazon S3 and CloudTrail management event logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected and potentially unauthorized and malicious activity within your AWS environment. To learn more about GuardDuty, see the Amazon GuardDuty User Guide.
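A minimal Boto3 sketch of the workflow above: with Macie already enabled for the account, run a one-time sensitive data discovery job against a bucket and then retrieve findings through the API. The job name and bucket are placeholders.

import uuid

import boto3

macie = boto3.client("macie2")
account_id = boto3.client("sts").get_caller_identity()["Account"]

# Macie must already be enabled in this account and Region (macie.enable_macie()).
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="example-discovery-job",                       # hypothetical job name
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": account_id, "buckets": ["example-reports-bucket"]}
        ]
    },
)

# Once the job has produced results, findings can be queried programmatically
# and passed to other systems for analysis or long-term storage.
finding_ids = macie.list_findings()["findingIds"]
if finding_ids:
    findings = macie.get_findings(findingIds=finding_ids)
    for finding in findings["findings"]:
        print(finding["type"], finding["severity"]["description"])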

Amazon Lightsail

Amazon Lightsail helps developers get started using Amazon Web Services (AWS) to build websites or web applications. It includes the features that you need to launch your project: instances (virtual private servers), managed databases, object storage, load balancers, content delivery network (CDN) distributions, SSD-based block storage, static IP addresses, DNS management of registered domains, and snapshots (backups). These features are all available for a low, predictable monthly price.
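For illustration, the following Boto3 sketch launches a single Lightsail instance from an operating system blueprint and a bundle (the fixed monthly plan). The instance name is a placeholder, and the valid blueprint and bundle IDs for your Region can be listed with get_blueprints() and get_bundles().

import boto3

lightsail = boto3.client("lightsail")

lightsail.create_instances(
    instanceNames=["example-web-1"],        # hypothetical instance name
    availabilityZone="us-east-1a",          # example Availability Zone
    blueprintId="amazon_linux_2",           # example OS blueprint
    bundleId="nano_2_0",                    # example instance plan (bundle)
)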

Amazon Managed Grafana

Amazon Managed Grafana is a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces from multiple data sources. Amazon Managed Grafana makes it easy to deploy, operate, and scale Grafana, a widely deployed open-source data visualization tool popular for its extensible data support.

AWS Resilience Hub

AWS Resilience Hub helps you proactively prepare and protect your AWS applications from disruptions. AWS Resilience Hub provides resiliency assessment and validation to help you identify and resolve issues before releasing applications into production.

AWS Tools for PowerShell

AWS Tools for PowerShell and AWS Tools for PowerShell Core are PowerShell modules, built on functionality exposed by the AWS SDK for .NET, that enable you to script operations on AWS resources from the PowerShell command line. Although you use the SDK's service clients and methods to implement the cmdlets, the cmdlets give you a PowerShell experience to specify parameters and handle results. For example, the cmdlets in both modules support PowerShell pipelining to pipe PowerShell objects to and from the cmdlets.

Amazon Redshift

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions. The Amazon Redshift service manages all of the work of setting up, operating, and scaling a data warehouse. These tasks include provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. Cluster management An Amazon Redshift cluster is a set of nodes, which consists of a leader node and one or more compute nodes. The type and number of compute nodes that you need depends on the size of your data, the number of queries you will run, and the query execution performance that you need. Creating and managing clusters Depending on your data warehousing needs, you can start with a small, single-node cluster and easily scale up to a larger, multi-node cluster as your requirements change. You can add or remove compute nodes to the cluster without any interruption to the service. For more information, see Amazon Redshift clusters. Reserving compute nodes If you intend to keep your cluster running for a year or longer, you can save money by reserving compute nodes for a one-year or three-year period. Reserving compute nodes offers significant savings compared to the hourly rates that you pay when you provision compute nodes on demand. For more information, see Purchasing Amazon Redshift reserved nodes. Creating cluster snapshots Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon Simple Storage Service (Amazon S3) by using an encrypted Secure Sockets Layer (SSL) connection. If you need to restore from a snapshot, Amazon Redshift creates a new cluster and imports data from the snapshot that you specify. For more information about snapshots, see Amazon Redshift snapshots. Cluster access and security There are several features related to cluster access and security in Amazon Redshift. These features help you to control access to your cluster, define connectivity rules, and encrypt data and connections. These features are in addition to features related to database access and security in Amazon Redshift. For more information about database security, see Managing Database Security in the Amazon Redshift Database Developer Guide. AWS accounts and IAM credentials By default, an Amazon Redshift cluster is only accessible to the AWS account that creates the cluster. The cluster is locked down so that no one else has access. Within your AWS account, you use the AWS Identity and Access Management (IAM) service to create user accounts and manage permissions for those accounts to control cluster operations. For more information, see Security in Amazon Redshift. Security groups By default, any cluster that you create is closed to everyone. IAM credentials only control access to the Amazon Redshift API-related resources: the Amazon Redshift console, command line interface (CLI), API, and SDK. To enable access to the cluster from SQL client tools via JDBC or ODBC, you use security groups: If you are using the EC2-VPC platform for your Amazon Redshift cluster, you must use VPC security groups. 
We recommend that you launch your cluster in an EC2-VPC platform. You cannot move a cluster to a VPC after it has been launched with EC2-Classic. However, you can restore an EC2-Classic snapshot to an EC2-VPC cluster using the Amazon Redshift console. For more information, see Restoring a cluster from a snapshot. If you are using the EC2-Classic platform for your Amazon Redshift cluster, you must use Amazon Redshift security groups. In either case, you add rules to the security group to grant explicit inbound access to a specific range of CIDR/IP addresses or to an Amazon Elastic Compute Cloud (Amazon EC2) security group if your SQL client runs on an Amazon EC2 instance. For more information, see Amazon Redshift cluster security groups. In addition to the inbound access rules, you create database users to provide credentials to authenticate to the database within the cluster itself. For more information, see Databases in this topic. Encryption When you provision the cluster, you can optionally choose to encrypt the cluster for additional security. When you enable encryption, Amazon Redshift stores all data in user-created tables in an encrypted format. You can use AWS Key Management Service (AWS KMS) to manage your Amazon Redshift encryption keys. Encryption is an immutable property of the cluster. The only way to switch from an encrypted cluster to a cluster that is not encrypted is to unload the data and reload it into a new cluster. Encryption applies to the cluster and any backups. When you restore a cluster from an encrypted snapshot, the new cluster is encrypted as well. For more information about encryption, keys, and hardware security modules, see Amazon Redshift database encryption. SSL connections You can use Secure Sockets Layer (SSL) encryption to encrypt the connection between your SQL client and your cluster. For more information, see Configuring security options for connections. Monitoring clusters There are several features related to monitoring in Amazon Redshift. You can use database audit logging to generate activity logs and configure events and notification subscriptions to track information of interest. Use the metrics in Amazon Redshift and Amazon CloudWatch to learn about the health and performance of your clusters and databases. Database audit logging You can use the database audit logging feature to track information about authentication attempts, connections, disconnections, changes to database user definitions, and queries run in the database. This information is useful for security and troubleshooting purposes in Amazon Redshift. The logs are stored in Amazon S3 buckets. For more information, see Database audit logging. Events and notifications Amazon Redshift tracks events and retains information about them for a period of several weeks in your AWS account. For each event, Amazon Redshift reports information such as the date the event occurred, a description, the event source (for example, a cluster, a parameter group, or a snapshot), and the source ID. You can create Amazon Redshift event notification subscriptions that specify a set of event filters. When an event occurs that matches the filter criteria, Amazon Redshift uses Amazon Simple Notification Service to actively inform you that the event has occurred. For more information about events and notifications, see Amazon Redshift events. Performance Amazon Redshift provides performance metrics and data so that you can track the health and performance of your clusters and databases.
Amazon Redshift uses Amazon CloudWatch metrics to monitor the physical aspects of the cluster, such as CPU utilization, latency, and throughput. Amazon Redshift also provides query and load performance data to help you monitor the database activity in your cluster. For more information about performance metrics and monitoring, see Monitoring Amazon Redshift cluster performance. Databases Amazon Redshift creates one database when you provision a cluster. This is the database you use to load data and run queries on your data. You can create additional databases as needed by running a SQL command. For more information about creating additional databases, go to Step 1: Create a database in the Amazon Redshift Database Developer Guide. When you provision a cluster, you specify an admin user who has access to all of the databases that are created within the cluster. This admin user is a superuser who is the only user with access to the database initially, though this user can create additional superusers and users. For more information, go to Superusers and Users in the Amazon Redshift Database Developer Guide. Amazon Redshift uses parameter groups to define the behavior of all databases in a cluster, such as date presentation style and floating-point precision. If you don't specify a parameter group when you provision your cluster, Amazon Redshift associates a default parameter group with the cluster. For more information, see Amazon Redshift parameter groups. After your cluster is created, there are several operations you can perform on it. The operations include resizing, pausing, resuming, renaming, and deleting. Resizing clusters in Amazon Redshift As your data warehousing capacity and performance needs change, you can resize your cluster to make the best use of Amazon Redshift's computing and storage options. A resize operation comes in two types: Elastic resize - You can add nodes to or remove nodes from your cluster. You can also change the node type, such as from DS2 nodes to RA3 nodes. Elastic resize is a fast operation, typically completing in minutes. For this reason, we recommend it as a first option. When you perform an elastic resize, it redistributes data slices, which are partitions that are allocated memory and disk space in each node. Elastic resize is appropriate when you: Add or reduce nodes in an existing cluster, but you don't change the node type - This is commonly called an in-place resize. When you perform this type of resize, some running queries complete successfully, but others can be dropped as part of the operation. An elastic resize completes within a few minutes. Change the node type for a cluster - When you change the node type, a snapshot is created and data is redistributed from the original cluster to a cluster comprised of the new node type. On completion, running queries are dropped. Like the in-place resize, it completes quickly. Classic resize - You can change the node type, number of nodes, or both, in a similar manner to elastic resize. Classic resize takes more time to complete, but it can be useful in cases where the change in node count or the node type to migrate to doesn't fall within the bounds for elastic resize. This can apply, for instance, when the change in node count is really large. You can also use classic resize to change the cluster encryption. For example, you can use it to modify your unencrypted cluster to use AWS KMS encryption. 
Scheduling a resize - You can schedule resize operations for your cluster to scale up to anticipate high use or to scale down for cost savings. Scheduling works for both elastic resize and classic resize. You can set up a schedule on the Amazon Redshift console. For more information, see Resizing a cluster, under Managing clusters using the console. You can also use AWS CLI or Amazon Redshift API operations to schedule a resize. For more information, see create-scheduled-action in the AWS CLI Command Reference or CreateScheduledAction in the Amazon Redshift API Reference. Elastic resize An elastic resize operation, when you add or remove nodes of the same type, has the following stages: Elastic resize takes a cluster snapshot. This snapshot always includes no-backup tables for nodes where it's applicable. (Some node types, like RA3, don't have no-backup tables.) If your cluster doesn't have a recent snapshot, because you disabled automated snapshots, the backup operation can take longer. (To minimize the time before the resize operation begins, we recommend that you enable automated snapshots or create a manual snapshot before starting the resize.) When you start an elastic resize and a snapshot operation is in progress, the resize can fail if the snapshot operation doesn't complete within a few minutes. For more information, see Amazon Redshift snapshots. The operation migrates cluster metadata. The cluster is unavailable for a few minutes. The majority of queries are temporarily paused and connections are held open. It is possible, however, for some queries to be dropped. This stage is short. Session connections are reinstated and queries resume. Elastic resize redistributes data to node slices, in the background. The cluster is available for read and write operations, but some queries can take longer to run. After the operation completes, Amazon Redshift sends an event notification. When you use elastic resize to change the node type, it works similarly to when you add or subtract nodes of the same type. First, a snapshot is created. A new target cluster is provisioned with the latest data from the snapshot, and data is transferred to the new cluster in the background. During this period, data is read only. When the resize nears completion, Amazon Redshift updates the endpoint to point to the new cluster and all connections to the original cluster are dropped. If you have reserved nodes, for example DS2 reserved nodes, you can upgrade to RA3 reserved nodes when you perform a resize. You can do this when you perform an elastic resize or use the console to restore from a snapshot. The console guides you through this process. For more information about upgrading to RA3 nodes, see Upgrading to RA3 node types. To monitor the progress of a resize operation using the Amazon Redshift console, choose CLUSTERS, then choose the cluster being resized to see the details. Elastic resize doesn't sort tables or reclaim disk space, so it isn't a substitute for a vacuum operation. For more information, see Vacuuming tables. Elastic resize has the following constraints: Elastic resize and data sharing clusters - When you add or subtract nodes on a cluster that's a producer for data sharing, you can't connect to it from consumers while Amazon Redshift migrates cluster metadata. Similarly, if you perform an elastic resize and choose a new node type, data sharing is unavailable while connections are dropped and transferred to the new target cluster.
In both types of elastic resize, the producer is unavailable for several minutes. Single-node clusters - You can't use elastic resize to resize from or to a single-node cluster. Data transfer from a shared snapshot - To run an elastic resize on a cluster that is transferring data from a shared snapshot, at least one backup must be available for the cluster. You can view your backups on the Amazon Redshift console snapshots list, the describe-cluster-snapshots CLI command, or the DescribeClusterSnapshots API operation. Platform restriction - Elastic resize is available only for clusters that use the EC2-VPC platform. For more information, see Use EC2-VPC when you create your cluster. Storage considerations - Make sure that your new node configuration has enough storage for existing data. You might have to add nodes or change the configuration. Source vs target cluster size - The possible configurations of node number and type that you can resize to with elastic resize are determined by the number of nodes in the original cluster and the target node type of the resized cluster. To determine the possible configurations available, you can use the console, or you can use the describe-node-configuration-options AWS CLI command with the action-type resize-cluster option. For more information about resizing a cluster using the Amazon Redshift console, see Resizing a cluster. The following example CLI command describes the configuration options available. In this example, the cluster named mycluster is a dc2.large 8-node cluster. aws redshift describe-node-configuration-options --cluster-identifier mycluster --region eu-west-1 --action-type resize-cluster This command returns an option list with recommended node types, number of nodes, and disk utilization for each option. The configurations returned can vary based on the specific input cluster. You can choose one of the returned configurations when you specify the options of the resize-cluster CLI command. Ceiling on additional nodes - Elastic resize has limits on the nodes that you can add to a cluster. For example, a dc2 cluster supports elastic resize up to double the number of nodes. To illustrate, you can add a node to a 4-node dc2.8xlarge cluster to make it a five-node cluster, or add more nodes until you reach eight. With some ra3 node types, you can increase the number of nodes up to four times the existing count. Specifically, suppose that your cluster consists of ra3.4xlarge or ra3.16xlarge nodes. You can then use elastic resize to increase the number of nodes in an 8-node cluster to 32, or you can pick a value below the limit. (Keep in mind in this case that the ability to grow the cluster by 4x depends on the original cluster size.) If your cluster has ra3.xlplus nodes, the limit is double. All ra3 node types support a decrease in the number of nodes to a quarter of the existing count. For example, you can decrease the size of a cluster with ra3.4xlarge nodes from 12 nodes to 3, or to a number above the minimum. Classic resize Classic resize handles cases where the change in cluster size or node type isn't within the specifications supported by elastic resize. Classic resize has undergone performance improvements so that migrations of large data volumes, which could take hours or days in the past, complete much more quickly. It does this by making use of a backup and restore operation between the source and target cluster. It also uses more efficient distribution to merge the data into the target cluster.
Classic resize has the following stages: Initial migration from source cluster to target cluster. When the new, target cluster is provisioned, Amazon Redshift sends an event notification that the resize has started. It restarts your existing cluster, which closes all connections. This includes connections from consumers, if the cluster is a producer for data sharing. After the restart, the cluster is in read-only mode, and data sharing resumes. These actions take a few minutes. Next, data is migrated to the target cluster, and both reads and writes are available. Distribution Key tables migrated as Distribution Even are converted back to their original distribution style, using background workers. The duration of this phase depends on the data-set size. For more information, see Distribution styles. Both reads and writes to the database work during this process. There can be degradation in query performance. When the resize process nears completion, Amazon Redshift updates the endpoint to the target cluster, and all connections to the source cluster are dropped. The target cluster takes on the producer role for data sharing. After the resize completes, Amazon Redshift sends an event notification. You can view the resize progress on the Amazon Redshift console. The time it takes to resize a cluster depends on the amount of data. Snapshot, restore, and resize Elastic resize is the fastest method to resize an Amazon Redshift cluster. If elastic resize isn't an option for you and you require near-constant write access to your cluster, use the snapshot and restore operations with classic resize as described in the following section. This approach requires that any data that is written to the source cluster after the snapshot is taken must be copied manually to the target cluster after the switch. Depending on how long the copy takes, you might need to repeat this several times until you have the same data in both clusters. Then you can make the switch to the target cluster. This process might have a negative impact on existing queries until the full set of data is available in the target cluster. However, it minimizes the amount of time that you can't write to the database. The snapshot, restore, and classic resize approach uses the following process: Take a snapshot of your existing cluster. The existing cluster is the source cluster. Note the time that the snapshot was taken. Doing this means that you can later identify the point when you need to rerun extract, transform, load (ETL) processes to load any post-snapshot data into the target database. Restore the snapshot into a new cluster. This new cluster is the target cluster. Verify that the sample data exists in the target cluster. Resize the target cluster. Choose the new node type, number of nodes, and other settings for the target cluster. Review the loads from your ETL processes that occurred after you took a snapshot of the source cluster. Be sure to reload the same data in the same order into the target cluster. If you have ongoing data loads, repeat this process several times until the data is the same in both the source and target clusters. Stop all queries running on the source cluster. To do this, you can reboot the cluster, or you can log on as a superuser and use the PG_CANCEL_BACKEND and the PG_TERMINATE_BACKEND commands. Rebooting the cluster is the easiest way to make sure that the cluster is unavailable. Rename the source cluster. For example, rename it from examplecluster to examplecluster-source.
Rename the target cluster to use the name of the source cluster before the rename. For example, rename the target cluster to examplecluster. From this point on, any applications that use the endpoint containing examplecluster connect to the target cluster. Delete the source cluster after you switch to the target cluster, and verify that all processes work as expected. Alternatively, you can rename the source and target clusters before reloading data into the target cluster. This approach works if you don't have a requirement that any dependent systems and reports be immediately up to date with those for the target cluster. In this case, step 6 moves to the end of the process described preceding. The rename process is only required if you want applications to continue using the same endpoint to connect to the cluster. If you don't require this, you can instead update any applications that connect to the cluster to use the endpoint of the target cluster without renaming the cluster. There are a couple of benefits to reusing a cluster name. First, you don't need to update application connection strings because the endpoint doesn't change, even though the underlying cluster changes. Second, related items such as Amazon CloudWatch alarms and Amazon Simple Notification Service (Amazon SNS) notifications are tied to the cluster name. This tie means that you can continue using the same alarms and notifications that you set up for the cluster. This continued use is primarily a concern in production environments where you want the flexibility to resize the cluster without reconfiguring related items, such as alarms and notifications. Getting the leader node IP address If your cluster is public and is in a VPC, it keeps the same Elastic IP address (EIP) for the leader node after resizing. If your cluster is private and is in a VPC, it keeps the same private IP address for the leader node after resizing. If your cluster isn't in a VPC, a new public IP address is assigned for the leader node as part of the resize operation. To get the leader node IP address for a cluster, use the dig utility, as shown following. dig mycluster.abcd1234.us-west-2.redshift.amazonaws.com The leader node IP address appears at the end of the ANSWER SECTION in the results. Pausing and resuming clusters If you have a cluster that only needs to be available at specific times, you can pause the cluster and later resume it. While the cluster is paused, on-demand billing is suspended. Only the cluster's storage incurs charges. For more information about pricing, see the Amazon Redshift pricing page. When you pause a cluster, Amazon Redshift creates a snapshot, begins terminating queries, and puts the cluster in a pausing state. If you delete a paused cluster without requesting a final snapshot, then you can't restore the cluster. You can't cancel or roll back a pause or resume operation after it's initiated. You can pause and resume a cluster on the Amazon Redshift console, with the AWS CLI, or with Amazon Redshift API operations. You can schedule actions to pause and resume a cluster. When you use the new Amazon Redshift console to create a recurring schedule to pause and resume, then two scheduled actions are created for the date range that you choose. The scheduled action names are suffixed with -pause and -resume. The total length of the name must fit within the maximum size of a scheduled action name. You can't pause the following types of clusters: EC2-Classic clusters.
Clusters that are not active, for example a cluster that is currently modifying. Hardware security module (HSM) clusters. Clusters that have automated snapshots disabled. When deciding to pause a cluster, consider the following: Connections or queries to the cluster aren't available. You can't see query monitoring information of a paused cluster on the Amazon Redshift console. You can't modify a paused cluster. Any scheduled actions on the cluster aren't done. These include creating snapshots, resizing clusters, and cluster maintenance operations. Hardware metrics aren't created. Update your CloudWatch alarms if you have alarms set on missing metrics. You can't copy the latest automated snapshots of a paused cluster to manual snapshots. While a cluster is pausing, it can't be resumed until the pause operation is complete. When you pause a cluster, billing is suspended. However, the pause operation typically completes within 15 minutes, depending upon the size of the cluster. Audit logs are archived and not restored on resume. After a cluster is paused, traces and logs might not be available for troubleshooting problems that occurred before the pause. No-backup tables on the cluster are not restored on resume. For more information about no-backup tables, see Excluding tables from snapshots. When you resume a cluster, consider the following: The cluster version of the resumed cluster is updated to the maintenance version based on the maintenance window of the cluster. If you delete the subnet associated with a paused cluster, you might have an incompatible network. In this case, restore your cluster from the latest snapshot. If you delete an Elastic IP address while the cluster is paused, then a new Elastic IP address is requested. If Amazon Redshift can't resume the cluster with its previous elastic network interface, then Amazon Redshift tries to allocate a new one. When you resume a cluster, your node IP addresses might change. You might need to update your VPC settings to support these new IP addresses for features like COPY from Secure Shell (SSH) or COPY from Amazon EMR. If you try to resume a cluster that isn't paused, the resume operation returns an error. If the resume operation is part of a scheduled action, modify or delete the scheduled action to prevent future errors. Depending upon the size of the cluster, it can take several minutes to resume a cluster before queries can be processed. In addition, query performance can be impacted for some period of time while the cluster is being re-hydrated after resume completes. Renaming clusters You can rename a cluster if you want the cluster to use a different name. Because the endpoint to your cluster includes the cluster name (also referred to as the cluster identifier), the endpoint changes to use the new name after the rename finishes. For example, if you have a cluster named examplecluster and rename it to newcluster, the endpoint changes to use the newcluster identifier. Any applications that connect to the cluster must be updated with the new endpoint. You might rename a cluster if you want to change the cluster to which your applications connect without having to change the endpoint in those applications. In this case, you must first rename the original cluster and then change the second cluster to reuse the name of the original cluster before the rename. Doing this is necessary because the cluster identifier must be unique within your account and region, so the original cluster and second cluster cannot have the same name. 
You might do this if you restore a cluster from a snapshot and don't want to change the connection properties of any dependent applications. Note If you delete the original cluster, you are responsible for deleting any unwanted cluster snapshots. When you rename a cluster, the cluster status changes to renaming until the process finishes. The old DNS name that was used by the cluster is immediately deleted, although it could remain cached for a few minutes. The new DNS name for the renamed cluster becomes effective within about 10 minutes. The renamed cluster is not available until the new name becomes effective. The cluster will be rebooted and any existing connections to the cluster will be dropped. After this completes, the endpoint will change to use the new name. For this reason, you should stop queries from running before you start the rename and restart them after the rename finishes. Cluster snapshots are retained, and all snapshots associated with a cluster remain associated with that cluster after it is renamed. For example, suppose that you have a cluster that serves your production database and the cluster has several snapshots. If you rename the cluster and then replace it in the production environment with a snapshot, the cluster that you renamed still has those existing snapshots associated with it. Amazon CloudWatch alarms and Amazon Simple Notification Service (Amazon SNS) event notifications are associated with the name of the cluster. If you rename the cluster, you need to update these accordingly. You can update the CloudWatch alarms in the CloudWatch console, and you can update the Amazon SNS event notifications in the Amazon Redshift console on the Events pane. The load and query data for the cluster continues to display data from before the rename and after the rename. However, performance data is reset after the rename process finishes. For more information, see Modifying a cluster. Shutting down and deleting clusters You can shut down your cluster if you want to stop it from running and incurring charges. When you shut it down, you can optionally create a final snapshot. If you create a final snapshot, Amazon Redshift will create a manual snapshot of your cluster before shutting it down. You can later restore that snapshot if you want to resume running the cluster and querying data. If you no longer need your cluster and its data, you can shut it down without creating a final snapshot. In this case, the cluster and data are deleted permanently. For more information about shutting down and deleting clusters, see Deleting a cluster. Regardless of whether you shut down your cluster with a final manual snapshot, all automated snapshots associated with the cluster will be deleted after the cluster is shut down. Any manual snapshots associated with the cluster are retained. Any manual snapshots that are retained, including the optional final snapshot, are charged at the Amazon Simple Storage Service storage rate if you have no other clusters running when you shut down the cluster, or if you exceed the available free storage that is provided for your running Amazon Redshift clusters.
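As a hedged sketch of these cluster operations from the AWS CLI, using a hypothetical cluster named examplecluster and a placeholder IAM role ARN (check the AWS CLI Command Reference for the exact parameters supported by your CLI version):
aws redshift pause-cluster --cluster-identifier examplecluster
aws redshift resume-cluster --cluster-identifier examplecluster
aws redshift modify-cluster --cluster-identifier examplecluster --new-cluster-identifier examplecluster-source
aws redshift delete-cluster --cluster-identifier examplecluster --final-cluster-snapshot-identifier examplecluster-final
aws redshift create-scheduled-action --scheduled-action-name examplecluster-pause --schedule "cron(0 22 * * ? *)" --iam-role arn:aws:iam::123456789012:role/RedshiftScheduler --target-action '{"PauseCluster":{"ClusterIdentifier":"examplecluster"}}'
The delete-cluster command accepts --skip-final-cluster-snapshot instead of a final snapshot identifier if you no longer need the data, and the create-scheduled-action command can also target ResumeCluster or ResizeCluster actions.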

Amazon Rekognition

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities. It can detect any inappropriate content as well. Amazon Rekognition also provides highly accurate facial analysis, face comparison, and face search capabilities. You can detect, analyze, and compare faces for a wide variety of use cases, including user verification, cataloging, people counting, and public safety. With Amazon Rekognition Custom Labels, you can create a machine learning model that finds the objects, scenes, and concepts that are specific to your business needs. Amazon Rekognition is based on the same proven, highly scalable, deep learning technology developed by Amazon's computer vision scientists to analyze billions of images and videos daily. It requires no machine learning expertise to use. Amazon Rekognition includes a simple, easy-to-use API that can quickly analyze any image or video file that's stored in Amazon S3. Amazon Rekognition is always learning from new data, and we're continually adding new labels and facial comparison features to the service. For more information, see the Amazon Rekognition FAQs. Common use cases for using Amazon Rekognition include the following: Searchable image and video libraries - Amazon Rekognition makes images and stored videos searchable so you can discover objects and scenes that appear within them. Face-based user verification - Amazon Rekognition enables your applications to confirm user identities by comparing their live image with a reference image. Detection of personal protective equipment - Amazon Rekognition detects personal protective equipment (PPE) such as face covers, head covers, and hand covers on persons in images. You can use PPE detection where safety is the highest priority, for example in industries such as construction, manufacturing, healthcare, food processing, logistics, and retail. With PPE detection, you can automatically detect if a person is wearing a specific type of PPE. You can use the detection results to send a notification or to identify places where safety warnings or training practices can be improved. Sentiment and demographic analysis - Amazon Rekognition interprets emotional expressions such as happy, sad, or surprise, and demographic information such as gender from facial images. Amazon Rekognition can analyze images, and send the emotion and demographic attributes to Amazon Redshift for periodic reporting on trends such as in-store locations and similar scenarios. Note that a prediction of an emotional expression is based on the physical appearance of a person's face only. It is not indicative of a person's internal emotional state, and Rekognition should not be used to make such a determination. Facial search - With Amazon Rekognition, you can search images, stored videos, and streaming videos for faces that match those stored in a container known as a face collection. A face collection is an index of faces that you own and manage.
Searching for people based on their faces requires two major steps in Amazon Rekognition: Index the faces. Search the faces. Unsafe content detection - Amazon Rekognition can detect adult and violent content in images and in stored videos. Developers can use the returned metadata to filter inappropriate content based on their business needs. Beyond flagging an image based on the presence of unsafe content, the API also returns a hierarchical list of labels with confidence scores. These labels indicate specific categories of unsafe content, which enables granular filtering and management of large volumes of user-generated content (UGC). Examples include social and dating sites, photo sharing platforms, blogs and forums, apps for children, ecommerce sites, entertainment, and online advertising services. Celebrity recognition - Amazon Rekognition can recognize celebrities within supplied images and in videos. Amazon Rekognition can recognize thousands of celebrities across a number of categories, such as politics, sports, business, entertainment, and media. Text detection - Amazon Rekognition Text in Image enables you to recognize and extract textual content from images. Text in Image supports most fonts, including highly stylized ones. It detects text and numbers in different orientations, such as those commonly found in banners and posters. In image sharing and social media applications, you can use it to enable visual search based on an index of images that contain the same keywords. In media and entertainment applications, you can catalog videos based on relevant text on screen, such as ads, news, sport scores, and captions. Finally, in public safety applications, you can identify vehicles based on license plate numbers from images taken by street cameras. Custom labels - With Amazon Rekognition Custom Labels, you can identify the objects and scenes in images that are specific to your business needs. For example, you can find your logo in social media posts, identify your products on store shelves, classify machine parts in an assembly line, distinguish healthy and infected plants, or detect animated characters in videos. For more information, see What is Amazon Rekognition Custom Labels? in the Amazon Rekognition Custom Labels Developer Guide. Some of the benefits of using Amazon Rekognition include: Integrating powerful image and video analysis into your apps - You don't need computer vision or deep learning expertise to take advantage of the reliable image and video analysis in Amazon Rekognition. With the API, you can easily and quickly build image and video analysis into any web, mobile, or connected device application. Deep learning-based image and video analysis - Amazon Rekognition uses deep-learning technology to accurately analyze images, find and compare faces in images, and detect objects and scenes within your images and videos. Scalable image analysis - Amazon Rekognition enables you to analyze millions of images so you can curate and organize massive amounts of visual data. Integration with other AWS services - Amazon Rekognition is designed to work seamlessly with other AWS services like Amazon S3 and AWS Lambda. You can call the Amazon Rekognition API directly from Lambda in response to Amazon S3 events. Because Amazon S3 and Lambda scale automatically in response to your application's demand, you can build scalable, affordable, and reliable image analysis applications.
For example, each time a person arrives at your residence, your door camera can upload a photo of the visitor to Amazon S3. This triggers a Lambda function that uses Amazon Rekognition API operations to identify your guest. You can run analysis directly on images that are stored in Amazon S3 without having to load or move the data. Support for AWS Identity and Access Management (IAM) makes it easy to securely control access to Amazon Rekognition API operations. Using IAM, you can create and manage AWS users and groups to grant the appropriate access to your developers and end users. Low cost - With Amazon Rekognition, you pay for the images and videos that you analyze, and the face metadata that you store. There are no minimum fees or upfront commitments. You can get started for free, and save more as you grow with the Amazon Rekognition tiered pricing model. Amazon Rekognition and HIPAA eligibility This is a HIPAA Eligible Service. For more information about AWS, U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), and using AWS services to process, store, and transmit protected health information (PHI), see HIPAA Overview.
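As an illustrative sketch (the bucket and object names are placeholders), you can run label detection and unsafe content detection on an image stored in Amazon S3 directly from the AWS CLI:
aws rekognition detect-labels --image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"photo.jpg"}}' --max-labels 10 --min-confidence 80
aws rekognition detect-moderation-labels --image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"photo.jpg"}}'
Both commands return JSON that includes label names and confidence scores, which you can filter according to your business rules.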

Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Amazon Aurora is a fully managed relational database engine that's built for the cloud and compatible with MySQL and PostgreSQL. Amazon Aurora is part of Amazon RDS. Overview of Amazon RDS Why do you want to run a relational database in the AWS Cloud? Because AWS takes over many of the difficult and tedious management tasks of a relational database. Amazon EC2 and on-premises databases Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon EC2, these are split apart so that you can scale them independently. If you need more CPU, less IOPS, or more storage, you can easily allocate them. For a relational database in an on-premises server, you assume full responsibility for the server, operating system, and software. For a database on an Amazon EC2 instance, AWS manages the layers below the operating system. In this way, Amazon EC2 eliminates some of the burden of managing an on-premises database server. Amazon EC2 isn't a fully managed service. Thus, when you run a database on Amazon EC2, you're more prone to user errors. For example, when you update the operating system or database software manually, you might accidentally cause application downtime. You might spend hours checking every change to identify and fix an issue. Amazon RDS and Amazon EC2 Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating tedious manual tasks, Amazon RDS frees you to focus on your application and your users. We recommend Amazon RDS over Amazon EC2 as your default choice for most database deployments. Amazon RDS provides the following specific advantages over database deployments that aren't fully managed: You can use the database products you are already familiar with: MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL. Amazon RDS manages backups, software patching, automatic failure detection, and recovery. You can turn on automated backups, or manually create your own backup snapshots. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently. You can get high availability with a primary instance and a synchronous secondary instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. In addition to the security in your database package, you can help control who can access your RDS databases. To do so, you can use AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC).
Amazon RDS Custom for Oracle and Microsoft SQL Server Amazon RDS Custom is an RDS management type that gives you full access to your database and operating system. You can use the control capabilities of RDS Custom to access and customize the database environment and operating system for legacy and packaged business applications. Meanwhile, Amazon RDS automates database administration tasks and operations. In this deployment model, you can install applications and change configuration settings to suit your applications. At the same time, you can offload database administration tasks such as provisioning, scaling, upgrading, and backup to AWS. You can take advantage of the database management benefits of Amazon RDS, with more control and flexibility. For Oracle Database and Microsoft SQL Server, RDS Custom combines the automation of Amazon RDS with the flexibility of Amazon EC2. For more information on RDS Custom, see Working with Amazon RDS Custom. With the shared responsibility model of RDS Custom, you get more control than in Amazon RDS, but also more responsibility. For more information, see Shared responsibility model. Amazon RDS on AWS Outposts Amazon RDS on AWS Outposts extends RDS for SQL Server, RDS for MySQL, and RDS for PostgreSQL databases to AWS Outposts environments. AWS Outposts uses the same hardware as in public AWS Regions to bring AWS services, infrastructure, and operation models on-premises. With RDS on Outposts, you can provision managed DB instances close to the business applications that must run on-premises. For more information, see Working with Amazon RDS on AWS Outposts. DB instances A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance. Your DB instance can contain one or more user-created databases. You can access your DB instance by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or the AWS Management Console. DB engines A DB engine is the specific relational database software that runs on your DB instance. Amazon RDS currently supports the following engines: MariaDB Microsoft SQL Server MySQL Oracle PostgreSQL Each DB engine has its own supported features, and each version of a DB engine can include specific features. Support for Amazon RDS features varies across AWS Regions and specific versions of each DB engine. To check feature support in different engine versions and Regions, see Supported features in Amazon RDS by AWS Region and DB engine. Additionally, each DB engine has a set of parameters in a DB parameter group that control the behavior of the databases that it manages. DB instance classes A DB instance class determines the computation and memory capacity of a DB instance. A DB instance class consists of both the DB instance type and the size. Each instance type offers different compute, memory, and storage capabilities. For example, db.m6g is a general-purpose DB instance type powered by AWS Graviton2 processors. Within the db.m6g instance type, db.m6g.2xlarge is a DB instance class. You can select the DB instance that best meets your needs. If your needs change over time, you can change DB instances. For information, see DB instance classes. Note For pricing information on DB instance classes, see the Pricing section of the Amazon RDS product page. 
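As an illustrative way to check which DB instance classes are orderable for a given engine in your Region (the engine and class values here are examples, not recommendations), you can use the AWS CLI:
aws rds describe-orderable-db-instance-options --engine mysql --db-instance-class db.m6g.2xlarge
The response lists the Availability Zones, storage types, and engine versions for which that combination is available.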
DB instance storage Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. DB instance storage comes in the following types: General Purpose (SSD) Provisioned IOPS (PIOPS) Magnetic The storage types differ in performance characteristics and price. You can tailor your storage performance and cost to the needs of your database. Each DB instance has minimum and maximum storage requirements depending on the storage type and the database engine it supports. It's important to have sufficient storage so that your databases have room to grow. Also, sufficient storage makes sure that features for the DB engine have room to write content or log entries. For more information, see Amazon RDS DB instance storage. Amazon Virtual Private Cloud (Amazon VPC) You can run a DB instance on a virtual private cloud (VPC) using the Amazon Virtual Private Cloud (Amazon VPC) service. When you use a VPC, you have control over your virtual networking environment. You can choose your own IP address range, create subnets, and configure routing and access control lists. The basic functionality of Amazon RDS is the same whether it's running in a VPC or not. Amazon RDS manages backups, software patching, automatic failure detection, and recovery. There's no additional cost to run your DB instance in a VPC. For more information on using Amazon VPC with RDS, see Amazon VPC VPCs and Amazon RDS. Amazon RDS uses Network Time Protocol (NTP) to synchronize the time on DB Instances. AWS Regions and Availability Zones Amazon cloud computing resources are housed in highly available data center facilities in different areas of the world (for example, North America, Europe, or Asia). Each data center location is called an AWS Region. Each AWS Region contains multiple distinct locations called Availability Zones, or AZs. Each Availability Zone is engineered to be isolated from failures in other Availability Zones. Each is engineered to provide inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. For more information, see Regions, Availability Zones, and Local Zones. You can run your DB instance in several Availability Zones, an option called a Multi-AZ deployment. When you choose this option, Amazon automatically provisions and maintains one or more secondary standby DB instances in a different Availability Zone. Your primary DB instance is replicated across Availability Zones to each secondary DB instance. This approach helps provide data redundancy and failover support, eliminate I/O freezes, and minimize latency spikes during system backups. In a Multi-AZ DB clusters deployment, the secondary DB instances can also serve read traffic. For more information, see Multi-AZ deployments for high availability. Security A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. For more information about security groups, see Security in Amazon RDS. Monitoring an Amazon RDS DB instance There are several ways that you can track the performance and health of a DB instance. You can use the Amazon CloudWatch service to monitor the performance and health of a DB instance. CloudWatch performance charts are shown in the Amazon RDS console. 
You can also subscribe to Amazon RDS events to be notified about changes to a DB instance, DB snapshot, or DB parameter group. For more information, see Monitoring metrics in an Amazon RDS instance. How to work with Amazon RDS There are several ways that you can interact with Amazon RDS. AWS Management Console The AWS Management Console is a simple web-based user interface. You can manage your DB instances from the console with no programming required. To access the Amazon RDS console, sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/. Command line interface You can use the AWS Command Line Interface (AWS CLI) to access the Amazon RDS API interactively. To install the AWS CLI, see Installing the AWS Command Line Interface. To begin using the AWS CLI for RDS, see AWS Command Line Interface reference for Amazon RDS. Programming with Amazon RDS If you are a developer, you can access Amazon RDS programmatically. For more information, see Amazon RDS API reference. For application development, we recommend that you use one of the AWS Software Development Kits (SDKs). The AWS SDKs handle low-level details such as authentication, retry logic, and error handling, so that you can focus on your application logic. AWS SDKs are available for a wide variety of languages. For more information, see Tools for Amazon Web Services. AWS also provides libraries, sample code, tutorials, and other resources to help you get started more easily. For more information, see Sample code & libraries. How you are charged for Amazon RDS When you use Amazon RDS, you can choose to use on-demand DB instances or reserved DB instances. For more information, see DB instance billing for Amazon RDS.
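Building on the command line interface mentioned above, here is a minimal sketch of creating and then describing a DB instance (the identifier, class, engine, credentials, and storage size are placeholder values; review the full option list in the AWS CLI reference before running anything similar):
aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.m6g.large --engine mysql --master-username admin --master-user-password <placeholder-password> --allocated-storage 20
aws rds describe-db-instances --db-instance-identifier mydbinstance
The describe call reports the instance status and, once the instance is available, the endpoint that your applications connect to.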


AWS App2Container

AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. Using A2C simplifies your migration tasks by performing inventory and analysis of your existing applications, creating Docker containers that include your application dependencies, and generating deployment templates based on AWS best practices with known values filled in for you. After you have reviewed your templates, A2C helps you register your containers to Amazon ECR, deploy to Amazon ECS or Amazon EKS, and build CI/CD pipelines. A2C helps you lift and shift applications that run in your on-premises data centers or on virtual machines, so that they run in containers that are managed by Amazon ECS, Amazon EKS, or AWS App Runner. Moving legacy applications to containers is often the starting point toward application modernization. There are many benefits to containerization: Reduces operational overhead and infrastructure costs Increases development and deployment agility Standardizes build and deployment processes across an organization How App2Container works You can use App2Container to generate container images for one or more applications running on Windows or Linux servers that are compatible with the Open Container Initiative (OCI). This includes commercial off-the-shelf applications (COTS). App2Container does not need source code for the application to containerize it. You can use App2Container directly on the application servers that are running your applications, or perform the containerization and deployment steps on a worker machine. App2Container performs the following tasks: Creates an inventory list for the application server that identifies all running ASP.NET (Windows) and Java applications (Linux) that are candidates to containerize. Analyzes the runtime dependencies of supported applications that are running, including cooperating processes and network port dependencies. Extracts application artifacts for containerization and generates a Dockerfile. Initiates builds for the application container. Generates AWS artifacts and optionally deploys the containers on Amazon ECS, Amazon EKS, or AWS App Runner. For example: a CloudFormation template to configure required compute, network, and security infrastructure to deploy containers using Amazon ECS, Amazon EKS, or AWS App Runner. An Amazon ECR container image, Amazon ECS task definitions, or AWS CloudFormation templates for Amazon EKS or AWS App Runner that incorporate best practices for security and scalability of the application by integrating with various AWS services. When deploying directly, App2Container can upload AWS CloudFormation resources to an Amazon S3 bucket, and create a CloudFormation stack. Optionally creates a CI/CD pipeline with AWS CodePipeline and associated services, to automate building and deploying your application containers. Accessing AWS through App2Container When you initialize App2Container, you provide it with your AWS credentials. This allows App2Container to do the following: Store artifacts in Amazon S3, if you configured it to do so. Create and deploy application containers using AWS services such as Amazon ECS, Amazon EKS, and AWS App Runner. Create CI/CD pipelines using AWS CodePipeline. Pricing App2Container is offered at no additional charge.
You are charged only when you use other AWS services to run your containerized application, such as Amazon ECR, Amazon ECS, Amazon EKS, and AWS App Runner. For more information, see AWS Pricing.
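As a rough outline of the workflow described above (the application ID shown is a hypothetical value produced by the inventory step, so treat this as a sketch rather than a verified runbook; on Linux the commands are typically run as root):
sudo app2container init
sudo app2container inventory
sudo app2container analyze --application-id java-tomcat-9e8e4799
sudo app2container containerize --application-id java-tomcat-9e8e4799
sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
The init command captures your AWS profile and artifact settings, inventory lists candidate applications with their IDs, analyze and containerize produce the analysis report and container image, and generate app-deployment emits the AWS deployment artifacts described above.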

AWS CloudFormation

AWS CloudFormation enables you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure. AWS CloudFormation enables you to use a template file to create and delete a collection of resources together as a single unit (a stack). AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; CloudFormation handles that. The following scenarios demonstrate how CloudFormation can help. Simplify infrastructure management For a scalable web application that also includes a backend database, you might use an Auto Scaling group, an Elastic Load Balancing load balancer, and an Amazon Relational Database Service database instance. You might use each individual service to provision these resources and after you create the resources, you would have to configure them to work together. All these tasks can add complexity and time before you even get your application up and running. Instead, you can create a CloudFormation template or modify an existing one. A template describes all your resources and their properties. When you use that template to create a CloudFormation stack, CloudFormation provisions the Auto Scaling group, load balancer, and database for you. After the stack has been successfully created, your AWS resources are up and running. You can delete the stack just as easily, which deletes all the resources in the stack. By using CloudFormation, you easily manage a collection of resources as a single unit. Quickly replicate your infrastructure If your application requires additional availability, you might replicate it in multiple regions so that if one region becomes unavailable, your users can still use your application in other regions. The challenge in replicating your application is that it also requires you to replicate your resources. Not only do you need to record all the resources that your application requires, but you must also provision and configure those resources in each region. Reuse your CloudFormation template to create your resources in a consistent and repeatable manner. To reuse your template, describe your resources once and then provision the same resources over and over in multiple regions. Easily control and track changes to your infrastructure In some cases, you might have underlying resources that you want to upgrade incrementally. For example, you might change to a higher performing instance type in your Auto Scaling launch configuration so that you can reduce the maximum number of instances in your Auto Scaling group. If problems occur after you complete the update, you might need to roll back your infrastructure to the original settings. To do this manually, you not only have to remember which resources were changed, you also have to know what the original settings were. 
When you provision your infrastructure with CloudFormation, the CloudFormation template describes exactly what resources are provisioned and their settings. Because these templates are text files, you simply track differences in your templates to track changes to your infrastructure, similar to the way developers control revisions to source code. For example, you can use a version control system with your templates so that you know exactly what changes were made, who made them, and when. If at any point you need to reverse changes to your infrastructure, you can use a previous version of your template.
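As an illustration of the template-driven workflow described above, the following sketch creates a small stack from an inline template using the AWS SDK for Python (boto3). The stack name, the "DemoBucket" resource, and the Region are hypothetical placeholders, and the template is a minimal example rather than a production design.

import json
import boto3

# Minimal template: one versioned S3 bucket (hypothetical logical name "DemoBucket").
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack; CloudFormation provisions every resource the template describes.
stack = cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))

# Wait until the resources are up; the whole collection can later be
# removed as a single unit with delete_stack.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")
print("Created:", stack["StackId"])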

AWS Wavelength

AWS Wavelength allows developers to build applications that deliver ultra-low latencies to mobile devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. Developers can extend an Amazon Virtual Private Cloud (VPC) to one or more Wavelength Zones, and then use AWS resources like Amazon Elastic Compute Cloud (EC2) instances to run applications that require ultra-low latency and a connection to AWS services in the Region.

VPCs: After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: 4G/LTE and 5G devices on the telecommunication carrier network, and outbound traffic to public internet resources.

Subnets: Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see Routing.

Carrier gateways: A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are available only for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region.

Carrier IP address: A Carrier IP address is the address that you assign to a network interface that resides in a subnet in a Wavelength Zone (for example, on an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1.
Routing: You can set the carrier gateway as a destination in a route table for VPCs that contain subnets in a Wavelength Zone and for the subnets in those Wavelength Zones. Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network.
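To make the carrier gateway and routing description concrete, the following boto3 sketch extends an existing VPC into a Wavelength Zone and points the subnet's default route at a carrier gateway. The VPC ID, CIDR block, and Wavelength Zone name are hypothetical placeholders, and the account is assumed to be opted in to that Wavelength Zone.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical existing VPC

# Subnet in a Wavelength Zone that is associated with the VPC.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.2.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",
)["Subnet"]

# Carrier gateway for the VPC (valid only for VPCs with Wavelength subnets).
cgw = ec2.create_carrier_gateway(VpcId=vpc_id)["CarrierGateway"]

# Custom route table whose default route targets the carrier gateway.
rtb = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_route(
    RouteTableId=rtb["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    CarrierGatewayId=cgw["CarrierGatewayId"],
)
ec2.associate_route_table(RouteTableId=rtb["RouteTableId"], SubnetId=subnet["SubnetId"])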

Amazon Simple Storage Service (S3)

Amazon Simple Storage Service (Amazon S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Features of Amazon S3 include the following.

Storage classes: Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store mission-critical production data in S3 Standard for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. For more information, see Using Amazon S3 storage classes. For more information about S3 Glacier Flexible Retrieval, see the Amazon S3 Glacier Developer Guide.

Storage management: Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. S3 Lifecycle - Configure a lifecycle policy to manage your objects and store them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. S3 Object Lock - Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. S3 Replication - Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. S3 Batch Operations - Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects.

Access management: Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create.
To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. S3 Block Public Access - Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level. AWS Identity and Access Management (IAM) - Create IAM users for your AWS account to manage access to your Amazon S3 resources. For example, you can use IAM with Amazon S3 to control the type of access a user or group of users has to an S3 bucket that your AWS account owns. Bucket policies - Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. Amazon S3 access points - Configure named network endpoints with dedicated access policies to manage data access at scale for shared datasets in Amazon S3. Access control lists (ACLs) - Grant read and write permissions for individual buckets and objects to authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM policies for access control instead of ACLs. ACLs are an access control mechanism that predates resource-based policies and IAM. For more information about when you'd use ACLs instead of resource-based policies or IAM policies, see Access policy guidelines. S3 Object Ownership - Disable ACLs and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. You, as the bucket owner, automatically own and have full control over every object in your bucket, and access control for your data is based on policies. Access Analyzer for S3 - Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources.

Data processing: To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. S3 Object Lambda - Add your own code to S3 GET requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. Event notifications - Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources.

Storage logging and monitoring: Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools: Amazon CloudWatch metrics for Amazon S3 - Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. AWS CloudTrail - Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools: Server access logging - Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. AWS Trusted Advisor - Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas.
You can then follow the recommendations to optimize your services and resources.

Analytics and insights: Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. Amazon S3 Storage Lens - Understand, analyze, and optimize your storage. S3 Storage Lens provides 29+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. Storage Class Analysis - Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. S3 Inventory with Inventory reports - Audit and report on objects and their corresponding metadata, and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list.

Strong consistency: Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model.

How Amazon S3 works: Amazon S3 is an object storage service that stores data as objects within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access.

Buckets: A bucket is a container for objects stored in Amazon S3. You can store any number of objects in a bucket and can have up to 100 buckets in your account. To request an increase, visit the Service Quotas console. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the DOC-EXAMPLE-BUCKET bucket in the US West (Oregon) Region, then it is addressable using the URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the bucket naming rules. You can also configure a bucket to use S3 Versioning or other storage management features.
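As a minimal sketch of the workflow just described, the following boto3 snippet creates a bucket in a chosen Region, turns on S3 Versioning, and uploads one object under a key. The bucket name and Region are hypothetical placeholders; bucket names must be globally unique.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")
bucket = "doc-example-bucket-12345"  # hypothetical, globally unique name

# Create the bucket in the chosen Region.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Enable S3 Versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload an object; the key uniquely identifies it within the bucket.
s3.put_object(Bucket=bucket, Key="photos/puppy.jpg", Body=b"example bytes")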
Buckets also: organize the Amazon S3 namespace at the highest level; identify the account responsible for storage and data transfer charges; provide access control options, such as bucket policies, access control lists (ACLs), and S3 Access Points, that you can use to manage access to your Amazon S3 resources; and serve as the unit of aggregation for usage reporting. For more information about buckets, see Buckets overview.

Objects: Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object. These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview.

Keys: An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identifies each object. So you can think of Amazon S3 as a basic data map between "bucket + key + version" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg, DOC-EXAMPLE-BUCKET is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Creating object key names.

S3 Versioning: You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Using versioning in S3 buckets.

Version ID: When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Using versioning in S3 buckets.

Bucket policy: A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request).
For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. For more information, see Bucket policy examples. In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html.

S3 Access Points: Amazon S3 Access Points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. Access Points are attached to buckets and can be used to perform S3 object operations, such as GetObject and PutObject. Access Points simplify managing data access at scale for shared datasets in Amazon S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC). For more information, see Managing data access with Amazon S3 access points.

Access control lists (ACLs): You can use ACLs to grant read and write permissions to authorized users for individual buckets and objects. Each bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. ACLs are an access control mechanism that predates IAM. For more information about ACLs, see Access control list (ACL) overview. By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through ACLs. You can use Object Ownership to change this default behavior so that ACLs are disabled and you, as the bucket owner, automatically own every object in your bucket. As a result, access control for your data is based on policies, such as IAM policies, S3 bucket policies, virtual private cloud (VPC) endpoint policies, and AWS Organizations service control policies (SCPs). A majority of modern use cases in Amazon S3 no longer require the use of ACLs, and we recommend that you disable ACLs except in unusual circumstances where you need to control access for each object individually. With Object Ownership, you can disable ACLs and rely on policies for access control. When you disable ACLs, you can easily maintain a bucket with objects uploaded by different AWS accounts. You, as the bucket owner, own all the objects in the bucket and can manage access to them using policies. For more information, see Controlling ownership of objects and disabling ACLs for your bucket.

Regions: You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it. Note: You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference.
For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference.

Amazon S3 data consistency model: Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data. Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers. If a PUT request is successful, your data is safely stored. Any read (GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by the PUT request. Here are examples of this behavior: A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list. A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data. A process deletes an existing object and immediately tries to read it. Amazon S3 does not return any data because the object has been deleted. A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing.

Note: Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Updates are key-based. There is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application. Bucket configurations have an eventual consistency model. Specifically, this means that if you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list, and if you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket.
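The read-after-write behavior described above can be seen in a simple sketch: a successful PUT is immediately followed by a GET and a LIST that reflect the write. The bucket and key names are hypothetical, and the bucket is assumed to already exist.

import boto3

s3 = boto3.client("s3")
bucket, key = "doc-example-bucket-12345", "reports/latest.txt"  # hypothetical

# Overwrite (or create) the object.
s3.put_object(Bucket=bucket, Key=key, Body=b"version 2")

# A read issued after the successful PUT returns the new data...
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
assert body == b"version 2"

# ...and a listing issued after the PUT includes the key.
keys = [o["Key"] for o in s3.list_objects_v2(Bucket=bucket, Prefix="reports/")["Contents"]]
assert key in keys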

Amazon Personalize

Real-time personalization and recommendations, based on the same technology used at Amazon.com. Amazon Personalize is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata. Common use case examples include the following: Personalizing a video streaming app - You can use preconfigured or customizable Amazon Personalize resources to add multiple types of personalized video recommendations to your streaming app, for example, Top picks for you, More like X, and Most popular video recommendations. Adding product recommendations to an ecommerce app - You can use preconfigured or customizable Amazon Personalize resources to add multiple types of personalized product recommendations to your retail app, for example, Recommended for you, Frequently bought together, and Customers who viewed X also viewed product recommendations. Creating personalized emails - You can use customizable Amazon Personalize resources to generate batch recommendations for all users on an email list. Then you can use an AWS service or third-party service to send users personalized emails recommending items in your catalog. Creating a targeted marketing campaign - You can use Amazon Personalize to generate segments of users who will most likely interact with items in your catalog. Then you can use an AWS service or third-party service to create a targeted marketing campaign that promotes different items to different user segments. Amazon Personalize includes API operations for real-time personalization, and batch operations for bulk recommendations and user segments. You can get started quickly with use-case optimized recommenders for your business domain, or you can create your own configurable custom resources. With Amazon Personalize, your data can come from both your historical bulk interaction records in a CSV file and real-time events from your users as they interact with your catalog. Before Amazon Personalize can generate recommendations, your interactions data must have: at minimum, 1,000 interaction records from users interacting with items in your catalog (these interactions can be from bulk imports, streamed events, or both), and at minimum, 25 unique user IDs with at least 2 interactions each. Different use cases may have additional data requirements. If you don't have enough data, you can use Amazon Personalize to first collect real-time event data. After you have recorded enough events, Amazon Personalize can generate recommendations.

Pricing for Amazon Personalize: With Amazon Personalize, you pay only for what you use. There are no minimum fees and no upfront commitments. The costs of Amazon Personalize depend on data processing, training, and the number of recommendation requests. The AWS Free Tier provides a monthly quota of up to 20 GB of data processing per available AWS Region, up to 100 hours of training time per eligible AWS Region, and up to 50 TPS-hours of real-time recommendations per month. The free tier is valid for the first two months of usage. For a complete list of charges and prices, see Amazon Personalize pricing.

Related AWS services and solutions: Amazon Personalize integrates seamlessly with other AWS services and solutions. For example, you can use AWS Amplify to record user interaction events. Amplify includes a JavaScript library for recording events from web client applications, and a library for recording events in server code.
For more information, see Amplify - analytics. You can automate and schedule Amazon Personalize tasks with Maintaining Personalized Experiences with Machine Learning; this AWS Solutions Implementation automates the Amazon Personalize workflow, including data import, solution version training, and batch workflows. You can use Amazon CloudWatch Evidently to perform A/B testing with Amazon Personalize recommendations; for more information, see Perform launches and A/B experiments with CloudWatch Evidently in the Amazon CloudWatch User Guide. You can use Amazon Pinpoint to create targeted marketing campaigns; for an example that shows how to use Amazon Pinpoint and Amplify to add Amazon Personalize recommendations to a marketing email campaign and a web app, see Web Analytics with Amplify.

Third-party services: Amazon Personalize works well with various third-party services. Amplitude - You can use Amplitude to track user actions to help you understand your users' behavior. For information on using Amplitude and Amazon Personalize, see the following AWS Partner Network (APN) blog post: Measuring the Effectiveness of Personalization with Amplitude and Amazon Personalize. Braze - You can use Braze to send users personalized emails recommending items in your catalog. Braze is a market-leading messaging platform (email, push, SMS). For a workshop that shows how to integrate Amazon Personalize and Braze, see Amazon Personalize workshop. mParticle - You can use mParticle to collect event data from your app. For an example that shows how to use mParticle and Amazon Personalize to implement personalized product recommendations, see How to harness the power of a CDP for machine learning: Part 2. Optimizely - You can use Optimizely to perform A/B testing with Amazon Personalize recommendations. For information on using Optimizely and Amazon Personalize, see Optimizely integrates with Amazon Personalize to combine powerful machine learning with experimentation. Segment - You can use Segment to send your data to Amazon Personalize. For more information on integrating Segment with Amazon Personalize, see Amazon Personalize Destination.
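For the real-time API operations mentioned above, a request to an existing campaign looks roughly like the following boto3 sketch. The campaign ARN and user ID are hypothetical placeholders, and the campaign must already have been trained and deployed.

import boto3

personalize_rt = boto3.client("personalize-runtime")

# Ask a deployed campaign for the top items for one user.
response = personalize_rt.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/demo-campaign",
    userId="user-123",
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"], item.get("score"))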

AWS Lambda

With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume—there's no charge when your code isn't running. You can run code for virtually any type of application or backend service—all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, and logging. With Lambda, you can run code for virtually any type of application or backend service. All you need to do is supply your code in one of the languages that Lambda supports. (Note: the AWS Lambda Developer Guide assumes that you have experience with coding, compiling, and deploying programs using one of the supported languages.) You organize your code into Lambda functions. Lambda runs your function only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time that you consume; there is no charge when your code is not running. You can invoke your Lambda functions using the Lambda API, or Lambda can run your functions in response to events from other AWS services. For example, you can use Lambda to: build data-processing triggers for AWS services such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB, process streaming data stored in Amazon Kinesis, or create your own backend that operates at AWS scale, performance, and security. Lambda is an ideal compute service for many application scenarios, as long as you can run your application code using the Lambda standard runtime environment and within the resources that Lambda provides. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. Lambda performs operational and administrative activities on your behalf, including managing capacity, monitoring, and logging your Lambda functions. If you need to manage your own compute resources, AWS has other compute services to meet your needs. For example: Amazon Elastic Compute Cloud (Amazon EC2) offers a wide range of EC2 instance types to choose from. It lets you customize operating systems, network and security settings, and the entire software stack. You are responsible for provisioning capacity, monitoring fleet health and performance, and using Availability Zones for fault tolerance. AWS Elastic Beanstalk enables you to deploy and scale applications onto Amazon EC2. You retain ownership and full control over the underlying EC2 instances.

Lambda features: The following key features help you develop Lambda applications that are scalable, secure, and easily extensible. Concurrency and scaling controls - Concurrency and scaling controls such as concurrency limits and provisioned concurrency give you fine-grained control over the scaling and responsiveness of your production applications.
Functions defined as container images - Use your preferred container image tooling, workflows, and dependencies to build, test, and deploy your Lambda functions. Code signing - Code signing for Lambda provides trust and integrity controls that let you verify that only unaltered code that approved developers have published is deployed in your Lambda functions. Lambda extensions - You can use Lambda extensions to augment your Lambda functions. For example, use extensions to more easily integrate Lambda with your favorite tools for monitoring, observability, security, and governance. Function blueprints - A function blueprint provides sample code that shows how to use Lambda with other AWS services or third-party applications. Blueprints include sample code and function configuration presets for Node.js and Python runtimes. Database access - A database proxy manages a pool of database connections and relays queries from a function. This enables a function to reach high concurrency levels without exhausting database connections. File systems access - You can configure a function to mount an Amazon Elastic File System (Amazon EFS) file system to a local directory. With Amazon EFS, your function code can access and modify shared resources safely and at high concurrency.
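A Lambda function itself is just a handler in one of the supported languages. The sketch below shows a minimal Python handler for the S3 data-processing trigger scenario mentioned earlier; the event shape follows the standard S3 notification format, and any bucket or function names involved are hypothetical.

import json

def lambda_handler(event, context):
    """Log each object that triggered this invocation via an S3 event notification."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}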

AWS Organizations and AWS Account Management

With AWS Organizations, you can consolidate multiple AWS accounts into an organization that you create and centrally manage. You can create member accounts and invite existing accounts to join your organization. You can organize those accounts and manage them as a group. With AWS Account Management you can update the alternate contact information for each of your AWS accounts. AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization. This user guide defines key concepts for AWS Organizations, provides tutorials, and explains how to create and manage an organization.

AWS Organizations features: AWS Organizations offers the following features.

Centralized management of all of your AWS accounts - You can combine your existing accounts into an organization that enables you to manage the accounts centrally. You can create accounts that automatically are a part of your organization, and you can invite other accounts to join your organization. You also can attach policies that affect some or all of your accounts.

Consolidated billing for all member accounts - Consolidated billing is a feature of AWS Organizations. You can use the management account of your organization to consolidate and pay for all member accounts. In consolidated billing, management accounts can also access the billing information, account information, and account activity of member accounts in their organization. This information may be used for services such as Cost Explorer, which can help management accounts improve their organization's cost performance.

Hierarchical grouping of your accounts to meet your budgetary, security, or compliance needs - You can group your accounts into organizational units (OUs) and attach different access policies to each OU. For example, if you have accounts that must access only the AWS services that meet certain regulatory requirements, you can put those accounts into one OU. You then can attach a policy to that OU that blocks access to services that do not meet those regulatory requirements. You can nest OUs within other OUs to a depth of five levels, providing flexibility in how you structure your account groups.

Policies to centralize control over the AWS services and API actions that each account can access - As an administrator of the management account of an organization, you can use service control policies (SCPs) to specify the maximum permissions for member accounts in the organization. In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions. These restrictions even override the administrators of member accounts in the organization. When AWS Organizations blocks access to a service, resource, or API action for a member account, a user or role in that account can't access it.
This block remains in effect even if an administrator of a member account explicitly grants such permissions in an IAM policy. For more information, see Service control policies (SCPs).

Policies to standardize tags across the resources in your organization's accounts - You can use tag policies to maintain consistent tags, including the preferred case treatment of tag keys and tag values. For more information, see Tag policies.

Policies to control how AWS artificial intelligence (AI) and machine learning services can collect and store data - You can use AI services opt-out policies to opt out of data collection and storage for any of the AWS AI services that you don't want to use. For more information, see AI services opt-out policies.

Policies that configure automatic backups for the resources in your organization's accounts - You can use backup policies to configure and automatically apply AWS Backup plans to resources across all your organization's accounts. For more information, see Backup policies.

Integration and support for AWS Identity and Access Management (IAM) - IAM provides granular control over users and roles in individual accounts. AWS Organizations expands that control to the account level by giving you control over what users and roles in an account or a group of accounts can do. The resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level and the permissions that are explicitly granted by IAM at the user or role level within that account. In other words, the user can access only what is allowed by both the AWS Organizations policies and IAM policies. If either blocks an operation, the user can't access that operation.

Integration with other AWS services - You can leverage the multi-account management services available in AWS Organizations with select AWS services to perform tasks on all accounts that are members of an organization. For a list of services and the benefits of using each service on an organization-wide level, see AWS services that you can use with AWS Organizations. When you enable an AWS service to perform tasks on your behalf in your organization's member accounts, AWS Organizations creates an IAM service-linked role for that service in each member account. The service-linked role has predefined IAM permissions that allow the other AWS service to perform specific tasks in your organization and its accounts. For this to work, all accounts in an organization automatically have a service-linked role. This role enables the AWS Organizations service to create the service-linked roles required by AWS services for which you enable trusted access. These additional service-linked roles are attached to IAM permission policies that enable the specified service to perform only those tasks that are required by your configuration choices. For more information, see Using AWS Organizations with other AWS services.

Global access - AWS Organizations is a global service with a single endpoint that works from any and all AWS Regions. You don't need to explicitly select a region to operate in.

Data replication that is eventually consistent - AWS Organizations, like many other AWS services, is eventually consistent. AWS Organizations achieves high availability by replicating data across multiple servers in AWS data centers within its Region. If a request to change some data is successful, the change is committed and safely stored. However, the change must then be replicated across the multiple servers.
For more information, see Changes that I make aren't always immediately visible.

Free to use - AWS Organizations is a feature of your AWS account offered at no additional charge. You are charged only when you access other AWS services from the accounts in your organization. For information about the pricing of other AWS products, see the Amazon Web Services pricing page.

AWS Organizations pricing: AWS Organizations is offered at no additional charge. You are charged only for AWS resources that users and roles in your member accounts use. For example, you are charged the standard fees for Amazon EC2 instances that are used by users or roles in your member accounts. For information about the pricing of other AWS services, see AWS Pricing.

Accessing AWS Organizations: You can work with AWS Organizations in any of the following ways. AWS Management Console - The AWS Organizations console is a browser-based interface that you can use to manage your organization and your AWS resources. You can perform any task in your organization by using the console. AWS Command Line Tools - With the AWS command line tools, you can issue commands at your system's command line to perform AWS Organizations and AWS tasks. Working with the command line can be faster and more convenient than using the console. The command line tools also are useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) (for information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide) and the AWS Tools for Windows PowerShell (for information about installing and using the Tools for Windows PowerShell, see the AWS Tools for Windows PowerShell User Guide). AWS SDKs - The AWS SDKs consist of libraries and sample code for various programming languages and platforms (for example, Java, Python, Ruby, .NET, iOS, and Android). The SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services. AWS Organizations HTTPS Query API - The AWS Organizations HTTPS Query API gives you programmatic access to AWS Organizations and AWS. The HTTPS Query API lets you issue HTTPS requests directly to the service. When you use the HTTPS API, you must include code to digitally sign requests using your credentials. For more information, see Calling the API by Making HTTP Query Requests and the AWS Organizations API Reference.
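As a small sketch of the programmatic access described above, the following boto3 calls run from the organization's management account, list the member accounts, and create an organizational unit under the root. The OU name and email address are hypothetical placeholders.

import boto3

org = boto3.client("organizations")

# List the accounts that currently belong to the organization.
for account in org.list_accounts()["Accounts"]:
    print(account["Id"], account["Name"], account["Status"])

# Create an OU under the organization root (one root per organization).
root_id = org.list_roots()["Roots"][0]["Id"]
ou = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
print("Created OU:", ou["OrganizationalUnit"]["Id"])

# Member-account creation is asynchronous; this call only starts the request.
status = org.create_account(Email="workloads@example.com", AccountName="workloads-prod")
print("Create-account request:", status["CreateAccountStatus"]["State"])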

AWS Chatbot

AWS Chatbot is an AWS service that enables DevOps and software development teams to use Slack or Amazon Chime chat rooms to monitor and respond to operational events in their AWS Cloud. AWS Chatbot processes AWS service notifications from Amazon Simple Notification Service (Amazon SNS), and forwards them to Slack or Amazon Chime chat rooms so teams can analyze and act on them immediately. Teams can respond to AWS service events from a chat room where the entire team can collaborate, regardless of location. You can also run AWS CLI commands in Slack channels using AWS Chatbot.

Features of AWS Chatbot: AWS Chatbot enables ChatOps for AWS. ChatOps speeds software development and operations by enabling DevOps teams to use chat clients and chatbots to communicate and execute tasks. AWS Chatbot notifies chat users about events in their AWS services, so teams can collaboratively monitor and resolve issues in real time, instead of addressing emails from their SNS topics. AWS Chatbot also allows you to format incident metrics from Amazon CloudWatch as charts for viewing in chat notifications. Important features of the AWS Chatbot service include the following: Supports Slack and Amazon Chime - You can add AWS Chatbot to your Slack channel or Amazon Chime chat rooms in just a few clicks. Predefined AWS Identity and Access Management (IAM) policy templates - AWS Chatbot provides chat room-specific permission controls through AWS Identity and Access Management (IAM). AWS Chatbot's predefined templates make it easy to select and set up the permissions you want associated with a given channel or chat room. Receive notifications - Use AWS Chatbot to receive notifications about operational incidents and other events from supported sources, such as operational alarms, security alerts, or budget deviations. To set up notifications in the AWS Chatbot console, you simply choose the channels or chat rooms you want to receive notifications and then choose which Amazon Simple Notification Service (Amazon SNS) topics should trigger notifications. Monitor and manage AWS resources through the AWS CLI with Slack - AWS Chatbot supports CLI commands for most AWS services, making it easy to monitor and manage your AWS resources from Slack on desktop and mobile devices. Your teams can retrieve diagnostic information in real time, change your AWS resources, run AWS Systems Manager runbooks, and start long-running jobs from a centralized location. AWS Chatbot commands use the standard AWS Command Line Interface syntax.

How AWS Chatbot works: AWS Chatbot uses Amazon Simple Notification Service (Amazon SNS) topics to send event and alarm notifications from AWS services to your chat channels. Once an SNS topic is associated with a configured chat client, events and alarms from various services are processed and notifications are delivered to the specified chat channels and webhooks.
For Slack, after the Slack administrator approves AWS Chatbot support for the Slack workspace, anyone in the workspace can add AWS Chatbot to their Slack channels. For Amazon Chime, users with AWS Identity and Access Management (IAM) permissions to use Amazon Chime can add AWS Chatbot to their webhooks. You use the AWS Chatbot console to configure Amazon Chime and Slack clients to receive notifications from SNS topics. AWS Chatbot supports a number of AWS services, including Amazon CloudWatch, AWS Billing and Cost Management, and AWS Security Hub. For a complete list of supported services, see Monitoring AWS services. You can also run AWS CLI commands directly in Slack channels using AWS Chatbot. You can retrieve diagnostic information, configure AWS resources, and run workflows. To run a command, AWS Chatbot checks that all required parameters are entered. If any are missing, AWS Chatbot prompts you for the required information. AWS Chatbot then confirms whether the command is permissible by checking the command against what is allowed by the configured IAM roles and the channel guardrail policies. For more information, see Running AWS CLI commands from Slack channels and Understanding permissions.

Regions and quotas for AWS Chatbot: AWS Chatbot is a global service and can be used in all commercial AWS Regions. You can combine Amazon SNS topics from multiple Regions in a single AWS Chatbot configuration. For information about AWS Chatbot AWS Region availability and quotas, see AWS Chatbot endpoints and quotas. AWS Chatbot supports using all supported AWS services in the Regions where they are available.

AWS Chatbot requirements: To use AWS Chatbot, you need the following: an AWS account to associate with Amazon Chime or Slack chat clients during AWS Chatbot setup; administrative privileges for your Slack workspace or Amazon Chime chat room (you can be the Slack workspace owner or have the ability to work with workspace owners to get approval for installing AWS Chatbot); familiarity with AWS Identity and Access Management (IAM) and IAM roles and policies (for more information about IAM, see What is IAM? in the IAM User Guide); and experience with the AWS services supported by AWS Chatbot, including experience configuring those services to subscribe to Amazon Simple Notification Service (Amazon SNS) topics to send notifications. For information about supported services, see Using AWS Chatbot with Other AWS Services. To access Amazon CloudWatch metrics, AWS Chatbot requires an AWS Identity and Access Management (IAM) role with a permissions policy and a trust policy. You create this IAM role, with the required policies, using the AWS Chatbot console. You can use an existing IAM role, but it must have the required policies.

Accessing AWS Chatbot: You access and configure AWS Chatbot through the AWS Chatbot console at https://console.aws.amazon.com/chatbot/.
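AWS Chatbot itself is configured in its console, but the notifications it forwards typically originate from an SNS topic targeted by an alarm. The following hedged sketch creates an SNS topic and a CloudWatch alarm that publishes to it; the same topic would then be selected in an AWS Chatbot channel configuration. The instance ID, alarm name, and topic name are hypothetical placeholders.

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Topic that an AWS Chatbot channel configuration can subscribe to.
topic_arn = sns.create_topic(Name="chatbot-alerts")["TopicArn"]

# Alarm that notifies the topic when CPU on one instance stays above 80%.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)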

AWS Directory Service

AWS Directory Service provides multiple ways to set up and run Microsoft Active Directory with other AWS services such as Amazon EC2, Amazon RDS for SQL Server, FSx for Windows File Server, and AWS IAM Identity Center (successor to AWS Single Sign-On). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use a managed Active Directory in the AWS Cloud. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)-aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access.

AWS Directory Service options: AWS Directory Service includes several directory types to choose from. Also known as AWS Managed Microsoft AD, AWS Directory Service for Microsoft Active Directory is powered by an actual Microsoft Windows Server Active Directory (AD), managed by AWS in the AWS Cloud. It enables you to migrate a broad range of Active Directory-aware applications to the AWS Cloud. AWS Managed Microsoft AD works with Microsoft SharePoint, Microsoft SQL Server Always On Availability Groups, and many .NET applications. It also supports AWS managed applications and services including Amazon WorkSpaces, Amazon WorkDocs, Amazon QuickSight, Amazon Chime, Amazon Connect, and Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server, Amazon RDS for Oracle, and Amazon RDS for PostgreSQL. AWS Managed Microsoft AD is approved for applications in the AWS Cloud that are subject to U.S. Health Insurance Portability and Accountability Act (HIPAA) or Payment Card Industry Data Security Standard (PCI DSS) compliance when you enable compliance for your directory. All compatible applications work with user credentials that you store in AWS Managed Microsoft AD, or you can connect to your existing AD infrastructure with a trust and use credentials from an Active Directory running on-premises or on EC2 Windows. If you join EC2 instances to your AWS Managed Microsoft AD, your users can access Windows workloads in the AWS Cloud with the same Windows single sign-on (SSO) experience as when they access workloads in your on-premises network. AWS Managed Microsoft AD also supports federated use cases using Active Directory credentials. Alone, AWS Managed Microsoft AD enables you to sign in to the AWS Management Console. With AWS IAM Identity Center (successor to AWS Single Sign-On), you can also obtain short-term credentials for use with the AWS SDK and CLI, and use preconfigured SAML integrations to sign in to many cloud applications. By adding Azure AD Connect, and optionally Active Directory Federation Service (AD FS), you can sign in to Microsoft Office 365 and other cloud applications with credentials stored in AWS Managed Microsoft AD. The service includes key features that enable you to extend your schema, manage password policies, and enable secure LDAP communications through Secure Socket Layer (SSL)/Transport Layer Security (TLS).
You can also enable multi-factor authentication (MFA) for AWS Managed Microsoft AD to provide an additional layer of security when users access AWS applications from the internet. Because Active Directory is an LDAP directory, you can also use AWS Managed Microsoft AD for Linux Secure Shell (SSH) authentication and for other LDAP-enabled applications. AWS provides monitoring, daily snapshots, and recovery as part of the service—you add users and groups to AWS Managed Microsoft AD, and administer Group Policy using familiar Active Directory tools running on a Windows computer joined to the AWS Managed Microsoft AD domain. You can also scale the directory by deploying additional domain controllers and help improve application performance by distributing requests across a larger number of domain controllers. AWS Managed Microsoft AD is available in two editions: Standard and Enterprise. Standard Edition: AWS Managed Microsoft AD (Standard Edition) is optimized to be a primary directory for small and midsize businesses with up to 5,000 employees. It provides you with enough storage capacity to support up to 30,000* directory objects, such as users, groups, and computers. Enterprise Edition: AWS Managed Microsoft AD (Enterprise Edition) is designed to support enterprise organizations with up to 500,000* directory objects. * Upper limits are approximations. Your directory may support more or fewer directory objects depending on the size of your objects and the behavior and performance needs of your applications.

When to use: AWS Managed Microsoft AD is your best choice if you need actual Active Directory features to support AWS applications or Windows workloads, including Amazon Relational Database Service for Microsoft SQL Server. It's also best if you want a standalone AD in the AWS Cloud that supports Office 365, or if you need an LDAP directory to support your Linux applications. For more information, see AWS Managed Microsoft AD.
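Creating an AWS Managed Microsoft AD directory can also be done programmatically. The boto3 sketch below is a minimal example under assumed inputs: the domain name, admin password, VPC, and the two subnets (which must be in different Availability Zones) are hypothetical placeholders.

import boto3

ds = boto3.client("ds")

# Launch a Standard Edition AWS Managed Microsoft AD directory in an existing VPC.
directory = ds.create_microsoft_ad(
    Name="corp.example.com",            # fully qualified domain name
    ShortName="CORP",
    Password="Sup3rSecret!Example",     # Admin password; store securely in practice
    Edition="Standard",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # two AZs required
    },
)
print("Directory ID:", directory["DirectoryId"])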

AWS Snowball Edge

AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can do local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud. Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged, complete with E Ink shipping labels. Snowball Edge devices have three options for device configurations—Storage Optimized, Compute Optimized, and Compute Optimized with GPU. When this guide refers to Snowball Edge devices, it's referring to all options of the device. When specific information applies only to one or more optional configurations of devices (such as how the Snowball Edge with GPU has an on-board GPU), it is called out specifically.

AWS Snowball Edge features: Snowball Edge devices have the following features. Large amounts of storage capacity or compute functionality, depending on the options you choose when you create your job. Network adapters with transfer speeds of up to 100 Gbit/second. Encryption is enforced, protecting your data at rest and in physical transit. You can import or export data between your local environments and Amazon S3, and physically transport the data with one or more devices without using the internet. Snowball Edge devices are their own rugged box. The built-in E Ink display changes to show your shipping label when the device is ready to ship. Snowball Edge devices come with an on-board LCD display that can be used to manage network connections and get service status information. You can cluster Snowball Edge devices for local storage and compute jobs to achieve data durability across 5-10 devices and locally grow or shrink storage on demand. You can use the file interface to read and write data to an AWS Snowball Edge device through a file share or Network File System (NFS) mount point. You can write Python-language Lambda functions and associate them with Amazon S3 buckets when you create an AWS Snowball Edge device job; each function triggers when a local Amazon S3 PUT object action is run on the associated bucket on the device. Snowball Edge devices have Amazon S3 and Amazon EC2 compatible endpoints available, enabling programmatic use cases. Snowball Edge devices support the sbe1, sbe-c, and sbe-g instance types, which you can use to run compute instances on the device using Amazon Machine Images (AMIs).

Prerequisites for using Snowball Edge: Before creating your first job, keep the following in mind. The Amazon S3 bucket associated with the job must use the Amazon S3 Standard storage class. For jobs that import data into Amazon S3, follow these steps: Create an AWS account with AWS Identity and Access Management (IAM) administrator-level permissions. For more information, see Setting Up Your AWS Access for AWS Snowball Edge. Confirm that the files and folders to transfer are named according to the object key naming guidelines for Amazon S3. Any files or folders with names that don't meet these guidelines aren't imported into Amazon S3. Plan what data you want to import into Amazon S3. For more information, see Planning Your Large Transfer. Before exporting data from Amazon S3, follow these steps: Understand what data is exported when you create your job. For more information, see Using Export Ranges.
For any files with a colon (:) in the file name, change the file names in Amazon S3 before you create the export job to get these files. Files with a colon in the file name fail export to Microsoft Windows Server. For jobs using compute instances: Before you can add any AMIs to your job, you must have an AMI in your AWS account and it must be a supported image type. Currently, supported AMIs are based on the Amazon Linux 2, CentOS 7 (x86_64) - with Updates HVM, or Ubuntu 16.04 LTS - Xenial (HVM) images. You can get these images from the AWS Marketplace. If you're using SSH to connect to the instances running on a Snowball Edge, you must already have the key pair for connecting to the instance. For information specific to using compute instances on a device, see Using Amazon EC2 Compute Instances. Services Related to the AWS Snowball Edge You can use an AWS Snowball Edge device with the following related AWS services: Amazon S3 - Transfer data to an AWS Snowball Edge device using the Amazon S3 API for Snowball Edge, which supports a subset of the Amazon S3 API operations. You can do this in a single Snowball Edge device or in a cluster of devices for increased data durability. You can also import data that is hosted on an AWS Snowball Edge device to Amazon S3 and your local environment through a shipped Snowball Edge device. For more information, see the Amazon Simple Storage Service User Guide. Amazon EC2 - Run compute instances on a Snowball Edge device using the Amazon EC2 compatible endpoint, which supports a subset of the Amazon EC2 API operations. For more information about using Amazon EC2 in AWS, see Getting started with Amazon EC2 Linux instances. AWS Lambda powered by AWS IoT Greengrass - Invoke Lambda functions based on Amazon S3 storage actions made on an AWS Snowball Edge device. These Lambda functions are associated with an AWS Snowball Edge device during job creation. For more information about using Lambda, see the AWS Lambda Developer Guide. Amazon Elastic Block Store (Amazon EBS) - Provide block-level storage volumes for use with EC2 instances. For more information, see Amazon Elastic Block Store (Amazon EBS). AWS Identity and Access Management (IAM) - Use this service to securely control access to AWS resources. For more information, see What is IAM? AWS Security Token Service (AWS STS) - Request temporary, limited-privilege credentials for IAM users or for users that you authenticate (federated users). For more information, see Temporary security credentials in IAM. Amazon EC2 Systems Manager - Use this service to view and control your infrastructure on AWS. For more information, see What is AWS Systems Manager? Accessing the Service You can either use the AWS Snow Family Management Console or the job management API to create and manage jobs. For information about the job management API, see Job Management API Reference for AWS Snowball. Accessing an AWS Snowball Edge Device After your Snowball Edge device or devices are onsite, you can access them in several different ways. You can use the LCD display (used only for network configuration) that's built into each device, the Amazon S3 and Amazon EC2 compatible endpoints, or the available file interface. For more information, see Using an AWS Snowball Edge Device.
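As a rough sketch of the job management API mentioned above, the following boto3 example creates an import job for a Snowball Edge device. The bucket ARN, IAM role ARN, and address ID are placeholder assumptions; most customers create jobs from the AWS Snow Family Management Console instead.

import boto3

snowball = boto3.client("snowball")

# Placeholder ARNs and address ID; the shipping address must be created beforehand (create_address).
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",                 # Snowball Edge device family
    SnowballCapacityPreference="T100",   # illustrative capacity option for a Storage Optimized device
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]},
    RoleARN="arn:aws:iam::111122223333:role/ExampleSnowballImportRole",
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    ShippingOption="SECOND_DAY",
    Description="Example import of on-premises data into Amazon S3",
)
print("JobId:", job["JobId"])

# Once the device is on site and unlocked, its Amazon S3 compatible endpoint can be used
# like any other S3 endpoint, for example:
# s3_on_device = boto3.client("s3", endpoint_url="https://192.0.2.10:8443", verify="device-cert.pem")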

AWS Storage Gateway

AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between your on-premises IT environment and the AWS storage infrastructure in the AWS Cloud. What is Amazon S3 File Gateway? AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions. Amazon S3 File Gateway - Amazon S3 File Gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a service and a virtual software appliance. By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB). The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor. The gateway provides access to objects in S3 as files or file share mount points. With an S3 File Gateway, you can do the following: You can store and retrieve files directly using the NFS version 3 or 4.1 protocol. You can store and retrieve files directly using the SMB protocol, versions 2 and 3. You can access your data directly in Amazon S3 from any AWS Cloud application or service. You can manage your S3 data using lifecycle policies, cross-region replication, and versioning. You can think of an S3 File Gateway as a file system mount on Amazon S3. An S3 File Gateway simplifies file storage in Amazon S3, integrates with existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching. An S3 File Gateway manages data transfer to and from AWS, buffers applications from network congestion, optimizes and streams data in parallel, and manages bandwidth consumption. S3 File Gateway integrates with AWS services, for example with the following: Common access management using AWS Identity and Access Management (IAM) Encryption using AWS Key Management Service (AWS KMS) Monitoring using Amazon CloudWatch (CloudWatch) Audit using AWS CloudTrail (CloudTrail) Operations using the AWS Management Console and AWS Command Line Interface (AWS CLI) Billing and cost management In the following documentation, you can find a Getting Started section that covers setup information common to all gateways and also gateway-specific setup sections. The Getting Started section shows you how to deploy, activate, and configure storage for a gateway. The management section shows you how to manage your gateway and resources. It provides instructions on how to create and use an S3 File Gateway, shows you how to create a file share, map your drive to an Amazon S3 bucket, and upload files and folders to Amazon S3, and describes how to perform management tasks for all gateway types and resources. In this guide, you can primarily find how to work with gateway operations by using the AWS Management Console.
If you want to perform these operations programmatically, see the AWS Storage Gateway API Reference. What is Amazon FSx File Gateway? Storage Gateway offers File Gateway, Volume Gateway, and Tape Gateway storage solutions. Amazon FSx File Gateway (FSx File Gateway) is a new File Gateway type that provides low latency and efficient access to in-cloud FSx for Windows File Server file shares from your on-premises facility. If you maintain on-premises file storage because of latency or bandwidth requirements, you can instead use FSx File Gateway for seamless access to fully managed, highly reliable, and virtually unlimited Windows file shares provided in the AWS Cloud by FSx for Windows File Server. Benefits of using Amazon FSx File Gateway FSx File Gateway provides the following benefits: Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud storage. Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data. Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS, without taxing your networks or impacting the latencies experienced by your most demanding applications. What is Tape Gateway? AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the Amazon Web Services Cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers file-based File Gateways (Amazon S3 File and Amazon FSx File), volume-based (Cached and Stored), and tape-based storage solutions. Topics Tape Gateway Are you a first-time Storage Gateway user? How Tape Gateway works (architecture) Storage Gateway pricing Plan your Storage Gateway deployment Tape Gateway - A Tape Gateway provides cloud-backed virtual tape storage. With a Tape Gateway, you can cost-effectively and durably archive backup data in S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. A Tape Gateway provides a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling, and maintaining a physical tape infrastructure. You can deploy Storage Gateway either on-premises as a VM appliance running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor, as a hardware appliance, or in AWS as an Amazon EC2 instance. You deploy your gateway on an EC2 instance to provision iSCSI storage volumes in AWS. You can use gateways hosted on EC2 instances for disaster recovery, data mirroring, and providing storage for applications hosted on Amazon EC2. For an architectural overview, see How Tape Gateway works (architecture). To see the wide range of use cases that AWS Storage Gateway helps make possible, see AWS Storage Gateway. Documentation: For Tape Gateway documentation, see Creating a Tape Gateway. Are you a first-time Storage Gateway user? In the following documentation, you can find a Getting Started section that covers setup information common to all gateways and also gateway-specific setup sections. The Getting Started section shows you how to deploy, activate, and configure storage for a gateway.
The management section shows you how to manage your gateway and resources: Creating a Tape Gateway provides instructions on how to create and use a Tape Gateway. It shows you how to back up data to virtual tapes and archive the tapes. Managing Your Gateway describes how to perform management tasks for your gateway and its resources. In this guide, you can primarily find how to work with gateway operations by using the AWS Management Console. If you want to perform these operations programmatically, see the AWS Storage Gateway API Reference. Storage Gateway pricing For current information about pricing, see Pricing on the AWS Storage Gateway details page. Plan your Storage Gateway deployment By using the Storage Gateway software appliance, you can connect your existing on-premises application infrastructure with scalable, cost-effective AWS cloud storage that provides data security features. To deploy Storage Gateway, you first need to decide on the following two things: Your gateway type - this guide covers the following gateway type: Tape Gateway - If you are looking for a cost-effective, durable, long-term, offsite alternative for data archiving, deploy a Tape Gateway. With its virtual tape library (VTL) interface, you can use your existing tape-based backup software infrastructure to store data on virtual tape cartridges that you create. For more information, see Supported third-party backup applications for a Tape Gateway. When you archive tapes, you don't worry about managing tapes on your premises and arranging shipments of tapes offsite. For an architectural overview, see How Tape Gateway works (architecture). Hosting option - You can run Storage Gateway either on-premises as a VM appliance or hardware appliance, or in AWS as an Amazon EC2 instance. For more information, see Requirements. If your data center goes offline and you don't have an available host, you can deploy a gateway on an EC2 instance. Storage Gateway provides an Amazon Machine Image (AMI) that contains the gateway VM image. Additionally, as you configure a host to deploy a gateway software appliance, you need to allocate sufficient storage for the gateway VM. Before you continue to the next step, make sure that you have done the following: For a gateway deployed on-premises, choose the type of VM host and set it up. Your options are VMware ESXi Hypervisor, Microsoft Hyper-V, and Linux Kernel-based Virtual Machine (KVM). If you deploy the gateway behind a firewall, make sure that ports are accessible to the gateway VM. For more information, see Requirements. Install your client backup software. For more information, see Supported third-party backup applications for a Tape Gateway. What is Volume Gateway? AWS Storage Gateway connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure. You can use the service to store data in the Amazon Web Services Cloud for scalable and cost-effective storage that helps maintain data security. AWS Storage Gateway offers file-based File Gateways (Amazon S3 File and Amazon FSx File), volume-based (Cached and Stored), and tape-based storage solutions. Topics Volume Gateway Are you a first-time Storage Gateway user?
How Volume Gateway works (architecture) Storage Gateway pricing Plan your Storage Gateway deployment Volume Gateway Volume Gateway - A Volume Gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. You can deploy a Volume Gateway either on-premises as a VM appliance running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor, as a hardware appliance, or in AWS as an Amazon EC2 instance. The gateway supports the following volume configurations: Cached volumes - You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. Stored volumes - If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon Elastic Compute Cloud (Amazon EC2). For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. Documentation: For Volume Gateway documentation, see Creating a Volume Gateway. Are you a first-time Storage Gateway user? In the following documentation, you can find a Getting Started section that covers setup information common to all gateways and also gateway-specific setup sections. The Getting Started section shows you how to deploy, activate, and configure storage for a gateway. The management section shows you how to manage your gateway and resources: Creating a Volume Gateway describes how to create and use a Volume Gateway. It shows you how to create storage volumes and back up data to the volumes. Managing Your Gateway describes how to perform management tasks for your gateway and its resources. In this guide, you can primarily find how to work with gateway operations by using the AWS Management Console. If you want to perform these operations programmatically, see the AWS Storage Gateway API Reference. Storage Gateway pricing For current information about pricing, see Pricing on the AWS Storage Gateway details page. Plan your Storage Gateway deployment By using the Storage Gateway software appliance, you can connect your existing on-premises application infrastructure with scalable, cost-effective AWS cloud storage that provides data security features. To deploy Storage Gateway, you first need to decide on the following two things: Your gateway type - this guide covers the following gateway type: Volume Gateway - Using Volume Gateways, you can create storage volumes in the Amazon Web Services Cloud. Your on-premises applications can access these as Internet Small Computer System Interface (iSCSI) targets. There are two options—cached and stored volumes. With cached volumes, you store volume data in AWS, with a small portion of recently accessed data in the cache on-premises. This approach enables low-latency access to your frequently accessed dataset. It also provides seamless access to your entire dataset stored in AWS. By using cached volumes, you can scale your storage resource without having to provision additional hardware. 
With stored volumes, you store the entire set of volume data on-premises and store periodic point-in-time backups (snapshots) in AWS. In this model, your on-premises storage is primary, delivering low-latency access to your entire dataset. AWS storage is the backup that you can restore in the event of a disaster in your data center. For an architectural overview of Volume Gateways, see Cached volumes architecture and Stored volumes architecture. Hosting option - You can run Storage Gateway either on-premises as a VM appliance or hardware appliance, or in AWS as an Amazon EC2 instance. For more information, see Requirements. If your data center goes offline and you don't have an available host, you can deploy a gateway on an EC2 instance. Storage Gateway provides an Amazon Machine Image (AMI) that contains the gateway VM image. Additionally, as you configure a host to deploy a gateway software appliance, you need to allocate sufficient storage for the gateway VM. Before you continue to the next step, make sure that you have done the following: For a gateway deployed on-premises, choose the type of VM host and set it up. Your options are VMware ESXi Hypervisor, Microsoft Hyper-V, and Linux Kernel-based Virtual Machine (KVM). If you deploy the gateway behind a firewall, make sure that ports are accessible to the gateway VM. For more information, see Requirements.
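To make the file share workflow described above concrete, here is a hypothetical boto3 sketch that creates an NFS file share on an already activated S3 File Gateway. The gateway ARN, IAM role, bucket name, and client CIDR range are placeholder assumptions.

import uuid
import boto3

sgw = boto3.client("storagegateway")

# Placeholder gateway, role, and bucket; the role must allow the gateway to access the bucket.
share = sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B",
    Role="arn:aws:iam::111122223333:role/ExampleStorageGatewayBucketAccessRole",
    LocationARN="arn:aws:s3:::example-file-share-bucket",
    DefaultStorageClass="S3_STANDARD",
    ClientList=["10.0.0.0/16"],  # on-premises clients allowed to mount the share
)
print("FileShareARN:", share["FileShareARN"])

# On an on-premises NFS client, the share could then be mounted with something like:
#   sudo mount -t nfs -o nolock,hard <gateway-ip>:/example-file-share-bucket /mnt/s3share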

Amazon DocumentDB

Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service that makes it easy for you to set up, operate, and scale MongoDB-compatible databases. Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB. Overview of Amazon DocumentDB The following are some high-level features of Amazon DocumentDB: Amazon DocumentDB automatically grows the size of your storage volume as your database storage needs grow. Your storage volume grows in increments of 10 GB, up to a maximum of 64 TiB. You don't need to provision any excess storage for your cluster to handle future growth. With Amazon DocumentDB, you can increase read throughput to support high-volume application requests by creating up to 15 replica instances. Amazon DocumentDB replicas share the same underlying storage, lowering costs and avoiding the need to perform writes at the replica nodes. This capability frees up more processing power to serve read requests and reduces the replica lag time—often down to single digit milliseconds. You can add replicas in minutes regardless of the storage volume size. Amazon DocumentDB also provides a reader endpoint, so the application can connect without having to track replicas as they are added and removed. Amazon DocumentDB lets you scale the compute and memory resources for each of your instances up or down. Compute scaling operations typically complete in a few minutes. Amazon DocumentDB runs in Amazon Virtual Private Cloud (Amazon VPC), so you can isolate your database in your own virtual network. You can also configure firewall settings to control network access to your cluster. Amazon DocumentDB continuously monitors the health of your cluster. On an instance failure, Amazon DocumentDB automatically restarts the instance and associated processes. Amazon DocumentDB doesn't require a crash recovery replay of database redo logs, which greatly reduces restart times. Amazon DocumentDB also isolates the database cache from the database process, enabling the cache to survive an instance restart. On instance failure, Amazon DocumentDB automates failover to one of up to 15 Amazon DocumentDB replicas that you create in other Availability Zones. If no replicas have been provisioned and a failure occurs, Amazon DocumentDB tries to create a new Amazon DocumentDB instance automatically. The backup capability in Amazon DocumentDB enables point-in-time recovery for your cluster. This feature allows you to restore your cluster to any second during your retention period, up to the last 5 minutes. You can configure your automatic backup retention period up to 35 days. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon DocumentDB backups are automatic, incremental, and continuous, and they have no impact on your cluster performance. With Amazon DocumentDB, you can encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database cluster running with Amazon DocumentDB encryption, data stored at rest in the underlying storage is encrypted. The automated backups, snapshots, and replicas in the same cluster are also encrypted. 
If you are new to AWS services, use the following resources to learn more: AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see Cloud Computing with Amazon Web Services. AWS provides a number of database services. For guidance on which service is best for your environment, see Databases on AWS. Clusters A cluster consists of 0 to 16 instances and a cluster storage volume that manages the data for those instances. All writes are done through the primary instance. All instances (primary and replicas) support reads. The cluster's data is stored in the cluster volume with copies in three different Availability Zones. Instances An Amazon DocumentDB instance is an isolated database environment in the cloud. An instance can contain multiple user-created databases. You can create and modify an instance using the AWS Management Console or the AWS CLI. The computation and memory capacity of an instance are determined by its instance class. You can select the instance that best meets your needs. If your needs change over time, you can choose a different instance class. For instance class specifications, see Instance Class Specifications. Amazon DocumentDB instances run only in the Amazon VPC environment. Amazon VPC gives you control of your virtual networking environment: You can choose your own IP address range, create subnets, and configure routing and access control lists (ACLs). Before you can create Amazon DocumentDB instances, you must create a cluster to contain the instances. Regions and Availability Zones Regions and Availability Zones define the physical locations of your cluster and instances. Regions AWS Cloud computing resources are housed in highly available data center facilities in different areas of the world (for example, North America, Europe, or Asia). Each data center location is called a Region. Each AWS Region is designed to be completely isolated from the other AWS Regions. Within each Region are multiple Availability Zones. By launching your nodes in different Availability Zones, you can achieve the greatest possible fault tolerance. Availability Zones Each AWS Region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones, and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. By launching instances for a given cluster in multiple Availability Zones, you can protect your applications from the unlikely event of an Availability Zone failing. The Amazon DocumentDB architecture separates storage and compute. For the storage layer, Amazon DocumentDB replicates six copies of your data across three AWS Availability Zones. As an example, if you are launching an Amazon DocumentDB cluster in a Region that only supports two Availability Zones, your data storage will be replicated six ways across three Availability Zones, but your compute instances will only be available in two Availability Zones. Amazon DocumentDB Pricing Amazon DocumentDB clusters are billed based on the following components. Amazon DocumentDB does not currently have a free tier, so creating a cluster will incur costs. Instance hours (per hour)—Based on the instance class of the instance (for example, db.r5.xlarge).
Pricing is listed on a per-hour basis, but bills are calculated down to the second and show times in decimal form. Amazon DocumentDB usage is billed in one-second increments, with a minimum of 10 minutes. For more information, see Managing Instance Classes. I/O requests (per 1 million requests per month) — Total number of storage I/O requests that you make in a billing cycle. Backup storage (per GiB per month) — Backup storage is the storage that is associated with automated database backups and any active database snapshots that you have taken. Increasing your backup retention period or taking additional database snapshots increases the backup storage consumed by your database. Backup storage is metered in GB-months; per-second billing does not apply. For more information, see Backing Up and Restoring in Amazon DocumentDB. Data transfer (per GB) — Data transfer in and out of your instance from or to the internet or other AWS Regions. Monitoring There are several ways that you can track the performance and health of an instance. You can use the free Amazon CloudWatch service to monitor the performance and health of an instance. You can find performance charts on the Amazon DocumentDB console. You can subscribe to Amazon DocumentDB events to be notified when changes occur with an instance, snapshot, parameter group, or security group. For more information, see the following: Monitoring Amazon DocumentDB with CloudWatch Logging Amazon DocumentDB API Calls with AWS CloudTrail Interfaces There are multiple ways for you to interact with Amazon DocumentDB, including the AWS Management Console and the AWS CLI. AWS Management Console The AWS Management Console is a simple web-based user interface. You can manage your clusters and instances from the console with no programming required. To access the Amazon DocumentDB console, sign in to the AWS Management Console and open the Amazon DocumentDB console at https://console.aws.amazon.com/docdb. AWS CLI You can use the AWS Command Line Interface (AWS CLI) to manage your Amazon DocumentDB clusters and instances. With minimal configuration, you can start using all of the functionality provided by the Amazon DocumentDB console from your favorite terminal program. To install the AWS CLI, see Installing the AWS Command Line Interface. To begin using the AWS CLI for Amazon DocumentDB, see AWS Command Line Interface Reference for Amazon DocumentDB. The mongo Shell To connect to your cluster to create, read, update, and delete documents in your databases, you can use the mongo shell with Amazon DocumentDB. To download and install the mongo 4.0 shell, see Step 4: Install the mongo shell. MongoDB Drivers For developing and writing applications against an Amazon DocumentDB cluster, you can also use the MongoDB drivers with Amazon DocumentDB.
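Because Amazon DocumentDB is MongoDB compatible, an application can also connect with a standard MongoDB driver. The following pymongo sketch assumes a hypothetical cluster endpoint, user, and password, and the Amazon-provided CA bundle downloaded locally as global-bundle.pem; none of these values come from this guide.

from pymongo import MongoClient

# Placeholder endpoint and credentials; retryWrites=false is required for Amazon DocumentDB.
client = MongoClient(
    "mongodb://masteruser:MASTER_PASSWORD@"
    "sample-cluster.cluster-abc123example.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

db = client["sampledb"]
db["products"].insert_one({"name": "widget", "qty": 5})   # writes go to the primary instance
print(db["products"].find_one({"name": "widget"}))        # reads can be served by replicas
client.close()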

Amazon Elastic File System

Amazon EFS is a simple, serverless, elastic, set-and-forget file system that automatically grows and shrinks as you add and remove files with no need for management or provisioning. You can use Amazon EFS with Amazon EC2, AWS Lambda, Amazon ECS, Amazon EKS and other AWS compute instances, or with on-premises servers. Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget elastic file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations. Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple compute instances, including Amazon EC2, Amazon ECS, and AWS Lambda, can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one compute instance or server. With Amazon EFS, you pay only for the storage used by your file system and there is no minimum fee or setup cost. Amazon EFS offers a range of storage classes designed for different use cases. These include: Standard storage classes - EFS Standard and EFS Standard-Infrequent Access (Standard-IA), which offer multi-AZ resilience and the highest levels of durability and availability. One Zone storage classes - EFS One Zone and EFS One Zone-Infrequent Access (EFS One Zone-IA), which offer customers the choice of additional savings by choosing to save their data in a single Availability Zone. For more information, see EFS storage classes. Costs related to Provisioned Throughput are determined by the throughput values you specify. For more information, see Amazon EFS Pricing. Amazon EFS is designed to provide the throughput, IOPS, and low latency needed for a broad range of workloads. With Amazon EFS, you can choose from two performance modes and two throughput modes: The default General Purpose performance mode is ideal for latency-sensitive use cases, like web serving environments, content management systems, home directories, and general file serving. File systems in the Max I/O mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of higher latencies for file system operations. For more information, see Performance modes. Using the default Bursting Throughput mode, throughput scales as your file system grows. Using Provisioned Throughput mode, you can specify the throughput of your file system independent of the amount of data stored. For more information, see Throughput modes. The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems using Standard storage classes store data and metadata across multiple Availability Zones in an AWS Region. EFS file systems can grow to petabyte scale, drive high levels of throughput, and allow massively parallel access from compute instances to your data. Amazon EFS provides file system access semantics, such as strong data consistency and file locking. 
For more information, see Data consistency in Amazon EFS. Amazon EFS also enables you to control access to your file systems through Portable Operating System Interface (POSIX) permissions. For more information, see Security in Amazon EFS. Amazon EFS supports authentication, authorization, and encryption capabilities to help you meet your security and compliance requirements. Amazon EFS supports two forms of encryption for file systems, encryption in transit and encryption at rest. You can enable encryption at rest when creating an Amazon EFS file system. If you do, all your data and metadata is encrypted. You can enable encryption in transit when you mount the file system. NFS client access to EFS is controlled by both AWS Identity and Access Management (IAM) policies and network security policies like security groups. For more information, see Data encryption in Amazon EFS, Identity and access management for Amazon EFS, and Controlling network access to Amazon EFS file systems for NFS clients. Note Using Amazon EFS with Microsoft Windows-based Amazon EC2 instances is not supported.
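As a minimal sketch of the web services interface described above, the following boto3 example creates an encrypted General Purpose file system and one mount target. The subnet and security group IDs are placeholders, and a mount target is normally created in each Availability Zone you use.

import boto3

efs = boto3.client("efs")

# Create the file system; storage grows and shrinks automatically, so no size is specified.
fs = efs.create_file_system(
    CreationToken="example-efs-token",     # idempotency token
    PerformanceMode="generalPurpose",      # or "maxIO" for higher aggregate throughput
    Encrypted=True,                        # encryption at rest
    Tags=[{"Key": "Name", "Value": "example-file-system"}],
)

# Placeholder subnet and security group; NFS clients in this subnet can then mount the file system.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0abc1234def567890",
    SecurityGroups=["sg-0abc1234def567890"],
)
print("FileSystemId:", fs["FileSystemId"])

# A Linux EC2 instance would then mount it over NFSv4.1, for example:
#   sudo mount -t nfs4 -o nfsvers=4.1 <file-system-id>.efs.<region>.amazonaws.com:/ /mnt/efs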

Amazon ElastiCache

Amazon ElastiCache makes it easy to set up, manage, and scale distributed in-memory cache environments in the AWS Cloud. It provides a high-performance, resizable, and cost-effective in-memory cache, while removing complexity associated with deploying and managing a distributed cache environment. ElastiCache works with both the Redis and Memcached engines; to see which works best for you, see the Comparing Memcached and Redis topic in either user guide. Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution. At the same time, it helps remove the complexity associated with deploying and managing a distributed cache environment. Overview of ElastiCache for Redis Existing applications that use Redis can use ElastiCache with almost no modification. Your applications simply need information about the host names and port numbers of the ElastiCache nodes that you have deployed. ElastiCache for Redis has multiple features that help make the service more reliable for critical production deployments: Automatic detection of and recovery from cache node failures. Multi-AZ with automatic failover from a failed primary cluster to a read replica, in Redis clusters that support replication. Redis (cluster mode enabled) supports partitioning your data across up to 500 shards. For Redis version 3.2 and later, all versions support encryption in transit and encryption at rest, with authentication. This support helps you build HIPAA-compliant applications. Flexible Availability Zone placement of nodes and clusters for increased fault tolerance. Integration with other AWS services such as Amazon EC2, Amazon CloudWatch, AWS CloudTrail, and Amazon SNS. This integration helps provide a managed in-memory caching solution that is high-performance and highly secure. ElastiCache for Redis manages backups, software patching, automatic failure detection, and recovery. You can have automated backups performed when you need them, or manually create your own backup snapshot. You can use these backups to restore a cluster. The ElastiCache for Redis restore process works reliably and efficiently. You can get high availability with a primary instance and a synchronous secondary instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. You can control access to your ElastiCache for Redis clusters by using AWS Identity and Access Management to define users and permissions. You can also help protect your clusters by putting them in a virtual private cloud (VPC). By using the Global Datastore for Redis feature, you can work with fully managed, fast, reliable, and secure replication across AWS Regions. Using this feature, you can create cross-Region read replica clusters for ElastiCache for Redis to enable low-latency reads and disaster recovery across AWS Regions. Data tiering provides a price-performance option for Redis workloads by utilizing lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20 percent of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD. For more information, see Data tiering. Clusters The basic building block of ElastiCache for Redis is the cluster.
A cluster is a collection of one or more cache nodes, all of which run an instance of the Redis cache engine software. When you create a cluster, you specify the engine and version for all of the nodes to use. Your ElastiCache for Redis instances are designed to be accessed through an Amazon EC2 instance. You can create and modify a cluster by using the AWS CLI, the ElastiCache for Redis API, or the AWS Management Console. Each ElastiCache for Redis cluster runs a Redis engine version. Each Redis engine version has its own supported features. Additionally, each Redis engine version has a set of parameters in a parameter group that control the behavior of the clusters that it manages. The computation and memory capacity of a cluster is determined by its instance, or node, class. You can select the node type that best meets your needs. If your needs change over time, you can change node types. For information, see Supported node types. You can also leverage data-tiering when considering your node type needs. Data tiering is a feature where some least frequently used data is stored on disk to mitigate against memory limitations on applications that can tolerate additional latency when data on SSD (solid state drives) is accessed. Note For pricing information on ElastiCache instance classes, see Amazon ElastiCache pricing. Cluster node storage comes in two types: Standard and memory-optimized. They differ in performance characteristics and price, allowing you to tailor your storage performance and cost to your needs. Each instance has minimum and maximum storage requirements depending on the storage type. It's important to have sufficient storage so that your clusters have room to grow. Also, sufficient storage makes sure that features have room to write content or log entries. You can run a cluster on a virtual private cloud (VPC) using the Amazon Virtual Private Cloud (Amazon VPC) service. When you use a VPC, you have control over your virtual networking environment. You can choose your own IP address range, create subnets, and configure routing and access control lists. ElastiCache manages backups, software patching, automatic failure detection, and recovery. There's no additional cost to run your cluster in a VPC. For more information on using Amazon VPC with ElastiCache for Redis, see Amazon VPCs and ElastiCache security. AWS Regions and Availability Zones Amazon cloud computing resources are housed in highly available data center facilities in different areas of the world (for example, North America, Europe, or Asia). Each data center location is called an AWS Region. Each AWS Region contains multiple distinct locations called Availability Zones, or AZs. Each Availability Zone is engineered to be isolated from failures in other Availability Zones. Each is engineered to provide inexpensive, low-latency network connectivity to other Availability Zones in the same AWS Region. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. For more information, see Choosing regions and availability zones. You can create your cluster in several Availability Zones, an option called a Multi-AZ deployment. When you choose this option, Amazon automatically provisions and maintains a secondary standby node instance in a different Availability Zone. Your primary node instance is asynchronously replicated across Availability Zones to the secondary instance. 
This approach helps provide data redundancy and failover support, eliminate I/O freezes, and minimize latency spikes during system backups. For more information, see Minimizing downtime in ElastiCache for Redis with Multi-AZ. Security A security group controls the access to a cluster. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. For more information about security groups, see Security in ElastiCache for Redis. Monitoring an ElastiCache for Redis cluster There are several ways that you can track the performance and health of a ElastiCache for Redis cluster. You can use the CloudWatch service to monitor the performance and health of a cluster. CloudWatch performance charts are shown in the ElastiCache for Redis console. You can also subscribe to ElastiCache for Redis events to be notified about changes to a cluster, snapshot, parameter group, or security group. For more information, see Monitoring Use with CloudWatch metrics.
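Because applications connect to ElastiCache for Redis with standard Redis clients, a sketch with the redis-py library looks like the following. The endpoint host name is a placeholder, and ssl=True applies only if in-transit encryption is enabled on the cluster.

import redis

# Placeholder primary endpoint of an ElastiCache for Redis replication group.
cache = redis.Redis(
    host="example-redis.abc123.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,                 # only if in-transit encryption is enabled
    decode_responses=True,
)

cache.set("session:42", "cached-session-payload", ex=300)  # cache entry with a 5-minute TTL
print(cache.get("session:42"))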

AWS Serverless Application Model (AWS SAM)

The AWS Serverless Application Model (AWS SAM) is an open-source framework that enables you to build serverless applications on AWS. It provides you with a template specification to define your serverless application, and a command line interface (CLI) tool. The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS. A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function—it can include additional resources such as APIs, databases, and event source mappings. You can use AWS SAM to define your serverless applications. AWS SAM consists of the following components: AWS SAM template specification. You use this specification to define your serverless application. It provides you with a simple and clean syntax to describe the functions, APIs, permissions, configurations, and events that make up a serverless application. You use an AWS SAM template file to operate on a single, deployable, versioned entity that's your serverless application. For the full AWS SAM template specification, see AWS Serverless Application Model (AWS SAM) specification. AWS SAM command line interface (AWS SAM CLI). You use this tool to build serverless applications that are defined by AWS SAM templates. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI Command Reference, see AWS SAM CLI command reference. This guide shows you how to use AWS SAM to define, test, and deploy a simple serverless application. It also provides an example application that you can download, test locally, and deploy to the AWS Cloud. You can use this example application as a starting point for developing your own serverless applications. Benefits of using AWS SAM Because AWS SAM integrates with other AWS services, creating serverless applications with AWS SAM provides the following benefits: Single-deployment configuration. AWS SAM makes it easy to organize related components and resources, and operate on a single stack. You can use AWS SAM to share configuration (such as memory and timeouts) between resources, and deploy all related resources together as a single, versioned entity. Extension of AWS CloudFormation. Because AWS SAM is an extension of AWS CloudFormation, you get the reliable deployment capabilities of AWS CloudFormation. You can define resources by using AWS CloudFormation in your AWS SAM template. Also, you can use the full suite of resources, intrinsic functions, and other template features that are available in AWS CloudFormation. Built-in best practices. You can use AWS SAM to define and deploy your infrastructure as config. This makes it possible for you to use and enforce best practices such as code reviews. Also, with a few lines of configuration, you can enable safe deployments through CodeDeploy, and can enable tracing by using AWS X-Ray. Local debugging and testing. The AWS SAM CLI lets you locally build, test, and debug serverless applications that are defined by AWS SAM templates. The CLI provides a Lambda-like execution environment locally. 
It helps you catch issues upfront by providing parity with the actual Lambda execution environment. To step through and debug your code to understand what the code is doing, you can use AWS SAM with AWS toolkits like the AWS Toolkit for JetBrains, AWS Toolkit for PyCharm, AWS Toolkit for IntelliJ, and AWS Toolkit for Visual Studio Code. This tightens the feedback loop by making it possible for you to find and troubleshoot issues that you might run into in the cloud. Deep integration with development tools. You can use AWS SAM with a suite of AWS tools for building serverless applications. You can discover new applications in the AWS Serverless Application Repository. For authoring, testing, and debugging AWS SAM-based serverless applications, you can use the AWS Cloud9 IDE. To build a deployment pipeline for your serverless applications, you can use CodeBuild, CodeDeploy, and CodePipeline. You can also use AWS CodeStar to get started with a project structure, code repository, and a CI/CD pipeline that's automatically configured for you. To deploy your serverless application, you can use the Jenkins plugin.

AWS Key Management Service

AWS Key Management Service (AWS KMS) is an encryption and key management service scaled for the cloud. AWS KMS keys and functionality are used by other AWS services, and you can use them to protect data in your own applications that use AWS. AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the cryptographic keys that are used to protect your data. AWS KMS uses hardware security modules (HSMs) to protect and validate your AWS KMS keys under the FIPS 140-2 Cryptographic Module Validation Program. The HSMs that AWS KMS uses to protect KMS keys in China (Beijing) and China (Ningxia) Regions comply with all pertinent Chinese regulations, but are not validated under the FIPS 140-2 Cryptographic Module Validation Program. AWS KMS integrates with most other AWS services that encrypt your data. AWS KMS also integrates with AWS CloudTrail to log use of your KMS keys for auditing, regulatory, and compliance needs. You can use the AWS KMS API to create and manage KMS keys and special features, such as custom key stores, and use KMS keys in cryptographic operations. For detailed information, see the AWS Key Management Service API Reference. You can create and manage your AWS KMS keys: Create, edit, and view symmetric and asymmetric KMS keys, including HMAC keys. Control access to your KMS keys by using key policies, IAM policies, and grants. AWS KMS supports attribute-based access control (ABAC). You can also refine policies by using condition keys. Create, delete, list, and update aliases, which are friendly names for your KMS keys. You can also use aliases to control access to your KMS keys. Tag your KMS keys for identification, automation, and cost tracking. You can also use tags to control access to your KMS keys. Enable and disable KMS keys. Enable and disable automatic rotation of the cryptographic material in a KMS key. Delete KMS keys to complete the key lifecycle. You can use your KMS keys in cryptographic operations. For examples, see Programming the AWS KMS API. Encrypt, decrypt, and re-encrypt data with symmetric or asymmetric KMS keys. Sign and verify messages with asymmetric KMS keys. Generate exportable symmetric data keys and asymmetric data key pairs. Generate and verify HMAC codes. Generate random numbers suitable for cryptographic applications. You can use the advanced features of AWS KMS. Create multi-Region keys, which act like copies of the same KMS key in different AWS Regions. Import cryptographic material into a KMS key. Create KMS keys in your own custom key store backed by an AWS CloudHSM cluster. Connect directly to AWS KMS through a private endpoint in your VPC. Use hybrid post-quantum TLS to provide forward-looking encryption in transit for the data that you send to AWS KMS. By using AWS KMS, you gain more control over access to data you encrypt. You can use the key management and cryptographic features directly in your applications or through AWS services integrated with AWS KMS. Whether you write applications for AWS or use AWS services, AWS KMS enables you to maintain control over who can use your AWS KMS keys and gain access to your encrypted data. AWS KMS integrates with AWS CloudTrail, a service that delivers log files to your designated Amazon S3 bucket. By using CloudTrail, you can monitor and investigate how and when your KMS keys have been used and who used them. AWS KMS in AWS Regions The AWS Regions in which AWS KMS is supported are listed in AWS Key Management Service Endpoints and Quotas.
If an AWS KMS feature is not supported in an AWS Region that AWS KMS supports, the regional difference is described in the topic about the feature. AWS KMS pricing As with other AWS products, using AWS KMS does not require contracts or minimum purchases. For more information about AWS KMS pricing, see AWS Key Management Service Pricing. Service level agreement AWS Key Management Service is backed by a service level agreement that defines our service availability policy.
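To illustrate the cryptographic operations listed above, the following boto3 sketch creates a symmetric KMS key, encrypts and decrypts a small payload directly, and generates a data key for envelope encryption. The key description is arbitrary, and the example omits key policies and error handling.

import boto3

kms = boto3.client("kms")

# Create a symmetric encryption key (key policy management omitted for brevity).
key_id = kms.create_key(Description="example data-protection key")["KeyMetadata"]["KeyId"]

# Direct Encrypt/Decrypt is limited to 4 KB of plaintext.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"secret configuration value")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)

# Envelope encryption: KMS returns a plaintext data key (use locally, then discard)
# and an encrypted copy that you store alongside the data it protects.
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
local_key, stored_encrypted_key = data_key["Plaintext"], data_key["CiphertextBlob"]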

Amazon Data Lifecycle Manager

With Amazon Data Lifecycle Manager, you create lifecycle policies that automate the creation, retention, cross-Region and cross-account copy, and deletion of Amazon EBS snapshots and Amazon EBS-backed AMIs. You can use Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs. When you automate snapshot and AMI management, it helps you to: Protect valuable data by enforcing a regular backup schedule. Create standardized AMIs that can be refreshed at regular intervals. Retain backups as required by auditors or internal compliance. Reduce storage costs by deleting outdated backups. Create disaster recovery backup policies that back up data to isolated accounts. When combined with the monitoring features of Amazon CloudWatch Events and AWS CloudTrail, Amazon Data Lifecycle Manager provides a complete backup solution for Amazon EC2 instances and individual EBS volumes at no additional cost. Important Amazon Data Lifecycle Manager cannot be used to manage snapshots or AMIs that are created by any other means. Amazon Data Lifecycle Manager cannot be used to automate the creation, retention, and deletion of instance store-backed AMIs. How Amazon Data Lifecycle Manager works The following are the key elements of Amazon Data Lifecycle Manager. Elements Snapshots EBS-backed AMIs Target resource tags Amazon Data Lifecycle Manager tags Lifecycle policies Policy schedules Snapshots Snapshots are the primary means to back up data from your EBS volumes. To save storage costs, successive snapshots are incremental, containing only the volume data that changed since the previous snapshot. When you delete one snapshot in a series of snapshots for a volume, only the data that's unique to that snapshot is removed. The rest of the captured history of the volume is preserved. For more information, see Amazon EBS snapshots. EBS-backed AMIs An Amazon Machine Image (AMI) provides the information that's required to launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. Amazon Data Lifecycle Manager supports EBS-backed AMIs only. EBS-backed AMIs include a snapshot for each EBS volume that's attached to the source instance. For more information, see Amazon Machine Images (AMI). Target resource tags Amazon Data Lifecycle Manager uses resource tags to identify the resources to back up. Tags are customizable metadata that you can assign to your AWS resources (including Amazon EC2 instances, EBS volumes and snapshots). An Amazon Data Lifecycle Manager policy (described later) targets an instance or volume for backup using a single tag. Multiple tags can be assigned to an instance or volume if you want to run multiple policies on it. You can't use the \ or = characters in a tag key. Target resource tags are case sensitive. For more information, see Tag your Amazon EC2 resources. Amazon Data Lifecycle Manager tags Amazon Data Lifecycle Manager applies the following system tags to all snapshots and AMIs created by a policy, to distinguish them from snapshots and AMIs created by any other means: aws:dlm:lifecycle-policy-id aws:dlm:lifecycle-schedule-name aws:dlm:expirationTime — For policies with age-based retention schedules only. aws:dlm:managed aws:dlm:expirationtime — For snapshots that could not be successfully archived by a schedule. aws:dlm:archived — For snapshots that were archived by a schedule. 
aws:dlm:scheduled-archive-expiration-time You can also specify custom tags to be applied to snapshots and AMIs on creation. You can't use the \ or = characters in a tag key. The target tags that Amazon Data Lifecycle Manager uses to associate volumes with a snapshot policy can optionally be applied to snapshots created by the policy. Similarly, the target tags that are used to associate instances with an AMI policy can optionally be applied to AMIs created by the policy. Lifecycle policies A lifecycle policy consists of these core settings: Policy type—Defines the type of resources that the policy can manage. Amazon Data Lifecycle Manager supports the following types of lifecycle policies: Snapshot lifecycle policy—Used to automate the lifecycle of EBS snapshots. These policies can target individual EBS volumes or all EBS volumes attached to an instance. EBS-backed AMI lifecycle policy—Used to automate the lifecycle of EBS-backed AMIs and their backing snapshots. These policies can target instances only. Cross-account copy event policy—Used to automate snapshot copies across accounts. Use this policy type in conjunction with an EBS snapshot policy that shares snapshots across accounts. Resource type—Defines the type of resources that are targeted by the policy. Snapshot lifecycle policies can target instances or volumes. Use VOLUME to create snapshots of individual volumes, or use INSTANCE to create multi-volume snapshots of all of the volumes that are attached to an instance. For more information, see Multi-volume snapshots. AMI lifecycle policies can target instances only. One AMI is created that includes snapshots of all of the volumes that are attached to the target instance. Target tags—Specifies the tags that must be assigned to an EBS volume or an Amazon EC2 instance for it to be targeted by the policy. Policy schedules(Snapshot and AMI policies only)—Define when snapshots or AMIs are to be created and how long to retain them for. For more information, see Policy schedules. For example, you could create a policy with settings similar to the following: Manages all EBS volumes that have a tag with a key of account and a value of finance. Creates snapshots every 24 hours at 0900 UTC. Retains only the five most recent snapshots. Starts snapshot creation no later than 0959 UTC each day. Policy schedules Policy schedules define when snapshots or AMIs are created by the policy. Policies can have up to four schedules—one mandatory schedule, and up to three optional schedules. Adding multiple schedules to a single policy lets you create snapshots or AMIs at different frequencies using the same policy. For example, you can create a single policy that creates daily, weekly, monthly, and yearly snapshots. This eliminates the need to manage multiple policies. For each schedule, you can define the frequency, fast snapshot restore settings (snapshot lifecycle policies only), cross-Region copy rules, and tags. The tags that are assigned to a schedule are automatically assigned to the snapshots or AMIs that are created when the schedule is initiated. In addition, Amazon Data Lifecycle Manager automatically assigns a system-generated tag based on the schedule's frequency to each snapshot or AMI. Each schedule is initiated individually based on its frequency. If multiple schedules are initiated at the same time, Amazon Data Lifecycle Manager creates only one snapshot or AMI and applies the retention settings of the schedule that has the highest retention period. 
The tags of all of the initiated schedules are applied to the snapshot or AMI. (Snapshot lifecycle policies only) If more than one of the initiated schedules is enabled for fast snapshot restore, then the snapshot is enabled for fast snapshot restore in all of the Availability Zones specified across all of the initiated schedules. The highest retention settings of the initiated schedules are used for each Availability Zone. If more than one of the initiated schedules is enabled for cross-Region copy, the snapshot or AMI is copied to all Regions specified across all of the initiated schedules. The highest retention period of the initiated schedules is applied. Quotas Your AWS account has the following quotas related to Amazon Data Lifecycle Manager: Lifecycle policies per Region - 100. Tags per resource - 45.
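The example policy described above (volumes tagged account=finance, snapshots every 24 hours at 09:00 UTC, retaining the five most recent) could be expressed through the API roughly as follows. The execution role ARN is a placeholder; the default role is normally created for you by the console.

import boto3

dlm = boto3.client("dlm")

policy = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots of finance volumes",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],                      # snapshot individual volumes
        "TargetTags": [{"Key": "account", "Value": "finance"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["09:00"]},
                "RetainRule": {"Count": 5},               # keep the five most recent snapshots
                "CopyTags": True,
            }
        ],
    },
)
print("PolicyId:", policy["PolicyId"])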

AWS Service Management Connector

AWS Service Management Connector (SMC) enables users to provision, manage, and operate AWS resources and capabilities in familiar IT Service Management (ITSM) tooling (for example, ServiceNow and Atlassian). These integrations enable organizations to migrate to and adopt AWS faster and at scale. With AWS Service Management Connector, you can manage and govern AWS resources directly through your organization's existing operations management tool and system of record.

AWS Security Hub

AWS Security Hub provides you with a comprehensive view of your security state in AWS and helps you check your environment against security industry standards and best practices. Security Hub collects security data from across AWS accounts, services, and supported third-party partner products, and helps you analyze your security trends to identify and prioritize the highest priority security issues across your AWS environment.
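As a small illustration of working with collected findings, the following is a minimal boto3 sketch that lists active, high-severity findings; it assumes Security Hub is already enabled in the Region you call.

# Minimal sketch: list active, high-severity Security Hub findings.
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)
for finding in findings["Findings"]:
    print(finding["Title"], finding["Severity"]["Label"])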

AWS Application Migration Service

AWS Application Migration Service (MGN) is a highly automated lift-and-shift solution that simplifies and expedites migration to AWS.

AWS Billing and Cost Management Documentation

AWS Billing and Cost Management is a web service that provides features that help you pay your bills and optimize your costs. Amazon Web Services bills your account for usage, which ensures that you pay only for what you use.

What is AWS Billing?

Welcome to the Billing User Guide. The AWS Billing console contains features to pay your AWS bills, organize and report your AWS cost and usage, and manage your consolidated billing if you're a part of AWS Organizations. The Billing console works closely with the AWS Cost Management console. You can use both for a holistic approach to managing your costs. The Billing console contains resources to manage your ongoing payments and payment methods registered to your AWS account. Next, you can use the features in the AWS Cost Management console to optimize your future costs. For information about AWS resources to optimize your costs, see the AWS Cost Management User Guide. Amazon Web Services automatically charges the credit card that you provided when you signed up for an AWS account. You can view or update your credit card information at any time, including designating a different credit card for AWS to charge. You can do this on the Payment Methods page in the Billing console. For more details on Billing features, see Features of Billing.

With the AWS Cost Management console and the Billing console, you can do the following tasks (each use case is listed with its description and the related AWS Cost Management and Billing console features):
- Organize: Construct your cost allocation and governance foundation with your own tagging strategy. Billing console features: AWS Cost Categories, AWS Cost Allocation Tags.
- Report: Raise awareness and accountability of your cloud spend with detailed, allocable cost data. AWS Cost Management feature: AWS Cost Explorer. Billing console feature: AWS Cost and Usage Reports.
- Access: Track billing information across the organization in one consolidated view. Billing console features: AWS Consolidated Billing, AWS Purchase Order Management, AWS Credits.
- Control: Establish effective governance mechanisms with the right guardrails in place. AWS Cost Management feature: AWS Cost Anomaly Detection.
- Forecast: Estimate your resource utilization and spend with forecast dashboards that you create. AWS Cost Management features: AWS Cost Explorer, AWS Budgets.
- Budget: Keep your spend in check with custom budget thresholds and automatic alert notifications. AWS Cost Management features: AWS Budgets, AWS Budgets Actions.
- Purchase: Use free trials and programmatic discounts based on your workload pattern and needs. AWS Cost Management feature: Savings Plans. Billing console features: AWS Reserved Instances, AWS Free Tier.
- Rightsize: Align your service allocation size to your actual workload demand. AWS Cost Management feature: Rightsizing Recommendations.
- Inspect: Stay up-to-date with your resource deployment and cost optimization opportunities. AWS Cost Management feature: AWS Cost Explorer.

Features of Billing

Manage your account: Manage your account settings using the AWS Management Console and Billing console. This includes designating your default currency, editing alternate contacts, adding or removing Regions, updating your tax information, and closing your AWS account. The close your account section calls out considerations such as terminating resources before you proceed with closing an account. This way, you aren't charged for unused services. Documentation: Managing your account

View your bill: You can use the Billing console to view your past bill details or your estimated charges for your current month at any time. This section outlines how you can view your bills, download PDF copies of your charges, and set up monthly emails to receive your invoices. It also covers how you can use other resources such as AWS Cost and Usage Reports. Documentation: Viewing your bill
Managing your payments: You can view your estimated bills and pay your AWS invoices in your preferred currency by setting a payment currency. AWS converts your bill to your preferred currency after your bill is finalized. Until then, all of the preferred currency amounts shown in the console are estimated in USD. AWS guarantees your exchange rate, so that refunds use the same exchange rate as your original transaction. Note: AWS Marketplace invoices aren't eligible for this service and are processed in US dollars. This service is available only if your default payment method is Visa or MasterCard. The rates change daily. The rate that's applied to your invoice is the current rate at the time when your invoice was created. You can check the current rate in the Billing console. Currency conversion is provided by Amazon Services LLC. Documentation: Managing Your Payments

AWS Purchase Order Management: Manage your AWS purchase orders in a self-service fashion by taking care of multiple purchase orders all in one place. This can help to reduce your overhead costs and increase the accuracy and efficiency of your overall procure-to-pay process. Use the Billing console to manage your purchase orders and configure how they reflect on your invoices. In this chapter, learn how to add, edit, view details, and set up notifications regarding your purchase orders in the console. Documentation: Managing your purchase orders

AWS Cost Categories: Manage your AWS costs with AWS Cost Categories by mapping your cost and usage into meaningful categories. This section defines terms that are used in the console for supported dimensions, operations, rule types, and status. The section also provides more information on how you can create, edit, delete, and split the charges within cost categories. Documentation: Managing your costs with AWS Cost Categories

Payment profiles: You can use payment profiles to assign more than one payment method to your automatic payments. If you receive invoices from more than one AWS service provider ("seller of record"), you can use payment profiles to assign a unique payment method for each one. After you create a payment profile for a service provider, your payment profile pays your AWS bills automatically. In this section, learn how to use the Billing console to set up a custom payment profile. Documentation: Managing your payment profiles

Consolidated billing for AWS Organizations: Use the consolidated billing feature for AWS Organizations to combine your billing for multiple AWS accounts. This chapter outlines the consolidated billing process, differences for Amazon Internet Services Pvt. Ltd accounts, and details for discounts. Documentation: Consolidated billing for AWS Organizations

Related services

AWS Billing Conductor: AWS Billing Conductor is a custom billing service that you can use to group your accounts by financial owner, configure billing parameters, and generate AWS Cost and Usage Reports per group. AWS Billing Conductor is a fully managed service that can support the showback and chargeback workflows of AWS Solution Providers and Enterprise customers. For more information, see What is AWS Billing Conductor? in the AWS Billing Conductor User Guide.

IAM: The Billing and AWS Cost Management services are closely integrated with AWS Identity and Access Management (IAM). You can use IAM with Billing to ensure that other people who work in your account have as much access as they need to get their jobs done.
You also use IAM to control access to all of your AWS resources, not only your billing information. It's important that you familiarize yourself with the major concepts and best practices of IAM before you get too far along with setting up the structure of your AWS account. For information about how to work with IAM and why it's important to do so, see IAM Concepts and IAM Best Practices in the IAM User Guide.

AWS Organizations (Consolidated Billing): You can use AWS products and services to accommodate a company of any size, from small start-ups to enterprises. If your company is large or likely to grow, you might want to set up multiple AWS accounts that reflect your company's specific structure. For example, you can have one account for the entire company and accounts for each employee, or an account for the entire company with IAM users for each employee. You can have an account for the entire company, accounts for each department or team within the company, and accounts for each employee. If you create multiple accounts, you can use the consolidated billing feature of AWS Organizations to combine all member accounts under a management account. That way, you can receive a single bill for all of your member accounts. For more information, see Consolidated billing for AWS Organizations.

What is AWS Cost Management?

Welcome to the AWS Cost Management User Guide. The AWS Cost Management console has features that you can use for budgeting and forecasting costs and methods for you to optimize your pricing to reduce your overall AWS bill. The AWS Cost Management console is integrated closely with the Billing console. Using both together, you can manage your costs in a holistic manner. You can use Billing console resources to manage your ongoing payments, and AWS Cost Management console resources to optimize your future costs. For information about AWS resources to understand, pay, or organize your AWS bills, see the AWS Billing User Guide. With the AWS Cost Management console and the Billing console, you can do the same tasks described in the use-case list earlier in this section (Organize, Report, Access, Control, Forecast, Budget, Purchase, Rightsize, and Inspect).

Features of AWS Cost Management

AWS Cost Explorer (use cases: Report, Forecast, Inspect): AWS Cost Explorer is a feature that you can use to visualize your cost data for further analysis.
Using it, you can filter graphs by several different values. This includes Availability Zone, AWS service, and AWS Region. It also includes other specifics such as custom cost allocation tag, Amazon EC2 instance type, and purchase option. If you use consolidated billing, you can also filter by member account. In addition, you can see a forecast of future costs based on your historical cost data. Documentation: Analyzing your costs with AWS Cost Explorer

AWS Budgets (use cases: Forecast, Inspect): AWS Budgets tracks your AWS usage and costs. AWS Budgets uses the cost visualization that's provided by AWS Cost Explorer to show the status of your budgets. This provides forecasts of your estimated costs and tracks your AWS usage, including your AWS Free Tier usage. You can also use AWS Budgets to create Amazon Simple Notification Service (Amazon SNS) notifications for when you exceed your budgeted amounts, or when your estimated costs exceed your budgets. Documentation: Managing your costs with AWS Budgets

AWS Cost Anomaly Detection (use case: Control): AWS Cost Anomaly Detection is a feature that uses machine learning to continuously monitor your cost and usage to detect unusual spend. You can receive alerts individually or in aggregated reports, and you can receive alerts by email or through an Amazon SNS topic. AWS Cost Anomaly Detection helps you analyze and determine the root cause of an anomaly and identify the factor that is driving the cost increase. Documentation: Detecting unusual spend with AWS Cost Anomaly Detection

Rightsizing Recommendations (use case: Rightsize): Rightsizing recommendations is a feature that reviews your historical Amazon EC2 usage for the past 14 days to identify opportunities for greater cost and usage efficiency. The feature identifies cost-saving opportunities by downsizing or terminating instances in Amazon EC2. Documentation: Accessing Reserved Instance Recommendations

Savings Plans (use case: Purchase): Savings Plans offers a flexible pricing model that provides savings on AWS usage. Savings Plans provide savings beyond On-Demand rates in exchange for a commitment to use a specified amount of compute power (measured every hour) for a one- or three-year period. You can manage your plans by using recommendations, performance reporting, and budget alerts in AWS Cost Explorer.
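The Cost Explorer data that backs these features is also available programmatically. The following is a minimal boto3 sketch that queries monthly unblended cost grouped by service; the dates are placeholders.

# Minimal sketch: query monthly cost grouped by service via the Cost Explorer API.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in result["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])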

AWS CodeArtifact

AWS CodeArtifact is a secure, highly scalable, and cost-effective managed artifact repository service that makes it easier for organizations to store and share software packages used for application development. You can use CodeArtifact with popular build tools and package managers such as the NuGet CLI, Maven, Gradle, npm, yarn, pip, and twine. CodeArtifact helps eliminate the need for you to manage your own artifact storage system or worry about scaling its infrastructure. There are no limits on the number or total size of the packages that you can store in a CodeArtifact repository. You can create a connection between your private CodeArtifact repository and an external, public repository, such as npmjs.com or Maven Central. CodeArtifact then fetches and stores packages on demand from the public repository when they're requested by a package manager. This makes it easier to consume open-source dependencies used by your application and ensure they're always available for builds and development. You can also publish private packages to a CodeArtifact repository. This makes it possible to more easily share proprietary software components between multiple applications and development teams in your organization. For more information, see AWS CodeArtifact. How does CodeArtifact work? CodeArtifact stores software packages in repositories. Repositories are polyglot—a single repository can contain packages of any supported type. Every CodeArtifact repository is a member of a single CodeArtifact domain. We recommend that you use one production domain for your organization with one or more repositories. For example, each repository might be used for a different development team. Packages in your repositories can then be discovered and shared across your development teams. To add packages to a repository, configure a package manager such as npm or Maven to use the repository endpoint (URL). You can then use the package manager to publish packages to the repository. You can also import open-source packages into a repository by configuring it with an external connection to a public repository such as npmjs, NuGet Gallery, Maven Central, or PyPI. For more information, see Connect a CodeArtifact repository to a public repository. You can make packages in one repository available to another repository in the same domain. To do this, configure one repository as an upstream of the other. All package versions available to the upstream repository are also available to the downstream repository. In addition, all packages that are available to the upstream repository through an external connection to a public repository are available to the downstream repository. For more information, see Working with upstream repositories in CodeArtifact.
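Configuring a package manager against a repository endpoint boils down to two lookups, which the following boto3 sketch illustrates; the domain and repository names are hypothetical.

# Minimal sketch: fetch a CodeArtifact repository endpoint and an auth token.
import boto3

codeartifact = boto3.client("codeartifact", region_name="us-east-1")

endpoint = codeartifact.get_repository_endpoint(
    domain="my-domain", repository="my-repo", format="npm"
)["repositoryEndpoint"]

token = codeartifact.get_authorization_token(
    domain="my-domain", durationSeconds=900
)["authorizationToken"]

# A package manager such as npm would then be pointed at this endpoint as its
# registry URL and given the token for authentication.
print(endpoint)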

AWS Database Migration Service

AWS Database Migration Service (AWS DMS) is a web service you can use to migrate data from a database that is on premises, on an Amazon Relational Database Service (Amazon RDS) DB instance, or on an Amazon Elastic Compute Cloud (Amazon EC2) instance to a database on an AWS service, such as a database on Amazon RDS or a database on an Amazon EC2 instance. You can also migrate a database from an AWS service to an on-premises database. You can migrate between source and target endpoints that use the same database engine, such as from an Oracle database to an Oracle database. You can also migrate between source and target endpoints that use different database engines, such as from an Oracle database to a PostgreSQL database.

AWS Device Farm

AWS Device Farm is an app testing service that enables you to test your iOS, Android and Fire OS apps on real, physical phones and tablets that are hosted by AWS. The service allows you to upload your own tests or use built-in, script-free compatibility tests.

AWS Direct Connect

AWS Direct Connect establishes a dedicated network connection between your on-premises network and AWS. With this connection in place, you can create virtual interfaces directly to the AWS Cloud, bypassing your internet service provider. This can provide a more consistent network experience.

AWS Fault Injection Simulator

AWS Fault Injection Simulator (AWS FIS) is a managed service that enables you to perform fault injection experiments on your AWS workloads.

AWS GameKit

AWS GameKit empowers game developers to build game backend services with AWS while working in a game engine. Ready-to-use AWS solutions and APIs give developers a jump start on delivering cloud-based game features with backends on AWS that are well-architected, secure, and resilient.

AWS Global Accelerator

AWS Global Accelerator is a network layer service in which you create accelerators to improve the security, availability, and performance of your applications for local and global users. Depending on the type of accelerator that you choose, you can gain additional benefits: With a standard accelerator, you can improve availability of your internet applications that are used by a global audience; Global Accelerator directs traffic over the AWS global network to endpoints in the nearest Region to the client. With a custom routing accelerator, you can map one or more users to a specific destination among many destinations. Global Accelerator is a global service that supports endpoints in multiple AWS Regions. To determine if Global Accelerator or other services are currently supported in a specific AWS Region, see the AWS Regional Services List. By default, Global Accelerator provides you with static IP addresses that you associate with your accelerator. The static IP addresses are anycast from the AWS edge network. For IPv4, Global Accelerator provides two static IPv4 addresses. For dual-stack, Global Accelerator provides a total of four addresses: two static IPv4 addresses and two static IPv6 addresses. For IPv4, instead of using the addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator (BYOIP). Important: The static IP addresses remain assigned to your accelerator for as long as it exists, even if you disable the accelerator and it no longer accepts or routes traffic. However, when you delete an accelerator, you lose the static IP addresses that are assigned to it, so you can no longer route traffic by using them. You can use IAM policies, such as tag-based permissions, with Global Accelerator to limit the users who have permissions to delete an accelerator. For more information, see Tag-based policies. For standard accelerators, Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions. The service reacts instantly to changes in health or configuration to ensure that internet traffic from clients is always directed to healthy endpoints. Custom routing accelerators support only virtual private cloud (VPC) subnet endpoint types and route traffic to private IP addresses in that subnet.
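As a rough sketch of creating a standard accelerator with boto3: the accelerator name is hypothetical, and the call is made against the us-west-2 API endpoint, which is where the Global Accelerator control plane is typically addressed (worth verifying for your setup).

# Minimal sketch: create a standard accelerator and a TCP listener.
import uuid
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="my-standard-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Endpoint groups (one per Region) and endpoints such as ALBs, NLBs, EC2
# instances, or Elastic IPs would be added next with create_endpoint_group.
print(accelerator["IpSets"][0]["IpAddresses"])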

AWS IoT Greengrass

AWS IoT Greengrass seamlessly extends AWS onto physical devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. AWS IoT Greengrass ensures your devices can respond quickly to local events and operate with intermittent connectivity. AWS IoT Greengrass minimizes the cost of transmitting data to the cloud by enabling you to author custom software and AWS Lambda functions that run on local devices.

AWS Mainframe Modernization

AWS Mainframe Modernization provides tools and resources to help you plan and implement migration and modernization from mainframes to AWS managed runtime environments.

AWS Migration Hub

AWS Migration Hub (Migration Hub) provides a single location to track migration tasks across multiple AWS tools and partner solutions. With Migration Hub, you can choose the AWS and partner migration tools that best fit your needs while providing visibility into the status of your migration projects. Migration Hub also provides key metrics and progress information for individual applications, regardless of which tools are used to migrate them.

AWS Private 5G

AWS Private 5G is a managed service that makes it easy to deploy, operate, and scale your own private mobile network at your on-premises location. Private 5G provides the pre-configured hardware and software for mobile networks, helps automate setup, and scales capacity on demand to support additional devices as needed. You pay only for the network coverage and capacity you need.

AWS RoboMaker

AWS RoboMaker is a service that makes it easy to develop, simulate, and deploy intelligent robotics applications at scale.

AWS Server Migration Service

AWS Server Migration Service (AWS SMS) combines data collection tools with automated server replication to speed the migration of on-premises servers to AWS.

AWS Step Functions

AWS Step Functions makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion.
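A state machine is defined in Amazon States Language (JSON) and executed through the Step Functions API. The following is a minimal boto3 sketch with a single Pass state; the role ARN and names are placeholders.

# Minimal sketch: create and run a trivial Step Functions state machine.
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

definition = {
    "Comment": "A single Pass state",
    "StartAt": "HelloWorld",
    "States": {"HelloWorld": {"Type": "Pass", "Result": "Hello", "End": True}},
}

state_machine = sfn.create_state_machine(
    name="hello-world",
    definition=json.dumps(definition),
    # Placeholder execution role for Step Functions.
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)

execution = sfn.start_execution(
    stateMachineArn=state_machine["stateMachineArn"],
    input=json.dumps({}),
)
print(execution["executionArn"])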

AWS X-Ray

AWS X-Ray makes it easy for developers to analyze the behavior of their distributed applications by providing request tracing, exception collection, and profiling capabilities. AWS X-Ray is a service that collects data about requests that your application serves, and provides tools that you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases, and web APIs. AWS X-Ray receives traces from your application, in addition to AWS services your application uses that are already integrated with X-Ray. Instrumenting your application involves sending trace data for incoming and outbound requests and other events within your application, along with metadata about each request. Many instrumentation scenarios require only configuration changes. For example, you can instrument all incoming HTTP requests and downstream calls to AWS services that your Java application makes. There are several SDKs, agents, and tools that can be used to instrument your application for X-Ray tracing. See Instrumenting your application for more information. AWS services that are integrated with X-Ray can add tracing headers to incoming requests, send trace data to X-Ray, or run the X-Ray daemon. For example, AWS Lambda can send trace data about requests to your Lambda functions, and run the X-Ray daemon on workers to make it simpler to use the X-Ray SDK. Instead of sending trace data directly to X-Ray, each client SDK sends JSON segment documents to a daemon process listening for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches. The daemon is available for Linux, Windows, and macOS, and is included on AWS Elastic Beanstalk and AWS Lambda platforms. X-Ray uses trace data from the AWS resources that power your cloud applications to generate a detailed service map. The service map shows the client, your front-end service, and backend services that your front-end service calls to process requests and persist data. Use the service map to identify bottlenecks, latency spikes, and other issues to solve to improve the performance of your applications.
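To make the instrumentation flow concrete, here is a minimal sketch using the aws-xray-sdk for Python. It assumes the X-Ray daemon is reachable on 127.0.0.1:2000, and the service and segment names are hypothetical.

# Minimal sketch: record a segment and trace downstream AWS calls with the X-Ray SDK.
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

xray_recorder.configure(service="MyApp", daemon_address="127.0.0.1:2000")
patch_all()  # patch supported libraries (for example boto3) so calls appear as subsegments

segment = xray_recorder.begin_segment("process-request")
try:
    # Downstream AWS calls made here are captured and sent to the local daemon,
    # which batches and uploads them to X-Ray.
    boto3.client("s3").list_buckets()
finally:
    xray_recorder.end_segment()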

AWS Transfer Family

AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services, and it simplifies the migration of Secure File Transfer Protocol (SFTP), File Transfer Protocol Secure (FTPS), and File Transfer Protocol (FTP) workflows to AWS.

AWS Transfer Family supports transferring data from or to the following AWS storage services:
- Amazon Simple Storage Service (Amazon S3) storage. For information about Amazon S3, see Getting started with Amazon Simple Storage Service.
- Amazon Elastic File System (Amazon EFS) Network File System (NFS) file systems. For information about Amazon EFS, see What Is Amazon Elastic File System?

AWS Transfer Family supports transferring data over the following protocols:
- Secure Shell (SSH) File Transfer Protocol (SFTP): version 3
- File Transfer Protocol Secure (FTPS)
- File Transfer Protocol (FTP)
- Applicability Statement 2 (AS2)

Note: For FTP and FTPS data connections, the port range that Transfer Family uses to establish the data channel is 8192-8200.

File transfer protocols are used in data exchange workflows across different industries such as financial services, healthcare, advertising, and retail, among others. Transfer Family simplifies the migration of file transfer workflows to AWS.

The following are some common use cases for using Transfer Family with Amazon S3:
- Data lakes in AWS for uploads from third parties such as vendors and partners.
- Subscription-based data distribution with your customers.
- Internal transfers within your organization.

The following are some common use cases for using Transfer Family with Amazon EFS:
- Data distribution
- Supply chain
- Content management
- Web serving applications

The following are some common use cases for using Transfer Family with AS2:
- Workflows with compliance requirements that rely on having data protection and security features built into the protocol
- Supply chain logistics
- Payments workflows
- Business-to-business (B2B) transactions
- Integrations with enterprise resource planning (ERP) and customer relationship management (CRM) systems

With Transfer Family, you get access to a file transfer protocol-enabled server in AWS without the need to run any server infrastructure. You can use this service to migrate your file transfer-based workflows to AWS while maintaining your end users' clients and configurations as is. You first associate your hostname with the server endpoint, then add your users and provision them with the right level of access. After you do this, your users' transfer requests are serviced directly out of your Transfer Family server endpoint.

Transfer Family provides the following benefits:
- A fully managed service that scales in real time to meet your needs.
- You don't need to modify your applications or run any file transfer protocol infrastructure.
- With your data in durable Amazon S3 storage, you can use native AWS services for processing, analytics, reporting, auditing, and archival functions.
- With Amazon EFS as your data store, you get a fully managed elastic file system for use with AWS Cloud services and on-premises resources. Amazon EFS is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. This helps eliminate the need to provision and manage capacity to accommodate growth.
- A fully managed, serverless File Transfer Workflow service that makes it easy to set up, run, automate, and monitor processing of files uploaded using AWS Transfer Family.
- There are no upfront costs, and you pay only for the use of the service.

In the following sections, you can find a description of the different features of Transfer Family, a getting started tutorial, detailed instructions on how to set up the different protocol-enabled servers, how to use different types of identity providers, and the service's API reference.
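The server-and-user setup described above corresponds to a couple of API calls. The following is a minimal boto3 sketch; the role ARN, bucket name, user name, and SSH key are placeholders.

# Minimal sketch: create an SFTP server backed by Amazon S3 and add a user.
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
)

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="alice",
    # Placeholder role that grants access to the user's home directory in S3.
    Role="arn:aws:iam::123456789012:role/TransferS3AccessRole",
    HomeDirectory="/my-example-bucket/home/alice",
    SshPublicKeyBody="ssh-rsa AAAA...example-public-key",
)
print(server["ServerId"])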

AWS Virtual Private Network

AWS Virtual Private Network (AWS VPN) establishes a secure and private tunnel from your network or device to the AWS Cloud. You can extend your existing on-premises network into a VPC, or connect to other AWS resources from a client. AWS VPN offers two types of private connectivity that feature the high availability and robust security necessary for your data. By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. Although the term VPN connection is a general term, in this documentation, a VPN connection refers to the connection between your VPC and your own on-premises network. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections. Your Site-to-Site VPN connection is either an AWS Classic VPN or an AWS VPN. For more information, see Site-to-Site VPN categories.

Concepts

The following are the key concepts for Site-to-Site VPN:
- VPN connection: A secure connection between your on-premises equipment and your VPCs.
- VPN tunnel: An encrypted link where data can pass from the customer network to or from AWS. Each VPN connection includes two VPN tunnels, which you can simultaneously use for high availability.
- Customer gateway: An AWS resource which provides information to AWS about your customer gateway device.
- Customer gateway device: A physical device or software application on your side of the Site-to-Site VPN connection.
- Target gateway: A generic term for the VPN endpoint on the Amazon side of the Site-to-Site VPN connection.
- Virtual private gateway: The VPN endpoint on the Amazon side of your Site-to-Site VPN connection that can be attached to a single VPC.
- Transit gateway: A transit hub that can be used to interconnect multiple VPCs and on-premises networks, and as a VPN endpoint for the Amazon side of the Site-to-Site VPN connection.

Working with Site-to-Site VPN

You can create, access, and manage your Site-to-Site VPN resources using any of the following interfaces:
- AWS Management Console — Provides a web interface that you can use to access your Site-to-Site VPN resources.
- AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. For more information, see AWS Command Line Interface.
- AWS SDKs — Provide language-specific APIs and take care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs.
- Query API — Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see the Amazon EC2 API Reference.

Site-to-Site VPN limitations

A Site-to-Site VPN connection has the following limitations:
- IPv6 traffic is not supported for VPN connections on a virtual private gateway.
- An AWS VPN connection does not support Path MTU Discovery.

In addition, take the following into consideration when you use Site-to-Site VPN: When connecting your VPCs to a common on-premises network, we recommend that you use non-overlapping CIDR blocks for your networks.

Pricing

You are charged for each VPN connection hour that your VPN connection is provisioned and available. For more information, see AWS Site-to-Site VPN and Accelerated Site-to-Site VPN Connection pricing. You are charged for data transfer out from Amazon EC2 to the internet. For more information, see Data Transfer on the Amazon EC2 On-Demand Pricing page. When you create an accelerated VPN connection, we create and manage two accelerators on your behalf. You are charged an hourly rate and data transfer costs for each accelerator.
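Tying the concepts above together, a minimal boto3 provisioning sketch for a Site-to-Site VPN connection might look like the following; the public IP (a documentation address), ASN, and VPC ID are placeholders, and tunnel and routing options are left at their defaults.

# Minimal sketch: customer gateway + virtual private gateway + VPN connection.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

customer_gateway = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1"
)["CustomerGateway"]

vpn_gateway = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(
    VpnGatewayId=vpn_gateway["VpnGatewayId"], VpcId="vpc-0123456789abcdef0"
)

vpn_connection = ec2.create_vpn_connection(
    CustomerGatewayId=customer_gateway["CustomerGatewayId"],
    VpnGatewayId=vpn_gateway["VpnGatewayId"],
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},  # static routing instead of BGP
)["VpnConnection"]
print(vpn_connection["VpnConnectionId"])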

Amazon AppFlow

Amazon AppFlow is a fully managed API integration service that you use to connect your software as a service (SaaS) applications to AWS services, and securely transfer data. Use Amazon AppFlow flows to manage and automate your data transfers without needing to write code.

Amazon AppStream 2.0

Amazon AppStream 2.0 is a fully managed, secure application streaming service that lets you stream desktop applications to users without rewriting applications. AppStream 2.0 provides users with instant access to the applications that they need with a responsive, fluid user experience on the device of their choice.

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. To get started, simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. Athena scales automatically—running queries in parallel—so results are fast, even with large datasets and complex queries.
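The following is a minimal boto3 sketch of submitting a query and checking its state; the database, table, and results bucket names are hypothetical.

# Minimal sketch: run an Athena query and poll its status once.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT * FROM my_table LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

status = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])  # QUEUED, RUNNING, SUCCEEDED, ...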

Amazon Chime

Amazon Chime is a secure, real-time, unified communications service that transforms meetings by making them more efficient and easier to conduct.

Amazon CloudSearch

Amazon CloudSearch is a fully managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website. Amazon CloudSearch enables you to search large collections of data such as web pages, document files, forum posts, or product information. With Amazon CloudSearch, you can quickly add search capabilities to your website without having to become a search expert or worry about hardware provisioning, setup, and maintenance. As your volume of data and traffic fluctuates, Amazon CloudSearch automatically scales to meet your needs.

Amazon Connect

Amazon Connect is a contact center as a service (CCaaS) solution that offers easy, self-service configuration and enables dynamic, personal, and natural customer engagement at any scale.

Amazon EMR

Amazon EMR is a web service that makes it easy to process vast amounts of data efficiently using Apache Hadoop and services offered by Amazon Web Services.

Amazon Elastic Transcoder

Amazon Elastic Transcoder lets you convert digital media stored in Amazon S3 into the audio and video codecs and the containers required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that users can play back on mobile devices, tablets, web browsers, and connected televisions.

Amazon EventBridge

Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, software-as-a-service (SaaS) applications, and AWS services and routes that data to targets such as AWS Lambda. You can set up routing rules to determine where to send your data to build application architectures that react in real time to all of your data sources. EventBridge enables you to build event-driven architectures that are loosely coupled and distributed.
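Routing rules are the core building block here. The following is a minimal boto3 sketch that routes EC2 instance state-change events to a Lambda function; the Lambda ARN is a placeholder, and the function would also need a resource-based permission allowing EventBridge to invoke it.

# Minimal sketch: an EventBridge rule with a Lambda target.
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

rule = events.put_rule(
    Name="ec2-stopped-instances",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["stopped"]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="ec2-stopped-instances",
    Targets=[{
        "Id": "notify-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:on-instance-stopped",
    }],
)
print(rule["RuleArn"])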

Amazon FinSpace

Amazon FinSpace is a fully managed data management and analytics service that makes it easy to store, catalog, and prepare financial industry data at scale. Amazon FinSpace reduces the time it takes for financial services industry (FSI) customers to find and access all types of financial data for analysis.

Amazon Forecast

Amazon Forecast is a fully managed deep learning service for time-series forecasting. By providing Amazon Forecast with historical time-series data, you can predict future points in the series. Time-series forecasting is useful in multiple domains, including retail, financial planning, supply chain, and healthcare. You can also use Amazon Forecast to forecast operational metrics for inventory management, and workforce and resource planning and management.

Amazon Honeycode

Amazon Honeycode is a fully managed service that allows you to quickly build mobile and web apps for teams—without programming. Build Honeycode apps for managing almost anything, like projects, customers, operations, approvals, resources, and even your team.

Amazon MQ

Amazon MQ is a managed message broker service that makes it easy to set up and operate message brokers in the cloud. Amazon MQ provides interoperability with your existing applications and services. Amazon MQ works with your existing applications and services without the need to manage, operate, or maintain your own messaging system.

Amazon Managed Blockchain

Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable blockchain networks using the Ethereum or Hyperledger Fabric open-source frameworks.

Amazon Managed Streaming for Apache Kafka

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.

Amazon OpenSearch Service

Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch, a popular open-source search and analytics engine. OpenSearch Service also offers security options, high availability, data durability, and direct access to the OpenSearch API.

Amazon Pinpoint Documentation

Amazon Pinpoint helps you engage your customers by sending them email, SMS and voice messages, and push notifications. You can use Amazon Pinpoint to send targeted messages (such as promotions and retention campaigns), as well as transactional messages (such as order confirmations and password reset messages).

VPC Endpoint

Enables Amazon S3 and Amazon DynamoDB access from within your VPC without using an Internet gateway or NAT, and allows you to control the access using VPC endpoint policies.
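A gateway endpoint is created against a VPC and one or more route tables, optionally with an endpoint policy. The following boto3 sketch shows the shape of the call; the VPC, route table, Region, and bucket names are placeholders.

# Minimal sketch: a gateway VPC endpoint for Amazon S3 with a simple policy.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }],
    }),
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])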

AWS Application Discovery Service

The AWS Application Discovery Service helps systems integrators quickly and reliably plan application migration projects by automatically identifying applications running in on-premises data centers, their associated dependencies, and their performance profile.

AWS Schema Conversion Tool

The AWS Schema Conversion Tool makes heterogeneous database migrations easy by automatically converting the source database schema and a majority of the custom code to a format compatible with the target database. The custom code that the tool converts includes views, stored procedures, and functions. Any code that the tool cannot convert automatically is clearly marked so that you can convert it yourself. For supported source and target databases, see the User Guide, following.

Amazon Chime SDK

The Amazon Chime SDK allows builders to add real-time voice, video, and messaging powered by machine learning into their applications.

AWS service endpoints
https://docs.aws.amazon.com/general/latest/gr/rande.html

To connect programmatically to an AWS service, you use an endpoint. An endpoint is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests. If a service supports Regions, the resources in each Region are independent of similar resources in other Regions. For example, you can create an Amazon EC2 instance or an Amazon SQS queue in one Region. When you do, the instance or queue is independent of instances or queues in all other Regions.
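In practice, the SDK resolves the default Regional endpoint for you, and you can override it when needed. The following boto3 sketch shows both forms; the override URL here is simply the standard public SQS endpoint, used for illustration.

# Minimal sketch: default endpoint resolution vs. an explicit endpoint override.
import boto3

# Default endpoint for the Region:
sqs_default = boto3.client("sqs", region_name="us-east-1")

# Explicit endpoint override (useful for VPC interface endpoints or local emulators):
sqs_explicit = boto3.client(
    "sqs",
    region_name="us-east-1",
    endpoint_url="https://sqs.us-east-1.amazonaws.com",
)
print(sqs_explicit.meta.endpoint_url)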

AWS ParallelCluster

AWS ParallelCluster is an AWS supported open source cluster management tool that helps you to deploy and manage high performance computing (HPC) clusters in the AWS Cloud. Built on the open source CfnCluster project, AWS ParallelCluster enables you to quickly build an HPC compute environment in AWS. It automatically sets up the required compute resources and shared filesystem. You can use AWS ParallelCluster with batch schedulers, such as AWS Batch and Slurm. AWS ParallelCluster facilitates quick start proof of concept deployments and production deployments. You can also build higher level workflows, such as a genomics portal that automates an entire DNA sequencing workflow, on top of AWS ParallelCluster.

AWS Service Catalog

AWS Service Catalog enables IT administrators to create, manage, and distribute portfolios of approved products to end users, who can then access the products they need in a personalized portal. Typical products include servers, databases, websites, or applications that are deployed using AWS resources (for example, an Amazon EC2 instance or an Amazon RDS database). You can control which users have access to specific products to enforce compliance with organizational business standards, manage product lifecycles, and help users find and launch products with confidence.

AWS Health Documentation

AWS Health provides personalized information about events that can affect your AWS infrastructure, guides you through scheduled changes, and accelerates the troubleshooting of issues that affect your AWS resources and accounts. AWS Health provides ongoing visibility into your resource performance and the availability of your AWS services and accounts. You can use AWS Health events to learn how service and resource changes might affect your applications running on AWS. AWS Health provides relevant and timely information to help you manage events in progress. AWS Health also helps you be aware of and to prepare for planned activities. The service delivers alerts and notifications triggered by changes in the health of AWS resources, so that you get near-instant event visibility and guidance to help accelerate troubleshooting. All customers can use the AWS Health Dashboard, powered by the AWS Health API. The dashboard requires no setup, and it's ready to use for authenticated AWS users. For more service highlights, see the AWS Health Dashboard detail page.

Amazon Fraud Detector

Amazon Fraud Detector is a fully managed fraud detection service that automates the detection of potentially fraudulent activities online, such as unauthorized transactions and the creation of fake accounts. In a few steps, you can create machine learning models to identify a variety of fraudulent activities. Amazon Fraud Detector works by using machine learning to analyze your data. It does this in a way that builds off of the seasoned expertise of more than 20 years of fraud detection at Amazon. You can use Amazon Fraud Detector to build customized fraud-detection models, add decision logic to interpret the model's fraud evaluations, and assign outcomes such as pass or send for review for each possible fraud evaluation. With Amazon Fraud Detector, you don't need machine learning expertise to detect fraudulent activities. To get started, collect and prepare the fraud data that you have collected at your organization. Amazon Fraud Detector then uses this data to train, test, and deploy a custom fraud detection model on your behalf. As part of this process, Amazon Fraud Detector uses machine learning models that have learned patterns of fraud from AWS and Amazon's own fraud expertise to evaluate your fraud data and generate model scores and model performance data. You configure decision logic to interpret the model's score and assign outcomes for how to deal with each fraud evaluation.

AWS Well-Architected Tool

Use the AWS Well-Architected Tool to review your workloads against current Amazon Web Services architectural best practices. The AWS Well-Architected Tool measures the workload and provides recommendations on how to improve your architecture.

AWS OpsWorks

AWS OpsWorks provides a simple and flexible way to create and manage stacks and applications. With AWS OpsWorks, you can provision AWS resources, manage their configuration, deploy applications to those resources, and monitor their health. AWS OpsWorks is a configuration management service that helps you configure and operate applications in a cloud enterprise by using Puppet or Chef. AWS OpsWorks Stacks and AWS OpsWorks for Chef Automate let you use Chef cookbooks and solutions for configuration management, while OpsWorks for Puppet Enterprise lets you configure a Puppet Enterprise master server in AWS. Puppet offers a set of tools for enforcing the desired state of your infrastructure, and automating on-demand tasks.

AWS OpsWorks Services

AWS OpsWorks for Puppet Enterprise: OpsWorks for Puppet Enterprise lets you create AWS-managed Puppet master servers. A Puppet master server manages nodes in your infrastructure, stores facts about those nodes, and serves as a central repository for your Puppet modules. Modules are reusable, shareable units of Puppet code that contain instructions about how your infrastructure should be configured. You can download community modules from the Puppet Forge, or use the Puppet Development Kit to create your own custom modules, then manage their deployment with Puppet Code Manager. OpsWorks for Puppet Enterprise provides a fully-managed Puppet master, a suite of automation tools that enable you to inspect, deliver, operate, and future-proof your applications, and access to a user interface that lets you view information about your nodes and Puppet activities. OpsWorks for Puppet Enterprise lets you use Puppet to automate how nodes are configured, deployed, and managed, whether they are Amazon EC2 instances or on-premises devices. An OpsWorks for Puppet Enterprise master provides full-stack automation by handling tasks such as software and operating system configurations, package installations, database setups, change management, policy enforcement, monitoring, and quality assurance. Because OpsWorks for Puppet Enterprise manages Puppet Enterprise software, your server can be backed up automatically at a time that you choose, is always running the most current AWS-compatible version of Puppet, and always has the most current security updates applied. You can use Amazon EC2 Auto Scaling groups to associate new Amazon EC2 nodes with your server automatically.

AWS OpsWorks for Chef Automate: AWS OpsWorks for Chef Automate lets you create AWS-managed Chef servers that include Chef Automate premium features, and use the Chef DK and other Chef tooling to manage them. A Chef server manages nodes in your environment, stores information about those nodes, and serves as a central repository for your Chef cookbooks. The cookbooks contain recipes that are run by the Chef Infra client (chef-client) agent on each node that you manage by using Chef. You can use Chef tools like knife and Test Kitchen to manage nodes and cookbooks on a Chef server in the AWS OpsWorks for Chef Automate service. Chef Automate is an included server software package that provides automated workflow for continuous deployment and compliance checks. AWS OpsWorks for Chef Automate installs and manages Chef Automate, Chef Infra, and Chef InSpec by using a single Amazon Elastic Compute Cloud instance. With AWS OpsWorks for Chef Automate, you can use community-authored or custom Chef cookbooks without making AWS OpsWorks-specific changes. Because AWS OpsWorks for Chef Automate manages Chef Automate components on a single instance, your server can be backed up automatically at a time that you choose, is always running the most current minor version of Chef, and always has the most current security updates applied. You can use Amazon EC2 Auto Scaling groups to associate new Amazon EC2 nodes with your server automatically.

AWS OpsWorks Stacks: Cloud-based computing usually involves groups of AWS resources, such as EC2 instances and Amazon Relational Database Service (RDS) instances. For example, a web application typically requires application servers, database servers, load balancers, and other resources. This group of instances is typically called a stack. AWS OpsWorks Stacks, the original service, provides a simple and flexible way to create and manage stacks and applications. AWS OpsWorks Stacks lets you deploy and monitor applications in your stacks. You can create stacks that help you manage cloud resources in specialized groups called layers. A layer represents a set of EC2 instances that serve a particular purpose, such as serving applications or hosting a database server. Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying apps, and running scripts. Unlike AWS OpsWorks for Chef Automate, AWS OpsWorks Stacks does not require or create Chef servers; AWS OpsWorks Stacks performs some of the work of a Chef server for you. AWS OpsWorks Stacks monitors instance health, and provisions new instances for you, when necessary, by using Auto Healing and Auto Scaling.

AWS Outposts

AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. You can use the same services, tools, and partner solutions to develop for the cloud and on premises. AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs. An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC.

Key concepts

These are the key concepts for AWS Outposts:
- Outpost site - The customer-managed physical buildings where AWS will install your Outpost. A site must meet the facility, networking, and power requirements for your Outpost.
- Outpost configurations - Configurations of Amazon EC2 compute capacity, Amazon EBS storage capacity, and networking support. Each configuration has unique power, cooling, and weight support requirements.
- Outpost capacity - Compute and storage resources available on the Outpost. You can view and manage the capacity for your Outpost from the AWS Outposts console.
- Outpost equipment - Physical hardware that provides access to the AWS Outposts service. The hardware includes racks, servers, switches, and cabling owned and managed by AWS.
- Outpost racks - An Outpost form factor that is an industry-standard 42U rack. Outpost racks include rack-mountable servers, switches, a network patch panel, a power shelf, and blank panels.
- Outpost servers - An Outpost form factor that is an industry-standard 1U or 2U server, which can be installed in a standard EIA-310D compliant 19-inch, 4-post rack. Outpost servers provide local compute and networking services to sites that have limited space or smaller capacity requirements.
- Service link - Network route that enables communication between your Outpost and its associated AWS Region. Each Outpost is an extension of an Availability Zone and its associated Region.
- Local gateway - A logical interconnect virtual router that enables communication between an Outpost rack and your on-premises network.
- Local network interface - A network interface that enables communication between an Outpost server and your on-premises network.

Amazon Keyspaces

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. With Amazon Keyspaces, you don't have to provision, patch, or manage servers, and you don't have to install, maintain, or operate software. Amazon Keyspaces is serverless, so you pay for only the resources that you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. Note: Apache Cassandra is an open-source, wide-column datastore that is designed to handle large amounts of data. For more information, see Apache Cassandra. Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in the AWS Cloud. With just a few clicks on the AWS Management Console or a few lines of code, you can create keyspaces and tables in Amazon Keyspaces, without deploying any infrastructure or installing software. With Amazon Keyspaces, you can run your existing Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today.

End-of-Support Migration Program for Windows Server (EMP)

The AWS End-of-Support Migration Program (EMP) for Windows Server provides the technology and guidance to migrate your applications running on Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 to the latest, supported versions of Windows Server running on AWS, without any code refactoring. The EMP technology decouples the application from the existing operating system and packages it with all necessary application files and components. The program then replatforms the application to a supported version of the Windows operating system on AWS. After the migration, any outdated API calls are redirected, enabling the application to run on the latest versions of the Windows OS.

AWS Secrets Manager

AWS Secrets Manager helps you to securely encrypt, store, and retrieve credentials for your databases and other services. Instead of hardcoding credentials in your apps, you can make calls to Secrets Manager to retrieve your credentials whenever needed. Secrets Manager helps you protect access to your IT resources and data by enabling you to rotate and manage access to your secrets. In the past, when you created a custom application to retrieve information from a database, you typically embedded the credentials (the secret) for accessing the database directly in the application. When the time came to rotate the credentials, you had to do more than just create new credentials. You had to invest time to update the application to use the new credentials. Then you distributed the updated application. If you had multiple applications with shared credentials and you missed updating one of them, the application failed. Because of this risk, many customers choose not to regularly rotate credentials, which effectively substitutes one risk for another. Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a specified schedule. This enables you to replace long-term secrets with short-term ones, significantly reducing the risk of compromise. For a list of terms and concepts you need to understand to make full use of Secrets Manager, see Get started with AWS Secrets Manager. Topics Basic AWS Secrets Manager scenario Features of AWS Secrets Manager Compliance with standards for AWS Secrets Manager Pricing for AWS Secrets Manager Support and feedback for AWS Secrets Manager Basic AWS Secrets Manager scenario The following diagram illustrates the most basic scenario. The diagram shows how you can store credentials for a database in Secrets Manager, and then use those credentials in an application to access the database. The database administrator creates a set of credentials on the Personnel database for use by an application called MyCustomApp. The administrator also configures those credentials with the permissions required for the application to access the Personnel database. The database administrator stores the credentials as a secret in Secrets Manager named MyCustomAppCreds. Then, Secrets Manager encrypts and stores the credentials within the secret as the protected secret text. When MyCustomApp accesses the database, the application queries Secrets Manager for the secret named MyCustomAppCreds. Secrets Manager retrieves the secret, decrypts the protected secret text, and returns the secret to the client app over a secured (HTTPS with TLS) channel. The client application parses the credentials, connection string, and any other required information from the response and then uses the information to access the database server. Note Secrets Manager supports many types of secrets, and it can natively rotate credentials for supported AWS databases without any additional programming. However, rotating the secrets for other databases or services requires creating a custom Lambda function to define how Secrets Manager interacts with the database or service. You need some programming skill to create the function.
For more information, see Rotate AWS Secrets Manager secrets. Features of AWS Secrets Manager Programmatically retrieve encrypted secret values at runtime Secrets Manager helps you improve your security posture by removing hard-coded credentials from your application source code, and by not storing credentials within the application, in any way. Storing the credentials in or with the application subjects them to possible compromise by anyone who can inspect your application or the components. Since you have to update your application and deploy the changes to every client before you can deprecate the old credentials, this process makes rotating your credentials difficult. Secrets Manager enables you to replace stored credentials with a runtime call to the Secrets Manager web service, so you can retrieve the credentials dynamically when you need them. Most of the time, your client requires access to the most recent version of the encrypted secret value. When you query for the encrypted secret value, you can choose to provide only the secret name or Amazon Resource Name (ARN), without specifying any version information at all. If you do this, Secrets Manager automatically returns the most recent version of the secret value. However, other versions can exist at the same time. Most systems support secrets more complicated than a simple password, such as full sets of credentials including the connection details, the user ID, and the password. Secrets Manager allows you to store multiple sets of these credentials at the same time. Secrets Manager stores each set in a different version of the secret. During the secret rotation process, Secrets Manager tracks the older credentials, as well as the new credentials you want to start using, until the rotation completes. Store different types of secrets Secrets Manager enables you to store text in the encrypted secret data portion of a secret. This typically includes the connection details of the database or service. These details can include the server name, IP address, and port number, as well as the user name and password used to sign in to the service. For details about the maximum and minimum values for secrets, see the Secrets Manager quotas. The protected text doesn't include: Secret name and description Rotation or expiration settings ARN of the KMS key associated with the secret Any attached AWS tags Encrypt your secret data Secrets Manager encrypts the protected text of a secret by using AWS Key Management Service (AWS KMS). Many AWS services use AWS KMS for key storage and encryption. AWS KMS ensures secure encryption of your secret when at rest. Secrets Manager associates every secret with a KMS key. It can be either the AWS managed key for Secrets Manager for the account (aws/secretsmanager) or a customer managed key you create in AWS KMS. Whenever Secrets Manager encrypts a new version of the protected secret data, Secrets Manager requests AWS KMS to generate a new data key from the KMS key. Secrets Manager uses this data key for envelope encryption. Secrets Manager stores the encrypted data key with the protected secret data. Whenever the secret needs decryption, Secrets Manager requests AWS KMS to decrypt the data key, which Secrets Manager then uses to decrypt the protected secret data. Secrets Manager never stores the data key in unencrypted form, and always disposes of the data key immediately after use. In addition, Secrets Manager, by default, only accepts requests from hosts using open standard Transport Layer Security (TLS) and Perfect Forward Secrecy.
Secrets Manager ensures encryption of your secret while in transit between AWS and the computers you use to retrieve the secret. Automatically rotate your secrets You can configure Secrets Manager to automatically rotate your secrets without user intervention and on a specified schedule. You define and implement rotation with an AWS Lambda function. This function defines how Secrets Manager performs the following tasks: Creates a new version of the secret. Stores the secret in Secrets Manager. Configures the protected service to use the new version. Verifies the new version. Marks the new version as production ready. Staging labels help you to keep track of the different versions of your secrets. Each version can have multiple staging labels attached, but each staging label can only be attached to one version. For example, Secrets Manager labels the currently active and in-use version of the secret with AWSCURRENT. You should configure your applications to always query for the current version of the secret. When the rotation process creates a new version of a secret, Secrets Manager automatically adds the staging label AWSPENDING to the new version until testing and validation completes. Only then does Secrets Manager add the AWSCURRENT staging label to this new version. Your applications immediately start using the new secret the next time they query for the AWSCURRENT version. Databases with fully configured and ready-to-use rotation support When you choose to enable rotation, Secrets Manager supports the following Amazon Relational Database Service (Amazon RDS) databases with AWS written and tested Lambda rotation function templates, and full configuration of the rotation process: Amazon Aurora on Amazon RDS MySQL on Amazon RDS PostgreSQL on Amazon RDS Oracle on Amazon RDS MariaDB on Amazon RDS Microsoft SQL Server on Amazon RDS Other services with fully configured and ready-to-use rotation support You can also choose to enable rotation on the following services, fully supported with AWS written and tested Lambda rotation function templates, and full configuration of the rotation process: Amazon DocumentDB Amazon Redshift You can also store secrets for almost any other kind of database or service. However, to automatically rotate the secrets, you need to create and configure a custom Lambda rotation function. For more information about writing a custom Lambda function for a database or service, see How rotation works. Control access to secrets You can attach AWS Identity and Access Management (IAM) permission policies to your users, groups, and roles that grant or deny access to specific secrets, and restrict management of those secrets. For example, you might attach one policy to a group with members that require the ability to fully manage and configure your secrets. Another policy attached to a role used by an application might grant only read permission on the one secret the application needs to run. Alternatively, you can attach a resource-based policy directly to the secret to grant permissions specifying users who can read or modify the secret and the versions. Unlike an identity-based policy which automatically applies to the user, group, or role, a resource-based policy attached to a secret uses the Principal element to identify the target of the policy. The Principal element can include users and roles from the same account as the secret or principals from other accounts. 
Compliance with standards for AWS Secrets Manager AWS Secrets Manager has undergone auditing for multiple compliance standards and can be part of your solution when you need to obtain compliance certification. Pricing for AWS Secrets Manager When you use Secrets Manager, you pay only for what you use, with no minimum or setup fees. There is no charge for secrets that you have marked for deletion.
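As a minimal sketch of the runtime retrieval described above, using the AWS SDK for Python (boto3) and the MyCustomAppCreds secret from the scenario; the JSON field names are assumptions about how the secret was stored:

import json
import boto3

client = boto3.client("secretsmanager")

# Retrieve the current version of the secret at runtime instead of
# hardcoding credentials in the application.
response = client.get_secret_value(SecretId="MyCustomAppCreds")

# For database secrets, SecretString is typically a JSON document with
# fields such as username, password, host, and port (assumed here).
creds = json.loads(response["SecretString"])
print(creds["username"], creds["host"])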

AWS Systems Manager

Use AWS Systems Manager to organize, monitor, and automate management tasks on your AWS resources. AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and enables you to automate operational tasks across your AWS resources.
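For example, a minimal boto3 sketch of automating a task with Run Command and the AWS-RunShellScript document; the instance ID is a placeholder:

import boto3

ssm = boto3.client("ssm")

# Run a shell command on a managed instance through Systems Manager Run Command.
result = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],           # placeholder managed instance
    DocumentName="AWS-RunShellScript",             # AWS-provided SSM document
    Parameters={"commands": ["yum -y update"]},
)
print(result["Command"]["CommandId"])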

AWS App Runner Documentation

AWS App Runner is a fully managed service that makes it easy for you to deploy from source code or a container image directly to a scalable and secure web application. AWS App Runner is an AWS service that provides a fast, simple, and cost-effective way to deploy from source code or a container image directly to a scalable and secure web application in the AWS Cloud. You don't need to learn new technologies, decide which compute service to use, or know how to provision and configure AWS resources. App Runner connects directly to your code or image repository. It provides an automatic integration and delivery pipeline with fully managed operations, high performance, scalability, and security.

AWS Artifact

AWS Artifact is a web service that enables you to download AWS security and compliance documents such as ISO certifications and SOC reports. AWS Artifact provides on-demand downloads of AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and Service Organization Control (SOC) reports. You can submit the security and compliance documents (also known as audit artifacts) to your auditors or regulators to demonstrate the security and compliance of the AWS infrastructure and services that you use. You can also use these documents as guidelines to evaluate your own cloud architecture and assess the effectiveness of your company's internal controls. AWS Artifact provides documents about AWS only. AWS customers are responsible for developing or obtaining documents that demonstrate the security and compliance of their companies. For more information, see Shared Responsibility Model. You can also use AWS Artifact to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA). A BAA typically is required for companies that are subject to the Health Insurance Portability and Accountability Act (HIPAA) to ensure that protected health information (PHI) is appropriately safeguarded. With AWS Artifact, you can accept agreements with AWS and designate AWS accounts that can legally process restricted information. You can accept an agreement on behalf of multiple accounts. To accept agreements for multiple accounts, use AWS Organizations to create an organization.

AWS Backup

AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on premises. AWS Backup is a fully-managed service that makes it easy to centralize and automate data protection across AWS services, in the cloud, and on premises. Using this service, you can configure backup policies and monitor activity for your AWS resources in one place. It allows you to automate and consolidate backup tasks that were previously performed service-by-service, and removes the need to create custom scripts and manual processes. With a few clicks in the AWS Backup console, you can automate your data protection policies and schedules. AWS Backup does not govern backups you take in your AWS environment outside of AWS Backup. Therefore, if you want a centralized, end-to-end solution for business and regulatory compliance requirements, start using AWS Backup today.
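As a hedged sketch of an on-demand backup with boto3, assuming a backup vault and an IAM role for AWS Backup already exist; the vault name and ARNs are placeholders:

import boto3

backup = boto3.client("backup")

# Start an on-demand backup of a resource into an existing backup vault.
job = backup.start_backup_job(
    BackupVaultName="Default",
    ResourceArn="arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0",
    IamRoleArn="arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
)
print(job["BackupJobId"])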

AWS Compute Optimizer

AWS Compute Optimizer recommends optimal AWS compute resources for your workloads. It can help you reduce costs and improve performance, by using machine learning to analyze your historical utilization metrics. Compute Optimizer helps you choose the optimal resource configuration based on your utilization data. AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics of your AWS resources. It reports whether your resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads. Compute Optimizer also provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which you can use to evaluate which recommendation provides the best price-performance trade-off. The analysis and visualization of your usage patterns can help you decide when to move or resize your running resources, and still meet your performance and capacity requirements. Compute Optimizer provides a console experience, and a set of APIs that allows you to view the findings of the analysis and recommendations for your resources across multiple AWS Regions. You can also view findings and recommendations across multiple accounts, if you opt in the management account of an organization. The findings from the service are also reported in the consoles of the supported services, such as the Amazon EC2 console. Supported resources and requirements Compute Optimizer generates recommendations for the following resources: Amazon Elastic Compute Cloud (Amazon EC2) instances Amazon EC2 Auto Scaling groups Amazon Elastic Block Store (Amazon EBS) volumes AWS Lambda functions For Compute Optimizer to generate recommendations for these resources, they must meet a specific set of requirements, and must have accumulated sufficient metric data. For more information, see Supported resources and requirements. Opting in You must opt in to have Compute Optimizer analyze your AWS resources. The service supports standalone AWS accounts, member accounts of an organization, and the management account of an organization. For more information, see Getting started with AWS Compute Optimizer. Analyzing metrics After you opt in, Compute Optimizer begins analyzing the specifications and the utilization metrics of your resources from Amazon CloudWatch for the last 14 days. For example, for Amazon EC2 instances, Compute Optimizer analyzes the vCPUs, memory, storage, and other specifications. It also analyzes the CPU utilization, network in and out, disk read and write, and other utilization metrics of currently running instances. For more information, see Metrics analyzed by AWS Compute Optimizer. Enhancing recommendations After you opt in, you can enhance your recommendations by activating recommendation preferences, such as the enhanced infrastructure metrics paid feature. It extends the metrics analysis look-back period for EC2 instances, including instances in Auto Scaling groups, to three months (compared to the 14-day default). For more information, see Activating recommendation preferences. Viewing findings and recommendations Optimization findings for your resources are displayed on the Compute Optimizer dashboard. For more information, see Viewing the AWS Compute Optimizer dashboard. The top optimization recommendations for each of your resources are listed on the recommendations page. 
The top 3 optimization recommendations and utilization graphs for a specific resource are listed on the resource details page. For more information, see Viewing resource recommendations. Export your optimization recommendations to record them over time, and share the data with others. For more information, see Exporting recommendations. Availability To view the currently supported AWS Regions and endpoints for Compute Optimizer, see Compute Optimizer Endpoints and Quotas in the AWS General Reference.
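A minimal boto3 sketch of reading recommendations programmatically, assuming the account has already opted in and has accumulated enough metric data:

import boto3

co = boto3.client("compute-optimizer")

# Fetch EC2 instance recommendations for the opted-in account.
resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    print(rec["instanceArn"], rec["finding"], rec["currentInstanceType"])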

AWS Elastic Disaster Recovery

AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. You can increase IT resilience when you use AWS Elastic Disaster Recovery to replicate on-premises or cloud-based applications running on supported operating systems. Use the AWS Management Console to configure replication and launch settings, monitor data replication, and launch instances for drills or recovery. Set up AWS Elastic Disaster Recovery on your source servers to initiate secure data replication. Your data is replicated to a staging area subnet in your AWS account, in the AWS Region you select. The staging area design reduces costs by using affordable storage and minimal compute resources to maintain ongoing replication. You can perform non-disruptive tests to confirm that implementation is complete. During normal operation, maintain readiness by monitoring replication and periodically performing non-disruptive recovery and failback drills. AWS Elastic Disaster Recovery automatically converts your servers to boot and run natively on AWS when you launch instances for drills or recovery. If you need to recover applications, you can launch recovery instances on AWS within minutes, using the most up-to-date server state or a previous point in time. After your applications are running on AWS, you can choose to keep them there, or you can initiate data replication back to your primary site when the issue is resolved. You can fail back to your primary site whenever you're ready.

AWS Firewall Manager

AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With AWS Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources. AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of protections, including AWS WAF, AWS Shield Advanced, Amazon VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall. With Firewall Manager, you set up your protections just once and the service automatically applies them across your accounts and resources, even as you add new accounts and resources. Firewall Manager provides these benefits: Helps to protect resources across accounts Helps to protect all resources of a particular type, such as all Amazon CloudFront distributions Helps to protect all resources with specific tags Automatically adds protection to resources that are added to your account Allows you to subscribe all member accounts in an AWS Organizations organization to AWS Shield Advanced, and automatically subscribes new in-scope accounts that join the organization Allows you to apply security group rules to all member accounts or specific subsets of accounts in an AWS Organizations organization, and automatically applies the rules to new in-scope accounts that join the organization Lets you use your own rules, or purchase managed rules from AWS Marketplace Firewall Manager is particularly useful when you want to protect your entire organization rather than a small number of specific accounts and resources, or if you frequently add new resources that you want to protect. Firewall Manager also provides centralized monitoring of DDoS attacks across your organization.

AWS Snowcone

AWS Snowcone is a portable, rugged, and secure device for edge computing and data transfer. You can use a Snowcone device to collect, process, and move data to the AWS Cloud, either offline by shipping the device to AWS, or online by using AWS DataSync. It can be challenging to run applications in austere (non-data center) edge environments, or where there is a lack of consistent network connectivity. These locations often lack the space, power, and cooling needed for data center IT equipment. Snowcone is available in two flavors: Snowcone - Snowcone has two vCPUs, 4 GB of memory, and 8 TB of hard disk drive (HDD) based storage. Snowcone SSD - Snowcone SSD has two vCPUs, 4 GB of memory, and 14 TB of solid state drive (SSD) based storage. With two CPUs and terabytes of storage, a Snowcone device can run edge computing workloads that use Amazon Elastic Compute Cloud (Amazon EC2) instances and store data securely. Snowcone devices are small (8.94" x 5.85" x 3.25" / 227 mm x 148.6 mm x 82.65 mm), so they can be placed next to machinery in a factory to collect, format, and transport data back to AWS for storage and analysis. A Snowcone device weighs about 4.5 lbs. (2 kg), so you can carry one in a backpack, use it with battery-based operation, and use the Wi-Fi interface to gather sensor data. Note Wi-Fi is available only in AWS Regions in North America. Snowcone devices offer a file interface with Network File System (NFS) support. Snowcone devices support data transfer from on-premises Windows, Linux, and macOS servers and file-based applications through the NFS interface. Like AWS Snowball, AWS Snowcone has multiple layers of security encryption capabilities. You can use either of these services to collect, process, and transfer data to AWS, and run edge computing workloads that use Amazon EC2 instances. Snowcone is designed for data migration needs up to dozens of terabytes. It can be used in space-constrained environments where Snowball Edge devices don't fit. Use Cases You can use AWS Snowcone devices for the following use cases: For edge computing applications, to collect data, process the data to gain immediate insight, and then transfer the data online to AWS. To transfer data that is continuously generated by sensors or machines online to AWS in a factory or at other edge locations. To distribute media, scientific, or other content from AWS storage services to your partners and customers. To aggregate content by transferring media, scientific, or other content from your edge locations to AWS. For one-time data migration scenarios where your data is ready to be transferred, Snowcone offers a quick and low-cost way to transfer up to 8 TB or 14 TB of data to the AWS Cloud by shipping the device back to AWS. For mobile deployments, a Snowcone device can run on specified battery power. For a light workload at 25 percent CPU usage, the device can run on a battery for up to approximately 6 hours. You can use the Wi-Fi interface on your Snowcone device to collect data from wireless sensors. An AWS Snowcone device is low power, portable, lightweight, and vibration resistant, so you can use it in a wide variety of remote and austere locations. Pricing You can order a Snowcone device for pay per use and keep the device for up to four years.

AWS WAF

AWS WAF is a web application firewall that lets you monitor web requests that are forwarded to Amazon CloudFront distributions or an Application Load Balancer. You can also use AWS WAF to block or allow requests based on conditions that you specify, such as the IP addresses that requests originate from or values in the requests. AWS WAF is a web application firewall that lets you monitor the HTTP(S) requests that are forwarded to your protected web application resources. You can protect the following resource types: Amazon CloudFront distribution Amazon API Gateway REST API Application Load Balancer AWS AppSync GraphQL API Amazon Cognito user pool AWS WAF also lets you control access to your content. Based on criteria that you specify, such as the IP addresses that requests originate from or the values of query strings, the service associated with your protected resource responds to requests either with the requested content, with an HTTP 403 status code (Forbidden), or with a custom response. Note You can also use AWS WAF to protect your applications that are hosted in Amazon Elastic Container Service (Amazon ECS) containers. Amazon ECS is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. To use this option, you configure Amazon ECS to use an Application Load Balancer that is enabled for AWS WAF to route and protect HTTP(S) layer 7 traffic across the tasks in your service. For more information, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
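As a hedged boto3 sketch of the building blocks involved, assuming a regional scope; the IP range is an example documentation range and the names are placeholders:

import boto3

wafv2 = boto3.client("wafv2")

# Create an IP set that rules in a web ACL can reference to block requests
# from specific source addresses.
wafv2.create_ip_set(
    Name="blocked-addresses",
    Scope="REGIONAL",                # use "CLOUDFRONT" for CloudFront distributions
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],
)

# List the web ACLs in this Region that could reference the IP set.
for acl in wafv2.list_web_acls(Scope="REGIONAL")["WebACLs"]:
    print(acl["Name"], acl["ARN"])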

AWS Shield

AWS provides two levels of protection against DDoS attacks: AWS Shield Standard and AWS Shield Advanced. AWS Shield Standard is automatically included at no extra cost beyond what you already pay for AWS WAF and your other AWS services. For added protection against DDoS attacks, AWS offers AWS Shield Advanced. AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, Amazon CloudFront distributions, and Amazon Route 53 hosted zones. Protection against Distributed Denial of Service (DDoS) attacks is of primary importance for your internet-facing applications. When you build your application on AWS, you can make use of protections that AWS provides at no additional cost. Additionally, you can use the AWS Shield Advanced managed threat protection service to improve your security posture with additional DDoS detection, mitigation, and response capabilities. AWS is committed to providing you with the tools, best practices, and services to help ensure high availability, security, and resiliency in your defense against bad actors on the internet. This guide is provided to help IT decision makers and security engineers understand how to use Shield and Shield Advanced to better protect their applications from DDoS attacks and other external threats. When you build your application on AWS, you receive automatic protection by AWS against common volumetric DDoS attack vectors, like UDP reflection attacks and TCP SYN floods. You can leverage these protections to ensure the availability of the applications that you run on AWS by designing and configuring your architecture for DDoS resiliency. This guide provides recommendations that can help you design, create, and configure your application architectures for DDoS resiliency. Applications that adhere to the best practices provided in this guide can benefit from an improved continuity of availability when they are targeted by larger DDoS attacks and by wider ranges of DDoS attack vectors. Additionally, this guide shows you how to use Shield Advanced to implement an optimized DDoS protection posture for your critical applications. These include applications for which you've guaranteed a certain level of availability to your customers and those that require operational support from AWS during DDoS events.

Amazon Cloud Directory

Amazon Cloud Directory is a cloud-native directory that can store hundreds of millions of application-specific objects with multiple relationships and schemas. Use Cloud Directory when you need a cloud-scale directory to share and control access to hierarchical data between your applications. With Cloud Directory, you can organize application data into multiple hierarchies to support many organizational pivots and relationships across directory information. Amazon Cloud Directory is a highly available multi-tenant directory-based store in AWS. These directories scale automatically to hundreds of millions of objects as needed for applications. This lets operations staff focus on developing and deploying applications that drive the business, not managing directory infrastructure. Unlike traditional directory systems, Cloud Directory does not limit organizing directory objects in a single fixed hierarchy. With Cloud Directory, you can organize directory objects into multiple hierarchies to support many organizational pivots and relationships across directory information. For example, a directory of users may provide a hierarchical view based on reporting structure, location, and project affiliation. Similarly, a directory of devices may have multiple hierarchical views based on its manufacturer, current owner, and physical location. At its core, Cloud Directory is a specialized graph-based directory store that provides a foundational building block for developers. With Cloud Directory, developers can do the following: Create directory-based applications easily and without having to worry about deployment, global scale, availability, and performance Build applications that provide user and group management, permissions or policy management, device registry, customer management, address books, and application or product catalogs Define new directory objects or extend existing types to meet their application needs, reducing the code they need to write Reduce the complexity of layering applications on top of Cloud Directory Manage the evolution of schema information over time, ensuring future compatibility for consumers Cloud Directory includes a set of API operations to access various objects and policies stored in your Cloud Directory-based directories. For a list of available operations, see Amazon Cloud Directory API Actions. For a list of operations and the permissions required to perform each API action, see Amazon Cloud Directory API Permissions: Actions, Resources, and Conditions Reference. For a list of supported Cloud Directory regions, see the AWS Regions and Endpoints documentation. What Cloud Directory Is Not Cloud Directory is not a directory service for IT Administrators who want to manage or migrate their directory infrastructure.
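A minimal boto3 sketch of calling the Cloud Directory API, assuming at least one directory already exists in the account:

import boto3

cd = boto3.client("clouddirectory")

# List the Cloud Directory directories in the account and their state.
for directory in cd.list_directories()["Directories"]:
    print(directory["Name"], directory["State"])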

Amazon CloudWatch

Amazon CloudWatch provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up, manage, and scale your own monitoring systems and infrastructure. Use CloudWatch to monitor your AWS resources and the applications you run on AWS in real time. Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications. The CloudWatch home page automatically displays metrics about every AWS service you use. You can additionally create custom dashboards to display metrics about your custom applications, and display custom collections of metrics that you choose. You can create alarms that watch metrics and send notifications or automatically make changes to the resources you are monitoring when a threshold is breached. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use that data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health. Accessing CloudWatch You can access CloudWatch using any of the following methods: Amazon CloudWatch console - https://console.aws.amazon.com/cloudwatch/ AWS CLI - For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide. CloudWatch API - For more information, see the Amazon CloudWatch API Reference. AWS SDKs - For more information, see Tools for Amazon Web Services. Related AWS services The following services are used along with Amazon CloudWatch: Amazon Simple Notification Service (Amazon SNS) coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. You use Amazon SNS with CloudWatch to send messages when an alarm threshold has been reached. For more information, see Setting up Amazon SNS notifications. Amazon EC2 Auto Scaling enables you to automatically launch or terminate Amazon EC2 instances based on user-defined policies, health status checks, and schedules. You can use a CloudWatch alarm with Amazon EC2 Auto Scaling to scale your EC2 instances based on demand. For more information, see Dynamic Scaling in the Amazon EC2 Auto Scaling User Guide. AWS CloudTrail enables you to monitor the calls made to the Amazon CloudWatch API for your account, including calls made by the AWS Management Console, AWS CLI, and other services. When CloudTrail logging is turned on, CloudWatch writes log files to the Amazon S3 bucket that you specified when you configured CloudTrail. For more information, see Logging Amazon CloudWatch API calls with AWS CloudTrail. AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. Use IAM to control who can use your AWS resources (authentication) and what resources they can use in which ways (authorization). For more information, see Identity and access management for Amazon CloudWatch.
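As a minimal boto3 sketch of the metric-and-alarm workflow described above; the namespace, instance ID, and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom metric data point for an application.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "PageLoadTime", "Value": 0.42, "Unit": "Seconds"}],
)

# Alarm on sustained high average CPU for an EC2 instance and notify an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)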

Amazon Cognito

Amazon Cognito handles user authentication and authorization for your web and mobile apps. With user pools, you can easily and securely add sign-up and sign-in functionality to your apps. With identity pools (federated identities), your apps can get temporary credentials that grant users access to specific AWS resources, whether the users are anonymous or are signed in. A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Google, Facebook, Amazon, or Apple, and through SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through a Software Development Kit (SDK). User pools provide: Sign-up and sign-in services. A built-in, customizable web UI to sign in users. Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in with SAML identity providers from your user pool. User directory management and user profiles. Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification. Customized workflows and user migration through AWS Lambda triggers. After successfully authenticating a user, Amazon Cognito issues JSON web tokens (JWT) that you can use to secure and authorize access to your own APIs, or exchange for AWS credentials. Amazon Cognito provides token handling through the Amazon Cognito user pools Identity SDKs for JavaScript, Android, and iOS. See Getting started with user pools and Using tokens with user pools. The two main components of Amazon Cognito are user pools and identity pools. Identity pools provide AWS credentials to grant your users access to other AWS services. To enable users in your user pool to access AWS resources, you can configure an identity pool to exchange user pool tokens for AWS credentials.
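A hedged boto3 sketch of user pool sign-up and sign-in, assuming an app client with the USER_PASSWORD_AUTH flow enabled; the client ID and user details are placeholders:

import boto3

idp = boto3.client("cognito-idp")

# Register a new user in the user pool app client.
idp.sign_up(
    ClientId="example-app-client-id",
    Username="jane",
    Password="CorrectHorseBatteryStaple1!",
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)

# Authenticate a confirmed user and receive JSON web tokens (JWTs).
tokens = idp.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "jane", "PASSWORD": "CorrectHorseBatteryStaple1!"},
)
print(tokens["AuthenticationResult"]["IdToken"][:20], "...")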

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data. For more information, see DynamoDB encryption at rest. With DynamoDB, you can create database tables that can store and retrieve any amount of data and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation. You can use the AWS Management Console to monitor resource utilization and performance metrics. DynamoDB provides on-demand backup capability. It allows you to create full backups of your tables for long-term retention and archival for regulatory compliance needs. For more information, see Using On-Demand backup and restore for DynamoDB. You can create on-demand backups and enable point-in-time recovery for your Amazon DynamoDB tables. Point-in-time recovery helps protect your tables from accidental write or delete operations. With point-in-time recovery, you can restore a table to any point in time during the last 35 days. For more information, see Point-in-time recovery: How it works. DynamoDB allows you to delete expired items from tables automatically to help you reduce storage usage and the cost of storing data that is no longer relevant. For more information, see Expiring items by using DynamoDB Time to Live (TTL). High availability and durability DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability. You can use global tables to keep DynamoDB tables in sync across AWS Regions. For more information, see Global tables - multi-Region replication for DynamoDB.
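As a minimal boto3 sketch of creating a table and reading and writing an item; the table and attribute names are placeholders:

import boto3

dynamodb = boto3.resource("dynamodb")

# Create an on-demand table keyed by a single partition key.
table = dynamodb.create_table(
    TableName="Music",
    KeySchema=[{"AttributeName": "SongId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "SongId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Write and read an item.
table.put_item(Item={"SongId": "song-001", "Title": "Example", "Plays": 1})
item = table.get_item(Key={"SongId": "song-001"})["Item"]
print(item["Title"])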

EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable computing capacity—literally, servers in Amazon's data centers—that you use to build and host your software systems. General Purpose General purpose instances provide a balance of compute, memory and networking resources, and can be used for a variety of diverse workloads. These instances are ideal for applications that use these resources in equal proportions such as web servers and code repositories. Compute Optimized Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications. Memory Optimized Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. Accelerated Computing Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs. Storage Optimized Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
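For example, a minimal boto3 sketch of launching a compute optimized instance for a CPU-bound workload; the AMI ID and key pair name are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Launch one instance from a compute optimized family.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",        # compute optimized instance family
    KeyName="my-key-pair",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])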

Amazon Elastic Container Registry

Amazon Elastic Container Registry (Amazon ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container image registry service that is secure, scalable, and reliable. Amazon ECR supports private repositories with resource-based permissions using AWS IAM. This is so that specified users or Amazon EC2 instances can access your container repositories and images. You can use your preferred CLI to push, pull, and manage Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. Note Amazon ECR supports public container image repositories as well. For more information, see What is Amazon ECR Public in the Amazon ECR Public User Guide. The AWS container services team maintains a public roadmap on GitHub. It contains information about what the teams are working on and allows all AWS customers the ability to give direct feedback. For more information, see AWS Containers Roadmap. Components of Amazon ECR Amazon ECR contains the following components: Registry An Amazon ECR private registry is provided to each AWS account; you can create one or more repositories in your registry and store images in them. For more information, see Amazon ECR private registry. Authorization token Your client must authenticate to Amazon ECR registries as an AWS user before it can push and pull images. For more information, see Private registry authentication. Repository An Amazon ECR repository contains your Docker images, Open Container Initiative (OCI) images, and OCI compatible artifacts. For more information, see Amazon ECR private repositories. Repository policy You can control access to your repositories and the images within them with repository policies. For more information, see Private repository policies. Image You can push and pull container images to your repositories. You can use these images locally on your development system, or you can use them in Amazon ECS task definitions and Amazon EKS pod specifications. For more information, see Using Amazon ECR images with Amazon ECS and Using Amazon ECR Images with Amazon EKS. Features of Amazon ECR Amazon ECR provides the following features: Lifecycle policies help with managing the lifecycle of the images in your repositories. You define rules that result in the cleaning up of unused images. You can test rules before applying them to your repository. For more information, see Lifecycle policies. Image scanning helps in identifying software vulnerabilities in your container images. Each repository can be configured to scan on push. This ensures that each new image pushed to the repository is scanned. You can then retrieve the results of the image scan. For more information, see Image scanning. Cross-Region and cross-account replication makes it easier for you to have your images where you need them. This is configured as a registry setting and is on a per-Region basis. For more information, see Private registry settings. Pull through cache rules provide a way to cache repositories in remote public registries in your private Amazon ECR registry. Using a pull through cache rule, Amazon ECR will periodically reach out to the remote registry to ensure the cached image in your Amazon ECR private registry is up to date. For more information, see Using pull through cache rules.
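A minimal boto3 sketch of creating a private repository and fetching an authorization token that a Docker or OCI client can use to log in; the repository name is a placeholder:

import base64
import boto3

ecr = boto3.client("ecr")

# Create a private repository in the account's default registry.
repo = ecr.create_repository(repositoryName="my-app")
print(repo["repository"]["repositoryUri"])

# Obtain an authorization token; it is a base64-encoded "AWS:password" pair
# that a container client can use to authenticate before pushing or pulling.
token = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(token["authorizationToken"]).decode().split(":")
print(token["proxyEndpoint"], username)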

Amazon GuardDuty

Amazon GuardDuty is a continuous security monitoring service. Amazon GuardDuty can help to identify unexpected and potentially unauthorized or malicious activity in your AWS environment. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes data sources, such as AWS CloudTrail data events for Amazon S3 logs, CloudTrail management event logs, DNS logs, Amazon EBS volume data, Amazon EKS audit logs, and Amazon VPC flow logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your AWS environment. This can include issues like escalation of privileges, use of exposed credentials, communication with malicious IP addresses or domains, or the presence of malware on your Amazon EC2 instances and container workloads. For example, GuardDuty can detect compromised EC2 instances and container workloads serving malware or mining bitcoin. It also monitors AWS account access behavior for signs of compromise, such as unauthorized infrastructure deployments, like instances deployed in a Region that has never been used, or unusual API calls like a password policy change to reduce password strength. GuardDuty informs you of the status of your AWS environment by producing security findings that you can view in the GuardDuty console or through Amazon CloudWatch Events. Accessing GuardDuty You can work with GuardDuty in any of the following ways: GuardDuty Console https://console.aws.amazon.com/guardduty The console is a browser-based interface to access and use GuardDuty. AWS SDKs AWS provides software development kits (SDKs) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to GuardDuty. For information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services. GuardDuty HTTPS API You can access GuardDuty and AWS programmatically by using the GuardDuty HTTPS API, which lets you issue HTTPS requests directly to the service.
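A minimal boto3 sketch of reading findings programmatically, assuming a detector already exists in the Region:

import boto3

guardduty = boto3.client("guardduty")

# Look up the detector for this account and Region, then list recent findings.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids[:10]
    )["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Severity"])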

Amazon Polly

Amazon Polly is a Text-to-Speech (TTS) cloud service that converts text into lifelike speech. You can use Amazon Polly to develop applications that increase engagement and accessibility. Amazon Polly supports multiple languages and includes a variety of lifelike voices, so you can build speech-enabled applications that work in multiple locations and use the ideal voice for your customers. Amazon Polly is a cloud service that converts text into lifelike speech. You can use Amazon Polly to develop applications that increase engagement and accessibility. Amazon Polly supports multiple languages and includes a variety of lifelike voices, so you can build speech-enabled applications that work in multiple locations and use the ideal voice for your customers. With Amazon Polly, you only pay for the text you synthesize. You can also cache and replay Amazon Polly's generated speech at no additional cost. Additionally, Amazon Polly includes a number of Neural Text-to-Speech (NTTS) voices, delivering ground-breaking improvements in speech quality through a new machine learning approach, thereby offering customers the most natural and human-like text-to-speech voices possible. Neural TTS technology also supports a Newscaster speaking style that is tailored to news narration use cases. Common use cases for Amazon Polly include, but are not limited to, mobile applications such as newsreaders, games, eLearning platforms, accessibility applications for visually impaired people, and the rapidly growing segment of Internet of Things (IoT). Amazon Polly is certified for use with regulated workloads for HIPAA (the Health Insurance Portability and Accountability Act of 1996), and Payment Card Industry Data Security Standard (PCI DSS). Some of the benefits of using Amazon Polly include: High quality - Amazon Polly offers both new neural TTS and best-in-class standard TTS technology to synthesize superior natural speech with high pronunciation accuracy (including abbreviations, acronym expansions, date/time interpretations, and homograph disambiguation). Low latency - Amazon Polly ensures fast responses, which make it a viable option for low-latency use cases such as dialog systems. Support for a large portfolio of languages and voices - Amazon Polly supports dozens of voices across a wide range of languages, offering male and female voice options for most languages. Neural TTS currently supports three British English voices and eight US English voices. This number will continue to increase as we bring more neural voices online. US English voices Matthew and Joanna can also use the Neural Newscaster speaking style, similar to what you might hear from a professional news anchor. Cost-effective - Amazon Polly's pay-per-use model means there are no setup costs. You can start small and scale up as your application grows. Cloud-based solution - On-device TTS solutions require significant computing resources, notably CPU power, RAM, and disk space. These can result in higher development costs and higher power consumption on devices such as tablets, smart phones, and so on. In contrast, TTS conversion done in the AWS Cloud dramatically reduces local resource requirements. This enables support of all the available languages and voices at the best possible quality. Moreover, speech improvements are instantly available to all end-users and do not require additional updates for devices.
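As a minimal boto3 sketch of synthesizing speech with a neural voice and saving the audio stream to a file:

import boto3

polly = boto3.client("polly")

# Synthesize speech with a neural voice and write the MP3 stream to disk.
response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    VoiceId="Joanna",
    Engine="neural",
    OutputFormat="mp3",
)

with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())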

Amazon Quantum Ledger Database

Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. You can use Amazon QLDB to track all application data changes, and maintain a complete and verifiable history of changes over time. Ledgers are typically used to record a history of economic and financial activity in an organization. Many organizations build applications with ledger-like functionality because they want to maintain an accurate history of their applications' data. For example, they might want to track the history of credits and debits in banking transactions, verify the data lineage of an insurance claim, or trace the movement of an item in a supply chain network. Ledger applications are often implemented using custom audit tables or audit trails created in relational databases. Amazon QLDB is a new class of database that helps eliminate the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, the history of changes to your data is immutable—it can't be altered, updated, or deleted. And using cryptography, you can easily verify that there have been no unintended changes to your application's data. QLDB uses an immutable transactional log, known as a journal. The journal is append-only and is composed of a sequenced and hash-chained set of blocks that contain your committed data. Amazon QLDB pricing With Amazon QLDB, you pay only for what you use with no minimum fees or mandatory service usage. You pay only for the resources your ledger database consumes, and you do not need to provision in advance.
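A hedged sketch using the open-source pyqldb driver for Python, assuming a ledger named my-ledger already exists; the table name and document values are placeholders:

# Requires the Amazon QLDB driver: pip install pyqldb
from pyqldb.driver.qldb_driver import QldbDriver

# Connect to an existing ledger; every statement runs inside a transaction
# that is committed to the append-only, hash-chained journal.
driver = QldbDriver(ledger_name="my-ledger")

driver.execute_lambda(lambda txn: txn.execute_statement("CREATE TABLE Vehicles"))
driver.execute_lambda(
    lambda txn: txn.execute_statement(
        "INSERT INTO Vehicles ?", {"VIN": "1N4AL11D75C109151"}
    )
)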

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly build and train machine learning models, and then deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don't have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own-algorithms and frameworks, SageMaker offers flexible distributed training options that adjust to your specific workflows. Deploy a model into a secure and scalable environment by launching it with a few clicks from SageMaker Studio or the SageMaker console. Training and hosting are billed by minutes of usage, with no minimum fees and no upfront commitments.
Amazon SageMaker Features
Amazon SageMaker includes the following features:
SageMaker Studio - An integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application.
SageMaker Canvas - An AutoML service that gives people with no coding experience the ability to build models and make predictions with them.
SageMaker Ground Truth Plus - A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own.
SageMaker Studio Lab - A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab.
SageMaker Training Compiler - Train deep learning models faster on scalable GPU instances managed by SageMaker.
SageMaker Studio Universal Notebook - Easily discover, connect to, create, terminate, and manage Amazon EMR clusters in single-account and cross-account configurations directly from SageMaker Studio.
SageMaker Serverless Endpoints - A serverless endpoint option for hosting your ML model. Automatically scales in capacity to serve your endpoint traffic and removes the need to select instance types or manage scaling policies on an endpoint.
SageMaker Inference Recommender - Get recommendations on inference instance types and configurations (for example, instance count, container parameters, and model optimizations) to use for your ML models and workloads.
SageMaker Model Registry - Versioning, artifact and lineage tracking, approval workflow, and cross-account support for deployment of your machine learning models.
SageMaker Projects - Create end-to-end ML solutions with CI/CD by using SageMaker projects.
SageMaker Model Building Pipelines - Create and manage machine learning pipelines integrated directly with SageMaker jobs.
SageMaker ML Lineage Tracking - Track the lineage of machine learning workflows.
SageMaker Data Wrangler - Import, analyze, prepare, and featurize data in SageMaker Studio. You can integrate Data Wrangler into your machine learning workflows to simplify and streamline data pre-processing and feature engineering using little to no coding. You can also add your own Python scripts and transformations to customize your data prep workflow.
SageMaker Feature Store - A centralized store for features and associated metadata so features can be easily discovered and reused. You can create two types of stores, an Online or Offline store. The Online store can be used for low-latency, real-time inference use cases, and the Offline store can be used for training and batch inference.
SageMaker JumpStart - Learn about SageMaker features and capabilities through curated 1-click solutions, example notebooks, and pretrained models that you can deploy. You can also fine-tune the models and deploy them.
SageMaker Clarify - Improve your machine learning models by detecting potential bias and help explain the predictions that models make.
SageMaker Edge Manager - Optimize custom models for edge devices, create and manage fleets, and run models with an efficient runtime.
SageMaker Ground Truth - Create high-quality training datasets by using workers along with machine learning to label data.
Amazon Augmented AI - Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers.
SageMaker Studio Notebooks - The next generation of SageMaker notebooks that include AWS IAM Identity Center (successor to AWS Single Sign-On) integration, fast start-up times, and single-click sharing.
SageMaker Experiments - Experiment management and tracking. You can use the tracked data to reconstruct an experiment, incrementally build on experiments conducted by peers, and trace model lineage for compliance and audit verifications.
SageMaker Debugger - Inspect training parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small.
SageMaker Autopilot - Users without machine learning knowledge can quickly build classification and regression models.
SageMaker Model Monitor - Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality.
SageMaker Neo - Train machine learning models once, then run anywhere in the cloud and at the edge.
SageMaker Elastic Inference - Speed up the throughput and decrease the latency of getting real-time inferences.
Reinforcement Learning - Maximize the long-term reward that an agent receives as a result of its actions.
Preprocessing - Analyze and preprocess data, tackle feature engineering, and evaluate models.
Batch Transform - Preprocess datasets, run inference when you don't need a persistent endpoint, and associate input records with inferences to assist the interpretation of results.
Amazon SageMaker Pricing
As with other AWS products, there are no contracts or minimum commitments for using Amazon SageMaker.
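To make the build-train-deploy flow concrete, here is a minimal, hedged sketch using the SageMaker Python SDK; the IAM role, training image URI, and S3 paths are placeholders you would replace with your own.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"   # placeholder execution role

# Configure a training job against a training container image of your choice.
estimator = Estimator(
    image_uri="<your-training-image-uri>",                 # placeholder image URI
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-artifacts",          # placeholder bucket
    sagemaker_session=session,
)

# Train on data staged in S3, then deploy the resulting model to a real-time endpoint.
estimator.fit({"train": "s3://my-bucket/train"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

Serverless endpoints and batch transform, described above, reuse the same trained model artifacts with different deployment calls.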

EC2 Image Builder

EC2 Image Builder is a fully managed AWS service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date "golden" server images that are pre-installed and pre-configured with software and settings to meet specific IT standards. You can use the AWS Management Console, AWS CLI, or APIs to create custom images in your AWS account. When you use the AWS Management Console, the Image Builder wizard guides you through steps to:
Provide starting artifacts
Add and remove software
Customize settings and scripts
Run selected tests
Distribute images to AWS Regions
The images you build are created in your account and you can configure them for operating system patches on an ongoing basis. For troubleshooting and debugging your image deployment, you can configure build logs to be added to your Amazon Simple Storage Service (Amazon S3) bucket. You can also configure the instance-building application to send logs to CloudWatch. To receive notifications of image build status, and associate an Amazon Elastic Compute Cloud (Amazon EC2) key pair with your instance to perform manual debugging and inspection, you can configure an SNS topic. Along with a final image, Image Builder creates an image recipe, which is a combination of the source image and components for building and testing. You can use the image recipe with existing source code version control systems and continuous integration/continuous deployment pipelines for repeatable automation.
Features of EC2 Image Builder
EC2 Image Builder provides the following features:
Increase productivity and reduce operations for building compliant and up-to-date images - Image Builder reduces the amount of work involved in creating and managing images at scale by automating your build pipelines. You can automate your builds by providing your build execution schedule preference. Automation reduces the operational cost of maintaining your software with the latest operating system patches.
Increase service uptime - Image Builder allows you to test your images before deployment with both AWS-provided and customized tests. AWS will distribute your image only if all of the configured tests have succeeded.
Raise the security bar for deployments - Image Builder allows you to create images that remove unnecessary exposure to component security vulnerabilities. You can apply AWS security settings to create secure, out-of-the-box images that meet industry and internal security criteria. Image Builder also provides collections of settings for companies in regulated industries. You can use these settings to help you quickly and easily build compliant images for STIG standards. For a complete list of STIG components available through Image Builder, see EC2 Image Builder STIG components.
Centralized enforcement and lineage tracking - Using built-in integrations with AWS Organizations, Image Builder enables you to enforce policies that restrict accounts to run instances only from approved AMIs.
Simplified sharing of resources across AWS accounts - EC2 Image Builder integrates with AWS Resource Access Manager (AWS RAM) to allow you to share certain resources with any AWS account or through AWS Organizations.
EC2 Image Builder resources that can be shared are:
Components
Images
Image recipes
Container recipes
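As a small example of driving Image Builder programmatically, the sketch below (Python, Boto3) lists existing pipelines and starts a build for one; it assumes at least one pipeline has already been created in the console or via the API.

```python
import boto3

imagebuilder = boto3.client("imagebuilder")

# List the image pipelines defined in this account and Region.
pipelines = imagebuilder.list_image_pipelines()["imagePipelineList"]
for pipeline in pipelines:
    print(pipeline["name"], pipeline["arn"])

# Kick off an on-demand build for the first pipeline found.
if pipelines:
    run = imagebuilder.start_image_pipeline_execution(imagePipelineArn=pipelines[0]["arn"])
    print("Started image build:", run["imageBuildVersionArn"])
```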

AWS IAM Identity Center

IAM Identity Center provides one place where you can create or connect workforce users and centrally manage their access to all their AWS accounts, Identity Center enabled applications, and applications that support Security Assertion Markup Language (SAML) 2.0. Workforce users benefit from a single sign-on experience and can use the access portal to find all their assigned AWS accounts and applications in one place. AWS IAM Identity Center (successor to AWS Single Sign-On) expands the capabilities of AWS Identity and Access Management (IAM) to provide a central place that brings together administration of users and their access to AWS accounts and cloud applications. Although the service name AWS Single Sign-On has been retired, the term single sign-on is still used throughout this guide to describe the authentication scheme that allows users to sign in one time to access multiple applications and websites. With IAM Identity Center you can manage sign-in security for your workforce by creating or connecting your users and groups to AWS in one place. With multi-account permissions you can assign your workforce identities access to AWS accounts. You can use application assignments to assign your users access to software as a service (SaaS) applications. With a single click, IAM Identity Center enabled application admins can assign access to your workforce users, and can also use application assignments to assign your users access to software as a service (SaaS) applications. IAM Identity Center features IAM Identity Center includes the following core features: Workforce identities Human users that are members of your organization are also known as workforce identities or workforce users. You can create workforce users and groups in IAM Identity Center, or connect and synchronize to an existing set of users and groups in your own identity source for use across all your AWS accounts and applications. Supported identity sources include Microsoft Active Directory Domain Services, and external identity providers such as Okta Universal Directory or Microsoft Azure AD. Application assignments for SAML applications With application assignments, you can grant your workforce users in IAM Identity Center single sign-on access to SAML 2.0 applications, such as Salesforce and Microsoft 365. Your users can access these applications in a single place, without the need for you to set up separate federation. Identity Center enabled applications AWS applications and services, such as Amazon Managed Grafana, Amazon Monitron, and Amazon SageMaker Studio Notebooks, discover and connect to IAM Identity Center automatically to receive sign-in and user directory services. This provides users with a consistent single sign-on experience to these applications with no additional configuration of the applications. Because the applications share a common view of users, groups, and group membership, users also have a consistent experience when sharing application resources with others. Multi-account permissions With multi-account permissions you can plan for and centrally implement IAM permissions across multiple AWS accounts at one time without needing to configure each of your accounts manually. You can create fine-grained permissions based on common job functions or define custom permissions that meet your security needs. You can then assign those permissions to workforce users to control their access over specific accounts. 
AWS access portal The AWS access portal provides your workforce users with one-click access to all their assigned AWS accounts and cloud applications through a simple web portal.
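For illustration, the following Boto3 sketch grants a workforce user access to an AWS account through a permission set; the account ID, permission set ARN, and user ID are placeholders, and it assumes IAM Identity Center is already enabled for the organization.

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Look up the IAM Identity Center instance for this organization.
instance = sso_admin.list_instances()["Instances"][0]

# Assign a workforce user to an AWS account with a given permission set.
sso_admin.create_account_assignment(
    InstanceArn=instance["InstanceArn"],
    TargetId="111122223333",                        # placeholder AWS account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-example/ps-example",  # placeholder
    PrincipalType="USER",
    PrincipalId="user-id-from-identity-store",      # placeholder Identity Store user ID
)
```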

AWS Serverless Application Repository

The AWS Serverless Application Repository is a managed repository for serverless applications. It enables teams, organizations, and individual developers to find, deploy, publish, share, store, and easily assemble serverless architectures. The AWS Serverless Application Repository makes it easy for developers and enterprises to quickly find, deploy, and publish serverless applications in the AWS Cloud. For more information about serverless applications, see Serverless Computing and Applications on the AWS website. You can easily publish applications, sharing them publicly with the community at large, or privately within your team or across your organization. To publish a serverless application (or app), you can use the AWS Management Console, the AWS SAM command line interface (AWS SAM CLI), or AWS SDKs to upload your code. Along with your code, you upload a simple manifest file, also known as an AWS Serverless Application Model (AWS SAM) template. For more information about AWS SAM, see the AWS Serverless Application Model Developer Guide. The AWS Serverless Application Repository is deeply integrated with the AWS Lambda console. This integration means that developers of all levels can get started with serverless computing without needing to learn anything new. You can use category keywords to browse for applications such as web and mobile backends, data processing applications, or chatbots. You can also search for applications by name, publisher, or event source. To use an application, you simply choose it, configure any required fields, and deploy it with a few clicks. In this guide, you can learn about the two ways to work with the AWS Serverless Application Repository: Publishing Applications - Configure and upload applications to make them available to other developers, and publish new versions of applications. Deploying Applications - Browse for applications and view information about them, including source code and readme files. Also install, configure, and deploy applications of your choosing.
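A minimal Boto3 sketch of the deployment side might look like the following; the application ARN is a placeholder for an application you found in the repository.

```python
import boto3

repo = boto3.client("serverlessrepo")
app_arn = "arn:aws:serverlessrepo:us-east-1:123456789012:applications/my-sample-app"  # placeholder

# Read the application's metadata (name, description, author, versions).
app = repo.get_application(ApplicationId=app_arn)
print(app["Name"], "-", app.get("Description", ""))

# Produce a CloudFormation template for the application, which you can then deploy as a stack.
template = repo.create_cloud_formation_template(ApplicationId=app_arn)
print("Template URL:", template.get("TemplateUrl"))
```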

AWS certificate services

The AWS certificate services, AWS Certificate Manager and AWS Private Certificate Authority, make it easy to provision, manage, and deploy SSL/TLS certificates on AWS managed resources, in your organization's internal PKI, and in the Internet of Things. AWS Certificate Manager (ACM) handles the complexity of creating, storing, and renewing public and private SSL/TLS X.509 certificates and keys that protect your AWS websites and applications. You can provide certificates for your integrated AWS services either by issuing them directly with ACM or by importing third-party certificates into the ACM management system. ACM certificates can secure singular domain names, multiple specific domain names, wildcard domains, or combinations of these. ACM wildcard certificates can protect an unlimited number of subdomains. You can also export ACM certificates signed by AWS Private CA for use anywhere in your internal PKI.
Is ACM the right service for me? AWS offers two options to customers deploying managed X.509 certificates. Choose the one that best fits your needs.
AWS Certificate Manager (ACM) - This service is for enterprise customers who need a secure web presence using TLS. ACM certificates are deployed through Elastic Load Balancing, Amazon CloudFront, Amazon API Gateway, and other integrated AWS services. The most common application of this kind is a secure public website with significant traffic requirements. ACM also simplifies security management by automating the renewal of expiring certificates.
AWS Private CA - This service is for enterprise customers building a public key infrastructure (PKI) inside the AWS cloud and intended for private use within an organization. With AWS Private CA, you can create your own certificate authority (CA) hierarchy and issue certificates with it for authenticating users, computers, applications, services, servers, and other devices. Certificates issued by a private CA cannot be used on the internet. For more information, see the AWS Private CA User Guide.
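As a hedged example, the following Boto3 sketch requests a DNS-validated public certificate and then reads back the validation records you would need to create in your DNS zone; example.com is a placeholder domain.

```python
import boto3

acm = boto3.client("acm")

# Request a certificate covering the apex domain and a wildcard for its subdomains.
response = acm.request_certificate(
    DomainName="example.com",                      # placeholder domain
    ValidationMethod="DNS",
    SubjectAlternativeNames=["*.example.com"],
)
cert_arn = response["CertificateArn"]

# Inspect the certificate; the DNS validation records may take a few moments to appear.
details = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]
print("Status:", details["Status"])
for option in details.get("DomainValidationOptions", []):
    print(option.get("ResourceRecord"))
```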

AWS service endpoints

To connect programmatically to an AWS service, you use an endpoint. An endpoint is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests. If a service supports Regions, the resources in each Region are independent of similar resources in other Regions. For example, you can create an Amazon EC2 instance or an Amazon SQS queue in one Region. When you do, the instance or queue is independent of instances or queues in all other Regions.
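For example, with the AWS SDK for Python you can let the SDK choose the default Regional endpoint or override it explicitly; the Region and URL below are placeholders.

```python
import boto3

# Default behavior: the SDK derives the endpoint from the Region,
# e.g. https://sqs.eu-west-1.amazonaws.com for Amazon SQS in eu-west-1.
sqs = boto3.client("sqs", region_name="eu-west-1")

# Explicit override: point the client at an alternate endpoint,
# such as a VPC endpoint or a different Region's endpoint.
sqs_custom = boto3.client(
    "sqs",
    region_name="eu-west-1",
    endpoint_url="https://sqs.eu-west-1.amazonaws.com",   # placeholder endpoint URL
)

print(sqs.meta.endpoint_url)
print(sqs_custom.meta.endpoint_url)
```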

AWS Trusted Advisor

Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment, and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. If you have a Basic or Developer Support plan, you can use the Trusted Advisor console to access all checks in the Service Limits category and six checks in the Security category. If you have a Business, Enterprise On-Ramp, or Enterprise Support plan, you can use the Trusted Advisor console and the AWS Support API to access all Trusted Advisor checks. You also can use Amazon CloudWatch Events to monitor the status of Trusted Advisor checks. For more information, see Monitoring AWS Trusted Advisor check results with Amazon EventBridge. You can access Trusted Advisor in the AWS Management Console. For more information about controlling access to the Trusted Advisor console, see Manage access for AWS Trusted Advisor.
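Because the AWS Support API requires a Business, Enterprise On-Ramp, or Enterprise Support plan, the following Boto3 sketch only works on such accounts; it lists the available checks and fetches one result.

```python
import boto3

# The AWS Support API is served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

# List all Trusted Advisor checks and print their categories and names.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    print(check["category"], "-", check["name"])

# Fetch the current result for the first check (overall status, flagged resources, and so on).
result = support.describe_trusted_advisor_check_result(checkId=checks[0]["id"], language="en")
print("Status:", result["result"]["status"])
```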

AWS Organizations and AWS Account Management

With AWS Organizations, you can consolidate multiple AWS accounts into an organization that you create and centrally manage. You can create member accounts and invite existing accounts to join your organization. You can organize those accounts and manage them as a group. With AWS Account Management you can update the alternate contact information for each of your AWS accounts. AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization. This user guide defines key concepts for AWS Organizations, provides tutorials, and explains how to create and manage an organization. Topics AWS Organizations features AWS Organizations pricing Accessing AWS Organizations Support and feedback for AWS Organizations AWS Organizations features AWS Organizations offers the following features: Centralized management of all of your AWS accounts You can combine your existing accounts into an organization that enables you to manage the accounts centrally. You can create accounts that automatically are a part of your organization, and you can invite other accounts to join your organization. You also can attach policies that affect some or all of your accounts. Consolidated billing for all member accounts Consolidated billing is a feature of AWS Organizations. You can use the management account of your organization to consolidate and pay for all member accounts. In consolidated billing, management accounts can also access the billing information, account information, and account activity of member accounts in their organization. This information may be used for services such as Cost Explorer, which can help management accounts improve their organization's cost performance. Hierarchical grouping of your accounts to meet your budgetary, security, or compliance needs You can group your accounts into organizational units (OUs) and attach different access policies to each OU. For example, if you have accounts that must access only the AWS services that meet certain regulatory requirements, you can put those accounts into one OU. You then can attach a policy to that OU that blocks access to services that do not meet those regulatory requirements. You can nest OUs within other OUs to a depth of five levels, providing flexibility in how you structure your account groups. Policies to centralize control over the AWS services and API actions that each account can access As an administrator of the management account of an organization, you can use service control policies (SCPs) to specify the maximum permissions for member accounts in the organization. In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions. These restrictions even override the administrators of member accounts in the organization. When AWS Organizations blocks access to a service, resource, or API action for a member account, a user or role in that account can't access it. 
This block remains in effect even if an administrator of a member account explicitly grants such permissions in an IAM policy. For more information, see Service control policies (SCPs). Policies to standardize tags across the resources in your organization's accounts You can use tag policies to maintain consistent tags, including the preferred case treatment of tag keys and tag values. For more information, see Tag policies Policies to control how AWS artificial intelligence (AI) and machine learning services can collect and store data. You can use AI services opt-out policies to opt out of data collection and storage for any of the AWS AI services that you don't want to use. For more information, see AI services opt-out policies Policies that configure automatic backups for the resources in your organization's accounts You can use backup policies to configure and automatically apply AWS Backup plans to resources across all your organization's accounts. For more information, see Backup policies Integration and support for AWS Identity and Access Management (IAM) IAM provides granular control over users and roles in individual accounts. AWS Organizations expands that control to the account level by giving you control over what users and roles in an account or a group of accounts can do. The resulting permissions are the logical intersection of what is allowed by AWS Organizations at the account level and the permissions that are explicitly granted by IAM at the user or role level within that account. In other words, the user can access only what is allowed by both the AWS Organizations policies and IAM policies. If either blocks an operation, the user can't access that operation. Integration with other AWS services You can leverage the multi-account management services available in AWS Organizations with select AWS services to perform tasks on all accounts that are members of an organization. For a list of services and the benefits of using each service on an organization-wide level, see AWS services that you can use with AWS Organizations. When you enable an AWS service to perform tasks on your behalf in your organization's member accounts, AWS Organizations creates an IAM service-linked role for that service in each member account. The service-linked role has predefined IAM permissions that allow the other AWS service to perform specific tasks in your organization and its accounts. For this to work, all accounts in an organization automatically have a service-linked role. This role enables the AWS Organizations service to create the service-linked roles required by AWS services for which you enable trusted access. These additional service-linked roles are attached to IAM permission policies that enable the specified service to perform only those tasks that are required by your configuration choices. For more information, see Using AWS Organizations with other AWS services. Global access AWS Organizations is a global service with a single endpoint that works from any and all AWS Regions. You don't need to explicitly select a region to operate in. Data replication that is eventually consistent AWS Organizations, like many other AWS services, is eventually consistent. AWS Organizations achieves high availability by replicating data across multiple servers in AWS data centers within its Region. If a request to change some data is successful, the change is committed and safely stored. However, the change must then be replicated across the multiple servers. 
For more information, see Changes that I make aren't always immediately visible. Free to use AWS Organizations is a feature of your AWS account offered at no additional charge. You are charged only when you access other AWS services from the accounts in your organization. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. AWS Organizations pricing AWS Organizations is offered at no additional charge. You are charged only for AWS resources that users and roles in your member accounts use. For example, you are charged the standard fees for Amazon EC2 instances that are used by users or roles in your member accounts. For information about the pricing of other AWS services, see AWS Pricing. Accessing AWS Organizations You can work with AWS Organizations in any of the following ways: AWS Management Console The AWS Organizations console is a browser-based interface that you can use to manage your organization and your AWS resources. You can perform any task in your organization by using the console. AWS Command Line Tools With the AWS command line tools, you can issue commands at your system's command line to perform AWS Organizations and AWS tasks. Working with the command line can be faster and more convenient than using the console. The command line tools also are useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: AWS Command Line Interface (AWS CLI). For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. AWS Tools for Windows PowerShell. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for Windows PowerShell User Guide. AWS SDKs The AWS SDKs consist of libraries and sample code for various programming languages and platforms (for example, Java, Python, Ruby, .NET, iOS, and Android). The SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services. AWS Organizations HTTPS Query API The AWS Organizations HTTPS Query API gives you programmatic access to AWS Organizations and AWS. The HTTPS Query API lets you issue HTTPS requests directly to the service. When you use the HTTPS API, you must include code to digitally sign requests using your credentials. For more information, see Calling the API by Making HTTP Query Requests and the AWS Organizations API Reference. This guide contains information about AWS accounts. How to create them, how to manage them, and how to use them. An account in AWS is a fundamental part of accessing AWS services. It serves these two basic functions: Container - An AWS account is the basic container for all the AWS resources you can create as an AWS customer. When you create an Amazon Simple Storage Service (Amazon S3) bucket or Amazon Relational Database Service (Amazon RDS) database to store your data, or an Amazon Elastic Compute Cloud (Amazon EC2) instance to process your data, you are creating a resource in your account. Every resource is uniquely identified by an Amazon Resource Name (ARN) that includes the account ID of the account that contains, or owns, the resource. Security boundary - An AWS account is also the basic security boundary for your AWS resources. Resources that you create in your account are available only to users who have credentials for that same account. 
Among the key resources you can create in your account are identities, such as IAM users and roles. These identities have credentials that someone can use to sign in, or authenticate to AWS. Identities also have permission policies that specify what the person who signed in is authorized to do with the resources in the account. You can create an AWS Identity and Access Management (IAM) user to grant access for a person in your company. That IAM user can have a password that lets the person access the AWS console. The user can also have an access key to let the person run commands from the AWS Command Line Interface (AWS CLI) or invoke APIs from one of the AWS SDKs. IAM roles are particularly flexible because you can associate them with external people by using federation and an identity provider, such as AWS IAM Identity Center (successor to AWS Single Sign-On) (IAM Identity Center). If you already have an identity provider in use by your company, you can use it with federation to simplify how you provide access to the resources in your AWS account. AWS supports identity providers that are compatible with industry standards OpenID Connect (OIDC) or SAML 2.0 (Security Assertion Markup Language 2.0). The latter makes any Active Directory implementation a source identity provider if you combine it with Microsoft Active Directory Federation Services.
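As a hedged sketch of the management-account workflow described above, the following Boto3 code creates an OU under the organization root and attaches an existing SCP to it; the policy ID is a placeholder.

```python
import boto3

org = boto3.client("organizations")

# Find the root of the organization, then create an OU beneath it.
root_id = org.list_roots()["Roots"][0]["Id"]
ou = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
ou_id = ou["OrganizationalUnit"]["Id"]

# Attach an existing service control policy (SCP) to the new OU.
org.attach_policy(PolicyId="p-examplepolicyid", TargetId=ou_id)   # placeholder policy ID

# List the member accounts in the organization (results may be paginated for large organizations).
for account in org.list_accounts()["Accounts"]:
    print(account["Id"], account["Name"], account["Status"])
```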

AWS App Runner

AWS App Runner is a fully managed service that provides a fast, simple, and cost-effective way to deploy from source code or a container image directly to a scalable and secure web application in the AWS Cloud. You don't need to learn new technologies, decide which compute service to use, or know how to provision and configure AWS resources. App Runner connects directly to your code or image repository. It provides an automatic integration and delivery pipeline with fully managed operations, high performance, scalability, and security.
Who is App Runner for? If you're a developer, you can use App Runner to simplify the process of deploying a new version of your code or image repository. For operations teams, App Runner enables automatic deployments each time a commit is pushed to the code repository or a new container image version is pushed to the image repository.
Accessing App Runner - You can define and configure your App Runner service deployments using any one of the following interfaces:
App Runner console - Provides a web interface for managing your App Runner services.
App Runner API - Provides a RESTful API for performing App Runner actions. For more information, see the AWS App Runner API Reference.
AWS Command Line Interface (AWS CLI) - Provides commands for a broad set of AWS services, including App Runner, and is supported on Windows, macOS, and Linux. For more information, see AWS Command Line Interface.
AWS SDKs - Provide language-specific APIs and take care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs.
Pricing for App Runner - App Runner provides a cost-effective way to run your application. You only pay for resources that your App Runner service consumes. Your service scales down to fewer compute instances when request traffic is lower. You have control over scalability settings: the lowest and highest number of provisioned instances, and the highest load an instance handles. For more information about App Runner automatic scaling, see Managing App Runner automatic scaling. For pricing information, see AWS App Runner pricing.
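A minimal Boto3 sketch that creates a service from a public container image could look like the following; the service name, image, and port are placeholders.

```python
import boto3

apprunner = boto3.client("apprunner")

# Create a service directly from a container image in a public ECR repository.
response = apprunner.create_service(
    ServiceName="my-web-app",                                   # placeholder service name
    SourceConfiguration={
        "ImageRepository": {
            "ImageIdentifier": "public.ecr.aws/aws-containers/hello-app-runner:latest",  # placeholder image
            "ImageRepositoryType": "ECR_PUBLIC",
            "ImageConfiguration": {"Port": "8000"},             # placeholder port
        },
        "AutoDeploymentsEnabled": False,
    },
)
print("Service status:", response["Service"]["Status"])
```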

Amazon Detective

Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to generate visualizations that help you to conduct faster and more efficient security investigations. The Detective prebuilt data aggregations, summaries, and context help you to quickly analyze and determine the nature and extent of possible security issues. Detective maintains up to a year of historical event data. This data is easily available through a set of visualizations that show changes in the type and volume of activity over a selected time window. Detective links those changes to GuardDuty findings.
How does Detective work? Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from AWS CloudTrail and Amazon VPC flow logs. It also ingests findings detected by GuardDuty. From those events, Detective uses machine learning and visualization to create a unified, interactive view of your resource behaviors and the interactions between them over time. You can explore this behavior graph to examine disparate actions such as failed logon attempts or suspicious API calls. You can also see how these actions affect resources such as AWS accounts and Amazon EC2 instances. You can adjust the behavior graph's scope and timeline for a variety of tasks:
Rapidly investigate any activity that falls outside the norm.
Identify patterns that may indicate a security issue.
Understand all of the resources affected by a finding.
Detective tailored visualizations provide a baseline for and summarize the account information. These findings can help answer questions such as "Is this an unusual API call for this role?" or "Is this spike in traffic from this instance expected?" With Detective, you don't have to organize any data or develop, configure, or tune your own queries and algorithms. There are no upfront costs and you pay only for the events analyzed, with no additional software to deploy or other feeds to subscribe to.
Who uses Detective? When an account enables Detective, it becomes the administrator account for a behavior graph. A behavior graph is a linked set of extracted and analyzed data from one or more AWS accounts. Administrator accounts invite member accounts to contribute their data to the administrator account's behavior graph. Detective is also integrated with AWS Organizations. Your organization management account designates a Detective administrator account for the organization. The Detective administrator account enables organization accounts as member accounts in the organization behavior graph. For information about how Detective uses source data from behavior graph accounts, see Source data used in a behavior graph. For information on how administrator accounts manage behavior graphs, see Managing accounts. For information on how member accounts manage their behavior graph invitations and memberships, see For member accounts: Managing behavior graph invitations and memberships.
The administrator account uses the analytics and visualizations generated from the behavior graph to investigate AWS resources and GuardDuty findings. The Detective integrations with GuardDuty and AWS Security Hub allow you to pivot from a GuardDuty finding in these services directly into the Detective console. A Detective investigation focuses on the activity that is connected to the involved AWS resources.
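For the administrator-account setup described above, a hedged Boto3 sketch might enable Detective and invite a member account; the account ID and email address are placeholders.

```python
import boto3

detective = boto3.client("detective")

# Enabling Detective creates a behavior graph owned by this (administrator) account.
graph_arn = detective.create_graph()["GraphArn"]

# Invite a member account to contribute its data to the behavior graph.
detective.create_members(
    GraphArn=graph_arn,
    Message="Please join our Detective behavior graph.",
    Accounts=[{"AccountId": "111122223333", "EmailAddress": "security@example.com"}],  # placeholders
)
print("Behavior graph:", graph_arn)
```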

Amazon MemoryDB for Redis

Amazon MemoryDB for Redis is a fully managed, Redis-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB for Redis is a durable, in-memory database service that delivers ultra-fast performance. It is purpose-built for modern applications with microservices architectures. MemoryDB is compatible with Redis, a popular open source data store, enabling you to quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that they already use today. With MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, MemoryDB can be used as a high-performance primary database for your microservices applications, eliminating the need to separately manage both a cache and durable database.
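As a rough sketch, the following Boto3 code creates a small MemoryDB cluster and waits for it to become available; the cluster name and node type are placeholders, and the ACL shown is the default open-access ACL, which you would replace with a restrictive ACL in production.

```python
import time

import boto3

memorydb = boto3.client("memorydb")

# Create a small single-shard cluster; the ACL controls which Redis users may connect.
memorydb.create_cluster(
    ClusterName="my-memorydb-cluster",   # placeholder cluster name
    NodeType="db.r6g.large",             # placeholder node type
    ACLName="open-access",               # default ACL; use a restrictive ACL in production
    NumShards=1,
)

# Poll until the cluster reports an "available" status, then print its endpoint.
while True:
    cluster = memorydb.describe_clusters(ClusterName="my-memorydb-cluster")["Clusters"][0]
    if cluster["Status"] == "available":
        break
    time.sleep(30)
print("Endpoint:", cluster["ClusterEndpoint"]["Address"])
```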

AWS Console Mobile Application

The AWS Console Mobile Application, provided by Amazon Web Services, lets you view and manage a select set of resources to support incident response while on-the-go. The Console Mobile Application lets you view and manage a set of resources so that you can support incident response while you are away from your computer. With the Console Mobile Application, you can monitor resources and view configuration details, metrics, and alarms for a select subset of AWS services. You can see an overview of the account status with real-time data on Amazon CloudWatch, AWS Personal Health Dashboard, and AWS Billing and Cost Management. You can view ongoing issues and follow through to the relevant CloudWatch alarm screens for a detailed view with graphs and configuration options. In addition, you can check on the status of specific AWS services, view detailed resource screens, and perform some actions. The Console Mobile Application requires an existing AWS account. After you sign in with a root user, IAM user, access keys, or a federated role, the Console Mobile Application stores your credentials so that you can easily switch between identities.

AWS CloudTrail

With AWS CloudTrail, you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including API calls made by using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn CloudTrail logging on and off. AWS CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view recent events in the CloudTrail console by going to Event history. For an ongoing record of activity and events in your AWS account, create a trail. For more information about CloudTrail pricing, see AWS CloudTrail Pricing. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. Optionally, you can enable AWS CloudTrail Insights on a trail to help you identify and respond to unusual activity. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails you create, and control how users view CloudTrail events.
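For example, a short Boto3 sketch can query recent management events recorded in Event history; the event name used as a filter here is just an illustration.

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-in events from CloudTrail Event history.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    MaxResults=20,
)["Events"]

for event in events:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```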

AWS Cloud9

A cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.

AWS Ground Station

AWS Ground Station is a fully managed service that enables you to control satellite communications, process satellite data, and scale your satellite operations. With AWS Ground Station, you don't have to build or manage your own ground station infrastructure.

AWS Glue

AWS Glue is a scalable, serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use it for analytics, machine learning, and application development. It also includes additional productivity and data ops tooling for authoring, running jobs, and implementing business workflows. With AWS Glue, you can discover and connect to more than 70 diverse data sources and manage your data in a centralized data catalog. You can visually create, run, and monitor extract, transform, and load (ETL) pipelines to load data into your data lakes. Also, you can immediately search and query cataloged data using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum. AWS Glue consolidates major data integration capabilities into a single service. These include data discovery, modern ETL, cleansing, transforming, and centralized cataloging. It's also serverless, which means there's no infrastructure to manage. With flexible support for all workloads like ETL, ELT, and streaming in one service, AWS Glue supports users across various workloads and types of users. Also, AWS Glue makes it easy to integrate data across your architecture. It integrates with AWS analytics services and Amazon S3 data lakes. AWS Glue has integration interfaces and job-authoring tools that are easy to use for all users, from developers to business users, with tailored solutions for varied technical skill sets. With the ability to scale on demand, AWS Glue helps you focus on high-value activities that maximize the value of your data. It scales for any data size, and supports all data types and schema variances. To increase agility and optimize costs, AWS Glue provides built-in high availability and pay-as-you-go billing. For pricing information, see AWS Glue pricing. AWS Glue Studio AWS Glue Studio is a graphical interface that makes it easy to create, run, and monitor data integration jobs in AWS Glue. You can visually compose data transformation workflows and seamlessly run them on the Apache Spark-based serverless ETL engine in AWS Glue. For more information, see What is AWS Glue Studio. With AWS Glue Studio, you can create and manage jobs that gather, transform, and clean data. You can also use AWS Glue Studio to troubleshoot and edit job scripts. Topics AWS Glue features Getting started with AWS Glue Accessing AWS Glue Related services AWS Glue features AWS Glue features fall into three major categories: Discover and organize data Transform, prepare, and clean data for analysis Build and monitor data pipelines Discover and organize data Unify and search across multiple data stores - Store, index, and search across multiple data sources and sinks by cataloging all your data in AWS. Automatically discover data - Use AWS Glue crawlers to automatically infer schema information and integrate it into your AWS Glue Data Catalog. Manage schemas and permissions - Validate and control access to your databases and tables. Connect to a wide variety of data sources - Tap into multiple data sources, both on premises and on AWS, using AWS Glue connections to build your data lake. 
Transform, prepare, and clean data for analysis Visually transform data with a drag-and-drop interface - Define your ETL process in the drag-and-drop job editor and automatically generate the code to extract, transform, and load your data. Build complex ETL pipelines with simple job scheduling - Invoke AWS Glue jobs on a schedule, on demand, or based on an event. Clean and transform streaming data in transit - Enable continuous data consumption, and clean and transform it in transit. This makes it available for analysis in seconds in your target data store. Deduplicate and cleanse data with built-in machine learning - Clean and prepare your data for analysis without becoming a machine learning expert by using the FindMatches feature. This feature deduplicates and finds records that are imperfect matches for each other. Built-in job notebooks - AWS Glue Studio job notebooks provide serverless notebooks with minimal setup in AWS Glue Studio so you can get started quickly. Edit, debug, and test ETL code - With AWS Glue interactive sessions, you can interactively explore and prepare data. You can explore, experiment on, and process data interactively using the IDE or notebook of your choice. Define, detect, and remediate sensitive data - AWS Glue sensitive data detection lets you define, identify, and process sensitive data in your data pipeline and in your data lake. Build and monitor data pipelines Automatically scale based on workload - Dynamically scale resources up and down based on workload. This assigns workers to jobs only when needed. Automate jobs with event-based triggers - Start crawlers or AWS Glue jobs with event-based triggers, and design a chain of dependent jobs and crawlers. Run and monitor jobs - Run your AWS Glue jobs, and then monitor them with automated monitoring tools, the Apache Spark UI, AWS Glue job run insights, and AWS CloudTrail. Define workflows for ETL and integration activities - Define workflows for ETL and integration activities for multiple crawlers, jobs, and triggers. Getting started with AWS Glue We recommend that you start with the following sections: Overview of using AWS Glue AWS Glue concepts Setting up IAM permissions for AWS Glue Getting started with the AWS Glue Data Catalog Authoring jobs in AWS Glue Getting started with AWS Glue interactive sessions Orchestration in AWS Glue Accessing AWS Glue You can create, view, and manage your AWS Glue jobs using the following interfaces: AWS Glue console - Provides a web interface for you to create, view, and manage your AWS Glue jobs. To access the console, see AWS Glue console. AWS Glue Studio - Provides a graphical interface for you to create and edit your AWS Glue jobs visually. For more information, see What is AWS Glue Studio. AWS Glue section of the AWS CLI Reference - Provides AWS CLI commands that you can use with AWS Glue. For more information, see AWS CLI Reference for AWS Glue. AWS Glue API - Provides a complete API reference for developers.
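To make the crawler-and-catalog workflow concrete, here is a hedged Boto3 sketch; the IAM role, S3 path, and database name are placeholders, and the crawler is assumed to have permission to read the bucket.

```python
import boto3

glue = boto3.client("glue")

# Create a crawler that infers schemas from objects in S3 and records them in the Data Catalog.
glue.create_crawler(
    Name="sales-data-crawler",                                   # placeholder crawler name
    Role="arn:aws:iam::123456789012:role/MyGlueCrawlerRole",     # placeholder IAM role
    DatabaseName="sales",                                        # placeholder catalog database
    Targets={"S3Targets": [{"Path": "s3://my-bucket/sales/"}]},  # placeholder S3 path
)
glue.start_crawler(Name="sales-data-crawler")

# After the crawler finishes, the discovered tables appear in the Data Catalog.
for table in glue.get_tables(DatabaseName="sales")["TableList"]:
    print(table["Name"])
```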

AWS Lake Formation

AWS Lake Formation is a managed service that makes it easy to set up, secure, and manage your data lakes. Lake Formation helps you discover your data sources and then catalog, cleanse, and transform the data. You can use Lake Formation to secure and ingest the data in an Amazon Simple Storage Service (Amazon S3) data lake. AWS Lake Formation is a fully managed service that makes it easy to build, secure, and manage data lakes. Lake Formation simplifies and automates many of the complex manual steps that are usually required to create data lakes. These steps include collecting, cleansing, moving, and cataloging data, and securely making that data available for analytics and machine learning. Lake Formation provides its own permissions model that augments the IAM permissions model. This centrally defined permissions model enables fine-grained access to data stored in data lakes through a simple grant or revoke mechanism, much like a relational database management system (RDMS). Lake Formation permissions are enforced using granular controls at the column, row, and cell-levels across AWS analytics and machine learning services, including Amazon Athena, Amazon QuickSight, and Amazon Redshift. Topics Lake Formation features AWS service integrations with Lake Formation Supported Regions Getting started with Lake Formation Lake Formation features Lake Formation helps you break down data silos and combine different types of structured and unstructured data into a centralized repository. First, identify existing data stores in Amazon S3 or relational and NoSQL databases, and move the data into your data lake. Then crawl, catalog, and prepare the data for analytics. Next, provide your users with secure self-service access to the data through their choice of analytics services. Topics Setup and data management Security management Setup and data management Import data from databases already in AWS Once you specify where your existing databases are and provide your access credentials, Lake Formation reads the data and its metadata (schema) to understand the contents of the data source. It then imports the data to your new data lake and records the metadata in a central catalog. With Lake Formation, you can import data from MySQL, PostgreSQL, SQL Server, MariaDB, and Oracle databases running in Amazon RDS or hosted in Amazon EC2. Both bulk and incremental data loading are supported. Import data from other external sources You can use Lake Formation to move data from on-premises databases by connecting with Java Database Connectivity (JDBC). Identify your target sources and provide access credentials in the console, and Lake Formation reads and loads your data into the data lake. To import data from databases other than the ones listed above, you can create custom ETL jobs with AWS Glue. Catalog and label your data Lake Formation crawls and reads your data sources to extract technical metadata and creates a searchable catalog to describe this information for users so they can discover available datasets. You can also add your own custom labels to your data (at the table and column level) to define attributes, such as "sensitive information" and "European sales data." Lake Formation provides a text-based search over this metadata so your users can quickly find the data they need to analyze. For more information about adding tables to the Data Catalog, see Managing Data Catalog Tables and Databases. 
Transform data Lake Formation can perform transformations on your data, such as rewriting various date formats for consistency, to ensure that the data is stored in an analytics-friendly fashion. Lake Formation creates transformation templates and schedules jobs to prepare your data for analysis. Your data is transformed with AWS Glue and written in columnar formats, such as Parquet and ORC, for better performance. Clean and deduplicate data Lake Formation helps clean and prepare your data for analysis by providing a machine learning transform called FindMatches for deduplication and finding matching records. For example, use FindMatches to find duplicate records in your database of restaurants, such as when one record lists "Joe's Pizza" at "121 Main St." and another shows "Joseph's Pizzeria" at "121 Main." FindMatches will simply ask you to label sets of records as either "matching" or "not matching." The system will then learn your criteria for calling a pair of records a match and will build a machine learning transform that you can use to find duplicate records within a database or matching records across two databases. For more information about FindMatches, see Matching Records with AWS Lake Formation FindMatches in the AWS Glue Developer Guide. Storage optimizations Analytics performance can be impacted by inefficient storage of many small files that are automatically created as new data is written to the data lake. Processing these many small files creates additional overhead for analytics services and causes slower query responses. Lake Formation includes a storage optimizer that automatically combines small files into larger files to speed up queries by up to 7x. This process, commonly known as compaction, is performed in the background so that there is no performance impact on your production workloads while it is taking place. For more information about the storage optimization features of Lake Formation, see Storage optimizations for governed tables. Row and cell-level security Lake Formation provides data filters that allow you to restrict access to a combination of columns and rows. Use row and cell-level security to protect sensitive data like Personally Identifiable Information (PII). For more information about row-level security, see Overview of data filtering. Security management Define and manage access controls Lake Formation provides a single place to manage access controls for data in your data lake. You can define security policies that restrict access to data at the database, table, column, row, and cell levels. These policies apply to IAM users and roles, and to users and groups when federating through an external identity provider. You can use fine-grained controls to access data secured by Lake Formation within Amazon Redshift Spectrum, Athena, AWS Glue ETL, and Amazon EMR for Apache Spark. Implement audit logging Lake Formation provides comprehensive audit logs with CloudTrail to monitor access and show compliance with centrally defined policies. You can audit data access history across analytics and machine learning services that read the data in your data lake via Lake Formation. This lets you see which users or roles have attempted to access what data, with which services, and when. You can access audit logs in the same way you access any other CloudTrail logs using the CloudTrail APIs and console. For more information about CloudTrail logs, see Logging AWS Lake Formation API Calls Using AWS CloudTrail.
Tag-based access control You can classify your data and limit access to sensitive information. You can also add your own custom labels (LF-tags) to the data at the table and column level to define attributes, like "sensitive information" or "European sales data." Lake Formation provides a text-based search over this metadata, so your users can quickly find the data they need to analyze. You can grant access to the data based on these LF-tags. For more information about tag-based access control, see Lake Formation Tag-based access control. Cross account access Lake Formation permission management capabilities simplify securing and managing distributed data lakes across multiple AWS accounts through a centralized approach, providing fine-grained access control to the Data Catalog and Amazon S3 locations. Governed tables Data lakes need to show users the correct view of data at all times, even while there are simultaneous real-time or frequent updates to the data. Loading streaming data or incorporating changes from multiple source data systems requires processing inserts and deletes across multiple tables in parallel. Today, developers write custom application code or use open source tools to manage these updates. These solutions are complex and difficult to scale because writing application code that maintains consistency when concurrently reading and writing the same data is tedious, brittle, and error prone. Lake Formation introduces new APIs that support atomic, consistent, isolated, and durable (ACID) transactions using a new data lake table type, called a governed table. A governed table allows multiple users to concurrently insert and delete data across tables using manifests, while still allowing other users to simultaneously run analytical queries and ML models on the same data sets and get consistent and up-to-date results.
AWS service integrations with Lake Formation
The following AWS services integrate with AWS Lake Formation and honor Lake Formation permissions:
AWS Glue - AWS Glue and Lake Formation share the same Data Catalog. For console operations (such as viewing a list of tables) and all API operations, AWS Glue users can access only the databases and tables on which they have Lake Formation permissions. Note: AWS Glue does not support Lake Formation column permissions.
Amazon Athena - When Amazon Athena users select the AWS Glue catalog in the query editor, they can query only the databases, tables, and columns that they have Lake Formation permissions on. Queries using manifests are not supported. In addition to principals who authenticate with Athena through AWS Identity and Access Management (IAM), Lake Formation supports Athena users who connect through the JDBC or ODBC driver and authenticate through SAML. Supported SAML providers include Okta and Microsoft Active Directory Federation Service (AD FS). For more information, see Using Lake Formation and the Athena JDBC and ODBC Drivers for Federated Access to Athena in the Amazon Athena User Guide. Note: Currently, authorizing access to SAML identities in Lake Formation is not supported in the following Regions: Middle East (Bahrain) me-south-1, Asia Pacific (Hong Kong) ap-east-1, Africa (Cape Town) af-south-1, China (Ningxia) cn-northwest-1, and Asia Pacific (Osaka) ap-northeast-3.
Amazon Redshift Spectrum - When Amazon Redshift users create an external schema on a database in the AWS Glue catalog, they can query only the tables and columns in that schema on which they have Lake Formation permissions. Queries using manifests are not supported.
Amazon QuickSight Enterprise Edition - When an Amazon QuickSight Enterprise Edition user queries a dataset in an Amazon S3 location that is registered with Lake Formation, the user must have the Lake Formation SELECT permission on the data.
Amazon EMR - Lake Formation permissions are enforced when Apache Spark applications are submitted using Apache Zeppelin or EMR Notebooks.
Lake Formation also works with AWS Key Management Service (AWS KMS) to enable you to more easily set up these integrated services to encrypt and decrypt data in Amazon Simple Storage Service (Amazon S3) locations.
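As an illustration of the grant/revoke permission model, the following Boto3 sketch grants an IAM role SELECT on a catalog table; the role ARN, database, and table names are placeholders.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant an analyst role read access to a single table in the Data Catalog.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/Analyst"},  # placeholder role
    Resource={"Table": {"DatabaseName": "sales", "Name": "orders"}},                      # placeholder table
    Permissions=["SELECT"],
)

# List the permissions currently granted on resources in this account.
for entry in lakeformation.list_permissions()["PrincipalResourcePermissions"]:
    print(entry["Principal"]["DataLakePrincipalIdentifier"], entry["Permissions"])
```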

AWS Support

All users have access to account and billing help in the AWS Support Center. In addition, customers with some support plans have access to additional features, including AWS Trusted Advisor and an API for programmatic access to support cases and Trusted Advisor. AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, technical papers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can choose a support plan that best aligns with your AWS use case.

Amazon Simple Email Service

Amazon Simple Email Service (Amazon SES) is a reliable, scalable, and cost-effective email service. Digital marketers and application developers can use Amazon SES to send marketing, notification, and transactional emails.
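A minimal boto3 sketch of sending a transactional email through Amazon SES; the addresses are placeholders, and the Source address is assumed to be already verified in SES.

import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Placeholder addresses; the Source must be verified in SES before sending.
ses.send_email(
    Source="no-reply@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Thanks for your purchase! Your order is on the way."}},
    },
)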

Amazon Simple Queue Service

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS offers a secure, durable, and available hosted queue that moves data between distributed application components and helps you decouple those components. It offers common constructs such as dead-letter queues and cost allocation tags, and it provides a generic web services API that you can access using any programming language that the AWS SDK supports. Amazon SQS supports both standard and FIFO queues. For more information, see Queue types.

Topics: Benefits of using Amazon SQS; Differences between Amazon SQS, Amazon MQ, and Amazon SNS; Queue types; Common tasks for getting started with Amazon SQS; Pricing for Amazon SQS.

Benefits of using Amazon SQS
Security - You control who can send messages to and receive messages from an Amazon SQS queue. You can choose to transmit sensitive data by protecting the contents of messages in queues by using default Amazon SQS managed server-side encryption (SSE), or by using custom SSE keys managed in AWS Key Management Service (AWS KMS).
Durability - For the safety of your messages, Amazon SQS stores them on multiple servers. Standard queues support at-least-once message delivery, and FIFO queues support exactly-once message processing.
Availability - Amazon SQS uses redundant infrastructure to provide highly-concurrent access to messages and high availability for producing and consuming messages.
Scalability - Amazon SQS can process each buffered request independently, scaling transparently to handle any load increases or spikes without any provisioning instructions.
Reliability - Amazon SQS locks your messages during processing, so that multiple producers can send and multiple consumers can receive messages at the same time.
Customization - Your queues don't have to be exactly alike. For example, you can set a default delay on a queue. You can store the contents of messages larger than 256 KB using Amazon Simple Storage Service (Amazon S3) or Amazon DynamoDB, with Amazon SQS holding a pointer to the Amazon S3 object, or you can split a large message into smaller messages.

Differences between Amazon SQS, Amazon MQ, and Amazon SNS
Amazon SQS and Amazon SNS are queue and topic services that are highly scalable, simple to use, and don't require you to set up message brokers. We recommend these services for new applications that can benefit from nearly unlimited scalability and simple APIs. Amazon MQ is a managed message broker service that provides compatibility with many popular message brokers. We recommend Amazon MQ for migrating applications from existing message brokers that rely on compatibility with APIs such as JMS or protocols such as AMQP, MQTT, OpenWire, and STOMP.

Queue types
Standard queue: Unlimited Throughput - Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage). At-Least-Once Delivery - A message is delivered at least once, but occasionally more than one copy of a message is delivered. Best-Effort Ordering - Occasionally, messages are delivered in an order different from which they were sent. Use standard queues to send data between applications when throughput is important, for example: decouple live user requests from intensive background work (let users upload media while resizing or encoding it); allocate tasks to multiple worker nodes (process a high number of credit card validation requests); batch messages for future processing (schedule multiple entries to be added to a database).
FIFO queue: High Throughput - If you use batching, FIFO queues support up to 3,000 messages per second, per API method (SendMessageBatch, ReceiveMessage, or DeleteMessageBatch). The 3,000 messages per second represent 300 API calls, each with a batch of 10 messages. To request a quota increase, submit a support request. Without batching, FIFO queues support up to 300 API calls per second, per API method (SendMessage, ReceiveMessage, or DeleteMessage). Exactly-Once Processing - A message is delivered once and remains available until a consumer processes and deletes it. Duplicates aren't introduced into the queue. First-In-First-Out Delivery - The order in which messages are sent and received is strictly preserved. Use FIFO queues to send data between applications when the order of events is important, for example: make sure that user-entered commands are run in the right order; display the correct product price by sending price modifications in the right order; prevent a student from enrolling in a course before registering for an account.

Common tasks for getting started with Amazon SQS
To create your first queue with Amazon SQS and send, receive, and delete a message, see Getting started with Amazon SQS. To trigger a Lambda function, see Configuring a queue to trigger an AWS Lambda function (console). To discover the functionality and architecture of Amazon SQS, see How Amazon SQS works. To find out the guidelines and caveats that will help you make the most of Amazon SQS, see Best practices for Amazon SQS. Explore the Amazon SQS examples for one of the AWS SDKs, such as the AWS SDK for Java 2.x Developer Guide. To learn about Amazon SQS actions, see the Amazon Simple Queue Service API Reference. To learn about Amazon SQS AWS CLI commands, see the AWS CLI Command Reference.

Pricing for Amazon SQS
Amazon SQS has no upfront costs. The first million monthly requests are free. After that, you pay based on the number and content of requests, and on interactions with Amazon S3 and AWS Key Management Service.
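A minimal boto3 sketch of the basic queue lifecycle described above (create a queue, send a message, receive and delete it); the queue name and message body are illustrative.

import boto3

sqs = boto3.client("sqs")

# Create a standard queue (for a FIFO queue, the name must end in .fifo
# and the FifoQueue attribute must be set).
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer: send a message.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-12345")

# Consumer: receive, process, then delete the message.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])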

Amazon Translate

Amazon Translate is a neural machine translation service for translating text to and from English across a breadth of supported languages. Powered by deep-learning technologies, Amazon Translate delivers fast, high-quality, and affordable language translation. It provides a managed, continually trained solution so you can easily translate company and user-authored content or build applications that require support across multiple languages. The machine translation engine has been trained on a wide variety of content across different domains to produce quality translations that serve any industry need.
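A minimal boto3 sketch of a real-time translation request; the text and language codes are illustrative.

import boto3

translate = boto3.client("translate")

# Translate a short string from English to Spanish.
result = translate.translate_text(
    Text="Hello, how can I help you today?",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])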

Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity.

Amazon CodeGuru

CodeGuru provides intelligent recommendations for improving application performance, efficiency, and code quality in your Java applications.

Tagging AWS resources

You can assign metadata to your AWS resources in the form of tags. Each tag is a label consisting of a user-defined key and value. Tags can help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.

Each tag has two parts: a tag key (for example, CostCenter, Environment, or Project) and a tag value (for example, 111122223333 or Production). Both tag keys and tag values are case sensitive.

You can tag resources for all cost-accruing services in AWS. For the following services, AWS recommends newer alternative AWS services that support tagging to better meet customer use cases: Amazon Cloud Directory, Amazon CloudSearch, Amazon Cognito Sync, AWS Data Pipeline, AWS DeepLens, Amazon Elastic Transcoder, Amazon Machine Learning, AWS OpsWorks Stacks, Amazon S3 Glacier Direct, Amazon SimpleDB, and Amazon WorkSpaces Application Manager (Amazon WAM).

Best practices As you create a tagging strategy for AWS resources, follow these best practices:
Do not add personally identifiable information (PII) or other confidential or sensitive information in tags. Tags are accessible to many AWS services, including billing; they are not intended to be used for private or sensitive data.
Use a standardized, case-sensitive format for tags, and apply it consistently across all resource types.
Consider tag guidelines that support multiple purposes, like managing resource access control, cost tracking, automation, and organization.
Use automated tools to help manage resource tags. AWS Resource Groups and the Resource Groups Tagging API enable programmatic control of tags, making it easier to automatically manage, search, and filter tags and resources.
Use too many tags rather than too few tags.
Remember that it is easy to change tags to accommodate changing business requirements, but consider the consequences of future changes. For example, changing access control tags means you must also update the policies that reference those tags and control access to your resources.
You can automatically enforce the tagging standards that your organization chooses to adopt by creating and deploying tag policies using AWS Organizations. Tag policies let you specify tagging rules that define valid key names and the values that are valid for each key. You can choose to only monitor, giving you an opportunity to evaluate and clean up your existing tags. Once your tags are in compliance with your chosen standards, you can turn on enforcement in the tag policies to prevent non-compliant tags from being created. For more information, see Tag policies in the AWS Organizations User Guide.

Tagging categories Companies that are most effective in their use of tags typically create business-relevant tag groupings to organize their resources along technical, business, and security dimensions. Companies that use automated processes to manage their infrastructure also include additional, automation-specific tags.
Technical tags: Name - Identify individual resources. Application ID - Identify resources that are related to a specific application. Application Role - Describe the function of a particular resource (such as web server, message broker, database). Cluster - Identify resource farms that share a common configuration and perform a specific function for an application. Environment - Distinguish between development, test, and production resources. Version - Help distinguish between versions of resources or applications.
Tags for automation: Date/Time - Identify the date or time a resource should be started, stopped, deleted, or rotated. Opt in/Opt out - Indicate whether a resource should be included in an automated activity such as starting, stopping, or resizing instances. Security - Determine requirements, such as encryption or enabling of Amazon VPC flow logs; identify route tables or security groups that need extra scrutiny.
Business tags: Project - Identify projects that the resource supports. Owner - Identify who is responsible for the resource. Cost Center/Business Unit - Identify the cost center or business unit associated with a resource, typically for cost allocation and tracking. Customer - Identify a specific client that a particular group of resources serves.
Security tags: Confidentiality - An identifier for the specific data confidentiality level a resource supports. Compliance - An identifier for workloads that must adhere to specific compliance requirements.

Tag naming limits and requirements The following basic naming and usage requirements apply to tags:
Each resource can have a maximum of 50 user-created tags.
System-created tags that begin with aws: are reserved for AWS use and do not count against this limit. You can't edit or delete a tag that begins with the aws: prefix.
For each resource, each tag key must be unique, and each tag key can have only one value.
The tag key must be a minimum of 1 and a maximum of 128 Unicode characters in UTF-8.
The tag value must be a minimum of 0 and a maximum of 256 Unicode characters in UTF-8.
Allowed characters can vary by AWS service. For information about what characters you can use to tag resources in a particular AWS service, see its documentation. In general, the allowed characters are letters, numbers, spaces representable in UTF-8, and the following characters: _ . : / = + - @.
Tag keys and values are case sensitive. As a best practice, decide on a strategy for capitalizing tags, and consistently implement that strategy across all resource types. For example, decide whether to use Costcenter, costcenter, or CostCenter, and use the same convention for all tags. Avoid using similar tags with inconsistent case treatment.

Common tagging strategies Use the following tagging strategies to help identify and manage AWS resources. Contents: Tags for resource organization; Tags for cost allocation; Tags for automation; Tags for access control.

Tags for resource organization Tags are a good way to organize AWS resources in the AWS Management Console. You can configure tags to be displayed with resources, and can search and filter by tag. With the AWS Resource Groups service, you can create groups of AWS resources based on one or more tags or portions of tags. You can also create groups based on their occurrence in an AWS CloudFormation stack. Using Resource Groups and Tag Editor, you can consolidate and view data for applications that consist of multiple services, resources, and Regions in one place.
Tags for cost allocation AWS Cost Explorer and detailed billing reports let you break down AWS costs by tag. Typically, you use business tags such as cost center/business unit, customer, or project to associate AWS costs with traditional cost-allocation dimensions. But a cost allocation report can include any tag. This lets you associate costs with technical or security dimensions, such as specific applications, environments, or compliance programs. For some services, you can use an AWS-generated createdBy tag for cost allocation purposes, to help account for resources that might otherwise go uncategorized. The createdBy tag is available only for supported AWS services and resources. Its value contains data associated with specific API or console events. For more information, see AWS-Generated Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

Tags for automation Resource or service-specific tags are often used to filter resources during automation activities. Automation tags are used to opt in or opt out of automated tasks or to identify specific versions of resources to archive, update, or delete. For example, you can run automated start or stop scripts that turn off development environments during nonbusiness hours to reduce costs. In this scenario, Amazon Elastic Compute Cloud (Amazon EC2) instance tags are a simple way to identify instances to opt out of this action. For scripts that find and delete stale, out-of-date, or rolling Amazon EBS snapshots, snapshot tags can add an extra dimension of search criteria.

Tags for access control IAM policies support tag-based conditions, letting you constrain IAM permissions based on specific tags or tag values. For example, IAM user or role permissions can include conditions to limit EC2 API calls to specific environments (such as development, test, or production) based on their tags. The same strategy can be used to limit API calls to specific Amazon Virtual Private Cloud (Amazon VPC) networks. Support for tag-based, resource-level IAM permissions is service specific. When you use tag-based conditions for access control, be sure to define and restrict who can modify the tags. For more information about using tags to control API access to AWS resources, see AWS services that work with IAM in the IAM User Guide.

Tagging governance An effective tagging strategy uses standardized tags and applies them consistently and programmatically across AWS resources. You can use both reactive and proactive approaches for governing tags in your AWS environment. Reactive governance is for finding resources that are not properly tagged using tools such as the Resource Groups Tagging API, AWS Config Rules, and custom scripts. To find resources manually, you can use Tag Editor and detailed billing reports. Proactive governance uses tools such as AWS CloudFormation, AWS Service Catalog, tag policies in AWS Organizations, or IAM resource-level permissions to ensure standardized tags are consistently applied at resource creation. For example, you can use the AWS CloudFormation Resource Tags property to apply tags to resource types. In AWS Service Catalog, you can add portfolio and product tags that are combined and applied to a product automatically when it is launched. More rigorous forms of proactive governance include automated tasks.
For example, you can use the Resource Groups Tagging API to search an AWS environment's tags, or run scripts to quarantine or delete improperly tagged resources.
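As a rough sketch of reactive governance with the Resource Groups Tagging API via boto3, the following finds resources by tag and applies a standard tag; the tag keys, values, and resource ARN are hypothetical.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Reactive: find resources tagged Environment=Production (key and value are illustrative).
response = tagging.get_resources(
    TagFilters=[{"Key": "Environment", "Values": ["Production"]}]
)
for mapping in response["ResourceTagMappingList"]:
    print(mapping["ResourceARN"], mapping["Tags"])

# Remediation: apply standard tags to a specific resource (the ARN is a placeholder).
tagging.tag_resources(
    ResourceARNList=["arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example"],
    Tags={"CostCenter": "12345", "Owner": "data-platform"},
)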

AWS service quotas

Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas, and other quotas cannot be increased. Service Quotas is an AWS service that helps you manage your quotas for many AWS services, from one location. Along with looking up the quota values, you can also request a quota increase from the Service Quotas console. AWS Support might approve, deny, or partially approve your requests.
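A minimal boto3 sketch of looking up quotas with Service Quotas; the service code is illustrative, and the quota code in the commented-out increase request is a placeholder.

import boto3

quotas = boto3.client("service-quotas")

# List the quotas that Service Quotas tracks for Amazon EC2 in this Region.
response = quotas.list_service_quotas(ServiceCode="ec2")
for quota in response["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# Request an increase for one quota (the quota code below is a placeholder).
# quotas.request_service_quota_increase(
#     ServiceCode="ec2", QuotaCode="L-EXAMPLE1", DesiredValue=100
# )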

SQL injection attack

Attacks against a website that take advantage of vulnerabilities in poorly coded SQL applications (SQL is a standard and widely used database query language) in order to introduce malicious program code into a company's systems and networks.
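The following generic Python example (not from the source, using the standard library's sqlite3 module) shows how concatenating untrusted input into a SQL statement enables this attack, and how a parameterized query avoids it.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query,
# so this returns every row instead of none.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input strictly as data.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # [('alice', 'admin')] -- injection succeeded
print(safe)        # [] -- no user is literally named "' OR '1'='1"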

AWS Transit Gateway

AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth.

AWS IoT Core

AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core provides secure, bi-directional communication for internet-connected devices (such as sensors, actuators, embedded devices, wireless devices, and smart appliances) to connect to the AWS Cloud over MQTT, HTTPS, and LoRaWAN.
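As a rough sketch, an application with AWS credentials might publish a message to an MQTT topic through the AWS IoT data plane using boto3; the topic name and payload are illustrative (devices themselves typically connect with an MQTT client and X.509 certificates).

import json
import boto3

# Publish a telemetry reading to an MQTT topic via the AWS IoT data plane.
iot_data = boto3.client("iot-data")

iot_data.publish(
    topic="sensors/greenhouse/temperature",
    qos=1,
    payload=json.dumps({"deviceId": "sensor-42", "celsius": 21.5}),
)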

AWS CodeCommit

AWS CodeCommit is a version control service that enables you to privately store and manage Git repositories in the AWS Cloud.

AWS CodeDeploy

AWS CodeDeploy is a deployment service that enables developers to automate the deployment of applications to instances and to update the applications as required.

AWS CodeStar

AWS CodeStar lets you quickly develop, build, and deploy applications on AWS.

AWS AppSync

AWS AppSync is an enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming features. AWS AppSync provides a robust, scalable GraphQL interface for application developers to combine data from multiple sources, including Amazon DynamoDB, AWS Lambda, and HTTP APIs.
Features of AWS AppSync AWS AppSync includes a variety of features that make building GraphQL APIs a streamlined experience:
Powerful GraphQL schema editing through the AWS AppSync console, including automatic GraphQL schema generation from DynamoDB
Efficient data caching
Integration with Amazon Cognito user pools for fine-grained access control at a per-field level
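As a rough sketch of calling a GraphQL API, the following posts a query to a hypothetical AppSync endpoint using API-key authorization with the third-party requests library; the endpoint URL, API key, and Todo schema are made up for illustration.

import requests  # third-party HTTP library

# Hypothetical AppSync GraphQL endpoint and API key; the Todo schema is invented.
APPSYNC_URL = "https://example1234567890.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-exampleapikey"

query = """
query ListTodos {
  listTodos {
    items { id title completed }
  }
}
"""

response = requests.post(
    APPSYNC_URL,
    json={"query": query},
    headers={"x-api-key": API_KEY},
)
print(response.json())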

AWS Cloud Control API

AWS Cloud Control API (Cloud Control API) is a set of common application programming interfaces (APIs) that make it easy for developers and partners to manage the lifecycle of AWS and third-party services. Cloud Control API provides five operations for developers to create, read, update, delete, and list (CRUDL) their cloud infrastructure.
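A hedged boto3 sketch of the CRUDL interface: creating and reading a CloudWatch Logs log group through Cloud Control API. The log group name is a placeholder, and the create call is asynchronous.

import json
import boto3

cloudcontrol = boto3.client("cloudcontrol")

# Create a CloudWatch Logs log group through the uniform CRUDL interface
# (the log group name and retention are placeholders). Creation is asynchronous;
# the returned ProgressEvent can be polled until the operation succeeds.
progress = cloudcontrol.create_resource(
    TypeName="AWS::Logs::LogGroup",
    DesiredState=json.dumps({"LogGroupName": "/demo/cloud-control", "RetentionInDays": 30}),
)
print(progress["ProgressEvent"]["OperationStatus"])

# Once the resource exists, the same generic calls read and list it.
resource = cloudcontrol.get_resource(
    TypeName="AWS::Logs::LogGroup", Identifier="/demo/cloud-control"
)
print(resource["ResourceDescription"]["Properties"])

listing = cloudcontrol.list_resources(TypeName="AWS::Logs::LogGroup")
print([r["Identifier"] for r in listing["ResourceDescriptions"]])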

AWS Cloud Map

AWS Cloud Map lets you name and discover your cloud resources.

AWS CodeBuild

AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests. CodeBuild provides these benefits: Fully managed - CodeBuild eliminates the need to set up, patch, update, and manage your own build servers. On demand - CodeBuild scales on demand to meet your build needs. You pay only for the number of build minutes you consume. Out of the box - CodeBuild provides preconfigured build environments for the most popular programming languages. All you need to do is point to your build script to start your first build. How to run CodeBuild You can use the AWS CodeBuild or AWS CodePipeline console to run CodeBuild. You can also automate the running of CodeBuild by using the AWS Command Line Interface (AWS CLI) or the AWS SDKs. To run CodeBuild by using the CodeBuild console, AWS CLI, or AWS SDKs, see Run AWS CodeBuild directly. You can also add CodeBuild as a build or test action to the build or test stage of a pipeline in AWS CodePipeline. AWS CodePipeline is a continuous delivery service that you can use to model, visualize, and automate the steps required to release your code. This includes building your code. A pipeline is a workflow construct that describes how code changes go through a release process. To use CodePipeline to create a pipeline and then add a CodeBuild build or test action, see Use CodePipeline with CodeBuild. For more information about CodePipeline, see the AWS CodePipeline User Guide. The CodeBuild console also provides a way to quickly search for your resources, such as repositories, build projects, deployment applications, and pipelines. Choose Go to resource or press the / key, and then enter the name of the resource. Any matches appear in the list. Searches are case insensitive. You only see resources that you have permissions to view. For more information, see Viewing resources in the console.
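A minimal boto3 sketch of running CodeBuild directly from code: start a build for an existing project and check its status; the project name is a placeholder.

import boto3

codebuild = boto3.client("codebuild")

# Start a build for an existing build project, then look up its status.
build = codebuild.start_build(projectName="my-sample-project")
build_id = build["build"]["id"]

status = codebuild.batch_get_builds(ids=[build_id])["builds"][0]["buildStatus"]
print(build_id, status)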

AWS CodePipeline

AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software.

AWS Data Exchange

AWS Data Exchange is a service that makes it easy for customers to find, subscribe to, and use third-party data in the AWS Cloud.

AWS Data Pipeline

AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. With AWS Data Pipeline, you can define data-driven workflows, so that tasks can be dependent on the successful completion of previous tasks.

AWS DataSync

AWS DataSync is a data-transfer service that simplifies, automates, and accelerates moving and replicating data between on-premises storage systems and AWS storage services over the internet or AWS Direct Connect.

Amazon API Gateway

Amazon API Gateway enables you to create and deploy your own REST and WebSocket APIs at any scale. You can create robust, secure, and scalable APIs that access Amazon Web Services or other web services, as well as data that's stored in the AWS Cloud. You can create APIs to use in your own client applications, or you can make your APIs available to third-party app developers.

Amazon Route 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform three main functions in any combination: domain registration, DNS routing, and health checking. If you choose to use Route 53 for all three functions, follow the order below.
1. Register domain names. Your website needs a name, such as example.com. Route 53 lets you register a name for your website or web application, known as a domain name. For an overview, see How domain registration works. For a procedure, see Registering a new domain. For a tutorial that takes you through registering a domain and creating a simple website in an Amazon S3 bucket, see Getting started with Amazon Route 53.
2. Route internet traffic to the resources for your domain. When a user opens a web browser and enters your domain name (example.com) or subdomain name (acme.example.com) in the address bar, Route 53 helps connect the browser with your website or web application. For an overview, see How internet traffic is routed to your website or web application. For procedures, see Configuring Amazon Route 53 as your DNS service. For a procedure on how to route email to Amazon WorkMail, see Routing traffic to Amazon WorkMail.
3. Check the health of your resources. Route 53 sends automated requests over the internet to a resource, such as a web server, to verify that it's reachable, available, and functional. You can also choose to receive notifications when a resource becomes unavailable and choose to route internet traffic away from unhealthy resources. For an overview, see How Amazon Route 53 checks the health of your resources. For procedures, see Creating Amazon Route 53 health checks and configuring DNS failover.
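A minimal boto3 sketch of the DNS-routing function: upsert an A record in an existing hosted zone. The hosted zone ID, record name, and IP address are placeholders.

import boto3

route53 = boto3.client("route53")

# Create or update (UPSERT) an A record that points www at a web server.
route53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",
    ChangeBatch={
        "Comment": "Point www at the web server",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "192.0.2.44"}],
                },
            }
        ],
    },
)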

Amazon Simple Notification Service

Amazon Simple Notification Service (Amazon SNS) is a managed messaging service that enables applications, end users, and devices to instantly send and receive notifications from the cloud. It provides message delivery from publishers to subscribers (also known as producers and consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Clients can subscribe to the SNS topic and receive published messages using a supported endpoint type, such as Amazon Kinesis Data Firehose, Amazon SQS, AWS Lambda, HTTP, email, mobile push notifications, and mobile text messages (SMS).
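A minimal boto3 sketch of the publish/subscribe flow: create a topic, subscribe an email endpoint, and publish a message. The topic name and address are placeholders, and an email subscription must be confirmed before it receives messages.

import boto3

sns = boto3.client("sns")

# Create a topic and subscribe an email endpoint (the address is a placeholder;
# the recipient must confirm the subscription before delivery begins).
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")

# Publish a message to all subscribers of the topic.
sns.publish(
    TopicArn=topic_arn,
    Subject="Order shipped",
    Message="Order 12345 left the warehouse.",
)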

Amazon Simple Workflow Service

Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) enables you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
Features The following features help you configure a VPC to provide the connectivity that your applications need:
Virtual private clouds (VPC) - A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets.
Subnets - A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC.
IP addressing - You can assign IPv4 addresses and IPv6 addresses to your VPCs and subnets. You can also bring your public IPv4 and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers.
Routing - Use route tables to determine where network traffic from your subnet or gateway is directed.
Gateways and endpoints - A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device.
Peering connections - Use a VPC peering connection to route traffic between the resources in two VPCs.
Traffic Mirroring - Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection.
Transit gateways - Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections.
VPC Flow Logs - A flow log captures information about the IP traffic going to and from network interfaces in your VPC.
VPN connections - Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).
Getting started with Amazon VPC Your AWS account includes a default VPC in each AWS Region. Your default VPCs are configured so that you can immediately start launching and connecting to EC2 instances. For more information, see Get started with Amazon VPC. You can choose to create additional VPCs with the subnets, IP addresses, gateways, and routing that you need. For more information, see Create a VPC.
Working with Amazon VPC You can create and manage your VPCs using any of the following interfaces:
AWS Management Console - Provides a web interface that you can use to access your VPCs.
AWS Command Line Interface (AWS CLI) - Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface.
AWS SDKs - Provide language-specific APIs and take care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs.
Query API - Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see Amazon VPC actions in the Amazon EC2 API Reference.
Pricing for Amazon VPC There's no additional charge for using a VPC. There are charges for some VPC components, such as NAT gateways, Reachability Analyzer, and traffic mirroring.
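A minimal boto3 sketch of building the basic pieces described above: a VPC, a subnet, an internet gateway, and a route table. The CIDR blocks and Availability Zone are illustrative.

import boto3

ec2 = boto3.client("ec2")

# Create a small VPC with one subnet.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# Attach an internet gateway so the subnet can reach the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route all outbound traffic from the subnet through the internet gateway.
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)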

Amazon WorkMail

Amazon WorkMail is a managed email and calendaring service that offers strong security controls and support for existing desktop and mobile clients.

Elastic Load Balancing

Elastic Load Balancing automatically distributes your incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones. It monitors the health of its registered targets and routes traffic only to the healthy targets. You can select the type of load balancer that best suits your needs.

Amazon GameLift

GameLift provides solutions for hosting session-based multiplayer game servers in the cloud, including a fully managed service for deploying, operating, and scaling game servers. Built on AWS global computing infrastructure, GameLift helps you deliver high-performance, high-reliability, low-cost game servers while dynamically scaling your resource usage to meet player demand.

Amazon Lumberyard (beta)

Lumberyard is a free AAA game engine deeply integrated with AWS and Twitch, with full source. Lumberyard provides a growing set of tools to help you create the highest quality games, engage massive communities of fans, and connect games to the vast compute and storage of the cloud.

Red Hat OpenShift Service on AWS

Red Hat OpenShift Service on AWS (ROSA) is a managed service, available through the AWS Management Console, that helps Red Hat OpenShift users build, scale, and manage containerized applications on AWS. You can use ROSA to create Kubernetes clusters using the Red Hat OpenShift APIs and tools, and have access to the full breadth and depth of AWS services. ROSA streamlines moving on-premises Red Hat OpenShift workloads to AWS, and offers tight integration with other AWS services. You can also access Red Hat OpenShift licensing, billing, and support all directly through AWS. There are no up-front costs required to use ROSA. You pay only for the container clusters and nodes that you use. With pay-as-you-go pricing, you don't have to worry about complex, multi-year contracts. This flexibility means that you can use Red Hat OpenShift according to your specific business needs.
How ROSA works Each ROSA cluster comes with a fully managed control plane and compute nodes. Installation, management, maintenance, and upgrades are performed by Red Hat site reliability engineers (SREs) with joint Red Hat and Amazon support. ROSA clusters are deployed in your account with support for existing VPCs. Note: By default, Red Hat manages all ROSA clusters using the same restrictions, quotas, expectations, and configurations.

AWS security credentials

When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests. For example, if you want to download a protected file from an Amazon Simple Storage Service (Amazon S3) bucket, your credentials must allow that access. If your credentials don't show you are authorized to download the file, AWS denies your request. However, your AWS security credentials aren't required for you to download a file in an Amazon S3 bucket that is publicly shared.

AWS Elastic Beanstalk

Amazon Web Services (AWS) comprises over one hundred services, each of which exposes an area of functionality. While the variety of services offers flexibility for how you want to manage your AWS infrastructure, it can be challenging to figure out which services to use and how to provision them. With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application. You can interact with Elastic Beanstalk by using the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or eb, a high-level CLI designed specifically for Elastic Beanstalk. To learn more about how to deploy a sample web application using Elastic Beanstalk, see Getting Started with AWS: Deploying a Web App. You can also perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To use Elastic Beanstalk, you create an application, upload an application version in the form of an application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some information about the application. Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After your environment is launched, you can manage your environment and deploy new application versions. After you create and deploy your application, information about the application, including metrics, events, and environment status, is available through the Elastic Beanstalk console, APIs, or command line interfaces, including the unified AWS CLI. Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page.
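A hedged boto3 sketch of the application/version/environment workflow, assuming a source bundle has already been uploaded to Amazon S3; the application name, bucket, key, and solution stack string are placeholders.

import boto3

eb = boto3.client("elasticbeanstalk")

# Register an application and a version from a source bundle already in S3
# (bucket, key, and names are placeholders).
eb.create_application(ApplicationName="demo-app")
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-demo-bucket", "S3Key": "demo-app-v1.zip"},
)

# Launch an environment on a platform; the solution stack name below is a
# placeholder and must match a stack returned by list_available_solution_stacks().
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
)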

