TD 1

DynamoDB

- NoSQL Database (non-relational database that can handle frequently changing schemas)

Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item. Attributes with high cardinality have a large and diverse range of values. When used as partition keys, they help distribute data more evenly across the physical partitions of a DynamoDB table. "Avoid using a composite primary key, which is composed of a partition key and a sort key" is incorrect because a composite primary key provides more partitions for the table and, in turn, improves performance; it should be used, not avoided. "Reduce the number of partition keys in the DynamoDB table" is incorrect. You should instead add more partition keys, which improves performance by distributing I/O requests evenly and avoiding "hot" partitions.

A Docker application, which is running on an Amazon ECS cluster behind a load balancer, is heavily using DynamoDB. You are instructed to improve the database performance by distributing the workload evenly and using the provisioned throughput efficiently. Which of the following would you consider to implement for your DynamoDB table?
- Use partition keys with low-cardinality attributes, which have a small number of distinct values for each item.
- Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item.
- Reduce the number of partition keys in the DynamoDB table.
- Avoid using a composite primary key, which is composed of a partition key and a sort key.
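As a rough illustration of the correct choice, here is a minimal boto3 sketch (table and attribute names are hypothetical) of a table keyed on a high-cardinality attribute such as a unique order ID:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# A unique-per-item attribute (order_id) spreads items evenly across
# physical partitions, avoiding "hot" partitions.
dynamodb.create_table(
    TableName="Orders",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "order_id", "AttributeType": "S"},  # high cardinality
    ],
    KeySchema=[
        {"AttributeName": "order_id", "KeyType": "HASH"},  # partition key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
```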

Amazon Aurora Global Database. "Multi-AZ Amazon RDS database with cross-region read replicas" is incorrect because a Multi-AZ deployment is only applicable inside a single region and not in a multi-region setup. This database setup is not capable of providing an RPO of 1 second and an RTO of less than 1 minute. Moreover, cross-region RDS read replica replication is not as fast as Amazon Aurora Global Database.

A Solutions Architect needs to set up a relational database and come up with a disaster recovery plan to mitigate multi-region failure. The solution requires a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute. Which of the following AWS services can fulfill this requirement?
- Amazon RDS for PostgreSQL with cross-region read replicas
- Amazon Quantum Ledger Database (Amazon QLDB)
- Amazon Timestream
- Amazon Aurora Global Database

Create a native function or a stored procedure that invokes a Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume. RDS event subscriptions cover operational aspects like DB instance, parameter group, security group, and snapshot events; they do not capture data changes. For capturing data-modifying events (INSERT, DELETE, UPDATE) in the given scenario, using native functions or stored procedures is necessary.

A car dealership website hosted in Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system. Which of the following options can satisfy the given requirement?
- Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the target groups.
- Create an RDS event subscription and send the notifications to Amazon SQS. Configure the SQS queues to fan out the event notifications to multiple Amazon SNS topics. Process the data using Lambda functions.
- Create a native function or a stored procedure that invokes a Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume.
- Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the processing system.

Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload. S3 Transfer Acceleration speeds up the transfer of files to an S3 bucket by routing the data through Amazon CloudFront's globally distributed edge locations. "Use AWS Snowball Edge to transfer large amounts of data" is incorrect because the end-to-end time to transfer up to 80 TB of data into AWS Snowball Edge is approximately one week, which isn't the fastest way.

A company collects atmospheric data such as temperature, air pressure, and humidity from different countries. Each site location is equipped with various weather instruments and a high-speed Internet connection. The average collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. As the Solutions Architect, you need to aggregate all the data in the fastest way. Which of the following options can satisfy the given requirement?
- Enable Transfer Acceleration in the destination bucket and upload the collected data using Multipart Upload.
- Use AWS Snowball Edge to transfer large amounts of data.
- Upload the data to the closest S3 bucket. Set up a cross-region replication and copy the objects to the destination bucket.
- Set up a Site-to-Site VPN connection.
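A minimal boto3 sketch of the correct option, assuming a hypothetical destination bucket name: enable Transfer Acceleration, then upload through the accelerate endpoint with multipart settings:

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Enable Transfer Acceleration on the destination bucket (name hypothetical).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="weather-data-us-east-1",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint; boto3 switches to multipart upload
# automatically once the file exceeds multipart_threshold.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # 64 MB
    multipart_chunksize=64 * 1024 * 1024,
)
s3_accel.upload_file(
    "site_readings.tar", "weather-data-us-east-1", "site_readings.tar",
    Config=transfer_cfg,
)
```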

Migrate the existing file share configuration to Amazon FSx for Windows File Server. An AWS Storage Gateway is more suitable for hybrid storage scenarios than for a full migration to AWS.

A company has a web application that uses Internet Information Services (IIS) for Windows Server. A file share is used to store the application data on the network-attached storage of the company's on-premises data center. To achieve a highly available system, they plan to migrate the application and file share to AWS. Which of the following can be used to fulfill this requirement?
- Migrate the existing file share configuration to AWS Storage Gateway.
- Migrate the existing file share configuration to Amazon FSx for Windows File Server.
- Migrate the existing file share configuration to Amazon EFS.
- Migrate the existing file share configuration to Amazon EBS.

Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled. "Setting up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster" is incorrect because this is not possible in IAM. You have to use the Redis AUTH option instead.

A company is designing a banking portal that uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands. As the Solutions Architect, which of the following should you do to meet the above requirement?
- Enable the in-transit encryption for Redis replication groups.
- Authenticate the users using Redis AUTH by creating a new Redis Cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
- Set up a Redis replication group and enable the AtRestEncryptionEnabled parameter.
- Set up an IAM Policy and MFA which requires the Cloud Engineers to enter their IAM credentials and token before they can access the ElastiCache cluster.
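A hedged boto3 sketch of the correct option (replication group name, node type, and token are hypothetical); the AuthToken and TransitEncryptionEnabled parameters correspond to the --auth-token and --transit-encryption-enabled CLI flags:

```python
import boto3

elasticache = boto3.client("elasticache")

# Redis AUTH requires in-transit encryption to be enabled as well.
elasticache.create_replication_group(
    ReplicationGroupId="session-store",          # hypothetical
    ReplicationGroupDescription="Banking portal session store",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,
    AuthToken="use-a-long-random-secret-here",   # password required for Redis commands
)
```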

Use throttling limits in API Gateway. Throttling limits can help protect your backend systems from traffic spikes. "API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything" is incorrect. Although it can scale using AWS edge locations, you still need to configure throttling to further manage the bursts of your APIs.

A company is using a combination of API Gateway and Lambda for the web services of the online web portal that is being accessed by hundreds of thousands of clients each day. They will be announcing a new revolutionary product and it is expected that the web portal will receive a massive number of visitors all around the globe. How can you protect the backend systems and applications from traffic spikes?
- Use throttling limits in API Gateway.
- API Gateway will automatically scale and handle massive traffic spikes so you do not have to do anything.
- Manually upgrade the EC2 instances being used by API Gateway.
- Deploy Multi-AZ in API Gateway with Read Replica.
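For illustration, a minimal boto3 sketch (API ID and limits are hypothetical) that sets stage-level throttling across all methods:

```python
import boto3

apigw = boto3.client("apigateway")

# Stage-level throttling: cap the steady-state request rate and burst
# for every resource/method on the stage.
apigw.update_stage(
    restApiId="a1b2c3d4e5",   # hypothetical API id
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
    ],
)
```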

Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM. Using a custom key store backed by AWS CloudHSM allows you to create and manage your own encryption keys in KMS while leveraging the hardware security modules (HSMs) provided by CloudHSM. Additionally, when you store encryption keys in AWS CloudHSM and manage them through a custom key store in AWS Key Management Service (KMS), you maintain full control over these keys. "Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in Amazon S3" is incorrect because Amazon S3 is not a suitable storage service for encryption keys. You have to use AWS CloudHSM instead.

A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, they must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. The solution should also be able to audit the key usage independently of AWS CloudTrail. Which of the following options will meet this requirement?
- Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in Amazon S3.
- Use AWS Key Management Service to create AWS-owned CMKs and store the non-extractable key material in AWS CloudHSM.
- Use AWS Key Management Service to create AWS-managed CMKs and store the non-extractable key material in AWS CloudHSM.
- Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM.
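A minimal boto3 sketch of the correct option (the custom key store ID is hypothetical; the CloudHSM-backed key store must already exist and be connected):

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key whose key material lives in the
# CloudHSM-backed custom key store, not in KMS's default store.
response = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId="cks-1234567890abcdef0",  # hypothetical key store id
    Description="CMK with key material in CloudHSM",
)
print(response["KeyMetadata"]["KeyId"])
```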

Enable the IAM DB Authentication. "Use a combination of IAM and STS (Security Token Service) to restrict access to your RDS instance via a temporary token" is incorrect: using IAM DB Authentication simplifies the process. EC2 instances can request authentication tokens from IAM, and these tokens can be used to connect to the RDS instance. This process is more straightforward compared to setting up a combination of IAM and STS for this purpose. IAM DB Authentication is also a feature specifically designed for Amazon RDS to manage database access. It directly integrates with RDS and supports token-based authentication, which is what the scenario requires.

A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?
- Configure SSL in your application to encrypt the database connection to RDS.
- Enable the IAM DB Authentication.
- Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token.
- Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance.
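A short boto3 sketch of how an application on the EC2 instance would obtain an authentication token (endpoint, port, and username are hypothetical); the instance profile credentials sign the short-lived token:

```python
import boto3

# The EC2 instance profile credentials sign a short-lived (15-minute)
# authentication token that is used in place of a password.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    Port=3306,
    DBUsername="app_user",
)
# Pass `token` as the password to your MySQL driver over an SSL connection.
```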

Create an Amazon S3 bucket with encryption enabled. Launch an AWS Transfer for SFTP endpoint to securely upload files to the S3 bucket. Configure an S3 lifecycle rule to delete files after a month. To get started with AWS Transfer for SFTP (AWS SFTP), you create an SFTP server and map your domain to the server endpoint. "Create an Amazon Elastic Filesystem (EFS) file system and enable encryption. Configure AWS Transfer for SFTP to securely upload files to the EFS file system. Apply an EFS lifecycle policy to delete files after 30 days" is incorrect. This may be possible; however, EFS lifecycle management doesn't delete objects. It can only transition files in and out of the "Infrequent Access" tier.

A logistics company plans to automate its order management application. The company wants to use SFTP file transfer in uploading business-critical documents. Since the files are confidential, the files need to be highly available and must be encrypted at rest. The files must also be automatically deleted a month after they are created. Which of the following options should be implemented to meet the company requirements with the least operational overhead?
- Create an Amazon Elastic Filesystem (EFS) file system and enable encryption. Configure AWS Transfer for SFTP to securely upload files to the EFS file system. Apply an EFS lifecycle policy to delete files after 30 days.
- Create an Amazon S3 bucket with encryption enabled. Configure AWS Transfer for SFTP to securely upload files to the S3 bucket. Configure the retention policy on the SFTP server to delete files after a month.
- Provision an Amazon EC2 instance and install the SFTP service. Mount an encrypted EFS file system on the EC2 instance to store the uploaded files. Add a cron job to delete the files older than a month.
- Create an Amazon S3 bucket with encryption enabled. Launch an AWS Transfer for SFTP endpoint to securely upload files to the S3 bucket. Configure an S3 lifecycle rule to delete files after a month.
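A minimal boto3 sketch of the lifecycle part of the correct option, assuming a hypothetical bucket behind the AWS Transfer endpoint:

```python
import boto3

s3 = boto3.client("s3")

# Expire (delete) uploaded documents 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="sftp-uploads",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "delete-after-a-month",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to every object
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```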

Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin, which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses. Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users. "Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user" is incorrect because, while it may improve performance, it incurs high costs due to multi-region deployment. The scenario requires a cost-effective solution for performance enhancement.

A popular social media website uses a CloudFront web distribution to serve their static contents to their millions of users around the globe. They are receiving a number of complaints recently that their users take a lot of time to log into their website. There are also occasions when their users are getting HTTP 504 errors. You are instructed by your manager to significantly reduce the user's login time to further optimize the system. Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Select TWO.)
- Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
- Use multiple and geographically disperse VPCs to various AWS regions then create a transit VPC to connect all of your resources. In order to handle the requests faster, set up Lambda functions in each region using the AWS Serverless Application Model (SAM) service.
- Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.
- Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution.
- Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.

Utilize an AWS Glue extract, transform, and load (ETL) job to process and convert the .csv files to Apache Parquet format and then store the output files into the target S3 bucket. Set up an S3 Event Notification to track every S3 PUT event and invoke the ETL job in AWS Glue through Amazon SQS. "Create an ETL (Extract, Transform, Load) job and a Data Catalog table in AWS Glue. Configure the AWS Glue crawler to run on a schedule to check for new files in the S3 bucket every hour and convert them to Parquet format" is incorrect. Although it is right to create an ETL job using AWS Glue, triggering the job on a schedule rather than automatically on each new file upload is not ideal; it is not as efficient as using an S3 event trigger to initiate the conversion process immediately upon file upload.

A retail company receives raw .csv data files into its Amazon S3 bucket from various sources on an hourly basis. The average file size of these data files is 2 GB. An automated process must be set up to convert these .csv files to a more efficient Apache Parquet format and store the output files in another S3 bucket. Additionally, the conversion process must be automatically triggered whenever a new file is uploaded into the S3 bucket. Which of the following options must be implemented to meet these requirements with the LEAST operational overhead?
- Use a Lambda function triggered by an S3 PUT event to convert the .csv files to Parquet format. Use the AWS Transfer Family with SFTP service to move the output files to the target S3 bucket.
- Set up an Apache Spark job running in an Amazon EC2 instance and create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor S3 PUT events in the S3 bucket. Configure AWS Lambda to invoke the Spark job for every new .csv file added via a Function URL.
- Create an ETL (Extract, Transform, Load) job and a Data Catalog table in AWS Glue. Configure the AWS Glue crawler to run on a schedule to check for new files in the S3 bucket every hour and convert them to Parquet format.
- Utilize an AWS Glue extract, transform, and load (ETL) job to process and convert the .csv files to Apache Parquet format and then store the output files into the target S3 bucket. Set up an S3 Event Notification to track every S3 PUT event and invoke the ETL job in AWS Glue through Amazon SQS.
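One possible shape of the event-driven trigger, sketched with boto3 (the queue wiring is omitted and the job and argument names are hypothetical): a consumer function receives the S3 event notifications delivered via SQS and starts the Glue ETL job for each new file:

```python
import json
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Messages arrive from the SQS queue that receives the S3 event notifications.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # Start the Glue ETL job for the newly uploaded .csv file.
            glue.start_job_run(
                JobName="csv-to-parquet",  # hypothetical Glue job name
                Arguments={"--source_bucket": bucket, "--source_key": key},
            )
```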

Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information. This is correct because creating a new KMS key and using it with encryption helpers in AWS Lambda is the best way to securely store and manage sensitive environment variables like database credentials and API keys. "AWS Lambda does not provide encryption for the environment variables" is incorrect: AWS Lambda does support encryption for environment variables using KMS. The statement that AWS Lambda automatically encrypts environment variables using AWS KMS is misleading. While default encryption is provided, sensitive data can still be visible to users with Lambda console access. For greater security, especially with sensitive information, using a customer-managed KMS key is advised.

A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. They have a Lambda function that connects to MongoDB Atlas, which is a popular Database as a Service (DBaaS) platform, and also uses a third-party API to fetch certain data for their application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password as well as the API credentials that will be used by the Lambda function for the DEV, SIT, UAT, and PROD environments. Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security.
- Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the sensitive information.
- AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead.
- There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service.
- Create a new KMS key and use it to enable encryption helpers that leverage on AWS Key Management Service to store and encrypt the sensitive information.
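A hedged sketch of the decrypt side of the encryption helpers (variable names are hypothetical, and the encryption context must match whatever was used when the value was encrypted):

```python
import base64
import os
import boto3

kms = boto3.client("kms")

def get_secret(name):
    # The environment variable holds base64 ciphertext produced by the
    # console's encryption helpers with the customer-managed KMS key.
    ciphertext = base64.b64decode(os.environ[name])
    plaintext = kms.decrypt(
        CiphertextBlob=ciphertext,
        # Must match the context used at encryption time; the console helpers
        # use the function name. Drop this if no context was used.
        EncryptionContext={"LambdaFunctionName": os.environ["AWS_LAMBDA_FUNCTION_NAME"]},
    )["Plaintext"]
    return plaintext.decode("utf-8")

def handler(event, context):
    db_password = get_secret("DB_PASSWORD")        # hypothetical variable names
    api_key = get_secret("THIRD_PARTY_API_KEY")
    # ... connect to MongoDB Atlas and call the third-party API ...
    return {"status": "ok"}
```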

Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds. Lambda can run your code on thousands of available AWS-managed EC2 instances (that could already be running) within seconds to accommodate traffic. This is faster than the Auto Scaling process of launching new EC2 instances, which could take a few minutes or so. Lambda is a serverless computing service that automatically scales to handle the workload. This means it can rapidly adjust to handle bursts of traffic, which is ideal for scenarios where traffic patterns are unpredictable and can spike suddenly. With Lambda, you don't need to manage the underlying infrastructure, and you pay only for the compute time you consume.

A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity, but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API. What should the Solutions Architect do to meet the above requirement?
- Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds.
- Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds.
- Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds.
- Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.

The EC2 instance launched from the oldest launch configuration.

A suite of web applications is hosted in an Auto Scaling group of EC2 instances across three Availability Zones and is configured with default settings. There is an Application Load Balancer that forwards the request to the respective target group on the URL path. The scale-in policy has been triggered due to the low volume of incoming traffic to the application. Which EC2 instance will be the first one to be terminated by your Auto Scaling group?
- The EC2 instance which has the least number of user sessions
- The EC2 instance which has been running for the longest time
- The EC2 instance launched from the oldest launch configuration
- The instance will be randomly selected by the Auto Scaling group

Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day. This will ensure that the instances are already scaled up and ready before the start of the day, since this is when the application is used the most. "Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances" is incorrect. Take note that the scenario mentioned that the Auto Scaling group consists of Amazon EC2 instances with different instance types and sizes. Predictive scaling assumes that your Auto Scaling group is homogenous, which means that all EC2 instances are of equal capacity. The forecasted capacity can be inaccurate if you are using a variety of EC2 instance sizes and types in your Auto Scaling group.

A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is extensively used during office hours from 9 in the morning to 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours. Which of the following is the MOST operationally efficient solution to implement to ensure the application works properly at the beginning of the day?
- Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
- Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
- Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances.
- Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
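A minimal boto3 sketch of the correct option (group name, times, and sizes are hypothetical): scheduled actions that scale out before 9 AM and back in after 5 PM:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the 9 AM office-hours peak.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",        # hypothetical group name
    ScheduledActionName="scale-out-for-office-hours",
    Recurrence="30 8 * * MON-FRI",         # cron, evaluated in the given time zone
    TimeZone="America/New_York",
    MinSize=4,
    DesiredCapacity=8,
)

# Scale back in after office hours end.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",
    ScheduledActionName="scale-in-after-office-hours",
    Recurrence="0 17 * * MON-FRI",
    TimeZone="America/New_York",
    MinSize=2,
    DesiredCapacity=2,
)
```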

AWS Directory Service AD Connector and IAM Roles. AD Connector can be used to enable identity federation, allowing users in the Active Directory to access AWS resources without needing to create separate IAM users for each of them. While IAM roles are indeed used for granting permissions to AWS services, they are also crucial for managing access for users, particularly in federation scenarios where external identities need to interact with AWS resources. AWS Directory Service Simple AD is incorrect: Simple AD is a standalone directory service on AWS and is not designed to integrate with an on-premises Active Directory for identity federation purposes. IAM Groups is incorrect: IAM groups are used within AWS to assign permissions to IAM users. However, in a federation scenario where you're integrating with an external identity provider (like Active Directory), IAM roles are the preferred method for granting access, not IAM groups.

A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Select TWO.)
- Lambda
- AWS Directory Service AD Connector
- AWS Directory Service Simple AD
- IAM Groups
- IAM Roles

Aurora Global Database It supports storage-based replication that has a latency of less than 1 second. If there is an unplanned outage, one of the secondary regions you assigned can be promoted to read and write capabilities in less than 1 minute.

AWS Database that has the fastest RPO and RTO

remember

AWS Directory Service AD Connector can be used to enable identity federation, allowing users in the Active Directory to access AWS resources without needing to create separate IAM users for each of them.

remember

AWS Directory Service Simple AD: Simple AD is a standalone directory service on AWS and is not designed to integrate with an on-premises Active Directory for identity federation purposes.

remember

AWS PrivateLink is another name for interface VPC endpoints

remember

Additionally, when you store encryption keys in AWS CloudHSM and manage them through a custom key store in AWS Key Management Service (KMS), you maintain full control over these keys.

remember

Amazon EFS and NFS are mainly used for Linux systems, not Windows.

remember

Amazon EFS does not support Windows systems, only Linux OS.

remember

Amazon FSx for Windows File Server is used for migration scenarios (transitioning from on-premises Windows file servers to the cloud), while Storage Gateway is used for hybrid scenarios (it connects on-premises environments with AWS cloud storage, allowing seamless integration between the two).

remember

Amazon S3 is not a suitable storage service to use in storing encryption keys. You have to use AWS CloudHSM instead.

remember

An AWS Storage Gateway is more suitable for hybrid storage scenarios rather than fully migrating to AWS.

Enable Enhanced Monitoring in RDS. Enhanced Monitoring in RDS provides in-depth metrics like memory, file system, and disk I/O at the operating system level, crucial for detailed database load and performance analysis at the process level. Other options are less suitable because: 1) Amazon CloudWatch only offers basic CPU utilization monitoring without detailed per-process metrics; 2) creating and maintaining a custom script for CloudWatch is labor-intensive and might not match Enhanced Monitoring's data detail and reliability; 3) the RDS console provides basic CPU% and MEM% metrics but lacks the granularity of Enhanced Monitoring.

An online cryptocurrency exchange platform is hosted in AWS which uses ECS Cluster and RDS in Multi-AZ Deployments configuration. The application is heavily using the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process. Which of the following is the most suitable solution to properly monitor your database?
- Use Amazon CloudWatch to monitor the CPU Utilization of your database.
- Enable Enhanced Monitoring in RDS.
- Create a script that collects and publishes custom metrics to CloudWatch, which tracks the real-time CPU Utilization of the RDS instance, and then set up a custom CloudWatch dashboard to view the metrics.
- Check the CPU% and MEM% metrics which are readily available in the Amazon RDS console that shows the percentage of the CPU bandwidth and total memory consumed by each database process of your RDS instance.
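A short boto3 sketch of enabling Enhanced Monitoring (instance identifier and role ARN are hypothetical; the role must allow RDS to publish OS metrics to CloudWatch Logs):

```python
import boto3

rds = boto3.client("rds")

# Turn on Enhanced Monitoring with 1-second granularity.
rds.modify_db_instance(
    DBInstanceIdentifier="exchange-db",   # hypothetical instance id
    MonitoringInterval=1,                 # seconds: 0 disables; 1/5/10/15/30/60 allowed
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```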

Use S3 client-side encryption with a client-side master key. When using an AWS KMS-managed customer master key to enable client-side data encryption, you provide an AWS KMS customer master key ID (CMK ID) to AWS. On the other hand, when you use a client-side master key for client-side data encryption, your client-side master keys and your unencrypted data are never sent to AWS.

An online medical system hosted in AWS stores sensitive Personally Identifiable Information (PII) of the users in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company. Which S3 encryption technique should the Architect use?
- Use S3 client-side encryption with a client-side master key.
- Use S3 client-side encryption with a KMS-managed customer master key.
- Use S3 server-side encryption with a customer-provided key.
- Use S3 server-side encryption with a KMS-managed key.
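A rough sketch of the idea using the third-party cryptography package (bucket and key names are hypothetical; real deployments would typically use the AWS Encryption SDK or an S3 encryption client rather than hand-rolled encryption):

```python
import boto3
from cryptography.fernet import Fernet

# The client-side master key is generated and kept on-premises;
# it is never uploaded to AWS.
client_side_master_key = Fernet.generate_key()
cipher = Fernet(client_side_master_key)

# Encrypt locally before upload: only ciphertext ever leaves the client.
ciphertext = cipher.encrypt(b"patient PII record")
boto3.client("s3").put_object(
    Bucket="medical-records",          # hypothetical bucket
    Key="records/patient-001.bin",
    Body=ciphertext,
)
```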

Create a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries. "Do nothing since by default, Aurora will automatically direct the production traffic to your high-capacity instances and the reporting queries to your low-capacity instances" is incorrect because Aurora does not do this by default. You have to create custom endpoints in order to accomplish this requirement.

An online shopping platform is hosted on an Auto Scaling group of Spot EC2 instances and uses Amazon Aurora PostgreSQL as its database. There is a requirement to optimize your database workloads in your cluster where you have to direct the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances. Which is the most suitable configuration for your application as well as your Aurora database cluster to achieve this requirement?
- Configure your application to use the reader endpoint for both production traffic and reporting queries, which will enable your Aurora database to automatically perform load-balancing among all the Aurora Replicas.
- In your application, use the instance endpoint of your Aurora database to handle the incoming production traffic and use the cluster endpoint to handle reporting queries.
- Create a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries.
- Do nothing since by default, Aurora will automatically direct the production traffic to your high-capacity instances and the reporting queries to your low-capacity instances.
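A minimal boto3 sketch of the correct option (cluster, endpoint, and instance identifiers are hypothetical): one custom endpoint grouping the high-capacity instances for production, another for reporting:

```python
import boto3

rds = boto3.client("rds")

# Custom endpoint that groups the high-capacity instances for production traffic.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="shop-aurora-cluster",            # hypothetical
    DBClusterEndpointIdentifier="production-traffic",
    EndpointType="ANY",
    StaticMembers=["aurora-node-xlarge-1", "aurora-node-xlarge-2"],
)

# Second custom endpoint pointing reporting queries at the low-capacity instance.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="shop-aurora-cluster",
    DBClusterEndpointIdentifier="reporting-queries",
    EndpointType="READER",
    StaticMembers=["aurora-node-small-1"],
)
```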

remember

Attributes with high cardinality have a large and diverse range of values. When used as partition keys, they help distribute data more evenly across the physical partitions of a DynamoDB table.

remember

Canary deployments do not require setting up a completely separate environment (like in blue-green deployments), making it a more cost-effective approach.

remember

The CloudWatch agent and SSM Agent work with both Linux and Windows instances.

remember

EFS lifecycle management doesn't delete objects. It can only transition files in and out of the "Infrequent Access" tier.

remember

IAM groups are used within AWS to assign permissions to IAM users. However, in a federation scenario where you're integrating with an external identity provider (like Active Directory), IAM roles are the preferred method for granting access, not IAM groups.

remember

If you're trying to implement a "follow" feature with a DynamoDB table, make sure that the DynamoDB stream is enabled. You have to enable it manually because it's not enabled by default.

Amazon GuardDuty

Intelligent threat discovery; uses machine learning and anomaly detection (does NOT do traffic inspection or filtering)

remember

Multi-AZ deployment is only applicable inside a single region and not in a multi-region setup for RDS

remember

Network File System (NFS) file share is mainly used for Linux systems, NOT Windows

remember

Oracle RMAN (Recovery Manager) is not supported in RDS.

remember

Predictive scaling assumes that your Auto Scaling group is homogenous, which means that all EC2 instances are of equal capacity. The forecasted capacity can be inaccurate if you are using a variety of EC2 instance sizes and types on your Auto Scaling group.

remember

Producers send the message to an SQS queue while Consumers poll (take out) the message

remember

S3 IA (Infrequent Access) tiers are not for data archiving. For archiving, use S3 Glacier.

remember

SSH runs over the TCP protocol

Memory Utilization of an EC2 instance. These are all custom metrics you have to set up manually for CloudWatch:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection

The company that you are working for has a highly available architecture consisting of an elastic load balancer and several EC2 instances configured with auto-scaling in three Availability Zones. You want to monitor your EC2 instances based on a particular metric, which is not readily available in CloudWatch. Which of the following is a custom metric in CloudWatch which you have to manually set up?
- Disk Reads activity of an EC2 instance
- Network packets out of an EC2 instance
- Memory Utilization of an EC2 instance
- CPU Utilization of an EC2 instance
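For illustration, a minimal sketch (instance ID is hypothetical; uses the third-party psutil package) of publishing memory utilization as a custom CloudWatch metric, which is essentially what the CloudWatch agent automates:

```python
import boto3
import psutil  # third-party library used here to read OS-level memory stats

cloudwatch = boto3.client("cloudwatch")

# CloudWatch has no built-in visibility into the guest OS, so the instance
# must push memory utilization itself as a custom metric.
memory_percent = psutil.virtual_memory().percent
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": memory_percent,
        "Unit": "Percent",
    }],
)
```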

remember

To get started with AWS Transfer for SFTP (AWS SFTP), you create an SFTP server and map your domain to the server endpoint.

remember

Use signed cookies for the following cases:
- You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.
- You don't want to change your current URLs.

remember

Using a custom KMS key with encryption helpers in AWS Lambda is the most secure method for managing sensitive environment variables, providing robust encryption and precise access control

simple scaling

Using this, you will need to wait for the cooldown period to complete before initiating additional scaling activities. Target tracking or step scaling policies can trigger a scaling activity immediately without waiting for the cooldown period to expire, which is why target tracking/step scaling should be the preferred choice over simple scaling.
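A minimal boto3 sketch of the preferred alternative (group and policy names are hypothetical): a target tracking policy that keeps average CPU around 50%:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep average CPU near 50%; scaling can trigger without
# waiting for a simple-scaling cooldown to expire.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```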

remember

When using an AWS KMS-managed customer master key to enable client-side data encryption, you provide an AWS KMS customer master key ID (CMK ID) to AWS.

remember

With AWS Firewall Manager integration, you can centrally define and manage your rules and reuse them across all the web applications that you need to protect.

remember

With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. You can use Object Lock to help meet regulatory requirements that require WORM storage, or to simply add another layer of protection against object changes and deletion.

IAM Database (DB) Authentication

With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

remember

You cannot set a time period for a legal hold (for example, ensuring that no object can be overwritten or deleted by any user for a period of one year only). To set a time period, you need to use the "retention period" option. Take note that a legal hold will still restrict users from changing the S3 objects even after the retention period has elapsed.
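A short boto3 sketch contrasting the two (bucket, key, and date are hypothetical; the bucket must have Object Lock enabled):

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Retention period: time-bound WORM protection (e.g., until a fixed date).
s3.put_object_retention(
    Bucket="compliance-archive",   # hypothetical Object Lock-enabled bucket
    Key="audit/2024-report.pdf",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)

# Legal hold: indefinite protection with no expiry; must be removed explicitly.
s3.put_object_legal_hold(
    Bucket="compliance-archive",
    Key="audit/2024-report.pdf",
    LegalHold={"Status": "ON"},
)
```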

remember

You need to create a custom endpoint in Aurora based on the specified criteria for the production traffic and another custom endpoint to handle the reporting queries. Aurora does not do this by default. You have to create custom endpoints in order to direct the production traffic to your high-capacity instances and point the reporting queries sent by your internal staff to the low-capacity instances.

remember

You only need ONE NAT gateway in each Availability Zone (not 2 or 3).

Amazon WorkDocs

allows you to easily create, edit, share, and collaborate on documents in the cloud.

remember

blue-green deployment typically involves maintaining two separate but identical environments (blue and green). This can be more resource-intensive and costly compared to a canary release.

remember

by default, CloudWatch does not automatically provide memory and disk utilization metrics of your instances. You have to set up custom CloudWatch metrics to monitor the memory, disk swap, disk space, and page file utilization of your instances.

AWS Lake Formation

can consolidate data lakes from multiple accounts into a single account

File Gateway

can provide a local cache to access data which allows frequently accessed data to be stored locally for low-latency access.

- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection

custom metrics you have to set up for CloudWatch

Amazon Redshift

data warehousing solution built on a relational database model, not a NoSQL model

Cross-origin resource sharing (CORS)

defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. For example, imagine you have two websites:
- Client Website: https://client.example.com - a web application that users interact with.
- API Server: https://api.service.com - a separate domain where your server-side API is hosted.
Without ___: by default, browsers implement the same-origin policy. This policy prevents a website from making requests to a different domain than the one that served the web page. So, in this case, the browser will block the request from https://client.example.com to https://api.service.com for security reasons.
With ___: to enable the client website to access the resources from the API server, the server at https://api.service.com needs to include ___ headers in its response. These headers tell the browser that the web application at https://client.example.com is permitted to access the resources.
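In an AWS context, the same idea applies to S3: a minimal boto3 sketch (bucket name hypothetical) that allows the client origin to call the bucket from the browser:

```python
import boto3

s3 = boto3.client("s3")

# Allow the client origin to make cross-origin requests to this bucket.
s3.put_bucket_cors(
    Bucket="api-assets",   # hypothetical bucket standing in for api.service.com
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://client.example.com"],
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```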

Aurora Auto Scaling

ensures that your database cluster scales up or down as needed without manual intervention.

Amazon Kendra

enterprise search service that allows developers to add search capabilities to their applications.

remember

for in-depth monitoring of CPU and memory usage by different processes or threads on an RDS instance, enabling Enhanced Monitoring in RDS is the most suitable

Amazon Quantum Ledger Database (Amazon QLDB)

fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log.

Amazon Fraud Detector

fully managed service for identifying potentially fraudulent activities and for catching more online fraud faster.

ParallelCluster

makes it easy for you to deploy and manage High-Performance Computing (HPC) clusters on AWS

90 days

maximum days for the EFS lifecycle policy

rate-based rule

monitors request rates per IP address, triggering an action when requests exceed a set limit within a 5-minute period. This rule is useful for temporarily blocking IPs that send excessive requests, such as during a DDoS attack.

AWS Glue

powerful ETL (Extract, Transform, Load) service that easily moves data between different data stores & can convert data to different formats

Amazon FSx For Lustre

provides a high-performance parallel file system for High-Performance Computing (HPC)

AWS Shield

provides detection and mitigation against large and sophisticated DDoS attacks

AWS Artifact

provides on-demand access to AWS' security and compliance reports and select online agreements

AWS Security Hub

provides you with a comprehensive view of your high-priority security alerts

Amazon Timestream

serverless time series database service that is commonly used for IoT and operational applications.

AWS Resource Access Manager

service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules.

Amazon Polly

service that turns text into speech

Amazon Inspector

simply a security tool for detecting vulnerabilities in AWS workloads; it does not provide compliance reports

Amazon GuardDuty

threat detection service

remember

to invoke an AWS Lambda function from an Aurora DB cluster, you need to use a native function or a stored procedure

Redshift

used for OLAP (Online Analytical Processing) systems

Amazon Macie

a data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data (PII)

Direct Connect

used to establish a dedicated physical network connection from your premises to AWS

Web Identity Federation

used to let users sign in via a well-known external identity provider (IdP), such as Login with Amazon, Facebook, Google. It does not utilize Active Directory.

remember

versioning can help protect your S3 objects from being overwritten

remember

when you use client-side master key for client-side data encryption, your client-side master keys and your unencrypted data are never sent to AWS.

remember

while IAM roles are indeed used for granting permissions to AWS services, they are also crucial for managing access for users, particularly in federation scenarios where external identities need to interact with AWS resources.

remember

you can configure an S3 Access Point to restrict data access requests to a specific VPC (so only that VPC and no other VPCs can access the bucket)

remember

you can't poll Amazon SNS, you can only subscribe to an SNS topic

remember

you cannot directly use an AWS Network Firewall to restrict S3 bucket data access requests to a specific Amazon VPC only (you must use an Access Point instead).

remember

you cannot manually disable the Object Versioning feature if you have already selected the Object Lock option.

AWS VPN CloudHub

- If you have multiple sites, each with its own VPN connection, you can use this to connect those sites together
- Hub-and-spoke model

