AWS Solutions Architect - Test 1


The solo founder at a tech startup has just created a brand new AWS account. The founder has provisioned an EC2 instance 1A which is running in region A. Later, he takes a snapshot of the instance 1A and then creates a new AMI in region A from this snapshot. This AMI is then copied into another region B. The founder provisions an instance 1B in region B using this new AMI in region B. At this point in time, what entities exist in region B?
- 1 EC2 instance and 1 AMI exist in region B
- 1 EC2 instance and 2 AMIs exist in region B
- 1 EC2 instance and 1 snapshot exist in region B
- 1 EC2 instance, 1 AMI and 1 snapshot exist in region B

1 EC2 instance, 1 AMI and 1 snapshot exist in region B
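
A minimal boto3 sketch of the cross-region copy, with hypothetical region names and IDs: copying an AMI into region B also creates a new backing EBS snapshot in region B, which is why all three entities end up there once the instance is launched.

```python
import boto3

# Region B client (hypothetical regions: A = us-east-1, B = us-west-2)
ec2_b = boto3.client("ec2", region_name="us-west-2")

# Copying the AMI from region A creates a new AMI in region B, backed by a
# new EBS snapshot that is also stored in region B.
resp = ec2_b.copy_image(
    Name="founder-ami-copy",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical AMI ID in region A
    SourceRegion="us-east-1",
)
print(resp["ImageId"])  # the new AMI in region B
```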

A cyber security company is running a mission critical application using a single Spread placement group of EC2 instances. The company needs 15 Amazon EC2 instances for optimal performance. How many Availability Zones (AZs) will the company need to deploy these EC2 instances per the given use-case?
- 7
- 14
- 3
- 15

3
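
The arithmetic: a spread placement group supports a maximum of seven running instances per Availability Zone, so 15 instances require ceil(15/7) = 3 AZs. A short sketch, with a hypothetical group name:

```python
import math
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A spread placement group allows at most 7 running instances per AZ.
print(math.ceil(15 / 7))  # 3 Availability Zones needed for 15 instances

ec2.create_placement_group(
    GroupName="mission-critical-spread",  # hypothetical name
    Strategy="spread",
)
```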

A biotechnology company wants to seamlessly integrate its on-premises data center with AWS cloud-based IT systems, which would be critical to manage as well as scale up the complex planning and execution of every stage of its drug development process. As part of a pilot program, the company wants to integrate data files from its analytical instruments into AWS via an NFS interface. Which of the following AWS services is the MOST efficient solution for the given use-case?
- AWS Storage Gateway - File Gateway
- AWS Storage Gateway - Volume Gateway
- AWS Storage Gateway - Tape Gateway
- AWS Site-to-Site VPN

AWS Storage Gateway - File Gateway

The DevOps team at a major financial services company uses Multi-Availability Zone (Multi-AZ) deployment for its MySQL RDS database in order to automate its database replication and augment data durability. The DevOps team has scheduled a maintenance window for a database engine level upgrade for the coming weekend. Which of the following is the correct outcome during the maintenance window?
- Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and standby DB instances to be upgraded at the same time. This causes downtime until the upgrade is complete
- Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and standby DB instances to be upgraded at the same time. However, this does not cause any downtime until the upgrade is complete
- Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers the standby DB instance to be upgraded which is then followed by the upgrade of the primary DB instance. This does not cause any downtime for the duration of the upgrade
- Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers the primary DB instance to be upgraded which is then followed by the upgrade of the standby DB instance. This does not cause any downtime for the duration of the upgrade

Any database engine level upgrade for an RDS DB instance with Multi-AZ deployment triggers both the primary and standby DB instances to be upgraded at the same time. This causes downtime until the upgrade is complete

Which of the following is true regarding cross-zone load balancing as seen in Application Load Balancer versus Network Load Balancer?
- By default, cross-zone load balancing is disabled for both Application Load Balancer and Network Load Balancer
- By default, cross-zone load balancing is enabled for both Application Load Balancer and Network Load Balancer
- By default, cross-zone load balancing is disabled for Application Load Balancer and enabled for Network Load Balancer
- By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer

By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer
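
A rough sketch of flipping the default for a Network Load Balancer, assuming a hypothetical load balancer ARN; cross-zone balancing is exposed as a load balancer attribute:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Cross-zone load balancing is off by default for an NLB and can be switched
# on per load balancer. (For an ALB it is on by default.)
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/example/abc123",  # hypothetical
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```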

An IT company wants to move all the compute components of its AWS Cloud infrastructure into a serverless architecture. Their development stack comprises a mix of backend programming languages and the company would like to explore the support offered by the AWS Lambda runtime for their programming languages stack. Can you identify the programming languages supported by the Lambda runtime? (Select two)
- C#/.NET
- C
- Go
- PHP
- R

- C#/.NET
- Go

A DevOps engineer at an IT company was recently added to the admin group of the company's AWS account. The AdministratorAccess managed policy is attached to this group. Can you identify the AWS tasks that the DevOps engineer CANNOT perform even though he has full Administrator privileges? (Select two)
- Configure an Amazon S3 bucket to enable MFA (Multi Factor Authentication) delete
- Delete the IAM user for his manager
- Delete an S3 bucket from the production environment
- Change the password for his own IAM user account
- Close the company's AWS account

- Configure an Amazon S3 bucket to enable MFA (Multi Factor Authentication) delete
- Close the company's AWS account

The engineering team at a data analytics company has observed that its flagship application functions at its peak performance when the underlying EC2 instances have a CPU utilization of about 50%. The application is built on a fleet of EC2 instances managed under an Auto Scaling group. The workflow requests are handled by an internal Application Load Balancer that routes the requests to the instances. As a solutions architect, what would you recommend so that the application runs near its peak performance state?
- Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
- Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target metric with a target value of 50%
- Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target metric with a target value of 50%
- Configure the Auto Scaling group to use a CloudWatch alarm triggered on a CPU utilization threshold of 50%

Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target metric with a target value of 50%
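
A minimal sketch of such a target tracking policy via boto3, with a hypothetical Auto Scaling group name; the ASGAverageCPUUtilization predefined metric keeps the fleet near the 50% target:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-asg",  # hypothetical group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out/in to hold average CPU near 50%
    },
)
```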

A technology blogger wants to write a review on the comparative pricing for various storage types available on AWS Cloud. The blogger has created a test file of size 1GB with some random data. Next, he copies this test file into AWS S3 Standard storage class, provisions an EBS volume (General Purpose SSD (gp2)) with 100GB of provisioned storage and copies the test file into the EBS volume, and lastly copies the test file into an EFS Standard Storage filesystem. At the end of the month, he analyses the bill for costs incurred on the respective storage types for the test file. What is the correct order of the storage charges incurred for the test file on these three storage types?
- Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test file storage on EFS
- Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
- Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test file storage on EBS
- Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test file storage on EFS

Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test file storage on EBS
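
A back-of-the-envelope check, using illustrative, region-dependent list prices (verify against the current AWS pricing pages): S3 and EFS bill only the 1GB actually stored, while EBS bills the full 100GB provisioned.

```python
# Illustrative prices in USD per GB-month; assumptions, not official figures.
S3_STANDARD, EBS_GP2, EFS_STANDARD = 0.023, 0.10, 0.30

print("S3 Standard:", 1 * S3_STANDARD)    # bills the 1GB actually stored
print("EFS        :", 1 * EFS_STANDARD)   # bills the 1GB actually stored
print("EBS gp2    :", 100 * EBS_GP2)      # bills all 100GB provisioned
# => S3 Standard < EFS < EBS
```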

The DevOps team at an analytics company has noticed that the performance of its proprietary Machine Learning workflow has deteriorated ever since a new Auto Scaling group was deployed a few days ago. Upon investigation, the team found out that the Launch Configuration selected for the Auto Scaling group is using an incorrect instance type that is not optimized to handle the Machine Learning workflow. As a solutions architect, what would you recommend to provide a long term resolution for this issue?
- Modify the launch configuration to use the correct instance type and continue to use the existing Auto Scaling group
- No need to modify the launch configuration. Just modify the Auto Scaling group to use the correct instance type
- Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed
- No need to modify the launch configuration. Just modify the Auto Scaling group to use more instances of the existing instance type. More instances may offset the loss of performance

Create a new launch configuration to use the correct instance type. Modify the Auto Scaling group to use this new launch configuration. Delete the old launch configuration as it is no longer needed

An IT consultant is helping the owner of a medium-sized business set up an AWS account. What are the security recommendations he must follow while creating the AWS account root user? (Select two)
- Encrypt the access keys and save them on Amazon S3
- Create a strong password for the AWS account root user
- Enable Multi Factor Authentication (MFA) for the AWS account root user account
- Create AWS account root user access keys and share those keys only with the business owner
- Send an email to the business owner with details of the login username and password for the AWS root user. This will help the business owner to troubleshoot any login issues in the future

- Create a strong password for the AWS account root user
- Enable Multi Factor Authentication (MFA) for the AWS account root user account

A Silicon Valley based research group is working on a High Performance Computing (HPC) application in the area of Computational Fluid Dynamics. The application carries out simulations of the external aerodynamics around a car and needs to be deployed on EC2 instances with a requirement for high levels of inter-node communications and high network traffic between the instances. As a solutions architect, which of the following options would you recommend to the engineering team at the startup? (Select two)
- Deploy EC2 instances in a spread placement group
- Deploy EC2 instances in a partition placement group
- Deploy EC2 instances with Elastic Fabric Adapter
- Deploy EC2 instances behind a Network Load Balancer
- Deploy EC2 instances in a cluster placement group

- Deploy EC2 instances with Elastic Fabric Adapter
- Deploy EC2 instances in a cluster placement group

A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type. Which of the following is correct regarding the pricing for these two services?
- Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on vCPU and memory resources that the containerized application requests
- Both ECS with EC2 launch type and ECS with Fargate launch type are charged based on EC2 instances and EBS volumes used
- ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
- Both ECS with EC2 launch type and ECS with Fargate launch type are just charged based on Elastic Container Service used per hour

ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests

A healthcare startup needs to enforce compliance and regulatory guidelines for objects stored in Amazon S3. One of the key requirements is to provide adequate protection against accidental deletion of objects. As a solutions architect, what are your recommendations to address these guidelines? (Select two)
- Create an event trigger on deleting any S3 object. The event invokes an SNS notification via email to the IT manager
- Enable MFA delete on the bucket
- Establish a process to get managerial approval for deleting S3 objects
- Change the configuration on AWS S3 console so that the user needs to provide additional confirmation while deleting any S3 object
- Enable versioning on the bucket

- Enable versioning on the bucket
- Enable MFA delete on the bucket
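
A minimal sketch of enabling both protections, with a hypothetical bucket name and MFA device serial; note that MFA delete can only be enabled by the root user, supplying the device serial and a current token code:

```python
import boto3

s3 = boto3.client("s3")

# Versioning keeps prior object versions recoverable; MFA delete additionally
# requires a valid MFA code to permanently delete a version.
s3.put_bucket_versioning(
    Bucket="healthcare-records-bucket",  # hypothetical bucket
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```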

A geological research agency maintains the seismological data for the last 100 years. The data has a velocity of 1GB per minute. You would like to store the data with only the most relevant attributes to build a predictive model for earthquakes. What AWS services would you use to build the most cost-effective solution with the LEAST amount of infrastructure maintenance?
- Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform the data before writing to S3
- Ingest the data in an AWS Glue job and use Spark transformations before writing to S3
- Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
- Ingest the data in a Spark Streaming Cluster on EMR and use Spark Streaming transformations before writing to S3

Ingest the data in Kinesis Data Firehose and use a Lambda function to filter and transform the incoming stream before the output is dumped on S3
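
A skeleton of the kind of transformation Lambda that Kinesis Data Firehose invokes; the field names in the filter are hypothetical, but the record contract (echoing recordId and returning Ok, Dropped, or ProcessingFailed with base64-encoded data) is fixed by Firehose:

```python
import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Hypothetical filter rule: drop readings without a magnitude field.
        if payload.get("magnitude") is None:
            output.append({"recordId": record["recordId"], "result": "Dropped"})
            continue
        # Keep only the most relevant attributes (hypothetical field names).
        slim = {"ts": payload.get("ts"), "magnitude": payload["magnitude"]}
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(slim).encode()).decode(),
        })
    return {"records": output}
```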

The IT department at a consulting firm is conducting a training workshop for new developers. As part of an evaluation exercise on Amazon S3, the new developers were asked to identify the invalid storage class lifecycle transitions for objects stored on S3. Can you spot the INVALID lifecycle transitions from the options below? (Select two)
- S3 Intelligent-Tiering => S3 Standard
- S3 One Zone-IA => S3 Standard-IA
- S3 Standard => S3 Intelligent-Tiering
- S3 Standard-IA => S3 Intelligent-Tiering
- S3 Standard-IA => S3 One Zone-IA

- S3 Intelligent-Tiering => S3 Standard
- S3 One Zone-IA => S3 Standard-IA

A streaming solutions company is building a video streaming product by using an Application Load Balancer (ALB) that routes the requests to the underlying EC2 instances. The engineering team has noticed a peculiar pattern. The ALB removes an instance whenever it is detected as unhealthy, but the Auto Scaling group fails to kick in and provision a replacement instance. What could explain this anomaly?
- The Auto Scaling group is using ALB based health check and the Application Load Balancer is using EC2 based health check
- Both the Auto Scaling group and Application Load Balancer are using ALB based health check
- The Auto Scaling group is using EC2 based health check and the Application Load Balancer is using ALB based health check
- Both the Auto Scaling group and Application Load Balancer are using EC2 based health check

The Auto Scaling group is using EC2 based health check and the Application Load Balancer is using ALB based health check
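
A one-call sketch of the fix, with a hypothetical group name: switching the Auto Scaling group's health check type to ELB makes the group replace instances that fail the load balancer's health check rather than only those failing EC2 status checks:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="video-streaming-asg",  # hypothetical group name
    HealthCheckType="ELB",        # honor the ALB's target health checks
    HealthCheckGracePeriod=300,   # seconds to let new instances warm up
)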

A software engineering intern at an e-commerce company is documenting the process flow to provision EC2 instances via the Amazon EC2 API. These instances are to be used for an internal application that processes HR payroll data. He wants to highlight those volume types that cannot be used as a boot volume. Can you help the intern by identifying those storage volume types that CANNOT be used as boot volumes while creating the instances? (Select two)
- General Purpose SSD (gp2)
- Provisioned IOPS SSD (io1)
- Throughput Optimized HDD (st1)
- Instance Store
- Cold HDD (sc1)

- Throughput Optimized HDD (st1)
- Cold HDD (sc1)

A gaming company uses Amazon Aurora as its primary database service. The company has now deployed 5 multi-AZ read replicas to increase the read throughput and to serve as failover targets. The replicas have been assigned the following failover priority tiers and corresponding sizes are given in parentheses: tier-1 (16TB), tier-1 (32TB), tier-10 (16TB), tier-15 (16TB), tier-15 (32TB). In the event of a failover, Amazon RDS will promote which of the following read replicas?
- Tier-15 (32TB)
- Tier-1 (16TB)
- Tier-10 (16TB)
- Tier-1 (32TB)

Tier-1 (32TB)

An e-commerce company wants to explore a hybrid cloud environment with AWS so that it can start leveraging AWS services for some of its data analytics workflows. The engineering team at the e-commerce company wants to establish a dedicated, encrypted, low latency, and high throughput connection between its data center and AWS Cloud. The engineering team has set aside sufficient time to account for the operational overhead of establishing this connection. As a solutions architect, which of the following solutions would you recommend to the company?
- Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud
- Use site-to-site VPN to establish a connection between the data center and AWS Cloud
- Use VPC transit gateway to establish a connection between the data center and AWS Cloud
- Use AWS Direct Connect to establish a connection between the data center and AWS Cloud

Use AWS Direct Connect plus VPN to establish a connection between the data center and AWS Cloud

A digital media streaming company wants to use Amazon CloudFront to distribute its content only to its service subscribers. As a solutions architect, which of the following solutions would you suggest in order to deliver restricted content to bona fide end users? (Select two)
- Forward HTTPS requests to the origin server by using the ECDSA or RSA ciphers
- Use CloudFront signed URLs
- Require HTTPS for communication between CloudFront and your custom origin
- Require HTTPS for communication between CloudFront and your S3 origin
- Use CloudFront signed cookies

- Use CloudFront signed URLs
- Use CloudFront signed cookies

An IT company wants to review its security best practices after an incident was reported where a new developer on the team was assigned full access to DynamoDB. The developer accidentally deleted a couple of tables from the production environment while building out a new feature. Which is the MOST effective way to address this issue so that such incidents do not recur?
- Remove full database access for all IAM users in the organization
- The CTO should review the permissions for each new developer's IAM user so that such incidents don't recur
- Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
- Only root user should have full database access in the organization

Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
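
A minimal sketch of attaching a permissions boundary to a user, assuming a hypothetical boundary policy ARN; the user's effective permissions become the intersection of the boundary and their identity-based policies, capping what can ever be granted:

```python
import boto3

iam = boto3.client("iam")

# Even if this user is later granted broad identity-based policies, nothing
# outside the boundary policy will be effective.
iam.put_user_permissions_boundary(
    UserName="new-developer",  # hypothetical user
    PermissionsBoundary="arn:aws:iam::111122223333:policy/DeveloperBoundary",
)
```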

A cyber forensics company runs its EC2 servers behind an Application Load Balancer along with an Auto Scaling group. The engineers at the company want to be able to install proprietary forensic tools on each instance and perform a pre-activation status check of these tools whenever an instance is provisioned because of a scale-out event from an auto-scaling policy. Which of the following options can be used to enable this custom action?
- Use the Auto Scaling group scheduled action to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check
- Use the EC2 instance meta data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check
- Use the EC2 instance user data to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check
- Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check

Use the Auto Scaling group lifecycle hook to put the instance in a wait state and launch a custom script that installs the proprietary forensic tools and performs a pre-activation status check
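
A rough sketch of such a launch lifecycle hook, with hypothetical names; each new instance pauses in the Pending:Wait state until the bootstrap script reports back via CompleteLifecycleAction:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_lifecycle_hook(
    LifecycleHookName="install-forensic-tools",
    AutoScalingGroupName="forensics-asg",  # hypothetical group name
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=900,        # seconds allowed for install + status check
    DefaultResult="ABANDON",     # terminate if the check never completes
)

# The instance's bootstrap script would call this once the check passes:
# autoscaling.complete_lifecycle_action(
#     LifecycleHookName="install-forensic-tools",
#     AutoScalingGroupName="forensics-asg",
#     LifecycleActionResult="CONTINUE",
#     InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
# )
```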

A Silicon Valley based startup uses a fleet of EC2 servers to manage its CRM application. These EC2 servers are behind an Elastic Load Balancer (ELB). Which of the following configurations are NOT allowed for the Elastic Load Balancer?
- Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed across two Availability Zones of us-east-1 region
- Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed in Availability Zone A of us-east-1 region
- Use the ELB to distribute traffic for four EC2 instances. All the four instances are deployed in Availability Zone B of us-west-1 region
- Use the ELB to distribute traffic for four EC2 instances. Two of these instances are deployed in Availability Zone A of us-east-1 region and the other two instances are deployed in Availability Zone B of us-west-1 region

Use the ELB to distribute traffic for four EC2 instances. Two of these instances are deployed in Availability Zone A of us-east-1 region and the other two instances are deployed in Availability Zone B of us-west-1 region

The engineering team at an e-commerce company wants to set up a custom domain for internal usage such as internaldomainexample.com. The team wants to use the private hosted zones feature of Route 53 to accomplish this. Which of the following settings of the VPC need to be enabled? (Select two)
- enableDnsHostnames
- enableVpcSupport
- enableVpcHostnames
- enableDnsDomain
- enableDnsSupport

- enableDnsHostnames
- enableDnsSupport
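
A minimal sketch of enabling both attributes on the VPC (hypothetical VPC ID); the EC2 API accepts only one of these attributes per ModifyVpcAttribute call, hence the loop:

```python
import boto3

ec2 = boto3.client("ec2")

# Both DNS attributes must be true for Route 53 private hosted zones to
# resolve inside the VPC.
for attr in ("EnableDnsSupport", "EnableDnsHostnames"):
    ec2.modify_vpc_attribute(
        VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
        **{attr: {"Value": True}},
    )
```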

A Silicon Valley based startup wants to be the global collaboration platform for API development. The product team at the startup has figured out a market need to support both stateful and stateless client-server communications via the APIs developed using its platform. You have been hired by the startup as an AWS solutions architect to build a Proof-of-Concept to fulfill this market need using AWS API Gateway. Which of the following would you recommend to the startup?
- API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
- API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server
- API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server
- API Gateway creates RESTful APIs that enable stateful client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateless, full-duplex communication between client and server

API Gateway creates RESTful APIs that enable stateless client-server communication and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server

A chip design startup is running an Electronic Design Automation (EDA) application, which is a high-performance workflow used to simulate performance and failures during the design phase of silicon chip production. The application produces massive volumes of data that can be divided into two categories. The 'hot data' needs to be both processed and stored quickly in a parallel and distributed fashion. The 'cold data' needs to be kept for reference with quick access for reads and updates at a low cost. Which of the following AWS services is BEST suited to accelerate the aforementioned chip design process?
- Amazon FSx for Windows File Server
- Amazon EMR
- Amazon FSx for Lustre
- AWS Glue

Amazon FSx for Lustre

A large financial institution operates an on-premises data center with hundreds of PB of data managed on Microsoft's Distributed File System (DFS). The CTO wants the organization to transition into a hybrid cloud environment and run data-intensive analytics workloads that support DFS. Which of the following AWS services can facilitate the migration of these workloads?
- Amazon FSx for Lustre
- AWS Managed Microsoft AD
- Amazon FSx for Windows File Server
- Microsoft SQL Server on Amazon

Amazon FSx for Windows File Server

A US-based non-profit organization develops learning methods for primary and secondary vocational education, delivered through digital learning platforms, which are hosted on AWS under a hybrid cloud setup. After experiencing stability issues with their cluster of self-managed RabbitMQ message brokers, the organization wants to explore an alternate solution on AWS. As a solutions architect, which of the following AWS services would you recommend that can provide support for quick and easy migration from RabbitMQ?
- Amazon SQS FIFO (First-In-First-Out)
- Amazon MQ
- Amazon SQS Standard
- Amazon Simple Notification Service (Amazon SNS)

Amazon MQ

The audit department at one of the leading consultancy firms generates and accesses the audit reports only during the last month of a financial year. The department uses AWS Step Functions to orchestrate the report-creation process, with failover and retry scenarios built into the solution, and the data should be available with millisecond latency. The underlying data to create these audit reports is stored on S3 and runs into hundreds of Terabytes. As a solutions architect, which is the MOST cost-effective storage class that you would recommend to be used for this use-case?
- Amazon S3 Standard
- Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
- Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
- Amazon S3 Glacier (S3 Glacier)

Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

The engineering team at a Spanish professional football club has built a notification system on the web platform using Amazon SNS notifications which are then handled by a Lambda function for end-user delivery. During the off-season, the notification system needs to handle about 100 requests per second. During the peak football season, the rate touches about 5000 requests per second, and it has been noticed that a significant number of the notifications are not being delivered to the end-users on the web platform. As a solutions architect, which of the following would you suggest as the BEST possible solution to this issue?
- Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit
- The engineering team needs to provision more servers running the SNS service
- The engineering team needs to provision more servers running the Lambda service
- Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account limit

Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for Lambda, so the team needs to contact AWS support to raise the account limit

The DevOps team at an e-commerce company has deployed a fleet of EC2 instances under an Auto Scaling group (ASG). The instances under the ASG span two Availability Zones (AZ) within the us-east-1 region. All the incoming requests are handled by an Application Load Balancer (ALB) that routes the requests to the EC2 instances under the ASG. As part of a test run, two instances (instance 1 and 2, belonging to AZ A) were manually terminated by the DevOps team causing the Availability Zones to become unbalanced. Later that day, another instance (belonging to AZ B) was detected as unhealthy by the Application Load Balancer's health check. Can you identify the correct outcomes for these events? (Select two)
- As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application
- Amazon EC2 Auto Scaling creates a new scaling activity for launching a new instance to replace the unhealthy instance. Later, EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it
- Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance
- As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling terminates old instances before launching new instances, so that rebalancing does not cause extra instances to be launched
- Amazon EC2 Auto Scaling creates a new scaling activity to terminate the unhealthy instance and launch the new instance simultaneously

- As the Availability Zones got unbalanced, Amazon EC2 Auto Scaling will compensate by rebalancing the Availability Zones. When rebalancing, Amazon EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application
- Amazon EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated instance

A social photo-sharing company uses Amazon S3 to store the images uploaded by the users. These images are kept encrypted in S3 by using AWS KMS and the company manages its own Customer Master Key (CMK) for encryption. A member of the DevOps team accidentally deleted the CMK a day ago, thereby rendering the users' photo data unrecoverable. You have been contacted by the company to consult them on possible solutions to this crisis. As a solutions architect, which of the following steps would you recommend to solve this issue?
- Contact AWS support to retrieve the CMK from their backup
- The CMK can be recovered by the AWS root account user
- The company should issue a notification on its web application informing the users about the loss of their data
- As the CMK was deleted a day ago, it must be in the 'pending deletion' status and hence you can just cancel the CMK deletion and recover the key

As the CMK was deleted a day ago, it must be in the 'pending deletion' status and hence you can just cancel the CMK deletion and recover the key
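
A two-call sketch of the recovery, with a hypothetical key ID; a key scheduled for deletion waits out a 7-30 day pending window, and after cancellation it comes back in the Disabled state, so it also needs to be re-enabled:

```python
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # hypothetical key ID

kms.cancel_key_deletion(KeyId=key_id)  # key moves from PendingDeletion to Disabled
kms.enable_key(KeyId=key_id)           # key is usable for encrypt/decrypt again
```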

A social media analytics company uses a fleet of EC2 servers to manage its analytics workflow. These EC2 servers operate under an Auto Scaling group. The engineers at the company want to be able to download log files whenever an instance terminates because of a scale-in event from an auto-scaling policy. Which of the following features can be used to enable this custom action?
- EC2 instance meta data
- Auto Scaling group lifecycle hook
- EC2 instance user data
- Auto Scaling group scheduled action

Auto Scaling group lifecycle hook

The engineering team at an online fashion retailer uses AWS Cloud to manage its technology infrastructure. The EC2 server fleet is behind an Application Load Balancer and the fleet strength is managed by an Auto Scaling group. Based on the historical data, the team is anticipating a huge traffic spike during the upcoming Thanksgiving sale. As an AWS solutions architect, what feature of the Auto Scaling group would you leverage so that the potential surge in traffic can be preemptively addressed?
- Auto Scaling group scheduled action
- Auto Scaling group target tracking scaling policy
- Auto Scaling group step scaling policy
- Auto Scaling group lifecycle hook

Auto Scaling group scheduled action

A file hosting startup offers cloud storage and file synchronization services to its end users. The file-hosting service uses Amazon S3 under the hood to power its storage offerings. Currently, all the customer files are uploaded directly under a single S3 bucket. The engineering team has started seeing scalability issues where customer file uploads have started failing during the peak access hours in the evening with more than 5000 requests per second. Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?
- Change the application architecture to create a new S3 bucket for each customer and then upload each customer's files directly under the respective buckets
- Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations
- Change the application architecture to create a new S3 bucket for each day's data and then upload the daily files directly under that day's bucket
- Change the application architecture to use EFS instead of Amazon S3 for storing the customers' uploaded files

Change the application architecture to create customer-specific custom prefixes within the single bucket and then upload the daily files into those prefixed locations

The payroll department at a company initiates several computationally intensive workloads on EC2 instances at a designated hour on the last day of every month. The payroll department has noticed a trend of severe performance lag during this hour. The engineering team has figured out a solution by using Auto Scaling Group for these EC2 instances and making sure that 10 EC2 instances are available during this peak usage hour. For normal operations only 2 EC2 instances are enough to cater to the workload. As a solutions architect, which of the following steps would you recommend to implement the solution?
- Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
- Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the designated hour
- Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
- Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the min count as well as the max count of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour

Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to happen before peak traffic kicks in at the designated hour
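
A minimal sketch of such a one-time scheduled action, with hypothetical names and a hypothetical timestamp; a Recurrence cron expression could make it repeat on a schedule instead:

```python
from datetime import datetime, timezone

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the month-end payroll run (hypothetical date/time).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="payroll-asg",       # hypothetical group name
    ScheduledActionName="month-end-scale-out",
    StartTime=datetime(2024, 1, 31, 17, 0, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=10,   # desired capacity drives the scale-out to 10
)
```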

A social gaming startup has its flagship application hosted on a fleet of EC2 servers running behind an Elastic Load Balancer. These servers are part of an Auto Scaling Group. 90% of the users start logging into the system at 6 pm every day and continue till midnight. The engineering team at the startup has observed that there is a significant performance lag during the initial hour from 6 pm to 7 pm. The application is able to function normally thereafter. As a solutions architect, which of the following steps would you recommend to address the performance bottleneck during that initial hour of traffic spike?
- Configure your Auto Scaling group by creating a scheduled action that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
- Configure your Auto Scaling group by creating a lifecycle hook that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
- Configure your Auto Scaling group by creating a target tracking policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm
- Configure your Auto Scaling group by creating a step scaling policy. This causes the scale-out to happen even before peak traffic kicks in at 6 pm

Configure your Auto Scaling group by creating a scheduled action that kicks-off before 6 pm. This causes the scale-out to happen even before peak traffic kicks in at 6 pm

An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account. As a solutions architect, which of the following steps would you recommend?
- Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment
- It is not possible to access cross-account resources
- Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
- Both IAM roles and IAM users can be used interchangeably for cross-account access

Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
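
A minimal sketch of the consuming side, with a hypothetical role ARN in the production account; AssumeRole returns temporary credentials scoped to the role's policies:

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/ProdResourceAccess",  # hypothetical
    RoleSessionName="dev-cross-account-session",
)["Credentials"]

# Use the temporary credentials to reach resources in the production account.
prod_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```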

A company has multiple EC2 instances operating in a private subnet which is part of a custom VPC. These instances are running an image processing application that needs to access images stored on S3. Once each image is processed, the status of the corresponding record needs to be marked as completed in a DynamoDB table. How would you go about providing private access to these AWS resources which are not part of this custom VPC?
- Create a separate gateway endpoint for S3 and DynamoDB each. Add two new target entries for these two gateway endpoints in the route table of the custom VPC
- Create a gateway endpoint for S3 and add it as a target in the route table of the custom VPC. Create an interface endpoint for DynamoDB and then connect to the DynamoDB service using the private IP address
- Create a gateway endpoint for DynamoDB and add it as a target in the route table of the custom VPC. Create an interface endpoint for S3 and then connect to the S3 service using the private IP address
- Create a separate interface endpoint for S3 and DynamoDB each. Then connect to these services using the private IP address

Create a separate gateway endpoint for S3 and DynamoDB each. Add two new target entries for these two gateway endpoints in the route table of the custom VPC
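
A rough sketch of creating both gateway endpoints, with hypothetical VPC and route table IDs; each endpoint adds a target entry to the route table so the traffic to S3 and DynamoDB stays on the AWS network:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One gateway endpoint per service, each wired into the VPC's route table.
for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC ID
        VpcEndpointType="Gateway",
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=["rtb-0123456789abcdef0"],       # hypothetical route table
    )
```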

A global media company is using Amazon CloudFront to deliver media-rich content to its audience across the world. The Content Delivery Network (CDN) offers a multi-tier cache by default, with regional edge caches that improve latency and lower the load on the origin servers when the object is not already cached at the edge. However, there are certain content types that bypass the regional edge cache and go directly to the origin. Which of the following content types skip the regional edge cache? (Select two)
- Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin
- E-commerce assets such as product photos
- User-generated videos
- Static content such as style sheets, JavaScript files
- Dynamic content, as determined at request time (cache-behavior configured to forward all headers)

- Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
- Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin

A Silicon Valley based startup focused on the advertising technology (ad tech) space uses DynamoDB as a data store for storing various kinds of marketing data, such as user profiles, user events, clicks, and visited links. Some of these use-cases require a high request rate (millions of requests per second), low predictable latency, and reliability. The startup now wants to add a caching layer to support high read volumes. As a solutions architect, which of the following AWS services would you recommend as a caching layer for this use-case? (Select two)
- ElastiCache
- RDS
- DynamoDB Accelerator (DAX)
- Elasticsearch
- Redshift

- DynamoDB Accelerator (DAX)
- ElastiCache

The CTO of an online home rental marketplace wants to re-engineer the caching layer of the current architecture for its relational database. He wants the caching layer to have replication and archival support built into the architecture. Which of the following AWS services offers the capabilities required for the re-engineering of the caching layer?
- ElastiCache for Redis
- ElastiCache for Memcached
- DynamoDB Accelerator (DAX)
- DocumentDB

ElastiCache for Redis

A new DevOps engineer has joined a large financial services company recently. As part of his onboarding, the IT department is conducting a review of the checklist for tasks related to AWS Identity and Access Management. As a solutions architect, which best practices would you recommend? (Select two)
- Enable MFA for privileged users
- Grant maximum privileges to avoid assigning privileges again
- Configure AWS CloudTrail to record all account activity
- Create a minimum number of accounts and share these account credentials among employees
- Use user credentials to provide access specific permissions for Amazon EC2 instances

- Enable MFA for privileged users
- Configure AWS CloudTrail to record all account activity

A leading carmaker would like to build a new car-as-a-sensor service by leveraging fully serverless components that are provisioned and managed automatically by AWS. The development team at the carmaker does not want an option that requires the capacity to be manually provisioned, as it does not want to respond manually to changing volumes of sensor data. Given these constraints, which of the following solutions is the BEST fit to develop this car-as-a-sensor service?
- Ingest the sensor data in an Amazon SQS standard queue, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
- Ingest the sensor data in a Kinesis Data Stream, which is polled by an application running on an EC2 instance and the data is written into an auto-scaled DynamoDB table for downstream processing
- Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
- Ingest the sensor data in a Kinesis Data Stream, which is polled by a Lambda function in batches, and the data is written into an auto-scaled DynamoDB table for downstream processing

Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda function in batches and the data is written into an auto-scaled DynamoDB table for downstream processing
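
A skeleton of the polling Lambda, with hypothetical table and field assumptions; with an SQS event source mapping, the Lambda service polls the queue on your behalf and hands each invocation a batch of messages in event["Records"]:

```python
import json

import boto3

# Hypothetical DynamoDB table for the processed sensor readings.
table = boto3.resource("dynamodb").Table("SensorReadings")

def handler(event, context):
    # Each invocation receives a batch of SQS messages.
    for record in event["Records"]:
        reading = json.loads(record["body"])  # one sensor message (assumed JSON)
        table.put_item(Item=reading)
```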

The engineering team at a leading online real estate marketplace uses Amazon RDS for MySQL because it simplifies many of the time-consuming administrative tasks typically associated with databases. The team uses Multi-Availability Zone (Multi-AZ) deployment to further automate its database replication and augment data durability and also deploys read replicas. A new DevOps engineer has joined the team and wants to understand the replication capabilities for Multi-AZ as well as Read-replicas. Which of the following correctly summarizes these capabilities for the given database?
- Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
- Multi-AZ follows asynchronous replication and spans one Availability Zone within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
- Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow synchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region
- Multi-AZ follows asynchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region

Multi-AZ follows synchronous replication and spans at least two Availability Zones within a single region. Read replicas follow asynchronous replication and can be within an Availability Zone, Cross-AZ, or Cross-Region

A leading video streaming provider is migrating to AWS Cloud infrastructure for delivering its content to users across the world. The company wants to make sure that the solution supports at least a million requests per second for its EC2 server farm. As a solutions architect, which type of Elastic Load Balancer would you recommend as part of the solution stack?
- Application Load Balancer
- Network Load Balancer
- Classic Load Balancer
- Infrastructure Load Balancer

Network Load Balancer

A video analytics organization has been acquired by a leading media company. The analytics organization has 10 independent applications with an on-premises data footprint of about 70TB for each application. The media company has its IT infrastructure on the AWS Cloud. The terms of the acquisition mandate that the on-premises data should be migrated into AWS Cloud and the two organizations establish connectivity so that collaborative development efforts can be pursued. The CTO of the media company has set a timeline of one month to carry out this transition. Which of the following are the MOST cost-effective options for completing the data transfer and then establishing connectivity? (Select two)
- Order 1 Snowmobile to complete the one-time data transfer
- Set up AWS Direct Connect to establish connectivity between the on-premises data center and AWS Cloud
- Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
- Order 70 Snowball Edge Storage Optimized devices to complete the one-time data transfer
- Set up Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud

- Order 10 Snowball Edge Storage Optimized devices to complete the one-time data transfer
- Set up Site-to-Site VPN to establish connectivity between the on-premises data center and AWS Cloud

The development team at an e-commerce startup has set up multiple microservices running on EC2 instances under an Application Load Balancer. The team wants to route traffic to multiple back-end services based on the URL path of the HTTP header. So it wants requests for https://www.example.com/orders to go to a specific microservice and requests for https://www.example.com/products to go to another microservice. Which of the following features of Application Load Balancers can be used for this use-case?
- Path-based Routing
- Query string parameter-based routing
- HTTP header-based routing
- Host-based Routing

Path-based Routing
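
A minimal sketch of a path-based rule via boto3, with hypothetical listener and target group ARNs; requests matching /orders* are forwarded to the orders microservice's target group:

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc/def",  # hypothetical
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/orders/ghi",  # hypothetical
    }],
)
```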

The engineering team at an in-home fitness company is evaluating multiple in-memory data stores with the ability to power its on-demand, live leaderboard. The company's leaderboard requires high availability, low latency, and real-time processing to deliver customizable user data for the community of users working out together virtually from the comfort of their home. As a solutions architect, which of the following solutions would you recommend? (Select two)
- Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-memory, high availability, low latency requirements
- Power the on-demand, live leaderboard using DynamoDB as it meets the in-memory, high availability, low latency requirements
- Power the on-demand, live leaderboard using RDS Aurora as it meets the in-memory, high availability, low latency requirements
- Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements
- Power the on-demand, live leaderboard using AWS Neptune as it meets the in-memory, high availability, low latency requirements

- Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-memory, high availability, low latency requirements
- Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator (DAX) as it meets the in-memory, high availability, low latency requirements

The DevOps team at an e-commerce company wants to perform some maintenance work on a specific EC2 instance that is part of an Auto Scaling group using a step scaling policy. The team is facing a maintenance challenge - every time the team deploys a maintenance patch, the instance health check status shows as out of service for a few minutes. This causes the Auto Scaling group to provision another replacement instance immediately. As a solutions architect, which are the MOST time/resource efficient steps that you would recommend so that the maintenance work can be completed at the earliest? (Select two)
- Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service
- Take a snapshot of the instance, create a new AMI and then launch a new instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the Auto Scaling Group by using the manual scaling policy. Terminate the earlier instance that had the maintenance issue
- Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto Scaling group and add all the instances again using the manual scaling policy
- Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can activate the ScheduledActions process type again
- Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can activate the ReplaceUnhealthy process type again

- Put the instance into the Standby state and then update the instance by applying the maintenance patch. Once the instance is ready, you can exit the Standby state and then return the instance to service
- Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance patch to the instance. Once the instance is ready, you can activate the ReplaceUnhealthy process type again
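
Rough sketches of both recommended approaches, with hypothetical group names and instance IDs:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Approach 1: move the instance to Standby, patch it, then bring it back.
autoscaling.enter_standby(
    InstanceIds=["i-0123456789abcdef0"],          # hypothetical instance
    AutoScalingGroupName="ecommerce-asg",          # hypothetical group
    ShouldDecrementDesiredCapacity=True,           # no replacement is launched
)
# ... apply the maintenance patch here ...
autoscaling.exit_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="ecommerce-asg",
)

# Approach 2: suspend only the ReplaceUnhealthy process while patching.
autoscaling.suspend_processes(
    AutoScalingGroupName="ecommerce-asg",
    ScalingProcesses=["ReplaceUnhealthy"],
)
# ... apply the maintenance patch here ...
autoscaling.resume_processes(
    AutoScalingGroupName="ecommerce-asg",
    ScalingProcesses=["ReplaceUnhealthy"],
)
```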

A leading video streaming service delivers billions of hours of content from Amazon S3 to customers around the world. Amazon S3 also serves as the data lake for its big data analytics solution. The data lake has a staging zone where intermediary query results are kept only for 24 hours. These results are also heavily referenced by other parts of the analytics pipeline. Which of the following is the MOST cost-effective strategy for storing this intermediary query data?
- Store the intermediary query results in S3 Intelligent-Tiering storage class
- Store the intermediary query results in S3 Standard-Infrequent Access storage class
- Store the intermediary query results in S3 Standard storage class
- Store the intermediary query results in S3 One Zone-Infrequent Access storage class

Store the intermediary query results in S3 Standard storage class

One of the largest healthcare solutions providers in the world uses Amazon S3 to store and protect a petabyte of critical medical imaging data for its AWS based Health Cloud service, which connects hundreds of thousands of imaging machines and other medical devices. The engineering team has observed that while some of the objects in the imaging data bucket are frequently accessed, others sit idle for a considerable span of time. As a solutions architect, what is your recommendation to build the MOST cost-effective solution?
- Store the objects in the imaging data bucket using the S3 Standard-IA storage class
- Store the objects in the imaging data bucket using the S3 Intelligent-Tiering storage class
- Create a data monitoring application on an EC2 instance in the same region as the imaging data bucket. The application is triggered daily via CloudWatch and it changes the storage class of infrequently accessed objects to S3 One Zone-IA and the frequently accessed objects are migrated to S3 Standard class
- Create a data monitoring application on an EC2 instance in the same region as the imaging data bucket. The application is triggered daily via CloudWatch and it changes the storage class of infrequently accessed objects to S3 Standard-IA and the frequently accessed objects are migrated to S3 Standard class

Store the objects in the imaging data bucket using the S3 Intelligent-Tiering storage class

The planetary research program at an ivy-league university is assisting NASA to find potential landing sites for exploration vehicles of unmanned missions to our neighboring planets. The program uses High Performance Computing (HPC) driven application architecture to identify these landing sites. Which of the following EC2 instance topologies should this application be deployed on?
- The EC2 instances should be deployed in a partition placement group so that distributed workloads can be handled effectively
- The EC2 instances should be deployed in a spread placement group so that there are no correlated failures
- The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput
- The EC2 instances should be deployed in an Auto Scaling group so that application meets high availability requirements

The EC2 instances should be deployed in a cluster placement group so that the underlying workload can benefit from low network latency and high network throughput

A junior scientist working with the Deep Space Research Laboratory at NASA is trying to upload a high-resolution image of a nebula into Amazon S3. The image size is approximately 3GB. The junior scientist is using S3 Transfer Acceleration (S3TA) for faster image upload. It turns out that S3TA did not result in an accelerated transfer. Given this scenario, which of the following is correct regarding the charges for this image transfer?
- The junior scientist does not need to pay any transfer charges for the image upload
- The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image upload
- The junior scientist only needs to pay S3 transfer charges for the image upload
- The junior scientist only needs to pay S3TA transfer charges for the image upload

The junior scientist does not need to pay any transfer charges for the image upload

A research group at an ivy-league university needs a fleet of EC2 instances operating in a fault-tolerant architecture for a specialized task that must deliver high random I/O performance. Each instance in the fleet would have access to a dataset that is replicated across the instances. Because of the resilient architecture, the specialized task would continue to be processed even if any of the instances goes down as the underlying application architecture would ensure the replacement instance has access to the required dataset. Which of the following options is the MOST cost-optimal and resource-efficient solution to build this fleet of EC2 instances?
- Use EBS based EC2 instances
- Use EC2 instances with EFS mount points
- Use EC2 instances with access to S3 based storage
- Use Instance Store based EC2 instances

Use Instance Store based EC2 instances

The sourcing team at the US headquarters of a global e-commerce company is preparing a spreadsheet of the new product catalog. The spreadsheet is saved on an EFS file system created in the us-east-1 region. The sourcing team counterparts from other AWS regions such as Asia Pacific and Europe also want to collaborate on this spreadsheet. As a solutions architect, what is your recommendation to enable this collaboration with the LEAST amount of operational overhead?
- The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS region
- The spreadsheet data will have to be moved into an RDS MySQL database which can then be accessed from any AWS region
- The spreadsheet will have to be copied into EFS file systems of other AWS regions as EFS is a regional service and it does not allow access from other AWS regions
- The spreadsheet on the EFS file system can be accessed from EC2 instances running in other AWS regions by using an inter-region VPC peering connection

The spreadsheet on the EFS file system can be accessed from EC2 instances running in other AWS regions by using an inter-region VPC peering connection

A media company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the media company wants to archive about 5PB of data in its on-premises data center to durable long term storage. As a solutions architect, what is your recommendation to migrate this data in the MOST cost-optimal way?
- Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier
- Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier
- Set up AWS Direct Connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier
- Set up a Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier
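Note that Snowball imports only into S3, never directly into Glacier, which is why the lifecycle step is required. Once the Snowball Edge data lands in S3, the Glacier transition is just a lifecycle rule. A minimal boto3 sketch, with the bucket name as a placeholder:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="archive-import",  # placeholder bucket receiving the Snowball data
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-glacier",
                    "Filter": {"Prefix": ""},  # apply to the whole bucket
                    "Status": "Enabled",
                    "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )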

A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket. Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two) Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the Direct Connect connections for faster file uploads into S3 Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket Create multiple Site-to-Site VPN connections between the AWS Cloud and branch offices in Europe and Asia. Use these VPN connections for faster file uploads into S3 Use AWS Global Accelerator for faster file uploads into the destination S3 bucket Use multipart uploads for faster file uploads into the destination S3 bucket

Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket Use multipart uploads for faster file uploads into the destination S3 bucket
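Multipart upload splits a large file into parts that upload in parallel; boto3's managed transfer layer does this automatically once a size threshold is crossed. A minimal sketch with illustrative tuning values (bucket and key are placeholders):

    import boto3
    from boto3.s3.transfer import TransferConfig

    MB = 1024 * 1024
    config = TransferConfig(
        multipart_threshold=64 * MB,  # switch to multipart above 64 MB
        multipart_chunksize=64 * MB,  # size of each uploaded part
        max_concurrency=8,            # parts uploaded in parallel
    )

    s3 = boto3.client("s3")
    s3.upload_file("raw_footage.mp4", "newsroom-ingest",
                   "us/raw_footage.mp4", Config=config)

Combining multipart uploads with Transfer Acceleration gives both parallelism and an optimized network path, at no cost when acceleration yields no benefit.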

A major bank is using SQS to migrate several core banking applications to the cloud to ensure high availability and cost efficiency while simplifying administrative complexity and overhead. The development team at the bank expects a peak rate of about 1000 messages per second to be processed via SQS. It is important that the messages are processed in order. Which of the following options can be used to implement this system? Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process the messages at the peak rate Use Amazon SQS FIFO queue to process the messages Use Amazon SQS standard queue to process the messages Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate

Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process the messages at the peak rate
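The arithmetic behind the answer: an SQS FIFO queue supports 300 API operations per second, so batching 4 messages per SendMessageBatch call yields 300 × 4 = 1,200 messages per second, which covers the 1,000 msg/s peak (a batch of 2 tops out at 600, and a standard queue does not guarantee ordering). A minimal boto3 sketch with a placeholder queue URL:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/core-banking.fifo"  # placeholder

    entries = [
        {
            "Id": str(i),
            "MessageBody": f"txn-{i}",
            "MessageGroupId": "account-42",        # messages in a group stay ordered
            "MessageDeduplicationId": f"txn-{i}",  # or enable content-based dedup
        }
        for i in range(4)  # 4 messages per operation, per the throughput math above
    ]
    sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)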

The data engineering team at an e-commerce company has set up a workflow to ingest the clickstream data into the raw zone of the S3 data lake. The team wants to run some SQL based data sanity checks on the raw zone of the data lake. What AWS services would you recommend for this use-case such that the solution is cost-effective and easy to maintain? Load the incremental raw zone data into an EMR based Spark Cluster on an hourly basis and use SparkSQL to run the SQL based sanity checks Use Athena to run SQL based analytics against S3 data Load the incremental raw zone data into Redshift on an hourly basis and run the SQL based sanity checks Load the incremental raw zone data into RDS on an hourly basis and run the SQL based sanity checks

Use Athena to run SQL based analytics against S3 data
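Athena is serverless and queries the data in place in S3, so a sanity check is just a SQL statement plus an output location; there is no cluster to load or maintain. A minimal boto3 sketch (database, table, and bucket names are placeholders):

    import boto3

    athena = boto3.client("athena")
    athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM clickstream_raw WHERE event_time IS NULL",
        QueryExecutionContext={"Database": "datalake_raw"},  # placeholder Glue database
        ResultConfiguration={"OutputLocation": "s3://query-results-bucket/sanity/"},
    )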

An IT company has built a custom data warehousing solution for a retail organization by using Amazon Redshift. As part of the cost optimizations, the company wants to move any historical data (any data older than a year) into S3, as the daily analytical reports consume data for just the last year. However, the analysts want to retain the ability to cross-reference this historical data along with the daily reports. The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case? Setup access to the historical data via Athena. The analytics team can run historical data queries on Athena and continue the daily reporting on Redshift. In case the reports need to be cross-referenced, the analytics team needs to export these in flat files and then do further analysis Use the Redshift COPY command to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift Use Glue ETL job to load the S3 based historical data into Redshift. Once the ad-hoc queries are run for the historic data, it can be removed from Redshift Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift

Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift
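Redshift Spectrum works by registering an external schema backed by the Glue Data Catalog; external tables then appear alongside local tables and can be joined in the same query, with no data loading. A minimal sketch using the Redshift Data API, with cluster, database, role, and schema names as placeholders:

    import boto3

    rsd = boto3.client("redshift-data")
    rsd.execute_statement(
        ClusterIdentifier="retail-dwh",  # placeholder cluster
        Database="analytics",
        DbUser="analyst",
        Sql="""
            CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
            FROM DATA CATALOG DATABASE 'historical'
            IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole';
        """,
    )

Once the schema exists, a single query can join a spectrum.* external table on S3 with the local daily tables directly.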

One of the biggest football leagues in Europe has granted the distribution rights for live streaming its matches in the US to a Silicon Valley-based streaming services company. As per the terms of distribution, the company must make sure that only users from the US are able to live stream the matches on their platform. Users from other countries in the world must be denied access to these live-streamed matches. Which of the following options would allow the company to enforce these streaming restrictions? (Select two) Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights Use georestriction to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront web distribution Use Route 53 based latency routing policy to restrict distribution of content to only the locations in which you have distribution rights Use Route 53 based weighted routing policy to restrict distribution of content to only the locations in which you have distribution rights Use Route 53 based failover routing policy to restrict distribution of content to only the locations in which you have distribution rights

Use Route 53 based geolocation routing policy to restrict distribution of content to only the locations in which you have distribution rights Use georestriction to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront web distribution
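On the CloudFront side, georestriction is part of the distribution configuration: fetch the config, set a whitelist of allowed countries, and push it back with the matching ETag. A minimal boto3 sketch with a placeholder distribution ID:

    import boto3

    cf = boto3.client("cloudfront")
    DIST_ID = "E1A2B3C4D5E6F7"  # placeholder distribution ID

    resp = cf.get_distribution_config(Id=DIST_ID)
    config = resp["DistributionConfig"]
    config["Restrictions"] = {
        "GeoRestriction": {
            "RestrictionType": "whitelist",  # only listed countries may view content
            "Quantity": 1,
            "Items": ["US"],
        }
    }
    cf.update_distribution(Id=DIST_ID, DistributionConfig=config,
                           IfMatch=resp["ETag"])  # ETag is required for updates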

A US-based healthcare startup is building an interactive diagnostic tool for COVID-19 related assessments. The users would be required to capture their personal health records via this tool. As this is sensitive health information, the backup of the user data must be kept encrypted in S3. The startup does not want to provide its own encryption keys but still wants to maintain an audit trail of when an encryption key was used and by whom. Which of the following is the BEST solution for this use-case? Use SSE-KMS to encrypt the user data on S3 Use SSE-S3 to encrypt the user data on S3 Use SSE-C to encrypt the user data on S3 Use client-side encryption with client provided keys and then upload the encrypted user data to S3

Use SSE-KMS to encrypt the user data on S3
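With SSE-KMS, AWS manages the key material while every use of the key is recorded in CloudTrail, which provides exactly the audit trail the startup needs (SSE-S3 offers no per-key audit trail, and SSE-C or client-side encryption would require the startup to supply its own keys). A minimal boto3 sketch, with the bucket name and key alias as placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="health-records-backup",      # placeholder bucket
        Key="users/12345/record.json",
        Body=b'{"patient_id": "p-001"}',
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/health-records",  # placeholder KMS key alias
    )
    # Each encrypt/decrypt with the KMS key is logged in CloudTrail
    # (who used the key, when, and for which request).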

Which of the following features of an Amazon S3 bucket can only be suspended once they have been enabled? Server Access Logging Versioning Static Website Hosting Requester Pays

Versioning
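Versioning illustrates this suspend-only behavior at the API level: the status can be set to Enabled or Suspended, but once enabled a bucket can never return to the unversioned state, and existing object versions are retained after suspension. A minimal boto3 sketch with a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")
    # Once enabled, versioning can only be suspended, never fully disabled.
    s3.put_bucket_versioning(
        Bucket="my-bucket",  # placeholder
        VersioningConfiguration={"Status": "Suspended"},
    )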
