AWS Solutions Architect - Test 5


A multi-national retail company has multiple business divisions, with each division having its own AWS account. The engineering team at the company would like to debug and trace data across these AWS accounts and visualize it in a centralized account. As a Solutions Architect, which of the following solutions would you suggest for the given use case?
- CloudWatch Events
- CloudTrail
- X-Ray
- VPC Flow Logs

X-Ray

As an AWS Certified Solutions Architect Associate, you have been hired to work with the engineering team at a company to create a REST API using the serverless architecture. Which of the following solutions will you choose to move the company to the serverless architecture paradigm?
- API Gateway exposing Lambda Functionality
- Fargate with Lambda at the front
- Public-facing Application Load Balancer with ECS on Amazon EC2
- Route 53 with EC2 as backend

API Gateway exposing Lambda Functionality
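
The selected pattern can be sketched as a minimal Lambda handler sitting behind an API Gateway Lambda proxy integration; the route, query parameter, and response shape here are illustrative, not from the question.

```python
import json

def lambda_handler(event, context):
    """Minimal REST handler for an API Gateway Lambda proxy integration.

    API Gateway delivers the HTTP request as `event` and maps the returned
    dict (statusCode / headers / body) back to an HTTP response.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

With this shape there is no server to manage: API Gateway handles routing, throttling, and TLS termination, while Lambda scales per request.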

A multi-national company uses different AWS accounts for their distinct business divisions. The communication between these divisions is heavily based on Amazon Simple Notification Service (SNS). For a particular use case, the AWS account for the Human Resources division needs to have access to an Amazon SNS topic that falls under the AWS account of the Finance division. Which of the following represents the best solution for the given requirement?
- Add the appropriate policy to all the IAM users of the Human Resources account that need access to the topic
- Add a policy to the topic under the Finance account, where the Principal is defined as the Human Resources account
- Create a group and add all IAM users under it (users from both the accounts). Add the appropriate policy to allow access to the topic
- Add a policy to the topic under the Finance account, where the Resource is defined as the Human Resources account

Add a policy to the topic under the Finance account, where the Principal is defined as the Human Resources account
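
A sketch of the topic policy the answer describes, built as a plain dict; the account ID and topic ARN are hypothetical placeholders, not values from the question.

```python
import json

# Hypothetical ARN and account ID for illustration only.
FINANCE_TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:finance-updates"
HR_ACCOUNT_ID = "222222222222"

def build_cross_account_topic_policy(topic_arn, trusted_account_id):
    """Build an SNS topic policy naming another AWS account as Principal,
    allowing that account to subscribe and publish to the topic."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountAccess",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
                "Action": ["sns:Subscribe", "sns:Publish"],
                "Resource": topic_arn,
            }
        ],
    }

policy = build_cross_account_topic_policy(FINANCE_TOPIC_ARN, HR_ACCOUNT_ID)
policy_json = json.dumps(policy)  # would be attached via sns.set_topic_attributes
```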

A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. To augment its custom technology stack, the company is looking for AWS services that can be used for buffering or throttling to handle traffic variations. Which of the following services can be used to support this requirement?
- Amazon SQS, Amazon SNS and AWS Lambda
- Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis
- Elastic Load Balancer, Amazon SQS, AWS Lambda
- Amazon API Gateway, Amazon SQS and Amazon Kinesis

Amazon API Gateway, Amazon SQS and Amazon Kinesis

A retail company manages its IT infrastructure on AWS Cloud. A fleet of Amazon EC2 instances sits behind an Auto Scaling Group (ASG) that helps manage the fleet size to meet the demand. After analyzing the application, the team has configured two metrics that control the scale-in and scale-out policy of the ASG. One is a target tracking policy that uses a custom metric to add and remove two new instances, based on the number of SQS messages in the queue. The other is a step scaling policy that uses the Amazon CloudWatch CPUUtilization metric to launch one new instance when the existing instance exceeds 90 percent utilization for a specified length of time. While testing, the scale-out criteria for both policies were met at the same time. How many new instances will be launched because of these multiple scaling policies?
- Amazon EC2 Auto Scaling chooses the minimum capacity from each of the policies that meet the criteria. So, one new instance will be launched by the ASG
- Amazon EC2 Auto Scaling chooses the latest policy after running the algorithm defined during ASG configuration. Based on this output, either of the policies will be chosen for scaling out
- Amazon EC2 Auto Scaling chooses the sum of the capacity of all the policies that meet the criteria. So, three new instances will be launched by the ASG
- Amazon EC2 Auto Scaling chooses the policy that provides the largest capacity, so the policy with the custom metric is triggered, and two new instances will be launched by the ASG

Amazon EC2 Auto Scaling chooses the policy that provides the largest capacity, so the policy with the custom metric is triggered, and two new instances will be launched by the ASG
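
The selection rule can be illustrated with a tiny sketch: when several scale-out policies are satisfied simultaneously, Auto Scaling applies the one that produces the largest resulting capacity.

```python
def resolve_scale_out(current_capacity, policy_adjustments):
    """When several scale-out policies fire at once, EC2 Auto Scaling
    applies the policy that yields the largest resulting capacity."""
    proposals = [current_capacity + adj for adj in policy_adjustments]
    return max(proposals)

# Custom SQS-metric policy adds 2, CPUUtilization step policy adds 1:
new_capacity = resolve_scale_out(4, [2, 1])  # the +2 proposal wins
```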

A media company is evaluating the possibility of moving its IT infrastructure to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for processing certain files, which are mostly large videos. The company also needs close to 450 TB of very durable storage for storing media content, and almost double that, i.e. 900 TB, for archival of legacy data. As a Solutions Architect, which set of services will you recommend to meet these requirements?
- Amazon S3 standard storage for maximum performance, Amazon S3 Intelligent-Tiering for intelligent, durable storage, and Amazon S3 Glacier Deep Archive for archival storage
- Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
- Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
- Amazon EC2 instance store for maximum performance, AWS Storage Gateway for on-premises durable data access and Amazon S3 Glacier Deep Archive for archival storage

Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

A telecommunications company is looking at moving its real-time traffic analytics infrastructure to AWS Cloud. The company owns thousands of hardware devices like switches, routers, cables, and so on. The status of all these devices has to be fed into an analytics system for real-time processing. If a malfunction is detected, communication has to be initiated to the responsible team, to fix the hardware. Also, another application needs to read this same incoming data in parallel and analyze all the connecting lines that may go down because of the hardware failure. As a Solutions Architect, can you suggest the right solution to be used for this requirement?
- Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS)
- Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES)
- Amazon Kinesis Data Streams
- Amazon Simple Notification Service (SNS)

Amazon Kinesis Data Streams

A solutions architect is tasked with a requirement to design a low-latency solution for a static, single-page application, accessed by users through a custom domain name. The solution must be serverless, provide in-transit data encryption and needs to be cost-effective. Which AWS services can be combined to build a solution for the company's requirement?
- Host the application on an Amazon EC2 instance with an instance store volume for high performance of the application to provide low latency access to users
- Amazon S3 can be used to host the static website, while Amazon CloudFront can be used to distribute the content for low latency access
- Host the application on AWS Fargate and front it with an Elastic Load Balancer for improved performance
- Configure Amazon S3 to store the static data and use AWS Fargate for hosting the application on serverless architecture

Amazon S3 can be used to host the static website, while Amazon CloudFront can be used to distribute the content for low latency access

You are a cloud architect in Silicon Valley. Many companies in this area have mobile apps that capture and send data to Amazon Kinesis Data Streams. They have been getting a ProvisionedThroughputExceededException. You have been contacted to help, and upon careful analysis, you see that messages are being sent one by one at a high rate. Which of the following options will help with the exception while keeping costs at a minimum?
- Use Exponential Backoff
- Increase the number of shards
- Decrease the Stream retention duration
- Batch messages

Batch messages
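
A minimal sketch of client-side batching: instead of one PutRecord call per message, records are grouped into batches of at most 500, the documented PutRecords limit. The helper below is pure Python; the actual put_records call is only indicated in a comment.

```python
MAX_BATCH_SIZE = 500  # PutRecords accepts up to 500 records per call

def batch_records(records, batch_size=MAX_BATCH_SIZE):
    """Split an iterable of records into PutRecords-sized batches."""
    records = list(records)
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

batches = batch_records(range(1200))
# Each batch would then be sent with a single call:
# kinesis.put_records(StreamName="...", Records=batch)
```

Batching cuts the per-message request overhead, which both reduces cost and lowers the request rate that was triggering the throttling exception.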

A financial services company is looking at moving their on-premises infrastructure to AWS Cloud and leveraging the serverless architecture. As part of this process, their engineering team has been studying various best practices for developing a serverless solution. They intend to use AWS Lambda extensively and want to focus on the key points to consider when using Lambda as a backbone for this architecture. As a Solutions Architect, which of the following options would you recommend for this requirement? (Select three)
- The bigger your deployment package, the slower your Lambda function will cold-start. Hence, AWS suggests packaging dependencies as a separate package from the actual Lambda package
- Serverless architecture and containers complement each other and you should leverage Docker containers within the Lambda functions
- By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources
- Lambda allocates compute power in proportion to the memory you allocate to your function. AWS thus recommends over-provisioning your function timeout settings for the proper performance of Lambda functions
- Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold
- If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code

- By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources
- Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceed the expected threshold
- If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code

A mobile chat application uses DynamoDB as its database service to provide low latency chat updates. A new developer has joined the team and is reviewing the configuration settings for DynamoDB which have been tweaked for certain technical requirements. CloudTrail service has been enabled on all the resources used for the project. Yet, DynamoDB encryption details are nowhere to be found. Which of the following options can explain the root cause for the given issue?
- By default, all DynamoDB tables are encrypted under AWS managed CMKs, which do not write to CloudTrail logs
- By default, all DynamoDB tables are encrypted under Customer managed CMKs, which do not write to CloudTrail logs
- By default, all DynamoDB tables are encrypted using Data keys, which do not write to CloudTrail logs
- By default, all DynamoDB tables are encrypted under an AWS owned customer master key (CMK), which do not write to CloudTrail logs

By default, all DynamoDB tables are encrypted under an AWS owned customer master key (CMK), which do not write to CloudTrail logs

An IT company is working on a project that uses two separate AWS accounts for accessing different AWS services. The team has just configured an Amazon S3 bucket in the first AWS account for writing data from the Amazon Redshift cluster present in the second AWS account. The developer has noticed that the files created in the S3 bucket using the UNLOAD command from the Redshift cluster are not accessible to the S3 bucket owner. What could be the reason for this denial of permission?
- By default, an S3 object is owned by the AWS account that uploaded it. So the S3 bucket owner will not implicitly have access to the objects written by Redshift cluster
- When objects are uploaded to S3 bucket from a different AWS account, the S3 bucket owner will get implicit permissions to access these objects. It is an upload error that can be fixed by providing manual access from AWS console
- The owner of an S3 bucket has implicit access to all objects in his bucket. Permissions are set on objects after they are completely copied to the target location. Since the owner is unable to access the uploaded files, the write operation may be still in progress
- When two different AWS accounts are accessing an S3 bucket, both the accounts need to share the bucket policies, explicitly defining the actions possible for each account. An erroneous policy can lead to such permission failures

By default, an S3 object is owned by the AWS account that uploaded it. So the S3 bucket owner will not implicitly have access to the objects written by Redshift cluster
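
One common remedy (before S3 Object Ownership could enforce bucket-owner ownership) was for the writing account to grant the canned ACL bucket-owner-full-control on upload. A sketch, with hypothetical bucket and key names:

```python
def cross_account_put_params(bucket, key, body):
    """Build put_object parameters that grant the bucket owner full
    control over an object uploaded from another AWS account."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ACL": "bucket-owner-full-control",  # canned ACL recognised by S3
    }

params = cross_account_put_params("account-a-bucket", "unload/part-0000", b"...")
# The dict would be passed as s3.put_object(**params) by the writing account.
```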

A financial services firm runs its technology operations on a fleet of Amazon EC2 instances. The firm needs a certain software to be available on the instances to support their daily workflows. The engineering team has been told to use the user data feature of EC2 instances to ensure new instances are ready for operations. Which of the following are true about the EC2 user data configuration? (Select two)
- When an instance is running, you can update user data by using root user credentials
- By default, scripts entered as user data do not have root user privileges for executing
- By default, scripts entered as user data are executed with root user privileges
- By default, user data runs only during the boot cycle when you first launch an instance
- By default, user data is executed every time an EC2 instance is re-started

- By default, scripts entered as user data are executed with root user privileges
- By default, user data runs only during the boot cycle when you first launch an instance
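
The user data mechanism can be sketched as composing a shell script that the instance runs once, as root, on first boot; the package names below are illustrative.

```python
def build_user_data(packages):
    """Return a shell script suitable for the EC2 user-data field.

    By default the script runs exactly once, during the first boot cycle,
    with root privileges (so no sudo is needed inside it).
    """
    lines = ["#!/bin/bash", "set -e", "yum update -y"]
    lines += [f"yum install -y {pkg}" for pkg in packages]
    return "\n".join(lines) + "\n"

user_data = build_user_data(["httpd", "amazon-cloudwatch-agent"])
# Passed on launch, e.g. ec2.run_instances(UserData=user_data, ...)
```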

A media company runs a photo-sharing web application that is currently accessed across three different countries. The application is deployed on several Amazon EC2 instances running behind an Application Load Balancer. With new government regulations, the company has been asked to block access from two countries and allow access only from the home country of the company. Which configuration should be used to meet this changed requirement?
- Configure the security group on the Application Load Balancer
- Configure the security group for the EC2 instances
- Configure AWS WAF on the Application Load Balancer in a VPC
- Use Geo Restriction feature of Amazon CloudFront in a VPC

Configure AWS WAF on the Application Load Balancer in a VPC

An IT training company hosted its website on Amazon S3 a couple of years ago. Due to COVID-19 related travel restrictions, the training website has suddenly gained traction. With an almost 300% increase in the requests served per day, the company's AWS costs have sky-rocketed for just the S3 outbound data costs. As a Solutions Architect, can you suggest an alternate method to reduce costs while keeping the latency low?
- To reduce S3 cost, the data can be saved on an EBS volume connected to an EC2 instance that can host the application
- Configure Amazon CloudFront to distribute the data hosted on Amazon S3, cost-effectively
- Use Amazon Elastic File System (Amazon EFS), as it provides a shared, scalable, fully managed elastic NFS file system for storing AWS Cloud or on-premises data
- Configure S3 Batch Operations to read data in bulk at one go, to reduce the number of calls made to S3 buckets

Configure Amazon CloudFront to distribute the data hosted on Amazon S3, cost-effectively

A company has its application servers in the public subnet that connect to the RDS instances in the private subnet. For regular maintenance, the RDS instances need patch fixes that need to be downloaded from the internet. Considering that the company uses only IPv4 addressing and is looking for a fully managed service with little or no overhead, which of the following would you suggest as an optimal solution?
- Configure a NAT Gateway in the public subnet of the VPC
- Configure an Egress-only internet gateway for the resources in the private subnet of the VPC
- Configure a NAT instance in the public subnet of the VPC
- Configure the Internet Gateway of the VPC to be accessible to the private subnet resources, by changing the route tables

Configure a NAT Gateway in the public subnet of the VPC

A media agency stores its re-creatable artifacts on Amazon S3 buckets. The artifacts are accessed by a large volume of users for the first few days and the frequency of access falls drastically after a week. Although the artifacts would be accessed only occasionally after the first week, they must continue to be immediately accessible when required. The cost of maintaining all the artifacts on S3 storage is turning out to be very expensive and the agency is looking at reducing costs as much as possible. As a Solutions Architect, can you suggest a way to lower the storage costs while fulfilling the business requirements?
- Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
- Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
- Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
- Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days

Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
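
The chosen rule can be expressed as an S3 lifecycle configuration; the rule ID and prefix below are illustrative. Note that 30 days is also the minimum age S3 requires before transitioning Standard objects to One Zone-IA, which is why the 7-day options are invalid.

```python
# Lifecycle configuration transitioning re-creatable, infrequently
# accessed objects to S3 One Zone-IA after 30 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "artifacts-to-onezone-ia",     # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "artifacts/"},  # illustrative prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "ONEZONE_IA"},
            ],
        }
    ]
}
# Applied with s3.put_bucket_lifecycle_configuration(
#     Bucket="...", LifecycleConfiguration=lifecycle_configuration)
```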

A startup has recently moved their monolithic web application to AWS Cloud. The application runs on a single EC2 instance. Currently, the user base is small and the startup does not want to spend effort on elaborate disaster recovery strategies or Auto Scaling Group. The application can afford a maximum downtime of 10 minutes. In case of a failure, which of these options would you suggest as a cost-effective and automatic recovery procedure for the instance?
- Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance, in case the instance fails. The instance, however, should only be configured with an EBS volume
- Configure Amazon CloudWatch events that can trigger the recovery of the EC2 instance, in case the instance or the application fails
- Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance, in case the instance fails. The instance can be configured with EBS volume or with instance store volumes
- Configure AWS Trusted Advisor to monitor the health check of EC2 instance and provide a remedial action in case an unhealthy flag is detected

Configure an Amazon CloudWatch alarm that triggers the recovery of the EC2 instance, in case the instance fails. The instance, however, should only be configured with an EBS volume
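
A sketch of the alarm behind this answer: it watches StatusCheckFailed_System and fires the documented EC2 recover action ARN (arn:aws:automate:<region>:ec2:recover). The instance ID is illustrative, and recovery only works for EBS-backed instances.

```python
def build_recovery_alarm(instance_id, region="us-east-1"):
    """Alarm on StatusCheckFailed_System and trigger EC2 instance recovery."""
    return {
        "AlarmName": f"recover-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # Built-in recover action: migrates the instance to healthy
        # hardware, keeping instance ID, IPs, and EBS volumes.
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
    }

alarm = build_recovery_alarm("i-0123456789abcdef0")
# Created with cloudwatch.put_metric_alarm(**alarm)
```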

A financial services application deployed on the on-premises infrastructure has recently been moved to Amazon EC2 instances. The application data contains critical personal information about all its customers and needs to be protected from all types of cyberattacks. The company is considering using the AWS Web Application Firewall (WAF) to handle this requirement. Can you identify the correct solution leveraging the capabilities of WAF?
- Configure an Application Load Balancer (ALB) to balance the workload for all the EC2 instances. Configure CloudFront to distribute from an ALB since WAF cannot be directly configured on ALBs. This configuration not only provides necessary safety but is scalable too
- AWS WAF can be directly configured on Amazon EC2 instances for ensuring the security of the underlying application data
- AWS WAF can be directly configured only on an Application Load Balancer (ALB) or an Amazon API Gateway. One of these two services can then be configured with Amazon EC2 to build the needed secure architecture
- Create a CloudFront distribution for the application on Amazon EC2 instances. Deploy AWS WAF on Amazon CloudFront to provide the necessary safety measures

Create a CloudFront distribution for the application on Amazon EC2 instances. Deploy AWS WAF on Amazon CloudFront to provide the necessary safety measures

The application maintenance team at a company has noticed that the production application is very slow when the business reports are run on the RDS database. These reports fetch a large amount of data and have complex queries with multiple joins, spanning across multiple business-critical core tables. CPU, memory, and storage metrics are around 50% of the total capacity. Can you recommend an improved and cost-effective way of generating the business reports while keeping the production application unaffected?
- Configure the RDS instance to be Multi-AZ DB instance, and connect the report generation tool to the DB instance in a different AZ
- Create a read replica and connect the report generation tool/application to it
- Increase the size of RDS instance
- Migrate from General Purpose SSD to magnetic storage to enhance IOPS

Create a read replica and connect the report generation tool/application to it

A retail company has built their AWS solution using serverless architecture by leveraging AWS Lambda and Amazon S3. The development team has a requirement to implement AWS Lambda across AWS accounts. The requirement entails using a Lambda function with an IAM role from an AWS account A to access an Amazon S3 bucket in AWS account B. As a Solutions Architect, which of the following will you recommend as the BEST solution to meet this requirement?
- The S3 bucket owner can delegate permissions to users in the other AWS account
- Create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket. Set the IAM role as the Lambda function's execution role. Verify that the bucket policy grants access to the Lambda function's execution role
- AWS Lambda cannot access resources across AWS accounts. Use Identity federation to work around this limitation of Lambda
- Create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket. Set the IAM role as the Lambda function's execution role

Create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket. Set the IAM role as the Lambda function's execution role. Verify that the bucket policy grants access to the Lambda function's execution role
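
The bucket-policy half of the answer can be sketched as a plain dict in account B that names the account A execution role as Principal; all ARNs here are hypothetical placeholders.

```python
import json

# Hypothetical ARNs for illustration only.
LAMBDA_ROLE_ARN = "arn:aws:iam::111111111111:role/lambda-s3-reader"
BUCKET_ARN = "arn:aws:s3:::account-b-bucket"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLambdaExecutionRole",
            "Effect": "Allow",
            # The execution role from account A is the Principal,
            # so the bucket policy in account B grants it access.
            "Principal": {"AWS": LAMBDA_ROLE_ARN},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"{BUCKET_ARN}/*",
        }
    ],
}
policy_json = json.dumps(bucket_policy)  # attached via s3.put_bucket_policy
```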

A large retail company uses separate VPCs with different configurations for each of their lines of business. As part of the specific security requirement, an administrator had created a private hosted zone and associated it with the required virtual private cloud (VPC). However, the domain names remain unresolved thereby resulting in errors. As a Solutions Architect, can you identify the Amazon VPC options to configure, to get the private hosted zone to work?
- You might have private and public hosted zones that have overlapping namespaces
- DNS hostnames and DNS resolution are required settings for private hosted zones
- Name server (NS) record and Start Of Authority (SOA) records are created with wrong configurations
- This error may happen when there is a private hosted zone and a Resolver rule that routes traffic to your network for the same domain name, resulting in ambiguity over the route to be taken

DNS hostnames and DNS resolution are required settings for private hosted zones

A health-care solutions company wants to run their applications on single-tenant hardware to meet regulatory guidelines. Which of the following is the MOST cost-effective way of isolating their Amazon EC2 instances to a single tenant?
- Spot Instances
- Dedicated Hosts
- On-Demand Instances
- Dedicated Instances

Dedicated Instances

A financial services company uses Amazon GuardDuty for analyzing their AWS account metadata to adhere to the compliance requirements mandated by the regulatory authorities. However, the company has now decided to stop using the GuardDuty service. All the existing findings have to be deleted and cannot persist anywhere on AWS Cloud. Which of the following techniques will help the company meet this requirement?
- Disable the service in the general settings
- Suspend the service in the general settings
- De-register the service under services tab
- Raise a service request with Amazon to completely delete the data from all their backups

Disable the service in the general settings

You have just terminated an instance in the us-west-1a availability zone. The attached EBS volume is now available for attachment to other instances. An intern launches a new Linux EC2 instance in the us-west-1b availability zone and is attempting to attach the EBS volume. The intern informs you that it is not possible and needs your help. Which of the following explanations would you provide to them?
- EBS volumes are region locked
- The required IAM permissions are missing
- The EBS volume is encrypted
- EBS volumes are AZ locked

EBS volumes are AZ locked

The engineering team at a retail company has developed a REST API which is deployed in an Auto Scaling group behind an Application Load Balancer. The API stores the data payload in DynamoDB and the static content is served through S3. On analyzing the usage trends, it is found that 90% of the read requests are shared across all users. As a Solutions Architect, which of the following is the MOST efficient solution to improve the application performance?
- Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3
- Enable DAX for DynamoDB and ElastiCache Memcached for S3
- Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for S3
- Enable ElastiCache Redis for DynamoDB and CloudFront for S3

Enable DynamoDB Accelerator (DAX) for DynamoDB and CloudFront for S3

A startup has created a cost-effective backup solution in another AWS Region. The application is running in warm standby mode and has an Application Load Balancer (ALB) to support it from the front. The current failover process is manual and requires updating the DNS alias record to point to the secondary ALB in another Region in case of failure of the primary ALB. As a Solutions Architect, what will you recommend to automate the failover process?
- Enable an ALB health check
- Enable an Amazon Route 53 health check
- Enable an EC2 instance health check
- Configure Trusted Advisor to check on unhealthy instances

Enable an Amazon Route 53 health check
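
The automated failover can be sketched as a pair of Route 53 failover alias records, with a health check attached to the primary; zone IDs, DNS names, and the health check ID below are illustrative.

```python
def failover_record(name, role, alb_dns, alb_zone_id, health_check_id=None):
    """Build a Route 53 failover alias record set pointing at an ALB.

    Route 53 serves the PRIMARY record while its health check passes and
    automatically fails over to the SECONDARY record when it does not.
    """
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{role.lower()}-alb",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

primary = failover_record("app.example.com", "PRIMARY",
                          "primary-alb.us-east-1.elb.amazonaws.com",
                          "Z00000000000000000001", health_check_id="hc-primary")
secondary = failover_record("app.example.com", "SECONDARY",
                            "standby-alb.us-west-2.elb.amazonaws.com",
                            "Z00000000000000000002")
# Both records would be submitted via route53.change_resource_record_sets
```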

A retail company has its business-critical data stored on the AWS Cloud in different forms such as files on a filesystem, object storage in S3, block storage and in relational databases. The company has hired you as a Solutions Architect to make this data available in alternate regions as part of the disaster recovery strategy. Which of the following would you suggest as the key points to consider for each of the storage technologies that the company is currently using? (Select three)
- For application data, you must initiate and ensure the EBS Snapshots of your data volumes are configured for cross-region copy
- Amazon Elastic File System (Amazon EFS), used to store files, is scoped to individual AZ. You need to employ EFS File Sync to quickly replicate files across multiple AZs
- For OS images, when using Amazon EC2 and Amazon EBS, the appropriate Amazon Machine Images (AMIs) are automatically copied and available in the alternate Region as specified by the user
- For static application data stored in Amazon S3, you need to enable Cross-Region Replication (CRR)
- Amazon EBS volumes are regionally scoped. To ensure high availability, you should replicate your EBS Snapshots to another region
- For data stored in databases, Amazon RDS Read Replicas provide enhanced performance and durability for database instances

- For application data, you must initiate and ensure the EBS Snapshots of your data volumes are configured for cross-region copy
- For static application data stored in Amazon S3, you need to enable Cross-Region Replication (CRR)
- For data stored in databases, Amazon RDS Read Replicas provide enhanced performance and durability for database instances

An IT company is working on multiple client projects and some of these projects span across multiple teams that use different AWS accounts. For one such project, two of the teams have a requirement to set up Amazon Simple Email Service (Amazon SES) event notification in one AWS account that needs to send data to an Amazon Kinesis data stream in another AWS account. As a Solutions Architect, which of the following would you recommend as the MOST optimal solution to address this requirement?
- From the SES account, configure SES to push data to the Lambda function. Lambda permissions need to be set up for both the accounts (to read from one and write to another). Set up Lambda to send data to Kinesis Data Streams, present in the second AWS account (Here, SES, SNS, and Lambda are present in SES account and Kinesis data streams is present in the second account)
- From the SES account, configure SES to push data to an Amazon SNS topic. Subscribe a Lambda function to this SNS topic. Lambda permissions need to be set up for both the accounts (to read from one and write to another). Set up Lambda to send data to Kinesis Data Streams, present in the second AWS account (Here, SES, SNS, and Lambda are present in SES account and Kinesis data streams is present in the second account)
- AWS Lambda cannot write across accounts. So, Amazon SNS has to be set up in both the accounts that need to communicate. SNS topic from account one will write data to SNS topic of the second account. In the second account, use Lambda as a subscriber to SNS. When Lambda fires, it will update Amazon Kinesis data stream with the information payload
- Configure SES for multiple AWS accounts. This way, SES can directly write to Amazon Kinesis Data streams in the other account

From the SES account, configure SES to push data to an Amazon SNS topic. Subscribe a Lambda function to this SNS topic. Lambda permissions need to be set up for both the accounts (to read from one and write to another). Set up Lambda to send data to Kinesis Data Streams, present in the second AWS account (Here, SES, SNS, and Lambda are present in SES account and Kinesis data streams is present in the second account)
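
The Lambda step of the chosen design can be sketched as a handler that reshapes SNS-delivered SES messages into Kinesis PutRecords entries; the cross-account put_records call is left as a comment so the sketch stays self-contained, and all names are illustrative.

```python
def records_from_sns_event(event):
    """Map each SNS-delivered SES message to a Kinesis PutRecords entry."""
    records = []
    for entry in event.get("Records", []):
        message = entry["Sns"]["Message"]  # SES event payload as a string
        records.append({
            "Data": message.encode("utf-8"),
            "PartitionKey": entry["Sns"].get("MessageId", "default"),
        })
    return records

def lambda_handler(event, context):
    records = records_from_sns_event(event)
    # Cross-account write, with the execution role granted access to the
    # stream in the second account:
    # kinesis.put_records(StreamName="ses-events", Records=records)
    return {"forwarded": len(records)}
```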

A profitable small business has been running their IT systems using the on-premises infrastructure. With growing demand, the business is finding it difficult to manage the IT infrastructure, which is not their core competency. The business plans to move to AWS Cloud, in light of their plans of extending their operations to other countries. As a Solutions Architect, can you suggest a cost-effective, serverless solution for their flagship application that has both static and dynamic content as part of its core data model?
- Host the static content on Amazon S3 and use Lambda with DynamoDB for the serverless web application that handles dynamic content. Amazon CloudFront will sit in front of Lambda for distribution across diverse regions
- Host both the static and dynamic content of the web application on Amazon S3 and use Amazon CloudFront for distribution across diverse regions/countries
- Host the static content on Amazon S3 and use Amazon EC2 with RDS for generating the dynamic content. Amazon CloudFront can be configured in front of EC2 instance, to make global distribution easy
- Host both the static and dynamic content of the web application on Amazon EC2 with RDS as the database. Amazon CloudFront should be configured to distribute the content across geographically disperse regions

Host the static content on Amazon S3 and use Lambda with DynamoDB for the serverless web application that handles dynamic content. Amazon CloudFront will sit in front of Lambda for distribution across diverse regions

An automobile company is running its flagship application on a fleet of EC2 instances behind an Auto Scaling Group (ASG). The ASG was configured more than a year ago. A young developer has just joined the development team and wants to understand the best practices to manage and configure an ASG. As a Solutions Architect, which of these would you identify as the key characteristics that the developer needs to remember regarding ASG configurations? (Select three)
- If you configure the ASG to a certain base capacity, you cannot use a combined purchasing model to fulfill the instance requirements. You will need to choose either On-Demand instances or Reserved Instances only
- If you have an EC2 Auto Scaling group (ASG) with running instances and you choose to delete the ASG, the instances will be terminated and the ASG will be deleted
- Amazon EC2 Auto Scaling can automatically add a volume when the existing one is approaching capacity. This, however, is a configuration parameter and needs to be set explicitly
- You can only specify one launch configuration for an EC2 Auto Scaling group at a time. But, you can modify a launch configuration after you've created it
- EC2 Auto Scaling groups can span Availability Zones, but not AWS regions
- Data is not automatically copied from existing instances to a new dynamically created instance

- If you have an EC2 Auto Scaling group (ASG) with running instances and you choose to delete the ASG, the instances will be terminated and the ASG will be deleted
- EC2 Auto Scaling groups can span Availability Zones, but not AWS regions
- Data is not automatically copied from existing instances to a new dynamically created instance

A highly successful gaming company has consistently used the standard architecture of configuring an Application Load Balancer (ALB) in front of Amazon EC2 instances for different services and microservices. As they have expanded to different countries and additional features have been added, the architecture has become complex with too many ALBs in multiple regions. Security updates, firewall configurations, and traffic routing logic have become complex with too many IP addresses and configurations. The company is looking at an easy and effective way to bring down the number of IP addresses allowed by the firewall and easily manage the entire network infrastructure. Which of these options represents an appropriate solution for this requirement?
- Configure Elastic IPs for each of the ALBs in each region
- Set up a Network Load Balancer (NLB) with Elastic IPs. To this NLB, register the private IPs of all the ALBs as targets
- Launch AWS Global Accelerator and create endpoints for all the Regions. Register the ALBs of each Region to the corresponding endpoints
- Assign an Elastic IP to an Auto Scaling Group (ASG), and set up multiple Amazon EC2 instances to run behind the ASGs, for each of the regions

Launch AWS Global Accelerator and create endpoints for all the Regions. Register the ALBs of each Region to the corresponding endpoints

A data analytics company uses custom data-integration services to produce data and log files in S3 buckets. As part of the process re-engineering, the company now wants to stream the existing data files and ongoing changes from Amazon S3 to Amazon Kinesis Data Streams. The timelines are quite stringent and the company is looking at implementing this functionality as soon as possible. As a Solutions Architect, which of the following would you suggest as the fastest possible way of getting the updated solution deployed in production?
- Configure CloudWatch events for the bucket actions on Amazon S3. An AWS Lambda function can then be triggered from the CloudWatch event that will send the necessary data to Amazon Kinesis Data Streams
- Leverage S3 event notification to trigger a Lambda function for the file create event. The Lambda function will then send the necessary data to Amazon Kinesis Data Streams
- Amazon S3 bucket actions can be directly configured to write data into Amazon Simple Notification Service (SNS). SNS can then be used to send the updates to Amazon Kinesis Data Streams
- Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

A media company has set up its technology infrastructure using AWS services such as Amazon EC2 instances, Lambda functions, the Amazon S3 storage service and Amazon ElastiCache for Redis to enhance the performance of its RDS database layer. The company has hired you as a Solutions Architect to implement a robust disaster recovery strategy for its caching layer that guarantees minimal downtime and data loss while ensuring top application performance. Which of the following solutions will you recommend to address the given use-case?
- Schedule daily automatic backups at a time when you expect low resource utilization for your cluster
- Schedule manual backups using Redis append-only file (AOF)
- Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure
- Add read replicas across multiple Availability Zones to reduce the risk of potential data loss because of failure

Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure

The Development team at an e-commerce company is working on securing their databases. Which of the following AWS database engines can be configured with IAM Database Authentication? (Select two)
- RDS SQL Server
- RDS PostgreSQL
- RDS Oracle
- RDS MariaDB
- RDS MySQL

- RDS PostgreSQL
- RDS MySQL
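With IAM Database Authentication, the database user logs in with a short-lived token instead of a password, and access is controlled by an IAM policy allowing `rds-db:connect` on the DB user resource. A minimal sketch of such a policy follows; the account ID, region, DB resource ID, and user name are hypothetical placeholders.

```python
import json

# Hypothetical account, region, DB resource ID (db-...) and database user name.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "rds-db:connect",
        "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL01234/app_user",
    }],
}
print(json.dumps(policy, indent=2))
```

At connection time, the application would generate the token client-side (e.g. boto3's `generate_db_auth_token`) and pass it as the password to the MySQL or PostgreSQL driver over SSL.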

An Internet of Things (IoT) technology company has leveraged a distributed architecture to build its AWS Cloud based solution. This distributed system relies on communications networks to interconnect its three different service components - service A, service B and service C. Service A invokes service B, which in turn invokes service C for sending a response back to service A. During the testing phase, it has been noticed that the failure of service C results in the failure of service A too. As a Solutions Architect, which of the following will you suggest to fix this problem?
- Service B should return an error when service C fails
- Service B should return a static response, a simple alternative to returning an error when service C fails to respond
- Service B should re-compute the response using different means (or service) to replace the failure of service C
- Since service C is critical for the entire architecture, in the event of a failure, service C should have a standby to fall back on

Service B should return a static response, a simple alternative to returning an error when service C fails to respond

An online gaming company has global users accessing its flagship application in different AWS Regions. A few weeks ago, an Elastic Load Balancer (ELB) malfunctioned in a region, taking down all the traffic with it. The manual intervention cost the company significant time and resulted in a huge revenue loss. Additionally, users have also complained of poor performance when accessing the application over the internet. What should a solutions architect recommend to reduce internet latency and add automatic failover across AWS Regions?
- Set up an Amazon Route 53 geoproximity routing policy to route traffic
- Set up AWS Global Accelerator and add endpoints to cater to users in different geographic locations
- Set up AWS Direct Connect as the backbone for each of the AWS Regions where the application is deployed
- Create S3 buckets in different AWS Regions and configure CloudFront to pick the nearest edge location to the user

Set up AWS Global Accelerator and add endpoints to cater to users in different geographic locations

A company's cloud architect has set up a solution that uses Route 53 to configure the DNS records for the primary website, with the domain pointing to the Application Load Balancer (ALB). The company wants a solution where users will be directed to a static error page, configured as a backup, in case of unavailability of the primary website. Which configuration will meet the company's requirements, while keeping the changes to a bare minimum?
- Set up a Route 53 active-active failover configuration. If the Route 53 health check determines the ALB endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on an Amazon S3 bucket
- Use Route 53 Latency-based routing. Create a latency record to point to the Amazon S3 bucket that holds the error page to be displayed
- Set up a Route 53 active-passive failover configuration. If the Route 53 health check determines the ALB endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on an Amazon S3 bucket
- Use Route 53 Weighted routing to give minimum weight to the Amazon S3 bucket that holds the error page to be displayed. In case of primary failure, the requests get routed to the error page

Set up a Route 53 active-passive failover configuration. If Route 53 health check determines the ALB endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on Amazon S3 bucket
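Active-passive failover is expressed as two alias record sets for the same name, one marked `PRIMARY` (the ALB) and one marked `SECONDARY` (the S3 error page). A sketch of the two record-set payloads that a `ChangeResourceRecordSets` call would carry; the domain, ALB DNS name, and hosted-zone IDs below are illustrative examples, not values from the question.

```python
# Hypothetical primary (ALB) and secondary (S3 static error page) failover records.
primary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "primary-alb",
    "Failover": "PRIMARY",
    "AliasTarget": {
        "HostedZoneId": "Z35SXDOTRQ7X7K",  # example ELB hosted zone
        "DNSName": "my-alb-1234.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": True,       # health check drives the failover
    },
}
secondary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "error-page-s3",
    "Failover": "SECONDARY",
    "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",  # example S3 website endpoint zone
        "DNSName": "s3-website-us-east-1.amazonaws.com.",
        "EvaluateTargetHealth": False,
    },
}
```

When the primary is healthy, Route 53 answers with the ALB; when the health check fails, it automatically answers with the S3 error page instead.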

An enterprise is planning their journey to AWS Cloud and the CTO has decided to move secondary workloads such as backups and archives to AWS Cloud. The CTO wishes to move the data stored on physical tapes to the Cloud, without changing their current backup workflows. The company holds petabytes of data on tapes and needs a cost-optimized solution to move the huge chunks of data and store it cost-effectively. What is an optimal solution that meets these requirements while keeping the costs to a minimum?
- AWS DataSync makes it simple and fast to move large amounts of data online between on-premises storage and AWS Cloud. Data moved to the Cloud can then be stored cost-effectively in Amazon S3 archiving storage classes
- Use AWS Direct Connect, a cloud service solution that makes it easy to establish a dedicated network connection from on-premises to AWS to transfer data. Once this is done, Amazon S3 can be used to store data at lesser costs
- Tape Gateway can be used to move on-premises tape data onto AWS Cloud. From here, Amazon S3 archiving storage classes can be used to store data cost-effectively for years
- Use an AWS VPN connection between the on-premises data center and your Amazon VPC. Once this is established, you can use Amazon Elastic File System (Amazon EFS) to get a scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources

Tape Gateway can be used to move on-premises tape data onto AWS Cloud. From here, Amazon S3 archiving storage classes can be used to store data cost-effectively for years

A health-care company runs its extremely critical web service on Amazon EC2 instances behind an Auto Scaling Group (ASG). The company provides ambulances for critical patients and needs the application to be reliable without issues. The workload of the company can be managed on 2 EC2 instances and can peak up to 6 instances when the workload builds up. As a Solutions Architect, which of the following configurations would you select as the best fit for these requirements?
- The ASG should be configured with the minimum capacity set to 4, with 2 instances each in two different AWS Regions. The maximum capacity of the ASG should be set to 6
- The ASG should be configured with the minimum capacity set to 4, with 2 instances each in two different Availability Zones. The maximum capacity of the ASG should be set to 6
- The ASG should be configured with the minimum capacity set to 2, with 1 instance each in two different Availability Zones. The maximum capacity of the ASG should be set to 6
- The ASG should be configured with the minimum capacity set to 2 and the maximum capacity set to 6 in a single Availability Zone

The ASG should be configured with the minimum capacity set to 4, with 2 instances each in two different Availability Zones. The maximum capacity of the ASG should be set to 6
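The reasoning behind a minimum of 4 rather than 2 can be checked with a tiny worst-case calculation: with 2 instances in each of two Availability Zones, losing an entire AZ still leaves the 2 instances the baseline workload needs. This helper is purely illustrative, not part of any AWS API.

```python
def surviving_capacity(per_az_instances, failed_azs=1):
    """Instances still serving traffic after the worst-case AZ failure(s).

    per_az_instances: list of instance counts, one entry per Availability Zone.
    Worst case means the AZs holding the most instances fail, so we keep
    only the smallest remaining counts.
    """
    keep = sorted(per_az_instances)[: len(per_az_instances) - failed_azs]
    return sum(keep)

# Minimum capacity 4 split as 2 + 2: one whole AZ can fail and the
# baseline of 2 instances is still met.
assert surviving_capacity([2, 2]) == 2
# Minimum capacity 2 split as 1 + 1: an AZ failure drops below the baseline.
assert surviving_capacity([1, 1]) == 1
```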

You create an Auto Scaling group to work with an Application Load Balancer. The scaling group is configured with a minimum size value of 10, a maximum value of 30, and a desired capacity value of 20. One of the 20 EC2 instances has been reported as unhealthy. Which of the following actions will take place?
- The ASG will detach the EC2 instance from the group, and leave it running
- The ASG will keep the instance running and restart the application
- The ASG will terminate the EC2 Instance
- The ASG will format the root EBS drive on the EC2 instance and run the User Data again

The ASG will terminate the EC2 Instance

The engineering team at a logistics company is working on a shipments application deployed on a fleet of Amazon EC2 instances behind an Auto Scaling Group (ASG). While configuring new changes for an upcoming release, a team member has noticed that the ASG is not terminating an unhealthy instance. As a Solutions Architect, which of the following options would you suggest to troubleshoot the issue? (Select three)
- The health check grace period for the instance has not expired
- The instance may be in Impaired status
- The EC2 instance could be a Spot Instance, which cannot be terminated by the ASG
- A user might have updated the configuration of the ASG and increased the minimum number of instances, forcing the ASG to keep all instances alive
- A custom health check might have failed. ASG does not terminate instances that are set unhealthy by custom checks
- The instance has failed the ELB health check status

- The health check grace period for the instance has not expired
- The instance may be in Impaired status
- The instance has failed the ELB health check status

The health check for a system monitors 100,000 servers on a distributed network. If a server is found to be unhealthy, an SNS notification is sent, along with a priority message to registered phone numbers. A major event stalled 10,000 of these servers and the company realized that their design couldn't stand the load of firing thousands of notifications and updates simultaneously. They also understood that, even if half the servers go unhealthy, it will choke the network and the company will not be able to update the clients on time about the status of their servers. As a Solutions Architect, which of the following options do you suggest to address this scenario?
- The SNS service is not meant for heavy workloads of this order. Opting for SQS would have kept the system stable during server fails
- The health check system should send the full snapshot of the current state of all the servers each time, denoting them as bits of data to reduce workload and keep spikes at bay
- The health check system should send the current state of only the failed servers, denoting them as bits of data to reduce workload
- Use AWS Lambda with SNS to speed up the processing of records

The health check system should send the full snapshot of the current state of all the servers each time, denoting them as bits of data to reduce workload and keep spikes at bay
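Sending the full snapshot as bits each time is the "constant work" pattern: the payload size depends only on the fleet size, never on how many servers happen to be unhealthy, so a mass failure cannot create a traffic spike. A local sketch packing one health bit per server:

```python
def snapshot(server_states):
    """Pack one health bit per server into a bytes snapshot.

    server_states: list of booleans (True = healthy), one entry per server.
    The snapshot size depends only on the fleet size, not on failures.
    """
    out = bytearray((len(server_states) + 7) // 8)
    for i, healthy in enumerate(server_states):
        if healthy:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

fleet = 100_000
all_up = snapshot([True] * fleet)
many_down = snapshot([i % 2 == 0 for i in range(fleet)])  # 50,000 unhealthy

# 100,000 servers fit in 12,500 bytes whether zero or half are down:
assert len(all_up) == len(many_down) == 12_500
```

By contrast, per-failure notifications scale with the number of failures, which is exactly what overwhelmed the system during the major event.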

While troubleshooting, a cloud architect realized that the Amazon EC2 instance is unable to connect to the Internet using the Internet Gateway. Which conditions should be met for internet connectivity to be established? (Select two)
- The instance's subnet is not associated with any route table
- The network ACLs associated with the subnet must have rules to allow inbound and outbound traffic
- The instance's subnet is associated with multiple route tables with conflicting configurations
- The subnet has been configured to be Public and has no access to internet
- The route table in the instance's subnet should have a route to an Internet Gateway

- The network ACLs associated with the subnet must have rules to allow inbound and outbound traffic
- The route table in the instance's subnet should have a route to an Internet Gateway
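The route-table condition can be checked mechanically: the subnet's route table must contain a default route (`0.0.0.0/0`) whose target is an Internet Gateway (`igw-...`). A local sketch over route tables represented as plain dicts (the route-table shape mirrors what `describe_route_tables` returns, but the values here are made up):

```python
def has_internet_route(route_table):
    """True if the route table sends default traffic to an Internet Gateway."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in route_table.get("Routes", [])
    )

# Hypothetical public subnet: local route plus a default route to an IGW.
public = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc12345def67890"},
]}
# Hypothetical private subnet: only the local VPC route.
private = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
]}

assert has_internet_route(public)
assert not has_internet_route(private)
```

Even with this route in place, the subnet's network ACLs (and the instance's security group) must still allow the traffic in both directions, which is why both conditions are required.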

The lead engineer at a social media company has created an Elastic Load Balancer that has marked all the EC2 instances in the target group as unhealthy. Surprisingly, when he enters the IP address of the EC2 instances in the web browser, he can access the website. What could be the reason the instances are being marked as unhealthy? (Select two)
- The EBS volumes have been improperly mounted
- The security group of the EC2 instance does not allow for traffic from the security group of the Application Load Balancer
- Your web-app has a runtime that is not supported by the Application Load Balancer
- You need to attach an Elastic IP to the EC2 instances
- The route for the health check is misconfigured

- The security group of the EC2 instance does not allow for traffic from the security group of the Application Load Balancer
- The route for the health check is misconfigured

A legacy application is built using a tightly-coupled monolithic architecture. With an increased number of users, the company is unable to provide a good user experience. Performance, scalability, and security have become an issue, since a small change propagates to all the connected components, making it difficult to develop, test, and maintain the application features. The company has decided to decouple the architecture and adopt AWS microservices architecture. Some of the microservices need to handle fast-running processes whereas other microservices need to handle slower processes. As a Solutions Architect, which of these options would you identify as the right way of connecting these microservices?
- Upstream microservices should publish their output to the configured Amazon SNS topic. The downstream, slower microservices will get the notifications for working on them to completion, at their own pace
- Upstream microservices can stream their output to Amazon Kinesis Data Streams. All the consumer microservices can fetch the data from these streams and work on them at their own pace
- The upstream microservices can send their output to Amazon EventBridge, which can forward or distribute the work across different downstream microservices
- Upstream microservices should send their output to the configured Amazon SQS queue. The downstream slower microservices can pick messages from the respective queues for processing

Upstream microservices should send their output to the configured Amazon SQS queue. The downstream slower microservices can pick messages from the respective queues for processing
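The point of the queue is that the fast producer never waits for the slow consumer: messages buffer in the queue and the downstream service drains them at its own pace. A minimal local sketch of that semantics using Python's thread-safe `queue.Queue` as a stand-in for SQS (the message names are made up):

```python
import queue
import threading
import time

q = queue.Queue()  # stands in for an SQS queue in this local sketch

def upstream():
    """Fast producer: enqueues work without waiting on consumers."""
    for i in range(5):
        q.put(f"order-{i}")

processed = []

def slow_downstream():
    """Slow consumer: pulls messages at its own pace."""
    while True:
        msg = q.get()
        if msg is None:          # sentinel to stop the local demo
            break
        time.sleep(0.01)         # simulate slow processing
        processed.append(msg)
        q.task_done()

worker = threading.Thread(target=slow_downstream)
worker.start()
upstream()                       # returns immediately; work is buffered
q.put(None)
worker.join()

assert processed == [f"order-{i}" for i in range(5)]
```

With real SQS the decoupling is even stronger: producer and consumer run in different services, and unprocessed messages persist in the queue if the consumer is down.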

A Silicon Valley startup's cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high to support their business requirements. As a Solutions Architect, which of the following options represents a valid cost-optimization solution?
- Use Amazon S3 Storage Class Analysis to get recommendations for transitions of objects to S3 Glacier storage classes to reduce storage costs. You can also automate moving these objects into a lower-cost storage tier using Lifecycle Policies
- Use AWS Trusted Advisor checks on Amazon EC2 Reserved Instances to automatically renew Reserved Instances. Trusted Advisor also flags idle Amazon RDS DB instances
- Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations
- Use AWS Compute Optimizer recommendations to help you choose the optimal Amazon EC2 purchasing options and help reserve your instance capacities at reduced costs

Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations

A retail company manages its global application on AWS Cloud and their engineering team runs several deployments as part of phased rollouts. The digital marketing team wants to test its blue-green deployment on its customer base in the next couple of days. Most of the customers use mobile phones, which are prone to DNS caching. The company has only two days left for the annual Thanksgiving sale to commence. As a Solutions Architect, which of the following options would you recommend to test the deployment on as many users as possible in the given time frame?
- Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
- Use Route 53 weighted routing to spread traffic across different deployments
- Use Elastic Load Balancer to distribute traffic across deployments
- Use AWS CodeDeploy deployment options to choose the right deployment

Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment

A pharmaceuticals company has decided to move most of their IT infrastructure to AWS Cloud. Some of the applications, however, will remain in their on-premises data center to meet certain regulatory guidelines. The company is looking for a scalable solution to connect the on-premises applications to the ones on AWS Cloud. As a Solutions Architect, can you suggest the MOST optimal solution for this requirement?
- Use the Transit VPC Solution to connect the Amazon VPCs to the on-premises networks
- Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks
- Partially meshed VPC peering can be used to connect the Amazon VPCs to the on-premises networks
- Fully meshed VPC peering can be used to connect the Amazon VPCs to the on-premises networks

Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks

You are working as an Engineering Manager at a company and you manage several development teams and monitor their access across multiple accounts. To get the teams up and running quickly, you had initially created multiple roles with broad permissions that are based on job function in the development accounts. Now, the developers are ready to deploy workloads to production accounts and you only want to grant them the minimum possible permissions that the developers would actually need. Which of the following is the best way of implementing this requirement?
- Create IAM Roles with limited permissions for each of the teams. Any missing permissions can be added to these roles if developers face issues in accessing services they need
- Use Access Advisor to determine the permissions the developers have used in the last few months and only give those permissions (with new IAM roles) while reverting the rest
- Create service accounts for each of the teams and restrict the associated policy to only the permissions needed for the particular functionality
- Use Access Advisor to know the action last accessed information for the last few months and create IAM Groups with the permissions gained from these insights

Use Access Advisor to determine the permissions the developers have used in the last few months and only give those permissions (with new IAM roles) while reverting the rest

An online gaming application has been built on AWS Cloud. Of late, a large chunk of traffic is coming from users who download the historic leaderboard reports and their game tactics for various games. The current infrastructure and design are unable to cope with the traffic numbers, and the application freezes on most of the pages. The company has hired you as a Solutions Architect to provide a cost-optimal and easily implementable solution that does not need provisioning of infrastructure resources. Which of the following will you recommend?
- Configure AWS Lambda with an RDS database solution, to provide a serverless architecture
- Use Amazon CloudFront with DynamoDB for greater speed and low latency access
- Use Amazon CloudFront with S3 as the storage solution
- Use AWS Lambda with ElastiCache and Amazon RDS for serving content at high speed and low latency

Use Amazon CloudFront with S3 as the storage solution

An IT company is looking at securing their APIs using AWS best practices. APIs are often targeted by attackers because of the operations that they can perform and the valuable data they can provide. The company has hired you as a Solutions Architect to advise the company on the various authentication/authorization mechanisms that AWS offers to authorize an API call within the API Gateway. The company would prefer to implement a solution that offers built-in user management. Which of the following solutions would you suggest as the best fit for the given use-case?
- Use AWS_IAM authorization
- Use Amazon Cognito User Pools
- Use an API Gateway Lambda authorizer
- Use Amazon Cognito Identity Pools

Use Amazon Cognito User Pools

A CRM company is moving their IT infrastructure to AWS Cloud to take advantage of the scalability, flexibility and cost optimization it offers. The company has a SaaS (Software as a Service) CRM application that feeds updates to a multitude of other in-house applications as well as several third-party SaaS applications. These in-house applications are also being migrated to use AWS services and the company is looking at connecting the SaaS CRM with the in-house as well as third-party SaaS applications. As a Solutions Architect, which of the following would you suggest to asynchronously decouple the architecture?
- Use Amazon Simple Notification Service (SNS), a fully managed messaging service for both system-to-system and app-to-person (A2P) communication. It enables you to communicate between systems through publish/subscribe (pub/sub) patterns that enable asynchronous messaging between decoupled microservice applications or to communicate directly to users via SMS, mobile push and email
- Use Amazon Simple Queue Service (SQS), a fully managed message queuing service that enables you to asynchronously decouple and scale microservices, distributed systems, and serverless applications
- Use Amazon EventBridge, which is a serverless event bus that makes it easy to connect applications and is event-based, works asynchronously to decouple the system architecture
- Use Elastic Load Balancing, which automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions for effective decoupling of system architecture

Use Amazon EventBridge, which is a serverless event bus that makes it easy to connect applications and is event-based, works asynchronously to decouple the system architecture

An IT company hosts Windows-based applications on its on-premises data center. The company is looking at moving the business to the AWS Cloud. The cloud solution should offer shared storage space that multiple applications can access without a need for replication. Also, the solution should integrate with the company's self-managed Active Directory domain. Which of the following solutions addresses these requirements with the minimal integration effort?
- Use Amazon FSx for Windows File Server, for shared storage space
- Use File Gateway of AWS Storage Gateway to create a hybrid storage solution
- Use Amazon FSx for Lustre, for effective shared storage and millisecond latencies
- Use Amazon Elastic File System (Amazon EFS) as a shared storage solution

Use Amazon FSx for Windows File Server, for shared storage space

A health-care startup has built its application on a REST-based interface that receives near real-time data from the small devices connected to patients. Once the patient's health metrics flow in, the downstream applications process the data and generate a report of the diagnosis that is sent back to the doctor. The process worked well when the velocity of data ingestion was low. With the startup gaining traction and being used by more doctors, the system has become slow and sometimes even unresponsive, as it does not have a retry mechanism. The startup is looking at a scalable solution that has minimal implementation overhead. As a Solutions Architect, which of the following would you recommend as a scalable alternative to the current solution?
- Use Amazon Simple Notification Service (Amazon SNS) for data ingestion and configure Lambda to trigger logic for downstream processing
- Use Amazon Kinesis Data Streams to ingest the data. Process this data using an AWS Lambda function or run analytics using Kinesis Data Analytics
- Use Amazon Simple Queue Service (SQS) for data ingestion and configure Lambda to trigger logic for downstream processing
- Use Amazon API Gateway with the existing REST-based interface to create a high performing architecture

Use Amazon Kinesis Data Streams to ingest the data. Process this data using AWS Lambda function or run analytics using Kinesis Data Analytics
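Kinesis scales ingestion by spreading records across shards: each record carries a partition key, the key is MD5-hashed to a 128-bit number, and the record lands on the shard whose hash-key range contains it. A simplified local sketch of that routing, assuming equal-width shard ranges over the full 2**128 space (the key names are made up):

```python
import hashlib

def shard_for_key(partition_key, num_shards):
    """Route a partition key to a shard the way Kinesis does: MD5 the key
    to a 128-bit number and find the containing hash-key range (modeled
    here as equal-width ranges over 2**128)."""
    h = int.from_bytes(hashlib.md5(partition_key.encode()).digest(), "big")
    return h * num_shards // 2**128

# Every record for a given device lands on one shard, in range:
assert 0 <= shard_for_key("patient-device-42", 4) < 4
# The mapping is deterministic, preserving per-key ordering across retries:
assert shard_for_key("patient-device-42", 4) == shard_for_key("patient-device-42", 4)
```

This is why using the device (or patient) ID as the partition key keeps each patient's metrics ordered within a shard while the fleet's total throughput scales with the shard count.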

A media company uses Amazon S3 buckets for storing their business-critical files. Initially, the development team used to provide bucket access to specific users within the same account. With changing business requirements, cross-account S3 access requirements are also growing. The company is looking for a granular solution that can offer user-level as well as account-level access permissions for the data stored in S3 buckets. As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?
- Use Identity and Access Management (IAM) policies
- Use Access Control Lists (ACLs)
- Use Security Groups
- Use Amazon S3 Bucket Policies

Use Amazon S3 Bucket Policies
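A bucket policy covers both granularities in one document: a statement's `Principal` can be a whole AWS account or a single IAM user in another account. A sketch of such a policy; the bucket name, account ID, and user name are hypothetical.

```python
import json

# Hypothetical bucket and cross-account principals.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # account-level access: every principal in account 999988887777 may read
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::media-assets-bucket/*",
        },
        {   # user-level access: only one IAM user in that account may upload
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:user/editor"},
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::media-assets-bucket/uploads/*",
        },
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

IAM policies alone cannot do this across accounts without extra role assumptions, and ACLs are far coarser, which is why the bucket policy is the optimized choice here.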

A media startup is looking at hosting their web application on AWS Cloud. The application will be accessed by users from different geographic regions of the world. The main feature of the application requires the upload and download of video files that can reach a maximum size of 10 GB. The startup wants the solution to be cost-effective and scalable with the lowest possible latency for a great user experience. As a Solutions Architect, which of the following will you suggest as an optimal solution to meet the given requirements?
- Use Amazon S3 for hosting the web application and use Amazon CloudFront for faster distribution of content to geographically dispersed users
- Use Amazon EC2 with Global Accelerator for faster distribution of content, while using Amazon S3 as the storage service
- Use Amazon EC2 with ElastiCache for faster distribution of content, while Amazon S3 can be used as a storage service
- Use Amazon S3 for hosting the web application and use S3 Transfer Acceleration to reduce the latency that geographically dispersed users might face

Use Amazon S3 for hosting the web application and use S3 Transfer Acceleration to reduce the latency that geographically dispersed users might face

A call center used to hire experienced specialists to analyze the customer service calls attended by their small group of call center representatives. Each call center representative handles about 40-50 calls in a day. Over the years, as the company grew to more than a hundred employees, hiring specialists to analyze calls has not only become cumbersome but also ineffective. The company wants to move to AWS Cloud and is looking at an automated solution that can help them analyze the calls for sentiment and security analysis. As a Solutions Architect, which of the following solutions would you recommend to meet the given requirements?
- Use Kinesis Data Streams to read in audio files as they are generated. Use Amazon-provided machine learning (ML) algorithms to initially convert the audio files into text. These text files can be analyzed using various ML algorithms to generate reports for customer sentiment analysis
- Use Kinesis Data Streams to read audio files into Amazon Alexa, which will convert the audio files into text. Kinesis Data Analytics can be used to analyze these files and Amazon QuickSight can be used to visualize and display the output
- Use Amazon Transcribe to convert audio files to text. Use Amazon QuickSight to run analysis on these text files to understand the underlying patterns. Visualize and display them onto user dashboards for human analysis
- Use Amazon Transcribe to convert audio files to text. Run analysis on these text files using Amazon Athena to understand the underlying customer sentiments

Use Amazon Transcribe to convert audio files to text. Run analysis on these text files using Amazon Athena to understand the underlying customer sentiments
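The pipeline has two stages: Transcribe turns each recording into a text transcript on S3, and Athena queries the transcripts in place. A minimal sketch, assuming hypothetical bucket/job names and a hypothetical Athena table `calls` (the request fields match Transcribe's real `StartTranscriptionJob` API shape, but nothing is submitted here):

```python
# Parameters one might pass to transcribe.start_transcription_job(...)
# (bucket names and job name are hypothetical placeholders)
transcribe_job = {
    "TranscriptionJobName": "call-2024-001",
    "LanguageCode": "en-US",
    "Media": {"MediaFileUri": "s3://call-audio/call-2024-001.wav"},
    "OutputBucketName": "call-transcripts",  # JSON transcripts land here
}

# Athena then queries the transcript data directly on S3, e.g. against a
# hypothetical external table `calls` defined over the transcript output:
sentiment_query = """
SELECT call_id, transcript
FROM calls
WHERE lower(transcript) LIKE '%refund%'
"""
```

Because Athena is serverless and queries S3 directly, no servers or ETL jobs are needed between transcription and analysis.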

An e-commerce company has created a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts to facilitate network isolation and delegate network administration. The company is looking for a cost-effective, easy, and secure way of maintaining this distributed architecture. As a Solutions Architect, which of the following options would you recommend to address the given requirement?

Use Centralized VPC Endpoints for connecting with multiple VPCs, also known as a shared services VPC
Use Transit VPC to reduce cost and share the resources across VPCs
Use fully meshed VPC peers
Use VPCs connected with AWS Direct Connect

Use Centralized VPC Endpoints for connecting with multiple VPCs, also known as shared services VPC

A social media company manages its flagship application on an EC2 server fleet running behind an Application Load Balancer, and the traffic is fronted by a CloudFront distribution. The engineering team wants to decouple the user authentication process for the application so that the application servers can focus on just the business logic. As a Solutions Architect, which of the following solutions would you recommend to the development team so that it requires minimal development effort?

Use Cognito Authentication via Cognito Identity Pools for your Application Load Balancer
Use Cognito Authentication via Cognito User Pools for your CloudFront distribution
Use Cognito Authentication via Cognito Identity Pools for your CloudFront distribution
Use Cognito Authentication via Cognito User Pools for your Application Load Balancer

Use Cognito Authentication via Cognito User Pools for your Application Load Balancer
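The ALB handles the whole login flow itself through an `authenticate-cognito` listener rule action, so the application servers never see unauthenticated traffic. A minimal sketch of the actions list one might pass to the ELBv2 `create_rule`/`modify_listener` APIs, assuming all ARNs and the user pool domain are placeholders:

```python
# Listener actions sketch: authenticate first, then forward.
# All ARNs, IDs, and the domain below are hypothetical placeholders.
listener_actions = [
    {
        "Type": "authenticate-cognito",
        "AuthenticateCognitoConfig": {
            "UserPoolArn": "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE",
            "UserPoolClientId": "example-client-id",
            "UserPoolDomain": "myapp-auth",
        },
        "Order": 1,
    },
    {
        # Only requests that passed Cognito authentication reach the app fleet
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123",
        "Order": 2,
    },
]
```

CloudFront has no equivalent built-in Cognito action, which is why the ALB option wins on "minimal development effort".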

A medical devices company extensively uses Amazon S3 buckets for its critical data storage. Hundreds of buckets are used to keep image data segregated and faster to access. During the company's monthly meetings, the finance team submitted a report showing the high costs incurred on S3 storage. Upon analysis, the IT team realized that the lifecycle policies on the S3 buckets have not been applied optimally. The company is looking for an easy way to reduce storage costs on S3 while keeping the IT team's involvement to a minimum. As a Solutions Architect, can you recommend the most appropriate option from the choices below?

Configure Amazon Elastic File System (Amazon EFS) to provide a fast, cost-effective and sharable storage service
Use S3 One Zone-Infrequent Access to reduce the costs on S3 storage
Use S3 Intelligent-Tiering storage class to optimize the S3 storage costs
Use S3 Outposts storage class to reduce the costs on S3 storage by storing the data on-premises

Use S3 Intelligent-Tiering storage class to optimize the S3 storage costs
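Existing objects can be moved into Intelligent-Tiering with a single lifecycle rule, after which S3 shifts them between access tiers automatically with no further IT involvement. A minimal sketch of the rule, assuming a hypothetical bucket (the shape matches the payload for boto3's real `put_bucket_lifecycle_configuration` call, but nothing is applied here):

```python
# Lifecycle configuration sketch for a hypothetical image bucket.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "move-images-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = every object in the bucket
            "Transitions": [
                # Day-0 transition into Intelligent-Tiering; S3 then moves
                # objects between frequent/infrequent tiers on its own
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            ],
        }
    ]
}
```

New uploads can skip the lifecycle hop entirely by writing with `StorageClass="INTELLIGENT_TIERING"` on the `put_object` call.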

You are working as a Solutions Architect with a gaming company that has deployed an application that allows its customers to play games online. The application connects to an Amazon Aurora database, and the entire stack is currently deployed in the United States. The company has plans to expand its operations to Europe and Asia. It needs the games table to be accessible globally but needs the users and games_played tables to be regional only. How would you implement this with minimal application refactoring?

Use an Amazon Aurora Global Database for the games table and use DynamoDB for the users and games_played tables
Use a DynamoDB Global Table for the games table and use Amazon Aurora for the users and games_played tables
Use a DynamoDB Global Table for the games table and use DynamoDB for the users and games_played tables
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables

Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for the users and games_played tables
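Staying on Aurora for both the global and the regional tables is what keeps refactoring minimal: the existing US cluster is simply promoted into a Global Database. A minimal sketch, assuming hypothetical identifiers (the fields match the real RDS `CreateGlobalCluster` request, but no cluster is created here):

```python
# Parameters one might pass to rds.create_global_cluster(...)
# (identifiers/ARNs are hypothetical placeholders)
create_global_cluster_request = {
    "GlobalClusterIdentifier": "games-global",
    # The existing US cluster holding the `games` table becomes the primary;
    # read-only secondaries are then added in the Europe/Asia regions.
    "SourceDBClusterIdentifier": "arn:aws:rds:us-east-1:111122223333:cluster:games-us",
    "Engine": "aurora-mysql",
}

# The `users` and `games_played` tables stay in ordinary regional Aurora
# clusters created per region, so the application code is unchanged.
```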

An e-commerce company is looking for a highly available architecture to migrate its flagship application, which is planned to be hosted on a fleet of Amazon EC2 instances. The company is also looking at facilitating content-based routing in its architecture. As a Solutions Architect, which of the following will you suggest for the company?

Use a Network Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Private IP address to mask any failure of an instance
Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Auto Scaling group to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Elastic IP address to mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across different Availability Zones. Configure a Public IP address to mask any failure of an instance

Use an Application Load Balancer for distributing traffic to the EC2 instances spread across different Availability Zones. Configure an Auto Scaling group to mask any failure of an instance
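Content-based routing is what rules out the Network Load Balancer here: only the ALB can inspect the request and route by path or host. A minimal sketch of a path-based listener rule, assuming placeholder ARNs and paths (the shape matches the ELBv2 `create_rule` parameters, but no rule is created here):

```python
# Listener rule sketch: send /checkout/* traffic to a dedicated target group.
# ARNs and the path pattern are hypothetical placeholders.
checkout_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc/def",
    "Priority": 10,  # lower number = evaluated earlier
    "Conditions": [
        {"Field": "path-pattern", "Values": ["/checkout/*"]},
    ],
    "Actions": [
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/checkout/xyz",
        },
    ],
}
```

Host-header, HTTP-header, and query-string conditions follow the same `Conditions` structure, just with a different `Field`.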

The engineering team at a retail company uses EC2 instances, API Gateway, Amazon RDS, Elastic Load Balancer and CloudFront services. As per suggestions from a risk advisory group, all the development teams have been advised to ramp up the security of the applications by analyzing the data sources from AWS services that are part of their stack. The CTO at the company also wants the teams to assess the viability of using Amazon GuardDuty. As a Solutions Architect, can you identify the data sources that GuardDuty analyzes?

VPC Flow Logs, API Gateway logs, S3 access logs
ELB logs, DNS logs, CloudTrail events
CloudFront logs, API Gateway logs, CloudTrail events
VPC Flow Logs, DNS logs, CloudTrail events

VPC Flow Logs, DNS logs, CloudTrail events

A popular chatting application has millions of users. The engineering team responsible for the application has used ElastiCache to bolster performance even as the user base has doubled over the past six months. After due diligence, the team chose the TTL (Time to Live) caching strategy for its requirements, which has worked reasonably well so far. Recently, the database administrator has found sudden spikes in queries to the database that are fetching the same data. This effect may cripple the database as more users sign up for the application, thereby causing a bad user experience. As a Solutions Architect, can you identify the pattern that could be causing these sudden spikes and a resolution to address the issue?

When different application processes simultaneously request a cache key, get a cache miss, and then each hits the same database query for data, it results in the database getting swamped with identical queries. The solution is to prewarm the cache
TTL caching strategy is not the right fit for this use case. Opt for the Write-Through caching strategy. The write-through strategy adds or updates data in the cache whenever data is written to the database, making sure that the database never gets too many read requests
Delete the existing cache keys and opt for the Lazy Loading cache technique for fewer hits on the database
ElastiCache seems to be preconfigured with an eviction policy that is forcing data out of memory. Disable the eviction policy to remedy the fault

When different application processes simultaneously request a cache key, get a cache miss, and then each hits the same database query for data, it results in the database getting swamped with identical queries. The solution is to prewarm the cache
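The failure mode above is often called a cache stampede: many processes miss the same expired key at once and all hit the database. A minimal pure-Python sketch of the pattern and a lock-based fix (a stand-in for illustration; a real deployment would prewarm ElastiCache before the TTL expires rather than use an in-process lock):

```python
import threading

cache = {}
db_queries = []                 # records every request that reaches the "database"
fill_lock = threading.Lock()

def query_database(key):
    db_queries.append(key)      # the expensive call we want to avoid repeating
    return f"value-for-{key}"

def get(key):
    if key in cache:
        return cache[key]       # cache hit: fast path
    with fill_lock:             # only one thread repopulates per miss
        if key not in cache:    # re-check: another thread may have filled it
            cache[key] = query_database(key)
    return cache[key]

# 50 concurrent requests for the same expired key...
threads = [threading.Thread(target=get, args=("games",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(db_queries))  # the database saw the query once, not 50 times
```

Without the lock-and-recheck, all 50 threads would miss and issue identical queries simultaneously, which is exactly the spike the administrator observed.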

