AWS Solutions Architect - Test 4


Your company has deployed an application that will perform a lot of overwrites and deletes on data and require the latest information to be available anytime data is read. As a Solutions Architect, which database technology will you recommend? Amazon ElastiCache Amazon Relational Database Service (Amazon RDS) Amazon Simple Storage Service (Amazon S3) Amazon Neptune

Amazon Relational Database Service (Amazon RDS)

Your company is building a video streaming service accessible to users who have paid an ongoing subscription. The subscription data is stored in DynamoDB. You would like to expose the users to a serverless architecture allowing them to request the video files that sit on Amazon S3 and are distributed by CloudFront and protected by an origin access identity (OAI). What do you recommend? (Select two) Use DynamoDB triggers to generate the URL Use API Gateway to generate the URL Generate an S3 pre-signed URL Use AWS Lambda to generate the URL Generate a CloudFront signed URL

Use AWS Lambda to generate the URL Generate a CloudFront signed URL
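
For reference, a minimal sketch of how the Lambda function could generate a CloudFront signed URL in Python with botocore's CloudFrontSigner. The key pair ID, private key path, distribution domain, and object key are placeholders, not values from the question; the subscription check against DynamoDB is assumed to happen before signing.

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "KXXXXXXXXXXXXX"             # placeholder CloudFront public key ID
PRIVATE_KEY_PATH = "/opt/private_key.pem"  # placeholder key bundled with the Lambda

def rsa_signer(message):
    # Sign the CloudFront policy with the RSA key matching the trusted public key
    with open(PRIVATE_KEY_PATH, "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

def lambda_handler(event, context):
    # Assumed: the user's subscription was already validated against DynamoDB
    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
    url = signer.generate_presigned_url(
        "https://dxxxxxxxxxxxx.cloudfront.net/videos/episode-01.mp4",
        date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
    )
    return {"statusCode": 200, "body": url}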

A company has grown from a small startup to an enterprise employing over 1000 people. As part of the scaling up of the AWS Cloud teams, the company has observed some strange behavior with S3 buckets settings being changed regularly. How can you figure out what's happening without restricting the rights of the users? Implement an IAM policy to forbid users to change S3 bucket settings Use CloudTrail to analyze API calls Use S3 access logs to analyze user access using Athena Implement a bucket policy requiring MFA for all operations

Use CloudTrail to analyze API calls
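
As an illustration only, a short Python/boto3 sketch that looks up recent S3 bucket-level API calls in CloudTrail's event history; the lookback window is an arbitrary example.

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Look up management events recorded against S3 buckets over the last 24 hours
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "ResourceType", "AttributeValue": "AWS::S3::Bucket"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in response["Events"]:
    # EventName shows what changed (e.g. PutBucketPolicy), Username shows who did it
    print(event["EventTime"], event["EventName"], event.get("Username"))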

A financial services firm has traditionally operated with an on-premise data center and would like to create a disaster recovery strategy leveraging the AWS Cloud. As a Solutions Architect, you would like to ensure that a scaled-down version of a fully functional environment is always running in the AWS cloud, and in case of a disaster, the recovery time is kept to a minimum. Which disaster recovery strategy is that? Backup and Restore Pilot Light Multi Site Warm Standby

Warm Standby

A leading e-commerce company runs its IT infrastructure on AWS Cloud. The company has a batch job running at 7 am daily on an RDS database. It processes shipping orders for the past day, and usually gets around 2000 records that need to be processed sequentially in a batch job via a shell script. The processing of each record takes about 3 seconds. What platform do you recommend to run this batch job? Amazon EC2 AWS Glue Amazon Kinesis Data Streams AWS Lambda

Amazon EC2

An e-commerce website is migrating towards a microservices-based approach for their website and plans to expose their website from the same load balancer, linked to different target groups with different URLs: checkout.mycorp.com, www.mycorp.com, mycorp.com/products, and mycorp.com/orders. The website would like to use ECS on the backend to manage these microservices and possibly host the same container of the application multiple times on the same EC2 instance. Which feature can help you achieve this with minimal effort? Application Load Balancer + Reverse Proxy running as a Docker daemon on each ECS host Classic Load Balancer + dynamic port mapping Application Load Balancer + dynamic port mapping Network Load Balancer + dynamic port mapping

Application Load Balancer + dynamic port mapping

A web hosting company has deployed their application behind a Network Load Balancer (NLB) and an Auto Scaling Group (ASG). The system administrator has now released a new cost-optimized AMI that should be used to launch instances for the Auto Scaling Group going forward. As a Solutions Architect, how can you update the ASG to launch from this new AMI? Create a new launch configuration with the new AMI ID Update the current launch configuration with the new AMI ID Swap the underlying root EBS volumes for your instances Launch a script on the EC2 instance to query the metadata service at

Create a new launch configuration with the new AMI ID
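
A hedged Python/boto3 sketch of the answer: because launch configurations are immutable, create a new one from the new AMI and point the ASG at it. The names, AMI ID, instance type, and security group are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# 1. Launch configurations cannot be edited, so create a new one with the new AMI
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v2",        # placeholder name
    ImageId="ami-0123456789abcdef0",            # the new cost-optimized AMI
    InstanceType="t3.medium",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# 2. Attach the new launch configuration to the existing ASG;
#    only instances launched from now on will use the new AMI
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v2",
)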

A development team has deployed a microservice to the ECS. The application layer is in a Docker container that provides both static and dynamic content through an Application Load Balancer. With increasing load, the ECS cluster is experiencing higher network usage. The development team has drilled into the network usage and found that 90% of it is due to distributing static content of the application. As a Solutions Architect, what do you recommend to improve the application's network usage and decrease costs? Distribute the static content through Amazon S3 Distribute the dynamic content through Amazon S3 Distribute the static content through Amazon EFS Distribute the dynamic content through Amazon EFS

Distribute the static content through Amazon S3

As an e-sport tournament hosting company, you have servers that need to scale and be highly available. Therefore you have deployed an Elastic Load Balancer (ELB) with an Auto Scaling group (ASG) across 3 Availability Zones (AZs). When e-sport tournaments are running, the servers need to scale quickly. And when tournaments are done, the servers can be idle. As a general rule, you would like to be highly available, have the capacity to scale and optimize your costs. What do you recommend? (Select two) Set the minimum capacity to 1 Set the minimum capacity to 2 Set the minimum capacity to 3 Use Dedicated hosts for the minimum capacity Use Reserved Instances for the minimum capacity

Set the minimum capacity to 2 Use Reserved Instances for the minimum capacity

An IT company is working on an engagement to revamp a monolithic CRM application into a modern decoupled application built on AWS Cloud. From a solutions architecture perspective, the new application has been written from scratch keeping in mind the best practices for performance and resilience. To finalize the migration to the new architecture, you have updated a Route 53 simple record to point "myapp.mydomain.com" from the old Load Balancer to the new one. The users are still not redirected to the new Load Balancer. What has gone wrong in the configuration? The TTL is still in effect The CNAME Record is misconfigured The Alias Record is misconfigured The health checks are failing

The TTL is still in effect

Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes to install and contains both static files and dynamic files that must be generated during the installation process. As a Solutions Architect, you would like to bring the time to create a new instance in your Elastic Beanstalk deployment down to less than 2 minutes. What do you recommend? (Select two) Use EC2 user data to customize the dynamic installation parts at boot time Store the installation files in S3 so they can be quickly retrieved Use EC2 user data to install the application at boot time Create a Golden AMI with the static installation components already setup Use Elastic Beanstalk deployment caching feature

Use EC2 user data to customize the dynamic installation parts at boot time Create a Golden AMI with the static installation components already setup

Your company is building a music sharing platform on which users can upload their songs. As a solutions architect for the platform, you have designed an architecture that will leverage a Network Load Balancer linked to an Auto Scaling Group across multiple availability zones. The songs live on an FTP server that your EC2 instances can easily access. You are currently running with 4 EC2 instances in your ASG, but when a very popular song is released, your Auto Scaling Group scales to 100 instances and you start to incur high network and compute fees. The company is looking for a way to dramatically decrease the costs without changing any of the application code. What do you recommend? Leverage AWS Storage Gateway Move the songs to S3 Use a CloudFront distribution Move the songs to Glacier

Use a CloudFront distribution

A developer in your company has set up a classic 2 tier architecture consisting of an Application Load Balancer and an Auto Scaling group (ASG) managing a fleet of EC2 instances. The ALB is deployed in a subnet of size 10.0.1.0/18 and the ASG is deployed in a subnet of size 10.0.4.0/17. As a solutions architect, you would like to adhere to the security pillar of the well-architected framework. How do you configure the security group of the EC2 instances to only allow traffic coming from the ALB? Add a rule to authorize the CIDR 10.0.4.0/17 Add a rule to authorize the security group of the ALB Add a rule to authorize the security group of the ASG Add a rule to authorize the CIDR 10.0.1.0/18

Add a rule to authorize the security group of the ALB
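
A minimal Python/boto3 sketch of the answer, using placeholder security group IDs: the instances' security group gets an inbound rule that references the ALB's security group rather than a CIDR, so only traffic coming through the ALB is allowed.

import boto3

ec2 = boto3.client("ec2")

EC2_SG_ID = "sg-instancesEXAMPLE"   # security group attached to the EC2 instances (placeholder)
ALB_SG_ID = "sg-albEXAMPLE"         # security group attached to the ALB (placeholder)

# Allow HTTP only from resources that carry the ALB's security group
ec2.authorize_security_group_ingress(
    GroupId=EC2_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ALB_SG_ID}],
    }],
)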

A company has been evolving towards creating sandbox environments for developers. The entire infrastructure is managed by AWS CloudFormation to easily recreate stacks. An RDS database is managed by CloudFormation in a "dev" environment and has been costing a lot, as it has been mirroring the production setup. After running cost analysis reports, you realize that the storage layer (io1) accounts for 90% of the cost, and the remaining 10% is used by the EC2 instance. The CloudWatch metrics report that both the EC2 instance and the EBS volume are under-utilized: the EBS volume has frequent IO bursts, but the database is rarely under sustained use. As a Solutions Architect, what do you propose to reduce costs drastically? Keep the EBS volume to io1 and reduce the IOPS Convert the Amazon EC2 instance EBS volume to gp2 Change the Amazon EC2 instance type to something much smaller Don't use a CloudFormation template to create the database as the CloudFormation service incurs greater service charges

Convert the Amazon EC2 instance EBS volume to gp2

As a Solutions Architect, you would like to completely secure the communications between your CloudFront distribution and your S3 bucket which contains the static files for your website. Users should only be able to access the S3 bucket through CloudFront and not directly. What do you recommend? Create an origin access identity (OAI) and update the S3 Bucket Policy Update the S3 bucket security groups to only allow traffic from the CloudFront security group Make the S3 bucket public Create a bucket policy to only authorize the IAM role attached to the CloudFront distribution

Create an origin access identity (OAI) and update the S3 Bucket Policy
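
For illustration, a sketch of the kind of bucket policy the answer implies, applied with Python/boto3; the bucket name and OAI ID are placeholders, not values from the question.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-static-site-bucket"   # placeholder
OAI_ID = "E1EXAMPLEOAIID"          # placeholder origin access identity ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# With this policy in place (and public access blocked), only CloudFront can read the objects
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))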

Your company runs a web portal to match developers to clients who need their help. As a solutions architect, you've designed the architecture of the website to be fully serverless with API Gateway & AWS Lambda. The backend is leveraging a DynamoDB table. You would like to automatically congratulate your developers on important milestones, such as - their first paid contract. All the contracts are stored in DynamoDB. Which DynamoDB feature can you use to implement this functionality such that there is LEAST delay in sending automatic notifications? DynamoDB DAX + API Gateway Amazon SQS + Lambda CloudWatch Events + Lambda DynamoDB Streams + Lambda

DynamoDB Streams + Lambda
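
A minimal sketch of a Lambda handler subscribed to the table's stream; the attribute names and SNS topic are hypothetical, used only to show how stream records arrive and are inspected.

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:developer-milestones"   # placeholder

def lambda_handler(event, context):
    # DynamoDB Streams invokes this function with batches of table changes
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        new_item = record["dynamodb"]["NewImage"]   # attributes arrive in DynamoDB JSON form
        # Hypothetical attribute marking a developer's first paid contract
        if new_item.get("contract_number", {}).get("N") == "1":
            developer_id = new_item["developer_id"]["S"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=f"Congratulations {developer_id} on your first paid contract!",
            )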

You are working as a Solutions Architect for a photo processing company that has a proprietary algorithm to compress an image without any loss in quality. Because of the efficiency of the algorithm, your clients are willing to wait for a response that carries their compressed images back. You also want to process these jobs asynchronously and scale quickly, to cater to the high demand. Additionally, you also want the job to be retried in case of failures. Which combination of choices do you recommend to minimize cost and comply with the requirements? (Select two) Amazon Simple Notification Service (SNS) EC2 Spot Instances Amazon Simple Queue Service (SQS) EC2 Reserved Instances EC2 On-Demand Instances

EC2 Spot Instances Amazon Simple Queue Service (SQS)

You have built an application that is deployed with an Elastic Load Balancer and an Auto Scaling Group. As a Solutions Architect, you have configured aggressive CloudWatch alarms, making your Auto Scaling Group (ASG) scale in and out very quickly, renewing your fleet of Amazon EC2 instances on a daily basis. A production bug appeared two days ago, but the team is unable to SSH into the instance to debug the issue, because the instance has already been terminated by the ASG. The log files are saved on the EC2 instance. How will you resolve the issue and make sure it doesn't happen again? Disable the Termination from the ASG any time a user reports an issue Install a CloudWatch Logs agent on the EC2 instances to send logs to CloudWatch Make a snapshot of the EC2 instance just before it gets terminated Use AWS Lambda to regularly SSH into the EC2 instances and copy the log files to S3

Install a CloudWatch Logs agent on the EC2 instances to send logs to CloudWatch

As a Solutions Architect, you are tasked to design a distributed application that will run on various EC2 instances. This application needs to have the highest performance local disk to cache data. Also, data is copied through an EC2 to EC2 replication mechanism. It is acceptable if the instance loses its data when stopped or terminated. Which storage solution do you recommend? Amazon Elastic Block Store (EBS) Amazon Elastic File System (Amazon EFS) Instance Store Amazon Simple Storage Service (Amazon S3)

Instance Store

Your company has created a data warehouse using Redshift that is used to analyze data from Amazon S3. From the usage pattern, you have detected that after 30 days, the data is rarely queried in Redshift and it's not "hot data" anymore. You would like to preserve the SQL querying capability on your data and get the queries started immediately. Also, you want to adopt a pricing model that allows you to save the maximum amount of cost on Redshift. What do you recommend? (Select two) Move the data to S3 Standard IA after 30 days Migrate the Redshift underlying storage to S3 IA Create a smaller Redshift Cluster with the cold data Analyze the cold data with Athena Move the data to S3 Glacier after 30 days

Move the data to S3 Standard IA after 30 days Analyze the cold data with Athena

A Silicon Valley-based startup helps its users legally sign highly confidential contracts. In order to meet the strong industry requirements and governance guidelines, the startup must ensure that the signed contracts are encrypted using the AES-256 algorithm via an encryption key that is generated internally. The startup is now migrating to AWS Cloud and would like you, a Solutions Architect, to advise them on the encryption scheme to adopt. The startup wants to continue using their existing encryption key generation mechanism. What do you recommend? SSE-KMS SSE-S3 Client-Side Encryption SSE-C

SSE-C
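
A short Python/boto3 sketch of SSE-C, assuming the 256-bit key below stands in for one produced by the startup's own key-generation mechanism (bucket and object names are placeholders). The same key must be supplied again on every read, since S3 does not store it.

import os
import boto3

s3 = boto3.client("s3")

# In practice this key comes from the company's own key-generation mechanism
customer_key = os.urandom(32)   # 256-bit key for AES-256

s3.put_object(
    Bucket="signed-contracts-bucket",        # placeholder
    Key="contracts/contract-001.pdf",
    Body=b"...signed contract bytes...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,             # sent over TLS; S3 encrypts, then discards the key
)

# Reading the object back requires presenting the exact same key
obj = s3.get_object(
    Bucket="signed-contracts-bucket",
    Key="contracts/contract-001.pdf",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)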

A company hosting a Network File System (NFS) on-premise has managed it well until now. However, the teams have realized that it is getting challenging to manage the entire process, and the company is looking to adopt a hybrid cloud strategy to connect their on-premise applications to an NFS file share on AWS that is backed by Amazon S3. Which service do you recommend? Storage Gateway - Volume Gateway Storage Gateway - File Gateway Storage Gateway - Tape Gateway Amazon Elastic File System (Amazon EFS)

Storage Gateway - File Gateway

You are using AWS Lambda to implement a batch job for a big data analytics workflow. Based on historical trends, a similar job runs for 30 minutes on average. The Lambda function pulls data from Amazon S3, processes it, and then writes the results back to S3. When you deployed your AWS Lambda function, you noticed an issue where the Lambda function abruptly failed after 15 minutes of execution. As a solutions architect, which of the following would you identify as the root cause of the issue? The AWS Lambda function is running out of memory The AWS Lambda function chosen runtime is wrong The AWS Lambda function is timing out The AWS Lambda function is missing IAM permissions

The AWS Lambda function is timing out

A Big Data processing company has created a distributed data processing framework that performs best if the network performance between the processing machines is high. The application has to be deployed on AWS, and the company is only looking at performance as its key measure. As a Solutions Architect, which deployment do you recommend? Use Spot Instances Use a Cluster placement group Optimize the EC2 kernel using EC2 User Data Use a Spread placement group

Use a Cluster placement group

As a Solutions Architect, you are responsible for all operations in the us-west-1 region, which has a complex infrastructure composed of several Lambda functions, API Gateways and DynamoDB tables. As part of the disaster recovery strategy, you would like to be in a position to quickly re-create your entire infrastructure in another region, if needed. Which technology choice is apt for this requirement? AWS OpsWorks AWS CloudFormation AWS Trusted Advisor AWS Elastic Beanstalk

AWS CloudFormation

A developer in your team has set up a classic 3 tier architecture composed of an Application Load Balancer, an Auto Scaling group managing a fleet of EC2 instances, and an Aurora database. As a Solutions Architect, you would like to adhere to the security pillar of the well-architected framework. How do you configure the security group of the Aurora database to only allow traffic coming from the EC2 instances? Add a rule authorizing the Aurora security group Add a rule authorizing the ASG's subnets CIDR Add a rule authorizing the ELB security group Add a rule authorizing the EC2 security group

Add a rule authorizing the EC2 security group

As part of the design of a mobile application, a firm has decided to use a traditional serverless architecture using AWS Lambda, API Gateway & DynamoDB. The firm is looking for a technology that allows the users to connect through a Google login and have the capability to turn on MFA (Multi-Factor Authentication) to have maximum security. Ideally, the solution should be fully managed by AWS. Which technology do you recommend for managing the users' accounts? Write a Lambda function with Auth0 3rd party integration AWS Identity and Access Management (IAM) Amazon Cognito Enable the AWS Google Login Service

Amazon Cognito

A leading global consumer robot company designs and builds robots that empower people to do more both inside and outside the home. The company's engineers are building an ecosystem of robots to enable the idea of smart homes. The company is planning on distributing an additional sensor to install at people's homes, to measure and monitor the robotic movement. In order to provide adjustment commands, the sensors must insert the data in a database, from which a stream of changes will be analyzed and acted upon. The company would like this database to be horizontally scalable and highly available. The company would also like to have Auto Scaling capabilities and change the data schema over time, in case they update their devices. As a Solutions Architect, which database will you recommend? Amazon Relational Database Service (Amazon RDS) Amazon Aurora Amazon DynamoDB Amazon Redshift

Amazon DynamoDB

For security purposes, a team has decided to put their instances in a private subnet. They plan to deploy VPC endpoints so that these instances can access AWS services privately. The members of the team would like to know about the only two AWS services that require a Gateway Endpoint instead of an Interface Endpoint. As a solutions architect, which of the following services would you suggest for this requirement? (Select two) Amazon Simple Queue Service (SQS) Amazon Simple Notification Service (SNS) Amazon Kinesis Amazon S3 DynamoDB

Amazon S3 DynamoDB
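
A hedged Python/boto3 sketch creating the two Gateway endpoints; the VPC ID, route table ID, and region in the service names are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"           # placeholder
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # route table of the private subnet (placeholder)

# S3 and DynamoDB are the two services that use Gateway endpoints;
# traffic is routed via entries added to the route table rather than through an ENI
for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=[ROUTE_TABLE_ID],
    )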

You have developed a new REST API leveraging the API Gateway, AWS Lambda and Aurora database services. Most of the workload on the website is read-heavy. The data rarely changes and it is acceptable to serve users outdated data for about 24 hours. Recently, the website has been experiencing high load and the costs incurred on the Aurora database have been very high. How can you easily reduce the costs while improving performance, with minimal changes? Switch to using an Application Load Balancer Add Aurora Read Replicas Enable AWS Lambda In Memory Caching Enable API Gateway Caching

Enable API Gateway Caching

Your firm has implemented a multi-tiered networking structure within the VPC - two public and two private subnets. The public subnets are used to deploy the Application Load Balancers, while the two private subnets are used to deploy the application on Amazon EC2 instances. As part of the firm's security and compliance needs, they need the Amazon EC2 instances to have access to the internet. As a Solutions Architect, what will you recommend, given that the solution has to be fully managed by AWS and work over IPv4? NAT Instances deployed in your public subnet NAT Gateways deployed in your public subnet Internet Gateways deployed in your private subnet Egress-Only Internet Gateways deployed in your private subnet

NAT Gateways deployed in your public subnet

You are working as an AWS architect for a government facility that manages a critical application for the government. You are asked to set up a Disaster Recovery (DR) mechanism with minimum costs to make sure that there is no loss of critical data in case of failure. As a Solutions Architect, which DR method will you suggest? Backup and Restore Warm Standby Pilot Light Multi-Site

Pilot Light

One of the fastest-growing rideshare companies in the United States uses AWS Cloud for its IT infrastructure. The rideshare service is available in more than 200 cities facilitating millions of rides per month. The company uses AWS to move faster and manage its exponential growth, leveraging AWS products to support more than 100 microservices that enhance every element of its customers' experience. The company wants to improve the ride-tracking system that stores GPS coordinates for all rides. The engineering team at the company is looking for a NoSQL database that has single-digit millisecond latency, can scale horizontally, and is serverless, so that they can perform high-frequency lookups reliably. As a Solutions Architect, which database do you recommend for their requirements? Amazon ElastiCache Amazon Relational Database Service (Amazon RDS) Amazon DynamoDB Amazon Neptune

Amazon DynamoDB

A digital advertising and marketing firm segments users and customers based on the collection and analysis of non-personally identifiable data from browsing sessions. This requires applying data mining methods across historical clickstreams to identify effective segmentation and categorization algorithms and techniques. The engineering team at the firm wants to create a daily big data analysis job leveraging Spark for analyzing online/offline sales and customer loyalty data to create customized reports on a client-by-client basis. The big data analysis job needs to read the data from Amazon S3 and output it back to S3. Finally, the results need to be sent back to the firm's clients. Which technology do you recommend to run the Big Data analysis job? Amazon Redshift Amazon Athena AWS Glue Amazon EMR

Amazon EMR

Your company is building a music sharing platform on which users can upload the songs of their choice. As a solutions architect for the platform, you have designed an architecture that will leverage a Network Load Balancer linked to an Auto Scaling Group across multiple availability zones. You are currently running with 100 Amazon EC2 instances with an Auto Scaling Group that needs to be able to share the storage layer for the music files. Which technology do you recommend? Instance Store Amazon Elastic File System (Amazon EFS) EBS volumes mounted in RAID 1 EBS volumes mounted in RAID 0

Amazon Elastic File System (Amazon EFS)

One of the largest consumer electronics companies has a suite of smart products and services which feature Artificial Intelligence (AI) technology. The company embeds Wi-Fi chips in these smart products, which allows these products to communicate with each other while learning about their user's behavioral patterns and environment. The company is planning on distributing a master sensor in people's homes to measure the key metrics from these smart products and make adjustments to the default settings for these products. In order to provide adjustment commands, the company would like to have a streaming system that supports ordered data based on the sensor's key, and also sustains high throughput messages (thousands of messages per second). As a solutions architect, which of the following AWS services would you recommend for this use-case? Amazon Simple Queue Service (SQS) Amazon Simple Notification Service (SNS) AWS Lambda Amazon Kinesis Data Streams (KDS)

Amazon Kinesis Data Streams (KDS)
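
For illustration, a Python/boto3 sketch of a sensor producer; using the sensor ID as the partition key is what gives per-key ordering while the stream scales across shards (stream name and payload are placeholders).

import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "smart-home-telemetry"   # placeholder

def publish_reading(sensor_id: str, reading: dict) -> None:
    # Records with the same PartitionKey land on the same shard,
    # so Kinesis preserves ordering per sensor at high aggregate throughput
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=sensor_id,
    )

publish_reading("sensor-42", {"temperature_c": 21.5, "ts": 1700000000})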

One of the biggest global oil and gas companies has recently migrated to the AWS Cloud. To reap the benefits of speed of data collection, flexibility, and rapid experimentation via the Internet of Things (IoT) devices, the company is planning on distributing a sensor to install at individual residents to measure the temperature and make adjustments to the heating system. To provide adjustment commands, the company would like to have a streaming system that performs real-time analytics on the data. Once the analytics are done, the company would like to send notifications back to the mobile applications of the users. As a solutions architect, which of the following AWS technologies would you recommend to send these notifications to the mobile applications? Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) Amazon Kinesis with Simple Email Service (Amazon SES) Amazon Kinesis with Simple Queue Service (SQS) Amazon Kinesis with Amazon Simple Notification Service (SNS)

Amazon Kinesis with Amazon Simple Notification Service (SNS)

A company's business logic is built on several microservices, running on-premise. They currently communicate using a message broker that supports the MQTT protocol. The company is looking at migrating these applications and the message broker to AWS Cloud without changing the application logic. Which technology allows you to get a managed message broker that supports the MQTT protocol? Amazon Simple Queue Service (SQS) Amazon Kinesis Data Streams Amazon Simple Notification Service (SNS) Amazon MQ

Amazon MQ

A leading social media startup has developed a mobile app that allows users to create custom animated videos and share them with their friends. The app has grown at a rate of 150% new users month over month for the last year. The startup is now moving to AWS Cloud to better manage the IT infrastructure and scale efficiently. The engineering team is evaluating various AWS services as part of the solution stack for the data store layer. The AWS service should be able to handle complicated queries such as "What is the number of likes on the videos that have been posted by friends of user A?". As a solutions architect, which of the following services would you recommend? Amazon ElasticSearch Amazon Neptune Amazon Redshift Amazon Aurora

Amazon Neptune

An application deployed to Elastic Beanstalk uses Amazon DynamoDB as the data layer. Recently, your database has seen a spike in writes and your users often get errors from your application because the writes fail with a provisioned throughput exception. You would like to prevent your users from seeing these errors while guaranteeing them that the data they're trying to write to the backend will be written. You have decided to decouple the application layer from the database layer and dedicate a worker process to writing the data to DynamoDB. Which middleware do you recommend using that can scale infinitely and meet these requirements? DynamoDB DAX Kinesis Data Streams Amazon Simple Notification Service (SNS) Amazon Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS)
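
A minimal sketch of the decoupling, assuming a hypothetical queue URL and table name: the application enqueues writes, and a separate worker drains the queue and writes to DynamoDB at a sustainable rate, deleting each message only after a successful write.

import json
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.resource("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/write-buffer"   # placeholder
table = dynamodb.Table("app-data")                                            # placeholder

def enqueue_write(item: dict) -> None:
    # Called by the application layer instead of writing to DynamoDB directly
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))

def worker_loop() -> None:
    # Runs in the dedicated worker process
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            table.put_item(Item=json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])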

Due to COVID-19 pandemic, a large social media company has asked more than 90% of its employees to work remotely. The rapid growth of remote and mobile workers has put tremendous pressure on IT to provide fast, easy access to corporate applications from the device of choice. The company is looking for an AWS service to help mobile and remote employees access the applications needed, by delivering a cloud desktop, accessible anywhere with an internet connection, using any supported device. As a Solutions Architect, which of the following AWS services would you recommend for this use-case? AWS Organizations AWS AppSync Amazon Workspaces AWS Single Sign-On

Amazon Workspaces

A company has recently created a new department to handle their services workload. An IT team has been asked to create a custom VPC to isolate the resources created in this new department. They have set up the public subnet and internet gateway (IGW). However, they are not able to ping the Amazon EC2 instances (which have Elastic IPs) launched in the newly created VPC. As a Solutions Architect, the team has requested your help. How will you troubleshoot this scenario? (Select two) Check if the route table is configured with the IGW Disable Source / Destination check on the EC2 instance Create a secondary IGW to attach to the public subnet, move the current IGW to the private subnet, and rewrite the route tables Check if the security groups allow ping from the source Contact AWS support to map your VPC with the subnet

Check if the route table is configured with the IGW Check if the security groups allow ping from the source

A company helps its customers legally sign highly confidential contracts. To meet the strong industry requirements, the company must ensure that the signed contracts are encrypted using the company's proprietary algorithm. The company is now migrating to AWS Cloud using AWS S3 and would like you, the solution architect, to advise them on the encryption scheme to adopt. What do you recommend? SSE-KMS Client Side Encryption SSE-S3 SSE-C

Client Side Encryption

As a Solutions Architect, you have set up a database on a single EC2 instance that has an EBS volume of type gp2. You currently have 300GB of space on the gp2 device. The EC2 instance is of type m5.large. The database performance has recently been poor and upon looking at CloudWatch, you realize the IOPS on the EBS volume is maxing out. The disk size of the database must not change because of a licensing issue. How do you troubleshoot this issue? Stop the CloudWatch agent to improve performance Convert the gp2 volume to an io1 Increase the IOPS on the gp2 volume Convert the EC2 instance to an i3.4xlarge

Convert the gp2 volume to an io1

A photo hosting service publishes a master pack of beautiful mountain images, every month, that is over 50 GB in size and downloaded all around the world. The content is currently hosted on EFS and distributed by Elastic Load Balancing (ELB) and Amazon EC2 instances. The website is experiencing high load each month and very high network costs. As a Solutions Architect, what can you recommend that won't force an application refactor and will drastically reduce network costs and EC2 load? Host the master pack on Amazon S3 for faster access Upgrade the EC2 instances Create a CloudFront distribution Enable Elastic Load Balancer caching

Create a CloudFront distribution

You have an S3 bucket that contains files in different subfolders, for example s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is newly uploaded, it is viewed several times. But after 45 days, analytics show that the image files are on average rarely requested, while the thumbnails still are. After 180 days, you would like to archive both the image files and the thumbnails. Overall, you would like to remain highly available so that the failure of a whole AZ does not put your data at risk. How can you implement an efficient cost strategy for your S3 bucket? (Select two) Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days Create a Lifecycle Policy to transition all objects to S3 Standard IA after 45 days Create a Lifecycle Policy to transition objects to Glacier using a prefix after 180 days Create a Lifecycle Policy to transition objects to S3 One Zone IA using a prefix after 45 days Create a Lifecycle Policy to transition all objects to Glacier after 180 days

Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days Create a Lifecycle Policy to transition all objects to Glacier after 180 days
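
A hedged Python/boto3 sketch of the two lifecycle rules from the answer, assuming the images live under the images/ prefix (the bucket and rule names are placeholders).

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",   # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                # Only the rarely-read image files move to Standard-IA after 45 days
                "ID": "images-to-standard-ia",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
            },
            {
                # Everything (images and thumbnails) is archived to Glacier after 180 days
                "ID": "all-to-glacier",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            },
        ]
    },
)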

A team has around 200 users, each of these having an IAM user account in AWS. Currently, they all have read access to an Amazon S3 bucket. The team wants 50 among them to have read and write access to the bucket. How can you provide these users access in the least possible time, with minimal changes? Update the S3 bucket policy Create a policy and assign it manually to the 50 users Create a group, attach the policy to the group and place the users in the group Create an MFA user with read/write access and link the 50 IAM users with MFA

Create a group, attach the policy to the group and place the users in the group

The engineering team at a global e-commerce company is currently reviewing their disaster recovery strategy. The team has outlined that they need to be able to quickly recover their application stack with a Recovery Time Objective (RTO) of 5 minutes, in all of the AWS Regions that the application runs. The application stack currently takes over 45 minutes to install on a Linux system. As a Solutions architect, which of the following options would you recommend as the disaster recovery strategy? Store the installation files in Amazon S3 for quicker retrieval Use Amazon EC2 user data to speed up the installation process Create an AMI after installing the software and use this AMI to run the recovery process in other Regions Create an AMI after installing the software and copy the AMI across all Regions. Use this Region-specific AMI to run the recovery process in the respective Regions

Create an AMI after installing the software and copy the AMI across all Regions. Use this Region-specific AMI to run the recovery process in the respective Regions
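
For illustration, a Python/boto3 sketch that bakes the installation into an AMI once and copies it to the other Regions; the instance ID, AMI name, and Region list are placeholders.

import boto3

SOURCE_REGION = "us-east-1"
TARGET_REGIONS = ["eu-west-1", "ap-southeast-2"]   # placeholder list of DR Regions

ec2_src = boto3.client("ec2", region_name=SOURCE_REGION)

# 1. Bake the 45-minute installation into an AMI once
image = ec2_src.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance with the stack installed
    Name="app-stack-golden-ami-v1",
)
# (in practice, wait for the AMI to reach the 'available' state before copying)

# 2. Copy the AMI into every Region where the recovery process may run
for region in TARGET_REGIONS:
    ec2_dst = boto3.client("ec2", region_name=region)
    ec2_dst.copy_image(
        Name="app-stack-golden-ami-v1",
        SourceImageId=image["ImageId"],
        SourceRegion=SOURCE_REGION,
    )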

Your company is evolving towards a microservice approach for their website. The company plans to expose the website from the same load balancer, linked to different target groups with different URLs, that are similar to these - checkout.mycorp.com, www.mycorp.com, mycorp.com/profile, and mycorp.com/search. As a Solutions Architect, which Load Balancer type do you recommend to achieve this routing feature with MINIMUM configuration and development effort? Create an NGINX based load balancer on an EC2 instance to have advanced routing capabilities Create a Network Load Balancer Create an Application Load Balancer Create a Classic Load Balancer

Create an Application Load Balancer

What does this CloudFormation snippet do? (Select three)

SecurityGroupIngress:
  - IpProtocol: tcp
    FromPort: 80
    ToPort: 80
    CidrIp: 0.0.0.0/0
  - IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 192.168.1.1/32

It only allows the IP 0.0.0.0 to reach HTTP It configures an NACL's inbound rules It allows any IP to pass through on the HTTP port It prevents traffic from reaching on HTTP unless from the IP 192.168.1.1 It configures a security group's outbound rules It configures a security group's inbound rules It lets traffic flow from one IP on port 22

It allows any IP to pass through on the HTTP port It configures a security group's inbound rules It lets traffic flow from one IP on port 22

A junior developer has downloaded a sample Amazon S3 bucket policy to make changes to it based on new company-wide access policies. He has requested your help in understanding this bucket policy. As a Solutions Architect, which of the following would you identify as the correct description for the given policy?

{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}

It ensures the S3 bucket is exposing an external IP within the CIDR range specified, except one IP It ensures EC2 instances that have inherited a security group can access the bucket It authorizes an IP address and a CIDR to access the S3 bucket It authorizes an entire CIDR except one IP address to access the S3 bucket

It authorizes an entire CIDR except one IP address to access the S3 bucket

One of the largest Ed-tech platforms boasts of a learning-management system being used by more than 100 million users across the globe. Of late, the company has been finding it difficult to scale its business. That's because the company's most popular applications, including the learning-management system, are hosted on more than 30 on-premises data centers throughout the world. The engineering team at the company wants to migrate to AWS Cloud and is currently evaluating Amazon RDS as its main database. To ensure High Availability, the team wants to go for Multi-AZ deployment and they would like to understand what happens when the primary instance in Multi-AZ goes down. As a Solutions Architect, which of the following will you identify as the outcome of the scenario? The URL to access the database will change to the standby DB An email will be sent to the System Administrator asking for manual intervention The application will be down until the primary database has recovered itself The CNAME record will be updated to point to the standby DB

The CNAME record will be updated to point to the standby DB

As a solutions architect, you have created a solution that utilizes an Application Load Balancer with stickiness and an Auto Scaling Group (ASG). The ASG spans 2 Availability Zones (AZs). AZ-A has 3 EC2 instances and AZ-B has 4 EC2 instances. The ASG is about to go into a scale-in event due to the triggering of a CloudWatch alarm. What will happen under the default ASG configuration? A random instance in AZ-A will be terminated The instance with the oldest launch configuration will be terminated in AZ-B An instance in AZ-A will be created A random instance will be terminated in AZ-B

The instance with the oldest launch configuration will be terminated in AZ-B

A small rental company has 5 employees, all working under the same AWS cloud account. These employees deployed their applications built for various functions, including billing, operations, finance, etc. Each of these employees has been operating in their own VPC. Now, there is a need to connect these VPCs so that the applications can communicate with each other. Which of the following is the MOST cost-effective solution for this use-case? Use an Internet Gateway Use a Direct Connect Use VPC peering Use a NAT Gateway

Use VPC peering

A CRM web application was written as a monolith in PHP and is facing scaling issues because of performance bottlenecks. The CTO wants to re-engineer towards microservices architecture and expose their website from the same load balancer, linked to different target groups with different URLs: checkout.mycorp.com, www.mycorp.com, mycorp.com/profile and mycorp.com/search. The CTO would like to expose all these URLs as HTTPS endpoints for security purposes. As a solutions architect, which of the following would you recommend as a solution that requires MINIMAL configuration effort? Use a wildcard SSL certificate Use an HTTP to HTTPS redirect Use SSL certificates with SNI Change the ELB SSL Security Policy

Use SSL certificates with SNI

A company runs a popular photo-sharing website on the AWS Cloud. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is leveraging an RDS PostgreSQL database. The website is experiencing high read traffic and the Lambda functions are putting an increased read load on the RDS database. The architecture team is planning to increase the read throughput of the database, without changing the application's core logic. As a Solutions Architect, what do you recommend? Use Amazon RDS Read Replicas Use Amazon RDS Multi-AZ feature Use Amazon ElastiCache Use Amazon DynamoDB

Use Amazon RDS Read Replicas

A company runs a niche e-commerce website on the AWS Cloud. As a Solutions Architect, you've designed the architecture of the website to follow some serverless pattern on the API side, with API Gateway and AWS Lambda. The backend is leveraging an RDS Aurora MySQL database. The web portal was initially launched in the Americas, and it has been doing well and the company would like to expand it to Europe, where a read-only version will be available to improve latency. You plan on deploying the API Gateway and AWS Lambda using CloudFormation, but would like to have a read-only copy of your data in Europe as well. As a Solutions Architect, what do you recommend? Use Aurora Read Replicas Use Aurora Multi-AZ Use a DynamoDB Streams Create a Lambda function to periodically back up and restore the Aurora database in another region

Use Aurora Read Replicas

You started a new job as a Solutions Architect in a big company that has both AWS experts and people learning AWS. You would like to have everyone empowered for building and configuring best-practices driven architecture without making manual mistakes. Recently, an intern misconfigured a newly created RDS database which resulted in a production outage. How can you make sure to pass on the RDS specific best practices into a reusable infrastructure template to be used by all your AWS users? Use CloudFormation to manage RDS databases Store your recommendations in a custom Trusted Advisor rule Create a Lambda function that sends emails when it finds misconfigured RDS databases Attach an IAM policy to interns preventing them from creating an RDS database

Use CloudFormation to manage RDS databases

You are a long-time Solutions Architect for your company that has been traditionally operating with an on-premise data center. As part of the new strategic direction from the CTO, you are adopting hybrid cloud infrastructure to leverage some AWS services such as S3. Your company has strong security requirements and needs the connection between your on-premise data center and AWS to be private. In case of failures though, it needs to guarantee uptime over security and is willing to use the public internet. What do you recommend? (Select two) Use Egress Only Internet Gateway as a backup connection Use Direct Connect as a primary connection Use Site to Site VPN as a primary connection Use Direct Connect as a backup connection Use Site to Site VPN as a backup connection

Use Direct Connect as a primary connection Use Site to Site VPN as a backup connection

A niche social media application allows users to connect with sports athletes. As a solutions architect, you've designed the architecture of the application to be fully serverless using API Gateway & AWS Lambda. The backend is leveraging a DynamoDB table. Some of the star athletes using the application are highly popular, and even though the RCUs on the DynamoDB table have been increased, the table is still experiencing a hot key (hot partition) problem. What can you do to improve the performance of DynamoDB and eliminate the hot key problem without a lot of application refactoring? Use DynamoDB Global Tables Use DynamoDB Streams Use DynamoDB DAX Use Amazon ElastiCache

Use DynamoDB DAX

A company runs a popular dating website on the AWS Cloud. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is leveraging an RDS PostgreSQL database. Currently, you are using a classic username and password combination to connect the Lambda function to the RDS database. You would like to implement greater security at the authentication level, leveraging short-lived credentials. What will you choose? (Select two) Embed a credential rotation logic in the AWS Lambda, retrieving them from SSM Use IAM authentication from Lambda to RDS PostgreSQL Restrict the RDS database security group to the Lambda's security group Deploy AWS Lambda in a VPC Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

Use IAM authentication from Lambda to RDS PostgreSQL Attach an AWS Identity and Access Management (IAM) role to AWS Lambda
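
A minimal sketch of IAM database authentication from the Lambda function, assuming psycopg2 is packaged with the function and using placeholder endpoint and user names; the generated token is short-lived (15 minutes) and replaces the static password.

import boto3
import psycopg2   # assumed to be bundled in the Lambda deployment package or a layer

DB_HOST = "mydb.abc123example.us-east-1.rds.amazonaws.com"   # placeholder
DB_PORT = 5432
DB_USER = "lambda_app_user"   # database user granted the rds_iam role

rds = boto3.client("rds")

def lambda_handler(event, context):
    # The IAM role attached to the Lambda authorizes rds-db:connect for this user
    token = rds.generate_db_auth_token(DBHostname=DB_HOST, Port=DB_PORT, DBUsername=DB_USER)

    conn = psycopg2.connect(
        host=DB_HOST,
        port=DB_PORT,
        user=DB_USER,
        password=token,      # short-lived IAM auth token instead of a stored password
        dbname="appdb",      # placeholder
        sslmode="require",   # IAM authentication requires SSL
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT 1")
        return cur.fetchone()[0]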

Your company runs a website for evaluating coding skills. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is leveraging an RDS PostgreSQL database. Caching is implemented using a Redis ElastiCache cluster. You would like to increase the security of your authentication to Redis from the Lambda function, leveraging a username and password combination. As a solutions architect, which of the following options would you recommend? Use IAM Auth and attach an IAM role to Lambda Enable KMS Encryption Use Redis Auth Create an inbound rule to restrict access to Redis Auth only from the Lambda security group

Use Redis Auth

You are working for a SaaS (Software as a Service) company as a solutions architect and help design solutions for the company's customers. One of the customers is a bank and has a requirement to whitelist up to two public IPs when the bank is accessing external services across the internet. Which architectural choice do you recommend to maintain high availability, support scaling-up to 10 instances and comply with the bank's requirements? Use a Classic Load Balancer with an Auto Scaling Group (ASG) Use an Application Load Balancer with an Auto Scaling Group (ASG) Use a Network Load Balancer with an Auto Scaling Group (ASG) Use an Auto Scaling Group (ASG) with Dynamic Elastic IPs attachment

Use a Network Load Balancer with an Auto Scaling Group (ASG)

The engineering team at a leading e-commerce company is anticipating a surge in the traffic because of a flash sale planned for the weekend. You have estimated the web traffic to be 10x. The content of your website is highly dynamic and changes very often. As a Solutions Architect, which of the following options would you recommend to make sure your infrastructure scales for that day? Use a CloudFront distribution in front of your website Deploy the website on S3 Use an Auto Scaling Group Use a Route53 Multi Value record

Use an Auto Scaling Group

A digital media company needs to manage uploads of around 1TB from an application being used by a partner company. As a Solutions Architect, how will you handle the upload of these files to Amazon S3? Use Amazon S3 Versioning Use Direct Connect connection to provide extra bandwidth Use AWS Snowball Use multi-part upload feature of Amazon S3

Use multi-part upload feature of Amazon S3
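
A minimal Python/boto3 sketch: upload_file with a TransferConfig automatically switches to multipart upload for large objects, uploading parts in parallel and retrying them individually (the file path, bucket, key, and thresholds below are placeholders).

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Multipart kicks in above the threshold; parts are uploaded concurrently
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,   # 100 MB
    multipart_chunksize=100 * 1024 * 1024,   # 100 MB parts
    max_concurrency=10,
)

s3.upload_file(
    Filename="/data/media-package.tar",      # placeholder ~1 TB file from the partner app
    Bucket="partner-uploads-bucket",         # placeholder
    Key="uploads/media-package.tar",
    Config=config,
)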

