AWS Solutions Arch-Professional-CG-Test1


Which of the following is an example of a buffer-based approach to controlling costs?

A mobile image upload and processing service makes use of SQS to smooth an erratic demand curve.

A client is having a challenge with performance of a custom data collection application. The application collects data from machines on their factory floor at up to 1000 records per second. It uses a Python script to collect data from the machines and write records to a DynamoDB table. Unfortunately, under times of peak data generation, which only last 1-2 minutes at a time, the Python application has timeouts when trying to write to DynamoDB. They don't do any analytics on the data but only have to keep it for potential warranty issues. They are willing to re-architect the whole solution if it will mean a more reliable process. Which of the following options would you recommend to give them the most scalable and cost-efficient solution?

Change the application design to use Kinesis to take in the data. Use Kinesis Firehose to spool the data files out to S3. Use S3 Lifecycle to transition the files to Glacier after a few days.

You are an AWS architect working for a B2B Mergers and Acquisitions consulting firm, which has 15 business units spread across several US cities. Each business unit has its own AWS account. For administrative ease and standardization of AWS usage patterns, corporate headquarters has decided to use AWS Organizations to manage the individual accounts by grouping them into relevant Organizational Units (OUs). You have assisted the Organization Administrator in writing and attaching Service Control Policies (SCPs) to the OUs. The SCPs have been configured as the default deny list, and they are written to explicitly deny actions wherever required. Data scientists in one of the business units are complaining that they are unable to spin up or access SageMaker clusters for building, training, and deploying machine learning models. Which of the following can be a possible cause, and how can this be fixed?

The IAM policy attached to the IAM role that the data scientists are assuming in the business unit account does not grant them SageMaker access. To fix this, add the following to the IAM policy statement for that role: Effect set to Allow, Action set to everything starting with sagemaker (sagemaker:*), Resource set to all (*).
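The statement described above can be sketched as a policy document. This is a minimal illustration of the answer, not an official template; in practice the Resource would usually be scoped more narrowly than "*".

```python
import json

# Sketch of the IAM policy statement described in the answer.
# "Everything starting with SageMaker" maps to the action wildcard "sagemaker:*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sagemaker:*",  # all SageMaker actions
            "Resource": "*"           # all resources
        }
    ]
}

print(json.dumps(policy, indent=2))
```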

Your company has contracted with a third-party security consulting company to perform some risk assessments on existing AWS resources. As part of a routine list of activities, they inform you that they will be launching a simulated attack on one of your EC2 instances. After the security company performed all their activities, they issued their report. In the report, they claim that they were successful at taking the EC2 instance offline because it stopped responding soon after the simulated attack began. However, you're quite certain the machine did not go offline and have the logs to prove it. What might explain the security company's experience?

The Security Company's traffic was seen as a threat and blocked dynamically by AWS.

You are working with a company to design a DR strategy for the data layer of their news website. The site serves customers globally, so a multi-region disaster recovery plan is required. The RTO is defined as 4 hours and the RPO as 5 minutes. Which of the following provides the most cost-effective DR strategy for this client?

Configure RDS Read Replicas to use cross-region replication from the primary to a backup region.

You have configured a VPC Gateway Endpoint to S3 from your VPC named VPC1 with a CIDR block of 10.0.0.0/16. You have lots of buckets but following least privilege, you want to only allow the instances in VPC1 access to the only two buckets they need. What is the most efficient way of doing this?

Create an endpoint policy that explicitly allows access to the two required buckets.
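An endpoint policy along those lines can be sketched as follows. The bucket names are placeholders, not from the question; note that both the bucket ARN and the object-level `/*` ARN are needed.

```python
import json

# Sketch of a VPC gateway endpoint policy scoped to two buckets
# (bucket names are illustrative placeholders).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-one",      # bucket-level actions (e.g. ListBucket)
                "arn:aws:s3:::bucket-one/*",    # object-level actions (e.g. GetObject)
                "arn:aws:s3:::bucket-two",
                "arn:aws:s3:::bucket-two/*",
            ],
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```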

You currently manage a website that consists of two web servers behind an Application Load Balancer. You currently use Route 53 as a DNS service. Going with the current trend of websites doing away with the need to enter "www" in front of the domain, you want to allow your users to simply enter your domain name. What is required to allow this?

Create an alias record to map the zone apex DNS name to the DNS name of your ELB load balancer.

A client calls you in a panic. They notice on their RDS console that one of their mission-critical production databases shows "Available" under the Maintenance column. They are extremely concerned that any sort of update to the database will negatively impact their DB-intensive mission-critical application. They at least want to review the update before it gets applied, but they are not sure when they will get around to that. What do you suggest they do?

Defer the updates indefinitely until they are comfortable.

A hotel chain has decided to migrate their business analytics functions to AWS to achieve higher agility when future analytics needs change, and to lower their costs. The primary data sources for their current on-premises solution are CSV downloads from Adobe Analytics and transactional records from an Oracle database. They've entered into a multi-year agreement with Tableau to be their visualization platform. For the time being, they will not be migrating their transactional systems to AWS. Which architecture will provide them with the most flexible analytics capability at the lowest cost?

Employ AWS Database Migration Service to continuously replicate Oracle transactional data to Amazon S3. Configure AWS Glue to aggregate the transactional data from S3 for each dimension into Amazon Redshift. Use AWS Glue to write the Adobe Analytics data to Amazon S3 in Parquet format. Install Tableau on Amazon EC2 and write queries to Amazon Redshift Spectrum.

You are considering a migration of your on-premises containerized web application and Couchbase database to AWS. Which migration approach has the lowest risk and lowest ongoing administration requirements after migration?

Import the containers into Elastic Container Registry. Deploy the web application and Couchbase database on ECS using an EC2 cluster. Once the AWS version is proven, do a final commit of the container state to the latest version in the registry and use Force New Deployment on the ECS console for the service. Change over DNS entries to point to the new AWS landscape.

As an AWS Solutions Architect, you are responsible for the configuration of your company's Organization in AWS. In the Organization, the Root is connected to two Organizational Units (OUs) called Monitor_OU and Project_OU. Monitor_OU has AWS accounts to manage and monitor AWS services. Project_OU has another two OUs as its children, named Team1_OU and Team2_OU. Both Team1_OU and Team2_OU have invited several AWS accounts as their members. To simplify management, a common administrative IAM role, intended for use by EC2 instances in their accounts, was added to all the AWS accounts under Team1_OU and Team2_OU. For security reasons, this role should not be deleted or modified by users in these AWS accounts. How would you implement this?

Make sure that Root node has a SCP policy that allows all actions. Create a SCP policy that restricts IAM principals from changing this particular IAM role. Attach the SCP policy to Project_OU.
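Such an SCP can be sketched as below. The role name is a placeholder for whatever the common administrative role is called; the action list is illustrative and could be written as `iam:*` against that role ARN for a stricter lockdown.

```python
import json

# Sketch of an SCP that denies modification/deletion of one specific role.
# The role name "CommonAdminRole" is a placeholder.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAdminRole",
            "Effect": "Deny",
            "Action": [
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:UpdateRole",
                "iam:UpdateAssumeRolePolicy",
            ],
            # Matches the role in every member account under the OU.
            "Resource": "arn:aws:iam::*:role/CommonAdminRole",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attaching this to Project_OU covers both Team1_OU and Team2_OU, since SCPs are inherited down the OU tree.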

You are trying to help a customer figure out a puzzling issue they recently experienced during a Disaster Recovery Drill. They wanted to test the failover capability of their Multi-AZ RDS instance. They initiated a reboot with failover for the instance and expected only a short outage while the standby replica was promoted and the DNS path was updated. Unfortunately after the failover, they could not reach the database from their on-prem network despite the database being in an "Available" state. Only when they initiated a second reboot with failover were they again able to access the database. What is the most likely cause for this?

The subnets in the subnet group did not have the same routing rules. The standby subnet did not have a valid route back to the on-prem network so the database could not be reached despite being available.

You have built an amazing new machine learning algorithm that you believe would be of benefit to many paying business customers. You want to expose it as a REST API to your customers and offer three different consumption levels: Silver, Gold and Platinum. The backend is completely serverless using Lambda functions. What is the most efficient and least cost way to make your API available for paying customers with a per-request pricing model?

Set up your own API Gateway Serverless Developer Portal to create API keys for subscribers. Register as a seller with AWS Marketplace and specify the usage plans and developer portal. Submit a product load form with a dimension named "apigateway" of the "requests" type. Create a metering IAM role to allow metrics to be sent to AWS Marketplace. Associate your provided Product Code with the corresponding usage plan.

You have a running EC2 instance and the name of its SSH key pair is "adminKey". The SSH private key file was accidentally put into a GitHub public repository by a junior developer and may have leaked. After you find this security issue, you immediately remove the file from the repository and also delete the SSH key pair in the AWS EC2 Management Console. Which actions do you still need to take to prevent unexpected SSH access to the running EC2 instance?

Stop and terminate the instance immediately as someone can still SSH to the instance using the key. Launch a new instance with another SSH key pair. SSH to the EC2 instance using the new key.

You have just been informed that your company's data center has been struck by a meteor and it is a total loss. Your company's applications were not capable of being deployed with high availability so everything is currently offline. You do have recent VM images and Oracle DB backup stored off-site. Your CTO has made a crisis decision to migrate to AWS as soon as possible since it would take months to rebuild the data center. Which of the following options will get your company's most critical applications up and running again in the fastest way possible?

Upload the VM images to S3 using the AWS CLI and create AMIs of the key servers. Manually start them in a single AZ. Stand up a single-AZ RDS instance and use the backup files to restore the database data.

Your company has come under some hard times, resulting in downsizing and cuts in operating budgets. You have been asked to create a process that will increase expense awareness and enhance your team's ability to contain costs. Given the reduction in staff, any sort of manual analysis would not be popular, so you need to leverage the AWS platform itself for automation. What is the best design for this objective?

Use AWS Budgets to create a budget. Choose to be notified when monthly costs are forecasted to exceed your updated monthly target.

An external auditor is reviewing your process documentation for a Payment Card Industry (PCI) audit. The scope of this audit will extend to your immediate vendors where you store, transmit or process cardholder data. Because you do store cardholder data in the AWS Cloud, the auditor would like to review AWS's PCI DSS Attestation of Compliance and Responsibility. How would you go about getting this document?

AWS Artifact

You work for a Clothing Retailer and have just been informed the company is planning a huge promotional sale in the coming weeks. You are very concerned about the performance of your eCommerce site because you have reached capacity in your data center. Just normal day-to-day traffic pushes your web servers to their limit. Even your on-prem load balancer is maxed out, mostly because that's where you terminate SSL and use sticky sessions. You have evaluated various options including buying new hardware but there just isn't enough time. Your company is a current AWS customer with a nice large Direct Connect pipe between your data center and AWS. You already use Route 53 to manage your public domains. You currently use VMware to run your on-prem web servers and sadly, the decision was made long ago to move the eCommerce site over to AWS last. Your eCommerce site can scale easily by just adding VMs, but you just don't have the capacity. Given this scenario, what is the best choice that would leverage as much of your current infrastructure as possible but also allow the landscape to scale in a cost-effective manner?

Use Server Migration Service to import a VM of a current web server into AWS as an AMI. Create an ALB on AWS. Define a target group using private IP addresses of your on-prem web servers and additional AWS-based EC2 instances created from the imported AMI. Use Route 53 to update your public facing eCommerce name to point to the ALB as an alias record.

Your company is preparing for a large sales promotion coming up in a few weeks. This promotion is going to increase the load on your web server landscape substantially. In past promotions, you've run into scaling issues because the region and AZ of your web landscape is very heavily used. Being unable to scale due to lack of resources is a very real possibility. You need some way to absolutely guarantee that resources will be available for this one-time event. Which of the following would be the most cost-effective in this scenario?

Use an On-Demand Capacity Reservation. If you only need a short-term guarantee of resource availability, it does not make sense to commit to a full year of Reserved Instances; On-Demand Capacity Reservations can be used instead.

Quality Auto Parts, Inc. has installed IoT sensors across all of their manufacturing lines. The devices send data to both AWS IoT Core and Amazon Kinesis Data Streams. Kinesis Data Streams triggers a Lambda function to format the data, and then forwards it to AWS IoT Analytics to perform monitoring and time-series analyses, and to take actions based on business processes. After an equipment failure on one of the manufacturing lines causes tens of thousands of dollars in revenue losses, it's determined that alarms for a specific piece of equipment were received seventy-five seconds after the issue originated, and that automated corrective action within a few seconds of the problem could have avoided the financial losses altogether. What changes should be made to the architecture to improve the latency of device alerts?

Add Amazon Kinesis Data Analytics as a second consumer of the Kinesis Data Stream to detect anomalies in the data. Invoke another AWS Lambda function from Kinesis Data Analytics to perform device corrective action when needed.

You build a CloudFormation stack for a new project. The CloudFormation template includes an AWS::EC2::Volume resource that specifies an Amazon Elastic Block Store (Amazon EBS) volume. The EBS volume is mounted in an EC2 instance and contains some important customer data and logs. However, when the CloudFormation stack is deleted, the EBS volume is deleted as well and the data is lost. You want to create a snapshot of the volume when the resource is deleted by CloudFormation. What is the easiest method for you to take?

Add a DeletionPolicy attribute in the CloudFormation template and specify "Snapshot" to have AWS CloudFormation create a snapshot of the EBS volume before deleting the resource.
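A template fragment with that attribute can be sketched as follows (shown here as a Python dict in JSON template form; the logical name, AZ, and size are placeholders). Note that DeletionPolicy sits at the resource level, alongside Type, not inside Properties.

```python
import json

# Minimal CloudFormation template fragment: an EBS volume that is
# snapshotted (rather than destroyed) when the stack is deleted.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "DeletionPolicy": "Snapshot",  # snapshot before CloudFormation deletes it
            "Properties": {
                "AvailabilityZone": "us-east-1a",  # placeholder AZ
                "Size": 100,                        # GiB, illustrative
            },
        }
    },
}

print(json.dumps(template, indent=2))
```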

A new project needs a simple, scalable Amazon Elastic File System (EFS) to be used by several Amazon EC2 instances located in different availability zones. The EFS file system has a mount target in each availability zone within a customized VPC. You have already attached a security group to EC2 instances that allows all outbound traffic. In the meantime, how would you configure the EFS volume to allow the ingress traffic from these EC2 instances?

Attach a security group to the mount targets in all availability zones. Allow the NFS port 2049 in its inbound rule, and specify the EC2 instances' security group as the source in that rule.

ACME Company has decided to migrate their on-premises 800 TB data warehouse and their 200 TB Hadoop cluster to Amazon Redshift. The migration plan calls for staging all of the data in Amazon S3 before loading it into Redshift in order to accomplish the desired distribution across the compute nodes in the cluster. ACME has an AWS Direct Connect 500 Mbps connection to AWS. However, calculations are showing the effective transfer rate won't allow them to complete the migration during the two-month time frame they have to complete the project. What migration approach should they implement to complete the project on time and with the least amount of effort for the migration team?

Attach multiple Snowball Edge devices to the on-premises network. Load the data warehouse data with an S3 interface supported by the data warehouse platform. Mount the Hadoop filesystem from a staging workstation using native connectors and transfer the data through the AWS CLI.

Your client is a small engineering firm which has decided to migrate their engineering CAD files to the cloud. They currently have an on-prem SAN with 30TB of CAD files and growing at about 1TB a month as they take on new projects. Their engineering workstations are Windows-based and mount the SAN via SMB shares. Propose a design solution that will make the best use of AWS services, be easy to manage and reduce costs where possible.

Use the AWS CLI to sync the CAD files to S3. Set up a Storage Gateway File Gateway locally and configure the CAD workstations to mount it over SMB.

You are working with a client to help them design a future AWS architecture for their web environment. They are open with regard to the specific services and tools used, but it needs to consist of a presentation layer and a data store layer. These options were discussed in a brainstorming session. As the consulting architect, which of these would you consider feasible?

Use the AngularJS framework to create a single-page application. Use the API Gateway to provide public access to DynamoDB to serve as the data layer. Store the web page on S3, and deploy it using CloudFront. When changes are required, upload the new web page to S3. Use S3 Events to trigger a Lambda function that expires the cache on CloudFront.

You have been contracted by a small start-up to help them get ready for their new product release--a web-based application that lets users browse through detailed photographs of the world's most famous paintings. The company is expecting a huge debut with very heavy traffic so the solution should be robust and scalable with the least amount of hands-on management. A key feature of their app is that they have created separate web sites specifically optimized for three different form factors: mobile phone, tablet and desktop. As such, they need the ability to detect the device and direct the requester to the proper version of the site. Which architecture will do this and meet the requirements?

Build a custom Lambda function to dynamically redirect the requester to the proper S3 origin based on device type. Associate a CloudFront distribution with a Lambda@Edge function.
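A Lambda@Edge origin-request handler along those lines can be sketched as below. It assumes CloudFront's device-detection headers (CloudFront-Is-Mobile-Viewer, CloudFront-Is-Tablet-Viewer) are forwarded to the origin, and the bucket domain names are placeholders.

```python
# Hedged sketch of an origin-request Lambda@Edge handler that swaps the S3
# origin based on CloudFront device-detection headers. Bucket domains are
# placeholders; the headers must be whitelisted in the cache behavior.
ORIGINS = {
    "mobile":  "mobile-site.s3.amazonaws.com",
    "tablet":  "tablet-site.s3.amazonaws.com",
    "desktop": "desktop-site.s3.amazonaws.com",
}


def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    device = "desktop"  # default when no device header is set
    if headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value") == "true":
        device = "mobile"
    elif headers.get("cloudfront-is-tablet-viewer", [{}])[0].get("value") == "true":
        device = "tablet"

    domain = ORIGINS[device]
    # Point the request at the device-specific S3 origin.
    request["origin"] = {
        "s3": {"domainName": domain, "path": "",
               "authMethod": "none", "customHeaders": {}}
    }
    headers["host"] = [{"key": "Host", "value": domain}]
    return request
```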

A global digital automotive marketplace is using a Lambda@Edge function with CloudFront to redirect incoming HTTP traffic to custom origins based on matching custom headers or client IP addresses against a list of redirection rules. The Lambda@Edge function reads these rules from a file, rules.json, which it fetches from an S3 bucket. The file changes every day because several teams in the company use it for different purposes, including but not limited to: (a) the security team uses the file to honeypot potentially malicious traffic, (b) the engineering team uses the file to do A/B testing on new features, and (c) the product team experiments with new mobile platforms by redirecting traffic from a specific kind of mobile device to a specific set of server farms. As a result, the file can be as big as 200 KB. Recently, the response time of the website has degraded. On investigation, you have found that this Lambda@Edge function is taking too long to fetch the rules.json file from the S3 bucket. The existing CI/CD pipeline deploys the file to a versioning-enabled S3 bucket when any change is committed to source control. Any change in rules.json must be reflected within 1 hour at all CloudFront edge locations. Select the two options below that will NOT work to improve the latency of fetching this file.

Change the Lambda@Edge code to save the contents of the rules.json file in a global variable so that it is cached in Lambda@Edge memory, with a TTL of 55 minutes, persisted between invocations. Lambda@Edge guarantees persistence of variables in memory between invocations.
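The caching mechanic the option describes can be sketched as below. Module-level state does survive between invocations when the execution environment is reused, but that reuse is best-effort, not guaranteed, which is what makes the option above incorrect as stated. The fetch callback is a stand-in for the S3 GetObject call.

```python
import time

# Best-effort warm-container cache for the rules file. The TTL mirrors the
# 55-minute figure in the option; fetch_fn stands in for the S3 read.
_CACHE = {"rules": None, "fetched_at": 0.0}
TTL_SECONDS = 55 * 60


def get_rules(fetch_fn, now=time.monotonic):
    """Return cached rules, re-fetching only when the TTL has expired.

    The cache lives only as long as this execution environment stays warm;
    a cold start always re-fetches.
    """
    if _CACHE["rules"] is None or now() - _CACHE["fetched_at"] > TTL_SECONDS:
        _CACHE["rules"] = fetch_fn()
        _CACHE["fetched_at"] = now()
    return _CACHE["rules"]
```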

An application in your company that requires extremely high disk IO is running on m3.2xlarge EC2 instances with Provisioned IOPS SSD EBS Volumes. The EC2 instances have been EBS-optimized to provide up to 8000 IOPS. During a period of heavy usage, the EBS volume on an instance failed, and the volume was completely non-functional. The AWS Operations Team restored the volume from the latest snapshot as quickly as possible, re-attached it to the affected instance and put the instance back into production. However, the performance of the restored volume was found to be extremely poor right after it went live, during which period the latency of I/O operations was significantly high. Thousands of incoming requests timed out during this phase of poor performance. You are the AWS Architect. The CTO wants to know why this happened and how the poor performance from a freshly restored EBS Volume can be prevented in the future. Which answer best reflects the reason and mitigation strategy?

When a data block is accessed for the first time on a freshly restored EBS Volume, EBS has to download the block from S3 first. This increases the I/O latency until all blocks are accessed at least once. To fix this, update the restoration process to run tools to read the entire volume before putting the instance back to production.
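The initialization step can be sketched as a sequential read of the whole device. On a real instance this is normally done with `dd` or `fio` against a path like /dev/xvdf; the function below is a generic Python equivalent that reads any given path end to end, which is the same access pattern.

```python
def initialize_volume(device_path, chunk_size=1024 * 1024):
    """Read every block of a device sequentially so that EBS pulls all
    blocks down from S3 before the instance returns to production.

    device_path is any readable path; on an EC2 instance it would be the
    block device of the restored volume (e.g. /dev/xvdf -- placeholder).
    Returns the total number of bytes read.
    """
    total = 0
    with open(device_path, "rb") as dev:
        while True:
            chunk = dev.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total
```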

The information security group at your company has implemented an automated approach to checking Amazon S3 object integrity for compliance reasons. The solution consists of scripts that launch an AWS Step Functions state machine to invoke AWS Lambda functions. These Lambda functions will retrieve an S3 object, compute its checksum, and validate the computed checksum against the entity tag checksum returned with the S3 object. However, an unexpected number of S3 objects are failing the integrity check. You discover the issue is with objects that were uploaded with S3 multipart upload. What would you recommend that the security group do to resolve this issue?

When performing S3 multipart uploads, calculate the checksum of the source file and store it in a custom metadata parameter. Have the Lambda function that compares checksums use the custom metadata parameter if it's present instead of the entity tag checksum. Reload all objects that were written with multipart upload that need to be included in the integrity check.
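The underlying issue is that a multipart object's ETag is not the MD5 of the whole object: it is the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. A sketch of that computation (tiny parts are used for illustration only; real multipart uploads require parts of at least 5 MB except the last):

```python
import hashlib


def multipart_etag(parts):
    """Compute the ETag S3 assigns to a multipart upload:
    hex(md5(md5(part1) + md5(part2) + ...)) + "-" + number_of_parts.
    """
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return f"{hashlib.md5(digests).hexdigest()}-{len(parts)}"


def whole_file_md5(data):
    """Plain MD5 of the full object -- what a naive checker computes."""
    return hashlib.md5(data).hexdigest()
```

Because the two values differ, comparing a locally computed MD5 against a multipart ETag always fails, hence the recommendation to carry the source checksum in custom metadata instead.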

A toy company needs to reduce customer service costs related to email handling. With the current process, representatives read the emails, determine the intent, identify next best action, and compile all information required for a response. The company would like to automate as much of the process as possible. They've already established connectivity to AWS for other applications, and they're planning to link their corporate email exchange to Amazon Simple Email Service (SES). Which architecture will provide them with the best cost reduction opportunity over their current solution?

Configure SES to write the emails to S3 and publish notifications to an Amazon Simple Notification Service topic. From the SNS publish, trigger an AWS Lambda function to invoke Amazon Comprehend to perform keyword extraction. Trigger another Lambda function to call Amazon SageMaker intent determination and next-best-action model endpoints. Based on confidence scores, have another Lambda function either build and send a response email through SES, or route the original email to a representative for handling

You are helping a client design their AWS network for the first time. They have a fleet of servers that run a very precise and proprietary data analysis program. It is highly dependent on keeping the system time across the servers in sync. As a result, the company has invested in a high-precision stratum-0 atomic clock and network appliance which all servers sync to using NTP. They would like any new AWS-based EC2 instances to also be in sync as close as possible to the on-prem atomic clock as well. What is the most cost-effective, lowest maintenance way to design for this requirement?

Configure a DHCP Option Set with the on-prem NTP server address and assign it to each VPC. Ensure NTP (UDP port 123) is allowed between AWS and your on-prem network.

Your team uses a CloudFormation stack to manage AWS infrastructure resources in production. As the AWS resources are used by a large number of customers, the update to the CloudFormation stack should be very cautious. Your manager asks for additional insight into the changes that CloudFormation is planning to perform when it updates the stack with a new template. The change needs to be reviewed before being applied by a DevOps engineer. What is the best method to achieve this requirement?

Create a CloudFormation Change Set using AWS Management Console or CLI, review the changes to see if the modifications are as expected and execute the changes to update the stack.

The security monitoring team informs you that an AWS Config rule reports two EC2 instances as noncompliant, and the team receives SNS notifications. They require you to fix the issues as soon as possible due to security concerns. You find that the Config rule uses a custom Lambda function to check whether EBS volumes are encrypted using a key with imported key material. However, at the moment the EBS volumes on the EC2 instances are not encrypted at all. You know that the EC2 instances are owned by developers, but you do not know the details of how the instances were created. What is the best way for you to address the issue?

Create a Customer Managed Key (CMK) in KMS with imported key material. Create a snapshot of the EBS volume. Copy the snapshot and encrypt the new one with the new CMK. Then create a volume from the snapshot. Detach the original volume and attach the new encrypted EBS to the same device name of the instance.

Your team is architecting an application for an insurance company. The application will use a series of machine learning methods encapsulated in an API call to evaluate claims submitted by customers. Whenever possible, the claim is approved automatically, but in some cases where the ML API is unable to determine approval, the claim is routed to a human for evaluation. Given this scenario, which of the following architectures would be most aligned with current AWS best practices?

Create a State Machine using Step Functions and a Lambda function for calling the API. Intake the claims into an S3 bucket configured with a CloudWatch Event. Trigger the Step Function from the CloudWatch Event. Create an Activity Task after the API check to email an unapproved claim to a human.
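The flow can be sketched as an Amazon States Language definition with a Choice state branching between automatic approval and a human-review Activity task. The ARNs, state names, and the `$.approved` path are placeholders.

```python
import json

# Illustrative ASL definition for the claim-evaluation flow.
# All ARNs and the "$.approved" output field are placeholders.
state_machine = {
    "StartAt": "EvaluateClaim",
    "States": {
        "EvaluateClaim": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:EvaluateClaim",
            "Next": "ApprovalChoice",
        },
        "ApprovalChoice": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.approved", "BooleanEquals": True,
                 "Next": "AutoApprove"}
            ],
            "Default": "HumanReview",
        },
        "AutoApprove": {"Type": "Succeed"},
        "HumanReview": {
            "Type": "Task",
            # An Activity task pauses here until a human worker polls the
            # activity and reports a result.
            "Resource": "arn:aws:states:us-east-1:123456789012:activity:ReviewClaim",
            "End": True,
        },
    },
}

print(json.dumps(state_machine, indent=2))
```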

Your company is bringing to market a new Windows-based application for Computer Aided Manufacturing. As part of the promotion campaign, you want to allow users an opportunity to try the software without having to purchase it. The software is quite complex and requires access to compute capabilities not available on local devices, so it's not conducive to allowing the public to download and install it on their own systems. Rather, you want to control the installation and configuration, so you want something such as a VDI concept. You'll also need a landing page as well as a custom subdomain (demo.company.com), and you must limit users to 1 hour of use at a time to contain costs. Which of the following would you recommend to minimize cost and complexity?

Create a landing page in HTML and deploy to an S3 bucket configured as a Static Web Host. Use Route 53 to create a DNS record for the "demo" subdomain as alias record for the S3 bucket. Deploy your application using Amazon AppStream. Set Maximum Session Duration for 1 hour.

You are consulting for a large multi-national company that is designing their AWS account structure. The company policy says that they must maintain a centralized logging repository but localized security management. For economic efficiency, they also require all sub-account charges to roll up under one invoice. Which of the following solutions most efficiently addresses these requirements?

Create a stand-alone consolidated logging account and configure all sub-account CloudWatch and CloudTrail activity to route to that account. Use an SCP to restrict sub-accounts from changing CloudWatch and CloudTrail configuration. Configure consolidated billing under a single account and register all sub-accounts to that billing account. Create localized IAM Admin accounts for each sub-account.

Your company has an online shopping web application. It has adopted a microservices architecture approach, and a standard SQS queue is used to receive the orders placed by the customers. A Lambda function sends orders to the queue and another Lambda function fetches messages from the queue and processes them. On some occasions the message in the queue cannot be handled properly. For example, when an order has a deleted product ID, the message cannot be consumed successfully and is returned to the queue. The problematic messages in the queue keep growing and the ability to process normal messages is affected. You need a mechanism to handle the message failure and isolate error messages for further analysis. Which method would you choose?

Create a standard queue as the dead letter queue and configure a redrive policy to put error messages to the dead letter queue. Analyze the contents of messages in the dead letter queue to diagnose the issues.
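The redrive policy can be sketched as a queue attribute (the shape boto3's `set_queue_attributes` expects). The DLQ ARN and the receive count of 5 are placeholders; after `maxReceiveCount` failed receives, SQS moves the message to the dead-letter queue.

```python
import json

# Sketch of a redrive policy routing repeatedly failed messages to a DLQ.
# The queue ARN and maxReceiveCount are illustrative.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    "maxReceiveCount": 5,  # after 5 failed receives, move to the DLQ
}

# SQS expects the RedrivePolicy attribute as a JSON string.
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}

print(attributes["RedrivePolicy"])
```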

You are helping a client to troubleshoot a problem. The client has several Ubuntu Linux servers in a private subnet within a VPC. The servers are configured to use IPv6 only, and must periodically communicate to the internet to get security patches for applications installed on them. Unfortunately, the servers are unable to reach the internet. An internet gateway has been created and attached to your VPC, and your public subnets have routes to your internet gateway. Which of the following could fix the issue?

Create an egress-only internet gateway and a custom route table, add a route that sends IPv6 traffic to the gateway, and then associate the route with your private subnet.
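The route itself can be sketched as the parameters for a route-table entry (the shape of a boto3 `create_route` call). The gateway and route table IDs are placeholders; the key point is that IPv6 default traffic (`::/0`) targets an egress-only internet gateway rather than a NAT gateway or internet gateway.

```python
# Sketch of the private subnet's IPv6 default route toward an
# egress-only internet gateway. IDs are illustrative placeholders.
route = {
    "RouteTableId": "rtb-0123456789abcdef0",
    "DestinationIpv6CidrBlock": "::/0",  # all IPv6 traffic
    "EgressOnlyInternetGatewayId": "eigw-0123456789abcdef0",
}

print(route["DestinationIpv6CidrBlock"])
```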

Your company has an Inventory Control database running on Amazon Aurora deployed as a single Writer role. Over the years more departments have started querying the database and you have scaled up when necessary. Now the Aurora instance cannot be scaled vertically any longer, but demand is still growing. The traffic is 90% Read based. Choose an option from below which would meet the needs of the company in the future.

Create multiple additional Readers within the Aurora cluster and alter the application to make use of Read-Write splitting

A sporting goods retailer runs WordPress on Amazon EC2 Linux instances to host their customer-facing website. An ELB Application Load Balancer sits in front of the EC2 instances in Auto Scaling Groups in two different Availability Zones of a single AWS region. The load balancer serves as an origin for Amazon CloudFront. Amazon Aurora provides the database for WordPress, with the master instance in one of the Availability Zones and a read replica in the other. Many custom and downloaded WordPress plugins have been installed. Much of the DevOps team's time is spent manually updating plugins across the EC2 instances in the two Availability Zones. The website suffers from poor performance between the Thanksgiving and Christmas holidays due to a high occurrence of product catalog lookups. What should be done to increase ongoing operational efficiency and performance during high-volume periods?

Deploy Amazon ElastiCache Memcached as a caching layer between the EC2 instances and the database. Install a WordPress plugin to read from Memcached. Implement Amazon Elastic File System to store the WordPress files and create mount targets in each EC2 subnet.

To ensure costs of AWS resources are charged to the proper budgets, you are trying to come up with a way to allocate the AWS bill to the correct cost centers. Which solution provides a TagOption library to allow administrators to easily manage tags on provisioned AWS products?

Deploy products within AWS Service Catalog and only allow users to deploy resources using the catalog. Use TagOptions to provide the users a list from which they can select their cost center. Activate the cost center tag in the Billing Console.

You have been asked to help develop a process for monitoring and alerting staff when malicious or unauthorized activity occurs. Your Chief Security Officer is asking for a solution that is both fast to implement but also very low maintenance. Which option best fits these requirements?

Enable Amazon GuardDuty to monitor for malicious and unauthorized behavior. Configure a custom threat list for the IPs from which you have seen suspicious activity in the past. Set up a Lambda function triggered from a CloudWatch event when findings are generated.
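The GuardDuty-to-Lambda wiring in this answer hinges on an event pattern that matches GuardDuty findings. A minimal sketch of such a CloudWatch Events (EventBridge) pattern, assuming an illustrative severity threshold of 7 (high); the threshold and the filtering choice are assumptions for this example:

```python
import json

# Event pattern that matches GuardDuty findings at or above the chosen
# severity; attach it to a rule whose target is the alerting Lambda.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}

print(json.dumps(guardduty_pattern, indent=2))
```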

On your last Security Penetration Test Audit, the auditors noticed that you were not effectively protecting against SQL injection attacks. Even though you don't have any resources that are vulnerable to that type of attack, your Chief Information Security Officer insists you do something. Your organization consists of approximately 30 AWS accounts. Which steps will allow you to most efficiently protect against SQL injection attacks?

Ensure all sub-accounts are members of an organization in the AWS Organizations service. Use Firewall Manager to create an ACL rule to deny requests that contain SQL code. Apply the ACL to WAF instances across all organizational accounts.
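For reference, a WAFv2 rule that blocks SQL injection looks roughly like the following; Firewall Manager can then deploy a rule group containing it to web ACLs across every account in the organization. The rule name, priority, and field choices here are illustrative:

```python
# Sketch of a WAFv2 rule using SqliMatchStatement to block requests
# whose body matches SQL injection patterns. Shape follows the WAFv2
# rule structure; names and metric values are placeholders.
sqli_rule = {
    "Name": "block-sqli",
    "Priority": 0,
    "Statement": {
        "SqliMatchStatement": {
            "FieldToMatch": {"Body": {}},
            "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-sqli",
    },
}
```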

You are consulting with a client who is in the process of migrating over to AWS. Their current on-prem Linux servers use RAID1 to provide redundancy. One of the big benefits they are looking forward to with moving to AWS is the ability to create snapshots of EBS volumes without downtime. Right now, they intend on migrating the servers over to AWS and retaining the same disk configuration. What is your advice for them?

Evaluate carefully why you want to use RAID on EBS going forward.

You have run out of root disk space on your Windows EC2 instance. What is the most efficient way to solve this?

From the AWS Console, select Modify Volume for the EBS volume. Enter the new size and confirm the change. Connect to your Windows instance and use Disk Manager to extend the newly resized volume.

Your company's DevOps manager has asked you to implement a CI/CD methodology and tool chain for a new financial analysis application that will run on AWS. Code will be written by multiple teams, each team owning a separate AWS account. Each team will also be responsible for a Docker image for their piece of the application. Each team's Docker image will need to include code from other teams. Which approach will provide the most operationally efficient solution?

Implement AWS CodePipeline from a single DevOps account to orchestrate builds in the team accounts. Perform cross-account access from AWS CodeCommit in the DevOps account to AWS CodeCommit in the team accounts to get the latest code. Perform cross-account access from AWS CodeBuild in the DevOps account to AWS CodeBuild in the team accounts to get the Docker images from ECR repositories. Perform deployments from AWS CodeDeploy in the DevOps account.

A clothing retailer has decided to run all of their online applications on AWS. These applications are written in Java and currently run on Tomcat application servers hosted on VMware ESXi Linux virtual machines on-premises. Because many of the applications require extremely high availability, they've deployed Oracle RAC as their database layer. Some business logic resides in stored procedures in the database. Due to the timing of other business initiatives, the migration needs to take place in a span of four months. Which architecture will provide the most reliable and operationally efficient solution?

Implement AWS Elastic Beanstalk to run the Tomcat servers in multiple Availability Zones. Run Oracle RAC on VMware Cloud on AWS in multiple Availability Zones. Connect the Tomcat servers to the database instances with VMware Cloud ENI route table entries. Use Oracle Recovery Manager to back up the database to Amazon S3.

A regional bank would like to create open banking APIs for payment information service providers that may not have AWS accounts, with the objective to perform automated digital transactions with them. The backend applications will be hosted on AWS with calls being made through Amazon API Gateway. AWS Direct Connect connections have been established to the bank's corporate network. The security team is concerned about potential authentication attacks through the open banking APIs. Which architecture will provide the highest level of authentication security for the solution?

Implement a Network Load Balancer as an authentication endpoint. Validate the service provider's certificate. Provide the bank's certificate to the service provider for mutual TLS authentication. Once the handshake has occurred, obtain an access token and send it to the service provider for API access. Establish an API endpoint on another Network Load Balancer. Deploy a reverse proxy on EC2 to front Amazon API Gateway calls.

You are helping a client migrate over an internal application from on-prem to AWS. The application landscape on AWS will consist of a fleet of EC2 instances behind an Application Load Balancer. The application client is an in-house custom application that communicates to the server via HTTPS and is used by around 40,000 users globally across several business units. The same exact application and landscape will be deployed in US-WEST-2 as well as EU-CENTRAL-1. Route 53 will then be used to redirect users to the closest region. When the application was originally built, they chose to use a self-signed 2048-bit RSA X.509 certificate (SSL/TLS server certificate) and embedded the self-signed certificate information into the in-house custom client application. Regarding the SSL certificate, which activities are both feasible and minimize extra administrative work?

Import the existing certificate and private key into AWS Certificate Manager in both regions. Assign the imported certificate to the Application Load Balancer in each region.

You have just completed the move of a Microsoft SQL Server database over to a Windows Server EC2 instance. Rather than logging in periodically to check for patches, you want something more proactive. Which of the following would be the most appropriate for this?

Make use of Patch Manager and the AWS-DefaultPatchBaseline predefined baseline

Your organization currently runs an on-premises Windows file server. Your manager has requested that you utilize the existing Direct Connect connection to AWS to provide a method of storing and accessing these files securely in the cloud. The method should be simple to configure, appear as a standard file share on the existing servers, use native Windows technology, and also have an SLA. Choose an option that meets these needs.

Map an SMB share to the Windows file server using Amazon FSx for Windows File Server and use RoboCopy to copy the files across

You are the solution architect for a research paper monetization company that makes large PDF research papers available for download from an S3 bucket. The S3 bucket is configured as a static website. A Route 53 CNAME record points the custom website domain to the website endpoint of the S3-hosted static website. As demand for downloads has increased throughout the world, the architecture board has decided to use a CloudFront web distribution that fetches content from the website endpoint of the static website hosted on S3. The Route 53 CNAME record will be modified to point at the CloudFront distribution URL. For security, it is required that all requests from client browsers use HTTPS. Additionally, the system must block anyone other than the CloudFront distribution from accessing the S3-hosted static website directly. Which approach meets the above requirements?

While setting up the CloudFront web distribution, use the website endpoint of the S3-hosted static website as the Origin Domain Name. Also set up an origin custom header: specify a header such as Referer, with its value set to some secret value. Set the bucket policy of the S3 bucket to allow s3:GetObject on the condition that the HTTP request includes the custom Referer header with the matching secret value. In the CloudFront web distribution, set the Viewer Protocol Policy property to HTTPS Only, or Redirect HTTP to HTTPS.
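The bucket-policy half of this setup can be sketched as follows, assuming a placeholder bucket name and secret value; the same secret is configured as the origin custom header on the CloudFront distribution, so only requests arriving through CloudFront carry it:

```python
import json

# S3 bucket policy: allow GetObject only when the request carries the
# secret Referer value that CloudFront injects as an origin custom
# header. Bucket name and secret below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontViaSecretReferer",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-papers-bucket/*",
        "Condition": {"StringEquals": {"aws:Referer": "some-long-secret-value"}},
    }],
}

print(json.dumps(policy, indent=2))
```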

You are setting up a new EC2 instance for an ERP upgrade project. You have taken a snapshot and built an AMI from your production landscape and will be creating a duplicate of that system for testing purposes in a different VPC and AZ. Because you will only be testing an upgrade process on this new landscape and it will not have the user volume of your production landscape, you select an EC2 instance that is smaller than the size of your production instance. You create some EBS volumes from your snapshots but when you go to mount those on the EC2 instances, you notice they are not available. What is the most likely cause?

You created them in a different availability zone than your testing EC2 instance.

You manage a group of EC2 instances that host a critical business application. You are concerned about the stability of the underlying hardware and want to reduce the risk of a single hardware failure impacting multiple nodes. Regarding Placement Groups, which of the following would be the best course of action in this case?

You would use the AWS CLI to move the existing instances into a spread placement group.

An application that collects time-series data uses DynamoDB as its data store and has amassed quite a collection of data: 1 TB in all. Over time, you have noticed a regular query has slowed down to the point where it is causing issues. You have verified that the query is optimized to use the partition key, so you need to look elsewhere for performance improvements. Which of the following, when done together, could improve performance without increasing AWS costs?

a. Archive off as much old data as possible to reduce the size of the table. b. Export the data then import it into a newly created table.
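Going forward, keeping the table small can be automated with DynamoDB's TTL feature, which deletes expired items at no extra cost; this is a complementary technique, not part of the answer above. A minimal sketch, assuming a hypothetical attribute name and a 90-day retention window:

```python
import time

# Assumed retention window for time-series items (illustrative).
RETENTION_SECONDS = 90 * 24 * 3600

def item_with_ttl(record, now=None):
    """Attach an epoch-seconds TTL attribute ("expires_at", a name we
    chose for this sketch) so DynamoDB deletes the item automatically
    once the retention window has passed."""
    now = time.time() if now is None else now
    return {**record, "expires_at": int(now) + RETENTION_SECONDS}
```

The table's TTL setting would then be enabled on the `expires_at` attribute, so old records age out instead of accumulating.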

You work for a specialty retail organization. They are building out their AWS VPC for running a few applications. They store sensitive customer information in two different encrypted S3 buckets. The applications running in the VPC access, store and process sensitive customer information by reading from and writing to both the S3 buckets. The company is also using a hybrid approach and has several workloads running on-premises. The on-premises datacenter is connected to their AWS VPC using Direct Connect. You have proposed that an S3 VPC Endpoint be created to access the two S3 buckets from the VPC so that sensitive customer data is not exposed to the internet. Select two correct statements from the following that relate to designing this solution using VPC Endpoint.

a. Bucket policies on the two S3 buckets can specify the ID of each VPC endpoint using the aws:sourceVpce condition key to further restrict which VPC endpoints can access each bucket. b. Each VPC endpoint is a gateway endpoint, which also requires correct routes in the route table associated with each subnet that needs to access the endpoint.
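Statement (a) can be sketched as a deny-unless bucket policy statement; the endpoint ID and bucket name below are placeholders:

```python
# Bucket policy statement that denies all S3 access to the bucket unless
# the request arrives through the specific VPC endpoint, via the
# aws:sourceVpce condition key. IDs and names are placeholders.
vpce_statement = {
    "Sid": "DenyUnlessFromVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::example-sensitive-bucket",
        "arn:aws:s3:::example-sensitive-bucket/*",
    ],
    "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}},
}
```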

You are troubleshooting a CloudFront setup for a client. The client has an Apache web server that is configured for both HTTP and HTTPS. It has a valid TLS certificate acquired from LetsEncrypt.org. They have also configured the Apache server to redirect HTTP to HTTPS to ensure a safe connection. In front of that web server, they have created a CloudFront distribution with the web server as the origin. The distribution is set for GET and HEAD HTTP methods, an Origin Protocol Policy of HTTP only, Minimum TTL of zero and Default TTL of 86400 seconds. When a web browser tries to connect to the CloudFront URL, the browser just spins and never reaches the web server. However, when a web browser points to the web server itself, the page loads properly. Which of the following, if done on its own, would most likely fix the problem?

a. Change the CloudFront distribution origin protocol policy to use only HTTPS. b. Remove the redirection policy on the origin server and allow it to accept HTTP.

Your client is a software company starting their initial architecture steps for their new multi-tenant CRM application. They are concerned about responsiveness for companies with employees scattered around the globe. Which of the following ideas should you suggest to help with the overall latency of the application?

a. Install key parts of the application in multiple AWS regions chosen to balance latency for geographically diverse users. Use Lambda@Edge to dynamically select the appropriate region based on the user's location. b. Architect the system to use as many static objects as possible with high TTL. Use CloudFront to retrieve both static and dynamic objects. POST and PUT new data through CloudFront.
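Idea (a)'s Lambda@Edge region selection can be sketched as an origin-request handler that inspects the CloudFront-Viewer-Country header (which CloudFront can be configured to forward); the domain names and country-to-region mapping below are purely illustrative:

```python
# Illustrative mapping; real deployments would cover many more countries.
EU_COUNTRIES = {"DE", "FR", "GB", "IT", "ES"}
ORIGINS = {
    "eu": "app.eu-central-1.example.com",  # hypothetical EU origin
    "us": "app.us-west-2.example.com",     # hypothetical US origin
}

def handler(event, context):
    """Lambda@Edge origin-request handler: rewrite the origin domain
    based on the viewer's country so requests hit the nearer region."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value", "")
    domain = ORIGINS["eu"] if country in EU_COUNTRIES else ORIGINS["us"]
    request["origin"]["custom"]["domainName"] = domain
    headers["host"] = [{"key": "Host", "value": domain}]
    return request
```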

You have just set up a Service Catalog portfolio and collection of products for your users. Unfortunately, the users are having difficulty launching one of the products and are getting "access denied" messages. What could be the cause of this?

a. The launch constraint does not have permissions to CloudFormation. b. The product does not have a launch constraint assigned. c. The user launching the product does not have the required permissions.

A client is trying to set up a new VPC from scratch. They are not able to reach the Amazon Linux web server instance launched in their VPC from their on-premises network using a web browser. You have verified the internet gateway is attached and the main route table is configured to route 0.0.0.0/0 to the internet gateway properly. The instance also is being assigned a public IP address. Which of the following could be other possible causes of the problem?

a. The outbound network ACL allows ports 80 and 22 only.

A client calls you in a panic. They have just accidentally deleted the private key portion of their EC2 key pair. Now they are unable to SSH into their Amazon Linux servers. Unfortunately, the keys were not backed up and are considered gone for good. What can the customer do to regain access to their instances? (Choose 2)

a. Use AWS Systems Manager Automation with the AWSSupport-ResetAccess document to create a new SSH key for your current instance. b. Stop the instance, detach its root volume, and attach it as a data volume to another instance. Modify the authorized_keys file, move the volume back to the original instance, and restart the instance.
