AWS Solutions Architect - Test 5 - SkillCert


A global data analytics firm has data centers in different countries all over the world. The staff regularly upload analytics, financial, and regulatory files from each of their respective data centers to a web portal deployed in AWS, which uses an S3 bucket named global-analytics-reports-bucket to durably store the data. The staff download various reports from a CloudFront distribution which uses the global-analytics-reports-bucket S3 bucket as the origin. You noticed that the staff are using both the CloudFront link and the direct Amazon S3 URLs to download the reports. The IT Security team of the company sees this as a security risk, and they recommended that you implement a way to prevent anyone from bypassing CloudFront and using the direct Amazon S3 URLs. What would you do to meet the above requirement?

1. Create a special CloudFront user called an origin access identity (OAI) and associate it with your CloudFront distribution.
2. Give the origin access identity permission to read the objects in your bucket.
3. Remove anyone else's permission to use Amazon S3 URLs to read the objects.
(Needs Review)
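
A minimal boto3 sketch of steps 2 and 3, assuming a hypothetical OAI ID (E2EXAMPLEOAI) and the bucket name from the scenario; replacing the bucket policy with an OAI-only statement is what removes direct S3 URL access for everyone else:

```python
import json
import boto3

s3 = boto3.client("s3")

oai_id = "E2EXAMPLEOAI"  # hypothetical OAI ID created for the distribution

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::global-analytics-reports-bucket/*",
        }
    ],
}

# With the objects kept private and only the OAI granted read access,
# downloads via the direct Amazon S3 URLs are blocked.
s3.put_bucket_policy(
    Bucket="global-analytics-reports-bucket",
    Policy=json.dumps(policy),
)
```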

A data analytics company has recently adopted a hybrid cloud infrastructure with AWS. They are in the business of collecting and processing vast amounts of data. Each data set generates up to several thousand files which can range from 10 MB to 1 GB in size. The archived data is rarely restored, and in case there is a request to retrieve it, the company has a maximum of 24 hours to send the files. The data sets can be searched using their file ID, set name, authors, tags, and other criteria. Which of the following options provides the most cost-effective architecture to meet the above requirements?

1. For each completed data set, compress and concatenate all of the files into a single Glacier archive.
2. Store the associated archive ID for the compressed files along with other search metadata in a DynamoDB table.
3. For retrieving the data, query the DynamoDB table for files that match the search criteria and then restore the files from the retrieved archive ID.
(Needs Review)
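
A rough boto3 sketch of steps 1 and 2, assuming a hypothetical vault name (analytics-archives), table name (DataSetIndex), and a set_name partition key; it uploads a compressed data set to Glacier and records the returned archive ID with the searchable metadata in DynamoDB:

```python
import boto3

glacier = boto3.client("glacier")
table = boto3.resource("dynamodb").Table("DataSetIndex")  # assumed table name

def archive_data_set(set_name: str, authors: list, tags: list, archive_path: str) -> str:
    # Upload the compressed, concatenated data set as a single Glacier archive.
    with open(archive_path, "rb") as body:
        response = glacier.upload_archive(
            vaultName="analytics-archives",  # assumed vault name
            accountId="-",
            archiveDescription=set_name,
            body=body,
        )

    # Store the archive ID together with the search metadata in DynamoDB.
    table.put_item(
        Item={
            "set_name": set_name,  # assumed partition key
            "archive_id": response["archiveId"],
            "authors": authors,
            "tags": tags,
        }
    )
    return response["archiveId"]
```

At retrieval time, the table is queried by the search criteria and the stored archive ID is passed to a Glacier retrieval job, which comfortably fits the 24-hour window.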

A leading aerospace engineering company has over 1 TB of aeronautical data stored on the corporate file server of their on-premises network. This data is used by many of their in-house analytical and engineering applications. The aeronautical data consists of technical files which can have a file size of a few megabytes to multiple gigabytes. The data scientists typically modify an average of 10 percent of these files every day. Recently, the management decided to adopt a hybrid cloud architecture to better serve their clients around the globe. You are tasked to migrate their applications to AWS over the weekend to minimize any business impact and system downtime. The on-premises data center has a 50-Mbps Internet connection which can be used to transfer all of the 1 TB of data to AWS but, based on your calculations, it will take at least 48 hours to complete this task. Which of the following options will allow you to move all of the aeronautical data to AWS to meet the above requirement?

1. Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI's S3 sync command.
2. Perform a final synchronization task on Friday after the end of business hours.
3. Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
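
A minimal sketch of the sync step, assuming the AWS CLI is installed on the file server and a hypothetical bucket name (aeronautical-data-bucket); the initial run copies everything, while the final Friday run only transfers the roughly 10 percent of files modified since the last sync:

```python
import subprocess

def sync_to_s3(local_path: str = "/data/aeronautical",
               bucket_uri: str = "s3://aeronautical-data-bucket") -> None:
    # aws s3 sync only uploads new or changed files, so repeated runs are cheap;
    # --delete keeps the bucket consistent with files removed on-premises.
    subprocess.run(
        ["aws", "s3", "sync", local_path, bucket_uri, "--delete"],
        check=True,
    )

# Run once a week before the migration, then again on Friday after business hours.
sync_to_s3()
```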

A Classic Load Balancer in AWS routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature, which enables the load balancer to bind a user's session to a specific EC2 instance. This ensures that all requests from the user during the session are sent to the exact same instance. What is the name of the cookie that Elastic Load Balancing creates, which is used to map the session to the EC2 instance?

AWSELB. Elastic Load Balancing creates a cookie, named AWSELB, that is used to map the session to the instance. You can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.
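
A short boto3 sketch of enabling duration-based stickiness on a Classic Load Balancer, which is what causes ELB to issue the AWSELB cookie; the load balancer name, policy name, and listener port are assumptions:

```python
import boto3

elb = boto3.client("elb")

# Create a duration-based cookie stickiness policy (ELB generates the AWSELB cookie).
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-classic-lb",     # assumed load balancer name
    PolicyName="my-duration-stickiness",  # assumed policy name
    CookieExpirationPeriod=300,           # stickiness duration in seconds
)

# Attach the policy to the listener so sessions are bound to one instance.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-classic-lb",
    LoadBalancerPort=80,
    PolicyNames=["my-duration-stickiness"],
)
```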

The latest version of an existing online accounting system, which is using Elastic Beanstalk, is ready to be deployed. The system is used by more than 10,000 clients worldwide 24 hours a day, 7 days a week, which means that the deployment must strictly have zero downtime. Which of the following deployment methods should you not use for your deployment?

All at once - Deploy the new version to all instances simultaneously. All instances in your environment are out of service for a short time while the deployment occurs. If the deployment fails, a system downtime will occur. Deploying a new version of your application to an environment is typically a fairly quick process. The new source bundle is deployed to an instance and extracted. Then the web container or application server picks up the new version and, if necessary, restarts. During deployment, your application might still become unavailable to users for a few seconds. You can prevent this by configuring your environment to use rolling deployments to deploy the new version to instances in batches.
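
A hedged boto3 sketch of switching an Elastic Beanstalk environment away from the all-at-once policy to rolling deployments in batches; the environment name and batch size are assumptions:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Roll new versions out in batches instead of taking every instance out of
# service at once, which avoids the downtime of the all-at-once policy.
eb.update_environment(
    EnvironmentName="accounting-prod",  # assumed environment name
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy", "Value": "Rolling"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSizeType", "Value": "Percentage"},
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "BatchSize", "Value": "25"},  # assumed 25% per batch
    ],
)
```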

A leading e-commerce company plans to launch a donation website for all the victims of the recent super typhoon in South East Asia for its Corporate and Social Responsibility program. The company will advertise the program on TV and in social media, which is why they anticipate a large amount of incoming traffic on their donation website. Donors can send their donations in cash, which can be transferred electronically, or they can simply post their home address where a team of volunteers can pick up their used clothes, canned goods and other donations. Donors can optionally write a positive and encouraging message to the victims along with their donations. These features of the donation website will eventually result in a high number of write operations on their database tier considering that there are millions of generous donors around the globe who want to help. Which of the following options is the best solution for this scenario?

Amazon DynamoDB with a provisioned write throughput. Use an SQS queue to buffer the large incoming traffic to your Auto Scaled EC2 instances, which process and write the data to DynamoDB. (Needs Review)
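
A minimal sketch of the buffering worker running on the Auto Scaled EC2 instances, assuming a hypothetical queue URL and a Donations table keyed by donation_id; it long-polls SQS and writes each message to DynamoDB before deleting it:

```python
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/donations-queue"  # assumed

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("Donations")  # assumed table name

def drain_queue_once() -> None:
    # Long-poll to reduce empty receives and API cost.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        donation = json.loads(message["Body"])
        table.put_item(Item=donation)  # assumes the item carries a donation_id key
        # Delete only after the write succeeds, so failed writes are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```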

You are developing a prototype distributed system which requires a multithreaded, event-based key/value cache store. The system will cache small arbitrary data, such as strings and objects, from the results of database and API calls. The cache layer should also enable the client programs to automatically identify all of the nodes in a cache cluster, and to initiate and maintain connections to all of these nodes. Which of the following is the most suitable and cost-effective service that you should use?

Amazon ElastiCache for Memcached

The national election will be held in 6 months' time and your startup won the bid to build an e-voting system. There will be millions of voters, which is why the system must be able to handle large incoming requests and also have a web page to show the real-time poll. What would be the best and most cost-effective method of architecting this system?

Build a JavaScript application using Angular or React for the UI of the voting system and host it in S3 static website hosting. Use CloudFront as the CDN and Route 53 for routing. Build an API using Lambda and API Gateway which communicates directly with DynamoDB to post and get the voting data. (Needs Review)
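
A minimal sketch of the Lambda handler behind API Gateway that records a vote in DynamoDB; the Votes table, candidate_id key, and request body shape are assumptions:

```python
import json
import boto3

table = boto3.resource("dynamodb").Table("Votes")  # assumed table name

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event["body"])

    # Atomically increment the running total for the chosen candidate.
    table.update_item(
        Key={"candidate_id": body["candidate_id"]},
        UpdateExpression="ADD vote_count :one",
        ExpressionAttributeValues={":one": 1},
    )
    return {"statusCode": 200, "body": json.dumps({"status": "recorded"})}
```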

A data analytics company is running a Redshift data warehouse for one of its major clients. In compliance with the Business Continuity Program of the client, they need to provide a Recovery Point Objective of 24 hours and a Recovery Time Objective of 1 hour. The data warehouse should be available even in the event that the entire AWS Region is down. Which of the following is the most suitable configuration for this scenario?

Configure Redshift to have automatic snapshots and do a cross-region snapshot copy to automatically replicate the current production cluster to the disaster recovery region.
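
A hedged boto3 sketch of that configuration, assuming the cluster identifier and disaster recovery region; automated snapshots cover the 24-hour RPO, and the cross-region copy keeps them usable if the primary region goes down:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Ensure automated snapshots are enabled (retained for 7 days here).
redshift.modify_cluster(
    ClusterIdentifier="analytics-prod",  # assumed cluster identifier
    AutomatedSnapshotRetentionPeriod=7,
)

# Automatically copy every new snapshot to the disaster recovery region.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-prod",
    DestinationRegion="us-west-2",       # assumed DR region
    RetentionPeriod=7,
)
```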

A cryptocurrency exchange company has recently signed up for a 3rd party online auditing system, which is also using AWS, to perform a regulatory compliance audit on their cloud systems. The online auditing system needs to access certain AWS resources in your network to perform the audit. In this scenario, which of the following approaches is the most secure way of providing access to the 3rd party online auditing system?

Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign it a policy that allows only the actions required for the compliance audit.
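
A short boto3 sketch of the cross-account role, assuming a hypothetical auditor account ID, external ID, and an example read-only audit policy; the trust policy is what lets the auditing system's account assume the role:

```python
import json
import boto3

iam = boto3.client("iam")

AUDITOR_ACCOUNT_ID = "111122223333"  # assumed 3rd-party auditor account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{AUDITOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # External ID guards against the confused-deputy problem.
        "Condition": {"StringEquals": {"sts:ExternalId": "audit-2024"}},
    }],
}

audit_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Only the read actions the audit actually needs (example set).
        "Action": ["ec2:Describe*", "s3:GetBucketPolicy", "iam:Get*", "iam:List*"],
        "Resource": "*",
    }],
}

iam.create_role(RoleName="ComplianceAuditRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="ComplianceAuditRole",
                    PolicyName="ComplianceAuditReadOnly",
                    PolicyDocument=json.dumps(audit_policy))
```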

You are working as a solutions architect for a large media company based in Los Angeles, California. They have a requirement to replicate a running MySQL RDS instance inside the AWS cloud to a read replica hosted in their on-premises data center. In this scenario, which is the most secure way of performing this replication?

Create an IPSec VPN connection using either OpenVPN or VPN/VGW through the Virtual Private Cloud service. Prepare an instance of MySQL running external to Amazon RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the Amazon RDS instance to the on-premises MySQL instance and start the replication from the Amazon RDS Read Replica.

You are working as a Cloud Engineer at a leading insurance company in South East Asia. Your team has recently deployed a new portal that enables your users to log in and manage their accounts, view their insurance plans and pay their monthly premiums. After a few weeks, you noticed a significant amount of incoming traffic from a country in which the insurance company does not operate. Later on, you see that the same set of IP addresses coming from the unsupported country is sending out massive amounts of requests to your portal, which has caused some minor performance issues. Which of the following is the best solution to implement to block the series of attacks coming from the identified set of IP ranges?

Create an inbound Network Access Control List (ACL) with explicit deny rules to block the attacking IP addresses.
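
A small boto3 sketch of adding an explicit inbound deny rule to the subnet's network ACL; the ACL ID, rule number, and attacking CIDR range are assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the attacking CIDR range. NACL rules are
# evaluated in ascending rule-number order, so the deny must come before any allow.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # assumed NACL of the public subnet
    RuleNumber=90,                         # lower than the existing allow rules
    Protocol="-1",                         # all protocols
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="203.0.113.0/24",            # assumed attacker IP range
)
```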

An online home loan system is deployed across multiple Availability Zones in the ap-southeast-2 region. As part of their Disaster Recovery Plan, the RTO must be less than 2 hours and the RPO must be 10 minutes. At 12:00 PM, there was a production incident in their database and the operations team found out that they cannot recover the transactions made from 10:30 AM onwards, or 1.5 hours ago. How can you change the current architecture to achieve the required RTO and RPO in case a similar system failure occurs again?

Create database backups every hour and store them in an S3 bucket with Cross-Region Replication enabled. Store the transaction logs in the same S3 bucket every 5 minutes. (Needs Review)

A supermarket chain is planning to launch an online shopping website to allow its loyal shoppers to buy their groceries online. Since there are a lot of online shoppers at any time of the day, the website should be highly available 24/7 and fault tolerant. Which of the following options provides the best architecture that meets the above requirement?

Deploy the website across 3 Availability Zones with Auto Scaled EC2 instances behind an Application Load Balancer and an RDS instance configured with Multi-AZ Deployments.

The software development team in your department is building an online real estate advertising website which is using a static S3 website containing the photos of the houses and condominiums for sale and for lease. One of the developers noticed that the photos from the static S3 website are not loading on their online real estate portal. You tried making all of the photos in the S3 bucket public, yet the issue still persists. What would you do to solve this issue?

Enable Cross-origin resource sharing (CORS) configuration in the bucket. (Needs review)
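
A minimal boto3 sketch of the CORS configuration on the photo bucket, assuming a hypothetical bucket name and portal origin; it allows the real estate portal's domain to fetch the images cross-origin:

```python
import boto3

s3 = boto3.client("s3")

# Allow the portal's domain to load images from the static website bucket.
s3.put_bucket_cors(
    Bucket="realestate-photos-bucket",  # assumed bucket name
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://portal.example.com"],  # assumed portal origin
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```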

A company stores confidential financial documents as well as sensitive corporate information in an Amazon S3 bucket. There is a new security policy that prohibits any public S3 objects in the company's S3 bucket. In the event that a public object is identified, the IT Compliance team must be notified immediately and the object's permissions must be remediated automatically. The notification must be sent as soon as a public object is created in the bucket. What is the MOST suitable solution that should be implemented by the Solutions Architect to comply with this data policy?

Enable object-level logging in the S3 bucket to automatically track S3 actions using CloudTrail. Set up an Amazon CloudWatch Events rule with an SNS Topic to notify the IT Compliance team when a PutObject API call with public-read permission is detected in the CloudTrail logs. Launch another CloudWatch Events rule that invokes an AWS Lambda function to change the newly uploaded public object to private. (Needs Review)
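
A hedged sketch of the remediation Lambda invoked by the CloudWatch Events rule; the event field paths and the SNS topic ARN are assumptions based on the shape of CloudTrail API-call events delivered through CloudWatch Events:

```python
import os
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Assumed topic ARN for the IT Compliance team's notifications.
TOPIC_ARN = os.environ.get(
    "COMPLIANCE_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:it-compliance"
)

def lambda_handler(event, context):
    # CloudTrail API-call events arrive under event["detail"] (assumed field paths).
    params = event["detail"]["requestParameters"]
    bucket, key = params["bucketName"], params["key"]

    # Remediate: flip the object's ACL back to private.
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")

    # Notify the IT Compliance team about the violation and the fix.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Public S3 object remediated",
        Message=f"Object s3://{bucket}/{key} was uploaded as public and has been set to private.",
    )
```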

You are working as a Solutions Architect for a major insurance company. They are planning to migrate a MySQL database from their on-premises data center to the AWS Cloud. It is used by a legacy batch application which has steady-state workloads in the morning but has its peak load at night for the end-of-day processing. You are instructed to set up the EC2 and EBS volumes which can handle a maximum of 450 GB of data and can also be used as the system boot volume for your EC2 instance. Which of the following is the most cost-effective storage type to use in this scenario?

General Purpose (gp2). (Needs Review) Other EBS volume types to know: Throughput Optimized HDD (st1), Cold HDD (sc1), and Provisioned IOPS (io1). Note that the HDD-backed types (st1 and sc1) cannot be used as boot volumes, and io1 is more expensive than gp2, which makes gp2 the most cost-effective choice here.

An online shopping website, which provides cheap bargains and discounts on various products, has recently moved from their previous hosting provider to AWS. Their architecture uses an Application Load Balancer (ALB) in front of an Auto Scaling group of Spot and On-Demand EC2 instances. You need to set up a CloudFront web distribution which uses a custom domain name and where the origin is set to point to the ALB. Which of the following is the correct implementation of an end-to-end HTTPS connection from the origin to the CloudFront viewers?

Import a certificate that is signed by a trusted third-party certificate authority, store it in ACM, then attach it to your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store. (Needs Review)

A multinational consumer goods company runs their website entirely in their on-premises data center. Due to the unprecedented growth of their popular product, they are expecting an increase in the incoming traffic to their website in the coming days. The CTO asked you to urgently make the necessary architectural changes to be able to handle the demand. You suggested migrating their application to AWS but the CTO decided that they need at least 3 months to implement a hybrid cloud architecture. In this scenario, what could they do with their current on-premises website to help offload some of the traffic and scale out to meet the demand in a cost-effective way?

Launch a CloudFront web distribution with the URL of the on-premises web application as the origin. Offload the DNS to AWS to handle CloudFront traffic. (Needs Review)

You are developing a cryptocurrency exchange platform in AWS which uses Lambda, API Gateway, and DynamoDB. It is expected that millions of investors will sign up and use your platform. Which of the following can reduce the load on your DynamoDB database?

Launch a new SQS queue to buffer the incoming load. Configure Lambda to pull messages from the queue, process the data, and persist it to DynamoDB.

A technology company asked you to develop an educational mobile app for students, with an exam feature that also allows them to submit their answers. You used React Native so the app can be deployed on both iOS and Android devices. You used Lambda and API Gateway for the backend services and DynamoDB as the database service. After a month, you released the app, which has been downloaded over 3 million times. However, there are a lot of users who complain about the slow processing of the app, especially when they are submitting their answers in the multiple-choice exams. The diagrams and images on the exam also take a lot of time to load, which is not a good user experience. Which of the following options provides the most cost-effective and scalable architecture for your app?

Launch an SQS queue and develop a custom service which integrates with SQS to buffer the incoming requests. Use a web distribution in CloudFront and Amazon S3 to host the diagrams, images, and other static assets of the mobile app.

A multinational food manufacturing company has recently decided to adopt a hybrid cloud architecture. They have set up their own AWS account and launched their new VPC. The company wants to have a fast and dedicated network connection from the on-premises data center to the VPC. They aim to increase bandwidth throughput and provide a more consistent network experience to their hybrid architecture than Internet-based connections. How would you implement this requirement in AWS?

Provision an AWS Direct Connect connection between the on-premises data center and your VPC.

Your government's technology agency has recently hired you to build a mobile tax app that allows users to upload their tax deductions and income records using their devices. They would also be able to view or download their uploaded files later on. These files are confidential, tax-related documents which need to be stored in a single, secure S3 bucket. Your mobile app's design is to allow the users to upload, view, and download their files directly from an Amazon S3 bucket via the mobile app. Since this app will be used by potentially hundreds of thousands of taxpayers in the country, you need to make sure proper user authentication and security features are in place. How should you implement your architecture when a new user registers on the app?

Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the 'AssumeRole' function in STS. Store these credentials in the mobile app's memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app.
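
A minimal sketch of the backend call that vends temporary credentials to the mobile app via STS AssumeRole; the role ARN, bucket name, session policy, and per-user prefix are assumptions:

```python
import json
import boto3

sts = boto3.client("sts")

def get_temporary_s3_credentials(user_id: str) -> dict:
    # Scope the session down to the user's own folder in the shared bucket.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::tax-documents-bucket/{user_id}/*",  # assumed bucket
        }],
    }
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/MobileAppS3Access",  # assumed role
        RoleSessionName=f"taxapp-{user_id}",
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,  # the app requests fresh credentials on each launch
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, and Expiration.
    return response["Credentials"]
```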

You are working for a top corporate and investment bank which has a loan origination application hosted on an EBS-backed Reserved EC2 instance. The financial data contains highly sensitive information and for regulatory compliance, the expected RTO is less than a minute. Which of the following options is the best architecture that will meet the above requirements?

Set up a RAID 1 EBS configuration to store the financial data.

You are working for a multinational investment bank which has multiple cloud architectures across the globe. They have a VPC in the US East region for their East Coast office and another VPC in the US West region for their West Coast office. There is a requirement to establish a low latency, high-bandwidth connection between their on-premises data center in Texas and both of their VPCs in AWS. As the Solutions Architect, how will you implement this in a cost-effective manner?

Set up an AWS Direct Connect Gateway with two virtual private gateways. Launch and connect the required Private Virtual Interfaces to the Direct Connect Gateway.

A financial online analytical processing application is hosted in AWS and uses Redshift. The application has two different groups: the stocks group and the mutual funds group, which both use data stored in a Redshift cluster. Each query issued by the stocks group takes approximately 2 hours to analyze while the queries from the mutual fund group only take about 10 minutes. You don't want the mutual fund group's queries to wait until the stock group's queries are completed. Your manager instructed you to implement a solution to optimize the processing time of your application. Which of the following is the most cost-effective and suitable solution for this scenario?

Set up two separate workload management groups for both the stocks and mutual fund group.

A high-performance computing (HPC) application has been launched in the company's Amazon VPC. The application is composed of hundreds of private EC2 instances running in a cluster placement group, which allows the instances to communicate with each other at network speeds of up to 10 Gbps. There is also a custom cluster controller EC2 instance that closely controls and monitors the system performance of each instance. The cluster controller has the same instance type and AMI as the other instances. It is configured with a public IP address and running outside the placement group. The Solutions Architect has been tasked to improve the network performance between the controller instance and the EC2 instances in the placement group. Which option provides the MOST suitable solution that the Architect must implement to satisfy the requirement while maintaining low-latency network performance?

Stop the custom cluster controller instance and move it to the existing placement group.

The website of a new travel and tours agency, which is deployed in AWS, only supports HTTP. To improve their SEO ranking and to provide more security for their customers, they decided to enable SSL on their website. They would also like to ensure a separation of roles between the Development team and the Security team in handling the sensitive SSL certificate. The Development team can log in to the EC2 instances but they should not have access to the SSL certificate, which only the Security team has exclusive control of. Currently, they are using an Application Load Balancer which distributes incoming traffic to an Auto Scaling group of On-Demand EC2 instances. In this scenario, which configuration option should you implement to satisfy the requirement?

Store the SSL certificate in IAM and authorize access only to the Security team using an IAM policy. Configure the Application Load Balancer to use the SSL certificate instead of the EC2 instances. (Needs Review)

A top Internet of Things (IoT) company has developed a wrist-worn activity tracker for soldiers deployed in the field. The device acts as a sensor to monitor the health and vital statistics of the wearer. It is expected that there would be thousands of devices that will send data to the server every minute and after 5 years, the number will increase to tens of thousands. One of the requirements is that you need to be able to accept the incoming data, run it through ETL to store it in a data warehouse, and archive the old data. The officers in the military headquarters should have a real-time dashboard to view the sensor data. What is the most suitable architecture to implement in this scenario?

Store the data directly in Amazon Kinesis and output the data to an S3 bucket. For archiving, create a lifecycle policy from S3 to Glacier. Use Lambda to process the data through EMR and send the output to Redshift.
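
A small boto3 sketch of the archiving part of this answer, assuming a hypothetical bucket, prefix, and 90-day threshold; the lifecycle rule transitions older sensor data from S3 to Glacier:

```python
import boto3

s3 = boto3.client("s3")

# Transition older sensor data from S3 to Glacier to reduce storage cost.
s3.put_bucket_lifecycle_configuration(
    Bucket="soldier-sensor-data",  # assumed bucket receiving the Kinesis output
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-sensor-data",
            "Filter": {"Prefix": "raw/"},  # assumed prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```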

An advertisement company has a NodeJS application that is hosted across multiple EC2 instances and has incoming traffic balanced by an Application Load Balancer. The web application allows the clients to upload their high-resolution advertising media files, which are stored on the EC2 instances. Each instance stores the files and a background task synchronizes the data between the other EC2 instances. Due to the growth of the company, the synchronization task can no longer cope with the high number of files being uploaded. In this scenario, what solution could you implement to improve the storage scalability and durability of the files in the most cost-effective way?

Store the media files in an S3 bucket.

A multinational bank has recently set up AWS Organizations to manage their multiple AWS accounts from their various business units. The Senior Solutions Architect attached the SCP below to an Organizational Unit (OU) to define the services that its member accounts can use:

{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["EC2:*", "S3:*"], "Resource": "*"}]}

In one of the member accounts under that OU, an IAM user tried to create a new S3 bucket but was unsuccessful. Which of the following is the root cause of this issue?

The IAM user in the member account does not have IAM policies which explicitly grant EC2 or S3 service actions. (Needs Review)

The biggest fast-food chain in Asia is planning to implement a location-based alert on their existing mobile app. If the user is in proximity of one of their restaurants, an alert will be shown on the mobile phone. The notification needs to happen in less than a minute while the user is still in the vicinity. Currently, the mobile app has 10 million users in the Philippines, China, Korea and other Asian countries. Which one of the following AWS architectures is the most suitable option for this scenario?

The mobile app will send device location to an SQS endpoint. Set up an API that utilizes an Application Load Balancer and an Auto Scaling group of EC2 instances, which will retrieve the relevant offers from DynamoDB. Use AWS Mobile Push to send offers to the mobile app. (Needs Review)

You are working for a digital media publishing company as an IT consultant. They have an online portal which is composed of a Classic Load Balancer and EC2 instances deployed across multiple Availability Zones. The architecture is using a combination of Reserved EC2 Instances to handle steady state load and On-Demand EC2 Instances to handle the peak load. Currently, the web servers operate at 90% utilization during peak load. Which of the following is the most cost-effective option to enable the online portal to quickly recover in the event that one of the Availability Zones is unavailable during peak load?

To handle the peak load in the most cost-effective manner, launch a Spot Fleet of EC2 instances with a diversified allocation strategy on all Availability Zones instead of On-Demand instances.

Your company has several AWS accounts for each separate department such as Accounting, Human Resources, IT, and many others. These accounts are not linked to each other and hence, it is quite difficult to have a consolidated view of all of the bills for each account. Which of the following options can you implement in order to have a unified view of all accounts and their respective billings that are used in your organization?

Use AWS Organizations, which enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. Use one master account and multiple member accounts for each department.

A company instructed their Solutions Architect to design a secure content management solution that can be accessed by its external custom applications via API calls. The solution should enable users to upload documents as well as download a specific version or the latest version of a document. There is also a requirement to enable customer administrators to simply submit an API call which can roll back changes to existing files sent to the system. Which of the following options is the MOST secure and suitable solution that the Architect should implement?

Use Amazon WorkDocs for document storage and utilize its user access management, version control, and built-in encryption. Integrate the Amazon WorkDocs Content Manager to the external custom applications. Develop a rollback feature to replace the current document version with the previous version from Amazon WorkDocs.

A privately funded aerospace manufacturer and sub-orbital spaceflight services company hosts its rapidly evolving applications in AWS. For their deployment process, they are using CloudFormation templates which are regularly updated to map the latest AMI IDs. It takes a lot of time to execute this on a regular basis, which is why you were instructed to automate this process. In this scenario, which of the following options is the most suitable solution that can satisfy the requirement?

Use CloudFormation with Systems Manager Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation with your CloudFormation template. (Needs Review)
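
A hedged boto3 sketch of the automation, assuming the template exposes a LatestAmiId parameter and uses the public Amazon Linux 2 SSM parameter path; it resolves the latest AMI from Parameter Store and triggers a stack update:

```python
import boto3

ssm = boto3.client("ssm")
cfn = boto3.client("cloudformation")

# Resolve the latest Amazon Linux 2 AMI ID from the public SSM parameter.
ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
)["Parameter"]["Value"]

# Update the existing stack, passing the fresh AMI ID to the (assumed)
# LatestAmiId template parameter instead of editing the template by hand.
cfn.update_stack(
    StackName="spaceflight-app",  # assumed stack name
    UsePreviousTemplate=True,
    Parameters=[{"ParameterKey": "LatestAmiId", "ParameterValue": ami_id}],
    Capabilities=["CAPABILITY_IAM"],
)
```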

You are working in a fintech startup as a Solutions Architect where you are setting up a cloud architecture that uses an Application Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances. To lower the overall cost, one EC2 instance should be terminated whenever the overall CPU utilization is at 15% or lower. In this scenario, how can you implement a cost-effective and scalable architecture to satisfy the requirement?

Use CloudWatch for the monitoring and configure the scale-in policy of the Auto Scaling group to terminate one EC2 instance when the CPU Utilization is 15% or below. (Needs Review)
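
A minimal sketch of the scale-in policy and the CloudWatch alarm that drives it; the Auto Scaling group name, cooldown, and evaluation periods are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "fintech-web-asg"  # assumed Auto Scaling group name

# Scale-in policy: remove one instance when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-in-on-low-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
    Cooldown=300,
)

# Alarm on the group's average CPU utilization being at or below 15%.
cloudwatch.put_metric_alarm(
    AlarmName="low-cpu-utilization",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Period=300,
    EvaluationPeriods=2,
    Threshold=15,
    ComparisonOperator="LessThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```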

The web portal of a top university is hosted on an Auto Scaling group of EBS-backed EC2 instances with Amazon Aurora as its database. The students can upload their research documents to the portal where the files are stored in one of the attached EBS Volumes. For data redundancy, there is a scheduled job that synchronizes the documents stored in one EBS Volume to all available EBS Volumes attached to all EC2 instances. The IT Manager noticed that the system performance is quite slow and he has instructed you to implement a solution to improve the architecture. In this scenario, what will you do to implement a scalable, high-throughput, POSIX-compliant file system?

Use EFS. (Needs Review) This question tests your understanding of EBS, EFS, and S3. The system performance is quite slow because the architecture doesn't provide the EC2 instances parallel shared access to the file documents. Remember that an EBS Volume can be attached to one EC2 instance at a time, hence no other EC2 instance can connect to that EBS Volume. Take note as well that the type of storage needed here is "file storage", which means that S3 is not the correct service to use because it is mainly used for "object storage". This is why the correct answer is EFS.

The leading media company in the country is building a voting system for their popular singing competition show on national TV. The viewers who watch the performances can visit their dynamic website to vote for their favorite singer. After the show has finished, it is expected that the site will receive millions of visitors who would like to cast their votes. Web visitors should log in using their social media accounts and then submit their votes. The webpage will display the winner after the show, as well as the vote total for each singer. You are hired to build the voting site and to ensure that it can handle the rapid influx of incoming traffic in the most cost-effective way possible. Which of the following architectures should you use to meet the requirement?

Use a CloudFront web distribution and an Application Load Balancer in front of an Auto Scaling group of EC2 instances. Use Amazon Cognito for user authentication. The web servers will process the user's vote and pass the result in an SQS queue. Set up an IAM Role to grant the EC2 instances permissions to write to the SQS queue. A group of EC2 instances will then retrieve and process the items from the queue. Finally, store the results in a DynamoDB table.

You are the Lead DevOps engineer in your team where you manage the entire AWS cloud infrastructure just like code. Your team's main focus is to make the build and deployment process management of AWS resources more effective. Your goal is to have a system where you can deploy different versions of your infrastructure, easily stage changes into different environments, pass variables to application environments, and programmatically manage your infrastructure just like application code. Which of the following options is the most suitable to use in this scenario?

Use a version control system like SVN or GIT and AWS CloudFormation to deploy and manage your infrastructure.

A FinTech startup has recently consolidated their multiple AWS accounts using AWS Organizations. They currently have two teams in their organization, a security team and a development team. The former is responsible for protecting their cloud infrastructure and making sure that all of their resources are compliant, while the latter is responsible for developing new applications that are deployed to EC2 instances. The security team is required to set up a system that will check if all of the running EC2 instances are using an approved AMI. However, the solution should not stop the development team from deploying an EC2 instance running on a non-approved AMI; the disruption is only allowed once the deployment has been completed. In addition, they have to set up a notification system that reports the compliance state of the resources. Which of the following is the most suitable solution that the security team should implement?

Use an AWS Config Managed Rule and specify a list of approved AMI IDs. This rule will check whether running EC2 instances are using specified AMIs. Configure AWS Config to stream configuration changes and notifications to an Amazon SNS topic which will send a notification for non-compliant instances.
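
A hedged boto3 sketch of the AWS Config managed rule, using the APPROVED_AMIS_BY_ID source identifier; the rule name and AMI IDs are assumptions. Because the rule only evaluates instances after they are running, it does not block the development team's deployments:

```python
import json
import boto3

config = boto3.client("config")

# Managed rule that flags running EC2 instances launched from non-approved AMIs.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-check",  # assumed rule name
        "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
        "InputParameters": json.dumps(
            {"amiIds": "ami-0abc1234def567890,ami-0fed9876cba543210"}  # assumed approved AMIs
        ),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)
# AWS Config can then stream configuration changes and compliance notifications
# to an SNS topic via its delivery channel, alerting on non-compliant instances.
```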

An Internet-of-Things (IoT) company is building a portal that stores data coming from its 20,000 gas sensors. The gas sensors, which have unique IDs, are used to detect a gas leak or other emissions inside the oil facility. Every 15 minutes throughout the day, each sensor will send a datapoint containing its ID, the current gas level data, and the timestamp. Each datapoint contains critical information coming from the gas sensors. The company would like to query the information coming from a particular gas sensor for the past week and would like to delete all data that is older than eight weeks. The application is using a NoSQL database, which is why they are using the Amazon DynamoDB service. How would you implement this in the most cost-effective way?

Use one table per week, with a composite primary key where the sensor ID is the partition key and the timestamp is the sort key. (Needs Review)
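
A short boto3 sketch of creating one weekly table with that composite key; the table-name pattern and on-demand billing mode are assumptions. Dropping a whole table that is older than eight weeks is cheaper than deleting its items one by one:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# One table per week, e.g. SensorData-2024-W07 (assumed naming convention).
dynamodb.create_table(
    TableName="SensorData-2024-W07",
    AttributeDefinitions=[
        {"AttributeName": "sensor_id", "AttributeType": "S"},
        {"AttributeName": "timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "sensor_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
# Querying one sensor's data for the past week is a single Query on sensor_id
# with a timestamp range; tables older than eight weeks are simply deleted.
```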

A company has a suite of IBM products in their on-premises data center such as IBM WebSphere, IBM MQ, and IBM DB2 servers. You are instructed to migrate all of their systems to AWS in the most cost-effective way and improve the availability of your cloud infrastructure. Which of the following is the MOST suitable solution that you have to implement to meet the requirement?

Use the AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) to convert, migrate, and re-architect the IBM Db2 database to Amazon Aurora. Set up an Auto Scaling group of EC2 instances with an ELB in front to migrate and re-host your IBM WebSphere. Migrate and re-platform IBM MQ to Amazon MQ in a phased approach.

You are instructed to design a solution to automatically recover impaired EC2 instances in your VPC, which typically entails manual steps before you can regain access. The goal is to automatically fix an instance that has become unreachable due to network misconfigurations, RDP issues, firewall settings, and many others to meet the compliance requirements. As the Solutions Architect, which of the following is the most suitable solution that you should implement to meet the above requirement?

Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using the Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document.
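
A short boto3 sketch of kicking off the automation, assuming a hypothetical instance ID and that the AWSSupport-ExecuteEC2Rescue document's required parameter is UnreachableInstanceId:

```python
import boto3

ssm = boto3.client("ssm")

# Run the EC2Rescue automation against the unreachable instance.
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ExecuteEC2Rescue",
    Parameters={"UnreachableInstanceId": ["i-0123456789abcdef0"]},  # assumed instance ID
)
print("Automation execution ID:", execution["AutomationExecutionId"])
```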

A leading electronics company is getting ready to do a major public announcement of its latest smartphone. Their official website uses an Application Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances, which are deployed across multiple Availability Zones with a Multi-AZ RDS MySQL database. In preparation for their new product launch, you checked the performance of their website and found that the database takes a lot of time to retrieve the data when there are over 100,000 simultaneous requests on the server. The static content such as the images and videos is promptly loaded as expected, but not the customer information that is fetched from the database. Which of the following could be done to solve this issue in a cost-effective way? (Select TWO.)

a. Add Read Replicas in RDS for each Availability Zone. b. Implement a caching system using ElastiCache in-memory cache on each Availability Zone.

A legal consulting firm is running a WordPress website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL database instance. Their website is designed to use an eventual consistency model and performs a high number of read and write operations. There is a growing number of people who are reporting that the website is slow and after checking, the root cause is due to the slow read processing in your database tier. Which of the following options would solve this issue? (Choose 2)

a. Add an RDS MySQL Read Replica in each Availability Zone. b. Deploy an Amazon ElastiCache Cluster with nodes running in each Availability Zone.

You are working for a health startup as a Solutions Architect. Due to budget constraints, you decided to host a MySQL database in an EBS-backed EC2 instance instead of using RDS to lower the cost. An IT contractor was hired to develop the UI, which was written in ReactJS and then hosted in S3 static web hosting. After some tests, the DevOps team noticed that the write throughput of the database is quite slow and should be increased. Which combination of the following options provides the MOST effective solution in this scenario? (Choose 2)

a. Add more EBS Volumes and set up a RAID 0 configuration. b. Upgrade the EC2 Instance to a larger instance type. (Needs Review)

You are working as a Solutions Architect for a computer hardware manufacturer which has a supply chain application written in NodeJS. The application is deployed on a Reserved EC2 instance which has been allocated an IAM Role that provides access to data files stored in an S3 bucket. In this architecture, which of the following IAM policies control access to your data files in S3? (Choose 2)

a. An IAM access policy that allows the EC2 role to access S3 objects. b. An IAM trust policy that allows the EC2 instance to assume an EC2 instance role. (Needs Review)

You formed a new startup company where you develop health-related mobile apps on both iOS and Android devices. Your co-founder developed a sleep tracking app which collects the user's biometric data then stores it in a DynamoDB table, which is configured with on-demand provisioned throughput capacity. Every morning at 9, a scheduled task scans the DynamoDB table to extract and aggregate last night's data for each user and stores the results in an S3 bucket. When the new data is available, the users are then notified via Amazon SNS mobile push notifications. Due to budget constraints, you want to optimize the current architecture of the backend system to lower costs and increase your bottom line. Which of the following can you do to further lower the cost in AWS? (Choose 2)

a. Purchase reserved capacity for the provisioned throughput of DynamoDB. b. Set up a scheduled job to drop the DynamoDB table for the previous day that contains the biometric data after it is successfully stored in the S3 bucket. Create another DynamoDB table for the day and perform the deletion and creation process every day. (Needs Review)

You are working at the IT department of a top law firm in the country. It was decided that Amazon S3 will be used for storage after an extensive Total Cost of Ownership (TCO) analysis comparing S3 versus acquiring more on-premises storage hardware. The attorneys, paralegals, clerks and other employees of the law firm will be using Amazon S3 to store their legal documents and other media files. For a better user experience, you are planning to implement a single sign-on system in which the user can just use his or her existing Active Directory login to access the S3 storage to avoid having to remember yet another password. Which of the following should you do to implement this feature and to also provide a mechanism that restricts access for each user to a designated user folder in a bucket? (Choose 2)

a. Configure an IAM Policy that restricts access only to the user-specific folders in the Amazon S3 Bucket. b. Set up a federation proxy or a custom identity provider and use AWS Security Token Service to generate temporary tokens. Use an IAM Role to enable access to AWS services.

Many software developers and DevOps engineers in your company are frustrated because they have to memorize two passwords: one for their AWS account and another for their corporate account. The solution is to implement a Single Sign-On feature where the employees only have to sign in once on your corporate Windows Active Directory and can then log into the AWS console easily. In this scenario, which of the following options will you choose to set up the single sign-on (SSO) feature? (Choose 4)

a. Configure your identity store (Windows Active Directory) to work with a SAML-based identity provider (IdP) such as Windows Active Directory Federation Services.
b. Create an IAM role that identifies your identity provider as a principal (trusted entity) for purposes of federation.
c. Use AWS Security Token Service to generate temporary tokens.
d. Set up an AWS SSO endpoint.
(Needs Review)

An automotive company that sells electric vehicles around the world will be releasing its latest car model, which has a semi-autonomous driving technology also known as an "autopilot" capability. Due to the popularity of their new product, they are planning to use CloudFront for static asset caching to speed up the delivery of the new model's images and other static content to viewers across the globe. The static content should be delivered over HTTPS using their own domain name, which provides visitors of their website the required security over an SSL connection plus lower latency and higher reliability. How can you implement this setup in AWS using CloudFront? (Choose 2)

a. Create a CloudFront distribution with a custom SSL certificate that is stored in AWS Certificate Manager (ACM). b. Create a CloudFront distribution with a custom SSL certificate that is stored in IAM. (Needs Review)

A blockchain application was deployed in AWS a year ago using OpsWorks. There have been a lot of security patches lately for the underlying Linux servers of the blockchain application, which means that the OpsWorks stack instances should be updated. In this scenario, which of the following are the best practices when updating an AWS OpsWorks stack? (Select TWO)

a. Create and start new instances to replace your current online instances, then delete the current instances. The new instances will have the latest set of security patches installed during setup. b. Run the Update Dependencies stack command for Linux-based instances.

A major telecommunications company has contracted you to design and build an online shopping portal in which they can sell their new smart home devices. It is expected that there will be a lot of buyers on the online shopping portal due to the popularity of smart home devices so the manager wants to ensure that the architecture that you will design can also mitigate distributed denial-of-service (DDoS) attacks. Which of the following options are mitigation techniques in AWS? (Choose 3)

a. Distribute static content using Amazon CloudFront. b. Configure an alert in Amazon CloudWatch to look for high NetworkIn and CPUUtilization metrics. c. Launch multiple EC2 instances with Auto Scaling across multiple Availability Zones. Place these behind an Elastic Load Balancer to distribute incoming traffic to your application across these EC2 instances.

A multinational consumer goods company is currently using a VMware vCenter Server to manage their virtual machines, multiple ESXi hosts, and all dependent components from a single centralized location. To save costs and to gain the benefits of cloud computing, the company decided to move their virtual machines to AWS. As the Solutions Architect, you are required to generate new AMIs of your virtual machines which can be launched as EC2 instances in your VPC. Which combination of steps should the Architect do to properly execute the cloud migration? (Choose 2)

a. Install the Server Migration Connector to your on-premises virtualization environment. b. Use the AWS Server Migration Service (SMS) to migrate your on-premises workloads to AWS. (Needs Review)

A multinational medical research company is migrating their on-premises online repository application to AWS. The application hosts high-resolution endoscopic, cryo-electron microscopy and other anatomical images which are scanned and uploaded by the medical team. The online repository provides various ways to view these images including the ability to zoom in and zoom out on their front-end web application written in ReactJS. The developers implemented the system by splitting each high-resolution image into small individual tiles at multiple zoom levels which are used on various viewing options such as thumbnail, full image, and pinch-to-zoom view. Each document can be zoomed to a maximum of 8000 x 6000 pixels in dimension and is split into multiple 20px by 20px image tiles. A group of On-Demand EC2 instances processes these tiles in batches, which are then stored in an S3 bucket. The front-end application fetches the tiles from the S3 bucket and displays them to viewers as they zoom in and pan around each image. 50 MB is the average size of the tiles for all zoom levels. The original high-resolution images are archived in Amazon Glacier to save costs. The medical research company expects to process and host over a million scanned documents every year. Which of the following should you implement to make the current architecture more cost-effective and scalable? (Choose 3)

a. Launch a CloudFront web distribution and use the S3 bucket which hosts the tiles as the origin. b. At the maximum zoom level, increase the width and height of the individual tiles from 20px by 20px to a much larger 40px by 40px dimension. c. Use the S3 One Zone-Infrequent Access (One Zone-IA) storage class to store the tiles for each zoom level.

A leading blockchain company is getting ready to do a major announcement of their latest product next month on their public website which is hosted in AWS. It is running on an Auto Scaling group of Spot EC2 instances deployed across multiple Availability Zones with a MySQL database instance. The website performs a high number of read operations to load the articles for their clients around the globe and a relatively low number of write operations to store the comments and inquiries of customers on their products. Before the major announcement, you did performance testing and found out that the database could not handle a surge of incoming requests. In this scenario, which of the following are the cost-effective and suitable options to solve the database performance issue? (Choose 2)

a. Launch a Read Replica in each Availability Zone. b. Use Provisioned IOPS storage to improve the read performance of your database.

Amazon Virtual Private Cloud provides features such as Security groups, Network access control lists (ACLs) and Flow logs that you can use to monitor and secure your virtual private cloud (VPC). There is a newly launched VPC in your AWS account and your technical manager instructed you to fortify the security of your cloud infrastructure, in which you have to use both Security groups and Network access control lists (ACLs). Which options are correct regarding the differences between these two security features? (Choose 3)

a. Security groups operate at the EC2 instance level while Network ACLs operate at the subnet level. b. Security groups are stateful because return traffic is automatically allowed regardless of any rules, while Network ACLs are stateless as return traffic must be explicitly allowed by rules. c. Security Groups support allow rules only while Network ACLs support both allow rules and deny rules. (Needs Review)

A technology company has a large enterprise resource planning (ERP) system with 70 TB of data hosted in their data center, which they want to migrate to AWS. The migration activities should not exceed 7 days to avoid any downtime as the on-premises data storage being used by the ERP system is almost full. Since the data to be migrated is 79 TB, the company is choosing between Snowball and Snowball Edge as their preferred service to transfer their data. The service should also provide durable local storage to ensure data durability. Which of the following are the use cases where Snowball Edge is more suitable than the standard Snowball service? (Choose 4)

a. Use with AWS Greengrass (IoT). b. Local compute instances. c. Transfer files through NFS with a GUI. d. Local compute with AWS Lambda. (Needs Review)

