
A business has an application that sells tickets online and sees spikes in demand every seven days. The application comprises a stateless presentation layer that runs on Amazon EC2, an Oracle database that stores unstructured catalog information, and a backend API layer. The front-end layer distributes the load over nine On-Demand Instances spread across three Availability Zones (AZs) through an Elastic Load Balancer. Oracle is run on a single EC2 instance. The firm has performance challenges when conducting more than two continuous campaigns. A solutions architect is responsible for developing a solution that satisfies the following requirements: ✑ Address scalability issues. ✑ Increase the level of concurrency. ✑ Eliminate licensing costs. ✑ Improve reliability. Which procedure should the solutions architect follow? A. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the Oracle database into a single Amazon RDS reserved DB instance. B. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Create two additional copies of the database instance, then distribute the databases in separate AZs. C. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables. D. Convert the On-Demand Instances into Spot Instances to reduce costs for the front end. Convert the tables in the Oracle database into Amazon DynamoDB tables.

A

A business has many teams, and each team has its own Amazon RDS database with a total capacity of 100 TB. The firm is developing a data query platform that will enable Business Intelligence Analysts to create a weekly business report. The new system must be capable of doing ad-hoc SQL queries. Which approach is the MOST cost-effective? A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the query. B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definition. Use Spark SQL to run the query. C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora PostgreSQL database. D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.

A

A business intends to restructure a monolithic application into a contemporary application architecture that will be delivered on Amazon Web Services. The CI/CD pipeline must be improved to accommodate the application's contemporary architecture, which includes the following requirements: ✑ It should allow changes to be released several times every hour. ✑ It should be able to roll back the changes as quickly as possible. Which design will satisfy these criteria? A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and their configurations. Deploy the application by replacing Amazon EC2 instances. B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To deploy, swap the staging and production environment URLs. C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code artifact from Amazon S3 and use Amazon Route 53 weighted routing to point to the new environment. D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.

A

A read-only news reporting site with a mixed web and application layer and a database tier that faces high and unexpected traffic demands must be able to adapt automatically to these changes. Which Amazon Web Services (AWS) offerings should be employed to achieve these requirements? A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an auto scaling group monitored with CloudWatch, and RDS with read replicas. B. Stateful instances for the web and application tier in an auto scaling group monitored with CloudWatch, and RDS with read replicas. C. Stateful instances for the web and application tier in an auto scaling group monitored with CloudWatch, and multi-AZ RDS. D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an auto scaling group monitored with CloudWatch, and multi-AZ RDS.

A

A retail firm processes point-of-sale data in its data center using application servers and publishes the results to an Amazon DynamoDB table. The data center is linked to the company's VPC through an AWS Direct Connect (DX) connection, and the application servers need a reliable network connection with a minimum of 2 Gbps. The organization determines that the DynamoDB table should be highly available and fault tolerant. According to corporate policy, data should be accessible in two Regions. What modifications should the business make to comply with these requirements? A. Establish a second DX connection for redundancy. Use DynamoDB global tables to replicate data to a second Region. Modify the application to fail over to the second Region. B. Use an AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Modify the application to replicate data to both Regions. C. Establish a second DX connection for redundancy. Create an identical DynamoDB table in a second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region. D. Use AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Enable DynamoDB streams to capture changes to the table. Use AWS Lambda to replicate changes to the second Region.

A
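DynamoDB global tables (option A) turn existing Regional tables into one replicated table. A minimal sketch of the request parameters, using placeholder table and Region names that are not taken from the question:

```python
# Sketch: parameters for DynamoDB's CreateGlobalTable call. The table
# must already exist (with streams enabled) in every listed Region.
# Table name, Region names, and the helper itself are illustrative.

def global_table_params(table_name, regions):
    """Build the parameter dict for a global table spanning `regions`."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }

params = global_table_params("pos-results", ["eu-west-1", "eu-central-1"])
# With boto3 this dict would be passed as:
#   boto3.client("dynamodb").create_global_table(**params)
```

Once created, writes in either Region are replicated to the other, which is what makes the application-level failover in option A possible.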

Amazon S3 is used by a business to host a web application. The organization currently employs a continuous integration tool running on an Amazon EC2 instance to build and deploy the application through an S3 bucket. A Solutions Architect is responsible for enhancing the platform's security in accordance with the following requirements: ✑ The build process should be executed in a distinct account from the web application's hosting account. ✑ The build process should have only the bare minimum access to the account under which it runs. ✑ Long-lived credentials should be avoided at all costs. To begin, the Development team created two AWS accounts: one to host the web application and another for the build process. Which solution should the Solutions Architect implement in order to satisfy the security requirements? A. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the continuous integration process. Create an IAM policy to allow s3:PutObject calls on the S3 bucket in the web account. In the web account, create an S3 bucket policy attached to the S3 bucket that allows the build account to use s3:PutObject calls. B. In the build account, create a new IAM role, which can be assumed by Amazon EC2 only. Attach the role to the EC2 instance running the continuous integration process. Create an IAM policy to allow s3:PutObject calls on the S3 bucket in the web account. In the web account, create an S3 bucket policy attached to the S3 bucket that allows the newly created IAM role to use s3:PutObject calls. C. In the build account, create a new IAM user. Store the access key and secret access key in AWS Secrets Manager. Modify the continuous integration process to perform a lookup of the IAM user credentials from Secrets Manager. Create an IAM policy to allow s3:PutObject calls on the S3 bucket in the web account, and attach it to the user. In the web account, create an S3 bucket policy attached to the S3 bucket that allows the newly created IAM user to use s3:PutObject calls. D. In the build account, modify the continuous integration process to perform a lookup of the IAM user credentials from AWS Secrets Manager. In the web account, create a new IAM user. Store the access key and secret access key in Secrets Manager. Attach the PowerUserAccess IAM policy to the IAM user.

A
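The bucket policy half of the role-based options can be sketched as a plain JSON document that grants s3:PutObject to a single principal; the account ID, role name, and bucket name below are placeholders:

```python
import json

# Sketch of the web account's S3 bucket policy: it grants s3:PutObject
# only to the build account's IAM role (all identifiers are placeholders).

def build_bucket_policy(bucket, role_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowBuildRolePut",
            "Effect": "Allow",
            "Principal": {"AWS": role_arn},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }

policy = build_bucket_policy(
    "webapp-deploy-bucket",
    "arn:aws:iam::111122223333:role/ci-build-role",
)
print(json.dumps(policy, indent=2))
```

Scoping the principal to the role (rather than the whole build account) keeps the grant at the minimum required, and the role's temporary instance-profile credentials avoid any long-lived keys.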

An architect has deployed an operational workload on Amazon EC2 instances in an Auto Scaling group. The VPC design spans two Availability Zones (AZ), each with a dedicated subnet for the Auto Scaling group. The VPC is physically attached to an on-premises environment and cannot be disconnected. The Auto Scaling group may have a maximum of 20 instances in service. The IPv4 addressing scheme for the VPC is as follows: VPC CIDR: 10.0.0.0/23 AZ1 subnet CIDR: 10.0.0.0/24 AZ2 subnet CIDR: 10.0.1.0/24 Since deployment, the Region has gained access to a third AZ. The solutions architect wants to implement the new AZ without expanding IPv4 address space or causing a service outage. Which solution will satisfy these criteria? A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets. B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets. C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC. D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again.
When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.

A
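The subnet arithmetic behind the re-subnetting options can be checked with Python's ipaddress module, using the CIDR blocks from the question (variable names are mine):

```python
import ipaddress

# CIDRs from the question
vpc = ipaddress.ip_network("10.0.0.0/23")
az1 = ipaddress.ip_network("10.0.0.0/24")
az2 = ipaddress.ip_network("10.0.1.0/24")

# Each existing /24 splits cleanly into two /25s.
az1_halves = list(az1.subnets(new_prefix=25))
az2_halves = list(az2.subnets(new_prefix=25))

new_az1, new_az2 = az1_halves   # 10.0.0.0/25 and 10.0.0.128/25
new_az3 = az2_halves[0]         # 10.0.1.0/25

# All three subnets fit inside the original VPC CIDR -- no new space needed.
for subnet in (new_az1, new_az2, new_az3):
    assert subnet.subnet_of(vpc)

# A /25 has 128 addresses; AWS reserves 5 per subnet, leaving 123 usable,
# ample for an Auto Scaling group capped at 20 instances.
print(new_az1.num_addresses - 5)  # 123
```

This confirms the core claim of the re-subnetting approach: three AZs can be served from the existing /23 by halving the two /24s, with plenty of headroom per subnet.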

The application of a business is growing in popularity and experiencing increased latency as a result of high-volume reads on the database server. The following properties apply to the service: ✑ A highly available REST API hosted in one region using Application Load Balancer (ALB) with auto scaling. ✑ A MySQL database hosted on an Amazon EC2 instance in a single Availability Zone. The organization wishes to minimize latency, improve in-region database read performance, and have multi-region disaster recovery capabilities capable of automatically performing a live recovery without data or performance loss (HA/DR). Which deployment technique will satisfy these criteria? A. Use AWS CloudFormation StackSets to deploy the API layer in two regions. Migrate the database to an Amazon Aurora with MySQL database cluster with multiple read replicas in one region and a read replica in a different region than the source database cluster. Use Amazon Route 53 health checks to trigger a DNS failover to the standby region if the health checks to the primary load balancer fail. In the event of Route 53 failover, promote the cross-region database replica to be the master and build out new read replicas in the standby region. B. Use Amazon ElastiCache for Redis Multi-AZ with an automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. In the event of failure, use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database. C. Use AWS CloudFormation StackSets to deploy the API layer in two regions. Add the database to an Auto Scaling group. Add a read replica to the database in the second region.
Use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Promote the cross-region database replica to be the master and build out new read replicas in the standby region. D. Use Amazon ElastiCache for Redis Multi-AZ with an automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. Use Amazon Route 53 health checks on the ALB to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.

A

The processing team of a business has an AWS account with a production application. The application is hosted on Amazon EC2 instances that are routed via a Network Load Balancer (NLB). Private subnets inside a VPC in the eu-west-1 Region are used to host the EC2 instances. A CIDR block of 10.0.0.0/16 was allocated to the VPC. Recently, the billing team built a new AWS account and deployed an application on EC2 instances housed on private subnets under a VPC in the eu-central-1 Region. The new virtual private cloud is allocated the CIDR block 10.0.0.0/16. Secure communication between the processing application and the billing application over a proprietary TCP port is required. What should a solutions architect do to ensure that this need is met with the MINIMUM amount of operational work possible? A. In the billing team's account, create a new VPC and subnets in eu-central-1 that use the CIDR block of 192.168.0.0/16. Redeploy the application to the new subnets. Configure a VPC peering connection between the two VPCs. B. In the processing team's account, add an additional CIDR block of 192.168.0.0/16 to the VPC in eu-west-1. Restart each of the EC2 instances so that they obtain a new IP address. Configure an inter-Region VPC peering connection between the two VPCs. C. In the billing team's account, create a new VPC and subnets in eu-west-1 that use the CIDR block of 192.168.0.0/16. Create a VPC endpoint service (AWS PrivateLink) in the processing team's account and an interface VPC endpoint in the new VPC. Configure an inter-Region VPC peering connection in the billing team's account between the two VPCs. D. In each account, create a new VPC with the CIDR blocks of 192.168.0.0/16 and 172.16.0.0/16. Create inter-Region VPC peering connections between the billing team's VPCs and the processing team's VPCs. Create gateway VPC endpoints to allow traffic to route between the VPCs.

A

You've been hired as a solutions architect to help a business client migrate their e-commerce platform to Amazon Virtual Private Cloud (VPC). The previous architect had already implemented a three-tier virtual private cloud. The following is the configuration: VPC: vpc-2f8bc447 IGW: igw-2d8bc445 NACL: ad-208bc448 Subnets and Route Tables: Web servers: subnet-258bc44d Application servers: subnet-248bc44c Database servers: subnet-9189c6f9 Route Tables: rtb-218bc449 rtb-238bc44b Associations: subnet-258bc44d : rtb-218bc449 subnet-248bc44c : rtb-238bc44b subnet-9189c6f9 : rtb-238bc44b You are now prepared to begin provisioning EC2 instances inside the VPC. Web servers must have direct internet connectivity; application and database servers cannot. Which of the following configurations enables remote administration of your application and database servers, as well as the ability for these servers to download updates from the Internet? A. Create a bastion and NAT instance in subnet-258bc44d, and add a route from rtb-238bc44b to the NAT instance. B. Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c. C. Create a bastion and NAT instance in subnet-248bc44c, and add a route from rtb-238bc44b to subnet-258bc44d. D. Create a bastion and NAT instance in subnet-258bc44d, add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.

A

Your firm is headquartered in Tokyo and has branch offices located across the globe. It uses logistics software that is deployed multi-regionally on AWS in Japan, Europe, and the United States. The logistics software is built on a three-tier design and presently stores data in MySQL 5.6. Each region has its own database. In the headquarters region, you run an hourly batch process that reads data from all regions and generates cross-regional reports that are sent to all offices. This batch process must be finished as rapidly as possible to optimize logistics. How do you structure the database architecture so that it satisfies the requirements? A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process

A

Your organization currently operates a two-tier web application from an on-premises data center. You've had many infrastructure failures during the last two months, resulting in substantial financial losses. Your CIO is adamant about moving the application to AWS. While he works to get buy-in from other corporate leaders, he wants you to prepare a disaster recovery plan to aid in short-term business continuity. He targets a Recovery Time Objective (RTO) of four hours and a Recovery Point Objective (RPO) of one hour or less. Additionally, he requests that you implement the solution within two weeks. Your database is 200GB in size, and your Internet connection is 20Mbps. How would you do this while keeping costs low? A. Create an EBS backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, AutoScaling, and ELB resources to support deploying the application across Multiple-Availability-Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. B. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. C. Create an EBS backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload. D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.

A

ABC developed a multi-tenant Learning Management System (LMS). The application is hosted for five distinct tenants (customers) in VPCs associated with each tenant's AWS account. ABC wants to establish a centralized server that can communicate with the LMS of each tenant and perform necessary upgrades. Additionally, ABC wants to guarantee that one tenant VPC is unable to communicate with another tenant VPC for security reasons. How should ABC set up this scenario? A. ABC has to set up one centralized VPC which will peer in to all the other VPCs of the tenants. B. ABC should set up VPC peering with all the VPCs peering each other but block the IPs from CIDR of the tenant VPCs to deny them. C. ABC should set up all the VPCs with the same CIDR but have a centralized VPC. This way only the centralized VPC can talk to the other VPCs using VPC peering. D. ABC should set up all the VPCs meshed together with VPC peering for all VPCs.

A

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. A VPC peering connection allows the user to route traffic between the peer VPCs using private IP addresses as if they were part of the same network. This is helpful when one VPC from the same or a different AWS account wants to connect with resources of the other VPC. The organization wants one VPC to be able to connect with all the other VPCs while the other VPCs cannot connect among each other. This can be achieved by configuring VPC peering where one VPC is peered with all the other VPCs, but the other VPCs are not peered to each other. The VPCs can be in the same or separate AWS accounts and should not have overlapping CIDR blocks. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-full-access.html#many-vpcs-full-acces
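Because VPC peering is non-transitive, the hub-and-spoke layout described above needs only one peering connection per tenant, and the tenants stay isolated from each other. A small illustrative count (helper names are mine, not AWS's):

```python
def hub_spoke_connections(tenants):
    """One peering connection from the central VPC to each tenant VPC.
    Peering is non-transitive, so tenants cannot reach each other."""
    return tenants

def full_mesh_connections(vpcs):
    """Every VPC peered with every other one (what option D describes):
    n*(n-1)/2 connections, and every tenant can reach every tenant."""
    return vpcs * (vpcs - 1) // 2

# Five tenants plus one central VPC:
print(hub_spoke_connections(5))   # 5 connections, tenants isolated
print(full_mesh_connections(6))   # 15 connections, tenants all reachable
```

The hub-and-spoke count grows linearly with tenants, while a full mesh grows quadratically and defeats the isolation requirement.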

A user intends to utilize EBS to meet his database requirements. The user already has an Amazon Elastic Compute Cloud (EC2) instance operating in the VPC private subnet. How can a user connect an EBS volume to an already-running instance? A. The user can create EBS in the same zone as the subnet of instance and attach that EBS to instance. B. It is not possible to attach an EBS to an instance running in VPC until the instance is stopped. C. The user can specify the same subnet while creating EBS and then attach it to a running instance. D. The user must create EBS within the same VPC and then attach it to a running instance.

A

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. The user can create subnets as per the requirement within a VPC. The VPC is always specific to a region. The user can create a VPC which can span multiple Availability Zones by adding one or more subnets in each Availability Zone. A launched instance is always in the same Availability Zone as its subnet. When creating an EBS volume, the user cannot specify a subnet or VPC; however, the user must create the volume in the same zone as the instance so that the EBS volume can be attached to the running instance. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html#VPCSubnet
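As a rough sketch of the rule the explanation describes, the volume-creation parameters can be derived from the instance's own placement. The instance record below is a stand-in for what boto3's `describe_instances` would return, and all IDs, sizes, and the helper name are placeholders:

```python
# An EBS volume must be created in the same Availability Zone as the
# instance it will attach to; subnet and VPC are never specified for it.

def volume_params_for(instance, size_gib=100):
    """Build create-volume parameters pinned to the instance's AZ."""
    az = instance["Placement"]["AvailabilityZone"]
    return {"AvailabilityZone": az, "Size": size_gib, "VolumeType": "gp3"}

instance = {"InstanceId": "i-0abc12345",  # placeholder instance record
            "Placement": {"AvailabilityZone": "us-east-1a"}}

params = volume_params_for(instance)
# With boto3:
#   vol = boto3.client("ec2").create_volume(**params)
#   then attach_volume(VolumeId=..., InstanceId=..., Device="/dev/sdf")
```

The key point is the single constraint: the volume's AvailabilityZone must equal the instance's, regardless of subnet or VPC.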

AWS CloudFormation ______ are template-specific actions that you use to set values to attributes that are not accessible until runtime. A. intrinsic functions B. properties declarations C. output functions D. conditions declarations

A

AWS CloudFormation intrinsic functions are special actions you use in your template to assign values to properties not available until runtime. Each function is declared with a name enclosed in quotation marks (""), a single colon, and its parameters. Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-fuctions-structure.html
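For illustration, here is a minimal template fragment, expressed as a Python dict rather than JSON/YAML, using two common intrinsic functions; the resource name and AMI ID are placeholders:

```python
import json

# Ref and Fn::GetAtt both resolve values that only exist at runtime:
# Ref yields the created instance's ID, Fn::GetAtt reads a runtime
# attribute (here the public IP). "WebServer" and the AMI ID are
# illustrative, not from any real stack.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-12345678"},
        },
    },
    "Outputs": {
        "InstanceId": {"Value": {"Ref": "WebServer"}},
        "PublicIp": {"Value": {"Fn::GetAtt": ["WebServer", "PublicIp"]}},
    },
}
print(json.dumps(template, indent=2))
```

Neither output value can be known when the template is written, which is exactly why intrinsic functions exist.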

Which of the following is NOT a benefit of using Amazon Web Services Direct Connect? A. AWS Direct Connect provides users access to public and private resources by using two different connections while maintaining network separation between the public and private environments. B. AWS Direct Connect provides a more consistent network experience than Internet-based connections. C. AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. D. AWS Direct Connect reduces your network costs.

A

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. By using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Reference: http://aws.amazon.com/directconnect/#details

What sorts of identities are supported by Amazon Cognito identity pools? A. They support both authenticated and unauthenticated identities. B. They support only unauthenticated identities. C. They support neither authenticated nor unauthenticated identities. D. They support only authenticated identities.

A

Amazon Cognito identity pools support both authenticated and unauthenticated identities. Authenticated identities belong to users who are authenticated by a public login provider or your own backend authentication process. Unauthenticated identities typically belong to guest users. Reference: http://docs.aws.amazon.com/cognito/devguide/identity/identity-pools/
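The two identity types map directly onto an identity pool's configuration: a single flag admits guests, and the configured login providers supply authenticated users. A hedged sketch of the parameters for Cognito's create-identity-pool call (pool name and provider app ID are placeholders):

```python
# Sketch of identity pool parameters. AllowUnauthenticatedIdentities
# controls guest (unauthenticated) access; SupportedLoginProviders lists
# the public providers for authenticated identities. All values below
# are illustrative.
pool_params = {
    "IdentityPoolName": "lms_app_pool",
    "AllowUnauthenticatedIdentities": True,
    "SupportedLoginProviders": {
        "graph.facebook.com": "1234567890",
    },
}
# With boto3:
#   boto3.client("cognito-identity").create_identity_pool(**pool_params)
```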

Determine the right expiry date for the "Letter of Authorization and Connecting Facility Assignment (LOA-CFA)," which enables you to complete the Cross Connect stage of AWS Direct Connect configuration. A. If the cross connect is not completed within 90 days, the authority granted by the LOA-CFA expires. B. If the virtual interface is not created within 72 days, the LOA-CFA becomes outdated. C. If the cross connect is not completed within a user-defined time, the authority granted by the LOA- CFA expires. D. If the cross connect is not completed within the specified duration from the appropriate provider, the LOA-CFA expires.

A

An AWS Direct Connect location provides access to AWS in the region it is associated with. You can establish connections with AWS Direct Connect locations in multiple regions, but a connection in one region does not provide connectivity to other regions. Note: If the cross connect is not completed within 90 days, the authority granted by the LOA-CFA expires. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html

You wish to use DNS names to mount an Amazon EFS file system on an Amazon EC2 instance. Which of the following generic DNS names for a mount target must you use when mounting the file system? A. availability-zone.file-system-id.efs.aws-region.amazonaws.com B. efs-system-id.availability-zone.file-aws-region.amazonaws.com C. $file-system-id.$availability-zone.$efs.aws-region.$amazonaws.com D. #aws-region.#availability-zone.#file-system-id.#efs.#amazonaws.com

A

An Amazon EFS file system can be mounted on an Amazon EC2 instance using DNS names. This can be done with either a DNS name for the file system or a DNS name for the mount target. To construct the mount target's DNS name, use the following generic form: availability-zone.file-system-id.efs.aws-region.amazonaws.com Reference: http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html#mounting-fs-install-nfsclient
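The generic form from answer A can be assembled as a simple string template; the file system ID, AZ, and Region below are placeholders:

```python
def mount_target_dns(availability_zone, file_system_id, region):
    """Generic form: availability-zone.file-system-id.efs.aws-region.amazonaws.com"""
    return f"{availability_zone}.{file_system_id}.efs.{region}.amazonaws.com"

dns = mount_target_dns("us-east-1a", "fs-0123456789abcdef0", "us-east-1")
print(dns)  # us-east-1a.fs-0123456789abcdef0.efs.us-east-1.amazonaws.com

# Typical mount command built from it (not run here):
#   sudo mount -t nfs4 <dns>:/ /mnt/efs
```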

If you have a running instance that is utilizing an Amazon EBS boot partition, you may use the _______ API to free up compute resources while maintaining the boot partition's data. A. Stop Instances B. Terminate Instances C. AMI Instance D. Ping Instance

A

If you have a running instance using an Amazon EBS boot partition, you can also call the Stop Instances API to release the compute resources but preserve the data on the boot partition. Reference: https://aws.amazon.com/ec2/faqs/#How_quickly_will_systems_be_running

What is the maximum number of Cache Nodes that you may operate in Amazon ElastiCache by default? A. 20 B. 50 C. 100 D. 200

A

By default, you can run a maximum of 20 Cache Nodes in Amazon ElastiCache. This is a soft limit that can be raised by submitting a limit increase request.

With respect to Amazon SNS, you may send notification messages to mobile devices through any of the available push notification providers EXCEPT: A. Microsoft Windows Mobile Messaging (MWMM) B. Google Cloud Messaging for Android (GCM) C. Amazon Device Messaging (ADM) D. Apple Push Notification Service (APNS)

A

In Amazon SNS, you have the ability to send notification messages directly to apps on mobile devices. Notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts. Microsoft Windows Mobile Messaging (MWMM) doesn't exist and is not supported by Amazon SNS. Reference: http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

To guarantee that a table write succeeds, the provisioned throughput settings for the table and global secondary indexes in DynamoDB must have __________; otherwise, the table write will be throttled. A. enough write capacity to accommodate the write B. no additional write cost for the index C. 100 bytes of overhead per index item D. the size less than or equal to 1 KB

A

In order for a table write to succeed in DynamoDB, the provisioned throughput settings for the table and global secondary indexes must have enough write capacity to accommodate the write; otherwise, the write will be throttled. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
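The write-capacity rule can be made concrete: a standard write consumes one write capacity unit (WCU) per 1 KB of item size, rounded up, and a write that updates a global secondary index also consumes capacity from the index's own provisioned throughput. A small sketch of the table-side arithmetic (the helper name is mine):

```python
import math

def wcu_for_item(item_size_bytes):
    """Standard writes consume one WCU per 1 KB of item size, rounded up."""
    return max(1, math.ceil(item_size_bytes / 1024))

print(wcu_for_item(500))    # 1 WCU
print(wcu_for_item(1024))   # 1 WCU
print(wcu_for_item(2500))   # 3 WCU
```

Both the table and every affected GSI must have at least this much write capacity headroom, or the write is throttled.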

You are developing a solution to prevent data leakage in your VPC environment. You want your VPC instances to be able to connect to online software repositories and distributions for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You wish to explicitly prevent any other outbound connections from your VPC instances to external hosts. Which of the following are you considering? A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes. B. Implement security groups and configure outbound rules to only permit traffic to software depots. C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only. D. Implement network access control lists to all specific destinations, with an implicit deny-all rule.

A

Organizations usually implement proxy solutions to provide URL and web content filtering, IDS/IPS, data loss prevention, monitoring, and advanced threat protection. Reference: https://d0.awsstatic.com/aws-answers/Controlling_VPC_Egress_Traffic.pdf

AWS Organizations is used by a business to manage one parent account and nine member accounts. The number of member accounts is likely to expand in lockstep with the growth of the firm. For compliance considerations, a security engineer has requested consolidation of AWS CloudTrail logs into the parent account. Existing logs in Amazon S3 buckets inside each member account should not be lost. All subsequent member accounts should adhere to the logging approach. What operationally efficient solution satisfies these criteria? A. Create an AWS Lambda function in each member account with a cross-account role. Trigger the Lambda functions when new CloudTrail logs are created and copy the CloudTrail logs to a centralized S3 bucket. Set up an Amazon CloudWatch alarm to alert if CloudTrail is not configured properly. B. Configure CloudTrail in each member account to deliver log events to a central S3 bucket. Ensure the central S3 bucket policy allows PutObject access from the member accounts. Migrate existing logs to the central S3 bucket. Set up an Amazon CloudWatch alarm to alert if CloudTrail is not configured properly. C. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Migrate the existing CloudTrail logs from each member account to the central S3 bucket. Delete the existing CloudTrail and logs in the member accounts. D. Configure an organization-level CloudTrail in the parent account to deliver log events to a central S3 bucket. Configure CloudTrail in each member account to deliver log events to the central S3 bucket.

A Reference: https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

You are in charge of a legacy web application whose server environment is nearing the end of its useful life. You want to transfer this application to AWS as soon as feasible, since the application's present environment has the following limitations: ✑ The VM's single 10GB VMDK is almost full; ✑ The virtual network interface still uses the 10Mbps driver, which leaves your 100Mbps WAN connection completely underutilized; ✑ It is currently running on a highly customized Windows VM within a VMware environment; ✑ You do not have the installation media. This is a mission-critical application with an 8-hour RTO (Recovery Time Objective) and a 1-hour RPO (Recovery Point Objective). How might you transfer this application to AWS in the most efficient manner while still adhering to your business continuity requirements? A. Use the EC2 VM Import Connector for vCenter to import the VM into EC2. B. Use Import/Export to import the VM as an EBS snapshot and attach to EC2. C. Use S3 to create a backup of the VM and restore the data into EC2. D. Use the ec2-bundle-instance API to import an image of the VM into EC2.

A Reference: https://aws.amazon.com/developertools/2759763385083070

A business has deployed an application to a variety of AWS environments, including production and testing. The organization maintains distinct accounts for production and testing, and users may establish extra application users as required for team members or services. The Security team has requested increased isolation between production and testing environments, centralized control over security credentials, and improved management of permissions across environments from the Operations team. Which of the following methods would fulfill this objective the MOST SECURELY? A. Create a new AWS account to hold user and service accounts, such as an identity account. Create users and groups in the identity account. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles. B. Modify permissions in the production and testing accounts to limit creating new IAM users to members of the Operations team. Set a strong IAM password policy on each account. Create new IAM users and groups in each account to limit developer access to just the services required to complete their job function. C. Create a script that runs on each account that checks user accounts for adherence to a security policy. Disable any user or service accounts that do not comply. D. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-account access from the production account to the testing account.

A Reference: https://aws.amazon.com/ru/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

A business has an internal AWS Elastic Beanstalk worker environment contained inside a VPC that requires access to an external payment gateway API accessible through an HTTPS endpoint on the public internet. Due to security restrictions, the application team at the payment gateway may allow access to just one public IP address. Which architecture will enable the company's application to be accessed through an Elastic Beanstalk environment without requiring various modifications on the company's end? A. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the payment gateway application side. B. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the payment gateway application side. C. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a private subnet. Set an HTTPS_PROXY application parameter to send outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the payment gateway application side. D. Configure the Elastic Beanstalk application to place Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the payment gateway application side.

A Reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html

A Solutions Architect is creating a network solution for a corporation whose applications are hosted in a Northern Virginia data center. The company's data center applications require predictable network performance to applications running in a virtual private cloud (VPC) in us-east-1 and a secondary VPC in us-west-2 within the same account. The company's data center is colocated in an AWS Direct Connect facility in us-east-1. The organization has already placed an order for an AWS Direct Connect connection and created a cross-connect. Which option satisfies the criteria at the LOWEST cost? A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway. B. Create private VIFs on the Direct Connect connection for each of the company's VPCs in the us-east-1 and us-west-2 regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs. C. Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 regions, which are attached to the company's VPCs in those regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company's data center router. D. Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company's data center. Establish private VIFs on the Direct Connect connections for each of the company's VPCs in the respective regions. Configure the company's data center router to connect directly with the VPCs in those regions via the private VIFs.

A Reference:https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/

Which of the following statements is true about temporary security credentials in IAM? A. Once you issue temporary security credentials, they cannot be revoked. B. None of these are correct. C. Once you issue temporary security credentials, they can be revoked only when the virtual MFA device is used. D. Once you issue temporary security credentials, they can be revoked.

A Temporary credentials in IAM are valid throughout their defined duration of time and hence can't be revoked. However, because permissions are evaluated each time an AWS request is made using the credentials, you can achieve the effect of revoking the credentials by changing the permissions for the credentials even after they have been issued. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_disable-perms.html
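The "revoke by changing permissions" approach described above can be sketched as an inline deny policy using the `aws:TokenIssueTime` condition key, which AWS documents for invalidating sessions issued before a cutoff time. The cutoff timestamp below is a hypothetical value:

```python
import json

# Deny everything for sessions whose temporary credentials were issued
# before the cutoff time. Attaching this inline policy to the role
# effectively revokes all outstanding sessions without waiting for the
# credentials to expire on their own.
revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "DateLessThan": {"aws:TokenIssueTime": "2024-01-15T00:00:00Z"}
            },
        }
    ],
}

print(json.dumps(revoke_policy, indent=2))
```

Because permissions are evaluated on every request, sessions issued before the timestamp are denied immediately; new sessions issued after it are unaffected.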

A firm is using Elastic Beanstalk to create a highly scalable application. Elastic Load Balancing (ELB) and a Virtual Private Cloud (VPC) with public and private subnets are being used. They must meet the following criteria: - All the EC2 instances should have a private IP - All the EC2 instances should receive data via the ELB. Which of these will be unnecessary in this configuration? A. Launch the EC2 instances with only the public subnet. B. Create routing rules which will route all inbound traffic from ELB to the EC2 instances. C. Configure ELB and NAT as a part of the public subnet only. D. Create routing rules which will route all outbound traffic from the EC2 instances through NAT.

A The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. If the organization wants the Amazon EC2 instances to have a private IP address, it should create a public and a private subnet for the VPC in each Availability Zone (this is an AWS Elastic Beanstalk requirement). The organization should add its public resources, such as the ELB and NAT, to the public subnet, and AWS Elastic Beanstalk will assign them unique Elastic IP addresses (a static, public IP address). The organization should launch Amazon EC2 instances in a private subnet so that AWS Elastic Beanstalk assigns them non-routable private IP addresses. Now the organization should configure route tables with the following rules: ✑ route all inbound traffic from ELB to EC2 instances ✑ route all outbound traffic from EC2 instances through NAT Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo-vpc.html
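The routing rules above can be sketched as plain data (the CIDR block and gateway IDs below are hypothetical):

```python
# Public subnet: hosts the ELB and the NAT; default route goes to the
# internet gateway so these resources are publicly reachable.
public_route_table = {
    "10.0.0.0/16": "local",        # intra-VPC traffic
    "0.0.0.0/0": "igw-0abc1234",   # internet gateway
}

# Private subnet: EC2 instances get non-routable private IPs. Inbound
# traffic from the ELB stays on the local route; all outbound traffic
# (e.g. software updates) leaves through the NAT in the public subnet.
private_route_table = {
    "10.0.0.0/16": "local",
    "0.0.0.0/0": "nat-0def5678",   # NAT device/gateway
}

print(private_route_table["0.0.0.0/0"])
```

Note that option A (instances in only the public subnet) contradicts this layout, which is why it is the unnecessary step.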

Which of the following statements about the DynamoDB Console is NOT true? A. It allows you to add local secondary indexes to existing tables. B. It allows you to query a table. C. It allows you to set up alarms to monitor your table's capacity usage. D. It allows you to view items stored in a table, and add, update, and delete items.

A The DynamoDB Console lets you do the following: Create, update, and delete tables. The throughput calculator provides you with estimates of how many capacity units you will need to request based on the usage information you provide. View items stored in a table; add, update, and delete items. Query a table. Set up alarms to monitor your table's capacity usage. View your table's top monitoring metrics on real-time graphs from CloudWatch. View alarms configured for each table and create custom alarms.

The Principal element of an IAM policy denotes the particular entity to whom authorization should be granted or denied, while the _______ denotes everyone except the specified entity. A. NotPrincipal B. Vendor C. Principal D. Action

A The element NotPrincipal that is included within your IAM policy statements allows you to specify an exception to a list of principals to whom the access to a specific resource is either allowed or denied. Use the NotPrincipal element to specify an exception to a list of principals. For example, you can deny access to all principals except the one named in the NotPrincipal element. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
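A minimal sketch of the "deny everyone except" pattern described above, built as a policy document (the account ID, role name, and bucket name are hypothetical):

```python
import json

# Deny all S3 actions on the bucket to every principal EXCEPT the named
# role. Requests made by TrustedRole fall outside the Deny statement.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "arn:aws:iam::111122223333:role/TrustedRole"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```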

What is the default maximum number of virtual private clouds (VPCs) per region? A. 5 B. 10 C. 100 D. 15

A The maximum number of VPCs allowed per region is 5. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html

When utilizing string conditions inside IAM, it is possible to utilize condensed versions of the available comparators rather than the more verbose ones. streqi is the abbreviation for the _______ string condition. A. StringEqualsIgnoreCase B. StringNotEqualsIgnoreCase C. StringLikeStringEquals D. StringNotEquals

A When using string conditions within IAM, short versions of the available comparators can be used instead of the more verbose versions. For instance, streqi is the short version of StringEqualsIgnoreCase that checks for the exact match between two strings ignoring their case. Reference: http://awsdocs.s3.amazonaws.com/SNS/20100331/sns-gsg-2010-03-31.pdf
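The short-to-verbose correspondence can be written out as a simple lookup table (the four string comparators below are the ones documented in the SQS/SNS access policy language):

```python
# Condensed comparator names and their verbose equivalents.
SHORT_COMPARATORS = {
    "streq": "StringEquals",
    "strneq": "StringNotEquals",
    "streqi": "StringEqualsIgnoreCase",       # case-insensitive match
    "strneqi": "StringNotEqualsIgnoreCase",   # case-insensitive mismatch
}

print(SHORT_COMPARATORS["streqi"])  # → StringEqualsIgnoreCase
```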

You are creating a URL whitelisting system for a business that wants to limit outbound HTTPS connections from its EC2-hosted apps to particular sites. You set up a single EC2 instance running proxy software to accept traffic from all subnets and EC2 instances inside the VPC. You configure the proxy to forward traffic only to the domains specified in its whitelist setting. You have a nightly ten-minute maintenance window during which all instances download new software upgrades. Each update is around 200MB in size, and there are 500 instances in the VPC that fetch updates on a regular basis. After a few days, you discover that certain instances are unable to download some, but not all, of their scheduled updates during the maintenance window. The download URLs for these updates appear correctly in the proxy's whitelist setup, and they can be accessed manually on the instances through a web browser. What may be going on? (Select two.) A. You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all instances to download their updates in time. B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet and its network throughput is being throttled by a NAT running on an undersized EC2 instance. C. The route table for the subnets containing the affected EC2 instances is not configured to direct network traffic for the software update locations to the proxy. D. You have not allocated enough storage to the EC2 instance running the proxy so the network buffer is filling up, causing some requests to fail. E. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).

AB
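The bottleneck in options A and B is aggregate throughput, which a quick back-of-the-envelope check makes concrete (assuming all 500 instances fetch one 200 MB update inside the ten-minute window):

```python
instances = 500
update_mb = 200
window_seconds = 10 * 60  # ten-minute maintenance window

total_mb = instances * update_mb               # 100,000 MB ≈ 100 GB total
required_mb_per_s = total_mb / window_seconds  # ~167 MB/s sustained
required_gbps = required_mb_per_s * 8 / 1000   # ~1.33 Gbps

print(f"{required_gbps:.2f} Gbps sustained through the proxy")
```

Roughly 1.3 Gbps must flow through a single proxy instance (and through any NAT in front of it), which easily exceeds the network performance of smaller EC2 instance types; that is why an undersized proxy or NAT causes some, but not all, downloads to fail.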

A large firm is developing a platform for infrastructure services for its consumers. The following conditions have been established by the company: ✑ Provide least privilege access to users when launching AWS infrastructure so users cannot provision unapproved services. ✑ Use a central account to manage the creation of infrastructure services. ✑ Distribute infrastructure services across many AWS Organizations accounts. ✑ Allow for the enforcement of tags on any infrastructure that is initiated by users. Which combination of AWS services-based activities will satisfy these requirements? (Select three.) A. Develop infrastructure services using AWS Cloud Formation templates. Add the templates to a central Amazon S3 bucket and add the-IAM roles or users that require access to the S3 bucket policy. B. Develop infrastructure services using AWS Cloud Formation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company. C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3. D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOption assign users access and apply launch constraints. E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios. F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.

ABE

A business formerly relied on a third-party supplier for its content delivery network but just switched to Amazon CloudFront. The development team is committed to providing the best possible performance for the worldwide user base. The organization makes use of a content management system (CMS) to handle both static and dynamic content. The CMS sits behind an Application Load Balancer (ALB), which is configured as the distribution's default origin. Static assets are served from an Amazon S3 bucket. Although the Origin Access Identity (OAI) was successfully created and the S3 bucket policy was adjusted to permit the GetObject operation from the OAI, static assets are generating a 404 error. Which measures should the solutions architect take in combination to correct the error? (Select two.) A. Add another origin to the CloudFront distribution for the static assets. B. Add a path-based rule to the ALB to forward requests for the static assets. C. Add an RTMP distribution to allow caching of both static and dynamic content. D. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets. E. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list.

AC

A solutions architect is developing a publicly available online application that will be distributed through Amazon CloudFront and will originate from an Amazon S3 website endpoint. After deploying the solution, the website displays an Error 403: Access Denied notice. How should the solutions architect proceed to resolve the issue? (Select two.) A. Remove the S3 block public access option from the S3 bucket. B. Remove the requester pays option from the S3 bucket. C. Remove the origin access identity (OAI) from the CloudFront distribution. D. Change the storage class from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA). E. Disable S3 object versioning.

AC

Which security components of AWS are the customer's responsibility? (Select four.) A. Security Group and ACL (Access Control List) settings B. Decommissioning storage devices C. Patch management on the EC2 instance's operating system D. Life-cycle management of IAM credentials E. Controlling physical access to compute resources F. Encryption of EBS (Elastic Block Storage) volumes

ACDF

A financial services firm is migrating to AWS and wants to allow developers to experiment and innovate while restricting access to production apps. The following conditions have been established by the company: ✑ Production workloads cannot be directly connected to the internet. ✑ All workloads must be restricted to the us-west-2 and eu-central-1 Regions. ✑ Notification should be sent when developer sandboxes exceed $500 in AWS spending monthly. Which combination of steps is required to develop a multi-account structure that fits the requirements of the business? (Select three.) A. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an organizational unit (OU). For each account, delete the default VPC. Create an SCP with a Deny rule for the attach an internet gateway and create a default VPC actions. Attach the SCP to the OU for the production accounts. B. Create accounts for each production workload within an organization in AWS Organizations. Place the production accounts within an organizational unit (OU). Create an SCP with a Deny rule on the attach an internet gateway action. Create an SCP with a Deny rule to prevent use of the default VPC. Attach the SCPs to the OU for the production accounts. C. Create a SCP containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on an aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the SCP to the organization's root. D. Create an IAM permission boundary containing a Deny Effect for cloudfront:*, iam:*, route53:*, and support:* with a StringNotEquals condition on an aws:RequestedRegion condition key with us-west-2 and eu-central-1 values. Attach the permission boundary to an IAM group containing the development and production users. E. Create accounts for each development workload within an organization in AWS Organizations. 
Place the development accounts within an organizational unit (OU). Create a custom AWS Config rule to deactivate all IAM users when an account's monthly bill exceeds $500. F. Create accounts for each development workload within an organization in AWS Organizations. Place the development accounts within an organizational unit (OU). Create a budget within AWS Budgets for each development account to monitor and report on monthly spending exceeding $500.

ACF

A media firm is utilizing Amazon CloudFront to provide video files stored in Amazon S3. The development team needs access to the logs in order to identify problems and monitor the service. CloudFront log files may include personally identifiable information about users, so before making logs accessible to the development team, the corporation utilizes a log processing provider to remove sensitive information. The following criteria apply to unprocessed logs: ✑ The logs must be encrypted at rest and must be accessible by the log processing service only. ✑ Only the data protection team can control access to the unprocessed log files. ✑ AWS CloudFormation templates must be stored in AWS CodeCommit. ✑ AWS CodePipeline must be triggered on commit to perform updates made to CloudFormation templates. ✑ CloudFront is already writing the unprocessed logs to an Amazon S3 bucket, and the log processing service is operating against this S3 bucket. Which sequence of actions should a solutions architect perform in order to satisfy the business's requirements? (Select two.)

AD

A business wishes to operate Amazon EC2 instances only from AMIs that have been pre-approved by the information security department. The development team uses an agile continuous integration and deployment approach that must not be slowed down by the solution. Which strategy imposes the necessary restrictions with the LEAST amount of impact on the development process? (Select two.) A. Use IAM policies to restrict the ability of users or other automated entities to launch EC2 instances based on a specific set of pre-approved AMIs, such as those tagged in a specific way by Information Security. B. Use regular scans within Amazon Inspector with a custom assessment template to determine if the EC2 instance that the Amazon Inspector Agent is running on is based upon a pre-approved AMI. If it is not, shut down the instance and inform Information Security by email that this occurred. C. Only allow launching of EC2 instances using a centralized DevOps team, which is given work packages via notifications from an internal ticketing system. Users make requests for resources using this ticketing tool, which has manual information security approval steps to ensure that EC2 instances are only launched from approved AMIs. D. Use AWS Config rules to spot any launches of EC2 instances based on non-approved AMIs, trigger an AWS Lambda function to automatically terminate the instance, and publish a message to an Amazon SNS topic to inform Information Security that this occurred. E. Use a scheduled AWS Lambda function to scan through the list of running instances within the virtual private cloud (VPC) and determine if any of these are based on unapproved AMIs. Publish a message to an SNS topic to inform Information Security that this occurred and then shut down the instance.

AD Reference: https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html

A financial services organization maintains an on-premises system that consumes market data feeds from stock exchanges, processes the data, and distributes it to an internal Apache Kafka cluster. Management want to utilize AWS services in order to develop a scalable and near-real-time solution capable of delivering stock market data to a web application in a consistent manner. Which stages should a solutions architect follow while developing a solution? (Select three.) A. Establish an AWS Direct Connect connection from the on-premises data center to AWS. B. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream. C. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream. D. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients. E. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients. F. Establish a Site-to-Site VPN from the on-premises data center to AWS.

ADE

A business is launching a web-based application in many countries. The application has both static and dynamic content, which is stored in a private Amazon S3 bucket and hosted in Amazon ECS containers behind an Application Load Balancer (ALB). The organization stipulates that all static and dynamic application material must be available through Amazon CloudFront. Which combination of procedures should a solutions architect propose to protect CloudFront's direct content access? (Select three.) A. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB. B. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution. C. Configure CloudFront to add a custom header to origin requests. D. Configure the ALB to add a custom header to HTTP requests. E. Update the S3 bucket ACL to allow access from the CloudFront distribution only. F. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.

ADF
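The custom-header technique referenced in the options can be sketched as a WAFv2-style byte-match rule statement. CloudFront adds a secret header on origin requests, and a web ACL on the ALB only allows requests carrying it, so requests that bypass CloudFront are blocked. The header name and secret value below are hypothetical:

```python
# Rule statement matching the secret header CloudFront is configured to
# inject on origin requests; requests hitting the ALB directly lack the
# header and fail the match.
rule_statement = {
    "ByteMatchStatement": {
        "SearchString": "example-shared-secret",
        "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
        "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
        "PositionalConstraint": "EXACTLY",
    }
}

print(rule_statement["ByteMatchStatement"]["FieldToMatch"])
```

For the S3 side, the OAI in option F plays the analogous role: the bucket policy admits only the OAI, so objects cannot be fetched around CloudFront.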

You've been entrusted with the responsibility of migrating a legacy application from a virtual machine hosted in your datacenter to an Amazon VPC. Regrettably, this app needs access to many on-premises services, and the person who set it up no longer works for your firm. Worse still, there is no documentation. What enables the application operating inside the VPC to communicate with and access its internal dependencies without requiring reconfiguration? (Select three.) A. An AWS Direct Connect link between the VPC and the network housing the internal services. B. An Internet Gateway to allow a VPN connection. C. An Elastic IP address on the VPC instance D. An IP address space that does not conflict with the one on-premises E. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies' IP addresses F. A VM Import of the current virtual machine

ADF

A solutions architect is assessing the dependability of a freshly transferred application running on Amazon Web Services. Amazon S3 is used to host the front end, which is expedited through Amazon CloudFront. The application layer is implemented as a stateless Docker container running on an Amazon EC2 On-Demand Instance with an Elastic IP address. The storage layer is a MongoDB database operating in the same Availability Zone as the application layer on an EC2 Reserved Instance. Which sequence of actions should the solutions architect do to reduce single points of failure while requiring minimum modifications to the application's code? (Select two.) A. Create a REST API in Amazon API Gateway and use AWS Lambda functions as the application layer B. Create an Application Load Balancer and migrate the Docker container to AWS Fargate C. Migrate the storage layer to Amazon DynamoDB D. Migrate the storage layer to Amazon DocumentDB (with MongoDB compatibility) E. Create an Application Load Balancer and move the storage layer to an EC2 Auto Scaling group

AE

A business demands that all internal applications utilize private IP addresses for connection. A solutions architect has established interface endpoints to connect to AWS public services in order to accommodate this policy. The solutions architect observes that the service names resolve to public IP addresses and that internal services are unable to connect to the interface endpoints. Which procedure should the solutions architect use in order to remedy this issue? A. Update the subnet route table with a route to the interface endpoint B. Enable the private DNS option on the VPC attributes C. Configure the security group on the interface endpoint to allow connectivity to the AWS services D. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application

B

A business is creating a gene reporting device that will gather genetic data to aid researchers in the collection of huge samples of data from a varied population. The gadget will transmit 8 KB of genomic data per second to a data platform, which will be responsible for processing and analyzing the data and communicating the results to researchers. The data platform must comply with the following specifications: ✑ Analyze incoming genomic data in near-real time ✑ Ascertain that the data is adaptable, parallel, and durable ✑ Deliver processed data to a data warehouse Which approach should a solutions architect use in order to satisfy these requirements? A. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance. B. Use Amazon Kinesis Data Streams to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR. C. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster. D. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.

B
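For option B, shard sizing is easy to sanity-check: a Kinesis Data Streams shard ingests up to 1 MB/s or 1,000 records/s, so the device's 8 KB/s feed fits comfortably in a single shard (the math below assumes one 8 KB record per second):

```python
import math

ingest_kb_per_s = 8            # device emits 8 KB of genomic data/second
shard_limit_kb_per_s = 1024    # 1 MB/s ingest limit per shard
records_per_s = 1              # assumption: one record per second
shard_record_limit = 1000      # 1,000 records/s per shard

# A shard must satisfy both the byte limit and the record limit.
shards_for_bytes = math.ceil(ingest_kb_per_s / shard_limit_kb_per_s)
shards_for_records = math.ceil(records_per_s / shard_record_limit)
shards_needed = max(shards_for_bytes, shards_for_records)

print(shards_needed)  # → 1
```

The same arithmetic scales linearly if many devices stream concurrently, which is what makes Kinesis Data Streams a good fit for the "adaptable, parallel, and durable" requirement.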

A business is transferring from on-premises to the AWS Cloud its three-tier web application. The following criteria apply to the migration process: ✑ Ingest machine images from the on-premises environment. ✑ Synchronize changes from the on-premises environment to the AWS environment until the production cutover. ✑ Minimize downtime when executing the production cutover. ✑ Migrate the virtual machines' root volumes and data volumes. Which option will meet these criteria with the least amount of operational overhead? A. Use AWS Server Migration Service (SMS) to create and launch a replication job for each tier of the application. Launch instances from the AMIs created by AWS SMS. After initial testing, perform a final replication and create new instances from the updated AMIs. B. Create an AWS CLI VM Import/Export script to migrate each virtual machine. Schedule the script to run incrementally to maintain changes in the application. Launch instances from the AMIs created by VM Import/Export. Once testing is done, rerun the script to do a final import and launch the instances from the AMIs. C. Use AWS Server Migration Service (SMS) to upload the operating system volumes. Use the AWS CLI import-snapshot command for the data volumes. Launch instances from the AMIs created by AWS SMS and attach the data volumes to the instances. After initial testing, perform a final replication, launch new instances from the replicated AMIs, and attach the data volumes to the instances. D. Use AWS Application Discovery Service and AWS Migration Hub to group the virtual machines as an application. Use the AWS CLI VM Import/Export script to import the virtual machines as AMIs. Schedule the script to run incrementally to maintain changes in the application. Launch instances from the AMIs. After initial testing, perform a final virtual machine import and launch new instances from the AMIs.

B

A business operates an e-commerce platform that is divided into front-end and e-commerce tiers. Both tiers are built on LAMP stacks, with the front-end instances running behind an AWS-hosted load-balancing appliance. At the moment, the Operations team logs into instances via SSH to maintain patches and solve other issues. Recently, the site has been the subject of a number of attacks, including: ✑ A DDoS attack. ✑ An SQL injection attack. ✑ Several successful dictionary attacks on SSH accounts on the web servers. By shifting to AWS, the firm hopes to increase the security of its e-commerce platform. The Solutions Architects at the organization have chosen the following approach: ✑ Code review the existing application and fix any SQL injection issues. ✑ Migrate the web application to AWS and leverage the latest AWS Linux AMI to address initial security patching. ✑ Install AWS Systems Manager to manage patching and allow the system administrators to run commands on all instances, as needed. What further efforts will address the threat types identified while maintaining high availability and lowering risk? A. Enable SSH access to the Amazon EC2 instances using a security group that limits access to specific IPs. Migrate on-premises MySQL to Amazon RDS Multi-AZ. Install the third-party load balancer from the AWS Marketplace and migrate the existing rules to the load balancer's AWS instances. Enable AWS Shield Standard for DDoS protection. B. Disable SSH access to the Amazon EC2 instances. Migrate on-premises MySQL to Amazon RDS Multi-AZ. Leverage an Elastic Load Balancer to spread the load and enable AWS Shield Advanced for protection. Add an Amazon CloudFront distribution in front of the website. Enable AWS WAF on the distribution to manage the rules. C. Enable SSH access to the Amazon EC2 instances through a bastion host secured by limiting access to specific IP addresses. Migrate on-premises MySQL to a self-managed EC2 instance. 
Leverage an AWS Elastic Load Balancer to spread the load and enable AWS Shield Standard for DDoS protection. Add an Amazon CloudFront distribution in front of the website. D. Disable SSH access to the EC2 instances. Migrate on-premises MySQL to Amazon RDS Single-AZ. Leverage an AWS Elastic Load Balancer to spread the load. Add an Amazon CloudFront distribution in front of the website. Enable AWS WAF on the distribution to manage the rules.

B

An on-premises application will be migrated to the AWS Cloud. The application is composed of a single Elasticsearch virtual machine with data source feeds from non-migrated local systems, and a Java web application running on Apache Tomcat across three virtual machines. Elasticsearch currently consumes 1 TB of its 16 TB of available storage, and the web application is updated every four months. The web application is accessible through the Internet by several users. A 10 Gbit AWS Direct Connect connection has been built, and the application may now be transferred within a 48-hour planned change window. Which option will have the MINIMUM effect on Operations personnel after the migration? A. Create an Elasticsearch server on Amazon EC2 right-sized with 2 TB of Amazon EBS and a public AWS Elastic Beanstalk environment for the web application. Pause the data sources, export the Elasticsearch index from on premises, and import into the EC2 Elasticsearch server. Move data source feeds to the new Elasticsearch server and move users to the web application. B. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Use AWS DMS to replicate Elasticsearch data. When replication has finished, move data source feeds to the new Amazon ES cluster endpoint and move users to the new web application. C. Use the AWS SMS to replicate the virtual machines into AWS. When the migration is complete, pause the data source feeds and start the migrated Elasticsearch and web application instances. Place the web application instances behind a public Elastic Load Balancer. Move the data source feeds to the new Elasticsearch server and move users to the new web Application Load Balancer. D. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Pause the data source feeds, export the Elasticsearch index from on premises, and import into the Amazon ES cluster. 
Move the data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.

B

A corporation has an on-premises data center with a High Performance Computing (HPC) cluster that executes thousands of tasks in parallel for one week each month, processing petabytes of photos. The photos are archived on a network file server and duplicated to a disaster recovery location. The on-premises data center has reached capacity and has begun spreading the tasks over the month in order to maximize the use of the cluster, resulting in a delay in work completion. The firm has tasked its Solutions Architect with developing a cost-effective solution on AWS that would enable it to go beyond its present capacity of 5,000 cores and 10 petabytes of data. The solution must be as low-maintenance as possible while maintaining the existing degree of durability. Which option will best fulfill the needs of the business? A. Create a container in the Amazon Elastic Container Registry with the executable file for the job. Use Amazon ECS with Spot Fleet in Auto Scaling groups. Store the raw data in Amazon EBS SC1 volumes and write the output to Amazon S3. B. Create an Amazon EMR cluster with a combination of On Demand and Reserved Instance Task Nodes that will use Spark to pull data from Amazon S3. Use Amazon DynamoDB to maintain a list of jobs that need to be processed by the Amazon EMR cluster. C. Store the raw data in Amazon S3, and use AWS Batch with Managed Compute Environments to create Spot Fleets. Submit jobs to AWS Batch Job Queues to pull down objects from Amazon S3 onto Amazon EBS volumes for temporary storage to be processed, and then write the results back to Amazon S3. D. Submit the list of jobs to be processed to an Amazon SQS to queue the jobs that need to be processed. Create a diversified cluster of Amazon EC2 worker instances using Spot Fleet that will automatically scale based on the queue depth. Use Amazon EFS to store all the data sharing it across all instances in the cluster.

B

A policy such as the one below may be attached to an IAM group. It enables an IAM user in that group to use the console to access a "home directory" in Amazon S3 that matches their user name. { "Version": "2012-10-17", "Statement": [ { "Action": ["s3:*"], "Effect": "Allow", "Resource": ["arn:aws:s3:::bucket-name"], "Condition":{"StringLike":{"s3:prefix":["home/${aws:username}/*"]}} }, { "Action":["s3:*"], "Effect":"Allow", "Resource": ["arn:aws:s3:::bucket-name/home/${aws:username}/*"] } ] } A. True B. False

B
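Whatever one concludes about console usability, the mechanics of the policy's condition are worth understanding: IAM substitutes the `${aws:username}` policy variable at evaluation time, and `StringLike` performs case-sensitive wildcard matching against the requested `s3:prefix`. A simplified stdlib-Python model (not the real IAM evaluator) of that matching:

```python
from fnmatch import fnmatchcase

def prefix_allowed(policy_pattern: str, username: str, requested_prefix: str) -> bool:
    """Simplified model of an s3:prefix StringLike condition.

    IAM substitutes ${aws:username} before matching; StringLike then
    treats * as a multi-character wildcard (case-sensitive).
    """
    pattern = policy_pattern.replace("${aws:username}", username)
    return fnmatchcase(requested_prefix, pattern)

# The policy's condition value: "home/${aws:username}/*"
print(prefix_allowed("home/${aws:username}/*", "alice", "home/alice/reports/"))  # True
print(prefix_allowed("home/${aws:username}/*", "alice", "home/bob/reports/"))    # False
```

Each user can therefore only list and act on keys under their own `home/<username>/` prefix.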

A solutions architect at a big organization is responsible for configuring network security for outgoing traffic to the internet from all AWS accounts inside the corporation's AWS Organizations. The business has over 100 AWS accounts, which are connected through a centralized AWS Transit Gateway. Each account is equipped with both an internet gateway and a NAT gateway for outgoing internet traffic. The organization limits its AWS resource deployments to a single AWS Region. The business needs the ability to implement centrally controlled rule-based filtering of all outgoing traffic to the internet for all AWS accounts. In each Availability Zone, the peak load of outbound traffic will not exceed 25 Gbps. Which solution satisfies these criteria? A. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group. B. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default routes to point to the Network Firewall endpoints. C. Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network Firewall firewalls in each account. D. In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.

B

Amazon S3 is used by a business to store documents that are exclusively accessible through an Amazon EC2 instance in a particular virtual private cloud (VPC). The organization is concerned that a hostile insider with access to this instance may also create an EC2 instance in another VPC and use it to access these data. Which of the following options provide the needed level of protection? A. Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint. B. Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance profile. C. Use S3 client-side encryption and store the key in the instance metadata. D. Use S3 server-side encryption and protect the key with an encryption context.

B

On Amazon EC2, a web startup hosts its very successful social news service using an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The primary web application performs optimally on m2.xlarge instances because of its high memory requirements. Each new deployment necessitates the semi-automated building and testing of a new AMI for the application servers, which takes a considerable amount of time and is consequently only performed once per week. Recently, a new chat feature implemented in Node.js was introduced into the architecture. Initial testing indicates that the new component is CPU-bound. Due to the company's prior experience with Chef, they chose to streamline the deployment process by using AWS OpsWorks as an application lifecycle management platform to simplify application administration and minimize deployment cycles. What AWS OpsWorks setup is required to incorporate the new chat module in the most cost-effective and flexible manner possible? A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes

B

The data science team at a large corporation wants to create a secure, cost-effective method for providing quick access to Amazon SageMaker. The data scientists are unfamiliar with AWS and want the ability to deploy a Jupyter notebook instance. The notebook instance must be prepared with an AWS KMS key in order to encrypt data at rest on the machine learning storage volume without disclosing the extensive setup requirements. Which strategy enables the business to provide a self-service mechanism for data scientists to start Jupyter notebooks in its AWS accounts with the LEAST amount of operational overhead? A. Create a serverless front end using a static Amazon S3 website to allow the data scientists to request a Jupyter notebook instance by filling out a form. Use Amazon API Gateway to receive requests from the S3 website and trigger a central AWS Lambda function to make an API call to Amazon SageMaker that will launch a notebook instance with a preconfigured KMS key for the data scientists. Then call back to the front-end website to display the URL to the notebook instance. B. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource type with a preconfigured KMS key. Add a user-friendly name to the CloudFormation template. Display the URL to the notebook using the Outputs section. Distribute the CloudFormation template to the data scientists using a shared Amazon S3 bucket. C. Create an AWS CloudFormation template to launch a Jupyter notebook instance using the AWS::SageMaker::NotebookInstance resource type with a preconfigured KMS key. Simplify the parameter names, such as the instance size, by mapping them to Small, Large, and X-Large using the Mappings section in CloudFormation. Display the URL to the notebook using the Outputs section, then upload the template into an AWS Service Catalog product in the data scientist's portfolio, and share it with the data scientist's IAM role. D. 
Create an AWS CLI script that the data scientists can run locally. Provide step-by-step instructions about the parameters to be provided while executing the AWS CLI script to launch a Jupyter notebook with a preconfigured KMS key. Distribute the CLI script to the data scientists using a shared Amazon S3 bucket.

B

You've created a CloudFormation template that deploys a single Elastic Load Balancer in front of two EC2 Instances. Which portion of the template should you alter to ensure that the load balancer's DNS is returned when the stack is created? A. Parameters B. Outputs C. Mappings D. Resources

B
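As a sketch, an Outputs section for a Classic Load Balancer resource might look like the following (the resource name `MyLoadBalancer` is illustrative):

```yaml
Outputs:
  LoadBalancerDNSName:
    Description: DNS name of the Elastic Load Balancer
    Value: !GetAtt MyLoadBalancer.DNSName
```

`Fn::GetAtt` on the load balancer's `DNSName` attribute surfaces the generated DNS name in the stack's outputs once creation completes.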

You've created an Amazon EC2 instance and connected four (4) 500 GB EBS Provisioned IOPS volumes. The EC2 instance is optimized for EBS and offers a throughput of 500 Mbps between EC2 and EBS. The four EBS volumes are set in RAID 0, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes), giving the instance a total of 16,000 random IOPS. The EC2 instance initially performs at the desired rate of 16,000 IOPS random read and write. Later on, to boost the instance's overall random I/O performance, you add two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume, like the original four, is provisioned to 4,000 IOPS, for a total of 24,000 IOPS on the EC2 instance. Monitoring indicates that the CPU usage of the EC2 instance went from 50% to 70%, while the total random IOPS observed at the instance level remained constant. What is the issue and what is a viable solution? A. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput. B. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and filesystem to use 64KB blocks to increase throughput. C. The standard EBS Instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume. D. Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1TB. E. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.

B
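Whichever option one picks, the relationship between link throughput and achievable IOPS that option A rests on is simple arithmetic: once the instance-to-EBS link is saturated, provisioning more volume IOPS cannot raise aggregate IOPS. A sketch taking the quoted 500 Mbps figure at face value for 16 KiB I/Os (a simplified model that ignores protocol overhead):

```python
def max_iops(throughput_mbps: float, io_size_bytes: int) -> int:
    """Throughput-limited IOPS ceiling: bytes per second divided by I/O size."""
    bytes_per_sec = throughput_mbps * 1_000_000 / 8  # Mbps -> bytes/sec
    return int(bytes_per_sec // io_size_bytes)

# 500 Mbps of EBS-optimized throughput caps 16 KiB random I/O far below
# the 24,000 IOPS provisioned across the six volumes.
print(max_iops(500, 16 * 1024))   # 3814
print(max_iops(4000, 16 * 1024))  # a larger link raises the ceiling
```

The fix for a throughput bottleneck is a bigger pipe (a larger EBS-optimized instance), not more volumes behind the same pipe.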

Your firm creates one-of-a-kind skiing helmets on commission from customers, fusing high fashion with specific technological advancements. Customers may flaunt their individuality on the ski slopes, thanks to head-up displays, GPS rearview cameras, and any other technological advancements they desire to include in the helmet. The present production process is data-intensive and sophisticated, requiring evaluations to guarantee that the bespoke electronics and materials used to construct the helmets meet the highest requirements. Assessments are a combination of manual and computerized evaluations. You must add a new set of assessments to simulate the failure modes of the bespoke electronics utilizing GPUs and CUDA, distributed over a cluster of servers with low-latency networking. Which architecture would enable you to automate an existing process using a hybrid approach while also ensuring that the architecture is capable of supporting process change over time? A. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of G2 instances in a placement group. B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group. C. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). D. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

B

Your system recently experienced downtime. During the troubleshooting process, you discovered that a new administrator had mistakenly terminated several production EC2 instances. Which of the following techniques will assist in preventing a repeat of this incident in the future? The administrator must continue to have the ability to: ✑ launch, start, stop, and terminate development resources. ✑ launch and start production instances. A. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection. B. Leverage resource-based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources. C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

B

After your Lambda function has been running for some time, you'll want to examine certain metrics to see how well it's functioning. You'll want to accomplish this using the AWS CLI. Which of the following commands must be performed to get access to these metrics using the AWS Command Line Interface (CLI)? A. mon-list-metrics and mon-get-stats B. list-metrics and get-metric-statistics C. ListMetrics and GetMetricStatistics D. list-metrics and mon-get-stats

B AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. To access metrics using the AWS CLI, use the list-metrics and get-metric-statistics commands. Reference: http://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-access-metrics.html
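A hypothetical invocation of the two commands (the function name `my-function` and the time window are placeholders; requires configured AWS credentials):

```shell
aws cloudwatch list-metrics --namespace AWS/Lambda

aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda \
    --metric-name Invocations \
    --dimensions Name=FunctionName,Value=my-function \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-02T00:00:00Z \
    --period 3600 \
    --statistics Sum
```

The first command discovers which Lambda metrics exist; the second returns hourly invocation counts for the chosen function over the chosen window.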

You've been referred a new customer, and you know he's into online gaming. You're fairly confident he'll want to build an online gaming site, which will need a database service that delivers fast and reliable performance with seamless scalability. Which of the following Amazon Web Services databases is the best fit for an online gaming website? A. Amazon SimpleDB B. Amazon DynamoDB C. Amazon Redshift D. Amazon ElastiCache

B Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. You can use Amazon DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored, while maintaining consistent and fast performance. Reference: http://aws.amazon.com/documentation/dynamodb/

Which of the following Amazon RDS storage types is optimal for applications that need just light or burst I/O? A. Both magnetic and Provisioned IOPS storage B. Magnetic storage C. Provisioned IOPS storage D. None of these

B Amazon RDS provides three storage types: magnetic, General Purpose (SSD), and Provisioned IOPS (input/output operations per second). Magnetic (Standard) storage is ideal for applications with light or burst I/O requirements. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html

How can a user access the IAM Role that was established as part of the launch configuration? A. as-describe-launch-configs -iam-profile B. as-describe-launch-configs -show-long C. as-describe-launch-configs -iam-role D. as-describe-launch-configs -role

B as-describe-launch-configs describes all the launch configuration parameters created by the AWS account in the specified region. Generally, it returns values such as the launch configuration name, instance type, and AMI ID. If the user wants additional parameters, such as the IAM profile used in the config, he has to run the command: as-describe-launch-configs --show-long

Temporary security credentials for an IAM user are normally good for 12 hours, but you may request a length of up to ________ hours. A. 24 B. 36 C. 10 D. 48

B By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a duration as short as 15 minutes or as long as 36 hours. Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingSessionTokens.html

Dave is Example Corp.'s primary administrator, and he chooses to employ paths to better segment the company's users, creating a distinct administrator group for each path-based division. The following is a partial list of the paths he intends to use: * /marketing * /sales * /legal Dave establishes a new administrator group called Marketing_Admin for the company's marketing department. He assigns it the /marketing path. The group's ARN is arn:aws:iam::123456789012:group/marketing/Marketing_Admin. Dave attaches the following policy to the Marketing_Admin group, which grants it permission to perform all IAM actions on all groups and users on the /marketing path. Additionally, the policy grants the Marketing_Admin group permission to perform any AWS S3 actions on the objects in the marketing portion of the corporate bucket. { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": "iam:*", "Resource": [ "arn:aws:iam::123456789012:group/marketing/*", "arn:aws:iam::123456789012:user/marketing/*" ] }, { "Effect": "Allow", "Action": "s3:*", "Resource": "arn:aws:s3:::example_bucket/marketing/*" }, { "Effect": "Allow", "Action": "s3:ListBucket*", "Resource": "arn:aws:s3:::example_bucket", "Condition":{"StringLike":{"s3:prefix": "marketing/*"}} } ] } A. True B. False

B The policy's first statement has "Effect": "Deny", so it denies all IAM actions on the /marketing groups and users rather than allowing them, contradicting the scenario's description.
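The explanation turns on IAM's evaluation logic: an explicit Deny always wins over any Allow, and the default is an implicit deny. A toy evaluator (not the real IAM engine; action and resource matching are simplified to exact strings) makes the point:

```python
def evaluate(statements, action, resource):
    """Toy IAM evaluation: explicit Deny beats Allow; default is implicit deny."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if action in stmt["Action"] and resource in stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return "Deny"  # an explicit deny overrides everything
            decision = "Allow"
    return decision

statements = [
    {"Effect": "Deny", "Action": ["iam:CreateUser"],
     "Resource": ["arn:aws:iam::123456789012:user/marketing/bob"]},
    {"Effect": "Allow", "Action": ["iam:CreateUser"],
     "Resource": ["arn:aws:iam::123456789012:user/marketing/bob"]},
]
print(evaluate(statements, "iam:CreateUser",
               "arn:aws:iam::123456789012:user/marketing/bob"))  # Deny
```

Even if an Allow for the same action and resource were added elsewhere, the Deny in the first statement would still block the IAM actions.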

How many cg1.4xlarge on-demand instances can a user operate in a single region without obtaining AWS clearance for a limit increase? A. 20 B. 2 C. 5 D. 10

B Generally, AWS EC2 allows running 20 On-Demand instances and 100 Spot instances at a time. This limit can be increased by submitting a request at https://aws.amazon.com/contact-us/ec2-request. For certain instance types the limit is lower; for cg1.4xlarge, the user can run only 2 On-Demand instances at a time. Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2

A user wishes to run both a web server and an application server on a single EC2 instance that is part of a VPC's public subnet. How can the user configure two distinct public IP addresses and security groups for the application and the web server? A. Launch VPC with two separate subnets and make the instance a part of both the subnets. B. Launch a VPC instance with two network interfaces. Assign a separate security group and elastic IP to them. C. Launch a VPC instance with two network interfaces. Assign a separate security group to each and AWS will assign a separate public IP to them. D. Launch a VPC with ELB such that it redirects requests to separate VPC instances of the public subnet.

B If you need to host multiple websites (with different IPs) on a single EC2 instance, the following is the suggested method from AWS. Launch a VPC instance with two network interfaces. Assign elastic IPs from the VPC EIP pool to those interfaces (because, when the user has attached more than one network interface to an instance, AWS cannot assign public IPs to them). Assign separate security groups if separate security groups are needed. This scenario also helps for operating network appliances, such as firewalls or load balancers that have multiple private IP addresses for each network interface. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html

A business has a data center that must be swiftly moved to AWS. The data center is connected to Amazon Web Services through a 500 Mbps AWS Direct Connect connection and a separate, fully accessible 1 Gbps ISP connection. A Solutions Architect is tasked with the responsibility of transferring 20 terabytes of data from the data center to an Amazon S3 bucket. What is the FASTEST method of data transfer? A. Upload the data to the S3 bucket using the existing DX link. B. Send the data to AWS using the AWS Import/Export service. C. Upload the data using an 80 TB AWS Snowball device. D. Upload the data to the S3 bucket using S3 Transfer Acceleration.

B Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity. Reference: https://stackshare.io/stackups/aws-direct-connect-vs-aws-import-export
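A quick feasibility check supports the comparison, assuming ideal sustained utilization of each link (real transfers see protocol overhead and contention):

```python
def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days to move data_tb terabytes over a link of link_mbps megabits/sec."""
    bits = data_tb * 1e12 * 8          # TB -> bits (decimal units)
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86_400

print(round(transfer_days(20, 500), 1))   # ~3.7 days over the 500 Mbps DX link
print(round(transfer_days(20, 1000), 1))  # ~1.9 days over the 1 Gbps ISP link
```

Even under ideal conditions, 20 TB ties up the links for days, which is why shipping the data physically is often the fastest route at this scale.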

In Amazon ElastiCache, the loss of a single cache node might affect the availability of your application and the strain on your back-end database, while ElastiCache creates a new cache node and populates it. Which of the following is a solution for mitigating this possible effect on availability? A. Spread your memory and compute capacity over fewer number of cache nodes, each with smaller capacity. B. Spread your memory and compute capacity over a larger number of cache nodes, each with smaller capacity. C. Include fewer number of high capacity nodes. D. Include a larger number of cache nodes, each with high capacity.

B In Amazon ElastiCache, the number of cache nodes in the cluster is a key factor in the availability of your cluster running Memcached. The failure of a single cache node can have an impact on the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed cache node and it is repopulated. You can reduce this potential availability impact by spreading your memory and compute capacity over a larger number of cache nodes, each with smaller capacity, rather than using a fewer number of high capacity nodes. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/CacheNode.Memcached.html
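The trade-off is easy to quantify for a fixed total cache size: if keys are spread evenly, losing one node evicts 1/n of the cached data until the replacement warms up:

```python
def capacity_lost_on_node_failure(node_count: int) -> float:
    """Fraction of an evenly distributed cache lost when one node fails."""
    return 1 / node_count

print(f"{capacity_lost_on_node_failure(4):.0%}")   # 25% - few large nodes
print(f"{capacity_lost_on_node_failure(20):.0%}")  # 5%  - many small nodes
```

More, smaller nodes shrink the blast radius of any single failure, at the cost of managing a larger fleet.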

You're operating an application on an Amazon EC2 instance that enables users to download files from a private S3 bucket using a pre-signed URL. Prior to generating the URL, the application should ensure that the file exists in S3. How should the application safely access the S3 bucket using AWS credentials? A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application. B. Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata. C. Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user. D. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user's credentials from the EC2 instance user data.

B Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

A big multinational corporation wishes to deploy a stateless mission-critical application to Amazon Web Services (AWS). On a z/OS operating system, the application is built on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software). What procedures should the Solutions Architect follow while migrating the application to AWS? A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2. B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2. C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform z/OS-based DB2 to Amazon RDS DB2. D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to an Amazon MQ.

B Reference: https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now-support-ibm-db2-as-a-source/ https://aws.amazon.com/quickstart/architecture/ibm-mq/

Recently, a company's CFO evaluated the company's monthly AWS bill and saw an opportunity to minimize the cost of the company's AWS Elastic Beanstalk instances in use. The CFO has tasked a Solutions Architect with designing a highly available solution that would automatically start an Elastic Beanstalk environment in the morning and shut it down at the end of the day. The solution should be created with the least amount of operational overhead and the lowest possible cost. Additionally, it should be able to manage the rising usage of Elastic Beanstalk instances by various teams and offer a centralized scheduling solution for all teams to keep operating expenses low. Which design will satisfy these criteria? A. Set up a Linux EC2 Micro instance. Configure an IAM role to allow the start and stop of the Elastic Beanstalk environment and attach it to the instance. Create scripts on the instance to start and stop the Elastic Beanstalk environment. Configure cron jobs on the instance to execute the scripts. B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron expression Amazon CloudWatch Events rules to trigger the Lambda functions. C. Develop an AWS Step Functions state machine with "Wait" as its type to control the start and stop time. Use the activity task to start and stop the Elastic Beanstalk environment. Create a role for Step Functions to allow it to start and stop the Elastic Beanstalk environment. Invoke Step Functions daily. D. Configure a time-based Auto Scaling group. In the morning, have the Auto Scaling group scale up an Amazon EC2 instance and put the Elastic Beanstalk environment start command in the EC2 instance user data. At the end of the day, scale down the instance number to 0 to terminate the EC2 instance.

B Reference: https://aws.amazon.com/premiumsupport/knowledge-center/schedule-elastic-beanstalk-stop-restart/
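As a sketch, the two CloudWatch Events schedule expressions from option B could look like the following (times are illustrative, in UTC; CloudWatch cron expressions have six fields: minutes, hours, day-of-month, month, day-of-week, year):

```text
cron(0 8 ? * MON-FRI *)    # trigger the "start environment" Lambda at 08:00
cron(0 18 ? * MON-FRI *)   # trigger the "stop environment" Lambda at 18:00
```

One pair of rules in a central account can invoke the start/stop Lambda functions for every team's environment, keeping scheduling in one place.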

Within a firm, a development team is releasing new APIs as serverless applications. At the moment, the team is utilizing the AWS Management Console to deploy resources for Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A Solutions Architect has been charged with automating future serverless API deployments. How can this be accomplished? A. Use AWS CloudFormation with a Lambda-backed custom resource to provision API Gateway. Use the AWS::DynamoDB::Table and AWS::Lambda::Function resources to create the Amazon DynamoDB table and Lambda functions. Write a script to automate the deployment of the CloudFormation template. B. Use the AWS Serverless Application Model to define the resources. Upload a YAML template and application files to the code repository. Use AWS CodePipeline to connect to the code repository and to create an action to build using AWS CodeBuild. Use the AWS CloudFormation deployment provider in CodePipeline to deploy the solution. C. Use AWS CloudFormation to define the serverless application. Implement versioning on the Lambda functions and create aliases to point to the versions. When deploying, configure weights to implement shifting traffic to the newest version, and gradually update the weights as traffic moves over. D. Commit the application code to the AWS CodeCommit code repository. Use AWS CodePipeline and connect to the CodeCommit code repository. Use AWS CodeBuild to build and deploy the Lambda functions using AWS CodeDeploy. Specify the deployment preference type in CodeDeploy to gradually shift traffic over to the new version.

B Reference: https://aws.amazon.com/quickstart/architecture/serverless-cicd-for-enterprise/ https://aws-quickstart.s3.amazonaws.com/quickstart-trek10-serverless-enterprise-cicd/doc/serverless-cicd-for-the-enterprise-on-the-aws-cloud.pdf

A Solutions Architect is tasked with developing a cost-effective backup solution for a company's 500 MB source code repository containing proprietary and sensitive applications. The repository is Linux-based and performs daily tape backups. Backup tapes are retained for one year. The existing solution does not satisfy the firm's demands because it is a manual process that is prone to error, costly to maintain, and does not meet the Recovery Point Objective (RPO) of 1 hour or the Recovery Time Objective (RTO) of 2 hours. The new disaster recovery criteria are that backups be kept offsite and that a single file can be restored if necessary. Which solution satisfies the customer's RTO, RPO, and disaster recovery requirements with the LEAST amount of work and expense? A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with current backup software. Run backups nightly and store the virtual tapes on Amazon S3 standard storage in US-EAST-1. Use cross-region replication to create a second copy in US-WEST-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year. B. Configure the local source code repository to synchronize files to an AWS Storage Gateway file gateway to store backup copies in an Amazon S3 Standard bucket. Enable versioning on the Amazon S3 bucket. Create Amazon S3 lifecycle policies to automatically migrate old versions of objects to Amazon S3 Standard - Infrequent Access, then Amazon Glacier, then delete backups after 1 year. C. Replace the local source code repository storage with a Storage Gateway stored volume. Change the default snapshot frequency to 1 hour. Use Amazon S3 lifecycle policies to archive snapshots to Amazon Glacier and remove old snapshots after 1 year. Use cross-region replication to create a copy of the snapshots in US-WEST-2. D. 
Replace the local source code repository storage with a Storage Gateway cached volume. Create a snapshot schedule to take hourly snapshots. Use an Amazon CloudWatch Events schedule expression rule to run an hourly AWS Lambda task to copy snapshots from US-EAST -1 to US-WEST-2.

B Reference: https://d1.awsstatic.com/whitepapers/aws-storage-gateway-file-gateway-for-hybrid-architectures.pdf
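Option B's lifecycle chain (Standard → Standard-IA → Glacier → delete after one year) can be expressed as the rule dict that boto3's `put_bucket_lifecycle_configuration` accepts. The 30- and 90-day transition thresholds are illustrative assumptions; only the one-year deletion is fixed by the question.

```python
# Option B's lifecycle chain for noncurrent object versions (versioning is
# enabled on the bucket, so each backup sync creates a new version).
# The 30/90-day thresholds are assumptions; the 1-year expiry is given.

def backup_lifecycle_rule() -> dict:
    return {
        "ID": "backup-retention",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # apply to the whole bucket
        "NoncurrentVersionTransitions": [
            {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
            {"NoncurrentDays": 90, "StorageClass": "GLACIER"},
        ],
        "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
    }

# This dict would go into LifecycleConfiguration={"Rules": [backup_lifecycle_rule()]}
# on an s3 client's put_bucket_lifecycle_configuration call.
```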

In terms of the AWS Lambda permissions model, when you create a Lambda function, you specify an IAM role that AWS Lambda can assume in order to run the Lambda function on your behalf. This role is also referred to as the _________ role. A. configuration B. execution C. delegation D. dependency

B Regardless of how your Lambda function is invoked, AWS Lambda always executes the function. At the time you create a Lambda function, you specify an IAM role that AWS Lambda can assume to execute your Lambda function on your behalf. This role is also referred to as the execution role. Reference: http://docs.aws.amazon.com/lambda/latest/dg/lambda-dg.pdf

What is the maximum length of an AWS IAM instance profile name? A. 512 characters B. 128 characters C. 1024 characters D. 64 characters

B The maximum length for an instance profile name is 128 characters. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/LimitationsOnEntities.html
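The documented IAM length limits (128 characters for instance profile and group names, 64 for role and user names) reduce to a simple pre-flight validator; the character class below is the set IAM accepts for these names.

```python
# Pre-flight validation of IAM entity names against the documented limits:
# instance profiles and groups allow up to 128 characters, roles and users 64.
import re

MAX_LEN = {"instance-profile": 128, "group": 128, "role": 64, "user": 64}

def valid_iam_name(kind: str, name: str) -> bool:
    """Return True if the name fits IAM's length limit and character set."""
    return (
        bool(name)
        and len(name) <= MAX_LEN[kind]
        and re.fullmatch(r"[\w+=,.@-]+", name) is not None
    )
```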

What is the default maximum number of BGP advertised routes per route table in Amazon VPC? A. 15 B. 100 C. 5 D. 10

B The maximum number of BGP advertised routes allowed per route table is 100. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html

What average queue length does AWS suggest for obtaining the lowest possible latency on a 200 PIOPS EBS volume? A. 5 B. 1 C. 2 D. 4

B The queue length is the number of pending I/O requests for a device. The optimal average queue length will vary for every customer workload, and this value depends on a particular application's sensitivity to IOPS and latency. If the workload is not delivering enough I/O requests to maintain the optimal average queue length, then the EBS volume might not consistently deliver the IOPS that have been provisioned. However, if the workload maintains an average queue length that is higher than the optimal value, then the per-request I/O latency will increase; in this case, the user should provision more IOPS for his volume. AWS recommends that the user should target an optimal average queue length of 1 for every 200 provisioned IOPS and tune that value based on his application requirements. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-workload-demand.html
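The guidance above reduces to simple arithmetic: one queued request per 200 provisioned IOPS, never less than one. A tiny helper makes the ratio explicit.

```python
# The 1-per-200 rule of thumb from the explanation above: target an average
# queue length of 1 for every 200 provisioned IOPS, with a floor of 1.

def target_queue_length(provisioned_iops: int) -> int:
    return max(1, round(provisioned_iops / 200))
```

So a 200 PIOPS volume targets a queue length of 1, while a 1,000 PIOPS volume would target 5; the documentation treats this as a starting point to tune against the application's latency sensitivity.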

Who is responsible for updating the routing tables and networking ACLs in a VPC in order to guarantee that a database instance is available from other VPC instances? A. AWS administrators B. The owner of the AWS account C. Amazon D. The DB engine vendor

B You are in charge of configuring the routing tables of your VPC as well as the network ACLs rules needed to make your DB instances accessible from all the instances of your VPC that need to communicate with it. Reference: http://aws.amazon.com/rds/faqs/

Your supervisor has tasked you with creating an elastic network interface on each of your web servers that connects to a mid-tier network housing an application server. He also wants this configured as a dual-homed instance on distinct subnets. Rather than routing network packets through the dual-homed instances, where should each dual-homed instance receive and process requests to satisfy his criteria? A. On one of the web servers B. On the front end C. On the back end D. Through a security group

B You can place an elastic network interface on each of your web servers that connects to a mid-tier network where an application server resides. The application server can also be dual-homed to a back-end network (subnet) where the database server resides. If it is set up like this, instead of routing network packets through the dual-homed instances, each dual-homed instance receives and processes requests on the front end and initiates a connection to the back end before finally sending requests to the servers on the back-end network. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Is the Amazon RDS API capable of modifying database instances inside a VPC and associating them with database security groups? A. Yes, Amazon does this but only for MySQL RDS. B. Yes C. No D. Yes, Amazon does this but only for Oracle RDS.

B You can use the ModifyDBInstance action, available in the Amazon RDS API, to pass values for the DBInstanceIdentifier and DBSecurityGroups parameters, specifying the instance ID and the DB security groups you want your instance to be part of. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html

A multimedia firm is creating an application for a worldwide user base using a single AWS account. The storage and bandwidth needs of the application are unpredictable. As the web layer, the application will employ Amazon EC2 instances behind an Application Load Balancer, while the database tier will use Amazon DynamoDB. The application's environment must match the following requirements: ✑ Low latency when accessed from any part of the world ✑ WebSocket support ✑ End-to-end encryption ✑ Protection against the latest security threats ✑ Managed layer 7 DDoS protection Which actions should the solutions architect take to meet these requirements? (Select two.) A. Use Amazon Route 53 and Amazon CloudFront for content distribution. Use Amazon S3 to store static content B. Use Amazon Route 53 and AWS Transit Gateway for content distribution. Use an Amazon Elastic Block Store (Amazon EBS) volume to store static content C. Use AWS WAF with AWS Shield Advanced to protect the application D. Use AWS WAF and Amazon Detective to protect the application E. Use AWS Shield Standard to protect the application

AC

A business wishes to relocate its on-premises data center to the Amazon Web Services (AWS) Cloud. This comprises hundreds of virtualized Linux and Microsoft Windows servers, storage area networks (SANs), and Java and PHP applications running on MySQL and Oracle databases. Numerous departmental services are hosted either in-house or elsewhere. Technical documentation is insufficient and out of date. A solutions architect must understand the present environment and forecast the cost of cloud resources after the move. Which tools or services should a solutions architect use while planning a cloud migration? (Choose three.) A. AWS Application Discovery Service B. AWS SMS C. AWS X-Ray D. AWS Cloud Adoption Readiness Tool (CART) E. Amazon Inspector F. AWS Migration Hub

ADF

Which of the following attributes of Amazon VPC subnets are true? (Select two.) A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment. B. Each subnet maps to a single Availability Zone. C. CIDR block mask of /25 is the smallest range supported. D. By default, all subnets can route between each other, whether they are private or public. E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.

BD

User photographs are uploaded to Amazon S3 for processing by a media storage application. According to end users, some submitted photographs are not being processed correctly. The Application Developers examine the logs and discover that AWS Lambda is having execution troubles when thousands of users are concurrently connected to the system. Issues arise as a result of: ✑ Limits around concurrent executions. ✑ The performance of Amazon DynamoDB when saving data. Which steps may be performed to improve the application's performance and reliability? (Select two.) A. Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables. B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables. C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions. D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions. E. Use S3 Transfer Acceleration to provide lower-latency access to end users.

BD Reference: B: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html D: https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
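For answers A and B, capacity sizing in provisioned mode is plain arithmetic using the documented unit definitions: one WCU covers a 1 KB write per second, and one RCU covers a 4 KB strongly consistent read per second (eventually consistent reads cost half).

```python
# Provisioned-capacity arithmetic for DynamoDB, per the documented unit sizes:
# 1 WCU = one write of up to 1 KB per second;
# 1 RCU = one strongly consistent read of up to 4 KB per second
#         (an eventually consistent read costs half an RCU).
import math

def required_wcu(item_kb: float, writes_per_sec: int) -> int:
    return math.ceil(item_kb) * writes_per_sec

def required_rcu(item_kb: float, reads_per_sec: int, strongly_consistent: bool = True) -> int:
    units = math.ceil(item_kb / 4) * reads_per_sec
    return units if strongly_consistent else math.ceil(units / 2)
```

For example, saving 100 images' metadata per second at 1.5 KB each needs 200 WCUs, which is the kind of shortfall that surfaces as throttling under concurrent Lambda load.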

A corporation uses AWS to host a three-tier application. According to users, application speed varies significantly depending on the time of day and feature engaged. Components of the application include the following: ✑ Eight t2.large front-end web servers that serve static content and proxy dynamic content from the application tier. ✑ Four t2.large application servers. ✑ One db.m4.large Amazon RDS MySQL Multi-AZ DB instance. The web and application layers have been identified by operations as network limited. Which of the following methods is the most cost effective for increasing application performance? (Select two.) A. Replace web and app tiers with t2.xlarge instances B. Use AWS Auto Scaling and m4.large instances for the web and application tiers C. Convert the MySQL RDS instance to a self-managed MySQL cluster on Amazon EC2 D. Create an Amazon CloudFront distribution to cache content E. Increase the size of the Amazon RDS instance to db.m4.xlarge

BD Reference: https://aws.amazon.com/ec2/instance-types/

A business wishes to develop a serverless application utilizing Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A proof of concept revealed that the average response time exceeds the tolerances of the upstream services. Amazon CloudWatch metrics revealed no problems with DynamoDB, but did suggest that certain Lambda functions had reached their timeout. Which of the following should the Solutions Architect consider while optimizing performance? (Select two.) A. Configure the AWS Lambda function to reuse containers to avoid unnecessary startup time. B. Increase the amount of memory and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal memory and timeout configuration for the Lambda function. C. Create an Amazon ElastiCache cluster running Memcached, and configure the Lambda function for VPC integration with access to the Amazon ElastiCache cluster. D. Enable API cache on the appropriate stage in Amazon API Gateway, and override the TTL for individual methods that require a lower TTL than the entire stage. E. Increase the amount of CPU, and adjust the timeout on the Lambda function. Complete performance testing to identify the ideal CPU and timeout configuration for the Lambda function.

BD Reference: https://lumigo.io/blog/aws-lambda-timeout-best-practices/
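Option B works because Lambda allocates CPU in proportion to memory, so more memory often means a shorter run and sometimes a lower bill. Given measured durations at each memory size, the cheapest setting is a one-line search; the per-GB-second price below is an assumption for illustration.

```python
# Option B in practice: Lambda CPU scales with allocated memory, so the
# cheapest configuration is often not the smallest one. Given measured average
# durations per memory size, pick the lowest-cost setting.
GB_SECOND_PRICE = 0.0000166667  # USD; assumed on-demand price, check current pricing

def cheapest_config(measurements: dict) -> int:
    """measurements maps memory_mb -> average duration in milliseconds."""
    def cost(mem_mb: int) -> float:
        return (mem_mb / 1024) * (measurements[mem_mb] / 1000) * GB_SECOND_PRICE
    return min(measurements, key=cost)
```

With assumed measurements of 12,000 ms at 128 MB, 2,800 ms at 512 MB, and 1,500 ms at 1,024 MB, the 512 MB configuration is both faster than the smallest and cheaper than the largest, which is what the timeout-driven performance testing in option B is meant to discover.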

A solutions architect is developing a web application that will be hosted on an Amazon RDS for PostgreSQL database. The database instance is projected to receive a much greater number of reads than writes. The solutions architect must guarantee that the high volume of read traffic can be handled and that the database instance is highly available. What steps should the solutions architect take to meet these requirements? (Select three.) A. Create multiple read replicas and put them into an Auto Scaling group. B. Create multiple read replicas in different Availability Zones. C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy. D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB. E. Configure an Amazon CloudWatch alarm to detect failed read replicas. Set the alarm to directly invoke an AWS Lambda function to delete its Route 53 record set. F. Configure an Amazon Route 53 health check for each read replica using its endpoint.

BCF

A business wishes to relocate its website from an on-premises data center to Amazon Web Services (AWS). Simultaneously, it wishes to transition the website to a containerized microservices architecture in order to increase availability and cost effectiveness. According to the company's security policy, privileges and network permissions must be established in accordance with best practices, with the least privilege possible. A Solutions Architect must design a containerized architecture that adheres to the application's security criteria and has deployed it on an Amazon ECS cluster. What procedures are necessary upon deployment to ensure compliance with the requirements? (Select two.) A. Create tasks using the bridge network mode. B. Create tasks using the awsvpc network mode. C. Apply security groups to Amazon EC2 instances, and use IAM roles for EC2 instances to access other resources. D. Apply security groups to the tasks, and pass IAM credentials into the container at launch time to access other resources. E. Apply security groups to the tasks, and use IAM roles for tasks to access other resources.

BE Reference: https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-ecs-introduces-awsvpc-networking-mode-for-containers-to-support-full-networking-capabilities/ https://amazonaws-china.com/blogs/compute/introducing-cloud-native-networking-for-ecs-containers/ https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

A financial institution is doing market simulations on a high-performance computing cluster powered by Amazon EC2 instances. When instances are started, a DNS record must be established in an Amazon Route 53 private hosted zone. After instances are terminated, the DNS record must be deleted. Currently, the organization creates the DNS record using a mix of Amazon CloudWatch Events and AWS Lambda. While the approach worked fine in testing with small clusters, in production with clusters comprising thousands of instances, the organization encounters the following Lambda log error: HTTP 400 status code (Bad request). Additionally, the response header contains a status code element with the value 'Throttling', as well as a status message element with the value 'Rate exceeded'. Which measures should the Solutions Architect take in combination to overcome these issues? (Select three.) A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule. B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule. C. Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful" events for the Auto Scaling group used by the cluster. D. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls. E. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule. F. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. 
Modify the function to make a single API call to Amazon Route 53 with all records read from the kinesis data stream.

CDE
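The batching step described in options D and E is what actually relieves the throttling: many per-instance events are drained from SQS and collapsed into few Route 53 API calls, since ChangeResourceRecordSets accepts a batch of changes per request. A sketch, where the message shape (zone_id/event/hostname/ip keys) is an assumption about what the CloudWatch Events rule would place on the queue:

```python
# Collapse many per-instance SQS messages into one Route 53 change batch per
# hosted zone; each batch would feed a single change_resource_record_sets call.
# The message field names are assumptions for illustration.
from collections import defaultdict

def to_change(msg: dict) -> dict:
    """Map a launch event to an UPSERT and a terminate event to a DELETE."""
    action = "UPSERT" if msg["event"] == "launch" else "DELETE"
    return {
        "Action": action,
        "ResourceRecordSet": {
            "Name": msg["hostname"],
            "Type": "A",
            "TTL": 60,
            "ResourceRecords": [{"Value": msg["ip"]}],
        },
    }

def batch_by_zone(messages: list) -> dict:
    """Group drained SQS messages into one change batch per hosted zone."""
    batches = defaultdict(list)
    for msg in messages:
        batches[msg["zone_id"]].append(to_change(msg))
    return dict(batches)
```

Ten queued instance events become at most one API call per hosted zone instead of ten, which keeps the account under the Route 53 request rate limit.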

A Solutions Architect is responsible for designing a highly available application that enables authorized users to remain connected to the application even when underlying components fail. Which solution will satisfy these criteria? A. Deploy the application on Amazon EC2 instances. Use Amazon Route 53 to forward requests to the EC2 instances. Use Amazon DynamoDB to save the authenticated connection details. B. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer to handle requests. Use Amazon DynamoDB to save the authenticated connection details. C. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front end. Use EC2 instances to save the authenticated connection details. D. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front end. Use EC2 instances hosting a MySQL database to save the authenticated connection details.

B

A business is transferring its on-premises systems to Amazon Web Services (AWS). The following systems comprise the user environment: ✑ Windows and Linux virtual machines running on VMware. ✑ Physical servers running Red Hat Enterprise Linux. Prior to shifting to AWS, the organization wants to complete the following steps: ✑ Identify dependencies between on-premises systems. ✑ Group systems together into applications to build migration plans. ✑ Review performance data using Amazon Athena to ensure that Amazon EC2 instances are right-sized. How can these requirements be met? A. Populate the AWS Application Discovery Service import template with information from an on-premises configuration management database (CMDB). Upload the completed import template to Amazon S3, then import the data into Application Discovery Service. B. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time. C. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter. Allow the Discovery Connector to collect data for one week. D. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Agent to collect data for a period of time.

B

A business just launched a new application on a cluster of Amazon EC2 Linux instances contained inside a VPC. The organization created an EC2 Linux instance as a bastion host in a peering VPC. The application instances' security group restricts access to TCP port 22 from the bastion host's private IP address. The bastion host's security group permits access to TCP port 22 from 0.0.0.0/0, allowing system administrators to remotely connect in to application instances through SSH from several branch offices. While poring through the bastion host's operating system logs, a cloud engineer detects hundreds of unsuccessful SSH login attempts from places all over the globe. The cloud engineer wants to modify the way remote access to application instances is given and wishes to adhere to the following requirements: ✑ Eliminate brute-force SSH login attempts. ✑ Retain a log of commands run during an SSH session. ✑ Retain the ability to forward ports. Which solution satisfies these remote access criteria for application instances? A. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to use Session Manager to establish a session with the application instances. Terminate the bastion host. B. Update the security group of the bastion host to allow traffic from only the public IP addresses of the branch offices. C. Configure an AWS Client VPN endpoint and provision each system administrator with a certificate to establish a VPN connection to the application VPC. Update the security group of the application instances to allow traffic from only the Client VPN IPv4 CIDR. Terminate the bastion host. D. Configure the application instances to communicate with AWS Systems Manager. Grant access to the system administrators to issue commands to the application instances by using Systems Manager Run Command. Terminate the bastion host.

A

A business operates its AWS infrastructure in two AWS Regions. The firm operates four virtual private clouds in the eu-west-1 region and two in the us-east-1 region. Additionally, the firm has an on-premises data center in Europe, which is connected to AWS through two AWS Direct Connect connections in eu-west-1. The organization requires a solution that enables Amazon EC2 instances inside each VPC to communicate with one another using private IP addresses. Additionally, servers in the on-premises data center must be able to access those VPCs through private IP addresses. Which approach is the MOST cost-effective in terms of meeting these requirements? A. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering between the transit gateways. Create two transit VIFs, and attach them to a single Direct Connect gateway. Associate each transit gateway with the Direct Connect gateway. B. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the Direct Connect gateway. C. Create VPC peering between each VPC in the same Region. Create cross-Region peering between each VPC in different Regions. Create two public VIFs that are configured to route AWS IP addresses globally to on-premises servers. D. Create an AWS Transit Gateway in each Region, and attach each VPC to the transit gateway in that Region. Create cross-Region peering between the transit gateways. Create two private VIFs, and attach them to a single Direct Connect gateway. Associate each VPC with the Direct Connect gateway.

B
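The cost question here turns on Transit Gateway's per-attachment-hour and per-GB-processed charges versus VPC peering, which has no hourly charge and bills only data transfer. A rough monthly model makes the comparison concrete; all prices below are illustrative assumptions, not current list prices.

```python
# Rough monthly cost comparison: Transit Gateway (per-attachment-hour plus
# per-GB processed) versus VPC peering (data transfer only).
# All prices are assumed for illustration; check the current AWS price list.
HOURS_PER_MONTH = 730
TGW_ATTACHMENT_HOURLY = 0.05   # USD per VPC attachment-hour (assumed)
TGW_DATA_PER_GB = 0.02         # USD per GB processed by the TGW (assumed)
PEERING_DATA_PER_GB = 0.02     # USD per GB over cross-AZ/Region peering (assumed)

def tgw_monthly(attachments: int, gb: float) -> float:
    return attachments * TGW_ATTACHMENT_HOURLY * HOURS_PER_MONTH + gb * TGW_DATA_PER_GB

def peering_monthly(gb: float) -> float:
    return gb * PEERING_DATA_PER_GB
```

With six VPC attachments, the Transit Gateway option carries a fixed hourly cost before any data flows, which is why a peering mesh tends to win on pure cost at this small scale (at the price of managing more point-to-point connections).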

A business uses AWS CloudFormation as its application deployment tool. It stores all application binaries and templates in a versioned Amazon S3 bucket. The integrated development environment (IDE) is hosted on an Amazon EC2 instance. After running the unit tests locally, the developers download the application binaries from Amazon S3 to the EC2 instance, make changes, and publish the binaries to an S3 bucket. The developers want to enhance the current deployment method and to enable continuous integration and delivery (CI/CD) utilizing AWS CodePipeline. The developers are looking for the following: ✑ Use AWS CodeCommit for source control. ✑ Automate unit testing and security scanning. ✑ Alert the Developers when unit tests fail. ✑ Turn application features on and off, and customize deployment dynamically as part of CI/CD. ✑ Have the lead Developer provide approval before deploying an application. Which solution will satisfy these criteria? A. Use AWS CodeBuild to run tests and security scans. Use an Amazon EventBridge rule to send Amazon SNS alerts to the Developers when unit tests fail. Write AWS Cloud Developer kit (AWS CDK) constructs for different solution features, and use a manifest file to turn features on and off in the AWS CDK application. Use a manual approval stage in the pipeline to allow the lead Developer to approve applications. B. Use AWS Lambda to run unit tests and security scans. Use Lambda in a subsequent stage in the pipeline to send Amazon SNS alerts to the developers when unit tests fail. Write AWS Amplify plugins for different solution features and utilize user prompts to turn features on and off. Use Amazon SES in the pipeline to allow the lead developer to approve applications. C. Use Jenkins to run unit tests and security scans. Use an Amazon EventBridge rule in the pipeline to send Amazon SES alerts to the developers when unit tests fail. 
Use AWS CloudFormation nested stacks for different solution features and parameters to turn features on and off. Use AWS Lambda in the pipeline to allow the lead developer to approve applications. D. Use AWS CodeDeploy to run unit tests and security scans. Use an Amazon CloudWatch alarm in the pipeline to send Amazon SNS alerts to the developers when unit tests fail. Use Docker images for different solution features and the AWS CLI to turn features on and off. Use a manual approval stage in the pipeline to allow the lead developer to approve applications.

A

A business uses AWS to host a three-tier application that includes a web server, an application server, and an Amazon RDS MySQL database instance. A solutions architect is developing a disaster recovery (DR) solution with a 5 minute recovery point objective (RPO). Which option will best fulfill the needs of the business? A. Configure AWS Backup to perform cross-Region backups of all servers every 5 minutes. Reprovision the three tiers in the DR Region from the backups using AWS CloudFormation in the event of a disaster. B. Maintain another running copy of the web and application server stack in the DR Region using AWS CloudFormation drift detection. Configure cross-Region snapshots of the DB instance to the DR Region every 5 minutes. In the event of a disaster, restore the DB instance using the snapshot in the DR Region. C. Use Amazon EC2 Image Builder to create and copy AMIs of the web and application server to both the primary and DR Regions. Create a cross-Region read replica of the DB instance in the DR Region. In the event of a disaster, promote the read replica to become the master and reprovision the servers with AWS CloudFormation using the AMIs. D. Create AMIs of the web and application servers in the DR Region. Use scheduled AWS Glue jobs to synchronize the DB instance with another DB instance in the DR Region. In the event of a disaster, switch to the DB instance in the DR Region and reprovision the servers with AWS CloudFormation using the AMIs.

C

A business with many Amazon Web Services accounts makes use of AWS Organizations and service control policies (SCPs). The following SCP was created by an administrator and attached to an organizational unit (OU) that holds the AWS account 1111-1111-1111: { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowsAllActions", "Effect": "Allow", "Action": "*", "Resource": "*" }, { "Sid": "DenyCloudTrail", "Effect": "Deny", "Action": "cloudtrail:*", "Resource": "*" } ] } Developers in account 1111-1111-1111 report being unable to create Amazon S3 buckets. How should the Administrator proceed in resolving this issue? A. Add s3:CreateBucket with "Allow" effect to the SCP. B. Remove the account from the OU, and attach the SCP directly to account 1111-1111-1111. C. Instruct the Developers to add Amazon S3 permissions to their IAM entities. D. Remove the SCP from account 1111-1111-1111.

C
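Answer C follows from the evaluation order: an SCP only sets the permission boundary and never grants access by itself, so effective access requires an IAM allow and no deny at either layer. A toy evaluator (much simplified; real IAM matching supports resources, conditions, and richer wildcards) shows why the developers are blocked:

```python
# Toy model of SCP + IAM evaluation: an action is permitted only if BOTH the
# SCP and the identity's IAM policy allow it, and neither explicitly denies it.
# Matching here is simplified to exact strings and trailing-* prefixes.

def evaluate(action: str, scp: list, iam: list) -> bool:
    def decision(statements: list) -> str:
        allowed = False
        for s in statements:
            pattern = s["Action"]
            matches = action == pattern or (
                pattern.endswith("*") and action.startswith(pattern[:-1])
            )
            if matches:
                if s["Effect"] == "Deny":
                    return "deny"  # explicit deny always wins
                allowed = True
        return "allow" if allowed else "implicit-deny"
    return decision(scp) == "allow" and decision(iam) == "allow"

# The SCP from the question, and an IAM policy that lacks any S3 permissions.
scp = [{"Effect": "Allow", "Action": "*"}, {"Effect": "Deny", "Action": "cloudtrail:*"}]
no_s3_iam = [{"Effect": "Allow", "Action": "ec2:*"}]
```

Even though the SCP allows `*`, the developers' IAM entities never granted S3 access, so the request falls to an implicit deny; adding S3 permissions to the IAM entities (option C) is the fix.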

A company's main office hosts a sizable on-premises MySQL database that supports an issue tracking system utilized by workers worldwide. The organization already utilizes AWS for some workloads and has configured an Amazon Route 53 entry for the database endpoint to refer to the on-premises database. Management is worried about the database serving as a single point of failure and requests that a solutions architect relocate the database to AWS without causing data loss or downtime. Which set of activities should be implemented by the solutions architect? A. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load from the on-premises database to Aurora. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database. B. During nonbusiness hours, shut down the on-premises database and create a backup. Restore this backup to an Amazon Aurora DB cluster. When the restoration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database. C. Create an Amazon Aurora DB cluster. Use AWS Database Migration Service (AWS DMS) to do a full load with continuous replication from the on-premises database to Aurora. When the migration is complete, update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database. D. Create a backup of the database and restore it to an Amazon Aurora multi-master cluster. This Aurora cluster will be in a master-master replication configuration with the on-premises database. Update the Route 53 entry for the database to point to the Aurora cluster endpoint, and shut down the on-premises database.

C

A corporation created accounts for each of its Development teams, totaling 200 accounts. Each account has a single virtual private cloud (VPC) in a single region, each of which has many microservices operating in Docker containers and requiring communication with microservices in other accounts. According to the Security team's needs, these microservices must not traverse the public internet, and only specific internal services should be permitted to contact other internal services. If any network traffic for a service is denied, the Security team must be alerted, including the originating IP. How can connectivity between services be established while adhering to the security requirements? A. Create a VPC peering connection between the VPCs. Use security groups on the instances to allow traffic from the security group IDs that are permitted to call the microservice. Apply network ACLs and allow traffic from the local VPC and peered VPCs only. Within the task definition in Amazon ECS for each of the microservices, specify a log configuration by using the awslogs driver. Within Amazon CloudWatch Logs, create a metric filter and alarm off of the number of HTTP 403 responses. Create an alarm when the number of messages exceeds a threshold set by the Security team. B. Ensure that no CIDR ranges are overlapping, and attach a virtual private gateway (VGW) to each VPC. Provision an IPsec tunnel between each VGW and enable route propagation on the route table. Configure security groups on each service to allow the CIDR ranges of the VPCs in the other accounts. Enable VPC Flow Logs, and use an Amazon CloudWatch Logs subscription filter for rejected traffic. Create an IAM role and allow the Security team to call the AssumeRole action for each account. C. 
Deploy a transit VPC by using third-party marketplace VPN appliances running on Amazon EC2, dynamically routed VPN connections between the VPN appliance, and the virtual private gateways (VGWs) attached to each VPC within the region. Adjust network ACLs to allow traffic from the local VPC only. Apply security groups to the microservices to allow traffic from the VPN appliances only. Install the awslogs agent on each VPN appliance, and configure logs to forward to Amazon CloudWatch Logs in the security account for the Security team to access. D. Create a Network Load Balancer (NLB) for each microservice. Attach the NLB to a PrivateLink endpoint service and whitelist the accounts that will be consuming this service. Create an interface endpoint in the consumer VPC and associate a security group that allows only the security group IDs of the services authorized to call the producer service. On the producer services, create security groups for each microservice and allow only the CIDR range of the allowed services. Create VPC Flow Logs on each VPC to capture rejected traffic that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to a security account.

C

A corporation is preparing to introduce a new billing application in two weeks. The application is being tested on ten Amazon EC2 instances controlled by an Auto Scaling group in the subnet 172.31.0.0/24 of VPC A with the CIDR block 172.31.0.0/16. The developers saw connection timeout issues in the application logs while attempting to connect to an Oracle database operating on an Amazon EC2 instance in the same region under VPC B with CIDR block 172.50.0.0/16. The database instance's IP address is hard-coded into the application instances. Which suggestions should a Solutions Architect provide to the Developers in order to resolve the issue in the most secure manner possible with the least amount of maintenance and overhead? A. Disable the SrcDestCheck attribute for all instances running the application and Oracle Database. Change the default route of VPC A to point to the ENI of the Oracle Database that has an IP address assigned within the range of 172.50.0.0/16 B. Create and attach internet gateways for both VPCs. Configure default routes to the internet gateways for both VPCs. Assign an Elastic IP for each Amazon EC2 instance in VPC A C. Create a VPC peering connection between the two VPCs and add a route to the routing table of VPC A that points to the IP address range of 172.50.0.0/16 D. Create an additional Amazon EC2 instance for each VPC as a customer gateway; create one virtual private gateway (VGW) for each VPC, configure an end-to-end VPC, and advertise the routes for 172.50.0.0/16

C

A newspaper organization maintains an on-premises application that enables the public to search for and obtain specific newspaper pages through a Java-based website. They scanned the old newspapers into JPEG files (about 17 TB) and used Optical Character Recognition (OCR) to feed a commercial search engine. The hosting infrastructure and software are no longer supported, and the business wants to transition its archive to AWS in order to create a cost-effective architecture while maintaining availability and durability. Which is the most suitable? A. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances, and configure with auto-scaling and an Elastic Load Balancer. B. Model the environment using CloudFormation; use an EC2 instance running an Apache webserver and an open source search application, and stripe multiple standard EBS volumes together to store the JPEGs and search index. C. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones. D. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL. E. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances and use Route53 with DNS round-robin.

C

On AWS, a huge multinational corporation hosts a timesheet application that is utilized by employees worldwide. The application is hosted on Amazon EC2 instances in an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer and uses an Amazon RDS MySQL Multi-AZ database instance for data storage. The CFO is worried about the business's potential effect if the application is unavailable. The application's downtime cannot exceed two hours, and the solution must be as cost-effective as feasible. How should the Solutions Architect balance the needs of the CFO with the goal of reducing data loss? A. In another region, configure a read replica and create a copy of the infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance. Update the DNS record to point to the other region's ELB. B. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance. Create an AWS CloudFormation template of the application infrastructure that uses the latest snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another region. Update the DNS record to point to the other region's ELB. C. Configure a 1-day window of 60-minute snapshots of the Amazon RDS Multi-AZ database instance which is copied to another region. Create an AWS CloudFormation template of the application infrastructure that uses the latest copied snapshot. When an issue occurs, use the AWS CloudFormation template to create the environment in another region. Update the DNS record to point to the other region's ELB. D. Configure a read replica in another region. Create an AWS CloudFormation template of the application infrastructure. When an issue occurs, promote the read replica and configure it as an Amazon RDS Multi-AZ database instance and use the AWS CloudFormation template to create the environment in another region using the promoted Amazon RDS instance. Update the DNS record to point to the other region's ELB.

C

Requests for Auto Scaling are signed using a _________ signature that is computed from the request and the user's private key. A. SSL B. AES-256 C. HMAC-SHA1 D. X.509

C
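The legacy (Signature Version 2) request-signing scheme referenced here can be illustrated with Python's standard library. This is a simplified sketch of the idea, not a complete SigV2 client; the parameter names and endpoint are examples only:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request_v2(params: dict, secret_key: str, host: str, path: str = "/") -> str:
    """Compute a Signature Version 2 style HMAC-SHA1 signature over a
    canonicalized query string (illustrative sketch only)."""
    # Sort parameters and URL-encode them into a canonical query string.
    canonical = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical])
    # HMAC-SHA1 over the string to sign, keyed with the secret key.
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Because the parameters are sorted before signing, the signature is independent of the order in which they were supplied.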

The Solutions Architect is responsible for implementing perimeter security protection while developing big applications on the AWS Cloud. AWS applications have the following endpoints: ✑ Application Load Balancer ✑ Amazon API Gateway regional endpoint ✑ Elastic IP address-based EC2 instances. ✑ Amazon S3 hosted websites. ✑ Classic Load Balancer The Solutions Architect is responsible for designing a solution that protects all of the web front ends stated above and includes the following security capabilities: ✑ DDoS protection ✑ SQL injection protection ✑ IP address whitelist/blacklist ✑ HTTP flood protection ✑ Bad bot scraper protection How should the solution be designed by the Solutions Architect? A. Deploy AWS WAF and AWS Shield Advanced on all web endpoints. Add AWS WAF rules to enforce the company's requirements. B. Deploy Amazon CloudFront in front of all the endpoints. The CloudFront distribution provides perimeter protection. Add AWS Lambda-based automation to provide additional security. C. Deploy Amazon CloudFront in front of all the endpoints. Deploy AWS WAF and AWS Shield Advanced. Add AWS WAF rules to enforce the company's requirements. Use AWS Lambda to automate and enhance the security posture. D. Secure the endpoints by using network ACLs and security groups and adding rules to enforce the company's requirements. Use AWS Lambda to automatically update the rules.

C

You need the capacity to analyze massive volumes of data stored on Amazon S3 through Amazon Elastic MapReduce. You're utilizing the cc2.8xlarge instance type, which has the majority of its CPUs idle during processing. Which of the following would be the most cost effective method of reducing the job's runtime? A. Create more, smaller files on Amazon S3. B. Add additional cc2.8xlarge instances by introducing a task group. C. Use smaller instances that have higher aggregate I/O performance. D. Create fewer, larger files on Amazon S3.

C

You're operating an application on an EC2 instance that enables customers to download files from a private S3 bucket using a pre-signed URL. Prior to establishing the URL, the application should ensure that the file exists in S3. How should the application safely access the S3 bucket using AWS credentials? A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application. B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user's credentials from the EC2 instance user data. C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata D. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.

C

Your department generates frequent analytics reports from the log files of your business. All log data is stored in Amazon S3 and processed daily by Amazon Elastic MapReduce (EMR) operations that produce daily PDF reports and aggregated CSV tables for an Amazon Redshift data warehouse. Your CFO demands that you improve this system's cost structure. Which of the following choices will reduce expenses without jeopardizing the system's average performance or the raw data's integrity? A. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. B. Use reduced redundancy storage (RRS) for PDF and .csv data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift. C. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift. D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

C

What is the maximum write throughput that a single DynamoDB table can support? A. 1,000 write capacity units B. 100,000 write capacity units C. DynamoDB is designed to scale without limits, but if you go beyond 10,000 you have to contact AWS first. D. 10,000 write capacity units

C Reference: https://aws.amazon.com/dynamodb/faqs/

A business purchases licensed software. A software license may be assigned to just one MAC address. The organization will host the software on AWS. How can the organization meet the licensing requirement, given that the MAC address of an instance changes when it is started, halted, or terminated? A. It is not possible to have a fixed MAC address with AWS. B. The organization should use VPC with the private subnet and configure the MAC address with that subnet. C. The organization should use VPC with an elastic network interface which will have a fixed MAC address. D. The organization should use VPC since VPC allows to configure the MAC address for each EC2 instance.

C A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. An Elastic Network Interface (ENI) is a virtual network interface that the user can attach to an instance in a VPC. An ENI can include attributes such as: a primary private IP address, one or more secondary private IP addresses, one elastic IP address per private IP address, one public IP address, one or more security groups, a MAC address, a source/destination check flag, and a description. The user can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow the network interface as it is attached or detached from an instance and reattached to another instance. Thus, the user can maintain a fixed MAC using the network interface. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

CloudWatch receives a custom metric from a user. If several calls to the CloudWatch APIs have different dimensions but the same metric name, how would CloudWatch handle them all? A. It will reject the request as there cannot be a separate dimension for a single metric. B. It will group all the calls into a single call. C. It will treat each unique combination of dimensions as a separate metric. D. It will overwrite the previous dimension data with the new dimension data.

C A dimension is a key-value pair used to uniquely identify a metric. CloudWatch treats each unique combination of dimensions as a separate metric. Thus, if the user is making 4 calls with the same metric name but a separate dimension, it will create 4 separate metrics. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html
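The behavior in the explanation can be modeled with a toy in-memory store; `MetricStore` and its keying scheme are illustrative inventions, not the CloudWatch API:

```python
from collections import defaultdict

# Toy model of how CloudWatch keys metrics: each unique combination of
# metric name + dimensions is a distinct time series (illustrative only).
class MetricStore:
    def __init__(self):
        self.series = defaultdict(list)

    def put_metric_data(self, name: str, value: float, dimensions: dict):
        # Sort the dimension pairs so the key is order-independent.
        key = (name, tuple(sorted(dimensions.items())))
        self.series[key].append(value)

store = MetricStore()
store.put_metric_data("Latency", 12.0, {"Server": "Prod", "Domain": "Frankfurt"})
store.put_metric_data("Latency", 10.0, {"Server": "Beta", "Domain": "Frankfurt"})
store.put_metric_data("Latency", 15.0, {"Server": "Prod"})
store.put_metric_data("Latency", 11.0, {"Server": "Prod", "Domain": "Frankfurt"})
```

The four calls above produce three distinct metrics; the last call lands in the same series as the first because its name and dimensions match exactly.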

Identify an application that monitors AWS Data Pipeline for new jobs and then executes them. A. A task executor B. A task deployer C. A task runner D. A task optimizer

C A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks. You can either use Task Runner as provided by AWS Data Pipeline, or create a custom Task Runner application. Task Runner is a default implementation of a task runner that is provided by AWS Data Pipeline. When Task Runner is installed and configured, it polls AWS Data Pipeline for tasks associated with pipelines that you have activated. When a task is assigned to Task Runner, it performs that task and reports its status back to AWS Data Pipeline. If your workflow requires non-default behavior, you'll need to implement that functionality in a custom task runner. Reference: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-remote-taskrunner-client.html

Which of the following cannot be accomplished with the use of AWS Data Pipeline? A. Create complex data processing workloads that are fault tolerant, repeatable, and highly available. B. Regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service. C. Generate reports over data that has been stored. D. Move data between different AWS compute and storage services as well as on premise data sources at specified intervals.

C AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on-premises data sources at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos. Reference: http://aws.amazon.com/datapipeline/

AWS Direct Connect does not include any resources to which you may restrict access. As a result, you will not be able to utilize AWS Direct Connect Amazon Resource Names (ARNs) in an Identity and Access Management (IAM) policy. With this in mind, how can a policy be written to restrict access to AWS Direct Connect actions? A. You can leave the resource name field blank. B. You can choose the name of the AWS Direct Connection as the resource. C. You can use an asterisk (*) as the resource. D. You can create a name for the resource.

C AWS Direct Connect itself has no specific resources for you to control access to. Therefore, there are no AWS Direct Connect ARNs for you to use in an IAM policy. You use an asterisk (*) as the resource when writing a policy to control access to AWS Direct Connect actions. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/using_iam.html
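A policy of the kind the explanation describes might look like the following; the read-only `directconnect:Describe*` action set is just an example:

```python
import json

# Direct Connect exposes no resource-level ARNs (per the referenced guide),
# so the Resource element must be the wildcard "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["directconnect:Describe*"],
            "Resource": "*",
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```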

Once a user has configured ElastiCache for an application and it is running, which services does Amazon not offer the user with: A. The ability for client programs to automatically identify all of the nodes in a cache cluster, and to initiate and maintain connections to all of these nodes B. Automating common administrative tasks such as failure detection and recovery, and software patching. C. Providing default Time to Live (TTL) in the AWS Elasticache Redis Implementation for different type of data. D. Providing detailed monitoring metrics associated with your Cache Nodes, enabling you to diagnose and react to issues very quickly

C Amazon provides failure detection and recovery, software patching, and monitoring through Amazon CloudWatch. It also provides Auto Discovery to automatically identify and initialize all nodes of the cache cluster for Amazon ElastiCache. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html

Which RAID configuration is utilized on the Cloud Block Storage back-end to provide the highest possible degree of reliability and performance? A. RAID 1 (Mirror) B. RAID 5 (Blocks striped, distributed parity) C. RAID 10 (Blocks mirrored and striped) D. RAID 2 (Bit level striping)

C Cloud Block Storage back-end storage volumes employs the RAID 10 method to provide a very high level of reliability and performance. Reference: http://www.rackspace.com/knowledge_center/product-faq/cloud-block-storage

You're attempting to remove an SSL certificate from the IAM certificate store and get the following message: "Certificate: <certificate-id> is currently in use by CloudFront." Which of the following assertions is most likely the cause of this error? A. Before you can delete an SSL certificate you need to set up https on your server. B. Before you can delete an SSL certificate, you need to set up the appropriate access level in IAM C. Before you can delete an SSL certificate, you need to either rotate SSL certificates or revert from using a custom SSL certificate to using the default CloudFront certificate. D. You can't delete SSL certificates. You need to request it from AWS.

C CloudFront is a web service that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, and image files, to end users. Every CloudFront web distribution must be associated either with the default CloudFront certificate or with a custom SSL certificate. Before you can delete an SSL certificate, you need to either rotate SSL certificates (replace the current custom SSL certificate with another custom SSL certificate) or revert from using a custom SSL certificate to using the default CloudFront certificate. Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Troubleshooting.html

How many g2.2xlarge on-demand instances can a customer operate in a single region without obtaining AWS clearance for a limit increase? A. 20 B. 2 C. 5 D. 10

C Generally, AWS EC2 allows running 20 on-demand instances and 100 spot instances at a time. This limit can be increased by submitting a request at https://aws.amazon.com/contact-us/ec2-request. For certain instance types, the limit is lower; for g2.2xlarge, the user can run only 5 on-demand instances at a time. Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2

If no explicit deny is discovered when IAM's Policy Evaluation Logic is applied, the enforcement code searches for any __________ commands that apply to the request. A. "cancel" B. "suspend" C. "allow" D. "valid"

C If an explicit deny is not found among the applicable policies for a specific request, IAM's Policy Evaluation Logic checks for any "allow" instructions to determine whether the request can be completed. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_EvaluationLogic.html
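The evaluation order can be sketched as follows. This deliberately ignores how statements are matched to a request (principal, action, resource, conditions) and only shows the deny-overrides logic:

```python
def evaluate(statements):
    """Simplified IAM decision logic for statements that already match
    the request: an explicit Deny always wins; otherwise any Allow
    grants access; with no applicable statements the default is an
    implicit deny."""
    effects = [s["Effect"] for s in statements]
    if "Deny" in effects:
        return "Deny"
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"
```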

You establish a virtual private network (VPN) connection, and your VPN device supports the Border Gateway Protocol (BGP). Which of the following should be mentioned during the VPN connection's configuration? A. Classless routing B. Classful routing C. Dynamic routing D. Static routing

C If you create a VPN connection, you must specify the type of routing that you plan to use, which will depend upon on the make and model of your VPN devices. If your VPN device supports Border Gateway Protocol (BGP), you need to specify dynamic routing when you configure your VPN connection. If your device does not support BGP, you should specify static routing. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

You've added a new instance to your Auto Scaling group, which is now subject to ELB health checks. A health check performed by the ELB indicates that the new instance's status is out of service. What role does Auto Scaling play in this scenario? A. It replaces the instance with a healthy one B. It stops the instance C. It marks an instance as unhealthy D. It terminates the instance

C If you have attached a load balancer to your Auto Scaling group, you can have Auto Scaling include the results of Elastic Load Balancing health checks when it determines the health status of an instance. After you add ELB health checks, Auto Scaling will mark an instance as unhealthy if Elastic Load Balancing reports the instance state as Out of Service. Frequently, an Auto Scaling instance that has just come into service needs to warm up before it can pass the Auto Scaling health check. Auto Scaling waits until the health check grace period ends before checking the health status of the instance. While the EC2 status checks and ELB health checks can complete before the health check grace period expires, Auto Scaling does not act on them until the health check grace period expires. To provide ample warm-up time for your instances, ensure that the health check grace period covers the expected startup time for your application. Reference: http://docs.aws.amazon.com/autoscaling/latest/userguide/healthcheck.html

What happens when instances are launched into a VPC with an instance tenancy of dedicated? A. If you launch an instance into a VPC that has an instance tenancy of dedicated, you must manually create a Dedicated instance. B. If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is created as a Dedicated instance, only based on the tenancy of the instance. C. If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is automatically a Dedicated instance, regardless of the tenancy of the instance. D. None of these are true.

C If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is automatically a Dedicated instance, regardless of the tenancy of the instance. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html

You have purchased AWS Business and Enterprise support plans. Your organization is experiencing a backlog of issues, and around 20 of your IAM users are required to initiate technical support cases. How many users are permitted to open technical support cases under the AWS Business and Enterprise support plans? A. 5 users B. 10 users C. Unlimited D. 1 user

C In the context of AWS support, the Business and Enterprise support plans allow an unlimited number of users to open technical support cases (supported by AWS Identity and Access Management (IAM)). Reference: https://aws.amazon.com/premiumsupport/faqs/

A user considers using the EBS PIOPS volume. Which of the following alternatives is the most appropriate use case for the PIOPS EBS volume? A. Analytics B. System boot volume C. Mongo DB D. Log processing

C Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads such as NoSQL databases (e.g., MongoDB) and RDBMS, that are sensitive to storage performance and consistency in random access I/O throughput. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

Your organization previously created a highly trafficked, dynamically routed VPN connection between your on-premises data center and AWS. You recently provisioned a Direct Connect connection and want to begin utilizing it. Which of the following alternatives, after establishing Direct Connect settings in the AWS Console, will offer the smoothest transition for your users? A. Delete your existing VPN connection to avoid routing loops, configure your Direct Connect router with the appropriate settings, and verify network traffic is leveraging Direct Connect. B. Configure your Direct Connect router with a higher BGP priority than your VPN router, verify network traffic is leveraging Direct Connect, and then delete your existing VPN connection. C. Update your VPC route tables to point to the Direct Connect connection, configure your Direct Connect router with the appropriate settings, verify network traffic is leveraging Direct Connect, and then delete the VPN connection. D. Configure your Direct Connect router, update your VPC route tables to point to the Direct Connect connection, configure your VPN connection with a higher BGP priority, and verify network traffic is leveraging the Direct Connect connection.

C Q. Can I use AWS Direct Connect and a VPN Connection to the same VPC simultaneously? Yes. However, only in fail-over scenarios. The Direct Connect path will always be preferred, when established, regardless of AS path prepending. Reference: https://aws.amazon.com/directconnect/faqs/

Which of the striping options available for EBS volumes has the drawback of 'Doubling the amount of I/O needed from the instance to EBS in comparison to RAID 0, since you're mirroring all writes to a pair of volumes, limiting the amount of striping possible.'? A. Raid 1 B. Raid 0 C. RAID 1+0 (RAID 10) D. Raid 2

C RAID 1+0 (RAID 10) doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

You're going to use AWS Direct Connect. You want to access AWS public service endpoints such as Amazon S3 using the AWS Direct Connect connection. You want other Internet traffic to utilize your current Internet Service Provider connection. How should AWS Direct Connect be configured for access to services such as Amazon S3? A. Configure a public interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Advertise a default route to AWS using BGP. B. Create a private interface on your AWS Direct Connect link. Configure a static route via your AWS Direct Connect link that points to Amazon S3. Configure specific routes to your network in your VPC. C. Create a public interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure; advertise specific routes for your network to AWS. D. Create a private interface on your AWS Direct Connect link. Redistribute BGP routes into your existing routing infrastructure and advertise a default route to AWS.

C Reference: https://aws.amazon.com/directconnect/faqs/

The __________ service is intended for businesses with a large number of users or systems that make use of AWS products such as Amazon EC2, Amazon SimpleDB, and the AWS Management Console. A. Amazon RDS B. AWS Integrity Management C. AWS Identity and Access Management D. Amazon EMR

C Reference: https://aws.amazon.com/documentation/iam/?nc1=h_ls

A Solutions Architect has designed an AWS CloudFormation template for a three-tier application. The template includes an Auto Scaling group of Amazon EC2 instances running a custom AMI. The Solutions Architect wants to guarantee that future upgrades to the custom AMI may be deployed to a running stack by first changing the template to refer to the new AMI and then performing UpdateStack to replace the EC2 instances with new AMI instances. How can these needs be met via the deployment of AMI updates? A. Create a change set for a new version of the template, view the changes to the running EC2 instances to ensure that the AMI is correctly updated, and then execute the change set. B. Edit the AWS::AutoScaling::LaunchConfiguration resource in the template, changing its DeletionPolicy to Replace. C. Edit the AWS::AutoScaling::AutoScalingGroup resource in the template, inserting an UpdatePolicy attribute. D. Create a new stack from the updated template. Once it is successfully deployed, modify the DNS records to point to the new stack and delete the old stack.

C Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
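The UpdatePolicy attribute from answer C can be sketched as a template fragment. The resource name, sizes, and batch settings below are placeholder assumptions for a rolling replacement of instances when the AMI changes:

```yaml
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "2"
    MaxSize: "4"
    LaunchConfigurationName: !Ref LaunchConfig
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 1
      MaxBatchSize: 1
      PauseTime: PT5M
```

With this attribute in place, changing the AMI in the launch configuration and calling UpdateStack replaces instances in batches instead of leaving the old ones running.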

What is the purpose of the PollForTask action when it is invoked by an AWS Data Pipeline task runner? A. It is used to retrieve the pipeline definition. B. It is used to report the progress of the task runner to AWS Data Pipeline. C. It is used to receive a task to perform from AWS Data Pipeline. D. It is used to inform AWS Data Pipeline of the outcome when the task runner completes a task.

C Task runners call PollForTask to receive a task to perform from AWS Data Pipeline. If tasks are ready in the work queue, PollForTask returns a response immediately. If no tasks are available in the queue, PollForTask uses long-polling and holds on to a poll connection for up to 90 seconds, during which time any newly scheduled tasks are handed to the task agent. Your remote worker should not call PollForTask again on the same worker group until it receives a response, and this may take up to 90 seconds. Reference: http://docs.aws.amazon.com/datapipeline/latest/APIReference/API_PollForTask.html
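The long-polling semantics described above can be mimicked with a blocking queue; `poll_for_task` is a toy stand-in for the real API, not an AWS client:

```python
import queue
import threading

def poll_for_task(work_queue: "queue.Queue", timeout: float = 90.0):
    """Toy model of PollForTask: return a task immediately if one is
    queued; otherwise hold the poll open for up to `timeout` seconds
    and hand over any task scheduled in the meantime, else None."""
    try:
        return work_queue.get(timeout=timeout)
    except queue.Empty:
        return None

# A task scheduled while the poll is being held open is still delivered.
q = queue.Queue()
threading.Timer(0.1, q.put, args=("RunActivity",)).start()
```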

Which of the following is not included in the billing metrics delivered to Amazon CloudWatch by Billing? A. Recurring fees for AWS products and services B. Total AWS charges C. One-time charges and refunds D. Usage charges for AWS products and services

C Usage charges and recurring fees for AWS products and services are included in the metrics sent from Billing to Amazon CloudWatch. You will have a metric for total AWS charges, as well as one additional metric for each AWS product or service that you use. However, one-time charges and refunds are not included. Reference: https://aws.amazon.com/blogs/aws/monitor-estimated-costs-using-amazon-cloudwatch-billing-metrics-and-alarms

When setting your customer gateway to connect to your VPC, the _________ Association between the virtual private gateway and the customer gateway is formed first, utilizing the Pre-Shared Key as an authenticator. A. IPsec B. BGP C. IKE Security D. Tunnel

C When configuring your customer gateway to connect to your VPC, several steps need to be completed. The IKE Security Association is established first between the virtual private gateway and customer gateway using the Pre-Shared Key as the authenticator. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html

When you load your table straight from an Amazon _____ table, you have the option of limiting the amount of provisioned throughput used. A. RDS B. DataPipeline C. DynamoDB D. S3

C When you load your table directly from an Amazon DynamoDB table, you have the option to control the amount of Amazon DynamoDB provisioned throughput you consume. Reference: http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html
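A sketch of what such a COPY statement can look like, built in Python; the READRATIO clause caps the percentage of the DynamoDB table's provisioned read throughput the load may consume (table names and role ARN below are illustrative):

```python
# Build a Redshift COPY statement that loads directly from a DynamoDB table.
# READRATIO limits how much of the table's provisioned read capacity the
# COPY operation is allowed to use.
def dynamodb_copy_statement(table, dynamodb_table, iam_role_arn, readratio=50):
    return (
        f"COPY {table} FROM 'dynamodb://{dynamodb_table}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"READRATIO {readratio};"
    )

sql = dynamodb_copy_statement(
    "favoritemovies", "ProducerData",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole", readratio=40)
```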

Which of the following assertions is FALSE when interacting with your AWS Direct Connect connection once it has been fully configured? A. You can manage your AWS Direct Connect connections and view the connection details. B. You can delete a connection as long as there are no virtual interfaces attached to it. C. You cannot view the current connection ID and verify if it matches the connection ID on the Letter of Authorization (LOA). D. You can accept a host connection by purchasing a hosted connection from the partner (APN).

C You can manage your AWS Direct Connect connections and view connection details, accept hosted connections, and delete connections. You can view the current status of your connection. You can also view your connection ID, which looks similar to this example dxcon-xxxx, and verify that it matches the connection ID on the Letter of Authorization (LOA) that you received from Amazon. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/viewdetails.html

You want to use AWS CodeDeploy to deploy an application to Amazon EC2 instances inside an Amazon Virtual Private Cloud (VPC). Which criterion must be satisfied for this to be possible? A. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public AWS CodeDeploy endpoint. B. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access only the public Amazon S3 service endpoint. C. The AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public AWS CodeDeploy and Amazon S3 service endpoints. D. It is not currently possible to use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC.)

C You can use AWS CodeDeploy to deploy an application to Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC). However, the AWS CodeDeploy agent installed on the Amazon EC2 instances must be able to access the public AWS CodeDeploy and Amazon S3 service endpoints. Reference: http://aws.amazon.com/codedeploy/faqs/

Amazon Cognito authenticates your mobile app with the Identity Provider (IdP) using the provider's SDK. After authenticating the end user with the IdP, your app passes the OAuth or OpenID Connect token given by the IdP to Amazon Cognito, which provides a new _________ for the user and a set of temporary, limited-privilege AWS credentials. A. Cognito Key Pair B. Cognito API C. Cognito ID D. Cognito SDK

C Your mobile app authenticates with the identity provider (IdP) using the provider's SDK. Once the end user is authenticated with the IdP, the OAuth or OpenID Connect token returned from the IdP is passed by your app to Amazon Cognito, which returns a new Cognito ID for the user and a set of temporary, limited-privilege AWS credentials. Reference: http://aws.amazon.com/cognito/faqs/

A Solutions Architect is tasked with the responsibility of migrating a legacy application from on-premises to AWS. On-premises, the application runs on two Linux servers behind a load balancer and connects to a master-master database spread over two servers. Each application server needs a license file that is associated with the MAC address of the server's network adapter. The software vendor needs 12 hours to deliver new license files through email. The application requires configuration files that use static IPv4 addresses, not DNS, to connect to the database servers. Which measures, in combination with the others, should be completed to provide a scalable architecture for the application servers? (Select two.) A. Create a pool of ENIs, request license files from the vendor for the pool, and store the license files within Amazon S3. Create automation to download an unused license, and attach the corresponding ENI at boot time. B. Create a pool of ENIs, request license files from the vendor for the pool, store the license files on an Amazon EC2 instance, modify the configuration files, and create an AMI from the instance. Use this AMI for all instances. C. Create a bootstrap automation to request a new license file from the vendor with a unique return email. Have the server configure itself with the received license file. D. Create bootstrap automation to attach an ENI from the pool, read the database IP addresses from AWS Systems Manager Parameter Store, and inject those parameters into the local configuration files. Keep SSM up to date using a Lambda function. E. Install the application on an EC2 instance, configure the application, and configure the IP address information. Create an AMI from this instance and use it for all instances.

CD

Which of the following are AWS storage services? (Select two.) A. AWS Relational Database Service (AWS RDS) B. AWS ElastiCache C. AWS Glacier D. AWS Import/Export

CD

A public retail web application utilizes an Application Load Balancer (ALB) in front of Amazon EC2 instances distributed across different Availability Zones (AZs) within a Region, backed by an Amazon RDS MySQL Multi-AZ deployment. The health checks for the target group are set to utilize HTTP and refer to the product catalog page. Auto Scaling is designed to maintain the size of the web fleet in accordance with the results of the ALB health check. Recently, the application was unavailable. Throughout the downtime, Auto Scaling constantly replaced the instances. Following an inspection, it was revealed that although the web server metrics were within normal range, the database tier was under heavy stress, resulting in significantly increased query response times. Which of the following adjustments would resolve these concerns while also increasing monitoring capabilities for the whole application stack's availability and functioning in preparation for future growth? (Select two.) A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier. B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails. C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails. D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier. E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.

CE

You've created a VPC with a CIDR block of 10.0.0.0/28 and deployed a three-tier web application. Two web servers, two application servers, two database servers, and one NAT instance are originally deployed, for a total of seven EC2 instances. The web, application, and database servers are distributed across two Availability Zones (AZs). Additionally, you deploy an ELB in front of the two web servers and use Route 53 for DNS. Web traffic gradually increases in the days following the deployment, and you attempt to double the number of instances in each tier of the application to handle the increased load; however, some of these new instances fail to launch. Which of the following are the most likely causes? (Select two.) A. AWS reserves the first and the last private IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances B. The Internet Gateway (IGW) of your VPC has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches C. The ELB has scaled-up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches D. AWS reserves one IP address in each subnet's CIDR block for Route53 so you do not have enough addresses left to launch all of the new EC2 instances E. AWS reserves the first four and the last IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances

CE Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
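The address arithmetic behind answer E can be checked with Python's `ipaddress` module:

```python
import ipaddress

# AWS reserves 5 addresses in every subnet: the network address, the next
# three host addresses (VPC router, DNS, future use), and the broadcast
# address.
AWS_RESERVED_PER_SUBNET = 5

def usable_hosts(cidr):
    """Number of addresses AWS actually lets you assign in a subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

# A /28 block has 16 addresses, leaving only 11 usable ones, so doubling
# the original 7 instances (plus ELB-managed interfaces) exhausts the space.
capacity = usable_hosts("10.0.0.0/28")
```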

A business is in the process of transferring its apps to AWS. The apps will be deployed to business units' AWS accounts. The firm employs many development teams that are responsible for the creation and maintenance of all apps. The firm anticipates tremendous user growth. The chief technology officer of the organization must have the following requirements: ✑ Developers must launch the AWS infrastructure using AWS CloudFormation. ✑ Developers must not be able to create resources outside of CloudFormation. ✑ The solution must be able to scale to hundreds of AWS accounts. Which of the following would satisfy these criteria? (Select two.) A. Using CloudFormation, create an IAM role that can be assumed by CloudFormation that has permissions to create all the resources the company needs. Use CloudFormation StackSets to deploy this template to each AWS account. B. In a central account, create an IAM role that can be assumed by developers, and attach a policy that allows interaction with CloudFormation. Modify the AssumeRolePolicyDocument action to allow the IAM role to be passed to CloudFormation. C. Using CloudFormation, create an IAM role that can be assumed by developers, and attach policies that allow interaction with and passing a role to CloudFormation. Attach an inline policy to deny access to all other AWS services. Use CloudFormation StackSets to deploy this template to each AWS account. D. Using CloudFormation, create an IAM role for each developer, and attach policies that allow interaction with CloudFormation. Use CloudFormation StackSets to deploy this template to each AWS account. E. In a central AWS account, create an IAM role that can be assumed by CloudFormation that has permissions to create the resources the company requires. Create a CloudFormation stack policy that allows the IAM role to manage resources. Use CloudFormation StackSets to deploy the CloudFormation stack policy to each AWS account.

CE Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html

A large corporation utilizes a multi-account AWS approach. Separate accounts are used to manage development, staging, and production workloads. The following criteria have been set to help manage expenses and enhance governance: ✑ The company must be able to calculate the AWS costs for each project. ✑ The company must be able to calculate the AWS costs for each environment: development, staging, and production. ✑ Commonly deployed IT services must be centrally managed. ✑ Business units can deploy pre-approved IT services only. ✑ Usage of AWS resources in the development account must be limited. Which measures should be conducted in combination to achieve these requirements? (Select three.) A. Apply environment, cost center, and application name tags to all taggable resources. B. Configure custom budgets and define thresholds using Cost Explorer. C. Configure AWS Trusted Advisor to obtain weekly emails with cost-saving estimates. D. Create a portfolio for each business unit and add products to the portfolios using AWS CloudFormation in AWS Service Catalog. E. Configure a billing alarm in Amazon CloudWatch. F. Configure SCPs in AWS Organizations to allow services available using AWS.

CEF

A business is managing thousands of Amazon EC2 instances with the help of an existing orchestration technology. A recent penetration test discovered a weakness in the software stack of the organization. This risk led the organization to conduct a comprehensive assessment of its current production environment. According to the investigation, the following vulnerabilities exist in the environment: ✑ Operating systems with outdated libraries and known vulnerabilities are being used in production. ✑ Relational databases hosted and managed by the company are running unsupported versions with known vulnerabilities. ✑ Data stored in databases is not encrypted. The solutions architect aims to utilize AWS Config to regularly audit and analyze compliance with the company's rules and standards for AWS resource settings. What additional measures will allow the business to secure its environment and manage its resources while adhering to best practices? A. Use AWS Application Discovery Service to evaluate all running EC2 instances Use the AWS CLI to modify each instance, and use EC2 user data to install the AWS Systems Manager Agent during boot. Schedule patching to run as a Systems Manager Maintenance Windows task. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption. B. Create an AWS CloudFormation template for the EC2 instances. Use EC2 user data in the CloudFormation template to install the AWS Systems Manager Agent, and enable AWS KMS encryption on all Amazon EBS volumes. Have CloudFormation replace all running instances. Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline. C. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Use the Systems Manager Run Command to execute a list of commands to upgrade software on each instance using operating system-specific tools. Enable AWS KMS encryption on all Amazon EBS volumes. D. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption. Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to execute AWS-RunPatchBaseline using the patch baseline.

D

A business requires that only particularly hardened AMIs be launched into public subnets inside a VPC, and that the AMIs be associated with a certain security group. Allowing non-compliant instances to launch onto the public subnet may pose a severe security risk if they are permitted to run. A mapping of permitted AMIs to subnets and security groups is stored in an Amazon DynamoDB table in the same AWS account. The business developed an AWS Lambda function that, when run, terminates an Amazon EC2 instance if the AMI, subnet, and security group combination is not authorized in the DynamoDB table. What should the Solutions Architect do to limit the risk of compliance deviations as fast as possible? A. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched using one of the allowed AMIs, and associate it with the Lambda function as the target. B. For the Amazon S3 bucket receiving the AWS CloudTrail logs, create an S3 event notification configuration with a filter to match when logs contain the ec2:RunInstances action, and associate it with the Lambda function as the target. C. Enable AWS CloudTrail and configure it to stream to an Amazon CloudWatch Logs group. Create a metric filter in CloudWatch to match when the ec2:RunInstances action occurs, and trigger the Lambda function when the metric is greater than 0. D. Create an Amazon CloudWatch Events rule that matches each time an EC2 instance is launched, and associate it with the Lambda function as the target.

D
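The rule in answer D can be expressed with a CloudWatch Events (EventBridge) event pattern along these lines; this is a sketch, matching instances entering the running state so every launch, regardless of AMI, reaches the compliance Lambda function:

```python
import json

# Event pattern matching every EC2 instance state change to "running".
# A rule with this pattern, with the compliance Lambda function as its
# target, fires on each launch rather than only launches from allowed AMIs.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}

# Rules accept the pattern as a JSON string (e.g. the EventPattern
# parameter of the PutRule API call).
rule_pattern_json = json.dumps(event_pattern)
```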

A business that specializes in the Internet of Things has deployed a fleet of sensors to monitor temperatures in distant regions. Each device establishes a connection with AWS IoT Core and transmits a message every 30 seconds, which updates an Amazon DynamoDB table. A system administrator verifies through AWS IoT that the devices are still communicating with AWS IoT Core, yet the table is not updating. What should a Solutions Architect check to determine why the table is not being updated? A. Verify the AWS IoT Device Shadow service is subscribed to the appropriate topic and is executing the AWS Lambda function. B. Verify that AWS IoT monitoring shows that the appropriate AWS IoT rules are being executed, and that the AWS IoT rules are enabled with the correct rule actions. C. Check the AWS IoT Fleet indexing service and verify that the thing group has the appropriate IAM role to update DynamoDB. D. Verify that AWS IoT things are using MQTT instead of MQTT over WebSocket, then check that the provisioning has the appropriate policy attached.

D

A financial institution is running its mission-critical application on Amazon Web Services' current-generation Linux EC2 instances. The program features a self-managed MySQL database that handles a large amount of I/O. The application is doing well in terms of handling a reasonable quantity of traffic throughout the month. However, it slows down significantly during the last three days of each month owing to month-end reporting, despite the fact that the firm uses Elastic Load Balancers and Auto Scaling to meet increasing demand. Which of the following actions would enable the database to manage the month-end load with the LEAST amount of performance degradation? A. Pre-warming Elastic Load Balancers, using a bigger instance type, changing all Amazon EBS volumes to GP2 volumes. B. Performing a one-time migration of the database cluster to Amazon RDS, and creating several additional read replicas to handle the load during end of month. C. Using Amazon CloudWatch with AWS Lambda to change the type, size, or IOPS of Amazon EBS volumes in the cluster based on a specific CloudWatch metric. D. Replacing all existing Amazon EBS volumes with new PIOPS volumes that have the maximum available storage size and I/O per second by taking snapshots before the end of the month and reverting back afterwards.

D

A firm intends to implement a new business analytics application that will need 10,000 hours of compute time each month. Flexible availability of computational resources is acceptable, but they must be as cost-effective as feasible. Additionally, the organization will offer a reporting service for distributing analytics results, which must be available at all times. How should the Solutions Architect approach developing a solution that satisfies these requirements? A. Deploy the reporting service on a Spot Fleet. Deploy the analytics application as a container in Amazon ECS with AWS Fargate as the compute option. Set the analytics application to use a custom metric with Service Auto Scaling. B. Deploy the reporting service on an On-Demand Instance. Deploy the analytics application as a container in AWS Batch with AWS Fargate as the compute option. Set the analytics application to use a custom metric with Service Auto Scaling. C. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on a Spot Fleet. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the Spot Fleet. D. Deploy the reporting service as a container in Amazon ECS with AWS Fargate as the compute option. Deploy the analytics application on an On-Demand Instance and purchase a Reserved Instance with a 3-year term. Set the analytics application to use a custom metric with Amazon EC2 Auto Scaling applied to the On-Demand Instance.

D

You may restrict access to S3 buckets and objects using which of the following? A. Identity and Access Management (IAM) Policies. B. Access Control Lists (ACLs). C. Bucket Policies. D. All of the above

D

On AWS, a user hosts a public website. The user wishes to host both the database and the application server inside an AWS VPC. The user wishes to configure a database that is capable of connecting to the Internet in order to perform patch upgrades but must not accept any requests from the internet. How does the user configure this? A. Setup DB in a private subnet with the security group allowing only outbound traffic. B. Setup DB in a public subnet with the security group allowing only inbound data. C. Setup DB in a local data center and use a private gateway to connect the application with DB. D. Setup DB in a private subnet which is connected to the internet via NAT for outbound.

D A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. AWS provides two features that the user can use to increase security in VPC: security groups and network ACLs. When the user wants to setup both the DB and App on VPC, the user should make one public and one private subnet. The DB should be hosted in a private subnet and instances in that subnet cannot reach the internet. The user can allow an instance in his VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet by using a Network Address Translation (NAT) instance. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
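The routing split behind this pattern can be sketched as two route tables (the gateway and NAT targets are placeholders): the public subnet routes 0.0.0.0/0 to the internet gateway, while the private subnet hosting the DB routes outbound traffic through the NAT, so the DB can fetch patches but receives no unsolicited inbound connections:

```python
# Sketch of the route tables for the public/private subnet split.
# "local" covers intra-VPC traffic; the default route differs per subnet.
public_routes = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0", "target": "igw-placeholder"},   # internet gateway
]
private_routes = [
    {"destination": "10.0.0.0/16", "target": "local"},
    {"destination": "0.0.0.0/0", "target": "nat-placeholder"},   # NAT instance/gateway
]
```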

Which of the following statements about the number of security groups and rules that apply to an EC2-Classic instance and an EC2-VPC network interface is correct? A. In EC2-Classic, you can associate an instance with up to 5 security groups and add up to 50 rules to a security group. In EC2-VPC, you can associate a network interface with up to 500 security groups and add up to 100 rules to a security group. B. In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 50 rules to a security group. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 100 rules to a security group. C. In EC2-Classic, you can associate an instance with up to 5 security groups and add up to 100 rules to a security group. In EC2-VPC, you can associate a network interface with up to 500 security groups and add up to 50 rules to a security group. D. In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 50 rules to a security group.

D A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. If you're using EC2-Classic, you must use security groups created specifically for EC2-Classic. In EC2-Classic, you can associate an instance with up to 500 security groups and add up to 100 rules to a security group. If you're using EC2-VPC, you must use security groups created specifically for your VPC. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 50 rules to a security group. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

How many metrics does CloudWatch provide for Auto Scaling? A. 7 metrics and 5 dimensions B. 5 metrics and 1 dimension C. 1 metric and 5 dimensions D. 8 metrics and 1 dimension

D AWS Auto Scaling supports both detailed as well as basic monitoring of the CloudWatch metrics. Basic monitoring happens every 5 minutes, while detailed monitoring happens every minute. It supports 8 metrics and 1 dimension. The metrics are: GroupMinSize, GroupMaxSize, GroupDesiredCapacity, GroupInServiceInstances, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, and GroupTotalInstances. The dimension is AutoScalingGroupName. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/supported_services.html
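The metric and dimension names above can be captured as constants, e.g. for a monitoring script that iterates over them; a minimal sketch:

```python
# The eight Auto Scaling group metrics CloudWatch reports, with the single
# AutoScalingGroupName dimension (as listed in the answer above).
AUTO_SCALING_METRICS = [
    "GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity",
    "GroupInServiceInstances", "GroupPendingInstances",
    "GroupStandbyInstances", "GroupTerminatingInstances",
    "GroupTotalInstances",
]
AUTO_SCALING_DIMENSIONS = ["AutoScalingGroupName"]
```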

Is it possible to have a direct connection to Amazon Web Services (AWS)? A. No, AWS only allows access from the public Internet. B. No, you can create an encrypted tunnel to VPC, but you cannot own the connection. C. Yes, you can via Amazon Dedicated Connection D. Yes, you can via AWS Direct Connect.

D AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to the AWS cloud (for example, to Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) and to Amazon Virtual Private Cloud (Amazon VPC), bypassing Internet service providers in your network path. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

You've created your first Lambda function and want to monitor it using CloudWatch metrics. Which of the following Lambda metrics can CloudWatch monitor? A. Total requests only B. Status Check Failed, total requests, and error rates C. Total requests and CPU utilization D. Total invocations, errors, duration, and throttles

D AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch (CloudWatch). These metrics include total invocations, errors, duration, and throttles. Reference: http://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html

Which of the following statements is correct about Amazon EBS encryption keys? A. Amazon EBS encryption uses the Customer Master Key (CMK) to create an AWS Key Management Service (AWS KMS) master key. B. Amazon EBS encryption uses the EBS Magnetic key to create an AWS Key Management Service (AWS KMS) master key. C. Amazon EBS encryption uses the EBS Magnetic key to create a Customer Master Key (CMK). D. Amazon EBS encryption uses the AWS Key Management Service (AWS KMS) master key to create a Customer Master Key (CMK).

D Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

Which of the following is the Amazon Resource Name (ARN) condition operator that may be used in an Identity and Access Management (IAM) policy to ensure that the ARN is case-insensitive? A. ArnCheck B. ArnMatch C. ArnCase D. ArnLike

D Amazon Resource Name (ARN) condition operators let you construct Condition elements that restrict access based on comparing a key to an ARN. ArnLike, for instance, is a case-insensitive matching of the ARN. Each of the six colon-delimited components of the ARN is checked separately and each can include a multi- character match wildcard (*) or a single-character match wildcard (?). Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
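A sketch of ArnLike inside a policy document (account ID, action, and ARNs below are placeholders); each of the six colon-delimited ARN components may carry a `*` or `?` wildcard:

```python
import json

# Illustrative IAM policy using the ArnLike condition operator to match
# source ARNs against a wildcard pattern.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sqs:SendMessage",
        "Resource": "*",
        "Condition": {
            "ArnLike": {
                # Matches any region, one account, topics prefixed "orders-"
                "aws:SourceArn": "arn:aws:sns:*:123456789012:orders-*"
            }
        },
    }],
}

policy_json = json.dumps(policy)
```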

An elastic network interface (ENI) is a virtual network interface that may be attached to a virtual private cloud (VPC) instance. An ENI may include a single public IP address, which is automatically allocated to the elastic network interface for eth0 when an instance is launched, but only when you_____. A. create an elastic network interface for eth1 B. include a MAC address C. use an existing network interface D. create an elastic network interface for eth0

D An elastic network interface (ENI) is defined as a virtual network interface that you can attach to an instance in a VPC and can include one public IP address, which can be auto-assigned to the elastic network interface for eth0 when you launch an instance, but only when you create an elastic network interface for eth0 instead of using an existing network interface. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html

Which of the following cannot be used to manage and administer Amazon ElastiCache? A. AWS software development kits (SDKs) B. Amazon S3 C. ElastiCache command line interface (CLI) D. AWS CloudWatch

D CloudWatch is a monitoring tool and doesn't give users access to manage Amazon ElastiCache. Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.Managing.html

A user is attempting to comprehend the intricacies of the CloudWatch monitoring concept. Which of the following services does not provide detailed monitoring through CloudWatch? A. AWS RDS B. AWS ELB C. AWS Route53 D. AWS EMR

D CloudWatch is used to monitor AWS as well as the custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. Services such as RDS, EC2, Auto Scaling, ELB, and Route 53 can provide the monitoring data every minute. Reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/supported_services.html

You intend to utilize Amazon Redshift and will be deploying dw1.8xlarge nodes. What is the bare minimum number of nodes that you must deploy in this configuration? A. 1 B. 4 C. 3 D. 2

D For a single-node configuration in Amazon Redshift, only the smaller node size is available. The 8XL extra-large nodes are only available in a multi-node configuration, which requires a minimum of two nodes. Reference: http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html

Which phase of the "get started with AWS Direct Connect" process tags the virtual interface you constructed with a customer-supplied tag that conforms with the Ethernet 802.1Q standard? A. Download Router Configuration. B. Complete the Cross Connect. C. Configure Redundant Connections with AWS Direct Connect. D. Create a Virtual Interface.

D In the list of using Direct Connect steps, the create a Virtual Interface step is to provision your virtual interfaces. Each virtual interface must be tagged with a customer-provided tag that complies with the Ethernet 802.1Q standard. This tag is required for any traffic traversing the AWS Direct Connect connection. Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/getstarted.html#createvirtualinterface
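802.1Q constrains the VLAN tag to a 12-bit VLAN ID, of which 0 and 4095 are reserved; a small validation sketch:

```python
# 802.1Q VLAN IDs are 12-bit values (0-4095); 0 and 4095 are reserved,
# leaving 1-4094 usable for tagging a Direct Connect virtual interface.
def is_valid_vlan_tag(vlan_id: int) -> bool:
    return 1 <= vlan_id <= 4094
```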

What is the network performance of the Amazon EC2 c4.8xlarge instance? A. Very High but variable B. 20 Gigabit C. 5 Gigabit D. 10 Gigabit

D Networking performance offered by the c4.8xlarge instance is 10 Gigabit. Reference: http://aws.amazon.com/ec2/instance-types/

To handle Web traffic for a popular product, your chief financial officer and information technology director have purchased ten m1.large high-utilization Reserved Instances (RIs) that are evenly distributed across two availability zones; Route 53 is used to route the traffic to an Elastic Load Balancer (ELB). After a few months, the product becomes even more popular, necessitating the augmentation of capacity. As a consequence, your organization acquires two c3.2xlarge RIs with a medium usage rate. You register the two c3.2xlarge instances with your ELB and shortly discover that the m1.large instances are fully used but the c3.2xlarge instances have substantial unused capacity. Which option is the most cost-effective and makes the most use of EC2 capacity? A. Configure Autoscaling group and Launch Configuration with ELB to add up to 10 more on-demand m1.large instances when triggered by Cloudwatch. Shut off c3.2xlarge instances. B. Configure ELB with two c3.2xlarge instances and use on-demand Autoscaling group for up to two additional c3.2xlarge instances. Shut off m1.large instances. C. Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and health checks. Shut off ELB. D. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round robin.

D Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
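Answer D's weighted round robin can be sketched as a Route 53 change batch with one weighted record set per ELB. This is a minimal illustration: the hosted zone, record name, ELB DNS names, and weights below are invented placeholders, not values from the question.

```python
# Hypothetical sketch: two weighted CNAME record sets pointing at the two ELBs.
# Route 53 splits traffic in proportion to each record's Weight.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "product.example.com.",
                "Type": "CNAME",
                "SetIdentifier": "m1-large-elb",   # distinguishes weighted records
                "Weight": 100,
                "TTL": 60,
                "ResourceRecords": [{"Value": "m1-elb.us-east-1.elb.amazonaws.com"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "product.example.com.",
                "Type": "CNAME",
                "SetIdentifier": "c3-2xlarge-elb",
                "Weight": 100,
                "TTL": 60,
                "ResourceRecords": [{"Value": "c3-elb.us-east-1.elb.amazonaws.com"}],
            },
        },
    ]
}
# A real call would be:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z1EXAMPLE", ChangeBatch=change_batch)
```

In practice the weights would be tuned to each fleet's aggregate capacity (the two c3.2xlarge instances can absorb proportionally more than an equal share would send them).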

To comply with industry laws, a Solutions Architect must create a solution that stores a business's vital data across various public AWS Regions, including the United States, where the business has its headquarters. The Solutions Architect is responsible for ensuring that the data stored in AWS is accessible over the company's worldwide WAN network. The security team has mandated that no traffic requesting access to this data be sent over the public internet. How should the Solutions Architect develop a highly accessible, cost-effective solution that satisfies the requirements? A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data. B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions. C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions. D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.

D Reference: https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/

Users may bid on collectable objects on an auction website. According to the auction regulations, each bid must be processed only once and in the sequence in which it was received. The present approach is based on an Amazon EC2 web server fleet that writes bid records to Amazon Kinesis Data Streams. A single t2.large instance is configured with a cron job that runs the bid processor, which receives and analyzes incoming bids from Kinesis Data Streams. Although the auction site is gaining popularity, users are reporting that certain bids are not being registered. Troubleshooting suggests that the bid processor is inefficient during high-demand hours, crashes periodically during processing, and occasionally loses track of which records are being processed. What adjustments should be made to increase the reliability of bid processing? A. Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the bid processor to flag each record in Kinesis Data Streams as being unread, processing, and processed. At the start of each bid processing run, scan Kinesis Data Streams for unprocessed records. B. Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic to trigger an AWS Lambda function that processes each bid as soon as a user submits it. C. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid processor to continuously poll the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum size of 1. D. Switch the EC2 instance type from t2.large to a larger general compute instance type. Put the bid processor EC2 instances in an Auto Scaling group that scales out the number of EC2 instances running the bid processor, based on the IncomingRecords metric in Kinesis Data Streams.

D Reference: https://d0.awsstatic.com/whitepapers/Building_a_Real_Time_Bidding_Platform_on_AWS_v1_Final.pdf
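The scaling trigger behind answer D can be sketched as a CloudWatch alarm on the stream's `IncomingRecords` metric that fires a scale-out policy on the bid processor's Auto Scaling group. The stream name, threshold, and policy ARN below are placeholders, not values from the question.

```python
# Hypothetical sketch: alarm on the Kinesis stream's IncomingRecords metric.
# When the sum over two consecutive minutes exceeds the threshold, the alarm
# invokes a (placeholder) Auto Scaling policy that adds bid-processor instances.
alarm_params = {
    "AlarmName": "bid-stream-high-incoming-records",
    "Namespace": "AWS/Kinesis",
    "MetricName": "IncomingRecords",
    "Dimensions": [{"Name": "StreamName", "Value": "bid-stream"}],
    "Statistic": "Sum",
    "Period": 60,                  # one-minute evaluation window
    "EvaluationPeriods": 2,
    "Threshold": 10000,            # records/minute before scaling out (illustrative)
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:placeholder"
    ],
}
# A real call would be:
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```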

A business wants to use Amazon WorkSpaces to deliver desktop as a service (DaaS) to a number of employees. WorkSpaces will need authorization to access files and services hosted on premises, based on the company's Active Directory. The network will be connected using an existing AWS Direct Connect connection. The solution must meet the following criteria: ✑ Credentials from Active Directory should be used to access on-premises files and services. ✑ Credentials from Active Directory should not be stored outside the company. ✑ End users should have single sign-on (SSO) to on-premises files and services once connected to WorkSpaces. Which authentication technique should the solutions architect employ for end users? A. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory within the WorkSpaces VPC. Use the Active Directory Migration Tool (ADMT) with the Password Export Server to copy users from the on-premises Active Directory to AWS Managed Microsoft AD. Set up a one-way trust allowing users from AWS Managed Microsoft AD to access resources in the on-premises Active Directory. Use AWS Managed Microsoft AD as the directory for WorkSpaces. B. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory Service to be deployed on premises using the service account to communicate with the on-premises Active Directory. Ensure the required TCP ports are open from the WorkSpaces VPC to the on-premises AD Connector. Use the AD Connector as the directory for WorkSpaces. C. Create a service account in the on-premises Active Directory with the required permissions. Create an AD Connector in AWS Directory Service within the WorkSpaces VPC using the service account to communicate with the on-premises Active Directory. Use the AD Connector as the directory for WorkSpaces. D. Create an AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) directory in the AWS Directory Service within the WorkSpaces VPC. Set up a one-way trust allowing users from the on-premises Active Directory to access resources in the AWS Managed Microsoft AD. Use AWS Managed Microsoft AD as the directory for WorkSpaces. Create an identity provider with AWS Identity and Access Management (IAM) from an on-premises ADFS server. Allow users from this identity provider to assume a role with a policy allowing them to run WorkSpaces.

D Reference: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html

Which characteristic of the load balancing service aims to send future connections to a service to the same node as long as it is online? A. Node balance B. Session retention C. Session multiplexing D. Session persistence

D Session persistence is a feature of the load balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online. Reference: http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Concepts-d1e233.html

Which Amazon Web Services instance address has the following characteristics? "When an instance is stopped, its Elastic IP address becomes unmapped, which must be remapped when the instance is restarted." A. Both A and B B. None of these C. VPC Addresses D. EC2 Addresses

D Stopping an instance: EC2-Classic: If you stop an instance, its Elastic IP address is disassociated, and you must reassociate the Elastic IP address when you restart the instance. EC2-VPC: If you stop an instance, its Elastic IP address remains associated. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
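The reassociation step described above for EC2-Classic can be sketched as follows; the instance ID and Elastic IP are invented placeholders. (EC2-Classic identifies the address by `PublicIp`, whereas VPC addresses use an `AllocationId` instead.)

```python
# Hypothetical sketch: after restarting a stopped EC2-Classic instance, the
# Elastic IP that was disassociated on stop must be reassociated.
reassociate_params = {
    "InstanceId": "i-0123456789abcdef0",  # placeholder instance
    "PublicIp": "203.0.113.10",           # the Elastic IP to remap
}
# A real call would be:
# boto3.client("ec2").associate_address(**reassociate_params)
```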

Which EC2 feature enables the user to cluster Cluster Compute instances? A. Cluster group B. Cluster security group C. GPU units D. Cluster placement group

D The Amazon EC2 cluster placement group functionality allows users to group cluster compute instances in clusters. Reference: https://aws.amazon.com/ec2/faqs/
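A cluster placement group is created with the `cluster` strategy and then referenced at instance launch. A minimal sketch, with a placeholder group name:

```python
# Hypothetical sketch: create a cluster placement group, which packs instances
# close together on low-latency, high-bandwidth networking.
placement_params = {
    "GroupName": "hpc-cluster-group",  # placeholder name
    "Strategy": "cluster",
}
# A real call would be:
# boto3.client("ec2").create_placement_group(**placement_params)

# Instances then join the group at launch time via the Placement parameter:
launch_hint = {"Placement": {"GroupName": "hpc-cluster-group"}}
```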

How much memory is provided by the cr1.8xlarge instance type? A. 224 GB B. 124 GB C. 184 GB D. 244 GB

D The CR1 instances are part of the memory optimized instances. They offer lowest cost per GB RAM among all the AWS instance families. CR1 instances are part of the new generation of memory optimized instances, which can offer up to 244 GB RAM and run on faster CPUs (Intel Xeon E5-2670 with NUMA support) in comparison to the M2 instances of the same family. They support cluster networking for bandwidth intensive applications. cr1.8xlarge is one of the largest instance types of the CR1 family, which can offer 244 GB RAM. Reference: http://aws.amazon.com/ec2/instance-types/

Which of the following IAM policy components allows for the definition of an exception to a set of actions? A. NotException B. ExceptionAction C. Exception D. NotAction

D The NotAction element lets you specify an exception to a list of actions. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
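NotAction is easiest to see in a concrete policy document. The statement below is an illustrative example only (not a recommended policy): it allows every IAM action except the two listed.

```python
import json

# Example policy using NotAction: "allow everything in iam: except these".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": ["iam:DeleteUser", "iam:DeleteRole"],  # the exceptions
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Note that NotAction does not deny the excepted actions; it simply leaves them unmatched by this statement, so they stay implicitly denied unless another statement allows them.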

A benefits enrollment firm hosts a three-tier web application on AWS in a VPC that contains a public Web tier NAT (Network Address Translation) instance. There is sufficient allocated capacity to handle the anticipated volume of work throughout the next fiscal year's benefit enrollment period, plus some additional overhead. Enrollment proceeds normally for two days, at which point the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that an extremely large and unexpected amount of inbound traffic is coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overburdened that administrators of benefit enrollment cannot even SSH into them. Which action would be most effective in fighting off this attack? A. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (Internet Gateway) B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP C. Create 15 Security Group rules to block the attacking IP addresses over port 80 D. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses

D A network ACL (NACL) associated with the web tier subnet can contain explicit deny rules, so inbound deny entries for the 15 attacking IP addresses over port 80 block the traffic at the subnet boundary before it ever reaches the instances. Security group rules can only allow traffic, not deny it, which is why option C cannot work.
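Answer D's deny rules can be sketched as one NACL entry per attacking address. The NACL ID and IP addresses below are placeholders; NACL rules are evaluated in ascending rule-number order, so the deny entries get lower numbers than the subnet's allow rules.

```python
# Hypothetical sketch: inbound deny entries for two of the 15 reported IPs.
attacking_ips = ["198.51.100.7", "198.51.100.8"]  # placeholder addresses
entries = [
    {
        "NetworkAclId": "acl-0123456789abcdef0",  # placeholder NACL
        "RuleNumber": 100 + i,      # evaluated before higher-numbered allow rules
        "Protocol": "6",            # TCP
        "RuleAction": "deny",
        "Egress": False,            # inbound rule
        "CidrBlock": f"{ip}/32",
        "PortRange": {"From": 80, "To": 80},
    }
    for i, ip in enumerate(attacking_ips)
]
# A real call, once per entry:
# boto3.client("ec2").create_network_acl_entry(**entry)
```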

In the context of the IAM service, a GROUP is defined as: A. A collection of AWS accounts B. It's the group of EC2 machines that gain the permissions specified in the GROUP. C. There's no GROUP in IAM, but only USERS and RESOURCES. D. A collection of users.

D Use groups to assign permissions to IAM users. Instead of defining permissions for individual IAM users, it's usually more convenient to create groups that relate to job functions (administrators, developers, accounting, etc.), define the relevant permissions for each group, and then assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group. That way, you can make changes for everyone in a group in just one place. As people move around in your company, you can simply change which IAM group their IAM user belongs to. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-groups-for-permissions
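The inheritance model described above can be sketched with a toy in-memory version; the group names, users, and action strings are invented for illustration, not real account data.

```python
# Toy model of group-based permissions: a user's effective permissions are
# whatever their group grants, so moving a user between groups changes access
# in exactly one place.
group_policies = {
    "developers": {"ec2:StartInstances", "ec2:StopInstances"},
    "accounting": {"aws-portal:ViewBilling"},
}
user_groups = {"alice": "developers", "bob": "accounting"}

def permissions_for(user: str) -> set:
    """Return the permissions the user inherits from their group."""
    return group_policies[user_groups[user]]

# When alice changes roles, one membership update re-scopes all her access:
user_groups["alice"] = "accounting"
```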

A customer is going to deploy an SSL-enabled web application to AWS and wishes to establish a separation of roles between the EC2 service administrators, who have access to instances and API calls, and the security officers, who will manage and have exclusive access to the application's X.509 certificate, which contains the private key. A. Upload the certificate on an S3 bucket owned by the security officers and accessible only by EC2 Role of the web servers. B. Configure the web servers to retrieve the certificate upon boot from an CloudHSM is managed by the security officers. C. Configure system permissions on the web servers to restrict access to the certificate only to the authority security officers D. Configure IAM policies authorizing access to the certificate store only to the security officers and terminate SSL on an ELB.

D SSL is terminated at the ELB, and the web request then travels unencrypted from the ELB to the EC2 instance. Even if the certificates were stored in S3, they would still have to be configured on the web servers or load balancers, which is difficult to do securely when the keys live in S3. Keeping the keys in the IAM certificate store and using IAM policies to restrict access gives a clear separation of concerns between security officers and developers: EC2 administrators can still configure SSL on the ELB without ever handling the private key.

A firm has a social networking application for photo sharing. To provide a uniform user experience, the business performs some image processing on user-uploaded photographs before publishing them in the application. The image processing is accomplished using a collection of Python libraries. As of now, the architecture is as follows: ✑ The image processing Python code runs in a single Amazon EC2 instance and stores the processed images in an Amazon S3 bucket named ImageBucket. ✑ The front-end application, hosted in another bucket, loads the images from ImageBucket to display to users. With worldwide expansion plans, the firm wants to modify its present architecture to scale the application for higher demand while also reducing management complexity as the application grows. Which adjustments should a solutions architect make in combination? (Select two.) A. Place the image processing EC2 instance into an Auto Scaling group. B. Use AWS Lambda to run the image processing tasks. C. Use Amazon Rekognition for image processing. D. Use Amazon CloudFront in front of ImageBucket. E. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.

DE

Which combination of procedures may a Solutions Architect take to safeguard a web workload hosted on Amazon EC2 from DDoS and application layer attacks? (Choose two.) A. Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it. B. Migrate the DNS to Amazon Route 53 and use AWS Shield. C. Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it. D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it. E. Create and use an internet gateway in the VPC and use AWS Shield.

DE Reference: https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

A business operates a well-known public-facing ecommerce website. Its user base is rapidly expanding from a local to a national level. The website is hosted in-house using web servers and a MySQL database. The business wishes to migrate its workload to AWS. A solutions architect must develop a solution for the following: ✑ Improve security ✑ Improve reliability ✑ Improve availability ✑ Reduce latency ✑ Reduce maintenance Which measures should the solutions architect take in combination to satisfy these requirements? (Select three.) A. Use Amazon EC2 instances in two Availability Zones for the web servers in an Auto Scaling group behind an Application Load Balancer. B. Migrate the database to a Multi-AZ Amazon Aurora MySQL DB cluster. C. Use Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster. D. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving webpages. Use AWS WAF to improve website security. E. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving webpages. Use AWS WAF to improve website security. F. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.

DEF

