SA need to know mar 17


A company has a stateless web application that runs on AWS Lambda functions that are invoked by Amazon API Gateway. The company wants to deploy the application across multiple AWS Regions to provide Regional failover capabilities. What should a solutions architect do to route traffic to multiple Regions? A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration. B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to route traffic. C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure the transit gateway to route requests. D. Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway endpoint hostnames in each Region.

A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.

what is outpost

An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. benefitsRun AWS Services on premises. Extend AWS compute, networking, security, and other services on premises for low latency, local data processing, and data residency needs. Fully managed infrastructure. ... Truly consistent hybrid experience.

Q10 A company offers a service that is growing rapidly. Because of the growth, the company's order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following: • A group of EC2 instances that run in an EC2 Auto Scaling group to collect orders from the application. • Another group of EC2 instances that run in an EC2 Auto Scaling group to fulfill orders. The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event. A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize utilization of the company's AWS resources. Which solution meets these requirements? A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group's minimum capacity according to peak workload values. B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon SNS topic that creates additional Auto Scaling groups on demand.

Answer is on the next card.

Q28 A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data. Which solutions will meet these requirements? (Select TWO.) A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3. B. Use Amazon Managed Streaming for Apache Kafka to stream the data. Use AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3. C. Use AWS Database Migration Service to ingest the data. Use Amazon EMR to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3.

Answer is on the other card.

A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions. Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience? A. Create an A record with a latency policy. B. Create an A record with a geolocation policy. C. Create a CNAME record with a failover policy. D. Create a CNAME record with a geoproximity policy.

Answer: A Explanation: To provide the most high-performing experience for the users of the application, a solutions architect should use a latency routing policy for the Route 53 A record. This policy allows Route 53 to route traffic to the AWS Region that provides the lowest possible latency for the users. A latency routing policy can also improve the availability of the application, as Route 53 can automatically route traffic to another Region if the primary Region becomes unavailable.
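For reference, a minimal boto3 sketch of creating latency-based alias A records that point to a Regional ALB in each of two Regions. The hosted zone ID, record name, ALB DNS names, and ALB hosted zone IDs are placeholders (look up the real values with describe_load_balancers).

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values: your public hosted zone plus each Region's ALB details.
HOSTED_ZONE_ID = "Z0000000000EXAMPLE"
ALBS = [
    ("us-east-1", "my-alb-use1-123.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "my-alb-euw1-456.eu-west-1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": region,   # one record set per Region
            "Region": region,          # latency-based routing policy
            "AliasTarget": {
                "DNSName": dns_name,
                "HostedZoneId": alb_zone_id,
                "EvaluateTargetHealth": True,  # skip unhealthy Regions
            },
        },
    }
    for region, dns_name, alb_zone_id in ALBS
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency routing to regional ALBs", "Changes": changes},
)
```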

A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for MySQL DB instance. The company sized the RDS DB instance to meet the company's average daily workload. Once a month, the database performs slowly when the company runs queries for a report. The company wants to have the ability to run reports and maintain the performance of the daily workloads. Which solution will meet these requirements? A. Create a read replica of the database. Direct the queries to the read replica. B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new database. C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket. D. Resize the DB instance to accommodate the additional workload.

Answer: A (from the website). Create a read replica of the database. Direct the queries to the read replica.

A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes. Which solution will meet these requirements MOST cost-effectively? A. Create a managed node group that contains only Spot Instances. B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances. C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster. D. Create a managed node group that contains only On-Demand Instances.

Answer: A Explanation: Spot Instances are EC2 instances that are available at up to a 90% discount compared to On-Demand prices. Spot Instances are suitable for stateless, fault-tolerant, and flexible workloads that can tolerate interruptions. Spot Instances can be reclaimed by EC2 when the demand for On-Demand capacity increases, but they provide a two-minute warning before termination. EKS managed node groups automate the provisioning and lifecycle management of nodes for EKS clusters. Managed node groups can use Spot Instances to reduce costs and scale the cluster based on demand. Managed node groups also support features such as Capacity Rebalancing and Capacity Optimized allocation strategy to improve the availability and resilience of Spot Instances. This solution will meet the requirements most cost-effectively, as it leverages the lowest-priced EC2 capacity and does not require any manual intervention.
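As an illustration, a minimal boto3 sketch of creating a Spot-only managed node group for the development cluster; the cluster name, node role ARN, subnet IDs, and instance types are assumptions.

```python
import boto3

eks = boto3.client("eks")

# Hypothetical cluster, role, and subnet identifiers.
eks.create_nodegroup(
    clusterName="dev-cluster",
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",                    # EKS-managed Spot capacity
    instanceTypes=["m5.large", "m5a.large", "m4.large"],  # several types improve Spot availability
    scalingConfig={"minSize": 1, "maxSize": 5, "desiredSize": 2},
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",
)
```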

A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions architect needs to ensure that failures do not result in performance degradation or loss of data locally and within an AWS Region. The solution needs to provide high availability at the node level and at the Region level. Which solution will meet these requirements? A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes. B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on. C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group. D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.

Answer: A Explanation: This answer is correct because it provides high availability at the node level and at the Region level for the ElastiCache for Redis solution. A Multi-AZ Redis replication group consists of a primary cluster and up to five read replica clusters, each in a different Availability Zone. If the primary cluster fails, one of the read replicas is automatically promoted to be the new primary cluster. A Redis replication group with shards enables partitioning of the data across multiple nodes, which increases the scalability and performance of the solution. Each shard can have one or more replicas to provide redundancy and read scaling.
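A minimal boto3 sketch of option A, assuming hypothetical names and node types: a sharded (cluster mode enabled) Redis replication group with Multi-AZ, automatic failover, and replicas in each shard.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="orders-cache",
    ReplicationGroupDescription="Sharded Multi-AZ Redis replication group",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=3,              # number of shards
    ReplicasPerNodeGroup=2,       # replicas per shard, placed in other AZs
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```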

Q36 A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch. Which solution will meet these requirements? C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM user. D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.

Answer: A Explanation: This solution meets the requirements because it allows the monitoring account to query and visualize observability data across the accounts by using CloudWatch. CloudWatch cross-account observability is a feature that enables a central monitoring account to view and interact with observability data shared by other accounts. To enable cross-account observability, the monitoring account needs to configure the types of data to be shared (metrics, logs, and traces) and the source accounts to be linked. The source accounts can be specified by account IDs, organization IDs, or organization paths. To share the data with the monitoring account, the source accounts need to deploy an AWS CloudFormation template provided by the monitoring account. This template creates an observability link resource that represents the link between the source account and the monitoring account. The template also creates a sink resource that represents an attachment point in the monitoring account. The source accounts can share their observability data with the sink in the monitoring account. The monitoring account can then use the CloudWatch console, API, or CLI to search, analyze, and correlate the observability data across the accounts.
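A rough sketch of the sink/link setup described above, using the CloudWatch Observability Access Manager (oam) API in boto3; the organization ID and credential handling are assumptions, and in practice the link step is usually rolled out to source accounts with the CloudFormation template mentioned above.

```python
import boto3
import json

# In the monitoring account: create the sink and allow the organization to link to it.
oam_monitor = boto3.client("oam")
sink_arn = oam_monitor.create_sink(Name="org-observability-sink")["Arn"]
oam_monitor.put_sink_policy(
    SinkIdentifier=sink_arn,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }],
    }),
)

# In each source account (run with that account's credentials): link to the sink.
oam_source = boto3.client("oam")  # assumes source-account credentials here
oam_source.create_link(
    LabelTemplate="$AccountName",
    ResourceTypes=["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup", "AWS::XRay::Trace"],
    SinkIdentifier=sink_arn,
)
```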

A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware. Which networking solution meets these requirements? A. Run the EC2 instances in a spread placement group. B. Group the EC2 instances in separate accounts. C. Configure the EC2 instances with dedicated tenancy. D. Configure the EC2 instances with shared tenancy.

Answer: A Explanation: it allows the company to deploy an application that processes large quantities of data in parallel and prevent groups of nodes from sharing the same underlying hardware. By running the EC2 instances in a spread placement group, the company can launch a small number of instances across distinct underlying hardware to reduce correlated failures. A spread placement group ensures that each instance is isolated from each other at the rack level.
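A minimal boto3 sketch of option A, with a hypothetical AMI, subnet, and group name: create a spread placement group and launch the worker instances into it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the spread placement group, then launch instances into it.
ec2.create_placement_group(GroupName="parallel-workers", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0123456789abcdef0",
    Placement={"GroupName": "parallel-workers"},  # each instance lands on distinct underlying hardware
)
```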

A company sends AWS CloudTrail logs from multiple AWS accounts to an Amazon S3 bucket in a centralized account. The company must keep the CloudTrail logs. The company must also be able to query the CloudTrail logs at any time. Which solution will meet these requirements? A. Use the CloudTrail event history in the centralized account to create an Amazon Athena table. Query the CloudTrail logs from Athena. B. Configure an Amazon Neptune instance to manage the CloudTrail logs. Query the CloudTrail logs from Neptune. C. Configure CloudTrail to send the logs to an Amazon DynamoDB table. Create a dashboard in Amazon QuickSight to query the logs in the table. D. Use Amazon Athena to create an Athena notebook. Configure CloudTrail to send the logs to the notebook. Run queries from Athena.

Answer: A Explanation: it allows the company to keep the CloudTrail logs and query them at any time. By using the CloudTrail event history in the centralized account, the company can view, filter, and download recent API activity across multiple AWS accounts. By creating an Amazon Athena table from the CloudTrail event history, the company can use a serverless interactive query service that makes it easy to analyze data in S3 using standard SQL. By querying the CloudTrail logs from Athena, the company can gain insights into user activity and resource changes.

A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions: Which IAM principals can the solutions architect attach this policy to? (Select TWO.) A. Role B. Group C. Organization D. Amazon Elastic Container Service (Amazon ECS) resource E. Amazon EC2 resource

Answer: A,B Explanation: This JSON text is an identity-based policy that grants specific permissions. The IAM principals that the solutions architect can attach this policy to are Role and Group. This is because the policy is written in JSON and is an identity-based policy, which can be attached to IAM principals such as users, groups, and roles. Identity-based policies are permissions policies that you attach to IAM identities (users, groups, or roles) and explicitly state what that identity is allowed (or denied) to do. Identity-based policies are different from resource-based policies, which define the permissions around the specific resource. Resource-based policies are attached to a resource, such as an Amazon S3 bucket or an Amazon EC2 instance. Resource-based policies can also specify a principal, which is the entity that is allowed or denied access to the resource. Organization is not an IAM principal, but a feature of AWS Organizations that allows you to manage multiple AWS accounts centrally. Amazon ECS resource and Amazon EC2 resource are not IAM principals, but AWS resources that can have resource-based policies attached to them.

Q28 A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data. Which solutions will meet these requirements? (Choose two.) D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3. E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.

Answer: A,B Explanation: To ingest, transform, and query real-time streaming data from multiple sources, Amazon Kinesis and Amazon MSK are suitable solutions. Amazon Kinesis Data Streams can stream the data from various sources and integrate with other AWS services. Amazon Kinesis Data Analytics can transform the data using SQL or Apache Flink. Amazon Kinesis Data Firehose can write the data to Amazon S3 or other destinations. Amazon Athena can query the transformed data from Amazon S3 using standard SQL. Amazon MSK can stream the data using Apache Kafka, which is a popular open-source platform for streaming data. AWS Glue can transform the data using Apache Spark or Python scripts and write the data to Amazon S3 or other destinations. Amazon Athena can also query the transformed data from Amazon S3 using standard SQL.

Q46A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination. The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance. Which solutions will create the new DB instance? (Select TWO.) D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora. E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.

Answer: A,C Explanation: These answers are correct because they meet the requirements of creating a new DB instance from the most recent backup and using a MySQL-compatible edition of Amazon Aurora to host the DB instance. You can import the RDS snapshot directly into Aurora if the MySQL DB instance and the Aurora DB cluster are running the same version of MySQL. For example, you can restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.6, but you can't restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.7. This method is simple and requires the fewest number of steps. You can upload the database dump to Amazon S3 and then import the database dump into Aurora if the MySQL DB instance and the Aurora DB cluster are running different versions of MySQL. For example, you can import a MySQL version 5.6 database dump into Aurora MySQL version 5.7, but you can't restore a MySQL version 5.6 snapshot directly to Aurora MySQL version 5.7. This method is more flexible and allows you to migrate across different versions of MySQL.

Q53 A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized corporate directory service. Which combination of actions should a solutions architect recommend to meet these requirements? (Select TWO.) D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly. E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.

Answer: A,E Explanation: AWS Organizations is a service that helps users centrally manage and govern multiple AWS accounts. It allows users to create organizational units (OUs) to group accounts based on business needs or other criteria. It also allows users to define and attach service control policies (SCPs) to OUs or accounts to restrict the actions that can be performed by the accounts. By creating a new organization in AWS Organizations with all features turned on, the solution can consolidate and manage the new AWS accounts for different business units. AWS IAM Identity Center (formerly known as AWS Single Sign-On) is a service that provides single sign-on access for all of your AWS accounts and cloud applications. It connects with Microsoft Active Directory through AWS Directory Service to allow users in that directory to sign in to a personalized AWS access portal using their existing Active Directory user names and passwords. From the AWS access portal, users have access to all the AWS accounts and cloud applications that they have permissions for. By setting up IAM Identity Center in the organization and integrating it with the company's corporate directory service, the solution can authenticate access to these AWS accounts using a centralized corporate directory service. B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication. This solution will not meet the requirement of authenticating access to these AWS accounts by using a centralized corporate directory service, as Amazon Cognito is a service that provides user sign-up, sign-in, and access control for web and mobile applications.

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements? A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC. B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance. C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances. D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.

Answer: B Explanation: it allows the EC2 instances to read and write rapidly and concurrently to shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic, and highly available file system that can be mounted from multiple EC2 instances. Amazon EFS supports high levels of throughput and IOPS, and consistent low latencies. Amazon EFS also supports NFSv4 lock upgrading and downgrading, which enables high levels of concurrency.
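A minimal boto3 sketch of option B, assuming placeholder subnet and security group IDs: create the EFS file system, then one mount target per Availability Zone so instances in both AZs can mount it.

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system (encrypted at rest).
fs_id = efs.create_file_system(
    CreationToken="shared-app-data-token",
    PerformanceMode="generalPurpose",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "shared-app-data"}],
)["FileSystemId"]

# One mount target per AZ; the security group must allow NFS (TCP 2049) from the instances.
for subnet_id in ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each EC2 instance then mounts the file system, for example with the EFS mount helper:
#   sudo mount -t efs -o tls <fs_id>:/ /mnt/shared
```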

Q60 A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability. Which solution will meet these requirements? C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket. D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs.

Answer: A Explanation: By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that provides highly scalable and available image resizing capabilities. Here's how the solution would work: Set up an Amazon S3 bucket to store the original images uploaded by users. Configure an event trigger on the S3 bucket to invoke an AWS Lambda function whenever a new image is uploaded. The Lambda function can be designed to retrieve the uploaded image, perform the necessary resizing operations based on device requirements, and store the resized images back in the S3 bucket or a different bucket designated for resized images. Configure the Amazon S3 bucket to make the resized images publicly accessible for serving to users.
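A minimal Lambda handler sketch for the S3-triggered resize flow described above; the destination bucket name and target sizes are assumptions, and Pillow must be packaged with the function (for example as a Lambda layer).

```python
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
RESIZED_BUCKET = "my-resized-images-bucket"   # hypothetical destination bucket
TARGET_SIZES = [(1080, 1080), (480, 480)]     # per-device variants

def handler(event, context):
    # Triggered by s3:ObjectCreated:* notifications on the upload bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        for width, height in TARGET_SIZES:
            img = Image.open(io.BytesIO(original))
            img.thumbnail((width, height))            # resize while keeping the aspect ratio
            buf = io.BytesIO()
            img.save(buf, format=img.format or "JPEG")
            s3.put_object(
                Bucket=RESIZED_BUCKET,
                Key=f"{width}x{height}/{key}",
                Body=buf.getvalue(),
            )
```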

Q49 A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The company wants to complete the migration with minimal downtime. Which solution will migrate the database MOST cost-effectively? C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowball device to AWS to finish the migration and continue the ongoing replication. D. Order a 1 GB dedicated AWS Direct Connect connection to establish a connection with the data center. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.

Answer: A Explanation: This answer is correct because it meets the requirements of migrating a 20 TB MySQL database within 2 weeks with minimal downtime and cost-effectively. The AWS Snowball Edge Storage Optimized device has up to 80 TB of usable storage space, which is enough to fit the database. The AWS Database Migration Service (AWS DMS) can migrate data from MySQL to Amazon Aurora, Amazon RDS for MySQL, or MySQL on Amazon EC2 with minimal downtime by continuously replicating changes from the source to the target. The AWS Schema Conversion Tool (AWS SCT) can convert the source schema and code to a format compatible with the target database. By using these services together, the company can migrate the database to AWS with minimal downtime and cost. The Snowball Edge device can be shipped back to AWS to finish the migration and continue the ongoing replication until the database is fully migrated.

Q55 A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime. What should a solutions architect do to meet these requirements with the LEAST amount of downtime? C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.

Answer: A Explanation: This answer is correct because it meets the requirements of securely migrating the existing data to AWS and satisfying the new regulation. AWS DataSync is a service that makes it easy to move large amounts of data online between on-premises storage and Amazon S3. DataSync automatically encrypts data in transit and verifies data integrity during transfer. AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to Amazon S3. CloudTrail can log data events, which show the resource operations performed on or within a resource in your AWS account, such as S3 object-level API activity. By using CloudTrail to log data events, you can audit access at all levels of the stored data. References: https://docs.aws.amazon.com/datasync/latest/userguide/what-is-datasync.html https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html

A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the company owns. The company's research and development (R&D) business is separating from the company and will need its own organization. A solutions architect creates a separate new management account for this purpose. What should the solutions architect do next in the new management account? A. Have the R&D AWS account be part of both organizations during the transition. B. Invite the R&D AWS account to be part of the new organization after the R&D AWSaccount has left the prior organization. C. Create a new R&D AWS account in the new organization. Migrate resources from theprior R&D AWS account to the new R&D AWS account. D. Have the R&D AWS account join the new organization. Make the new managementaccount a member of the prior organization.

Answer: B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization. Explanation: It allows the solutions architect to create a separate organization for the research and development (R&D) business and move its AWS account to the new organization. By inviting the R&D AWS account to be part of the new organization after it has left the prior organization, the solutions architect can ensure that there is no overlap or conflict between the two organizations. The R&D AWS account can accept or decline the invitation to join the new organization. Once accepted, it will be subject to any policies and controls applied by the new organization.

A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database. Which solution will meet these requirements with the LEAST operational overhead? A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint. B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint. C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint. D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.

Answer: B Explanation: Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. RDS Proxy also reduces failover times for Aurora and RDS databases by up to 66% and enables IAM authentication and Secrets Manager integration for database access. RDS Proxy can be enabled for most applications with no code changes.
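A minimal boto3 sketch of creating an RDS Proxy and registering the existing DB instance behind it; the proxy name, secret ARN, role ARN, subnet IDs, and DB identifier are placeholders.

```python
import boto3

rds = boto3.client("rds")

# The proxy authenticates with database credentials stored in Secrets Manager
# and is placed in the database's VPC subnets.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    RequireTLS=True,
)

# Register the existing RDS instance with the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-mysql-db"],
)

# Applications then point their MySQL connection string at the proxy endpoint
# (returned by describe_db_proxies) instead of the DB instance endpoint.
```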

A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest possible latency from the application. Data from each user's shopping cart needs to be highly available. User session data must be available even if the user is disconnected and reconnects. What should a solutions architect do to ensure that the shopping cart data is preserved at all times? A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to the catalog in Aurora. B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. C. Configure Amazon OpenSearch Service to cache catalog data from DynamoDB and shopping cart data from the user's session. D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Configure automated snapshots.

Answer: B Explanation: To ensure that the shopping cart data is preserved at all times, a solutions architect should configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. This solution has the following benefits: It offers the lowest possible latency from the application, as ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications [1]. It scales with traffic volume, as ElastiCache for Redis supports horizontal scaling by adding more nodes or shards to the cluster, and vertical scaling by changing the node type [2]. It is highly available, as ElastiCache for Redis supports replication across multiple Availability Zones and automatic failover in case of a primary node failure [3]. It preserves user session data even if the user is disconnected and reconnects, as ElastiCache for Redis can store session data, such as user login information and shopping cart contents, in a persistent and durable manner using snapshots or AOF (append-only file) persistence [4]. References: [1] https://aws.amazon.com/elasticache/redis/ [2] https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Scaling.html [3] https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html [4] https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html
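A sketch of how an application might keep cart/session data in ElastiCache for Redis using the redis-py client; the cluster endpoint, key naming, and expiry are assumptions.

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(
    host="shopping-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,
)

def save_cart_item(user_id: str, sku: str, qty: int) -> None:
    key = f"cart:{user_id}"
    r.hset(key, sku, qty)             # cart survives disconnect/reconnect
    r.expire(key, 60 * 60 * 24 * 7)   # optional: expire abandoned carts after 7 days

def get_cart(user_id: str) -> dict:
    return {k.decode(): int(v) for k, v in r.hgetall(f"cart:{user_id}").items()}
```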

A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company's security team wants to protect the application and the database from SQL injection and other web-based attacks. Which solution will meet these requirements with the LEAST operational overhead? A. Use security groups and network ACLs to secure the database and application servers. B. Use AWS WAF to protect the application. Use RDS parameter groups to configure thesecurity settings. C. Use AWS Network Firewall to protect the application and the database. D. Use different database accounts in the application code for different functions. Avoidgranting excessive privileges to the database users.

Answer: B Explanation: AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF allows users to create rules that block, allow, or count web requests based on customizable web security rules. One of the types of rules that can be created is an SQL injection rule, which inspects web requests for malicious SQL code so that those requests can be blocked. By using AWS WAF to protect the application, the company can prevent SQL injection and other web-based attacks from reaching the application and the database. RDS parameter groups are collections of parameters that define how a database instance operates. Users can modify the parameters in a parameter group to change the behavior and performance of the database. By using RDS parameter groups to configure the security settings, the company can enforce best practices such as disabling remote root login, requiring SSL connections, and limiting the maximum number of connections. The other options are not correct because they do not effectively protect the application and the database from SQL injection and other web-based attacks. Using security groups and network ACLs to secure the database and application servers is not sufficient because they only filter traffic at the network layer, not at the application layer. Using AWS Network Firewall to protect the application and the database is not necessary because it is a stateful firewall service that provides network protection for VPCs, not for individual applications or databases.

A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users. What should a solutions architect recommend? A. Deploy Amazon Inspector and associate it with the ALB. B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule. C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic. D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.

Answer: B Explanation: This answer is correct because it meets the requirements of blocking the illegitimate incoming requests in a way that has a minimal impact on legitimate users. AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can associate AWS WAF with an ALB to protect the web application from malicious requests. You can configure a rate-limiting rule in AWS WAF to track the rate of requests for each originating IP address and block requests from an IP address that exceeds a certain limit within a five-minute period. This way, you can mitigate potential DDoS attacks and improve the performance of your website.
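A minimal boto3 sketch of option B, with placeholder names and ARNs: a regional web ACL containing one rate-based blocking rule, associated with the ALB. The request limit is an assumption.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-rate-limit",
    Scope="REGIONAL",                     # ALBs use the REGIONAL scope
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerceRateLimit",
    },
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Action": {"Block": {}},
        "Statement": {
            "RateBasedStatement": {
                "Limit": 2000,            # requests per 5-minute window per IP
                "AggregateKeyType": "IP",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)
```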

A company stores its data on premises. The amount of data is growing beyond the company's available capacity. The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company needs a solution that will automatically validate the integrity of the data after the transfer. Which solution will meet these requirements? A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data transfer to an S3 bucket. B. Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data transfer to an S3 bucket. C. Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online data transfer to an S3 bucket. D. Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator to perform the online data transfer to an S3 bucket.

Answer: B Explanation: it allows the company to migrate its data from the on-premises location to an Amazon S3 bucket and automatically validate the integrity of the data after the transfer. By deploying an AWS DataSync agent on premises, the company can use a fully managed data transfer service that makes it easy to move large amounts of data to and from AWS. By configuring the DataSync agent to perform the online data transfer to an S3 bucket, the company can take advantage of DataSync's features, such as encryption, compression, bandwidth throttling, and data validation. DataSync automatically verifies data integrity at both source and destination after each transfer task.
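A minimal boto3 sketch of the DataSync task described above; the source location ARN (created against the on-premises agent), bucket ARN, and role ARN are placeholders, and VerifyMode enables the post-transfer integrity validation.

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical on-premises (NFS/SMB) source location created against the agent.
source_arn = "arn:aws:datasync:us-east-1:111122223333:location/loc-source-example"

dest_arn = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::migration-target-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-role"},
)["LocationArn"]

task_arn = datasync.create_task(
    SourceLocationArn=source_arn,
    DestinationLocationArn=dest_arn,
    Name="onprem-to-s3",
    Options={"VerifyMode": "POINT_IN_TIME_CONSISTENT"},  # validate data integrity after transfer
)["TaskArn"]

datasync.start_task_execution(TaskArn=task_arn)
```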

A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending. The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending. Which solution will meet these requirements? A. Use an AWS Budgets template to create a zero spend budget. B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console. C. Create AWS Pricing Calculator estimates for the current running workload pricing details. D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.

Answer: B Explanation: it allows the company to monitor costs and notify responsible stakeholders in the event of unusual spending. By creating an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console, the company can use a machine learning service that automatically detects and alerts on anomalous spend. By configuring alert thresholds, notification preferences, and root cause analysis, the company can prevent unusual spending and identify its source.
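The same setup can also be done programmatically through the Cost Explorer API. A rough sketch follows; the subscriber address and alert threshold are assumptions, and newer API versions prefer a ThresholdExpression over the plain Threshold shown here.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API hosts the anomaly detection operations

# One monitor across AWS services.
monitor_arn = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)["MonitorArn"]

# Daily e-mail alerts to the responsible stakeholders.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "finance-alerts",
        "MonitorArnList": [monitor_arn],
        "Subscribers": [{"Address": "finops@example.com", "Type": "EMAIL"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,  # alert when the estimated anomaly impact exceeds ~$100
    }
)
```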

A company is building an ecommerce application and needs to store sensitive customer information. The company needs to give customers the ability to complete purchase transactions on the website. The company also needs to ensure that sensitive customer data is protected, even from database administrators. Which solution will meet these requirements? A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt the data. Use an IAM instance role to restrict access. B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side encryption to encrypt the data. C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to encrypt the data. Use S3 bucket policies to restrict access. D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use Windows file permissions to restrict access.

Answer: B Explanation: it allows the company to store sensitive customer information in a managed AWS service and give customers the ability to complete purchase transactions on the website. By using AWS Key Management Service (AWS KMS) client-side encryption, the company can encrypt the data before sending it to Amazon RDS for MySQL. This ensures that sensitive customer data is protected, even from database administrators, as only the application has access to the encryption keys. References: Using Encryption with Amazon RDS for MySQL; Encrypting Amazon RDS Resources.

Q23 A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements? C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway. D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.

Answer: B Explanation: To allow the MySQL database in the private subnets to access the internet without exposing it to the public, a NAT gateway is a suitable solution. A NAT gateway enables instances in a private subnet to connect to the internet or other AWS services, but prevents the internet from initiating a connection with those instances. A NAT gateway resides in the public subnets and can handle high throughput of traffic with low latency. A NAT gateway is also a managed service that does not require any operational overhead.
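A minimal boto3 sketch of the NAT gateway setup described above, with placeholder subnet and route table IDs: allocate an Elastic IP, create the NAT gateway in a public subnet, and route the private subnets' internet-bound traffic through it.

```python
import boto3

ec2 = boto3.client("ec2")

alloc_id = ec2.allocate_address(Domain="vpc")["AllocationId"]

nat_id = ec2.create_nat_gateway(
    SubnetId="subnet-0public0123456789a",      # a public subnet with an internet gateway route
    AllocationId=alloc_id,
)["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is usable before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

ec2.create_route(
    RouteTableId="rtb-0private0123456789a",    # the private subnets' route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```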

Q58 A solutions architect must design a reliable architecture for a company's application. The application consists of one Amazon RDS DB instance and two manually provisioned EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone. An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned about the overall reliability of its environment. What should the solutions architect do to maximize reliability of the application's infrastructure? C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances. D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.

Answer: B Explanation: This answer is correct because it meets the requirements of maximizing the reliability of the application's infrastructure. You can update the DB instance to be Multi-AZ, which means that Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance. It can also help protect your databases against DB instance failure and Availability Zone disruption. You can also enable deletion protection on the DB instance, which prevents the DB instance from being deleted by any user. You can place the EC2 instances behind an Application Load Balancer, which distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the availability and fault tolerance of your applications. You can run the EC2 instances in an EC2 Auto Scaling group across multiple Availability Zones, which ensures that you have the correct number of EC2 instances available to handle the load for your application. You can use scaling policies to adjust the number of instances in your Auto Scaling group in response to changing demand. References: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_DeleteInstance.html#USER_DeleteInstance.DeletionProtection https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
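A minimal boto3 sketch of the two DB-instance changes mentioned above (the instance identifier is a placeholder).

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="app-db",
    MultiAZ=True,               # provision a synchronous standby in another AZ
    DeletionProtection=True,    # block accidental DeleteDBInstance calls
    ApplyImmediately=True,
)
```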

A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly. What should a solutions architect do to improve the performance of the data tier? A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards. C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.

Answer: D Explanation: The solution that will improve the performance of the data tier is to deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. This solution will enable the game to store and retrieve the location data of the players in a fast and scalable way, as Redis is an in-memory data store that supports geospatial data types and commands. By using ElastiCache for Redis, the game can reduce the load on the RDS for PostgreSQL DB instance, which is not optimized for high-frequency updates and queries of location data. ElastiCache for Redis also supports replication, sharding, and auto scaling to handle the increasing user base of the game. The other solutions are not as effective as the first one because they either do not improve the performance, do not support geospatial data, or do not leverage caching. Taking a snapshot of the existing DB instance and restoring it with Multi-AZ enabled will not improve the performance of the data tier, as it only provides high availability and durability, but not scalability or low latency. Migrating from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards will not improve the performance of the data tier, as OpenSearch Service is mainly designed for full-text search and analytics, not for real-time location tracking. OpenSearch Service also does not support geospatial data types and commands natively, unlike Redis. Deploying Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance and modifying the game to use DAX will not improve the performance of the data tier, as DAX is only compatible with DynamoDB, not with RDS for PostgreSQL. DAX also does not support geospatial data types and commands.
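A sketch of the geospatial commands mentioned above using redis-py against a hypothetical ElastiCache for Redis endpoint (GEOSEARCH requires Redis 6.2 or later and a recent redis-py).

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="game-locations.xxxxxx.use1.cache.amazonaws.com", port=6379)

def update_location(player_id: str, lon: float, lat: float) -> None:
    # Rapid in-memory updates of each player's position.
    r.geoadd("player:locations", (lon, lat, player_id))

def players_near(lon: float, lat: float, radius_km: float = 5.0) -> list:
    # Retrieve nearby players, e.g. for rendering the map around a point.
    return r.geosearch("player:locations", longitude=lon, latitude=lat,
                       radius=radius_km, unit="km")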

A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company's AWS Region and from the company's on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location. Which solution will meet these requirements? A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location. B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location. C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location. D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.

Answer: B (the website says the answer is C). You can access Amazon S3 from your VPC using gateway VPC endpoints. After you create the gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to Amazon S3. There is no additional charge for using gateway endpoints. Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. For more information, see Types of VPC endpoints for Amazon S3 in the Amazon S3 User Guide. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html Explanation: A gateway endpoint is a gateway that is a target for a specified route in your route table, used for traffic destined to a supported AWS service. A gateway endpoint for Amazon S3 cannot be reached from the on-premises location over Direct Connect. Therefore, option A is incorrect. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported service. An interface endpoint can provide secure access to Amazon S3 from within the Region, but not from the on-premises location. Therefore, option C is incorrect. AWS Key Management Service (AWS KMS) is a service that allows you to create and manage encryption keys to protect your data. AWS KMS does not provide a way to access data on Amazon S3 without traversing the internet. Therefore, option D is incorrect. AWS Transit Gateway is a service that enables you to connect VPCs and on-premises networks through a central hub.

A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the environment variables. Which steps must the solutions architect take to implement the correct permissions? (Choose two.) A. Add AWS KMS permissions in the Lambda resource policy. B. Add AWS KMS permissions in the Lambda execution role. C. Add AWS KMS permissions in the Lambda function policy. D. Allow the Lambda execution role in the AWS KMS key policy. E. Allow the Lambda resource policy in the AWS KMS key policy.

Answer: B,D Explanation: B and D are the correct answers because they ensure that the Lambda execution role has the permissions to decrypt and use the environment variables, and that the AWS KMS key policy allows the Lambda execution role to use the key. The Lambda execution role is an IAM role that grants the Lambda function permission to access AWS resources, such as AWS KMS. The AWS KMS key policy is a resource-based policy that controls access to the key. By adding AWS KMS permissions in the Lambda execution role and allowing the Lambda execution role in the AWS KMS key policy, the solutions architect can implement the correct permissions for encrypting and decrypting environment variables.
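A minimal boto3 sketch of the two sides of the permission (answers B and D): the execution role gets kms:Decrypt on the key, and the key policy allows that role. All names and ARNs are placeholders.

```python
import json

import boto3

ROLE_NAME = "my-lambda-execution-role"
ROLE_ARN = f"arn:aws:iam::111122223333:role/{ROLE_NAME}"
KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Answer B: grant the execution role permission to decrypt with the key.
iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowDecryptEnvVars",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": KEY_ARN}],
    }),
)

# Answer D: allow the execution role in the key policy (the policy name must be "default").
kms = boto3.client("kms")
kms.put_key_policy(
    KeyId=KEY_ARN,
    PolicyName="default",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {   # keep root account access so the key stays manageable
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # allow the Lambda execution role to use the key
                "Effect": "Allow",
                "Principal": {"AWS": ROLE_ARN},
                "Action": "kms:Decrypt",
                "Resource": "*",
            },
        ],
    }),
)
```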

Q63A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company manually backs up the workloads to create an image as needed. In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances. Which solutions will meet these requirements with the LEAST administrative effort? (Select TWO.) E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.

Answer: B,D Explanation: Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the backup process. By configuring the policy to run twice daily and specifying the copy to the us-west-2 Region, the company can ensure regular backups are created and copied to the alternate region. Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup plan based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run twice daily, and the destination for the copy can be defined as the us-west-2 Region. Both options automate the backup process and include copying the backups to the us-west-2 Region, ensuring data resilience in the event of a disaster. These solutions minimize administrative effort by leveraging automated backup and copy mechanisms provided by AWS services.
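A minimal boto3 sketch of the AWS Backup part (option D/E): a tag-based backup plan that runs every 12 hours and copies each recovery point to a vault in us-west-2. Vault names, the role ARN, and the tag key are placeholders.

```python
import boto3

backup = boto3.client("backup")

plan_id = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-twice-daily",
        "Rules": [{
            "RuleName": "every-12-hours",
            "TargetBackupVaultName": "primary-vault-us-west-1",
            "ScheduleExpression": "cron(0 0/12 * * ? *)",   # twice daily
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault-us-west-2",
            }],
        }],
    }
)["BackupPlanId"]

# Select EC2 instances by tag so newly tagged instances are backed up automatically.
backup.create_backup_selection(
    BackupPlanId=plan_id,
    BackupSelection={
        "SelectionName": "tagged-windows-workloads",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup", "ConditionValue": "true"}],
    },
)
```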

A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a managed service for the training and predictions. Which combination of steps will meet these requirements? (Select TWO.) A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference. B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket. C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs. D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs. E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

Answer: B,E (another site lists D,E). Explanation: To predict the resources needed for manufacturing processes each month using historical values that are currently stored in an Amazon S3 bucket, a solutions architect should use Amazon SageMaker to train a model by using the historical data in the S3 bucket, and deploy an Amazon SageMaker model and create a SageMaker endpoint for inference. Amazon SageMaker is a fully managed service that provides an easy way to build, train, and deploy machine learning (ML) models. The solutions architect can use the built-in algorithms or frameworks provided by SageMaker, or bring their own custom code, to train a model using the historical data in the S3 bucket as input. The trained model can then be deployed to a SageMaker endpoint, which is a scalable and secure web service that can handle requests for predictions from the application. The solutions architect does not need to have any ML experience or manage any infrastructure to use SageMaker.

A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache Tomcat and must be highly available. What should the solutions architect do to meet these requirements? A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda functions. B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy. C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from the application. D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on the server. Create an AMI. Use the AMI to create a launch template with an Auto Scaling group.

Answer: B Explanation: AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale applications. It supports a variety of platforms, including Java and Apache Tomcat. By using Elastic Beanstalk, the solutions architect can upload the Java application and configure the environment to run Apache Tomcat.

A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that manage customers and place orders. The company needs to route incoming requests to the appropriate microservices. Which solution will meet this requirement MOST cost-effectively? A. Use the AWS Load Balancer Controller to provision a Network Load Balancer. B. Use the AWS Load Balancer Controller to provision an Application Load Balancer. C. Use an AWS Lambda function to connect the requests to Amazon EKS. D. Use Amazon API Gateway to connect the requests to Amazon EKS.

Answer: B Explanation: An Application Load Balancer is a type of Elastic Load Balancer that operates at the application layer (layer 7) of the OSI model. It can distribute incoming traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can also route requests based on the content of the request, such as the host name, path, or query parameters. The AWS Load Balancer Controller is a controller that helps you manage Elastic Load Balancers for your Kubernetes cluster. It can provision Application Load Balancers or Network Load Balancers when you create Kubernetes Ingress or Service resources. By using the AWS Load Balancer Controller to provision an Application Load Balancer for your Amazon EKS cluster, you can achieve the following benefits: You can route incoming requests to the appropriate microservices based on the rules you define in your Ingress resource. For example, you can route requests with different host names or paths to different microservices that handle customers and orders. You can improve the performance and availability of your container applications by distributing the load across multiple targets and enabling health checks and automatic scaling. You can reduce the cost and complexity of managing your load balancers by using a single controller that integrates with Amazon EKS and Kubernetes. You do not need to manually create or configure load balancers or update them when your cluster changes.
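A hedged sketch of the kind of Ingress the AWS Load Balancer Controller turns into an ALB with path-based routing, written here as a Python dict that could be dumped to YAML and applied with kubectl. The service names, paths, and ports are hypothetical; the annotations are the controller's standard ones.

# Ingress manifest expressed as a Python dict; dump with yaml.safe_dump(ingress) and apply,
# or create it with the Kubernetes Python client.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "storefront",
        "annotations": {
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
    },
    "spec": {
        "ingressClassName": "alb",
        "rules": [{
            "http": {
                "paths": [
                    # /customers/* goes to the customers microservice (hypothetical service name)
                    {"path": "/customers", "pathType": "Prefix",
                     "backend": {"service": {"name": "customers-svc", "port": {"number": 80}}}},
                    # /orders/* goes to the orders microservice (hypothetical service name)
                    {"path": "/orders", "pathType": "Prefix",
                     "backend": {"service": {"name": "orders-svc", "port": {"number": 80}}}},
                ]
            }
        }],
    },
}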

Q51 A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game's developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores. What should a solutions architect do to meet these requirements? C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application. D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.

Answer: B Explanation: This answer is correct because it meets the requirements of displaying a top-10 scoreboard in near-real time and offering the ability to stop and restore the game while preserving the current scores. Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. You can use Amazon ElastiCache for Redis to set up an ElastiCache for Redis cluster to compute and cache the scores for the web application to display. You can use Redis data structures such as sorted sets and hashes to store and rank the scores of the players, and use Redis commands such as ZRANGE and ZADD to retrieve and update the scores efficiently. You can also use Redis persistence features such as snapshots and append-only files (AOF) to enable point-in-time recovery of your data, which can help you stop and restore the game while preserving the current scores.
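A minimal sketch with the redis-py client showing the sorted-set pattern described above. The cluster endpoint and key names are placeholders; persistence (snapshots/AOF) is configured on the ElastiCache cluster itself, not in this code.

import redis

# Placeholder ElastiCache for Redis endpoint
r = redis.Redis(host="my-redis-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)

# Record or update a player's score; ZADD keeps the set ordered by score
r.zadd("scoreboard", {"player_42": 1730})

# Top-10 scoreboard in near-real time, highest scores first
top10 = r.zrevrange("scoreboard", 0, 9, withscores=True)
for rank, (player, score) in enumerate(top10, start=1):
    print(rank, player.decode(), int(score))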

A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its own AWS account to manage the cloud network. What is the MOST operationally efficient solution to connect the VPCs? A. Set up VPC peering connections between each VPC. Update each associated subnet's route table. B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet. C. Create an AWS Transit Gateway in the networking team's AWS account. Configure static routes from each VPC. D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team's AWS account to connect to each VPC.

Answer: C Explanation: AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-premises networks, and remote networks. It simplifies network connectivity by providing a single entry point and reducing the number of connections required. In this scenario, deploying an AWS Transit Gateway in the networking team's AWS account allows for efficient management and control over the network connectivity across multiple VPCs.
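A hedged boto3 sketch of the transit gateway pattern: create the transit gateway in the networking account, attach a VPC, and add a static route pointing at it. All IDs and CIDR ranges are placeholders, and in practice the transit gateway would be shared with the other accounts through AWS RAM.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="central-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC (repeat per VPC; cross-account attachments require RAM sharing)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",           # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
)

# Static route in the spoke VPC's route table: send traffic for the other VPCs to the TGW
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # placeholder
    DestinationCidrBlock="10.0.0.0/8",       # placeholder summary of the other VPCs
    TransitGatewayId=tgw_id,
)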

A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to patch the EC2 instances without disrupting the running applications. Which solution will meet these requirements? A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instances and the existing IAM role. B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances. C. Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances. D. Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.

Answer: C Explanation: The most suitable solution for the company's requirements is to enable Default Host Configuration Management in Systems Manager to manage the EC2 instances. This solution will allow the company to patch the EC2 instances without disrupting the running applications and without manually creating or modifying IAM roles or users. Default Host Configuration Management is a feature of AWS Systems Manager that enables Systems Manager to manage EC2 instances automatically as managed instances. A managed instance is an EC2 instance that is configured for use with Systems Manager. The benefits of managing instances with Systems Manager include the following: Connect to EC2 instances securely using Session Manager. Perform automated patch scans using Patch Manager. View detailed information about instances using Systems Manager Inventory. Track and manage instances using Fleet Manager. Keep SSM Agent up to date automatically. Default Host Configuration Management makes it possible to manage EC2 instances without having to manually create an IAM instance profile. Instead, Default Host Configuration Management creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and account where it is activated. If the permissions provided are not sufficient for the use case, the default IAM role can be modified or replaced with a custom role. The other options are not correct because they either have more operational overhead or do not meet the requirements. Creating a new IAM role, attaching the AmazonSSMManagedInstanceCore policy to the new IAM role, and attaching the new IAM role and the existing IAM role to the EC2 instances is not necessary and adds operational overhead compared with enabling Default Host Configuration Management.
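A hedged boto3 sketch of turning the feature on for the current Region. The service setting ID and the default role name are taken from the Default Host Configuration Management documentation and should be verified; some API versions expect the full servicesetting ARN rather than the short path shown here.

import boto3

ssm = boto3.client("ssm")

# Setting ID and role name are assumptions based on the DHCM docs; verify before use
ssm.update_service_setting(
    SettingId="/ssm/managed-instance/default-ec2-instance-management-role",
    SettingValue="service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
)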

Q10 CO offers serv that is growing rapidly. B/C of the growth CO order processing sys is exper scaling prob during peak traffic hours. The current arch includ following • A grp of EC2 that run in EC2 ASG to collect orders from the app • Another grp of EC2 that run in EC2 ASG to fulfill orders Order collection process occurs quickly, but order fulfill process can take longer. Data must not be lost b/c of scaling event SA must ensure order collection process & order fulfill process can both scale properly during peak traffic hours. The sol must opt util of CO AWS resources Which sol C. Provision 2 SQS queues: 1 for order collection & another for order fulfill. Config EC2 to poll their respective queue. Scale ASG based on notif that the queues send D. Provision 2 SQS queues: 1 for order collection & another for order fulfill. Config EC2 to poll their respective queue. Create metric based on backlog per inst calc. Scale ASG based on metric

Answer: D Explanation: The number of instances in your Auto Scaling group can be driven by how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
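A hedged boto3 sketch of the backlog-per-instance calculation: divide the visible messages in the fulfillment queue by the number of in-service instances and publish the result as a custom CloudWatch metric that a target tracking policy can scale on. The queue URL, Auto Scaling group name, and metric namespace are placeholders.

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/order-fulfillment"  # placeholder
asg_name = "order-fulfillment-asg"                                                # placeholder

# Approximate depth of the fulfillment queue
backlog = int(
    sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )["Attributes"]["ApproximateNumberOfMessages"]
)

# Count instances currently in service in the fulfillment Auto Scaling group
asg = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
in_service = sum(
    1
    for i in asg["AutoScalingGroups"][0]["Instances"]
    if i["LifecycleState"] == "InService"
)

# Publish the backlog-per-instance value for a target tracking scaling policy to act on
cloudwatch.put_metric_data(
    Namespace="OrderProcessing",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Value": backlog / max(in_service, 1),
    }],
)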

A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application performance and decrease latency for the online game in preparation for user growth. Which solution will meet these requirements? A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control: max-age parameter. B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing. C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports. D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.

Answer: C Explanation: This answer is correct because it improves the application performance and decreases latency for the online game by using AWS Global Accelerator. AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your public applications. Global Accelerator provides two global static public IPs that act as a fixed entry point to your application endpoints, such as NLBs, in different AWS Regions. Global Accelerator uses the AWS global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure. Global Accelerator also terminates TCP and UDP traffic at the edge locations, which reduces the number of hops and improves the network performance. By adding AWS Global Accelerator in front of the NLBs, you can achieve up to 60% improvement in latency for your online game.
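A hedged boto3 sketch of the Global Accelerator setup: one accelerator with TCP and UDP listeners on the game's ports, and an endpoint group pointing at an existing NLB (repeated per Region in practice). The port range and NLB ARN are placeholders.

import boto3

# The Global Accelerator API is served from us-west-2 regardless of where endpoints live
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
accel_arn = accel["Accelerator"]["AcceleratorArn"]

for protocol in ("TCP", "UDP"):
    listener = ga.create_listener(
        AcceleratorArn=accel_arn,
        Protocol=protocol,
        PortRanges=[{"FromPort": 7000, "ToPort": 7100}],  # placeholder game ports
    )
    # One endpoint group per Region, each pointing at that Region's NLB
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/0123456789abcdef",  # placeholder NLB ARN
            "Weight": 128,
        }],
    )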

A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month. What is the MOST cost-effective solution to connect these VPCs? A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.

Answer: C Explanation: To connect two VPCs in the same Region within the same AWS account, VPC peering is the most cost-effective solution. VPC peering allows direct network traffic between the VPCs without requiring a gateway, VPN connection, or AWS Transit Gateway. VPC peering also does not incur any additional charges for data transfer between the VPCs.
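A hedged boto3 sketch of the peering setup: create and accept the peering connection, then add a route in each VPC's route table for the other VPC's CIDR. VPC IDs, route table IDs, and CIDRs are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111", PeerVpcId="vpc-bbbb2222"  # placeholder VPC IDs
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Same-account peering still has to be accepted
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic for the other VPC's CIDR through the peering connection
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)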

A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand. Which migration solution will meet these requirements? A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling. B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster. C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling. D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.

Answer: C Explanation: To migrate a MySQL database to AWS with compatibility and scalability, Amazon Aurora is a suitable option. Aurora is compatible with MySQL and can scale automatically with Aurora Auto Scaling. AWS Database Migration Service (AWS DMS) can be used to migrate the database from on-premises to Aurora with minimal downtime.
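A hedged boto3 sketch of the Aurora Auto Scaling piece, which is configured through Application Auto Scaling on the replica count. The cluster identifier, capacity limits, and CPU target are placeholders; the AWS DMS migration itself is set up separately.

import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-mysql-cluster",       # placeholder cluster identifier
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Target tracking policy: add or remove Aurora Replicas to keep reader CPU near 60%
aas.put_scaling_policy(
    PolicyName="aurora-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-mysql-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "RDSReaderAverageCPUUtilization"},
    },
)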

A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications. Which action should the solutions architect take? A. Configure a CloudFront signed URL. B. Configure a CloudFront signed cookie. C. Configure a CloudFront field-level encryption profile. D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.

Answer: C Explanation: it allows the company to protect sensitive information submitted by users throughout the entire application stack and restrict access to certain applications. By configuring a CloudFront field-level encryption profile, the company can encrypt specific fields of user data at the edge locations before sending it to the origin servers. By using public-private key pairs, the company can ensure that only authorized applications can decrypt and access the sensitive information.

A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users. Which solution will meet these requirements MOST cost-effectively? A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2. B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2. C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website. D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

Answer: C (another source gives D). Explanation: It allows the company to store and deliver images to users in a highly available and cost-effective way. By storing images in Amazon S3 Standard, the company can use a durable, scalable, and secure object storage service that offers high availability and performance. By using S3 Standard to directly deliver images by using a static website, the company can avoid running web servers and reduce operational overhead. S3 Standard also offers low storage pricing and free data transfer within AWS Regions.
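A hedged boto3 sketch of the S3 static website setup: enable website hosting on the image bucket and allow public reads through a bucket policy. The bucket name is a placeholder, and Block Public Access would also need to be adjusted for this bucket.

import json
import boto3

s3 = boto3.client("s3")
bucket = "historical-event-images-example"  # placeholder bucket name

s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Allow public reads so the website endpoint can serve the images directly
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadForWebsite",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)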

Q16 A company runs a website that uses a content management system on EC2. The CMS runs on a single EC2 and uses an Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an EBS vol that is mounted inside the EC2 instance. Which combination of actions should SA take to improve the perf and resilience of the website? (Select 2) D. Create AMI from the existing EC2. Use the AMI to provision new instances behind an Application Load Balancer as part of an ASG. Configure the ASG to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website. E. Create AMI from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.

Answer: C,E Explanation: Option C proposes moving the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS provides a scalable and fully managed file storage solution that can be accessed concurrently from multiple EC2 instances. This ensures that the website images can be accessed efficiently and consistently by all instances, improving performance. In Option E, the Auto Scaling group maintains a minimum of two instances, ensuring resilience by automatically replacing any unhealthy instances. Additionally, configuring an Amazon CloudFront distribution for the website further improves performance by caching content at edge locations closer to the end users, reducing latency and improving content delivery. Hence, combining these actions, the website's performance is improved through efficient image storage and content delivery.
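A hedged boto3 sketch of the shared-storage piece of option C: one EFS file system with a mount target per Availability Zone, so every instance the Auto Scaling group launches can mount the same images directory. Subnet and security group IDs are placeholders; the instances still mount the file system at boot (for example from user data or /etc/fstab).

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(CreationToken="cms-images", PerformanceMode="generalPurpose")
fs_id = fs["FileSystemId"]

# One mount target per AZ so instances in either AZ can reach the file system
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:  # placeholder subnets in two AZs
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder SG allowing NFS (TCP 2049)
    )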

A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance. What should a solutions architect do to accomplish this? A. Use Amazon S3 with Transfer Acceleration to host the application. B. Use Amazon S3 with Cache-Control headers to host the application. C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application. D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.

Answer: C (another source says A). B is only for caching, so it would not help upload speed for global users; the argument for A is that S3 Transfer Acceleration is designed for uploading and downloading unique objects when users are far from the bucket's Region. Explanation: This answer is correct because it meets the requirements of hosting a scalable web application that can handle large data transfers from different geographic regions. Amazon EC2 provides scalable compute capacity for hosting web applications. Auto Scaling can automatically adjust the number of EC2 instances based on the demand and traffic patterns. Amazon CloudFront is a content delivery network (CDN) that can cache static and dynamic content at edge locations closer to the users, reducing latency and improving performance. CloudFront can also use S3 Transfer Acceleration to speed up the transfers between S3 buckets and CloudFront edge locations.
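For reference, a hedged boto3 sketch of the Transfer Acceleration approach argued for above: enable acceleration on the bucket and upload through the accelerate endpoint. The bucket and file names are placeholders.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
bucket = "global-upload-bucket-example"  # placeholder bucket name

s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client configured to use the s3-accelerate endpoint for faster long-distance transfers
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("large-dataset.bin", bucket, "uploads/large-dataset.bin")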

Q5 A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements? C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC. D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway.

Answer: D Explanation: This solution meets the requirements because it allows the company to use a single Direct Connect connection to connect to multiple VPCs in different Regions using a Direct Connect gateway. A Direct Connect gateway is a globally available resource that enables you to connect your on-premises network to VPCs in any AWS Region, except the AWS China Regions. You can associate a Direct Connect gateway with a transit gateway or a virtual private gateway in each Region. By routing traffic from the virtual private gateways of the VPCs to the Direct Connect gateway, you can enable inter-Region and on-premises connectivity for your VPCs. This solution is scalable because you can add more VPCs in different Regions to the Direct Connect gateway without creating additional connections. This solution also reduces operational overhead because you do not need to manage multiple VPN appliances, VPN connections, or VPC peering connections.
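A hedged boto3 sketch of the Direct Connect gateway pattern: create the gateway and associate each Region's virtual private gateway with it. The gateway IDs and ASN are placeholders, and re-pointing the existing private virtual interface at the gateway is a separate step.

import boto3

dx = boto3.client("directconnect")

gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="global-dx-gateway",
    amazonSideAsn=64512,  # placeholder private ASN
)
gw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the virtual private gateway attached to each Region's VPC with the DX gateway
for vgw_id in ["vgw-useast1-placeholder", "vgw-euwest2-placeholder"]:
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId=gw_id,
        virtualGatewayId=vgw_id,
    )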

Q38 A CO's devs want a secure way to gain SSH access on CO EC2 that run the latest version of Linux. Devs work remotely and in the corp office. CO wants to use AWS serv as a part of the solution. EC2 are hosted in a VPC private subnet and access the internet through a NAT gateway that is deployed in a public subnet. What should a solutions architect do to meet these requirements MOST cost-effectively? C. Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the bastion host to only allow connections and SSH authentication from the developers' corporate and remote networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances. D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.

Answer: D Explanation: AWS Systems Manager Session Manager is a service that enables you to securely connect to your EC2 instances without using SSH keys or bastion hosts. You can use Session Manager to access your instances through the AWS Management Console, the AWS CLI, or the AWS SDKs. Session Manager uses IAM policies and roles to control who can access which instances. By attaching the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances, you grant the Session Manager service the necessary permissions to perform actions on your instances. You also need to attach another IAM policy to the developers' IAM users or roles that allows them to start sessions to the instances. Session Manager uses the AWS Systems Manager Agent (SSM Agent) that is installed by default on Amazon Linux 2 and other supported Linux distributions. Session Manager also encrypts all session data between your client and your instances, and streams session logs to Amazon S3, Amazon CloudWatch Logs, or both for auditing purposes. This solution is the most cost-effective, as it does not require any additional resources or services, such as bastion hosts, VPN connections, or NAT gateways. It also simplifies the security and management of SSH access, as it eliminates the need for SSH keys, port opening, or firewall rules.
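A hedged boto3 sketch of the IAM and session pieces: attach the AmazonSSMManagedInstanceCore managed policy to the instances' existing role, then open a Session Manager session instead of SSH. The role name and instance ID are placeholders, and developers also need IAM permissions such as ssm:StartSession.

import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="app-ec2-role",  # placeholder: the role already attached to the instances
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Developers (with the right IAM permissions) can then open a session without SSH keys
ssm = boto3.client("ssm")
session = ssm.start_session(Target="i-0123456789abcdef0")  # placeholder instance ID
print(session["SessionId"])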

A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up. Which solution will meet these requirements MOST cost-effectively? A. Configure Lambda provisioned concurrency. B. Increase the timeout of the Lambda functions. C. Increase the memory of the Lambda functions. D. Configure Lambda SnapStart.

Answer: D Explanation: To reduce startup latency for Lambda functions that run on Java 11, Lambda SnapStart is a suitable solution. Lambda SnapStart is a feature that enables faster cold starts and lower outlier latencies for Java 11 functions. Lambda SnapStart uses a preinitialized Java Virtual Machine (JVM) to run the functions, which reduces the initialization time and memory footprint. Lambda SnapStart does not incur any additional charges.
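A hedged boto3 sketch of enabling SnapStart on a Java 11 function. SnapStart applies to published versions, so a new version is published after the configuration change; the function name is a placeholder.

import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="orders-java11-handler",       # placeholder function name
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# SnapStart snapshots are taken when a version is published; invoke the version (or an
# alias pointing at it), not $LATEST, to benefit from restored snapshots.
lam.publish_version(FunctionName="orders-java11-handler")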

A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than 10%. A solutions architect must recommend a solution that will reduce the cost without compromising security. Which solution will meet these requirements? A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account. B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console. C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account. D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.

Answer: D Explanation: The company needs a cheaper connection (200 Mbps), but B is incorrect because dedicated connections can only be ordered at port speeds of 1, 10, or 100 Gbps. For more flexibility you can go with a hosted connection from a Direct Connect Partner, where you can order port speeds between 50 Mbps and 10 Gbps. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-directconnect.html

Q67 A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2. Which network design will meet these requirements? C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses. D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the application servers in eu-west-1.

Answer: C

A security audit reveals that Amazon EC2 instances are not being patched regularly. SA needs to provide a solution that will run regular security scans across a large fleet of EC2. The sol should also patch the EC2s on a regular schedule and provide a report of each inst's patch status. Which sol A. Set up Macie to scan the EC2 for software vul. Set up a cron job on each EC2 to patch the inst on a regular schedule B. Turn on GuardDuty in the acct. Configure GuardDuty to scan the EC2 instances for software vul. Set up AWS Systems Manager Session Manager to patch the EC2 instances on a regular schedule C. Set up Detective to scan the EC2 instances for software vul. Set up EventBridge scheduled rule to patch the EC2 instances on a regular schedule D. Turn on Inspector in the acct. Configure Inspector to scan the EC2 instances for software vul. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule

D. Turn on Inspector in the acct. Configure Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule. Explanation: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. Amazon Inspector can scan the EC2 instances for software vulnerabilities and provide a report of each instance's patch status. AWS Systems Manager Patch Manager is a capability of AWS Systems Manager that automates the process of patching managed nodes with both security-related updates and other types of updates. Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, in addition to optional lists of approved and rejected patches. Patch Manager can patch fleets of Amazon EC2 instances, edge devices, on-premises servers, and virtual machines (VMs) by operating system type. Patch Manager can patch the EC2 instances on a regular schedule and provide a report of each instance's patch status. Therefore, the combination of Amazon Inspector and AWS Systems Manager Patch Manager will meet the requirements of the question.
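A hedged boto3 sketch of the two halves of option D: activate Amazon Inspector EC2 scanning for the account, and schedule AWS-RunPatchBaseline through a Systems Manager maintenance window. The schedule, patch-group tag, and concurrency values are placeholders.

import boto3

inspector = boto3.client("inspector2")
inspector.enable(resourceTypes=["EC2"])  # continuous vulnerability scanning of EC2 instances

ssm = boto3.client("ssm")

window = ssm.create_maintenance_window(
    Name="weekly-patching",
    Schedule="cron(0 3 ? * SUN *)",  # placeholder: Sundays at 03:00 UTC
    Duration=4,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)

# Target the fleet by a patch-group tag (placeholder tag value)
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
)

# Run Patch Manager's AWS-RunPatchBaseline document against the registered targets
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskInvocationParameters={"RunCommand": {"Parameters": {"Operation": ["Install"]}}},
    MaxConcurrency="10%",
    MaxErrors="5%",
)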

Q16 A company runs a website that uses a content management system on EC2. The CMS runs on a single EC2 and uses an Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an EBS vol that is mounted inside the EC2 instance. Which combination of actions should SA take to improve the perf and resilience of the website? (Select 2) A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance. B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances. C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.

answer on next one

Q23 A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements? A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance. B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.

answer on next one

Q5 A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements? A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2. B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.

answer on next one

Q55 A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime. What should a solutions architect do to meet these requirements with the LEAST amount of downtime? A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be launched when needed. Configure DNS failover to point to the new disaster recovery Region's load balancer.

answer on next one

Q36 A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch. Which solution will meet these requirements? A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account. B. Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the Organizations root organizational unit (OU).

answer on other one

Q46 A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination. The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen a MySQL-compatible edition of Amazon Aurora to host the DB instance. Which solutions will create the new DB instance? (Select TWO.) A. Import the RDS snapshot directly into Aurora. B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora. C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.

answer on other one

Q49 A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The company wants to complete the migration with minimal downtime. Which solution will migrate the database MOST cost-effectively? A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration and continue the ongoing replication. B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing replication.

answer on other one

Q51 A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game's developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores. What should a solutions architect do to meet these requirements? A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display. B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.

answer on other one

Q58 SA to design a reliable arch for its app. App consists of 1 RDS DB & 2 manually provisioned EC2s that run web servers. EC2 are located in a single AZ. An employee recently deleted the DB inst, and the app was unavailable for 24 hours as a result. Co is concerned with the overall reliability of its env. What should SA do to max reliability of the app's infrastructure? A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection. B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.

answer on other one

Q60 A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability. What should a solutions architect do to meet these requirements? A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon S3 bucket. B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database.

next

Q63 Co has migrated multiple Microsoft Windows Server workloads to EC2 instances that run in the us-west-1 Region. Co manually backs up the workloads to create an image as needed. In the event of a natural disaster in the us-west-1 Region, Co wants to recover workloads quickly in the us-west-2 Region. Co wants no more than 24 hours of data loss on the EC2 instances. Co also wants to automate any backups of the EC2. Which sol requires the LEAST administrative effort? (Select TWO.) C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2. D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.

next

Q67 A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2. Which network design will meet these requirements? A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1 application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group. B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.

next

Q63 Co has migrated multiple Microsoft Windows Server workloads to EC2 instances that run in the us-west-1 Region. Co manually backs up the workloads to create an image as needed. In the event of a natural disaster in the us-west-1 Region, Co wants to recover workloads quickly in the us-west-2 Region. Co wants no more than 24 hours of data loss on the EC2 instances. Co also wants to automate any backups of the EC2. Which sol requires the LEAST administrative effort? (Select TWO.) A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Copy the image on demand. B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.

next 2

