AWS Certified Solutions Architect Associate #5

As a Solutions Architect, you are responsible for all operations in the us-west-1 region, which has a complex infrastructure consisting of several Lambda functions, API Gateways and DynamoDB tables. As part of the disaster recovery strategy, you would like to be in a position to quickly re-create your entire infrastructure in another region, if needed. Which technology choice is apt for this requirement? a.) AWS CloudFormation b.) AWS Elastic Beanstalk c.) AWS Trusted Advisor d.) AWS OpsWorks

a.) AWS CloudFormation AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in the cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources. AWS CloudFormation provisions your application resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, orchestrating them in the most efficient way, and rolling back changes automatically if errors are detected.
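As a minimal boto3 sketch of the idea (the stack name, region, and template resource are placeholders, not from the question), the same template that describes the us-west-1 infrastructure can simply be deployed into the disaster-recovery region:

    import json
    import boto3

    # Hypothetical minimal template with one DynamoDB table; a real DR template
    # would declare the Lambda functions, API Gateway APIs, and tables of the stack.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppTable": {
                "Type": "AWS::DynamoDB::Table",
                "Properties": {
                    "BillingMode": "PAY_PER_REQUEST",
                    "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
                    "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
                },
            }
        },
    }

    # Re-create the infrastructure in another region by deploying the same template there.
    cfn = boto3.client("cloudformation", region_name="us-east-1")
    cfn.create_stack(StackName="dr-stack", TemplateBody=json.dumps(template))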

A developer in your company has set up a classic 2 tier architecture consisting of an Application Load Balancer and an Auto Scaling group (ASG) managing a fleet of EC2 instances. The ALB is deployed in a subnet of size 10.0.1.0/24 and the ASG is deployed in a subnet of size 10.0.4.0/22. As a solutions architect, you would like to adhere to the security pillar of the well-architected framework. How do you configure the security group of the EC2 instances to only allow traffic coming from the ALB? a.) Add a rule to authorize the security group of the ALB b.) Add a rule to authorize the security group of the ASG c.) Add a rule to authorize the CIDR 10.0.4.0/22 d.) Add a rule to authorize the CIDR 10.0.1.0/24

a.) Add a rule to authorize the security group of the ALB
An Auto Scaling group (ASG) contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, AWS uses the default security group. You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When deciding whether to allow traffic to reach an instance, all the rules from all the security groups that are associated with the instance are evaluated. The following are the characteristics of security group rules:
1. By default, security groups allow all outbound traffic.
2. Security group rules are always permissive; you can't create rules that deny access.
3. Security groups are stateful.
An Application Load Balancer (ALB) operates at the request level (layer 7), routing traffic to targets - EC2 instances, containers, IP addresses and Lambda functions - based on the content of the request. Ideal for advanced load balancing of HTTP and HTTPS traffic, Application Load Balancer provides advanced request routing targeted at delivery of modern application architectures, including microservices and container-based applications. Because a security group rule can reference another security group, adding an inbound rule on the instances' security group that references the ALB's security group ensures only traffic coming through the ALB reaches the EC2 instances.
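A short boto3 sketch of such a rule, assuming hypothetical security group IDs for the instances and the ALB:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs: sg-0ec2instances protects the EC2 fleet, sg-0albfrontend is attached to the ALB.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0ec2instances",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # Reference the ALB's security group instead of a CIDR range.
            "UserIdGroupPairs": [{"GroupId": "sg-0albfrontend"}],
        }],
    )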

A ride-sharing company wants to improve the ride-tracking system that stores GPS coordinates for all rides. The engineering team at the company is looking for a NoSQL database that has single-digit millisecond latency, can scale horizontally, and is serverless, so that they can perform high-frequency lookups reliably. As a Solutions Architect, which database do you recommend for their requirements? a.) Amazon DynamoDB b.) Amazon ElastiCache c.) Amazon Relational Database Service (Amazon RDS) d.) Amazon Neptune

a.) Amazon DynamoDB Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master, durable NoSQL database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second. DynamoDB is serverless, has single-digit millisecond latency and scales horizontally. This is the correct choice for the given requirements.
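A minimal boto3 sketch of the access pattern described, assuming a hypothetical table keyed by ride_id and timestamp (names and values are illustrative only):

    import boto3
    from boto3.dynamodb.conditions import Key

    # Hypothetical table storing GPS points for each ride.
    table = boto3.resource("dynamodb").Table("ride-coordinates")

    # Write one GPS sample.
    table.put_item(Item={"ride_id": "ride-123", "ts": 1700000000, "lat": "40.71", "lon": "-74.00"})

    # High-frequency lookup of all points for one ride.
    points = table.query(KeyConditionExpression=Key("ride_id").eq("ride-123"))["Items"]
    print(len(points), "points retrieved")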

The engineering team at an e-commerce company has been tasked with migrating to a serverless architecture. The team wants to focus on the key points of consideration when using Lambda as a backbone for this architecture. As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three) a.) By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources b.) Serverless architecture and containers complement each other but you cannot package and deploy Lambda functions as container images c.) Lambda allocates compute power in proportion to the memory you allocate to your function. AWS, thus recommends to over provision your function time out settings for the proper performance of Lambda functions d.) The bigger your deployment package, the slower your Lambda function will cold-start. Hence, AWS suggests packaging dependencies as a separate package from the actual Lambda package e.) Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceeds the expected threshold f.) If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code

a.) By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources e.) Since Lambda functions can scale extremely quickly, it's a good idea to deploy a CloudWatch Alarm that notifies your team when function metrics such as ConcurrentExecutions or Invocations exceeds the expected threshold f.) If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code Lambda functions always operate from an AWS-owned VPC. By default, your function has the full ability to make network requests to any public internet address — this includes access to any of the public AWS APIs. For example, your function can interact with AWS DynamoDB APIs to PutItem or Query for records. You should only enable your functions for VPC access when you need to interact with a private resource located in a private subnet. An RDS instance is a good example. Once your function is VPC-enabled, all network traffic from your function is subject to the routing rules of your VPC/Subnet. If your function needs to interact with a public resource, you will need a route through a NAT gateway in a public subnet.
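To illustrate the monitoring point in option e, here is a hedged boto3 sketch of a CloudWatch alarm on ConcurrentExecutions (the function name, threshold, and SNS topic ARN are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alert the team when the function scales beyond the expected concurrency.
    cloudwatch.put_metric_alarm(
        AlarmName="orders-fn-concurrency-high",
        Namespace="AWS/Lambda",
        MetricName="ConcurrentExecutions",
        Dimensions=[{"Name": "FunctionName", "Value": "orders-fn"}],  # hypothetical function
        Statistic="Maximum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=500,  # hypothetical expected ceiling
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-west-1:111122223333:oncall-alerts"],  # hypothetical topic
    )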

A company has recently created a new department to handle their services workload. An IT team has been asked to create a custom VPC to isolate the resources created in this new department. They have set up the public subnet and internet gateway (IGW). However, they are not able to ping the Amazon EC2 instances with Elastic IP launched in the newly created VPC. As a Solutions Architect, the team has requested your help. How will you troubleshoot this scenario? (Select two) a.) Check if the security groups allow ping from the source b.) Create a secondary IGW to attach with public subnet and move the current IGW to private and write route tables c.) Contact AWS support to map your VPC with subnet d.) Check if the route table is configured with IGW e.) Disable Source / Destination check on the EC2 instance

a.) Check if the security groups allow ping from the source d.) Check if the route table is configured with IGW
An internet gateway (IGW) is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. An internet gateway supports IPv4 and IPv6 traffic. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:
1. Attach an internet gateway to your VPC.
2. Add a route to your subnet's route table that directs internet-bound traffic to the internet gateway.
3. Ensure that instances in your subnet have a globally unique IP address.
4. Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. After creating an IGW, make sure the route tables are updated. Additionally, ensure the security group allows the ICMP protocol for ping requests.
Check if the security group allows ping from the source - A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, AWS uses the default security group. You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. To decide whether to allow traffic to reach an instance, all the rules from all the security groups that are associated with the instance are evaluated. The following are the characteristics of security group rules:
1. By default, security groups allow all outbound traffic.
2. Security group rules are always permissive; you can't create rules that deny access.
3. Security groups are stateful.
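A hedged boto3 sketch of the two fixes, with hypothetical route table, internet gateway, security group, and source CIDR values:

    import boto3

    ec2 = boto3.client("ec2")

    # Route internet-bound traffic from the public subnet to the internet gateway.
    ec2.create_route(
        RouteTableId="rtb-0public",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId="igw-0example",
    )

    # Allow ICMP (ping) from the source network in the instance's security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0webservers",
        IpPermissions=[{
            "IpProtocol": "icmp",
            "FromPort": -1,  # -1 means all ICMP types
            "ToPort": -1,    # -1 means all ICMP codes
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # hypothetical source CIDR
        }],
    )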

A company has noticed that its EBS storage volume (io1) accounts for 90% of the cost and the remaining 10% cost can be attributed to the EC2 instance. The CloudWatch metrics report that both the EC2 instance and the EBS volume are under-utilized. The CloudWatch metrics also show that the EBS volume has occasional I/O bursts. The entire infrastructure is managed by AWS CloudFormation. As a Solutions Architect, what do you propose to reduce the costs? a.) Convert the Amazon EC2 instance EBS volume to gp2 b.) Don't use a CloudFormation template to create the database as the CloudFormation service incurs greater service charges c.) Keep the EBS volume to io1 and reduce the IOPS d.) Change the Amazon EC2 instance type to something much smaller

a.) Convert the Amazon EC2 instance EBS volume to gp2 Amazon EBS provides various volume types that differ in performance characteristics and price so that you can tailor your storage performance and cost to the needs of your applications. The volume types fall into two categories: SSD-backed volumes, optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS; and HDD-backed volumes, optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS. Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. Unlike gp2, which uses a bucket and credit model to calculate performance, an io1 volume allows you to specify a consistent IOPS rate when you create the volume, and Amazon EBS delivers the provisioned performance 99.9 percent of the time. Since the volume here is under-utilized and only sees occasional I/O bursts, converting it to gp2 removes the provisioned-IOPS charge while gp2's burst credits can still absorb the occasional spikes, which significantly reduces the cost.
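A one-call boto3 sketch of the conversion (the volume ID is a placeholder); Elastic Volumes lets you change the volume type in place without detaching it or losing data:

    import boto3

    ec2 = boto3.client("ec2")

    # Convert the under-utilized io1 volume to gp2 to drop the provisioned-IOPS charge.
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0", VolumeType="gp2")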

Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes for the installation and contains both static as well as dynamic files that must be generated during the installation process. As a Solutions Architect, you would like to bring the time to create a new instance in your Elastic Beanstalk deployment to be less than 2 minutes. Which of the following options should be combined to build a solution for this requirement? (Select two) a.) Create a Golden AMI with the static installation components already setup b.) Use Elastic Beanstalk deployment caching feature c.) Use EC2 user data to install the application at boot time d.) Store the installation files in S3 so they can be quickly retrieved e.) Use EC2 user data to customize the dynamic installation parts at boot time

a.) Create a Golden AMI with the static installation components already setup e.) Use EC2 user data to customize the dynamic installation parts at boot time AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform version. A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn't included in the standard AMIs. Create a Golden AMI with the static installation components already setup - A Golden AMI is an AMI that you standardize through configuration, consistent security patching, and hardening. It also contains agents you approve for logging, security, performance monitoring, etc. For the given use-case, you can have the static installation components already setup via the golden AMI. Use EC2 user data to customize the dynamic installation parts at boot time - EC2 instance user data is the data that you specified in the form of a configuration script while launching your instance. You can use EC2 user data to customize the dynamic installation parts at boot time, rather than installing the application itself at boot time.
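As a rough boto3 illustration of the combination (AMI ID, instance type, and the scripts referenced in user data are all hypothetical), the golden AMI carries the slow static installation while user data only performs the quick dynamic setup at boot:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical boot-time script that only handles the dynamic configuration.
    user_data = """#!/bin/bash
    /opt/myapp/render-config.sh --env production
    systemctl restart myapp
    """

    ec2.run_instances(
        ImageId="ami-0goldenexample",  # golden AMI with the static components pre-installed
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
    )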

The engineering team at a global e-commerce company is currently reviewing their disaster recovery strategy. The team has outlined that they need to be able to quickly recover their application stack with a Recovery Time Objective (RTO) of 5 minutes, in all of the AWS Regions that the application runs. The application stack currently takes over 45 minutes to install on a Linux system. As a Solutions architect, which of the following options would you recommend as the disaster recovery strategy? a.) Create an AMI after installing the software and copy the AMI across all Regions. Use this Region-specific AMI to run the recovery process in the respective Regions b.) Store the installation files in Amazon S3 for quicker retrieval c.) Create an AMI after installing the software and use this AMI to run the recovery process in other Regions d.) Use Amazon EC2 user data to speed up the installation process

a.) Create an AMI after installing the software and copy the AMI across all Regions. Use this Region-specific AMI to run the recovery process in the respective Regions An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations. For the current use case, you need to create an AMI such that the application stack is already set up. But AMIs are bound to the Region they are created in. So, you need to copy the AMI across Regions for disaster recovery readiness. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by default, copied to an identical but distinct target snapshot. (The sole exceptions are when you choose to encrypt or re-encrypt the snapshot.) You can change or deregister the source AMI with no effect on the target AMI. The reverse is also true. There are no charges for copying an AMI. However, standard storage and data transfer rates apply. If you copy an EBS-backed AMI, you will incur charges for the storage of any additional EBS snapshots. AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the source AMI to the new AMI. After the copy operation is complete, you can apply launch permissions, user-defined tags, and Amazon S3 bucket permissions to the new AMI.
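A small boto3 sketch of the cross-Region copy (AMI ID, name, and Regions are placeholders); the call is made against the destination Region's client:

    import boto3

    # Copy the pre-installed application AMI from us-west-1 into the recovery Region.
    ec2_dr = boto3.client("ec2", region_name="us-east-1")
    response = ec2_dr.copy_image(
        Name="app-stack-ami-dr",
        SourceImageId="ami-0123456789abcdef0",
        SourceRegion="us-west-1",
    )
    print(response["ImageId"])  # the new Region-specific AMI ID to use for recovery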

Your company runs a web portal to match developers to clients who need their help. As a solutions architect, you've designed the architecture of the website to be fully serverless with API Gateway & AWS Lambda. The backend uses a DynamoDB table. You would like to automatically congratulate your developers on important milestones, such as - their first paid contract. All the contracts are stored in DynamoDB. Which DynamoDB feature can you use to implement this functionality such that there is LEAST delay in sending automatic notifications? a.) DynamoDB Streams + Lambda b.) Amazon SQS + Lambda c.) DynamoDB DAX + API Gateway d.) CloudWatch Events + Lambda

a.) DynamoDB Streams + Lambda Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB Streams + Lambda - A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. DynamoDB Streams will contain a stream of all the changes that happen to a DynamoDB table. It can be chained with a Lambda function that will be triggered to react to these changes, one of which is the developer's milestone. Therefore, this is the correct option.
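A hedged sketch of the Lambda side, assuming the contracts table has a stream with NEW_IMAGE records and a hypothetical developer_id attribute; the notification helper is a placeholder, not part of the original question:

    def congratulate_developer(developer_id):
        # Placeholder for the notification logic (for example, publishing to SNS or sending an email via SES).
        print(f"Congratulations, {developer_id}!")

    def lambda_handler(event, context):
        # Lambda is invoked with batches of DynamoDB stream records.
        for record in event["Records"]:
            if record["eventName"] != "INSERT":
                continue  # only react to newly created contracts
            new_image = record["dynamodb"]["NewImage"]
            congratulate_developer(new_image["developer_id"]["S"])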

A Big Data analytics company writes data and log files in Amazon S3 buckets. The company now wants to stream the existing data files as well as any ongoing file updates from Amazon S3 to Amazon Kinesis Data Streams. As a Solutions Architect, which of the following would you suggest as the fastest possible way of building a solution for this requirement? a.) Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams b.) Configure CloudWatch events for the bucket actions on Amazon S3. An AWS Lambda function can then be triggered from the CloudWatch event that will send the necessary data to Amazon Kinesis Data Streams c.) Leverage S3 event notification to trigger a Lambda function for the file create event. The Lambda function will then send the necessary data to Amazon Kinesis Data Streams d.) Amazon S3 bucket actions can be directly configured to write data into Amazon Simple Notification Service (SNS). SNS can then be used to send the updates to Amazon Kinesis Data Streams

a.) Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams You can achieve this by using AWS Database Migration Service (AWS DMS). AWS DMS enables you to seamlessly migrate data from supported sources to relational databases, data warehouses, streaming platforms, and other data stores in AWS cloud. The given requirement needs the functionality to be implemented in the least possible time. You can use AWS DMS for such data-processing requirements. AWS DMS lets you expand the existing application to stream data from Amazon S3 into Amazon Kinesis Data Streams for real-time analytics without writing and maintaining new code. AWS DMS supports specifying Amazon S3 as the source and streaming services like Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK) as the target. AWS DMS allows migration of full and change data capture (CDC) files to these services. AWS DMS performs this task out of the box without any complex configuration or code development. You can also configure an AWS DMS replication instance to scale up or down depending on the workload. AWS DMS supports Amazon S3 as the source and Kinesis as the target, so data stored in an S3 bucket is streamed to Kinesis. Several consumers, such as AWS Lambda, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and the Kinesis Client Library (KCL), can consume the data concurrently to perform real-time analytics on the dataset. Each AWS service in this architecture can scale independently as needed.

You are working as an AWS architect for a weather tracking facility. You are asked to set up a Disaster Recovery (DR) mechanism with minimum costs. In case of failure, the facility can only bear data loss of a few minutes without jeopardizing the forecasting models. As a Solutions Architect, which DR method will you suggest? a.) Pilot Light b.) Multi-Site c.) Warm Standby d.) Backup and Restore

a.) Pilot Light The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud. The idea of the pilot light is an analogy that comes from the gas heater. In a gas heater, a small flame that's always on can quickly ignite the entire furnace to heat up a house. This scenario is similar to a backup-and-restore scenario. For example, with AWS you can maintain a pilot light by configuring and running the most critical core elements of your system in AWS. For the given use-case, a small part of the backup infrastructure is always running simultaneously syncing mutable data (such as databases or documents) so that there is no loss of critical data. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core. For Pilot light, RPO is in minutes, so this is the correct solution.

A startup's cloud infrastructure consists of a few Amazon EC2 instances, Amazon RDS instances and Amazon S3 storage. A year into their business operations, the startup is incurring costs that seem too high for their business requirements. Which of the following options represents a valid cost-optimization solution? a.) Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations b.) Use AWS Trusted Advisor checks on Amazon EC2 Reserved Instances to automatically renew Reserved Instances. Trusted advisor also suggests Amazon RDS idle DB instances c.) Use Amazon S3 Storage class analysis to get recommendations for transitions of objects to S3 Glacier storage classes to reduce storage costs. You can also automate moving these objects into lower-cost storage tier using Lifecycle Policies d.) Use AWS Compute Optimizer recommendations to help you choose the optimal Amazon EC2 purchasing options and help reserve your instance capacities at reduced costs

a.) Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that are either idle or have low utilization and use AWS Compute Optimizer to look at instance type recommendations AWS Cost Explorer helps you identify under-utilized EC2 instances that may be downsized on an instance by instance basis within the same instance family, and also understand the potential impact on your AWS bill by taking into account your Reserved Instances and Savings Plans. AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data.
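As a rough boto3 sketch of pulling both kinds of findings programmatically (the exact response fields may vary; this is illustrative, not an exhaustive report):

    import boto3

    # Right-sizing report for EC2 from Cost Explorer (flags idle / low-utilization instances).
    ce = boto3.client("ce")
    rightsizing = ce.get_rightsizing_recommendation(Service="AmazonEC2")
    print(rightsizing["Summary"])

    # Instance-type recommendations from Compute Optimizer.
    co = boto3.client("compute-optimizer")
    recs = co.get_ec2_instance_recommendations()
    for rec in recs.get("instanceRecommendations", []):
        print(rec["instanceArn"], rec["finding"])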

A company wants to adopt a hybrid cloud infrastructure where it uses some AWS services such as S3 alongside its on-premises data center. The company wants a dedicated private connection between the on-premises data center and AWS. In case of failures though, the company needs to guarantee uptime and is willing to use the public internet for an encrypted connection. What do you recommend? (Select two) a.) Use Direct Connect as a primary connection b.) Use Egress Only Internet Gateway as a backup connection c.) Use Site to Site VPN as a backup connection d.) Use Site to Site VPN as a primary connection e.) Use Direct Connect as a backup connection

a.) Use Direct Connect as a primary connection c.) Use Site to Site VPN as a backup connection Use Direct Connect as a primary connection - AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC. Use Site to Site VPN as a backup connection - AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. Direct Connect as a primary connection guarantees great performance and security (as the connection is private). Using Direct Connect as a backup solution would work but probably carries a risk it would fail as well. As we don't mind going over the public internet (which is reliable, but less secure as connections are going over the public route), we should use a Site to Site VPN which offers an encrypted connection to handle failover scenarios.

Your company runs a website for evaluating coding skills. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend is using an RDS PostgreSQL database. Caching is implemented using a Redis ElastiCache cluster. You would like to increase the security of your authentication to Redis from the Lambda function, leveraging a username and password combination. As a solutions architect, which of the following options would you recommend? a.) Use Redis Auth b.) Enable KMS Encryption c.) Use IAM Auth and attach an IAM role to Lambda d.) Create an inbound rule to restrict access to Redis Auth only from the Lambda security group

a.) Use Redis Auth Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session store. ElastiCache for Redis supports replication, high availability, and cluster sharding right out of the box. IAM Auth is not supported by ElastiCache. Redis authentication tokens enable Redis to require a token (password) before allowing clients to execute commands, thereby improving data security.
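A hedged sketch of connecting from the Lambda function's Python code using an AUTH token (endpoint and token are placeholders; note that ElastiCache requires in-transit encryption to be enabled on the cluster for AUTH to be used, and in practice the token would be fetched from Secrets Manager or SSM rather than hard-coded):

    import redis  # redis-py client

    r = redis.Redis(
        host="my-redis.xxxxxx.usw1.cache.amazonaws.com",  # hypothetical cluster endpoint
        port=6379,
        password="my-strong-auth-token",  # the Redis AUTH token
        ssl=True,  # in-transit encryption must be enabled on the cluster
    )
    r.set("session:42", "cached-value")
    print(r.get("session:42"))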

You are working for a SaaS (Software as a Service) company as a solutions architect and help design solutions for the company's customers. One of the customers is a bank and has a requirement to whitelist up to two public IPs when the bank is accessing external services across the internet. Which architectural choice do you recommend to maintain high availability, support scaling-up to 10 instances and comply with the bank's requirements? a.) Use a Network Load Balancer with an Auto Scaling Group (ASG) b.) Use an Application Load Balancer with an Auto Scaling Group(ASG) c.) Use a Classic Load Balancer with an Auto Scaling Group (ASG) d.) Use an Auto Scaling Group (ASG) with Dynamic Elastic IPs attachment

a.) Use a Network Load Balancer with an Auto Scaling Group (ASG) Network Load Balancer is best suited for use-cases involving low latency and high throughput workloads that involve scaling to millions of requests per second. Network Load Balancer operates at the connection level (Layer 4), routing connections to targets - Amazon EC2 instances, microservices, and containers - within Amazon Virtual Private Cloud (Amazon VPC) based on IP protocol data. A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. Network Load Balancers expose a fixed IP to the public web, therefore allowing your application to be predictably reached using these IPs, while allowing you to scale your application behind the Network Load Balancer using an ASG. Incorrect options: Classic Load Balancers and Application Load Balancers use the private IP addresses associated with their Elastic network interfaces as the source IP address for requests forwarded to your web servers. These IP addresses can be used for various purposes, such as allowing the load balancer traffic on the web servers and for request processing. It's a best practice to use security group referencing on the web servers for whitelisting load balancer traffic from Classic Load Balancers or Application Load Balancers. However, because Network Load Balancers don't support security groups, based on the target group configurations, the IP addresses of the clients or the private IP addresses associated with the Network Load Balancers must be allowed on the web server's security group.
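A boto3 sketch of creating an internet-facing NLB with fixed Elastic IPs (subnet and allocation IDs are hypothetical); the bank can then whitelist exactly these two static public addresses:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_load_balancer(
        Name="bank-facing-nlb",
        Type="network",
        Scheme="internet-facing",
        SubnetMappings=[
            {"SubnetId": "subnet-0aaa", "AllocationId": "eipalloc-0aaa"},  # Elastic IP in AZ 1
            {"SubnetId": "subnet-0bbb", "AllocationId": "eipalloc-0bbb"},  # Elastic IP in AZ 2
        ],
    )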

The engineering team at a leading e-commerce company is anticipating a surge in the traffic because of a flash sale planned for the weekend. You have estimated the web traffic to be 10x. The content of your website is highly dynamic and changes very often. As a Solutions Architect, which of the following options would you recommend to make sure your infrastructure scales for that day? a.) Use an Auto Scaling Group b.) Use a CloudFront distribution in front of your website c.) Use a Route53 Multi Value record d.) Deploy the website on S3

a.) Use an Auto Scaling Group An Auto Scaling group (ASG) contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service. The size of an Auto Scaling group depends on the number of instances that you set as the desired capacity. You can adjust its size to meet demand, either manually or by using automatic scaling. An Auto Scaling group starts by launching enough instances to meet its desired capacity. It maintains this number of instances by performing periodic health checks on the instances in the group. The Auto Scaling group continues to maintain a fixed number of instances even if an instance becomes unhealthy. If an instance becomes unhealthy, the group terminates the unhealthy instance and launches another instance to replace it. ASG is the correct answer here.
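A hedged boto3 sketch of attaching a target tracking scaling policy to such an ASG (group name and target value are placeholders), so the group adds capacity automatically as the flash-sale traffic ramps up:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU near 50% across the fleet; the ASG scales out and in as needed.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # hypothetical group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )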

The engineering team at a social media company has recently migrated to AWS Cloud from its on-premises data center. The team is evaluating CloudFront to be used as a CDN for its flagship application. The team has hired you as an AWS Certified Solutions Architect Associate to advise on CloudFront capabilities on routing, security, and high availability. Which of the following would you identify as correct regarding CloudFront? (Select three) a.) Use field level encryption in CloudFront to protect sensitive data for specific content b.) Use an origin group with primary and secondary origins to configure CloudFront for high-availability and failover c.) Use geo restriction to configure CloudFront for high-availability and failover d.) CloudFront can route to multiple origins based on the content type e.) Use KMS encryption in CloudFront to protect sensitive data for specific content f.) CloudFront can route to multiple origins based on the price class

a.) Use field level encryption in CloudFront to protect sensitive data for specific content b.) Use an origin group with primary and secondary origins to configure CloudFront for high-availability and failover d.) CloudFront can route to multiple origins based on the content type
CloudFront can route to multiple origins based on the content type - You can configure a single CloudFront web distribution to serve different types of requests from multiple origins. For example, if you are building a website that serves static content from an Amazon Simple Storage Service (Amazon S3) bucket and dynamic content from a load balancer, you can serve both types of content from a CloudFront web distribution.
Use an origin group with primary and secondary origins to configure CloudFront for high availability and failover - You can set up CloudFront with origin failover for scenarios that require high availability. To get started, you create an origin group with two origins: a primary and a secondary. If the primary origin is unavailable or returns specific HTTP response status codes that indicate a failure, CloudFront automatically switches to the secondary origin. To set up origin failover, you must have a distribution with at least two origins. Next, you create an origin group for your distribution that includes two origins, setting one as the primary. Finally, you create or update a cache behavior to use the origin group.
Use field level encryption in CloudFront to protect sensitive data for specific content - Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack. This encryption ensures that only applications that need the data - and have the credentials to decrypt it - are able to do so. To use field-level encryption, when you configure your CloudFront distribution, specify the set of fields in POST requests that you want to be encrypted, and the public key to use to encrypt them. You can encrypt up to 10 data fields in a request. (You can't encrypt all of the data in a request with field-level encryption; you must specify individual fields to encrypt.)

A company wants an easy way to deploy and manage large fleets of Snowball devices. Which of the following solutions can be used to address the given requirements? a.) AWS OpsWorks b.) AWS OpsHub c.) AWS Config d.) AWS CodeDeploy

b.) AWS OpsHub AWS OpsHub is a graphical user interface you can use to manage your AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud. With just a few clicks in AWS OpsHub, you have the full functionality of the Snowball devices at your fingertips; you can unlock and configure devices, drag-and-drop data to devices, launch applications, and monitor device metrics. AWS OpsHub supports the Snowball Edge Storage Optimized and Snowball Edge Compute Optimized devices.

As part of the on-premises data center migration to AWS Cloud, a company is looking at using multiple AWS Snow Family devices to move their on-premises data. Which Snow Family service offers the feature of storage clustering? a.) AWS Snowmobile b.) AWS Snowball Edge Compute Optimized c.) AWS Snowmobile Storage Compute d.) AWS Snowcone

b.) AWS Snowball Edge Compute Optimized AWS Snowball is a data migration and edge computing device that comes in two device options: Compute Optimized and Storage Optimized. Snowball Edge Storage Optimized devices provide 40 vCPUs of compute capacity coupled with 80 terabytes of usable block or Amazon S3-compatible object storage. It is well-suited for local storage and large-scale data transfer. Snowball Edge Compute Optimized devices provide 52 vCPUs, 42 terabytes of usable block or object storage, and an optional GPU for use cases such as advanced machine learning and full-motion video analysis in disconnected environments. Customers can use these two options for data collection, machine learning and processing, and storage in environments with intermittent connectivity (such as manufacturing, industrial, and transportation) or in extremely remote locations (such as military or maritime operations) before shipping it back to AWS. These devices may also be rack mounted and clustered together to build larger, temporary installations. Therefore, both AWS Snowball Edge Storage Optimized and AWS Snowball Edge Compute Optimized offer the storage clustering feature; of the options listed, AWS Snowball Edge Compute Optimized is the one that supports it.

An e-commerce company has copied 1 PB of data from its on-premises data center to an Amazon S3 bucket in the us-west-1 Region using an AWS Direct Connect link. The company now wants to copy the data to another S3 bucket in the us-east-1 Region. The on-premises data center does not allow the use of AWS Snowball. As a Solutions Architect, which of the following would you recommend to accomplish this? a.) Set up cross-Region replication to copy objects across S3 buckets in different Regions b.) Copy data from the source bucket to the destination bucket using the aws S3 sync command c.) Use Snowball Edge device to copy the data from one Region to another Region d.) Copy data from the source S3 bucket to a target S3 bucket using the S3 console

b.) Copy data from the source bucket to the destination bucket using the aws s3 sync command The aws s3 sync command uses the CopyObject APIs to copy objects between S3 buckets. The sync command lists the source and target buckets to identify objects that are in the source bucket but that aren't in the target bucket. The command also identifies objects in the source bucket that have different LastModified dates than the objects that are in the target bucket. The sync command on a versioned bucket copies only the current version of the object - previous versions aren't copied. By default, this preserves object metadata, but the access control lists (ACLs) are set to FULL_CONTROL for your AWS account, which removes any additional ACLs. If the operation fails, you can run the sync command again without duplicating previously copied objects. You can use the command like so: aws s3 sync s3://DOC-EXAMPLE-BUCKET-SOURCE s3://DOC-EXAMPLE-BUCKET-TARGET
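For illustration, here is a rough boto3 equivalent of what sync does under the hood (bucket names are placeholders mirroring the CLI example); the copies are performed server-side by S3, so no object data flows through the machine running the script:

    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    SRC, DST = "doc-example-bucket-source", "doc-example-bucket-target"  # hypothetical buckets

    for page in paginator.paginate(Bucket=SRC):
        for obj in page.get("Contents", []):
            # Server-side copy of each object into the destination bucket.
            s3.copy({"Bucket": SRC, "Key": obj["Key"]}, DST, obj["Key"])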

A web hosting company has deployed their application behind a Network Load Balancer (NLB) and an Auto Scaling Group (ASG). The system administrator has now released a new cost-optimized AMI that should be used to launch instances for the Auto Scaling Group, going ahead. As a Solutions Architect, how can you update the ASG to launch from this new AMI? a.) Launch a script on the EC2 instance to query the metadata service at http://169.254.169.254/meta-data/ami-update b.) Create a new launch configuration with the new AMI ID c.) Swap the underlying root EBS volumes for your instances d.) Update the current launch configuration with the new AMI ID

b.) Create a new launch configuration with the new AMI ID An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, including the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information in order to launch the instance. An Auto Scaling group is associated with one launch configuration at a time, and you can't modify a launch configuration after you've created it. To change the launch configuration for an Auto Scaling group, use an existing launch configuration as the basis for a new launch configuration. Then, update the Auto Scaling group to use the new launch configuration. After you change the launch configuration for an Auto Scaling group, any new instances are launched using the new configuration options, but existing instances are not affected. To update the existing instances, terminate them so that they are replaced by your Auto Scaling group, or allow automatic scaling to gradually replace older instances with newer instances based on your termination policies.
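A boto3 sketch of the two steps (names, AMI ID, and security group are hypothetical): create the replacement launch configuration, then point the ASG at it.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configurations are immutable, so create a new one that uses the cost-optimized AMI.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-v2",
        ImageId="ami-0newcostoptimized",
        InstanceType="t3.medium",
        SecurityGroups=["sg-0webservers"],
    )

    # Point the ASG at the new launch configuration; only newly launched instances use the new AMI.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-v2",
    )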

A junior developer has downloaded a sample Amazon S3 bucket policy to make changes to it based on new company-wide access policies. He has requested your help in understanding this bucket policy. As a Solutions Architect, which of the following would you identify as the correct description for the given policy?
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}
a.) It ensures EC2 instances that have inherited a security group can access the bucket b.) It authorizes an entire CIDR except one IP address to access the S3 bucket c.) It authorizes an IP address and a CIDR to access the S3 bucket d.) It ensures the S3 bucket is exposing an external IP within the CIDR range specified, except one IP

b.) It authorizes an entire CIDR except one IP address to access the S3 bucket You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies. Let's analyze the bucket policy one step at a time:
The snippet "Effect": "Allow" implies an allow effect.
The snippet "Principal": "*" implies any principal.
The snippet "Action": "s3:*" implies any S3 API.
The snippet "Resource": "arn:aws:s3:::examplebucket/*" implies that the resource can be the bucket examplebucket and its contents.
Consider the last snippet of the given bucket policy: "Condition": { "IpAddress": {"aws:SourceIp": "54.240.143.0/24"}, "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"} }
This snippet implies that a request is allowed to access examplebucket and its contents if the source IP is in the CIDR block 54.240.143.0/24 (54.240.143.0 - 54.240.143.255), except when the source IP is 54.240.143.188 (the /32 block contains exactly that one address), which is explicitly blocked from accessing examplebucket and its contents.
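As a quick sanity check of the CIDR arithmetic (standard-library Python, with made-up test addresses), you can reproduce the policy's allow/deny decision:

    import ipaddress

    allowed = ipaddress.ip_network("54.240.143.0/24")    # 54.240.143.0 - 54.240.143.255
    blocked = ipaddress.ip_network("54.240.143.188/32")  # exactly one address

    for ip in ("54.240.143.10", "54.240.143.188", "198.51.100.7"):
        addr = ipaddress.ip_address(ip)
        granted = addr in allowed and addr not in blocked
        print(ip, "->", "allow" if granted else "deny")
    # 54.240.143.10 -> allow, 54.240.143.188 -> deny, 198.51.100.7 -> deny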

An Elastic Load Balancer has marked all the EC2 instances in the target group as unhealthy. Surprisingly, when a developer enters the IP address of the EC2 instances in the web browser, he can access the website. What could be the reason the instances are being marked as unhealthy? (Select two) a.) You need to attach Elastic IP to the EC2 instances b.) The route for the health check is misconfigured c.) The EBS volumes have been improperly mounted d.) Your web-app has a runtime that is not supported by the Application Load Balancer e.) The security group of the EC2 instance does not allow for traffic from the security group of the Application Load Balancer

b.) The route for the health check is misconfigured e.) The security group of the EC2 instance does not allow for traffic from the security group of the Application Load Balancer An Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets. You must ensure that your load balancer can communicate with registered targets on both the listener port and the health check port. Whenever you add a listener to your load balancer or update the health check port for a target group used by the load balancer to route requests, you must verify that the security groups associated with the load balancer allow traffic on the new port in both directions.
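A hedged boto3 sketch of correcting a misconfigured health check route (target group ARN and path are placeholders); the path must be one the application actually serves with a 200 response:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-west-1:111122223333:targetgroup/web/abc123",
        HealthCheckPath="/health",        # hypothetical route served by the web app
        HealthCheckPort="traffic-port",   # check on the same port that receives traffic
    )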

An IT company has a large number of clients opting to build their APIs by using Docker containers. To facilitate the hosting of these containers, the company is looking at various orchestration services available with AWS. As a Solutions Architect, which of the following solutions will you suggest? (Select two) a.) Use Amazon SageMaker for serverless orchestration of the containerized services b.) Use Amazon EKS with AWS Fargate for serverless orchestration of the containerized services c.) Use Amazon ECS with Amazon EC2 for serverless orchestration of the containerized services d.) Use Amazon EMR for serverless orchestration of the containerized services e.) Use Amazon ECS with AWS Fargate for serverless orchestration of the containerized services

b.) Use Amazon EKS with AWS Fargate for serverless orchestration of the containerized services e.) Use Amazon ECS with AWS Fargate for serverless orchestration of the containerized services Building APIs with Docker containers has been gaining momentum over the years. For hosting and exposing these container-based APIs, they need a solution which supports HTTP request routing, autoscaling, and high availability. In some cases, user authorization is also needed. For this purpose, many organizations are orchestrating their containerized services with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), while hosting their containers on Amazon EC2 or AWS Fargate. Then, they can add scalability and high availability with Service Auto Scaling (in Amazon ECS) or Horizontal Pod Auto Scaler (in Amazon EKS), and they expose the services through load balancers. When you use Amazon ECS as an orchestrator (with EC2 or Fargate launch type), you also have the option to expose your services with Amazon API Gateway and AWS Cloud Map instead of a load balancer. AWS Cloud Map is used for service discovery: no matter how Amazon ECS tasks scale, AWS Cloud Map service names would point to the right set of Amazon ECS tasks. Then, API Gateway HTTP APIs can be used to define API routes and point them to the corresponding AWS Cloud Map services.
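A rough boto3 sketch of running such an API as an ECS service on Fargate (cluster, task definition, and network IDs are hypothetical); with the Fargate launch type there are no EC2 instances to manage underneath the containers:

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="api-cluster",
        serviceName="orders-api",
        taskDefinition="orders-api:1",  # previously registered task definition
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0aaa", "subnet-0bbb"],
                "securityGroups": ["sg-0api"],
                "assignPublicIp": "ENABLED",
            }
        },
    )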

As an e-sport tournament hosting company, you have servers that need to scale and be highly available. Therefore you have deployed an Elastic Load Balancer (ELB) with an Auto Scaling group (ASG) across 3 Availability Zones (AZs). When e-sport tournaments are running, the servers need to scale quickly. And when tournaments are done, the servers can be idle. As a general rule, you would like to be highly available, have the capacity to scale and optimize your costs. What do you recommend? (Select two) a.) Use Dedicated hosts for the minimum capacity b.) Use Reserved Instances for the minimum capacity c.) Set the minimum capacity to 3 d.) Set the minimum capacity to 1 e.) Set the minimum capacity to 2

b.) Use Reserved Instances for the minimum capacity e.) Set the minimum capacity to 2 An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management. An Auto Scaling group also enables you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service. You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity. The minimum and maximum capacity are required to create an Auto Scaling group, while the desired capacity is optional. If you do not define your desired capacity upfront, it defaults to your minimum capacity. An Auto Scaling group is elastic as long as it has different values for minimum and maximum capacity. All requests to change the Auto Scaling group's desired capacity (either by manual scaling or automatic scaling) must fall within these limits. Here, even though our ASG is deployed across 3 AZs, the minimum capacity to be highly available is 2. When we specify 2 as the minimum capacity, the ASG would create these 2 instances in separate AZs. If demand goes up, the ASG would spin up a new instance in the third AZ. Later as the demand subsides, the ASG would scale in and the instance count would be back to 2. Use Reserved Instances for the minimum capacity - Reserved Instances provide you with significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region, to benefit from the billing discount. Since the minimum capacity will always be maintained, it is more cost-effective to cover that baseline with Reserved Instances than with any other purchasing option. In case of an AZ outage, the instance in that AZ would go down; however, the other instance would still be available. The ASG would provision the replacement instance in the third AZ to keep the minimum count at 2.
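A boto3 sketch of such a group with a minimum of 2 spread across the three AZ subnets (names, IDs, and the maximum size are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Minimum of 2 keeps one instance in each of two AZs for high availability,
    # while the maximum leaves headroom to scale out during tournaments.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="esport-asg",
        LaunchConfigurationName="esport-lc",  # hypothetical launch configuration
        MinSize=2,
        MaxSize=12,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0aaa,subnet-0bbb,subnet-0ccc",  # subnets in 3 AZs
    )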

A CRM web application was written as a monolith in PHP and is facing scaling issues because of performance bottlenecks. The CTO wants to re-engineer towards microservices architecture and expose their website from the same load balancer, linked to different target groups with different URLs: checkout.mycorp.com, www.mycorp.com, mycorp.com/profile and mycorp.com/search. The CTO would like to expose all these URLs as HTTPS endpoints for security purposes. As a solutions architect, which of the following would you recommend as a solution that requires MINIMAL configuration effort? a.) Use a wildcard SSL certificate b.) Use SSL certificates with SNI c.) Use an HTTP to HTTPS redirect d.) Change the ELB SSL Security Policy

b.) Use SSL certificates with SNI You can host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. To use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. ALB's smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. With SNI support AWS makes it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It's always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.
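A hedged boto3 sketch of binding additional ACM certificates to the ALB's HTTPS listener (the ARNs are placeholders); the ALB then selects the right certificate per hostname via SNI:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.add_listener_certificates(
        ListenerArn="arn:aws:elasticloadbalancing:us-west-1:111122223333:listener/app/web/abc/def",
        Certificates=[
            {"CertificateArn": "arn:aws:acm:us-west-1:111122223333:certificate/checkout-cert"},
            {"CertificateArn": "arn:aws:acm:us-west-1:111122223333:certificate/www-cert"},
        ],
    )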

A Big Data processing company has created a distributed data processing framework that performs best if the network performance between the processing machines is high. The application has to be deployed on AWS, and the company is only looking at performance as the key measure. As a Solutions Architect, which deployment do you recommend? a.) Optimize the EC2 kernel using EC2 User Data b.) Use a Cluster placement group c.) Use Spot Instances d.) Use a Spread placement group

b.) Use a Cluster placement group When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Depending on the type of workload, you can create a placement group using one of the following placement strategies: Cluster - packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications. Partition - spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka. Spread - strictly places a small group of instances across distinct underlying hardware to reduce correlated failures. There is no charge for creating a placement group.
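A boto3 sketch of creating a cluster placement group and launching the processing fleet into it (group name, AMI, instance type, and count are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Cluster strategy packs the instances close together for low-latency, high-throughput networking.
    ec2.create_placement_group(GroupName="dataproc-cluster", Strategy="cluster")

    ec2.run_instances(
        ImageId="ami-0processing",
        InstanceType="c5n.9xlarge",  # hypothetical network-optimized instance type
        MinCount=4,
        MaxCount=4,
        Placement={"GroupName": "dataproc-cluster"},
    )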

A digital media company needs to manage uploads of around 1TB from an application being used by a partner company. As a Solutions Architect, how will you handle the upload of these files to Amazon S3? a.) Use AWS Snowball b.) Use multi-part upload feature of Amazon S3 c.) Use Amazon S3 Versioning d.) Use Direct Connect to provide extra bandwidth

b.) Use multi-part upload feature of Amazon S3 Multi-part upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. AWS recommends that you use multi-part uploading in the following ways: If you're uploading large objects over a stable high-bandwidth network, use multi-part uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance. If you're uploading over a spotty network, use multi-part uploading to increase resiliency to network errors by avoiding upload restarts. When using multi-part uploading, you need to retry uploading only parts that are interrupted during the upload. You don't need to restart uploading your object from the beginning. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation. If the file is greater than 5GB in size, you must use multi-part upload to upload that file to S3.
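A hedged boto3 sketch of a multi-part upload via the transfer manager (bucket, key, file path, and tuning values are placeholders); parts are uploaded in parallel and retried individually on failure:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Switch to multipart at 100 MB and upload up to 10 parts concurrently.
    config = TransferConfig(
        multipart_threshold=100 * 1024 * 1024,
        multipart_chunksize=100 * 1024 * 1024,
        max_concurrency=10,
    )
    s3.upload_file("/data/render-assets.tar", "partner-uploads-bucket", "render-assets.tar", Config=config)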

A leading e-commerce company runs its IT infrastructure on AWS Cloud. The company has a batch job running at 7 am daily on an RDS database. It processes shipping orders for the past day, and usually gets around 2000 records that need to be processed sequentially in a batch job via a shell script. The processing of each record takes about 3 seconds. What platform do you recommend to run this batch job? a.) Amazon Kinesis Data Streams b.) AWS Glue c.) Amazon EC2 d.) AWS Lambda

c.) Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. AWS Batch can be used to plan, schedule, and execute your batch computing workloads on Amazon EC2 Instances. Amazon EC2 is the right choice as it can accommodate batch processing and run customized shell scripts, as this use case requires. Note that processing roughly 2000 records at about 3 seconds each amounts to around 100 minutes of sequential work, which rules out AWS Lambda with its 15-minute maximum execution time.

You are working as a Solutions Architect for a photo processing company that has a proprietary algorithm to compress an image without any loss in quality. Because of the efficiency of the algorithm, your clients are willing to wait for a response that carries their compressed images back. You also want to process these jobs asynchronously and scale quickly, to cater to the high demand. Additionally, you also want the job to be retried in case of failures. Which combination of choices do you recommend to minimize cost and comply with the requirements? (Select two) a.) EC2 Reserved Instances b.) EC2 On-Demand Instances c.) Amazon Simple Queue Service (SQS) d.) EC2 Spot Instances e.) Amazon Simple Notification Service (SNS)

c.) Amazon Simple Queue Service (SQS) d.) EC2 Spot Instances A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price. The Spot price of each instance type in each Availability Zone is set by Amazon EC2 and adjusted gradually based on the long-term supply of and demand for Spot Instances. Your Spot Instance runs whenever capacity is available and the maximum price per hour for your request exceeds the Spot price. Given the unpredictable volume of these jobs and the desire to save on costs, Spot Instances are recommended over On-Demand Instances. Because Spot Instances are cheaper than Reserved Instances and do not require a long-term commitment, they are a better fit for the given use case. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent. SQS will allow you to buffer the image compression requests and process them asynchronously. It also has a direct built-in mechanism for retries and scales seamlessly.
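
To make the decoupling concrete, here is a minimal boto3 sketch of the queue interaction; the queue URL is a placeholder and the compression call stands in for the proprietary algorithm:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-west-1.amazonaws.com/123456789012/image-jobs"  # placeholder

    # Producer: enqueue a compression job (e.g. from the web tier).
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"image_key": "uploads/photo.jpg"}')

    # Consumer: a worker on a Spot Instance long-polls, processes, then deletes.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # compress_image(msg["Body"])  # stand-in for the proprietary algorithm
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    # If a Spot worker is interrupted before delete_message is called, the message
    # becomes visible again after the visibility timeout -- the built-in retry.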

A company runs a popular dating website on the AWS Cloud. As a Solutions Architect, you've designed the architecture of the website to follow a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend uses an RDS PostgreSQL database. Currently, the application uses a username and password combination to connect the Lambda function to the RDS database. You would like to improve the security at the authentication level by leveraging short-lived credentials. What will you choose? (Select two) a.) Embed a credential rotation logic in the AWS Lambda, retrieving them from SSM b.) Restrict the RDS database security group to the Lambda's security group c.) Attach an AWS Identity and Access Management (IAM) role to AWS Lambda d.) Deploy AWS Lambda in a VPC e.) Use IAM authentication from Lambda to RDS PostgreSQL

c.) Attach an AWS Identity and Access Management (IAM) role to AWS Lambda e.) Use IAM authentication from Lambda to RDS PostgreSQL You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. IAM database authentication provides the following benefits: 1. Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL). 2. You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. 3. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security. Incorrect options: AWS Systems Manager Parameter Store (aka SSM Parameter Store) provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, EC2 instance IDs, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data.
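
In practice the Lambda function requests a 15-minute token and uses it in place of a password. A minimal sketch with boto3 and psycopg2, where the endpoint, user, and database names are placeholders (the function's IAM role must also carry the rds-db:connect permission):

    import boto3
    import psycopg2

    HOST = "mydb.xxxxxxxx.us-west-1.rds.amazonaws.com"  # placeholder endpoint
    USER = "app_user"  # DB user granted the rds_iam role in PostgreSQL

    rds = boto3.client("rds")

    # Generate a short-lived (15 minute) authentication token instead of a password.
    token = rds.generate_db_auth_token(DBHostname=HOST, Port=5432, DBUsername=USER)

    conn = psycopg2.connect(
        host=HOST, port=5432, dbname="appdb", user=USER,
        password=token, sslmode="require",  # traffic is encrypted with SSL
    )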

You have an S3 bucket that contains files in two different folders - s3://my-bucket/images and s3://my-bucket/thumbnails. When an image is first uploaded and new, it is viewed several times. But after 45 days, analytics prove that image files are on average rarely requested, but the thumbnails still are. After 180 days, you would like to archive the image files and the thumbnails. Overall you would like the solution to remain highly available to prevent disasters happening against a whole AZ. How can you implement an efficient cost strategy for your S3 bucket? (Select two) a.) Create a Lifecycle Policy to transition objects to Glacier using a prefix after 180 days b.) Create a Lifecycle Policy to transition all objects to S3 Standard IA after 45 days c.) Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days d.) Create a Lifecycle Policy to transition objects to S3 One Zone IA using a prefix after 45 days e.) Create a Lifecycle Policy to transition all objects to Glacier after 180 days

c.) Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days e.) Create a Lifecycle Policy to transition all objects to Glacier after 180 days To manage your S3 objects, so they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: Transition actions — Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them. Expiration actions — Define when objects expire. Amazon S3 deletes expired objects on your behalf. Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. The minimum storage duration charge is 30 days. As the use case mentions, after 45 days the image files are rarely requested but the thumbnails still are, so you need to use a prefix while configuring the Lifecycle Policy so that only objects in s3://my-bucket/images are transitioned to Standard-IA and not all the objects in the bucket. Create a Lifecycle Policy to transition all objects to Glacier after 180 days Amazon S3 Glacier and S3 Glacier Deep Archive are secure, durable, and extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term backup. They are designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.
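
The two rules can be expressed together in one lifecycle configuration. A minimal boto3 sketch, with the bucket name as a placeholder:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {   # Only the images/ prefix moves to Standard-IA after 45 days.
                    "ID": "images-to-standard-ia",
                    "Filter": {"Prefix": "images/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
                },
                {   # Everything in the bucket is archived to Glacier after 180 days.
                    "ID": "archive-all-to-glacier",
                    "Filter": {"Prefix": ""},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
                },
            ]
        },
    )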

You have developed a new REST API leveraging the API Gateway, AWS Lambda and Aurora database services. Most of the workload on the website is read-heavy. The data rarely changes and it is acceptable to serve users outdated data for about 24 hours. Recently, the website has been experiencing high load and the costs incurred on the Aurora database have been very high. How can you easily reduce the costs while improving performance, with minimal changes? a.) Enable AWS Lambda In Memory Caching b.) Add Aurora Read Replicas c.) Enable API Gateway Caching d.) Switch to using an Application Load Balancer

c.) Enable API Gateway Caching Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications. You can enable API caching in Amazon API Gateway to cache your endpoint's responses. With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds. API Gateway then responds to the request by looking up the endpoint response from the cache instead of requesting your endpoint. The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled. Enabling the API Gateway caching feature is the answer for this use case: serving data that is up to an hour stale is well within the 24-hour tolerance, and the cache offloads read traffic from Aurora without any application changes.
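
Caching is a stage-level setting, so it can be switched on without touching the API or the Lambda code. A minimal boto3 sketch, assuming a hypothetical API ID and stage name (the TTL maxes out at 3600 seconds):

    import boto3

    apigw = boto3.client("apigateway")

    apigw.update_stage(
        restApiId="a1b2c3d4e5",  # placeholder API ID
        stageName="prod",        # placeholder stage
        patchOperations=[
            {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
            {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
            # Apply the maximum TTL (1 hour) to every method in the stage.
            {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
        ],
    )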

An IT company runs a high-performance computing (HPC) workload on AWS. The workload requires high network throughput and low-latency network performance along with tightly coupled node-to-node communications. The EC2 instances are properly sized for compute and storage capacity and are launched using default options. Which of the following solutions can be used to improve the performance of the workload? a.) Select the appropriate capacity reservation while launching EC2 instances b.) Select dedicated instance tenancy while launching EC2 instances c.) Select a cluster placement group while launching EC2 instances d.) Select an Elastic Inference accelerator while launching EC2 instances

c.) Select a cluster placement group while launching EC2 instances When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures. You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload. Cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly coupled node-to-node communication that is typical of HPC applications.
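
Building on this, the only launch-time change needed is a Placement parameter naming the group. A minimal boto3 sketch; the AMI ID, instance type, and group name are illustrative, and the cluster placement group is assumed to already exist:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch the HPC nodes into an existing cluster placement group so they land
    # on hardware with low-latency, high-throughput networking between them.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="c5n.18xlarge",      # illustrative network-optimized type
        MinCount=4,
        MaxCount=4,
        Placement={"GroupName": "hpc-pg"},  # assumed pre-created cluster placement group
    )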

A company uses Application Load Balancers (ALBs) in multiple AWS Regions. The ALBs receive inconsistent traffic that varies throughout the year. The engineering team at the company needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity. Which of the following represents the MOST scalable solution with minimal configuration changes? a.) Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Configure the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs b.) Develop an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Configure the on-premises firewall's rule to allow the IP addresses of the ALBs c.) Set up AWS Global Accelerator. Register the ALBs in different Regions to the Global Accelerator. Configure the on-premises firewall's rule to allow static IP addresses associated with the Global Accelerator d.) Set up a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Configure the on-premises firewall's rule to allow the Elastic IP address attached to the NLB

c.) Set up AWS Global Accelerator. Register the ALBs in different Regions to the Global Accelerator. Configure the on-premises firewall's rule to allow static IP addresses associated with the Global Accelerator AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. AWS Global Accelerator is easy to set up, configure, and manage. It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. Associate the static IP addresses provided by AWS Global Accelerator to regional AWS resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2 Instances, and Elastic IP addresses. The IP addresses are anycast from AWS edge locations so they provide onboarding to the AWS global network close to your users. The result is simplified and resilient traffic routing for multi-Region applications using Global Accelerator.
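
The accelerator, its listener, and per-Region endpoint groups can be wired up with a few calls. A minimal boto3 sketch; the ALB ARN is a placeholder, and note that the Global Accelerator API is served from the us-west-2 Region:

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="multi-region-alb", IpAddressType="IPV4", Enabled=True)
    acc_arn = acc["Accelerator"]["AcceleratorArn"]
    # The two static anycast IP addresses to allow in the on-premises firewall:
    static_ips = acc["Accelerator"]["IpSets"][0]["IpAddresses"]

    listener = ga.create_listener(
        AcceleratorArn=acc_arn,
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # One endpoint group per Region, each pointing at that Region's ALB.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-west-1",
        EndpointConfigurations=[{
            "EndpointId": "arn:aws:elasticloadbalancing:us-west-1:123456789012:loadbalancer/app/my-alb/abc123",  # placeholder
            "Weight": 100,
        }],
    )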

A systems administrator is creating IAM policies and attaching them to IAM identities. After creating the necessary identity-based policies, the administrator is now creating resource-based policies. Which is the only resource-based policy that the IAM service supports? a.) Permissions boundary b.) AWS Organizations Service Control Policies (SCP) c.) Trust policy d.) Access control list (ACL)

c.) Trust policy You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3 bucket. These policies grant the specified principal permission to perform specific actions on that resource and define under what conditions this applies. Trust policy - Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role. An IAM role is both an identity and a resource that supports resource-based policies. For this reason, you must attach both a trust policy and an identity-based policy to an IAM role. The IAM service supports only one type of resource-based policy called a role trust policy, which is attached to an IAM role.
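
For illustration, the trust policy is supplied as the AssumeRolePolicyDocument when the role is created. A minimal boto3 sketch that lets the Lambda service assume a hypothetical role:

    import boto3
    import json

    iam = boto3.client("iam")

    # The trust policy (the role's resource-based policy) names the principal
    # allowed to assume the role -- here, the Lambda service.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="my-lambda-role",  # illustrative name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )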

A Pharmaceuticals company is looking for a simple solution to connect its VPCs and on-premises networks through a central hub. As a Solutions Architect, which of the following would you suggest as the solution that requires the LEAST operational overhead? a.) Partially meshed VPC peering can be used to connect the Amazon VPCs to the on-premises networks b.) Fully meshed VPC peering can be used to connect the Amazon VPCs to the on-premises networks c.) Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks d.) Use Transit VPC Solution to connect the Amazon VPCs to the on-premises networks

c.) Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises networks The AWS Transit Gateway allows customers to connect their Amazon VPCs and their on-premises networks to a single gateway. As your number of workloads running on AWS increases, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. With AWS Transit Gateway, you only have to create and manage a single connection from the central gateway into each Amazon VPC, on-premises data center, or remote office across your network. AWS Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. This hub and spoke model simplifies management and reduces operational costs because each network only has to connect to the Transit Gateway and not to every other network.
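
A minimal boto3 sketch of the hub-and-spoke setup: create the Transit Gateway once, then attach each VPC to it (the VPC and subnet IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # The central hub.
    tgw = ec2.create_transit_gateway(Description="central hub for VPCs and on-premises")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # One attachment per spoke VPC; on-premises networks reach the same hub via
    # Site-to-Site VPN or Direct Connect gateway associations.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0abc12345",          # placeholder
        SubnetIds=["subnet-0def67890"],  # placeholder
    )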

You started a new job as a solutions architect at a company that has both AWS experts and people learning AWS. Recently, a developer misconfigured a newly created RDS database which resulted in a production outage. How can you ensure that RDS specific best practices are incorporated into a reusable infrastructure template to be used by all your AWS users? a.) Attach an IAM policy to interns preventing them from creating an RDS database b.) Create a Lambda function which sends emails when it finds misconfigured RDS databases c.) Use CloudFormation to manage RDS databases d.) Store your recommendations in a custom Trusted Advisor rule

c.) Use CloudFormation to manage RDS databases AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources. CloudFormation allows you to keep your infrastructure as code and re-use the best practices around your company for configuration parameters. Therefore, this is the correct option for the given use-case.

A healthcare company is evaluating storage options on Amazon S3 to meet regulatory guidelines. The data should be stored in such a way on S3 that it cannot be deleted until the regulatory time period has expired. As a solutions architect, which of the following would you recommend for the given requirement? a.) Use S3 cross-Region Replication b.) Activate MFA delete on the S3 bucket c.) Use S3 Object Lock d.) Use S3 Glacier Vault Lock

c.) Use S3 Object Lock Amazon S3 Object Lock is an Amazon S3 feature that allows you to store objects using a write once, read many (WORM) model. You can use WORM protection for scenarios where it is imperative that data is not changed or deleted after it has been written. Whether your business has a requirement to satisfy compliance regulations in the financial or healthcare sector, or you simply want to capture a golden copy of business records for later auditing and reconciliation, S3 Object Lock is the right tool for you. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.
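
As a sketch, a default retention period can be applied bucket-wide once Object Lock has been enabled at bucket creation; the bucket name and retention period below are illustrative:

    import boto3

    s3 = boto3.client("s3")

    # Object Lock must be enabled when the bucket is created.
    s3.create_bucket(
        Bucket="compliance-records",  # placeholder
        CreateBucketConfiguration={"LocationConstraint": "us-west-1"},
        ObjectLockEnabledForBucket=True,
    )

    # Default COMPLIANCE-mode retention: objects cannot be deleted or overwritten
    # by any user until the retention period expires.
    s3.put_object_lock_configuration(
        Bucket="compliance-records",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},  # illustrative period
        },
    )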

An enterprise has decided to move its secondary workloads such as backups and archives to AWS cloud. The CTO wishes to move the data stored on physical tapes to Cloud, without changing their current tape backup workflows. The company holds petabytes of data on tapes and needs a cost-optimized solution to move this data to cloud. What is an optimal solution that meets these requirements while keeping the costs to a minimum? a.) Use AWS DataSync, which makes it simple and fast to move large amounts of data online between on-premises storage and AWS Cloud. Data moved to Cloud can then be stored cost-effectively in Amazon S3 archiving storage classes b.) Use AWS Direct Connect, a cloud service solution that makes it easy to establish a dedicated network connection from on-premises to AWS to transfer data. Once this is done, Amazon S3 can be used to store data at lesser costs c.) Use Tape Gateway, which can be used to move on-premises tape data onto AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years d.) Use AWS VPN connection between the on-premises datacenter and your Amazon VPC. Once this is established, you can use Amazon Elastic File System (Amazon EFS) to get a scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources

c.) Use Tape Gateway, which can be used to move on-premises tape data onto AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store data cost-effectively for years Tape Gateway enables you to replace using physical tapes on-premises with virtual tapes in AWS without changing existing backup workflows. Tape Gateway supports all leading backup applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway encrypts data between the gateway and AWS for secure data transfer and compresses data while transitioning virtual tapes between Amazon S3 and Amazon S3 Glacier, or Amazon S3 Glacier Deep Archive, to minimize storage costs. Tape Gateway compresses and stores archived virtual tapes in the lowest-cost Amazon S3 storage classes, Amazon S3 Glacier and Amazon S3 Glacier Deep Archive. This makes it feasible for you to retain long-term data in the AWS Cloud at a very low cost. With Tape Gateway, you only pay for what you consume, with no minimum commitments and no upfront fees. Tape Gateway stores your virtual tapes in S3 buckets managed by the AWS Storage Gateway service, so you don't have to manage your own Amazon S3 storage. Tape Gateway integrates with all leading backup applications allowing you to start using cloud storage for on-premises backup and archive without any changes to your backup and archive workflows.

A music-sharing company uses a Network Load Balancer to direct traffic to 5 EC2 instances managed by an Auto Scaling group. When a very popular song is released, the Auto Scaling Group scales to 100 instances and the company incurs high network and compute fees. The company wants a solution to reduce the costs without changing any of the application code. What do you recommend? a.) Move the songs to S3 b.) Move the songs to Glacier c.) Use a CloudFront distribution d.) Leverage AWS Storage Gateway

c.) Use a CloudFront distribution Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront points of presence (POPs) (edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. CloudFront is the right answer because we can put it in front of our ASG and leverage its global caching feature, which will help us distribute the content reliably with dramatically reduced costs (the ASG won't need to scale as much).

A social media company wants the capability to dynamically alter the size of a geographic area from which traffic is routed to a specific server resource. Which feature of Route 53 can help achieve this functionality? a.) Latency-based routing b.) Geolocation routing c.) Weighted routing d.) Geoproximity routing

d.) Geoproximity routing Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource. To optionally change the size of the geographic region from which Route 53 routes traffic to a resource, specify the applicable value for the bias: 1. To expand the size of the geographic region from which Route 53 routes traffic to a resource, specify a positive integer from 1 to 99 for the bias. Route 53 shrinks the size of adjacent regions. 2. To shrink the size of the geographic region from which Route 53 routes traffic to a resource, specify a negative bias of -1 to -99. Route 53 expands the size of adjacent regions.

As a Solutions Architect, you are tasked to design a distributed application that will run on various EC2 instances. This application needs to have the highest performance local disk to cache data. Also, data is copied through an EC2 to EC2 replication mechanism. It is acceptable if the instance loses its data when stopped or terminated. Which storage solution do you recommend? a.) Amazon Elastic Block Store (EBS) b.) Amazon Simple Storage Service (Amazon S3) c.) Amazon Elastic File System (Amazon EFS) d.) Instance Store

d.) Instance Store An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance store volumes are included as part of the instance's usage cost. Some instance types use NVMe or SATA-based solid-state drives (SSD) to deliver high random I/O performance. This is a good option when you need storage with very low latency, but you don't need the data to persist when the instance terminates.

What does this CloudFormation snippet do? (Select three)

    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 192.168.1.1/32

a.) It configures a security group's outbound rules b.) It allows any IP to pass through on the HTTP port c.) It configures an NACL's inbound rules d.) It configures a security group's inbound rules e.) It only allows the IP 0.0.0.0 to reach HTTP f.) It lets traffic flow from one IP on port 22 g.) It prevents traffic from reaching on HTTP unless from the IP 192.168.1.1

d.) It configures a security group's inbound rules b.) It allows any IP to pass through on the HTTP port f.) It lets traffic flow from one IP on port 22 A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, we use the default security group. You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When we decide whether to allow traffic to reach an instance, we evaluate all the rules from all the security groups that are associated with the instance. The following are the characteristics of security group rules: 1. By default, security groups allow all outbound traffic. 2. Security group rules are always permissive; you can't create rules that deny access. 3. Security groups are stateful AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources. Considering the given CloudFormation snippet, 0.0.0.0/0 means any IP, not the IP 0.0.0.0. Ingress means traffic going into your instance, and Security Groups are different from NACLs. Each "-" in the security group rule represents a different rule (YAML syntax). Therefore, the CloudFormation snippet creates two Security Group inbound rules that allow any IP to pass through on the HTTP port and let traffic flow from one source IP (192.168.1.1) on port 22.
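
The same two ingress rules can be expressed outside CloudFormation. A minimal boto3 sketch against a hypothetical security group ID:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0abc12345",  # placeholder security group
        IpPermissions=[
            # Rule 1: HTTP from any IPv4 address.
            {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            # Rule 2: SSH from the single address 192.168.1.1.
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "192.168.1.1/32"}]},
        ],
    )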

A media company uses Amazon ElastiCache Redis to enhance the performance of its RDS database layer. The company wants a robust disaster recovery strategy for its caching layer that guarantees minimal downtime as well as minimal data loss while ensuring good application performance. Which of the following solutions will you recommend to address the given use-case? a.) Schedule manual backups using Redis append-only file (AOF) b.) Schedule daily automatic backups at a time when you expect low resource utilization for your cluster c.) Add read-replicas across multiple availability zones to reduce the risk of potential data loss because of failure d.) Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure

d.) Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure Multi-AZ is the best option when data retention, minimal downtime, and application performance are a priority. Data-loss potential - Low. Multi-AZ provides fault tolerance for every scenario, including hardware-related issues. Performance impact - Low. Of the available options, Multi-AZ provides the fastest time to recovery, because there is no manual procedure to follow after the process is implemented. Cost - Low to high. Multi-AZ is the lowest-cost option. Use Multi-AZ when you can't risk losing data because of hardware failure or you can't afford the downtime required by other options in your response to an outage.

A CRM company has a SaaS (Software as a Service) application that feeds updates to other in-house and third-party applications. The SaaS application and the in-house applications are being migrated to use AWS services for this inter-application communication. As a Solutions Architect, which of the following would you suggest to asynchronously decouple the architecture? a.) Use Amazon Simple Notification Service (SNS) to communicate between systems and decouple the architecture b.) Use Elastic Load Balancing for effective decoupling of system architecture c.) Use Amazon Simple Queue Service (SQS) to decouple the architecture d.) Use Amazon EventBridge to decouple the system architecture

d.) Use Amazon EventBridge to decouple the system architecture Both Amazon EventBridge and Amazon SNS can be used to develop event-driven applications, but for this use case, EventBridge is the right fit. Amazon EventBridge is recommended when you want to build an application that reacts to events from SaaS applications and/or AWS services. Amazon EventBridge is the only event-based service that integrates directly with third-party SaaS partners. Amazon EventBridge also automatically ingests events from over 90 AWS services without requiring developers to create any resources in their account. Further, Amazon EventBridge uses a defined JSON-based structure for events and allows you to create rules that are applied across the entire event body to select events to forward to a target. Amazon EventBridge currently supports over 15 AWS services as targets, including AWS Lambda, Amazon SQS, Amazon SNS, and Amazon Kinesis Streams and Firehose, among others. At launch, Amazon EventBridge has limited throughput (see Service Limits), which can be increased upon request, and a typical latency of around half a second.
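
A minimal boto3 sketch of the decoupling: a rule matches events from the SaaS application (via a partner event source or a custom event bus) and forwards them to a target such as an SQS queue consumed by the in-house applications. The bus name, event pattern, and ARNs are placeholders:

    import boto3
    import json

    events = boto3.client("events")

    # Rule on a custom event bus that matches update events from the SaaS app.
    events.put_rule(
        Name="saas-updates",
        EventBusName="crm-bus",  # placeholder bus, assumed to exist
        EventPattern=json.dumps({"source": ["crm.saas.app"]}),  # illustrative pattern
    )

    # Route matched events to an SQS queue; the queue's access policy must allow
    # events.amazonaws.com to send messages to it.
    events.put_targets(
        Rule="saas-updates",
        EventBusName="crm-bus",
        Targets=[{"Id": "inhouse-queue", "Arn": "arn:aws:sqs:us-west-1:123456789012:updates"}],
    )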

A company has developed a popular photo-sharing website using a serverless pattern on the AWS Cloud using API Gateway and AWS Lambda. The backend uses an RDS PostgreSQL database. The website is experiencing high read traffic and the Lambda functions are putting an increased read load on the RDS database. The architecture team is planning to increase the read throughput of the database, without changing the application's core logic. As a Solutions Architect, what do you recommend? a.) Use Amazon DynamoDB b.) Use Amazon RDS Multi-AZ feature c.) Use Amazon ElastiCache d.) Use Amazon RDS Read Replicas

d.) Use Amazon RDS Read Replicas Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
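
Adding a read replica is a single API call, after which the Lambda functions only need to point their read connections at the replica endpoint. A minimal boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the existing PostgreSQL instance; read traffic can
    # then be directed to the replica's endpoint to offload the primary.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="photos-db-replica-1",  # placeholder replica name
        SourceDBInstanceIdentifier="photos-db",      # placeholder source instance
        DBInstanceClass="db.r5.large",               # illustrative size
    )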

A niche social media application allows users to connect with sports athletes. As a solutions architect, you've designed the architecture of the application to be fully serverless using API Gateway & AWS Lambda. The backend uses a DynamoDB table. Some of the star athletes using the application are highly popular, and therefore DynamoDB has increased the RCUs. Still, the application is experiencing a hot partition problem. What can you do to improve the performance of DynamoDB and eliminate the hot partition problem without a lot of application refactoring? a.) Use DynamoDB Global Tables b.) Use Amazon ElastiCache c.) Use DynamoDB Streams d.) Use DynamoDB DAX

d.) Use DynamoDB DAX Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement - from milliseconds to microseconds - even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. DAX is transparent to the application, won't require refactoring, and will cache the hot keys. Therefore, this is the correct option.

A small rental company had 5 employees, all working under the same AWS cloud account. These employees deployed their applications built for various functions- including billing, operations, finance, etc. Each of these employees has been operating in their own VPC. Now, there is a need to connect these VPCs so that the applications can communicate with each other. Which of the following is the MOST cost-effective solution for this use-case? a.) Use an Internet Gateway b.) Use a Direct Connect c.) Use a NAT Gateway d.) Use VPC Peering

d.) Use VPC Peering A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection). VPC Peering helps connect two VPCs and is not transitive. To connect VPCs together, the best available option is to use VPC peering.
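
A minimal boto3 sketch of peering two of the VPCs in the same account: request, accept, and then add a route on each side (all IDs and CIDRs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Request and accept the peering connection (same account, so both sides can
    # be handled with the same credentials).
    pcx = ec2.create_vpc_peering_connection(VpcId="vpc-0aaa11111", PeerVpcId="vpc-0bbb22222")
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Each VPC's route table needs a route to the peer VPC's CIDR via the peering
    # connection (repeat in the other direction for return traffic).
    ec2.create_route(
        RouteTableId="rtb-0abc12345",        # placeholder route table
        DestinationCidrBlock="10.1.0.0/16",  # peer VPC CIDR (placeholder)
        VpcPeeringConnectionId=pcx_id,
    )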

A company wants to grant access to an S3 bucket to users in its own AWS account as well as to users in another AWS account. Which of the following options can be used to meet this requirement? a.) Use either a bucket policy or a user policy to grant permission to users in its account as well as to users in another account b.) Use a user policy to grant permission to users in its account as well as to users in another account c.) Use permissions boundary to grant permission to users in its account as well as to users in another account d.) Use a bucket policy to grant permission to users in its account as well as to users in another account

d.) Use a bucket policy to grant permission to users in its account as well as to users in another account A bucket policy is a type of resource-based policy that can be used to grant permissions to the principal that is specified in the policy. Principals can be in the same account as the resource or in other accounts. For cross-account permissions to other AWS accounts or users in another account, you must use a bucket policy.
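
A minimal boto3 sketch of such a bucket policy; the bucket name and both account IDs are placeholders:

    import boto3
    import json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            # Principals from the bucket owner's account and from another account.
            "Principal": {"AWS": [
                "arn:aws:iam::111111111111:root",  # own account (placeholder)
                "arn:aws:iam::222222222222:root",  # other account (placeholder)
            ]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::shared-bucket",
                "arn:aws:s3:::shared-bucket/*",
            ],
        }],
    }

    s3.put_bucket_policy(Bucket="shared-bucket", Policy=json.dumps(policy))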

A financial services firm has traditionally operated with an on-premise data center and would like to create a disaster recovery strategy leveraging the AWS Cloud. As a Solutions Architect, you would like to ensure that a scaled-down version of a fully functional environment is always running in the AWS cloud, and in case of a disaster, the recovery time is kept to a minimum. Which disaster recovery strategy is that? a.) Backup and Restore b.) Multi Site c.) Pilot Light d.) Warm Standby

d.) Warm Standby The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud. A warm standby solution extends the pilot light elements and preparation. It further decreases the recovery time because some services are always running. By identifying your business-critical systems, you can fully duplicate these systems on AWS and have them always on.

A development team has configured an Elastic Load Balancer for host-based routing. The idea is to support multiple subdomains and different top-level domains. The rule *.example.com matches which of the following? a.) example.com b.) EXAMPLE.COM c.) example.test.com d.) test.example.com

d.) test.example.com You can use host conditions to define rules that route requests based on the hostname in the host header (also known as host-based routing). This enables you to support multiple subdomains and different top-level domains using a single load balancer. A hostname is not case-sensitive, can be up to 128 characters in length, and can contain any of the following characters: 1. A-Z, a-z, 0-9 2. the hyphen (-) and period (.) characters 3. * (matches 0 or more characters) 4. ? (matches exactly 1 character) You must include at least one "." character. You can include only alphabetical characters after the final "." character. The rule *.example.com matches test.example.com but doesn't match example.com.
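
For reference, a host-based rule like this is created on an ALB listener. A minimal boto3 sketch with placeholder ARNs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Forward requests whose Host header matches *.example.com (for example,
    # test.example.com but not example.com) to a specific target group.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-west-1:123456789012:listener/app/my-alb/xxx/yyy",
        Priority=10,
        Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["*.example.com"]}}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/subdomain-tg/zzz",
        }],
    )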

