AWS Solution Architect Sample Questions
A company wants to build a brand new application on the AWS Cloud. They want to ensure that this application follows the Microservices architecture. Which of the following services can be used to build this sort of architecture? Choose 3 answers from the options given below.
1. AWS Lambda 2. AWS ECS 3. AWS API Gateway **AWS Lambda is a serverless compute service that allows you to build independent services. The Elastic Container Service (ECS) can be used to manage and orchestrate containers. API Gateway is a serverless component for creating and managing access to APIs.
There is an urgent requirement to monitor some database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this? Choose 2 answers from the options given below.
1. Amazon CloudWatch 2. Amazon Simple Notification Service **Amazon CloudWatch will be used to monitor the database metrics (such as IOPS) from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.
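A minimal sketch of this pattern using boto3 (the instance identifier, topic name, metric, and threshold below are hypothetical and would depend on what actually needs monitoring):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
sns = boto3.client("sns")

# Create (or reuse) an SNS topic that will receive the notification.
topic_arn = sns.create_topic(Name="rds-alerts")["TopicArn"]

# Alarm on a metric of a hypothetical RDS instance "mydb"; the alarm action publishes to SNS.
cloudwatch.put_metric_alarm(
    AlarmName="mydb-high-write-iops",
    Namespace="AWS/RDS",
    MetricName="WriteIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)
```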
You are responsible for deploying a critical application to AWS. It is required to ensure that the controls set for this application meet PCI compliance. Also, there is a need to monitor web application logs to identify any malicious activity. Which of the following services can be used to fulfill this requirement? Choose 2 answers from the options given below.
1. Amazon CloudWatch Logs 2. Amazon CloudTrail **AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.
You have a set of EC2 Instances that support an application. They are currently hosted in the US Region. In the event of a disaster, you need a way to ensure that you can quickly provision the resources in another region. How could this be accomplished? Choose 2 answers from the options given below.
1. Create EBS Snapshots and then copy them to the destination region. 2. Create AMIs for the underlying instances. **AMIs can be used to create a snapshot or template of the underlying instance. You can then copy the AMI to another region. You can also make snapshots of the volumes and then copy them to the destination region.
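As a rough boto3 sketch (the AMI ID, snapshot ID, and regions are placeholders), the copy is initiated from the destination region:

```python
import boto3

# Run the copy calls from the destination (DR) region and pull the AMI/snapshot across.
ec2_dr = boto3.client("ec2", region_name="eu-west-1")  # hypothetical DR region

# Copy an AMI that was created in us-east-1 into the DR region.
ec2_dr.copy_image(
    Name="app-server-dr-copy",
    SourceImageId="ami-0123456789abcdef0",    # hypothetical AMI ID
    SourceRegion="us-east-1",
)

# Copy an EBS snapshot of a data volume the same way.
ec2_dr.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    SourceRegion="us-east-1",
)
```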
A customer planning on hosting an AWS RDS instance needs to ensure that the underlying data is encrypted. How can this be achieved? Choose 2 answers from the options given below.
1. Ensure that the right instance class is chosen for the underlying instance. 2. Encrypt the database during creation. **Encryption for the database can be done during the creation of the database. Also, you need to ensure that the underlying instance type supports DB encryption.
Your company has a requirement to host a static web site in AWS. Which of the following steps would help implement a quick and cost-effective solution for this requirement? Choose 2 answers from the options given below. Each answer forms a part of the solution.
1. Upload the static content to an S3 bucket. 2. Enable web site hosting for the S3 bucket. **You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.
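A short boto3 sketch of the two steps, assuming a hypothetical bucket name (the bucket would also need public read access configured for the site to be reachable):

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-static-site"  # hypothetical bucket name

# Step 1: upload the static content.
s3.upload_file("index.html", bucket, "index.html", ExtraArgs={"ContentType": "text/html"})

# Step 2: enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```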
A company hosts data in S3. There is a requirement to control access to the S3 buckets. Which are the 2 ways in which this can be achieved?
1. Use Bucket Policies. 2. Use IAM user policies. **Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources.
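A minimal example of the bucket-policy (resource-based) approach with boto3; the account ID, user name, and bucket name are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"  # hypothetical bucket name

# Resource-based policy: allow a specific IAM user read-only access to the bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/analyst"},  # hypothetical principal
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```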
Your company currently has an infrastructure hosted on-premises. You have been requested to devise an architecture on AWS for migrating some of the on-premises components. A current concern is the data storage layer. The solution should also require minimal administrative overhead for the underlying infrastructure in AWS. Which of the following would be included in your proposed architecture? Choose 2 answers from the options given below.
1. Use DynamoDB to store data in tables. 2. Use the Simple Storage Service to store data. **Both the Simple Storage Service and DynamoDB are fully managed, serverless offerings from AWS: you don't need to maintain servers, and your applications get automated high availability.
You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internal-facing applications accessed only over VPN. Both Internet-facing and internal-facing applications must be able to leverage at least 3 AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements?
6 **Since each subnet resides in a single Availability Zone, and you need separate subnets for the internet-facing and the internal (VPN-only) applications in each of 3 AZs, you will need 2 x 3 = 6 subnets.
You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?
A single Amazon S3 bucket **Amazon S3 is the perfect storage solution for the audio and text files. It is a highly available and durable object storage service that scales automatically, so you do not need to know the storage capacity requirements in advance.
A company wants to create standard templates for deployment of their Infrastructure. These would also be used to provision resources in another region during disaster recovery scenarios. Which AWS service can be used in this regard?
AWS CloudFormation **AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
A company wants to have a NoSQL database hosted on the AWS Cloud, but does not have the necessary staff to manage the underlying infrastructure. Which of the following choices would be ideal for this requirement?
AWS DynamoDB **Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
Your company currently has an entire data warehouse of assets that needs to be migrated to the AWS Cloud. Which of the following services should this be migrated to?
AWS Redshift **Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.
Your Operations department is using an incident-based application hosted on a set of EC2 Instances. These instances are placed behind an Auto Scaling Group to ensure the right number of instances is in place to support the application. The Operations department has expressed dissatisfaction with poor application performance at 9:00 AM each day. However, it is also noted that system performance returns to optimal at 9:45 AM. What can be done to ensure that this issue gets fixed?
Add a Scheduled Scaling Policy at 8:30 AM. **Scheduled Scaling can be used to ensure that the capacity is peaked before 9:00 AM each day. AWS Documentation further mentions the following on Scheduled Scaling: Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application
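As a boto3 sketch (the group name, capacities, and schedule are assumptions), a recurring scheduled action scales the group out at 08:30 every day so capacity is in place before the 9:00 AM spike:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out at 08:30 UTC every day, ahead of the 09:00 AM load spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ops-app-asg",        # hypothetical Auto Scaling Group
    ScheduledActionName="pre-warm-before-9am",
    Recurrence="30 8 * * *",                   # cron syntax, evaluated in UTC
    MinSize=4,
    DesiredCapacity=6,
    MaxSize=10,
)
```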
An infrastructure is being hosted in AWS using the following resources: a) A couple of EC2 Instances serving a web-based application b) An Elastic Load Balancer in front of the EC2 Instances c) An AWS RDS instance which has Multi-AZ enabled Which of the following can be added to the setup to ensure scalability?
Add an Auto Scaling Group to the setup. **AWS Auto Scaling enables you to configure automatic scaling for the scalable AWS resources for your application in a matter of minutes. AWS Auto Scaling uses the Auto Scaling and Application Auto Scaling services to configure scaling policies for your scalable AWS resources.
An application with a 150 GB relational database runs on an EC2 Instance. The application will be used frequently, with a high number of database read and write requests. What is the most cost-effective storage type among the options below?
Amazon EBS Provisioned IOPS SSD **The question focuses on the most cost-effective storage option for the application. Per AWS documentation, Provisioned IOPS (SSD) volumes are designed for applications that require high input/output operations per second, primarily large database workloads such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, and Oracle. Throughput Optimized HDD, although cheaper than Provisioned IOPS, is designed for throughput-intensive workloads such as data warehouses, big data, and log processing.
A company is planning on hosting a set of EC2 Instances on the AWS Cloud. They also need to ensure that data can be stored on the EC2 Instances. Which block level storage device could make this possible?
Amazon EBS Volumes **An Amazon EBS Volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can use EBS Volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans.
You are deploying an application to track the GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?
Amazon Kinesis **Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.
You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to receive over 150 PUT requests per second. What should you do to ensure optimal performance?
Amazon S3 will automatically manage performance at this scale. **Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. It is simple to increase your read or write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
Your company has an application that takes care of uploading, processing and publishing videos posted by users. The current architecture for this application includes the following: a) A set of EC2 Instances to transfer user uploaded videos to S3 buckets b) A set of EC2 worker processes to process and publish the videos c) An Auto Scaling Group for the EC2 worker processes Which of the following can be added to the architecture to make it more reliable?
Amazon SQS **Amazon SQS is used to decouple systems. It can store requests to process videos to be picked up by the worker processes. AWS Documentation mentions the following: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
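A rough boto3 sketch of the decoupling (the queue name and message body are hypothetical): the upload tier enqueues a job and the worker tier polls for it independently.

```python
import boto3

sqs = boto3.client("sqs")

# Queue that holds "video uploaded" jobs for the worker fleet (hypothetical name).
queue_url = sqs.create_queue(QueueName="video-processing-jobs")["QueueUrl"]

# Upload tier: enqueue a job instead of calling a worker directly.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"s3_key": "uploads/video123.mp4"}')

# Worker tier (runs on the Auto Scaled instances): poll, process, then delete the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    # ... transcode and publish the video referenced in msg["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```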
Your Development team wants to start making use of EC2 Instances to host their Application and Web servers. In the space of automation, they want the Instances to always download the latest version of the Web and Application servers when they are launched. As an architect, what would you recommend for this scenario?
Ask the Development team to create scripts which can be added to the User Data section when the instance is launched. **When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
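A small boto3 sketch of passing a shell script as user data at launch; the AMI ID and download URL are placeholders for whatever the Development team actually uses:

```python
import boto3

ec2 = boto3.client("ec2")

# Shell script executed once at first boot; pulls the latest application build (hypothetical URL).
user_data = """#!/bin/bash
yum update -y
mkdir -p /opt/app
curl -o /opt/app/latest.tar.gz https://artifacts.example.com/app/latest.tar.gz
tar -xzf /opt/app/latest.tar.gz -C /opt/app
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # boto3 base64-encodes this value for you
)
```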
You plan on creating a VPC from scratch and launching EC2 Instances in the subnet. What should be done to ensure that the EC2 Instances are accessible from the Internet?
Attach an Internet Gateway to the VPC and add a route for 0.0.0.0/0 to the Route table.
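A minimal boto3 sketch of those two steps (the VPC and route table IDs are placeholders); the instances would also need public IPs or Elastic IPs and a suitable security group:

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"          # hypothetical IDs
route_table_id = "rtb-0123456789abcdef0"

# Create an Internet Gateway, attach it to the VPC, and add a default route pointing to it.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
ec2.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```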
A company has assigned two web server instances to an Elastic Load Balancer. However, the web application is not reachable via the URL of the Elastic Load Balancer that is meant to serve the web app data from the EC2 instances. How might you resolve the issue so that your instances are serving the web app data to the public Internet? Choose the correct answer from the options given below.
Attach an Internet Gateway to the VPC and route it to the subnet. **If the Internet Gateway is not attached to the VPC, which is a prerequisite for the instances to be accessed from the Internet, the instances will not be reachable.
Currently, you're helping design and architect a highly available application. After building the initial environment, you discover that a part of your application does not work correctly until port 443 is added to the security group. After adding port 443 to the appropriate security group, how much time will it take before the changes are applied and the application begins working correctly? Choose the correct answer from the options below.
Changes apply instantly to the security group, and the application should be able to respond to 443 requests. **Some systems for setting up firewalls let you filter on source ports. Security groups let you filter only on destination ports. When you add or remove rules, they are automatically applied to all instances associated with the security group.
Your current setup in AWS consists of the following architecture: 2 public subnets, one hosting web servers accessed by users across the Internet and the other hosting the database server. Which of the following changes to the architecture adds a better security boundary to the resources hosted in this setup?
Consider moving the database server to a private subnet. **The ideal setup is to host the web server in the public subnet so that it can be accessed by users on the Internet. The database server can be hosted in the private subnet.
A company is running three production web server reserved EC2 Instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below.
Consider not using a Multi-AZ RDS deployment for the development database. **Multi-AZ databases are better for production environments rather than for development environments, so you can reduce costs by not using these for development environments. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
A company has a set of EC2 Linux-based instances hosted in AWS. There is a need for a standard file system interface for files shared across all the Linux-based instances. Which of the following can be used for this purpose?
Consider using AWS EFS **When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
A company is planning on using the AWS Redshift service. The Redshift service and data on it would be used continuously for the next 3 years as per the current business plan. Which of the following would be the most cost-effective solution in this scenario?
Consider using Reserved Instances for the Redshift Cluster. **If you intend to keep your Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing reserved node offerings. These offerings provide significant savings over on-demand pricing, but they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or three-year duration.
Instances in your private subnet hosted in AWS need access to important documents in S3. Due to the confidential nature of these documents, you have to ensure that this traffic does not traverse the Internet. As an architect, how would you implement this solution?
Consider using a VPC Endpoint. **A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not leave the Amazon network.
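A sketch of creating a gateway endpoint for S3 with boto3 (the region, VPC, and route table IDs are placeholders); the endpoint adds routes to S3 through the AWS network instead of the Internet:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic from the listed route tables reaches S3 without leaving AWS.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],    # route table of the private subnet
)
```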
A CloudFront distribution is being used to distribute content from an S3 bucket. It is required that only a particular set of users get access to certain content. How can this be accomplished?
Create CloudFront signed URLs and then distribute these URLs to the users. **Many companies that distribute content via the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content using CloudFront, you can do the following: Require that your users access your private content by using special CloudFront signed URLs or signed cookies. Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn't required, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
You are developing a new mobile application which is expected to be used by thousands of customers. You are considering storing user preferences in AWS, and need a data store to save the same. Each data item is expected to be 20KB in size. The solution needs to be cost-effective, highly available, scalable and secure. How would you design the data layer?
Create a DynamoDB table with the required Read and Write capacity and use it as the data layer **In this case, since each data item is 20KB and given the fact that DynamoDB is an ideal data layer for storing user preferences, this would be an ideal choice. Also, DynamoDB is a highly scalable and available service.
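A minimal boto3 sketch of such a table (the name, key, and capacity values are assumptions, sized for illustration only):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table for user preferences, keyed on the user ID.
dynamodb.create_table(
    TableName="UserPreferences",
    AttributeDefinitions=[{"AttributeName": "UserId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "UserId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```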
A company currently hosts their architecture in the US region. They now need to duplicate this architecture in the Europe region and extend the application hosted on this architecture to the new region. In order to ensure that users across the globe get the same seamless experience from either setup, what among the following needs to be done?
Create a Geolocation Route 53 Policy to route traffic based on the user's location. **Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
Your company currently has data hosted in an Amazon Aurora MySQL DB. Since this data is critical, there is a need to ensure that it can be made available in another region in case of a disaster. How can this be achieved?
Create a Read Replica for the database. **You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into a region that is closer to your users, and make it easier to migrate from one region to another.
A company has an on-premises infrastructure which they want to extend to the AWS Cloud. There is a need to ensure that communication across both environments is possible over the Internet. What would you create in this case to fulfill this requirement?
Create a VPN connection between the on-premises and the AWS Environment **A VPC VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC.
Your company is planning on using Route 53 as the DNS provider. There is a need to ensure that the company's domain name points to an existing CloudFront distribution. How can this be achieved?
Create an Alias record which points to the CloudFront distribution. **While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic, Application, or Network Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Route 53 record in the same hosted zone. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value.
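A short boto3 sketch of the alias record (the hosted zone ID, domain, and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS documents for CloudFront alias targets):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",   # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",            # CloudFront alias zone ID
                    "DNSName": "d111111abcdef8.cloudfront.net",  # hypothetical distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```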
Your company has a set of applications that make use of Docker containers used by the Development team. There is a need to move these containers to AWS. Which of the following methods could be used to set up these Docker containers in a separate environment in AWS?
Create an Elastic Beanstalk environment with the necessary Docker containers. **The Elastic Beanstalk service can be used to host Docker containers. AWS Documentation further mentions the following: Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
You have both production and development instances running in your VPC. For better security, it is required that people responsible for the development instances do not have access to the production instances. Which of the following would be the best way to accomplish this using policies? Choose the correct answer from the options given below.
Define tags on the development and production servers and add a condition to the IAM Policy which allows access to specific tags. **You can easily add tags to define which instances are production and which ones are development instances. These tags can then be used while controlling access via an IAM Policy.
You have a business-critical two-tier web application currently deployed in 2 Availability Zones in a single region, using Elastic Load Balancing and Auto Scaling. The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes offline and if Auto Scaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to ensure this?
Deploy in 3 AZs with the Auto Scaling minimum set to handle 50 percent of peak load per zone. **Since the requirement states that the application should never go down even if an AZ is not available, we need to maintain 100% availability. Options A and D are incorrect because multi-region deployment is not possible for ELB; ELBs can manage traffic within a region, not between regions. Option B is incorrect because if one AZ goes down, we would be operating at only 66% of peak capacity and not the required 100%.
An application is currently hosted on an EC2 Instance which has attached EBS Volumes. The data on these volumes is frequently accessed. But after a duration of a week, the documents need to be moved to infrequent access storage. Which of the following EBS volume types provides cost efficiency for the moved documents?
EBS Cold HDD **Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than st1, sc1 is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage.
Your company has confidential documents stored in the Simple Storage Service. Due to compliance requirements, there is a need for the data in the S3 bucket to be available in a different geographical location. As an architect, what change would you make to comply with this requirement?
Enable Cross-Region Replication for the S3 bucket. **This is mentioned clearly as a use case for S3 Cross-Region Replication. You might configure Cross-Region Replication on a bucket for various reasons, including the following: Compliance requirements - Although, by default, Amazon S3 stores your data across multiple geographically distant Availability Zones, compliance requirements might dictate that you store data at even further distances. Cross-region replication allows you to replicate data between distant AWS Regions to satisfy these compliance requirements.
A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?
Enable Server-side encryption on the S3 bucket. **Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.
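A minimal boto3 sketch of turning on default server-side encryption (SSE-S3) for a hypothetical bucket, so every new object is encrypted at rest:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```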
A company currently storing a set of documents in the AWS Simple Storage Service is worried about potential loss if these documents are ever deleted. Which of the following can be used to ensure protection from loss of the underlying documents in S3?
Enable Versioning for the underlying S3 bucket. **Versioning is enabled at the bucket level and can be used to recover prior versions of an object; deleting a versioned object only inserts a delete marker, so previous versions can still be restored.
You have created an AWS Lambda function that will write data to a DynamoDB table. Which of the following must be in place to ensure that the Lambda function can interact with the DynamoDB table?
Ensure an IAM Role is attached to the Lambda function which has the required DynamoDB privileges. **Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. Permissions you grant to this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions that you grant to the IAM role: - If your Lambda function code accesses other AWS resources, such as to read an object from an S3 bucket or write logs to CloudWatch Logs, you need to grant permissions for relevant Amazon S3 and CloudWatch actions to the role. - If the event source is stream-based (Amazon Kinesis Data Streams and DynamoDB streams), AWS Lambda polls these streams on your behalf. AWS Lambda needs permissions to poll the stream and read new records on the stream so you need to grant the relevant permissions to this role.
A customer has an instance hosted in the AWS Public Cloud. The VPC and subnet used to host the instance have been created with the default settings for the Network Access Control Lists. An IT Administrator needs to be provided secure access to the underlying instance. How can this be accomplished?
Ensure that the security group allows Inbound SSH traffic from the IT Administrator's Workstation. **Since Security Groups are stateful, we do not have to configure outbound traffic: responses to allowed inbound traffic are automatically allowed back out. Note: The default network ACL is configured to allow all traffic to flow in and out of the subnets to which it is associated. Since the question does not mention a custom network ACL, we assume the default one is in use.
A VPC has been set up with a subnet and an Internet Gateway. An EC2 instance in the subnet has a public IP, but you are still not able to connect to it via the Internet. The right security groups are also in place. What should you do to connect to the EC2 Instance from the Internet?
Ensure the right route entry is there in the Route table. **You have to ensure that the Route table has an entry to the Internet Gateway because this is required for instances to communicate over the Internet.
A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an offsite backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?
Gateway-Cached Volumes with snapshots scheduled to Amazon S3 **Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.
An application consists of a couple of EC2 Instances. One EC2 Instance hosts a web application and the other Instance hosts the database server. Which of the following changes can be made to ensure high availability of the database layer?
Have another EC2 Instance in another Availability Zone with replication configured. **To ensure high availability, place a second database EC2 Instance in another Availability Zone; even if one goes down, the other one will still be available.
You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point, you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?
Remove public read access and use signed URLs with expiry dates. **Option B is incorrect, because CloudFront is only used for the distribution of content across edge or region locations, and not for restricting access to content. Option C is not feasible. Because of their dynamic nature, blocking IPs is challenging and you will not know which sites are accessing your main site. Option D is incorrect since storing photos on an EBS Volume is neither a good practice nor an ideal architectural approach for an AWS Solutions Architect.
A company has a workflow that sends video files from their on-premises system to AWS for transcoding. They use EC2 worker instances to pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?
SQS helps to facilitate horizontal scaling of encoding tasks. **Even though SQS guarantees the order of messages for FIFO queues, the main reason for using it is because it helps in horizontal scaling of AWS resources and is used for decoupling systems. SQS can neither be used for transcoding output nor for checking the health of worker instances. The health of worker instances can be checked via ELB or CloudWatch.
There is a requirement to host a database server. This server should not be able to connect to the Internet except while downloading required database patches. Which of the following solutions would best satisfy all the above requirements? Choose the correct answer from the options below.
Set up the database in a private subnet which connects to the Internet via a NAT Instance. **The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application, while maintaining back-end servers that aren't publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.
You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video gets transcoded by another instance based on the queuing system. You have a large backlog of videos that need to be transcoded and you would like to reduce this backlog by adding more instances. These instances will only be needed until the backlog is reduced. What Amazon EC2 Instance type should you use to reduce the backlog in the most cost-efficient way?
Spot Instances **Since the above scenario is similar to a batch processing job, the best instance type to use is a Spot Instance. Spot Instances are normally used in batch processing jobs. Since these jobs don't last for an entire year, they can be bid upon, allocated, and deallocated as requested. Reserved Instances/Dedicated Instances cannot be used since this is not a 100% utilized application. There is no mention of a continuous demand for work in the above scenario, hence there is no need to use On-Demand Instances.
A company wants to store their documents in AWS. Initially, these documents will be used frequently, and after a duration of 6 months, they will need to be archived. How would you architect this requirement?
Store the files in Amazon S3 and create a Lifecycle Policy to archive the files after 6 months. **Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: - Transition actions - In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. - Expiration actions - In which you specify when the objects expire. Amazon S3 deletes the expired objects on your behalf.
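A small boto3 sketch of such a lifecycle rule (the bucket name is hypothetical; 180 days is used to approximate the 6-month requirement):

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Glacier roughly 6 months (180 days) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-document-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-6-months",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # apply to all objects
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
        }]
    },
)
```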
You currently manage a set of web servers hosted on EC2 Servers with public IP addresses. These IP addresses are mapped to domain names. There was an urgent maintenance activity that had to be carried out on the servers and the servers had to be restarted. Now the web application hosted on these EC2 Instances is not accessible via the domain names configured earlier. Which of the following could be a reason for this?
The public IP addresses have changed after the instance was stopped and started **By default, the public IP address of an EC2 Instance is released after the instance is stopped and started. Hence, the earlier IP address which was mapped to the domain names would have become invalid now.
A company's requirement is to have a Stack-based model for its resources in AWS. There is a need to have different stacks for the Development and Production environments. Which of the following can be used to fulfill this required methodology?
Use AWS OpsWorks to define the different layers for your application. **The requirement can be fulfilled via the OpsWorks service. The AWS Documentation given below supports this requirement: AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application server. You can deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases.
A company is planning to run a number of Admin related scripts using the AWS Lambda service. There is a need to detect errors that occur while the scripts run. How can this be accomplished in the most effective manner?
Use CloudWatch metrics and logs to watch for errors. **AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
You have the following architecture deployed in AWS: a) A set of EC2 Instances which sit behind an ELB b) A database hosted in AWS RDS Of late, the performance of the database has been degrading due to a high number of read requests. Which of the following can be added to the architecture to alleviate the performance issue?
Use ElastiCache in front of the database. **Amazon ElastiCache is an in-memory cache which can be used to cache common read requests, reducing the read load on the underlying database.
When managing permissions for the API Gateway, what can be used to ensure that the right level of permissions are given to Developers, IT Admins and users? These permissions should be easily managed.
Use IAM Policies to create different policies for different types of users. **You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes: - To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway. - To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway.
You are designing an architecture on AWS with disaster recovery in mind. Currently the architecture consists of an ELB and underlying EC2 Instances in a primary and secondary region. How can you establish a switchover in case of failure in the primary region?
Use Route 53 Health Checks and then do a failover. **If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Route 53 can route traffic to the other web server.
A company website is set to launch in the upcoming weeks. There is a probability that the traffic will be quite high during the initial weeks. In the event of a load failure, how can you set up DNS failover to a static website? Choose the correct answer from the options given below.
Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution. **Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.
A database hosted in AWS is currently encountering a sustained high volume of write operations and is not able to handle the load. What can be done to the architecture to ensure that the write operations are not lost under any circumstance?
Use SQS Queues to queue the database writes. **SQS Queues can be used to store the pending database writes, and these writes can then be added to the database. It is the perfect queuing system for such architecture. Note that adding more IOPS may help the situation but will not totally eliminate chances of losing database writes.
Your infrastructure in AWS currently consists of a private and public subnet. The private subnet consists of database servers and the public subnet has a NAT Instance which helps the instances in the private subnet to communicate with the Internet. The NAT Instance is now becoming a bottleneck. Which of the following changes to the current architecture can help prevent this issue from occurring in the future?
Use a NAT Gateway instead of the NAT Instance. **The NAT Gateway is a managed resource which can be used in place of a NAT Instance. While you can consider changing the instance type for the underlying NAT Instance, this does not guarantee that the issue will not reoccur in the future.
A company needs to extend their storage infrastructure to the AWS Cloud. The storage needs to be available as iSCSI devices for on-premises application servers. Which of the following would be able to fulfill this requirement?
Use the AWS Storage Gateway-cached volumes service. **By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.
A company is planning to use the AWS ECS service to work with containers. There is a need for the least amount of administrative overhead while launching containers. How can this be achieved?
Use the Fargate launch type in AWS ECS. **The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you.
Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 Instances. Files submitted by your premium customers must be transformed with the highest priority. How would you implement such a system?
Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue. **The best way is to use 2 SQS queues. Each queue can be polled separately. The high priority queue can be polled first.
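A rough sketch of the worker-side polling loop in boto3 (the queue names are hypothetical); the high-priority queue is always checked first:

```python
import boto3

sqs = boto3.client("sqs")

high_q = sqs.create_queue(QueueName="transform-high-priority")["QueueUrl"]
default_q = sqs.create_queue(QueueName="transform-default")["QueueUrl"]

def next_job():
    """Return (queue_url, message), always draining the high-priority queue first."""
    for queue_url in (high_q, default_q):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None

queue_url, msg = next_job()
if msg:
    # ... run the transformation described in msg["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```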
A customer wants to import their existing virtual machines to the cloud. Which service can they use for this? Choose one answer from the options given below.
VM Import/Export **VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.