AWS Practice Exam Questions
You are working on a web application which needs somewhere to store user session state. Which of the following approaches is the best way to deal with user session state?
Use an ElastiCache cluster. Store session state in RDS. ElastiCache is the best option for storing session state as it is scalable, highly available, and can be accessed by multiple web servers. RDS is not optimal but could be used for storing session state; since you need to provide two answers, it is the only other viable answer.
A developer is deploying a new application to ECS. The application requires permissions to send messages to an SQS queue. Which role should the developer apply the policy to so that the application can access the SQS queue?
The task role attached to the ECS Task. The policy must be attached to the ECS task role so that the application running in the container can access SQS. (The task execution role, by contrast, is used by the ECS agent itself, for example to pull container images and send logs to CloudWatch.)
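As an illustration, here is a minimal sketch of the call the containerized application would make; the queue URL is hypothetical, and the call only succeeds if the task role carries sqs:SendMessage for that queue.

```python
import boto3

# The app running in the ECS task picks up credentials from the task role
# automatically via the container credentials endpoint.
sqs = boto3.client("sqs", region_name="us-east-1")

# Hypothetical queue URL for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"

def publish_order(order_id: str) -> str:
    """Send a message to SQS; requires sqs:SendMessage on the queue in the task role."""
    response = sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=order_id,
    )
    return response["MessageId"]
```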
Which section of the AWS Serverless Application Model template would you use to describe the configuration of a Lambda function and an API Gateway endpoint, if you were deploying your application using AWS SAM?
Transform. Use the Transform section when deploying with the Serverless Application Model: declaring Transform: AWS::Serverless-2016-10-31 identifies the template as a SAM template and enables the SAM resource types, such as AWS::Serverless::Function and AWS::Serverless::Api, that describe your Lambda functions and API Gateway endpoints.
Your application stores files in an S3 bucket located in us-east-1, however many of your users are located in ap-south-1. The files are less than 50MB in size, however users are frequently experiencing delays when attempting to upload files. Which of the following options will maximize the upload speed?
Utilize S3 Transfer Acceleration. S3 Transfer Acceleration is recommended to increase upload speeds and is especially useful in cases where your bucket resides in a Region other than the one from which the file transfer originates. Multipart upload is a good option for large files, e.g. those over 100MB in size.
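A rough sketch of how this might look with boto3, assuming a hypothetical bucket named my-upload-bucket: acceleration is enabled once on the bucket, then uploads go through the accelerate endpoint.

```python
import boto3
from botocore.config import Config

s3_control = boto3.client("s3")

# One-time: enable Transfer Acceleration on the bucket (bucket name is hypothetical).
s3_control.put_bucket_accelerate_configuration(
    Bucket="my-upload-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint (s3-accelerate.amazonaws.com).
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("report.pdf", "my-upload-bucket", "uploads/report.pdf")
```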
You store a new object in Amazon S3 and receive a confirmation that it has been successfully stored. You then immediately make another API call attempting to read this object. Will you be able to read this object immediately?
Yes. S3 has read-after-write consistency, which means you will have access to the object immediately. Amazon S3 buckets in all Regions now provide strong read-after-write consistency for PUTs of new objects as well as for overwrite PUTs and DELETEs.
You are attempting to list the objects contained in an S3 bucket. The bucket contains over 3000 objects and the list-objects command times out and does not complete successfully. However, when you run the same command on a different bucket, it works without errors. What could be the reason for this?
You are running the command on a bucket which contains a large number of resources, and the default page size might be too high. If you see issues when running list commands on a large number of resources, the default page size of 1000 might be too high. This can cause calls to AWS services to exceed the maximum allowed time and generate a "timed out" error. You can use the --page-size option to specify that the AWS CLI request a smaller number of items from each call to the AWS service. The CLI still retrieves the full list, but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call. This gives the individual calls a better chance of succeeding without a timeout. Changing the page size doesn't affect the output; it affects only the number of API calls that need to be made to generate the output.
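For comparison, a small boto3 sketch of the same idea as --page-size, using a hypothetical bucket name: the paginator still returns the complete listing but asks for fewer keys per API call.

```python
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# Equivalent to `aws s3api list-objects-v2 --page-size 100`: the full listing is
# still returned, but each underlying API call fetches at most 100 keys.
for page in paginator.paginate(
    Bucket="my-large-bucket",  # hypothetical bucket name
    PaginationConfig={"PageSize": 100},
):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```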
Your application is trying to upload a 6TB file to S3 and you receive an error message telling you that your proposed upload exceeds the maximum allowed object size. What is the best way to accomplish this file upload?
You cannot fix this, as the maximum size of an S3 object is 5TB. Amazon S3 allows a maximum object size of 5TB. Objects larger than 5GB must be uploaded using the multipart upload API.
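As a sketch of the multipart requirement (bucket and file names are hypothetical), boto3's transfer configuration switches to multipart uploads automatically above a size threshold; each object must still stay within the 5TB limit.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Force multipart behaviour for anything over 100 MB and upload 100 MB parts.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)

# Each individual object must still be 5 TB or less; a 6 TB file would have to be
# split into separate objects before upload.
s3.upload_file("backup-part-1.bin", "my-archive-bucket",
               "backups/backup-part-1.bin", Config=config)
```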
You are using CodeBuild to create a Docker image and add the image to your Elastic Container Registry. Which of the following commands should you include in the buildspec.yml?
docker build -t $REPOSITORY_URI:latest . and docker push $REPOSITORY_URI:latest. Use the docker build command to build the image from the Dockerfile in the current directory (the trailing dot is the build context) and the docker push command to add your image to your Elastic Container Registry.
An organization wishes to use CodeDeploy to automate its application deployments. The organization has asked a developer to advise on which of their services can integrate with CodeDeploy. Which of the following services can the developer advise are compatible with CodeDeploy managed deployments?
CodeDeploy supports EC2, ECS (both the EC2 and Fargate launch types), Lambda, and on-premises servers.
Upon creating your code repository, you remember that you want to receive recommendations on improving the quality of the Java code for all pull requests in the repository. Which of the following services provide this ability?
CodeGuru Reviewer for Java When creating your repository, you have the option of enabling 'Amazon CodeGuru Reviewer for Java.' This will automate reviews of your code to spot problems that can be hard for you to detect, in addition to the recommendations to fix the code. CodeGuru Reviewer would have been the correct answer, but it's not specific to the type of code (always look for the *most* correct answer in exams.) Creating the repository itself requires the use of CodeCommit, which includes the 'Amazon CodeGuru Reviewer for Java' option; that's why CodeCommit is the wrong answer. Finally, CodeBuild is for building code, rather than improving it.
You are developing an online gaming application which needs to synchronize user profile data, preferences and game state across multiple mobile devices. Which of the following Cognito features enables you to do this?
Cognito Sync and Cognito Events. Amazon Cognito Sync is an AWS service and client library that enables cross-device syncing of application-related user data. You can use it to synchronize user profile data across mobile devices and web applications. The client libraries cache data locally so your app can read and write data regardless of device connectivity status. When the device is online, you can synchronize data, and if you set up push sync, notify other devices immediately that an update is available.
You are developing a healthy-eating application which tracks nutrition and water intake on a daily basis. Your users mainly access the application using a mobile device like a cell phone or tablet. You are planning to run a promotion to attract new users by providing a free trial period and you would like to make it easy for guest users to trial your application. Which of the following can you use to configure access for guest users?
Cognito Identity Pools. With a Cognito identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as federation through third-party IdPs.
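A minimal sketch of the guest flow, assuming a hypothetical identity pool with unauthenticated identities enabled: the device obtains an identity ID and temporary credentials without signing in.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Guests have no AWS credentials yet, so the Cognito calls are made unsigned.
identity = boto3.client("cognito-identity", region_name="us-east-1",
                        config=Config(signature_version=UNSIGNED))

# Hypothetical identity pool with "unauthenticated identities" enabled.
POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"

identity_id = identity.get_id(IdentityPoolId=POOL_ID)["IdentityId"]
creds = identity.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

# Temporary, limited-privilege credentials scoped by the pool's unauthenticated role.
dynamodb = boto3.client(
    "dynamodb", region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
```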
A company wants to monitor all traffic to a network interface on their bastion host. They wish to be alerted if there are more than 10 attempts to connect to the host via SSH within a one-hour time interval. What solution can the company employ to meet this requirement?
Configure a VPC flow log with CloudWatch Logs as the destination. Create a CloudWatch metric filter for destination port 22. Create a CloudWatch Alarm trigger. VPC flow logs can be sent to CloudWatch Logs. A CloudWatch metric filter and alarm can be configured to send notifications when the specified criteria are satisfied. CloudTrail is not a supported destination for VPC flow logs. Amazon Inspector cannot be used to inspect network traffic in the way specified by the requirements. It performs vulnerability assessments on the host VM. Lambda functions cannot mount EBS volumes.
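A rough boto3 sketch of the metric filter and alarm, assuming a hypothetical flow log group, SNS topic, and the default flow log record format.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count flow-log records whose destination port is 22 (pattern follows the default
# VPC flow log format; log group and SNS topic names are hypothetical).
logs.put_metric_filter(
    logGroupName="vpc-flow-logs",
    filterName="ssh-attempts",
    filterPattern='[version, account, eni, source, destination, srcport, dstport="22", '
                  'protocol, packets, bytes, windowstart, windowend, action, flowlogstatus]',
    metricTransformations=[{
        "metricName": "SSHConnectionAttempts",
        "metricNamespace": "Bastion",
        "metricValue": "1",
    }],
)

# Alarm when more than 10 attempts occur within a one-hour period.
cloudwatch.put_metric_alarm(
    AlarmName="bastion-ssh-attempts",
    Namespace="Bastion",
    MetricName="SSHConnectionAttempts",
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```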
You are developing an application which will use Cognito to allow authenticated Facebook users to sign-in and use your application. You would like to use Cognito to handle temporary access allowing authenticated users to access product and transaction data that your application stores in S3 and DynamoDB. Which is the best approach?
Configure an Identity Pool to provide temporary AWS credentials to your users to allow temporary access to AWS resources. Cognito is the recommended approach for user sign-up and sign-in for mobile applications which allow access to users with Facebook, Google or Amazon.com credentials. Identity pools enable you to grant your users temporary access to AWS services. User pools are user directories that provide sign-up and sign-in options for your app users.
You are working on a Serverless application written in Python and running in Lambda. You have uploaded multiple versions of your code to Lambda, but would like to make sure your test environment always utilizes the latest version. How can you configure this?
Reference the function using a qualified ARN with the $LATEST suffix. Reference the function using an unqualified ARN. When you create a Lambda function, there is only one version: $LATEST. You can refer to the function using its Amazon Resource Name (ARN). There are two ARNs associated with this initial version: the qualified ARN, which is the function ARN plus a version suffix such as $LATEST, and the unqualified ARN, which is the function ARN without the version suffix. An unqualified ARN always maps to $LATEST, so you can access the latest version using either the qualified ARN with $LATEST or the unqualified function ARN. Lambda also supports creating aliases for each of your Lambda function versions. An alias is a pointer to a specific Lambda function version; aliases are not updated automatically when a new version of the function becomes available.
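A short sketch of the difference when invoking the function (the function name and alias are hypothetical):

```python
import boto3

lam = boto3.client("lambda")

# Unqualified function name/ARN: always resolves to $LATEST.
lam.invoke(FunctionName="report-generator", Payload=b"{}")

# Qualified ARN with the $LATEST suffix: the same thing, stated explicitly.
lam.invoke(FunctionName="report-generator", Qualifier="$LATEST", Payload=b"{}")

# By contrast, an alias pins a specific published version until you repoint it.
lam.invoke(FunctionName="report-generator", Qualifier="prod", Payload=b"{}")
```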
You are using CloudFormation to automate the build of several application servers in your test environment. Which of the following are valid sections that can be used in your CloudFormation template?
Resources, Outputs, Parameters. Parameters, Resources and Outputs are all valid. It is worth learning the CloudFormation template anatomy and understanding how each section relates.
You need to push a docker image to your Amazon ECR repository called my-repository located in us-east-1. Which of the following commands do you need to run in order to achieve this?
Run: docker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repository:latest. Run: aws ecr get-login --no-include-email --region us-east-1, then run the docker login command that was returned. Finally, run: docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repository:latest. The aws ecr get-login command provides an authorization token that is valid for 12 hours. You need to run the command which was returned by the ecr get-login command to authorize you to push images to the ECR repository.
You are building a serverless web application which will serve both static and dynamic content. Which of the following services would you use to create your application?
S3, Lambda, API Gateway. Lambda lets you run code without provisioning servers, API Gateway is a managed service which makes APIs available to your user base in a secure way, and S3 can be used to serve static web content. EC2 and RDS are not serverless. ElastiCache is not required for this solution.
Your application needs to access content located in an S3 bucket which resides in a different AWS account. Which of the following API calls should be used to gain access?
STS:AssumeRole. The STS AssumeRole API call returns a set of temporary security credentials which can be used to access AWS resources, including those in a different account.
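A minimal sketch, assuming a hypothetical role ARN and bucket in the other account:

```python
import boto3

sts = boto3.client("sts")

# Role in the other account that trusts this account (ARN is hypothetical).
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/cross-account-s3-read",
    RoleSessionName="app-s3-access",
)
creds = assumed["Credentials"]

# The temporary credentials are then used to reach the bucket in the other account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.get_object(Bucket="partner-account-bucket", Key="shared/data.json")
```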
A developer has been tasked with writing a new Lambda function that generates statistics from data stored in DynamoDB. The function must be able to be invoked both synchronously and asynchronously. Which of the following AWS services would use synchronous invocations to trigger the Lambda function?
Application Load Balancer and API Gateway. API Gateway and Application Load Balancers both require Lambda functions to be invoked synchronously, as they require a response from the function before returning their own response to the user. S3 and SNS do not require a response from the Lambda function.
Your application is using Kinesis to ingest click-stream data relating to your products from a variety of social media sites. Your company has been trending this quarter because a high profile movie star has recently signed a contract to endorse your products. As a result, the amount of data flowing through Kinesis has increased, causing you to increase the number of shards in your stream from 4 to 6. The application consuming the data runs on a single EC2 instance in us-east-1a with a second instance in us-east-1b which is used as a cold standby in case the primary instance fails. How many consumer instances will you now need in total to cope with the increased number of shards?
1 instance in us-east-1a and 1 instance in us-east-1b. Resharding enables you to increase or decrease the number of shards in a stream in order to adapt to changes in the rate of data flowing through the stream. You should ensure that the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard. However, one worker can process any number of shards, so it's fine if the number of shards exceeds the number of instances. When resharding increases the number of shards in the stream, the corresponding increase in the number of record processors increases the load on the EC2 instances that are hosting them. If the instances are part of an Auto Scaling group, and the load increases sufficiently, the Auto Scaling group adds more instances to handle the increased load.
Your 'forums' table has a primary key of 'comment_id'. Using DynamoDB, you're able to query the data based on the 'comment_id' primary key. You need to be able to query the forums table by userId. What would you add to the table during table creation time?
A secondary index. Creating a secondary index with userId as its key will allow you to query the table by userId.
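A brief sketch of querying such an index with boto3; the index name userId-index is hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("forums")

# Query the secondary index created on userId (index name is hypothetical).
response = table.query(
    IndexName="userId-index",
    KeyConditionExpression=Key("userId").eq("user-42"),
)
for item in response["Items"]:
    print(item["comment_id"])
```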
An organization is hosting their static website on S3, using a custom domain name. Users have started reporting that their web browsers are alerting them to the fact that the organization's website is "Not Secure" because it is not served via a secure HTTPS connection. What is the easiest way to start serving the website via HTTPS?
Add a CloudFront distribution in front of the S3 static website, which supports HTTPS with a custom domain name. S3 buckets do not directly support HTTPS with a custom domain name. The simplest solution is to create a CloudFront distribution and set its origin to the S3 bucket. CloudFront allows you to specify a custom domain name, and supports managed certificates via AWS Certificate Manager. Enabling AES-256 Default Encryption on the S3 bucket only affects the object at rest. Application Load Balancers do support SSL termination but do not support S3 as a target. AWS Shield relates to Distributed Denial of Service protection, not encryption over the wire.
An organization has mandated that all files stored in their newly created S3 bucket, 'top-secret-documents', must be encrypted using a Customer Master Key stored in KMS. What is the best way to enforce this requirement?
Add a bucket policy that denies PUT operations that don't contain the HTTP header `x-amz-server-side-encryption: aws:kms`. To ensure objects are stored using a specific type of server-side encryption, you must use a bucket policy. In this case, the bucket policy must ensure the encryption type matches SSE-KMS. Setting a default encryption type on the bucket is not sufficient, as the default only applies to uploaded objects that do not specify any encryption type. For example, if the default encryption is set to AWS-KMS, but an object is uploaded with the header `x-amz-server-side-encryption: AES256`, the resulting object is encrypted using SSE-S3, not SSE-KMS.
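A sketch of one way to apply such a policy with boto3, following the common two-statement pattern (deny the wrong encryption type, and deny requests that specify no encryption header at all):

```python
import json
import boto3

s3 = boto3.client("s3")

# Two deny statements: reject PUTs that specify the wrong encryption type, and
# reject PUTs that do not specify server-side encryption at all.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWrongEncryptionType",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::top-secret-documents/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}},
        },
        {
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::top-secret-documents/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
    ],
}

s3.put_bucket_policy(Bucket="top-secret-documents", Policy=json.dumps(policy))
```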
You have developed a Lambda function that is triggered by an application running on EC2. The Lambda function currently has an execution role that allows read/write access to EC2, and also has the AWSLambdaBasicExecutionRole managed policy attached. After some architectural changes in your environment, the Lambda now needs to access some data stored in S3. What changes are required for your Lambda function to fulfill this new task of accessing data in S3?
Add permissions to the function's execution role to grant it the necessary access to S3 in IAM. An AWS Lambda function's execution role grants it permission to access AWS services and resources. You provide this role when you create a function, and Lambda assumes the role when your function is invoked. You can update (add or remove) the permissions of a function's execution role at any time, or configure your function to use a different role. Add permissions for any services that your function calls with the AWS SDK, and for services that Lambda uses to enable optional features. The AWSLambdaBasicExecutionRole AWS managed policy grants Lambda the permissions to upload logs to CloudWatch. This policy alone will not grant it access to S3. You do not need to create a new function as you can add or remove permissions to an execution role at any time. Resource-based policies let you grant usage permission to other accounts on a per-resource basis, and allow an AWS service to invoke your function, so this is not a valid way to grant S3 permissions to the existing Lambda function.
A company is developing its first Lambda function. The function needs access to their existing EC2 instances, which are all hosted in private subnets within their VPC. What must the company do to ensure the Lambda function can access the EC2 instances?
Configure the Lambda function to connect to the private subnets used by the EC2 instances. Configure the Lambda function's execution role to have permissions for managing an ENI within the VPC. Configure the Lambda function's security group so it has access to the EC2 instances. To configure a Lambda function to connect to a VPC, one or more subnets into which it can connect must be defined. The Lambda function creates an Elastic Network Interface in one of the given subnets; it therefore needs an execution role that grants it permission to do so. The Elastic Network Interface through which the Lambda function connects should then be associated with one or more security groups that allow network communication to the desired destinations, over the desired ports.
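A minimal sketch of the VPC attachment with boto3; the function name, subnet IDs, and security group ID are hypothetical.

```python
import boto3

lam = boto3.client("lambda")

# Attach the function to the private subnets and a security group that the EC2
# instances' security groups allow inbound traffic from (IDs are hypothetical).
lam.update_function_configuration(
    FunctionName="instance-inventory",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "SecurityGroupIds": ["sg-0ccc3333"],
    },
)
```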
You are developing a latency-sensitive application which stores a lot of data in DynamoDB. Each item is 3.5KB in size. Which of the following DynamoDB settings would give you the greatest read throughput?
Configure the table with 10 read capacity units and use eventually consistent reads. A read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. Eventually consistent reads therefore provide greater throughput than strongly consistent reads.
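A small sketch with boto3 (table and key names are hypothetical), including the throughput arithmetic behind the answer:

```python
import boto3

table = boto3.resource("dynamodb").Table("latency-sensitive-table")

# One RCU = 1 strongly consistent read/sec, or 2 eventually consistent reads/sec,
# for items up to 4 KB. A 3.5 KB item rounds up to one 4 KB unit, so 10 RCUs give
# roughly 10 strongly consistent or 20 eventually consistent reads per second.
item = table.get_item(
    Key={"id": "item-123"},
    ConsistentRead=False,  # eventually consistent (the default) doubles throughput
)
```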
This domain validates your ability to deploy applications as well as written code using existing CI/CD pipelines, processes, and patterns. Questions for this domain comprise 22% of the total questions for this exam.
Create a CodePipeline separated by three stages. For each stage organize actions in a pipeline. Have CodePipeline complete all actions in a stage before the stage processes new artifacts. Continuous delivery is a release practice in which code changes are automatically built, tested, and prepared for release to production. With AWS CloudFormation and CodePipeline, you can use continuous delivery to automatically build and test changes to your AWS CloudFormation templates before promoting them to production stacks. This release process lets you rapidly and reliably make changes to your AWS infrastructure. Although you can manually interact with CloudFormation to execute the various stages, this is not the most efficient method. Amazon Inspector is an automated security assessment service which evaluates the security loopholes in deployed resources specific to EC2. Config is a monitoring and governance tool that tracks changes to your AWS environment based on rules you configure.
You are building an application that requires an Auto Scaling group in order to scale in and out based on demand. You have the proper IAM permissions to create an Auto Scaling group, and also to create EC2 resources for the instances. What additional requirements are necessary for you to move forward?
Create a launch template with the required Amazon Machine Image (AMI) content. When you create an Auto Scaling group, you must specify the necessary information to configure the Amazon EC2 instances, the subnets for the instances, and the initial number of instances. Before you can create an Auto Scaling group using a launch template, you must create a launch template that includes the parameters required to launch an EC2 instance, such as the Amazon Machine Image (AMI) ID and an instance type. Key pairs and instance roles are optional configuration settings when creating the launch template. You do not have to create a security group beforehand as AWS will provide a default security group. Lifecycle hooks enable you to perform custom actions by pausing instances as an Auto Scaling group launches or terminates them. These are optional configurations that can be attached to an Auto Scaling policy.
One of your junior developers has never had AWS Access before and needs access to an Elastic Load Balancer in your custom VPC. This is the first and only time she will need access. Which of the following choices is the most secure way to grant this access?
Create a new IAM user with *only* the required credentials and delete that IAM user after the developer has finished her work. It's always best practice to grant users access via IAM roles and groups. In this case, we would *not* assign the junior Dev to an existing group, as most Dev groups will have *more* access than is required for this Dev to perform the single task she has been asked to accomplish. Remember - always grant the *fewest* privileges possible.
A developer is configuring CodeDeploy to deploy an application to an EC2 instance. The application's source code is stored within AWS CodeCommit. What permissions need to be configured to allow CodeDeploy to perform the deployment to EC2?
Create an IAM policy with an action to allow `codecommit:GitPull` on the required repository. Attach the policy to the EC2 instance profile role. CodeDeploy interacts with EC2 via the CodeDeploy Agent, which must be installed and running on the EC2 instance. During a deployment the CodeDeploy Agent running on EC2 pulls the source code from CodeCommit. The EC2 instance accesses CodeCommit using the permissions defined in its instance profile role; therefore, it is the EC2 instance itself that needs CodeCommit access. The specific CodeCommit permission needed to pull code is `codecommit:GitPull`.
Which of the following could you NOT achieve using the Amazon SQS Extended Client Library for Java?
Create a new S3 bucket and move a batch of SQS messages into the bucket. You can use Amazon S3 and the Amazon SQS Extended Client Library for Java to manage Amazon SQS messages stored in S3. This includes specifying when messages should be stored in S3, referencing message objects stored in S3, getting them, and deleting them.
A developer needs to share an EBS volume with a second AWS account. What actions need to be performed to accomplish this task in the most optimal way?
Create an EBS volume snapshot. Modify the EBS snapshot permissions and add the second AWS account ID to share the snapshot. In the second AWS account, create an EBS volume from the snapshot. It is not possible to directly share an EBS volume with another account; to accomplish the required task, you must create an EBS volume snapshot and grant permissions on that snapshot to the second AWS account. Although EBS volume snapshots are stored in S3, they are not in a user-visible bucket. Sharing a private AMI with a second account does not meet the specific requirement as defined in the question.
You want to quickly deploy and manage an application in the AWS Cloud without having to learn about the infrastructure that runs the application. Elastic Beanstalk is the first service that comes to mind. You have written your application in C#. How would you launch your application on Elastic Beanstalk in the most efficient manner?
Create your own Elastic Beanstalk platform using Packer. Use this platform for your application. AWS Elastic Beanstalk supports custom platforms, which let you develop an entirely new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility enables you to build a platform for an application that uses a language or other infrastructure software for which Elastic Beanstalk doesn't provide a managed platform. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. Rewriting your application would not be the most efficient way if you can create your own platform. Launching an EC2 instance would still require you to manage your own infrastructure. OpsWorks manages infrastructure deployment by organizing applications into layers to provision EC2 instances and resources for an application.
You have an application running on a number of Docker containers running on AWS Elastic Container Service. You have noticed significant performance degradation after you made a number of changes to the application and would like to troubleshoot the application end-to-end to find out where the problem lies. What should you do?
Deploy the AWS X-Ray daemon as a new container alongside your application. Within a microservices architecture, each application component runs as its own service. Microservices are built around business capabilities, and each service performs a single function. So if you want to add X-Ray to a Dockerized application, it makes the most sense to run the X-Ray daemon in a new Docker container and have it run alongside the other microservices which make up your application.
You work in the security industry for a large consultancy. One of your customers uses Lambda extensively in their production environment and they require a log of all API calls made to and from their Lambda functions. How can you achieve this?
Enable CloudTrail for Lambda. Enabling CloudTrail for Lambda will allow you to log all API calls to an S3 bucket.
You have developed an application to run on Amazon EC2. Users have increased and you've found latency issues for users from various geographic locations. You decide to create a CloudFormation template of the application's environment in order to streamline application launch in other AWS Regions to improve performance for users. When creating the CloudFormation template, what is one thing you have to ensure for the resources to launch successfully?
Ensure the AMIs referenced in the template correspond to the AMI IDs in the desired Region. AWS CloudFormation templates that declare an Amazon Elastic Compute Cloud (Amazon EC2) instance must also specify an Amazon Machine Image (AMI) ID, which includes an operating system and other software and configuration information used to launch the instance. The correct AMI ID depends on the instance type and Region in which you're launching your stack, and AMI IDs can change regularly, such as when an AMI is updated with software updates. AMIs are stored in a Region and cannot be accessed in other Regions; to use the AMI in another Region, you must copy it to that Region. IAM roles are valid across the entire account. AWS CloudFormation StackSets let you provision a common set of AWS resources across multiple accounts and Regions with a single CloudFormation template. Tags are not a universal namespace and are used as metadata or labels for your resources.
You are in a development team working on a popular serverless web application which allows users to book late availability flights and hotels at a significant discount. You occasionally receive complaints that the website is running slowly. After some investigation, you notice that at the time of the complaints, DynamoDB reported a ProvisionedThroughputExceeded error. Which of the following approaches is a recommended way to handle this error?
Ensure your application is using Exponential Backoff. Increasing Lambda capacity will not fix the issue because the problem is with DynamoDB. As the error only appears occasionally, the first thing to do is to ensure that the application is using exponential backoff to improve flow control. Increasing the capacity on the DynamoDB table could be considered, but only if the problem persists.
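One way to make the SDK's exponential backoff explicit and raise its retry budget, sketched with boto3; the table name is hypothetical.

```python
import boto3
from botocore.config import Config

# The AWS SDKs already retry throttled DynamoDB calls with exponential backoff;
# this makes that behaviour explicit and increases the number of attempts.
dynamodb = boto3.resource(
    "dynamodb",
    config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)

table = dynamodb.Table("flight-availability")  # hypothetical table name
table.get_item(Key={"flight_id": "BA-117"})
```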
You have software on an EC2 instance that needs to access both the private and public IP address of that instance. What's the best way for the software to get that information?
Have the software use cURL or GET to access the instance metadata. To view all categories of instance metadata from within a running instance, use the following URI: http://169.254.169.254/latest/meta-data/
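A small sketch using the instance metadata service with an IMDSv2 session token; the paths shown are the standard local-ipv4 and public-ipv4 metadata categories.

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: fetch a session token first, then use it for metadata requests.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path: str) -> str:
    req = urllib.request.Request(
        f"{BASE}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

private_ip = metadata("local-ipv4")
public_ip = metadata("public-ipv4")  # present only if the instance has a public IP
```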
You are using CloudFront to serve static website content to users based in multiple locations across the USA, Africa, India and the Middle East. You recently made some significant updates to the website, but users are complaining that they can only see the original content. What can you do to make sure the latest version of the website is being served by CloudFront?
Invalidate the file from the CloudFront edge cache. If you need to remove a file from CloudFront edge caches before it expires, you can do one of the following: invalidate the file from edge caches, so the next time a viewer requests the file, CloudFront returns to the origin to fetch the latest version; or use file versioning to serve a different version of the file that has a different name.
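A minimal sketch of an invalidation request with boto3; the distribution ID is hypothetical and /* invalidates every cached path.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Distribution ID is hypothetical; '/*' invalidates every cached path.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEF2GHIJ3",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```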
Which of the following statements is true of AWS Elastic Beanstalk?
It automatically handles the deployment details of an uploaded application. You can interact with it by using the AWS Management Console, AWS Command Line Interface (CLI), or eb. It enables you to deploy and manage applications in the AWS Cloud with little to no infrastructure knowledge. AWS Elastic Beanstalk enables you to deploy and manage applications in the cloud without having to learn about the infrastructure running them. Simply upload the application, and this service will handle details such as load balancing, capacity provisioning, scaling, and application health monitoring. There are three ways to use Elastic Beanstalk, which are the AWS Management Console, the CLI, and the eb, which is a CLI specifically designed for this service. Although Elastic Beanstalk supports applications developed in several select languages, C++ is not one of them.
You work for a large government agency which is conducting research for a top secret defense project. You are using SQS to handle messaging between components of a large, distributed application. You need to ensure that confidential data relating to your research is encrypted by the messaging system. Which of the following services can you use to centrally manage your encryption keys?
KMS. You can use a CMK to encrypt and decrypt up to 4 KB (4096 bytes) of data. Typically, you use CMKs to generate, encrypt, and decrypt the data keys that you use outside of AWS KMS to encrypt your data. This strategy is known as envelope encryption. CMKs are created in AWS KMS and never leave AWS KMS unencrypted. To use or manage your CMKs, you access them through AWS KMS.
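A brief sketch of creating an SQS queue encrypted with a KMS key via boto3; the queue name and key alias are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")

# Server-side encryption with a customer managed KMS key (alias is hypothetical).
sqs.create_queue(
    QueueName="research-data-queue",
    Attributes={
        "KmsMasterKeyId": "alias/research-sqs-key",
        "KmsDataKeyReusePeriodSeconds": "300",  # how long SQS reuses a data key
    },
)
```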
Your application is using Kinesis to ingest data from a number of environmental sensors which continuously monitor for pollution within a 1 mile radius of a local primary school. An EC2 instance consumes the data from the stream using the Kinesis Client Library. You have recently increased the number of shards in your stream to 6 and your project manager is now suggesting that you need to add at least 6 additional EC2 instances to cope with the new shards. What do you recommend?
One worker can process any number of shards, so it's fine if the number of shards exceeds the number of instances. Resharding enables you to increase or decrease the number of shards in a stream in order to adapt to changes in the rate of data flowing through the stream. You should ensure that the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard. However, one worker can process any number of shards, so it's fine if the number of shards exceeds the number of instances. When resharding increases the number of shards in the stream, the corresponding increase in the number of record processors increases the load on the EC2 instances that are hosting them. If the instances are part of an Auto Scaling group, and the load increases sufficiently, the Auto Scaling group adds more instances to handle the increased load.
Which of the following approaches can improve the performance of your Lambda function?
Only include the libraries you need to minimize the size of your deployment package. Establish your database connections from within the Lambda execution environment to enable connection reuse. Establishing connections within the execution environment allows them to be reused the next time the function is invoked, which saves time. Only including the libraries you need will minimize the time taken for Lambda to unpack the deployment package.
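A minimal sketch of the connection-reuse pattern; the table and key names are hypothetical.

```python
import boto3

# Created once per execution environment, then reused across invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("products")  # hypothetical table name

def handler(event, context):
    # Only the work that must happen per request lives inside the handler.
    return table.get_item(Key={"product_id": event["product_id"]}).get("Item")
```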
You are creating a DynamoDB table to manage your customer orders. Which of the following attributes would make a good sort key?
OrderDate. A well-designed sort key allows you to retrieve groups of related items and query based on a range of values, e.g. a range of dates. In this case, OrderDate is the best choice as it will allow users to search based on a range of dates.
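A short sketch of a range query on such a sort key with boto3; the table, partition key, and values are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("customer-orders")

# Partition key CustomerId plus sort key OrderDate (names are illustrative):
# fetch one customer's orders placed within a date range.
response = table.query(
    KeyConditionExpression=(
        Key("CustomerId").eq("C-1001")
        & Key("OrderDate").between("2024-01-01", "2024-03-31")
    )
)
```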
A developer deployed a serverless application consisting of an API Gateway and Lambda function using CloudFormation. Testing of the application resulted in a 500 status code and 'Execution failed due to configuration' error. What is a possible cause of the error?
POST method was not used when invoking the Lambda function from API Gateway. POST method must be used when invoking a Lambda function via REST API. This should not be confused with the methods used to access the APIs on the API Gateway. When deploying AWS Lambda and API Gateway resources via CloudFormation, you must ensure that the POST method is used when integrating API Gateway with an AWS Lambda function.
You have provisioned an RDS database and then deployed your application servers using Elastic Beanstalk. You now need to connect your application servers to the database. What should you do?
Provide the database connection information to your application. Configure a security group allowing access to the database and add it to your environment's Auto Scaling group. As you are connecting to a database that was not created within your Elastic Beanstalk environment, you will need to create the security group yourself and also provide the connection string and credentials to allow your application servers to connect to the database.
How does API Gateway handle SOAP?
SOAP is handled as a web service pass-through. API Gateway supports the legacy SOAP protocol, which returns results in XML format rather than JSON, in pass-through mode.
You need to retrieve some data from your DynamoDB table. Which of the following methods would consume the greatest number of provisioned capacity units?
Scan with strong consistency. A Query is generally far more efficient than a Scan operation, and strongly consistent reads use up double the amount of read capacity units compared to eventually consistent reads.
Which of the following activities are the responsibility of the customer?
Security Group configuration settings; encryption of sensitive data; management of user credentials. Security and Compliance is a shared responsibility between AWS and the customer. The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software, as well as the configuration of the AWS-provided security group firewall. AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
You are developing an online-banking website which will be accessed by a global customer base. You are planning to use CloudFront to ensure users experience good performance regardless of their location. The Security Architect working on the project asks you to ensure that all requests to CloudFront are encrypted using HTTPS. How can you configure this?
Set the Viewer Protocol Policy to redirect HTTP to HTTPS. The Viewer Protocol Policy defines the protocols which can be used to access CloudFront content.
You are a developer running an application on AWS Elastic Beanstalk. You are implementing an application update and need to use a deployment policy. The requirements are to maintain full capacity, deploy five instances at once for the new version, and to terminate instances running the old version once the new instances are running successfully. How would you implement this deployment policy?
Set the deployment policy as Rolling with Additional Batch. Set the batch size type as Fixed, and the batch size as 5. To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances. All at Once deployment takes the instances in your environment out of service for a short time. Rolling deployment also takes a batch of servers out of service while deploying the new version in batches. Blue/Green deployments are for cases when you want to have two versions live simultaneously and be able to swap between the two versions.
Your organization is developing a CI/CD environment to improve software delivery of your applications. It has already adopted a plan to execute the various phases of the CI/CD pipeline from continuous integration to continuous deployment. There are now discussions around restructuring the team make-up to implement a CI/CD environment. How would you recommend creating developer teams as a best practice to support this change in the long run?
Set up an application team to develop applications. Set up an infrastructure team to create and configure the infrastructure to run the applications. Set up a tools team to build and manage the CI/CD pipeline. AWS recommends organizing three developer teams for implementing a CI/CD environment: an application team, an infrastructure team, and a tools team. This organization represents a set of best practices that have been developed and applied in fast-moving startups, large enterprise organizations, and in Amazon itself. The teams should be no larger than groups that two pizzas can feed, or about 10-12 people. This follows the communication rule that meaningful conversations hit limits as group sizes increase and lines of communication multiply. Hiring an external consulting firm will not be beneficial in the long run. Setting up a single team is not best practice. AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates and not used for team structuring.
You work for a company which facilitates and organizes technical conferences. You ran a large number of events this year with many high profile speakers and would like to enable your customers to access videos of the most popular presentations. You have stored all your content in S3, but you would like to restrict access so that people can only access the videos after logging into your website. How should you configure this?
Share the videos by creating a pre-signed URL. Remove public read access from the S3 bucket where the videos are stored. All objects are private by default; only the object owner has permission to access them. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects. Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
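A minimal sketch of generating a pre-signed URL with boto3; the bucket, key, and expiry are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Generated by your website's backend after the user logs in; the link itself
# grants time-limited access to a private object (names are hypothetical).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "conference-videos", "Key": "2024/keynote.mp4"},
    ExpiresIn=3600,  # valid for one hour
)
```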
Which of the following are considered to be Serverless?
The following AWS technologies are Serverless: DynamoDB, API Gateway, SNS, Lambda, Kinesis and S3. RDS and Elastic Beanstalk both deploy EC2 instances to run their services
You are running a large distributed application using a mix of EC2 instances and Lambda. Your EC2 instances are spread across multiple availability zones for resilience and are configured inside a VPC. You have just developed a new Lambda function which you are testing. However, when you try to complete the testing, your function cannot access a number of application servers which are located in the same private subnet. Which of the following could be a possible reason for this?
The function's execution role does not include permission to connect to the VPC. To connect to a VPC, your function's execution role must have the following permissions: ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, ec2:DeleteNetworkInterface. These permissions are included in the AWSLambdaVPCAccessExecutionRole managed policy.
You have a load balancer configuration that you use for most of your CloudFormation stacks. This load balancer always sits in front of your application running on EC2 as it has the important function of forwarding HTTPS requests on port 443 to HTTP requests on port 80 on the instance. As demand for the application grows you need to reuse this load balancer configuration in multiple other deployments of the application and you need to use CloudFormation to do this in an automated way. What is the most efficient way to deploy the load balancer configuration?
Use AWS CloudFormation nested stacks by creating a dedicated template for the load balancer and refer to that template within other templates. Nested stacks are stacks created as part of other stacks. You create a nested stack within another stack by using the AWS::CloudFormation::Stack resource. For example, assume that you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates, you can create a dedicated template for the load balancer. Then, you just use the resource to reference that template from within other templates. Lambda would not be able to deploy infrastructure resources as efficiently as CloudFormation nested stacks. AWS CloudFormation provides two methods for updating stacks: direct update or creating and executing change sets. When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them. Use direct updates when you want to quickly deploy your updates. With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply those changes.
You are working on a social media application which allows users to share BBQ recipes and photos. You would like to schedule a Lambda function to run every 10 minutes which checks for the latest posts and sends a notification including an image thumbnail to users who have previously engaged with posts from the same user. How can you configure your function to automatically run at 10 minute intervals?
Use CloudWatch Events to schedule the function. You can direct AWS Lambda to execute a function on a regular schedule using CloudWatch Events. You can specify a fixed rate, for example to execute a Lambda function every hour or every 15 minutes, or you can specify a cron expression.
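A rough boto3 sketch of wiring up the schedule; the rule and function names and the ARNs are hypothetical.

```python
import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:notify-followers"

# Rule that fires every 10 minutes.
rule_arn = events.put_rule(
    Name="notify-followers-every-10-min",
    ScheduleExpression="rate(10 minutes)",
)["RuleArn"]

# Point the rule at the function and allow CloudWatch Events to invoke it.
events.put_targets(
    Rule="notify-followers-every-10-min",
    Targets=[{"Id": "notify-followers", "Arn": FUNCTION_ARN}],
)
lam.add_permission(
    FunctionName="notify-followers",
    StatementId="allow-events-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```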
You are deploying a new version of your application using a CodeDeploy In-Place upgrade. At the end of the deployment you test the application and discover that something has gone wrong. You need to roll back your changes as quickly as possible. What do you do?
Use CodeDeploy to redeploy the previous version of the application. With an In-Place upgrade you will need to redeploy the original version. Only a Blue/Green upgrade allows you to keep the original instances and roll back by routing all requests to the original instances.
A developer has been tasked with migrating a large legacy web application, written in C++, to AWS. The developer wants to benefit from using Elastic Beanstalk to simplify the management of the infrastructure. Which of the following methods would allow the developer to migrate the application with the least amount of work?
Use Packer to generate a custom AMI that contains the application, which can then be deployed via Elastic Beanstalk. Use Docker to containerize the application, which can then be deployed via Elastic Beanstalk. Elastic Beanstalk supports Docker containers and custom AMIs via Packer. Both would allow the legacy application to be wrapped in a layer of abstraction such that Elastic Beanstalk itself would not need to support the specific language of the legacy application. The Go platform only supports applications written in Go. The application could be re-written in Node.js, but as it's a large application, a full rewrite is unlikely to require the least amount of work. Elastic Beanstalk cannot be used to manage Lambda functions.
You are developing a gaming website which stores all players' scores in a DynamoDB table. You are using a partition key of user_ID and a sort key of game_ID, as well as storing user_score, which is the user's highest score for the game, and also a timestamp. You need to find a way to get the top scorers for each game who have scored over 50,000 points. Which of the following will allow you to find this information in the most efficient way?
Use a global secondary index with a partition key of game_ID and a sort key of user_score. A scan operation would be less efficient than a query, so that is definitely not the most efficient way. A query against the base table won't help you find the top scorers for each game, because the table is partitioned by user_ID. A local secondary index is an index that has the same partition key as the base table, but a different sort key, so it would still be partitioned by user_ID. A global secondary index is an index with a partition key and a sort key that can be different from those on the base table, which lets you query by game_ID and sort the results by user_score.
You are developing an application in API Gateway, and need to categorize your APIs based on their status as: sandbox, test, or prod. You want to use a name-value pair system to label and manage your APIs. What feature of API Gateway would you use to accomplish this task?
Use stage variables based on the API deployment stage to interact with different backend endpoints. Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates. With deployment stages in API Gateway, you can manage multiple release stages for each API, such as: alpha, beta, and production. Using stage variables you can configure an API deployment stage to interact with different backend endpoints. Environment variables apply to AWS Lambda. Canary release is a software development strategy in which a new version of an API (as well as other software) is deployed as a canary release for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. (This would be appropriate when your application is live and you'd want to reduce the risk inherent in a new software version release.) A tag is a metadata label that you assign or that AWS assigns to an AWS resource and would not impact the functionality of your APIs.
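As a sketch of how a backend sees stage variables with a Lambda proxy integration (the backend_url variable is hypothetical and would be set differently on each stage):

```python
import urllib.request

def handler(event, context):
    # With a Lambda proxy integration, API Gateway passes the deployment stage's
    # variables in the event; "backend_url" is a hypothetical stage variable that
    # differs between the sandbox, test, and prod stages.
    backend = (event.get("stageVariables") or {}).get(
        "backend_url", "https://prod.example.com"
    )
    with urllib.request.urlopen(f"{backend}/status") as resp:
        return {"statusCode": 200, "body": resp.read().decode()}
```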
Your Security team have recently reviewed the security standards across your entire AWS environment. They have identified that a number of EC2 instances in your development environment have read and write access to an S3 bucket containing highly confidential production data. You have been asked to help investigate and suggest a way to remedy this. Which of the following can you use to find out what is going on so that you can suggest a solution?
Use the IAM Policy Simulator to identify which role or policy is granting access. With the IAM policy simulator, you can test and troubleshoot IAM and resource-based policies attached to IAM users, groups, or roles in your AWS account. You can test which actions are allowed or denied by the selected policies for specific resources.
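A minimal sketch of a simulation call with boto3; the role and bucket ARNs are hypothetical.

```python
import boto3

iam = boto3.client("iam")

# Check what the instances' role is actually allowed to do against the bucket
# (role and bucket names are hypothetical).
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/dev-ec2-role",
    ActionNames=["s3:GetObject", "s3:PutObject"],
    ResourceArns=["arn:aws:s3:::prod-confidential-data/*"],
)

for res in result["EvaluationResults"]:
    print(res["EvalActionName"], res["EvalDecision"])  # e.g. s3:PutObject allowed
```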
You can use X-Ray with applications running on which platforms?
X-Ray works with Lambda, EC2, API Gateway, Elastic Beanstalk and ECS
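A brief sketch of instrumenting Python code with the X-Ray SDK (requires the aws-xray-sdk package and an X-Ray daemon reachable by the application; the table name is hypothetical):

```python
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # instruments boto3, requests, etc. so downstream calls are traced

@xray_recorder.capture("load_user")
def load_user(user_id):
    table = boto3.resource("dynamodb").Table("users")  # hypothetical table
    return table.get_item(Key={"id": user_id}).get("Item")
```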
You receive a "timed out" error message when running a command using the AWS CLI. What could be a possible reason for this?
You have run a command which is trying to return a large number of items and has exceeded the maximum allowed time to return results. If you see issues when running list commands on a large number of resources, the default page size of 1000 might be too high. This can cause calls to AWS services to exceed the maximum allowed time and generate a "timed out" error. You can use the --page-size option to specify that the AWS CLI request a smaller number of items from each call to the AWS service. The CLI still retrieves the full list, but performs a larger number of service API calls in the background and retrieves a smaller number of items with each call. This gives the individual calls a better chance of succeeding without a timeout. Changing the page size doesn't affect the output; it affects only the number of API calls that need to be made to generate the output.
You are working on a web application which handles confidential financial data. The application runs on a few EC2 instances which are behind an Elastic Load Balancer. How can you ensure the data is encrypted end-to-end in transit between your ELB and EC2 instances?
a. Terminate HTTPS connections on your EC2 instances. b. Configure the instances in your environment to listen on the secure port. c. Configure a secure listener on your load balancer. Terminating secure connections at the load balancer and using HTTP on the backend might be sufficient for your application. However, if you are developing an application that needs to comply with strict external regulations, you might be required to secure all network connections. First, add a secure listener to your load balancer, then configure the instances in your environment to listen on the secure port and terminate HTTPS connections.
You need to monitor application-specific events every 10 seconds. How can you configure this?
Configure a high-resolution custom metric in CloudWatch. You need to configure a custom metric to handle application-specific events, and if you want to monitor at 10-second intervals, you need to use high-resolution metrics. Detailed monitoring reports metrics at 1-minute intervals.
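A minimal sketch of publishing a high-resolution custom metric with boto3; the namespace, metric name, and value are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# StorageResolution=1 makes this a high-resolution metric, so it can be recorded
# (and alarmed on) at intervals down to 1 second; publish it every 10 seconds.
cloudwatch.put_metric_data(
    Namespace="MyApplication",  # hypothetical namespace
    MetricData=[{
        "MetricName": "CheckoutLatency",
        "Value": 182.0,
        "Unit": "Milliseconds",
        "StorageResolution": 1,
    }],
)
```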