AWS Dev Cert AS

Your team has configured an environment in Elastic Beanstalk using the following configuration: Java 7 with Tomcat 7. They now want to change the configuration to Java 8 with Tomcat 8.5. How can they achieve this in the easiest way possible? A. Change the configuration using the AWS console B. Create a new application revision C. Use the Change environment configuration option D. Migrate the environment to OpsWorks

Answer - A

Your team has started configuring CodeBuild to run builds in AWS. The source code is stored in an S3 bucket. When the build is run, you get the following error: "The bucket you are attempting to access must be addressed using the specified endpoint..." Which of the following could be the cause of the error? A. The bucket is not in the same region as the CodeBuild project B. Code should ideally be stored on EBS Volumes C. Versioning is enabled for the bucket D. MFA is enabled on the bucket

Answer - A

Your company is hosting a static web site in S3. The code has recently been changed wherein JavaScript calls are being made to web pages in the same bucket via the fully qualified domain name. But the browser is blocking the requests. What should be done to alleviate the issue? A. Enable CORS on the bucket B. Enable versioning on the bucket C. Enable CRR on the bucket D. Enable encryption on the bucket

Answer - A Option A is correct because cross-origin resource sharing (CORS) must be enabled for the browser to allow the JavaScript requests. Option B is incorrect because versioning is used to prevent accidental deletion of objects in S3. Option C is incorrect because CRR is used for cross-region replication of objects. Option D is incorrect because encryption is used to protect objects at rest.

Your company is going to develop an application in .NET Core with DynamoDB. There is a requirement that all data needs to be encrypted at rest. If the DynamoDB table has already been created, what else is needed to achieve this? A. No additional configurations are required since server-side encryption is enabled on all DynamoDB table data. B. Enable encryption on the existing table. C. You cannot enable encryption at rest. Consider using the AWS RDS service instead. D. You cannot enable encryption at rest. Consider using the S3 service instead.

Answer - A Option B is incorrect since encryption can only be configured during table creation time. Options C and D are incorrect since encryption is possible in DynamoDB. The AWS Documentation mentions the following: Amazon DynamoDB offers fully managed encryption at rest. DynamoDB encryption at rest provides enhanced security by encrypting your data at rest using an AWS Key Management Service (AWS KMS) managed encryption key for DynamoDB. This functionality eliminates the operational burden and complexity involved in protecting sensitive data.

You've currently set up an API Gateway service in AWS. The API Gateway is calling a custom API hosted on an EC2 Instance. There are severe latency issues and you need to diagnose the reason for those latency issues. Which of the following could be used to address this concern? A. AWS X-Ray B. AWS CloudWatch C. AWS CloudTrail D. AWS VPC Flow Logs

Answer - A The AWS Documentation mentions the following: AWS X-Ray is an AWS service that allows you to trace latency issues with your Amazon API Gateway APIs. X-Ray collects metadata from the API Gateway service and any downstream services that make up your API. X-Ray uses this metadata to generate a detailed service graph that illustrates latency spikes and other issues that impact the performance of your API. Option B is invalid since this is used to log API execution operations. Option C is invalid since this is used to log API Gateway API management operations. Option D is invalid since this is used to log calls into the VPC.

An application is currently accessing a DynamoDB table. Currently, the table's queries are performing well. Changes have been made to the application and now the performance of the application is starting to degrade. After looking at the changes, you see that the queries are making use of an attribute which is not the partition key. Which of the following would be the adequate change to make to resolve the issue? A. Add a Global Secondary Index to the DynamoDB table B. Change all the queries to ensure they use the partition key C. Enable global tables for DynamoDB D. Change the read capacity on the table

Answer - A The AWS Documentation mentions the following: Amazon DynamoDB provides fast access to items in a table by specifying primary key values. However, many applications might benefit from having one or more secondary (or alternate) keys available, to allow efficient access to data with attributes other than the primary key. To address this, you can create one or more secondary indexes on a table, and issue Query or Scan requests against these indexes. A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key to support Query operations. You can retrieve data from the index using a Query, in much the same way as you use Query with a table. A table can have multiple secondary indexes, which gives your applications access to many different query patterns. Option B, although possible, is not ideal because it would require changing the application code. Option C is used for disaster recovery scenarios. Option D is not right because we don't know if this would solve the issue in the long run.
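
As a rough illustration, once a global secondary index exists on the non-key attribute, the application can query it directly. The sketch below assumes a hypothetical Orders table with a GSI named CustomerEmailIndex on a CustomerEmail attribute:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table name

# query the global secondary index instead of scanning the base table
response = table.query(
    IndexName="CustomerEmailIndex",  # hypothetical GSI on the non-key attribute
    KeyConditionExpression=Key("CustomerEmail").eq("jane@example.com"),
)
for item in response["Items"]:
    print(item)
```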

You are the lead for your development team. There is a requirement to provision an application using the Elastic Beanstalk service. It's a custom application with unique configuration files and software to include. Which of the following would be the best way to provision the environment in the least time possible? A. Use a custom AMI for the underlying instances B. Use configuration files to download and install the updates C. Use the User data section for the Instances to download the updates D. Use the metadata section for the Instances to download the updates

Answer - A The AWS Documentation mentions the following: When you create an AWS Elastic Beanstalk environment, you can specify an Amazon Machine Image (AMI) to use instead of the standard Elastic Beanstalk AMI included in your platform configuration's solution stack. A custom AMI can improve provisioning times when instances are launched in your environment if you need to install a lot of software that isn't included in the standard AMIs. Using configuration files is great for configuring and customizing your environment quickly and consistently. Applying configurations, however, can start to take a long time during environment creation and updates. If you do a lot of server configuration in configuration files, you can reduce this time by making a custom AMI that already has the software and configuration that you need. Options B and C are invalid since these options would not result in the least amount of time for setting up the environment. Option D is invalid since the metadata section is used for getting information about the underlying instances.

You have been told to make use of CloudFormation templates for deploying applications on EC2 Instances. These Instances need to be preconfigured with the NGINX web server to host the application. How could you accomplish this with CloudFormation? A. Use the cfn-init helper script in CloudFormation B. Use the Output resource type in CloudFormation C. Use the Parameter resource type in CloudFormation D. Use SAML to deploy the template

Answer - A The AWS Documentation mentions the following When you launch stacks, you can install and configure software applications on Amazon EC2 instances by using the cfn-init helper script and the AWS::CloudFormation::Init resource. By using AWS::CloudFormation::Init, you can describe the configurations that you want rather than scripting procedural steps. Because of what the AWS documentation clearly mentions, all other options are invalid

You have recently developed an AWS Lambda function to be used as a backend technology for an API Gateway instance. You need to give the API Gateway URL to a set of users for testing. What must be done before the users can test the API? A. Ensure that a deployment is created in the API Gateway B. Ensure that CORS is enabled for the API Gateway C. Generate the SDK for the API D. Enable support for binary payloads

Answer - A This is also mentioned in the AWS Documentation: In API Gateway, a deployment is represented by a Deployment resource. It is like an executable of an API represented by a RestApi resource. For the client to call your API, you must create a deployment and associate a stage to it. A stage is represented by a Stage resource and represents a snapshot of the API, including methods, integrations, models, mapping templates, Lambda authorizers (formerly known as custom authorizers), etc. Option B is incorrect since this is only required for cross-domain requests. Option C is incorrect since this is only required when you want to use your code to call the API Gateway and there is no mention of that requirement in the question. Option D is incorrect since this is only required if the request is not a text-based request and there is no mention of the type of payload in the question.
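
As a hedged sketch of what creating a deployment looks like with boto3 (the REST API id and stage name below are hypothetical):

```python
import boto3

apigateway = boto3.client("apigateway")

# create a deployment of the current API configuration and bind it to a "test" stage
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",   # hypothetical REST API id
    stageName="test",
    description="Snapshot of the API for the test users",
)

# testers can then call the stage invoke URL, which follows this pattern:
# https://a1b2c3d4e5.execute-api.<region>.amazonaws.com/test
```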

When calling an API operation on an EC2 Instance, the following error message was returned: A client error (UnauthorizedOperation) occurred when calling the RunInstances operation: You are not authorized to perform this operation. Encoded authorization failure message: oGsbAaIV7wlfj8zUqebHUANHzFbmkzILlxyj__y9xwhIHk99U_cUq1FIeZnskWDjQ1wSHStVfdCEyZILGoccGpC iCIhORceWF9rRwFTnEcRJ3N9iTrPAE1WHveC5Z54ALPaWlEjHlLg8wCaB8d8lCKmxQuylCm0r1Bf2fHJRU jAYopMVmga8olFmKAl9yn_Z5rI120Q9p5ZIMX28zYM4dTu1cJQUQjosgrEejfiIMYDda8l7Ooko9H6VmGJX S62KfkRa5l7yE6hhh2bIwA6tpyCJy2LWFRTe4bafqAyoqkarhPA4mGiZyWn4gSqbO8oSIvWYPwea KGkampa0arcFR4gBD7Ph097WYBkzX9hVjGppLMy4jpXRvjeA5o7TembBR-Jvowq6mNim0 Which of the following can be used to get a human-readable error message? A. Use the command aws sts decode-authorization-message B. Use the command aws get authorization-message C. Use the IAM Policy Simulator, enter the error message to get the human-readable format D. Use the command aws set authorization-message

Answer - A This is mentioned in the AWS Documentation: Decodes additional information about the authorization status of a request from an encoded message returned in response to an AWS request. For example, if a user is not authorized to perform an action that he or she has requested, the request returns a Client.UnauthorizedOperation response (an HTTP 403 response). Some AWS actions additionally return an encoded message that can provide details about this authorization failure. Since the documentation names this exact command, all other options are incorrect.
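
A minimal sketch of the same call through boto3 (the encoded message is truncated here; the caller also needs the sts:DecodeAuthorizationMessage permission):

```python
import boto3

sts = boto3.client("sts")

# pass in the encoded failure message returned by the failed API call
decoded = sts.decode_authorization_message(
    EncodedMessage="oGsbAaIV7wlfj8zUqebHUANHzFb..."  # truncated for brevity
)

# the decoded message is a JSON document describing the denied action and context
print(decoded["DecodedMessage"])
```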

You are a developer for a company. You have to develop an application which would analyse all of the system-level and guest-level metrics from Amazon EC2 Instances and on-premises servers. Which of the following prerequisites would you need to carry out? A. Ensure the CloudWatch agent is present on the servers B. Ensure that a trail is created in CloudTrail C. Ensure that AWS Config is enabled D. Ensure the on-premises servers are moved to AWS

Answer - A This is mentioned in the AWS Documentation: The unified CloudWatch agent enables you to do the following: Collect more system-level metrics from Amazon EC2 instances, including in-guest metrics, in addition to the metrics listed in Amazon EC2 Metrics and Dimensions. The additional metrics are listed in Metrics Collected by the CloudWatch Agent. Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS. Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server. Option B is incorrect since this is used for API monitoring. Option C is incorrect since this is used for monitoring configuration changes. Option D is incorrect since monitoring can also be configured for on-premises servers.

You are developing an application that is going to make use of Docker containers. Traffic needs to be routed based on demand to the application. Dynamic host port mapping would be used for the Docker containers. Which of the following two options would you use for distribution of traffic to the Docker containers? A. AWS Application Load Balancer B. AWS Network Load Balancer C. AWS Route 53 D. AWS Classic Load Balancer

Answer - A and B The AWS Documentation mentions the following: Application Load Balancers offer several features that make them attractive for use with Amazon ECS services: Application Load Balancers allow containers to use dynamic host port mapping (so that multiple tasks from the same service are allowed per container instance). Application Load Balancers support path-based routing and priority rules (so that multiple services can use the same listener port on a single Application Load Balancer). Network Load Balancers also support dynamic host port mapping. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb Option C is incorrect since Route 53 is used for DNS routing. Option D is invalid since the Classic Load Balancer does not support dynamic host port mapping.

Your company is planning on using the Simple Storage Service to host objects that will be accessed by users. There is a speculation that there would be roughly 6000 GET requests per second. Which of the following could be used to ensure optimal performance? Choose 2 answers from the options given below. A. Use a CloudFront distribution in front of the S3 bucket B. Use sequential date-based naming for your prefixes C. Enable versioning for the objects D. Enable Cross Region Replication for the bucket

Answer - A and B

You've created a CodeCommit repository in AWS. You need to share the repository with the developers in your team. Which of the following would be a secure and easy way to share the repository with the development team? Choose 2 answers from the options given below A. Create Git credentials for the IAM users B. Allow the developers to connect via HTTPS using the Git credentials C. Allow the developers to connect via SSH D. Create a public-private key pair

Answer - A and B The AWS Documentation mentions the following HTTPS connections require either Git credentials, which IAM users can generate for themselves in IAM, or an AWS access key, which your repository users must configure in the credential helper included in the AWS CLI but is the only method available for root account or federated users. Git credentials are the easiest method for users of your repository to set up and use. SSH connections require your users to generate a public-private key pair, store the public key, associate the public key with their IAM user, configure their known hosts file on their local computer, and create and maintain a config file on their local computers. Because this is a more complex configuration process, we recommend you choose HTTPS and Git credentials for connections to AWS CodeCommit. The easiest way to set up AWS CodeCommit is to configure HTTPS Git credentials for AWS CodeCommit. This HTTPS authentication method: Uses a static user name and password. Works with all operating systems supported by AWS CodeCommit. Is also compatible with integrated development environments (IDEs) and other development tools that support Git credentials. The simplest way to set up connections to AWS CodeCommit repositories is to configure Git credentials for AWS CodeCommit in the IAM console, and then use those credentials for HTTPS connections.
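
As a hedged sketch, Git credentials for CodeCommit can be generated programmatically as a service-specific credential for an IAM user (the user name below is hypothetical):

```python
import boto3

iam = boto3.client("iam")

# generate HTTPS Git credentials for CodeCommit for an existing IAM user
creds = iam.create_service_specific_credential(
    UserName="dev-user",                      # hypothetical IAM user
    ServiceName="codecommit.amazonaws.com",
)

credential = creds["ServiceSpecificCredential"]
print(credential["ServiceUserName"])   # the Git user name to hand to the developer
print(credential["ServicePassword"])   # shown only once, so store it securely
```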

You are developing an application that is working with a DynamoDB table. You need to create a query that has search criteria. Which of the following must be done in order to work with search queries? Choose 2 answers from the options given below A. Specify a key condition expression in the query B. Specify a partition key name and value in the equality condition C. Specify a sort key name and value in the equality condition D. Specify a filter expression

Answer - A and B The AWS Documentation mentions the following: Key Condition Expression - To specify the search criteria, you use a key condition expression—a string that determines the items to be read from the table or index. You must specify the partition key name and value as an equality condition. Option C is incorrect since you need to mention the partition key and not the sort key. Option D is incorrect since this is used to further filter results.
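
A minimal sketch of the difference, using hypothetical table and attribute names: the key condition must name the partition key with an equality test, while a filter expression only trims the items after they have been read:

```python
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.query(
    TableName="Orders",                           # hypothetical table
    # required: equality condition on the partition key
    KeyConditionExpression="CustomerId = :cid",
    # optional: filter applied after the items are read
    FilterExpression="OrderTotal > :total",
    ExpressionAttributeValues={
        ":cid": {"S": "C-1001"},
        ":total": {"N": "100"},
    },
)
print(response["Items"])
```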

Your development team is planning on deploying an application using the Elastic Beanstalk service. As part of the deployment, you need to ensure that a high-end instance type is used for the deployment of the underlying instances. Which of the following would you use to enable this? Choose 2 answers from the options given below. A. The Launch configuration B. The Environment manifest file C. Instance Profile section D. In the AWS Config section

Answer - A and B The AWS Documentation mentions the following: Your Elastic Beanstalk environment includes an Auto Scaling group that manages the Amazon EC2 instances in your environment. In a single-instance environment, the Auto Scaling group ensures that there is always one instance running. In a load-balanced environment, you configure the group with a range of instances to run, and Amazon EC2 Auto Scaling adds or removes instances as needed, based on load. The Auto Scaling group also manages the launch configuration for the instances in your environment. You can modify the launch configuration to change the instance type, key pair, Amazon Elastic Block Store (Amazon EBS) storage, and other settings that can only be configured when you launch an instance. You can include a YAML formatted environment manifest in the root of your application source bundle to configure the environment name, solution stack and environment links to use when creating your environment. An environment manifest uses the same format as Saved Configurations. Option C is invalid since this is used to ensure that the environment can interact with other AWS resources. Option D is invalid since this is used to monitor the configuration changes of resources.
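
For illustration only, one way this surfaces in practice is through the launch configuration option settings supplied when the environment is created; the application, environment, and solution stack names below are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="my-app",          # hypothetical application
    EnvironmentName="my-app-prod",
    # hypothetical solution stack name - valid names come from list_available_solution_stacks()
    SolutionStackName="64bit Amazon Linux 2018.03 v3.1.0 running Tomcat 8.5 Java 8",
    OptionSettings=[
        {
            # this namespace maps to the Auto Scaling launch configuration
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "m5.2xlarge",     # the high-end instance type
        }
    ],
)
```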

You are developing an application that is working with a DynamoDB table. Some of your requests are returning an HTTP 400 status code. Which of the following are possible issues with the requests? Choose 2 answers from the options given below A. There are missing required parameters with some of the requests B. You are exceeding the table's provisioned throughput C. The DynamoDB service is unavailable D. There are network issues

Answer - A and B This is mentioned in the AWS Documentation An HTTP 400 status code indicates a problem with your request, such as authentication failure, missing required parameters, or exceeding a table's provisioned throughput. You will have to fix the issue in your application before submitting the request again. Options C and D are incorrect since these would result in 5xx errors.
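
A small sketch of how an application might distinguish these 4xx client errors when using boto3 (the table and item are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    dynamodb.put_item(
        TableName="Orders",                           # hypothetical table
        Item={"CustomerId": {"S": "C-1001"}},
    )
except ClientError as err:
    status = err.response["ResponseMetadata"]["HTTPStatusCode"]
    code = err.response["Error"]["Code"]
    if status == 400 and code == "ProvisionedThroughputExceededException":
        # a 400 caused by throttling - retry with backoff or raise the capacity
        print("request throttled:", err)
    elif status == 400:
        # other 400s (missing parameters, validation errors) must be fixed in the request
        print("client-side problem:", code)
    else:
        raise
```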

Your application currently points to several Lambda functions in AWS. A change is being made to one of the Lambda functions. You need to ensure that application traffic is shifted slowly from one Lambda function to the other. Which of the following steps would you carry out? Select 2 Options: A. Create an ALIAS with the routing-config parameter B. Update the ALIAS with the routing-config parameter C. Create a version with the routing-config parameter D. Update the version with the routing-config parameter E. Update the function with the config parameter

Answer - A and B This is mentioned in the AWS Documentation: By default, an alias points to a single Lambda function version. When the alias is updated to point to a different function version, incoming request traffic in turn instantly points to the updated version. This exposes that alias to any potential instabilities introduced by the new version. To minimize this impact, you can implement the routing-config parameter of the Lambda alias that allows you to point to two different versions of the Lambda function and dictate what percentage of incoming traffic is sent to each version. Options C and D are incorrect since you need to use an ALIAS for this purpose. Option E is incorrect because A and B are the correct ways to achieve the requirement.
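
A hedged example of the weighted alias update in boto3; the function name, alias name, and version numbers are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# keep 90% of traffic on version 1 and shift 10% to the new version 2
lambda_client.update_alias(
    FunctionName="order-processor",   # hypothetical function
    Name="PROD",                      # hypothetical alias
    FunctionVersion="1",
    RoutingConfig={"AdditionalVersionWeights": {"2": 0.10}},
)
```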

You have defined some custom policies in AWS. You need to test out the permissions assigned to those policies. Which of the following can be used for this purpose via the CLI? Choose 2 answers from the options given below A. Get the context keys first B. Use the aws iam simulate-custom-policy command C. Get the AWS IAM Access keys first D. Use the aws iam get-custom-policy command

Answer - A and B This is mentioned in the AWS Documentation: Policy simulator commands typically require calling API operations to do two things: Evaluate the policies and return the list of context keys that they reference. You need to know what context keys are referenced so that you can supply values for them in the next step. Simulate the policies, providing a list of actions, resources, and context keys that are used during the simulation. Since these are the steps the documentation describes, all other options are incorrect.
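
The same two steps, sketched with boto3 against a hypothetical inline policy document:

```python
import json
import boto3

iam = boto3.client("iam")

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
})

# step 1: find out which context keys the policy references
keys = iam.get_context_keys_for_custom_policy(PolicyInputList=[policy])
print(keys["ContextKeyNames"])

# step 2: simulate the policy against the actions you want to test
result = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=["s3:GetObject", "s3:PutObject"],
)
for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], "->", evaluation["EvalDecision"])
```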

You have been instructed to use the CodePipeline service for the CI/CD automation in your company. Due to security reasons, the resources that would be part of the deployment are placed in another account. Which of the following steps need to be carried out to accomplish this deployment? Choose 2 answers from the options given below A. Define a customer master key in KMS B. Create a reference CodePipeline instance in the other account C. Add a cross-account role D. Embed the access keys in the CodePipeline process

Answer - A and C Option B is invalid since this would go against the security policy. Option D is invalid since this is not a recommended security practice. This is mentioned in the AWS Documentation: You might want to create a pipeline that uses resources created or managed by another AWS account. For example, you might want to use one account for your pipeline and another for your AWS CodeDeploy resources. To do so, you must create an AWS Key Management Service (AWS KMS) key to use, add the key to the pipeline, and set up account policies and roles to enable cross-account access.

You have an application that needs to encrypt data using the KMS service. The company has already defined the customer master key in AWS for usage in the application. Which of the following steps must be followed in the encryption process? Choose 2 answers from the options given below A. Use the GenerateDataKey to get the data key to encrypt the data B. Use CustomerMaster Key to encrypt the data C. Delete the plaintext data encryption key after the data is encrypted D. Delete the Customer Master Key after the data is encrypted

Answer - A and C Options B and D are incorrect because you will not use the Customer Master Key directly to encrypt and decrypt data. The AWS Documentation mentions the following: We recommend that you use the following pattern to encrypt data locally in your application: Use this operation (GenerateDataKey) to get a data encryption key. Use the plaintext data encryption key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory. Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.
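
A minimal sketch of that envelope-encryption pattern with boto3; the CMK alias is hypothetical, and the local encryption step is only indicated in a comment:

```python
import boto3

kms = boto3.client("kms")

# ask KMS for a data key generated under the customer master key
data_key = kms.generate_data_key(
    KeyId="alias/app-data-key",   # hypothetical CMK alias
    KeySpec="AES_256",
)

plaintext_key = data_key["Plaintext"]        # use locally for encryption, never persist
encrypted_key = data_key["CiphertextBlob"]   # store alongside the encrypted data

# ... encrypt the payload locally with plaintext_key (e.g. via the cryptography package) ...

del plaintext_key   # erase the plaintext data key from memory once encryption is done
```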

Your development team is planning on working with Amazon Step Functions. Which of the following is a recommended practice when working with activity workers and tasks in Step Functions? Choose 2 answers from the options given below. A. Ensure to specify a timeout in state machine definitions B. We can use only 1 transition per state. C. If you are passing larger payloads between states, consider using the Simple Storage Service D. If you are passing larger payloads between states, consider using EBS volumes

Answer - A and C The AWS Documentation mentions the following By default, the Amazon States Language doesn't set timeouts in state machine definitions. Without an explicit timeout, Step Functions often relies solely on a response from an activity worker to know that a task is complete. If something goes wrong and TimeoutSeconds isn't specified, an execution is stuck waiting for a response that will never come. Executions that pass large payloads of data between states can be terminated. If the data you are passing between states might grow to over 32 KB, use Amazon Simple Storage Service (Amazon S3) to store the data, and pass the Amazon Resource Name instead of the raw data. Alternatively, adjust your implementation so that you pass smaller payloads in your executions. Option B is incorrect since States can have multiple incoming transitions from other states.
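
For illustration, a task state with an explicit timeout might look like the hypothetical state machine below (the activity and role ARNs are placeholders):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:states:us-east-1:111122223333:activity:process-order",
            "TimeoutSeconds": 300,    # fail the task if the worker never responds
            "HeartbeatSeconds": 60,   # fail earlier if the worker stops sending heartbeats
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",  # placeholder
)
```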

You are a team lead for the development of an application that will be hosted in AWS. The application will consist of a front end which will allow users to upload files. Part of the application will consist of sending and processing of messages by a backend service. You have been told to reduce the cost for the backend service, but also ensure efficiency. Which of the following would you consider in the implementation of the backend service? Choose 2 answers from the options given below A. Create an SQS queue to handle the processing of messages B. Create an SNS topic to handle the processing of messages C. Create a Lambda function to process the messages from the queue D. Create an EC2 Instance to process the messages from the queue

Answer - A and C The SQS queue can be used to handle the sending and receiving of messages. To reduce costs you can use Lambda functions to process the messages. The below is also given in the AWS Documentation: Using AWS Lambda with Amazon SQS - Attaching an Amazon SQS queue as an AWS Lambda event source is an easy way to process the queue's content using a Lambda function. Lambda takes care of: Automatically retrieving messages and directing them to the target Lambda function. Deleting them once your Lambda function successfully completes. Option B is incorrect since you should use SQS for handling of messages. SNS has no persistence; whichever consumers are subscribed at the time of message arrival receive the message, and if no consumer is available the message is lost. Option D is incorrect since this would not be a cost-effective option.
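
A minimal sketch of a Python Lambda handler for an SQS event source; the processing logic is a placeholder:

```python
def handler(event, context):
    # Lambda polls the queue and delivers batches of messages in event["Records"]
    for record in event["Records"]:
        process(record["body"])
    # returning without raising an exception lets Lambda delete the batch from the queue

def process(message_body):
    # placeholder for the real business logic
    print("processing message:", message_body)
```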

You are an API developer that has been hired to work in a company. You have been asked to use the AWS services for development and deployment via the API gateway. You need to control the behavior of the API's frontend interaction. Which of the following could be done to achieve this? Select 2 options. A. Modify the configuration of the Method request B. Modify the configuration of the Integration request C. Modify the configuration of the Method response D. Modify the configuration of the Integration response

Answer - A and C This is also mentioned in the AWS Documentation: As an API developer, you control the behaviors of your API's frontend interactions by configuring the method request and a method response. You control the behaviors of your API's backend interactions by setting up the integration request and integration response. These involve data mappings between a method and its corresponding integration. Options B and D are incorrect since these are used to control the behaviors of your API's backend interactions.

Your application is making requests to a DynamoDB table. Due to a surge of requests, you are now getting throttling errors in your application. Which of the following can be used to resolve such errors? Choose 2 answers from the options given below. A. Use exponential backoff in your requests from the application B. Consider using multiple sort keys C. Change the throughput capacity on the tables D. Consider using global tables

Answer - A and C Using exponential backoff adds retries with progressively longer waits, which helps your application ride out a surge of requests. Alternatively, you can increase the throughput capacity defined for your table. Option B is invalid because sort keys do not address throttling; a better distribution across partition keys is what would help. Option D is invalid because global tables are used for having multiple copies of your table in additional regions.
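
The AWS SDKs already retry throttled calls for you, but as a rough sketch, an explicit exponential backoff around a boto3 write could look like this (the table and item are hypothetical):

```python
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def put_with_backoff(item, retries=5):
    for attempt in range(retries):
        try:
            return dynamodb.put_item(TableName="Orders", Item=item)  # hypothetical table
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            # wait 100 ms, 200 ms, 400 ms, ... before retrying the throttled request
            time.sleep(0.1 * (2 ** attempt))
    raise RuntimeError("request was throttled on every attempt")

put_with_backoff({"CustomerId": {"S": "C-1001"}})
```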

Your team is planning on deploying an application on an ECS cluster. They need to also ensure that the X-Ray service can be used to trace the application deployed on the cluster. Which of the following are the right set of steps that are needed to accomplish this? Choose 2 answers from the options given below. A. Create a Docker image with the X-Ray daemon B. Attach an IAM role with permissions to the ECS Cluster C. Deploy the EC2 Instance to the ECS Cluster D. Assign a role to the docker container instance in ECS which has a policy that allows it to write to xray

Answer - A and D

An organization deployed their static website on Amazon S3. Now, the Developer has a requirement to serve dynamic content using a serverless solution. Which combination of services should be used to implement a serverless application for the dynamic content? Select 2 answers from the options given below A. Amazon API Gateway B. Amazon EC2 C. AWS ECS D. AWS Lambda E. Amazon Kinesis

Answer - A and D Given the scenario, API Gateway and AWS Lambda are the best two choices from the above list to build this serverless application. The AWS Documentation mentions the following: AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.

A company is planning on using AWS CodePipeline for their underlying CI/CD process. The code will be picked up from an S3 bucket. The company policy mandates that all data should be encrypted at rest and that the keys are managed by the customer. Which of the following measures would you take to ensure that the CI/CD process conforms to this policy? Choose 2 possible actions from the options given below. A. Ensure that server side encryption is enabled on the S3 bucket and data is encrypted at-rest on the CodeBuild environment using customer managed CMK B. Ensure that server-side encryption is enabled on the CodePipeline stage C. Configure the code pickup stage in CodePipeline to use AWS KMS D. Configure AWS KMS with customer managed keys and use it for S3 bucket encryption

Answer - A and D This is also mentioned in the AWS Documentation There are two ways to configure server-side encryption for Amazon S3 artifacts: AWS CodePipeline creates an Amazon S3 artifact bucket and default AWS-managed SSE-KMS encryption keys when you create a pipeline using the Create Pipeline wizard. The master key is encrypted along with object data and managed by AWS. You can create and manage your own customer-managed SSE-KMS keys. Options B and C are incorrect since this needs to be configured at the S3 bucket level.

You are developing an application which will make use of Kinesis Firehose for streaming the records onto the Simple Storage Service. Your company policy mandates that all data needs to be encrypted at rest. How can you achieve this with Kinesis Firehose? Choose 2 answers from the options given below. A. Enable Encryption on the Kinesis Data Firehose B. Install an SSL certificate in Kinesis Data Firehose C. Ensure that all data records are transferred via SSL D. Ensure that Kinesis streams are used to transfer the data from the producers

Answer - A and D This is given in the AWS Documentation: Option A is correct because you can enable encryption of data at rest for Kinesis Data Firehose. Option D is correct because when Kinesis Streams are chosen as the source, encryption of data at rest is enabled automatically. Options B and C are invalid because these are used for encrypting data in transit.

Your company currently has an S3 bucket hosted in an AWS Account. It holds information that needs to be accessed by a partner account. Which is the MOST secure way to allow the partner account to access the S3 bucket in your account? Choose 3 answers from the options given below. A. Ensure an IAM role is created which can be assumed by the partner account. B. Ensure an IAM user is created which can be assumed by the partner account. C. Ensure the partner uses an external ID when making the request D. Provide the ARN for the role to the partner account

Answer - A, C and D The AWS documentation describes this pattern, wherein a cross-account IAM role and an external ID are used to give another account access to your resources. Option B is incorrect since IAM users cannot be assumed; cross-account access is granted by assuming a role.
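
A sketch of how the partner side of this looks with boto3; the role ARN, external ID, and bucket name are placeholders agreed between the two accounts:

```python
import boto3

sts = boto3.client("sts")

# the partner assumes the role you shared, supplying the agreed external ID
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/PartnerS3Access",  # placeholder role ARN
    RoleSessionName="partner-session",
    ExternalId="unique-external-id-1234",                      # placeholder external ID
)

creds = assumed["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="shared-bucket"))   # placeholder bucket name
```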

You've defined a DynamoDB table with a read capacity of 5 and a write capacity of 5. Which of the following statements are TRUE? Choose 3 answers from the options given below A. Strongly consistent reads of a maximum of 20 KB per second B. Eventually consistent reads of a maximum of 20 KB per second C. Strongly consistent reads of a maximum of 40 KB per second D. Eventually consistent reads of a maximum of 40 KB per second E. Maximum writes of 5 KB per second

Answer - A, D and E This is also given in the AWS Documentation: For example, suppose that you create a table with 5 read capacity units and 5 write capacity units. With these settings, your application could: Perform strongly consistent reads of up to 20 KB per second (4 KB × 5 read capacity units). Perform eventually consistent reads of up to 40 KB per second (twice as much read throughput). Write up to 5 KB per second (1 KB × 5 write capacity units).

A Lambda function has been developed with the default settings and is using Node.js. The function makes calls to a DynamoDB table. It is estimated that the Lambda function would run for 5 minutes. When the Lambda function is executed, it is not adding the required rows to the DynamoDB table. What needs to be changed in order to ensure that the Lambda function works as desired? A. Ensure that the underlying programming language is changed to Python B. Change the timeout for the function C. Change the memory assigned to the function to 1 GB D. Assign an IAM user to the Lambda function

Answer - B If the Lambda function was created with the default settings, it would have the default timeout of 3 seconds. Since the function is expected to run for about 300 seconds, this value needs to be increased. Option A is incorrect since the programming language is not an issue. Option C is incorrect since there is no mention of the amount of memory required in the question. Option D is incorrect since IAM roles, not IAM users, are assigned to Lambda functions.
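
As a quick illustration with boto3 (the function name is hypothetical), the timeout is simply a function configuration setting:

```python
import boto3

lambda_client = boto3.client("lambda")

# raise the timeout from the 3-second default to 5 minutes
lambda_client.update_function_configuration(
    FunctionName="load-dynamodb-rows",   # hypothetical function name
    Timeout=300,
)
```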

You've just deployed an AWS Lambda function. This Lambda function would be invoked via the API Gateway service. You want to know if there were any errors while the Lambda function was being invoked. Which of the following services would allow you to check the performance of your underlying Lambda function? A. VPC Flow Logs B. CloudWatch C. CloudTrail D. AWS Trusted Advisor

Answer - B In AWS Lambda, you can use CloudWatch metrics to see the number of invocation errors. The AWS Documentation describes this: Accessing Amazon CloudWatch Metrics for AWS Lambda - AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include total requests, latency, and error rates. For more information about Lambda metrics, see AWS Lambda Metrics. For more information about CloudWatch, see the Amazon CloudWatch User Guide. You can monitor metrics for Lambda and view logs by using the Lambda console, the CloudWatch console, the AWS CLI, or the CloudWatch API.

You are part of a development team that is in charge of creating CloudFormation templates. These templates need to be created across multiple accounts with the least amount of effort. Which of the following would assist in accomplishing this? A. Creating CloudFormation ChangeSets B. Creating CloudFormation StackSets C. Make use of Nested stacks D. Use CloudFormation artifacts

Answer - B The AWS Documentation mentions the following: AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions. Option A is incorrect since this is used to make changes to the running resources in a stack. Option C is incorrect since these are stacks created as part of other stacks. Option D is incorrect since this is used in conjunction with CodePipeline.

You are developing a set of Lambda functions for your application. The company mandates that all calls to Lambda functions be recorded. Which of the below services can help achieve this? A. AWS CloudWatch B. AWS CloudTrail C. AWS VPC Flow Logs D. AWS Trusted Advisor

Answer - B The AWS Documentation mentions the following: AWS Lambda is integrated with AWS CloudTrail, a service that captures API calls made by or on behalf of AWS Lambda in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures API calls made from the AWS Lambda console or from the AWS Lambda API. Using the information collected by CloudTrail, you can determine what request was made to AWS Lambda, the source IP address from which the request was made, who made the request, when it was made, and so on. Option A is incorrect since this can only give information on the logs from CloudWatch but not who called the Lambda function itself. Option C is incorrect since this is used for logging network traffic to the VPC. Option D is incorrect since this cannot give API logging information.

Your company is planning on creating new development environments in AWS. They want to make use of their existing Chef recipes which they use for their on-premises configuration for servers in AWS. Which of the following services would be ideal to use in this regard? A. AWS Elastic Beanstalk B. AWS OpsWorks C. AWS CloudFormation D. AWS SQS

Answer - B The AWS Documentation mentions the following AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments All other options are invalid since they cannot be used to work with Chef recipes for configuration management.

Your team is developing a solution that will make use of DynamoDB tables. Due to the nature of the application, the data is needed in several regions around the world. Which of the following would help reduce the latency of requests to DynamoDB from different regions? A. Enable Multi-AZ for the DynamoDB table B. Enable global tables for DynamoDB C. Enable Indexes for the table D. Increase the read and write throughput for the table

Answer - B The AWS Documentation mentions the following Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution. When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagate ongoing data changes to all of them.

Your company has a large set of data sets that need to be streamed directly into Amazon S3. Which of the following would be perfect for such a requirement? A. Kinesis Streams B. Kinesis Data Firehose C. AWS Redshift D. AWS DynamoDB

Answer - B The AWS Documentation mentions the following: Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. Option A is partially valid, but since the stream of data needs to go directly into S3, Firehose can be used instead of Kinesis Streams. Option C is invalid because this is used as a petabyte-scale warehouse system. Option D is invalid because this is an AWS fully managed NoSQL database.

You are defining a Redis cluster using the AWS ElastiCache service. You need to define common values across the nodes for memory usage and item sizes. Which of the following components of the ElastiCache service allows you to define this? A. Endpoints B. Parameter Groups C. Security Groups D. Subnet Groups

Answer - B The AWS Documentation mentions the following: Cache parameter groups are an easy way to manage runtime settings for supported engine software. Parameters are used to control memory usage, eviction policies, item sizes, and more. An ElastiCache parameter group is a named collection of engine-specific parameters that you can apply to a cluster. By doing this, you make sure that all of the nodes in that cluster are configured in exactly the same way. Because of what the AWS Documentation mentions, all other options are invalid.

You are in charge of deploying an application that will be hosted on an EC2 Instance and sit behind an Elastic Load Balancer. You have been requested to monitor the incoming API connections to the Elastic Load Balancer. Which of the below options can satisfy this requirement? A. Use AWS CloudTrail with your load balancer B. Enable access logs on the load balancer C. Use a CloudWatch Logs Agent by installing it on EC2. D. Create a custom metric CloudWatch filter on your load balancer

Answer - B The AWS Documentation mentions the following: Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Option A is incorrect because CloudTrail captures API calls made to the Elastic Load Balancing service as events; it is not the recommended approach for monitoring incoming connections to the ELB. Option B is correct because the access logs capture the details of each incoming request. Option C is invalid since the Logs agent is installed on EC2 Instances and not on the ELB. Option D is invalid since the metrics will not provide detailed information on the incoming connections.

A company is planning on using Amazon Kinesis Firehose to stream data into an S3 bucket. They need the data to be transformed first before it can be sent to the S3 bucket. Which of the following would be used for the transformation process? A. AWS SQS B. AWS Lambda C. AWS EC2 D. AWS API Gateway

Answer - B The AWS Documentation mentions the following: Kinesis Data Firehose can invoke your Lambda function to transform incoming source data and deliver the transformed data to destinations. You can enable Kinesis Data Firehose data transformation when you create your delivery stream. Because of what the AWS Documentation mentions, all other options are invalid.
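
For illustration, a transformation Lambda receives base64-encoded records and must return each one with a recordId, a result, and the transformed data; the sketch below simply upper-cases each record as a placeholder transformation:

```python
import base64

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()   # placeholder transformation
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",             # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```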

A developer is using Amazon API Gateway as an HTTP proxy to a backend endpoint. There are three separate environments: Development, Testing, Production and three corresponding stages in the API Gateway. How should traffic be directed to different backend endpoints for each of these stages without creating a separate API for each? A. Add a model to the API and add a schema to differentiate different backend endpoints. B. Use stage variables and configure the stage variables in the HTTP Integration Request of the API C. Use API Custom Authorizers to create an authorizer for each of the different stages. D. Update the Integration Response of the API to add different backend endpoints.

Answer - B The AWS Documentation mentions the following to support this: Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of an API. They act like environment variables and can be used in your API setup and mapping templates. Option A is incorrect since this would only allow for additions of schemas. Option C is incorrect since this is only used for authorization and would not help to differentiate the environments. Option D is incorrect since this would help in integrating the responses to the API Gateway.

Your team has just finished developing a new version of an existing application. This is a web-based application hosted on AWS. Currently, Route 53 is being used to point the company's DNS name to the web site. Your management has instructed you to deliver the new application to a portion of the users for testing. How can you achieve this? A. Port the application onto Elastic Beanstalk and use the Swap URL feature B. Use Route 53 weighted Routing policies C. Port the application onto OpsWorks by creating a new stack D. Use Route 53 failover Routing policies

Answer - B The AWS Documentation mentions the following to support this: Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. To configure weighted routing, you create records that have the same name and type for each of your resources. You assign each record a relative weight that corresponds with how much traffic you want to send to each resource. Amazon Route 53 sends traffic to a resource based on the weight that you assign to the record as a proportion of the total weight for all records in the group.
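
A hedged sketch of adding a weighted record for the new version with boto3; the hosted zone id, record names, and weights are placeholders (the weight is relative to the total weight of all records in the set):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",           # placeholder hosted zone id
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "SetIdentifier": "new-version",   # distinguishes records sharing the same name
            "Weight": 10,                     # e.g. 10 out of a total weight of 100
            "TTL": 60,
            "ResourceRecords": [{"Value": "new.example.com"}],
        },
    }]},
)
```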

Your team has been instructed on deploying a microservices-based application onto AWS. There is a requirement to manage the orchestration of the application. Which of the following would be the ideal way to implement this with the least amount of administrative effort? A. Use the Elastic Beanstalk service B. Use the Elastic Container Service C. Deploy Kubernetes on EC2 Instances D. Use the OpsWorks service

Answer - B The Elastic Container Service is a fully managed orchestration service available in AWS. The AWS Documentation mentions the following: Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. Options A and D are incorrect because, although they can be used to deploy Docker containers, the Elastic Container Service is the fully managed orchestration service. Option C is incorrect since, even though Kubernetes is a full orchestration solution, hosting it yourself on EC2 Instances incurs more administrative overhead.

A DynamoDB table has a Read Throughput capacity of 5 RCU. Which of the following read configurations will provide us the maximum read throughput? A. Read capacity set to 5 for 4KB reads of data at strong consistency B. Read capacity set to 5 for 4KB reads of data at eventual consistency C. Read capacity set to 15 for 1KB reads of data at strong consistency D. Read capacity set to 5 for 1KB reads of data at eventual consistency

Answer - B The calculation of throughput capacity for Option B would be: read capacity (5) x item size (4 KB) = 20 KB. Since the reads are eventually consistent, the read throughput doubles to 20 x 2 = 40 KB per second. For Option A: read capacity (5) x item size (4 KB) = 20 KB. With strongly consistent reads, the read throughput is 20 KB per second. For Option C: read capacity (15) x item size (1 KB) = 15 KB. With strongly consistent reads, the read throughput is 15 KB per second. For Option D: read capacity (5) x item size (1 KB) = 5 KB. With eventually consistent reads, the read throughput is 5 x 2 = 10 KB per second.

You are planning on deploying an application to the worker role in Elastic Beanstalk. Moreover, this worker application is going to run periodic tasks. Which of the following is a must-have as part of the deployment? A. An appspec.yaml file B. A cron.yaml file C. A cron.config file D. An appspec.json file

Answer - B This is also given in the AWS Documentation Create an Application Source Bundle When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle. Your source bundle must meet the following requirements: Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file) Not exceed 512 MB Not include a parent folder or top-level directory (subdirectories are fine) If you want to deploy a worker application that processes periodic background tasks, your application source bundle must also include a cron.yaml file. For more information, see Periodic Tasks. Because of the exact requirement given in the AWS Documentation, all other options are invalid.

You have a Lambda function that is invoked asynchronously. You need a way to check and debug issues if the function fails. How could you accomplish this? A. Use AWS CloudWatch metrics B. Assign a dead letter queue C. Configure SNS notifications D. Use AWS CloudTrail logs

Answer - B This is also mentioned in the AWS Documentation: Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you're unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure. Option A is incorrect since the metrics will only give the rate at which the function is executing, but will not help debug the actual error. Option C is incorrect since this will only provide notifications but not give the actual events which failed. Option D is incorrect since this is only used for API monitoring.
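
As an illustrative sketch, a dead letter queue is attached through the function configuration; the function name and queue ARN below are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# send events that still fail after the async retries to an SQS queue for analysis
lambda_client.update_function_configuration(
    FunctionName="async-worker",   # placeholder function name
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:111122223333:failed-events",  # placeholder
    },
)
```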

You are developing a common Lambda function that will be used across several development environments such as dev, qa, staging, etc. The Lambda function needs to interact with each of these development environments. What is the best way to develop the Lambda function? A. Create a Lambda function for each environment so that each function can point to its respective environment. B. Create one Lambda function and use environment variables for each environment to interact. C. Create one Lambda function and create several versions for each environment. D. Create one Lambda function and create several ALIASes for each environment.

Answer - B This is also mentioned in the AWS Documentation: Environment variables for Lambda functions enable you to dynamically pass settings to your function code and libraries, without making changes to your code. Environment variables are key-value pairs that you create and modify as part of your function configuration, using either the AWS Lambda Console, the AWS Lambda CLI or the AWS Lambda SDK. AWS Lambda then makes these key-value pairs available to your Lambda function code using standard APIs supported by the language, like process.env for Node.js functions. Option A is incorrect since this would result in unnecessary duplicate functions and more maintenance overhead. Options C and D are incorrect since these are not the right way to design the functions for this use case.
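
The documentation's example uses process.env for Node.js; the equivalent pattern in a Python Lambda function, with a hypothetical TABLE_NAME variable set differently per environment, would look roughly like this:

```python
import os
import boto3

# read the environment-specific setting instead of hard-coding it
TABLE_NAME = os.environ["TABLE_NAME"]   # e.g. "orders-dev" in dev, "orders-qa" in qa

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    return table.get_item(Key={"OrderId": event["orderId"]})
```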

You are a developer who has been hired to lead the development of a new application. The application needs to interact with a backend data-store. The application also needs to perform many complex join operations. Which of the following would be the ideal data-store option? (Select Two) A. AWS DynamoDB B. AWS RDS C. AWS Redshift D. AWS S3

Answer - B and C Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Since you need complex query design, it is better to choose one of the available relational database services. Amazon Redshift is a data warehouse product which forms part of the larger AWS cloud-computing platform. It is built on top of technology from the massively parallel processing data warehouse company ParAccel, to handle large-scale data sets and database migrations. Both of the options above support complex joins. Option A is incorrect since AWS DynamoDB does not support complex joins. Option D is incorrect since this is used for object-level storage.

Your team is currently working on a source code that's defined in a Subversion repository. The company has just started using AWS tools for their CI/CD process and has now mandated that source code be migrated to AWS CodeCommit. Which of the following steps would you perform to fulfill this requirement? Choose 2 answers from the options given below. A. Migrate the code as it is to the AWS Code Commit Repository B. Migrate the code to a Git Repository first C. Migrate code from Git to AWS Code Commit D. Ensure to clone the current repository before committing it to AWS Code Commit

Answer - B and C The AWS Documentation mentions the following: Migrate to AWS CodeCommit - You can migrate a Git repository to an AWS CodeCommit repository in a number of ways: by cloning it, mirroring it, migrating all or just some of the branches, and so on. You can also migrate local, unversioned content on your computer to AWS CodeCommit. The following topics demonstrate some of the ways you can choose to migrate to a repository. Your steps may vary, depending on the type, style, or complexity of your repository and the decisions you make about what and how you want to migrate. For very large repositories, you might want to consider migrating incrementally. Note: You can migrate to AWS CodeCommit from other version control systems, such as Perforce, Subversion, or TFS, but you will have to migrate to Git first. Options A and D are incorrect since you need to migrate the repository to Git first.

Your development team is working with Docker containers. These containers need to encrypt data. The data key needs to be generated using the KMS service. The data key should be in the encrypted format. Which of the following would you most ideally use? (Choose 2 options) A. The GenerateDataKey command B. The GenerateDataKeyWithoutPlaintext command C. Use the CMK Keys D. Use client-side keys

Answer - B and C The AWS Documentation mentions the following: GenerateDataKeyWithoutPlaintext returns a data encryption key encrypted under a customer master key (CMK). This operation is identical to GenerateDataKey but returns only the encrypted copy of the data key. Option A is incorrect because the GenerateDataKey command returns both the original plaintext key and the encrypted copy of the key. Option B is CORRECT because we need the command "GenerateDataKeyWithoutPlaintext" in order to only return the encrypted key. Option C is CORRECT because the CMK is required to encrypt the data keys with the above command. Option D is invalid since the question states that you need to use the KMS service
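For illustration, a boto3 sketch of requesting a data key that is returned only in encrypted form; the CMK alias is a placeholder assumption:

    import boto3

    kms = boto3.client("kms")

    # Returns only the encrypted copy of the data key, protected under the CMK.
    response = kms.generate_data_key_without_plaintext(
        KeyId="alias/my-container-key",   # hypothetical CMK alias
        KeySpec="AES_256",
    )
    encrypted_data_key = response["CiphertextBlob"]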

You need to set up a RESTful API service in AWS that would be serviced via the following URL https://democompany.com/customers?ID=1 So customers should be able to get their details whilst providing the ID to the API. Which of the following would you define to fulfill this requirement? Choose 2 answers from the options given below A. A Lambda function and expose the Lambda function to the customers. Pass the ID as a parameter to the function B. An API gateway with a Lambda function to process the customer information C. Expose the GET method in the API Gateway D. Expose the GET method in the Lambda function

Answer - B and C The ideal approach would be to implement the code that retrieves the customer information in a Lambda function, attach the Lambda function to the API Gateway service, and expose the GET method in API Gateway so that users can call the API with the customer ID.
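A minimal Python sketch of the backing Lambda handler, assuming a Lambda proxy integration so the ID arrives in the event's query string parameters (the static response body is purely illustrative):

    import json

    def lambda_handler(event, context):
        # With Lambda proxy integration, query string values arrive in the event.
        customer_id = (event.get("queryStringParameters") or {}).get("ID")
        # The actual customer lookup is omitted; a static response is returned here.
        return {
            "statusCode": 200,
            "body": json.dumps({"id": customer_id, "name": "example customer"}),
        }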

You are planning on using the Serverless Application model which will be used to deploy a serverless application consisting of a Node.js function. Which of the following steps need to be carried out? Choose 2 answers from the options given below. A. Use the Lambda package command B. Use the SAM package command C. Use the Lambda deploy command D. Use the SAM deploy command

Answer - B and D With the Serverless Application Model, you first run the SAM package command, which uploads the deployment artifacts to Amazon S3 and produces a packaged template, and then run the SAM deploy command, which deploys the packaged template as a CloudFormation stack.

You are developing a Java based application that needs to make use of the AWS KMS service for encryption. Which of the following must be done for the encryption and decryption process? Choose 2 answers from the options given below. A. Use the Customer master key to encrypt the data B. Use the Customer master key to generate a data key for the encryption process C. Use the Customer master key to decrypt the data D. Use the generated data key to decrypt the data

Answer - B and D The AWS Documentation mentions the following: The AWS Encryption SDK is a client-side encryption library that makes it easier for you to implement cryptography best practices in your application. It includes secure default behaviour for developers who are not encryption experts, while being flexible enough to work for the most experienced users. Options A and C are incorrect because you should never use the customer master keys directly for the encryption or decryption of data. In the AWS Encryption SDK, by default, you generate a new data key for each encryption operation.
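A minimal envelope-encryption sketch with boto3, showing how the CMK produces a data key that does the actual work (the local cipher step is omitted and the CMK alias is a placeholder assumption):

    import boto3

    kms = boto3.client("kms")

    # Ask the CMK for a data key; the plaintext copy encrypts the data locally,
    # the encrypted copy is stored alongside the ciphertext.
    key = kms.generate_data_key(KeyId="alias/app-key", KeySpec="AES_256")
    plaintext_data_key = key["Plaintext"]          # use with a local cipher, then discard
    encrypted_data_key = key["CiphertextBlob"]     # store with the encrypted data

    # Later, recover the plaintext data key in order to decrypt the data.
    recovered = kms.decrypt(CiphertextBlob=encrypted_data_key)["Plaintext"]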

You are working as a team lead for your company. You have been told to manage the Blue Green Deployment methodology for one of the applications. Which of the following are some of the approaches for implementing this methodology? Choose 2 answers from the options given below A. Using Autoscaling Groups to scale on demands for both deployments B. Using Route 53 with Weighted Routing policies C. Using Route 53 with Latency Routing policies D. Using Elastic Beanstalk with the swap URL feature

Answer - B and D The AWS Documentation mentions the following: Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application can become unavailable to users for a short period of time. You can avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly. Option A is incorrect since Auto Scaling groups on their own only scale capacity on demand; they do not shift traffic between the two deployments. Option C is incorrect since you need to use Route 53 with Weighted Routing policies, not Latency Routing policies.
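As a hedged sketch of the Elastic Beanstalk swap-URL approach using boto3; the environment names are placeholder assumptions:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Swap the CNAMEs of the blue and green environments, instantly redirecting
    # traffic to the environment running the new application version.
    eb.swap_environment_cnames(
        SourceEnvironmentName="my-app-blue",
        DestinationEnvironmentName="my-app-green",
    )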

You are configuring Cross Origin Resource Sharing for your S3 bucket. You need to ensure that external domain sites can only issue the GET requests against your bucket. Which of the following would you modify as part of the CORS configuration for this requirement? ]A. AllowedOrigin Element ]B. AllowedHeader Element ]C. AllowedMethod Element ]D. MaxAgeSeconds Element

Answer - C The AllowedMethod element in the CORS configuration lists the HTTP methods (such as GET, PUT, POST, DELETE, HEAD) that external origins are permitted to use, so restricting it to GET fulfills the requirement.

You are in charge of developing Cloudformation templates which would be used to deploy databases in different AWS Accounts. In order to ensure that the passwords for the database are passed in a secure manner which of the following could you use with Cloudformation? ]A. Outputs ]B. Metadata ]C. Parameters ]D. Resources

Answer - C Parameters let you pass values such as database passwords into the template at stack creation time instead of hard-coding them. By additionally setting the NoEcho property on the parameter, the value is masked in the console, CLI and API output.

You've written an application that uploads objects onto an S3 bucket. The size of the object varies between 200 - 500 MB. You've seen that the application sometimes takes a longer than expected time to upload the object. You want to improve the performance of the application. Which of the following would you consider? ]A. Create multiple threads and upload the objects in the multiple threads ]B. Write the items in batches for better performance ]C. Use the Multipart upload API ]D. Enable versioning on the Bucket

Answer - C All other options are invalid since the best way to handle large object uploads to the S3 service is to use the Multipart upload API The AWS Documentation mentions the following to support this The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object (see Operations on Objects). Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.
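A hedged boto3 sketch of a multipart upload; the transfer manager performs the initiate / upload-part / complete calls automatically once the file exceeds the threshold. The file, bucket and key names are placeholder assumptions:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Upload in 25 MB parts, several parts in parallel.
    config = TransferConfig(multipart_threshold=25 * 1024 * 1024,
                            multipart_chunksize=25 * 1024 * 1024,
                            max_concurrency=4)
    s3.upload_file("large-report.bin", "my-example-bucket",
                   "reports/large-report.bin", Config=config)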

You've just started developing an application on your On-premise network. This application will interact with the Simple Storage Service and some DynamoDB tables. How would you as the developer ensure that your SDK can interact with the AWS services on the cloud? ]A. Create an IAM Role with the required permissions and add it to your workstation ]B. Create an IAM Role with the required permissions and make a call to the STS service ]C. Create an IAM user, generate the access keys, to interact with AWS services on the cloud ]D. Create an IAM User , generate a security token. Use the Security Token from within your program.

Answer - C Options A and B are incorrect since we need to use AWS access keys during development from an on-premise workstation, and IAM Roles cannot be attached to machines outside AWS. Option D is incorrect since we should not be generating a security token to interact with the various AWS services during the development phase. When working on development, you need to use the AWS access keys to work with the AWS resources. The AWS Documentation additionally mentions the following: You use different types of security credentials depending on how you interact with AWS. For example, you use a user name and password to sign in to the AWS Management Console. You use access keys to make programmatic calls to AWS API operations.

Your company is planning on storing documents in an S3 bucket. The documents are sensitive, and employees should use Multi-Factor Authentication when trying to access documents. Which of the following must be done to fulfil this requirement? ]A. Ensure that encryption is enabled on the bucket using AWS server-side encryption ]B. Ensure that encryption is enabled on the bucket using KMS keys ]C. Ensure that a bucket policy is in place with a condition of "aws:MultiFactorAuthPresent":"false" with a Deny policy ]D. Ensure that a bucket policy is in place with a condition of "aws:MultiFactorAuthPresent":"true" with a Deny policy

Answer - C The AWS Documentation gives an example of how to add a bucket policy which ensures that only users who are MFA-authenticated have access to the bucket; denying requests where "aws:MultiFactorAuthPresent" is "false" achieves this. Options A and B are incorrect since the question is about MFA and not encryption.
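A hedged sketch of applying such a policy with boto3; the bucket name is a placeholder assumption, and BoolIfExists mirrors the AWS example so that requests without the MFA context key are also denied:

    import boto3, json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::example-docs-bucket",
                         "arn:aws:s3:::example-docs-bucket/*"],
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
        }]
    }
    # Attach the policy to the bucket holding the sensitive documents.
    s3.put_bucket_policy(Bucket="example-docs-bucket", Policy=json.dumps(policy))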

You're developing an application that is going to be hosted in AWS Lambda. The function will make calls to a database. The security mandate is that all connection strings should be kept secure. Which of the following is the MOST secure way to implement this? ]A. Use Lambda Environment variables ]B. Put the database connection string in the app.json file ]C. Lambda needs to reference the AWS Systems Manager Parameter Store for the encrypted database connection string ]D. Place the database connection string in the AWS Lambda function itself since all lambda functions are encrypted at rest

Answer - C The AWS Documentation mentions the following: AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. Parameter Store is offered at no additional charge. Option A would only be valid if the option also said that the variable would be stored in an encrypted format. Option B is invalid since this is the least secure way to store database connection strings. Option D is invalid since this is not the case with Lambda functions.
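A short boto3 sketch of how the function could look up an encrypted SecureString parameter; the parameter name is a placeholder assumption:

    import boto3

    ssm = boto3.client("ssm")

    def get_connection_string():
        # KMS decrypts the SecureString value on retrieval.
        response = ssm.get_parameter(Name="/myapp/prod/db-connection-string",
                                     WithDecryption=True)
        return response["Parameter"]["Value"]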

Your application currently makes use of AWS Cognito for managing user identities. You want to analyze the information that is stored in AWS Cognito for your application. Which of the following features of AWS Cognito should you use for this purpose? ]A. Cognito Data ]B. Cognito Events ]C. Cognito Streams ]D. Cognito Callbacks

Answer - C The AWS Documentation mentions the following Amazon Cognito Streams gives developers control and insight into their data stored in Amazon Cognito. Developers can now configure a Kinesis stream to receive events as data is updated and synchronized. Amazon Cognito can push each dataset change to a Kinesis stream you own in real time. All other options are invalid since you should use Cognito Streams

You are developing an application that is working with a DynamoDB table. During the development phase, you want to know how much of the Consumed capacity is being used for the queries being fired. How can this be achieved? ]A. The queries by default sent via the program will return the consumed capacity as part of the result. ]B. Ensure to set the ReturnConsumedCapacity in the query request to TRUE. ]C. Ensure to set the ReturnConsumedCapacity in the query request to TOTAL. ]D. Use the Scan operation instead of the query operation.

Answer - C The AWS Documentation mentions the following By default, a Query operation does not return any data on how much read capacity it consumes. However, you can specify the ReturnConsumedCapacity parameter in a Query request to obtain this information. The following are the valid settings for ReturnConsumedCapacity: NONE—no consumed capacity data is returned. (This is the default.) TOTAL—the response includes the aggregate number of read capacity units consumed. INDEXES—the response shows the aggregate number of read capacity units consumed, together with the consumed capacity for each table and index that was accessed. Because of what the AWS Documentation mentions, all other options are invalid.
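For illustration, a boto3 query that requests the aggregate consumed capacity; the table name, key name and value are placeholder assumptions:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table name

    response = table.query(
        KeyConditionExpression=Key("CustomerId").eq("12345"),
        ReturnConsumedCapacity="TOTAL",
    )
    # Aggregate read capacity units consumed by this query.
    print(response["ConsumedCapacity"])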

Your team is looking towards deploying an application into Elastic beanstalk. They want to deploy different versions of the application onto the environment. How can they achieve this in the easiest possible way? ]A. Create multiple applications in Elastic Beanstalk ]B. Create multiple environments in Elastic Beanstalk ]C. Upload the application versions to the environment ]D. Use CodePipeline to stream line the various application versions

Answer - C The AWS Documentation mentions the following Elastic Beanstalk creates an application version whenever you upload source code. This usually occurs when you create an environment or upload and deploy code using the environment management console or EB CLI. Elastic Beanstalk deletes these application versions according to the application's lifecycle policy and when you delete the application.

Your company has a bucket which has versioning and Encryption enabled. The bucket receives thousands of PUT operations per day. After a period of 6 months, there are a significant number of HTTP 503 error codes which are being received. Which of the following can be used to diagnose the error? ]A. AWS Config ]B. AWS Cloudtrail ]C. AWS S3 Inventory ]D. AWS Trusted Advisor

Answer - C The AWS Documentation mentions the following If you notice a significant increase in the number of HTTP 503-slow down responses received for Amazon S3 PUT or DELETE object requests to a bucket that has versioning enabled, you might have one or more objects in the bucket for which there are millions of versions. When you have objects with millions of versions, Amazon S3 automatically throttles requests to the bucket to protect the customer from an excessive amount of request traffic, which could potentially impede other requests made to the same bucket. To determine which S3 objects have millions of versions, use the Amazon S3 inventory tool. The inventory tool generates a report that provides a flat file list of the objects in a bucket. Option A is incorrect since this tool is used to monitor configuration changes Option B is incorrect since this tool is used to monitor API activity Option D is incorrect since this tool is used to give recommendations

You've been given the requirement to customize the content which is distributed to users via a Cloudfront Distribution. The content origin is an S3 bucket. How could you achieve this? ]A. Add an event to the S3 bucket. Make the event invoke a Lambda function which would customize the content. ]B. Add a Step Function. Add a step with a Lambda function just before the content gets delivered to the users. ]C. Consider using Lambda@Edge ]D. Consider using a separate application on an EC2 Instance for this purpose.

Answer - C The AWS Documentation mentions the following Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions that customize the content that CloudFront delivers. You can author functions in one region and execute them in AWS locations globally that are closer to the viewer, without provisioning or managing servers. Lambda@Edge scales automatically, from a few requests per day to thousands per second. Processing requests at AWS locations closer to the viewer instead of on origin servers significantly reduces latency and improves the user experience. All other options are incorrect since none of these are valid ways to customize content via Cloudfront distributions.
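Lambda@Edge functions can be written in Node.js or Python; a minimal Python sketch of an origin-request handler that customizes the request before it reaches the S3 origin (the header name and value are illustrative assumptions):

    def lambda_handler(event, context):
        # CloudFront passes the request in the Lambda@Edge event structure.
        request = event["Records"][0]["cf"]["request"]

        # Example customization: add a header at the edge location.
        request["headers"]["x-custom-header"] = [
            {"key": "X-Custom-Header", "value": "customized-at-the-edge"}
        ]
        return request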

You've developed an AWS Lambda function but are running into a lot of performance issues. You decide to use the AWS X-Ray service to diagnose the issues. Which of the following must be done to ensure that you can use the X-Ray service with your Lambda function? ]A. Ensure that the X-Ray daemon process is installed with the Lambda function ]B. Ensure that the Lambda function is registered with X-Ray ]C. Ensure that the IAM Role assigned to the Lambda function has access to the X-Ray service ]D. Ensure that the IAM Role assigned to the X-Ray function has access to the Lambda function

Answer - C The AWS Documentation mentions the following Setting Up AWS X-Ray with Lambda Following, you can find detailed information on how to set up X-Ray with Lambda. Before You Begin To enable tracing on your Lambda function using the Lambda CLI, you must first add tracing permissions to your function's execution role. To do so, take the following steps: Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. Find the execution role for your Lambda function. Attach the following managed policy: AWSXrayWriteOnlyAccess Option A is incorrect since this is used if you need to use X-Ray with an application which is hosted on an EC2 Instance Option B is incorrect since this is not required to begin using the X-Ray service with AWS Lambda Option D is incorrect since the permissions need to be assigned the other way around.
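A hedged boto3 sketch of the two configuration steps this implies; the role and function names are placeholder assumptions:

    import boto3

    iam = boto3.client("iam")
    lambda_client = boto3.client("lambda")

    # Give the function's execution role write access to X-Ray.
    iam.attach_role_policy(
        RoleName="my-function-execution-role",
        PolicyArn="arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess",
    )

    # Turn on active tracing for the function itself.
    lambda_client.update_function_configuration(
        FunctionName="my-function",
        TracingConfig={"Mode": "Active"},
    )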

Your development team is developing several Lambda functions for testing. These functions will be called by an external .Net program. The program needs to call each Lambda function version for testing purposes. How can you accomplish this in the easiest way, ensuring the fewest changes need to be made to the .Net program? ]A. Create different environment variables for the Lambda function ]B. Create different versions for the Lambda function ]C. Create one or more ALIAS and reference it in the program ]D. Use the SAM for deployment of the functions

Answer - C The AWS Documentation mentions the following You can create one or more aliases for your Lambda function. An AWS Lambda alias is like a pointer to a specific Lambda function version. By using aliases, you can access the Lambda function an alias is pointing to (for example, to invoke the function) without the caller having to know the specific version the alias is pointing to Option A is invalid since environment variables in AWS Lambda are used to dynamically pass settings to your function code and libraries, without making changes to the Lambda code. Option B is invalid since this is used to publish one or more versions of your Lambda function Option D is invalid since this is used to define serverless applications
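A short boto3 sketch of creating an alias and invoking through it; the function name, alias name and version number are placeholder assumptions:

    import boto3

    lambda_client = boto3.client("lambda")

    # Point an alias at a specific published version.
    lambda_client.create_alias(FunctionName="my-function",
                               Name="TEST", FunctionVersion="3")

    # The external program only ever references the alias, so repointing the
    # alias to a new version requires no change to the caller.
    lambda_client.invoke(FunctionName="my-function", Qualifier="TEST",
                         Payload=b"{}")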

Your company is planning on using the Simple Storage service to host objects that will be accessed by users. There is a speculation that there would be roughly 6000 GET requests per second. Which of the following is the right way to use object keys for optimal performance? ]A. exampleawsbucket/2019-14-03-15-00-00/photo1.jpg ]B. exampleawsbucket/sample/232a-2019-14-03-15-00-00photo1.jpg ]C. exampleawsbucket/232a-2019-14-03-15-00-00/photo1.jpg ]D. exampleawsbucket/sample/photo1.jpg

Answer - C The AWS Documentation's guidance on optimal request performance for S3 was to introduce randomness at the beginning of the key name. A key that starts with a hash-like prefix (as in option C) spreads objects, and therefore requests, across multiple index partitions, whereas keys that all share a sequential date-based prefix concentrate the workload on a single partition.

A Developer is writing several Lambda functions that each access data in a common RDS DB instance. They must share a connection string that contains the database credentials, which are a secret. A company policy requires that all secrets be stored encrypted. Which solution will minimize the amount of code the Developer must write? ]A. Use common DynamoDB table to store settings ]B. Use AWS Lambda environment variables ]C. Use Systems Manager Parameter Store secure strings ]D. Use a table in a separate RDS database.

Answer - C The AWS Documentation mentions the following to support this: AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Options A and D are incorrect and inefficient since you don't need a separate table, and neither option addresses the requirement that the secret be stored encrypted. Option B is not correct since environment variables are defined per function, so the encrypted connection string would have to be duplicated and maintained in every function.

You've currently been tasked to migrate an existing on-premise environment into Elastic Beanstalk. The application does not make use of Docker containers. You also can't see any relevant environments in the beanstalk service that would be suitable to host your application. What should you consider doing in this case? ]A. Migrate your application to using Docker containers and then migrate the app to the Elastic Beanstalk environment. ]B. Consider using Cloudformation to deploy your environment to Elastic Beanstalk ]C. Consider using Packer to create a custom platform ]D. Consider deploying your application using the Elastic Container Service

Answer - C The AWS Documentation mentions the following to support this: Custom Platforms - Elastic Beanstalk supports custom platforms. A custom platform is a more advanced customization than a Custom Image in several ways. A custom platform lets you develop an entire new platform from scratch, customizing the operating system, additional software, and scripts that Elastic Beanstalk runs on platform instances. This flexibility allows you to build a platform for an application that uses a language or other infrastructure software, for which Elastic Beanstalk doesn't provide a platform out of the box. Compare that to custom images, where you modify an AMI for use with an existing Elastic Beanstalk platform, and Elastic Beanstalk still provides the platform scripts and controls the platform's software stack. In addition, with custom platforms you use an automated, scripted way to create and maintain your customization, whereas with custom images you make the changes manually over a running instance. To create a custom platform, you build an Amazon Machine Image (AMI) from one of the supported operating systems - Ubuntu, RHEL, or Amazon Linux (see the flavor entry in Platform.yaml File Format for the exact version numbers) - and add further customizations. You create your own Elastic Beanstalk platform using Packer, which is an open-source tool for creating machine images for many platforms, including AMIs for use with Amazon EC2. An Elastic Beanstalk platform comprises an AMI configured to run a set of software that supports an application, and metadata that can include custom configuration options and default configuration option settings. Options A and D are invalid because migrating the application to Docker containers could require a lot of effort. Option B is invalid because CloudFormation alone cannot provide a runtime platform for the application.

A company currently allows access to their API's to customers via the API gateway. Currently all clients have a 6-month period to move from using the older API's to newer versions of the API's. The code for the API is hosted in AWS Lambda. Which of the following is the ideal strategy to employ in such a situation? ]A. Create another AWS Lambda version and give the link to that version to the customers. ]B. Create another AWS Lambda ALIAS and give the link to that version to the customers. ]C. Create another stage in the API gateway ]D. Create a deployment package that would automatically change the link to the new Lambda version

Answer - C The best way is to create a separate stage in the API Gateway, such as 'v2', so that customers can use both API versions and gradually move to the new version during the 6-month period. Below is the concept of the API stage in the AWS Documentation: API stage - "A logical reference to a lifecycle state of your API (for example, 'dev', 'prod', 'beta', 'v2'). API stages are identified by API ID and stage name". Options A and B are incorrect since access needs to be provided via the API Gateway. Option D is incorrect since you need to keep both versions running side by side.

Your team has a Code Commit repository in your account. You need to give developers in another account access to your Code Commit repository. Which of the following is the most effective way to grant access? ]A. Create IAM users for each developer and provide access to the repository ]B. Create an IAM Group , add the IAM users and then provide access to the repository ]C. Create a cross account role , give the role the privileges. Provide the role ARN to the developers. ]D. Enable public access for the repository.

Answer - C This is also mentioned in the AWS Documentation Configure Cross-Account Access to an AWS CodeCommit Repository You can configure access to AWS CodeCommit repositories for IAM users and groups in another AWS account. This is often referred to as cross-account access. This section provides examples and step-by-step instructions for configuring cross-account access for a repository named MySharedDemoRepo in the US East (Ohio) Region in an AWS account (referred to as AccountA) to IAM users who belong to an IAM group named DevelopersWithCrossAccountRepositoryAccess in another AWS account (referred to as AccountB).

As a developer, you are writing an application that will be hosted on an EC2 Instance. This application will interact with a queue defined using the Simple Queue Service. Messages will appear in the queue within a 20-60 second time window. Which of the following strategies should be used to effectively query the queue for messages? ]A. Use dead letter queues ]B. Use FIFO queues ]C. Use long polling ]D. Use short polling

Answer - C This is mentioned in the AWS Documentation Long polling offers the following benefits: Eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response. Unless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of messages specified in the ReceiveMessage action. Eliminate false empty responses by querying all—rather than a subset of—Amazon SQS servers. Option A is invalid since this is used for storing undelivered messages Option B is invalid since this is used for First In First Out queues Option D is invalid since this is used when messages are immediately available in the queue
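A hedged boto3 sketch of long polling; the queue URL is a placeholder assumption and process() stands in for whatever handling the application performs:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    # Wait up to 20 seconds for a message instead of returning an empty response.
    response = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        process(message)  # hypothetical application-specific handler
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=message["ReceiptHandle"])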

A Developer working on an AWS CodeBuild project wants to override a build command as part of a build run to test a change. The developer has access to run the builds but does not have access to the code or to edit the CodeBuild project. What process should the Developer use to override the build command? ]A. Update the buildspec.yml configuration file that is part of the source code and run a new build. ]B. Update the command in the Build Commands section during the build run in the AWS console. ]C. Run the start build AWS CLI command with buildspecOverride property set to the new buildspec.yml file. ]D. Update the buildspec property in the StartBuild API to override the build command during build run.

Answer - C Use the AWS CLI start-build command to specify different parameters for the build run. Since the developer has permission to run builds, they can start the build from the command line and override the buildspec for that run only. The same is also mentioned in the AWS Documentation.
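The SDK equivalent of the start-build CLI call, sketched with boto3; the project name and the inline buildspec content are placeholder assumptions:

    import boto3

    codebuild = boto3.client("codebuild")

    # Start a build of the existing project but override the build commands
    # for this run only.
    codebuild.start_build(
        projectName="my-build-project",
        buildspecOverride=(
            "version: 0.2\n"
            "phases:\n"
            "  build:\n"
            "    commands:\n"
            "      - mvn test\n"
        ),
    )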

Your company has developed a web application and is hosting it in an Amazon S3 bucket configured for static website hosting. The users can log in to this app using their Google/Facebook login accounts. The application is using the AWS SDK for JavaScript in the browser to access data stored in an Amazon DynamoDB table. How can you ensure that API keys for access to your data in DynamoDB are kept secure? ]A. Create an Amazon S3 role in IAM with access to the specific DynamoDB tables, and assign it to the bucket hosting your website. ]B. Configure S3 bucket tags with your AWS access keys for your bucket hosting your website so that the application can query them for access. ]C. Configure a web identity federation role within IAM to enable access to the correct DynamoDB resources and retrieve temporary credentials. ]D. Store AWS keys in global variables within your application and configure the application to use these credentials when making requests.

Answer - C With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option A is invalid since roles cannot be assigned to S3 buckets. Options B and D are invalid since AWS access keys should not be embedded in or exposed by the application.
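Although the application in the question uses the JavaScript SDK, the same token-exchange flow is sketched here in Python for illustration; the role ARN and token value are placeholder assumptions:

    import boto3

    sts = boto3.client("sts")

    # Exchange the token returned by Google/Facebook for temporary credentials
    # scoped to a role that only allows the required DynamoDB actions.
    creds = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/WebAppDynamoDBAccess",
        RoleSessionName="web-user-session",
        WebIdentityToken="<token returned by the identity provider>",
    )["Credentials"]

    dynamodb = boto3.client(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )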

You've created a local Java based Lambda function. You then package and upload the function to AWS. You try to run the function with the default settings, but the function does not run as expected. Which of the following could be the reasons for the issue? Choose 2 answers from the options given below. A. The name assigned to the function is not correct. B. The amount of CPU assigned to the function is not enough. C. The amount of memory assigned to the function is not enough. D. The timeout specified for the function is too short.

Answer - C and D Since the function is created with the default settings, the timeout for the function would be 3 seconds and the memory would default to 128 MB. For a Java-based function, this is usually too little, so you need to ensure the right settings are put in place for the function. Q: How are compute resources assigned to an AWS Lambda function? In the AWS Lambda resource model, you choose the amount of memory you want for your function, and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates approximately twice as much CPU power to your Lambda function as requesting 128MB of memory and half as much CPU power as choosing 512MB of memory. You can set your memory from 128 MB to 3,008 MB, in 64 MB increments. Option A is invalid since the name is not a reason for the function not working. Option B is invalid since the CPU is allocated by AWS automatically in proportion to the configured memory.
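For illustration, the two settings could be raised with boto3 as sketched below; the function name and the exact memory and timeout values are placeholder assumptions:

    import boto3

    lambda_client = boto3.client("lambda")

    # Raise the memory (CPU scales with it) and the timeout for the Java function.
    lambda_client.update_function_configuration(
        FunctionName="my-java-function",
        MemorySize=512,   # MB
        Timeout=30,       # seconds
    )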

Your team needs to develop an application that needs to make use of SQS queues. There is a requirement that when a message is added to the queue, the message is not visible for 5 minutes to consumers. How can you achieve this? Choose 2 answers from the options given below A. Increase the visibility timeout of the queue B. Implement long polling for the SQS queue C. Implement delay queues in AWS D. Change the message timer value for each individual message

Answer - C and D The AWS Documentation mentions the following: Delay queues let you postpone the delivery of new messages to a queue for a number of seconds. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer's DelaySeconds value instead of the delay queue's DelaySeconds value. Option A is invalid since the visibility timeout only hides a message after a consumer has received it, not when it first arrives in the queue. Option B is invalid since long polling is used to reduce the cost of using Amazon SQS by eliminating empty responses from SQS queues.
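A short boto3 sketch of both approaches; the queue name and message body are placeholder assumptions:

    import boto3

    sqs = boto3.client("sqs")

    # Delay queue: every message in the queue is invisible for 5 minutes.
    queue_url = sqs.create_queue(
        QueueName="orders-delay-queue",
        Attributes={"DelaySeconds": "300"},
    )["QueueUrl"]

    # Message timer: delay a single message for 5 minutes instead.
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody="order-12345",
                     DelaySeconds=300)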

Your mobile application includes a photo-sharing service that is expecting tens of thousands of users at launch. You will leverage Amazon Simple Storage Service (S3) for storage of the user Images, and you must decide how to authenticate and authorize your users for access to these images. You also need to manage the storage of these images. Which two of the following approaches should you use? Choose two answers from the options below A. Create an Amazon S3 bucket per user, and use your application to generate the S3 URL for the appropriate content. B. Use AWS Identity and Access Management (IAM) user accounts as your application-level user database, and offload the burden of authentication from your application code. C. Authenticate your users at the application level, and use AWS Security Token Service (STS)to grant token-based authorization to S3 objects. D. Authenticate your users at the application level, and send an SMS token message to the user. Create an Amazon S3 bucket with the same name as the SMS message token, and move the user's objects to that bucket. E. Use a key-based naming scheme comprised from the user IDs for all user objects in a single Amazon S3 bucket.

Answer - C and E The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). The token can then be used to grant access to the objects in S3. You can then control access to the objects using a key-based naming scheme derived from the user IDs. Option A is possible but becomes a maintenance overhead because of the number of buckets. Option B is invalid because using IAM users as an application-level user database is not a good security practice. Option D is invalid because SMS tokens are not an efficient mechanism for this requirement.

You have docker containers which are going to be deployed in the AWS Elastic Container Service. You need to ensure that the underlying EC2 instances hosting the containers cannot access each other (since containers may be used by different customers). How can you accomplish this? ]A. Place IAM Roles for the underlying EC2 Instances ]B. Place the access keys in the Docker containers ]C. Place the access keys in the EC2 Instances ]D. Configure the Security Groups of the instances to allow only required traffic.

Answer - D Q: How does Amazon ECS isolate containers belonging to different customers? Amazon ECS schedules containers for execution on customer-controlled Amazon EC2 instances or with AWS Fargate and builds on the same isolation controls and compliance that are available for EC2 customers. Your compute instances are located in a Virtual Private Cloud (VPC) with an IP range that you specify. You decide which instances are exposed to the Internet and which remain private. Your EC2 instances use an IAM role to access the ECS service. Your ECS tasks use an IAM role to access services and resources. Security Groups and network ACLs allow you to control inbound and outbound network access to and from your instances. You can connect your existing IT infrastructure to resources in your VPC using industry-standard encrypted IPsec VPN connections. You can provision your EC2 resources as Dedicated Instances. Dedicated Instances are Amazon EC2 Instances that run on hardware dedicated to a single customer for additional isolation. Option A is incorrect since the roles need to be assigned at the task level. Options B and C are incorrect since embedding access keys is not an ideal security practice.

An application is publishing a custom CloudWatch metric any time an HTTP 504 error appears in the application error logs. These errors are being received intermittently. There is a CloudWatch Alarm for this metric and the Developer would like the alarm to trigger only if it breaches two evaluation periods or more. What should be done to meet these requirements? ]A. Update the CloudWatch Alarm to send a custom notification depending on results ]B. Publish the value zero whenever there are no "HTTP 504" errors ]C. Use high - resolution metrics to get data pushed to CloudWatch more frequently ]D. Aggregate the data before sending it to CloudWatch by using statistic sets.

Answer - D Since the errors are received intermittently, it's better to collect and aggregate the results at regular intervals and then send the aggregated data to CloudWatch as statistic sets. Option A is incorrect since there is no mention of any special kind of notification being required. Option B is incorrect since you don't need to publish a zero value; a value is only published when an error occurs. Option C is incorrect since there is no mention of the required frequency, so we don't know whether high-resolution metrics are needed.
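For illustration, a statistic set can be published with boto3 as below; the namespace, metric name and sample numbers are placeholder assumptions:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Aggregate the intermittent 504 counts locally, then publish the summary
    # as a statistic set in a single call.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "HTTP504Errors",
            "StatisticValues": {
                "SampleCount": 4,
                "Sum": 4,
                "Minimum": 1,
                "Maximum": 1,
            },
            "Unit": "Count",
        }],
    )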

Your team has been instructed to develop a completely new solution on AWS. Currently you have a limitation on the tools available to manage the complete lifecycle of the project. Which of the following AWS services could help you handle all aspects of development and deployment? ]A. AWS CodePipeline ]B. AWS CodeBuild ]C. AWS CodeCommit ]D. AWS CodeStar

Answer - D The AWS Documentation mentions the following AWS CodeStar is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project creates and integrates AWS services for your project development toolchain. Depending on your choice of AWS CodeStar project template, that toolchain might include source control, build, deployment, virtual servers or serverless resources, and more. AWS CodeStar also manages the permissions required for project users (called team members). By adding users as team members to an AWS CodeStar project, project owners can quickly and simply grant each team member role-appropriate access to a project and its resources. Option A is incorrect since this service is used for managing CI/CD pipelines Option B is incorrect since this service is used for managing code builds Option C is incorrect since this service is used for managing source code versioning repositories

Your development team has a requirement for a message to be consumed by multiple consumers in an application. You also need to ensure that the metadata can be sent along with the messages. Which of the following would you implement for this purpose? ]A. Implement as SNS topic and use different endpoints for the different types of metadata ]B. Use SQS queues and create different queues for the different type of metadata ]C. Use SQS queues and make use of message attributes ]D. Use an SNS topic and add message attributes to the messages

Answer - D The AWS Documentation mentions the following: Amazon SNS supports delivery of message attributes which let you provide structured metadata items (such as timestamps, geospatial data, signatures, and identifiers) about the message. Each message can have up to 10 attributes. https://docs.aws.amazon.com/sns/latest/dg/sns-message-attributes.html Option A is incorrect because the metadata should travel with the message as message attributes, not be modelled as separate endpoints. Options B and C are incorrect because a standard SQS message is consumed by a single consumer; since the same message must be consumed by multiple consumers, an SNS topic (fan-out) with message attributes is the right choice.
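A hedged boto3 sketch of publishing to a topic with message attributes; the topic ARN, message body and attribute values are placeholder assumptions:

    import boto3

    sns = boto3.client("sns")

    # Publish once to the topic; every subscribed consumer receives the message,
    # and the metadata travels in the message attributes.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
        Message="Order 12345 created",
        MessageAttributes={
            "event_timestamp": {"DataType": "String",
                                "StringValue": "2019-03-14T15:00:00Z"},
            "signature": {"DataType": "String", "StringValue": "abc123"},
        },
    )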

You're a developer at a company that needs to deploy an application using Elastic Beanstalk. There is a requirement to place a healthcheck.config file for the environment. In which of the following location should this config file be placed to ensure it is part of the elastic beanstalk environment? ]A. In the application root folder ]B. In the config folder ]C. In the packages folder ]D. In the .ebextensions folder

Answer - D The AWS Documentation mentions the following Elastic Beanstalk supports two methods of saving configuration option settings. Configuration files in YAML or JSON format can be included in your application's source code in a directory named .ebextensions and deployed as part of your application source bundle. You create and manage configuration files locally. All other options are incorrect because the AWS documentation specifically mentions that you need to place custom configuration files in the .ebextensions folder

You have created a pipeline in CodePipeline with Source, Build and Staging stages. What happens if there is a failure detected in the "Build" stage? ]A. A rollback will happen at the "Source" stage. ]B. The "Build" step will be attempted again. ]C. The "Build" step will be skipped and the "Staging" step will start. ]D. The entire process will halt.

Answer - D The AWS Documentation mentions the following In AWS CodePipeline, an action is a task performed on an artifact in a stage. If an action or a set of parallel actions is not completed successfully, the pipeline stops running. Options A, B and C are incorrect since the default action will be that the entire pipeline will be stopped if the build does not succeed.

You are currently managing deployments for a Lambda application via Code Deploy. You have a new version of the Lambda function in place. You have been told that all traffic needs to be shifted instantaneously to the new function. Which deployment technique would you employ in CodeDeploy? ]A. Canary ]B. Gradual ]C. Linear ]D. All-at-Once

Answer - D The AWS Documentation mentions the following There are three ways traffic can shift during a deployment: Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment. Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment. All-at-once: All traffic is shifted from the original Lambda function to the updated Lambda function version at once.

You have developed an application that is putting custom metrics into Cloudwatch. You need to generate alarms on a 10 second interval based on the published metrics. Which of the following needs to be done to fulfil this requirement? ]A. Enable basic monitoring ]B. Enable detailed monitoring ]C. Create standard resolution metrics ]D. Create high resolution metrics

Answer - D This is mentioned in the AWS Documentation: Using the existing PutMetricData API, you can now publish Custom Metrics down to 1-second resolution. This gives you more immediate visibility and greater granularity into the state and performance of your custom applications, such as observing short-lived spikes and functions. In addition, you can also alert sooner with High-Resolution Alarms, as frequently as 10-second periods. High-Resolution Alarms allow you to react and take actions faster and support the same actions available today with standard 1-minute alarms. You can add these high-resolution metrics and alarms widgets to your Dashboards giving you easy observability of critical components. Additionally, if you use collectd to gather your metrics, you can publish these metrics to CloudWatch using our updated collectd plugin supporting high-resolution periods down to 1-second. Options A and B are incorrect since basic and detailed monitoring apply to the metrics that AWS services publish (such as EC2), not to custom metrics. Option C is incorrect since for such a high degree of resolution you need to use high-resolution metrics.
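A hedged boto3 sketch of publishing a high-resolution metric and alarming on a 10-second period; the namespace, metric name and threshold are placeholder assumptions:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish the custom metric at 1-second resolution.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{"MetricName": "QueueDepth", "Value": 42.0,
                     "StorageResolution": 1}],
    )

    # A 10-second alarm period is only allowed for high-resolution metrics.
    cloudwatch.put_metric_alarm(
        AlarmName="QueueDepthHigh", Namespace="MyApp", MetricName="QueueDepth",
        Statistic="Average", Period=10, EvaluationPeriods=1,
        Threshold=100.0, ComparisonOperator="GreaterThanThreshold",
    )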

A company is writing a Lambda function that will run in multiple stages, such as dev, test and production. The function is dependent upon several external services, and it must call different endpoints for these services based on the function's deployment stage. Which Lambda feature will enable the developer to ensure that the code references the correct endpoints when running in each stage? ]A. Tagging ]B. Concurrency ]C. Aliases ]D. Environment variables

Answer - D You can create different environment variables in the Lambda function that point to the correct endpoint for each stage (for example, a different database host per stage), as illustrated in the AWS Documentation. Option A is invalid since tagging can only be used to add metadata to the function. Option B is invalid since concurrency is used for managing the number of parallel executions. Option C is invalid since aliases are used for managing the different versions of your Lambda function.

You're the lead developer for a company that uses AWS KMS to decrypt passwords from an AWS RDS MySQL database using an asymmetric CMK. While decrypting the data you receive an InvalidCiphertextException error which causes the application to fail. You have made sure that the CMK ID used is accurate. What could have caused this error? ]A. EncryptionAlgorithm set to default value ]B. EncryptionContext is empty ]C. GrantTokens is an empty array ]D. KeyId is empty

Answer: A Option A is CORRECT because the default value of the EncryptionAlgorithm parameter applies to symmetric CMKs only. You must record the algorithm used during encryption and provide exactly the same algorithm during decryption with an asymmetric CMK; for symmetric CMKs the default value would work. Option B is incorrect as EncryptionContext is not a required parameter, so leaving it empty will not cause this error. Option C is incorrect as GrantTokens is not a required parameter; it takes values in an array and will not cause the error. Option D is incorrect as it is mentioned in the question that the CMK ID is verified and is accurate.
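A hedged boto3 sketch of decrypting with an asymmetric CMK; the file name, key ARN and algorithm are placeholder assumptions and must match what was used at encryption time:

    import boto3

    kms = boto3.client("kms")

    with open("password.enc", "rb") as f:   # ciphertext produced earlier
        ciphertext_blob = f.read()

    # For an asymmetric CMK, both the key ID and the algorithm that was used
    # at encryption time must be supplied explicitly.
    plaintext = kms.decrypt(
        CiphertextBlob=ciphertext_blob,
        KeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        EncryptionAlgorithm="RSAES_OAEP_SHA_256",
    )["Plaintext"]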

You have many Lambda functions that read data from an AWS Kinesis stream. Your colleague informs you that your application has too many Lambda function invocations, which is increasing latency for your application. How can you minimize latency and increase the read throughput of your function efficiently? ]A. Create a data stream consumer ]B. Reduce the number of functions and let each function do the work of 2 functions ]C. Request AWS to increase invocation frequency limit. ]D. Configure a dead letter queue

Answer: A Option A is CORRECT as creating a data stream consumer (enhanced fan-out) gives the function a dedicated connection to each shard, which helps maximize throughput and minimize latency; by default a Lambda function polls each shard at a base rate of once per second. Option B is incorrect as this option will increase the function execution time, which will require longer timeouts and possibly more memory; this is not an efficient solution. Option C is incorrect as there is no invocation frequency limit to increase for stream-based event sources. Option D is incorrect as a dead letter queue helps when your Lambda functions are overwhelmed and are missing processing of the data coming from the stream, which is not the case in this scenario.

You are an API developer for a large manufacturing company. You have developed an API resource that adds new products to the distributor's inventory using a POST HTTP request. It includes an Origin header and accepts application/x-www-form-urlencoded as the request content type. Which response header will allow access to this resource from another origin? ]A. Access-Control-Request-Origin ]B. Access-Control-Request-Method ]C. Access-Control-Request-Headers ]D. All of the above

Answer: A Option A is CORRECT as the POST request described qualifies as a simple cross-origin request, so returning the requesting origin in the response is sufficient for the resource to be accessed from other origins. Options B and C are incorrect as these headers are part of the preflight exchange used to enable CORS support for non-simple (complex) HTTP requests; on their own they do not allow access to the resource. Option D is incorrect but is the next closest answer; the question asks for a single header, not all of them.

You are using AWS S3 to encrypt and store large documents for your application. You have been asked to use AWS Lambda function for this purpose by your Technical Architect. You have determined the use of AWS KMS for encryption as your data is stored and managed in AWS platform. Which CMK will you use for this purpose? ]A. Asymmetric CMK ]B. Symmetric CMK ]C. RSA CMK ]D. ECC CMK

Answer: B Option A is incorrect as AWS services that integrate with AWS KMS do not support the use of asymmetric keys. Option B is CORRECT as the documentation mentions that symmetric keys never leave KMS unencrypted, and symmetric CMKs are the key type used for integration with other AWS services. Option C is incorrect as an RSA CMK is a type of asymmetric key, which cannot be used with other AWS services for integration. Option D is incorrect as an ECC CMK is a type of asymmetric key, which cannot be used with other AWS services for integration.

You are a developer at a company that has built a serverless application that allows users to get NBA stats. The application consists of three different levels of subscription - Free, Premium and Enterprise - implemented using stages in API Gateway. The Free level allows developers to get access to stats for up to 5 games per player, and the Premium and Enterprise levels get full access to the database. Your manager has asked you to limit the Free level to 100 concurrent requests at any given point in time. How can this be achieved? ]A. Under usage plan for the stage change Burst to 50 and Rate to 50 ]B. Under usage plan for the stage change Burst to 100 and Rate to 100 ]C. Under usage plan for the stage change Burst to 50 and Rate to 100 ]D. All of the above

Answer: B Option A is incorrect as changing Burst to 50 and Rate to 50 means that only 50 concurrent requests can be made and processed at any given point in time. Option B is CORRECT as changing Burst to 100 and Rate to 100 allows 100 concurrent requests to be made and processed at any given point in time. Option C is incorrect as changing Burst to 50 and Rate to 100 allows only 50 concurrent requests to be made even though 100 per second can be processed; the remaining requests receive a 429 error and have to be retried after an interval. Option D is incorrect as not all options are correct.

You work for a large bank and are tasked to build an application that allows 30 large customers to perform more than 1000 online transactions per second swiftly and collectively in the us-east-1 region. The size of each transaction is around 5 KB. Your manager has told you to ensure data is encrypted end-to-end, and you decide to use AWS KMS to meet your requirements. While using the SDK and testing, you see a ThrottlingException error. How will you deliver the application with optimum performance metrics? ]A. Send data directly to AWS KMS for encryption ]B. Use LocalCryptoMaterialsCache operation ]C. Use RequestServiceQuotaIncrease operation ]D. Use AWS SQS to queue all requests made to AWS KMS

Answer: B Option A is incorrect as calling KMS to encrypt the data directly does not reduce the number of KMS requests that cause the ThrottlingException, and the KMS Encrypt API only accepts up to 4 KB of data per request. Option B is CORRECT as LocalCryptoMaterialsCache provides a configurable in-memory cache for data keys, so data keys can be reused across operations and far fewer calls are made to KMS. Option C is incorrect as increasing the service quota treats the symptom rather than making the application's use of KMS more efficient. Option D is incorrect as queuing the requests will not improve the encryption and decryption process and will not improve the performance metrics of the application.

A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application which needs to handle thousands of requests. Currently, the dev environment is running on a t1.micro instance and the developer needs to use an m4.large instance for the production environment. How can this change be implemented most effectively? ]A. Use CloudFormation to migrate the Amazon EC2 instance type of the environment from t1 micro to m4.large. ]B. Create a saved configuration file in Amazon S3 with the instance type as m4.large and use the same during environment creation. ]C. Change the instance type to m4.large in the configuration details page of the Create New Environment page. ]D. Change the instance type value for the environment to m4.large by using update autoscaling group CLI command.

Answer: B The Elastic Beanstalk console and EB CLI set configuration options when you create an environment. You can also set configuration options in saved configurations and configuration files. If the same option is set in multiple locations, the value used is determined by the order of precedence. Configuration option settings can be composed in text format and saved prior to environment creation, applied during environment creation using any supported client, and added, modified or removed after environment creation. During environment creation, configuration options are applied from multiple sources with the following precedence, from highest to lowest: (1) Settings applied directly to the environment - settings specified during a create environment or update environment operation on the Elastic Beanstalk API by any client, including the AWS Management Console, EB CLI, AWS CLI, and SDKs; the AWS Management Console and EB CLI also apply recommended values for some options at this level unless overridden. (2) Saved configurations - settings for any options that are not applied directly to the environment are loaded from a saved configuration, if specified. (3) Configuration files (.ebextensions) - settings for any options that are not applied directly to the environment, and also not specified in a saved configuration, are loaded from configuration files in the .ebextensions folder at the root of the application source bundle; configuration files are executed in alphabetical order, for example .ebextensions/01run.config is executed before .ebextensions/02do.config. (4) Default values - if a configuration option has a default value, it only applies when the option is not set at any of the above levels. If the same configuration option is defined in more than one location, the setting with the highest precedence is applied. When a setting is applied from a saved configuration or applied directly to the environment, the setting is stored as part of the environment's configuration; these settings can be removed with the AWS CLI or with the EB CLI. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version. If a setting applied with one of the other methods is removed, the same setting will be loaded from configuration files in the source bundle. Option A is incorrect since the environment is already managed by the Elastic Beanstalk service and we don't need CloudFormation for this. Option C is incorrect because manually setting the instance type on the Create New Environment page is a one-off, manual change; re-using the saved configuration file in S3 is the more effective and repeatable option here. Option D is incorrect because hard-coding an instance type into an Auto Scaling group is not a best practice for applications with multiple environments; since we are dealing with both a developer environment and a production environment, it is safer to load from a saved configuration.

You are deploying a new Lambda function that needs to use a credit card number to make payments for purchases by internal company employees. This credit card information is to be used by default when an employee doesn't provide a purchase order number or another credit card number. What is the most secure way to store this information? ]A. Environment variable ]B. Environment variable with custom KMS key ]C. Encrypt information in environment variable before deploying ]D. Assign a variable with credit card information

Answer: C As per the AWS documentation, it is strongly recommended to encrypt sensitive information in environment variables before deploying the function. Option A is incorrect as default encryption occurs after deployment, not before it. Option B is incorrect as this option does not specify when the encryption occurs. Option C is CORRECT: as per the AWS documentation, variables with sensitive information should be encrypted before they are stored and then decrypted when the function is invoked, making the value available to the Lambda code during execution. Option D is incorrect as this would defeat separation of concerns: when the credit card information needs to change, there should be no need to update the application logic. Furthermore, this information is used as a default, so it mainly serves as a fallback.
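As a rough sketch of option C (the key alias and variable name are hypothetical), the card number would be encrypted with a KMS encrypt call before deployment, stored as base64 ciphertext in an environment variable, and decrypted inside the handler:

import base64
import os
import boto3

kms = boto3.client("kms")

def lambda_handler(event, context):
    # DEFAULT_CARD holds the base64 ciphertext produced by a kms encrypt
    # call made before the function was deployed.
    card_number = kms.decrypt(
        CiphertextBlob=base64.b64decode(os.environ["DEFAULT_CARD"])
    )["Plaintext"].decode()
    # use card_number as the default payment method when no other is supplied
    return {"status": "ok"}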

You are a developer at a company that has built a serverless application allowing users to make payments online. The application consists of several Lambda functions and a DynamoDB table, implemented using a SAM template. However, when users want to see their transactions and update their payment information, they are unable to do so. After debugging, you discover that the Lambda functions don't have permissions to access records from the DynamoDB table. How will you resolve this issue using a tighter and more secure AWS managed policy? ]A. Create an IAM role using AWS Managed Policy AmazonDynamoDBFullAccess and attach to Lambda functions. ]B. Use AmazonDynamoDBFullAccess policy template in the SAM template ]C. Use DynamoDBCrudPolicy policy template in the SAM template ]D. Create an IAM role using AWS Managed Policy AmazonDynamoDBReadOnlyAccess and attach to Lambda functions

Answer: C Option A is incorrect as providing full access to DynamoDB defeats the AWS best practice of least privilege. There is no need to give access to all DynamoDB table APIs across all regions; the requirement is only to read and update one table. Furthermore, setting up an IAM role requires a special acknowledgement to deploy the application, which is not very efficient for maintainability of the application. Option B is incorrect as providing full access to DynamoDB defeats the AWS best practice of least privilege. There is no need to give access to all DynamoDB table APIs across all regions; the requirement is only to read and update one table. Option C is CORRECT as DynamoDBCrudPolicy gives create, read, update and delete permissions on a single DynamoDB table, which is tighter and more secure, in line with the best practice of least privilege. It is also managed by AWS, which makes it AWS's responsibility to maintain the policy. Option D is incorrect as providing read-only access to DynamoDB doesn't meet the requirement to read and update a table. Furthermore, setting up an IAM role requires a special acknowledgement to deploy the application, which is not very efficient for maintainability of the application.
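For reference, a SAM policy template is attached under the function's Policies property. A minimal sketch (the logical IDs and runtime are hypothetical) could look like this:

Resources:
  PaymentsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref PaymentsTable   # grants CRUD access to this table only
  PaymentsTable:
    Type: AWS::Serverless::SimpleTable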

You are deploying a mobile application for an ecommerce company. They require you to change themes based on the device type: Android devices see a green theme and iOS devices see a blue one. You are advised to use AWS Lambda for your mobile backend deployment. How could this be achieved efficiently? ]A. Use a REST client to pass device info to API Gateway that invokes AWS Lambda function ]B. Different environment variables for Android and iOS ]C. Use AWS Mobile SDK and use the "context" object ]D. Use the "event" object to determine which device invoked the function.

Answer: C Option A is incorrect as this is not very efficient; it requires writing code at the application level to send device data to the Lambda function to process. Option B is incorrect as environment variables are used for information that is fairly constant and available at runtime. Option C is CORRECT: as per the AWS Lambda FAQs, when the function is invoked through the AWS Mobile SDK, the context object gives the Lambda function automatic access to device and application information, which can then be used to change themes. Option D is incorrect as the event object will not have the device information required.
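As a sketch of how the function might branch on the device platform when invoked through the AWS Mobile SDK (the theme values are hypothetical):

def lambda_handler(event, context):
    # client_context is populated automatically when the function is
    # invoked via the AWS Mobile SDK.
    env = context.client_context.env if context.client_context else {}
    platform = env.get("platform", "")
    theme = "green" if platform.lower() == "android" else "blue"
    return {"theme": theme}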

You work for a travel company that books accommodation for customers. The company has decided to release a new feature that will allow customers to book accommodation in real-time through their API. As their developer, you have been asked to deploy this new feature. How will you test this new API feature with minimal impact to customers? ]A. Create a stage and inform your pilot customers to change their endpoint ]B. Create a stage and inform your pilot customers to change their endpoint and attach a usage plan ]C. Create a stage and enable canary release ]D. Create a stage and promote a canary release

Answer: C Option A is incorrect because setting up a stage and informing your customers to change their endpoint will affect them by adding downtime on their side. Option B is incorrect for the same reason; adding a usage plan only helps throttle requests and will still affect the customers. Option C is CORRECT as enabling a canary release allows the developer to route a percentage of traffic to the new API in a random order, ensuring no single customer is affected for long. A canary release is part of a stage in the API Gateway console. Option D is incorrect because promoting a canary release makes it the base stage deployment, leaving no delta between the canary and the base stage. If this option is selected, all customers will be affected.
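A canary can also be configured outside the console. As a rough boto3 sketch (the API ID and stage name are hypothetical), a new deployment can send 10% of traffic to the canary:

import boto3

apigateway = boto3.client("apigateway")

apigateway.create_deployment(
    restApiId="a1b2c3d4e5",      # hypothetical REST API ID
    stageName="prod",
    canarySettings={
        "percentTraffic": 10.0,  # 10% of requests hit the new deployment
        "useStageCache": False,
    },
)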

Your team is planning on creating a DynamoDB table to use with their application. They are planning to set the initial Read Capacity to 5 units with eventually consistent read operations. You want to read 5 items from the table per second, where each item is 2 KB in size. What is the total size of data in KB that can be read using the above read throughput capacity? ]A. 1 ]B. 5 ]C. 10 ]D. 20

Answer: D For an item size of up to 4 KB, one read capacity unit represents one strongly consistent read per second OR two eventually consistent reads per second. We want to read 5 items from the table per second, and we need to calculate the total size of the data in KB. The question states the size of each item is 2 KB, therefore the total size is 5 items * 2 KB = 10 KB. Since DynamoDB is using eventually consistent reads, we multiply the above result by 2. Therefore, the total size is 10 KB * 2 = 20 KB, and the correct answer is D.

You are an API developer for a large corporation and have been asked to investigate a latency problem of a public API in API Gateway. Upon investigation you realise that all clients making requests to the API are invalidating the cache using Cache-Control header, which has caused this latency. How will you resolve the latency problem with the least interruption to any services? ]A. Flush entire cache ]B. Disable API caching ]C. Attach an InvalidateCache policy to the resource ]D. Check Require Authorization box

Answer: D Option A is incorrect as flushing the entire cache will not help solve the latency problem, because all requests will be forwarded to the integration backend until the cache has been refilled. Option B is incorrect as disabling caching altogether will not reduce latency, since all requests will be forwarded to the integration backend. Option C is a close answer but still incorrect in this context: attaching an InvalidateCache policy allows only clients with the relevant IAM permissions to invalidate the cache and helps restrict which resources can be invalidated. The problem to solve here is latency caused by all clients being permitted to invalidate the cache, and the key point is that this is meant to be a public API, so authorized non-IAM users should also be able to invalidate the cache if needed. Option D is CORRECT as checking the Require Authorization box ensures that only authorized requests can invalidate the cache.

You have used a CMK to create a data key using the GenerateDataKey operation to encrypt your application's data using envelope encryption. You have been asked to provide temporary secured access to external auditors so that they can audit the data stored. These auditors should be able to immediately gain access to your data. What is the most effective and efficient way of achieving this? ]A. Download all data and send data via a secure courier ]B. Use key policies to assign decrypt access to auditors ]C. Use grants to assign decrypt access to auditors ]D. Use grant tokens after using grants with the decrypt and re-encrypt operation

Answer: D Option A is incorrect as it is not very efficient; it requires downloading all the data and then physically transporting it, which exposes the data to theft and foul play and could lead to tampered data, which is not acceptable for auditors. Option B is not ideal as key policies are used for providing static permissions to data. Option C is the next closest answer, as grants are apt for providing temporary access, but since grants are eventually consistent the access might not be immediate. Option D is the CORRECT answer, as using the grant tokens received from the CreateGrant request mitigates the potential delay and grants immediate access. The Decrypt operation is needed for the auditors to decrypt and re-encrypt this data.
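A rough boto3 sketch of option D (the key and role ARNs are hypothetical): create a grant for the auditors' principal and use the returned grant token so the new permission can be used immediately, before the grant becomes eventually consistent:

import boto3

kms = boto3.client("kms")

grant = kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical CMK
    GranteePrincipal="arn:aws:iam::111122223333:role/AuditorRole",  # hypothetical auditor role
    Operations=["Decrypt", "ReEncryptFrom", "ReEncryptTo"],
)

# Ciphertext of the data key produced earlier by GenerateDataKey (placeholder file).
encrypted_data_key = open("data_key.bin", "rb").read()

# The grant token allows the permission to be used right away.
plaintext_key = kms.decrypt(
    CiphertextBlob=encrypted_data_key,
    GrantTokens=[grant["GrantToken"]],
)["Plaintext"]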

You have a stock market trading application and it sends real-time data to AWS Kinesis which is then connected to a Lambda function to process and sort the information before it saves to a DynamoDB table. This table then is consumed by customers via a dashboard. As soon as the market opens your customers are complaining that not all data is delivered to them. Which Lambda CloudWatch metric should you look at first in order to resolve this problem? ]A. Throttles ]B. Dwell time ]C. ConcurrentExecutions ]D. IteratorAge

Answer: D Option A is incorrect as Throttles measures the number of Lambda function invocation attempts that were throttled when the invocation rate exceeds the concurrency limits. In this case it is not the first metric to look at, as failed invocations will often trigger retry attempts. Option B is incorrect as this option is not a CloudWatch metric but a metric that can be observed via X-Ray. Option C is incorrect as this metric is an aggregated value for all Lambda functions, making it difficult to find data per function. Option D is CORRECT as IteratorAge measures the age of the stream records processed by the function. Amazon says, "Measures the age of the last record for each batch of records processed. Age is the difference between the time Lambda received the batch, and the time the last record in the batch was written to the stream." A growing iterator age can then lead to exceeding the Kinesis retention period. This should be the first metric to look at, as it indicates the stream records are not being processed as fast as they are being generated. When the stock market opens it's generally one of the busiest times of the day and could cause a spike in records.
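As a quick sketch of pulling this metric with boto3 (the function name is hypothetical):

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="IteratorAge",
    Dimensions=[{"Name": "FunctionName", "Value": "process-market-data"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["Maximum"],  # IteratorAge is reported in milliseconds
)
print(stats["Datapoints"])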

You have a SAM template that is used to deploy a Lambda function, and you are now working on a new version. Your manager has asked you to instantly switch traffic once you have built and tested the new version. What is the most efficient, effortless and simple way to achieve this? ]A. Use Qualifiers in the Management Console to select the version which you built and tested ]B. Point an alias, PROD to the version which you built and tested ]C. Set DeploymentPreference property of the function resource ]D. Set AutoPublishAlias property of the function resource

Answer: D Option A is incorrect because using the Management Console might be tempting for a one-off deployment but is inefficient when this needs to be done several times. Option B is incorrect because pointing an alias will not instantly switch traffic, as it doesn't state that the PROD alias is actually deployed. Option C is incorrect, although it is the next closest answer; setting DeploymentPreference can also switch traffic immediately, but it is intended for more complex deployments. With this property you can specify alarms to be monitored, hooks to run before and after traffic shifting, and the traffic-shifting type to use. Option D is CORRECT as the AutoPublishAlias property will create a new alias, create and publish a new version of the Lambda code, point the alias to this version, and point all event sources to this alias. The question asks for the most effortless and efficient way to achieve this, and all of the above actions are performed simply by setting the AutoPublishAlias property with an alias name. Also, since you are asked to instantly switch traffic to the new version, an automated approach will always be the most efficient.
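A minimal SAM sketch of option D (the resource name, handler and runtime are hypothetical); publishing a new version and repointing the alias then happen automatically on every deployment:

Resources:
  CheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: src/
      AutoPublishAlias: live   # creates/updates the "live" alias on each deploy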

You are planning to build a serverless application with microservices using AWS Code Build. This application will be using AWS Lambda. How will you specify AWS SAM CLI & Application package within the build spec file of an AWS CodeBuild? ]A. Use aws-sam-cli in the install phase & sam package in the post_build phase of the BuildSpec file. ]B. Use aws-sam-cli in the pre-build phase & sam package in the post_build phase of the BuildSpec file. ]C. Use aws-sam-cli in the install phase & sam package in the pre_build phase of the BuildSpec file. ]D. Use aws-sam-cli in the pre-build phase & sam package in the install phase of the BuildSpec file.

Correct Answer - A The AWS SAM CLI needs to be installed during the install phase of AWS CodeBuild, while the sam package command needs to be specified in the post_build section, which will create a zip file of the code and upload it to Amazon S3. Option B is incorrect as aws-sam-cli needs to be installed in the install phase and not in the pre_build phase. Option C is incorrect as the sam package command needs to be specified in the post_build section and not in the pre_build phase. Option D is incorrect as aws-sam-cli needs to be installed in the install phase and not in the pre_build phase, and sam package needs to be specified in the post_build section and not in the install phase.
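A trimmed-down buildspec illustrating this split (the bucket name and runtime version are hypothetical):

version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.9
    commands:
      - pip install aws-sam-cli            # install the SAM CLI
  build:
    commands:
      - sam build
  post_build:
    commands:
      - sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
artifacts:
  files:
    - packaged.yaml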

A distributed application is being deployed using non-supported platforms & AWS Elastic Beanstalk. There is an additional requirement to have AWS X-Ray integration. Which of the following options can be used to integrate AWS X-Ray daemon with AWS Elastic Beanstalk in this scenario? A. Download AWS X-Ray daemon from Amazon S3 & run with configuration file. B. Install AWS X-Ray agent in AWS Elastic beanstalk. C. Use AWS Elastic Beanstalk console to enable AWS X-Ray integration. D. Include language specific AWS X-Ray libraries in application code.

Correct Answer - A For non-supported platforms, the AWS X-Ray daemon can be integrated with AWS Elastic Beanstalk by downloading it from the Amazon S3 bucket and running it with a configuration file. Option B is incorrect as the AWS X-Ray agent can be used for EC2/ECS instances and not for AWS Elastic Beanstalk. Option C is incorrect as using the AWS Elastic Beanstalk console to integrate AWS X-Ray works only for supported platforms. Option D is incorrect as language-specific AWS X-Ray libraries in application code can be used for supported platforms and not for non-supported platforms.

You have a legacy application which processes messages from SQS queues. The application uses a single thread to poll multiple queues. Which of the following polling options will be the best to avoid latency in processing messages? ]A. Use Short Polling with default visibility timeout values. ]B. Use Long Polling with higher visibility timeout values. ]C. Use Long Polling with lower visibility timeout values. ]D. Use Short Polling with higher visibility timeout values.

Correct Answer - A In the above case, the application is polling multiple queues with a single thread. Long polling will wait for a message, or for the wait-time timeout, on each queue, which may delay the processing of messages in the other queues that already have messages to be processed. So, the correct option is to use short polling with default timeout values. Options B & C are incorrect as using long polling will delay the processing of messages in the other queues. Option D is incorrect as using short polling with a higher visibility timeout will not help in reducing the latency while processing messages.
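A minimal sketch of the single-threaded poller using short polling (the queue URLs are hypothetical):

import boto3

sqs = boto3.client("sqs")
queue_urls = [
    "https://sqs.us-east-1.amazonaws.com/111122223333/orders",    # hypothetical
    "https://sqs.us-east-1.amazonaws.com/111122223333/invoices",  # hypothetical
]

while True:
    for url in queue_urls:
        resp = sqs.receive_message(
            QueueUrl=url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=0,  # short polling: return immediately, even if the queue is empty
        )
        for message in resp.get("Messages", []):
            # process the message, then delete it
            sqs.delete_message(QueueUrl=url, ReceiptHandle=message["ReceiptHandle"])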

You have configured a Sampling Rule with reservoir size as 60 & fixed rate to 20%. There are 200 requests per second matching the rule defined. How many requests will be sampled per second? ]A. 88 requests per second ]B. 60 requests per second. ]C. 40 requests per second. ]D. 120 requests per second.

Correct Answer - A Suppose we specify a reservoir size of 60 and a fixed rate of 20%. If the application receives 200 matching requests in a second, the total number of requests that will be traced or sampled is: reservoir size + fixed rate % of (total requests - reservoir size) = 60 + 20% * (200 - 60) = 60 + 28 = 88. Options B, C & D are incorrect as these values do not match the given reservoir size and fixed sampling rate.

There is a new Lambda Function developed using AWS CloudFormation Templates. Which of the following attributes can be used to test new Function with migrating 5% of traffic to new version? ]A. aws lambda create-alias --name alias name --function-name function-name \--routing-config AdditionalVersionWeights={"2"=0.05} ]B. aws lambda create -alias --name alias name --function-name function-name \--routing-config AdditionalVersionWeights={"2"=5} ]C. aws lambda create -alias --name alias name --function-name function-name \--routing-config AdditionalVersionWeights={"2"=0.5} ]D. aws lambda create -alias --name alias name --function-name function-name \--routing-config AdditionalVersionWeights={"2"=5%}

Correct Answer - A The correct way to create an alias for a Lambda function is to use "aws lambda create-alias --function-name my-function --name alias-name --function-version version-number --description description". The routing-config parameter of the Lambda alias allows it to point to two different versions of the Lambda function and determine what percentage of incoming traffic is sent to each version. In the above case, a new version will be created to test the new function with 5% of the traffic, while the original version will be used for the remaining 95% of traffic. Option B is incorrect as, since 5% of traffic needs to shift to the new function, the routing-config parameter should be 0.05 and not 5. Option C is incorrect as, since 5% of traffic needs to shift to the new function, the routing-config parameter should be 0.05 and not 0.5. Option D is incorrect as, since 5% of traffic needs to shift to the new function, the routing-config parameter should be 0.05 and not 5%.

Which of the following is the best way to update an S3 notification configuration, when a change in the lambda version needs to be referred by an S3 bucket after new objects are created in the bucket? ]A. Specify Lambda Alias ARN in the notification configuration. ]B. Specify Lambda Function ARN in the notification configuration. ]C. Specify Lambda Qualified ARN in the notification configuration. ]D. Specify Lambda $LATEST ARN in the notification configuration.

Correct Answer - A When a Lambda alias ARN is used in the notification configuration and a new version of the Lambda function is published, you just need to update the alias to point to the new version; no changes are required on the Amazon S3 bucket. Option B is incorrect as, when a Lambda function ARN is used in the notification configuration, whenever a new version of the Lambda function is created you will need to update the ARN in the notification configuration to point to the latest version. Option C is incorrect as, when a Lambda qualified ARN is used in the notification configuration, whenever a new version of the Lambda function is created you will need to update the ARN in the notification configuration to point to the latest version. Option D is incorrect as there is no such thing as a $LATEST ARN; the $LATEST version has two ARNs associated with it, the qualified ARN and the unqualified ARN.

Developer Team is working on an event driven application which needs to process data stored in Amazon S3 bucket & need to notify multiple subscribers using Amazon SNS. For this a single topic is created in Amazon SNS & messages are pushed to multiple Amazon SQS queues subscribed to this topic. Which of the following is a correct statement with regards to messages sent to Amazon SQS queue? A. Each Queue will receive an identical message sent to that topic instantaneously. B. Message sent to the topic will be evenly distributed among all the queues which have subscribed to this topic. C. Each Queue will receive a message sent to that topic asynchronously with a time delay. D. Messages sent to the topic will be visible to the queue, once processing of the message is completed by the first queue.

Correct Answer - A When multiple Amazon SQS queues are subscribed to a single topic within Amazon SNS, each queue will receive an identical message. This is useful for parallel, independent processing of messages. Option B is incorrect as the queues subscribed to the topic will not each get an evenly distributed share of messages; instead, all queues receive identical messages each time a message is pushed to the topic. Option C is incorrect as there would not be any time delay in receiving messages. Option D is incorrect as all queues receive an identical message and can start processing in parallel, independently of the other queues.

You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as expected, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame? ]A. Canary10Percent5Minutes ]B. Linear10PercentEvery10Minutes ]C. Canary10Percent15Minutes ]D. Linear10PercentEvery1Minute

Correct Answer - A With the Canary deployment preference type, traffic is shifted in two intervals. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first interval while all remaining traffic is shifted after 5 minutes. Option B is incorrect as Linear10PercentEvery10Minutes will add 10 percent of traffic linearly to the new version every 10 minutes, so after 100 minutes all traffic will be shifted to the new version. Option C is incorrect as Canary10Percent15Minutes will send 10 percent of traffic to the new version and, 15 minutes later, complete the deployment by sending all traffic to the new version. Option D is incorrect as Linear10PercentEvery1Minute will add 10 percent of traffic linearly to the new version every 1 minute, so after 10 minutes all traffic will be shifted to the new version.

A company is developing an application which interacts with a DynamoDB table. There is now a security mandate that all data must be encrypted at rest. How can you achieve this requirement? Choose 2 answers from the options given below A. Enable encryption using AWS owned CMK B. Enable encryption using AWS managed CMK C. Enable encryption using client keys D. Enable your application to use the SDK to decrypt the data

Correct Answer - A and B DynamoDB encryption is mandatory at the time of table creation itself and it is of two types: i. DEFAULT method using 'AWS owned CMK' ii. KMS method using 'AWS managed CMK'. Therefore the following options are correct: A. Enable encryption using AWS owned CMK B. Enable encryption using AWS managed CMK

You are working on a mobile application which will be using AWS ElastiCache for caching application data to reduce latency & improve throughput. You have been asked by the management team to evaluate both in-memory cache engines supported by AWS ElastiCache - Memcached and Redis. Which of the following features are available only with Memcached which you should consider while developing the application? (Select Two.) A. Simple caching model B. Pub/Sub capability support. C. Multithreaded performance with utilization of multiple cores D. Data replication to multiple AZ's & data persistence

Correct Answer - A, C The AWS ElastiCache Memcached cache engine supports multithreaded performance with the utilization of multiple cores, along with a simple caching model. Options B & D are incorrect as these features are available with the AWS ElastiCache Redis cache engine.

A distributed web application is using multiple Amazon SQS standard & FIFO queues created in various AWS regions. There is an additional requirement of creating a Dead-Letter queue along with these queues to isolate & debug problematic messages. While creating a dead-letter queue, which of the following are TRUE statements? (Select Three.) A. For FIFO queue, FIFO Dead Letter queue should be created. B. For Standard queue, FIFO or Standard Dead-Letter queue can be created. C. Use the same AWS Account to create Dead-Letter & standard queue. D. Dead-Letter queue & Standard queue should reside in the same AWS region. E. Dead-Letter queue & FIFO queue can reside in any AWS region.

Correct Answer - A, C, D The dead-letter queue must be of the same type as the source queue that sends messages to it: for a standard queue a standard dead-letter queue should be created, and for a FIFO queue a FIFO dead-letter queue. Also, the same AWS account needs to create both queues, and both queues must reside in the same AWS region. Option B is incorrect as for a standard queue the dead-letter queue should also be a standard queue and not a FIFO queue. Option E is incorrect as the FIFO queue and its dead-letter queue both need to be in the same AWS region.

You are using AWS SAM templates to deploy a serverless application. Which of the following resources will embed a nested application from Amazon S3 buckets? ]A. AWS::Serverless::Api ]B. AWS::Serverless::Application ]C. AWS::Serverless::LayerVersion ]D. AWS::Serverless::Function

Correct Answer - B The AWS::Serverless::Application resource in an AWS SAM template is used to embed a nested application from Amazon S3 buckets. Option A is incorrect as AWS::Serverless::Api is used for creating API Gateway resources & methods that can be invoked through HTTPS endpoints. Option C is incorrect as the AWS::Serverless::LayerVersion resource type creates a Lambda layer version. Option D is incorrect as the AWS::Serverless::Function resource describes the configuration for creating a Lambda function.

Domain : Monitoring and Troubleshooting A start-up organisation is using FIFO AWS SQS queues for their distributed application. Developer team is observing messages are delivered out of order. Which of the following can ensure orderly delivery of messages? A. Associate a same Batch ID with all messages. B. Associate the same message group ID with all messages. C. Associate a same Message deduplication ID with all messages. D. Associate a Sequence number with all messages.

Correct Answer - B Amazon SQS FIFO queues use the message group ID to ensure orderly delivery of messages within a group. Option A is incorrect as Batch ID is an invalid option. Option C is incorrect as the message deduplication ID is used for deduplication of messages & not for orderly processing of messages. Option D is incorrect as the sequence number is a unique number assigned by AWS SQS & this will not impact the order of messages.
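A short boto3 sketch (the queue URL and IDs are hypothetical); messages sharing a message group ID are delivered to consumers of that group strictly in order:

import boto3

sqs = boto3.client("sqs")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",  # hypothetical
    MessageBody='{"orderId": 42, "step": "payment"}',
    MessageGroupId="customer-1001",                   # ordering is preserved within this group
    MessageDeduplicationId="order-42-step-payment",   # needed unless content-based dedup is enabled
)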

An online educational institute is using a three-tier web application & are using AWS X-Ray to trace data between various services. User A is experiencing latency issues using this application & Operations team has asked you to gather all traces for User A. Which of the following needs to be enabled to get Filtered output for User A from all other traces? A. Trace ID B. Annotations C. Segment ID D. Tracing header

Correct Answer - B Annotations are key-value pairs that are indexed for use with filter expressions. In the above case, traces for user A need to be tracked, for which annotations can be used along with a filter expression to find all traces related to that user. Option A is incorrect as the Trace ID tracks the path of a single request through the application & will not be used in filtering traces. Option C is incorrect as a segment provides details of the resource name, the request & the work done; it will not help in filtering traces. Option D is incorrect as the tracing header consists of the root trace ID, segment ID & sampling decision; it is not useful for filtering traces.
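A small sketch with the X-Ray SDK for Python (the annotation key and value are hypothetical, and an active segment, such as the one Lambda creates automatically, is assumed). The annotation is indexed and can then be searched with a filter expression such as annotation.user_id = "user-a":

from aws_xray_sdk.core import xray_recorder

@xray_recorder.capture("handle_request")
def handle_request(user_id):
    # Annotations are indexed key-value pairs usable in filter expressions.
    xray_recorder.put_annotation("user_id", user_id)
    # ... rest of the request handling ...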

You are working on an application which provides an online car booking service using Amazon DynamoDB. This is a read-heavy application which reads car & driver location details & provides the latest position to prospective car booking customers. Which of the following can be used to have consistent data writes & avoid unpredictable spikes in DynamoDB requests during peak hours? ]A. Write Around Cache using DynamoDB DAX. ]B. Write Through Cache using DynamoDB DAX. ]C. Use Side Cache using Redis along with DynamoDB. ]D. Write Through Cache using Redis along with DynamoDB.

Correct Answer - B DAX is intended for applications that require high-performance reads. As a write-through cache, DAX allows you to issue writes directly, so that your writes are immediately reflected in the item cache. You do not need to manage cache invalidation logic, because DAX handles it for you. For more information, please check the below link: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/dynamodb-dg.pdf (Page 650) Option A is incorrect as a write-around cache is generally useful in cases where there is a considerable amount of data to be written to the database; in that pattern data is written directly to DynamoDB instead of through DAX, so the cache is not kept up to date. Option C is incorrect as using a side cache with Redis is eventually consistent and non-durable, which may add additional delay. Option D is incorrect as with a write-through cache using Redis there is a chance of missing data when scaling out.

You have created multiple S3 buckets using AWS CloudFormation Templates. You have added a DeletionPolicy for each template to clean up all S3 buckets created during stack creation and deletion. Upon some research, you find that some S3 buckets are not getting deleted. Which of the following could be the reason? ]A. Default DeletionPolicy for Amazon S3 bucket is Retain. ]B. Ensure that all objects in S3 buckets are deleted before bucket is deleted. ]C. Ensure that CloudFormation Stack has permissions to delete S3 buckets. ]D. Modify DeletionPolicy to delete S3 bucket after stack is deleted.

Correct Answer - B For Amazon S3 buckets, we need to ensure that all objects are deleted before the bucket can be deleted by the DeletionPolicy of the CloudFormation template. Option A is incorrect as the default DeletionPolicy for an Amazon S3 bucket is Delete. Option C is incorrect as even though AWS CloudFormation has permission to delete S3 buckets, we still need to ensure that no objects are present in the S3 buckets before they can be deleted. Option D is incorrect as the default behaviour for the DeletionPolicy of S3 buckets is to delete the buckets once the stack is deleted; you still need to ensure that all objects in the S3 buckets are deleted first.
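For reference, a minimal template fragment is shown below (the logical ID and bucket name are hypothetical); even with DeletionPolicy set to Delete, CloudFormation can only remove the bucket if it is empty at stack-deletion time:

Resources:
  LogBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Delete            # the default behaviour, shown explicitly here
    Properties:
      BucketName: my-app-log-bucket   # hypothetical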

A Start-up firm is planning to implement a distributed stock trading application which requires time-critical messages to be sent to premium subscribers. This developer team needs your guidance to integrate Amazon SNS with Amazon SQS. Which of the following is NOT required to be done to enable Amazon SNS topic to send messages to Amazon SQS? A. Get ARN of the queue to send a message & topic to be subscribed for this queue. B. Give sns:Publish permission to Amazon SNS topic. C. Give sqs:SendMessage permission to Amazon SNS topic. D. Subscribe queue to Amazon SNS topic. E. Allow IAM users to publish to SNS topics & read messages from the queue.

Correct Answer - B To enable an Amazon SNS topic to send messages to an Amazon SQS queue, follow these steps: get the Amazon Resource Name (ARN) of the queue & the SNS topic; give sqs:SendMessage permission to the Amazon SNS topic; subscribe the queue to the Amazon SNS topic; and give IAM users or AWS accounts the appropriate permissions. Giving sns:Publish permission to the SNS topic is not one of the required steps. Options A, C, D & E are incorrect as these are required steps for sending messages to the Amazon SQS queue from Amazon SNS.

You are working on building microservices using Amazon ECS. This ECS will be deployed in an Amazon EC2 instance along with its Amazon ECS container agent. After the successful launching of the EC2 instance, Amazon ECS container agent has registered this instance in a cluster. What would be the status of container instance & its corresponding agent connection, when an ECS container instance is stopped? ]A. Container instance status remains as ACTIVE and Agent connection status as TRUE. ]B. Container instance status remains as ACTIVE and Agent connection status as FALSE. ]C. Container instance changes status as INACTIVE and Agent connection status as FALSE. ]D. Container instance changes status as INACTIVE and Agent connection status as TRUE.

Correct Answer - B When an ECS container instance is stopped, the container instance status remains ACTIVE but the agent connection status changes to FALSE immediately. Option A is incorrect as when the EC2 instance is stopped, the agent connection status changes to FALSE & not TRUE. Options C & D are incorrect as when the EC2 instance is stopped, the container instance remains ACTIVE & not INACTIVE.

A web application is using a single thread to poll multiple queues of Amazon SQS using long polling having a wait time set as 15 seconds. This is causing delay in processing of messages in a queue. Which of the following best practices can be used to enhance message processing performance without any cost impact? A. Shorten long-poll wait time. B. Use a single thread to process a single queue using long polling. C. Use short polling instead of long polling. D. Use a single thread to process a single queue using short polling.

Correct Answer - B When an application is using a single thread to query multiple queues, the best practice for increasing message processing performance is to have a single thread process a single queue and to use long polling. Option A is incorrect as this will not increase performance, since the single thread still has to wait up to the long-poll wait time on each of the queues it is polling. Option C is incorrect as short polling could be used in this case, but it will have an additional cost impact. Option D is incorrect as short polling could be used in this case, but it will have an additional cost impact.

Which of the following is a correct way of passing a stage variable to an HTTP URL in an API Gateway? (Select TWO.) A. http://example.com/${}/prod B. http://example.com/${stageVariables.<variable_name>}/prod C. http://${stageVariables.<variable_name>}.example.com/dev/operation D. http://${stageVariables}.example.com/dev/operation E. http://${}.example.com/dev/operation F. http://example.com/${stageVariables}/prod

Correct Answer - B,C A stage variable can be used as part of an HTTP integration URL in the following cases: a full URI without protocol, a full domain, a subdomain, a path, or a query string. In the above case, options B & C show the stage variable used as a path and as a subdomain. Options A, D, E & F are incorrect as these are incorrect ways of passing stage variables.

AWS CodeDeploy is used to configure a deployment group to automatically roll-back to last known good revision when a deployment fails. During roll-back, files required for deployment to earlier revision cannot be retrieved by AWS CodeDeploy. Which of the following actions can be executed for successful roll-back? Choose 2 correct options. A. Use Manual Roll Back instead of Automatic Roll-Back. B. Manually Add required files to Instance. C. Use an existing application revision. D. Map CodeDeploy to access those files from S3 buckets. E. Create a new application revision.

Correct Answer - B,E During an AWS CodeDeploy automatic roll-back, CodeDeploy tries to retrieve files which were part of previous versions. If these files are deleted or missing, you need to manually add those files to the instance or create a new application revision. Option A is incorrect as, if the files required for deployment were overwritten by an earlier deployment, they will not be available to CodeDeploy & the deployment will fail even in the case of a manual roll-back. Option C is incorrect as AWS CodeDeploy will not find the missing files by using an existing application revision. Option D is incorrect as CodeDeploy does not automatically access these files from S3 buckets.

An AWS CodeDeploy deployment fails to start & generates the following error code: "HEALTH_CONSTRAINTS_INVALID". Which of the following can be used to eliminate this error? ]A. Make sure the minimum number of healthy instances is equal to the total number of instances in the deployment group. ]B. Increase the number of healthy instances required during deployment. ]C. Reduce number of "healthy instances required" to less than the total number of instances. ]D. Make sure the minimum number of healthy instances is greater than the number of instances in the deployment group.

Correct Answer - C AWS CodeDeploy generates the "HEALTH_CONSTRAINTS_INVALID" error when the minimum number of healthy instances defined in the deployment group cannot be maintained during deployment. To mitigate this error, make sure the required number of healthy instances is available during deployments. Option A is incorrect as, during the deployment process, CodeDeploy tracks the health status of the instances in a deployment group and uses the deployment's specified minimum number of healthy instances to determine whether to continue the deployment; for this, the minimum number of healthy instances should be less than, not equal to, the total number of instances in the deployment group. Option B is incorrect as increasing the required number of healthy instances makes the constraint harder to satisfy; to continue with the deployment you would instead need more instances in the deployment group relative to the minimum. Option D is incorrect as the minimum number of healthy instances must be less than the total number of instances in the deployment group, not greater.

Domain : Development with AWS Services The developer team is planning to create a new distributed application. Most of the messages exchanged by this application are larger than 256 KB and need to be polled periodically & buffered so that other applications can retrieve them and start processing. Which of the following services can be used to meet this requirement? A. Use Amazon Kinesis Streams. B. Use Amazon SNS C. Use Amazon SQS D. Use Amazon MQ

Correct Answer - C AWS SQS can be used in distributed applications for queuing messages between applications, which decouples components. AWS SQS supports periodic polling of messages between components. For message sizes higher than 256 KB, the Amazon SQS Extended Client Library for Java can be used, which references a message payload stored in Amazon S3. Option A is incorrect as Amazon Kinesis Streams is more effective for real-time streaming of data. Option B is incorrect as Amazon SNS is a push notification messaging service; since in this case the client is looking for polled messages, AWS SQS is a better option. Option D is incorrect as, since a new application is to be developed, using AWS SQS is a better option than AWS MQ.

You are working on an application which saves strings in DynamoDB table. For strings with size more than 400KB, you are getting item size exceeded error. Which of the following is a recommended option to store strings with larger size ? ]A. Compress large size strings to fit in DynamoDB table. ]B. Split strings between multiple tables. ]C. Save string object in S3 with object identifiers in DynamoDB. ]D. Open a ticket with AWS support to increase Item size to more than 400 KB.

Correct Answer - C Amazon S3 can be used to save items exceeding 400 KB. With this option, items are saved in S3 buckets while an object identifier that points to the item in S3 is saved in the DynamoDB table. Option A is incorrect as compressing large strings to fit in the DynamoDB table could resolve this error, but it is a short-term workaround & not a recommended option for permanently resolving the item size error. Option B is incorrect as splitting strings across multiple tables will introduce inconsistency while updating items in multiple tables, & it is also not a permanent resolution. Option D is incorrect as the item size of 400 KB is a hard limit & cannot be increased.

Domain : Deployment A Junior Engineer is configuring AWS X-Ray daemon which would be running locally on a multi-vendor OS environment. He is concerned about the listening port to be configured for this scenario. Which is the correct statement with respect to AWS X-Ray Daemon listening port? A. AWS X-Ray Daemon Listening Port can be changed only for the Linux environment. B. AWS X-Ray Daemon Listening Port cannot be changed while running the daemon locally. C. AWS X-Ray Daemon Listening Port can be changed using --bind command with CLI. D. AWS X-Ray Daemon Listening Port is default as 2000 & cannot be changed.

Correct Answer - C By default, the AWS X-Ray daemon listens for traffic on UDP port 2000. This port can be changed while configuring the daemon with the CLI option --bind "different port number". Option A is incorrect as the AWS X-Ray daemon listening port can be changed for all environments. Option B is incorrect as the listening port can be changed for daemons running locally as well. Option D is incorrect as the daemon's default listening port is UDP 2000, but it can be changed using the --bind option.

Domain : Monitoring and Troubleshooting The developer team has created an e-commerce application using Amazon SQS which buffers messages to be sent to the end user. Currently they are facing issues with end users getting order confirmation messages before backend process completion, where in some cases the order is cancelled due to non-availability of products. To resolve this issue, the developer team wants messages within an Amazon SQS queue to be made available only once backend processes are completed. Which of the following can be used for this purpose? A. Decrease time period using visibility timeouts. B. Decrease time period using delay queue. C. Increase time period using delay queue. D. Increase time period using visibility timeouts.

Correct Answer - C A delay queue adds a delay to each message when it is added to the queue. In the above case, the application needs additional time to complete backend activities before new messages are processed; this can be done by increasing the time period of the delay queue. The default delay is 0 seconds, which is the minimum value, while the maximum delay that can be set is 900 seconds. Options A & D are incorrect as visibility timeouts hide a message only after it has been received by a consumer; in this case the delay is needed when the message is first added to the queue. Option B is incorrect as decreasing the delay timer will make messages available to subscribers sooner.
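A quick boto3 sketch (the queue name and delay value are hypothetical); DelaySeconds can be set anywhere between 0 and 900:

import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="order-confirmations",
    Attributes={"DelaySeconds": "300"},  # every new message is hidden for 5 minutes
)

For an existing queue, the same attribute can be changed with set_queue_attributes.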

You are using AWS DynamoDB as a database to save sales data for a global appliance company. All data at rest is encrypted using AWS KMS. Which of the following can be used for encryption of global secondary indexes with minimum cost ? ]A. Data Keys with AWS Managed CMK. ]B. Table Keys with AWS Managed CMK. ]C. Table Keys with AWS owned CMK. ]D. Data Keys with AWS owned CMK

Correct Answer - C DynamoDB uses the CMK to generate and encrypt a unique data key for the table, known as the table key. With DynamoDB, an AWS owned or AWS managed CMK can be used to generate & encrypt keys. The AWS owned CMK is free of charge while the AWS managed CMK is chargeable. Customer managed CMKs are not supported with encryption at rest. "With encryption at rest, DynamoDB transparently encrypts all customer data in a DynamoDB table, including its primary key and local and global secondary indexes, whenever the table is persisted to disk." Option A is incorrect as AWS DynamoDB uses table keys & not data keys to encrypt global secondary indexes. Also, an AWS managed CMK will incur charges. Option B is incorrect as an AWS managed CMK will incur charges. Option D is incorrect as AWS DynamoDB uses table keys & not data keys to encrypt global secondary indexes.

You have integrated the application with the X-Ray SDK which generates segment documents recording tracing information of the applications. New recruit is enquiring about fields which are required with a Segment Document. Which of the following is a required field in a trace Segment sent by an application? A. service B. http C. start_time D. parent_id

Correct Answer - C The following are the required segment fields in a segment document: 1) name 2) id 3) trace_id 4) start_time 5) end_time 6) in_progress. The following are the optional segment fields in a segment document: 1) service 2) user 3) origin 4) parent_id 5) http 6) aws 7) error, throttle, fault, and cause 8) annotations 9) metadata 10) subsegments. Option A is incorrect as service is an optional field in a segment. Option B is incorrect as http is an optional field in a segment. Option D is incorrect as the parent_id field is required in a subsegment & optional in a segment.

Your company is using AWS CodeDeploy for deployment of applications to EC2 instances. For a financial application project to be deployed in the us-east region, log files are saved on the EC2 instances launched in this region. A new joiner mistakenly deleted the deployment log file for AWS CodeDeploy. Which of the following can be done to create a new log file? ]A. AWS CodeDeploy will automatically create a replacement file. ]B. Restore deployment log file from backup file. ]C. Restart CodeDeployAgent Service. ]D. Restart EC2 instance with AWS CodeDeploy agent.

Correct Answer - C If a deployment log file is deleted, a new log file can be created by restarting the CodeDeploy agent service using the following commands. For Windows: powershell.exe -Command Restart-Service -Name codedeployagent. For Linux: sudo service codedeploy-agent stop, then sudo service codedeploy-agent start. Option A is incorrect as AWS CodeDeploy does not create a replacement file automatically. Option B is incorrect as this is not a valid option for creating a new log file for AWS CodeDeploy. Option D is incorrect as restarting the EC2 instance will impact all services running on that instance.

You are using Amazon DynamoDB for storing all product details for an online furniture store. Which of the following expressions can be used to return only the Colour & Size attributes of items during query operations? ]A. Update Expressions ]B. Condition Expressions ]C. Projection Expressions ]D. Expression Attribute Names

Correct Answer - C A projection expression is used to return only specific attributes of an item, instead of all attributes, during a scan or query operation. In the above case, a projection expression can be created with Colour & Size instead of returning every attribute of each item. Option A is incorrect as update expressions are used to specify how an update operation will modify the attributes of an item; in the above case there is no need to modify the attributes. Option B is incorrect as condition expressions are used to specify a condition which should be met in order to modify the attributes of an item. Option D is incorrect as expression attribute names are used as alternate names (placeholders) in an expression instead of the actual attribute name.
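A short boto3 sketch (the table, key and attribute values are hypothetical); note that Size is a DynamoDB reserved word, so it is aliased through ExpressionAttributeNames:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Products")  # hypothetical table name

resp = table.query(
    KeyConditionExpression=Key("ProductId").eq("chair-123"),  # hypothetical key
    ProjectionExpression="Colour, #sz",
    ExpressionAttributeNames={"#sz": "Size"},  # Size is a reserved word
)
print(resp["Items"])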

You are working on a POC for a new gaming application in us-east-1 region which will be using Amazon Cognito Events to execute AWS Lambda function. AWS Lambda function will issue a winning badge for a player post reaching every new level. You are getting error as "LambdaThrottledException" for certain cases when you are performing application testing with large number of players. Which of the following action needs to be implemented to resolve this error message? ]A. Make sure Lambda Function responds in 5 sec. ]B. Make sure Amazon Cognito provides all records in a dataset as input to the function. ]C. Retry sync operation. ]D. Make sure you are updating "datasetRecords" field & not any other fields.

Correct Answer - C To resolve the "LambdaThrottledException" error while using Amazon Cognito Events, you need to retry the sync operation in your Lambda function. Option A is incorrect as, if a Lambda function does not respond within 5 seconds, a "LambdaSocketTimeoutException" error will be generated & not "LambdaThrottledException". Option B is incorrect as this will not generate the "LambdaThrottledException" error. Option D is incorrect as updating other fields will result in a failure to update records & will not generate the "LambdaThrottledException" error.

A microservice application is developed using AWS Lambda functions which are invoked by non-instrumented service. For performance optimization, AWS X-Ray needs to be integrated with this AWS Lambda function to get traces with multiple services. Which of the following is a correct statement for an AWS Lambda function invoked by non-instrumented service? A. For non-instrumented services, AWS Lambda record trace without any additional configuration. B. Enable Start tracing under AWS Lambda Function configuration. C. Enable Active tracing under AWS Lambda Function configuration. D. AWS Lambda functions invoked by non-instrumented services do not support tracing.

Correct Answer - C When an AWS Lambda function is invoked by a non-instrumented service, tracing can be enabled by turning on active tracing under the AWS Lambda function configuration in the console, or via the CLI using: $ aws lambda update-function-configuration --function-name my-function --tracing-config '{"Mode": "Active"}' Option A is incorrect as AWS Lambda traces requests without any additional configuration only if the Lambda function is called by another instrumented service, not by a non-instrumented service. Option B is incorrect as Start Tracing is an invalid option for enabling tracing for a Lambda function invoked by a non-instrumented service. Option D is incorrect as an AWS Lambda function invoked by non-instrumented services does support tracing, by enabling active tracing on the Lambda function.

You are using AWS CodeDeploy for upgrading a web application on an EC2 Instance using blue/green deployment. DevOps Lead is concerned about scripting in each of the hooks section in an AppSpec file. Which of the following events can be scripted while creating run order of hooks in a blue/green deployment? ]A. DownloadBundle ]B. BlockTraffic ]C. ApplicationStart ]D. AllowTraffic

Correct Answer - C While creating event hooks for a blue/green deployment, the following events can have scripted files: ApplicationStop, BeforeInstall, AfterInstall, ApplicationStart, ValidateService, BeforeAllowTraffic, AfterAllowTraffic, BeforeBlockTraffic, and AfterBlockTraffic. Options A, B & D are incorrect as DownloadBundle, BlockTraffic & AllowTraffic cannot be scripted.

While choosing an Instance for your web application, which of the following features are available additionally with M5 instance in comparison with T2 instance. ]A. Network performance of only up to 5 Gbps & Enhanced Networking Support with ENA. ]B. Network performance of up to 1 Gbps & Enhanced Networking support with Intel 82599 Virtual Function (VF) interface. ]C. Network performance of 10- 25 Gbps based upon instance type & Enhanced Networking Support with ENA. ]D. Network performance up to 5 Gbps & Enhanced Networking support with Intel 82599 Virtual Function (VF) interface.

Correct Answer - C M5 general-purpose instances use the Elastic Network Adapter (ENA) to support enhanced networking, and support network performance of 10 Gbps to 25 Gbps depending upon the instance type. T2 instances do not support enhanced networking & support network performance only up to 1 Gbps. Option A is incorrect as M5 general-purpose instances provide network performance of 10 Gbps to 25 Gbps based upon the instance type & not only up to 5 Gbps. Option B is incorrect as M5 general-purpose instances provide enhanced networking support with the Elastic Network Adapter & not with the Intel 82599 Virtual Function (VF) interface; also, network performance is 10 Gbps to 25 Gbps based upon the instance type. Option D is incorrect as M5 general-purpose instances provide enhanced networking support with the Elastic Network Adapter & not with the Intel 82599 Virtual Function (VF) interface.

Which of the following is true with respect to strongly consistent read request from an application to a DynamoDB with a DAX cluster ? ]A. All requests are forwarded to DynamoDB & results are cached. ]B. All requests are forwarded to DynamoDB & results are store in Item Cache before passing to application. ]C. All requests are forwarded to DynamoDB & results are store in Query Cache before passing to application. ]D. All requests are forwarded to DynamoDB & results are not cached.

Correct Answer - D For strongly consistent read requests from an application, the DAX cluster passes all requests through to DynamoDB & does not cache the results. Option A is incorrect as it is only partly correct: for strongly consistent read requests the DAX cluster passes all requests to DynamoDB & does not cache the results. Option B is incorrect as only for GetItem and BatchGetItem eventually consistent read requests is data stored in the item cache. Option C is incorrect as only for Query and Scan eventually consistent read requests is data stored in the query cache.

You are planning to use AWS X-Ray for a multiservice application for which operations Team is getting lots of complaints with respect to application performance. Before integrating AWS X-Ray, you are looking into core details of AWS X-Ray. Which of the following is the correct statement pertaining to AWS X-Ray? A. Sub-Segments can only be embedded in a full segment document. B. X-Ray Trace is a set of data points sharing the same Segment ID. C. Annotations consist of only system defined data. D. Segment consists of multiple annotations.

Correct Answer - D A segment consists of the tracing record for a request which a distributed application serves. A segment consists of multiple system-defined & user-defined annotations, along with subsegments recording the remote calls made by this application. Option A is incorrect as subsegments can also be sent independently, apart from being embedded in segment documents. Option B is incorrect as an X-Ray trace is a set of data points sharing the same trace ID & not the same segment ID. Option C is incorrect as annotations consist of both system-defined & user-defined data.

You are using S3 buckets to store images. These S3 buckets invoke a lambda function on upload. The Lambda function creates thumbnails of the images and stores them in another S3 bucket. An AWS CloudFormation template is used to create the Lambda function with the resource "AWS::Lambda::Function". Which of the following functions would Lambda call to execute this? ]A. FunctionName ]B. Layers ]C. Environment ]D. Handler

Correct Answer - D The Handler is the name of the method within your code that Lambda calls to execute the function. Option A is incorrect as it is simply the name of the function. Option B is incorrect as it is a list of function layers added to the Lambda function's execution environment. Option C is incorrect as these are variables which are accessible during Lambda function execution.

A loose coupled distributed application is built using Amazon SQS along with Amazon EC2 container service and on-premise hosts. AWS X-Ray is enabled for all messages passing through Amazon SQS. What will be the effect of AWS X-Ray Trace headers on the Amazon SQS message? A. 1 byte of Trace header is added to AWS SQS message size. B. Trace Header will be included as a message attribute in an AWS SQS message. C. Trace Header will be included as message content in an AWS SQS message. D. Trace header is excluded from AWS SQS message size.

Correct Answer - D When AWS X-Ray is integrated with AWS SQS, there is no impact of trace Header on AWS SQS message size or message attributes quota. Option A is incorrect as 1 byte of trace header is not added to AWS SQS message size. Option B is incorrect as Trace Header will be excluded from message attribute quota in AWS SQS message Option C is incorrect as Trace Header will be carried as Default HTTP Header or AWSTraceHeader System Attribute but not as a message content in an AWS SQS message.

Development Team is using AWS CodePipeline to deploy a new application from Amazon S3 bucket to a fleet of Amazon EC2 instance across multiple AZ's. Team needs to have a test before the Deploy Stage to ensure no bugs in source code. Which of the following is an additional requirement for Custom Action in AWS CodePipeline? ]A. Create a Custom Worker with access to public endpoint for AWS CodePipeline. ]B. Create a Custom Worker with access to private endpoint for AWS CodePipeline. ]C. Create a Job Worker with access to private endpoint for AWS CodePipeline. ]D. Create a Job Worker with access to public endpoint for AWS CodePipeline.

Correct Answer - D When creating custom actions with AWS CodePipeline, a job worker must be created on a device that has access to the public endpoint for AWS CodePipeline; the job worker polls CodePipeline for jobs for your custom action and runs them. Options A & B are incorrect because a Custom Worker is not a valid concept here. Option C is incorrect because the job worker needs access to the public endpoint for AWS CodePipeline, not a private endpoint.
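A job worker can be a simple process running anywhere with outbound access to the public CodePipeline endpoint. The hedged boto3 sketch below polls for jobs for a custom action; the category, provider, and version values are placeholders for your own custom action type.

import boto3

codepipeline = boto3.client('codepipeline')

# Poll the public CodePipeline endpoint for jobs belonging to a custom action type
jobs = codepipeline.poll_for_jobs(
    actionTypeId={
        'category': 'Test',                  # placeholder values for the custom action
        'owner': 'Custom',
        'provider': 'MyCustomTestRunner',
        'version': '1',
    },
    maxBatchSize=1,
)

for job in jobs.get('jobs', []):
    codepipeline.acknowledge_job(jobId=job['id'], nonce=job['nonce'])
    # ... run the custom test logic here, then report the result ...
    codepipeline.put_job_success_result(jobId=job['id'])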

You are using AWS Envelope Encryption to encrypt all of your sensitive data. Which of the following is true with regards to the AWS Envelope Encryption service? ]A. First, the data is encrypted using an encrypted Data Key. The encrypted Data Key is then further encrypted using an encrypted Master Key. ]B. First, the data is encrypted using a plaintext Data Key. The Data Key is then further encrypted using an encrypted Master Key. ]C. First, the data is encrypted using an encrypted Data Key. The encrypted Data Key is then further encrypted using a plaintext Master Key. ]D. First, the data is encrypted using a plaintext Data Key. The Data Key is then further encrypted using a plaintext Master Key.

Correct Answer - D With envelope encryption, unencrypted data is encrypted using a plaintext data key. That data key is then encrypted under a plaintext master key. The plaintext master key is stored securely in AWS KMS and is known as a Customer Master Key (CMK). Option A is incorrect because both the data key used to encrypt the data and the master key used to encrypt the data key are plaintext keys when they perform encryption. Option B is incorrect because the master key used to encrypt data keys is in plaintext format. Option C is incorrect because the data key used to encrypt the data is in plaintext format.
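The flow is easiest to see with the KMS GenerateDataKey call, which returns both a plaintext data key (used locally to encrypt the data and then discarded) and the same key encrypted under the CMK (stored alongside the ciphertext). A hedged boto3 sketch, with a placeholder key alias:

import boto3

kms = boto3.client('kms')

# Ask KMS for a data key; the CMK itself never leaves KMS
response = kms.generate_data_key(
    KeyId='alias/my-app-key',   # placeholder CMK alias
    KeySpec='AES_256',
)

plaintext_data_key = response['Plaintext']       # use this locally to encrypt the data
encrypted_data_key = response['CiphertextBlob']  # store this next to the ciphertext

# Encrypt the data with plaintext_data_key (e.g. with a symmetric cipher library), then
# discard the plaintext key from memory. To decrypt later, call kms.decrypt() on
# encrypted_data_key to recover the plaintext data key.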

For a new HTTP-based application deployed on Amazon EC2 and DynamoDB, you are using the AWS X-Ray API to retrieve service graph data. The team lead has suggested using the GetTraceSummaries API to get trace summaries. Which of the following flags are available while retrieving traces using the GetTraceSummaries API? (Select TWO.) A. UserId B. Annotations C. HTTPMethod D. TraceID E. Event Time

Correct Answers - D, E When the GetTraceSummaries API is called, the following time-range types are available: 1) TraceId: the search uses the time encoded in the trace ID and returns traces between the computed start and end times. 2) Event time: the search is based on when the events occurred and returns traces that were active during the start and end time range. Options A, B & C are incorrect because these are not time-range flags used when retrieving traces with the GetTraceSummaries API.
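As a hedged illustration, the boto3 call below retrieves trace summaries for the last ten minutes, with the time-range flag switchable between TraceId and Event; the filter expression is an optional assumption added only for the sake of the example.

import boto3
from datetime import datetime, timedelta

xray = boto3.client('xray')

end = datetime.utcnow()
start = end - timedelta(minutes=10)

# TimeRangeType='TraceId' searches by the time encoded in the trace ID;
# TimeRangeType='Event' searches by when the events actually occurred.
summaries = xray.get_trace_summaries(
    StartTime=start,
    EndTime=end,
    TimeRangeType='TraceId',               # or 'Event'
    FilterExpression='responsetime > 1',   # optional: only traces slower than 1 second
)

for trace in summaries['TraceSummaries']:
    print(trace['Id'], trace.get('ResponseTime'))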

As a developer, you have set up Amazon Cognito Sync to enable cross-device syncing of application-related user data. You later configure Amazon Cognito streams with an Amazon Kinesis stream by changing the role trust permission to receive events as data is updated and synchronized. What statement is true in this context? A. You will need to either recreate the Kinesis stream or fix the role, and then you will need to re-enable the stream. B. You can move all of your Sync data to Kinesis, which can then be streamed to a data warehouse tool. C. You have to enable updates for cases that are larger than the Kinesis maximum payload size. D. Before configuring a Kinesis stream, you have to enable bulk publish operation to manually apply to all of your streams.

Correct Answer: A After you have configured Amazon Cognito streams, if you delete the Kinesis stream or change the role trust permission so that it can no longer be assumed by Amazon Cognito Sync, Amazon Cognito streams will be disabled. You will need to either recreate the Kinesis stream or fix the role, and then you will need to re-enable the stream. Incorrect Answers: Option B is incorrect because, although the statement is true, it is irrelevant to this scenario. Option C is incorrect because no such enablement exists: after you've successfully configured Amazon Cognito streams, all subsequent updates to datasets in this identity pool are sent to the stream, and for updates larger than the Kinesis maximum payload size of 50 KB, a presigned Amazon S3 URL containing the full contents of the update is included. Option D is incorrect because a bulk publish operation is not a prerequisite; once you have configured Amazon Cognito streams, you can execute a bulk publish operation for the existing data in your identity pool.

Domain : Monitoring and Troubleshooting You are the senior developer in a company that builds and sells analytics dashboards for organisations in a B2B model. After implementing the architecture designed by the solutions architect, which integrates Amazon Cognito and the Amazon Elasticsearch Service, you notice that you are able to log in but you cannot see the Kibana dashboard, instead getting an Elasticsearch es:ESHttpGet authorisation error. What could be the possible reason? A. It is an IAM role because the authenticated IAM role for identity pools doesn't include the privileges required to access Kibana. B. Amazon Cognito authentication is required. The user identity has changed with respect to the access policy variables for unauthenticated identities. C. Authenticated identities belong to users who are authenticated by any supported identity provider. Unauthenticated identities typically belong to guest users. D. For each identity type, there is an assigned role. This role has a policy attached to it which dictates which AWS services that role can access.

Correct Answer: A By default, the authenticated IAM role for identity pools doesn't include the privileges required to access Kibana. You have to add the name of the authenticated role to the Amazon ES access policy. Incorrect Answers: Option B is incorrect because Amazon Cognito authentication is not required, and you have to be careful when including your users' identity IDs in your access policies, particularly for unauthenticated identities, as these may change if the user chooses to log in. Options C, D are incorrect because, although the statements are true, they do not apply to this scenario.

As a cloud engineer, you have to use containers to package source code and dependencies into immutable artifacts in order to be deployed predictably to any environment. To optimise the application CI/CD pipelines and reduce costs, which option is the most appropriate in this scenario? A. Run Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS. B. Run Docker Workloads on Amazon EC2 On-Demand Instances with Amazon EKS. C. Run Kubernetes Workloads on Amazon EC2 On-Demand Instances with Amazon ECS. D. Run Docker Workloads on Amazon EC2 Spot Instances with Amazon EMR and Auto Scaling Groups.

Correct Answer: A Running Kubernetes workloads on Amazon EC2 Spot Instances with Amazon EKS is the recommended cost-optimisation practice here. Incorrect Answers: Option B is incorrect because On-Demand Instances cost more than Spot Instances, which are spare EC2 capacity available at a discount to the On-Demand price. Option C is incorrect because Kubernetes workloads are run with Amazon EKS, not Amazon ECS. Option D is incorrect because Amazon EMR is not a service for Kubernetes.

While developing an application, MFA and password recovery were included as additional requirements to increase security by adding a second authentication factor and a recovery mechanism. What is considered a recommended practice in this context? A. Use TOTP as a second factor and SMS as a password recovery mechanism which is disjoint from an authentication factor. B. Enable MFA as Required immediately after creating a user pool to add another layer of security. C. Disable adaptive authentication, so you can configure a second factor authentication in response to an increased risk level. D. Use SMS as a second factor and TOTP along with a security key as the MFA device for your IAM and root users.

Correct Answer: A It is recommended to use TOTP as the second factor. This allows SMS to be used as a password-recovery mechanism that is disjoint from an authentication factor. Incorrect Answers: Option B is incorrect because you can only choose MFA as Required when you initially create a user pool, not afterwards. Option C is incorrect because it is with adaptive authentication enabled, not disabled, that you can configure your user pool to require second-factor authentication in response to an increased risk level. Option D is incorrect because SMS is not the recommended second factor here, and security keys as the MFA device for root users are irrelevant in this context.
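If useful, the user pool MFA setting can also be managed programmatically. A hedged boto3 sketch (the user pool ID is a placeholder) that enables TOTP software tokens as an optional second factor:

import boto3

cognito_idp = boto3.client('cognito-idp')

cognito_idp.set_user_pool_mfa_config(
    UserPoolId='us-east-1_examplePool',                 # placeholder user pool ID
    SoftwareTokenMfaConfiguration={'Enabled': True},    # TOTP as the second factor
    MfaConfiguration='OPTIONAL',                        # per the explanation above, Required can only be chosen at pool creation
)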

You, as a cloud engineer, have been assigned to a project where an application must sign its AWS API requests with AWS credentials. You are working with IAM roles for Amazon ECS tasks so you can use them in the containers in a task. Which statement is not true in this context? A. Containers that are running on your container instances are prevented accordingly from accessing the credentials that are supplied to the container instance profile. B. It is recommended to limit the permissions in your container instance role to the minimal list of permissions in the AmazonEC2ContainerServiceforEC2Role role, e.g. ecs:CreateCluster, ecr:GetAuthorizationToken. C. Set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent to protect credential information supplied to the container instance profile. D. You define the IAM role to use in your task definitions, or you can use a taskRoleArn override when running a task manually with the RunTask API operation.

Correct Answer: A The statement is false because containers that are running on your container instances are not prevented from accessing the credentials that are supplied to the container instance profile. Incorrect Answers: Options B, C, and D are incorrect answers because they are true in this context.

A developer is designing a mobile game application relying on some AWS serverless services. In order to access these services, requests must be signed with an AWS access key. Among the recommended approaches, which one is the most appropriate for this sort of scenario? A. Embed or distribute long-term AWS credentials that a user downloads to an encrypted store. B. Use Amazon Cognito which acts as an identity broker to implement web identity federation. C. Write code that interacts with a web identity provider and trades the authentication token for AWS temporary security credentials. D. Use federation and AWS IAM to enable single sign-on (SSO) to your AWS root accounts.

Correct Answer: B For best results, it is recommended to use Amazon Cognito as your identity broker for almost all web identity federation scenarios. Incorrect Answers: Option A is incorrect because it is strongly recommended that you do not embed or distribute long-term AWS credentials with apps that a user downloads to a device, even in an encrypted store. Using a web identity provider helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. Option C is incorrect because the best approach here is using Amazon Cognito. If you don't use Amazon Cognito, then you must write code that interacts with a web identity provider, such as Facebook, and then calls the AssumeRoleWithWebIdentity API to trade the authentication token you get from those web identity providers for AWS temporary security credentials. Option D is incorrect because it does not apply, and single sign-on to AWS root accounts is not a valid practice.
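To make the recommendation concrete, here is a hedged sketch of the Amazon Cognito flow in Python (the identity pool ID and the Facebook token are placeholders): the app exchanges the provider token for a Cognito identity and temporary AWS credentials, so no long-term keys ship with the app.

import boto3

cognito_identity = boto3.client('cognito-identity')

# Exchange the web identity provider token (placeholder) for a Cognito identity ID
identity = cognito_identity.get_id(
    IdentityPoolId='us-east-1:11111111-2222-3333-4444-555555555555',   # placeholder pool ID
    Logins={'graph.facebook.com': 'FACEBOOK-ACCESS-TOKEN'},            # placeholder token
)

# Retrieve temporary, limited-privilege AWS credentials for that identity
creds = cognito_identity.get_credentials_for_identity(
    IdentityId=identity['IdentityId'],
    Logins={'graph.facebook.com': 'FACEBOOK-ACCESS-TOKEN'},
)

print(creds['Credentials']['AccessKeyId'], creds['Credentials']['Expiration'])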

You are working with an architecture team and several cloud engineers. The project requires containerisation, and you are responsible for maintaining the Amazon ECS tasks. Which statement about task definitions is correct? A. Name, image, memory, and port mapping are task definition parameters that are required and used in most container definitions. B. If the network mode is bridge, the task utilizes Docker's built-in virtual network which runs inside each container instance. C. If using the Fargate launch type, the awsvpc network mode is no longer required. D. Docker for Windows offers host and awsvpc network modes as the highest networking performance so you can take advantage of dynamic host port mappings.

Correct Answer: B The Docker networking mode to use for the containers in the task can be none, bridge, awsvpc, or host; the default is bridge. If the network mode is bridge, the task utilizes Docker's built-in virtual network, which runs inside each container instance. Incorrect Answers: Option A is incorrect because only name and image are required task definition parameters. Option C is incorrect because if using the Fargate launch type, the awsvpc network mode is required; if using the EC2 launch type, the allowable network modes depend on the underlying EC2 instance's operating system. Option D is incorrect because Docker for Windows uses a different network mode (known as NAT) than Docker for Linux, and when you register a task definition with Windows containers, you must not specify a network mode.
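For reference, a hedged boto3 sketch of registering a task definition (the family name, container name, and image are placeholders); name and image are the only required container parameters, and networkMode is set to bridge here.

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='example-web-task',           # placeholder family name
    networkMode='bridge',                # Docker's built-in virtual network on each instance
    containerDefinitions=[
        {
            'name': 'web',               # required
            'image': 'nginx:latest',     # required
            'memory': 256,               # hard memory limit (MiB) for the container
            'portMappings': [
                {'containerPort': 80, 'hostPort': 0},   # hostPort 0 enables dynamic port mapping
            ],
        },
    ],
)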

During the development, definition, and deployment of a backend you are building, you have to decide how to use user pools and identity pools as part of a serverless application. Which statement is correct in this scenario? A. User pools support temporary, limited-privilege AWS credentials to access other AWS services while Identity pools provide sign-up and sign-in services. B. User pools help you track user device, location, and IP address while Identity pools help you generate temporary AWS credentials for unauthenticated users. C. User pools give your users access to AWS resources. D. User pools are for authorization (access control) which is fully managed on your behalf.

Correct Answer: B User pools are for authentication (identify verification). Identity pools are for authorization (access control). User pools help you track user device, location, and IP address, and adapt to sign-in requests of different risk levels. Identity pools help you generate temporary AWS credentials for unauthenticated users. Incorrect Answers: Options A, C are incorrect because user pools do not deal with AWS credentials to access other AWS services. Option D is incorrect because Identity pools are for authorization (access control).

You have been hired as a database administrator by a start-up, and your first task is to modify a DynamoDB table so that its data expires automatically after a certain period of time. Upon looking at the documentation, you figure out that DynamoDB supports a Time to Live (TTL) feature with which you can achieve this. What are the steps to use the feature? ]A. Enable TTL and use the dataExpiry keyword as a key attribute to store expiry timestamp. ]B. Enable TTL and use any name of your choice as a key attribute to store expiry timestamp. ]C. Enable TTL and use the keyword expiryTTL as a key attribute to store expiry timestamp. ]D. It is by default enabled and will automatically pick a key attribute with timestamp value.

Correct Answer: B When enabling TTL, you can provide any attribute name of your choice to store the expiry timestamp. Option A is incorrect: dataExpiry is not a reserved keyword for TTL. Option C is incorrect: expiryTTL is also not a reserved keyword for TTL. Option D is incorrect: TTL is not enabled by default, and it does not pick a TTL key attribute automatically.
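A hedged boto3 sketch of enabling TTL (the table name and the attribute name expires_at are placeholders; any attribute name works, as long as the attribute holds a Unix epoch timestamp in seconds):

import boto3
import time

dynamodb = boto3.client('dynamodb')

# Enable TTL on an existing table, naming the attribute that stores the expiry timestamp
dynamodb.update_time_to_live(
    TableName='SessionData',                 # placeholder table name
    TimeToLiveSpecification={
        'Enabled': True,
        'AttributeName': 'expires_at',       # any name of your choice
    },
)

# When writing items, store the expiry as a Unix epoch timestamp in seconds
dynamodb.put_item(
    TableName='SessionData',
    Item={
        'session_id': {'S': 'abc123'},
        'expires_at': {'N': str(int(time.time()) + 7 * 24 * 3600)},   # expire in 7 days
    },
)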

You have been asked to design a serverless web application with DynamoDB as its database. The client wants to record each and every change in data in DynamoDB and store it individually as a file on S3. How can you achieve this? ]A. Write a script which will record each and every create or update requests and store it on S3. ]B. Enable DynamoDB streams and attach it to Lambda which will read the data and store it on S3. ]C. Enable DynamoDB streams and set the destination to S3. This will automatically store each change on S3. ]D. Use DynamoDB binary logging feature to record each and every change on S3.

Correct Answer: B You can enable DynamoDB Streams to record both the old and new images of each item and send them to a Lambda function, which can store the changes on S3. Option A is incorrect: this is not an ideal choice because the same result can be achieved with DynamoDB Streams without custom interception logic. Option C is incorrect: DynamoDB Streams supports only Lambda as a direct target as of the writing of this question, not S3. Option D is incorrect: DynamoDB does not support binary logging; that feature belongs to SQL databases.
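A hedged sketch of the Lambda handler end of this design (the bucket name is a placeholder, and it assumes the stream is configured with the NEW_AND_OLD_IMAGES view type):

import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'example-dynamodb-change-log'   # placeholder bucket name

def lambda_handler(event, context):
    # Each record describes one change (INSERT, MODIFY, REMOVE) from the DynamoDB stream
    for record in event['Records']:
        change = {
            'eventName': record['eventName'],
            'keys': record['dynamodb'].get('Keys'),
            'oldImage': record['dynamodb'].get('OldImage'),
            'newImage': record['dynamodb'].get('NewImage'),
        }
        # Store each change as its own S3 object, keyed by the stream sequence number
        s3.put_object(
            Bucket=BUCKET,
            Key=f"changes/{record['dynamodb']['SequenceNumber']}.json",
            Body=json.dumps(change),
        )
    return {'processed': len(event['Records'])}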

You have been asked to troubleshoot a serverless application which is constantly throwing errors while storing payment transaction details. The application is using DynamoDB as its database. Looking at the application logs, you see a ProvisionedThroughputExceededException error at each timestamp where the application was unable to serve the request successfully. Which of the following are the best ways to overcome this problem? (Select TWO) A. Create one more table and split the data between both the tables. B. Use Exponential Backoff algorithm. C. Increase RCUs and WCUs to get rid of ProvisionedThroughputExceededException. D. Use AWS SDK.

Correct Answers: B and D You can overcome this issue either by implementing exponential backoff manually or by using the AWS SDK, which automatically retries failed requests using a jitter-based backoff algorithm. Option A is incorrect: splitting the data across tables is not practically feasible and adds implementation overhead, so it is not a best practice. Option C is incorrect: merely increasing RCUs and WCUs every time you receive this error is not an optimal solution.
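Both approaches can be sketched briefly (hedged; the table name and retry values are placeholders): the AWS SDK for Python retries throttled requests automatically and its behaviour can be tuned through botocore's Config, or you can implement the backoff yourself.

import time
import random
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# Option D: let the SDK retry throttled requests with its built-in backoff
dynamodb = boto3.client('dynamodb', config=Config(
    retries={'max_attempts': 10, 'mode': 'adaptive'},
))

# Option B: manual exponential backoff with jitter (sketch)
def put_with_backoff(item, table='Payments', max_retries=5):    # placeholder table name
    for attempt in range(max_retries):
        try:
            return dynamodb.put_item(TableName=table, Item=item)
        except ClientError as err:
            if err.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
                raise
            # Sleep for an exponentially growing, jittered interval before retrying
            time.sleep(random.uniform(0, (2 ** attempt) * 0.1))
    raise RuntimeError('Request kept being throttled after retries')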

You have been assigned a task by your manager to write an API which will extract a limited amount of data from a DynamoDB table which has 30 columns and more than 10,000 rows of open data. Not all of the columns are relevant for your use case, so your manager has asked you to extract only 6 of the columns and store them in your in-house data warehouse tool. How can you achieve this? ]A. Use DataFilter operation while performing Scan operation. ]B. Use ProjectionExpression operation while performing Query operation. ]C. Use ProjectionExpression operation while performing Scan operation. ]D. Use ColumnProjection operation while performing Query operation.

Correct Answer: C Using a ProjectionExpression with the Scan operation is correct because the Scan operation reads the entire table and returns only the attributes named in the ProjectionExpression. Option A is incorrect: DataFilter is not a valid parameter. Option B is incorrect: a Query operation is used when you want to filter data by the partition/sort key combination, and there is no such requirement in the question; the task only asks you to restrict the attributes (columns) that are returned. Option D is incorrect: ColumnProjection is not a valid parameter.
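A hedged sketch of such a scan (the table and attribute names are placeholders); ExpressionAttributeNames is used because name is a DynamoDB reserved word and needs an alias.

import boto3

dynamodb = boto3.client('dynamodb')
paginator = dynamodb.get_paginator('scan')

# Return only the 6 attributes of interest from every item in the table
for page in paginator.paginate(
    TableName='OpenData',                                                     # placeholder table name
    ProjectionExpression='#n, city, country, category, score, updated_at',    # placeholder attributes
    ExpressionAttributeNames={'#n': 'name'},                                  # 'name' is a reserved word
):
    for item in page['Items']:
        pass  # forward the trimmed item to the in-house data warehouse tool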

Domain : Security As a cloud engineer, you have been granted access to an Amazon ECR image repository. You have to pull images from the repository as a part of a container definition when creating an Amazon ECS task. What statement correctly describes this scenario? A. You can access the Amazon ECR image repository only with Amazon EC2 launch type. B. You can access the Amazon ECR image repository only with AWS Fargate launch type. C. You can access the Amazon ECR image repository with Amazon EC2 or AWS Fargate launch types. D. You must grant your Amazon ECS task execution role permission to access Amazon ECS.

Correct Answer: C You can access the Amazon ECR image repository with either the Amazon EC2 or AWS Fargate launch type. Incorrect Answers: Options A, B are incorrect because access is not limited to a single launch type. Option D is incorrect as written; the correct statement is that for the AWS Fargate launch type, you must grant your Amazon ECS task execution role permission to access the Amazon ECR image repository, not Amazon ECS.

You are a developer who is supporting a containerised application. You are told to set up dynamic port mapping for Amazon ECS and load balancing. What statement is true in this case? A. Classic Load Balancer allows you to run multiple copies of a task on the same instance. B. Application Load Balancer uses static port mapping on a container instance. C. After creating an Amazon ECS service, you add the load balancer configuration. D. If dynamic port mapping is set up correctly, then you see the registered targets and the assigned port for the task.

Correct Answer: D This is simply the result of having set up dynamic port mapping correctly: you are able to see the registered targets in the target group used to route requests, along with the port assigned to each task. Incorrect Answers: Option A is incorrect because a Classic Load Balancer does not allow you to run multiple copies of a task on the same instance. Option B is incorrect because an Application Load Balancer uses dynamic port mapping, so you can run multiple tasks from a single service on the same container instance. Option C is incorrect because you can add a load balancer only during the creation of the service; you can't add, remove, or change the load balancer configuration of an existing service.

As a developer, you have to build Windows containers to use with AWS CodeBuild and integrate it into AWS CodePipeline and push the resulting containers to Amazon ECR. You might consider custom Docker images. What assertions are true in this scenario? (Select TWO.) A. The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions. B. AWS CodeBuild caches Docker images to avoid downloading new copies each build job, which reduces the time for large Docker images. C. AWS CodeBuild fully supports Windows builds in all AWS Regions. D. AWS CodeBuild has to download a new copy of the Docker image for each build job, which may take longer time for large Docker images. E. Any limitations with Windows Server containers can be addressed by using Amazon EC2 instances. AWS CodeBuild and AWS CodePipeline support Amazon EC2 instances directly.

Correct Answers: A, D The default maximum execution time for AWS CodePipeline custom actions is one hour; if your build jobs require more time, a limit increase can be requested through the AWS Service Quotas console. AWS CodeBuild downloads a new copy of the Docker image for each build job. Incorrect Answers: Option B is incorrect because AWS CodeBuild does not cache Docker images to avoid downloading new copies; instead, a new copy of the Docker image is downloaded for each build job. Option C is incorrect because AWS CodeBuild supports Windows builds only in some AWS Regions. Option E is incorrect because even though you can work around AWS Region limitations with Windows Server containers by using Amazon EC2 instances, the downside of this approach is an additional management burden; neither AWS CodeBuild nor AWS CodePipeline supports Amazon EC2 instances directly.

You are planning to run a Jenkins service requiring Amazon EFS as a shared, persistent storage attached to an Amazon ECS cluster and using AWS Fargate. What two statements are true in this context? (Select TWO.) A. You define the host and sourcePath parameters in the task definition. B. You mount the Amazon EFS file system before the Docker daemon starts. C. Amazon EFS file system support relies on platform version 1.3.0 or later. D. The supervisor container is responsible for managing the Amazon EFS volume. E. The supervisor container is visible in CloudWatch Container Insights.

Correct Answers: B, D You must configure your container instance AMI to mount the Amazon EFS file system before the Docker daemon starts. When specifying Amazon EFS volumes in tasks using the Fargate launch type, Fargate creates a supervisor container that is responsible for managing the Amazon EFS volume. The supervisor container uses a small amount of the task's memory. Incorrect Answers: Option A is incorrect because the host and sourcePath parameters are not supported for Fargate tasks. Option C is incorrect because for tasks using the Fargate launch type, Amazon EFS file system support was added when using platform version 1.4.0 or later. Option E is incorrect because the supervisor container is visible when querying the task metadata version 4 endpoint, but is not visible in CloudWatch Container Insights.

During a project enhancement, you are assigned to set up an AWS Application Load Balancer with a provisioned Amazon EKS cluster for ingress-based load balancing to AWS Fargate pods. What steps better describe how to achieve this? (Select TWO.) A. Create a cluster. Create an AWS Fargate profile. When your pods start, Fargate automatically allocates the IAM policy so the ALB Ingress Controller can manage the AWS resources and also manages compute resources on-demand to run them. B. Create an AWS Fargate profile. When your pods start, Fargate automatically allocates the IAM policy so the ALB Ingress Controller can manage the AWS resources and also manages compute resources on-demand to run them. These two steps automatically create and provision the cluster. C. Create a cluster. Create an AWS Fargate profile. Create a cluster role and a Kubernetes service account. These steps create the IAM policy so the ALB Ingress Controller can manage the AWS resources. D. Create a cluster. Create an AWS Fargate profile. Set up an OIDC provider with the cluster. Create the IAM policy so the ALB Ingress Controller can manage the AWS resources. Create a cluster role, role binding and a Kubernetes service account attached to the ALB Ingress Controller running pod. E. Deploy your application and create the Service and Ingress resources.

Correct Answers: D, E In order to set up an AWS Application Load Balancer with an Amazon EKS cluster using AWS Fargate like in this scenario, you create a cluster and an AWS Fargate profile. Then, you set up an OIDC provider with the cluster. You create the IAM policy so the ALB Ingress Controller can manage the AWS resources. Afterwards, you create a cluster role, role binding and a Kubernetes service account attached to the ALB Ingress Controller running pod. Once these steps are completed, you deploy your application and create the Service and Ingress resources. Incorrect Answers: Option A is incorrect because Fargate does not automatically allocate the IAM policy. Option B is incorrect because the steps described do not create and provision the cluster. Option C is incorrect because the steps described do not create the IAM policy so the ALB Ingress Controller can manage the AWS resources.

