Developer Associate Exam 2020 - List 1


A transport company uses a mobile GPS application to track the location of each of their 60 vehicles. The application records each vehicle's location to a DynamoDB table every 6 seconds. Each transmission is just under 1KB and throughput is spread evenly within that minute. How many units of write capacity should you specify for this table? a) 10 b) 600 c) 60 d) 100

a) 10 Writing to the database every six seconds means 10 writes/minute/vehicle. There are sixty vehicles in the fleet, so there are 600 writes/minute overall, or 600/60 seconds = 10 writes/second. Since each item is just under 1 KB, each write consumes one write capacity unit, so 10 WCUs are required.
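The same arithmetic as a quick Python sketch (all numbers come straight from the question):

```python
vehicles = 60
writes_per_minute_per_vehicle = 60 / 6   # one write every 6 seconds -> 10/min
writes_per_second = vehicles * writes_per_minute_per_vehicle / 60  # 10.0

item_size_kb = 1  # just under 1 KB -> 1 WCU per standard write
wcu_required = writes_per_second * item_size_kb
print(wcu_required)  # 10.0
```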

How long can a message be retained in an SQS Queue? a) 7 days b) 30 days c) 1 day d) 14 days

d) 14 days Messages will be retained in queues for up to 14 days; the default retention period is 4 days.
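For reference, a minimal boto3 sketch that raises a queue's retention to the 14-day maximum (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111111111111/my-queue",  # placeholder
    Attributes={"MessageRetentionPeriod": str(14 * 24 * 60 * 60)},  # 1209600 s = 14 days
)
```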

A development team wants to build an application using serverless architecture. The team plans to use AWS Lambda functions extensively to achieve this goal. The developers of the team work with different programming languages like Python, .NET, and JavaScript. The team wants to model the cloud infrastructure using any of these programming languages. Which AWS service/tool should the team use for the given use-case? a) AWS Cloud Development Kit (CDK) b) AWS CodeDeploy c) AWS Serverless Application Model (SAM) d) AWS CloudFormation

a) AWS Cloud Development Kit (CDK) The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define your cloud application resources using familiar programming languages. Provisioning cloud applications can be a challenging process that requires you to perform manual actions, write custom scripts, maintain templates, or learn domain-specific languages. AWS CDK uses the familiarity and expressive power of programming languages such as JavaScript/TypeScript, Python, Java, and .NET for modeling your applications. It provides you with high-level components called constructs that preconfigure cloud resources with proven defaults, so you can build cloud applications without needing to be an expert. AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. It also enables you to compose and share your own custom constructs that incorporate your organization's requirements, helping you start new projects faster.
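As an illustration, here is a minimal CDK v2 app in Python that models an S3 bucket; the stack and bucket names are arbitrary:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class AppStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A high-level construct with proven defaults; synthesizes to CloudFormation
        s3.Bucket(self, "DataBucket", versioned=True)

app = App()
AppStack(app, "AppStack")
app.synth()  # emits the CloudFormation template that CDK then deploys
```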

A developer has been tasked with writing a new Lambda function that generates statistics from data stored in DynamoDB. The function must be able to be invoked both synchronously and asynchronously. Which of the following AWS services would use synchronous invocations to trigger the Lambda function? (Choose 2) a) Application Load Balancer b) SNS c) API Gateway d) S3

a) Application Load Balancer and c) API Gateway API Gateway and Application Load Balancers both invoke Lambda functions synchronously, as they require a response from the function before returning their own response to the user. S3 and SNS do not require a response from the Lambda function, so they invoke it asynchronously.
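When calling Lambda directly, the difference is the InvocationType parameter; a boto3 sketch with a hypothetical function name:

```python
import json
import boto3

lam = boto3.client("lambda")
payload = json.dumps({"table": "Stats"})

# Synchronous: the caller blocks until the function returns its response
resp = lam.invoke(FunctionName="stats-fn", InvocationType="RequestResponse", Payload=payload)
print(resp["Payload"].read())

# Asynchronous: Lambda queues the event and returns immediately (HTTP 202)
lam.invoke(FunctionName="stats-fn", InvocationType="Event", Payload=payload)
```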

An e-commerce business has its applications built on a fleet of Amazon EC2 instances, spread across various Regions and AZs. The technical team has suggested using Elastic Load Balancers for better architectural design. What characteristics of an Elastic Load Balancer make it a winning choice? (Select two) a) Build a highly available system b) Deploy EC2 instances across multiple regions c) Improve vertical stability d) The Load Balancer communicates with the underlying EC2 instances using their public APIs e) Separate public traffic from private traffic

a) Build a highly available system e) Separate public traffic from private traffic Separate public traffic from private traffic - The nodes of an internet-facing load balancer have public IP addresses. Load balancers route requests to your targets using private IP addresses. Therefore, your targets do not need public IP addresses to receive requests from users over the internet. Build a highly available system - Elastic Load Balancing provides fault tolerance for your applications by automatically balancing traffic across targets - Amazon EC2 instances, containers, IP addresses, and Lambda functions - in multiple Availability Zones while ensuring only healthy targets receive traffic.

A firm runs its technology operations on a fleet of Amazon EC2 instances. The firm needs certain software to be available on the instances to support their daily workflows. The developer team has been told to use the user data feature of EC2 instances. Which of the following are true about the user data EC2 configuration? (Select two) a) By default, user data runs only during the boot cycle when you first launch an instance b) When an instance is running, you can update the user data by using root user credentials c) By default, user data is executed every time an EC2 instance is re-started d) By default, scripts entered as user data do not have root user privileges for executing e) By default, scripts entered as user data are executed with root privileges

a) By default, user data runs only during the boot cycle when you first launch an instance e) By default, scripts entered as user data are executed with root privileges Correct options: User Data is generally used to perform common automated configuration tasks and even run scripts after the instance starts. When you launch an instance in Amazon EC2, you can pass two types of user data - shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text or as a file. By default, scripts entered as user data are executed with root user privileges - Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script. Any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script. By default, user data runs only during the boot cycle when you first launch an instance - By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance. You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance. Incorrect options: By default, user data is executed every time an EC2 instance is re-started - As discussed above, this is not a default configuration of the system. But, can be achieved by explicitly configuring the instance. When an instance is running, you can update user data by using root user credentials - You can't change the user data if the instance is running (even by using root user credentials), but you can view it. By default, scripts entered as user data do not have root user privileges for executing - Scripts entered as user data are executed as the root user, hence do not need the sudo command in the script.
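For illustration, a sketch of launching an instance with a user-data shell script via boto3 (the AMI ID is a placeholder); note the script needs no sudo because it runs as root:

```python
import boto3

ec2 = boto3.client("ec2")
user_data = """#!/bin/bash
# Runs once, as root, on first boot
yum install -y httpd
systemctl enable --now httpd
"""
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,  # boto3 base64-encodes this for you
)
```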

You have been asked to use Elastic Beanstalk to build a number of web servers to use in your development environment. Which of the following services can you use? [Choose 4] a) EC2 b) S3 c) Lambda d) Elastic Load Balancer e) Auto Scaling Group

a) EC2 b) S3 d) Elastic Load Balancer e) Auto Scaling Group Except for Lambda, all of the services listed can be used to create a web server farm. AWS Lambda automatically runs your code without requiring you to provision or manage servers. Lambda is generally used for stateless, short-running tasks and is not suitable for long-running tasks like running a web server.

Which of the following security credentials can only be created by the AWS account root user? a) CloudFront Key Pairs b) IAM user passwords c) IAM User Access Keys d) EC2 Instance Key Pairs

a) CloudFront Key Pairs Correct option: For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for. [CloudFront Key Pairs] - IAM users can't create CloudFront key pairs. You must log in using root credentials to create key pairs. To create signed URLs or signed cookies, you need a signer. A signer is either a trusted key group that you create in CloudFront, or an AWS account that contains a CloudFront key pair. AWS recommends that you use trusted key groups with signed URLs and signed cookies instead of using CloudFront key pairs. Incorrect options: [EC2 Instance Key Pairs] - You use key pairs to access Amazon EC2 instances, such as when you use SSH to log in to a Linux instance. These key pairs can be created from the IAM user login and do not need root user access. [IAM User Access Keys] - Access keys consist of two parts: an access key ID and a secret access key. You use access keys to sign programmatic requests that you make to AWS, whether through AWS CLI commands, the SDKs, or AWS API operations. IAM users can create their own access keys and do not need root access. [IAM User passwords] - Every IAM user has access to their own credentials and can reset the password whenever they need to.

An application is hosted by a 3rd party and exposed at yourapp.3rdparty.com. You would like to have your users access your application using www.mydomain.com, which you own and manage under Route 53. What Route 53 record should you create? a) Create a CNAME record b) Create an Alias record c) Create a PTR record d) Create an A record

a) Create a CNAME record Correct option: Create a CNAME record A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org). CNAME records can be used to map one domain name to another. Keep in mind, though, that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for www.example.com, newproduct.example.com, and so on. Create an A record - Used to point a domain or subdomain to an IP address. An 'A record' cannot be used to map one domain name to another. Create a PTR record - A Pointer (PTR) record resolves an IP address to a fully-qualified domain name (FQDN), the opposite of what an A record does. PTR records are also called Reverse DNS records. A 'PTR record' cannot be used to map one domain name to another. Create an Alias Record - Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They also let you route traffic from one record in a hosted zone to another record. Third-party websites do not qualify since you have no control over them, so an 'Alias record' cannot be used to map one domain name to another here.
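A sketch of the corresponding record change via boto3 (the hosted zone ID is hypothetical):

```python
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical zone for mydomain.com
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.mydomain.com",  # not the zone apex, so a CNAME is allowed
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "yourapp.3rdparty.com"}],
        },
    }]},
)
```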

A developer is configuring CodeDeploy to deploy an application to an EC2 instance. The application's source code is stored within AWS CodeCommit. What permissions need to be configured to allow CodeDeploy to perform the deployment to EC2? a) Create an IAM policy with an action to allow codecommit:GitPull on the required repository. Attach the policy to the EC2 instance profile role. b) Create an IAM policy with an action to allow codecommit:CreatePullRequest on the required repository. Attach the policy to CodeDeploy's service role. c) Create an IAM policy with an action to allow codecommit:CreatePullRequest on the required repository. Attach the policy to the EC2 instance profile role. d) Create an IAM policy with an action to allow codecommit:GitPull on the required repository. Attach the policy to CodeDeploy's service role.

a) Create an IAM policy with an acton to allow codecommit:GitPull on the required repository. Attach the policy to the EC2 instance profile role. CodeDeploy interacts with EC2 via the CodeDeploy Agent, which must be installed and running on the EC2 instance. During a deployment the CodeDeploy Agent running on EC2 pulls the source code from CodeCommit. The EC2 instance accesses CodeCommit using the permissions defined in its instance profile role; therefore, it is the EC2 instance itself that needs CodeCommit access. The specific CodeCommit permission needed to pull code is codecommit:GitPull.
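A minimal sketch of attaching such an inline policy to the instance profile role; the role name, Region, account ID, and repository name are all placeholders:

```python
import json
import boto3

iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "codecommit:GitPull",
        "Resource": "arn:aws:codecommit:us-east-1:111111111111:my-repo",  # placeholder
    }],
}
iam.put_role_policy(
    RoleName="MyEc2InstanceProfileRole",  # placeholder role name
    PolicyName="AllowCodeCommitGitPull",
    PolicyDocument=json.dumps(policy),
)
```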

Which of the following approaches can improve the performance of your Lambda function? [Choose 2] a) Establish your database connections from within the Lambda execution environment to enable connection reuse b) Only include the libraries you need to minimize the size of your deployment package c) Store environment variables outside the function d) Package all dependencies with your deployment package

a) Establish your database connections from within the Lambda execution environment to enable connection reuse b) Only include the libraries you need to minimize the size of your deployment package Establishing connections within the execution environment allows them to be reused the next time the function is invoked, which saves time. Only including the libraries you need will minimize the time taken for Lambda to unpack the deployment package.
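The connection-reuse pattern in a Python handler, as a sketch (the table name is arbitrary): anything initialized outside the handler survives across warm invocations of the same execution environment.

```python
import boto3

# Initialized once per execution environment (cold start only)
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Stats")  # arbitrary table name

def handler(event, context):
    # Warm invocations reuse the client and its connections from above
    item = table.get_item(Key={"id": event["id"]})
    return item.get("Item")
```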

An organization wishes to use CodeDeploy to automate its application deployments. The organization has asked a developer to advise on which of their services can integrate with CodeDeploy. Which of the following services can the developer advise are compatible with CodeDeploy managed deployments? [Choose 4] a) Fargate b) On-premises servers c) Lambda d) Elastic K8s Service pods e) S3 static web hosting f) EC2

a) Fargate b) On-premises servers c) Lambda f) EC2 CodeDeploy supports EC2, ECS (both the EC2 and Fargate launch types), Lambda, and on-premises servers.

Which of the following statements are true about the concept of blue/green deployment regarding development and deployment of your application? [Choose 3] a) It allows you to shift traffic between two identical environments that are running different versions of your application b) The green environment represents the production environment c) The green environment is staged running a different version of your app d) The blue environment represents the current app version serving production traffic

a) It allows you to shift traffic between two identical environments that are running different versions of your application c) The green environment is staged running a different version of your app d) The blue environment represents the current app version serving production traffic The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic; in parallel, the green environment is staged running a different version of your application. This allows you to easily deploy changes to your application and roll back very quickly.

Which of the following best describes how KMS Encryption works? a) KMS stores the CMK and receives data from the clients, which it encrypts and sends back b) KMS generates a new CMK for each Encrypt call and encrypts the data with it c) KMS sends the CMK to the client, which performs the encryption and then deletes the CMK d) KMS receives CMK from the client at every Encrypt call and encrypts the data with that

a) KMS stores the CMK and receives data from the clients, which it encrypts and sends back Correct option: [KMS stores the CMK, and receives data from the clients, which it encrypts and sends back] A customer master key (CMK) is a logical representation of a master key. The CMK includes metadata, such as the key ID, creation date, description, and key state. The CMK also contains the key material used to encrypt and decrypt data. You can generate CMKs in KMS, in an AWS CloudHSM cluster, or import them from your key management infrastructure. AWS KMS supports symmetric and asymmetric CMKs. A symmetric CMK represents a 256-bit key that is used for encryption and decryption. An asymmetric CMK represents an RSA key pair that is used for encryption and decryption or signing and verification (but not both), or an elliptic curve (ECC) key pair that is used for signing and verification. AWS KMS supports three types of CMKs: customer-managed CMKs, AWS managed CMKs, and AWS owned CMKs. Incorrect options: [KMS receives CMK from the client at every encrypt call, and encrypts the data with that] - You can import your own CMK (Customer Master Key) but it is done once and then you can encrypt/decrypt as needed. [KMS sends the CMK to the client, which performs the encryption and then deletes the CMK] - KMS does not send CMK to the client, KMS itself encrypts, and then decrypts the data. [KMS generates a new CMK for each Encrypt call and encrypts the data with it] - KMS does not generate a new key each time but you can have KMS rotate the keys for you. Best practices discourage extensive reuse of encryption keys so it is good practice to generate new keys.
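The pattern in a short boto3 sketch (the key alias is a placeholder); the plaintext travels to KMS, but the CMK never leaves it:

```python
import boto3

kms = boto3.client("kms")

# The client sends plaintext; KMS encrypts it with the stored CMK and returns ciphertext
ciphertext = kms.encrypt(KeyId="alias/app-key", Plaintext=b"secret data")["CiphertextBlob"]

# Decrypt needs no KeyId for symmetric CMKs; KMS looks up the key from the ciphertext
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```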

The development team at a company creates serverless solutions using AWS Lambda. Functions are invoked by clients via AWS API Gateway, which anyone can access. The team lead would like to control access using a 3rd party authorization mechanism. As a Developer Associate, which of the following options would you recommend for the given use-case? a) Lambda Authorizer b) Cognito User Pools c) API Gateway User Pools d) IAM permissions with SigV4

a) Lambda Authorizer Correct option: "Lambda Authorizer" An Amazon API Gateway Lambda authorizer (formerly known as a custom authorizer) is a Lambda function that you provide to control access to your API. A Lambda authorizer uses bearer token authentication strategies, such as OAuth or SAML. Before creating an API Gateway Lambda authorizer, you must first create the AWS Lambda function that implements the logic to authorize and, if necessary, to authenticate the caller. Incorrect options: "IAM permissions with SigV4" - Signature Version 4 is the process of adding authentication information to AWS requests sent over HTTP. You would still need to provide permissions, but our requirements call for 3rd party authentication, which is where a Lambda authorizer comes into play. "Cognito User Pools" - A Cognito user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK. This is managed by AWS and therefore does not meet our requirements. "API Gateway User Pools" - This is a made-up option, added as a distractor.
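A skeleton of a TOKEN-type Lambda authorizer in Python; validate_token stands in for the call to the third-party provider and is purely hypothetical:

```python
def validate_token(token):
    # Hypothetical: call the third-party OAuth/SAML provider to verify the token
    return token == "allow-me"

def handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if validate_token(token) else "Deny"
    # API Gateway expects an IAM policy document back from the authorizer
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```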

Which of the following platforms are supported in Elastic Beanstalk? (Choose 3) a) Passenger b) Tomcat c) JBoss d) Docker

a) Passenger, b) Tomcat, and d) Docker Elastic Beanstalk supports common platforms including Tomcat, Passenger, Puma, and Docker (see Elastic Beanstalk Supported Platforms).

After a test deployment in an Elastic Beanstalk environment, a developer noticed that all accumulated Amazon EC2 burst balances were lost. Which of the following options can lead to this behavior? a) The deployment was either run with immutable updates or in traffic splitting mode b) The deployment was run as a Rolling deployment, resulting in the resetting of EC2 burst balances c) When a canary deployment fails, it resets the EC2 burst balances to 0 d) The deployment was run as an All-at-once deployment, flushing all the accumulated EC2 burst balances

a) The deployment was either run with immutable updates or in traffic splitting mode Correct option: [The deployment was either run with immutable updates or in traffic splitting mode] - Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. Traffic-splitting deployments let you perform canary testing as part of your application deployment. In a traffic-splitting deployment, Elastic Beanstalk launches a full set of new instances just like during an immutable deployment. It then forwards a specified percentage of incoming client traffic to the new application version for a specified evaluation period. Some policies replace all instances during the deployment or update. This causes all accumulated Amazon EC2 burst balances to be lost. It happens in the following cases: 1. Managed platform updates with instance replacement enabled 2. Immutable updates 3. Deployments with immutable updates or traffic splitting enabled Incorrect options: [The deployment was run as a Rolling deployment, resulting in the resetting of EC2 burst balances] - With rolling deployments, Elastic Beanstalk splits the environment's Amazon EC2 instances into batches and deploys the new version of the application to one batch at a time. Rolling deployments do not result in loss of EC2 burst balances. [The deployment was run as an All-at-once deployment, flushing all the accumulated EC2 burst balances] - The traditional All-at-once deployment, wherein all the instances are updated simultaneously, does not result in loss of EC2 burst balances. [When a canary deployment fails, it resets the EC2 burst balances to zero] - This is incorrect and given only as a distractor.

You are working on a social media application which allows users to share BBQ recipes and photos. You would like to schedule a Lambda function to run every 10 minutes which checks for the latest posts and sends a notification including an image thumbnail to users who have previously engaged with posts from the same user. How can you configure your function to automatically run at 10 minute intervals? a) Use CloudWatch Events to schedule the function b) Use Lambda with cron to schedule the function c) Use AWS SWF to schedule the function d) Use EC2 with cron to schedule the function

a) Use CloudWatch Events to schedule the function You can direct AWS Lambda to execute a function on a regular schedule using CloudWatch Events. You can specify a fixed rate - for example, execute a Lambda function every hour or 15 minutes, or you can specify a cron expression.
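A sketch of the schedule wiring in boto3; the rule name, function name, and ARNs are placeholders:

```python
import boto3

events = boto3.client("events")
rule_arn = events.put_rule(
    Name="check-posts-every-10-min",
    ScheduleExpression="rate(10 minutes)",  # or a cron expression
)["RuleArn"]

events.put_targets(
    Rule="check-posts-every-10-min",
    Targets=[{"Id": "1", "Arn": "arn:aws:lambda:us-east-1:111111111111:function:check-posts"}],
)

# The function also needs a resource-based permission so Events may invoke it
boto3.client("lambda").add_permission(
    FunctionName="check-posts",
    StatementId="allow-cloudwatch-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
```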

You are developing an application in API Gateway, and need to categorize your APIs based on their status as: sandbox, test, or prod. You want to use a name-value pair system to label and manage your APIs. What feature of API Gateway would you use to accomplish this task? a) Use stage variables based on the API deployment stage to interact with different backend endpoints. b) Use tags based on stages. The tag can be set directly on the stage of the API. c) Use environment variables based on the API deployment stage to interact with different backend endpoints. d) Use the API Gateway console to create a canary release deployment.

a) Use stage variables based on the API deployment stage to interact with different backend endpoints. Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of a REST API. They act like environment variables and can be used in your API setup and mapping templates. With deployment stages in API Gateway, you can manage multiple release stages for each API, such as: alpha, beta, and production. Using stage variables you can configure an API deployment stage to interact with different backend endpoints. Environment variables apply to AWS Lambda. Canary release is a software development strategy in which a new version of an API (as well as other software) is deployed as a canary release for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. (This would be appropriate when your application is live and you'd want to reduce the risk inherent in a new software version release.) A tag is a metadata label that you assign or that AWS assigns to an AWS resource and would not impact the functionality of your APIs.
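A sketch of attaching stage variables when creating a stage; the API ID, deployment ID, and backend URL are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")
apigw.create_stage(
    restApiId="a1b2c3d4e5",   # placeholder API id
    stageName="test",
    deploymentId="dep123",    # placeholder deployment id
    variables={
        "status": "test",
        "backendUrl": "https://test.backend.example.com",  # placeholder endpoint
    },
)
```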

You have developed a CloudFormation stack in the AWS console. You have a few CloudFormation stacks saved in the region you are operating in. When you launch your stack that contains many EC2 resources, you receive the error Status=start_failed. How would you troubleshoot this issue? a) Use the Support Center in the AWS Management Console to request an increase in the number of EC2 instances. b) Use the Support Center in the AWS Management Console to request an increase in the number of CloudFormation stacks. c) Save the template via the AWS CLI. d) Wait a few minutes before saving the template and retry the process.

a) Use the Support Center in the AWS Management Console to request an increase in the number of EC2 instances. Verify that you didn't reach a resource limit. For example, the default number of EC2 instances that you can launch is 20. If you try to create more instances than your account limit, the instance creation fails and you receive the status start_failed. Also, during an update, if a resource is replaced, AWS CloudFormation creates new resources before it deletes the old ones. This replacement might put your account over the resource limit, which would cause your update to fail. You can delete excess resources or request a limit increase. Saving the template in the CLI or waiting a few minutes will have no impact. The default limit for CloudFormation stacks is 200 and the question explicitly states that there are only a very small number of existing stacks.

You have developed a Lambda function that is triggered by an application running on EC2. The Lambda function currently has an execution role that allows read/write access to EC2, and also has the AWSLambdaBasicExecutionRole managed policy attached. After some architectural changes in your environment, the Lambda function now needs to access some data stored in S3. What changes are required for your Lambda function to fulfill this new task of accessing data in S3? a) Attach a resource-based policy to the Lambda function that will allow it to access the S3 bucket resource. b) Add permissions to the function's execution role to grant it the necessary access to S3 in IAM. c) Create a new Lambda function with an execution role allowing read/write access to EC2 and S3, since you cannot add permissions to a function's execution role. d) No changes are required. The function has all the permissions it needs to access S3.

b) Add permissions to the function's execution role to grant it the necessary access to S3 in IAM An AWS Lambda function's execution role grants it permission to access AWS services and resources. You provide this role when you create a function, and Lambda assumes the role when your function is invoked. You can update (add or remove) the permissions of a function's execution role at any time, or configure your function to use a different role. Add permissions for any services that your function calls with the AWS SDK, and for services that Lambda uses to enable optional features. The AWSLambdaBasicExecutionRole AWS managed policy grants Lambda the permissions to upload logs to CloudWatch. This policy alone will not grant it access to S3. You do not need to create a new function as you can add or remove permissions to an execution role at any time. Resource-based policies let you grant usage permission to other accounts on a per-resource basis, and allow an AWS service to invoke your function, so this is not a valid way to grant S3 permissions to the existing Lambda function.
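For instance, the AWS managed read-only S3 policy can be attached to the existing execution role; the role name is a placeholder:

```python
import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",  # placeholder
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```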

ECS Fargate container tasks are usually spread across Availability Zones (AZs) and the underlying workloads need persistent cross-AZ shared access to the data volumes configured for the container tasks. Which of the following solutions is the best choice for these workloads? a) AWS Storage Gateway volumes b) Amazon EFS volumes c) Docker volumes d) Bind mounts

b) Amazon EFS volumes Correct option: Amazon EFS volumes - EFS volumes provide a simple, scalable, and persistent file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files. Your applications can have the storage they need, when they need it. Amazon EFS volumes are supported for tasks hosted on Fargate or Amazon EC2 instances. You can use Amazon EFS file systems with Amazon ECS to export file system data across your fleet of container instances. That way, your tasks have access to the same persistent storage, no matter the instance on which they land. However, you must configure your container instance AMI to mount the Amazon EFS file system before the Docker daemon starts. Also, your task definitions must reference volume mounts on the container instance to use the file system. Incorrect options: Docker volumes - A Docker-managed volume that is created under /var/lib/docker/volumes on the host Amazon EC2 instance. Docker volume drivers (also referred to as plugins) are used to integrate the volumes with external storage systems, such as Amazon EBS. The built-in local volume driver or a third-party volume driver can be used. Docker volumes are only supported when running tasks on Amazon EC2 instances. Bind mounts - A file or directory on the host, such as an Amazon EC2 instance or AWS Fargate, is mounted into a container. Bind mount host volumes are supported for tasks hosted on Fargate or Amazon EC2 instances. Bind mounts provide temporary storage, and hence these are a wrong choice for this use case. AWS Storage Gateway volumes - This is an incorrect choice, given only as a distractor.
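A trimmed task-definition sketch showing how an EFS volume is declared and mounted; the image URI and file system ID are placeholders:

```python
import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="api-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "api",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/api:latest",  # placeholder
        "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},  # placeholder
    }],
)
```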

You're a developer doing contract work for the media sector. Since you work alone, you opt for technologies that require little maintenance, which allows you to focus more on your coding. You have chosen AWS Elastic Beanstalk to assist with the deployment of your applications. While reading online documentation you find that Elastic Beanstalk relies on another AWS service to provision your resources. Which of the following represents this AWS service? a) CodeDeploy b) CloudFormation c) Systems Manager d) CodeCommit

b) CloudFormation AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes. AWS CloudFormation supports Elastic Beanstalk application environments as one of the AWS resource types. This allows you, for example, to create and manage an AWS Elastic Beanstalk-hosted application along with an RDS database to store the application data.

As a developer, you are working on creating an application using AWS Cloud Development Kit (CDK). Which of the following represents the correct order of steps to be followed for creating an app using AWS CDK? a) Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account b) Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account c) Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app d) Create the app from a template provided by AWS CloudFormation -> Add code to the app to create resources within stacks -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account -> Build the app

b) Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account Correct option: Create the app from a template provided by AWS CDK -> Add code to the app to create resources within stacks -> Build the app (optional) -> Synthesize one or more stacks in the app -> Deploy stack(s) to your AWS account The standard AWS CDK development workflow is similar to the workflow you're already familiar with as a developer. There are a few extra steps: 1. Create the app from a template provided by AWS CDK - Each AWS CDK app should be in its own directory, with its own local module dependencies. Create a new directory for your app, then initialize the app using the cdk init command, specifying the desired template ("app") and programming language. The cdk init command creates a number of files and folders inside the created home directory to help you organize the source code for your AWS CDK app. 2. Add code to the app to create resources within stacks - Add custom code as is needed for your application. 3. Build the app (optional) - In most programming environments, after making changes to your code, you'd build (compile) it. This isn't strictly necessary with the AWS CDK; the Toolkit does it for you so you can't forget. But you can still build manually whenever you want to catch syntax and type errors. 4. Synthesize one or more stacks in the app to create an AWS CloudFormation template - The synthesis step catches logical errors in defining your AWS resources. If your app contains more than one stack, you'd need to specify which stack(s) to synthesize. 5. Deploy one or more stacks to your AWS account - It is optional (though good practice) to synthesize before deploying. The AWS CDK synthesizes your stack before each deployment. If your code has security implications, you'll see a summary of these and need to confirm them before deployment proceeds. cdk deploy is used to deploy the stack using CloudFormation templates. This command displays progress information as your stack is deployed. When it's done, the command prompt reappears.

CodeCommit is a managed version control service that hosts private Git repositories in the AWS cloud. Which of the following credential types is NOT supported by IAM for CodeCommit? a) SSH Keys b) IAM username and password c) AWS access keys d) Git credentials

b) IAM username and password Correct option: [IAM username and password] - IAM username and password credentials cannot be used to access CodeCommit. Incorrect options: [Git credentials] - These are an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS. [SSH Keys] - These are a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH. [AWS access keys] - You can use these keys with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.

What is the CloudFormation helper script cfn-init used for? a) Fetch a metadata block from an AWS CloudFormation template b) Install packages and start/stop services on EC2 instances. c) Initialize the CloudFormation IAM service role. d) Fetch required credentials before provisioning AWS resources.

b) Install packages and start/stop services on EC2 instances. CloudFormation helper scripts are Python scripts that can be used as part of a CloudFormation template to automate common tasks during stack creation. The cfn-init helper script can be used to install packages, create files, and start/stop services. cfn-get-metadata can be used to fetch a metadata block from AWS CloudFormation and print it to stdout.

You have deployed your app on EC2 using Elastic Beanstalk. You would like to configure your app to send data to X-Ray. Where should you install the X-Ray daemon? a) Manually provision a new EC2 instance and install the X-Ray daemon on the new instance b) Install the X-Ray daemon on the EC2 instances inside your Elastic Beanstalk environment. c) Install the X-Ray daemon on a Docker container running on your EC2 instance d) Install the X-Ray daemon on the EC2 instances located in your own data center

b) Install the X-Ray daemon on the EC2 instances inside your Elastic Beanstalk environment. To relay trace data from your app to AWS X-Ray, you can run the X-Ray daemon on your Elastic Beanstalk environment's Amazon EC2 instances. Elastic Beanstalk platforms provide a configuration option that you can set to run the daemon automatically. You can enable the daemon in a configuration file in your source code or by choosing an option in the Elastic Beanstalk console. When you enable the configuration option, the daemon is installed on the instance and runs as a service.

You are developing an online banking website which will be accessed by a global customer base. You are planning to use CloudFront to ensure users experience good performance regardless of their location. The Security Architect working on the project asks you to ensure that all requests to CloudFront are encrypted using HTTPS. How can you configure this? a) Set the Request Protocol Policy to redirect HTTP to HTTPS b) Set the Viewer Protocol Policy to redirect HTTP to HTTPS c) Set the User Protocol Policy to redirect HTTP to HTTPS d) Set the Session Protocol Policy to redirect HTTP to HTTPS

b) Set the Viewer Protocol Policy to redirect HTTP to HTTPS The Viewer Protocol Policy defines the protocols that can be used to access CloudFront content; see the AWS documentation on requiring HTTPS for communication between viewers and CloudFront.

A VPC has four subnets: 1) Subnet1 has a route table entry with destination: 0.0.0.0/0 and target: VPC Internet Gateway ID; 2) Subnet2 has a route table entry with destination 0.0.0.0/0 and target: NAT Gateway ID; 3) Subnet3 has an EC2 instance that serves as a bastion host; 4) Subnet4 has a security group inbound rule with source: 0.0.0.0/0; Protocol: TCP; and Port Range: 1433. What would be the recommended subnet for hosting an RDS database instance? a) Subnet4 b) Subnet2 c) Subnet1 d) Subnet3

b) Subnet2 Security best practice states that RDS database instances should be deployed to a private subnet. A private subnet only has private IPs with no direct access to the public internet; outbound connectivity is provided via a NAT gateway. Thus, Subnet2 is a suitable choice for deploying RDS instances as it is characterized as a private subnet. Subnet1 has direct connectivity to the public internet via the Internet Gateway. Thus, it is characterized as a public subnet and would not be a recommended location for deploying databases. Bastion hosts allow direct inbound connections from the public internet. This means that Subnet3 would not be a good choice to host databases. Lastly, Subnet4 contains a security group rule that allows inbound connectivity from the public internet on the database port (1433). This makes it a poor candidate to host databases.

Your main application currently stores its credentials as a text file on an EC2 server. Your manager has informed you that this is an insecure practice and has told you to store these credentials in an AWS managed service instead. AWS Systems Manager Parameter Store and AWS Secrets Manager can be used for the secure storage of credentials. Of the below features, which apply to both Secrets Manager and Parameter Store? [Choose 3] a) Available at no additional charge provided you store fewer than 10,000 credentials b) Supports encryption at rest using customer-owned KMS keys c) Integrated with Identity and Access Management d) Manages rotation and lifecycle of credentials e) Can store credentials in hierarchical form

b) Supports encryption at rest using customer-owned KMS keys c) Integrated with Identity and Access Management e) Can store credentials in hierarchical form Many aspects of Parameter Store and Secrets Manager appear very similar, but Secrets Manager charges you for storing each secret and also provides a secret rotation service, whereas Parameter Store does not. Therefore these are the only three features shared by both services.
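A sketch of hierarchical naming plus KMS decryption in Parameter Store; the parameter name is a placeholder:

```python
import boto3

ssm = boto3.client("ssm")

# Hierarchical name; WithDecryption uses the KMS key the parameter was encrypted with
param = ssm.get_parameter(Name="/myapp/prod/db-password", WithDecryption=True)
db_password = param["Parameter"]["Value"]
```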

Your application runs on Lambda and you would like to enable your functions to communicate with EC2 instances in your private subnet. How can you enable this? a) Add a NAT gateway to the subnet b) Update your Lambda function with the relevant VPC config information c) Launch the Lambda function in the same subnet as your EC2 instances d) Add an internet gateway to your VPC

b) Update your Lambda function with the relevant VPC config information In order to enable Lambda to communicate with your private VPC, you need to add VPC config information. You do not need to add a NAT or internet gateway, and it is not possible to launch a Lambda function inside your own VPC or subnet. Please be aware that in September 2019, AWS announced that they are simplifying VPC networking for Lambda functions ("Announcing improved VPC networking for AWS Lambda functions"), with changes planned to be rolled out on a per-region basis. However, the exams generally run at least 6-12 months behind any new updates or announcements, so unfortunately you may still see exam questions referring to the old way of doing things.
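The VPC config change as a sketch; the function, subnet, and security group IDs are placeholders:

```python
import boto3

lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="my-fn",  # placeholder
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],     # private subnet with the EC2 instances
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # must allow traffic to the instances
    },
)
```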

An e-commerce company has developed an API that is hosted on Amazon ECS. Variable traffic spikes on the application are causing order processing to take too long. The application processes orders using Amazon SQS queues. The `ApproximateNumberOfMessagesVisible` metric spikes at very high values throughout the day which triggers the CloudWatch alarm. Other ECS metrics for the API containers are well within limits. As a Developer Associate, which of the following will you recommend for improving performance while keeping costs low? a) Use Docker swarm b) Use backlog per instance metric with target tracking scaling policy c) Use ECS service scheduler d) Use ECS step scaling policy

b) Use backlog per instance metric with target tracking scaling policy Correct option: Use backlog per instance metric with target tracking scaling policy - If you use a target tracking scaling policy based on a custom Amazon SQS queue metric, dynamic scaling can adjust to the demand curve of your application more effectively. The issue with using a CloudWatch Amazon SQS metric like ApproximateNumberOfMessagesVisible for target tracking is that the number of messages in the queue might not change proportionally to the size of the Auto Scaling group that processes messages from the queue. That's because the number of messages in your SQS queue does not solely define the number of instances needed. The number of instances in your Auto Scaling group can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain. You can calculate these numbers as follows: Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). Divide that number by the fleet's running capacity, which for an Auto Scaling group is the number of instances in the InService state, to get the backlog per instance. Acceptable backlog per instance: To calculate your target value, first determine what your application can accept in terms of latency. Then, take the acceptable latency value and divide it by the average time that an EC2 instance takes to process a message. To illustrate with an example, let's say that the current ApproximateNumberOfMessages is 1500 and the fleet's running capacity is 10. If the average processing time is 0.1 seconds for each message and the longest acceptable latency is 10 seconds, then the acceptable backlog per instance is 10 / 0.1, which equals 100. This means that 100 is the target value for your target tracking policy. If the backlog per instance is currently at 150 (1500 / 10), your fleet scales out, and it scales out by five instances to maintain proportion to the target value. Incorrect options: Use Docker swarm - A Docker swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines. A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). Use ECS service scheduler - Amazon ECS provides a service scheduler (for long-running tasks and applications), the ability to run tasks manually (for batch jobs or single run tasks), with Amazon ECS placing tasks on your cluster for you. You can specify task placement strategies and constraints that allow you to run tasks in the configuration you choose, such as spread out across Availability Zones. It is also possible to integrate with custom or third-party schedulers. Use ECS step scaling policy - Although Amazon ECS Service Auto Scaling supports using Application Auto Scaling step scaling policies, AWS recommends using target tracking scaling policies instead. For example, if you want to scale your service when CPU utilization falls below or rises above a certain level, create a target tracking scaling policy based on the CPU utilization metric provided by Amazon ECS. 
With step scaling policies, you create and manage the CloudWatch alarms that trigger the scaling process. If the target tracking alarms don't work for your use case, you can use step scaling. You can also use target tracking scaling with step scaling for an advanced scaling policy configuration. For example, you can configure a more aggressive response when utilization reaches a certain level. Step Scaling scales your cluster on various lengths of steps based on different ranges of thresholds. Target tracking on the other hand intelligently picks the smart lengths needed for the given configuration.
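The worked example from the explanation, restated as a small calculation:

```python
approx_number_of_messages = 1500   # length of the SQS queue
running_capacity = 10              # InService instances in the Auto Scaling group
avg_processing_time = 0.1          # seconds an instance needs per message
acceptable_latency = 10.0          # longest acceptable queue delay, in seconds

backlog_per_instance = approx_number_of_messages / running_capacity  # 150
target_backlog = acceptable_latency / avg_processing_time            # 100 -> target value

print(backlog_per_instance, target_backlog)  # 150.0 100.0 -> the fleet scales out
```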

A startup with a newly created AWS account is testing different EC2 instances. They have used a burstable performance instance - a t2.micro - for 35 seconds and stopped the instance. At the end of the month, what is the instance usage duration that the company is charged for? a) 35 sec b) 30 sec c) 0 sec d) 60 sec

c) 0 sec Correct option: Burstable performance instances, which are T3, T3a, and T2 instances, are designed to provide a baseline level of CPU performance with the ability to burst to a higher level when required by your workload. Burstable performance instances are the only instance types that use credits for CPU usage. 0 seconds - AWS states that, if your AWS account is less than 12 months old, you can use a t2.micro instance for free within certain usage limits, so this usage incurs no charge. (Without the free tier, per-second billing with its 60-second minimum would have made 60 seconds the billable duration, which is why that option appears as a distractor.)

An organization has offices across multiple locations and the technology team has configured an Application Load Balancer across targets in multiple Availability Zones. The team wants to analyze the incoming requests for latencies and the client's IP address patterns. Which feature of the Load Balancer will help collect the required information? a) CloudTrail Logs b) ALB request tracing c) ALB access logs d) CloudWatch metrics

c) ALB access logs Correct option: [ALB access logs] - Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues. Access logging is an optional feature of Elastic Load Balancing that is disabled by default. Incorrect options: [CloudTrail logs] - Elastic Load Balancing is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Elastic Load Balancing. CloudTrail captures all API calls for Elastic Load Balancing as events. You can use AWS CloudTrail to capture detailed information about the calls made to the Elastic Load Balancing API and store them as log files in Amazon S3. You can use these CloudTrail logs to determine which API calls were made, the source IP address where the API call came from, who made the call, when the call was made, and so on. [CloudWatch metrics] - Elastic Load Balancing publishes data points to Amazon CloudWatch for your load balancers and your targets. CloudWatch enables you to retrieve statistics about those data points as an ordered set of time-series data, known as metrics. You can use metrics to verify that your system is performing as expected. This is the right feature if you wish to track a certain metric. [ALB request tracing] - You can use request tracing to track HTTP requests. The load balancer adds a header with a trace identifier to each request it receives. Request tracing will not help you to analyze latency specific data.
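Access logging is enabled per load balancer via its attributes; a sketch with a placeholder ARN and bucket name:

```python
import boto3

elbv2 = boto3.client("elbv2")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-alb/abc123",  # placeholder
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},  # placeholder bucket
    ],
)
```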

You want to receive an email whenever a user pushes code to your CodeCommit repository. How can you configure this? a) Create a new SNS topic and configure it to poll for CodeCommit events. Ask all your users to subscribe to the topic to receive notifications b) Configure a CloudWatch Events rule to send a message to SES, which will trigger an email to be sent whenever a user pushes code to the repository c) Configure Notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic, which will trigger an email to be sent to the user d) You can configure the SNS notifications in the CodeCommit console e) Configure a CloudWatch Events rule to send a message to SQS, which will trigger an email to be sent whenever a user pushes code to the repository

c) Configure Notifications in the console; this will create a CloudWatch Events rule to send a notification to an SNS topic, which will trigger an email to be sent to the user. When you configure notifications from the CodeCommit console, the CloudWatch Events rule and SNS topic are created for you, and users subscribed to the topic receive an email whenever the selected repository events, such as pushes, occur.

You are developing a serverless app written in Node.js, which will run on Lambda. During performance testing, you notice the app is not running as quickly as you would like, and you suspect your Lambda function does not have enough CPU capacity. Which of the following options will improve the overall performance of your function? a) Configure more CPU capacity in the Lambda settings b) Configure an ElastiCache cluster and place it in front of your Lambda function c) Configure more memory for your function d) Use API Gateway to expose the Lambda function in a more scalable way

c) Configure more memory for your function In the AWS Lambda resource model, you choose the amount of memory you want for your function and are allocated proportional CPU power and other resources. For example, choosing 256MB of memory allocates twice as much CPU power as 128MB and half as much CPU power as 512MB. Lambda allocates CPU power linearly in proportion to the amount of memory configured.
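Raising memory (and with it CPU) is a one-line configuration change; a sketch with a placeholder function name:

```python
import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="my-fn",  # placeholder
    MemorySize=1024,       # MB; CPU is allocated in proportion to this value
)
```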

When running a Rolling deployment in an Elastic Beanstalk environment, only two batches completed the deployment successfully, while the rest of the batches failed to deploy the updated version. Following this, the development team terminated the instances from the failed deployment. What will be the status of these failed instances post termination? a) Elastic Beanstalk will replace the failed instances with instances running the app version from the oldest successful deployment b) Elastic Beanstalk will replace the failed instances after the app version to be installed is manually chosen from the console c) Elastic Beanstalk will replace the failed instances with instances running the application version from the most recent successful deployment d) Elastic Beanstalk will not replace the failed instances

c) Elastic Beanstalk will replace the failed instances with instances running the application version from the most recent successful deployment Correct option: Elastic Beanstalk will replace them with instances running the application version from the most recent successful deployment. When processing a batch, Elastic Beanstalk detaches all instances in the batch from the load balancer, deploys the new application version, and then reattaches the instances. If you enable connection draining, Elastic Beanstalk drains existing connections from the Amazon EC2 instances in each batch before beginning the deployment. If a deployment fails after one or more batches completed successfully, the completed batches run the new version of your application while any pending batches continue to run the old version. You can identify the version running on the instances in your environment on the health page in the console. This page displays the deployment ID of the most recent deployment that was executed on each instance in your environment. If you terminate instances from the failed deployment, Elastic Beanstalk replaces them with instances running the application version from the most recent successful deployment. Incorrect options: Elastic Beanstalk will not replace the failed instances / Elastic Beanstalk will replace the failed instances with instances running the application version from the oldest successful deployment / Elastic Beanstalk will replace the failed instances after the application version to be installed is manually chosen from the AWS Console - These three options contradict the explanation provided above, so they are incorrect.

To enable HTTPS connections for his web application deployed on the AWS Cloud, a developer is in the process of creating a server certificate. Which AWS entities can be used to deploy SSL/TLS server certificates? (Select two) a) AWS Secrets Manager b) AWS CloudFormation c) IAM d) AWS Certificate Manager e) AWS Systems Manager

c) IAM d) AWS Certificate Manager [AWS Certificate Manager] - AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy server certificates. With ACM you can request a certificate or deploy an existing ACM or external certificate to AWS resources. Certificates provided by ACM are free and automatically renew. In a supported Region, you can use ACM to manage server certificates from the console or programmatically. [IAM] - IAM is used as a certificate manager only when you must support HTTPS connections in a Region that is not supported by ACM. IAM securely encrypts your private keys and stores the encrypted version in IAM SSL certificate storage. IAM supports deploying server certificates in all Regions, but you must obtain your certificate from an external provider for use with AWS. You cannot upload an ACM certificate to IAM. Additionally, you cannot manage your certificates from the IAM console. Incorrect options: [AWS Secrets Manager] - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It is not used to provision or deploy SSL/TLS server certificates. [AWS Systems Manager] - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as running commands, managing patches, and configuring servers across AWS Cloud as well as on-premises infrastructure. It is not an entity for deploying server certificates. [AWS CloudFormation] - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. It is not itself the entity that stores or deploys SSL/TLS server certificates.
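Requesting an ACM certificate is a single call; a sketch with a placeholder domain:

```python
import boto3

acm = boto3.client("acm")
resp = acm.request_certificate(
    DomainName="www.example.com",             # placeholder
    ValidationMethod="DNS",                   # prove ownership via a DNS record
    SubjectAlternativeNames=["example.com"],  # optional extra names
)
print(resp["CertificateArn"])
```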

You work for a company which facilitates and organizes technical conferences. You ran a large number of events this year with many high-profile speakers and would like to enable your customers to access videos of the most popular presentations. You have stored all your content in a publicly accessible S3 bucket, but you would like to restrict access so that people can only access the videos after logging into your website. How should you configure this? [Choose 2] a) Use CloudFront with HTTPS to enable secure access to videos b) Use SSE-S3 to generate a signed URL c) Remove public read access from the S3 bucket where the videos are stored d) Use web identity federation with temporary credentials allowing access to the videos e) Share the videos by providing a presigned URL only for users logged into your website

c) Remove public read access from the S3 bucket e) Share videos by providing a presigned URL only for users logged into your website All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects. Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
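
A minimal sketch of generating such a presigned URL with boto3; the bucket and object names are hypothetical, and your web application would call this only after the user has logged in:

```python
import boto3

s3 = boto3.client("s3")

# Bucket and key are hypothetical examples.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "conference-videos", "Key": "keynote-2020.mp4"},
    ExpiresIn=3600,  # the link expires after one hour
)

# Return this URL only to authenticated users of your website.
print(url)
```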

In addition to regular sign-in credentials, AWS supports Multi-Factor Authentication (MFA) for accounts with privileged access. Which of the following MFA mechanisms is NOT for root user authentication? a) Virtual MFA devices b) Hardware MFA device c) SMS text message-based MFA d) U2F security key

c) SMS text message-based MFA Correct option: [SMS text message-based MFA] - A type of MFA in which the IAM user settings include the phone number of the user's SMS-compatible mobile device. When the user signs in, AWS sends a six-digit numeric code by SMS text message to the user's mobile device. The user is required to type that code on a second webpage during sign-in. SMS-based MFA is available only for IAM users; you cannot use this type of MFA with the AWS account root user. Incorrect options: [Hardware MFA device] - This hardware device generates a six-digit numeric code. The user must type this code from the device on a second webpage during sign-in. Each MFA device assigned to a user must be unique. A user cannot type a code from another user's device to be authenticated. It can be used for root user authentication. [U2F security key] - A device that you plug into a USB port on your computer. U2F is an open authentication standard hosted by the FIDO Alliance. When you enable a U2F security key, you sign in by entering your credentials and then tapping the device instead of manually entering a code. [Virtual MFA devices] - A software app that runs on a phone or other device and emulates a physical device. The device generates a six-digit numeric code. The user must type a valid code from the device on a second webpage during sign-in. Each virtual MFA device assigned to a user must be unique. A user cannot type a code from another user's virtual MFA device to authenticate.

You've been asked to create a Web application with an endpoint that can handle thousands of REST calls a minute. What AWS service can be used in front of an application to assist in achieving this? a) CloudFront b) Elastic Beanstalk c) Global Accelerator d) API Gateway

d) API Gateway Questions containing 'REST' are usually related to APIs, so API Gateway looks like the best answer. Elastic Beanstalk is a service which allows you to run applications without managing the underlying infrastructure, so it can be discounted, as can Global Accelerator, which is a networking service that improves the availability and performance of applications. CloudFront can be used in conjunction with API Gateway to assist with geographically disparate calls, but it won't process the calls by itself.

You are a developer in a manufacturing company that has several servers on-site. The company decides to move new development to the cloud using serverless technology. You decide to use the AWS Serverless Application Model (AWS SAM) and work with an AWS SAM template file to represent your serverless architecture. Which of the following is NOT a valid serverless resource type? a) AWS::Serverless::Api b) AWS::Serverless::Function c) AWS::Serverless::SimpleTable d) AWS::Serverless::UserPool

d) AWS::Serverless::UserPool Correct option: [AWS::Serverless::UserPool] The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and model it using YAML. SAM supports the following resource types: AWS::Serverless::Api, AWS::Serverless::Application, AWS::Serverless::Function, AWS::Serverless::HttpApi, AWS::Serverless::LayerVersion, AWS::Serverless::SimpleTable, and AWS::Serverless::StateMachine. UserPool belongs to the Amazon Cognito service, which provides authentication for web and mobile apps. There is no resource named UserPool in the Serverless Application Model. Incorrect options: [AWS::Serverless::Function] - This resource creates a Lambda function, IAM execution role, and event source mappings that trigger the function. [AWS::Serverless::Api] - This creates a collection of Amazon API Gateway resources and methods that can be invoked through HTTPS endpoints. It is useful for advanced use cases where you want full control and flexibility when you configure your APIs. [AWS::Serverless::SimpleTable] - This creates a DynamoDB table with a single-attribute primary key. It is useful when data only needs to be accessed via a primary key.
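
For illustration, a minimal SAM template (YAML) using two of the supported resource types; the logical IDs, handler, runtime, and code path are hypothetical:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  StatsFunction:                        # hypothetical logical ID
    Type: AWS::Serverless::Function     # creates a Lambda function + execution role
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: src/

  StatsTable:
    Type: AWS::Serverless::SimpleTable  # DynamoDB table with a simple primary key
```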

A development team lead is responsible for managing access for her IAM principals. At the start of the cycle, she granted excess privileges to users to keep them motivated to try new things. She now wants to ensure that the team has only the minimum permissions required to finish their work. Which of the following will help her identify unused IAM roles and remove them without disrupting any service? a) AWS Trusted Advisor b) IAM Access Analyzer c) Amazon Inspector d) Access Advisor feature on IAM console

d) Access Advisor feature on IAM console Access Advisor feature on IAM console - To help identify unused roles, IAM reports the last-used timestamp that represents when a role was last used to make an AWS request. Your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps improve the security posture of your AWS environments. Additionally, by removing unused roles, you can simplify your monitoring and auditing efforts by focusing only on roles that are in use. Incorrect options: AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. IAM Access Analyzer - AWS IAM Access Analyzer helps you identify the resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an external entity. This lets you identify unintended access to your resources and data, which is a security risk. Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
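
The role last-used timestamp mentioned above can also be read programmatically. A minimal boto3 sketch (the console's Access Advisor view surfaces similar information directly):

```python
import boto3

iam = boto3.client("iam")

# Walk every role and report when it last made an AWS request.
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        # RoleLastUsed is populated on the detailed get_role response.
        details = iam.get_role(RoleName=role["RoleName"])["Role"]
        last_used = details.get("RoleLastUsed", {}).get("LastUsedDate")
        print(role["RoleName"], last_used or "no recorded use")
```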

A company is looking at optimizing their Amazon EC2 instance costs. A few instances are sure to run for a few years, but the instance type might change based on business requirements. Which EC2 instance purchasing option should they opt for to meet the reduced cost criteria? a) Standard Reserved Instances b) Elastic Reserved Instances c) Scheduled Reserved Instances d) Convertible Reserved Instances

d) Convertible Reserved Instances Correct option: Reserved Instances offer significant savings on Amazon EC2 costs compared to On-Demand Instance pricing. A Reserved Instance can be purchased for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. Reserved Instances come with two offering classes - Standard or Convertible. Convertible Reserved Instances - A Convertible Reserved Instance can be exchanged during the term for another Convertible Reserved Instance with new attributes including instance family, instance type, platform, scope, or tenancy. This is the best fit for the current requirement. Incorrect options: Elastic Reserved Instances - This option has been added as a distractor. There are no Elastic Reserved Instance types. Standard Reserved Instances - With Standard Reserved Instances, some attributes, such as instance size, can be modified during the term; however, the instance family cannot be modified. You cannot exchange a Standard Reserved Instance, only modify it. Scheduled Reserved Instances - Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. You reserve the capacity in advance so that you know it is available when you need it.

You created a CloudFormation template that launched a web application running on EC2 instances in us-west-1. However, you are experiencing a problem creating a development stack in us-east-1 to serve clients in another geographical location. What should you do to solve the problem? a) Recreate the AWS resources used for the application in us-west-1. b) Copy your IAM role to us-east-1 region so that you have permissions to deploy CloudFormation stacks in that region. c) Copy the AMI in the template from us-east-1 to us-west-1. d) Copy the AMI in the template from us-west-1 to us-east-1.

d) Copy the AMI in the template from us-west-1 to us-east-1 An Amazon Machine Image, or AMI, is used to launch an EC2 instance in a specified region. So, to use it in another region, you will have to copy it to the region of your choice. Recreating the resources is unnecessary since you only need to copy the AMI. And the IAM role is irrelevant to the question, since IAM roles are valid across the entire AWS account.
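
A sketch of the copy with boto3; note the client is created in the destination Region (us-east-1), and the source AMI ID below is hypothetical:

```python
import boto3

# The CopyImage call is made against the destination Region.
ec2_east = boto3.client("ec2", region_name="us-east-1")

response = ec2_east.copy_image(
    Name="web-app-ami-copy",                 # hypothetical name
    SourceImageId="ami-0123456789abcdef0",   # hypothetical AMI in us-west-1
    SourceRegion="us-west-1",
)

# Reference the new AMI ID in the us-east-1 CloudFormation stack.
print(response["ImageId"])
```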

You are performing an audit of your IAM policies. Which of the following tools will enable you to identify which specific statement in a policy results in allowing or denying access to a particular resource or action? a) IAM Statement Simulator b) IAM Access Control Simulator c) IAM Permission Simulator d) IAM Policy Simulator e) IAM Role Simulator

d) IAM Policy Simulator With the IAM policy simulator, you can test and troubleshoot IAM and resource-based policies attached to IAM users, groups, or roles in your AWS account. You can test which actions are allowed or denied by the selected policies for specific resources.
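
The same checks can be run outside the console via the policy simulator APIs. A minimal sketch with boto3, assuming a hypothetical user ARN and bucket:

```python
import boto3

iam = boto3.client("iam")

# Evaluate the policies attached to a principal; ARNs are hypothetical.
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/audit-test-user",
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/*"],
)

for evaluation in result["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])
```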

The manager at an IT company wants to set up member access to user-specific folders in an Amazon S3 bucket - bucket-a. So, user x can only access files in his folder - bucket-a/user/user-x/ and user y can only access files in her folder - bucket-a/user/user-y/ and so on. As a Developer Associate, which of the following IAM constructs would you recommend so that the policy snippet can be made generic for all team members and the manager does not need to create separate IAM policy for each team member? a) IAM policy condition b) IAM policy principal c) IAM policy resource d) IAM policy variables

d) IAM policy variables Correct option: IAM policy variables Instead of creating individual policies for each user, you can use policy variables and create a single policy that applies to multiple users (a group policy). Policy variables act as placeholders. When you make a request to AWS, the placeholder is replaced by a value from the request when the policy is evaluated. As an example, a policy like the one sketched below gives each of the users in the group full programmatic access to a user-specific object (their own "home directory") in Amazon S3. Incorrect options: IAM policy principal - You can use the Principal element in a policy to specify the principal that is allowed or denied access to a resource (In IAM, a principal is a person or application that can make a request for an action or operation on an AWS resource. The principal is authenticated as the AWS account root user or an IAM entity to make requests to AWS). You cannot use the Principal element in an IAM identity-based policy. You can use it in the trust policies for IAM roles and in resource-based policies. IAM policy condition - The Condition element (or Condition block) lets you specify conditions for when a policy is in effect, like so - "Condition" : { "StringEquals" : { "aws:username" : "johndoe" }}. This cannot be used to address the requirements of the given use-case. IAM policy resource - The Resource element specifies the object or objects that the statement covers. You specify a resource using an ARN. This cannot be used to address the requirements of the given use-case.
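
A minimal sketch of such a policy, using the bucket name from the question; the ${aws:username} variable resolves to the requesting user's name when the policy is evaluated:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-a/user/${aws:username}/*"
    }
  ]
}
```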

Your application accesses data stored in an S3 bucket. The S3 bucket hosts human resources data, and the application produces a summary of key metrics from the human resources data via a dashboard. The data is also inserted into an RDS table for another downstream process. Given this environment, which operation could return temporarily inconsistent results? a) Running a SELECT statement in RDS after employee data was removed due to termination. b) Downloading employee data from S3 after it was created. c) Running a SELECT statement in RDS after new employee data was inserted. d) Reading employee data on the dashboard after the employee was deleted from the S3 bucket due to termination.

d) Reading employee data on the dashboard after the employee was deleted from the S3 bucket due to termination. Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all Regions, with one caveat. The caveat is that if you make a HEAD or GET request to a key name before the object is created, then create the object shortly after that, a subsequent GET might not return the object due to eventual consistency. Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions. In RDS, strict read-after-write consistency is available when reading from the primary DB instance.

Your application needs to access content located in an S3 bucket residing in a different AWS account. Which of the following API calls should be used to gain access? a) STS:GetFederationToken b) STS:AttachRole c) IAM:AddRoleToInstanceProfile d) STS:AssumeRole

d) STS:AssumeRole The STS AssumeRole API call returns a set of temporary security credentials which can be used to access AWS resources, including those in a different account.
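
A minimal sketch of the flow with boto3; the role ARN and bucket are hypothetical, and the role must exist in the bucket owner's account with a trust policy that allows your account to assume it:

```python
import boto3

sts = boto3.client("sts")

# Assume a role in the other account (ARN is hypothetical).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/CrossAccountS3Read",
    RoleSessionName="s3-access",
)["Credentials"]

# Use the temporary credentials to talk to the other account's bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="cross-account-bucket").get("KeyCount"))
```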

As part of his development work, an AWS Certified Developer Associate is creating policies and attaching them to IAM identities. After creating the necessary identity-based policies, he is now creating resource-based policies. Which is the only resource-based policy that the IAM service supports? a) AWS Organizations Service Control Policies (SCP) b) Permissions boundary c) Access control list (ACL) d) Trust policy

d) Trust policy Correct option: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3 bucket. These policies grant the specified principal permission to perform specific actions on that resource and define under what conditions this applies. [Trust policy] - Trust policies define which principal entities (accounts, users, roles, and federated users) can assume the role. An IAM role is both an identity and a resource that supports resource-based policies. For this reason, you must attach both a trust policy and an identity-based policy to an IAM role. The IAM service supports only one type of resource-based policy called a role trust policy, which is attached to an IAM role. Incorrect options: [AWS Organizations Service Control Policies (SCP)] - If you enable all features of AWS Organizations, then you can apply service control policies (SCPs) to any or all of your accounts. SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU). The SCP limits permissions for entities in member accounts, including each AWS account root user. An explicit deny in any of these policies overrides the allow. [Access control list (ACL)] - Access control lists (ACLs) are service policies that allow you to control which principals in another account can access a resource. ACLs cannot be used to control access for a principal within the same account. Amazon S3, AWS WAF, and Amazon VPC are examples of services that support ACLs. [Permissions boundary] - AWS supports permissions boundaries for IAM entities (users or roles). A permissions boundary is an advanced feature for using a managed policy to set the maximum permissions that an identity-based policy can grant to an IAM entity. An entity's permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries.
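
For illustration, a minimal role trust policy might look like the following; the account ID is hypothetical, and the document allows principals in that account (subject to their own IAM permissions) to assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```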

You are using CodeBuild to build the source code for your new application and would like to reference a large number of environment variables in buildspec.yml. However, when you try to run the build you see an error telling you that the parameters you have specified have exceeded the number of characters allowed by the buildspec file. You need to find an alternative way to store these parameters. Which of the following options would you recommend? a) Store the variables as key-value pairs in DynamoDB b) Store the variables as dependencies within the app code c) Store the variables as key-value pairs in S3 d) Use Systems Manager Parameter Store

d) Use Systems Manager Parameter Store Use AWS Systems Manager Parameter Store to store large environment variables and then retrieve them from your buildspec file. Systems Manager Parameter Store can store an individual environment variable (name and value added together) that is a combined 4,096 characters or less.
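
A sketch of how a buildspec can pull values from Parameter Store rather than declaring them inline; the parameter names and build command are hypothetical:

```yaml
version: 0.2

env:
  parameter-store:
    # Environment variable name -> Parameter Store parameter name.
    DB_HOST: /myapp/prod/db_host
    API_KEY: /myapp/prod/api_key

phases:
  build:
    commands:
      - echo "Connecting to $DB_HOST"
```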

The development team has just configured and attached the IAM policy needed to access AWS Billing and Cost Management for all users under the Finance department. But the users are unable to see the AWS Billing and Cost Management service in the AWS console. What could be the reason for this issue? a) IAM user should be created under AWS Billing and Cost Management and not under the AWS account to have access to the billing console b) The users might have another policy that restricts them from accessing the Billing information c) Only the root user has access to the AWS Billing and Cost Management console d) You need to activate IAM user access to the Billing and Cost Management console for all the users who need access

d) You need to activate IAM user access to the Billing and Cost Management console for all the users who need access Correct option: [You need to activate IAM user access to the Billing and Cost Management console for all the users who need access] - By default, IAM users do not have access to the AWS Billing and Cost Management console. You or your account administrator must grant users access. You can do this by activating IAM user access to the Billing and Cost Management console and attaching an IAM policy to your users. Then, you need to activate IAM user access for the IAM policies to take effect. You only need to activate IAM user access once. Incorrect options: [The users might have another policy that restricts them from accessing the Billing information] - This is an incorrect option, as deduced from the given use-case. [Only the root user has access to the AWS Billing and Cost Management console] - This is an incorrect statement. AWS Billing and Cost Management access can be provided to any user through user activation and policies, as discussed above. [IAM user should be created under AWS Billing and Cost Management and not under the AWS account to have access to the Billing console] - IAM is a feature of your AWS account. All IAM users are created and managed from a single place, irrespective of the services they wish to use.

You are testing a new app installed on an EC2 instance and have been asked to set up an alert to notify the development team any time the CPU spikes above 75%. You have configured this using CloudWatch. This morning when checking the CloudWatch metrics for your app server, the CPU usage is showing as 95%. What will the status of the alarm be? a) CPU_ALERT b) OK c) INSUFFICIENT_DATA d) ALERT e) ALARM

e) ALARM A CloudWatch alarm has the following possible states: OK - the metric or expression is within the defined threshold ALARM - the metric or expression is outside the threshold INSUFFICIENT_DATA - the alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state
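
A sketch of creating such an alarm with boto3; the instance ID and SNS topic ARN are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU over a 5-minute period exceeds 75%,
# notifying the development team via an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="app-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dev-team-alerts"],
)
```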

