AWS Developer Associate

CloudFormation Conditions

After you define all your conditions, you can associate them with resources and resource properties in the Resources and Outputs sections of a template.
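A minimal template sketch of this pattern (the parameter and condition names here are illustrative, not from any particular template): the condition evaluates a parameter, and the resource carrying the Condition attribute is only created when it evaluates to true.

Parameters:
  EnvType:
    Type: String
    Default: test
Conditions:
  CreateProdResources: !Equals [!Ref EnvType, prod]
Resources:
  ProdBucket:
    Type: AWS::S3::Bucket
    Condition: CreateProdResources    # created only when EnvType=prod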

Elastic Beanstalk supported application servers

Apache HTTP Server, Tomcat, Passenger, Puma, Nginx, and IIS

Are CloudTrail event log files encrypted in S3?

By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE).

A financial mobile application has a serverless backend API which consists of DynamoDB, Lambda, and Cognito. Due to the confidential financial transactions handled by the mobile application, there is a new requirement provided by the company to add a second authentication method that doesn't rely solely on user name and password. Which of the following is the MOST suitable solution that the developer should implement?
A) Use Cognito with SNS to allow additional authentication via SMS.
B) Integrate multi-factor authentication (MFA) to a user pool in Cognito to protect the identity of your users.
C) Use a new IAM policy to a user pool in Cognito.
D) Create a custom application that integrates with Amazon Cognito which implements the second layer of authentication.

Integrate multi-factor authentication (MFA) to a user pool in Cognito to protect the identity of your users.

Local Secondary Index vs. Global Secondary Index

Local Secondary Index:
- Must be created when you create your table
- Same partition key as your table
- Different sort key

Global Secondary Index:
- Can be created any time, at table creation or after
- Different partition key and different sort key

S3 Encryption in transit

SSL/TLS (HTTPS)

What do service-linked roles in ECS mean?

Service-linked roles are attached only to ECS directly and not to an ECS task. Service-linked roles are predefined by Amazon ECS and include all the permissions that the service requires to call other AWS services on your behalf.

What is the X-Ray agent?

The X-Ray agent collects data from log files and sends them to the X-Ray service for aggregation, analysis, and storage. The agent makes it easier for you to send data to the X-Ray service, instead of using the APIs directly, and is available for Amazon Linux AMI, Red Hat Enterprise Linux (RHEL), and Windows Server 2012 R2 or later operating systems.

ECS: tasks are like containers

Tasks are grouped together to form an ECS service.

So how do we enable other users to read and write the objects within our S3 buckets?

Well, one of the ways that we can do that is using bucket policies. So you can set up access control to your buckets using bucket policies. Now, bucket policies are applied at a bucket level, so the clue is in the name. And permissions granted by the policy are going to apply to all of the objects within the bucket.
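As a sketch, a bucket policy that makes every object in a bucket publicly readable might look like this (the bucket name is illustrative):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

Because the Resource is the wildcard my-bucket/*, the permission applies to all of the objects within the bucket.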

What are X-Ray errors?

X-Ray errors are system annotations associated with a segment for a call that results in an error response. The error includes the error message, stack trace, and any additional information (for example, version or commit ID) to associate the error with a source file.

Does the Application Load Balancer have cross-zone load balancing enabled by default?

Yes, it does.

Are Lambda versions mutable or immutable?

Immutable. Once a version is published, its code and configuration cannot be changed; only $LATEST is mutable.

You are working for a technology startup building web and mobile applications. You would like to pull Docker images from the ECR repository called demo so you can start running local tests against the latest application version. Which of the following commands must you run to pull existing Docker images from ECR?

$(aws ecr get-login --no-include-email)
docker pull 1234567890.dkr.ecr.us-east-1.amazonaws.com/demo:latest

The get-login command retrieves a token that is valid for a specified registry for 12 hours, and then it prints a docker login command with that authorization token. You can execute the printed command to log in to your registry with Docker, or just run it automatically using the $() command wrapper. After you have logged in to an Amazon ECR registry with this command, you can use the Docker CLI to push and pull images from that registry until the token expires. The docker pull command is used to pull an image from the ECR registry.

What are the two types of indexes in DynamoDB?

- Local Secondary Index
- Global Secondary Index

Which load balancer routes more requests to larger instance types and fewer requests to smaller instances?

A Classic Load Balancer with HTTP or HTTPS listeners might route more traffic to higher-capacity instance types. This distribution aims to prevent lower-capacity instance types from having too many outstanding requests. It's a best practice to use similar instance types and configurations to reduce the likelihood of capacity gaps and traffic imbalances.

What is a projection expression in DynamoDB?

A projection expression is a string that identifies the attributes that you want. To retrieve a single attribute, specify its name. For multiple attributes, the names must be comma-separated.

aws dynamodb get-item \
    --table-name ProductCatalog \
    --key file://key.json \
    --projection-expression "Description, RelatedItems[0], ProductReviews.FiveStar"

You have just created a custom VPC with two public subnets and a custom route table. Which of the following is true with regards to route tables?
A) You cannot associate multiple subnets to the same route table.
B) A subnet can only be associated with one route table at a time.
C) A VPC has a default limit of 5 route tables.
D) You cannot modify/edit the main route table created by default by AWS.

A subnet can only be associated with one route table at a time.

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. Thus, the correct answer is: A subnet can only be associated with one route table at a time.

The option that says: A VPC has a default limit of 5 route tables is incorrect because the default route table limit per VPC is 200. The option that says: You cannot associate multiple subnets to the same route table is incorrect because you can associate multiple subnets with the same route table. The option that says: You cannot modify/edit the main route table created by default by AWS is not accurate because it is definitely possible to modify/edit the main route table.

A company is developing a Python application that submits data to an Amazon DynamoDB table. The company requires client-side encryption of specific data items and end-to-end protection for the encrypted data in transit and at rest. Which combination of steps will meet the requirement for the encryption of specific data items?
A) Generate symmetric encryption keys with AWS Key Management Service (AWS KMS).
B) Generate asymmetric encryption keys with AWS Key Management Service (AWS KMS).
C) Use generated keys with the DynamoDB Encryption Client.
D) Use generated keys to configure DynamoDB table encryption with AWS managed customer master keys (CMKs).
E) Use generated keys to configure DynamoDB table encryption with AWS owned customer master keys (CMKs).

A & C. The DynamoDB Encryption Client does not support asymmetric encryption, and configuring table encryption with AWS managed or AWS owned CMKs (options D and E) is server-side encryption at rest, which does not meet the client-side requirement.

A company has different AWS accounts, namely Account A, Account B, and Account C, which are used for their Development, Test, and Production environments respectively. A developer needs access to perform an audit whenever a new version of the application has been deployed to the Test (Account B) and Production (Account C) environments. What is the MOST efficient way to provide the developer access to execute the specified task?
A) Grant the developer cross-account access to the resources of Accounts B and C.
B) Create separate identities and passwords for the developer on both the Test and Production accounts.
C) Enable AWS multi-factor authentication (MFA) to the IAM User of the developer.
D) Set up AWS Organizations and attach a Service Control Policy to the developer to access the other accounts.

A) Grant the developer cross-account access to the resources of Accounts B and C.

With cross-account access, you create IAM roles in Accounts B and C that the developer's existing identity in Account A can assume, so the developer can switch between accounts without maintaining separate credentials. Creating separate identities and passwords for the developer on both the Test and Production accounts is incorrect because managing multiple sets of credentials is inefficient. Enabling AWS multi-factor authentication (MFA) is incorrect because MFA strengthens authentication but does not grant access to other accounts. Setting up AWS Organizations and attaching a Service Control Policy is incorrect because SCPs apply to accounts and organizational units to limit the maximum available permissions; they do not grant access and cannot be attached to an individual user.

AWS CloudFormation helps model and provision all the cloud infrastructure resources needed for your business. Which of the following services rely on CloudFormation to provision resources (Select two)?
- AWS Auto Scaling
- AWS Lambda
- AWS Elastic Beanstalk
- AWS Serverless Application Model (AWS SAM)
- AWS CodeBuild

AWS Elastic Beanstalk and AWS Serverless Application Model (AWS SAM).

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes.

AWS Serverless Application Model (AWS SAM) - You use the AWS SAM specification to define your serverless application. AWS SAM templates are an extension of AWS CloudFormation templates, with some additional components that make them easier to work with. AWS SAM needs CloudFormation templates as a basis for its configuration.

An application hosted in an Auto Scaling group of On-Demand EC2 instances is used to process data polled from an SQS queue and the generated output is stored in an S3 bucket. To improve security, you were tasked to ensure that all objects in the S3 bucket are encrypted at rest using server-side encryption with AWS KMS-Managed Keys (SSE-KMS). Which of the following is required to properly implement this requirement?
A) Add a bucket policy which denies any s3:PutObject action unless the request includes the x-amz-server-side-encryption header.
B) Add a bucket policy which denies any s3:PutObject action unless the request includes the x-amz-server-side-encryption-aws-kms-key-id header.
C) Add a bucket policy which denies any s3:PostObject action unless the request includes the x-amz-server-side-encryption-aws-kms-key-id header.
D) Add a bucket policy which denies any s3:PostObject action unless the request includes the x-amz-server-side-encryption header.

Add a bucket policy which denies any s3:PutObject action unless the request includes the x-amz-server-side-encryption header.
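A sketch of such a bucket policy (the bucket name is illustrative); the AWS documentation often pairs this with a second Deny statement using a Null condition to also reject uploads that omit the header entirely:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}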

Elastic Beanstalk Deployment Types

All at once: involves a service interruption; rolling back requires a further all-at-once update.
Rolling: the new code is deployed in batches; reduced capacity during deployment; rolling back requires a further rolling update.
Rolling with additional batch: maintains full capacity; rolling back requires a further rolling update.
Immutable: maintains full capacity; to roll back, delete the new instances; the preferred option for mission-critical applications.
Traffic splitting: performs an immutable deployment and then splits the traffic between the old and the new deployment, enabling canary testing.

An application has recently been migrated from an on-premises data center to a development Elastic Beanstalk environment. A developer will do iterative tests and therefore needs to deploy code changes and view them as quickly as possible. Which of the following options take the LEAST amount of time to complete the deployment?
- All at once
- Immutable
- Rolling
- Rolling with additional batch

All at once.

In Elastic Beanstalk, you can choose from a variety of deployment methods:
- All at once: deploy the new version to all instances simultaneously. All instances in your environment are out of service for a short time while the deployment occurs. This is the method that provides the least amount of time for deployment.
- Rolling: deploy the new version in batches. Each batch is taken out of service during the deployment phase, reducing your environment's capacity by the number of instances in a batch.
- Rolling with additional batch: deploy the new version in batches, but first launch a new batch of instances to ensure full capacity during the deployment process.
- Immutable: deploy the new version to a fresh group of instances by performing an immutable update.
- Blue/Green: deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.

How do you update an item in DynamoDB?

An update expression specifies how UpdateItem will modify the attributes of an item, for example, setting a scalar value or removing elements from a list or a map.

update-expression ::=
    [ SET action [, action] ... ]
    [ REMOVE action [, action] ... ]
    [ ADD action [, action] ... ]
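A minimal CLI sketch (the table, key, and attribute names are illustrative):

aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "123"}}' \
    --update-expression "SET Price = :p REMOVE Discount" \
    --expression-attribute-values '{":p": {"N": "19"}}'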

Control access to a REST API using Amazon Cognito user pools as authorizer

As an alternative to using IAM roles and policies or Lambda authorizers (formerly known as custom authorizers), you can use an Amazon Cognito user pool to control who can access your API in Amazon API Gateway. To use an Amazon Cognito user pool with your API, you must first create an authorizer of the COGNITO_USER_POOLS type and then configure an API method to use that authorizer. After the API is deployed, the client must first sign the user in to the user pool, obtain an identity or access token for the user, and then call the API method with one of the tokens, which are typically set to the request's Authorization header. The API call succeeds only if the required token is supplied and the supplied token is valid, otherwise, the client isn't authorized to make the call because the client did not have credentials that could be authorized. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html

You are a software developer for a multinational investment bank which has a hybrid cloud architecture with AWS. To improve the security of their applications, they decided to use AWS Key Management Service (KMS) to create and manage their encryption keys across a wide range of AWS services. You were given the responsibility to integrate AWS KMS with the financial applications of the company. Which of the following are the recommended steps to locally encrypt data using AWS KMS that you should follow? (Select TWO.)
A) Use the GenerateDataKeyWithoutPlaintext operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.
B) Erase the plaintext data key from memory and store the encrypted data key alongside the locally encrypted data.
C) Erase the encrypted data key from memory and store the plaintext data key alongside the locally encrypted data.
D) Encrypt data locally using the Encrypt operation.
E) Use the GenerateDataKey operation to get a data encryption key then use the plaintext data key in the response to encrypt data locally.

B & E.

Use the GenerateDataKey operation to get a data encryption key, then use the plaintext data key in the response to encrypt data locally. Afterwards, erase the plaintext data key from memory and store the encrypted data key alongside the locally encrypted data.

A developer is preparing the application specification (AppSpec) file in CodeDeploy, which will be used to deploy her Lambda functions to AWS. In the deployment, she needs to configure CodeDeploy to run a task before the traffic is shifted to the deployed Lambda function version. Which deployment lifecycle event should she configure in this scenario?
- Start
- BeforeInstall
- BeforeAllowTraffic
- Install

BeforeAllowTraffic.

Take note that the Start, AllowTraffic, and End events in a Lambda deployment cannot be scripted. Hence, the correct answer is BeforeAllowTraffic. Start is incorrect because this deployment lifecycle event in Lambda cannot be scripted or used; in this scenario, the correct event that you should configure is the BeforeAllowTraffic event. BeforeInstall is incorrect because this event is only applicable for ECS, EC2, or On-Premises compute platforms and not for Lambda deployments. Install is incorrect because this uses the CodeDeploy agent to copy the revision files from the temporary location to the final destination folder of the EC2 or On-Premises server. This deployment lifecycle event is not available for Lambda as well as for ECS deployments.
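A sketch of a Lambda AppSpec file with the BeforeAllowTraffic hook wired to a validation function (the function names and version numbers are illustrative):

version: 0.0
Resources:
  - myLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "ValidateBeforeTrafficShift"
  - AfterAllowTraffic: "ValidateAfterTrafficShift"

CodeDeploy invokes the named Lambda functions at the corresponding lifecycle events and waits for them to signal success before proceeding.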

S3: Bucket ACLs

Bucket ACLs give you fine-grained access control for objects within your S3 buckets, so you can grant a different type of access to different objects within the same bucket. For example, you can apply different permissions for different objects for different users and groups. So you could set some objects within your bucket as publicly readable and others as only readable by your own team, and you can do that using bucket Access Control Lists.

A developer is building an application that will be hosted in ECS and needs to be configured to run its tasks and services using the Fargate launch type. The application will have four different tasks, where each task will access various AWS resources that are different from the other tasks. Which of the following is the MOST efficient solution that can provide your application in ECS access to the required AWS resources?
A) Create an IAM Group with all the required permissions and attach it to each of the 4 ECS tasks.
B) Create 4 different Container Instance IAM Roles with the required permissions and attach them to each of the 4 ECS tasks.
C) Create 4 different IAM Roles with the required permissions and attach them to each of the 4 ECS tasks.
D) Create 4 different Service-Linked Roles with the required permissions and attach them to each of the 4 ECS tasks.

C) Create 4 different IAM Roles with the required permissions and attach them to each of the 4 ECS tasks.

The most suitable solution in this scenario is to create 4 different IAM Roles with the required permissions and attach one to each of the 4 ECS tasks. Creating an IAM Group with all the required permissions and attaching it to each of the 4 ECS tasks is incorrect because you cannot directly attach an IAM Group to an ECS task; attaching an IAM Role is the suitable solution in this scenario, not an IAM Group. Creating 4 different Container Instance IAM Roles with the required permissions and attaching them to each of the 4 ECS tasks is incorrect because a Container Instance IAM Role only applies if you are using the EC2 launch type. Take note that the scenario says that the application will be using the Fargate launch type. Creating 4 different Service-Linked Roles with the required permissions and attaching them to each of the 4 ECS tasks is incorrect because a service-linked role is a unique type of IAM role that is linked directly to Amazon ECS itself, not to the ECS task. Service-linked roles are predefined by Amazon ECS and include all the permissions that the service requires to call other AWS services on your behalf.

You work for a software development company where the teams are divided into distinct projects. The management wants to have separation on their AWS resources, which will have a detailed report on the costs of each project. Which of the following options is the recommended way to implement this?
A) Tag resources by projects and use Detailed Billing Reports to show costing per tag.
B) Tag resources by IAM group assigned for each project and use Detailed Billing reports to show costing.
C) Create separate AWS accounts for each project and use consolidated billing.
D) Create separate AWS accounts for each project and generate Detailed Billing for each account.

C) Create separate AWS accounts for each project and use consolidated billing.

A company has a static website running in an Auto Scaling group of EC2 instances which they want to convert to a dynamic e-commerce web portal. One of the requirements is to use HTTPS to improve the security of their portal and to also improve their search ranking as a reputable and secure site. A developer recently requested an SSL/TLS certificate from a third-party certificate authority (CA) which is ready to be imported to AWS. Which of the following services can the developer use to safely import the SSL/TLS certificate? (Select TWO.)
A) CloudFront
B) A private S3 bucket with versioning enabled
C) IAM certificate store
D) AWS Certificate Manager
E) AWS WAF

C) IAM certificate store D) AWS Certificate Manager

You have a private S3 bucket that stores application logs and the bucket contents are accessible to all members of the Developer IAM group. However, you want to make an object inside the bucket accessible only to the members of the Admin IAM group. How can you apply an S3 bucket policy to this object using the AWS CLI?
A) Use the put-bucket-policy --permission command.
B) Use the put-bucket-policy --policy command.
C) None of the above.
D) Use the put-bucket-policy --grants command.

C) None of the above. Bucket policies apply at the bucket level; only ACLs can give you this kind of fine-grained, per-object access control.

DynamoDB: Local Secondary Index

- Can only be created when you are creating your table
- You cannot add, remove, or modify it later
- It has the same partition key as your original table, but a different sort key
- Gives you a different view of your data, organised according to an alternative sort key
- Any queries based on this sort key are much faster using the index than the main table
- e.g. Partition Key: User ID; Sort Key: account creation date
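A sketch of creating a table with an LSI at creation time (the table and attribute names are illustrative; note the LSI reuses the table's partition key with a different sort key):

aws dynamodb create-table \
    --table-name GameScores \
    --attribute-definitions \
        AttributeName=UserId,AttributeType=S \
        AttributeName=GameTitle,AttributeType=S \
        AttributeName=CreatedAt,AttributeType=S \
    --key-schema AttributeName=UserId,KeyType=HASH AttributeName=GameTitle,KeyType=RANGE \
    --local-secondary-indexes '[{"IndexName": "ByCreatedAt", "KeySchema": [{"AttributeName": "UserId", "KeyType": "HASH"}, {"AttributeName": "CreatedAt", "KeyType": "RANGE"}], "Projection": {"ProjectionType": "ALL"}}]' \
    --billing-mode PAY_PER_REQUEST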

CloudFormation Change Sets

Change Sets only allow you to preview how proposed changes to a stack might impact your running resources.
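A sketch of the workflow with the AWS CLI (the stack, change set, and file names are illustrative):

aws cloudformation create-change-set \
    --stack-name my-stack \
    --change-set-name preview-update \
    --template-body file://template.yml
aws cloudformation describe-change-set \
    --stack-name my-stack \
    --change-set-name preview-update    # review the proposed changes
aws cloudformation execute-change-set \
    --stack-name my-stack \
    --change-set-name preview-update    # apply them once satisfied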

CodeDeploy deployment options

CodeDeploy provides two deployment type options: in-place deployments and blue/green deployments.

In-place deployment: The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete. Only deployments that use the EC2/On-Premises compute platform can use in-place deployments.

Blue/green deployment: The behavior of your deployment depends on which compute platform you use. Blue/green on an EC2/On-Premises compute platform: the instances in a deployment group (the original environment) are replaced by a different set of instances (the replacement environment) using these steps:
- Instances are provisioned for the replacement environment.
- The latest application revision is installed on the replacement instances.
- An optional wait time occurs for activities such as application testing and system verification.
- Instances in the replacement environment are registered with an Elastic Load Balancing load balancer, causing traffic to be rerouted to them.
- Instances in the original environment are deregistered and can be terminated or kept running for other uses.

A startup has recently launched a high-quality photo sharing portal using Amazon Lightsail and S3. They noticed that there are other external websites which are linking to and using their photos without permission. This has caused an increase in their data transfer cost and potential revenue loss. Which of the following is the MOST effective method to solve this issue?
A) Enable cross-origin resource sharing (CORS) which allows cross-origin GET requests from all origins.
B) Use a CloudFront web distribution to serve the photos.
C) Configure the S3 bucket to remove public read access and use pre-signed URLs with expiry dates.
D) Block the IP addresses of the offending websites using a Network Access Control List.

Configure the S3 bucket to remove public read access and use pre-signed URLs with expiry dates.

A serverless application is composed of several Lambda functions which read data from RDS. These functions must share the same connection string, which should be encrypted to improve data security. Which of the following is the MOST secure way to meet the above requirement?
A) Use AWS Lambda environment variables encrypted with KMS which will be shared by the Lambda functions.
B) Use AWS Lambda environment variables encrypted with CloudHSM.
C) Create an IAM Execution Role that has access to RDS and attach it to the Lambda functions.
D) Create a Secure String Parameter using the AWS Systems Manager Parameter Store.

Create a Secure String Parameter using the AWS Systems Manager Parameter Store.
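A sketch with the AWS CLI (the parameter name and value are illustrative); SecureString parameters are encrypted with KMS, and each Lambda function can read the same parameter at runtime:

aws ssm put-parameter \
    --name /app/prod/db-connection-string \
    --type SecureString \
    --value "Server=mydb.example.com;User=app;Password=secret"
aws ssm get-parameter \
    --name /app/prod/db-connection-string \
    --with-decryption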

A company is currently in the process of integrating their on-premises data center with their cloud infrastructure in AWS. One of the requirements is to integrate the on-premises Lightweight Directory Access Protocol (LDAP) directory service with their AWS VPC using IAM. Which of the following provides the MOST suitable solution to implement if the identity store that they are using is not compatible with SAML?
A) Create a custom identity broker application in your on-premises data center and use STS to issue short-lived AWS credentials.
B) Implement the AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP.
C) Set up an IAM policy that references the LDAP identifiers and AWS credentials.
D) Create IAM roles to rotate the IAM credentials whenever LDAP credentials are updated.

Create a custom identity broker application in your on-premises data center and use STS to issue short-lived AWS credentials.

To accommodate a new application deployment, you have created a new EBS volume to be attached to your EC2 instance. After attaching the newly created EBS volume to the Linux EC2 instance, which of the following steps are you going to do next in order to use this volume?

Create a file system on this volume.

After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it. After you make the EBS volume available for use, you can access it in the same ways that you access any other volume. Any data written to this file system is written to the EBS volume and is transparent to applications using the device.

New volumes are raw block devices and do not contain any partition or file system. You need to log in to the instance, format the EBS volume with a file system, and then mount the volume for it to be usable. Volumes that have been restored from snapshots likely have a file system on them already; if you create a new file system on top of an existing file system, the operation overwrites your data. Use the sudo file -s device command to list information about your volume, such as the file system type.
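A sketch of the commands on a Linux instance (the device name /dev/xvdf and the mount point are illustrative and vary by instance type):

sudo file -s /dev/xvdf       # "data" in the output means no file system yet
sudo mkfs -t xfs /dev/xvdf   # create a file system (destroys any existing data)
sudo mkdir /data
sudo mount /dev/xvdf /data   # mount the volume so applications can use it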

Elastic Beanstalk

Deploys and scales your web applications, including the underlying web application platform.

A web application is currently using an on-premises Microsoft SQL Server 2017 Enterprise Edition database. Your manager instructed you to migrate the application to Elastic Beanstalk and the database to RDS. For additional security, you must configure your database to automatically encrypt data before it is written to storage, and automatically decrypt data when the data is read from storage. Which of the following services will you use to achieve this?
A) Enable RDS Encryption.
B) Use IAM DB Authentication.
C) Enable Transparent Data Encryption (TDE).
D) Use Microsoft SQL Server Windows Authentication.

Enable Transparent Data Encryption (TDE).

Using IAM DB Authentication is incorrect because this option just lets you authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. The more appropriate security feature to use here is TDE. Enabling RDS Encryption is incorrect because this simply encrypts your Amazon RDS DB instances and snapshots at rest. It doesn't automatically encrypt data before it is written to storage, nor automatically decrypt data when it is read from storage. Using Microsoft SQL Server Windows Authentication is incorrect because this option is primarily used if you want to integrate RDS with your AWS Directory Service for Microsoft Active Directory (also called AWS Managed Microsoft AD) to enable Windows Authentication to authenticate users.

Your development team is currently developing a financial application in AWS. One of the requirements is to create and control the encryption keys used to encrypt your data using the envelope encryption strategy to comply with the strict IT security policy of the company. Which of the following correctly describes the process of envelope encryption?
A) Encrypt plaintext data with a master key and then encrypt the master key with a top-level encrypted data key.
B) Encrypt plaintext data with a data key and then encrypt the data key with a top-level encrypted master key.
C) Encrypt plaintext data with a master key and then encrypt the master key with a top-level plaintext data key.
D) Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key.

Encrypt plaintext data with a data key and then encrypt the data key with a top-level plaintext master key.

How do you enforce encryption when uploading to S3?

Enforcing Encryption on S3 Buckets

Every time a file is uploaded to S3, a PUT request is initiated. This is what a PUT request looks like:

PUT /myFile HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 25 Apr 2018 09:50:00 GMT
Authorization: authorization string
Content-Type: text/plain

If the file is to be encrypted at upload time, the x-amz-server-side-encryption parameter will be included in the request header. Two options are currently available:
- x-amz-server-side-encryption: AES256 (SSE-S3 - S3 managed keys)
- x-amz-server-side-encryption: aws:kms (SSE-KMS - KMS managed keys)

When this parameter is included in the header of the PUT request, it tells S3 to encrypt the object at the time of upload, using the specified encryption method.

You are deploying a serverless application composed of Lambda, API Gateway, CloudFront, and DynamoDB using CloudFormation. The AWS SAM syntax should be used to declare resources in your template, which requires you to specify the version of the AWS Serverless Application Model (AWS SAM). Which of the following sections is required, aside from the Resources section, that should be in your CloudFormation template?
A) Transform
B) Format Version
C) Parameters
D) Mappings

For serverless applications (also referred to as Lambda-based applications), the optional Transform section specifies the version of the AWS Serverless Application Model (AWS SAM) to use. When you specify a transform, you can use AWS SAM syntax to declare resources in your template. The model defines the syntax that you can use and how it is processed.
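A minimal sketch of a template using the SAM transform (the function properties are illustrative):

Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function    # SAM syntax enabled by the Transform line
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: s3://my-bucket/function.zip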

CloudFormation Format Version

The Format Version section refers to the AWS CloudFormation template version that the template conforms to.

Which API do you call to get an encrypted data key without the plaintext key?

GenerateDataKeyWithoutPlaintext
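A sketch of the call (the key alias is illustrative); the response contains only the CiphertextBlob, with no plaintext key:

aws kms generate-data-key-without-plaintext \
    --key-id alias/my-app-key \
    --key-spec AES_256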

A company has recently adopted a hybrid cloud architecture to augment their on-premises data center with virtual private clouds (VPCs) in AWS. You were assigned to manage all of the company's cloud infrastructure including the security of their resources using IAM. In this scenario, which of the following are best practices in managing security in AWS? (Select TWO.)
- Use IAM inline policies to delegate permissions.
- Grant only the permissions required by the resource to perform a task.
- Always keep your AWS account root user access key.
- Grant all the permissions to the resource in order to perform the task without any issues.
- Delete root user access keys.

Grant only the permissions required by the resource to perform a task. Delete root user access keys.

DynamoDB Filter Expression

If you need to further refine the Query results, you can optionally provide a filter expression. A filter expression determines which items within the Query results should be returned to you. All of the other results are discarded. A filter expression is applied after a Query finishes, but before the results are returned. Therefore, a Query consumes the same amount of read capacity, regardless of whether a filter expression is present.
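A sketch of a Query with a filter expression (the table and attribute names are illustrative); the key condition selects the items and the filter is applied to those results before they are returned:

aws dynamodb query \
    --table-name Thread \
    --key-condition-expression "ForumName = :fn" \
    --filter-expression "ReplyCount > :rc" \
    --expression-attribute-values '{":fn": {"S": "AWS"}, ":rc": {"N": "10"}}'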

How do you enforce encryption using a bucket policy?

If you want to enforce the use of encryption for your files stored in S3, use an S3 Bucket Policy to deny all PUT requests that don't include the x-amz-server-side-encryption parameter in the request header.

What are the three types of S3 encryption?

In transit: SSL/TLS (HTTPS)

At rest, server-side encryption:
- SSE-S3
- SSE-KMS
- SSE-C

Client-side encryption.

If you want to enforce the use of encryption for your files stored in S3, use an S3 Bucket Policy to deny all PUT requests that don't include the x-amz-server-side-encryption parameter in the request header.

The following request tells S3 to encrypt the file using SSE-S3 (AES 256) at the time of upload:

PUT /myFile HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 25 Apr 2018 09:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 27364
x-amz-meta-author: Faye
Expect: 100-continue
x-amz-server-side-encryption: AES256
[27364 bytes of object data]

If the file is to be encrypted at upload time, the x-amz-server-side-encryption parameter will be included in the request header. Two options are currently available:
- x-amz-server-side-encryption: AES256 (SSE-S3 - S3 managed keys)
- x-amz-server-side-encryption: aws:kms (SSE-KMS - KMS managed keys)

When this parameter is included in the header of the PUT request, it tells S3 to encrypt the object at the time of upload, using the specified encryption method. You can enforce the use of Server Side Encryption by using a Bucket Policy which denies any S3 PUT request which doesn't include the x-amz-server-side-encryption parameter in the request header.

A software engineer is building a serverless application in AWS consisting of Lambda, API Gateway, and DynamoDB. She needs to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML to determine the caller's identity. Which of the features of API Gateway is the MOST suitable one that she should use to build this feature?
A) Resource Policy
B) Lambda Authorizers
C) Cross-Account Lambda Authorizer
D) Cross-Origin Resource Sharing (CORS)

Lambda Authorizers.

The most suitable feature in this scenario is Lambda Authorizers, since this feature is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML. Resource Policy is incorrect because this is simply a JSON policy document that you attach to an API to control whether a specified principal (typically an IAM user or role) can invoke the API. This can't be used to implement a custom authorization scheme. Cross-Origin Resource Sharing (CORS) is incorrect because this just defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. Cross-Account Lambda Authorizer is incorrect because this just enables you to use an AWS Lambda function from a different AWS account as your API authorizer function. Moreover, this is not a valid Lambda authorizer type.

Secrets Manager vs. Parameter Store

Parameter Store can be used to store secrets in an encrypted or unencrypted fashion. It helps you optimize and streamline application deployments by storing environmental config data and other parameters, and it is free. AWS Secrets Manager takes it up a few notches by providing additional functionality such as rotation of keys, cross-account access, and tighter integration with AWS services. Recommendation: use Secrets Manager to store confidential secrets like database credentials, API keys, and OAuth tokens. Use Parameter Store to store other application settings, environmental config data, license codes, etc.

A developer teammate is using an API Gateway Lambda Authorizer to securely authenticate the API requests to their web application. The authentication process should be implemented using a custom authorization scheme which accepts header and query string parameters from the API caller. Which of the following methods should the developer use to properly implement the above requirement?
- Amazon Cognito User Pools Authorizer
- Cross-Account Lambda Authorizer
- Token-based Authorization
- Request Parameter-based Authorization

Request Parameter-based Authorization

S3 encryption at rest: server side

- SSE-S3: S3 managed keys, using AES-256 encryption (applies to existing as well as new file encryption)
- SSE-KMS: AWS KMS managed keys
- SSE-C: customer-provided keys

You were assigned to a project that requires the use of the AWS CLI to build a project with AWS CodeBuild. Your project's root directory includes the buildspec.yml file to run build commands, and you would like your build artifacts to be automatically encrypted at the end. How should you configure CodeBuild to accomplish this?

Specify a KMS key to use

API Gateway stage variables

Stage variables are name-value pairs that you can define as configuration attributes associated with a deployment stage of an API. They act like environment variables and can be used in your API setup and mapping templates. With deployment stages in API Gateway, you can manage multiple release stages for each API, such as alpha, beta, and production. Using stage variables you can configure an API deployment stage to interact with different backend endpoints. For example, your API can pass a GET request as an HTTP proxy to the backend web host (for example, http://example.com). In this case, the backend web host is configured in a stage variable so that when developers call your production endpoint, API Gateway calls example.com. When you call your beta endpoint, API Gateway uses the value configured in the stage variable for the beta stage and calls a different web host (for example, beta.example.com).
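As a sketch, the integration endpoint of a proxied HTTP backend can reference the stage variable directly (the variable name webhost is illustrative):

http://${stageVariables.webhost}/api

With webhost set to example.com on the production stage and beta.example.com on the beta stage, the same API definition calls a different backend per stage.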

A developer runs a shell script that uses the AWS CLI to upload a large file to an S3 bucket, which includes an AWS KMS key. An Access Denied error always shows up whenever the developer uploads a file with a size of 100 GB or more. However, when he tried to upload a smaller file with the KMS key, the upload succeeds. Which of the following are possible reasons why this issue is happening? (Select TWO.)
A) The developer does not have the kms:Decrypt permission.
B) The developer does not have the kms:Encrypt permission.
C) The maximum size that can be encrypted in KMS is only 100 GB.
D) The developer's IAM permission has an attached inline policy that restricts him from uploading a file to S3 with a size of 100 GB or more.
E) The AWS CLI S3 commands perform a multipart upload when the file is large.

The developer does not have the kms:Decrypt permission. The AWS CLI S3 commands perform a multipart upload when the file is large.

For multipart uploads with KMS encryption enabled, S3 needs to decrypt each part in order to assemble the full file, so it will respond with Access Denied if there is no kms:Decrypt permission. To perform a multipart upload with encryption using an AWS Key Management Service (AWS KMS) customer master key (CMK), the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload.

Hence, the correct answers in this scenario are:
- The AWS CLI S3 commands perform a multipart upload when the file is large.
- The developer does not have the kms:Decrypt permission.

The option that says the developer does not have the kms:Encrypt permission is incorrect because the operation is successful if the developer uploads a smaller file, which signifies that the developer already has the kms:Encrypt permission. The option that says the developer's IAM permission has an attached inline policy that restricts him from uploading a file to S3 with a size of 100 GB or more is incorrect because inline policies are just policies that you create, manage, and embed directly into a single user, group, or role; there is no direct way to restrict a user from uploading a file with a specific size constraint. The option that says the maximum size that can be encrypted in KMS is only 100 GB is incorrect because there is no such limitation in KMS.

Auto Scaling group scaling policies: predefined metrics

The following predefined metrics are available:
- ASGAverageCPUUtilization: average CPU utilization of the Auto Scaling group.
- ASGAverageNetworkIn: average number of bytes received on all network interfaces by the Auto Scaling group.
- ASGAverageNetworkOut: average number of bytes sent out on all network interfaces by the Auto Scaling group.
- ALBRequestCountPerTarget: number of requests completed per target in an Application Load Balancer target group.
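A sketch of a target tracking policy that uses one of these metrics (the group name and target value are illustrative):

aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name cpu-target-50 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}, "TargetValue": 50.0}'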

Envelope Encryption

The process of encrypting data greater than 4 KB.

Envelope encryption process: the AWS KMS customer master key (CMK) is used with the GenerateDataKey API, which generates a data key and returns it along with an encrypted copy of it. This data key, also called the envelope key, is then used to encrypt our data, and the encrypted data key is stored alongside our data. The decryption process uses AWS KMS to decrypt the encrypted data key back to plaintext, which can then decrypt the actual data.

Why not directly encrypt the data with the CMK? KMS can only encrypt up to 4 KB of data directly; by generating a data key, the actual data does not need to go over the Amazon network, thereby reducing cost.
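A sketch of the two halves of the process with the AWS CLI (the key alias and file names are illustrative):

aws kms generate-data-key --key-id alias/my-app-key --key-spec AES_256
# The response contains Plaintext (the data key) and CiphertextBlob (the encrypted data key).
# Encrypt your data locally with Plaintext, then discard it and store CiphertextBlob with the data.
aws kms decrypt --ciphertext-blob fileb://encrypted-data-key.bin
# Returns the plaintext data key so the data can be decrypted locally.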

What are the two types of Lambda authorizers?

There are two types of Lambda authorizers:
- A token-based Lambda authorizer (also called a TOKEN authorizer) receives the caller's identity in a bearer token, such as a JSON Web Token (JWT) or an OAuth token. For an example application, see Open Banking Brazil - Authorization Samples on GitHub.
- A request parameter-based Lambda authorizer (also called a REQUEST authorizer) receives the caller's identity in a combination of headers, query string parameters, stageVariables, and $context variables. For WebSocket APIs, only request parameter-based authorizers are supported.

A Lambda authorizer, whether token-based or request parameter-based, outputs an IAM policy (JSON) that allows or denies the request access to the particular AWS resource.
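As a sketch, the output a Lambda authorizer returns looks like this (the principal ID, account, and API identifiers are illustrative):

{
  "principalId": "user|a1b2c3",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcdef123/prod/GET/widgets"
      }
    ]
  }
}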

CloudFormation Mappings

This is just a literal mapping of keys and associated values that you can use to specify conditional parameter values, similar to a lookup table.

CloudFormation Parameters

This only contains the values that will be passed to your template at runtime (when you create or update a stack). You can refer to parameters from the Resources and Outputs sections of the template but this is not used to specify the AWS SAM version.

Q: What is sampling?

To provide a performant and cost-effective experience, X-Ray does not collect data for every request that is sent to an application. Instead, it collects data for a statistically significant number of requests. X-Ray should not be used as an audit or compliance tool because it does not guarantee data completeness.

DynamoDB: To read data from a table, what operations do you use?

To read data from a table, you use operations such as GetItem, Query, or Scan Amazon DynamoDB returns all the item attributes by default. To get only some, rather than all of the attributes, use a projection expression.

A developer is using API Gateway Lambda Authorizer to provide authentication for every API request and control access to your API. The requirement is to implement an authentication strategy which is similar to OAuth or SAML. Which of the following is the MOST suitable method that the developer should use in this scenario?
- Request Parameter-based Authorization
- AWS STS-based Authentication
- Token-based Authorization
- Cross-Account Lambda Authorizer

Token-based Authorization.

The most suitable method in this scenario is token-based authorization, since this is useful if you want to implement a custom authorization scheme that uses a bearer token authentication strategy such as OAuth or SAML. Request Parameter-based Lambda Authorization is incorrect because this does not use tokens to identify a caller but rather a combination of headers, query string parameters, stageVariables, and $context variables. AWS STS-based authentication is incorrect because this is not a valid type of API Gateway Lambda authorizer. Cross-Account Lambda Authorizer is incorrect because this just enables you to use an AWS Lambda function from a different AWS account as your API authorizer function. Moreover, this is not a valid Lambda authorizer type.

CloudFormation StackSets

An application architect manages several AWS accounts for staging, testing, and production environments, which are used by several development teams. For application deployments, the developers use a similar base CloudFormation template for their applications. Which of the following can allow the developer to effectively manage the updates on this template across all AWS accounts with minimal effort?
- Create and manage stacks on multiple AWS accounts using CloudFormation Change Sets.
- Update the stacks on multiple AWS accounts using CloudFormation StackSets.
- Upload the CloudFormation templates to CodeCommit and use a combination of CodeDeploy and CodePipeline to manage the deployment to multiple accounts.
- Define and manage stack instances on multiple AWS Accounts using CloudFormation Stack Instances.

Update the stacks on multiple AWS accounts using CloudFormation StackSets. AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions.
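A sketch of the StackSets workflow from an administrator account (the names, account IDs, and regions are illustrative):

aws cloudformation create-stack-set \
    --stack-set-name app-base \
    --template-body file://template.yml
aws cloudformation create-stack-instances \
    --stack-set-name app-base \
    --accounts 111111111111 222222222222 \
    --regions us-east-1
# A later update-stack-set call rolls the template change out to every account and region.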

An application is hosted in a large EC2 instance. The application will process data and upload the results to an S3 bucket. Which of the following is the correct and safe way to implement this architecture?
- Store the access keys in the instance then use the AWS SDK to upload the results to S3.
- Use an IAM Role to grant the application the necessary permissions to upload data to S3.

Use an IAM Role to grant the application the necessary permissions to upload data to S3.

A role is like a temporary credential used to perform operations on your behalf. If you were to create an IAM user, assign it permissions to S3, and store those credentials on an EC2 instance, then you would be responsible for rotating the credentials, which is extra work. So the better option is to use roles.

How does the load balancer maintain sticky sessions?

When a load balancer first receives a request from a client, it routes the request to a target, generates a cookie named AWSALB that encodes information about the selected target, encrypts the cookie, and includes the cookie in the response to the client. The client should include the cookie that it receives in subsequent requests to the load balancer. When the load balancer receives a request from a client that contains the cookie, if sticky sessions are enabled for the target group and the request goes to the same target group, the load balancer detects the cookie and routes the request to the same target. If you use duration-based session stickiness, configure an appropriate cookie expiration time for your specific use case. If you set session stickiness from individual applications, use session cookies instead of persistent cookies where possible.

DynamoDB: Global Secondary Index

- You can create it when you create your table, or add it later
- Different partition key as well as a different sort key
- Gives a completely different view of the data
- Speeds up any queries relating to this alternative partition and sort key
- e.g. Partition Key: email address; Sort Key: last log-in date
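A sketch of adding a GSI to an existing table (the names are illustrative; this assumes on-demand billing, otherwise the Create block also needs ProvisionedThroughput):

aws dynamodb update-table \
    --table-name GameScores \
    --attribute-definitions \
        AttributeName=Email,AttributeType=S \
        AttributeName=LastLogin,AttributeType=S \
    --global-secondary-index-updates '[{"Create": {"IndexName": "ByEmail", "KeySchema": [{"AttributeName": "Email", "KeyType": "HASH"}, {"AttributeName": "LastLogin", "KeyType": "RANGE"}], "Projection": {"ProjectionType": "ALL"}}}]'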

How do I migrate my Elastic Beanstalk environment from one AWS account to another AWS account?

You must use saved configurations to migrate an Elastic Beanstalk environment between AWS accounts. https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-migration-accounts/

A junior developer working on ECS instances terminated a container instance in Amazon Elastic Container Service (Amazon ECS) as per instructions from the team lead. But the container instance continues to appear as a resource in the ECS cluster. As a Developer Associate, which of the following solutions would you recommend to fix this behavior?

You terminated the container instance while it was in the STOPPED state, which led to this synchronization issue.

If you terminate a container instance while it is in the STOPPED state, that container instance isn't automatically removed from the cluster. You will need to deregister your container instance in the STOPPED state by using the Amazon ECS console or AWS Command Line Interface. Once deregistered, the container instance will no longer appear as a resource in your Amazon ECS cluster.
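A sketch of the deregistration call (the cluster name and instance ARN are illustrative):

aws ecs deregister-container-instance \
    --cluster my-cluster \
    --container-instance arn:aws:ecs:us-east-1:123456789012:container-instance/my-cluster/abcdef123456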

The development team at an e-commerce company is preparing for the upcoming Thanksgiving sale. The product manager wants the development team to implement an appropriate caching strategy on Amazon ElastiCache to withstand traffic spikes on the website during the sale. A key requirement is to facilitate consistent updates to the product prices and product description, so that the cache never goes out of sync with the backend. Which of the following solutions would you recommend for the given use-case?
A) Use a caching strategy to write to the backend first and wait for the cache TTL to expire.
B) Use a caching strategy to write to the cache and the backend at the same time.
C) Use a caching strategy to write to the backend first and then invalidate the cache.

C) Use a caching strategy to write to the backend first and then invalidate the cache.

Due to the popularity of serverless computing, your manager instructed you to share your technical expertise with the whole software development department of your company. You are planning to deploy a simple Node.js 'Hello World' Lambda function to AWS using CloudFormation. Which of the following is the EASIEST way of deploying the function to AWS?
A) Upload the code in S3 then specify the S3Key and S3Bucket parameters under the AWS::Lambda::Function resource in the CloudFormation template.
B) Include your function source inline in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template.
C) Include your function source inline in the Code parameter of the AWS::Lambda::Function resource in the CloudFormation template.
D) Upload the code in S3 as a ZIP file then specify the S3 path in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html

Under the AWS::Lambda::Function resource, you can use the Code property, which contains the deployment package for a Lambda function. For all runtimes, you can specify the location of an object in Amazon S3. For Node.js and Python functions, you can specify the function code inline in the template. Changes to a deployment package in Amazon S3 are not detected automatically during stack updates. To update the function code, change the object key or version in the template.

Hence, including your function source inline in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template is the easiest way to deploy the Lambda function to AWS.

Uploading the code in S3 then specifying the S3Key and S3Bucket parameters under the AWS::Lambda::Function resource in the CloudFormation template is incorrect because although this is a valid deployment step, you still have to upload the code to S3 instead of just including the function source inline in the ZipFile parameter. Take note that the scenario explicitly mentions that you have to pick the easiest way. Including your function source inline in the Code parameter of the AWS::Lambda::Function resource in the CloudFormation template is incorrect because you should use the ZipFile parameter instead. Take note that the Code property is the parent property of the ZipFile parameter. Uploading the code in S3 as a ZIP file then specifying the S3 path in the ZipFile parameter of the AWS::Lambda::Function resource in the CloudFormation template is incorrect because, contrary to its name, the ZipFile parameter directly accepts the source code of your Lambda function and not an actual zip file. If you include your function source inline with this parameter, AWS CloudFormation places it in a file named index and zips it to create a deployment package. This is the reason why it is called the "ZipFile" parameter.
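A sketch of the inline approach (the function name and role ARN are illustrative; the role must be an existing Lambda execution role):

Resources:
  HelloWorldFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: hello-world
      Runtime: nodejs18.x
      Handler: index.handler
      Role: arn:aws:iam::123456789012:role/lambda-execution-role
      Code:
        ZipFile: |
          exports.handler = async () => 'Hello World';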

Your team maintains a public API Gateway that is accessed by clients from another domain. Usage has been consistent for the last few months, but recently it has more than doubled. As a result, your costs have gone up and you would like to prevent other unauthorized domains from accessing your API. Which of the following actions should you take?

Use account-level throttling.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html

Lambda proxy integration: API requests are routed as-is to the Lambda function, except that the ordering of the request parameters is not guaranteed. A Lambda function sends a response in XML format to API Gateway, resulting in a 502 response. Why is that?

https://tutorialsdojo.com/amazon-api-gateway/
The Lambda function has to return the response in JSON format, not in XML format, and that is the exact reason for the 502 error.
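For proxy integrations, the function must return an object in this documented JSON shape (the body contents are illustrative):

{
  "isBase64Encoded": false,
  "statusCode": 200,
  "headers": {"Content-Type": "application/json"},
  "body": "{\"message\": \"hello\"}"
}

Returning anything API Gateway cannot parse into this shape (XML, a bare string, malformed JSON) produces a 502 malformed Lambda proxy response error.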

What is an X-Ray segment?

A segment can be system-defined or user-defined (in the form of annotations). For example: a request goes to your application, which creates a segment; the application makes a DB call and gets the result, creating a subsegment that carries the query, timestamp, and any errors.

What is an X-Ray trace?

A set of data points that share the same trace ID. For example, when a client makes a request to your application, it is assigned a unique trace ID. As the request makes its way through the services in your application, the services relay information regarding the request back to X-Ray using this trace ID. The piece of information relayed by each service in your application to X-Ray is a segment, and a trace is a collection of segments.

Your manager assigned you a task of implementing server-side encryption with customer-provided encryption keys (SSE-C) on your S3 bucket, which will allow you to set your own encryption keys. Amazon S3 will manage both the encryption and decryption process using your key when you access your objects, which will remove the burden of maintaining any code to perform data encryption and decryption. To properly upload data to this bucket, which of the following headers must be included in your request?
A) x-amz-server-side-encryption, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5 headers
B) x-amz-server-side-encryption and x-amz-server-side-encryption-aws-kms-key-id headers
C) x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5 headers
D) x-amz-server-side-encryption-customer-key header only

C) The required headers are:
- x-amz-server-side-encryption-customer-algorithm: specifies the encryption algorithm. The header value must be "AES256".
- x-amz-server-side-encryption-customer-key: provides the 256-bit, base64-encoded encryption key for Amazon S3 to use to encrypt or decrypt your data.
- x-amz-server-side-encryption-customer-key-MD5: provides the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error.
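A sketch of an SSE-C upload with the AWS CLI, which sets these headers for you (the bucket, object key, and key material are illustrative; the key must be base64-encoded, and the CLI derives the MD5 digest):

KEY=$(openssl rand -base64 32)   # keep this key: S3 does not store it
aws s3api put-object \
    --bucket my-bucket \
    --key myFile \
    --body myFile \
    --sse-customer-algorithm AES256 \
    --sse-customer-key "$KEY"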

S3 encryption at rest: client side

You encrypt the data yourself before uploading it to S3.

