Developer Associate Exam 2020 - List 2

A pharmaceutical company runs their database workloads on Provisioned IOPS SSD (io1) volumes. As a Developer Associate, which of the following options would you identify as an INVALID configuration for io1 EBS volume types? a) 200 GB with 15k IOPS b) 200 GB with 5k IOPS c) 200 GB with 10k IOPS d) 200 GB with 2k IOPS

a) 200 GB with 15k IOPS

Correct option: A 200 GiB volume provisioned with 15,000 IOPS is an invalid configuration. The maximum ratio of provisioned IOPS to requested volume size (in GiB) for io1 is 50:1, so for a 200 GiB volume the maximum possible is 200 * 50 = 10,000 IOPS. Independently of the ratio, io1 volumes are capped at 64,000 IOPS.
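
For illustration only, a minimal boto3 sketch that provisions an io1 volume while checking the 50:1 ratio on the client side; the region, Availability Zone, and values are placeholders, not part of the original question.

```python
import boto3

# Hypothetical sketch: provisioning an io1 volume within the 50:1 IOPS-to-size ratio.
ec2 = boto3.client("ec2", region_name="us-east-1")

size_gib = 200
requested_iops = 10_000  # 200 GiB * 50 = 10,000 is the maximum allowed for this size

assert requested_iops <= size_gib * 50, "io1 allows at most 50 IOPS per GiB"

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=size_gib,
    VolumeType="io1",
    Iops=requested_iops,
)
print(volume["VolumeId"])
```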

A company uses Amazon Simple Email Service (SES) to cost-effectively send subscription emails to the customers. Intermittently, the SES service throws the error: Throttling - Maximum sending rate exceeded. As a developer associate, which of the following would you recommend to fix this issue? a) Configure Timeout mechanism for each request made to the SES service b) Use Exponential Backoff technique to introduce delay in time before attempting to execute the operation again c) Raise a service request with Amazon to increase the throttling limit for the SES API d) Implement retry mechanism for all 4xx errors to avoid throttling error

b) Use Exponential Backoff technique to introduce delay in time before attempting to execute the operation again

Correct option: [Use Exponential Backoff technique to introduce delay in time before attempting to execute the operation again] - A "Throttling - Maximum sending rate exceeded" error is retriable. Unlike most other errors returned by Amazon SES, a request rejected with a "Throttling" error can be retried at a later time and is likely to succeed. Retries are "selfish": when a client retries, it spends more of the server's time to get a higher chance of success. Where failures are rare or transient, that's not a problem, because the overall number of retried requests is small and the trade-off of increasing apparent availability works well. When failures are caused by overload, however, retries that increase load can make matters significantly worse and can even delay recovery by keeping the load high long after the original issue is resolved. The preferred solution is to use a backoff: instead of retrying immediately and aggressively, the client waits some amount of time between tries. The most common pattern is an exponential backoff, where the wait time increases exponentially after every attempt. A variety of factors can affect your send rate, e.g. message size, network performance, or Amazon SES availability. The advantage of the exponential backoff approach is that your application will self-tune and call Amazon SES at close to the maximum allowed rate.

Incorrect options:

[Configure Timeout mechanism for each request made to the SES service] - Requests can be configured to time out if they do not complete successfully within a given period. This helps free up the database, application, and any other resource that would otherwise keep waiting for the call to eventually succeed. But a timeout does not address the root cause here: the throttling error signifies that the load on SES is high, and timing out or retrying immediately does not reduce the request rate.

[Raise a service request with Amazon to increase the throttling limit for the SES API] - If the throttling error were persistent, it would indicate consistently high load and increasing the sending limit would be the right solution. The error here is only intermittent, signifying that temporarily decreasing the request rate will handle it.

[Implement retry mechanism for all 4xx errors to avoid throttling error] - 4xx status codes generally indicate a problem with the client request, such as invalid credentials or omitted required parameters. Those errors must be corrected before the request is resubmitted, so blindly retrying every 4xx error does not make sense and does not address the throttling issue.
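
As a rough illustration, a boto3 sketch that sends with SES and retries only throttling errors, doubling the wait (plus jitter) each attempt; the sender and recipient addresses and the retry limits are hypothetical.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

ses = boto3.client("ses", region_name="us-east-1")


def send_with_backoff(destination, subject, body, max_attempts=6):
    """Retry only throttling errors, with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return ses.send_email(
                Source="newsletter@example.com",          # hypothetical verified sender
                Destination={"ToAddresses": [destination]},
                Message={
                    "Subject": {"Data": subject},
                    "Body": {"Text": {"Data": body}},
                },
            )
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code != "Throttling" or attempt == max_attempts - 1:
                raise  # non-retriable error, or retries exhausted
            # Exponential backoff with jitter: ~0.5s, 1s, 2s, 4s, ...
            time.sleep((2 ** attempt) * 0.5 + random.uniform(0, 0.25))
```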

The Development team at a media company is working on securing their databases. Which of the following AWS database engines can be configured with IAM Database Authentication? (Select two) a) RDS SQL Server b) RDS MariaDB c) RDS MySQL d) RDS Oracle e) RDS PostgreSQL

c) RDS MySQL e) RDS PostgreSQL

Correct options: IAM database authentication is available for the RDS MySQL and RDS PostgreSQL engines. With this authentication method, you connect to the DB instance using a short-lived authentication token instead of a password.
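
A minimal boto3 sketch of requesting such a token; the endpoint, port, user name, and region are placeholders.

```python
import boto3

# Hypothetical sketch: generating a short-lived IAM authentication token for an
# RDS MySQL instance.
rds = boto3.client("rds", region_name="us-east-1")

token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)

# The token is then used as the password when opening the database connection
# (over SSL) with your usual MySQL or PostgreSQL client library.
print(token[:40], "...")
```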

A developer is looking at establishing access control for an API that connects to a Lambda function downstream. Which of the following represents a mechanism that CANNOT be used for authenticating with the API Gateway? a) AWS Security Token Service STS b) Lambda Authorizer c) Standard AWS IAM roles and policies d) Cognito User Pools

a) AWS Security Token Service STS

Correct option: Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.

[AWS Security Token Service (STS)] - AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). However, it is not supported as an authentication mechanism by API Gateway.

Incorrect options:

[Standard AWS IAM roles and policies] - Standard AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or to individual methods. IAM roles and policies can be used for controlling who can create and manage your APIs, as well as who can invoke them.

[Lambda Authorizer] - Lambda authorizers are Lambda functions that control access to REST API methods using bearer token authentication, as well as information described by headers, paths, query strings, stage variables, or context variable request parameters. Lambda authorizers are used to control who can invoke REST API methods.

[Cognito User Pools] - Amazon Cognito user pools let you create customizable authentication and authorization solutions for your REST APIs. Amazon Cognito user pools are used to control who can invoke REST API methods.
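
To make the Lambda authorizer option concrete, here is a minimal sketch of a TOKEN-type authorizer handler; the token check and the allow/deny logic are purely illustrative assumptions.

```python
# Minimal sketch of an API Gateway Lambda (TOKEN) authorizer.
# The token validation logic below is a hypothetical placeholder.

def handler(event, context):
    token = event.get("authorizationToken", "")

    # In a real authorizer you would validate a JWT or look the token up somewhere.
    effect = "Allow" if token == "allow-me" else "Deny"

    return {
        "principalId": "user|example",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```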

Your team lead has asked you to learn AWS CloudFormation to create a collection of related AWS resources and provision them in an orderly fashion. You decide to provide AWS-specific parameter types to catch invalid values. When specifying parameters, which of the following is not a valid parameter type? a) Dependent Parameter b) CommaDelimitedList c) AWS::EC2::KeyPair::KeyName d) String

a) Dependent Parameter

Correct option: AWS CloudFormation gives developers and businesses an easy way to create a collection of related AWS and third-party resources and provision them in an orderly and predictable fashion. Parameter types enable CloudFormation to validate inputs earlier in the stack creation process. CloudFormation currently supports the following parameter types:

String - A literal string
Number - An integer or float
List<Number> - An array of integers or floats
CommaDelimitedList - An array of literal strings that are separated by commas
AWS::EC2::KeyPair::KeyName - An Amazon EC2 key pair name
AWS::EC2::SecurityGroup::Id - A security group ID
AWS::EC2::Subnet::Id - A subnet ID
AWS::EC2::VPC::Id - A VPC ID
List<AWS::EC2::VPC::Id> - An array of VPC IDs
List<AWS::EC2::SecurityGroup::Id> - An array of security group IDs
List<AWS::EC2::Subnet::Id> - An array of subnet IDs
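
As a loose illustration, a boto3 sketch that validates a small template using two of these AWS-specific parameter types; the template content and AMI ID are hypothetical.

```python
import boto3

# Hypothetical sketch: a tiny template exercising AWS-specific parameter types,
# checked with the CloudFormation ValidateTemplate API.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: Existing EC2 key pair for SSH access
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Subnets for the application tier
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      KeyName: !Ref KeyName
"""

cfn = boto3.client("cloudformation")
print(cfn.validate_template(TemplateBody=template)["Parameters"])
```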

As a Team Lead, you are expected to generate a report of the code builds for every week to report internally and to the client. This report consists of the number of code builds performed in a week, the percentage of successful and failed builds, and the overall time spent on these builds by the team members. You also need to retrieve the CodeBuild logs for failed builds and analyze them in Athena. Which of the following options will help achieve this? a) Enable S3 and CloudWatch logs integration b) Use CloudWatch Events c) Use AWS CloudTrail and deliver logs to S3 d) Use AWS Lambda integration

a) Enable S3 and CloudWatch logs integration

AWS CodeBuild monitors your builds on your behalf and reports metrics through Amazon CloudWatch. These metrics include the number of total builds, failed builds, successful builds, and the duration of builds. You can monitor your builds at two levels: project level and AWS account level. You can also export log data from your CloudWatch log groups to an Amazon S3 bucket and use this data in custom processing and analysis (for example, with Athena), or to load onto other systems.
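
A minimal boto3 sketch of the export step, assuming the CodeBuild project already writes to CloudWatch Logs; the log group, bucket, and prefix names are placeholders.

```python
import time
import boto3

# Hypothetical sketch: exporting one week of CodeBuild CloudWatch Logs to S3 so the
# data can be queried with Athena.
logs = boto3.client("logs")

now_ms = int(time.time() * 1000)
week_ms = 7 * 24 * 60 * 60 * 1000

logs.create_export_task(
    taskName="codebuild-weekly-export",
    logGroupName="/aws/codebuild/my-build-project",
    fromTime=now_ms - week_ms,
    to=now_ms,
    destination="my-codebuild-logs-bucket",      # S3 bucket (must allow log delivery)
    destinationPrefix="codebuild/weekly",
)
```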

A company runs its flagship application on a fleet of Amazon EC2 instances. After misplacing a couple of private keys from the SSH key pairs, they have decided to re-use their SSH key pairs for the different instances across AWS Regions. As a Developer Associate, which of the following would you recommend to address this use-case? a) Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS regions b) Store the public and private SSH key pair in AWS Trusted Advisor and access it across AWS regions c) Encrypt the private SSH key and store it in the S3 bucket to be accessed from any AWS region d) It is not possible to reuse SSH key pairs across AWS regions

a) Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS regions

Correct option: [Generate a public SSH key from a private SSH key. Then, import the key into each of your AWS Regions]

Here is the correct way of reusing SSH keys across AWS Regions:

1. Generate a public SSH key (.pub) file from the private SSH key (.pem) file.
2. Set the AWS Region you wish to import to.
3. Import the public SSH key into the new Region.

Incorrect options:

[It is not possible to reuse SSH key pairs across AWS Regions] - As explained above, it is possible to reuse key pairs with a manual import.

[Store the public and private SSH key pair in AWS Trusted Advisor and access it across AWS Regions] - AWS Trusted Advisor is an application that draws upon best practices learned from AWS' aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps. It does not store key pair credentials.

[Encrypt the private SSH key and store it in the S3 bucket to be accessed from any AWS Region] - Storing the private key in Amazon S3 is possible, but it does not register the key pair with EC2 in each Region, which is what the use case requires.
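
For illustration, a boto3 sketch of step 3 repeated across Regions; the key name, file path, and Region list are placeholders.

```python
import boto3

# Hypothetical sketch: importing the same public key into several Regions so the
# existing key pair can be reused.
with open("my-key.pub", "rb") as f:
    public_key_material = f.read()

for region in ["us-east-1", "eu-west-1", "ap-southeast-1"]:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.import_key_pair(
        KeyName="shared-ssh-key",
        PublicKeyMaterial=public_key_material,
    )
    print(f"Imported key into {region}")
```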

A company wants to automate its order fulfillment and inventory tracking workflow. Starting from order creation to updating inventory to shipment, the entire process has to be tracked, managed and updated automatically. Which of the following would you recommend as the most optimal solution for this requirement? a) Use AWS Step Functions to coordinate and manage the components of order management and inventory tracking workflow b) Use SQS to pass information from order management to inventory tracking workflow c) Use SNS to develop event-driven apps that can share information d) Configure EventBridge to track the flow of work from order management to inventory tracking systems

a) Use AWS Step Functions to coordinate and manage the components of order management and inventory tracking workflow

Correct option: [Use AWS Step Functions to coordinate and manage the components of order management and inventory tracking workflow]

AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. Through its visual interface, you can create and run a series of checkpointed and event-driven workflows that maintain the application state. The output of one step acts as the input to the next. Each step in your application executes in order, as defined by your business logic. AWS Step Functions enables you to implement a business process as a series of steps that make up a workflow. The individual steps in the workflow can invoke a Lambda function or a container that has some business logic, update a database such as DynamoDB, or publish a message to a queue once that step or the entire workflow completes execution.

Benefits of Step Functions:

Build and update apps quickly: AWS Step Functions lets you build visual workflows that enable the fast translation of business requirements into technical requirements. You can build applications in a matter of minutes, and when needs change, you can swap or reorganize components without customizing any code.

Improve resiliency: AWS Step Functions manages state, checkpoints, and restarts for you to make sure that your application executes in order and as expected. Built-in try/catch, retry, and rollback capabilities deal with errors and exceptions automatically.

Write less code: AWS Step Functions manages the logic of your application for you and implements basic primitives such as branching, parallel execution, and timeouts. This removes extra code that may be repeated in your microservices and functions.
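
A rough boto3 sketch of a two-step state machine chaining an order-creation and an inventory-update Lambda; all names, ARNs, and the role are hypothetical placeholders.

```python
import json
import boto3

# Hypothetical sketch: create a simple sequential state machine and start one execution.
sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "CreateOrder",
    "States": {
        "CreateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:create-order",
            "Next": "UpdateInventory",
        },
        "UpdateInventory": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:update-inventory",
            "End": True,
        },
    },
}

state_machine = sfn.create_state_machine(
    name="order-fulfillment",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-exec-role",
)

sfn.start_execution(
    stateMachineArn=state_machine["stateMachineArn"],
    input=json.dumps({"orderId": "o-123", "items": [{"sku": "sku-1", "qty": 2}]}),
)
```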

A pharmaceutical company uses Amazon EC2 instances for application hosting and Amazon CloudFront for content delivery. A new research paper with critical findings has to be shared with a research team that is spread across the world. Which of the following represents the most optimal solution to address this requirement without compromising the security of the content? a) Use CloudFront signed URL feature to control access to the file b) Configure AWS Web Application Firewall (WAF) to monitor and control the HTTP and HTTPS requests that are forwarded to CloudFront c) Use CloudFront's Field-Level Encryption to help protect sensitive data d) Use CloudFront's signed cookies feature to control access to the file

a) Use CloudFront signed URL feature to control access to the file

Correct option: [Use CloudFront signed URL feature to control access to the file] A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content.

Incorrect options:

[Use CloudFront's signed cookies feature to control access to the file] - CloudFront signed cookies allow you to control who can access your content when you don't want to change your current URLs or when you want to provide access to multiple restricted files, for example, all of the files in the subscribers' area of a website. The requirement here involves only one file, so a signed URL is the optimal solution. Signed URLs take precedence over signed cookies: if you use both signed URLs and signed cookies to control access to the same files and a viewer uses a signed URL to request a file, CloudFront determines whether to return the file to the viewer based only on the signed URL.

[Configure AWS Web Application Firewall (WAF) to monitor and control the HTTP and HTTPS requests that are forwarded to CloudFront] - AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the values of query strings or the IP addresses that requests originate from, CloudFront responds to requests either with the requested content or with an HTTP status code 403 (Forbidden). A firewall addresses broader use cases than restricting access to a single file.

[Use CloudFront's Field-Level Encryption to help protect sensitive data] - CloudFront's field-level encryption further encrypts sensitive data in an HTTPS form using field-specific encryption keys (which you supply) before a POST request is forwarded to your origin. This ensures that sensitive data can only be decrypted and viewed by certain components or services in your application stack. This feature is not useful for the given use case.
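
For illustration, a Python sketch that generates a time-limited signed URL with botocore's CloudFrontSigner; the key pair ID, distribution domain, file path, and private key location are hypothetical.

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical sketch: generating a CloudFront signed URL for a single restricted file.

def rsa_signer(message):
    with open("cloudfront-private-key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder public key ID

signed_url = signer.generate_presigned_url(
    "https://d1234abcd.cloudfront.net/research/critical-findings.pdf",
    date_less_than=datetime.utcnow() + timedelta(days=7),  # link valid for one week
)
print(signed_url)
```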

You have launched several AWS Lambda functions written in Java. A new requirement states that over 1 MB of data should be passed to the functions and should be encrypted and decrypted at runtime. Which of the following methods is suitable to address the given use-case? a) Use Envelope Encryption and reference the data as a file within the code b) Use Envelope Encryption and store as an environment variable c) Use KMS direct encryption and store as a file d) Use KMS Encryption and store as an environment variable

a) Use Envelope Encryption and reference the data as a file within the code

Correct option: [Use Envelope Encryption and reference the data as a file within the code]

While AWS KMS does support sending data up to 4 KB to be encrypted directly, envelope encryption can offer significant performance benefits. When you encrypt data directly with AWS KMS, it must be transferred over the network. Envelope encryption reduces the network load since only the request and delivery of the much smaller data key go over the network. The data key is used locally in your application or encrypting AWS service, avoiding the need to send the entire block of data to AWS KMS and suffer network latency. AWS Lambda environment variables can have a maximum size of 4 KB, and the direct Encrypt API of KMS also has an upper limit of 4 KB for the data payload. To encrypt 1 MB, you need to use envelope encryption (for example, via the AWS Encryption SDK) and package the encrypted file with the Lambda function.

Incorrect options:

[Use KMS direct encryption and store as a file] - You can only encrypt up to 4 KB (4096 bytes) of arbitrary data directly, such as an RSA key, a database password, or other sensitive information, so this option is not correct for the given use-case.

[Use Envelope Encryption and store as an environment variable] - Environment variables must not exceed 4 KB, so this option is not correct for the given use-case.

[Use KMS Encryption and store as an environment variable] - You can encrypt up to 4 KB (4096 bytes) of arbitrary data directly with KMS, and Lambda environment variables must not exceed 4 KB. So this option is not correct for the given use-case.
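
The functions in the question are written in Java, but the envelope pattern is the same in any SDK; below is a rough boto3 (Python) sketch of it. The KMS key alias and file names are placeholders.

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical sketch of envelope encryption: KMS issues a data key, the bulk data
# (which can be far larger than 4 KB) is encrypted locally, and only the encrypted
# data key is stored alongside the ciphertext.
kms = boto3.client("kms")

data_key = kms.generate_data_key(KeyId="alias/lambda-payload-key", KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]           # use locally, never persist
encrypted_key = data_key["CiphertextBlob"]      # store next to the ciphertext

nonce = os.urandom(12)                          # store alongside the ciphertext
payload = open("large-input.bin", "rb").read()  # e.g. the >1 MB payload
ciphertext = AESGCM(plaintext_key).encrypt(nonce, payload, None)

# To decrypt at runtime: ask KMS to decrypt the small encrypted data key, then
# decrypt the bulk data locally with that key.
recovered_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == payload
```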

A business has their test environment built on Amazon EC2 configured on General purpose SSD volume. At which gp2 volume size will their test environment hit the max IOPS? a) 10.6 TB b) 5.3 TB c) 2.7 TB d) 16 TB

b) 5.3 TB

The performance of gp2 volumes is tied to volume size, which determines the baseline performance level of the volume and how quickly it accumulates I/O credits; larger volumes have higher baseline performance levels and accumulate I/O credits faster.

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. The maximum of 16,000 IOPS is therefore reached at 5,334 GiB, which is roughly 5.3 TiB.
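
A tiny illustrative helper that encodes this 3-IOPS-per-GiB scaling with the 100/16,000 floor and cap; it is only a sanity check on the arithmetic above.

```python
# Illustrative helper: baseline IOPS for a gp2 volume (3 IOPS per GiB,
# floored at 100 IOPS and capped at 16,000 IOPS).

def gp2_baseline_iops(size_gib: float) -> int:
    return int(min(max(3 * size_gib, 100), 16_000))

assert gp2_baseline_iops(33.33) == 100      # small volumes get the 100 IOPS floor
assert gp2_baseline_iops(1_000) == 3_000
assert gp2_baseline_iops(5_334) == 16_000   # ~5.3 TiB hits the maximum
assert gp2_baseline_iops(16_000) == 16_000  # larger volumes stay capped
```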

The development team at a retail company is gearing up for the upcoming Thanksgiving sale and wants to make sure that the application's serverless backend running via Lambda functions does not hit latency bottlenecks as a result of the traffic spike. As a Developer Associate, which of the following solutions would you recommend to address this use-case? a) Configure app auto scaling to manage lambda reserved concurrency on a schedule b) Configure app auto scaling to manage lambda provisioned concurrency on a schedule c) No need to make any special provisions as lambda is auto scalable because of its serverless nature d) Add an ALB in front of the lambda functions

b) Configure app auto scaling to manage lambda provisioned concurrency on a schedule Concurrency is the number of requests that a Lambda function is serving at any given time. If a Lambda function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency. Due to a spike in traffic, when Lambda functions scale, this causes the portion of requests that are served by new instances to have higher latency than the rest. To enable your function to scale without fluctuations in latency, use provisioned concurrency. By allocating provisioned concurrency before an increase in invocations, you can ensure that all requests are served by initialized instances with very low latency. You can configure Application Auto Scaling to manage provisioned concurrency on a schedule or based on utilization. Use scheduled scaling to increase provisioned concurrency in anticipation of peak traffic. To increase provisioned concurrency automatically as needed, use the Application Auto Scaling API to register a target and create a scaling policy.
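
As a rough illustration, a boto3 sketch that registers a Lambda alias as a scalable target and schedules a provisioned-concurrency increase ahead of the expected spike; the function name, alias, capacities, and cron expression are placeholders.

```python
import boto3

# Hypothetical sketch: scheduled scaling of Lambda provisioned concurrency via
# Application Auto Scaling.
autoscaling = boto3.client("application-autoscaling")

resource_id = "function:checkout-backend:prod"  # function name : alias (or version)

autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=10,
    MaxCapacity=500,
)

autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="thanksgiving-peak",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 6 26 11 ? 2020)",  # ramp up shortly before the sale starts (UTC)
    ScalableTargetAction={"MinCapacity": 300, "MaxCapacity": 500},
)
```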

A company uses AWS CodeDeploy to deploy applications from GitHub to EC2 instances running Amazon Linux. The deployment process uses a file called appspec.yml for specifying deployment hooks. A final lifecycle event should be specified to verify the deployment success. Which of the following hook events should be used to verify the success of the deployment? a) AllowTraffic b) ValidateService c) ApplicationStart d) AfterInstall

b) ValidateService Correct option: [ValidateService]: ValidateService is the last deployment lifecycle event. It is used to verify the deployment was completed successfully. Incorrect options: [AfterInstall] - You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions [ApplicationStart] - You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop [AllowTraffic] - During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts

A media publishing company is using Amazon EC2 instances for running their business-critical applications. Their IT team is looking at reserving capacity apart from savings plans for the critical instances. As a Developer Associate, which of the following reserved instance types would you select to provide capacity reservations? a) Both Regional and Zonal reserved instances b) Zonal c) Neither regional nor zonal d) Regional

b) Zonal Reserved Instances Correct option: When you purchase a Reserved Instance for a specific Availability Zone, it's referred to as a Zonal Reserved Instance. Zonal Reserved Instances provide capacity reservations as well as discounts. [Zonal Reserved Instances] - A zonal Reserved Instance provides a capacity reservation in the specified Availability Zone. Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you the ability to create and manage Capacity Reservations independently from the billing discounts offered by Savings Plans or regional Reserved Instances. Incorrect options: [Regional Reserved Instances] - When you purchase a Reserved Instance for a Region, it's referred to as a regional Reserved Instance. A regional Reserved Instance does not provide a capacity reservation. [Both Regional Reserved Instances and Zonal Reserved Instances] - As discussed above, only Zonal Reserved Instances provide capacity reservation. [Neither Regional Reserved Instances nor Zonal Reserved Instances] - As discussed above, Zonal Reserved Instances provide capacity reservation.

A development team is working on an AWS Lambda function that accesses DynamoDB. The Lambda function must do an upsert, that is, it must retrieve an item and update some of its attributes or create the item if it does not exist. Which of the following represents the solution with MINIMUM IAM permissions that can be used for the Lambda function to achieve this functionality? a) dynamodb:UpdateItem, dynamodb:GetItem, dynamodb:PutItem b) dynamodb:UpdateItem, dynamodb:GetItem c) dynamodb:GetRecords, dynamodb:PutItem, dynamodb:UpdateTable d) dynamodb:AddItem, dynamodb:GetItem

b) dynamodb:UpdateItem, dynamodb:GetItem

Correct option: [dynamodb:UpdateItem, dynamodb:GetItem] - With Amazon DynamoDB transactions, you can group multiple actions together and submit them as a single all-or-nothing TransactWriteItems or TransactGetItems operation. You can use AWS Identity and Access Management (IAM) to restrict the actions that transactional operations can perform in Amazon DynamoDB. Permissions for Put, Update, Delete, and Get actions are governed by the permissions used for the underlying PutItem, UpdateItem, DeleteItem, and GetItem operations. For the ConditionCheck action, you can use the dynamodb:ConditionCheck permission in IAM policies.

The UpdateItem API action edits an existing item's attributes or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values, and you can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values). Because UpdateItem already covers the "create if missing" case, there is no need to include the dynamodb:PutItem action for the given use-case. So, the IAM policy must include only the permissions to get and update the item in the DynamoDB table.
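
A minimal boto3 sketch of the upsert inside the Lambda function, annotated with the IAM action each call needs; the table and attribute names are hypothetical.

```python
import boto3

# Hypothetical sketch: GetItem to read the current state, then UpdateItem, which
# creates the item if it does not exist yet.
table = boto3.resource("dynamodb").Table("Orders")

existing = table.get_item(Key={"OrderId": "o-123"}).get("Item")  # needs dynamodb:GetItem

table.update_item(                                               # needs dynamodb:UpdateItem
    Key={"OrderId": "o-123"},
    UpdateExpression="SET #st = :status, UpdatedAt = :ts",
    ExpressionAttributeNames={"#st": "Status"},                  # Status is a reserved word
    ExpressionAttributeValues={
        ":status": "SHIPPED" if existing else "CREATED",
        ":ts": "2020-11-26T08:00:00Z",
    },
)
```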

Your company has embraced cloud-native microservices architectures. New applications must be dockerized and stored in a registry service offered by AWS. The architecture should support dynamic port mapping and support multiple tasks from a single service on the same container instance. All services should run on the same EC2 instance. Which of the following options offers the best-fit solution for the given use-case? a) Classic load balancer + ECS b) Application load balancer + beanstalk c) Application load balancer + ECS d) Classic load balancer + Beanstalk

c) Application load balancer + ECS

Correct option: [Application Load Balancer + ECS]

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You can host your cluster on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks using the Fargate launch type. For more control over your infrastructure, you can host your tasks on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances that you manage by using the EC2 launch type.

An Application Load Balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. A listener checks for connection requests from clients, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to its registered targets. Each rule consists of a priority, one or more actions, and one or more conditions.

When you deploy your services using Amazon Elastic Container Service (Amazon ECS), you can use dynamic port mapping to support multiple tasks from a single service on the same container instance. Amazon ECS manages updates to your services by automatically registering and deregistering containers with your target group using the instance ID and port for each container.

Incorrect options:

[Classic Load Balancer + Beanstalk] - The Classic Load Balancer doesn't allow you to run multiple copies of a task on the same instance. Instead, with the Classic Load Balancer, you must statically map port numbers on a container instance. So this option is ruled out.

[Application Load Balancer + Beanstalk] - You can create Docker environments that support multiple containers per Amazon EC2 instance with a multi-container Docker platform for Elastic Beanstalk. However, ECS gives you finer control.

[Classic Load Balancer + ECS] - The Classic Load Balancer doesn't allow you to run multiple copies of a task on the same instance. Instead, with the Classic Load Balancer, you must statically map port numbers on a container instance. So this option is ruled out.

A developer wants to package the code and dependencies for the application-specific Lambda functions as container images to be hosted on Amazon Elastic Container Registry (ECR). Which of the following options are correct for the given requirement? (Select two) a) You can test the containers locally using the Lambda Runtime API b) Lambda supports both Windows and Linux-based container images c) To deploy a container image to Lambda, the container image must implement the Lambda Runtime API d) You can deploy Lambda function as a container image, with a max size of 15GB e) You must create the Lambda function from the same account as the container registry in ECR

c) To deploy a container image to Lambda, the container image must implement the Lambda Runtime API e) You must create the Lambda function from the same account as the container registry in ECR

Correct options:

[To deploy a container image to Lambda, the container image must implement the Lambda Runtime API] - To deploy a container image to Lambda, the container image must implement the Lambda Runtime API. The AWS open-source runtime interface clients implement the API. You can add a runtime interface client to your preferred base image to make it compatible with Lambda.

[You must create the Lambda function from the same account as the container registry in ECR] - You can package your Lambda function code and dependencies as a container image, using tools such as the Docker CLI. You can then upload the image to your container registry hosted on Amazon Elastic Container Registry (Amazon ECR). Note that you must create the Lambda function from the same account as the container registry in Amazon ECR.

Incorrect options:

[Lambda supports both Windows and Linux-based container images] - Lambda currently supports only Linux-based container images.

[You can test the containers locally using the Lambda Runtime API] - You test the containers locally using the Lambda Runtime Interface Emulator, not the Runtime API.

[You can deploy Lambda function as a container image, with a max size of 15GB] - You can deploy a Lambda function as a container image with a maximum image size of 10 GB.

An e-commerce company uses AWS CloudFormation to implement Infrastructure as Code for the entire organization. Maintaining resources as stacks with CloudFormation has greatly reduced the management effort needed to manage and maintain the resources. However, a few teams have been complaining of failing stack updates owing to out-of-band fixes running on the stack resources. Which of the following is the best solution that can help in keeping the CloudFormation stack and its resources in sync with each other? a) Use Change Sets feature of CF b) Use CF in Elastic beanstalk env to reduce direct changes to CF resources c) Use Drift Detection feature of CF d) Use Tag feature of CF to monitor the changes happening on specific resources

c) Use Drift Detection feature of CF

Correct option: [Use Drift Detection feature of CloudFormation]

Drift detection enables you to detect whether a stack's actual configuration differs, or has drifted, from its expected configuration. Use CloudFormation to detect drift on an entire stack, or on individual resources within the stack. A resource is considered to have drifted if any of its actual property values differ from the expected property values. This includes if the property or resource has been deleted. A stack is considered to have drifted if one or more of its resources have drifted.

To determine whether a resource has drifted, CloudFormation determines the expected resource property values, as defined in the stack template and any values specified as template parameters. CloudFormation then compares those expected values with the actual values of those resource properties as they currently exist in the stack. A resource is considered to have drifted if one or more of its properties have been deleted, or had their value changed. You can then take corrective action so that your stack resources are again in sync with their definitions in the stack template, such as updating the drifted resources directly so that they agree with their template definition. Resolving drift helps to ensure configuration consistency and successful stack operations.

Incorrect options:

[Use CloudFormation in Elastic Beanstalk environment to reduce direct changes to CloudFormation resources] - An Elastic Beanstalk environment provides full access to the resources created, so it is still possible to edit the resources directly; this does not solve the issue mentioned in the given use case.

[Use Tag feature of CloudFormation to monitor the changes happening on specific resources] - Tags help you identify and categorize the resources created as part of a CloudFormation template. This feature is not helpful for the given use case.

[Use Change Sets feature of CloudFormation] - When you need to update a stack, understanding how your changes will affect running resources before you implement them can help you update stacks with confidence. Change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or explore other changes by creating another change set. Change sets are not useful for the given use-case.
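
For illustration, a boto3 sketch that triggers drift detection on a stack and lists the drifted resources afterwards; the stack name is a placeholder and detection runs asynchronously.

```python
import boto3

# Hypothetical sketch: detect drift on a stack and inspect the drifted resources.
cfn = boto3.client("cloudformation")

detection = cfn.detect_stack_drift(StackName="order-platform-prod")

# Detection is asynchronous; in practice, poll until DetectionStatus is DETECTION_COMPLETE.
status = cfn.describe_stack_drift_detection_status(
    StackDriftDetectionId=detection["StackDriftDetectionId"]
)
print(status["DetectionStatus"], status.get("StackDriftStatus"))

drifts = cfn.describe_stack_resource_drifts(
    StackName="order-platform-prod",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for resource in drifts["StackResourceDrifts"]:
    print(resource["LogicalResourceId"], resource["StackResourceDriftStatus"])
```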

A team lead has asked you to create an AWS CloudFormation template that creates EC2 instances and RDS databases. The template should be reusable by allowing the user to input a parameter value for an Amazon EC2 AMI ID. Which of the following intrinsic functions should you choose to reference the parameter? a) !Param b) !GetAtt c) !Join d) !Ref

d) !Ref

Correct option: [!Ref] - The intrinsic function Ref returns the value of the specified parameter or resource. When you specify a parameter's logical name, it returns the value of the parameter; when you specify a resource's logical name, it returns a value that you can typically use to refer to that resource, such as a physical ID.

Incorrect options:

[!GetAtt] - This function returns the value of an attribute from a resource in the template. The YAML syntax is: !GetAtt logicalNameOfResource.attributeName

[!Param] - This is not a valid function name. This option has been added as a distractor.

[!Join] - This function appends a set of values into a single value, separated by the specified delimiter.

A developer working with an EC2 Windows instance has installed Kinesis Agent for Windows to stream JSON-formatted log files to Amazon Simple Storage Service (S3) via Amazon Kinesis Data Firehose. The developer wants to understand the sink type capabilities of Kinesis Firehose. Which of the following sink types is NOT supported by Kinesis Firehose? a) Redshift with S3 b) S3 as a direct Firehose destination c) Elasticsearch with optional S3 backup d) ElastiCache with S3 backup

d) ElastiCache with S3 backup

Correct option: Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk. With Kinesis Data Firehose, you don't need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified.

[Amazon ElastiCache with Amazon S3 as backup] - Amazon ElastiCache is a fully managed in-memory data store, compatible with Redis or Memcached. ElastiCache is NOT a supported destination for Amazon Kinesis Data Firehose.

Incorrect options:

[Amazon Elasticsearch Service (Amazon ES) with optionally backing up data to Amazon S3] - Amazon ES is a supported destination type for Kinesis Firehose. Streaming data is delivered to your Amazon ES cluster, and can optionally be backed up to your S3 bucket concurrently.

[Amazon Simple Storage Service (Amazon S3) as a direct Firehose destination] - For Amazon S3 destinations, streaming data is delivered to your S3 bucket. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket.

[Amazon Redshift with Amazon S3] - For Amazon Redshift destinations, streaming data is delivered to your S3 bucket first. Kinesis Data Firehose then issues an Amazon Redshift COPY command to load data from your S3 bucket to your Amazon Redshift cluster. If data transformation is enabled, you can optionally back up source data to another Amazon S3 bucket.

As a Senior Developer, you are tasked with creating several API Gateway powered APIs along with your team of developers. The developers are working on the API in the development environment, but they find the changes made to the APIs are not reflected when the API is called. As a Developer Associate, which of the following solutions would you recommend for this use-case? a) Developers need IAM permissions on API execution component of API Gateway b) Use Stage Variables for developing state of API c) Enable lambda authorizer to access API d) Redeploy the API to an existing stage or to a new stage

d) Redeploy the API to an existing stage or to a new stage Correct option: [Redeploy the API to an existing stage or to a new stage] After creating your API, you must deploy it to make it callable by your users. To deploy an API, you create an API deployment and associate it with a stage. A stage is a logical reference to a lifecycle state of your API (for example, dev, prod, beta, v2). API stages are identified by the API ID and stage name. Every time you update an API, you must redeploy the API to an existing stage or to a new stage. Updating an API includes modifying routes, methods, integrations, authorizers, and anything else other than stage settings.
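
A minimal boto3 sketch of that redeployment step; the REST API ID, stage name, and description are hypothetical.

```python
import boto3

# Hypothetical sketch: redeploying a REST API so that recent changes become callable
# on the "dev" stage.
apigateway = boto3.client("apigateway")

deployment = apigateway.create_deployment(
    restApiId="a1b2c3d4e5",
    stageName="dev",                       # deploys to an existing or new stage
    description="Pick up latest method and integration changes",
)
print(deployment["id"])
```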

You are a development team lead setting permissions for other IAM users with limited permissions. On the AWS Management Console, you created a dev group where new developers will be added, and on your workstation, you configured a developer profile. You would like to test that this user cannot terminate instances. Which of the following options would you execute? a) Use the AWS CLI --test option b) Retrieve the policy using the EC2 metadata service and use the IAM policy simulator c) Using the CLI, create a dummy EC2 and delete it using another CLI call d) Use the AWS CLI --dry-run option

d) Use the AWS CLI --dry-run option Correct option: [Use the AWS CLI --dry-run option]: The --dry-run option checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation, otherwise, it is UnauthorizedOperation. Incorrect options: [Use the AWS CLI --test option] - This is a made-up option and has been added as a distractor. [Retrieve the policy using the EC2 metadata service and use the IAM policy simulator] - EC2 metadata service is used to retrieve dynamic information such as instance-id, local-hostname, public-hostname. This cannot be used to check whether you have the required permissions for the action. [Using the CLI, create a dummy EC2 and delete it using another CLI call] - That would not work as the current EC2 may have permissions that the dummy instance does not have. If permissions were the same it can work but it's not as elegant as using the dry-run option.
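
The CLI --dry-run flag maps to the DryRun parameter in the SDKs; below is a boto3 sketch of the same permission check, with a placeholder instance ID and no instance actually terminated.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical sketch: the boto3 equivalent of `aws ec2 terminate-instances --dry-run`.
ec2 = boto3.client("ec2")

try:
    ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Developer profile HAS permission to terminate instances")
    elif code == "UnauthorizedOperation":
        print("Developer profile does NOT have permission to terminate instances")
    else:
        raise
```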

A company is using a Border Gateway Protocol (BGP) based AWS VPN connection to connect from its on-premises data center to Amazon EC2 instances in the company's account. The development team can access an EC2 instance in subnet A but is unable to access an EC2 instance in subnet B in the same VPC. Which logs can be used to verify whether the traffic is reaching subnet B? a) Subnet logs b) VPN logs c) BGP logs d) VPC Flow logs

d) VPC Flow logs

Correct option: [VPC Flow Logs] - VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored. Flow log data for a monitored network interface is recorded as flow log records, which are log events consisting of fields that describe the traffic flow.

To create a flow log, you specify:
The resource for which to create the flow log
The type of traffic to capture (accepted traffic, rejected traffic, or all traffic)
The destinations to which you want to publish the flow log data
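
For illustration, a boto3 sketch that enables flow logs on subnet B so traffic reaching it can be inspected in CloudWatch Logs; the subnet ID, log group, and role ARN are placeholders.

```python
import boto3

# Hypothetical sketch: create a flow log for a single subnet, capturing all traffic.
ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["subnet-0bbbbbbbbbbbbbbbb"],
    ResourceType="Subnet",
    TrafficType="ALL",                              # accepted and rejected traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs/subnet-b",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/vpc-flow-logs-role",
)
```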

A developer is defining the signers that can create signed URLs for their Amazon CloudFront distributions. Which of the following statements should the developer consider while defining the signers? (Select two) a) CloudFront key pairs can be created with any account that has admin permissions and full access to CloudFront resources b) Both the signers (trusted key groups and CloudFront key pairs) can be managed using the CloudFront APIs c) You can also use AWS Identity and Access Management (IAM) permissions policies to restrict what the root user can do with CloudFront key pairs d) When you create a signer, the public key is with CloudFront and the private key is used to sign a portion of the URL e) When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account

d) When you create a signer, the public key is with CloudFront and the private key is used to sign a portion of the URL e) When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account

Correct options:

[When you create a signer, the public key is with CloudFront and the private key is used to sign a portion of the URL] - Each signer that you use to create CloudFront signed URLs or signed cookies must have a public-private key pair. The signer uses its private key to sign the URL or cookies, and CloudFront uses the public key to verify the signature. When you create signed URLs or signed cookies, you use the private key from the signer's key pair to sign a portion of the URL or the cookie. When someone requests a restricted file, CloudFront compares the signature in the URL or cookie with the unsigned URL or cookie, to verify that it hasn't been tampered with. CloudFront also verifies that the URL or cookie is valid, meaning, for example, that the expiration date and time haven't passed.

[When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account] - When you use the root user to manage CloudFront key pairs, you can only have up to two active CloudFront key pairs per AWS account. With CloudFront key groups, by contrast, you can associate a higher number of public keys with your CloudFront distribution, giving you more flexibility in how you use and manage the public keys. By default, you can associate up to four key groups with a single distribution, and you can have up to five public keys in a key group.

Incorrect options:

[You can also use AWS Identity and Access Management (IAM) permissions policies to restrict what the root user can do with CloudFront key pairs] - When you use the AWS account root user to manage CloudFront key pairs, you can't restrict what the root user can do or the conditions in which it can do them. You can't apply IAM permissions policies to the root user, which is one reason why AWS best practices recommend against using the root user.

[CloudFront key pairs can be created with any account that has administrative permissions and full access to CloudFront resources] - CloudFront key pairs can only be created using the root user account, which is why using CloudFront key pairs as signers is not a best practice.

[Both the signers (trusted key groups and CloudFront key pairs) can be managed using the CloudFront APIs] - With CloudFront key groups, you can manage public keys, key groups, and trusted signers using the CloudFront API; you can use the API to automate key creation and key rotation. When you use the AWS root user, you have to use the AWS Management Console to manage CloudFront key pairs, so you can't automate the process.

