Developer Tools

A Developer needs to access AWS CodeCommit over SSH. The SSH keys configured to access AWS CodeCommit are tied to a user with the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:BatchGetRepositories",
        "codecommit:Get*",
        "codecommit:List*",
        "codecommit:GitPull"
      ],
      "Resource": "*"
    }
  ]
}

The Developer needs to create/delete branches. Which specific IAM permissions need to be added based on the principle of least privilege? "codecommit:*" "codecommit:Update*" "codecommit:Put*" "codecommit:CreateBranch" and "codecommit:DeleteBranch"

"codecommit:CreateBranch" and "codecommit:DeleteBranch" The permissions assigned to the user account are missing the privileges to create and delete branches in AWS CodeCommit. The Developer needs to be assigned these permissions but according to the principal of least privilege it's important to ensure no additional permissions are assigned. The following API actions can be used to work with branches: CreateBranch , which creates a branch in a specified repository. DeleteBranch , which deletes the specified branch in a repository unless it is the default branch. GetBranch , which returns information about a specified branch. ListBranches , which lists all branches for a specified repository. UpdateDefaultBranch , which changes the default branch for a repository. Therefore, the best answer is to add the "codecommit:CreateBranch" and "codecommit:DeleteBranch" permissions to the permissions policy.

A Development team currently uses a GitHub repository and would like to migrate their application code to AWS CodeCommit. What needs to be created before they can migrate a cloned repository to CodeCommit over HTTPS? A set of Git credentials generated with IAM A public and private SSH key file A GitHub secure authentication token An Amazon EC2 IAM role with CodeCommit permissions

A set of Git credentials generated with IAM AWS CodeCommit is a managed version control service that hosts private Git repositories in the AWS cloud. To use CodeCommit, you configure your Git client to communicate with CodeCommit repositories. As part of this configuration, you provide IAM credentials that CodeCommit can use to authenticate you. IAM supports CodeCommit with three types of credentials:
- Git credentials, an IAM-generated user name and password pair you can use to communicate with CodeCommit repositories over HTTPS.
- SSH keys, a locally generated public-private key pair that you can associate with your IAM user to communicate with CodeCommit repositories over SSH.
- AWS access keys, which you can use with the credential helper included with the AWS CLI to communicate with CodeCommit repositories over HTTPS.
In this scenario the Development team needs to connect to CodeCommit using HTTPS, so they need either AWS access keys to use the AWS CLI or Git credentials generated by IAM. Access keys are not offered as an answer choice, so the best answer is that they need to create a set of Git credentials generated with IAM.
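
As a side note, Git credentials can also be generated programmatically through the IAM API. A minimal boto3 sketch, assuming an existing IAM user named 'dev-user' (a hypothetical name):

import boto3

iam = boto3.client('iam')

# Generate an HTTPS Git credential (user name/password pair) for CodeCommit.
response = iam.create_service_specific_credential(
    UserName='dev-user',                      # hypothetical IAM user
    ServiceName='codecommit.amazonaws.com'
)

credential = response['ServiceSpecificCredential']
print(credential['ServiceUserName'])   # Git user name for HTTPS access
print(credential['ServicePassword'])   # Git password, returned only at creation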

A company will be hiring a large number of Developers for a series of projects. The Developers will bring their own devices to work and the company wants to ensure consistency in tooling. The Developers must be able to write, run, and debug applications with just a browser, without needing to install or maintain a local Integrated Development Environment (IDE). Which AWS service should the Developers use? AWS X-Ray AWS CodeCommit AWS CodeDeploy AWS Cloud9

AWS Cloud9 AWS Cloud9 is an integrated development environment, or IDE. The AWS Cloud9 IDE offers a rich code-editing experience with support for several programming languages and runtime debuggers, and a built-in terminal. It contains a collection of tools that you use to code, build, run, test, and debug software, and helps you release software to the cloud. You access the AWS Cloud9 IDE through a web browser. You can configure the IDE to your preferences. You can switch color themes, bind shortcut keys, enable programming language-specific syntax coloring and code formatting, and more.

A Development team has moved their continuous integration and delivery (CI/CD) pipeline into the AWS Cloud. The team is leveraging AWS CodeCommit for management of source code. The team needs to compile their source code, run tests, and produce software packages that are ready for deployment. Which AWS service can deliver these outcomes? AWS CodePipeline AWS CodeCommit AWS Cloud9 AWS CodeBuild

AWS CodeBuild AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests. CodeBuild provides these benefits:
- Fully managed - CodeBuild eliminates the need to set up, patch, update, and manage your own build servers.
- On demand - CodeBuild scales on demand to meet your build needs. You pay only for the number of build minutes you consume.
- Out of the box - CodeBuild provides preconfigured build environments for the most popular programming languages. All you need to do is point to your build script to start your first build.
Therefore, AWS CodeBuild is the best service to use to compile the Development team's source code, run tests, and produce software packages that are ready for deployment.
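
For illustration, a CodeBuild build is defined in a buildspec file. A minimal sketch, with a hypothetical runtime, build commands, and artifact path:

version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto11          # hypothetical runtime choice
  build:
    commands:
      - mvn test                # run unit tests
      - mvn package             # compile and produce the deployable package
artifacts:
  files:
    - target/my-app.jar         # hypothetical artifact path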

A team of Developers are building a continuous integration and delivery pipeline using AWS Developer Tools. Which services should they use for running tests against source code and installing compiled code on their AWS resources? (Select TWO.) AWS CodeBuild for running tests against source code AWS CodeDeploy for installing compiled code on their AWS resources AWS CodePipeline for running tests against source code AWS CodeCommit for installing compiled code on their AWS resources AWS Cloud9 for running tests against source code

AWS CodeBuild for running tests against source code AWS CodeDeploy for installing compiled code on their AWS resources AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides pre-packaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

A team of Developers are working on a shared project and need to be able to collaborate on code. The shared application code must be encrypted at rest, stored on a highly available and durable architecture, and support multiple versions and batch change tracking. Which AWS service should the Developers use? AWS CodeBuild AWS Cloud9 Amazon S3 AWS CodeCommit

AWS CodeCommit AWS CodeCommit is a fully managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. AWS CodeCommit automatically encrypts your files in transit and at rest. AWS CodeCommit helps you collaborate on code with teammates via pull requests, branching, and merging. You can implement workflows that include code reviews and feedback by default, and control who can make changes to specific branches.

A company needs a fully-managed source control service that will work in AWS. The service must ensure that revision control synchronizes multiple distributed repositories by exchanging sets of changes peer-to-peer. All users need to work productively even when not connected to a network. Which source control service should be used? Subversion AWS CodeBuild AWS CodeStar AWS CodeCommit

AWS CodeCommit AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud. A repository is the fundamental version control object in CodeCommit. It's where you securely store code and files for your project. It also stores your project history, from the first commit through the latest changes. You can share your repository with other users so you can work together on a project. If you add AWS tags to repositories, you can set up notifications so that repository users receive email about events (for example, another user commenting on code). You can also change the default settings for your repository, browse its contents, and more. You can create triggers for your repository so that code pushes or other events trigger actions, such as emails or code functions. You can even configure a repository on your local computer (a local repo) to push your changes to more than one repository.

A company needs a version control system for collaborative software development. The solution must include support for batches of changes across multiple files and parallel branching. Which AWS service will meet these requirements? AWS CodePipeline Amazon S3 AWS CodeCommit AWS CodeBuild

AWS CodeCommit AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud. CodeCommit is optimized for team software development. It manages batches of changes across multiple files, which can occur in parallel with changes made by other developers.

A development team require a fully-managed source control service that is compatible with Git. Which service should they use? AWS CodeCommit AWS CodePipeline AWS Cloud9 AWS CodeDeploy

AWS CodeCommit AWS CodeCommit is a version control service hosted by Amazon Web Services that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud. CodeCommit is a fully-managed service that hosts secure Git-based repositories.

A team of Developers have been assigned to a new project. The team will be collaborating on the development and delivery of a new application and need a centralized private repository for managing source code. The repository should support updates from multiple sources. Which AWS service should the development team use? AWS CodePipeline AWS CodeCommit AWS CodeBuild AWS CodeDeploy

AWS CodeCommit CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure. You can use CodeCommit to store anything from code to binaries. It supports the standard functionality of Git, so it works seamlessly with your existing Git-based tools. With CodeCommit, you can:
- Benefit from a fully managed service hosted by AWS. CodeCommit provides high service availability and durability and eliminates the administrative overhead of managing your own hardware and software. There is no hardware to provision and scale and no server software to install, configure, and update.
- Store your code securely. CodeCommit repositories are encrypted at rest as well as in transit.
- Work collaboratively on code. CodeCommit repositories support pull requests, where users can review and comment on each other's code changes before merging them to branches; notifications that automatically send emails to users about pull requests and comments; and more.
- Easily scale your version control projects. CodeCommit repositories can scale up to meet your development needs. The service can handle repositories with large numbers of files or branches, large file sizes, and lengthy revision histories.
- Store anything, anytime. CodeCommit has no limit on the size of your repositories or on the file types you can store.
- Integrate with other AWS and third-party services. CodeCommit keeps your repositories close to your other production resources in the AWS Cloud, which helps increase the speed and frequency of your development lifecycle. It is integrated with IAM and can be used with other AWS services and in parallel with other repositories.
- Easily migrate files from other remote repositories. You can migrate to CodeCommit from any Git-based repository.
- Use the Git tools you already know. CodeCommit supports Git commands as well as its own AWS CLI commands and APIs.
Therefore, the development team should select AWS CodeCommit as the repository they use for storing code related to the new project.

A company uses continuous integration and continuous delivery (CI/CD) systems. A Developer needs to automate the deployment of a software package to Amazon EC2 instances as well as to on-premises virtual servers. Which AWS service can be used for the software deployment? AWS CodePipeline AWS CodeDeploy AWS CloudBuild AWS Elastic Beanstalk

AWS CodeDeploy CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy. A typical CodeDeploy in-place deployment updates the application on the existing instances in the deployment group, and the same deployment can also be directed at on-premises servers. Therefore, the best answer is to use AWS CodeDeploy to deploy the software package to both EC2 instances and on-premises virtual servers.
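
For EC2 and on-premises deployments, the revision bundle includes an AppSpec file that maps files and lifecycle hooks. A minimal sketch, with hypothetical paths and script names:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app        # hypothetical install location
hooks:
  AfterInstall:
    - location: scripts/configure.sh    # hypothetical script in the bundle
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300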

An application will be hosted on the AWS Cloud. Developers will be using an Agile software development methodology with regular updates deployed through a continuous integration and delivery (CI/CD) model. Which AWS service can assist the Developers with automating the build, test, and deploy phases of the release process every time there is a code change? AWS CloudFormation AWS CodeBuild AWS CodePipeline AWS Elastic Beanstalk

AWS CodePipeline AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. CodePipeline automates the steps required to release your software changes continuously. Specifically, you can:
- Automate your release processes: CodePipeline fully automates your release process from end to end, starting from your source repository through build, test, and deployment. You can prevent changes from moving through a pipeline by including a manual approval action in any stage except a Source stage. You can release when you want, in the way you want, on the systems of your choice, across one instance or multiple instances.
- Establish a consistent release process: Define a consistent set of steps for every code change. CodePipeline runs each stage of your release according to your criteria.
- Speed up delivery while improving quality: You can automate your release process to allow your developers to test and release code incrementally and speed up the release of new features to your customers.
- Use your favorite tools: You can incorporate your existing source, build, and deployment tools into your pipeline.
- View progress at a glance: You can review real-time status of your pipelines, check the details of any alerts, retry failed actions, view details about the source revisions used in the latest pipeline execution in each stage, and manually rerun any pipeline.
- View pipeline history details: You can view details about executions of a pipeline, including start and end times, run duration, and execution IDs.
Therefore, AWS CodePipeline is the perfect tool for the Developer's requirements.

A team of developers need to be able to collaborate and synchronize multiple distributed code repositories and leverage a pre-configured continuous delivery toolchain for deploying their projects on AWS. The team also require a centralized project dashboard to monitor application activity. Which AWS service should they use? AWS CodeCommit AWS CodeStar AWS Cloud9 AWS CodePipeline

AWS CodeStar AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams' recent code deployments.

A serverless application uses Amazon API Gateway, an AWS Lambda function, and a Lambda authorizer function. There is a failure with the application and a developer needs to trace and analyze user requests that pass through API Gateway to the back-end services. Which AWS service is MOST suitable for this purpose? Amazon CloudWatch Amazon Inspector VPC Flow Logs AWS X-Ray

AWS X-Ray You can use AWS X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports X-Ray tracing for all API Gateway endpoint types: Regional, edge-optimized, and private. You can use X-Ray with Amazon API Gateway in all AWS Regions where X-Ray is available. Because X-Ray gives you an end-to-end view of an entire request, you can analyze latencies in your APIs and their backend services. You can use an X-Ray service map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray. You can also configure sampling rules to tell X-Ray which requests to record and at what sampling rates, according to criteria that you specify. In a trace view generated for an API like the one described, with a Lambda backend function and a Lambda authorizer function, a successful API method request appears with a response code of 200.
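
X-Ray tracing is enabled per stage in API Gateway. A hedged boto3 sketch, with a hypothetical API ID and stage name:

import boto3

apigateway = boto3.client('apigateway')

# Turn on X-Ray tracing for the 'prod' stage of a REST API.
apigateway.update_stage(
    restApiId='a1b2c3d4e5',    # hypothetical API ID
    stageName='prod',
    patchOperations=[
        {'op': 'replace', 'path': '/tracingEnabled', 'value': 'true'}
    ]
)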

A developer is using AWS CodeBuild to build an application into a Docker image. The buildspec file is used to run the application build. The developer needs to push the Docker image to an Amazon ECR repository only upon the successful completion of each build. How should the developer meet this requirement? Add a post_build phase to the buildspec file that uses the commands block to push the Docker image. Add a post_build phase to the buildspec file that uses the finally block to push the Docker image. Add a post_build phase to the buildspec file that uses the artifacts sequence to find the build artifacts and push to Amazon ECR. Add an install phase to the buildspec file that uses the commands block to push the Docker image.

Add a post_build phase to the buildspec file that uses the commands block to push the Docker image. The post_build phase is an optional sequence. It represents the commands, if any, that CodeBuild runs after the build. For example, you might use Maven to package the build artifacts into a JAR or WAR file, or you might push a Docker image into Amazon ECR. Then you might send a build notification through Amazon SNS.
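
A minimal buildspec sketch showing a post_build phase that pushes to ECR; the $ECR_URI and $IMAGE_TAG environment variables are hypothetical:

version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_URI
  build:
    commands:
      - docker build -t $ECR_URI:$IMAGE_TAG .   # build the image
  post_build:
    commands:
      - docker push $ECR_URI:$IMAGE_TAG         # runs after the build phase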

An application is instrumented to generate traces using AWS X-Ray and generates a large amount of trace data. A Developer would like to use filter expressions to filter the results to specific key-value pairs added to custom subsegments. How should the Developer add the key-value pairs to the custom subsegments? Add metadata to the custom subsegments Add annotations to the custom subsegments Add the key-value pairs to the Trace ID Setup sampling for the custom subsegments

Add annotations to the custom subsegments You can record additional information about requests, the environment, or your application with annotations and metadata. You can add annotations and metadata to the segments that the X-Ray SDK creates, or to custom subsegments that you create. Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API. Metadata are key-value pairs that can have values of any type, including objects and lists, but are not indexed for use with filter expressions. Use metadata to record additional data that you want stored in the trace but don't need to use with search. Annotations can be used with filter expressions, so this is the best solution for this requirement. The Developer can add annotations to the custom subsegments and will then be able to use filter expressions to filter the results in AWS X-Ray.
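
A minimal sketch using the X-Ray SDK for Python; the segment, subsegment, and key names are hypothetical:

from aws_xray_sdk.core import xray_recorder

# Open a segment (created automatically inside Lambda or instrumented apps),
# then a custom subsegment carrying both kinds of data.
segment = xray_recorder.begin_segment('my-app')        # hypothetical service name
subsegment = xray_recorder.begin_subsegment('process-order')
subsegment.put_annotation('customer_tier', 'premium')  # indexed; usable in filter expressions
subsegment.put_metadata('raw_payload', {'items': 3})   # stored with the trace; not indexed
xray_recorder.end_subsegment()
xray_recorder.end_segment()

A matching filter expression would then be: annotation.customer_tier = "premium".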

A Development team wants to instrument their code to provide more detailed information to AWS X-Ray than simple outgoing and incoming requests. This will generate large amounts of data, so the Development team wants to implement indexing so they can filter the data. What should the Development team do to achieve this? Install required plugins for the appropriate AWS SDK Configure the necessary X-Ray environment variables Add annotations to the segment document Add metadata to the segment document

Add annotations to the segment document AWS X-Ray makes it easy for developers to analyze the behavior of their production, distributed applications with end-to-end tracing capabilities. You can use X-Ray to identify performance bottlenecks, edge case errors, and other hard to detect issues. When you instrument your application, the X-Ray SDK records information about incoming and outgoing requests, the AWS resources used, and the application itself. You can add other information to the segment document as annotations and metadata. Annotations and metadata are aggregated at the trace level and can be added to any segment or subsegment. Annotations are simple key-value pairs that are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API. X-Ray indexes up to 50 annotations per trace. Metadata are key-value pairs with values of any type, including objects and lists, but that are not indexed. Use metadata to record data you want to store in the trace but don't need to use for searching traces. In this scenario, we need to add annotations to the segment document so that the data that needs to be filtered is indexed.

An application uses AWS Lambda which makes remote calls to several downstream services. A developer wishes to add data to custom subsegments in AWS X-Ray that can be used with filter expressions. Which type of data should be used? Metadata Trace ID Daemon Annotations

Annotations AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application and shows a map of your application's underlying components. You can record additional information about requests, the environment, or your application with annotations and metadata. You can add annotations and metadata to the segments that the X-Ray SDK creates, or to custom subsegments that you create. Annotations are key-value pairs with string, number, or Boolean values. Annotations are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API.

A developer must deploy an update to Amazon ECS using AWS CodeDeploy. The deployment should expose 10% of live traffic to the new version. Then after a period of time, route all remaining traffic to the new version. Which ECS deployment should the company use to meet these requirements? Blue/green with canary Blue/green with linear Rolling update Blue/green with all at once

Blue/green with canary The blue/green deployment type uses the blue/green deployment model controlled by CodeDeploy. This deployment type enables you to verify a new deployment of a service before sending production traffic to it. There are three ways traffic can shift during a blue/green deployment:
- Canary: Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated task set in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.
- Linear: Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.
- All-at-once: All traffic is shifted from the original task set to the updated task set all at once.
The best choice for this use case would be to use the canary traffic shifting strategy. The predefined canary options are CodeDeployDefault.ECSCanary10Percent5Minutes, which shifts 10 percent of traffic and then the remaining 90 percent five minutes later, and CodeDeployDefault.ECSCanary10Percent15Minutes, which shifts the remainder after 15 minutes.

An application has been instrumented to use the AWS X-Ray SDK to collect data about the requests the application serves. The Developer has set the user field on segments to a string that identifies the user who sent the request. How can the Developer search for segments associated with specific users? By using the GetTraceSummaries API with a filter expression Use a filter expression to search for the user field in the segment annotations Use a filter expression to search for the user field in the segment metadata By using the GetTraceGraph API with a filter expression

By using the GetTraceSummaries API with a filter expression A segment document conveys information about a segment to X-Ray. A segment document can be up to 64 kB and contain a whole segment with subsegments, a fragment of a segment that indicates that a request is in progress, or a single subsegment that is sent separately. You can send segment documents directly to X-Ray by using the PutTraceSegments API. A subset of segment fields are indexed by X-Ray for use with filter expressions. For example, if you set the user field on a segment to a unique identifier, you can search for segments associated with specific users in the X-Ray console or by using the GetTraceSummaries API.
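
A boto3 sketch of such a query; the user identifier is a hypothetical example:

import boto3
from datetime import datetime, timedelta

xray = boto3.client('xray')

# Retrieve summaries of traces for one user over the last hour.
response = xray.get_trace_summaries(
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    FilterExpression='user = "user-1234"'   # matches the indexed user field
)

for summary in response['TraceSummaries']:
    print(summary['Id'])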

A Developer is deploying an update to a serverless application that includes AWS Lambda using the AWS Serverless Application Model (SAM). The traffic needs to move from the old Lambda version to the new Lambda version gradually, within the shortest period of time. Which deployment configuration is MOST suitable for these requirements? CodeDeployDefault.LambdaLinear10PercentEvery2Minutes CodeDeployDefault.LambdaLinear10PercentEvery1Minute CodeDeployDefault.HalfAtATime CodeDeployDefault.LambdaCanary10Percent5Minutes

CodeDeployDefault.LambdaCanary10Percent5Minutes If you use AWS SAM to create your serverless application, it comes built-in with CodeDeploy to provide gradual Lambda deployments. With just a few lines of configuration, AWS SAM does the following for you:
- Deploys new versions of your Lambda function, and automatically creates aliases that point to the new version.
- Gradually shifts customer traffic to the new version until you're satisfied that it's working as expected, or you roll back the update.
- Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and your application operates as expected.
- Rolls back the deployment if CloudWatch alarms are triggered.
There are several options for how CodeDeploy shifts traffic to the new Lambda version: canary configurations (such as Canary10Percent5Minutes), linear configurations (such as Linear10PercentEvery1Minute), and all-at-once. With CodeDeployDefault.LambdaCanary10Percent5Minutes, 10 percent of traffic shifts immediately and the remaining 90 percent shifts five minutes later, so the gradual deployment completes in five minutes. The linear options shift 10 percent per interval and take roughly ten minutes or more to complete, which makes the canary option the gradual configuration with the shortest period of time.
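
A minimal AWS SAM sketch of this configuration; the function name, handler, runtime, and code path are hypothetical:

Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler           # hypothetical handler
      Runtime: python3.9
      CodeUri: src/
      AutoPublishAlias: live           # SAM publishes versions and shifts the alias
      DeploymentPreference:
        Type: Canary10Percent5Minutes  # 10% now, the remaining 90% after 5 minutes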

A Developer has recently created an application that uses an AWS Lambda function, an Amazon DynamoDB table, and also sends notifications using Amazon SNS. The application is not working as expected and the Developer needs to analyze what is happening across all components of the application. What is the BEST way to analyze the issue? Enable X-Ray tracing for the Lambda function Monitor the application with AWS Trusted Advisor Create an Amazon CloudWatch Events rule Assess the application with Amazon Inspector

Enable X-Ray tracing for the Lambda function AWS X-Ray makes it easy for developers to analyze the behavior of their production, distributed applications with end-to-end tracing capabilities. You can use X-Ray to identify performance bottlenecks, edge case errors, and other hard to detect issues. AWS X-Ray provides an end-to-end, cross-service view of requests made to your application. It gives you an application-centric view of requests flowing through your application by aggregating the data gathered from individual services in your application into a single unit called a trace. You can use this trace to follow the path of an individual request as it passes through each service or tier in your application so that you can pinpoint where issues are occurring. AWS X-Ray will assist the developer with visually analyzing the end-to-end view of connectivity between the application components and how they are performing using a Service Map. X-Ray also provides aggregated data about the application.
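
Active tracing can be switched on without code changes. A hedged boto3 sketch with a hypothetical function name:

import boto3

lambda_client = boto3.client('lambda')

# Enable active X-Ray tracing for the function.
lambda_client.update_function_configuration(
    FunctionName='my-application-function',   # hypothetical name
    TracingConfig={'Mode': 'Active'}
)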

A serverless application composed of multiple Lambda functions has been deployed. A developer is setting up AWS CodeDeploy to manage the deployment of code updates. The developer would like 10% of the traffic to be shifted to the new version in equal increments, 10 minutes apart. Which setting should be chosen for configuring how traffic is shifted? Canary Linear All-at-once Blue/green

Linear A deployment configuration is a set of rules and success and failure conditions used by CodeDeploy during a deployment. These rules and conditions are different, depending on whether you deploy to an EC2/On-Premises compute platform or an AWS Lambda compute platform. The linear option shifts a specific amount of traffic in equal increments of time. Therefore, the Linear option should be chosen; for this scenario, the predefined CodeDeployDefault.LambdaLinear10PercentEvery10Minutes configuration shifts 10 percent of traffic every 10 minutes.

An X-Ray daemon is being used on an Amazon ECS cluster to assist with debugging stability issues. A developer requires more detailed timing information and data related to downstream calls to AWS services. What should the developer use to obtain this extra detail? Filter expressions Metadata Annotations Subsegments

Subsegments A segment can break down the data about the work done into subsegments. Subsegments provide more granular timing information and details about downstream calls that your application made to fulfill the original request. A subsegment can contain additional details about a call to an AWS service, an external HTTP API, or an SQL database. You can even define arbitrary subsegments to instrument specific functions or lines of code in your application.

A new application will be deployed using AWS CodeDeploy to Amazon Elastic Container Service (ECS). What must be supplied to CodeDeploy to specify the ECS service to deploy? The AppSpec file The BuildSpec file The Policy file The Template file

The AppSpec file The application specification file (AppSpec file) is a YAML-formatted or JSON-formatted file used by CodeDeploy to manage a deployment. If your application uses the Amazon ECS compute platform, the AppSpec file is named appspec.yaml. It is used by CodeDeploy to determine:
- Your Amazon ECS task definition file. This is specified with its ARN in the TaskDefinition instruction in the AppSpec file.
- The container and port in your replacement task set where your Application Load Balancer or Network Load Balancer reroutes traffic during a deployment. This is specified with the LoadBalancerInfo instruction in the AppSpec file.
- Optional information about your Amazon ECS service, such as the platform version on which it runs, its subnets, and its security groups.
- Optional Lambda functions to run during hooks that correspond with lifecycle events during an Amazon ECS deployment. For more information, see AppSpec 'hooks' Section for an Amazon ECS Deployment.
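
A minimal appspec.yaml sketch for an ECS deployment; the task definition ARN, container name, and port are hypothetical:

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:111122223333:task-definition/my-task:1"
        LoadBalancerInfo:
          ContainerName: "my-container"
          ContainerPort: 8080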

An application is being instrumented to send trace data using AWS X-Ray. A Developer needs to upload segment documents using JSON-formatted strings to X-Ray using the API. Which API action should the developer use? The UpdateGroup API action The PutTelemetryRecords API action The PutTraceSegments API action The GetTraceSummaries API action

The PutTraceSegments API action You can send trace data to X-Ray in the form of segment documents. A segment document is a JSON formatted string that contains information about the work that your application does in service of a request. Your application can record data about the work that it does itself in segments, or work that uses downstream services and resources in subsegments. Segments record information about the work that your application does. A segment, at a minimum, records the time spent on a task, a name, and two IDs. The trace ID tracks the request as it travels between services. The segment ID tracks the work done for the request by a single service. You can upload segment documents with the PutTraceSegments API. The API has a single parameter, TraceSegmentDocuments, that takes a list of JSON segment documents. Therefore, the Developer should use the PutTraceSegments API action.
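
A boto3 sketch of uploading a single segment document; the service name and IDs are hypothetical but follow the required formats:

import json
import time
import boto3

xray = boto3.client('xray')

# Minimum fields: a name, segment and trace IDs, and timing information.
segment = {
    "name": "my-service",
    "id": "70de5b6f19ff9a0a",                           # 16 hex digits
    "trace_id": "1-581cf771-a006649127e371903a2de979",  # 1-<epoch hex>-<24 hex digits>
    "start_time": time.time() - 1,
    "end_time": time.time()
}

xray.put_trace_segments(TraceSegmentDocuments=[json.dumps(segment)])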

A Developer is creating an application and would like add AWS X-Ray to trace user requests end-to-end through the software stack. The Developer has implemented the changes and tested the application and the traces are successfully sent to X-Ray. The Developer then deployed the application on an Amazon EC2 instance, and noticed that the traces are not being sent to X-Ray. What is the most likely cause of this issue? (Select TWO.) The X-Ray daemon is not installed on the EC2 instance. The traces are reaching X-Ray, but the Developer does not have permission to view the records The X-Ray segments are being queued The instance's instance profile role does not have permission to upload trace data to X-Ray The X-Ray API is not installed on the EC2 instance

The instance's instance profile role does not have permission to upload trace data to X-Ray The X-Ray daemon is not installed on the EC2 instance. AWS X-Ray is a service that collects data about requests that your application serves, and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs. You can run the X-Ray daemon on the following operating systems on Amazon EC2:
- Amazon Linux
- Ubuntu
- Windows Server (2012 R2 and newer)
The X-Ray daemon must be running on the EC2 instance in order to collect data. You can use a user data script to run the daemon automatically when you launch the instance. The X-Ray daemon uses the AWS SDK to upload trace data to X-Ray, and it needs AWS credentials with permission to do that. On Amazon EC2, the daemon uses the instance's instance profile role automatically. The IAM role or user that the daemon's credentials belong to must have permission to write data to the service on your behalf.
- To use the daemon on Amazon EC2, create a new instance profile role or add the managed policy to an existing one.
- To use the daemon on Elastic Beanstalk, add the managed policy to the Elastic Beanstalk default instance profile role.
- To run the daemon locally, create an IAM user and save its access keys on your computer.
Therefore, the most likely causes of the issues being experienced in this scenario are that the instance's instance profile role does not have permission to upload trace data to X-Ray or the X-Ray daemon is not running on the EC2 instance.

A Developer has used a third-party tool to build, bundle, and package a software package on-premises. The software package is stored in a local file system and must be deployed to Amazon EC2 instances. How can the application be deployed onto the EC2 instances? Use AWS CodeBuild to commit the package and automatically deploy the software package. Upload the bundle to an Amazon S3 bucket and specify the S3 location when doing a deployment using AWS CodeDeploy. Use AWS CodeDeploy and point it to the local file system to deploy the software package. Create a repository using AWS CodeCommit to automatically trigger a deployment to the EC2 instances.

Upload the bundle to an Amazon S3 bucket and specify the S3 location when doing a deployment using AWS CodeDeploy. AWS CodeDeploy can deploy software packages using an archive that has been uploaded to an Amazon S3 bucket. The archive file will typically be a .zip file containing the code and files required to deploy the software package.
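
A boto3 sketch of such a deployment; the application, deployment group, bucket, and key names are hypothetical:

import boto3

codedeploy = boto3.client('codedeploy')

# Deploy a revision that was uploaded to S3 as a .zip bundle.
codedeploy.create_deployment(
    applicationName='my-app',
    deploymentGroupName='production-group',
    revision={
        'revisionType': 'S3',
        's3Location': {
            'bucket': 'my-deployment-bucket',
            'key': 'releases/my-app-1.0.zip',
            'bundleType': 'zip'
        }
    }
)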

A company has three different environments: Development, QA, and Production. The company wants to deploy its code first in the Development environment, then QA, and then Production. Which AWS service can be used to meet this requirement? Use AWS Data Pipeline to create multiple data pipeline provisions to deploy the application Use AWS CodeBuild to create, configure, and deploy multiple build application projects Use AWS CodeDeploy to create multiple deployment groups Use AWS CodeCommit to create multiple repositories to deploy the application

Use AWS CodeDeploy to create multiple deployment groups You can specify one or more deployment groups for a CodeDeploy application. Each application deployment uses one of its deployment groups. The deployment group contains settings and configurations used during the deployment. You can associate more than one deployment group with an application in CodeDeploy. This makes it possible to deploy an application revision to different sets of instances at different times. For example, you might use one deployment group to deploy an application revision to a set of instances tagged Test where you ensure the quality of the code. Next, you deploy the same application revision to a deployment group with instances tagged Staging for additional verification. Finally, when you are ready to release the latest application to customers, you deploy to a deployment group that includes instances tagged Production. Therefore, using AWS CodeDeploy to create multiple deployment groups can be used to meet the requirement.
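
A boto3 sketch creating one deployment group per environment, targeting instances by EC2 tag; all names and the role ARN are hypothetical:

import boto3

codedeploy = boto3.client('codedeploy')

# One deployment group for each environment, selected by an Environment tag.
for env in ['Development', 'QA', 'Production']:
    codedeploy.create_deployment_group(
        applicationName='my-app',
        deploymentGroupName=f'{env.lower()}-group',
        serviceRoleArn='arn:aws:iam::111122223333:role/CodeDeployServiceRole',
        ec2TagFilters=[
            {'Key': 'Environment', 'Value': env, 'Type': 'KEY_AND_VALUE'}
        ]
    )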

A developer is creating a microservices application that includes an AWS Lambda function. The function generates a unique file for each execution and must commit the file to an AWS CodeCommit repository. How should the developer accomplish this? After the new file is created in Lambda, use CURL to invoke the CodeCommit API. Send the file to the repository and automatically commit the change. Upload the new file to an Amazon S3 bucket. Create an AWS Step Function to accept S3 events. Use AWS Lambda functions in the Step Function, to add the file to the repository and commit the change. Send a message to an Amazon SQS queue with the file attached. Configure an AWS Step Function as a destination for messages in the queue. Configure the Step Function to add the new file to the repository and commit the change. Use an AWS SDK to instantiate a CodeCommit client. Invoke the PutFile method to add the file to the repository and execute a commit with CreateCommit.

Use an AWS SDK to instantiate a CodeCommit client. Invoke the PutFile method to add the file to the repository and execute a commit with CreateCommit. The developer can instantiate a CodeCommit client using the AWS SDK. This provides the ability to programmatically work with the AWS CodeCommit repository. The PutFile method is used to add or modify a single file in a specified repository and branch. The CreateCommit method creates a commit for changes to a repository.
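
A boto3 sketch of the commit from inside the function; repository, branch, and file names are hypothetical. Note that put_file itself generates a commit for the single file, while create_commit can batch several file changes into one commit:

import boto3

codecommit = boto3.client('codecommit')

# Find the tip of the branch so the new commit can reference it.
branch = codecommit.get_branch(repositoryName='my-repo', branchName='main')
parent_commit = branch['branch']['commitId']

# Add the generated file and commit the change.
codecommit.put_file(
    repositoryName='my-repo',
    branchName='main',
    filePath='output/run-001.json',          # hypothetical generated file
    fileContent=b'{"status": "complete"}',
    parentCommitId=parent_commit,
    commitMessage='Add file generated by Lambda execution'
)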

A Developer is creating an AWS Lambda function that generates a new file each time it runs. Each new file must be checked into an AWS CodeCommit repository hosted in the same AWS account. How should the Developer accomplish this? When the Lambda function starts, use the Git CLI to clone the repository. Check the new file into the cloned repository and push the change Use an AWS SDK to instantiate a CodeCommit client. Invoke the put_file method to add the file to the repository Upload the new file to an Amazon S3 bucket. Create an AWS Step Function to accept S3 events. In the Step Function, add the new file to the repository After the new file is created in Lambda, use cURL to invoke the CodeCommit API. Send the file to the repository

Use an AWS SDK to instantiate a CodeCommit client. Invoke the put_file method to add the file to the repository The Developer can use the AWS SDK to instantiate a CodeCommit client. For instance, the code might include the following:

import boto3

client = boto3.client('codecommit')

The client can then be used with put_file, which adds or updates a file in a branch in an AWS CodeCommit repository and generates a commit for the addition in the specified branch.

A Development team is creating a microservices application running on Amazon ECS. The release process workflow of the application requires a manual approval step before the code is deployed into the production environment. What is the BEST way to achieve this using AWS CodePipeline? Disable a stage just prior the deployment stage Use an Amazon SNS notification from the deployment stage Use an approval action in a stage before deployment Disable the stage transition to allow manual approval

Use an approval action in a stage before deployment In AWS CodePipeline, you can add an approval action to a stage in a pipeline at the point where you want the pipeline execution to stop so that someone with the required AWS Identity and Access Management permissions can approve or reject the action. If the action is approved, the pipeline execution resumes. If the action is rejected, or if no one approves or rejects the action within seven days of the pipeline reaching the action and stopping, the result is the same as an action failing, and the pipeline execution does not continue. In this scenario, the manual approval stage would be placed in the pipeline before the deployment stage that deploys the application update into production. Therefore, the best answer is to use an approval action in a stage before deployment to production.
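
In the pipeline structure, an approval action is declared like any other action within a stage. A minimal JSON sketch with a hypothetical action name:

{
  "name": "ProductionApproval",
  "actionTypeId": {
    "category": "Approval",
    "owner": "AWS",
    "provider": "Manual",
    "version": "1"
  },
  "runOrder": 1
}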

A company is planning to use AWS CodeDeploy to deploy a new AWS Lambda function. What are the MINIMUM properties required in the 'resources' section of the AppSpec file for CodeDeploy to deploy the function successfully? TaskDefinition, LoadBalancerInfo, and ContainerPort TaskDefinition, PlatformVersion, and ContainerName name, alias, PlatformVersion, and type name, alias, currentversion, and targetversion

name, alias, currentversion, and targetversion The content in the 'resources' section of the AppSpec file varies, depending on the compute platform of your deployment. The 'resources' section for an AWS Lambda deployment contains the name, alias, current version, and target version of a Lambda function.
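
A minimal sketch of the 'resources' section for a Lambda deployment; the function name, alias, and version numbers are hypothetical:

version: 0.0
Resources:
  - myLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "myLambdaFunction"
        Alias: "myLambdaFunctionAlias"
        CurrentVersion: "1"
        TargetVersion: "2"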

