AWS Review Questions

Which of the following are benefits of AWS's Relational Database Service (RDS)? Choose the 2 correct answers from the options below. A. Automated patches and backups B. DB owner can resize the capacity accordingly C. It allows you to store unstructured data D. It allows you to store NoSQL data

Answer - A and B. The AWS Documentation mentions the following: Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. For more information on AWS RDS, please visit the URL: https://aws.amazon.com/rds/
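
To make the "automated backups" and "resizable capacity" benefits concrete, here is a minimal, illustrative boto3 sketch; the instance identifier, credentials and sizes are placeholders, not values from the question:

```python
import boto3

rds = boto3.client("rds")

# Create a MySQL instance; BackupRetentionPeriod > 0 enables automated backups.
rds.create_db_instance(
    DBInstanceIdentifier="demo-db",         # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder only
    BackupRetentionPeriod=7,                # daily automated backups, kept 7 days
)

# Resize capacity later by modifying the instance class and storage.
rds.modify_db_instance(
    DBInstanceIdentifier="demo-db",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    ApplyImmediately=True,
)
```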

You have a set of EC2 Instances hosted on the AWS Cloud. The EC2 Instances are hosting a web application. Which of the following act as a firewall for your VPC and the instances in it? Choose 2 answers from the options given below. A. Usage of Security Groups B. Usage of AWS Config C. Usage of Network Access Control Lists D. Usage of the Internet gateway

Answer - A and C. The AWS Documentation mentions the following: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. For more information on Security Groups, please refer to the following link: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html For more information on Network Access Control Lists, please refer to the following link: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
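
As an illustration of the two firewall layers, the hedged boto3 sketch below adds the same inbound HTTPS rule to a security group (instance level, stateful) and to a network ACL (subnet level, stateless); the resource IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Security group rule: allow inbound HTTPS to the instances (stateful).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACL rule: allow the same traffic at the subnet boundary (stateless).
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # hypothetical network ACL ID
    RuleNumber=100,
    Protocol="6",            # TCP
    RuleAction="allow",
    Egress=False,            # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```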

Which of the below options cannot be used to upload archives to Amazon Glacier? A. AWS Glacier API B. AWS Console C. AWS Glacier SDK D. AWS S3 Lifecycle policies

Answer - B. Note that the AWS Console cannot be used to upload data onto Glacier. The console can only be used to create a Glacier vault, into which the data can then be uploaded. For more information on uploading data onto Glacier, please refer to the following link: https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html Option A - AWS Glacier API: AWS Glacier is a storage service optimized for infrequently used data, or "cold data." The API is used to programmatically access Glacier and work with it. For this reason, this option is incorrect. https://docs.aws.amazon.com/amazonglacier/latest/dev/amazon-glacier-api.html Option C - AWS Glacier SDK: The SDK (Software Development Kit) is used to develop applications for Amazon S3 Glacier. It provides libraries that map to the underlying REST API and provide objects that you can easily use to construct requests and process responses. For this reason, it's not a valid answer to the question asked. https://docs.aws.amazon.com/amazonglacier/latest/dev/using-aws-sdk.html Option D - AWS S3 Lifecycle Policies: S3 Lifecycle Policies allow you to automatically review objects within your S3 Buckets and have them moved to Glacier or deleted from S3. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html https://aws.amazon.com/glacier/faqs/
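
A minimal boto3 sketch of the point above: the vault can be created via the console or the API, but the archive itself must be uploaded programmatically (vault and file names are hypothetical):

```python
import boto3

glacier = boto3.client("glacier")

# Vaults can be created from the console or the API ...
glacier.create_vault(vaultName="demo-vault")

# ... but archives themselves are uploaded via the API/SDK, not the console.
with open("backup.tar.gz", "rb") as f:
    resp = glacier.upload_archive(
        vaultName="demo-vault",
        archiveDescription="nightly backup",
        body=f,
    )
print(resp["archiveId"])
```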

Which of the following features of Amazon RDS allows for better availability of databases? Choose the answer from the options given below. A. VPC Peering B. Multi-AZ C. Read Replicas D. Data encryption

Answer - B. The AWS Documentation mentions the following: If you are looking to use replication to increase database availability while protecting your latest database updates against unplanned outages, consider running your DB instance as a Multi-AZ deployment. For more information on AWS RDS, please visit the FAQ link: https://aws.amazon.com/rds/faqs/
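
For illustration, a hedged boto3 sketch that enables Multi-AZ on an existing instance (the instance identifier is hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Convert an existing instance to a Multi-AZ deployment; RDS provisions a
# synchronous standby replica in a different Availability Zone and fails
# over to it automatically during an outage.
rds.modify_db_instance(
    DBInstanceIdentifier="demo-db",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```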

Which of the following does AWS perform on your behalf for EBS volumes to make them less prone to failure? A. Replication of the volume across Availability Zones B. Replication of the volume in the same Availability Zone C. Replication of the volume across Regions D. Replication of the volume across Edge locations

Answer - B. When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to the failure of any single hardware component. For more information on EBS Volumes, please refer to the below URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

A company wants to have a database hosted on AWS. They want to have as much control over the database itself as possible. Which of the following would be an ideal option for this? A. Using the AWS DynamoDB service B. Using the AWS RDS service C. Hosting the database on an EC2 Instance D. Using the Amazon Aurora service

Answer - C. If you want a self-managed database, that means you want complete control over the database engine and the underlying infrastructure. In such a case, you need to host the database on an EC2 Instance. For more information on EC2 Instances, please refer to the below URL: https://aws.amazon.com/ec2/

Your company is planning to pay for an AWS Support plan. They have the following requirements for the support plan: 24x7 access to Cloud Support Engineers via email, chat & phone; a response time of less than 15 minutes for any business-critical system faults. Which of the following plans will satisfy these requirements? A. Basic B. Developer C. Business D. Enterprise

Answer - D. As per the AWS documentation, there is no business-critical support available in the Basic, Developer and Business plans. The Enterprise plan has business-critical support with a response time of within 15 minutes. The question mentions less than 15 minutes for critical faults, hence the correct answer is Enterprise. For more information, please refer to the AWS Support plans documentation.

Which of the following storage mechanisms can be used to store messages effectively? A. Amazon Glacier B. Amazon EBS Volumes C. Amazon EBS Snapshots D. Amazon SQS

Answer - D. The AWS Documentation mentions the following on AWS SQS: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components. For more information on AWS SQS, please refer to the below URL: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/Welcome.html
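
A minimal boto3 sketch of the message-storage pattern: a producer stores a message in the queue and a consumer later receives and deletes it (queue name and message body are illustrative):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# Producer: store a message in the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

# Consumer: receive, process, then delete the message.
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for m in msgs.get("Messages", []):
    print(m["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```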

Which AWS service provides a fully managed NoSQL database with fast and predictable performance and seamless scalability? A. AWS RDS B. DynamoDB C. Oracle RDS D. Elastic Map Reduce

Answer - B. DynamoDB is a fully managed NoSQL offering provided by AWS. It is now available in most regions for users to consume. For more information on AWS DynamoDB, please refer to the below URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
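
A short, illustrative boto3 sketch of DynamoDB's key-based access pattern (the table and attribute names are hypothetical, and the table is assumed to already exist):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # hypothetical existing table

# Write an item; DynamoDB is schemaless beyond the key attributes.
table.put_item(Item={"user_id": "u-100", "name": "Alice", "plan": "pro"})

# Fast, predictable reads by primary key.
item = table.get_item(Key={"user_id": "u-100"}).get("Item")
print(item)
```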

You are requested to expose your serverless application, implemented with AWS Lambda, to HTTP clients (using an HTTP proxy). Which of the following AWS services can you use to accomplish the task? (Select TWO) A. AWS Elastic Load Balancing (ELB) B. AWS Route53 C. AWS API Gateway D. AWS Lightsail E. AWS Elastic Beanstalk

Answer: A and C. Option A is CORRECT because AWS documentation mentions that "Application Load Balancers now support invoking Lambda functions to serve HTTP(S) requests." This enables users to access serverless applications from any HTTP client, including web browsers. Option B is INCORRECT because Route53 is a Domain Name System service, not an HTTP proxy. Option C is CORRECT because API Gateway + Lambda is a common pattern for exposing serverless functions via HTTP/HTTPS. AWS documentation mentions "Creating, deploying, and managing a REST application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services." Option D is INCORRECT because AWS Lightsail has a completely different goal: it is a service to speed up the provisioning of AWS resources. Option E is INCORRECT because AWS Elastic Beanstalk has a completely different goal: it is a service that makes it easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their applications, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
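
As a sketch of the pattern behind both correct options, the handler below returns the proxy-style response shape that an ALB Lambda target or an API Gateway proxy integration expects (the field values are illustrative):

```python
import json

def handler(event, context):
    # Both ALB and API Gateway (proxy integration) deliver the HTTP method,
    # path and headers in the event, and expect this response shape back.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",  # used by ALB, ignored by API Gateway
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": event.get("path"), "ok": True}),
    }
```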

Which of the following statements are FALSE when it comes to AWS DataSync? (Choose TWO) A. A fully managed data transfer service with built-in retry mechanism B. Can work only over AWS Direct Connect C. It is an agentless data transfer service D. Can copy data between NFS servers, SMB file shares, Amazon S3 buckets, and Amazon EFS file systems. E. It is integrated with AWS CloudWatch

Answer: B and C. B and C are FALSE; all the others are TRUE according to the AWS Documentation (see references below). DataSync can transfer data over the public internet as well as over AWS Direct Connect, and transfers from on-premises storage require deploying a DataSync agent. References: https://aws.amazon.com/datasync/ https://aws.amazon.com/datasync/faq/

Your company is moving a large application to AWS using a set of EC2 instances. A key requirement is reusing existing server-bound software licensing. Which of the following options is the best for satisfying the requirement? A. EC2 Dedicated Instances B. EC2 Reserved Instances C. EC2 Dedicated Host D. EC2 Spot Instances

Answer: C. Option A is INCORRECT because although Dedicated Instances run on single-tenant hardware, AWS does not give the visibility into sockets and cores required for reusing server-bound licenses. AWS highlights this in the comparison table at the following link: https://aws.amazon.com/ec2/dedicated-hosts/ Option B is INCORRECT because Reserved Instances are only a purchasing option, and there is no way to control the hardware these instances run on. Option C is CORRECT because the instances run on dedicated hardware where AWS gives visibility into its physical characteristics. AWS documentation mentions this with the following sentence: "...Dedicated Host gives you additional visibility and control over how instances are placed on a physical server, and you can consistently deploy your instances to the same physical server over time. As a result, Dedicated Hosts enable you to use your existing server-bound software licenses and address corporate compliance and regulatory requirements." Option D is INCORRECT because Spot Instances are only a purchasing option. AWS documentation explains the possibility of reusing server-bound licenses: https://aws.amazon.com/ec2/dedicated-hosts/

You are the architect of a custom application running inside your corporate data center. The application runs with some unresolved bugs that produce a lot of data inside custom log files, generating time-consuming activities for the operations team responsible for analyzing them. You want to move the application to AWS using EC2 instances. At the same time, you want to take the opportunity to improve logging and monitoring capabilities, but without touching the application code. Which AWS service should you use to satisfy the requirement? A. AWS Kinesis Data Streams B. AWS CloudTrail C. AWS CloudWatch Logs D. AWS Application Logs

Answer: C. Option A is INCORRECT because in order to feed a Kinesis data stream from custom logs, you have to change the application code. AWS documentation describes this with the following sentence: "To put data into the stream, you must specify the name of the stream, a partition key, and the data blob to be added to the stream." Option B is INCORRECT because CloudTrail is not related to the scenario and custom log files. Option C is CORRECT because AWS CloudWatch Logs can reuse existing application logs, increasing operational efficiency with the ability to generate metrics, alerts and analytics on them with CloudWatch Logs Insights. The application and its custom log files are exactly as they were when the application was running on-prem, so you don't need to change any piece of application code to make them ingestible by AWS CloudWatch Logs. The FAQ section of the official AWS documentation highlights this reuse capability with the sentence "AWS CloudWatch Logs lets you monitor and troubleshoot your systems and applications using your existing system, application and custom log files... so, no code changes are required." You can also leverage CloudWatch Metrics, Alarms and Dashboards with Logs to get full operational visibility into your applications. This empowers you to understand your applications, make improvements, and find problems quickly, so you can continue to innovate rapidly. Option D is INCORRECT because AWS Application Logs does not exist. References: https://aws.amazon.com/cloudwatch/faqs/
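
For illustration, a hedged boto3 sketch that queries the unchanged custom log files with CloudWatch Logs Insights instead of manual log analysis (the log group name and query string are hypothetical):

```python
import time
import boto3

logs = boto3.client("logs")

# Query the ingested application log group for recent errors, replacing
# the manual analysis the operations team used to perform.
query = logs.start_query(
    logGroupName="/on-prem-app/custom",  # hypothetical log group
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | limit 20",
)

# Poll until the query completes, then print the matching rows.
results = logs.get_query_results(queryId=query["queryId"])
while results["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query["queryId"])
for row in results["results"]:
    print(row)
```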

I need to upload a large number of large-size objects from different geographic locations to an S3 bucket. What is the best mechanism to do so in a fast & reliable way? A. I can connect to an application running on AWS EC2 that is hosted in multiple regions using Route 53 & use latency-based routing to upload files to the S3 bucket. B. I can use a Direct Connect link from each of the geographic locations for transferring data quickly. C. I can use S3 Transfer Acceleration from each geographic location, which will route the data from the respective Edge locations to S3. D. I can directly access the S3 bucket from the different locations & use a multi-part upload for transferring huge objects.

Answer: C. Option A is incorrect since Route 53 latency routing only calculates latency between different endpoints based on internet traffic & location proximity, rather than optimizing the network for fast data transfers. Option B is incorrect since Direct Connect is used for very specific purposes like extreme security requirements for data transfer. Also, establishing multiple Direct Connect links would be expensive from a cost standpoint. Option C is CORRECT. The best way to address this scenario is to route the requests to the nearest CloudFront edge location from the different geographic locations. Edge locations provide a fast network infrastructure bypassing much of the internet for delivering content to S3 destinations. Performance gains of nearly 50 - 500% can be observed while using S3 Transfer Acceleration. Option D is incorrect. It is possible to use S3 endpoints directly for data transfer, but it is impractical for situations where the geographic location is significantly far away from the S3 destination, introducing high latency while uploading large objects. https://medium.com/awesome-cloud/aws-amazon-s3-transfer-acceleration-overview-6baa7b029c27 https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
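
A minimal boto3 sketch of Option C, assuming a hypothetical bucket name: enable acceleration once on the bucket, then upload through the accelerate endpoint:

```python
import boto3
from botocore.config import Config

# One-time: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="my-bucket",  # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload via the accelerate endpoint; the client routes through the nearest
# edge location and onto the AWS backbone toward the bucket's Region.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file("big-object.bin", "my-bucket", "uploads/big-object.bin")
```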

While proposing an AWS Cloud solution to a client as a value proposition, which of the following is NOT an advantage of using the AWS Cloud? A. The AWS Cloud offers a pay-as-you-go model to trade Capital expense for Variable expense. B. The AWS Cloud offers a Scale-on-demand model to eliminate wasted capacity. C. The AWS Cloud gives complete control of Security to its users so that they can replicate their Data Center Security model on the Cloud. D. AWS Cloud frees the users from spending time & money for maintaining their Data Centers.

Answer: C. Option A is incorrect. Instead of heavily investing in Data Centers & Servers for hosting their applications, clients can consume resources from the AWS Cloud, e.g., Compute and Storage, and only pay for the resources they have consumed. Capital Expenditure (CAPEX) thus gets converted into Operational Expenditure (OPEX) while using a Cloud environment, which results in cost efficiency for the client. Option B is incorrect. The AWS Cloud model helps eliminate guessing about infrastructure capacity needs by offering a scale-on-demand model. For example, during Christmas when there is a sudden increase in website traffic, additional resources (like EC2) can be created on demand (scale up) to address the surge in traffic. Similarly, when the festival period goes away, the additional resources that were created can be terminated (scale down) so that the user does not need to pay for idle resources. Option C is CORRECT. AWS Cloud adopts a Shared Responsibility Model where Security & Compliance responsibility is shared between the Client & AWS, so clients do not get complete control of security and cannot simply replicate their data center security model on the Cloud. This shared responsibility security model helps relieve the client of operational burden, as AWS operates, manages & controls the components from the host Operating System and Virtualization layer down to the physical security of its facilities (Data Centers). The client can effectively manage the security of their applications & the infrastructure in which they reside. AWS also helps customers understand the robust controls in place to maintain security and compliance in the Cloud through its compliance certifications, e.g., PCI DSS compliance. Option D is incorrect. AWS Cloud allows clients to focus on the projects that differentiate their business rather than maintaining infrastructure. AWS performs all the heavy lifting of maintaining facilities (Data Centers).

I have developed an application using AWS services that has been deployed to multiple regions. How do I achieve the best Performance and Availability when users from different locations access my application? A. Use Route 53 latency based routing for improving performance and Availability. B. Use a CloudFront distribution for improving performance and Availability. C. Use Global Accelerator for improving performance and Availability. D. Use an endpoint of the application directly for accessing it that lies within a user's Region.

Answer: C. Option A is incorrect. Route 53 latency-based routing helps select a region that may be relatively faster for the user to send traffic to, based on certain factors like internet traffic and proximity to the user's location. However, the actual route to the destination does not involve providing a fast network path for optimum performance, which is the prime requirement for the scenario. Latency-based routing does not address Availability. Option B is incorrect. CloudFront improves performance for both cacheable content (e.g., images, videos) and dynamic content (e.g., API, dynamic site delivery) using edge locations. Here we are talking about application performance & Availability with a highly reliable, performant network, rather than bringing content closer to the user. Option C is CORRECT. Global Accelerator improves the performance of a wide range of applications over TCP or UDP by proxying packets at Edge locations to applications running in one or more AWS regions. Global Accelerator provides static IP addresses acting as a fixed entry point to application endpoints (Application Load Balancers, EC2 instances, ...) in single or multiple AZs, offering High Availability. It uses the AWS global network to optimize the path from users to the application, improving the resultant traffic performance by as much as 60%. It provides very low latency for a great user experience by (i) routing traffic to the closest edge location through Anycast, and then routing it to the closest regional endpoint over the AWS global network; (ii) it is good for Gaming, Media, and Mobile applications. Option D is incorrect since 1. The application may not be deployed in the Region that the user is trying to access. 2. There is no way to calculate latency even though there is proximity to the user's region. 3. Availability will be restricted to AZs rather than Regions if regional endpoints, e.g., ELBs, are directly accessed.

I have a web application that has been deployed to the AWS Mumbai region. My application soon becomes popular, and now there are users all over the world who would like to access it. If I use a CloudFront distribution for doing so, which statements are FALSE for CloudFront? (Select TWO.) A. CloudFront uses the concept of Edge locations for caching and delivering content faster to its users. B. CloudFront can help improve performance by using Keep-alive connections between the Edge locations & the origin server. C. CloudFront does not cache dynamic content. D. CloudFront can use only S3 buckets as their Origin Server from where they can cache content. E. CloudFront can customize content at the Edge locations before delivering it to users.

Answer: C, D. Option A is incorrect. CloudFront does use the concept of Edge locations for caching content that is requested by the user. When a user in the US requests content from a web server hosted in the Mumbai region, CloudFront will initially check whether the content is available at the nearest edge location in the US region. If it is available, the content will be served directly from the Edge location. If not, CloudFront will request the Origin Server, get the content, and cache it at the Edge location for serving future requests. Option B is incorrect. Every HTTP connection runs on TCP/IP, and a TCP handshake has to be completed before each new HTTP connection can be used. By keeping persistent (keep-alive) connections open between the Edge locations and the origin server, CloudFront avoids repeating this handshake for every request, which does improve performance, so the statement is true. Option C is CORRECT (i.e., the statement is false), since we can use the Time To Live (TTL) value to enable caching of dynamic content. Option D is CORRECT (i.e., the statement is false). CloudFront can use Origin servers of your choice, which can be S3 or a custom origin like EC2, ELB, etc. Option E is incorrect. CloudFront has the ability to customize content at the Edge location before delivering it to its users. For example, a Lambda function (usually referred to as Lambda@Edge) can use the triggers Viewer Request, Origin Request, Origin Response and Viewer Response to customize the end-user experience.
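
To illustrate Option E, a minimal Lambda@Edge viewer-request sketch that rewrites the URI for mobile viewers; it assumes the CloudFront-Is-Mobile-Viewer device header has been configured to be forwarded, and the paths are hypothetical:

```python
def handler(event, context):
    # Viewer-request trigger: runs at the edge location before CloudFront
    # checks its cache, so the served content can vary per device type.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    is_mobile = headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value")
    if is_mobile == "true":
        request["uri"] = "/mobile" + request["uri"]  # hypothetical path prefix
    return request
```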

A client who has adopted the AWS Cloud would like to ensure that his systems deliver continuous business value & improve supporting processes and procedures. Which design pillar will he need to focus on to achieve this? A. Reliability B. Scalability C. Automation D. Operational Excellence

Answer: D. Continuous business value is achieved with the ability to monitor existing running systems & improve processes and procedures by managing & automating changes. For example, in response to saturated CPU usage on an EC2 instance, a monitoring system like CloudWatch will automatically trigger the creation of a new instance through alarms. This ensures that the system's capacity meets changing load demands. This is part of the Operational Excellence pillar of the AWS Well-Architected Framework, which focuses on running & monitoring systems to deliver business value. Option A is incorrect since the Reliability pillar focuses on the ability of the system to recover from infrastructure or service failures. Option B is incorrect since scalability is a by-product of monitoring solutions, which provide the capability for infrastructure resources to cope with increases or decreases of capacity by adding or terminating resources as needed. Option C is incorrect since automation is the ability to induce certain systemic requirements like scalability and auto-recovery using monitoring solutions. It helps in improving a system's stability & the efficiency of an Organization. Option D is CORRECT. Refer to the above description for details.
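
As an illustration of the CloudWatch alarm described above, a hedged boto3 sketch; the Auto Scaling group name and scaling-policy ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU stays above 80% for two 5-minute periods; the
# alarm action (a hypothetical scaling-policy ARN) then adds capacity.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-scale-out",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:placeholder"],  # hypothetical policy ARN
)
```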

Whilst working on a collaborative project, an administrator would like to record the initial configuration and the several authorized changes that engineers make to the route table of a VPC. What is the best method to achieve this? A. Use of AWS Config B. Use of VPC Flow Logs C. Use of AWS CloudTrail D. Use of an AWS Lambda function that is triggered to save a log file to an S3 bucket each time configuration changes are made.

Correct Answer - A. AWS Config can be used to keep track of configuration changes on AWS resources, keeping multiple date-stamped versions in a reviewable history. This makes it the best method to meet the scenario requirements. https://aws.amazon.com/config/ Option B is incorrect because VPC Flow Logs only capture IP traffic-related information passing through and from network interfaces within the VPC. VPC Flow Logs cannot capture configuration changes made to route tables. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html Option C is incorrect because AWS CloudTrail captures identity access activity and event history in the AWS environment, recording actions and API calls, but it is not best suited to keeping a record of configurations. https://aws.amazon.com/cloudtrail/ Option D is incorrect because using a Lambda function to write configuration changes might meet the requirements but would not be the best method, since AWS Config can deliver what is needed with much less administrative input.
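
A minimal boto3 sketch of retrieving that reviewable history for a route table from AWS Config (the route table ID is hypothetical, and a Config recorder is assumed to be enabled):

```python
import boto3

config = boto3.client("config")

# List the recorded, date-stamped configuration versions of one route table.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::RouteTable",
    resourceId="rtb-0123456789abcdef0",  # hypothetical route table ID
    limit=10,
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```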

Amazon Macie uses artificial intelligence (AI) and machine learning (ML) to perform which three main functions on users' data? A. Discover, Classify, Protect B. Observe, Design, Alert C. Optimize, Alert, Secure D. Detect, Discover, Alert

Correct Answer - A. Amazon Macie is a fully managed security service that uses AI and ML to continuously observe data access activity in order to alert the user of any anomalies if they arise. Alerts may include unauthorized access, data leaks and any out-of-the-norm patterns. The major functions are to discover, classify and protect the user's data. https://aws.amazon.com/macie/ https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html Option B is incorrect because observe, design, alert are not the main functional components of Amazon Macie. Option C is incorrect because optimize, alert, secure are not the main functional components of Amazon Macie. Option D is incorrect because detect, discover, alert are not the main functional components of Amazon Macie.

Which statements regarding VPC Peering are accurate? Select TWO. A. Two VPCs in different AWS Regions and under separate AWS Accounts can share traffic between each other. B. In order for VPC Peering to work, each VPC should have a public subnet. C. In VPC Peering, it is possible for traffic from one VPC to traverse through a transit VPC in order to reach a third VPC. D. Traffic between VPC peers in different AWS Regions is not encrypted by default. E. VPC Peering can be used to replicate data to geographically distinct locations for fault-tolerance, disaster recovery and redundancy.

Correct Answer - A, E. VPC Peering can be established between VPCs in different AWS Regions and in separate AWS Accounts. The logical networks still use the same common AWS backbone network infrastructure to communicate. By utilizing this infrastructure, VPC Peering makes it possible to securely store mission-critical data in geographically distinct locations for fault-tolerance, disaster recovery and redundancy. https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html Option B is INCORRECT because VPC Peering can still work without the use of public IP addresses. Private IP address subnets can be routed between peers as long as their respective CIDR block ranges do not overlap. Option C is INCORRECT because it is not permissible on the AWS cloud to route traffic of one VPC peer through a transitive peer to get to a third VPC. https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html Option D is INCORRECT because traffic between VPC peers in different AWS Regions is indeed encrypted by default.
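
For illustration, a hedged boto3 sketch of a cross-Region, cross-account peering request and its acceptance (all IDs and Regions are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection to a VPC in another account and Region.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # hypothetical requester VPC
    PeerVpcId="vpc-22222222",    # hypothetical accepter VPC
    PeerOwnerId="123456789012",  # hypothetical peer account ID
    PeerRegion="eu-west-1",
)

# The owner of the peer VPC must accept the request from their side,
# i.e., this call runs with the peer account's credentials.
peer_ec2 = boto3.client("ec2", region_name="eu-west-1")
peer_ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```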

Which service can be used to create steps required to automate build, test and deployments for a web application? A. AWS CodeCommit B. AWS CodePipeline C. AWS CodeDeploy D. AWS CodeBuild

Correct Answer - B. AWS CodePipeline is a fully managed service that automates the release pipeline for application updates. For updates, it uses application code stored in AWS CodeCommit, performs testing using AWS CodeBuild, and uses AWS CodeDeploy for deployment. Option A is incorrect as AWS CodeCommit is used to store application code. Option C is incorrect as AWS CodeDeploy is used for deployment of code to resources. Option D is incorrect as AWS CodeBuild is used to build and test application code. For more information on AWS CodePipeline, refer to the following URL: https://aws.amazon.com/codepipeline/faqs/

To receive AWS Trusted Advisor Notifications, what actions are required from the customer end? A. Open a ticket with AWS Support. B. Set up Notification in Dashboard C. Set up Amazon Simple Notification Service D. No action is required, all Notifications are sent automatically on a weekly basis.

Correct Answer - B. AWS Trusted Advisor Notification is an optional service that needs to be set up from the dashboard by providing a list of recipients and selecting the resource items for which status is required. Option A is incorrect as opening an AWS support ticket is not required to receive AWS Trusted Advisor Notifications. Option C is incorrect as Amazon SNS is a separate service for push notifications; it is not required to receive AWS Trusted Advisor Notifications. Option D is incorrect as you need to set up the notifications in the dashboard. For more information on AWS Trusted Advisor, refer to the following URL: https://aws.amazon.com/premiumsupport/faqs/?nc=sn&loc=6

On which of the following resources does Amazon Inspector perform network accessibility checks? A. Amazon CloudFront B. Amazon VPN C. Amazon EC2 instance D. Amazon VPC

Correct Answer - C. Amazon Inspector provides two types of rules packages. The network reachability rules package performs network accessibility checks on Amazon EC2 instances. The host assessment rules package checks for vulnerabilities on Amazon EC2 instances. Options A, B & D are incorrect as Amazon Inspector performs network accessibility checks on Amazon EC2 instances, not on Amazon CloudFront, Amazon VPN or Amazon VPC. For more information on Amazon Inspector, refer to the following URL: https://aws.amazon.com/inspector/faqs/

For the AWS Shared Responsibility Model, which of the following responsibilities is NOT a part of shared controls by both customer and AWS? A. Patch Management B. Configuration Management C. Global infrastructure that runs AWS Cloud services. D. Training

Correct Answer - C. The global AWS infrastructure, including the hardware, software, networking, and facilities, is the responsibility of AWS, not the responsibility of the customer. Option A is incorrect. AWS is responsible for patching resources within the AWS infrastructure, while customers are responsible for patching their guest OS and applications. Option B is incorrect. AWS is responsible for configuring resources within the AWS infrastructure, while customers are responsible for configuring their guest OS, databases and applications. Option D is incorrect. AWS is responsible for training AWS employees, while the customer is responsible for training employees within their own organization. For more information on the Shared Responsibility Model, refer to the following URL: https://aws.amazon.com/compliance/shared-responsibility-model/

Which of the following encryption techniques is used by AWS KMS for integration with other AWS services? A. RSA Encryption B. AES Encryption C. Triple DES Encryption D. Envelope Encryption

Correct Answer - D. AWS KMS uses envelope encryption when integrating with other AWS services. In this technique, data is encrypted by a data encryption key within that service. This data key is in turn encrypted using a customer master key (CMK) stored in AWS KMS. Options A, B & C are incorrect as AWS KMS does not use these encryption techniques for integration with other AWS services. For more information on AWS KMS, refer to the following URL: https://aws.amazon.com/kms/features/
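
A minimal boto3 sketch of envelope encryption with KMS (the key alias is hypothetical): KMS hands back a data key in both plaintext and encrypted form; the plaintext copy encrypts the data locally and is then discarded:

```python
import boto3

kms = boto3.client("kms")

# Ask KMS for a data key under a CMK (hypothetical alias). KMS returns the
# key twice: in plaintext, and encrypted under the CMK.
data_key = kms.generate_data_key(KeyId="alias/demo-key", KeySpec="AES_256")

plaintext_key = data_key["Plaintext"]       # encrypt data locally, then discard
encrypted_key = data_key["CiphertextBlob"]  # store alongside the ciphertext

# Later: recover the plaintext key to decrypt the data.
restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
assert restored == plaintext_key
```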

An administrator would like to check if the Amazon CloudFront identity is making access API calls to an S3 bucket where a static website is hosted. Where can this information be obtained? A. Configuring Amazon Athena to run queries on the Amazon CloudFront distribution. B. Check AWS CloudWatch logs on the S3 bucket. C. In the webserver, tail for identity access logs from the Amazon CloudFront identity. D. In AWS CloudTrail Event history, look up access calls and filter for the Amazon CloudFront identity.

Correct Answer - D. By viewing Event history in AWS CloudTrail, the administrator can access operational, access and activity logs for the past 90 days for the S3 bucket that hosts the static website. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html Option A is INCORRECT because Amazon Athena needs a specific data repository from which a database and table can be created in order to run queries. Data repositories can be a folder in an S3 bucket where logs are written to. Option B is INCORRECT because AWS CloudWatch does not log access API calls from one resource to another; AWS CloudTrail can do this. Option C is INCORRECT because it is not possible to access the underlying web server for CloudFront. It is fully managed by AWS.
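
For illustration, a hedged boto3 sketch of that Event history lookup (the bucket name is hypothetical):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Search the last 90 days of Event history for calls against the bucket,
# then inspect which identity made each call.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "ResourceName",
                       "AttributeValue": "my-static-site-bucket"}],
    MaxResults=50,
)
for e in events["Events"]:
    print(e["EventName"], e.get("Username"), e["EventTime"])
```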

A client who is using AWS cloud services has multiple environments for his EC2 servers, like DEV, QA and PROD. The client would like to know the billing details for each of these environments so he can make informed decisions related to saving costs. He can do that using: A. Consolidated billing B. AWS Budgets C. AWS Cost allocation tags D. AWS Organizations

Correct Answer: C. Cost allocation tags appear as additional columns in detailed billing reports. As an example, let's say an AWS Region has 50 EC2 instances running, out of which 15 are DEV, 15 are QA & the remaining are PROD. While launching these instances, one can apply a tag named "env" with a value of DEV for 15 instances, QA for 15 instances and PROD for 20 instances. These tags then appear in a detailed billing report as columns which can be filtered by environment, so the cost of a set of environment servers can be calculated easily. Option A is incorrect since consolidated billing is a feature that applies to scenarios where a client has multiple accounts and would like to receive one bill for all of them. Option B is incorrect since AWS Budgets is used to set custom budgets to track costs and usage of resources & get alerted when actual or forecasted cost and usage exceed the limits set for that budget. Option C is CORRECT. Refer to the above description for details. Option D is incorrect. AWS Organizations allows you to consolidate multiple AWS accounts into an organization that can be centrally managed for billing. Organizations can have Service Control Policies applied at the organization level that can override policies set up by IAM for individual accounts. Reference: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
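
A minimal boto3 sketch of reading that per-environment breakdown with Cost Explorer, grouped by the "env" cost allocation tag (the dates and tag key are illustrative):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Break down one month's cost by the "env" cost allocation tag.
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "env"}],
)
for group in report["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```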

