AWS - Certified Solutions Architect - Associate (SAA-C01) / Multiple Choice


*Which of the following is correct?* * # of Regions > # of Availability Zones > # of Edge Locations * # of Availability Zones > # of Regions > # of Edge Locations * # of Availability Zones > # of Edge Locations > # of Regions * # of Edge Locations > # of Availability Zones > # of Regions

*# of Edge Locations > # of Availability Zones > # of Regions* The number of Edge Locations is greater than the number of Availability Zones, which is greater than the number of Regions.

*What is the minimum file size that I can store on S3?* * 1MB * 0 bytes * 1 byte * 1KB

*0 bytes*

*How many S3 buckets can I have per account by default?* * 20 * 10 * 50 * 100

*100*

*What is the availability of S3-OneZone-IA?* * 99.90% * 100% * 99.50% * 99.99%

*99.50%* OneZone-IA is only stored in one Zone. While it has the same Durability, it may be less available than normal S3 or S3-IA.

*What is the availability of objects stored in S3?* * 99.99% * 100% * 99.90% * 99%

*99.99%*

*What does an AWS Region consist of?* * A console that gives you quick, global picture of your cloud computing environment. * A collection of data centers that is spread evenly around a specific continent. * A collection of databases that can only be accessed from a specific geographic region. * A distinct location within a geographic area designed to provide high availability to specific geography.

*A distinct location within a geographic area designed to provide high availability to specific geography* Each region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones.

*What is an AWS region?* * A region is an independent data center, located in different countries around the globe. * A region is a geographical area divided into Availability Zones. Each region contains at least two Availability Zones. * A region is a collection of Edge Locations available in specific countries. * A region is a subset of AWS technologies. For example, the Compute region consists of EC2, ECS, Lambda, etc.

*A region is a geographical area divided into Availability Zones.* Each region contains at least two Availability Zones.

*What is a way of connecting your data center with AWS?* * AWS Direct Connect * Optical fiber * Using an Infiniband cable * Using a popular Internet service from a vendor such as Comcast or AT&T

*AWS Direct Connect* ----------------------------------- Your colocation or MPLS provider may use an optical fiber or Infiniband cable behind the scenes. If you want to connect over the Internet, then you need a VPN.

*You want to deploy your applications in AWS, but you don't want to host them on any servers. Which services would you choose for doing this?* (Choose two.) * Amazon ElastiCache * AWS Lambda * Amazon API Gateway * Amazon EC2

*AWS Lambda* *Amazon API Gateway* ----------------------------------- Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud, and Amazon EC2 is a server.

*Power User Access allows ________.* * Full Access to all AWS services and resources. * Users to inspect the source code of the AWS platform * Access to all AWS services except the management of groups and users within IAM. * Read Only access to all AWS services and resources.

*Access to all AWS services except the management of groups and users within IAM.*

*What level of access does the "root" account have?* * No Access * Read Only Access * Power User Access * Administrator Access

*Administrator Access*

*If you want to speed up the distribution of your static and dynamic web content such as HTML, CSS, image, and PHP files, which service would you consider?* * Amazon S3 * Amazon EC2 * Amazon Glacier * Amazon CloudFront

*Amazon CloudFront* ----------------------------------- Amazon S3 can be used to store objects; it can't speed up the operations. Amazon EC2 provides the compute. Amazon Glacier is the archive storage.

*If you want to run your relational database in the AWS cloud, which service would you choose?* * Amazon DynamoDB * Amazon Redshift * Amazon RDS * Amazon ElastiCache

*Amazon RDS* ----------------------------------- Amazon DynamoDB is a NoSQL offering, Amazon Redshift is a data warehouse offering, and Amazon ElastiCache is used to deploy Redis or Memcached protocol-compliant server nodes in the cloud.

*A company needs to have its object-based data stored on AWS. The initial size of data would be around 500 GB, with overall growth expected to go into 80TB over the next couple of months. The solution must also be durable. Which of the following would be an ideal storage option to use for such a requirement?* * DynamoDB * Amazon S3 * Amazon Aurora * Amazon Redshift

*Amazon S3* Amazon S3 is object storage built to store and retrieve any amount of data from anywhere - web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3.

*You want to be notified for any failure happening in the cloud. Which service would you leverage for receiving the notifications?* * Amazon SNS * Amazon SQS * Amazon CloudWatch * AWS Config

*Amazon SNS* ----------------------------------- Amazon SQS is the queue service; Amazon CloudWatch is used to monitor cloud resources; and AWS Config is used to assess, audit, and evaluate the configurations of your AWS resources.

*An application is going to be developed using AWS. The application needs a storage layer to store important documents. Which of the following options is incorrect for fulfilling this requirement?* * Amazon S3 * Amazon EBS * Amazon EFS * Amazon Storage Gateway VTL

*Amazon Storage Gateway VTL* It is used to back up data to the cloud as virtual tapes. ----------------------------------- *NOTE:* The question asks which of the options is *incorrect* for storing *important* documents in the cloud, and Option D is the answer. The question is about storage, not data archival, so Option D is not suited to the requirement.

*What is Amazon Glacier?* * A highly secure firewall designed to keep everything out. * A tool that allows to "freeze" an EBS volume. * An AWS service designed for long term data archival. * It is a tool used to resurrect deleted EC2 snapshots.

*An AWS service designed for long term data archival.*

*Amazon's EBS volumes are ________.* * Block based storage * Encrypted by default * Object based storage * Not suitable for databases

*Block based storage* EBS volumes provide block-based storage. (By contrast, EFS and FSx are file-based storage services.)

*You are a solutions architect who works with a large digital media company. The company has decided that they want to operate within the Japanese region and they need a bucket called "testbucket" set up immediately to test their web application on. You log in to the AWS console and try to create this bucket in the Japanese region, however you are told that the bucket name is already taken. What should you do to resolve this?* * Bucket names are global, not regional. This is a popular bucket name and is already taken. You should choose another bucket name.

*Bucket names are global, not regional.* This is a popular bucket name and is already taken. You should choose another bucket name.

*How can you get visibility of user activity by recording the API calls made to your account?* * By using Amazon API Gateway * By using Amazon CloudWatch * By using AWS CloudTrail * By using Amazon Inspector

*By using AWS CloudTrail* ----------------------------------- Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudWatch is used to monitor cloud resources, and Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

*How do you integrate AWS with the directories running on-premise in your organization?* * By using AWS Direct Connect * By using a VPN * By using AWS Directory Service * Directly via the Internet

*By using AWS Directory Service* ----------------------------------- AWS Direct Connect and a VPN are used to connect your corporate data center with AWS. You cannot use the Internet directly to integrate directories; you need a service to integrate your on-premise directory to AWS.

*How can you have a shared file system across multiple Amazon EC2 instances?* * By using Amazon S3 * By mounting Elastic Block Storage across multiple Amazon EC2 servers * By using Amazon EFS * By using Amazon Glacier

*By using Amazon EFS* ----------------------------------- Amazon S3 is an object store, Amazon EBS can't be mounted across multiple servers, and Amazon Glacier is an extension of Amazon S3.

*What are the various ways you can control access to the data stored in S3?* (Choose all that apply.) * By using IAM policy * By creating ACLs * By encrypting the files in a bucket * By making all the files public * By creating a separate folder for the secure files

*By using IAM policy.* *By creating ACLs* By encrypting the files in the bucket, you can make them secure, but it does not help in controlling the access. By making the files public, you are providing universal access to everyone. Creating a separate folder for secure files won't help because, again, you need to control the access of the separate folder.

*You have been asked to advise on a scaling concern. The client has an elegant solution that works well. As the information base grows, they use CloudFormation to spin up another stack made up of an S3 bucket and supporting compute instances. The trigger for creating a new stack is when the PUT rate approaches 100 PUTs per second. The problem is that as the business grows, the number of buckets is growing into the hundreds and will soon be in the thousands. You have been asked what can be done to reduce the rate at which new stacks are created.*

*Change the trigger level to around 3000 as S3 can now accommodate much higher PUT and GET levels.* Until 2018 there was a hard limit on S3 of 100 PUTs per second. To achieve even that rate, care needed to be taken with the structure of key names to ensure parallel processing. As of July 2018 the limit was raised to 3,500 PUTs per second per prefix, and the need for careful key design was essentially eliminated. Disk IOPS is not the issue in this problem, and neither is the account limit.
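For illustration, the pre-2018 key-naming workaround mentioned above can be sketched as follows. This is a toy sketch, not an AWS API: the function and prefix length are hypothetical, and the technique is no longer needed at today's request limits.

```python
import hashlib

def randomized_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash-derived prefix so sequential key names spread
    across S3 index partitions. This was the pre-2018 workaround for the
    100 PUT/s limit; since July 2018 S3 scales per prefix automatically
    and the trick is unnecessary."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:prefix_len]
    return f"{prefix}/{original_key}"

# Date-based keys like these used to hash to the same partition;
# the random-looking prefix spreads them out.
print(randomized_key("2018/07/15/photo1.jpg"))
```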

*Which of the following options allows users to have secure access to private files located in S3?* (Choose 3) * CloudFront Signed URLs * CloudFront Origin Access Identity * Public S3 buckets * CloudFront Signed Cookies

*CloudFront Signed URLs* *CloudFront Origin Access Identity* *CloudFront Signed Cookies* There are three options in the question which can be used to secure access to files stored in S3 and therefore can be considered correct. Signed URLs and Signed Cookies are different ways to ensure that users attempting access to files in an S3 bucket can be authorised. One method generates URLs and the other generates special cookies but they both require the creation of an application and policy to generate and control these items. An Origin Access Identity on the other hand, is a virtual user identity that is used to give the CloudFront distribution permission to fetch a private object from an S3 bucket. Public S3 buckets should never be used unless you are using the bucket to host a public website and therefore this is an incorrect option.

*In order to enable encryption at rest using EC2 and Elastic Block Store, you must ________.* * Mount the EBS volume in to S3 and then encrypt the bucket using a bucket policy. * Configure encryption when creating the EBS volume * Configure encryption using the appropriate operating system's file system * Configure encryption using X.509 certificates

*Configure encryption when creating the EBS volume* The use of encryption at rest is a default requirement for many industry compliance certifications. Using AWS managed keys to provide EBS encryption at rest is a relatively painless and reliable way to protect assets and demonstrate your professionalism in any commercial situation.

*You are a developer at a fast growing start up. Until now, you have used the root account to log in to the AWS console. However, as you have taken on more staff, you will now need to stop sharing the root account to prevent accidental damage to your AWS infrastructure. What should you do so that everyone can access the AWS resources they need to do their jobs?* (Choose 2) * Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided. * Create a customized sign-in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with.

*Create individual user accounts with minimum necessary rights and tell the staff to log in to the console using the credentials provided.* Create a customized sign in link such as "yourcompany.signin.aws.amazon.com/console" for your new users to use to sign in with.

*One of your users is trying to upload a 7.5GB file to S3. However, they keep getting the following error message: "Your proposed upload exceeds the maximum allowed object size.". What solution to this problem does AWS recommend?* * Design your application to use the Multipart Upload API for all objects. * Raise a ticket with AWS to increase your maximum object size. * Design your application to use large object upload API for this object. * Log in to the S3 console, click on the bucket and the

*Design your application to use the Multipart Upload API for all objects.*
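A quick worked example of why multipart upload is needed here: a single S3 PUT maxes out at 5 GiB, so a 7.5 GB object must be split into parts. The sketch below (the helper name and 100 MiB part size are illustrative choices, not AWS requirements) computes the part plan.

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

# S3 limits: a single PUT maxes out at 5 GiB; multipart upload supports
# objects up to 5 TiB, with parts between 5 MiB and 5 GiB.
SINGLE_PUT_LIMIT = 5 * GiB

def plan_parts(object_size: int, part_size: int = 100 * MiB):
    """Return (number_of_parts, size_of_last_part) for a multipart upload."""
    n = math.ceil(object_size / part_size)
    last = object_size - (n - 1) * part_size
    return n, last

size = int(7.5 * GiB)                 # the 7.5 GB file from the question
print(size > SINGLE_PUT_LIMIT)        # True: a single PUT would fail
print(plan_parts(size))               # (77, 83886080) -> 77 parts, last is 80 MiB
```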

*Which statement best describes Availability Zones?* * Two zones containing compute resources that are designed to automatically maintain synchronized copies of each other's data. * Distinct locations from within an AWS region that are engineered to be isolated from failures. * A Content Distribution Network used to distribute content to users. * Restricted areas designed specifically for the creation of Virtual Private Clouds.

*Distinct locations from within an AWS region that are engineered to be isolated from failures.* An Availability Zone (AZ) is a distinct location within an AWS Region. ----------------------------------- Each Region comprises at least two AZs.

*Your company is planning on hosting an e-commerce application on the AWS Cloud. There is a requirement for sessions to be always maintained for users. Which of the following can be used for storing session data?* (Choose 2) * CloudWatch * DynamoDB * Elastic Load Balancing * ElastiCache * Storage Gateway

*DynamoDB* *ElastiCache* DynamoDB and ElastiCache are perfect options for storing session data. Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution while removing the complexity associated with deploying and managing a distributed cache environment.

*A company has decided to host a MongoDB database on an EC2 Instance. There is an expectancy of a large number of reads and writes on the database. Which of the following EBS storage types would be the ideal one to implement for the database?* * EBS Provisioned IOPS SSD * EBS Throughput Optimized HDD * EBS General Purpose SSD * EBS Cold HDD

*EBS Provisioned IOPS SSD* Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD.

*Which of the below are compute services from AWS?* * EC2 * S3 * Lambda * VPC

*EC2* *Lambda* Both Lambda and EC2 offer computing in the cloud. ----------------------------------- S3 is a storage offering while VPC is a network service.

*In which of the following is CloudFront content cached?* * Region * Data Center * Edge Location * Availability Zone

*Edge location* CloudFront content is cached in Edge Locations.

*What is the best way to protect a file in Amazon S3 against accidental delete?* * Upload the files in multiple buckets so that you can restore from another when a file is deleted * Back up the files regularly to a different bucket or in a different region * Enable versioning on the S3 bucket * Use MFA for deletion * Use cross-region replication

*Enable versioning on the S3 bucket* You can certainly upload the files to multiple buckets, but the cost increases with each copy you store, and you then have to manage three or four times as many files and map them to applications, which does not make sense. Backing up files regularly to a different bucket can help you restore files to some extent, but what if you uploaded a new file just after taking the backup? The correct answer is versioning, since enabling versioning maintains all versions of a file, and you can restore any version even after the file is deleted. You can certainly use MFA for delete, but what if, even with MFA, you delete the wrong file? With CRR, if a DELETE request is made without specifying an object version ID, Amazon S3 adds a delete marker, which cross-region replication replicates to the destination bucket. If a DELETE request specifies a particular object version ID to delete, Amazon S3 deletes that object version in the source bucket, but it does not replicate the deletion to the destination bucket.
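The delete-marker behaviour that makes versioning safe against accidental deletes can be sketched with a toy in-memory model. This is purely illustrative, not the AWS API: a simple DELETE appends a marker rather than destroying data, so any earlier version stays restorable.

```python
import itertools

class VersionedBucket:
    """A minimal in-memory model of S3 versioning semantics (illustrative)."""

    def __init__(self):
        self._versions = {}            # key -> list of (version_id, body)
        self._ids = itertools.count(1)

    def put(self, key, body):
        # Every PUT creates a NEW version; old versions are kept.
        self._versions.setdefault(key, []).append((next(self._ids), body))

    def delete(self, key):
        # DELETE without a version id just adds a delete marker (body=None).
        self._versions.setdefault(key, []).append((next(self._ids), None))

    def get(self, key):
        versions = self._versions.get(key, [])
        if not versions or versions[-1][1] is None:
            raise KeyError(key)        # latest version is a delete marker
        return versions[-1][1]

    def restore(self, key, version_id):
        # Restoring is effectively re-PUTting an older version's body.
        for vid, body in self._versions.get(key, []):
            if vid == version_id and body is not None:
                self.put(key, body)
                return
        raise KeyError((key, version_id))
```

A typical flow: `put` a document, `delete` it (a marker hides it), then `restore` from the surviving version 1 and `get` returns the original body again.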

*You have created a new AWS account for your company, and you have also configured multi-factor authentication on the root account. You are about to create your new users. What strategy should you consider in order to ensure that there is good security on this account?* * Restrict login to the corporate network only. * Give all users the same password so that if they forget their password they can just ask their co-workers. * Require users to only be able to log in using biometric authentication. * Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.

*Enact a strong password policy: user passwords must be changed every 45 days, with each password containing a combination of capital letters, lower case letters, numbers, and special symbols.*

*When creating a new security group, all inbound traffic is allowed by default.* * False * True

*False* There are slight differences between a normal 'new' security group and the 'default' security group in the default VPC. For a 'new' security group, nothing is allowed in by default.

*You work for a health insurance company that amasses a large number of patients' health records. Each record will be used once when assessing a customer, and will then need to be securely stored for a period of 7 years. In some rare cases, you may need to retrieve this data within 24 hours of a claim being lodged. Given these requirements, which type of AWS storage would deliver the least expensive solution?* * Glacier * S3 - IA * S3 - RRS * S3 * S3 - OneZone-IA

*Glacier* The recovery time is a key decider. The record storage must be safe, durable, and low cost, and the recovery can be slow. All are features of Glacier.

*A new employee has just started work, and it is your job to give her administrator access to the AWS console. You have given her a user name, an access key ID, a secret access key, and you have generated a password for her. She is now able to log in to the AWS console, but she is unable to interact with any AWS services. What should you do next?* * Tell her to log out and try logging back in again. * Ensure she is logging in to the AWS console from your corporate network and not the normal internet. * Grant her Administrator access by adding her to the Administrators' group.

*Grant her Administrator access by adding her to the Administrators' group.*

*You have uploaded a file to S3. Which HTTP code would indicate that the upload was successful?* * HTTP 200 * HTTP 404 * HTTP 307 * HTTP 501

*HTTP 200*

*Which statement best describes IAM?* * IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform. * IAM stands for Improvised Application Management, and it allows you to deploy and manage applications in the AWS Cloud. * IAM allows you to manage permissions for AWS resources only. * IAM allows you to manage users' passwords only. AWS staff must create new users for your organization. This is done by raising a ticket.

*IAM allows you to manage users, groups, roles, and their corresponding level of access to the AWS Platform.*

*Which of the following is not a feature of IAM?* * IAM offers fine-grained access control to AWS resources. * IAM allows you to setup biometric authentication, so that no passwords are required. * IAM integrates with existing active directory account allowing single sign-on. * IAM offers centralized control of your AWS account.

*IAM allows you to setup biometric authentication, so that no passwords are required.*

What is an additional way to secure the AWS accounts of both the root account and new users alike? * Configure the AWS Console so that you can only log in to it from a specific IP Address range * Store the access key id and secret access key of all users in a publicly accessible plain text document on S3 of which only you and members of your organization know the address to. * Configure the AWS Console so that you can only log in to it from your internal network IP address range. * Implement Multi-Factor Authentication for all accounts.

*Implement Multi-Factor Authentication for all accounts.*

*What is AWS Storage Gateway?* * It allows large scale import/exports in to the AWS cloud without the use of an internet connection. * None of the above. * It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site. * It allows a direct MPLS connection in to AWS.

*It is a physical or virtual appliance that can be used to cache S3 locally at a customer's site.* At its heart it is a way of using AWS S3 managed storage to supplement on-premise storage. It can also be used within a VPC in a similar way.

*In what language are policy documents written?* * JSON * Python * Java * Node.js

*JSON*
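To make this concrete, here is what a minimal policy document looks like and a check that it is valid JSON. The bucket name and the specific actions chosen are illustrative examples, not from the question; only the `Version`/`Statement`/`Effect`/`Action`/`Resource` structure is the standard policy shape.

```python
import json

# A minimal example policy document: read-only access to a single
# hypothetical S3 bucket ("example-bucket" is an illustrative name).
policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
"""

doc = json.loads(policy)              # a policy document must parse as JSON
print(doc["Statement"][0]["Effect"])  # -> Allow
```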

*You are consulting to a mid-sized company with a predominantly Mac & Linux desktop environment. In passing they comment that they have over 30TB of unstructured Word and spreadsheet documents of which 85% of these documents don't get accessed again after about 35 days. They wish that they could find a quick and easy solution to have tiered storage to store these documents in a more cost effective manner without impacting staff access. What options can you offer them?* (Choose 2) * Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage. * Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.

*Migrate documents to EFS storage and make use of life-cycle using Infrequent Access storage.* *Migrate documents to File Gateway presented as NFS and make use of life-cycle using Infrequent Access storage.* ----------------------------------- Trying to use S3 without File Gateway in front would be a major impact to the user environment. Using File Gateway is the recommended way to use S3 with shared document pools. Life-cycle management and Infrequent Access storage are available for both S3 and EFS. A restriction, however, is that 'Using Amazon EFS with Microsoft Windows is not supported'. File Gateway does not support iSCSI on the client side.

*An AWS VPC is a component of which group of AWS services?* * Database Services * Networking Services * Compute Services * Global Infrastructure

*Networking Services* A Virtual Private Cloud (VPC) is a virtual network dedicated to a single AWS account. ----------------------------------- It is logically isolated from other virtual networks in the AWS cloud, providing compute resources with security and robust networking functionality.

*Every user you create in the IAM systems starts with ________.* * Full Permissions * No Permissions * Partial Permissions

*No Permissions*

*What is the default level of access a newly created IAM User is granted?* * Administrator access to all AWS services. * Power user access to all AWS services. * No access to any AWS services. * Read only access to all AWS services.

*No access to any AWS services.*

*Can I delete a snapshot of an EBS Volume that is used as the root device of a registered AMI?* * Only using the AWS API. * Yes. * Only via the Command Line. * No.

*No*

*To save administration headaches, a consultant advises that you leave all security groups in web facing subnets open on port 22 to 0.0.0.0/0 CIDR. That way, you can connect wherever you are in the world. Is this a good security design?* * Yes * No

*No* 0.0.0.0/0 would allow ANYONE from ANYWHERE to connect to your instances. This is generally a bad plan. The phrase 'web facing subnets' does not mean just web servers; it would include any instances in that subnet, some of which you may not want strangers attacking. You would only allow 0.0.0.0/0 on port 80 or 443 to connect to your public-facing web servers, or preferably only to an ELB. Good security starts by limiting public access to only what the customer needs.

*Can I move a reserved instance from one region to another?* * It depends on the region. * Only in the US. * Yes. * No.

*No* Depending on your type of RI, you can modify the AZ, scope, network platform, or instance size (within the same instance type), but not the Region. In some circumstances you can sell RIs, but only if you have a US bank account.

*You work for a major news network in Europe. They have just released a new mobile app that allows users to post their photos of newsworthy events in real time. Your organization expects this app to grow very quickly, essentially doubling its user base each month. The app uses S3 to store the images, and you are expecting sudden and sizable increases in traffic to S3 when a major news event takes place (as users will be uploading large amounts of content.) You need to keep your storage costs to a minimum, and it does not matter if some uploaded material is lost. Which S3 storage class should you consider?*

*One Zone-Infrequent Access* The key driver here is cost, so an awareness of cost is necessary to answer this. ----------------------------------- Full S3 is quite expensive at around $0.023 per GB for the lowest band. S3 Standard-IA is $0.0125 per GB, S3 One Zone-IA is $0.01 per GB, and legacy S3-RRS is around $0.024 per GB for the lowest band. Of the offered solutions, S3 One Zone-IA is the cheapest suitable option. Glacier cannot be considered as it is not intended for direct access, although it comes in at around $0.004 per GB. Of course you spotted that RRS is being deprecated, and there is no such thing as S3 - Provisioned IOPS. In this case One Zone-IA should be fine, as users will 'post' material but only the organization will access it, and only to find relevant material. The question states that there is no concern if some material is lost.
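The cost comparison above can be checked with simple arithmetic. The per-GB figures are the illustrative list prices quoted in the explanation (lowest band, subject to change), and the 10 TB workload size is a hypothetical example.

```python
# Approximate monthly storage cost per class, using the per-GB prices
# quoted above (illustrative list prices, lowest band, subject to change).
prices_per_gb = {
    "S3 Standard":    0.023,
    "S3 Standard-IA": 0.0125,
    "S3 One Zone-IA": 0.01,
    "S3 RRS":         0.024,
}

size_gb = 10_000  # a hypothetical 10 TB of user uploads

costs = {cls: round(p * size_gb, 2) for cls, p in prices_per_gb.items()}
cheapest = min(costs, key=costs.get)
print(costs)      # monthly cost per class in USD
print(cheapest)   # -> S3 One Zone-IA
```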

*Will an Amazon EBS root volume persist independently from the life of the terminated EC2 instance to which it was previously attached? In other words, if I terminated an EC2 instance, would that EBS root volume persist?* * It depends on the region in which the EC2 instance is provisioned. * Yes. * Only if I specify (using either the AWS Console or the CLI) that it should do so. * No.

*Only if I specify (using either the AWS Console or the CLI) that it should do so* *You can control whether an EBS root volume is deleted when its associated instance is terminated.* The default delete-on-termination behaviour depends on whether the volume is a root volume, or an additional volume. By default, the DeleteOnTermination attribute for root volumes is set to 'true.' However, this attribute may be changed at launch by using either the AWS Console or the command line. For an instance that is already running, the DeleteOnTermination attribute must be changed using the CLI.
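As a sketch of how that launch-time setting looks, the block-device mapping below shows the shape of the JSON you could pass to `aws ec2 run-instances --block-device-mappings`. The device name and volume size are illustrative values, not requirements.

```python
import json

# Block-device-mapping sketch: keep the root volume after termination by
# overriding DeleteOnTermination at launch (device name/size illustrative).
mapping = [
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "VolumeSize": 8,
            "VolumeType": "gp2",
            # For root volumes the default is True; override it here.
            "DeleteOnTermination": False,
        },
    }
]

print(json.dumps(mapping, indent=2))
```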

*Which of the following is not a component of IAM?* * Organizational Units * Groups * Users * Roles

*Organizational Units*

*A __________ is a document that provides a formal statement of one or more permissions.* * Policy * Role * User * Group

*Policy*

*Which of the below are database services from AWS?* (Choose 2) * RDS * S3 * DynamoDB * EC2

*RDS* *DynamoDB* RDS is a service for relational databases provided by AWS. DynamoDB is AWS' fast, flexible NoSQL database service. ----------------------------------- S3 provides the ability to store files in the cloud and is not suitable for databases, while EC2 is part of the compute family of services.

*S3 has what consistency model for PUTs of new objects?* * Write After Read Consistency * Eventual Consistency * Read After Write Consistency * Usual Consistency

*Read After Write Consistency*

*What is each unique location in the world where AWS has a cluster of data centers called?* * Region * Availability zone * Point of presence * Content delivery network

*Region* ----------------------------------- AZs are inside a region, so they are not unique. POP and content delivery both serve the purpose of speeding up distribution.

*You run a popular photo sharing website that depends on S3 to store content. Paid advertising is your primary source of revenue. However, you have discovered that other websites are linking directly to the images in your buckets, not to the HTML pages that serve the content. This means that people are not seeing the paid advertising, and you are paying AWS unnecessarily to serve content directly from S3. How might you resolve this issue?* * Use security groups to blacklist the IP addresses of the sites that are hotlinking your images. * Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates.

*Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates.*

You need to know both the private IP address and public IP address of your EC2 instance. You should ________. * Use the following command: AWS EC2 DisplayIP. * Retrieve the instance User Data from http://169.254.169.254/latest/meta-data/. * Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/. * Run IPCONFIG (Windows) or IFCONFIG (Linux).

*Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/.* Instance Metadata and User Data can be retrieved from within the instance via a special URL. Similar information can be extracted by using the API via the CLI or an SDK.
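The metadata paths for the two IP addresses in the question can be built as below. `local-ipv4` and `public-ipv4` are real metadata categories; the actual HTTP request only works from inside an EC2 instance, so it is shown commented out rather than executed.

```python
# The instance metadata service lives at a fixed link-local address.
BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(category: str) -> str:
    """Build the URL for one metadata category."""
    return BASE + category

private_ip_url = metadata_url("local-ipv4")   # the instance's private IP
public_ip_url = metadata_url("public-ipv4")   # the instance's public IP

# From within an EC2 instance you could fetch a value with, e.g.:
#   import urllib.request
#   print(urllib.request.urlopen(private_ip_url).read().decode())

print(private_ip_url)
print(public_ip_url)
```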

*You run a meme creation website that stores the original images in S3 and each meme's meta data in DynamoDB. You need to decide upon a low-cost storage option for the memes, themselves. If a meme object is unavailable or lost, a Lambda function will automatically recreate it using the original file from S3 and the metadata from DynamoDB. Which storage solution should you use to store the non-critical, easily reproducible memes in the most cost effective way?* * S3 - 1Zone-IA * S3 - RRS * S3 -

*S3 - 1Zone-IA* S3 One Zone-IA is the recommended storage class when you want cheaper storage for infrequently accessed objects. It has the same durability but less availability. There can be cost implications if you use it frequently or use it for short-lived storage. ----------------------------------- Glacier is cheaper, but has a long retrieval time. RRS has effectively been deprecated; it still exists but is not a service that AWS wants to sell anymore.

*You work for a busy digital marketing company who currently store their data on premise. They are looking to migrate to AWS S3 and to store their data in buckets. Each bucket will be named after their individual customers, followed by a random series of letters and numbers. Once written to S3 the data is rarely changed, as it has already been sent to the end customer for them to use as they see fit. However on some occasions, customers may need certain files updated quickly, and this may be for any file in any bucket. Which S3 storage class should you recommend?*

*S3 - IA* The need for immediate access is an important requirement, along with cost. Glacier has a long recovery time at a low cost or a shorter recovery time at a higher cost, and 1Zone-IA has a lower availability level, which means it may not be available when needed.

*What are the different storage classes that Amazon S3 offers?* (Choose all that apply.) * S3 Standard * S3 Global * S3 CloudFront * S3 US East * S3 IA

*S3 Standard* *S3 IA* S3 Global is a region and not a storage class. Amazon CloudFront is a CDN and not a storage class. US East is a region and not a storage class.

*Which of the below are storage services in AWS?* (Choose 2) * S3 * EFS * EC2 * VPC

*S3* *EFS* S3 and EFS both provide the ability to store files in the cloud. ------------------------- EC2 provides compute and is often augmented with other storage services. VPC is a networking service.

*In addition to choosing the correct EBS volume type for your specific task, what else can be done to increase the performance of your volume?* (Choose 3) * Schedule snapshots of HDD based volumes for periods of low use * Stripe volumes together in a RAID 0 configuration. * Never use HDD volumes, always ensure that SSDs are used * Ensure that your EC2 instances are types that can be optimised for use with EBS

*Schedule snapshots of HDD based volumes for periods of low use* *Stripe volumes together in a RAID 0 configuration.* *Ensure that your EC2 instances are types that can be optimised for use with EBS* There are a number of ways you can optimise performance beyond choosing the correct EBS type. One of the easiest options is to drive more I/O throughput than you can provision for a single EBS volume by striping multiple volumes together in RAID 0. You can join multiple gp2, io1, st1, or sc1 volumes in a RAID 0 configuration to use the combined bandwidth available to the instance. You can also choose an EC2 instance type that supports EBS optimisation, which ensures that other network traffic cannot contend with traffic between your instance and your EBS volumes. The final option is to manage your snapshot times, and this applies only to HDD based EBS volumes: when you create a snapshot of a Throughput Optimized HDD (st1) or Cold HDD (sc1) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.

*You want to move all the files older than a month to S3 IA. What is the best way of doing this?* * Copy all the files using the S3 copy command * Set up a lifecycle rule to move all the files to S3 IA after a month * Download the files after a month and re-upload them to another S3 bucket with IA * Copy all the files to Amazon Glacier and from Amazon Glacier copy them to S3 IA

*Set up a lifecycle rule to move all the files to S3 IA after a month.* Copying all the files using the S3 copy command would be a painful activity if you have millions of objects. Manually downloading and re-uploading the files when a lifecycle rule can do the same thing automatically makes no sense and wastes a lot of bandwidth and manpower. Amazon Glacier is used mainly for archival storage; you should not copy anything into Amazon Glacier unless you want to archive the files.
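The lifecycle rule described above can be sketched as the configuration document that the S3 API accepts. This is a minimal sketch: the rule ID is made up, and applying it would require boto3 and credentials (shown commented out against a hypothetical bucket name).

```python
# Sketch: an S3 lifecycle rule that transitions objects to STANDARD_IA
# after 30 days. The rule ID and bucket name are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-to-ia-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

# Applying it would look something like:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-example-bucket",
#       LifecycleConfiguration=lifecycle_config,
#   )
```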

*You have a client who is considering a move to AWS. In establishing a new account, what is the first thing the company should do?* * Set up an account using Cloud Search. * Set up an account using their company email address. * Set up an account via SQS (Simple Queue Service). * Set up an account via SNS (Simple Notification Service)

*Set up an account using their company email address.*

*What does S3 stand for?* * Simplified Serial Sequence * Simple SQL Service * Simple Storage Service * Straight Storage Service

*Simple Storage Service*

*You have developed a new web application in the US-West-2 Region that requires six Amazon Elastic Compute Cloud (EC2) instances to be running at all times. US-West-2 comprises three Availability Zones (us-west-2a, us-west-2b, and us-west-2c). You need 100 percent fault tolerance: should any single Availability Zone in us-west-2 become unavailable, the application must continue to run. How would you make sure 6 servers are ALWAYS available?* NOTE: each answer has 2 possible deployment configurat

*Solution 1: us-west-2a with six EC2 instances, us-west-2b with six EC2 instances, and us-west-2c with no EC2 instances.* *Solution 2: us-west-2a with three EC2 instances, us-west-2b with three EC2 instances, and us-west-2c with three EC2 instances.* You need to work through each case to find which will provide you with the required number of running instances even if one AZ is lost. Hint: always assume that the AZ you lose is the one with the most instances. Remember that the client has stipulated that they MUST have 100% fault tolerance.

*What is the main purpose of Amazon Glacier?* (Choose all that apply.) * Storing hot, frequently used data * Storing archival data * Storing historical or infrequently accessed data * Storing the static content of a web site * Creating a cross-region replication bucket for Amazon S3

*Storing archival data* *Storing historical or infrequently accessed data* Hot, frequently used data should be stored in Amazon S3; you can also use Amazon CloudFront to cache frequently used data. ----------------------------------- Amazon Glacier is used to store archival, historical, or infrequently accessed data, and you can create lifecycle rules to move such data from Amazon S3 to Amazon Glacier. The static content of a web site can be served by Amazon CloudFront in conjunction with Amazon S3. You can't use Amazon Glacier as a cross-region replication bucket for Amazon S3; however, you can use S3 IA or S3 RRS in addition to S3 Standard as a replication target for CRR.

*To help you manage your Amazon EC2 instances, you can assign your own metadata in the form of ________.* * Certificates * Notes * Wildcards * Tags

*Tags* Tagging is a key part of managing an environment. Even in a lab, it is easy to lose track of the purpose of a resource, and tricky to determine why it was created and whether it is still needed. This can rapidly translate into lost time and lost money.

*Which of the below are factors that have helped make public cloud so powerful?* (Choose 2) * Traditional methods that are used for on-premise infrastructure work just as well in cloud * No special skills required * The ability to try out new ideas and experiment without upfront commitment * Not having to deal with the collateral damage of failed experiments

*The ability to try out new ideas and experiment without upfront commitment.* *Not having to deal with the collateral damage of failed experiments.* Public cloud allows organisations to try out new ideas and new approaches, and to experiment with little upfront commitment. ----------------------------------- If it doesn't work out, organisations can terminate the resources and stop paying for them.

*Amazon S3 provides 99.999999999 percent durability. Which of the following are true statements?* (Choose all that apply.) * The data is mirrored across multiple AZs within a region. * The data is mirrored across multiple regions to provide the durability SLA. * The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities. * The data is regularly backed up to AWS Snowball to provide the durability SLA. * The data is automatically mirrored to Amazon Glacier to achie

*The data is mirrored across multiple AZs within a region.* *The data in Amazon S3 Standard is designed to handle the concurrent loss of two facilities.* Once you have created an S3 bucket in a particular region, the data always stays there unless you manually move it to a different region. Amazon does not back up data residing in S3 anywhere else, since the data is automatically mirrored across multiple facilities; however, customers can replicate the data to a different region for additional safety. AWS Snowball is used to migrate on-premises data to S3. Amazon Glacier is the archival storage tier of S3, and an automatic mirror of regular Amazon S3 data does not make sense; however, you can write lifecycle rules to move historical data from Amazon S3 to Amazon Glacier.

*I shut down my EC2 instance, and when I started it, I lost all my data. What could be the reason for this?* * The data was stored in the local instance store. * The data was stored in EBS but was not backed up to S3. * I used an HDD-backed EBS volume instead of an SSD-backed EBS volume. * I forgot to take a snapshot of the instance store.

*The data was stored in the local instance store.* The only possible reason is that the data was stored in a local instance store that is not persisted once the server is shut down. If the data stays in EBS, then it does not matter if you have taken the backup or not; the data will always persist. Similarly, it does not matter if it is an HDD- or SSD-backed EBS volume. You can't take a snapshot of the instance store.

*The data across the EBS volume is mirrored across which of the following?* * Multiple AZs * Multiple regions * The same AZ * EFS volumes mounted to EC2 instances

*The same AZ* Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations in the same AZ. Amazon EBS replication is stored within the same availability zone, not across multiple zones.

*To set up cross-region replication, which statements are true?* (Choose all that apply.) * The source and target bucket should be in the same region. * The source and target bucket should be in different regions. * You must choose different storage classes across different regions. * You need to enable versioning and must have an IAM policy in place to replicate. * You must have at least ten files in a bucket.

*The source and target bucket should be in different regions.* *You need to enable versioning and must have an IAM policy in place to replicate.* Cross-region replication can't be used to replicate objects within the same region. However, you can use the S3 copy command, or copy the files from the console, to move objects from one bucket to another in the same region. You can choose a different storage class for CRR, but this is not mandatory; you can use the same storage class as the source bucket as well. There is no minimum number of files required to enable cross-region replication; you can even use CRR when there is only one file in an Amazon S3 bucket.
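The two requirements called out above (an IAM role to replicate with, plus versioned source and destination buckets) show up directly in the replication configuration document. Below is a minimal sketch of that document; the bucket names, rule ID, and role ARN are all hypothetical, and applying it (commented out) assumes versioning is already enabled on both buckets.

```python
# Sketch: a cross-region replication configuration. The role ARN, rule
# ID, and bucket ARNs are hypothetical; versioning must already be
# enabled on both the source and destination buckets.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/example-replication-role",
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Prefix": "",  # replicate all objects
            "Destination": {
                # A bucket in another region; storage class defaults to
                # matching the source unless you override it here.
                "Bucket": "arn:aws:s3:::example-destination-bucket",
            },
        }
    ],
}

# Applying it would look something like:
#   import boto3
#   boto3.client("s3").put_bucket_replication(
#       Bucket="example-source-bucket",
#       ReplicationConfiguration=replication_config,
#   )
```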

*Which of the following provide the lowest cost EBS options?* (Choose 2) * Throughput Optimized (st1) * Provisioned IOPS (io1) * General Purpose (gp2) * Cold (sc1)

*Throughput Optimized (st1), Cold (sc1)* Of all the EBS types, both current and previous generation, HDD based volumes will always be less expensive than SSD types. Therefore, of the options available in the question, the Cold (sc1) and Throughput Optimized (st1) types are HDD based and will be the lowest cost options.

*I can change the permissions to a role, even if that role is already assigned to an existing EC2 instance, and these changes will take effect immediately.* * False * True

*True*

*Using SAML (Security Assertion Markup Language 2.0), you can give your federated users single sign-on (SSO) access to the AWS Management Console.* * True * False

*True*

*You can add multiple volumes to an EC2 instance and then create your own RAID 5/RAID 10/RAID 0 configurations using those volumes.* * True * False

*True*

*How much data can you store on S3?* * 1 petabyte per account * 1 exabyte per account * 1 petabyte per region * 1 exabyte per region * Unlimited

*Unlimited* Since the capacity of S3 is unlimited, you can store as much data as you want there.

*You have been tasked with moving petabytes of data to the AWS cloud. What is the most efficient way of doing this?* * Upload them to Amazon S3 * Use AWS Snowball * Use AWS Server Migration Service * Use AWS Database Migration Service

*Use AWS Snowball* ----------------------------------- You can also upload data to Amazon S3, but if you have petabytes of data and want to upload it to Amazon S3, it is going to take a lot of time. The quickest way would be to leverage AWS Snowball. AWS Server Migration Service is an agentless service that helps coordinate, automate, schedule, and track large-scale server migrations, whereas AWS Database Migration Service is used to migrate the data of the relational database or data warehouse.

*What is the best way to get better performance for storing several files in S3?* * Create a separate folder for each file * Create separate buckets in different regions * Use a partitioning strategy for storing the files * Store no more than 100 files per bucket

*Use a partitioning strategy for storing the files* Creating a separate folder for each file does not improve performance; what if you need to store millions of files in these separate folders? Similarly, creating separate buckets in different regions does not improve performance. There is no rule about storing only 100 files per bucket.

*What is the best way to delete multiple objects from S3?* * Delete the files manually using a console * Use multi-object delete * Create a policy to delete multiple files * Delete all the S3 buckets to delete the files

*Use multi-object delete* Manually deleting the files from the console would take a lot of time. You can't create a policy to delete multiple files. Deleting buckets in order to delete files is not a recommended option; what if you still need some files from the bucket?
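A multi-object delete is a single request carrying up to 1,000 keys. Below is a minimal sketch of that request body; the key names and bucket are hypothetical, and sending the request (commented out) would require boto3 and credentials.

```python
# Sketch: the request body for an S3 multi-object delete -- up to 1,000
# keys per request. Key and bucket names are hypothetical.
keys_to_delete = ["memes/cat-01.png", "memes/cat-02.png", "memes/dog-01.png"]

delete_request = {
    "Objects": [{"Key": key} for key in keys_to_delete],
    "Quiet": True,  # report only errors, not every deleted key
}

# Sending it would look something like:
#   import boto3
#   boto3.client("s3").delete_objects(
#       Bucket="example-bucket",
#       Delete=delete_request,
#   )
```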

*Which of the following are a part of AWS' Network and Content Delivery services?* (Choose 2) * VPC * RDS * EC2 * CloudFront

*VPC* *CloudFront* VPC allows you to provision a logically isolated section of AWS where you can launch AWS resources in a virtual network. CloudFront is a fast, highly secure, and programmable content delivery network (CDN). ------------------------- EC2 provides compute resources, while RDS is Amazon's Relational Database Service.

*What is an Amazon VPC?* * Virtual Private Compute * Virtual Public Compute * Virtual Private Cloud * Virtual Public Cloud

*Virtual Private Cloud* VPC stands for Virtual Private Cloud.

*When you create a new user, that user ________.* * Will only be able to log in to the console in the region in which that user was created. * Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs. * Will be able to log in to the console only after multi-factor authentication is enabled on their account. * Will be able to log in to the console anywhere in the world, using their access key ID and secret access key.

*Will be able to interact with AWS using their access key ID and secret access key using the API, CLI, or the AWS SDKs.* ----------------------------------- To access the console you use an account and password combination. To access AWS programmatically you use a key and secret key combination.

*What is the underlying Hypervisor for EC2?* (Choose 2) * Xen * Nitro * Hyper-V * ESX * OVM

*Xen* *Nitro* Until very recently AWS exclusively used the Xen hypervisor; they have now started making use of the Nitro hypervisor as well.

*Can a placement group be deployed across multiple Availability Zones?* * Yes. * Only in Us-East-1. * No. * Yes, but only using the AWS API.

*Yes* Technically, these are called Spread or Partition placement groups. You can now have placement groups that span different hardware and multiple AZs.

*Is it possible to perform actions on an existing Amazon EBS Snapshot?* * It depends on the region. * EBS does not have snapshot functionality. * Yes, through the AWS APIs, CLI, and AWS Console. * No.

*Yes, through the AWS APIs, CLI, and AWS Console.*

*You are a security administrator working for a hotel chain. You have a new member of staff who has started as a systems administrator, and she will need full access to the AWS console. You have created the user account and generated the access key id and the secret access key. You have moved this user into the group where the other administrators are, and you have provided the new user with their secret access key and their access key id. However, when she tries to log in to the AWS console, sh

*You cannot log in to the AWS console using the Access Key ID / Secret Access Key pair. Instead, you must generate a password for the user, and supply the user with this password and your organization's unique AWS console login URL.*

*You are a solutions architect working for a large engineering company who are moving from a legacy infrastructure to AWS. You have configured the company's first AWS account and you have set up IAM. Your company is based in Andorra, but there will be a small subsidiary operating out of South Korea, so that office will need its own AWS environment. Which of the following statements is true?* * You will then need to configure Users and Policy Documents for each region respectively. * You will n

*You will need to configure Users and Policy Documents only once, as these are applied globally.*

*The use of a cluster placement group is ideal _______* * When you need to distribute content on a CDN network. * Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone. * When you need to deploy EC2 instances that require high disk IO. * Your fleet of EC2 Instances requires low latency and high network throughput across multiple availability zones.

*Your fleet of EC2 instances requires high network throughput and low latency within a single availability zone.* Cluster placement groups are primarily about keeping your compute resources within one network hop of each other on high speed rack switches. This only helps when you have compute loads whose network traffic is either very high or very sensitive to latency.

*Which AWS CLI command should I use to create a snapshot of an EBS volume?* * aws ec2 deploy-snapshot * aws ec2 create-snapshot * aws ec2 new-snapshot * aws ec2 fresh-snapshot

*aws ec2 create-snapshot*

*The difference between S3 and EBS is that EBS is object based whereas S3 is block based.* * true * false

*false*

*To retrieve instance metadata or user data you will need to use the following IP Address:* * http://169.254.169.254 * http://10.0.0.1 * http://127.0.0.1 * http://192.168.0.254

*http://169.254.169.254*

*You have been asked by your company to create an S3 bucket with the name "acloudguru1234" in the EU West region. What would the URL for this bucket be?* * https://s3-eu-west-1.amazonaws.com/acloudguru1234 * https://s3.acloudguru1234.amazonaws.com/eu-west-1 * https://s3-us-east-1.amazonaws.com/acloudguru1234 * https://s3-acloudguru1234.amazonaws.com/

*https://s3-eu-west-1.amazonaws.com/acloudguru1234*
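The correct answer follows the legacy path-style URL pattern, which can be sketched as a small helper. This is illustrative only; note that newer S3 buckets are typically addressed with virtual-hosted-style URLs of the form `https://bucket-name.s3.region.amazonaws.com` instead.

```python
# Sketch: composing the legacy path-style S3 URL used in the answer
# above (https://s3-<region>.amazonaws.com/<bucket>).
def path_style_url(region: str, bucket: str) -> str:
    """Build a legacy path-style S3 URL for a bucket in a region."""
    return f"https://s3-{region}.amazonaws.com/{bucket}"

url = path_style_url("eu-west-1", "acloudguru1234")
# → https://s3-eu-west-1.amazonaws.com/acloudguru1234
```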

*S3 has eventual consistency for which HTTP Methods?* * UPDATES and DELETES * overwrite PUTS and DELETES * PUTS of new Objects and DELETES * PUTS of new objects and UPDATES

*overwrite PUTS and DELETES*

