AWS S3 Quiz


I shut down my EC2 instance, and when I started it, I lost all my data. What could be the reason for this? A.) The data was stored in the local instance store B.) The data was stored in EBS but was not backed up by S3 C.) I used an HDD-backed EBS volume instead of an SSD-backed EBS volume D.) I forgot to take a snapshot of the instance store

A

One of your users is trying to upload a 7.5GB file to S3; however, they keep getting the following error message: "Your proposed upload exceeds the maximum allowed object size". What is a possible solution for this? A.) Design your application to use the multi-part upload API for all objects. B.) Design your application to use the large object upload API for this object. C.) Raise a ticket with AWS to increase your maximum object size. D.) Log in to the S3 console, click on the bucket and then click properties. You can then increase your maximum object size to 1TB.

A
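
For context, a single PUT is limited to 5 GB, while multipart upload supports objects up to 5 TB. A minimal boto3 sketch (bucket and file names are hypothetical) that forces multipart uploads via the transfer manager:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical bucket and file names used for illustration.
BUCKET = "example-bucket"
LOCAL_FILE = "large-video.mp4"

s3 = boto3.client("s3")

# Use multipart uploads for anything over 100 MB, in 25 MB parts.
# upload_file handles part splitting, retries, and the final "complete" call.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=25 * 1024 * 1024,
)

s3.upload_file(LOCAL_FILE, BUCKET, "uploads/large-video.mp4", Config=config)
```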

What is AWS Storage Gateway? A.) It's a physical or virtual appliance that can be used to cache locally at a customer's site B.) It allows large scale import/exports into the AWS cloud without the use of an internet connection. C.) It allows a direct MPLS connection into AWS. D.) None of the above.

A

What is the availability of objects stored in S3? A.) 99.99% B.) 99% C.) 99.90% D.) 100%

A

You work for a major news network in Europe. They have just released a new app which allows users to report on events as and when they happen using their mobile phones. Users are able to upload pictures from the app and other users will then be able to view them. Your organization expects this app to grow very quickly, essentially doubling its user base every month. The app uses S3 to store the media, and you are expecting sudden and large increases in traffic to S3 when a major news event takes place (as people will be uploading content in huge numbers). You need to keep your storage costs to a minimum, however, and it does not matter if some objects are lost. Which storage class should you use to keep costs as low as possible? A.) S3 - OneZone-IA B.) S3 - IA C.) S3 D.) Glacier

A
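
S3 One Zone-IA stores data in a single AZ, which is why it is the cheapest of these options when occasional object loss is acceptable. A small boto3 sketch (bucket, key, and file names are hypothetical) of how the storage class might be set at upload time:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key/file names; the StorageClass argument is what matters here.
with open("event-photo.jpg", "rb") as f:
    s3.put_object(
        Bucket="example-news-media",
        Key="uploads/event-photo.jpg",
        Body=f,
        StorageClass="ONEZONE_IA",  # single-AZ, lower cost, acceptable if loss is tolerable
    )
```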

How many S3 buckets can you have per account by default? A.) 50 B.) 100 C.) 10 D.) 20

B

I am running an Oracle database that is very I/O intensive. My database admin needs a minimum of 3,600 IOPS. If my system is not able to meet that number, my application won't perform optimally. How can I make sure my application always performs optimally? A.) Use EFS since it automatically handles the performance B.) Use Provisioned IOPS SSD to meet the IOPS requirement C.) Put your database files on an SSD-backed EBS volume and your other files on an HDD-backed EBS volume D.) Use a general-purpose SSD under a terabyte that has burst capabilities

B
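
A minimal boto3 sketch of provisioning an io1 volume with the required IOPS; the Availability Zone and size are illustrative assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical AZ and size; Iops=3600 matches the requirement in the question.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB
    VolumeType="io1",      # Provisioned IOPS SSD
    Iops=3600,             # guaranteed IOPS, not burst-based like gp2
)
print(volume["VolumeId"])
```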

S3 has eventual consistency for which HTTP Methods? A.) PUTS of new objects and DELETES B.) Overwrite PUTS and DELETES C.) PUTS of new objects and UPDATES D.) UPDATES and DELETES

B

The difference between S3 and EBS is that EBS is object based whereas S3 is block based. A.) True B.) False

B

To set up cross-region replication, which statements are true? 1. The source and target buckets should be in the same region 2. The source and target buckets should be in different regions 3. You must choose different storage classes across different regions 4. You need to enable versioning and must have an IAM policy in place to replicate 5. You must have at least 10 files in a bucket A.) 1, 3 B.) 2, 4 C.) 2, 4, 5 D.) 1, 3, 5

B
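
A rough boto3 sketch of the two prerequisites the correct answer names, versioning and a replication role; the bucket names and ARNs are hypothetical, and versioning must also be enabled on the destination bucket:

```python
import boto3

s3 = boto3.client("s3")

SOURCE = "example-source-bucket"                                  # hypothetical
DEST = "arn:aws:s3:::example-dest-bucket"                         # hypothetical
ROLE = "arn:aws:iam::123456789012:role/example-replication-role"  # hypothetical

# Versioning must be enabled on BOTH buckets before replication can be configured.
s3.put_bucket_versioning(
    Bucket=SOURCE,
    VersioningConfiguration={"Status": "Enabled"},
)

# The role must grant S3 permission to read from the source and write replicas to the destination.
s3.put_bucket_replication(
    Bucket=SOURCE,
    ReplicationConfiguration={
        "Role": ROLE,
        "Rules": [
            {"ID": "replicate-all", "Prefix": "", "Status": "Enabled",
             "Destination": {"Bucket": DEST}},
        ],
    },
)
```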

What is the best way to delete multiple objects from S3? A.) Delete all files manually using the console B.) Use multi-object delete C.) Create a policy to delete multiple files D.) Delete all the S3 buckets to delete the files

B
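
A minimal boto3 sketch of a multi-object delete (bucket and keys are hypothetical); a single request can remove up to 1,000 keys:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"   # hypothetical

# Multi-object delete: one request instead of one DELETE per key.
response = s3.delete_objects(
    Bucket=BUCKET,
    Delete={
        "Objects": [
            {"Key": "logs/2023-01-01.log"},
            {"Key": "logs/2023-01-02.log"},
            {"Key": "logs/2023-01-03.log"},
        ],
        "Quiet": True,  # only report errors, not every deleted key
    },
)
print(response.get("Errors", []))
```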

You run a popular photo sharing website that depends on S3 to store content. You generate revenue from your website via paid-for adverts; however, you have discovered that other websites are linking directly to the images on your site, and not to the HTML pages that serve the content. This means that people are not seeing your adverts, and every time a request is made to S3 to serve an image it costs your business money. How could you resolve this issue? A.) Use CloudFront to serve the static content. B.) Remove the ability for images to be served publicly to the site and then use signed URLs with expiry dates. C.) Use security groups to blacklist the IP addresses of the sites that do this. D.) Use EBS rather than S3 to store the content.

B
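
A small boto3 sketch of generating a signed (presigned) URL with an expiry; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key; the URL grants temporary access and expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-photo-site", "Key": "images/cat.jpg"},
    ExpiresIn=3600,  # seconds
)
print(url)
```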

You want to move all the files older than a month to S3 IA. What is the best way of doing this? A.) Copy all the files using the S3 copy command B.) Set up a lifecycle rule to move all the files to S3 IA after a month C.) Download the files after a month and re-upload them to another S3 bucket with IA D.) Copy all the files to Amazon Glacier and from Glacier copy them to S3 IA

B
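
A sketch of such a lifecycle rule in boto3 (bucket name is hypothetical), transitioning objects to STANDARD_IA 30 days after creation:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name; the rule applies to every object in the bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-ia-after-30-days",
                "Filter": {"Prefix": ""},   # empty prefix = whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
            },
        ],
    },
)
```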

You want to run a MapReduce job (part of a big data workload) for a noncritical task. Your main goal is to process it in the most cost-effective way. The task is throughput sensitive but not at all mission critical and can take a long time. Which type of storage would you choose? A.) Throughput Optimized HDD (st1) B.) Cold HDD (sc1) C.) General-Purpose SSD (gp2) D.) Provisioned IOPS (io1)

B

You work for a busy digital marketing company that currently stores its data on premises. They are looking to migrate to AWS S3 and to store their data in buckets. Each bucket will be named after their individual customers, followed by a random series of letters and numbers. Once written to S3, the data is rarely changed, as it has already been sent to the end customer for them to use as they see fit. However, on some occasions, customers may need certain files updated quickly, and this may be for work that was done months or even years ago. You would need to be able to access this data immediately to make changes in that case, but you must also keep your storage costs extremely low. The data is not easily reproducible if lost. Which S3 storage class should you choose to minimize costs while still allowing immediate retrieval? A.) S3 B.) S3 - IA (Infrequently Accessed Storage) C.) S3 - OneZone-IA D.) Glacier

B

Your application needs a shared file system that can be accessed from multiple EC2 instances across different AZs. How would you provision it? A.) Mount the EBS volume across multiple EC2 instances B.) Use an EFS instance and mount the EFS across multiple EC2 instances C.) Access S3 from multiple EC2 instances D.) Use EBS with Provisioned IOPS

B

You are a solutions architect who works with a large digital media company. The company has decided that they want to operate within the Japanese region and they need a bucket called "testbucket" set up immediately to test their web application on. You log in to the AWS console and try to create this bucket in the Japanese region; however, you are told that the bucket name is already taken. What should you do to resolve this? A.) Change your region to Korea and then create the bucket "testbucket". B.) Raise a ticket with AWS and ask them to release the name "testbucket" to you. C.) Bucket names are global, not regional. This is a popular bucket name and is already taken. You should choose another bucket name. D.) Run a WHOIS request on the bucket name and get the registered owner's email address. Contact the owner and ask if you can purchase the rights to the bucket.

C
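
A minimal boto3 sketch illustrating why this fails: bucket names are globally unique, so creation raises BucketAlreadyExists even in a region where the name has never been used by you. The region choice here is illustrative:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="ap-northeast-1")  # Tokyo, for example

# Bucket names are global, so creation can fail regardless of the chosen region.
try:
    s3.create_bucket(
        Bucket="testbucket",
        CreateBucketConfiguration={"LocationConstraint": "ap-northeast-1"},
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code in ("BucketAlreadyExists", "BucketAlreadyOwnedByYou"):
        print("Name is taken globally - pick a different bucket name.")
    else:
        raise
```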

You have been asked to advise on a scaling concern. The client has an elegant solution that works well. As the information base grows, they use CloudFormation to spin up another stack made up of an S3 bucket and supporting compute instances. The trigger for creating a new stack is when the PUT rate approaches 100 PUTs per second. The problem is that, as the business grows, the number of buckets is growing into the hundreds and will soon be in the thousands. You have been asked what can be done to reduce the number of buckets without changing the basic architecture. A.) Upgrade all buckets to S3 Provisioned IOPS to achieve better performance B.) Refine the key hashing to randomize the name key to achieve the potential of 300 PUTs per second C.) Change the trigger level to around 3,000 as S3 can now accommodate much higher PUT and GET levels D.) Set up multiple accounts so that the per-account hard limit on S3 buckets is avoided

C

You run a meme creation website that frequently generates meme images. The original images are stored in S3 and the metadata about the memes is stored in DynamoDB. You need to store the memes themselves in a low-cost storage solution. If an object is lost, you have created a Lambda function that will automatically recreate the meme using the original file in S3 and the metadata in DynamoDB. Which storage solution should you consider for this non-critical, easily reproducible data, keeping costs as low as possible? A.) S3 B.) S3 - IA (Infrequently Accessed Storage) C.) S3 - OneZone-IA D.) Glacier

C

How much data can you store in S3? A.) 1 petabyte per account B.) 1 exabyte per account C.) 1 petabyte per region D.) Unlimited

D

What are the various ways you can control access to the data stored in S3? 1. By using IAM policy 2. By creating ACLs 3. By encrypting the files in a bucket 4. By making all the files public 5. By creating a separate folder for the secure files A.) 3, 4 B.) 2, 5 C.) 1, 3 D.) 1, 2

D
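
As an illustration of option 1, a hedged boto3 sketch that attaches an IAM-style bucket policy; the account ID, user, and bucket name are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical account ID, user, and bucket; the policy restricts reads to one IAM user.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSingleUserRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-analyst"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-secure-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="example-secure-bucket", Policy=json.dumps(policy))
```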

What is Amazon Glacier? A.) A highly secure firewall designed to keep everything out B.) A tool that allows you to "freeze" an EBS volume C.) It is a tool used to resurrect deleted EC2 snapshots D.) An AWS service designed for long term data archival

D

You have been asked by your company to create an S3 bucket with the name "acloudguru1234" in the EU West region. What would the URL for this bucket be? A.) https://s3-acloudguru1234.amazonaws.com/ B.) https://s3.acloudguru1234.amazonaws.com/eu-west-1 C.) https://s3-us-east-1.amazonaws.com/acloudguru1234 D.) https://s3-eu-west-1.amazonaws.com/acloudguru1234

D

You have uploaded a file to S3. What HTTP code would indicate that the upload was successful? A.) HTTP 404 B.) HTTP 501 C.) HTTP 307 D.) HTTP 200

D
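
A small boto3 sketch (bucket, key, and body are hypothetical) showing where the HTTP status surfaces in the response:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key; boto3 exposes the raw HTTP status in the response metadata.
response = s3.put_object(
    Bucket="example-bucket",
    Key="reports/summary.txt",
    Body=b"hello world",
)

status = response["ResponseMetadata"]["HTTPStatusCode"]
assert status == 200, f"Upload failed with HTTP {status}"
```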

You work for a health insurance company that collects large amounts of documents regarding patients' health records. This data will usually be used only once, when assessing a customer, and will then need to be securely stored for a period of 7 years. In some rare cases you may need to retrieve this data within 24 hours of a claim being lodged. Which storage solution would best suit this scenario? You need to keep your costs as low as possible. A.) S3 B.) S3 - IA (Infrequently Accessed Storage) C.) S3 - OneZone-IA D.) Glacier

D

Amazon S3 provides 99.999999999% durability. Which of the following statements are true? 1. The data is mirrored across multiple AZs within a region 2. The data is mirrored across multiple regions to provide the durability SLA 3. The data in S3 Standard is designed to handle the concurrent loss of two facilities 4. The data is regularly backed up to AWS Snowball to provide the durability SLA 5. The data is automatically mirrored to Amazon Glacier to achieve high availability A.) 1, 3 B.) 1, 5 C.) 2, 4 D.) 2, 3

A

S3 has what consistency model for PUTS of new objects? A.) Usual consistency B.) Eventual consistency C.) Read after write consistency D.) Write after read consistency

C

The data across the EBS volume is mirrored across which of the following? A.) Multiple AZs B.) Multiple regions C.) The same AZ D.) EFS volumes mounted to EC2 instances

C

What does S3 stand for? A.) Simple SQL Service B.) Straight Storage Service C.) Simple Storage Service D.) Simplified Serial Sequence

C

What is the availability of S3-OneZone-IA? A.) 99.99% B.) 100% C.) 99.50% D.) 99.90%

C

What is the best way to get better performance for storing several files in S3? A.) Create a separate folder for each file B.) Create separate buckets in different regions C.) Use a partitioning strategy for storing files D.) Use the formula of keeping a maximum of 100 files in the same bucket

C
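
The partitioning advice here reflects older S3 guidance (current S3 scales request rates per prefix automatically), but a small Python sketch of such a key-partitioning scheme might look like this; the key format is an illustrative assumption:

```python
import hashlib

def partitioned_key(filename: str) -> str:
    """Prepend a short hash prefix so keys spread across S3 partitions.

    Mirrors the legacy prefix-randomization advice the quiz refers to;
    newer S3 scales request rates per prefix automatically.
    """
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{prefix}/{filename}"

print(partitioned_key("2024-06-01-report.csv"))  # e.g. "a3f9/2024-06-01-report.csv"
```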

What is the best way to protect a file in Amazon S3 against accidental delete? A.) Upload the files in multiple buckets so that you can restore from another when a file is deleted B.) Backup the files regularly to a different bucket or in a different region C.) Enable versioning on the S3 bucket D.) Use cross-region replication

C

What is the main purpose of Amazon Glacier? 1. Storing frequently accessed data 2. Storing archival data 3. Storing historical or infrequently accessed data 4. Storing the static contents of a web site 5. Creating a cross-region replication bucket for S3 A.) 4 B.) 1, 4 C.) 2, 3 D.) 1, 2, 4

C

What is the minimum file size that I can store on S3? A.) 1 byte B.) 1KB C.) 0 bytes D.) 1MB

C

