SA Exam - Whiz Lab 4
You are building a system to distribute confidential training videos to employees. Using CloudFront, what method would be used to serve content that is stored in S3, but not publicly accessible from S3 directly?
A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI
B. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user
C. Create an S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resource Name (ARN)
D. Add the CloudFront account security group
A
You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but it is recommended. To require that users access your content through CloudFront URLs, you perform the following tasks:
- Create a special CloudFront user called an origin access identity
- Give the origin access identity permission to read the objects in your bucket
- Remove permission for anyone else to use S3 URLs to read the objects
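The second and third tasks above are typically accomplished with a bucket policy that grants read access only to the OAI. A minimal sketch of such a policy, assuming a hypothetical bucket name and OAI ID (substitute your own values):

```python
import json

# Hypothetical values -- substitute your own bucket name and OAI ID.
BUCKET = "training-videos-bucket"
OAI_ID = "E2EXAMPLEOAIID"

# Bucket policy that lets only the CloudFront origin access identity
# read objects; once any public/other grants are removed, direct S3
# URLs no longer work.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The policy grants only `s3:GetObject`, so the OAI can read videos but not list or modify the bucket.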
Which feature in AWS acts as a firewall that controls the traffic allowed to reach one or more instances?
A. Security group
B. ACL
C. IAM
D. Private IP addresses
A
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to and from its associated instances.
When reviewing the Auto Scaling events, it is noticed that an application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity?
A. Change the scale down CloudWatch metric to a higher threshold
B. Increase the instance type in the launch configuration
C. Increase the base number of Auto Scaling instances for the Auto Scaling group
D. Add provisioned IOPS to the instances
A
If the threshold for scaling down is too low, then the instances will keep scaling down rapidly. Hence it is best to keep an optimal threshold for the metrics defined in CloudWatch.
A customer is looking for a hybrid cloud solution and learns about AWS Storage Gateway. What is the main use case of AWS Storage Gateway?
A. It allows you to integrate on-premises IT environments with cloud storage
B. A direct encrypted connection to Amazon S3
C. It's a backup solution that provides on-premises cloud storage
D. It provides an encrypted SSL endpoint for backups in the cloud
A
The AWS Storage Gateway's software appliance is available for download as a virtual machine image that you install on a host in your data center. Once you've installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create gateway cached volumes, gateway stored volumes, or a gateway virtual tape library, which can be mounted as iSCSI devices by your on-premises applications. There are primarily 2 types of volumes:
1. Gateway cached volumes allow you to utilize S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data
2. Gateway stored volumes store your primary data locally, while asynchronously backing up that data to AWS
What best describes the Recovery Time Objective (RTO)?
A. The time it takes after a disruption to restore operations back to its regular service level
B. Minimal version of your production environment running on AWS
C. Full clone of your production environment
D. Acceptable amount of data loss measured in time
A
The recovery time objective is the targeted duration of time and a service level within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in business continuity.
What type of resource ID is "i-3243243faafdlafjafa" an example of?
A. Instance ID
B. VPC ID
C. Subnet ID
D. Public IP
A
When resources are created, each resource is assigned a unique resource ID. You can use resource IDs to find your resources in the Amazon EC2 console. A resource ID takes the form of a resource identifier followed by a hyphen and a unique combination of letters and numbers.
A company wants to launch EC2 instances on AWS. For a Linux instance, they want to ensure that the Perl language is installed automatically when the instance is launched. With which of the below configurations can you achieve what is required by the customer?
A. User data
B. EC2Config Service
C. IAM roles
D. AWS Config
A
When you configure an instance during creation, you can add custom scripts to the user data section. So in step 3 of creating an instance, in the Advanced Details section, we can enter custom scripts in the User Data section.
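For the Perl requirement above, the user data would be a shell script that runs at first boot. A sketch, assuming an Amazon Linux instance (the package manager and package name may differ on other distributions); when calling the EC2 API directly, the script must be base64 encoded:

```python
import base64

# User data script run by cloud-init at first boot (Amazon Linux assumed).
user_data = """#!/bin/bash
yum update -y
yum install -y perl
"""

# The raw EC2 RunInstances API expects the script base64 encoded
# (SDKs such as boto3 handle this encoding for you).
encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
print(encoded)
```

The console's User Data field accepts the plain script; only direct API calls need the encoded form.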
A customer has a single 3 TB volume on premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained by their local storage capacity and wants an off-site backup of this data, while maintaining low latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer's requirements?
A. Gateway cached volumes with snapshots scheduled to S3
B. Gateway stored volumes with snapshots scheduled to S3
C. Gateway virtual tape library with snapshots to S3
D. Gateway virtual tape library with snapshots to Amazon Glacier
A
Gateway cached volumes let you use S3 as your primary data storage while retaining frequently accessed data locally in your storage. Gateway cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.
Which of the following are true regarding encrypted EBS volumes? Choose 2:
A. Supported on all EBS volume types
B. Snapshots are automatically encrypted
C. Existing volumes can be encrypted
D. Shared volumes can be encrypted
A and B
If we create a snapshot from a volume that is encrypted, the snapshot will automatically be encrypted as well.
Your company has moved a legacy application from an on-premises data center to the cloud. The legacy application requires a static IP address hard coded into the backend, which prevents you from deploying the application with high availability and fault tolerance using the ELB. Which steps would you take to apply high availability and fault tolerance to this application? Select two:
A. Write a custom script that pings the health of the instance, and, if the instance stops responding, switches the Elastic IP address to a standby instance
B. Ensure that the instance it is using has an Elastic IP address assigned to it
C. Do not migrate the application to the cloud until it can be converted to work with the ELB and Auto Scaling
D. Create an AMI of the instance and launch it using Auto Scaling, which will deploy the instance again if it is unhealthy
A and B
The best option is to configure an Elastic IP that can be switched between a primary and a failover instance.
As an AWS administrator, you are trying to convince a team to use RDS read replicas. What are two benefits of using read replicas? Choose 2:
A. Creates elasticity in RDS
B. Caches the data at edge locations to reduce network latency
C. Improves performance of the primary database by taking workload from it
D. Automatic failover in the case of AZ service failures
A and C
By creating a read replica in RDS, you have the facility to scale out the reads of your application, hence increasing elasticity for your application. It can also be used to reduce the load on the main database.
To maintain compliance with HIPAA laws, all data being backed up or stored on S3 needs to be encrypted at rest. What is the best method of encryption for your data, assuming S3 is being used for storing the healthcare related data? Choose 2:
A. Enable SSE on an S3 bucket to make use of AES-256 encryption
B. Store the data in encrypted EBS snapshots
C. Encrypt the data locally using your own encryption keys, then copy the data to S3 over HTTPS endpoints
D. Store the data on EBS volumes with encryption enabled instead of using S3
A and C
Data protection refers to protecting data while in transit and at rest. You can protect data in transit by using SSL or by using client side encryption. You have the following options for protecting data at rest in Amazon S3:
- Use server side encryption - you request S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects
- Use client side encryption - you encrypt data client side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools
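When uploading with an SDK, server side encryption is requested per object via request parameters. A sketch of the extra parameters a boto3 `put_object` call would carry for each server side option (the KMS key alias and SSE-C key are placeholders; no API call is made here):

```python
# Extra parameters for S3 PutObject, one dict per encryption option.
sse_s3 = {"ServerSideEncryption": "AES256"}          # SSE with S3 managed keys
sse_kms = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/my-app-key",               # hypothetical KMS key alias
}
# SSE-C: you supply the key with every request; S3 stores only a hash of it.
sse_c = {
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": "base64-encoded-256-bit-key",  # placeholder value
}

for name, params in [("SSE-S3", sse_s3), ("SSE-KMS", sse_kms), ("SSE-C", sse_c)]:
    print(name, params)
```

In all three cases the object is encrypted before it is written to disk in the S3 data center; the options differ only in who manages the key.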
Which of the following will occur when an EC2 instance in a VPC with an associated Elastic IP is stopped and started? Select two:
A. The underlying host for the instance can be changed
B. The ENI is detached
C. All data on instance store devices will be lost
D. The Elastic IP will be disassociated from the instance
A and C
EC2 instances are available in EBS backed storage and instance store backed storage. In fact, most EC2 instances are now EBS backed, so we need to consider both options while answering the question. If you have an EBS backed instance, then the underlying host is changed when the instance is stopped and started. And if you have instance store volumes, the data on the instance store devices will be lost.
How many types of block devices does Amazon EC2 support?
A. 2
B. 3
C. 4
D. 1
A
A block device is a storage device that moves data in sequences of bytes or bits. These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives. A block device can be physically attached to a computer or accessed remotely as if it were physically attached to the computer. Amazon EC2 supports two types of block devices:
- Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance)
- EBS volumes (remote storage devices)
Note: AWS does not treat Amazon EFS as a block level storage service. Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics, and concurrently accessible storage for up to thousands of EC2 instances.
What is the durability of S3 RRS?
A. 99.99%
B. 99.95%
C. 99.995%
D. 99.999999999%
A
RRS only has 99.99% durability, and there is a chance that data can be lost. So you need to ensure you have the right steps in place to replace lost objects.
As an IT administrator, you have been requested to ensure that you create a highly decoupled application in AWS. Which of the following can help you accomplish this goal?
A. An SQS queue to allow a second EC2 instance to process a failed instance's job
B. An Elastic Load Balancer to send web traffic to healthy EC2 instances
C. IAM user credentials on EC2 instances to grant permissions to modify an SQS queue
D. An Auto Scaling group to recover from EC2 instance failures
A
SQS is a fully managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS is the best option for creating a decoupled application. SQS makes it simple and cost effective to decouple and coordinate the components of a cloud application.
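The decoupling pattern can be illustrated locally with Python's standard library `queue` as a stand-in for SQS: producers and consumers share only the queue, never talk to each other directly, so any worker can pick up a failed instance's job.

```python
import queue

# Local stand-in for an SQS queue: components communicate only
# through the queue, not directly with each other.
jobs = queue.Queue()

# Producer enqueues work items (in SQS this would be send_message).
for i in range(3):
    jobs.put({"job_id": i, "payload": f"task-{i}"})

# Any consumer can take the next job (in SQS: receive_message).
# If a worker dies before deleting its message, SQS makes the
# message visible again after the visibility timeout, so another
# worker can process it.
processed = []
while not jobs.empty():
    job = jobs.get()
    processed.append(job["job_id"])
    jobs.task_done()

print(processed)  # → [0, 1, 2]
```

The key property is that adding or removing workers requires no change to the producer, which is what makes the architecture decoupled.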
You are working for a startup company that is building an application that receives large amounts of data. Unfortunately, current funding has left the startup short on cash, and it cannot afford to purchase thousands of dollars of storage hardware, so it has opted to use AWS. Which service would you implement in order to store a virtually unlimited amount of data without any effort to scale when demand unexpectedly increases?
A. S3, because it provides unlimited amounts of storage, scales automatically, is highly available, and is durable
B. Glacier, to keep costs low for storage and scale infinitely
C. Import/Export, because Amazon assists in migrating large amounts of data to S3
D. EC2, because EBS volumes can scale to hold any amount of data and, when used with Auto Scaling, can be designed for fault tolerance and high availability
A
The best option is to use S3, because you can host a large amount of data in S3 and it is the best storage option provided by AWS.
Which Amazon service can I use to define a virtual network that closely resembles a traditional data center?
A. Amazon VPC
B. Amazon ServiceBus
C. Amazon EMR
D. Amazon RDS
A
VPC lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.
Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? Choose three answers:
A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring an IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
A, B, and D
Single sign on requires a federation proxy or identity provider to authenticate users against the corporate directory, the AWS Security Token Service to issue temporary credentials, and an IAM role whose policy restricts each user to their designated folder in the bucket.
Which events would cause RDS to initiate a failover to the standby replica? Select 3 options:
A. Loss of availability in the primary AZ
B. Loss of network connectivity to the primary
C. Storage failure on the secondary
D. Storage failure on the primary
A, B, and D
RDS detects and automatically recovers from the most common failure scenarios for Multi AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:
- Loss of availability in the primary availability zone
- Loss of network connectivity to the primary
- Compute unit failure on the primary
- Storage failure on the primary
Note: When operations such as DB instance scaling or system upgrades like OS patching are initiated for Multi AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for the automatic failover to complete. Note that RDS Multi AZ deployments do not fail over automatically in response to database operations such as long running queries, deadlocks, or database corruption errors.
A company is storing data on S3. The company's security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? Choose three:
A. Use S3 server side encryption with AWS Key Management Service managed keys
B. Use S3 server side encryption with customer provided keys
C. Use S3 server side encryption with an EC2 key pair
D. Use S3 bucket policies to restrict access to the data at rest
E. Encrypt the data on the client side before ingesting to S3 using their own master key
F. Use SSL to encrypt the data while in transit to S3
A, B, and E
One can encrypt data in an S3 bucket using both server side encryption and client side encryption. The following techniques are available:
- Use Server Side Encryption with S3 Managed Keys (SSE-S3)
- Use Server Side Encryption with AWS KMS Managed Keys (SSE-KMS)
- Use Server Side Encryption with Customer Provided Keys (SSE-C)
- Use Client Side Encryption with an AWS KMS Managed Customer Master Key (CMK)
- Use Client Side Encryption Using a Client Side Master Key
In Amazon CloudWatch, what is the retention period for a one minute datapoint? Choose the right answer from the options given below:
A. 10 days
B. 15 days
C. 1 month
D. 1 year
B
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. CloudWatch metrics supports the following three retention schedules:
- 1 minute datapoints are available for 15 days
- 5 minute datapoints are available for 63 days
- 1 hour datapoints are available for 455 days
When you disable automated backups in AWS RDS, what are you compromising on?
A. Nothing, you are actually saving resources on AWS
B. You are disabling point in time recovery
C. Nothing really, you can still take manual backups
D. You cannot disable automated backups in RDS
B
Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. You can set the backup retention period when you create a DB instance. If you don't set the backup retention period, Amazon RDS uses a default retention period of one day. You can modify the backup retention period; valid values are 0 to a maximum of 35 days. It is important to note that it is highly discouraged to disable automated backups because it disables point in time recovery. If you disable and then re-enable automated backups, you are only able to restore starting from the time you re-enabled automated backups.
Where does AWS Elastic Beanstalk store the application files and server log files?
A. On the local server within Elastic Beanstalk
B. S3
C. CloudTrail
D. DynamoDB
B
Elastic Beanstalk stores your application files and, optionally, server log files in S3. If you are using the AWS Management Console, the AWS Toolkit for Visual Studio, or the AWS Toolkit for Eclipse, an S3 bucket will be created in your account for you, and the files you upload will be automatically copied from your local client to Amazon S3. Optionally, you may configure Elastic Beanstalk to copy your server log files every hour to S3. You do this by editing the environment configuration settings.
Which of the following are use cases for Amazon DynamoDB? Choose 3 answers:
A. Storing BLOB data
B. Managing web sessions
C. Storing JSON documents
D. Storing metadata for Amazon S3 objects
E. Running relational joins and complex updates
F. Storing large amounts of infrequently accessed data
B, C, and D
Amazon DynamoDB stores structured data, indexed by primary key, and allows low latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. DynamoDB IS a good choice to store the metadata for a BLOB, such as name, date created, owner, etc. The Binary Large Object itself would be stored in S3.
Q: When should I use Amazon DynamoDB vs Amazon S3? Amazon DynamoDB stores structured data, indexed by primary key, and allows low latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers are best saved in Amazon DynamoDB.
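The metadata-in-DynamoDB pattern above can be sketched as the item a low-level `put_item` call would write, describing a large object that itself lives in S3 (all attribute names, the bucket, and the key are illustrative only):

```python
# DynamoDB item holding metadata for a large object stored in S3.
# In the low-level API, each attribute carries a type tag ("S" for
# string, "N" for number) and numbers are passed as strings.
item = {
    "object_key": {"S": "videos/intro.mp4"},    # primary key (hypothetical)
    "bucket": {"S": "media-assets-bucket"},     # hypothetical bucket
    "owner": {"S": "alice"},
    "date_created": {"S": "2017-03-01T12:00:00Z"},
    "size_bytes": {"N": "734003200"},
}

# The metadata item stays well under DynamoDB's 400 KB item limit;
# the 700 MB video itself lives in S3, where objects may be up to 5 TB.
print(sorted(item))
```

Queries against DynamoDB then locate objects cheaply by owner, date, or name, and the returned key is used to fetch the blob from S3.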
Your customer wants to consolidate their log streams in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer's requirements?
A. Send all the log events to Amazon SQS. Set up an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics
B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs
C. Configure Amazon CloudTrail to receive custom logs, use EMR to apply heuristics to the logs
D. Set up an Auto Scaling group of EC2 syslogd servers, store the logs on S3, use EMR to apply heuristics on the logs
B
Kinesis is the best option for analyzing logs in real time. Kinesis makes it easy to collect, process, and analyze real time streaming data so that you can get timely insights and react quickly to new information. Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Kinesis, you can ingest real time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes, and data warehouses, or build your own real time applications using this data.
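A key detail of sending logs to Kinesis is the partition key: Kinesis routes each record to a shard by taking the MD5 hash of its partition key over the 128-bit key space. A local sketch of that routing, assuming a hypothetical stream whose 4 shards evenly split the hash range:

```python
import hashlib

NUM_SHARDS = 4          # hypothetical shard count
KEY_SPACE = 2 ** 128    # MD5 produces a 128-bit value

def shard_for(partition_key: str) -> int:
    """Map a partition key to the shard whose hash range contains it."""
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * NUM_SHARDS // KEY_SPACE

# Records with the same partition key (e.g. one host's syslog stream)
# always land on the same shard, preserving per-source ordering.
assert shard_for("web-server-01") == shard_for("web-server-01")

print({k: shard_for(k) for k in ["web-server-01", "web-server-02", "app-01"]})
```

Choosing a high-cardinality partition key (such as the host name) spreads load across shards while keeping each source's records in order.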
An existing application stores sensitive information on a non-boot Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?
A. Upload your customer keys to AWS CloudHSM. Associate the Amazon EBS volume with AWS CloudHSM. Remount the Amazon EBS volume.
B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume.
D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume.
B
Please note that the Encryption attribute is not listed in the Create Volume window when we select a snapshot for creating the volume. The volume created will have the same encryption property as the snapshot.
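Approach B can be sketched as the parameters an EC2 CreateVolume request would take, using boto3's parameter names (the size and availability zone are hypothetical; no API call is made here):

```python
# Parameters for EC2 CreateVolume: a new, encrypted volume to which
# the sensitive data is then copied at the operating system level.
create_volume_params = {
    "AvailabilityZone": "us-east-1a",  # hypothetical AZ; must match the instance's AZ
    "Size": 100,                       # GiB, hypothetical
    "VolumeType": "gp2",
    "Encrypted": True,                 # encryption is set at creation time
}

print(create_volume_params)
```

Because encryption can only be set when a volume is created, the data must be moved onto the new volume and the old, unencrypted volume deleted.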
The common use for IAM is to manage what? Select three:
A. Security Groups
B. API Keys
C. Multi Factor Authentication
D. Roles
B, C, and D
You can use IAM to manage API keys and MFA, along with roles.
Which AWS service is used as a global content delivery network service in AWS?
A. SES
B. CloudTrail
C. CloudFront
D. S3
C
CloudFront is a web service that gives businesses and web application developers an easy and cost effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self service, pay per use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end users using a global network of edge locations.
How does using ElastiCache help to improve database performance?
A. It can store petabytes of data
B. It provides faster internet speeds
C. It can store high-taxing queries
D. It uses read replicas
C
ElastiCache is a web service that makes it easy to deploy, operate, and scale an in memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in memory data stores, instead of relying entirely on slower disk based databases.
Perfect forward secrecy is used to offer SSL/TLS cipher suites from which two AWS services?
A. EC2 and S3
B. CloudTrail and CloudWatch
C. CloudFront and Elastic Load Balancing
D. Trusted Advisor and GovCloud
C
It is currently available for CloudFront and ELB.
Which technique can be used to integrate AWS IAM with an on-premises LDAP directory service?
A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials
B. Use SAML to enable single sign on between AWS and LDAP
C. Use AWS Security Token Service from an identity broker to issue short lived AWS credentials
D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated
E. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types
C
In this scenario, you need an identity broker service that authenticates users against LDAP and then uses the STS service to obtain credentials for them. The AWS Security Token Service is a web service that enables you to request temporary, limited privilege credentials for IAM users that you authenticate.
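The broker pattern can be sketched as the parameters such a broker would pass to the STS GetFederationToken API after authenticating a user against LDAP. The user name, bucket, and folder path below are hypothetical, and the call itself is not made here:

```python
import json

user = "jdoe"  # hypothetical user authenticated against LDAP

# Inline policy scoping the temporary credentials to the user's folder.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:::corp-user-docs/{user}/*",
    }],
}

# Parameters for STS GetFederationToken: short lived credentials whose
# effective permissions are the intersection of the broker's own
# permissions and this inline policy.
get_federation_token_params = {
    "Name": user,
    "Policy": json.dumps(scoped_policy),
    "DurationSeconds": 3600,  # one hour
}

print(get_federation_token_params["Name"])
```

The broker never hands out long term credentials; every session expires on its own, which is what makes option C safer than mirroring LDAP users as IAM users.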
As part of your application architecture requirements, the company you are working for has requested the ability to run analytics against all combined log files from the Elastic Load Balancer. Which services are used together to collect logs and process log file analysis in an AWS environment?
A. S3 for storing the ELB log files and EC2 for processing the log files in analysis
B. DynamoDB to store the logs and EC2 for running custom log analysis scripts
C. S3 for storing ELB log files and Amazon EMR for processing the log files in analysis
D. EC2 for storing and processing the log files
C
You can use EMR for processing the jobs. EMR provides a managed Hadoop framework that makes it easy, fast, and cost effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and DynamoDB. EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations, machine learning, financial analysis, scientific simulation, and bioinformatics.
All EC2 instances are assigned two IP addresses at launch. Which of these can only be reached from within the EC2 network?
A. Multiple IP addresses
B. Public IP address
C. Private IP address
D. Elastic IP address
C
A private IP address is an IP address that's not reachable over the Internet. You can use private IP addresses for communication between instances in the same network. When an instance is launched, a private IP address is allocated to the instance using DHCP. Each instance is also given an internal DNS hostname that resolves to the private IP address of the instance. You can use the internal DNS hostname for communication between instances in the same network, but the DNS hostname can't be resolved outside the network that the instance is in.
A customer wants to apply a group of database specific settings to the Relational Database instances in their AWS account. Which of the following options can be used to apply the settings in one go for all the Relational Database instances?
A. Security Groups
B. NACL Groups
C. Parameter Groups
D. IAM Roles
C
DB parameter groups are used to assign specific settings which can be applied to a set of RDS instances in AWS. In RDS, when you go to Parameter Groups, you can create a new parameter group. In the parameter group itself, you have a lot of database related settings that can be assigned to the database.
Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?
A. Maintain two snapshots: the original snapshot and the latest incremental snapshot
B. Maintain a volume snapshot; subsequent snapshots will overwrite one another
C. Maintain a single snapshot; the latest snapshot is both incremental and complete
D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier
C
EBS snapshots are incremental and complete, so you don't need to maintain multiple snapshots if you are looking to reduce costs. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.
A company is deploying a new two tier web application in AWS. The company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company's requirements?
A. MySQL installed on 2 Amazon EC2 instances in a single AZ
B. RDS for MySQL with Multi AZ
C. ElastiCache
D. DynamoDB
C
ElastiCache is a web service that makes it easy to deploy, operate, and scale an in memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in memory data stores, instead of relying entirely on slower disk based databases.
There is a requirement by a company that does online credit card processing to have a secure application environment on AWS. They are trying to decide on whether to use KMS or CloudHSM. Which of the following statements is right when it comes to CloudHSM and KMS? Choose the correct answer from the options given below:
A. It probably doesn't matter, as they both do the same thing
B. AWS CloudHSM does not support the processing, storage, and transmission of credit card data by a merchant or service provider, as it has not been validated as being compliant with the Payment Card Industry Data Security Standard; hence, you will need to use KMS
C. KMS is probably adequate unless additional protection is necessary for some applications and data that are subject to strict contractual or regulatory requirements for managing cryptographic keys; then CloudHSM should be used
D. AWS CloudHSM should always be used for any payment transactions
C
For stricter security requirements, one can use CloudHSM. The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.
You are a solutions architect working for a large digital media company. Your company is migrating their product estate to AWS, and you are in the process of setting up access to the AWS console using IAM. You have created 5 users for your systems administrators. What further steps do you need to take to enable your systems administrators to get access to the AWS console?
A. Generate an Access Key ID and Secret Access Key, and give these to your systems administrators
B. Enable multi-factor authentication on their accounts and define a password policy
C. Generate a password for each user created and give these passwords to your systems administrators
D. Give the systems administrators the Secret Access Key and Access Key ID, and tell them to use these credentials to log in to the AWS console
C In order to allow the users to log in to the console, you need to provide a password for each user. Access keys are used for programmatic access (API, CLI, SDK), not for console sign-in.
What does the following command do with respect to Amazon EC2 security groups? revoke-security-group-ingress A. Removes one or more security groups from a rule B. Removes one or more security groups from an Amazon EC2 instance C. Removes one or more rules from a security group
C Removes one or more ingress rules from a security group. The values that you specify in the revoke request must match the existing rule's values for the rule to be removed. Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.
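The exact-match behavior described above can be sketched as a small model (illustrative Python, not AWS code; the rule fields and CIDR values are made up for the example):

```python
# Illustrative model of revoke-security-group-ingress matching: a revoke
# request only removes a rule when every value matches the existing rule.

def rule_matches(existing, revoke_request):
    """Return True if the revoke request matches the existing rule exactly."""
    keys = ("protocol", "cidr", "from_port", "to_port")
    return all(existing.get(k) == revoke_request.get(k) for k in keys)

existing_rule = {"protocol": "tcp", "cidr": "203.0.113.0/24",
                 "from_port": 22, "to_port": 22}

# Same protocol, CIDR, and port range -> the rule would be removed
exact = {"protocol": "tcp", "cidr": "203.0.113.0/24",
         "from_port": 22, "to_port": 22}
# Different port -> no match, so nothing is revoked
wrong_port = {"protocol": "tcp", "cidr": "203.0.113.0/24",
              "from_port": 80, "to_port": 80}

print(rule_matches(existing_rule, exact))       # True
print(rule_matches(existing_rule, wrong_port))  # False
```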
A VPC has been set up with a public subnet and an Internet gateway. You set up an EC2 instance with a public IP, but you are still not able to connect to it via the Internet. You can see that the right security groups are in place. What should you do to ensure you can connect to the EC2 instance from the Internet? A. Set an Elastic IP address on the EC2 instance B. Set a secondary private IP address on the EC2 instance C. Ensure the right route entry is there in the route table D. There must be some issue in the EC2 instance; check the system logs
C You have to ensure that the route table has an entry pointing to the Internet gateway, because this is required for instances to communicate over the Internet.
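The missing route can be sketched as follows. The parameter names mirror the EC2 CreateRoute API, but the route table and gateway IDs are made up for illustration:

```python
# A public subnet needs a default route (0.0.0.0/0) pointing at the
# Internet gateway; without it, a public IP alone is not reachable.

def default_internet_route(route_table_id, igw_id):
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",  # all non-local traffic
        "GatewayId": igw_id,                  # send it to the IGW
    }

route = default_internet_route("rtb-0123456789abcdef0",
                               "igw-0123456789abcdef0")
print(route["DestinationCidrBlock"])  # 0.0.0.0/0
```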
A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer? A. Create an A record pointing to the IP address of the load balancer B. Create a CNAME record pointing to the load balancer DNS name C. Create an alias for a CNAME record to the load balancer DNS name D. Create an A record aliased to the load balancer DNS name
D Alias resource record sets are virtual records that work like CNAME records. But they differ from CNAME records in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record. As such, unlike CNAME records, alias resource record sets are available to configure a zone apex in a dynamic environment.
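A zone-apex alias record might look like the sketch below. The field names follow Route 53's ChangeResourceRecordSets API, but the domain, load balancer DNS name, and hosted zone ID are invented for illustration:

```python
import json

# An alias A record at the zone apex: resolvers see an ordinary A record,
# while Route 53 resolves the alias to the load balancer behind the scenes.
alias_record = {
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": "example.com.",  # the zone apex itself
        "Type": "A",             # alias records at the apex use type A
        "AliasTarget": {
            # Hosted zone ID of the load balancer (region-specific; example value)
            "HostedZoneId": "Z35SXDOTRQ7X7K",
            "DNSName": "my-lb-1234567890.us-east-1.elb.amazonaws.com.",
            "EvaluateTargetHealth": False,
        },
    },
}
print(json.dumps(alias_record, indent=2))
```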
What is the base URI for all requests for instance metadata? A. http://254.169.169.254/latest/ B. http://169.169.254.254/latest/ C. http://127.0.0.1/latest/ D. http://169.254.169.254/latest/
D Instance metadata is data about your instance that you can use to configure or manage the running instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console or the AWS CLI. This can be helpful when you're writing scripts to run from your instance. For example, you can access the local IP address of your instance from instance metadata to manage a connection to an external application.
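The metadata service is only reachable from inside a running instance, so the sketch below just constructs the URLs a script would fetch rather than calling them:

```python
# Base URI for instance metadata, reachable only from within an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest/"

def metadata_url(path):
    """Build the URL for a metadata item, e.g. 'local-ipv4' or 'instance-id'."""
    return METADATA_BASE + "meta-data/" + path

# A few real metadata paths:
print(metadata_url("local-ipv4"))    # private IP of the instance
print(metadata_url("instance-id"))   # the instance's ID
print(metadata_url("public-ipv4"))   # public IP, if one is assigned
```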
You have started a new role as a solutions architect for an architectural firm that designs large skyscrapers in the Middle East. Your company hosts large volumes of data and has about 250 TB of data on internal servers. They have decided to store this data on S3 due to the redundancy offered by it. The company currently has a telecoms line of 2 Mbps connecting their head office to the Internet. What method should they use to import this data on to S3 in the fastest manner possible? A. Upload it directly to S3 B. Purchase an AWS Direct Connect connection and transfer the data over it once it is installed C. Data Pipeline D. Snowball
D Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, and secure, and can be as little as one-fifth the cost of high-speed Internet.
A customer is running a multi-tier web application farm in a virtual private cloud that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their EC2 instances running in both the public and private subnets. They have only authorized the bastion security group with Microsoft Remote Desktop Protocol access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following bastion deployment scenarios will meet this requirement? A. Deploy a Windows bastion host on the corporate network that has RDP access to all instances in the VPC B. Deploy a Windows bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere C. Deploy a Windows bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses D. Deploy a Windows bastion host with an Elastic IP address in the public subnet and allow RDP access to the bastion only from the corporate IP address
D The bastion host should be in a public subnet with either a public or Elastic IP, and only allow RDP access from one IP from the corporate network. A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed to limit or reduce the threat to the computer.
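The bastion's ingress rule can be sketched as below. The field names mirror the EC2 AuthorizeSecurityGroupIngress API; the corporate IP address is made up:

```python
# RDP (TCP 3389) allowed only from a single corporate public IP (/32).
# Locking the source CIDR to one address is what limits administrative access.
bastion_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 3389,  # RDP
    "ToPort": 3389,
    "IpRanges": [{"CidrIp": "198.51.100.10/32"}],  # corporate public IP only
}
print(bastion_ingress["IpRanges"][0]["CidrIp"])  # 198.51.100.10/32
```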
You have instances running in your VPC. You have both production and development instances running in the VPC. You want to ensure that people who are responsible for the development instances don't have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this? A. Launch the test and production instances in separate VPCs and use VPC peering B. Create an IAM policy with a condition which allows access to only instances that are used for production or development C. Launch the test and production instances in different AZs and use Multi-Factor Authentication D. Define tags on the production and development servers and add a condition to the IAM policy which allows access to specific tags
D You can easily add tags which define which instances are production and which are development, and then ensure these tags are used when controlling access via an IAM policy.
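Such a tag-based policy might look like the sketch below. The condition key ec2:ResourceTag/&lt;tag-name&gt; is a real IAM condition key; the tag name, value, and action list are example choices:

```python
import json

# IAM policy that only allows instance actions on instances tagged
# Environment=Development; production instances (tagged differently)
# would not match the condition and so would not be allowed.
dev_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances",
                   "ec2:RebootInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "Development"}
        },
    }],
}
print(json.dumps(dev_only_policy, indent=2))
```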
Your company is concerned with EBS volume backup on Amazon EC2 and wants to ensure they have proper backups and that the data is durable. What solution would you implement and why? A. Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway B. Write a cronjob on the server that compresses the data that needs to be backed up using gzip compression, then use AWS CLI to copy the data into an S3 bucket for durability C. Use a lifecycle policy to backup volumes stored on Amazon S3 for durability D. Write a cronjob that uses the AWS CLI to take a snapshot of production EBS volumes. The data is durable because EBS snapshots are stored on the Amazon S3 standard storage class
D You can take snapshots of EBS volumes, and to automate the process you can use the CLI. The snapshots are automatically stored on S3 for durability.
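A cron job could invoke a command like the one assembled below. `aws ec2 create-snapshot` is the real CLI subcommand; the volume ID is made up, and in practice the command list would be passed to cron or subprocess rather than just printed:

```python
import datetime

def snapshot_command(volume_id):
    """Build the AWS CLI command a nightly cron job could run."""
    stamp = datetime.date.today().isoformat()
    return [
        "aws", "ec2", "create-snapshot",
        "--volume-id", volume_id,
        "--description", f"nightly-backup-{stamp}",
    ]

cmd = snapshot_command("vol-0123456789abcdef0")
print(" ".join(cmd))
```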
CloudTrail can log API calls from? A. the command line B. the SDK C. the console D. all of the above
D CloudTrail can log all API calls which enter AWS. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
What is the best definition of an SQS message? A. mobile push notification B. set of instructions stored in an SQS queue that can be up to 512 KB in size C. notification sent via SNS D. set of instructions stored in an SQS queue that can be up to 256 KB in size
D Q: How do I configure the maximum message size for Amazon SQS? To configure the maximum message size, use the console or the SetQueueAttributes method to set the MaximumMessageSize attribute. This attribute specifies the limit on the number of bytes that an Amazon SQS message can contain. Set this limit to a value between 1,024 bytes (1 KB) and 262,144 bytes (256 KB).
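The valid range for the MaximumMessageSize attribute can be checked with a small helper (the constants come from the limits quoted above):

```python
# MaximumMessageSize must be between 1,024 bytes (1 KB) and
# 262,144 bytes (256 KB); 256 KB is also the queue's default ceiling.
MIN_BYTES = 1024
MAX_BYTES = 262144

def valid_max_message_size(size_bytes):
    return MIN_BYTES <= size_bytes <= MAX_BYTES

print(valid_max_message_size(262144))  # True  (256 KB, the maximum)
print(valid_max_message_size(524288))  # False (512 KB exceeds the limit)
print(valid_max_message_size(512))     # False (below the 1 KB floor)
```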
When running my DB instance as a Multi-AZ deployment, can I use the standby for read and write operations? A. Yes B. Only with MSSQL based RDS C. Only for Oracle RDS instances D. No
D Q: When running my DB instance as a Multi-AZ deployment, can I use the standby for read or write operations? No, the standby replica cannot serve read requests. Multi-AZ deployments are designed to provide enhanced database availability and durability, rather than read scaling benefits. As such, the feature uses synchronous replication between the primary and standby. Our implementation makes sure the primary and the standby are constantly in sync, but precludes using the standby for read or write operations. Here is an overview of Multi-AZ RDS deployments: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for DB instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different AZ. Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Which of the following best describes what CloudHSM has to offer? A. AWS service for generating API keys B. EBS encryption method C. S3 encryption method D. A dedicated appliance that is used to store security keys
D The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.
What is the default period for EC2 cloud watch data with detailed monitoring disabled? A. One second B. Five seconds C. One minute D. Three minutes E. Five minutes
E By default, your instance is enabled for basic monitoring. You can optionally enable detailed monitoring. After you enable detailed monitoring, the Amazon EC2 console displays monitoring graphs with a 1-minute period for the instance. Basic: Data is available automatically in 5-minute periods at no charge. Detailed: Data is available in 1-minute periods for an additional cost. To get this level of data, you must specifically enable it for the instance. For the instances where you've enabled detailed monitoring, you can also get aggregated data across groups of similar instances. In CloudWatch, for basic monitoring of EC2 instances, the following important metrics are collected at five-minute intervals and stored for two weeks: - CPU load - disk I/O - network I/O
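The difference between the two periods can be made concrete with a quick calculation (the period values come from the explanation above; the helper function is just for illustration):

```python
# Monitoring level vs. metric period, in seconds.
PERIOD_SECONDS = {
    "basic": 300,    # 5-minute periods, enabled by default, no charge
    "detailed": 60,  # 1-minute periods, must be enabled, additional cost
}

def datapoints_per_hour(level):
    """How many metric datapoints an hour yields at each monitoring level."""
    return 3600 // PERIOD_SECONDS[level]

print(datapoints_per_hour("basic"))     # 12
print(datapoints_per_hour("detailed"))  # 60
```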
A company has resources hosted in AWS and on on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both types of resources. Select two: A. You can leverage SWF to utilize both on-premises servers and EC2 instances for your decoupled application B. SQS is not a valid option to help you use on-premises servers and EC2 instances in the same application, as it cannot be polled by on-premises servers C. You can leverage SQS to utilize both on-premises servers and EC2 instances for your decoupled application D. SWF is not a valid option to help you use on-premises servers and EC2 instances in the same application, as on-premises servers cannot be used as activity task workers
A and C You can use both SWF and SQS to coordinate with EC2 instances and on-premises servers. SQS is a fully managed queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best-practice design for modern applications. SWF makes it easy to build applications that coordinate work across distributed components. In SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.
You are using an m1.small EC2 instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose two answers: A. Use an array of EBS volumes B. Enable Multi-AZ nodes C. Place the instance in an Auto Scaling group D. Add an EBS volume and place into RAID 5 E. Increase the size of the EC2 instance F. Put the database behind an Elastic Load Balancer
A and E With EBS, you can use any of the standard RAID configurations that you can use with a traditional bare-metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together. To further increase write throughput, it is also better to use a larger instance type.
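The throughput trade-off between the two RAID levels can be sketched numerically (the per-volume figures are illustrative, not EBS specifications):

```python
# RAID 0 stripes writes across all volumes, so aggregate throughput is
# roughly the sum of the volumes (no redundancy). RAID 1 mirrors every
# write to both volumes, so throughput stays at a single volume's rate.

def raid0_throughput(volume_throughputs):
    return sum(volume_throughputs)

def raid1_throughput(volume_throughputs):
    return min(volume_throughputs)

volumes = [250, 250, 250, 250]  # MB/s per volume (example numbers)
print(raid0_throughput(volumes))  # 1000
print(raid1_throughput(volumes))  # 250
```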
Which AWS service, if used as part of your application's architecture, has an added benefit of helping to mitigate DDoS attacks from hitting your back-end instances? A. CloudWatch B. CloudFront C. CloudTrail D. Kinesis
B CloudFront serves content from edge locations, absorbing and distributing traffic before it reaches your origin, which helps prevent a DDoS attack from hitting your back-end instances directly.
You are a consultant tasked with migrating an on-premises application architecture to AWS. During your design process, you have to give consideration to current on-premises security and determine which security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model? A. Customer data B. Physical network infrastructure C. Instance security D. User access to the AWS environment
B As per the shared responsibility model, the physical network infrastructure is taken care of by AWS.
The AZ that your RDS database instance is located in is suffering from outages, and you have lost access to the database. What could you have done to prevent losing access to your database without any downtime? A. Made a snapshot of the database B. Enabled Multi-AZ failover C. Increased the database instance size D. Created a read replica
B RDS Multi-AZ deployments provide enhanced availability and durability for DB instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different AZ. Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
What is the purpose of an SWF decision task? A. It tells the worker to perform a function B. It tells the decider the state of the workflow execution C. It defines all the activities in the workflow D. It represents a single task in the workflow
B A decider is an implementation of the coordination logic of your workflow type that runs during the execution of your workflow. You can run multiple deciders for a single workflow type. Because the execution state for a workflow execution is stored in its workflow history, deciders can be stateless. Amazon SWF maintains the workflow execution history and provides it to a decider with a decision task.
Before I delete an EBS volume, what can I do if I want to recreate the volume later? A. Create a copy of the EBS volume (not a snapshot) B. Store a snapshot of the volume C. Download the content to an EC2 instance D. Back up the data in to a physical disk
B After you no longer need an EBS volume, you can delete it. After deletion, its data is gone and the volume can't be attached to any instance. However, before deletion, you can store a snapshot of the volume, which you can use to re-create the volume later. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete, which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.
You are designing a social media site and are considering how to mitigate distributed denial of service attacks. Which of the below are viable mitigation techniques? Choose 3 answers: A. Add multiple elastic network interfaces to each EC2 instance to increase the network bandwidth B. Use dedicated instances to ensure that each instance has the maximum performance possible C. Use an Amazon CloudFront distribution for both static and dynamic content D. Use an Elastic Load Balancer with Auto Scaling groups at the web and app tiers, restricting direct Internet traffic to the Amazon RDS tier E. Add Amazon CloudWatch alarms to look for high network-in and CPU utilization F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall
C, D, and E CloudFront absorbs traffic at edge locations before it reaches your origin; an Elastic Load Balancer with Auto Scaling lets the web and app tiers absorb and scale with a traffic surge while shielding the RDS tier from direct Internet traffic; and CloudWatch alarms on network-in and CPU utilization give early warning that an attack is under way.