CSAA (SAA-C01)


For data privacy, a healthcare company has been asked to comply with the Health Insurance Portability and Accountability Act (HIPAA). They have been told that all of the data being backed up or stored on Amazon S3 must be encrypted. What is the best option to do this? (Select TWO.) A)Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys. B)Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption. C)Store the data on EBS volumes with encryption enabled instead of using Amazon S3. D)Store the data in encrypted EBS snapshots. E)Enable Server-Side Encryption on an S3 bucket to make use of AES-128 encryption.

A) and B) Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For example, if you share your objects using a pre-signed URL, that URL works the same way for both encrypted and unencrypted objects. You have three mutually exclusive options depending on how you choose to manage the encryption keys: Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS), and Server-Side Encryption with Customer-Provided Keys (SSE-C). The options that say: Before sending the data to Amazon S3 over HTTPS, encrypt the data locally first using your own encryption keys and Enable Server-Side Encryption on an S3 bucket to make use of AES-256 encryption are correct because these options use client-side encryption and Amazon S3-Managed Keys (SSE-S3) respectively. Client-side encryption is the act of encrypting data before sending it to Amazon S3, while SSE-S3 uses AES-256 encryption. Storing the data on EBS volumes with encryption enabled instead of using Amazon S3 and storing the data in encrypted EBS snapshots are incorrect because both options use EBS encryption and not S3. Enabling Server-Side Encryption on an S3 bucket to make use of AES-128 encryption is incorrect as S3 doesn't provide AES-128 encryption, only AES-256. References: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
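
To illustrate the SSE-S3 option, here is a minimal sketch (not from the original material) of requesting AES-256 server-side encryption at upload time with boto3; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3, AES-256).
s3.put_object(
    Bucket="example-hipaa-backups",   # hypothetical bucket name
    Key="reports/2024/backup.csv",
    Body=b"sensitive payload",
    ServerSideEncryption="AES256",
)

For client-side encryption, the data would instead be encrypted locally with your own keys before the put_object call, and S3 would only ever receive ciphertext.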

When would you use IAM Roles? A)When an external application needs access to an AWS resource like DynamoDB B)When an IAM User needs elevated permissions for a temporary task C)All of these are examples of when IAM Roles can be used D)When an AWS service or resource needs access to another AWS service or resource E)When a trusted remote account needs access to AWS resources F)When you need to give an IAM entity long-term credentials

A,B,D,E - External applications can be given access to AWS resources by assuming IAM Roles. Since IAM Roles are meant for temporary usage, it would be fitting to assign an IAM Role to the IAM User for the purpose of accomplishing a temporary task. AWS resources sometimes need to access other AWS resources. For example, when an EC2 instance needs to access a database (instance or service), the EC2 instance can be assigned an IAM Role. An external account (non-AWS and AWS) that you trust can be assigned an IAM Role as a way to access AWS resources.
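
As a hedged illustration of the role-assumption flow described above, the sketch below uses boto3 and STS; the role ARN and session name are placeholders, not values from the question.

import boto3

sts = boto3.client("sts")

# Exchange long-term credentials for temporary credentials by assuming a role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleDynamoDBAccess",  # hypothetical role
    RoleSessionName="temporary-task",
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken; they expire automatically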

Which statements are true when speaking of RPO and RTO? A) A low RPO means less loss of data B) A low RTO means less time to resolve C) RPO measures the time between failure and recovery. D) RPO measures the time between a failure and the latest backup. E) RTO measures the time between system failure and the last backup. F) RTO measures the time between failure and recovery.

A,B,D,F - A low RPO will result in less loss of data in a failure event. A low RTO will result in less time until resolution after a failure event. RPO is the maximum amount of time between a failure and the last successful backup. RTO is the maximum amount of time a system can be down, including its recovery.

You have been tasked to evaluate a university's enrollment system which uses DynamoDB as a database layer. The system is suffering due to heavy reads in the enrollment period of the university. Load is currently 3-4x the expected amount and roughly 90% of it is read operations checking enrollment status for class signups. What should you suggest to help with the extra load? The changes should reduce the amount of admin overhead while being cost effective. A Change the database to Aurora Serverless B Increase the RCU C Change database to RDS D Add additional DynamoDB instances

Correct Answer: B Why is this correct? Increasing the RCUs (read capacity units) would absorb the extra read load with the least admin overhead and is the most cost-effective of the options. Adding more instances would not be cost effective, and DynamoDB, as a managed service, has no instances to add; migrating to Aurora Serverless or RDS would add effort without addressing the read capacity need.
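
For example, assuming the table uses provisioned capacity (the table name and numbers below are illustrative, not from the question), the RCUs could be raised with a single call:

import boto3

dynamodb = boto3.client("dynamodb")

# Raise the provisioned read capacity to absorb the enrollment-period read spike.
dynamodb.update_table(
    TableName="enrollment-status",        # hypothetical table name
    ProvisionedThroughput={
        "ReadCapacityUnits": 4000,        # scaled up for the 3-4x read load
        "WriteCapacityUnits": 500,        # write capacity left unchanged
    },
)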

To improve security in your AWS account you have decided to enable multi-factor authentication (MFA). You can authenticate using an MFA device in which two ways? (choose 2) A)Using the AWS API B)Using biometrics C)Through the AWS Management Console D)Locally to EC2 instances E)Using a key pair

Answer: A and C Explanation: You can authenticate using an MFA device in the following ways: Through the AWS Management Console - the user is prompted for a user name, password, and authentication code. Using the AWS API - restrictions are added to IAM policies and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests. Using the AWS CLI by obtaining temporary security credentials from STS (aws sts get-session-token). References: AWS Training - AWS Cheatsheet (IAM)
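
A minimal sketch of the API path, assuming a virtual MFA device (the serial number and token code are placeholders):

import boto3

sts = boto3.client("sts")

# Request temporary credentials that carry MFA context.
resp = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::123456789012:mfa/example-user",  # hypothetical MFA device
    TokenCode="123456",                                         # code read from the MFA device
)
# The returned credentials can then be used for API calls that require MFA conditions.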

Which of the following categories would not be found in a zone file? A)CNAME (canonical name) B)TTL (time to live) C)Record type D)Record data

Answer: A) Feedback: CNAME is a record type, but it's not a category. TTL, record type, and record data are all category names.

A start-up company that offers an intuitive financial data analytics service has consulted you about their AWS architecture. They have a fleet of Amazon EC2 worker instances that process financial data and then output reports which are used by their clients. You must store the generated report files in durable storage. The number of files to be stored can grow over time as the start-up company is expanding rapidly overseas and hence, they also need a way to distribute the reports faster to clients located across the globe. Which of the following is a cost-efficient and scalable storage option that you should use for this scenario? A)Use multiple EC2 instance stores for data storage and ElastiCache as the CDN. B)Use Amazon S3 as the data storage and CloudFront as the CDN. C)Use Amazon S3 Glacier as the data storage and ElastiCache as the CDN. D)Use Amazon Redshift as the data storage and CloudFront as the CDN.

Answer: B) A Content Delivery Network (CDN) is a critical component of nearly any modern web application. It used to be that a CDN merely improved the delivery of content by replicating commonly requested files (static content) across a globally distributed set of caching servers. However, CDNs have become much more useful over time. For caching, a CDN will reduce the load on an application origin and improve the experience of the requestor by delivering a local copy of the content from a nearby cache edge, or Point of Presence (PoP). The application origin is off the hook for opening the connection and delivering the content directly as the CDN takes care of the heavy lifting. The end result is that the application origins don't need to scale to meet demands for static content. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS - both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. Amazon S3 offers a highly durable, scalable, and secure destination for backing up and archiving your critical data. This is the correct option as the start-up company is looking for durable storage to store the report files. In addition, ElastiCache is only used for caching and not as a global Content Delivery Network (CDN). Using Amazon Redshift as the data storage and CloudFront as the CDN is incorrect as Amazon Redshift is usually used as a Data Warehouse. Using Amazon S3 Glacier as the data storage and ElastiCache as the CDN is incorrect as Amazon S3 Glacier is usually used for data archives. Using multiple EC2 instance stores for data storage and ElastiCache as the CDN is incorrect as data stored in an instance store is not durable. References: https://aws.amazon.com/s3/ https://aws.amazon.com/caching/cdn/

A local bank has an in-house application which handles sensitive financial data in a private subnet. After the data is processed by the EC2 worker instances, they will be delivered to S3 for ingestion by other services. How should you design this solution so that the data does not pass through the public Internet? A)Create an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3. B)Configure a VPC Interface Endpoint along with a corresponding route entry that directs the data to S3. C)Configure a VPC Gateway Endpoint along with a corresponding route entry that directs the data to S3. D)Provision a NAT gateway in the private subnet with a corresponding route entry that directs the data to S3.

Answer: C) Feedback: The important concept that you have to understand in the scenario is that your VPC and your S3 bucket are located within the larger AWS network. However, the traffic coming from your VPC to your S3 bucket is traversing the public Internet by default. To better protect your data in transit, you can set up a VPC endpoint so the incoming traffic from your VPC will not pass through the public Internet, but instead through the private AWS network. A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not leave the Amazon network. Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. There are two types of VPC endpoints: interface endpoints and gateway endpoints. You should create the type of VPC endpoint required by the supported service. As a rule of thumb, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, which use VPC Gateway Endpoint. Configuring a VPC Gateway Endpoint along with a corresponding route entry that directs the data to S3 is correct because VPC Gateway Endpoint supports private connection to S3. Creating an Internet gateway in the public subnet with a corresponding route entry that directs the data to S3 is incorrect because an Internet gateway is used for instances in the public subnet to have accessibility to the Internet. Configuring a VPC Interface Endpoint along with a corresponding route entry that directs the data to S3 is incorrect because VPC Interface Endpoint does not support the S3 service. You should use a VPC Gateway Endpoint instead. As mentioned in the above explanation, most AWS services use VPC Interface Endpoint except for S3 and DynamoDB, which use VPC Gateway Endpoint. Provisioning a NAT gateway in the private subnet with a corresponding route entry that directs the data to S3 is incorrect because a NAT Gateway allows instances in the private subnet to gain access to the Internet, but not vice versa. References: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
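
As a sketch of the correct option (the VPC ID, route table ID, and region below are hypothetical), a Gateway Endpoint for S3 can be created and wired into the private subnet's route table like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Gateway Endpoint for S3; S3-bound routes are added to the listed route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # route table of the private subnet
)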

You recently created a brand new IAM User with default settings using the AWS CLI. This is intended to be used to send API requests to your S3, DynamoDB, Lambda, and other AWS resources of your cloud infrastructure. Which of the following must be done to allow the user to make API calls to your AWS resources? A)Enable Multi-Factor Authentication for the user. B)Assign an IAM Policy to the user to allow it to send API calls. C)Create a set of Access Keys for the user and attach the necessary permissions. D)Do nothing as the IAM User is already capable of sending API calls to your AWS resources.

Answer: C) You can choose the credentials that are right for your IAM user. When you use the AWS Management Console to create a user, you must choose to at least include a console password or access keys. By default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. You must create the type of credentials for an IAM user based on the needs of your user. Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users. When you create an access key, IAM returns the access key ID and secret access key. You should save these in a secure location and give them to the user. The option that says: Do nothing as the IAM User is already capable of sending API calls to your AWS resources is incorrect because by default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. Take note that in the scenario, you created the new IAM user using the AWS CLI and not via the AWS Management Console, where you must choose to at least include a console password or access keys when creating a new IAM user. Enabling Multi-Factor Authentication for the user is incorrect because this will still not provide the required Access Keys needed to send API calls to your AWS resources. You have to grant the IAM user with Access Keys to meet the requirement. Assigning an IAM Policy to the user to allow it to send API calls is incorrect because adding a new IAM policy to the new user will not grant the needed Access Keys needed to make API calls to the AWS resources. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html#id_users_creds
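
To make this concrete, a hedged sketch of issuing access keys and attaching a permission with boto3; the user name and policy ARN are placeholders, not values from the question.

import boto3

iam = boto3.client("iam")

# Give the CLI-created user programmatic credentials.
keys = iam.create_access_key(UserName="api-worker")["AccessKey"]
print(keys["AccessKeyId"])  # the secret access key is only returned once, so store it securely

# Attach the permissions the user needs for its API calls.
iam.attach_user_policy(
    UserName="api-worker",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example managed policy
)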

What steps are not needed to assign an Egress-Only Gateway to a VPC while allowing outbound traffic? A)Create a route for IPv6 traffic pointed at the Egress-Only Gateway B)Assign a public IP to the Egress-Only Gateway C)Attach an Egress-Only Gateway to a VPC D)Create an Egress-Only Gateway

B - Assigning a public IP to the Egress-Only Gateway is not needed. An egress-only internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over IPv6 from instances in your VPC to the internet, and prevents the internet from initiating an IPv6 connection with your instances. Note: An egress-only internet gateway is for use with IPv6 traffic only. To enable outbound-only internet communication over IPv4, use a NAT gateway instead. IPv6 addresses are globally unique and are therefore public by default. If you want your instance to be able to access the internet, but you want to prevent resources on the internet from initiating communication with your instance, you can use an egress-only internet gateway. To do this, create an egress-only internet gateway in your VPC, and then add a route to your route table that points all IPv6 traffic (::/0) or a specific range of IPv6 addresses to the egress-only internet gateway. IPv6 traffic in the subnet that's associated with the route table is routed to the egress-only internet gateway. An egress-only internet gateway is stateful: it forwards traffic from the instances in the subnet to the internet or other AWS services, and then sends the response back to the instances. An egress-only internet gateway has the following characteristics: You cannot associate a security group with an egress-only internet gateway. You can use security groups for your instances in the private subnet to control the traffic to and from those instances. You can use a network ACL to control the traffic to and from the subnet for which the egress-only internet gateway routes traffic.
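
The steps that are needed (create the gateway, attach it to the VPC at creation time, add an IPv6 route) could look like the following sketch; the VPC and route table IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create the egress-only internet gateway in the VPC (no public IP is required).
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Route all outbound IPv6 traffic through the egress-only gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # hypothetical route table for the subnet
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw_id,
)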

You've set up a private EC2 instance to have limited outbound access to the internet by way of a NAT gateway. You ping a public IP from the private EC2 instance and receive a response. Why does the NAT gateway allow this inbound response? A)NAT gateways always allow inbound traffic. B)NAT gateways understand and allow session traffic. C)NAT gateway only allows ping requests back into the environment. D)The NAT gateway was set up incorrectly.

B - NAT gateways understand the session; therefore, they are stateful. The NAT gateway allows the inbound traffic because it is a response to a request that originated from the private resource.

You have a requirement to integrate the Lightweight Directory Access Protocol (LDAP) directory service of your on-premises data center to your AWS VPC using IAM. The identity store which is currently being used is not compatible with SAML. Which of the following provides the most valid approach to implement the integration? A)Use IAM roles to rotate the IAM credentials whenever LDAP credentials are updated. B)Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials. C)Use AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP. D)Use an IAM policy that references the LDAP identifiers and AWS credentials.

B) Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials is the correct answer. If your identity store is not compatible with SAML 2.0, then you can build a custom identity broker application to perform a similar function. The broker application authenticates users, requests temporary credentials for users from AWS, and then provides them to the user to access AWS resources. The application verifies that employees are signed into the existing corporate network's identity and authentication system, which might use LDAP, Active Directory, or another system. The identity broker application then obtains temporary security credentials for the employees. To get temporary security credentials, the identity broker application calls either AssumeRole or GetFederationToken to obtain temporary security credentials, depending on how you want to manage the policies for users and when the temporary credentials should expire. The call returns temporary security credentials consisting of an AWS access key ID, a secret access key, and a session token. The identity broker application makes these temporary security credentials available to the internal company application. The app can then use the temporary credentials to make calls to AWS directly. The app caches the credentials until they expire, and then requests a new set of temporary credentials. Using an IAM policy that references the LDAP identifiers and AWS credentials is incorrect because using an IAM policy is not enough to integrate your LDAP service to IAM. You need to use SAML, STS or a custom identity broker. Using AWS Single Sign-On (SSO) service to enable single sign-on between AWS and your LDAP is incorrect because the scenario did not require SSO and in addition, the identity store that you are using is not SAML-compatible. Using IAM roles to rotate the IAM credentials whenever LDAP credentials are updated is incorrect because manually rotating the IAM credentials is not an optimal solution to integrate your on-premises and VPC network. You need to use SAML, STS, or a custom identity broker. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html
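
A hedged sketch of the broker's STS call (here GetFederationToken; AssumeRole is the alternative mentioned above). The federated user name and the inline policy are placeholders, and the LDAP authentication step is assumed to have happened before this code runs.

import json
import boto3

sts = boto3.client("sts")

# Called by the identity broker after it has authenticated the user against LDAP.
resp = sts.get_federation_token(
    Name="ldap-user-jdoe",              # hypothetical federated user name
    DurationSeconds=3600,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}],
    }),
)
temporary_credentials = resp["Credentials"]   # handed back to the internal application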

An instance is launched into a VPC subnet with the network ACL configured to allow all outbound traffic and deny all inbound traffic. The security group of the instance is configured to allow SSH from any IP address. What changes are required to allow SSH access to the instance? A)The Outbound Security Group needs to be modified to allow outbound traffic. B)The Inbound Network ACL needs to be modified to allow inbound traffic C)Nothing, it can be accessed from any IP address using SSH D)Both the Outbound Security Group and Outbound Network ACL need to be modified to allow outbound traffic

B) The Inbound Network ACL needs to be modified to allow inbound traffic. The reason the Network ACL has to allow both inbound and outbound traffic is that network ACLs are stateless: responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Security groups, by contrast, are stateful, so if an incoming request is granted, the response traffic is allowed out by default. Options A and D are invalid because Security Groups are stateful; traffic allowed by the inbound rule is automatically allowed back out. Option C is also incorrect. For more information on Network ACLs, please refer to the URL below: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html

What AWS services could you use Route 53 with? Choose all that apply. A)Subnets B)EC2 C)S3 D)Availability Zones

B,C - EC2 and S3

AWS CloudHSM: AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries. CloudHSM is standards-compliant and enables you to export all of your keys to most other commercially-available HSMs, subject to your configurations. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups. CloudHSM also enables you to scale quickly by adding and removing HSM capacity on-demand, with no up-front costs.

Benefits: Generate and use encryption keys on FIPS 140-2 Level 3 validated HSMs. AWS CloudHSM enables you to generate and use your encryption keys on FIPS 140-2 Level 3 validated hardware. CloudHSM protects your keys with exclusive, single-tenant access to tamper-resistant HSM instances in your own Amazon Virtual Private Cloud (VPC).

What is the default minimum size of an Auto Scaling Group? A)4 B)2 C)1 D)3

C - 1

Question 23 of 65 An application needs to have a Datastore hosted in AWS. The following requirements are in place for the Datastore: a) An initial storage capacity of 8 TB b) The ability to accommodate database growth of 8 GB per day c) The ability to have 4 Read Replicas Which of the following datastores is the best for this requirement? A)DynamoDB B)Amazon S3 C)Amazon Aurora D)SQL Server

C) Amazon Aurora. Aurora storage scales automatically as the database grows (comfortably beyond the 8 TB initial requirement plus 8 GB per day), and Aurora supports up to 15 Aurora Replicas, which covers the requirement for 4 Read Replicas. Amazon S3 is object storage rather than a database, DynamoDB does not use Read Replicas, and SQL Server on RDS does not offer the same automatic storage scaling and replica support.

Question 12 of 65 There is a requirement to host a database on an EC2 Instance. It is also required that the EBS volume should support 18,000 IOPS. Which Amazon EBS volume type would meet the performance requirements of this database? A)EBS Provisioned IOPS SSD B)EBS Throughput Optimized HDD C)EBS General Purpose SSD D)EBS Cold HDD

Correct Answer - A For high performance and high IOPS requirements, as in this case, the ideal choice is EBS Provisioned IOPS SSD. For more information on AWS EBS Volume types, please visit the following URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
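
For instance (the size, zone, and IOPS:size ratio below are illustrative), an io1 volume sized for 18,000 IOPS could be provisioned like this:

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD volume that guarantees the required 18,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,             # GiB; chosen so the IOPS:size ratio stays within io1 limits
    VolumeType="io1",
    Iops=18000,
)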

You are working as an AWS Consultant for an E-Commerce organization. The organization is planning to migrate to a managed database service using Amazon RDS. To avoid any business loss due to any deletion in the database, the management team is looking for a backup process that can restore the database to any specific point in time during the last month. Which of the following describes the Amazon RDS automated backup process? A)AWS performs a storage volume snapshot of the database instance during the backup window once a day, captures transaction logs every 5 minutes, and stores them in S3 buckets. B)AWS performs a full snapshot of the database every 12 hours during the backup window, captures transaction logs throughout the day, and stores them in S3 buckets. C)AWS performs a full daily snapshot during the backup window. Given this doesn't provide point-in-time restoration, it does not meet the requirements. D)AWS performs a storage volume snapshot of the database instance every 12 hours during the backup window, captures transaction logs throughout the day, and stores them in S3 buckets.

Correct Answer - A During automated backup, Amazon RDS performs a storage volume snapshot of the entire database instance once a day during the backup window. It also captures transaction logs every 5 minutes. To restore a DB instance to a specific point in time, a new DB instance is created using this backup data. Option B is incorrect as automated backups take a daily storage volume snapshot, not a full snapshot every 12 hours. Option C is incorrect as the captured transaction logs do enable point-in-time restoration in addition to the daily snapshot. Option D is incorrect as AWS performs the storage volume snapshot on a daily basis, not every 12 hours. For more information on the Amazon RDS automated backup process and restoring a DB instance to a specified time, refer to the following URL: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

A Media firm has a global presence for its Sports programming and broadcasting network which uses AWS Infrastructure. They have multiple AWS accounts created based upon verticals, and to manage these accounts they have created an AWS Organization. Recently this firm was acquired by another media firm which is also using AWS Infrastructure for media streaming services. Both firms need to merge their AWS Organizations so that new policies can be created and enforced in all the member AWS accounts of the merged entity. As an AWS Consultant, which of the following steps would you suggest to the client to move the master account of the original media firm to the AWS Organization used by the merged entity? (Select THREE.) A)Remove all member accounts from the organization. B)Make another member account a master account. C)Delete the old organization. D)Invite the old master account to join the new organization as a member account.

Correct Answer - A, C, D To move the master account from one AWS Organization to another, the following steps need to be performed: remove all member accounts from the old organization, delete the old organization, and then invite the master account of the old organization to join the new organization as a member account. Option B is incorrect as the master account of an AWS Organization cannot be replaced with another member account.

Currently, a company makes use of EBS snapshots to back up their EBS Volumes. As a part of the business continuity requirement, these snapshots need to be made available in another region. How could this be achieved? A)Directly create the snapshot in the other region. B)Create Snapshot and copy the snapshot to a new region. C)Copy the snapshot to an S3 bucket and then enable Cross-Region Replication for the bucket. D)Copy the EBS Snapshot to an EC2 instance in another region.

Correct Answer - B AWS Documentation mentions the following: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, follow the link on Restoring an Amazon EBS Volume from a Snapshot below. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. For more information on EBS Snapshots, please visit the following URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html For more information on Restoring an Amazon EBS Volume from a Snapshot, please visit the following link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html Option C is incorrect because the EBS snapshots are stored in S3, which is managed by AWS. We don't have the option to see the snapshots in S3.
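
A sketch of the cross-region copy (the snapshot ID and regions are placeholders); note that the call is made against the destination region and references the source region:

import boto3

# Run the copy from the destination region, referencing the source region.
ec2_dest = boto3.client("ec2", region_name="eu-west-1")

ec2_dest.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",   # hypothetical snapshot in the source region
    Description="Business continuity copy of production EBS snapshot",
)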

You currently have your EC2 instances running in multiple availability zones. You have a NAT gateway defined for your private instances and you want to make this highly available. How could this be accomplished? A)Create another NAT Gateway and place it behind an ELB. B)Create a NAT Gateway in another Availability Zone. C)Create a NAT Gateway in another region. D)Use Auto Scaling groups to scale the NAT Gateway.

Correct Answer - B AWS Documentation mentions the following: If you have resources in multiple Availability Zones and they share one NAT Gateway, in the event that the NAT Gateway's Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT Gateway in each Availability Zone and configure your routing to ensure that resources use the NAT Gateway in the same Availability Zone. For more information on the NAT Gateway, please refer to the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

A company has its entire infrastructure hosted on AWS. It needs to create code templates that can provision the same set of resources in another region in case of a disaster in the primary region. Which AWS service can be helpful in this regard? A)AWS Beanstalk B)AWS CloudFormation C)AWS CodeBuild D)AWS CodeDeploy

Correct Answer - B AWS Documentation provides the following information to support this requirement: AWS CloudFormation provisions your resources in a safe and repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform while managing your stack and rolls back changes automatically if errors are detected. For more information on AWS CloudFormation, please visit the following URL: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html AWS Elastic Beanstalk is an orchestration service for deploying applications which orchestrates various AWS services, including EC2, S3, SNS, CloudWatch, Auto Scaling, and ELB. https://aws.amazon.com/elasticbeanstalk/ AWS CodeBuild is a fully managed continuous integration (CI) service that compiles source code, runs tests, and produces software packages that are ready to deploy. Using it, you don't need to provision, manage, and scale your own build servers. https://aws.amazon.com/codebuild/ CodeDeploy is a service that automates application deployments to a variety of compute services including EC2, Fargate, Lambda, and on-premises instances. It protects your application from downtime during deployments through rolling updates and deployment health tracking. https://aws.amazon.com/codedeploy/

Question 61 of 65 An application hosted in AWS allows users to upload videos to an S3 bucket. A user is required to be given access to upload some videos for a week based on the profile. How could this be accomplished in the best way possible? A)Create an IAM bucket policy to provide access for one week. B)Create a pre-signed URL for each profile which will last for one week. C)Create an S3 bucket policy to provide access for one week. D)Create an IAM role to provide access for one week.

Correct Answer - B Pre-signed URLs are the perfect solution when you want to give temporary access to users for S3 buckets. So, whenever a new profile is created, you can create a pre-signed URL to ensure that the URL lasts for a week and allows users to upload the required objects. For more information on pre-signed URLs, please visit the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
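
As an illustration (the bucket and key names are placeholders), a pre-signed upload URL valid for one week, which is also the maximum lifetime for SigV4 pre-signed URLs, could be generated like this:

import boto3

s3 = boto3.client("s3")

# URL that lets the profile owner upload a specific object for 7 days.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-video-uploads", "Key": "profiles/jdoe/video.mp4"},
    ExpiresIn=604800,   # 7 days, in seconds
)
print(url)  # hand this URL to the user; an HTTP PUT to it uploads the video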

A company wants to host a web application and a database layer in AWS. This will be done with the use of subnets in a VPC. What would be a proper architectural design for supporting the required tiers of the application? A)Use a public subnet for the web tier and another public subnet for the database layer. B)Use a public subnet for the web tier and a private subnet for the database layer. C)Use a private subnet for the web tier and another private subnet for the database layer. D)Use a private subnet for the web tier and a public subnet for the database layer.

Correct Answer - B The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet. For more information on public and private subnets in AWS, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a default VPC private subnet created with default ACL settings. The web servers must be accessible only to customers on an SSL connection and the database must only be accessible to web servers in a public subnet. Which solution would meet these requirements without impacting other applications? (SELECT TWO) A)Create a network ACL on the Web Server's subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0 B)Create a Web Server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the Web Servers. C)Create a DB Server security group that allows MySQL port 3306 inbound and specify the source as the Web Server security group. D)Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for Web Servers and deny all outbound traffic. E)Create a DB Server security group that allows HTTPS port 443 inbound and specify the source as a Web Server security group.

Correct Answer - B and C This sort of setup is explained in the AWS documentation. 1) To ensure that traffic can flow into your web server from anywhere over a secure connection, you need to allow inbound traffic on port 443. 2) And then, you need to ensure that traffic can flow from the web servers to the database server via the database security group. For more information, review this case scenario at the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html Options A and D are invalid answers. Network ACLs are stateless, so we would need to set rules for both inbound and outbound traffic for Network ACLs. Option E is also invalid because, in order to communicate with the MySQL servers, we need to allow traffic to flow through port 3306. Note: The above correct options are the combination of steps required to secure your web and database servers. In addition, the company may implement additional security measures from their end.
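
A sketch of the two rules (the security group IDs are placeholders): HTTPS from anywhere into the web tier, and MySQL into the DB tier only from the web tier's security group.

import boto3

ec2 = boto3.client("ec2")

# Web tier: allow HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbb2222cc",   # hypothetical web server security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# DB tier: allow MySQL only from the web tier security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0ddd3333eee4444ff",   # hypothetical DB server security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0aaa1111bbb2222cc"}],
    }],
)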

A team is building an application that must persist and index JSON data in a highly available data store. The latency of data access must remain consistent despite very high application traffic. Which service would help the team to meet the above requirement? A)Amazon EFS B)Amazon Redshift C)DynamoDB D)AWS CloudFormation

Correct Answer - C AWS Documentation mentions the following about DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB natively supports storing and indexing JSON-style document data, and hence it is the right data store to meet the requirement mentioned in the question. For more information on AWS DynamoDB, please visit the following URL: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

A Solutions Architect is designing an online shopping application running in a VPC on EC2 Instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer-managed database cluster. There should be no access to the database from the Internet but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements? A)Public subnets for both the application tier and the database cluster. B)Public subnets for the application tier and private subnets for the database cluster. C)Public subnets for both application tier and NAT Gateway and private subnets for the database cluster. D)Private subnets for the application tier and private subnets for both the database cluster and NAT Gateway

Correct Answer - C We always need to keep the NAT gateway in a public subnet, because it needs to communicate with the Internet. AWS says that "To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you've created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet." For more information on this setup, please refer to the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html NOTE: Here the requirement is that "There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet." 1) There should be no access to the database from the Internet. To achieve this, we have to launch the database inside the private subnet. 2) But the cluster must be able to obtain software patches from the Internet. For this, we have to create the NAT Gateway inside the public subnet, because a subnet with an internet gateway attached is known as a public subnet. Through the NAT Gateway, a database inside the private subnet can access the internet. Option D says to use a private subnet for the NAT Gateway, which is wrong. Option C includes both of these points and is therefore the correct answer.
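
A minimal sketch of that wiring with placeholder IDs: the NAT Gateway lives in the public subnet, and the private route table sends Internet-bound traffic to it.

import boto3

ec2 = boto3.client("ec2")

# NAT Gateway must live in a public subnet and needs an Elastic IP allocation.
ngw = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbb2222cc",        # hypothetical public subnet
    AllocationId="eipalloc-0123456789abcdef0",  # pre-allocated Elastic IP
)
ngw_id = ngw["NatGateway"]["NatGatewayId"]

# Private subnet route table: send Internet-bound traffic to the NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0ddd3333eee4444ff",       # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=ngw_id,
)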

You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups? A)Cluster placement strategy. B)Spread placement strategy. C)Partition placement strategy. D)Network placement strategy.

Correct Answer - C Placement groups have the placement strategies of Cluster, Partition and Spread. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for distributed and replicated workloads such as Cassandra. For details, please refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placementgroups.html#placement-groups-limitations-partition Option A is incorrect: the Cluster placement strategy puts instances close together in a single Availability Zone. This does not resolve the problem mentioned in the question. Option B is incorrect: the Spread placement strategy places each instance on a distinct rack; it does not group multiple instances into partitions. Option C is CORRECT: with the Partition placement strategy, instances in a partition have their own set of racks. Option D is incorrect: there is no Network placement strategy.
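
A sketch of creating such a group (the group name and partition count are illustrative); instances would then be launched into it.

import boto3

ec2 = boto3.client("ec2")

# Partition placement group: each partition gets its own set of racks.
ec2.create_placement_group(
    GroupName="cassandra-ring",   # hypothetical group name
    Strategy="partition",
    PartitionCount=3,
)

# Instances can then be launched with Placement={"GroupName": "cassandra-ring"}.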

A company hosts 5 web servers in AWS. They want to ensure that Route53 can be used to route user traffic to random healthy web servers when they request for the underlying web application. Which routing policy should be used to fulfill this requirement? A)Simple B)Weighted C)Multivalue Answer D)Latency

Correct Answer - C The AWS Documentation mentions the following to support this: If you want to route traffic randomly to multiple resources such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. For example, suppose you manage an HTTP web service with a dozen web servers where each has its own IP address. No web server could handle all the traffic, but if you create a dozen multivalue answer records, Amazon Route 53 responds to DNS queries with up to eight healthy records in response to each DNS query. Amazon Route 53 gives different answers to different DNS resolvers. If a web server becomes unavailable after a resolver caches a response, client software can try another IP address in the response. Simple routing policy - Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website. Latency routing policy - Use when you have resources in multiple locations and you want to route traffic to the resource that provides the best latency. Weighted routing policy - Use to route traffic to multiple resources in proportions that you specify. Multivalue answer routing policy - Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
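 
A hedged sketch of one multivalue answer record (the hosted zone ID, health check ID, domain, and IP are placeholders); one such record would be created per web server, each with its own SetIdentifier.

import boto3

route53 = boto3.client("route53")

# One multivalue answer record per web server, each tied to its own health check.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "TTL": 60,
            "SetIdentifier": "web-server-1",
            "MultiValueAnswer": True,
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)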

A database, hosted using the Amazon RDS service, is getting a lot of database queries and has now become a bottleneck for the associated application. Which action would ensure that the database is not a performance bottleneck? A)Setup a CloudFront distribution in front of the database. B)Setup an ELB in front of the database. C)Setup ElastiCache in front of the database. D)Setup SNS in front of the database.

Correct Answer - C ElastiCache is an in-memory solution which can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database. Option A is incorrect because CloudFront is normally used for content distribution. Option B is incorrect on its own; an ELB would only help if there were additional database instances to balance traffic across. Option D is incorrect because SNS is a simple notification service. For more information on ElastiCache, please visit the following URL: https://aws.amazon.com/elasticache/

Your company manages an application that currently allows users to upload images to an S3 bucket. These images are picked up by EC2 Instances for processing and then placed in another S3 bucket. You need a data store where the metadata for these images can be kept. Which service should you use? A)AWS Redshift B)AWS Glacier C)AWS DynamoDB D)AWS SQS

Correct Answer - C Option A is incorrect because Redshift is normally used for petabyte-scale data warehousing. Option B is incorrect because Glacier is used for archive storage. Option C is correct. AWS DynamoDB is the best, lightweight and durable storage option for metadata. Option D is incorrect because SQS is used for messaging purposes. For more information on DynamoDB, please refer to the URL below: https://aws.amazon.com/dynamodb/

Question 3 of 65 You have developed a new web application on AWS for a real estate firm. It has a web interface where real estate employees upload photos of newly constructed houses to S3 buckets. Prospective buyers log in to the website and access the photos. The marketing team has initiated an intensive marketing event to promote new housing schemes, which will lead to customers frequently accessing these images. As this is a new application, you have no projection of traffic. You have created Auto Scaling across multiple instance types for these web servers, but you also need to optimize the cost for storage. You don't want to compromise on latency, and all images should be downloaded instantaneously without any outage. Which of the following is a recommended storage solution to meet this requirement? A)Use One Zone-IA storage class to store all images. B)Use Standard-IA to store all images. C)Use S3 Intelligent-Tiering storage class. D)Use Standard storage class, use Storage class analytics to identify & move objects using lifecycle policies.

Correct Answer - C When the access pattern to a web application using S3 storage buckets is unpredictable, you can use the S3 Intelligent-Tiering storage class. The S3 Intelligent-Tiering storage class includes two access tiers: frequent access and infrequent access. Based upon access patterns, it moves data between these tiers, which helps in cost saving. The S3 Intelligent-Tiering storage class has the same performance as the Standard storage class. Option A, "One Zone-IA", is incorrect. Although it will save cost, it will not provide any protection in case of AZ failure. Also, this class is suitable for infrequently accessed data and not for frequently accessed data. Option B is incorrect as the "Standard-IA" storage class is for infrequently accessed data and there are retrieval charges associated. In the above requirement, you do not have any projections of data being accessed, which may result in a higher cost. Option D, "Standard storage class", is incorrect. It has operational overhead to set up storage class analytics and move objects between various classes. Also, since the access pattern is undetermined, this would turn out to be a costlier option. For more information on S3 Intelligent-Tiering, refer to the following

Question 40 of 65 You have a web application hosted on an EC2 Instance in AWS which is being accessed by users across the globe. The Operations team has been receiving support requests about extreme slowness from users in some regions. What can be done to the architecture to improve the response time for these users? A)Add more EC2 Instances to support the load. B)Change the Instance type to a higher instance type. C)Add Route 53 health checks to improve the performance. D)Place the EC2 Instance behind CloudFront.

Correct Answer - D AWS Documentation mentions the following: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay) so that content is delivered with the best possible performance. For more information on Amazon CloudFront, please refer to the below URL: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html Option A is incorrect. The latency issue is experienced by people from certain parts of the world only. So, increasing the number of EC2 Instances or increasing the instance size will not make much difference. Option B is incorrect. The latency issue is experienced by people from certain parts of the world only. So, changing the Instance type to a higher instance type will not make much difference. Option C is incorrect. Route 53 health checks are meant to see whether the instance status is healthy or not. Since this case deals with responding to requests from users, we do not have to worry about this. However, for improving latency issues, CloudFront is a good solution.

Your company has a set of EC2 Instances hosted in AWS. It is mandatory to prepare for disasters and come up with the necessary disaster recovery procedures. What would be helpful in mitigating the effects of a disaster for the EC2 Instances? A)Place an ELB in front of the EC2 Instances. B)Use Auto Scaling to ensure that the minimum number of instances are always running. C)Use CloudFront in front of the EC2 Instances. D)Use AMIs to recreate the EC2 Instances in another region.

Correct Answer - D You can create an AMI from the EC2 Instances and then copy them to another region. In case of a disaster, an EC2 Instance can be created from the AMI. Options A and B are good for fault tolerance, but cannot help completely in disaster recovery for the EC2 Instances. Option C is incorrect because we cannot determine if CloudFront would be helpful in this scenario or not without knowing what is hosted on the EC2 Instance. For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, options A, B and C are not feasible solutions. For more information on AWS AMIs, please visit the following URL: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
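
A sketch of the cross-region AMI copy (the AMI ID, name, and regions are placeholders); it is run against the destination (DR) region:

import boto3

# Copy the AMI into the DR region so instances can be launched there after a disaster.
ec2_dr = boto3.client("ec2", region_name="eu-west-1")

ec2_dr.copy_image(
    Name="web-server-dr-copy",
    SourceImageId="ami-0123456789abcdef0",   # hypothetical AMI in the primary region
    SourceRegion="us-east-1",
)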

You are working as an AWS Architect for a start-up company. The company has a two-tier production website on AWS with web servers in the front end and database servers in the back end. A third-party firm has been looking after the operations of these database servers. They need to access these database servers in private subnets on the SSH port. As per the standard operating procedure provided by the Security team, all access to these servers should be over a secure layer. What will be the best solution to meet this requirement? A)Deploy Bastion hosts in Private Subnet B)Deploy NAT Instance in Private Subnet C)Deploy NAT Instance in Public Subnet D)Deploy Bastion hosts in Public Subnet

Correct Answer - D External users will be unable to access instances in private subnets directly. To provide such access, we need to deploy Bastion hosts in public subnets. For the above requirement, third-party users will initiate an SSH connection to the Bastion hosts in the public subnets, and from there they will SSH to the database servers in the private subnets. Option A is incorrect as Bastion hosts need to be in public subnets, not in private subnets, because third-party users will be accessing these servers from the internet. Option B is incorrect as a NAT instance is used to provide internet traffic to hosts in private subnets; users from the internet cannot make SSH connections to hosts in private subnets through a NAT instance, and a NAT instance is always placed in a public subnet. Option C is incorrect for the same reason: a NAT instance only provides outbound internet access for hosts in private subnets and does not allow inbound SSH connections from the internet. For more information on bastion instances, refer to the following URL: https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html

There is a website hosted in AWS that might get a lot of traffic over the next couple of weeks. If the application experiences a natural disaster at this time, what should be used to reduce potential disruption to users? A)Use an ELB to divert traffic to an Infrastructure hosted in another region. B)Use an ELB to divert traffic to an Infrastructure hosted in another AZ. C)Use CloudFormation to create backup resources in another AZ. D)Use Route53 to route requests to another instance in a different region

Correct Answer - D In a disaster recovery scenario, the best choice out of the given options is to use Route 53 to divert traffic to infrastructure in a different region. Option A is wrong because an ELB can only balance traffic within one region, not across multiple regions. Options B and C are incorrect because staying within the Availability Zones of a single region is not enough for disaster recovery purposes. For more information on disaster recovery in AWS, please visit the following URLs: https://aws.amazon.com/premiumsupport/knowledge-center/fail-over-s3-r53/ https://aws.amazon.com/disaster-recovery/ The wording "reduce potential disruption to users" points to a disaster recovery situation. There is more than one way to manage this situation; however, we need to choose the best option from the list given here, and the most suitable one is Option D.

You are consulting for a manufacturing company that uses a set of EC2 instances to automate the production of products. The EC2 instances run software that works through a workflow, invoking various AWS services, storing and retrieving data, and ensuring orders flow through a set of steps: A->B->C->D->E. The steps include some human interaction and can take weeks to complete. What might be a cost effective alternative? A Migrate the flows to one or more state machines. B Continue using EC2; the long-running workflows require compute to run 24/7/365. C Use a Lambda function to coordinate the tasks. D Use a Lambda function, but ensure the timeout is set to 1 year.

Correct Answer: A Why is this correct? State machines are used by Step Functions. This product is serverless and can orchestrate long-running workflows involving other AWS services and human interaction. https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
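
As a hedged illustration of a state machine for the A->B->C->D->E flow (the state machine name and role ARN are placeholders, and each step is simplified to a Pass state; a real workflow would use Task states for Lambda calls, activities for human interaction, and so on), the definition might be registered like this:

import json
import boto3

sfn = boto3.client("stepfunctions")

# Simplified Amazon States Language definition: one state per order step.
definition = {
    "StartAt": "StepA",
    "States": {
        "StepA": {"Type": "Pass", "Next": "StepB"},
        "StepB": {"Type": "Pass", "Next": "StepC"},
        "StepC": {"Type": "Pass", "Next": "StepD"},
        "StepD": {"Type": "Pass", "Next": "StepE"},
        "StepE": {"Type": "Pass", "End": True},
    },
}

sfn.create_state_machine(
    name="production-workflow",                                   # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",   # placeholder role
)

Standard Step Functions workflows can run for up to a year, which is why weeks-long, human-in-the-loop flows fit without keeping EC2 compute running the whole time.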

34. You run an online commerce business which prints t-shirts with custom prints. You need a long-running workflow (that takes at least 2 weeks) to be initiated whenever an order is received. That order is put into a DynamoDB orders table upon arrival. What AWS products should you use to ensure the most cost-effective solution for this workflow? A Step Functions B Lambda C DynamoDB Streams D SQS

Correct Answers: A, B, and C. Why is A correct? AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services, such as AWS Lambda, AWS Fargate, and Amazon SageMaker, into feature-rich applications. Why is B correct? AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. Why is C correct? Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers - pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an AWS Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. The Lambda function can perform any actions you specify, such as sending a notification or initiating a workflow.
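
Putting the three together, a hedged sketch of a Lambda handler that reacts to new order records from the DynamoDB stream and starts the long-running Step Functions workflow; the state machine ARN and the orderId attribute are placeholders, not details from the question.

import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Invoked by the DynamoDB stream on the orders table.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            new_order = record["dynamodb"]["NewImage"]
            sfn.start_execution(
                stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:order-workflow",
                input=json.dumps({"orderId": new_order["orderId"]["S"]}),
            )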

You have been asked to suggest the most secure way to connect two AWS VPCs, and the solution should use the least amount of additional infrastructure as possible. What should you suggest? A VPC peering B OpenVPN C AWS Organizations D Direct Connect

Correct Answer: A Why is this correct? VPC peering allows two VPCs to be connected from a networking perspective. It requires no additional hardware or instances to support it. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html Direct Connect is used to connect VPCs to on-premises locations. It cannot be used to connect two VPCs.

A large regional voting application is running on an EC2 instance and has been performing badly. The application vendor has tried to assist but mentioned that for usage at this level, the application needs around 40,000 IOPS. The EC2 instance is currently running using GP volumes. When the voting has concluded, the volume needs to be detached and used on a bespoke analytics application. Which type of storage should you suggest? A Change to io1. B Leave on GP2 and increase the IOPS level. C Change to sc1. D Change to instance store.

Correct Answer: A Why is this correct? io1 can reach a max performance of 64,000 IOPS and is the best option for these extreme levels. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
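As a rough sketch (volume size and AZ are assumptions, not from the question), the replacement volume could be created with Provisioned IOPS; note that io1 requires enough capacity to support the requested IOPS ratio, so 40,000 IOPS needs at least 800 GiB:
aws ec2 create-volume \
  --volume-type io1 \
  --iops 40000 \
  --size 800 \
  --availability-zone us-east-1a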

Third-party sign-in (Federation) has been implemented in your web application to allow users who need access to AWS resources. Users have been successfully logging in using Google, Facebook, and other third-party credentials. Suddenly, their access to some AWS resources has been restricted. What is the most likely cause of the restricted use of AWS resources? A)IAM policies for resources were changed, thereby restricting access to AWS resources B)Federation protocols are used to authorize services and need to be updated C)AWS changed the services allowed to be accessed via federated login D)The identity providers no longer allow access to AWS services

Correct Answer: A Option A is correct. When IAM policies are changed, they can impact the user experience and the services users can connect to. Option B is incorrect. Federation is used to authenticate users, not to authorize services. Option C is incorrect. AWS does not change which services can be accessed via federated login; access is governed by the IAM policies attached to the assumed role. Option D is incorrect. The identity providers don't have the capability to authorize services; they authenticate users. References: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-identityfederation.html https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html https://aws.amazon.com/articles/web-identity-federation-with-mobile-applications/

To improve security in your AWS account you have decided to enable multi-factor authentication (MFA). You can authenticate using an MFA device in which two ways? (choose 2) A)Using the AWS API B)Using biometrics C)Through the AWS Management Console D)Locally to EC2 instances E)Using a key pair

Correct Answer: A) C) You can authenticate using an MFA device in the following ways: Through the AWS Management Console - the user is prompted for a user name, password and authentication code Using the AWS API - restrictions are added to IAM policies and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests Using the AWS CLI by obtaining temporary security credentials from STS (aws sts get-session-token)
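A hedged example of the API/CLI path, assuming a virtual MFA device ARN and a sample six-digit code (both placeholders):
aws sts get-session-token \
  --serial-number arn:aws:iam::111122223333:mfa/jdoe \
  --token-code 123456 \
  --duration-seconds 3600
The temporary credentials returned can then be used to call APIs that an IAM policy restricts to MFA-authenticated requests.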

32. You are running an application on an EC2 instance that is extremely sensitive to variations in network performance, specifically the variation in ping times and latency. The application also devours CPU cycles when this network jitter happens, so you need to implement a solution that removes any risk of network performance degradation. What option works in this scenario? A Ensure the VPC is running in dedicated tenancy mode. B Ensure the instance has enhanced networking. C Ensure you are using an X1 instance. D Ensure the instance is EBS optimized.

Correct Answer: B Why is this correct? Enhanced networking (https://aws.amazon.com/premiumsupport/knowledge-center/enable-configure-enhanced-networking/) allows high-performance networking by bypassing the need for CPU involvement in virtualizing a network interface. This increases packets per second and decreases the variability in network performance.
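As a sketch (the instance ID is a placeholder), ENA-based enhanced networking can be checked, and enabled on a stopped instance, from the CLI:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].EnaSupport"
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ena-support
The instance type and AMI must also support ENA for enhanced networking to take effect.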

You have been asked to create a scalable deployment for a new business application. The application uses Java and requires lots of supporting libraries and frameworks. The total time for the installation is 25 minutes. If the business needs the application to scale in an elastic way, rapidly reacting to changes in system load, what method should you suggest for installing, deploying, and scaling the application? A Use a launch template to add the application installation commands. B Install the application on an EC2 instance and create an AMI. C Install the application directly using instance metadata. D Add the application installation commands to an Auto Scaling group.

Correct Answer: B Why is this correct? This is an example of an AMI Pre-bake architecture, which would work. The 25-minute installation would be done once, with the results stored in an AMI — and this could be used with a launch configuration/launch template and an Auto Scaling group to scale the application. https://aws.amazon.com/answers/configuration-management/aws-ami-design/ Your Answer: D Why is this incorrect? This isn't valid. An Auto Scaling group doesn't store what the instance is — it cannot have installation commands added to it. The launch configuration/launch template stores this, and it's used by the Auto Scaling group.

You are architecting a solution for a mobile application your developers are creating. You need to allow logins to the application and for those logins to access AWS resources. The application will start with 3,000 users but could reach 1,000,000 within 12 months. What resource access method should you suggest? A The application should use the AWS APIs to create an IAM user for every application user. Use long-term credentials to access resources. B Create an IAM role that trusts an external IDP. Provide this role with permissions for the AWS services. C The application should use the AWS APIs to create an IAM user for every application user. Use short-term credentials to access resources. D Configure the AWS services using resource policies to accept incoming connections from identities using Facebook, Twitter, or Google credentials. Use Google IdP to verify these credentials.

Correct Answer: B Why is this correct? Web identity federation is the best architecture to use where an external IDP is trusted to assume an IAM role. https://docs.aws.amazon.com/amazondynamodb/latest/dev

You have been tasked to come up with a solution for a custom application running on a small number of EC2 instances. You need to configure CloudWatch to log a custom application performance metric. Which two choices below would you choose for the solution? A Add the needed permissions to your application running on the EC2 instances. B Install the CloudWatch agent on the EC2 instances C Enable detailed monitoring for your EC2 instances D Create IAM roles for the EC2 instances

Correct Answer: B Why is this correct? The CloudWatch agent needs to be installed on the EC2 instances to monitor and track the needed metrics. Correct Answer: D Why is this correct? The EC2 instances will need an IAM role to allow the CloudWatch agent to be installed and also to send the metrics to CloudWatch.

You have been asked to implement a personal S3 storage area for every staff member within a client organization. There are 1,000 staff members and each requires access to an area no other staff members can access. Which option is both possible and has the least admin overhead to implement and manage? A Create an S3 bucket for each staff member, create an IAM group, apply a policy using variables to the group, and add staff members to the group. B Create an S3 bucket for each staff member and add a resource policy onto each bucket, restricting access. C Create a single S3 bucket. Give each staff member a prefix in the bucket. Create a single managed policy using variables, apply it to a group, and add staff members to the group. D Create a single S3 bucket. Give each staff member a prefix in the bucket. Create a single managed policy and apply it to every staff member's IAM user.

Correct Answer: C Why is this correct? This is the best solution. An IAM policy using the username variable would limit each staff member to a prefix based on their username. Create the policy once, apply it once to a group, and add users to this group.
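A minimal sketch of such a policy, assuming a bucket named staff-data and a home/ prefix convention (both placeholders); the ${aws:username} variable resolves to the calling IAM user's name at evaluation time:
cat > per-user-s3-access.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::staff-data",
      "Condition": {"StringLike": {"s3:prefix": "home/${aws:username}/*"}}
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::staff-data/home/${aws:username}/*"
    }
  ]
}
EOF
aws iam create-policy --policy-name StaffHomeFolders --policy-document file://per-user-s3-access.json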

37. You have been asked to perform a security review for a client. They have a fleet of EC2 instances created by an Auto Scaling group and SQS queue to process jobs stored in DynamoDB. Currently, they retrieve access keys from an S3 bucket to gain access to other AWS resources. Recently, the bucket was exploited and the keys were leaked. The business has asked for a best-practice alternative solution for this architecture. What should you suggest? A Configure an S3 bucket policy only allowing access to the Auto Scaling group instances. B Add access keys to the Auto Scaling group configuration for delivery via the instance metadata. C Create a new launch template, IAM role, and instance profile. D Remove the access keys from the S3 bucket. E Leave the access keys stored in S3.

Correct Answer: C Why is this correct? An IAM role and instance profile can be used to deliver temporary credentials to EC2 instances securely. By configuring this in the launch template, it can be applied to all EC2 instances created by the Auto Scaling group. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html Correct Answer: D Why is this correct? This resolves the immediate issue causing the credential leak. Your Answer: A Why is this incorrect? This isn't a viable technical solution. A bucket policy can only reference identities — it cannot allow access for an Auto Scaling group.
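A rough outline of the CLI steps (role, profile, and policy names are placeholders; ec2-trust-policy.json would be a trust policy allowing ec2.amazonaws.com to assume the role):
aws iam create-role --role-name WorkerRole --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name WorkerRole --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
aws iam create-instance-profile --instance-profile-name WorkerProfile
aws iam add-role-to-instance-profile --instance-profile-name WorkerProfile --role-name WorkerRole
The instance profile is then referenced in the launch template so every instance the Auto Scaling group launches receives temporary credentials automatically.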

You have been asked to migrate a large microservice application into AWS; it currently uses Docker running on virtual servers. Your client would like to run Docker containers on EC2 instances. The client is also concerned with cost efficiency and low admin overhead. Which product would be a great solution? A EKS B ECS EC2 C ECS Fargate D EMR

Correct Answer: C Why is this correct? ECS with the Fargate launch type runs Docker containers in AWS with minimal administration overhead and at a lower cost, because there are no container instances to manage.

One of your environments utilizes DynamoDB as a database. You need to ensure it can only be accessed by a select number of people using specific IP addresses. What design changes do you suggest? A Create a security group, add allow rules for the IPs who need access, and attach the security group to DynamoDB B Using the AWS console or CLI, edit the table(s) requiring the restrictions, set the default security to Deny, and add the IPs they'll be accessing the table from. C Configure an IAM group (for each level of access), and add the people who need access. Give those groups access to the DynamoDB operations they need, but add a condition to the policy so it has to match the specific IP address. D Create an isolated VPC that is not connected to the internet, provision a private DynamoDB instance in the VPC, and allow those "select people" to connect to the VPC using a VPN.

Correct Answer: C Why is this correct? This is the best solution. By default, nobody has access to the DynamoDB tables unless they're granted access. Grants can be allowed via IAM users, who have policies with conditions matching specific IP addresses.

You are reviewing poor performance on a voting application running on DynamoDB. The table used to store votes has been allocated 5,000 WCU, but with three candidates you are achieving slightly over half of the expected write throughput to the table. Votes are written with a PK of candidate name and sort key of date and time. What could be a possible reason for the substandard performance? A DynamoDB cannot support 5,000 writes per second — buffer the writes or use DAX to improve write performance. B The sort key structure is the issue. C The partition key structure is the issue. D You are trying to do strongly consistent writes, which need 2x the WCU.

Correct Answer: C Why is this correct? Each occurrence of a PK value (candidate 1, candidate 2, candidate 3) is stored in one partition. A partition can support a max of 1,000 WCU. The small range of possible PK values is the reason for the low performance.

A large fleet of IoT devices is sending data to a Kinesis stream but experiencing an error of ProvisionedThroughputExceededException. How should you resolve the issue? A Create an additional Kinesis stream and load balance the IoT devices. B Adjust the partition key of the Kinesis data records. C Increase the number of shards in the stream. D Increase the size of the Kinesis shards.

Correct Answer: C Why is this correct? Increasing the number of shards is the recommended way to improve the performance of a Kinesis stream. https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
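For example (the stream name and target shard count are assumptions for illustration), the stream can be resharded with:
aws kinesis update-shard-count \
  --stream-name iot-telemetry \
  --target-shard-count 8 \
  --scaling-type UNIFORM_SCALING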

33. You are migrating a Windows file server into AWS so that it can be used by Workspaces (Virtual Desktops). What is the most cost-effective and resilient way to host this data in AWS and provide access to it using the SMB protocol? A S3 B EFS C EC2 instance running Windows server D FSx

Correct Answer: D Why is this correct? Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. It offers single-AZ and multi-AZ deployment options, fully managed backups, and encryption of data at rest and in transit. Amazon FSx file storage is accessible from Windows, Linux, and MacOS compute instances and devices running on AWS or on premises. You can optimize cost and performance for your workload needs with SSD and HDD storage options. Amazon FSx helps you lower TCO with data deduplication, reducing costs by up to 50-60% on your general-purpose file shares. It is easy to get started; there are no minimum commitments or upfront fees.

You are working for a large global biotech firm. Your global offices upload huge data sets regularly to a us-east-1-hosted S3 bucket. Which AWS service will provide all remote offices with improved transfer rates and reliability to S3? A Direct Connect B Enhanced networking C DAX D S3 transfer acceleration

Correct Answer: D Why is this correct? S3 transfer acceleration offers local S3 endpoints and routing back to the source bucket over the global AWS network backbone and can increase performance for all global offices. https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
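A sketch of enabling acceleration and pointing the CLI at the accelerate endpoint (the bucket and file names are placeholders):
aws s3api put-bucket-accelerate-configuration \
  --bucket global-uploads \
  --accelerate-configuration Status=Enabled
aws configure set default.s3.use_accelerate_endpoint true
aws s3 cp ./dataset.tar.gz s3://global-uploads/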

You have a requirement to make sure that an On-Demand EC2 instance can only be accessed from this IP address (110.238.98.71) via an SSH connection. Which configuration below will satisfy this requirement? A)Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 110.238.98.71/0 B)Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 110.238.98.71/0 C)Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source 110.238.98.71/32 D)Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source 110.238.98.71/32

Correct Answer: D) The SSH protocol uses TCP and port 22. Hence, Protocol - UDP, Port Range - 22, Source 110.238.98.71/32 and Protocol - UDP, Port Range - 22, Source 110.238.98.71/0 are incorrect as they use UDP. The remaining two options: Protocol - TCP, Port Range - 22, Source 110.238.98.71/32 and Protocol - TCP, Port Range - 22, Source 110.238.98.71/0 have one major difference, and that is their CIDR block. The requirement is to only allow the individual IP of the client and not the entire network, so proper CIDR notation should be used. The /32 denotes a single IP address and the /0 refers to the entire network. That is why Protocol - TCP, Port Range - 22, Source 110.238.98.71/0 is incorrect, as it allows the entire network instead of a single IP. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-rules
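The equivalent rule could be added from the CLI roughly as follows (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 110.238.98.71/32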

Question 29 of 65 A newly hired Solutions Architect is assigned to manage a set of CloudFormation templates that is used in the company's cloud architecture in AWS. The Architect accessed the templates and tried to analyze the configured IAM policy for an S3 bucket. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:Get*", "s3:List*"], "Resource": "*" }, { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::urbantech/*" } ] } What does the above IAM policy allow? (Select THREE.) Correct answer: B) C) F) A)An IAM user with this IAM policy is allowed to read and delete objects from the urbantech S3 bucket. B)An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. C)An IAM user with this IAM policy is allowed to write objects into the urbantech S3 bucket. D)An IAM user with this IAM policy is allowed to read objects in the urbantech S3 bucket but not allowed to list the objects in the bucket. E)An IAM user with this IAM policy is allowed to change access rights for the urbantech S3 bucket. F)An IAM user with this IAM policy is allowed to read objects from the urbantech S3 bucket.

Correct answer: B) C) F) Feedback: You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, AWS Organizations SCPs, ACLs, and session policies. IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API. When you create an IAM user, you can choose to allow console or programmatic access. If console access is allowed, the IAM user can sign in to the console using a user name and password. Or if programmatic access is allowed, the user can use access keys to work with the CLI or API. Based on the provided IAM policy, the user is allowed to get and list objects in any S3 bucket, and to write objects only to the 'urbantech' S3 bucket. The s3:PutObject action basically means that you can submit a PUT object request to the S3 bucket to store data. Hence, the correct answers are: - An IAM user with this IAM policy is allowed to read objects from all S3 buckets owned by the account. - An IAM user with this IAM policy is allowed to write objects into the 'urbantech' S3 bucket. - An IAM user with this IAM policy is allowed to read objects from the 'urbantech' S3 bucket. The option that says: An IAM user with this IAM policy is allowed to change access rights for the 'urbantech' S3 bucket is incorrect because the template does not have any statements which allow the user to change access rights in the bucket. The option that says: An IAM user with this IAM policy is allowed to read objects in the 'urbantech' S3 bucket but not allowed to list the objects in the bucket is incorrect because it can clearly be seen in the template that there is an s3:List* action which permits the user to list objects. The option that says: An IAM user with this IAM policy is allowed to read and delete objects from the 'urbantech' S3 bucket is incorrect because although you can read objects from the bucket, you cannot delete any objects. References: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

Question 48 of 65 A content management system (CMS) is hosted on a fleet of auto-scaled, On-Demand EC2 instances that use Amazon Aurora as its database. Currently, the system stores the file documents that the users uploaded in one of the attached EBS Volumes. Your manager noticed that the system performance is quite slow and he has instructed you to improve the architecture of the system. In this scenario, what will you do to implement a scalable, high throughput POSIX-compliant file system? A)Create an S3 bucket and use this as the storage for the CMS B)Use EFS C)Upgrade your existing EBS volumes to Provisioned IOPS SSD Volumes D)Use ElastiCache

Correct answer: B) Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance. This particular scenario tests your understanding of EBS, EFS, and S3. In this scenario, there is a fleet of On-Demand EC2 instances that stores file documents from the users to one of the attached EBS Volumes. The system performance is quite slow because the architecture doesn't provide the EC2 instances a parallel shared access to the file documents. Remember that an EBS Volume can be attached to one EC2 instance at a time, hence, no other EC2 instance can connect to that EBS Provisioned IOPS Volume. Take note as well that the type of storage needed here is a "file storage" which means that S3 is not the best service to use because it is mainly used for "object storage", and S3 does not provide the notion of "folders" too. This is why using EFS is the correct answer. Upgrading your existing EBS volumes to Provisioned IOPS SSD Volumes is incorrect because the scenario requires you to set up a scalable, high throughput storage system that will allow concurrent access from multiple EC2 instances. This is clearly not possible in EBS, even with Provisioned IOPS SSD Volumes. You have to use EFS instead. Using ElastiCache is incorrect because this is an in-memory data store that improves the performance of your applications, which is not what you need since it is not a file storage. Reference: https://aws.amazon.com/efs/
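As a sketch, once an EFS mount target exists in the instance's subnet, the file system can be mounted with NFSv4.1 using the standard options (the file system ID and region are placeholders):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
Every EC2 instance in the CMS fleet can mount the same file system concurrently, which is the parallel shared access the scenario needs.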

Question 59 of 65 You are designing a multi-tier web application architecture that consists of a fleet of EC2 instances and an Oracle relational database server. It is required that the database is highly available and that you have full control over its underlying operating system. Which AWS Service will you use for your database tier? A)Amazon EC2 instances with data replication in one Availability Zone B)Amazon EC2 instances with data replication between two different Availability Zones C)Amazon RDS with Multi-AZ deployments D)Amazon RDS

Correct answer: B) To achieve this requirement, you can deploy your Oracle database to Amazon EC2 instances with data replication between two different Availability Zones. Hence, this option is the correct answer. The deployment of this architecture can easily be achieved by using CloudFormation and Quick Start. Please refer to the reference link for information. The Quick Start deploys the Oracle primary database (using the preconfigured, general-purpose starter database from Oracle) on an Amazon EC2 instance in the first Availability Zone. It then sets up a second EC2 instance in a second Availability Zone, copies the primary database to the second instance by using the DUPLICATE command, and configures Oracle Data Guard. Amazon RDS and Amazon RDS with Multi-AZ deployments are both incorrect because the scenario requires you to have access to the underlying operating system of the database server. Remember that Amazon RDS is a managed database service, which means that Amazon is the one that manages the underlying operating system of the database instance and not you. The option that says: Amazon EC2 instances with data replication in one Availability Zone is incorrect since deploying to just one Availability Zone (AZ) will not make the database tier highly available. If that AZ went down, your database will be unavailable.

What listeners can you configure your Application Load Balancer to accept? Choose all that apply. A)RDP B)SSL C)HTTP D)HTTPS

Correct answer: C) D) Feedback: C, D - AWS ALBs support HTTP and HTTPS on ports 1-65535. Reference https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html

Question 53 of 65 You own a MySQL RDS instance in AWS Region us-east-1. The instance has a Multi-AZ instance in another availability zone for high availability. As the business grows, there are more and more clients coming from Europe (eu-west-2) and most of the database workload is read-only. What is the proper way to reduce the load on the source RDS instance? A)Create a snapshot of the instance and launch a new instance in eu-west-2. B)Promote the Multi-AZ instance to be a Read Replica and move the instance to eu-west-2 region. C)Configure a read-only Multi-AZ instance in eu-west-2 as Read Replicas cannot span across regions. D)Create a Read Replica in the AWS Region eu-west-2.

Correct answer: D)

A web application is using CloudFront to distribute their images, videos, and other static contents stored in their S3 bucket to its users around the world. The company has recently introduced a new member-only access to some of its high-quality media files. There is a requirement to provide access to multiple private media files only to their paying subscribers without having to change their current URLs. Which of the following is the most suitable solution that you should implement to satisfy this requirement? A)Create a Signed URL with a custom policy that only allows the members to see the private files. B)Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member. C)Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members. D)Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them.

Correct answer: D) CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content. If you want to serve private content through CloudFront and you're trying to decide whether to use signed URLs or signed cookies, consider the following: Use signed URLs for the following cases: - You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions. - You want to restrict access to individual files, for example, an installation download for your application. - Your users are using a client (for example, a custom HTTP client) that doesn't support cookies. Use signed cookies for the following cases: - You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website. - You don't want to change your current URLs. Hence, the correct answer for this scenario is the option that says: Use Signed Cookies to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required Set-Cookie headers to the viewer which will unlock the content only to them. The option that says: Configure your CloudFront distribution to use Match Viewer as its Origin Protocol Policy which will automatically match the user request. This will allow access to the private content if the request is a paying member and deny it if it is not a member is incorrect because Match Viewer is an Origin Protocol Policy which configures CloudFront to communicate with your origin using HTTP or HTTPS, depending on the protocol of the viewer request. CloudFront caches the object only once even if viewers make requests using both HTTP and HTTPS protocols. The option that says: Create a Signed URL with a custom policy which only allows the members to see the private files is incorrect because Signed URLs are primarily used for providing access to individual files, as shown in the above explanation. In addition, the scenario explicitly says that they don't want to change their current URLs, which is why implementing Signed Cookies is more suitable than Signed URLs. The option that says: Configure your CloudFront distribution to use Field-Level Encryption to protect your private data and only allow access to members is incorrect because Field-Level Encryption only allows you to securely upload user-submitted sensitive information to your web servers. It does not provide access to download multiple private files. Reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html

Question 47 of 65 A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement? A)Use a combination of IAM and STS to restrict access to your RDS instance via a temporary token. B)Create an IAM Role and assign it to your EC2 instances which will grant exclusive access to your RDS instance. C)Configure SSL in your application to encrypt the database connection to RDS. D)Enable the IAM DB Authentication.

Correct answer: D) You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token. An authentication token is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. IAM database authentication provides the following benefits: Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL). You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance. For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security. Hence, enabling IAM DB Authentication is the correct answer based on the above reference. Configuring SSL in your application to encrypt the database connection to RDS is incorrect because an SSL connection is not using an authentication token from IAM. Although configuring SSL to your application can improve the security of your data in flight, it is still not a suitable option to use in this scenario. Creating an IAM Role and assigning it to your EC2 instances which will grant exclusive access to your RDS instance is incorrect because although you can create and assign an IAM Role to your EC2 instances, you still need to configure your RDS to use IAM DB Authentication. Using a combination of IAM and STS to restrict access to your RDS instance via a temporary token is incorrect because you have to use IAM DB Authentication for this scenario, and not a combination of an IAM and STS. Although STS is used to send temporary tokens for authentication, this is not a compatible use case for RDS. Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
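A hedged sketch of the flow from an EC2 instance with an attached role (the endpoint, database user, and CA bundle file name are placeholders):
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.abcdefg12345.us-east-1.rds.amazonaws.com \
  --port 3306 --username app_user --region us-east-1)
mysql --host=mydb.abcdefg12345.us-east-1.rds.amazonaws.com --port=3306 \
  --user=app_user --password="$TOKEN" \
  --ssl-ca=rds-combined-ca-bundle.pem --enable-cleartext-plugin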

Question 57 of 65 Which of the following AWS CLI commands can tell you whether an access key is still being used? A)aws iam get-access-key-used --access-key-id B)aws iam --get-access-key-last-used access-key-id C)aws iam get-access-key-last-used access-last-key-id D)aws iam get-access-key-last-used --access-key-id

Correct: D) aws iam get-access-key-last-used --access-key-id Feedback: The top-level command is iam, while the correct subcommand is get-access-key-last-used. The parameter is identified by --access-key-id. Parameters (not subcommands) are always prefixed with -- characters.
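For example (the key ID below is the AWS documentation sample value, not a real key):
aws iam get-access-key-last-used --access-key-id AKIAIOSFODNN7EXAMPLE
The response includes the last-used date, region, and service name for that access key, which is how you judge whether it is still in use.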

You own a MySQL RDS instance in AWS Region us-east-1. The instance has a Multi-AZ instance in another availability zone for high availability. As the business grows, there are more and more clients coming from Europe (eu-west-2) and most of the database workload is read-only. What is the proper way to reduce the load on the source RDS instance? A)Create a snapshot of the instance and launch a new instance in eu-west-2. B)Promote the Multi-AZ instance to be a Read Replica and move the instance to eu-west-2 region. C)Configure a read-only Multi-AZ instance in eu-west-2 as Read Replicas cannot span across regions. D)Create a Read Replica in the AWS Region eu-west-2.

D)Create a Read Replica in the AWS Region eu-west-2.
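A sketch of the CLI call, run against the destination region (the identifiers and account ID are placeholders):
aws rds create-db-instance-read-replica \
  --region eu-west-2 \
  --db-instance-identifier mydb-replica-eu \
  --source-db-instance-identifier arn:aws:rds:us-east-1:111122223333:db:mydb
European clients can then send their read-only queries to the replica's endpoint, taking load off the source instance in us-east-1.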

Question 64 of 65 A company is developing a web application to be hosted in AWS. This application needs a data store for session data. As an AWS Solutions Architect, what would you recommend as an ideal option to store session data? (SELECT TWO) Correct answer: B) D) A)CloudWatch B)DynamoDB C)Elastic Load Balancing D)ElastiCache E)Storage Gateway

DynamoDB and ElastiCache are perfect options for storing session data. AWS Documentation mentions the following on Amazon DynamoDB: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications. For more information on AWS DynamoDB, please visit the following URL: https://aws.amazon.com/dynamodb/ AWS Documentation mentions the following on AWS ElastiCache: AWS ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high performance, scalable, and cost-effective caching solution while removing the complexity associated with deployment and management of a distributed cache environment. For more information on AWS ElastiCache, please visit the following URL: https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html Option A is incorrect. AWS CloudWatch offers cloud monitoring services for the customers of AWS resources. Option C is incorrect. AWS Elastic Load Balancing automatically distributes incoming application traffic across multiple targets. Option E is incorrect. AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage.

You are running an application on an EC2 instance in us-east-1a. us-east-1a fails — what options do you have to recover the application running on the EC2 instance? A Create a new EC2 instance in us-east-1b and attach the EBS volume. B Copy a snapshot of the EBS volume from us-east-1a to us-east-1b, recreate the EBS volume, and then create a new EC2 instance. C The EC2 instance will recover using EC2-Recover automatically. D If available, use a snapshot of the EBS volume to make a new volume AND then create a new EC2 instance in a different availability zone.

EBS volumes are created in a specific AZ, so if the AZ fails, they fail. Also, an EBS volume in one AZ cannot be attached to an EC2 instance in another — this isn't a recovery option. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html Correct Answer: D Why is this correct? This is the only recovery option assuming AZ 1a doesn't return.

Which factors determine the number of virtual instances in a dedicated host? (Choose 2) A)Region B)Type of Instance C)Instance auto-placement D)Size of an instance E)Host recovery F)AZ

Feedback: B, D - The type of instance is needed, and this will partly determine how many virtual instances you can create on the dedicated host. The size of an instance is needed, and this will partly determine the number of virtual instances you can create on the dedicated host.

From a security perspective, what are examples of a principal? (Choose 3). A)An authenticated user B)An application C)An anonymous user D)An identity

Feedback: A,B,C - A principal is essentially a person, application, or system that can make an authenticated or anonymous request to perform an action on a system. An authenticated user falls under this definition, as does an application, and so does an anonymous user.

Which of these aspects can contribute to having a low RPO and RTO? A)Architecting an environment that is fault-tolerant B)An untested recovery method C)A high RPO can reduce the RTO time and overall balance out both aspects D)Architecting an environment that is highly available E)Frequent system backups F)Infrequent system backups

Feedback: A, D, E - Remember, fault tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective. High availability means automating tasks so that an instance will quickly recover, which can also lead to a low Recovery Time Objective. Frequent backups reduce the time between the last backup and a failure, otherwise known as the Recovery Point Objective.

Which are true of an EC2 instance? (Select 3) A)EC2 instances do not share their hardware with other customers. B)EC2s can be placed in a VPC. C)Key pairs generate a public and a private key which allow for password-less entry into an EC2 via the command line D)EBS or instance store volumes are storage devices that can be used by an EC2 instance. E)A single EC2 ENI can only have, at maximum, one security group.

Feedback: B,C,D - EC2s are intended to be placed inside a Virtual Private Cloud. When launching an EC2 instance, key pairs can be generated for password-less entry. Remember, a key pair generates a public and a private key. EBS and instance store volumes are storage devices that can be attached to an EC2. Remember, EBS is attachable/detachable while instance store volumes are local storage within an EC2.

An organization hosts a multi-language website on AWS, which is served using CloudFront. Language is specified in the HTTP request as shown below: http://d11111f8.cloudfront.net/main.html? http://d11111f8.cloudfront.net/main.html?language=en http://d11111f8.cloudfront.net/main.html?language=es How should AWS CloudFront be configured to deliver cached data in the correct language? A)Forward cookies to the origin B)Based on query string parameters C)Cache objects at the origin D)Serve dynamic content

Feedback: Correct Answer - B Since language is specified in the query string parameters, CloudFront should be configured for the same. For more information on configuring CloudFront via query string parameters, please visit the following URL:https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html

You are developing an application using the AWS SDK to get objects from AWS S3. The objects have big sizes and sometimes there are failures when getting objects, especially when the network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this? A)Enable multipart upload in the AWS SDK. B)Use the "Range" HTTP header in a GET request to download the specified range of bytes of an object. C)Reduce the retry requests and enlarge the retry timeouts through the AWS SDK when fetching S3 objects. D)Retrieve the whole S3 object through a single GET operation.

Feedback: Correct Answer - B Through the "Range" header in the HTTP GET request, a specified portion of the object can be downloaded instead of the whole object. Check the explanations in https://docs.aws.amazon.com/AmazonS3/latest/dev/GettingObjectsUsingAPIs.html. Option A is incorrect: Because the question asks for multipart download rather than multipart upload. Option B is CORRECT: Because with byte-range fetches, users can establish concurrent connections to Amazon S3 to fetch different parts from within the same object. Option C is incorrect: Because adjusting retry requests and timeouts cannot download specific parts of an object. Option D is incorrect: Because the method to retrieve the entire object does not meet the requirement.
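For instance, the first 1 MiB of an object could be fetched on its own (the bucket, key, and output file names are placeholders):
aws s3api get-object \
  --bucket my-bucket \
  --key backups/large-object.bin \
  --range bytes=0-1048575 \
  part-000.bin
Repeating the call with successive byte ranges retrieves the whole object in parts, and a failed part can be retried on its own.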

A database, hosted using the Amazon RDS service, is getting a lot of database queries and has now become a bottleneck for the associated application. Which action would ensure that the database is not a performance bottleneck? A)Setup a CloudFront distribution in front of the database. B)Setup an ELB in front of the database. C)Setup ElastiCache in front of the database. D)Setup SNS in front of the database.

Feedback: Correct Answer - C ElastiCache is an in-memory solution which can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database. Option A is incorrect because CloudFront is normally used for content distribution, not database caching. Option B is incorrect because an ELB distributes traffic across multiple instances; it does not reduce the query load on a single database. Option D is incorrect because SNS is a simple notification service. For more information on ElastiCache, please visit the following URL: https://aws.amazon.com/elasticache/

A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which solution will meet the security team's mandate? A)Put the access key in an S3 bucket, and retrieve the access key on boot from the instance. B)Pass the access key to the instances through instance user data. C)Obtain the access key from a key server launched in a private subnet. D)Create an IAM role with permissions to access the table, and launch all instances with the new role.

Feedback: D - IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution involving the creation of an access key then introduces the complexity of managing that secret.

An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern? A)Access the data through an Internet Gateway. B)Access the data through a VPN connection. C)Access the data through a NAT Gateway. D)Access the data through a VPC endpoint for Amazon S3.

Feedback: D - VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
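As a sketch (the IDs and region are placeholders), a gateway endpoint for S3 is created and associated with the subnet's route table:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0def5678
S3 traffic from the instances then stays on the AWS network instead of traversing the public Internet.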

A travel photo sharing website is using Amazon S3 to serve high-quality photos to visitors of your website. After a few days, you found out that there are other travel websites linking to and using your photos. This resulted in financial losses for your business. What is the MOST effective method to mitigate this issue? Correct Answer: C) A)Block the IP addresses of the offending websites using NACL. B)Use CloudFront distributions for your photos. C)Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates. D)Store and privately serve the high-quality photos on Amazon WorkDocs instead.

Feedback: In Amazon S3, all objects are private by default. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects. When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name, an object key, specify the HTTP method (GET to download the object) and expiration date and time. The pre-signed URLs are valid only for the specified duration. Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL. Using CloudFront distributions for your photos is incorrect. CloudFront is a content delivery network service that speeds up delivery of content to your customers. Blocking the IP addresses of the offending websites using NACL is also incorrect. Blocking IP address using NACLs is not a very efficient method because a quick change in IP address would easily bypass this configuration. Storing and privately serving the high-quality photos on Amazon WorkDocs instead is incorrect as WorkDocs is simply a fully managed, secure content creation, storage, and collaboration service. It is not a suitable service for storing static content. Amazon WorkDocs is more often used to easily create, edit, and share documents for collaboration and not for serving object data like Amazon S3. References: https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectOperations.html
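A minimal example of generating a time-limited link with the CLI (the bucket and key are placeholders); the URL stops working after the expiry period:
aws s3 presign s3://travel-photos/alps/summit.jpg --expires-in 3600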

Question 60 of 65 A VPC has a non-default subnet which has four On-Demand EC2 instances that can be accessed over the Internet. Using the AWS CLI, you launched a fifth instance that uses the same subnet, Amazon Machine Image (AMI), and security group which are being used by the other instances. Upon testing, you are not able to access the new instance. Which of the following is the most suitable solution to solve this problem? A)Enable AWS Transfer for SFTP to allow the incoming traffic to the fifth EC2 Instance. B)Set up a NAT gateway to allow access to the fifth EC2 instance. C)Configure the routing table for the public subnet to explicitly include the fifth EC2 instance. D)Associate an Elastic IP address to the fifth EC2 instance.

Feedback: D) By default, a "default subnet" of your VPC is actually a public subnet, because the main route table sends the subnet's traffic that is destined for the internet to the internet gateway. You can make a default subnet into a private subnet by removing the route from the destination 0.0.0.0/0 to the internet gateway. However, if you do this, any EC2 instance running in that subnet can't access the internet. Instances that you launch into a default subnet receive both a public IPv4 address and a private IPv4 address, and both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don't receive a public IPv4 address or a DNS hostname. You can change your subnet's default public IP addressing behavior. By default, non-default subnets have the IPv4 public addressing attribute set to false, and default subnets have this attribute set to true. An exception is a non-default subnet created by the Amazon EC2 launch instance wizard — the wizard sets the attribute to true. Because the fifth instance was launched from the CLI into a non-default subnet, it did not receive a public IPv4 address, so associating an Elastic IP address to it makes it reachable over the Internet.
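A sketch of allocating and attaching an Elastic IP (the instance and allocation IDs are placeholders):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234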

Which of the following are not global AWS services? A EC2 B S3 C DynamoDB D Route 53 E IAM

IAM is a global service — it isn't regional like many other AWS services. IAM resources aren't in a specific region — they are globally replicated. Correct Answer: A Why is this correct? If a region fails where an EC2 instance is located, access to that instance will also fail. Correct Answer: B Why is this correct? If a region fails where an S3 bucket is located, access to that bucket will also fail. Correct Answer: C Why is this correct? If a region fails where a table is located, access to that table will also fail.

AWS PrivateLink AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public Internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.

PrivateLink to set up a VPC endpoint for Session Manager You can further improve the security posture of your managed instances by configuring AWS Systems Manager to use an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access Amazon EC2 and Systems Manager APIs by using private IP addresses. PrivateLink restricts all network traffic between your managed instances, Systems Manager, and Amazon EC2 to the Amazon network. (Managed instances don't have access to the internet.) Also, you don't need an internet gateway, a NAT device, or a virtual private gateway. In addition to the three endpoints required to use PrivateLink with Systems Manager, you can create a fourth, com.amazonaws.region.ssmmessages, for use with Session Manager.
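As a sketch (all IDs and the region are placeholders), one such interface endpoint, here the ssmmessages one, is created like this and repeated for each required service name:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ssmmessages \
  --subnet-ids subnet-0abc1234 \
  --security-group-ids sg-0abc1234 \
  --private-dns-enabled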

You have been asked to suggest an AWS product which provides storage that can be mounted on Linux instances, supports POSIX-type permissions, and can be used by multiple instances at the same time. Which option should you suggest? A EBS B S3 C Storage Gateway D EFS

S3 is Amazon's Simple Storage Service and can store a virtually unlimited number of objects. It cannot be mounted. Correct Answer: D Why is this correct? Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

X.509 client certificates X.509 certificates provide AWS IoT with the ability to authenticate client and device connections. Client certificates must be registered with AWS IoT before a client can communicate with AWS IoT. A client certificate can be registered in multiple AWS accounts in the same AWS Region to facilitate moving devices between your AWS accounts in the same region. See Using X.509 client certificates in multiple AWS accounts with multi-account registration for more information.

We recommend that each device or client be given a unique certificate to enable fine-grained client management actions, including certificate revocation. Devices and clients must also support rotation and replacement of certificates to help ensure smooth operation as certificates expire. For information about using X.509 certificates to support more than a few devices, see Device provisioning to review the different certificate management and provisioning options that AWS IoT supports.

An application that records weather data every minute is deployed in a fleet of Spot EC2 instances and uses a MySQL RDS database instance. Currently, there is only one RDS instance running in one Availability Zone. You plan to improve the database to ensure high availability by synchronous data replication to another RDS instance. Which of the following performs synchronous data replication in RDS? A)DynamoDB Read Replica B)CloudFront running as a Multi-AZ deployment C)RDS Read Replica D)RDS DB instance running as a Multi-AZ deployment

When you create or modify your DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. Updates to your DB Instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB instance failure. RDS Read Replica is incorrect as a Read Replica provides an asynchronous replication instead of synchronous. DynamoDB Read Replica and CloudFront running as a Multi-AZ deployment are incorrect as both DynamoDB and CloudFront do not have a Read Replica feature. Reference: https://aws.amazon.com/rds/details/multi-az/
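A sketch of converting the existing instance (the identifier is a placeholder); the change can also be deferred to the next maintenance window instead of applying immediately:
aws rds modify-db-instance \
  --db-instance-identifier weather-db \
  --multi-az \
  --apply-immediately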

Which is true of an eventually consistent read from DynamoDB? A It uses more RCU than a strongly consistent read B You may receive outdated data C You receive consistent data D It uses less RCU than a strongly consistent read Correct Answer: B

Why is this correct? Correct, eventually consistent reads can return outdated data. Correct Answer: D Why is this correct? Correct, it uses fewer RCUs than a strongly consistent read; for example, reading an item of up to 4 KB consumes 1 RCU with a strongly consistent read but only 0.5 RCU with an eventually consistent read.

50. You have 6 VPCs and need to configure AWS to allow communication between all 6 VPCs. Which option below will allow communication between the VPCs with the least admin overhead? A 6 Transit Gateways B 6 Peering Connections C 1 Transit Gateway D 1 VPC Peering Connection Your Answer: A

Why is this incorrect? 6 Transit Gateways is 5 Transit Gateways too many. Only 1 Transit Gateway is needed. Correct Answer: C Why is this correct? Transit Gateway allows transitive peering between VPCs

You have been asked to architect the networking for a high-performance financial modeling application. It runs on four EC2 instances, and you need the lowest network latency and highest throughput possible. What AWS products, services, or features should you suggest? A Burstable instances B VPC Flow C Spread placement group D Cluster placement group

Why is this incorrect? Burstable instances (T2 or T3) are designed for economic applications that don't need consistent CPU. Correct Answer: D Why is this correct? Cluster placement groups influence the physical placement of instances on hardware, and this allows the highest performance possible. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
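As a sketch (the group name, AMI, and instance type are placeholders), the group is created once and referenced when the instances are launched:
aws ec2 create-placement-group --group-name hpc-cluster --strategy cluster
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5n.18xlarge \
  --count 4 \
  --placement GroupName=hpc-cluster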

40. You have an S3 bucket full of medical data. The bucket needs to be accessed by 100 IAM users within your AWS account, as well as two to three remote imaging operators who are employed by a partner and have IAM identities in another AWS account, and read-only access to one or two folders needs to be given to anonymous/unauthenticated identities. What method of permissions control should you use? A Identity policies on an IAM role B Identity policies on IAM users C Service control policies D Bucket policy

Your Answer: A Why is this incorrect? An IAM role could work. Users could assume the role to get access, but there are folders that need public access, so this wouldn't work for all question requirements. Correct Answer: D Why is this correct? A bucket policy could be defined to control access for all identities and the unauthenticated (public) users.

58. Your CIO is reviewing the expected technical effort required to manage an AWS environment. Which of the following AWS services allow SSH connectivity into the service's underlying instances? A) DynamoDB B) Amazon EMR C) Amazon RDS D) Amazon EC2

Your Answer: A Why is this incorrect? DynamoDB only provides access via APIs. There is no infrastructure to access directly. Your Answer: C Why is this incorrect? RDS provides APIs for RDS operations and SQL access to the data, but no access to the underlying instances. Correct Answer: B Why is this correct? EMR allows you to log in to the master node via SSH. Correct Answer: D Why is this correct? You can SSH/RDP into the operating system of your EC2 instances; for certain installation, configuration, and admin tasks, this is required.

31. A data scientist is trying to upload 4.5 GB objects to S3. The scientist is in N. Virginia and the S3 bucket is located in the us-east-1 region. Previous smaller uploads have been running slowly, achieving ~2 Mbps on a 1 Gbps internet connection. What options can you suggest to speed up the data transfer of this larger file? A S3 transfer acceleration B SSE-S3 C S3 CRR D Multipart upload

Your Answer: A Why is this incorrect? S3 transfer acceleration speeds up uploads when the uploader is far from the bucket's region. In this case, the scientist and the bucket are both in N. Virginia (us-east-1), so it would not help. Correct Answer: D Why is this correct? Multipart upload allows multiple parts to be transferred at the same time, improving reliability for larger files and also improving speed. https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
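A minimal sketch of a parallel multipart upload using the boto3 transfer manager; the file name, bucket, part size, and concurrency are placeholder choices for illustration.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Force multipart for anything over 100 MB and upload parts in parallel.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

boto3.client("s3").upload_file(
    "scan-data.tar",           # placeholder local file (~4.5 GB in the scenario)
    "imaging-bucket",          # placeholder bucket name
    "uploads/scan-data.tar",
    Config=config,
)
```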

You operate a commercial stock images website with millions of images. Watermarked preview images are available via an application running on an EC2 instance. Full-resolution versions are stored on an EBS volume attached to that instance and are delivered by the application. You have been asked to find a cheaper solution that can scale. Which option is the most suitable? A Add a storage-optimized EBS volume to the EC2 instance. B Move the images to S3, and add read permissions for everyone. C Move the images to S3, and enable SFTP read support. D Move the images to S3, and use pre-signed URLs.

Your Answer: A Why is this incorrect? There is no such feature, and it wouldn't improve the situation. Correct Answer: D Why is this correct? S3 is more economical for large-scale object storage. Pre-signed URLs let the application grant time-limited download access to otherwise private objects. https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
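A minimal sketch of generating a pre-signed URL; the bucket, object key, and expiry are placeholders the application would supply per request.

```python
import boto3

s3 = boto3.client("s3")

# The application hands this short-lived URL to the paying customer;
# the underlying object stays private.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "stock-images-full-res", "Key": "images/12345.tif"},  # placeholders
    ExpiresIn=300,  # link valid for 5 minutes
)
print(url)
```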

One of your EC2 instances has been configured with a script in its User Data that runs when the instance is booted. The bootstrap script makes a call to S3 to copy the required setup and configuration files. When you launch the instance, you notice that the script is failing because it has no access to the S3 bucket containing the resources it needs. How can you solve this problem and follow best practices for security? A Add a bucket policy to your S3 bucket. B Add an IAM role to the S3 bucket that will give the EC2 instance the needed permissions to the S3 bucket. C Add the needed credentials to your bootstrap script. D Add an IAM role to the EC2 instance that will give the EC2 instance the needed permissions to the S3 bucket.

Your Answer: B Why is this incorrect? Incorrect, IAM roles are attached to the EC2 instance, not to the S3 bucket; it is the instance that needs the permissions. Correct Answer: D Why is this correct? Correct, attaching an IAM role to the EC2 instance gives it the permissions it needs to access S3 without hard-coding credentials.
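As a rough sketch, an existing role is attached to a running instance via an instance profile; the profile name and instance ID below are placeholders. Once attached, the bootstrap script's CLI/SDK calls pick up temporary credentials from the instance metadata service automatically.

```python
import boto3

ec2 = boto3.client("ec2")

# Attach an instance profile (wrapping the IAM role) to the instance.
# Profile name and instance ID are placeholders.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "bootstrap-s3-read-role"},
    InstanceId="i-0123456789abcdef0",
)
```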

You need to design a VPC that is resilient to AZ failure from an internet access perspective. The VPC is in a four-AZ region. How many internet gateways are required to ensure multiple AZ failures won't disrupt internet connectivity? A Zero — internet access is provided by a NAT gateway B Four C One D Two

Your Answer: B Why is this incorrect? Only one IGW can be attached to a VPC, so four is not possible and is not needed. Correct Answer: C Why is this correct? An IGW is resilient by design, and only one needs to be attached to a VPC in order to provide all subnets in all AZs with resilient internet connectivity.
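A minimal sketch of the single-IGW setup; the VPC ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]

# One attachment covers every subnet in every AZ of the VPC.
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")
```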

You manage an application environment that consists of an EC2 instance and a MariaDB RDS Single-AZ instance. You have been asked to make sure that, before the environment is terminated, backups are taken that last at least 6 months. What should you suggest? A. Run a manual snapshot of the RDS instance before it is terminated. B. Ensure backups are taken automatically and choose 6 months for the retention period. C. Instead of terminating the RDS instance, leave it running for an additional 6 months. D. Detach the storage for the RDS instance and leave it in place for at least 6 months.

Your Answer: B Why is this incorrect? Automated backup retention periods are limited to 1 to 35 days, so they cannot be kept for 6 months. Correct Answer: A Why is this correct? Manual snapshots persist until you explicitly delete them.
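A minimal sketch of taking the manual snapshot before termination; the snapshot and instance identifiers are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Manual snapshots are kept until explicitly deleted, unlike automated backups.
rds.create_db_snapshot(
    DBSnapshotIdentifier="mariadb-final-backup",  # placeholder snapshot name
    DBInstanceIdentifier="app-mariadb",           # placeholder instance identifier
)
```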

You have been tasked to store files in S3 with encryption at rest. You also need a solution that matches the FIPS 140-2 Level 3 framework. Which solution meets this requirement? A SSE-KMS B SSE-S3 C SSE-C D CloudHSM

Your Answer: B Why is this incorrect? SSE-S3 lets S3 create and manage the keys, which does not meet FIPS 140-2 Level 3. Correct Answer: D Why is this correct? CloudHSM is validated to FIPS 140-2 Level 3.
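As one hedged illustration, not stated in the question: in practice, CloudHSM-backed keys can be used for S3 objects via SSE-KMS with a KMS key created in a custom key store backed by the CloudHSM cluster. The bucket, object key, and key ARN below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Encrypt the object with a KMS key whose material lives in a CloudHSM-backed
# custom key store (the key ARN here is a placeholder).
s3.put_object(
    Bucket="regulated-files",
    Key="reports/q1.pdf",
    Body=b"example payload",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id",
)
```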

One of your systems is suffering from performance problems. It's a critical system, and you have been asked to design an upgrade to resolve the issues. Checking CloudWatch, you can see the instance is historically running at 20% CPU and 99% memory utilization. It currently runs on the second smallest C type instance. What should your suggestion be for the most economical way to resolve the performance issues during a scheduled downtime? A Edit the EC2 instance properties and select the custom memory option. Add additional memory until the performance issues are resolved. B Rebuild the application, reinstalling all components and the data into a new memory-optimized instance type. C Power down the instance and change the instance to a memory-optimized instance type. D Increase the size of the instance moving from the current C class instance to the next step.

Your Answer: D Why is this incorrect? This is not optimal. The scenario suggests the application is very memory heavy, so another C class instance won't resolve the underlying issue. Correct Answer: C Why is this correct? Memory-optimized instances sacrifice vCPU and provide more memory allocation for a similar cost. You will generally achieve better value from a memory-intensive application by moving to a memory-optimized instance type.
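A rough sketch of option C during the scheduled downtime; the instance ID is a placeholder and r5.large is only an example memory-optimized type, to be sized against the actual workload.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change to a memory-optimized instance type while the instance is stopped.
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "r5.large"})

ec2.start_instances(InstanceIds=[instance_id])
```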

You manage hundreds of AWS accounts for your organization. One of the AWS accounts is for a development team, and you need to restrict what can occur within the account. There are 6 IAM users, and the account root user also needs to be restricted. What solution below would be best? A Identity Policy B IAM Permission Boundary C Service Control Policy D Resource Policy

Your Answer: D Why is this incorrect? A policy attached to a resource is a Resource Policy; it controls access to that one resource and cannot restrict the account root user across the whole account. Correct Answer: C Why is this correct? Service control policies (SCPs) are one type of policy that you can use to manage your organization. ... Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform.
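A hedged sketch of creating and attaching a deny-list SCP from the organization's management account; the policy name, denied actions, and target account ID are placeholders chosen only for illustration.

```python
import json
import boto3

org = boto3.client("organizations")

# Example guardrail: block leaving the organization and stopping CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="dev-account-guardrails",
    Description="Restrictions for the development account",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the development account (account ID is a placeholder).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```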

You are reviewing a video transcoding platform for a client. The client is unable to use Elastic Transcoder due to feature requirements. The system currently uses a fleet of EC2 instances created by a launch template and Auto Scaling group. Instances are using the C family. Videos to be transcoded are entered into an SQS queue, and the size of the Auto Scaling group is controlled by messages in the queue. Any failed jobs are retried a number of times before being canceled. What options does the client have to reduce costs without negatively impacting performance over time? A Move from C type to X type instances. B Move from C type to T3 type instances. C Use spot instances. D Enable enhanced networking on all EC2 instances.

Your Answer: D Why is this incorrect? This will reduce instance latency but won't impact the overall performance in a significant way. Correct Answer: C Why is this correct? Spot instances will significantly reduce the ongoing cost of the solution. Even assuming some jobs will fail because of terminating spot instances, the Auto Scaling group will grow to compensate and the solution will still be lower cost.
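A minimal sketch of requesting Spot capacity through the launch template the Auto Scaling group consumes; the template name, AMI ID, and instance type are placeholders, and a real setup would also version the existing template rather than create a new one.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch template that requests Spot instances for the transcoding fleet.
ec2.create_launch_template(
    LaunchTemplateName="transcode-spot",  # placeholder name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "c5.2xlarge",         # placeholder C-family type
        "InstanceMarketOptions": {"MarketType": "spot"},
    },
)
```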

You manage an application that is in use at your employer. The environment currently uses the same infrastructure for the dev and production environments: three large On-Demand EC2 instances running behind an Application Load Balancer, with an RDS MySQL Multi-AZ deployment. You have been asked for suggestions to cost-optimize the solution without negatively impacting production availability or performance. What should you suggest? A Purchase instance reservations for PROD. B Remove Multi-AZ from PROD. C Use Spot instances for PROD. D Remove Multi-AZ from DEV. E Purchase instance reservations for DEV.

Your Answer: E Why is this incorrect? The dev environment might not have 24/7/365 availability requirements, so while this will reduce costs, it might be better to simply switch off dev when not in use. Correct Answer: A Why is this correct? For any instances that need to be available consistently, instance reservation makes sense. https://aws.amazon.com/ec2/pricing/reserved-instances/ Correct Answer: D Why is this correct? This is a potential way to reduce costs. Multi-AZ is generally not worth the additional costs for dev environments.
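A rough sketch of option D, assuming a hypothetical identifier for the DEV database; Multi-AZ can be turned off with one call.

```python
import boto3

rds = boto3.client("rds")

# Convert the DEV database to a single-AZ deployment (identifier is a placeholder).
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-dev",
    MultiAZ=False,
    ApplyImmediately=True,
)
```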

