CSA Practice Exam

An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten.

- Use AWS DataSync to move the data. Store all of the data in Amazon S3 and enable Object Lock. (The option "Use AWS Storage Gateway to establish hybrid cloud storage, store all of your data in Amazon S3, and enable Object Lock" is incorrect because the scenario requires that all of the existing records be migrated to AWS. The future records will also be stored in AWS and not in the on-premises network, so setting up hybrid cloud storage is unnecessary since the on-premises storage will no longer be used.)
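A minimal boto3 sketch of how Object Lock could be enabled (the bucket name and retention period are hypothetical, not from the question; region configuration is omitted for brevity):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created;
# it also turns on versioning automatically.
s3.create_bucket(
    Bucket="financial-records-archive",  # placeholder name
    ObjectLockEnabledForBucket=True,
)

# Default retention in COMPLIANCE mode: object versions cannot be
# deleted or overwritten by any user during the retention period.
s3.put_object_lock_configuration(
    Bucket="financial-records-archive",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```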

A global company with multiple AWS accounts wants to centrally manage its AWS resources to improve efficiency and drive costs down. It needs to procure AWS resources centrally and share resources such as AWS Transit Gateways, AWS License Manager configurations, or Amazon Route 53 Resolver rules across its various accounts.

- Consolidate all of the company's accounts using AWS Organizations. - Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts. *AWS Resource Access Manager (RAM)* is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM. Many organizations use multiple accounts to create administrative or billing isolation and limit the impact of errors. RAM eliminates the need to create duplicate resources in multiple accounts, reducing the operational overhead of managing those resources in every single account you own. You can create resources centrally in a multi-account environment and use RAM to share those resources across accounts in three simple steps: (1) create a Resource Share, (2) specify resources, and (3) specify accounts. RAM is available to you at no additional charge.
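A hedged boto3 sketch of those three steps (the Transit Gateway ARN and account IDs are placeholders):

```python
import boto3

ram = boto3.client("ram")

# (1) create a Resource Share, (2) specify resources, (3) specify accounts.
response = ram.create_resource_share(
    name="shared-network-resources",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"
    ],
    principals=["444455556666", "777788889999"],
    allowExternalPrincipals=False,  # restrict sharing to the AWS Organization
)
print(response["resourceShare"]["resourceShareArn"])
```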

What are the key features of API Gateway that the architect can mention to the client?

- Enables you to build RESTful APIs and WebSocket APIs that are optimized for serverless workloads - You pay only for the API calls you receive and the amount of data transferred out. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.

A media company has two VPCs, VPC-1 and VPC-2, with a peering connection between them. VPC-1 contains only private subnets while VPC-2 contains only public subnets. The company uses a single AWS *Direct Connect connection* and a virtual interface to connect their on-premises network with VPC-1. Which of the following options increase the fault tolerance of the connection to VPC-1? (Select TWO.)

- Establish a hardware VPN over the Internet between the VPC and the on-premises network.
- Establish another AWS Direct Connect connection and private virtual interface in the same AWS region.
Explanation: In this scenario, you have two VPCs that have a peering connection with each other. Note that a VPC peering connection does not support edge-to-edge routing. This means that if either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection:
- A VPN connection or an AWS Direct Connect connection to a corporate network
- An Internet connection through an Internet gateway
- An Internet connection in a private subnet through a NAT device
- A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3
- (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-Classic instance and instances in a VPC on the other side of a VPC peering connection. However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6 communication.

Custom metrics that you can set up in CloudWatch (not available by default)

- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
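For illustration, a custom metric can be published with the CloudWatch PutMetricData API; this sketch assumes a hypothetical namespace, instance ID, and value (in practice the CloudWatch agent collects these readings for you):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a single memory utilization reading as a custom metric.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # placeholder namespace
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 63.5,
            "Unit": "Percent",
        }
    ],
)
```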

A company has two On-Demand EC2 instances inside a VPC in the same Availability Zone, but deployed to different subnets. One EC2 instance runs a database, and the other runs a web application that connects to the database. You need to ensure that these two instances can communicate with each other for the system to work properly. What do you have to check so that these EC2 instances can communicate inside the VPC? (Select TWO.)

- Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
- Check the network ACL if it allows communication between the two subnets.
Explanation: You can secure VPC instances using only security groups; network ACLs are an optional additional layer of defense at the subnet level, so both must permit the traffic.

A company has multiple AWS accounts that are assigned to its development teams. Management wants to consolidate all of its AWS accounts into a multi-account setup. To simplify the login process on the AWS accounts, management wants to utilize its existing directory service for user authentication.

- On the master account, use AWS Organizations to create a new organization with all features turned on. Invite the child accounts to this new organization.
- Configure AWS IAM Identity Center (AWS Single Sign-On) for the organization and integrate it with the company's directory service using the Active Directory Connector (AD Connector is directory agnostic).
Explanation: *AWS Organizations* is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS IAM Identity Center (successor to AWS Single Sign-On) provides single sign-on access for all of your AWS accounts and cloud applications. It connects with Microsoft Active Directory through AWS Directory Service to allow users in that directory to sign in to a personalized AWS access portal using their existing Active Directory user names and passwords. From the AWS access portal, users have access to all the AWS accounts and cloud applications that they have permission for.

A company hosts a web application in an Auto Scaling group of Amazon EC2 instances. It needs a cost-effective and scalable solution to store old files while still providing durability and high availability; files that are older than 2 years must be moved to a different storage class.

2 suitable solutions:
- Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Glacier after 2 years.
- Use Amazon S3 and create a lifecycle policy that will move the objects to Amazon S3 Standard-IA after 2 years.
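As an illustrative sketch (bucket name and rule ID are hypothetical), the Glacier variant of such a lifecycle policy could look like this in boto3:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to S3 Glacier 730 days (~2 years) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="app-archive-bucket",  # placeholder name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-2-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 730, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```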

AWS Database Migration Service (AWS DMS)

A cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. It works with Oracle and integrates with Amazon RDS. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups. AWS DMS can help migrate on-premises databases to the AWS Cloud. With AWS DMS, you can perform one-time migrations, and you can replicate ongoing changes to keep sources and targets in sync. If you want to migrate to a different database engine, you can use the AWS Schema Conversion Tool (AWS SCT) to translate your database schema to the new platform. You then use AWS DMS to migrate the data.

Which service can monitor all API calls made to a Redshift instance and also provide secured log data for auditing and compliance purposes?

AWS CloudTrail

AWS Storage Gateway

Although you can *copy data* from on-premises to AWS with Storage Gateway, it is not suitable for transferring large sets of data to AWS. Storage Gateway is mainly used in providing low-latency access to data by caching frequently accessed data on-premises while storing archive data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data transfer to AWS by sending only changed data and compressing data.

A company needs a disaster recovery plan for its relational database to mitigate a multi-region failure. The plan requires a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute.

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.

In this scenario, what will you do to implement a scalable, high-available POSIX-compliant shared file system?

Amazon Elastic File System (Amazon EFS)

A company uses SAP HANA for its day-to-day ERP operations. It can't migrate this database due to customer preferences, so it needs to integrate it with the current AWS workload in the VPC, for which a site-to-site VPN connection must be established. What needs to be configured outside of the VPC for a successful site-to-site VPN connection?

An *Internet-routable IP address (static)* of the customer gateway's external interface for the on-premises network. Explanation: By default, instances that you launch into a virtual private cloud (VPC) can't communicate with your own network. You can enable access to your network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, and creating an AWS managed VPN connection. A *customer gateway* is a physical device or software application on your side of the VPN connection. To create a *VPN connection*, you must create a *customer gateway resource in AWS*, which provides information to AWS about your customer gateway device. Next, you have to set up an Internet-routable IP address (static) of the customer gateway's external interface.

A company developed public APIs hosted on Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. The web service clients can only access trusted IP addresses whitelisted on their firewalls.

Associate an Elastic IP address to a Network Load Balancer. Explanation: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the default rule's target group. It attempts to open a *TCP* connection to the selected target on the port specified in the listener configuration. Since web service clients can only access trusted IP addresses, you can use the Bring Your Own IP (BYOIP) feature to assign the trusted IPs as Elastic IP addresses (EIPs do not change; they are static and region-bound) to a Network Load Balancer (NLB). This way, there's no need to re-establish the whitelists with new IP addresses.

With an ElastiCache cluster, you have to secure the session data in the portal by requiring users to enter a password before they are granted permission to execute Redis commands.

Authenticate the users using Redis AUTH by creating a new Redis cluster with both the --transit-encryption-enabled and --auth-token parameters enabled.
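A hedged boto3 equivalent of those CLI parameters (IDs, node type, and the token value are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# Redis AUTH requires in-transit encryption, so the two settings
# are enabled together.
elasticache.create_replication_group(
    ReplicationGroupId="session-store",
    ReplicationGroupDescription="Portal session data",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    TransitEncryptionEnabled=True,               # --transit-encryption-enabled
    AuthToken="a-strong-token-of-16-128-chars",  # --auth-token (placeholder)
)
```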

Metrics included in CloudWatch

CPU Utilization - identifies the processing power required to run an application on a selected instance.
Network Utilization - identifies the volume of incoming and outgoing network traffic to a single instance.
Disk Reads metric - used to determine the volume of the data the application reads from the hard disk of the instance. This can be used to determine the speed of the application.

A company needs to conduct a network security audit. The web application is hosted on an ASG of EC2 instances with an ALB in front to evenly distribute the incoming traffic. The company needs to enhance the security posture of its cloud infrastructure and minimize the impact of DDoS attacks.

Configure an Amazon CloudFront distribution and set the Application Load Balancer as the origin. Create a *rate-based web ACL rule* using *AWS WAF* and associate it with *Amazon CloudFront*.

A web application runs on a set of Amazon EC2 instances in an ASG behind an ALB and has an embedded NoSQL database. As the application receives more traffic, it becomes overloaded, mainly due to database requests. Management wants to ensure that the database is eventually consistent and highly available, with a solution that has the least operational overhead.

Configure the ASG to spread the Amazon EC2 instances across three Availability Zones. Use the AWS *Database Migration Service (DMS)* with a *replication server* and an ongoing replication task to migrate the embedded NoSQL database to Amazon DynamoDB

A photo sharing website uses Amazon S3 to serve high-quality photos to visitors. Other travel websites are linking to and using these photos; you must stop them, as this is hurting the business.

Configure your S3 bucket to remove public read access and use pre-signed URLs with expiry dates.
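A short boto3 sketch of generating a pre-signed URL with an expiry (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Time-limited link to a single photo; after ExpiresIn seconds the
# URL stops working, so hotlinked copies go stale.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "travel-photos", "Key": "beach.jpg"},  # placeholders
    ExpiresIn=3600,  # one hour
)
print(url)
```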

A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Select TWO.)

Considering that the company is using a corporate Active Directory, it is best to use AWS Directory Service AD Connector for easier integration. In addition, since the roles are already assigned using groups in the corporate Active Directory, it would be better to also use IAM Roles. Take note that you *can assign an IAM Role to the users* or groups from your Active Directory once it is integrated with your VPC via the *AWS Directory Service AD Connector*.

An application runs on multiple EC2 instances in private subnets in different Availability Zones. The application uses a single NAT Gateway for downloading software patches from the Internet. You must protect the application from a single point of failure when the NAT Gateway encounters a failure or its AZ goes down. The solution must be highly available and cost-effective.

Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone. A NAT Gateway is a highly available, managed Network Address Translation (NAT) *service* for your resources in a private subnet to access the Internet. A NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone. You must create a NAT gateway on a public subnet to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

A company is migrating its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses Windows shared file storage. The storage needs to be highly available and must integrate with Active Directory for access control and authentication.

Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS

An on-premises MySQL database needs to be replicated to Amazon S3 as CSV files. The database will be launched on an Amazon Aurora Serverless cluster, integrated with an RDS Proxy to allow the web applications to pool and share database connections. Once the data has been fully copied, the ongoing changes to the on-premises database should be continually streamed into the S3 bucket. The solution needs little management overhead while remaining highly secure.

Create a full load and change data capture (CDC) replication task using *AWS Database Migration Service (AWS DMS)*. Add a new Certificate Authority (CA) certificate and create an AWS DMS endpoint with SSL. Explanation: AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to *migrate relational databases, data warehouses, NoSQL databases, and other types of data stores*. You can use AWS DMS to migrate your data into the AWS Cloud, between on-premises instances (through an AWS Cloud setup) or between combinations of cloud and on-premises setups. With AWS DMS, you can perform one-time migrations, and you can replicate ongoing changes to keep sources and targets in sync. When using Amazon S3 as a target in an AWS DMS task, both *full load* and *change data capture (CDC)* data is written to comma-separated value (.csv) format by default. The comma-separated value (.csv) format is the default storage format for Amazon S3 target objects. *You can encrypt connections for source and target endpoints by using Secure Sockets Layer (SSL)*. To do so, you can use the *AWS DMS Management Console* or AWS DMS API to assign a certificate to an endpoint. You can also use the AWS DMS console to manage your certificates. Not all databases use SSL in the same way. Amazon Aurora MySQL-Compatible Edition uses the *server name*, the endpoint of the primary instance in the cluster, as the *endpoint for SSL*. An Amazon Redshift endpoint *already uses an SSL connection* and does not require an SSL connection set up by AWS DMS.
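For illustration, a full-load-and-CDC task could be created like this with boto3 (all ARNs are placeholders; the endpoints and replication instance are assumed to already exist):

```python
import json

import boto3

dms = boto3.client("dms")

# Full load plus ongoing replication (CDC) from MySQL to S3.
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-s3-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```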

A website hosted on Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system.

Create a native function or a stored procedure that invokes a Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume. Explanation: RDS events only provide operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot events. What we need in this scenario is to capture data-modifying events (INSERT, UPDATE, DELETE), which can be achieved through native functions or stored procedures that invoke Lambda.

An e-commerce website runs on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The website is receiving a large number of *illegitimate external requests* from *multiple systems with IP addresses that constantly change*. To resolve the performance issues, you must block the illegitimate requests with minimal impact on legitimate traffic.

Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer. A *rate-based* rule tracks the rate of requests for each originating IP address and triggers the rule action on IPs with rates that go over a limit. You set the limit as the number of requests per 5-minute time span. You can use this type of rule to put a temporary block on requests from an IP address that's sending excessive requests.

A company is running a dashboard application on a Spot EC2 instance inside a private subnet. The dashboard is reachable via a domain name that maps to the private IPv4 address of the instance's network interface. You need to increase network availability by allowing traffic flow to resume on another instance if the primary instance is terminated.

Create a secondary elastic network interface and point its private IPv4 address to the application's domain name. Attach the new network interface to the primary instance. If the instance goes down, move the secondary network interface to another instance.

A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity but it receives *bursts of traffic within seconds* whenever there is a *new product announcement*. The Solutions Architect needs to create a solution that will allow users *around the globe* to access the data using an API. What should the Solutions Architect do to meet the above requirement?

Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic. Lambda can scale faster (within seconds) than the regular Auto Scaling feature of Amazon EC2, Amazon Elastic Beanstalk, or Amazon ECS. This is because AWS Lambda is more lightweight than other computing services. Under the hood, Lambda can run your code on thousands of available AWS-managed EC2 instances (that could already be running) *within seconds* to accommodate traffic.

A company receives semi-structured and structured data from different sources every day. It must use *big data processing frameworks* to analyze vast amounts of data and access it using *various business intelligence tools* and *standard SQL queries*.

Create an Amazon EMR cluster and store the processed data in Amazon Redshift.

A company uses Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket sends an event notification to an Amazon SQS queue. The company needs a solution that notifies both the development and operations teams about the created or deleted objects.

Create an Amazon SNS topic and configure two Amazon SQS queues to *subscribe* to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic. Explanation: The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. In Amazon SNS, the fanout scenario is when a message published to an SNS topic is replicated and pushed to multiple endpoints, such as *Amazon SQS queues*, HTTP(S) endpoints, and Lambda functions. This allows for parallel asynchronous processing. If Amazon SNS receives an event notification, it will publish the message to both subscribed SQS queues.
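A minimal fanout sketch in boto3 (queue ARNs are placeholders; the queues also need an access policy that allows SNS to send messages to them):

```python
import boto3

sns = boto3.client("sns")

# One topic fans out to both teams' queues.
topic_arn = sns.create_topic(Name="s3-object-events")["TopicArn"]

for queue_arn in [
    "arn:aws:sqs:us-east-1:111122223333:dev-team-queue",
    "arn:aws:sqs:us-east-1:111122223333:ops-team-queue",
]:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# The S3 bucket notification configuration is then pointed at the
# topic instead of a single queue.
```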

A company needs to deploy at least 2 EC2 instances to support the normal workload of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it processes mission-critical workloads.

Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B. Since the scenario requires at least 2 instances to handle regular traffic, you should have 2 instances running all the time even if an AZ outage occurs.

A trading platform is hosted in an on-premises data center and uses an Oracle database. The company needs to migrate its infrastructure to AWS to improve application performance and high availability.

Creating an Oracle database in RDS with Multi-AZ deployments

A multi-tier web application farm runs in a VPC that is not connected to the corporate network. Administrators connect to the VPC over the Internet to manage the fleet of EC2 instances running in both the public and private subnets. The company added a bastion host with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but it wants to further limit administrative access to all of the instances in the VPC.

Deploy a Windows bastion host with an EIP in the public subnet, and allow RDP access to the bastion from only the corporate IP addresses.

An application heavily uses the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor the CPU, including the percentage of (1) CPU bandwidth and (2) total memory consumed by each process.

Enable Enhanced Monitoring in RDS. CloudWatch does not provide the percentage of the CPU bandwidth and total memory consumed by each database process in your RDS instance; Enhanced Monitoring, a feature of Amazon RDS, does.

A company has a *high-speed Internet connection*; the collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. The data needs to be aggregated in the fastest way.

Enable Transfer Acceleration on the destination bucket and upload the collected data using *Multipart Upload*.
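A hedged boto3 sketch (bucket and file names are hypothetical); upload_file switches to multipart upload automatically above the configured threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Enable Transfer Acceleration on the destination bucket (one-time setup).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="weather-data-nv",  # placeholder name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint with multipart upload.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    Filename="site-a.tar.gz",
    Bucket="weather-data-nv",
    Key="site-a.tar.gz",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024),
)
```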

What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket? (Select TWO.)

Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket:
- (1) *An S3 bucket that is configured to host a static website.* The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain portal.tutorialsdojo.com, the name of the bucket must be portal.tutorialsdojo.com.
- (2) *A registered domain name.* You can use Route 53 as your domain registrar, or you can use a different registrar.
- (3) *Route 53 as the DNS service for the domain.* If you register your domain name by using Route 53, Route 53 is automatically configured as the DNS service for the domain.

Which of the following are valid points in proving that EBS is the best service to use for migration?

Here is a list of important information about EBS volumes:
- When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to a failure of any single hardware component.
- An EBS volume can only be attached to one EC2 instance at a time.
- After you create a volume, you can attach it to any EC2 instance in the same Availability Zone.
- An EBS volume is off-instance storage that can persist independently from the life of an instance. You can specify not to terminate the EBS volume when you terminate the EC2 instance during instance creation.
- EBS volumes support live configuration changes while in production, which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
- Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256).
- EBS volumes offer a 99.999% SLA.

A cloud architecture is composed of Linux and Windows EC2 instances that process high volumes of financial data 24 hours a day, 7 days a week. You need to monitor the memory and disk utilization metrics of all the instances.

Install the CloudWatch agent on all the EC2 instances to gather the memory and disk utilization data. View the custom metrics in the Amazon CloudWatch console. Explanation: there are no ready-to-use metrics for memory utilization, disk swap utilization, disk space utilization, page file utilization, or log collection.

A company is setting up an ECS batch architecture for its image processing application. It will be hosted in an Amazon ECS cluster with two ECS tasks that will handle image uploads from the users and image processing. The first ECS task will process the user requests, store the image in an S3 input bucket, and push a message to a queue. The second task reads from the queue, parses the message containing the object name, and then downloads the object. Once the image is processed and transformed, it will upload the objects to the S3 output bucket. You must create a queue and the necessary IAM permissions for the ECS tasks.

Launch a new Amazon SQS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and SQS queue. Declare the IAM Role (taskRoleArn) in the task definition.

A company is migrating an on-premises application to an Amazon EC2 instance. An IPv6 CIDR block is attached to the company's Amazon VPC. A strict security policy mandates that the production VPC must only allow outbound communication over *IPv6* between the instance and the internet but should prevent the internet from initiating an inbound *IPv6* connection. The new architecture should also allow traffic flow inspection and traffic filtering.

Launch the EC2 instance to a private subnet and attach an *Egress-Only Internet Gateway* to the VPC to allow outbound IPv6 communication to the internet. Use AWS Network Firewall to set up the required rules for traffic inspection and traffic filtering. Explanation: An egress-only internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows outbound communication over *IPv6* from instances in your VPC to the internet and prevents the internet from initiating an IPv6 connection with your instances. AWS Network Firewall can set up the required rules for traffic inspection and traffic filtering.

A company plans to migrate its suite of containerized applications running on-premises to a container service in AWS. The solution must be cloud-agnostic and use an *open-source platform* that can automatically manage containerized workloads and services. It should also use the same configuration and tools across various production environments.

Migrate the application to *Amazon Elastic Kubernetes Service* with EKS worker nodes. Explanation: (Kubernetes is open-source)

Default termination policy for EC2 instances in an Auto Scaling group

Oldest instances are terminated first.
1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, choose the Availability Zone with the instances that use the oldest launch configuration.
2. Determine which unprotected instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, terminate it.
3. If there are multiple instances to terminate based on the above criteria, determine which unprotected instances are *closest to the next billing hour*. (This helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is one such instance, terminate it.
4. If there is more than one unprotected instance closest to the next billing hour, choose one of these instances at random.

DynamoDB stream

Ordered flow of information about changes to items in an Amazon DynamoDB table When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. A *stream record* contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items. Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build applications that react to data modifications in DynamoDB tables. Remember that the DynamoDB Stream feature is not enabled by default
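For illustration, enabling a stream on an existing table with boto3 (the table name is hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Streams are off by default; enable them and capture both the
# "before" and "after" images of modified items.
dynamodb.update_table(
    TableName="Orders",  # placeholder table
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```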

AWS Storage Gateway

Primarily used to integrate your on-premises network to AWS but not for migrating your applications

An On-Demand EC2 instance can only be accessed from this IP address (110.238.98.71) via an SSH connection.

Protocol - TCP, Port Range - 22, Source - 110.238.98.71/32. The /32 denotes one IP address and /0 refers to the entire network. Take note that the SSH protocol uses TCP on port 22.
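The same rule expressed as a hedged boto3 sketch (the security group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (TCP port 22) from exactly one address: /32 = a single IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "110.238.98.71/32"}],
        }
    ],
)
```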

A serverless application is made up of AWS Amplify, Amazon API Gateway, and a Lambda function. It is connected to an *Amazon RDS MySQL* database instance inside a private subnet. A Lambda Function URL is also implemented as the dedicated HTTPS endpoint for the function, which has the following value: https://12june1898pil1pinas.lambda-url.us-west-2.on.aws/ There are times during peak loads when the database throws a *"too many connections" error*, preventing users from accessing the application.

Provision an RDS *Proxy* between the Lambda function and the RDS database instance. Explanation: RDS Proxy helps you manage a large number of connections from Lambda to an RDS database by establishing a warm connection pool to the database. Your Lambda functions interact with RDS Proxy instead of your database instance. It handles the connection pooling necessary for scaling many simultaneous connections created by concurrent Lambda functions. This allows your Lambda applications to reuse existing connections, rather than creating new connections for every function invocation.

An application is deployed on a fleet of Spot EC2 instances and uses a MySQL RDS database instance. There is one RDS instance running in one Availability Zone. You plan to improve the database to ensure high availability by synchronous data replication to another RDS instance.

RDS DB instance running as a Multi-AZ deployment

write-once-read-many (WORM) model

S3 Object Lock (Object Versioning feature must also be enabled for Object Lock)

A company has a suite of container-based web applications and serverless solutions hosted in AWS. It must define a standard infrastructure that will be used across development teams and applications, as well as application-specific resources that change frequently, especially during the early stages of application development. Developers must be able to add *supplemental resources* to their applications, which are beyond what the architects predefined in the system environments and service templates.

Set up *AWS Proton* for deploying *container applications* and *serverless solutions*. Create components from the AWS Proton console and attach them to their respective service instance. Explanation AWS Proton allows you to deploy any serverless or container-based application with increased efficiency, consistency, and control. You can define infrastructure standards and effective continuous delivery pipelines for your organization. Proton breaks down the infrastructure into environment and service ("infrastructure as code" templates). As a developer, you select a standardized service template that AWS Proton uses to create a service that deploys and manages your application in a service instance. An AWS Proton service is an instantiation of a service template, which normally includes several service instances and a pipeline. With a component, a developer can add supplemental resources to their application, above and beyond what administrators defined in environment and service templates. The developer then attaches the component to a service instance. AWS Proton provisions infrastructure resources defined by the component just like it provisions resources for environments and service instances.

A company has resources hosted on both its on-premises network and in the AWS cloud. It wants to access resources in both environments using its on-premises credentials, which are stored in Active Directory.

Set up SAML 2.0-Based Federation by using a Microsoft Active Directory Federation Service (AD FS)

An enterprise web application is hosted on Amazon ECS Docker containers that use an Amazon FSx for Lustre filesystem for its high-performance computing workloads. A warm standby environment is running in another AWS region for disaster recovery. You need to design a system that will automatically route the live traffic to the disaster recovery (DR) environment only in the event that the primary application stack experiences an outage.

Set up a failover routing policy configuration in Route 53 by adding a health check on the primary service endpoint. Configure Route 53 to direct the DNS queries to the secondary record when the primary resource is unhealthy. Configure the network access control list and the route table to allow Route 53 to send requests to the endpoints specified in the health checks. Enable the Evaluate Target Health option by setting it to Yes.

A company has multiple VPCs in various AWS regions. It needs to set up a logging system that tracks all of the changes made to its AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the *security*, *integrity*, and *durability* of the log data. It should also provide an event history of all API calls made in the AWS Management Console and AWS CLI.

Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the *--is-multi-region-trail* and *--include-global-service-events* parameters, then *encrypt log files* using KMS encryption. Apply Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. Explanation: An event in CloudTrail is the record of an activity in an AWS account. This activity can be an action taken by a user, role, or service that is monitorable by CloudTrail. CloudTrail events provide a history of both API and non-API account activity made through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. There are two types of events that can be logged in CloudTrail: management events and data events. By default, trails log management events, but not data events.
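A hedged boto3 equivalent of those CLI parameters (trail, bucket, and key names are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Multi-region trail that also records global services (IAM,
# CloudFront, Route 53) and encrypts log files with KMS.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="org-cloudtrail-logs",
    IsMultiRegionTrail=True,            # --is-multi-region-trail
    IncludeGlobalServiceEvents=True,    # --include-global-service-events
    KmsKeyId="alias/cloudtrail-logs",   # placeholder key alias
)
cloudtrail.start_logging(Name="org-audit-trail")
```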

A company has multiple VPCs with IPv6 enabled for its suite of web applications. The Solutions Architect tried to deploy a new Amazon EC2 instance but she received an error saying that there is no IP address available on the subnet.

Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the VPC and then launch the instance

company wants to streamline the process of creating multiple AWS accounts within an AWS Organization. Each organization unit (OU) must be able to launch new accounts with preapproved configurations from the security team which will standardize the baselines and network configurations for all accounts in the organization.

Set up an *AWS Control Tower Landing Zone*. Enable pre-packaged guardrails to enforce policies or detect violations.

Designing a three-tier website that will be hosted on an Amazon EC2 ASG fronted by an Internet-facing ALB. The website will persist data to an Amazon Aurora Serverless DB cluster. The company requires a network topology that follows a layered approach to reduce the impact of misconfigured security groups or network access lists. Web filtering must also be enabled to automatically stop traffic to known malicious URLs and to immediately drop requests coming from blacklisted fully qualified domain names (FQDNs).

Set up an *Application Load Balancer* deployed in a *public subnet*, then host the Auto Scaling Group of Amazon EC2 instances and the Aurora Serverless DB cluster in *private subnets*. Launch an *AWS Network Firewall* with the *appropriate firewall policy* to automatically stop traffic to known malicious URLs and drop requests coming from blacklisted FQDNs. Reroute your Amazon VPC network traffic through the firewall endpoints.

A company is using AWS Fargate to run a batch job whenever an object is uploaded to an Amazon S3 bucket. The minimum ECS task count is initially set to 1 to save on costs and should only be increased based on new objects uploaded to the S3 bucket.

Set up an Amazon EventBridge rule to detect S3 object PUT operations and set the target to the ECS cluster to run a new ECS task. Explanation: Amazon EventBridge (formerly called CloudWatch Events) is a serverless event bus that makes it easy to connect applications together. It uses data from your own applications, integrated software as a service (SaaS) applications, and AWS services. This simplifies the process of building event-driven architectures by decoupling event producers from event consumers. This allows producers and consumers to be scaled, updated, and deployed independently. Loose coupling improves developer agility in addition to application resiliency.
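An illustrative boto3 sketch of such a rule and target (ARNs, subnet, bucket, and role are placeholders; the bucket is assumed to have EventBridge notifications enabled):

```python
import json

import boto3

events = boto3.client("events")

# Match S3 "Object Created" events delivered via EventBridge.
events.put_rule(
    Name="s3-upload-runs-batch-task",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["upload-bucket"]}},  # placeholder
    }),
)

# Target: run one Fargate task per matching event.
events.put_targets(
    Rule="s3-upload-runs-batch-task",
    Targets=[{
        "Id": "run-batch-task",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/batch-cluster",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-ecs-role",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/batch-job",
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]}
            },
        },
    }],
)
```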

The company should also be alerted if there are potential policy violations with the privacy of their S3 buckets.

Set up and configure Amazon Macie to monitor their Amazon S3 data. Amazon Macie is an ML-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in *Amazon S3*. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization.

Windows bastion

Since it is a Windows bastion, you should allow RDP access; SSH is what is mainly used for Linux-based systems.

An accounting application uses an RDS database configured with Multi-AZ deployments to improve availability. What would happen to RDS if the primary database instance fails?

The canonical name record (CNAME) is switched from the primary to the standby instance.

*AWS Application Migration Service (AWS MGN)*

This service is primarily used for lift-and-shift migrations of applications from physical infrastructure, VMware vSphere, Microsoft Hyper-V, Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and other clouds to AWS. The *AWS Schema Conversion Tool* is used with this service to help convert source databases to a format compatible with the target database when migrating.

A company needs to assess and audit all the configurations in their AWS account. It must *enforce strict compliance* by tracking all configuration changes made to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified automatically to avoid data breaches.

Use *AWS Config* to set up a rule in your AWS account. Explanation: By creating an *AWS Config rule*, you can *enforce your ideal configuration* in your AWS account. It also checks if the applied configuration in your resources violates any of the conditions in your rules. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
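For illustration, one of the AWS managed rules that detects publicly readable buckets could be enabled like this with boto3:

```python
import boto3

config = boto3.client("config")

# AWS managed rule that flags any bucket allowing public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```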

A media company hosts its news website on an Amazon ECS cluster that uses the Fargate launch type. The application data are all stored in Amazon Keyspaces (for Apache Cassandra) with data-at-rest encryption enabled. The database credentials should be supplied using environment variables, to comply with strict security compliance. You have to ensure that the credentials *are secure* and *can't be viewed in plaintext on the cluster itself*.

Use *AWS Systems Manager Parameter Store* to keep the database credentials and then encrypt them using *AWS KMS*. Create an *IAM Role* for your Amazon ECS task execution role (executionRoleArn) and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container. Explanation: Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. This feature is supported by tasks using both the EC2 and Fargate launch types. Secrets can be exposed to a container in the following ways: - To inject sensitive data into your containers as environment variables, use the secrets container definition parameter. - To reference sensitive information in the log configuration of a container, use the container definition parameter. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of either the Secrets Manager secret or Systems Manager Parameter Store parameter containing the sensitive data to present to the container. The parameter that you reference can be from a different Region than the container using it, but must be from within the same account.
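A condensed boto3 sketch of the pattern (account IDs, ARNs, and the parameter value are placeholders):

```python
import boto3

ssm = boto3.client("ssm")
ecs = boto3.client("ecs")

# Store the credential encrypted with a KMS key (key alias is a placeholder).
ssm.put_parameter(
    Name="/news-site/db-password",
    Value="s3cr3t",  # illustrative only
    Type="SecureString",
    KeyId="alias/ecs-secrets",
)

# Reference the parameter from the container definition; ECS injects it
# as an environment variable at runtime, so the plaintext value never
# appears in the task definition or on the cluster.
ecs.register_task_definition(
    family="news-site",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecs-exec-role",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web:latest",
        "secrets": [{
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/news-site/db-password",
        }],
    }],
)
```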

A web application uses Amazon CloudFront to distribute its images, videos, and other static content stored in an S3 bucket to users around the world. The company introduced a new member-only access feature for high-quality media files. There is a requirement to provide access to *multiple* private media files only to paying subscribers *without having to change their current URLs*.

Use *Signed Cookies* to control who can access the private files in your CloudFront distribution by modifying your application to determine whether a user should have access to your content. For members, send the required *Set-Cookie headers* to the viewer, which will unlock the content only to them. ("Doesn't want to change URLs" --> Signed Cookies) To securely serve this private content by using CloudFront, you can do the following:
- Require that your users access your private content by using special CloudFront signed URLs or signed cookies.
- Require that your users access your content by using CloudFront URLs, not URLs that access content directly on the origin server (for example, Amazon S3 or a private HTTP server). Requiring CloudFront URLs isn't necessary, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
CloudFront signed URLs and signed cookies provide the same basic functionality: they allow you to control who can access your content. If you want to serve private content through CloudFront and you're trying to decide whether to use signed URLs or signed cookies, consider the following.
Use *signed URLs* for the following cases:
- You want to use an RTMP distribution. Signed cookies aren't supported for RTMP distributions.
- You want to restrict access to individual files, for example, an installation download for your application.
- Your users are using a client (for example, a custom HTTP client) that doesn't support cookies.
Use *signed cookies* for the following cases:
- You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers' area of a website.
- You don't want to change your current URLs.

A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, it must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. The solution should also be able to audit the key usage independently of AWS CloudTrail.

Use AWS Key Management Service to create a CMK (customer master key) in a *custom key store* and store the non-extractable key material in AWS CloudHSM. Explanation: The AWS Key Management Service (KMS) custom key store feature combines the controls provided by AWS CloudHSM with the integration and ease of use of AWS KMS. CMKs that are generated in your custom key store never leave the HSMs in the CloudHSM cluster in plaintext, and all AWS KMS operations that use those keys are only performed in your HSMs.

An application hosted in AWS Fargate uses an RDS database in a Multi-AZ Deployments configuration with several Read Replicas. You must ensure that all of the database credentials, API keys, and other secrets are encrypted and rotated on a regular basis to improve data security. The application should use the latest version of the encrypted credentials when connecting to the RDS database.

Use AWS Secrets Manager to store and encrypt the database credentials, API keys, and other secrets. Enable automatic rotation for all of the credentials.
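A minimal sketch of turning on rotation with boto3 (secret ID and Lambda ARN are placeholders; RDS secrets can use an AWS-provided rotation function):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Enable automatic rotation for an existing secret every 30 days.
secrets.rotate_secret(
    SecretId="prod/rds/credentials",  # placeholder
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-rds",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```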

A company identified a series of DDoS attacks while monitoring the VPC. It needs to fortify the current cloud infrastructure to protect the data of its clients.

Use AWS Shield Advanced to detect and mitigate DDoS attacks. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.

A company uses an Application Load Balancer (ALB) for its public-facing multi-tier web applications. There has been a surge of *SQL injection attacks* lately, which causes critical data discrepancy issues.

Use AWS WAF and set up a managed rule to block request patterns associated with the exploitation of SQL databases, like SQL injection attacks. Associate it with the Application Load Balancer. Integrate AWS WAF with AWS Firewall Manager to reuse the rules across all the AWS accounts

How do you improve database performance in DynamoDB by distributing the workload evenly and using the provisioned throughput efficiently?

Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item

AWS PrivateLink

Using AWS PrivateLink to create an interface endpoint will allow your traffic to traverse the AWS Global Backbone for maximum performance and security. Also, by using an AWS Direct Connect connection, you can ensure you have a dedicated link that provides maximum performance and low latency to and from AWS.

Data is stored in an Amazon S3 bucket. Both the master keys and the unencrypted data should never be sent to AWS to comply with the strict compliance and regulatory requirements of the company.

Using S3 client-side encryption with a client-side master key. (A KMS-managed customer master key would live in AWS, which violates the requirement that the master keys never be sent to AWS.)

You cannot create an RDS Read Replica of a database that is running on Amazon EC2.

You can only create read replicas of databases running on Amazon RDS.

HTTP 504 errors

You can set up an origin failover by creating an origin group with two origins: one as the primary origin and the other as the secondary, which CloudFront automatically switches to when the primary origin fails. This will alleviate the occasional HTTP 504 errors that users are experiencing.

Application Load Balancer

You can't assign an Elastic IP address to an ALB

AWS Systems Manager

a collection of services used to manage applications and infrastructure running in AWS that is usually in a single AWS account. The AWS Systems Manager *OpsCenter* service is just one of the capabilities of AWS Systems Manager which provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources.

Virtual Private Gateway

a virtual private network (VPN) concentrator on the AWS side of the VPN connection. It is used to establish a secure and encrypted VPN tunnel between your on-premises network and your VPC (Virtual Private Cloud) in AWS. The virtual private gateway enables you to securely extend your on-premises network into the AWS Cloud over an encrypted VPN connection. Does not need an EIP

VPC endpoint

allows you to privately connect your VPC to supported AWS and VPC endpoint services powered by AWS PrivateLink without needing an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Traffic Mirroring

an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of type "interface" (it does not filter or inspect the incoming/outgoing traffic)

Amazon EFS

can only transition a file to the IA storage class after 90 days.

Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text?

create your own AWS KMS key and use it to enable encryption helpers that leverage AWS KMS to store and encrypt the sensitive information

AWS Control Tower

easiest way to set up and govern a new, secure, multi-account AWS environment.

DataSync

eliminates or automatically handles many of the tasks involved in data transfer, including scripting copy jobs, scheduling and monitoring transfers, validating data, and optimizing network utilization.

AWS Transfer for SFTP

enables you to easily move your file transfer workloads that use the Secure Shell File Transfer Protocol (SFTP) to AWS without needing to modify your applications or manage any SFTP servers. To get started with AWS Transfer for SFTP (AWS SFTP) you create an SFTP server and map your domain to the server endpoint, select authentication for your SFTP clients using service-managed identities, or integrate your own identity provider, and select your Amazon S3 buckets to store the transferred data. Your existing users can continue to operate with their existing SFTP clients or applications. Data uploaded or downloaded using SFTP is available in your Amazon S3 bucket, and can be used for archiving or processing in AWS.

ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token

enabling IAM DB Authentication

A trading platform uses an API built with AWS Lambda and API Gateway. A significant increase in site visitors and new users is expected. You need to protect the backend systems of the platform from traffic spikes.

enabling throttling limits and result caching in API Gateway. Explanation: *Amazon API Gateway* provides throttling at multiple levels, including global and by service call (*throttling ensures that API traffic is controlled to help your backend services maintain performance and availability*). Throttling limits can be set for standard rates and bursts. For example, API owners can set a rate limit of 1,000 requests per second for a specific method in their REST APIs, and also configure Amazon API Gateway to handle a burst of 2,000 requests per second for a few seconds. Amazon API Gateway tracks the number of requests per second. Any request over the limit will receive a 429 HTTP response. The client SDKs generated by Amazon API Gateway retry calls automatically when met with this response.
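A hedged boto3 sketch of setting stage-wide throttling and caching (the REST API ID is a placeholder; "/*/*" applies the settings to every method on the stage):

```python
import boto3

apigateway = boto3.client("apigateway")

# Throttle all methods on the "prod" stage and enable result caching.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "1000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "2000"},
        # Caching needs a cache cluster on the stage plus per-method enablement.
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
    ],
)
```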

AWS PrivateLink (which is also known as VPC Endpoint)

highly available, scalable technology that enables you to privately connect your VPC to the AWS services as if they were in your VPC.

maximum days for the EFS lifecycle policy

is 90 days

SQS Message Lifecycle

long polling helps reduce the cost of using SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren't included in a response)

Amazon EFS

only supports Linux workloads

EBS

primarily used as block storage for EC2 instances

AWS Transit Gateway

primarily used to connect your Amazon Virtual Private Clouds (VPCs) and on-premises networks through a central hub

AWS Elastic Beanstalk

reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk supports applications developed in Go, Java, .NET (Microsoft), Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application.

You need to set up an automated backup of all of the EBS volumes for your EC2 instances as soon as possible. What is the *fastest* and most cost-effective solution to automatically back up all of your EBS volumes?

use *Amazon Data Lifecycle Manager* (Amazon DLM) to automate the creation of EBS snapshots

AWS Network Firewall

used for filtering traffic at the perimeter of your VPC

AWS Global Accelerator

uses the vast, congestion-free AWS global network to route TCP and UDP traffic to a healthy application endpoint in the closest AWS Region to the user. This means it will intelligently route traffic to the closest point of presence (reducing latency). It uses anycast IP addresses, which means the IP does not change when failing over between regions, so there are no issues with client caches having incorrect entries that need to expire. Seamless failover is ensured.

A company has a dynamic web app written in the MEAN stack that is going to be launched in the next month. Traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website?

using Route 53 with the failover option to a static S3 website bucket or CloudFront distribution

All objects uploaded to an Amazon S3 bucket must be encrypted for security compliance. The bucket will use *server-side encryption* with *Amazon S3-Managed encryption keys* (*SSE-S3*) to encrypt data using 256-bit Advanced Encryption Standard (AES-256) block cipher. Which of the following request headers must be used?

x-amz-server-side-encryption. Explanation: (However, if you choose to use server-side encryption with *customer-provided encryption keys* (*SSE-C*), you must provide encryption key information using the following request headers: x-amz-server-side-encryption-customer-algorithm, x-amz-server-side-encryption-customer-key, and x-amz-server-side-encryption-customer-key-MD5.)
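For illustration, boto3 sends the x-amz-server-side-encryption header for you when ServerSideEncryption is set (bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 encrypts the object with AES-256 using S3-managed keys.
s3.put_object(
    Bucket="compliance-bucket",  # placeholder
    Key="report.pdf",
    Body=b"...",
    ServerSideEncryption="AES256",
)
```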

