Solutions Architect
Which security functions are based on AWS STS? (Select 2)
- Granting cross-account access with IAM roles
- Authenticating IAM users by using access keys
*An organization needs to control access to several S3 buckets. They plan to use a gateway endpoint to allow access to trusted buckets. Which of the following could help you achieve this requirement?
Generate an endpoint policy for trusted S3 buckets. When you create a VPC endpoint, you can attach an endpoint policy that controls access to the service to which you are connecting. This is an alternative to creating multiple bucket policies for each bucket.
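A minimal boto3 sketch of attaching such an endpoint policy; the bucket name and endpoint ID are placeholders, not values from the question.

import json
import boto3

ec2 = boto3.client("ec2")

# Endpoint policy that only allows access to the trusted bucket
# (bucket name and endpoint ID below are placeholders).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowTrustedBucketsOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::trusted-bucket",
            "arn:aws:s3:::trusted-bucket/*"
        ]
    }]
}

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(endpoint_policy)
)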
Which database engines in Amazon RDS are read replicas available for?
Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.
For a managed VPN connection, what should the architect use to monitor whether the VPN tunnel is up or down?
The CloudWatch TunnelState metric.
A news company is planning to use a Hardware Security Module (CloudHSM) in AWS for secure key storage of their web applications. You have launched the CloudHSM cluster but after just a few hours, a support staff mistakenly attempted to log in as the administrator three times using an invalid password in the Hardware Security Module. This has caused the HSM to be zeroized, which means that the encryption keys on it have been wiped. Unfortunately, you did not have a copy of the keys stored anywhere else. How can you obtain a new copy of the keys that you have stored on Hardware Security Module?
The keys are lost permanently if you did not have a copy.
*Your company is planning on using the API Gateway service to manage APIs for developers and users. There is a need to segregate the access rights for both developers and users. How can this be accomplished?
Use IAM permissions to control the access.
Your organization's AWS setup has an S3 bucket which stores confidential documents that can only be downloaded by users authenticated and authorized via your application. You do not want to create IAM users for each of these users, so as a best practice you have decided to generate AWS STS federated user temporary credentials each time a download request is made, and then use the credentials to generate a presigned URL and redirect the user to it for download. However, when users try to access the presigned URL, they get an Access Denied error. What could be the reason?
The IAM user used to generate the federated user credentials does not have access to the S3 bucket.
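A minimal boto3 sketch of the flow described above; the bucket, object key, and session name are placeholders. It also illustrates why the answer holds: the federated user's effective permissions are the intersection of the calling IAM user's permissions and the passed policy, so the IAM user itself must have s3:GetObject.

import json
import boto3

# The IAM user whose long-term keys sign this call must itself have
# s3:GetObject on the bucket; otherwise the presigned URL returns Access Denied.
sts = boto3.client("sts")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::confidential-docs/*"   # placeholder bucket
    }]
}

creds = sts.get_federation_token(
    Name="app-download-user",           # placeholder session name
    Policy=json.dumps(policy),
    DurationSeconds=900
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "confidential-docs", "Key": "report.pdf"},
    ExpiresIn=300
)
print(url)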
Which of the following are characteristics of a Reserved Instance? (Select 3)
- It can be used to lower the Total Cost of Ownership (TCO) of a system.
- It is specific to an instance type.
- It can be applied to instances launched by Auto Scaling.
(Pricing is determined by term x offering class x payment option.)
How does Elastic Beanstalk apply updates?
By having a duplicate ready with updates before swapping.
A company hosted a movie streaming app in Amazon Web Services. The application is deployed to several EC2 instances on multiple availability zones. Which of the following configurations allows the load balancer to distribute incoming requests evenly to all EC2 instances across multiple Availability Zones?
Cross-zone load balancing
A Solutions Architect plans to migrate NAT instances to NAT gateway. The Architect has NAT instances with scripts to manage high availability. What is the MOST efficient method to achieve similar high availability with NAT gateway?
Launch a NAT gateway in each Availability Zone
A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?
Launch a new AWS Storage Gateway instance AMI in Amazon EC2, and restore from a gateway snapshot.
A digital media company shares static content to its premium users around the world and also to their partners who syndicate their media files. The company is looking for ways to reduce its server costs and securely deliver their data to their customers globally with low latency. Which combination of services should be used to provide the MOST suitable and cost-effective architecture? (Select TWO.)
Amazon S3 and Amazon CloudFront
Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? (Select 2)
- Simple Network Management Protocol
- Email
A company is generating large datasets with millions of rows to be summarized column-wise. Existing business intelligence tools will be used to build daily reports from these datasets. Which storage service meets these requirements?
Amazon Redshift
*A company has a static corporate website hosted in a standard S3 bucket and a new web domain name that was registered using Route 53. You are instructed by your manager to integrate these two services in order to successfully launch their corporate website. What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket? (Select TWO.)
- A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar.
- The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain portal.tutorialsdojo.com, the name of the bucket must be portal.tutorialsdojo.com.
(The S3 bucket and the Route 53 hosted zone can be in different regions.)
A company plans to migrate all of their applications to AWS. The Solutions Architect suggested to store all the data to EBS volumes. The Chief Technical Officer is worried that EBS volumes are not appropriate for the existing workloads due to compliance requirements, downtime scenarios, and IOPS performance. Which of the following are valid points in proving that EBS is the best service to use for migration? (Select TWO.)
- After you create a volume, you can attach it to any EC2 instance in the same Availability Zone.
- EBS volumes support live configuration changes while in production, which means that you can modify the volume type, volume size, and IOPS capacity without service interruptions.
- When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to a failure of any single hardware component.
- Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256).
- EBS volumes offer a 99.999% SLA.
- EBS volume snapshots are actually sent to Amazon S3.
*An advertising company is currently working on a proof of concept project that automatically provides SEO analytics for its clients. Your company has a VPC in AWS that operates in a dual-stack mode in which IPv4 and IPv6 communication is allowed. You deployed the application to an Auto Scaling group of EC2 instances with an Application Load Balancer in front that evenly distributes the incoming traffic. You are ready to go live but you need to point your domain name (tutorialsdojo.com) to the Application Load Balancer. In Route 53, which record types will you use to point the DNS name of the Application Load Balancer? (Select TWO.)
- Alias with a type "AAAA" record set - Alias with a type "A" record set alias since goes to root url domain IPv6 AAAA provide alias with alias target and hosted zone id and route policy simple
Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? (Select 2)
- Amazon EBS encryption is available on all current generation instance types and the following previous generation instance types: A1, C3, cr1.8xlarge, G2, I2, M3, and R3.
- Encryption is supported by all EBS volume types.
- You can configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create.
- Amazon EBS snapshots are automatically encrypted with the same encryption key that was used to encrypt the source EBS volume.
*A company is looking to store their confidential financial files in AWS which are accessed every week. The Architect was instructed to set up the storage system which uses envelope encryption and automates key rotation. It should also provide an audit trail which shows who used the encryption key and by whom for security purposes. Which of the following should the Architect implement to satisfy the requirement in the most cost-effective way? (Select TWO.)
- Configure server-side encryption with AWS KMS managed keys (SSE-KMS).
- Use Amazon S3 to store the data.
SSE-KMS also provides you with an audit trail that shows when your CMK was used and by whom.
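A minimal boto3 sketch of uploading an object with SSE-KMS; the bucket name, object key, file name, and KMS key ARN are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload an object encrypted with a customer-managed KMS key (SSE-KMS).
with open("q1-report.xlsx", "rb") as f:            # placeholder file
    s3.put_object(
        Bucket="financial-files",                  # placeholder bucket
        Key="reports/q1-report.xlsx",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
    )

Usage of the key is then visible in CloudTrail, which is where the audit trail mentioned above comes from.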
A popular social media website uses a CloudFront web distribution to serve their static contents to their millions of users around the globe. They are receiving a number of complaints recently that their users take a lot of time to log into their website. There are also occasions when their users are getting HTTP 504 errors. You are instructed by your manager to significantly reduce the user's login time to further optimize the system. Which of the following options should you use together to set up a cost-effective solution that can improve your application's performance? (Select TWO.)
- Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.
- Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin, which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.
A company currently has an Augment Reality (AR) mobile game that has a serverless backend. It is using a DynamoDB table which was launched using the AWS CLI to store all the user data and information gathered from the players and a Lambda function to pull the data from DynamoDB. The game is being used by millions of users each day to read and store data. How would you design the application to improve its overall performance and make it more scalable while keeping the costs low? (Select TWO.)
- Enable DynamoDB Accelerator (DAX), ensure that Auto Scaling is enabled, and increase the maximum provisioned read and write capacity.
- Use API Gateway in conjunction with Lambda, turn on caching for frequently accessed data, and enable DynamoDB global replication.
*An application is hosted in an Auto Scaling group of EC2 instances and a Microsoft SQL Server on Amazon RDS. There is a requirement that all in-flight data between your web servers and RDS should be secured. Which of the following options is the MOST suitable solution that you should implement? (Select TWO.)
- Force all connections to your DB instance to use SSL by setting the rds.force_ssl parameter to true. Once done, reboot your DB instance.
- Download the Amazon RDS root CA certificate. Import the certificate to your servers and configure your application to use SSL to encrypt the connection to RDS.
(The transparent data encryption (TDE) option is incorrect because TDE is primarily used to encrypt stored data at rest on DB instances running Microsoft SQL Server, not data in flight.)
A company has both on-premises data center as well as AWS cloud infrastructure. They store their graphics, audios, videos, and other multimedia assets primarily in their on-premises storage server and use an S3 Standard storage class bucket as a backup. Their data is heavily used for only a week (7 days) but after that period, it will only be infrequently used by their customers. The Solutions Architect is instructed to save storage costs in AWS yet maintain the ability to fetch a subset of their media assets in a matter of minutes for a surprise annual data audit, which will be conducted on their cloud storage. Which of the following are valid options that the Solutions Architect can implement to meet the above requirement? (Select TWO.)
- Set a lifecycle policy in the bucket to transition the data from the Standard storage class to Glacier after one week (7 days).
- Set a lifecycle policy in the bucket to transition to S3 Standard-IA after 30 days.
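A minimal boto3 sketch of the first option (transition to Glacier after 7 days); the bucket and rule names are placeholders and the empty prefix simply applies the rule to the whole bucket.

import boto3

s3 = boto3.client("s3")

# Transition objects to Glacier 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-backup-bucket",                  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-one-week",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},              # apply to all objects
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}]
        }]
    }
)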
A loan processing application is hosted in a single On-Demand EC2 instance in your VPC. To improve the scalability of your application, you have to use Auto Scaling to automatically add new EC2 instances to handle a surge of incoming requests. Which of the following items should be done in order to add an existing EC2 instance to an Auto Scaling group? (Select TWO.)
- You have to ensure that the AMI used to launch the instance still exists.
- You have to ensure that the instance is launched in one of the Availability Zones defined in your Auto Scaling group.
The instance that you want to attach must meet the following criteria:
- The instance is in the running state.
- The AMI used to launch the instance must still exist.
- The instance is not a member of another Auto Scaling group.
- The instance is launched into one of the Availability Zones defined in your Auto Scaling group.
- If the Auto Scaling group has an attached load balancer, the instance and the load balancer must both be in EC2-Classic or the same VPC. If the Auto Scaling group has an attached target group, the instance and the load balancer must both be in the same VPC.
A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)
- Set up two EC2 instances and put them behind an Elastic Load Balancer (ELB).
- Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy.
A Solutions Architect working for a startup is designing a High Performance Computing (HPC) application which is publicly accessible for their customers. The startup founders want to mitigate distributed denial-of-service (DDoS) attacks on their application. Which of the following options are not suitable to be implemented in this scenario? (Select TWO.)
- Using Dedicated EC2 instances to ensure that each instance has the maximum performance possible
- Adding multiple Elastic Fabric Adapters (EFA) to each EC2 instance to increase the network bandwidth
*In Amazon EC2, you can manage your instances from the moment you launch them up to their termination. You can flexibly control your computing costs by changing the EC2 instance state. Which of the following statements is true regarding EC2 billing? (Select TWO.)
- You will be billed when your On-Demand instance is preparing to hibernate with a stopping state.
- You will be billed when your Reserved instance is in the terminated state.
The DevOps team at an IT company is provisioning a two-tier application in a VPC with a public subnet and a private subnet. The team wants to use either a NAT instance or a NAT gateway in the public subnet to enable instances in the private subnet to initiate outbound IPv4 traffic to the internet but needs some technical assistance in terms of the configuration options available for the NAT instance and the NAT gateway. As a solutions architect, which of the following options would you identify as CORRECT? (Select three)
- A NAT instance can be used as a bastion server.
- A NAT instance supports port forwarding.
- Security groups can be associated with a NAT instance.
*A company has a web-based order processing system that is currently using a standard queue in Amazon SQS. The IT Manager noticed that there are a lot of cases where an order was processed twice. This issue has caused a lot of trouble in processing and made the customers very unhappy. The manager has asked you to ensure that this issue will not recur. What can you do to prevent this from happening again in the future? (Select TWO.)
- Use an SQS FIFO queue instead.
- Replace SQS with SWF.
SWF ensures that a task is never duplicated and is assigned only once. Thus, even though you may have multiple workers for a particular activity type (or a number of instances of a decider), Amazon SWF will give a specific task to only one worker (or one decider instance). Additionally, Amazon SWF keeps at most one decision task outstanding at a time for a workflow execution. Thus, you can run multiple decider instances without worrying about two instances operating on the same execution simultaneously. These facilities enable you to coordinate your workflow without worrying about duplicate, lost, or conflicting tasks.
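A minimal boto3 sketch of the FIFO queue option; the queue name, message body, and group ID are placeholders. FIFO queue names must end in ".fifo", and content-based deduplication drops messages with an identical body sent within the 5-minute deduplication interval.

import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="order-processing.fifo",             # placeholder name
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true"
    }
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "12345"}',            # placeholder payload
    MessageGroupId="orders"                        # required for FIFO queues
)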
A software company has resources hosted in AWS and on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both resources. Which of the following options are valid? (Select TWO.)
- Use SWF to utilize both on-premises servers and EC2 instances for your decoupled application.
- Use SQS to utilize both on-premises servers and EC2 instances for your decoupled application.
A company has a two-tier environment in its on-premises data center which is composed of an application tier and database tier. You are instructed to migrate their environment to the AWS cloud, and to design the subnets in their VPC with the following requirements:
1. There is an application load balancer that would distribute the incoming traffic among the servers in the application tier.
2. The application tier and the database tier must not be accessible from the public Internet. The application tier should only accept traffic coming from the load balancer.
3. The database tier contains very sensitive data. It must not share the same subnet with other AWS resources and its custom route table with other instances in the environment.
4. The environment must be highly available and scalable to handle a surge of incoming traffic over the Internet.
How many subnets should you create to meet the above requirements?
6
*A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named "company-backup"?
A custom IAM user policy limited to the Amazon S3 API for the "company-backup" bucket. (An IAM policy has Effect, Action, and Resource elements; use an IAM policy when you are more interested in "What can this user do in AWS?")
What does a recovered EC2 instance (CloudWatch automatic recovery) look like?
A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata; the public IP may change. The automatic recovery process attempts to recover your instance for up to three separate failures per day. Instance store volumes are not supported for automatic recovery by CloudWatch alarms.
A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? (Select TWO)
- AWS Directory Service AD Connector
- AWS Identity and Access Management (IAM) roles
*A company created a VPC with a single subnet and then launched an On-Demand EC2 instance in that subnet. You have attached an Internet gateway (IGW) to the VPC and verified that the EC2 instance has a public IP. The main route table of the VPC is as shown below:
Destination: 10.0.0.0/27, Target: local, Status: active, Propagated: no
However, the instance still cannot be reached from the Internet when you try to connect to it from your computer. Which of the following changes should be made to the route table to fix this issue?
Add this new entry to the subnet route table: 0.0.0.0/0 -> your Internet gateway. You have to add a route with a destination of 0.0.0.0/0 for IPv4 traffic (or ::/0 for IPv6 traffic) and a target of the Internet gateway ID (igw-xxxxxxxx). (Default limits: 5 VPCs per region, 200 subnets per VPC, 5 IPv4 CIDR blocks per VPC, 1 IPv6 CIDR block per VPC.)
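A minimal boto3 sketch of adding that route; the route table and Internet gateway IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Route all IPv4 Internet-bound traffic to the Internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",          # placeholder route table ID
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0"              # placeholder IGW ID
)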
A company plans to use a cloud storage service to temporarily store its log files. The number of files to be stored is still unknown, but it only needs to be kept for 12 hours. Which of the following is the most cost-effective storage class to use in this scenario?
Amazon S3 standard
A company is developing a new mobile version of its popular web app in the AWS Cloud. It should be accessible to internal and external users. The mobile app must handle authorization, authentication, and user management from one central source. Which solution meets these requirements?
Amazon Cognito user pools
A data processing facility wants to move a group of Windows servers to the AWS Cloud. These servers require access to a shared file system that can integrate with the facility's existing Active Directory (AD) infrastructure for file and folder permissions. The solution needs to provide seamless support for shared files between AWS and on-premises servers and allow the environment to be highly available. The chosen solution should provide added security by supporting encryption at rest and in transit. The solution should also be cost-effective to implement and manage. Which storage solution would meet these requirements?
An Amazon FSx for Windows File Server file system joined to the existing AD domain.
*An insurance company utilizes SAP HANA for its day-to-day ERP operations. Since they can't migrate this database due to customer preferences, they need to integrate it with the current AWS workload in the VPC in which they are required to establish a site-to-site VPN connection. What needs to be configured outside of the VPC for them to have a successful site-to-site VPN connection?
An Internet-routable static IP address of the customer gateway's external interface for the on-premises network.
*An online shopping platform has been deployed to AWS using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platform. In Elastic Beanstalk, where does it store the application files and server log files?
Application files are stored in S3. The server log files can also optionally be stored in S3 or in CloudWatch Logs, and they can be kept on the EBS volumes of the EC2 instances launched by AWS Elastic Beanstalk. Server logs cannot be sent directly to Glacier: you can add a lifecycle policy to the S3 bucket that stores the server logs to archive them in Glacier, but there is no direct way of storing the server logs in Glacier using Elastic Beanstalk unless you do it programmatically.
A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load Balancer. The APIs will be used by various clients from their respective on-premises data centers. A Solutions Architect received a report that the web service clients can only access trusted IP addresses whitelisted on their firewalls. What should you do to accomplish the above requirement?
Attach Elastic IP addresses to a Network Load Balancer (NLB). To resolve this requirement, you can use the Bring Your Own IP (BYOIP) feature to use the trusted IPs as Elastic IP addresses (EIP) on an NLB. This way, there's no need to re-establish the whitelists with new IP addresses.
A company has recently adopted a hybrid cloud architecture and is planning to migrate a database hosted on-premises to AWS. The database currently has over 50 TB of consumer data, handles highly transactional (OLTP) workloads, and is expected to grow. The Solutions Architect should ensure that the database is ACID-compliant and can handle complex queries of the application. Which type of database service should the Architect use?
Amazon Aurora. It is compatible with MySQL and PostgreSQL, and the underlying storage grows automatically as needed, up to 64 tebibytes (TiB).
What is the IPv4 public addressing attribute of a subnet?
By default, nondefault subnets have the IPv4 public addressing attribute set to false, and default subnets have this attribute set to true. An exception is a nondefault subnet created by the Amazon EC2 launch instance wizard — the wizard sets the attribute to true. You can modify this attribute using the Amazon VPC console. Regardless of the subnet attribute, you can still override this setting for a specific instance during launch.
Which of the following are true about the EC2 user data configuration?
- By default, scripts entered as user data are executed with root user privileges.
- By default, user data runs only during the boot cycle when you first launch an instance.
- You can't change the user data if the instance is running (even by using root user credentials), but you can view it.
A Solutions Architect is working for a company which has multiple VPCs in various AWS regions. The Architect is assigned to set up a logging system which will track all of the changes made to their AWS resources in all regions, including the configurations made in IAM, CloudFront, AWS WAF, and Route 53. In order to pass the compliance requirements, the solution must ensure the security, integrity, and durability of the log data. It should also provide an event history of all API calls made in AWS Management Console and AWS CLI. Which of the following solutions is the best fit for this scenario?
Set up a new CloudTrail trail in a new S3 bucket using the AWS CLI and pass both the --is-multi-region-trail and --include-global-service-events parameters, then encrypt the log files using KMS encryption. Apply Multi-Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies. (By default, trails log management events, but not data events.)
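The answer above uses the AWS CLI; the following is an equivalent boto3 sketch, with the trail name, bucket name, and KMS key ARN as placeholders.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Multi-region trail that also captures global services (IAM, CloudFront,
# Route 53) and encrypts log files with KMS.
cloudtrail.create_trail(
    Name="org-audit-trail",                        # placeholder trail name
    S3BucketName="org-cloudtrail-logs",            # placeholder bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
)
cloudtrail.start_logging(Name="org-audit-trail")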
A health organization is using a large Dedicated EC2 instance with multiple EBS volumes to host its health records web application. The EBS volumes must be encrypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability Act) standard. In EBS encryption, what service does AWS use to secure the volume's data at rest? (Select TWO.)
- By using Amazon-managed keys in AWS KMS.
- By using your own customer-managed keys in AWS KMS.
A company has used EC2 Spot Instances for a demo. The demo is complete and a Solutions Architect must now remove the Spot Instances. Which action should the Solutions Architect take to remove the Spot Instances?
Cancel the Spot request and then terminate the instances. (Terminating is required because the instances are running; if the instances were stopped, you would only need to cancel the request and the instances would be removed automatically by the EC2 Spot service.)
A company is deploying a Microsoft SharePoint Server environment on AWS using CloudFormation. The Solutions Architect needs to install and configure the architecture that is composed of Microsoft Active Directory (AD) domain controllers, Microsoft SQL Server 2012, multiple Amazon EC2 instances to host the Microsoft SharePoint Server and many other dependencies. The Architect needs to ensure that the required components are properly running before the stack creation proceeds. Which of the following should the Architect do to meet this requirement?
Configure a CreationPolicy attribute to the instance in the CloudFormation template. Send a success signal after the applications are installed and configured using the cfn-signal helper script
*A legacy application running on premises requires a Solutions Architect to be able to open a firewall to allow access to several Amazon S3 buckets. The Architect has a VPN connection to AWS in place. How should the Architect meet this requirement?
Configure a proxy on Amazon EC2 and use an Amazon S3 VPC endpoint.
A SA is designing a solution to run a containerized web app using Amazon ECS. He wants to minimize costs by running multiple copies of a task on each container instance. Which solution for routing the requests will meet these requirements?
Configure an ALB to distribute the requests using dynamic host port mapping.
*A DevOps engineer at an IT company was recently added to the admin group of the company's AWS account. The AdministratorAccess managed policy is attached to this group. Can you identify the AWS tasks that the DevOps engineer CANNOT perform even though he has full Administrator privileges (Select two)?
- Configure an Amazon S3 bucket to enable MFA (Multi-Factor Authentication) Delete
- Close the company's AWS account
Some of the AWS tasks that only the root account user can do are as follows: change the account name, root password, or root email address; change the AWS support plan; close the AWS account; enable MFA Delete on an S3 bucket; create a CloudFront key pair; register for GovCloud.
*A company is using multiple AWS accounts that are consolidated using AWS Organizations. They want to copy several S3 objects to another S3 bucket that belonged to a different AWS account which they also own. The Solutions Architect was instructed to set up the necessary permissions for this task and to ensure that the destination account owns the copied objects and not the account it was sent from. How can the Architect accomplish this requirement?
Configure cross-account permissions in S3 by creating an IAM customer managed policy that allows an IAM user or role to copy objects from the source bucket in one account to the destination bucket in the other account. Then attach the policy to the IAM user or role that you want to use to copy objects between accounts. Follow these steps to configure cross-account permissions to copy objects from a source bucket in Account A to a destination bucket in Account B:
- Attach a bucket policy to the source bucket in Account A.
- Attach an AWS Identity and Access Management (IAM) policy to a user or role in Account B.
- Use the IAM user or role in Account B to perform the cross-account copy.
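A minimal boto3 sketch of the first step (the source bucket policy in Account A); the bucket name, account ID, and role name are placeholders.

import json
import boto3

s3 = boto3.client("s3")    # credentials of Account A (source bucket owner)

# Allow a role in Account B to read the source bucket and its objects.
source_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:role/s3-copy-role"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::source-bucket",
            "arn:aws:s3:::source-bucket/*"
        ]
    }]
}

s3.put_bucket_policy(
    Bucket="source-bucket",
    Policy=json.dumps(source_bucket_policy)
)

# The Account B role also needs an IAM policy allowing s3:GetObject on the
# source bucket and s3:PutObject on the destination bucket; copying with the
# Account B role makes Account B the owner of the copied objects.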
An API receives sensor data. The data is written to a queue before being processed to produce trend analysis and forecasting reports. In the current architecture, some data records are being received and processed more than once. How can you ensure that duplicate records are not processed?
Configure the API to send the records to an SQS FIFO queue.
You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?
Configure the bucket policy to set all objects to public read. If you're more interested in "Who can access this S3 bucket?" then an S3 bucket policy will likely suit you better; a bucket policy has a Principal element. A bucket policy can also be used to require MFA for deleting objects in the bucket.
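A minimal boto3 sketch of such a public-read bucket policy; the bucket name is a placeholder, and Block Public Access would have to permit public policies for this to take effect.

import json
import boto3

s3 = boto3.client("s3")

# Grant anonymous read access to every object in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::static-assets-bucket/*"   # placeholder bucket
    }]
}

s3.put_bucket_policy(
    Bucket="static-assets-bucket",
    Policy=json.dumps(public_read_policy)
)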
A Solutions Architect needs to use AWS to implement pilot light disaster recovery for a three-tier web application hosted in an on-premise data center. Which solution allows rapid provision of a working, fully-scaled production environment?
Continuously replicate the production database server to Amazon RDS. Create one Application Load Balancer and register the on-premises servers. Configure the Application Load Balancer to automatically deploy Amazon EC2 instances for the application and additional servers if the on-premises application is down.
An application consists of multiple EC2 instances in private subnets in different Availability Zones. The application uses a single NAT Gateway for downloading software patches from the Internet to the instances. There is a requirement to protect the application from a single point of failure when the NAT Gateway encounters a failure or if its Availability Zone goes down. How should the Solutions Architect redesign the architecture to be more highly available and cost-effective?
Create a NAT Gateway in each Availability Zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same Availability Zone. (A NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.)
A company has a Direct Connect connection to the AWS Cloud. DNS and AD services are on-premises. The company has new accounts that will also need consistent and dedicated access to these network services. Which of the following can satisfy this with the least amount of operational overhead and in a cost-effective manner?
Create a new Direct Connect gateway and integrate it with the existing Direct Connect connection. Set up a transit gateway between the AWS accounts and associate it with the Direct Connect gateway.
A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. They have a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Service (DBaaS) platform and also uses a third party API to fetch certain data for their application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password as well as the API credentials that will be used by the Lambda function for DEV, SIT, UAT, and PROD environments. Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security.
Create a new KMS key and use it to enable encryption helpers that leverage AWS KMS to store and encrypt the sensitive information.
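A minimal sketch of how a Lambda handler could decrypt environment variables that were encrypted with the KMS encryption helpers; the variable names are placeholders, and the KMS encryption context (which the helpers can add) is omitted for brevity.

import base64
import os
import boto3

kms = boto3.client("kms")

def decrypt_env(name):
    # Variables encrypted with the encryption helpers are stored as
    # base64-encoded ciphertext; decrypt them at runtime.
    ciphertext = base64.b64decode(os.environ[name])
    return kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode("utf-8")

def lambda_handler(event, context):
    db_host = decrypt_env("DB_HOSTNAME")    # placeholder variable names
    db_user = decrypt_env("DB_USERNAME")
    db_pass = decrypt_env("DB_PASSWORD")
    api_key = decrypt_env("API_KEY")
    # ... connect to MongoDB Atlas and call the third-party API here ...
    return {"status": "ok"}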
A company hosted an e-commerce website on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The Solutions Architect noticed that the website is receiving a large number of illegitimate external requests from multiple systems with IP addresses that constantly change. To resolve the performance issues, the Solutions Architect must implement a solution that would block the illegitimate requests with minimal impact on legitimate traffic. Which of the following options fulfills this requirement?
Create a rate-based rule in AWS WAF and associate the web ACL with the Application Load Balancer. (AWS WAF can be attached to an ALB, CloudFront, API Gateway, and AppSync.)
A Solutions Architect created a brand new IAM User with a default setting using AWS CLI. This is intended to be used to send API requests to Amazon S3, DynamoDB, Lambda, and other AWS resources of the company's cloud infrastructure. Which of the following must be done to allow the user to make API calls to the AWS resources?
Create a set of access keys for the user and attach the necessary permissions.
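A minimal boto3 sketch of both steps; the user name and the attached managed policy are placeholders chosen for illustration.

import boto3

iam = boto3.client("iam")

# Programmatic access requires an access key pair; permissions come from an
# attached policy.
keys = iam.create_access_key(UserName="api-service-user")["AccessKey"]
print(keys["AccessKeyId"])    # the secret access key is only returned once

iam.attach_user_policy(
    UserName="api-service-user",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"   # placeholder policy
)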
A company needs to use Amazon Aurora as the Amazon RDS database engine of their web application. The Solutions Architect has been instructed to implement a 90-day backup retention policy. Which of the following options can satisfy the given requirement?
Create an AWS Backup plan to take daily snapshots with a retention period of 90 days. AWS Backup makes protecting your AWS storage volumes, databases, and file systems simple by providing a central place to manage backups. (The maximum retention period for Aurora automated backups is only 35 days, which is why AWS Backup is needed.)
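A minimal boto3 sketch of such a backup plan; the plan, rule, and vault names and the schedule are placeholders, and assigning the Aurora cluster to the plan would be a separate backup selection call.

import boto3

backup = boto3.client("backup")

# Daily backups kept for 90 days.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-90-day-plan",
        "Rules": [{
            "RuleName": "daily-90-day-retention",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",   # 05:00 UTC daily
            "Lifecycle": {"DeleteAfterDays": 90}
        }]
    }
)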
An application on an Amazon EC2 instance routinely stops responding to requests and requires a reboot to recover. The application logs are already exported into Amazon CloudWatch, and you notice that the problem consistently follows the appearance of a specific message in the log. The application team is working to address the bug, but has not provided a date for the fix. What workaround can you implement to automate recovery of the instance until the fix is deployed?
Create an Amazon CloudWatch alarm on an Amazon CloudWatch Logs filter for that message; based on that alarm, trigger an Amazon CloudWatch action to reboot the instance.
A company needs to deploy at least 2 EC2 instances to support the normal workloads of its application and automatically scale up to 6 EC2 instances to handle the peak load. The architecture must be highly available and fault-tolerant as it is processing mission-critical workloads. As the Solutions Architect of the company, what should you do to meet the above requirement?
Create an Auto Scaling group of EC2 instances and set the minimum capacity to 4 and the maximum capacity to 6. Deploy 2 instances in Availability Zone A and another 2 instances in Availability Zone B.
*A startup created an AI-based traffic monitoring service. You need to register a new domain called www.tutorialsdojo-ai.com and set up other DNS entries for the other components of the system in AWS. Which of the following is not supported by Amazon Route 53?
DNSSEC (Domain Name System Security Extensions) is not supported. DNSSEC lets DNS resolvers validate that a DNS response came from Amazon Route 53 and has not been tampered with, by signing responses with a public key. PTR (pointer record), SPF (sender policy framework), and SRV (service locator) records are supported. Amazon Route 53 currently supports the following DNS record types:
- A (address record)
- AAAA (IPv6 address record)
- CNAME (canonical name record)
- CAA (certification authority authorization)
- MX (mail exchange record)
- NAPTR (name authority pointer record)
- NS (name server record)
- PTR (pointer record)
- SOA (start of authority record)
- SPF (sender policy framework)
- SRV (service locator)
- TXT (text record)
A large multinational investment bank has a web application that requires a minimum of 4 EC2 instances to run to ensure that it can cater to its users across the globe. You are instructed to ensure fault tolerance of this system. Which of the following is the best option?
Deploy an Auto Scaling group with 2 instances in each of the 3 Availability Zones behind an ALB.
A company needs to integrate the Lightweight Directory Access Protocol (LDAP) directory service from the on-premises data center to the AWS VPC using IAM. The identity store which is currently being used is not compatible with SAML. Which of the following provides the most valid approach to implement the integration?
Develop an on-premises custom identity broker application and use STS to issue short-lived AWS credentials. To get temporary security credentials, the identity broker application calls either AssumeRole or GetFederationToken, depending on how you want to manage the policies for users and when the temporary credentials should expire. The call returns temporary security credentials consisting of an AWS access key ID, a secret access key, and a session token.
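A minimal boto3 sketch of the AssumeRole path such a broker could take after it has authenticated the user against the on-premises LDAP directory; the role ARN and session name are placeholders.

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ldap-developer-role",  # placeholder role
    RoleSessionName="jdoe",                                        # placeholder user
    DurationSeconds=3600
)

creds = resp["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("s3").list_buckets()["Buckets"])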
A company deployed a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. The Solutions Architect expects the S3 bucket to immediately receive over 2000 PUT requests and 3500 GET requests per second at peak hour. What should the Solutions Architect do to ensure optimal performance?
Do nothing. S3 will automatically manage performance at this scale.
A company plans to build a web architecture using On-Demand EC2 instances and a database in AWS. However, due to budget constraints, the company instructed the Solutions Architect to choose a database service in which they no longer need to worry about database management tasks such as hardware or software provisioning, setup, configuration, scaling, and backups. Which of the following services should the Solutions Architect recommend?
DynamoDB.
*A company is building an internal application that processes loans, accruals, and interest rates for their clients. They require a storage service that is able to handle future increases in storage capacity of up to 16 TB and can provide the lowest-latency access to their data. The web application will be hosted in a single m5ad.24xlarge Reserved EC2 instance that will process and store data to the storage service. Which of the following storage services would you recommend?
EBS. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. You can also increase EBS storage for up to 16TB or add new volumes for additional storage.
*A leading social media analytics company is contemplating moving its dockerized application stack into AWS Cloud. The company is not sure about the pricing for using Elastic Container Service (ECS) with the EC2 launch type compared to the Elastic Container Service (ECS) with the Fargate launch type. Which of the following is correct regarding the pricing for these two services?
ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized application requests
A financial application is composed of an Auto Scaling group of EC2 instances, an Application Load Balancer, and a MySQL RDS instance in a Multi-AZ Deployments configuration. To protect the confidential data of your customers, you have to ensure that your RDS database can only be accessed using the profile credentials specific to your EC2 instances via an authentication token. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?
Enable IAM DB Authentication for RDS.
*A company is planning to launch a High Performance Computing (HPC) cluster in AWS that does Computational Fluid Dynamics (CFD) simulations. The solution should scale-out their simulation jobs to experiment with more tunable parameters for faster and more accurate results. The cluster is composed of Windows servers hosted on t3a.medium EC2 instances. As the Solutions Architect, you should ensure that the architecture provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. Which is the MOST suitable and cost-effective solution that the Architect should implement to achieve the above requirements?
Enable enhanced networking with Elastic Network Adapter (ENA) on the Windows EC2 instances. The OS-bypass capabilities of EFAs are not supported on Windows instances; if you attach an EFA to a Windows instance, the instance functions as an Elastic Network Adapter, without the added EFA capabilities.
*A company has several unencrypted EBS snapshots in their VPC. The Solutions Architect must ensure that all of the new EBS volumes restored from the unencrypted snapshots are automatically encrypted. What should be done to accomplish this requirement?
Enable the EBS encryption by default feature for the AWS Region.
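A minimal boto3 sketch of enabling this per-region setting; the region and KMS key ARN are placeholders, and the second call is optional.

import boto3

# Encryption by default is a per-region setting; volumes restored from
# unencrypted snapshots in this region are then encrypted automatically.
ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
ec2.enable_ebs_encryption_by_default()

# Optionally use a customer-managed key instead of the AWS-managed key.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
)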
A company needs to launch an Amazon EC2 instance with a persistent block storage to host its application. The stored data must be encrypted at rest. Which of the following is the most suitable storage solution in this scenario?
Encrypted Amazon EBS volume using AWS KMS.
What is Enhanced Monitoring?
Enhanced Monitoring is a feature of Amazon RDS. By default, Enhanced Monitoring metrics are stored for 30 days in the CloudWatch Logs.
A user is designing a new service that receives location updates from 3,600 rental cars every hour. The car location data needs to be uploaded to an Amazon S3 bucket. Each location must also be checked for distance from the original rental location. Which services will process the updates and automatically scale?
Amazon Kinesis Firehose and Amazon S3
A newly hired Solutions Architect is checking all of the security groups and network access control list rules of the company's AWS resources. For security purposes, the MS SQL connection via port 1433 of the database tier should be secured. Below is the security group configuration of their Microsoft SQL Server database:
Type: MSSQL, Protocol: TCP, Port Range: 1433, Source: Anywhere (0.0.0.0/0, ::/0)
For the MSSQL rule, change the source to the security group ID attached to the application tier.
A messaging application in ap-northeast-1 region uses m4.2xlarge instance to accommodate 75 percent of users from Tokyo and Seoul. It uses a cheaper m4.large instance in ap-southeast-1 to accommodate the rest of users from Manila and Singapore. As a Solutions Architect, what routing policy should you use to route traffic to your instances based on the location of your users and instances?
Geoproximity routing
A startup launched a new API Gateway service which uses AWS Lambda as a serverless computing service. In what type of protocol will the API endpoint be exposed?
HTTPS. By default, Amazon API Gateway assigns an internal domain to the API that automatically uses the Amazon API Gateway certificate. When configuring your APIs to run under a custom domain name, you can provide your own certificate for the domain.
A Solutions Architect designed a serverless architecture that allows AWS Lambda to access an Amazon DynamoDB table named tutorialsdojo in the US East (N. Virginia) region. The IAM policy attached to a Lambda function allows it to put and delete items in the table. The policy must be updated to only allow two operations in the tutorialsdojo table and prevent other DynamoDB tables from being modified. Which of the following IAM policies fulfill this requirement and follows the principle of granting the least privilege?
IAM policy statements are evaluated together; if one statement allows an action and another statement explicitly denies it, the explicit deny wins and the action is denied.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "tutorialsdojo",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:120618981206:table/tutorialsdojo"
    }
  ]
}
*A reporting app runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Due to their complexity, some reports may take up to 15 minutes to respond to a request. A Solutions Architect is concerned that users will receive 500 errors if a report request is in process during a scale-in event. Which action will ensure that user requests are completed before instances are terminated?
Increase the deregistration delay timeout for the target group of the instances to greater than 1500 seconds (25 minutes); it is 300 seconds (5 minutes) by default.
A company is running a custom application in an Auto Scaling group of Amazon EC2 instances. Several instances are failing due to insufficient swap space. The Solutions Architect has been instructed to troubleshoot the issue and effectively monitor the available swap space of each EC2 instance. Which of the following options fulfills this requirement?
Install the CloudWatch agent on each instance and monitor the SwapUtilization metric.
An auto scaling group of Linux EC2 instances is created with basic monitoring enabled in CloudWatch. You noticed that your application is slow so you asked one of your engineers to check all of your EC2 instances. After checking your instances, you noticed that the auto scaling group is not launching more instances as it should be, even though the servers already have high memory usage. Which of the following options should the Architect implement to solve this issue?
Install the CloudWatch agent to the EC2 instances which will trigger your Auto Scaling group to scale up. The premise of the scenario is that the EC2 servers have high memory usage, but since this specific metric is not tracked by the Auto Scaling group by default, the scaling up activity is not being triggered. Remember that by default, CloudWatch doesn't monitor memory usage but only the CPU utilization, Network utilization, Disk performance, and Disk Reads/Writes.
A company has a VPC for its human resource department, and another VPC located in a different region for their finance department. The Solutions Architect must redesign the architecture to allow the finance department to access all resources that are in the human resource department, and vice versa. Which type of networking connection in AWS should the Solutions Architect set up to satisfy the above requirement?
Inter-Region VPC Peering
An organization needs to provision a new Amazon EC2 instance with a persistent block storage volume to migrate data from its on-premises network to AWS. The required maximum performance for the storage volume is 64,000 IOPS. In this scenario, which of the following can be used to fulfill this requirement?
Launch a Nitro-based EC2 instance and attach a Provisioned IOPS SSD EBS volume (io1) with 64,000 IOPS. The maximum IOPS and throughput are guaranteed only on Instances built on the Nitro System provisioned with more than 32,000 IOPS. Other instances guarantee up to 32,000 IOPS only.
A media company is setting up an ECS batch architecture for its image processing application. It will be hosted in an Amazon ECS Cluster with two ECS tasks that will handle image uploads from the users and image processing. The first ECS task will process the user requests, store the image in an S3 input bucket, and push a message to a queue. The second task reads from the queue, parses the message containing the object name, and then downloads the object. Once the image is processed and transformed, it will upload the objects to the S3 output bucket. To complete the architecture, the Solutions Architect must create a queue and the necessary IAM permissions for the ECS tasks. Which of the following should the Architect do next?
Launch a new Amazon SQS queue and configure the second ECS task to read from it. Create an IAM role that the ECS tasks can assume in order to get access to the S3 buckets and SQS queue. Declare the IAM Role (taskRoleArn) in the task definition.
A company has a hybrid cloud architecture that connects their on-premises data center and cloud infrastructure in AWS. They require a durable storage backup for their corporate documents stored on-premises and a local cache that provides low latency access to their recently accessed data to reduce data egress charges. The documents must be stored to and retrieved from AWS via the Server Message Block (SMB) protocol. These files must immediately be accessible within minutes for six months and archived for another decade to meet the data compliance. Which of the following is the best and most cost-effective approach to implement in this scenario?
Launch a new file gateway that connects to your on-premises data center using AWS Storage Gateway. Upload the documents to the file gateway and set up a lifecycle policy to move the data into Glacier for data archival. (File and volume gateways keep a local cache, so recently accessed data is available immediately; a tape gateway does not provide cached data access and its data is not immediately available.)
A company is using an Amazon RDS for MySQL 5.6 with Multi-AZ deployment enabled and several web servers across two AWS Regions. The database is currently experiencing highly dynamic reads due to the growth of the company's website. The Solutions Architect tried to test the read performance from the secondary AWS Region and noticed a notable slowdown on the SQL queries. Which of the following options would provide a read replication latency of less than 1 second?
Migrate the existing database to Amazon Aurora and create a cross-region read replica. (Aurora replication is asynchronous; in-region replicas typically lag by milliseconds, and a cross-region replica can provide read replication latency of less than 1 second.)
*A Solutions Architect is managing a three-tier web application that processes credit card payments and online transactions. Static web pages are used on the front-end tier while the application tier contains a single Amazon EC2 instance that handles long-running processes. The data is stored in a MySQL database. The Solutions Architect is instructed to decouple the tiers to create a highly available application. Which of the following options can satisfy the given requirement?
Move all static assets and web pages to S3. Rehost the application in ECS containers and enable Service Auto Scaling. Migrate the database to RDS with a Multi-AZ deployments configuration. Amazon ECS lets you schedule long-running applications, services, and batch processes using Docker containers, and allows you to easily run applications on a managed cluster of Amazon EC2 instances with scaling.
*Your company's IT policies mandate that all critical data must be duplicated in two physical locations at least 200 miles apart. Which storage option meets this requirement?
One Amazon S3 bucket. To increase durability, Amazon S3 synchronously stores your data across multiple facilities before confirming that the data has been successfully stored.
A development team is building an application with front-end and backend application tiers. Each tier consists of Amazon EC2 instances behind an ELB Classic Load Balancer. The instances run in Auto Scaling groups across multiple Availability Zones. The network team has allocated the 10.0.0.0/24 address space for this application. Only the front-end load balancer should be exposed to the Internet. There are concerns about the limited size of the address space and the ability of each tier to scale. What should the VPC subnet design be in each Availability Zone?
One public subnet for the load balancer tier and one shared private subnet for the application tiers.
*An application reads and writes objects to an S3 bucket. When the application is fully deployed, the read/write traffic is very high. How should the architect maximize the Amazon S3 performance?
- Prefix each object name with a 3-4 character hex hash key along with the current date.
- Prefix each object name with a random string.
RAID (Redundant Array of Independent Disks) is just a data storage virtualization technology that combines multiple storage devices to achieve higher performance or data durability
RAID 0 can stripe multiple volumes together for greater I/O performance than you can achieve with a single volume. On the other hand, RAID 1 can mirror two volumes together to achieve on-instance redundancy.
What are the enhanced monitoring metrics that Amazon CloudWatch gathers from Amazon RDS DB instances which provide more accurate information?
- RDS child processes - for example, mysqld or the Aurora processes
- RDS processes
- OS processes - kernel and system-level processes
*A startup launched a fleet of on-demand EC2 instances to host a massively multiplayer online role-playing game (MMORPG). The EC2 instances are configured with Auto Scaling and AWS Systems Manager. What can be used to configure the EC2 instances without having to establish an RDP or SSH connection to each instance?
Run Command
A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application needs to be available at all times, but is going to experience periods of inactivity. The chief financial officer wants to pay the minimum for compute costs during these idle times. What is the BEST solution for these requirements?
Run the application in containers on AWS Fargate and use Amazon Aurora Serverless for the database.
Users of an image processing app experience long delays waiting for their images to process when the application experiences unpredictable periods of heavy usage. The architecture consists of a web tier, an SQS standard queue, and message consumers running on Amazon EC2 instances. When there is a high volume of requests, the message backlog in SQS spikes. A Solutions Architect is tasked with improving the performance of the application while keeping costs low. What should be done?
- Run the message consumer instances in an Auto Scaling group configured to scale out and in based on the ApproximateNumberOfMessagesVisible Amazon CloudWatch metric.
- Configure an AWS Lambda function to scale out the number of consumer instances when the message backlog grows.
A company is using Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket will send an event notification to the Amazon SQS queue. A solutions architect needs to create a solution that will notify the development and operations team about the created or deleted objects. Which of the following would satisfy this requirement?
Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic. (S3 event notifications can be sent to Lambda, SNS, or SQS.)
Which services can invoke AWS Lambda functions?
Supported: Amazon SNS, DynamoDB, and the Application Load Balancer. Not supported: Amazon Redshift, Amazon Route 53, and Elastic Load Balancing.
*A large financial firm needs to set up a Linux bastion host to allow access to the Amazon EC2 instances running in their VPC. For security purposes, only the clients connecting from the corporate external public IP address 175.45.116.100 should have SSH access to the host. Which is the best option that can meet the customer's requirement?
Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source - 175.45.116.100/32. (Use a security group rule since you are protecting an EC2 instance.)
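A minimal boto3 sketch of adding that rule; the security group ID and rule description are placeholders, while the CIDR and port come from the question.

import boto3

ec2 = boto3.client("ec2")

# Allow SSH only from the corporate public IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",                # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "175.45.116.100/32",
                      "Description": "Corporate office SSH"}]
    }]
)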
*A company has a High Performance Computing (HPC) cluster that is composed of EC2 Instances with Provisioned IOPS volume to process transaction-intensive, low-latency workloads. The Solutions Architect must maintain high IOPS while keeping the latency down by setting the optimal queue length for the volume. The size of each volume is 10 GiB. Which of the following is the MOST suitable configuration that the Architect should set up?
Set the IOPS to 500 and then maintain a low queue length. An io1 volume can range in size from 4 GiB to 16 TiB.
An application is hosted in an AWS Fargate cluster that runs a batch job whenever an object is loaded on an Amazon S3 bucket. The minimum number of ECS Tasks is initially set to 1 to save on costs, and it will only increase the task count based on the new objects uploaded on the S3 bucket. Once processing is done, the bucket becomes empty and the ECS Task count should be back to 1. Which is the most suitable option to implement with the LEAST amount of effort?
Set up a CloudWatch Event rule to detect S3 object PUT operations and set the target to the ECS cluster with the increased number of tasks. Create another rule to detect S3 DELETE operations and set the target to the ECS Cluster with 1 as the Task count.
A web application, which is hosted in your on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password. Which of the following should you do to meet the above requirement?
Set up an RDS database and enable IAM DB authentication. IAM database authentication works with MySQL and PostgreSQL. Authentication tokens are generated using AWS Signature Version 4 and each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM.
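For illustration, a hedged sketch of connecting with an IAM authentication token instead of a password (the endpoint, user, region, CA bundle path, and the pymysql client are all assumptions, not part of the original scenario):

```python
import boto3
import pymysql  # assumed to be installed; any MySQL client that supports SSL works

rds = boto3.client("rds", region_name="us-east-1")

# Generate a short-lived (15-minute) token signed with SigV4 using the
# EC2 instance profile credentials instead of a stored password.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    Port=3306,
    DBUsername="app_user",
)

# SSL is required when using IAM database authentication
conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    port=3306,
    user="app_user",
    password=token,
    ssl={"ca": "/opt/rds-ca-bundle.pem"},  # hypothetical path to the RDS CA bundle
)
```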
The media company that you are working for has a video transcoding application running on Amazon EC2. Each EC2 instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. This application has a large backlog of videos which need to be transcoded. Your manager would like to reduce this backlog by adding more EC2 instances, however, these instances are only needed until the backlog is reduced. In this scenario, which type of Amazon EC2 instance is the most cost-effective type to use?
Spot instances
There are a few, easily reproducible but confidential files that your client wants to store in AWS without worrying about storage capacity. For the first month, all of these files will be accessed frequently but after that, they will rarely be accessed at all. The old files will only be accessed by developers so there is no set retrieval time requirement. However, the files under a specific tutorialsdojo-financeprefix in the S3 bucket will be used for post-processing that requires millisecond retrieval time. Given these conditions, which of the following options would be the most cost-effective solution for your client's storage needs?
Store the files in S3, then after a month change the storage class of the tutorialsdojo-finance prefix to One Zone-IA while moving the remaining objects to Glacier using a lifecycle policy.
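A rough boto3 sketch of such a lifecycle rule (the bucket name is hypothetical and the 30-day boundary is an assumption for "after a month"); a second rule with a non-overlapping prefix or tag filter would transition the remaining objects to Glacier:

```python
import boto3

s3 = boto3.client("s3")

# Move objects under the finance prefix to One Zone-IA after 30 days
s3.put_bucket_lifecycle_configuration(
    Bucket="client-files-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "finance-to-onezone-ia",
                "Filter": {"Prefix": "tutorialsdojo-finance/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)
```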
A streaming solutions company is building a video streaming product by using an Application Load Balancer (ALB) that routes the requests to the underlying EC2 instances. The engineering team has noticed a peculiar pattern. The ALB removes an instance whenever it is detected as unhealthy but the Auto Scaling group fails to kick-in and provision the replacement instance. What could explain this anomaly?
The Auto Scaling group is using the EC2-based health check while the Application Load Balancer is using the ALB-based health check. Because the Auto Scaling group only looks at the EC2 status checks, it never sees the instance as unhealthy and therefore never launches a replacement.
A global news network created a CloudFront distribution for their web application. However, you noticed that the application's origin server is being hit for each request instead of the AWS Edge locations, which serve the cached objects. The issue occurs even for the commonly requested objects. What could be a possible cause of this issue?
The Cache-Control max-age directive is set to zero. The minimum expiration time CloudFront supports is 0 seconds for web distributions and 3600 seconds for RTMP distributions.
An application backed by S3 works as expected until a change is added to increase the rate at which the application updates its data. There have been reports that outdated data intermittently appears when the application accesses objects from the S3 bucket. What is the cause?
The application is designed to fetch objects from the S3 bucket using parallel requests. At the time, Amazon S3 provided read-after-write consistency for PUTs of new objects in all regions with one caveat: if you made a HEAD or GET request to the key name (to check whether the object exists) before creating the object, S3 provided only eventual consistency for that read-after-write, so stale data could be returned.
A company has established a dedicated network connection from its on-premises data center to AWS Cloud using AWS Direct Connect (DX). The core network services, such as the Domain Name System (DNS) service and Active Directory services, are all hosted on-premises. The company has new AWS accounts that will also require consistent and dedicated access to these network services. Which of the following can satisfy this requirement with the LEAST amount of operational overhead and in a cost-effective manner?
A Solutions Architect needs to create a publicly accessible EC2 instance by using an Elastic IP (EIP) address and generate a report on how much it will cost to use that EIP. Which of the following statements is correct regarding the pricing of EIP?
There is no cost if the instance is running and has only one associated EIP. An Elastic IP address doesn't incur charges as long as the following conditions are true: - The Elastic IP address is associated with an Amazon EC2 instance. - The instance associated with the Elastic IP address is running. - The instance has only one Elastic IP address attached to it.
A company needs to assess and audit all the configurations in their AWS account. It must enforce strict compliance by tracking all configuration changes made to any of its Amazon S3 buckets. Publicly accessible S3 buckets should also be identified automatically to avoid data breaches. Which of the following options will meet this requirement?
Use AWS Config to set up a rule in your AWS account. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
Both historical records and frequently accessed data are stored on an on-premises storage system. The amount of current data is growing at an exponential rate. As the storage's capacity is nearing its limit, the company's Solutions Architect has decided to move the historical records to AWS to free up space for the active data. Which of the following architectures deliver the best solution in terms of cost and operational management?
Use AWS DataSync to move the historical records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
A company requires all the data stored in the cloud to be encrypted at rest. To easily integrate this with other AWS services, they must have full control over the encryption of the created keys and also the ability to immediately remove the key material from AWS KMS. The solution should also be able to audit the key usage independently of AWS CloudTrail. Which of the following options will meet this requirement?
Use AWS Key Management Service to create a CMK in a custom key store and store the non-extractable key material in AWS CloudHSM.
An organization is currently using a tape backup solution to store its application data on-premises. They plan to use a cloud storage service to preserve the backup data for up to 10 years that may be accessed about once or twice a year. Which of the following is the most cost-effective option to implement this solution?
Use AWS Storage Gateway (Tape Gateway) to back up the data directly to Amazon S3 Glacier Deep Archive.
A news network uses Amazon S3 to aggregate the raw video footage from its reporting teams across the US. The news network has recently expanded into new geographies in Europe and Asia. The technical teams at the overseas branch offices have reported huge delays in uploading large video files to the destination S3 bucket. Which of the following are the MOST cost-effective options to improve the file upload speed into S3? (Select two)
- Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination S3 bucket
- Use multipart uploads for faster file uploads into the destination S3 bucket
A social media company manages its flagship application on an EC2 server fleet running behind an Application Load Balancer and the traffic is fronted by a CloudFront distribution. The engineering team wants to decouple the user authentication process for the application so that the application servers can just focus on the business logic. As a Solutions Architect, which of the following solutions would you recommend to the development team so that it requires minimal development effort?
Use Cognito Authentication via Cognito User Pools for your Application Load Balancer
*A company needs secure access to its Amazon RDS for MySQL database that is used by multiple applications. Each IAM user must use a short-lived authentication token to connect to the database. Which of the following is the most suitable solution in this scenario?
Use IAM DB Authentication and create database accounts using the AWS-provided AWSAuthenticationPlugin plugin in MySQL. IAM database authentication works with MySQL and PostgreSQL, and each authentication token is valid for 15 minutes. IAM database authentication provides the following benefits: network traffic to and from the database is encrypted using Secure Sockets Layer (SSL); you can use IAM to centrally manage access to your database resources instead of managing access individually on each DB instance; and for applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
*An organization plans to use an AWS Direct Connect connection to establish a dedicated connection between its on-premises network and AWS. The organization needs to launch a fully managed solution that will automate and accelerate the replication of data to and from various AWS storage services. Which of the following solutions would you recommend?
Use an AWS DataSync agent to rapidly move the data over a service endpoint. To connect programmatically to an AWS service over Direct Connect, you use an AWS Direct Connect service endpoint. Storage Gateway is not the right fit because there is no need to establish a hybrid cloud storage architecture; the requirement is only to automate and accelerate data replication.
*A company has grown from a small startup to an enterprise employing over 1000 people. As part of the scaling up of the AWS Cloud teams, the company has observed some strange behavior with S3 buckets settings being changed regularly. How can you figure out what's happening without restricting the rights of the users?
Use AWS CloudTrail to analyze the API calls made against the S3 buckets.
A Solutions Architect is designing a solution with AWS Lambda where different environments require different database passwords. What should the Architect do to accomplish this in a secure and scalable way?
Use encrypted AWS Lambda environment variables.
A Solutions Architect has been asked to reduce the cost of data transfer between Amazon EC2 instances. The instances are in two Availability Zones in a single AWS Region and use public IP addresses for all communications. How should the Solutions Architect reduce costs without sacrificing reliability and scalability?
Use private IP addresses instead of public IP addresses.
A media company has an Amazon ECS Cluster, which uses the Fargate launch type, to host its news website. The database credentials should be supplied using environment variables, to comply with strict security compliance. As the Solutions Architect, you have to ensure that the credentials are secure and that they cannot be viewed in plaintext on the cluster itself. Which of the following is the most suitable solution in this scenario that you can implement with minimal effort?
Use the AWS Systems Manager Parameter Store to keep the database credentials and encrypt them using AWS KMS. Create an IAM role for the Amazon ECS task execution role (executionRoleArn), reference it in your task definition, and grant it access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data. Notes: Docker secrets are only applicable to Docker Swarm; an IAM role is what grants access to KMS and the Parameter Store; either Secrets Manager or Systems Manager Parameter Store can hold secrets for ECS; and the secret's ARN is referenced from the container definition. A sketch of such a task definition follows.
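A trimmed boto3 sketch of registering such a task definition (all ARNs, names, and the image URI are hypothetical; the parameter is assumed to be a SecureString in Parameter Store):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="news-website",                     # hypothetical
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # The execution role needs ssm:GetParameters and kms:Decrypt on the parameter/key
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/news-site:latest",
            "essential": True,
            "secrets": [
                {
                    # Environment variable name inside the container
                    "name": "DB_PASSWORD",
                    # Full ARN of the Parameter Store parameter holding the secret
                    "valueFrom": "arn:aws:ssm:us-east-1:111122223333:parameter/prod/db/password",
                }
            ],
        }
    ],
)
```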
*A global medical research company has a molecular imaging system which provides each client with frequently updated images of what is happening inside the human body at the molecular and cellular level. The system is hosted in AWS and the images are hosted in an S3 bucket behind a CloudFront web distribution. There was a new batch of updated images that were uploaded in S3, however, the users were reporting that they were still seeing the old content. You need to control which image will be returned by the system even when the user has another version cached either locally or behind a corporate caching proxy. Which of the following is the most suitable solution to solve this issue?
Use versioned objects. Versioning enables you to control which file a request returns even when the user has a version cached either locally or behind a corporate caching proxy. If you instead invalidate the file, the user might continue to see the old version until it expires from those caches, and unlike invalidation requests, serving new versioned file names incurs no extra charge. Alternatively, you can add a Cache-Control or Expires header field to your objects.
An e-commerce company is planning to migrate their two-tier application from on-premises infrastructure to AWS Cloud. As the engineering team at the company is new to the AWS Cloud, they are planning to use the Amazon VPC console wizard to set up the networking configuration for the two-tier application having public web servers and private database servers. Can you spot the configuration that is NOT supported by the Amazon VPC console wizard?
VPC with a public subnet only and AWS Site-to-Site VPN access is NOT supported. The four supported configurations are:
1. VPC with a single public subnet - single-tier, public-facing website
2. VPC with public and private subnets (NAT) - multi-tier website
3. VPC with public and private subnets and AWS Site-to-Site VPN access - IPsec connection through a virtual private gateway
4. VPC with a private subnet only and AWS Site-to-Site VPN access - recommended if you want to extend your network into the cloud using Amazon's infrastructure without exposing your network to the Internet
*A mobile application stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for this scenario?
Web identity federation. You don't need to manage your own identities; users sign in through an identity provider such as Facebook, Google, or any OpenID Connect (OIDC)-compatible IdP, receive a token, and then exchange it for temporary security credentials from AWS STS.
The engineering team at an e-commerce company is working on cost optimizations for EC2 instances. The team wants to manage the workload using a mix of on-demand and spot instances across multiple instance types. They would like to create an Auto Scaling group with a mix of these instances. Which of the following options would allow the engineering team to provision the instances for this use-case?
You can only use a launch template to provision capacity across multiple instance types using both On-Demand Instances and Spot Instances to achieve the desired scale, performance, and cost. A launch template specifies parameters such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, and security groups.
A company has a web application that is relying entirely on slower disk-based databases, causing it to perform slowly. To improve its performance, the Solutions Architect integrated an in-memory data store to the web application using ElastiCache. How does Amazon ElastiCache improve database performance?
By caching query results in memory, delivering sub-millisecond latency.
A company is setting up a cloud architecture for an international money transfer service to be deployed in AWS which will have thousands of users around the globe. The service should be available 24/7 to avoid any business disruption and should be resilient enough to handle the outage of an entire AWS region. To meet this requirement, the Solutions Architect has deployed their AWS resources to multiple AWS Regions. He needs to use Route 53 and configure it to set all of the resources to be available all the time as much as possible. When a resource becomes unavailable, Route 53 should detect that it's unhealthy and stop including it when responding to queries. Which of the following is the most fault-tolerant routing configuration that the Solutions Architect should use in this scenario?
Configure active-active failover with a weighted routing policy.
A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances. The application is extensively used during office hours from 9 in the morning till 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours. Which of the following can be done to ensure that the application works properly at the beginning of the day?
Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day. By the time either CPU or memory utilization hits a peak, the application already has performance issues, so the scaling must be done beforehand using a scheduled scaling action.
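A minimal boto3 sketch of a recurring scheduled action that scales out shortly before 9 AM on weekdays (the group name, capacities, and schedule time are assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out at 08:30 UTC on weekdays (Mon-Fri), before the 9 AM office-hours peak
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="crm-asg",                      # hypothetical
    ScheduledActionName="scale-out-before-office-hours",
    Recurrence="30 8 * * 1-5",                           # cron syntax, evaluated in UTC by default
    MinSize=4,
    DesiredCapacity=6,
    MaxSize=10,
)
```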
A Solutions Architect is building a new feature using Lambda to create metadata when a user uploads a picture to Amazon S3. All metadata must be indexed. Which AWS service should the Architect use to store this metadata?
DynamoDB
A Solutions Architect is designing a solution that must store and retrieve session data and JSON documents. The solution must provide high availability, strong consistency, and data durability. Which solution meets these requirements?
DynamoDB
A retailer exports data from its transactional databases daily into an S3 bucket. The retailer data warehousing team wants to import that data into an existing Amazon Redshift cluster in their VPC. Corporate security mandates that this data can only be transported within a VPC. What combination of following steps will satisfy the security policy? ( SELECT TWO)
- Enable Amazon Redshift Enhanced VPC Routing, which forces COPY and UNLOAD traffic between the cluster and its data repositories through the VPC.
- Create and configure an Amazon S3 VPC endpoint.
A large electronics company is using Amazon Simple Storage Service to store important documents. For reporting purposes, they want to track and log every request access to their S3 buckets including the requester, bucket name, request time, request action, referrer, turnaround time, and error code information. The solution should also provide more visibility into the object-level operations of the bucket. Which is the best solution among the following options that can satisfy the requirement?
Enable server access logging for all required Amazon S3 buckets. CloudTrail is not the answer here because the scenario requires detailed information about every access request, including the referrer and turn-around time, and these two fields are not available in CloudTrail.
An insurance company plans to implement a message filtering feature in their web application. To implement this solution, they need to create separate Amazon SQS queues for each type of quote request. The entire message processing should not exceed 24 hours. As the Solutions Architect of the company, which of the following should you do to meet the above requirement?
Create one Amazon SNS topic and configure the Amazon SQS queues to subscribe to the SNS topic. Set filter policies on the SNS subscriptions to publish each message to the designated SQS queue based on its quote request type.
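A minimal boto3 sketch of one filtered subscription (the topic/queue ARNs and the quote_type message attribute name are hypothetical):

```python
import json
import boto3

sns = boto3.client("sns")

# Each SQS queue subscribes to the same topic but only receives messages whose
# "quote_type" message attribute matches its filter policy.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:quote-requests",    # hypothetical
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:auto-quote-queue",  # hypothetical
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
    ReturnSubscriptionArn=True,
)
```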
All objects uploaded to an Amazon S3 bucket must be encrypted for security compliance. The bucket will use server-side encryption with Amazon S3-Managed encryption keys (SSE-S3) to encrypt data using 256-bit Advanced Encryption Standard (AES-256) block cipher. Which of the following request headers must be used?
x-amz-server-side-encryption
If you choose to use server-side encryption with customer-provided encryption keys (SSE-C), you must provide the encryption key information using the following request headers:
x-amz-server-side-encryption-customer-algorithm x-amz-server-side-encryption-customer-key x-amz-server-side-encryption-customer-key-MD5
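In the SDKs these headers are set through parameters; a hedged boto3 sketch for both SSE-S3 and SSE-C (bucket, keys, and the customer-provided key are placeholders):

```python
import os
import boto3

s3 = boto3.client("s3")

# SSE-S3: the SDK sends the x-amz-server-side-encryption: AES256 header
s3.put_object(
    Bucket="compliance-bucket",          # hypothetical
    Key="reports/2023.csv",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-C: the SDK sends the x-amz-server-side-encryption-customer-* headers
# (boto3 computes the key MD5 for you)
customer_key = os.urandom(32)            # 256-bit key that you manage yourself
s3.put_object(
    Bucket="compliance-bucket",
    Key="reports/2023-ssec.csv",
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```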
A Solutions Architect needs to design a secure environment for resources that are being deployed to a VPC. The solution should support a three-tier architecture consisting of web, application, and database tiers. The VPC needs to allow resources in the web tier to be accessible from the Internet using HTTPS. Which combination of actions would create the most secure network? (Select TWO)
- Attach a virtual private gateway to the VPC. Create public subnets for the web and application tiers and a private subnet for the database tier.
- Create a web security group that allows HTTPS requests from the Internet, an application security group that allows HTTP requests from the web security group only, and a database security group that allows TCP connections from the application security group on the database port only.
A company requires corporate IT governance and cost oversight of all of its AWS resources across its divisions around the world. Their corporate divisions want to maintain administrative control of the discrete AWS resources they consume and ensure that those resources are separate from other divisions. Which of the following options will support the autonomy of each corporate division while enabling the corporate IT to maintain governance and cost oversight? (Select TWO.)
- Enable IAM cross-account access for all corporate IT administrators in each child account.
- Use AWS Consolidated Billing by creating an AWS Organization that links the divisions' accounts to a parent corporate account.
With cross-account access you share resources in one account with users in a different account, so you don't need to create individual IAM users in each account, and users don't have to sign out of one account and sign in to another in order to access resources in different AWS accounts.
Your company is deploying a website running on Elastic Beanstalk. The website takes over 45 minutes for the installation and contains both static as well as dynamic files that must be generated during the installation process. As a Solutions Architect, you would like to bring the time to create a new Instance in your Elastic Beanstalk deployment to be less than 2 minutes. What do you recommend? (Select two)
- Create a Golden AMI with the static installation components already set up
- Use EC2 user data to customize the dynamic installation parts at boot time
*An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at anytime and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and should handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)
- Purchase provisioned retrieval capacity
- Use expedited retrieval to access the financial data
Each unit of provisioned capacity ensures that at least three expedited retrievals can be performed every five minutes and provides up to 150 MB/s of retrieval throughput. A sketch of an expedited restore request follows.
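For illustration, a hedged boto3 sketch of an expedited restore of an object archived to the Glacier storage class (bucket and key names are hypothetical); provisioned capacity is purchased separately so that expedited retrievals remain available during high demand:

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived (Glacier storage class) object using the Expedited tier,
# which typically makes the data available within 1-5 minutes.
s3.restore_object(
    Bucket="stocks-trading-archive",     # hypothetical
    Key="2022/ledger-archive.csv",
    RestoreRequest={
        "Days": 1,                        # how long the temporary copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```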
An aerospace engineering company recently adopted a hybrid cloud infrastructure with AWS. One of the Solutions Architect's tasks is to launch a VPC with both public and private subnets for their EC2 instances as well as their database instances. Which of the following statements are true regarding Amazon VPC subnets? (Select TWO.)
-The allowed block size in VPC is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses) -Each subnet maps to a single availability zone -Every subnet that you create is associated with the route table of the vpc. -When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block.
What is the maximum number of days that can be set in an EFS lifecycle policy?
90 days
An e-commerce application is hosted in AWS. The last time a new product was launched, the application experienced a performance issue due to an enormous spike in traffic. Management decided that capacity must be doubled the week of future product launches. Which is the MOST efficient way for management to ensure that capacity requirements are met?
Add a Dynamic Scaling policy
A Big Data analytics company wants to set up an AWS cloud architecture that throttles requests in case of sudden traffic spikes. To augment its custom technology stack, the company is looking for AWS services that can be used for buffering or throttling to handle traffic variations. Which of the following services can be used to support this requirement?
Amazon API Gateway, Amazon SQS, and Amazon Kinesis. API Gateway throttles requests to your API using the token bucket algorithm, where a token counts for a request. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes without losing messages or increasing latency.
*A gaming application is heavily dependent on caching and uses Amazon ElastiCache for Redis. The application's performance recently degraded due to the failure of a cache node. What should a Solutions Architect recommend to minimize performance degradation in the event of a node failure?
Configure ElastiCache Multi-AZ with automatic failover.
A media company runs a photo-sharing web application that is currently accessed across three different countries. The application is deployed on several Amazon EC2 instances running behind an Application Load Balancer. With new government regulations, the company has been asked to block access from two countries and allow access only from the home country of the company. Which configuration should be used to meet this changed requirement?
Configure AWS WAF on the ALB in the VPC, using a geo match condition to block the two countries and allow only the home country.
A company is using Amazon S3 to store frequently accessed data. The S3 bucket is shared with external users that will upload files regularly. A Solutions Architect needs to implement a solution that will grant the bucket owner full access to all uploaded objects in the S3 bucket. What action should be done to achieve this task?
Create a bucket policy that will require the users to set the object's ACL to bucket-owner-full-control.
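One common way to express that requirement, shown as a hedged sketch: a bucket policy that denies PutObject unless the request sets the bucket-owner-full-control canned ACL (the bucket name is hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::shared-upload-bucket/*",   # hypothetical bucket
            "Condition": {
                "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="shared-upload-bucket", Policy=json.dumps(policy))
```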
Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security.
Create a new KMS key and use it to enable encryption helpers that leverage AWS KMS to store and encrypt the sensitive information. The first time you create or update a Lambda function that uses environment variables in a region, a default service key is created for you automatically within AWS KMS and is used to encrypt environment variables. However, if you want to use encryption helpers and have KMS encrypt environment variables after your Lambda function is created, you must create your own AWS KMS key and choose it instead of the default key (the default key gives errors when chosen for this purpose). Creating your own key gives you more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect your data.
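A hedged sketch of how a Lambda handler might decrypt an environment variable that was encrypted with the console's encryption helpers (the variable name is hypothetical; depending on how the variable was encrypted, KMS may also require an encryption context containing the function name):

```python
import base64
import os
import boto3

kms = boto3.client("kms")

def lambda_handler(event, context):
    # DB_PASSWORD holds base64-encoded ciphertext when encryption helpers are used
    encrypted = os.environ["DB_PASSWORD"]
    decrypted = kms.decrypt(
        CiphertextBlob=base64.b64decode(encrypted),
        # Some configurations require the function name as encryption context:
        # EncryptionContext={"LambdaFunctionName": context.function_name},
    )["Plaintext"].decode("utf-8")
    # ... use `decrypted` to connect to the database ...
    return {"status": "ok"}
```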
A retail company has built their AWS solution using serverless architecture by leveraging AWS Lambda and Amazon S3. The development team has a requirement to implement AWS Lambda across AWS accounts. The requirement entails using a Lambda function with an IAM role from an AWS account A to access an Amazon S3 bucket in AWS account B. As a Solutions Architect, which of the following will you recommend as the BEST solution to meet this requirement?
Create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket. Set the IAM role as the Lambda function's execution role. Verify that the bucket policy grants access to the Lambda function's execution role
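A hedged sketch of the two halves of that setup (account IDs, role name, and bucket name are hypothetical): the execution role's identity policy in account A and the bucket policy in account B.

```python
import json

# Identity policy attached to the Lambda execution role in account A (111111111111)
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::account-b-bucket",
                "arn:aws:s3:::account-b-bucket/*",
            ],
        }
    ],
}

# Bucket policy on the S3 bucket in account B (222222222222) granting the
# Lambda execution role from account A access to the bucket
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/lambda-execution-role"
            },
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::account-b-bucket",
                "arn:aws:s3:::account-b-bucket/*",
            ],
        }
    ],
}

print(json.dumps(execution_role_policy, indent=2))
print(json.dumps(bucket_policy, indent=2))
```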
A company needs to use Aurora as the RDS database engine for its application. The Solutions Architect has been instructed to implement a 90-day backup retention policy. Which of the following options can satisfy the given requirement?
Create an AWS Backup plan that takes daily snapshots with a retention period of 90 days. Aurora automated backups support a maximum retention period of only 35 days, so AWS Backup is needed to reach 90 days.
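A hedged boto3 sketch of a daily backup plan with 90-day retention (the plan name, vault, and schedule are assumptions; a backup selection would then assign the Aurora cluster to the plan):

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "aurora-90-day-retention",     # hypothetical
        "Rules": [
            {
                "RuleName": "daily-snapshots",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},        # 90-day retention
            }
        ],
    }
)
plan_id = plan["BackupPlanId"]
# Next step (not shown): create_backup_selection to assign the Aurora cluster ARN to this plan
```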
An application with a 150 GB relational database runs on an EC2 instance. The application is used infrequently, with small peaks in the morning and evening. What is the MOST cost-effective storage type among the options below?
EBS General Purpose SSD (gp2). The minimum volume size for Throughput Optimized HDD (st1) is 500 GB, while the scenario only needs 150 GB. At current pricing it would cost about $22.50/month for st1 and only about $15/month for gp2, so General Purpose SSD is the most cost-effective choice. For more information on EBS volumes, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
*A company developed a financial analytics web application hosted in a Docker container using MEAN (MongoDB, Express.js, AngularJS, and Node.js) stack. You want to easily port that web application to AWS Cloud which can automatically handle all the tasks such as balancing load, auto-scaling, monitoring, and placing your containers across your cluster. Which of the following services can be used to fulfill this requirement?
Elastic Beanstalk. ECS would need to be configured manually, whereas Elastic Beanstalk automatically handles load balancing, auto scaling, monitoring, and container placement across the cluster.
For faster processing, the data processing application needs to parse through this healthcare data in an in-memory database that is highly available as well as HIPAA compliant. As a solutions architect, which of the following AWS services would you recommend for this task?
ElastiCache for Redis, which is HIPAA eligible. Redis is also PCI DSS compliant and provides snapshots, replication, transactions, pub/sub, geospatial support, and advanced data structures.
A data analytics company, which uses machine learning to collect and analyze consumer data, is using Redshift cluster as their data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?
Enable cross-region snapshot copy in your Amazon Redshift cluster.
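A minimal boto3 sketch (the cluster identifier, destination Region, and retention period are assumptions):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Automatically copy automated and manual snapshots to another Region for DR
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",   # hypothetical
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                        # days to keep copied automated snapshots
)
```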
*A startup is building an AI-based face recognition application in AWS, where they store millions of images in an S3 bucket. As the Solutions Architect, you have to ensure that each and every image uploaded to their system is stored without any issues. What is the correct indication that an object was successfully stored when you put objects in Amazon S3?
HTTP 200 result code and MD5 checksum. The S3 API will return an error code in case the upload is unsuccessful.
A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property.The customer is storing objects using the Standard Storage class.Where are the customer's objects replicated?
Multiple facilities in eu-west-1
A startup company is looking for a solution to cost-effectively run microservices without the operational overhead of managing infrastructure. The solution needs to be able to scale quickly to accommodate rapid changes in the volume of requests. What is the most cost-effective solution?
Run the microservices in AWS Lambda behind an Amazon API Gateway.
You have a Cassandra cluster running in private subnets in an Amazon VPC. A new application in a different Amazon VPC needs access to the database. How can the new application access the database?
Set up a VPC peering connection between the two Amazon VPCs
An application is hosted in an Auto Scaling group of EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process. Which of the following is the most suitable type of scaling policy that you should use?
Step scaling, since the scenario calls for threshold values on CloudWatch alarms and a set of step scaling adjustments.
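A hedged boto3 sketch of a step scaling policy (the group name, adjustments, and bounds are assumptions); the returned policy ARN would then be referenced as the action of a CloudWatch alarm:

```python
import boto3

autoscaling = boto3.client("autoscaling")

response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",          # hypothetical
    PolicyName="cpu-step-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    MetricAggregationType="Average",
    StepAdjustments=[
        # Intervals are relative to the CloudWatch alarm threshold
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)
policy_arn = response["PolicyARN"]  # attach this ARN to the CloudWatch alarm's actions
```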
A Solutions Architect is designing the storage layer for a production relational database. The database will run on Amazon EC2. The database is accessed by an application that performs intensive reads and writes, so the database requires the LOWEST random I/O latency. Which data storage method fulfills the above requirements?
Stripe data across multiple Amazon EBS volumes using RAID 0
You are automating the creation of EC2 instances in your VPC. Hence, you wrote a python script to trigger the Amazon EC2 API to request 50 EC2 instances in a single Availability Zone. However, you noticed that after 20 successful requests, subsequent requests failed. What could be a reason for this issue and how would you resolve it?
There is a vCPU-based On-Demand Instance limit per region which is why subsequent requests failed. Just submit the limit increase form to AWS and retry the failed requests once approved.
A company needs to implement a secure data encryption solution to meet regulatory requirements. the solution must provide security and durability in generating, storing and controlling cryptographic data keys. Which action should be taken to provide the most secure solution?
Use AWS KMS to generate the cryptographic keys and import the keys into AWS Certificate Manager. Use AWS KMS key policies to control access to the keys. IAM policies by themselves are not sufficient to allow access to a CMK, although you can use them in combination with the CMK's key policy.
Your security team requires each Amazon ECS task to have an IAM policy that limits the task's privileges to only those required for its use of AWS services. How can you achieve this?
Use IAM roles for Amazon ECS tasks to associate a specific IAM role with each ECS task definition
*A retail company uses AWS Cloud to manage its IT infrastructure. The company has set up "AWS Organizations" to manage several departments running their AWS accounts and using resources such as EC2 instances and RDS databases. The company wants to provide shared and centrally-managed VPCs to all departments using applications that need a high degree of interconnectivity. As a solutions architect, which of the following options would you choose to facilitate this use-case?
Use VPC sharing (via AWS Resource Access Manager) to share one or more subnets with other AWS accounts belonging to the same parent organization in AWS Organizations. To set this up, the account that owns the VPC (the owner) shares one or more subnets with other accounts (participants) that belong to the same organization. After a subnet is shared, the participants can view, create, modify, and delete their own application resources in it.
A company hosts a website on premises. The website has a mix of static and dynamic content but users experience latency when loading static files. Which AWS service can help reduce latency?
CloudFront with the on-premises servers as the origin.
A software company has resources hosted in AWS and on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both resources. Which of the following options are valid? (Select TWO.)
Use SQS or SWF to utilize both the on-premises servers and EC2 instances for your decoupled application.
An online cryptocurrency exchange platform is hosted in AWS which uses ECS Cluster and RDS in Multi-AZ Deployments configuration. The application is heavily using the RDS instance to process complex read and write database operations. To maintain the reliability, availability, and performance of your systems, you have to closely monitor how the different processes or threads on a DB instance use the CPU, including the percentage of the CPU bandwidth and total memory consumed by each process. Which of the following is the most suitable solution to properly monitor your database?
Enable Enhanced Monitoring in RDS, which uses an agent on the DB instance. By default, Enhanced Monitoring metrics are stored in CloudWatch Logs for 30 days; this can be changed by modifying the retention of the RDSOSMetrics log group in CloudWatch.
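A minimal boto3 sketch of turning on Enhanced Monitoring for an existing instance (the instance identifier and monitoring role ARN are hypothetical; the role is assumed to have the AmazonRDSEnhancedMonitoringRole managed policy):

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="crypto-exchange-db",   # hypothetical
    MonitoringInterval=60,                        # seconds; 0 disables Enhanced Monitoring
    MonitoringRoleArn="arn:aws:iam::111122223333:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```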
A company deployed several EC2 instances in a private subnet. The Solutions Architect needs to ensure the security of all EC2 instances. Upon checking the existing Inbound Rules of the Network ACL, she saw this configuration: ALL 0.0.0.0/0 allow TCP 110.238.109.238/32 deny If a computer with an IP address of 110.238.109.37 sends a request to the VPC, what will happen?
It is allowed. Network ACL rules are evaluated in order, and as soon as a rule matches the traffic it is applied. The ALL 0.0.0.0/0 allow rule matches the request first, so the request is allowed.
*A media company uses Amazon S3 buckets for storing their business-critical files. Initially, the development team used to provide bucket access to specific users within the same account. With changing business requirements, cross-account S3 access requirements are also growing. The company is looking for a granular solution that can offer user level as well as account-level access permissions for the data stored in S3 buckets. As a Solutions Architect, which of the following would you suggest as the MOST optimized way of controlling access for this use-case?
Use Amazon S3 bucket policies. Bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket, and they can grant permissions to IAM users in the same account as well as to other AWS accounts, which covers both user-level and account-level access. You can further restrict access to specific resources based on certain conditions, for example request time (Date condition), whether the request was sent using SSL (Boolean condition), the requester's IP address (IpAddress condition), or the requester's client application (String conditions).
What is a JSON Web Token (JWT) issued by Amazon Cognito used for?
User authentication, not to provide access to AWS resources.