SA-Pro

Which of the following is an example of a buffer-based approach to controlling costs?

A mobile image upload and processing service makes use of SQS to smooth an erratic demand curve

You want to gradually migrate data directly from an on-prem RAID10 file server to S3 without moving it to other storage first. What 2 things will you use?

- AWS CLI
- AWS Storage Gateway - Volume Gateway in Stored mode

Which AWS service can help orchestrate a continuous delivery process?

AWS CodePipeline

Your company is migrating to a containerized architecture. You want to minimize management, but you also want to make use of some third-party add-ons. Which service fits?

AWS EKS. EKS runs the Kubernetes management platform for you on AWS across multiple AZs. Because it's Kubernetes-conformant, you can use third-party add-ons.

Which patch baseline for MS SQL in Patch Manager: AWS-DefaultPatchBaseline or AWS-WindowsDefaultPatchBaseline?

AWS-DefaultPatchBaseline

You are helping a customer build a CloudFormation template. During the stack creation, you need to get a software license key from a third-party via API call. What resource would you use?

AWS::CloudFormation::CustomResource

Define BASE

- Basically Available - values availability even if data is stale
- Soft state - might not be instantly consistent across stores
- Eventual consistency - will achieve consistency at some point

Your client has decided to use Elastic Beanstalk to facilitate deployments. They want the shortest deployment time above all other considerations. Which deployment option should they choose, and what is the downside?

All at once, which will incur downtime as the instances are upgraded.

What 2 services for dealing with Layer 7 DDoS attacks?

- CloudFront
- AWS WAF

3 ways to provide secure access to private files in S3?

- CloudFront Signed URLs
- CloudFront Signed Cookies
- CloudFront Origin Access Identity

What is the most efficient way of logging all external interaction with AWS services for your accounts globally?

Set up CloudTrail in your main region and configure it to log all regions, storing the logs in a single S3 bucket in your main region. By default, CloudTrail will log all regions and store them in a single S3 location; it can, however, be configured to log only specific regions.

You are working with a pharmaceutical company on designing a workflow for processing data. Once a day, a large 2TB dataset is dropped off at a pre-defined file share, where the file is processed by a Python script containing some proprietary data aggregation routines. On average, it takes 20-30 minutes to complete the processing. At the end, a notification has to be sent to the submitter of the dataset letting them know processing is complete. Which of the following architectures will work in this scenario? Why not SQS? Why not Lambda?

Stand up memory-optimized instances and provision an EFS volume. Pre-load the data on the EFS volume. Use a User Data script to sync the data from the EFS share to the local instance store. Use an SDK call to SNS to notify when the processing is complete, sync the processed data back to the EFS volume, and shut down the instance. Why not SQS? A 2TB dataset is far too big for SQS messages. Why not Lambda? Lambda can only run for 15 minutes.

SG: Stateless or Stateful?

StateFUL

T/F AWS SMS is free to use but we must pay for storage resources used in the migration process.

True; the first 100 are free.

Per the requirements of a government contract your company recently won, you must encrypt all data at rest. Additionally, the material used to generate the encryption key cannot be produced by a third-party because that could result in a vulnerability. You are making use of S3, EBS and RDS as data stores, so these must be encrypted. Which of the following will meet the requirements at the least cost?

Use AWS KMS to create a customer-managed CMK with external key material. Create a random 256-bit key and encrypt it with the wrapping key. Import the encrypted key with the import token. When creating S3 buckets, EBS volumes, or RDS instances, select the CMK from the drop-down list.
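
A minimal boto3 sketch of that BYOK flow, assuming the `cryptography` package for the OAEP wrapping step; the key description and wrapping-algorithm choice are illustrative:

```python
import os
import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

kms = boto3.client("kms")

# 1. Create a CMK with no key material (Origin=EXTERNAL).
key_id = kms.create_key(
    Origin="EXTERNAL", Description="BYOK CMK for S3/EBS/RDS"
)["KeyMetadata"]["KeyId"]

# 2. Fetch the wrapping (public) key and the import token.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Generate 256 bits of key material in-house and wrap it.
key_material = os.urandom(32)  # 256-bit key, not produced by a third party
public_key = serialization.load_der_public_key(params["PublicKey"])
wrapped = public_key.encrypt(
    key_material,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# 4. Import the wrapped key material with the import token.
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```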

Your company has come under some hard times, resulting in downsizing and cuts in operating budgets. You have been asked to create a process that will increase expense awareness and enhance your team's ability to contain costs. Given the reduction in staff, any sort of manual analysis would not be popular, so you need to leverage the AWS platform itself for automation. What is the best design for this objective?

Use AWS Budgets to create a budget. Choose to be notified when monthly costs are forecasted to exceed your updated monthly target. *AWS Budgets is specifically designed for creating awareness and transparency in your AWS spending rate and trends.
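
A hedged boto3 sketch of such a budget; the account ID, limit, and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # illustrative account ID
    Budget={
        "BudgetName": "monthly-cost-guardrail",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when *forecasted* monthly spend crosses 100% of the limit.
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```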

Your client is a small engineering firm which has decided to migrate their engineering CAD files to the cloud. They currently have an on-prem SAN with 30TB of CAD files and growing at about 1TB a month as they take on new projects. Their engineering workstations are Windows-based and mount the SAN via SMB shares. Propose a design solution that will make the best use of AWS services, be easy to manage and reduce costs where possible.

Use the AWS CLI to sync the CAD files to S3. Set up Storage Gateway - File Gateway locally and configure the CAD workstations to mount it via SMB. *At present, EFS doesn't support Windows-based clients. Storage Gateway - File Gateway does support SMB mount points. The other options introduce additional unneeded costs.

A client calls you in a panic. They have just accidentally deleted the private key portion of their EC2 key pair. Now, they are unable to SSH into their Amazon Linux servers. Unfortunately the keys were not backed up and are considered gone for good. What can this customer do to regain access to their instances? 2 answers

Use AWS Systems Manager Automation with the AWSSupport-ResetAccess document to create a new SSH key for your current instance. Or: stop the instance, detach its root volume, and attach it as a data volume to another instance. Modify the authorized_keys file, move the volume back to the original instance, and restart the instance. *These are the two methods AWS recommends if you lose the private key for an EC2 key pair: Systems Manager Automation, or using a secondary instance to edit the authorized_keys file.
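
A sketch of kicking off the first method with boto3; the instance ID is a placeholder, and the runbook accepts additional optional parameters (subnet, helper instance type) not shown here:

```python
import boto3

ssm = boto3.client("ssm")

# Run the AWSSupport-ResetAccess runbook against the locked-out instance.
# It launches a temporary EC2Rescue helper instance, generates a new key
# pair, and stores the replacement private key in Parameter Store.
execution = ssm.start_automation_execution(
    DocumentName="AWSSupport-ResetAccess",
    Parameters={
        "InstanceId": ["i-0123456789abcdef0"],  # illustrative instance ID
    },
)
print(execution["AutomationExecutionId"])  # track progress in SSM Automation
```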

You are helping a client prepare a business case for cloud migration. One of the required parts of the business case is an estimation of AWS costs per month. The client has about 200 VMs in their landscape under VMware vCenter. Due to security concerns, they will not allow any external agents to be installed on their VMs for discovery. How might you most efficiently gather information about their VMs to build a cost estimate with the least amount of effort?

Use Application Discovery Service to gather details on the network connections, hardware and performance of the VMs. Export this data as CSV and use it to approximate monthly AWS costs by aligning current VMs with similar EC2 instance types.

How do you best handle temporary credentials for a mobile app?

Use Cognito SDK to provide temp credentials
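
For illustration, these are the raw identity-pool calls that the Cognito mobile SDKs wrap (guest/unauthenticated flow shown; the pool ID is a placeholder):

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Requires an identity pool that allows unauthenticated identities;
# authenticated flows pass provider tokens via the Logins parameter.
identity_id = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555"
)["IdentityId"]

creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]
# Short-lived keys the app can use to sign AWS requests.
print(creds["AccessKeyId"], creds["Expiration"])
```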

Your application has to process a very volatile, inconsistent flow of inbound data in order. What would be most reliable and cost-effective?

Use SQS to receive inbound messages and a single reserved instance to process them. Use a FIFO queue to satisfy the ordering requirement.
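
A minimal sketch of that pattern with boto3; the queue URL and message fields are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; URL is illustrative.
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/inbound.fifo"

# Messages sharing a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"reading": 42}',
    MessageGroupId="inbound-stream",
    MessageDeduplicationId="msg-0001",  # or enable content-based deduplication
)

# The single consumer instance drains the queue in arrival order.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
```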

What is the main benefit of loosely coupled architectures for scalability?

More atomic functional units (discrete units of work can scale independently)

Your company's AWS migration was not planned out very well across the enterprise. As a result, different business units created their own accounts and managed their own resources. Recently, an internal audit of costs shows that there may be some room for improvement with regard to how reserved instances are being used throughout the enterprise. What is the most efficient way to ensure that the reserved instance spend is being best used?

Setup Consolidated Billing for a single account and link all the other various accounts in the organization. Ensure that Reserved Instance Sharing is turned on. The discounts for specific reserved instances will automatically be applied to the consolidated invoice.

We have set up an auto scaling group using dynamic scaling based on CPU utilization. During times of heavy spikes in demand, our fleet is initially unable to keep up with demand but eventually settles in. How might we address this most cost-effectively?

Reduce the cooldown time to allow scaling to be more dramatic and responsive.

What types of instances can you run as Dedicated?

- On-Demand
- Spot
- Reserved

What is the first step of planning a cloud migration ?

Get a clear understanding of current costs.

What is an In-Place upgrade?

It involves performing application updates on live Amazon EC2 instances.

What info do you need to calculate DynamoDB partitions?

- Table size
- RCUs
- WCUs

Do dedicated hosts reserve capacity?

Yes; you literally have the whole host to yourself.

3 data formats supported by Athena

- Parquet
- JSON
- ORC

2 ways to increase Dynamo read operations

- DAX
- Secondary indexes

Your client is contemplating migration to a hybrid architecture over the next year. What preparation tasks would you suggest that would be directly tied to supporting this migration? (Choose 3)

- Re-architect tightly coupled interfaces to loosely coupled patterns
- Create an accurate inventory of all systems and services
- Spend time uncovering or verifying current on-prem total ownership costs

List 4 benefits of Continuous Delivery

- Improved developer productivity
- Automated and consistent release preparation
- Faster delivery of updates
- Improved code quality

Given a VPC CIDR of 10.0.0.0/16 and subnet CIDR block of 10.0.0.0/24, what would you expect the DNS address to be for DHCP clients in that subnet given default settings? How do you know?

10.0.0.2. The Amazon-provided DNS server is always at the base of the VPC CIDR range +2 (here, the subnet shares the VPC's base address).

Minimum number of volumes for RAID 5?

3

What is a canary release?

A variation of the rolling deployment method, called canary release, involves deployment of the new software version on a very small percentage of servers at first. This way, you can observe how the software behaves in production on a few servers, while minimizing the impact of breaking changes. If there is an elevated rate of errors from a canary deployment, the software is rolled back. Otherwise, the percentage of servers with the new version is gradually increased

List 3 services that support VPC endpoints:

- API Gateway
- DynamoDB
- Kinesis Data Streams

How many partitions for a DynamoDB table with 25GB? What is the calculation?

At least 3: 25 / 10 = 2.5, rounded up to 3.
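
As a worked example, here is the exam-era heuristic behind that math (assumed rule of thumb: roughly 10 GB, 3,000 RCU, and 1,000 WCU per partition; DynamoDB's real internals have since evolved):

```python
import math

def dynamo_partitions(table_size_gb: float, rcu: int = 0, wcu: int = 0) -> int:
    """Exam-era heuristic: a partition holds ~10 GB and serves up to
    3,000 RCU / 1,000 WCU; take whichever requirement is larger."""
    by_size = math.ceil(table_size_gb / 10)
    by_throughput = math.ceil(rcu / 3000 + wcu / 1000)
    return max(by_size, by_throughput, 1)

print(dynamo_partitions(25))  # 25 / 10 = 2.5 -> rounds up to 3
```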

Define ACID.

- Atomic
- Consistent
- Isolated
- Durable

List 2 characteristics of OAuth: what does it provide, and what does it issue?

- Provides authorization
- Issues tokens to clients

Your client has defined an RPO and RTO of 24 hours for their 2GB database. What general approach would you recommend to fulfill these requirements most cost-effectively?

Backup and Restore. With the relatively small data size and generous RTO/RPO, a simple backup and restore process would work well.

What is the minimum support level for access to the AWS Support API?

Business

Your organization does not have a good software QA process in place, and it's difficult to anticipate how your app will perform in production. What deployment method presents the lowest risk in this situation?

Canary Release. A Canary Release is a way to introduce a new version of an application into production with limited exposure.

A client has asked you to help troubleshoot a Service Control Policy. Upon reviewing the policy, you notice that they have used multiple "Statement" elements for each Effect/Action/Resource object but the policy is not working. What would you suggest next? Why?

Change the policy to combine the multiple Statement elements into one element with an object array. SCP syntax allows only one Statement element, but that single Statement element can contain multiple statement objects.
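
A sketch of a corrected policy, expressed as a Python dict pushed through boto3; the policy name and the Deny action are illustrative:

```python
import json
import boto3

org = boto3.client("organizations")

# One "Statement" key whose value is an array of statement objects.
# Duplicate "Statement" keys are invalid JSON, so only one survives parsing.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "ec2:TerminateInstances", "Resource": "*"},
    ],
}

org.create_policy(
    Content=json.dumps(scp),
    Description="Example SCP with a single Statement array",
    Name="example-scp",
    Type="SERVICE_CONTROL_POLICY",
)
```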

You have been asked to give employees the simplest way of accessing the corporate intranet and other internal resources, from their iPhone or iPad. The solution should allow access via a Web browser, authentication via SAML integration and you need to ensure that no corporate data is cached on their device. Which option would meet all of these requirements?

Configure Amazon WorkLink and connect to the servers using a Web Browser with the link provided *Amazon WorkLink is a fully managed, cloud-based service that enables secure access to internal websites and apps from mobile devices. It provides single URL access to the applications and also links to existing SAML-based identity providers. Amazon WorkLink does not store or cache data on user devices as the web content is rendered in AWS and sent to user devices as encrypted Scalable Vector Graphics (SVG). WorkLink meets all of the requirements in the question and is therefore the only correct answer.

You are helping a client design their AWS network for the first time. They have a fleet of servers that run a very precise and proprietary data analysis program. It is highly dependent on keeping the system time across the servers in sync. As a result, the company has invested in a high-precision stratum-0 atomic clock and network appliance which all servers sync to using NTP. They would like any new AWS-based EC2 instances to also be in sync as close as possible to the on-prem atomic clock as well. What is the most cost-effective, lowest maintenance way to design for this requirement?

Configure a DHCP Option Set with the on-prem NTP server address and assign it to each VPC. Ensure NTP (UDP port 123) is allowed between AWS and your on-prem network. *DHCP Option Sets provide a way to customize certain parameters that are issued to clients upon a DHCP request. Setting the NTP server is one of those parameters.
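
A hedged boto3 sketch of that setup; the NTP server address and VPC ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Point DHCP clients in the VPC at the on-prem stratum-0 appliance.
# Remember to allow NTP (UDP 123) between AWS and the on-prem network.
options_id = ec2.create_dhcp_options(
    DhcpConfigurations=[{"Key": "ntp-servers", "Values": ["10.50.0.10"]}]
)["DhcpOptions"]["DhcpOptionsId"]

ec2.associate_dhcp_options(DhcpOptionsId=options_id, VpcId="vpc-0123456789abcdef0")
```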

A client wants help setting up a way to manage access to the AWS Console and various services on AWS for their employees. They are starting out small but expect to provide AWS-hosted services to their 20,000 employees within the year. They currently have Active Directory on-premises, use VMware to host their VMs. They want something that will allow for minimal administrative overhead and something that could scale out to work for their 20,000 employees when they have more services on AWS. Due to audit requirements, they need to ensure that the solution can centrally log sign-in activity. Which option is best for them?

Connect the multiple accounts with AWS Organizations. Deploy AWS Directory Service for Microsoft Active Directory on AWS and configure a trust with your on-premises AD. Configure AWS Single Sign-On with the users and groups who are permitted to log into AWS. Give the users the URL to the AWS SSO sign-in web page. *For user bases of more than 5,000, and when a trust relationship with on-prem directories is wanted, AWS recommends AWS Directory Service for Microsoft Active Directory. This is also compatible with AWS Single Sign-On, which provides a simple way to provide SSO for your users across AWS Organizations. Additionally, you can monitor and audit sign-in activity centrally using CloudTrail.

You have been asked to help a company with optimizing cost on AWS. You notice in reviewing documentation that they have constructed a transit network to link around 30 VPCs in different regions. When you review traffic logs, most of the traffic is across regions. Given this information, what might you recommend to reduce costs?

Consolidate resources into as few regions and AZs as necessary. *By smartly consolidating resources into fewer regions and AZs, you are able to reduce or potentially eliminate data transfer and thus lower your overall costs.

For large organizationally complex AWS landscapes, it is considered a best practice to combine a tagging strategy with lifecycle tracking of various projects to identify orphaned resources that are no longer generating value for the organization and should be decommissioned. With which AWS Well-Architected Framework Pillar is this best practice most aligned?

Cost optimization *Tagging has many uses but one strong use-case is in being able to tie resources that incur costs with cost centers or projects to create a direct line of sight to actual AWS expenses. If this visibility does not exist, costs tend to increase because "someone else is paying." A Best Practice of the Cost Optimization Pillar is to maintain expenditure awareness.

You have decided to make some changes to your landscape. Your landscape consists of four EC2 instances within a VPC interacting mostly with S3 buckets. You decide to move your EC2 instances into a spread placement group. You then create a VPC endpoint for S3. These changes have which of these impacts? 2 things

Costs will decrease. Security profile will be improved. *Costs will decrease because you are using the S3 gateway endpoint to reach S3 rather than Internet egress. Security will be improved because the traffic is not routed out through the Internet.

How do you tolerate an AZ failure when using EFS across multiple EC2 instances?

Create EFS mount targets in each AZ and configure each EC2 instance to mount the common target FQDN.

You are helping a client with some process automation. They have managed to get their website landscape and deployment process encapsulated in a large CloudFormation template. They have recently contracted with a third-party service to provide some automated UI testing. To initiate the test scripts, they need to make a call out to an external REST API. They would like to integrate this into their existing CloudFormation template but not quite sure of the best way to do that. Help them decide which of the following ideas is feasible and incurs the least extra cost.

Create a Lambda function which issues a call out to the external REST API using the POST method. Define a custom resource in the CloudFormation template and associate the Lambda function and execution role with the custom resource. Include DependsOn to ensure that the function is only called after the other instances are ready. *To integrate external services into a CloudFormation template, we can use a custom resource. Lambda makes a very good choice for this scenario because it can handle some logic if needed and make a call out to an external API. Using an EC2 instance to make this call is excessive, and we likely would not have the ability to configure the third-party API to poll an SQS queue.
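
A minimal sketch of the Lambda side of such a custom resource (the template would declare an AWS::CloudFormation::CustomResource whose ServiceToken is this function's ARN); the third-party endpoint is a made-up placeholder:

```python
# 'requests' and the REST endpoint stand in for the third-party testing API.
import requests
import cfnresponse  # helper AWS makes available to inline (ZipFile) Lambda code

def handler(event, context):
    try:
        if event["RequestType"] == "Create":
            # Fire the external UI-test run once the stack resources exist
            # (guaranteed by DependsOn in the template).
            requests.post("https://api.example-tests.com/v1/runs", timeout=30)
        # Always signal CloudFormation, or the stack hangs until timeout.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```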

How do you improve a Dynamo table where common queries do not use the partition key?

Create a global secondary index with the most commonly queried attribute as the hash (partition) key.

You are consulting for a large multi-national company that is designing their AWS account structure. The company policy says that they must maintain a centralized logging repository but localized security management. For economic efficiency, they also require all sub-account charges to roll up under one invoice. Which of the following solutions most efficiently addresses these requirements?

Create a stand-alone consolidated logging account and configure all sub-account CloudWatch and CloudTrail activity to route to that account. Use an SCP to restrict sub-accounts from changing CloudWatch and CloudTrail configuration. Configure consolidated billing under a single account and register all sub-accounts to that billing account. Create localized IAM Admin accounts for each sub-account *Service Control Policies are an effective way to broadly restrict access to certain features of sub-accounts. Use of a single separate logging account is an effective way to create a secure logging repository.

You currently manage a website that consists of two web servers behind an Application Load Balancer. You currently use Route 53 as a DNS service. Going with the current trend of websites doing away with the need to enter "www" in front of the domain, you want to allow your users to simply enter your domain name. What is required to allow this?

Create an A record for your top-level domain name as an alias for the ALB.
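
A boto3 sketch of creating that apex alias record; the zone IDs and names are placeholders (note the AliasTarget HostedZoneId must be the ALB's canonical zone ID, not your own zone's):

```python
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",  # bare apex, no "www"
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's zone ID
                    "DNSName": "my-alb-1234.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```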

Several years ago, the company you are consulting for started an SOA concept to enable more modularity. At that time, they chose to deploy each microservice as separate LAMP stacks launched in Elastic Beanstalk instances due to the ease of deployment and scalability. They are now in the process of migrating all the services over to Docker containers. Which of the following options would make the most efficient use of AWS resources?

Create an auto-scaled group of EC2 instances and run Kubernetes across them to orchestrate the containers.

For your production web farm, you have configured an auto scaling group behind a Network Load Balancer. Your auto-scaling group is defined to have a core number of reserved instances and to scale with spot instances. Because of differences in spot pricing across AZs, sometimes you end up with many more instances in one AZ over another. During times of peak load, you notice that AZs with fewer instances are averaging 70% CPU utilization while the AZ with more instances average barely above 10% CPU utilization. What is the most likely cause of this behavior?

Cross-zone load balancing is disabled on the Network Load Balancer. *Cross-zone load balancing ensures that requests are equally spread across all available instances, regardless of AZ. When cross-zone load balancing is enabled, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. When cross-zone load balancing is disabled, each load balancer node distributes traffic across the registered targets in its Availability Zone only.

Moving a large amount of historic files?

DMS

A client calls you in a panic. They notice on their RDS console that one of their mission-critical production databases has an "Available" listed under the Maintenance column. They are extremely concerned that any sort of updates to the database will negatively impact their DB-intensive mission-critical application. They at least want to review the update before it gets applied, but they are not sure when they will get around to that. What do you suggest they do?

Defer the updates indefinitely until they are comfortable. *For RDS, certain OS updates are marked as Required. If you defer a required update, you receive a notice from Amazon RDS indicating when the update will be performed. Other updates are marked as Available, and these you can defer indefinitely. You can also apply the maintenance items immediately or schedule the maintenance for your next maintenance window.

In CloudFront, Behaviors permit which of the following scenarios?

Delivery of different origins based on URL path

What is the difference between Continuous Deployment and Continuous Delivery?

Continuous Delivery still includes a manual check before release to Production.

Creating a new AMI, assigning it to a new Launch Configuration and then updating an existing Auto Scaling Group is an example of this upgrade method

Disposable: the new release is deployed on new instances while instances containing the old version are terminated.

Based on past statistics of our web traffic, we observe that we sometimes get traffic spikes on Monday morning. What is the most cost-effective type of scaling to use for this scenario?

Dynamic. You might be tempted to use Scheduled given the traffic patterns, but that might scale needlessly if we do not get the traffic spike. The most efficient way would be Dynamic scaling based on some metric like connections, CPU, or network I/O.

What stage does the CAF focus on? Does it apply to changing business processes?

Early stages of cloud adoption. No; the reinvention of business processes is not part of the CAF.

Which service provides an in-memory cache that is ENCRYPTED?

ElastiCache for Redis

You need an in-memory cache but you want it to be able to survive an AZ failure. Which option is best?

ElastiCache for Redis; Memcached does not do Multi-AZ.

You are helping an IT organization meet some security audit requirements imposed on them by a prospective customer. The customer wants to ensure their vendors uphold the same security practices as they do before they can become authorized vendors. The organization's assets consist of around 50 EC2 instances, all within a single private VPC. The VPC is only accessible via an OpenVPN connection to an OpenVPN server hosted on an EC2 instance in the VPC. The customer's audit requirements disallow any direct exposure to the public internet. Additionally, prospective vendors must demonstrate that they have a proactive method in place to ensure OS-level vulnerabilities are remediated as soon as possible. Which of the following AWS services will fulfill this requirement?

Employ Amazon Inspector to periodically assess applications for vulnerabilities or deviations from best practices. *AWS Macie is a service that attempts to detect confidential data rather than OS vulnerabilities. Since there is no public internet access for the VPC, services like GuardDuty and Shield have limited usefulness. They help protect against external threats versus any OS-level needs. AWS Artifact is simply a document repository and has no monitoring functions. Only AWS Inspector will proactively monitor instances using a database of known vulnerabilities and suggest patches.

30 AWS accounts. Which steps will allow you to most efficiently protect against SQL injection attacks?

Ensure all sub-accounts are members of an organization in the AWS Organizations service. Use Firewall Manager to create an ACL rule to deny requests that contain SQL code. Apply the ACL to WAF instances across all organizational accounts.

You have some extra unused RIs in one AZ but need them in another AZ. If you want to make use of them in the other AZ, what 2 things can you do?

- For regional RIs, you do not need to do anything.
- For zonal RIs, you just need to modify the AZ first.

What does a VPC endpoint enable ?

It lets traffic to supported AWS services stay inside the VPC rather than travel over the internet.

What mode should you use to back up on-site data to S3 using Storage Gateway?

Gateway stored volume mode

What is an immutable way to set policies on a Glacier vault?

Glacier vault lock

If you hear Scala, which AWS service?

Glue

Best database type for complex relationship data, and an example?

Graph DBs; example: Neptune

You are migrating from an Oracle on-prem database to an Oracle RDS database. What type of migration is this ?

Homogeneous (same database engine migration)

2 recommendations if you are concerned about hardware failure?

- Horizontal scaling over vertical scaling
- Spread placement groups

How does an IPS differ from an IDS?

An IPS will take automatic action based on suspicious traffic.

You are helping a client troubleshoot a problem. The client has several Ubuntu Linux servers in a private subnet within a VPC. The servers are configured to use IPv6 only and must periodically communicate to the Internet to get security patches for applications installed on them. Unfortunately, the servers are unable to reach the internet. An internet gateway has been deployed in the public subnet in the VPC and default routes are configured. Which of the following could fix the issue? Why?

Implement an Egress-Only Internet Gateway in the public subnet and configure an IPv6 default route from the private subnet to the gateway. With IPv6, you only require an Egress-Only Internet Gateway and an IPv6 route to reach the internet from within a VPC.

You are helping a client migrate over an internal application from on-prem to AWS. The application landscape on AWS will consist of a fleet of EC2 instances behind an Application Load Balancer. The application client is an in-house custom application that communicates to the server via HTTPS and is used by around 40,000 users globally across several business units. The same exact application and landscape will be deployed in US-WEST-2 as well as EU-CENTRAL-1. Route 53 will then be used to redirect users to the closest region. When the application was originally built, they chose to use a self-signed 2048-bit RSA X.509 certificate (SSL/TLS server certificate) and embedded the self-signed certificate information into the in-house custom client application. Regarding the SSL certificate, which activities are both feasible and minimize extra administrative work?

Import the existing certificate and private key into Certificate Manager in both regions. Assign that imported certificate to the Application Load Balancers using their respective regionally imported certificate. *You can import private certificates into Certificate Manager and assign them to all the same resources you can with generated certificates, including an ALB. Also note that Certificate Manager is a regional service so certificates must be imported in each region where they will be used. The other options in this question would either require you to update the certificate on the client or requires unnecessary steps to resolve the challenge.
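
A short boto3 sketch of the per-region import; the file paths are placeholders:

```python
import boto3

# ACM is regional: import the same cert + key into each region that
# fronts an ALB, then attach each regional ARN to that region's listener.
cert = open("server.crt", "rb").read()
key = open("server.key", "rb").read()

for region in ("us-west-2", "eu-central-1"):
    acm = boto3.client("acm", region_name=region)
    arn = acm.import_certificate(Certificate=cert, PrivateKey=key)["CertificateArn"]
    print(region, arn)  # use this ARN on the region's ALB HTTPS listener
```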

4 early parts of the CAF?

- Investigate the need for training for Program and Project Management staff around agile project management.
- Work with internal Finance business partners to design a transparent chargeback model.
- Hold a workshop with IT business partners about the creation of an IT Service Catalog concept.
- Work with the Human Resources business partners to create new job roles, titles and compensation/remuneration scales.

After an EMR cluster is terminated, what happens to the data stored as HDFS?

It is deleted

A development team is comprised of 20 different developers working remotely around the globe all in different timezones. They are currently practicing Continuous Delivery and desperately want to mature to true Continuous Deployment. Given a very large codebase and distributed nature of the team, enforcing consistent coding standards has become the top priority. Which of the following would be the most effective to address this problem and get them closer to Continuous Deployment?

Include code style check in the build stage of the deployment pipeline using a linting tool.

An electromagnetic pulse (EMP) from a sun flare takes out the electrical grid. What type of disaster is this?

Infrastructure Failure of utilities or adverse environmental conditions are considered Infrastructure disasters.

Your company has recently acquired another business unit and is in the process of integrating it into the corporate structure. Like your company, the acquisition's IT assets are fully hosted on AWS. They have a mix of EC2 instances, RDS instances and Lambda-based applications, but these will be end-of-lifed as the new business unit transitions to your company's standard applications over the next year. Fortunately, the CIDR blocks of the respective VPCs do not overlap. If the goal is to integrate the new network into your current hub-and-spoke network architecture to provide full access to each other's resources, what can you do that will require the least amount of disruption and management?

Initiate a VPC peering request between your hub VPC and all VPCs from the new business. Set up routes in the hub to direct traffic to and from the new VPCs. VPC Peering provides a way to connect VPCs so they can communicate with each other.

Due to a dispute with their co-location hosting company, your client is forced to move some applications as soon as possible to AWS. The main application uses IBM DB2 for the data store layer and a Java process on AIX which interacts via JMS with IBM MQ hosted on an AS400. What is the best course of action to reduce risk and allow for fast migration?

Install DB2 on an EC2 instance and migrate the data by doing an export and import. Spin up an instance of Amazon MQ in place of IBM MQ. Install the Java process on a Linux-based EC2 system. *For a fast migration with minimal risk, we would be looking for a lift-and-shift approach and not spend any time on re-architecting or re-platforming that we don't absolutely have to do. Amazon MQ is JMS compatible and would provide a shorter path to the cloud than SQS. DMS does not support DB2 as a target.

What storage option for fast disk I/O?

Instance Store (did not specify persistent)

What will happen when fetching object metadata halfway through an S3 upload?

The object has not fully propagated yet, so we will get a 404.

What is JMS? What AWS service can do that?

Java Message Service Amazon MQ

When developing an Amazon Kinesis Data Streams application, what is the recommended method to read data from a shard?

The KCL (Kinesis Client Library); Amazon always recommends this.

Which is more cost-effective, CloudHSM or KMS? Which allows more customization?

- More cost-effective: KMS
- More customization: CloudHSM

Are pods associated with ECS or Kubernetes?

Kubernetes

You are consulting with a small Engineering firm that wants to move to a Bring-Your-Own-Device policy where employees are given some money to buy whatever computer they want (within certain standards). Because of device management and security concerns, along with this policy is the need to create a virtualized desktop concept. The only problem is that the specialized engineering applications used by the employees only run on Linux. Considering current platform limitations, what is the best way to deliver a desktop-as-a-service for this client?

Launch a Linux Workspace in AWS WorkSpaces and customize it with the required software. Then, create a custom bundle from that image and use that bundle when you launch subsequent Workspaces.

What type of write is associated with eventual consistency?

Lazy writes

Consolidated Billing offers what potential economy-of-scale benefit?

Leveraging tiered pricing

Which 2 of these does CodeDeploy provide: logging, provisioning, scaling, monitoring? Which 3 services can handle the parts that CodeDeploy does not?

Logging and monitoring. Beanstalk, CloudFormation, and OpsWorks can do the scaling and provisioning.

A client has asked you to review their system architecture in advance of a compliance audit. Their production environment is setup in a single AWS account that can only be accessed through a monitored and audited bastion host. Their EC2 Linux instances currently use AWS-encrypted EBS volumes and the web server instances sit in a private subnet behind an ALB that terminates TLS using a certificate from ACM. All their web servers share a single Security Group, and their application and data layer servers similarly share one Security Group each. Their S3 objects are stored with SSE-S3. The auditors will require all data to be encrypted at rest and will expect the system to secure against the possibility that TLS certificates might be stolen by would-be spoofers. How would you help this client pass their audit in a cost effective way? 3 answers

Make no changes to the EBS volumes. Leave the S3 objects alone. Continue to use the ACM for the TLS certificate. *All the measures they have taken with Certificate Manager, S3 encryption and the EBS volumes meet the audit requirements. There is no need for LUKS, CloudHSM or client-side encryption.

You are helping a client troubleshoot a new Direct Connect connection. The connection is up and you can ping the AWS peer IP address, but the BGP peering session cannot be established. What should be your next logical troubleshooting steps?

Make sure no firewalls or ACLs are blocking TCP port 179 or any high-numbered ephemeral ports. Because the connection is up and we can ping the AWS peer, the problem must be at a higher layer of the OSI model than the Physical or Data Link layers. BGP uses TCP port 179 to communicate routes, so we should check that no NACL or SG is blocking it. Additionally, we should make sure the ASNs are properly configured in the proper ranges.

You have just completed the move of a Microsoft SQL Server database over to a Windows Server EC2 instance. Rather than logging in periodically to check for patches, you want something more proactive. Which of the following would be the most appropriate for this?

Make use of Patch Manager and the AWS-DefaultPatchBaseline pre-defined baseline

How would you move Informix to Aurora?

Manually create the target schema on Aurora, then use Data Pipeline with JDBC to move the data.

Your organisation currently runs an on-premise Windows file server. Your manager has requested that you utilise the existing Direct Connect connection into AWS, to provide a method of storing and accessing these files securely in the Cloud. The method should be simple to configure, appear as a standard file share on the existing servers, use native Windows technology and also have an SLA. Choose an option which meets these needs.

Map an SMB share to the Windows file server using Amazon FSx for Windows File Server and use RoboCopy to copy the files across *To choose the correct option, we can start by eliminating services which don't have an SLA, in this case only Storage Gateway doesn't have an SLA so we can remove that as an option. Next we can rule out EFS and S3 as they don't use native Windows technology or provide a standard Windows file share, therefore the only correct answer is to use Amazon FSx for Windows File Server.

You are helping a Retail client migrate some of their assets over to AWS. Presently, they are in the process of moving their Enterprise Data Warehouse. They are planning to re-host their very large Oracle data warehouse on EC2 in a high availability configuration across AZs. They presently have several Scala scripts that process some detailed Point of Sale data that is collected each day. The scripts perform some aggregation on the data and import the aggregate into their Oracle database. They want to move this process to AWS as well. Which option would be the most cost-effective way for them to do this?

Migrate the processing to AWS Glue. *AWS Glue is a fully managed extract, transform, and load (ETL) service and is compatible with Scala. EMR could do this but represents more overhead than necessary. Lambda is not compatible with Scala, and migrating to Redshift does not bring anything in this case if the customer wants to retain their Oracle database.

Does an SG or a NACL support DENY rules?

NACL

NACLs: Stateless or Stateful?

NACLs are StateLESS

Does Simple AD allow trust relationships with other domains, such as your on-premises AD?

NO

Does ElastiCache for Memcached offer native encryption at rest?

No, but ElastiCache for Redis does.

Can you use OpsWorks to clone stacks to other regions?

No

Does Amazon MQ allow a VPC endpoint?

No

Is Solaris supported?

No

Can you do Multi-AZ with Redshift?

No, but you can run multiple clusters in different AZs.

Is Informix supported by the Database Migration Service or the Schema Conversion Tool?

No, neither.

You are consulting for a company that performs specialized customer data analytics. Their customers can upload raw customer data to a website and receive back demographic statistics. Their application consists of a REST API created using PHP and Apache. The application is self-contained and works in real-time to return results as a JSON response to the REST API call. Because there is customer data involved, company policy states that data must be encrypted in transit and at rest. Sometimes, there are data quality issues and the PHP application will throw an error. The company wants to be notified immediately when this occurs so they can proactively reach out to the customer. Additionally, many of the company's customers use very old mainframe systems that can only access internet resources using IP address rather than a FQDN. Which architecture will meet these requirements fully?

Provision a Network Load Balancer with an EIP in front of your EC2 target group. Install the CloudWatch Logging agent on the EC2 instances and stream logs to CloudWatch. Configure notification via SNS when application errors are noticed in the system logs. Configure the server AMI to use encrypted EBS volumes with a key from AWS KMS. Terminate SSL on the EC2 instances. *The requirement of a static IP leads us to a Network Load Balancer with an EIP. For SSL, our requirement is end-to-end encryption so we have to terminate on the EC2 instances because we cannot do it at the NLB.

Which RAID has the fastest writes, and why? What is the other name for it?

RAID0 (striping) provides the highest write performance of these options because writes are distributed across disks and no parity is required.

For a new EC2 file server, you want RAID fault-tolerance for 1 TB of data. Which option should you choose? What is the layout of the EBS volumes?

RAID1 with 2 EBS volumes of not less than 1 TB each.

What is the relationship between RPO and BC?

RPO provides an expectation of potential manual data re-entry for recovery plans. Recovery Point Objective will define the potential for data loss during a disaster. This can inform an expectation of manual data re-entry for BC planners.

What is RTO?

Recovery Time Objective: the time within which systems and applications must be recovered after an outage; the amount of downtime a business can endure and survive.

What is RPO?

Recovery Point Objective (data loss): the point in time to which systems and data must be recovered after an outage; the amount of data loss a business can endure.

The 5 pillars?

- Reliability
- Security
- Performance Efficiency
- Operational Excellence
- Cost Optimization

You are a database administrator for a company in the process of changing over from RDS MySQL to Amazon Aurora for MySQL. You setup the new Aurora database in a similar fashion to how your pre-existing RDS MySQL landscape was setup: Multi-AZ with Read Replica in a backup region. You have just completed the migration of data and verified that the new Aurora landscape is performing like it should. You are now in the process of decommissioning the old RDS MySQL landscape. First, you decide to disable automatic backups. Via the console, you try to set the Retention Period to 0 but receive an error saying "Cannot Set Backup Retention Period to 0". How can you disable automatic backups?

Remove the Read Replicas first. *For RDS, Read Replicas require backups for managing read replica logs and thus you cannot set the retention period to 0. You must first remove the read replicas and then you can disable backups.

9 valid CloudFormation template sections? Which 1 is required?

- Resources (required)
- Description
- Metadata
- Parameters
- Mappings
- Conditions
- Transform
- Outputs
- Format Version

What migration strategy generally has the least cost?

Retire

You are consulting for a client who is trying to define a comprehensive cloud migration roadmap. They have a legacy custom ERP system written in RPG running on an AS400 system. RPG programmers are becoming rare so support is an issue. They run Lotus Notes email which has not been upgraded in years and thus out of support. They do have a web application that serves as their CRM created several years ago by a consulting group. It is a Java and JSP-based application running on Tomcat with MySQL as the data layer hosted on a Red Hat Linux server. The company is in a real growth cycle and realizes their current platforms cannot sustain them. So, they are about to launch a project to implement SAP as a replacement for their legacy ERP system over the next year. What migration strategy would you recommend for their landscape that would allow them to modernize as soon as possible?

Retire the Lotus Notes email and implement Amazon WorkMail. Replatform the CRM application's Tomcat portion to Elastic Beanstalk and the data store to MySQL RDS. Invest time in training Operations staff on CloudFormation, and spend time architecting the landscape for the new SAP platform. Do nothing to the legacy ERP platform until the SAP implementation is complete. *In this case, retiring Lotus Notes is the better move because migrating it to EC2 would just prolong the inevitable. The CRM system is fairly new and can be re-platformed on Elastic Beanstalk. Due to the impending ERP upgrade, it makes no sense to do anything with the legacy ERP. It would take lots of work to port over an RPG application to run on AWS--if it's even possible.

Which of the following activities will have the most cost impact (increase or decrease) on your AWS bill?
- Start using AWS CodeCommit as your source code repository
- Add a new Route 53 hosted zone
- Deploy existing reserved instances into a Placement Group
- Provision an Elastic IP and associate it to a running instance
- Begin using AWS OpsWorks Stacks on EC2 to manage your landscape

Route 53 hosted zone. Provisioning an EIP to a running instance, using Placement Groups, and CodeCommit all cost nothing. OpsWorks Stacks on EC2 does not cost anything, though using it for on-prem systems does cost a small amount. The only item on this list that would change your AWS bill is adding a Route 53 hosted zone.

We are designing an application where we need to accept a steady stream of large binary objects up to 1GB each. We want our architecture to allow for scaling out. What would you select as the best option for intake of the BLOBs?

SQS; it can handle messages up to 2 GB using the Amazon SQS Extended Client Library for Java (large payloads are stored in S3).

List 4 good uses of Tagging

- Security
- Grouping
- Cost Allocation
- Automation

What should you use if a workload is idle most of the time?

Serverless

How do you make it so that only certain regions can be used?

Service Control Policy

Due to new corporate policies on data security, you are now required to use encryption at rest for all data. You have some EC2 Linux instances on AWS that were created without encryption for the root EBS volume. What can you do that meet the requirement and reduce administrative overhead?

Stop the instances and create AMIs from them. Copy the AMIs to the same region and select "Encrypt target EBS snapshots". Redeploy the instances using the encrypted AMI copies. *AWS does support encrypted root volumes, but converting an unencrypted root volume to an encrypted one requires a bit of a process. You must first create an AMI, then copy that newly created AMI to the same region, specifying that you want to encrypt the EBS volumes during the copy. You can then create a new instance with an encrypted root volume from the copied AMI. You can use either a generated key from KMS or your own CMK imported into KMS.
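
A boto3 sketch of the encrypted same-region copy step; the AMI ID and key alias are placeholders (omit KmsKeyId to use the default EBS key):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy the AMI to the *same* region, encrypting the EBS snapshots in flight.
copy = ec2.copy_image(
    Name="webserver-encrypted",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="alias/my-cmk",
)
print(copy["ImageId"])  # redeploy instances from this encrypted AMI
```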

You have just finished a contract with your client where you have helped them fully migrate to AWS. As you are preparing to transition out of the account, they would like to integrate their current help desk software, Jira, with the AWS support platform to be able to create and track tickets in one place. Which of the following do you recommend?

Subscribe to the Business Support Plan and direct them to the AWS Support API documentation. You must subscribe to at least the Business Support Plan to gain access to the AWS Support API.

What is the main complexity of a Layer 7 DDoS attack?

Telling an attack apart from normal user traffic.

Your company has contracted with a third-party Security Consulting company to perform some risk assessments on existing AWS resources. As part of a routine list of activities, they inform you that they will be launching a simulated attack on one of your EC2 instances. After the Security Group performed all their activities, they issue their report. In their report, they claim that they were successful at taking the EC2 instance offline because it stopped responding soon after the simulated attack began. However, you're quite certain that machine did not go offline and have the logs prove it. What might explain the Security company's experience?

The Security Company's traffic was seen as a threat and blocked dynamically by AWS. AWS must grant permission before any penetration testing is done. *AWS Shield and other counter-measure technologies work to protect all AWS customers from DDoS attacks. Unless AWS was aware of the test time and expected duration, it's likely the traffic was blocked as suspicious. AWS Firewall Manager is used to manage WAF ACLs, not to dynamically blacklist IPs. Similarly, VPC Flow Logs cannot automatically implement NACL changes as described here.

A client is trying to setup a new VPC from scratch. They are not able to reach the Amazon Linux web server instance launched in their VPC from their on-prem network using a web browser. You have verified the internet gateway is attached and the main route table is configured to route 0.0.0.0/0 to the internet gateway properly. The instance also is being assigned a public IP address. Which of the following would be another potential cause of the problem? 2 answers

The outbound network ACL allows ports 80 and 22 only. The subnet of the instance is not associated with the main route table. *For an HTTP connection to be successful, you need to allow port 80 inbound and allow the ephemeral ports outbound. Additionally, it is possible that the subnet is not associated with the route table containing the default route to the internet.

You have just set up a Service Catalog portfolio and collection of products for your users. Unfortunately, the users are having difficulty launching one of the products and are getting "access denied" messages. What 3 things could be the cause of this?

The product does not have a launch constraint assigned. The user launching the product does not have required permissions to launch the product. The launch constraint does not have permissions to CloudFormation.

T/F We can use DMS to migrate our MongoDB database to DynamoDB.

True

T/F - Eventual consistency could result in stale data.

True

T/F - Row locking attempts to ensure consistency by keeping updates atomic

True

The most common attack, based on forensic work security researchers have done after other attacks, seems to be the TCP Syn Flood attack. To better protect yourself from that style of attack, what is the least cost measure you can take?

This type of attack is automatically addressed by AWS; you do not need to take additional action. AWS Shield Standard is offered to all AWS customers automatically at no charge and will protect against TCP SYN flood attacks without you having to do anything, which meets the requirement of protecting against TCP SYN flood attacks at the lowest possible cost. A more robust solution better aligned with best practice would involve a load balancer in the data path; however, as this would provide more functionality than required at a higher cost, it is not the correct option for this question.

T/F Because we use VMWare we do not need to install agents on our VMs to use AWS Application Discovery Service.

True

T/F The Server Migration Connector will be downloaded from AWS and run as a virtual appliance on vSphere.

True

Migrating from VPN to Direct Connect, what 2 things should you do for minimal disruption?

Update BGP on your customer-side router to a higher weight than the VPN connection. Configure both the VPN and Direct Connect with the same BGP prefix. *Both the VPN and Direct Connect paths have to have the same BGP prefix to dynamically route among themselves. Using BGP, you should also configure route priorities from on-prem to AWS to use Direct Connect as primary and the VPN as secondary.

You work for a Clothing Retailer and have just been informed the company is planning a huge promotional sale in the coming weeks. You are very concerned about the performance of your eCommerce site because you have reached capacity in your data center. Just normal day-to-day traffic pushes your web servers to their limit. Even your on-prem load balancer is maxed out, mostly because that's where you terminate SSL and use sticky sessions. You have evaluated various options including buying new hardware but there just isn't enough time. Your company is a current AWS customer with a nice large Direct Connect pipe between your data center and AWS. You already use Route 53 to manage your public domains. You currently use VMware to run your on-prem web servers and sadly, the decision was made long ago to move the eCommerce site over to AWS last. Your eCommerce site can scale easily by just adding VMs, but you just don't have the capacity. Given this scenario, what is the best choice that would leverage as much of your current infrastructure as possible but also allow the landscape to scale in a cost-effective manner?

Use Server Migration Service to import a VM of a current web server into AWS as an AMI. Create an ALB on AWS. Define a target group using private IP addresses of your on-prem web servers and additional AWS-based EC2 instances created from the imported AMI. Use Route 53 to update your public-facing eCommerce name to point to the ALB as an alias record. *A Target Group for an ALB can contain instances or IP addresses. In this case, we can define the private IP addresses of our on-prem web servers alongside the private IP addresses of any EC2 instances we spin up. The caveat is that we can only use private IP addresses when defining a target group in this way.

You have just been informed that your company's data center has been struck by a meteor and it is a total loss. Your company's applications were not capable of being deployed with high availability so everything is currently offline. You do have a recent VM images and DB backup stored off-site. Your CTO has made a crisis decision to migrate to AWS as soon as possible since it would take months to rebuild the data center. Which of the following options will get your company's applications up and running again in the fastest way possible?

Use VM Import to upload the VM image to S3 and create AMIs of key servers. Manually start them in a single AZ. Stand up a single-AZ RDS instance and use the backup files to restore the database data. *The Server Migration Service uses the Server Migration Service Connector, an appliance VM that needs to be loaded locally in vCenter. We don't have a VMware system, only a backup of an image, so this won't work. The best thing we can do is import the VM and restore the database.

You are designing a DynamoDB datastore to record electric meter readings from millions of homes once a week. We share live weekly electric consumption charts on our website based on this data, so the week must be part of the primary key. How might we design our datastore for optimal efficiency?

Use a table per week to store the data. If we put all the time-series data in one big table, the last partition is the one that gets all the read and write activity, limiting the throughput. If we create a new table for each period, we can maximize RCU and WCU efficiency against a smaller number of partitions.
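
A small sketch of the table-per-period idea; the table naming scheme, keys, and attributes are illustrative:

```python
import boto3
from datetime import date

dynamodb = boto3.resource("dynamodb")

# Route each reading to its week's table so hot writes always land on a
# small, fresh set of partitions instead of one ever-growing table.
year, week, _ = date.today().isocalendar()
table = dynamodb.Table(f"meter_readings_{year}_w{week:02d}")

table.put_item(Item={
    "meter_id": "m-000042",             # partition key
    "read_at": "2020-01-06T00:00:00Z",  # sort key
    "kwh": 318,
})
```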

Your company is preparing for a large sales promotion coming up in a few weeks. This promotion is going to increase the load on your web server landscape substantially. In past promotions, you've run into scaling issues because the region and AZ of your web landscape is very heavily used. Being unable to scale due to lack of resources is a very real possibility. You need some way to absolutely guarantee that resources will be available for this one-time event. Which of the following would be the most cost-effective in this scenario?

Use an On-Demand Capacity Reservation. * If we only need a short-term resource availability guarantee, it does not make sense to contract for a whole year worth of Reserved Instance. We can instead use On-Demand Capacity Reservations.

You are consulting with a client to guide them on migration of an in-house data center to AWS. The client has stipulated in the contract that the migration cannot require any more than 1 hour downtime at a time and that there is always a fallback path. Additionally, they want an overall increase in business continuity capabilities when the migration is done. Their landscape is as follows: (1) Several databases with about 1TB of data combined which are heavily used 24x7 and considered mission critical; (2) About 40TB of historic files which are read sometimes but almost never updated; (3) About 150 web servers on VMware in various states of customization of which there is a current project underway to standardize them. The client's team has suggested some next steps but because they aren't yet familiar with AWS, they are not using equivalent AWS terms. Translating their suggestions, which of the following activities would you choose to meet the requirements, reducing costs and management where possible? 2 things

Use some block-level SAN replication tool to gradually migrate the on-prem historic files to AWS. Create new high powered stand-alone database instances in AWS and migrate data from on-prem database. Use log shipping to keep the databases in sync. Once we better understand AWS, we'll rebuild the servers and repartition the tables *The database migration suggestion aligns well with DMS as it can keep the databases in sync until cutover. SAN replication sounds a lot like Storage Gateway which is a reasonable way to migrate data to AWS. However, simply using K8s does not convert your VMs into containers or make them serverless. We can't restore tapes to AWS. Creating the same VM landscape on AWS just adds an additional layer of complexity that's not needed.

Your production web farm consists of a minimum of 4 instances. The application can run on any instance type with at least 16GB of RAM but you have selected m5.xlarge in your current launch configuration. You have defined a scaling policy such that you scale out when the average CPU across your auto scaling group reaches 70% for 5 minutes. When this threshold is reached, your launch configuration should add more m5.xlarge instances. You notice that auto scaling is not working as it should when your existing instances reach the scaling event threshold of 70% for 5 minutes. Since you are deployed in a heavily used region, you suspect there are capacity issues. Which of the following would be a reasonable way to solve this issue? 2 answers

Version the launch configuration to include additional instance types that also have at least 16GB of RAM. Provision some zonal reserved instances of m5.xlarge to ensure you have capacity when you need it. *If you do not have capacity reserved via a zonal RI or an on-demand capacity reservation, it is possible that the AZ is out of available capacity for the type of instance you need. You can reserve capacity, or you can increase the possible instance types in hopes that some other similarly equipped instance capacity is available.

What is a rolling deployment?

With a rolling deployment, the fleet is divided into portions so that the whole fleet isn't upgraded at once. During the deployment process, two software versions, new and old, are running on the same fleet. This method allows a zero-downtime update. If the deployment fails, only the updated portion of the fleet will be affected.

You are consulting with a company who is at the very early stages of their cloud journey. As a framework to help work through the process, you introduce them to the Cloud Adoption Framework. They read over the CAF and come back with a list of activities as next steps. They are asking you to validate these activities to keep them focused. Of these activities, which would you recommend delaying until later in the project?

Work with Marketing business partners to design an external communications strategy. External communication usually comes much later in the process, once project plans are defined and specific customer impact is better understood.

Is availability more important than consistency? In which model?

Yes, in BASE.

You are in the process of migrating a large quantity of small log files to S3 for long-term storage. To accelerate the process, and just because you can, you have created a quite sophisticated multi-threaded distributed process deployed across 100 VMs which can load hundreds of thousands of files at one time. For some reason, the process seems to be throttled somewhere along the chain. You try many things to uncover the source of the throttling, but nothing works. Reluctantly, you decide to turn off the KMS encryption setting for your S3 bucket, and the throttling goes away. You turn SSE-KMS back on, and the throttling is back. Given the troubleshooting steps, what is the most likely cause of the throttling, and how can you correct it?

You are hitting the KMS encrypt request account limit. You must request a limit increase via a Support Case.

You manage a relatively complex landscape across multiple AZs. You notice that incoming requests vary mostly with the time of day, but there is also a more unpredictable component resulting in smaller spikes and valleys in demand for your resources. Fortunately, you manage this landscape via OpsWorks Stacks. What options, if any, are available to you as part of the OpsWorks feature set?

You would define a baseline level of resources and configure them as 24/7 instances. Then you could define time-based instances to cover certain times of day. Finally, you could cover the volatile spikes with load-based instances. All of this can be done within OpsWorks Stacks.

What is required for a Service Catalog product to be successfully launched?

Either a launch constraint must be assigned and have sufficient permissions to deploy the product, or the user must have the same required permissions.

What is a disposable upgrade?

A new release is deployed on new instances while instances containing the old version are terminated.

Does Aurora PostgreSQL do cross-region replicas?

No

Which services have a four-nines (99.99%) SLA?

- Amazon Elastic Compute Cloud (Amazon EC2), including any Amazon Elastic Graphics, Amazon Elastic Inference, and Elastic IP Addresses
- Amazon Elastic Block Store (Amazon EBS)
- Amazon Elastic Container Service (Amazon ECS)
- Amazon Fargate for Amazon ECS (Amazon Fargate)

