Study Questions 2


A technician needs to configure three networks for a three-tier cloud service implementation: -- Two public-facing load balancers -- Ten private web servers -- Two private database servers Which of the following are the smallest networks that will accommodate these requirements? (Select THREE)

8.32.87.16/29, 172.16.3.240/27, 192.168.1.64/28 The network that hosts the two public-facing load balancers needs 3 public IP addresses: one for the gateway and two for the load balancers. The smallest network that satisfies this requirement is 8.32.87.16/29 (6 usable addresses). The network that hosts the ten private web servers needs 11 private IP addresses: one for the gateway and ten for the web servers. The network that hosts the two private database servers needs 3 private IP addresses: one for the gateway and two for the database servers. The two smallest offered networks that satisfy these private requirements are 192.168.1.64/28 (14 usable addresses) and 172.16.3.240/27 (30 usable addresses).
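The sizing arithmetic generalizes: a /n IPv4 block contains 2^(32-n) addresses, two of which (network and broadcast) are unusable. A minimal sketch of the calculation, with an illustrative function name and the host counts from the question:

```python
import math

def smallest_prefix(required_hosts: int) -> int:
    """Smallest /n whose block leaves enough usable addresses.

    A /n IPv4 block holds 2**(32 - n) addresses; the network and
    broadcast addresses are unusable, so usable = 2**(32 - n) - 2.
    """
    host_bits = math.ceil(math.log2(required_hosts + 2))
    return 32 - host_bits

print(smallest_prefix(2 + 1))   # load balancers:   gateway + 2  -> 29
print(smallest_prefix(10 + 1))  # web servers:      gateway + 10 -> 28
print(smallest_prefix(2 + 1))   # database servers: gateway + 2  -> 29
```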

Of ten newly deployed VMs from a single template, the cloud administrator notices one VM has direct root access enabled and automatic configuration control disabled. These settings do not comply with the company's policies for production VMs. Which of the following is the MOST likely cause of this situation?

Another administrator intentionally changed the settings If the template were misconfigured, all ten VMs, not just one, would have direct root access enabled and automatic configuration control disabled, so the template is not the cause. A hypervisor bug is possible but is the LEAST likely cause. Of the two remaining choices, a provisioning workflow breakdown is a machine error, while another administrator intentionally changing the settings is a human error, and human error is almost always more likely than machine error.

A university is running a DNA decoding project that will take seven years if it runs on its current internal mainframe. The university negotiated a deal with a large cloud provider, which will donate its cloud resource to process the DNA decoding during the low peak time throughout the world. Which of the following is the MOST important resource the university should ask the cloud provider to donate?

Any available compute resource Decoding workloads require large computing resources, so what the university needs from the cloud provider is any compute capacity the provider can donate during off-peak times throughout the world.

A private cloud administrator needs to configure replication on the storage level for a required RPO of 15 minutes and RTO of one hour. Which of the following replication types would be the BEST to use?

Asynchronous Asynchronous replication writes data to the primary storage location first and later sends copies to the remote replicas. Because the data is copied to the backup site with a delay, asynchronous replication provides eventual consistency using a store-and-forward design; the backup storage array is normally several transactions behind the primary. A 15-minute RPO and a one-hour RTO are feasible with asynchronous replication.
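A toy sketch of that store-and-forward design (class and method names are illustrative, not a real storage API): writes are acknowledged as soon as the primary commits them, and a background drain ships them to the replica later, so the replica lags the primary and the drain interval is what bounds the achievable RPO.

```python
import time
from collections import deque

class AsyncReplicatedStore:
    """Toy store-and-forward replication: the primary commits at once;
    the replica catches up only when drain() runs."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = deque()  # writes not yet shipped to the replica

    def write(self, key, value):
        self.primary[key] = value                # acknowledged immediately
        self.pending.append((key, value, time.time()))

    def drain(self):
        """Ship queued writes to the replica (run on a schedule)."""
        while self.pending:
            key, value, _ = self.pending.popleft()
            self.replica[key] = value

    def replica_lag(self):
        """Age of the oldest unshipped write; this bounds the RPO."""
        return time.time() - self.pending[0][2] if self.pending else 0.0

store = AsyncReplicatedStore()
store.write("order:1", "paid")          # primary has it, replica does not
print(store.replica_lag() >= 0.0)       # True: data at risk until the drain
store.drain()
print(store.primary == store.replica)   # True: replica is consistent again
```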

A customer has requirements for its application data to be copied into a second location for failover possibilities. Latency should be minimized, while RPO and RTO are over 15 minutes. Which of the following technologies BEST fits the customer's needs?

Asynchronous replication With 15-minute RPO and RTO requirements, snapshot copies and storage cloning are not practically viable: to satisfy the requirements, snapshot or clone images would have to be created every 15 minutes, which is not what those technologies are built for. There is no data mirroring technology here, only site mirroring. Asynchronous replication writes data to the primary first and later writes a copy to the remote site, on a schedule or in near real time; the backup storage array is normally several transactions behind the primary. With this near-real-time operation, asynchronous replication accommodates the 15-minute RPO and RTO requirements well.

A company wants to leverage a SaaS provider for its back-office services, and security is paramount. Which of the following solutions should a cloud engineer deploy to BEST meet the security requirements?

CASB A cloud access security broker (CASB) is on-premises or cloud-based software that sits between cloud service users and cloud applications, monitoring all activity and enforcing security policies. In this context, a CASB is the most holistic solution for the security requirements.

A company is seeking a new backup solution for its virtualized file servers that fits the following characteristics: -- The files stored on the servers are extremely large. -- Existing files receive multiple small changes per day. -- New files are only created once per month. -- All backups are being sent to a cloud repository. Which of the following would BEST minimize backup size?

Change block tracking Change block tracking (CBT) is a form of incremental backup. Compared with a differential backup, an incremental backup produces a smaller backup size because it captures only the blocks changed since the last backup, which suits extremely large files that receive many small daily changes.
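A minimal sketch of the block-tracking idea under assumed names (the 4 KB block size is illustrative): hash each fixed-size block, compare against the previous run, and back up only the blocks that changed, so backup size follows the change rate rather than the file size.

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per tracked block (illustrative)

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block so changed blocks can be detected."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes, new_data: bytes):
    """Yield (index, block) for each block that differs from last run."""
    for i, digest in enumerate(block_hashes(new_data)):
        if i >= len(old_hashes) or old_hashes[i] != digest:
            yield i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]

v1 = bytes(BLOCK_SIZE * 1000)          # ~4 MB file, the full baseline
v2 = bytearray(v1)
v2[5000] = 0xFF                        # one small in-place change
delta = list(changed_blocks(block_hashes(v1), bytes(v2)))
print(len(delta))                      # 1 block (~4 KB), not ~4 MB
```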

A cloud administrator reports a problem with the maximum number of users reached in one of the pools. There are ten VMs in the pool, each with a software capacity to handle ten users. Based on the dashboard metrics, 15% of the incoming new service requests are failing. Which of the following is the BEST approach to resolve the issue?

Check the DHCP scope and increase the number of available IP addresses by extending the pool If the DHCP scope has been exhausted, new clients cannot obtain IP addresses, so their service requests fail even though the VMs themselves still have spare user capacity.

A cloud administrator has deployed a new all-flash storage array with deduplication and compression enabled, and moved some of the VMs into it. The goal was to achieve 4:1 storage efficiency while maintaining sub-millisecond latency. Which of the following results would BEST suit the requirements?

Compression 1.3:1, Deduplication 3.1:1, Overall savings 4.3:1, Average latency 900us Two answers meet the requirement of sub-millisecond latency: 1) Compression 1.5:1, Deduplication 1.8:1, Overall savings 2.2:1, Average latency 600us; 2) Compression 1.3:1, Deduplication 3.1:1, Overall savings 4.3:1, Average latency 900us. Of these two, only the second also meets the 4:1 storage-efficiency requirement, so it is the correct answer.

A cloud engineer is migrating an application running on an on-premises server to a SaaS solution. The cloud engineer has validated the SaaS solution, as well as created and tested a migration plan. Which of the following should the cloud engineer do before performing the migration? (Select TWO).

Create a rollback plan; agree upon a change window To answer this question, sort the candidate activities into three categories: 1) activities delivered before creating and testing the migration plan; 2) activities performed while creating and testing the migration plan; 3) activities performed after creating and testing the migration plan but before the migration itself. The activities in category 3 are the answer. Activities such as submitting a request for change, getting the CAB's approval, and updating the change management database must be completed before the engineer can create and test a migration plan; they belong to category 1. The engineer has already created and tested the migration plan, meaning the plan of action is established and the test results are documented; those activities belong to category 2. Once testing is complete and the engineer is confident, he or she can safely perform the full migration: migrate all data to the cloud environment and move all client connections and communications over. The engineer should contact the SaaS provider so it knows the schedule and can have support staff ready in case of issues during the full migration, and all impacted users should be informed of the deployment schedule so they know what to expect. Agreeing upon a change window is therefore a must before the full migration; it belongs to category 3. In addition, the change management process follows a standardized sequence: recording the change, planning for it, testing the documentation, getting approvals, evaluating and validating the change, creating instructions for backing the change out if needed, and conducting any post-change reviews. If issues arise during the migration, the engineer has to roll the changes back, so a rollback plan is mandatory before any production migration; it also belongs to category 3.

A cloud engineer is using a hosted service for aggregating the logs for all the servers in a public cloud environment. Each server is configured via syslog to send its logs to a central location. A new version of the application was recently deployed, and the SaaS server now stops processing logs at noon each day. In reviewing the system logs, the engineer notices the size of the logs has increased by 50% each day. Which of the following is the MOST likely reason the logs are not being published after noon?

Data limit has been exceeded at the SaaS provider If the syslog service were not running on the servers, or if there were a cloud service provider outage, the SaaS server could not process logs at all; since logs are processed normally before noon, those two answers do not fit. If the cause were insufficient storage space in the logging directory, log processing would stop for good once the capacity threshold was reached on some day X and would not resume the next morning; because processing resumes every day and stops only at noon, storage space is not the cause either. The only reasonable cause is a data limit. Say X is the daily threshold for log data transfer: the logs have grown by 50% each day, so the full allotment is now consumed by midday and the server cannot send or receive any more logs after noon. Because the limit is a daily one, everything works normally again the next morning until the limit is hit again at noon.
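A hedged sketch of the arithmetic with made-up numbers for the starting volume and the provider's daily quota: if log volume grows 50% per day against a fixed daily ingest allotment, the hour at which the allotment runs out moves steadily earlier, which is consistent with a cutoff that settles around midday.

```python
volume = 4.0   # GB of logs produced on day 0 (assumed)
quota = 12.0   # GB/day the provider ingests before cutting off (assumed)

for day in range(6):
    if volume <= quota:
        print(f"day {day}: all {volume:.1f} GB ingested")
    else:
        # assumes logs arrive evenly through the day
        cutoff = 24 * quota / volume   # hour the daily allotment runs out
        print(f"day {day}: ingestion stops around {cutoff:.1f}h")
    volume *= 1.5                      # 50% growth per day
```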

A development team released a new version of an application and wants to deploy it to the cloud environment with a faster rollback and minimal downtime. Which of the following should the cloud administrator do to achieve this goal?

Deploy the application to a subset of servers in the environment and route traffic to these servers. To switch to the previous version, change the route to the non-updated servers Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments, called Blue and Green. At any time only one environment is live, serving all production traffic. As you prepare a new version of your software, deployment and the final stage of testing take place in the environment that is not live: in this example, Green. Once you have deployed and fully tested the software in Green, you switch the router so all incoming requests go to Green instead of Blue; Green is now live and Blue is idle. This technique can eliminate downtime due to app deployment, and it reduces risk: if something unexpected happens with the new version on Green, you can immediately roll back to the last version by switching back to Blue. This deployment model satisfies the scenario's requirements of faster rollback and minimal downtime.
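A minimal sketch of the blue-green switch (pool names and addresses are illustrative): the router consults a single pointer to the live pool, so both cutover and rollback are a one-line change.

```python
POOLS = {
    "blue":  ["10.0.1.10", "10.0.1.11"],  # current version (assumed IPs)
    "green": ["10.0.2.10", "10.0.2.11"],  # new version, staged and tested
}
live = "blue"  # the single pointer the router consults

def route(request_id: int) -> str:
    """Send a request to a server in whichever pool is live."""
    servers = POOLS[live]
    return servers[request_id % len(servers)]

print(route(1))   # served by a blue server
live = "green"    # cutover: all new traffic now hits the new version
print(route(2))   # served by a green server
live = "blue"     # rollback is the same one-line switch
```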

A company has implemented a change management process that allows standard changes during business hours. The company's private cloud hardware needs firmware updates and supports rolling upgrades. Which of the following considerations should be given to upgrade firmware and make the change as transparent as possible to users?

Fail the application over to perform the upgrade Making the change as transparent as possible to users means the impact on users is minimal. The hardware supports rolling upgrades, and the rolling upgrade method enables you to update a cluster of two or more nodes nondisruptively. The method has several steps: initiating a failover operation on each node in an HA pair, updating the failed-over node, initiating giveback, and then repeating the process for each HA pair in the cluster. Hence, the best method in this situation is to fail the application over and then perform the upgrade.
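A minimal sketch of that loop (the pair list and step functions are placeholders, not a vendor API): each node is drained via failover before its firmware is touched, then given back, so the workload never goes offline.

```python
# Illustrative HA pairs; in a real cluster these come from inventory.
ha_pairs = [("node-a1", "node-a2"), ("node-b1", "node-b2")]

def failover(node, peer):
    print(f"{peer} takes over the workload of {node}")

def update_firmware(node):
    print(f"{node} firmware updated while it serves no traffic")

def giveback(node, peer):
    print(f"{peer} hands the workload back to {node}")

# Rolling upgrade: drain each node via failover before touching it,
# so users keep working against its HA partner the whole time.
for a, b in ha_pairs:
    for node, peer in ((a, b), (b, a)):
        failover(node, peer)
        update_firmware(node)
        giveback(node, peer)
```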

Several suspicious emails are being reported from end users. Organizational email is hosted by a SaaS provider. Upon investigation, the URL in the email links to a phishing site where users are prompted to enter their domain credentials to reset their passwords. Which of the following should the cloud administrator do to protect against potential account compromise?

Forward the email to the systems team distribution list and provide the list of compromised users "Click on the URL link to verify the website and enter false domain credentials" does nothing to protect against account compromise. "Change the encryption key for the entire organization and lock out all users from using email until the issue is remediated" is also inappropriate: "encryption key" is too vague and general here, and locking every user out of email is a poor option when the goal is to protect the potentially compromised accounts. If there are 10,000 email users and only 50 compromised accounts, we do not want to disable email for all 10,000. "Notify users who received the email to reset their passwords regardless of whether they clicked the URL" is not the best choice either: if an account has already been compromised, the user's machine or mobile phone may be compromised as well, with a backdoor or other malicious software already installed, so changing the password does not really help. For comprehensive protection, the cloud administrator should notify the systems team about the breach; the systems admins will take the case from that point and execute their plan to holistically protect the potentially compromised accounts (which might include changing the end users' passwords, but normally also includes other activities such as reinstalling the machines).

A cloud administrator wants to make a web application on the company's private cloud available to multiple remote sites. Which of the following protocols BEST provides IP packet encapsulation?

GRE The Point-to-Point Tunneling Protocol (PPTP) is a Microsoft-developed protocol that has been deprecated and replaced by more current remote access technologies; it is considered obsolete and now has limited support in most operating systems and network equipment. L2TP is a communications protocol commonly used to connect to a remote device over the Internet; most VPN servers still support it, but it is an older protocol that is not as widely used as it once was because it does not natively support encryption or confidentiality without add-on software such as the IPsec framework. Generic Routing Encapsulation (GRE) is a standardized network tunneling protocol used to encapsulate any network layer protocol inside a virtual link between two locations; it is commonly used to create tunnels across a public network that carry private network traffic, so the two sites appear to be directly connected even though the traffic traverses the public Internet. The Session Initiation Protocol (SIP) is a signaling protocol for initiating, maintaining, and terminating real-time sessions such as voice, video, and messaging. Of these four protocols, GRE is the BEST one for reaching the web application from multiple remote sites.
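As a hedged illustration of what GRE encapsulation looks like on the wire, here is a Scapy sketch (the addresses are documentation-range placeholders, and actually sending the packet would require elevated privileges):

```python
from scapy.all import GRE, ICMP, IP  # pip install scapy

# Inner packet: private traffic between the two sites (assumed addresses).
inner = IP(src="10.0.0.5", dst="10.1.0.9") / ICMP()

# Outer packet: the public tunnel endpoints. GRE (IP protocol 47) wraps
# the whole inner IP packet so it crosses the Internet unmodified.
outer = IP(src="198.51.100.1", dst="203.0.113.1") / GRE() / inner

outer.show()      # outer IP -> GRE -> inner IP -> ICMP
# send(outer)     # transmitting the tunneled packet would need root
```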

A company wants to take advantage of cloud benefits while retaining control of and maintaining compliance with all its security policy obligations. Based on the non-functional requirements, which of the following should the company use?

Hybrid cloud as use is restricted to trusted customers With public IaaS, PaaS, and SaaS, the company cannot easily retain control of and maintain compliance with its security policy obligations: under those delivery models, data is stored within the service provider's systems, so the company has very little control over security settings. With a hybrid cloud, the company can use the cloud for applications and processing while keeping data on premises with tightly controlled access.

The InfoSec team has directed that compliance-driven database activity monitoring be performed without agents on a hosted database server in the public IaaS. Which of the following configurations is needed to ensure this requirement is achieved?

Implement built-in database tracking functionality "Configure the agent configuration file to log to the syslog server" is not appropriate, since using agents violates the InfoSec team's requirement. "Configure sniffing mode on database traffic" is not appropriate either, since that monitors only traffic going in and out of the database server, not the detailed activity within it. "Implement database encryption and secure copy to the NAS" is unrelated to database activity monitoring. "Implement built-in database tracking functionality" is the easiest and most practical approach, since almost all database systems already ship with their own tracking functionality.
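As a hedged, self-contained illustration of built-in tracking, the sketch below uses SQLite as a stand-in for the hosted database; production engines expose equivalents such as audit logs, triggers, or dedicated auditing features.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE audit_log (
        at      TEXT DEFAULT CURRENT_TIMESTAMP,
        action  TEXT,
        row_id  INTEGER
    );
    -- Built-in tracking: the engine itself records every insert,
    -- no agent process required on the server.
    CREATE TRIGGER track_insert AFTER INSERT ON patients
    BEGIN
        INSERT INTO audit_log (action, row_id) VALUES ('INSERT', NEW.id);
    END;
""")
db.execute("INSERT INTO patients (name) VALUES ('Ada')")
print(db.execute("SELECT action, row_id FROM audit_log").fetchall())
# [('INSERT', 1)] -- activity captured inside the database engine
```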

A cloud administrator is provisioning five VMs, each with a minimum of 8GB of RAM and a varying load throughout the day. The hypervisor has only 32GB of RAM. Which of the following features should the administrator use?

Memory over-commitment Five VMs, each with 8GB of RAM, add up to 40GB of configured memory on a physical host with only 32GB. Because the VMs' loads vary throughout the day, memory over-commitment lets the hypervisor run them all by allocating physical memory only as it is actually used.

A cloud administrator is securing an application hosted by an IaaS provider. The operating system on the VM has been updated. Which of the following should the administrator use to BEST secure the VM from attacks against vulnerable services regardless of operating system?

Patch management With a not-so-secure operating system (OS), a firewall, an intrusion detection system (IDS), and antivirus are all mandatory to keep the system secure; they block attacks that would exploit OS vulnerabilities. With a "secure" OS (one with built-in malware and antivirus protection), the OS itself is very hard to attack even without an explicit firewall, IDS, or antivirus. However, if a vulnerable application runs on top of that "secure" OS, an attacker can still exploit the VM through the application's vulnerabilities rather than the OS's. To keep the system secure regardless of the OS, the cloud administrator must follow good patch management practice, which keeps the application patched and hence secures the VM as a whole.

A manufacturing company has the following DR requirements for its IaaS environment: -- RPO of 24 hours -- RTO of 8 hours The company experiences a disaster and has a two-site hot/cold configuration. Which of the following is the BEST way for the company to recover?

Replicate the data from the non-failed site to another cloud provider, point users to it, and resume operations One very important detail in this scenario is that the company has a two-site hot/cold configuration: apart from the primary site, it has two backup sites, one configured as a hot site and one as a cold site. A cold site is a backup facility with little or no hardware installed; it is essentially office space with basic utilities such as power, cooling, air conditioning, and communications. A cold site is the most cost-effective of the disaster recovery site options, but because it has no pre-installed equipment, it takes a long time to set up well enough to fully resume business operations. In reality that process can take a few days, so the 8-hour RTO could not be met. The two cold-site answers, "Bring the cold site online, point users to it, and resume operations" and "Rebuild the site from the cold site, bring the site back online, and point users to it," are therefore incorrect. "Restore data from the archives on the hot site, point users to it, and resume operations" seems like a good choice except for the keyword "archives": archived data is inactive data that is no longer being used, and restoring from old archived data could not satisfy the 24-hour RPO. The only feasible option is to replicate data from the non-failed site to another cloud provider, point users to it, and resume operations.

When designing a new private cloud platform, a cloud engineer wants to make sure the new hypervisor can be configured as fast as possible by cloning the OS from the other hypervisor. The engineer does not want to use local drives for the hypervisors. Which of the following storage types would BEST suit the engineer's needs?

SAN Since using local drives is not an option, DAS is incorrect. The remaining three options, CAS, NAS, and SAN, are not local storage, and since a SAN is the fastest of them, SAN is the most appropriate answer.

A healthcare provider determines a Europe-based SaaS electronic medical record system will meet all functional requirements. The healthcare provider plans to sign a contract to use the system starting in the next calendar year. Which of the following should be reviewed prior to signing the contract?

Security auditing A healthcare organization adopts strong governance practices to ensure that all data is handled in compliance with HIPAA, whether that data lives on clipboards, in filing cabinets, or in the cloud. Because healthcare is a highly regulated industry, security is a top-of-mind concern for the management team. The Europe-based SaaS electronic medical record system meets all functional requirements, but the healthcare provider must also make sure it meets the security requirements, so a security audit needs to be conducted prior to signing the contract.

A company is interested in a DRP. The purpose of the plan is to recover business as soon as possible. The MOST effective technique is:

Site mirroring Site mirroring can encompass multiple redundancy strategies. It refers to the process of keeping the backup site updated so it is ready to assume the workload in the event of a primary data center failure, providing an identical copy of the original site's data and applications operating in standby at a remote site. By implementing a mirroring strategy, you are better prepared to survive an outage with little or no impact on operations. Cloud operations can be deployed in a hot site model where two fully redundant cloud data centers stay in sync, with the standby site backing up the primary in real time in the event of a failure. The hot site offers the most redundancy of any model, but it is also the most expensive and is used when having your cloud computing operations go offline is not an option.

A cloud architect created a new delivery controller for a large VM farm to scale up according to organizational needs. The old and new delivery controllers now form a cluster. However, the new delivery controller returns an error when entering the license code. Which of the following is the MOST likely cause?

The existing license is for a lower version "The existing license has expired" and "A firewall is blocking the port on the license server" are not appropriate answers, since the old controller can still use the license without any issue. "The existing license is not supported for clusters" is also inappropriate, since the old server would get the same error if clustering were the problem. The only reasonable cause is that the existing license is for a lower version of the controller, which is why the old controller still works fine while the new one cannot accept it.

A multinational corporation needs to migrate servers, which are supporting a national defense project, to a new datacenter. The data in question is approximately 20GB in size. The engineer on the project is considering datacenters in several countries as possible destinations. All sites in consideration are on a high-speed MPLS network (10Gb+ connections). Which of the following environmental constraints is MOST likely to rule out a possible site as an option?

Legal restrictions The MPLS network is very high speed (10Gb+) while the data is only about 20GB, so bandwidth and downtime impact are not constraints here. Nothing in the scenario relates to peak time frames, so that is not an appropriate answer either. The servers support a national defense project, meaning they host very sensitive information; with that sensitivity, candidate datacenters have to satisfy every requirement of law and regulators. Hence legal restrictions are MOST likely to rule out a possible site.
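The claim that bandwidth is no constraint is easy to verify with back-of-the-envelope arithmetic (the link-efficiency factor is an assumption):

```python
data_gb = 20        # data to migrate, in gigabytes
link_gbps = 10      # MPLS link speed, in gigabits per second
efficiency = 0.7    # assumed usable fraction after protocol overhead

seconds = data_gb * 8 / (link_gbps * efficiency)
print(f"~{seconds:.0f} seconds")  # ~23 s: bandwidth is not the constraint
```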

A systems administrator migrated a company's CRM middleware to a VPC and left the database in the company's datacenter. The CRM application is now accessible over the Internet. A VPN between the company network and the VPC is used for the middleware to communicate with the database server. Since the migration, users are experiencing high application latency. Which of the following should the company do to resolve the latency issue?

Move the database to the cloud The most likely cause of this issue is network related: the VPN between the VPC and the datacenter introduces distance-related latency on every middleware-to-database call. Increasing resources (e.g., compute) for the middleware or implementing load balancers in the VPC would not address that root cause. Moving the database into the cloud, co-located with the middleware, should resolve the problem.

A public cloud provider recently updated one of its services to provide a new type of application load balancer. The cloud administrator is tasked with building out a proof-of-concept using this new service type. The administrator sets out to update the scripts and notices the cloud provider does not list the load balancer as an available option type for deploying this service. Which of the following is the MOST likely reason?

The administrator needs to update the version of the CLI tool The question asks for the reason for the issue, not for a way to deploy the new load balancer, so "The administrator can deploy the new load balancer via the cloud provider's web console" is incorrect. The administrator has already updated the scripts, so writing a new script function to call the service is not appropriate either. An incorrect account would prevent the administrator from using most cloud services, not merely from listing the new load balancer type, so "The administrator is not using the correct cloud provider account" does not fit. An old version of the CLI, however, would prevent the administrator from seeing newly released services and options from the provider, which makes it the best answer for this scenario.

A cloud administrator updates the syslog forwarder configuration on a local server in production to use a different port. The development team is no longer receiving the audit logs from that server. However, the security team can retrieve and search the logs for the same server. Which of the following is MOST likely the issue?

The development team's syslog server is configured to listen on the wrong port One of the first questions when troubleshooting a technical issue is what changed recently, before the issue appeared. Here, the port used to forward syslog was changed, and that change most likely led to the issue: if the sending and receiving servers use different ports for syslog, the receiving server never gets the messages. The security team evidently adjusted its collector to match while the development team did not, so the MOST likely cause is that the development team's syslog server is configured to listen on the wrong port.
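A minimal sketch of the mismatch (the ports and message are illustrative, and unprivileged ports are used so it runs without root): syslog over UDP raises no error when nothing listens on the destination port; the datagrams are simply lost.

```python
import socket

LISTEN_PORT = 1514  # where the dev team's collector still listens
SEND_PORT = 5514    # where the forwarder now sends (the changed setting)

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", LISTEN_PORT))
listener.settimeout(1.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<134>audit: login ok", ("127.0.0.1", SEND_PORT))

try:
    print(listener.recvfrom(1024))
except socket.timeout:
    print("no logs: the forwarder and the listener disagree on the port")
```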

