Cloud+ Set 3A

A systems administrator is deploying a new cloud application and needs to provision cloud services with minimal effort. The administrator wants to reduce the tasks required for maintenance, such as OS patching, VM and volume provisioning, and autoscaling configurations. Which of the following would be the BEST option to deploy the new application? A. A VM cluster B. Containers C. OS templates D. Serverless

For a systems administrator who needs to provision cloud services with minimal effort and reduce tasks required for maintenance, the best option to deploy the new application would be: D. Serverless Serverless computing is a cloud-computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources. The main advantage of serverless computing is that it abstracts the underlying infrastructure away from the developer or administrator, which means there's no need to manage OS patching, VM and volume provisioning, or autoscaling configurations. The cloud provider handles these tasks automatically, allowing the administrator to focus solely on deploying and managing the application code. Serverless computing (D) is the most fitting option for the requirements specified, as it offloads the responsibility of managing the infrastructure, scaling, and OS maintenance to the cloud provider.
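For illustration, a minimal sketch of a function written for an AWS Lambda-style serverless runtime (the function body and response are hypothetical). Note that nothing in the code concerns OS patching, VM provisioning, or autoscaling; the provider handles all of that:

    import json

    # Hypothetical AWS Lambda-style handler: the cloud provider invokes this
    # function on demand; no server, OS, or scaling configuration is needed.
    def handler(event, context):
        # 'event' carries the request payload; 'context' carries runtime metadata.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello from a serverless function"}),
        }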

A systems administrator needs to modify the replication factors of an automated application container from 3 to 5. Which of the following file types should the systems administrator modify on the master controller? A. .yaml B. .txt C. .conf D. .etcd

A. .yaml A systems administrator who needs to modify the replication factors of an automated application container, particularly in a system that uses a master controller, is likely working with an orchestration tool like Kubernetes. In such systems, configuration files are typically written in YAML format. Therefore, to change the replication factor, the administrator would need to modify: A. .yaml YAML files are used to define the desired state of the system, including the number of replicas for a given application container. The administrator would update the .yaml file that defines the deployment or service configuration, changing the replicas field from 3 to 5 and then apply the updated configuration to the master controller.
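As a hedged sketch (the deployment name, labels, and image are hypothetical), a Kubernetes-style Deployment manifest in which the replication factor has been raised from 3 to 5 by editing the replicas field:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app            # hypothetical deployment name
    spec:
      replicas: 5                  # changed from 3 to 5
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: example/app:1.0   # hypothetical container image

Applying the updated file (for example, with kubectl apply -f deployment.yaml) tells the master controller to reconcile the running replica count to the new desired state.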

A systems administrator is helping to develop a disaster recovery solution. The solution must ensure all production capabilities are available within two hours. Which of the following will BEST meet this requirement? A. A hot site B. A warm site C. A backup site D. A cold site

A. A hot site To ensure that all production capabilities are available within two hours as part of a disaster recovery solution, the best option would be: A. A hot site A hot site is a fully operational data center with hardware and software, telecommunications, and staffing necessary to transition data processing with minimal downtime. It is essentially a replica of the existing production environment that can quickly take over operations in the event of a disaster. Given the requirement to have all production capabilities available within two hours, a hot site is best suited to meet this need because it allows for the fastest recovery time after a disaster. Given the stringent two-hour requirement for disaster recovery, a hot site is the most appropriate choice to ensure continuity of production capabilities.

A systems administrator received an email from a cloud provider stating that storage is 80% full on the volume that stores VDI desktops. Which of the following is the MOST efficient way to mitigate the situation? A. Deduplication B. Compression C. Replication D. Storage migration

A. Deduplication To mitigate the situation of a storage volume that is 80% full, especially one storing Virtual Desktop Infrastructure (VDI) desktops, the most efficient way would be: A. Deduplication Deduplication is a process that eliminates redundant copies of data, which is particularly effective in environments like VDI where many desktops may contain identical files or data blocks. By storing only one copy of that data and referencing it wherever it's needed, deduplication can significantly reduce the amount of storage space required. This is highly efficient for VDI environments because of the high degree of commonality among desktop images and user data. Therefore, deduplication is the most efficient way to mitigate the situation by effectively reducing the storage requirement without the need for additional storage hardware or significantly altering the existing infrastructure.
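To illustrate the principle only (this is not any vendor's implementation), a minimal block-level deduplication sketch in Python: identical blocks are detected by hashing their contents, and each unique block is stored once while duplicates become references.

    import hashlib

    store = {}        # digest -> block contents, stored once
    references = []   # one digest per written block (per desktop/file)

    def write_block(block: bytes) -> None:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block      # first copy of this block is stored
        references.append(digest)      # duplicates only add a reference

    # Two identical 4 KiB blocks consume the space of one:
    write_block(b"A" * 4096)
    write_block(b"A" * 4096)
    print(len(store), "unique block(s) for", len(references), "written block(s)")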

A systems administrator is troubleshooting performance issues with a Windows VDI environment. Users have reported that VDI performance is very slow at the start of the workday, but the performance is fine during the rest of the day. Which of the following is the MOST likely cause of the issue? (Choose two.) A. Disk I/O limits B. Affinity rule C. CPU oversubscription D. RAM usage E. Insufficient GPU resources F. License issues

A. Disk I/O limits C. CPU oversubscription The symptoms described — slow VDI performance at the start of the workday, which then normalizes during the rest of the day — suggest an issue that occurs due to simultaneous demand from multiple users. The most likely causes for this issue are: A. Disk I/O limits At the start of the workday, many users are likely logging in and launching applications simultaneously, which can cause a spike in disk I/O as the systems read from and write to the disk heavily. If the storage system has I/O limits or is not equipped to handle such a high level of concurrent activity, it could result in the slow performance that is reported. C. CPU oversubscription CPU oversubscription happens when there are more virtual CPUs allocated to VMs than the physical CPUs available. This is not usually a problem until many or all VMs demand CPU resources simultaneously, such as during startup or login storms at the beginning of the workday. This can lead to contention for CPU resources, slowing down performance. Given the scenario, Disk I/O limits (A) and CPU oversubscription (C) are the most likely causes of the reported VDI performance issues.

During a security incident, an IaaS compute instance is detected to send traffic to a host related to cryptocurrency mining. The security analyst handling the incident determines the scope of the incident is limited to that particular instance. Which of the following should the security analyst do NEXT? A. Isolate the instance from the network into quarantine B. Perform a memory acquisition in the affected instance C. Create a snapshot of the volumes attached to the instance D. Replace the instance with another from the baseline

A. Isolate the instance from the network into quarantine When handling a security incident where an IaaS compute instance is detected sending traffic related to cryptocurrency mining and the scope is limited to that particular instance, the next step should be to: A. Isolate the instance from the network into quarantine Isolating the affected instance is a critical immediate step to prevent the spread of the incident to other parts of the network and to stop the malicious activity. Quarantining the instance allows for a safe environment to perform further investigation without risking further compromise or allowing the attacker to continue using the organization's resources for unauthorized activities such as cryptocurrency mining. Therefore, the first action after detecting and scoping the incident should be to isolate the compromised instance to prevent further unauthorized activities and to safely conduct a forensic investigation.
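As a hedged example in an AWS-style environment (the instance ID and the quarantine security group ID are hypothetical), isolation can be implemented by swapping the instance's security groups for a group that permits no traffic except, perhaps, forensic access:

    # Replace the instance's security groups with a restrictive quarantine group
    aws ec2 modify-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --groups sg-0aaaabbbbccccdddd    # hypothetical quarantine group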

A cloud administrator is evaluating a solution that will limit access to authorized individuals. The solution also needs to ensure the system that connects to the environment meets patching, antivirus and configuration requirements. Which of the following technologies would BEST meet these requirements? A. NAC B. EDR C. IDS D. HIPS

A. NAC The technology that best meets the requirements of limiting access to authorized individuals and ensuring that the connecting systems meet patching, antivirus, and configuration requirements is: A. NAC (Network Access Control) NAC systems allow network administrators to define and implement policies that enforce access controls to network resources. They can check the state of a system's security before it connects to the network, ensuring that it is adequately patched, has antivirus protection, and meets other predefined security configuration requirements. NAC can grant or block access to the network based on compliance with these policies. NAC is the technology designed to check that devices comply with certain security criteria before they're allowed to connect to the network, making it the best fit for the described requirements.

A systems administrator is working on the backup schedule for a critical business application that is running in a private cloud. Which of the following would help the administrator schedule the frequency of the backup job? A. RPO B. MTTR C. SLA D. RTO

A. RPO To schedule the frequency of the backup job for a critical business application running in a private cloud, the most relevant factor to consider is: A. RPO (Recovery Point Objective) RPO refers to the maximum targeted period in which data might be lost from an IT service due to a major incident. It essentially dictates how often data should be backed up. The RPO helps businesses understand the amount of data they can afford to lose in terms of time. For example, if an application has an RPO of 4 hours, backups would need to occur at least every 4 hours to ensure that no more than 4 hours of data is ever lost in the event of a disaster. Therefore, RPO directly influences how frequently backups should be scheduled. Therefore, RPO is the key metric that should guide the scheduling of backups to ensure that data loss is minimized and within acceptable limits for the business.
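For example, if the business sets an RPO of 4 hours, the backup job must run at least every 4 hours. A sketch of such a schedule as a cron entry (the backup script path is hypothetical):

    # Run the backup at minute 0 of every 4th hour (00:00, 04:00, 08:00, ...)
    0 */4 * * * /usr/local/bin/backup-critical-app.sh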

Users of a public website that is hosted on a cloud platform are receiving a message indicating the connection is not secure when landing on the website. The administrator has found that only a single protocol is opened to the service and accessed through the URL https://www.comptiasite.com. Which of the following would MOST likely resolve the issue? A. Renewing the expired certificate B. Updating the web-server software C. Changing the crypto settings on the web server D. Upgrading the users' browser to the latest version

A. Renewing the expired certificate The message indicating the connection is not secure, especially for a website accessed through an HTTPS URL (https://www.comptiasite.com), most likely stems from an issue related to the website's SSL/TLS certificate. Since HTTPS relies on certificates to establish secure connections, an expired certificate can lead to such security warnings in users' browsers. Therefore, the most likely action to resolve this issue would be: A. Renewing the expired certificate Renewing the expired SSL/TLS certificate will ensure that the website can establish a trusted, secure connection with users' browsers. An up-to-date certificate is crucial for the HTTPS protocol to verify the website's identity, encrypt data in transit, and prevent security warnings. Therefore, renewing the expired certificate (A) is the most direct and effective solution to resolve the issue of users receiving a "connection not secure" message when accessing the website, as it directly addresses the root cause of these security warnings.
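To confirm the diagnosis before renewing, the certificate's validity window can be checked from any client, for example with OpenSSL:

    # Print the certificate's notBefore/notAfter dates for the site
    openssl s_client -connect www.comptiasite.com:443 \
        -servername www.comptiasite.com </dev/null 2>/dev/null \
        | openssl x509 -noout -dates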

A company is migrating workloads from on premises to the cloud and would like to establish a connection between the entire data center and the cloud environment. Which of the following VPN configurations would accomplish this task? A. Site-to-site B. Client-to-site C. Point-to-site D. Point-to-point

A. Site-to-site For migrating workloads from an on-premises data center to the cloud and establishing a connection between the entire data center and the cloud environment, the appropriate VPN configuration would be: A. Site-to-site A site-to-site VPN is designed to connect entire networks to each other, meaning it can connect the network of an on-premises data center to a cloud provider's network. This type of VPN allows for all devices within one location to communicate with all devices in the other location, making it ideal for organizations looking to extend their data center capabilities into the cloud seamlessly. Therefore, a site-to-site VPN is the best choice for the company's requirement to connect its on-premises data center to the cloud environment.

After announcing a big sales promotion, an e-commerce company starts to experience a slow response on its platform that is hosted in a public cloud. When checking the resources involved, the systems administrator sees the following consumption:

• webserver01: 98% CPU, 89% memory
• appserver01: below 50% CPU and memory
• appserver02: below 50% CPU and memory
• database01: 50% CPU, 55% memory, 60% network

Considering all VMs were built from the same templates, which of the following actions should the administrator perform FIRST to speed up the response of the e-commerce platform? A. Spin up a new web server B. Spin up a new application server C. Add more memory to the web server D. Spin up a new database server

A. Spin up a new web server Based on the resource consumption details provided: The web server (webserver01) is running at 89% memory usage and 98% CPU usage. The application servers (appserver01 and appserver02) are both below 50% for memory and CPU usage. The database server (database01) is at 55% memory usage and 50% CPU usage, but network usage is relatively high at 60%. Given this information, the most constrained resource is the CPU on the web server, which is running at 98% utilization. This high CPU usage is likely causing the slow response times on the e-commerce platform, as the web server is struggling to process incoming requests efficiently. The most appropriate action to take first, considering these details, would be: A. Spin up a new web server Adding another web server would help distribute the load and alleviate the high CPU usage, likely resulting in improved response times for the e-commerce platform. Since memory usage on the web server is also high (but not as critical as CPU), adding another web server would also help with memory capacity. Therefore, the best first action is to spin up a new web server (A) to distribute the processing load and mitigate the high CPU utilization on the existing web server.

A company is using an IaaS environment. Which of the following licensing models would BEST suit the organization from a financial perspective to implement scaling? A. Subscription B. Volume-based C. Per user D. Socket-based

A. Subscription For a company using an Infrastructure as a Service (IaaS) environment that is looking to implement scaling from a financial perspective, the best licensing model would be: A. Subscription The Subscription model is typically the most suited for organizations operating in an IaaS environment, especially when considering scalability. This model often includes access to software, support, and sometimes even upgrades within the subscription period. It allows for predictable budgeting and can scale up or down based on the organization's needs without significant financial penalty or complexity. The Subscription model offers the flexibility and scalability that aligns well with the dynamic nature of IaaS environments, making it the best choice among the options provided.

A systems administrator is using a configuration management tool to perform maintenance tasks in a system. The tool is leveraging the target system's API to perform these maintenance tasks. After a number of feature and security updates are applied to the target system, the configuration management tool no longer works as expected. Which of the following is the MOST likely cause of the issue? A. The target system's API functionality has been deprecated B. The password for the service account has expired C. The IP addresses of the target system have changed D. The target system has failed after the updates

A. The target system's API functionality has been deprecated The most likely cause of the issue, where a configuration management tool no longer works as expected after features and security updates are applied to the target system, is: A. The target system's API functionality has been deprecated When updates are applied to a system, it's common for certain features, including API endpoints or functionalities, to be changed, updated, or deprecated. If the configuration management tool relies on specific API calls that have been deprecated or significantly altered in the latest update, it may no longer be able to perform its intended maintenance tasks. This is a common issue when systems are not updated in tandem with the tools that rely on them, leading to compatibility issues. Therefore, the most likely cause, given the scenario where the tool stopped working correctly after an update, is that the API functionality it depended on has been deprecated or altered.

A company is considering consolidating a number of physical machines into a virtual infrastructure that will be located at its main office. The company has the following requirements:

• High-performance VMs
• More secure
• Has system independence

Which of the following is the BEST platform for the company to use? A. Type 1 hypervisor B. Type 2 hypervisor C. Software application virtualization D. Remote dedicated hosting

A. Type 1 hypervisor Given the company's requirements for high-performance VMs, increased security, and system independence, the best platform to use would be: A. Type 1 hypervisor Type 1 hypervisors, also known as bare-metal hypervisors, run directly on the host's hardware to control the hardware and to manage guest operating systems. This setup provides several advantages that align with the company's requirements: High-performance VMs: Since Type 1 hypervisors have direct access to physical hardware without going through an underlying operating system, they can offer better performance compared to other virtualization platforms. This makes them ideal for environments where VM performance is critical. More secure: The absence of an underlying operating system reduces the attack surface, making Type 1 hypervisors inherently more secure than Type 2 hypervisors, which run on top of a host operating system. The direct control over hardware also allows for more granular security controls. System independence: Type 1 hypervisors provide strong isolation between VMs, ensuring that each VM operates independently with minimal interference from others. This isolation supports system independence by allowing each VM to run its own operating system and applications as if it were on its own dedicated hardware. Therefore, a Type 1 hypervisor (A) is the best platform for the company to meet its requirements for high-performance VMs, increased security, and system independence in a consolidated virtual infrastructure.

A security audit related to confidentiality controls found the following transactions occurring in the system:

GET http://gateway.securetransaction.com/privileged/api/v1/changeResource?id=123&user=277

Which of the following solutions will solve the audit finding? A. Using a TLS-protected API endpoint B. Implementing a software firewall C. Deploying a HIDS on each system D. Implementing a Layer 4 load balancer

A. Using a TLS-protected API endpoint The audit finding related to confidentiality controls points out a security issue with a transaction that uses an unencrypted HTTP connection to transmit potentially sensitive information in the URL query string. The best solution to address this specific issue is: A. Using a TLS-protected API endpoint Transitioning from HTTP to HTTPS, which is secured by TLS (Transport Layer Security), ensures that the data transmitted between the client and the server is encrypted. This addresses the confidentiality concern by preventing eavesdroppers from being able to see or tamper with the information in transit, including the query parameters in the URL. Using a TLS-protected API endpoint would secure the communication and protect the data's confidentiality. Therefore, using a TLS-protected API endpoint is the most appropriate solution to secure the data transmitted in the URL and solve the confidentiality issue highlighted by the security audit.
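A hedged sketch of the fix on an nginx-style gateway (certificate paths and the upstream address are hypothetical): serve the API only over TLS and redirect any plaintext request.

    server {
        listen 80;
        server_name gateway.securetransaction.com;
        return 301 https://$host$request_uri;   # refuse plaintext; redirect to TLS
    }

    server {
        listen 443 ssl;
        server_name gateway.securetransaction.com;
        ssl_certificate     /etc/ssl/certs/gateway.crt;     # hypothetical paths
        ssl_certificate_key /etc/ssl/private/gateway.key;
        location /privileged/api/ {
            proxy_pass http://127.0.0.1:8080;   # hypothetical backend address
        }
    }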

An organization is conducting a performance test of a public application. The following actions have already been completed:

• The baseline performance has been established.
• A load test has passed.
• A benchmark report has been generated.

Which of the following needs to be done to conclude the performance test? A. Verify the application works well under an unexpected volume of requests. B. Assess the application against vulnerabilities and/or misconfiguration exploitation. C. Test how well the application can resist a DDoS attack. D. Conduct a test with the end users and collect feedback.

A. Verify the application works well under an unexpected volume of requests. To conclude a performance test of a public application, after establishing a baseline performance, passing a load test, and generating a benchmark report, the next step would be: A. Verify the application works well under an unexpected volume of requests. This step, often referred to as a stress test, is important to understand how the application behaves under extreme conditions, which could include sudden spikes in traffic or requests beyond the normal operational capacity. It helps to identify the breaking points and the maximum capacity the application can handle before it fails or its performance degrades unacceptably. Therefore, to complete the performance testing cycle, verifying the application's performance under unexpected and extreme volumes of requests would be the most appropriate next step.

A security team is conducting an audit of the security group configurations for the Linux servers that are hosted in a public IaaS. The team identifies the following rule as a potential issue: an inbound rule that allows SSH (TCP port 22) from any source (0.0.0.0/0). A cloud administrator, who is working remotely, logs in to the cloud management console and modifies the rule to set the source to "My IP." Shortly after deploying the rule, an internal developer receives the following error message when attempting to log in to the server using SSH: Network error: Connection timed out. Which of the following should the administrator do to restore the developer's access? A. Modify the outbound rule to allow the company's external IP address as a source B. Add an inbound rule to use the IP address for the company's main office as a source C. Modify the inbound rule to allow the company's external IP address as a source D. Delete the inbound rule to allow the company's external IP address as a source

B. Add an inbound rule to use the IP address for the company's main office as a source The situation described is that a security group rule allowing SSH access (port 22) to Linux servers in a public IaaS environment was initially set to allow all IP addresses (0.0.0.0/0), which is not secure. The cloud administrator changed this rule to allow only "My IP", which would typically restrict SSH access to the IP address of the administrator's current location. As a result, an internal developer is unable to connect due to their IP address not being included in the allowed list. The best option to allow both the developer and the administrator to access the server from their respective locations is: B. Add an inbound rule to use the IP address for the company's main office as a source This allows all traffic from the company's main office IP address, which presumably includes the internal developer, while maintaining the administrator's ability to access the server. It's important to note that this should be the public-facing IP address of the company's main office. Therefore, the best solution is to add a specific inbound rule to allow SSH connections from the IP address of the company's main office, and potentially keep the rule allowing the administrator's IP address if the administrator is not connecting through the company's network. This maintains security while providing access to both the administrator and the developer.
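A hedged example of option B using an AWS-style CLI (the security group ID and the office address range are hypothetical; 203.0.113.0/24 is a documentation range):

    # Allow SSH from the company's main-office public address range
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 \
        --cidr 203.0.113.0/24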

A cloud administrator is configuring several security appliances hosted in the private IaaS environment to forward the logs to a central log aggregation solution using syslog. Which of the following firewall rules should the administrator add to allow the web servers to connect to the central log collector? A. Allow UDP 161 outbound from the web servers to the log collector B. Allow TCP 514 outbound from the web servers to the log collector C. Allow UDP 161 inbound from the log collector to the web servers D. Allow TCP 514 inbound from the log collector to the web servers

B. Allow TCP 514 outbound from the web servers to the log collector Syslog traditionally uses UDP port 514, but it can also run over TCP port 514 when reliable, connection-oriented delivery of logs is required. Because the logs flow from the web servers to the central log collector, the traffic is outbound from the perspective of the web servers, so the firewall rule must permit outbound connections on the syslog port. Port 161 is used by SNMP, not syslog, and the inbound rules describe traffic in the wrong direction. Among the options given, allowing TCP 514 outbound from the web servers to the log collector (B) is the correct rule, with TCP preferred over UDP when delivery of every log message must be guaranteed.
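On a Linux host running rsyslog, forwarding over TCP versus UDP differs by a single character in the classic configuration syntax; a sketch, assuming a hypothetical collector hostname:

    # /etc/rsyslog.d/forward.conf
    # '@@' forwards all facilities/severities over TCP 514 (reliable delivery);
    # a single '@' would forward over traditional UDP 514 instead.
    *.* @@log-collector.example.com:514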

A cloud administrator is responsible for managing a VDI environment that provides end users with access to limited applications. Which of the following should the administrator make changes to when a new application needs to be provided? A. Application security policy B. Application whitelisting policy C. Application hardening policy D. Application testing policy

B. Application whitelisting policy When a new application needs to be provided in a VDI (Virtual Desktop Infrastructure) environment that offers end users access to limited applications, the administrator should make changes to the: B. Application whitelisting policy Application whitelisting policy is a security measure that allows only specified applications to run on the system. In a controlled VDI environment, where users have access to a limited set of applications, whitelisting ensures that only authorized applications are available to the end users. When introducing a new application into such an environment, updating the whitelisting policy is crucial to allow the new application to be accessed and run by the users. This involves adding the new application to the list of approved software within the policy settings. Therefore, to provide a new application within a VDI environment, updating the application whitelisting policy is the most directly relevant action to ensure that the new application can be accessed and used by the end users.

A cloud administrator needs to coordinate and automate the management of a company's secrets and keys for all its cloud services with minimal effort and low cost. Which of the following is the BEST option to achieve the goal? A. Implement database as a service B. Configure Key Vault C. Use password as a service D. Implement KeePass

B. Configure Key Vault To coordinate and automate the management of a company's secrets and keys for all its cloud services with minimal effort and low cost, the best option would be: B. Configure Key Vault A Key Vault service, offered by cloud providers (like Azure Key Vault, AWS Secrets Manager, or Google Cloud Secret Manager), is designed to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets. It provides a centralized way to manage these secrets across various services with automation capabilities, ensuring that applications and services access only the secrets they need. It also offers features like automatic key rotation, auditing, and access policies, which can significantly simplify secret management in a cloud environment. Therefore, configuring a Key Vault service is the most effective and efficient option for automating and centralizing the management of secrets and keys in a cloud environment.
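As a hedged example using the Azure CLI (the vault and secret names are hypothetical), storing a secret centrally and retrieving it later:

    # Store a secret in the vault, then read it back from an application host
    az keyvault secret set --vault-name example-vault \
        --name db-password --value 'S3cr3t!'
    az keyvault secret show --vault-name example-vault \
        --name db-password --query value -o tsv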

A systems administrator is diagnosing performance issues on a web application. The web application sends thousands of extremely complex SQL queries to a database server, which has trouble retrieving the information in time. The administrator checks the database server and notes the following resource utilization:

• CPU: 64%
• RAM: 97%
• Network throughput: 384,100Kbps
• Disk throughput: 382,700Kbps

The administrator also looks at the storage for the database server and notices it is consistently near its IOPS limit. Which of the following will BEST resolve these performance issues? A. Increase CPU resources on the database server. B. Increase caching on the database server. C. Put the storage and the database on the same VLAN. D. Enable compression on storage traffic. E. Enable deduplication on the storage appliance.

B. Increase caching on the database server. Given the symptoms described, the best resolution to the performance issues being experienced by the web application, due to the database server struggling with complex SQL queries, is: B. Increase caching on the database server. The resource utilization details show that RAM is almost fully utilized at 97%, while CPU usage is moderate at 64%, and both network and disk throughputs are high. The storage is also consistently near its IOPS (I/O operations per second) limit, which points to disk I/O as the bottleneck. Increasing caching helps by keeping frequently accessed data in faster-access memory, reducing the need to fetch the same data from disk storage repeatedly. This can significantly speed up the retrieval of information for complex queries by cutting the number of disk I/O operations. Therefore, increasing caching on the database server is the most effective way to address the performance issues, as it reduces the reliance on disk I/O and makes better use of memory for faster data access.

A cloud administrator has created a new asynchronous workflow to deploy VMs to the cloud in bulk. When the workflow is tested for a single VM, it completes successfully. However, if the workflow is used to create 50 VMs at once, the job fails. Which of the following is the MOST likely cause of the issue? (Choose two.) A. Incorrect permissions B. Insufficient storage C. Billing issues with the cloud provider D. No connectivity to the public cloud E. Expired API token F. Disabled autoscaling

B. Insufficient storage C. Billing issues with the cloud provider When a workflow designed to deploy VMs to the cloud fails during bulk operations but succeeds for a single VM, the most likely causes of the issue, considering the options provided, are: B. Insufficient storage - Deploying a large number of VMs simultaneously can quickly exhaust available storage resources, especially if each VM is allocated a significant amount of disk space. If there isn't enough storage available to accommodate all the VMs being deployed in bulk, the job will fail. C. Billing issues with the cloud provider - Many cloud providers have billing limits or quotas that, when reached, prevent further resources from being provisioned until the issue is resolved. If deploying a large number of VMs at once leads to hitting a billing limit or quota, the cloud provider may block the creation of additional VMs, causing the job to fail. Therefore, insufficient storage and billing issues with the cloud provider are the most plausible reasons for the failure of the workflow when deploying VMs in bulk, as these factors can both become significant when scaling operations from a single instance to many.

A systems administrator has verified that a physical switchport that is connected to a virtualization host is using all available bandwidth. Which of the following would BEST address this issue? A. Port mirroring B. Link aggregation C. Spanning tree D. Microsegmentation

B. Link aggregation To best address the issue of a physical switchport connected to a virtualization host using all available bandwidth, the most effective solution would be: B. Link aggregation Here's why link aggregation is the best choice: Link aggregation allows you to combine multiple network connections in parallel to increase throughput and provide redundancy in case one of the links fails. This approach can significantly increase the bandwidth available between the virtualization host and the switch, addressing the bottleneck caused by the high bandwidth usage. Link aggregation is often supported by both physical switches and virtualization hosts, making it a practical solution to improve network performance and reliability. Therefore, link aggregation is the best option to address the issue of a switchport using all available bandwidth by effectively increasing the total bandwidth available to the virtualization host.
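A hedged Linux-side sketch using iproute2 (interface names are hypothetical, and the corresponding switch ports must be configured for LACP as well):

    # Create an 802.3ad (LACP) bond and enslave two physical NICs
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up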

A cloud solutions architect has received guidance to migrate an application from on premises to a public cloud. Which of the following requirements will help predict the operational expenditures in the cloud? A. Average resource consumption B. Maximum resource consumption C. Minimum resource consumption D. Actual hardware configuration

B. Maximum resource consumption To predict the operational expenditures (OpEx) in the cloud accurately, it's important to understand how the application will utilize cloud resources over time. The most relevant requirement among the options provided would be: B. Maximum resource consumption Knowing the maximum resource consumption of an application is crucial for accurately predicting operational expenditures in the cloud. This information helps in planning for peak usage times and ensures that the cloud environment is scaled appropriately to handle the highest expected load without incurring unnecessary costs during off-peak times. It also allows for more precise budgeting and cost control, as it provides a ceiling for potential resource usage costs. Therefore, maximum resource consumption is the most critical piece of information for predicting operational expenditures in a cloud migration scenario. It ensures that the architecture is designed to handle peak loads efficiently while avoiding over-provisioning during less busy periods.

A piece of software applies licensing fees on a socket-based model. Which of the following is the MOST important consideration when attempting to calculate the licensing costs for this software? A. The amount of memory in the server B. The number of CPUs in the server C. The type of cloud in which the software is deployed D. The number of customers who will be using the software

B. The number of CPUs in the server When calculating licensing costs for software that applies fees on a socket-based model, the most important consideration is: B. The number of CPUs in the server Here's why this is the most crucial factor: Socket-based licensing typically refers to the physical CPU sockets on a server motherboard. A "socket" is a physical connector that houses a CPU. Thus, the licensing cost directly correlates with the number of CPU sockets in a server, not the number of cores within each CPU or other factors like memory, cloud type, or user count. It's essential to know how many sockets are present or will be utilized for the software to accurately calculate the licensing fees. Therefore, to calculate the licensing costs for this socket-based licensed software, you must know the number of CPU sockets in the servers where the software will be installed or run.
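For example, assuming a hypothetical fee of $3,000 per socket, a dual-socket server would cost 2 × $3,000 = $6,000 per licensing period, regardless of how many cores each CPU contains, how much memory the server has, or how many users run the software.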

A systems administrator needs to implement a service to protect a web application from external attacks. The administrator must have session-based granular control of all HTTP traffic. Which of the following should the administrator configure? A. IDS B. WAF C. DLP D. NAC

B. WAF To protect a web application from external attacks with session-based granular control of all HTTP traffic, the systems administrator should configure: B. WAF (Web Application Firewall) A WAF is specifically designed to monitor, filter, and block HTTP/HTTPS traffic to and from a web application. It provides protection against web application threats and attacks, such as SQL injection, cross-site scripting (XSS), and other vulnerabilities. A WAF operates at the application layer and can enforce policies based on HTTP sessions, allowing for granular control over individual user sessions and the ability to inspect and manage traffic in real time. Therefore, configuring a WAF is the best option for achieving session-based granular control of HTTP traffic to protect a web application from external attacks.

A large pharmaceutical company needs to ensure it is in compliance with the following requirements:

• An application must run on its own virtual machine.
• The hardware the application is hosted on does not change.

Which of the following will BEST ensure compliance? A. Containers B. A firewall C. Affinity rules D. Load balancers

C. Affinity rules To ensure compliance with the requirements that an application must run on its own virtual machine and the hardware the application is hosted on does not change, the best option would be: C. Affinity rules Affinity rules are settings available in virtualized environments that control the placement of virtual machines on the host hardware. They ensure that specific VMs run on designated physical hosts. Affinity rules can be used to keep a virtual machine tied to a particular host, which would meet the pharmaceutical company's requirements of keeping the application on its own VM and ensuring the underlying hardware does not change. Affinity rules are the best match for the requirements as they are specifically designed to manage VM placement on physical servers in a virtualized environment.

A cloud administrator is assigned to establish a connection between the on-premises data center and the new CSP infrastructure. The connection between the two locations must be secure at all times and provide service for all users inside the organization. Low latency is also required to improve performance during data transfer operations. Which of the following would BEST meet these requirements? A. A VPC peering configuration B. An IPSec tunnel C. An MPLS connection D. A point-to-site VPN

C. An MPLS connection For establishing a secure connection between an on-premises data center and new Cloud Service Provider (CSP) infrastructure that requires low latency and service for all users within the organization, the best option would be: C. An MPLS connection Multiprotocol Label Switching (MPLS) is a data-carrying technique that directs data from one node to the next based on short path labels rather than long network addresses. MPLS can be used to create virtual private networks (VPNs) and allows for the establishment of highly efficient, secure, and direct paths between different network locations. An MPLS connection between an on-premises data center and a CSP infrastructure would provide the required security, as data can be encapsulated within the MPLS network, and also ensure low latency due to the optimized and direct paths MPLS uses. This is particularly suitable for organizations needing high-performance, reliable connections for data transfer operations. Therefore, an MPLS connection (C) best meets the requirements for security, low latency, and overall performance for connecting an on-premises data center with CSP infrastructure, ensuring efficient and secure data transfer operations for the entire organization.

A systems administrator wants to restrict access to a set of sensitive files to a specific group of users. Which of the following will achieve the objective? A. Add audit rules on the server B. Configure data loss prevention in the environment C. Change the permissions and ownership of the files D. Implement a HIPS solution on the host

C. Change the permissions and ownership of the files To restrict access to a set of sensitive files to a specific group of users, the most direct and effective method is: C. Change the permissions and ownership of the files. Here's why this option is the most suitable: Changing the permissions and ownership of the files directly controls who can read, write, or execute the files based on user and group ownership. This method is precise and effective for ensuring that only a specific group of users has access to the sensitive files. You can set the owner of the files to a specific user and the group ownership to a specific group. Then, by setting the appropriate file permissions, you can control access at a very granular level. For example, you can set the files to be readable and writable by the owner and the group, but not by others. Therefore, changing the permissions and ownership of the files is the most straightforward and effective way to achieve the objective of restricting access to a set of sensitive files to a specific group of users.
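A hedged Linux example (the group, user, and file names are hypothetical): create a group for the authorized users, hand the files to that group, and strip access from everyone else.

    groupadd finance                            # group of authorized users
    usermod -aG finance alice                   # add an authorized user
    chown root:finance /srv/reports/salaries.xlsx
    chmod 640 /srv/reports/salaries.xlsx        # owner rw, group r, others none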

A company wants to utilize its private cloud for a new application. The private cloud resources can meet 75% of the application's resource requirements. Which of the following scaling techniques can the cloud administrator implement to accommodate 100% of the application's requirements? A. Horizontal B. Vertical C. Cloud bursting D. Autoscaling

C. Cloud bursting To accommodate 100% of the application's requirements when the private cloud resources can only meet 75% of them, the cloud administrator should implement: C. Cloud bursting Cloud bursting is a configuration set up between a private cloud and a public cloud in which the application runs in the private cloud until there is a spike in the demand for computing capacity, at which point the application "bursts" to the public cloud to tap into additional computing resources. This ensures that the application can handle peaks in demand beyond what the private cloud can accommodate. Therefore, cloud bursting is the best solution because it allows the company to use additional resources from the public cloud when the private cloud reaches its capacity limits.

A systems administrator is responding to an outage in a cloud environment that was caused by a network-based flooding attack. Which of the following should the administrator configure to mitigate the attack? A. NIPS B. Network overlay using GENEVE C. DDoS protection D. DoH

C. DDoS protection To mitigate an outage in a cloud environment that was caused by a network-based flooding attack, the systems administrator should configure: C. DDoS protection DDoS (Distributed Denial of Service) protection services are specifically designed to detect, mitigate, and protect against flooding attacks that aim to overwhelm network resources, making them unavailable to legitimate users. DDoS protection mechanisms can include rate limiting, traffic analysis, anomaly detection, and filtering to block malicious traffic while allowing legitimate traffic to pass through. Therefore, configuring DDoS protection is the most appropriate and direct action to mitigate the impact of a network-based flooding attack in a cloud environment.

A product-based company wants to transition to a method that provides the capability to enhance the product seamlessly and keep the development iterations to a shorter time frame. Which of the following would BEST meet these requirements? A. Implement a secret management solution B. Create autoscaling capabilities C. Develop CI/CD tools D. Deploy a CMDB tool

C. Develop CI/CD tools For a product-based company looking to enhance their product seamlessly and keep the development iterations to a shorter time frame, the best approach would be: C. Develop CI/CD tools CI/CD stands for Continuous Integration/Continuous Deployment or Continuous Delivery. This method automates the software delivery process. The CI part involves automatically testing code changes from multiple contributors to the main repository multiple times a day. The CD part automates the delivery of applications to selected infrastructure environments. This approach helps in increasing the speed of development, improving developer productivity, and delivering a high-quality product by automating the testing and deployment processes. Therefore, developing CI/CD tools is the best option to meet the company's requirements for enhancing the product seamlessly and keeping development iterations to shorter time frames.
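A hedged sketch of a minimal pipeline in the GitHub Actions style (the job name and the build/test commands are hypothetical): every push is built and tested automatically, which keeps iterations short and catches regressions early.

    name: ci
    on: [push]
    jobs:
      build-and-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make build    # hypothetical build step
          - run: make test     # hypothetical test step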

A systems administrator is troubleshooting performance issues with a VDI environment. The administrator determines the issue is GPU related and then increases the frame buffer on the virtual machines. Testing confirms the issue is solved, and everything is now working correctly. Which of the following should the administrator do NEXT? A. Consult corporate policies to ensure the fix is allowed B. Conduct internal and external research based on the symptoms C. Document the solution and place it in a shared knowledge base D. Establish a plan of action to resolve the issue

C. Document the solution and place it in a shared knowledge base After troubleshooting the issue and confirming that increasing the frame buffer on the virtual machines has solved the GPU-related performance issues in the VDI environment, the next step the systems administrator should take is: C. Document the solution and place it in a shared knowledge base Documentation is a critical part of the troubleshooting process. By documenting the problem and the solution that was effective in resolving it, the administrator ensures that knowledge is retained within the organization. This makes it easier to solve similar problems in the future and helps other team members understand the resolution steps if they encounter the same issue. Good documentation in a shared knowledge base promotes efficient knowledge sharing and is considered a best practice in IT operations.

A company wants to move its environment from on premises to the cloud without vendor lock-in. Which of the following would BEST meet this requirement? A. DBaaS B. SaaS C. IaaS D. PaaS

C. IaaS To move an environment from on-premises to the cloud without vendor lock-in, the best option would be: C. IaaS (Infrastructure as a Service) IaaS provides virtualized computing resources over the internet and offers the most flexibility in terms of the underlying technology stack. With IaaS, a company can manage its own applications, data, middleware, and OS, while the cloud provider typically manages virtualization, servers, hard drives, storage, and networking. Because IaaS provides the infrastructure but doesn't force the company into using a particular middleware, development environment, or database system, it reduces vendor lock-in compared to the other models. Therefore, IaaS (C) is the best option for a company that wants to move its environment to the cloud while maintaining flexibility and avoiding vendor lock-in.

A cloud architect is reviewing four deployment options for a new application that will be hosted by a public cloud provider. The application must meet an SLA that allows for no more than five hours of downtime annually. The cloud architect is reviewing the SLAs for the services each option will use: Based on the information above, which of the following minimally complies with the SLA requirements? A. Option A B. Option B C. Option C D. Option D

C. Option C Five hours of downtime in a year of 8,760 hours corresponds to an availability of approximately 1 - 5/8760 ≈ 99.943%. Option C has a total uptime closest to the required 99.943% without being lower, and is not as high (and therefore likely not as costly) as Option D. Therefore, Option C minimally complies with the SLA requirement of allowing no more than five hours of downtime annually.
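When a deployment option chains several services, the end-to-end availability is the product of the per-service availabilities. A minimal Python sketch (the per-service SLA figures are hypothetical):

    HOURS_PER_YEAR = 8760
    required = 1 - 5 / HOURS_PER_YEAR           # ≈ 0.99943 (99.943% uptime)

    services = [0.9999, 0.9999, 0.9998]         # hypothetical per-service SLAs
    composite = 1.0
    for sla in services:
        composite *= sla                         # chained services multiply

    downtime_hours = (1 - composite) * HOURS_PER_YEAR
    print(f"required={required:.5f} composite={composite:.5f} "
          f"downtime={downtime_hours:.1f}h")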

A systems administrator is configuring a storage system for maximum performance and redundancy. Which of the following storage technologies should the administrator use to achieve this? A. RAID 5 B. RAID 6 C. RAID 10 D. RAID 50

C. RAID 10 For maximum performance and redundancy in a storage system, the administrator should use: C. RAID 10 RAID 10, also known as RAID 1+0, combines disk mirroring and disk striping, which means it mirrors each disk in a striped set. This configuration provides high redundancy and high performance, as read and write operations can be performed in parallel across the striped disks. In the event of a disk failure, the mirrored set ensures that the data is still available, and the impact on performance during rebuilds is less than with other RAID levels. RAID 10 is often the choice for applications that require both high performance and high availability, such as database servers and high-traffic web servers, especially when using traditional hard disk drives (HDDs). If using solid-state drives (SSDs), the performance differences between these RAID levels might be less pronounced, but RAID 10 would still provide superior redundancy with the added benefit of simpler and faster rebuilds in the event of a disk failure.

A company is performing a DR drill and is looking to validate its documentation. Which of the following metrics will determine the service recovery duration? A. MTTF B. SLA C. RTO D. RPO

C. RTO The metric that will determine the service recovery duration during a Disaster Recovery (DR) drill is: C. RTO (Recovery Time Objective) RTO is a key metric in disaster recovery and business continuity planning. It refers to the maximum acceptable length of time that a service, application, or function can be offline after a disaster occurs. It essentially defines the duration within which a business process must be restored after a disaster in order to avoid unacceptable consequences associated with a break in business continuity. For validating the actual duration of recovery in a DR drill, RTO is the appropriate metric.

A cloud security engineer needs to ensure authentication to the cloud provider console is secure. Which of the following would BEST achieve this objective? A. Require the user's source IP to be an RFC1918 address B. Require the password to contain uppercase letters, lowercase letters, numbers, and symbols C. Require the use of a password and a physical token. D. Require the password to be ten characters long

C. Require the use of a password and a physical token. To ensure that authentication to the cloud provider console is secure, the best approach would be: C. Require the use of a password and a physical token. This option is referring to Two-Factor Authentication (2FA), which significantly enhances security by requiring two forms of verification before granting access: something you know (a password) and something you have (a physical token). This method is much more secure than relying on password strength or length alone, as it adds an additional layer of security that would be difficult for unauthorized users to bypass even if they manage to obtain the password. Therefore, requiring both a password and a physical token for authentication (2FA) is the best way to ensure secure access to the cloud provider console.

Audit and system logs are being forwarded to a syslog solution. An administrator observes that two application servers have not generated any logs for a period of three days, while others continue to send logs normally. Which of the following BEST explains what is occurring? A. There is a configuration failure in the syslog solution B. The application servers were migrated to the cloud as IaaS instances C. The application administrators have not performed any activity in those servers D. There is a local firewall policy restriction on the syslog server

C. The application administrators have not performed any activity in those servers Audit and system logs are generated by activity on a server, so if no administrative or system activity has taken place on those two servers for three days, there would simply be no log events to forward. A configuration failure in the syslog solution or a firewall restriction on the syslog server would be expected to affect log collection from all servers rather than just two, and a migration to IaaS instances would not by itself stop log generation. Therefore, the absence of activity on those specific servers (C) best explains the observation.

An administrator manages a file server that has a lot of users accessing and creating many files. As a result, the storage consumption is growing quickly. Which of the following would BEST control storage usage? A. Compression B. File permissions C. User quotas D. Access policies

C. User quotas To best control storage usage on a file server with many users creating and accessing files, leading to rapidly growing storage consumption, the most effective approach would be: C. User quotas User quotas limit the amount of storage space a user or group can use on the file server. By implementing quotas, an administrator can control and manage the growth of storage consumption by preventing any single user or group from using an excessive amount of disk space. Quotas can be set based on individual user needs and can help in preventing a situation where unchecked storage consumption impacts the availability of disk space for others. Therefore, implementing user quotas is the best way to directly control and manage storage usage on a file server with a lot of users.

An administrator recently provisioned a file server in the cloud. Based on financial considerations, the administrator has a limited amount of disk space. Which of the following will help control the amount of space that is being used? A. Thick provisioning B. Software-defined storage C. User quotas D. Network file system

C. User quotas To control the amount of disk space being used, especially when dealing with a limited amount of provisioned storage, the best approach would be: C. User quotas User quotas are a feature that can be enabled on file systems to restrict the amount of space that can be used by each user or group. By setting quotas, an administrator can control and limit the storage usage, ensuring that no single user or department consumes more than their fair share of the available space. This is an effective way to manage limited storage resources and prevent any user from inadvertently or intentionally using an excessive amount of disk space. Therefore, implementing user quotas (C) is the most appropriate and direct solution to control the amount of disk space used when dealing with a limited storage budget.
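A hedged Linux example (the user name, limits, and mount point are hypothetical), assuming disk quotas are already enabled on the filesystem:

    # Give user 'alice' a ~9 GB soft / ~10 GB hard block limit on /srv/files
    setquota -u alice 9000000 10000000 0 0 /srv/files
    repquota /srv/files    # report current usage against the quotas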

A cloud administrator has deployed a website and needs to improve the site security to meet requirements. The website architecture is designed to have a DBaaS in the back end and autoscaling instances in the front end using a load balancer to distribute the request. Which of the following will the cloud administrator MOST likely use? A. An API gateway B. An IPS/IDS C. A reverse proxy D. A WAF

D. A WAF For a website architecture that includes a Database as a Service (DBaaS) backend and autoscaling instances at the front end with a load balancer, the most likely security improvement the cloud administrator would use is: D. A WAF (Web Application Firewall) A WAF is specifically designed to monitor, filter, and block malicious HTTP/HTTPS traffic to and from a web application. It helps protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. A WAF operates at the application layer and can protect the web application from various attacks such as cross-site scripting (XSS), SQL injection, and other vulnerabilities that are common in web applications. Given the described architecture, a WAF would be an ideal solution to improve site security by inspecting incoming traffic for malicious activities and blocking potentially harmful requests before they reach the web servers. Thus, a WAF is the most appropriate tool for improving the security of a website with the given architecture.

A company has two identical environments (X and Y) running its core business application. As part of an upgrade, the X environment is patched/upgraded and tested while the Y environment is still serving the consumer workloads. Upon successful testing of the X environment, all workload is sent to this environment, and the Y environment is then upgraded before both environments start to manage the workloads. Which of the following upgrade methods is being used? A. Active-passive B. Canary C. Development/production D. Blue-green

D. Blue-green The upgrade method described in the scenario is known as: D. Blue-green In the blue-green deployment strategy, there are two identical environments: one (Blue) serves the live production traffic while the other (Green) is updated and tested. Once the new version in the Green environment is fully tested and ready to go live, the traffic is switched from Blue to Green. This allows for minimal downtime and quick rollback if necessary, since the original (Blue) environment can be kept as a backup until the new (Green) environment is proven to be stable. After the switch, the old environment can then be updated to serve as the next staging area for future releases, continuing the cycle. Therefore, the method being used as described in the scenario is best identified as blue-green deployment.
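
The mechanics can be modeled in a few lines: two named environments and a routing pointer that is flipped only after the idle environment passes validation. This is a conceptual sketch; the environment names and health check are stand-ins for real infrastructure and smoke tests:

```python
# Minimal model of a blue-green cutover: the idle environment is upgraded and
# tested, then the router pointer is flipped so it takes all the traffic.
class Router:
    def __init__(self) -> None:
        self.live = "blue"    # currently serving production traffic
        self.idle = "green"   # being patched/upgraded and tested

    def cut_over(self, health_check) -> None:
        if not health_check(self.idle):
            raise RuntimeError(f"{self.idle} failed validation; staying on {self.live}")
        self.live, self.idle = self.idle, self.live  # single pointer flip

router = Router()
router.cut_over(health_check=lambda env: True)  # stand-in for real smoke tests
print(router.live)   # green now serves all workloads
print(router.idle)   # blue is upgraded next, then the cycle repeats
```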

A disaster situation has occurred, and the entire team needs to be informed about the situation. Which of the following documents will help the administrator find the details of the relevant team members for escalation? A. Chain of custody B. Root cause analysis C. Playbook D. Call tree

D. Call tree The document that will help the administrator find the details of the relevant team members for escalation in a disaster situation is: D. Call tree A call tree is a predefined communication model that is used to notify team members of an incident. It typically includes contact information for all relevant personnel and outlines a structured sequence of who should contact whom in the event of an emergency or disaster. This ensures that information is disseminated quickly and effectively to all necessary parties.
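
Structurally, a call tree is simply a tree of contacts in which each person phones the people listed under them, so notification fans out in parallel branches. A minimal sketch with placeholder names and extensions:

```python
# Illustrative call tree: each node lists who that person notifies next.
CALL_TREE = {
    "name": "Incident Commander", "phone": "x1000",
    "notifies": [
        {"name": "Ops Lead", "phone": "x2000", "notifies": [
            {"name": "On-call SysAdmin", "phone": "x2100", "notifies": []},
        ]},
        {"name": "Comms Lead", "phone": "x3000", "notifies": []},
    ],
}

def cascade(node: dict, depth: int = 0) -> None:
    """Print the notification order, indented by level in the tree."""
    print("  " * depth + f"call {node['name']} at {node['phone']}")
    for child in node["notifies"]:
        cascade(child, depth + 1)

cascade(CALL_TREE)
```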

A systems administrator is planning a penetration test for company resources that are hosted in a public cloud. Which of the following must the systems administrator do FIRST? A. Consult the law for the country where the company's headquarters is located B. Consult the regulatory requirements for the company's industry C. Consult the law for the country where the cloud services provider is located D. Consult the cloud services provider's policies and guidelines

D. Consult the cloud services provider's policies and guidelines Before conducting a penetration test on company resources hosted in a public cloud, the first step a systems administrator must take is: D. Consult the cloud services provider's policies and guidelines Cloud service providers often have specific policies and guidelines regarding penetration testing, as such activities can affect not only the customer's environment but also the provider's infrastructure and potentially other tenants. It is essential to gain permission and understand the rules of engagement laid out by the cloud provider to avoid legal issues or violations of the service agreement. While the laws of the country where the company is headquartered and the regulatory requirements of the industry (options A and B) are important, they come into play after ensuring compliance with the cloud provider's policies, which is the immediate concern. The laws of the country where the cloud provider is located (option C) may also be relevant, but the first step is always to check with the provider directly, because their policies already account for their legal obligations in their own jurisdiction.

A cloud administrator needs to control the connections between a group of web servers and database servers as part of the financial application security review. Which of the following would be the BEST way to achieve this objective? A. Create a directory security group B. Create a resource group C. Create separate VLANs D. Create a network security group

D. Create a network security group The best way to control the connections between a group of web servers and database servers in a cloud infrastructure is: D. Create a network security group Network security groups (NSGs) control inbound and outbound traffic to network interfaces (NICs), VMs, and subnets. In Azure, for example, an NSG contains security rules that allow or deny traffic to and from resources connected to a virtual network (VNet). NSGs can be used to segregate traffic between the web servers and database servers, ensuring that only authorized traffic flows between them, which is essential for maintaining the security of a financial application.
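
As a hedged example of the same idea on AWS (where the closest analogue is a security group), the following boto3 sketch adds a rule letting only the web tier reach the database tier on port 3306. The group IDs are placeholders, and AWS credentials are assumed to be configured:

```python
import boto3  # assumes AWS credentials/region are already configured

ec2 = boto3.client("ec2")

# Allow MySQL traffic to the DB tier only when it originates from the web
# tier's security group. Referencing a group instead of an IP range means the
# rule automatically tracks the autoscaled web instances.
ec2.authorize_security_group_ingress(
    GroupId="sg-db-placeholder",  # DB servers' security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-web-placeholder"}],
    }],
)
```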

A cloud administrator is troubleshooting a highly available web application running within three containers behind a Layer 7 load balancer with a WAF inspecting all traffic. The application frequently asks the users to log in again even when the session timeout has not been reached. Which of the following should the cloud administrator configure to solve this issue? A. Firewall outbound rules B. Firewall inbound rules C. Load balancer certificates D. Load balancer stickiness E. WAF transaction throttling

D. Load balancer stickiness To solve the issue of a web application frequently asking users to log in again even when the session timeout has not been reached, the cloud administrator should configure: D. Load balancer stickiness Session stickiness, also known as session affinity, directs all requests from a particular client to the same server or container for the duration of a session. Without stickiness, a Layer 7 load balancer might route each of a user's HTTP requests to a different container, which disrupts session continuity and causes the application to ask for login credentials again, especially if sessions are stored locally within each container. Enabling stickiness ensures that all of a user's requests during a session are handled by the same container, avoiding the unnecessary re-logins described in the scenario.
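
As one concrete, hedged example, with an AWS Application Load Balancer this is a target group attribute change made via boto3; the target group ARN below is a placeholder, and credentials are assumed to be configured:

```python
import boto3  # assumes AWS credentials/region are already configured

elbv2 = boto3.client("elbv2")

# Enable cookie-based session stickiness so each client keeps hitting the
# same target (container) for the duration of its session.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/web/placeholder",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```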

A technician is trying to delete six decommissioned VMs. Four VMs were deleted without issue. However, two of the VMs cannot be deleted due to an error. Which of the following would MOST likely enable the technician to delete the VMs? A. Remove the snapshots B. Remove the VMs' IP addresses C. Remove the VMs from the resource group D. Remove the lock from the two VMs

D. Remove the lock from the two VMs The error preventing the deletion of the two virtual machines (VMs) could have several causes, but among the options provided, the most likely solution is: D. Remove the lock from the two VMs Resource locks are a common feature of cloud and virtualization platforms, designed to prevent accidental deletion or modification of critical resources. If a lock is applied to a VM, the VM cannot be deleted or significantly altered until the lock is removed. This security and governance feature helps ensure important VMs are not inadvertently deleted or changed, and its presence would directly produce an error when attempting to delete a VM, fitting the scenario described. Therefore, removing the lock from the two VMs (D) is the most direct action to resolve the issue and allow the technician to delete the decommissioned VMs.
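
The behavior can be modeled with a toy lock check: while a delete lock is set, the platform rejects the request, and clearing the lock lets the same request succeed. The class and names below are purely illustrative (the lock level name mirrors Azure's CanNotDelete):

```python
# Toy model of why the two deletions fail: a delete lock must be cleared
# before the platform will honor the delete request.
class VM:
    def __init__(self, name: str, locked: bool = False) -> None:
        self.name = name
        self.locked = locked

    def delete(self) -> None:
        if self.locked:
            raise PermissionError(f"{self.name}: CanNotDelete lock is set")
        print(f"{self.name} deleted")

vm = VM("vm-legacy-01", locked=True)
try:
    vm.delete()        # fails, mirroring the technician's error
except PermissionError as err:
    print(err)

vm.locked = False      # the administrator removes the lock first
vm.delete()            # now succeeds
```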

A systems administrator is writing a script for provisioning nodes in the environment. Which of the following would be BEST for the administrator to use to provision the authentication credentials to the script?

The best practice for provisioning authentication credentials to a script in a secure and manageable way is: B. password=$env_password This approach uses an environment variable to store the password. Environment variables can be set securely on the system or in a CI/CD (Continuous Integration/Continuous Deployment) pipeline, whereas storing credentials directly in scripts or in accessible files poses a significant security risk. Keeping the credential outside the script makes it easy to rotate passwords without modifying the script itself and reduces the risk of the password being exposed in version control or log files. Therefore, using an environment variable to provision authentication credentials to the script is the best option among those provided.
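
A minimal sketch of this pattern in Python, assuming the operator exports the variable before running the script (the variable name APP_DB_PASSWORD is illustrative):

```python
import os

# Read the credential from the environment rather than hard-coding it in the
# script. Fail fast if the variable was never exported.
password = os.environ.get("APP_DB_PASSWORD")
if password is None:
    raise SystemExit("APP_DB_PASSWORD is not set; refusing to continue")

# ...pass `password` to the provisioning client here; never log or echo it.
```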

