CompTIA CASP+ CAS-004 Exam Guide A-Z - CHAPTER 18 - Integrating Hosts, Networks, Storage, and Applications

CMDB

A configuration management database (CMDB) records the status of assets such as goods, systems, software, facilities, and people at precise points in time, as well as the relationships among them. CMDBs are commonly used as data warehouses by the IT department.

CHAPTER 18 - Integrating Hosts, Networks, Storage, and Applications - OBJECTIVES

After going through this chapter, you will be able to understand the following topics:
1.) Adapting security to meet changing business needs.
2.) Standards: open standards, adherence to standards, competing standards, issues with a lack of standards, and de facto standards.
3.) Interoperability and resilience issues, covering legacy and current systems, application requirements, software types (in-house developed, commercial, tailored commercial, and open source), standard data formats, and protocols and APIs, as well as the use of heterogeneous components, course-of-action automation/orchestration, distribution of critical assets, persistence and non-persistence of data, redundancy/high availability, and the assumed likelihood of attack.
4.) Data security considerations for remnants, aggregation, data isolation, data ownership, data sovereignty, and data volume.
5.) Resource provisioning and de-provisioning.
6.) Design considerations during mergers, acquisitions, and demergers/divestitures.
7.) Secure network segmentation and delegation.
8.) Logical deployment diagrams and the corresponding physical deployment diagrams of all relevant devices.
9.) Security and privacy considerations of storage integration.
10.) Applications, including CRM, ERP, CMDB, CMS, integration enablers, Directory Services, DNS, SOA, and ESB.

Network secure segmentation and delegation

An organization may need to segment its network for a variety of reasons, including improving network performance and protecting certain traffic. Routers, switches, and firewalls are commonly used to segment a business network. A network administrator may choose to use switches to create VLANs or firewalls to create a DMZ. Whatever method you use to divide the network, make sure the interfaces connecting the segments are as secure as possible. This might entail shutting down unused ports, adopting MAC filtering, and employing other security measures. Separate physical trust zones can also be implemented in a virtualized system. After the segments or zones have been formed, you may assign individual administrators to manage the various segments or zones.
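
For illustration, the deny-by-default logic behind inter-segment filtering can be sketched in a few lines of Python. The subnets, zone names, and policy table are invented and do not represent any particular firewall's API:

    import ipaddress

    # Hypothetical segments: a user VLAN, a server VLAN, and a DMZ.
    ZONES = {
        "users":   ipaddress.ip_network("10.10.0.0/24"),
        "servers": ipaddress.ip_network("10.20.0.0/24"),
        "dmz":     ipaddress.ip_network("192.168.100.0/24"),
    }

    # Deny by default; list only the zone pairs that may communicate.
    ALLOWED = {("users", "servers"), ("dmz", "servers")}

    def zone_of(ip):
        addr = ipaddress.ip_address(ip)
        return next((name for name, net in ZONES.items() if addr in net), None)

    def permitted(src_ip, dst_ip):
        return (zone_of(src_ip), zone_of(dst_ip)) in ALLOWED

    print(permitted("10.10.0.5", "10.20.0.9"))      # True: users -> servers
    print(permitted("10.10.0.5", "192.168.100.7"))  # False: users -> dmz denied

In practice, this policy would be enforced on the routers, switches, or firewalls joining the segments; the point is simply that only explicitly listed zone pairs should be able to communicate.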

Protocols and APIs

Another barrier to interoperability is the use of multiple protocols and application programming interfaces (APIs). Both endpoints must support and understand the protocols in use when it comes to networking, storage, and authentication. To lower the attack surface, there should be an aim to reduce the number of protocols in use, since each protocol has its own set of flaws that must be addressed. There are a variety of API styles and data formats available, including Simple Object Access Protocol (SOAP), Representational State Transfer (REST), and JavaScript Object Notation (JSON), and many businesses use all of them. Reducing the number of APIs in use should likewise be a goal, to limit the attack surface.
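
As a minimal sketch of the REST style with JSON, here is a client call using only Python's standard library. The endpoint URL is hypothetical, and a real API would also require authentication:

    import json
    import urllib.request

    # Hypothetical REST endpoint returning a JSON document.
    url = "https://api.example.com/v1/assets?limit=10"

    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:  # HTTPS, bounded wait
        payload = json.load(resp)                          # parse the JSON body

    for asset in payload.get("items", []):
        print(asset.get("name"))

Standardizing on one such API style and retiring the others where possible is a practical way to shrink the attack surface described above.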

Application requirements

Any installed program may demand hardware, software, or other requirements that the business does not have. However, thanks to recent developments in virtualization technology, the company may use virtualization to create a virtual machine that meets the application's requirements. For example, an application may demand a certain screen resolution or graphics driver that isn't installed on any of the company's physical PCs. In this situation, the company might set up a virtual machine with the necessary screen resolution or driver to ensure that the program runs well. Keep in mind that some applications may require operating system versions that are no longer available. In current versions of Windows, you may opt to deploy a program in compatibility mode by using the Compatibility tab of the application's executable file, as illustrated in Figure 18.1.

Software can be of several types, and each type has advantages and disadvantages. This section looks at the major types of software.

In-house developed

Applications can be built in-house or bought off the shelf. If developers have the requisite expertise, funding, and time, in-house-produced apps may be tailored to the firm. Commercial apps may give the organization customization possibilities; however, customization is typically limited. When a new application is required, businesses should thoroughly investigate their possibilities. Once an organization's requirements have been defined, it may compare them against all commercially available programs to see whether any of them will meet those requirements. Purchasing a commercial solution rather than developing one in-house is typically more cost-effective. However, each company must weigh the costs of commercial applications against the expenditures of in-house development.

Commercial

Commercial software, often known as commercial off-the-shelf (COTS) software, is well-known and readily available. Information about vulnerabilities and possible attack patterns is commonly discussed among IT professionals. This implies that employing commercial software might expose the company to additional security vulnerabilities. In most circumstances, consumers do not have access to the source code, making it impossible to assess the security of commercial software programs through code review.

Competing standards

Competing standards frequently arise between rival companies. Microsoft, for example, frequently creates its own authentication standards, which are often based on an industry standard with minor changes to suit Microsoft's needs. Linux, on the other hand, may adopt standards, but because it is an open-source operating system, changes may have been made along the way that do not entirely fit the standards your company must follow. Always examine competing standards to see which one best meets your company's requirements.

CRM

Customer relationship management (CRM) entails identifying customers and preserving all customer-related data, including contact information and information on any direct interactions with them. CRM security is critical to an organization's success. Access to the CRM system is often restricted to sales and marketing staff, as well as management. If remote access to the CRM system is essential, a VPN or equivalent solution should be used to safeguard the CRM data.

DNS

DNS is a hierarchical naming system for computers, services, and other resources that are connected to the Internet or a private network. To guarantee that a DNS server is authorized before DNS information is transferred between the DNS server and the client, you should activate Domain Name System Security Extensions (DNSSEC). Transaction Signature (TSIG) is a cryptographic mechanism that allows a DNS server to automatically update client resource records when their IP addresses or hostnames change. A DNS client's TSIG record is used to validate it.

Internal DNS servers can be configured to communicate only with root servers as a security precaution. When the internal DNS servers are configured to communicate solely with the root servers, they are not allowed to contact any other external DNS servers, as shown in Figure 18.5. The Start of Authority (SOA) record includes information about the authoritative server for a DNS zone. The Time to Live (TTL) of a DNS record specifies how long it will last before it has to be renewed; when the TTL of a record expires, the record is deleted from the DNS cache. To poison the DNS cache, an attacker must add fake records to the DNS zone. Because a resource record is requested less frequently when it has a longer TTL, it is less likely to be poisoned.

Let's have a look at a DNS-related security problem. Assume an IT administrator sets up new DNS name servers to host the company's MX records and resolve the public address of the web server, and uses only server ACLs to safeguard zone transfers between DNS servers. Such a configuration remains vulnerable to IP spoofing attacks. Another situation is when a security team discovers that, by querying the company's external DNS server, someone from outside the organization has obtained critical information about the internal organization. The security manager should solve the problem by setting up a split DNS configuration, with the external DNS server containing only the information about domains that should be visible to the outside world and the internal DNS server maintaining authoritative records for internal systems.
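
The SOA record and TTL behavior described above can be inspected directly. The following sketch uses the third-party dnspython library (assumed to be installed via pip install dnspython); the domain is just a placeholder:

    import dns.resolver

    domain = "example.com"

    # Start of Authority: identifies the primary authoritative server for the zone.
    soa = dns.resolver.resolve(domain, "SOA")
    print("SOA primary server:", soa[0].mname)

    # TTL: how long resolvers may cache the A record before refreshing it.
    answer = dns.resolver.resolve(domain, "A")
    print("A records:", [r.address for r in answer])
    print("TTL (seconds):", answer.rrset.ttl)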

Data aggregation

Data aggregation allows you to query data from numerous sources and aggregate it into a single report. On all of the domains and servers involved, the account used to access the data must have the proper rights. In most situations, these sorts of installations include a dedicated server with a centralized data warehousing and mining system. Unwanted access to data is frequently the source of database security risks. The processes of aggregation and inference are two security vulnerabilities that occur in database management. Aggregation is the process of merging data from several sources. When a user does not have access to a group of data items but does have access to some or all of them individually, and can piece together information to which he should not have access, this constitutes a database security concern. Inference refers to this process of putting together knowledge. The following two types of access measures can be put in place to help prevent access to inferable information: 1.) Content-dependent access control: Access depends on the sensitivity of the data. A department manager, for example, may have access to the salaries of his or her employees but not to the salaries of employees in other departments. Increased processing overhead is the cost of this measure. 2.) Context-dependent access control: Access depends on various criteria to help prevent inference. Location, time of day, and past access history are all elements that might influence access control.
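
A toy sketch of these two measures follows; the departments, approved locations, and business-hours rule are invented for illustration:

    from datetime import datetime

    # Hypothetical policy: salary data is readable only on-site, during
    # business hours, and only for the manager's own department.
    APPROVED_LOCATIONS = {"HQ", "BranchOffice"}

    def may_read_salary(manager_dept, employee_dept, location, now=None):
        now = now or datetime.now()
        in_hours = 9 <= now.hour < 17             # context: time of day
        on_site = location in APPROVED_LOCATIONS  # context: location
        own_team = manager_dept == employee_dept  # content: scope of the data
        return in_hours and on_site and own_team

    print(may_read_salary("HR", "HR", "HQ"))     # True during business hours
    print(may_read_salary("HR", "Sales", "HQ"))  # False: other department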

ERP

Data from product planning, product cost, manufacturing or service delivery, marketing/sales, inventory management, shipping, payment, and other company operations is collected, stored, managed, and interpreted by enterprise resource planning (ERP). Personnel have access to an ERP system for reporting reasons. ERP should be installed on a secure internal network rather than in a DMZ. When implementing ERP, you may encounter resistance from some departments that do not want to share their process data with other departments.

De facto standards

De facto standards are commonly accepted standards that have not been formally established. De jure standards, adopted by international standards groups, are standards based on laws or regulations. De jure standards should be prioritized over de facto ones, and if at all possible, your company should implement security policies that meet both de facto and de jure requirements. Consider the following scenario: Assume that the major goal of a chief information officer (CIO) is to implement a system that supports the 802.11r standard, which will aid wireless VoIP devices in moving vehicles. The 802.11r standard, however, has not yet been fully ratified. The wireless vendor's products do support 802.11r as it is currently specified. The administrators have evaluated the product and found no security or compatibility concerns, but they are worried that the standard has not yet been finalized. The best course of action is to buy the equipment now, as long as the firmware can be upgraded to the final 802.11r standard.

Directory Services

Directory Services organizes, stores, and makes accessible the information in a computer operating system's directory. With Directory Services, users can access a resource using its name rather than its IP or MAC address. The majority of businesses set up an internal Directory Services server to handle all internal queries. To gather information on any resources that are not on the local company network, this internal server talks with a root server on a public network or with an externally facing server that is secured by a firewall or other security barriers. Directory Services include Active Directory, DNS, and LDAP, as illustrated in Figure 18.4.
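
As an illustration of resolving a resource by name through a directory service, here is a sketch using the third-party ldap3 library; the server address, credentials, and directory tree are all hypothetical:

    from ldap3 import Server, Connection, ALL  # pip install ldap3

    # Hypothetical internal directory server, reached over LDAPS.
    server = Server("ldaps://ds.example.internal", get_info=ALL)
    conn = Connection(server,
                      user="cn=reader,dc=example,dc=internal",
                      password="change-me",
                      auto_bind=True)

    # Look up a user by name rather than by IP or MAC address.
    conn.search("dc=example,dc=internal", "(uid=jdoe)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry.cn, entry.mail)
    conn.unbind()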

Distribution of critical assets

Ensuring that key assets are not all situated in the same physical area is one method that can help improve resiliency. Collocating important assets exposes your company to the type of disaster that occurred at the Atlanta airport in 2017, when the world's busiest airport stayed dark for nearly 12 hours after a fire knocked out the primary and backup power systems (which were both located together). It is undeniable that distributing vital assets improves resilience.

Integration enablers

Enterprise application enablers guarantee that a company's applications and services can communicate when they're needed. These enablers include all of the services specified in this section.

Security implications of integrating enterprise applications

Enterprise application integration enablers guarantee that an enterprise's applications and services can communicate as needed. Understanding which enabler is required in a given circumstance or scenario, and ensuring that the solution is delivered in the most secure manner feasible, are the main issues for the CASP exam. Customer relationship management (CRM), enterprise resource planning (ERP), governance, risk, and compliance (GRC), enterprise service bus (ESB), service-oriented architecture (SOA), Directory Services, Domain Name System (DNS), configuration management database (CMDB), and content management systems (CMSs) are some of the solutions you should be familiar with.

Redundancy and high availability

Fault tolerance permits a system to continue to function normally even if one or more of its components fail. Fault-tolerant drives and fault-tolerant drive adapters are used to provide fault tolerance for a hard disc system. The cost of fault tolerance, however, must be weighed against the cost of a redundant device or hardware. If information system security capabilities are not fault-tolerant, attackers can gain access to systems when the security mechanisms fail. The cost of implementing a fault-tolerant system should be weighed against the cost of an attack on the system. While providing a fault-tolerant security mechanism to protect public data may not be critical, providing a fault-tolerant security mechanism to safeguard confidential data is.

Availability refers to the ability to obtain data when it is required, and only those with a legitimate need for data should have access to it. The two basic scenarios in which availability is impacted are (1) when an attack disables or cripples a system and (2) when service is lost during and after disasters. Each system should be evaluated for its importance to the organization's operations, and controls should be implemented according to the criticality level of each system. Availability is the opposite of destruction or isolation. Controls that can increase availability include fault-tolerant technologies such as RAID and redundant locations.

The degree to which a new solution or system exhibits high availability, which is generally provided by redundancy of internal components, network connections, or data sources, is perhaps the most visible effect on its resiliency. To take it a step further, some systems may need to be deployed in clusters to provide the capacity to recover from the loss of an entire system. When the criticality of the operations that a system supports warrants it, all new integrations should consider high-availability solutions and redundant components.
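
The benefit of redundancy can be made concrete with a little arithmetic. If each independent component is available a fraction a of the time, n redundant copies are all down only when every one of them fails, giving a combined availability of 1 - (1 - a)^n. A quick Python sketch with illustrative numbers:

    # Availability of n redundant, independent components,
    # each with single-unit availability a.
    def combined_availability(a, n):
        return 1 - (1 - a) ** n

    single = 0.99  # one server at 99%: roughly 3.65 days of downtime per year
    print(combined_availability(single, 1))  # 0.99
    print(combined_availability(single, 2))  # 0.9999 -> about 53 minutes per year
    print(combined_availability(single, 3))  # 0.999999 -> about 32 seconds per year

The same arithmetic explains why clustering entire systems, not just duplicating internal components, is worthwhile for the most critical operations.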

CMS

From a central interface, a content management system (CMS) publishes, edits, updates, organizes, deletes, and manages material. Users may readily discover material using this single interface. Users can easily access the most recent version of the material since modifications are made from a central place. Microsoft SharePoint is an example of a content management system.

Use of heterogeneous components

Heterogeneous components are systems that contain many types of components. These components can be found within a single system or in separate physical systems, such as when Windows and Linux computers must connect to complete a business process. A data warehouse, a repository of information from heterogeneous databases, is probably the best example of heterogeneous components. It allows multiple sources of data not only to be stored in one location but also to be organized in such a way that data redundancy is reduced (a process known as data normalization), and it allows sophisticated data mining tools to manipulate the data to discover previously unknown relationships. Such systems introduce additional security challenges along with the benefits they bring. Heterogeneous computing is another term for systems that have more than one type of processor or core. Such systems make use of the distinct qualities of their various components by assigning each one a job at which it excels. While this usually improves performance, it makes performance predictability more challenging. Because capacity planning relies on the ability to forecast performance under varying workloads, this can have a detrimental influence on resilience or lead to overcapacity.

CHAPTER 18 - Integrating Hosts, Networks, Storage, and Applications - INTRODUCTION

INTRODUCTION: Organizations need to securely integrate hosts, storage, networks, and applications. It is the security practitioner's responsibility to ensure that appropriate security access controls are implemented and tested, among several other steps.

ESB

In an SOA, the component that plans and executes communication between mutually interacting software applications is called an enterprise service bus (ESB). It enables communication among SOAP, Java, .NET, and other applications. To facilitate communication with business partners, an ESB system is often implemented in a DMZ. An ESB is the best option for providing a secure software architecture that is event-driven and standards-based.

Data isolation

In databases, data isolation prevents data corruption due to two concurrent activities. In cloud computing, data isolation, using tenant IDs in data labels, guarantees that tenant data in a multi-tenant system is separated from other tenants' data. In most cases, trusted login services are also utilized. Data isolation should be checked in each of these installations to ensure that data is not damaged. In most circumstances, a transaction rollback should be used to guarantee that appropriate recovery is possible.
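
Transaction rollback can be demonstrated with Python's built-in sqlite3 module. The table and values are invented; the point is that a failure partway through a transaction leaves no half-applied, corrupted data behind:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
    conn.commit()

    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - 50 "
                         "WHERE name = 'alice'")
            # A crash here would strand the money without a transaction.
            raise RuntimeError("simulated failure mid-transfer")
    except RuntimeError:
        pass

    # The partial update was rolled back; the data is intact.
    print(conn.execute("SELECT * FROM accounts").fetchall())
    # [('alice', 100), ('bob', 0)]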

CHAPTER 18 - CONCLUSION

In this chapter, we discussed how organizations securely integrate hosts, storage, networks, and applications to meet changing business needs. Understanding different security standards, interoperability issues, and techniques to increase resilience is a high priority. Designing a secure infrastructure (logical and physical) and integrating secure storage solutions within the enterprise were also discussed. In the next chapter, we will discuss security activities across the technology life cycle, related to secure development and asset inventory, ensuring that appropriate security controls are deployed.

Data sovereignty

Information that has been transformed into binary digital form and stored is governed by the laws of the jurisdiction in which it is stored. This notion is called data sovereignty. When a company works on a worldwide scale, data sovereignty must be taken into account. It may have an impact on security concerns such as control selection, and it may eventually lead to a choice to centralize all data in the home nation. No company operates in a vacuum: laws, rules, and compliance requirements influence all businesses. Contracts, legislation, industry standards, and regulations must all be followed by organizations. Security personnel must be familiar with the rules and regulations of the country or countries in which they work, as well as the industry in which they work. Many rules and regulations are constructed in such a way that specified activities are required in certain circumstances; however, they leave it up to the organization to figure out how to comply. Both the United States and the European Union have enacted rules and regulations that influence businesses operating inside their respective jurisdictions. While security professionals should endeavor to comprehend laws and regulations, they may lack the expertise and experience necessary to grasp these rules completely enough to defend their business. In such situations, security professionals should consult with legal counsel to ensure legislative or regulatory compliance.

Users

It is usually recommended to utilize an account template when provisioning (or establishing) user accounts, to ensure that all of the relevant password policies, user rights, and other account settings are applied to the newly created account. If you're going to de-provision a user account, you should first disable it. It may be hard to access files, directories, and other resources controlled by a user account once it has been deleted. If an account is deactivated rather than deleted, the administrator can re-enable it temporarily to provide access to the account's resources. A proper method for requesting the creation, disabling, or deletion of user accounts should be adopted by an organization. Administrators should also keep an eye on account usage to ensure that accounts are still active.
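
A toy sketch of template-based provisioning and disable-before-delete follows; the template fields and policy values are invented for illustration:

    from dataclasses import dataclass, replace

    # Hypothetical account template: baseline policy for every new user.
    @dataclass
    class Account:
        username: str
        must_change_password: bool = True
        password_expiry_days: int = 90
        enabled: bool = True

    TEMPLATE = Account(username="")

    def provision(username):
        # Copy the template so policy settings are applied consistently.
        return replace(TEMPLATE, username=username)

    def deprovision(account):
        # Disable first; keeping the account preserves access to its resources.
        account.enabled = False
        return account

    user = provision("jdoe")
    print(user)
    print(deprovision(user).enabled)  # False: disabled, not deleted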

Legacy and current systems

Legacy systems are older technologies, computers, or programs that still serve a key role in the organization. Frequently, the vendor no longer maintains old systems, which means that no future upgrades will be supplied. Because of the security risks they pose, it is usually better to replace these systems as soon as feasible. However, due to the vital job they perform, these systems occasionally must be kept. Some guidelines when retaining legacy systems include the following:
* If possible, implement the legacy system in a protected network or demilitarized zone (DMZ).
* Limit physical access to the legacy system to administrators.
* If possible, deploy the legacy application on a virtual machine.
* Employ ACLs to protect the data on the system.
* Deploy the highest-level authentication and encryption mechanisms possible.
Examples of legacy technologies that were highly popular decades ago and are still in use today include the following:
* Mainframe computers running ancient applications
* Programming languages, such as COBOL
* Operating systems, such as MS-DOS, Windows 3.1, or XP
* Hardware, such as Apple IIGS machines or Intel 286 computers
There is a wide variety of reasons why businesses may decide not to upgrade legacy systems and instead continue to utilize them. The following are the primary reasons:
* Aside from the disadvantages of legacy systems, some businesses may want to keep them because they are still functional. Legacy technologies have been tried and proven, are robust and reliable, and personnel have a high degree of technical knowledge of them.
* Upgrades and replacements are expensive, especially for complicated, organization-wide, critical technology. Furthermore, most firms have made significant investments in legacy technology. When the cost of maintenance is less than the cost of replacement, businesses may opt to keep legacy systems.
* It's possible that switching to a whole new technology would be too disruptive. Legacy technology plays an important function in a company, and a replacement may or may not be more dependable, secure, or fast. As a result, the hazards associated with comprehensive replacement may jeopardize operations.
Consider the following scenario. Let's say a company has a legacy customer relationship management system that it has to keep. The application is only compatible with the Windows 2000 operating system, and the manufacturer no longer maintains it. The company might set up a virtual machine (VM) running Windows 2000 and migrate the application there. Users who want application access can use a remote desktop to connect to the VM and the application. Let's have a look at a more complicated scenario. Assume that an administrator replaces servers whenever funds are available in the budget. The organization has accumulated 20 servers and 50 PCs from five different manufacturers over the last few years. Increased mean time to failure of older servers, OS variations, patch availability, and the capacity to recover to incompatible hardware are some of the management problems and hazards associated with this form of technology life cycle management.

Data security considerations

One of the most significant issues during integration is the security of the data processed by any new system. Every stage of the data life cycle must be considered for data security. This section highlights data security concerns during integration.

Open source

Open-source software is free, but it comes with no assurances and limited support beyond the user community's assistance. It necessitates a great deal of knowledge and skill to apply to a given business, but it also provides the most flexibility.

Applications

Organizations frequently require a wide range of applications. It is critical to keep track of the licenses for any commercial software that is utilized. Administrators must be alerted so that licenses are not renewed when an organization no longer requires them, or are renewed at a lower level when demand for the application is low.

Adherence to standards

Organizations might choose to follow just open standards or standards that are governed by a standards body. Depending on the industry, some organizations may choose to implement only certain elements of standards. Remember that each standard should be thoroughly reviewed and analyzed to see how its implementation will affect the company. If a company disregards well-established norms, it may face legal consequences. Failure to employ standards to drive your organization's security approach, particularly if others in your field do so, can have a severe negative impact on your company's image and position.

Data volume

Organizations should endeavor to keep their data storage to a minimum: more data means a broader attack surface. Data retention rules should be established that require data to be destroyed when it is no longer useful to the company. Remember that the formulation of such policies should be guided by the legal and regulatory data retention requirements relevant to the industry in which the company operates.
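
A small sketch of a retention sweep is shown below; the archive path and the 365-day window are placeholders, since real retention periods come from legal and regulatory requirements:

    import time
    from pathlib import Path

    RETENTION_DAYS = 365            # placeholder; set per retention policy
    ARCHIVE = Path("/var/archive")  # hypothetical data store

    cutoff = time.time() - RETENTION_DAYS * 86400
    for f in ARCHIVE.rglob("*"):
        if f.is_file() and f.stat().st_mtime < cutoff:
            # Swap print for f.unlink() once the policy has been reviewed.
            print("would delete:", f)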

Persistence and non-persistence of data

Persistent data is information that remains accessible after you close and restart an app or device. Non-persistent data is lost when an unexpected shutdown happens. The hibernation procedure that a laptop goes through when the battery runs out is one approach that has been used to safeguard non-persistent data. Another example is system snapshots, which periodically "store" all changes made to an image. Finally, the journaling mechanism of database systems records changes that are planned to be made to the database (known as transactions) and saves them to disc before they are applied. The transaction log is examined after a power outage to apply any un-applied transactions. When integrating new technologies, security experts must investigate and employ these diverse strategies to provide the best possible level of protection for non-persistent data.
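
The journaling idea can be sketched as a toy write-ahead log (not any real database's format): every intended change is flushed to disc before it is applied, so after a crash the log can be replayed:

    import json
    import os

    LOG = "journal.log"  # hypothetical transaction log file

    def log_transaction(txn):
        # Record the intended change and force it to disc before applying it.
        with open(LOG, "a") as f:
            f.write(json.dumps(txn) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay():
        # After a power loss, re-read anything recorded in the journal.
        if not os.path.exists(LOG):
            return []
        with open(LOG) as f:
            return [json.loads(line) for line in f if line.strip()]

    log_transaction({"op": "set", "key": "balance", "value": 50})
    print(replay())  # [{'op': 'set', 'key': 'balance', 'value': 50}]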

Assumed likelihood of attack

Risk analysis should be performed on all new integrations to identify the likelihood and impact of various vulnerabilities and threats. When attacks are foreseen and measures are implemented, they can be prevented and their impact reduced. It is also crucial to examine the risk of new vulnerabilities arising from interactions between new and older systems.
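
One simple way to rank what to mitigate first is a likelihood-times-impact score. The scales and entries below are invented for illustration:

    # Hypothetical threats scored on 1-5 scales: (likelihood, impact).
    threats = {
        "legacy-to-new API exposure": (4, 3),
        "DNS cache poisoning":        (2, 4),
        "stolen backup media":        (3, 5),
    }

    # Sort by risk score, highest first.
    for name, (likelihood, impact) in sorted(
            threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
        print(f"{name}: risk score {likelihood * impact}")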

CHAPTER 18 - Integrating Hosts, Networks, Storage, and Applications - STRUCTURE

STRUCTURE: In this chapter, we will cover the following topics:
- Secure data flow to meet changing business needs, and security standards.
- Understand interoperability issues and techniques to increase resilience.
- Segment a network and analyze logical and physical deployment diagrams.
- Design a secure infrastructure.
- Integrate secure storage solutions within the enterprise.
- Deploy enterprise application integration enablers.

1.) Logical deployment diagram and corresponding physical diagram of all relevant devices

Security professionals must be familiar with two types of enterprise deployment diagrams to pass the CASP exam: logical deployment diagrams and physical deployment diagrams. A logical deployment diagram depicts the architecture, including the domain architecture (the current domain hierarchy, names, and addressing scheme), server roles, and trust relationships. A physical deployment diagram shows the details of physical communication links, such as cable length, grade, and wiring paths; servers, including computer name, IP address (if static), server roles, and domain membership; device locations, including printers, hubs, switches, modems, routers, bridges, and proxies; communication links and the available bandwidth between sites; and the number of users, including mobile users, at each site. In comparison to a physical diagram, a logical diagram often contains less information. While a logical diagram may frequently be created from a physical diagram, creating a physical diagram from a logical diagram is practically impossible. Figure 18.2 shows an example of a logical network diagram.

Servers

Server provisioning and de-provisioning should be based on organizational requirements and performance metrics. Administrators must monitor existing server resource utilization to decide when a new server should be deployed. Procedures should be put in place to guarantee that fresh server resources are supplied whenever a preset threshold has been reached. Procedures for de-provisioning servers should also be in place for when those resources are no longer required. Once again, monitoring is crucial.
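
A sketch of threshold-driven provisioning follows, using the third-party psutil library for the utilization sample; the 80 percent threshold and the provisioning action are placeholders:

    import psutil  # pip install psutil

    CPU_THRESHOLD = 80.0  # placeholder; derive from your performance baseline

    def needs_new_server():
        usage = psutil.cpu_percent(interval=1)  # sample CPU over one second
        print(f"current CPU: {usage:.1f}%")
        return usage >= CPU_THRESHOLD

    if needs_new_server():
        print("threshold reached: trigger the provisioning workflow")
    else:
        print("capacity OK: no action needed")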

SOA

In a service-oriented architecture (SOA), software is used to provide application functionality as services to other applications. A service is a single unit of functionality, and services are integrated to deliver all of the required capabilities. Web services frequently follow this design, as presented in Figure 18.6.

Lack of standards

Standards have yet to be established in certain emerging technological domains. Don't let a lack of formal standards keep you from providing your company with the greatest security measures. If you can identify a similar technology that has formal standards, see whether those standards apply to your solution. You may also wish to seek advice from subject matter experts (SMEs) in the field. A lack of standards does not exempt your company from taking all required precautions to safeguard personal and private information.

Tailored commercial

Tailored commercial (or customized commercial) software is a newer type of software that comes in modules that may be combined to create exactly the components that the business needs. This approach enables the company to customize the software.

Adapt security to meet business needs

The business demands of a company may change, necessitating the deployment of security devices or controls in new ways to secure data flow. As a security professional, you should be able to assess business changes, determine how they influence security, and then implement the necessary controls. Security professionals should identify personal and private information to secure data during transfer. Once this data has been properly identified, the following analysis steps should occur:
Step 1: Determine which applications and services access the information.
Step 2: Document where the information is stored.
Step 3: Document security controls to protect the stored information.
Step 4: Determine how the information is transmitted.
Step 5: Analyze whether authentication is used when accessing information. If it is, determine whether the authentication information is securely transmitted. If it is not, determine whether authentication can be used.
Step 6: Analyze enterprise password policies, including password length, password complexity, and password expiration.
Step 7: Determine whether encryption is used to transmit data. If it is, ensure that the level of encryption is appropriate and that the encryption algorithm is adequate. If it is not, determine whether encryption can be used.
Step 8: Ensure that the encryption keys are protected.
To ensure the CIA of data throughout its life cycle, security practitioners should use the defense-in-depth philosophy. Applications and services should be examined to see whether better and more secure alternatives are available or whether insufficient security measures are in place. To guarantee complete safety, data at rest may require encryption and proper access control lists (ACLs) to ensure that only authorized users have access. Secure protocols and encryption should be used for data transfer to prevent unauthorized users from intercepting and reading data. The highest degree of authentication available in the enterprise should be applied. Appropriate password and account rules help defend against probable password attacks. Finally, security professionals should guarantee that personal and private data is kept apart from other data, such as by storing it on separate physical servers or isolating it through virtual LANs (VLANs). On all devices, disable any superfluous services, protocols, and accounts. Based on vendor recommendations and releases, ensure that all firmware, operating systems, and applications are kept up to date. When new technologies are implemented to meet the organization's business demands, security practitioners must be careful to ensure that they understand all of the new technology's security implications and challenges. Deploying new technology without first doing a thorough security review might lead to security breaches that harm more than simply the newly installed technology. Keep in mind that changes are unavoidable. How you examine and plan for these developments is what will set you apart from other security experts.

Considerations during mergers, acquisitions, and demergers

The enterprise design must be addressed when companies combine, are purchased, or split. In the case of mergers and acquisitions, each organization has its own resources, infrastructure, and model. As a security professional, you must guarantee that the architectures of the two firms are properly examined before selecting how to integrate them. In the case of demergers, you'll almost certainly have to assist in determining how to effectively distribute the resources. Data security should always be a top priority.

Resource provisioning and de-provisioning

The flexibility to provision and de-provision resources as required is one of the advantages of many cloud installations. Users, servers, virtual devices, and applications may all be provisioned and de-provisioned. Depending on the deployment model employed, your business may have an internal administrator who performs these activities, the cloud provider may handle these tasks, or you may have a hybrid solution in which these tasks are shared between an internal administrator and cloud provider employees. Remember that any solution that relies on the cloud provider's staff for provisioning and de-provisioning may not be suitable, since those individuals may not be available to perform the operations you want right away.

Virtual devices

Virtual devices use the host machine's resources. The RAM on a physical system, for example, is shared among all virtual devices installed on that physical machine. When an organizational requirement arises, administrators should provision additional virtual devices. However, de-provisioning virtual devices when they are no longer needed is equally crucial, to free up resources for other virtual devices.

Standards

Standards describe how policies are implemented inside an organization. They are tactical rules, in the sense that they outline the processes required to accomplish security. Standards, like rules, should be evaluated and amended regularly. A governing body, such as the National Institute of Standards and Technology (NIST), normally establishes standards. Security experts must be conversant with the standards that have been created, since companies require direction on how to secure their assets. Many standards organizations have been established, including NIST, the United States Department of Defense (DoD), and the International Organization for Standardization (ISO). The DoD defines a certification and accreditation procedure for DoD information systems in DoD Instruction 8510.01. ISO collaborates with the International Electrotechnical Commission (IEC) to develop several information security standards. Security experts may also need standards from other bodies, such as the European Union Agency for Network and Information Security (ENISA), the European Union (EU), and the United States National Security Agency (NSA). A company must examine the many standards available and implement the most advantageous recommendations based on the company's requirements. Open standards, adherence to standards, competing standards, a lack of standards, and de facto standards are briefly discussed in the following sections.

2.) Logical deployment diagram and corresponding physical diagram of all relevant devices

The logical diagram depicts only a handful of the network's servers, along with the services they provide, their IP addresses, and their DNS names. The arrows between the different servers represent the relationships between them. Figure 18.3 shows an example of a physical network diagram. The cabling utilized, the devices on the network, the important information for each server, and other connection information are all included in a physical network diagram, which provides more information than a logical one.

Open standards

The term open standard refers to standards that are available to the broader public. The general public can offer input on open standards and utilize them without acquiring any rights to the standards or any organizational membership. Subject matter experts and industry specialists assist in the establishment and upkeep of these standards.

Security and privacy considerations of storage integrations

To guarantee that security risks are considered when integrating storage systems into an organization, security practitioners should be included in the design and implementation. To guarantee that storage administrators prioritize the security of the storage solutions, security practitioners should ensure that the organization sets proper security policies for storage solutions. The following are some of the security considerations for storage integration:
1.) Limit physical access to the storage solution.
2.) Create a private network to manage the storage solution.
3.) Implement ACLs for all data, paths, subnets, and networks.
4.) Implement ACLs at the port level, if possible.
5.) Implement multi-factor authentication.

Data remnants

Data remnants are the data left behind on a computer or another resource when it is no longer in use. If resources, particularly hard discs, are reused regularly, an unauthorized user may be able to access these data remnants. The best approach to safeguard this information is to use data encryption: without the original encryption key, encrypted data cannot be retrieved. Administrators must be aware of the types of data stored on physical discs to assess whether data remnants are an issue. The organization may not be concerned with data remnants if the data stored on a disc is not private or secret. If the data on the drive is secret or sensitive, however, the business should establish asset reuse and disposal rules. Residual data can be left behind after the data is wiped or removed from the storage medium. When an organization disposes of media, the data may be reconstructed, allowing unauthorized persons or organizations access to the information. Magnetic hard disc drives, solid-state drives, magnetic tapes, and optical media such as CDs and DVDs must all be considered by security professionals. When considering data remanence, security professionals must understand the following countermeasures:
1.) Clearing: This involves removing data from the media so that the data cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is recoverable only using special forensic techniques.
2.) Purging: Also referred to as sanitization, purging makes the data unreadable even with advanced forensic techniques. When this technique is used, data should be unrecoverable.
3.) Destruction: Destruction involves destroying the media on which the data resides. Overwriting is a destruction technique that writes data patterns over the entire media, thereby eliminating any trace of data.
4.) Degaussing: This destruction technique exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state. Encryption, by contrast, scrambles the data on the media, rendering it unreadable without the encryption key.
5.) Physical destruction: This involves physically breaking the media apart or chemically altering it. For magnetic media, physical destruction can also involve exposure to high temperatures.
Most of these countermeasures are effective against magnetic media. Solid-state drives, on the other hand, present distinct issues, because overwriting is unreliable on them and degaussing is ineffective against them. Most solid-state drive manufacturers provide sanitization commands that can be used to wipe data from the device; security specialists should research these commands to guarantee that they are effective. Erasing the cryptographic key is another option for these discs. To guarantee that the data is completely deleted, a combination of these procedures is frequently utilized. Data remanence is also a factor to consider when adopting any cloud-based service for an enterprise. Although it is difficult to establish whether cloud data is effectively deleted, security specialists should be engaged in the negotiation of any contract with a cloud-based service to ensure that the contract addresses data remanence. When working with the cloud, data encryption is a wonderful technique for ensuring that data remanence is not an issue.
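
Cryptographic erasure, mentioned above, can be sketched with the third-party cryptography library: once the key is destroyed, any remnant recovered from the media is just unreadable ciphertext. The data and key handling here are illustrative only:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in practice, held in a protected key store
    cipher = Fernet(key)

    remnant = cipher.encrypt(b"confidential customer record")
    print(cipher.decrypt(remnant))  # readable while the key exists

    # "Erase" the data by destroying the key rather than scrubbing the media.
    del cipher, key
    # Any remnant left on the disc is now just unreadable ciphertext.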

Standard data formats

When integrating many apps in an organization, issues with data formats might develop. Each program will have its own set of data formats unique to that software, as shown by the filename extension. Securing the various data types is a challenge, as some encryption methods work on some kinds of data but not on others. The Trusted Data Format (TDF), developed by Virtru, is one recent advancement in this field. TDF is essentially a protective wrapper around content. Whether you're transmitting an email message, an Excel document, or a kitten photo, your data is encrypted and "wrapped" into a TDF file, which connects with Virtru-enabled key stores to retain access privileges.

Interoperability issues

When integrating solutions into a secure corporate architecture, security professionals must be aware of all potential interoperability difficulties with legacy and current systems, application requirements, and in-house versus commercial versus tailored commercial apps.

Resilience issues

When integrating technologies into a secure enterprise architecture, security professionals must make sure that the result is an environment that can survive both in the short term and over time. Mission-critical operations, as well as the systems that enable the services and applications needed to keep the company running, must be robust. This section examines problems that affect availability.

Course of action automation/orchestration

While task automation has been used for some time (at least through scripts), orchestration takes it a step further by automating whole workflows. One of the advantages of orchestration is the ability to program logic into the supporting systems, allowing them to react to changes in the environment. This can be a valuable tool for ensuring system resilience. Assets may be updated in real time to handle changing workflows. For example, VMware vRealize, a virtual environment orchestration product, goes a step further by predicting workloads based on historical data.

Data ownership

While the majority of an organization's data is developed in-house, some of it is not. In many circumstances, businesses obtain data from others who create data similar to their own. These organizations may keep ownership of the information and simply license its usage. When integrated systems use such data, all duties associated with the data must be taken into account. Service-level agreements (SLAs) should be observed if they stipulate certain sorts of data handling or protection. A data or information owner's primary task is to identify the classification level of the information they own and to secure the data they are in charge of. This role grants or denies data access permissions. The data owner, however, is typically not in charge of implementing data access controls. The data owner's job is frequently performed by someone who has the best understanding of the data due to their affiliation with a certain business unit, and a data owner should be assigned to each business unit. A human resources department employee, for example, has a superior understanding of human resources data compared to an accounting department person. After the data owner has defined the information classification and controls, the data custodian applies them. The data custodian does not require any understanding of the data beyond its classification levels, although the data owner normally does. Even though the data owner for human resources data should be a human resources manager, the data custodian for that data might be a member of the IT department.

