CySA+


Behavioral Analysis

Another term for anomaly analysis. Behavioral analysis observes network behavior for anomalies. It can be implemented using combinations of the scanning types already covered, including NetFlow, protocol, and packet analysis, to create a baseline and subsequently report departures from the traffic metrics found in that baseline.

Payment Card Information

Another type of PII that almost all companies possess is credit card data. Holders of this data must protect it. Many of the highest-profile security breaches that have occurred have involved the theft of this data. The Payment Card Industry Data Security Standard (PCI-DSS) affects any organization that handles cardholder information for the major credit card companies. The latest version is 3.2. To prove compliance with the standard, an organization must be reviewed annually. Although PCI-DSS is not a law, this standard has affected the adoption of several state laws. PCI-DSS specifies 12 requirements with which covered organizations must comply.

Wireshark

A popular packet sniffer.

Transport Layer Security (TLS)

A protocol based on SSL 3.0 that provides authentication and encryption, used by most servers for secure exchanges over the Internet.

Assets Criticality level: Non-Critical

These are assets that, while nice to have, are not required for the organization to continue doing business.

Assets Criticality level: Critical

These are systems without which the company cannot operate or will be so seriously impaired it cannot continue doing business.

Transport Encryption

Transport encryption ensures that data is protected when it is transmitted over a network or the Internet, and it protects against network sniffing attacks. Security professionals should ensure that their enterprises use transport encryption in addition to protecting data at rest. For example, think of an enterprise that implements token and biometric authentication for all users, protected administrator accounts, transaction logging, full-disk encryption, server virtualization, port security, firewalls with ACLs, a network intrusion prevention system (NIPS), and secured access points. None of these solutions provides any protection for data in transit. Transport encryption would be necessary in this environment to protect data. To provide this encryption, secure communication mechanisms should be used, including SSL/TLS, HTTP/HTTPS/S-HTTP, SET, SSH, and IPsec.

SSL/TLS: Secure Sockets Layer (SSL) is a transport layer protocol that provides encryption, server and client authentication, and message integrity. SSL was developed by Netscape to transmit private documents over the Internet. While SSL implements either 40-bit (SSL 2.0) or 128-bit encryption (SSL 3.0), the 40-bit version is susceptible to attacks because of its limited key size. SSL allows an application to have encrypted, authenticated communication across a network. Transport Layer Security (TLS) is an open-community standard that provides many of the same services as SSL. TLS 1.0 is based on SSL 3.0 but is more extensible. The main goals of TLS are privacy and data integrity between two communicating applications. SSL and TLS are most commonly used when data needs to be encrypted while it is being transmitted (in transit) from one system to another.

HTTP/HTTPS/S-HTTP: Hypertext Transfer Protocol (HTTP) is the protocol used on the web to transmit website data between a web server and a web client. With each new address that is entered into the web browser, whether from initial user entry or by clicking a link on the page displayed, a new connection is established because HTTP is a stateless protocol. HTTP Secure (HTTPS) is the implementation of HTTP running over the SSL/TLS protocol, which establishes a secure session using the server's digital certificate. SSL/TLS keeps the session open using a secure channel. HTTPS websites always include the https:// designation at the beginning of the URL. Although it sounds very similar, Secure HTTP (S-HTTP) protects HTTP communication in a different manner: S-HTTP encrypts only a single communication message, not an entire session (or conversation). S-HTTP is not as common as HTTPS.

SSH: Secure Shell (SSH) is an application and protocol that is used to log in remotely to another computer using a secure tunnel. After a session key is exchanged and the secure channel is established, all communication between the two computers is encrypted over the secure channel.

IPsec: Internet Protocol Security (IPsec) is a suite of protocols that establishes a secure channel between two devices. IPsec is commonly implemented over VPNs. The algorithms to use and any cryptographic keys required are negotiated when the secure channel is established. IPsec includes Authentication Header (AH), Encapsulating Security Payload (ESP), and security associations. AH provides authentication and integrity, whereas ESP provides authentication, integrity, and encryption (confidentiality). A security association (SA) is a record of the configuration a device needs to participate in IPsec communication. A security parameter index (SPI) is a type of table that tracks the different SAs used and ensures that a device uses the appropriate SA to communicate with another device. Each device has its own SPI.

IPsec runs in one of two modes: transport mode or tunnel mode. Transport mode protects only the message payload, whereas tunnel mode protects the payload, routing, and header information. Both of these modes can be used for gateway-to-gateway or host-to-gateway IPsec communication.

IPsec does not itself determine which hashing or encryption algorithm is used. Internet Key Exchange (IKE), which is a combination of OAKLEY and the Internet Security Association and Key Management Protocol (ISAKMP), is the key exchange method most commonly used by IPsec. OAKLEY is a key establishment protocol based on Diffie-Hellman that was superseded by IKE. ISAKMP was established to set up and manage SAs. IKE with IPsec provides authentication and key exchange. The authentication methods used by IKE with IPsec include pre-shared keys, certificates, and public key authentication. The most secure implementations of pre-shared keys require a PKI, but a PKI is not necessary if a pre-shared key is based on simple passwords.
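To make the SSL/TLS piece concrete, here is a minimal Python sketch that opens a TLS-protected connection and reports what was negotiated. The `tls_probe` helper name and the `example.com` host are placeholders invented for illustration; only the standard-library `ssl` and `socket` modules are used.

```python
import socket
import ssl

def tls_probe(host: str, port: int = 443) -> tuple[str, str]:
    """Open a TLS connection and report the negotiated protocol and cipher."""
    # create_default_context() enables certificate validation and hostname
    # checking, and disables protocol versions considered insecure.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            # Anything sent over `tls` from here on is encrypted in transit.
            return tls.version(), tls.cipher()[0]

# Example usage (requires network access):
#   proto, cipher = tls_probe("example.com")
#   print(proto, cipher)
```

Note the design choice: the default context refuses connections whose certificate or hostname does not validate, which is the behavior you want when protecting data in transit.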

Anomalous Activity

When an application is behaving strangely and not operating normally, it could be that the application needs to be reinstalled or that it has been compromised by malware in some way. While all applications occasionally have issues, persistent issues or issues that are typically not seen or have never been seen could indicate a compromised application.

MX

A mail exchanger record, which identifies the e-mail server that receives mail for a domain

WPA2-Enterprise

A version of WPA2 that uses a RADIUS server for authentication. Uses CCMP with AES encryption.

Rogue Endpoints

As if keeping up with the devices you manage is not enough, you also have to concern yourself with the possibility of rogue devices on the network. Rogue devices are devices present on the network that you do not control or manage. In some cases, these devices are benign, as in the case of a user bringing his son's laptop to work and putting it on the network. In other cases, rogue endpoints are placed by malicious individuals.

Wireshark

Application that captures and analyzes network packets

Outsourcing

Third-party outsourcing is a liability that many organizations do not consider as part of their risk assessment. Any outsourcing agreement must ensure that the information entrusted to the other organization is protected by the proper security measures to fulfill all the regulatory and legal requirements.

Downstream liability refers to liability that an organization accrues due to partnerships with other organizations and customers. For example, you need to consider whether a contracted third party has the appropriate procedures in place to ensure that an organization's firewall has the security updates it needs. If hackers later break into the network through a security hole and steal data to steal identities, the customers can sue the organization (not necessarily the third party) for negligence. This is an example of downstream liability. Liability issues that an organization must consider include third-party outsourcing as well as contracts and procurements.

Due diligence and due care are two related terms that deal with liability. Due diligence means that an organization understands the security risks it faces and has taken reasonable measures to meet those risks. Due care means that an organization takes all the actions it can reasonably take to prevent security issues or to mitigate damage if security breaches occur. Due care and due diligence often go hand in hand but must be understood separately before they can be considered together.

Due diligence is all about gathering information. Organizations must institute the appropriate procedures to determine any risks to organizational assets. Due diligence provides the information necessary to ensure that the organization practices due care; without adequate due diligence, due care cannot occur.

Due care is all about action. Organizations must institute the appropriate protections and procedures for all organizational assets, especially intellectual property. With due care, failure to meet minimum standards and practices is considered negligence. If an organization does not take actions that a prudent person would have taken under similar circumstances, the organization is negligent.

As you can see, due diligence and due care have a dependent relationship. When due diligence is performed, organizations recognize areas of risk: for example, that personnel do not understand basic security issues, that printed documentation is not being discarded appropriately, or that employees are accessing files to which they should not have access. When due care occurs, organizations implement plans to protect against the identified risks. For the due diligence examples just listed, due care would include providing personnel security awareness training, putting procedures in place for proper destruction of printed documentation, and implementing appropriate access controls for all files.

It is important to ensure that a third party provides the level of security warranted by the data involved. There are a number of ways to facilitate this: include contract clauses that detail the exact security measures expected of the third party; periodically audit and test the security provided to ensure compliance; and consider executing an information security agreement (ISA), which may actually be required in some fields (for example, healthcare).

Like third-party outsourcing agreements, contract and procurement processes must be formalized. Organizations should establish procedures for managing all contracts and procurements to ensure that they include all the regulatory and legal requirements. Periodic reviews should occur to ensure that the contracted organization is complying with the guidelines of the contract.

Outsourcing can also cause an issue for a company when a vendor subcontracts a function to a third party. In this case, if the vendor cannot present an agreement with the third party that ensures the required protection for any data handled by the third party, the company that owns the data should terminate the contract with the vendor at the first opportunity.

Problems caused by outsourcing of functions can be worsened when the functions are divided among several vendors. Strategic architecture may be adversely impacted through the segregation of duties between providers. Vendor management costs may increase, and the organization's flexibility to react to new market conditions may be reduced. Internal knowledge of IT systems declines, hampering future platform development. The implementation of security controls and security updates takes longer as responsibility crosses multiple boundaries.

Finally, when outsourcing crosses national boundaries, additional complications arise. Some countries' laws are more lax than others. Depending on where the data originates and where it is stored, it may be necessary to consider the laws of more than one country or regulatory agency. If a country's laws are too lax, an organization may want to reconsider doing business with a company from that country.

In summary, while engaging third parties can help meet time-to-market demands, a third party should be contractually obliged to perform adequate security activities, and evidence of those activities should be confirmed by the company prior to the launch of any products or services that result from the third-party engagement. The agreement should also include the right of the company to audit the third party at any time.

Access control decisions can be of what types?

Time based, rule based, role based, or location based

Nonessential resources

____________ should be restored within 30 days.

Beaconing (Common Network-Related Symptoms)

refers to traffic that leaves a network at regular intervals. This type of traffic could be generated by compromised hosts that are attempting to communicate with (or call home) the malicious party that compromised the host. While there are security products that can identify beacons, including firewalls, intrusion detection systems, web proxies, and SIEM systems, creating and maintaining baselines of activity will help you identify beacons that are occurring during times of no activity (for example, at night). When this type of traffic is detected, you should search the local source device for scripts that may be generating these calls home.
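As a rough illustration of baseline-driven beacon detection, the sketch below flags a host whose outbound connections occur at nearly constant intervals. The function name, the jitter threshold, and the sample timestamps are all invented for illustration; real detection would compare observed traffic against your own baselines.

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Flag traffic whose inter-arrival times are nearly constant.

    timestamps: sorted epoch seconds of outbound connections from one host.
    max_jitter_ratio: hypothetical threshold; tune it against your baseline.
    """
    if len(timestamps) < 3:
        return False  # not enough samples to establish a rhythm
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    # Low relative jitter suggests an automated call-home schedule.
    return pstdev(intervals) / avg < max_jitter_ratio

# A host calling home roughly every 60 seconds, with slight jitter:
regular = [0, 60, 119, 181, 240, 301]
# Ordinary bursty user traffic:
bursty = [0, 5, 7, 200, 203, 500]
print(looks_like_beacon(regular))  # True
print(looks_like_beacon(bursty))   # False
```

A SIEM or web proxy would apply the same idea at scale; the value of a baseline is knowing which intervals are normal for each host.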

Packet Analysis

Examines an entire packet, including the payload

Software Patches

Updates released by vendors that either fix functional issues with, or close security loopholes in, operating systems, applications, and versions of firmware that run on network devices. To ensure that all devices have the latest patches installed, deploy a formal system that delivers updates to all systems after thorough testing in a non-production environment.

DMZ (demilitarized zone)

A small section of a private network that is located between two firewalls and made available for public access.

Dual Control

An example of dual control is one person initiating a request for a payment and another authorizing that same payment. Neither person could perform both operations.

Split Knowledge

An example of split knowledge is two bank employees who individually know only part of the combination for the safe. They must both be present to open the safe.

Risk Assessment Process

1. Identify Business Objectives 2. Identify Information Assets supporting the BOs 3. Perform Risk Assessment (RA) 4. Perform Risk Mitigation (RM) 5. Perform Risk Treatment (RT) 6. Perform Periodic Risk Reevaluation

Layer 2 Tunneling Protocol (L2TP)

L2TP is a newer protocol that operates at Layer 2 of the OSI model. Like PPTP, L2TP can use various authentication mechanisms; however, L2TP does not provide any encryption. It is typically used with IPsec, which is a very strong encryption mechanism.

Compensative controls

Compensative controls are in place to substitute for a primary access control and mainly act to mitigate risks. By using compensative controls, you can reduce risk to a more manageable level. Examples of compensative controls include requiring two authorized signatures to release sensitive or confidential information and requiring two keys owned by different personnel to open a safety deposit box.

TACACS+ and RADIUS

802.1x is a standard that defines a framework for centralized port-based authentication. It can be applied to both wireless and wired networks and uses three components:

Supplicant: The user or device requesting access to the network
Authenticator: The device through which the supplicant is attempting to access the network
Authentication server: The centralized device that performs authentication

The role of the authenticator can be performed by a wide variety of network access devices, including remote access servers (both dial-up and VPN), switches, and wireless access points. The role of the authentication server can be performed by a Remote Authentication Dial-In User Service (RADIUS) or Terminal Access Controller Access Control System Plus (TACACS+) server. The authenticator requests credentials from the supplicant and, upon receiving those credentials, relays them to the authentication server, where they are validated. Upon successful verification, the authenticator is notified to open the port for the supplicant to allow network access. This process is illustrated in Figure 11-5: the supplicant requests access from the authenticator, the authenticator relays the supplicant's credentials to the authentication server, and once the server validates them, the authenticator opens the port so the supplicant can access the Internet or other LAN resources.

While RADIUS and TACACS+ perform the same roles, they have different characteristics. You need to take these differences into consideration when choosing a method. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary. Table 11-1 compares them:

Transport protocol: RADIUS uses UDP, which may result in faster response; TACACS+ uses TCP, which offers more information for troubleshooting.
Confidentiality: RADIUS encrypts only the password in the access request packet; TACACS+ encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting.
Authentication and authorization: RADIUS combines authentication and authorization; TACACS+ separates the authentication, authorization, and accounting processes.
Supported Layer 3 protocols: RADIUS does not support the Apple Remote Access protocol, NetBIOS Frame Protocol Control protocol, or X.25 PAD connections; TACACS+ supports all protocols.
Device commands: RADIUS does not support securing the available commands on routers and switches; TACACS+ does.
Traffic: RADIUS creates less traffic; TACACS+ creates more.

The security issues with RADIUS include the following:

RADIUS Access-Request messages sent by RADIUS clients are not authenticated. Require the Message-Authenticator attribute in all Access-Request messages or use an authentication counting and lockout mechanism.
The RADIUS shared secret can be weak due to poor configuration and limited size. Choose shared secrets at least 22 characters long and consisting of a random sequence of upper- and lowercase letters, numbers, and punctuation.
Sensitive attributes are encrypted using the RADIUS hiding mechanism, which is not secure. Use Internet Protocol Security (IPsec) with Encapsulating Security Payload (ESP) and an encryption algorithm such as Triple Data Encryption Standard (3DES) to provide data confidentiality for the entire RADIUS message.
Poor Request Authenticator values can be used to decrypt encrypted attributes. If the Request Authenticator is not sufficiently random, it can be predicted and is also more likely to repeat. The Request Authenticator generator should be of cryptographic quality.

Security issues with TACACS+ include the following:

If the servers running the TACACS+ applications are compromised, an attacker could have access to your organization's entire user/password database, so this access should be tightly controlled.
Lack of integrity checking allows an attacker with access to the wire to flip most of the bits in a packet without the change being detected.
TACACS+ is vulnerable to replay attacks because it uses TCP and provides no security against replay. New TCP connections may be opened by an attacker to replay recorded TACACS+ sessions.
Forced session ID collisions occur when two different packets happen to get the same session ID and the same sequence number; both then become vulnerable to simple frequency analysis attacks.
Session IDs may be too small to be unique if randomly chosen. You can expect to see two different sessions with the same session ID if you watch about 100,000 TACACS+ sessions.
Due to lack of padding, the lengths of variable-size data fields can often be determined from the packet sizes.

While many of these defects are inherent to the protocols, there are some general measures you can take to reduce or eliminate them: apply packet filtering where possible to ensure that servers are accessible only from within your network, and preferably only by the IP addresses of the clients; choose strong encryption keys; and avoid running the service as root.
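The shared-secret guidance above (at least 22 random characters drawn from upper- and lowercase letters, digits, and punctuation) is easy to automate. This is a hedged sketch; the `make_shared_secret` helper name is invented, and a real RADIUS deployment may impose its own character-set limits.

```python
import secrets
import string

# Character classes from the guidance: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_shared_secret(length: int = 32) -> str:
    """Generate a RADIUS shared secret of at least 22 characters."""
    if length < 22:
        raise ValueError("use at least 22 characters per the guidance above")
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Require at least one character from each class before accepting.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(make_shared_secret())
```

The `secrets` module is used rather than `random` because shared secrets need a cryptographically strong source of randomness.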

SQL Injection

A SQL injection attack inserts, or "injects," a SQL query as the input data from the client to the application. This type of attack can result in reading sensitive data from the database, modifying database data, executing administrative operations on the database, recovering the content of a given file, and even issuing commands to the operating system.
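The attack is easiest to see next to its standard defense, parameterized queries. The sketch below uses Python's built-in sqlite3 module with a throwaway in-memory table; the table, user names, and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# turning it into ... WHERE name = 'x' OR '1'='1', which matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the payload matched every row
print(safe)        # [] -- no user is literally named "x' OR '1'='1"
```

The same principle (bound parameters or prepared statements) applies to every database driver, not just sqlite3.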

Netstat

A TCP/IP command line utility that shows the status of each active connection.

Zenmap

A Windows-based GUI version of nmap.

host record (also called an A record for IPv4 and AAAA record for IPv6)

A host record, which represents the mapping of a single device to an IPv4 or IPv6 address

NS

A name server record, which identifies a DNS name server that is authoritative for a domain

Accounting Data

Accounting data in today's networks is typically contained in accounting information systems (AIS). While these systems offer valuable integration with other systems, such as HR and customer relationship management systems, this integration comes with the challenge of securing the connections between these systems. Many organizations are also abandoning legacy accounting software for cloud-based vendors to maximize profit. Cloud arrangements bring their own security issues, such as the danger of data comingling in the multitenant environment that is common in public clouds. Moreover, considering that a virtual infrastructure underlies these cloud systems, all the dangers of the virtual environment come into play.

Cryptographic Types

Algorithms that are used in computer systems implement complex mathematical formulas when converting plaintext to ciphertext. The two main components of any encryption system are the key and the algorithm. In some encryption systems, the two communicating parties use the same key. In other encryption systems, the two communicating parties use different keys, but these keys are related. In this section, we discuss symmetric and asymmetric algorithms.

Symmetric algorithms use a private or secret key that must remain secret between the two parties. Each party pair requires a separate private key. Therefore, a single user would need a unique secret key for every user with whom she communicates. Consider an example of 10 unique users. Each user needs a separate private key to communicate with the other users. To calculate the number of keys needed, you use the following formula:

Number of users × (Number of users − 1) / 2

Therefore, in this example you would calculate 10 × (10 − 1) / 2, or 45 needed keys.

With symmetric algorithms, the encryption key must remain secure. To obtain the secret key, the users must find a secure out-of-band method for communicating it, such as a courier or direct physical contact between the users. A special type of symmetric key called a session key encrypts messages between two users during a single communication session. Symmetric algorithms can be referred to as single-key, secret-key, private-key, or shared-key cryptography. Symmetric systems provide confidentiality but not authentication or non-repudiation: if both users use the same key, determining where a message originated is impossible. Symmetric algorithms include DES, AES, IDEA, Skipjack, Blowfish, Twofish, RC4/RC5/RC6, and CAST. Table 12-4 lists the strengths and weaknesses of symmetric algorithms.

Table 12-4: Symmetric Algorithm Strengths and Weaknesses
Strengths: Symmetric algorithms are 1,000 to 10,000 times faster than asymmetric algorithms; they are hard to break; they are cheaper to implement than asymmetric algorithms.
Weaknesses: The number of unique keys needed can cause key management issues; secure key distribution is critical; key compromise occurs if one party is compromised, thereby allowing impersonation.

The two broad types of symmetric algorithms are stream-based ciphers and block ciphers. Initialization vectors (IVs) are an important part of block ciphers.

Stream-based ciphers perform encryption on a bit-by-bit basis and use keystream generators. The keystream generators create a bit stream that is XORed with the plaintext bits; the result of this XOR operation is the ciphertext. A synchronous stream-based cipher depends only on the key, while an asynchronous stream cipher depends on the key and the plaintext. The key ensures that the bit stream XORed with the plaintext is random.

Advantages of stream-based ciphers include the following: they generally have lower error propagation because encryption occurs on each bit; they are generally used more in hardware implementations; they use the same key for encryption and decryption; and they are generally cheaper to implement than block ciphers.

Block ciphers perform encryption by breaking a message into fixed-length units. For example, a message of 1,024 bits could be divided into 16 blocks of 64 bits each; each of those 16 blocks is processed by the algorithm formulas, resulting in a block of ciphertext. Examples of block ciphers include IDEA, Blowfish, RC5, and RC6.

Advantages of block ciphers include the following: implementation of block ciphers is easier than stream-based cipher implementation; they are generally less susceptible to security issues; and they are generally used more in software implementations.

Table 12-5 lists the key facts about each symmetric algorithm.

Table 12-5: Symmetric Algorithm Key Facts
DES: block cipher; 64-bit key (effective length 56 bits); 16 rounds; 64-bit block
3DES: block cipher; 56-, 112-, or 168-bit key; 48 rounds; 64-bit block
AES: block cipher; 128-, 192-, or 256-bit key; 10, 12, or 14 rounds (depending on key size); 128-bit block
IDEA: block cipher; 128-bit key; 8 rounds; 64-bit block
Skipjack: block cipher; 80-bit key; 32 rounds; 64-bit block
Blowfish: block cipher; 32- to 448-bit key; 16 rounds; 64-bit block
Twofish: block cipher; 128-, 192-, or 256-bit key; 16 rounds; 128-bit block
RC4: stream cipher; 40- to 2,048-bit key; up to 256 rounds; no fixed block size
RC5: block cipher; key up to 2,048 bits; up to 255 rounds; 32-, 64-, or 128-bit block
RC6: block cipher; key up to 2,048 bits; up to 255 rounds; 32-, 64-, or 128-bit block

Block cipher modes use initialization vectors (IVs) to ensure that patterns are not produced during encryption. IVs provide this service by introducing random values into the algorithms. Without IVs, a repeated phrase in a plaintext message could result in the same ciphertext, and attackers can possibly use these patterns to break the encryption.

Asymmetric algorithms use both a public key and a private or secret key. The public key is known to all parties, and the private key is known only to its owner. One of these keys encrypts the message, and the other decrypts it. In asymmetric cryptography, determining a user's private key is virtually impossible even if the public key is known, although both keys are mathematically related. However, if a user's private key is discovered, the system can be compromised. Asymmetric algorithms can be referred to as dual-key or public-key cryptography. Asymmetric systems provide confidentiality, integrity, authentication, and non-repudiation: because each user has a unique key that is part of the process, determining where a message originated is possible.

If confidentiality is the primary concern for an organization, a message should be encrypted with the receiver's public key, which is referred to as secure message format. If authentication is the primary concern, a message should be encrypted with the sender's private key, which is referred to as open message format. When using open message format, the message can be decrypted by anyone who has the public key.

Asymmetric algorithms include Diffie-Hellman, RSA, ElGamal, ECC, Knapsack, DSA, and Zero Knowledge Proof. Table 12-6 lists the strengths and weaknesses of asymmetric algorithms.

Table 12-6: Asymmetric Algorithm Strengths and Weaknesses
Strengths: Key distribution is easier and more manageable than with symmetric algorithms; key management is easier because the same public key is used by all parties.
Weaknesses: Asymmetric algorithms are more expensive to implement than symmetric algorithms; they are 1,000 to 10,000 times slower than symmetric algorithms.

Because both symmetric and asymmetric algorithms have weaknesses, solutions have been developed that use both types of algorithms in a hybrid cipher. By using both algorithm types, the cipher provides confidentiality, authentication, and non-repudiation. The process for hybrid encryption is as follows:

Step 1. The symmetric algorithm provides the keys used for encryption.
Step 2. The symmetric keys are passed to the asymmetric algorithm, which encrypts the symmetric keys and automatically distributes them.
Step 3. The message is encrypted with the symmetric key.
Step 4. Both the message and the key are sent to the receiver.
Step 5. The receiver decrypts the symmetric key and uses the symmetric key to decrypt the message.

An organization should use hybrid encryption if the parties do not have a shared secret key and large quantities of sensitive data must be transmitted.

Integrity is one of the three basic tenets of security. Message integrity ensures that a message has not been altered, using parity bits, cyclic redundancy checks (CRCs), or checksums.

The parity bit method adds an extra bit to the data. This parity bit simply indicates whether the number of 1 bits is odd or even: the parity bit is 1 if the number of 1 bits is odd, and 0 if the number of 1 bits is even. The parity bit is set before the data is transmitted. When the data arrives, the parity bit is checked against the other data; if it doesn't match the data sent, an error is reported to the originator.

The CRC method uses polynomial division to determine the CRC value for a file. The CRC value is usually 16 or 32 bits long. Because CRC is very accurate, the CRC value does not match up if even a single bit is incorrect.

The checksum method adds up the bytes of data being sent and transmits that number to be checked later using the same method. The source adds up the values of the bytes and sends the data and its checksum. The receiver adds up the bytes in the same way the source did, computes the checksum, and compares it with the source's checksum. If the values match, message integrity is intact; if they do not match, the data should be resent or replaced. Checksums are also referred to as hash sums because they typically use hash functions for the computation.

Message integrity is provided by hash functions and message authentication codes. Hash functions are used to ensure integrity. Some of them are no longer commonly used because more secure alternatives are available. Security professionals should be familiar with the following hash functions: one-way hash, MD2/MD4/MD5/MD6, and SHA/SHA-2/SHA-3.
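Three of the calculations above (the symmetric key-count formula, the parity bit, and an additive checksum) are small enough to sketch directly. The function names are invented, and the 16-bit modulus on the checksum is an illustrative choice, not from the source.

```python
def symmetric_key_count(users: int) -> int:
    """Keys needed for pairwise symmetric crypto: n * (n - 1) / 2."""
    return users * (users - 1) // 2

def parity_bit(data: bytes) -> int:
    """Parity bit as described above: 1 if the count of 1 bits is odd, else 0."""
    ones = sum(bin(byte).count("1") for byte in data)
    return ones % 2

def checksum(data: bytes) -> int:
    """Simple additive checksum: sum of the byte values (mod 2**16 here)."""
    return sum(data) % 65536

print(symmetric_key_count(10))  # 45, matching the worked example above
msg = b"hello"
print(parity_bit(msg))          # 1 (b"hello" contains 21 one-bits, an odd count)
print(checksum(msg))            # 532
```

A receiver recomputes `parity_bit` or `checksum` over the arriving bytes and compares against the transmitted value; a mismatch signals corruption.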

WPA Personal

Also known as WPA-PSK or preshared key mode. Only wireless devices with the passphrase can join the network. Uses TKIP encryption.

NTP (Network Time Protocol)

An Internet protocol that enables synchronization of computer clock times in a network of computers by exchanging time signals.

Unexpected Outbound Communication

Any unexpected outbound traffic should be investigated, regardless of whether it was discovered through network monitoring or through monitoring of the host or application. In the case of an application, unexpected outbound communication can mean that data is being transmitted back to a malicious individual.

Basel II

Basel II affects financial institutions. It addresses minimum capital requirements, supervisory review, and market discipline. Its main purpose is to protect against risks that banks and other financial institutions face.

Change report

Change reports indicate only what has changed since the last report. New vulnerabilities, open ports, new services, and new/removed hosts are included, as well as a summary of the problems that were fixed.

What are the seven main categories of access control mechanisms?

Compensative, corrective, detective, deterrent, directive, preventive, and recovery

Contamination

Contamination is the intermingling or mixing of data of one sensitivity or need-to-know level with that of another. Proper implementation of security levels is the best defense against these problems.

Corrective controls

Corrective controls are in place to reduce the effect of an attack or other undesirable event. Using corrective controls fixes or restores the entity that is attacked. Examples of corrective controls include installing fire extinguishers, isolating or terminating a connection, implementing new firewall rules, and using server images to restore to a previous state.

Confidential Classification Level (Data Classifications)

Data that is shared within the company but might cause damage if disclosed

Patching

In many cases, a threat or an attack is made possible by missing security patches. You should update or at least check for updates for a variety of components. This includes all patches for the operating system, updates for any applications that are running, and updates to all anti-malware software that is installed.While you are at it, check for any firmware update the device may require. This is especially true of hardware security devices such as firewalls, IDSs, and IPSs. If any routers or switches are compromised, check for software and firmware updates.

Check Point

In the 1990s Check Point was a dominant player in the firewall field, and it is still a major provider. Like other vendors, Check Point has gone to a next-generation platform, incorporating many of the features found in ASA and Palo Alto firewalls.

Inference

Inference occurs when someone has access to information at one level that allows her to infer information about another level. The main mitigation technique for inference is polyinstantiation, which is the development of a detailed version of an object from another object using different values in the new object. It prevents low-level database users from inferring the existence of higher-level data.

Maximum tolerable downtime (MTD)

This is the maximum amount of time that an organization can tolerate a single resource or function being down. This is also referred to as maximum period time of disruption (MPTD).

Hypervisor attack

Involves taking control of the hypervisor to gain access to the VMs and their data.

WLAN sniffing

Just as a wired network can be sniffed, so can a WLAN. Unfortunately, there is no way to detect this when it is occurring, and there is no way to stop it. Therefore, any traffic that is sensitive should be encrypted to prevent disclosure.

Automated Reporting

Many vulnerability scanning tools have robust automated reporting features of which the organization should take advantage. These reports can be calendared to be delivered to the proper individual in the organization when generated. A variety of report types are available, tailored for the audience to which they are directed. These report types include the following:
- Technical report: Provides a comprehensive analysis of all vulnerabilities found and a tool for network administrators, security officers, and IT managers to evaluate network security.
- Change report: Presents only the changes from any previous scan, highlighting potential risks, unauthorized activity, and security-related network actions.
- Executive report: Designed for senior IT executives; provides modest graphics with enough supporting detail to assist in decision making.
- Senior executive report: Provides more graphics and less detail for presentation to nontechnical decision makers.

Marketing (incident Response)

Marketing can be involved in the following activities in support of the incident response plan: -Create newsletters and other educational materials to be used in employee response training. -In coordination with the legal department, handle advanced preparation of media responses and internal communications regarding incidents.

Handling Risk

Risk reduction is the process of altering elements of the organization in response to risk analysis. After an organization understands its risk, it must determine how to handle it. The following four basic methods are used to handle risk:
- Risk avoidance: Terminating the activity that causes a risk or choosing an alternative that is not as risky
- Risk transfer: Passing on the risk to a third party, such as an insurance company
- Risk mitigation: Defining the acceptable risk level the organization can tolerate and reducing the risk to that level
- Risk acceptance: Understanding and accepting the level of risk as well as the cost of damages that can occur

ArcSight

SIEM

Switch spoofing

Switch ports can be set to use a negotiation protocol called Dynamic Trunking Protocol (DTP) to negotiate the formation of a trunk link. If an access port is left configured to use DTP, it is possible for a hacker to set his interface to spoof a switch and use DTP to create a trunk link. If this occurs, the hacker can capture traffic from all VLANs. This process is shown in Figure 6-12. To prevent this, you should disable DTP on all switch ports.
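As a sketch, the usual hardening on a Cisco IOS switch is to hard-code access mode and turn off DTP negotiation on user-facing ports (the interface name below is illustrative):

```
interface GigabitEthernet0/1
 switchport mode access    ! hard-code the port as an access port
 switchport nonegotiate    ! stop sending and processing DTP frames
```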

Work recovery time (WRT)

This is the difference between the MTD and the RTO (WRT = MTD - RTO): the time remaining after the RTO before the MTD is reached.

Technical report

Technical reports are the most comprehensive and also the most technical. They might be inappropriate for recipients with low security knowledge.

Blue Team

The Blue team acts as the network defense team, and the attempted attack by the Red team tests the Blue team's ability to respond to the attack. It also serves as practice for a real attack. This includes accessing log data, using a SIEM, garnering intelligence information, and performing traffic and data flow analysis.

CMMI

The Capability Maturity Model Integration (CMMI) is a comprehensive set of guidelines that address all phases of the software development life cycle. It describes a series of stages or maturity levels that a development process can advance through as it goes from the ad hoc (Build and Fix) model to one that incorporates a budgeted plan for continuous improvement. Figure 10-4 shows its five maturity levels.

Economic Espionage Act of 1996

The Economic Espionage Act of 1996 covers a multitude of issues because of the way the act was structured. This act affects companies that have trade secrets and any individuals who plan to use encryption technology for criminal activities. A trade secret does not need to be tangible to be protected by this act. Per this law, theft of a trade secret is now a federal crime, and the United States Sentencing Commission must provide specific information in its reports regarding encryption or scrambling technology that is used illegally.

system isolation

The best means to prevent a worm (or any other type of malicious code) from infecting a system or spreading from your system to others is system isolation. If there are no communication pathways into or out of a computer system, there is no means by which a worm or other malicious code can enter or leave.

Crime tape

The incident scene and its evidence must be protected from contamination, so you need crime tape to block the area and prevent any unauthorized individuals from entering it.

Unauthorized Software (Common Host-Related Symptoms)

The presence of any unauthorized software should be another red flag. If you have invested in a vulnerability scanner, you can use it to create a list of installed software that can be compared to a list of authorized software. Unfortunately, many types of malware do a great job of escaping detection. One of the ways to prevent unauthorized software is through the use of Windows AppLocker. By using this tool, you can create whitelists, which specify the only applications that are allowed, or you can create a blacklist, specifying which applications cannot be run. For Windows operating systems that predate Windows 7, you need to use an older tool called Software Restriction Policies. Both features leverage Group Policy to enforce the restrictions on the devices.

Assessment scans

These scans are more comprehensive than discovery scans and can identify misconfigurations, malware, application settings that are against policy, and weak passwords. These scans have a significant impact on the scanned device.

Discovery Scans

These scans are typically used to create an asset inventory of all hosts and all available services.

IAM Software

Third-party identity and access management (IAM) software is created to supplement the tools that may be available to you with your directory service. These tools typically enhance the ability to manage identities in complex situations like federations and single sign-on environments (covered later in this lesson). They may be delivered as an Identity as a Service (IDaaS) solution. Security issues with third-party IAM solutions include the following: Many deployments overprovision effective access rights to users, which can lead to unauthorized disclosure, fraud, accidental access, and identity theft. DDoS attacks on identity services or network connectivity could risk the availability of or degrade the performance of the service. As with any other system that manages access by using identities, all common identity-based attacks may come into play, such as brute-force attacks, cookie-replay attacks, elevations of privileges, and identity spoofing.

Third Party/Consultants

Third-party outsourcing is a liability that many organizations do not consider as part of their risk assessment. Any outsourcing agreement must ensure that the information that is entrusted to the other organization is protected by the proper security measures to fulfill all the regulatory and legal requirements.Contract and procurement processes must be formalized. Organizations should establish procedures for managing all contracts and procurements to ensure that they include all the regulatory and legal requirements. Periodic reviews should occur to ensure that the contractual organization is complying with the guidelines of the contract.

Incident form:

This form is used to describe the incident in detail. It should include sections to record complementary metal-oxide semiconductor (CMOS) information, hard drive information, image archive details, analysis platform information, and other details. The best approach is to obtain a template and customize it to your needs.

Mean time to repair (MTTR)

This is the average time required to repair a single resource or function when a disaster or disruption occurs.

Rule- or heuristic-based IDS

This type of IDS is an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules. The data and traffic are analyzed, and the rules are applied to the analyzed traffic. The inference engine uses its intelligent software to "learn." If characteristics of an attack are met, alerts or notifications are triggered. This is often referred to as an if/then, or expert, system.
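A minimal sketch of the if/then idea in Python. The rule names, event fields, and thresholds below are invented for illustration; a real rule-based IDS would have a far larger knowledge base and would refine its rules over time.

```python
# A tiny knowledge base: each rule is a name plus an if/then condition.
RULES = [
    {"name": "possible port scan",
     "condition": lambda event: event["unique_ports"] > 100},
    {"name": "oversized ICMP packet",
     "condition": lambda event: event["proto"] == "icmp" and event["bytes"] > 1500},
]

def evaluate(event: dict) -> list:
    """Inference step: apply every rule to the event; return the rules that fire."""
    return [rule["name"] for rule in RULES if rule["condition"](event)]

# One host touched 412 distinct ports, so the first rule triggers an alert.
print(evaluate({"unique_ports": 412, "proto": "tcp", "bytes": 60}))
```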

UEBA user entity behavior analytics

This type of analysis focuses on user activities. Combining behavior analysis with machine learning, ____ enhances the ability to determine which particular users are behaving oddly. An example would be a hacker who has stolen credentials of a user and is identified by the system because he is not performing the same activities that the user would perform.

Executive report

This type of report provides only modest graphics and brief supporting text to assist in decision making.

Null, FIN, and XMAS scans all serve the same purpose-

To discover open ports and ports blocked by a firewall. Because these scans use unusual TCP flag combinations, a closed port responds with an RST while an open or filtered port sends no response, and the probes may evade simple packet filters and logging.

How to prevent MAC overflow attack?

To prevent these attacks, you should limit the number of MAC addresses allowed on each port by using port-based security.
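On Cisco switches this is done with the port-security feature; a typical (illustrative) configuration limiting a port to a single MAC address looks like this:

```
interface GigabitEthernet0/2
 switchport mode access
 switchport port-security                     ! enable port security
 switchport port-security maximum 1           ! allow only one MAC address
 switchport port-security violation shutdown  ! err-disable the port on a violation
```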

To protect against buffer overflow

To protect against this issue, organizations should ensure that all operating systems and applications are updated with the latest updates, service packs, and patches. In addition, programmers should properly test all applications to check for overflow conditions.

Mobile device forensics

Today, many incidents involve mobile devices. You need different tools to acquire the required information from these devices. A suite should contain tools for this purpose.

Traffic Analysis

Tools that chart a network's traffic usage (for example, NetFlow analysis).

You can mitigate integer overflow attacks by doing the following:

Use strict input validation. Use a language or compiler that performs automatic bounds checks. Choose an integer type that contains all possible values of a calculation. This reduces the need for integer type casting (changing an entity of one data type into another), which is a major source of defects.
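The bounds-check idea can be sketched in Python. Python integers do not overflow, so the 32-bit limits below simulate what a fixed-width C integer would face; the function name is illustrative.

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def safe_add_int32(a: int, b: int) -> int:
    """Add two values, rejecting any result that cannot fit in a signed 32-bit int."""
    result = a + b
    if not (INT32_MIN <= result <= INT32_MAX):
        raise OverflowError("result does not fit in a signed 32-bit integer")
    return result

print(safe_add_int32(2, 3))           # 5
try:
    safe_add_int32(INT32_MAX, 1)      # would silently wrap around in 32-bit C
except OverflowError as exc:
    print("rejected:", exc)
```

Choosing a wider integer type for the calculation, as the list above suggests, avoids the need for this kind of check in the first place.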

Trusted Foundry

Used to verify that hardware can be trusted (ensured by the NSA).

WPA Enterprise Mode

Uses 802.1X (RADIUS) access control and TKIP encryption

Agent based Vulnerability Scanner

Uses pull technology: agents installed on the hosts pull their scan policies from the central console and run the scans locally. Agent-based scanners have the following characteristics:
- Can get information from disconnected machines or machines in the DMZ
- Ideal for remote locations that have limited bandwidth
- Less dependent on network connectivity
- Based on policies defined on the central console

Packet Capture

Uses a sniffing program to capture packets from a PC's NIC

Anti-malware

We are not helpless in the fight against malware. There are both programs and practices that help mitigate the damage malware can cause. Anti-malware software addresses problematic software such as adware and spyware, viruses, worms, and other forms of destructive software. Most commercial applications today combine anti-malware, antivirus, and anti-spyware functions into a single tool. An antivirus tool protects only against viruses; an anti-spyware tool protects only against spyware. Security professionals should review the documentation of any tool they consider so they can understand the protection it provides.

User education in safe Internet use practices is a necessary part of preventing malware. This education should be a part of security policies and should include topics such as the following:
- Keeping anti-malware applications current
- Performing daily or weekly scans
- Disabling autorun/autoplay
- Disabling image previews in Outlook
- Avoiding clicking on e-mail links or attachments
- Surfing smart
- Hardening the browser with content/phishing filters and security zones

Improper Error and Exception Handling

Web applications, like all other applications, suffer from errors and exceptions, and such problems are to be expected. However, the manner in which an application reacts to errors and exceptions determines whether security can be compromised. One of the issues is that an error message may reveal information about the system that a hacker may find useful. For this reason, when applications are developed, all error messages describing problems should be kept as generic as possible. Also, you can use tools such as the OWASP Zed Attack Proxy to try to make applications generate errors.

Nikto

Web vulnerability scanner

Web App Vulnerability Scanning

Web vulnerability scanners focus on web applications. These tools can operate in two ways: synthetic transaction monitoring and real user monitoring. In synthetic transaction monitoring, preformed (synthetic) transactions are performed against the application in an automated fashion, and the behavior of the application is recorded. In real user monitoring, real user transactions are monitored while the web application is live.

Synthetic transaction monitoring, which is a type of proactive monitoring, uses external agents to run scripted transactions against an application. This type of monitoring is often preferred for websites and applications. It provides insight into the application's availability and performance and warns of any potential issue before users experience any degradation in application behavior. For example, Microsoft's System Center Operations Manager uses synthetic transactions to monitor databases, websites, and TCP port usage.

In contrast, real user monitoring (RUM), which is a type of passive monitoring, captures and analyzes every transaction of every application or website user. Unlike synthetic monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork, seeing exactly how users are interacting with the application.

A number of web testing applications are available. These tools scan an application for common security issues with cookie management, PHP scripts, SQL injections, and other problems. Some examples of these tools, discussed more fully in Lesson 14, include the following:
- Qualys
- Nessus
- Nexpose
- Nikto

Scan Sweeps

When a penetration test is undertaken, one of the early steps is to scan the network. These scan sweeps can be of several kinds. When they occur and no known penetration test is under way, it is an indication that a malicious individual may be scanning in preparation for an attack. The following are the most common of these scans:
- Ping sweeps: Also known as ICMP sweeps, ping sweeps use ICMP to identify all live hosts by pinging all IP addresses in the known network. All devices that answer are up and running.
- Port scans: Once all live hosts are identified, a port scan attempts to connect to every port on each device and report which ports are open, or "listening."
- Vulnerability scans: Vulnerability scans are more comprehensive than the other types of scans in that they identify open ports and security weaknesses. The good news is that uncredentialed scans expose less information than credentialed scans. An uncredentialed scan is a scan in which the scanner lacks administrative privileges on the devices being scanned.
You will learn more about scanning tools in Lesson 14.
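The port scan step can be sketched with Python's standard socket module. This is a simplistic connect scan, not a stealth scan, and scanning hosts you are not authorized to test may be illegal; 127.0.0.1 is used here as a safe target.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Attempt a TCP connection to each port; return the ports that are listening."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", range(8000, 8003)))
```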

Mergers and Acquisitions

When two companies merge, it is a marriage of sorts. Networks can be combined and systems can be integrated, or in some cases entirely new infrastructures may be built. These processes provide an opportunity to take a fresh look at ensuring that all systems are as secure as required. This can be rather difficult because the two entities may be using different hardware vendors, different network architectures, or different policies and procedures.

Vulnerability Scanning

Whereas a port scanner can discover open ports, a vulnerability scanner can probe for a variety of security weaknesses, including misconfigurations, out-of-date software, missing patches, and open ports. These solutions can be on premises or cloud based.

Cloud-based vulnerability scanning is a service performed from the vendor's cloud and is a good example of Software as a Service (SaaS). The benefits here are the same as the benefits derived from any SaaS offering: no equipment on the part of the subscriber and no footprint in the local network. Figure 14-13 shows a premises-based approach to vulnerability scanning, and Figure 14-14 shows a cloud-based solution. In the premises-based approach, the hardware and/or software vulnerability scanners and associated components are entirely installed on the client premises, while in the cloud-based approach, the vulnerability management platform is in the cloud. Vulnerability scanners for external vulnerability assessments are located at the solution provider's site, with additional scanners on the premises.

The following are the advantages of the cloud-based approach:
- Installation costs are low because there is no installation and configuration for the client to complete.
- Maintenance costs are low because there is only one centralized component to maintain, and it is maintained by the vendor (not the end client).
- Upgrades are included in a subscription.
- Costs are distributed among all customers.
- It does not require the client to provide onsite equipment.
However, there is a considerable disadvantage to the cloud-based approach: Whereas premises-based deployments store data findings at the organization's site, in a cloud-based deployment, the data is resident with the provider. This means the customer is dependent on the provider to ensure the security of the vulnerability data.
Qualys: Qualys is an example of a cloud-based vulnerability scanner. Sensors are placed throughout the network, and they upload data to the cloud for analysis. Sensors can be implemented as dedicated appliances or as software instances on a host. A third option is to deploy sensors as images on virtual machines.

Nessus: One of the most widely used vulnerability scanners is Nessus, a proprietary tool developed by Tenable Network Security. It is free of charge for personal use in a non-enterprise environment. Figure 14-15 shows a partial screenshot of Nessus. By default, Nessus starts by listing at the top of the output the issues found on a host that are rated with the highest severity. For the computer scanned in Figure 14-15, you can see that there is one high-severity issue (the default password for a Firebird database located on the host), and there are five medium-level issues, including two SSL certificates that cannot be trusted and a remote desktop man-in-the-middle attack vulnerability.

OpenVAS: As you might suspect from the name, the OpenVAS tool is open source. It was developed from the Nessus code base and is available as a package for many Linux distributions.
The scanner is accompanied by a regularly updated feed of network vulnerability tests (NVTs). It uses the Greenbone console, shown in Figure 14-16.

Nexpose: Nexpose is a vulnerability scanner by Rapid7 that has a free version called the community edition and several other editions that are sold commercially. The community edition supports the scanning of up to 32 hosts. It also supports compliance reporting to standards including PCI.

Nikto: Nikto is a vulnerability scanner that is dedicated to web servers. It is for Linux but can be run in Windows through a Perl interpreter. This tool is not stealthy, but it is a fast scanner. Everything it does is recorded in the target's logs. It generates a lot of information, much of it normal or informational. It is a command-line tool that is often run from within Kali Linux, a distribution that comes preinstalled with more than 300 penetration-testing programs.

Microsoft Baseline Security Analyzer: Microsoft Baseline Security Analyzer (MBSA) is a Windows tool that can scan for all sorts of vulnerabilities, including missing security patches, missing operating system updates, missing antivirus updates, and weak passwords. It also identifies issues with applications. While not included in Windows, it is a free download. Figure 14-17 shows an example of MBSA scan results. You can see in the listed results a list of security issues found on the scanned device.

NIPS

While a NIDS can alert you of malicious activity, it cannot prevent the activity. A network intrusion prevention system (NIPS) can take actions to prevent malicious activity. You should place a NIPS at the border of the network and connect it in-line between the external network and the internal network, as shown in Figure 12-7.

Stress Test Application

While fuzz testing has a goal of locating security issues, stress testing determines the workload that the application can withstand. These tests should be performed in a certain way and should always have defined objectives before testing begins. You will find many models for this, but one suggested order of activities is as follows:
Step 1. Identify test objectives in terms of the desired outcomes of the testing activity.
Step 2. Identify key scenarios: the cases that need to be stress tested (for example, test login, test searching, test checkout).
Step 3. Identify the workload that you want to apply (for example, simulate 300 users).
Step 4. Identify the metrics you want to collect and what form these metrics will take (for example, time to complete login, time to complete search).
Step 5. Create test cases. Define steps for running a single test, as well as your expected results (for example, Step 1: Select a product. Step 2: Add to cart. Step 3: Check out).
Step 6. Simulate load by using test tools (for example, attempt 300 sessions).
Step 7. Analyze results.
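Steps 3 through 7 can be sketched in Python, with a thread pool standing in for concurrent users; checkout() here is a placeholder for a real scripted scenario, and the names and numbers are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def checkout() -> float:
    """Placeholder transaction; returns the time it took to complete (the metric)."""
    start = time.perf_counter()
    time.sleep(0.01)                       # stand-in for real application work
    return time.perf_counter() - start

def stress(users: int) -> dict:
    """Apply the workload (step 6) and collect timing metrics (step 7)."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: checkout(), range(users)))
    return {"runs": len(times), "worst_seconds": max(times)}

print(stress(users=50))
```

In a real stress test, the workload would be ramped up until the collected metrics show where the application degrades.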

Chain of custody

While hard copies of chain of custody activities should be kept, some suites contain software to help manage this process. These tools can help you maintain an accurate and legal chain of custody for all evidence, with or without hard copy (paper) backup. Some perform a dual electronic signature capture that places both signatures in an Excel spreadsheet as proof of transfer. Those signatures are doubly encrypted so that if the spreadsheet is altered in any way, the signatures disappear.

User Acceptance Testing

While it is important to make web applications secure, in some cases security features make an application unusable from the user perspective. User acceptance testing (UAT) is designed to ensure that this does not occur. Keep the following guidelines in mind when designing user acceptance testing:
- Perform the testing in an environment that mirrors the live environment.
- Identify real-world use cases for execution.
- Select UAT staff from various internal departments.

Servers

While servers represent a less significant number of devices than endpoints, they usually contain the critical and sensitive assets and perform mission-critical services for the network. Therefore, these devices receive the lion's share of attention from malicious individuals. The following are some issues that can impact any device but that are most commonly directed at servers:
- DoS/DDoS: A denial-of-service (DoS) attack occurs when attackers flood a device with enough requests to degrade the performance of the targeted device. Some popular DoS attacks include SYN floods and teardrop attacks. A distributed DoS (DDoS) attack is a DoS attack that is carried out from multiple attack locations. Vulnerable devices are infected with software agents called zombies. The vulnerable devices become botnets, which then carry out the attack. Because of the distributed nature of the attack, identifying all the attacking botnets is virtually impossible. The botnets also help to hide the original source of the attack.
- Buffer overflow: Buffers are portions of system memory that are used to store information. A buffer overflow occurs when the amount of data that is submitted to an application is larger than the buffer can handle. Typically, this type of attack is possible because of poorly written application or operating system code, and it can result in an injection of malicious code. To protect against this issue, organizations should ensure that all operating systems and applications are updated with the latest service packs and patches. In addition, programmers should properly test all applications to check for overflow conditions. Finally, programmers should use input validation to ensure that the data submitted is not too large for the buffer.
- Mobile code: Mobile code is any software that is transmitted across a network to be executed on a local system. Examples of mobile code include Java applets, JavaScript code, and ActiveX controls.
Most mobile code includes security controls: Java implements sandboxes, and ActiveX uses digital code signatures. Malicious mobile code can be used to bypass access controls. Organizations should ensure that users understand the security concerns related to malicious mobile code. Users should only download mobile code from legitimate sites and vendors.
- Emanations: Emanations are electromagnetic signals that are emitted by an electronic device. Attackers can target certain devices or transmission media to eavesdrop on communication without having physical access to the device or medium. The TEMPEST program, initiated by the United States and United Kingdom, researches ways to limit emanations and standardizes the technologies used. Any equipment that meets TEMPEST standards suppresses signal emanations using shielding material. Devices that meet TEMPEST standards usually implement an outer barrier or coating, called a Faraday cage or Faraday shield. TEMPEST devices are most often used in government, military, and law enforcement settings.
- Backdoor/trapdoor: A backdoor, or trapdoor, is a mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device or application. Privileged backdoor accounts are the most common type of backdoor in use today. Most established vendors no longer release devices or applications with this security issue. You should be aware of any known backdoors in the devices or applications you manage.

What provides a greater opportunity to attackers; a wired or wireless environment?

Wireless. With a wired environment, you have some control over where your packets go because they are bound to cables, but with wireless, packets are available to anyone within range.

Normal Resources

_________ should be restored within 7 days but are not considered as important as critical, urgent, or important resources.

Urgent Resources

__________ should be restored within 24 hours but are not considered as important as critical resources.

Physical Controls

implemented to protect an organization's facilities and personnel. Personnel concerns should take priority over all other concerns. Specific examples of physical controls include perimeter security, badges, swipe cards, guards, dogs, man traps, biometrics, and cabling.When controlling physical entry into a building, security professionals should ensure that the appropriate policies are in place for visitor control, including visitor logs, visitor escort, and visitor access limitation to sensitive areas.

Detective Controls

in place to detect an attack while it is occurring to alert appropriate personnel. Examples of detective controls include motion detectors, intrusion detection systems (IDS), logs, guards, investigations, and job rotation.

compensative control

in place to substitute for a primary access control and mainly act to mitigate risks. Using compensative controls, you can reduce the risk to a more manageable level. Examples of compensative controls include requiring two authorized signatures to release sensitive or confidential information and requiring two keys owned by different personnel to open a safety deposit box.

Personally Identifiable Information (PII)

any piece of data that can be used alone or with other information to identify a single person. Any PII that an organization collects must be protected in the strongest manner possible. PII includes full name, identification numbers (including driver's license number and Social Security number), date of birth, place of birth, biometric data, financial account numbers (both bank account and credit card numbers), and digital identities (including social media names and tags). Keep in mind that different countries and levels of government can have different qualifiers for identifying PII. Security professionals must ensure that they understand international, national, state, and local regulations and laws regarding PII. As the theft of this data becomes even more prevalent, you can expect more laws to be enacted that will affect your job. Examples of PII are shown in Figure 7-1.

Directive controls

specify acceptable practice within an organization. They are in place to formalize an organization's security directive mainly to its employees. The most popular directive control is an acceptable use policy (AUP), which lists proper (and often examples of improper) procedures and behaviors that personnel must follow. Any organizational security policies or procedures usually fall into this access control category. You should keep in mind that directive controls are effective only if there is a stated consequence for not following the organization's directions.

Fingerprinting, or hashing

the process of using a hashing algorithm to reduce a large document or file to a character string that can be used to verify the integrity of the file (that is, whether the file has changed in any way). To be useful, a hash value must have been computed at a time when the software or file was known to have integrity (for example, at release time). Then at any time thereafter, the software file can be checked for integrity by calculating a new hash value and comparing it to the value from the initial calculation. If the character strings do not match, a change has been made to the software.
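A minimal sketch of this baseline-then-verify workflow, using Python's standard `hashlib` (SHA-256 chosen for illustration; the data and names are invented):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that serves as the file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Baseline hash computed when the file is known to have integrity (e.g., at release)
original = b"known-good release build"
baseline = fingerprint(original)

# Later: recompute and compare; any change produces a different digest
assert fingerprint(b"known-good release build") == baseline   # unchanged file
assert fingerprint(b"known-good release build!") != baseline  # tampered file
```

In practice the recomputed hash would come from the file on disk, read in binary mode, rather than an in-memory string.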

Email Harvesting

These bots are programmed to gather email addresses using whatever methods are available on the Internet.

proxy firewall

A firewall that stands between a connection from the outside and the inside and makes the connection on behalf of the endpoints. With a proxy firewall, there is no direct connection.

Dynamic packet-filtering firewall

A firewall type that can react to network traffic and create or modify configuration rules to adapt.

screened host firewall

a single firewall or system designed to be externally accessible and protected by placement behind a filtering firewall. When traffic comes into the router and is forwarded to the firewall, it is inspected before going into the internal network.

correlation analysis

analysis of the degree to which changes in one variable are associated with changes in another
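In practice this association is quantified with a correlation coefficient; a small, purely illustrative implementation of Pearson's r (all names are ours, not from a library):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: covariance normalized by the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly correlated series (e.g., failed logins rising with lockout events)
assert abs(pearson([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-9
# Perfectly inversely correlated series
assert abs(pearson([1, 2, 3, 4], [8, 6, 4, 2]) + 1.0) < 1e-9
```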

Firewall logs

can vary widely in appearance but generally list each interaction of the firewall with the respective traffic traversing it. In its simplest form, a firewall log is a text file showing each of these interactions, as in the example in Figure 2-3, which shows the Windows Firewall log. As you can see, it shows the type of information included in all firewall logs, including the following: Source and destination IP address Source and destination MAC address Source and destination port number Action taken
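As a sketch, such a space-delimited text log can be parsed field by field. The sample line and field order below are illustrative assumptions modeled on the Windows Firewall log layout, not a reproduction of Figure 2-3:

```python
# Assumed field order, modeled on the #Fields header of a Windows Firewall log
FIELDS = ["date", "time", "action", "protocol",
          "src-ip", "dst-ip", "src-port", "dst-port"]

sample = "2023-05-01 10:15:02 DROP TCP 203.0.113.7 192.168.1.10 51515 445"

def parse_line(line: str) -> dict:
    """Split a space-delimited log line into a field -> value mapping."""
    return dict(zip(FIELDS, line.split()))

entry = parse_line(sample)
assert entry["action"] == "DROP"
assert entry["dst-port"] == "445"   # blocked SMB attempt in this invented line
```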

What are the three types of access controls?

-Administrative (management) Controls -Logical (technical) controls -Physical Controls

What are the three metric groups that CVSS is composed of?

-Base: Characteristics of a vulnerability that are constant over time and user environments -Temporal: Characteristics of a vulnerability that change over time but not among user environments -Environmental: Characteristics of a vulnerability that are relevant and unique to a particular user's environment
The base metric group includes the following metrics:
-Access Vector (AV): Describes how the attacker would exploit the vulnerability and has three possible values: L (local) means that the attacker must have physical or logical access to the affected system; A (adjacent network) means that the attacker must be on the local network; N (network) means that the attacker can cause the vulnerability from any network.
-Access Complexity (AC): Describes the difficulty of exploiting the vulnerability and has three possible values: H (high) means that the vulnerability requires special conditions that are hard to find; M (medium) means that the vulnerability requires somewhat special conditions; L (low) means that the vulnerability does not require special conditions.
-Authentication (Au): Describes the authentication an attacker would need to get through to exploit the vulnerability and has three possible values: M (multiple) means that the attacker would need to get through two or more authentication mechanisms; S (single) means one authentication mechanism; N (none) means that no authentication mechanisms are in place to stop the exploit of the vulnerability.
-Confidentiality (C): Describes the information disclosure that may occur if the vulnerability is exploited and has three possible values: N (none) means that there is no confidentiality impact; P (partial) means that some access to information would occur; C (complete) means that all information on the system could be compromised.
-Integrity (I): Describes the type of data alteration that might occur and has three possible values: N (none) means that there is no integrity impact; P (partial) means that some information modification would occur; C (complete) means that all information on the system could be compromised.
-Availability (A): Describes the disruption that might occur if the vulnerability is exploited and has three possible values: N (none) means that there is no availability impact; P (partial) means that system performance is degraded; C (complete) means that the system is completely shut down.
The CVSS vector looks something like this: CVSS2#AV:L/AC:H/Au:M/C:P/I:N/A:N
This vector is read as follows:
-AV:L: Access vector, local; the attacker must have physical or logical access to the affected system.
-AC:H: Access complexity, high; the vulnerability requires special conditions that are hard to find.
-Au:M: Authentication, multiple; the attacker would need to get through two or more authentication mechanisms.
-C:P: Confidentiality, partial; some access to information would occur.
-I:N: Integrity, none; there is no integrity impact.
-A:N: Availability, none; there is no availability impact.
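As an illustrative sketch, a vector string in this CVSSv2 format can be split into its individual metrics (the function name is hypothetical):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Parse a CVSSv2 vector such as 'CVSS2#AV:L/AC:H/Au:M/C:P/I:N/A:N'."""
    body = vector.split("#", 1)[-1]          # drop the 'CVSS2#' prefix if present
    return dict(pair.split(":") for pair in body.split("/"))

metrics = parse_cvss_vector("CVSS2#AV:L/AC:H/Au:M/C:P/I:N/A:N")
assert metrics["AV"] == "L"   # local access required
assert metrics["C"] == "P"    # partial confidentiality impact
```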

PCI-DSS specifies 12 requirements:

Build and maintain a secure network:
1. Install and maintain a firewall configuration to protect cardholder data.
2. Do not use vendor-supplied defaults for system passwords and other security parameters.
Protect cardholder data:
3. Protect stored cardholder data.
4. Encrypt transmission of cardholder data across open, public networks.
Maintain a vulnerability management program:
5. Use and regularly update antivirus software on all systems commonly affected by malware.
6. Develop and maintain secure systems and applications.
Implement strong access control measures:
7. Restrict access to cardholder data based on business need to know.
8. Assign a unique ID to each person who has computer access.
9. Restrict physical access to cardholder data.
Regularly monitor and test networks:
10. Track and monitor all access to network resources and cardholder data.
11. Regularly test security systems and processes.
Maintain an information security policy:
12. Maintain a policy that addresses information security.

What are the components of SCAP?

-Common Configuration Enumeration (CCE): Configuration best practice statements maintained by NIST. -Common Platform Enumeration (CPE): Methods for describing and classifying operating systems, applications, and hardware devices. -Common Weakness Enumeration (CWE): Design flaws in the development of software that can lead to vulnerabilities. -Common Vulnerabilities and Exposures (CVE): Vulnerabilities in published operating systems and application software.

What are the modes of MAC?

-Dedicated security mode: A system is operating in dedicated security mode if it employs a single classification level. In this system, all users can access all data, but they must sign a nondisclosure agreement (NDA) and be formally approved for access on a need-to-know basis.
-System high security mode: In a system operating in system high security mode, all users have the same security clearance (as in the dedicated security model), but they do not all possess a need-to-know clearance for all the information in the system. Consequently, although a user might have clearance to access an object, she still might be restricted if she does not have need-to-know clearance pertaining to the object.
-Compartmented security mode: In a compartmented security mode system, all users must possess the highest security clearance (as in both dedicated and system high security), but they must also have valid need-to-know clearance, a signed NDA, and formal approval for all information to which they have access. The objective is to ensure that the minimum number of people possible have access to information at each level or compartment.

What are the two mitigation techniques available for preventing ARP poisoning on a Cisco switch?

-Dynamic ARP inspection (DAI) and DHCP snooping

The following measures help prevent session hijacking:

-Encode heuristic information, like IP addresses, into session IDs. -Use SecureSessionModule, which modifies each session ID by appending a hash to the ID. The hash or MAC is generated from the session ID, the network portion of the IP address, the UserAgent header in the request, and a secret key stored on the server. SecureSessionModule uses this value to validate each request for a session cookie.
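A hedged sketch of the second technique: deriving a MAC from the session ID, the network portion of the IP address, the User-Agent header, and a server-side secret, so a cookie stolen and replayed from another network fails validation (all names and the key are illustrative, not SecureSessionModule's actual API):

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"   # stored only on the server (invented value)

def sign_session_id(session_id: str, ip_network: str, user_agent: str) -> str:
    """Append an HMAC over the session ID, network portion of the IP,
    and User-Agent, binding the cookie to the client's context."""
    msg = f"{session_id}|{ip_network}|{user_agent}".encode()
    mac = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{session_id}.{mac}"

def validate(cookie: str, ip_network: str, user_agent: str) -> bool:
    """Recompute the MAC from the presented context and compare."""
    session_id, _, _ = cookie.rpartition(".")
    expected = sign_session_id(session_id, ip_network, user_agent)
    return hmac.compare_digest(cookie, expected)

cookie = sign_session_id("abc123", "192.168.1", "Mozilla/5.0")
assert validate(cookie, "192.168.1", "Mozilla/5.0")    # same context: accepted
assert not validate(cookie, "10.0.0", "Mozilla/5.0")   # replayed elsewhere: rejected
```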

What are the advantages of VLANS?

-Flexibility: Removes the requirement that devices in the same LAN (or, in this case, VLAN) be in the same location. -Performance: Creating smaller broadcast domains (each VLAN is a broadcast domain) improves performance. -Security: Provides more separation at Layers 2 and 3. -Cost: Switched networks with VLANs are less costly than routed networks because routers cost more than switches.

netstat commands

-n: Displays active TCP connections -m: Displays the memory statistics for the networking code -s: Displays statistics by protocol -p: Specifies a set of protocols -r: Displays the contents of the IP routing table

What are ways to harvest DNS records?

-Unauthorized zone transfers, the use of the tracert (Windows) or traceroute (Unix/Linux) tool, and the use of the Whois protocol, which is used to query databases that contain information about the owners of Internet resources such as domain names, IP address blocks, and autonomous systems (AS).

Directory Services

Directory services store, organize, and provide access to information in a computer operating system's directory. With directory services, users can access a resource by using the resource's name instead of its IP or MAC address. Most enterprises implement an internal directory services server that handles any internal requests. This internal server communicates with a root server on a public network or with an externally facing server that is protected by a firewall or other security device to obtain information on any resources that are not on the local enterprise network. LDAP, Active Directory, and DNS are primary examples of directory services.

Context-Based Authentication: Behavioral

It is possible for authentication systems to track the behavior of an individual over time and use this information to detect when an entity is performing actions that, while within the rights of the entity, differ from the normal activity of the entity. This could be an indication that the account has been compromised. The real strength of an authentication system lies in the way you can combine the attributes just discussed to create very granular policies such as the following: Gene can access the Sales folder from 9 to 5 if he is in the office and is using his desktop device but only from 10 to 3 using his smart phone in the office but not at all during 9 to 5 from outside the office. The main security issue is that the complexity of the rule creation can lead to mistakes that actually reduce security. A complete understanding of the system is required, and special training should be provided to anyone managing the system. Other security issues include privacy issues, such as user concerns about the potential misuse of information used to make contextual decisions. These concerns can usually be addressed through proper training about the power of context-based security.

Port Scans

Just as operating systems have well-known vulnerabilities, so do common services. By determining the services that are running on a system, an attacker also discovers potential vulnerabilities of the service and may attempt to take advantage of them. This is typically done with a port scan in which all "open," or "listening," ports are identified. Once again, the lion's share of these issues will have been mitigated with the proper security patches, but it is not uncommon for security analysts to find that systems that are running vulnerable services are missing the relevant security patches. Consequently, when performing service discovery, patches should be checked on systems found to have open ports. It is also advisable to close any ports not required for the system to do its job.
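A minimal illustration of how a TCP connect scan identifies "open" (listening) ports, using only the standard library; a throwaway local listener is created so the example is self-contained rather than scanning a real host:

```python
import socket

# Start a throwaway listener so the scan has something to find
server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
server.listen(1)
open_port = server.getsockname()[1]

def is_open(host: str, port: int, timeout: float = 0.2) -> bool:
    """A port is 'open' if a full TCP connect succeeds (connect_ex returns 0)."""
    with socket.socket() as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Scan a small range around the known listener
found = [p for p in range(open_port - 1, open_port + 2)
         if is_open("127.0.0.1", p)]
assert open_port in found
server.close()
```

Real scanners such as nmap use the same principle but add SYN (half-open) scans, timing controls, and service fingerprinting.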

Malicious Processes (Common Host-Related Symptoms)

Malicious programs use processes to access the CPU, just as normal programs do. This means their processes are considered malicious processes. You can sometimes locate processes that are using either CPU or memory by using Task Manager, but again, many malware programs don't show up in Task Manager. Either Process Explorer or some other tool may give better results than Task Manager. If you locate an offending process and end that process, don't forget that the program is still there, and you need to locate it and delete all of its associated files and registry entries.

Malicious Software

Malicious software, also called malware, is any software that is designed to perform malicious acts. The following are the four classes of malware you should understand: Virus: Any malware that attaches itself to another application to replicate or distribute itself Worm: Any malware that replicates itself, meaning that it does not need another application or human interaction to propagate Trojan horse: Any malware that disguises itself as a needed application while carrying out malicious actions Spyware: Any malware that collects private user data, including browsing history or keyboard input The best defense against malicious software is to implement antivirus and anti-malware software. Today most vendors package these two types of software in the same package. Keeping antivirus and anti-malware software up to date is vital. It includes ensuring that the latest virus and malware definitions are installed.

Input Validation (software development)

Many attacks arise because a web application has not validated the data entered by the user (or hacker). Input validation is the process of checking all input for issues such as proper format and proper length. In many cases, these validators use either the blacklisting of characters or patterns or the whitelisting of characters or patterns. Blacklisting looks for characters or patterns to block. It can be prone to blocking legitimate requests. Whitelisting looks for allowable characters or patterns and allows only those. Input validation tools fall into several categories: Cloud-based services Open source tools Proprietary commercial products Because these tools vary in the amount of skill required, the choice should be made based on the skill sets represented on the cybersecurity team. A fancy tool that no one knows how to use is not an effective tool.

Input Validation

Many of the attacks discussed so far arise because the web application has not validated the data entered by the user (or hacker). Input validation is the process of checking all input for things such as proper format and proper length. In many cases, these validators use either the blacklisting of characters or patterns or the whitelisting of characters or patterns. Blacklisting looks for characters or patterns to block. It can inadvertently block legitimate requests. Whitelisting looks for allowable characters or patterns and only allows those. The length of the input should also be checked and verified to prevent buffer overflows. This attack type is discussed later in this lesson.
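A small whitelist-validation sketch combining the pattern and length checks described above (the username rule is an invented example):

```python
import re

# Whitelist: a username may contain only 3-16 letters, digits, or underscores.
# The bounded length also guards against oversized input.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def valid_username(value: str) -> bool:
    """Accept only input matching the allowed pattern and length."""
    return bool(USERNAME_RE.fullmatch(value))

assert valid_username("alice_42")
assert not valid_username("x")                       # too short
assert not valid_username("bob'; DROP TABLE users")  # disallowed characters
assert not valid_username("a" * 64)                  # over the length limit
```

Because the rule names what is allowed rather than what is forbidden, novel attack strings are rejected by default.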

Scheduled Reviews/Retirement of Processes

Many of the processes and procedures that are developed are done to mitigate vulnerability. In that regard, they represent another form of security control. Security control assessments should be used to verify that the organization's or business unit's security goals are being met. Vulnerability assessments and penetration tests are considered part of this process. If a security control that does not meet a security goal is implemented, the security control is ineffective. Once an assessment has been conducted, security professionals should use the assessment results to determine which security controls have weaknesses or deficiencies. Security professionals should then work to eliminate the weaknesses or deficiencies. Reviewing the effectiveness of the control should include asking the following questions: Which security controls are we using? How can these controls be improved? Are these controls necessary? Are there new issues that have arisen? Which security controls can be deployed to address the new issues?

Security Regression Testing

Regression testing is done to verify functionality after a change is made to the software. Security regression testing is a subset of regression testing that validates that changes have not reduced the security of the application or opened new weaknesses. This testing should be performed by a different group than the group that implemented the change. It can occur in any part of the development process and includes the following types: Unit regression: This type tests the code as a single unit. Interactions and dependencies are not tested. Partial regression: With this type, new code is made to interact with other parts of older existing code. Complete regression: This type is final regression testing performed across the application as a whole.

Extranet

a network configuration that allows selected outside organizations to access internal information systems

Intranet

a network designed for the exclusive use of computer users within an organization that cannot be accessed by users outside the organization

Security Appliances

Security appliances are one of the key components in a defense-in-depth strategy. These hardware devices are designed to provide some function that supports the securing of the network or detecting vulnerabilities and attacks. While these appliances are covered in depth in Lesson 14, they are listed here: Intrusion prevention systems Intrusion detection systems Firewalls SIEM systems Hardware encryption devices One additional type of security device that also bears mention is devices that perform unified threat management (UTM). UTM is an approach that involves performing multiple security functions within the same device or appliance. The functions may include the following: Network firewalling Network intrusion prevention Gateway antivirus Gateway antispam VPN Content filtering Load balancing Data leak prevention On-appliance reporting UTM makes administering multiple systems unnecessary. However, some feel that UTM creates a single point of failure; they favor creating multiple layers of devices as a more secure approach.

Chain of custody form

The chain of custody form indicates who has handled the evidence, when each person handled it, and the order in which the handlers were in possession of the evidence. This form is used to provide a complete account of the handling and storage of the evidence.

Separation of Duties

The concept of separation of duties prescribes that sensitive operations be divided among multiple users so that no one user has the rights and access to carry out the operation alone. Separation of duties is valuable in deterring fraud by ensuring that no single individual can compromise a system. It is considered a preventive administrative control. There are two basic types of this: split knowledge and third-party outsourcing.

Risk Appetite

The degree of uncertainty an entity is willing to take on, in anticipation of a reward.

Business Process Interruption

The deployment of mitigations cannot be done in such a way that business operations and processes are interrupted. Therefore, the need to conduct these activities during off hours can also be a factor that impedes the remediation of vulnerabilities.

Develop

The develop phase involves writing the code or instructions that make the software work. The emphasis of this phase is strict adherence to secure coding practices. Some models that can help promote secure coding are covered later in this lesson, in the section "Secure Coding Best Practices."Many security issues with software are created through insecure coding practices, such as lack of input validation or data type checks. It is important to identify these issues in a code review that attempts to assume all possible attack scenarios and their impact on the code.

Cryptography

The discussion of technology components of defense in depth would not be complete without the inclusion of cryptography. Protecting information with cryptography involves the deployment of a cryptosystem, which consists of software, protocols, algorithms, and keys. The strength of any cryptosystem comes from the algorithm and the length and secrecy of the key. For example, one method of making a cryptographic key more resistant to exhaustive attacks is to increase the key length. If the cryptosystem uses a weak key, it facilitates attacks against the algorithm.While a cryptosystem supports the three core parts of the confidentiality, integrity, and availability (CIA) triad, cryptosystems directly provide authentication, confidentiality, integrity, authorization, and non-repudiation. The availability tenet of the CIA triad is supported by cryptosystems, meaning that implementing cryptography helps ensure that an organization's data remains available. However, cryptography does not directly ensure data availability, although it can be used to protect the data. Security services provided by cryptosystems include the following: Authentication: Cryptosystems provide authentication by being able to determine the sender's identity and validity. Digital signatures verify the sender's identity. Protecting the key ensures that only valid users can properly encrypt and decrypt the message. Confidentiality: Cryptosystems provide confidentiality by altering the original data in such a way as to ensure that the data cannot be read except by the valid recipient. Without the proper key, unauthorized users are unable to read the message. Integrity: Cryptosystems provide integrity by allowing valid recipients to verify that data has not been altered. Hash functions do not prevent data alteration but provide a means to determine whether data alteration has occurred. 
Authorization: Cryptosystems provide authorization by providing the key to a valid user after that user proves his identity through authentication. The key given allows the user to access a resource. Non-repudiation: Non-repudiation in cryptosystems provides proof of the origin of data, thereby preventing the sender from denying that he sent the message and supporting data integrity. Public key cryptography and digital signatures provide non-repudiation. Key management: Key management in cryptography is essential to ensure that the cryptography provides confidentiality, integrity, and authentication. If a key is compromised, it can have serious consequences throughout an organization. Key management involves the entire process of ensuring that keys are protected during creation, distribution, transmission, and storage. As part of this process, keys must also be destroyed properly. When you consider the vast number of networks over which a key is transmitted and the different types of system on which a key is stored, the enormity of this issue really comes to light.As the most demanding and critical aspect of cryptography, it is important that security professionals understand key management principles. Keys should always be stored in ciphertext when stored on a non-cryptographic device. Key distribution, storage, and maintenance should be automatic by having the processes integrated into the application.Because keys can be lost, backup copies should be made and stored in a secure location. A designated individual should have control of the backup copies, and other individuals should be designated to serve as emergency backups. The key recovery process should require more than one operator to ensure that only valid key recovery requests are completed. In some cases, keys are even broken into parts and deposited with trusted agents, which provide their part of the key to a central authority when authorized to do so. 
Although other methods of distributing parts of a key are used, all the solutions involve the use of trusted agents entrusted with part of the key and a central authority tasked with assembling the key from its parts. Also, key recovery personnel should span the entire organization and not just be members of the IT department. Organizations should limit the number of keys that are used. The more keys you have, the more keys you must track and protect. Although a valid reason for issuing a key should never be ignored, limiting the number of keys issued and used reduces the potential damage. When designing the key management process, you should consider how to do the following: Securely store and transmit the keys. Use random keys. Issue keys of sufficient length to ensure protection. Properly destroy keys when no longer needed. Back up the keys to ensure that they can be recovered.

OS and process analysis

These tools focus on the activities of the operating system and the processes that have been executed. While most operating systems have tools of some sort that can report on processes, tools included in a forensics suite have more robust features and capabilities.

Mean time between failures (MTBF)

This is the estimated amount of time a device will operate before a failure occurs. This amount is calculated by the device vendor. System reliability is increased by a higher MTBF and lower MTTR.
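The interplay of MTBF and MTTR is commonly expressed as steady-state availability; a small illustrative calculation (the formula A = MTBF / (MTBF + MTTR) is standard, the numbers are invented):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the device is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Higher MTBF and lower MTTR both push availability toward 1.0
a1 = availability(mtbf_hours=1000, mttr_hours=10)
a2 = availability(mtbf_hours=2000, mttr_hours=5)
assert a2 > a1
assert round(a1, 4) == 0.9901   # 1000 / 1010
```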

Recovery point objective (RPO)

This is the point in time to which the disrupted resource or function must be returned.

Recovery time objective (RTO)

This is the shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences. RTO assumes that an acceptable period of downtime exists. RTO should be smaller than MTD.

Verify Logging/Communication to Security Monitoring

To ensure that you will have good security data going forward, you need to ensure that all logs related to security are collecting data. Pay special attention to the manner in which the logs react when full. With some settings, the log begins to overwrite older entries with new entries. With other settings, the service stops collecting events when the log is full. Security log entries need to be preserved. This may require manual archiving of the logs and subsequent clearing of the logs. Some logs make this possible automatically, whereas others require a script. If all else fails, check the log often to assess its state. Many organizations send all security logs to a central location. This could be a Syslog server, or it could be a security information and event management (SIEM) system. These systems not only collect all the logs, they use the information to make inferences about possible attacks. Having access to all logs allows the system to correlate all the data from all responding devices. Regardless of whether you are logging to a syslog server or a SIEM system, you should verify that all communications between the devices and the central server are occurring without a hitch. This is especially true if you had to rebuild the system manually rather than restore from an image, as there would be more opportunity for human error in the rebuilding of the device.

Total Risk vs. Residual Risk

Total risk is the risk that an organization might encounter if it decides not to implement any safeguards. As you already know, no environment is ever fully secure, so you must always deal with residual risk. Residual risk is the risk that is left over after safeguards have been implemented. It is represented using the following equation: Residual risk = Total risk − Countermeasures. This equation is intended for conceptual reasoning rather than actual calculation.

TCP Flags

URG - urgent pointer field is valid; process this data ahead of queued data
ACK - acknowledgment number is valid
PSH - don't buffer; deliver immediately
RST - reset (abort) the connection
SYN - synchronize sequence numbers to open a connection
FIN - done sending data
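These flags occupy individual bits of the TCP header's flags byte; a minimal decoder sketch (bit values per the TCP specification):

```python
# Bit positions of the TCP flags within the header's flags byte
TCP_FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
             "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def decode_flags(flags_byte: int) -> set:
    """Return the set of flag names present in a TCP flags byte."""
    return {name for name, bit in TCP_FLAGS.items() if flags_byte & bit}

assert decode_flags(0x02) == {"SYN"}         # first step of the handshake
assert decode_flags(0x12) == {"SYN", "ACK"}  # handshake reply
assert decode_flags(0x11) == {"FIN", "ACK"}  # graceful teardown
```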

Unauthorized Privileges (Common Host-Related Symptoms)

Unauthorized changes can be the result of privilege escalation. Check all system accounts for changes to the permissions and rights that should be assigned, paying special attention to new accounts with administrative privileges. When assigning permissions, always exercise the concept of least privilege, as discussed in Lesson 1. Also ensure that account reviews take place on a regular basis to identify privileges that have been escalated and accounts that are no longer needed.

Server based Vulnerability Scanner

Uses Push technology Has the following characteristics: -Good for networks with plentiful bandwidth -Dependent on network connectivity -Central authority does all the scanning and deployment

Vulnerability Feed

Vulnerability feeds are RSS feeds dedicated to the sharing of information about the latest vulnerabilities. Subscribing to these feeds can enhance the knowledge of the scanning team and can keep the team abreast of the latest issues. For example, the National Vulnerability Database is the U.S. government repository of standards-based vulnerability management data represented using Security Content Automation Protocol (SCAP) (covered later in this section).
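Such feeds are ordinary RSS XML; a minimal sketch of pulling advisory titles out of a feed document using only the standard library (the feed content and CVE IDs are invented):

```python
import xml.etree.ElementTree as ET

# A hypothetical, minimal RSS 2.0 vulnerability feed (contents are illustrative)
feed_xml = """<rss version="2.0"><channel>
  <title>Example Vulnerability Feed</title>
  <item><title>CVE-2023-0001 buffer overflow</title></item>
  <item><title>CVE-2023-0002 privilege escalation</title></item>
</channel></rss>"""

root = ET.fromstring(feed_xml)
advisories = [item.findtext("title") for item in root.iter("item")]
assert advisories == ["CVE-2023-0001 buffer overflow",
                      "CVE-2023-0002 privilege escalation"]
```

A real subscription would fetch the XML over HTTPS from the feed URL on a schedule and alert on new entries.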

Sinkholes can be used to mitigate the following issues

Worms Compromised devices communicating with command and control servers External attacks targeted at a single device inside the network

Drive adapters

You also need drive adapters. The best approach is to invest in a multipack drive adapter kit. It should include support for the following drive types: -microSATA -SATA blade type SSD -SATA LIF

Cisco firewall is called the Adaptive Security Appliance (ASA)

You may still find some earlier Cisco firewall products, called Private Internet Exchange (PIX), deployed as well. The ASA can be installed as an appliance, and it can also be integrated into routers and switches as plug-in modules. This firewall can go far beyond simple firewall functions and can also do content inspection. It can be used to create several types of VPN connections, some that require software on the client and some that do not. Finally, it includes a built-in IPS.

Multihomed Firewall

You might recall from Lesson 1 that a firewall can be multihomed. One popular type is the three-legged firewall. In this configuration, there are three interfaces: one connected to the untrusted network, one connected to the internal network, and one connected to a part of the network called a demilitarized zone (DMZ), a protected network that contains systems needing a higher level of protection. The advantages of a three-legged firewall include the following: They offer cost savings on devices because you need only one firewall and not two or three. It is possible to perform IP masquerading (NAT) on the internal network while not doing so for the DMZ. Among the disadvantages are the following: The complexity of the configuration is increased. There is a single point of failure. The location of a three-legged firewall is shown in Figure 14-6.

Analysis utilities

You need a tool to analyze the bit-level copy that is created. Many of these tools are available on the market. Often these tools are included in forensic suites and toolkits, such as those sold by EnCase, FTK, and Helix. (For more on these tools, see Lesson 14, "Using Cybersecurity Tools and Technologies.")

Hashing utilities

You need to be able to prove that certain evidence has not been altered during your possession of it. Hashing utilities use hashing algorithms to create a value that can be used later to verify that the information is unchanged. The two most common algorithms used are Message Digest 5 (MD5) and the Secure Hash Algorithm (SHA).
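As a sketch of how such a utility works, Python's standard hashlib module can produce both digests (the evidence_hashes helper name is invented for illustration, not part of any forensic suite):

```python
import hashlib

def evidence_hashes(data: bytes) -> dict:
    """Compute MD5 and SHA-256 digests of a piece of evidence.

    Recording these values at acquisition time lets an examiner
    later prove the evidence was unchanged while in custody.
    """
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# Hashing the same data twice yields identical digests, which is
# the property used to verify integrity.
assert evidence_hashes(b"disk image") == evidence_hashes(b"disk image")
# A single changed byte produces completely different digests.
assert evidence_hashes(b"disk image") != evidence_hashes(b"disk imagX")
```

In practice the examiner hashes the original media and the bit-level copy and records both values in the case documentation.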

Wiped removable media

You should have removable media of various types that have been wiped clean. These may include USB flash drives, external hard drives, MultiMediaCards (MMC), Secure Digital (SD) cards, Compact Flash (CF) cards, memory sticks, xD picture cards, CDs, CD-RW, DVDs, and Blu-ray discs. It is also helpful to have a device that can read flash memory, such as an UltraBlock Forensics Card reader and writer. This small device can read almost all of the media types just listed.

SIEM

You should place a SIEM device in a central location where all reporting systems can reach it. Moreover, given the security information it contains, you should put it in a secured portion of the network. More important than the placement, though, is the tuning of the system so that it doesn't gather so much information that it is unusable.

What are the main categories of penetration tests?

Zero-knowledge test, partial-knowledge test (the testing team is provided with public knowledge regarding the organization's network), and full-knowledge test

ISA (Interconnection Security Agreement)

[a]n agreement established between the organizations that own and operate connected IT systems to document the technical requirements of the interconnection. The ISA also supports a Memorandum of Understanding or Agreement (MOU/A) between the organizations

NDA

____ must be signed by any entity that has access to information that is part of a trade secret. Anyone who signs an NDA will suffer legal consequences if the organization is able to prove that the signer violated it.

Copyright

______ ensures that a work that is authored is protected from any form of reproduction or use without the consent of the copyright holder, usually the author or artist who created the original work. A copyright lasts longer than a patent. In 1996, the World Intellectual Property Organization (WIPO) standardized the treatment of digital copyrights. Copyright management information (CMI) is licensing and ownership information that is added to any digital work. In this standardization, WIPO stipulated that CMI included in copyrighted material cannot be altered.

Administrative (Management) Controls

________ controls are implemented to administer the organization's assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management. These controls are commonly referred to as soft controls. Specific examples are personnel controls, data classification, data labeling, security awareness training, and supervision. Security awareness training is a very important administrative control. Its purpose is to improve the organization's attitude about safeguarding data. The benefits of security awareness training include reduction in the number and severity of errors and omissions, better understanding of information value, and better administrator recognition of unauthorized intrusion attempts. A cost-effective way to ensure that employees take security awareness seriously is to create an award or recognition program.

Important Resources

_________ should be restored within 72 hours but are not considered as important as critical or urgent resources.

Windows 10 Logs

A Windows 10 computer has the following logs:
-Application: This log focuses on the operation of Windows applications. Events in this log are classified as error, warning, or information, depending on the severity of the event.
-Security: This log focuses on security-related events, called audit events, described as either successful or failed, depending on the event.
-Setup: This log focuses on events that occur during the installation of a program. It is useful for troubleshooting when the installation fails.
-System: This log focuses on system events, which are sent by Windows and Windows system services and are classified as error, warning, or information.

Third Party Connection Agreement (TCA)

a document that spells out exactly what security measures should be taken with respect to the handling of data exchanged between parties. This document should be executed in any instance where a partnership involves relying on another entity to secure company data.

Industrial control system (ICS)

a general term that encompasses several types of control systems used in industrial production. The most widespread is supervisory control and data acquisition (SCADA). SCADA is a system that operates with coded signals over communication channels to provide control of remote equipment.

Advanced Persistent Threat

a hacking process that targets a specific entity and is carried out over a long period of time. In most cases, the victim of an APT is a large corporation or government entity. The attacker is usually a group of organized individuals or a government. The attackers have a predefined objective. Once the objective is met, the attack is halted. APTs can often be detected by monitoring logs and performance metrics. While no defensive actions are 100% effective, the following actions may help mitigate many APTs: -Use application whitelisting to help prevent malicious software and unapproved programs from running. -Patch applications such as Java, PDF viewers, Flash, web browsers, and Microsoft Office. -Patch operating system vulnerabilities. -Restrict administrative privileges to operating systems and applications, based on user duties.

dual-homed firewall

a host that resides on more than one network and possesses more than one network card

Common Vulnerability Scoring System (CVSS)

a system of ranking discovered vulnerabilities based on predefined metrics. This system ensures that the most critical vulnerabilities can be easily identified and addressed after a vulnerability test is completed. Scores are awarded on a scale of 0 to 10, with the values having the following ranks:
-0: No issues
-0.1 to 3.9: Low
-4.0 to 6.9: Medium
-7.0 to 8.9: High
-9.0 to 10.0: Critical
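The ranking scale can be sketched in a few lines of Python (cvss_severity is an illustrative helper name; the thresholds follow the scale listed above):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS base score (0.0 to 10.0) to its qualitative rank."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"      # "no issues" on the scale above
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

assert cvss_severity(9.8) == "Critical"  # e.g., a remotely exploitable flaw
assert cvss_severity(5.3) == "Medium"
```

A scanner report sorted by this rank puts the vulnerabilities that need immediate remediation at the top of the list.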

Intellectual Property

a tangible or intangible asset to which the owner has exclusive rights. Intellectual property law is a group of laws that recognize exclusive rights for creations of the mind. The intellectual property covered by this type of law includes the following:
-Patents
-Trade secrets
-Trademarks
-Copyrights
-Software piracy and licensing issues
-Digital rights management (DRM)
The following sections explain these types of intellectual property and their internal protection.

Tarpit

a type of honeypot designed to provide a very slow connection to the hacker so that the attack can be analyzed.

Secure Sockets Layer (SSL)

another option for creating secure connections to servers. It works at the application layer of the OSI model and is used mainly to protect HTTP traffic or web servers. Its functionality is embedded in most browsers, and its use typically requires no action on the part of the user. It is widely used to secure Internet transactions. SSL can be implemented in two ways: SSL portal VPN: In this case, a user has a single SSL connection for accessing multiple services on the web server. Once authenticated, the user is provided a page that acts as a portal to other services. SSL tunnel VPN: A user may use an SSL tunnel to access services on a server that is not a web server. This solution uses custom programming to provide access to non-web services through a web browser.
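As a client-side sketch of the secure-connection idea, Python's standard ssl module builds a TLS context with certificate validation and hostname checking enabled by default (an illustration only, not tied to any particular VPN product above):

```python
import ssl

# A client-side TLS context with secure defaults: certificate
# validation and hostname verification are both enabled.
context = ssl.create_default_context()

# Refuse the broken SSL versions and early TLS; modern servers
# negotiate TLS 1.2 or 1.3 only.
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED
```

Wrapping a socket with this context (via context.wrap_socket) gives the encrypted, authenticated channel that the browser otherwise provides transparently.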

Detective controls

are in place to detect an attack while it is occurring to alert appropriate personnel. Examples of detective controls include motion detectors, intrusion detection systems (IDS), logs, guards, investigations, and job rotation.

Corrective Controls

are in place to reduce the effect of an attack or other undesirable event. Using corrective controls fixes or restores the entity that is attacked. Examples of corrective controls include installing fire extinguishers, isolating or terminating a connection, implementing new firewall rules, and using server images to restore to a previous state.

Compensating Controls

controls or countermeasures that compensate for a weakness that cannot be completely eliminated. A countermeasure reduces the potential risk. Countermeasures are also referred to as safeguards or controls. Three things must be considered when implementing a countermeasure: vulnerability, threat, and risk.

sheep dip system

Checks physical media, device drivers, and other files for malware before they are introduced to the network

Recovery Controls

recover a system or device after an attack has occurred. The primary goal of recovery controls is restoring resources. Examples of recovery controls include disaster recovery plans, data backups, and offsite facilities.

Honeypot

systems that are configured to be attractive to hackers and lure them into spending time attacking them while information is gathered about the attack. In some cases, entire networks called honeynets are attractively configured for this purpose. These types of approaches should only be undertaken by companies with the skill to properly deploy and monitor them.

Directive Controls

specify acceptable practice within an organization. They are in place to formalize an organization's security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP) that lists proper (and often examples of improper) procedures and behaviors that personnel must follow. Any organizational security policies or procedures usually fall into this access control category. You should keep in mind that directive controls are efficient only if there is a stated consequence for not following the organization's directions.

What to do about click jacking?

Most responsibility for preventing click-jacking rests with the site owner. When you are designing website applications, use the X-FRAME-OPTIONS header to control the embedding of a site within a frame. This option should be set to DENY, which virtually ensures that click-jacking attacks fail. Also, the SAMEORIGIN option of X-FRAME can be used to restrict the site to be framed only in web pages from the same origin.
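The header-setting advice above can be sketched as a minimal WSGI application in Python (the app function and its page content are invented for the example; any web server or framework exposes an equivalent way to set response headers):

```python
def app(environ, start_response):
    """Minimal WSGI app that sets the anti-click-jacking header."""
    headers = [
        ("Content-Type", "text/html"),
        # DENY forbids framing entirely; SAMEORIGIN would permit
        # framing only by pages served from this site's own origin.
        ("X-Frame-Options", "DENY"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Not frameable</body></html>"]

# Invoke the app directly, capturing what it sends to the client.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = app({}, fake_start_response)
assert ("X-Frame-Options", "DENY") in captured["headers"]
```

A browser that receives this header refuses to render the page inside an iframe, so the transparent overlay trick described above fails.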

National Institute of Standards and Technology (NIST)

NIST SP 800-53 is a security controls development framework developed by NIST, an agency of the U.S. Department of Commerce. SP 800-53 divides the controls into three classes: technical, operational, and management. Each class contains control families or categories:
-Access Control (AC): Technical
-Awareness and Training (AT): Operational
-Audit and Accountability (AU): Technical
-Security Assessment and Authorization (CA): Management
-Configuration Management (CM): Operational
-Contingency Planning (CP): Operational
-Identification and Authentication (IA): Technical
-Incident Response (IR): Operational
-Maintenance (MA): Operational
-Media Protection (MP): Operational
-Physical and Environmental Protection (PE): Operational
-Planning (PL): Management
-Program Management (PM): Management
-Personnel Security (PS): Operational
-Risk Assessment (RA): Management
-System and Services Acquisition (SA): Management
-System and Communications Protection (SC): Technical
-System and Information Integrity (SI): Operational
NIST SP 800-55 is an information security metrics framework that provides guidance on developing performance measuring procedures from a U.S. government viewpoint.

NAXSI

Nginx Anti XSS & SQL Injection (NAXSI) is an open source WAF for the nginx web server. It uses whitelists that you create to allow and disallow actions. You install it in Learning mode and run it for a period of time (days or weeks) so it can learn traffic patterns. Then you create and apply the whitelist to your specifications.

TKIP (Temporal Key Integrity Protocol)

Deprecated encryption standard that provided a new encryption key for every sent packet.

Compensating Control Development

Developing controls that address vulnerabilities is an ongoing process that occurs every time a new vulnerability is discovered. The type of control you choose largely depends on the following:
-The likelihood that the vulnerability will be exposed
-The sensitivity of the resource at risk
-The cost of implementing the control vs. the cost of the vulnerability being exposed
Cost-benefit analysis can be performed by using either quantitative or qualitative risk analysis.
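The quantitative cost-benefit comparison can be illustrated with the standard formulas SLE = asset value x exposure factor and ALE = SLE x ARO (the dollar figures below are invented for the example):

```python
def annualized_loss_expectancy(asset_value, exposure_factor, aro):
    """ALE = SLE x ARO, where SLE = asset value x exposure factor."""
    sle = asset_value * exposure_factor
    return sle * aro

# Without the control, the loss event is expected every two years
# (ARO 0.5); the proposed control cuts that to once in four years.
ale_before = annualized_loss_expectancy(500_000, 0.25, 0.5)    # $62,500
ale_after = annualized_loss_expectancy(500_000, 0.25, 0.25)    # $31,250
control_cost = 20_000
control_value = ale_before - ale_after - control_cost          # $11,250
assert control_value > 0  # the control saves more than it costs
```

When control_value is negative, the control costs more than the risk it removes, and accepting or transferring the risk may be the better choice.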

Data remnants

Sensitive data inadvertently replicated in VMs as a result of cloud maintenance functions or remnant data left in terminated VMs needs to be protected. Also, if data is moved, residual data may be left behind, accessible to unauthorized users. Any remaining data in the old location should be shredded, but depending on the security practice, data remnants may remain. This can be a concern with confidential data in private clouds and any sensitive data in public clouds. Commercial products can deal with data remnants. For example, Blancco is a product that permanently removes data from PCs, servers, data center equipment, and smart phones. Data erased by Blancco cannot be recovered with any existing technology. Blancco also creates a report to prove each erasure for compliance purposes.

Packet Captures

Sniffing is the process of capturing packets for analysis; when used maliciously, sniffing is referred to as eavesdropping. Sniffing occurs when an attacker attaches or inserts a device or software into the communication medium to collect all the information transmitted over the medium. Sniffers, called protocol analyzers, collect raw packets from the network; both legitimate security professionals and attackers use them. The fact that a sniffer does what it does without transmitting any data to the network is an advantage when the tool is being used legitimately and a disadvantage when it is being used against you (because you cannot tell you are being sniffed).

One of the most widely used sniffers is Wireshark. It captures raw packets off the interface on which it is configured and allows you to examine each packet. If the data is unencrypted, you can read the data. Figure 14-18 shows an example of Wireshark in use. In the output shown in Figure 14-18, each line represents a packet captured on the network. You can see the source IP address, the destination IP address, the protocol in use, and the information in the packet. For example, line 511 shows a packet from 10.68.26.15 to 10.68.16.127, which is a NetBIOS name resolution query. Line 521 shows an HTTP packet from 10.68.26.46 to a server at 108.160.163.97. Just after that, you can see the server sending an acknowledgment back. To try to read the packet, you click on the single packet. If the data is clear text, you can read and analyze it. So you can see how an attacker could use Wireshark to acquire credentials and other sensitive information.

Protocol analyzers can be of help whenever you need to see what is really happening on your network. For example, say you have a security policy that says certain types of traffic should be encrypted, but you are not sure that everyone is complying with this policy.
By capturing and viewing the raw packets on the network, you can determine whether users are compliant.

Snort

Snort is an open source NIDS on which Sourcefire products are based. It can be installed on Fedora, CentOS, FreeBSD, and Windows. The installation files are free, but you need a subscription to keep rule sets up to date. Figure 14-2 shows a Snort report that has organized the traffic in the pie chart by protocol. It also lists all events detected by various signatures that have been installed. If you scan through the list, you can see attacks such as URL host spoofing, oversized packets, and, in row 10, a SYN FIN scan.

VM Sprawl

Uncontrolled growth in the number of virtual machines, which consumes resources (usually administrative) faster than the organization can keep up with them.

SOCKS firewall

A circuit-level firewall that requires a SOCKS client on the computers.

What are the disadvantages of using VLANS?

-Managerial overhead is required to secure VLANs. -Misconfigurations can isolate devices. -The limit on number of VLANs may cause issues on a very large network. -Subnet-based VLANs may expose traffic to potential sniffing and man-in-the-middle attacks when traffic goes through third-party ATM clouds or the Internet.

What are the benefits of running credentialed scans?

-Operations are executed on the host itself rather than across the network. -There is a more definitive list of missing patches. -Client-side software vulnerabilities are uncovered. -A credentialed scan can read password policies, obtain a list of USB devices, check antivirus software configurations, and even enumerate Bluetooth devices attached to scanned hosts.

How to logically harden a system?

-Remove unnecessary applications. -Disable unnecessary services. -Block unrequired ports. -Tightly control the connecting of external storage devices and media, if allowed at all.

Legal (incident Response)

-Review nondisclosure agreements to ensure support for incident response efforts. -Develop wording of documents used to contact possibly affected sites and organizations. -Assess site liability for illegal computer activity.

What are the steps in the incident response plan?

-Step 1. Detect: The first step is to detect the incident. The worst sort of incident is one that goes unnoticed. (The overall flow appears in Figure 7-5, Incident Response Process.)
-Step 2. Respond: The response to the incident should be appropriate for the type of incident. A denial-of-service (DoS) attack against a web server would require a quicker and different response than a missing mouse in the server room. Establish standard responses and response times ahead of time.
-Step 3. Report: All incidents should be reported within a time frame that reflects the seriousness of the incident. In many cases, establishing a list of incident types and the person to contact when each type of incident occurs is helpful. Attention to detail at this early stage, while time-sensitive information is still available, is critical.
-Step 4. Recover: Recovery involves a reaction designed to make the network or system affected functional again. Exactly what that means depends on the circumstances and the recovery measures available. For example, if fault-tolerance measures are in place, the recovery might consist of simply allowing one server in a cluster to fail over to another. In other cases, it could mean restoring the server from a recent backup. The main goal of this step is to make all resources available again.
-Step 5. Remediate: This step involves eliminating any residual danger or damage to the network that still might exist. For example, in the case of a virus outbreak, it could mean scanning all systems to root out any additional affected machines. These measures are designed to make a more detailed mitigation when time allows.
-Step 6. Review: Finally, you need to review each incident to discover what can be learned from it. Changes to procedures might be called for. Share lessons learned with all personnel who might encounter the same type of incident again. Complete documentation and analysis are the goals of this step.

What are the steps to preparing a test bed?

-Step 1. Install virtualization software on the host. -Step 2. Create a VM and install a guest operating system on the VM. -Step 3. Isolate the system from the network by ensuring that the NIC is set to "host" only mode. -Step 4. Disable shared folders and enable guest isolation on the VM. -Step 5. Copy the malware to the guest operating system.

What are the limitations to using NAC and NAP?

-They work well for company-managed computers but less well for guests.
-They tend to react only to known threats and not to new threats.
-The return on investment is still unproven.
-Some implementations involve confusing configuration.

You can take the following countermeasures to mitigate maintenance hook attacks

-Use a host IDS to record any attempt to access the system by using one of these hooks.
-Encrypt all sensitive information contained in the system.
-Implement auditing to supplement the IDS.
-The best solution is for the vendor to remove all maintenance hooks before the product goes into production. Code reviews should be performed to identify and remove these hooks.

The following measures can help you prevent SQL injection types of attacks:

-Use proper input validation. -Use blacklisting or whitelisting of special characters. -Use parameterized queries in ASP.NET and prepared statements in Java to perform escaping of dangerous characters before the SQL statement is passed to the database.
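The parameterized-query advice can be illustrated with Python's built-in sqlite3 driver (the table and data are invented for the example; the same placeholder pattern is what prepared statements in Java and parameterized queries in ASP.NET provide):

```python
import sqlite3

# In-memory database with one user table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder makes the driver treat the input strictly
    # as data, so special characters cannot alter the SQL statement.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

assert find_user("alice") == [("alice", "admin")]
# A classic injection payload returns nothing instead of every row.
assert find_user("' OR '1'='1") == []
```

Had the query been built by string concatenation, the second call would have matched every row in the table; the placeholder neutralizes the payload.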

Context Based Authentication Frequency

A context-based system can make access decisions based on the frequency with which the requests are made. Because multiple requests to log in coming very quickly can indicate a password-cracking attack, the system can use this information to deny access. It also can indicate that an automated process or malware, rather than an individual, is attempting this operation.
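Such a frequency check can be sketched as a sliding-window counter (a minimal illustration; the LoginFrequencyMonitor class and its thresholds are invented for the example):

```python
from collections import deque

class LoginFrequencyMonitor:
    """Deny access when login attempts arrive faster than a human
    plausibly could, a hint of password cracking or malware."""

    def __init__(self, max_attempts: int, window_seconds: float):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = deque()  # timestamps of recent attempts

    def allow(self, now: float) -> bool:
        # Drop attempts that have aged out of the sliding window.
        while self.attempts and now - self.attempts[0] > self.window:
            self.attempts.popleft()
        self.attempts.append(now)
        return len(self.attempts) <= self.max_attempts

monitor = LoginFrequencyMonitor(max_attempts=3, window_seconds=60)
assert monitor.allow(0.0)      # 1st attempt: allowed
assert monitor.allow(1.0)      # 2nd
assert monitor.allow(2.0)      # 3rd
assert not monitor.allow(3.0)  # 4th within a minute: denied
```

A real context-based system would feed this signal into a broader decision alongside location, device, and time-of-day attributes rather than denying access on frequency alone.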

Cross-Site Request Forgery (CSRF)

A cross-site request forgery (CSRF) is an attack that causes an end user to execute unwanted actions on a web application in which he is currently authenticated. Unlike with XSS, in CSRF, the attacker exploits the website's trust of the browser rather than the other way around. The website thinks that the request came from the user's browser and was actually made by the user. However, the request was planted in the user's browser. It usually gets there when a user follows a URL that already contains the code to be injected. This type of attack is shown in Figure 6-6.

Data Ownership Policy

A data ownership policy is closely related to a data classification policy (covered later in this lesson), and often the two policies are combined. This is because typically the data owner is tasked with classifying the data. Therefore, the data ownership policy covers how the owner of each piece of data or each data set is identified. In most cases, the creator of the data is the owner, but some organizations may deem all data created by a department to be owned by the department head. Another way a user may be the owner of data is when a user introduces to the organization data he did not create. Perhaps the data was purchased from a third party. In any case, the data ownership policy should outline both how data ownership occurs and the responsibilities of the owner with respect to determining the data classification and identifying those with access to the data.

Data Retention Policy

A data retention policy outlines how various data types must be retained and may rely on the data classifications described in the data classification policy. Data retention requirements vary based on several factors, including data type, data age, and legal and regulatory requirements. Security professionals must understand where data is stored and the type of data stored. In addition, security professionals should provide guidance on managing and archiving data. Therefore, each data retention policy must be established with the help of organizational personnel. A retention policy usually contains the purpose of the policy, the portion of the organization affected by the policy, any exclusions to the policy, the personnel responsible for overseeing the policy, the personnel responsible for data destruction, the data types covered by the policy, and the retention schedule. Security professionals should work with data owners to develop the appropriate data retention policy for each type of data the organization owns. Examples of data types include, but are not limited to, human resources data, accounts payable/receivable data, sales data, customer data, and e-mail. To design a data retention policy, an organization should answer the following questions: What are the legal/regulatory requirements and business needs for the data? What are the types of data? What are the retention periods and destruction needs of the data? The personnel who are most familiar with each data type should work with security professionals to determine the data retention policy. For example, human resources personnel should help design the data retention policy for all human resources data. While designing a data retention policy, the organization must consider the media and hardware that will be used to retain the data. Then, with this information in hand, the data retention policy should be drafted and formally adopted by the organization and/or business unit. 
Once a data retention policy has been created, personnel must be trained to comply with it. Auditing and monitoring should be configured to ensure data retention policy compliance. Periodically, data owners and processors should review the data retention policy to determine whether any changes need to be made. All data retention policies, implementation plans, training, and auditing should be fully documented.

Data mining warehouse

A data warehouse is a repository of information from heterogeneous databases. It allows multiple sources of data not only to be stored in one place but to be organized in such a way that redundancy of data is reduced (called data normalizing), and more sophisticated data mining tools can be used to manipulate the data to discover relationships that may not have been apparent before. Along with the benefits it provides, a data warehouse also presents additional security challenges.

Digital Forensics Workstation

A dedicated workstation should be available for an investigation and should not be one of the production systems. It should be dedicated to this process. Special systems are created just for this purpose and are quite pricey but are worth the cost when the need arises. You can also build a dedicated workstation for this purpose. The SANS Institute lists the following requirements of a forensic workstation in the document "Building a Low Cost Forensics Workstation":
-The system must have network connectivity.
-The system must support hardware-based drive duplication.
-The system must support remote and network-based drive duplication.
-The system must support duplication and analysis of these common file system types: NTFS, FAT16/32, Solaris UFS, BSD UFS, EXT2 (Linux), EXT3 (Linux), HFS & HFS+ (Macintosh), and swap (Solaris, BSD, Linux).
-The system must have the ability to validate image and file integrity.
-The system must be able to identify dates and times that files have been modified, accessed, and created.
-The system must have the capability to create file system activity timelines.
-The system must be able to identify deleted files.
-The system must be able to analyze allocated drive space.
-The system must be able to isolate and analyze unallocated drive space.
-The system must allow the investigator to directly associate disk images and evidence to a case.
-The system must allow the investigator to associate notes to cases and specific evidence.
-The system must support removable media for storage and transportation of evidence and disk images.
-Evidence collected by the system must be admissible in a court of law.

Federations

A federated identity is a portable identity that can be used across businesses and domains. In federated identity management, each organization that joins the federation agrees to enforce a common set of policies and standards. These policies and standards define how to provision and manage user identification, authentication, and authorization. Providing disparate authentication mechanisms with federated IDs has a lower up-front development cost than other methods, such as a PKI or attestation.Federated identity management uses two basic models for linking organizations within the federation: Cross-certification model: In this model, each organization certifies that every other organization is trusted. This trust is established when the organizations review each other's standards. Each organization must verify and certify through due diligence that the other organizations meet or exceed standards. One disadvantage of cross-certification is that the number of trust relationships that must be managed can become problematic. Trusted third-party (or bridge) model: In this model, each organization subscribes to the standards of a third party. The third party manages verification, certification, and due diligence for all organizations. This is usually the best model for an organization that needs to establish federated identity management relationships with a large number of organizations. Security issues with federations and their possible solutions include the following: Inconsistent security among partners: Federated partners need to establish minimum standards for the policies, mechanisms, and practices they use to secure their environments and information. Insufficient legal agreements among partners: Like any other business partnership, identity federation requires carefully drafted legal agreements.

Manual vs. Automatic Provisioning/Deprovisioning

A federated identity system requires a system of creating secure access accounts and granting the proper access to those accounts. This is called the provisioning process. The removal of access and the deletion of accounts is called the deprovisioning process. This can be done manually or through an automated process of some sort. Provisioning is a term also used to describe the sharing of the attributes of user accounts among federation members. Consider both of these meanings of provisioning: Provisioning/deprovisioning of accounts: Accounts can be manually provisioned or provisioned through an automated process. While manual provisioning performed by each federation member is slower, it provides better security. Automated provisioning provides a better experience for the user but is less secure. This decision may amount to one that hinges on the amount of trust present between federation members. Provisioning/deprovisioning of attributes: When making access decisions, federation members need the account information or the attributes of a user. Delivering these attributes is also called provisioning. Two methods are used: push and pull provisioning. In pull provisioning (which is the type used with LDAP, for example), the attributes are pulled from a central repository of the member by the federation repository on a set schedule. In push mode, the central directory of the member pushes attributes to the central repository of the federation

Click-Jacking

A hacker using a click-jack attack crafts a transparent page or frame over a legitimate-looking page that entices the user to click something. When he does, he is really clicking on a different URL. In many cases, the site or application may entice the user to enter credentials that could be used later by the attacker. This type of attack is shown in Figure 6-7.

Man-in-the-Middle

A man-in-the-middle attack intercepts legitimate traffic between two entities. The attacker can then control information flow and can eliminate or alter the communication between the two parties. One of the ways a man-in-the-middle attack is accomplished is by poisoning the ARP cache on a switch. This attack and the mitigations for the attack are covered in Lesson 6, "Analyzing Scan Output and Identifying Common Vulnerabilities."

MOUs

A memorandum of understanding (MOU) is a document that, while not legally binding, indicates a general agreement between the principals to do something together. An organization may have MOUs with multiple organizations, and MOUs may in some instances contain security requirements that inhibit or prevent the deployment of certain measures.

LAN (Local Area Network)

A network of computers and other devices that is confined to a relatively small space, such as one building or even one office. LAN links typically run at 10 Mbps or faster.

Security Testing Phases

A number of different types of tests may be conducted during the testing phases. Among them are static code analysis, web application vulnerability scanning, and fuzz testing. The following sections dig into how these types of tests operate.

802.1x

A port-based authentication protocol. Wireless can use 802.1X. For example, WPA2-Enterprise mode uses an 802.1X server (implemented as a RADIUS server) to add authentication.

Threat Modeling

A process by which analysts can understand security threats to a system, determine risks from those threats, and establish appropriate mitigations.

Quantitative Risk Analysis

A quantitative risk analysis assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, safeguard costs, and so on. Equations are used to determine total and residual risks. The most common equations are for single loss expectancy (SLE) and annual loss expectancy (ALE).
The SLE is the monetary impact of each threat occurrence. To determine the SLE, you must know the asset value (AV) and the exposure factor (EF). The EF is the percentage of value or functionality of an asset that will be lost when a threat event occurs. The calculation for obtaining the SLE is as follows:
SLE = AV × EF
For example, an organization has a web server farm with an AV of $20,000. If the risk assessment has determined that a power failure is a threat agent for the web server farm and the EF for a power failure is 25%, the SLE for this event equals $5,000.
The ALE is the expected risk factor of an annual threat event. To determine the ALE, you must know the SLE and the annualized rate of occurrence (ARO). The ARO is the estimate of how often a given threat might occur annually. The calculation for obtaining the ALE is as follows:
ALE = SLE × ARO
Using the previously mentioned example, if the risk assessment has determined that the ARO for the power failure of the web server farm is 50%, the ALE for this event equals $2,500. Security professionals should keep in mind that this calculation can be adjusted for geographic distances. Using the ALE, the organization can decide whether to implement controls. If the annual cost of the control to protect the web server farm is more than the ALE, the organization could reasonably choose to accept the risk by not implementing the control. If the annual cost of the control is less than the ALE, the organization should consider implementing the control.
Keep in mind that even though quantitative risk analysis uses numeric values, a purely quantitative analysis cannot be achieved because some level of subjectivity is always part of the data. In our example, how does the organization know that damage from the power failure will be 25% of the asset? This type of estimate should be based on historical data, industry experience, and expert opinion. An advantage of quantitative over qualitative risk analysis is that quantitative risk analysis involves less guesswork. Disadvantages of quantitative risk analysis include the difficulty of the equations, the time and effort needed to complete the analysis, and the amount of data that must be gathered for the analysis.
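The SLE and ALE formulas can be checked with a few lines of arithmetic. The figures below are the web server farm numbers from the example above:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV × EF: monetary impact of one threat occurrence."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    """ALE = SLE × ARO: expected annual loss from the threat."""
    return sle * aro

av = 20_000   # asset value of the web server farm
ef = 0.25     # 25% of the asset's value lost per power failure
aro = 0.50    # a power failure expected once every two years

sle = single_loss_expectancy(av, ef)    # $5,000 per occurrence
ale = annual_loss_expectancy(sle, aro)  # $2,500 per year
```

If an annual control cost exceeds `ale` here ($2,500), accepting the risk is the economically defensible choice; below it, the control deserves consideration.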

Race Conditions

A race condition exists when the outcome of a process depends on the order or timing of events. In a race condition attack, the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome.
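A classic instance is a time-of-check/time-of-use (TOCTOU) race, where an attacker changes state between a security check and the action that relies on it. The sketch below (filenames and function names are illustrative) shows the vulnerable gap and a safer pattern that removes it:

```python
import os
import tempfile

# Time-of-check/time-of-use (TOCTOU) sketch. The gap between the access
# check and the open is where an attacker could swap the file for a
# symlink to a sensitive target. Filenames here are illustrative.

def read_if_allowed_unsafe(path):
    if os.access(path, os.R_OK):      # time of check
        # ...an attacker could replace `path` right here...
        with open(path) as f:         # time of use
            return f.read()
    return None

def read_if_allowed_safer(path):
    # Safer pattern: just attempt the open and handle failure,
    # leaving no check/use gap to race.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("secret")
    path = tmp.name

result_unsafe = read_if_allowed_unsafe(path)
result_safer = read_if_allowed_safer(path)
os.unlink(path)
```

Both functions return the same data in the benign case; the difference is that the second leaves no window between instructions for an attacker to exploit.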

Network Policy Server (NPS)

A role service that enables you to define and enforce rules that determine who can access your network and how they can access it.

Rootkit

A rootkit is a set of tools that a hacker can use on a computer after he has managed to gain access and elevate his privileges to administrator. It gets its name from the root account, the most powerful account in Linux-based operating systems. Rootkit tools might include a backdoor for the hacker to access. This is one of the hardest types of malware to remove, and in many cases only a reformat of the hard drive will completely remove it. The following are some of the actions a rootkit can take:
- Installing a backdoor
- Removing all entries from the security log (log scrubbing)
- Replacing default tools with compromised versions (Trojaned programs)
- Making malicious kernel changes
Unfortunately, the best defense against rootkits is not to get them in the first place because they are very difficult to detect and remove. In many cases rootkit removal renders the system useless. There are some steps you can take to prevent rootkits, including the following:
- Monitor system memory for ingress points for a process as it invokes, and keep track of any imported library calls that may be redirected to other functions.
- Use the Windows RootkitRevealer tool to look for information kept hidden from the Windows API, the Master File Table, and the directory index.
- Consider standalone rootkit detection tools, such as RootkitRevealer and Blacklight.
- Keep the firewall updated.
- Harden all workstations.

Security Suites

A security suite is a collection of security utilities combined into a single tool. For example, it might be a combination of antivirus and firewall services. Many security suites also include backup services, parental controls, and maintenance features that help improve performance. They may also include the following:
- Gateway protection
- Mail server protection
- File server protection
- Client protection
- Centralized management
While it's convenient to have all tools combined, you need to ensure that the tools included are robust enough for you and provide the specific features your organization needs.

SLAs

A service level agreement (SLA) is a document that specifies a service to be provided by a party, the costs of the service, and the expectations of performance. These contracts may exist with third parties from outside the organization and between departments within an organization. Sometimes these SLAs may include specifications that inhibit or prevent the deployment of certain measures.

Trade Secret

A trade secret ensures that proprietary technical or business information remains confidential. A trade secret gives an organization a competitive edge. Trade secrets include recipes, formulas, ingredient listings, and so on that must be protected against disclosure. After a trade secret is obtained by or disclosed to a competitor or the general public, it is no longer considered a trade secret.

Trend report

A trend report depicts the changes in risk level over time, as assessed by the tool using its past scans.

LDAP

A typical directory contains a hierarchy that includes users, groups, systems, servers, client workstations, and so on. Because the directory service contains data about users and other network entities, it can be used by many applications that require access to that information. A common directory service standard is Lightweight Directory Access Protocol (LDAP), which is based on the earlier standard X.500. X.500 uses Directory Access Protocol (DAP). In X.500, the distinguished name (DN) provides the full path in the X.500 database where the entry is found. The relative distinguished name (RDN) in X.500 is an entry's name without the full path. LDAP is simpler than X.500. LDAP supports DN and RDN, but it includes more attributes, such as the common name (CN), domain component (DC), and organizational unit (OU) attributes. Using a client/server architecture, LDAP uses TCP port 389 to communicate. If advanced security is needed, LDAP over SSL communicates via TCP port 636.
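The DN/RDN structure described above can be illustrated by splitting a distinguished name into its attribute components. The sample DN below is made up, and the parser is a simplified sketch that ignores escaped commas:

```python
# A distinguished name (DN) is a comma-separated list of attribute=value
# pairs giving the full path to an entry; the first pair is the entry's
# relative distinguished name (RDN). Sample DN is illustrative.

def parse_dn(dn):
    """Split a simple DN (no escaped commas) into (attribute, value) pairs."""
    return [tuple(part.strip().split("=", 1)) for part in dn.split(",")]

dn = "cn=Jane Doe,ou=Engineering,dc=example,dc=com"
pairs = parse_dn(dn)

rdn = pairs[0]                       # ('cn', 'Jane Doe') -- the RDN
ldap_port, ldaps_port = 389, 636     # plain LDAP vs. LDAP over SSL
```

The CN, OU, and DC attributes in the sample are exactly the extra attribute types LDAP adds beyond the X.500 DN/RDN model.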

Role-Based Access Control (RBAC)

A user may derive his network access privileges from a role he has been assigned, typically through addition to a specific security group.
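A minimal sketch of role-based access control, where privileges derive entirely from group (role) membership rather than per-user grants. The roles and permissions below are invented for illustration:

```python
# Role-based access control sketch: a user's privileges come from the
# security groups (roles) she belongs to. Roles/permissions are illustrative.

ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_tickets"},
    "network_admin": {"view_tickets", "configure_switch"},
}

USER_ROLES = {
    "alice": {"helpdesk"},
    "bob": {"helpdesk", "network_admin"},
}

def is_allowed(user, permission):
    """Allow the action if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Adding a user to a group is the only administrative step needed to change her access, which is the practical appeal of RBAC.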

Time based access control

A user might be allowed to connect to the network only during specific times of day.

Rule Based Access Control

A user might have his access controlled by a rule such as "all devices must have the latest antivirus patches installed."

Location Based Access control

A user might have one set of access rights when connecting from another office and another set when connected from the Internet.

Web Application Firewall

A web application firewall (WAF) applies rule sets to an HTTP conversation. These rule sets cover common attack types to which these session types are susceptible, among them cross-site scripting and SQL injection. A WAF can be implemented as an appliance or as a server plug-in. In appliance form, a WAF is typically placed directly behind the firewall and in front of the web server farm; Figure 14-10 shows an example. While all traffic is usually funneled in-line through the device, some solutions monitor a port and operate out-of-band. Table 14-3 lists the pros and cons of these two approaches. Finally, WAFs can be installed directly on the web servers themselves.
The security issues involved with WAFs include the following:
- The IT infrastructure becomes more complex.
- Training on the WAF must be provided with each new release of the web application.
- Testing procedures may change with each release.
- False positives may occur and can have a significant business impact.
- Troubleshooting becomes more complex.
- The WAF terminating the application session can potentially have an effect on the web application.
Table 14-3 WAF deployment approaches:
- In-line — Advantages: can prevent live attacks. Disadvantages: may slow web traffic; could block legitimate traffic.
- Out-of-band — Advantages: non-intrusive; doesn't interfere with traffic. Disadvantages: can't block live traffic.
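At its core, a WAF matches rule sets against HTTP request data. The toy filter below flags obvious SQL injection and cross-site scripting strings; the two patterns are simplified far beyond any production rule set and are purely illustrative:

```python
import re

# Toy WAF rule set. Real products use normalization, anomaly scoring,
# and far richer rules; these two patterns are only illustrative.
RULES = [
    ("sql_injection", re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.I)),
    ("xss", re.compile(r"<\s*script", re.I)),
]

def inspect(query_string):
    """Return the names of the rules the query string triggers."""
    return [name for name, pattern in RULES if pattern.search(query_string)]

hits_sqli = inspect("id=1' OR 1=1 --")
hits_xss = inspect("q=<script>alert(1)</script>")
hits_clean = inspect("q=cysa+study+guide")
```

An in-line WAF would block the first two requests before they reach the web server; an out-of-band WAF could only alert on them, which is the trade-off captured in Table 14-3.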

Access Control Types

Access controls are defined by the type of protection they provide. Whereas the access control categories classify the access controls based on where they fit in time, access control types divide access controls based on their method of implementation. There are three types of access controls:
- Physical controls
- Logical (technical) controls
- Administrative (management) controls
Access controls are covered in detail in Lesson 3, "Recommending and Implementing the Appropriate Response and Countermeasure," but examples of controls and their categories are shown in Tables

Creating Accountability

Accountability is an organization's ability to hold users responsible for the actions they perform. To ensure that users are accountable for their actions, organizations must implement an auditing mechanism, along with any combination of the following components:
- Strong identification: Each user should have her own account because group or role accounts cannot be traced back to a single individual.
- Monitoring: User actions should be monitored, including login, privilege use, and other actions. Users should be warned, as part of a no-expectation-of-privacy statement, that all actions can be monitored.
- Audit logs: Audit logs should be maintained and stored according to organizational security policies. Administrators should periodically review these logs.

Change Management and Configuration Management/Replacement

After a solution is deployed in a live environment, there will inevitably be additional changes that must be made to the software due to security issues. In some cases the software might be altered to enhance or increase its functionality. In either case, changes must be handled through a formal change and configuration management process. The purpose of this process is to ensure that all changes to the configuration of the code and to the source code itself are approved by the proper personnel and are implemented in a safe and logical manner. This process should always ensure continued functionality in the live environment, and changes should be documented fully, including all changes to hardware and software. In some cases, it may be necessary to completely replace applications or systems. While some failures may be fixed with enhancements or changes, a failure may occur that can be solved only by completely replacing the application.

What are the two types of deployment for NAC?

Agent-based or agentless. Agent-based NAC can perform deep inspection and remediation at the expense of additional software on the endpoint. Both agent-based and agentless NAC can be used to mitigate the following issues:
- Malware
- Missing OS patches
- Missing anti-malware updates
802.1x port-based authentication is another form of network access control.

Aggregation

Aggregation is defined as the assembling or compilation of units of information at one sensitivity level and having the resultant totality of data being of a higher sensitivity level than the individual components. So you might think of aggregation as a different way of achieving the same goal as inference, which is to learn information about data on a level to which one does not have access.

Secure Method of Communication

All communications that take place between the stakeholders should use a secure communication process to ensure that information is not leaked or sniffed. Secure communication channels and strong cryptographic mechanisms should be used for these communications. The best approach is to create an out-of-band method of communication, which does not use the regular methods of corporate e-mail or VoIP. While personal cell phones can be a method for voice communication, file and data exchange should be through a method that provides end-to-end encryption, such as Off-the-Record (OTR).

Tamper-proof seals

All evidence obtained during an investigation must be stored securely in containers sealed with tamper-proof seals. You need to have plenty of these on hand to ensure that the chain of custody is maintained.

Prevent Inadvertent Release of Information

All responders should act to prevent the disclosure of any information to parties that are not specified in the communication plan. Moreover, all information released to the public and the press should be handled by public relations or persons trained for this type of communication. The timing of all communications should also be specified in the plan.

Incident Summary Report

All stakeholders should receive a document that summarizes the incident. It should not have an excessive amount of highly technical language in it, and it should be written so nontechnical readers can understand the major points of the incident. The following are some of the highlights that should be included:
- When the problem was first detected and by whom
- The scope of the incident
- How it was contained and eradicated
- Work performed during recovery
- Areas where the Cyber Incident Response Team (CIRT) was effective
- Areas that need improvement

Incident response plan

All the equipment in the world is useless if the response to an incident is flawed. Incident response policies should be formally designed, well communicated, and followed. They should specifically address cyber attacks against an organization's IT systems.

VLANs (Virtual LANs)

Allow switched networks to be broken into multiple broadcast domains. This improves manageability and can increase performance by limiting the size of each broadcast domain.

SAML

Allows for exchanging authentication and authorization data between systems. Security Assertion Markup Language (SAML) is a security attestation model built on XML and SOAP-based services that allows for the exchange of authentication and authorization data between systems and supports federated identity management. The major issue it attempts to address is SSO using a web browser. When authenticating over HTTP using SAML, an assertion ticket is issued to the authenticating user.
Remember that SSO enables a user to authenticate once to access multiple sets of data. SSO at the Internet level is usually accomplished with cookies, but extending the concept beyond the Internet has resulted in many proprietary approaches that are not interoperable. The goal of SAML is to create a standard for this process.
A consortium called the Liberty Alliance proposed an extension to the SAML standard called the Liberty Identity Federation Framework (ID-FF), a proposed standardized cross-domain SSO framework. It identifies a circle of trust, within which each participating domain is trusted to document the following about each user:
- The process used to identify a user
- The type of authentication system used
- Any policies associated with the resulting authentication credentials
Each member entity is free to examine this information and determine whether to trust it. Liberty contributed ID-FF to OASIS (a nonprofit, international consortium that creates interoperable industry specifications based on public standards such as XML and SGML). In March 2005, SAML v2.0 was announced as an OASIS standard. SAML v2.0 represents the convergence of Liberty ID-FF and other proprietary extensions.
In an unauthenticated SAMLv2 transaction, the browser asks the service provider (SP) for a resource. The SP provides the browser with an XHTML form. The browser asks the identity provider (IP) to validate the user and then provides the XHTML back to the SP for access. The <nameID> element in SAML can be provided as the X.509 subject name or the Kerberos principal name.
To prevent a third party from identifying a specific user as having previously accessed a service provider through an SSO operation, SAML uses transient identifiers, which are valid only for a single login session; they are different each time the user authenticates again but stay the same as long as the user is authenticated.
SAML is a good solution in the following scenarios:
- When you need to provide SSO (when at least one actor or participant is an enterprise)
- When you need to provide access to a partner or customer application to your portal
- When you can provide a centralized identity source
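The transient-identifier behavior can be sketched with Python's standard secrets module: a fresh opaque name identifier is minted per login session, so a third party cannot correlate visits across sessions. The Session class is illustrative, not part of any SAML library:

```python
import secrets

# Sketch of SAML transient NameIDs: one opaque identifier per login
# session, stable within the session, different across sessions.
# Illustrative only; a real IdP mints these inside the SAML assertion.

class Session:
    def __init__(self, user):
        self.user = user
        self.transient_id = secrets.token_urlsafe(24)  # opaque, per-session

first_login = Session("alice")
second_login = Session("alice")

# Same user, two logins, two unlinkable identifiers.
differs_across_sessions = first_login.transient_id != second_login.transient_id
```

Within `first_login`, every request carries the same `transient_id`, so the service provider can maintain the session without ever learning a durable identifier for the user.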

Shibboleth

Allows use of common credentials among sites that are part of the federation. Shibboleth is an open source project that provides single sign-on capabilities and allows sites to make informed authorization decisions for individual access of protected online resources in a privacy-preserving manner. Shibboleth allows the use of common credentials among sites that are a part of the federation. It is based on SAML. This system has two components:
- Identity providers (IP): IPs supply the user information.
- Service providers (SP): SPs consume this information before providing a service.
Here is an example of SAML in action:
Step 1. A user logs in to Domain A, using a PKI certificate that is stored on a smart card protected by an eight-digit PIN.
Step 2. The credential is cached by the authenticating server in Domain A.
Step 3. Later, the user attempts to access a resource in Domain B. This initiates a request to the Domain A authenticating server to somehow attest to the resource server in Domain B that the user is in fact who she claims to be.
Figure 11-9 illustrates the way the service provider obtains the identity information from the identity provider.

OpenID

Allows users to log in to multiple sites without registering their information repeatedly. OpenID is an open standard and decentralized protocol by the nonprofit OpenID Foundation that allows users to be authenticated by certain cooperating sites. The cooperating sites are called relying parties (RP). OpenID allows users to log in to multiple sites without having to register their information repeatedly. Users select an OpenID identity provider and use the accounts to log in to any website that accepts OpenID authentication.
While OpenID solves the same issue as SAML, an enterprise may find these advantages in using OpenID:
- It's less complex than SAML.
- It's been widely adopted by companies such as Google.
On the other hand, you should be aware of the following shortcomings of OpenID compared to SAML:
- With OpenID, auto-discovery of the identity provider must be configured per user.
- SAML has better performance.
- SAML can initiate SSO from either the service provider or the identity provider, while OpenID can only be initiated from the service provider.
In February 2014, the third generation of OpenID, called OpenID Connect, was released. It is an authentication layer protocol that resides atop the OAuth 2.0 framework. (OAuth is covered earlier in this lesson.) It is designed to support native and mobile applications, and it defines methods of signing and encryption.

Release/Maintain

Also called the release/maintenance phase in some documentation, this phase includes the implementation of the software into the live environment and the continued monitoring of its operation. Finding additional functional and security problems at this point, as the software begins to interface with other elements of the network, is not unusual. In many cases, vulnerabilities are discovered in the live environment for which no current fix or patch exists. Such a vulnerability is referred to as a zero-day vulnerability. It is better to have the supporting development staff discover these vulnerabilities than to leave them to attackers.

application-level firewall

Also known as proxy servers. Works by performing a deep inspection of application data as it traverses the firewall. Rules are set by analyzing client requests and application responses, then enforcing correct application behavior.

Certification

Although the terms are used as synonyms in casual conversation, accreditation and certification are two different concepts in the context of assurance levels and ratings. However, they are closely related. Certification evaluates the technical system components, whereas accreditation occurs when the adequacy of a system's overall security is accepted by management

Acceptable Use Policy (AUP)

An AUP is used to inform users of the actions that are allowed and those that are not allowed. It should also provide information on the consequences that may result when these policies are violated. This document should be reviewed and signed by each user during the employee orientation phase of the employment process. The following are examples of the many issues that may be addressed in an AUP:
- Proprietary information stored on electronic and computing devices, whether owned or leased by the company, the employee, or a third party, remains the sole property of the company.
- The employee has a responsibility to promptly report the theft, loss, or unauthorized disclosure of proprietary information.
- Access, use, or sharing of proprietary information is allowed only to the extent that it is authorized and necessary to fulfill assigned job duties.
- Employees are responsible for exercising good judgment regarding the reasonableness of personal use.
- Authorized individuals in the company may monitor equipment, systems, and network traffic at any time.
- The company reserves the right to audit networks and systems on a periodic basis to ensure compliance with this policy.
- All mobile and computing devices that connect to the internal network must comply with the company access policy.
- System-level and user-level passwords must comply with the password policy.
- All computing devices must be secured with a password-protected screensaver.
- Postings by employees from a company e-mail address to newsgroups should contain a disclaimer stating that the opinions expressed are strictly their own and not necessarily those of the company.
- Employees must use extreme caution when opening e-mail attachments received from unknown senders, which may contain malware.

IPS (Intrusion Prevention System)

An active, in-line security device that monitors suspicious network and/or system traffic and reacts in real time to block it. Also called a network intrusion prevention system (NIPS). Can also be host-based. Running an IPS imposes a greater performance load than running an IDS.

CNAME record

An alias record, which maps an additional hostname to a canonical hostname that already has an A record mapped to an IPv4 address.
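The alias relationship can be sketched as a tiny in-memory zone: a CNAME points an alias at a canonical name, and only the canonical name carries the A record. All names and addresses below are examples from the reserved example.com/192.0.2.0 ranges:

```python
# Tiny in-memory DNS zone sketch. A CNAME maps an alias to a canonical
# hostname; the A record on the canonical name supplies the address.
# All names and addresses are illustrative.

ZONE = {
    ("www.example.com", "CNAME"): "web01.example.com",
    ("web01.example.com", "A"): "192.0.2.10",
}

def resolve_a(name, zone, max_hops=5):
    """Follow CNAME records until an A record is found."""
    for _ in range(max_hops):
        if (name, "A") in zone:
            return zone[(name, "A")]
        name = zone.get((name, "CNAME"))
        if name is None:
            return None
    return None

addr = resolve_a("www.example.com", ZONE)  # alias -> canonical -> address
```

Renumbering the server then requires changing only the single A record; every alias pointing at the canonical name follows automatically.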

VM escape

An attack in which the attacker "breaks out" of a VM's normally isolated state and interacts directly with the hypervisor. This can allow access to all VMs and the host machine as well.

Use Interception Proxy to Crawl Application

An interception proxy is an application that stands between the web server and the client and passes all requests and responses back and forth. While it does so, it analyzes the information to test the security of the web application.A web application proxy can also "crawl" the site and its application to discover the links and content contained. These are sometimes called spiders. A good example of a web proxy is the OWASP Zed Attack Proxy (ZAP). This tool is covered more fully in Lesson 14

IETF (Internet Engineering Task Force)

An international open committee that works to develop and maintain Internet standards and contribute to the evolution and smooth operation of the Internet

Cryptography tools

An investigator uses these tools when she encounters encrypted evidence, which is becoming more common. Some of these tools can attempt to decrypt the most common types of encryption (for example, BitLocker, BitLocker To Go, PGP, TrueCrypt), and they may also be able to locate decryption keys from RAM dumps and hibernation files.

Sandboxing

An isolated test environment that simulates the production environment but will not affect production components/data.

Removal

Another containment option is to shut down a device or devices. In some cases this is not advisable until digital forensics has been completed. Much of the evidence is volatile (for example, RAM contents) and would be lost by shutting down the device.

EAP (Extensible Authentication Protocol)

Another form of network access control is 802.1x Extensible Authentication Protocol (EAP). 802.1x is a standard that defines a framework for centralized port-based authentication. It can be applied to both wireless and wired networks and uses three components:
- Supplicant: The user or device requesting access to the network
- Authenticator: The device through which the supplicant is attempting to access the network
- Authentication server: The centralized device that performs authentication
The role of the authenticator can be performed by a wide variety of network access devices, including remote access servers (both dial-up and VPN), switches, and wireless access points. The role of the authentication server can be performed by a Remote Authentication Dial-in User Service (RADIUS) or Terminal Access Controller Access Control System+ (TACACS+) server.
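The three-party flow above can be sketched as message forwarding: the authenticator never validates credentials itself; it relays between the supplicant and the authentication server and opens the port only on success. The classes below are illustrative, not a real 802.1X/EAP implementation:

```python
# 802.1X role sketch: the authenticator (switch/AP) relays the
# supplicant's credentials to the authentication server (e.g. RADIUS)
# and gates the port on the result. No real EAP framing here.

CREDENTIALS = {"laptop-01": "s3cret"}  # known only to the auth server

class AuthenticationServer:
    """The centralized device that actually performs authentication."""
    def check(self, identity, password):
        return CREDENTIALS.get(identity) == password

class Authenticator:
    """The switch or AP: forwards the request, then opens or keeps
    the port closed based on the server's verdict."""
    def __init__(self, server):
        self.server = server
        self.port_open = False

    def handle(self, identity, password):
        self.port_open = self.server.check(identity, password)
        return self.port_open

switch = Authenticator(AuthenticationServer())
granted = switch.handle("laptop-01", "s3cret")                       # port opens
denied = Authenticator(AuthenticationServer()).handle("laptop-01", "wrong")
```

Centralizing the credential check in one server is what lets many switches and access points share a single authentication policy.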

Memory Consumption (Common Host-Related Symptoms)

Another key indicator of a compromised host is increased memory consumption. Many times it is an indication that additional programs have been loaded into RAM so they can be processed. Once loaded, they use RAM in the process of executing their tasks, whatever they may be. You can monitor memory consumption by using the same approach you use for CPU consumption. If memory usage cannot be accounted for, you should investigate it. (Review what you learned in Lesson 6 about buffer overflows, which are attacks that may display symptoms of increased memory consumption.)
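The monitoring approach amounts to baselining and flagging deviations. The sketch below keeps the idea to pure arithmetic: record normal memory readings, then flag a sample that sits well above the baseline mean. The readings and the three-sigma threshold are invented for illustration:

```python
from statistics import mean, stdev

# Baseline-deviation sketch for host memory monitoring.
# Sample values (MB of RAM in use) and the 3-sigma rule are illustrative.

baseline = [2048, 2100, 1990, 2075, 2010, 2060]  # normal usage samples, MB

def is_anomalous(sample_mb, baseline, sigmas=3):
    """Flag a reading more than `sigmas` standard deviations above baseline."""
    return sample_mb > mean(baseline) + sigmas * stdev(baseline)

normal_reading = is_anomalous(2080, baseline)   # within the normal band
suspect_reading = is_anomalous(3500, baseline)  # far above baseline: investigate
```

A flagged reading is not proof of compromise; it is the "cannot be accounted for" trigger that should start an investigation.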

Data Classification

Another process that should be considered is the data classification policy of the organization. Well-secured companies classify all data according to its sensitivity level and organize data types to apply control appropriate to each sensitivity level. Table 5-1 lists some potential classification levels for commercial enterprises.

Antivirus

Antivirus software is designed to identify viruses, Trojans, and worms. It deletes them or at least quarantines them until they can be removed. This identification process requires that you frequently update the software's definition files, the files that make it possible for the software to identify the latest viruses. If a new virus is created that has not yet been identified in the list, you cannot protect against it until the virus definition is added and the new definition file is downloaded.
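Signature-based detection can be sketched as hash matching against a definition list: if a sample's hash is not yet in the definitions, it goes undetected, which is exactly why the definition file must be updated frequently. The "malware" content below is an invented byte string, not real malware:

```python
import hashlib

# Signature-matching sketch: a definition file is, at its simplest, a
# set of known-bad hashes. The sample content here is invented.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_malware = b"fake-malware-sample-v1"
definitions = {sha256(known_malware)}  # the "definition file"

def scan(data: bytes, definitions) -> bool:
    """Return True if the data matches a known signature."""
    return sha256(data) in definitions

detected = scan(known_malware, definitions)                  # matches a signature
new_variant = scan(b"fake-malware-sample-v2", definitions)   # not yet defined
```

The undetected variant illustrates the gap the text describes: until its signature is added and the new definition file downloaded, the scanner is blind to it.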

Manage Exceptions

Any control settings that are flagged during the control testing procedures must be correctly handled. In some cases the settings must simply be corrected, but in others the decision is not so easy. In some cases the recommended remediation causes more problems (at least immediately) than it solves. This is especially true of a software upgrade that fixes a vulnerability but causes immediate issues with the function of the software or the system. An organization may flag such an event as an exception to its stated goal of addressing vulnerabilities in the target time frame. In most cases an exception is granted an extension of the target time frame; in other cases the exception may be deemed to be "unresolvable," and the organization just has to live with it as long as it uses the system. A third possible way this might play out is that it may be granted several extensions and then deemed unresolvable.

Rogue Devices on the Network (Common Network-Related Symptoms)

Any time new devices appear on a network, there should be cause for suspicion. While it is possible that users may be introducing these devices innocently, there are also a number of bad reasons for these devices to be on the network. The following types of illegitimate devices may be found on a network: -Wireless key loggers: These collect information and transmit it to the criminal via Bluetooth or Wi-Fi. -Wi-Fi and Bluetooth hacking gear: This gear is designed to capture both Bluetooth and Wi-Fi transmissions. -Rogue access points: Rogue APs are designed to lure your hosts into a connection for a peer-to-peer attack. -Rogue switches: These switches can attempt to create a trunk link with a legitimate switch, thus providing access to all VLANs. -Mobile hacking gear: This gear allows a malicious individual to use software along with software-defined radios to trick cell phone users into routing connections through a fake cell tower.

Unusual Traffic Spikes

Any unusual spikes in traffic that are not expected should be cause for alarm. Just as an increase in bandwidth usage may indicate DoS or DDoS activity, unusual spikes in traffic may also indicate this type of activity. Again, know what your traffic patterns are and create a baseline of this traffic rhythm. With traffic spikes, there are usually accompanying symptoms such as network slowness and, potentially, alarms from any IPS or IDS systems you have deployed. Keep in mind that there are other legitimate reasons for traffic spikes. The following are some of the normal activities that can cause these spikes: -Backup traffic in the LAN -Virus scanner updates -Operating system updates -Mail server issues

Corporate Confidential

Anything that needs to be kept confidential within the organization. This may include the following:
-Plan announcements
-Processes and procedures that may be unique to the organization
-Profit data and estimates
-Salaries
-Market share figures
-Customer lists
-Performance appraisals

Insecure Direct Object References

Applications frequently use the actual name or key of an object when generating web pages. Applications don't always verify that a user is authorized for the target object. This results in an insecure direct object reference flaw. Such an attack can come from an authorized user, meaning that the user has permission to use the application but is accessing information to which she should not have access. To prevent this problem, each direct object reference should undergo an access check. Code review of the application with this specific issue in mind is also recommended.
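
The required access check can be sketched in a few lines. The in-memory "database" and ownership model below are hypothetical; the point is that the object key supplied by the user is never trusted on its own, even for an authenticated user.

```python
# Sketch: an explicit access check on each direct object reference.
# DOCUMENTS and its ownership model are invented for illustration.

DOCUMENTS = {
    "inv-1001": {"owner": "alice", "body": "Alice's invoice"},
    "inv-1002": {"owner": "bob",   "body": "Bob's invoice"},
}

def fetch_document(doc_id, requesting_user):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError("no such document")
    # The access check: being authenticated is not the same as being
    # authorized for this particular object.
    if doc["owner"] != requesting_user:
        raise PermissionError("user not authorized for this object")
    return doc["body"]

print(fetch_document("inv-1001", "alice"))  # allowed: alice owns it
# fetch_document("inv-1002", "alice") would raise PermissionError
```

A vulnerable version would simply return `DOCUMENTS[doc_id]`, letting any logged-in user walk the key space.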

SIEM options

ArcSight: ArcSight, owned by HP, sells SIEM systems that collect security log data from security technologies, operating systems, applications, and other log sources and analyze that data for signs of compromise, attacks, or other malicious activity. The solution comes in a number of models, based on the number of events the system can process per second and the number of devices supported. The selection of the model is important to ensure that the device is not overwhelmed trying to process the traffic. This solution can also generate compliance reports for HIPAA, SOX, and PCI-DSS. For more information, see https://saas.hpe.com/en-us/software/siem-security-information-event-management.
QRadar: The IBM SIEM solution, QRadar, purports to help eliminate noise by applying advanced analytics to chain multiple incidents together and identify security offenses requiring action. Purchase also permits access to the IBM Security App Exchange for threat collaboration and management. For more information, see www-03.ibm.com/software/products/en/qradar.
Splunk: Splunk is a SIEM system that can be deployed as a premises-based or cloud-based solution. The data it captures can be analyzed using searches written in Splunk Search Processing Language (SPL). Splunk uses machine-driven data imported by connectors or add-ons. For example, the Splunk add-on for Oracle Database allows a Splunk software administrator to collect and ingest data from an Oracle database server.
AlienVault/OSSIM: AlienVault produces both commercial and open source SIEM systems. Open Source Security Information Management (OSSIM) is the open source version, and the commercially available AlienVault Unified Security Management (USM) goes beyond traditional SIEM software with all-in-one security essentials and integrated threat intelligence. Figure 14-12 shows the Executive view of the AlienVault USM console. (Figure 14-12: AlienVault)
Kiwi Syslog: Kiwi Syslog is log management software that provides centralized storage of log data and SNMP data from Windows- or Linux-based hosts and appliances. While it combines the functions of SNMP collector and log manager, it lacks many of the features found in other systems; however, it is very economical.

Screened Subnets

As mentioned in Lesson 1, in a screened subnet, two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network. The advantages of a screened subnet include the following: It offers the added security of two firewalls before the internal network. One firewall is placed before the DMZ, protecting the devices in the DMZ. Disadvantages include the following: It is more costly than using either a dual-homed or three-legged firewall. Configuring two firewalls adds complexity. Figure 14-8 shows the placement of the firewalls to create a screened subnet. The router is acting as the outside firewall, and the firewall appliance is the second firewall. In any situation where multiple firewalls are in use, such as an active/passive cluster of two firewalls, care should be taken to ensure that TCP sessions are not traversing one firewall while return traffic of the same session is traversing the other. When stateful filtering is being performed, the return traffic will be denied, which will break the user connection.In the real world, various firewall approaches are mixed and matched to meet requirements, and you may find elements of all these architectural concepts being applied to a specific situation.

dual-homed firewall

As you learned in Lesson 1, a dual-homed firewall has two network interfaces: one pointing to the internal network and another connected to the untrusted network. In many cases, routing between these interfaces is turned off. The firewall software allows or denies traffic between the two interfaces based on the firewall rules configured by the administrator. The following are some of the advantages of this setup: The configuration is simple. It's possible to perform IP masquerading (NAT). It is less costly than using two firewalls. Disadvantages include the following: There is a single point of failure. It is not as secure as other options. A dual-homed firewall (also called a dual-homed host) location is shown in Figure 14-5.Figure 14-5: Location of a Dual-Homed Firewall

Network Segmentation

As you learned in Lesson 3, "Recommending and Implementing the Appropriate Response and Countermeasure," VLANs are used to separate devices connected to the same switch at both Layer 2 and Layer 3. This is an application of the concept of network segmentation. A requirement of a basic (or primary) VLAN is that all the devices in the VLAN must also be in the same IP subnet. In some cases, you need to create separation between devices in the same primary VLAN. This can be done by implementing private VLANs (PVLANs), which are VLANs within a VLAN. You can create PVLANs after you create the primary VLAN. Then, by setting the switch port to one of three states, you can make PVLANs within the primary VLAN. A port can be in three different states when using PVLANs:
-Promiscuous: A port set to promiscuous can communicate with all private VLAN ports. This is typically how the port that goes from the switch to the router is set.
-Isolated: A port set to this state can only communicate with promiscuous ports. This setting is used to isolate a device from other ports in the switch.
-Community: A port with this setting can communicate with other ports that are members of the community and with promiscuous ports but not with ports from other communities or with isolated ports.
Figure 12-9 shows the use of these port types. All ports in the switch belong to the primary VLAN 100, and so all these ports are in the same IP subnet. (Figure 12-9: PVLANs) The port from the switch to the router is set as promiscuous, which means that port can communicate with all other ports on the switch (which is necessary to route packets to other parts of the network). VLAN 101 and VLAN 102 each contain a single device. The ports on the switch connected to those devices are set to isolated, which means those two devices can only communicate with the switch and the router. VLAN 103 contains two devices, so it is a community VLAN. The ports leading to these two devices are set to community. They can communicate with one another and the router and switch but not with any other PVLANs in the switch. You should use PVLANs in cases where devices that are in the same primary VLAN and connected to the same switch must be separated.
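
The reachability rules for the three port states can be captured in a small truth function. This is only a model of the behavior described above, not switch configuration; the tuple representation and community IDs are assumptions for the sketch.

```python
# Sketch: PVLAN reachability rules as a predicate. Ports are modeled as
# (state, community_id) tuples; community_id is None except for community ports.

def pvlan_can_talk(a, b):
    sa, ca = a
    sb, cb = b
    if "promiscuous" in (sa, sb):
        return True                      # promiscuous talks to every port
    if sa == "isolated" or sb == "isolated":
        return False                     # isolated talks only to promiscuous
    return ca == cb                      # community ports: same community only

router  = ("promiscuous", None)          # uplink port to the router
host_a  = ("isolated", None)             # e.g., VLAN 101 host
host_c1 = ("community", 103)             # two hosts in community VLAN 103
host_c2 = ("community", 103)

print(pvlan_can_talk(host_a, router))    # True: isolated reaches promiscuous
print(pvlan_can_talk(host_c1, host_c2))  # True: same community
print(pvlan_can_talk(host_a, host_c1))   # False: isolated cannot reach community
```

Walking Figure 12-9's topology through this function reproduces the behavior the text describes.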

Assessments

Assessments, which can be internal or external, focus on the effectiveness of the current controls, policies, and procedures. Rather than working from a checklist, these assessments attempt to determine whether issues the controls were designed to address still exist. You might think of these types of examinations as checking to see whether what the organization is doing is effective. One approach to assessment is to perform a vulnerability scan, using a tool such as the Microsoft Baseline Security Analyzer (MBSA). This tool can scan devices for missing patches, weak passwords, and insecure configurations.

Context Based Authentication Location

At one time, cybersecurity professionals knew that all the network users were safely in the office and behind a secure perimeter created and defended with every tool possible. That is no longer the case. Users now access your network from home, wireless hotspots, hotel rooms, and all sorts of other locations that are less than secure. When you design authentication, you can consider the physical location of the source of an access request. A scenario for this might be that Alice is allowed to access the Sales folder at any time from the office, but only from 9 to 5 from her home and never from elsewhere. Authentication systems can also use location to identify requests to authenticate and access a resource from two different locations in a very short amount of time, one of which could be fraudulent. Finally, these systems can sometimes make real-time assessments of threat levels in the region where a request originates.
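
The "two locations in a very short amount of time" check can be sketched as an implied-travel-speed test: compute the great-circle distance between successive login locations and flag any speed no traveler could achieve. The coordinates and the 900 km/h cutoff below are illustrative assumptions, not values from the text.

```python
# Sketch: flag "impossible travel" between two authentication events.
# Coordinates and the speed threshold are assumed for illustration.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(loc1, loc2, hours_apart, max_kmh=900):
    """True if the implied travel speed exceeds the plausible maximum."""
    return km_between(*loc1, *loc2) / max(hours_apart, 1e-9) > max_kmh

london   = (51.5, -0.12)
new_york = (40.7, -74.0)
print(impossible_travel(london, new_york, hours_apart=0.5))  # flagged: ~5,500 km in 30 min
print(impossible_travel(london, new_york, hours_apart=10))   # plausible: a long flight
```

A hit from this check would typically trigger step-up authentication or an alert rather than an outright block, since geolocation data is imprecise.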

Audits

Audits differ from internal assessments in that they are usually best performed by a third party. An organization should conduct internal and third-party audits as part of any security assessment and testing strategy. An audit should test all security controls that are currently in place. Some guidelines to consider as part of a good security audit plan include the following:
-At minimum, perform annual audits to establish a security baseline.
-Determine your organization's objectives for the audit and share them with the auditors.
-Set the ground rules for the audit before the audit starts, including the dates/times of the audit.
-Choose auditors who have security experience.
-Involve business unit managers early in the process.
-Ensure that auditors rely on experience, not just checklists.
-Ensure that the auditor's report reflects risks that your organization has identified.
-Ensure that the audit is conducted properly.
-Ensure that the audit covers all systems and all policies and procedures.
-Examine the report when the audit is complete.
Many regulations today require that audits occur. Organizations used to rely on Statement on Auditing Standards (SAS) 70, which provided auditors information and verification about data center controls and processes related to the data center user and financial reporting. An SAS 70 audit verifies that the controls and processes set in place by a data center are actually followed. The Statement on Standards for Attestation Engagements (SSAE) 16 is a newer standard that verifies the controls and processes and also requires a written assertion regarding the design and operating effectiveness of the controls being reviewed. An SSAE 16 audit results in a Service Organization Control (SOC) 1 report. This report focuses on internal controls over financial reporting. There are two types of SOC 1 reports:
-SOC 1, Type 1 report: Focuses on the auditors' opinion of the accuracy and completeness of the data center management's design of controls, system, and/or service.
-SOC 1, Type 2 report: Includes Type 1 and an audit on the effectiveness of controls over a certain time period, normally between six months and a year.
Two other report types are also available: SOC 2 and SOC 3. Both of these audits provide benchmarks for controls related to the security, availability, processing integrity, confidentiality, or privacy of a system and its information. A SOC 2 report includes service auditor testing and results, and a SOC 3 report provides only the system description and auditor opinion. A SOC 3 report is for general use and provides a level of certification for data center operators that assures data center users of facility security, high availability, and process integrity. Table 10-7 briefly compares the three types of SOC reports.

IP Security (IPsec)

Authentication Header (AH): AH provides data integrity, data origin authentication, and protection from replay attacks. Encapsulating Security Payload (ESP): ESP provides all that AH does as well as data confidentiality. Internet Security Association and Key Management Protocol (ISAKMP): ISAKMP handles the creation of a security association for the session and the exchange of keys. Internet Key Exchange (IKE): Also sometimes referred to as IPsec Key Exchange, IKE provides the authentication material used to create the keys exchanged by ISAKMP during peer authentication. This was proposed to be performed by a protocol called Oakley that relied on the Diffie-Hellman algorithm, but Oakley has been superseded by IKE.

Drive Capacity Consumption (Common Host-Related Symptoms)

Available disk space on the host decreasing for no apparent reason is cause for concern. It could be that the host is storing information to be transmitted at a later time. Conversely, some malware deletes files, causing an unexplained increase in available space. Finally, in some cases, the purpose is to fill the drive as part of a DoS or DDoS attack. One of the difficult aspects of this is that the drive is typically filled with files that cannot be seen or that are hidden. When users report a sudden filling of their hard drive, or even a slow buildup over time that cannot be accounted for, you should scan the device for malware in Safe Mode (press F8 during the boot). Scanning with multiple products is advised as well. The presence of any unauthorized software should be another red flag. If you have invested in a vulnerability scanner, you can use it to create a list of installed software that can be compared to a list of authorized software. Unfortunately, many types of malware do a great job of escaping detection.
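
A basic capacity watchdog compares current usage against a recorded baseline. The sketch below uses Python's standard `shutil.disk_usage`; the baseline value and the 10% alert margin are assumptions for illustration, and a real monitor would persist the baseline and run on a schedule.

```python
# Sketch: flag unexplained drive-space growth against a stored baseline.
# The margin (10% of total capacity) is an assumed alerting threshold.
import shutil

def usage_alert(path, baseline_used_bytes, margin=0.10):
    """True if used space has grown more than margin * total since baseline."""
    usage = shutil.disk_usage(path)
    growth = usage.used - baseline_used_bytes
    return growth > margin * usage.total

# Example: record a baseline now, then compare later runs against it.
baseline = shutil.disk_usage("/").used
print(usage_alert("/", baseline))  # False immediately after baselining
```

An alert here is a prompt to investigate (hidden files, malware scan), not proof of compromise; backups and updates also consume space legitimately.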

Continuous Monitoring

Before continuous monitoring can be successful, an organization must ensure that operational baselines are captured. After all, an organization cannot recognize abnormal patterns or behavior if it doesn't know what "normal" is. These baselines should also be revisited periodically to ensure that they have not changed. For example, if a single web server is upgraded to a web server farm, a new performance baseline should be captured. Security analysts must ensure that the organization's security posture is maintained at all times. This requires continuous monitoring. Auditing and security logs should be reviewed on a regular schedule. Performance metrics should be compared to baselines. Even simple acts such as normal user login/logout times should be monitored. If a user suddenly starts logging in and out at irregular times, the user's supervisor should be alerted to ensure that the user is authorized. Organizations must always be diligent in monitoring the security of the enterprise. An example of a continuous monitoring tool is Microsoft Security Compliance Manager (SCM). This tool can be used to monitor compliance with a baseline. It works in conjunction with two other Microsoft tools: Group Policy and Microsoft System Center Configuration Manager (SCCM).

What are notable strategies for penetration tests?

Blind test (minimal knowledge, defending team is aware), double-blind test (minimal knowledge, defending team is unaware), and target test (both teams are aware and are provided maximum knowledge about the network and the type of test).

Bro

Bro is another open source NIDS. It is supported only on Unix/Linux platforms. It is not as user friendly as Snort, in that configuring it requires more expertise. Like many other open source products, it is supported by a nonprofit organization, the Software Freedom Conservancy. Figure 14-3 shows the main stats dashboard, which includes information such as events, top protocols, top talkers, top HTTP hosts, top destination ports, and other statistics.

Buffer Overflow

Buffers are portions of system memory that are used to store information. A buffer overflow is an attack that occurs when the amount of data submitted is larger than the buffer can handle. Typically, this type of attack is possible because of poorly written application or operating system code. A successful overflow can corrupt adjacent memory, allowing an attacker to inject and execute malicious code or to crash the system in a denial-of-service (DoS) attack.
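
The root cause is a missing length check before a copy. Python's own buffers are bounds-checked, so the sketch below only models the check that safe low-level code must perform explicitly; the 16-byte buffer size is an arbitrary assumption.

```python
# Sketch: the bounds check whose absence enables a buffer overflow.
# A vulnerable program would copy the data regardless of its length.

BUFFER_SIZE = 16
buffer = bytearray(BUFFER_SIZE)

def safe_copy(data: bytes) -> bool:
    """Copy data into the fixed buffer only if it fits; refuse otherwise."""
    if len(data) > BUFFER_SIZE:
        return False          # reject input that would overflow the buffer
    buffer[:len(data)] = data
    return True

print(safe_copy(b"hello"))    # True: 5 bytes fit in a 16-byte buffer
print(safe_copy(b"A" * 64))   # False: 64 bytes would overflow it
```

In C, the same discipline means preferring length-bounded calls (e.g., checking input size before `memcpy`) over unbounded string copies.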

Benchmarks

CIS Benchmarks are recommended technical settings for operating systems, middleware and software applications, and network devices. They are directed at organizations that must comply with various compliance programs, such as PCI-DSS (for credit card data), SOX (for financial reporting), NIST 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations), and ISO 27000.

Control Objectives for Information and Related Technology (COBIT)

COBIT is a security controls development framework that uses a process model to subdivide IT into four domains: Plan and Organize (PO), Acquire and Implement (AI), Deliver and Support (DS), and Monitor and Evaluate (ME), as illustrated in Figure 10-2. These four domains are further broken down into 34 processes. COBIT aligns with the ITIL, PMI, ISO, and TOGAF frameworks and is mainly used in the private sector. COBIT also documents five principles:
-Meeting stakeholder needs
-Covering the enterprise end-to-end
-Applying a single integrated framework
-Enabling a holistic approach
-Separating governance from management

What cables should you carry?

Cables: You should have a variety of cables for connecting to storage devices. Examples include the following:
-Combined signal and power cables: These include cables for USB and adapters, SATA, and FireWire (both 400 and 800).
-Signal cables: These include SATA and eSATA.
-Power cables: These include Molex and SATA.

Certify/Accredit

Certification is the process of evaluating software for its security effectiveness with regard to the customer's needs. Ratings can certainly be an input to this process but are not the only consideration. Accreditation is the formal acceptance of the adequacy of a system's overall security by management. Provisional accreditation is given for a specific amount of time and lists application, system, or accreditation documentation changes. Full accreditation grants accreditation without any required changes. Provisional accreditation becomes full accreditation once all the changes are completed, analyzed, and approved by the certifying body. While certification and accreditation are related, they are not the same thing, and they are also considered to be two steps in a process.

Commercial Business Classifications

Commercial businesses usually classify data using four main classification levels, listed here from highest sensitivity level to lowest:
-Confidential
-Private
-Sensitive
-Public
Data that is confidential includes trade secrets, intellectual data, application programming code, and other data that could seriously affect the organization if unauthorized disclosure occurred. Data at this level would only be available to personnel in the organization whose work relates to the data's subject. Access to confidential data usually requires authorization for each access. Confidential data is exempt from disclosure under the Freedom of Information Act. In most cases, the only way for external entities to have authorized access to confidential data is as follows:
-After signing a confidentiality agreement
-When complying with a court order
-As part of a government project or contract procurement agreement
Data that is private includes any information related to personnel, including human resources records, medical records, and salary information, that is used only within the organization. Data that is sensitive includes organizational financial information and requires extra measures to ensure its CIA and accuracy. Public data is data whose disclosure would not cause a negative impact on the organization.

Evidence Production

Computer investigations require different procedures than regular investigations because the time frame for the investigator is compressed, and an expert might be required to assist in the investigation. Also, computer information is intangible and often requires extra care to ensure that the data is retained in its original format. Finally, the evidence in a computer crime is difficult to gather. After a decision has been made to investigate a computer crime, you should follow standardized procedures, including the following:
-Identify what type of system is to be seized.
-Identify the search and seizure team members.
-Determine the risk of the suspect destroying evidence.
After law enforcement has been informed of a computer crime, the constraints on the organization's investigator are increased. Turning over an investigation to law enforcement to ensure that evidence is preserved properly might be necessary. When investigating a computer crime, evidentiary rules must be addressed. Computer evidence should prove a fact that is material to the case and must be reliable. The chain of custody must be maintained. Computer evidence is less likely to be admitted in court as evidence if the process for producing it is not documented.

Countermeasures to race conditions

Countermeasures to these attacks are to make critical sets of instructions atomic: they either execute in order and in their entirety, or the changes they make are rolled back or prevented. It is also best for the system to lock access to certain items it will access when carrying out these sets of instructions.

Countermeasures to Time-of-Use Attacks include

Countermeasures to these attacks involve making critical sets of instructions atomic. This means they either execute in order and in entirety or the changes they make are rolled back or prevented. It is also best for the system to lock access to certain items it will use or touch when carrying out these sets of instructions.

Continual Improvement

Cybersecurity analysts can never just sit back, relax, and enjoy the ride. Security needs are always changing because the "bad guys" never take a day off. For this reason, it is vital that security professionals continuously work to improve their organization's security. Tied to this is the need to improve the quality of the security controls currently implemented. Quality improvement commonly uses a four-step model known as the Deming cycle, or the Plan-Do-Check-Act (PDCA) cycle:
Step 1. Plan: Identify an area of improvement and make a formal plan to implement it.
Step 2. Do: Implement the plan on a small scale.
Step 3. Check: Analyze the results of the implementation to determine whether it made a difference.
Step 4. Act: If the implementation made a positive change, implement it on a wider scale. Continuously analyze the results.
Other similar guidelines include Six Sigma, Lean, and Total Quality Management. No matter which one an organization uses, the result should still be a continuous cycle of improvement organizationwide.

XACML

Creates a system that decouples the access decision from an application or the local machine. Extensible Access Control Markup Language (XACML) is a standard for an access control policy language using XML. Its goal is to create an attribute-based access control system that decouples the access decision from the application or the local machine. It provides for fine-grained control of activities based on criteria such as the following:
-Attributes of the user requesting access (for example, all division managers in London)
-The protocol over which the request is made (for example, HTTPS)
-The authentication mechanism (for example, requester must be authenticated with a certificate)
XACML uses several distributed components, including the following:
-Policy enforcement point (PEP): This entity protects the resource that the subject (a user or an application) is attempting to access. When it receives a request from a subject, it creates an XACML request based on the attributes of the subject, the requested action, the resource, and other information.
-Policy decision point (PDP): This entity retrieves all applicable policies in XACML and compares the request with the policies. It transmits an answer (access or no access) back to the PEP.
XACML is valuable because it is able to function across application types. The process flow used by XACML is described in Figure 11-7. (Figure 11-7: XACML) XACML is a good solution when disparate applications that use their own authorization logic are in use in the enterprise. By leveraging XACML, developers can remove authorization logic from an application and centrally manage access using policies that can be managed or modified based on business need without making any additional changes to the applications themselves.
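
The PEP/PDP decoupling can be sketched without any XML at all. The dictionaries below stand in for XACML policies and attribute sets, and every name in them is invented; the point is only the flow: the application-side PEP builds a request from attributes and enforces whatever answer the central PDP returns.

```python
# Sketch of the XACML PEP/PDP flow using plain dictionaries in place of XML.
# The policy, roles, and attribute names are hypothetical.

POLICIES = [
    # Permit division managers in London to read the sales resource over HTTPS.
    {"role": "division-manager", "location": "London",
     "protocol": "HTTPS", "resource": "sales", "action": "read",
     "decision": "Permit"},
]

def pdp_evaluate(request):
    """Policy decision point: compare the request against all policies."""
    for policy in POLICIES:
        if all(request.get(k) == v for k, v in policy.items() if k != "decision"):
            return policy["decision"]
    return "Deny"   # default-deny when no policy matches

def pep_request(subject_attrs, resource, action, protocol):
    """Policy enforcement point: build the request, enforce the PDP's answer."""
    request = dict(subject_attrs, resource=resource, action=action, protocol=protocol)
    return pdp_evaluate(request) == "Permit"

manager = {"role": "division-manager", "location": "London"}
print(pep_request(manager, "sales", "read", "HTTPS"))  # permitted
print(pep_request(manager, "sales", "read", "HTTP"))   # denied: wrong protocol
```

Because the PDP is the only place policies live, changing who may do what requires no change to the applications themselves.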

XSS

Cross-site scripting (XSS) occurs when an attacker locates a website vulnerability and injects malicious code into the web application. Many websites allow and even incorporate user input into a web page to customize the web page. If a web application does not properly validate this input, one of two things could happen: The text may be rendered on the page, or a script may be executed when others visit the web page. Figure 6-5 shows a high-level view of an XSS attack.
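
The core mitigation is output encoding: characters a browser would interpret as markup are converted to harmless entities before user input is placed into the page. The sketch below uses Python's standard `html.escape`; the greeting template is invented for illustration.

```python
# Sketch: neutralizing reflected user input with output encoding,
# the core XSS mitigation. The page template is a made-up example.
import html

def render_greeting(user_input: str) -> str:
    # Escape at the point where the value enters the page.
    return "<p>Hello, {}!</p>".format(html.escape(user_input))

print(render_greeting("Alice"))
print(render_greeting('<script>alert("xss")</script>'))  # rendered as text, not executed
```

Escaping must match the output context (HTML body, attribute, JavaScript, URL); input validation alone is not sufficient.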

Cross-Site Scripting (XSS)

Cross-site scripting (XSS) occurs when an attacker locates a website vulnerability that allows the attacker to inject malicious code into the web application. Cross-site scripting and its mitigation are covered in Lesson 6.

Cross-Training/Mandatory Vacations

Cross-training (also known as job rotation) ensures that more than one person fulfills the job tasks of a single position within an organization. This job rotation ensures that more than one person is capable of performing those tasks, providing redundancy. It is also an important tool in helping an organization recognize when fraudulent activities have occurred. From a security perspective, job rotation refers to the training of multiple users to perform the duties of a position to help prevent fraud by any individual employee. The idea is that by making multiple people familiar with the legitimate functions of the position, the likelihood is greater that unusual activities by any one person will be noticed. This is often used in conjunction with mandatory vacations, in which all users are required to take time off, allowing another to fill their position while gone, which enhances the opportunity to discover unusual activity.

Context Based Authentication Time

Cybersecurity professionals have for quite some time been able to prevent access to a network entirely by configuring login hours in a user's account profile. However, they have not been able to prevent access to individual resources on a time-of-day basis until recently. For example, you might want to allow Joe to access the sensitive Sales folder during the hours of 9 to 5 but deny him access to that folder during other hours. Or you might configure the system so that when Joe accesses resources after certain hours, he is required to give another password or credential (a process often called step-up authentication) or perhaps even have a text code sent to his e-mail address that must be provided to allow this access.
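
The time-of-day rule in the example can be sketched as a small decision function: inside business hours access is allowed outright; outside them the system demands step-up authentication. The 9-to-5 window and the decision labels are assumptions taken from the example, not a real product's API.

```python
# Sketch of a time-based access decision with step-up authentication
# outside business hours. Hours and labels are illustrative.

BUSINESS_HOURS = range(9, 17)   # 09:00 through 16:59, an assumed window

def access_decision(hour: int) -> str:
    """Return 'allow' during business hours, 'step-up' otherwise."""
    if hour in BUSINESS_HOURS:
        return "allow"
    return "step-up"            # require an extra credential after hours

print(access_decision(10))  # allow: within 9-to-5
print(access_decision(22))  # step-up: after hours
```

A real system would evaluate this per resource (e.g., the Sales folder) and combine it with the other context signals discussed in this section.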

DHCP snooping

DHCP snooping: The main purpose of DHCP snooping is to prevent a poisoning attack on the DHCP database. This is not a switch attack per se, but one of its features can support DAI. It creates a mapping of IP addresses to MAC addresses from a trusted DHCP server that can be used in the validation process of DAI.
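
The IP-to-MAC binding table that DHCP snooping builds, and the DAI-style validation it enables, can be sketched as a lookup. The bindings and the spoofed ARP reply below are invented; on a real switch the table is populated automatically from traffic on trusted DHCP ports.

```python
# Sketch: validating ARP replies against a DHCP snooping binding table,
# as Dynamic ARP Inspection (DAI) does. Entries here are hypothetical.

BINDINGS = {"10.0.0.5": "00:1a:2b:3c:4d:5e"}   # learned from a trusted DHCP server

def arp_reply_valid(sender_ip, sender_mac):
    """True only if the IP-MAC pair matches the snooping binding table."""
    expected = BINDINGS.get(sender_ip)
    return expected is not None and expected == sender_mac.lower()

print(arp_reply_valid("10.0.0.5", "00:1A:2B:3C:4D:5E"))  # True: matches binding
print(arp_reply_valid("10.0.0.5", "de:ad:be:ef:00:01"))  # False: spoofed reply dropped
```

Replies that fail this check are dropped, which is what blocks ARP poisoning of hosts behind the switch.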

Data Aggregation and Correlation

Data aggregation is the process of gathering a large amount of data and filtering and summarizing it in some way, based on some common variable in the information. Data correlation is the process of locating variables in the information that seem to be related. For example, say that every time there is a spike in SYN packets, you seem to have a DoS attack. When you apply these processes to the data in security logs of devices, it helps you identify correlations that help you identify issues and attacks. A good example of such a system is a security information and event management (SIEM) system. These systems collect the logs, analyze the logs, and, through the use of aggregation and correlation, help you identify attacks and trends. SIEM systems are covered in more detail in Lesson 14, "Using Cybersecurity Tools and Technologies."
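
The two steps can be shown in miniature: first aggregate events on a common variable (here, per-minute counts by type), then correlate by looking for minutes where a SYN spike and a DoS alarm coincide. The log records and the spike threshold are invented; a SIEM performs the same steps over real device logs at scale.

```python
# Sketch: aggregation then correlation over a tiny, invented event log.
from collections import Counter

events = [
    {"minute": 1, "type": "syn"}, {"minute": 1, "type": "syn"},
    {"minute": 1, "type": "syn"}, {"minute": 1, "type": "dos-alarm"},
    {"minute": 2, "type": "syn"},
]

# Aggregation: count events per (minute, type).
counts = Counter((e["minute"], e["type"]) for e in events)

# Correlation: flag minutes where a SYN spike coincides with a DoS alarm.
suspicious = sorted(m for (m, t), n in counts.items()
                    if t == "syn" and n >= 3
                    and counts.get((m, "dos-alarm"), 0) > 0)
print(suspicious)  # minute 1 shows both symptoms
```

A SIEM's correlation rules generalize this pattern: "when A and B occur within the same window, raise an offense."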

Decomposition

Decomposition is the process of breaking something down to discover how it works. When applied to software, it is the process of discovering how the software works, perhaps who created it, and, in some cases, how to prevent the software from performing malicious activity.

Spanning Tree Protocol (STP)

Defined by the IEEE 802.1D standard, STP allows a network to have redundant Layer 2 connections while logically preventing loops, which could lead to symptoms such as broadcast storms and MAC address table corruption.

Data Exfiltration (Common Host-Related Symptoms)

Data exfiltration is the theft of data from a device. Any reports of missing or deleted data should be investigated. In some cases, the data may still be present, but it has been copied and transmitted to the attacker. Software tools are available to help track the movement of data in transmissions. Data loss prevention (DLP) software attempts to prevent data leakage. It does this by maintaining awareness of actions that can and cannot be taken with respect to a document. For example, it might allow printing of a document but only at the company office. It might also disallow sending the document through e-mail. DLP software uses ingress and egress filters to identify sensitive data that is leaving the organization and can prevent such leakage. Another scenario might be the release of product plans that should be available only to the Sales group. You could set the following policy for that document:
-It cannot be e-mailed to anyone other than Sales group members.
-It cannot be printed.
-It cannot be copied.
There are two locations where you can implement this policy:
-Network DLP: Installed at network egress points near the perimeter, network DLP analyzes network traffic.
-Endpoint DLP: Endpoint DLP runs on end-user workstations or servers in the organization.
You can use both precise and imprecise methods to determine what is sensitive:
-Precise methods: These methods involve content registration and trigger almost zero false-positive incidents.
-Imprecise methods: These methods can include keywords, lexicons, regular expressions, extended regular expressions, metadata tags, Bayesian analysis, and statistical analysis.
The value of a DLP system resides in the level of precision with which it can locate and prevent the leakage of sensitive data.
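
The product-plans policy in the example can be expressed as a simple action check. The policy table, document name, and action labels below are hypothetical; real DLP products attach such rules to documents via metadata tags rather than a hard-coded dictionary.

```python
# Sketch of the example DLP policy as an action check. All names are invented.

POLICY = {
    "product-plans": {
        "email": {"allowed_groups": {"Sales"}},  # only Sales may e-mail it
        "print": {"allowed_groups": set()},      # printing disallowed for everyone
        "copy":  {"allowed_groups": set()},      # copying disallowed for everyone
    }
}

def dlp_allows(document, action, user_groups):
    """True if the action on the document is permitted for the user's groups."""
    rule = POLICY.get(document, {}).get(action)
    if rule is None:
        return True                      # no rule: the action is unconstrained
    return bool(set(user_groups) & rule["allowed_groups"])

print(dlp_allows("product-plans", "email", ["Sales"]))      # True
print(dlp_allows("product-plans", "email", ["Marketing"]))  # False
print(dlp_allows("product-plans", "print", ["Sales"]))      # False
```

Whether the check runs at a network egress point or on the endpoint itself is exactly the network-DLP versus endpoint-DLP distinction above.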

Data Integrity

Data integrity refers to the correctness, completeness, and soundness of the data. One of the goals of integrity services is to protect the integrity of data or at least to provide a means of discovering when data has been corrupted or has undergone an unauthorized change. One of the challenges with data integrity attacks where data does not move is that the effects may not be detected for years, until there is a reason to question the data. Identifying the compromise of data integrity can be made easier by using file hashing algorithms and tools to check seldom-used but sensitive files for unauthorized changes after an incident. These tools can be run to quickly identify files that have been altered. They can help you get a better assessment of the scope of the data corruption.
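
The file-hashing check described above can be sketched with Python's standard `hashlib`: record a known-good SHA-256 digest, then re-hash the file later and compare. The temp file and its contents are created just for the demonstration.

```python
# Sketch: detecting an unauthorized change to a file by comparing stored
# and current SHA-256 digests. The file here is a throwaway demo file.
import hashlib, tempfile, os

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a known-good digest, then check the file later.
fd, path = tempfile.mkstemp()
os.write(fd, b"quarterly figures")
os.close(fd)

baseline = sha256_of(path)
print(sha256_of(path) == baseline)   # True: unchanged

with open(path, "ab") as f:          # simulate tampering
    f.write(b" (altered)")
print(sha256_of(path) == baseline)   # False: integrity violated
os.remove(path)
```

File integrity monitoring tools automate exactly this loop across a baseline of sensitive files, alerting when a stored digest no longer matches.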

Data Classification Policy

Data should be classified based on its value to the organization and its sensitivity to disclosure. Assigning a value to data allows an organization to determine the resources that should be used to protect the data. Resources that are used to protect data include personnel resources, monetary resources, and access control resources. Classifying data allows you to apply different protective measures. Data classification is critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data. After data is classified, the data can be segmented based on the level of protection it needs. The classification levels ensure that data is handled and protected in the most cost-effective manner possible. An organization should determine the classification levels it uses based on the needs of the organization. A number of commercial business and military and government information classifications are commonly used. The information life cycle should also be based on the classification of the data. Organizations are required to retain certain information, particularly financial data, based on local, state, or government laws and regulations.

Proprietary Classification Level (Data Classifications)

Data that could reduce the company's competitive advantage, such as the technical specifications of a new product

Confidential Classification Level (Government and Military Classifications)

Data that is confidential includes patents, trade secrets, and other information that could seriously affect the government if unauthorized disclosure occurred.

Secret Classification Level (Government and Military Classifications)

Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed.

Sensitive but unclassified Classification Level (Government and Military Classifications)

Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security but could cause citizens to question the reputation of the government.

Top secret Classification Level (Government and Military Classifications)

Data that is top secret includes weapons blueprints, technology specifications, spy satellite information, and other military information that could gravely damage national security if disclosed.

Private Classification Level (Data Classifications)

Data that might not do the company damage but must be kept private for other reasons

DNS

Domain Name System (DNS) provides a hierarchical naming system for computers, services, and any resources connected to the Internet or a private network. You should enable Domain Name System Security Extensions (DNSSEC) to ensure that a DNS server is authenticated before the transfer of DNS information begins between the DNS server and client. Transaction Signature (TSIG) is a cryptographic mechanism that uses a shared secret key to authenticate DNS messages; it is commonly used to secure dynamic updates (for example, when a client's IP address or hostname changes) and zone transfers, and the TSIG record is used to validate the DNS client. As a security measure, you can configure internal DNS servers to communicate only with root servers; this prevents the internal DNS servers from communicating with any other external DNS servers.

The Start of Authority (SOA) record contains the information regarding a DNS zone's authoritative server. A DNS record's Time to Live (TTL) determines how long the record will live before it needs to be refreshed. When a record's TTL expires, the record is removed from the DNS cache. Poisoning the DNS cache involves adding false records to the cache; if you use a longer TTL, the resource record is read less frequently and is therefore less likely to be poisoned.

Let's look at a security issue related to DNS. An IT administrator installs new DNS name servers that host the company mail exchanger (MX) records and resolve the web server's public address. To secure the zone transfer between the DNS servers, the administrator uses only server ACLs. However, any secondary DNS servers would still be susceptible to IP spoofing attacks. Another example would be a security team determining that someone from outside the organization has obtained sensitive information about the internal organization by querying the company's external DNS server. The security manager should address the problem by implementing a split DNS, allowing the external DNS server to contain only information about domains that the outside world should be aware of and the internal DNS server to maintain authoritative records for internal systems.

Limit Communication to Trusted Parties

During an incident, communications should take place only with those who have been designated beforehand to receive such communications. Moreover, the content of these communications should be limited to what is necessary for each stakeholder to perform his or her role.

Dynamic ARP inspection (DAI)

Dynamic ARP inspection (DAI): This security feature intercepts all ARP requests and responses and compares each response's MAC address and IP address information against the MAC-IP bindings contained in a trusted binding table. This table is built by also monitoring all DHCP requests for IP addresses and maintaining the mapping of each resulting IP address to a MAC address (which is a part of DHCP snooping). If an incorrect mapping is attempted, the switch rejects the packet.
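The validation logic DAI performs can be sketched in a few lines: an ARP reply is accepted only if its IP-to-MAC mapping matches the trusted binding table built via DHCP snooping. The table entries below are made-up examples, and a real switch implements this in hardware/firmware rather than application code.

```python
# Illustrative sketch of dynamic ARP inspection: compare an ARP reply's
# claimed IP/MAC pair against the trusted DHCP-snooping binding table.

def inspect_arp_reply(binding_table, ip, mac):
    """Return True (accept) only if the pair matches the trusted table."""
    trusted_mac = binding_table.get(ip)
    # Unknown IP or mismatched MAC means the packet is rejected.
    return trusted_mac is not None and trusted_mac == mac
```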

Employee Privacy Issues and Expectation of Privacy

Employee privacy issues must be addressed by all organizations to ensure that the organizations are protected. However, organizations must give employees the proper notice of any monitoring that might be used. Organizations must also ensure that the monitoring of employees is applied in a consistent manner. Many organizations implement a no-expectation-of-privacy policy that the employee must sign after receiving the appropriate training. This policy should specifically describe any unacceptable behavior. Companies should also keep in mind that some actions are protected by the Fourth Amendment. Security professionals and senior management should consult with legal counsel when designing and implementing any monitoring solution.

With packet analyzers you typically see 5 sections of information

Ethernet II: This is the data link layer and contains the source and destination MAC addresses.
Internet Protocol Version 4: This is the network layer and contains the source and destination IP addresses.
Transmission Control Protocol: This is the transport layer and contains the source and destination port numbers.
Data of some type: This is the raw data. There is no data in this packet because it is a SYN packet, part of the TCP handshake. If this were HTTP data, for example, the section would be titled HTTP, and it would include the raw data.

In Figure 2-7, an HTTP packet has been highlighted, and the Layer 4 section (transport layer) has been expanded to show the source and destination ports. Below that are the seven flags, with an indication of which are on and which are off (in this case, the ACK and PSH flags are set). At the very bottom, the data portion is highlighted, revealing the data contents in the lower-right corner. Because this packet is unencrypted, you can read that this is an HTTP POST command, along with its details.
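To make the transport layer section concrete, here is a small sketch of what an analyzer does with the raw bytes of a TCP header: unpack the source and destination ports and decode the flag bits (FIN, SYN, RST, PSH, ACK, URG, plus the ECN bits). The header bytes in the usage test are constructed by hand for the example.

```python
# Sketch: parse ports and flag bits from the first 14 bytes of a raw
# TCP header (RFC 793 layout: ports, seq, ack, offset byte, flags byte).
import struct

FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR"]

def parse_tcp_header(raw):
    """Return source/destination ports and the set flags of a TCP header."""
    src, dst, seq, ack = struct.unpack("!HHII", raw[:12])
    flags_byte = raw[13]  # byte 12 is data offset/reserved, byte 13 is flags
    flags = [name for i, name in enumerate(FLAG_NAMES) if flags_byte & (1 << i)]
    return {"src_port": src, "dst_port": dst, "flags": flags}
```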

Evaluations

Evaluations are typically carried out by comparing configuration settings, patch status, and other security measures with a checklist to assess compliance with a baseline. They can be carried out by an external entity but are usually performed as an internal process. You might think of these evaluations as ensuring that the organization is doing what it has set out to do. While some evaluation approaches were developed for software development, the concepts can be (and have been) applied to these processes as well. This is another scenario that could be supported by SCM, Group Policy, and Microsoft SCCM.

Scanning

Even after you have taken all steps described thus far, consider using a vulnerability scanner to scan the devices or the network of devices that were affected. Make sure before you do so that you have updated the scanner so it can recognize the latest vulnerabilities and threats. This will help catch any lingering vulnerabilities that may still be present.

Corporate Policy

Even organizations that are not highly regulated may have a well-thought-out security policy that describes in detail the types of security mechanisms required in various scenarios. Hopefully they do, and hopefully these policies and their constituent procedures, standards, and guidelines are also supported by the assessment. In cases where an organization does not have such a program, it is incumbent on the cybersecurity analyst to advocate for the development of one.

Call list/escalation list

First responders to an incident should have contact information for all individuals who might need to be alerted during the investigation. This list should also indicate under what circumstance these individuals should be contacted to avoid unnecessary alerts and to keep the process moving in an organized manner.

Firewall Log

Examining a firewall log can be somewhat daunting at first, but if you understand the basic layout and know what certain acronyms stand for, you can usually find your way around one. For example, a Check Point log follows this format:

Time | Action | Firewall | Interface | Product | Source | Source Port | Destination | Service | Protocol | Translation | Rule

This is what a line out of the log might look like:

14:55:20 accept bd.pearson.com >eth1 product VPN-1 & Firewall-1 src 10.5.5.1 s_port 4523 dst xx.xxx.10.2 service http proto tcp xlate src xxx.xxx.146.12 rule 15
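A quick way to get comfortable with such lines is to tokenize them. The naive sketch below pulls out the key/value tokens (src, s_port, dst, service, proto, rule) from a line in the style shown above. It is illustrative only; real Check Point exports vary by version, and a translation pair like "xlate src" would overwrite the src field in this simple parser.

```python
# Hedged sketch: split one space-delimited firewall log line into named
# fields, keyed on the well-known tokens from the format above.

def parse_fw_line(line):
    """Map a 'time action firewall ...' style log line to a dict."""
    tokens = line.split()
    fields = {"time": tokens[0], "action": tokens[1], "firewall": tokens[2]}
    keys = {"src", "s_port", "dst", "service", "proto", "rule"}
    for i, tok in enumerate(tokens):
        # Each known key is followed by its value in the next token.
        if tok in keys and i + 1 < len(tokens):
            fields[tok] = tokens[i + 1]
    return fields
```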

Log viewers

Finally, because much evidence can be found in the logs located on the device, a robust log reading utility is also valuable. A log viewer should have the ability to read all Windows logs as well as the registry. Moreover, it should also be able to read logs created by other operating systems.

Degrading Functionality

Finally, some solutions create more issues than they resolve. In some cases, it may be impossible to implement mitigation due to the fact that it breaks mission-critical applications or processes. The organization may need to research an alternative solution.

SIEM (Security Information and Event Management)

For large enterprises, the amount of log data that needs to be analyzed can be quite large. For this reason, many organizations implement security information and event management (SIEM), which provides an automated solution for analyzing events and deciding where attention needs to be given. Most SIEM products support two ways of collecting logs from log generators:

Agentless: With this type of collection, the SIEM server receives data from the individual hosts without needing any special software installed on those hosts. Some servers pull logs from the hosts, which is usually done by having the server authenticate to each host and retrieve its logs regularly. In other cases, the hosts push their logs to the server, which usually involves each host authenticating to the server and transferring its logs regularly. Regardless of whether the logs are pushed or pulled, the server then performs event filtering and aggregation and log normalization and analysis on the collected logs.

Agent based: With this type of collection, an agent program is installed on the host to perform event filtering and aggregation and log normalization for a particular type of log. The host then transmits the normalized log data to a SIEM server, usually on a real-time or near-real-time basis, for analysis and storage. Multiple agents may need to be installed if a host has multiple types of logs of interest. Some SIEM products also offer agents for generic formats such as Syslog and SNMP. A generic agent is used primarily to get log data from a source for which a format-specific agent and an agentless method are not available. Some products also allow administrators to create custom agents to handle unsupported log sources.
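The "log normalization" step mentioned above can be pictured as mapping source-specific records into one common event schema so that the SIEM can correlate them. The source names and field layouts below are invented for the example; commercial SIEMs each define their own schemas (e.g., CEF or LEEF).

```python
# Illustrative sketch of SIEM log normalization: each log source has a
# different shape, and the agent maps them all to one common event dict.

def normalize(source, record):
    """Map a source-specific record into a common {host, severity, msg} event."""
    if source == "fw":      # hypothetical firewall record layout
        return {"host": record["fw"], "severity": record["level"],
                "msg": record["text"]}
    if source == "syslog":  # hypothetical syslog record layout
        return {"host": record["hostname"], "severity": record["pri"],
                "msg": record["message"]}
    raise ValueError("no agent available for source: " + source)
```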

Manual Peer Reviews

Formal code review involves a careful and detailed process with multiple participants and multiple phases. In this type of code review, software developers attend meetings where each line of code is reviewed, usually using printed copies. Lightweight code review typically requires less overhead than formal code inspections, though it can be equally effective when done properly. Lightweight code review includes the following:

Over-the-shoulder: One developer looks over the author's shoulder as the author walks through the code.
E-mail pass-around: Source code is e-mailed to reviewers automatically after the code is checked in.
Pair programming: Two authors develop code together at the same workstation.
Tool-assisted code review: Authors and reviewers use tools designed for peer code review.

Fuzzing

Fuzz testing, or fuzzing, involves injecting invalid or unexpected input (sometimes called faults) into an application to test how the application reacts. It is usually done with a software tool that automates the process. Inputs can include environment variables, keyboard and mouse events, and sequences of API calls. Figure 13-1 shows the logic of the fuzzing process.

Two types of fuzzing can be used to identify susceptibility to a fault injection attack:

Mutation fuzzing: This type involves (blindly) changing existing input values.
Generation-based fuzzing: This type involves generating the inputs from scratch, based on the specification/format.

The following measures can help prevent fault injection attacks: implement fuzz testing to help identify problems, adhere to safe coding and project management practices, and deploy application-level firewalls.
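Mutation fuzzing, as described above, can be sketched in a few lines: take a known-good input, flip random bytes, and record any mutants that crash the target. The target function in the usage test is a deliberately fragile stand-in, not a real parser.

```python
# Tiny mutation-fuzzing sketch: mutate a seed input and collect any
# inputs that make the target raise an exception ("crash").
import random

def mutate(data, rng, n_flips=1):
    """Return a copy of data with n_flips random bytes replaced."""
    out = bytearray(data)
    for _ in range(n_flips):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(target, seed_input, iterations=100, seed=0):
    """Run target on mutated inputs; return the inputs that crashed it."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    crashes = []
    for _ in range(iterations):
        mutant = mutate(seed_input, rng)
        try:
            target(mutant)
        except Exception:
            crashes.append(mutant)
    return crashes
```

Generation-based fuzzing would instead build inputs from a grammar or format specification rather than mutating a seed.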

Health Insurance Portability and Accountability Act (HIPAA)

HIPAA, also known as the Kennedy-Kassebaum Act, affects all healthcare facilities, health insurance companies, and healthcare clearinghouses. It is enforced by the Office of Civil Rights of the Department of Health and Human Services. It provides standards and procedures for storing, using, and transmitting medical information and healthcare data. HIPAA overrides state laws unless the state laws are stricter.

ACLs can be used to prevent the following:

IP address spoofing, inbound
IP address spoofing, outbound
Denial-of-service (DoS) TCP SYN attacks, blocking external attacks at the perimeter
DoS TCP SYN attacks, using TCP Intercept
DoS smurf attacks
Denying/filtering ICMP messages, inbound
Denying/filtering ICMP messages, outbound
Denying/filtering traceroute

ISO/IEC 27001

ISO/IEC 27001:2013 is the latest version of the 27001 standard, and it is one of the most popular standards by which organizations obtain certification for information security. It provides guidance on ensuring that an organization's information security management system (ISMS) is properly built, administered, maintained, and progressed. It includes the following components:

ISMS scope
Information security policy
Risk assessment process and its results
Risk treatment process and its decisions
Information security objectives
Information security personnel competence
Necessary ISMS-related documents
Operational planning and control documents
Information security monitoring and measurement evidence
ISMS internal audit program and its results
Top management ISMS review evidence
Evidence of identified nonconformities and corrective actions

When an organization decides to obtain ISO/IEC 27001 certification, a project manager should be selected to ensure that all the components are properly completed. To implement ISO/IEC 27001:2013, the project manager should complete the following steps:

Step 1. Obtain management support.
Step 2. Determine whether to use consultants or to complete the work in-house, purchase the 27001 standard, write the project plan, define the stakeholders, and organize the project kickoff.
Step 3. Identify the requirements.
Step 4. Define the ISMS scope, information security policy, and information security objectives.
Step 5. Develop document control, internal audit, and corrective action procedures.
Step 6. Perform risk assessment and risk treatment.
Step 7. Develop a statement of applicability and a risk treatment plan and accept all residual risks.
Step 8. Implement the controls defined in the risk treatment plan and maintain the implementation records.
Step 9. Develop and implement security training and awareness programs.
Step 10. Implement the ISMS, maintain policies and procedures, and perform corrective actions.
Step 11. Maintain and monitor the ISMS.
Step 12. Perform an internal audit and write an audit report.
Step 13. Perform management review and maintain management review records.
Step 14. Select a certification body and complete certification.
Step 15. Maintain records for surveillance visits.

For more information, visit www.iso.org/iso/catalogue_detail?csnumber=54534.

ISO/IEC 27002

ISO/IEC 27002:2013 is the latest version of the 27002 standard, and it provides a code of practice for information security management. It includes the following 14 content areas:

Information security policy
Organization of information security
Human resources security
Asset management
Access control
Cryptography
Physical and environmental security
Operations security
Communications security
Information systems acquisition, development, and maintenance
Supplier relationships
Information security incident management
Information security aspects of business continuity
Compliance

Information Technology Infrastructure Library (ITIL)

ITIL is a process management framework for IT service management, originally developed by the UK government's Central Computer and Telecommunications Agency (CCTA). ITIL has five core publications: ITIL Service Strategy, ITIL Service Design, ITIL Service Transition, ITIL Service Operation, and ITIL Continual Service Improvement. These five core publications contain 26 processes. Although ITIL has a security component, it is primarily concerned with managing the service level agreements (SLAs) between an IT department or organization and its customers. Separately, under OMB Circular A-130, an independent review of security controls should be performed every three years.

identity propagation

Identity propagation is the passing or sharing of a user's or device's authenticated identity information from one part of a multitier system to another. In most cases, each of the components in the system performs its own authentication, and identity propagation allows this to occur seamlessly. There are several approaches to performing identity propagation. Some systems, such as Microsoft's Active Directory, use a proprietary method and tickets to perform identity propagation.

In some cases, not all the components in a system may be SSO enabled (meaning a component can accept the identity token in its original format from the SSO server). In those cases, a proprietary method must be altered to communicate in a manner the third-party application understands. In the example shown in Figure 11-6, a user requests access to a relational database management system (RDBMS) application. The RDBMS server redirects the user to the SSO authentication server. The SSO server provides the user with an authentication token, which is then used to authenticate to the RDBMS server. The RDBMS server checks the token containing the identity information and grants access.

Now suppose that the application service receives a request to access an external third-party web application that is not SSO enabled. The application service redirects the user to the SSO server. When the SSO server propagates the authenticated identity information to the external application, it does not use the SSO token but instead uses an XML token.

Another example of a protocol that performs identity propagation is Credential Security Support Provider (CredSSP). It is often integrated into the Microsoft Remote Desktop Services environment to provide network layer authentication. Among the possible authentication or encryption types supported when implemented for this purpose are Kerberos, TLS, and NTLM.

Unauthorized Changes (Common Host-Related Symptoms)

If an organization has a robust change control process, there should be no unauthorized changes made to devices. (The change control process is covered in Lesson 9, "Incident Recovery and Post-Incident Response.") Whenever a user reports an unauthorized change in his device, it should be investigated. Many malicious programs make changes that may be apparent to the user. Missing files, modified files, new menu options, strange error messages, and odd system behavior are all indications of unauthorized changes.

Irregular Peer-to-Peer Communication (Common Network-Related Symptoms)

Some traffic between peers within a network is normal, but irregular peer communications may be an indication of a security issue. At the very least, illegal file sharing could be occurring; at the worst, this peer-to-peer (P2P) communication could be the result of a botnet. Peer-to-peer botnets differ from normal botnets in their structure and operation. Figure 8-2 shows the structure of a traditional botnet. In this scenario, all the zombies communicate directly with the command and control server, which is located outside the network. The limitation of this arrangement, and the issue that gives rise to peer-to-peer botnets, is that devices behind a NAT server or proxy server cannot participate. Only devices that can be reached externally can do so.

Impersonation

Impersonation occurs when one user assumes the identity of another by acquiring the logon credentials associated with the account. This typically occurs through exposure of the credentials either through social engineering (shoulder surfing, help desk intimidation, etc.) or by sniffing unencrypted credentials in transit. The best approach to preventing impersonation is user education because many of these attacks rely on the user committing some insecure activity.

Imperva

Imperva is a commercial WAF that uses patented dynamic application profiling to learn all aspects of web applications, including the directories, URLs, parameters, and acceptable user inputs, in order to detect attacks. The company offers many other security products as well, many of which can either be installed as appliances or purchased as images that can be deployed on VMware, AWS, or Microsoft Azure platforms. Imperva also integrates easily with most of the leading SIEM systems (discussed in the next section).

Zero Day Attacks

In many cases, vulnerabilities discovered in live environments have no current fix or patch. Such a vulnerability is referred to as a zero-day vulnerability. The best defense against zero-day attacks is to write bug-free applications by implementing efficient design, coding, and testing practices. Having staff discover zero-day vulnerabilities is much better than having those looking to exploit the vulnerabilities find them first. Monitoring known hacking community websites can often help you detect attacks early because hackers often share zero-day exploit information. Honeypots or honeynets can also provide forensic information about hacker methods and tools used in zero-day attacks.

Trend Analysis

In risk management, it is sometimes necessary to identify trends. In this process, historical data is utilized, given a set of mathematical parameters, and then processed in order to determine any possible variance from an established baseline. If you do not know the established baseline, you cannot identify variances from it or track trends in those variances. Organizations should establish procedures for capturing baseline statistics and for regularly comparing current statistics against the baselines. Organizations must also recognize when new baselines should be established. For example, if your organization implements a two-server web farm, the baseline would be vastly different from the baseline of a farm upgraded to four servers or a farm with upgraded internal hardware in the servers.

Security professionals must also research growing trends worldwide, especially in the organization's own industry. For example, financial industry risk trends vary from healthcare industry risk trends, but there are some common areas that both industries must understand. Any organization that has an e-commerce site must understand the common risk trends and be able to analyze its internal sites to determine whether its resources are susceptible to these risks.

When humans look at raw data, it may be difficult to spot trends. Aggregating the data and graphing it makes it much easier to discern a trend. Most tools that handle this sort of thing (such as SIEM systems) can not only aggregate all events of a certain type but graph them over time. Figure 12-1 shows examples of such graphs.
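The baseline-and-variance idea above can be sketched numerically: compute a mean and standard deviation from historical samples, then flag current readings that fall outside a tolerance band. The tolerance of two standard deviations and the sample values are invented for the example; real monitoring tools choose thresholds per metric.

```python
# Minimal sketch of baseline-based trend checking using the stdlib.
import statistics

def build_baseline(samples):
    """Capture a baseline as (mean, sample standard deviation)."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviations(readings, baseline, tolerance=2.0):
    """Return readings more than `tolerance` std devs from the baseline mean."""
    mean, stdev = baseline
    return [r for r in readings if abs(r - mean) > tolerance * stdev]
```

As the card notes, the baseline must be re-established when the environment changes (for example, after upgrading a two-server farm to four servers), or the variance check loses its meaning.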

Historical Analysis

In some cases you want to see things from a historical perspective. When that's the case, you can present data in a format that allows you to do so, as shown in Figure 12-2. This graph contains collected information about the use of memory by an appliance. You can see that two times during the day, there were spikes in the use of memory. The spikes would not be as obvious if you were viewing this data in raw format.

Applications as Identities

In some cases, an application acts as an identity. Using a process called delegation, the application takes a request from the user and retrieves something on behalf of the user. The most common example of this is when a user interacts with a web application on a front-end server, which then interacts with a database server and performs this access as a delegate of the user. This delegation capability is critical for many distributed applications in which a series of access control checks must be made sequentially for each application, database, or service in the authorization chain.

Another example of delegation is the delegation process used by Kerberos in Active Directory. To understand this process, you must understand the operation of Kerberos. Kerberos is an authentication protocol that uses a client/server model and was developed by MIT's Project Athena. It is the default authentication model in recent editions of Windows Server and is also used in Apple, Sun, and Linux operating systems. Kerberos is a single sign-on system that uses symmetric key cryptography. Kerberos provides confidentiality and integrity. Kerberos assumes that messaging, cabling, and client computers are not secure and are easily accessible.

In a Kerberos exchange involving a message with an authenticator, the authenticator contains the client ID and a timestamp. Because a Kerberos ticket is valid for a certain time, the timestamp ensures the validity of the request. In a Kerberos environment, the key distribution center (KDC) is the repository for all user and service secret keys. The process of authentication and subsequent access to resources is as follows. The client sends a request to the authentication server (AS), which might or might not be the KDC. The AS forwards the client credentials to the KDC. The KDC authenticates clients to other entities on a network and facilitates communication using session keys. The KDC provides security to clients or principals, which are users, network services, and software. Each principal must have an account on the KDC. The KDC issues a ticket-granting ticket (TGT) to the principal. The principal sends the TGT to the ticket-granting service (TGS) when the principal needs to connect to another entity. The TGS then transmits a ticket and session keys to the principal. The set of principals for which a single KDC is responsible is referred to as a realm.

One particular setting should be avoided: unconstrained delegation. When a server is set in this fashion, the domain controller places a copy of the user's TGT into the service ticket. When the ticket is provided to the server for access, the server places the TGT into the Local Security Authority Subsystem Service (LSASS) for later use. The application server can now impersonate that user without limitation!

Security as a Service

In some cases, an organization may find itself requiring security services that demand skill sets not currently held by anyone in the organization. Security as a Service (sometimes abbreviated SECaaS to distinguish it from Software as a Service) is a term that encompasses many security services provided by third parties with more talent and experience than may exist in the organization.

The scope of this assistance can vary from occasional help from a consultant to the use of managed security service providers (MSSPs). MSSPs offer the option of fully outsourcing all information assurance to a third party. If an organization decides to deploy a third-party identity service, including cloud computing solutions, security practitioners must be involved in the integration of that implementation with internal services and resources. This integration can be complex, especially if the provider solution is not fully compatible with existing internal systems. Most third-party identity services provide cloud identity, directory synchronization, and federated identity. Examples of these services include Amazon Web Services (AWS) Identity and Access Management (IAM) and Oracle Identity Management.

Privilege Elevation

In some cases, the dangers of privilege elevation or escalation in a virtualized environment may be equal to or greater than those in a physical environment. When the hypervisor is performing its duty of handling calls between the guest operating system and the hardware, any flaws introduced into those calls could allow an attacker to escalate privileges in the guest operating system. One case of a flaw in VMware ESX Server, Workstation, Fusion, and View products could have led to escalation on the host; VMware reacted quickly to fix this flaw with a security update. The key to preventing privilege escalation is to make sure all virtualization products have the latest updates and patches.

Secure Disposal

In some instances, you may decide to dispose of a compromised device (or its storage drive) rather than attempt to sanitize and reuse it. In that case, you want to dispose of it in a secure manner. With secure disposal, an organization must consider certain issues, including the following:

Does removal or replacement introduce any security holes in the network?
How can the system be terminated in an orderly fashion to avoid disrupting business continuity?
How should any residual data left on any systems be removed?
Are there any legal or regulatory issues that would guide the destruction of data?

Whenever data is erased or removed from a storage medium, residual data can be left behind. This can allow data to be reconstructed when the organization disposes of the media, and unauthorized individuals or groups may be able to gain access to the data. When considering data remanence, security professionals must understand three countermeasures:

Clearing: Clearing includes removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is recoverable only using special forensic techniques.
Purging: Also referred to as sanitization, purging makes the data unreadable even with advanced forensic techniques. With this technique, data should be unrecoverable.
Destruction: Destruction involves destroying the media on which the data resides. Degaussing, one destruction technique, exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state. Physical destruction involves physically breaking the media apart or chemically altering it.

Vulnerabilities Associated with a Single Physical Server Hosting Multiple Companies' Virtual Machines

In some virtualization deployments, a single physical server hosts multiple organizations' VMs. All the VMs hosted on a single physical computer must share the resources of that physical server. If the physical server crashes or is compromised, all the organizations that have VMs on that physical server are affected. User access to the VMs should be properly configured, managed, and audited. Appropriate security controls, including antivirus, anti-malware, ACLs, and auditing, must be implemented on each of the VMs to ensure that each one is properly protected. Other risks to consider include physical server resource depletion, network resource performance, and traffic filtering between virtual machines.

Driven mainly by cost, many companies outsource to cloud providers computing jobs that require a large number of processor cycles for a short duration. This allows a company to avoid a large investment in computing resources that will be used for only a short time. Assuming that the provisioned resources are dedicated to a single company, the main vulnerability associated with on-demand provisioning is that traces of proprietary data can remain on the virtual machine and may be exploited.

Let's look at an example. Say that a security architect is seeking to outsource company server resources to a commercial cloud service provider. The provider under consideration has a reputation for poorly controlling physical access to data centers and has been the victim of social engineering attacks. The service provider regularly assigns VMs from multiple clients to the same physical resource. When conducting the final risk assessment, the security architect should take into consideration the likelihood that a malicious user will obtain proprietary information by gaining local access to the hypervisor platform.

Design

In the design phase of the Software Development Life Cycle, an organization develops a detailed description of how the software will satisfy all functional and security goals. It attempts to map the internal behavior and operations of the software to specific requirements to identify any requirements that have not been met prior to implementation and testing.
During this process, the state of the application is determined in every phase of its activities. The state of the application refers to its functional and security posture during each operation it performs. Therefore, all possible operations must be identified to ensure that the software never enters an insecure state or acts in an unpredictable way.
Identifying the attack surface is also a part of this analysis. The attack surface describes what is available to be leveraged by an attacker. The amount of attack surface might change at various states of the application, but at no time should the attack surface provided violate the security needs identified in the Gather Requirements stage.

Gather Requirements (Security Requirements Definition)

In the gather requirements phase of the Software Development Life Cycle, both the functionality and the security requirements of the solution are identified. These requirements could be derived from a variety of sources, such as evaluations of competitor products for a commercial product or surveys of the needs of users for an internal solution. In some cases these requirements could come from a direct request from a current customer.
From a security perspective, an organization must identify potential vulnerabilities and threats. When this assessment is performed, the intended purpose of the software and the expected environment must be considered. Moreover, the data that will be generated or handled by the solution must be assessed for its sensitivity. Assigning a privacy impact rating to the data to help guide measures intended to protect the data from exposure might be useful.

Plan/Initiate Project

In the plan/initiate phase of the Software Development Life Cycle, the organization decides to initiate a new software development project and formally plans the project. Security professionals should be involved in this phase to determine whether information involved in the project requires protection and whether the application needs to be safeguarded separately from the data it processes. Security professionals need to analyze the expected results of the new application to determine whether the resultant data has a higher value to the organization and, therefore, requires higher protection.
Any information that is handled by the application needs a value assigned by its owner, and any special regulatory or compliance requirements need to be documented. For example, healthcare information is regulated by several federal laws and must be protected. The classification of all input and output data of the application needs to be documented, and the appropriate application controls should be documented to ensure that the input and output data are protected.
Data transmission must also be analyzed to determine the types of networks used. All data sources must be analyzed as well. Finally, the effect of the application on organizational operations and culture needs to be analyzed.

Test/Validate

In the test/validate phase, several types of testing should occur, including ways to identify both functional errors and security issues. The auditing method that assesses the extent of the system testing and identifies specific program logic that has not been tested is called the test data method. This method tests not only expected or valid input but also invalid and unexpected values to assess the behavior of the software in both instances. An active attempt should be made to attack the software, including attempts at buffer overflows and denial-of-service (DoS) attacks. The testing performed at this time has two main goals:
-Verification testing: Determines whether the original design specifications have been met
-Validation testing: Takes a higher-level view, determining whether the original purpose of the software has been achieved
Software is typically developed in pieces or modules of code that are later assembled to yield the final product. Each module should be tested separately, in a procedure called unit testing. Having development staff carry out this testing is critical, but using a different group of engineers than the ones who wrote the code can ensure that an impartial process occurs. This is a good example of the concept of separation of duties. Unit testing should have the following characteristics:
-The test data should be part of the specifications.
-Testing should check for out-of-range values and out-of-bounds conditions.
-Correct test output results should be developed and known beforehand.
-Live or actual field data is not recommended for use in unit testing procedures.
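The unit testing characteristics above can be sketched with Python's standard `unittest` module. The `percent_discount` function and its tests are hypothetical examples, not from any particular product; the point is that correct outputs are known beforehand and that out-of-range values are tested deliberately, not just valid input:

```python
import unittest

def percent_discount(price: float, percent: float) -> float:
    """Apply a discount; reject out-of-range input rather than misbehaving."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 - percent / 100), 2)

class PercentDiscountTest(unittest.TestCase):
    def test_known_output(self):
        # Correct test output results are developed and known beforehand
        self.assertEqual(percent_discount(200.0, 25), 150.0)

    def test_out_of_range_percent(self):
        # Invalid and unexpected values are tested, not only valid ones
        with self.assertRaises(ValueError):
            percent_discount(100.0, 150)

    def test_out_of_bounds_price(self):
        with self.assertRaises(ValueError):
            percent_discount(-1.0, 10)
```

Run with `python -m unittest` against synthetic test data only; consistent with the last characteristic above, live field data should not appear in unit tests.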

Integer Overflows

Integer overflow occurs when a math operation tries to create a numeric value that is too large for the available space. The register width of a processor determines the range of values that can be represented. Moreover, a program may assume that a variable always contains a positive value. If the variable has a signed integer type, an overflow can cause its value to wrap and become negative, which may lead to unintended behavior. Similarly, subtracting from a small unsigned value may cause it to wrap to a large positive value, which may also be unexpected.
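Python's own integers do not overflow, but the fixed-width types in the standard `ctypes` module can model both wrap behaviors described above:

```python
import ctypes

# 32-bit signed addition: adding 1 to INT_MAX wraps to a negative value,
# violating the common assumption that the variable stays positive.
int_max = 2**31 - 1
wrapped = ctypes.c_int32(int_max + 1).value
print(wrapped)   # -2147483648

# 32-bit unsigned subtraction: going below 0 wraps to a large positive
# value, which can turn a size or length check into a huge allocation.
under = ctypes.c_uint32(0 - 1).value
print(under)     # 4294967295
```

In languages such as C, the same arithmetic happens silently, which is why bounds checks on lengths and counters must be done before, not after, the operation.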

Securing Intellectual Property

Intellectual property (IP) of an organization, including patents, copyrights, trademarks, and trade secrets, must be protected, or the business loses any competitive advantage created by such properties. To ensure that an organization retains the advantages given by its IP, it should do the following:
-Invest in well-written nondisclosure agreements (NDAs) to be included in employment agreements, licenses, sales contracts, and technology transfer agreements.
-Ensure that tight security protocols are in place for all computer systems.
-Protect trade secrets residing in computer systems with encryption technologies or by limiting storage to computer systems that do not have external Internet connections.
-Deploy effective insider threat countermeasures, particularly focused on disgruntlement detection and mitigation techniques.

ISO

International Organization for Standardization (ISO), often incorrectly referred to as the International Standards Organization, joined with the International Electrotechnical Commission (IEC) to standardize the British Standard 7799 (BS7799) into a new global standard that is now referred to as the ISO/IEC 27000 series. ISO 27000 is a security program development standard on how to develop and maintain an information security management system (ISMS). The 27000 series includes a list of standards, each of which addresses a particular aspect of ISMS. These standards are either published or in development. The following standards are included as part of the ISO/IEC 27000 series at this writing:
-27000: Published overview of ISMS and vocabulary
-27001: Published ISMS requirements
-27002: Published code of practice for information security controls
-27003: Published ISMS implementation guidelines
-27004: Published ISMS measurement guidelines
-27005: Published information security risk management guidelines
-27006: Published requirements for bodies providing audit and certification of ISMS
-27007: Published ISMS auditing guidelines
-27008: Published guidelines for auditors of ISMS
-27010: Published information security management for inter-sector and inter-organizational communications guidelines
-27011: Published telecommunications organizations information security management guidelines
-27013: Published integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1 guidance
-27014: Published information security governance guidelines
-27015: Published financial services information security management guidelines
-27016: Published ISMS organizational economics guidelines
-27017: In-development cloud computing services information security control guidelines based on ISO/IEC 27002
-27018: Published code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors
-27019: Published energy industry process control system ISMS guidelines based on ISO/IEC 27002
-27021: Published competence requirements for information security management systems professionals
-27023: Published mapping of the revised editions of ISO/IEC 27001 and ISO/IEC 27002
-27031: Published information and communication technology readiness for business continuity guidelines
-27032: Published cybersecurity guidelines
-27033-1: Published network security overview and concepts
-27033-2: Published network security design and implementation guidelines
-27033-3: Published network security threats, design techniques, and control issues guidelines
-27033-4: Published securing communications between networks using security gateways
-27033-5: Published securing communications across networks using virtual private networks (VPNs)
-27033-6: In-development securing wireless IP network access
-27034-1: Published application security overview and concepts
-27034-2: In-development application security organization normative framework guidelines
-27034-3: In-development application security management process guidelines
-27034-4: In-development application security validation guidelines
-27034-5: In-development application security protocols and controls data structure guidelines
-27034-6: In-development security guidance for specific applications
-27034-7: In-development guidance for application security assurance prediction
-27035: Published information security incident management guidelines
-27035-1: In-development information security incident management principles
-27035-2: In-development information security incident response readiness guidelines
-27035-3: In-development computer security incident response team (CSIRT) operations guidelines
-27036-1: Published information security for supplier relationships overview and concepts
-27036-2: Published information security for supplier relationships common requirements guidelines
-27036-3: Published information and communication technology (ICT) supply chain security guidelines
-27036-4: In-development guidelines for security of cloud services
-27037: Published digital evidence identification, collection, acquisition, and preservation guidelines
-27038: Published information security digital redaction specification
-27039: Published intrusion detection systems (IDS) selection, deployment, and operations guidelines
-27040: Published storage security guidelines
-27041: Published guidance on assuring suitability and adequacy of incident investigative method
-27042: Published digital evidence analysis and interpretation guidelines
-27043: Published incident investigation principles and processes
-27044: In-development security information and event management (SIEM) guidelines
-27050: In-development electronic discovery (eDiscovery) guidelines
-27799: Published information security in health organizations guidelines

SPML

Involves request authority, provisioning service provider, and provisioning service target.
Another open standard for exchanging authorization information between cooperating organizations is Service Provisioning Markup Language (SPML). It is an XML-based framework developed by the Organization for the Advancement of Structured Information Standards (OASIS). The SPML architecture has three components:
-Request authority (RA): The entity that makes the provisioning request
-Provisioning service provider (PSP): The entity that responds to the RA requests
-Provisioning service target (PST): The entity that performs the provisioning
When a trust relationship has been established between two organizations with web-based services, one organization acts as the RA, and the other acts as the PSP. The trust relationship uses Security Assertion Markup Language (SAML) in a Simple Object Access Protocol (SOAP) header. The SOAP body transports the SPML requests/responses. Figure 11-8 shows an example of how these SPML messages are used. In the diagram, a company has an agreement with a supplier to allow the supplier to access its provisioning system. When the supplier's HR department adds a user, an SPML request is generated to the supplier's provisioning system so the new user can use the system. Then the supplier's provisioning system generates another SPML request to create the account in the customer provisioning system. Figure 11-8: SPML

Isolation

Isolation typically is implemented either by blocking all traffic to and from a device or devices or by shutting down device interfaces. This approach works well for a single compromised system but becomes cumbersome when multiple devices are involved. In that case, segmentation may be a more advisable approach. If a new device can be set up to perform the role of the compromised device, the team may leave the compromised device running to analyze the end result of the threat on the isolated host.

Law Enforcement

Law enforcement may become involved in many incidents. Sometimes the organization is required to involve law enforcement, but in many instances, the organization chooses to invite law enforcement to get involved. When making a decision about whether to involve law enforcement, consider the following factors:
-Law enforcement will view the incident differently than the company security team views it. While your team may be more motivated to stop attacks and their damage, law enforcement may be inclined to let an attack proceed in order to gather more evidence.
-The expertise of law enforcement varies. While local law enforcement may be appropriate for physical theft of computers and the like, more abstract crimes and events may be better served by involving law enforcement at the federal level, where greater skill sets are available. The USA PATRIOT Act enhanced the investigatory tools available to law enforcement and expanded their ability to look at e-mail communications, telephone records, Internet communications, medical records, and financial records, which can be helpful.
-Before involving law enforcement, try to rule out other potential causes of an event, such as accidents and hardware or software failure.
-In cases where laws have obviously been broken (child pornography, for example), get law enforcement involved immediately. This includes any felonies, regardless of how small the loss to the company may have been.

Permissions

Many times an attacker compromises a device by altering the permissions, either in the local database or in entries related to the device in the directory service server. All permissions should undergo a review to ensure that all are in the appropriate state. The appropriate state may not be the state they were in before the event. Sometimes you may discover that although permissions were not set in a dangerous way prior to an event, they are not correct. Make sure to check the configuration database to ensure that settings match prescribed settings.
You should also make changes to the permissions based on lessons learned during an event. In that case, ensure that the new settings undergo a change control review and that any approved changes are reflected in the configuration database.

Password crackers

Many times investigators find passwords standing in the way of obtaining evidence. Password cracking utilities are required in such instances. Most suites include several password cracking utilities for this purpose. Lesson 14 lists some of these tools.

Memory Overflows

Memory overflow occurs when an application uses more memory than the operating system has assigned to it. In some cases, it simply causes the system to run slowly, as the application uses more and more memory. In other cases, the issue is more serious. When it is a buffer overflow, the intent may be to crash the system or execute commands. Buffer overflows are covered in more detail in Lesson 6.

Active Directory (AD)

Microsoft's implementation of LDAP is Active Directory (AD), which organizes directories into forests and trees. AD tools are used to manage and organize everything in an organization, including users and devices. This is where security is implemented, and its implementation is made more efficient through the use of Group Policy and Group Policy objects. AD is another example of a single sign-on (SSO) system. It uses Kerberos, the same authentication and authorization system used in Unix and Linux. This system authenticates a user once and then, through the use of a ticket system, allows the user to perform all actions and access all resources to which she has been given permission without the need to authenticate again. Figure 11-4 shows the steps used in this process. The user authenticates with the domain controller, which performs several other roles as well. It is the key distribution center (KDC), which runs the authentication service (AS); this service determines whether the user has the right or permission to access a remote service or resource in the network. The figure shows the Kerberos authentication process: the client system requests a TGT from the KDC/domain controller (AS and TGS) for authentication, the domain controller provides a service ticket and session key to the client system, and the client presents the service ticket and authenticator to obtain the service. Figure 11-4: Kerberos Authentication. To review the Kerberos process, after the user has been authenticated (when she logs on once to the network), she is issued a ticket-granting ticket (TGT). This is later used to request session tickets, which are required to access resources. At any point that she later attempts to access a service or resource, she is redirected to the AS running on the KDC. Upon presenting her TGT, she is issued a session, or service, ticket for that resource.
The user presents the service ticket, which is signed by the KDC, to the resource server for access. Because the resource server trusts the KDC, the user is granted access. Some advantages of implementing Kerberos include the following:
-User passwords do not need to be sent over the network.
-Both the client and server authenticate each other.
-The tickets passed between the server and client are timestamped and include lifetime information.
-The Kerberos protocol uses open Internet standards and is not limited to proprietary codes or authentication mechanisms.
Some disadvantages of implementing Kerberos include the following:
-KDC redundancy is required if providing fault tolerance is a requirement. The KDC is a single point of failure.
-The KDC must be scalable to ensure that performance of the system does not degrade.
-Session keys on the client machines can be compromised.
-Kerberos traffic needs to be encrypted to protect the information over the network.
-All systems participating in the Kerberos process must have synchronized clocks.
-Kerberos systems are susceptible to password-guessing attacks.

Military and Government Classifications

Military and government entities usually classify data using five main classification levels, listed here from highest sensitivity to lowest:
-Top secret: Data that is top secret includes weapon blueprints, technology specifications, spy satellite information, and other military information that could gravely damage national security if disclosed.
-Secret: Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed.
-Confidential: Data that is confidential includes patents, trade secrets, and other information that could seriously affect the government if unauthorized disclosure occurred.
-Sensitive but unclassified: Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security but could cause citizens to question the reputation of the government.
-Unclassified: Military and government information that does not fall into any of the other four categories is considered unclassified and usually must be released to the public when requested under the Freedom of Information Act.

Unclassified Classification Level (Government and Military Classifications)

Military and government information that does not fall into any of the other four categories is considered unclassified and usually must be released to the public when requested under the Freedom of Information Act.

ModSecurity

ModSecurity is a toolkit designed to protect Apache, nginx, and IIS. It is open source and supports the OWASP Core Rule Set (CRS). Among the features it provides are the following:
-Real-time application monitoring and access control
-Web application hardening
-Full HTTP traffic logging
-Continuous passive security assessment
ModSecurity records information in the Application log, as shown in Figure 14-11, when it blocks an action. Notice in the informational section in this figure that ModSecurity blocked an action.

Framework for Improving Critical Infrastructure Cybersecurity

NIST created the Framework for Improving Critical Infrastructure Cybersecurity, or simply the NIST Cybersecurity Framework, in 2014. It focuses exclusively on IT security and is composed of three parts:
-Framework core: The core presents five cybersecurity functions, each of which is further divided into subfunctions. It describes desired outcomes for these functions. As you can see in Figure 10-1, each function has informative references available to help guide the completion of that subcategory of a particular function.
-Implementation tiers: These tiers are levels of sophistication in the risk management process that organizations can aspire to reach. These tiers can be used as milestones in the development of an organization's risk management process. The four tiers, from least developed to most developed, are Partial, Risk Informed, Repeatable, and Adaptive.
-Framework profiles: Profiles can be used to compare the current state (or profile) to a target state (profile). This enables an organization to create an action plan to close gaps between the two.

Periodic Review

New security issues and threats are constantly cropping up. As a result, security professionals should review all security awareness training and ensure that it is updated to address new security issues and threats. Such reviews should be scheduled at regular intervals.

NextGen Firewalls

Next-generation firewalls (NGFWs) are a category of devices that attempt to address the traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering performance. Although UTM devices also attempt to address these issues, they tend to use separate internal engines to perform individual security functions. This means a packet may be examined several times by different engines to determine whether it should be allowed into the network. NGFWs are application aware, which means they can distinguish between specific applications instead of allowing all traffic coming in via typical web ports. Moreover, they examine packets only once, during the deep packet inspection phase (which is required to detect malware and anomalies). The following are some of the features provided by NGFWs:
-Non-disruptive in-line configuration (which has little impact on network performance)
-Standard first-generation firewall capabilities, such as network address translation (NAT), stateful protocol inspection (SPI), and virtual private networking
-Integrated signature-based IPS engine
-Application awareness, full stack visibility, and granular control
-Ability to incorporate information from outside the firewall, such as directory-based policy, blacklists, and whitelists
-Upgrade path to include future information feeds and security threats, and SSL decryption to enable identifying undesirable encrypted applications
An NGFW can be placed in-line or out-of-path. Out-of-path means that a gateway redirects traffic to the NGFW, while in-line placement causes all traffic to flow through the device. The two placements are shown in Figure 12-8. Figure 12-8: Placement of an NGFW
The advantages and disadvantages of NGFWs are listed in Table 12-8.
Advantages:
-They provide enhanced security.
-They provide integration between security services.
-They may save costs on appliances.
Disadvantages:
-Managing NGFWs is more involved than managing standard firewalls.
-They lead to reliance on a single vendor.
-Performance can be impacted.
Table 12-8: Advantages and Disadvantages of NGFWs

Network Scanning

While network scanning can be done with blunter tools, such as ping, Nmap is stealthier and may be able to perform its activities without setting off firewalls and IDSs. It is valuable to note that while we are discussing Nmap in the context of network scanning, this tool can be used for many other operations, including performing certain attacks. When used for scanning, it typically locates the devices on the network, identifies the open ports on those devices, and determines the OS running on each host.
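To see what a port scanner is doing under the hood, here is a minimal sketch of the noisiest technique, a full TCP connect scan, using only Python's standard `socket` module (the `scan_ports` helper is illustrative and is not part of Nmap). Nmap's SYN ("half-open") scan is stealthier because it never completes the handshake, but it requires raw-socket privileges:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the list of TCP ports on host that accept a connection.

    A connect scan completes the full TCP handshake, so it is easily
    logged by the target; it is shown here only to illustrate the idea.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Usage might look like `scan_ports("127.0.0.1", range(20, 1025))`. Only scan hosts you are authorized to test.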

Reconstruction/Reimage

Once a device has been sanitized, the system must be rebuilt. This can be done by reinstalling the operating system, applying all system updates, reinstalling the anti-malware software, and implementing any organizational security settings. Then any needed applications must be installed and configured. If the device is a server that runs a service on behalf of the network (for example, DNS or DHCP), that service must be reconfigured as well. All this is not only a lot of work but also time-consuming. A better approach is to maintain standard images of the various device types in the network so that you can use these images to stand up a device quickly. To make this approach even more seamless, maintain a backup image of each specific device; restoring from it eliminates the reconfiguration that a generic standard image still requires.

Live VM migration

One of the advantages of a virtualized environment is the ability of the system to migrate a VM from one host to another when needed. This is called a live migration. When VMs are moved across the network between secured perimeters, attackers can exploit network vulnerabilities to gain unauthorized access to the VMs. With access to the VM images, attackers can plant malicious code in them to launch attacks on the data centers that the VMs travel between. Often the protocols used for the migration are not encrypted, making a man-in-the-middle attack on the VM possible while it is in transit, as shown in Figure 6-15. The key to preventing man-in-the-middle attacks is encrypting both the migration traffic and the images where they are stored.

Downtime and Recovery Time

One of the issues that must be considered is the potential amount of downtime the incident could inflict and the time it will take to recover from the incident. If a proper business continuity plan has been created, you will have collected information about each asset that will help classify incidents that affect each asset.

Write blockers

One of the key issues when digital evidence is presented in court is whether the evidence has been altered or corrupted. A write blocker is a tool that permits read-only access to data storage devices and does not compromise the integrity of the data. According to the National Institute of Standards and Technology (NIST), a write blocking device should have the following characteristics:
-The tool shall not allow a protected drive to be changed.
-The tool shall not prevent obtaining any information from or about any drive.
-The tool shall not prevent any operations to a drive that is not protected.
These devices can be either hardware devices or software that is installed on the forensics workstation. Either way, these tools block commands that modify data. The key is that they still allow for acquiring the information you need to complete an investigation without altering the original storage device.

Succession Planning

One of the most disruptive events that can occur is for an organization to lose a key employee who is critical to operations. When this occurs, it should be considered a failure of redundancy in human resources. Organizations should consider succession planning for key positions as a key part of defense in depth.
A proper succession plan should not only identify potential candidates to succeed key employees but should develop a specific plan to train these individuals so that they are ready to take over the position and perform well in the job. Typically this involves external as well as internal training modalities. It should also include working alongside the current position holder so that all organizational knowledge is transferred.

Imaging utilities

One of the tasks you will be performing is making copies of storage devices. For this you need a disk imaging tool. To make system images, you need to use a tool that creates a bit-level copy of the system. In most cases, you must isolate the system and remove it from production to create this bit-level copy. You should ensure that two copies of the image are retained. One copy of the image will be stored to ensure that an undamaged, accurate copy is available as evidence. The other copy will be used during the examination and analysis steps. Message digests (or hashing digests) should be used to ensure data integrity.
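The integrity check described above can be sketched with Python's standard `hashlib` (the `file_digest` helper is illustrative; forensic suites provide their own hashing). Hash the original drive and both copies, and matching digests demonstrate that each copy is bit-for-bit identical to the source:

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Compute a message digest of an acquired image in fixed-size chunks,
    so even very large images can be hashed without loading them into memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read 1 MiB at a time and feed it to the hash incrementally
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest of the original at acquisition time; re-hashing the evidence copy later and comparing the values shows the image has not been altered.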

ARP Poisoning

One of the ways a man-in-the-middle attack is accomplished is by poisoning the ARP cache on a switch. The attacker accomplishes this poisoning by answering ARP requests for another computer's IP address with his own MAC address. After the ARP cache has been successfully poisoned, when ARP resolution occurs, both computers have the attacker's MAC address listed as the MAC address that maps to the other computer's IP address. As a result, both are sending to the attacker, placing him "in the middle."
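A toy model makes the mechanism concrete. Plain Python dictionaries stand in for the two hosts' ARP caches, and no packets are sent; the addresses and the `forged_arp_reply` helper are invented for illustration. The attack works because ARP is stateless: an unsolicited reply simply overwrites the cache entry for the claimed IP address:

```python
# Toy ARP caches: IP address -> MAC address (all values hypothetical)
victim_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:01"}   # gateway's real MAC
gateway_cache = {"10.0.0.5": "aa:aa:aa:aa:aa:05"}  # victim's real MAC

ATTACKER_MAC = "ee:ee:ee:ee:ee:ee"

def forged_arp_reply(cache: dict, claimed_ip: str, mac: str) -> None:
    """ARP performs no authentication, so a forged reply for claimed_ip
    simply replaces whatever mapping the cache already held."""
    cache[claimed_ip] = mac

# The attacker answers ARP for both sides with his own MAC...
forged_arp_reply(victim_cache, "10.0.0.1", ATTACKER_MAC)
forged_arp_reply(gateway_cache, "10.0.0.5", ATTACKER_MAC)

# ...so each party now frames traffic for the other to the attacker.
print(victim_cache["10.0.0.1"])   # ee:ee:ee:ee:ee:ee
print(gateway_cache["10.0.0.5"])  # ee:ee:ee:ee:ee:ee
```

Countermeasures such as dynamic ARP inspection work precisely by refusing to honor replies that do not match a trusted IP-to-MAC binding.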

Disclosure Based on Regulatory/Legislative Requirements

Organizations in certain industries may be required to comply with regulatory or legislative requirements with regard to communicating data breaches to affected parties and to those agencies and legislative bodies promulgating these regulations. The organization should include these communication types in the communication plan.

OAuth

Open Authorization (OAuth) is a standard for authorization that allows users to share private resources from one site with another site without using credentials. It is sometimes described as the valet key for the web: whereas a valet key gives the valet the ability to park your car but not to access the trunk, OAuth uses tokens to allow restricted access to a user's data when a client application requires access. These tokens are issued by an authorization server. Although the exact flow of steps depends on the specific implementation, Figure 11-3 shows the general process steps. OAuth is a good choice for authorization whenever one web application uses another web application's API on behalf of the user. A good example would be a geolocation application integrated with Facebook. OAuth gives the geolocation application a secure way to get an access token for Facebook without revealing the Facebook password to the geolocation application.

OpenSSL

OpenSSL is an open source implementation of SSL and TLS that can be used to assure the identity of both machines and the application code they run. OpenSSL is implemented as a library of software functions. Once installed, it exposes commands that can be used to create digital certificates and associated key pairs that can be assigned to applications and machines.

Organizational Governance

Organizational governance refers to the process of controlling an organization's activities, processes, and operations. When the process is unwieldy, as it is in some very large organizations, the application of countermeasures may be frustratingly slow. One of the reasons for including upper management in the entire process is to use the weight of authority to cut through the red tape.

Maturity Model

Organizations are not alone in the wilderness when it comes to developing processes for assessing vulnerability, selecting controls, adjusting security policies and procedures to support those controls, and performing audits. Several publications and process models have been developed to help develop these skills.

Maintaining a Secure Provisioning Life Cycle

Organizations should create a formal process for creating, changing, and removing users. This process, called the provisioning life cycle, includes user approval, user creation, user creation standards, and authorization. Users should sign a written statement that explains the access conditions, including user responsibilities. Finally, access modification and removal procedures should be documented. User provisioning policies should be integrated as part of human resource management. Human resource policies should include procedures whereby the human resource department formally requests the creation or deletion of a user account when new personnel are hired or terminated.

Payment Card Industry Data Security Standard (PCI-DSS)

PCI-DSS v3.1, developed in April 2015, is the latest version of the PCI-DSS standard as of this writing. It encourages and enhances cardholder data security and facilitates the broad adoption of consistent data security measures globally. Figure 5-1 shows a high-level overview of the PCI-DSS standard.

Point-to-Point Tunneling Protocol (PPTP)

PPTP is a Microsoft protocol based on PPP. It uses built-in Microsoft Point-to-Point Encryption (MPPE) and can use a number of authentication methods, including CHAP, MS-CHAP, and EAP-TLS. One shortcoming of PPTP is that it works only on IP-based networks. If a WAN connection that is not IP based is in use, L2TP must be used.

Palo Alto Firewalls

Palo Alto makes next-generation firewalls, which means they perform many of the functions found in the Cisco ASA. While Palo Alto firewalls are not as widely deployed as Cisco and Check Point (covered next), in 2016 the Gartner Magic Quadrant identified Palo Alto as a leader in the enterprise firewall market. Palo Alto firewalls send unknown threats to the cloud for analysis.

Center for Internet Security

Partly funded by SANS, the Center for Internet Security (CIS) is a not-for-profit organization that is known for compiling CIS Security Controls (CIS Controls). CIS publishes a list of the top 20 CIS Controls. It also provides hardened system images, training, assessment tools, and consulting services.

System Design Recommendations

CIS makes system design recommendations through the publication of security controls for specific scenarios. CIS Controls are organized by type and numbered. As you can see from the following list, CIS Controls cover many of the concepts and techniques discussed throughout this book:

CSC 1: Inventory of Authorized and Unauthorized Devices
CSC 2: Inventory of Authorized and Unauthorized Software
CSC 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
CSC 4: Continuous Vulnerability Assessment and Remediation
CSC 5: Controlled Use of Administrative Privileges
CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs
CSC 7: Email and Web Browser Protections
CSC 8: Malware Defenses
CSC 9: Limitation and Control of Network Ports, Protocols, and Services
CSC 10: Data Recovery Capability
CSC 11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
CSC 12: Boundary Defense
CSC 13: Data Protection
CSC 14: Controlled Access Based on the Need to Know
CSC 15: Wireless Access Control
CSC 16: Account Monitoring and Control
CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps
CSC 18: Application Software Security
CSC 19: Incident Response and Management
CSC 20: Penetration Tests and Red Team Exercises

To learn more about the CIS Controls, visit www.cisecurity.org/critical-controls.cfm.

Password Policy

Password authentication is the most popular authentication method implemented today, but password types can vary from system to system. Before we look at potential password policies, it is vital that you understand the types of passwords that can be used:

- Standard word passwords: As the name implies, these passwords consist of single words that often include a mixture of upper- and lowercase letters. The advantage of this password type is that it is easy to remember. A disadvantage is that it is easy for attackers to crack or break, resulting in compromised accounts.
- Combination passwords: These passwords, also called composition passwords, use a mix of dictionary words, usually two that are unrelated. Like standard word passwords, they can include upper- and lowercase letters and numbers. An advantage of this password type is that it is harder to break than a standard word password. A disadvantage is that it can be hard to remember.
- Static passwords: This password type is the same for each login. It provides a minimum level of security because the password never changes. It is most often seen in peer-to-peer networks.
- Complex passwords: This password type forces a user to include a mixture of upper- and lowercase letters, numbers, and special characters. For many organizations today, this type of password is enforced as part of the organization's password policy. An advantage of this password type is that it is very hard to crack. A disadvantage is that it is harder to remember and can often be much harder to enter correctly.
- Passphrase passwords: This password type requires that a long phrase be used. Because of the password's length, it is easier to remember but much harder to attack, both of which are definite advantages. Incorporating upper- and lowercase letters, numbers, and special characters in this type of password can significantly increase authentication security.
- Cognitive passwords: This password type is a piece of information that can be used to verify an individual's identity. The user provides this information to the system by answering a series of questions based on her life, such as favorite color, pet's name, mother's maiden name, and so on. An advantage of this type is that users can usually easily remember this information. The disadvantage is that someone who has intimate knowledge of the person's life (spouse, child, sibling, and so on) may be able to provide this information as well.
- One-time passwords (OTP): Also called a dynamic password, an OTP is used only once to log in to the access control system. This password type provides the highest level of security because it is discarded after it is used once.
- Graphical passwords: Also called Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) passwords, this type of password uses graphics as part of the authentication mechanism. One popular implementation requires a user to enter a series of characters that appear in a graphic. This implementation ensures that a human, not a machine, is entering the password. Another popular implementation requires the user to select the appropriate graphic for his account from a list of graphics.
- Numeric passwords: This type of password includes only numbers. Keep in mind that the choices of a password are limited by the number of digits allowed. For example, if all passwords are four digits, then the maximum number of password possibilities is 10,000, from 0000 through 9999. Once an attacker realizes that only numbers are used, cracking user passwords is much easier because the attacker knows the possibilities.
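The arithmetic behind the numeric-password warning can be checked directly: the search space is the alphabet size raised to the password length. The character-set sizes below are illustrative.

```python
def search_space(alphabet_size: int, length: int) -> int:
    # Every position can independently take any symbol in the alphabet.
    return alphabet_size ** length

numeric_pin = search_space(10, 4)                  # digits only, 4 positions
complex_pw = search_space(26 + 26 + 10 + 32, 8)   # upper, lower, digits, ~32 symbols

print(numeric_pin)   # -> 10000, i.e., 0000 through 9999
```

An 8-character complex password drawn from roughly 94 symbols yields a search space many orders of magnitude larger than a 4-digit PIN, which is why complexity requirements matter.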

Password Management

Password management considerations include, but may not be limited to, the following:

- Password life: How long a password will be valid. For most organizations, passwords are valid for 60 to 90 days.
- Password history: How long before a password can be reused. Password policies usually remember a certain number of previously used passwords.
- Authentication period: How long a user can remain logged in. If a user remains logged in for the specified period without activity, the user will be automatically logged out.
- Password complexity: How the password will be structured. Most organizations require upper- and lowercase letters, numbers, and special characters. The following are some recommendations: Passwords shouldn't contain the username or parts of the user's full name, such as his first name. Passwords should use at least three of the four available character types: lowercase letters, uppercase letters, numbers, and symbols.
- Password length: How long the password must be. Most organizations require 8 to 12 characters.

As part of password management, an organization should establish a procedure for changing passwords. Most organizations implement a service that allows users to automatically reset a password before the password expires. In addition, most organizations should consider establishing a password reset policy that addresses users forgetting their passwords or having them compromised. A self-service password reset approach allows users to reset their own passwords, without the assistance of help desk employees. An assisted password reset approach requires that users contact help desk personnel for help changing passwords. Password reset policies can also be affected by other organizational policies, such as account lockout policies. Account lockout policies are security policies that organizations implement to protect against attacks carried out against passwords. Organizations often configure account lockout policies so that user accounts are locked after a certain number of unsuccessful login attempts. If an account is locked out, the system administrator may need to unlock or reenable the user account. Security professionals should also consider encouraging organizations to require users to reset their passwords if their accounts have been locked.

For most organizations, all the password policies, including account lockout policies, are implemented at the enterprise level on the servers that manage the network. Depending on which servers are used to manage the enterprise, security professionals must be aware of the security issues that affect user accounts and password management. Two popular server operating systems are Linux and Windows.

For Linux, passwords are stored in the /etc/passwd or /etc/shadow file. Because the /etc/passwd file is a text file that can be easily accessed, you should ensure that any Linux servers use the /etc/shadow file, where the passwords in the file can be protected using a hash. The root user in Linux is a default account that is given administrative-level access to the entire server. If the root account is compromised, all passwords should be changed. Access to the root account should be limited only to system administrators, and root login should be allowed only via a system console.

For Windows Server 2003 and earlier and all client versions of Windows that are in workgroups, the Security Account Manager (SAM) stores user passwords in a hashed format. It stores a password as a LAN Manager (LM) hash and/or a New Technology LAN Manager (NTLM) hash. However, known security issues exist with a SAM, especially with regard to the LM hashes, including the ability to dump the password hashes directly from the registry. You should take all Microsoft-recommended security measures to protect this file. If you manage a Windows network, you should change the name of the default administrator account or disable it. If this account is retained, make sure that you assign it a password. The default administrator account may have full access to a Windows server. Most versions of Windows can be configured to disable the creation and storage of valid LM hashes when the user changes her password. This is the default setting in Windows Vista and later but was disabled by default in earlier versions of Windows.
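The complexity recommendations above can be expressed as a small check. This is a sketch assuming the three-of-four-character-classes rule and an 8-character minimum; real policies make these thresholds configurable.

```python
import string

def meets_policy(password: str, username: str, min_length: int = 8) -> bool:
    """Check a password against common complexity recommendations:
    minimum length, no username embedded in the password, and at
    least three of the four character classes."""
    if len(password) < min_length:
        return False
    if username and username.lower() in password.lower():
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= 3
```

For example, `meets_policy("Tr0ub4dor!", "alice")` passes all three rules, while `meets_policy("alicepass1", "alice")` fails because the username appears inside the password.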

Patch management

Patch management is often seen as a subset of configuration management. Software patches are updates released by vendors that either fix functional issues with or close security loopholes in operating systems, applications, and versions of firmware that run on network devices. To ensure that all devices have the latest patches installed, you should deploy a formal system to ensure that all systems receive the latest updates after thorough testing in a non-production environment. It is impossible for a vendor to anticipate every possible impact a change might have on business-critical systems in a network. The enterprise is responsible for ensuring that patches do not adversely impact operations. The patch management life cycle includes the following steps:

Step 1. Determine the priority of the patches and schedule the patches for deployment.
Step 2. Test the patches prior to deployment to ensure that they work properly and do not cause system or security issues.
Step 3. Install the patches in the live environment.
Step 4. After patches are deployed, ensure that they work properly.

Many organizations deploy a centralized patch management system to ensure that patches are deployed in a timely manner. With this system, administrators can test and review all patches before deploying them to the systems they affect. Administrators can schedule the updates to occur during non-peak hours.
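The life-cycle steps can be sketched as code. The patch names and the numeric severity field below are invented for illustration; in practice severity would come from the vendor advisory or a CVSS score.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    name: str
    severity: int                      # e.g., CVSS base score, rounded
    history: list = field(default_factory=list)

def schedule(patches):
    """Step 1: prioritize the highest-severity patches first."""
    return sorted(patches, key=lambda p: p.severity, reverse=True)

def deploy(patch: Patch) -> bool:
    """Steps 2-4: test in non-production, install, then verify."""
    for stage in ("tested in non-production", "installed", "verified"):
        patch.history.append(stage)    # audit trail of each stage
    return True

queue = schedule([Patch("KB-001", 3), Patch("KB-002", 9), Patch("KB-003", 5)])
print([p.name for p in queue])   # -> ['KB-002', 'KB-003', 'KB-001']
```

A centralized patch management system performs essentially this bookkeeping at scale, with the added ability to schedule installs for non-peak hours.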

Employment Agreement and Policies

Personnel hiring procedures should include signing all the appropriate documents, including government-required documentation, no-expectation-of-privacy statements, and nondisclosure agreements (NDAs). Organizations usually have a personnel handbook and other hiring information that must be communicated to the employee. The hiring process should include a formal verification that the employee has completed all the training. Employee IDs and passwords are issued at this time. Code of conduct, conflict of interest, and ethics agreements should also be signed at this time, as should any non-compete agreements intended to discourage employees from leaving the organization for a competitor. Employees should be given guidelines for periodic performance reviews, compensation, and recognition of achievements.

Employment Candidate Screening

Personnel screening should occur prior to an offer of employment and might include a criminal history, work history, background investigations, credit history, driving records, substance-abuse testing, reference checks, education and licensing verification, Social Security number verification and validation, and a suspected terrorist watch list check. Each organization should determine its screening needs based on the organization's needs and the prospective employee's employment level.

Job descriptions should contain the roles and responsibilities of the job role and any experience or education required. If skills must be maintained or upgraded, the job description should list the annual training requirements, especially if specialized security training is needed. Annual participation in security awareness training and other compliance requirements should be included as part of the employment agreement.

Criminal history checks are allowed under the Fair Credit Reporting Act (FCRA). Employers can request criminal records for most potential employees for the past seven years. If an applicant will be earning more than $75,000 annually, there are no time restrictions on criminal history. Employers need to search state and county criminal records, sex and violent offender records, and prison records. Many companies can provide search services for a fee.

Work history should be verified. Former employers should be contacted to confirm dates employed, position, performance, and reason for leaving. However, security professionals should keep in mind that some companies will only verify the employment term.

Background investigation should research any claim made on the applicant's application or resume. Verification of the applicant's claims serves to protect the hiring organization by ensuring that the applicant holds the skills and experience claimed. Employees should also be reinvestigated based on their employment level. For example, employees with access to financial data and transactions should undergo periodic credit checks.

Credit history ensures that personnel who are involved in financial transactions for the organization will not be risks for financial fraud. The FCRA and Equal Employment Opportunity Commission (EEOC) should be consulted to help human resources personnel in this area. In addition, it is always good to involve legal counsel.

Driving records are necessary if the applicant will be operating a motor vehicle as part of his job. But often this type of check for other applicants can help reveal lifestyle issues, such as driving under the influence or license suspension, that can cause employment problems later.

Substance-abuse testing will reveal to the employer any drug use. Because a history of drug use can cause productivity problems and absenteeism, it is best to perform this testing before offering employment. However, security professionals should ensure that any substance testing is clearly stated as part of the job posting.

For reference checks, two types of checks are performed: work and personal. Work reference checks verify employment history. Personal reference checks involve contacting individuals supplied by the applicant and asking questions regarding the applicant's capabilities, skills, and personality.

Education and licensure verification are usually fairly easy to complete. Employers can request transcripts from educational institutions. For any licensure or certification, the licensing or certification body can verify the license or certification held.

Social Security number verification and validation can be achieved by contacting the Social Security Administration to ensure that the Social Security information provided is accurate. The Social Security Administration will alert you if a provided Social Security number has been misused, including if the number belongs to a deceased person or person in a detention facility.

Just as there are companies that can provide criminal history checks, companies have recently started providing services to search the federal and international lists of suspected terrorists. Organizations involved in defense, aviation, technology, and biotechnology fields should consider performing terrorist checks for all applicants. As any security professional knows, the sensitivity of the information that the applicant will have access to should be the biggest determining factor guiding which checks to perform. Organizations should never get lax in their pre-employment applicant screening processes.

MAC Overflow

Preventing security issues with switches involves preventing MAC address overflow attacks. By design, switches place each port in its own collision domain, which is why a sniffer connected to a single port on a switch can only capture the traffic on that port and not traffic on other ports. However, an attack called a MAC address overflow attack can cause a switch to fill its MAC address table with nonexistent MAC addresses. Using free tools, a hacker can send thousands of nonexistent MAC addresses to the switch. The switch can dedicate only a certain amount of memory for the table, and at some point, it fills with the bogus MAC addresses. This prevents valid devices from creating content-addressable memory (CAM) entries (MAC addresses) in the MAC address table. When this occurs, all legitimate traffic received by the switch is flooded out every port. Remember that this is what switches do when they don't find a MAC address in the table. A hacker can capture all the traffic. Figure 6-11 shows how this type of attack works.
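The table-exhaustion mechanism can be simulated with a toy CAM table. The capacity figure below is illustrative; real switches hold thousands of entries, which is exactly why attackers use tools that generate MAC addresses by the thousand.

```python
# Toy model of a switch CAM table with fixed capacity. Once the table is
# full of bogus entries, legitimate MACs cannot be learned, so frames to
# them are flooded out every port -- where the attacker can sniff them.
class Switch:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cam = {}                          # MAC address -> port

    def learn(self, mac: str, port: int) -> None:
        if mac in self.cam or len(self.cam) < self.capacity:
            self.cam[mac] = port               # normal address learning
        # else: table full -- the new address is silently not learned

    def forward(self, dst_mac: str) -> str:
        # Unknown destination: flood out every port, as real switches do.
        return f"port {self.cam[dst_mac]}" if dst_mac in self.cam else "flood"

sw = Switch(capacity=1000)
for i in range(5000):                          # attacker's bogus source MACs
    sw.learn(f"de:ad:be:ef:{i:04x}", port=7)
sw.learn("aa:aa:aa:aa:aa:01", port=1)          # legitimate host arrives too late
print(sw.forward("aa:aa:aa:aa:aa:01"))         # -> flood
```

Port security features that limit the number of MAC addresses learned per port are the usual countermeasure to this attack.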

Privilege Escalation

Privilege escalation is the process of exploiting a bug or weakness in an operating system to allow a user to receive privileges to which she is not entitled. These privileges can be used to delete files, view private information, or install unwanted programs, such as viruses. There are two types of privilege escalation:

- Vertical privilege escalation: This occurs when a lower-privilege user or application accesses functions or content reserved for higher-privilege users or applications.
- Horizontal privilege escalation: This occurs when a normal user accesses functions or content reserved for other normal users.

The following measures can help prevent privilege escalation:

- Ensure that databases and related systems and applications are operating with the minimum privileges necessary to function.
- Verify that users are given the minimum access required to do their job.
- Ensure that databases do not run with root, administrator, or other privileged account permissions, if possible.

How to mitigate XSS?

Proper validation of all input should be performed to prevent this type of attack. This involves identifying all user-supplied input and encoding or escaping any output that includes it before it is rendered in the browser.
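A minimal sketch of output encoding with Python's standard library: `html.escape` converts markup characters to HTML entities, so injected script is displayed as text rather than executed.

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-supplied input before placing it in an HTML page,
    so script tags render as harmless text instead of executing."""
    return f"<p>{html.escape(user_input)}</p>"

print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Encoding at output time complements (but does not replace) validating input when it is first received.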

Web Proxy

Proxy servers can be appliances, or they can be software that is installed on a server operating system. These servers act like a proxy firewall in that they create the web connection between systems on their behalf, but they can typically allow and disallow traffic on a more granular basis. For example, a proxy server may allow the Sales group to go to certain websites while not allowing the Data Entry group access to those same sites. The functionality extends beyond HTTP to other traffic types, such as FTP traffic.Proxy servers can provide an additional beneficial function called web caching. When a proxy server is configured to provide web caching, it saves a copy of all web pages that have been delivered to internal computers in a web cache. If any user requests the same page later, the proxy server has a local copy and need not spend the time and effort to retrieve it from the Internet. This greatly improves web performance for frequently requested pages.
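The caching behavior can be sketched as follows. Here `fetch_page` stands in for the real retrieval from the Internet and the HTML it returns is fabricated for illustration; a production proxy would also honor cache-control and expiry headers.

```python
# Toy caching proxy: the first request for a URL goes to the Internet,
# later requests for the same URL are served from the local web cache.
class CachingProxy:
    def __init__(self, fetch_page):
        self.fetch_page = fetch_page   # stand-in for the slow Internet fetch
        self.cache = {}                # URL -> stored copy of the page
        self.hits = 0

    def get(self, url: str) -> str:
        if url in self.cache:
            self.hits += 1             # served locally, no round trip
            return self.cache[url]
        page = self.fetch_page(url)    # slow path: retrieve and keep a copy
        self.cache[url] = page
        return page

proxy = CachingProxy(lambda url: f"<html>{url}</html>")
proxy.get("http://example.com/")       # miss: fetched and cached
proxy.get("http://example.com/")       # hit: served from the cache
print(proxy.hits)                      # -> 1
```

Frequently requested pages are thus delivered from the cache, which is where the performance improvement described above comes from.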

Qualitative Risk Analysis

Qualitative risk analysis does not assign monetary and numeric values to all facets of the risk analysis process. Qualitative risk analysis techniques include intuition, experience, and best practice techniques, such as brainstorming, focus groups, surveys, questionnaires, meetings, and interviews. Although all these techniques can be used, most organizations determine the best technique(s) based on the threats to be assessed. Experience and education on the threats are needed. Each member of the group who has been chosen to participate in qualitative risk analysis uses his experience to rank the likelihood of each threat and the damage that might result. After each group member ranks the threat possibility, loss potential, and safeguard advantage, the data is combined in a report to present to management. All levels of staff should be represented as part of a qualitative risk analysis, but it is vital that some participants in this process should have some expertise in risk analysis. One advantage of qualitative over quantitative risk analysis is that qualitative risk analysis prioritizes the risks and identifies areas for immediate improvement in addressing the threats. A disadvantage of qualitative risk analysis is that all results are subjective, and a dollar value is not provided for cost-benefit analysis or for budget help. Most risk analysis includes some hybrid use of both quantitative and qualitative risk analyses. Most organizations favor using quantitative risk analysis for tangible assets and qualitative risk analysis for intangible assets.
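A sketch of combining member rankings into a prioritized report, assuming each participant scores likelihood and impact on a 1-5 scale. The threat names and scores below are invented for illustration.

```python
def prioritize(rankings):
    """rankings: {threat: [(likelihood, impact), ...]}, one tuple per
    group member. Returns threats ordered from highest to lowest
    combined score -- a relative ranking, not a dollar value."""
    scores = {}
    for threat, votes in rankings.items():
        avg_likelihood = sum(l for l, _ in votes) / len(votes)
        avg_impact = sum(i for _, i in votes) / len(votes)
        scores[threat] = avg_likelihood * avg_impact
    return sorted(scores, key=scores.get, reverse=True)

order = prioritize({
    "phishing":    [(5, 3), (4, 4)],
    "flood":       [(1, 5), (2, 5)],
    "usb malware": [(3, 3), (3, 2)],
})
print(order[0])   # -> phishing
```

The output illustrates the stated advantage of the qualitative approach (a clear priority order) and its stated disadvantage (no monetary figure for cost-benefit analysis).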

Reverse Engineering

Reverse engineering can refer to retracing the steps in an incident, as seen from the logs in the affected devices or in logs of infrastructure devices that may have been involved in transferring information to and from the devices. This can help you understand the sequence of events. When unknown malware is involved, the term reverse engineering may refer to an analysis of the malware's actions to determine a removal technique. This is the approach used for zero-day attacks, in which no known fix is yet available from anti-malware vendors. With respect to reverse engineering malware, this process refers to extracting the code from the binary executable to identify how it was programmed and what it does. There are three ways the binary malware file can be made readable:

- Disassembly: This refers to reading the machine code into memory and then outputting each instruction as a text string. Analyzing this output requires a very high level of skill and special software tools.
- Decompiling: This process attempts to reconstruct the high-level language source code.
- Debugging: This process steps through the code interactively. There are two kinds of debuggers:
  + Kernel debugger: This type of debugger operates at ring 0 (essentially the driver level) and has direct access to the kernel.
  + Usermode debugger: This type of debugger has access to only the usermode space of the operating system. Most of the time, this is enough, but not always. In the case of rootkits or even super-advanced protection schemes, it is preferable to step into a kernel mode debugger instead because usermode in such situations is untrustworthy.
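As a toy illustration of what disassembly means (reading compiled instructions back out as text), Python's standard `dis` module can disassemble the interpreter's own bytecode. Native malware analysis would of course use dedicated disassembler tooling, but the idea is the same.

```python
import dis

def sample(x):
    return x + 1

# Each compiled instruction comes back as a named text mnemonic,
# exactly the transformation a disassembler performs on machine code.
ops = [ins.opname for ins in dis.get_instructions(sample)]
print("RETURN_VALUE" in ops)   # the return instruction appears as text
```

Decompiling goes one step further and tries to reconstruct source-level constructs from these instructions, which is why it is both more convenient and less reliable.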

Sherwood Applied Business Security Architecture (SABSA)

SABSA is an enterprise security architecture framework that uses the six communication questions (What, Where, When, Why, Who, and How) that intersect with six layers (operational, component, physical, logical, conceptual, and contextual). It is a risk-driven architecture.

Scada includes what components?

SCADA includes the following components:

- Sensors: Sensors typically have digital or analog I/O and are not in a form that can be easily communicated over long distances.
- Remote terminal units (RTU): RTUs connect to sensors and convert sensor data to digital data, including telemetry hardware.
- Programmable logic controllers (PLC): PLCs connect to sensors and convert sensor data to digital data; they do not include telemetry hardware.
- Telemetry system: Such a system connects RTUs and PLCs to control centers and the enterprise.
- Human interface: Such an interface presents data to the operator.

Sanitization

Sanitization refers to removing all traces of a threat by overwriting the drive multiple times to ensure that the threat is removed. This works well for mechanical hard disk drives, but solid-state drives present a challenge in that overwriting is unreliable: because of wear leveling, writes do not necessarily land on the cells that held the original data. Most solid-state drive vendors provide sanitization commands that can be used to erase the data on the drive. Security professionals should research these commands to ensure that they are effective.
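A sketch of multi-pass overwriting for an ordinary file on a mechanical disk. The pass count is illustrative, and as noted above this approach is not reliable for solid-state drives.

```python
import os
import tempfile

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents several times with random bytes.
    Each pass is flushed and synced so the writes actually reach disk."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())

# Demonstration on a throwaway file
fd, tmp = tempfile.mkstemp()
os.close(fd)
with open(tmp, "wb") as f:
    f.write(b"secret data")
overwrite_file(tmp)
wiped = open(tmp, "rb").read()
os.remove(tmp)
```

Note that overwriting a file in place says nothing about copies the filesystem may keep elsewhere (journals, snapshots, slack space); drive-level sanitization tools exist precisely for that reason.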

Training

Security awareness training, security training, and security education are three terms that are often used interchangeably, but they are actually three different things. Basically, awareness training is the what, security training is the how, and security education is the why. Awareness training reinforces the fact that valuable resources must be protected by implementing security measures. Security training teaches personnel the skills they need to perform their jobs in a secure manner. Awareness training and security training are usually combined as security awareness training, which improves user awareness of security and ensures that users can be held accountable for their actions. Security education is more independent, targeted at security professionals who require security expertise to act as in-house experts for managing the security programs.

Security awareness training should be developed based on the audience. In addition, trainers must understand the corporate culture and how it will affect security. The audiences you need to consider when designing training include high-level management, middle management, technical personnel, and other staff.

For high-level management, the security awareness training must provide a clear understanding of potential risks and threats, effects of security issues on organizational reputation and financial standing, and any applicable laws and regulations that pertain to the organization's security program. Middle management training should discuss policies, standards, baselines, guidelines, and procedures, particularly how these components map to individual departments. Also, middle management must understand their responsibilities regarding security. Technical staff should receive technical training on configuring and maintaining security controls, including how to recognize an attack when it occurs. In addition, technical staff should be encouraged to pursue industry certifications and higher education degrees. Other staff need to understand their responsibilities regarding security so that they perform their day-to-day tasks in a secure manner. With these staff, providing real-world examples to emphasize proper security procedures is effective.

Personnel should sign a document that indicates they have completed the training and understand all the topics. Although the initial training should occur when personnel are hired, security awareness training should be considered a continuous process, with future training sessions occurring at least annually.

Improper Storage of Sensitive Data

Sensitive information in this context includes usernames, passwords, encryption keys, and paths that applications need to function but that would cause harm if discovered. Determining the proper method of securing this information is critical and not easy. Although this was not always the case, it is now a generally accepted rule not to hard-code passwords: hard-coded values are difficult to change and can be recovered from the application code. Instead, passwords and other secrets should be stored outside the code and protected using encryption or hashing, making them difficult to reverse or discover.
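One common pattern, sketched here with Python's standard library: configuration secrets are read from the environment at runtime (the variable name below is hypothetical), and stored passwords are kept as salted one-way PBKDF2 hashes rather than hard-coded or reversible values.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # One-way salted derivation: discovering the stored value does not
    # reveal the password, and nothing secret is hard-coded in the source.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_password(password, salt), stored)

api_key = os.environ.get("APP_API_KEY")   # read at runtime, never committed
salt = os.urandom(16)
stored = hash_password("correct horse", salt)
print(verify("correct horse", salt, stored))   # -> True
print(verify("wrong guess", salt, stored))     # -> False
```

Secrets that must remain recoverable (such as third-party API keys) belong in a vault or encrypted configuration store rather than in the code or the hash scheme above.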

Sensitivity and Criticality

Sensitivity is a measure of how freely data can be handled. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals. Some data is also subject to regulation by state or federal laws and requires notification in the event of a disclosure. Data is assigned a level of sensitivity based on who should have access to it and how much harm would be done if it were disclosed. This assignment of sensitivity is called data classification. Criticality is a measure of the importance of the data. Data considered sensitive may not necessarily be considered critical. Assigning a level of criticality to a particular data set must take into consideration the answer to a few questions: Will you be able to recover the data in case of a disaster? How long will it take to recover the data? What is the effect of this downtime, including loss of public standing? Data is considered essential when it is critical to the organization's business. When essential data is not available, even for a brief period of time, or its integrity is questionable, the organization is unable to function. Data is considered required when it is important to the organization, but organizational operations can continue for a predetermined period of time even if the data is not available. Data is non-essential if the organization is able to operate without it during extended periods of time. Once the sensitivity and criticality of data are understood and documented, the organization should work to create a data classification system. Most organizations use either a commercial business classification system or a military and government classification system.
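The essential/required/non-essential distinction can be sketched as a function of how long the organization can tolerate the data being unavailable. The threshold values below are illustrative, not prescriptive; each organization sets its own.

```python
def classify_criticality(max_tolerable_outage_hours: float) -> str:
    """Map tolerance for data unavailability to a criticality level.
    Thresholds here are example values only."""
    if max_tolerable_outage_hours < 1:
        return "essential"       # business stops almost immediately
    if max_tolerable_outage_hours <= 72:
        return "required"        # operations continue for a set period
    return "non-essential"       # extended operation possible without it

print(classify_criticality(0.5))   # -> essential
```

In practice this feeds directly into recovery planning: the shorter the tolerable outage, the more the organization must invest in backup and recovery for that data set.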

Authentication Logs

Servers and other devices to which users must authenticate also contain logs. The Windows Security log is an example. In the example shown in Figure 12-3, the highlighted entry shows a logon failure. When you highlight an event in this way, the details are displayed in the bottom pane. This figure shows when the event occurred, from what device, and in what domain. It also shows an event ID of 4625. This is useful when you want to filter the log to show only events of a certain type.
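The filtering step described above can be sketched against parsed log entries. The dictionaries stand in for real event records, and the account and source values are invented.

```python
# Filter parsed log entries by event ID -- the same idea as filtering the
# Windows Security log for event ID 4625 (logon failure).
FAILED_LOGON = 4625

events = [
    {"event_id": 4624, "account": "jsmith",  "source": "WS01"},   # success
    {"event_id": 4625, "account": "jsmith",  "source": "WS01"},   # failure
    {"event_id": 4625, "account": "svc_sql", "source": "SRV02"},  # failure
]

failures = [e for e in events if e["event_id"] == FAILED_LOGON]
print(len(failures))   # -> 2
```

Grouping the filtered failures by account or source device is typically the next step when hunting for password-guessing activity.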

Services

Services that run on both servers and workstations have identities in the security system. They possess accounts called system or service accounts that are built in, and they log on when they operate, just as users do. They also possess privileges and rights, and this is why security issues come up with these accounts. These accounts typically possess many more privileges than they actually need to perform the service. The security issue is that if a malicious individual or process gained control of the service, the rights obtained would be significant. Therefore, it is important to apply the concept of least privilege to these services by identifying the rights the services need and limiting the services to only those rights. A common practice has been to create a user account for the service that possesses only the rights required and set the service to log on using that account. You can do this by accessing the Log On tab in the properties of the service, as shown in Figure 11-1. In this example, the Remote Desktop Service is set to log on as a Network Service account. To limit this account, you can create a new account either in the local machine or in Active Directory, give the account the proper permissions, and then click the Browse button, locate the account, and select it. While this is a good approach, it involves some complications. First is the difficulty of managing the account password. If the domain in which the system resides has a policy that requires a password change after 30 days and you don't change the service account password, the service will stop running. Another complication involves the use of domain accounts. While setting a service account as a domain account eliminates the need to create an account for the service locally on each server that runs the service, it introduces a larger security risk. If that single domain service account were compromised, the account would provide access to all servers running the service.

Session Hijack

Session hijacking occurs when a hacker is able to identify the unique session ID assigned to an authenticated user. It is important that the process used by the web server to generate these IDs be truly random. Session hijacking and measures to prevent it are covered in Lesson 6.

Session Management

Session management involves taking measures to protect against session hijacking. This can occur when a hacker is able to identify the unique session ID assigned to an authenticated user. It is important that the process used by the web server to generate these IDs be truly random.
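The randomness requirement above can be illustrated with a short Python sketch. This is illustrative only; the function name is our own invention, not part of any particular web framework. The key point is to draw session IDs from the operating system's cryptographically secure random source rather than from a predictable generator:

```python
import secrets

def new_session_id() -> str:
    # 32 random bytes from the OS CSPRNG, URL-safe Base64 encoded.
    # Unpredictable IDs make session guessing and hijacking impractical.
    return secrets.token_urlsafe(32)

sid1 = new_session_id()
sid2 = new_session_id()
```

Note that `random.random()` (a Mersenne Twister) would be the wrong choice here: its output can be reconstructed from a handful of observed values, which is exactly the weakness a session hijacker exploits.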

Social Engineering Threats

Social engineering attacks occur when attackers use believable language and user gullibility to obtain user credentials or some other confidential information. Social engineering threats that you should understand include phishing/pharming, shoulder surfing, identity theft, and dumpster diving. The best countermeasure against social engineering threats is to provide user security awareness training. This training should be required and must occur on a regular basis because social engineering techniques evolve constantly. The following are the most common social engineering threats: Phishing/pharming: Phishing is a social engineering attack in which attackers try to learn personal information, including credit card information and financial data. This type of attack is usually carried out by implementing a fake website that very closely resembles a legitimate website. Users enter data, including credentials, on the fake website, allowing the attackers to capture any information entered. Spear phishing is a phishing attack carried out against a specific target by learning about the target's habits and likes. Spear phishing attacks take longer to carry out than phishing attacks because of the information that must be gathered. Pharming is similar to phishing, but pharming actually pollutes the contents of a computer's DNS cache so that requests to a legitimate site are actually routed to an alternate site. Caution users against using any links embedded in e-mail messages, even if a message appears to have come from a legitimate entity. Users should also review the address bar any time they access a site where their personal information is required to ensure that the site is correct and that SSL is being used, which is indicated by an HTTPS designation at the beginning of the URL address. Shoulder surfing: Shoulder surfing occurs when an attacker watches a user enter login or other confidential data. Encourage users to always be aware of who is observing their actions. 
Implementing privacy screens helps ensure that data entry cannot be recorded. Identity theft: Identity theft occurs when someone obtains personal information, including driver's license number, bank account number, and Social Security number, and uses that information to assume an identity of the individual whose information was stolen. After the identity is assumed, the attack can go in any direction. In most cases, attackers open financial accounts in the user's name. Attackers also can gain access to the user's valid accounts. Dumpster diving: Dumpster diving occurs when attackers examine garbage contents to obtain confidential information. This includes personnel information, account login information, network diagrams, and organizational financial data. Organizations should implement policies for shredding documents that contain this information.

Introduction of New Accounts

Some applications have their own account database. In that case, you may find accounts that didn't previously exist in the database—and this should be a cause for alarm and investigation. Many application compromises create accounts with administrative access for the use of a malicious individual or for the processes operating on their behalf.

System Process Criticality

Some assets are not actually information but systems that provide access to information. When these systems or groups of systems provide access to data required to continue to do business, they are called critical systems. While it is somewhat simpler to arrive at a value for physical assets such as servers, routers, switches, and other devices, in cases where these systems provide access to critical data or are required to continue a business-critical process, their value is more than the replacement cost of the hardware. The assigned value should be increased to reflect the system's importance in providing access to data or its role in continuing a critical process.

Proper Credential Management

Some of the guidelines for credential management include the following: Use strong passwords. Automatically generate complex passwords. Implement password history. Use access control mechanisms, including the who, what, how, and when of access. Implement auditing. Implement backup and restore mechanisms for data integrity. Implement redundant systems within the credential management systems to ensure 24/7/365 access. Implement credential management group policies or other mechanisms offered by operating systems.
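To illustrate the "automatically generate complex passwords" guideline, here is a minimal Python sketch. The function name and the complexity policy (one of each character class, 16 characters) are our own assumptions for the example, not requirements from any specific standard:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw characters from the OS CSPRNG and retry until the result
    # contains at least one lowercase, uppercase, digit, and symbol.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd
```

A credential management system would pair a generator like this with the history, auditing, and access-control mechanisms listed above.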

Sourcefire

Sourcefire (now owned by Cisco) created products based on Snort (covered in the next section). The devices Sourcefire created were branded as Firepower appliances. These products were next-generation IPSs (NGIPS) that provide network visibility into hosts, operating systems, applications, services, protocols, users, content, network behavior, and network attacks and malware. These products also include integrated application control, malware protection, and URL filtering. Figure 14-1 shows the Sourcefire Defense Center displaying the numbers of events in the last hour in a graph. All the services provided by these products are now incorporated into Cisco firewall products. For more information on Sourcefire, see www.cisco.com/c/en/us/services/acquisitions/sourcefire.html.

Anti-spyware

Spyware tracks your activities and can also gather personal information that could lead to identity theft. In some cases, spyware can even direct the computer to install software and change settings. Most antivirus or anti-malware packages also address spyware, and ensuring that definitions for both programs are up to date is the key to addressing this issue. An example of a program that can be installed only with the participation of the user (by clicking on something they shouldn't have) is a key logger. These programs record all keystrokes, which can include usernames and passwords. One approach that has been effective in removing spyware is to reboot the machine in safe mode and then run the anti-spyware and allow it to remove the spyware. In safe mode, it is more difficult for the malware to avoid the removal process.

Static Code Analysis

Static code analysis is done without the code executing. Code review and testing must occur throughout the entire Software Development Life Cycle. Code review and testing must identify bad programming patterns, security misconfigurations, functional bugs, and logic flaws. Code review and testing in the planning and design phase include architecture security reviews and threat modeling. Code review and testing in the development phase include static source code analysis with manual code review, and static binary code analysis with manual binary review. Once an application is deployed, code review and testing involve penetration testing, vulnerability scanners, and fuzz testing. Static code review can be done with scanning tools that look for common issues. These tools can use a variety of approaches to find bugs, including the following: Data flow analysis: This analysis looks at runtime information while the software is in a static state. Control flow graph: A graph of the components and their relationships can be developed and used for testing by focusing on the entry and exit points of each component or module. Taint analysis: This analysis attempts to identify variables that are tainted with user-controllable input. Lexical analysis: This analysis converts source code into tokens of information to abstract the code and make it easier to manipulate for testing purposes.
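As a minimal sketch of how a static analyzer flags bad patterns without ever executing the code, the following Python example parses source into an abstract syntax tree and reports calls to functions on a deny list. The deny list and function name are invented for the example; real tools use far richer rule sets:

```python
import ast

# Deliberately tiny deny list for illustration purposes
DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source: str):
    # Parse the source into an AST (no execution takes place) and
    # collect the name and line number of each call to a deny-listed function.
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            findings.append((node.func.id, node.lineno))
    return findings

sample = "x = eval(input())\nprint(x)"
print(find_dangerous_calls(sample))  # → [('eval', 1)]
```

This is essentially a toy lexical/syntactic check; commercial analyzers layer data flow and taint analysis on top of this kind of parse to reduce false positives.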

the high-level steps in conducting a vulnerability scan

Step 1. Add IP addresses or domain names to the scan.
Step 2. Choose scanner appliances (hardware or software sensors).
Step 3. Select the scan option. For example, in Nessus, under Advanced Settings, you can use custom policy settings to alter the operation of the scan. The following are some selected examples:
auto_update_delay: Number of hours to wait between two updates. Four (4) hours is the minimum allowed interval.
global.max_hosts: Maximum number of simultaneous checks against each host tested.
global.max_simult_tcp_sessions: Maximum number of simultaneous TCP sessions between all scans.
max_hosts: Maximum number of hosts checked at one time during a scan.
Step 4. Start the scan.
Step 5. View the scan status and results.

To configure an access list on a router, use the following steps:

Step 1. Create the access list:
Corp(config)# access-list 10 deny 172.16.10.15
This list denies the device at 172.16.10.15 from sending any traffic on the interface where the list is applied and in the direction specified.
Step 2. To prevent all other traffic from being denied by the hidden deny all rule that comes at the end of all ACLs, create another rule that allows all:
Corp(config)# access-list 10 permit any
Step 3. Apply the ACL to an interface of the router and indicate the direction in which it should filter (after you have entered configuration mode for the desired interface):
Corp(config)# int fa0/1
Corp(config-if)# ip access-group 10 out

What are the steps of a vulnerability management process?

Step 1. Identify requirements. Step 2. Establish scanning frequency. Step 3. Configure tools to perform scans according to specification. Step 4. Execute scanning. Step 5. Generate reports. Step 6. Perform remediation. Step 7. Perform ongoing scanning and continuous monitoring.

What are the steps in performing a penetration test?

Step 1. Planning and preparation Step 2. Information gathering and analysis Step 3. Vulnerability detection Step 4. Penetration attempt Step 5. Analysis and reporting Step 6. Cleaning up

Syslogs

Syslog messages all follow the same format because they have, for the most part, been standardized. The Syslog packet size is limited to 1024 bytes and carries the following information:
Facility: The source of the message. The source can be the operating system, the process, or an application.
Severity: Rated using the following scale:
0 Emergency: System is unusable.
1 Alert: Action must be taken immediately.
2 Critical: Critical conditions.
3 Error: Error conditions.
4 Warning: Warning conditions.
5 Notice: Normal but significant condition.
6 Informational: Informational messages.
7 Debug: Debug-level messages.
Source: The log from which this entry came.
Action: The action taken on the packet.
Source: The source IP address and port number.
Destination: The destination IP address and port number.
The following is a standard Syslog message, and its parts are explained in Table 12-3:
*May 1 23:02:27.143: %SEC-6-IPACCESSLOGP: list ACL-IPv4-E0/0-IN permitted tcp 192.168.1.3(1026) -> 192.168.2.1(80), 1 packet
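On the wire, the facility and severity above are combined into a single PRI value (PRI = facility × 8 + severity, per RFC 3164/RFC 5424). A short Python sketch of decoding it; the function name is ours:

```python
# Severity names 0-7 as defined in the Syslog standard
SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def decode_pri(pri: int):
    # PRI = facility * 8 + severity, so divmod recovers both fields
    facility, severity = divmod(pri, 8)
    return facility, SEVERITIES[severity]

# A message beginning <190> is facility 23 (local7), severity 6
print(decode_pri(190))  # → (23, 'Informational')
```

The %SEC-6-IPACCESSLOGP tag in the Cisco example above encodes the same idea: the 6 is the Informational severity level.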

TACACS+

TACACS+ uses Transmission Control Protocol (TCP) port 49 to communicate between the TACACS+ client and the TACACS+ server. An example is a Cisco switch authenticating and authorizing administrative access to the switch's IOS CLI. The switch is the TACACS+ client, and Cisco Secure ACS is the server. One of the key differentiators of TACACS+ is its ability to separate authentication, authorization, and accounting into independent functions. This is why TACACS+ is so commonly used for device administration, even though RADIUS is still certainly capable of providing device administration AAA. TACACS+ communication between the client and server uses different message types depending on the function. In other words, different messages may be used for authentication than are used for authorization and accounting. Another very interesting point to know is that TACACS+ encrypts the entire body of each packet (only the fixed header is sent in cleartext), whereas RADIUS encrypts only the password field.
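Every TACACS+ packet begins with a fixed 12-byte header. A hedged Python sketch of packing that header per RFC 8907 follows; the sequence number, session ID, and body length values are arbitrary examples:

```python
import struct

TAC_PLUS_AUTHEN = 0x01  # authentication packet type (RFC 8907)

def tacacs_header(seq_no: int, session_id: int, body_len: int) -> bytes:
    # Header layout (big-endian): version, type, seq_no, flags,
    # 4-byte session_id, 4-byte body length = 12 bytes total.
    version = (0xC << 4) | 0x0  # major version 0xc, minor version 0
    flags = 0x00                # 0 = body is obfuscated ("encrypted")
    return struct.pack("!BBBBII", version, TAC_PLUS_AUTHEN,
                       seq_no, flags, session_id, body_len)

hdr = tacacs_header(1, 0x12345678, 40)
print(len(hdr))  # → 12
```

The header fields themselves are never obfuscated; the MD5-based pad derived from the shared secret and session ID is applied only to the body that follows these 12 bytes.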

The Open Group Architecture Framework (TOGAF)

TOGAF, another enterprise architecture framework, helps organizations design, plan, implement, and govern an enterprise information architecture. The latest version, TOGAF 9.1, was launched in December 2011. TOGAF is based on four interrelated domains: Business architecture: Business strategy, governance, organization, and key business processes Applications architecture: Individual systems to be deployed, interactions between the application systems, and their relationships to the core business processes Data architecture: Structure of an organization's logical and physical data assets Technical architecture: Hardware, software, and network infrastructure The Architecture Development Method (ADM), as prescribed by TOGAF, is applied to develop an enterprise architecture that meets the business and information technology needs of an organization. The process, which is iterative and cyclic, is shown in Figure 10-3. Each phase of the process is checked against the requirements management process at the center of the cycle.

Double tagging

Tags are used on trunk links to identify the VLAN to which each frame belongs. Another type of attack to trunk ports is called VLAN hopping. It can be accomplished using a process called double tagging. In this attack, the hacker creates a packet with two tags. The first tag is stripped off by the trunk port of the first switch it encounters, but the second tag remains, allowing the frame to hop to another VLAN. This process is shown in Figure 6-13. In this example, the native VLAN number between the Company Switch A and Company Switch B switches has been changed from the default of 1 to 10.

Control Testing Procedures

Testing of the chosen security controls can be a manual process, or it can be automated. Manual review techniques rely on security configuration guides or checklists used to ensure that system settings are configured to minimize security risks. Assessors observe various security settings on the device and compare them with recommended settings from the checklist. Settings that do not meet minimum security standards are flagged and reported. Security Content Automation Protocol (SCAP) is a method for using specific standards to enable automated vulnerability management, measurement, and policy compliance evaluation. NIST SCAP files are written for FISMA compliance and NIST SP 800-53A security control testing. Automated tools can be executed directly on the device being assessed or on a system with network access to the device being assessed. Automated system configuration reviews are faster than manual methods, but some settings must be checked manually. Both methods require root or administrator privileges to view selected security settings. Generally it is preferable to use automated checks instead of manual checks. Automated checks can be done very quickly and provide consistent, repeatable results. Having a person manually checking hundreds or thousands of settings is tedious and prone to human error.

Communications Assistance for Law Enforcement Act (CALEA) of 1994

The Communications Assistance for Law Enforcement Act (CALEA) of 1994 affects law enforcement and intelligence agencies. It requires telecommunications carriers and manufacturers of telecommunications equipment to modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities. This allows federal agencies to monitor all telephone, broadband Internet, and voice over IP (VoIP) traffic in real time.

Computer Fraud and Abuse Act (CFAA):

The Computer Fraud and Abuse Act (CFAA) of 1986 affects any entities that might engage in hacking of "protected computers," as defined in the act. It was amended in 1989, 1994, and 1996; in 2001 by the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act; in 2002; and in 2008 by the Identity Theft Enforcement and Restitution Act. A "protected computer" is a computer used exclusively by a financial institution or the U.S. government or used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States. Due to the interstate nature of most Internet communication, any ordinary computer has come under the jurisdiction of the law, including cell phones. The law includes several definitions of hacking, including knowingly accessing a computer without authorization; intentionally accessing a computer to obtain financial records, U.S. government information, or protected computer information; and transmitting fraudulent commerce communication with the intent to extort.

Computer Security Act of 1987

The Computer Security Act of 1987 was superseded by the Federal Information Security Management Act (FISMA) of 2002. This act was the first law written to require a formal computer security plan. It was written to protect and defend the sensitive information in the federal government systems and provide security for that information. It also placed requirements on government agencies to train employees and identify sensitive systems.

European Union

The EU has implemented several laws and regulations that affect security and privacy. The EU Principles on Privacy include strict laws to protect private data. The EU's Data Protection Directive provides direction on how to follow the laws set forth in the principles. The EU created the Safe Harbor Privacy Principles to help guide U.S. organizations in compliance with the EU Principles on Privacy. Some of the guidelines include the following: Data should be collected in accordance with the law. Information collected about an individual cannot be shared with other organizations unless the individual gives explicit permission for this sharing. Information transferred to other organizations can be transferred only if the sharing organization has adequate security in place. Data should be used only for the purpose for which it was collected. Data should be used only for a reasonable period of time.

Electronic Communications Privacy Act (ECPA) of 1986:

The Electronic Communications Privacy Act (ECPA) of 1986 affects law enforcement and intelligence agencies. It extended government restrictions on wiretaps from telephone calls to include transmissions of electronic data by computer and prohibited access to stored electronic communications. It was amended by the Communications Assistance to Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act of 2001, and the FISA Amendments Act of 2008.

EMET

The Enhanced Mitigation Experience Toolkit (EMET) is a set of mitigation tools by Microsoft that helps prevent vulnerabilities in software from being exploited. While the technologies it uses present obstacles to the process of exploiting a vulnerability, it cannot guarantee success in that regard and should be considered (as the name implies) an enhancement and not a final solution. EMET focuses its attention on applications that are not capable of using CPU- or Windows-based security measures, such as Data Execution Prevention (DEP), structured exception handler overwrite protection (SEHOP), address space layout randomization (ASLR), and certificate trust. EMET scans for applications that fall into these categories and offers the option of forcing these features upon the application. This action may or may not cause the application to stop functioning.

Federal Information Security Management Act (FISMA) of 2002

The Federal Information Security Management Act (FISMA) of 2002 affects every federal agency. It requires federal agencies to develop, document, and implement an agencywide information security program.

Foreign Intelligence Surveillance Act (FISA) of 1978

The Foreign Intelligence Surveillance Act (FISA) of 1978 affects law enforcement and intelligence agencies. It was the first act to give procedures for the physical and electronic surveillance and collection of "foreign intelligence information" between "foreign powers" and "agents of foreign powers" and applied only to traffic within the United States. It was amended by the USA PATRIOT Act of 2001 and the FISA Amendments Act of 2008.

Federal Privacy Act of 1974

The Federal Privacy Act of 1974 affects any computer that contains records used by a federal agency. It provides guidelines on the collection, maintenance, use, and dissemination of PII about individuals that is maintained in systems of records by federal agencies.

Gramm-Leach-Bliley Act (GLBA) of 1999

The Gramm-Leach-Bliley Act of 1999 affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers. It provides guidelines for securing all financial information and prohibits sharing financial information with third parties. This act directly affects the security of PII.

Health Care and Education Reconciliation Act of 2010

The Health Care and Education Reconciliation Act of 2010 affects healthcare and educational organizations. This act increased some of the security measures that must be taken to protect healthcare information.

NIACAP

The National Information Assurance Certification and Accreditation Process (NIACAP) provides a standard set of activities, general tasks, and a management structure to certify and accredit systems that maintain the information assurance and security posture of a system or site. The accreditation process developed by NIACAP has four phases: Phase 1: Definition Phase 2: Verification Phase 3: Validation Phase 4: Post Accreditation NIACAP defines the following three types of accreditation: Type accreditation: Evaluates an application or system that is distributed to a number of different locations System accreditation: Evaluates an application or support system Site accreditation: Evaluates the application or system at a specific self-contained location

OWASP

The Open Web Application Security Project (OWASP) is a group that monitors attacks, specifically web attacks. OWASP maintains a list of the top 10 attacks on an ongoing basis. This group also holds regular meetings at chapters throughout the world, providing resources and tools such as testing procedures, code review steps, and development guidelines. The following are some of OWASP's key publications: Software Assurance Maturity Model: Guidance on moving from a disorganized software development process to one that focuses on continuous improvement Development Guide: Tips on secure coding practices and updates on the latest threats Testing Guide: A framework for performing penetration tests on software, along with code testing guidelines Guide to Building Secure Web Applications: Best practices for building security into a web application Code Review Guide: Advice on code review Application Security Verification Standards: A basis for testing web application technical security controls that provides developers with a list of requirements for secure development

Personal Information Protection and Electronic Documents Act (PIPEDA)

The Personal Information Protection and Electronic Documents Act (PIPEDA) affects how private sector organizations collect, use, and disclose personal information in the course of commercial business in Canada. The act was written to address European Union (EU) concerns about the security of PII in Canada. The law requires organizations to obtain consent when they collect, use, or disclose personal information and to have personal information policies that are clear, understandable, and readily available.

Sarbanes-Oxley Act (SOX)

The Public Company Accounting Reform and Investor Protection Act of 2002, more commonly known as the Sarbanes-Oxley Act (SOX), affects any organization that is publicly traded in the United States. It controls the accounting methods and financial reporting for the organizations and stipulates penalties and even jail time for executive officers.

SESAME

The Secure European System for Applications in a Multi-vendor Environment (SESAME) project extended the Kerberos functionality to fix its weaknesses. SESAME uses both symmetric and asymmetric cryptography to protect interchanged data. SESAME uses a trusted authentication server at each host. SESAME uses Privileged Attribute Certificates (PAC) instead of tickets. It incorporates two certificates: one for authentication and one for defining access privileges. The trusted authentication server is referred to as the Privileged Attribute Server (PAS), and it performs roles similar to those of the KDC in Kerberos. SESAME can be integrated into a Kerberos system.

SANS

The SysAdmin, Audit, Network and Security Institute (SANS) organization provides guidelines for secure software development and sponsors the Global Information Assurance Certification (GIAC). SANS also provides training, performs research, and publishes best practices for cybersecurity, web security, and application security. The SANS website (www.sans.org) publishes a tremendous number of white papers and best practices based on research.

USA PATRIOT Act

The USA PATRIOT Act of 2001 affects law enforcement and intelligence agencies in the United States. Its purpose is to enhance the investigatory tools that law enforcement can use, including e-mail communications, telephone records, Internet communications, medical records, and financial records. When this law was enacted, it amended several other laws, including FISA and the ECPA of 1986. The USA PATRIOT Act does not restrict private citizens' use of investigatory tools, although there are some exceptions—for example, if the private citizen is acting as a government agent (even if not formally employed), if the private citizen conducts a search that would require law enforcement to have a warrant, if the government is aware of the private citizen's search, or if the private citizen is performing a search to help the government.

United States Federal Sentencing Guidelines of 1991:

The United States Federal Sentencing Guidelines of 1991 affect individuals and organizations convicted of felonies and serious (Class A) misdemeanors. They provide guidelines to prevent sentencing disparities that existed across the United States.

White Team

The White team is a group of technicians who referee the encounter between the Red team and the Blue team. Enforcing the rules of engagement might be one of the White team's roles, along with monitoring the responses to the attack by the Blue team and making note of specific approaches employed by the Red team.

Event Logs

The Windows Security log shown in the previous section is just one type of event log. Event logs can include security events, but other types of event logs exist as well. Figure 12-4 shows the Windows System log, which includes operating system events. The highlighted event shows that the NetBIOS service failed to start. Stop messages indicate that something did not work, warnings indicate a lesser issue, and informational events record normal operations. System logs record regular system events, including operating system and service events. Audit and security logs record successful and failed attempts to perform certain actions and require that security professionals specifically configure the actions that are audited. Organizations should establish policies regarding the collection, storage, and security of these logs. In most cases, the logs can be configured to trigger alerts when certain events occur. In addition, these logs must be periodically and systematically reviewed. Cybersecurity analysts should be trained on how to use these logs to detect when incidents have occurred. Having all the information in the world is no help if personnel do not have the appropriate skills to analyze it. For large enterprises, the amount of log data that needs to be analyzed can be quite large. For this reason, many organizations implement a SIEM device, which provides an automated solution for analyzing events and deciding where attention needs to be given. Suppose an intrusion detection system (IDS) logged an attack attempt from a remote IP address. One week later, the attacker successfully compromised the network. In this case, it is likely that no one was reviewing the IDS event logs. Consider another example of insufficient logging and mechanisms for review. Say that an organization did not know its internal financial databases were compromised until the attacker published sensitive portions of the database on several popular attacker websites. 
The organization was unable to determine when, how, or who conducted the attacks but rebuilt, restored, and updated the compromised database server to continue operations. If the organization is unable to determine these specifics, it needs to look at the configuration of its system, audit, and security logs.
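As a toy illustration of the kind of correlation rule a SIEM automates, this Python sketch counts failed-logon events (Windows Security log event ID 4625, as discussed earlier) per account and flags any account that crosses a threshold. The sample data, function name, and threshold are invented for the example:

```python
from collections import Counter

FAILED_LOGON = 4625  # Windows Security log event ID for a failed logon

def accounts_over_threshold(events, threshold=3):
    # events: iterable of (event_id, account) tuples pulled from logs.
    # Count failed logons per account and flag accounts at/over threshold,
    # as a SIEM correlation rule might before raising an alert.
    fails = Counter(acct for eid, acct in events if eid == FAILED_LOGON)
    return sorted(a for a, n in fails.items() if n >= threshold)

log = [(4625, "alice"), (4625, "alice"), (4624, "bob"),
       (4625, "alice"), (4625, "bob")]
print(accounts_over_threshold(log))  # → ['alice']
```

A real SIEM adds time windows, source-IP correlation, and alert routing on top of this basic counting, but the principle of turning raw log volume into a short list needing human attention is the same.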

Account Management Policy

The account management policy helps guide the management of identities and accounts. Identity and account management is vital to any authentication process. As a security professional, you must ensure that your organization has a formal procedure to control the creation and allocation of access credentials or identities. If invalid accounts are allowed to be created and are not disabled, security breaches will occur. Most organizations implement a method to review the identification and authentication process to ensure that user accounts are current. Answering questions such as the following is likely to help in the process: -Is a current list of authorized users and their access maintained and approved? -Are passwords changed at least every 90 days—or earlier, if needed? -Are inactive user accounts disabled after a specified period of time? Any identity management procedure must include processes for creating (provisioning), changing and monitoring (reviewing), and removing users from the access control system (revoking). This is referred to as the access control provisioning life cycle. When initially establishing a user account, new users should be required to provide valid photo identification and should sign a statement regarding password confidentiality. User accounts must be unique. Policies should be in place to standardize the structure of user accounts. For example, all user accounts should be firstname.lastname or some other structure. This ensures that users in an organization will be able to determine a new user's identification, mainly for communication purposes. After creation, user accounts should be monitored to ensure that they remain active. Inactive accounts should be automatically disabled after a certain period of inactivity, based on business requirements. In addition, any termination policy should include formal procedures to ensure that all user accounts are disabled or deleted. 
Elements of proper account management include the following:
- Establish a formal process for establishing, issuing, and closing user accounts.
- Periodically review user accounts.
- Implement a process for tracking access authorization.
- Periodically rescreen personnel in sensitive positions.
- Periodically verify the legitimacy of user accounts.
User account reviews are a vital part of account management. User accounts should be reviewed for conformity with the principle of least privilege. This principle specifies that users should only be given the rights and permissions required to do their job and no more. User account reviews can be performed on an enterprisewide, systemwide, or application-by-application basis. The size of the organization greatly affects which of these methods to use. As part of user account reviews, organizations should determine whether all user accounts are active.
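The inactive-account review described above can be sketched as a short script. The account records, field names, and 90-day threshold are illustrative assumptions; a real review would pull this data from the directory service rather than a literal list.

```python
from datetime import date, timedelta

# Hypothetical account records; in practice these come from the directory
# service (e.g., Active Directory), not a hard-coded list.
accounts = [
    {"name": "alice.smith", "last_login": date(2024, 6, 1), "enabled": True},
    {"name": "bob.jones", "last_login": date(2023, 11, 2), "enabled": True},
]

def accounts_to_disable(accounts, today, max_inactive_days=90):
    """Return names of enabled accounts inactive longer than the threshold."""
    cutoff = today - timedelta(days=max_inactive_days)
    return [a["name"] for a in accounts
            if a["enabled"] and a["last_login"] < cutoff]

print(accounts_to_disable(accounts, today=date(2024, 7, 1)))  # → ['bob.jones']
```

The point is not the code itself but the policy it encodes: the inactivity window is a business decision that the account management policy must specify.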

Active Directory (AD)

The centralized directory database that contains user account information and security for the entire group of computers on a network.

Lessons Learned Report

The first document that should be drafted is a lessons learned report. It briefly lists and discusses what is currently known either about the attack or about the environment that was formerly unknown. This report should be compiled during a formal meeting shortly after the incident and provides valuable information that can be used to drive improvement in the security posture of the organization. It might answer questions such as the following:
- What went right, and what went wrong?
- How can we improve?
- What needs to be changed?
- What was the cost of the incident?

The following measures can help you prevent disclosure of sensitive information from improper storage:

The following measures can help you prevent disclosure of sensitive information from improper storage:
- Ensure that memory locations where this data is stored are locked memory.
- Ensure that ACLs attached to sensitive data are properly configured.
- Implement an appropriate level of encryption.

What to do about CSRF?

The following measures help prevent CSRF vulnerabilities in web applications:
- Include a unique, unpredictable anti-CSRF token in each state-changing request, and reject any request whose token is missing or invalid.
- Using techniques like URLEncode and HTMLEncode, encode all output based on input parameters for special characters to prevent malicious scripts from executing.
- Filter input parameters based on special characters (those that enable malicious scripts to execute).
- Filter output based on input parameters for special characters.

Software Development Life Cycle

The goal of the Software Development Life Cycle is to provide a predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and to ensure that each requirement is met in the final solution. These are the steps in the Software Development Life Cycle:
Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration management/replacement

Public Classification Level (Data Classifications)

The least sensitive data used by the company, whose disclosure would cause the least harm

Update Incident Response Plan

The lessons learned exercise may also uncover flaws in your IR plan. If this is the case, you should update the plan appropriately to reflect the needed procedure changes. When this is complete, ensure that all software and hard copy versions of the plan have been updated so everyone is working from the same document when the next event occurs.

Change Control Process

The lessons learned report may generate a number of changes that should be made to the network infrastructure. All these changes, regardless of how necessary they are, should go through the standard change control process. They should be submitted to the change control board, examined for unforeseen consequences, and studied for proper integration into the current environment. Only after gaining approval should they be implemented. You may find it helpful to create a "fast track" for assessment in your change management system for changes such as these when time is of the essence.

Management Team during IR event

The main role of management is to fully back and support all efforts of the IR team and ensure that this support extends throughout the organization. Certainly the endorsement of the IR process is important as it lends legitimacy to the process, but this support should be consistent and unwavering. Management may at some point interact with media and other outside entities as well.

Documentation/forms

There will be much to document about the crime, the crime scene, and the evidence. There may also be interviews with witnesses. All this requires documentation forms, which should already be printed and available.

wireless intrusion prevention system (WIPS)

These systems can not only alert you when any unknown device is in the area (APs and stations) but can take a number of actions to prevent security issues, including the following:
- Locate a rogue AP by using triangulation when three or more sensors are present.
- Deauthenticate any stations that have connected to an "evil twin."
- Detect denial-of-service attacks.
- Detect man-in-the-middle and client impersonation attacks.

Cameras

The most commonly used camera type for crime scene investigations is digital single-lens reflex (SLR). Photographs submitted as evidence must be of sufficient quality, and digital cameras that have 12-megapixel or greater image sensors and manual exposure settings (in addition to any automatic or programmed exposure modes) are usually suitable for crime scene and evidence photography.

Management

The most important factor in the success of an incident response plan is the support, both verbal and financial (through the budget process), of upper management. Moreover, all other levels of management should fall in line with support of all efforts. Specifically, management's role involves the following:
- Communicate the importance of the incident response plan to all parts of the organization.
- Create agreements that detail the authority of the incident response team to take over business systems if necessary.
- Create decision systems for determining when key systems must be removed from the network.

Mandatory Access Control (MAC)

The most restrictive access control model, typically found in military settings in which security is of supreme importance.

HR

The role of the HR department in incident response involves the following responsibilities:
- Develop job descriptions for those persons who will be hired for positions involved in incident response.
- Create policies and procedures that support the removal of employees found to be engaging in improper or illegal activity. For example, HR should ensure that these activities are spelled out in policies and new hire documents as activities that are punishable by firing. This can help avoid employment disputes when the firing occurs.

Technical Team during IR event

The role of the IT and security teams is to recognize, identify, and react to incidents and provide support in analyzing those incidents when the time comes. IT and security teams provide the human power, skills, and knowledge to act as first responders and to remediate all issues found. For this reason, advanced training is recommended for those operating in IR-related positions.

Scope of Impact

The scope determines the impact and is a function of how widespread the incident is and the potential economic and intangible impacts it could have on the business. Five common factors are used to measure scope.

Scope

The scope of a scan defines what will be scanned and what type of scan will be performed. It defines what areas of the infrastructure will be scanned, and this part of the scope should therefore be driven by where the assets of concern are located. Limiting the scan areas helps ensure that accidental scanning of assets and devices not under the direct control of the company does not occur (because it could cause legal issues). Scope might also include times of day when scanning should not occur.

Segmentation

The segmentation process involves limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments. These segments could be defined at either Layer 3 or Layer 2 of the OSI reference model. When you segment at Layer 3, you are creating barriers based on IP subnets. These are either physical LANs or VLANs. Creating barriers at this level involves deploying access control lists on the routers to prevent traffic from moving from one subnet to another. While it is possible to simply shut down a router interface, in some scenarios that is not advisable because the interface is used to reach more subnets than the one where the threat exists. Segmenting at Layer 2 can be done in several ways:
- You can create VLANs, which create segmentation at both Layer 2 and Layer 3.
- You can create private VLANs (PVLANs), which segment an existing VLAN at Layer 2.
- You can use port security to isolate a device at Layer 2 without removing it from the network.
In some cases, it may be advisable to use segmentation at the perimeter of the network (for example, stopping the outbound communication from an infected machine or blocking inbound traffic).
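A Layer 3 containment decision ultimately reduces to a subnet membership test, the same logic a router ACL applies per packet. A minimal sketch using Python's ipaddress module; the quarantined subnet and host addresses are invented for illustration:

```python
import ipaddress

# Hypothetical quarantined segment where the threat was found.
quarantined = ipaddress.ip_network("10.20.30.0/24")

def should_block(src_ip, dst_ip, quarantined):
    """Block traffic leaving the quarantined subnet for another segment,
    mirroring an ACL applied on the router interface for that subnet."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return src in quarantined and dst not in quarantined

print(should_block("10.20.30.7", "10.20.40.5", quarantined))  # True: crossing out
print(should_block("10.20.30.7", "10.20.30.9", quarantined))  # False: stays inside
```

Note that traffic staying inside the quarantined subnet is untouched, which is exactly why Layer 2 techniques (PVLANs, port security) are needed to isolate a host from its own subnet neighbors.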

Service Interruption

When an application stops functioning for no apparent reason, or when a distributed application cannot seem to communicate, it can be a sign of a compromised application. Any such interruptions that cannot be traced to an application, host, or network failure should be investigated.

Unexpected Output

When the output from a program is not what is normally expected and when dialog boxes are altered or the order in which the boxes are displayed is not correct, it is an indication that the application has been altered. Reports of strange output should be investigated.

Processor Consumption (Common Host-Related Symptoms)

When the processor is very busy with very little or nothing running to generate the activity, it could be a sign that the processor is working on behalf of malicious software. This is one of the key reasons a compromise is typically accompanied by a drop in performance. While Task Manager in Windows is designed to help with this, it has some limitations. For one, when you are attempting to use it, you are typically already in a resource crunch, and it takes a while to open. Then, when it does open, the CPU has often settled back down, and you have no way of knowing what caused the spike. A better tool is Process Explorer, part of the free Sysinternals suite available from Microsoft. Process Explorer shows the top CPU offender in the notification area without requiring you to open Task Manager. Moreover, it lets you examine the CPU history graph and identify what caused spikes in the past, which is not possible with Task Manager alone.

Bandwidth Consumption (Common Network-Related Symptoms)

Whenever bandwidth usage is above normal and there is no known legitimate activity generating the traffic, you should suspect security issues that generate unusual amounts of traffic, such as denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks. For this reason, benchmarks should be created for normal bandwidth usage at various times during the day. Then alerts can be set when activity rises by a specified percentage at those various times. Many free network bandwidth monitoring tools are available. Among them are BitMeter OS, Freemeter Bandwidth Monitor, BandwidthD, and PRTG Bandwidth Monitor. Anomaly-based intrusion detection systems can also "learn" normal traffic patterns and can set off alerts when unusual traffic is detected.
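The benchmark-then-alert approach described above can be sketched in a few lines. The sample values and the 50% alert threshold are illustrative assumptions; a real monitor would feed in live counters from the tools named above:

```python
from statistics import mean

# Hypothetical Mbps samples for the same hour of day over previous weeks,
# forming the baseline for that time window.
baseline_samples = [100, 110, 95, 105]

def over_threshold(current_mbps, baseline_samples, pct=50):
    """Alert when current usage exceeds the baseline mean by pct percent."""
    baseline = mean(baseline_samples)
    return current_mbps > baseline * (1 + pct / 100)

print(over_threshold(180, baseline_samples))  # True  (baseline 102.5, limit 153.75)
print(over_threshold(120, baseline_samples))  # False (within normal variation)
```

Keeping separate baselines per time window matters: traffic that is normal at noon may be a strong DoS indicator at 3 a.m.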

NIDS

Where you place a network intrusion detection system (NIDS) depends on the needs of the organization. To identify malicious traffic coming in from the Internet only, you should place it outside the firewall. On the other hand, placing a NIDS inside the firewall enables the system to identify internal attacks and attacks that get through the firewall. In cases where multiple sensors can be deployed, you might place NIDS devices in both locations. When budget allows, you should place any additional sensors closer to the sensitive systems in the network. When only a single sensor can be placed, all traffic should be funneled through it, regardless of whether it is inside or outside the firewall (see Figure 12-6).

Remediation Plans

While the simplest way to state a remediation plan is to "just fix the issues," sometimes it is not so simple. The following can be complicating issues:
- Time constraints: Inadequate time for remediation activities
- Governance: No mandate for the team to address issues, only identify them
- Financial: Budget including the security test but not any remediation assistance or re-testing
- Training: Poor training preventing a quick remediation
- Poor results quality: Tester not clearly explaining how to replicate issues
To help avoid these roadblocks, consider the following best practices:
- Budget for security testing and remediation
- Streamline the testing and re-testing process
- Train development teams on secure coding practices
- Give information security teams the final call on whether an application can be released
Finally, all remediation plans should have the following characteristics:
- Specific: They should clearly state the danger and desired outcome.
- Measurable: They should have a specific metric.
- Attainable: They should be possible to achieve.
- Relevant: They should be the quickest response to the risks presenting the greatest danger.
- Time-bound: They should have a specific deadline.

Syslog

a protocol that can be used to collect logs from devices and store them in a central location called a Syslog server. Syslog provides a simple framework for log entry generation, storage, and transfer that any OS, security software, or application could use if designed to do so. Many log sources either use Syslog as their native logging format or offer features that allow their logging formats to be converted to Syslog format. Each Syslog message has only three parts. The first part specifies the facility and severity as numeric values. The second part of the message contains a timestamp and the hostname or IP address of the source of the log. The third part is the actual log message.
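The numeric priority (PRI) value that opens a syslog message encodes the facility and severity together as facility × 8 + severity, so both can be recovered with integer arithmetic. A small sketch; the sample message is illustrative, not taken from the original text:

```python
import re

def parse_pri(message):
    """Split a syslog PRI value into its (facility, severity) numeric pair."""
    m = re.match(r"<(\d{1,3})>", message)
    if not m:
        raise ValueError("no PRI field at start of message")
    pri = int(m.group(1))
    return pri // 8, pri % 8  # PRI = facility * 8 + severity

# <34> decodes to facility 4 (security/auth), severity 2 (critical).
print(parse_pri("<34>Oct 11 22:14:15 host su: 'su root' failed"))  # (4, 2)
```

This is why SIEM collectors can filter on severity without parsing the free-text third part of the message.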

sinkhole

a router designed to accept and analyze attack traffic. Sinkholes can be used to do the following:
- Draw traffic away from a target
- Monitor worm traffic
- Monitor other malicious traffic
During an attack, a sinkhole router can be quickly configured to announce a route to the target's IP address that leads to a network or an alternate device where the attack can be safely studied. Moreover, sinkholes can also be used to prevent a compromised host from communicating back to the attacker. Finally, they can be used to prevent a worm-infected system from infecting other systems.

credentialed scan

a scan that is performed by someone with administrative rights to the host being scanned

Screened Host Firewalls

a screened host is a firewall that sits between the final router and the internal network. The advantages of this solution include the following:
- It offers more flexibility than a dual-homed firewall because rules rather than an interface create the separation.
- There are potential cost savings.
The disadvantages include the following:
- The configuration is more complex.
- It is easier to violate the policies than with dual-homed firewalls.

Jump Box

a server that is used to access devices that have been placed in a secure network zone such as a DMZ. The server would span the two networks to provide access from an administrative desktop to the managed device. SSH tunneling is common as the de facto method of access. Administrators can use multiple zone-specific jump boxes to access what they need, and lateral access between servers is prevented by whitelists. This helps prevent the types of breaches suffered by both Target and Home Depot, in which lateral access was used to move from one compromised device to other servers.

Network Access Control (NAC)

a service that goes beyond authentication of the user and includes examination of the state of the computer the user is introducing to the network when making a remote access or VPN connection to the network. The Cisco world calls these services Network Admission Control (NAC), and the Microsoft world calls them Network Access Protection (NAP). Regardless of the term used, the goals of the features are the same: to examine all devices requesting network access for malware, missing security updates, and any other security issues the devices could potentially introduce to the network.

Maintenance Hooks

a set of instructions built into the code that allows someone who knows about the "backdoor" to use the instructions to connect and then view and edit the code without using the normal access controls. In many cases, they are used to make it easier for the vendor to provide support to the customer. In other cases, they are meant to assist in testing and tracking the activities of the product and are not removed later.

NFC

a short-range type of wireless transmission; although its short range makes it difficult to capture, interception is still possible. Moreover, these transmissions are typically encrypted. In any case, some steps can be taken to secure these payment mechanisms:
- Lock the mobile device. Devices have to be turned on or unlocked before they can read any NFC tags.
- Turn off NFC when not in use.
- For passive tags, use an RFID/NFC-blocking device.
- Scan mobile devices for unwanted apps, spyware, and other threats that may siphon information from your mobile payment apps.

virtual private network (VPN)

allows external devices to access an internal network by creating a tunnel over the Internet. Traffic that passes through the VPN tunnel is encrypted and protected. Figure 6-16 shows an example of a network with a VPN. In a VPN deployment, only computers that have the VPN client and are able to authenticate are able to connect to the internal resources through the VPN concentrator.

Logical (Technical) Controls

are software or hardware components used to restrict access. Specific examples of logical controls include firewalls, IDSs, IPSs, encryption, authentication systems, protocols, auditing and monitoring tools, biometrics, smart cards, and passwords. Although auditing and monitoring are logical controls and are often listed together, they are actually two different controls. Auditing is a one-time or periodic event to evaluate security. Monitoring is an ongoing activity that examines either a system or users. Network access, remote access, application access, and computer or device access all fit into this category.

Critical Resources

are those resources that are most vital to the organization's operation and should be restored within minutes or hours of the disaster or disruptive event.

Time-of-Check/Time-of-Use Attacks

attacks attempt to take advantage of the sequence of events that occur as a system completes common tasks. They rely on knowledge of the dependencies present when a specific series of events occur in multiprocessing systems. By attempting to insert himself between events and introduce changes, a hacker can gain control of the result. A term often used as a synonym for a time-of-check/time-of-use attack is a race condition, although this is actually a different attack. In a race condition attack, the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome.

Rules of Engagement

define how penetration testing should occur. These issues should be settled and agreed upon before any testing begins. The following are some of the key issues to be settled:
- Timing: The timeline for the test must be established. The start and end times will be included in the scope of the project, but creating the timeline does not mean it cannot change as reality dictates; rather, it means that you have a framework to work from. This also includes the times of day the testing will occur.
- Scope: The scope of the test includes the timeline and also includes a list of all devices that are included in the test, as well as a description of all testing methodologies to be used. The output of this process should be a set of documents provided to the tester that include the following:
  + A network diagram depicting all network segments in scope for the test
  + A data flow diagram
  + A list of services and ports exposed at the perimeter
  + Details of how authorized users access the network
  + A list of all network segments that have been isolated from the test to reduce scope
- Authorization: Formal authorization should be given to the tester to perform the test, with written approval by upper management. Without this, the tester could be liable for attempting to compromise the network.
- Exploitation: Before the test occurs, it should be determined whether exploits will be attempted if vulnerable systems are found. This is intentionally included in some cases so the incident response plan can be tested.
- Communication: Another of the issues in the rules of engagement is how communications are to occur between the tester and the stakeholders as the process unfolds. While regular meetings should be scheduled, there also must be a line of communication established for times when issues arise and changes may need to be made.
- Reporting: The type of reports to be generated is determined during the establishment of the rules of engagement. This includes the timing of reports, the format, and the specific information to be included. While postponing of reports should be allowed, it should not be allowed to become chronic, and the rules of engagement may include both incentives and penalties for the timeliness of reports.

Deterrent controls

deter or discourage an attacker. Via deterrent controls, attacks can be discovered early in the process. Deterrent controls often trigger preventive and corrective controls. Examples of deterrent controls include user identification and authentication, fences, lighting, and organizational security policies, such as a nondisclosure agreement (NDA).

Trademark

ensures that a symbol, a sound, or an expression that identifies a product or an organization is protected from being used by another organization. A trademark allows a product or an organization to be recognized by the general public. If a trademark is not registered, an organization should use a capital TM. If the trademark is registered, an organization should use a capital R that is encircled.

Patent

granted to an individual or a company to cover an invention that is described in the patent's application. When the patent is granted, only the patent owner can make, use, or sell the invention for a period of time, usually 20 years. Although a patent is considered one of the strongest intellectual property protections available, the invention becomes public domain after the patent expires, thereby allowing any entity to manufacture and sell the product.

Security Content Automation Protocol (SCAP)

is a standard that the security automation community uses to enumerate software flaws and configuration issues. It standardizes the nomenclature and formats used. A vendor of security automation products can obtain a validation against SCAP, demonstrating that its products will interoperate with other scanners and express the scan results in a standardized way.

RADIUS

is an IETF standard for AAA. As with TACACS+, it follows a client/server model in which the client initiates the requests to the server. RADIUS is the protocol of choice for network access AAA, and it's time to get very familiar with it. If you connect to a secure wireless network regularly, RADIUS is most likely being used between the wireless device and the AAA server. Why? Because RADIUS is the transport protocol for Extensible Authentication Protocol (EAP), along with many other authentication protocols. Originally, RADIUS was used to extend the authentication from the Layer 2 Point-to-Point Protocol (PPP) used between the end user and the network access server (NAS), carrying that authentication traffic from the NAS to the AAA server performing the authentication. This allowed a Layer 2 authentication protocol to be extended across Layer 3 boundaries to a centralized authentication server. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary.

Penetration Test

is designed to simulate an attack on a system, a network, or an application. Its value lies in its potential to discover security holes that may have gone unnoticed. It differs from a vulnerability test in that it attempts to exploit vulnerabilities rather than simply identify them. Nothing places the focus on a software bug like the exposure of critical data as a result of the bug.

Role-based access control (RBAC)

commonly used in networks to simplify the process of assigning new users the permissions required to perform a job role. In this arrangement, users are organized by job role into security groups, which are then granted the rights and permissions required to perform that job. This process is pictured in Figure 11-2. The role is implemented as a security group possessing the required rights and permissions, which are inherited by all security group or role members. This is not a perfect solution, however, and it carries several security issues. First, RBAC is only as successful as the organization policies designed to support it. Poor policies can result in the proliferation of unnecessary roles, creating an administrative nightmare for the person managing user access. This can lead to mistakes that reduce rather than enhance access security. A related issue is that those managing user access may have an incomplete understanding of the process, and this can lead to a serious reduction in security. There can be additional costs to the organization to ensure proper training of these individuals. The key to making RBAC successful is proper alignment with policies and proper training of those implementing and maintaining the system.
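The group-based inheritance described above can be sketched in a few lines. The role names, permissions, and user assignments below are invented for illustration:

```python
# Hypothetical role definitions: each role (security group) owns permissions.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "read_tickets"},
    "dba": {"read_db", "write_db"},
}

# Users are assigned roles, never individual permissions.
USER_ROLES = {
    "alice": {"helpdesk"},
    "bob": {"helpdesk", "dba"},
}

def permissions_for(user):
    """A user's effective permissions are the union of their roles' permissions."""
    roles = USER_ROLES.get(user, set())
    return set().union(*(ROLE_PERMISSIONS[r] for r in roles))

print(sorted(permissions_for("alice")))  # ['read_tickets', 'reset_password']
```

The administrative win is that onboarding a new help desk hire is one role assignment instead of many individual grants; the risk noted above is that sloppy role definitions silently over-grant everyone in the group.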

Unknown threats

on the other hand, are lurking threats that may have been identified but for which no signatures are available. We are not completely powerless against these threats. Many security products attempt to locate these threats through static and dynamic file analysis. This may occur in a sandboxed environment, which protects the system that is performing the analysis. In some cases, unknown threats are really old threats that have been recycled. Because security products have limited memory with regard to threat signatures, vendors must choose the most current attack signatures to include. Therefore, old attack signatures may be missing in newer products, which effectively allows old known threats to reenter the unknown category.

Zero Day

an attack that exploits a previously unknown vulnerability, one for which no patch or signature yet exists.

ACL (Access Control List)

ordered sets of rules that control the traffic that is permitted or denied to use a path through a router. These rules can operate at Layer 3, making these decisions on the basis of IP addresses, or at Layer 4, when only certain types of traffic are allowed. When this is done, the ACL typically references a port number of the service or application that is allowed or denied. Access lists operate as a series of if/then statements: If a given condition is met, then a given action is taken. If the condition isn't met, nothing happens, and the next statement is evaluated. Once the lists are built, they can be applied to either inbound or outbound traffic on any interface. Applying an access list causes the router to analyze every packet crossing that interface in the specified direction and to take the appropriate action.
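The ordered, first-match if/then behavior described above can be modeled directly. The rules below are invented for illustration, including the implicit deny that ends every real ACL:

```python
# Each rule: (action, source-prefix, destination port or None for any).
ACL = [
    ("permit", "10.1.1.", 443),  # allow HTTPS from the 10.1.1.0/24 subnet
    ("deny",   "10.1.",   None), # block everything else from 10.1.0.0/16
    ("permit", "",        80),   # allow HTTP from anywhere
]

def evaluate(src_ip, dst_port, acl=ACL):
    """Walk the list top-down; the first matching rule wins."""
    for action, prefix, port in acl:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("10.1.1.5", 443))  # permit (matches the first rule)
print(evaluate("10.1.2.5", 80))   # deny   (second rule matches before the third)
print(evaluate("192.0.2.9", 22))  # deny   (no rule matches: implicit deny)
```

The second example is the key lesson: rule order matters, because a broad deny placed above a narrower permit shadows it.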

non-credentialed scan

performed by someone lacking administrative rights.

Preventive controls

prevent an attack from occurring. Examples of preventive controls include locks, badges, biometric systems, encryption, intrusion prevention systems (IPS), antivirus software, personnel security, security guards, passwords, and security awareness training.

The Payment Card Industry Data Security Standard (PCI-DSS), which governs credit card transaction handlers, requires ___ scans in accordance with PCI-DSS Requirement 11.2.

quarterly

Red Team

team acts as the attacking force. It typically carries out penetration tests by following a well-established process of gathering information about the network, scanning the network for vulnerabilities, and then attempting to take advantage of the vulnerabilities. The actions they can take are established ahead of time in the rules of engagement. Often these individuals are third-party contractors with no prior knowledge of the network. This helps them simulate attacks that are not inside jobs.

economic impact of an incident is driven mainly by

the value of the assets involved. Determining those values can be difficult, especially for intangible assets such as plans, designs, and recipes. Tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. The value of an asset should be considered with respect to the asset owner's view. The following considerations can be used to determine an asset's value:
- Value to owner
- Work required to develop or obtain the asset
- Costs to maintain the asset
- Damage that would result if the asset were lost
- Cost that competitors would pay for the asset
- Penalties that would result if the asset were lost

Known Threats

threats that are common knowledge and easily identified through signatures by antivirus and IDS engines or through domain reputation blacklists.
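Signature-based identification of a known threat amounts to a lookup against a known-bad set. A toy sketch using file hashes as the signature; the "known-bad" entry is a fabricated placeholder, since real engines consume curated signature feeds:

```python
import hashlib

# Hypothetical blacklist of known-malicious SHA-256 digests (placeholder entry).
KNOWN_BAD = {hashlib.sha256(b"EICAR-like test payload").hexdigest()}

def is_known_threat(payload: bytes) -> bool:
    """A 'known threat' is one whose signature (here, a hash) is on file."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

print(is_known_threat(b"EICAR-like test payload"))  # True
print(is_known_threat(b"benign content"))           # False
```

The limitation is also visible here: change one byte of the payload and the hash no longer matches, which is exactly why unknown and recycled threats evade signature-only engines.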

Managing access to applications

using identities has become much more challenging as many organizations move to Software as a Service (SaaS) solutions. Many SaaS providers have not developed the required sophistication to integrate their platforms with identity services that exist behind the corporate firewall. In some cases, customers use these providers' proprietary tools. When an enterprise deals with multiple SaaS providers, these tools can introduce confusion and error. One solution could be to use third-party tools that can connect with many different types of SaaS applications and make identity management easier, less confusing, and less error prone.

Probably the two most common server types attacked are __________ and _________ servers.

web servers and database servers.

GPO (Group Policy Object)

the object in which a set of Group Policy settings is stored. Using GPOs reduces the administrative burden and costs associated with managing these resources.

RFC 3195

which was designed specifically to improve the security of Syslog. Implementations based on this standard can support log confidentiality, integrity, and availability through several features, including reliable log delivery, transmission confidentiality protection, and transmission integrity protection and authentication.

Firewall

(computing) A security system consisting of a combination of hardware and software that limits the exposure of a computer or computer network to attack from crackers. The network device that is perhaps most connected with the idea of security is the firewall. Firewalls can be software programs that are installed over server or client operating systems or appliances that have their own operating system. In either case, the job of a firewall is to inspect and control the type of traffic allowed. Firewall types are discussed in Lesson 1. Table 14-1 lists the pros and cons of the various types of firewalls: -Packet-filtering firewalls: Offer the best performance, but cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake. -Circuit-level proxies: Secure addresses from exposure, support a multiprotocol environment, and allow for comprehensive logging, with only a slight impact on performance; however, they may require a client on the computer (SOCKS proxy) and provide no application layer security. -Application-level proxies: Understand the details of the communication process at Layer 7 for the application, but have a big impact on performance. -Kernel proxy firewalls: Inspect the packet at every layer of the OSI model without the performance impact of application layer proxies. Although each scenario is unique, Table 14-2 shows the typical placement of each firewall type: -Packet-filtering firewalls: Located between subnets, which must be secured. -Circuit-level proxies: At the network edge. -Application-level proxies: Close to the application server it is protecting. -Kernel proxy firewalls: Close to the systems it is protecting.

What are some things you can do to mitigate wireless attacks?

-Disable the broadcast of the SSID -Create a MAC address filter that allows only known devices -Use an authentication process that provides confidentiality and integrity, such as WPA and WPA2 -Deploy 802.1X port-based security -Set the radio transmission strength to the lowest level that still services the required area

netstat output (possible states)

-LISTEN: Represents waiting for a connection request from any remote TCP connection and port. -SYN-SENT: Represents waiting for a matching connection request after having sent a connection request. -SYN-RECEIVED: Represents waiting for a confirming connection request acknowledgment after having both received and sent a connection request. -ESTABLISHED: Represents an open connection, and data received can be delivered to the user. This is the normal state for the data transfer phase of the connection. -FIN-WAIT-1: Represents waiting for a connection termination request from the remote TCP connection or an acknowledgment of the connection termination request previously sent. -FIN-WAIT-2: Represents waiting for a connection termination request from the remote TCP connection. -CLOSE-WAIT: Represents waiting for a connection termination request from the local user. -CLOSING: Represents waiting for a connection termination request acknowledgment from the remote TCP connection. -LAST-ACK: Represents waiting for an acknowledgment of the connection termination request previously sent to the remote TCP connection (which includes an acknowledgment of its connection termination request).
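
When reviewing netstat output, an analyst typically tallies how many connections sit in each state (for example, a flood of SYN-RECEIVED entries can indicate a SYN flood in progress). A minimal sketch, using invented netstat-style sample lines:

```python
# A sketch: tallying TCP connection states from netstat-style output.
# The sample lines are illustrative, not from a real capture.
from collections import Counter

netstat_output = """\
tcp  0  0 10.0.0.5:443   203.0.113.7:52114  ESTABLISHED
tcp  0  0 10.0.0.5:22    0.0.0.0:*          LISTEN
tcp  0  0 10.0.0.5:443   198.51.100.9:40322 SYN-RECEIVED
tcp  0  0 10.0.0.5:8080  203.0.113.7:52990  ESTABLISHED
"""

# The state is the last whitespace-separated field on each line.
states = Counter(line.split()[-1] for line in netstat_output.splitlines())
print(states)   # Counter({'ESTABLISHED': 2, 'LISTEN': 1, 'SYN-RECEIVED': 1})
```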

SOA

A Start of Authority record, which represents a DNS server that is authoritative for a DNS namespace

packet-filtering firewall

A firewall that examines each packet and determines whether to let it pass. To make this decision, it examines the source address, the destination address, and other data.
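
The decision logic can be sketched as a first-match rule walk with an implicit deny at the end. This is a simplified illustration, not any vendor's implementation; the rules and addresses are hypothetical.

```python
# A sketch of packet-filtering logic: each rule matches on addresses, ports,
# and protocol only (no connection state); first match wins; default deny.

def matches(rule, packet):
    # A rule field of None acts as a wildcard ("any").
    return all(rule.get(k) in (None, packet[k])
               for k in ("src", "dst", "dport", "proto"))

def filter_packet(rules, packet):
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return "deny"                      # implicit deny at the end of the ACL

rules = [
    {"proto": "tcp", "dst": "10.0.0.5", "dport": 443, "src": None, "action": "permit"},
    {"proto": "tcp", "dst": "10.0.0.5", "dport": 23,  "src": None, "action": "deny"},
]

print(filter_packet(rules, {"src": "203.0.113.7", "dst": "10.0.0.5",
                            "dport": 443, "proto": "tcp"}))   # permit
print(filter_packet(rules, {"src": "203.0.113.7", "dst": "10.0.0.5",
                            "dport": 23, "proto": "tcp"}))    # deny
```

Because no state is kept, a filter like this cannot tell a legitimate reply from a spoofed packet, which is one of the weaknesses listed for this firewall type.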

kernel proxy firewall

A fifth-generation firewall that inspects a packet at every layer of the OSI model but does not introduce the performance hit of an application-layer firewall because it does this at the kernel layer.

three-legged firewall

A firewall configuration that has three interfaces: one connected to the untrusted network, one to the internal network, and the last to a part of the network called a demilitarized zone (DMZ).

active scanning

A method used by wireless stations to detect the presence of an access point. In active scanning, the station issues a probe to each channel in its frequency range and waits for the access point to respond.

WPA2-Personal

A modern security type for wireless networks that uses a pre-shared key for authentication. Uses CCMP, AES

Nmap

A network utility designed to scan a network and create a map. Frequently used as a vulnerability scanner.

IDS (Intrusion Detection System)

A software and/ or hardware system that scans, audits, and monitors the security infrastructure for signs of attacks in progress.

bastion host

A strongly protected computer that is in a network protected by a firewall (or is part of a firewall) and is the only host (or one of only a few hosts) in the network that can be directly accessed from networks on the other side of the firewall. As you learned in Lesson 1, a bastion host may or may not be a firewall. The term actually refers to the position of any device. If the device is exposed directly to the Internet or to any untrusted network while screening the rest of the network from exposure, it is a bastion host. Some other examples of bastion hosts are FTP servers, DNS servers, web servers, and e-mail servers. In any case where a host must be publicly accessible from the Internet, the device must be treated as a bastion host, and you should take the following measures to protect these machines: -Disable or remove all unnecessary services, protocols, programs, and network ports -Use authentication services separate from those of the trusted hosts within the network -Remove as many utilities and system configuration tools as is practical -Install all appropriate service packs, hotfixes, and patches -Encrypt any local user account and password databases A bastion host can be located in the following locations: -Behind the exterior and interior firewalls: Locating it here and keeping it separate from the interior network complicates the configuration but is safest. -Behind the exterior firewall only: Perhaps the most common location for a bastion host, separated from the internal network; this is a less complicated configuration. Figure 14-4 (Bastion Host in a Screened Subnet) shows an example in which there are two bastion hosts: the FTP/WWW server and the SMTP/DNS server. -As both the exterior firewall and a bastion host: This setup exposes the host to the most danger.

Protocol Analysis

A subset of packet analysis that involves examining the information in the header of a packet

NetFlow Analysis

A technology developed by Cisco that is supported by all major vendors and can be used to collect and subsequently export IP traffic accounting information. The traffic information is exported using UDP packets to a NetFlow analyzer, which can organize the information in useful ways. It exports records of individual one-way transmissions called flows. All packets that are part of the same flow share the following characteristics: -Source MAC address -Destination MAC address -IP source address -IP destination address -Source port -Destination port -Layer 3 protocol type -Class of service -Router or switch interface
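
Conceptually, a flow exporter buckets every packet that shares the same key fields into one one-way flow record. A minimal sketch of that grouping, keyed on a subset of the fields listed above (the packets are invented for illustration):

```python
# A sketch of flow grouping: packets sharing the same key tuple belong to
# the same one-way flow; the exporter accumulates packet and byte counts.
from collections import defaultdict

packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8", "src_port": 51000, "dst_port": 53, "proto": "udp", "bytes": 74},
    {"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8", "src_port": 51000, "dst_port": 53, "proto": "udp", "bytes": 90},
    {"src_ip": "8.8.8.8", "dst_ip": "10.0.0.5", "src_port": 53, "dst_port": 51000, "proto": "udp", "bytes": 142},
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for p in packets:
    key = (p["src_ip"], p["dst_ip"], p["src_port"], p["dst_port"], p["proto"])
    flows[key]["packets"] += 1
    flows[key]["bytes"] += p["bytes"]

# Flows are one-way, so the DNS query and its reply form two separate flows.
print(len(flows))   # 2
```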

Syslog server

A type of centrally managed server used for collecting system messages from networked devices

NIDS (network-based intrusion detection system)

A type of intrusion detection that protects an entire network and is situated at the edge of the network or in a network's protective perimeter, known as the DMZ (demilitarized zone). Here, it can detect many types of suspicious traffic patterns. Its NIC must be operating in promiscuous mode.

HIDS (host-based intrusion detection system)

A type of intrusion detection that runs on a single computer, such as a client or server, to alert about attacks against that one host.

active vulnerability scanner

An application that scans networks to identify exposed usernames and groups, open network shares, configuration problems, and other vulnerabilities in servers. (Example tools include Nessus and Microsoft Baseline Security Analyzer).

phishing attack

An attack that uses deception to fraudulently acquire sensitive personal information by masquerading as an official-looking e-mail.

Signature based IDS

An intrusion detection system that maintains a database of signatures that might signal a particular type of attack and compares incoming traffic to those signatures. Two main types: -Pattern matching: The IDS compares traffic to a database of attack patterns. The IDS carries out specific steps when it detects traffic that matches an attack pattern. -Stateful matching: The IDS records the initial operating system state. Any changes to the system state that specifically violate the defined rules result in an alert or notification being sent.

Point-in-time analysis

Captures data over a specified period of time and thus provides a snapshot of the situation at that point in time or across the specified time period.

Heuristic Analysis

Determines the susceptibility of a system to a particular threat/risk using decision rules or weighing methods. It is often utilized by antivirus software to identify threats that cannot be discovered with signature analysis because the threat is either too new to have been analyzed (called a zero-day threat) or it is a multipronged attack that is constructed in such a way that existing signatures do not identify the threat.

Topology Discovery

Entails determining the devices in the network, their connectivity relationships to one another, and the internal IP addressing scheme

Vulnerability Scanner

Generic term for a range of products that look for vulnerabilities in networks or systems. Two types: passive and active

stateful firewall

Inspects traffic leaving the inside network as it goes out to the Internet. Then, when returning traffic from the same session (as identified by source and destination IP addresses and port numbers) attempts to enter the inside network, the stateful firewall permits that traffic. The process of inspecting traffic to identify unique sessions is called stateful inspection.
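
The session-tracking idea can be sketched as a table keyed on the addresses and ports of outbound traffic; return traffic is permitted only when it matches an existing entry with the endpoints reversed. This is a simplified illustration with hypothetical addresses, not a real firewall implementation.

```python
# A sketch of stateful inspection: outbound traffic creates a session entry;
# inbound traffic is permitted only if it is the reverse of a known session.

sessions = set()

def outbound(src_ip, src_port, dst_ip, dst_port):
    # Record the session as the inside host opens it.
    sessions.add((src_ip, src_port, dst_ip, dst_port))
    return "permit"

def inbound(src_ip, src_port, dst_ip, dst_port):
    # Return traffic matches a recorded session with the endpoints swapped.
    if (dst_ip, dst_port, src_ip, src_port) in sessions:
        return "permit"
    return "deny"

outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(inbound("93.184.216.34", 443, "10.0.0.5", 51000))   # permit (return traffic)
print(inbound("198.51.100.9", 443, "10.0.0.5", 51000))    # deny (no session)
```

Real stateful firewalls also track TCP state transitions and time sessions out, but the reversed-tuple lookup is the core of stateful inspection.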

Passive Vulnerability Scanner (PVS)

Monitors the network in real-time, continuously looking for new hosts, applications and new vulnerabilities without requiring the need for active scanning. (Tools include Tenable PVS and NetScanTools Pro)

Host and guest vulnerabilities

Operating systems on both the host and the guest systems can suffer the same issues as those on all physical devices.

Firewall Types

Packet-filtering, Stateful, Proxy

Router/Firewall ACL review

Probe your own device to see what information would be available to an attacker. Both the operating system and firmware present on the device should be checked for any missing updates and patches, which are frequently forgotten on these devices.

Rogue Access Points

Rogue access points are APs that you do not control and manage. There are two types: those that are connected to wired infrastructure and those that are not. The ones that are connected to a wired network present a danger to your wired and wireless network. They may have been placed there by your own users without your knowledge, or they may have been purposefully put there by a hacker to gain access to the wired network. In either case, they allow access to your wired network.Wireless intrusion prevention system (WIPS) devices are usually used to locate rogue access points and alert administrators of their presence

CSMA/CA

Short for carrier sense multiple access with collision avoidance. It is used as a method for multiple hosts to communicate on a wireless network. The steps in CSMA/CA are as follows: Step 1. Station A has a frame to send to Station B. It checks for traffic in two ways. First, it performs carrier sense, which means it listens to see whether any radio waves are being received on its transmitter. Then, after the transmission is sent, it continues to monitor the network for possible collisions. Step 2. If traffic is being transmitted, Station A decrements an internal countdown mechanism called the random back-off algorithm. This counter will have started counting down after the last time this station was allowed to transmit. All stations count down their own individual timers. When a station's timer expires, it is allowed to send. Step 3. If Station A performs carrier sense, there is no traffic, and its timer hits zero, it sends the frame. Step 4. The frame goes to the AP. Step 5. The AP sends an acknowledgment back to Station A. Until Station A receives that acknowledgment, all other stations must remain silent. For each frame the AP needs to relay, it must wait its turn to send, using the same mechanism as the stations. Step 6. When its turn comes up in the cache queue, the frame from Station A is relayed to Station B. Step 7. Station B sends an acknowledgment back to the AP. Until the AP receives that acknowledgment, all other stations must remain silent
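
The random back-off step (Step 2) can be illustrated with a toy simulation: every station counts its own timer down in lockstep, and the station whose timer expires first wins the right to transmit. Timer values are fixed here so the outcome is deterministic; real stations pick them randomly.

```python
# A toy model of the CSMA/CA random back-off: all stations decrement their
# individual timers together; the first timer to reach zero transmits.

def next_to_transmit(backoff):
    """backoff maps station name -> remaining back-off slots."""
    while True:
        for station in backoff:
            if backoff[station] == 0:
                return station
        for station in backoff:
            backoff[station] -= 1     # all stations count down in lockstep

print(next_to_transmit({"A": 3, "B": 1, "C": 5}))   # B
```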

Service Discovery

The process of sending a query to other devices on the network to identify their capabilities.

Environmental Reconnaissance

The collection of information that enhances our understanding of the environment

Wardriving

Wardriving is the process of riding around with a wireless device connected to a high-power antenna, searching for WLANs. It could be for the purpose of obtaining free Internet access, or it could be to identify any open networks that are vulnerable to attack. While hiding the SSID may deter some, anyone who knows how to use a wireless sniffer could figure out the SSID in two minutes, so there really is no way to stop wardriving.

Countermeasure (Control) Selection

The most common reason for choosing a safeguard is the cost-effectiveness of the safeguard or control. Planning, designing, implementing, and maintenance costs need to be included in determining the total cost of a safeguard. To calculate a cost-benefit analysis, use the following equation: (ALE before safeguard) - (ALE after safeguard) - (Annual cost of safeguard) = Safeguard value To complete this equation, you have to know the revised ALE after the safeguard is implemented. Implementing a safeguard can improve the ARO but cannot completely do away with it. In the example mentioned earlier, in the "Quantitative Risk Analysis" section, the ALE for the event is $2500. Let's assume that implementing the safeguard reduces the ARO to 10%, so the ALE after the safeguard is calculated as $5000 × 10%, or $500. You could then calculate the safeguard value for a control that costs $1000 as follows: $2500 - $500 - $1000 = $1000 Knowing the corrected ARO after the safeguard is implemented is necessary for determining the safeguard value. A legal liability exists if the cost of a safeguard is less than the estimated loss that would occur if the threat were exploited. Maintenance costs of safeguards are not often fully considered during this process. Organizations should carefully research the costs of maintaining safeguards. New staff or extensive staff training often must occur to properly maintain a new safeguard. In addition, the cost of the labor involved must be determined. So the cost of a safeguard must include the actual cost to implement it plus any training costs, testing costs, labor costs, and so on. Some of these costs might be hard to identify, but a thorough risk analysis will account for them.
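
The cost-benefit equation above is simple enough to express directly; the figures below match the worked example (ALE before = $2500, safeguard cuts the ARO to 10% of a $5000 SLE, safeguard costs $1000 per year):

```python
# Safeguard value = (ALE before) - (ALE after) - (annual cost of safeguard)

def safeguard_value(ale_before, ale_after, annual_cost):
    return ale_before - ale_after - annual_cost

ale_after = 5000 * 0.10                          # SLE x revised ARO = $500
print(safeguard_value(2500, ale_after, 1000))    # 1000.0
```

A positive result means the safeguard saves more than it costs; a negative result argues against deploying it on cost grounds alone.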

network mapping

The process of discovering and identifying the devices on a network. Zenmap is an example of this

DNS zone transfer

The process of replicating the databases containing the DNS data across a set of DNS servers. You can limit which servers are allowed to perform zone transfers

Inadequate VM isolation

This type of attack enables VMs to communicate improperly and can be used to gain access to multiple guests and possibly the host.

Unsecured VM migration

This type of attack occurs when a VM is migrated to a new host and security policies and configuration are not updated to reflect the change.

NMAP NULL scan (nmap -sN)

This type of scan is a series of TCP packets that contain a sequence number of 0 and no set flags. Because it does not contain any set flags, it can sometimes penetrate firewalls and edge routers that filter incoming packets with particular flags. Two responses are possible: -No response: The port is open on the target -RST: The port is closed on the target

Nmap FIN scan (nmap -sF -V)

This type of scan sets the FIN bit. When this packet is sent, two responses are possible. -No response: The port is open on the target. -RST/ACK: The port is closed on the target

XMAS Scan (nmap -sX)

This type of scan sets the FIN, PSH, and URG flags. When the packet is sent, two responses are possible: -No response: The port is open on the target -RST: The port is closed on the target
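
The three stealth scans above differ only in which TCP flag bits are set in the probe. A short sketch computing the resulting flag byte for each, using the standard TCP header bit values:

```python
# TCP flag bit values from the TCP header (low 6 control bits).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

scans = {
    "NULL (-sN)": 0x00,             # no flags set at all
    "FIN (-sF)":  FIN,              # only FIN set
    "XMAS (-sX)": FIN | PSH | URG,  # the "lit up like a Christmas tree" combo
}

for name, flags in scans.items():
    print(f"{name}: 0x{flags:02x}")
# NULL (-sN): 0x00
# FIN (-sF): 0x01
# XMAS (-sX): 0x29
```

All three rely on the same RFC 793 behavior: a closed port answers an unexpected segment with RST, while an open port stays silent.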

DNS Harvesting

Using OSINT (open source intelligence) to gather information about a domain.

Warchalking

Warchalking is a practice that used to typically accompany wardriving. Once the wardriver located a WLAN, she would indicate in chalk on the sidewalk or on the building the SSID and the types of security used on the network. This activity has gone mostly online now, and there are many sites dedicated to compiling lists of found WLANs and their locations. Just as there is no way to prevent wardriving, there is no way to stop warchalking either.

circuit level proxy firewall

Creates a circuit between the client and server and provides protection at the session layer. It knows the source and destination addresses and makes access decisions based on this type of header information. It requires that protocols follow RFCs. A common example is SOCKS, which provides a secure channel between two computers.

Social Engineering

determines the level of security awareness possessed by the user

Anomaly Analysis

focuses on identifying something that is unusual or abnormal. Depending on the type of scan or on the information present in the captured traffic, this could be any of the following: -Traffic captured at times when there usually is little or no traffic -Traffic of a protocol type not normally found in the network -Unusually high levels of traffic to or from a system

trend analysis

focuses on the long-term direction in the increase or decrease in a particular type of traffic or in a particular behavior in the network. Some examples include the following: -An increase in the use of a SQL server, indicating the need to increase resources on the server -A cessation in traffic bound for a server providing legacy services, indicating a need to decommission the server -An increase in password resets, indicating a need to revise the password policy

Availability analysis

focuses on the up/down status of various devices in the network. Typically stated as a percentage of uptime, it is often used as a benchmark in service level agreements. For example, 99.9% uptime for a year would indicate that the device in question must be down for no more than 8 hours, 45 minutes, and 57 seconds over the entire year.
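
The downtime budget follows directly from the uptime percentage. Using a 365.25-day year, 99.9% uptime ("three nines") works out to roughly 8 h 45 min 57 s, while 99% allows over 87 hours:

```python
# Allowed annual downtime for a given uptime percentage (365.25-day year).

def downtime_budget(uptime_pct, days=365.25):
    seconds = (1 - uptime_pct / 100) * days * 24 * 3600
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return int(h), int(m), round(s, 1)

print(downtime_budget(99.9))   # (8, 45, 57.6)
print(downtime_budget(99.0))   # (87, 39, 36.0)
```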

Host Scanning

involves identifying the live hosts on a network or in a domain namespace. Nmap and other scanning tools (such as ScanLine and SuperScan) can be used for this.

Spear Phishing Attack

phishing attacks that use specific personal information

packet analyzer

program used for appropriate purposes to read, record, and display all of the wireless packets that are broadcast in the vicinity of the computer running the analyzer

log reviews

should be conducted to ensure privileged users are not abusing their privileges and to monitor any credible events

Evil Twin Attack

the attacker is in the vicinity with a Wi-Fi-enabled computer and a separate connection to the Internet. Using a hotspotter (a device that detects wireless networks and provides information on them), the attacker simulates a wireless access point with the same wireless network name, or SSID, as the one that authorized users expect. If the signal is strong enough, users will connect to the attacker's system instead of the real access point.

What are the main drawbacks from anomaly based IDS/IPS?

the large number of false positives typically generated by these systems

application-based IDS

this is a specialized IDS that analyzes transaction log files for a single application. This type of IDS is usually provided as part of the application or can be purchased as an add-on.

Anomaly-based IDS (also referred to as behavior-based)

this type of IDS analyzes traffic and compares it to normal traffic to determine whether said traffic is a threat. The problem with this type of system is that any traffic outside expected norms is reported, resulting in more false positives than you see with signature-based systems. There are three main types of anomaly-based IDSs: -Statistical anomaly based: The IDS samples the live environment to record activities. The longer the IDS is in operation, the more accurate the profile that is built. However, developing a profile that does not have a large number of false positives can be difficult and time-consuming. Thresholds for activity deviations are important in this IDS. Too low a threshold results in false positives, whereas too high a threshold results in false negatives. -Protocol anomaly based: The IDS has knowledge of the protocols it will monitor. A profile of normal usage is built and compared to activity. -Traffic anomaly based: The IDS tracks traffic pattern changes. All future traffic patterns are compared to the sample. Changing the threshold reduces the number of false positives or negatives. This type of filter is excellent for detecting unknown attacks, but user activity might not be static enough to effectively implement this system.
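
The statistical variant above amounts to learning a baseline profile and flagging deviations beyond a threshold, and the threshold trade-off (too low means false positives, too high means false negatives) falls out directly. A minimal sketch with invented baseline numbers (e.g. requests per minute):

```python
# A minimal statistical-anomaly sketch: learn mean and standard deviation
# from a baseline sample, then flag observations more than `threshold`
# standard deviations away. Baseline values are invented for illustration.
import statistics

baseline = [98, 102, 101, 99, 100, 103, 97, 100]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    # Lowering `threshold` catches more attacks but raises false positives.
    return abs(value - mean) > threshold * stdev

print(is_anomalous(101))   # False -- within normal variation
print(is_anomalous(250))   # True  -- far outside the learned profile
```

As the card notes, the longer the baseline is sampled, the more accurate the profile; a short or unrepresentative sample skews both the mean and the deviation.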

Screened subnet firewall

Two firewalls are used, and traffic must be inspected at both firewalls to enter the internal network. It is called a screened subnet because there is a subnet between the two firewalls that can act as a DMZ for resources from the outside world.

