MIS ch12


firewalls

Typically, a large portion of the organization's information assets is accessible from the organization's networks and possibly by users across the Internet. With the never-ending push to have information where you want it, when you want it, and how you want it, InfoSec professionals are under increasing pressure to provide global access to information assets without sacrificing security. Fortunately, a number of technologies support the protection of information assets across networks and the Internet. These technologies include firewalls, virtual private networks (VPNs), intrusion detection and prevention systems (IDPSs), wireless access points (WAPs) and wireless security protocols, and network scanning tools. Each of these will be examined in this chapter.

Table 12-2 Ranking of biometric effectiveness and acceptance (H = High, M = Medium, L = Low). Adapted from multiple sources.1

Biometrics          Universality  Uniqueness  Permanence  Collectability  Performance  Acceptability  Circumvention
Face                H             L           M           H               L            H              L
Facial Thermogram   H             H           L           H               M            H              H
Fingerprint         M             H           H           M               H            M              H
Hand Geometry       M             M           M           H               M            M              M
Hand Vein           M             M           M           M               M            M              H
Eye: Iris           H             H           H           M               H            L              H
Eye: Retina         H             H           M           L               H            L              H
DNA                 H             H           H           L               H            L              L
Odor & Scent        H             H           H           L               L            M              L
Voice               M             L           L           M               L            H              L
Signature           L             L           L           H               L            H              L
Keystroke           L             L           L           M               L            M              M
Gait                M             L           L           H               L            H              M

Protection Mechanisms 531 Copyright 2017 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. WCN 02-200-203

Firewalls

Key Terms

application layer firewall: Also known as a layer seven firewall, a device capable of examining the application layer of network traffic (for example, HTTP, SMTP, FTP) and filtering based upon its header content rather than the traffic's IP headers.

application layer proxy firewall: A device capable of functioning both as a firewall and an application layer proxy server.
bastion host: A device placed between an external, untrusted network and an internal, trusted network. Also known as a sacrificial host, as it serves as the sole target for attack and should therefore be thoroughly secured.

cache server: A proxy server or application-level firewall that stores the most recently accessed information in its internal caches, minimizing the demand on internal servers.

content filter: A software program or hardware/software appliance that allows administrators to restrict content that comes into or leaves a network—for example, restricting user access to Web sites with material that is not related to business, such as pornography or entertainment.

deep packet inspection (DPI): A firewall function that involves examining multiple protocol headers and even the content of network traffic, all the way through the TCP/IP layers and including encrypted, compressed, or encoded data.

demilitarized zone (DMZ): An intermediate area between a trusted network and an untrusted network that restricts access to internal systems.

dual-homed host: A network configuration in which a device contains two network interfaces: one that is connected to the external network and one that is connected to the internal network. All traffic must go through the device to move between the internal and external networks.

dynamic packet filtering firewall: A firewall type that can react to network traffic and create or modify configuration rules to adapt.

firewall: In information security, a combination of hardware and software that filters or prevents specific information from moving between the outside network and the inside network.

network-address translation (NAT): A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address.
packet filtering firewall: A networking device that examines the header information of data packets that come into a network and determines whether to drop them (deny) or forward them to the next network connection (allow), based on its configuration rules.

port-address translation (PAT): A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-many basis; that is, one external valid address is mapped dynamically to a range of internal addresses by adding a unique port number to the address when traffic leaves the private network and is placed on the public network.

proxy firewall: A device that provides both firewall and proxy services.

proxy server: A server that exists to intercept requests for information from external users and provide the requested information by retrieving it from an internal server, thus protecting and minimizing the demand on internal servers. Some proxy servers are also cache servers.

sacrificial host: See bastion host.

screened-host architecture: A firewall architectural model that combines the packet filtering router with a second, dedicated device such as a proxy server or proxy firewall.

screened-subnet architecture: A firewall architectural model that consists of one or more internal bastion hosts located behind a packet filtering router on a dedicated network segment, with each host performing a role in protecting the trusted network.

single bastion host architecture: A firewall architecture in which a single device performing firewall duties, such as packet filtering, serves as the only perimeter device providing protection between an organization's networks and the external network. This architecture can be implemented as a packet filtering router or as a firewall behind a non-filtering router.
state table: A tabular record of the state and context of each packet in a conversation between an internal and external user or system. A state table is used to expedite traffic filtering.

stateful packet inspection (SPI) firewall: A firewall type that keeps track of each network connection between internal and external systems using a state table, and that expedites the filtering of those communications. Also known as a stateful inspection firewall.

total cost of ownership (TCO): A measurement of the true cost of a device or application, which includes not only the purchase price, but annual maintenance or service agreements, the cost to train personnel to manage the device or application, the cost of systems administrators, and the cost to protect it.

trusted network: The system of networks inside the organization that contains its information assets and is under the organization's control.

Unified Threat Management (UTM): Networking devices categorized by their ability to perform the work of multiple devices, such as a stateful packet inspection firewall, network intrusion detection and prevention system, content filter, spam filter, and malware scanner and filter.

untrusted network: The system of networks outside the organization over which it has no control. The Internet is an example of an untrusted network. A physical
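The packet filtering firewall defined in the key terms above makes its allow/deny decision by comparing a packet's header fields against an ordered rule table. A minimal sketch of that decision logic follows; the rule fields, names, and addresses are illustrative assumptions, not drawn from any particular product.

```python
# Sketch of a packet filtering firewall: compare a packet's header fields
# against an ordered rule table; first matching rule wins, default deny.
# Rule structure and addresses are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # source address, or "*" for any
    dst: str      # destination address, or "*" for any
    port: int     # destination port, or -1 for any
    action: str   # "allow" or "deny"

def matches(rule: Rule, src: str, dst: str, port: int) -> bool:
    return (rule.src in ("*", src)
            and rule.dst in ("*", dst)
            and rule.port in (-1, port))

def filter_packet(rules: list[Rule], src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule; deny if none match."""
    for rule in rules:
        if matches(rule, src, dst, port):
            return rule.action
    return "deny"

rules = [
    Rule("*", "10.0.0.5", 80, "allow"),  # permit web traffic to a public server
    Rule("*", "*", -1, "deny"),          # explicit default deny
]
```

Note the default-deny fallback: even without the final catch-all rule, unmatched traffic is dropped, which mirrors the conservative posture most firewall configurations take.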

password power

incorporating at least one letter, one number, and one special character in order to create a reasonable delay in the attacker's effort to crack a password with a brute-force attack. This delay causes the attacker's work effort to exceed his or her reward level, as discussed in Chapter 7. If the system does require case-sensitive passwords, which is the much-preferred alternative, then the average password length need only be 10 characters to result in an acceptable delay against brute-force attacks.

Table 12-1 Password power

Case-insensitive passwords using a standard alphabet set (no numbers or special characters):

Password Length   Odds of Cracking: 1 in (number of characters ^ password length)   Estimated Time to Crack*
8                 208,827,064,576                                                   1.01 seconds
9                 5,429,503,678,976                                                 26.2 seconds
10                141,167,095,653,376                                               11.4 minutes
11                3,670,344,486,987,780                                             4.9 hours
12                95,428,956,661,682,200                                            5.3 days
13                2,481,152,873,203,740,000                                         138.6 days
14                64,509,974,703,297,200,000                                        9.9 years
15                1,677,259,342,285,730,000,000                                     256.6 years
16                43,608,742,899,428,900,000,000                                    6,672.9 years

Case-sensitive passwords using a standard alphabet set (with numbers and 20 special characters):

Password Length   Odds of Cracking: 1 in (number of characters ^ password length)   Estimated Time to Crack*
8                 2,044,140,858,654,980                                             2.7 hours
9                 167,619,550,409,708,000                                           9.4 days
10                13,744,803,133,596,100,000                                        2.1 years
11                1,127,073,856,954,880,000,000                                     172.5 years
12                92,420,056,270,299,900,000,000                                    14,141.9 years
13                7,578,444,614,164,590,000,000,000                                 1,159,633.8 years
14                621,432,458,361,496,000,000,000,000                               95,089,967.6 years
15                50,957,461,585,642,700,000,000,000,000                            7,797,377,343.5 years
16                4,178,511,850,022,700,000,000,000,000,000                         639,384,942,170.1 years

*Estimated Time to Crack is based on a 2015-era PC with an Intel i7-6700K quad-core CPU performing 207.23 Dhrystone GIPS (giga/billion instructions per second) at 4.0 GHz. Note: Modern workstations are capable of using multiple CPUs, further decreasing time to crack.

Something a Person Has

This authentication mechanism makes use of an item (a card, key, or token) that the user or system has. While there are many implementations of this mechanism, one example is a dumb card, a category that includes ID and ATM cards with magnetic strips that contain the digital (and often encrypted) PIN against which user input is compared. A more capable object is the smart card, which contains a computer chip that can verify and validate information in addition to PINs. Another often-used device is the cryptographic token, a computer chip in a card that has a display. This device contains a built-in seed number that uses a formula or a clock to calculate a number that can be used to perform a remote login authentication. Tokens may be synchronous or asynchronous. Once synchronous tokens are synchronized with a server, each device (server and token) uses the time to generate the authentication number that is entered during the user login. Asynchronous tokens use a challenge-response system in which the server challenges the user with a number. That is, the user enters the challenge number into the token, which in turn calculates a response number. The user then enters the response number into the system to gain access. Only a person who has the correct token can calculate the correct response number and thus log into the system. This system does not require synchronization and does not suffer from mistiming issues. Figure 12-3 shows two examples of access control tokens from Google 2-Step and PayPal enhanced authentication.

Something a Person Can Produce

This authentication mechanism takes advantage of something inherent about the user that is evaluated using biometrics.
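Before moving on: the arithmetic behind Table 12-1 above can be reproduced directly. The sketch below assumes a 26-letter alphabet for the case-insensitive rows, an 82-character set (26 uppercase + 26 lowercase + 10 digits + 20 specials) for the case-sensitive rows, and the footnote's rate of 207.23 billion guesses per second; note that the table's printed odds are rounded in the last few digits.

```python
# Reproducing Table 12-1's search-space and time-to-crack figures.
# Assumptions (from the table, not a general standard): 26 characters for the
# case-insensitive alphabet, 82 for the case-sensitive set, and a guess rate
# of 207.23 billion attempts per second (the footnote's 2015-era benchmark).

GUESSES_PER_SECOND = 207.23e9

def search_space(alphabet_size: int, length: int) -> int:
    """Total candidate passwords: alphabet_size ** length."""
    return alphabet_size ** length

def time_to_crack_seconds(alphabet_size: int, length: int,
                          rate: float = GUESSES_PER_SECOND) -> float:
    """Worst-case brute-force time, assuming every candidate is tried."""
    return search_space(alphabet_size, length) / rate

# First row of the case-insensitive table: 26^8 = 208,827,064,576,
# or roughly 1.01 seconds at the assumed guess rate.
print(search_space(26, 8))
print(round(time_to_crack_seconds(26, 8), 2))
```

Doubling the alphabet or adding two characters of length each multiplies the attacker's work enormously, which is why the case-sensitive rows grow from hours to hundreds of billions of years.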
Biometric authentication methods include the following:

• Fingerprint comparison of the person's actual fingerprint to a stored fingerprint
• Palm print comparison of the person's actual palm print to a stored palm print
• Hand geometry comparison of the person's actual hand to a stored measurement
• Facial recognition using a photographic ID card, in which a human security guard compares the person's face to a photo. This is the most widely used form of identification today.
• Facial recognition using a digital camera, in which a person's face is compared to a stored image
• Retinal print comparison of the person's actual retina to a stored image
• Iris pattern comparison of the person's actual iris to a stored image

[Figure 12-3 Access control tokens. Source: RSA.]

Most of the technologies that scan human characteristics convert these images to obtain some form of minutiae—that is, unique points of reference that are digitized and stored. Some technologies encrypt the minutiae to make them more resistant to tampering. Each subsequent scan is also digitized and then compared with the encoded value to determine whether users are who they claim to be. One limitation of this technique is that some human characteristics can change over time, due to normal development, injury, or illness. Among all possible biometrics, only three human characteristics are usually considered truly unique:

• Fingerprints
• Retina of the eye (blood vessel pattern)
• Iris of the eye (random pattern of features found in the iris, including freckles, pits, striations, vasculature, coronas, and crypts)

DNA or genetic authentication will be included in this category if it ever becomes a cost-effective and socially accepted technology. For items a person can produce, signature recognition is commonplace.
Many retail stores use signature recognition, or at least signature capture, for authentication during a purchase. Customers sign a special pad using a stylus; the signatures are then digitized and either compared to a database for validation or simply saved. Signature capture is much more widely accepted than signature comparison, because signatures can vary due to a number of factors, including age, fatigue, and the speed with which they are written. Voice recognition for authentication captures the analog waveforms of a person's speech and compares these waveforms to a stored version. Voice recognition systems provide users with a phrase they must read—for example, "My voice is my password, please verify me. Thank you." Another pattern-based approach is keystroke pattern recognition. This authentication method relies on the timing between key signals when a user types in a known sequence of keystrokes. When measured with sufficient precision, this pattern can provide a unique identification. Figure 12-4 depicts some of these biometric and other human recognition characteristics.
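The keystroke pattern recognition approach described above can be sketched as a comparison of inter-key timing gaps against an enrolled profile. The tolerance value and the all-gaps-within-tolerance matching rule below are illustrative assumptions, not a published algorithm.

```python
# Sketch of keystroke pattern recognition: the timing between key events for
# a known sequence of keystrokes is compared to a stored (enrolled) profile.
# The 40 ms tolerance and simple per-gap comparison are illustrative choices.

def inter_key_gaps(timestamps_ms: list[float]) -> list[float]:
    """Milliseconds between successive keystrokes of a known phrase."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def matches_profile(sample: list[float], profile: list[float],
                    tolerance_ms: float = 40.0) -> bool:
    """Accept only if every timing gap is close to the enrolled rhythm."""
    if len(sample) != len(profile):
        return False
    return all(abs(s - p) <= tolerance_ms for s, p in zip(sample, profile))

enrolled = inter_key_gaps([0.0, 180.0, 420.0, 530.0])  # user's enrolled rhythm
attempt  = inter_key_gaps([0.0, 175.0, 410.0, 525.0])  # a close re-typing
```

A production system would use many enrollment samples and a statistical distance measure rather than a fixed tolerance, but the principle is the same: the rhythm, not the keys, is the credential.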

authentication mechanisms

As explained in Chapter 8, access controls regulate the admission of users into trusted areas of the organization—both logical access to information systems and physical access to the organization's facilities. Access control is maintained by means of a collection of policies, programs to carry out those policies, and technologies that enforce policies. Access control approaches involve four processes: obtaining the identity of the person requesting access to a logical or physical area (identification), confirming the identity of the person seeking access to a logical or physical area (authentication), determining which actions the person can perform in that logical or physical area (authorization), and documenting the activities of the authorized individual and systems (accountability). A successful access control approach—whether intended to control logical or physical access—always incorporates all four of these elements, known collectively as IAAA ("I triple-A"). There are three types of authentication mechanisms:

• Something a person knows (for example, a password or passphrase)
• Something a person has (for example, a cryptographic token or smart card)
• Something a person can produce (such as fingerprints, palm prints, hand topography, hand geometry, retina and iris scans; or a voice or signature that is analyzed using pattern recognition). These characteristics can be assessed through the use of biometrics, which can then validate who the person claims to be.

The following sections describe each of these authentication mechanisms.

Something a Person Knows

This authentication mechanism verifies the user's identity by means of a password, passphrase, or some other unique authentication code, such as a PIN.
The technical infrastructure for something you know is commonly built into computer and network operating systems software and is in use by default unless it has been deliberately disabled. In older client operating systems, such as Windows 98 and Windows XP, password systems were widely known to be insecure. This led to the implementation of supplemental authentication mechanisms, which often require separate physical devices. Some product vendors offer these hardware controls as built-in features; for example, some laptops include thumbprint readers on certain models.

One of the biggest security debates focuses on password complexity. A password should be difficult to guess, which means it cannot be a word that is easily associated with the user, such as the name of a spouse, child, or pet. A password also should not be a series of numbers easily associated with the user, such as a phone number, Social Security number, or birth date. At the same time, the password must be something the user can easily remember, which means it should be short or have an association with the user that is not accessible to others. The current industry best practice is for all passwords to have a minimum length of 10 characters and contain at least one uppercase letter, one lowercase letter, one number, and one system-acceptable special character, which of course requires systems to be case-sensitive. These criteria are referred to as a password's complexity requirement. As passwords get more complex, they become increasingly difficult to remember, which can lead to employees writing them down in unauthorized locations and defeating the whole purpose of having passwords. The greatest challenge of complex password usage comes from employees who allow the local Web browser to remember passwords for them; anyone who can access the system will then have access to any online resources commonly used from that system.
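The complexity requirement described above can be checked mechanically. A minimal sketch follows; the particular special-character set is an assumption, since the text requires only a "system-acceptable" special character without listing one.

```python
# Sketch of the complexity requirement described above: minimum length of 10
# with at least one uppercase letter, one lowercase letter, one number, and
# one special character. The SPECIALS set is an illustrative assumption.

SPECIALS = set("!@#$%^&*()-_=+[]{};:,.?/")

def meets_complexity(password: str) -> bool:
    return (len(password) >= 10
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in SPECIALS for c in password))
```

For example, a short dictionary-style password such as 23skedoo fails on length, case, and special characters, while a longer mixed string passes all four tests.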
Most users incorporate simple access controls into their office (and home) systems, but are then required to use complex passwords for online applications. This issue creates a huge security problem for organizations, especially those that allow employees to work from home on their personal equipment. Organizations therefore must enforce a number of requirements, including strong passwords on local systems, restrictions on allowing systems to retain access control credentials, and restrictions on allowing users to access organizational resources with personal systems.

The passphrase and corresponding virtual password are an improvement over the standard password, as they are based on an easily memorable phrase. For example, while a typical password might be 23skedoo, a passphrase could be May The Force Be With You Always, from which the virtual password MTFBWYA is derived. Another way to create a virtual password is to use a set of construction rules applied to facts you know very well, such as the first three letters of your last name, a hyphen, the first two letters of your first name, an underscore, the first two letters of your mother's maiden name, a hyphen, and the first four letters of the city in which you were born. This may sound complicated, but once memorized, the construction rules are easy to use. If you add another rule to substitute numbers for certain letters—1 for L or I, 0 for O, and 3 for E—and capitalize the first letter of each section, then you have a very powerful virtual password that you can easily reconstruct. Using the preceding rules would create a very strong virtual password for Charlie Moody (born in Atlanta, mother's maiden name Meredith) of M00-Ch_M3-At1a.
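The construction rules from the Charlie Moody example can be expressed as a short routine. This is a sketch: the function name is mine, and applying the letter substitutions only to lowercase letters is an assumption inferred from the worked example (the capitalized section initials survive unchanged in M00-Ch_M3-At1a).

```python
# Sketch of the virtual-password construction rules described above:
# first three letters of the last name, hyphen, first two of the first name,
# underscore, first two of the mother's maiden name, hyphen, first four of
# the birth city; capitalize each section, then substitute 1 for l/i,
# 0 for o, and 3 for e (lowercase only, matching the book's example).

SUBS = str.maketrans({"l": "1", "i": "1", "o": "0", "e": "3"})

def virtual_password(first: str, last: str, maiden: str, city: str) -> str:
    sections = [last[:3], first[:2], maiden[:2], city[:4]]
    out = [s.capitalize().translate(SUBS) for s in sections]
    return f"{out[0]}-{out[1]}_{out[2]}-{out[3]}"

print(virtual_password("Charlie", "Moody", "Meredith", "Atlanta"))
```

Running it on the book's example inputs reproduces M00-Ch_M3-At1a, confirming the rules are deterministic: the user memorizes the recipe, not the result.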
Another method for remembering strong passwords is to use a password memory support software application such as eWallet from Ilium Software (www.iliumsoft.com/ewallet), as shown in Figure 12-2. This application and others like it are available for smartphones, tablets, laptops, and PCs, and provide an encrypted database to store the system name (or URL), username, and password for a large number of systems. You can also use such applications to store credit card numbers, frequent flyer numbers, and any portable data that needs protection. Most systems like this use strong encryption, such as 256-bit AES, which is described later in this chapter.

How important is it to have a long password that isn't obvious to others? As shown in Table 12-1, the longer the password, the lower the odds of it being guessed in a brute-force attack using random bit combinations. If a particular system does not require case-sensitive passwords, the user should adopt a standard password length of at least 12 characters,

middle

Intrusion Detection and Prevention Systems

Key Terms

agent: In an IDPS, a piece of software that resides on a system and reports back to a management server. Also referred to as a sensor.

anomaly-based IDPS: An IDPS that compares current data and traffic patterns to an established baseline of normalcy, looking for variance out of parameters. Also known as a behavior-based IDPS.

behavior-based IDPS: See anomaly-based IDPS.

clipping level: A predefined assessment level that triggers a predetermined response when surpassed. Typically, the response is to write the event to a log file and/or notify an administrator.

host-based IDPS (HIDPS): An IDPS that resides on a particular computer or server, known as the host, and monitors activity only on that system. Also known as a system integrity verifier.

intrusion detection and prevention system (IDPS): The general term for a system with the capability both to detect and modify its configuration and environment to prevent intrusions. An IDPS encompasses the functions of both intrusion detection systems and intrusion prevention technology.

knowledge-based IDPS: See signature-based IDPS.

network-based IDPS (NIDPS): An IDPS that resides on a computer or appliance connected to a segment of an organization's network and monitors traffic on that segment, looking for indications of ongoing or successful attacks.

sensor: See agent.

signature-based IDPS: An IDPS that examines systems or network data in search of patterns that match known attack signatures. Also known as a knowledge-based IDPS.

Intrusion detection and prevention systems (IDPSs) work like burglar alarms. When the system detects a violation—the IT equivalent of an opened or broken window—it activates the alarm.
This alarm can be audible and visible (noise and lights), or it can be a silent alarm that sends a message to a monitoring company. With almost all IDPSs, administrators can choose the configuration and alarm levels. Many IDPSs can be configured to notify administrators via e-mail and numerical or text paging. The systems can also be configured to notify an external InfoSec service organization, just as burglar alarms do. IDPSs combine tried-and-true detection methods from intrusion detection systems (IDSs) with the capability to react to changes in the environment, which is available in intrusion prevention technology. As most modern technology in this category has the capability both to detect and prevent, the term IDPS is generally used to describe the devices or applications.

Systems that include intrusion prevention technology attempt to prevent the attack from succeeding by one of the following means:

• Stopping the attack by terminating the network connection or the attacker's user session
• Changing the security environment by reconfiguring network devices (firewalls, routers, and switches) to block access to the targeted system
• Changing the attack's content to make it benign—for example, by removing an infected file attachment from an e-mail before the e-mail reaches the recipient

Intrusion prevention technologies can include a mechanism that severs the communications circuit—an extreme measure that may be justified when the organization is hit with a massive Distributed Denial of Service (DDoS) or malware-laden attack. All IDPSs require complex configurations to provide the appropriate level of detection and response. These systems are either network based to protect network information assets or host based to protect server or host information assets. IDPSs use one of two basic detection methods: signature based or statistical anomaly based.
Figure 12-9 depicts two typical approaches to intrusion detection and prevention, where IDPSs are used to monitor both network connection activity and current information states on host servers.

Host-Based IDPS

A host-based IDPS (HIDPS) works by configuring and classifying various categories of systems and data files. In many cases, IDPSs provide only a few general levels of alert notification. For example, an administrator might configure an IDPS to report changes to certain folders, such as system folders (C:\Windows), security-related applications (C:\Tripwire), or critical data folders; at the same time, the IDPS might be instructed to ignore changes to other files (such as C:\Program Files\Office). Administrators might configure the system to instantly page or e-mail them for high-priority alerts but to simply record other lower-priority activity. Most administrators are concerned only if unauthorized changes occur in sensitive areas. After all, applications frequently modify their internal files, such as dictionaries and configuration templates, and users constantly update their data files. Unless the IDPS is precisely configured, these benign actions can generate a large volume of false alarms.

Some organizations will use a variable degree of reporting and recording detail. During times of routine operation, the system will provide alerting for only a few urgent reasons and will provide recording only for exceptions. During periods of increased threat, however, it may send alerts on suspicious activity and record all activity for later analysis.

Host-based IDPSs can monitor multiple computers simultaneously. They do so by storing a client file on each monitored host and then making that host report back to the master console, which is usually located on the system administrator's computer.
This master console monitors the information from the managed clients and notifies the administrator when predetermined attack conditions occur.

Network-Based IDPS

In contrast to host-based IDPSs, which reside on a host (or hosts) and monitor only activities on the host, network-based IDPSs (NIDPSs) monitor network traffic. When a predefined condition occurs, the network-based IDPS notifies the appropriate administrator. Whereas host-based IDPSs look for changes in file attributes (create, modify, delete), the network-based IDPS looks for patterns of network traffic, such as large collections of related traffic that can indicate a DoS attack or a series of related packets that could indicate a port scan in progress. Consequently, network IDPSs require a much more complex configuration and maintenance program than do host-based IDPSs. Network IDPSs must match known and unknown attack strategies against their knowledge base to determine whether an attack has occurred. These systems yield many more false-positive readings than do host-based IDPSs because they are attempting to read the network activity pattern to determine what is normal and what is not.

[Figure 12-9 Intrusion detection and prevention systems. Network IDPS: examines packets on the network and alerts systems administrators to unusual patterns. Host IDPS: examines the data in files stored on the host and alerts systems administrators to any changes.]

Most organizations that implement an IDPS solution install data collection sensors that are both host based and network based. A system of this type is called a hybrid IDPS, and it also usually includes a provision to concentrate the event notifications from all sensors into a central repository for analysis.
The analysis makes use of either signature-based or statistical anomaly-based detection techniques.

Signature-Based IDPS

IDPSs that use signature-based methods work like antivirus software. In fact, antivirus software can be classified as a form of signature-based IDPS. A signature-based IDPS, also known as a knowledge-based IDPS, examines data traffic for something that matches the signatures, which comprise preconfigured, predetermined attack patterns. The problem with this approach is that the signatures must be continually updated as new attack strategies emerge. Failure to stay current allows attacks using new strategies to succeed. Another weakness of this method is the time frame over which attacks occur. If attackers are slow and methodical, they may slip undetected through the IDPS, as their actions may not match a signature that includes factors based on duration of the events. The only way to resolve this dilemma is to collect and analyze data over longer periods of time, which requires substantially larger data storage ability and additional processing capacity.

Anomaly-Based IDPS

Another popular type of IDPS is the anomaly-based IDPS (formerly called a statistical anomaly-based IDPS), which is also known as a behavior-based IDPS. The anomaly-based IDPS first collects data from normal traffic and establishes a baseline. It then periodically samples network activity, using statistical methods, and compares the samples to the baseline. When the activity falls outside the baseline parameters (known as the clipping level), the IDPS notifies the administrator. The baseline variables can include a host's memory or CPU usage, network packet types, and packet quantities. The advantage of this approach is that the system is able to detect new types of attacks because it looks for abnormal activity of any type.
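The baseline-and-clipping-level idea described above can be sketched statistically. Treating the clipping level as a multiple of the standard deviation of normal samples is an illustrative assumption; the text does not prescribe a particular statistical method.

```python
# Sketch of anomaly-based detection: establish a baseline from samples of
# normal activity, then flag any observation that falls outside the clipping
# level. The mean +/- 3 standard deviations rule is an illustrative choice.

import statistics

def baseline(samples: list[float]) -> tuple[float, float]:
    """Summarize normal activity as (mean, standard deviation)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """True when the observation exceeds the clipping level of k stdevs."""
    return abs(value - mean) > k * stdev

# Baseline variable: host CPU usage (%) sampled during calm periods.
normal_cpu = [20.0, 22.0, 19.0, 21.0, 20.0, 23.0, 18.0]
mu, sigma = baseline(normal_cpu)
```

Because the detector compares against "normal" rather than a signature, a sudden CPU spike from a brand-new attack still trips the alarm, which is exactly the advantage (and the false-alarm risk) the text describes.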
Unfortunately, these IDPSs require much more overhead and processing capacity than do signature-based versions, because they must constantly attempt to match sampled activity to the baseline. In addition, they may not detect minor changes to system variables and may generate many false-positive warnings. If the actions of the users or systems on the network vary widely, with unpredictable periods of low-level and high-level activity, this type of IDPS may not be suitable, as it will almost certainly generate false alarms. As a result, it is less commonly used than the signature-based approach.

Managing Intrusion Detection and Prevention Systems

Just as with any alarm system, if there is no response to an IDPS alert, it does no good. An IDPS does not remove or deny access to a system by default and, unless it is programmed to take an action, merely records the events that trigger it. IDPSs must be configured using technical knowledge and adequate business and security knowledge to differentiate between routine circumstances and low, moderate, or severe threats to the security of the organization's information assets.

A properly configured IDPS can translate a security alert into different types of notification—for example, log entries for low-level alerts, e-mails for moderate-level alerts, and text messages or paging for severe alerts. Some organizations may configure systems to automatically take action in response to IDPS alerts, although this technique should be carefully considered and undertaken only by organizations with experienced staff and well-constructed InfoSec procedures. A poorly configured IDPS may yield either information overload—causing the IDPS administrator to shut off the pager—or failure to detect an actual attack.
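The severity-based notification scheme described above amounts to a simple dispatch table; the channel names below are illustrative assumptions, since a real IDPS exposes its own notification configuration.

```python
# Sketch of severity-based alert routing as described above: log low-level
# alerts, e-mail moderate ones, and page or text for severe ones.
# Severity levels and channel names are illustrative, not product-specific.

def route_alert(severity: str) -> str:
    channels = {
        "low": "log",         # write an entry to the event log
        "moderate": "email",  # e-mail the administrator
        "severe": "page",     # text message or pager notification
    }
    # Unknown levels fall back to logging so nothing is silently dropped.
    return channels.get(severity, "log")
```

The fallback matters: a misconfigured or unrecognized severity should degrade to the least intrusive channel rather than vanish, which is the same conservatism the text urges for automated responses.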
When a system is configured to take unsupervised action without obtaining human approval, the organization must be prepared to accept accountability for those IDPS actions.

The human response to false alarms can lead to behavior that attackers can exploit. For example, consider the following tactic—a car theft strategy that exploits humans' intolerance for technological glitches that cause false alarms. In the early morning hours—say, 2:00 a.m.—a thief deliberately sets off the target car's alarm and then retreats a safe distance. The owner comes out, resets the alarm, and goes back to bed. A half-hour later, the thief does it again, and then again. After the third or fourth time, the owner assumes that the alarm is faulty and turns it off, leaving the vehicle unprotected. The thief is then free to steal the car without having to deal with the now disabled alarm.

Most IDPSs monitor systems by means of agents. An agent (sometimes called a sensor) is a piece of software that resides on a system and reports back to a management server. If this piece of software is not properly configured and does not use a secure transmission channel to communicate with its manager, an attacker could compromise and subsequently exploit the agent or the information from the agent.

A valuable tool in managing an IDPS is the consolidated enterprise management service. This software allows the security professional to collect data from multiple host-based and network-based IDPSs and look for patterns across systems and subnetworks. An attacker might probe one network segment or computer host and then move on to another target before the first system's IDPS has caught on. The consolidated management service not only collects responses from all IDPSs, thereby providing a central monitoring station, but can also identify cross-system probes and intrusions.
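The cross-system view a consolidated management service provides can be illustrated with a toy correlator. The event format (sensor ID, source address) and the two-sensor threshold are assumptions made for this sketch, not features of any particular product.

```python
from collections import defaultdict

def correlate(events, threshold=2):
    """Group events from many IDPS sensors by source address and report
    sources seen by at least `threshold` distinct sensors, i.e. probes
    that moved across network segments."""
    sensors_by_source = defaultdict(set)
    for sensor_id, source_ip in events:
        sensors_by_source[source_ip].add(sensor_id)
    return {src for src, sensors in sensors_by_source.items()
            if len(sensors) >= threshold}
```

A single sensor sees only its own segment; only the central correlator can observe that the same source touched both net-A and net-B.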
For more information on IDPSs, read NIST SP 800-94, "Guide to Intrusion Detection and Prevention Systems," which is available at http://csrc.nist.gov/publications/nistpubs/800-94/SP800-94.pdf.

Remote Access Protection

Key Terms

Remote Authentication Dial-In User Service (RADIUS): A computer connection system that centralizes the management of user authentication by placing the responsibility for authenticating each user on a central authentication server.

Terminal Access Controller Access Control System (TACACS): Commonly used in UNIX systems, a remote access authorization system based on a client/server configuration that makes use of a centralized data service in order to validate the user's credentials at the TACACS server.

war-dialer: An automatic phone-dialing program that dials every number in a configured range (e.g., 555-1000 to 555-2000) and checks whether a person, answering machine, or modem picks up.

Before the Internet emerged as a public network, organizations created private networks and allowed individuals and other organizations to connect to them using dial-up or leased-line connections. In the current networking environment, firewalls are used to safeguard the connection between an organization and its Internet (public network) connection. An equivalent level of protection is necessary for private networks that allow dial-up access. While large organizations have replaced much of their dial-up capacity with Internet-enabled VPN connectivity, the maintenance and protection of dial-up connections from users' homes and in small offices remains a concern for some organizations.
According to a May 2015 article in CNN Money, more than 2 million people in the United States still use dial-up access to get to the Internet, most notably through America Online (AOL).7 Unsecured dial-up access represents a substantial exposure to attack. An attacker who suspects that an organization has dial-up lines can use a device called a war-dialer to locate the connection points. A war-dialer is an automatic phone-dialing program that dials every number in a configured range (e.g., 555-1000 to 555-2000) and checks whether a person, answering machine, or modem picks up. If a modem answers, the war-dialer program makes a note of the number and then moves to the next target number. The attacker then attempts to hack into the network through the identified modem connection using a variety of techniques.

Dial-up connections are usually much simpler and less sophisticated than Internet connections. For the most part, simple user name and password schemes are the only means of authentication. Some newer technologies have improved this process, including Remote Authentication Dial-In User Service (RADIUS) systems, Challenge Handshake Authentication Protocol (CHAP) systems, and even systems that use strong encryption. The most prominent of these approaches are RADIUS and TACACS, which are discussed in the following section.

RADIUS and TACACS Although broadband Internet access has widely replaced dial-up access in most of the modern world, a substantial number of dial-up users remain. Because dial-up connectivity is still inexpensive and widely available, it remains important for organizations to stay familiar with the methods necessary to protect dial-up connections. RADIUS and TACACS are systems that authenticate the credentials of users who are trying to access an organization's network via a dial-up device or a secured network session.
Typical remote access systems place the responsibility for the authentication of users on the system directly connected to the modems. If the dial-up system includes multiple points of entry, such an authentication scheme is difficult to manage. The Remote Authentication Dial-In User Service (RADIUS) system centralizes the management of user authentication by placing the responsibility for authenticating each user on a central RADIUS server. When a remote access server (RAS) receives a request for a network connection from a dial-up client, it passes the request along with the user's credentials to the RADIUS server. RADIUS then validates the credentials and passes the resulting decision (accept or deny) back to the requesting RAS. Figure 12-10 shows the typical configuration of a RAS system making use of RADIUS authentication.

Similar in function to the RADIUS system is the Terminal Access Controller Access Control System (TACACS), commonly used in UNIX systems. This remote access authorization system is based on a client/server configuration. It makes use of a centralized data service, such as the one provided by a RADIUS server, and validates the user's credentials at the TACACS server. Three versions of TACACS exist: TACACS, Extended TACACS, and TACACS+. The original version combines authentication and authorization services. The extended version authenticates and authorizes in two separate steps, and records the access attempt and the requestor's identity. The plus version uses dynamic passwords and incorporates two-factor authentication.8

Managing Dial-Up Connections Most organizations that once operated large dial-up access pools have reduced the number of telephone lines they support in favor of Internet access secured by VPNs. Many have stopped using any type of dial-up access.
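The centralized accept/deny flow described for RADIUS can be sketched as a division of responsibility: the RAS holds no credentials itself and merely forwards them to a central decision point. The user database and function names below are hypothetical; this illustrates the model, not the RADIUS wire protocol.

```python
# Toy credential store for the central server; a real deployment would use
# a directory service and would never store passwords in plaintext.
USER_DB = {"teleworker": "correct horse battery staple"}

def radius_server(username, password):
    """Central server: validate credentials, return 'accept' or 'deny'."""
    return "accept" if USER_DB.get(username) == password else "deny"

def remote_access_server(username, password):
    """RAS: pass the request along to the central server and act on its decision,
    granting the connection only on an 'accept'."""
    return radius_server(username, password) == "accept"
```

The benefit of the design shows in the code: adding a second or third RAS requires no new credential storage, because every entry point defers to the same central server.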
An organization that continues to offer dial-up remote access must do the following:

• Determine How Many Dial-Up Connections It Has—Many organizations do not even realize they have dial-up access, or they leave telephone connections in place long after they have stopped fully using them. This creates two potential problems. One, the organization continues to pay for telecommunications circuits it is not using; two, an alternative, and frequently unauthorized, method of accessing organizational networks remains a potential vulnerability. For example, an employee may have installed a modem on an office computer to do a little telecommuting without management's knowledge. The organization should periodically scan its internal phone networks with special software to detect available connections. It should also integrate risk assessment and risk approval into the telephone service ordering process.

[Figure 12-10 RADIUS configuration: (1) Teleworker dials the RAS and submits user name and password. (2) RAS passes the user name and password to the RADIUS server. (3) RADIUS server approves or rejects the request and provides access authorization. (4) RAS provides access to the authorized remote worker.]

• Control Access to Authorized Modem Numbers—Only those authorized to use dial-up access should be allowed to use incoming connections. Furthermore, although there is no security in obscurity, the dial-up numbers should not be widely distributed and should be considered confidential.

• Use Call-Back Whenever Possible—Call-back requires an access requestor to be at a preconfigured location, which is essential for authorized telecommuting. Users call into the access computer, which disconnects and immediately calls the requestor back.
If the caller is an authorized user at the preconfigured number, the caller can then connect. This solution is not so useful for traveling users, however.

• Use Token Authentication if at All Possible—Users can be required to enter more than user names and passwords, which is essential when allowing dial-up access from laptops and other remote computers. In this scheme, the device accepts an input number, often provided by the computer from which access is requested, and provides a response based on an internal algorithm. The result is much stronger security.

Wireless Networking Protection

Key Terms

Bluetooth: A de facto industry standard for short-range wireless communications between wireless telephones and headsets, between PDAs and desktop computers, and between laptops.

footprint: In wireless networking, the geographic area in which there is sufficient signal strength to make a network connection.

war driving: An attacker technique of moving through a geographic area or building, actively scanning for open or unsecured WAPs.

Wi-Fi Protected Access (WPA): A set of protocols used to secure wireless networks; created by the Wi-Fi Alliance. Includes WPA and WPA2.

Wired Equivalent Privacy (WEP): A set of protocols designed to provide a basic level of security protection to wireless networks and to prevent unauthorized access or eavesdropping. WEP is part of the IEEE 802.11 wireless networking standard.

wireless access point (WAP): A device used to connect wireless networking users and their devices to the rest of the organization's network(s). Also known as a Wi-Fi router.

The use of wireless network technology is an area of concern for InfoSec professionals. Most organizations that make use of wireless networks use an implementation based on the IEEE 802.11 protocol. A wireless network provides a low-cost alternative to a wired network because it does not require the difficult and often expensive installation of cable in an existing structure.
The downside is the management of the wireless network footprint. The size of the footprint depends on the amount of power the transmitter/receiver wireless access points (WAPs) emit. Sufficient power must exist to ensure quality connections within the intended area, but not so much as to allow those outside the footprint to receive them. Just as war-dialers represent a threat to dial-up communications, so does war driving for wireless. In some cities, groups of war-drivers move through an urban area, marking locations that have unsecured wireless access with chalk (a practice called war-chalking). A number of encryption protocols can be used to secure wireless networks. The most common is the Wi-Fi Protected Access (WPA) family of protocols. The predecessor of WPA, unfortunately still in use, is Wired Equivalent Privacy (WEP), considered by most to be insecure and easily breached.

Wired Equivalent Privacy (WEP) Wired Equivalent Privacy (WEP) is part of the IEEE 802.11 wireless networking standard. WEP is designed to provide a basic level of security protection to these radio networks, to prevent unauthorized access or eavesdropping. However, WEP, like a traditional wired network, does not protect users from each other; it only protects the network from unauthorized users. In the early 2000s, cryptologists found several fundamental flaws in WEP, resulting in vulnerabilities that can be exploited to gain access. These vulnerabilities ultimately led to the replacement of WEP as the industry standard with WPA.

Wi-Fi Protected Access (WPA) Created by the Wi-Fi Alliance, an industry group, Wi-Fi Protected Access (WPA) is a set of protocols used to secure wireless networks. The protocols were developed as an intermediate solution until the IEEE 802.11i standards were fully developed.
IEEE 802.11i has been implemented in products such as WPA2. This amendment to the 802.11 standard, published in June 2004, specifies security protocols for wireless networks. While WPA works with virtually all wireless network cards, it is not compatible with some older WAPs. WPA2, on the other hand, has compatibility issues with some older wireless network cards. Compared to WEP, WPA and WPA2 provide increased capabilities for authentication and encryption as well as increased throughput. Unlike WEP, both WPA and WPA2 can use an IEEE 802.1X authentication server, similar to the RADIUS servers mentioned in the previous section. This type of authentication server can issue keys to users who have been authenticated by the local system. The alternative is to allow all users to share a predefined password or passphrase, known as a pre-shared key (PSK, as in WPA-PSK or WPA2-PSK). Use of these pre-shared keys is convenient but not as secure as other authentication techniques. WPA also uses a Message Integrity Code (a type of message authentication code) to prevent certain types of attacks. WPA, as implemented using the Temporal Key Integrity Protocol (TKIP), was the strongest mechanism that remained backward compatible with older systems. As of 2006, WPA2 officially replaced WPA. WPA2 introduced newer, more robust security protocols based on the Advanced Encryption Standard (discussed later in this chapter) to greatly improve the protection of wireless networks. The WPA2 standard is currently incorporated in virtually all Wi-Fi devices and should be used when available because it permits the use of improved encryption protocols.

WiMAX The next generation of wireless networking is WiMAX, also known as WirelessMAN; it is essentially an improvement on the technology developed for cellular telephones and modems.
Developed as part of the IEEE 802.16 standard, WiMAX is a certification mark that stands for "Worldwide Interoperability for Microwave Access." As noted by the WiMAX Forum, an industry-sponsored organization that serves as an informal IEEE Standard 802.16 wireless standards evaluation group:

WiMAX is not a technology per se, but rather a certification mark, or "stamp of approval" given to equipment that meets certain conformity and interoperability tests for the IEEE 802.16 family of standards. A similar confusion surrounds the term Wi-Fi (Wireless Fidelity), which like WiMAX, is a certification mark for equipment based on a different set of IEEE standards from the 802.11 working group for wireless local area networks (WLAN). Neither WiMAX, nor Wi-Fi, is a technology but their names have been adopted in popular usage to denote the technologies behind them. This is likely due to the difficulty of using terms like "IEEE 802.16" in common speech and writing.9

Bluetooth Bluetooth's wireless communications can be exploited by anyone within its approximately 30-foot range unless suitable security controls are implemented. As this short-range de facto standard continues to increase in popularity for use in personal communications technologies, it has been estimated that there will be almost a billion Bluetooth-enabled devices by the end of the decade. In discoverable mode—which allows other Bluetooth systems to detect and connect—devices can easily be accessed. Even in non-discoverable mode, the device is susceptible to access by other devices that have connected with it in the past. By default, Bluetooth does not authenticate connections; however, Bluetooth does implement some degree of security when devices access certain services, such as dial-up accounts and local area file transfers.
Paired devices—usually a computer or a phone and a peripheral that a user plans to connect to it—require that the same passkey be entered on both devices. This key is used to generate a session key used for all future communications. The only way to secure Bluetooth-enabled devices is to incorporate a twofold approach: (1) Turn off Bluetooth when you do not intend to use it, and (2) do not accept an incoming communications pairing request unless you know who the requestor is.

Managing Wireless Connections Users and organizations can use a number of measures to implement a secure wireless network. These safeguards include the wireless security protocols mentioned earlier, VPNs, and firewalls. It is also possible to restrict access to the network to a preapproved set of wireless network card MAC addresses. This is especially easy in small or personal networks where all possible users are known. One of the first management requirements is to regulate the size of the wireless network footprint. The initial step is to determine the best locations for placement of the WAPs. In addition, by using radio-strength meters, network administrators can adjust the power of the broadcast antennae to provide sufficient but not excessive coverage. This is especially important in areas where public access is possible. WEP used to be the first choice in network installation and is still available as an option on many technologies, but it generally should not be used. Even in a home or small office/home office (SOHO) setting, WPA is preferred; for most installations, WPA2 is preferred. The setup of wireless networks is also slightly different from what many users are familiar with. Most smaller wireless networks require the use of a pre-shared key. WEP networks require a 5-character or 13-character passphrase. In WPA and WPA2 settings, the passphrase can be any length, with longer being more secure.
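For WPA/WPA2-PSK specifically, the passphrase and the network's SSID are stretched into a 256-bit key: IEEE 802.11i specifies PBKDF2 with HMAC-SHA1, the SSID as salt, and 4,096 iterations. A minimal sketch using Python's standard library:

```python
import hashlib

def wpa_psk(passphrase, ssid):
    """Derive the 256-bit WPA/WPA2 pairwise master key from a passphrase.
    IEEE 802.11i specifies PBKDF2-HMAC-SHA1 with the SSID as the salt and
    4,096 iterations, producing a 32-byte (256-bit) key."""
    key = hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"),
                              ssid.encode("utf-8"), 4096, 32)
    return key.hex()  # 64 hexadecimal characters
```

Note how the SSID acts as a salt: the same passphrase yields different keys on networks with different names. The derivation is deterministic, which is why longer passphrases matter; anyone who learns both the passphrase and the SSID can reproduce the key.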
On some older equipment, the pre-shared key must be converted into a string of hexadecimal characters that is entered into both the configuration software used to set up the WAP and each associated wireless network access card. This can quickly turn into a labor-intensive process for all but the smallest of networks.

For more information on wireless networking and security, visit the Wi-Fi Alliance Web site at www.wi-fi.org.

Scanning and Analysis Tools

Key Terms

fingerprinting: The systematic survey of a targeted organization's Internet addresses collected during the footprinting phase to identify the network services offered by the hosts in that range.

footprinting: The organized research and investigation of Internet addresses owned or controlled by a target organization.

honey net: A monitored network or network segment that contains multiple honey pot systems.

honey pot: An application that entices individuals who are illegally perusing the internal areas of a network by providing simulated rich content areas while the software notifies the administrator of the intrusion.

port: A network channel or connection point in a data communications system.

port scanners: Tools used both by attackers and defenders to identify or fingerprint active computers on a network, the active ports and services on those computers, the functions and roles of the machines, and other useful information.

trap and trace applications: Applications that combine the function of honey pots or honey nets with the capability to track the attacker back through the network.

vulnerability scanner: An application that examines systems connected to networks and their network traffic to identify exposed usernames and groups, open network shares, configuration problems, and other vulnerabilities in servers.
In the previous section, wireless network controls were covered. Now, we return to the technology and tools that are useful in all compound (wired and wireless) networks. Although they are not always perceived as defensive tools, scanners, sniffers, and other analysis tools enable security administrators to see what an attacker sees. Scanners and analysis tools can find vulnerabilities in systems, holes in security components, and other unsecured points in the network. Unfortunately, they cannot detect the unpredictable behavior of people. Some of these devices are extremely complex; others are very simple. Some are expensive commercial products; others are available for free from their creators.

Conscientious administrators will have several hacking Web sites bookmarked and should frequently browse for discussions about new vulnerabilities, recent conquests, and favorite assault techniques. There is nothing wrong with security administrators using the tools used by hackers to examine their own defenses and search out areas of vulnerability. A word of caution: Many of these tools have distinct signatures, and some ISPs scan for these signatures. If the ISP discovers someone using hacker tools, it may choose to deny access to that customer and discontinue service. It is best to establish a working relationship with the ISP and notify it before using such tools.

Scanning tools collect the information that an attacker needs to succeed. Collecting information about a potential target is done through a research process known as footprinting (not to be confused with the wireless footprint). Attackers may use public Internet data sources to perform keyword searches to identify the network addresses of the organization.
They may also use the organization's Web page to find information that can be used in social engineering attacks. For example, the Reveal Source option on most popular Web browsers allows users to see the source code behind the graphics on a Web page. A number of clues can provide additional insight into the configuration of an internal network: the locations and directories for Common Gateway Interface (CGI) script bins, and the names and possibly addresses of computers and servers.

A scanner can be used to augment the data collected by a common browser. A Web site crawler program can scan entire Web sites for valuable information, such as server names and e-mail addresses. It can also perform a number of other common information collection activities, such as sending multiple ICMP information requests (pings), attempting to retrieve multiple and cross-zoned DNS queries, and performing common network analysis queries—all powerful diagnostic and/or hacking activities. The next phase of the pre-attack data gathering process is fingerprinting, which yields a detailed network analysis that provides useful information about the targets of the planned attack. The tool discussions here are necessarily brief; to attain true expertise in the use and configuration of these tools, you will need more specific education and training.

Port Scanners Port scanners are a group of utility software applications that can identify (or fingerprint) active computers on a network, as well as the active ports and the services associated with them on those computers, the functions and roles fulfilled by the machines, and other useful information. These tools can scan for specific types of computers, protocols, or resources, or they can conduct generic scans. It is helpful to understand your network environment so that you can select the best tool for the job. The more specific the scanner is, the more detailed and useful the information it provides.
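A minimal TCP connect scan illustrates the basic idea. This sketch checks only whether each port accepts a connection and should, of course, be run only against hosts you are authorized to scan.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port and return the ports that accept.
    connect_ex() returns 0 on success rather than raising an exception,
    so closed or filtered ports are simply skipped."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A fingerprint goes further than this connect test, for example by reading the service banner each open port returns; tools such as Nmap automate that step.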
However, you should keep a generic, broad-based scanner in your toolbox as well, to help locate and identify rogue nodes on the network that administrators may not be aware of. Within the TCP/IP networking protocol, TCP and UDP port numbers differentiate among the multiple communication channels used to connect to network services that are offered on the same network device. Each service within the TCP/IP protocol suite has either a unique default port number or a user-selected port number. Table 12-4 shows some of the commonly used port numbers. In total, there are 65,536 port numbers: the well-known ports are those from 0 through 1023, the registered ports are those from 1024 through 49,151, and the dynamic and private ports are those from 49,152 through 65,535. The first step in securing a system is to secure open ports. Why? Simply put, an open port can be used to send commands to a computer, gain access to a server, and exert control over a networking device. As a general rule, you should secure all ports and remove from service any ports not required for essential functions. For instance, if an organization does not host Web services, there is no need for port 80 to be available in its network or on its servers.

Vulnerability Scanners Vulnerability scanners, which are variants of port scanners, are capable of scanning networks for very detailed information. As a class, they identify exposed user names and groups, show open network shares, and expose configuration problems and other server vulnerabilities. One vulnerability scanner is Nmap, a professional freeware utility available from www.insecure.org/nmap. Nmap identifies the systems
available on a network, the services (ports) each system is offering, the operating system and operating system version they are running, the type of packet filters and firewalls in use, and dozens of other characteristics. Several commercial vulnerability scanners are available as well, including products from IBM's Internet Security Systems and from Foundstone, a division of McAfee.

Packet Sniffers A packet sniffer can provide a network administrator with valuable information to help diagnose and resolve networking issues. In the wrong hands, it can be used to eavesdrop on network traffic. Commercially available and open-source sniffers include Sniffer (a commercial product), Snort (open-source software), and Wireshark (also open-source software). Wireshark is an excellent free network protocol analyzer; it allows administrators to examine both live network traffic and previously captured data. This application offers a variety of features, including a filtering language and a TCP session reconstruction utility.

Typically, to use a packet sniffer effectively, you must be connected directly to a local network from an internal location. Simply tapping into any public Internet connection will flood you with more data than you can process, and technically constitutes a violation of wiretapping laws. To use a packet sniffer legally, you must satisfy the following criteria: (1) Be on a network that the organization owns, not leases, (2) be under the direct authorization of the network's owners, (3) have the knowledge and consent of the content creators (users), and (4) have a justifiable business reason for doing so. If all four conditions are met, you can look at anything captured on that network. If not, you can only selectively collect and analyze packets, using packet header information to identify and diagnose network problems.
Conditions 1, 2, and 4 are self-explanatory, and condition 3 is usually a stipulation for using the company network. Incidentally, these conditions are the same as for employee monitoring in general.

Table 12-4 Commonly used port numbers

Port number(s)   Description
20 and 21        File Transfer Protocol (FTP)
25               Simple Mail Transfer Protocol (SMTP)
53               Domain Name Services (DNS)
67 and 68        Dynamic Host Configuration Protocol (DHCP)
80               Hypertext Transfer Protocol (HTTP)
110              Post Office Protocol v3 (POP3)
161              Simple Network Management Protocol (SNMP)
194              Internet Relay Chat, or IRC (used for device sharing)
443              HTTP over SSL
8080             Proxy services

Trap and Trace Trap and trace applications are another set of technologies that use IDPS capabilities to detect individuals who are intruding into network areas or investigating systems without authorization. Trap function software entices individuals who are illegally perusing the internal areas of a network in order to determine who they are. While perusing, these individuals discover indicators of particularly rich content areas on the network, but these areas are set up specifically to attract potential attackers. Incorporating the functions of honey pots and honey nets, these directories or servers distract the attacker while the software notifies the administrator of the intrusion.

The accompaniment to the trap is the trace. Similar in concept to telephone caller ID service, the trace is a process by which the organization attempts to determine the identity of someone discovered in unauthorized areas of the network or systems. However, you must understand that it is a violation of the Electronic Communications Privacy Act to trace communications outside of networks owned by the organization.
Use of any trap and trace functions requires compliance with the same four rules as packet sniffers. The U.S. government defines a trap and trace device, along with the related pen register, in U.S. Code Title 18, Section 3127:

"(3) the term "pen register" means a device or process which records or decodes dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted, provided, however, that such information shall not include the contents of any communication, but such term does not include any device or process used by a provider or customer of a wire or electronic communication service for billing, or recording as an incident to billing, for communications services provided by such provider or any device or process used by a provider or customer of a wire communication service for cost accounting or other like purposes in the ordinary course of its business;

(4) the term "trap and trace device" means a device or process which captures the incoming electronic or other impulses which identify the originating number or other dialing, routing, addressing, and signaling information reasonably likely to identify the source of a wire or electronic communication, provided, however, that such information shall not include the contents of any communication;"10

Note that these definitions explicitly exclude the content of communications and focus only on the header information used to trace the origins of communications. Unlike packet sniffers, trap and trace devices are mainly used by law enforcement to identify the origin of communications for legal and prosecution purposes.

Managing Scanning and Analysis Tools It is vitally important that the security manager be able to see the organization's systems and networks from the viewpoint of potential attackers.
Therefore, the security manager should develop a program, using in-house resources, contractors, or an outsourced service provider, to periodically scan the organization's systems and networks for vulnerabilities, using the same tools that a typical hacker might use.

There are a number of drawbacks to using scanners and analysis tools, content filters, and trap and trace tools:

• These tools are not human and thus cannot simulate the more creative behavior of a human attacker.
• Most tools function by pattern recognition, so only previously known issues can be detected. New approaches, modifications to well-known attack patterns, and the randomness of human behavior can cause them to misdiagnose the situation, thereby allowing vulnerabilities to go undetected or threats to go unchallenged.
• Most of these tools are computer-based software or hardware and so are prone to errors, flaws, and vulnerabilities of their own.
• All of these tools are designed, configured, and operated by humans and are subject to human errors.
• You get what you pay for. Use of hackerware may actually infect a system with a virus or open the system to outside attacks or other unintended consequences. Always view a hacker kit skeptically before using it, and especially before connecting it to the Internet. Never put anything valuable on the computer that houses the hacker tools. Consider segregating it from other network segments, and disconnect it from the network when not in use.
• Specifically for content filters, some governments, agencies, institutions, and universities have established policies or laws that protect the individual user's right to access content, especially if it is necessary for the conduct of his or her job. There are also situations in which an entire class of content has been proscribed and mere possession of that content is a criminal act—for example, child pornography.
• Tool usage and configuration must comply with an explicitly articulated policy as well as the law, and the policy must provide for valid exceptions. This mandate prevents administrators from becoming arbiters of morality as they create filter rule sets.11

For lists and reviews of scanning and analysis tools, visit the following sites: www.gfi.com/blog/the-top-20-free-network-monitoring-and-analysis-tools-for-sys-admins/; www.techrepublic.com/blog/five-apps/five-free-network-analyzers-worth-any-it-admins-time/; http://searchsecurity.techtarget.com/Testing-and-comparing-vulnerability-analysis-tools

Managing Server-Based Systems with Logging

Key Terms

log files: Collections of data stored by a system and used by administrators to audit system performance and use by both authorized and unauthorized users.
logs: See log files.
security event information management (SEIM) systems: Log management systems specifically tasked to collect log data from a number of servers or other network devices for the purpose of interpreting, filtering, correlating, analyzing, storing, and reporting the data.

Some systems are configured to record a common set of data by default; other systems must be configured to be activated. This data, referred to generally as log files or logs, is commonly used to audit system performance and usage by both authorized and unauthorized users. Table 12-5 illustrates log data categories and types of data normally collected
during logging. To protect the log data, you must ensure that the servers that create and store the logs are secure.

Network performance
• Total traffic load in and out over time (packet, byte, and connection counts) and by event (new product or service release)
• Traffic load (percentage of packets, bytes, connections) in and out over time sorted by protocol, source address, destination address, and other packet header data
• Error counts on all network interfaces

Other network data
• Service initiation requests
• Name of the user/host requesting the service
• Network traffic (packet headers)
• Successful connections and connection attempts (protocol, port, source, destination, time)
• Connection duration
• Connection flow (sequence of packets from initiation to termination)
• States associated with network interfaces (up, down)
• Network sockets currently open
• Mode of network interface card (promiscuous or not)
• Network probes and scans
• Results of administrator probes

System performance
• Total resource use over time (CPU, memory [used, free], disk [used, free])
• Status and errors reported by systems and hardware devices
• Changes in system status, including shutdowns and restarts
• File system status (where mounted, free space by partition, open files, biggest file) over time and at specific times
• File system warnings (low free space, too many open files, file exceeding allocated size)
• Disk counters (input/output, queue lengths) over time and at specific times
• Hardware availability (modems, network interface cards, memory)

Other system data
• Actions requiring special privileges
• Successful and failed logins
• Modem activities
• Presence of new services and devices
• Configuration of resources and devices

Process performance
• Amount of resources used (CPU, memory, disk, time) by specific processes over time
• Top resource-consuming processes
• System and user processes and services executing at any given time

Other process data
• User executing the process
• Process start-up time, arguments, filenames
• Process exit status, time, duration, resources consumed
• Means by which each process is normally initiated (by an administrator, other users, or other programs or processes) and with what authorization and privileges
• Devices used by specific processes
• Files currently open by specific processes

Table 12-5 Log data categories and types of data

According to NIST, log management infrastructure involves two tiers, each with its own subtasks: log generation, and log analysis and storage.12

Log Generation

Log generation involves the configuration of systems to create logs, as well as any configuration changes needed to consolidate logs if this is desired. This typically requires activating logging on the various servers and defining where to store the logging data: locally (on the system that generated the logs) or elsewhere (such as on a centralized log analysis system).
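As a concrete illustration of these log generation choices, Python's standard logging module can write the same events to a local file and, optionally, forward them to a central collector. This is a minimal sketch; the file name and the commented-out collector address are placeholder assumptions:

```python
import logging
import logging.handlers

# Configure a named logger that records audit events.
logger = logging.getLogger("app.audit")
logger.setLevel(logging.INFO)

# Local storage: events stay on the system that generated them.
local = logging.FileHandler("audit.log")
local.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(local)

# Centralized storage: the same events could also go to a log analysis host.
# The hostname below is a placeholder for a site-specific collector.
# remote = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
# logger.addHandler(remote)

logger.info("service started")
logger.warning("failed login for user %s", "alice")
```

Attaching both handlers to one logger is what makes local and centralized logging a configuration decision rather than a code change.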
Files and directories
• List of files, directories, attributes
• Cryptographic checksums for all files and directories
• Information about file access operations (open, create, modify, execute, delete), as well as their time and date
• Changes to file sizes, contents, protections, types, locations
• Changes to access control lists on system tools
• Additions and deletions of files and directories
• Results of virus scanners

Users
• Login/logout information (location, time): successful attempts, failed attempts, attempted logins to privileged accounts
• Login/logout information on remote access servers that appears in modem logs
• Changes in user identity
• Changes in authentication status (such as enabling privileges)
• Failed attempts to access restricted information (such as password files)
• Keystroke monitoring logs
• Violations of user quotas

Applications and services
• Application information (such as network traffic [packet content], mail logs, FTP logs, Web server logs, modem logs, firewall logs, SNMP logs, DNS logs, intrusion detection system logs, database management system logs)
• FTP file transfers and connection statistics
• Web connection statistics, including pages accessed, credentials of the requestor, user requests over time, most requested pages, and identities of requestors
• Mail sender, receiver, size, and tracing information for mail requests
• Mail server statistics, including number of messages over time and number of queued messages
• DNS questions, answers, and zone transfers
• File server transfers over time
• Database server transactions over time

Table 12-5 Log data categories and types of data (continued)

Issues in log generation include:
• Multiple Log Sources—The diversity of systems that generate logs can cause problems, with some servers generating multiple logs, such as the application, system, security, and setup logs found in most Windows operating systems. Some logs consist of pieces of information collected from multiple sources, such as network monitoring agents, and reintegrating the data collected from these sources adds complexity to the log consolidation process.
• Inconsistent Log Content—What gets stored in a log may depend on options chosen by the operating system developer or configuration options chosen by the systems administrator. Some systems allow the administrator to specify what gets logged, while others predefine what they believe should be logged.
• Inconsistent Timestamps—In addition to the fact that the dates and times in logs may be formatted differently, servers that are not synchronized to a central time server or service may record different times for events that are simultaneous. If an incident hits a number of servers in a particular sequence but the timestamps on those machines are off by a few seconds, or even fractions of a second, it becomes much more difficult to analyze the incident.
• Inconsistent Log Format—Because many different systems create logs, the structure and content of those logs may differ dramatically. Even a simple data element such as a date can be stored in multiple formats, such as the difference between the standard in the United States—Month, Day, Year (MMDDYYYY)—and the standard used in many European countries—Day, Month, Year (DDMMYYYY). Some systems store ports by number, others by name.

In order to interpret data from the log generation tier, the following functions must be addressed:

• Log Parsing—Dividing data within logs into specific values, as some log data may consist of a solid stream of data.
• Event Filtering—The separation of "items of interest" from the rest of the data that the log collects.
• Event Aggregation—The consolidation of similar entries or related events within a log. Aggregation is critical for the organization to be able to handle the thousands of data points multiple servers will generate.13

Log Analysis and Storage

Log analysis and storage is the transference of the log data to an analysis system, which may or may not be separate from the system that collects the log data. Collectively, systems of this type are known as security event information management (SEIM) systems. These systems are specifically tasked to collect log data from a number of servers or other network devices for the purpose of interpreting, filtering, correlating, analyzing, storing, and reporting the data. Important management functions within log storage include:

• Log Rotation—The file-level management of logs (e.g., when a single log file is closed and another started), usually done on a set schedule.
• Log Archival—The backup and storage of logs based on policy or legal/regulatory requirements. This function includes log retention (the routine storage of all logs for a specified duration) and log preservation (the saving of logs of particular interest based on content).
• Log Compression—The reduction in file size of logs to save drive space, using compression tools like Zip or Archive.
• Log Reduction—The removal of unimportant or uneventful log entries to reduce the size of a log file, also known as "event reduction."
• Log Conversion—The modification of the format or structure of a log file to allow it to be accessed by another application, such as an analysis tool.
• Log Normalization—The standardization of log file structures and formats, using log conversion.
• Log File Integrity—The determination as to whether the log files have been modified, usually through message digests or hashes.

Important management functions within log analysis include:

• Event Correlation—The association of multiple log file entries according to a predefined event or activity.
• Log Viewing—The display of log data in a form that is easily understandable by humans, usually involving adding field data.
• Log Reporting—The display of the results of log analysis.

Managing Logs

The final responsibility within this tier is the management of the logs once they are moved to storage. Log disposal or log clearing is the specification of when logs may be deleted or overwritten within a system, whether on the system that generated the logs or on the system that stores and analyzes them.14 General suggestions for managing logs include:

• Make sure that data stores can handle the amount of data generated by the configured logging activities. Some systems may generate multiple gigabytes of data for each hour of operation.
• Rotate logs when unlimited data storage is not possible. Some systems overwrite older log entries with newer entries to accommodate space limitations. Log rotation settings must be configured for your system, which may require modifying the default settings.
• Archive logs. Log systems can copy logs periodically to remote storage locations. Security administrators disagree about how long log files should be retained. Some argue that log files may be subpoenaed during legal proceedings and thus should be routinely destroyed to prevent unwanted disclosure. Others argue that the information gained from analyzing legacy and archival logs outweighs the risk. Still others propose aggregating the log information, then destroying the individual entries. Regardless of the method employed, some plan must be in place to handle these files or risk their loss.
• Secure logs. Archived logs should be encrypted to prevent unwanted disclosure if the log data store is compromised. This should also protect the integrity of the log data, as many attackers will seek to delete or obfuscate log entries to cover the tracks of an attack.
• Destroy logs. Once log data has outlived its usefulness, it should be securely destroyed.15

Cryptography

Key Terms

cryptanalysis: The process of obtaining the plaintext message from a ciphertext message without knowing the keys used to perform the encryption.
cryptography: The process of making and using codes to secure information.
cryptology: The field of science that encompasses cryptography and cryptanalysis.
nonrepudiation: The process of reversing public key encryption to verify that a message was sent by a specific sender and thus cannot be refuted.

Although it is not a specific application or security tool, cryptography represents a sophisticated element of control that is often included in other InfoSec controls. Cryptography—from the Greek words kryptos, meaning "hidden," and graphein, meaning "to write"—is the set of processes involved in encoding and decoding messages so that others cannot understand them. Cryptography's parent discipline, cryptology, encompasses both cryptography and cryptanalysis—from analyein, meaning "to break up." Cryptology is a very complex field based on advanced mathematical concepts. The following sections provide a brief overview of the foundations of encryption and a short discussion of some of the related issues and tools in the field of InfoSec. You can find more information about cryptography in Bruce Schneier's book Secrets and Lies: Digital Security in a Networked World, which discusses many of the theoretical and practical considerations in the use of cryptographic systems.
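One of the simplest cryptographic tools, the message digest used for the log file integrity checks described earlier, can be computed with Python's standard hashlib. This is a minimal sketch; the file name is a placeholder:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Reading in fixed-size chunks keeps memory use flat for large logs.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when a log is archived; recompute and compare it later.
# A changed digest means the archived log was modified.
# baseline = file_digest("audit.log.1")
```

Because any change to the file changes the digest, comparing a stored baseline digest against a freshly computed one reveals tampering with archived logs.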
Many security-related tools use embedded cryptographic technologies to protect sensitive information. The use of the proper cryptographic tools can ensure confidentiality by keeping private information concealed from those who do not need to see it. Other cryptographic methods can provide increased information integrity via a mechanism to guarantee that a message in transit has not been altered—for example, a process that creates a secure message digest, or hash. In e-commerce situations, some cryptographic tools can be used to assure that parties to the transaction are authentic, so that they cannot later deny having participated in a transaction—a feature often called nonrepudiation.

Cryptography Definitions

You can better understand the tools and functions popular in encryption security solutions if you know some basic terminology:

• Algorithm—The mathematical formula or method used to convert an unencrypted message into an encrypted message.
• Cipher—When used as a verb, the transformation of the individual components (characters, bytes, or bits) of an unencrypted message into encrypted components or vice versa (see decipher and encipher); when used as a noun, the process of encryption or the algorithm used in encryption.
• Ciphertext or cryptogram—The unintelligible encrypted or encoded message resulting from an encryption.
• Cryptosystem—The set of transformations necessary to convert an unencrypted message into an encrypted message.
• Decipher—See decryption.
• Decryption—The process of converting an encoded or enciphered message (ciphertext) back to its original readable form (plaintext). Also referred to as deciphering.
• Encipher—See encryption.
• Encryption—The process of converting an original message (plaintext) into a form that cannot be used by unauthorized individuals (ciphertext).
Also referred to as enciphering.
• Key—The information used in conjunction with the algorithm to create the ciphertext from the plaintext; can be a series of bits used in a mathematical algorithm or the knowledge of how to manipulate the plaintext. Sometimes called a cryptovariable.
• Keyspace—The entire range of values that can possibly be used to construct an individual key.
• Plaintext—The original unencrypted message, and the result of successful decryption.
• Steganography—The process of hiding messages; for example, when a message is hidden within the digital encoding of a picture or graphic so that it is almost impossible to detect that the hidden message even exists.
• Work factor—The amount of effort (usually expressed in units of time) required to perform cryptanalysis on an encoded message.

Encryption Operations

Key Terms

asymmetric encryption: A cryptographic method that incorporates mathematical operations involving both a public key and a private key to encipher or decipher a message. Either key can be used to encrypt a message, but then the other key is required to decrypt it.
certificate authority (CA): A third party that manages users' digital certificates and certifies their authenticity.
Diffie-Hellman key exchange method: The hybrid cryptosystem that pioneered the technology.
digital certificates: Public key container files that allow PKI system components and end users to validate a public key and identify its owner.
digital signatures: Encrypted message components that can be mathematically proven to be authentic.
hybrid encryption system: The use of asymmetric encryption to exchange symmetric keys so that two (or more) organizations can conduct quick, efficient, secure communications based on symmetric encryption.
monoalphabetic substitution: A substitution cipher that incorporates only a single alphabet in the encryption process.
permutation cipher: See transposition cipher.
polyalphabetic substitution: A substitution cipher that incorporates two or more alphabets in the encryption process.
private key encryption: See symmetric encryption.
public key encryption: See asymmetric encryption.
public key infrastructure (PKI): An integrated system of software, encryp
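The monoalphabetic substitution defined above can be sketched with a single shifted cipher alphabet. The shift of 3 (the classic Caesar cipher) is an illustrative choice, not a recommendation; real cryptosystems use far larger keyspaces:

```python
import string

# One fixed cipher alphabet: each plaintext letter maps to the letter
# three positions later, wrapping around at Z (a shift of 3).
PLAIN = string.ascii_uppercase
CIPHER = PLAIN[3:] + PLAIN[:3]
ENCRYPT = str.maketrans(PLAIN, CIPHER)
DECRYPT = str.maketrans(CIPHER, PLAIN)

def encipher(plaintext: str) -> str:
    """Apply the single substitution alphabet to produce ciphertext."""
    return plaintext.upper().translate(ENCRYPT)

def decipher(ciphertext: str) -> str:
    """Reverse the substitution to recover the plaintext."""
    return ciphertext.upper().translate(DECRYPT)

print(encipher("ATTACK AT DAWN"))  # DWWDFN DW GDZQ
print(decipher("DWWDFN DW GDZQ"))  # ATTACK AT DAWN
```

Because only a single alphabet is used, letter frequencies survive encryption, which is exactly why cryptanalysis of monoalphabetic ciphers is easy and why polyalphabetic substitution was developed.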

intro

One night toward the end of his shift, Drew Brown, a technician at Random Widget Works, Inc. (RWW), received a call from his wife. One of their children was ill, and she wanted Drew to pick up some medicine on his way home from work. He decided to leave a few minutes early.

Like all watchstanding employees in the security operations center (SOC), Drew had a procedures manual, which was organized sequentially. He used the checklists for everyday purposes and had an index to look up anything else he needed. Only one box remained unchecked on the checklist when Drew snapped the binder closed and hurriedly secured his workstation. That oversight would cause the whole company grief in the next few hours.

Since he was the second-shift operator and RWW did not have a third shift in its data center, Drew carefully reviewed the room shutdown checklist next to the door, making sure all the room's environmental, safety, and physical security systems were set correctly. That activated the burglar alarm, so Drew quickly exited the room and the building, and was soon on his way to the drugstore.

At about the same time, a 10th-grader in San Diego was up late, sitting at her computer. Her parents assumed she was listening to music while chatting with school friends online. In fact, she had become bored with chatting and had discovered some new friends on the Internet—friends who shared her interest in programming. One of these new friends sent the girl a link to a new warez (illegally copied software) site. The girl downloaded a kit called Blendo from the warez site. Blendo is a tool that helps novice hackers create attack programs that combine a mass e-mailer with a worm, a macro virus, and a network scanner.
The girl clicked her way through the configuration options, clicked a button labeled "custom scripts," and pasted in a script that one of her new friends had e-mailed to her. This script was built to exploit a brand-new vulnerability (announced only a few hours before). Although she didn't know it, the anonymous high-schooler had created new malware that was soon to bring large segments of the Internet to a standstill. She exported the attack script, attached it to an e-mail, and sent it to an anonymous remailer service to be forwarded to as many e-mail accounts as possible.

The 10th-grader had naively set up a mailback option to an anonymous e-mail account so she could track the progress of her creation. Thirty minutes later, she checked that anonymous e-mail account and saw that she had more than 800,000 new messages; the only reason there were not even more messages was that her mailbox was full.

Back at RWW, the e-mail gateway was sorting and forwarding all the incoming e-mail. The mailboxes for customer service and sales always received a lot of traffic. Tonight was no exception. Unfortunately for RWW, and for the second-shift operator who had failed to download and install the patch that fixed the new vulnerability announced by the vendor, the young hacker's attack code tricked the RWW mail server into running the program.

The RWW mail server, with its high-performance processors, large RAM storage, and high-bandwidth Internet connection, began to do three things at once: It sent an infected e-mail to everyone with whom RWW had ever traded e-mail; it infected every RWW server that the e-mail server could reach; and it started deleting files, randomly, from every folder on each infected server. Within seconds, the network intrusion detection system had determined that something was afoot. By then, it was too late to stop the infection,

protection mechanisms

You should know by now that technical controls alone cannot secure an information technology (IT) environment, but they are almost always an essential part of the information security (InfoSec) program. Managing the development and use of technical controls requires some knowledge of, and familiarity with, the technology that enables them. In this chapter, you will learn about firewalls, intrusion detection and prevention systems, encryption systems, and some other widely used security technologies. The chapter is designed to help you evaluate and manage the technical controls used by InfoSec programs. If you are seeking expertise in the configuration and maintenance of technical control systems, you will need education and training beyond the overview presented here.

Technical controls can enable policy enforcement where human behavior is difficult to regulate. A password policy that specifies the strength of the password (its length and the types of characters it uses), regulates how often passwords must change, and prohibits the reuse of passwords would be impossible to enforce by asking each employee whether he or she had complied. This type of requirement is best enforced by implementing a rule in the operating system.

Figure 12-1 illustrates how technical controls can be implemented at a number of points in a technical infrastructure. The technical controls that defend against threats from outside the organization are shown on the left side of the diagram. The controls that defend against threats from within the organization are shown on the right side; these controls were covered in previous chapters. Because individuals inside an organization often have direct access to the information, they can circumvent many of the most potent technical controls. Controls that can be applied to this human element are also shown on the right side of the diagram.
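The password rule described above can be sketched as a simple mechanical policy check, which is exactly the kind of enforcement an operating system performs in place of asking users. The length and character-class thresholds here are illustrative assumptions, not a recommended policy:

```python
import re

MIN_LENGTH = 12  # illustrative threshold, not an actual policy value

def meets_policy(password: str) -> bool:
    """Reject passwords that are too short or missing a character class."""
    if len(password) < MIN_LENGTH:
        return False
    # Require at least one lowercase, uppercase, digit, and symbol.
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

print(meets_policy("Tr1cky-Passw0rd"))  # True
print(meets_policy("password"))         # False
```

A full implementation would also check change frequency and a reuse history, typically against stored password hashes rather than plaintext.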

summary

■ Identification is a mechanism that provides basic information about an unknown entity to the known entity that it wants to communicate with.
■ Authentication is the validation of a user's identity. Authentication devices can depend on one or more of three factors: what you know, what you have, and what you can produce.
■ Authorization is the process of determining which actions an authenticated person can perform in a particular physical or logical area.
■ Accountability is the documentation of actions on a system and the tracing of those actions to a user, who can then be held responsible for those actions. Accountability is performed using system logs and auditing.
■ To obtain strong authentication, a system must use two or more authentication methods.
■ Biometric technologies are evaluated on three criteria: false reject rate, false accept rate, and crossover error rate.
■ A firewall in an InfoSec program is any device that prevents a specific type of information from moving between the outside world (the untrusted network) and the inside world (the trusted network).
■ Types of firewalls include packet filtering firewalls, application layer proxy firewalls, stateful packet inspection firewalls, and Unified Threat Management devices. There are three common architectural implementations of firewalls: single bastion hosts, screened-host firewalls, and screened-subnet firewalls.
■ A host-based IDPS resides on a particular computer or server and monitors activity on that system. A network-based IDPS monitors network traffic; when a predefined condition occurs, it responds and notifies the appropriate administrator.
■ A signature-based IDPS, also known as a knowledge-based IDPS, examines data traffic for activity that matches signatures, which are preconfigured, predetermined attack patterns.
■ A statistical anomaly-based IDPS (also known as a behavior-based IDPS) collects data from normal traffic and establishes a baseline. When the activity is outside the baseline parameters (called the clipping level), the IDPS notifies the administrator.
■ The science of encryption, known as cryptology, encompasses cryptography and cryptanalysis. Cryptanalysis is the process of obtaining the original message from an encrypted code without the use of the original algorithms and keys.
■ In encryption, the most commonly used algorithms employ either substitution or transposition. A substitution cipher substitutes one value for another. A transposition cipher (or permutation cipher) rearranges the values within a block to create the ciphertext.
■ Symmetric encryption uses the same key, also known as a secret key, both to encrypt and decrypt a message. Asymmetric encryption (public key encryption) uses two different keys for these purposes.
■ A public key infrastructure (PKI) encompasses the entire set of hardware, software, and cryptosystems necessary to implement public key encryption.
■ A digital certificate is a block of data, similar to a digital signature, that is attached to a file to certify it is from the organization it claims to be from and has not been modified.
■ A number of cryptosystems have been developed to make e-mail more secure. Examples include Pretty Good Privacy (PGP) and Secure Multipurpose Internet Mail Extensions (S/MIME).
■ A number of cryptosystems work to secure Web browsers, including Secure Sockets Layer (SSL), Secure Hypertext Transfer Protocol (SHTTP), Secure Shell (SSH), and IP Security (IPSec).

Review Questions

1. What is the difference between authentication and authorization? Can a system permit authorization without authentication? Why or why not?
2. What is the most widely accepted biometric authorization technology? Why?
3. What is the most effective biometric authorization technology? Why?
4. What is the typical relationship between the untrusted network, the firewall, and the trusted network?
5. How is an application layer firewall different from a packet filtering firewall? Why is an application layer firewall sometimes called a proxy server?
6. What special function does a cache server perform? Why does this function have value for larger organizations?
7. How does screened-host firewall architecture differ from screened-subnet firewall architecture? Which offers more security for the information assets that remain on the trusted network?
8. What is a DMZ? Is this actually a good name for the function this type of subnet performs?
9. What is RADIUS? What advantage does it have over TACACS?
10. How does a network-based IDPS differ from a host-based IDPS?
11. What is network footprinting? What is network fingerprinting? How are they related?
12. Why do many organizations ban port scanning activities on their internal networks? Why would ISPs ban outbound port scanning by their customers?
13. Why is TCP port 80 always of critical importance when securing an organization's network?
14. What kind of data and information can be found using a packet sniffer?
15. What are the main components of cryptology?
16. Explain the relationship between plaintext and ciphertext.
17. Define asymmetric encryption. Why would it be of interest to information security professionals?
18. One tenet of cryptography is that increasing the work factor to break a code increases the security of that code. Why is that true?
19. Explain the key differences between symmetric and asymmetric encryption. Which can the computer process faster? Which lowers the costs associated with key management?
20. What is a VPN? Why are VPNs widely used?

Exercises

1.
Create a spreadsheet that takes eight values that a user inputs into eight different cells. Then create a row that transposes the cells to simulate a transposition cipher, using the example transposition cipher from the text. Remember to work from right to left, with the pattern 1 > 3, 2 > 6, 3 > 8, 4 > 1, 5 > 4, 6 > 7, 7 > 5, 8 > 2, where 1 is the rightmost of the eight cells. Input the text ABCDEFGH as single characters into the first row of cells. What is displayed?
2. Search the Internet for information about a technology called personal or home office firewalls. Examine the various alternatives, select three of the options, and compare their functionalities, cost, features, and types of protection.
3. Go to the Web site of VeriSign, one of the market leaders in digital certificates. Determine whether VeriSign serves as a registration authority, a certificate authority, or both. Download its free guide to PKI and summarize VeriSign's services.
4. Go to csrc.nist.gov and locate "Federal Information Processing Standard (FIPS) 197." What encryption standard does this publication address? Examine the contents of the publication and describe the algorithm discussed. How strong is it? How does it encrypt plaintext?
5. Search the Internet for vendors of biometric products. Find one vendor with a product designed to examine each characteristic mentioned in Figure 12-4. What is the crossover error rate (CER) associated with each product? Which would be more acceptable to users? Which would be preferred by security administrators?

Closing Case
Iris's smartphone beeped. Frowning, she glanced at the screen, expecting to see another junk e-mail. "We've really got to do something about the spam!" she muttered to herself. She scanned the header of the message. "Uh-oh!"
Glancing at her watch and then looking at her incident response pocket card, Iris dialed the home number of the on-call systems administrator. When he answered, Iris asked, "Seen the alert yet? What's up?"
"Wish I knew—some sort of virus," the SA replied. "A user must have opened an infected attachment."
Iris made a mental note to remind the awareness program manager to restart the refresher training program for virus control. Her users should know better, but some new employees had not been trained yet.
"Why didn't the firewall catch it?" Iris asked.
"It must be a new one," the SA replied. "It slipped by the pattern filters."
"What are we doing now?" Iris was growing more nervous by the minute.
"I'm ready to cut our Internet connection remotely, then drive down to the office and start our planned recovery operations—shut down infected systems, clean up any infected servers, recover data from backups, and notify our peers that they may receive this virus from us in our e-mail. I just need your go-ahead." The admin sounded uneasy. This was not a trivial operation, and he was facing a long night of intense work.
"Do it," Iris said. "I'll activate the incident response plan and start working the notification call list to get some extra hands in to help."
Iris knew this situation would be the main topic at the weekly CIO's meeting. She just hoped her team would be able to restore the systems to safe operation quickly. She looked at her watch: 12:35 a.m.

Discussion Questions
1. What can be done to minimize the risk of this situation recurring? Can these types of situations be completely avoided?
2. If you were in Iris's position, once the timeline of events has been established, how would you approach your interaction with the second-shift operator?
3. How should RWW go about notifying its peers?
What other procedures should Iris have the technician perform?
4. When would be the appropriate time to begin the forensic data collection process to analyze the root cause of this incident? Why?

Ethical Decision Making
Regarding the actions taken by the San Diego 10th-grader as described in this chapter's opening scenario, did she break the law? (You may want to look back at Chapter 2 regarding the applicable laws.) If, in fact, she did not break any laws, was the purposeful damage to another via malware infection an unethical action? If not, why not?
Regarding the actions taken by the second-shift operator, was his oversight in running the routine update of the malware pattern file a violation of law? Was it a violation of policy? Was the mistake an ethical lapse?

Endnotes
1. From
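Exercise 1 above describes the transposition pattern precisely enough to check with a short script. The sketch below is a minimal Python rendering of the spreadsheet exercise, under the assumption that "1 > 3" means the value in position 1 (counting from the right) moves to position 3; if the textbook intended the inverse reading, the result would differ.

```python
# Transposition cipher sketch for Exercise 1: the pattern maps position i to
# position j, with positions numbered 1..8 from the RIGHT of the block.
PATTERN = {1: 3, 2: 6, 3: 8, 4: 1, 5: 4, 6: 7, 7: 5, 8: 2}

def transpose(block: str) -> str:
    n = len(block)
    out = [""] * n
    for src, dst in PATTERN.items():
        # position p counted from the right corresponds to string index n - p
        out[n - dst] = block[n - src]
    return "".join(out)

print(transpose("ABCDEFGH"))  # this reading of the pattern gives "FCGBDHAE"
```

Tracing one move: position 1 (the rightmost cell, "H") lands in position 3 from the right, which is the sixth cell from the left.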



A physical firewall in a building is a concrete or masonry wall running from the basement through the roof to prevent fire from spreading. In the aircraft and automotive industries, a firewall is an insulated metal barrier that keeps the hot and dangerous moving parts of the motor separate from the interior, where the passengers sit. In InfoSec, a firewall is any device that prevents a specific type of information from moving between the outside world, known as the untrusted network (e.g., the Internet), and the inside world, known as the trusted network. The firewall may be a separate computer system, a service running on an existing router or server, or a separate network containing a number of supporting devices.

Categories of Firewalls
Firewalls have made significant advances since their earliest implementations. While most firewalls are an amalgamation of various options, services, and capabilities, most are associated with one of the basic categories or types of firewalls. The most common types of firewalls are packet filtering firewalls, application layer proxy firewalls, stateful packet inspection firewalls, and Unified Threat Management (UTM) devices. Each of these will be examined in turn.

Packet Filtering Firewalls
The first category, packet filtering firewalls, consists of simple networking devices that filter packets by examining every incoming and outgoing packet header. They can selectively filter packets based on values in the packet header, accepting or rejecting packets as needed. These devices can be configured to filter based on IP address, type of packet, port request, and other elements present in the packet. Originally deployed as a router function, the filtering process examines packets for compliance with or violation of rules configured into the device's rule base.
The rules most commonly implemented in packet filtering are based on a combination of IP source and destination address, direction (inbound or outbound), and source and destination port requests. Figure 12-5 shows how such a firewall typically works. What began as an advanced router function has become a firewall function. The ability to restrict a specific service is now considered standard in most modern routers and is invisible to the user. Unfortunately, these systems are unable to detect whether packet headers have been modified, as occurs in IP spoofing attacks. Early firewall models examined only the packet's destination and source addresses. Table 12-3 presents a simplified example of a packet filtering rule set.

[Figure 12-5 Packet filtering firewall: a packet filtering router used as a dual-homed bastion host firewall. Unrestricted data packets arrive from the untrusted network; filtered data packets pass to the trusted network; blocked data packets are dropped.]

Table 12-3 Example of a packet filtering rule set

Source Address | Destination Address | Service Port | Action
10.10.x.x      | Any                 | Any          | Deny
192.168.x.x    | 10.10.x.x           | Any          | Deny
172.16.121.1   | 10.10.10.22         | SFTP         | Allow
Any            | 10.10.10.24         | SMTP         | Allow
Any            | 10.10.10.25         | HTTP         | Allow
Any            | 10.10.10.x          | Any          | Deny

Notes: These rules apply to a network at 10.10.x.x. This table uses special, non-routable IP addresses in the rules for this example. An actual firewall that connects to a public network would use real address ranges.

A network configured with the rules shown in Table 12-3 blocks inbound connection attempts by all computers or network devices in the 10.10.x.x address range. The first rule blocks traffic that attempts to spoof an internal address and thus bypass the firewall filters.
The second rule is an example of a specific block, perhaps on traffic from an objectionable location; the rule effectively blacklists that external network from connecting to this network. The third rule could be used to allow an off-site administrator to directly access an internal system via Secure File Transfer Protocol (SFTP). The next two rules would allow outside systems to access e-mail and Web servers, but only if they use the appropriate protocols. The final rule enforces an exclusionary policy that blocks all access not specifically allowed.

Application Layer Proxy Firewalls
The next category of firewalls is the application layer proxy firewall. The exact name and function of these devices can be confusing because multiple terms have commonly been associated with them. An application layer proxy server is distinct from an application layer proxy firewall, which is different from an application layer firewall. What a particular device is capable of most commonly boils down to the particular implementation of technologies by the vendor. In the strictest sense, an application layer firewall (or application-level firewall) works like a packet filtering firewall, but at the application layer. A proxy server works as an intermediary between the requestor of information and the server that provides it, adding a layer of separation and thus security. If such a server stores the most recently accessed information in its internal cache to provide content to others accessing the same information, it may also be called a cache server. Many people consider a cache server to be a form of firewall, but it really doesn't filter; it only intercepts and provides requested content by obtaining it from the internal service provider. A proxy firewall, on the other hand, provides both proxy and firewall services.
By extension, then, an application layer proxy server works between a client and the data server and focuses on one application or a small set of applications, such as Web pages. It is now common in the market to refer to a firewall that provides application layer proxy services and packet filtering firewall services as an application layer proxy firewall. However, some vendors offer devices that can provide both application layer firewall services and application layer proxy services. The bottom line is that when selecting this type of device or application, it is important to read the specifications to determine what true firewall services are provided. The specifications will distinguish between server and firewall, and between packet and application layer functions. When the firewall rather than an internal server is exposed to the outside world from within a network segment, it is considered deployed within a demilitarized zone, or DMZ (see Figure 12-8 later in this chapter for an example). Using this model, additional filtering devices are placed between the proxy server and internal systems, thereby restricting access to internal systems to the proxy server alone. Suppose an external user wanted to view a Web page from an organization's Web server. Rather than expose the Web server to direct traffic from users and potential attackers, the organization can install a proxy server, configured with the registered domain's URL. This proxy server receives Web page requests, accesses the Web server on behalf of external clients, and then returns the requested pages to the users. The primary disadvantage of application layer firewalls is that they are designed for a specific application layer protocol and cannot easily be reconfigured to work with other protocols.
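The cache-server behavior described above, intercepting requests and serving repeat requests without contacting the origin server, can be sketched in a few lines of Python. This is an illustrative model, not any vendor's product: the class name `CachingProxy` and the injected `origin_fetch` callable are invented for the example, and real proxies would add expiration, filtering, and protocol handling.

```python
# Minimal sketch of a caching proxy: it fetches content from the origin
# server on behalf of clients and answers repeat requests from its cache,
# so the origin server is never exposed to the clients directly.
class CachingProxy:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable that contacts the real server
        self.cache = {}                   # url -> cached content
        self.origin_hits = 0              # how often the origin was contacted

    def get(self, url):
        if url not in self.cache:         # cache miss: go to the origin server
            self.cache[url] = self.origin_fetch(url)
            self.origin_hits += 1
        return self.cache[url]            # cache hit: served locally

proxy = CachingProxy(lambda url: f"<page for {url}>")
proxy.get("/index.html")
proxy.get("/index.html")      # second request is served from the cache
print(proxy.origin_hits)      # the origin was contacted only once
```

Note that this models only the interception and caching role; as the text points out, a cache server by itself does not filter traffic, which is what separates it from a true proxy firewall.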
Stateful Packet Inspection Firewalls
The third category of firewalls, stateful packet inspection (SPI) firewalls, keep track of each network connection established between internal and external systems using a state table. State tables track the state and context of network traffic by recording which station sent which packet and when. Like earlier firewalls, SPI firewalls perform packet filtering, but whereas simple packet filtering firewalls merely allow or deny packets based on their addresses, an SPI firewall can restrict incoming packets to those that constitute responses to internal requests. If the SPI firewall receives an incoming packet that it cannot match in its state table, it defaults to performing traditional packet filtering against its rule base. If the traffic is allowed and becomes a conversation, the device updates its state table with that information. The primary disadvantage of this type of firewall is the additional processing required to manage and verify packets against the state table, which can expose the system to a denial-of-service (DoS) attack. In such an attack, the firewall is subjected to a large number of external packets, slowing it down as it attempts to compare all of the incoming packets first to the state table and then to the access control list (ACL). On the positive side, these firewalls can track connectionless packet traffic such as User Datagram Protocol (UDP) and remote procedure call (RPC) traffic. Whereas static packet filtering firewalls can only interpret traffic based on manually configured rule sets, dynamic packet filtering firewalls are able to react to network traffic, adjusting their rule base content and sequence.
They do so by understanding how the protocol functions and by opening and closing "holes" or "doors" in the firewall based on the information contained in the packet header, which allows specially screened packets to bypass the normal packet filtering rule set. Both SPI firewalls and application layer proxy firewalls are considered examples of dynamic packet filtering firewalls.

Unified Threat Management (UTM) Devices
One of the newest generations of firewalls isn't truly new at all, but a hybrid built from capabilities of modern networking equipment that can perform a variety of tasks according to the organization's needs. Known as Unified Threat Management (UTM) devices, they are categorized by their ability to perform the work of a stateful packet inspection firewall, network intrusion detection and prevention system, content filter, and spam filter, as well as a malware scanner and filter. UTM systems take advantage of increasing memory capacity and processor capability and can reduce the complexity associated with deploying, configuring, and integrating multiple networking devices. With the proper configuration, these devices are even able to "drill down" into the protocol layers and examine application-specific, encrypted, compressed, and/or encoded data. This is commonly referred to as deep packet inspection (DPI). The primary disadvantage of UTM systems is the creation of a single point of failure should the device experience technical issues or become the subject of an attack.2

Next-Generation (NextGen) Firewalls
Another recent development in firewall approaches is the Next-Generation Firewall (NextGen or NGFW). Similar to UTM devices, NextGen firewalls combine traditional firewall functions with other network security functions such as deep packet inspection, IDPSs, and the ability to decrypt encrypted traffic. The functions are so similar to those of UTM devices that the difference may lie only in the vendor's description.
According to Kevin Beaver of Principle Logic, LLC, the difference may be only one of scope: "Unified threat management (UTM) systems do a good job at a lot of things, while next-generation firewalls (NGFWs) do an excellent job at just a handful of things."3 Again, careful review of the solution's capabilities against the organization's needs will facilitate selection of the best equipment. Organizations with tight budgets may benefit from these "all-in-one" devices, while larger organizations with more staff and funding may prefer separate devices that can be managed independently and function more efficiently on their own platforms.

Firewall Implementation Architectures
Each of the firewall categories described here can be implemented in a number of architectural configurations. These configurations are sometimes mutually exclusive, but sometimes can be combined. The configuration that works best for a particular organization depends on the uses of its network, the organization's ability to develop and implement the architectures, and the available budget. Although literally hundreds of variations exist, three architectural implementations of firewalls are especially common: single bastion hosts, screened-host firewalls, and screened-subnet firewalls.

Single Bastion Host Architecture
Most organizations with an Internet connection use some form of device between their internal networks and the external service provider. In the single bastion host architecture, a single device configured to filter packets serves as the sole security point between the two networks and unfortunately represents a rich target for external attacks.
As shown in Figure 12-5 earlier in this chapter, the single bastion host architecture can be implemented as a packet filtering router, or it could be a firewall behind a router that is not configured for packet filtering. Any system, router, or firewall that is exposed to the untrusted network can be referred to as a bastion host. The bastion host is sometimes referred to as a sacrificial host because it stands alone on the network perimeter. This architecture is simply defined as the presence of a single protection device on the network perimeter, and it is commonplace in residential and small office/home office (SOHO) environments. Larger organizations typically look to implement architectures with more defense in depth, using additional security devices to provide a more robust defense strategy. The bastion host is usually implemented as a dual-homed host, as it contains two network interfaces: one connected to the external network and one connected to the internal network. All traffic must go through the device to move between the internal and external networks. Such an architecture lacks defense in depth, and the complexity of the ACLs used to filter the packets can grow and degrade network performance. An attacker who infiltrates the bastion host can discover the configuration of internal networks and possibly provide external sources with internal information. A technology known as network address translation (NAT) is often implemented with this architecture. NAT is a method of converting multiple real, routable external IP addresses to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address.
A related approach, called port address translation (PAT), converts a single real, valid, external IP address to special ranges of internal IP addresses; that is, a one-to-many approach in which one external address is mapped dynamically to a range of internal addresses by adding a unique port number when traffic leaves the private network and is placed on the public network. This unique number serves to identify which internal host is engaged in that specific network connection. The combination of the address and port (known as a socket) is then easily mapped to the internal address. Both of these approaches create a barrier to intrusion from outside the local network because the addresses used for the internal network cannot be routed over the public network. These special, non-routable addresses have three possible ranges:
• Organizations that need very large numbers of local addresses can use the 10.x.x.x range (10.0.0.0/8), which has more than 16.5 million usable addresses.
• Organizations that need a moderate number of addresses can use the 172.16.x.x-172.31.x.x range (172.16.0.0/12), which has more than 1 million usable addresses.
• Organizations with smaller needs can use the 192.168.x.x range (192.168.0.0/16), which has more than 65,500 usable addresses.
Taking advantage of NAT or PAT prevents external attacks from reaching internal machines with addresses in these ranges. This type of translation works by dynamically assigning addresses to internal communications and tracking the conversations with sessions to determine which incoming message is a response to which outgoing traffic. Figure 12-6 shows a typical configuration of a dual-homed host firewall that uses NAT or PAT and proxy access to protect the internal network.
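The PAT mechanism just described, one public address shared by many internal hosts and distinguished by unique port numbers, can be sketched as a small translation table. All names and numbers here are illustrative: `PatTable` is an invented class, the public address uses the 203.0.113.0/24 documentation range, and real NAT devices also track protocols and expire idle entries.

```python
# Sketch of port address translation (PAT): many internal (private) sockets
# share one public IP address, each assigned a unique public-side port.
import itertools

PUBLIC_IP = "203.0.113.7"   # the single routable external address (example value)

class PatTable:
    def __init__(self):
        self.next_port = itertools.count(40000)  # pool of unique public ports
        self.outbound = {}   # (internal ip, internal port) -> public port
        self.inbound = {}    # public port -> (internal ip, internal port)

    def translate_out(self, int_ip, int_port):
        key = (int_ip, int_port)
        if key not in self.outbound:             # new conversation: assign a port
            port = next(self.next_port)
            self.outbound[key] = port
            self.inbound[port] = key
        return (PUBLIC_IP, self.outbound[key])

    def translate_in(self, public_port):
        # Replies map back via the port; unsolicited traffic has no entry
        return self.inbound.get(public_port)

nat = PatTable()
print(nat.translate_out("192.168.1.10", 5000))   # ('203.0.113.7', 40000)
print(nat.translate_in(40000))                   # ('192.168.1.10', 5000)
print(nat.translate_in(40001))                   # None: no entry, so dropped
```

The last call shows why the text calls this a barrier to intrusion: an inbound packet that does not correspond to a tracked outbound conversation simply has nowhere to go.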
However, this approach has two disadvantages: if the dual-homed host is compromised, it can take out the connection to the external network, and as traffic volume increases, the dual-homed host can become overloaded. Compared to more complex solutions, though, this architecture provides strong protection with minimal expense.

[Figure 12-6 Dual-homed host firewall: a dual-homed host providing network address translation (NAT) sits between an external filtering router on the untrusted network and an internal filtering router on the trusted network, mapping public IP addresses to NAT-assigned local addresses and blocking unauthorized data packets in both directions.]

Screened-Host Architecture
The screened-host architecture combines the packet filtering router with a second, dedicated device, such as a proxy server or proxy firewall. This approach allows the router to screen packets to minimize the network traffic and load on the proxy, while the proxy examines an application layer protocol, such as HTTP, and performs the proxy services. To its advantage, a dual-homed screened host requires an external attacker to compromise two separate systems before gaining access to internal data. As a consequence, this configuration protects data more fully than a packet filtering router alone. Figure 12-7 shows a typical configuration of the screened-host architectural approach. Note that the bastion host could also be placed immediately behind the firewall in a dual-homed configuration.

Screened-Subnet Architecture
The screened-subnet architecture consists of a special network segment with one or more internal hosts located behind a packet filtering router; each host performs a role in protecting the trusted network. Many variants of the screened-subnet architecture exist.
The first general model uses two filtering routers, with one or more dual-homed bastion hosts between them, as was shown in Figure 12-6. In the second general model, illustrated in Figure 12-8, connections are routed as follows:
• Connections from the outside or untrusted network are routed through an external filtering router.
• Connections from the outside or untrusted network are routed into, and then out of, a routing firewall to the separate network segment known as the DMZ.
• Connections into the trusted internal network are allowed only from the DMZ bastion host servers.
Functionally, the difference between the screened-host architecture and the screened-subnet architecture is the addition of packet filtering behind the bastion host or hosts, which provides more security and restricts access to internal hosts only to traffic approved in the interior firewall device's rule set. As depicted in Figure 12-8, the screened subnet is an entire network segment that performs two functions: it protects the DMZ systems and information from outside threats, and it protects the internal networks by limiting how external connections can gain access to internal systems. Though extremely secure, the screened subnet can be expensive to implement and complex to configure and manage; the value of the information it protects must justify the cost.

[Figure 12-7 Screened-host firewall: an application-level firewall filters traffic from the untrusted network and passes it to a screened host, which provides proxy access to the trusted network.]

The DMZ can be a dedicated port on the firewall device linking a single bastion host, as shown in Figure 12-7, or it can be an area between two firewalls, as shown in Figure 12-8. Until recently, servers providing services via the untrusted network were commonly placed in the DMZ.
Examples include Web servers, FTP servers, and certain database servers. More recent strategies utilizing proxy servers have provided much more secure solutions. UTM systems could be deployed in virtually any of these architectures, according to the needs of the organization.

[Figure 12-8 Screened subnet (DMZ): an external filtering router screens traffic from the untrusted network into a demilitarized zone containing servers reached by proxy access; an internal filtering router then provides controlled access from the DMZ into the trusted network.]

Selecting the Right Firewall
When evaluating a firewall for your networks, ask the following questions:
1. What type of firewall technology offers the right balance between protection and cost for the needs of the organization?
2. What features are included in the base price? What features are available at extra cost? Are all cost factors known?
3. How easy is it to set up and configure the firewall? How accessible are the staff technicians who can competently configure the firewall?
4. Can the candidate firewall adapt to the growing network in the target organization?
Question 2 addresses another important issue: cost. A firewall's cost may put a certain make, model, or type out of reach for a particular security solution. As with all security decisions, the budgetary constraints stipulated by management must be taken into account. It is important to remember that the total cost of ownership for any piece of security technology, including firewalls, will almost always greatly exceed the initial purchase price. Costs associated with maintenance contracts, rule set acquisition or development, rule set validation, and signature subscriptions (for vendor-produced rules to filter current malware threats), as well as expenses for employee training, all add to the total cost of ownership.
Content Filters
Another type of tool that effectively protects the organization's systems from misuse and unintentional DoS conditions across networks is the content filter. Although technically not a firewall, a content filter (or Internet filter) allows administrators to restrict content that comes into a network. The most common application of a content filter is the restriction of access to Web sites with material that is not business-related, such as pornography or entertainment. Another application is the restriction of spam e-mail from outside sources. Content filters can consist of small add-on software for the home or office, such as ContentProtect, SpyAgent, Net Nanny, or K9, or major corporate applications, such as Barracuda's Web Filter, Novell's BorderManager, or Websense Cloud Web Security (formerly SurfControl WebDefense) from Raytheon. Some network monitoring and management tools, like LanGuard from GFI, include content filtering capabilities. Content filters help ensure that employees are not using network resources inappropriately. Unfortunately, these systems require extensive configuration and constant updating of the list of unacceptable destinations or restricted incoming e-mail source addresses. Some newer content filtering applications update the restricted database automatically, in the same way that some antivirus programs do. These applications match either a list of disapproved or approved Web sites, or key content words, such as "nude" and "sex." Of course, content creators work to bypass such restrictions by avoiding these trip words, creating additional problems for networking and security professionals. In response, some organizations have begun implementing a strategy of "that which is not permitted is forbidden," creating content filter rule sets that allow access only to specific sites, rather than trying to maintain lists of sites that cannot be visited.
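The two filtering strategies just contrasted, a blocklist of banned destinations and trip words versus an allowlist where anything not permitted is forbidden, can be sketched side by side. The host names and trip words below are illustrative placeholders, and real products match far more than whitespace-separated words.

```python
# Sketch of the two content-filtering strategies: blocklist ("ban what is
# listed") versus allowlist ("that which is not permitted is forbidden").
BLOCKLIST = {"badsite.example", "games.example"}          # known-bad hosts
ALLOWLIST = {"intranet.example", "supplier.example"}      # approved hosts
TRIP_WORDS = {"nude", "sex"}                              # banned content words

def blocklist_allows(host: str, page_text: str) -> bool:
    # Deny listed hosts and pages containing trip words; allow everything else.
    words = set(page_text.lower().split())
    return host not in BLOCKLIST and not (words & TRIP_WORDS)

def allowlist_allows(host: str) -> bool:
    # Allow only explicitly approved hosts; everything else is forbidden.
    return host in ALLOWLIST

print(blocklist_allows("news.example", "daily headlines"))  # True
print(blocklist_allows("news.example", "nude photos"))      # False: trip word
print(allowlist_allows("news.example"))                     # False: not approved
```

The comparison makes the maintenance trade-off concrete: the blocklist must be constantly updated as new sites appear and content creators dodge the trip words, while the allowlist stays short but blocks every legitimate site nobody thought to approve.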
Managing Firewalls
Any firewall device, whether a packet filtering router, bastion host, or other firewall implementation, must have its own set of configuration rules that regulates its actions. With packet filtering firewalls, these rules may be simple statements regulating source and destination addresses, specific protocol or port usage requests, or decisions to allow or deny certain types of requests. In all cases, a policy regarding the use of a firewall should be articulated before it is made operable.
In practice, configuring firewall rule sets can be something of a nightmare. Logic errors in the preparation of the rules can cause unintended behavior, such as allowing access instead of denying it, specifying the wrong port or service type, or causing the network to misroute traffic. These and myriad other mistakes can turn a device designed to protect communications into a choke point. For example, a novice firewall administrator might improperly configure a virus-screening e-mail gateway (think of it as a type of e-mail firewall), resulting in the blocking of all incoming e-mail instead of screening only e-mail that contains malicious code. Each firewall rule must be carefully crafted, placed into the list in the proper sequence, debugged, and tested. The proper rule sequence ensures that the most resource-intensive actions are performed after the most restrictive ones, thereby reducing the number of packets that undergo intense scrutiny. Because of the complexity of the process, the impact of incorrect configuration, and the need to conform to organizational practices, all firewall rule changes must be subject to the organization's usual change control procedures. In addition, most organizations that need load balancing and high availability will use multiple independent devices for firewall rule application, and these multiple devices must be kept in sync.
The ever-present need to balance performance against restrictions imposed by security practices is obvious in the use of firewalls. If users cannot work due to a security restriction, then the security administration will most likely be told by management to remove it. Organizations are much more willing to live with a potential risk than with certain failure.
Using a computer to protect a computer is fraught with problems that must be managed by careful preparation and continuous evaluation. For the most part, automated control systems, including firewalls, cannot learn from mistakes, and they cannot adapt to changing situations. They are limited by the constraints of their programming and rule sets in the following ways:
• Firewalls are not creative and cannot make sense of human actions outside the range of their programmed responses.
• Firewalls deal strictly with defined patterns of measured observation. These patterns are known to possible attackers and can be used to their benefit in an attack.
• Firewalls are computers themselves and are thus prone to programming errors, flaws in rule sets, and inherent vulnerabilities.
• Firewalls are designed to function within the limits of hardware capacity and thus can only respond to patterns of events that happen in an expected and reasonably simultaneous sequence.
• Firewalls are designed, implemented, configured, and operated by people and are subject to the expected series of mistakes from human error.4
There are also a number of administrative challenges to the operation of firewalls:
1. Training—Most managers think of a firewall as just another device, more or less similar to the computers already humming in the rack. If you get time to read manuals, you are lucky.
2.
Uniqueness—You have mastered your firewall, and now every new configuration requirement is just a matter of a few clicks in the Telnet window; however, each brand of firewall is different, and the new e-commerce project just brought you a new firewall running on a different OS. 3. Responsibility—Because you are the firewall guy, suddenly everyone assumes that any- thing to do with computer security is your responsibility. 4. Administration—Being a firewall administrator for a medium or large organization should be a full-time job; however, that's hardly ever the case.5 Laura Taylor, Chief Technology Officer and founder of Relevant Technologies, recommends the following practices for firewall use: • All traffic from the trusted network is allowed out. This way, members of the organi- zation can access the services they need. Filtering and logging outbound traffic is possi- ble when indicated by specific organizational policy goals. • The firewall device is never accessible directly from the public network. Almost all access to the firewall device is denied to internal users as well. Only authorized firewall administrators access the device via secure authentication mechanisms, with preference for a method based on cryptographically strong authentication using two-factor access control techniques. 542 Chapter 12 Copyright 2017 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. WCN 02-200-203 • Simple Mail Transport Protocol (SMTP) data is allowed to pass through the firewall, but all of it is routed to a well-configured SMTP gateway to filter and route messaging traffic securely. • All Internet Control Message Protocol (ICMP) data is denied. Known as the ping service, this is a common method for hacker reconnaissance and should be turned off to prevent snooping. • Telnet/terminal emulation access to all internal servers from the public networks is blocked. 
At the very least, Telnet access to the organization's Domain Name Service (DNS) server should be blocked to prevent illegal zone transfers and to prevent hackers from taking down the organization's entire network. If internal users need to reach the organization's network from outside the firewall, use a virtual private network (VPN) client or other secure authentication system to allow this kind of access.
• When Web services are offered outside the firewall, HTTP traffic is prevented from reaching internal networks through some form of proxy access or DMZ architecture. That way, if any employees are running Web servers for internal use on their desktops, the services will be invisible to the outside Internet. If your Web server is located behind the firewall, you need to allow HTTP or HTTPS data through for the Internet at large to view it. The best solution is to place Web servers containing critical data inside the network and to use proxy services from a DMZ (screened network segment). It is also advisable to restrict incoming HTTP traffic to internal network addresses so that such traffic must be responding to requests originating at internal addresses. This restriction can be accomplished through NAT, through firewalls that support stateful inspection, or at the proxy server itself. All other incoming HTTP traffic should be blocked. If the Web servers contain only advertising, they should be placed in the DMZ and rebuilt when (not if) they are compromised.6

For additional reading on firewalls and firewall management, visit www.techtarget.org.
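Taylor's recommendations can be expressed as an ordered, first-match rule list. The zone labels, protocol names, tuple layout, and the specific ordering below are illustrative assumptions for the sketch, not Taylor's literal configuration; note that the restrictive denials are listed before the broad allows, per the sequencing advice earlier in the chapter.

```python
# Hypothetical encoding of the recommended firewall practices as rules.
TAYLOR_RULES = [
    # (protocol, source zone, destination, action)
    ("any",    "any",         "firewall",     "deny"),   # firewall itself never directly reachable
    ("icmp",   "any",         "any",          "deny"),   # all ICMP denied to frustrate reconnaissance
    ("telnet", "public_net",  "any",          "deny"),   # no Telnet from public networks
    ("any",    "trusted_net", "any",          "allow"),  # traffic from the trusted network goes out
    ("smtp",   "any",         "smtp_gateway", "allow"),  # mail passes only via the filtering gateway
    ("http",   "public_net",  "dmz_proxy",    "allow"),  # Web traffic terminates at the DMZ proxy
    ("any",    "any",         "any",          "deny"),   # default: deny everything else
]

def first_match(protocol, source, destination):
    """Return the action of the first rule matching the traffic."""
    for p, s, d, action in TAYLOR_RULES:
        if p in ("any", protocol) and s in ("any", source) and d in ("any", destination):
            return action
    return "deny"

print(first_match("smtp", "public_net", "smtp_gateway"))  # -> allow
print(first_match("http", "public_net", "web_internal"))  # -> deny
```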


Access Controls and Biometrics

Key Terms

asynchronous token: An authentication component in the form of a token (a card or key fob that contains a computer chip and a liquid crystal display) that shows a computer-generated number used to support remote login authentication. This token does not require calibration with the central authentication server; instead, it uses a challenge/response system.

biometrics: The use of physiological characteristics to provide authentication for a provided identification. Biometric means "life measurement" in Greek.

crossover error rate (CER): Also called the equal error rate; the point at which the rate of false rejections equals the rate of false acceptances.

dumb card: An authentication card that contains digital user data, such as a personal identification number (PIN), against which user input is compared.

false accept rate: The rate at which fraudulent users or nonusers are allowed access to systems or areas as a result of a failure in the biometric device. This failure is also known as a Type II error or a false positive.

false reject rate: The rate at which authentic users are denied or prevented access to authorized areas as a result of a failure in the biometric device. This failure is also known as a Type I error or a false negative.

passphrase: A plain-language phrase, typically longer than a password, from which a virtual password is derived.

password: A secret word or combination of characters that only the user should know; used to authenticate the user.

smart card: An authentication component similar to a dumb card that contains a computer chip to verify and validate several pieces of information instead of just a PIN.

synchronous token: An authentication component in the form of a token (a card or key fob that contains a computer chip and a liquid crystal display) that shows a computer-generated number used to support remote login authentication. This token must be calibrated with the corresponding software on the central authentication server.

virtual password: The derivative of a passphrase. See passphrase.
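The asynchronous (challenge/response) token can be sketched as follows. This is a simplified illustration assuming the server and token share a secret key; the key, the 6-digit truncation, and the function names are invented for the example, and real challenge/response tokens add counters, standardized truncation, and replay protection.

```python
# Sketch of an asynchronous token: the server issues a challenge, the
# token computes a short response from a shared secret, and the server
# verifies by recomputing it. No clock calibration is needed, which is
# the point of the asynchronous design.
import hashlib
import hmac
import secrets

SHARED_KEY = b"example-shared-secret"  # provisioned into the token at issue

def token_response(challenge: bytes, key: bytes = SHARED_KEY) -> str:
    """What the hand-held token computes and displays for a challenge."""
    digest = hmac.new(key, challenge, hashlib.sha256).digest()
    # Truncate to a short decimal code, as a token's LCD would show
    return str(int.from_bytes(digest[:4], "big") % 10**6).zfill(6)

def server_verify(challenge: bytes, entered_code: str) -> bool:
    """Server recomputes the expected response and compares in constant time."""
    return hmac.compare_digest(token_response(challenge), entered_code)

challenge = secrets.token_bytes(8)     # server issues a fresh challenge
code = token_response(challenge)       # user reads this off the token's display
print(server_verify(challenge, code))  # -> True
```

A synchronous token differs only in where the changing input comes from: instead of a server-issued challenge, both sides derive it from a calibrated clock or counter.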

Password Cracking Times

Case-insensitive Passwords Using a Standard Alphabet Set (No Numbers or Special Characters)

Password Length | Odds of Cracking: 1 in (number of characters ^ password length) | Estimated Time to Crack*
 8 | 208,827,064,576                             | 1.01 seconds
 9 | 5,429,503,678,976                           | 26.2 seconds
10 | 141,167,095,653,376                         | 11.4 minutes
11 | 3,670,344,486,987,780                       | 4.9 hours
12 | 95,428,956,661,682,200                      | 5.3 days
13 | 2,481,152,873,203,740,000                   | 138.6 days
14 | 64,509,974,703,297,200,000                  | 9.9 years
15 | 1,677,259,342,285,730,000,000               | 256.6 years
16 | 43,608,742,899,428,900,000,000              | 6,672.9 years

Case-sensitive Passwords Using a Standard Alphabet Set (with Numbers and 20 Special Characters)

Password Length | Odds of Cracking: 1 in (number of characters ^ password length) | Estimated Time to Crack*
 8 | 2,044,140,858,654,980                       | 2.7 hours
 9 | 167,619,550,409,708,000                     | 9.4 days
10 | 13,744,803,133,596,100,000                  | 2.1 years
11 | 1,127,073,856,954,880,000,000               | 172.5 years
12 | 92,420,056,270,299,900,000,000              | 14,141.9 years
13 | 7,578,444,614,164,590,000,000,000           | 1,159,633.8 years
14 | 621,432,458,361,496,000,000,000,000         | 95,089,967.6 years
15 | 50,957,461,585,642,700,000,000,000,000      | 7,797,377,343.5 years
16 | 4,178,511,850,022,700,000,000,000,000,000   | 639,384,942,170.1 years
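The table's arithmetic is straightforward to reproduce: the search space is alphabet size raised to the password length (26 letters in the first table; 26 × 2 + 10 digits + 20 specials = 82 characters in the second), and time-to-crack divides that by a guessing rate. The rate below is back-calculated from the table's own entries and is an assumption, not a cited benchmark.

```python
# Reproduce the password-table arithmetic.
GUESSES_PER_SECOND = 2.07e11  # assumed; inferred from the table's numbers

def search_space(alphabet_size: int, length: int) -> int:
    """Number of possible passwords: alphabet_size ** length."""
    return alphabet_size ** length

def seconds_to_crack(alphabet_size: int, length: int) -> float:
    """Worst-case exhaustive-search time at the assumed guessing rate."""
    return search_space(alphabet_size, length) / GUESSES_PER_SECOND

# Case-insensitive letters only: 26 characters
print(search_space(26, 8))   # 208,827,064,576 -> about 1 second
# Mixed case + digits + 20 specials: 82 characters
print(search_space(82, 8))   # ~2.04e15 -> a few hours
```

Each extra character multiplies the search space by the alphabet size, which is why the times in the table grow so steeply down each column.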


Evaluating Biometrics

Biometric technologies are generally evaluated according to three basic criteria:

• False reject rate: the percentage of authorized users who are denied access
• False accept rate: the percentage of unauthorized users who are allowed access
• Crossover error rate: the point at which the number of false rejections equals the number of false acceptances

False Reject Rate The false reject rate, or rate of rejection of authorized users, is also known as a Type I error or a false negative. Rejection of an authorized individual represents not a threat to security but a hindrance to legitimate use. Consequently, it is often not seen as a serious problem until the rate is high enough to irritate users.

False Accept Rate The false accept rate, or rate of acceptance of unauthorized users, is also known as a Type II error or a false positive, and represents a serious security breach. Often, multiple authentication measures must be used to back up a device whose failure would otherwise result in erroneous authorization. The false accept rate is obviously more serious than the false reject rate. However, adjusting the sensitivity of most biometric systems to reduce the false accept rate will dramatically increase the false reject rate and significantly hamper normal operations.

Crossover Error Rate The crossover error rate (CER), also called the equal error rate, is considered the optimal operating point for biometrics-based systems because it represents a balance between the two error rates. CERs are commonly used to compare various biometrics but may vary by manufacturer. A biometric device that provides a CER of 1 percent is considered superior to one with a CER of 5 percent, for example.
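The trade-off between the two error rates can be made concrete with a small calculation. The match scores below are made up for illustration; the sketch sweeps the acceptance threshold, computes both rates at each setting, and reports the threshold where they are closest to equal, which is the crossover error rate.

```python
# Illustrative FRR/FAR/CER computation over hypothetical match scores.
def rates(genuine_scores, impostor_scores, threshold):
    """Accept when score >= threshold. FRR counts genuine users rejected;
    FAR counts impostors accepted."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

genuine  = [0.91, 0.85, 0.78, 0.95, 0.88, 0.70, 0.93, 0.82]  # authorized users
impostor = [0.40, 0.55, 0.62, 0.35, 0.71, 0.48, 0.58, 0.66]  # unauthorized users

# Sweep thresholds; the CER sits where FRR and FAR are closest to equal.
best = min((abs(frr - far), t, frr, far)
           for t in [i / 100 for i in range(0, 101)]
           for frr, far in [rates(genuine, impostor, t)])
_, cer_threshold, frr, far = best
print(f"threshold={cer_threshold:.2f}  FRR={frr:.3f}  FAR={far:.3f}")
# -> threshold=0.71  FRR=0.125  FAR=0.125
```

Raising the threshold above the crossover point drives FAR down but FRR up, which is exactly the tension described above: tightening security hampers legitimate users.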
Acceptability of Biometrics

A balance must be struck between the acceptability of a system to its users and the effectiveness of the same system.

[Figure 12-4 Recognition characteristics: hand geometry, hand and palm print, signature recognition, voice recognition, fingerprint, iris recognition, retinal recognition, facial geometry]

Many of the reliable, effective biometric systems are perceived as being somewhat intrusive by users. Organizations implementing biometrics must carefully balance a system's effectiveness against its perceived intrusiveness and acceptability to users. The rated effectiveness of a system is roughly inverse to its acceptability, as shown in Table 12-2. Since that study originally came out, iris scanning has experienced rapid growth in popularity, due mainly to its use of inexpensive camera equipment and the acceptability of the technology; iris scanners need only a snapshot of the eye rather than an intrusive scan. As a result, iris scanning is ranked lower than retina scanning in terms of effectiveness (iris scanning results in more false negatives), but it is believed to be the most accepted biometric, even compared to keystroke pattern recognition.

For more information on using biometrics for identification and authentication, read NIST SP 800-76-1 and SP 800-76-2 at http://csrc.nist.gov/publications/PubsSPs.html. You can also visit the Biometric Consortium Web site at www.biometrics.org/.

