Chapter 6 Security Technology: Access Controls, Firewalls, and VPNs
Firewall Processing Modes
-Firewalls fall into several major categories of processing modes: packet-filtering firewalls, application layer proxy firewalls, media access control layer firewalls, and hybrids. -Hybrid firewalls combine elements of the other modes; in practice, most firewalls fall into this category because most implementations use multiple approaches.
Something you know
-This factor of authentication relies on what the unverified user or system knows and can recall—for example, a password, passphrase, or other unique authentication code, such as a personal identification number (PIN). -A password is a private word or combination of characters that only the user should know. -A passphrase is a series of characters that is typically longer than a password and can be used to derive a virtual password. By using the words of the passphrase as cues to create a stream of unique characters, you can create a longer, stronger password that is easy to remember. -Users increasingly employ longer passwords or passphrases to provide effective security. As a result, it is becoming increasingly difficult to track the multitude of system usernames and passwords needed to access information for a typical business or personal transaction. -A common method of keeping up with so many passwords is to write them down, which is a cardinal sin in information security. A better solution is automated password-tracking software, such as a password manager.
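One way to picture the passphrase-to-virtual-password derivation described above is a simple rule that uses each word as a cue. This is a minimal sketch of one possible derivation scheme, invented for illustration; it is not a standard algorithm, and real derivations vary.

```python
# Illustrative sketch: derive a "virtual password" from a passphrase by
# taking the first character of each word. The scheme itself is an
# assumption for demonstration, not a prescribed method.
def derive_virtual_password(passphrase: str) -> str:
    return "".join(word[0] for word in passphrase.split())

# The passphrase is easy to remember; the derived string is not obvious.
print(derive_virtual_password("May The Force Be With You Always 1977!"))
# MTFBWYA1
```

In practice a user would also mix case, digits, and punctuation into the derived string, which this sketch keeps minimal for clarity.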
Secure VPN
A VPN implementation that uses security protocols to encrypt traffic transmitted across unsecured public networks. -Secure VPNs use security protocols like IPSec to encrypt traffic transmitted across unsecured public networks like the Internet
Acceptability of Biometrics
A balance must be struck between a security system's acceptability to users and its effectiveness in maintaining security. Many biometric systems that are highly reliable and effective are considered intrusive by users. As a result, many information security professionals don't implement them, in an effort to avoid confrontation and possible user boycott of the biometric controls. -Biometric technologies can be ranked by both effectiveness and user acceptance. Interestingly, the two rankings are almost exactly opposite.
Attribute
A characteristic of a subject (user or system) that can be used to restrict access to an object. Also known as subject attribute.
Hybrid VPN
A combination of trusted and secure VPN implementations. -A hybrid VPN combines the two, providing encrypted transmissions (as in a secure VPN) over some or all of a trusted VPN network.
Remote Authentication Dial-In User Service (RADIUS)
A computer connection system that centralizes the management of user authentication by placing the responsibility of reauthenticating each user on a central authentication server.
Application layer proxy firewall (aka Application Firewall)
A device capable of functioning both as a firewall and an application layer proxy server. -The application layer proxy firewall, also known as an application firewall, is frequently installed on a dedicated computer separate from the filtering router, but it is commonly used in conjunction with a filtering router. The application firewall is also known as a PROXY SERVER (or REVERSE PROXY) because it can be configured to run special software that acts as a proxy for a service request. -For example, an organization that runs a Web server can avoid exposing it to direct user traffic by installing a proxy server configured with the registered domain's URL. This proxy server receives requests for Web pages, accesses the Web server on behalf of the external client, and returns the requested pages to the users. These servers can store the most recently accessed pages in their internal cache, and are thus also called cache servers. -The benefits from this type of implementation are significant. For one, the proxy server is placed in an unsecured area of the network or in the demilitarized zone (DMZ) so that it is exposed to the higher levels of risk from less trusted networks, rather than exposing the Web server to such risks. Additional filtering routers can be implemented behind the proxy server, limiting access to the more secure internal system and providing further protection. -The primary disadvantage of application layer proxy firewalls is that they are designed for one or a few specific protocols and cannot easily be reconfigured to protect against attacks on other protocols. Because these firewalls work at the application layer by definition, they are typically restricted to a single application, such as FTP, Telnet, HTTP, SMTP, or SNMP. The processing time and resources necessary to read each packet down to the application layer diminish the ability of these firewalls to handle multiple types of applications.
Bastion host (Single Bastion Host Firewall Architecture)
A device placed between an external, untrusted network and an internal, trusted network. Also known as a sacrificial host, a bastion host serves as the sole target for attack and should therefore be thoroughly secured. -A single firewall that provides protection behind the organization's router, the single bastion host architecture can be implemented as a packet filtering router, or it could be a firewall behind a router that is not configured for packet filtering. Any system, router, or firewall that is exposed to the untrusted network can be referred to as a bastion host. -The BASTION HOST is sometimes referred to as a SACRIFICIAL HOST because it stands alone on the network perimeter. -This architecture is simply defined as the presence of a single protection device on the network perimeter. It is commonplace in residential SOHO environments. Larger organizations typically look to implement architectures with more defense in depth, with additional security devices designed to provide a more robust defense strategy. -The bastion host is usually implemented as a dual-homed host because it contains two network interfaces: one that is connected to the external network and one that is connected to the internal network. All traffic must go through the device to move between the internal and external networks. Such an architecture lacks defense in depth, and the complexity of the ACLs used to filter the packets can grow and degrade network performance. An attacker who infiltrates the bastion host can discover the configuration of internal networks and possibly provide external sources with internal information. -Implementation of the bastion host architecture often makes use of Network Address Translation (NAT). RFC 2663 uses the term network address and port translation (NAPT) to describe both NAT and Port Address Translation (PAT).
Screened host architecture
A firewall architectural model that combines the packet filtering router with a second, dedicated device such as a proxy server or proxy firewall. -A screened host architecture combines the packet-filtering router with a separate, dedicated firewall, such as an application proxy server, which retrieves information on behalf of other system users and often caches copies of Web pages and other needed information on its internal drives to speed up access. -This approach allows the router to prescreen packets to minimize the network traffic and load on the internal proxy. -The application proxy examines an application layer protocol, such as HTTP, and performs the proxy services. Because an application proxy may retain working copies of some Web documents to improve performance, unanticipated losses can result if it is compromised and the documents were not designed for general access. -As such, the screened host firewall may present a promising target because compromise of the bastion host can lead to attacks on the proxy server that could disclose the configuration of internal networks and possibly provide attackers with internal information. -To its advantage, this configuration requires the external attack to compromise two separate systems before the attack can access internal data. In this way, the bastion host protects the data more fully than the router alone.
Screened subnet architecture (With DMZ)
A firewall architectural model that consists of one or more internal bastion hosts located behind a packet filtering router on a dedicated network segment, with each host performing a role in protecting the trusted network. -The dominant architecture today is the screened subnet used with a DMZ. The DMZ can be a dedicated port on the firewall device linking a single bastion host, or it can be connected to a screened subnet. -Until recently, servers that provided services through an untrusted network were commonly placed in the DMZ. Examples include Web servers, file transfer protocol (FTP) servers, and certain database servers. More recent strategies using proxy servers have provided much more secure solutions. A common arrangement is a subnet firewall that consists of two or more internal bastion hosts behind a packet-filtering router, with each host protecting the trusted network; many variants of this architecture exist. -The first general model consists of two filtering routers, with one or more dual-homed bastion hosts between them. -In the second general model, the connections are routed as follows: • Connections from the outside or untrusted network are routed through an external filtering router. • Connections from the outside or untrusted network are routed into—and then out of—a routing firewall to the separate network segment known as the DMZ. • Connections into the trusted internal network are allowed only from the DMZ bastion host servers. -The screened subnet architecture is an entire network segment that performs two functions. First, it protects the DMZ systems and information from outside threats by providing a level of intermediate security, which means the network is more secure than general public networks but less secure than the internal network. Second, the screened subnet protects the internal networks by limiting how external connections can gain access to them.
Although extremely secure, the screened subnet can be expensive to implement and complex to configure and manage. The value of the information it protects must justify the cost. -DMZ - Creation of Extranet: An EXTRANET is a segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public.
Media Access Control layer Firewall
A firewall designed to operate at the media access control sublayer of the network's data link layer (Layer 2). -While not as well known or widely referenced as the firewall approaches described in the previous sections, media access control layer firewalls make filtering decisions based on the specific host computer's identity, as represented by its media access control (MAC) or network interface card (NIC) address, which operates at the data link layer of the OSI model or the subnet layer of the TCP/IP model. -Thus, media access control layer firewalls link the addresses of specific host computers to ACL entries that identify the specific types of packets that can be sent to each host, and block all other traffic. While media access control layer firewalls are also referred to as MAC layer firewalls, we don't do so here to avoid confusion with mandatory access controls (MACs).
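The linkage described above, from a specific NIC address to an ACL entry listing the packet types that host may receive, can be sketched as a lookup table. The addresses, packet-type labels, and table layout below are all invented for illustration; no particular product works exactly this way.

```python
# Minimal sketch of MAC-layer filtering: each known NIC (MAC) address maps
# to the set of packet types it is allowed to receive; all other traffic,
# and all unknown hosts, are blocked. Entries here are hypothetical.
MAC_ACL = {
    "00:1a:2b:3c:4d:5e": {"HTTP", "HTTPS"},  # e.g., a Web server's NIC
    "00:1a:2b:3c:4d:5f": {"SMTP"},           # e.g., a mail server's NIC
}

def mac_filter(dest_mac: str, packet_type: str) -> bool:
    """Allow the packet only if the host's ACL entry lists this type."""
    return packet_type in MAC_ACL.get(dest_mac, set())

print(mac_filter("00:1a:2b:3c:4d:5e", "HTTP"))   # allowed by the entry
print(mac_filter("00:1a:2b:3c:4d:5e", "SMTP"))   # blocked: not listed
print(mac_filter("aa:bb:cc:dd:ee:ff", "HTTP"))   # blocked: unknown host
```

The default-deny behavior (unknown hosts get an empty set) mirrors the "block all other traffic" rule in the definition.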
Dynamic packet-filtering firewall
A firewall type that can react to network traffic and create or modify configuration rules to adapt. -A dynamic packet-filtering firewall can react to an emergent event and update or create rules to deal with that event. This reaction could be positive, as in allowing an internal user to engage in a specific activity upon request, or negative, as in dropping all packets from a particular address when an increased presence of a particular type of malformed packet is detected. -While static packet-filtering firewalls allow entire sets of one type of packet to enter in response to authorized requests, dynamic packet filtering allows only a particular packet with a particular source, destination, and port address to enter. -This filtering works by opening and closing "doors" in the firewall based on the information contained in the packet header, which makes dynamic packet filters an intermediate form between traditional static packet filters and application proxies.
Stateful Packet Inspection (SPI) Firewall
A firewall type that keeps track of each network connection between internal and external systems using a state table and that expedites the filtering of those communications. Also known as a stateful inspection firewall. SPI firewalls, also called stateful inspection firewalls, keep track of each network connection between internal and external systems using a STATE TABLE. -Like first-generation firewalls, stateful inspection firewalls perform packet filtering, but they take it a step further. Whereas simple packet-filtering firewalls only allow or deny certain packets based on their address, a stateful firewall can expedite incoming packets that are responses to internal requests. If the stateful firewall receives an incoming packet that it cannot match in its state table, it refers to its ACL to determine whether to allow the packet to pass. -The primary disadvantage of this type of firewall is the additional processing required to manage and verify packets against the state table. Without this processing, the system is vulnerable to a DoS or DDoS attack. In such an attack, the system receives a large number of external packets, which slows the firewall because it attempts to compare all of the incoming packets first to the state table and then to the ACL. -On the positive side, these firewalls can track connectionless packet traffic, such as UDP and remote procedure calls (RPC) traffic. -Dynamic SPI firewalls keep a dynamic state table to make changes to the filtering rules within predefined limits, based on events as they happen.
Static packet-filtering firewall
A firewall type that requires the configuration rules to be manually created, sequenced, and modified within the firewall. -Static packet filtering requires that the filtering rules be developed and installed with the firewall. The rules are created and sequenced by a person who either directly edits the rule set or uses a programmable interface to specify the rules and the sequence. Any changes to the rules require human intervention. This type of filtering is common in network routers and gateways.
Packet-filtering firewall
A networking device that examines the header information of data packets that come into a network and determines whether to drop them (deny) or forward them to the next network connection (allow), based on its configuration rules. -The packet-filtering firewall examines the header information of data packets that come into a network. A packet-filtering firewall installed on a TCP/IP-based network typically functions at the IP layer and determines whether to deny (drop) a packet or allow (forward) it to the next network connection, based on the rules programmed into the firewall. -Packet-filtering firewalls examine every incoming packet header and can selectively filter packets based on header information such as destination address, source address, packet type, and other key information. -Packet-filtering firewalls scan network data packets looking for compliance with the rules of the firewall's database or violations of those rules. Filtering firewalls inspect packets at the network layer, or Layer 3, of the Open Systems Interconnect (OSI) model, which represents the seven layers of networking processes. -The restrictions most commonly implemented in packet-filtering firewalls are based on a combination of the following: • IP source and destination address • Direction (inbound or outbound) • Protocol, for firewalls capable of examining the IP protocol layer • Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) source and destination port requests, for firewalls capable of examining the TCP/UDP layer -Packet structure varies depending on the nature of the packet. The two primary service types are TCP and UDP. -Simple firewall models examine two aspects of the packet header: the destination and source address. They enforce address restrictions through ACLs, which are created and modified by the firewall administrators. -The ability to restrict a specific service is now considered standard in most routers and is invisible to the user.
Unfortunately, such systems are unable to detect whether packet headers have been modified, an advanced technique used in IP spoofing and other attacks. -The three subsets of packet-filtering firewalls are: static packet filtering, dynamic packet filtering, and stateful packet inspection (SPI). All three enforce address restrictions: rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.
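The first-match rule processing described above can be sketched in a few lines. This is a hedged illustration, not any vendor's implementation: the rule fields, addresses, and the implicit default-deny at the end are all assumptions chosen to mirror the text.

```python
# Sketch of static packet filtering: rules are checked in sequence against
# header fields (source address, destination port); the first matching
# rule's action wins, and an unmatched packet is denied by default.
import ipaddress

RULES = [  # hypothetical rule set, ordered as an administrator would sequence it
    {"src": "10.10.0.0/16", "dst_port": 25, "action": "deny"},   # block SMTP from this subnet
    {"src": "0.0.0.0/0",    "dst_port": 80, "action": "allow"},  # allow Web traffic from anywhere
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["dst_port"]):
            return rule["action"]
    return "deny"  # implicit default-deny when no rule matches

print(filter_packet("10.10.3.4", 25))    # deny  (matches rule 1)
print(filter_packet("203.0.113.9", 80))  # allow (matches rule 2)
print(filter_packet("203.0.113.9", 22))  # deny  (no rule matches)
```

Because matching is strictly top-down, rule sequencing matters: the same packet can be allowed or denied depending on rule order, which is why the text stresses that a person must create and sequence static rules.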
Passphrase
A plain-language phrase, typically longer than a password, from which a virtual password is derived.
Virtual Private Network (VPN)
A private, secure network operated over a public and insecure network. A VPN keeps the contents of the network messages hidden from observers who may have access to public traffic. -The VPNC defined three VPN technologies: trusted VPNs, secure VPNs, and hybrid VPNs. A VPN that proposes to offer a secure and reliable capability while relying on public net-works must accomplish the following, regardless of the specific technologies and protocols being used: • Encapsulation of incoming and outgoing data, in which the native protocol of the client is embedded within the frames of a protocol that can be routed over the public network and be usable by the server network environment. • Encryption of incoming and outgoing data to keep the data contents private while in transit over the public network, but usable by the client and server computers and/or the local networks on both ends of the VPN connection. • Authentication of the remote computer and perhaps the remote user as well. Authentication and subsequent user authorization to perform specific actions are predicated on accurate and reliable identification of the remote system and user.
Reverse Proxy
A proxy server that most commonly retrieves information from inside an organization and provides it to a requesting user or system outside the organization.
Mandatory Access Control (MAC)
A required, structured data classification scheme that rates each collection of information as well as each user. These ratings are often referred to as sensitivity or classification levels. -MACs are also a form of lattice-based, nondiscretionary access controls that use data classification schemes; they give users and data owners limited control over access to information resources. -In a data classification scheme, each collection of information is rated, and all users are rated to specify the level of information they may access. These ratings are often referred to as sensitivity levels, and they indicate the level of confidentiality the information requires.
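The rating comparison at the heart of MAC can be shown as a dominance check: a subject may read an object only if the subject's clearance meets or exceeds the object's classification. The level names and ordering below are a common illustrative scheme, not mandated by any standard.

```python
# Sketch of a mandatory access control read check: both subjects and
# objects carry sensitivity ratings, and read access requires the
# subject's clearance to dominate the object's classification.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def mac_read_allowed(user_clearance: str, data_classification: str) -> bool:
    """Allow reading only when clearance >= classification."""
    return LEVELS[user_clearance] >= LEVELS[data_classification]

print(mac_read_allowed("secret", "confidential"))  # True: clearance dominates
print(mac_read_allowed("public", "secret"))        # False: insufficient clearance
```

Note that neither the user nor the data owner can change these ratings, which is what makes the control mandatory rather than discretionary.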
Password
A secret word or combination of characters that only the user should know; a password is used to authenticate the user.
Next Generation Firewall (NextGen or NGFW) (Type of Hybrid Firewall)
A security appliance that delivers unified threat management capabilities in a single appliance. -Similar to UTM devices, NextGen firewalls combine traditional firewall functions with other network security functions, such as deep packet inspection, IDPSs, and the ability to decrypt encrypted traffic. The functions are so similar to those of UTM devices that the difference may lie only in the vendor's description. -The only difference may be one of scope: "Unified Threat Management systems do a good job at a lot of things, while Next Generation Firewalls do an excellent job at just a handful of things." -Organizations with tight budgets may benefit from "all-in-one" devices (UTM), while larger organizations with more staff and funding may prefer separate devices that can be managed independently and function more efficiently on their own platforms (NextGen).
Extranet
A segment of the DMZ where additional authentication and authorization controls are put into place to provide services that are not available to the general public.
Proxy Server
A server that exists to intercept requests for information from external users and provide the requested information by retrieving it from an internal server, thus protecting and minimizing the demand on internal servers. Some proxy servers are also cache servers.
Content Filter
A software program or hardware/software appliance that allows administrators to restrict content that comes into or leaves a network; for example, restricting user access to Web site material that is not related to business, such as pornography or entertainment. -A content filter is another utility that can help protect an organization's systems from misuse and unintentional denial-of-service problems. A content filter is a software filter—technically not a firewall—that allows administrators to restrict access to content from within a network. It is essentially a set of scripts or programs that restricts user access to certain networking protocols and Internet locations, or that restricts users from receiving general types or specific examples of Internet content. -Some content filters are combined with reverse proxy servers, which is why many are referred to as reverse firewalls, as their primary purpose is to restrict internal access to external material. In most common implementation models, the content filter has two components: rating and filtering. -The rating is like a set of firewall rules for Web sites and is common in residential content filters. The rating can be complex, with multiple access control settings for different levels of the organization, or it can be simple, with a basic allow/deny scheme like that of a firewall. -The filtering is a method used to restrict specific access requests to identified resources, which may be Web sites, servers, or other resources the content filter administrator configures. The result is like a reverse ACL (technically speaking, a capabilities table); an ACL normally records a set of users who have access to resources, but the control list records resources that the user cannot access. -One use of content filtering technology is to implement DATA LOSS PREVENTION. When implemented, network traffic is monitored and analyzed.
If patterns of use and keyword analysis reveal that high-value information is being transferred, an alert may be invoked or the network connection may be interrupted.
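The two components named above, rating and filtering, can be sketched as a pair of tables: one assigning each site a category, and one listing the categories the policy denies. The host names and category labels are invented for illustration.

```python
# Illustrative content filter: a rating table (per-site categories) feeds
# a filtering policy (denied categories). Unrated sites are allowed here;
# a stricter policy could deny them instead.
SITE_RATINGS = {  # hypothetical ratings, like firewall rules for Web sites
    "news.example.com": "news",
    "games.example.com": "entertainment",
}
DENIED_CATEGORIES = {"entertainment", "adult"}  # not related to business

def allow_request(host: str) -> bool:
    """Deny the request if the site's rating falls in a denied category."""
    category = SITE_RATINGS.get(host, "uncategorized")
    return category not in DENIED_CATEGORIES

print(allow_request("news.example.com"))   # True: 'news' is permitted
print(allow_request("games.example.com"))  # False: 'entertainment' is denied
```

This mirrors the "reverse ACL" idea in the text: the list records what users cannot reach rather than who can reach a resource.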
Data Loss Prevention
A strategy to gain assurance that the users of a network do not send high-value information or other critical information outside the network.
State Table
A tabular record of the state and context of each packet and conversation between an internal and external user or system. A state table is used to expedite traffic filtering. -A state table tracks the state and context of each packet in the conversation by recording which station sent what packet and when. -A state table looks like a firewall rule set but has additional information. The state table contains the familiar columns for source IP, source port, destination IP, and destination port, but it adds information for the protocol used (UDP or TCP), total time in seconds, and time remaining in seconds. -Many state table implementations allow a connection to remain in place for up to 60 minutes without any activity before the state entry is deleted.
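A state table of the kind described above can be sketched as a dictionary keyed by the connection's addressing information, with a timestamp used to enforce the idle timeout. The 60-minute timeout follows the text; the key layout and field choices are simplifications for illustration.

```python
# Sketch of a stateful inspection state table: outbound flows are recorded,
# and an inbound packet is expedited only if it is the reverse of a tracked,
# non-expired flow; otherwise the firewall would fall back to its ACL.
import time

IDLE_TIMEOUT = 60 * 60  # 60 minutes of inactivity before the entry is purged

state_table = {}  # (src_ip, src_port, dst_ip, dst_port, proto) -> last activity time

def record_outbound(src_ip, src_port, dst_ip, dst_port, proto):
    state_table[(src_ip, src_port, dst_ip, dst_port, proto)] = time.time()

def match_inbound(src_ip, src_port, dst_ip, dst_port, proto):
    """Match an inbound packet against the reversed key of a tracked flow."""
    key = (dst_ip, dst_port, src_ip, src_port, proto)  # reverse direction
    last_seen = state_table.get(key)
    if last_seen is None or time.time() - last_seen > IDLE_TIMEOUT:
        state_table.pop(key, None)  # purge a stale entry if one exists
        return False                # no state match; consult the ACL instead
    return True

record_outbound("192.168.1.5", 49200, "203.0.113.9", 443, "TCP")
print(match_inbound("203.0.113.9", 443, "192.168.1.5", 49200, "TCP"))   # True
print(match_inbound("198.51.100.1", 443, "192.168.1.5", 49200, "TCP"))  # False
```

The extra lookup per packet is exactly the processing cost the SPI entry cites as the main disadvantage of stateful firewalls.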
Port Address Translation (PAT)
A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-many basis; that is, one external valid address is mapped dynamically to a range of internal addresses by adding a unique port number to the address when traffic leaves the private network and is placed on the public network. -A variation on NAT is Port Address Translation (PAT). Where NAT performs a one-to-one mapping between assigned external IP addresses and internal private addresses, PAT performs a one-to-many assignment that allows the mapping of many internal hosts to a single assigned external IP address. -The system is able to maintain the integrity of each communication by assigning a unique port number to the external IP address and mapping the address + port combination (known as a socket) to the internal IP address.
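The socket mapping described above can be sketched as a translation table: each outbound internal socket is assigned a fresh external port, and replies arriving on that port are routed back to the internal host. The external address and port range below are illustrative assumptions.

```python
# Sketch of port address translation: many internal sockets share one
# external IP address, each distinguished by a unique external port.
import itertools

EXTERNAL_IP = "203.0.113.10"        # the single assigned external address
_next_port = itertools.count(40000)  # hypothetical pool of external ports
pat_table = {}                       # external port -> (internal_ip, internal_port)

def translate_outbound(internal_ip: str, internal_port: int):
    """Map an internal socket to the shared external address + a unique port."""
    ext_port = next(_next_port)
    pat_table[ext_port] = (internal_ip, internal_port)
    return EXTERNAL_IP, ext_port

def translate_inbound(ext_port: int):
    """Route a reply back to the internal socket recorded for that port."""
    return pat_table[ext_port]

ip, port = translate_outbound("192.168.1.5", 51000)
print(ip, port)                 # the address + port (socket) seen externally
print(translate_inbound(port))  # the internal socket the reply returns to
```

The unique external port is what lets one valid address serve many internal hosts at once, the one-to-many mapping the definition describes.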
Network Address Translation (NAT)
A technology in which multiple real, routable external IP addresses are converted to special ranges of internal IP addresses, usually on a one-to-one basis; that is, one external valid address directly maps to one assigned internal address. -Implementation of the bastion host architecture often makes use of Network Address Translation (NAT). -NAT is a method of mapping valid, external IP addresses to special ranges of non-routable internal IP addresses, known as private IPv4 addresses, to create another barrier to intrusion from external attackers. The internal addresses used by NAT consist of three different ranges. -In IPv6 addressing, these addresses are referred to as Unique Local Addresses (ULA), as defined by RFC 4193. -Messages sent with internal addresses within these three reserved ranges cannot be routed externally, so if a computer with one of these internal-use addresses is directly connected to the external network and avoids the NAT server, its traffic cannot be routed on the public network. -Taking advantage of this, NAT prevents external attacks from reaching internal machines with addresses in specified ranges. -NAT translates by dynamically assigning addresses to internal communications and tracking the conversations with sessions to determine which incoming message is a response to which outgoing traffic.
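The three reserved IPv4 ranges used for NAT internal addressing are the RFC 1918 private ranges, and checking whether an address falls in one of them is straightforward with the standard library:

```python
# The three reserved private IPv4 ranges (RFC 1918) used for NAT internal
# addressing; addresses inside them cannot be routed on the public network.
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),      # 10.0.0.0  - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 192.168.0.0 - 192.168.255.255
]

def is_internal(addr: str) -> bool:
    """True when the address falls in one of the three reserved ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_internal("10.20.30.40"))  # True
print(is_internal("172.31.0.1"))   # True
print(is_internal("8.8.8.8"))      # False: publicly routable
```

This non-routability is the barrier the text describes: an external attacker cannot send traffic directly to a host numbered from these ranges.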
Lattice-based access control (LBAC)
A variation on the MAC form of access control, which assigns users a matrix of authorizations for particular areas of access, incorporating the information assets of subjects such as users and objects. -The authorization may vary between levels, depending on the classification of authorizations that users possess for each group of information or resources. -The lattice structure contains subjects and objects, and the boundaries associated with each pair are demarcated. -Some lattice-based controls are tied to a person's duties and responsibilities; such controls include role-based access controls (RBACs) and task-based access controls (TBACs). -Roles tend to last for a longer term and be related to a position, whereas tasks are much more granular and short-term.
Discretionary Access Controls (DACs)
Access controls that are implemented at the discretion or option of the data user. -Provide the ability to share resources in a peer-to-peer configuration that allows users to control and possibly provide access to information or resources at their disposal. -The users can allow general, unrestricted access, or they can allow specific people or groups of people to access these resources.
Nondiscretionary access controls (NDACs)
Access controls that are implemented by a central authority. -A form of nondiscretionary access controls is called lattice-based access control(LBAC), in which users are assigned a matrix of authorizations for particular areas of access.
Firewall Architectures
All firewall devices can be configured in several network connection architectures. These approaches are sometimes mutually exclusive, but sometimes they can be combined. The configuration that works best for a particular organization depends on three factors: the objectives of the network, the organization's ability to develop and implement the architectures, and the budget available for the function. Although hundreds of variations exist, three architectural implementations of firewalls are especially common: single bastion hosts, screened host firewalls, and screened subnet firewalls.
Crossover error rate (CER)
Also called the equal error rate, the point at which the rate of false rejections equals the rate of false acceptances. -The crossover error rate (CER), the point at which false reject and false accept rates intersect, is possibly the most common and important overall measure of accuracy for a biometric system. -Most biometric systems can be adjusted to compensate for both false positive and false negative errors. Adjustment to one extreme creates a system that requires perfect matches and results in a high rate of false rejects, but almost no false accepts. Adjustment to the other extreme produces a low rate of false rejects, but excessive false accepts. -The trick is to find the balance between providing the requisite level of security and minimizing the frustrations of authentic users. Thus, the optimal setting is somewhere near the point at which the two error rates are equal—the CER. -CERs are used to compare various biometrics and may vary by manufacturer. If a biometric device provides a CER of 1 percent, its failure rates for false rejections and false acceptances are both 1 percent.
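Finding the CER amounts to sweeping the match threshold and locating the point where the two error rates meet. The threshold values and error-rate curves below are made-up sample data chosen to show the opposing trends the text describes.

```python
# Numeric sketch of locating the crossover error rate: as the match
# threshold tightens, false accepts fall while false rejects rise; the
# CER sits where the two rates are (closest to) equal. Sample data only.
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5]
far = [0.20, 0.10, 0.05, 0.02, 0.01]  # false accept rate per threshold
frr = [0.01, 0.02, 0.05, 0.10, 0.20]  # false reject rate per threshold

cer_index = min(range(len(thresholds)), key=lambda i: abs(far[i] - frr[i]))
print(thresholds[cer_index], far[cer_index])  # threshold 0.3 gives a 5% CER
```

In this sample, tuning below 0.3 favors user convenience (few rejects, many accepts) and tuning above it favors security, exactly the trade-off the text describes.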
Trusted VPN
Also known as legacy VPN, a VPN implementation that uses leased circuits from a service provider who gives contractual assurance that no one else is allowed to use these circuits and that they are properly maintained and protected. -A trusted VPN, also known as a legacy VPN, uses leased circuits from a service provider and conducts packet switching over these leased circuits. The organization must trust the service provider, who gives contractual assurance that no one else is allowed to use these circuits and that the circuits are properly maintained and protected—hence the name trusted VPN.
Attribute-based access control (ABAC)
An access control approach whereby the organization specifies the use of objects based on some attribute of the user or system. -ABAC is a newer approach to lattice-based access controls, promoted by NIST. -There are characteristics or attributes of a subject such as name, date of birth, home address, training record, and job function that may, either individually or when combined, comprise a unique identity that distinguishes that person from all others. These characteristics are often called subject attributes. -An ABAC system simply uses one of these attributes to regulate access to a particular set of data. -This system is similar in concept to looking up movie times on a Web site that requires you to enter your zip code to select a particular theatre, or a home supply or electronics store that asks for your zip code to determine if a particular discount is available at your nearest store. -According to NIST, ABAC is actually the parent approach to lattice-based, MAC, and RBAC controls, as they all are based on attributes.
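An attribute-based decision can be sketched as a policy of required attribute values checked against the subject's attributes. The attribute names and policy contents below are invented for illustration; real ABAC policies (e.g., under NIST's model) can combine many attributes and conditions.

```python
# Sketch of an attribute-based access check: access is granted when the
# subject's attributes satisfy every requirement in the object's policy.
def abac_allowed(subject: dict, policy: dict) -> bool:
    """True when the subject matches every attribute the policy requires."""
    return all(subject.get(attr) == value for attr, value in policy.items())

# Hypothetical subject attributes and a one-attribute policy:
subject = {"department": "engineering", "training": "export-control"}
policy = {"department": "engineering"}

print(abac_allowed(subject, policy))                  # True: attribute matches
print(abac_allowed({"department": "sales"}, policy))  # False: wrong department
```

The zip-code examples in the text fit this pattern directly: the single attribute (zip code) decides which resources (theatres, discounts) the subject can reach.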
Dumb Card
An authentication card that contains digital user data, such as a personal identification number (PIN), against which user input is compared.
Asynchronous token
An authentication component in the form of a token—a card or key fob that contains a computer chip and a liquid crystal display and shows a computer-generated number used to support remote login authentication. This token does not require calibration of the central authentication server; instead, it uses a challenge/response system.
Synchronous Token
An authentication component in the form of a token—a card or key fob that contains a computer chip and a liquid crystal display and shows a computer-generated number used to support remote login authentication. This token must be calibrated with the corresponding software on the central authentication server.
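The calibration described above works because the token and the server compute the same code from a shared secret and the current time window; this is the idea behind time-based one-time passwords (TOTP, RFC 6238), sketched here with the standard library. The secret value is the RFC's published test key, used purely for demonstration.

```python
# Sketch of a time-synchronized token code (TOTP, RFC 6238): both the
# token and the authentication server derive the same 6-digit code from
# a shared secret and the current 30-second time window.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                       # time window as counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"12345678901234567890"  # RFC 6238 test secret, for illustration
# Token and server agree because both use the same secret and time window:
print(totp(secret, for_time=59))
```

If the token's clock drifts out of the server's accepted window, the codes stop matching, which is why synchronous tokens must stay calibrated with the server.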
Smart card
An authentication component similar to a dumb card that contains a computer chip to verify and validate several pieces of information instead of just a PIN.
Kerberos
An authentication system that uses symmetric key encryption to validate an individual user's access to various network resources by keeping a database containing the private keys of clients and servers that are in the authentication domain it supervises. -Kerberos—named after the three-headed dog of Greek mythology that guards the gates to the underworld—uses symmetric key encryption to validate an individual user to various network resources. Kerberos, as described in RFC 4120, keeps a database containing the private keys of clients and servers—in the case of a client, this key is simply the client's encrypted password. -Kerberos consists of three interacting services, all of which use a database library: 1. Authentication server (AS), which is a Kerberos server that authenticates clients and servers. 2. Key Distribution Center (KDC), which generates and issues session keys. 3. Kerberos ticket granting service (TGS), which provides tickets to clients who request services. In Kerberos, a ticket is an identification card for a particular client that verifies to the server that the client is requesting services and that the client is a valid member of the Kerberos system and therefore authorized to receive services. The ticket consists of the client's name and network address, a ticket validation starting and ending time, and the session key, all encrypted in the private key of the server from which the client is requesting services. -Kerberos is based on the following principles: • The KDC knows the secret keys of all clients and servers on the network. • The KDC initially exchanges information with the client and server by using these secret keys. • Kerberos authenticates a client to a requested service on a server through TGS and by issuing temporary session keys for communications between the client and KDC, the server and KDC, and the client and server. • Communications then take place between the client and server using these temporary session keys.
-If the Kerberos servers are subjected to denial-of-service attacks, no client can request services. If the Kerberos servers, service providers, or clients' machines are compromised, their private key information may also be compromised.
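The ticket mechanism above can be sketched as a toy in Python. This is purely illustrative: an HMAC seal stands in for the symmetric encryption a real KDC performs with the server's secret key, and the key names, addresses, and function names are all invented.

```python
import hmac, hashlib, json

# Keys the KDC shares with each server (invented values).
SERVER_KEYS = {"fileserver": b"server-secret"}

def issue_ticket(client: str, server: str, session_key: str) -> dict:
    """KDC/TGS step: build a ticket only the target server can verify."""
    payload = json.dumps({
        "client": client,
        "address": "10.0.0.5",
        "valid_from": 0,
        "valid_to": 3600,
        "session_key": session_key,
    }, sort_keys=True)
    seal = hmac.new(SERVER_KEYS[server], payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "seal": seal}

def server_accepts(server: str, ticket: dict) -> bool:
    """Server step: recompute the seal with its own key. A tampered ticket
    (or one sealed with the wrong key) is rejected."""
    expected = hmac.new(SERVER_KEYS[server], ticket["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["seal"])

t = issue_ticket("alice", "fileserver", "sess-key-123")
print(server_accepts("fileserver", t))                                  # True
forged = {"payload": t["payload"].replace("alice", "mallory"), "seal": t["seal"]}
print(server_accepts("fileserver", forged))                             # False
```

Note how the sketch also shows the weakness mentioned above: everything rests on the secrecy of the keys held by the KDC and servers; if `SERVER_KEYS` is compromised, any ticket can be forged.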
War Dialer
An automatic phone-dialing program that dials every number in a configured range and checks whether a person, answering machine, or modem picks up. -A war dialer dials every number in a configured range, such as 555-1000 to 555-2000, and checks to see if a person, answering machine, or modem picks up. If a modem answers, the war dialer program makes a note of the number and then moves to the next target number. The attacker then attempts to hack into the network via the identified modem connection using a variety of techniques. Dial-up network connectivity is usually less sophisticated than that deployed with Internet connections. For the most part, simple username and password schemes are the only means of authentication. However, some technologies, such as RADIUS systems, TACACS, and CHAP password systems, have improved the authentication process, and some systems now use strong encryption.
Task-based access control (TBAC)
An example of a nondiscretionary control where privileges are tied to a task a user performs in an organization and are inherited when a user is assigned to that task. Tasks are considered more temporary than roles. TBAC is an example of an LDAC. -Some consider TBAC a sub-role access control and a method of providing more detailed control over the steps or stages associated with a role or project. -These controls make it easier to maintain the restrictions associated with a particular role or task, especially if different people perform the role or task. Instead of constantly assigning and revoking the privileges of employees who come and go, the administrator simply assigns access rights to the role or task. -When users are associated with that role or task, they automatically receive the corresponding access. When their turns are over, they are removed from the role or task and access is revoked. -Tasks are much more granular and short-term.
Role-based access control (RBAC)
An example of nondiscretionary control where privileges are tied to the role a user performs in an organization, and are inherited when a user is assigned to that role. Roles are considered more persistent than tasks. RBAC is an example of an LDAC. -Role-based controls are associated with the duties a user performs in an organization, such as a position or temporary assignment like project manager, while task-based controls are tied to a particular chore or responsibility, such as a department's printer administrator. -Roles tend to last for a longer term and be related to a position.
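The assign-to-role, revoke-from-role pattern can be sketched as follows. The role names, permissions, and `can` function are invented for illustration; real RBAC systems add role hierarchies and constraints on top of this core idea.

```python
# Minimal RBAC sketch: privileges attach to the role, and a user holds
# them only while assigned to that role.

ROLE_PERMISSIONS = {
    "project_manager": {"read_reports", "approve_budget"},
    "printer_admin":   {"manage_print_queue"},
}

user_roles = {"alice": {"project_manager"}}

def can(user: str, permission: str) -> bool:
    """A user has a permission if any of their current roles grants it."""
    return any(permission in ROLE_PERMISSIONS[r] for r in user_roles.get(user, ()))

print(can("alice", "approve_budget"))              # True: granted via the role
user_roles["alice"].discard("project_manager")     # assignment ends
print(can("alice", "approve_budget"))              # False: access revoked with the role
```

Note that revoking the role is a single operation; no individual permissions need to be hunted down, which is the maintenance advantage described above.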
Access control matrix
An integration of access control lists (focusing on assets) and capability tables (focusing on users) that results in a matrix with organizational assets listed in the column headings and users listed in the row headings. The matrix contains ACLs in columns for a particular device or asset and capability tables in rows for a particular user.
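The row/column structure can be made concrete with a small Python sketch. The user names, assets, and rights are invented; the point is only that one table yields both views.

```python
# Access control matrix sketch: rows are users (capability tables),
# columns are assets (ACLs).

matrix = {
    "alice": {"payroll_db": {"read"}, "web_server": {"read", "write"}},
    "bob":   {"payroll_db": set(),    "web_server": {"read"}},
}

def capability_table(user: str) -> dict:
    """One row of the matrix: everything this user can do, per asset."""
    return matrix[user]

def acl(asset: str) -> dict:
    """One column of the matrix: which users hold which rights on this asset."""
    return {user: rights[asset] for user, rights in matrix.items()}

print(capability_table("bob"))   # bob's row across all assets
print(acl("payroll_db"))         # the payroll database's column across all users
```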
Demilitarized Zone (DMZ)
An intermediate area between two networks designed to provide servers and firewall filtering between a trusted internal network and the outside, untrusted network. Traffic on the outside network carries a higher level of risk.
Effectiveness of Biometrics
Biometric technologies are evaluated on three basic criteria: the false reject rate, which is the percentage of authorized users who are denied access; the false accept rate, which is the percentage of unauthorized users who are granted access; and the crossover error rate, the level at which the number of false rejections equals the false acceptances
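The crossover error rate can be illustrated numerically. The sensitivity settings and error rates below are made up; in practice they come from testing the device, and the CER is the point where the two curves intersect.

```python
# Illustrative (invented) error rates at increasing sensitivity settings:
# tuning the device stricter lowers false accepts but raises false rejects.

settings     = [1, 2, 3, 4, 5]
false_reject = [0.01, 0.02, 0.05, 0.10, 0.20]   # authorized users denied
false_accept = [0.20, 0.10, 0.05, 0.02, 0.01]   # unauthorized users granted

# The crossover error rate (CER) is the setting where the two rates meet.
crossover = min(settings,
                key=lambda s: abs(false_reject[s - 1] - false_accept[s - 1]))
print(crossover, false_reject[crossover - 1])   # setting 3, CER = 0.05
```

A lower CER indicates a more accurate biometric system overall, which is why the CER is commonly used to compare devices.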
Address Restrictions
Firewall rules designed to prohibit packets with certain addresses or partial addresses from passing through the device.
Hybrid Firewalls
Hybrid firewalls combine the elements of other types of firewalls—that is, the elements of packet filtering, application layer proxy, and media access control layer firewalls. -A hybrid firewall system may actually consist of two separate firewall devices; each is a separate firewall system, but they are connected so that they work in tandem. -An added advantage to the hybrid firewall approach is that it enables an organization to make a security improvement without completely replacing its existing firewalls. -The most recent generations of firewalls aren't really new; they are hybrids built from capabilities of modern networking equipment that can perform a variety of tasks according to the organization's needs.
Capabilities table
In lattice-based access control, the row of attributes associated with a particular subject (such as a user).
Strong Authentication
In access control, the use of at least two different authentication mechanisms drawn from two different factors of authentication.
Minutiae
In biometric access controls, unique points of reference that are digitized and stored in an encrypted format when the user's system access credentials are created.
Firewall
In information security, a combination of hardware and software that filters or prevents specific information from moving between the outside network and the inside network. -A firewall in an information security program is similar to a building's firewall (literal wall) in that it prevents specific types of information from moving between two different levels of networks, such as an untrusted network like the Internet and a trusted network like the organization's internal network. -Firewalls can be categorized by processing mode, development era, or structure.
Transport mode
In transport mode, the data within an IP packet is encrypted, but the header information is not. This allows the user to establish a secure link directly with the remote host, encrypting only the data contents of the packet. The downside of this implementation is that packet eavesdroppers can still identify the destination system. Transport mode VPNs have two popular uses. The first is the end-to-end transport of encrypted data. In this model, two end users can communicate directly, encrypting and decrypting their communications as needed. Each machine acts as the end-node VPN server and client. In the second, a remote access worker or teleworker connects to an office network over the Internet by connecting to a VPN server on the perimeter. This allows the teleworker's system to work as if it were part of the local area network.
Unified Threat Management (UTM) (Type of Hybrid Firewall)
Networking devices categorized by their ability to perform the work of multiple devices, such as stateful packet inspection firewalls, network intrusion detection and prevention systems, content filters, spam filters, and malware scanners and filters. -The first type of hybrid firewall is known as Unified Threat Management (UTM). These devices are categorized by their ability to perform the work of an SPI firewall, network intrusion detection and prevention system, content filter, spam filter, and malware scanner and filter. -UTM systems take advantage of increasing memory capacity and processor capability and can reduce the complexity associated with deploying, configuring, and integrating multiple networking devices. -With the proper configuration, these devices are even able to "drill down" into the protocol layers and examine application-specific data, encrypted data, compressed data, and encoded data. -The primary disadvantage of UTM systems is the creation of a single point of failure if the device has technical problems.
Best Practices for Firewalls
Note that these rules are not presented in any particular sequence: • All traffic from the trusted network is allowed out. This rule allows members of the organization to access the services they need. Filtering and logging of outbound traffic can be implemented when required by specific organizational policies. • The firewall device is never directly accessible from the public network for configuration or management purposes. Almost all administrative access to the firewall device is denied to internal users as well. Only authorized firewall administrators access the device through secure authentication mechanisms, preferably via a method that is based on cryptographically strong authentication and uses two-factor access control techniques. • Simple Mail Transfer Protocol (SMTP) data is allowed to enter through the firewall, but is routed to a well-configured SMTP gateway to filter and route messaging traffic securely. • All Internet Control Message Protocol (ICMP) data should be denied, especially on external interfaces. Known as the ping service, ICMP is a common method for hacker reconnaissance and should be turned off to prevent snooping. • Telnet (terminal emulation) access should be blocked to all internal servers from the public networks. At the very least, Telnet access to the organization's Domain Name System (DNS) server should be blocked to prevent illegal zone transfers and to prevent attackers from taking down the organization's entire network. If internal users need to access an organization's network from outside the firewall, the organization should enable them to use a virtual private network (VPN) client or other secure system that provides a reasonable level of authentication. • When Web services are offered outside the firewall, HTTP traffic should be blocked from internal networks through the use of some form of proxy access or DMZ architecture. 
That way, if any employees are running Web servers for internal use on their desktops, the services are invisible to the outside Internet. If the Web server is behind the firewall, allow HTTP or HTTPS traffic (also known as Secure Sockets Layer or SSL) so users on the Internet at large can view it. The best solution is to place the Web servers that contain critical data inside the network and use proxy services from a DMZ (screened network segment), and to restrict Web traffic bound for internal network addresses to allow only those requests that originated from internal addresses. This restriction can be accomplished using NAT or other stateful inspection or proxy server firewalls. All other incoming HTTP traffic should be blocked. If the Web servers only contain advertising, they should be placed in the DMZ and rebuilt on a timed schedule or when—not if, but when—they are compromised. • All data that is not verifiably authentic should be denied. When attempting to convince packet-filtering firewalls to permit malicious traffic, attackers frequently put an internal address in the source field. To avoid this problem, set rules so that the external firewall blocks all inbound traffic with an organizational source address.
Configuring and Managing Firewalls
Once the firewall architecture and technology have been selected, the organization must provide for the initial configuration and ongoing management of the firewall(s). Good policy and practice dictates that each firewall device, whether a filtering router, bastion host, or other implementation, must have its own set of configuration rules. -In fact, the configuration of firewall policies can be complex and difficult. IT professionals who are familiar with application programming can appreciate the difficulty of debugging both syntax errors and logic errors. Syntax errors in firewall policies are usually easy to identify, as the systems alert the administrator to incorrectly configured policies. -However, logic errors, such as allowing instead of denying, specifying the wrong port or service type, and using the wrong switch, are another story. A myriad of simple mistakes can take a device designed to protect users' communications and turn it into one giant choke point. A choke point that restricts all communications or an incorrectly configured rule can cause other unexpected results. -Configuring firewall policies is as much an art as it is a science. Each configuration rule must be carefully crafted, debugged, tested, and placed into the firewall's rule base in the proper sequence. Good, correctly sequenced firewall rules ensure that the actions taken comply with the organization's policy. In a well-designed, efficient firewall rule set, rules that can be evaluated quickly and govern broad access are performed before rules that may take longer to evaluate and affect fewer cases. -The most important thing to remember when configuring firewalls is that when security rules conflict with the performance of business, security often loses. If users can't work because of a security restriction, the security administration is usually told in no uncertain terms to remove the safeguard.
In other words, organizations are much more willing to live with potential risk than certain failure.
RADIUS, Diameter, and TACACS
RADIUS and TACACS are systems that authenticate the credentials of users who are trying to access an organization's network via a dial-up connection. Typical dial-up systems place the responsibility for user authentication on the system directly connected to the modems. If there are multiple points of entry into the dial-up system, this authentication system can become difficult to manage. The Remote Authentication Dial-In User Service (RADIUS) system centralizes the responsibility for authenticating each user on the RADIUS server. RADIUS was initially described in RFCs 2058 and 2059, and is currently described in RFCs 2865 and 2866, among others. -When a network access server (NAS) receives a request for a network connection from a dial-up client, it passes the request and the user's credentials to the RADIUS server. RADIUS then validates the credentials and passes the resulting decision (accept or deny) back to the accepting remote access server. -Derived from RADIUS is the Diameter protocol. The Diameter protocol defines the minimum requirements for a system that provides authentication, authorization, and accounting (AAA) services and that can go beyond these basics and add commands and/or object attributes. Diameter security uses respected encryption standards such as Internet Protocol Security (IPSec) or Transport Layer Security (TLS); its cryptographic capabilities are extensible and will be able to use future encryption protocols as they are implemented. Diameter-capable devices are emerging into the marketplace, and this protocol is expected to become the dominant form of AAA services. -The Terminal Access Controller Access Control System (TACACS), defined in RFC 1492, is another remote access authorization system that is based on a client/server configuration. Like RADIUS, it contains a centralized database, and it validates the user's credentials at this TACACS server. The three versions of TACACS are the original version, Extended TACACS, and TACACS+.
Of these, only TACACS+ is still in use. The original version combines authentication and authorization services. The extended version separates the steps needed to authenticate individual user or system access attempts from the steps needed to verify that the authenticated individual or system is allowed to make a given type of connection. The extended version keeps records for accountability and to ensure that the access attempt is linked to a specific individual or system. The TACACS+ version uses dynamic passwords and incorporates two-factor authentication.
Firewall Rules
Rule set 1: Responses to internal requests are allowed. In most firewall implementations, it is desirable to allow a response to an internal request for information. In stateful firewalls, this response is most easily accomplished by matching the incoming traffic to an outgoing request in a state table. In simple packet filtering, this response can be accomplished by setting the following rule for the external filtering router. (Note that the network address for the destination ends with .0; some firewalls use a notation of .x instead.) Use extreme caution in deploying this rule, as some attacks use port assignments above 1023. However, most modern firewalls use stateful inspection filtering and make this concern obsolete. Rule set 2: The firewall device is never accessible directly from the public network. If attackers can directly access the firewall, they may be able to modify or delete rules and allow unwanted traffic through. For the same reason, the firewall itself should never be allowed to access other network devices directly. If hackers compromise the firewall and then use its permissions to access other servers or clients, they may cause additional damage or mischief. The rules shown in Table 6-7 prohibit anyone from directly accessing the firewall, and prohibit the firewall from directly accessing any other devices. Note that this example is for the external filtering router and firewall only. Similar rules should be crafted for the internal router. Why are there separate rules for each IP address? The 10.10.10.1 address regulates external access to and by the firewall, while the 10.10.10.2 address regulates internal access. Not all attackers are outside the firewall! Rule set 3: All traffic from the trusted network is allowed out. As a general rule, it is wise not to restrict outbound traffic unless separate routers and firewalls are configured to handle it, to avoid overloading the firewall.
If an organization wants control over outbound traffic, it should use a separate filtering device. Rule set 4: The rule set for SMTP: the packets governed by this rule are allowed to pass through the firewall, but are all routed to a well-configured SMTP gateway. It is important that e-mail traffic reach your e-mail server and only your e-mail server. Some attackers try to disguise dangerous packets as e-mail traffic to fool a firewall. If such packets can reach only the e-mail server and it has been properly configured, the rest of the network ought to be safe. Rule set 5: All ICMP data should be denied. Pings, formally known as ICMP Echo requests, are used by internal systems administrators to ensure that clients and servers can communicate. There is virtually no legitimate use for ICMP outside the network, except to test the perimeter routers. ICMP may be the first indicator of a malicious attack. It's best to make all directly connected networking devices "black holes" to external probes. A common networking diagnostic command in most operating systems is traceroute; it uses a variation of the ICMP Echo requests, so restricting this port provides protection against multiple types of probes. Allowing internal users to use ICMP requires configuring two rules. Rule set 6: Telnet (terminal emulation) access should be blocked to all internal servers from the public networks. Again, this rule is unnecessary if the firewall uses internal permissions rules like those in rule set 2. Rule set 7: When Web services are offered outside the firewall, HTTP and HTTPS traffic should be blocked from the internal networks via the use of some form of proxy access or DMZ architecture. This rule accomplishes two things: it allows HTTP traffic to reach the Web server, and it uses the cleanup rule (Rule 8) to prevent non-HTTP traffic from reaching the Web server. Rule set 8: The cleanup rule.
As a general practice in firewall rule construction, if a request for a service is not explicitly allowed by policy, that request should be denied by a rule. Additional rules that restrict access to specific servers or devices can be added, but they must be sequenced before the cleanup rule. The specific sequence of the rules becomes crucial because once a rule is fired, that action is taken and the firewall stops processing the rest of the rules in the list.
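The first-match semantics and the cleanup rule can be sketched in a few lines. The addresses, ports, and rule table below are illustrative only; real firewalls match on many more fields (direction, protocol, interface, state).

```python
# First-match rule evaluation sketch: rules fire in sequence, the first
# match wins, and the final cleanup rule denies anything not explicitly
# allowed earlier.

RULES = [
    # "*" matches anything in that field
    {"src": "*", "dst": "10.10.10.25", "port": 25,  "action": "allow"},  # SMTP to gateway
    {"src": "*", "dst": "10.10.10.1",  "port": "*", "action": "deny"},   # the firewall itself
    {"src": "*", "dst": "*",           "port": "*", "action": "deny"},   # cleanup rule
]

def decide(packet: dict) -> str:
    for rule in RULES:
        if all(rule[f] in ("*", packet[f]) for f in ("src", "dst", "port")):
            return rule["action"]   # first match wins; stop processing
    return "deny"

print(decide({"src": "1.2.3.4", "dst": "10.10.10.25", "port": 25}))  # allow
print(decide({"src": "1.2.3.4", "dst": "10.10.10.1",  "port": 25}))  # deny
print(decide({"src": "1.2.3.4", "dst": "10.10.10.99", "port": 80}))  # deny (cleanup)
```

Reordering `RULES` changes behavior, which is exactly why the text stresses that rule sequence is crucial: a broad deny placed before a specific allow silently disables the allow.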
Access Control Architecture Models
Security access control architecture models, which are often referred to simply as architecture models, illustrate access control implementations and can help organizations quickly make improvements through adaptation. -Formal models do not usually find their way directly into usable implementations; instead, they form the theoretical foundation that an implementation uses. -When a specific implementation is put into place, noting that it is based on a formal model may lend credibility, improve its reliability, and lead to improved results. -Some models are implemented into computer hardware and software, some are implemented as policies and practices, and some are implemented in both. -Some models focus on the confidentiality of information, while others focus on the information's integrity as it is being processed. -The first models discussed here—specifically, the trusted computing base, the Information Technology Security Evaluation Criteria, and the Common Criteria—are used as evaluation models and to demonstrate the evolution of trusted system assessment, which includes evaluations of access controls. -The later models—Bell-LaPadula, Biba, and others—demonstrate implementations in some computer security systems to ensure that the confidentiality, integrity, and availability of information are protected by controlling the access of one part of a system to another.
Access Control List (ACL)
Specifications of authorization that govern the rights and privileges of users to a particular information asset. ACLs include user access lists, matrices, and capabilities tables.
Timing Channels
TCSEC-defined covert channels that communicate by managing the relative timing of events.
Storage Channels
TCSEC-defined covert channels that communicate by modifying a stored object, such as in steganography.
Bell-LaPadula Confidentiality Model
The Bell-LaPadula (BLP) confidentiality model is a "state machine reference model"—in other words, a model of an automated system that is able to manipulate its state or status over time. BLP ensures the confidentiality of the modeled system by using MACs, data classification, and security clearances. -The intent of any state machine model is to devise a conceptual approach in which the system being modeled can always be in a known secure condition; in other words, this kind of model is provably secure. -A system that serves as a reference monitor compares the level of data classification with the clearance of the entity requesting access; it allows access only if the clearance is equal to or higher than the classification. -BLP security rules prevent information from being moved from a level of higher security to a lower level. Access modes can be one of two types: simple security and the * (star) property. -Simple security (also called the read property) prohibits a subject of lower clearance from reading an object of higher clearance, but it allows a subject with a higher clearance level to read an object at a lower level (read down). The * property (the write property), on the other hand, prohibits a high-level subject from sending messages to a lower-level object. In short, subjects can read down and objects can write or append up. BLP uses access permission matrices and a security lattice for access control. -This model can be explained by imagining a fictional interaction between General Bell, whose thoughts and actions are classified at the highest possible level, and Private LaPadula, who has the lowest security clearance in the military. It is prohibited for Private LaPadula to read anything written by General Bell and for General Bell to write in any document that Private LaPadula could read. In short, the principle is "no read up, no write down."
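The two BLP properties reduce to simple comparisons over an ordered lattice, which can be sketched as follows. The level names and function names are invented for illustration.

```python
# BLP "no read up, no write down" sketch over a simple ordered lattice.

LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security (read) property: a subject may only read down,
    i.e., read objects at or below its own clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """* (star) property: a subject may only write up, never down,
    so classified information cannot leak to a lower level."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("top_secret", "secret"))   # True  (read down is allowed)
print(can_read("secret", "top_secret"))   # False (no read up)
print(can_write("top_secret", "secret"))  # False (no write down)
```

In General Bell and Private LaPadula's terms: LaPadula (low) fails `can_read` against Bell's documents, and Bell (high) fails `can_write` into anything LaPadula could read.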
Biba Integrity Model
The Biba integrity model is similar to BLP. It is based on the premise that higher levels of integrity are more worthy of trust than lower ones. The intent is to provide access controls to ensure that objects or subjects cannot have less integrity as a result of read/write operations. The Biba model assigns integrity levels to subjects and objects using two properties: the simple integrity (read) property and the integrity * property (write). -The simple integrity property permits a subject to have read access to an object only if the subject's security level is lower than or equal to the level of the object. -The integrity * property permits a subject to have write access to an object only if the subject's security level is equal to or higher than that of the object. -The Biba model ensures that no information from a subject can be passed on to an object in a higher security level. This prevents contaminating data of higher integrity with data of lower integrity. This model can be illustrated by imagining fictional interactions among some priests, a monk named Biba, and some parishioners in the Middle Ages. Priests are considered holier (of greater integrity) than monks, who are in turn holier than parishioners. A priest cannot read (or offer) Masses or prayers written by Biba the Monk, who in turn cannot read items written by his parishioners. Biba the Monk is also prohibited from writing in a priest's sermon books, just as parishioners are prohibited from writing in Biba's book. -These properties prevent the lower integrity of the lower level from corrupting the "holiness" or higher integrity of the upper level. On the other hand, higher-level entities can share their writings with the lower levels without compromising the integrity of the information. This example illustrates the "no write up, no read down" principle behind the Biba model.
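The Biba comparisons are the mirror image of BLP's, as a short sketch makes clear. The integrity level names follow the priest/monk/parishioner example; the function names are invented.

```python
# Biba "no write up, no read down" sketch: the comparisons are the
# inverse of BLP's confidentiality rules.

INTEGRITY = {"parishioner": 0, "monk": 1, "priest": 2}

def can_read(subject: str, obj: str) -> bool:
    """Simple integrity property: read only at or above your own level
    (never read down into lower-integrity data)."""
    return INTEGRITY[subject] <= INTEGRITY[obj]

def can_write(subject: str, obj: str) -> bool:
    """Integrity * property: write only at or below your own level
    (never write up into higher-integrity data)."""
    return INTEGRITY[subject] >= INTEGRITY[obj]

print(can_read("monk", "priest"))    # True  (Biba may read the priest's work)
print(can_read("priest", "monk"))    # False (no read down)
print(can_write("monk", "priest"))   # False (no write up)
print(can_write("priest", "monk"))   # True  (higher levels may share downward)
```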
Brewer-Nash Model (Chinese Wall)
The Brewer-Nash model, commonly known as a Chinese Wall, is designed to prevent a conflict of interest between two parties. -Imagine that a law firm represents two people who are involved in a car accident. One sues the other, and the firm has to represent both. To prevent a conflict of interest, the individual attorneys should not be able to access the private information of these two litigants. The Brewer-Nash model requires users to select one of two conflicting sets of data, after which they cannot access the conflicting data.
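The "pick one side of the wall" behavior can be sketched directly. The dataset names, the conflict class, and the `request_access` function are invented for the law-firm example.

```python
# Brewer-Nash (Chinese Wall) sketch: once a user touches one dataset in a
# conflict-of-interest class, the conflicting dataset becomes off-limits.

CONFLICT_CLASSES = [{"litigant_a_files", "litigant_b_files"}]
accessed = {}   # per-user history of datasets already touched

def request_access(user: str, dataset: str) -> bool:
    history = accessed.setdefault(user, set())
    for conflict in CONFLICT_CLASSES:
        # Deny if the user already accessed a *different* dataset
        # in the same conflict class.
        if dataset in conflict and history & (conflict - {dataset}):
            return False
    history.add(dataset)
    return True

print(request_access("attorney", "litigant_a_files"))  # True  (first choice)
print(request_access("attorney", "litigant_b_files"))  # False (crosses the wall)
print(request_access("attorney", "litigant_a_files"))  # True  (same side is fine)
```

The key difference from the other models here is that the policy is history-dependent: the same request succeeds or fails depending on what the subject has already accessed.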
Clark-Wilson Integrity Model
The Clark-Wilson integrity model, which is built upon principles of change control rather than integrity levels, was designed for the commercial environment. The model's change control principles are: • No changes by unauthorized subjects • No unauthorized changes by authorized subjects • The maintenance of internal and external consistency -Internal consistency means that the system does what it is expected to do every time, without exception. -External consistency means that the data in the system is consistent with similar data in the outside world. -This model establishes a system of subject-program-object relationships so that the subject has no direct access to the object. Instead, the subject is required to access the object using a well-formed transaction via a validated program. The intent is to provide an environment where security can be proven through the use of separated activities, each of which is provably secure. -The following controls are part of the Clark-Wilson model: • Subject authentication and identification • Access to objects by means of well-formed transactions • Execution by subjects on a restricted set of programs -The elements of the Clark-Wilson model are: • Constrained data item (CDI): Data item with protected integrity • Unconstrained data item: Data not controlled by Clark-Wilson; nonvalidated input or any output • Integrity verification procedure (IVP): Procedure that scans data and confirms its integrity • Transformation procedure (TP): Procedure that only allows changes to a constrained data item -All subjects and objects are labeled with TPs. The TPs operate as the intermediate layer between subjects and objects. Each data item has a set of access operations that can be performed on it. Each subject is assigned a set of access operations that it can perform. The system then compares these two parameters and either permits or denies access by the subject to the object.
-As an example, consider a database management system (DBMS) that sits between a database user and the actual data. The DBMS requires the user to be authenticated before accessing the data, only accepts specific inputs (such as SQL queries), and only provides a restricted set of operations, in accordance with its design. This example illustrates the Clark-Wilson model controls.
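The CDI/TP/IVP relationship can be sketched with a toy balance, in the spirit of the DBMS example. The rule "balance must never go negative," the function names, and the values are all invented for illustration.

```python
# Clark-Wilson sketch: subjects never modify the CDI directly; every
# change goes through a transformation procedure (TP) that enforces the
# well-formed-transaction rule, and an IVP can confirm integrity afterward.

cdi = {"balance": 100}   # constrained data item (CDI)

def ivp(item: dict) -> bool:
    """Integrity verification procedure: the invented invariant here is
    that the balance must never go negative."""
    return item["balance"] >= 0

def tp_withdraw(item: dict, amount: int) -> bool:
    """Well-formed transaction: apply the change only if it leaves the
    CDI in a valid state; otherwise refuse and leave it untouched."""
    if amount <= 0 or item["balance"] - amount < 0:
        return False
    item["balance"] -= amount
    return True

print(tp_withdraw(cdi, 30), cdi["balance"])    # True 70
print(tp_withdraw(cdi, 500), cdi["balance"])   # False 70 (rejected, CDI unchanged)
print(ivp(cdi))                                # True
```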
The Common Criteria (Successor to TCSEC & ITSEC)
The Common Criteria for Information Technology Security Evaluation, often called the Common Criteria or just CC, is an international standard (ISO/IEC 15408) for computer security certification. -It is widely considered the successor to both TCSEC and ITSEC in that it reconciles some differences between the various other standards. -Most governments have discontinued their use of the other standards. -CC is a combined effort of contributors from Australia, New Zealand, Canada, France, Germany, Japan, the Netherlands, Spain, the United Kingdom, and the United States. In the United States, the National Security Agency (NSA) and NIST were the primary contributors. -CC and its companion, the Common Methodology for Information Technology Security Evaluation (CEM), are the technical basis for an international agreement called the Common Criteria Recognition Agreement (CCRA), which ensures that products can be evaluated to determine their particular security properties. -CC seeks the widest possible mutual recognition of secure IT products. The CC process assures that the specification, implementation, and evaluation of computer security products are performed in a rigorous and standard manner. -CC terminology includes: • Target of Evaluation (ToE): The system being evaluated • Protection Profile (PP): User-generated specification for security requirements • Security Target (ST): Document describing the ToE's security properties • Security Functional Requirements (SFRs): Catalog of a product's security functions • Evaluation Assurance Levels (EALs): The rating or grading of a ToE after evaluation
Graham-Denning Access Control Model
The Graham-Denning access control model has three parts: a set of objects, a set of subjects, and a set of rights. -The subjects are composed of two things: a process and a domain. -The domain is the set of constraints that control how subjects may access objects. -The set of rights governs how subjects may manipulate the passive objects. -This model describes eight primitive protection rights, called commands, which subjects can execute to have an effect on other subjects or objects. Note that these commands are similar to the rights a user can assign to an entity in modern operating systems. -The eight primitive protection rights are: 1. Create object 2. Create subject 3. Delete object 4. Delete subject 5. Read access right 6. Grant access right 7. Delete access right 8. Transfer access right
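As a rough illustration, a few of the eight commands can be modeled as operations on an access matrix. The class, the rule that only an object's owner may grant or delete rights, and the `right*` suffix standing in for a transferable copy of a right are all simplifications invented for this sketch:

```python
# Illustrative sketch of some Graham-Denning commands over an access
# matrix: create object, grant right, transfer right, delete right.

class GDMatrix:
    def __init__(self):
        self.rights = {}  # (subject, object) -> set of rights

    def create_object(self, owner, obj):
        # the creating subject becomes owner of the new object
        self.rights[(owner, obj)] = {"owner"}

    def grant_right(self, granter, subject, obj, right):
        # only the owner of an object may grant rights on it
        if "owner" not in self.rights.get((granter, obj), set()):
            raise PermissionError("granter does not own object")
        self.rights.setdefault((subject, obj), set()).add(right)

    def transfer_right(self, giver, receiver, obj, right):
        # a subject may pass on a right it holds in transferable form (right*)
        if right + "*" not in self.rights.get((giver, obj), set()):
            raise PermissionError("giver cannot transfer this right")
        self.rights.setdefault((receiver, obj), set()).add(right)

    def delete_right(self, owner, subject, obj, right):
        if "owner" not in self.rights.get((owner, obj), set()):
            raise PermissionError("only owner may delete rights")
        self.rights.get((subject, obj), set()).discard(right)

m = GDMatrix()
m.create_object("alice", "file1")
m.grant_right("alice", "bob", "file1", "read")
print(m.rights[("bob", "file1")])   # -> {'read'}
```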
Harrison-Ruzzo-Ullman Model
The Harrison-Ruzzo-Ullman (HRU) model defines a method to allow changes to access rights and the addition and removal of subjects and objects, a process that the Bell-LaPadula model does not allow. -Because systems change over time, their protective states need to change. HRU is built on an access control matrix and includes a set of generic rights and a specific set of commands. These include: • Create subject/create object • Enter right X into (subject, object) • Delete right X from (subject, object) • Destroy subject/destroy object -By implementing this set of rights and commands and restricting the commands to a single operation each, it is possible to determine if and when a specific subject can obtain a particular right to an object.
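A minimal sketch of the HRU primitives, each restricted to a single operation on the matrix. The dictionary representation and names are illustrative, not from any particular implementation:

```python
# HRU primitives as single operations on an access control matrix.
matrix = {}                     # (subject, object) -> set of generic rights
subjects, objects = set(), set()

def create_subject(s):
    subjects.add(s)
    objects.add(s)              # in HRU, every subject is also an object

def create_object(o):
    objects.add(o)

def enter_right(r, s, o):
    matrix.setdefault((s, o), set()).add(r)

def delete_right(r, s, o):
    matrix.get((s, o), set()).discard(r)

def destroy_object(o):
    objects.discard(o)
    for key in [k for k in matrix if k[1] == o]:   # clear the object's column
        del matrix[key]

create_subject("alice")
create_object("report")
enter_right("read", "alice", "report")
print(matrix[("alice", "report")])   # -> {'read'}
```

Because each command does exactly one thing, the sequence of matrix states is easy to trace, which is what makes the "can subject S ever obtain right R on object O?" question analyzable in the model.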
ITSEC
The Information Technology Security Evaluation Criteria (ITSEC), an international set of criteria for evaluating computer systems, is very similar to TCSEC. Under ITSEC, Targets of Evaluation (ToE) are compared to detailed security function specifications, resulting in an assessment of system functionality and comprehensive penetration testing. -Like TCSEC, ITSEC was functionally replaced for the most part by the Common Criteria. -ITSEC rates products on a scale from E1 to the highest level, E6, much like the ratings of TCSEC and the Common Criteria. E1 is roughly equivalent to the EAL2 evaluation of the Common Criteria, and E6 is roughly equivalent to EAL7.
SESAME
The Secure European System for Applications in a Multivendor Environment (SESAME) is the result of a European research and development project partly funded by the European Commission. SESAME is similar to Kerberos (which is defined in RFC 1510) in that the user is first authenticated to an authentication server and receives a token. The token is then presented to a privilege attribute server, instead of a ticket-granting service as in Kerberos, as proof of identity to gain a privilege attribute certificate (PAC). The PAC is like the ticket in Kerberos; however, a PAC conforms to the standards of the European Computer Manufacturers Association (ECMA) and the International Organization for Standardization/International Telecommunication Union (ISO/ITU-T). -The remaining differences lie in the security protocols and distribution methods. SESAME uses public key encryption to distribute secret keys. SESAME also builds on the Kerberos model by adding sophisticated access control features, more scalable encryption systems, improved manageability, auditing features, and the option to delegate responsibility for allowing access.
Authentication
The access control mechanism that requires the validation and verification of an unauthenticated entity's purported identity. -Authentication is the process of validating an unauthenticated entity's purported identity. There are three widely used authentication mechanisms, or authentication factors: • Something you know • Something you have • Something you are
Accountability
The access control mechanism that ensures all actions on a system, authorized or unauthorized, can be attributed to an authenticated identity; also known as auditability. -Accountability, also known as auditability, ensures that all actions on a system—authorized or unauthorized—can be attributed to an authenticated identity. Accountability is most often accomplished by means of system logs, database journals, and the auditing of these records. -System logs record specific information, such as failed access attempts and system modifications. Logs have many uses, such as intrusion detection, determining the root cause of a system failure, or simply tracking the use of a particular resource.
Authorization
The access control mechanism that represents the matching of an authenticated entity to a list of information assets and corresponding access levels. -Authorization is the matching of an authenticated entity to a list of information assets and corresponding access levels. This list is usually an ACL or access control matrix. In general, authorization can be handled in one of three ways: • Authorization for each authenticated user, in which the system performs an authentication process to verify each entity and then grants access to resources for only that entity. This process quickly becomes complex and resource-intensive in a computer system. • Authorization for members of a group, in which the system matches authenticated entities to a list of group memberships and then grants access to resources based on the group's access rights. This is the most common authorization method. • Authorization across multiple systems, in which a central authentication and authorization system verifies an entity's identity and grants it a set of credentials. -Authorization credentials, which are also called authorization tickets, are issued by an authenticator and are honored by many or all systems within the authentication domain. -Sometimes called single sign-on (SSO) or reduced sign-on, authorization credentials are becoming more common and are frequently enabled using a shared directory structure such as the Lightweight Directory Access Protocol (LDAP).
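The group-membership approach, described above as the most common authorization method, can be sketched as follows; the groups, assets, and access levels are invented for the example:

```python
# Group-based authorization: match an authenticated user to group
# memberships, then check the group's rights in an ACL for the asset.

GROUPS = {"engineering": {"alice", "bob"}, "finance": {"carol"}}
ACL = {  # asset -> group -> set of access levels
    "source_code": {"engineering": {"read", "write"}},
    "ledger": {"finance": {"read", "write"}, "engineering": {"read"}},
}

def authorized(user, asset, level):
    # grant access if any group the user belongs to holds the right
    for group, members in GROUPS.items():
        if user in members and level in ACL.get(asset, {}).get(group, set()):
            return True
    return False

print(authorized("alice", "ledger", "read"))    # -> True
print(authorized("alice", "ledger", "write"))   # -> False
```

The appeal of this method is visible in the data: rights are stated once per group rather than once per user, so adding a user means updating one membership set instead of every ACL entry.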
Identification
The access control mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label by which they are known to the system. -Identification is a mechanism whereby unverified or unauthenticated entities who seek access to a resource provide a label by which they are known to the system. This label is called an identifier (ID), and it must be mapped to one and only one entity within the security domain. -Some organizations use composite identifiers by concatenating elements—department codes, random numbers, or special characters—to make unique identifiers within the security domain. -Other organizations generate random IDs to protect resources from potential attackers. -Most organizations use a single piece of unique information, such as a complete name or the user's first initial and surname.
Virtual Password
The derivative of a passphrase.
False Reject Rate
The rate at which authentic users are denied or prevented access to authorized areas as a result of a failure in the biometric device. This failure is also known as Type I error or false negative. -The false reject rate describes the number of legitimate users who are denied access because of a failure in the biometric device. This failure is known as a Type I error. While a nuisance to unauthenticated people who are authorized users, this error rate is probably of little concern to security professionals because rejection of an authorized user represents no threat to security. -The false reject rate is often ignored unless it reaches a level high enough to generate complaints from irritated unauthenticated people.
False Accept Rate
The rate at which fraudulent users or nonusers are allowed access to systems or areas as a result of a failure in the biometric device. This failure is also known as a Type II error or a false positive. -The false accept rate conversely describes the number of unauthorized users who somehow are granted access to a restricted system or area, usually because of a failure in the biometric device. This failure is known as a Type II error and is unacceptable to security professionals.
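Both error rates fall out of where the matching threshold is set. A toy calculation with invented similarity scores shows the trade-off: raising the threshold lowers the false accept rate but raises the false reject rate.

```python
# Compute false accept rate (FAR) and false reject rate (FRR) for a
# biometric matcher that compares a similarity score to a threshold.

def far_frr(genuine_scores, impostor_scores, threshold):
    # Type I error: genuine users scoring below threshold (falsely rejected)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # Type II error: impostors scoring at/above threshold (falsely accepted)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

genuine = [0.91, 0.85, 0.78, 0.60, 0.95]   # authorized users' match scores
impostor = [0.30, 0.45, 0.72, 0.20, 0.55]  # attackers' match scores

far, frr = far_frr(genuine, impostor, threshold=0.70)
print(far, frr)   # -> 0.2 0.2
```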
Access Control
The selective method by which systems specify who may use a particular resource and how they may use it. -Access control is the method by which systems determine whether and how to admit a user into a trusted area of the organization—that is, information systems, restricted areas such as computer rooms, and the entire physical location. -Access control is achieved through a combination of policies, programs, and technologies. To understand access controls, you must first understand that they are focused on the permissions or privileges that a subject (user or system) has on an object (resource), including if, when, and from where a subject may access an object and especially how the subject may use that object. -In general, access controls can be discretionary or nondiscretionary.
Trusted network
The system of networks inside the organization that contains its information assets and is under the organization's control.
Untrusted network
The system of networks outside the organization over which the organization has no control. The Internet is an example of an untrusted network.
Biometric Access Control
The use of physiological characteristics to provide authentication of a provided identification. Biometric means "life measurement" in Greek. -Biometric access control relies on recognition—the same thing you rely on to identify friends, family, and other people you know. -Biometric authentication technologies include the following: • Fingerprint comparison of the unauthenticated person's actual fingerprint to a stored fingerprint • Palm print comparison of the unauthenticated person's actual palm print to a stored palm print • Hand geometry comparison of the unauthenticated person's actual hand to a stored measurement • Facial recognition using a photographic ID card, in which a human security guard compares the unauthenticated person's face to a photo • Facial recognition using a digital camera, in which an unauthenticated person's face is compared to a stored image • Retinal print comparison of the unauthenticated person's actual retina to a stored image • Iris pattern comparison of the unauthenticated person's actual iris to a stored image -Among all possible biometrics, only three human characteristics are usually considered truly unique: • Fingerprints • Retina of the eye (blood vessel pattern) • Iris of the eye (random pattern of features found in the iris, including freckles, pits, striations, vasculature, coronas, and crypts) -Most of the technologies that scan human characteristics convert these images to some form of minutiae. Each subsequent access attempt results in a measurement that is compared with an encoded value to verify the user's identity. -A problem with this method is that some human characteristics can change over time due to normal development, injury, or illness, which means that system designers must create fallback or failsafe authentication mechanisms. -Signature and voice recognition technologies are also considered to be biometric access control measures. 
-Currently, the technology for signature capturing is much more widely accepted than that for signature comparison. -Voice recognition works in a similar fashion; the system captures and stores an initial voice-print of the user reciting a phrase.
Something you are or can produce
This authentication factor relies on individual characteristics, such as fingerprints, palm prints, hand topography, hand geometry, or retina and iris scans, or something an unverified user can produce on demand, such as voice patterns, signatures, or keyboard kinetic measurements. -Some of these characteristics are known collectively as biometrics. -Certain critical logical or physical areas may require the use of strong authentication—at least two authentication mechanisms drawn from two different factors of authentication, most often something you have and something you know. -For example, access to a bank's ATM services requires a banking card plus a PIN. Such systems are called two-factor authentication because two separate mechanisms are used. Strong authentication requires that at least one of the mechanisms be something other than what you know.
Something you have
This authentication factor relies on something an unverified user or system has and can produce when necessary. -One example is dumb cards, such as ID cards or ATM cards with magnetic stripes that contain the digital (and often encrypted) user PIN, which is compared against the number the user enters. -The smart card contains a computer chip that can verify and validate several pieces of information instead of just a PIN. -Another common device is the token—a card or key fob with a computer chip and a liquid crystal display that shows a computer-generated number used to support remote login authentication. -Tokens are synchronous or asynchronous. -Synchronous tokens are synchronized with a server; both the server and token use the same time or a time-based database to generate a number that must be entered during the user login phase. -Asynchronous tokens don't require that the server and tokens maintain the same time setting. Instead, they use a challenge/response system, in which the server challenges the unauthenticated entity during login with a numerical sequence.
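A synchronous token can be sketched with a shared secret and a shared 30-second time window, similar in spirit to TOTP (RFC 6238). This is a simplified illustration, not a production implementation:

```python
# Both the server and the token compute the same one-time number from a
# shared secret and the current time window, so no challenge is needed.

import hashlib
import hmac
import struct

def token_code(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    counter = int(t // step)                        # shared 30-second window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret"
t = 1_000_000.0
# Token and server agree because they compute over the same time window:
print(token_code(secret, t) == token_code(secret, t + 10))   # -> True
```

An asynchronous token would instead HMAC a server-supplied challenge number rather than a time counter, which is why clock synchronization is not required in that design.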
Authentication Factors
Three mechanisms that provide authentication based on something an unauthenticated entity knows, something an unauthenticated entity has, and something an unauthenticated entity is.
Tunnel Mode
Tunnel mode establishes two perimeter tunnel servers to encrypt all traffic that will traverse an unsecured network. In tunnel mode, the entire client packet is encrypted and added as the data portion of a packet addressed from one tunneling server to another. The receiving server decrypts the packet and sends it to the final address. The primary benefit of this model is that an intercepted packet reveals nothing about the true destination system. -The process is straightforward. First, the user connects to the Internet through an ISP or direct network connection. Second, the user establishes the link with the remote VPN server.
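Conceptually, tunnel-mode encapsulation looks like the sketch below. XOR is used as a stand-in cipher so the example stays dependency-free (a real VPN would use IPSec), and the server names and packet format are invented:

```python
# The entire client packet (header and payload) becomes the encrypted
# data portion of a new packet addressed between the two tunnel servers.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # toy symmetric cipher; XOR with the key both encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def tunnel_encapsulate(client_packet: bytes, key: bytes) -> dict:
    return {
        "src": "tunnel-server-A",                 # only tunnel endpoints
        "dst": "tunnel-server-B",                 # are visible on the wire
        "data": xor_cipher(client_packet, key),   # whole inner packet hidden
    }

def tunnel_decapsulate(outer: dict, key: bytes) -> bytes:
    return xor_cipher(outer["data"], key)

inner = b"src=10.0.0.5 dst=10.0.9.9 payload=hello"  # true addresses stay secret
key = b"sharedkey"
outer = tunnel_encapsulate(inner, key)
print(tunnel_decapsulate(outer, key) == inner)   # -> True
```

Notice that the outer packet's only routable addresses are the two tunnel servers, which is exactly why an intercepted packet reveals nothing about the true destination.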
Covert Channels
Unauthorized or unintended methods of communications hidden inside a computer system. -Covert channels could be used by attackers who seek to exfiltrate sensitive data without being detected. Data loss prevention technologies monitor standard and covert channels to attempt to reduce an attacker's ability to accomplish exfiltration. -TCSEC defines two kinds of covert channels: • Storage channels, which are used in steganography, and in the embedding of data in TCP or IP header fields. • Timing channels, which are used in a system that places a long pause between packets to signify a 1 and a short pause between packets to signify a 0.
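The timing-channel idea can be illustrated in a few lines: the sender encodes bits as long or short gaps between packets, and the receiver decodes by thresholding the observed delays. The gap durations here are simulated values, not real network timings:

```python
# Toy timing covert channel: long pause = 1, short pause = 0.

LONG, SHORT = 0.5, 0.1   # seconds between packets (illustrative values)

def encode(bits: str) -> list:
    # sender: turn a bit string into a sequence of inter-packet gaps
    return [LONG if b == "1" else SHORT for b in bits]

def decode(gaps: list, threshold: float = 0.3) -> str:
    # receiver: recover bits by comparing each gap to the threshold
    return "".join("1" if g > threshold else "0" for g in gaps)

gaps = encode("1011")
print(decode(gaps))   # -> 1011
```

No payload bytes carry the hidden data, which is why monitoring packet contents alone will not reveal this channel; detection requires watching timing behavior, as data loss prevention tools attempt to do.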
TCSEC's Trusted Computing Base (TCB)
Under the Trusted Computer System Evaluation Criteria (TCSEC), the combination of all hardware, firmware, and software responsible for enforcing the security policy. -The Trusted Computer System Evaluation Criteria (TCSEC) is an older DoD standard that defines the criteria for assessing the access controls in a computer system. -This standard is part of a larger series of standards collectively referred to as the Rainbow Series because of the color coding used to uniquely identify each document. -TCSEC is also known as the "Orange Book" and is considered the cornerstone of the Rainbow Series. TCSEC uses the concept of the trusted computing base (TCB) to enforce security policy. In this context, "security policy" refers to the rules of configuration for a system rather than a managerial guidance document. -TCB is only as effective as its internal control mechanisms and the administration of the systems being configured. TCB is made up of the hardware and software that have been implemented to provide security for a particular information system. This usually includes the operating system kernel and a specified set of security utilities, such as the user login subsystem. -The term "trusted" can be misleading—in this context, it means that a component is part of TCB's security system, but not that it is necessarily trustworthy. -Within TCB is an object known as the REFERENCE MONITOR, which is the piece of the system that manages access controls. Systems administrators must be able to audit or periodically review the reference monitor to ensure it is functioning effectively, without unauthorized modification. -One of the biggest challenges in TCB is the existence of COVERT CHANNELS, which are defined above; TCSEC recognizes both storage channels and timing channels.
Selecting the Right Firewall
When trying to determine the best firewall for an organization, you should consider the following questions: 1. Which type of firewall technology offers the right balance between protection and cost for the needs of the organization? 2. What features are included in the base price? What features are available at extra cost? Are all cost factors known? 3. How easy is it to set up and configure the firewall? Does the organization have staff on hand who are trained to configure the firewall, or would the hiring of additional employees (or contractors or managed service providers) be required? 4. Can the firewall adapt to the growing network in the target organization? -The most important factor, of course, is the extent to which the firewall design provides the required protection. The next most important factor is cost, which may keep a certain make, model, or type of firewall out of reach.
Reference Monitor
Within TCB, a conceptual piece of the system that manages access controls; in other words, it mediates all access to objects by subjects.
EAL Scale
• EAL1: Functionally Tested: Confidence in operation against non-serious threats • EAL2: Structurally Tested: More confidence required but comparable with good business practices • EAL3: Methodically Tested and Checked: Moderate level of security assurance • EAL4: Methodically Designed, Tested, and Reviewed: Rigorous level of security assurance but still economically feasible without specialized development • EAL5: Semi-formally Designed and Tested: Certification requires specialized development above standard commercial products • EAL6: Semi-formally Verified Design and Tested: Specifically designed security ToE • EAL7: Formally Verified Design and Tested: Developed for extremely high-risk situations or for high-value systems.
The 4 fundamental functions of access control systems
• Identification: I am a user of the system. • Authentication: I can prove I'm a user of the system. • Authorization: Here's what I can do with the system. • Accountability: You can track and monitor my use of the system.
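These four functions can be strung together in a toy end-to-end sketch; all usernames, credentials, and permissions are invented, and a real system would use salted password hashing rather than bare SHA-256:

```python
# Identification -> authentication -> authorization -> accountability,
# in order, with every outcome recorded in an audit log.

import hashlib

USERS = {"jdoe": hashlib.sha256(b"s3cret").hexdigest()}   # identifier -> credential
PERMS = {"jdoe": {"read_reports"}}                        # identifier -> access levels
AUDIT_LOG = []                                            # accountability trail

def access(user_id, password, action):
    if user_id not in USERS:                              # identification
        outcome = "unknown identifier"
    elif hashlib.sha256(password.encode()).hexdigest() != USERS[user_id]:  # authentication
        outcome = "authentication failed"
    elif action not in PERMS.get(user_id, set()):         # authorization
        outcome = "not authorized"
    else:
        outcome = "granted"
    AUDIT_LOG.append((user_id, action, outcome))          # accountability
    return outcome

print(access("jdoe", "s3cret", "read_reports"))   # -> granted
print(AUDIT_LOG[-1])
```

Note that the audit log records failed attempts as well as successes, matching the definition of accountability above: every action, authorized or unauthorized, is attributable to an identity.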