CCNA4 Chapter 6 Quality of Service


best-effort model

provides best-effort packet delivery with no guarantee and treats all network packets with the same priority

Marking at Layer 2

802.1Q is the IEEE standard that supports VLAN tagging at layer 2 on Ethernet networks. When 802.1Q is implemented, two fields are added to the Ethernet Frame. As shown in Figure 1, these two fields are inserted into the Ethernet frame following the source MAC address field. The 802.1Q standard also includes the QoS prioritization scheme known as IEEE 802.1p. The 802.1p standard uses the first three bits in the Tag Control Information (TCI) field. Known as the Priority (PRI) field, this 3-bit field identifies the Class of Service (CoS) markings. Three bits means that a Layer 2 Ethernet frame can be marked with one of eight levels of priority (values 0-7) as displayed in Figure 2.
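The PRI extraction described above can be sketched in a few lines of Python. This is an illustrative bit-field example, not device code; the sample TCI value is hypothetical.

```python
# Illustrative sketch: extracting the 802.1p PRI (CoS) bits from a 16-bit
# 802.1Q Tag Control Information (TCI) field. The PRI field occupies the
# 3 most significant bits, giving CoS values 0-7.
def cos_from_tci(tci: int) -> int:
    return (tci >> 13) & 0b111  # shift past the DEI bit and 12-bit VLAN ID

# Hypothetical TCI 0xA00A: PRI bits 101 (binary) = CoS 5, VLAN ID 10
print(cos_from_tci(0xA00A))  # 5
```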

Differentiated services (DiffServ)

As its name suggests, DiffServ differentiates between multiple traffic flows. Specifically, packets are marked, and routers and switches can then make decisions (for example, dropping or forwarding decisions) based on those markings.

Classification and Marking

Before a packet can have a QoS policy applied to it, the packet has to be classified. Classification and marking allow us to identify or "mark" types of packets. Classification determines the class of traffic to which packets or frames belong. Only after traffic is marked can policies be applied to it. How a packet is classified depends on the QoS implementation. Methods of classifying traffic flows at Layers 2 and 3 include using interfaces, ACLs, and class maps. Traffic can also be classified at Layers 4 to 7 using Network Based Application Recognition (NBAR). Note: NBAR is a classification and protocol discovery feature of Cisco IOS software that works with QoS features. NBAR is out of scope for this course. Marking means that we are adding a value to the packet header. Devices receiving the packet look at this field to see if it matches a defined policy. Marking should be done as close to the source device as possible. This establishes the trust boundary. How traffic is marked usually depends on the technology. The table in the figure describes some of the marking fields used in various technologies. The decision of whether to mark traffic at Layer 2 or Layer 3 (or both) is not trivial and should be made after considering the following points: Layer 2 marking of frames can be performed for non-IP traffic. Layer 2 marking of frames is the only QoS option available for switches that are not "IP aware". Layer 3 marking will carry the QoS information end-to-end.
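The classify-then-mark idea can be sketched as a simple match function. This is a hypothetical policy, not a real device configuration: the port ranges and DSCP assignments below are illustrative conventions, and real equipment classifies with ACLs, class maps, or NBAR.

```python
# Hypothetical classification sketch: match a packet's destination port
# against simple criteria and return the DSCP value to mark it with.
def classify(dst_port: int) -> int:
    if 16384 <= dst_port <= 32767:   # common RTP voice range (assumed policy)
        return 46                    # EF - expedited forwarding for voice
    if dst_port in (80, 443):        # web traffic (assumed policy)
        return 18                    # AF21 - transactional data class
    return 0                         # default - best effort

print(classify(16400))  # 46 (marked EF)
print(classify(443))    # 18 (marked AF21)
print(classify(22))     # 0  (best effort)
```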

Best-Effort benefits and drawbacks

Benefits: The model is the most scalable. Scalability is limited only by bandwidth limits, in which case all traffic is equally affected. No special QoS mechanisms are required. It is the easiest and quickest model to deploy. Drawbacks: There are no guarantees of delivery. Packets will arrive whenever they can and in any order possible, if they arrive at all. No packets have preferential treatment. Critical data is treated the same as casual email.

Integrated Services benefits and drawbacks

Benefits: Explicit end-to-end resource admission control. Per-request policy admission control. Signaling of dynamic port numbers. Drawbacks: Resource-intensive due to the stateful architecture requirement for continuous signaling. The flow-based approach is not scalable to large implementations such as the Internet.

Differentiated Services benefits and drawbacks

Benefits: Highly scalable. Provides many different levels of quality. Drawbacks: No absolute guarantee of service quality. Requires a set of complex mechanisms to work in concert throughout the network.

Class-Based Weighted Fair Queuing (CBWFQ)

CBWFQ extends the standard WFQ functionality to provide support for user-defined traffic classes. For CBWFQ, you define traffic classes based on match criteria including protocols, access control lists (ACLs), and input interfaces. Packets satisfying the match criteria for a class constitute the traffic for that class. A FIFO queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class, as shown in the figure. When a class has been defined according to its match criteria, you can assign it characteristics. To characterize a class, you assign it bandwidth, weight, and maximum packet limit. The bandwidth assigned to a class is the guaranteed bandwidth delivered to the class during congestion. To characterize a class, you also specify the queue limit for that class, which is the maximum number of packets allowed to accumulate in the queue for the class. Packets belonging to a class are subject to the bandwidth and queue limits that characterize the class. After a queue has reached its configured queue limit, adding more packets to the class causes tail drop or packet drop to take effect, depending on how class policy is configured. Tail drop means a router simply discards any packet that arrives at the tail end of a queue that has completely used up its packet-holding resources. This is the default queuing response to congestion. Tail drop treats all traffic equally and does not differentiate between classes of service.
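The queue-limit and tail-drop behavior described above can be sketched as a minimal per-class queue. This is an illustrative model, not Cisco IOS behavior in detail; the queue limit is an arbitrary example value.

```python
from collections import deque

# Sketch of a per-class FIFO queue with a configured queue limit: once the
# limit is reached, arriving packets are tail-dropped (the default response).
class ClassQueue:
    def __init__(self, queue_limit: int):
        self.queue = deque()
        self.queue_limit = queue_limit
        self.tail_drops = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.queue_limit:
            self.tail_drops += 1   # queue full: discard the arriving packet
            return False
        self.queue.append(packet)
        return True

q = ClassQueue(queue_limit=2)
for pkt in ["p1", "p2", "p3"]:
    q.enqueue(pkt)
print(q.tail_drops)  # 1 (the third packet was tail-dropped)
```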

Congestion Avoidance

Congestion management includes queuing and scheduling methods where excess traffic is buffered or queued (and sometimes dropped) while it waits to be sent on an egress interface. Congestion avoidance tools are simpler. They monitor network traffic loads in an effort to anticipate and avoid congestion at common network and internetwork bottlenecks before congestion becomes a problem. These tools can monitor the average depth of the queue, as represented in the figure. When the queue is below the minimum threshold, there are no drops. As the queue fills up to the maximum threshold, a small percentage of packets are dropped. When the maximum threshold is passed, all packets are dropped. Some congestion avoidance techniques provide preferential treatment for which packets will get dropped. For example, Cisco IOS QoS includes weighted random early detection (WRED) as a possible congestion avoidance solution. The WRED algorithm allows for congestion avoidance on network interfaces by providing buffer management and allowing TCP traffic to decrease, or throttle back, before buffers are exhausted. Using WRED helps avoid tail drops and maximizes network use and TCP-based application performance. There is no congestion avoidance for User Datagram Protocol (UDP)-based traffic, such as voice traffic. In case of UDP-based traffic, methods such as queuing and compression techniques help to reduce and even prevent UDP packet loss.
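The threshold behavior described above (no drops below the minimum, a rising drop probability between thresholds, full drops past the maximum) can be sketched as a RED-style drop decision. The thresholds and maximum probability here are illustrative assumptions, not Cisco defaults, and real WRED applies different profiles per traffic class.

```python
import random

# Sketch of the RED/WRED drop decision based on average queue depth.
# min_th, max_th, and max_prob are illustrative values.
def wred_drop(avg_depth: float, min_th=20, max_th=40, max_prob=0.1) -> bool:
    if avg_depth < min_th:
        return False              # below minimum threshold: never drop
    if avg_depth >= max_th:
        return True               # past maximum threshold: drop everything
    # between thresholds: drop probability ramps linearly up to max_prob
    p = max_prob * (avg_depth - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(10))  # False (queue below min threshold)
print(wred_drop(50))  # True  (queue past max threshold)
```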

Selecting an Appropriate QoS Policy Model

How can QoS be implemented in a network? The three models for implementing QoS are the best-effort model, integrated services (IntServ), and differentiated services (DiffServ).

Marking at Layer 3

IPv4 and IPv6 specify an 8-bit field in their packet headers to mark packets. As shown in Figure 1, both IPv4 and IPv6 support an 8-bit field for marking, the Type of Service (ToS) field for IPv4 and the Traffic Class field for IPv6. These fields are used to carry the packet marking as assigned by the QoS classification tools. The field is then referred to by receiving devices to forward the packets based on the appropriate assigned QoS policy. Figure 2 displays the contents of the 8-bit field. In RFC 791, the original IP standard specified the IP Precedence (IPP) field to be used for QoS markings. However, in practice, these three bits did not provide enough granularity to implement QoS. RFC 2474 supersedes RFC 791 and redefines the ToS field by renaming and extending the IPP field. The new field, as shown in Figure 2, has 6-bits allocated for QoS. Called the Differentiated Services Code Point (DSCP) field, these six bits offer a maximum of 64 possible classes of service. The remaining two IP Extended Congestion Notification (ECN) bits can be used by ECN-aware routers to mark packets instead of dropping them. The ECN marking informs downstream routers that there is congestion in the packet flow. The 64 DSCP values are organized into three categories: Best-Effort (BE) - This is the default for all IP packets. The DSCP value is 0. The per-hop behavior is normal routing. When a router experiences congestion, these packets will be dropped. No QoS plan is implemented. Expedited Forwarding (EF) - RFC 3246 defines EF as the DSCP decimal value 46 (binary 101110). The first 3 bits (101) map directly to the Layer 2 CoS value 5 used for voice traffic. At Layer 3, Cisco recommends that EF only be used to mark voice packets. Assured Forwarding (AF) - RFC 2597 defines AF to use the 5 most significant DSCP bits to indicate queues and drop preference. As shown in Figure 3, the first 3 most significant bits are used to designate the class. 
Class 4 is the best queue and Class 1 is the worst queue. The 4th and 5th most significant bits are used to designate the drop preference. The 6th most significant bit is set to zero. The AFxy formula shows how the AF values are calculated. For example, AF32 belongs to class 3 (binary 011) and has a medium drop preference (binary 10). The full DSCP value is 28 because you include the 6th 0 bit (binary 011100). Because the first 3 most significant bits of the DSCP field indicate the class, these bits are also called the Class Selector (CS) bits. As shown in Figure 4, these 3 bits map directly to the 3 bits of the CoS field and the IPP field to maintain compatibility with 802.1p and RFC 791. The table in Figure 5 shows how the CoS values map to the Class Selectors and the corresponding DSCP 6-bit value. This same table can be used to map IPP values to the Class Selectors.
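The AFxy formula above reduces to simple bit arithmetic: class x occupies the 3 most significant bits, drop preference y the next 2 bits, and the last bit is 0, so the DSCP value is 8x + 2y. A minimal sketch:

```python
# AFxy -> DSCP: class x in the 3 most significant bits, drop preference y
# in the next 2 bits, least significant bit zero, i.e. DSCP = 8x + 2y.
def af_to_dscp(x: int, y: int) -> int:
    assert 1 <= x <= 4 and 1 <= y <= 3, "AF defines classes 1-4, drops 1-3"
    return (x << 3) | (y << 1)   # equivalently 8*x + 2*y

print(af_to_dscp(3, 2))  # AF32 -> 28 (binary 011100), as in the example above
print(af_to_dscp(4, 1))  # AF41 -> 34
```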

QoS Models

In a very general understanding of QoS mechanisms, let's first look at the QoS models. The best effort model is not really an implementation of QoS, as packets are delivered on a best-effort basis. QoS is not really required or configured. The integrated services model, or IntServ model, provides a very high degree of QoS to IP packets with guaranteed delivery. It uses a signaling process known as RSVP, the Resource Reservation Protocol. The IntServ model can severely limit the scalability of a network, and it is demanding on resources, and therefore doesn't scale well for large or enterprise networks. The differentiated services model, or DiffServ model, is a highly scalable and flexible implementation of QoS. It works off of traffic classes that are manually configured on routers throughout the network. If we look at some of the benefits and drawbacks of the best effort model, you can see that, under benefits, no special QoS mechanisms are required, and it is the easiest and quickest model to deploy. However, under drawbacks, notice that there are no guarantees of packet delivery. No packets have preferential treatment, and critical data is treated the same as casual email is treated. Basically, this is a non-QoS solution. Now compare this to the integrated services model, looking at some of the benefits and drawbacks. Under benefits you can see that the IntServ model has tighter QoS for real-time traffic and uses the signaling protocol RSVP, the Resource Reservation Protocol. It requires end-to-end signaling and per-request admission control. It uses packet classification, policing, queuing, and scheduling. Under drawbacks, notice that the IntServ flow-based approach is not scalable to large implementations such as the internet, and the integrated services model is rarely deployed by itself. Looking at this diagram, we can see that end-to-end signaling is required in the integrated services model.
RSVP is implemented on the routers within the network and can be implemented on the hosts as well. Notice the QoS-aware nodes. Sessions and resources are dynamically reserved, one flow at a time. Now, looking at the differentiated services model, we can see some of the benefits and drawbacks. Under benefits, the differentiated services model provides better QoS scalability. It is a defined, class-based approach defining the policy and priority at the routers, known as per-hop behavior, or PHB. It uses packet marking directly in the packets. In other words, packets are marked at the routers. It can also use the NBAR or NBAR2 network-based application recognition services. It uses the six-bit differentiated services code point, or DSCP, in the IP header, and is also used with IPv6. Under drawbacks, it doesn't provide the absolute guarantee of services that integrated services does, and it requires a set of complex mechanisms to work throughout the network. This is why differentiated services is often used with integrated services. The integrated services and differentiated services models are not exclusive, and they can be used together. In this diagram of a differentiated services example, you can see that the nodes are unaware that QoS is being implemented across the network. That's because a manual QoS policy has been configured on the routers in the network. Notice that as traffic goes across the network, the traffic is classified and colored.

First In First Out (FIFO)

In its simplest form, FIFO queuing, also known as first-come, first-served (FCFS) queuing, involves buffering and forwarding of packets in the order of arrival. FIFO has no concept of priority or classes of traffic and consequently, makes no decision about packet priority. There is only one queue, and all packets are treated equally. Packets are sent out an interface in the order in which they arrive, as shown in the figure. Although some traffic is more important or time-sensitive based on the priority classification, notice that the traffic is sent out in the order it is received. When FIFO is used, important or time-sensitive traffic can be dropped when congestion occurs on the router or switch interface. When no other queuing strategies are configured, all interfaces except serial interfaces at E1 (2.048 Mbps) and below use FIFO by default. (Serial interfaces at E1 and below use WFQ by default.) FIFO, which is the fastest method of queuing, is effective for large links that have little delay and minimal congestion. If your link has very little congestion, FIFO queuing may be the only queuing you need to use.
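FIFO's defining property — departure order equals arrival order, regardless of any priority a packet carries — can be shown in miniature. A minimal sketch using Python's `deque`:

```python
from collections import deque

# FIFO in miniature: packets leave in exactly the order they arrived,
# even though some carry a higher priority classification.
fifo = deque()
for pkt in [("voice", "high"), ("email", "low"), ("video", "high")]:
    fifo.append(pkt)

order = [fifo.popleft()[0] for _ in range(len(fifo))]
print(order)  # ['voice', 'email', 'video'] - priority is ignored
```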

Network Traffic Trends

In the early 2000s, the predominant types of IP traffic were voice and data. Voice traffic has a predictable bandwidth need and known packet arrival times. Data traffic is not real-time and has an unpredictable bandwidth need. Data traffic can temporarily burst, as when a large file is being downloaded. This bursting can consume the entire bandwidth of a link. More recently, video traffic has become increasingly important to business communications and operations. According to the Cisco Visual Networking Index (VNI), video traffic represented 67% of all traffic in 2014. By 2019, video will represent 80% of all traffic. In addition, mobile video traffic will increase over 600% from 113,672 TB to 768,334 TB. The types of demands that voice, video, and data traffic place on the network are very different.

Compare QoS Models

Integrated services: Per-request policy admission control. Signaling of dynamic port numbers such as H.323. Resource-intensive due to the stateful architecture requirement for continuous signaling. The flow-based approach is not scalable to large implementations such as the Internet. Best effort: No special QoS mechanisms are required. Scalability is limited only by bandwidth limits, in which case all traffic is equally affected. There are no guarantees of delivery. Packets will arrive whenever they can and in any order possible, if they arrive at all. No packets have preferential treatment. Critical data is treated the same as casual email. Differentiated services: Highly scalable. Provides many different levels of quality. No absolute guarantee of service quality. Requires a set of complex mechanisms to work in concert throughout the network.

Compare Queuing Algorithms

LLQ: The bandwidth assigned to the packets of a class determines the order in which packets are sent. Allows delay-sensitive data such as voice to be sent before packets in other queues. WFQ: Simultaneously schedules interactive traffic to the front of the queue to reduce response time. Applies priority, or weights, to identify traffic and classify it into conversations or flows. An automated scheduling method that provides fair bandwidth allocation to all network traffic. Classifies traffic into different flows based on packet header addressing. FIFO: Important or time-sensitive traffic can be dropped when congestion occurs on the router or switch interface. Effective for large links that have little delay and minimal congestion. CBWFQ: Provides support for user-defined traffic classes. Packets satisfying the match criteria for a class constitute the traffic for that class. A FIFO queue is reserved for each class, and traffic belonging to a class is directed to the queue for that class.

Data

Most applications use either TCP or UDP. Unlike UDP, TCP performs error recovery. Data applications that have no tolerance for data loss, such as email and web pages, use TCP to ensure that, if packets are lost in transit, they will be resent. Data traffic can be smooth or bursty. Network control traffic is usually smooth and predictable. When there is a topology change, the network control traffic may burst for a few seconds. But the capacity of today's networks can easily handle the increase in network control traffic as the network converges. However, some TCP applications can be very greedy, consuming a large portion of network capacity. FTP will consume as much bandwidth as it can get when you download a large file, such as a movie or game. Figure 1 summarizes data traffic characteristics. Although data traffic is relatively insensitive to drops and delays compared to voice and video, a network administrator still needs to consider the quality of the user experience, sometimes referred to as Quality of Experience or QoE. The two main questions a network administrator needs to ask about the flow of data traffic are the following: Does the data come from an interactive application? Is the data mission critical? Figure 2 compares these two factors.

Bandwidth, Congestion, Delay, and Jitter

Network bandwidth is measured in the number of bits that can be transmitted in a single second, or bits per second (bps). For example, a network device may be described as having the capability to perform at 10 gigabits per second (Gbps). Network congestion causes delay. An interface experiences congestion when it is presented with more traffic than it can handle. Network congestion points are strong candidates for QoS mechanisms. Figure 1 shows three examples of typical congestion points. Delay or latency refers to the time it takes for a packet to travel from the source to the destination. The two types of delays are fixed and variable. A fixed delay is a specific amount of time a specific process takes, such as how long it takes to place a bit on the transmission media. A variable delay takes an unspecified amount of time and is affected by factors such as how much traffic is being processed. Jitter is the variation in the delay of received packets. At the sending side, packets are sent in a continuous stream with the packets spaced evenly apart. Due to network congestion, improper queuing, or configuration errors, the delay between each packet can vary instead of remaining constant. Both delay and jitter need to be controlled and minimized to support real-time and interactive traffic.
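Jitter as "variation in delay" can be made concrete with a small worked example. The arrival timestamps below are hypothetical: the sender spaces packets exactly 20 ms apart, and the varying inter-arrival gaps at the receiver are the jitter.

```python
# Sketch: jitter as variation in inter-arrival time. Sender spacing is a
# constant 20 ms; the receive timestamps (ms) are hypothetical.
arrivals = [0, 21, 39, 62, 80]
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
jitter = [abs(g - 20) for g in gaps]   # deviation from the 20 ms spacing

print(gaps)    # [21, 18, 23, 18]
print(jitter)  # [1, 2, 3, 2] - nonzero values mean the stream is jittery
```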

Integrated services (IntServ)

IntServ is often referred to as hard QoS because it can make strict bandwidth reservations. IntServ uses signaling among network devices to provide bandwidth reservations. Resource Reservation Protocol (RSVP) is an example of an IntServ approach to QoS. Because IntServ must be configured on every router along a packet's path, a primary drawback of IntServ is its lack of scalability.

Network Transmission Quality Terminology

Packet loss - happens when congestion occurs. Serialization delay - the fixed amount of time it takes to transmit a frame from a NIC to the wire. Queue - holds packets in memory until resources become available to transmit them. Bandwidth - the number of bits that can be transmitted in a single second. Code delay - the fixed amount of time it takes to compress data at the source before transmitting it to the internetworking device. Congestion - when the demand for bandwidth exceeds the amount available. Jitter - caused by variations in delay. Propagation delay - the variable amount of time it takes for a frame to traverse the links between the source and the destination.

Avoiding Packet Loss

Packet loss is usually the result of congestion on an interface. Most applications that use TCP experience slowdown because TCP automatically adjusts to network congestion. Dropped TCP segments cause TCP sessions to reduce their window sizes. Some applications do not use TCP and cannot handle drops (fragile flows). The following approaches can prevent drops in sensitive applications: Increase link capacity to ease or prevent congestion. Guarantee enough bandwidth and increase buffer space to accommodate bursts of traffic from fragile flows. Several mechanisms in Cisco IOS QoS software can guarantee bandwidth and provide prioritized forwarding to drop-sensitive applications; examples include WFQ, CBWFQ, and LLQ. Prevent congestion by dropping lower-priority packets before congestion occurs. Cisco IOS QoS provides queuing mechanisms that start dropping lower-priority packets before congestion occurs; an example is weighted random early detection (WRED).

The Purpose of QoS

QoS, what is it and why is it needed? QoS, or Quality of Service, allows us to prioritize certain types of traffic over others. Different types of traffic place different demands on the network. Video traffic and voice traffic require greater resources from the network. They require more bandwidth to achieve the type of quality that is needed in a phone call or streaming video. Financial transactions are time sensitive and they have greater needs than, let's say, a web page or regular data traffic like sending an email. In this diagram, we're given a general concept of how it works. Packets are buffered at the router and three priority queues have been established: a High Priority Queue, a Medium Priority Queue, and a Low Priority Queue. Voice over IP traffic in the High Priority Queue is given a higher priority so more of those packets are allowed to be forwarded across the network. Financial transactions, which are time sensitive as well, are also given a greater priority so more of those are allowed. Then lastly, any leftover bandwidth is used for the static web page in the Low Priority Queue. When do we need QoS? We need QoS at points in the network where congestion is experienced. This could be points where you have an aggregation of many links, let's say many computers or many users, all having to go up a single uplink or across a single wire. You can also have situations where there's a speed mismatch, where you're going from a faster link to a slower link, and also as you cross from a LAN to a WAN or from the WAN to the LAN, going from network to network across a gateway router. Without QoS, packets are processed as they come in. When we have network congestion or variations in delay, which cause jitter, we experience packet loss. Now, if you're trying to stream a voice call or audio stream and you're experiencing excessive jitter or variations, the playout delay buffer can't tolerate it and packets are dropped due to that excessive jitter.
Now, too many packets being dropped results in your call dropping out. With QoS, more voice packets are processed or forwarded because they're in the High Priority Queue and it's been configured for zero packet loss. You can still have variations in delay, and you can have jitter, but the audio stream is able to make up for that, and the playout delay buffer can send out a constant stream of audio information, resulting in a call that experiences zero drop-off.

Prioritizing Traffic

Quality of Service (QoS) is an ever-increasing requirement of networks today. New applications available to users, such as voice and live video transmissions, create higher expectations for quality delivery. Congestion occurs when multiple communication lines aggregate onto a single device such as a router, and then much of that data is placed on fewer outbound interfaces, or onto a slower interface. Congestion can also occur when large data packets prevent smaller packets from being transmitted in a timely manner. When the volume of traffic is greater than what can be transported across the network, devices queue, or hold, the packets in memory until resources become available to transmit them. Queuing packets causes delay because new packets cannot be transmitted until previous packets have been processed. If the number of packets to be queued continues to increase, the memory within the device fills up and packets are dropped. One QoS technique that can help with this problem is to classify data into multiple queues, as shown in the figure. Note: A device implements QoS only when it is experiencing some type of congestion.

Classification and marking tools

Sessions, or flows, are analyzed to determine which class they belong to. Once the class is determined, the packets are marked.

Low Latency Queuing (LLQ)

The LLQ feature brings strict priority queuing (PQ) to CBWFQ. Strict PQ allows delay-sensitive data such as voice to be sent before packets in other queues. LLQ provides strict priority queuing for CBWFQ, reducing jitter in voice conversations, as shown in the figure. Without LLQ, CBWFQ provides WFQ based on defined classes with no strict priority queue available for real-time traffic. The weight for a packet belonging to a specific class is derived from the bandwidth you assigned to the class when you configured it. Therefore, the bandwidth assigned to the packets of a class determines the order in which packets are sent. All packets are serviced fairly based on weight; no class of packets may be granted strict priority. This scheme poses problems for voice traffic that is largely intolerant of delay, especially variation in delay. For voice traffic, variations in delay introduce irregularities of transmission manifesting as jitter in the heard conversation. With LLQ, delay-sensitive data such as voice is sent first, before packets in other queues are treated, giving delay-sensitive data preferential treatment over other traffic. Although it is possible to enqueue various types of real-time traffic to the strict priority queue, Cisco recommends that only voice traffic be directed to the priority queue.
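The scheduling rule — drain the strict priority queue completely before touching any class queue — can be sketched as follows. This is a simplified model: real LLQ services the class queues according to their configured weights, while this sketch just visits them in order. The queue names and packet labels are hypothetical.

```python
from collections import deque

# Sketch of LLQ scheduling: the strict priority queue (voice) is always
# emptied before any CBWFQ class queue is serviced.
priority_q = deque(["voice1", "voice2"])
class_qs = {"data": deque(["d1", "d2"]), "video": deque(["v1"])}

def dequeue():
    if priority_q:                 # strict priority: voice always goes first
        return priority_q.popleft()
    for q in class_qs.values():    # then the class queues (simplified order,
        if q:                      # not weighted as in real CBWFQ)
            return q.popleft()
    return None

sent = [dequeue() for _ in range(5)]
print(sent)  # ['voice1', 'voice2', 'd1', 'd2', 'v1']
```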

Queuing Overview

The QoS policy implemented by the network administrator becomes active when congestion occurs on the link. Queuing is a congestion management tool that can buffer, prioritize, and, if required, reorder packets before being transmitted to the destination. A number of queuing algorithms are available. For the purposes of this course, we will focus on the following: First-In, First-Out (FIFO) Weighted Fair Queuing (WFQ) Class-Based Weighted Fair Queuing (CBWFQ) Low Latency Queuing (LLQ)

Best-Effort

The basic design of the Internet provides for best-effort packet delivery and provides no guarantees. This approach is still predominant on the Internet today and remains appropriate for most purposes. The best-effort model treats all network packets in the same way, so an emergency voice message is treated the same way a digital photograph attached to an email is treated. Without QoS, the network cannot tell the difference between packets and, as a result, cannot treat packets preferentially. The best-effort model is similar in concept to sending a letter using standard postal mail. Your letter is treated exactly the same as every other letter. With the best-effort model, the letter may never arrive, and, unless you have a separate notification arrangement with the letter recipient, you may never know that the letter did not arrive.

Differentiated Services

The differentiated services (DiffServ) QoS model specifies a simple and scalable mechanism for classifying and managing network traffic and providing QoS guarantees on modern IP networks. For example, DiffServ can provide low-latency guaranteed service to critical network traffic such as voice or video while providing simple best-effort traffic guarantees to non-critical services such as web traffic or file transfers. The DiffServ design overcomes the limitations of both the best-effort and IntServ models. The DiffServ model is described in RFCs 2474, 2597, 2598, 3246, 4594. DiffServ can provide an "almost guaranteed" QoS while still being cost-effective and scalable. The DiffServ model is similar in concept to sending a package using a delivery service. You request (and pay for) a level of service when you send a package. Throughout the package network, the level of service you paid for is recognized and your package is given either preferential or normal service, depending on what you requested. DiffServ is not an end-to-end QoS strategy because it cannot enforce end-to-end guarantees. However, DiffServ QoS is a more scalable approach to implementing QoS. Unlike IntServ and hard QoS in which the end-hosts signal their QoS needs to the network, DiffServ does not use signaling. Instead, DiffServ uses a "soft QoS" approach. It works on the provisioned-QoS model, where network elements are set up to service multiple classes of traffic each with varying QoS requirements. Figure 1 is a simple illustration of the DiffServ model. As a host forwards traffic to a router, the router classifies the flows into aggregates (classes) and provides the appropriate QoS policy for the classes. DiffServ enforces and applies QoS mechanisms on a hop-by-hop basis, uniformly applying global meaning to each traffic class to provide both flexibility and scalability. 
For example, DiffServ could be configured to group all TCP flows as a single class, and allocate bandwidth for that class, rather than for the individual flows as IntServ would do. In addition to classifying traffic, DiffServ minimizes signaling and state maintenance requirements on each network node. Specifically, DiffServ divides network traffic into classes based on business requirements. Each of the classes can then be assigned a different level of service. As the packets traverse a network, each of the network devices identifies the packet class and services the packet according to that class. It is possible to choose many levels of service with DiffServ. For example, voice traffic from IP phones is usually given preferential treatment over all other application traffic, email is generally given best-effort service, and nonbusiness traffic can either be given very poor service or blocked entirely. Figure 2 lists the benefits and drawbacks of the DiffServ model. Note: Modern networks primarily use the DiffServ model. However, due to the increasing volumes of delay- and jitter-sensitive traffic, IntServ and RSVP are sometimes co-deployed.

Integrated Services

The needs of real-time applications, such as remote video, multimedia conferencing, visualization, and virtual reality, motivated the development of the IntServ architecture model in 1994 (RFC 1633, 2211, and 2212). IntServ is a multiple-service model that can accommodate multiple QoS requirements. IntServ provides a way to deliver the end-to-end QoS that real-time applications require by explicitly managing network resources to provide QoS to specific user packet streams, sometimes called microflows. It uses resource reservation and admission-control mechanisms as building blocks to establish and maintain QoS. This practice is similar to a concept known as "hard QoS." Hard QoS guarantees traffic characteristics, such as bandwidth, delay, and packet-loss rates, from end to end. Hard QoS ensures both predictable and guaranteed service levels for mission-critical applications. Figure 1 is a simple illustration of the IntServ model. IntServ uses a connection-oriented approach inherited from telephony network design. Each individual communication must explicitly specify its traffic descriptor and requested resources to the network. The edge router performs admission control to ensure that available resources are sufficient in the network. The IntServ standard assumes that routers along a path set and maintain the state for each individual communication. In the IntServ model, the application requests a specific kind of service from the network before sending data. The application informs the network of its traffic profile and requests a particular kind of service that can encompass its bandwidth and delay requirements. IntServ uses the Resource Reservation Protocol (RSVP) to signal the QoS needs of an application's traffic along devices in the end-to-end path through the network. If network devices along the path can reserve the necessary bandwidth, the originating application can begin transmitting. 
If the requested reservation fails along the path, the originating application does not send any data. The edge router performs admission control based on information from the application and available network resources. The network commits to meeting the QoS requirements of the application as long as the traffic remains within the profile specifications. The network fulfills its commitment by maintaining the per-flow state and then performing packet classification, policing, and intelligent queuing based on that state.
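The admission-control logic described above can be sketched simply: a reservation succeeds only if every hop on the path can commit the requested bandwidth, and each hop then holds per-flow state for the reservation. This is a toy sketch of the concept, not actual RSVP signaling; the link capacities are invented for illustration.

```python
# Sketch of IntServ-style admission control (not real RSVP): every hop
# on the path must be able to reserve the requested rate, or the flow
# is rejected and the application must not send data.

def admit(path_free_kbps, requested_kbps):
    """Admit the flow only if every hop can reserve the requested rate."""
    if all(free >= requested_kbps for free in path_free_kbps):
        # Commit the reservation: each hop maintains per-flow state.
        for i in range(len(path_free_kbps)):
            path_free_kbps[i] -= requested_kbps
        return True
    return False  # reservation failed somewhere along the path

path = [1000, 512, 2000]    # free bandwidth at each hop, in kbps
print(admit(path, 384))     # True: every hop can reserve 384 kbps
print(admit(path, 384))     # False: the middle hop now has only 128 kbps free
```

The second call failing at a single hop is the key point: end-to-end hard QoS requires every device on the path to participate, which is the scalability cost of IntServ.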

QoS Tools

There are three categories of QoS tools:
- Classification and marking tools
- Congestion avoidance tools
- Congestion management tools

Congestion avoidance tools

Traffic classes are allotted portions of network resources as defined by the QoS policy. The QoS policy also identifies how some traffic may be selectively dropped, delayed, or re-marked to avoid congestion. The primary congestion avoidance tool is WRED (weighted random early detection), which regulates TCP data traffic in a bandwidth-efficient manner before tail drops caused by queue overflows occur.
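The WRED idea can be illustrated with a simplified drop decision: below a minimum average queue depth nothing is dropped, between the minimum and maximum thresholds the drop probability ramps up linearly, and beyond the maximum the behavior degenerates to tail drop. This is a sketch of the concept only; the threshold values are arbitrary examples, not Cisco defaults, and real WRED weights the thresholds per traffic class.

```python
import random

# Simplified WRED drop decision (sketch, not Cisco's implementation):
#   avg depth < min_th            -> never drop
#   min_th <= avg depth < max_th  -> drop with linearly increasing probability
#   avg depth >= max_th           -> always drop (tail-drop behavior)

def wred_drop(avg_queue_depth, min_th=20, max_th=40, max_p=0.1):
    if avg_queue_depth < min_th:
        return False                      # no congestion yet
    if avg_queue_depth >= max_th:
        return True                       # queue full: behave like tail drop
    # Linear ramp of drop probability between the two thresholds.
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(10))   # False: below the minimum threshold
print(wred_drop(50))   # True: above the maximum threshold
```

By dropping a few packets early, WRED makes TCP senders slow down gradually instead of all backing off at once when the queue overflows.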

Identify QoS Mechanism Terminology

Traffic policing - When the traffic rate reaches the configured maximum rate, excess traffic is dropped.
Congestion management - Queuing and scheduling methods where excess traffic is buffered while it waits to be sent on an egress interface.
WRED algorithm - Provides buffer management and allows TCP traffic to throttle back before buffers are exhausted.
Classification - Determines the class of traffic to which packets or frames belong.
Traffic shaping - Retains excess packets in a queue and then schedules the excess for transmission over increments of time.
Marking - Adding a value to the packet header.
CoS bits - Used to identify a Layer 2 QoS marking.
802.1Q - An IEEE specification for implementing VLANs in Layer 2 switched networks.

Shaping and Policing

Traffic shaping and traffic policing are two mechanisms provided by Cisco IOS QoS software to prevent congestion. Traffic shaping retains excess packets in a queue and then schedules the excess for later transmission over increments of time. The result of traffic shaping is a smoothed packet output rate, as shown in Figure 1. Shaping implies the existence of a queue and of sufficient memory to buffer delayed packets, while policing does not. Ensure that you have sufficient memory when enabling shaping. In addition, shaping requires a scheduling function for later transmission of any delayed packets. This scheduling function allows you to organize the shaping queue into different queues; examples of scheduling functions are CBWFQ and LLQ. Shaping is an outbound concept; packets going out an interface get queued and can be shaped.

In contrast, policing is applied to inbound traffic on an interface. When the traffic rate reaches the configured maximum rate, excess traffic is dropped (or re-marked). Policing is commonly implemented by service providers to enforce a contracted committed information rate (CIR). However, the service provider may also allow bursting over the CIR if its network is not currently experiencing congestion.
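The contrast between the two mechanisms can be sketched with a single token bucket: both check each packet against available tokens, but the policer drops nonconforming packets while the shaper buffers them for later transmission. This is a conceptual sketch under simplified assumptions; real platforms measure conformance with committed burst (Bc) and excess burst (Be) sizes over timed intervals.

```python
from collections import deque

# Token-bucket sketch contrasting policing and shaping (simplified; real
# implementations use Bc/Be and timed intervals). The policer drops excess
# packets; the shaper queues them so the output rate is smoothed.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.tokens = burst_bytes       # bucket starts full
        self.burst = burst_bytes

    def refill(self, elapsed_s):
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def conforms(self, pkt_bytes):
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

def police(bucket, pkt_bytes):
    """Policer: transmit conforming packets, drop (False) the excess."""
    return bucket.conforms(pkt_bytes)

def shape(bucket, queue, pkt_bytes):
    """Shaper: buffer the excess packet instead of dropping it."""
    if bucket.conforms(pkt_bytes):
        return "sent"
    queue.append(pkt_bytes)             # held for a later interval
    return "queued"

b = TokenBucket(rate_bps=8000, burst_bytes=1500)
q = deque()
print(police(b, 1500))      # True: first packet fits within the burst
print(shape(b, q, 1500))    # "queued": tokens exhausted, packet is buffered
```

The queue in the shaper is exactly the memory requirement the text warns about: every delayed packet must be held somewhere until tokens refill.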

Compare Traffic Characteristics

Voice
- Cannot be retransmitted if lost.
- Must receive a higher priority; voice uses UDP.
- Traffic is predictable and smooth.
- Does not consume a lot of network resources.

Video
- Without QoS and a significant amount of extra bandwidth capacity, quality degrades.
- Requires at least 384 kbps of bandwidth.
- Traffic can be smooth or bursty.

Data
- Can be very greedy, consuming large amounts of network capacity.

voice

Voice traffic is predictable and smooth, as shown in the figure. However, voice is very sensitive to delays and dropped packets; there is no reason to re-transmit voice if packets are lost. Therefore, voice packets must receive a higher priority than other types of traffic. For example, Cisco products use the RTP port range 16384 to 32767 to prioritize voice traffic. Voice can tolerate a certain amount of latency, jitter, and loss without any noticeable effects. Latency should be no more than 150 milliseconds (ms). Jitter should be no more than 30 ms, and voice packet loss should be no more than 1%. Voice traffic requires at least 30 Kbps of bandwidth.
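The one-way quality targets quoted above (latency no more than 150 ms, jitter no more than 30 ms, loss no more than 1%) are easy to express as a simple check. This is just a sketch for working with the numbers, not a real monitoring tool; the function name is invented.

```python
# Sketch: check measured voice-path metrics against the one-way targets
# stated in the text (latency <= 150 ms, jitter <= 30 ms, loss <= 1%).

def voice_ok(latency_ms, jitter_ms, loss_pct):
    """Return True if all three metrics are within the voice targets."""
    return latency_ms <= 150 and jitter_ms <= 30 and loss_pct <= 1.0

print(voice_ok(120, 20, 0.5))  # True: within every target
print(voice_ok(200, 20, 0.5))  # False: latency exceeds 150 ms
```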

Weighted Fair Queuing (WFQ)

WFQ is an automated scheduling method that provides fair bandwidth allocation to all network traffic. WFQ applies priority, or weights, to identified traffic and classifies it into conversations or flows, as shown in the figure. WFQ then determines how much bandwidth each flow is allowed relative to other flows. The flow-based algorithm used by WFQ simultaneously schedules interactive traffic to the front of a queue to reduce response time. It then fairly shares the remaining bandwidth among high-bandwidth flows. WFQ allows you to give low-volume, interactive traffic, such as Telnet sessions and voice, priority over high-volume traffic, such as FTP sessions. When multiple file transfer flows occur simultaneously, the transfers are given comparable bandwidth.

WFQ classifies traffic into different flows based on packet header addressing, including characteristics such as source and destination IP addresses, MAC addresses, port numbers, protocol, and Type of Service (ToS) value. The ToS value in the IP header can be used to classify traffic; ToS is discussed later in the chapter. Low-bandwidth traffic streams, which comprise the majority of traffic, receive preferential service, allowing their entire offered loads to be sent in a timely fashion. High-volume traffic streams share the remaining capacity proportionally among themselves.

Limitations: WFQ is not supported with tunneling and encryption, because these features modify the packet content information required by WFQ for classification. Although WFQ automatically adapts to changing network traffic conditions, it does not offer the degree of precision control over bandwidth allocation that CBWFQ offers.
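The two ideas behind WFQ can be sketched separately: classifying packets into flows from header fields, and dividing link bandwidth among active flows in proportion to their weights. Both functions here are illustrative sketches; the weight values and the proportional-share formula are simplified assumptions, not Cisco's exact algorithm.

```python
# Sketch of the two WFQ building blocks described above. The flow names,
# weights, and share formula are illustrative, not Cisco's implementation.

def flow_id(src_ip, dst_ip, proto, src_port, dst_port, tos):
    """Hash the header fields WFQ classifies on into a flow identifier."""
    return hash((src_ip, dst_ip, proto, src_port, dst_port, tos))

def bandwidth_share(weights, link_kbps):
    """Divide link bandwidth among active flows in proportion to weight."""
    total = sum(weights.values())
    return {flow: link_kbps * w / total for flow, w in weights.items()}

# A low-volume interactive flow is weighted above two bulk FTP transfers;
# the FTP flows split the remaining bandwidth equally between themselves.
shares = bandwidth_share({"telnet": 3, "ftp1": 1, "ftp2": 1}, link_kbps=1000)
print(shares["telnet"])   # 600.0 kbps for the interactive flow
print(shares["ftp1"])     # 200.0 kbps each for the comparable bulk flows
```

The equal shares for the two FTP flows mirror the text's point that simultaneous file transfers are given comparable bandwidth.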

Congestion management tools

When traffic exceeds available network resources, it is queued to await availability of resources. Common Cisco IOS-based congestion management tools include the CBWFQ and LLQ algorithms.

Trust Boundaries

Where should markings occur? Traffic should be classified and marked as close to its source as technically and administratively feasible. This defines the trust boundary, as shown in the figure. Trusted endpoints have the capabilities and intelligence to mark application traffic to the appropriate Layer 2 CoS and/or Layer 3 DSCP values. Examples of trusted endpoints include IP phones, wireless access points, videoconferencing gateways and systems, IP conferencing stations, and more. Secure endpoints can have traffic marked at the Layer 2 switch. Traffic can also be marked at Layer 3 switches and routers. Re-marking of traffic is typically necessary, for example, re-marking CoS values to IP Precedence or DSCP values.
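One common re-marking step at the trust boundary is translating a Layer 2 CoS value (3 bits, 0-7) into a Layer 3 DSCP value so the marking survives end to end. The sketch below uses the frequently seen CoS x 8 default mapping (so CoS maps onto the DSCP class-selector codepoints); actual mappings are policy-defined and vary by platform.

```python
# Sketch of CoS-to-DSCP re-marking at the trust boundary, using the common
# CoS * 8 default mapping (CoS 5 -> DSCP 40, i.e. class selector CS5).
# Production mappings are configured by policy and may differ.

def cos_to_dscp(cos):
    """Map a 3-bit Layer 2 CoS value to a Layer 3 DSCP value."""
    if not 0 <= cos <= 7:
        raise ValueError("CoS is a 3-bit value (0-7)")
    return cos * 8

print(cos_to_dscp(5))   # 40: typical voice marking carried into Layer 3
print(cos_to_dscp(0))   # 0: best-effort stays best-effort
```

Because DSCP lives in the IP header rather than the 802.1Q tag, this translation is what lets the QoS intent survive once the frame leaves the switched domain.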

Video

Without QoS and a significant amount of extra bandwidth capacity, video quality typically degrades. The picture appears blurry, jagged, or in slow motion. The audio portion of the feed may become unsynchronized with the video. Video traffic tends to be unpredictable, inconsistent, and bursty compared to voice traffic. Compared to voice, video is less resilient to loss and has a higher volume of data per packet. The figure summarizes the characteristics of video traffic. UDP ports, such as 554 used for the Real-Time Streaming Protocol (RTSP), should be given priority over other, less time-sensitive, network traffic. Similar to voice, video can tolerate a certain amount of latency, jitter, and loss without any noticeable effects. Latency should be no more than 400 milliseconds (ms). Jitter should be no more than 50 ms, and video packet loss should be no more than 1%. Video traffic requires at least 384 Kbps of bandwidth.

Packet Loss

Without any QoS mechanisms in place, packets are processed in the order in which they are received. When congestion occurs, network devices such as routers and switches can drop packets. This means that time-sensitive packets, such as real-time video and voice, will be dropped with the same frequency as data that is not time-sensitive, such as email and web browsing. For example, when a router receives a Real-Time Transport Protocol (RTP) digital audio stream for Voice over IP (VoIP), it must compensate for the jitter that is encountered. The mechanism that handles this function is the playout delay buffer. The playout delay buffer must buffer these packets and then play them out in a steady stream. If the jitter is so large that it causes packets to be received out of the range of this buffer, the out-of-range packets are discarded and dropouts are heard in the audio. For losses as small as one packet, the digital signal processor (DSP) interpolates what it thinks the audio should be and no problem is audible to the user. However, when jitter exceeds what the DSP can do to make up for the missing packets, audio problems are heard. Packet loss is a very common cause of voice quality problems on an IP network. In a properly designed network, packet loss should be near zero. The voice codecs used by the DSP can tolerate some degree of packet loss without a dramatic effect on voice quality. Network engineers use QoS mechanisms to classify voice packets for zero packet loss. Bandwidth is guaranteed for the voice calls by giving priority to voice traffic over traffic that is not time-sensitive.
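The playout delay buffer described above can be sketched as a deadline check: each packet has a scheduled playout time (its sequence number times the packetization interval, plus the buffer depth), and a packet that arrives after its deadline is discarded, which is heard as a dropout. The interval and buffer values below are illustrative assumptions, not codec-specific settings.

```python
# Sketch of a playout delay buffer: packets arriving within the buffer's
# reach are played on schedule; packets delayed beyond it are discarded.
# A 20 ms packetization interval and 50 ms buffer are example values.

def playout(arrivals_ms, interval_ms=20, buffer_ms=50):
    """Return (played, dropped) sequence numbers for a stream of arrivals."""
    played, dropped = [], []
    for seq, arrival in enumerate(arrivals_ms):
        deadline = seq * interval_ms + buffer_ms   # scheduled playout time
        if arrival <= deadline:
            played.append(seq)
        else:
            dropped.append(seq)                    # out of the buffer's range
    return played, dropped

# Packet 2 arrives at 140 ms, past its 90 ms playout deadline, and is
# discarded; the DSP would have to interpolate over the gap.
print(playout([5, 28, 140, 65]))   # ([0, 1, 3], [2])
```

A deeper buffer absorbs more jitter but adds latency to every packet, which is why the 150 ms one-way latency target constrains how large the buffer can be.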


