Computer Networks (Ross and Kurose)
slotted ALOHA advantages
- unlike channel partitioning, slotted ALOHA allows a node to transmit continuously at the full rate, R, when that node is the only active node (i.e. the node has frames to send)
- slotted ALOHA is also highly decentralized, because each node detects collisions and independently decides when to retransmit (although slots must be synchronized in the nodes)
- slotted ALOHA is also an extremely simple protocol

in short, slotted ALOHA works particularly well when there is only one active node
physical layer
while the job of the link layer is to move entire frames from one network element to an adjacent network element, the job of the physical layer is to move the **individual bits** within the frame from one node to the next. The protocols in this layer are again link-dependent and further depend on the actual transmission medium of this link. For example, Ethernet has many physical-layer protocols: one for twisted-pair copper wire, one for coaxial cable, another for fiber, and so on.
issues with determining timeout interval length
- if the timeout interval is too short, we can have unnecessary retransmissions
- if the timeout interval is too long, we could have a slow reaction to lost segments
- the timeout interval should be longer than the round-trip time, but RTT varies over time
self-clocking
because TCP uses acknowledgements to trigger (or clock) its increase in congestion window size, TCP is said to be ______________
fast retransmit
because a sender often sends a large number of segments back to back, if one segment is lost, there will likely be many back-to-back duplicate ACKs. If the TCP sender receives three duplicate ACKs for the same data, it takes this as an indication that the segment following the segment that has been ACKed three times has been lost. In the case that three duplicate ACKs are received, the TCP sender performs a ___________, retransmitting the missing segment **before** that segment's timer expires.
collision
because all nodes are capable of transmitting frames, two or more nodes can transmit frames at the same time. When this happens, all of the nodes receive multiple frames at the same time; that is, the transmitted frames collide at all of the receivers
classful addressing
before CIDR was adopted, the network portions of an IP address were constrained to be 8, 16, or 24 bits in length, an addressing scheme known as classful addressing, since subnets with 8-, 16-, and 24-bit subnet addresses were known as class A, B, and C networks, respectively.
packet switch
between source and destination, each packet travels through communication links and packet switches (for which there are two predominant types, routers and link-layer switches). Packets are transmitted over each communication link at a rate equal to the full transmission rate of the link. So, if a source end system or a packet switch is sending a packet of *L* bits over a link with transmission rate *R* bits/second, then the time to transmit the packet is L/R seconds.
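The L/R calculation can be sketched in a couple of lines; the packet size and link rate below are hypothetical examples.

```python
# Transmission delay: time to push all L bits of a packet onto a link of rate R.
def transmission_delay(L_bits, R_bps):
    return L_bits / R_bps

# e.g., an 8,000-bit (1,000-byte) packet over a 1 Mbps link:
print(transmission_delay(8_000, 1_000_000))  # 0.008 seconds
```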
process
can be thought of as a program that is running within an end system; when end systems communicate, it is not really programs but processes that communicate. When processes are running on the same end system, they can communicate with each other with interprocess communication, using rules that are governed by the end system's operating system.
links
communication channels in the link layer that connect adjacent nodes along the communication path
socket
essentially the API between the application and the network. Any message sent from one process to another must go through the underlying network. A process sends messages into, and receives messages from, the network through a software interface called a socket. An analogy to understand processes and sockets: a process is analogous to a house and a socket is analogous to its door. When a process wants to send a message to another process on another host, it shoves the message out its door (socket). This sending process assumes that there is a transportation infrastructure on the other side of its door that will transport the message to the door of the destination process. Once the message arrives at the destination host, the message passes through the receiving process's door (socket), and the receiving process then acts on the message.
communication links (or channels)
examples include coaxial cable, copper wire, optical fiber, and radio spectrum. Different links can transmit data at different rates, with the transmission rate of a link measured in bits/second.
physical media
for HFC - combination of fiber cable and coaxial cable
DSL and Ethernet - copper wire
Mobile access networks - radio spectrum

when a bit travels from source to destination, it passes through a series of transmitter-receiver pairs; through each transmitter-receiver pair, the bit is sent by propagating electromagnetic waves or optical pulses across a physical medium. The physical medium takes many shapes and forms and does not have to be of the same type for each transmitter-receiver pair along the path. There are two categories:
1) guided media: the waves are guided along a solid medium, such as a fiber-optic cable, a twisted-pair copper wire, or a coaxial cable
2) unguided media: the waves propagate in the atmosphere and outer space, such as in a wireless LAN or a digital satellite channel
Destination-based forwarding
forward based only on destination IP address (traditional). Characterized by the two steps of looking up a destination IP address ('match') and then sending the packet into the switching fabric to the specified output port ('action')
what are the three types of channel partitioning protocol?
frequency division multiplexing (FDM), time division multiplexing (TDM) and code division multiple access (CDMA)
Classless Inter-Domain Routing (CIDR)
generalizes the notion of subnet addressing. As with subnet addressing, the 32-bit IP address is divided into two parts and again has the dotted-decimal form a.b.c.d/x, where x indicates the number of bits in the first part of the address. The x most significant bits of an address of the form a.b.c.d/x constitute the network portion of the IP address, and are often referred to as the prefix (or network prefix) of the address. An organization is typically assigned a block of contiguous addresses, that is, a range of addresses with a common prefix (see the Principles in Practice feature). In this case, the IP addresses of devices within the organization will share the common prefix. The remaining 32-x bits of an address can be thought of as distinguishing among the devices within the organization, all of which have the same network prefix. These are the bits that will be considered when forwarding packets at routers within the organization. These lower-order bits may (or may not) have an additional subnetting structure, such as that discussed above. For example, suppose the first 21 bits of the CIDRized address a.b.c.d/21 specify the organization's network prefix and are common to the IP addresses of all devices in that organization. The remaining 11 bits then identify the specific hosts in the organization. The organization's internal structure might be such that these 11 rightmost bits are used for subnetting within the organization, as discussed above. For example, a.b.c.d/24 might refer to a specific subnet within the organization.
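The prefix arithmetic above can be checked with Python's standard `ipaddress` module; the 200.23.16.0/21 block below is a made-up example of an a.b.c.d/21 assignment.

```python
import ipaddress

# A hypothetical /21 allocation: the 21 most significant bits are the prefix.
block = ipaddress.ip_network("200.23.16.0/21")

# An address whose first 21 bits match the prefix belongs to the organization.
print(ipaddress.ip_address("200.23.18.5") in block)  # True

# The remaining 32 - 21 = 11 bits distinguish devices: 2**11 addresses.
print(block.num_addresses)  # 2048

# Carving /24 subnets out of the block, as in the a.b.c.d/24 example:
print(list(block.subnets(new_prefix=24))[0])  # 200.23.16.0/24
```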
web servers
house web objects, each addressable by a URL
time division multiplexing (TDM)
one of the ways a circuit in a link is implemented: time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. When the network establishes a connection across a link, the network dedicates one time slot in every frame to this connection. These slots are dedicated for the sole use of that connection, with one time slot available (in every frame) to transmit the connection's data
horizontal layering
paradigm where each layer implements some functionality or service, by 1) performing certain actions within that layer (for example, in the gate layer we load and unload people from an airplane) and by 2) using the services of the layer directly below it (for example, in the gate layer, using the runway-to-runway passenger transfer service of the takeoff/landing layer)
network edge
part of the network where there are hosts (clients and servers) running applications. PCs, phones, laptops, servers, etc.
why DNS doesn't use a centralized server
problems with this design include:
- a single point of failure: if the DNS server crashes, so does the entire internet...
- traffic volume: a single DNS server would have to handle all DNS queries (for all the HTTP requests and email messages generated from hundreds of millions of hosts)
- distant centralized database: a single DNS server cannot be "close to" all the querying clients. If we put the single DNS server in New York City, then all queries from Australia must travel to the other side of the globe, perhaps over slow and congested links. This can lead to significant delays.
- maintenance: the single DNS server would have to keep records for all Internet hosts. Not only would this centralized database be huge, but it would have to be updated frequently to account for every new host.
Acknowledgements (ACKs)
receiver sends a signal back to the sender that the packet was successfully received
- handles lost or corrupted packets
- the negative feedback version is a NACK

Sender sends a packet, then:
- retransmits the packet if no ACK is received
- retransmits the packet if a NACK is received
- transmits the next packet if an ACK is received
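A stop-and-wait sketch of this loop, under assumed behavior: `channel` is a hypothetical function that returns "ACK", "NACK", or None (reply lost, which the sender treats as a timeout).

```python
import random

def send_with_arq(packet, channel, max_tries=10):
    """Retransmit `packet` until the channel reports an ACK."""
    for attempt in range(1, max_tries + 1):
        reply = channel(packet)
        if reply == "ACK":   # positive feedback: done, move to the next packet
            return attempt
        # NACK or no reply (timeout): fall through and retransmit
    raise RuntimeError("gave up after max_tries")

# Usage: a hypothetical channel that loses 30% of replies and NACKs 10%.
random.seed(0)
def lossy_channel(pkt):
    r = random.random()
    if r < 0.3:
        return None      # reply lost -> sender times out
    if r < 0.4:
        return "NACK"    # packet reported corrupted
    return "ACK"

print(send_with_arq(b"data", lossy_channel))  # number of tries until ACKed
```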
ARPANET reference model
reference used by the TCP/IP protocol suite; consists of the application, transport, network, link, and physical layers
Base HTML file
references other objects in a web page with the objects' URLs
routing
refers to the network-wide process that determines the end-to-end paths that packets take from source to destination. Takes place on longer timescales than forwarding (typically seconds) and is often implemented in software. Part of the control plane.
transport services available to applications
reliable data transfer - sending process can pass data into its socket and know with confidence that the data will arrive without errors at the receiving process
guaranteed available throughput at some specified rate - with such a service, an application could request a guaranteed throughput of r bits/sec, and the transport protocol would then ensure that the available throughput is always at least r bits/sec
timing guarantees - for example, every bit that the sender pumps into the socket might need to arrive at the receiver's socket no more than 100 msec later. This is useful in internet telephony, for example.
security - for example, in the sending host, a transport protocol can encrypt all data transmitted by the sending process, and in the receiving host, the transport-layer protocol can decrypt the data before delivering the data to the receiving process
Automatic Repeat Request (ARQ)
reliable data transfer protocols that retransmit until data is finally received
mechanisms include acknowledgements, timeouts, and sequence numbers
types include stop-and-wait, go-back-N, and selective repeat
Network Layer (protocol stack)
responsible for moving network-layer packets known as datagrams from one host to another. The Internet transport-layer protocol (TCP or UDP) in a source host passes a transport-layer segment and a destination address to the network layer, just as you would give the postal service a letter with a destination address. The network layer then provides the service of delivering the segment to the transport layer in the destination host. The Internet's network layer includes the celebrated IP protocol, which defines the fields in the datagram as well as how the end systems and routers act on these fields. There is only one IP protocol, and all Internet components that have a network layer must run the IP protocol. The Internet's network layer also contains routing protocols that determine the routes that datagrams take between sources and destinations. The Internet has many routing protocols. The Internet is a network of networks, and within a network, the network administrator can run any routing protocol desired. Although the network layer contains both the IP protocol and numerous routing protocols, it is often simply referred to as the IP layer.
Frequency Division Multiplexing (FDM)
one of the ways a circuit in a link can be implemented: the frequency spectrum of a link is divided up among the connections established across the link. Specifically, the link dedicates a frequency band to each connection for the duration of the connection. The width of the band is called bandwidth.
processing delay
the time required to examine the packet's header and determine where to direct the packet is part of the processing delay. The processing delay can also include other factors, such as the time needed to check for bit-level errors in the packet that occurred in transmitting the packet's bits from the upstream node to router A. After this nodal processing, the router directs the packet to the queue that precedes the link to router B
loss-tolerant applications
when a transport-layer protocol doesn't provide reliable data transfer, some of the data sent by the sending process may never arrive at the receiving process. Multimedia applications where lost data might result in a small glitch in audio/video or some other minor impairments are examples of
flow table
a 'match-plus-action' forwarding table for OpenFlow that includes:
- a set of header field values to which an incoming packet will be matched. A packet that matches no flow table entry can be dropped or sent to the remote controller for more processing. In practice, a flow table may be implemented by multiple flow tables for performance and cost reasons.
- a set of counters that are updated as packets are matched to flow table entries. These counters might include the number of packets that have been matched by that table entry, and the time since the table entry was last updated
- a set of actions to be taken when a packet matches a flow table entry. These actions might be to forward the packet to a given output port, to drop the packet, to make copies of the packet and send them to multiple output ports, and/or to rewrite selected header fields.
congestion window (short definition)
a TCP state variable that limits the amount of data the sender can send into the network before receiving an ACK.
host aliasing
A host with a complicated hostname can have one or more alias names. For example, a hostname such as relay1.west-coast.enterprise.com could have, say, two aliases such as enterprise.com and www.enterprise.com. In this case, the hostname relay1.west-coast.enterprise.com is said to be a canonical hostname. Alias hostnames, when present, are typically more mnemonic than canonical hostnames. DNS can be invoked by an application to obtain the canonical hostname for a supplied alias hostname as well as the IP address of the host
forwarding table
A key element in every network router. A router forwards a packet by examining the value of one or more fields in the arriving packet's header, and then using these header values to index into its forwarding table. The value stored in the forwarding table entry for those values indicates the outgoing link interface at that router to which that packet is to be forwarded.
HFC (hybrid fiber-coax)
A link that consists of fiber cable connecting the cable company's offices to a node location near the customer and coaxial cable connecting the node to the customer's house.
propagation delay
Once a bit is pushed into the link, it needs to propagate to router B; the time required to propagate from the beginning of the link to router B is the propagation delay. The bit propagates at the propagation speed of the link. The propagation speed depends on the physical medium of the link and is in the range of 2×10⁸ meters/sec to 3×10⁸ meters/sec, which is equal to, or slightly less than, the speed of light. The propagation delay is the distance between the two routers divided by the propagation speed.
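As a sketch, with a hypothetical link length (~2×10⁸ m/s is typical of guided media):

```python
# Propagation delay: distance between routers divided by propagation speed.
def propagation_delay(distance_m, speed_mps=2e8):
    return distance_m / speed_mps

# e.g., a 1,000 km link:
print(propagation_delay(1_000_000))  # 0.005 seconds
```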
EstimatedRTT
EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT

The recommended value for α is 0.125, in which case the formula becomes:

EstimatedRTT = 0.875·EstimatedRTT + 0.125·SampleRTT

This weighted average puts more weight on recent samples than on old samples. This is natural, as the more recent samples better reflect the current congestion in the network. In statistics, such an average is called an exponentially weighted moving average (EWMA). The word "exponential" appears in EWMA because the weight of a given SampleRTT decays exponentially fast as the updates proceed.
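The EWMA update is one line of code; the starting estimate and samples below are hypothetical.

```python
# One EWMA step: new estimate = (1 - alpha) * old estimate + alpha * sample.
def update_estimated_rtt(estimated_rtt, sample_rtt, alpha=0.125):
    return (1 - alpha) * estimated_rtt + alpha * sample_rtt

est = 100.0  # ms, a hypothetical initial EstimatedRTT
for sample in [120.0, 110.0, 105.0]:  # hypothetical SampleRTT measurements
    est = update_estimated_rtt(est, sample)
print(est)  # 103.6328125: the estimate drifts toward the recent samples
```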
congestion window
TCP congestion control mechanism variable that imposes a constraint on the rate at which a TCP sender can send traffic to the network. Specifically, the amount of unacknowledged data at a sender may not exceed the minimum of cwnd and rwnd, that is: LastByteSent-LastByteAcked ≤ min { cwnd, rwnd }
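The constraint is a one-line check; the byte counts below are hypothetical.

```python
# LastByteSent - LastByteAcked must not exceed min(cwnd, rwnd).
def can_send(last_byte_sent, last_byte_acked, cwnd, rwnd):
    return (last_byte_sent - last_byte_acked) <= min(cwnd, rwnd)

print(can_send(15_000, 10_000, cwnd=8_000, rwnd=6_000))  # True: 5000 <= 6000
print(can_send(18_000, 10_000, cwnd=8_000, rwnd=6_000))  # False: 8000 > 6000
```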
application layer (protocol stack)
where network applications and their application-layer protocols reside. Includes HTTP, SMTP, and FTP. Certain network functions are also done with the help of a specific application protocol, DNS. An application-layer protocol is distributed over multiple end systems using the protocol to exchange packets of information with the application in another end system. Packets of information at the application layer are referred to as messages.
network adapter (or network interface card)
where the link layer is implemented. At the heart of the network adapter is the link-layer controller, usually a single, special-purpose chip that implements many of the link-layer services (framing, link access, error detection, etc). Thus, much of a link-layer controller's functionality is implemented in hardware. Increasingly, network adapters are being integrated onto the host's motherboard (rather than being physically separate cards)
rwnd
notation for receive window (see 'receive window')
cwnd
notation for the congestion window (see 'congestion window')
Round Trip Time (RTT)
the time it takes for a small packet to travel from client to server and then back to the client
When is queuing delay large and when is it insignificant?
The answer to this question depends on the rate at which traffic arrives at the queue, the transmission rate of the link, and the nature of the arriving traffic, that is, whether the traffic arrives periodically or arrives in bursts. Let a denote the average rate at which packets arrive at the queue (a is in units packets/sec). Recall that R is the transmission rate; that is, it is the rate (in bits/sec) at which bits are pushed out of the queue. Also suppose, for simplicity, that all packets consist of L bits. Then the average rate at which bits arrive at the queue is La bits/sec.
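The ratio La/R (the traffic intensity) is the rough gauge here: as La/R approaches 1, the queue builds and delays grow large. A sketch with hypothetical numbers:

```python
# Traffic intensity La/R: average bit-arrival rate over the link's service rate.
def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    return (L_bits * a_pkts_per_sec) / R_bps

# e.g., 1,500-byte (12,000-bit) packets arriving at 80 packets/sec on a 1 Mbps link:
print(traffic_intensity(12_000, 80, 1_000_000))  # 0.96: near 1, long queuing delays
```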
MSS (maximum segment size)
The maximum amount of data that can be grabbed from a send/receive buffer by TCP and placed into a segment. It is typically set by first determining the length of the largest link-layer frame that can be sent by the local sending host (the so-called maximum transmission unit, or MTU), and then setting the MSS to ensure that a TCP segment (when encapsulated in an IP datagram) plus the TCP/IP header length (typically 40 bytes) will fit into a single link-layer frame.
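As a sketch of that arithmetic (1,500 bytes is the typical Ethernet MTU):

```python
# MSS = MTU minus room for the TCP/IP headers (typically 40 bytes),
# so that segment + headers fit in a single link-layer frame.
def mss_from_mtu(mtu_bytes, tcpip_header_bytes=40):
    return mtu_bytes - tcpip_header_bytes

print(mss_from_mtu(1500))  # 1460 bytes, the common value for Ethernet
```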
role of the network layer
The primary role of the network layer is deceptively simple: to move packets from a sending host to a receiving host. To do so, two important network-layer functions can be identified: forwarding and routing
store-and-forward transmission
(pg. 23) means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link. Most packet switches use store-and-forward transmission at the inputs to the links. To explore store-and-forward transmission in more detail, consider a simple network consisting of two end systems connected by a simple router. A router will typically have many incident links, since its job is to switch an incoming packet onto an outgoing link; in this simple example, the router has the rather simple task of transferring a packet from one (input) link to the only other attached link. In this example, the source has three packets, each consisting of **L** bits, to send to the destination. In a snapshot of time, the source may have transmitted some of packet 1, and the front of packet 1 has already arrived at the router. Because the router employs store-and-forwarding, at this instant of time, the router cannot transmit the bits it has received; instead it must first buffer ('store') the packet's bits. Only after the router has received **all** of the packet's bits can it begin to transmit ("forward") the packet onto the outbound link. The amount of time that elapses from when the source begins to send the packet until the destination has received the entire packet can be calculated as follows: The source begins to transmit at time 0; at time **L**/**R** seconds, where **R** is the transmission rate of bits/second, the source has transmitted the entire packet, and the entire packet has been received and stored at the router. At time **L**/**R** seconds, since the router has just received the entire packet, it can begin to transmit the packet onto the outbound link towards the destination; at time 2(**L**/**R**), the router has transmitted the entire packet, and the entire packet has been received by the destination.
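The bookkeeping above generalizes: with store-and-forward over N links each of rate R (ignoring propagation, processing, and queuing delay), one packet takes N·(L/R) seconds end to end. A sketch with hypothetical numbers:

```python
# End-to-end store-and-forward delay for one packet over N links of rate R.
def store_and_forward_delay(L_bits, R_bps, num_links):
    return num_links * (L_bits / R_bps)

# The two-link example above: source -> router -> destination.
print(store_and_forward_delay(8_000, 2_000_000, 2))  # 0.008 s = 2 * (L/R)
```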
'guiding principles' for TCP send rate
- A lost segment implies congestion, and hence, the TCP sender's rate should be decreased when a segment is lost - An acknowledged segment indicates that the network is delivering the sender's segments to the receiver, and hence, the sender's rate can be increased when an ACK arrives for a previously unacknowledged segment - Bandwidth probing
TCP connection establishment steps
1) Client requests the connection: TCP client sends a SYN segment to the TCP server (no application data); the SYN bit is set to 1 and an initial sequence number (client_isn) is randomly chosen
2) Server agrees to the connection: the server extracts the TCP SYN segment, allocates TCP buffers and variables, and sends a connection-granted SYNACK segment to the client TCP... this segment also has no application data, but the SYN bit is set to 1, the ACK field is set to client_isn+1, and the server chooses its own initial sequence number (server_isn) and puts this number in the sequence number field of the TCP header
3) Client acknowledges: the client allocates buffers and variables to the connection; the client sends a segment acknowledging the server's connection-granted segment (does so by putting server_isn+1 in the ACK field) and the SYN bit is set to 0, since the connection is established. This third stage may carry client-to-server data in the segment payload.
what are the 2 different types of link layer channels?
1) broadcast channels, which connect multiple hosts in wireless LANs, satellite networks, and hybrid fiber-coaxial cable (HFC) access networks. Since many hosts are connected to the same broadcast communication channel, a so-called medium access protocol is needed to coordinate frame transmission. In some cases, a central controller may be used to coordinate transmissions; in other cases, the hosts themselves coordinate transmissions. 2) the point-to-point communication link, such as that often found between two routers connected by a long-distance link, or between a user's office computer and the nearby Ethernet switch to which it is connected.
what are the three categories of multiple access protocols?
1) channel partitioning protocols 2) random access protocols 3) taking-turns protocols
3 components of TCP congestion control algorithm
1) slow start 2) congestion avoidance 3) fast recovery
a broadcast channel of R bits per second should have the following desirable characteristics:
1) when only one node has data to send, that node has a throughput of R bps 2) when M nodes have data to send, each of these nodes has a throughput of R/M bps. This need not necessarily imply that each of the M nodes has an instantaneous rate of R/M, but rather that each node should have an average transmission rate of R/M over some suitably defined interval of time 3) the protocol is decentralized; that is, there is no master node that represents a single point of failure for the network 4) the protocol is simple, so that it is inexpensive to implement
slotted ALOHA efficiency concerns
1) when there are multiple active nodes, a certain fraction of the slots will have collisions and therefore be 'wasted'. 2) another fraction of the slots will be empty because all active nodes refrain from transmitting as a result of the probabilistic transmission policy. The only 'unwasted' slots will be those in which exactly one node transmits.
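These two effects are captured by the textbook's efficiency expression: with N active nodes each transmitting in a slot with probability p, a slot is 'unwasted' with probability N·p·(1-p)^(N-1). A quick numeric sketch:

```python
# Probability that exactly one of N nodes transmits in a slot (a successful slot).
def slotted_aloha_efficiency(N, p):
    return N * p * (1 - p) ** (N - 1)

# p = 1/N maximizes this; as N grows, efficiency approaches 1/e, about 0.37.
for N in (10, 100, 1000):
    print(N, round(slotted_aloha_efficiency(N, 1 / N), 4))
```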
SYN flood
A Denial of Service attack in which the attacker sends a large number of TCP SYN segments without completing the third handshake step. With the deluge of SYN segments, the server's connection resources become exhausted as they are allocated (but never used) for half-open connections; legitimate clients are then denied service. An effective defense known as SYN cookies is now deployed in most major operating systems (see SYN cookies)
point-to-point
A TCP connection is always __________, that is, between a single sender and single receiver
load distribution (DNS)
A service of DNS, where network traffic is distributed among replicated servers, such as replicated web servers. Busy sites, such as cnn.com, are replicated over multiple servers, with each server running on a different end system and each having a different IP address. For replicated web servers, a *set* of IP addresses is thus associated with one canonical hostname. The DNS database contains this set of IP addresses. When clients make a DNS query for a name mapped to a set of addresses, the server responds with the entire set of IP addresses, but rotates the ordering of the addresses within each reply. Because a client typically sends its HTTP request message to the IP address that is listed first in the set, DNS rotation distributes the traffic among the replicated servers.
queuing delays
After nodal processing, the router directs the packet to the queue that precedes the link to the next router. At the queue, the packet experiences a queuing delay as it waits to be transmitted onto the link. The length of the queuing delay of a specific packet will depend on the number of earlier-arriving packets that are queued and waiting for transmission onto the link. If the queue is empty and no other packet is currently being transmitted, then our packet's queuing delay will be zero. On the other hand, if the traffic is heavy and many other packets are also waiting to be transmitted, the queuing delay will be long. Unlike processing, transmission, and propagation delay, the queuing delay can vary from packet to packet. For example, if 10 packets arrive at an empty queue at the same time, the first packet transmitted will suffer no queuing delay, while the last packet transmitted will suffer a relatively large queuing delay. Therefore, when characterizing queuing delay, one typically uses statistical measures (average delay, variance of delay, probability that delay exceeds some specified value).
Fiber to the Home (FTTH)
Broadband service provided via light-transmitting fiber-optic cables. Provides higher speeds than DSL and cable, although DSL & cable represent 85% of residential broadband access in the U.S.
switching fabric
Connects the router's input ports to its output ports
send buffer
Consider the sending of data from the client process to the server process. The client process passes a stream of data through the socket (the 'door' of the process); once the data passes through the door, the data is in the hands of TCP running in the client. TCP directs this data to the connection's _______________, which is one of the buffers that is set aside during the initial three-way handshake. From time to time, TCP will grab chunks of data from the _______________ and pass the data to the network layer.
non-persistent (http) connections
Each request/response pair is sent over a separate TCP connection. Each TCP connection is closed after the server sends a web object - the connection does not persist for other objects. Note that each TCP connection transports exactly one request message and one response message (these TCP connections could be parallelized though)
output buffer (output queue)
For each attached link, a packet switch has an output buffer, which stores packets that the router is about to send into that link
Go-Back-N (textbook explanation)
If a timeout occurs, the sender resends all packets that have been sent but not yet acknowledged. The sender is allowed to transmit multiple packets (when available) without waiting for an acknowledgement, but is constrained to have no more than some maximum allowable number, N, of unacknowledged packets in the pipeline
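A compact sketch of the sender-side state this describes (sequence numbers simplified to unbounded integers; timers and the actual wire are omitted):

```python
class GBNSender:
    """Go-Back-N sender: window of N, cumulative ACKs, resend-all on timeout."""
    def __init__(self, N):
        self.N = N          # max unacknowledged packets in the pipeline
        self.base = 0       # oldest unacknowledged sequence number
        self.next_seq = 0   # next sequence number to use

    def can_send(self):
        return self.next_seq < self.base + self.N

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq                      # a packet with this seq goes on the wire

    def on_ack(self, ack):
        self.base = ack + 1             # cumulative ACK: everything <= ack done

    def on_timeout(self):
        return list(range(self.base, self.next_seq))  # resend all unacked

# Usage: window 4, fill the window, ACK part of it, then time out.
s = GBNSender(4)
print([s.send() for _ in range(4)])  # [0, 1, 2, 3]
print(s.can_send())                  # False: window is full
s.on_ack(1)                          # cumulative ACK covers 0 and 1
print(s.on_timeout())                # [2, 3] are resent
```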
full-duplex service
If there is a TCP connection between Process A on one host and Process B on another host, then application-layer data can flow from Process A to Process B at the same time as application-layer data flows from Process B to Process A. Thus a TCP connection provides a:
forwarding (traditional approach)
In a traditional control plane approach, forwarding tables are configured by a routing algorithm, and a routing algorithm runs in each and every router and both forwarding and routing functions are contained within a router. In this setup, the routing algorithm function in one router communicates with the routing algorithm function in other routers to compute the values for its forwarding table. This communication is performed by exchanging routing messages containing routing information according to a routing protocol.
circuit switching
In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide for communication between the end systems are *reserved* for the duration of the communication session between the end systems. In packet-switched networks, these resources are *not* reserved; a session's messages use the resources on demand and, as a consequence, may have to wait (that is, queue) for access to a communication link. As a simple analogy, consider two restaurants, one that requires reservations and another that neither requires reservations nor accepts them. For the restaurant that requires reservations, we have to go through the hassle of calling before we leave home. But when we arrive at the restaurant, we may have to wait for a table before we can be seated. Traditional telephone networks are examples of circuit-switched networks. Consider what happens when one person wants to send information (voice or facsimile) to another over a telephone network. Before the sender can send the information, the network must establish a connection between the sender and the receiver. This is a *bona fide* connection for which the switches on the path between the sender and the receiver maintain connection state for that connection. In the jargon of telephony, this connection is called a *circuit*. When the network establishes the circuit, it also reserves a constant transmission rate in the network's links (representing a fraction of each link's transmission capacity) for the duration of the connection. Since a given transmission rate has been reserved for this sender-to-receiver connection, the sender can transfer the data to the receiver at the *guaranteed* constant rate.
TCP Reno
Newer version of TCP incorporating fast recovery
DSL (digital subscriber line)
One of the two most prevalent types of broadband residential internet access. A residence typically obtains DSL internet access from the same local telephone company (telco) that provides its wired local phone access. Thus, when DSL is used, a customer's telco is also its ISP. Each customer's DSL modem uses the existing telephone line to exchange data with a digital subscriber line access multiplexor (DSLAM) located in the telco's central office (CO). The home's DSL modem takes digital data and translates it to high-frequency tones for transmission over telephone wires to the central office (CO); the analog signals from many such houses are translated back into digital format at the DSLAM.
flow control service (TCP)
Recall that the hosts on each side of a TCP connection set aside a receive buffer for the connection. When the TCP connection receives bytes that are correct and in sequence, it places the data in the receive buffer. The associated application process will read data from this buffer, but not necessarily at the instant the data arrives. Indeed, the receiving application may be busy with some other task and may not even attempt to read the data until long after it has arrived. If the application is relatively slow at reading the data, the sender can very easily overflow the connection's receive buffer by sending too much data too quickly. TCP provides a ________________ to its applications to eliminate the possibility of the sender overflowing the receiver's buffer. ________________ is thus a speed-matching service; matching the rate at which the sender is sending against the rate at which the receiving application is reading.
Why is checksumming used at the transport layer and cyclic redundancy check used at the link layer?
Recall that the transport layer is typically implemented in software in a host as part of the host's operating system. Because transport-layer error detection is implemented in software, it is important to have a simple and fast error-detection scheme such as checksumming. On the other hand, error detection at the link layer is implemented in dedicated hardware in adapters, which can rapidly perform the more complex CRC operations.
Cable Internet Access
Requires cable modems, which connect to the home PC via Ethernet ports. Fiber optics connect the cable head end to neighborhood-level junctions, from which traditional coaxial cable is then used to reach individual houses and apartments. At the cable head end, the cable modem termination system (CMTS) turns the analog signal sent from the cable modems in many downstream homes back into digital format. One important characteristic of cable internet access is that it is a shared broadcast medium. In particular, every packet sent by the head end travels downstream on every link to every home, and every packet sent by a home travels on the upstream channel to the head end.
slow start
TCP connection state where the value of cwnd (congestion window) begins at 1 MSS (maximum segment size) and increases by 1 MSS every time a transmitted segment is first acknowledged. As an example, TCP from Host A might send one segment to the network destined for Host B and wait for an acknowledgement. When the acknowledgement arrives, the TCP sender increases the congestion window by one MSS and sends out two maximum-sized segments. These segments are then acknowledged, with the sender increasing the congestion window by 1 MSS for each of the acknowledged segments, giving a congestion window of 4 MSS, and so on. This process results in a doubling of the sending rate every RTT. Thus, the TCP send rate starts slow but grows exponentially during the slow start phase. TCP does a few things to end exponential growth: 1) If there is a loss event (i.e. congestion) indicated by a timeout, the TCP sender sets the value of cwnd to 1 MSS and begins the slow start process anew. It also sets the value of a second state variable, ssthresh (shorthand for slow start threshold), to cwnd/2 - half of the value of the congestion window when congestion was detected. 2) Since ssthresh is half the value of cwnd when congestion was last detected, it might be a bit reckless to keep doubling cwnd when it reaches or surpasses the value of ssthresh. Thus, when the value of cwnd equals ssthresh, slow start ends and TCP transitions into congestion avoidance mode, during which TCP increases cwnd more cautiously. 3) If three duplicate ACKs are detected, TCP performs a fast retransmit and enters the fast recovery state.
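The cwnd trajectory described above can be sketched in a few lines. The function below is illustrative (cwnd measured in whole MSS, no loss events assumed): it doubles cwnd each RTT during slow start and adds 1 MSS per RTT once ssthresh is reached.

```python
def cwnd_trace(ssthresh, rtts):
    """Trace cwnd (in MSS) at the start of each RTT, assuming no loss.

    Slow start doubles cwnd each RTT until it reaches ssthresh, after
    which congestion avoidance adds 1 MSS per RTT.
    """
    cwnd = 1
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd = min(2 * cwnd, ssthresh)  # slow start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: linear growth
    return trace
```

With ssthresh = 8 MSS, the trace over six RTTs is [1, 2, 4, 8, 9, 10]: exponential growth up to the threshold, linear growth after it.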
receive window
TCP provides flow control by having the *sender* maintain a variable called the ___________. Informally, this variable is used to give the sender an idea of how much free buffer space is available at the receiver. Because TCP is full-duplex, the sender at each side of the connection maintains a distinct receive window. Let's investigate the receive window in the context of a file transfer. Suppose that Host A is sending a large file to Host B over a TCP connection. Host B allocates a receive buffer to this connection; denote its size by RcvBuffer. From time to time, the application process in Host B reads from the buffer. Define:
- 'LastByteRead' as the number of the last byte in the data stream read (and 'grabbed') from the buffer by the application process in B
- 'LastByteRcvd' as the number of the last byte in the data stream that has arrived from the network and has been placed in the receive buffer at B
Because TCP is not permitted to overflow the allocated buffer, we must have LastByteRcvd - LastByteRead ≤ RcvBuffer. The receive window, denoted rwnd, is set to the amount of spare room in the buffer: rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]. Host B tells Host A how much spare room it has in the connection buffer by placing its current value of rwnd in the receive window field of every segment it sends to A. Initially, Host B sets rwnd = RcvBuffer. Note that to pull this off, Host B must keep track of several connection-specific variables. Host A in turn keeps track of the last byte sent and the last byte ACKed. LastByteSent - LastByteAcked is the amount of unacknowledged data that A has sent into the connection. By keeping the amount of unacknowledged data less than the value of rwnd, Host A is assured that it is not overflowing the receive buffer at Host B. Thus, Host A makes sure that throughout the connection's life, LastByteSent - LastByteAcked ≤ rwnd.
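The bookkeeping above reduces to two small formulas. The helpers below are a sketch using the text's variable names translated to snake_case:

```python
def rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room in the receive buffer, as advertised to the sender:
    rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)."""
    assert last_byte_rcvd - last_byte_read <= rcv_buffer  # buffer must not overflow
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def can_send(last_byte_sent, last_byte_acked, rwnd_value):
    """Sender-side check: keep unacknowledged data within rwnd."""
    return last_byte_sent - last_byte_acked <= rwnd_value
```

For example, with a 4096-byte buffer, 1000 bytes received, and 600 bytes read by the application, the advertised window is 3696 bytes.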
data plane
The /per-router/ functions in the network layer that determine how a datagram (that is, a network-layer packet) arriving on one of a router's input links is forwarded to one of that router's output links.
Link Layer
The Internet's network layer routes a datagram through a series of routers between the source and destination. To move a packet from one node (host or router) to the next node in the route, the network layer relies on the services of the link layer. In particular, at each node, the network layer passes the datagram down to the link layer, which delivers the datagram to the next node along the route. At this next node, the link layer passes the datagram up to the network layer. The services provided by the link layer depend on the specific link-layer protocol that is employed over the link. For example, some link-layer protocols provide reliable delivery, from transmission node, over one link, to receiving node. Examples of link-layer protocols include Ethernet, WiFi, and the cable access network's DOCSIS protocol. Link layer packets are referred to as //frames//.
SYN cookies (textbook explanation)
Work as follows:
- when the server receives a SYN segment, it does not know if the segment is coming from a legitimate user or is part of a SYN flood attack. So, instead of creating a half-open TCP connection for this SYN, the server creates an initial TCP sequence number that is a complicated function of source and destination IP addresses and port numbers of the SYN segment, as well as a secret number known only to the server. This carefully crafted initial sequence number is the so-called 'cookie'. The server then sends the client a SYNACK packet with this special initial sequence number. **The server does not remember the cookie or any other state information corresponding to the SYN**.
- a legitimate client will return an ACK segment. When the server receives this ACK, it must verify that the ACK corresponds to some SYN sent earlier. For a legitimate ACK, the value in the acknowledgement field is equal to the initial sequence number in the SYNACK plus one. The server can then run the same hash function using the source and destination IP addresses and port numbers in the ACK (which are the same as in the original SYN) and the secret number.
--> If the result of the function plus one is the same as the acknowledgement (cookie) value in the client's ACK, the server concludes that the ACK corresponds to an earlier SYN segment and is hence valid
--> on the other hand, if the client does not return an ACK segment, then the original SYN has done no harm at the server, since the server hasn't yet allocated any resources in response to the original bogus SYN
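The stateless-cookie idea can be sketched with a keyed hash. The construction below is purely illustrative: real TCP stacks fold in timestamps and MSS bits, and `SECRET`, the function names, and the choice of HMAC-SHA256 are assumptions, not the book's construction.

```python
import hashlib
import hmac

SECRET = b"server-only-secret"  # hypothetical secret known only to the server

def syn_cookie(src_ip, dst_ip, src_port, dst_port):
    """Illustrative cookie: a keyed hash of the connection 4-tuple.

    The key idea is that the initial sequence number is *recomputable*
    from the arriving segment, so the server stores no per-SYN state.
    """
    msg = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")  # 32-bit initial sequence number

def ack_is_valid(ack_field, src_ip, dst_ip, src_port, dst_port):
    """A legitimate ACK acknowledges cookie + 1; recompute and compare."""
    return ack_field == (syn_cookie(src_ip, dst_ip, src_port, dst_port) + 1) % 2**32
```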
Random Access Protocols
a class of multiple access protocol where a transmitting node always transmits at the full rate of the channel (namely, R bps). When there is a collision, each node involved in the collision repeatedly retransmits its frame (that is, packet) until its frame gets through without a collision. But when a node experiences a collision, it doesn't necessarily retransmit the frame right away. Instead it waits a random delay before retransmitting the frame. Each node involved in a collision chooses independent random delays. Because the random delays are independently chosen, it is possible that one of the nodes will pick a delay that is sufficiently less than the delays of the other colliding nodes and will therefore be able to sneak its frame into the channel without a collision.
Domain Name System (DNS)
a directory service that translates hostnames to IP addresses. The DNS protocol runs over UDP and uses port 53. DNS is commonly used by other application-layer protocols - including HTTP and SMTP - to translate user-supplied hostnames to IP addresses. As an example, consider what happens when a browser (that is, an HTTP client), running on some user's host, requests the URL www.someschool.edu/index.html. In order for the user's host to be able to send an HTTP request message to the web server www.someschool.edu, the user's host must first obtain the IP address of www.someschool.edu.
object (in web context)
a file - such as an HTML file, a JPEG image, a Java applet, or a video clip - that is addressable by a single URL.
hostname
a mnemonic way to identify hosts
buffer
a place in physical memory storage used to temporarily store data while it is being moved from one place to another
multiplex
a system or signal involving simultaneous transmission of several messages along a single channel of communication
what are the pros and cons of TDM (time division multiplexing)?
advantages: 1) eliminates collisions 2) perfectly fair; each node gets a dedicated transmission rate of R/N bps during each frame time drawbacks: 1) each node is limited to an average rate of R/N bps (given the transmission rate of the broadcast channel is R bps and there are N nodes) even when it is the only node with packets to send 2) a node must always wait for its turn in the transmission sequence - again, even when it is the only node with a frame to send... imagine the partygoer who is the only one with anything to say (and imagine the even rarer circumstance where everyone wants to hear what that one person has to say). Clearly, TDM would be a poor choice for a multiple access protocol for this particular party.
what are the pros and cons of FDM?
advantages: 1) eliminates collisions 2) perfectly fair; each node gets a dedicated transmission rate of R/N bps in each frequency channel drawbacks: 1) each node is limited to an average rate of R/N bps (given the transmission rate of the broadcast channel is R bps and there are N nodes) even when it is the only node with packets to send
persistent (http) connections
all of the requests and their corresponding responses are sent over the same TCP connection. Requests for an entire web page, or even for multiple web pages residing on the same server, can be sent from the client to the same server over a single persistent TCP connection. These requests for objects can be made back-to-back, without waiting for replies to pending requests (pipelining). Typically, the HTTP server closes a connection when it isn't used for a certain time.
DHCP
allows a host to obtain (be allocated) an IP address automatically. A network administrator can configure DHCP so that a given host receives the same IP address each time it connects to the network, or a host may be assigned a temporary IP address that will be different each time the host connects to the network. In addition to host IP address assignment, DHCP also allows a host to learn additional information, such as its subnet mask, the address of its first-hop router (often called the default gateway), and the address of its local DNS server.
obtaining a block of addresses
an ISP might be allocated the address block 200.23.16.0/20. The ISP, in turn, could divide its address block into eight equal-sized contiguous address blocks and give one of these address blocks out to each of up to eight organizations that are supported by the ISP: 200.23.16.0/23, 200.23.18.0/23, 200.23.20.0/23, 200.23.22.0/23, 200.23.24.0/23, 200.23.26.0/23, 200.23.28.0/23, and 200.23.30.0/23.
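The subdivision can be checked with Python's standard ipaddress module; raising the prefix length by 3 splits the /20 into 2^3 = 8 contiguous /23 blocks:

```python
import ipaddress

# The /20 block holds 2**12 addresses; prefixlen_diff=3 raises the prefix
# length from /20 to /23, yielding 2**3 = 8 equal contiguous blocks.
block = ipaddress.ip_network("200.23.16.0/20")
org_blocks = list(block.subnets(prefixlen_diff=3))
for b in org_blocks:
    print(b)  # 200.23.16.0/23, 200.23.18.0/23, ..., 200.23.30.0/23
```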
software-defined networking (SDN) approach (control plane)
an approach to the control plane where a physically separate remote controller computes and distributes the forwarding tables to be used by every router. This contrasts with the traditional approach of each router performing the routing function of the control plane. The routing device performs forwarding only in this paradigm. In this setup, the remote controller might be implemented in a remote data center with high reliability and redundancy, and might be managed by an ISP or other third party. The network is called 'software-defined' in this setup because the controller that computes forwarding tables and interacts with routers is implemented in software. The routers and the remote controller communicate by exchanging messages containing forwarding tables and other pieces of routing information.
input port (router)
an input port performs several key functions: - It performs the physical layer function of terminating an incoming physical link at a router. - An input port also performs link-layer functions needed to interoperate with the link layer at the other side of the incoming link - A lookup function is also performed at the input port; this will occur in the rightmost box of the input port. It is here that the forwarding table is consulted to determine the router output port to which an arriving packet will be forwarded via the switching fabric. Note that the term 'port' here is distinctly different from the software ports associated with network applications and sockets discussed elsewhere.
node
any device that runs a link-layer protocol. Includes hosts, routers, switches, and WiFi access points
elastic applications
applications that can make use of as much, or as little, throughput as happens to be available; includes file transfer, email, and web transfers.
bandwidth-sensitive applications
applications that have throughput requirements
end-to-end congestion control
approach to congestion control wherein the network layer provides no explicit support to the transport layer for congestion-control purposes. Even the presence of network congestion must be inferred by the end systems based only on observed network behavior (for example, packet loss and delay). TCP takes this approach toward congestion control since the IP layer is not required to provide feedback to hosts regarding network congestion. TCP segment loss (as indicated by a timeout or the receipt of three duplicate acknowledgements) is taken as an indication of network congestion, and TCP decreases its window size accordingly.
port number
assigned to a socket when it's created.
transmission delay
assuming that packets are transmitted in a first-come-first-served manner, as is common in packet-switched networks, our packet can be transmitted only after all the packets that have arrived before it have been transmitted. Denote the length of the packet by //L// bits, and denote the transmission rate of the link from router A to router B by //R// bits/sec. The *transmission delay* is L/R. This is the amount of time required to push (that is, transmit), all of the packet's bits into the link. Transmission delay and propagation delay are different; the transmission delay is the amount of time required for the router to push out the packet; it is a function of the packet's length and the transmission rate of the link. Propagation delay is the time it takes for a bit to propagate from one router to the next; it is a function of the distance between the two routers.
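A minimal sketch of the two delays follows; the ~2e8 m/s propagation speed is a typical assumed figure for guided media, not a value from the text.

```python
def transmission_delay(packet_bits, rate_bps):
    """Time to push all L bits of a packet into the link: L/R seconds."""
    return packet_bits / rate_bps

def propagation_delay(distance_m, speed_mps=2e8):
    """Time for one bit to travel the link's length: distance/speed.

    ~2e8 m/s is a typical propagation speed for copper or fiber
    (an assumed figure).
    """
    return distance_m / speed_mps

# A 1,000-byte packet on a 1 Mbps link takes 8 ms to transmit,
# no matter how long the link is; the link's length affects only
# the propagation delay.
```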
human protocol (as an analogy to network protocol)
consider what you do when you want to ask someone for the time of day. Human protocol (or good manners, at least) dictates that one first offers a greeting to initiate communication with someone else. The typical response to a 'hi' is a returned 'hi' message. Implicitly, one takes a cordial 'hi' response as an indication that one can proceed and ask for the time of day. A different response to the initial 'hi' might indicate an unwillingness or inability to communicate. In this case, the human protocol would be to not ask for the time of day. In our human protocol, *there are specific messages we send, and specific actions we take in response to the received reply messages or other events*.
web page
consists of objects; for example, if a web page contains HTML text and five JPEG images, then the Web page has six objects: the base HTML file plus the five images.
reliable data transfer
data loss can have devastating consequences for many applications, such as email, file transfer, financial applications, etc. To support these applications, something has to be done to guarantee that the data sent by one end of the application is delivered correctly and completely to the other end of the application. If a protocol provides such a guaranteed delivery service, it is said to provide _________________. When a transport protocol provides this service, the sending process can just pass its data into the socket and know with complete confidence that the data will arrive without errors at the receiving process.
multiple access protocol efficiency
defined to be the long-run fraction of successful slots in the case where there are a large number of active nodes, each always having a large number of frames to send. Note that if no form of access control were used, and each node were to immediately retransmit after each collision, the efficiency would be zero.
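For slotted ALOHA, the probability that a slot is successful with N active nodes each transmitting with probability p is N·p·(1-p)^(N-1). The sketch below (function names are illustrative) maximizes this numerically over p; for large N the maximum approaches the well-known limit of 1/e ≈ 0.37.

```python
def slotted_aloha_success_prob(n, p):
    """Probability that exactly one of n nodes transmits in a slot:
    n * p * (1 - p)**(n - 1)."""
    return n * p * (1 - p) ** (n - 1)

def max_efficiency(n, steps=10000):
    """Numerically maximize the success probability over p on a grid.

    As n grows, the optimal p approaches 1/n and the maximum
    approaches 1/e (about 0.37)."""
    return max(slotted_aloha_success_prob(n, k / steps) for k in range(1, steps))
```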
HTTP
defines how web clients request web pages from web servers and how servers transfer web pages to clients. The HTTP client first initiates a TCP connection with the server. Once the connection is established, the browser and the server processes access TCP through their socket interfaces. HTTP need not worry about lost data or the details of how TCP recovers from loss or reordering of data within the network - that is the job of TCP and lower layer protocols. It is important to note that the server sends requested files to clients without storing any state information about the client. If a particular client asks for the same object twice in a period of a few seconds, the server does not respond by saying that it just served the object to the client; instead, the server resends the object, as it has completely forgotten what it did earlier.
network service model
defines the characteristics of end-to-end delivery of packets between sending and receiving hosts. When we consider the different types of service that might be offered by the network layer, we need to answer several questions: When the transport layer at a sending host passes a packet down to the network layer, can the transport layer rely on the network layer to deliver the packet to the destination? When multiple packets are sent, will they be delivered to the transport layer in the receiving host in the order they were sent? Will the amount of time between the sending of two sequential packet transmissions be the same as the amount of time between their reception? Will the network provide any feedback about congestion in the network? The answers to these questions and others are determined by the service model provided by the network layer. Possible services the network layer could provide include:
- guaranteed delivery: this service guarantees that a packet sent by a source host will eventually arrive at the destination host
- guaranteed delivery with bounded delay: this service not only guarantees delivery of the packet, but delivery within a specified host-to-host delay bound (e.g., within 100 msec)
- in-order packet delivery: this service guarantees that packets arrive at the destination in the order that they were sent
- guaranteed minimal bandwidth: this network-layer service emulates the behavior of a transmission link of a specified bit rate (for example, 1 Mbps) between sending and receiving hosts. As long as the sending host transmits bits (as part of packets) at a rate below the specified bit rate, all packets are eventually delivered to the destination host.
- security: the network layer could encrypt all datagrams at the source and decrypt them at the destination, thereby providing confidentiality to all transport-layer segments.
However, the Internet's network layer provides only a SINGLE SERVICE, known as best-effort service
network protocol
defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event. For example, hardware-implemented protocols in two physically connected computers control the flow of bits on the 'wire' between the two network interface cards; congestion-control protocols in end systems control the rate at which packets are transmitted between sender and receiver; protocols in routers determine a packet's path from source to destination.
end systems (hosts)
devices connected to the internet. Includes computers, phones, laptops, servers, IoT devices, etc.
socket interface
how does one program running on one end system instruct the Internet to deliver data to another program running on another end system? End systems attached to the internet provide a _____________ that specifies how a program running on one end system asks the Internet infrastructure to deliver data to a specific destination program running on another end system. An analogy can be made with the US postal system; you cannot write a letter to someone and drop it out your window. Instead, the postal service requires that you write the recipient's full name, address, and zip code in the center of the envelope; seal the envelope; put a stamp in the upper-right-hand corner of the envelope; and finally, drop the envelope in an official postal service mailbox. Thus, the postal service has its own 'postal service interface', or set of rules, that you must follow for the postal service to deliver a letter. In a similar manner, the internet has a socket interface that a program sending data must follow to have the internet deliver the data to the program that will receive the data.
Internet Service Provider (ISP)
how end systems access the internet. Includes residential ISPs, corporate ISPs, ISPs that provide WiFi access in airports & coffee shops, and cellular data ISPs. 'Lower-tier' ISPs are connected to each other through 'higher-tier' ISPs. ISP networks are managed independently, run the IP protocol, and conform to certain naming and address conventions.
total nodal delay
if we let d_proc, d_queue, d_trans, and d_prop denote the processing, queuing, transmission, and propagation delays, then the total nodal delay is given by d_nodal = d_proc + d_queue + d_trans + d_prop
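As a quick sanity check, the formula can be evaluated directly. The figures below are hypothetical: 2 ms processing, 1 ms queuing, a 1500-byte packet on a 10 Mbps link, and 100 km of fiber at ~2e8 m/s.

```python
def nodal_delay(d_proc, d_queue, d_trans, d_prop):
    """d_nodal = d_proc + d_queue + d_trans + d_prop (all in seconds)."""
    return d_proc + d_queue + d_trans + d_prop

# Hypothetical figures: 2 ms processing, 1 ms queuing, a 1500-byte
# packet on a 10 Mbps link, and 100 km of fiber at ~2e8 m/s.
d = nodal_delay(0.002, 0.001, (1500 * 8) / 10e6, 100e3 / 2e8)
```

Here d_trans = 1.2 ms and d_prop = 0.5 ms, for a total nodal delay of 4.7 ms.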
addressing processes
in order for a process running on one host to send packets to a process running on another host, the receiving process needs to have an address. To identify the receiving process, two pieces of information need to be specified: 1) the address of the host (IP address) and 2) an identifier that specifies the receiving process in the destination host (port number) The sending process must identify the receiving process since, in general, a host could be running many network applications.
client/server processes
in the context of a communication session between a pair of processes, the process that initiates the communication is labeled the **client**, and the process that waits to be contacted to begin the session is called the **server**
throughput
in the context of a communication session between two processes along a network path, this is the rate at which the sending process can deliver bits to the receiving process. Consider transferring a large file from host A to host B across a computer network. This transfer might be, for example, a large video clip from one peer to another in a P2P file sharing system. The instantaneous ___________ at any instant of time is the rate (in bits per second) at which host B is receiving the file.
Servers and Clients
informally, clients tend to be desktop and mobile PCs, smartphones, and so on; servers tend to be more powerful machines that store and distribute web pages, stream video, relay email, and so on.
domain name system (DNS)
main task is to translate hostnames to IP addresses. The DNS is 1) a distributed database implemented in a hierarchy of DNS servers, and 2) an application-layer protocol that allows hosts to query the distributed database. The DNS servers are often UNIX machines running the Berkeley Internet Name Domain (BIND) software. The DNS protocol runs over UDP and uses port 53. DNS also provides host aliasing, mail server aliasing, and load distribution services
switch table
mechanism for switch filtering and forwarding. The switch table contains entries for some, but not necessarily all, of the hosts and routers on a LAN. An entry in the switch table contains (1) a MAC address, (2) the switch interface that leads toward that MAC address, and (3) the time at which the entry was placed in the table.
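The three-field entry described above can be sketched as a dictionary keyed by MAC address. The class below is illustrative; the names and the 60-second aging time are assumptions, not values from the text.

```python
import time

class SwitchTable:
    """Sketch of a switch table: MAC address -> (interface, time placed).

    Entries older than the aging time are treated as absent, so stale
    mappings eventually expire.
    """

    def __init__(self, aging_seconds=60):
        self.aging = aging_seconds
        self.entries = {}  # mac -> (interface, timestamp)

    def learn(self, src_mac, interface, now=None):
        """Record which interface leads toward src_mac (a frame just arrived)."""
        self.entries[src_mac] = (interface, now if now is not None else time.time())

    def lookup(self, dst_mac, now=None):
        """Return the interface for dst_mac, or None if unknown/expired
        (in which case a real switch would flood the frame)."""
        now = now if now is not None else time.time()
        entry = self.entries.get(dst_mac)
        if entry and now - entry[1] <= self.aging:
            return entry[0]
        return None
```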
base station
responsible for sending and receiving data (e.g., packets) to and from a wireless host that is associated with that base station. A base station will often be responsible for coordinating the transmission of multiple wireless hosts with which it is associated. When we say a wireless host is "associated" with a base station, we mean that (1) the host is within the wireless communication distance of the base station, and (2) the host uses that base station to relay data between it (the host) and the larger network. Cell towers in cellular networks and access points in 802.11 wireless LANs are examples of base stations.
network core
routers or physical infrastructure that link access networks to one another
selective repeat
sender:
- allowed to have up to N unACKed packets in the pipeline
- maintains a timer for **each** unACKed packet
- retransmits an unACKed packet when its timer expires
receiver:
- sends an **individual** ACK for each packet
- **must** buffer out-of-order packets
Go-Back-N (slides explanation)
sender:
- allowed to have up to N unACKed packets in the pipeline
- maintains a timer for the oldest unACKed packet
- retransmits all unACKed packets when the timer expires
receiver:
- sends cumulative ACKs (i.e., ACK all packet numbers up to sequence number X)
- does not ACK a packet if there is a gap
- no need to buffer out-of-order packets
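The receiver's cumulative-ACK discipline can be sketched as follows. This is illustrative: packet numbers start at 0, and an ACK value of -1 meaning "nothing in-order received yet" is a convention of this sketch, not of the protocol.

```python
def gbn_receiver(arrivals, expected=0):
    """Sketch of a Go-Back-N receiver.

    `arrivals` is the sequence of packet numbers as they arrive; returns
    the list of ACK values sent, where ACK n means 'all packets up to
    and including n have been received'.
    """
    acks = []
    for seq in arrivals:
        if seq == expected:
            expected += 1  # in order: deliver and advance
        # out-of-order packets are discarded, not buffered; either way the
        # receiver re-ACKs the highest in-order packet seen so far
        acks.append(expected - 1)
    return acks
```

In the trace [0, 1, 3, 2, 3], packet 3 arrives early and is discarded (the receiver re-ACKs 1); once packet 2 fills the gap, the retransmitted packet 3 is accepted.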
downstream vs upstream transmission rates
since most people download more data than they upload, downstream transmission rates are usually higher than upstream transmission rates. When downstream and upstream rates are different, access is said to be asymmetric
Internet Protocol (IP)
specifies the format of packets that are sent and received among routers and end systems
end-to-end argument
states that important functionality (e.g., encryption, authentication, reliable message delivery) should be implemented in higher layers at the end systems. For this reason, the internet is sometimes referred to as a 'dumb network with smart hosts'
output port (router)
stores packets received from the switching fabric and transmits these packets on the outgoing link by performing the necessary link-layer and physical layer functions
Distributed Inter-frame Space (DIFS)
suppose a station (wireless device or an access point) has a frame to transmit. If the station initially senses the channel idle, it transmits its frame after a short period of time known as the ___________
packet switches
takes a packet arriving on one of its incoming communication links and forwards it on one of its outgoing communication links. The two most prominent types are routers and link-layer switches.
control plane
the /network-wide/ logic that controls how a datagram is routed among routers along an end-to-end path from the source host to the destination host
forward error correction
the ability of the receiver to both detect and correct errors. These techniques are valuable because they can decrease the number of sender retransmissions required and allow for immediate correction of errors at the receiver. This avoids having to wait for the round-trip propagation delay needed for the sender to receive a NAK packet and for the retransmitted packet to propagate back to the receiver
SampleRTT
the amount of time between when the segment is sent (that is, passed to IP) and when an acknowledgement for the segment is received. Most TCP implementations do not measure a SampleRTT for every transmitted segment, but instead take only one SampleRTT measurement at a time; that is, at any point in time, the SampleRTT is being estimated for only one of the transmitted but currently unacknowledged segments, leading to a new value of SampleRTT approximately once every RTT. TCP also never computes a SampleRTT for a segment that has been retransmitted; it only measures SampleRTT for segments that have been transmitted once.
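SampleRTT feeds the EWMA estimators used to set the retransmission timeout (also specified in RFC 6298): EstimatedRTT = (1 - α)·EstimatedRTT + α·SampleRTT with α = 0.125, DevRTT = (1 - β)·DevRTT + β·|SampleRTT - EstimatedRTT| with β = 0.25, and TimeoutInterval = EstimatedRTT + 4·DevRTT. A sketch; the first-sample initialization (DevRTT = SampleRTT/2) follows the RFC and is an assumption here.

```python
def update_rtt(sample, est=None, dev=None, alpha=0.125, beta=0.25):
    """One EWMA step of the RTT estimators.

    Returns (EstimatedRTT, DevRTT, TimeoutInterval) after folding in a
    new SampleRTT. On the first sample, EstimatedRTT starts at the
    sample and DevRTT at half of it (RFC 6298's initialization).
    """
    if est is None:
        est, dev = sample, sample / 2
    else:
        # DevRTT is updated first, using the previous EstimatedRTT
        dev = (1 - beta) * dev + beta * abs(sample - est)
        est = (1 - alpha) * est + alpha * sample
    return est, dev, est + 4 * dev
```

For example, after a first sample of 100 ms the timeout is 300 ms; a second sample of 200 ms raises EstimatedRTT to 112.5 ms and the timeout to 362.5 ms.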
correspondent (mobile networks)
the entity wishing to communicate with the mobile node
multiplexing
the job of gathering data chunks in the source host from different sockets, encapsulating each data chunk with header information (that will later be used in demultiplexing) to create segments, and passing the segments to the network layer is called
twisted-pair copper wire
the least expensive and most commonly used guided transmission medium
openflow actions
the most important possible actions are: - forwarding: an incoming packet may be forwarded to a particular physical output port, broadcast over all ports (except the port on which it arrived) or multicast over a selected set of ports. The packet may be encapsulated and sent to the remote controller for this device. That controller then may (or may not) take some action on that packet, including installing new flow table entries, and may return the packet to the device for forwarding under the updated set of flow table rules. - dropping: a flow table entry with no action indicates that a matched packet should be dropped - modify-field: the values in ten packet header fields may be re-written before the packet is forwarded to the chosen output port.
foreign network (in the context of mobile networks)
the network in which a mobile node is currently residing
access network
the network that physically connects an end system to the first router (also known as the "edge router") on a path from the end system to any other distant end system.
multiple access problem
the problem of how to coordinate the access of multiple sending and receiving nodes to a shared broadcast channel
sequence numbers
- the receiver needs to be able to detect duplicate packets
- the receiver needs to be able to handle out-of-order packets
- the sender assigns a unique sequence number to each packet for the receiver
forwarding
the router-local action of transferring a packet from an input link interface to the appropriate output link interface. Takes place at very short timescales (typically a few nanoseconds) and thus is typically implemented in hardware. Part of the data plane.
routing processor
the routing processor performs control-plane functions. In traditional routers, it executes the routing protocols, maintains routing tables and attached link state information, and computes the forwarding table for the router. In SDN routers, the routing processor is responsible for communicating with the remote controller in order to (among other activities) receive forwarding table entries computed by the remote controller, and install these entries in the router's input ports
best-effort service
the service model provided by the network layer. With best-effort service, packets are neither guaranteed to be received in the order in which they were sent, nor is their eventual delivery even guaranteed.
forwarding (link layer)
the switch function that determines the interfaces to which a frame should be directed
filtering (link layer)
the switch function that determines whether a frame should be forwarded to some interface or should just be dropped
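The forwarding and filtering decisions above can be sketched together against a switch table. The MAC addresses, interface numbers, and table contents here are made up for illustration:

```python
# Sketch: a switch consults its switch table to filter, forward, or
# broadcast a frame (addresses and interface numbers are illustrative).

def handle_frame(switch_table, dst_mac, arrival_iface, all_ifaces):
    """Return the list of interfaces the frame should be sent out of."""
    if dst_mac not in switch_table:
        # unknown destination: broadcast on every interface except arrival
        return [i for i in all_ifaces if i != arrival_iface]
    out_iface = switch_table[dst_mac]
    if out_iface == arrival_iface:
        return []                # filtering: destination is on the arrival segment, drop
    return [out_iface]           # forwarding: send out the one recorded interface

table = {"aa:aa:aa:aa:aa:aa": 1, "bb:bb:bb:bb:bb:bb": 2}
print(handle_frame(table, "bb:bb:bb:bb:bb:bb", 1, [1, 2, 3]))  # [2]     forward
print(handle_frame(table, "aa:aa:aa:aa:aa:aa", 1, [1, 2, 3]))  # []      filter (drop)
print(handle_frame(table, "cc:cc:cc:cc:cc:cc", 1, [1, 2, 3]))  # [2, 3]  broadcast
```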
fast recovery
while in fast recovery, the value of cwnd is increased by 1 MSS for every duplicate ACK received for the missing segment that caused TCP to enter this state. Eventually, when an ACK arrives for the missing segment, TCP deflates cwnd (setting it to ssthresh) and enters the congestion-avoidance state. If a timeout event occurs, fast recovery transitions to the slow-start state after performing the same actions as in slow start and congestion avoidance: the value of cwnd is set to 1 MSS, and the value of ssthresh is set to half the value of cwnd when the loss event occurred
transport layer (protocol stack)
transports application-layer messages between application endpoints. In the Internet there are two transport protocols, TCP and UDP, either of which can transport application-layer messages. TCP provides a connection-oriented service to its applications. This service includes guaranteed delivery of application-layer messages to the destination and flow control (sender/receiver speed matching). TCP also breaks long messages into shorter segments and provides a congestion-control mechanism, so that a source throttles its transmission rate when the network is congested. The UDP protocol provides a connectionless service to its applications. This is a no-frills service that provides no reliability, no flow control, and no congestion control. Transport layer packets are referred to as segments.
timeouts
- unlike a NACK, the absence of an ACK is determined implicitly
- the sender sets a timer that triggers retransmission if an ACK is not received before the timer expires
- the minimum timer expiration period should be at least one RTT: if the timer fires before one RTT has elapsed, the sender may retransmit packets that were not actually lost, producing duplicates
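The timer logic can be sketched for a stop-and-wait sender. The class and its timing constants are illustrative; a real TCP sender derives its timeout interval from smoothed RTT estimates:

```python
# Sketch: sender-side timeout logic that triggers retransmission when no
# ACK arrives before the timer expires (names and values are illustrative).

class StopAndWaitSender:
    def __init__(self, timeout_interval):
        self.timeout_interval = timeout_interval  # should be at least one RTT
        self.timer_start = None
        self.unacked = None

    def send(self, packet, now):
        self.unacked = packet
        self.timer_start = now          # start the retransmission timer

    def on_ack(self):
        self.unacked = None             # ACK received: stop the timer
        self.timer_start = None

    def on_tick(self, now):
        """Return the packet to retransmit if the timer has expired, else None."""
        if self.unacked is not None and now - self.timer_start >= self.timeout_interval:
            self.timer_start = now      # restart the timer for the retransmission
            return self.unacked
        return None

s = StopAndWaitSender(timeout_interval=1.0)
s.send("pkt0", now=0.0)
print(s.on_tick(now=0.5))   # None: timer has not yet expired
print(s.on_tick(now=1.2))   # pkt0: no ACK in time, so retransmit
```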
congestion avoidance (TCP state)
upon entry into the congestion avoidance state, the value of cwnd is approximately half its value when congestion was last encountered. Rather than doubling the value of cwnd every RTT, TCP adopts a more conservative approach and increases cwnd by just a single MSS every RTT. This can be accomplished in several ways: a common approach is for the TCP sender to increase cwnd by MSS * (MSS/cwnd) bytes whenever a new acknowledgement arrives.

The linear increase of 1 MSS per RTT ends when a timeout occurs; as in the case of slow start, the value of cwnd is set to 1 MSS, and the value of ssthresh is updated to half the value of cwnd when the loss event occurred.

The loss event can also be triggered by a triple duplicate ACK event. In this case, the network is continuing to deliver segments from sender to receiver (as indicated by the receipt of duplicate ACKs), so TCP's response to this type of loss should be less drastic than with a timeout-indicated loss: TCP halves the value of cwnd (adding in 3 MSS for good measure to account for the triple duplicate ACKs received) and records the value of ssthresh to be half the value of cwnd when the triple duplicate ACKs were received. The fast-recovery state is then entered.
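The cwnd adjustments above can be written as three small update rules. This is a toy model with cwnd measured in units of MSS; the simulation loop is illustrative, not a full TCP state machine:

```python
# Toy model of TCP congestion-avoidance cwnd updates (cwnd in units of MSS).

MSS = 1

def on_new_ack(cwnd):
    # linear growth: cwnd increases by MSS * (MSS / cwnd) per new ACK,
    # which works out to roughly one MSS per RTT
    return cwnd + MSS * (MSS / cwnd)

def on_timeout(cwnd):
    # severe loss indication: cwnd back to 1 MSS, ssthresh = cwnd / 2
    return 1 * MSS, cwnd / 2

def on_triple_dup_ack(cwnd):
    # milder loss indication: halve cwnd (plus 3 MSS for the three dup ACKs)
    # and record ssthresh = cwnd / 2 before entering fast recovery
    return cwnd / 2 + 3 * MSS, cwnd / 2

cwnd = 4.0
for _ in range(4):              # one RTT's worth of new ACKs at cwnd = 4
    cwnd = on_new_ack(cwnd)
print(round(cwnd, 2))           # about 4.92: roughly one MSS gained over the RTT
```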
SYN segment
when a client requests a TCP connection to a server, it sends a 'SYN' segment to the server port. 'SYN' stands for synchronize. A SYN segment is identified because the flag bit in the segment's header, the SYN bit, is set to 1. This segment contains no application-layer data. The client also chooses a random initial sequence number (client_isn) and puts this number in the sequence number field of the initial TCP SYN segment (this is the raw sequence number in Wireshark).
Demultiplexing
when a host receives an incoming transport-layer segment, it must examine the segment's fields and identify the receiving socket, then must deliver the data in that segment to the correct socket. This is called:
short interframe spacing (SIFS)
when a station in a wireless LAN sends a frame, the frame may not reach the destination station intact for a variety of reasons. To deal with this non-negligible chance of failure, the 802.11 MAC protocol uses link-layer acknowledgements. When the destination station receives a frame that passes the CRC, it waits a short period of time known as the ___________________ and then sends back an acknowledgement frame.