Networking - Chapter 3


Transport vs Network

a transport-layer protocol provides logical communication between processes running on different hosts; a network-layer protocol provides logical communication between hosts

UDP = no connection establishment

unlike TCP, UDP does not use a three-way handshake before it starts to transfer data and thus does not introduce any delay to establish a connection

source (16 bits) and destination (16 bits) port numbers

used for multiplexing/demultiplexing data from/to upper-layer applications

Two-Tuple

a UDP socket is fully identified by a two-tuple consisting of a destination IP address and a destination port number. If two UDP segments have different source IP addresses and/or source port numbers but have the same destination IP address and destination port number, then the two segments will be directed to the same destination socket
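The two-tuple lookup can be sketched as a dictionary keyed only by the destination address and port; the socket table and names below are illustrative, not a real kernel API.

```python
# Minimal sketch of UDP demultiplexing: the receiving host looks up the
# socket using only the destination (IP, port) two-tuple, so segments
# from different senders land in the same socket.
sockets = {("10.0.0.5", 6428): "socket-A"}

def demux_udp(segment):
    key = (segment["dst_ip"], segment["dst_port"])
    return sockets.get(key)

seg1 = {"src_ip": "1.1.1.1", "src_port": 1111, "dst_ip": "10.0.0.5", "dst_port": 6428}
seg2 = {"src_ip": "2.2.2.2", "src_port": 2222, "dst_ip": "10.0.0.5", "dst_port": 6428}
print(demux_udp(seg1) == demux_udp(seg2))  # True: different sources, same socket
```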

utilization

(of the sender or channel) defined as the fraction of time the sender is actually busy sending bits into the channel. The stop-and-wait protocol has a rather dismal sender utilization
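The dismal numbers are easy to reproduce; the values below (a 1,000-byte packet on a 1 Gbps link with a 30 ms round-trip time) are illustrative, not from the notes.

```python
# Stop-and-wait sender utilization: time spent transmitting divided by
# the total time per packet cycle (one transmission plus one RTT).
def utilization(packet_bits, rate_bps, rtt_s):
    t_trans = packet_bits / rate_bps        # time to push the packet onto the link
    return t_trans / (rtt_s + t_trans)      # busy time over total cycle time

u = utilization(8 * 1000, 1e9, 0.030)
print(round(u, 6))  # 0.000267: the sender is idle over 99.9% of the time
```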

IP Service Model

a best-effort delivery service

Retransmission (ARQ)

a packet that is received in error at the receiver will be re-transmitted by the sender

Finite-State Machine (FSM)

an abstract machine that can be one of a finite number of states at any given time

Three-Way Handshake

in order to establish a TCP connection, three packets are sent between the two hosts; for this reason, the procedure is often referred to as a 3-way handshake

End-End Principle

in system design; states that certain functionality must be implemented on an end-end basis: "functions placed at the lower levels may be redundant or of little value when compared to the cost of providing them at the higher level"

Reminders:

Packets by layer: application layer = messages; transport layer = segments; network layer = datagrams; link layer = frames. Transport-layer protocols: more than one may be available, such as TCP or UDP

TCP Connection Management

TCP connection establishment can significantly add to perceived delays. Many of the most common network attacks, including the popular SYN flood attack, exploit vulnerabilities in TCP connection management

TCP Segments

TCP pairs each chunk of client data with a TCP header, thereby forming TCP segments. The segments are passed down to the network layer, where they are separately encapsulated within network-layer IP datagrams. The IP datagrams are then sent into the network. When TCP receives a segment at the other end, the segment's data is placed in the TCP connection's receive buffer. The application reads the stream of data from this buffer. Each side of the connection has its own send buffer and its own receive buffer.

UDP = small packet header overhead

TCP segment = 20 bytes of header overhead in every segment; UDP = 8 bytes of header overhead in every segment

Unidirectional Data Transfer

data transfer from the sending to the receiving side

Selective Acknowledgement

proposed modification of TCP; allows a TCP receiver to ACK out-of-order segments selectively rather than just cumulatively ACKing the last correctly received, in-order segment

Internet Protocol

provides logical communication between hosts; said to be an unreliable service: it does not guarantee segment delivery, orderly delivery of segments, or the integrity of the data in the segments. Each host has an IP address

Sequence Number

used for sequential numbering of packets of data flowing from sender to receiver; gaps in the sequence numbers of received packets allow the receiver to detect a lost packet, and packets with duplicate sequence numbers allow the receiver to detect duplicate copies of a packet. A solution for handling corrupted ACKs or NAKs, adopted in almost all existing data transfer protocols, including TCP: add a new field to the data packet and have the sender number its data packets by putting a sequence number into this field. The receiver then need only check this sequence number to determine whether or not the received packet is a re-transmission; this handles possible duplicate packets

Pipelining

*stop-and-wait protocols have poor sender utilization; network protocols can limit the capabilities provided by the underlying network hardware. A solution to this performance problem is to allow the sender to send multiple packets without waiting for ACKs; for instance, if a user is allowed to transmit three packets before having to wait for ACKs, the utilization of the sender is essentially tripled. The many in-transit sender-to-receiver packets can be viewed as filling a pipeline, so this technique is known as pipelining
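The tripling claim can be checked directly by extending the stop-and-wait utilization formula with N in-flight packets per round trip; the link parameters below are illustrative.

```python
# Sender utilization with N packets pipelined per RTT: the busy time
# per cycle scales by N (until the pipe fills), so utilization does too.
def utilization(n_packets, packet_bits, rate_bps, rtt_s):
    t_trans = packet_bits / rate_bps
    return (n_packets * t_trans) / (rtt_s + t_trans)

stop_and_wait = utilization(1, 8000, 1e9, 0.030)
pipelined     = utilization(3, 8000, 1e9, 0.030)
print(round(pipelined / stop_and_wait, 9))  # 3.0: three packets per RTT triples utilization
```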

point-to-point

A TCP connection is between a single sender and a single receiver. So-called "multicasting" (see the online supplementary materials for this text)—the transfer of data from one sender to many receivers in a single send operation—is not possible with TCP.

cumulative acknowledgments

Because TCP only acknowledges bytes up to the first missing byte in the stream, TCP is said to provide cumulative acknowledgments.
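A cumulative ACK can be sketched as "the number of the first byte not yet received", no matter what arrived after a gap; the byte ranges below are illustrative.

```python
# Toy cumulative acknowledgment: scan forward from byte 0 and stop at
# the first missing byte; that byte number is the ACK to send.
def next_ack(received_bytes):
    n = 0
    while n in received_bytes:
        n += 1
    return n

# Bytes 0-535 arrived in order, 600-699 arrived out of order, 536-599 missing.
received = set(range(536)) | set(range(600, 700))
print(next_ack(received))  # 536: still ACKing only up to the gap
```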

Best-effort Delivery Service

IP makes its "best effort" to deliver segments between communicating hosts but makes no guarantees

Transport-Layer Multiplexing and De-multiplexing

extending host-to-host delivery to process-to-process delivery

Congestion Control

a form of sender control: a TCP sender can be throttled due to congestion within the IP network. *Similar to flow control in that it throttles the sender, but each action is taken for very different reasons; the terms should not be used interchangeably

Well-Known Port Numbers

port numbers ranging from 0 to 1023; restricted, which means that they are reserved for use by well-known application protocols such as HTTP (which uses port number 80) and FTP (which uses port number 21)

Choosing UDP vs TCP

some applications are better suited for UDP for the following reasons: 1. Finer application-level control over what data is sent and when 2. No connection establishment

Multiplexing

the job of gathering data chunks at the source host from different sockets, encapsulating each data chunk with header information (that will later be used in de-multiplexing) to create segments, and passing the segments to the network layer is called multiplexing. Multiplexing requires (1) that sockets have unique identifiers, and (2) that each segment have special fields (the source port number field and the destination port number field) that indicate the socket to which the segment is to be delivered.

flag field (6 bits)

1. ACK bit is used to indicate that the value carried in the acknowledgment field is valid; that is, the segment contains an acknowledgment for a segment that has been successfully received 2. RST, SYN, and FIN bits are used for connection setup and teardown 3. CWR and ECE bits are used in explicit congestion notification 4. Setting the PSH bit indicates that the receiver should pass the data to the upper layer immediately 5. URG bit is used to indicate that there is data in this segment that the sending-side upper-layer entity has marked as "urgent." The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer field. TCP must inform the receiving-side upper-layer entity when urgent data exists and pass it a pointer to the end of the urgent data. In practice, the PSH, URG, and the urgent data pointer are not used. However, we mention these fields for completeness.

GBN Sender: Responds to 3 Events

1. Invocation from above - when 'send' is called from above, the sender first checks to see if the window is full, that is, whether there are N outstanding, unACKed packets; if the window is not full, a packet is created and sent and variables are appropriately updated; if the window is full, the sender simply returns the data back to the upper layer, an implicit indication that the window is full; in a real implementation, the sender would more likely have either buffered (but not immediately sent) this data or would have a synchronization mechanism that would allow the upper layer to call 'send' only when the window is not full 2. Receipt of an ACK - an ACK for a packet w/ sequence number n will be taken to be a cumulative ACK, indicating that all packets w/ a sequence number up to and including n have been correctly received at the receiver 3. Timeout event - the name Go-Back-N is derived from the sender's behavior in the presence of lost or overly delayed packets; a timer will be used to recover from lost data or ACK packets; if a timeout occurs, the sender resends all packets that have been previously sent but have not yet been ACKed
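The three events can be sketched as a toy sender class; the window bookkeeping below is a simplified illustration (no real timers or packet headers), not a full protocol implementation.

```python
# Toy Go-Back-N sender illustrating the three events above.
class GBNSender:
    def __init__(self, n):
        self.N = n
        self.base = 0        # oldest unACKed sequence number
        self.nextseq = 0     # next sequence number to use
        self.sent = {}       # seq -> data, kept for retransmission

    def send(self, data):    # event 1: invocation from above
        if self.nextseq >= self.base + self.N:
            return False     # window full: refuse (real senders would buffer)
        self.sent[self.nextseq] = data
        self.nextseq += 1
        return True

    def ack(self, n):        # event 2: cumulative ACK for sequence number n
        self.base = max(self.base, n + 1)

    def timeout(self):       # event 3: resend every sent-but-unACKed packet
        return [self.sent[s] for s in range(self.base, self.nextseq)]

s = GBNSender(3)
for d in "abc":
    s.send(d)
print(s.send("d"))   # False: window of 3 is full
s.ack(0)
print(s.send("d"))   # True: the window slid forward
print(s.timeout())   # ['b', 'c', 'd']: everything unACKed is resent
```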

TCP Connection Management Steps

1. The client-side TCP sends a special TCP segment to the server-side TCP; the segment contains no app-layer data, but one of the flag bits in the segment's header, the SYN bit, is set to 1 - this segment is thus referred to as a SYN segment; the client randomly chooses an initial sequence number (client_isn) and puts this number in the seq. number field of the initial TCP SYN segment; the segment is then encapsulated within an IP datagram and sent to the server; properly randomizing the seq. number helps avoid certain security attacks 2. Once the datagram containing the TCP SYN segment arrives at the server host, the server extracts the TCP SYN segment from the datagram, allocates the TCP buffers and variables to the connection, and sends a connection-granted segment to the client TCP; this connection-granted segment also contains no app-layer data but does contain 3 important pieces of information in the segment header: - SYN bit is set to 1 - ACK field is set to client_isn + 1 - the server chooses its own initial seq. number (server_isn) and puts this value in the seq. number field of the TCP segment header. The connection-granted segment is referred to as a SYNACK segment (allocation of these buffers and variables before completing the 3rd step of the three-way handshake makes TCP vulnerable to a DoS attack known as SYN flooding) 3. Upon receiving the SYNACK segment, the client also allocates buffers and variables to the connection; the client host then sends the server yet another segment - this last segment acknowledges the server's connection-granted segment (done by putting the value server_isn + 1 in the ACK field of the TCP segment header); the SYN bit is set to 0, since the connection is established - this stage of the 3-way handshake may carry client-to-server data in the segment payload. Once these steps are complete, the client and server can send segments containing data to each other; in future segments, the SYN bit will be set to 0. If the client does not send an ACK to complete the third step of the 3-way handshake, eventually (often after a minute or more) the server will terminate the half-open connection and reclaim the allocated resources
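The sequence/ACK arithmetic of the three segments can be laid out directly; the initial sequence numbers below are illustrative (real TCP randomizes them).

```python
# The three handshake segments, tracking only the SYN bit, the sequence
# number, and the ACK field.
client_isn, server_isn = 1000, 5000

syn    = {"SYN": 1, "seq": client_isn}                              # step 1
synack = {"SYN": 1, "seq": server_isn, "ack": client_isn + 1}       # step 2
ack    = {"SYN": 0, "seq": client_isn + 1, "ack": server_isn + 1}   # step 3

print(synack["ack"], ack["ack"])  # 1001 5001
```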

SR Sender: Responds to 3 Events

1. Data received from above - when data is sent from above, the SR sender checks the next available sequence number for the packet; if the seq. number is within the sender's window, the data is packetized and sent; otherwise, it is either buffered or returned to the upper layer for later transmission 2. Timeout - timers are again used to protect against lost packets; however, each packet must now have its own logical timer, since only a single packet will be transmitted on timeout; a single hardware timer can be used to mimic the operation of multiple logical timers 3. ACK received - if an ACK is received, the SR sender marks the packet as having been received, provided it is in the window

Consequences of Pipelining

1. the range of sequence numbers must be increased, since each in-transit packet must have a unique seq. number and there may be multiple, in-transit, unACKed packets 2. the sender and receiver sides of the protocols have to buffer more than one packet; the sender will have to buffer packets that have been transmitted but not yet ACKed (at a minimum) 3. the range of sequence numbers needed and the buffering requirements depend on the manner in which a data transfer protocol responds to lost, corrupted, and overly delayed packets. The two basic approaches toward pipelined error recovery are Go-Back-N and selective repeat

De-multiplexing

Each transport-layer segment has a set of fields for the purpose of directing an incoming transport-layer segment to the appropriate socket. At the receiving end, the transport layer examines these fields to identify the receiving socket and then directs the segment to that socket; this job of delivering the data in a transport-layer segment to the correct socket is called de-multiplexing

Network-layer Protocol

IP = internet protocol

full-duplex service

If there is a TCP connection between Process A on one host and Process B on another host, then application-layer data can flow from Process A to Process B at the same time as application-layer data flows from Process B to Process A.

What does a host do when it receives out-of-order segments in a TCP connection?

Interestingly, the TCP RFCs do not impose any rules here and leave the decision up to the programmers implementing a TCP implementation. There are basically two choices: either (1) the receiver immediately discards out-of-order segments (which, as we discussed earlier, can simplify receiver design), or (2) the receiver keeps the out-of-order bytes and waits for the missing bytes to fill in the gaps. Clearly, the latter choice is more efficient in terms of network bandwidth, and is the approach taken in practice.

TCP = GBN or SR?

TCP ACKs are cumulative, and correctly received but out-of-order segments are not individually ACKed by the receiver; the TCP sender need only maintain SendBase. In this sense, TCP looks a lot like a GBN-style protocol. However, many TCP implementations will buffer correctly received but out-of-order segments; with selective acknowledgment, TCP looks a lot like a generic SR protocol. TCP's error-recovery mechanism is probably best categorized as a hybrid of GBN and SR protocols

Four-Tuple

a TCP socket is identified by a four-tuple: source IP address, source port number, destination IP address, destination port number. When a TCP segment arrives from the network at a host, the host uses all four values to direct (de-multiplex) the segment to the appropriate socket. Two arriving TCP segments with different source IP addresses or source port numbers will (with the exception of a TCP segment carrying the original connection-establishment request) be directed to two different sockets. A server host may support many simultaneous TCP connection sockets, with each socket attached to a process and with each socket identified by its own four-tuple
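Contrasted with the UDP two-tuple, the TCP lookup keys on all four values; the connection table and names below are illustrative.

```python
# Sketch of TCP demultiplexing: two clients hitting the same server
# port are directed to different connection sockets because the
# lookup key includes the source address and port.
connections = {
    ("1.1.1.1", 7532, "10.0.0.5", 80): "socket-A",
    ("2.2.2.2", 7532, "10.0.0.5", 80): "socket-B",
}

def demux_tcp(seg):
    key = (seg["src_ip"], seg["src_port"], seg["dst_ip"], seg["dst_port"])
    return connections.get(key)

a = demux_tcp({"src_ip": "1.1.1.1", "src_port": 7532, "dst_ip": "10.0.0.5", "dst_port": 80})
b = demux_tcp({"src_ip": "2.2.2.2", "src_port": 7532, "dst_ip": "10.0.0.5", "dst_port": 80})
print(a != b)  # True: same destination port, different sockets
```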

SYN Flag

The SYN[chronize] flag is the TCP packet flag that is used to initiate a TCP connection

UDP Segment Structure

RFC 768. Application data occupies the data field of the UDP segment. The UDP header has only four fields, each consisting of two bytes: source port #, dest port #, length, checksum (data follows after the header). The port numbers allow the destination host to pass the application data to the correct process running on the destination end system (that is, to perform de-multiplexing). The length field specifies the number of bytes in the UDP segment (header + data in bytes); an explicit length value is needed since the size of the data field may differ from one UDP segment to the next. The checksum is used by the receiving host to check whether errors have been introduced into the segment; *also calculated over a few of the fields in the IP header in addition to the UDP segment
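The four two-byte fields pack into exactly eight bytes; the ports and payload below are illustrative, and the checksum is left at zero for brevity.

```python
import struct

# Pack the four 16-bit UDP header fields (RFC 768) in network byte order.
def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)   # header (8 bytes) + data, as the length field requires
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(40000, 53, b"hello")
print(len(hdr), struct.unpack("!HHHH", hdr)[2])  # 8 13: 8-byte header, 13-byte segment
```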

Reliable Data Transfer

Recall that the Internet's network-layer service (IP service) is unreliable. IP does not guarantee datagram delivery, does not guarantee in-order delivery of datagrams, and does not guarantee the integrity of the data in the datagrams. With IP service, datagrams can overflow router buffers and never reach their destination, datagrams can arrive out of order, and bits in the datagram can get corrupted (flipped from 0 to 1 and vice versa). Because transport-layer segments are carried across the network by IP datagrams, transport-layer segments can suffer from these problems as well. TCP creates a reliable data transfer service on top of IP's unreliable best-effort service. TCP's reliable data transfer service ensures that the data stream that a process reads out of its TCP receive buffer is uncorrupted, without gaps, without duplication, and in sequence; that is, the byte stream is exactly the same byte stream that was sent by the end system on the other side of the connection. How TCP provides reliable data transfer involves many of the principles of reliable data transfer that we studied earlier. In our earlier development of reliable data transfer techniques, it was conceptually easiest to assume that an individual timer is associated with each transmitted but not yet acknowledged segment. While this is great in theory, timer management can require considerable overhead. Thus, the recommended TCP timer management procedures [RFC 6298] use only a single retransmission timer, even if there are multiple transmitted but not yet acknowledged segments.

TCP segment structure

The TCP segment consists of header fields and a data field. The data field contains a chunk of application data. As mentioned above, the MSS limits the maximum size of a segment's data field. When TCP sends a large file, such as an image as part of a Web page, it typically breaks the file into chunks of size MSS (except for the last chunk, which will often be less than the MSS). Interactive applications, however, often transmit data chunks that are smaller than the MSS;

maximum segment size (MSS)

The maximum amount of data that can be grabbed and placed in a segment typically set by first determining the length of the largest link-layer frame that can be sent by the local sending host (the so-called maximum transmission unit, MTU), and then setting the MSS to ensure that a TCP segment (when encapsulated in an IP datagram) plus the TCP/IP header length (typically 40 bytes) will fit into a single link-layer frame. Note that the MSS is the maximum amount of application-layer data in the segment, not the maximum size of the TCP segment including headers.
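The relationship described above is simple arithmetic; the Ethernet MTU of 1,500 bytes is the typical value, and the 40-byte figure assumes minimal (option-free) TCP and IP headers.

```python
mtu = 1500              # typical Ethernet maximum transmission unit
tcp_ip_headers = 40     # 20-byte TCP header + 20-byte IP header, no options
mss = mtu - tcp_ip_headers
print(mss)  # 1460: max application-layer bytes per segment on Ethernet
```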

How Long Before Packet is Truly Lost

The sender must clearly wait at least as long as a round-trip delay between the sender and receiver (which may include buffering at intermediate routers) plus whatever amount of time is needed to process a packet at the receiver. The approach thus adopted in practice is for the sender to judiciously choose a time value such that packet loss is likely, although not guaranteed, to have happened. If an ACK is not received within this time, the packet is retransmitted. Note that if a packet experiences a particularly large delay, the sender may retransmit the packet even though neither the data packet nor its ACK have been lost. This introduces the possibility of duplicate data packets in the sender-to-receiver channel.

UDP = no connection state

UDP does not track any connection state parameters such as receive and send buffers, congestion-control parameters and sequence and acknowledgement number parameters; for this reason, a server devoted to a particular application can typically support many more active clients when the application runs over UDP

UDP

UDP is an unreliable service; it does just about as little as a transport protocol can do; aside from the multiplexing/de-multiplexing function and some light error checking, it adds nothing to IP. UDP takes messages from the application process, attaches source and destination port number fields for the multiplexing/de-multiplexing service, adds two other small fields, and passes the resulting segment to the network layer. The network layer encapsulates the transport-layer segment into an IP datagram and then makes a best-effort attempt to deliver the segment to the receiving host; if the segment arrives at the receiving host, UDP uses the destination port number to deliver the segment's data to the correct application process. With UDP there is no handshaking between sending and receiving transport-layer entities before sending a segment, making UDP connectionless. DNS is an example of an application-layer protocol that typically uses UDP

'Message-Dictation' Protocols

Use both positive acknowledgements (OK) and negative acknowledgements (Please repeat) that allow the receiver to let the sender know what has been received correctly and what has been received in error and thus, requires repeating.

acknowledgment number field (32 bits)

a critical part of TCP's reliable data transfer service Each of the segments that arrive from Host B has a sequence number for the data flowing from B to A. The acknowledgment number that Host A puts in its segment is the sequence number of the next byte Host A is expecting from Host B. It is good to look at a few examples to understand what is going on here. Suppose that Host A has received all bytes numbered 0 through 535 from B and suppose that it is about to send a segment to Host B. Host A is waiting for byte 536 and all the subsequent bytes in Host B's data stream. So Host A puts 536 in the acknowledgment number field of the segment it sends to B

sequence number field (32 bits)

a critical part of TCP's reliable data transfer service. TCP views data as an unstructured, but ordered, stream of bytes. TCP's use of sequence numbers reflects this view in that sequence numbers are over the stream of transmitted bytes and not over the series of transmitted segments. The sequence number for a segment is therefore the byte-stream number of the first byte in the segment. A TCP connection randomly chooses an initial sequence number - this is done to minimize the possibility that a segment that is still present in the network from an earlier, already-terminated connection between two hosts is mistaken for a valid segment in a later connection between these same two hosts

SYN Cookies

When the server receives a SYN segment, it does not know if the segment is coming from a legitimate user or is part of a SYN flood attack. So, instead of creating a half-open TCP connection for this SYN, the server creates an initial TCP sequence number that is a complicated function (hash function) of source and destination IP addresses and port numbers of the SYN segment, as well as a secret number only known to the server. This carefully crafted initial sequence number is the so-called "cookie." The server then sends the client a SYNACK packet with this special initial sequence number. Importantly, the server does not remember the cookie or any other state information corresponding to the SYN. A legitimate client will return an ACK segment. When the server receives this ACK, it must verify that the ACK corresponds to some SYN sent earlier. But how is this done if the server maintains no memory about SYN segments? As you may have guessed, it is done with the cookie. Recall that for a legitimate ACK, the value in the acknowledgment field is equal to the initial sequence number in the SYNACK (the cookie value in this case) plus one. The server can then run the same hash function using the source and destination IP address and port numbers in the SYNACK (which are the same as in the original SYN) and the secret number. If the result of the function plus one is the same as the acknowledgment (cookie) value in the client's SYNACK, the server concludes that the ACK corresponds to an earlier SYN segment and is hence valid. The server then creates a fully open connection along with a socket. On the other hand, if the client does not return an ACK segment, then the original SYN has done no harm at the server, since the server hasn't yet allocated any resources in response to the original bogus SYN.
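The stateless verification step can be sketched with a hash over the connection identifiers plus a server secret; the hash construction and values below are illustrative, not the production SYN-cookie algorithm.

```python
import hashlib

# Toy SYN cookie: derive the initial sequence number from the 4-tuple
# and a secret only the server knows, so no per-SYN state is stored.
SECRET = b"server-only-secret"

def cookie(src_ip, src_port, dst_ip, dst_port):
    msg = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode() + SECRET
    return int.from_bytes(hashlib.sha256(msg).digest()[:4], "big")

def valid_ack(ack_field, src_ip, src_port, dst_ip, dst_port):
    # A legitimate ACK carries cookie + 1; recompute and compare.
    return ack_field == cookie(src_ip, src_port, dst_ip, dst_port) + 1

isn = cookie("1.2.3.4", 5555, "10.0.0.1", 80)
print(valid_ack(isn + 1, "1.2.3.4", 5555, "10.0.0.1", 80))  # True: legitimate client
print(valid_ack(isn + 2, "1.2.3.4", 5555, "10.0.0.1", 80))  # False: bogus ACK
```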

Error Detection (ARQ)

a mechanism is needed to allow the receiver to detect when bit errors have occurred, such as the Internet checksum used with UDP. Some techniques allow the receiver to detect and possibly correct packet bit errors; these techniques require that extra bits (beyond the bits of original data to be transferred) be sent from the sender to the receiver
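The Internet checksum itself is short enough to sketch: a one's-complement sum of 16-bit words, complemented; the sample bytes are illustrative.

```python
# 16-bit Internet checksum as used by UDP/TCP: sum 16-bit words with
# end-around carry, then take the one's complement.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry back in
    return ~total & 0xFFFF

segment = b"\x12\x34\x56\x78"
csum = internet_checksum(segment)
# The receiver checksums data plus checksum; a result of 0 means no detected error.
print(hex(internet_checksum(segment + csum.to_bytes(2, "big"))))  # 0x0
```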

Telnet

a popular application-layer protocol used for remote login; runs over TCP and is designed to work between any pair of hosts; an interactive application. Many users prefer to use the SSH protocol rather than Telnet, since data sent in a Telnet connection (including passwords) is not encrypted, making Telnet vulnerable to eavesdropping attacks

Fast Re-transmit

a problem w/ timeout-triggered re-transmission is that the timeout period can be relatively long; when a segment is lost, this long timeout period forces the sender to delay re-sending the lost packet, thereby increasing the end-to-end delay. Fortunately, the sender can often detect packet loss well before the timeout via duplicate ACKs, which re-ACK a segment for which the sender has already received an earlier ACK. After a sender receives three duplicate ACKs for the same segment, it performs a fast re-transmit, re-transmitting the missing segment before the segment's timer expires

Socket

a process (as part of a network application) can have one or more sockets, doors through which data passes from the network to the process and through which data passes from the process to the network; the transport layer in the receiving host does not actually deliver data directly to a process, but instead to an intermediary socket. Because at any given time there can be more than one socket in the receiving host, each socket has a unique identifier. The format of the identifier depends on whether the socket is a UDP or a TCP socket

Network Routers

act only on the network-layer fields of the datagram; they do not examine the fields of the transport-layer segment encapsulated within the datagram

Window Size

aka N

Duplicate ACK

an ACK that re-ACKs a segment for which the sender has already received an earlier ACK. When a TCP receiver receives a segment with a seq. number that is larger than the next expected, in-order sequence number, it detects a gap in the data stream (a missing segment); since TCP does not use negative ACKs, the receiver cannot send an explicit negative ACK back to the sender; instead, it re-ACKs the last in-order byte of data it has received. If the TCP sender receives three duplicate ACKs for the same data, it takes this as an indication that the segment following the segment that has been ACKed repeatedly has been lost
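The duplicate-ACK counting that triggers fast retransmit can be sketched over a stream of ACK numbers; the ACK values below are illustrative.

```python
# Toy fast-retransmit trigger: after the third duplicate ACK for the
# same byte (the fourth ACK overall), resend the segment at that byte.
def find_fast_retransmits(acks):
    dup_count, last_ack, retransmits = 0, None, []
    for a in acks:
        if a == last_ack:
            dup_count += 1
            if dup_count == 3:
                retransmits.append(a)   # resend segment starting at byte a
        else:
            last_ack, dup_count = a, 0
    return retransmits

print(find_fast_retransmits([100, 200, 200, 200, 200, 300]))  # [200]
```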

Packet 'Life'

an approach taken to ensure that a sequence number is not re-used until the sender is 'sure' that any previously sent packets w/ that sequence number are no longer in the network. This is done by assuming that a packet cannot 'live' in the network for longer than some fixed maximum amount of time; a maximum packet lifetime of approx. three minutes is assumed in the TCP extensions for high-speed networks

Port Scanning

applications listen for requests on specific ports; if a port is open on a host, we may be able to map that port to a specific application running on the host. Useful for system administrators, who are often interested in knowing which network applications are running on the hosts in their networks; attackers use open ports to target hosts for attacks. Programs called port scanners can be used to determine which applications are listening on which ports (nmap.org)
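A minimal connect scan just tries to complete a TCP handshake on each port. The sketch below probes only the local machine (scanning hosts you don't control may be prohibited) and opens its own listener so the scan has something to find.

```python
import socket

# Report whether a TCP handshake completes on host:port within a timeout.
def port_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0   # 0 means the connect succeeded

# Open a local listener, then scan it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))  # True: the port is listening
listener.close()
```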

UDP = finer application-level control over what data is sent and when

TCP has congestion-control mechanisms that throttle the transport-layer TCP sender when one or more links between the source and destination hosts become excessively congested, and it will also continue to resend a segment until the receipt of the segment has been acknowledged by the destination, regardless of how long reliable delivery takes. TCP's service model is therefore not particularly well matched to these applications' needs, since real-time applications often require a minimum sending rate, do not want to overly delay segment transmission, and can tolerate some data loss

Selective-Repeat Protocols

avoid unnecessary re-transmissions by having the sender re-transmit only those packets that it suspects were received in error (lost or corrupted) at the receiver. This individual, as-needed re-transmission requires that the receiver individually ACK correctly received packets. A window size N will again be used to limit the number of outstanding, unACKed packets in the pipeline; unlike GBN, the sender may have already received ACKs for some of the packets in the window. The SR receiver will ACK a correctly received packet whether or not it is in order; out-of-order packets are buffered until any missing packets (packets with lower sequence numbers) are received, at which point a batch of packets can be delivered in order to the upper layer

Connection-Oriented Transport: TCP

because before one application process can begin to send data to another, the two processes must first "handshake" with each other—that is, they must send some preliminary segments to each other to establish the parameters of the ensuing data transfer. As part of TCP connection establishment, both sides of the connection will initialize many TCP state variables associated with the TCP connection. The TCP "connection" is not an end-to-end TDM or FDM circuit as in a circuit-switched network. Instead, the "connection" is a logical one, with common state residing only in the TCPs in the two communicating end systems. Recall that because the TCP protocol runs only in the end systems and not in the intermediate network elements (routers and link-layer switches), the intermediate network elements do not maintain TCP connection state. In fact, the intermediate routers are completely oblivious to TCP connections; they see datagrams, not connections

Segments

created by: (possibly) breaking the application message into smaller chunks and adding a transport-layer header to each chunk to create the transport-layer segment

TCP States

during the life of a TCP connection, the TCP protocol running in each host makes transitions through various TCP states 1. CLOSED 2. the application initiates a new TCP connection; TCP on the client sends a SYN segment to TCP in the server and enters the SYN_SENT state - waits for a segment from the server TCP that includes an ACK for the client's previous segment 3. having received the segment, the client TCP enters the ESTABLISHED state; while in the ESTABLISHED state, the TCP client can send and receive TCP segments containing payload 4. the client wants to close the connection, sends a TCP segment with the FIN bit set to 1, and enters the FIN_WAIT_1 state - waits for a TCP segment from the server with an ACK 5. when the ACK is received, the client TCP enters the FIN_WAIT_2 state - waits for another segment from the server with the FIN bit set to 1 6. after receiving this segment, the client TCP ACKs the server's segment and enters the TIME_WAIT state, which allows the TCP client to resend the final ACK in case the ACK is lost - this time is implementation-dependent, but typical values are 30 sec, 1 min, and 2 min 7. after the wait, the connection formally closes and all resources on the client side (including port numbers) are released

Popular Internet Applications That Use TCP

email, remote terminal access, the Web, file transfer, *streaming multimedia, *internet telephony. Both UDP and TCP are sometimes used today with multimedia applications such as internet phone, real-time video conferencing, and streaming of stored audio and video; these applications can tolerate a small amount of packet loss, so reliable data transfer is not absolutely critical for the application's success, and real-time applications react very poorly to TCP's congestion control. However, when packet loss rates are low, and with some organizations blocking UDP traffic for security reasons, TCP becomes an increasingly attractive protocol for streaming media transport

Reliable Data Transfer Protocol

has the responsibility of providing upper-layer entities with a reliable channel through which data can be transferred (no data bits are corrupted (no 0 to 1 or vice versa) or lost, and all are delivered in the order in which they were sent). This task is made difficult by the fact that the layer below the reliable data transfer protocol may be unreliable (i.e., TCP is implemented on top of the unreliable IP)

Receive Buffer

hosts on each side of a TCP connection set aside a receive buffer for the connection. when TCP receives bytes that are correct and in sequence, it places the data in the receive buffer; the associated application process will read data from this buffer, but not necessarily at the instant the data arrives

Persistent vs Non-persistent

if a client and server are using persistent HTTP, then throughout the duration of the persistent connection, the client and server exchange HTTP messages via the same server socket. if the client and server are using non-persistent HTTP, then a new TCP connection is created and closed for every request/response, and hence a new socket is created and later closed for every request/response; this frequent creating and closing of sockets can severely impact the performance of a busy web server (although a number of operating system tricks can be used to mitigate the problem)

Transport-Layer Protocols

implemented in end systems but not in network routers. within an end system, a transport protocol moves messages from application processes to the network edge (network layer) and vice versa, but it doesn't have any say about how the messages are moved within the network core; intermediate routers neither act on nor recognize any information that the transport layer may have added to the application message

sending side = the transport layer converts the application-layer messages it receives from a sending application process into transport-layer packets (segments); the transport layer then passes the segment to the network layer at the sending end system, where the segment is encapsulated within a network-layer packet (datagram) and sent to the destination

receiving side = the network layer extracts the transport-layer segment from the datagram and passes the segment up to the transport layer; the transport layer then processes the received segment, making the data in the segment available to the receiving application

services that a transport protocol can provide are often constrained by the service model of the underlying network-layer protocol; however, certain services can be offered by a transport protocol even when the underlying network protocol doesn't offer the corresponding service at the network layer. even a bare-bones transport protocol such as UDP has to provide a multiplexing/demultiplexing service in order to pass data between the network layer and the correct application-level process

Automatic Repeat reQuest (ARQ) Protocols

in a computer network setting, reliable data transfer protocols based on retransmission allow the receiver to let the sender know what has been received correctly or incorrectly, and what needs to be resent. three additional protocol capabilities are required in ARQ protocols to handle the presence of bit errors: 1. Error Detection 2. Receiver Feedback 3. Retransmission

Source Port Number

in an A-to-B segment, the source port number serves as part of a "return address" - when B wants to send a segment back to A, the destination port in the B-to-A segment will take its value from the source port of the A-to-B segment; the complete return address is A's IP address and source port number

Denial of Service (DoS) Attack: SYN Flood Attack

in this attack, the attacker sends a large number of TCP SYN segments without completing the third handshake step. with this deluge of SYN segments, the server's connection resources become exhausted as they are allocated (but never used) for half-open connections; legitimate clients are then denied service. such SYN flooding attacks were among the first documented DoS attacks; an effective defense known as SYN cookies is now deployed in most major operating systems

Receive Window

informally, the receive window is used to give the sender an idea of how much free buffer space is available at the receiver. because TCP is full-duplex, the sender on each side of the connection maintains a distinct receive window

rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]

rwnd is dynamic. host B tells host A how much spare room it has in the connection buffer by placing its current value of rwnd in the receive window field of every segment it sends to A; initially, host B sets rwnd = RcvBuffer. to do this, host B must keep track of several connection-specific variables. host A keeps track of LastByteSent and LastByteAcked; LastByteSent - LastByteAcked is the amount of unACKed data, which must be kept less than the value of rwnd. the TCP specification requires host A to continue to send segments with one data byte when B's receive window is zero; eventually, the buffer will begin to empty and the ACK will contain a nonzero rwnd value. rwnd = 0 means no room / full
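a quick sketch of the rwnd arithmetic above (variable names follow the text; the buffer sizes and byte counts are made up for illustration):

```python
def rwnd(rcv_buffer, last_byte_rcvd, last_byte_read):
    # Spare room = total buffer size minus bytes buffered but not yet read
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# 4096-byte buffer; 1000 bytes received, of which the app has read 200
print(rwnd(4096, 1000, 200))  # 3296 bytes of spare room to advertise
# A completely full buffer advertises rwnd = 0 (no room)
print(rwnd(4096, 4096, 0))    # 0
```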

Reliable Data Transfer with UDP

it is possible for an application to have reliable data transfer when using UDP; this can be done if reliability is built into the application itself (adding acknowledgement and retransmission mechanisms). this is not an easy task and would place a lot of development and debugging work on the developer

Estimating RTT

most TCP implementations take only one SampleRTT measurement at a time; at any point in time, the SampleRTT is being estimated for only one of the transmitted but currently unACKed segments, leading to a new value of SampleRTT approximately once every RTT. TCP never computes a SampleRTT for a segment that has been retransmitted; it only measures SampleRTT for segments that have been transmitted once

SampleRTT values fluctuate from segment to segment due to congestion in the routers and to the varying load on the end systems; any one SampleRTT value may be atypical. to estimate a typical RTT, it is necessary to take some sort of average. TCP maintains an average, called EstimatedRTT, of the SampleRTT values; upon obtaining a new SampleRTT, TCP updates EstimatedRTT according to the formula:

EstimatedRTT = (1 - α) · EstimatedRTT + α · SampleRTT

EstimatedRTT is a weighted combination of the previous value of EstimatedRTT and the new value of SampleRTT. the recommended value of α is 0.125 (1/8), with which the formula above becomes:

EstimatedRTT = 0.875 · EstimatedRTT + 0.125 · SampleRTT

EstimatedRTT is a weighted average that puts more weight on recent samples than on old samples, as the more recent samples better reflect the current congestion in the network. it is also valuable to have a measure of the variability of the RTT. DevRTT is an estimate of how much SampleRTT typically deviates from EstimatedRTT:

DevRTT = (1 - β) · DevRTT + β · |SampleRTT - EstimatedRTT|

the recommended value of β is 0.25. it is desirable to set the timeout equal to EstimatedRTT plus some margin. the margin should be large when there is a lot of fluctuation in the SampleRTT values and small when there is little fluctuation; the value of DevRTT should thus come into play here. all of these considerations are taken into account in TCP's method for determining the retransmission timeout interval:

TimeoutInterval = EstimatedRTT + 4 · DevRTT

an initial TimeoutInterval value of 1 second is recommended [RFC 6298].
Also, when a timeout occurs, the value of TimeoutInterval is doubled to avoid a premature timeout occurring for a subsequent segment that will soon be acknowledged. However, as soon as a segment is received and EstimatedRTT is updated, the TimeoutInterval is again computed using the formula above
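the estimation formulas and the doubling rule can be sketched together; the first-sample initialization (EstimatedRTT = SampleRTT, DevRTT = SampleRTT/2) follows RFC 6298, but this is illustrative, not a real TCP implementation:

```python
class RttEstimator:
    """Sketch of TCP's RTT estimation and timeout rules (RFC 6298 style)."""

    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha = alpha          # weight on the new SampleRTT
        self.beta = beta            # weight on the new deviation sample
        self.estimated_rtt = None
        self.dev_rtt = 0.0
        self.timeout = 1.0          # recommended initial TimeoutInterval (s)

    def on_sample(self, sample_rtt):
        if self.estimated_rtt is None:      # first measurement
            self.estimated_rtt = sample_rtt
            self.dev_rtt = sample_rtt / 2
        else:
            self.dev_rtt = ((1 - self.beta) * self.dev_rtt
                            + self.beta * abs(sample_rtt - self.estimated_rtt))
            self.estimated_rtt = ((1 - self.alpha) * self.estimated_rtt
                                  + self.alpha * sample_rtt)
        self.timeout = self.estimated_rtt + 4 * self.dev_rtt

    def on_timeout(self):
        self.timeout *= 2           # back off until a fresh sample arrives

est = RttEstimator()
est.on_sample(0.100)                # EstimatedRTT = 0.100, DevRTT = 0.050
est.on_sample(0.200)                # EstimatedRTT = 0.1125, DevRTT = 0.0625
print(round(est.timeout, 4))        # 0.3625 = 0.1125 + 4 * 0.0625
```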

Flow-Control Service

provided by TCP to its applications to eliminate the possibility of the sender overflowing the receiver's buffer; a speed-matching service, matching the rate at which the sender is sending against the rate at which the receiving application is reading. it is provided by having the sender maintain a variable called the receive window. UDP does not provide a flow-control service, and segments may be lost at the receiver due to buffer overflow; a typical UDP implementation will append segments to a finite-size buffer that precedes the corresponding socket, and the process reads one entire segment at a time from the buffer. if the process does not read the segments fast enough, the buffer will overflow and segments will get dropped

Connection-Establishment Request

nothing more than a TCP segment with a destination port number and a special connection-establishment bit set in the TCP header; it also includes a source port number chosen by the client. the transport layer at the server notes the following four values in the connection-request segment: 1. the source port number in the segment 2. the IP address of the source host 3. the destination port number in the segment 4. its own IP address. the newly created connection socket is identified by these four values; all subsequently arriving segments whose source port, source IP address, destination port, and destination IP address match these four values will be demultiplexed to this socket. the client and server can now send data to each other
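the four-value demultiplexing can be pictured as a lookup table keyed on the full four-tuple; a real kernel does this internally, so the dict below (and the addresses/port numbers in it) are purely illustrative:

```python
sockets = {}  # (src IP, src port, dst IP, dst port) -> connection socket

def register(src_ip, src_port, dst_ip, dst_port, sock):
    sockets[(src_ip, src_port, dst_ip, dst_port)] = sock

def demux(src_ip, src_port, dst_ip, dst_port):
    # Only segments matching all four values reach this connection's socket
    return sockets.get((src_ip, src_port, dst_ip, dst_port))

register("10.0.0.5", 50000, "192.168.1.1", 80, "socket-A")
print(demux("10.0.0.5", 50000, "192.168.1.1", 80))  # socket-A
print(demux("10.0.0.6", 50000, "192.168.1.1", 80))  # None: different source
```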

send buffer

one of the buffers that is set aside during the initial three-way handshake. From time to time, TCP will grab chunks of data from the send buffer and pass the data to the network layer. Interestingly, the TCP specification [RFC 793] is very laid back about specifying when TCP should actually send buffered data, stating that TCP should "send that data in segments at its own convenience."

options field

optional and variable-length; is used when a sender and receiver negotiate the maximum segment size (MSS) or as a window scaling factor for use in high-speed networks. time-stamping option is also defined.

Logical Communication

provided by transport-layer protocols to application processes running on different hosts. from an application's perspective, it is as if the hosts running the processes were directly connected; in reality, the hosts may be on opposite sides of the planet, connected by numerous routers and a wide range of link types. this allows application processes to send messages to each other without worrying about the details of the physical infrastructure used to carry these messages

UDP Checksum

provides for error detection; used to determine whether bits within the UDP segment have been altered (e.g., by noise in the links or while stored in a router) as the segment moved from source to destination

UDP at the sender side performs the 1s complement of the sum of all the 16-bit words in the segment, with any overflow encountered during the sum being wrapped around; the result is put in the checksum field of the UDP segment. the 1s complement is obtained by converting all the 0s to 1s and all the 1s to 0s. for example, the 1s complement of the sum 0100101011000010 is 1011010100111101, which becomes the checksum. at the receiver, all 16-bit words (including the checksum) are added; if no errors are introduced into the packet, then clearly the sum at the receiver will be 1111111111111111. if one of the bits is a 0, then we know that errors have been introduced into the packet

a checksum is needed despite many link-layer protocols (such as Ethernet) also providing error checking, because there is no guarantee that all the links between source and destination provide error checking (one of the links may use a link-layer protocol that does not); even if segments are correctly transferred across a link, it's possible that bit errors could be introduced when a segment is stored in a router's memory. UDP must therefore provide error detection at the transport layer, on an end-end basis. although UDP provides error checking, it does nothing to recover from an error; some implementations of UDP simply discard the damaged segment, while others pass the damaged segment to the application with a warning
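the wrap-around sum and 1s complement above can be sketched for arbitrary 16-bit words (real UDP also sums the header fields and a pseudo-header; omitted here for brevity, and the three example words are made up):

```python
def ones_complement_checksum(words):
    """1s complement of the wrap-around sum of 16-bit words."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap any overflow around
    return ~total & 0xFFFF  # flip every bit of the 16-bit sum

words = [0x6660, 0x5555, 0x8F0C]          # three example 16-bit words
csum = ones_complement_checksum(words)

# Receiver adds all words plus the checksum; a result of 0xFFFF (all 1s)
# means no error was detected.
total = csum
for w in words:
    total += w
    total = (total & 0xFFFF) + (total >> 16)
print(total == 0xFFFF)  # True
```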

Popular Internet Applications That Use UDP

remote file server *streaming multimedia *internet telephony *network management *name translation although commonly done today, running multimedia applications over UDP is controversial since UDP has no congestion control, but congestion control is needed to prevent the network from entering a congested state in which very little useful work is done; the lack of congestion control in UDP can result in high loss rates between a UDP sender and receiver and in the crowding out of TCP sessions, a potentially serious problem

header length field (4 bits)

specifies the length of the TCP header in 32-bit words. The TCP header can be of variable length due to the TCP options field. (Typically, the options field is empty, so that the length of the typical TCP header is 20 bytes.)

three-way handshake

the client first sends a special TCP segment; the server responds with a second special TCP segment; and finally the client responds again with a third special segment. The first two segments carry no payload, that is, no application-layer data; the third of these segments may carry a payload. Because three segments are sent between the two hosts, this connection-establishment procedure is often referred to as a three-way handshake. Once a TCP connection is established, the two application processes can send data to each other; once the client application passes data to its socket, the data is in the hands of the TCP running in the client.

UDP and TCP

the fundamental responsibility of UDP and TCP is to extend IP's delivery service between two end systems to a delivery service between two processes running on the end systems. both also provide integrity checking by including error-detection fields in their segments' headers; process-to-process data delivery and error checking are the only two services that UDP provides

like IP, UDP is an unreliable service; it does just about as little as a transport protocol can do. aside from the multiplexing/demultiplexing function and some light error checking, it adds nothing to IP; if a developer chooses UDP instead of TCP, the application is almost directly talking with IP

TCP provides reliable data transfer using flow control, sequence numbers, acknowledgments, and timers; it ensures that data is delivered from the sending process to the receiving process, correctly and in order, converting IP's unreliable service between end systems into a reliable data transport service between processes

TCP also provides congestion control; this is not so much a service provided to the invoking application as it is a service for the Internet as a whole, a service for the general good. it prevents any one TCP connection from swamping the links and routers between communicating hosts with an excessive amount of traffic, giving each connection traversing a congested link an equal share of the link bandwidth; this is done by regulating the rate at which the sending sides of TCP connections can send traffic into the network. UDP traffic, on the other hand, is unregulated; an application using UDP transport can send at any rate it pleases, for as long as it pleases

Receiver Feedback (ARQ)

the only way for the sender to learn whether the receiver has received a packet, and whether it was correct, is for the receiver to provide explicit feedback to the sender. the positive (ACK) and negative (NAK) acknowledgement replies are examples of such feedback; a 0 could indicate a NAK while a 1 could indicate an ACK

Go-Back-N (GBN) Protocol

the sender is allowed to transmit multiple packets (when available) without waiting for an ACK, but is constrained to have no more than some maximum allowable number, N, of unACKed packets in the pipeline. the range of permissible sequence numbers for transmitted but not-yet-ACKed packets can be viewed as a window of size N over the range of sequence numbers; as the protocol operates, the window slides forward over the sequence number space, so GBN is a sliding-window protocol. flow control and TCP congestion control are reasons to impose a limit of N on the number of outstanding, unACKed packets
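the sender-side window bookkeeping can be sketched as below; this is a toy model with no timers or real network I/O, and the window size N = 4 is just an example:

```python
class GbnSender:
    """Toy sketch of the GBN sender's sliding window."""

    def __init__(self, n=4):
        self.n = n           # maximum number of unACKed packets in flight
        self.base = 0        # oldest unACKed sequence number
        self.next_seq = 0    # next sequence number to assign

    def can_send(self):
        return self.next_seq < self.base + self.n  # window not yet full

    def send(self):
        assert self.can_send()
        self.next_seq += 1   # a real sender would transmit packet next_seq here

    def ack(self, seq):
        # Cumulative ACK for seq: slides the window base past it
        self.base = max(self.base, seq + 1)

s = GbnSender(n=4)
while s.can_send():
    s.send()
print(s.can_send())  # False: 4 unACKed packets (0-3) fill the window
s.ack(1)             # cumulative ACK covers packets 0 and 1
print(s.can_send())  # True: the window slid forward by two
```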

SendBase

the smallest sequence number of a transmitted but unACKed byte

Round Trip Time (RTT)

the time from when a segment is sent until it is acknowledged

Thread

there is not always a one-to-one correspondence between connection sockets and processes; many web servers use only one process and create a new thread (with a new connection socket) for each new client connection. a thread can be viewed as a lightweight subprocess

receive window field (16 bits)

used for flow control; used to indicate the number of bytes that a receiver is willing to accept

QUIC protocol

used in Google's Chrome browser; implements reliability in an application-layer protocol on top of UDP

Countdown Timer

used to time out / retransmit a packet, possibly because the packet (or its ACK) was lost within the channel. Implementing a time-based retransmission mechanism requires a countdown timer that can interrupt the sender after a given amount of time has expired. The sender will thus need to be able to (1) start the timer each time a packet (either a first-time packet or a retransmission) is sent, (2) respond to a timer interrupt (taking appropriate actions), and (3) stop the timer.
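the three operations can be sketched with Python's threading.Timer; the retransmit callback and the interval are placeholders for whatever action and timeout a real protocol would use:

```python
import threading

class RetransmitTimer:
    """Sketch of the three timer operations listed above."""

    def __init__(self, interval, on_expire):
        self.interval = interval
        self.on_expire = on_expire   # (2) action run on a timer interrupt
        self._timer = None

    def start(self):                 # (1) (re)start each time a packet is sent
        self.stop()
        self._timer = threading.Timer(self.interval, self.on_expire)
        self._timer.start()

    def stop(self):                  # (3) cancel when the awaited ACK arrives
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

fired = []
t = RetransmitTimer(0.05, lambda: fired.append("retransmit"))
t.start()
threading.Event().wait(0.2)          # no ACK arrives, so the timer expires
print(fired)  # ['retransmit']
```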

TCP Connection Termination

when a TCP connection comes to an end, the resources (buffers and variables) in the hosts are de-allocated; either of the two processes participating in the connection can end it. when a host's application process issues a close command, its TCP sends a special TCP segment to the other host's process; this special segment has a flag bit in the segment's header, the FIN bit, set to 1. when the other host receives this segment, it sends the issuing host an ACK segment in return and then sends its own shutdown segment, which also has the FIN bit set to 1; finally, the issuing host ACKs the other's shutdown segment, and at this point, all resources in the two hosts are de-allocated

Alternating-Bit Protocol

when a protocol's packet sequence numbers alternate between 0 and 1

Stop-and-Wait protocols

when the sender is in the wait-for-ACK-or-NAK state, it cannot get more data from the upper layer; thus, the sender will not send a new piece of data until it is sure that the receiver has correctly received the current packet. protocols with this behavior are known as stop-and-wait protocols
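to see why stop-and-wait sender utilization is so dismal, a back-of-the-envelope calculation with assumed example numbers (a 1,000-byte packet on a 1 Gbps link with a 30 ms RTT):

```python
L = 8000              # packet size in bits (1,000 bytes), example value
R = 10**9             # link transmission rate in bits/sec (1 Gbps), example
RTT = 0.030           # round-trip time in seconds, example value

t_trans = L / R       # 8 microseconds to clock the packet onto the link
# The sender is busy for t_trans out of every RTT + t_trans seconds
utilization = t_trans / (RTT + t_trans)
print(f"{utilization:.6f}")  # 0.000267: busy well under 0.03% of the time
```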

GBN Performance Drawback

when the window size and bandwidth-delay product are both large, many packets can be in the pipeline; a single packet error can thus cause GBN to re-transmit a large number of packets, many unnecessarily; as the probability of channel errors increases, the pipeline can become filled with these unnecessary re-transmissions

Doubling the Timeout Interval

whenever the timeout event occurs, TCP retransmits the not-yet-ACKed segment with the smallest sequence number; each time TCP retransmits, it sets the next timeout interval to twice the previous value rather than deriving it from the last EstimatedRTT and DevRTT. this is a limited form of congestion control: timer expiration is most likely caused by congestion in the network (too many packets arriving at one or more router queues in the path between source and destination), causing packets to be dropped and/or long queuing delays. if sources were to retransmit packets persistently, the congestion might get worse; instead, TCP acts more politely, with each sender retransmitting after longer and longer intervals

