CSE 3461: Networking - Midterm 1

In summary, today's Internet—a network of networks—is complex, consisting of a dozen or so tier-1 ISPs and hundreds of thousands of lower-tier ISPs. The ISPs are diverse in their coverage, with some spanning multiple continents and oceans, and others limited to narrow geographic regions. The lower-tier ISPs connect to the higher-tier ISPs, and the higher-tier ISPs interconnect with one another. Users and content providers are customers of lower-tier ISPs, and lower-tier ISPs are customers of higher-tier ISPs. In recent years, major content providers have also created their own networks and connect directly into lower-tier ISPs where possible.

Describe the internet: a network of networks

There are two types of HTTP messages, request messages and response messages. An HTTP request (or response) consists of 1. Header: meta-information 2. Body (sometimes): the payload. The header consists of 1. Method line (requests) / status line (responses) 2. Header fields (separated by newlines) 3. Blank line

Explain HTTP request-response architecture

R = transmission rate (bits/sec), L = packet length (bits), a = average rate at which packets arrive at the queue (packets/sec). La/R = traffic intensity. As La/R approaches 1, the average queuing delay grows rapidly; if La/R > 1, bits arrive at the queue faster than they can be transmitted, so the queue grows without bound and packets are dropped because they have nowhere to go (packet loss). Now consider the case La/R ≤ 1. Here, the nature of the arriving traffic impacts the queuing delay. For example, if packets arrive periodically—that is, one packet arrives every L/R seconds—then every packet will arrive at an empty queue and there will be no queuing delay. On the other hand, if packets arrive in bursts but periodically, there can be a significant average queuing delay.
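The quantities above can be sketched numerically; the packet size, arrival rate, and link rate below are assumed example values, not from the card:

```python
def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    """Traffic intensity La/R: average bit-arrival rate over transmission rate."""
    return (L_bits * a_pkts_per_sec) / R_bps

# Example: 1,000-bit packets arriving 500 times/sec on a 1 Mbps link
rho = traffic_intensity(1_000, 500, 1_000_000)
print(rho)  # 0.5 -> queue keeps up; delay grows rapidly as La/R approaches 1
```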

Explain average queueing delay vs traffic intensity

Web document transfers, and financial applications—data loss can have devastating consequences (in the latter case, for either the bank or the customer!). Thus, to support these applications, something has to be done to guarantee that the data sent by one end of the application is delivered correctly and completely to the other end of the application. If a protocol provides such a guaranteed data delivery service, it is said to provide reliable data transfer. When a transport-layer protocol doesn't provide reliable data transfer, some of the data sent by the sending process may never arrive at the receiving process. This may be acceptable for loss-tolerant applications, most notably multimedia applications such as conversational audio/video that can tolerate some amount of data loss. In these multimedia applications, lost data might result in a small glitch in the audio/video—not a crucial impairment.

Explain data loss in the context of network application requirements

A Web cache—also called a proxy server—is a network entity that satisfies HTTP requests on the behalf of an origin Web server. The Web cache has its own disk storage and keeps copies of recently requested objects in this storage. As an example, suppose a browser is requesting the object http://www.someschool.edu/campus.gif. Here is what happens: 1. The browser establishes a TCP connection to the Web cache and sends an HTTP request for the object to the Web cache. 2. The Web cache checks to see if it has a copy of the object stored locally. If it does, the Web cache returns the object within an HTTP response message to the client browser. 3. If the Web cache does not have the object, the Web cache opens a TCP connection to the origin server, that is, to www.someschool.edu. The Web cache then sends an HTTP request for the object into the cache-to-server TCP connection. After receiving this request, the origin server sends the object within an HTTP response to the Web cache. 4. When the Web cache receives the object, it stores a copy in its local storage and sends a copy, within an HTTP response message, to the client browser (over the existing TCP connection between the client browser and the Web cache).

Explain how web caching works

In a network application, end systems exchange messages with each other. To send a message from a source end system to a destination end system, the source breaks long messages into smaller chunks of data known as packets. Between source and destination, each packet travels through communication links and packet switches (for which there are two predominant types, routers and link-layer switches). Packets are transmitted over each communication link at a rate equal to the full transmission rate of the link. So, if a source end system or a packet switch is sending a packet of L bits over a link with transmission rate R bits/sec, then the time to transmit the packet is L/R seconds.
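The L/R formula can be checked with a small sketch (example values assumed):

```python
def transmission_delay(L_bits, R_bps):
    """Time to push all L bits of a packet onto a link of rate R bits/sec."""
    return L_bits / R_bps

# Example: a 12,000-bit packet over a 1.5 Mbps link
print(transmission_delay(12_000, 1_500_000))  # 0.008 seconds
```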

Explain packet switching

For example, if an Internet telephony application encodes voice at 32 kbps, it needs to send data into the network and have data delivered to the receiving application at this rate. If the transport protocol cannot provide this throughput, the application would need to encode at a lower rate (and receive enough throughput to sustain this lower coding rate) or may have to give up, since receiving, say, half of the needed throughput is of little or no use to this Internet telephony application. Applications that have throughput requirements are said to be bandwidth-sensitive applications.

Explain throughput in the context of network application requirements

A transport-layer protocol can also provide timing guarantees. As with throughput guarantees, timing guarantees can come in many shapes and forms. An example guarantee might be that every bit that the sender pumps into the socket arrives at the receiver's socket no more than 100 msec later. Such a service would be appealing to interactive real-time applications, such as Internet telephony, virtual environments, teleconferencing, and multiplayer games, all of which require tight timing constraints on data delivery in order to be effective. Long delays in Internet telephony, for example, tend to result in unnatural pauses in the conversation; in a multiplayer game or virtual interactive environment, a long delay between taking an action and seeing the response from the environment (for example, from another player at the end of an end-to-end connection) makes the application feel less realistic.

Explain timing in the context of network application requirements

By a network of communication links and packet switches

How are end systems connected?

The first time a user visits a site, the user can provide a user identification (possibly his or her name). During the subsequent sessions, the browser passes a cookie header to the server, thereby identifying the user to the server. Cookies can thus be used to create a user session layer on top of stateless HTTP. For example, when a user logs in to a Web-based e-mail application (such as Hotmail), the browser sends cookie information to the server, permitting the server to identify the user throughout the user's session with the application.

How do cookies work?

Average throughput of a connection ≈ 0.75 · W/RTT, where W is the window size at which a loss event occurs
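As a quick numeric check of the formula, with assumed example values (W = 80,000 bytes, RTT = 100 ms):

```python
def avg_tcp_throughput(W_bytes, rtt_sec):
    """Approximate average TCP throughput: 0.75 * W / RTT (bytes/sec)."""
    return 0.75 * W_bytes / rtt_sec

# Example: W = 80,000 bytes, RTT = 0.1 s -> roughly 600,000 bytes/sec
print(avg_tcp_throughput(80_000, 0.1))
```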

How do you calculate the average throughput of a TCP connection?

SampleRTT = the amount of time from when a segment is sent until its acknowledgment arrives. EstimatedRTT is an exponentially weighted moving average of the samples: EstimatedRTT = 0.875 • EstimatedRTT + 0.125 • SampleRTT

How do you estimate TCP RTT?

The timeout interval must be larger than the RTT, but not too much larger, or there will be unnecessary delays before lost segments are retransmitted. TimeoutInterval = EstimatedRTT + 4 • DevRTT
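One step of the estimation described in this card and the EstimatedRTT card can be sketched as follows (α = 0.125 and β = 0.25 are the usual recommended weights; the update ordering shown is one common choice):

```python
ALPHA, BETA = 0.125, 0.25  # recommended EWMA weights

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """One step of TCP RTT estimation and timeout computation (seconds)."""
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev, to = update_rtt(0.100, 0.005, 0.120)
print(round(est, 4), round(dev, 6), round(to, 4))  # 0.1025 0.008125 0.135
```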

How do you estimate the timeout interval for TCP?

BitTorrent is a popular P2P protocol for file distribution [Chao 2011]. In BitTorrent lingo, the collection of all peers participating in the distribution of a particular file is called a torrent. Peers in a torrent download equal-size chunks of the file from one another, with a typical chunk size of 256 KBytes. When a peer first joins a torrent, it has no chunks. Over time it accumulates more and more chunks. While it downloads chunks it also uploads chunks to other peers. Once a peer has acquired the entire file, it may (selfishly) leave the torrent, or (altruistically) remain in the torrent and continue to upload chunks to other peers. Also, any peer may leave the torrent at any time with only a subset of chunks, and later rejoin the torrent.

How does BitTorrent work?

there are three classes of DNS servers—root DNS servers, top-level domain (TLD) DNS servers, and authoritative DNS servers—organized in a hierarchy as shown in Figure 2.19. http://www.dcs.bbk.ac.uk/~ptw/teaching/IWT/internet-apps/dns-server-hierarchy.gif To understand how these three classes of servers interact, suppose a DNS client wants to determine the IP address for the hostname www.amazon.com. To a first approximation, the following events will take place. The client first contacts one of the root servers, which returns IP addresses for TLD servers for the top-level domain com. The client then contacts one of these TLD servers, which returns the IP address of an authoritative server for amazon.com. Finally, the client contacts one of the authoritative servers for amazon.com, which returns the IP address for the hostname www.amazon.com.

How does DNS server hierarchy work?

Suppose that some application (such as a Web browser or a mail reader) running in a user's host needs to translate a hostname to an IP address. The application will invoke the client side of DNS, specifying the hostname that needs to be translated. (On many UNIX-based machines, gethostbyname() is the function call that an application calls in order to perform the translation.) DNS in the user's host then takes over, sending a query message into the network. All DNS query and reply messages are sent within UDP datagrams to port 53. After a delay, ranging from milliseconds to seconds, DNS in the user's host receives a DNS reply message that provides the desired mapping. This mapping is then passed to the invoking application.

How does DNS work?

When a user starts an FTP session with a remote host, the client side of FTP (user) first initiates a control TCP connection with the server side (remote host) on server port number 21. The client side of FTP sends the user identification and password over this control connection. The client side of FTP also sends, over the control connection, commands to change the remote directory. When the server side receives a command for a file transfer over the control connection (either to, or from, the remote host), the server side initiates a TCP data connection to the client side. FTP sends exactly one file over the data connection and then closes the data connection. If, during the same session, the user wants to transfer another file, FTP opens another data connection. Thus, with FTP, the control connection remains open throughout the duration of the user session, but a new data connection is created for each file transferred within a session.

How does FTP work?

HTTP uses TCP as its underlying transport protocol (rather than running on top of UDP). The HTTP client first initiates a TCP connection with the server. Once the connection is established, the browser and the server processes access TCP through their socket interfaces. The client sends HTTP request messages into its socket interface and receives HTTP response messages from its socket interface. Similarly, the HTTP server receives request messages from its socket interface and sends response messages into its socket interface. Once the client sends a message into its socket interface, the message is out of the client's hands and is "in the hands" of TCP. Because an HTTP server maintains no information about the clients, HTTP is said to be a stateless protocol.

How does HTTP work?

First, the client SMTP (running on the sending mail server host) has TCP establish a connection to port 25 at the server SMTP (running on the receiving mail server host). If the server is down, the client tries again later. Once this connection is established, the server and client perform some application-layer handshaking—just as humans often introduce themselves before transferring information from one to another, SMTP clients and servers introduce themselves before transferring information.

How does SMTP work?

The approach taken by TCP is to have each sender limit the rate at which it sends traffic into its connection as a function of perceived network congestion. If a TCP sender perceives that there is little congestion on the path between itself and the destination, then the TCP sender increases its send rate; if the sender perceives that there is congestion along the path, then the sender reduces its send rate. The congestion window, denoted cwnd, imposes a constraint on the rate at which a TCP sender can send traffic into the network. Slow start initializes cwnd to 1 MSS when a TCP connection begins and doubles it every RTT until it reaches the slow-start threshold. Then congestion avoidance begins, increasing cwnd by 1 MSS per RTT. If a loss is detected (timeout or three duplicate ACKs), the threshold is cut to half of cwnd and the sender returns to slow start.
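A toy per-RTT simulation of the cwnd evolution described above (Tahoe-style simplification where every loss returns to slow start; cwnd in MSS units; function name and parameters are illustrative):

```python
def simulate_cwnd(ssthresh, rounds, loss_rounds=()):
    """Toy evolution of TCP's congestion window, one step per RTT."""
    cwnd, history = 1, []
    for r in range(rounds):
        history.append(cwnd)
        if r in loss_rounds:
            ssthresh = max(cwnd // 2, 1)  # cut threshold to half of cwnd
            cwnd = 1                      # return to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: double per RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 per RTT
    return history

print(simulate_cwnd(ssthresh=8, rounds=8))  # [1, 2, 4, 8, 9, 10, 11, 12]
```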

How does TCP control congestion?

If N TCP sessions share the same bottleneck link, each should get 1/N of the link capacity. This works because, with two competing sessions, additive increase raises both throughputs at the same rate, while multiplicative decrease cuts each session's throughput in proportion to its current rate; over time this evens out the bandwidth between them.

How does TCP fairness work?

Implementing a time-based retransmission mechanism requires a countdown timer that can interrupt the sender after a given amount of time has expired. The sender will thus need to be able to (1) start the timer each time a packet (either a first-time packet or a retransmission) is sent, (2) respond to a timer interrupt (taking appropriate actions), and (3) stop the timer.

In RDT what is a countdown timer used for?

The sequence number is a field added to a data packet to identify the packet. It lets the receiver determine whether a received packet is new data or a retransmission (for example, when a previous ACK or NAK was corrupted or lost).

In RDT what is a sequence number used for?

Step 1: Client end system sends TCP FIN control segment to server Step 2: Server receives FIN, replies with ACK. Closes connection, sends FIN. Step 3: Client receives FIN, replies with ACK. Enters "timed wait" - will respond with ACK to received FINs Step 4: Server receives ACK. Connection closed. http://cis.msjc.edu/courses/core_courses/csis202/images/TCPConnectionTearDown.png

In TCP what is a Four-way handshake (connection teardown)?

Step 1: Client end system sends TCP SYN control segment to server, specifying the initial seq # (client_isn). Step 2: Server end system receives SYN and replies with a SYNACK control segment: ACKs the received SYN, allocates buffers, and specifies the server's initial seq. # (server_isn). Step 3: Client allocates buffers and variables upon receiving SYNACK

In TCP what is a Three-way handshake (connection establishment)?

Figure 1.13 illustrates a circuit-switched network. In this network, the four circuit switches are interconnected by four links. Each of these links has four circuits, so that each link can support four simultaneous connections. The hosts (for example, PCs and workstations) are each directly connected to one of the switches. When two hosts want to communicate, the network establishes a dedicated end-to-end connection between the two hosts. Thus, in order for Host A to communicate with Host B, the network must first reserve one circuit on each of two links. In this example, the dedicated end-to-end connection uses the second circuit in the first link and the fourth circuit in the second link. Because each link has four circuits, for each link used by the end-to-end connection, the connection gets one fourth of the link's total transmission capacity for the duration of the connection. Thus, for example, if each link between adjacent switches has a transmission rate of 1 Mbps, then each end-to-end circuit-switch connection gets 250 kbps of dedicated transmission rate. http://i.stack.imgur.com/zLEv9.jpg
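The per-circuit arithmetic in the example (1 Mbps links, 4 circuits per link) can be checked as:

```python
def per_circuit_rate(link_rate_bps, circuits_per_link):
    """With circuit switching (FDM or TDM), each circuit gets an equal share
    of the link's transmission capacity for the duration of the connection."""
    return link_rate_bps / circuits_per_link

print(per_circuit_rate(1_000_000, 4))  # 250000.0 bps = 250 kbps
```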

In circuit switching what is an end to end principle?

█FDM: frequency-division multiplexing. With FDM, the frequency spectrum of a link is divided up among the connections established across the link. Specifically, the link dedicates a frequency band to each connection for the duration of the connection. With FDM, each circuit continuously gets a fraction of the bandwidth. █TDM: time-division multiplexing. For a TDM link, time is divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. With TDM, each circuit gets all of the bandwidth periodically during brief intervals of time (that is, during slots)

In circuit switching, what is FDM and TDM

AIMD: Additive increase, multiplicative decrease. Increase cwnd window by 1 per RTT Decrease cwnd window by factor of 2 on loss event

TCP congestion avoidance: How does AIMD work?

█Scenario 1: Two senders, two receivers, one router with infinite buffers, no retransmission. Packets get backed up in the router's infinite buffer queue, causing very long delays. █Scenario 2: Two senders, two receivers, one router with finite buffers, sender retransmission of lost packets. The sender may retransmit a packet that was merely delayed in the queue, not lost, causing the router to waste bandwidth forwarding duplicate packets.

What are 2 scenarios that could cause congestion?

There are two types of DNS messages: queries and replies. Both query and reply messages have the same format.

What are DNS messages?

streaming multimedia, internet telephony

What 2 internet applications use TCP or UDP?

remote file server NFS, network management SNMP, routing protocol RIP, DNS

What 4 internet applications typically use UDP?

SMTP, telnet, HTTP, FTP

What 4 internet applications use TCP as their transport protocol?

We mentioned above that an HTTP server is stateless. This simplifies server design and has permitted engineers to develop high-performance Web servers that can handle thousands of simultaneous TCP connections. However, it is often desirable for a Web site to identify users, either because the server wishes to restrict user access or because it wants to serve content as a function of the user identity. For these purposes, HTTP uses cookies. Cookies, defined in [RFC 6265], allow sites to keep track of users. Most major commercial Web sites use cookies today.

What are cookies?

http://mediaplayer.pearsoncmg.com/_ph_cc_ecs_set.title.Distributed_Hash_Tables_(DHTs)_(Chapter_2)__/aw/streaming/ecs_kurose_compnetw_6/DHT.m4v A distributed hash table (DHT) spreads (key, value) pairs over the peers of a P2P network: each peer and each key is assigned an identifier, and a pair is stored at the peer whose identifier is closest to the key. In a circular DHT, peers are organized into a ring in which each peer tracks its immediate successor (and possibly a few shortcut peers); a query is forwarded around the ring until it reaches the peer responsible for the key.

What are distributed hash tables and how does circular DHT work?

█user agent: mail client █mail server: authenticates client and sends and receives messages █SMTP: application layer-protocol for sending mail.

What are the 3 components of internet email?

cookie technology has four components: (1) a cookie header line in the HTTP response message; (2) a cookie header line in the HTTP request message; (3) a cookie file kept on the user's end system and managed by the user's browser; and (4) a back-end database at the Web site

What are the 4 components of a cookie?

One of the most compelling features of P2P architectures is their self-scalability. For example, in a P2P file-sharing application, although each peer generates workload by requesting files, each peer also adds service capacity to the system by distributing files to other peers. P2P architectures are also cost effective, since they normally don't require significant server infrastructure and server bandwidth (in contrast with client-server designs with datacenters).

What are the advantages of P2P architecture?

█Finer application-level control over what data is sent, and when: no delay or throttling. █No connection establishment: no delay in establishing a connection. █No connection state: can support many more active clients. █Small packet header overhead: The TCP segment has 20 bytes of header overhead in every segment, whereas UDP has only 8 bytes of overhead.

What are the advantages of using UDP over TCP?

ISPs, companies, and universities usually invest in Web caching. Web caching reduces the response time for clients and can substantially reduce traffic on a network, which in turn reduces bandwidth costs.

What are the advantages of web caching and who usually deploys it?

As long as the layer provides the same service to the layer above it, and uses the same services from the layer below it, the remainder of the system remains unchanged when a layer's implementation is changed.

What are the benefits of layering architecture?

In a client-server architecture, there is an always-on host, called the server, which services requests from many other hosts, called clients. Clients do not communicate directly with each other, and the server has a fixed IP address. Common applications: Web servers, FTP, email. In a P2P architecture, there is minimal (or no) reliance on dedicated servers in data centers. Instead, the application exploits direct communication between pairs of intermittently connected hosts, called peers. Common applications: BitTorrent, Skype, IPTV.

What are the differences between client-servers vs P2P architecture and common applications that use them?

█ Flow control is controlled by the receiving side. It ensures that the sender only sends what the receiver can handle. Think of a situation where someone with a fast fiber connection is sending to someone on dialup. The sender has the ability to send packets very quickly, but that would be useless to the receiver on dialup, so there must be a way to throttle what the sending side can send. Flow control deals with the mechanisms available to ensure that this communication goes smoothly. █ Congestion control is a method of ensuring that everyone across a network has a "fair" amount of access to network resources at any given time. In a mixed-network environment, everyone needs to be able to assume the same general level of performance. A common scenario to help understand this is an office LAN: a number of LAN segments each do their own thing within the LAN, but they may all need to go out over a WAN link that is slower than the constituent LAN segments. Picture having 100 Mbps connections within the LAN that ultimately go out through a 5 Mbps WAN link. Some kind of congestion control would need to be in place there to ensure there are no issues across the greater network.

What are the differences between flow control and congestion control?

In client-server file distribution, the server must send a copy of the file to each of the peers—placing an enormous burden on the server and consuming a large amount of server bandwidth. In P2P file distribution, each peer can redistribute any portion of the file it has received to any other peers, thereby assisting the server in the distribution process.

What are the differences between server-client and P2P file distribution?

There is virtually no difference between wireline and wireless network communications from the network layer or above. At the link layer, wireless network communications experience service degradations that wireline connections do not. These include: █ path loss and shadow fading (which decrease the signal strength as the signal travels over a distance and around/through obstructing objects) █ multipath fading (due to signal reflection off of interfering objects) █ interference (due to other transmissions and electromagnetic signals).

What are the differences between wired and wireless connections and what 3 service degradation problems does wireless face?

█A record: maps a hostname to an IP address. █NS record: maps a domain to the hostname of an authoritative DNS server for that domain. █CNAME record: maps an alias hostname to its canonical hostname; as an example, (foo.com, relay1.bar.foo.com, CNAME) is a CNAME record. █MX record: maps a domain to the canonical name of its mail server.

What are the different DNS record types?

█The sequence number field and the acknowledgment number field are used by the TCP sender and receiver in implementing a reliable data transfer service, as discussed below. █ receive window: used for flow control. █ header length: specifies the length of the TCP header in 32-bit words. The TCP header can be of variable length due to the TCP options field. █ Source port █ Destination port █ Checksum █ Urgent data: The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer field. █ Options: used when a sender and receiver negotiate the maximum segment size (MSS) or as a window scaling factor for use in high-speed networks.

What are the fields of a TCP segment?

In RDT packet sequences alternate between 0 and 1 which is why they are called an alternating-bit protocol

What is an Alternating-bit protocol?

█200 OK: request succeeded; requested object later in this message █301 Moved Permanently: requested object moved; new location specified later in this message (Location: header) █400 Bad Request: request message not understood by server █404 Not Found: requested document not found on this server █505 HTTP Version Not Supported

What are the http response status codes?

█ Application: supporting network applications - FTP, SMTP, HTTP █ Transport: host-host data transfer - TCP, UDP █ Network: routing of datagrams from source to destination - IP, routing protocols █ Link: data transfer between neighboring network elements - PPP, Ethernet █ Physical: bits "on the wire", "over the air"

What are the layers in the TCP stack model and what are they each responsible for?

POP3, IMAP, HTTP. They are necessary because obtaining messages requires a pull operation and SMTP is a push operation.

What are the mail retrieval protocols and why are they necessary?

█Application █Presentation █Session █Transport █Network █Link █Physical █The role of the presentation layer is to provide services that allow communicating applications to interpret the meaning of data exchanged. █The session layer provides for delimiting and synchronization of data exchange, including the means to build a checkpointing and recovery scheme. █In TCP these roles are left up to the application developer to implement if they need the services.

What are the seven layers of the OSI model and what do the extra layers that TCP lacks do?

█End-end congestion control: -No explicit feedback from network -Congestion inferred from end-system observed loss, delay -Approach taken by TCP █Network-assisted congestion control: -Routers provide feedback to end systems -Single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) -Explicit rate sender should send at

What are the two broad approaches to congestion control?

source port #, destination port #, application data

What important fields does a transport layer segment have?

In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when available) without waiting for an acknowledgment, but is constrained to have no more than some maximum allowable number, N, of unacknowledged packets in the pipeline.
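The sender-side bookkeeping described above (window limit, cumulative ACKs, retransmit-everything-on-timeout) can be sketched as follows; the class and method names are hypothetical, and the window size N = 4 is an assumed example:

```python
class GBNSender:
    """Minimal Go-Back-N sender window logic (bookkeeping only)."""

    def __init__(self, N):
        self.N = N            # max unacknowledged packets in the pipeline
        self.base = 0         # oldest unacknowledged sequence number
        self.nextseq = 0      # next sequence number to use

    def can_send(self):
        return self.nextseq < self.base + self.N

    def send(self):
        assert self.can_send()
        seq = self.nextseq
        self.nextseq += 1
        return seq

    def on_ack(self, acknum):
        # Cumulative ACK: everything up to and including acknum is received.
        self.base = max(self.base, acknum + 1)

    def on_timeout(self):
        # Go back N: resend every unacknowledged packet.
        return list(range(self.base, self.nextseq))

s = GBNSender(N=4)
sent = [s.send() for _ in range(4)]   # window fills: packets 0..3 in flight
s.on_ack(1)                           # cumulative ACK for packets 0 and 1
print(s.can_send(), s.on_timeout())   # True [2, 3]
```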

What is Go-Back-N?

The checksum is used to determine whether bits within the UDP segment have been altered in transit. All the 16-bit words in the segment are added together (with any overflow carry wrapped around), and the 1s complement of the sum becomes the checksum. The receiver then adds the checksum to the sum of the words; if the result is all 1s, no error was detected.
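The summation above can be sketched as follows; the 16-bit words are assumed example values, not a real segment:

```python
def udp_checksum(words):
    """1s-complement sum of 16-bit words, with wraparound carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF                        # 1s complement of the sum

def verify(words, checksum):
    """Receiver side: words + checksum should sum to all 1s (0xFFFF)."""
    total = checksum
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF

words = [0x4500, 0x0073, 0x0000]
c = udp_checksum(words)
print(verify(words, c))  # True
```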

What is UDP checksum and how does it work?

UDP is a no-frills, lightweight transport protocol, providing minimal services. UDP is connectionless, so there is no handshaking before the two processes start to communicate. UDP provides an unreliable data transfer service—that is, when a process sends a message into a UDP socket, UDP provides no guarantee that the message will ever reach the receiving process. Furthermore, messages that do arrive at the receiving process may arrive out of order. UDP does not include a congestion-control mechanism, so the sending side of UDP can pump data into the layer below (the network layer) at any rate it pleases. (Note, however, that the actual end-to-end throughput may be less than this rate due to the limited transmission capacity of intervening links or due to congestion).

What is UDP?

Each torrent has an infrastructure node called a tracker. When a peer joins a torrent, it registers itself with the tracker and periodically informs the tracker that it is still in the torrent. In this manner, the tracker keeps track of the peers that are participating in the torrent. A given torrent may have fewer than ten or more than a thousand peers participating at any instant of time.

What is a BitTorrent tracker?

a mechanism that allows a cache to verify that its objects are up to date. An HTTP request message is a so-called conditional GET message if (1) the request message uses the GET method and (2) the request message includes an If-Modified-Since: header line.

What is a conditional GET?

Non-persistent: each request/response pair is sent over a separate TCP connection. For example, an HTML file with 10 image objects would require a total of 11 TCP connections. The connections can be parallel to shorten the response time. Fetching an object takes 2 RTTs: one RTT to establish the TCP connection and one RTT to request and receive the object. http://www.networkinginfoblog.com/contentsimages/Back-of-the-envelope%20calculation%20for%20the%20time%20needed%20to%20request%20and%20receive%20an%20HTML%20file.JPG Persistent: With persistent connections, the server leaves the TCP connection open after sending a response. Subsequent requests and responses between the same client and server can be sent over the same connection. In particular, an entire Web page (in the example above, the base HTML file and the 10 images) can be sent over a single persistent TCP connection.
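The RTT accounting above can be sketched as follows (serial fetches assumed: no parallel connections and no pipelining):

```python
def nonpersistent_rtts(num_objects):
    """Non-persistent HTTP, serial: 2 RTTs per object
    (1 to open a fresh TCP connection, 1 to request/receive the object)."""
    return 2 * num_objects

def persistent_rtts(num_objects):
    """Persistent HTTP without pipelining: 1 RTT to open the connection,
    then 1 RTT per object over the same connection."""
    return 1 + num_objects

# The card's example: base HTML file + 10 images = 11 objects
print(nonpersistent_rtts(11), persistent_rtts(11))  # 22 12
```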

What is a persistent and non-persistent connection?

An address for a network application process

What is a port number?

A protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event.

What is a protocol?

most applications consist of pairs of communicating processes, with the two processes in each pair sending messages to each other. Any message sent from one process to another must go through the underlying network. A process sends messages into, and receives messages from, the network through a software interface called a socket. A socket is the interface between the application layer and the transport layer within a host.

What is a socket?

A transport-layer protocol provides for logical communication between application processes running on different hosts. By logical communication, we mean that from an application's perspective, it is as if the hosts running the processes were directly connected; in reality, the hosts may be on opposite sides of the planet, connected via numerous routers and a wide range of link types. Application processes use the logical communication provided by the transport layer to send messages to each other, free from the worry of the details of the physical infrastructure used to carry these messages. transport-layer protocols are implemented in the end systems but not in network routers.

What is a transport layer service?

too many sources sending too much data too fast for network to handle. Problems: Lost packets (buffer overflow at routers) Long delays (queueing in router buffers)

What is congestion and what problems does it cause?

In TCP Reno, when a loss is detected via three duplicate ACKs, the threshold is cut in half but the sender does not go back to slow start; instead it resumes congestion avoidance. (After a timeout, it still returns to slow start.)

What is fast recovery?

let's consider how a receiving host directs an incoming transport-layer segment to the appropriate socket. Each transport-layer segment has a set of fields in the segment for this purpose. At the receiving end, the transport layer examines these fields to identify the receiving socket and then directs the segment to that socket. This job of delivering the data in a transport-layer segment to the correct socket is called demultiplexing. The job of gathering data chunks at the source host from different sockets, encapsulating each data chunk with header information (that will later be used in demultiplexing) to create segments, and passing the segments to the network layer is called multiplexing

What is multiplexing and demultiplexing?
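A toy model of UDP-style demultiplexing may help: the transport layer uses the destination port carried in each segment's header to pick the receiving socket. The port numbers and payloads below are invented for the example:

```python
# Hypothetical ports -> per-socket receive queues.
sockets = {5000: [], 6000: []}

def demultiplex(segment):
    # Demultiplexing: examine the header field (dest port) and deliver
    # the payload to the matching socket's queue.
    dest_port, payload = segment
    sockets[dest_port].append(payload)

# Multiplexing on the sender side: each data chunk was wrapped with
# header information (here, just the destination port) to form a segment.
segments = [(5000, b"for app A"), (6000, b"for app B"), (5000, b"more for A")]
for seg in segments:
    demultiplex(seg)

print(sockets[5000])  # [b'for app A', b'more for A']
print(sockets[6000])  # [b'for app B']
```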

The time required to examine the packet's header and determine where to direct the packet is part of the processing delay

What is processing delay?

Once a bit is pushed into the link, it needs to propagate to router B. The time required to propagate from the beginning of the link to router B is the propagation delay. The bit propagates at the propagation speed of the link, which depends on the physical medium (roughly 2×10^8 to 3×10^8 m/s for typical wired links).

What is propagation delay?
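Numerically, propagation delay is just distance divided by propagation speed; a quick sketch (the 1000 km distance is an invented example):

```python
def propagation_delay(distance_m, speed_mps=2e8):
    # Propagation speed depends on the physical medium; roughly
    # 2e8 to 3e8 m/s is typical, so 2e8 is used as a conservative default.
    return distance_m / speed_mps

# A 1000 km link: 1e6 m / 2e8 m/s = 0.005 s = 5 ms of propagation delay.
print(propagation_delay(1_000_000))  # 0.005
```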

At the queue, the packet experiences a queuing delay as it waits to be transmitted onto the link. The length of the queuing delay of a specific packet will depend on the number of earlier-arriving packets that are queued and waiting for transmission onto the link.

What is queuing delay?

Selective-repeat protocols avoid unnecessary retransmissions by having the sender retransmit only those packets that it suspects were received in error (that is, were lost or corrupted) at the receiver. For example, if the window size is 1000 and the first packet in the window is lost, GBN could resend all 1000 packets in the window, whereas selective repeat resends only the single faulty one.

What is selective repeat?
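The retransmission-count contrast can be sketched with a simplified model (one lost packet, no lost ACKs; the function and its names are invented for illustration):

```python
def retransmissions(window, lost_index, protocol):
    """Packets resent after a single loss at 0-based position lost_index
    within the current window, under a simplified one-loss model."""
    if protocol == "GBN":
        # Go-Back-N resends the lost packet and everything after it
        # in the window, since out-of-order packets are discarded.
        return window - lost_index
    if protocol == "SR":
        # Selective repeat resends only the one packet suspected lost.
        return 1

print(retransmissions(1000, 0, "GBN"))  # 1000
print(retransmissions(1000, 0, "SR"))   # 1
```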

Store-and-forward transmission means that the packet switch must receive the entire packet before it can begin to transmit the first bit of the packet onto the outbound link.

What is store-and-forward transmission?

L = packet bits R = transmission rate bits/sec The source begins to transmit at time 0; at time L/R seconds, the source has transmitted the entire packet, and the entire packet has been received and stored at the router (since there is no propagation delay). At time L/R seconds, since the router has just received the entire packet, it can begin to transmit the packet onto the outbound link towards the destination; at time 2L/R, the router has transmitted the entire packet, and the entire packet has been received by the destination. Thus, the total delay is 2L/R.

What is the delay in store-and-forward transmission between a packet switch (router) and two end systems?
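The 2L/R result generalizes: with store-and-forward over N links of rate R (and propagation delay ignored), the end-to-end delay is N·L/R. A small sketch with invented numbers:

```python
def store_and_forward_delay(L_bits, R_bps, links=2):
    # Each hop must receive the entire packet (L/R seconds) before it can
    # begin transmitting onto the next link; propagation delay is ignored.
    return links * L_bits / R_bps

# One router between source and destination means 2 links: total delay 2L/R.
# Example: an 8000-bit packet over 1 Mbps links -> 2 * 0.008 = 0.016 s.
print(store_and_forward_delay(L_bits=8_000, R_bps=1_000_000))  # 0.016
```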

A recursive query asks a DNS server to obtain the mapping on its behalf: http://www.networkinginfoblog.com/contentsimages/Recursive%20queries%20in%20DNS.JPG An iterative query gets a direct reply from each DNS server in the hierarchy, with the querying host contacting each server in turn: http://userpages.umbc.edu/~dgorin1/451/OSI7/dcomm/dns2.JPG

What is the difference between Iterative and recursive DNS queries?
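A toy sketch of the iterative pattern, using an entirely made-up server hierarchy (a recursive query would instead push this whole referral-following loop onto the first server contacted):

```python
# Invented name hierarchy: root -> TLD -> authoritative server.
REFERRALS = {
    "root": {"com.": "tld_com"},
    "tld_com": {"example.com.": "auth_example"},
}
RECORDS = {"auth_example": {"www.example.com.": "93.184.216.34"}}

def iterative_resolve(name):
    """Iterative: the resolver itself contacts each server in the hierarchy,
    receiving a referral ('ask this server next') until one answers directly."""
    server = "root"
    while server not in RECORDS:
        zone = next(z for z in REFERRALS[server] if name.endswith(z))
        server = REFERRALS[server][zone]  # referral to the next server down
    return RECORDS[server][name]          # direct answer from authority

print(iterative_resolve("www.example.com."))  # 93.184.216.34
```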

Hosts and end systems are the same thing: the computers and other devices connected to the network, e.g., laptops, tablets, gaming consoles, smartphones, and televisions. Yes, a Web server is an end system.

What is the difference between a host and an end system? List several different types of end systems. Is a Web server an end system?

In circuit-switched networks, the resources needed along a path (buffers, link transmission rate) to provide for communication between the end systems are reserved for the duration of the communication session between the end systems. In packet-switched networks, these resources are not reserved; a session's messages use the resources on demand, and as a consequence, may have to wait (that is, queue) for access to a communication link.

What is the difference between a packet switching network and a circuit switching network?

Stop and wait: the sender sends one packet and then waits, sending no more data until it receives an ACK or NAK. Stop and wait can underutilize channel bandwidth because, on links with long delays, the sender sits idle while waiting. Pipelined: pipelining allows the sender to have multiple packets in flight before receiving an ACK; for example, allowing three unacknowledged packets triples the sender's channel utilization.

What is the difference between a stop and wait and pipelined protocol?
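Sender utilization makes the contrast quantitative: U = (L/R) / (RTT + L/R) for stop-and-wait, and pipelining with n packets in flight multiplies this by n (while the window is too small to fill the pipe). A sketch with textbook-style invented numbers:

```python
def utilization(L_bits, R_bps, rtt_s, in_flight=1):
    """Fraction of time the sender is actually transmitting.
    in_flight=1 models stop-and-wait; larger values model pipelining,
    valid only while the window doesn't yet fill the whole pipe."""
    t_trans = L_bits / R_bps
    return in_flight * t_trans / (rtt_s + t_trans)

# 1000-byte packets, 1 Gbps link, 30 ms RTT:
print(utilization(8_000, 1e9, 0.030))               # tiny (~0.00027)
print(utilization(8_000, 1e9, 0.030, in_flight=3))  # exactly three times larger
```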

a transport-layer protocol provides logical communication between processes running on different hosts, a network-layer protocol provides logical communication between hosts

What is the relationship between transport and network layers?

Throughput is the rate at which bits are transferred between sender and receiver. It depends on the transmission rates of the links over which the data flows; i.e., if a server uploads a file at 2 Mbps but an intermediate link has a capacity of 1 Mbps, the data arrives at 1 Mbps. If 10 servers each upload at 1 Mbps and share a 5 Mbps link, the throughput is 500 kbps for each server.

What is throughput and how does it affect computer networks?
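Both cases reduce to "the bottleneck link wins"; a quick sketch with the numbers from the card:

```python
def end_to_end_throughput(link_rates_bps):
    # End-to-end throughput is limited by the slowest link on the path.
    return min(link_rates_bps)

# Server can push 2 Mbps but the access link is only 1 Mbps:
print(end_to_end_throughput([2e6, 1e6]))  # 1000000.0 (1 Mbps)

# 10 servers sharing a 5 Mbps link fairly: each is limited by
# min(its own 1 Mbps link, its 5/10 Mbps share of the bottleneck).
per_server = min(1e6, 5e6 / 10)
print(per_server)  # 500000.0 (500 kbps each)
```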

Transmission delay is the time required to push (transmit) all of a packet's bits into the link: L/R seconds, where L is the packet length in bits and R is the link transmission rate in bits/sec. Assuming that packets are transmitted in a first-come-first-served manner, as is common in packet-switched networks, a packet can be transmitted only after all the packets that have arrived before it have been transmitted.

What is transmission delay?

Because a mail client isn't always online. The sending side runs an always-on SMTP server so it can keep attempting to deliver messages; the receiving side runs an always-on SMTP server so it can accept messages even when the recipient's mail client is offline.

Why is there an SMTP server on the sending and receiving end?

