CS6250 Computer Networks Exam 1
Describe the two types of multiplexing/demultiplexing.
1. Connectionless Multiplexing/Demultiplexing: uses a UDP socket, identified by a two-tuple consisting of the destination IP address and the destination port. The transport layer identifies the destination socket from the port number carried in the received datagram. 2. Connection-Oriented Multiplexing/Demultiplexing: uses a TCP socket, identified by a four-tuple consisting of the source IP address, source port, destination IP address, and destination port.
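As an illustration of the two demultiplexing keys, here is a minimal Python sketch; the dictionaries, addresses, and socket names are hypothetical and only stand in for what an operating system does internally.

```python
# Illustrative sketch: how UDP and TCP demultiplexing keys differ.
# All names and addresses here are hypothetical; a real OS kernel does this internally.

# UDP: a socket is identified by a two-tuple (destination IP, destination port).
udp_sockets = {
    ("192.0.2.10", 53): "udp_socket_A",   # e.g. a DNS server socket
}

# TCP: a socket is identified by a four-tuple
# (source IP, source port, destination IP, destination port).
tcp_sockets = {
    ("198.51.100.7", 40321, "192.0.2.10", 80): "tcp_socket_B",
}

def demux_udp(dst_ip, dst_port):
    # Every UDP datagram to this (IP, port) lands on the same socket,
    # regardless of which host sent it.
    return udp_sockets.get((dst_ip, dst_port))

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    # Segments from different clients map to different sockets,
    # even if they target the same destination IP and port.
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))
```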
● What are four reasons for IXPs increased popularity?
1. IXPs are large interconnection hubs handling large traffic volumes. Some large IXPs handle as much traffic as Tier 1 ISPs.
2. They play an important role in mitigating DDoS attacks.
3. The "real world" infrastructures provide an excellent research playground for multiple applications.
4. IXPs are active marketplaces and technology innovation hubs, providing new services beyond interconnection, like DDoS mitigation or SDN-based services.
Explain the TCP Three-way Handshake.
3-way Handshake: Step 1: The TCP client sends a special segment (containing no data) with the SYN bit set to 1. The client also generates an initial sequence number (client_isn) and includes it in this special TCP SYN segment. Step 2: Upon receiving this packet, the server allocates the required resources for the connection and sends back the special 'connection-granted' segment, which we call a SYNACK. This segment has the SYN bit set to 1, the ack field set to client_isn+1, and a randomly chosen initial server sequence number in the sequence number field. Step 3: When the client receives the SYNACK segment, it also allocates a buffer and resources for the connection and sends an acknowledgment with the SYN bit set to 0.
What is a bridge, and how does it "learn"?
A bridge is a device with multiple inputs/outputs that transfers frames from an input to one (or multiple) outputs, though it does not need to forward every frame it receives. A learning bridge learns, populates, and maintains a forwarding table, and consults that table so that it forwards frames only on specific ports, rather than over all ports. So how does the bridge learn? Every frame the bridge receives is a "learning opportunity" to discover which hosts are reachable through which ports, because the bridge can see the source host of the frame and the port over which the frame arrived.
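A toy sketch of the learning and forwarding behavior described above; the frame representation, MAC strings, and port numbers are assumptions made for illustration.

```python
# Toy learning-bridge sketch. Frame representation and port numbering are
# assumptions for illustration; real bridges operate on Ethernet frames.

class LearningBridge:
    def __init__(self):
        self.table = {}  # source MAC address -> port it was seen on

    def receive(self, frame_src, frame_dst, in_port, all_ports):
        # Learning: the source address tells us which port reaches that host.
        self.table[frame_src] = in_port

        # Forwarding: if we know the destination, use only that port;
        # otherwise flood out every port except the one the frame came in on.
        if frame_dst in self.table:
            out = self.table[frame_dst]
            return [] if out == in_port else [out]
        return [p for p in all_ports if p != in_port]

bridge = LearningBridge()
print(bridge.receive("AA", "BB", in_port=1, all_ports=[1, 2, 3]))  # flood: [2, 3]
print(bridge.receive("BB", "AA", in_port=2, all_ports=[1, 2, 3]))  # learned: [1]
```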
What is the end-to-end (e2e) principle?
A design choice that shaped the current Internet architecture. It states that the network core should be simple and minimal, while the end systems should carry the intelligence. Network functions should be limited to simple, essential, commonly used functions so that any host can use the service, while more specialized functions should be built into the applications themselves. Lower-level layers should remain independent and free to perform only their designed function, while the higher-level layers deal with the more intricate functions specific to each application.
What is a distributed algorithm?
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in many varied application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. https://en.wikipedia.org/wiki/Distributed_algorithm
What are sockets?
A network socket is a software structure within a network node of a computer network that serves as an endpoint for sending and receiving data across the network. The structure and properties of a socket are defined by an application programming interface (API) for the networking architecture. Sockets are created only during the lifetime of a process of an application running in the node. (https://en.wikipedia.org/wiki/Network_socket) "A process sends messages into, and receives messages from, the network through a software interface called a socket. Let's consider an analogy to help us understand processes and sockets. A process is analogous to a house and its socket is analogous to its door....a socket is the interface between the application layer and the transport layer within a host." - Kurose and Ross, 2.1
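A minimal sketch using Python's standard socket module, showing a socket as the application's "door" to the network; the loopback address, port, and payload are arbitrary illustrative choices.

```python
# Minimal UDP echo pair using the standard socket API.
# The address, port, and message are arbitrary choices for illustration.
import socket

# Server side: bind a UDP socket and echo whatever arrives.
def run_server(host="127.0.0.1", port=9999):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((host, port))
        data, addr = srv.recvfrom(2048)   # blocks until a datagram arrives
        srv.sendto(data, addr)            # echo it back through the same socket

# Client side: the socket is the "door" the application sends through.
def run_client(host="127.0.0.1", port=9999):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
        cli.sendto(b"hello", (host, port))
        reply, _ = cli.recvfrom(2048)
        return reply
```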
● How does a route server work?
A route server: ● Collects and shares routing information from its peers or participants that connect with it (i.e., IXP members that connect to the RS). ● Executes its own BGP decision process and re-advertises the resulting information (i.e., the best route selection) to all of its peer routers. ● The sessions it maintains are also known as multi-lateral BGP peering sessions.
What are the ramifications of the hourglass shape of the internet?
A. Many technologies that were not originally designed for the internet have been modified so that they have versions that can communicate over the internet (such as Radio over IP). B. It has been a difficult and slow process to transition to IPv6, despite the shortage of public IPv4 addresses.
● How does an AS determine what rules to import/export?
AS business relationships drive an AS's routing policies and influence which routes an AS imports or exports. The three types of routes an AS handles selectively are transit customer routes, transit provider routes, and peer routes, and the main rule is financial incentive: customer routes > peer routes > transit provider routes. As with exporting, ASes are selective about which routes to import based primarily on which neighboring AS advertises them and what type of business relationship is established. An AS receives route advertisements from its customers, providers, and peers. When an AS receives multiple route advertisements towards the same destination from multiple ASes, it needs to rank the routes before selecting which one to import. Customer routes are preferred first, then peer routes, and finally provider routes. The reasoning behind this ranking is that an AS... 1. wants to ensure that routes towards its customers do not traverse other ASes, unnecessarily generating costs, 2. uses routes learned from peers since these are usually "free" (under the peering agreement), 3. and finally resorts to importing routes learned from providers, as these will add to costs.
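A sketch of the customer > peer > provider import ranking, under the assumption that each candidate route is tagged with the relationship of the neighbor it was learned from; the route dictionaries, prefix, and AS numbers are made up for illustration.

```python
# Sketch of the import-ranking rule described above: customer routes are
# preferred over peer routes, which are preferred over provider routes.
# The route representation (dicts with a "learned_from" field) is an
# assumption for illustration, not real BGP syntax.

PREFERENCE = {"customer": 0, "peer": 1, "provider": 2}  # lower = preferred

def best_route(routes):
    return min(routes, key=lambda r: PREFERENCE[r["learned_from"]])

routes_to_prefix = [
    {"prefix": "203.0.113.0/24", "learned_from": "provider", "as_path": [3356, 65010]},
    {"prefix": "203.0.113.0/24", "learned_from": "peer",     "as_path": [65020, 65010]},
    {"prefix": "203.0.113.0/24", "learned_from": "customer", "as_path": [65010]},
]
print(best_route(routes_to_prefix)["learned_from"])  # -> "customer"
```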
● What is an AS?
AS stands for autonomous system. ISPs, IXPs, and CDNs can all operate as an AS. An AS is a group of routers, including the links among them, that operate under the same administrative authority.
What is the main idea behind a link state routing algorithm?
The link state routing algorithm is also called Dijkstra's algorithm. In link state routing, the link costs and the network topology are known to all nodes (for example, by broadcasting these values).
● What kind of relationship does AS have with other parties?
An AS is independent of other ASes. Each AS sets its own policies, makes its own traffic engineering decisions and interconnection strategies, and determines how traffic leaves and enters the network.
What is Automatic Repeat Request or ARQ?
An error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a packet) and timeouts (specified periods of time allowed to elapse before an acknowledgment is to be received) to achieve reliable data transmission over an unreliable communication channel. If the sender does not receive an acknowledgment before the timeout, it usually re-transmits the packet until the sender receives an acknowledgment or exceeds a predefined number of retransmissions. https://en.wikipedia.org/wiki/Automatic_repeat_request
What is the EvoArch model?
An hourglass shaped model of the Internet where the outer bands are more frequently modified or replaced and the further in you go the harder it is for that layer to be altered or modified.
Describe each layer of the OSI model.
Application layer: Service, interface, and protocol. Ex: turn on your smartphone and look at the list of apps (HTTP, SMTP, FTP, DNS).
Presentation layer: Plays the intermediate role of formatting the information received from the layer below and delivering it to the application layer. Ex: converting big endian to little endian.
Session layer: Responsible for the mechanism that manages the different transport streams that belong to the same session between end-user and application process. Ex: in a teleconference app, it is responsible for tying together the audio and video streams.
Transport layer: Responsible for the end-to-end communication between end hosts, using the two transport protocols, TCP and UDP. TCP offers a connection-oriented service to the applications running on the layer above, guaranteed delivery of application-layer messages, flow control, and a congestion control mechanism. UDP provides a connectionless, best-effort service to the applications running in the layer above, without reliability, flow, or congestion control. In this layer the packet is called a segment.
Network layer: Responsible for moving the packet of information, called a datagram, from one host to another, and for delivering the datagram to the transport layer on the destination host. In this layer there are the IP protocol and the routing tables.
Data link layer: Packets are referred to as frames. Examples include Ethernet, PPP, and WiFi. Responsible for moving frames from one node (host or router) to the next node across a single link. Services offered by a data link layer protocol may include reliable delivery of the data across that one link.
Physical layer: The actual hardware responsible for transferring the bits within a frame between two nodes connected through a physical link. Ex: Ethernet over twisted-pair copper, coax, or fiber optics.
Provide examples of popular protocols at each layer of the five-layered Internet model.
Application: NFS, DNS, SNMP, FTP, rcp, telnet, HTTP
Transport: TCP, UDP
Internet: IP, ARP, ICMP
Data Link: PPP, IEEE 802.2, Ethernet
Physical Network: Token Ring, RS-232
● What are the basics of BGP?
BGP sessions: A pair of routers, known as BGP peers, exchange routing information over a semi-permanent TCP connection called a BGP session. To begin a BGP session, a router sends an OPEN message to another router. Then the sending and receiving routers send each other announcements from their individual routing tables; depending on the number of routes being exchanged, this can take from seconds up to several minutes. A BGP session between a pair of routers in two different ASes is called an external BGP (eBGP) session, and a BGP session between routers that belong to the same AS is called an internal BGP (iBGP) session.
BGP messages: After a session is established between BGP peers, the peers can exchange BGP messages to provide reachability information and enforce routing policies. There are two types of BGP messages: 1. UPDATE: Announcements advertise new routes and updates to existing routes, and include several standardized attributes. Withdrawals are sent when a previously announced route is removed, either due to some failure or due to a change in routing policy. 2. KEEPALIVE: These messages are exchanged to keep a current session going.
BGP prefix reachability: In the BGP protocol, destinations are represented by IP prefixes. Each prefix represents a subnet or a collection of subnets that an AS can reach. Gateway routers running eBGP advertise the IP prefixes they can reach, according to the AS's export policy, to routers in neighboring ASes. Then, using separate iBGP sessions, the gateway routers disseminate these routes to internal routers according to the AS's import policy. Internal routers run iBGP to propagate the routes to other internal iBGP-speaking routers.
Path attributes and BGP routes: In addition to the reachable IP prefix field, advertised BGP routes consist of a number of BGP attributes. Two notable attributes are AS-PATH and NEXT-HOP. AS-PATH: Each AS that the route passes through, identified by its autonomous system number (ASN), is included in the AS-PATH. This attribute is used to prevent loops and to choose between multiple routes to the same destination, preferring the route with the shortest path. NEXT-HOP: This attribute refers to the IP address (interface) of the next-hop router along the path towards the destination. Internal routers use this field to store the IP address of the border router, since they forward all traffic bound for external destinations through it. If there is more than one such router on the network and each advertises a path to the same external destination, NEXT-HOP allows the internal router to store in the forwarding table the best path according to the AS routing policy.
● What is BGP?
BGP stands for Border Gateway Protocol. The border routers of ASes use BGP to exchange routing information with each other.
● Explain how TCP CUBIC works.
CUBIC uses a cubic polynomial as its window growth function. To maintain TCP fairness it uses multiplicative decrease, reducing the window by half on a loss. TCP CUBIC is fair regardless of RTT because its window calculation depends on the time elapsed since the last congestion event, rather than on the RTT of the connection.
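A sketch of CUBIC's growth curve under the standard formulation W(t) = C*(t − K)^3 + W_max; the constants C and beta here follow commonly cited defaults (beta ≈ 0.7 rather than the halving mentioned above) and are illustrative only.

```python
# Hedged sketch of the CUBIC window growth curve, W(t) = C*(t - K)^3 + W_max,
# where t is the time elapsed since the last congestion event (not the RTT).
# C and beta are tunable constants; the values here are common defaults and
# purely illustrative.

def cubic_window(t, w_max, beta=0.7, C=0.4):
    # K is the time it takes to grow back to w_max after the window was
    # reduced to beta * w_max at the last loss event.
    K = ((w_max * (1 - beta)) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

# The curve is concave until it reaches w_max, then convex (probing for more).
for t in range(0, 11, 2):
    print(t, round(cubic_window(t, w_max=100), 1))
```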
● How does poison reverse solve the count-to-infinity problem?
This is called poison reverse: since z reaches x through y, z will advertise to y that its distance to x is infinity (Dz(x) = infinity), even though z knows this is not true and that Dz(x) = 5. z keeps telling this "lie" to y as long as it routes to x via y. Since y then assumes that z has no path to x except via y, it will never send packets to x via z, which breaks the loop. In other words, z poisons the route to x that it advertises to y.
What is congestion control?
Congestion control limits each sender's transmission rate in order to protect the network from congestion, avoiding long queues and packet drops.
● What is end-to-end congestion control?
In end-to-end congestion control, the network does not provide any explicit feedback about congestion to the end hosts. Instead, the hosts infer congestion from the network's behavior and adapt their transmission rate accordingly. TCP ended up using this end-to-end approach, which largely aligns with the end-to-end principle adopted in the design of the network: congestion control is a primitive provided in the transport layer, whereas routers operate at the network layer, so the feature resides in the end nodes with no support from the network. Note that this is no longer strictly true, as certain routers in modern networks can provide explicit feedback to the end hosts by using protocols such as ECN and QCN.
● Walk through an example of the distance vector algorithm.
Each node x updates its own distance vector using the Bellman-Ford equation: Dx(y) = minv{c(x,v) + Dv(y)} for each destination node y in the network. Node x computes the least cost to reach destination y by considering its options for reaching y through each of its neighbors v: it takes the cost to reach neighbor v and adds that neighbor's least cost to the final destination y. It calculates this quantity over all neighbors v and takes the minimum.
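A sketch of a single Bellman-Ford update at node x; the three-node topology and link costs below are invented for illustration.

```python
# Sketch of one distance-vector update at node x using the Bellman-Ford
# equation Dx(y) = min over neighbors v of { c(x, v) + Dv(y) }.
# The tiny topology and cost values are made up for illustration.

INF = float("inf")

def update_distance_vector(costs_from_x, neighbor_vectors, destinations):
    dx = {}
    for y in destinations:
        # Consider reaching y through each neighbor v and keep the minimum.
        dx[y] = min(
            costs_from_x[v] + neighbor_vectors[v].get(y, INF)
            for v in costs_from_x
        )
    return dx

# x's link costs to its neighbors, and the vectors those neighbors advertised.
costs_from_x = {"y": 2, "z": 7}
neighbor_vectors = {
    "y": {"x": 2, "y": 0, "z": 1},
    "z": {"x": 7, "y": 1, "z": 0},
}
print(update_distance_vector(costs_from_x, neighbor_vectors, ["y", "z"]))
# -> {'y': 2, 'z': 3}
```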
What are advantages and disadvantages of a layered architecture?
Each protocol layer offers different services. Some advantages are scalability, flexibility, and ease of adding or removing components, making cost-effective implementations easier. Disadvantages include: some layers' functionality depends on information from other layers, which violates the goal of layer separation; one layer may duplicate lower-layer functionality; and the abstraction barriers between layers cause overhead both in computation and in message headers.
● What are the goals of congestion control?
Efficiency: We should get high throughput; the utilization of the network should be high, matching the load to the available capacity.
Fairness: Each user should get its fair share of the network bandwidth. The notion of fairness depends on the network policy; in this context we assume that every flow sharing the same bottleneck link should get equal bandwidth.
Low delay: In theory, it is possible to design protocols that have consistently high throughput by assuming infinite buffers: we could just keep sending packets into the network, and they would be stored in buffers and eventually delivered. However, this leads to long queues in the network and therefore long delays, and applications that are sensitive to network delay, such as video conferencing, would suffer. Thus, we want network delays to be small.
Fast convergence: A flow should converge to its fair allocation quickly. This is important because a typical network workload is composed of many short flows and few long flows; if convergence to the fair share is not fast enough, the network will still be unfair to these short flows.
What is encapsulation, and how is it used in a layered model?
Encapsulation is the process by which each layer adds its own control information, called a header, to the data it receives from the layer above as the packet travels down the stack; the receiving host then strips these headers layer by layer (de-encapsulation). The headers carry the information each layer needs, such as addressing, to get the packet to the correct destination host.
Explain a round in the EvoArch model.
EvoArch is a discrete-time model that is executed over rounds. At each round, we perform the following steps:
A) We introduce new nodes, and we place them randomly at layers.
B) We examine all layers, from the top to the bottom, and we perform the following tasks:
1) We connect the new nodes that we may have just introduced to that layer, by choosing substrates based on the generality probabilities of the layer below, s(l−1), and by choosing products for them based on the generality probability of the current layer, s(l).
2) We update the value of each node at each layer l, given that we may have new nodes added to the same layer l.
3) We examine all nodes, in order of decreasing value in that layer, and remove the nodes that should die.
C) Finally, we stop the execution of the model when the network reaches a given number of nodes.
What is flow control and why do we need to control it?
Flow control is TCP's rate control mechanism that helps match the sender's rate against the receiver's rate of reading the data. The sending host maintains a "receive window" which provides the sender an idea of how much data the receiver can handle at that moment. "TCP provides a flow-control service to its applications to eliminate the possibility of the sender overflowing the receiver's buffer. Flow control is thus a speed matching service—matching the rate at which the sender is sending against the rate at which the receiving application is reading." - Kurose 3.5.5
What is hot potato routing?
Hot potato routing is the practice of choosing a path within the network by selecting the closest egress point (network exit) based on the intra-domain path cost (Interior Gateway Protocol/IGP cost). Hot potato routing simplifies computations for the routers, as they already know the IGP path costs. It keeps the path consistent, since the next router in the path will also choose to send the packet to the same egress point, and it effectively reduces the network's resource consumption by getting the traffic out of the network as soon as possible.
● What is an IXP?
IXPs are physical infrastructures that provide the means for ASes to interconnect and directly exchange traffic with one another. The ASes that interconnect at an IXP are called participant ASes. The physical infrastructure of an IXP is usually a network of switches that are located either in the same physical location, or they can be distributed over a region or even at a global scale.
● Which services do IXPs provide?
IXPs provide: 1. Public Peering 2. Private Peering 3. Route Servers and Service Level Agreements (SLAs) 4. Remote peering through resellers 5. Mobile Peering 6. DDoS Blackholing 7. Free value-add services: Extra things for the "good of the internet" like Consumer broadband tests, DNS root name servers and distribution of local time through NTP
● Is TCP fair in the case where two connections have the same RTT? Explain. Different RTT?
In TCP, fairness means that for k connections passing through one common link with capacity R bps, each connection gets an average throughput of R/k. If two connections have the same RTT, TCP is approximately fair: with AIMD the two congestion windows converge towards an equal share, so each connection ends up with about R/2. Since TCP relies on acknowledgements of received packets, the RTT affects how quickly a connection can grow its window. If the RTTs are different, the connection with the smaller RTT increases its congestion window faster than the one with the longer RTT, which leads to unequal sharing of the bandwidth, so TCP is not fair in that case.
● What is the computational complexity of the link state routing algorithm?
In other words, in the worst case, how many computations are needed to find the least-cost paths from the source to all destinations in the network? In the first iteration, we need to search through all n nodes to find the node with the minimum path cost. In the second iteration we search through (n−1) nodes, and this number keeps decreasing at every step, so by the end of the algorithm, after all iterations, we will have searched through n(n+1)/2 nodes in total. Thus the complexity of the algorithm is on the order of n squared, O(n^2).
● What is the difference between iBGP and eBGP?
In the previous topic we saw that we have two flavors of BGP: eBGP (for sessions between border routers of neighboring ASes) and iBGP (for sessions between internal routers of the same AS). Both protocols are used to disseminate routes for external destinations. The eBGP speaking routers learn routes to external prefixes and they disseminate them to all routers within the AS. This dissemination is happening with iBGP sessions.
● What is network-assisted congestion control?
In network-assisted congestion control, we rely on the network layer to provide explicit feedback to the sender about congestion in the network. For instance, routers could use an ICMP source quench message to notify the source that the network is congested. However, under severe congestion even the ICMP packets can be lost, rendering the network feedback ineffective.
● Walk through an example of the link state routing algorithm.
Initialization:
  N' = {u}   (the set initially contains only the source node u)
  for all nodes v:
    if v is a neighbor of u:
      D(v) = c(u,v)
    else:
      D(v) = ∞
Loop:
  find w not in N' such that D(w) is a minimum
  add w to N'
  update D(v) for each neighbor v of w that is NOT in N':
    D(v) = min( D(v), D(w) + c(w,v) )
    /* the new cost to v is either the old cost to v, or the known least path cost to w plus the cost from w to v */
until N' = N
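A compact Python version of the pseudocode above (using a heap in place of the linear minimum search); the example graph and its link costs are hypothetical.

```python
# A small Dijkstra implementation following the pseudocode above.
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: link_cost}}
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()                 # this plays the role of N'
    heap = [(0, source)]
    while heap:
        d, w = heapq.heappop(heap)  # w = node not in N' with minimum D(w)
        if w in visited:
            continue
        visited.add(w)
        for v, cost in graph[w].items():
            if d + cost < dist[v]:  # D(v) = min(D(v), D(w) + c(w, v))
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    "u": {"v": 2, "x": 1},
    "v": {"u": 2, "x": 3, "w": 3},
    "x": {"u": 1, "v": 3, "w": 1},
    "w": {"v": 3, "x": 1},
}
print(dijkstra(graph, "u"))  # least costs from u to every node
```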
● What is an example of a link state routing algorithm?
Link state routing consists of: Initialization step: the algorithm starts with the currently known least-cost paths from the source node (u) to its directly attached neighbors. Loop (iterations): a loop is executed until every other node (v) in the network has been covered. During each iteration we look at the set of nodes that are NOT yet included and identify the node (w) with the least-cost path from the previous iteration. Exit: the algorithm exits by returning the shortest paths and their costs from the source node to every other node in the network.
What is multiplexing, and why is it necessary?
Multiplexing is the ability of a host to run multiple applications that use the network simultaneously. It is necessary so a host can multi-task over the network: multiple applications share one IP address, with each application bound to its own port, allowing multiple apps to communicate with different servers using a single IP address.
What is Go-back-N?
Go-back-N addresses how the receiver notifies the sender of a missing segment: the receiver sends an ACK for the most recently received in-order packet and simply discards any out-of-order packets it receives. The sender then resends all packets from the most recently acknowledged in-order packet onward, even if some of them had been sent before.
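A sketch of the sender-side bookkeeping this implies; the sequence numbering, window size, and the small driver at the bottom are illustrative assumptions.

```python
# Simplified Go-Back-N sender bookkeeping: on timeout, everything from the
# oldest unacknowledged packet onward is resent. Packet contents and the
# window size are illustrative.

class GoBackNSender:
    def __init__(self, window_size=4):
        self.window_size = window_size
        self.base = 0          # oldest unacknowledged sequence number
        self.next_seq = 0      # next sequence number to use

    def can_send(self):
        return self.next_seq < self.base + self.window_size

    def send_new(self):
        seq = self.next_seq    # a packet with this sequence number goes out
        self.next_seq += 1
        return seq

    def on_ack(self, ack_num):
        # Cumulative ACK: everything up to and including ack_num is confirmed.
        self.base = max(self.base, ack_num + 1)

    def on_timeout(self):
        # Resend every packet from base up to (but not including) next_seq.
        return list(range(self.base, self.next_seq))

s = GoBackNSender()
for _ in range(4):
    s.send_new()
s.on_ack(1)                 # packets 0 and 1 acknowledged
print(s.on_timeout())       # -> [2, 3] are retransmitted
```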
What is the Open Shortest Path First (OSPF) protocol?
Open Shortest Path First (OSPF) is a routing protocol which uses a link state routing algorithm to find the best path between the source and the destination router. It is a link-state protocol that uses flooding of link-state information and a Dijkstra least-cost path algorithm. Advances include authentication of messages exchanged between routers, the option to use multiple same cost paths, and support for hierarchy within a single routing domain.
Repeaters, hubs, bridges, routers operate on which layers?
Repeaters and Hubs work over L1 (Physical Layer) Bridges and Layer 2-Switches work over L2 (Data link layer) Routers and Layer 3-Switches work over L3 (Network layer)
● What is a slow start in TCP?
Slow start is called "slow" despite using an exponential increase because the connection begins by sending only one packet, and the congestion window then doubles after each RTT.
● What is the difference between the forwarding and routing?
So by forwarding we refer to transferring a packet from an incoming link to an outgoing link within a single router. By routing we refer to how routers work together using routing protocols to determine the good paths (or good routes as we call them) over which the packets travel from the source to the destination node.
● Explain Additive Increase/Multiplicative Decrease (AIMD) in the context of TCP.
TCP decreases the window when the level of congestion goes up, by halving the window size, and increases the window when the level of congestion goes down, by adding to the window size. This causes convergence toward the optimal bandwidth by quickly cutting usage in times of congestion while slowly increasing utilization when the congestion clears. The idea behind additive increase is to increase the window by one packet every RTT (round-trip time). Once TCP Reno detects congestion, it reduces the rate at which the sender transmits: when the TCP sender detects that a timeout occurred, it sets the CongestionWindow (cwnd) to half of its previous value.
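A sketch of the AIMD rule in isolation; the window is counted in packets and the periodic "loss" is simulated, both purely for illustration.

```python
# Sketch of the AIMD rule described above: add one packet's worth of window
# per RTT, halve the window on a loss event. Units (packets) and the loss
# pattern are illustrative.

def aimd_step(cwnd, loss_detected):
    if loss_detected:
        return max(1.0, cwnd / 2.0)   # multiplicative decrease
    return cwnd + 1.0                 # additive increase (one packet per RTT)

cwnd = 1.0
for rtt in range(1, 13):
    loss = (rtt % 6 == 0)             # pretend a loss happens every 6th RTT
    cwnd = aimd_step(cwnd, loss)
    print(rtt, cwnd)                  # produces the familiar sawtooth pattern
```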
● How does a TCP sender limit the sending rate?
TCP uses a congestion window, which is similar to the receive window used for flow control. It represents the maximum amount of unacknowledged data that a sending host can have in transit (sent but not yet acknowledged). TCP uses a probe-and-adapt approach to adjust the congestion window: under regular conditions, TCP increases the congestion window, trying to reach the available throughput, and once it detects congestion, the congestion window is decreased. In the end, the amount of unacknowledged data a sender can have in transit is the minimum of the congestion window and the receive window.
Explain the TCP connection teardown.
Teardown: Step 1: When the client wants to end the connection, it sends a segment with the FIN bit set to 1 to the server. Step 2: The server acknowledges that it has received the connection-closing request (FIN-ACK) and is now working on closing the connection. Step 3: The server then sends a segment with the FIN bit set to 1, indicating that its side of the connection is closed. Step 4: The client sends an ACK for it to the server, and then waits for some time so it can resend this acknowledgment in case the first ACK segment is lost.
What is the main idea behind the distance vector routing algorithm?
The Distance Vector algorithm is based on the Bellman-Ford algorithm: each node sends its distance vector to its neighbors, which then update their own view of the network. It is iterative, looping until the neighbors have no new updates to send to each other. It is also asynchronous, meaning it does not require the nodes to be synchronized with each other (nodes are not forced to exchange the latest updates when they are not ready, while convergence is still ensured). Finally, it is distributed: each node sends information only to its direct neighbors, performs the calculation locally, and then sends its results back, so there is no centralized computation. Helpful hint: There are videos on Udacity from previous offerings at Georgia Tech that explain this and other concepts in more detail: https://classroom.udacity.com/courses/ud436/lessons/1729198657/concepts/6490994890923
● Describe the relationships between ISPs, IXPs, and CDNs.
The Internet is a complex ecosystem, built as a network of networks. These networks include Internet Service Providers (ISPs), Internet Exchange Points (IXPs), and Content Delivery Networks (CDNs). In 2019, there were approximately 500 IXPs around the world. Large-scale Tier-1 ISPs operate at a global scale and essentially form the "backbone" network over which smaller networks connect. IXPs are interconnection infrastructures that provide the physical infrastructure where multiple networks (e.g., ISPs and CDNs) can interconnect and exchange traffic locally. CDNs are networks created by content providers with the goal of having greater control over how content is delivered to end-users and of reducing connectivity costs; example CDNs include Google's and Netflix's. These networks have multiple data centers, each of which may house hundreds of servers, distributed across the world.
What are the differences and similarities of the OSI model and five-layered Internet model?
The OSI model and the 5-layered Internet Model have many of the same layers, with the difference being three of the layers are combined in the 5-layered model. Specifically the five-layer model combines the application, presentation, and session layers from the OSI model into a single application layer.
● What is the Routing Information Protocol (RIP)?
The Routing Information Protocol (RIP) is based on the Distance Vector protocol. The first version, released as part of the BSD version of Unix, uses hop count as its metric (i.e., it assumes every link has cost 1). More generally, the metric for choosing a path could be shortest distance, lowest cost, or a load-balanced path. In RIP, routing updates are exchanged between neighbors periodically using RIP response messages, as opposed to the distance vectors of the DV protocol. These messages, called RIP advertisements, contain information about the sender's distances to destination subnets.
Explain the Spanning Tree Algorithm.
The algorithm runs in "rounds", and at every round each node sends to each neighboring node a configuration message with three fields: a) the sending node's ID, b) the ID of the root as perceived by the sending node, and c) the number of hops between that (perceived) root and the sending node. At every round, each node keeps track of the best configuration message it has received so far and compares it against the configuration messages it receives from neighboring nodes in that round; a message is "better" if it names a root with a smaller ID, or, for the same root, a smaller number of hops to that root, with the sender's ID as the final tie-breaker. At the very first round of the algorithm, every node thinks that it is the root.
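A sketch of the per-round comparison, assuming the tie-breaking order described above (smaller root ID, then fewer hops to that root, then smaller sender ID); the message tuples and node IDs are made up.

```python
# Sketch of the "best configuration message" comparison used by the
# spanning tree algorithm. Each message is (root_id, hops_to_root, sender_id);
# the message format and node IDs are assumptions for illustration.

def better(msg_a, msg_b):
    # Python's tuple comparison matches the tie-breaking order:
    # smaller root ID, then fewer hops, then smaller sender ID.
    return msg_a < msg_b

current_best = (5, 0, 5)   # node 5 initially believes it is the root
incoming = (1, 2, 3)       # neighbor 3 claims: root is node 1, two hops away
if better(incoming, current_best):
    current_best = incoming
    # Node 5 stops claiming to be root; it would now advertise (1, 3, 5):
    # same root, one hop further away, with itself as the sender.
    print("adopted", current_best, "will advertise", (1, 3, 5))
```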
● Explain TCP throughput calculation.
The calculation is:
P = probability of packet loss
MSS = Maximum Segment Size
RTT = round-trip time
BW = data per cycle / time per cycle
BW = (MSS / RTT) * (C / sqrt(P)), where C is a constant.
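A worked instance of the formula, taking C = sqrt(3/2) (a commonly used value for this simple model); the MSS, RTT, and loss probability plugged in are arbitrary examples.

```python
# Worked instance of the throughput bound BW = (MSS / RTT) * (C / sqrt(P)).
# C is often taken as sqrt(3/2) for this simple periodic-loss model; the
# numbers plugged in below are made up for illustration.
import math

def tcp_throughput(mss_bytes, rtt_seconds, loss_prob, C=math.sqrt(1.5)):
    return (mss_bytes / rtt_seconds) * (C / math.sqrt(loss_prob))

# 1460-byte segments, 100 ms RTT, 0.01% loss:
bw_bytes_per_sec = tcp_throughput(1460, 0.1, 0.0001)
print(round(bw_bytes_per_sec * 8 / 1e6, 1), "Mbps")  # rough upper bound (~14.3)
```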
● How does a host infer congestion?
The host infers congestion from the network's behavior mainly through two signals. The first is packet delay: as the network gets congested, the queues in the router buffers build up, which leads to increased packet delays. Thus an increase in the round-trip time, which can be estimated from ACKs, can be an indicator of congestion in the network. However, packet delay in a network tends to be variable, making delay-based congestion inference quite tricky. The second signal is packet loss: as the network gets congested, routers start dropping packets. Note that packets can also be lost for other reasons, such as routing errors, hardware failures, TTL expiry, errors on the links, or flow control problems, although this is rare.
● What are / were the original design goals of BGP? What was considered later?
The original design goals of BGP were:
1. Scalability: manage the complications of the Internet's growth while achieving convergence in reasonable timescales and providing loop-free paths.
2. Expressing routing policies: BGP defines route attributes that allow ASes to implement policies (which routes to import and export) through route filtering and route ranking.
3. Allowing cooperation among ASes: each individual AS can still make local decisions (which routes to import and export) while keeping these decisions confidential from other ASes.
Security was considered later, as the complexity and size of the Internet increased. Security solutions have not been widely deployed or adopted due to the difficulty of transitioning to new protocols and a lack of incentives.
What is the purpose of the Spanning Tree Algorithm?
The purpose of the Spanning Tree Algorithm is to prevent broadcast storms, in which broadcast frames circulate through the network endlessly and cause stalls or heavy congestion. The spanning tree does this by preventing loops (cycles) from occurring in the network topology.
● How does a router use the BGP decision process to choose which routes to import?
The router compares routes by going through a list of attributes; in the simplest scenario it chooses the path with the fewest AS hops. One important decision attribute is LocalPref, which expresses a preference for routes learned from a specific neighboring AS over others: an AS ranks routes by first preferring the routes learned from its customers, then the routes learned from its peers, and finally the routes learned from its providers. Another attribute that can affect routing decisions is MED (Multi-Exit Discriminator), which is used between ASes connected by multiple links to designate which of those links is preferred for inbound traffic.
How does a router process advertisements?
The router consists of a route processor (the main processing unit) and interface cards that receive data packets, which are forwarded via a switching fabric. Router processing breaks down into a few steps:
1. The LS update packets, which contain LSAs from a neighboring router, reach the current router's OSPF process (on the route processor). This is the first trigger for the route processor. As the LS updates arrive, a consistent view of the topology is formed, and this information is stored in the link-state database; entries of LSAs correspond to the topology that is actually visible from the current router.
2. Using this information from the link-state database, the current router calculates the shortest paths using the shortest path first (SPF) algorithm. The result of this step is fed into the Forwarding Information Base (FIB).
3. The information in the FIB is used when a data packet arrives at an interface card of the router: the next hop for the packet is decided and it is forwarded to the outgoing interface card.
What is selective ACKing?
The sender retransmits only those packets that it suspects were received in error. The receiver in this case would acknowledge a correctly received packet even if it is not in order. The out-of-order packets are buffered until any missing packets have been received at which point the batch of the packets can be delivered to the application layer. "The fourth extension allows TCP to augment its cumulative acknowledgment with selective acknowledgments of any additional segments that have been received but aren't contiguous with all previously received segments. This is the selective acknowledgment, or SACK, option. When the SACK option is used, the receiver continues to acknowledge segments normally—the meaning of the Acknowledge field does not change—but it also uses optional fields in the header to acknowledge any additional blocks of received data. This allows the sender to retransmit just the segments that are missing according to the selective acknowledgment." Peterson 5.3.8 "A proposed modification to TCP, the so-called selective acknowledgment [RFC 2018], allows a TCP receiver to acknowledge out-of-order segments selectively rather than just cumulatively acknowledging the last correctly received, in-order segment. When combined with selective retransmission—skipping the retransmission of segments that have already been selectively acknowledged by the receiver" Kurose 3.5.4
What does the transport layer provide?
The transport layer provides the logical, end-to-end connection between two applications running on different hosts, regardless of whether the hosts are in the same network. It consists of two protocols: the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). Some applications that use UDP include video streaming, DNS, and online multiplayer games; some applications that use TCP include the Web (HTTP), SMTP, and FTP.
What is transmission control and why do we need to control it?
Transmission control is implemented in the transport layer. It controls the rate at which a sender transmits so that neither the receiver nor the network is overwhelmed and the network is used fairly. Transmission control has two parts: flow control and congestion control.
● What are two main challenges with BGP? Why?
Two main challenges are misconfigurations and faults. A misconfiguration can result in an excessively large number of updates, which in turn can result in route instability, router processor and memory overloading, outages, and router failures.
What are the differences between UDP and TCP?
UDP is: A) an unreliable protocol, as it lacks the mechanisms that TCP has in place; B) a connectionless protocol that does not require the establishment of a connection (e.g., a three-way handshake) before sending packets. Some benefits of UDP are: A) no congestion control or similar mechanisms; B) no connection-management overhead. The UDP packet structure is a 64-bit header consisting of: 1) source and destination ports; 2) the length of the UDP segment (header and data); 3) a checksum (an error-checking mechanism). Since there is no guarantee of link-by-link reliability, a basic error-checking mechanism is needed. The UDP sender adds the 16-bit words (in the simplified example: the source port, the destination port, and the packet length), wrapping any overflow around, and then takes the 1s complement of the sum (all 0s are turned into 1s and all 1s into 0s) to form the checksum. The receiver adds all four 16-bit words (the three original words plus the checksum); the result should be all 1s unless an error has occurred, so if the sum contains a zero, the receiver knows there has been an error. All one-bit errors will be detected, but some two-bit errors can go undetected (e.g., if the last digit of the first word flips to a 0 and the last digit of the second word flips to a 1). TCP, in contrast, is a connection-oriented, reliable protocol that establishes a connection with the three-way handshake and ends it with the connection teardown, as described earlier in this document.
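A sketch of the 1s-complement arithmetic described above on 16-bit words; the three example words are arbitrary, and a real UDP checksum also covers a pseudo-header and the payload.

```python
# Sketch of the 1s-complement checksum arithmetic described above, on
# 16-bit words. The three example words are arbitrary values chosen for
# illustration.

def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        if total > 0xFFFF:              # carry out of 16 bits wraps around
            total = (total & 0xFFFF) + 1
    return total

def checksum(words):
    return ~ones_complement_sum(words) & 0xFFFF   # flip all 16 bits

words = [0x1F2A, 0x3C4D, 0x00FF]        # e.g. source port, dest port, length
cs = checksum(words)

# Receiver side: summing the words plus the checksum should give all 1s.
print(hex(ones_complement_sum(words + [cs])))     # -> 0xffff, no error detected
```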
What are the two main protocols within the transport layer?
User datagram protocol (UDP) and the Transmission Control Protocol (TCP).
What are the examples of a violation of e2e principle?
Violations include firewalls and traffic filters: firewalls violate the principle because they are intermediate devices operated between two end hosts that can drop the end hosts' communications. Network Address Translation (NAT) boxes are another violation: a NAT box shares a single public IP address among the hosts behind it, giving them private addresses and rewriting packet headers so that traffic is routed to the correct destination host. NAT boxes violate the principle because the hosts behind them are not globally addressable or routable.
What is a packet for the transport layer called?
A segment (if it is a TCP packet); a datagram (if it is a UDP packet).
When would an application layer protocol choose UDP over TCP?
When developers need just a simple mechanism for transmission control. Typically, use UDP in applications where speed is more critical than reliability. For example, it may be better to use UDP in an application sending data from a fast acquisition where it is acceptable to lose some data points. You can also use UDP to broadcast to any machine(s) listening to the server. https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9ZLSA0
What is fast retransmit?
When the sender receives 3 duplicate ACKs for a packet, it considers the packet to be lost and will retransmit it instead of waiting for the timeout.
● When does the count-to-infinity problem occur in the distance vector algorithm?
It occurs when two or more nodes keep updating their distance values and informing their neighbors of the change, and the neighbors in turn update their own values, causing the original node to update its value again; this can continue in a loop for a long time. It happens primarily when a neighbor's advertised path to a destination actually passes back through the present node, forming a loop.
What is Stop and Wait ARQ?
Stop-and-Wait ARQ, also referred to as the alternating bit protocol, is a method in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest automatic repeat-request (ARQ) mechanism. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with transmit and receive window sizes both equal to one. After sending each frame, the sender does not send any further frames until it receives an acknowledgement (ACK) signal. After receiving a valid frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again. The timeout countdown is reset after each frame transmission. The above behavior is a basic example of Stop-and-Wait; real-life implementations vary to address certain design issues. https://en.wikipedia.org/wiki/Stop-and-wait_ARQ
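A stop-and-wait sender sketch over UDP using the standard socket module; the address, port, timeout value, retry limit, and the assumption that the receiver replies with an ACK datagram are all illustrative.

```python
# Stop-and-wait sender sketch over UDP: send one datagram, wait for an ACK,
# retransmit on timeout. The address, port, timeout, retry limit, and ACK
# format are assumptions for illustration only.
import socket

def stop_and_wait_send(payload, addr=("127.0.0.1", 9000),
                       timeout=1.0, max_retries=5):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        for attempt in range(max_retries):
            s.sendto(payload, addr)          # send a single frame
            try:
                ack, _ = s.recvfrom(2048)    # wait for the receiver's ACK
                return ack
            except socket.timeout:
                continue                     # timeout: resend the same frame
    raise RuntimeError("no ACK after retransmission limit")
```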
● What is the difference between iBGP and IGP-like protocols (RIP or OSPF)?
iBGP is not another IGP-like protocol (eg RIP or OSPF). IGP-like protocols are used to establish paths between the internal routers of an AS based on specific costs within the AS. In contrast, iBGP is only used to disseminate external routes within the AS.