Net tech exam 1

How does bipolar signaling differ from unipolar signaling? Why is Manchester encoding more popular than either?

Bipolar signaling differs from unipolar signaling in that bipolar signals are either positive or negative, whereas unipolar signals are either positive or zero. Bipolar schemes such as return to zero (RZ) and alternate mark inversion (AMI) return to zero between each bit, but each binary value is still represented by a single voltage level. Manchester encoding instead changes the signal in the middle of every bit period. Having a transition in the middle of each bit makes it easier for the receiver to stay synchronized and to detect errors, which is why Manchester encoding is more popular than either.
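
As a rough illustration of these schemes (not taken from the textbook), the Python sketch below encodes the same bit stream three ways; the numeric "voltage levels" are only placeholders chosen to show the patterns.

    # Illustrative sketch: encoding one bit stream with unipolar, bipolar AMI,
    # and Manchester schemes (signal levels shown as simple numbers/tuples).

    def unipolar(bits):
        # 1 -> positive voltage, 0 -> zero voltage
        return [+1 if b else 0 for b in bits]

    def bipolar_ami(bits):
        # 0 -> zero; successive 1s alternate between + and - voltage
        out, last = [], -1
        for b in bits:
            if b:
                last = -last
                out.append(last)
            else:
                out.append(0)
        return out

    def manchester(bits):
        # Every bit has a mid-bit transition: 1 -> high-then-low, 0 -> low-then-high
        # (one common convention; the IEEE 802.3 convention is the opposite).
        return [(+1, -1) if b else (-1, +1) for b in bits]

    bits = [1, 0, 1, 1, 0]
    print(unipolar(bits))     # [1, 0, 1, 1, 0]
    print(bipolar_ami(bits))  # [1, 0, -1, 1, 0]
    print(manchester(bits))   # [(1, -1), (-1, 1), (1, -1), (1, -1), (-1, 1)]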

What is the purpose of a subnet mask?

Every computer within a TCP/IP network is given a subnet mask so that it can tell which other computers are on the same subnet. Computers must know which computers are on their subnet in order to properly route messages. A subnet mask is a 4-byte binary number that looks just like an IP address, and its one bits must form a single continuous run. For example, a subnet mask of 255.255.254.0 indicates that the first 2 bytes plus the first 7 bits of the third byte identify the subnet, because in binary this mask is 11111111.11111111.11111110.00000000. Computers within this subnet share the same value in those subnet bits, and you can see that all the ones are continuous.
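
To see the 255.255.254.0 example in code, here is a small sketch using Python's standard ipaddress module; the 192.168.x.x addresses are made up for illustration.

    import ipaddress

    # The /23 network from the answer: mask 255.255.254.0 keeps the first 23 bits.
    net = ipaddress.ip_network("192.168.2.0/255.255.254.0")
    print(net)                                            # 192.168.2.0/23

    # Two hosts whose first 23 bits match are on the same subnet...
    print(ipaddress.ip_address("192.168.2.10") in net)    # True
    print(ipaddress.ip_address("192.168.3.200") in net)   # True (still within the /23)

    # ...while a host that differs in those bits is not.
    print(ipaddress.ip_address("192.168.4.5") in net)     # False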

Describe three types of guided media.

Guided media are the physical media that a message actually travels through. The three types are as follows.

Twisted pair cable: insulated copper wires that are twisted together to minimize electromagnetic interference from neighboring wires. Four twisted pairs are typical in LAN cabling such as Cat 5e and Cat 6, but twisted pair can also come in bundles of hundreds of pairs.

Coaxial cable: a copper wire surrounded by an outer shell that is also a conductor. Coaxial cable is less prone to interference than twisted pair because it has more insulation, but it is also much more expensive, and it is becoming obsolete; existing cable companies continue to use it mainly because it is already installed.

Fiber-optic cable: this technology uses pulses of light from lasers or LEDs (light-emitting diodes) to carry information inside hair-thin strands of glass. Each strand has two layers: the optical core and the cladding that surrounds it. In early multimode fiber, light bounces off the sides of the core many times, which causes the signal to degrade after a relatively short distance. Graded-index multimode fiber reduces this degradation by letting the light travel faster as it approaches the edge of the core, giving it a range of roughly 1,000 feet. Single-mode fiber uses a laser whose light must be precisely aligned so that it travels straight down the core, which makes it much more expensive. Fiber optics can carry far more data than coaxial or twisted pair at much higher speeds, and the cable is also physically more durable.

Why does HTTP use TCP and DNS use UDP?

HTTP uses TCP because requests and responses can be large, so they must be segmented and then reassembled when received. A TCP segment typically has a 20-byte header (because the options field is rarely used), which includes source and destination port identifiers and a sequence number that lets the receiver reassemble the segments and confirm that none have been lost. DNS uses UDP because only a very small packet has to be sent, and the UDP header is only 8 bytes compared to TCP's 20 bytes, which allows for quicker transmission. UDP is quicker because it does not segment and reassemble messages and there is no handshake: a request is sent to the DNS server, and the server sends a small packet back via UDP with no connection setup or acknowledgment needed.
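
As a rough sketch of this contrast, the code below uses Python's standard socket module; the hand-built DNS query, the public resolver 8.8.8.8, and the host example.com are stand-ins chosen only for illustration.

    import socket

    # --- DNS over UDP: one small request/response, no connection setup ---
    # Hand-built query for "example.com" (A record); the 12-byte DNS header plus
    # the encoded name easily fits in one UDP datagram (8-byte UDP header).
    query = (b"\x12\x34"                              # transaction ID (arbitrary)
             b"\x01\x00"                              # flags: standard query, recursion desired
             b"\x00\x01\x00\x00\x00\x00\x00\x00"      # 1 question, 0 answer/authority/additional
             b"\x07example\x03com\x00"                # QNAME: example.com
             b"\x00\x01\x00\x01")                     # QTYPE=A, QCLASS=IN
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.settimeout(3)
    udp.sendto(query, ("8.8.8.8", 53))                # no handshake, just send
    response, _ = udp.recvfrom(512)
    print("DNS reply bytes:", len(response))

    # --- HTTP over TCP: connection setup, then a stream TCP segments and reassembles ---
    tcp = socket.create_connection(("example.com", 80))   # three-way handshake happens here
    tcp.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    page = b""
    while True:
        chunk = tcp.recv(4096)        # TCP delivers the reply as an ordered byte stream
        if not chunk:
            break
        page += chunk
    print("HTTP reply bytes:", len(page))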

What does the data link layer do?

The data link layer is the second layer of the OSI seven-layer model of computer networking. It is the protocol layer that transfers data between adjacent network nodes in a WAN or between nodes on the same LAN. In the OSI model, one function of the data link layer is error control. Most data link layer software used in LANs is configured to detect errors, not correct them: when a packet with an error is discovered, it is simply discarded.

What does the network layer do?

The network layer is like the on and off ramp of the highway system; it helps to control the flow of traffic and get "cars" (packets) to the highway (transport layer) from the city streets (data link layer). The network layer is in charge of message forwarding and addressing, working with application layer, network layer, and data link layer addresses as a message moves along. To elaborate on the on and off ramp simile: the network layer is the transition layer between the data link layer and the transport layer, moving your packets from your local network toward a more distant network over the "internet highway." Much like on and off ramps show which way the highway goes and where it leads, the network layer selects which device to send the packet to next on its way to its destination, typically the next router on the path. It is essentially a utility layer that supports the transport layer, which holds most of the actual protocols for end-to-end internet communication.

Explain why most telephone company circuits are now digital.

The reason most telephone companies now use digital circuits is that digital transmission "produces fewer errors; is more efficient; permits higher maximum transmission rates; is more secure; and simplifies the integration of voice, video, and data on the same circuit."

What is a server? What is a client? What is a circuit?

A server is a machine that stores data or software and is accessed by multiple clients; this allows many users to access the same information from a centralized location. A client is the input-output device used by an end user to communicate with the rest of the network. Some clients are full PCs, meaning they have a full version of the operating system and applications installed, and some are thin clients, meaning they run a lightweight operating system and access all applications from a server. A circuit is the pathway that information travels on. Circuits can be built from different media such as copper wire, fiber optics, and wireless, and they also include devices such as hubs and switches. Basically, servers and clients are networked together via the circuit.

What is a session?

A session is an interaction or conversation between two communicating devices. Some examples include two computers exchanging data, a laptop transferring content to a computer, or two cellphones communicating. A session can also be a user interacting with a communication device.

What is the difference between an interior and an exterior routing protocol? Give an example of each

An autonomous system is a network operated by one organization; the book gives IBM or a university as examples. An interior routing protocol is the routing protocol used inside an autonomous system, such as Open Shortest Path First (OSPF). An exterior routing protocol is the routing protocol used between autonomous systems; Border Gateway Protocol (BGP) is an example of an exterior routing protocol.

Compare and contrast stop-and-wait ARQ and continuous ARQ

Stop-and-wait ARQ and continuous ARQ are both methods of error correction via retransmission. They work by sending an acknowledgement (ACK) when a packet is received without error and a negative acknowledgement (NAK) when a packet is received containing errors. The key difference between the two is the method of transmission. With stop-and-wait ARQ, the sender waits for a response after every data packet before sending additional packets. This is considered half-duplex because only one side is transmitting at a time. With continuous ARQ, the sender does not wait for an acknowledgement before sending additional packets; data packets are still being transmitted while the receiver sends back an ACK or NAK for an earlier packet, making it a full-duplex technique. Retransmission under continuous ARQ uses either Link Access Protocol for Modems (LAP-M), which resends only the packets in error, or Go-Back-N ARQ, which resends the packet in error and all packets sent after it. The final difference is the approach to flow control. Stop-and-wait is completely controlled by the receiver, since the next packet is not sent until the previous packet has been acknowledged. Continuous ARQ is usually controlled by a sliding window, the maximum number of packets that may be sent before an acknowledgement must be received, which ensures that the receiver is not bombarded with more packets than it can handle.
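
To make the efficiency difference concrete, here is a back-of-the-envelope Python sketch; the frame size, link speed, propagation delay, and window size are all made-up numbers for illustration.

    # Rough sketch comparing link efficiency of stop-and-wait vs. continuous ARQ
    # with a sliding window. All numbers below are invented for the example.

    frame_bits   = 12_000        # a 1,500-byte frame
    link_bps     = 10_000_000    # 10 Mbps circuit
    prop_delay_s = 0.02          # 20 ms one-way propagation delay

    transmit_time = frame_bits / link_bps     # time to push one frame onto the wire
    rtt           = 2 * prop_delay_s          # frame travels out, ACK travels back

    # Stop-and-wait: send one frame, then sit idle until its ACK comes back.
    stop_and_wait_eff = transmit_time / (transmit_time + rtt)

    # Continuous ARQ: keep sending while ACKs are in flight, up to the window size.
    window = 8
    continuous_eff = min(1.0, window * transmit_time / (transmit_time + rtt))

    print(f"stop-and-wait efficiency:  {stop_and_wait_eff:.1%}")   # ~2.9%
    print(f"continuous ARQ (window=8): {continuous_eff:.1%}")      # ~23.3%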

Compare and contrast the three types of addresses used in a network.

Computers can have three different addresses: an application layer address, a network layer address, and a data link layer address. Data link layer addresses are usually part of the hardware, whereas network layer and application layer addresses are set by software. The data link layer address is permanently encoded in each network card and is commonly called the physical address or MAC address. Hardware manufacturers have an agreement that assigns each manufacturer a unique set of permitted data link layer addresses, so each card's address distinguishes it from every other computer in the world. Network layer and application layer addresses are generally assigned by a software configuration file. Network managers can assign any network layer addresses they want. Likewise, network managers can assign any application layer address they want, but a network standards group must approve application layer addresses to ensure that no two computers have the same one. Network layer addresses and application layer addresses go hand in hand, so the same standards group usually assigns both.
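
As an informal illustration, the snippet below prints one example of each address type for a local machine; the domain name is arbitrary, and uuid.getnode() may fall back to a random value on systems where it cannot read a MAC address.

    import socket, uuid

    # Application layer address: a domain name that people use.
    name = "www.example.com"                      # illustrative name

    # Network layer address: the IP address that the name resolves to (via DNS).
    ip = socket.gethostbyname(name)
    print(f"application layer: {name}")
    print(f"network layer:     {ip}")

    # Data link layer address: the MAC encoded in the local network card.
    mac = uuid.getnode()                          # may fall back to a random value
    print("data link layer:   " +
          ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -1, -8)))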

How does forward error correction work? How is it different from other error-correction methods?

Forward error correction uses codes containing sufficient redundancy to detect and correct errors at the receiving end, without retransmission of the original message. The amount of redundancy, or number of extra bits required, varies with the scheme: roughly speaking, the more check bits added per block of data bits, the more errors the receiver can detect and correct on its own. It differs from other error-correction methods, such as retransmission-based ARQ, because the receiver repairs the damaged bits itself instead of discarding the message and having the sender's data link layer transmit it again.
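
One classic forward error correction scheme (offered here as an illustration, not necessarily the one the course has in mind) is the Hamming(7,4) code: three check bits protect every four data bits, and any single flipped bit can be located and corrected at the receiver, as the Python sketch below shows.

    # Sketch of the Hamming(7,4) forward error correction code. Three check bits
    # are added to every four data bits; the receiver can locate and flip any
    # single corrupted bit without asking for a retransmission.

    def hamming74_encode(d):                  # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

    def hamming74_decode(c):                  # c = 7 received bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4,5,6,7
        error_pos = s1 + 2 * s2 + 4 * s3      # 0 means no single-bit error found
        if error_pos:
            c = c[:]                          # copy, then correct the flagged bit
            c[error_pos - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]       # recovered data bits

    data = [1, 0, 1, 1]
    codeword = hamming74_encode(data)
    codeword[4] ^= 1                           # simulate one bit flipped in transit
    print(hamming74_decode(codeword) == data)  # True: corrected without retransmission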

What feature distinguishes serial mode from parallel mode?

In serial mode, bits are transferred sequentially, one at a time. In parallel mode, multiple bits are sent simultaneously. Serial cables are less expensive because all the data transfer takes place over a single line, as opposed to a parallel cable, which transfers data over multiple lines bundled together into one cable.

Describe the seven layers in the OSI network model and what they do

Layer 7 - Application Layer: This layer handles applications that require a network connection, such as DNS, FTP, HTTP, DHCP, Telnet, and various others, as well as network management and monitoring. The application layer is where the end user gets access to the network through the use of applications.

Layer 6 - Presentation Layer: This layer provides a context for communication between the layers. It prepares data from the lower layers for presentation to the application layer, and it also handles encryption, decryption, and compression.

Layer 5 - Session Layer: This layer controls the dialogs between computers. It structures and manages all sessions: it sets up the communication between two computers, maintains it, and then tears it down.

Layer 4 - Transport Layer: This layer performs flow control, breaks larger messages down into packets, confirms receipt of transmissions (for TCP), and performs error checking.

Layer 3 - Network Layer: This layer determines the next node that offers the best route through the network for a message, and it determines the address of that next computer on the route if needed.

Layer 2 - Data Link Layer: This layer takes the bits handled by the physical layer and creates message boundaries that indicate where messages begin and end. The data link layer also decides when devices can transmit, to keep more than one computer from transmitting at the same time; switching occurs at this layer. It solves problems associated with damaged, lost, and duplicate messages so that the layers above it do not experience transmission errors (error control).

Layer 1 - Physical Layer: This layer transmits data as bits, ones and zeros. It defines the rules for how those ones and zeros are transmitted, such as how many bits are sent per second, and it encompasses the wiring, cables, and other physical components that connect devices together.

What is media access control, and why is it important?

Media access control "refers to the need to control when computers transmit." Media access control (MAC) functions at the data link layer. The data link layer has two sublayers; one of them is the media access control sublayer, which controls the physical hardware. The MAC sublayer software at the sending computer controls how and when the physical layer converts bits into the physical symbols that are sent down the circuit: it takes the data link layer PDU from the LLC sublayer, converts it into a stream of bits, and controls when the physical layer actually transmits the bits over the circuit. At the receiving computer, the MAC sublayer receives a stream of bits from the physical layer, translates it into a coherent PDU, ensures that no errors have occurred in transmission, and passes the data link layer PDU up to the LLC sublayer.

Media access control is not necessary for point-to-point full-duplex configurations, because there are only two computers on the circuit and both can transmit at any time. In point-to-point half-duplex configurations, media access control is important. To decide when computers may transmit, media access control uses two different approaches: contention and controlled access. With the contention method, "computers wait until the circuit is free (i.e., no other computers are transmitting) and then transmit whenever they have data to send. Contention is commonly used in Ethernet LANs." With the controlled access method, one device "controls the circuit and determines which clients can transmit at what time. There are two commonly used controlled access techniques: access requests and polling. With the access request technique, client computers that want to transmit will send a request to transmit to the device that is controlling the circuit (e.g., the wireless access point). The controlling device grants permission for one computer at a time to transmit. While one computer has permission to transmit, all other computers will wait until that computer is finished, and then, if they have something to transmit, they use a contention technique to send an access request." Polling "is the process of sending a signal to a client computer that gives it permission to transmit. With polling, the clients store all messages that need to be transmitted. Periodically, the controlling device (e.g., a wireless access point) polls the client to see if it has data to send. If the client has data to send, it does so. If the client does not have data to send, it responds negatively, and the controller then asks another client if it has data to send." Media access control is important because it prevents collisions and allows data to be transmitted effectively.

What is middleware, and what does it do?

Middleware is the software solution to client-server architecture communication. Middleware sits between the application software on the client side and the application software on the server side. It provides a standard way of communicating that can translate between software from different vendors. It can also manage message transfers from the client to the server so that the client does not need to know which specific server contains the application data: the client application software sends all messages to the middleware, which in turn forwards them to the correct server. This ensures that the client application software does not have to be constantly updated as the server changes; only the middleware needs to be updated for everything to continue running smoothly. Middleware is useful in that it enables software from different vendors to communicate and work together. There are many standards that govern middleware, but two of the major ones are the Distributed Computing Environment (DCE) and the Common Object Request Broker Architecture (CORBA). Both standards cover all aspects of the client-server architecture, and as long as one or the other is used consistently, the client and server software can communicate.

Why are network layers important?

Network layers are important because breaking network communication down into layers makes it easier to understand and to troubleshoot when there are problems. The layers let us split everything into pieces and see which piece is responsible for what. In doing so, they allow us to better understand how networking works and how to improve it, or fix it, should something go wrong. Having these layers also makes it much easier to program different functions at different layers, since one layer can be changed without rewriting the others.

Describe three approaches to detecting errors, including how they work, the probability of detecting an error, and any other benefits or limitations.

Parity checking adds one additional bit (the check bit) to the end of each character of data to indicate whether the number of 1s in that character is even or odd. The probability of detecting an error with this method is only about 50 percent, because any even number of flipped bits goes unnoticed. Longitudinal redundancy checking (LRC) adds one additional character, called the block check character (BCC), to the end of the entire packet of data. The value of the BCC is determined in the same way as the parity check bit, but by counting longitudinally through the message (for example, across the second bit of every character) rather than vertically within each character. This method is far more reliable, detecting about 98 percent of typical burst errors longer than 10 bits. Polynomial checking adds one or more characters, computed with an algorithm, to the end of the message. There are two main types of polynomial checking: checksum and cyclical redundancy check (CRC). CRC detects more than 99.99 percent of burst errors longer than 16 bits.
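
The rough Python sketch below shows how the first two checks are computed on a made-up message; it is only an illustration of the idea, not production error-checking code.

    # Sketch of parity checking and longitudinal redundancy checking on made-up data.

    def parity_bit(byte):
        # Even parity: the check bit makes the total number of 1s even.
        return bin(byte).count("1") % 2

    def lrc(block):
        # Longitudinal redundancy check: one block check character whose bit i is
        # the parity of bit i across every byte ("counting longitudinally").
        bcc = 0
        for byte in block:
            bcc ^= byte
        return bcc

    message = b"NETWORK"
    print([parity_bit(b) for b in message])    # one parity bit per character
    print(f"BCC = {lrc(message):08b}")          # one block check character per block

    # A two-bit error inside one character can slip past simple parity...
    corrupted = bytes([message[0] ^ 0b00000011]) + message[1:]
    print(parity_bit(message[0]) == parity_bit(corrupted[0]))   # True: parity misses it
    # ...but it still changes the BCC, so the LRC catches this particular error.
    print(lrc(message) == lrc(corrupted))                       # False: LRC catches it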

What roles do SMTP, POP, and IMAP play in sending and receiving email on the Internet?

SMTP is the most commonly used email standard. The user creates the email message using an email client, which formats the message into an SMTP packet that includes the sender's address and the destination address. The SMTP packet is sent to a mail server, which runs a special application layer software package called a mail transfer agent. The mail transfer agent reads the SMTP packet to find the destination address and then sends the packet through the Internet from mail server to mail server until it reaches the destination server, which stores the message in the receiver's mailbox. When that mail server later receives an IMAP or POP request from the recipient, it converts the original SMTP packet created by the sender into a POP or IMAP packet that is sent to the client's computer. POP and IMAP also provide a host of functions that enable the user to manage his or her email, such as creating mail folders, deleting mail, creating address books, and so on.
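
As a loose sketch of the two halves of that process, the Python snippet below uses the standard smtplib and imaplib modules; the server name, addresses, and credentials are placeholders, so treat it as a shape to follow rather than something that will run against a real mailbox as written.

    import smtplib, imaplib
    from email.message import EmailMessage

    # --- Sending: the mail client hands an SMTP message to a mail transfer agent ---
    msg = EmailMessage()
    msg["From"] = "alice@example.com"                  # placeholder addresses
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Sent with SMTP.")

    with smtplib.SMTP("mail.example.com", 587) as smtp:    # placeholder server
        smtp.starttls()
        smtp.login("alice", "app-password")                # placeholder credentials
        smtp.send_message(msg)

    # --- Receiving: the mail client pulls stored messages from its mailbox via IMAP ---
    with imaplib.IMAP4_SSL("mail.example.com") as imap:
        imap.login("bob", "app-password")
        imap.select("INBOX")
        _, ids = imap.search(None, "UNSEEN")               # list unread messages
        for msg_id in ids[0].split():
            _, data = imap.fetch(msg_id, "(RFC822)")       # download the raw message
            print(len(data[0][1]), "bytes fetched")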

What does the transport layer do?

The transport layer is the fourth layer of the Open Systems Interconnection model and is responsible for breaking messages into smaller segments for transport and reassembling them at the destination. It receives information from the session layer and passes it on to the network layer, which routes the segments toward their destination. The transport layer establishes end-to-end connections between applications or devices, performs error checking, and, for connection-oriented protocols such as TCP, confirms that data arrives complete and in order, which makes it central to both connection-oriented and connectionless communication.

What benefits and problems does dynamic addressing provide?

One benefit of dynamic addressing is that it provides slightly more security, since the ISP changes a user's external IP address every so often; another is that a pool of users can share a smaller set of IP addresses from the ISP. Problems arise when a user needs an IP address that does not change, such as the address of a web host. If that user's ISP used dynamic addressing, the IP could change and end up pointing at an unused address or, in the worst case, at an address now being used by a different website. Dynamic IPs also make network management simpler: instead of manually reconfiguring all the workstations every time changes are made to the network, the DHCP server just leases an address to each computer that needs one. The MAC addresses don't change, so the network administrator can still figure out which machines are sending and receiving packets. The only problem I've encountered was when the DHCP server wasn't working properly, such as assigning my address to someone else while I was still using it.

Is the bit rate the same as the symbol rate? Explain

The bit rate and the symbol rate are the same only when one bit is sent with each symbol. A bit is one unit of information, while a baud (symbol) is a unit of signaling speed indicating the number of times per second the signal on the communication circuit changes. Depending on the modulation technique used, each symbol can carry one or several bits, so the bit rate equals the symbol rate multiplied by the number of bits carried per symbol.
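
A quick worked example in Python (the symbol rates and modulation choices below are arbitrary):

    import math

    # Bit rate = symbol (baud) rate x bits per symbol, where bits per symbol is
    # log2 of the number of distinct signal states the modulation technique uses.
    def bit_rate(symbol_rate, signal_levels):
        return symbol_rate * math.log2(signal_levels)

    print(bit_rate(2400, 2))     # 2400.0 bps: 1 bit per symbol, bit rate = symbol rate
    print(bit_rate(2400, 16))    # 9600.0 bps: 16-QAM packs 4 bits into each symbol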

What is the purpose of a data communications standard?

The purpose of a data communications standard is to ease the creation of software and hardware across networks. Just like any other standard, a data communications standard creates a rulebook of sorts, allowing software and hardware from different makers to work coherently with each other on linked networks. This sort of rulebook is necessary because the Internet is essentially one giant linked network. Layered standards also allow an individual network layer to be updated without disturbing the others.

How does a thin client differ from a thick client?

Thick versus thin describes different ways of dividing work in a client-server architecture. In a thin-client approach, the server handles most of the application logic; in a thick-client approach, the client is responsible for the application logic of the programs it runs. One example of a thin client is a web page that is generated dynamically on the server, so the browser only displays the result. An example of a thick client is an online game whose launcher and core files are downloaded to the player's machine and kept up to date from the server; the server then only relays state between players, allowing massive online multiplayer games without much strain on the server itself.

Briefly describe the different types of application architectures.

There are three fundamental application architectures. In host-based networks, the server performs virtually all of the work. In client-based networks, the client computer does most of the work and the server is used only for data storage. In client-server networks, the work is shared between the servers and the clients: the client performs all of the presentation logic, the server handles data storage and data access logic, and the application logic may be placed on either one or split between them. Client-server networks can be cheaper to install and often balance the network load better, but they are more complex and costly to develop and manage.

Compare and contrast two-tier, three-tier, and n-tier client-server architectures. What are the technical differences, and what advantages and disadvantages does each offer?

Two-tier, three-tier, and n-tier architectures are ways in which the application logic can be partitioned between the client and the server. The two-tier architecture is the most common; it uses only two sets of computers, one set of clients and one set of servers. In a three-tier architecture, the client computer is responsible for presentation logic, an application server is responsible for application logic, and a database server is responsible for data access logic and data storage. An n-tier architecture is similar to a three-tier one, except that the application logic itself is spread across two or more different sets of servers. The principal benefits of the three-tier and n-tier models are better load balancing and greater security. However, they also bring a larger initial and recurring cost, the need for more monitoring and management, and a greater load on the network. N-tier architectures also carry a greater programming cost, because the software must interface with many different systems.

How is TCP different from UDP?

UDP is normally used when the sender needs to send a single packet to the receiver. UDP does not check for lost messages and has no form of flow control or error correction. It offers faster transmission than TCP because its header is much smaller: UDP has only four fields, which add up to 8 bytes of overhead on top of the application layer packet: source port, destination port, length, and CRC-16. This protocol is often used for streaming media online, which favors speed over correction; that is why, when watching streamed video or audio, you sometimes see or hear glitches. UDP is generally not used for sending web pages or database information. TCP's header includes a source port, destination port, CRC-16, sequence number, and an options field that is rarely used, so in practice the header is 20 bytes long. The sequence number lets the TCP software at the destination assemble the segments into the correct order and make sure no data is lost. TCP also provides flow control, which can slow the sender down if needed. In short, UDP has no error correction or flow control, while TCP has more fields and more functions; this gives UDP speed and flexibility but leaves it vulnerable to loss, whereas TCP guarantees delivery. That is why TCP is more widely used for web pages, databases, and sending important information.
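
To make the header-size comparison tangible, here is a small sketch using Python's struct module; the port numbers and field values are made up, and the real checksum calculations are skipped (left as zero).

    import struct

    # The entire UDP header: four 16-bit fields = 8 bytes of overhead.
    src_port, dst_port = 5000, 53
    payload = b"example payload"
    length = 8 + len(payload)           # header + data
    checksum = 0                        # real checksum omitted in this sketch
    udp_header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    print(len(udp_header))              # 8

    # A minimal TCP header without options is already 20 bytes: ports, sequence and
    # acknowledgment numbers, offset/flags, window, checksum, urgent pointer.
    tcp_header = struct.pack("!HHIIHHHH",
                             5000, 80,          # source and destination ports
                             1, 0,              # sequence and acknowledgment numbers
                             (5 << 12),         # data offset of 5 words, no flags set
                             65535, 0, 0)       # window, checksum, urgent pointer
    print(len(tcp_header))              # 20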

Compare and contrast unicast, broadcast, and multicast messages

Unicast messages are sent from one sender to just one receiver. Unicast is the predominant format for most transmission on LANs and across the Internet, and many unicast messages are sent using the TCP transport protocol. If you need to deliver a unicast message to multiple destinations, it must be sent to each receiver individually. Broadcast messages are sent from one sender to all connected receivers. Broadcasting works by sending the message to a special broadcast address for the network; every device on that network receives the same message, but it is important to remember that routers, while they receive the broadcast traffic, will not forward it beyond the local network. Multicast messages are sent from one sender to a selected group of receivers. This is done through the Internet Group Management Protocol (IGMP), which identifies groups and group members. Each multicast group has its own special IP address in the range 224.0.0.0 to 239.255.255.255, with addresses in the 224.0.0.x block reserved for local subnet communications. All three are different methods of sending messages over computer networks.
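
The sketch below shows how the three delivery styles look at the socket level using Python's standard socket module; the addresses, port, and multicast group are placeholders chosen for illustration.

    import socket, struct

    PORT = 5007                                            # placeholder port

    # Unicast: one specific receiver address.
    uni = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    uni.sendto(b"to one host", ("192.168.1.20", PORT))     # placeholder host

    # Broadcast: the special broadcast address; every host on the local subnet
    # sees the datagram, but routers do not forward it.
    bc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    bc.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    bc.sendto(b"to everyone on the subnet", ("255.255.255.255", PORT))

    # Multicast: a group address in 224.0.0.0-239.255.255.255; only hosts that
    # have joined the group (via IGMP) receive it.
    GROUP = "239.1.2.3"                                    # placeholder group address
    mc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mc.sendto(b"to the group", (GROUP, PORT))

    # A receiver joins the group so the network starts delivering that traffic to it.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", PORT))
    membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)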

Describe how a Web browser and Web server work together to send a Web page to a user

When an Internet user chooses to view a Web page, a Web browser and a Web server must work together to deliver that page to the user. To initiate the request, the user types the Web page's uniform resource locator (URL) into the Web browser or clicks on a hyperlink that contains the URL. The URL identifies the Web server by its address, known as the domain name. A Web browser is an application layer software package; examples are Microsoft's Internet Explorer and Google Chrome. Once the user enters the URL, the Web page quickly appears in the browser, but several events occur between the browser and the server before the page loads: the browser resolves the domain name to the server's address, opens a connection to the server, and sends an HTTP request for the page; the server then returns an HTTP response containing the requested page, which the browser displays.
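
A miniature version of the browser's side of this exchange, sketched with Python's standard http.client module (the host name is illustrative):

    import http.client

    # Open a connection to the server named in the URL, send an HTTP GET for the
    # page, and read the response, just as a browser would behind the scenes.
    conn = http.client.HTTPConnection("example.com", 80)   # illustrative host
    conn.request("GET", "/", headers={"Host": "example.com"})
    response = conn.getresponse()
    print(response.status, response.reason)    # e.g. 200 OK
    page = response.read()                      # the HTML the browser would render
    print(len(page), "bytes of HTML received")
    conn.close()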

