FIT1047 - Wk. 7 - 9


CSMA/CA Media Access Control - solution for recovering from collisions

- 1st collision: everybody waits 0 or 1 time units
- 2nd: everybody waits between 0 and 3 time units
- 3rd: everybody waits between 0 and 7 time units
- ...
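
This doubling of the waiting window after each collision is known as binary exponential backoff. A minimal Python sketch of the idea (the "time unit" granularity is a simplification, as above):

    import random

    def backoff_wait(collision_count):
        # After the n-th collision in a row, each device waits a random
        # number of time units between 0 and 2^n - 1.
        return random.randint(0, 2 ** collision_count - 1)

    for n in (1, 2, 3):
        print(f"collision {n}: wait somewhere in 0..{2 ** n - 1}, e.g. {backoff_wait(n)}")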

CSMA/CA Media Access Control - ways of dealing with collisions that cannot be detected

- ARQ = Automatic Repeat reQuest: the AP sends an ACK (acknowledgement) after receiving a frame, so if there is no acknowledgement, you know the frame has not arrived
- devices only send the next frame after receiving an ACK for the previous frame, otherwise they re-send the original
- 802.11 may use controlled access: a device can send a "Request To Send" (RTS) and only transmits the frame if the AP answers with "Clear To Send" (CTS); this is mainly used in large networks
- due to the hidden node problem, we cannot reliably detect collisions, so the receiver needs to acknowledge (ACK) every single frame
- if there is no ACK, we may not sense a carrier (too far away), so re-sending immediately might be a bad idea
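
A minimal stop-and-wait sketch of the ARQ idea in Python (send_frame and ack_received are hypothetical stand-ins for the wireless interface, and the 0.5-second timeout is an assumed value, not from the standard):

    def send_with_arq(frame, send_frame, ack_received, timeout=0.5):
        # Keep re-sending the frame until the receiver acknowledges it.
        # ack_received(timeout) returns True if an ACK arrives in time.
        while True:
            send_frame(frame)
            if ack_received(timeout):
                return  # ACK received: the next frame may now be sent
            # no ACK: the frame (or its ACK) was probably lost, so re-send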

CSMA/CA Media Access Control - CSMA/CA

- Carrier Sense, Multiple Access - Collision Avoidance
- compare to 802.3: Collision Detection
- devices try to actively avoid collisions

analog signal

- Continuous, often sinusoidal wave - E.g. sound (pressure wave in air), light and radio (electromagnetic waves)

CSMA/CA Media Access Control - All devices in a WLAN share the medium

- use the same channel (frequency band)
- need to deal with collisions; in WiFi we don't always know that there is a collision

WAYS TO DEAL WITH ALLOWING MANY DEVICES TO ACCESS THE SERVER --> Contention-based MAC

Any device can transmit at any time - "first come first served". Collisions: two devices transmitting at the same time.
- packets in a collision are damaged
- avoid collisions by carrier sensing (listening on the network for transmissions)
- detect collisions and re-transmit
Used in Ethernet.

network application architectures

As mentioned in the introduction to this module, networks enable computers to communicate. In most cases, a client will communicate with a server, and together they provide an application to the user. For example, you can consider Moodle to be an application that is provided by the combination of a web server and a web browser, or a video chat application that is provided by an app on your smartphone, a server on the Internet, and an app on the phone of the person you're calling. Each application has different tasks to fulfil, and when implementing an application, we have to decide who performs which task: the server or the client(s)? A standard way of describing the different tasks is the following.

what are collisions in a shared network

Collisions: two devices transmitting at the same time

ip addresses

In the diagram above, the clients, the routers and the server are all annotated with their IP addresses. An address, in general, is a unique identifier. Remember how we use addresses to identify individual locations in memory (see Memory). In the same way, each device in a network needs a unique address to identify it as the receiver of a particular message. On the Internet, each device that needs to either send and receive messages (clients and servers), or forward them to other networks (routers), needs one IP address for each network that it is connected to. Routers therefore have at least two IP addresses. We will learn more about IP addresses in the module about the Network and Transport layers. Something that isn't shown in the diagram is that each device in fact also has another address, the so-called MAC address. While the IP address is used to send messages from one network to another network, the MAC address is used within the same local area network. This will be explained in the module about the Data Link Layer.

process switching

In the discussion so far, one step was still missing: How does an OS switch from one process to another one? Just look again at step 6 above: The interrupt handler restores the process context from memory. Since this interrupt handler is part of the OS, it can simply decide to restore a different process context. In fact, the OS keeps process contexts stored in memory for all processes that are currently in the ready or blocked state. Process switching therefore means to pick the process context of a process that is currently ready, and then jump into that process's code in step 8 above.

virtualising the memory

Limited Direct Execution, i.e., virtualising the CPU, takes care of limiting access to the CPU as well as to the I/O devices. However, it does not limit access to the memory. Virtualising the memory has three main goals:
1. To enable protection of a process's memory against access from other (malicious or buggy) processes.
2. To make programming easier because a programmer does not need to know exactly how the memory on the target computer is organised (e.g. how much RAM is installed, and at which address the program will be loaded into RAM).
3. To enable processes to use more memory than is physically installed as RAM in the computer, by using external storage (e.g. hard disks) as temporary memory.
Virtualising the CPU meant that for each process, it looks as if it had exclusive access to the CPU. The same holds for virtual memory: a process does not need to "know" that there are other processes with which it shares the memory.

WLAN RADIO FREQUENCIES - what do they do, and what causes collision

Most WLANs use the 2.4GHz and/or 5GHz range
- high frequencies allow for large bandwidth
- but higher frequencies have stronger attenuation
WLAN channels
- networks in the same area should not use the same frequencies
- the WLAN spectrum is divided into channels, and each network is set to a different channel
- channels that are far enough apart are separate, so there's no interference; but a network on channel 1 and one on channel 3 would share part of the spectrum
  o this is a problem because of interference and collisions
  o both networks would have trouble decoding their data

network cables, and different types

Physical connection between network devices. Different types:
- UTP (unshielded twisted pair, most common type for LAN)
- STP (shielded twisted pair)
- Optical fibre (not yet common in LANs)
- Coaxial (only old LANs)

what is message encapsulation

The easiest analogy is that of an envelope. The actual message is put inside an envelope, and the sender and receiver addresses are written on the outside of the envelope. In addition, each layer may add some additional information. For example, if the transport layer had to split up a large message into multiple packets, it would write something like "packet 12 of 134" on the envelope as well, so that the receiver can puzzle them back together in the right order. Compared to the regular postal service, each layer adds its own envelope, and puts the envelope for the layers above inside. So the actual packet that is being transmitted by the hardware would be a data link layer envelope that contains a network layer envelope that contains a transport layer envelope that contains (part of) the application layer message! This is called message encapsulation.
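
A toy Python illustration of the nesting (the header fields and addresses here are invented for illustration, not real protocol formats):

    # Each layer wraps the message from the layer above in its own "envelope".
    app_message = b"GET /index.html"                                    # application layer
    transport   = b"[TCP packet 12 of 134]" + app_message               # transport layer
    network     = b"[IP from 192.168.1.10 to 118.138.0.1]" + transport  # network layer
    frame       = b"[Ethernet to aa:bb:cc:dd:ee:ff]" + network          # data link layer
    # 'frame' is what the hardware layer actually transmits as a signal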

application layer

The application layer (layer 5), finally, is the actual application software that a user interacts with. For example, your web browser or your instant messaging app are implemented at the application layer. Different devices on the network implement a different subset of these layers. A switch only needs to implement the hardware and data link layers, because it is only responsible for communication inside a local network. It ignores the higher level layers. Similarly, a router requires an implementation of the network layer in order to perform its function, but it can ignore the transport and application layer. Servers and clients implement all five layers. The diagram below illustrates how different devices communicate at the different layers.

application logic/ business logic

The application logic or business logic defines how the application behaves, i.e., it implements what should happen when the user clicks a button or receives a new message or types a word.

data access logic

The data access logic defines how the application manages its data; it is responsible, for example, for updating text documents when the user makes changes, or retrieving a piece of information when the user performs a search.

multiprogramming

The first multiprogramming systems divided up the memory, and each process was allocated a fixed region that it was allowed to use. This figure shows three processes sharing the memory, and a few regions of memory still being free for new processes. This setup poses two challenges.

First, you don't know where a program will start in memory when you write it. Remember how instructions in MARIE require you to write down concrete addresses, such as Add 100 (meaning add the value stored in address 100 to the AC register). Even if you use symbolic labels, as soon as you assemble your program into machine code, those labels get replaced by fixed, concrete addresses. You can imagine that programming like this is pretty much impossible if you don't know at which address your code will start when it gets loaded by the OS! So the first challenge is to enable programmers to program as if their code was loaded at address 0, and then have a mechanism that makes sure that those virtual addresses are translated to physical addresses when the code is loaded or executed.

The second challenge is that, in the picture above, nothing prevents process B from reading or writing the memory area that's been set up for process A or C. If the code for B contains bugs, it may accidentally overwrite the code for process A in memory, so that when the OS continues executing A, it crashes or behaves in an undesired way. Even worse, if B contains malicious code, it could not only make A crash, it could also access sensitive data (imagine A uses your credit card details) or modify A to do something the user doesn't want. That's essentially how viruses work.

Virtual memory therefore means that the operating system, together with the hardware, creates the illusion that each process has its own address space, and it prevents any process from accessing any other process's address space.

layers and protocols

The point we want to bring across in this module is how to transfer data in a network that is made up of millions of devices, made by thousands of different hardware manufacturers, running very different software. Of course, this can only work in the presence of international, open standards that prescribe how devices "talk to each other". But standardisation is not enough. The web browser on your phone may know how to "talk to" any web server in the world. But does it also need to know how to access both the 4G mobile network and WiFi, depending on what you're using at any one time? These two technologies are very different and require quite different software to run, but of course the software engineers that develop your browser have enough on their hands just dealing with modern web technology, and shouldn't have to worry about every possible piece of network hardware that their browsers may have to use. We've seen in the module on Operating Systems that the usual answer to these problems is virtualisation. The OS provides a layer of abstraction above the hardware, and the application software just accesses the OS (through system calls), which contains drivers that are tailored to the concrete hardware that's present in your computer. The same principle is used for implementing complex computer networks. We define a hierarchy of layers of abstraction, each with a specific set of tasks and a well-defined interface to the other layers. In addition, each layer defines a "language", which we call a protocol, that prescribes how the software implementing the same layer on different devices can interact.

presentation logic

The presentation logic is the part of the application that provides the user interface, i.e., the elements such as menus, windows, buttons etc. that the end user interacts with.

process scheduling

This section deals with the policies that the Operating System uses in order to switch between processes, the so-called process scheduling. The OS needs to decide how long each process gets to use the CPU before it switches to a different process, and which process to switch to next. There are multiple conflicting goals that an OS could try to achieve with its scheduling policy. One goal could be to finish each process as quickly as possible, i.e., try to achieve an optimal turnaround time (the time between the process becoming ready for the first time and its completion). A different goal may be to allocate the same amount of CPU time to each process in a given time interval, e.g. if 10 processes need to be executed concurrently, make sure each of them gets roughly 1/10 of the CPU time on average. This would achieve a level of fairness between the processes.

topology meaning

Topology means how the network is constructed, i.e., how its devices are arranged and connected.

what is the transmission rate

The speed of a connection, commonly measured in gigabits per second: we also call it the transmission rate, and it's one of the fundamental characteristics of a network.

What are the core tasks of Operating systems

• Managing multiple processes running in parallel. A process is a program that is currently being executed.
• Managing the memory that processes use.
• Providing access to file systems, the network and other I/O resources.
Modern OSs provide many more functions (such as the graphical user interface or a number of system tools mentioned above), but we will focus on these core tasks here. The core functionality of an OS is provided by the Operating System Kernel, or kernel for short. Almost all other functions are in fact provided as application code that is simply pre-installed when you install an OS.

problems with shared ethernet - limited network size

- CSMA/CD limits the size of the collision domain
- this limits the size of the network as well as the number of computers that can be put on it: the more there are, the higher the chance of collisions
Solution: implement a logical star topology!

ethernet - Media access control: CSMA/CD

- Carrier Sense (CS): listen on the bus, only transmit if no other signal is "sensed"
- Multiple Access (MA): several devices access the same medium, i.e., lots of computers use the same cable to communicate
- Collision Detection (CD): when a signal other than the device's own is detected:
  o transmit a jam signal (so all other devices detect the collision)
  o both senders wait a random time before re-transmitting

ethernet

- Describes both the physical layer and the data link layer. One describes the hardware, and the other describes how to access the hardware.

digital transmission

- Digital signals are typically transmitted through copper cables
- A digital signal encodes 0s and 1s into different voltage levels on the cable
- This results in a square wave
- Simplest encoding: unipolar
- The length of one bit is a unit of time, so you need to measure whether the voltage is low or high during that time period; each bit gets one timeslot (e.g. one second per bit)
- The sender and receiver have to be in sync to know exactly when a new bit starts
- Manchester encoding contains both the actual signal and the timing signal at the same time. This is good because the receiver always gets the correct timing information
- Phone modems communicate using what sounds like noise because that way they're using all the frequencies in the spectrum; the receiving modem can work out which frequencies arrive at its end (transmission problems on the way may filter out certain frequencies) and tell the first modem that certain frequencies don't work, so that they can negotiate the frequencies to use
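
A small Python sketch of Manchester encoding as described above (using the convention that a 0 is a high-to-low transition and a 1 is a low-to-high transition; the opposite convention also exists):

    def manchester_encode(bits):
        # Each bit occupies one timeslot split into two halves, with a
        # guaranteed transition in the middle that carries the timing signal:
        # 0 -> high then low, 1 -> low then high.
        signal = []
        for bit in bits:
            signal += [0, 1] if bit == 1 else [1, 0]
        return signal

    print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]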

digital data

- Discrete values (e.g. 0 and 1, or characters in the alphabet) - Discrete step from one symbol to the next

wireless local area networks - Wi-Fi (or "Wireless Ethernet")

- IEEE 802.11 family of standards - Original standard from 1997-1999 (802.11a, 802.11b) - Widely used: 802.11g (2003), 802.11n (2009) - Latest: 802.11ac

ethernet - physical layer

- Originally 10Mbps over shared media: coaxial cable, basically one cable that runs through the entire house or office, with all computers connected to it. These are not common anymore.
- Now mostly switched 100Mbps or 1Gbps over UTP, with an additional device called a switch that connects all devices in an area to the network.
- Standards exist for optical fibre up to 100Gbps

analog data

- Range of possible values (e.g. temperature, air pressure) - Continuous variation over time

ethernet - dominant LAN technology

- Standardised as IEEE 802.3 - used by almost all LANs - developed in 1973, standardised in 1980

how does the switch know the destination port, explain and draw the process

- The switch has 4 ports: 0, 1, 2 and 3. Port 0 is connected to A, and so on.
- All computers have a MAC address. A network interface card can always be identified by its MAC address.
- So, A wants to send a message to B. It sends the message to the network (so basically to the switch). The switch doesn't know where everything is connected (assuming it was just turned on), but it does know that it just received a message from A on port 0. It still doesn't know where computer B is, so, acting like a hub, it sends the message to all computers (called flooding the frame).
- Basically, when the MAC/port table is empty, the switch acts as a hub.
- Now if B replies with a message to A, the switch learns where B is connected, and will never flood frames for A or B again because it knows their addresses.
- If another computer now sends a message, for example if C sends a message to A, the switch learns C's address too and continues to build its table of MAC addresses. Once the switch knows all the addresses, it can send messages point to point without having to flood the ports.
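
This learning behaviour can be sketched in a few lines of Python (the port numbers and computer names follow the example above; the function returns the list of ports the frame is sent out of):

    def forward(mac_table, src, dst, in_port, all_ports):
        mac_table[src] = in_port          # learn where the sender is connected
        if dst in mac_table:
            return [mac_table[dst]]       # known destination: point to point
        # unknown destination: flood the frame to every other port (act like a hub)
        return [p for p in all_ports if p != in_port]

    table = {}
    print(forward(table, "A", "B", 0, [0, 1, 2, 3]))  # B unknown: flood [1, 2, 3]
    print(forward(table, "B", "A", 1, [0, 1, 2, 3]))  # A learned: send to [0]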

how can digital data be turned to sounds

- This is called modulation. Waveforms are used, and the signal gets modulated onto the waveform: the waveform is changed a bit in each time unit, and that way the signal is encoded into the waveform.
  o The simplest form of this is frequency modulation. For sound waves, this would mean changing the pitch of the sound: for example, high-pitched sounds are 1 and low-pitched sounds are 0.
  o Amplitude modulation uses the amplitude, the height of the wave, which for sound means how loud it is: for example, a loud sound is 1 and a soft sound is 0.
- Just using one of these methods on its own is no more efficient than digital transmission
- The most efficient way is to combine them. This means that in 1 time unit, one of 8 symbols can be transmitted instead of one of 2: a value between 0 and 7, unlike 0 and 1 as before. The data rate is increased by a factor of 3, because in 1 time unit, 3 bits of information can be transferred.
- If you use too many amplitudes, however, the receiver will be unable to distinguish them: if they become too similar, any kind of noise makes it hard or impossible to decode the signal
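
The relationship between the number of distinguishable symbols and the bits carried per time unit is just a base-2 logarithm; a quick check in Python:

    import math

    symbols_per_time_unit = 8          # e.g. combining amplitudes and frequencies
    bits_per_time_unit = math.log2(symbols_per_time_unit)
    print(bits_per_time_unit)          # 3.0 -> three times the data rate of plain 0/1 signalling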

data link layer

- controls access to the physical layer (MAC = Media Access Control): two or more computers connected by cable trying to transmit to the shared network at the same time can be problematic
- encodes/decodes between frames and signals
- implements error detection: whatever is sent on the network travels through a noisy environment, so it's important to ensure the message received is the same one that was sent, as it's easy for errors to occur
- interfaces to the network layer (software)

wireless local area networks - basic setup

- WLAN NICs connect to Access Points (APs) using radio frequencies - APs are connected to wired LANs (or backbones)

digital signal

- Waveform with limited number of discrete states

physical layer, physical media

- We transmit information using physical signals
- A signal travels through a medium:
  o electrical signals through e.g. copper wires
  o radio waves through "air" (or, really, space)
  o light signals through space or optical fibres

other wireless LAN technologies

- WiMAX (802.16)
- Bluetooth (802.15), also called WPAN (Wireless Personal Area Network): while WiFi connects many devices to a central network, Bluetooth connects the devices in a personal area network, such as your phone to your watch

problems with shared ethernet - broadcasting

- all frames are delivered to all devices, not just destination

wireless local area networks - wireless LANs

- eliminate cables (heritage buildings, rented apartments, ...) - allow for more flexible network access - facilitate mobile workers (e.g. hospital)

what are three scheduling policies

- first-come first-served - shortest job first - round-robin scheduling.

name 3 problems with shared ethernet

- half-duplex - broadcasting - limited network size

SWITCHED ETHERNET - Network switch

- looks like hub - 16 to 24 ports for UTP cables - but: circuit no longer shared!

problems with shared ethernet - half-duplex

- only one device can send at a time

SWITCHED ETHERNET - the switch as a layer-2 device

- reads MAC address of frame - transmits only to destination port

different types of application architectures

- server-based architecture - client-based architecture - thin-client architecture - multi-tier architecture - peer-to-peer architecture

a backbone network

A Backbone Network (BN) connects multiple LANs using routers. Often BNs don't contain any clients or servers, but simply serve as the circuit that connects the different LANs. Backbone networks usually provide very high speed, because they need to be able to handle all the network traffic between the LANs. Backbone networks are usually still quite local. For example, at Monash, a BN would connect the different floors of a building, or the different buildings on a campus. Both LANs and BNs are usually owned and operated by the organisation that uses them (e.g. Monash owns all the hardware including all the cables and devices for its LANs and BNs), and they don't require any licenses or permits to be built and operated. When we go to larger scales, this typically changes.

local area network

A Local Area Network (LAN) is a group of clients and/or servers that share a local circuit, i.e., they are directly connected to each other using just switches and cables or radio waves. All devices in a LAN can communicate directly with each other (without going through a router). LANs are typically limited to a single building, or a single floor of a building, potentially even a single room. One example of a LAN would probably be the wireless network you use at home.

a metropolitan area network

A Metropolitan Area Network (MAN) is the next larger scale network. It can span several kilometres, and connects LANs and BNs across different locations. For instance, the Monash Caulfield and Clayton campuses are connected via a MAN. A MAN is usually not built and owned by the organisation that uses it, but leased from a telecommunications company. This makes sense, because it would be prohibitively expensive to dig trenches across the city every time two offices want to get connected. A telecommunications company can operate its own network and then lease capacity to their clients.

what is a client

A client is a device (e.g. a computer or a smart phone) that enables users to access the network, such as the laptops in the picture above. Clients can also be very specialised devices, for instance IP phones, which are telephones that are directly connected to the computer network rather than to a dedicated telephone network (e.g. all the telephones used at Monash are Cisco IP phones).

what are protocols

A protocol is a formal language that defines how two applications talk to each other. In the case of the "middle layers" of the Internet Model (i.e., data link, network and transport), the protocols take the form of well-defined headers of data that are added to the actual message data by the sender, containing information such as sender and receiver addresses, the type of content of the message, and error detection codes like a CRC (see Representing numbers).

what is a router

A router connects different networks. If a device wants to communicate with another device that is outside of its own network, it has to send the messages via a router. In the diagram above, the WLAN access point on the left acts as both a switch (establishing a local area network) and a router (connecting it to the Internet). The LAN with the server is constructed using a separate switch and router. Although we represent the Internet as an opaque cloud in the diagram, there's actually nothing special about it: it's just a collection of routers, switches, servers and clients. We say that the Internet is a network of networks, connecting millions of individual networks, which contain billions of devices. We will discuss the structure of the Internet in more detail in the module about The Internet.

what is a server

A server is a device (usually a dedicated computer) that provides services to clients. For example, when you open the Monash homepage in a web browser, your client establishes a connection to the Monash web server computer, which sends the requested information (text, images etc.) back to you. In addition to sending information back to you, a server can also provide other types of services. A print server, for example, lets multiple users share a printer. A "smart" light bulb lets you turn it on and off via the network. An email server forwards messages to the specified recipients.

virtualising the address

A simple approach for implementing virtual addresses uses an additional register, the base register B. At any point in time, it simply has to contain the address at which the code for the currently running process starts in memory. Any instruction that accesses the memory now always adds the value stored in B to the address that's being used. For example, Load 100 now really means Load B+100, Add A300 means Add B+A300 and so on. Note that this doesn't change the instruction set, i.e., a programmer still codes as if the addresses start at 0 (like in MARIE), but the CPU takes into account the base register B when executing the instructions.
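
A sketch of the idea in Python, with memory modelled as a simple list (a simplification of what the CPU's address logic does in hardware):

    def load(memory, base, address):
        # The programmer writes 'Load address' as if the code started at 0;
        # the CPU adds the base register B to every memory access.
        return memory[base + address]

    memory = [0] * 4096
    memory[0xA00 + 0x100] = 42          # process loaded at base B = 0xA00
    print(load(memory, 0xA00, 0x100))   # prints 42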

what is a switch

A switch connects multiple devices to form a Local Area Network (LAN). All devices in the same LAN can directly communicate with each other.

system calls

A system call is, at its core, a special CPU instruction that switches the CPU from user mode into kernel mode and then jumps to a special subroutine of the OS. Many CPU architectures simply provide an instruction that causes an interrupt - as mentioned above, any interrupt will cause the CPU to switch to kernel mode. We call this a software interrupt, in order to distinguish it from interrupts caused by I/O devices. Some architectures (such as Intel x86) provide special instructions for triggering a system call, which can be executed faster than an interrupt. The application code that runs in user mode of course needs to let the OS know what kind of privileged operation it wants to perform (open a file or send a message?). To enable this, the OS sets up a table of system call handlers. This table is just a contiguous block of memory, and each location contains an address of a subroutine that performs one particular function. A user mode application can then put the number of the OS subroutine it wants to call into a register before triggering an interrupt or calling the special system call instruction. The following steps are then going to happen:
1. The CPU is switched into kernel mode.
2. It jumps to the interrupt handler for software interrupts.
3. The interrupt handler saves the process context into memory (i.e., the current state of registers etc.).
4. The interrupt handler makes an indirect jump to entry i of the system call table, if i was the number that the user mode application stored in the register before triggering the interrupt.
5. The code for the system call handler is executed and returns to the interrupt handler.
6. The interrupt handler restores the process context from memory.
7. The CPU is switched back to user mode.
8. The interrupt handler makes a jump to return to the user space application that called it.
All in all, this is the same mechanism as an interrupt vector, which we discussed in the module on Input/Output devices. An important step for both interrupt vectors and system call tables is the saving and restoring of the process context. That's why transitions from user mode to kernel mode (and back) are also called context switches.
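
The system call table is essentially a table of subroutine addresses indexed by number. A Python sketch of the dispatch in step 4 (the handler names are invented for illustration, not a real OS's API):

    def sys_open_file(args):
        return f"opening {args}"        # placeholder for the real handler

    def sys_send_message(args):
        return f"sending {args}"        # placeholder for the real handler

    # contiguous table: entry i holds the address of handler number i
    syscall_table = [sys_open_file, sys_send_message]

    def software_interrupt(i, args):
        # indirect jump to entry i of the system call table
        return syscall_table[i](args)

    print(software_interrupt(0, "notes.txt"))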

what is an operating system

An Operating System (OS) is a piece of software (a program), or a collection of different pieces of software, that manages the resources in a computer and provides a convenient interface between application programs and the hardware. The operating system provides the graphical user interface (GUI) that applications can use to draw windows, buttons and so on. But the core functionality of an OS is actually much more fundamental.

what does an OS do

An operating system provides a level of abstraction between hardware and software. This is an important concept in IT: We want to hide complicated, diverse, low-level concepts behind a simple interface. The following figure shows how an OS fits into our overall view of a computer: Any higher-level module only "talks" directly to the modules with which it has a boundary. In this example, an application program would never talk to the hardware directly - rather, it uses well-defined interfaces in the OS to access things like the network or the graphics hardware. The big advantage is that, as an application programmer, you don't need to know how the hardware works. For example, you don't need to know whether your computer is connected to the Internet via cable-based Ethernet or wireless 4G networking (which are very different technologies) - you can simply call a function in the OS (like a subroutine) to send a message to another computer via the Internet!

what is latency

Another fundamental characteristic of a network is latency, which describes how long it takes for one bit of data to travel from a sender to a receiver. A number of factors affect the latency of a network. The most fundamental one is the signal speed, meaning for example how fast a signal can travel in a copper cable or as a radio wave. This is limited by the speed of light (roughly 300,000 km/s), no message can travel faster than that, but usually messages in cables are slower (we can assume roughly 200,000 km/s in a copper cable, for example). Then there are all the devices that messages pass through on their way from the sender to the receiver. Each switch and router takes a short amount of time to process the message and decide what to do with it (we'll see this in detail later). All of this adds up to a measurable delay, especially across long distances such as between continents.
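
As a back-of-the-envelope example in Python (the 10,000 km distance is made up; the signal speed is the copper-cable figure from above):

    distance_km = 10_000           # hypothetical intercontinental link
    signal_speed_km_s = 200_000    # rough signal speed in a copper cable
    delay_s = distance_km / signal_speed_km_s
    print(delay_s * 1000)          # 50.0 ms one way, before any switch/router delays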

COOPERATIVE AND PREEMPTIVE TIMESHARING

As we have seen above, the mechanism that triggers a switch from user mode applications into the OS kernel is an interrupt. This includes "real" I/O interrupts, but also software interrupts (system calls) that the application makes. If we only have these two sources of interrupts, it is possible that the OS will not get an opportunity to switch to a different process for a long time. Imagine a process that just performs a long running computation, without making any system calls, and without any interrupts happening from I/O devices. In that case, the process would block the entire system, since the OS never gets a chance to switch to another process. We call this cooperative timesharing, because all processes must cooperate with the OS and make system calls in regular intervals, otherwise some other processes can "starve" (i.e., not get scheduled at all). The advantage of cooperative timesharing is that it is relatively easy to implement, but the downside is that buggy or malicious processes can make a system unusable. In order to address these disadvantages, modern computer architectures introduce a third type of interrupt (in addition to I/O and software interrupts): timer interrupts. These are hardware circuits that generate an interrupt in regular intervals, usually after a programmable number of clock ticks. This gives the OS full control: it sets up a timer interrupt before executing the context switch to a process, so it can be guaranteed that the process will be preempted at the latest when the timer "fires", if it doesn't make any system calls before that. Consequently, we call this preemptive timesharing. In preemptive timesharing systems, the OS (or the user) can always kill buggy or malicious processes (e.g. through the task manager in Windows or the kill command in Linux and Mac OS), since the OS will regain control of the system several times per second.

process and programs

Before we start discussing how an OS achieves this abstraction, we need to introduce the notion of a process, and discuss how a process relates to a program. A short definition is the following: A process is a running instance of a program. So a program is the code that you write, or the result of compiling the code into machine code. In most cases, a program is a file that's stored on disk (or maybe printed in a textbook). A process, on the other hand, is created by loading the program code into the computer and then executing it. Let's further clarify this distinction. Assume that you work for a software development company. You write some code for the company that gets distributed as a mobile app. This is a program. Hopefully your company is really successful, and your app is used by millions of people around the world. On each of your users' devices, an instance of your program is running as a process.

LIMITED DIRECT EXECUTION

CPU virtualisation via process switching as discussed above has a number of challenges. The first one is performance. Clearly, virtualisation should not create a huge overhead, and most of the CPU time should be spent on actually running processes, not managing them. The second challenge is control, which means that the OS should enable fair scheduling of processes and offer a certain level of protection against malicious or buggy code. The solution to these challenges is to get some support from the hardware, in a mechanism called limited direct execution (LDE). In order to achieve good performance, each process is allowed to run directly on the CPU. That is what direct means in LDE. Now remember that the OS is nothing but a piece of software, and while the code for a process is executed on the CPU, clearly the code for the OS is not executed at the same time. So how can the OS switch processes or protect against buggy code while it is not running? That's where the limited in LDE becomes important. CPUs have different modes of operation that enable the OS to restrict what an application program can do. In user mode, only a subset of all instructions are allowed to be executed. Typically, instructions that perform I/O would not be allowed in user mode, and we will see in the module on Virtual Memory that certain memory operations are also blocked. Normal applications are executed in user mode. In kernel mode, code is run without any restrictions, i.e., it has full access to I/O devices and memory. The OS runs in kernel mode. Whenever an interrupt happens (as discussed in Input/Output devices), the CPU switches into kernel mode to execute the interrupt handler (which is part of the OS, since it deals with I/O). Kernel mode is sometimes also called supervisor mode. Clearly, user mode places quite severe restrictions on application programs. Without access to I/O instructions, a process cannot read any files, send or receive messages over the network, or draw characters on the screen. But applications need to do all those things! So in order to give user mode processes access to these "privileged" operations, the OS provides a number of "hooks" that a user mode process can use, much like a subroutine, to perform functions like reading from disk or sending a message over the network. These hooks are called system calls.

client-based architecture

Client-based architectures became popular because they enabled companies to have a central file storage facility, enabling multiple users to work on the same files together, and providing a central back-up mechanism. However, client-based architectures also have some disadvantages. Imagine searching for the phone number of your lecturer in the Monash directory. In a client-based architecture, you would have to download the entire directory over the network to your client, just to search for a single entry. Even worse, if you were accessing some database to make small changes, for every change you would need to transmit the whole database. This would put enormous stress on the network, but also cause big problems if multiple users want to access the same database simultaneously. These issues can be mitigated by using client-server architectures. Here, the client only performs the presentation and application logic, while the server implements both data access and storage. The typical example are database servers, which can process queries, such as searches or requests for modification of individual records, and send only the relevant results back to the clients. We can consider typical email systems as client-server, where emails are stored on a server and clients access only those that the user is currently interested in.

how are client and server connected with each other

Clients and servers are connected with each other via the circuit, which is what we call all the cables, radio links, and devices in between that enable the communication. Two types of devices in particular are essential in modern networks.

ROUND-ROBIN SCHEDULING

Compared to the previous two policies, the next one will split up each process into short time slices. The OS can then cycle through all the processes one by one. In the figure, you can see how P1 (which takes two time units in total) has been split into four short time slices, and P2 has been split into six. The schedule first cycles through all five processes twice. After that, P3 and P5 have already finished. Then P1 finishes, and we get P2, P4, P2, and then the rest of P4. This type of scheduling produces a fair schedule, which means that during a certain time interval, all processes get roughly equal access to the CPU. The shorter we make each time slice, the fairer the schedule is going to be. But on the other hand, each time the OS switches between different processes, it has to fire a timer interrupt, execute the interrupt handler, and perform the context switch to the new process. This switching takes time, so if we make the time slices too short, we will create a noticeable overhead. The OS needs to make the right compromise between fairness and efficiency here. Another problem with simple round-robin scheduling is that some processes may be more important than others. For example, if a video player cannot update the picture at least 25 times per second, the video will begin to flicker. A process that just performs a lengthy computation in the background, on the other hand, won't be affected as much if it gets less access to the CPU for a short while. Modern Operating Systems therefore implement a variant of round-robin scheduling that can give certain processes (such as the video player) higher priority than others.
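
A minimal Python simulation of round-robin with one-slice quanta (the slice counts are made up to loosely match the figure described above, not taken from it):

    from collections import deque

    def round_robin(slices_needed):
        queue = deque(slices_needed.items())
        schedule = []
        while queue:
            name, left = queue.popleft()
            schedule.append(name)               # run one time slice
            if left > 1:
                queue.append((name, left - 1))  # not finished: back of the queue
        return schedule

    print(round_robin({"P1": 4, "P2": 6, "P3": 2, "P4": 5, "P5": 2}))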

wide area network

Finally, a Wide Area Network (WAN) is very similar to a MAN except that it would connect networks over large distances. For example, if Monash had a direct connection between its Australian and Malaysian campuses, that would be considered a WAN. Just as with MANs, the actual circuits used for WANs are usually owned and operated by third-party companies who sell access to their networks.

what types of networks do organisations tend to use

An organisation would typically use multiple of these network types depending on its requirements. A small company may only operate a single LAN or maybe a couple of LANs and a BN, while larger organisations connect their LANs and BNs using MANs and/or WANs. Any such large network could be operated completely autonomously, i.e., without any connection to other networks outside of the organisation. But of course most companies would also connect their networks to the wider Internet (which will be covered in the module on The Internet).

peer-to-peer architecture

Finally, there is an architecture that doesn't use servers at all. Some applications directly connect multiple clients with each other, with each client implementing all aspects of the application. This is called a peer-to-peer architecture. Prominent use cases for peer-to-peer systems are distributed file sharing as well as some audio and video conferencing services.

benefits of switches and MAC

Full-duplex circuits
- point-to-point connection between computer and switch
- no collisions possible
But frames may still be sent at the same time
- e.g. A sends to B while C sends to D
- or A and B both send to C simultaneously
- the switch has memory: it stores the second frame until transmission of the first frame is finished, then forwards it - store and forward. This stops collisions altogether.
Switched Ethernet runs at up to 95% capacity, compared to 50% for shared Ethernet!

SHORTEST JOB FIRST

If we sort the customers by the number of items they want to buy, in increasing order, we get the following schedule: As you can see, the average turnaround time has been reduced to 5.4, and in fact it's possible to show that this policy always results in an optimal schedule with respect to turnaround time. We call this the shortest job first policy. Of course it wouldn't be that easy to implement this strategy in a supermarket with a single checkout, because customers would get angry if we start allowing people with few items in their trolleys to jump the queue. But adding "express checkouts" for customers with few items has a similar effect. Both first-come first-served and shortest job first assume perfect knowledge about the time that a "job" takes from start to completion. In some types of scheduling problems, this may be a valid assumption (for example when scheduling the production lines in a factory). In other situations, we may be able to make good guesses that approximate the actual job duration. But in the case of scheduling processes in an Operating System, we typically have no idea how long a single process will take to finish. Some processes are in fact not designed to ever finish. Can you think of a process for which that would be true? Furthermore, the OS needs to not only schedule a process and then wait until it's finished, but also preempt processes and switch to other ones in order to create the illusion of concurrency. The next policy can deal with these requirements.
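
In Python, the shortest-job-first turnaround computation looks like this; the job lengths below are made up so that the average comes out at the 5.4 quoted above (the module's actual item counts aren't shown here):

    def average_turnaround_sjf(job_lengths):
        clock, total = 0, 0
        for length in sorted(job_lengths):  # shortest job first
            clock += length                 # this job's completion time
            total += clock                  # turnaround (all jobs arrive at time 0)
        return total / len(job_lengths)

    print(average_turnaround_sjf([1, 1, 2, 3, 6]))  # 5.4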

thin-client architecture

If you've kept track, you will have noticed that there is one combination left that we haven't discussed yet. If the client performs only the presentation logic, while the server implements application and data access logic and data storage, we are looking at a thin-client architecture. You have all used this kind of architecture, in fact you are probably using it while reading this text: most web applications can be considered thin-client. The web browser only "renders" the page on the user's screen, but any interaction (e.g. clicking a button) is sent back to the web server, which implements the application logic and sends back the results for the browser to display. Similarly, many smartphone apps only work while you are connected to the Internet, because parts of their application logic are performed by remote servers rather than on your phone. A good example is also "digital assistants", like Microsoft Cortana, Google Now, Amazon Echo, or Apple's Siri. When you ask them a question, that question is sent over the network to a server, which does all the complex processing to analyse your request and come up with an answer. The result is sent back to your phone, which implements the presentation logic by turning the result into a synthesised voice, or displaying it on its screen.

solution to problems with shared ethernet

Implement logical star topology

what is the NIC, and how does it connect to the computer

Implements the physical and data link layers
- includes unique data link layer address (MAC address)
- provides the physical connection to the network (socket or antenna)
- implements protocols (error detection, construction of frames, modulation or encoding etc.), i.e., standard ways of communicating
Connection to the computer
- often built into motherboards
- or connected via USB, PCI Express etc.
- internally it's just an I/O interface, and it uses the standard I/O that the computer provides

server-based architecture

In a server-based architecture, all four tasks are performed by the server. The client is just a "dumb terminal", which sends individual keystrokes back to the server and displays text-based output from the server. This was a popular architecture in the 1960s and 1970s, because it enabled multiple users to access a single large, expensive "mainframe" computer. Server-based architectures have the advantage that any update to the software on the server is immediately available to all users (since no software is actually running on the clients). However, upgrading the hardware means replacing a large, expensive computer rather than inexpensive terminals. The second architecture we're going to look at places only the data storage on the server, while presentation, application and data access logic are performed by the client. This is called a client-based architecture, and it was developed in the late 1980s. The typical example for this type of architecture is a file server. For example, when you log into any lab computer at Monash, your own files are available over the network. This means that they are not stored on the local hard disk of that computer, but rather on a Monash file server. Now let's say you open a text document in Microsoft Word to make some changes. The entire document file will be transferred from the server to your client. Microsoft Word implements all the presentation, application and data access logic. When you save the file, it is sent back over the network to the file server.

virtual memory

In a virtual memory system, the instructions operate on virtual addresses, which the OS, together with the hardware, translates into physical addresses (i.e., the actual addresses of the RAM).

multi-tier architecture

In many client-server and thin-client architectures, more than one server is required to handle the demands of potentially millions of users. Usually, this means that the tasks are further split, with dedicated servers for the application logic, and dedicated database servers handling the data access and storage. We call this a multi-tier architecture.

quick summary of the mechanism for process switching

Let's quickly summarise the mechanism for process switching. The CPU has a user mode and a kernel mode, in order to control the I/O and memory access for applications, which use system calls into the OS to access the privileged operations. Interrupts cause the CPU to switch into kernel mode. These can be I/O interrupts, software interrupts (system calls), or timer interrupts. The latter enable preemptive timesharing, where the OS always regains control of the system several times per second. Now that we have seen how to switch between processes, let's look at how the OS makes the decision when to switch.

memory protection

Now that we know how the OS can give every process its own address space, the second task is to make sure it can only read and write inside its own address space, too. Just using the simple base register B is not enough: If a process has been allocated the physical addresses from, say, A00 to B00, it could still execute an instruction Load 105. In that case, since the base register would be set to A00, it would actually load from address A00+105=B05, which is outside of its address space. Staying with our simple model, we can extend the system by one more register, the bounds register, which contains the highest address that the current process is allowed to access. The CPU will then check for each memory access whether it is an address between the base register and the bounds register. If the process tries to access memory outside of its address space, this generates an interrupt, which causes a context switch and gives the operating system the chance to kill the process.
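
Extending the earlier base-register sketch with the bounds check, in Python (the addresses are the ones from this example):

    def translate(base, bounds, address):
        physical = base + address
        if physical > bounds:
            # out-of-bounds access: interrupt, context switch, the OS may kill the process
            raise MemoryError(f"access to {hex(physical)} outside address space")
        return physical

    print(hex(translate(0xA00, 0xB00, 0x100)))  # 0xb00: allowed
    translate(0xA00, 0xB00, 0x105)              # 0xb05: raises MemoryError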

how does a switch work

Now you can begin to understand how e.g. a switch works: it receives a packet, and it only needs to look at the outer-most envelope (which has the data link layer address of the destination) to deliver the message. A router already needs to take a look inside that envelope, in order to find out the network layer destination address. It will then create a new envelope, with the next data link layer destination address that the packet should be sent to (which may be another router!). But neither a switch nor a router will ever look into the network layer envelope (and they don't need to understand its contents). The transport layer software on a client or server will use the information on the transport layer envelope to reassemble the entire message, before handing it over to the application layer software.

OS abstraction through virtualisation

Operating Systems achieve abstraction through virtualisation. This means that they provide a virtual form of each physical resource to each process. Physical resources include the CPU, the memory, the external storage (hard disks etc.) and the network and other I/O devices. Instead of using these physical resources directly, the process uses functionality in the OS to get access, and the OS can provide the "illusion" that each process
• has the CPU completely to itself (i.e., there are no other processes running)
• has a large, contiguous memory just for itself
• has exclusive access to system resources (i.e., it doesn't have to worry about sharing these resources with other processes)
Let's look at an every-day example of virtualisation. On a normal day, let's say you leave the house at 7:30am, drive to work, park your car at 8:10am, leave work at 5pm, pick up your car from the car park and drive home, arriving at 5:45pm, where the car sits in the garage until the next morning. In a car sharing scenario, your car could be somewhere else while you're not using it, e.g. someone could use it during the day to visit clients, and someone else working night shifts could have it from 6pm till 7am. Let's assume we can make sure that whenever you and the other two car users need the car, it is right where you and they expect it to be. Then we have turned the one physical car into three virtual cars. Operating systems do the same, sharing the scarce physical resources among multiple processes. For each process, the OS makes it look as if it had all the resources to itself. But it goes further: imagine everyone could leave some stuff in the glove compartment, but they always only see their own stuff! And the fuel gauge is always at the level where you left the car, no matter who drove it in the meantime. Operating systems isolate processes against each other, preventing them from accessing each others' resources, such as memory and files on disk.

when did operating systems become important

Operating Systems only became really important when the first computers arrived that had support for multiprogramming, the ability to run multiple programs at the same time. This was important since computers were expensive, so this scarce resource needed to be utilised as efficiently as possible. We have seen that CPUs execute an individual instruction at a time (in the module on Central Processing Units (CPUs)) - and while modern CPUs can in fact process multiple instructions in parallel, they are still not able to run multiple programs in parallel. The innovation of the first multiprogramming OSs was therefore to implement task switching, where each program is allowed to run for a certain amount of time, before the CPU switches to the next program. This is still one of the most important pieces of functionality in any modern OS kernel. With the advent of multiprogramming, another problem appeared, and it also needed to be solved by the OS. When two programs are running at the same time, we have to make sure that they cannot read or write each other's memory. Otherwise, an incorrect (or malicious) program can cause havoc and affect the stability, correctness and security of the entire system. Similarly, if different programs belong to different users, the OS must make sure that their files are protected from unauthorised access.

data-link layer

The data link layer (layer 2) defines the interface between hardware and software. It specifies how devices within one local area network, e.g. those connected directly via cables or radio waves to a switch, can exchange packets.

data storage

The data storage is where the data is kept, e.g. in the form of files on a disk. In a "traditional" application that doesn't use any networking, all four tasks would be performed on the same computer. But in a networked application, we can split up the work between the client and the server.

what is the internet model and the different layers

The first thing to note about the Internet Model is that it describes a packet switching network. That means that any communication between devices over the network happens in the form of (relatively) short packets, which are sent ("switched") across multiple intermediate points, which on the Internet are called routers. Since each packet can only contain a limited amount of data (typically in the order of 1500 bytes), larger data needs to be split up by the sender into individual packets, and reassembled by the receiver. Based on this idea of packet-switching, the Internet Model defines five different layers of abstraction. From the bottom up, they are the hardware, data link, network, transport and application layers.

virtualising the CPU

The goal of virtualising the CPU is to run multiple processes "simultaneously", but in a way that individual processes don't need to know that they are running in parallel with any other process. The Operating System creates this illusion of many processes running in parallel by switching between the processes several times per second. This poses two main challenges:
- A process shouldn't notice that it's not running continuously. For example, let's assume that the Operating System executes process A for 100 milliseconds, then switches to B for another 100 milliseconds, and then switches back to A. The programmer who writes the code for A shouldn't have to take into account which other processes might be running at the same time as A.
- Each process should get a fair share of the overall CPU time. Otherwise, the illusion of concurrency would break down. For example, if you want to play some music in the background while scrolling through a web page, the music player process and the web browser process both need to get enough time every second to do their work. If for one second, only the music player had the CPU to itself, the scrolling would start to lag behind, but if only the web browser was "hogging" the CPU, the music would stop.
The following sections develop solutions to these two challenges. The first section deals with how to virtualise the CPU, i.e., the mechanisms that the OS uses to switch between processes. The second section explains when to switch between processes, i.e., the policies that the OS uses to determine which process gets how much time, and which process to switch to next.

hardware layer / physical layer

The hardware layer (layer 1, also known as the physical layer) is concerned with the actual hardware, such as cables, plugs and sockets, antennas, etc. It also specifies the signals that are transmitted over cables or radio waves, i.e., how the sequence of bits that make up a packet is converted into electrical or optical wave forms.
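As a small example of the hardware layer's job, here is a sketch (my own simplification; real encodings such as Manchester coding are more involved) of converting a sequence of bits into signal levels and back:

```python
def encode(bits: str, volts: float = 5.0) -> list[float]:
    """Map each bit to a voltage level for one clock period: 1 -> +V, 0 -> -V."""
    return [volts if bit == "1" else -volts for bit in bits]

def decode(levels: list[float]) -> str:
    """Recover the bits from the sign of each sampled level."""
    return "".join("1" if level > 0 else "0" for level in levels)

signal = encode("10110010")
assert decode(signal) == "10110010"
```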

explain the network structure diagram

The left half shows a wireless local area network (WLAN) with three laptops, connected to a WLAN access point and router in the middle. The right half shows a network with a server (the large box) that's connected to a router via a switch. The two networks are both connected to the Internet, which we represent as a cloud. This simple network contains the four main hardware components that are used to build all modern computer networks.

abstraction

The main goal of an OS is to make computers easier to use, both for end users and for programmers. For end users, an OS typically provides a consistent user interface, and it manages multiple applications running simultaneously. Most OSs also provide some level of protection against malicious or buggy code. For programmers, the OS provides a programming interface that enables easy access to the hardware and input/output devices. The OS also manages system resources such as memory, storage and network.

We can summarise these functions in one word: abstraction. The OS hides some of the complexity behind consistent, well-documented interfaces - both for the end user and for the programmer. Abstraction is probably the most important concept in IT! Without many levels of abstraction, interfaces and layers built on top of each other, it would be impossible for anyone to master the complexity of a modern computer system.

virtualisation mechanisms

The mechanisms for virtualising a CPU classify each process as being in one of three states: ready, running, or blocked. When the process is created by loading program code into memory, it is put into the ready state. Ready means ready for execution, but not currently being executed by the CPU. When the OS decides that the process should now be executed, it puts it into the running state. We say that the OS schedules the process. In the running state, the code for the process is actually executed on the CPU.

Now one of two things can happen. Either the OS decides that the time for this process is up. In this case, the process is de-scheduled, putting it back into the ready state, from which it will be scheduled again in the future. Or, the process requests some I/O to happen, e.g. opening a file from an external storage. Since I/O can take a long time, and the process will have to wait until the file has been opened, the OS takes advantage of this opportunity and puts the process into the blocked state. As soon as the I/O is finished, the process will be put back into ready, from where it will be scheduled again.

A state transition diagram summarising these concepts is in the notes, along with a table showing the states a music player and a web browser running concurrently could be in over a number of time steps.
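The three states and the legal transitions between them can be captured in a few lines of code. This is my own sketch of the state transition diagram described above, not something from the notes:

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

# event name -> (state the process must be in, state it moves to)
TRANSITIONS = {
    "schedule":    (State.READY,   State.RUNNING),  # OS picks this process
    "deschedule":  (State.RUNNING, State.READY),    # its time is up
    "request_io":  (State.RUNNING, State.BLOCKED),  # waiting for I/O
    "io_finished": (State.BLOCKED, State.READY),    # ready to run again
}

def transition(state: State, event: str) -> State:
    source, target = TRANSITIONS[event]
    if state != source:
        raise ValueError(f"cannot {event} from {state.name}")
    return target

s = State.READY
s = transition(s, "schedule")       # ready -> running
s = transition(s, "request_io")     # running -> blocked
s = transition(s, "io_finished")    # blocked -> ready
assert s is State.READY
```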

network layer

The network layer (layer 3) is responsible for routing, i.e., deciding which path a packet takes through the network. There are often multiple possible paths, and each individual packet may take a different path from source to destination. The network layer is of course most important for routers, which are the network devices whose primary task it is to perform routing. But every client and server also has network layer software and performs routing, because it may be connected to more than one network (e.g. 4G and WiFi).
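The core routing decision can be pictured as a table lookup. Here is a toy sketch (the networks and next-hop names are made up for illustration):

```python
# forwarding table: destination network -> next hop
FORWARDING_TABLE = {
    "10.0.1.0/24": "router-A",
    "10.0.2.0/24": "router-B",
    "0.0.0.0/0":   "router-C",   # default route for everything else
}

def next_hop(destination_network: str) -> str:
    """Look up where to forward a packet, falling back to the default route."""
    return FORWARDING_TABLE.get(destination_network,
                                FORWARDING_TABLE["0.0.0.0/0"])

assert next_hop("10.0.2.0/24") == "router-B"
assert next_hop("192.168.5.0/24") == "router-C"   # unknown: default route
```

Real routers use longest-prefix matching on the destination IP address rather than an exact dictionary lookup, but the principle is the same.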

What is the main goal of connecting different computers with each other

The primary goal of connecting different computers with each other in a network is to enable them to communicate, i.e., to exchange information. This enables people to do all sorts of things, such as using the World Wide Web, instant messaging, video conferencing, online multiplayer games, ride-sharing apps, or video streaming services.

real virtual memory systems

The simple approach explained above is a bit unrealistic, and wouldn't work for modern computers and operating systems. The main drawback is that each process needs to be allocated a fixed block of physical RAM. Often, the OS can't yet know how much RAM a particular process will need when it loads it. E.g., when you open a word processor application, you can't know how long the document you're going to write will be.

Realistic virtual memory systems therefore implement a more complex approach, where memory is allocated in smaller chunks called pages. The OS keeps a list of pages for each process (rather than just a single block). A process can request a block of memory from the OS, which will either return an address from an existing page associated with the process, or add a new page for the process if the existing ones are already full. The advantage is that the pages for a single process don't all have to form a contiguous region of physical memory, as illustrated below. Clearly, this creates more work for the OS, since it needs to keep track of the mapping from virtual addresses (inside the process) to physical memory pages. A simple base register isn't enough any more. Modern CPUs in fact contain complex Memory Management Units, which are hardware circuits that perform much of the "paging" for the OS.

A paged virtual memory system has another big advantage: a page of virtual memory doesn't need to exist at all in physical RAM! For example, if a process hasn't used a certain page for a while, the OS could decide to store the contents of that page to the external storage (hard disk), and reuse the memory for other processes. As soon as the original process tries to access that page again, the Memory Management Unit will cause an interrupt (because the process tries to read from a page that doesn't exist), which gives the OS an opportunity to load the page back from disk (probably in exchange for another page that hasn't been used for a while). This works very well as long as the "swapping" of pages from memory to the hard disk and back doesn't become too frequent. You can sometimes observe a computer becoming unresponsive if an individual process requests so much memory that the OS is busy doing nothing but taking care of page swaps.
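Here is a much-simplified sketch of paged address translation (my own illustration; the page size and table contents are made up). A virtual address is split into a page number and an offset, and the page table maps page numbers to physical frames that need not be contiguous:

```python
PAGE_SIZE = 4096  # assumption: 4 KiB pages

# page table for one process: virtual page number -> physical frame number
page_table = {0: 7, 1: 2, 2: 9}   # note: the frames are not contiguous

def translate(virtual_address: int) -> int:
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # a real MMU raises an interrupt here, and the OS loads the page
        raise LookupError("page fault")
    return page_table[page] * PAGE_SIZE + offset

assert translate(0) == 7 * PAGE_SIZE                   # page 0 lives in frame 7
assert translate(PAGE_SIZE + 5) == 2 * PAGE_SIZE + 5   # page 1, offset 5
```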

transport layer

The transport layer (layer 4) establishes a logical connection between an application sending a message and the receiving application. It takes care of breaking up a large message into individual packets and reassembles them at the receiving side. It also makes sure that messages are received correctly, re-sending packets if they were received with errors (or not received at all). The transport layer therefore performs the main form of virtualisation that makes implementing networked applications much easier: From an application point of view, we can open a connection to a remote server, send a long message, and receive a reply, without worrying about anything that goes on at the lower-level layers.
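The re-sending idea can be sketched as a "stop-and-wait" loop (my own toy version; the send function below is a made-up stand-in for a lossy network, not a real API):

```python
import random

def send_and_wait_for_ack(packet: bytes) -> bool:
    """Stand-in for a lossy network: the ACK only arrives 70% of the time."""
    return random.random() < 0.7

def send_reliably(packet: bytes, max_retries: int = 10) -> int:
    """Re-send the packet until it is acknowledged; return the attempt count."""
    for attempt in range(1, max_retries + 1):
        if send_and_wait_for_ack(packet):
            return attempt
        # no ACK: assume the packet (or its ACK) was lost, so try again
    raise TimeoutError("no acknowledgement after max_retries attempts")

print("delivered after", send_reliably(b"segment 1"), "attempt(s)")
```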

protocol data units

These "envelopes" are called protocol data units (PDU), and the PDU for each layer has its own name. At the hardware layer, the PDU is simply a bit. At the data link layer, the PDU is called a frame. The network layer PDU is called a packet, and the transport layer PDU is a segment or a datagram (depending on the concrete protocol used). At the application layer, we generally talk about messages.

FIRST-COME FIRST-SERVED

This is a simple policy that you are familiar with from the supermarket checkout. Let's assume a single checkout is open, and five people are queuing. The first person buys two items, the next one three, the third has a single item, the fourth buys six items, and you are the final customer with just a single item in your trolley. Let's make the simple assumption that the time each customer takes at the checkout depends only on the number of items they buy. So customer 1 requires two time units, customer 4 requires six time units, and you take a single time unit.

We can now define a metric for how good the schedule is. A common metric is the turnaround time, which is the time between arriving at the end of the queue and leaving the supermarket. Let's assume that all customers arrive at roughly the same time; then your turnaround time would be 2+3+1+6+1 = 13 time units (if each item takes one minute to process, you'd be waiting 13 minutes at the checkout before you can leave the supermarket). Now for you as an individual, that's bad news: you only have a single item to buy, and you're waiting for 13 minutes!

But even overall, this schedule isn't great. We can compute the average turnaround time over all customers, which in this example is (2 + 5 + 6 + 12 + 13) / 5 = 38 / 5 = 7.6. So on average, these five customers wait 7.6 time units before they can leave. Let's look at a policy that can improve this average. The sketch below reproduces this calculation.
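A short sketch reproducing the supermarket example (the numbers are exactly the ones from above): under first-come first-served, a customer's turnaround time is the sum of all service times up to and including their own.

```python
service_times = [2, 3, 1, 6, 1]   # items (= time units) per customer, in order

turnarounds = []
elapsed = 0
for t in service_times:
    elapsed += t                  # this customer leaves once everyone
    turnarounds.append(elapsed)   # before them (and they) are done

print(turnarounds)                          # [2, 5, 6, 12, 13]
print(sum(turnarounds) / len(turnarounds))  # 7.6 time units on average
```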

how has the topology of the original ethernet cable evolved, refer to things like hub

The topology of the original Ethernet looked like this (wk. 9, pg. 5 notes): a single shared cable connected to everyone's computer.
- Advantage: very simple, and easy to add more computers to the network.
- Disadvantage: if something happened to one part of the cable, such as someone tripping over it, the whole network went down and the fault had to be found.
Hub-based topology is when a new device in the middle connects to each computer individually.
- Disadvantage: many cables are required to connect all the computers (in an office, for example) to the hub.
- Advantage: if one cable has an issue, the rest of the computers can still run.
- Every signal received on one port of the hub is sent to all other ports, so collisions can still happen if two signals are sent at once.

address space

We call the addresses that can be used by a process its address space. For example, as a MARIE programmer, you can use the entire RAM available in MARIE, so the address space of a program running on a MARIE CPU is 000 to FFF (since addresses in MARIE are 12 bits wide). This is the same situation as in early computers, which only ran a single program at a time. The OS was typically loaded as a library, starting at address 0, and the user program and data used the addresses behind the OS, as illustrated in the figure below. Since there is only one program (we can't really speak of processes since there is no scheduling or concurrency) running at a time, there is no need for memory protection or creating the illusion that the program has the memory to itself.
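As a quick check of the numbers (my own calculation, not from the notes): 12-bit addresses give 2^12 = 4096 distinct memory locations, which is exactly the range 000 to FFF in hexadecimal.

```python
assert 2 ** 12 == 4096   # number of addresses with 12 bits
assert 0xFFF == 4095     # the highest 12-bit address
```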

transmission types and examples

- analog signals for analog data: e.g. analog FM radio
- digital signals for digital data: e.g. old Ethernet, USB, the bus in a computer
- analog signals for digital data: e.g. modems, ADSL, Ethernet, WiFi, 4G, ... (see the sketch below)
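To illustrate the last case (analog signals for digital data), here is a much-simplified sketch of amplitude modulation (my own toy example, not a real modem standard): bit 1 is sent as a full-amplitude carrier wave, bit 0 as a half-amplitude one.

```python
import math

def modulate(bits, samples_per_bit=8, cycles_per_bit=2):
    """Produce a sampled sine-wave signal whose amplitude encodes the bits."""
    signal = []
    for bit in bits:
        amplitude = 1.0 if bit == "1" else 0.5
        for s in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * s / samples_per_bit
            signal.append(amplitude * math.sin(phase))
    return signal

wave = modulate("101")
assert len(wave) == 3 * 8   # 8 samples per bit
```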

network structure diagram

check notes

virtualisation mechanisms diagram

check notes

example of how virtualising the address works

check notes pg. 10, wk. 7

draw a diagram of the layers and explain it

in notes

what are typical transmission rates

- 1 Mbps (megabit per second, or million bits per second) from your home to your Internet Service Provider if you connect to the Internet using ADSL.
- 10-20 Mbps from your Internet Service Provider to your home if you connect using ADSL. Notice how these rates are asymmetric: your downloads are 10-20 times faster than your uploads. In fact, the "A" in "ADSL" stands for asymmetric for exactly this reason.
- 50-500 Mbps within a wireless local area network (WLAN, sometimes also called WiFi). This means that if you e.g. use your home WiFi to copy a file from your laptop to your phone, the file will be transmitted at between 50 and 500 Mbps, depending on the concrete WiFi technology that you're using.
- 1 Gbps (gigabit per second, or billion bits per second) within a typical cable-based LAN, for example within a Monash computer lab.
- 10 Gbps in many backbone networks.
- Tbps (terabits per second) in the fastest networks that are currently in development.
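These rates make it easy to estimate transfer times. A quick worked example (my own round numbers): how long does a 100-megabyte file take at some of the rates above?

```python
FILE_BITS = 100 * 8 * 10**6   # 100 megabytes = 800 million bits

for name, bits_per_second in [("1 Mbps ADSL upload", 10**6),
                              ("50 Mbps WiFi",       50 * 10**6),
                              ("1 Gbps LAN",         10**9)]:
    print(f"{name}: {FILE_BITS / bits_per_second:.1f} s")
# 1 Mbps ADSL upload: 800.0 s (over 13 minutes)
# 50 Mbps WiFi: 16.0 s
# 1 Gbps LAN: 0.8 s
```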

