COSC243 OS


I/O buffering

A buffer is used to provide communication between two processes running at different speeds. o The producer fills up a buffer; o then switches to a second buffer and signals the consumer; o the consumer reads the full buffer, then waits. Buffering is also used in file systems to improve performance.

What happens if an IRQ is set

The CPU saves its current location and jumps to the address (indicated by the signal) in the interrupt vector.

Process State

The operating system keeps track of the state of each process:
- New
- Running
- Waiting
- Ready
- Terminated

Blocking I/O

The process that issues a blocking I/O system call will wait until the I/O is completed

The file concept

The programmer doesn't care what medium the data is stored on; they just want to access it, so the OS is left to deal with it. The OS provides a logical unit of storage called a file. The user refers to files, and the OS maps files onto regions of secondary storage. The file is the smallest logical unit of secondary storage.

Deadlocks and Unsafe States

The system can be in an unsafe state without necessarily being in a deadlock. This is because the concept of a safe state is expressed in terms of the maximum set of resources claimed by each process; it's up to the actual processes themselves to determine whether/when to ask for the 'danger' resources.

Deadlock occurs in a resource-allocation graph when:

There is a cycle in which each process holds an allocated resource and is requesting another, and every instance of each resource type in the cycle has been allocated to processes in the cycle.

Page Replacement Algorithms

These are often evaluated with respect to a reference string of pages requested, i.e. a list of pages that will be accessed in some order. Examples: FIFO, the Optimal page replacement algorithm, LRU.
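A quick way to compare the algorithms is to count faults on a reference string. A minimal Python sketch (the reference string and frame count below are made up for illustration):

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, queue, faults = set(), [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.pop(0))  # evict the oldest page
            memory.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()  # insertion order doubles as recency order
    faults = 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)            # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict least recently used
            memory[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
```

With 3 frames, LRU faults one fewer time than FIFO on this particular string; neither matches Optimal in general.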

Multitasking System

One that can run multiple processes at the same time.

Disk Scheduling

This is a benefit of the kernel's I/O subsystem. These algorithms aim to decrease seek time and to serve requests for information on the disk fairly, in particular avoiding starvation. The algorithms focus on which cylinder is being accessed, not which sector, as moving the head takes much longer.

Bad Blocks

This is a damaged (defective) block, which can be replaced with a redundant sector, preventing it from being used. A program searches for bad blocks and keeps a record of the disk's bad blocks. Sophisticated approaches: o Keep a list of bad blocks that is frequently updated. o Set aside a number of SPARE SECTORS which the O/S can't see, and redirect bad blocks to them. (This is what SCSI disk controllers do.) The problem with this is that the kernel can't see which sectors are redirected, and therefore disk scheduling algorithms misbehave. This can be solved by putting the scheduling in the controller, but that has its own problems; another mitigation is to spread the spare sectors around the disk. SECTOR SLIPPING: shuffling the data on disk to make room for a spare block right next to the one it's replacing.

Virtual Machines

This is a program that emulates one machine (its kernel interface) so that software written for it can be run on other machines, e.g. the Java Virtual Machine (JVM).

external fragmentation.

This is space between processes in memory that has not been allocated. NOTE: there might be enough memory in total to fit a particular process, but not all in one place. A hole between every pair of processes is the worst case.

kill()

This is when another process terminates the process. You can only kill a process created by the same user. Can lead to zombie and orphan processes.

What is a trap

This is when the MMU detects, via the relocation register, that a process is trying to access memory that is not its own.

Kernels can deal with interrupts with threads as well explain how

This would be done with preemptive priority scheduling. It is done with threads instead of processes because threads are quicker to context-switch. They should be kernel-level threads so that the kernel knows about them for scheduling.

Opening and Closing Files

To avoid the overhead of searching, many systems require that a file is opened before operations are performed on it. • The system maintains an open file table. • This records the disk location of the file, and information about how it is currently being used.

Where is the Page table stored

Ideally it would be held in special-purpose registers; however, page tables are big, so this is not feasible. Keeping it in main memory would slow down every memory access, so a cache called the Translation Look-aside Buffer (TLB) is used: a special set of parallel-access registers called associative registers. The TLB holds the most recently used page-table entries.

Web servers

Without threads, a web server would have to fully service one person before moving on to the next. A daemon is the thread that listens for new requests.

Partially loaded processes advantages

• A process will not be constrained by the amount of available memory. • It should be possible to increase the number of processes being multitasked. • It will be quicker to swap processes in and out of memory during multitasking.

The Producer-Consumer Problem

• A producer process writes data to a buffer of fixed size. • A consumer process reads data from the buffer. The producer must wait if the buffer is full, and the consumer must wait if the buffer is empty. This creates a process-synchronisation issue. Two pointers (one for reading and one for writing) are used, each wrapping around when it runs off the end, creating a circular data structure.
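The two wrapping pointers can be sketched in Python (a single-threaded toy with illustrative names; real code also needs the synchronisation, e.g. semaphores, to make the waits happen):

```python
class BoundedBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.in_ptr = 0   # next slot the producer writes
        self.out_ptr = 0  # next slot the consumer reads
        self.count = 0    # items currently in the buffer

    def put(self, item):
        if self.count == self.size:
            raise BufferError("full: producer must wait")
        self.buf[self.in_ptr] = item
        self.in_ptr = (self.in_ptr + 1) % self.size   # wrap around
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("empty: consumer must wait")
        item = self.buf[self.out_ptr]
        self.out_ptr = (self.out_ptr + 1) % self.size  # wrap around
        self.count -= 1
        return item
```

The modulo arithmetic is what makes a flat array behave as a circle.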

File attributes list 6 + where they are stored

• File name • File type • Location • Size • Protection • Housekeeping information. They are stored in an ENTRY of a directory.

Character devices

get-character, put-character. Can support processing facilities like buffering and editing of input, e.g. accepting keyboard input a line at a time and treating some of the input as special.

Heap holds grows?

Global/permanent (dynamically allocated) variables; grows up.

Is a device hardware or software

hardware

each surface has a ___________ to read it

head

Job pool

Multiple jobs on a disk waiting to be executed. The OS decides which to execute next

How will the following influence page fault time (a) Install a faster CPU. (b) Install a bigger paging disk. (c) Increase the degree of multiprogramming. (d) Decrease the degree of multiprogramming. (e) Install more main memory. (f) Install a faster hard disk. (g) Increase the page size.

(a) Install a faster CPU. Reduces CPU utilisation: the CPU spends less time executing instructions because it is quicker, and therefore spends proportionally more of its time on page faults. (b) Install a bigger paging disk. No difference (it just provides more space for paging). (c) Increase the degree of multiprogramming. Worse, as there will be more page faults. (d) Decrease the degree of multiprogramming. Better, as there will be fewer page faults. (e) Install more main memory. Better, as more pages can be held in main memory at one time, reducing the number of page faults. (f) Install a faster hard disk. A little better, as the disk can copy pages (needed on page faults) to main memory more quickly. (g) Increase the page size. Indecisive, he said. Although each page holds more, the pages are bigger, so fewer pages fit into main memory at once, which means the replacement algorithms probably do worse with fewer frames to work with. (This is a hard question; there are many exceptions.)

In many Unix shells, a search path (the path environment variable) is used to locate executable files. This variable is often set in the shell's setup file in your home directory (eg .bashrc). (a) What is a search path? (b) Why is it important that this search path includes . (the current directory) only as the last directory to search if at all?

(a) A list of directories that is searched, in order, for a program/file. (b) You don't want the current working directory searched first, because a program there would be run instead of the ones in the other (system) directories, since it would be the first one found. You would lose the important functions of those other programs, and it poses a security issue: a virus could put malicious programs with standard names in the current working directory, and they would be run instead of the defaults, which could compromise your system. Therefore the current working directory is searched last, if at all.

Storage Management involves:

- Allocating and deallocating storage - Keeping track of storage in use and free storage - Transferring data between primary and secondary storage

Common device controller registers (7)

- DATA-IN: a register containing a byte of data for the CPU to read. - DATA-OUT: a register containing a byte of data written by the CPU. - STATUS: contains bits of device status, such as a BUSY bit indicating the controller's state, a DATA-READY bit indicating that the data in the register is ready, and error bits. - CONTROL: contains a command written by the CPU; a COMMAND-READY bit indicates the CPU has sent a new command to the control register. A handshaking protocol is established via the device's registers, using these flags to communicate between the CPU and the device.

Advantages of Cooperating processes:

- Information sharing - Computation speedup (introduces some parallelism into program execution) - Modularity (dividing system functions into multiple processes) - Convenience (you can do multiple things with files at once).

LRU Approximation Hardware

- Provide a last-time-used field for each entry in the page table - Keep a stack of page numbers most recently used - Associate a REFERENCE BIT with each page which is initially set to 0 and when used is set to 1 - associate several reference bits with each page, using subsequent bits to store the value of page's reference bit at a given moment

Multiprogrammed Batch Systems

- Several jobs held in main memory. - When one job has to wait for I/O, the OS switches the CPU to another job, keeping track of each job's state. This keeps the CPU busy more of the time.

I/O Device Management

- Tracks the status of each device - Allocates devices to particular processes - De-allocates devices - Schedules tasks for individual devices (disk scheduling)

Threads can...(3)

- block - have different states, e.g. ready, terminated - create child threads

Batch operating system (4)

- The first operating systems, introduced to increase the efficiency of I/O processing. - Programmers prepared their jobs as batches and gave them to operators, who grouped jobs together and submitted batches of jobs to the machine. - The OS transferred control from one job to another. - You couldn't interact with a program during execution (non-interactive). Refer to figure 1.

Time to access data in registers (if not specified)

0 ms

How many page tables do we need

1 per process

Responding to an Interrupt

1) We save the state of the current process, called its context. 2) We transfer control to a fixed memory location holding an appropriate interrupt-handling routine. 3) When the routine is finished, we restore the state of the interrupted process.

Deadlock prevention

1. Mutual Exclusion: require all resources to be sharable by processes, which is not always feasible, e.g. a printer. 2. Hold and Wait: force a process to be allocated all its resources before it starts execution, or only allow a process to request resources when it holds none. Problems: low resource utilisation and starvation; it's also hard to tell what resources dynamic processes will need. 3. No Preemption: allow a process to preempt a resource from another. With some resources you can't do this, as you would have to save state that can be returned to (e.g. you can't do this with a printer). 4. Circular Wait: impose a total ordering of all resource types, and require processes to request resources in increasing order. Programmers have to follow the same order in their programs, which is hard in practice. (Number 5 is really avoidance:) 5. Safe State: make processes declare in advance what resources they will use in the worst case. From this it can be judged whether there is a chance of deadlock, and if so, resource acquisition can be delayed.

Working set model

An approximation of the program's current locality. This is done using Δ (a time period that defines which pages count as recently referenced), which represents the working-set window. The working set is the set of most recently referenced pages. You want to allocate a process enough frames for its current working set to reduce thrashing. If a page hasn't been used for Δ time units it is removed from the working set. A working-set window that is too small will cause thrashing, and one that is too large will waste memory space.
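A minimal sketch of the idea in Python, approximating "Δ time units" by the last Δ references (the reference string is made up for illustration):

```python
def working_set(refs, t, delta):
    """Set of pages referenced in the last `delta` references up to time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])   # distinct pages in the window
```

A process would be allocated at least `len(working_set(refs, now, delta))` frames; shrinking delta shrinks the set (risking thrashing), growing it wastes frames.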

Suppose we have a demand-paged memory. The page table is held in registers. It takes 8 milliseconds to service a page-fault if an empty page is available or the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory access time is 100 nanoseconds. Assume that the page to be replaced is modified 70% of the time. What is the maximum acceptable page fault rate for an effective access time of no more than 200 nanoseconds?

100 ns to access if there is no page fault. Average of 16.4 ms to access if there is a page fault (0.7×20 + 0.3×8). Note: unless stated otherwise, assume the memory access time is included in the page-fault time. 200 ns = P×16.4 ms + (1−P)×100 ns, i.e. 200 = P×16.4×10^6 + (1−P)×100. After rearranging, P = 100/(16.4×10^6 − 100) ≈ 6.1×10^-6.
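The arithmetic can be checked directly (all times converted to nanoseconds):

```python
mem_access = 100                 # ns, no page fault
fault_modified = 20_000_000      # 20 ms in ns
fault_clean = 8_000_000          # 8 ms in ns

# Weighted average fault-service time: modified 70% of the time
avg_fault = 0.7 * fault_modified + 0.3 * fault_clean   # 16.4 ms

# Solve 200 = P*avg_fault + (1-P)*mem_access for the fault rate P
P = (200 - mem_access) / (avg_fault - mem_access)
```

P comes out around 6.1×10^-6, i.e. at most about one fault per 164,000 accesses.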

The Linux EXT2 inode is organised so that there are 12 pointers to actual file data- blocks which are filled first, then there is a pointer to an indirect file data-block which contains 512 pointers to data blocks, followed by a pointer to a double indirect data-block which contains pointers to 512 indirect data blocks, followed by a pointer to a triple indirect data-block which contains pointers to 512 double indirect data-blocks. (See the diagram on p23 of the overheads for Lecture 22 for an approximation.) Assuming that data-blocks are 2048 bytes long, what is the largest possible file that can be stored under Linux?

12×2048 + 512×2048 + 512×512×2048 + 512×512×512×2048 ≈ 2.75×10^11 bytes (about 256 GB). The early terms give quicker, more direct access, and the later terms of the equation give slower, more indirect access. To reference even more memory, one of the 12 direct pointers would have to be turned into a less direct one.
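The sum is easy to verify in Python:

```python
block = 2048   # bytes per data block
ptrs = 512     # pointers per indirect block

# direct + single indirect + double indirect + triple indirect
max_bytes = (12 + ptrs + ptrs**2 + ptrs**3) * block
```

The triple-indirect term dominates: 512^3 blocks is over 134 million of the roughly 134.5 million addressable blocks.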

Light weight process

A light weight process is one of multiple threads that make up a process.

The minimum addressable disk block is typically of the order of 512 or 1024 bytes. This means that any read or write to the disk actually consists of reading or writing 512 or 1024 bytes of data. What are the implications for reading/writing small amounts of data (e.g. integers)?

A lot of time or space is wasted: if the integers are stored one per block, all the leftover bytes of each block still need to be read/written. Therefore it's better to put them into an array so that they are stored contiguously, one block after the other.

Disk Formatting

A new disk is just platters of magnetic material, which need to be formatted before the disk can be used. Low-level formatting (normally done in the factory) creates the sectors on the disk and fills them with an initial data structure. This includes: - a header and trailer containing information used by the disk controller, e.g. the sector number and an ERROR-CORRECTING CODE (ECC); - a data area, usually 512 bytes. Partitioning is done by the operating system: it divides the drive into areas, and different file systems can be used on these partitions. Logical formatting means making an (empty) file system. Low-level formatting doesn't leave traces of the data you previously had on the disk, but logical formatting does.

Process Creation

A process is created by another process: we talk of parent and child processes. • A process can create a copy of itself using "fork()"; • The child process can then call "exec()" to load a new program. The two processes can run at the same time, and the child can even share the parent's resources.
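The fork/wait pattern can be sketched with the POSIX wrappers in Python's os module (Unix only; the exit status 7 is just an illustrative value — a real child would typically call an exec function here instead):

```python
import os

def run_child_and_wait():
    pid = os.fork()
    if pid == 0:
        # child process: real code would call os.execv(...) here
        os._exit(7)                      # terminate with a status
    # parent process: block until the child terminates
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)        # recover the child's exit status
```

The parent's waitpid() is what lets the OS clean up the child's PCB, avoiding a zombie.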

asynchronous I/O

A process that issues an asynchronous I/O call will not wait for the I/O operation to complete; once it has completed, the data is transferred to the process. Here you don't know how long the I/O will take, but you know all the data will have been transferred once the I/O operation is complete. (It sounds like the address of the process buffer to transfer the data to is stored when the request is made.)

What is a process?

A program in execution (its dynamic/has a life time)

Deadlock

A set of processes is in a deadlocked state if every process is waiting for an event that can only be caused by another process in the set.

Programmer interface

A set of system calls

directories

A simple model: like a table that holds all files and their locations in secondary storage. - File names have to be unique, so they become lengthy. - Searching takes a long time, as you have to search through every entry. Tree-structured: one directory per user, with a master directory that is a table of the user directories. This idea can be extended to make directories inside directories, creating a path which specifies where a file should be looked for.

Spooling (I/O)

A spool is a buffer that holds output for a device that cannot accept interleaved data streams - e.g. a printer. • Rather than sending data straight to the printer, the OS stores the data in buffers. • The printer can then process the output from one buffer at a time.

Q: Why are page sizes always powers of 2?

A: So that a binary address can be broken down into a page number and an offset without difficult rules: the offset is just the low-order bits.
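The split is two bit operations when the page size is a power of 2 (the 4 KB page size below is illustrative):

```python
PAGE_SIZE = 4096                            # must be a power of 2
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12 bits for 4 KB pages

def split(addr):
    page = addr >> OFFSET_BITS        # high-order bits: page number
    offset = addr & (PAGE_SIZE - 1)   # low-order bits: offset in page
    return page, offset
```

With a non-power-of-2 page size this would need a division and a modulo instead of a shift and a mask.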

Why is the parent process given the child process' PID?

It allows the parent to keep track of the child process while it carries out its computation. The parent process should wait() after a fork so that it picks up the value/status the child returns to it via exit() or kill(). The parent can then remove the child's PCB (found via the PID) and carry on with its execution after the wait(). Note that it can use the return value/status from the child in its computation. While a process is in wait() it is not in the ready queue, i.e. it is blocked.

System Calls

Programs request hardware services through the kernel's system calls; invoking one involves a mode switch into the kernel. They are written in a low-level language. A system call is an OS service, and the mechanism stops programs from accessing things they shouldn't be able to.

Operating System is defined as?

An operating system is a program that manages the different aspects of the operation of the machine. • Manages processes running on the machine. • Manages data storage, like RAM and file systems, on the machine. • Manages input/output devices associated with the machine. • Manages networks and communications with other machines. • Manages the user interface. • Manages security and protection issues. It runs in privileged mode, i.e. it has access to instructions and hardware that processes must request access to via system calls. It is running on the machine from boot-up.

ssize_t read(int fd, void *buf, size_t count);

Attempts to read up to count bytes from the file descriptor fd into the buffer starting at buf. On success, the number of bytes read is returned and the file position is advanced by this number; it may be less than count, and 0 means end of file. On error, -1 is returned and errno is set to the error code.

Race Condition

Caused by multiple processes sharing a variable which they may read and write. A process in an RR schedule (with a SMALL time quantum) may load the variable into a register and manipulate it, but be SCHEDULED OUT before storing the new value. A second process might then load the (old) value, manipulate it, and also be scheduled out before storing. Each process then stores its own version of the variable in a later time quantum, so the final value is whatever the last process to execute its store instruction wrote. That value is wrong, because it does not take into account the manipulation the other process performed. E.g. if producer and consumer processes run in a round robin with a small time quantum, it is possible that both read the shared counter variable into their registers and manipulate it in their first time quanta, and then in later quanta each stores its own value, either (old counter)+1 or (old counter)−1. Whichever process executes its store instruction last wins, and the counter ends up incorrect. (Sounds like I should give an in-depth example in the exam with made-up register values.)
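The lost-update interleaving can be replayed deterministically; each "process" below is just straight-line Python with private register variables, and the comments mark the schedule points (the initial value 5 is made up):

```python
def lost_update(counter=5):
    producer_reg = counter    # producer loads the shared counter
    # -- producer scheduled out before storing --
    consumer_reg = counter    # consumer loads the same old value
    producer_reg += 1         # producer's private copy: counter + 1
    consumer_reg -= 1         # consumer's private copy: counter - 1
    counter = producer_reg    # producer stores counter + 1
    counter = consumer_reg    # consumer stores last: its value wins
    return counter
```

A correct serial execution of +1 then −1 would leave the counter unchanged; here the increment is lost entirely.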

Hierarchical Structure in Operating Systems

A central requirement of an OS is modularity: i.e. organising processes into hierarchical levels, and imposing constraints on how processes in different levels can communicate with each other. Different OSs enforce different degrees of modularity (MS-DOS doesn't enforce any). What you want is for processes to talk to the hardware via the kernel (system calls). Some modularity in all OSs is found in the distinction between system programs and application programs.

Main things devices vary in

Character stream vs block: a character-stream device transfers bytes one by one, whereas a block device transfers a block of bytes as a unit. Sequential vs random access: a sequential device transfers data in a fixed order determined by the device, whereas the user of a random-access device can instruct the device to seek to any of the available data storage locations. Synchronous vs asynchronous: a synchronous device performs data transfers with predictable response times; an asynchronous device exhibits irregular or unpredictable response times. Sharable vs dedicated: a sharable device can be used concurrently by several processes or threads; a dedicated device cannot. O/Ss are nevertheless able to work with a smallish set of device types. Each type can be associated with a standard set of commands that the device driver will accept.

testAndSet (Overleaf: java simulation)

Code in L18. MAYBE: Critical sections are protected by locks; a process must acquire a lock to enter its critical section, and on exiting it releases the lock so other processes can acquire it. testAndSet is an instruction that is executed atomically, so even if two processes call the instruction at the same time, the calls execute one after the other. It reads the lock's value into temp, then sets the target value (the lock) to true — which is correct either way, because the lock was either already true, or the calling process is about to enter its critical section and so needs it set to true. The previous value of the lock (target) is then returned through temp.get(). LOOK at code L18.
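The semantics (not the Java simulation from L18) can be sketched in Python. NOTE: ordinary Python code cannot really be atomic; on real hardware test-and-set is a single uninterruptible instruction, which is the whole point:

```python
class Lock:
    def __init__(self):
        self.target = False       # False = lock is free

    def test_and_set(self):
        old = self.target         # read the current value...
        self.target = True        # ...and unconditionally set it to True
        return old                # caller got the lock iff old was False

def acquire(lock):
    while lock.test_and_set():    # spin until we observe False
        pass                      # busy wait

def release(lock):
    lock.target = False           # let the next spinner in
```

Setting target to True unconditionally is safe: either it was already True (some other process holds the lock) or the caller is taking the lock and needs it True anyway.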

handshaking in Programmed I/O

Communication between the CPU and the device controller goes as follows: 1. The CPU reads the busy bit until it is clear. 2. The CPU writes a byte to the data-out register. 3. The CPU sets the command-ready bit and writes a write command to the control register. 4. The controller, seeing that the command-ready bit is set, reads the control register. It sees the write command and immediately sets its busy bit; then it reads the data-out register and writes the byte to the device. 5. The controller clears the command-ready bit and then the busy bit.

When are symbols converted into addresses (1)

At compile time. This creates either: - absolute code, if it is known where the process will reside in memory; - relocatable code, if it is not known, i.e. programs designed as separate modules are compiled separately and therefore need to be linked so that variables are correctly bound.

Distributed System

Computation is distributed among several processors. The processors don't share memory or a clock; they communicate via data lines, e.g. the WWW. Advantages and disadvantages are like those of a parallel OS.

Open file table

Contains, for each open file: name - location index - pointer - open count. The open count keeps track of how many processes are using the file. The pointer is the current position of the process accessing it. LOOK UP LOCATION INDEX.

Synchronisation Hardware

You could disallow interrupts while shared variables are being accessed, but this is not feasible on multiprocessor machines. Another hardware solution is to provide special atomic instructions, which are executed atomically, without interruption. Examples are testAndSet and compareAndSwap (CAS).

Network Devices

Create a socket; connect to a local or remote socket; listen for remote applications to connect to local sockets; send information to a socket; receive information from a socket; select monitors a set of sockets.

Memory management

Deals with how to get processes into main memory, and the organisation of that memory. Accesses addresses in primary memory

DMA

Direct Memory Access. Solves the labour involved in programmed I/O by allowing large chunks of data to be sent with minimal main-CPU intervention. A DMA controller is used: the CPU writes a DMA command block saying which bytes are to be transferred and where to put them. The DMA controller then does the handshaking with the device using its own processor, while the main CPU can run other work at the same time. The DMA controller takes control of the memory bus and tells the device where to read/write the data. After the transfer, the DMA controller interrupts the CPU to say the data transfer is complete. Because the DMA controller has its own processor to do the transfer, the main CPU doesn't have to.

Partially loaded processes

Dynamic loading/linking is when a process may be executing while not all of its pages are loaded into main memory, e.g. error-handling pages.

Indexed Allocation

Each file has an index block containing a table specifying the physical block for each logical block; the directory entry references this index block. Corruption of a data-block pointer is not as bad as corruption of the index block itself. adv: - very direct access; - no external fragmentation, as blocks can be scattered around the disk. disadv: - internal fragmentation, as a large array has to be allocated to hold the indexes and it's difficult to decide its size; - lots of head seeks to and from the index and the blocks.
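The direct access works as one table lookup; a toy sketch (the index-block contents and the 2048-byte block size are illustrative):

```python
BLOCK = 2048   # bytes per block

def locate(byte_offset, index_block):
    """Map a byte offset in the file to (physical block, offset in block)."""
    logical = byte_offset // BLOCK        # which logical block of the file
    physical = index_block[logical]       # single lookup: no pointer chasing
    return physical, byte_offset % BLOCK
```

Compare with linked allocation, where reaching the same byte would mean following a chain of block pointers.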

More complex method of allocating memory to processes

Each free region is termed a HOLE. When a process arrives it is put into a hole big enough for it. If the hole is bigger than the process, the new (smaller) hole created is recorded. If there is no hole big enough, the process has to wait. This has external fragmentation but no internal fragmentation.
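A first-fit sketch of the hole bookkeeping, with holes as (start, size) pairs (the policy and data layout are one simple choice among several, e.g. best fit or worst fit):

```python
def first_fit(holes, size):
    """Allocate `size` from the first hole big enough; mutate the hole list."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                del holes[i]                                  # hole consumed
            else:
                holes[i] = (start + size, hole_size - size)   # smaller hole left
            return start
    return None   # no hole big enough: the process must wait
```

The shrinking (start+size, hole_size-size) entry is exactly the "new hole we create is recorded" step; the leftover small holes are the external fragmentation.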

Linked Allocation

Each file is a linked list of disk blocks: the directory has pointers to the first and last blocks, and each block contains a pointer to the next. adv: no external fragmentation, as new blocks can be allocated anywhere, so files can be any size without the need to preallocate blocks. disadv: - only effective for sequential access, as to get to a block in the middle of the file you have to follow a series of pointers; - pointers take up space, but this can be addressed by CLUSTERING blocks (groups of contiguous blocks); - internal fragmentation, which gets worse with bigger clusters; - not reliable: if one pointer is lost/damaged, everything after it is lost.

Contiguous allocation.

Each file occupies a set of contiguous blocks on the disk. adv: sequential access is good, as the next byte is in the same block or the next block; direct access is good, as the required block can just be counted off. disadv: external fragmentation (through adding and deleting files); internal fragmentation as files grow and shrink.

Priority of interrupts

Errors are the most urgent, e.g. an invalid memory access will generate garbage if not caught. Device-generated interrupts come next, as if an I/O buffer overflows you lose data. Traps aren't major and can therefore wait.

3 types of interrupts

Errors/EXCEPTIONS; device-generated interrupts; software interrupts/TRAPS. adv: the CPU avoids busy waiting. disadv: extra hardware; difficult to implement due to e.g. interrupt priorities; processing interrupts involves time-expensive context switching.

Resource-Allocation Graphs: request edge

Figure 4.

Readers Writers Problem

For processes that share a file. Reading processes don't change values, so multiple readers can run at once; writing processes do change values, so only one writer can run at a time, and no readers should be able to read while a process is writing. Can be solved with semaphores.

Paging

For this, the logical address space is broken down (partitioned) into FIXED-SIZE units called pages. Main memory is broken into units of the SAME SIZE called frames. Logical addresses are broken down by special hardware into two components: a page number (identifying the page) and an offset (an address within that page). KNOW DIAGRAM Figure 8. A page table indicates which frame each page is stored in. Paging allows non-contiguous storage and implements run-time address binding.

Priority Scheduling (I/O)

Goes to requests with higher priorities e.g. page faults so that they are handled first

Monitors

A high-level language construct which makes synchronisation easy for programmers. - It's an abstract data type: it encapsulates private data with public methods to operate on the data. - The methods it provides are guaranteed to operate with mutual exclusion.

When choosing disk scheduling you need to consider

Consider how much head movement there will be given how the disk is set up to store files. There are many variables that can influence this, and therefore the disk-scheduling algorithm should be implemented as a replaceable module in the kernel. - Contiguous allocation: head movement is minimised with sequential access, so you are better off with C-SCAN, because it does less work; if blocks are organised sequentially (middle to outside) then use C-SCAN in the same direction. - Linked allocation has more head seeks, so you should use SSTF: the distance from inside to outside is greater than from the middle to either edge, so the middle of the disk is favoured; the FAT or swap space should go there, as it is accessed frequently, and you should cache the most recent entries if possible. - FATs/index allocation: cache the FAT or index block. If you can't, implement an algorithm with a hotspot that the head returns to more often than other regions (e.g. SCAN) and put the table there, so it is at most half a sweep away; but still cache the most recently used entries.

Critical Section Problem

How to choose a protocol for implementing critical sections that satisfies three requirements: - Mutual Exclusion = only one process can execute its critical section at a time. - Bounded Waiting = when a process wants to enter its critical section, there must be a bound on the number of times other processes can enter their critical sections before that process gets a chance. - Progress = if a process wants to enter its critical section, only processes not executing their critical sections may take part in deciding which process enters next, and the decision can't be postponed indefinitely.

Batch Operating System negatives:

I/O still really slow compared to CPU processing so CPU was idle for a lot of time

Kernel I/O services

The I/O subsystem here is part of the kernel. It provides spooling, I/O buffering and scheduling.

Entry sets

If a thread calls a synchronized method and another thread already owns the lock, the calling thread blocks and is put into the entry set for the object. • When the lock is released by the thread currently using it, the JVM checks the entry set for the object associated with the method. • If this set is nonempty, the JVM selects an arbitrary thread from the set to be the next owner of the lock. In short: only one thread can execute the critical section, by holding the lock. A thread that wants to enter but doesn't have the lock waits in the entry set, which is like a queue. When the thread with the lock has finished its critical section, it frees the lock, and an arbitrary thread from the entry set is given the lock and can run its critical section.

Deadlock Detection

If a deadlock has occurred you want to recover from it. If there is just one instance of each resource type, we can use the resource-allocation graph to detect a deadlock. We usually check for deadlock when CPU utilisation falls below a threshold.

How does O/S keep track of a process' page table

In PCB, as a pointer

Mounting

Mounting To make a file system available to processes on the system, it must be mounted. • The operating system is given the name of the device to be mounted. • It is also given a directory from which the file system will be accessible. (E.g. /user1.) • The files in the mounted system will then be available as if they were files in that directory. (E.g. /user1/newfile.)

The dining philosophers problem:

In this problem there are 5 philosophers sitting at a table. They cycle through 3 states: thinking, hungry and eating. To eat, a philosopher must first grab the chopstick on their left and then the one on their right. There are only 5 chopsticks, so if everyone grabs their left chopstick at once there are no chopsticks left for anyone to grab with their right hand. No one will ever be able to eat, since no one will put down their chopstick for a neighbour to pick up. This is an example of deadlock.

Time Sharing System

An interactive, multitasking system: it constantly checks for user input while programs run. Several interactive programs are multiprogrammed, and the CPU switches between them quickly so they appear to run independently (though slower than they would by themselves). This allows several users to share one CPU.

Multithreading models

Is a system that supports both kernel and user threads via: • In a many-to-one model, many user threads are mapped onto a single kernel thread. Problem: the kernel isn't informed when a thread blocks. • In a one-to-one model, each user thread is mapped onto a single kernel thread. Problem: there's a fixed maximum number of kernel threads. • In a many-to-many model, a set of user threads is multiplexed onto a (smaller or equal) set of kernel threads. (You need to have an extra mechanism that allocates each user thread to a kernel thread.) This scheme avoids the disadvantages of both above schemes.

How does OS handle many possible devices?

It imposes a standard interface protocol so that the OS doesn't have to be rewritten for each new device. Special kernel modules called device drivers encapsulate device-specific information and translate system calls into device-specific commands. (Device drivers are good modular design: the kernel doesn't have to be rewritten every time a new device comes along.) Each device has a device driver, and drivers are software, so the disadvantage is that this is a little slower. USB devices carry their device driver in ROM; if the computer has never seen the device before, the driver is installed from there.

Problems with low-level synchronisation tools

It's fairly easy for programmers to misuse a synchronisation tool like a semaphore.

Java synchronized keyword

Java doesn't provide monitors directly, but it provides a similar service through the synchronized keyword. • Every object in Java is associated with something called a lock. • Normally, when an object has one of its methods invoked, this lock is ignored. • But if you declare a method as synchronized, a thread calling the method needs to own the lock for the object. If a thread calls a synchronized method and the lock for the object is unowned, it becomes the owner and can enter the method. When it exits the method, it releases the lock.
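
A minimal sketch (not from the course notes; class and method names are mine) of what synchronized buys you: two threads increment a shared counter through a synchronized method, so the increments never interleave and the final count is deterministic.

```java
// Sketch: a counter protected by Java's built-in per-object lock.
public class SyncCounter {
    private int count = 0;

    // Only one thread may hold this object's lock and run this method at a time.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) c.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 200000; without synchronized, often less
    }
}
```

Without the synchronized keyword, the two `count++` operations (read, add, write) could interleave and updates would be lost.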

I/O hardware (incl. picture)

The I/O device is linked to the machine via a port. Each port has a unique address to identify it. The link itself is normally a set of wires called a bus, which has a protocol: an agreement on how communication signals are sent. At the device end of the link is a device controller, which has its own processor and can reduce the load on the main CPU by handling things like error checking, interrupts and buffering. It also has its own registers. Device controllers are hardware.

Scheduling Queues

Linked lists are usually used for these. • Job queue: all the processes in the system (including those on disk). • Ready queue: all the processes which are ready to execute. • Device queue: all the processes waiting to use a particular device. (One queue per device.)

When are symbols converted into addresses (2)

Load time: where the process will reside in memory is not known at compile time, so the compiler generates relocatable code (at compile time). The addresses are then bound when the process is loaded into main memory at load time.

How to convert page number + offset to frame number + offset

Look at the page size and create a range for each possible page number. e.g. with page size 10, logical page 0 covers addresses 0-9 and logical page 1 covers 10-19. Create the physical ranges by replacing the page number with the frame number from the page table. e.g. if page 3 maps to frame 6, the logical range is 30-39 (with page size 10) and the physical range is 60-69.
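
The same translation can be done arithmetically. This is a sketch (the helper name and page table are mine, only for illustration): divide by the page size to get the page, take the remainder as the offset, then rebuild the address with the frame number.

```java
// Sketch: logical-to-physical address translation with a page table.
public class PagingExample {
    // pageTable[p] gives the frame number holding logical page p.
    static int translate(int logicalAddress, int pageSize, int[] pageTable) {
        int page = logicalAddress / pageSize;   // which logical page
        int offset = logicalAddress % pageSize; // position within the page
        return pageTable[page] * pageSize + offset;
    }

    public static void main(String[] args) {
        int[] pageTable = {0, 0, 0, 6};  // page 3 -> frame 6, as in the example above
        // Logical address 35 is in page 3 (range 30-39) at offset 5,
        // so it maps into frame 6's range 60-69.
        System.out.println(translate(35, 10, pageTable)); // 65
    }
}
```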

memory management unit

Maps logical addresses to physical addresses. • It takes a logical address generated by the CPU, and adds a number N to compute the physical address. • The number is held in a relocation register. (Know the diagram, including both the relocation and limit registers.)

Why is the parent's address space copied to the child process (with fork()) if execlp is just going to wipe it all out?

The child may not call execlp at all, and even when it does, it still needs the parent's space to pass parameters. Copy-on-write (COW) is used to avoid actually copying the address space.

Internal Fragmentation

Memory space that has been allocated to a process but is not used by it. It is the price of fixed-size allocation, which gets rid of the small external fragments between processes.

Error recovery and RAIDs

Modern error-recovery strategies often involve cooperation between disks. One method is RAID (redundant array of independent disks) • Each block of data is broken into sub-blocks, with one sub-block stored on each disk. • The disks have their rotations synchronised. The independent disks allow for redundancy in storage. - MIRRORING: each disk holds all the data - BLOCK INTERLEAVED PARITY: a parity bit for the group of sub-blocks is written to a special PARITY BLOCK. if one of the sub-blocks is lost, it can be recovered from the other sub-blocks plus the parity block.

Parallel Operating System:

Multiple CPUs that share the bus, clock, memory and devices. Improves performance and reliability, but you need to consider issues like RAW hazards. The throughput increase is not linear in the number of processors.

Conditions for deadlock

Mutual exclusion: at least one of the held resources must be nonsharable. Hold and wait: there must be at least one process that's holding a resource and waiting for another resource. No preemption: a resource can only be released by the process that's holding it, i.e. you can't deprive a process of a resource it already holds. Circular wait: • P1 is waiting on a resource held by P2; • P2 is waiting on . . . • Pn is waiting on a resource held by P1.

Assume you have a page reference string for a process with m frames, initially all empty. The page reference string has length p with n distinct numbers occurring in it. For any page replacement algorithm: (a) What is a lower bound on the number of page faults? (b) What is an upper bound on the number of page faults?

Note a page reference string is a series of page numbers/reference e.g. 1 2 3 1 2 3 Its length is how many number/references in that page reference e.g. 6 Distinct numbers occurring in the page reference string is how many different page references there are in page reference string e.g. 3. (a) What is a lower bound on the number of page faults? n (b) What is an upper bound on the number of page faults? p

FIFO incl. disadvantages and advantages

The oldest page is replaced. Adv: simple to understand and implement. Disadv: the first pages loaded might be ones that are often referenced. Belady's anomaly: it is possible to increase the number of page faults by increasing the number of frames in memory (which goes against the normal trend), because pages are evicted purely by age, regardless of how likely they are to be needed again.
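
A sketch of FIFO replacement (my own illustration, not course code): count page faults for a reference string with a fixed number of frames. The main method runs the classic reference string that exhibits Belady's anomaly: 3 frames give 9 faults, 4 frames give 10.

```java
import java.util.ArrayDeque;
import java.util.HashSet;

// Sketch: FIFO page replacement over a reference string.
public class FifoPaging {
    static int countFaults(int[] refs, int frames) {
        ArrayDeque<Integer> queue = new ArrayDeque<>(); // pages in arrival order
        HashSet<Integer> inMemory = new HashSet<>();
        int faults = 0;
        for (int page : refs) {
            if (inMemory.contains(page)) continue;      // hit: nothing to do
            faults++;
            if (queue.size() == frames) {               // memory full: evict the oldest page
                inMemory.remove(queue.removeFirst());
            }
            queue.addLast(page);
            inMemory.add(page);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        System.out.println(countFaults(refs, 3)); // 9
        System.out.println(countFaults(refs, 4)); // 10 -- more frames, more faults
    }
}
```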

Pure demand paging

Only loading pages into main memory when they are referenced. Note pages already loaded can still be swapped out. Disadvantage: a lot of page faults at the start of the process.

Application Programs

Ordinary users interact with it

How page table can do valid/invalid bit

Page table generally includes other pages that are not mapped to that process. These page numbers will have an associated bit with them to say that they are invalid. Valid ones will have a 0 bit. protection bit can also be present e.g. read-write/read-only Note invalid can also represent that the page hasn't been loaded into main memory

Java, creating threads

Pass a Runnable object into the Thread constructor, OR make a class that extends Thread. Both need a run method.
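
Both ways, sketched minimally (class names are mine):

```java
// Sketch: the two ways of creating a thread in Java.
public class ThreadDemo {
    static volatile boolean ranRunnable = false;
    static volatile boolean ranSubclass = false;

    // Way 1: implement Runnable and pass it to the Thread constructor.
    static class Worker implements Runnable {
        public void run() { ranRunnable = true; }
    }

    // Way 2: extend Thread and override run().
    static class MyThread extends Thread {
        public void run() { ranSubclass = true; }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Worker());
        Thread t2 = new MyThread();
        t1.start(); t2.start();  // start(), not run(): run() would execute in the caller's thread
        t1.join(); t2.join();
        System.out.println(ranRunnable && ranSubclass); // true
    }
}
```

The Runnable form is usually preferred, since the class can still extend something else.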

Storage includes?

Primary storage = main memory. Secondary storage = hard disk (internal). Tertiary storage = floppy disks, tapes, CDs (external).

Types of System Calls (5)

Process Control Memory Management File/Device Manipulation Housekeeping Communications/Socket API

Process management involves:

Process Control Block (PCB) - Creating and deleting processes - Suspending and resuming processes -Process synchronization - Process communication - Deadlock handling

Deadlock recovery

Process termination: • Abort all deadlocked processes. • Abort one process at a time Resource preemption: • Preempt resources one at a time (but in what order?) • Affected processes need to be restarted. (Or alternatively ROLLED BACK).

Operations on resources

Processes can: REQUEST a resource type be GRANTED a resource USE a resource RELEASE a resource (implemented in system calls or sometimes semaphores)

The Flow of Processes in an Operating System

Processes move from one type of queue to another as different things happen to them. A queueing diagram is often used to represent this. Refer to Figure 3.

Virtual Machines Benefits

Protection: users aren't even aware there are other users Good for system consolidation: Increases hardware utilization Solution to system compatibility problems

System programs

Provide general-purpose low-level functions packaged in a way that is convenient for a user (e.g. a system administrator) to employ. The set of these programs defines the user interface.

How are processes stopped from accessing memory that is not theirs?

Relocation register and a limit register

Optimal Page incl. disadvantages and advantages

Replace the page that won't be used for the longest amount of time. Adv: produces the optimum, i.e. the fewest page faults. Disadv: can't predict the future reference string.

3. What are the advantages and disadvantages of recording the name of the creating program with the file's attributes (as is done in MAC-OS)?

The resource fork holds management information. Typically the program we created the file with is the one we want to open it with, so that program is recorded in this information, and when the file is double-clicked it is opened with that program. This is quick, but it doesn't work if the file was not created on the system. The alternative is a table that looks up the default program for the file's extension. This solves the issue above but is slower, and if the table gets cleared the user must redefine each default program manually.

When are symbols converted into addresses (3)

Run time: used if a process can be moved during execution from one place in memory to another; binding is delayed until execution time. Run-time binding is useful for saving memory: • Only loading modules if they're needed. • Allowing several processes to share one copy of a module. e.g. a dynamically loaded system library is loaded at run time and shared between processes, so only one copy of the library is made in main memory.

Shortest Seek-Time First Scheduling

SSTF: choose the request which is the shortest distance from the current head position. Adv: much shorter seek times, as big swings are eliminated unless they are the only seeks left. Disadv: starvation can occur; if there is a constant stream of close seeks, a request far away will not be serviced. Biased toward the middle cylinders. Note it is technically not optimal in head seek time, as it can end up jumping from one side to the other from a middle position depending on what's closest.
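
A sketch of SSTF (my own illustration; the request numbers are made up): repeatedly service the pending request closest to the current head position and total up the head movement.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: shortest-seek-time-first disk scheduling.
public class SstfDemo {
    static int totalMovement(int head, List<Integer> requests) {
        List<Integer> pending = new ArrayList<>(requests);
        int total = 0;
        while (!pending.isEmpty()) {
            int best = 0; // index of the pending request closest to the head
            for (int i = 1; i < pending.size(); i++) {
                if (Math.abs(pending.get(i) - head) < Math.abs(pending.get(best) - head)) {
                    best = i;
                }
            }
            int next = pending.remove(best);
            total += Math.abs(next - head);
            head = next;
        }
        return total;
    }

    public static void main(String[] args) {
        // Head at cylinder 50; requests at 45, 70, 10.
        // SSTF order: 45 (5), 70 (25), 10 (60) = 90 cylinders of movement.
        System.out.println(totalMovement(50, List.of(45, 70, 10))); // 90
    }
}
```

Note the example also shows the non-optimality mentioned above: servicing 10 before 70 (5 + 35 + 60 = 100) is worse here, but other arrangements of requests can make SSTF's greedy choice suboptimal.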

Process Control Blocks (PCB)

The operating system keeps record of each process in the system. Can be prioritized or interrupted Stores information in main memory like: - Process ID number (uniquely identifies process) - Pointer (makes linked list by pointer to next process in list) - Process State - Program Counter - Contents of CPU registers - Memory management information - I/O status information (allocated resources) - Accounting information

Deadlock avoidance

Safe state: processes declare the maximum resources they might need, and there exists a sequence of the processes such that each process can be satisfied using the free resources plus the resources held by processes earlier in the sequence (which will have finished and released them by then; the first process in the sequence waits on no held resources). This means the processes might not all run at once and instead effectively follow a sequence, but you don't need to order processes specially or impose an order on execution. If a request would make the state unsafe you just suspend the process for a while and recheck. Note the time to wait before returning to a safe state is unpredictable.

Spooling (def + 2)

Simultaneous Peripheral Operations On Line. A solution to the I/O bottleneck: while I/O is being performed for one job, the CPU executes another job. A buffer in a secondary storage medium (disk) is needed to store jobs being input into the system and the output of jobs in the system (the job pool). This allows one job to be executed while another is being input (read) and another is being output (e.g. to a printer). A spool is a buffer that holds output for a device. Refer to figure 2.

Virtual Machines Problems

Slower Implementation is difficult and complex

Device controllers registers:

Some are control registers; others hold data. These registers have an address range. The CPU can read and write the controller's registers via the bus, using a particular address to control the device and receive data from it.

Memory-mapped files

Some virtual memory systems support this. A region of a process's virtual memory is associated with a file the process has open. This makes reading and writing the file much faster (unless there is a page fault). On close, the memory region is written back to the disk.

SCAN Scheduling

Start seeking at one end and then move to the other end servicing all I/O requests you get as you move along. When one end is reached begin moving to the other edge adv: - Every request is guaranteed to get serviced as long as the disk arm is incremented before we check for new incoming requests. But processes may still be starved due to the fact it might take a very long time to get to them if processes arriving frequently just in front of head. disadv: - Not fair: Data in the middle of the disk is advantaged over those at the top and the bottom. - If disk arm not incremented before we check for new incoming processes starvation can occur

Advantage and disadvantage of paging

Stops external fragmentation but leads to internal fragmentation, which is on average half a page per process. Allows for demand paging and its benefits. You need to maintain a page table, i.e. extra memory and circuitry.

contiguous memory allocation

Storing all the logical memory for a process in one contiguous block of physical memory. Allocation strategies include best-fit, worst-fit, etc.

Creating Child Processes in UNIX

System calls: - fork: creates a new process consisting of a copy of the address space of the parent process (though it can share the parent's resources). This makes a new process with its own PCB. The value fork returns differs for the child and parent, so each knows which one it is: child = 0, parent = PID (process ID) of the child. - execlp: loads a new program into the process's memory space, erasing the copy of the parent. Its arguments are (filename, arg1, arg2 ...).

Link time address binding

This one is not in the textbook, but: programs are often designed as separate modules which are compiled separately but make reference to each other. These must be linked, so that their variables are correctly bound.

File Allocation Table (FAT)

A table created at the start of each partition with an entry for each block in the partition. The directory entry specifies the block number of the first block in the file. The FAT entry for that block identifies the block number of the next block in the file, and so on. adv: - No external fragmentation: blocks can be allocated anywhere, so files can be any size without the need to preallocate blocks. - Direct access is better supported, because chaining through the FAT is faster than chaining through blocks on disk. disadv: - Only effective for sequential access: to reach a block in the middle of a file you have to follow a series of pointers, and this is even slower than linked allocation (more head seeks) if the FAT is not cached. - Pointers take up space, though this can be addressed by clustering blocks (allocating groups of contiguous blocks). - Internal fragmentation, which gets worse with bigger clusters. - Not reliable: if one FAT entry is lost or damaged, everything after it in the chain is lost.
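
Following a FAT chain can be sketched as a table walk (my own illustration; the table contents are made up): fat[b] holds the number of the block after b in the file, with -1 marking end-of-file.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: walking a FAT chain from a file's first block.
public class FatDemo {
    static List<Integer> blocksOf(int firstBlock, int[] fat) {
        List<Integer> blocks = new ArrayList<>();
        // Each step is a table lookup, not a disk seek (if the FAT is cached).
        for (int b = firstBlock; b != -1; b = fat[b]) {
            blocks.add(b);
        }
        return blocks;
    }

    public static void main(String[] args) {
        // A file starting at block 2, continuing in blocks 5 then 7.
        int[] fat = {-1, -1, 5, -1, -1, 7, -1, -1};
        System.out.println(blocksOf(2, fat)); // [2, 5, 7]
    }
}
```

This shows why direct access is tolerable with a cached FAT: reaching the nth block costs n table lookups but only one head seek.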

C-SCAN

The head starts at one end and moves to the other end servicing requests. When it reaches the other end it moves back to the starting end without servicing requests, and then starts again. adv: no region of the disk is favoured. disadv: starvation is possible. If you increment the head before checking for new incoming requests you can guarantee every request is serviced, but otherwise there is no guarantee. Also, if a constant stream of requests comes in right in front of the head's position, far-away requests won't be serviced for a very long time, i.e. starvation.

C-LOOK

The head starts at the request closest to one end and moves to the request closest to the other end, servicing requests on its way. When it reaches that end it moves back to the request closest to the starting end without servicing requests, and then starts again. adv: preferred over SCAN and LOOK. disadv: starvation is possible, as for C-SCAN: incrementing the head before checking for new incoming requests guarantees every request is serviced, but a constant stream of requests arriving just in front of the head can starve far-away requests for a very long time.

How is a disk read

The heads all move together, accessing a cylinder of the disk. To access a particular sector, the heads are moved to the appropriate cylinder, the correct head is electronically enabled, and the disk is rotated until the correct sector comes under the head.

Virtual memory concept

The idea that programmers only need to deal with the logical memory/virtual memory and the memory manager will deal with the mapping of logical addresses to physical memory and which pages are loaded into main memory via the backing store.

Multitasking in Memory management

The idea that when a process is waiting it doesn't need to be in main memory and therefore could be swapped out

(Peterson's Algorithm)
repeat
    flag[0] := true;
    turn := 1;
    while (flag[1] and turn == 1) do no-op;
    critical section
    flag[0] := false;
    remainder section
until false
(in terms of the critical section problem)

turn is introduced to avoid deadlock. If both processes are interested in the critical section at the same time (both flags true), whoever's turn was set last lets the other go first into the critical section, which prevents deadlock. Note that turn is always set to the other process. A process busy-waits only while the other process wants to run its critical section and it is not its own turn; if the other process is not interested, a process can keep re-entering its critical section until the other becomes interested. Bounded waiting is satisfied: a waiting process can enter as soon as the other finishes and sets its flag to false, and because a process sets turn to the other process when it becomes interested again, a process waits for at most one pass of the other's critical section. Mutual exclusion is satisfied because turn can only ever be 0 or 1, so even when both processes want the critical section, only the one whose turn it is can enter. Note this solution is not scalable: there isn't a version of it that works for n processes. Also, because modern computers may reorder basic machine-language instructions, there is no guarantee it will work on them.

Computers Resources

Types: Memory space CPU cycles I/O devices Files Note you have different instances of these types

The Locality Model of Execution

Used to prevent thrashing. You keep track of the frames a process is currently using, i.e. the pages that are actively being used together. A locality is a set of pages that are used together in one stage of a program; a program moves between localities as it runs. You want to allocate enough frames for the current locality, so thrashing only occurs when changing localities. Localities can overlap each other. This model is the basis of caching.

Thread solution in producer consumer problem

Uses a wait set to solve the problem. If a thread has entered a synchronized method and calls wait() because it cannot produce or consume (the buffer is full or empty), it releases the lock and is added to the wait set. A thread in the wait set is released and given the lock when notify() is called by a thread that has produced or consumed, so that the buffer is no longer full or empty.
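
A minimal sketch of this pattern (my own version, not course code): a bounded buffer where producers wait while the buffer is full and consumers wait while it is empty. I use notifyAll() rather than notify(), which is the safer choice when more than one thread may be waiting.

```java
import java.util.ArrayDeque;

// Sketch: producer-consumer with synchronized + wait/notifyAll.
public class BoundedBuffer {
    private final ArrayDeque<Integer> buffer = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(int item) throws InterruptedException {
        while (buffer.size() == capacity) {
            wait();            // releases the lock and joins the wait set
        }
        buffer.addLast(item);
        notifyAll();           // wake any waiting consumers
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();            // releases the lock and joins the wait set
        }
        int item = buffer.removeFirst();
        notifyAll();           // wake any waiting producers
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer b = new BoundedBuffer(2);
        b.put(1);
        b.put(2);
        System.out.println(b.take()); // 1
    }
}
```

Note the condition is rechecked in a while loop, not an if: a woken thread must reconfirm the buffer state before proceeding, since another thread may have run first.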

Demand paging

Usually done via the valid/invalid bit: pages are only loaded into physical memory when they are needed, if they aren't there already. Note a process's pages in physical memory are duplicated in the backing store (a possible disadvantage). Slower when page faults occur.

How are System Calls Implemented

Via interrupts which pass control to appropriate handler in the kernel System Calls transfer controls to the kernel by generating interrupts/traps/soft interrupt

Critical Sections

Way of preventing race conditions. It guarantees that certain sections of cooperating programs are not interleaved. A section of code is protected so that only one cooperating process may be in its critical section at one time.

Where does polling occur in communication between CPU and device controller

The CPU keeps polling the busy bit to see when it's clear; polling is also used for input devices. This causes busy waiting, which leads to bad CPU utilisation, i.e. less of the CPU's time is spent doing important things like running user processes rather than kernel work. The advantage is that polling is good when real-time inputs are important, e.g. the brakes in a car.

Prepaging

When a process is suspended you remember its current working set, and when it restarts you preload the pages in the working set into memory so that the initial page faults don't occur. This saves time as long as the working-set pages are still referenced soon after resumption; otherwise the time taken to preload them was wasted.

Page fault/segmentation fault/trap

When a process tries to access memory whose page is not marked valid by the valid/invalid bit, a page fault (trap) occurs. The OS determines whether the fault is due to an invalid memory access or simply a page that is not in memory, using the memory-management information in the PCB (mainly how many pages have been allocated to the process). If the access is invalid, the process is terminated. Otherwise the page is loaded into a free frame; if no frame is available for that process, a frame is swapped out, and if its dirty bit is set it is first saved to the backing store (because it has changed). Then the page table is updated appropriately and the instruction is restarted.

Resource-Allocation Graphs: Granted

When a request is granted, the request edge is transformed instantaneously into an assignment edge.

Resource-Allocation Graphs: Release

When a resource is released, the assignment edge is deleted.

Cooperating processes

When the execution of one process can affect the execution of another, usually through sharing a resource.

Thrashing

When a process hasn't been allocated enough frames it thrashes: page faults occur and the page just swapped out is needed again shortly after. A thrashing process often spends more time thrashing than executing. Solution = allocate more frames, or kill a process to free enough frames to allocate.

Multitasking environment (Time sharing)

When processes are multiprogrammed and interactive, i.e. respond to user input. It is then important for processes to be swapped frequently between the CPU and the queues so that each process stays responsive. The processes are said to be sharing time on the CPU.

Non-contiguous memory allocation

When the logical memory for a process is distributed throughout physical memory, e.g. paging.

Mouse and I/O

Mice were first polled, but later became interrupt-driven. At first the events were processed sequentially; later only the last event was processed, since it carries all the information needed.

Nonmaskable interrupts

For signalling serious errors. Never disabled.

Message Passing Systems

A general way of communicating between processes. To pass messages between processes a communication link must be established; the main operations on the link are send and receive. Direct communication: send/receive(processID, message, size). Indirect communication: send/receive(socket, message, size), which is generally better because process IDs can change. A socket is identified by an IP address and a port number.

repeat
    while turn =/= 0 do no-op;
    critical section
    turn := 1;
    remainder section
until false;
(in terms of the critical section problem)

The while statement makes the process do nothing while turn != 0; when turn becomes 0 it can enter its critical section, then set turn to give the other process a go, then keep running other code until it wants to enter its critical section again. turn is a shared variable which alternates between the processes. This satisfies bounded waiting, as a process always gets to enter its critical section once the other process has had its turn. It also satisfies mutual exclusion, as only the process whose turn it is can run its critical section. The disadvantage is that there is no progress: e.g. P0 can't enter its critical section when it is P1's turn, even if P1 doesn't want to enter, and must wait until P1 does. This is therefore not a good solution for the producer-consumer problem, as the buffer can effectively only be of size 1: the processes take strict turns producing and consuming.

repeat
    flag[0] := true;
    while flag[1] do no-op;
    critical section
    flag[0] := false;
    remainder section
until false;
(in terms of the critical section problem)

A process only enters its critical section when flag[1] is false, i.e. when the other process doesn't want to enter. It sets its own flag to true so that once the other process sets its flag to false it can enter, and so that the other process is kept out until it is done. Setting your flag to false after your critical section allows the other process to enter before you get another chance. But if P0 and P1 raise their flags at the same time, they deadlock, so the progress requirement is not satisfied (mutual exclusion is).

ssize_t write(int fd, const void *buf, size_t count);

Writes up to count bytes from the buffer into the file referred to by the file descriptor fd, at the current offset. Returns the number of bytes successfully written, which can be less than count. -1 is returned if an error occurs in write(); errno is set to the error code, and the application processes it. The return values of system calls should be checked in user applications and the exceptions dealt with properly, to guarantee the reliability and robustness of the applications.

LOOK

The head goes both ways servicing requests, but reverses direction once it has serviced the request closest to each end (rather than travelling all the way to the edge as SCAN does).

What is the structure of a file

A byte stream, i.e. a sequence of bytes terminated with EOF (end of file).

Information is referenced on a disk via

a multipart address which includes: drive number, surface number, track number and sector number.

The smallest addressable part of the disk is called

a sector

Heavy weight process

a task with one thread

Q: Consider a paging system with the page table stored in memory. a) If a memory reference takes 200 nanoseconds, how long does a paged memory reference take? b) If we add associative registers, and 75% of all page table references are found in the associative registers, what is the effective memory reference time? (Assume that finding a page table entry in the associative registers takes zero time, if the entry is there.)

a) It takes 200 nanoseconds for the CPU to access memory once. A paged reference accesses memory twice (once for the page table, once for the data), so it takes 400 nanoseconds. b) The associative registers are the TLB. On a TLB hit the page-table lookup is free, so the reference costs just the 200 ns memory access; on a miss it costs 400 ns. Effective access time = 75% x 200 + 25% x (200 + 200) = 250 nanoseconds.
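
Part (b) as a tiny calculation (a sketch; the helper name is mine), assuming the TLB lookup itself takes zero time as the question states:

```java
// Sketch: effective access time with a TLB.
public class EffectiveAccessTime {
    static double eat(double hitRate, double memAccessNs) {
        double hitCost = memAccessNs;       // hit: just the memory access
        double missCost = 2 * memAccessNs;  // miss: page-table read + memory read
        return hitRate * hitCost + (1 - hitRate) * missCost;
    }

    public static void main(String[] args) {
        System.out.println(eat(0.75, 200)); // 250.0
    }
}
```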

Q: Consider a logical address space of 8 pages of 1024 words each, mapped onto a physical memory of 32 frames. a) How many bits are there in the logical address? b) How many bits are there in the physical address?

a) There are 8 pages each with 1024 words, so the logical address space holds 8 x 1024 = 8192 words. The number of bits needed to address 8192 things is given by 2^x = 8192, so x = 13 bits (3 page bits + 10 offset bits). b) There are 32 frames of 1024 words, so physical memory holds 32 x 1024 = 2^15 words, which needs 15 bits (5 frame bits + 10 offset bits). Note 32 frames is 4 times the 8 pages, i.e. 2 extra address bits.

File types adv: disadv:

adv: Having file types stops the printing out of binaries as if they are ASCII and a proper program can be used to process the file disadv: -Hard to deal with new file formats. -Encryption difficult -There is an overhead in enforcing file types as the OS must ask what type each file is and have ways of behaving accordingly which is extra code.

Advantages and disadvantages of threads i.e. lightweight processes

adv: • Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is waiting. • Resource sharing. We don't need special setups for shared memory when implementing communicating processes as threads in the same task. • Economy. Thread creation is easier, and switching between threads is faster than between (heavyweight) processes because there's less to change (registers, PC and stack; threads share some information in the PCB). • Scalability. Easy to take advantage of multiprocessor architectures. • A program with many threads gets more CPU time (user level). disadv: • Because threads share data, coding bugs can lead to corrupted data being read by one thread while another thread is halfway through writing it. • Other programs with fewer threads don't get as much CPU time.

Block devices

can read block write block seek block (for random access block devices)

a task is?

collection of code section, data section, and O/S housekeeping info.

count variable

count is bound to address

Maskable interrupt:

device-generated interrupts and traps. These can be disabled when the OS is in the middle of processing a more urgent interrupt.

disk scheduling T24 & 25

Simple method of allocating memory to processes

divide memory into a number of fixed-size partitions. Problems: processes are different sizes, and presumably the partitions would have to be as big as the biggest process, which wastes a lot of memory (internal fragmentation).

Lazy swapping:

don't load a page into main memory until it is referenced by the CPU. i.e. demand paging

effective access time

effective access time = (1 - p) x memory access time + p x page fault time, where p = probability of a page fault. This is bad when the page fault time is much greater than the memory access time, because then even a small fault rate dominates the total.
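As a sketch with assumed numbers (100 ns memory access, 8 ms page-fault service time; neither figure appears in the notes), the formula shows how quickly even a tiny fault rate dominates:

```python
# effective access time = (1 - p) * memory access time + p * page fault time
# times here are assumed for illustration: 100 ns memory, 8 ms fault service
def effective_access_time(p, mem_ns=100, fault_ns=8_000_000):
    return (1 - p) * mem_ns + p * fault_ns

print(effective_access_time(0.0))      # 100.0 -- no faults: just memory speed
print(effective_access_time(0.001))    # ~8099.9 -- 1 fault in 1000 is ~80x slower
```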

Resource-Allocation Graphs: assignment edge

figure 5.

Look at dead lock examples figure 6. & figure 7.

User-level threads

implemented by user-level libraries without the need for system calls. -Fastest kind of threads to switch between, as only a small part of the PCB needs to be reloaded. -The kernel doesn't know about individual threads within a process, so the process with the most threads gets the most CPU time; and if one thread makes a blocking system call, the whole heavyweight process (i.e. all of its threads) is suspended until the call completes.

Kernel-level threads

implemented via system calls. -The kernel knows about individual threads, so each process gets a fair share of CPU time regardless of how many threads it has; however, switching between threads is much slower.

All processes are descendants of

init

What are atomic instructions

instructions guaranteed not to be interrupted; implemented in hardware.

An orphan process

is a process whose parent terminated before it; init adopts (inherits) the orphan.

interrupt vector

is an array of locations holding the addresses of the interrupt-handling routines. (Usually held in low memory.)

Dirty bit is used in page faults when

is examined when swapping a page out. If it is set, the page has been changed in physical memory and therefore must be written to the backing store before being replaced.

logical address/virtual address logical address space

is an address referred to in a piece of executable code (e.g. to refer to a variable or a location); it is used in a process' address space at compile time and load time. The CPU executes instructions involving logical addresses. Each program is given a logical address space when compile-time or load-time binding occurs, with addresses running from 0 up to the memory limit of the process. Mapping logical addresses to physical addresses is done by the MMU.

memory unit

is responsible for accessing main memory.

physical address physical address space

is the address actually sent to the memory unit. The physical address space runs from 0 to the size of main memory.

A zombie process

is a process which has terminated, but whose live parent has not called wait() to collect its exit status. Since no process is receiving its exit status, it stays in the process table: a zombie wastes time and wastes space in the PCB table.

Dynamic linking

linking is only done at run-time. o A 'stub' containing a pointer is included in place of the code for each routine. o When the stub is referenced, the pointer is initialised: - if the routine is already loaded, use it; - otherwise load the routine. Advantages of dynamic linking? -Saves memory space (no need for multiple copies of the same routines). -Saves load time (only need to load a routine once). -Easy to update libraries (e.g. to fix bugs or change versions).

LRU advantages and disadvantages

the least recently used page is replaced. This works by assuming the future resembles the past. adv: It is 'optimal looking backwards in time', i.e. it tends to produce the fewest page faults achievable without knowing which pages will be referenced next. disadv: It is time-consuming to keep a record of which page is least recently used.
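A minimal sketch of an LRU simulator over a reference string (the reference string and frame count here are an assumed example, not from the notes):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    mem = OrderedDict()                  # insertion order tracks recency
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # page just used: now most recent
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```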

Threads

a lightweight process, which allows a process with multiple threads to do multiple tasks at 'once'; a special kind of process that allows more complex data-sharing. A thread has only its own PC, register set and stack space, which is all that needs to be loaded during a context switch. Threads share the data section, code section, memory and I/O resources of their process.

Stack holds grows?

holds local/dynamic variables; grows downwards

Thread problem in producer consumer problem

look at code L19 slide 7. If the buffer is full (for the producer) or empty (for the consumer), Thread.yield() is called to suspend the thread, removing it from the ready queue until the buffer is no longer full or empty. The problem: if one thread is in its critical section and finds the buffer full or empty, it yields while still holding the lock. When the other thread wakes up and tries to enter its critical section, it can't, because it doesn't have the lock; it therefore cannot change the buffer, and both threads are deadlocked.
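A sketch of the standard fix in Python (using threading.Condition rather than the lecture's Java code): cond.wait() releases the lock while the thread sleeps, so the other thread can enter its critical section and the deadlock described above cannot happen.

```python
import threading

buffer, CAPACITY = [], 1
cond = threading.Condition()
consumed = []

def producer():
    for item in range(3):
        with cond:
            while len(buffer) == CAPACITY:
                cond.wait()            # releases the lock while waiting
            buffer.append(item)
            cond.notify()

def consumer():
    for _ in range(3):
        with cond:
            while not buffer:
                cond.wait()            # ditto: no yield-while-holding-lock
            consumed.append(buffer.pop(0))
            cond.notify()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)  # [0, 1, 2]
```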

File organization module

maps the logical structure of file systems onto secondary storage. A file is broken into logical blocks, to make the mapping to disk blocks easier to manage.

mmap family of system calls allow

mmap allows two processes to share a region of their memory space: a memory region is created that is shared by the processes.
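A minimal sketch on a Unix system (an anonymous shared mapping inherited across fork(); the size and string are illustrative):

```python
import mmap
import os

buf = mmap.mmap(-1, 16)      # anonymous mapping, shared by default on Unix
pid = os.fork()
if pid == 0:
    buf[:5] = b"hello"       # child writes into the shared region
    os._exit(0)
os.waitpid(pid, 0)
data = bytes(buf[:5])        # parent sees the child's write
print(data)                  # b'hello'
```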

Sequential access (file)

most common method. o A file pointer identifies a record within the file. o The pointer can be moved incrementally forwards (in read or write operations) or backwards (in rewinding).

Multiprogramming environment is when

multiple processes run at the 'same time': the OS has to save the state of a process while it waits on I/O and swap in another, chosen by a scheduling algorithm. (Critical sections also have to be taken into account.)

First computers

no intermediary systems between programmer and hardware so they had to give instructions directly to CPU which meant it was idle a lot!

FCFS scheduling (I/O)

note: 0 = middle of the disk. adv: It's fair and simple/quick to implement. disadv: Big swings in head position can occur, which increases the average seek time.
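A sketch of measuring FCFS head movement (the start cylinder and request queue are an assumed example):

```python
def fcfs_movement(start, requests):
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)      # seek distance for this request
        pos = r
    return total

print(fcfs_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 640
```

The big swings (183 to 37, 122 to 14) are exactly the disadvantage named above.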

Direct access: (file)

o A file is viewed as a numbered sequence of records. o Operations (e.g. read, write) can be carried out on any record in any order.

Page number out of bounds ERROR

occurs when trying to access a page out of range for the process

Each sector corresponds to...

one logical block of the disk.

a file is the....

only way to store data on a disk

Swapping

processes are swapped in and out of main memory to/from a backing store. Swapping is done by the memory manager, which is a module of the kernel. • The memory manager needs to work in synch with the CPU scheduler.

The bounded-buffer problem

the producer-consumer problem using a bounded buffer instead of a message box. Two private variables: - in, the next position in the buffer to write to; - out, the next position in the buffer to read from. Two shared variables: - buffer; - counter, which says how many items are in the buffer. Because of the shared variables, a race condition occurs. Can be solved with Peterson's algorithm, semaphores, ...
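A sketch of why the shared counter races: in Python the increment is visibly several bytecode steps, and a context switch between any two of them can corrupt the count.

```python
import dis

counter = 0

# "counter = counter + 1" is not one atomic step -- it compiles to a
# load, an add, and a store, and a switch can occur between any of them.
def increment():
    global counter
    counter = counter + 1

listing = dis.Bytecode(increment).dis()
print("STORE_GLOBAL" in listing)   # True: the store is a separate step
```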

Multilevel indexing

avoids the size limit of a single index block by allowing indexes to index other indexes; the per-file index structure is the inode. Direct blocks reference data blocks; a single indirect pointer references an index block that references data blocks; a double indirect pointer references an index block whose entries reference further index blocks, which in turn reference data blocks. This allows for very large files, but uses up a fair bit of space storing the index blocks.
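A sketch of the resulting maximum file size, with assumed parameters (4 KiB blocks, 4-byte pointers, 12 direct pointers, one single-indirect and one double-indirect pointer; real file systems vary):

```python
BLOCK = 4096
PTRS = BLOCK // 4                 # 1024 pointers fit in one index block

direct = 12 * BLOCK               # blocks referenced straight from the inode
single = PTRS * BLOCK             # one index block of data pointers
double = PTRS * PTRS * BLOCK      # an index block of index blocks

max_bytes = direct + single + double
print(max_bytes // 2**20)         # maximum file size in MiB -> 4100
```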

On exit() a process will

return its exit status to a parent process waiting on it to terminate. The OS will deallocate its resources.

Programmed I/O

sending data to a device byte-by-byte under CPU control

User Interface

the shell is the program through which users interact with the OS: an intermediary between the programmer and the hardware.

How small should the page size be

small, but not so small that disk I/O and housekeeping become a major time cost

disks are arranged on a rotating

spindle

the sides of a platter are called

surfaces

Synchronous vs. asynchronous I/O system calls

sync: the caller waits for the I/O to complete (e.g. a parent waits for a child via polling). async: the caller keeps running while the I/O proceeds; to find out whether it has finished, it polls for a pre-established signal, e.g. a flag in mapped memory or a file whose value is the result from the child. Typically one thread deals with the I/O request and is blocked until the result arrives; flags are set to indicate the value is there; the main thread keeps running but doesn't know whether the result has been written, so it (or another thread) keeps polling.

Semaphores

wait(S) {
    while S <= 0 do no-op;
    S := S - 1
}

signal(S) {
    S := S + 1
}

P0: S0;                  // statement 1
    signal(synch/mutex)
P1: wait(synch/mutex);
    S1;                  // statement 2

a synchronization tool making use of atomic operations. Cooperating processes share a semaphore (e.g. mutex). A semaphore S is an integer variable that can only be accessed via two atomic operations, wait and signal. In the example, S0 is executed first and then signals P1 to execute S1 next, stopping them from executing at the same time (enforcing the ordering / mutual exclusion). Busy waiting occurs here, because wait() keeps checking whether S <= 0.
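A runnable sketch of the ordering example using Python's threading.Semaphore (initialised to 0 so that S1 cannot run until S0 has signalled):

```python
import threading

synch = threading.Semaphore(0)   # starts at 0, so the waiter blocks
order = []

def p0():
    order.append("S0")           # statement 1
    synch.release()              # signal(synch)

def p1():
    synch.acquire()              # wait(synch)
    order.append("S1")           # statement 2

t1 = threading.Thread(target=p1); t1.start()
t0 = threading.Thread(target=p0); t0.start()
t0.join(); t1.join()
print(order)  # ['S0', 'S1'] -- always, regardless of scheduling
```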

What is a signal on a bus

a temporal sequence of electrical voltages on each wire

cat test.txt | grep 'party'

the output of cat test.txt is piped to grep 'party' via a read/write buffer, i.e. a producer-consumer arrangement.

swap space

the OS maintains an area of the disk to use as the backing store. Doing it inside an ordinary directory would be slow, so a special partition is created whose disk-allocation algorithm is optimised for speed rather than storage efficiency.

File System is?

the mechanism by which the user accesses/manipulates stored data in secondary storage like hard disk.

Nonblocking I/O

the process that issues a nonblocking I/O call will wait for a fixed (small) interval, and return after this interval regardless of whether there is still some data to be transferred. Here you know how long the call will take, but you don't know whether all the data will have been transferred when it returns.

Static linking

this is done before the program is executed.

Code for readers writers problem plus description of how it solved it

var readcount (= 0): integer;  // counts the readers in the critical section
    mutex (= 1): semaphore;    // used by readers to update readcount
    wrt (= 1): semaphore;      // used by readers and writers to enter the critical section

Writer:
    wait(wrt);
    ... perform writing; ...
    signal(wrt);

Reader:
    wait(mutex);
    readcount := readcount + 1;
    if readcount = 1 then wait(wrt);   /* first reader waits for the resource
                                          (and when granted blocks writers from entering) */
    signal(mutex);
    ... perform reading; ...
    wait(mutex);
    readcount := readcount - 1;
    if readcount = 0 then signal(wrt); /* last reader signals availability of the
                                          resource so writers may enter */
    signal(mutex);

The first reader to enter the critical section sets the shared semaphore wrt to 0 so that no writer is able to enter (if a writer is already in the critical section, the reader must wait until the writer signals wrt). Readers that follow are still able to enter and read the data, because they need not wait on wrt (readcount > 1). The last reader to exit signals wrt, allowing waiting writers to enter. This implementation is reader-preferential: writers must wait until all readers have left the section, and if more readers arrive before readcount reaches 0 they are admitted ahead of any waiting writers.

blocking is

wait while an I/O process completes

Semaphore without busy waiting

wait(S) decrements S.val; the first process (if S.val started as 1) is allowed to run, since S.val is not < 0. For subsequent processes S.val is less than 0, so they are blocked (block;) and added to a waiting queue. When a process finishes its critical section it calls signal(S), which increments S.val; if S.val <= 0, a process is removed from S.queue and wakeup(P) is called on it, giving it ready status and allowing it to run its critical section. This avoids busy waiting, because queued processes do not keep re-checking a condition, while still satisfying mutual exclusion, bounded waiting and progress. The initial value of S.val is the number of processes we want to be able to run the section at once (1 for mutual exclusion). abs(S.val) = the number of processes waiting on that semaphore. Code in L18. Consecutive signals with no waiting processes keep incrementing S.val, so it can reach arbitrarily large positive values.
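A runnable sketch of a no-busy-wait semaphore built on a Condition as the block/wakeup mechanism (note: unlike the accounting above, S.val never goes negative in this variant; waiters simply sleep until it is positive):

```python
import threading

class BlockingSemaphore:
    def __init__(self, value=1):
        self.val = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.val <= 0:
                self.cond.wait()       # block -- no spinning
            self.val -= 1

    def signal(self):
        with self.cond:
            self.val += 1
            self.cond.notify()         # wakeup(P) on one queued waiter

# mutual exclusion demo: two threads increment a shared counter
sem, count = BlockingSemaphore(1), 0

def worker():
    global count
    for _ in range(10_000):
        sem.wait()
        count += 1                     # critical section
        sem.signal()

ts = [threading.Thread(target=worker) for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
print(count)  # 20000
```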

Independent processes

when the execution of one process cannot affect the execution of another

Real-Time Systems

when there are specific time requirements for processes to complete, a real-time OS is required. A hard real-time system guarantees that jobs are completed within specified times. A soft real-time system gives some jobs higher priorities than others, so they are processed first.

Algorithm for allocating processes memory and advantages/disadvantages for each?

• First-fit: allocate the first hole you find that's big enough. Quick, because not every hole is searched; but the hole allocated might not be optimal. • Best-fit: find the hole that leaves the smallest leftover hole. You have to search the whole set of holes each time, and you end up creating very small holes. • Worst-fit: find the hole that leaves the biggest leftover hole. You have to search every hole before allocating; you don't create tiny holes, but you may break up large holes that big processes could have fitted into.
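The three policies can be sketched as functions over a list of hole sizes (the hole sizes and request here are an assumed example):

```python
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i                   # first hole big enough
    return None

def best_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i], default=None)

def worst_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i], default=None)

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1 -- the first hole >= 212 is 500
print(best_fit(holes, 212))    # 3 -- 300 leaves the smallest leftover
print(worst_fit(holes, 212))   # 4 -- 600 leaves the biggest leftover
```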

Types of scheduler

• Long-term scheduler (batch systems): decides which processes loaded onto the disk should be moved to main memory. • Short-term scheduler (aka CPU scheduler): chooses how to allocate the CPU amongst the processes which are ready to execute. (out of programs already loaded in memory which one should I run)

Organising Groups of Files

• The file system is first divided into partitions. A partition can be thought of as a virtual disk/hard drive. • Each partition contains a directory of the files that reside on it. Partitions can also have different file systems implemented on them, and different scheduling algorithms if they are not being accessed at the same time.

4 ways of processes to share data (interprocesses)

• mmap() = memory map. • A pipe moves output, in order, into a different process as input; piping is essentially buffering, i.e. one process writes to a buffer and the other reads from it after each write, in order. • fork() and wait(): the process fork()s, creating a child; the parent then wait()s (blocks until the child ends and returns an exit code) and uses the exit code as data when it resumes running. • Sockets: a socket is a link between two processes that allows them to send and receive data from one another through: o send(socket, message, size) o receive(socket, message, size)
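A minimal sketch of the pipe pattern on a Unix system (producer child, consumer parent, communicating through the kernel's pipe buffer):

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:                 # child: the producer
    os.close(r)
    os.write(w, b"party")
    os._exit(0)
os.close(w)                  # parent: the consumer
data = os.read(r, 1024)
os.waitpid(pid, 0)
print(data)                  # b'party'
```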

Components of a Process

• text section/code section: the program code itself • data section: any global variables used by the program • process stack: any local variables currently being used • program counter: a pointer to some place in the program code • contents of CPU registers • memory management information • device/file allocation information • accounting information

