CECS 326 Operating Systems
Thrashing
A phenomenon where a process spends more time swapping pages in and out than actually running. Mitigated with the working set model
Multilevel feedback queue
A process can move between the various queues. Each queue has its own time allotment; if a job uses more than the time given at its level, it is demoted to the next (lower-priority) queue
Deadlock detection (single instance)
Algorithm that detects a cycle in a wait-for graph with n vertices (processes). The system invokes it periodically to check for a cycle
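A minimal sketch of the single-instance detection idea, using depth-first search on a wait-for graph (the process names and edges below are made-up examples):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited, on current DFS path, finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle, hence deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 on P1: a cycle, so these processes are deadlocked
deadlocked = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
ok = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []})
```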
Synchronization hardware
Computer components that provide support for implementation of CS code
Page table
Keeps track of memory allocation with a valid/invalid bit for each page, and records which physical frame each page is mapped to
Virtual Address Space
Logical view of how a process can be stored in memory.
Ignore potential deadlock problem
Many systems actually adopt this approach
Throughput
Number of processes that complete their execution per time unit (maximize)
Asymmetric multiprocessing
Only one processor (the master) accesses the system data structures and does the scheduling, so there is no need to coordinate data sharing between processors
Dining-philosophers problem (synchronization issue)
Five philosophers share five chopsticks; each needs the two adjacent chopsticks to eat, so at most two can eat at once. The problem: guarantee no deadlock (e.g., everyone holding one chopstick) and no philosopher left waiting forever
Multilevel queue
Partitions the ready queue into separate queues, e.g., foreground (round robin) and background (FCFS). Scheduling is also performed between the queues
Virtual memory Motivation
Physical memory is limited, and programs can take a lot of memory whilst running.
If RAG has a cycle and several instances per resource
Possibility of deadlock
Readers-Writers problem (synchronization issue)
Readers only read the data, while writers can both read and write. Allow multiple simultaneous readers, but only one writer.
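A sketch of the first readers-writers solution using Python's threading primitives (the class and method names are my own; this variant allows writer starvation):

```python
import threading

class RWLock:
    """First readers-writers solution: many readers OR one writer at a time."""
    def __init__(self):
        self.read_count = 0
        self.mutex = threading.Lock()   # protects read_count
        self.wrt = threading.Lock()     # held by the writer, or by the reader group

    def start_read(self):
        with self.mutex:
            self.read_count += 1
            if self.read_count == 1:    # first reader locks out writers
                self.wrt.acquire()

    def end_read(self):
        with self.mutex:
            self.read_count -= 1
            if self.read_count == 0:    # last reader lets writers in
                self.wrt.release()

    def start_write(self):
        self.wrt.acquire()

    def end_write(self):
        self.wrt.release()
```

Readers only touch `wrt` at the group boundaries, so any number of readers may overlap; a writer must wait until `read_count` drops to zero.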
PRA most frequently used
Replace the page with the largest reference count (the page with the smallest count was probably just brought in), using a per-page counter of references
PRA least frequently used
Replace the page with the smallest reference count, using a per-page counter of references
Performance measures of scheduling algorithms
CPU utilization, Throughput, Turnaround time, Waiting time, Response time
Mutex lock
CS is protected by acquire() and release() of a boolean variable that indicates the availability of the lock (busy waiting)
Semaphore
CS is protected by wait() and signal() (or post()) on an integer. wait() blocks while the value is zero, then decrements it; signal() increments the value and may wake a waiting process
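A sketch of how a counting semaphore's wait()/signal() could be built on a condition variable (names are illustrative; real systems provide this as a primitive, e.g. POSIX sem_wait/sem_post):

```python
import threading

class Semaphore:
    """Counting semaphore sketch: wait() blocks while the count is zero."""
    def __init__(self, value=0):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):                     # also called P() or acquire()
        with self.cond:
            while self.value == 0:      # block until a signal makes value > 0
                self.cond.wait()
            self.value -= 1

    def signal(self):                   # also called V(), post(), or release()
        with self.cond:
            self.value += 1
            self.cond.notify()          # wake one waiting process, if any
```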
CPU - I/O burst cycle
process execution consists of a cycle of CPU execution and I/O wait. (CPU burst followed by I/O burst) May increase CPU utilization with multiprogramming
Push migration
Periodically checks the load on each processor. If there is an imbalance, moves tasks from the overloaded CPU to an idle or less-busy processor.
Resource allocation graph
Circle = process; Square = resource; Small squares inside resource = instances; Arrow from P to R = request; Arrow from R to P = hold (assignment)
RAG dashed-line
Claim edge: indicates that process P may request resource R.
Condition Variables
condition x, y; x.wait() suspends the invoking process, x.signal() resumes one suspended process (similar to semaphore operations, but a signal with no waiting process is lost)
Linux Synchronization options
Atomic integers, Mutex locks, Semaphores, Spinlocks, Reader-writer versions of both semaphore and spinlock
Page replacement algorithm (PRA)
Selects which page to evict when no frame is free
Preemptive scheduling
Scheduling may take the CPU away from a running process, e.g., when a process: 1. Switches from running to ready state 2. Switches from waiting to ready state
Nonpreemptive scheduling
Once allocated, a process keeps the CPU until it: 1. Switches from running to waiting state 2. Terminates
Bounded buffer problem (synchronization issue)
A buffer with n slots, each holding one item; producers must wait when the buffer is full, consumers when it is empty
No preemption
A condition where a resource can be released only voluntarily by the process holding it (only after the process completes its task)
Mutual exclusion
A condition where only one process can use a resource at any given point
Deadlock
A state in which two or more processes are each waiting for another to release a resource, so none can proceed (a circular chain when more than two processes are involved)
Deadlock Recovery
Abort all deadlocked processes or one process at a time; roll back processes to some safe state where resource(s) is released
Preemptive kernel
Allows a process to be preempted while running in kernel mode (used to handle CSs inside the OS)
Deadlock Detection
Allows system to enter deadlock state. Needs detection algorithm and recovery scheme.
Waiting time
Amount of time a process has been waiting in the ready queue (minimize)
Response time
Amount of time it takes from when a request was submitted until the first response is produced, not output (minimize)
Turnaround time
Amount of time to execute a particular process (minimize)
Race condition
Data inconsistency that arises when two or more processes access and manipulate shared data concurrently and the outcome depends on the particular order of execution
PRA first in first out
Replace the oldest page in memory. Simple, but suffers from Belady's Anomaly: adding more frames can increase the number of page faults
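FIFO replacement and Belady's anomaly can be demonstrated with a short simulation (the reference string is the classic textbook example):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:       # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's anomaly: 4 frames cause MORE faults than 3
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10
```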
Demand Paging
Bring a page into memory only when it is needed, instead of loading a process's entire address space. Less unnecessary I/O, less memory used, faster response, more simultaneous users
If RAG has a cycle and only one instance per resource
Deadlock
Frame allocation algorithm
Determines how many frames to give each process
Symmetric multiprocessing (SMP)
Each processor schedules itself. All processes are in a common ready queue, or each processor has its own private queue of ready processes (more common)
Time slice
Each queue gets a certain amount of CPU time which it can schedule amongst its processes. (e.g., 80% to foreground, 20% to background)
Basic Page Replacement
Find the desired page's location on disk. Find a free frame (or pick a victim with the page replacement algorithm). Bring the page into the frame and update the page and frame tables. Restart the instruction that caused the page fault and resume the process
Scheduling algorithms
First come first serve, Shortest job first, Shortest remaining time first, Priority, Round robin
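For example, FCFS waiting and turnaround times can be computed directly from the burst order (burst values below are the classic 24/3/3 example; all arrivals at t=0):

```python
def fcfs_metrics(bursts):
    """Waiting and turnaround times under FCFS, all processes arriving at t=0."""
    waiting, turnaround, t = [], [], 0
    for burst in bursts:
        waiting.append(t)       # time spent in the ready queue before starting
        t += burst
        turnaround.append(t)    # completion time equals turnaround when arrival is 0
    return waiting, turnaround

# Classic example: P1=24, P2=3, P3=3
w, ta = fcfs_metrics([24, 3, 3])
print(w)    # [0, 24, 27] -> average waiting time 17
print(ta)   # [24, 27, 30] -> average turnaround time 27
```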
PRA second chance
FIFO augmented with a hardware-provided reference bit: if the candidate page's bit is 1, clear it and move on (the second chance); replace a page only when its bit is 0
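A sketch of the second-chance (clock) algorithm, assuming the reference bit is set both when a page is loaded and when it is accessed:

```python
def second_chance_faults(refs, n_frames):
    """Second-chance (clock) replacement: count page faults on a reference string."""
    frames = []                 # list of [page, ref_bit] entries
    hand, faults = 0, 0
    for page in refs:
        entry = next((f for f in frames if f[0] == page), None)
        if entry:
            entry[1] = 1                        # hit: set the reference bit
            continue
        faults += 1
        if len(frames) < n_frames:              # free frame still available
            frames.append([page, 1])
            continue
        while frames[hand][1] == 1:             # bit set: clear it, give second chance
            frames[hand][1] = 0
            hand = (hand + 1) % n_frames
        frames[hand] = [page, 1]                # replace the page whose bit is 0
        hand = (hand + 1) % n_frames
    return faults
```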
Dispatcher module
Gives control of the CPU to the process selected by the short-term scheduler. (Switching context, switching to user mode, jumping to the proper location to restart program)
Deadlock detection (multiple instances)
Graph reduction algorithm. Data structures: Available (vector), Allocation (matrix n x m), Request (matrix n x m)
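A sketch of the multiple-instance detection algorithm over those data structures: a process whose Request can be met by Work is assumed to finish and return its Allocation (the matrices in the example are illustrative):

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple instances; returns the set of deadlocked processes."""
    n = len(allocation)
    work = list(available)
    # A process holding no resources cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # reclaim resources
                finish[i] = True
                progress = True
    return {i for i in range(n) if not finish[i]}

# Classic example: a completion order (P0, P2, P1, P3, P4) exists, so no deadlock
print(detect_deadlock(
    [0, 0, 0],
    [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]],
    [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]],
))   # set()
```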
Monitor
High-level abstraction that allows process synchronization. Only one process can be active within the monitor. Data within the monitor is encapsulated and protected.
Pull migration
An idle processor pulls waiting task(s) from a busier processor
Progress
If no process is executing in its CS and some process wishes to enter its CS, then the selection of the process that enters next cannot be postponed indefinitely
Hyperthreading
Multiple threads per core. Faster processing, consume less power
Deadlock can occur with the following 4 conditions
Mutual exclusion, hold and wait, no preemption, circular wait
If RAG doesn't have a cycle
No deadlock
Deadlock Recovery
Process termination (all at once or one at a time) or resource preemption with rollback; the victim-selection policy must avoid starvation (repeatedly picking the same process)
Deadlock Avoidance (simplest approach)
Requires each process to declare the maximum number of resources of each type that it will ever need.
Deadlock Prevention
Restraining the ways that resource requests can be made so one of the following conditions cannot hold: Mutual exclusion, hold and wait, no preemption, circular wait
Critical section/phase
Sections in a program where the process requires mutually exclusive access to shared data
Time quantum
Small unit of CPU time allotted to each process in round robin. When the quantum expires, the running process is preempted and moved to the tail of the ready queue
PRA least recently used
Stack algorithm that replaces the page that hasn't been used for the longest time
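LRU can be simulated with an ordered map acting as the recency stack (a sketch of the policy, not how hardware implements it):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults under LRU using an OrderedDict as the recency stack."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, LRU never exhibits Belady's anomaly: more frames never mean more faults.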
PRA optimal
Stack algorithm that replaces the page that won't be used for the longest time (requires future knowledge, so it serves as a benchmark for other algorithms)
Bounded Waiting
There must be a bound on the number of times other processes can enter their CSs after a process has requested entry to its CS and before that request is granted
Deadlock Avoidance (safe state)
A system is in a safe state when there exists a sequence in which every process's requests can be satisfied using the currently available resources plus those released by processes earlier in the sequence. Avoidance keeps the system from ever entering an unsafe state
Banker's Algorithm
Deadlock-avoidance algorithm: each process declares its maximum resource needs, and a request is granted only if the resulting allocation leaves the system in a safe state
Virtual memory
The separation of user logical memory from physical memory. Address space can be shared by multiple processes. More concurrent programs running, less I/O required to load or swap processes
Dispatch latency
Time it takes for the dispatcher to stop one process and start another running.
Deadlock Avoidance Algorithms
Use a resource-allocation graph for single instance of a resource type, and use banker's algorithm for multiple instances
Hold and wait
When a process holding at least one resource is waiting to acquire additional resources held by other processes
Circular wait
When a set of processes {P0, P1, ..., Pn} exists in which each process waits for a resource held by the next, and Pn waits for P0 (two processes suffice)
SMP load balancing
attempt to keep workload evenly distributed among processors. This is needed when each processor has its own private queue of ready processes
CPU utilization
keeps the CPU as busy as possible (maximize)
Non-preemptive kernel
A kernel-mode process runs until it exits kernel mode, blocks, or voluntarily yields the CPU (essentially free from race conditions on kernel data) (used to handle CSs inside the OS)
exponential distribution
Large number of short CPU bursts (usually I/O-bound programs)
Banker's Algorithm data structures
n = number of processes, m = number of resource types. Available: vector of length m. Max: n x m matrix. Allocation: n x m matrix. Need: n x m matrix, where Need = Max - Allocation
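A sketch of the safety algorithm over these structures (the Allocation/Max values are the classic textbook example):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: is there an order in which every process can finish?"""
    n = len(allocation)
    work = list(available)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # i finishes, frees all
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, []        # no runnable process remains: unsafe state
    return True, sequence

# Classic example: 5 processes, resource types A, B, C, Available = (3, 3, 2)
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_need, allocation)]
safe, seq = is_safe([3, 3, 2], allocation, need)
print(safe, seq)    # True [1, 3, 0, 2, 4]
```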
Short-term scheduler
selects one of the processes from the ready queue to be executed when CPU becomes idle.
Fixed priority scheduling
serve all from foreground then from background. (possibility of starvation)
hyperexponential
Small number of long CPU bursts (usually CPU-bound programs)