CPSC 3220 Exam 2 Terms
test-and-set instruction
an instruction used to write 1 (set) to a memory location and return its old value as a single atomic (i.e., non-interruptible) operation.
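A minimal Python sketch of the idea, using a threading.Lock to stand in for the hardware's atomicity guarantee (the Word class and its names are illustrative, not a real API):

```python
import threading

class Word:
    """One memory word supporting an emulated atomic test-and-set."""
    def __init__(self):
        self.value = 0
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        # Atomically: write 1 (set) and return the previous value.
        with self._atomic:
            old, self.value = self.value, 1
            return old

w = Word()
assert w.test_and_set() == 0  # was FREE; this caller wins
assert w.test_and_set() == 1  # already set; this caller must retry
```

The return value tells the caller whether it was the one that changed the word from 0 to 1, which is exactly what a lock acquire needs to know.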
short-term scheduler
decides which of the ready and in-memory processes is to be executed after an interrupt or system call
future knowledge
scheduling decisions based on advance knowledge of task service times, information a real scheduler rarely has
long-term scheduler
determines which programs are admitted to the system for processing
medium-term scheduler
suspends and resumes processes by swapping them out of and into memory, controlling how many processes compete for memory and the CPU at once
priority aging
increases the priority of a job as it waits to be executed in the system
release lock
makes the lock FREE
energy-aware scheduling
scheduling that reduces energy use, e.g., by slowing the CPU down, turning off processors that are idle, and powering down memory that is not in use
scheduling for condition variable signal under Mesa semantics
the signaling thread keeps the lock and the processor; the waiting thread is merely moved to the ready list, so when it eventually runs it must recheck the condition in a while loop
acquire lock
waits until the lock is FREE and then atomically makes the lock BUSY
policy
ways to choose which activities to perform
thundering herd (broadcast notify/signal)
when a large number of processes waiting for an event are awoken when that event occurs, but only one process is able to proceed at a time
lost update problem
when two transactions concurrently read the same value and each writes an update based on what it read, so that one transaction's update silently overwrites (loses) the other's
semaphore
A type of synchronization variable with only two atomic operations, P() and V(). P waits for the value of the semaphore to be positive, and then atomically decrements it. V atomically increments the value, and if any threads are waiting in P, triggers the completion of the P operation.
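Python's threading.Semaphore exposes P() and V() as acquire() and release(); a small deterministic sketch, using non-blocking acquires so nothing actually waits:

```python
import threading

sem = threading.Semaphore(2)            # initial value 2

assert sem.acquire(blocking=False)      # P(): 2 -> 1
assert sem.acquire(blocking=False)      # P(): 1 -> 0
assert not sem.acquire(blocking=False)  # value is 0; a blocking P() would wait
sem.release()                           # V(): 0 -> 1, would wake one waiter
assert sem.acquire(blocking=False)      # P(): 1 -> 0
```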
fine-grain locking
A way to increase concurrency by partitioning an object's state into different subsets each protected by a different lock.
lock ordering
A widely used approach to prevent deadlock, where locks are acquired in a pre-determined order
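A Python sketch: two transfers run in opposite directions, but because both acquire the two account locks in the same global order (here by object id, an arbitrary but fixed choice), no circular wait can form:

```python
import threading

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire both locks in a pre-determined order (object id),
    # regardless of transfer direction, so deadlock is impossible.
    first, second = sorted((src, dst), key=id)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

a, b = Account(100), Account(0)
t1 = threading.Thread(target=transfer, args=(a, b, 30))
t2 = threading.Thread(target=transfer, args=(b, a, 10))
t1.start(); t2.start()
t1.join(); t2.join()
assert (a.balance, b.balance) == (80, 20)
```

Without the sorted() step, t1 could hold a's lock while t2 holds b's, each waiting on the other, which is exactly the circular waiting condition.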
blocking bounded queue
A bounded queue where a thread trying to remove an item from an empty queue will wait until an item is available, and a thread trying to put an item into a full queue will wait until there is room
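A Mesa-style sketch in Python: each operation holds the lock, waits in a while loop on its condition variable, and signals the opposite condition (the capacity and names here are illustrative):

```python
import threading
from collections import deque

class BlockingBoundedQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.items) == self.capacity:  # Mesa: recheck in a loop
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.lock:
            while not self.items:                    # Mesa: recheck in a loop
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item

q = BlockingBoundedQueue(2)
results = []
consumer = threading.Thread(target=lambda: results.extend(q.get() for _ in range(3)))
consumer.start()
for x in (1, 2, 3):
    q.put(x)       # third put may block until the consumer makes room
consumer.join()
assert results == [1, 2, 3]
```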
circular waiting
A necessary condition for deadlock to occur: there is a set of threads such that each thread is waiting for a resource held by another
I/O bound task
A task that primarily does I/O, and does little processing
compute bound task
A task that primarily uses the processor and does little I/O.
busy waiting
A thread spins in a loop waiting for a concurrent event to occur, consuming CPU cycles while it is waiting
lock
A type of synchronization variable used for enforcing atomic, mutually exclusive access to shared data.
space-sharing scheduling
allocating different processors to different tasks
no preemption
A necessary condition for deadlock to occur: once a thread acquires a resource, its ownership cannot be revoked until the thread acts to release it.
bounded resources
A necessary condition for deadlock: there are a finite number of resources that threads can simultaneously use
atomic read-modify-write instruction
A processor-specific instruction that lets one thread temporarily have exclusive and atomic access to a memory location while the instruction executes. Typically, the instruction (atomically) reads a memory location, does some simple arithmetic operation to the value, and stores the result.
liveness property
A constraint on program behavior such that something good eventually happens, e.g., the program eventually produces a result.
safety property
A constraint on program behavior such that it never computes the wrong result.
deadlock
A cycle of waiting among a set of threads, where each thread waits for some other thread in the cycle to take some action.
mutually recursive locking
A deadlock condition where two shared objects call into each other while still holding their locks. Deadlock occurs if one thread holds the lock on the first object and calls into the second, while the other thread holds the lock on the second object and calls into the first
nested waiting
A deadlock condition where one shared object calls into another shared object while holding the first object's lock, and then waits on a condition variable. Deadlock results if the thread that can signal the condition variable needs the first lock to make progress
spin lock
A lock where a thread waiting for a BUSY lock "spins" in a tight loop until some other thread makes it FREE.
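A sketch built on an emulated test-and-set; a real spin lock uses a hardware atomic instruction, and spinning in Python is only illustrative:

```python
import threading

class SpinLock:
    def __init__(self):
        self._busy = 0
        self._atomic = threading.Lock()  # emulates the atomic instruction

    def _test_and_set(self):
        with self._atomic:
            old, self._busy = self._busy, 1
            return old

    def acquire(self):
        while self._test_and_set() == 1:
            pass  # busy-wait (spin) until some thread frees the lock

    def release(self):
        # A plain store makes the lock FREE (real hardware also needs
        # a memory barrier here so earlier writes become visible first).
        self._busy = 0

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(500):
        lock.acquire()
        counter += 1     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 1000
```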
readers/writers lock
A lock which allows multiple "reader" threads to access shared data concurrently provided they never modify the shared data, but still provides mutual exclusion whenever a "writer" thread is reading or modifying the shared data
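A minimal reader-preference sketch in Python (a steady stream of readers can starve a writer; the single-threaded checks below just exercise the state machine):

```python
import threading

class ReadersWriterLock:
    def __init__(self):
        self.lock = threading.Lock()
        self.ok_to_proceed = threading.Condition(self.lock)
        self.active_readers = 0
        self.active_writer = False

    def start_read(self):
        with self.lock:
            while self.active_writer:
                self.ok_to_proceed.wait()
            self.active_readers += 1      # many readers may be active at once

    def done_read(self):
        with self.lock:
            self.active_readers -= 1
            if self.active_readers == 0:
                self.ok_to_proceed.notify_all()

    def start_write(self):
        with self.lock:
            while self.active_writer or self.active_readers > 0:
                self.ok_to_proceed.wait()
            self.active_writer = True     # writers get exclusive access

    def done_write(self):
        with self.lock:
            self.active_writer = False
            self.ok_to_proceed.notify_all()

rw = ReadersWriterLock()
rw.start_read(); rw.start_read()
assert rw.active_readers == 2   # two concurrent readers allowed
rw.done_read(); rw.done_read()
rw.start_write()
assert rw.active_writer
rw.done_write()
```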
scheduler activations
A multiprocessor scheduling policy where each application is informed of how many processors it has been assigned and whenever the assignment changes
wait while holding
A necessary condition for deadlock to occur: a thread holds one resource while waiting for another
work conserving scheduling policy
A policy that never leaves the processor idle if there is work to do.
priority inversion
A scheduling anomaly that occurs when a high priority task waits indefinitely for a resource (such as a lock) held by a low priority task, because the low priority task is waiting in turn for a resource (such as the processor) held by a medium priority task
gang scheduling
A scheduling policy for multiprocessors that performs all of the runnable tasks for a particular process at the same time
FIFO
A scheduling policy that performs each task in the order in which it arrives.
Round Robin
A scheduling policy that takes turns running each ready task for a limited period before switching to the next task.
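The policy can be sketched as a simulation over remaining service times (the quantum and task set are invented; the code relies on dicts preserving insertion order):

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks maps name -> service time; returns the completion order."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            order.append(name)                     # task finishes this turn
        else:
            queue.append((name, remaining - quantum))  # back of the line
    return order

assert round_robin({"A": 3, "B": 5, "C": 2}, quantum=2) == ["C", "A", "B"]
```

With a quantum larger than every service time, round robin degenerates into FIFO; as the quantum shrinks, short tasks finish earlier at the cost of more context switches.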
oblivious scheduling
A scheduling policy where the operating system assigns threads to processors without knowledge of the intent of the parallel application
affinity scheduling
A scheduling policy where tasks are preferentially scheduled onto the same processor they had previously been assigned, to improve cache reuse
multi-level feedback queue
A scheduling policy with multiple priority levels managed using round robin queues, where a task is moved between priority levels based on how much processing time it has used.
critical section
A sequence of code that operates on shared state
workload
A set of tasks for some system to perform, along with when each task arrives and how long each task takes to complete.
publish (for RCU)
A single, atomic memory write that updates a shared object protected by a read-copy-update lock. The write allows new reader threads to observe the new version of the object.
priority donation
A solution to priority inversion: when a thread waits for a lock held by a lower priority thread, the lock holder is temporarily increased to the waiter's priority until the lock is released.
safe state
A state of an execution such that regardless of the sequence of future resource requests, there is at least one safe sequence of decisions as to when to satisfy requests such that all pending and future requests are met
unsafe state
A state of an execution such that there is at least one sequence of future resource requests that leads to deadlock no matter what processing order is tried
read-copy-update (RCU)
A synchronization abstraction that allows concurrent access to a data structure by multiple readers and a single writer at a time.
synchronization barrier
A synchronization primitive where n threads operating in parallel check in to the barrier when their work is completed. No thread returns from the barrier until all n check in
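Python's threading.Barrier implements this directly; in the sketch below, no thread's phase-2 work can appear before every thread has finished phase 1:

```python
import threading

results = []
barrier = threading.Barrier(3)

def worker(n):
    results.append(("phase1", n))   # phase-1 work
    barrier.wait()                  # check in; blocks until all 3 arrive
    results.append(("phase2", n))   # phase-2 work starts only after that

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()

phases = [phase for phase, _ in results]
assert phases == ["phase1"] * 3 + ["phase2"] * 3
```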
condition variable
A synchronization variable that enables a thread to efficiently wait for a change to shared state protected by a lock.
MCS lock
An efficient spinlock implementation where each waiting thread spins on a separate memory location.
test and test-and-set
An implementation of a spinlock where a waiting processor first spins reading the lock value (a cheap cache hit) until it observes FREE, and only then attempts the atomic test-and-set, reducing memory traffic while waiting
memory barrier
An instruction that prevents the compiler and hardware from reordering memory accesses across the barrier: no accesses before the barrier are moved after it, and no accesses after it are moved before it.
optimistic concurrency control
allows transactions to execute in parallel without locking data, but only lets a transaction commit if none of the objects accessed by the transaction have been modified since it began.
lock-free data structure
Concurrent data structure that guarantees progress for some thread: some method will finish in a finite number of steps, regardless of the state of other threads executing in the data structure
grace period (for RCU)
For a shared object protected by a read-copy-update lock, the time from when a new version of a shared object is published until the last reader of the old version is finished.
per-processor data structure
a separate copy of a data structure, such as the scheduler's multi-level feedback queue, kept for each processor to reduce lock and cache contention
false sharing
Extra inter-processor communication required because a single cache entry contains portions of two different data structures with different sharing patterns.
concurrency
Multiple activities that can happen at the same time.
quiescent (for RCU)
No reader thread that was active at the time of the last modification is still active
Banker's algorithm
a deadlock-avoidance algorithm: a thread states its maximum resource requirements when it begins a task, then acquires and releases those resources incrementally; the system delays granting any request that would leave it in an unsafe state
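The heart of the algorithm is the safety check: grant a request only if the resulting state still admits some order in which every thread can finish. A sketch (the matrices are a standard textbook example, not from this course):

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: can every thread finish in some order?"""
    n = len(max_need)
    need = [[m - a for m, a in zip(max_need[i], allocated[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Thread i can run to completion, then release everything
                # it holds, growing the pool available to the others.
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Safe: the sequence P1, P3, P4, P0, P2 lets every thread finish.
assert is_safe([3, 3, 2],
               [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
               [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])

# Unsafe: nothing is available and the one thread still needs resources.
assert not is_safe([0, 0, 0], [[1, 1, 1]], [[0, 0, 0]])
```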
disable and enable interrupts
Privileged hardware instructions to temporarily defer any hardware interrupts, to allow the kernel to complete a critical task.
starvation
The lack of progress for one task, due to resources given to higher priority tasks.
time quantum
The length of time that a task is scheduled before being preempted.
ready list
The set of threads that are ready to be run but which are not currently running
deadlocked state
The system has at least one deadlock.
real-time scheduling
scheduling whose goal is to guarantee that tasks complete by their deadlines (e.g., earliest deadline first), rather than to optimize average response time or throughput
preemption
When a scheduler takes the processor away from one task and gives it to another.
mutual exclusion
When one thread uses a lock to prevent concurrent access to a shared data structure.
race condition
When the behavior of a program relies on the interleaving of operations of different threads
processor scheduling policy
When there are more runnable threads than processors, the policy that determines which threads to run first
dynamic priority scheduling
a scheduling algorithm that calculates priorities during execution
shortest job first preemptive
a scheduling policy that always runs the task with the shortest remaining execution time, preempting the running task if a newly arrived task is shorter (also called shortest remaining time first)
shortest job first non-preemptive
a scheduling policy that, whenever the processor becomes free, picks the ready task with the shortest execution time and runs it to completion
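A sketch of the non-preemptive variant: whenever the processor goes idle, it picks the shortest task among those that have arrived and runs it to completion (task tuples are (name, arrival, burst); the values are invented):

```python
def sjf_nonpreemptive(arrivals):
    """arrivals: list of (name, arrival_time, burst). Returns completion order."""
    pending = sorted(arrivals, key=lambda t: t[1])   # order by arrival time
    time, order = 0, []
    while pending:
        # Tasks that have arrived; if none, jump ahead to the next arrival.
        ready = [t for t in pending if t[1] <= time] or [pending[0]]
        job = min(ready, key=lambda t: t[2])          # shortest burst wins
        time = max(time, job[1]) + job[2]             # run it to completion
        order.append(job[0])
        pending.remove(job)
    return order

# A arrives first and runs non-preemptively even though C and B are shorter.
assert sjf_nonpreemptive([("A", 0, 8), ("B", 1, 4), ("C", 2, 2)]) == ["A", "C", "B"]
```

Under the preemptive variant (shortest remaining time first), A would be preempted at time 1, so the completion order would differ.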
priority boosting
the system temporarily raises (and later lowers) a thread's dynamic priority, e.g., after an I/O completes, to keep interactive threads responsive and to ensure no thread is starved of processor time
hardware instruction reordering
the CPU may execute and complete memory operations out of program order (e.g., buffering stores while letting later loads proceed), as long as the result is indistinguishable to a single-threaded program
synchronization
coordinating the actions of concurrent threads so that they cooperate correctly on shared state, using primitives such as locks, condition variables, and semaphores
compiler instruction reordering
the compiler optimizes the order of instructions, but only when single thread programs' behavior does not change
wait on condition variable
the current thread atomically releases the associated lock and blocks until the condition variable is notified (or a spurious wakeup occurs); it reacquires the lock before returning
mechanism
the implementations that enforce policies, and often depend to some extent on the hardware on which the operating system runs
apparent concurrency
the result of interleaved execution of concurrent activities
real concurrency
the result of overlapped execution of concurrent activities
deadline
the time by which a process needs to be finished
CPU burst
an interval during which a process executes on the CPU between I/O waits
signal on condition variable
unblock at least one of the threads that are blocked on the specified condition variable
broadcast on condition variable
unblocks all threads currently blocked on the specified condition variable
scheduling for condition variable signal under Hoare semantics
the signaling thread immediately hands the lock and the processor to the waiting thread, so the waiter can recheck the condition with a single if statement rather than a loop