Final COMP 530 - Operating Systems


Real-Time systems

'Adds temporal correctness to logically correct systems.' A type of multiprogramming system with a different objective: instead of simply executing things in parallel, the goal is to allow programs with a notion of a deadline or timing constraint to meet those constraints. It must do the right thing AND do it at the right time.

Atomic operation

As if the critical section happens in an instant. As if it were a single line of code.

Distributed systems

A collection of physically separate, possibly heterogeneous, computer systems that are networked to provide the users with access to the various resources that the system maintains. Not necessarily closely linked.

Clustered systems

A form of distributed, multi-CPU system, but the machines are closely linked by a local-area network. Used to provide high availability of service. Can be structured asymmetrically or symmetrically. Asymmetric clustering puts one machine in hot-standby mode to monitor the active server in case it goes down. Symmetric clustering allows all hosts to run applications and monitor each other.

Readers/Writers system

A generalization of producer/consumer systems, but consumers ('readers') do not destroy the data in the buffer; they just read it. There are multiple readers and multiple writers; readers may be reading simultaneously, but only one writer can be active at a time, and reading and writing cannot proceed simultaneously. We want to make sure readers don't starve writers (and vice versa). Monitors in a reader/writer system do not actually encapsulate the shared data (we want lots of readers to be able to read in parallel). Once you return from StartRead you know there is no writer writing (if there is a writer trying to write, wait). EndRead should decrement the number of readers and signal a writer once 0 readers are left. StartWrite ensures no one is writing or reading. EndWrite signals readers waiting to read if there are any, else signals writers. There is no starvation here because readers give priority to writers and writers give priority to readers.
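
A minimal sketch of the StartRead/EndRead/StartWrite/EndWrite routines, written with POSIX threads (the pthread mutex and condition variables stand in for the monitor machinery and are an assumption, not the course's notation). This simplified version prefers waiting writers, so it shows the structure rather than the exact mutual hand-off described above; the shared data itself lives outside these routines, so many readers can read in parallel.

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ok_to_read  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t ok_to_write = PTHREAD_COND_INITIALIZER;
static int readers = 0;          /* readers currently reading   */
static int writing = 0;          /* 1 while a writer is writing */
static int waiting_writers = 0;  /* writers queued up to write  */

void StartRead(void) {
    pthread_mutex_lock(&m);
    while (writing || waiting_writers > 0)   /* defer to writers (simplified) */
        pthread_cond_wait(&ok_to_read, &m);
    readers++;
    pthread_mutex_unlock(&m);
}

void EndRead(void) {
    pthread_mutex_lock(&m);
    if (--readers == 0)                      /* last reader out wakes a writer */
        pthread_cond_signal(&ok_to_write);
    pthread_mutex_unlock(&m);
}

void StartWrite(void) {
    pthread_mutex_lock(&m);
    waiting_writers++;
    while (writing || readers > 0)           /* wait until no one reads or writes */
        pthread_cond_wait(&ok_to_write, &m);
    waiting_writers--;
    writing = 1;
    pthread_mutex_unlock(&m);
}

void EndWrite(void) {
    pthread_mutex_lock(&m);
    writing = 0;
    if (waiting_writers > 0)                 /* another writer goes next ...  */
        pthread_cond_signal(&ok_to_write);
    else                                     /* ... otherwise release readers */
        pthread_cond_broadcast(&ok_to_read);
    pthread_mutex_unlock(&m);
}
```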

Hoare monitor

A high-level synchronization primitive distinguished by its synchronization semantics. Part of that synchronization is ensuring that the monitor invariant (the state of the monitor) and the condition associated with the condition variable are made true prior to signaling a waiting process. This allows us to assert that the condition is true when control returns to the waiting process. Suffers from the priority inversion problem.

Time-sharing systems

A logical extension of multiprogramming whereby the CPU executes multiple jobs by switching among them very quickly to give the appearance that the processes are executing in parallel.

Message passing

A mechanism used instead of shared-memory communication. Interprocess communication is instead explicit, and synchronization is implicit. Extensible to communication in distributed systems. send(ToProcessId, message), receive(FromProcessId, var message). Communication happens indirectly through the kernel (the kernel may hold the buffer we were using in previous examples).
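
One concrete realization, as an illustrative assumption rather than the course's notation, is a POSIX message queue: the kernel holds the message buffer between send and receive. The queue name "/demo" and the sizes are made up; on Linux, link with -lrt.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(q, "hello", 6, 0);              /* send(q, message)     */

    char buf[64];
    mq_receive(q, buf, sizeof buf, NULL);   /* receive(q, &message) */
    printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo");
    return 0;
}
```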

Yield

A process actively makes a transition to the ready state. Could be for a number of reasons, like cooperative multitasking: sprinkle in code to cooperatively yield the CPU and increase utilization.

Process control block

A process is represented in the operating system by this. It serves as a repository for any information that may vary from process to process.

Deadlock

A set of processes cannot make progress because each process is waiting on another process.

Trap

A software-generated interrupt caused either by an error or by a specific request from a user program that an operating-system service be performed.

System call

A special operation that generates a software interrupt to the processor, forcing it to run a service routine before continuing with the calculations it was doing. Generally, the actual call function generates a trap to transfer control to the operating system where the real desired call is made. Provides an interface to the services made available as routines written in C, C++, and assembly language. System calls are executed in a separate context from the user context. 'You don't just branch to the code, you get into the operating system by software interrupt'.
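
A small Linux-specific illustration (assuming Linux and glibc) of the difference between calling the library wrapper and issuing the trap "by hand": write() is the normal C library routine, while syscall() makes the same request by raw system-call number.

```c
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello via write()\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);               /* C library wrapper */

    const char raw[] = "hello via syscall()\n";
    syscall(SYS_write, STDOUT_FILENO, raw, sizeof raw - 1);  /* explicit trap     */
    return 0;
}
```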

Batch systems

A system where jobs are processed in bulk.

Time quantum

A time slice of the processor allocated to the running process. Small quantums seem more responsive but the overhead of round robin goes up because there are more context switches. Larger time quantums would be better in systems where there is a saturation of processes that take longer to execute. Typically around 10-50ms in length.

Medium-term scheduling

Adjusts the level of multiprogramming by suspending processes. Lets everything into the system and if it determines there is a performance problem it will suspend some process.

Long-term scheduling

Adjusts the level of multiprogramming through admission control. When a process is created it is not immediately put into the ready queue; it is instead placed into the admission queue. Typically will block CPU-bound processes if a lot of I/O-bound jobs are executing.

Correctness Conditions for a Producer/Consumer System

All data produced by the producer is eventually consumed by the consumer ('liveness': eventually something good happens). The consumer only consumes a given data item once. The consumer only consumes data items produced by the producer.

Ready/running/wait states

All processes in the system are represented by a state. Ready signifies that a process has everything it needs to execute; it just needs to be allocated the processor. Running means the process has been allocated the processor. Waiting signifies that a process is waiting on some event like I/O completion.

General semaphore

Also called counting semaphores; the values are not limited to 0 and 1. Often keep track of the state of some resource. Example: in a producer/consumer system, counting semaphores can be used to keep track of the number of full and empty buffers. The conditional synchronization of down() will then logically make a process wait to produce a character if there are 0 empty buffers. When implementing semaphores with a kernel (instead of burning up the CPU just spinning when you have to wait), the structure of a semaphore includes a binary semaphore, a number of waiting processes, a queue of those processes, and a value.
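
A bounded-buffer sketch along those lines using POSIX counting semaphores (sem_t, the buffer size, the names, and the char item type are all illustrative assumptions). empty_slots counts empty buffers, full_slots counts full ones, and a binary semaphore guards the buffer indices; buffer_init() must run once before any producer or consumer starts.

```c
#include <semaphore.h>

#define N 8
static char buf[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots, mutex;

void buffer_init(void) {
    sem_init(&empty_slots, 0, N);   /* N empty buffers initially          */
    sem_init(&full_slots, 0, 0);    /* no full buffers yet                */
    sem_init(&mutex, 0, 1);         /* binary semaphore: mutual exclusion */
}

void produce(char c) {
    sem_wait(&empty_slots);         /* wait if there are 0 empty buffers  */
    sem_wait(&mutex);
    buf[in] = c;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full_slots);          /* one more full buffer               */
}

char consume(void) {
    sem_wait(&full_slots);          /* wait if there are 0 full buffers   */
    sem_wait(&mutex);
    char c = buf[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty_slots);         /* one more empty buffer              */
    return c;
}
```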

Semaphore

An abstract data type that has semantics that guarantee mutual exclusion. It encapsulates a non-negative integer and exports a down() [P(), wait()] and up() [V(), signal()] function. Synchronization takes place in the down function, which basically means "decrement the value of the semaphore if > 0, else wait". The up and down operations are assumed to be atomic. Major issue with semaphores is that they are used for both mutual exclusion and condition synchronization, so in multi producer or multi consumer systems, the relationship between the functions of the semaphores is incredibly unclear.

Priority

An integer used to sort the ready queue. Typically low values equal high priority.

Turnaround time

An older term for batch jobs referring to the time it takes from start to receiving the finished reports.

Short-term scheduling

By and large what you think about when you talk about scheduling: determine which process should execute next. Scheduling decisions are made when a process makes a transition: 1) running to waiting, 2) running to ready, 3) waiting to ready, 4) a process is created, 5) running to terminated.

Personal/Big Iron systems

Personal systems: computing for interactivity, single user, small response time is the primary concern. Big iron systems: I/O intensive, memory-access intensive, typically supporting many users; throughput and transactions per second are the primary concern.

Processor state

Consists of the values of all processor registers.

Memory state

Contains the program code, data, and execution stack for the process. More specifically, the program code is just binary executable code (a.out), the data segment contains everything your program needs to execute, the execution stack implements procedure call and return.

Move the setting of the status flags to before conditional synch

A correct mutual exclusion algorithm in the sense that no two processes are ever in the critical section at the same time, but it fails the bounded waiting condition: if the processes execute in lockstep, neither ever enters the critical section.

Condition variable

Declared in your program, the condition variable represents a queue in synchronization with monitors. You can perform three operations on the condition variables: wait(cv), signal(cv), empty(cv). Wait blocks the caller and puts it onto a condition specific queue in the waiting state [not a conditional wait, if you call wait, you're blocked]. Signal will transition a waiting process on a condition variable to the ready state [but no concept of a count]. Empty allows you to test if anybody is waiting.

Interrupt enable/disable

Disable interrupts to ensure the critical section is non-preemptible. Typically you want to execute with the interrupts enabled as often as possible or else you won't be able to time slice or do I/O.

Correctness Conditions

Does it guarantee mutual exclusion? (Multiple processes running the entry protocol) Expedient (processes can't delay the decision about who enters)? Fail-safe (If a process fails outside of the critical section it doesn't preclude another process from entering)? Bounded waiting (Can we guarantee every process gets into the critical section, is there a bound on how long a process may have to wait)?

Execution state

Either a process is ready to execute (meaning it has all of the resources it needs to execute), running (it has been allocated the processor), or waiting (it is waiting for an event).

Mutual exclusion

Ensuring that there is at most one process inside the critical section at any given time.

Response time

For interactive jobs, typically in the main loop of the program we want to judge the turnaround time for one instance of the loop. In a shell, for example, the program may be open for hours but we want to judge the time it takes to read a command, fork a process, and wait for it to finish.

Round-Robin

For time-sharing systems. Allocate the processor in discrete units called quantums. Switch to the next ready process at the end of each quantum. With n ready processes and quantum q, each process waits at most (n − 1)q time units before it runs again.

Monolithic OS

Operating system is a single program.

Throughput

The number of jobs or transactions completed per unit time. It is a rate.

Interrupt

Hardware or software may trigger an interrupt (signifying some event) by sending a signal to the CPU. Modern computers are interrupt driven, meaning they will do nothing if there are no devices to service or users to respond to.

Test-and-Set

A hardware-provided instruction that performs a multi-step operation (a LOAD, COMPARE, and STORE) as one indivisible operation. It takes a value and a memory location and returns the value; if that value equals the value at the memory location, the location is set to zero. The boolean test-and-set simply returns the boolean value it is passed but sets it to false. "Like a 'read' with a side effect of setting the value to false."
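
A busy-waiting spinlock built from test-and-set, sketched with the GCC/Clang builtin __atomic_test_and_set. This is the conventional variant, which atomically sets the flag and returns its previous value; the lecture's boolean version inverts the sense by clearing the flag instead.

```c
static volatile char lock_flag = 0;   /* 0 = free, 1 = held */

void acquire(void) {
    /* Spin until the old value was 0, i.e. we were the one who set it.
       The read and the set happen as one indivisible operation. */
    while (__atomic_test_and_set(&lock_flag, __ATOMIC_ACQUIRE))
        ;                             /* busy-wait */
}

void release(void) {
    __atomic_clear(&lock_flag, __ATOMIC_RELEASE);
}
```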

Off-line operation

I/O is completed without the CPU. The CPU reads from one tape drive and writes to a separate tape drive. A card reader writes to a tape drive, and a tape drive can also write to a line printer. This allows reading and writing to take place independently of the CPU.

Convoy effect

I/O-bound processes appear to execute in a convoy immediately following CPU-hogging processes. This happens because if there is no mechanism to control the order of execution between larger and smaller jobs (say they are served on a first-come-first-served policy), I/O-bound jobs that only require a very small amount of CPU time spend a lot of time waiting for the CPU-bound jobs even though the I/O device is ready to work.

Non-preemptive scheduling

If another process comes along, it must wait for the first process to make a state transition.

Critical section

If shared memory locations are manipulated in multiple programs then there is a possibility that the value can become corrupted (preemption at a bad time can leave the producer and consumer with different values for the shared variable). Critical section problem can lead to a violation in one or more correctness conditions of the producer/consumer paradigm (if the critical section is code that synchronizes the two). Operations on critical sections must appear to be atomic.

Priority inversion

In Hoare monitor semantics with priority scheduling, the situation arises where the producer may signal the low-priority consumer [that the buffer is not empty] and the consumer begins to consume data, but is then preempted while holding the monitor lock by a medium-priority process, which in turn is preempted by a high-priority consumer or producer. The high-priority process will try to consume [or produce] data but find that the monitor lock is held by the low-priority consumer. It then appears that high-priority processes are fundamentally limited in how quickly they may execute by the low-priority consumer. Called a priority inversion because even though the high-priority process should execute as quickly as possible, it is executing at the rate of the low-priority consumer. The Mars Pathfinder mission famously suffered from priority inversion.

Synchronized class

In Java you do not have something called a 'Mesa monitor'; they are instead called synchronized classes. The keyword 'synchronized' is placed in the function definition and guarantees that the function is executed in a mutually exclusive manner with respect to all other functions in that class marked 'synchronized'. wait() and notify() use Mesa semantics without a condition variable; instead, there is only one queue. They do this because they determined you don't need separate queues: consumers and producers will not be waiting at the same time. When you do a notify in remove, you are only ever releasing a producer; when you do a notify in deposit, you are only ever releasing a consumer.

Admission queue

In Long-term scheduling systems, processes are not immediately placed onto the ready queue when they are created, they go here. Admission control keeps the saturation of processes even by not letting a process into the ready queue if the system is over-saturated.

Suspended queue

In Medium-term scheduling systems, certain jobs that are ready to execute are taken out of the ready queue and suspended for performance reasons.

Layered OS

In early unix systems, the OS was designed such that the hardware layer was at the bottom, device management and process management on top of that, and user applications at the top. Layered operating systems are also monolithic.

Multiprogramming

Increases CPU utilization by structuring jobs so that the CPU always has one to execute. The OS keeps several jobs in memory simultaneously.

Preemption

Interruption of the running process.

Urgent queue

Just like the queue to get into the monitor, but this is a high-priority queue whose processes are in the waiting state. If processes are waiting on both the monitor entry queue and the urgent queue, the monitor prioritizes the process on the urgent queue first.

Combine alternation with status flags

L5P2 21:00: why is this a correct solution? If the processes execute in lockstep, it looks like neither will get into the critical section. (In fact the shared turn variable breaks the tie: it can only hold one value, so even when both processes set their flags in lockstep, one of them finds its wait condition false and enters.)
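
A sketch of the combined algorithm (this is Peterson's two-process algorithm; the names inCS and turn follow the cards above, and on modern hardware memory fences would also be needed).

```c
#include <stdbool.h>

static volatile bool inCS[2] = { false, false };
static volatile int turn = 0;

void enter(int me) {                  /* me is 0 or 1 */
    int other = 1 - me;
    inCS[me] = true;                  /* status flag: announce intent     */
    turn = other;                     /* alternation: give away the turn  */
    while (inCS[other] && turn == other)
        ;                             /* spin only while the other process
                                         wants in AND holds the turn      */
}

void leave(int me) {
    inCS[me] = false;
}
```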

Monitor

Largely how synchronization is done in most programs. A higher level primitive than semaphores. An encapsulation mechanism that encapsulates some shared data and functions to operate on the data. The functions are guaranteed to be executed in a mutually exclusive manner. Conditional synchronization is done through the condition variable. Much clearer than semaphores because you do not have to worry about also implementing mutual exclusion.

Starvation

Low priority processes can be repeatedly pushed to the bottom of the ready queue and it is possible that it could never execute. Classical way to deal with this is aging.

Multiprocessor systems

Multiprocessor systems generally derive their names from where memory is located. Memory can be remote or local ("Shared Memory"->Multi-core processing, "Distributed Memory"->Distributed/Clustered systems where the memory you care about is spread across many devices).

Status flags

Mutual exclusion algorithm that uses an array of booleans and simply tests whether the other process is inside the critical section; if not, it enters the critical section. This is not a correct algorithm because it introduces race conditions: the correctness of the code depends on the order of execution of the processes. In this specific example P1 and P2 race, can each see the other's inCS value as false, and both enter the critical section.

Shortest-Job-First

Optimal scheduling policy. Sort the ready queue by an estimate of how long a process will need the cpu. The head of the queue will have the job with the shortest completion time. Supermarket express lanes. Can be either preemptive or non-preemptive.

Indivisible operation

Non-preemptible, atomic operation. You are disallowing an operation from being divided up.

Reset the flag if critical section is busy

Not a correct mutual exclusion algorithm. Instead of spinning while the other process's inCS flag is true, a process resets its own flag and retries when the critical section is busy. This is not an expedient solution because both processes can spin indefinitely, each waiting for the other to get into the critical section. It also doesn't provide bounded waiting, because P2 could conspire to never let P1 in simply by executing a faster loop.

Virtual machine emulation

Often monolithic operating systems exist to export a hardware interface (something that looks like abstract hardware) of the processor(s) they run on. The OS can do this for a number of user applications so that each can have its own copy of the OS. These are 'virtual machines', and the approach is popular with Java.

Aging

Over time, we decrease the priority value of the process thereby making sure it will eventually execute.

Ready queue

Processes in the ready queue are those that already have all the resources needed to execute.

Waiting queue(s)

Processes that are waiting for an event or interrupt to occur are placed in the waiting queue.

I/O bound

Processes that spend the most time doing I/O. Most things you would interact with.

CPU bound

Processes that spend the most time using the CPU.

Dispatch

Removing a process from the ready queue (always dequeue from the head).

Busy-waiting

Simplest form of conditional synchronization. Do a NOOP while some condition is true.

First-Come-First-Served

Simplest scheduling policy; the measured average response time can be very large if large jobs execute before short jobs. This is the making of the convoy effect. Assumes a non-preemptive system. The worst policy you can imagine for response time.
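
For a concrete illustration (the numbers are made up): with three jobs needing 24, 3, and 3 time units arriving in that order, FCFS gives waiting times of 0, 24, and 27 (average 17), while running the two short jobs first gives waiting times of 0, 3, and 6 (average 3).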

Binary semaphore

Special semaphore to solve the critical section problem. It only ever has the value 0 or 1. Binary semaphores used to provide mutual exclusion should always be initialized to 1.

Spooling

Spooling can be used when the output device operates slower than the CPU. Without spooling the CPU must wait for a printer to finish printing its current job before the CPU can send the data to be printed. With spooling the CPU can unload the print data onto a buffer such as a disk drive. The printer can read the print jobs from the buffer. The CPU can continue working after unloading the print job onto the buffer without having to wait for the printer.

Procedure call

Standard procedure calls simply create an activation record on the stack in the user context and branch to the code to execute in the text segment of the user context.

Buffering

Storing data temporarily in a buffer. A program can save its output to a buffer and continue its execution without having to wait for its output to be read. Buffers can be bounded, unbounded, or zero-capacity.

Mesa monitor

Synchronization semantics are a little different from Hoare monitors. Notify is a hint: "I just incremented fullcount, signifying I filled a buffer, but who knows whether the buffer will still be full by the time you get there." Testing fullcount should therefore be a while loop, so the condition is retested when the process wakes up. Much easier to implement than a Hoare monitor.
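
POSIX condition variables have Mesa semantics, so they make a convenient sketch of "notify as a hint": the waiter re-tests fullcount in a while loop after waking. The function names here are made up for illustration, and the rest of the buffer handling is omitted.

```c
#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int fullcount = 0;             /* number of full buffers         */

void consumer_wait_for_data(void) {
    pthread_mutex_lock(&m);
    while (fullcount == 0)            /* re-test: the hint may be stale */
        pthread_cond_wait(&cv, &m);
    fullcount--;                      /* ... consume one buffer ...     */
    pthread_mutex_unlock(&m);
}

void producer_filled_buffer(void) {
    pthread_mutex_lock(&m);
    fullcount++;                      /* "I just filled a buffer"       */
    pthread_cond_signal(&cv);         /* notify as a hint               */
    pthread_mutex_unlock(&m);
}
```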

Interactive systems

System which provides direct communication between user and the system.

Multilevel feedback queues

Systems like Linux, and UNIX in general, use this. A combination of priority and round robin: there are n ready queues for n priority levels, and within each queue round-robin scheduling is executed until the queue is empty. If a process does not finish, it is placed at the end of the next lower priority level. Also note that the highest-priority queues execute with the smallest quantums, so as a program needs to execute for longer it gets more and more CPU time as it is repeatedly demoted. Processes can't be starved to death.

On-line operation

The CPU reads directly from a card reader and writes directly to a line printer. This means the CPU must wait on the card reader to be reloaded or for the line printer to finish printing.

Process

The basic unit of execution in an operating system. Defined by its PCB. Things that have their own entire memory context.

Multiprocessing

The interconnection between processors is a memory bus in tightly coupled/parallel systems. In a symmetric multiprocessor every processor has a full copy of the operating system. Asymmetric multiprocessing is when not every processor has a full copy of the OS; a processor that does dispatches jobs to those that do not.

Context

The name of the data structure that the operating system maintains for each process. Contains everything the OS should want to know about the process. Context contains the state of memory, state of execution, and state of the processor.

Multiprogramming level

The number of processes that exist in the system.

Context switch

The process of switching from one process to another. Operating system must make sure that when it is time for some program to execute that its context is loaded into memory. Once a system call is made to the OS, the context is saved before scheduling another process. Once it is time to resume the first program, the state will be restored so that the process executes with the exact same execution environment as before the system call.

Optimal processor scheduling

This means just that you can't do better, not that it is the ONLY scheduling policy that can do this well.

Thread

A thread is a sub-process that shares the memory of its process with the other threads of that process. When switching between threads you no longer have to worry about the overhead of saving the full memory context of the process; you just save the register values when switching contexts. Much lighter weight!
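
A minimal POSIX threads sketch (compile with -pthread): the global variable is shared by both threads, and only register state plus a stack is switched between them.

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                /* visible to every thread in the process */

static void *worker(void *arg) {
    (void)arg;
    shared += 1;                      /* reads/writes shared memory directly */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);            /* wait for the thread to finish */
    printf("shared = %d\n", shared);
    return 0;
}
```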

Shared memory

Threads communicate by reading and writing variables. Threads share a set of global program variables.

Waiting time

Time spent waiting in queues such as the ready queue (not the waiting queue!).

Real-Time scheduling

Timing constraints are translated into deadlines or rate requirements. Rate-monotonic scheduling assigns a priority value by computing 1/period, where the period is how often the loop must run (1/.033 ≈ 30 for video, 1/.020 = 50 for audio). Deadline scheduling sets the priority to the release time + period and must be dynamically recomputed. Most people actually implement rate monotonic, since you want control over the priorities of the processes; deadline scheduling changes them dynamically.

Direct memory access

To reduce the overhead of bulk data transfer with devices, DMA allows the device controller to transfer an entire block of its local buffer to memory without intervention by the CPU. Only one interrupt is generated per block instead of one interrupt per byte on low-speed devices.

Mutex Algorithms

Turn taking/strict alternation: an algorithm for mutual exclusion that forces the processes to alternate by conditionally synchronizing on some value denoting whose turn it is to get into the critical section. Note that this algorithm is not expedient (if P2 has a lot of code in its main loop, P1 could return back up to the conditional synchronization line and not get into the critical section even though P2 is not actively in the critical section) or fail-safe (if P2 were to fail in its lots of code, P1 could only get into the critical section a single time).
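
A sketch of strict alternation for two processes (the process ids 0 and 1 and the function name p are assumptions; each process runs p() with its own id).

```c
static volatile int turn = 0;         /* whose turn it is to enter */

void p(int me) {                      /* body of process me (0 or 1) */
    for (;;) {
        while (turn != me)
            ;                         /* conditional synchronization: spin  */
        /* critical section */
        turn = 1 - me;                /* hand the turn to the other process */
        /* lots of other code */
    }
}
```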

Producer/Consumer system

Two threads that communicate by a shared memory buffer. The producer produces data (writes it to the shared buffer), and the consumer reads from the buffer. The producer and consumer have to coordinate their activities.

Client/Server system

Under the covers of the microkernel, a client/server architecture is used to allow user applications to access services. This makes it easy to extend to distributed systems because you can use the same client/server approach and not change anything about the OS or the client program and instead use stub procedures to initiate client communication with the machine with the desired server program.

Utilization

Unitless measure of the fraction of time a resource is busy. Expressed as a percentage.

Cooperative multitasking

User programs are written to give up the CPU in a cooperative way, so that computationally long programs do not monopolize the processor.

Condition synchronization

Waiting to synchronize based on some condition of the state of the computation. Often used in producer/consumer systems that keep track of characters in a buffer, for example waiting before adding to the buffer if the buffer is already full.

Dual-mode operation

We needed some mechanism to be able to distinguish between user-code and operating system-code in order to keep errant users from incorrectly using privileged instructions that could have some negative effect on the system. They accomplish this by setting a 'mode' bit to 1 if operating in user-mode and 0 if operating in kernel mode. Privileged instructions can only be executed in kernel-mode.

Fork/Join

When fork is called, a new child process is created whose text segment is the code for the function passed to fork. The first statement after the fork() then executes logically in parallel with the new child process. Join is a synchronization primitive that allows a parent and child to synchronize. In Linux, you make a system call to fork() and it duplicates the parent process; both then execute the line after fork(), so you must use an if-statement to separate the work of the parent and child.
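
The Linux flavor described above, as a minimal sketch: fork() duplicates the parent, both processes continue at the next line, an if-statement separates their roles, and waitpid() plays the part of join.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* duplicate this process       */
    if (pid == 0) {                   /* child branch                 */
        printf("child: doing the forked work\n");
        return 0;
    }
    /* parent branch */
    waitpid(pid, NULL, 0);            /* join: wait for the child     */
    printf("parent: child finished\n");
    return 0;
}
```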

Preemptive scheduling

When some external event occurs (a timer interrupt, IO completion interrupt, a process made a transition from waiting to ready), the running process is preempted. Whether or not the preempting process is allocated the CPU is a policy decision.

Microkernel

Whenever a user wants to use an OS service, it is actually implemented as another user program on top of the OS Kernel. Kernel as a communication substrate used to glue parts of the OS together.

