Concurrency

Atomic Locking

Compare & Swap Instruction: also called a "compare and exchange" instruction. A compare is made between a memory value and a test value; if the values are the same, a swap occurs. The compare and swap is carried out atomically.

Key words

Concurrency, Mutual Exclusion, Hardware Support, Semaphores, Monitors, Readers/Writers Problem, Deadlocks, Detection & Recovery, Avoidance, Prevention

Disable interrupts

Interrupt Disabling:
- uniprocessor system: disabling interrupts guarantees mutual exclusion
Disadvantages:
- the efficiency of execution could be noticeably degraded
- this approach will not work in a multiprocessor architecture

Semaphores

One way to solve the lost wakeup problem is by using a lock. Another way is to use a semaphore: an integer that counts the number of wakeups pending.

Critical Section

A piece of code (e.g., an ATM balance update) that must run atomically. Mutual exclusion: ensure at most one process is inside it at a time.

Race Conditions

Two (or more) processes run in parallel and the output depends on the order in which they are executed.
ATM example: SALLY: balance += $50; BOB: balance -= $50. Question: if the initial balance is $500, what will the final balance be?
SALLY: X = ReadBalance() (500); X = X + 50; WriteBalance(X)
BOB: Y = ReadBalance() (550); Y = Y - 50; WriteBalance(Y)
Net: $500. This (or the reverse order) is what you'd normally expect to happen.
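
As a concrete illustration (a minimal sketch, not from the course materials: the thread names and the pthread usage are ours), the same read-update-write race can be reproduced with two POSIX threads:

    #include <pthread.h>
    #include <stdio.h>

    long balance = 500;                  /* shared ATM balance */

    void *sally(void *arg) {             /* balance += 50, non-atomically */
        long x = balance;                /* read   */
        x = x + 50;                      /* update */
        balance = x;                     /* write  */
        return NULL;
    }

    void *bob(void *arg) {               /* balance -= 50, non-atomically */
        long y = balance;
        y = y - 50;
        balance = y;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sally, NULL);
        pthread_create(&t2, NULL, bob, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %ld\n", balance);  /* usually 500; 450 or 550 if the race hits */
        return 0;
    }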

Critical Sections Requirements

1. No two processes may be simultaneously inside their critical regions.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block any process.
4. No process should have to wait forever to enter its critical region.

Concurrency

1. Understand the challenges concurrency poses in an operating system.
2. Understand the definitions of key terms like race condition, critical section, mutual exclusion, etc.
3. Understand multiple approaches to enforcing mutual exclusion.
4. Understand classic problems like Readers-Writers and Dining Philosophers.
5. Understand what a deadlock is, and deadlock prevention and avoidance.

Peterson's Algorithm

A concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict. Formulated by Gary Peterson in 1981. While the original formulation was only for 2 processes, it can be generalized to more than 2.

Mutex

A mutex is a way to have mutual exclusion without busy waiting. The implementation is very similar to the busy-wait version of critical regions, but instead of looping, we yield the CPU.
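
A minimal sketch of that idea (not a particular library's mutex): GCC's __sync_lock_test_and_set builtin stands in for an atomic test-and-set, and POSIX sched_yield() gives up the CPU instead of spinning.

    #include <sched.h>                  /* sched_yield() */

    void mutex_lock(volatile int *mutex) {
        while (__sync_lock_test_and_set(mutex, 1) != 0)
            sched_yield();              /* lock busy: yield rather than busy-wait */
    }

    void mutex_unlock(volatile int *mutex) {
        __sync_lock_release(mutex);     /* store 0 back into the mutex */
    }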

Atomic Locking

A similar instruction exists on x86: xchg REG, MEM. It atomically exchanges the contents of a register and a memory location. You can see that this is equivalent to TSL if the register is set to 1.
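
A sketch of a spin lock built on xchg, modeled on xv6's xchg() helper (x86 and GCC inline assembly only):

    static inline unsigned int xchg(volatile unsigned int *addr, unsigned int newval) {
        unsigned int result;
        asm volatile("lock; xchgl %0, %1"
                     : "+m"(*addr), "=a"(result)
                     : "1"(newval)
                     : "cc");
        return result;
    }

    void spin_acquire(volatile unsigned int *lk) {
        while (xchg(lk, 1) != 0)
            ;                           /* old value was 1: someone else holds the lock */
    }

    void spin_release(volatile unsigned int *lk) {
        xchg(lk, 0);                    /* hand the lock back */
    }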

Atomic Operations Solutions

Atomic operations are at the root of most synchronization solutions. The processor has to support some atomic operations; if not, we're stuck! The OS uses these low-level primitives to build up more sophisticated atomic operations, for example, locks that support blocking instead of busy-waiting. We'll look at an example soon.

Producer-Consumer Problem

Assume the buffer is empty and the consumer has just read count to see if it's 0. At that point the scheduler decides to stop running the consumer temporarily and start running the producer. The producer inserts an item into the buffer, increments count, and notices that it's now 1. Since count is now 1, it wakes up the consumer. Unfortunately the consumer is not yet asleep (just preempted), so the signal is lost. When the consumer next runs, it still believes the count it read is zero, so it goes to sleep. Sooner or later the producer will fill up the buffer and go to sleep too. Both will sleep forever. Could you fix it? Wakeup bit...
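
A sketch of this broken design, in the style of the classic textbook version; produce_item, insert_item, remove_item, consume_item and the sleep/wakeup primitives are assumed here, not defined.

    #define N 100                       /* buffer size */
    int count = 0;                      /* items currently in the buffer */

    void producer(void) {
        int item;
        while (1) {
            item = produce_item();
            if (count == N)
                sleep();                /* buffer full: wait for a slot */
            insert_item(item);
            count = count + 1;
            if (count == 1)
                wakeup(consumer);       /* buffer was empty: wake the consumer */
        }
    }

    void consumer(void) {
        int item;
        while (1) {
            if (count == 0)
                sleep();                /* buffer empty: the lost-wakeup window is right here */
            item = remove_item();
            count = count - 1;
            if (count == N - 1)
                wakeup(producer);       /* buffer was full: wake the producer */
            consume_item(item);
        }
    }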

Peterson's Algorithm

Before using the shared variables, each process calls enter_region() with its own process number, 0 or 1, as a parameter. This will cause it to wait until it's safe to enter. After it has finished with the shared variables, it calls leave_region() to indicate that it is done and to allow the other process to enter. Let's see how this works: initially no process is in the critical region. Process 0 calls enter_region(). It indicates interest by setting its element of the interested array and also sets turn to 0.

Locks and Interrupts

But now consider what happens if we get an interrupt that calls interrupt_handler after we acquire the lock. We re-enter interrupt_handler, which tries to acquire the lock... but it can't! The lock is held by the earlier call to the interrupt handler, which cannot run again to release it until the new invocation returns, so the system deadlocks.

Peterson's Algorithm

Each process indicates its interest by setting its entry in the "interested" array. Then, it sets the global turn variable to its own process number. Finally, it busy-waits until either the other process is no longer interested or the other process has written turn more recently, at which point it is safe to proceed. There is still a race on turn, but regardless of the winner, only one process will get to enter its critical region (see the sketch below).
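
A C sketch of enter_region()/leave_region() along the lines of the description above (on modern hardware the shared variables would also need volatile and memory fences to stop the compiler and CPU from reordering accesses):

    #define FALSE 0
    #define TRUE  1
    #define N     2                     /* number of processes */

    int turn;                           /* whose turn is it? */
    int interested[N];                  /* all values initially FALSE */

    void enter_region(int process)      /* process is 0 or 1 */
    {
        int other = 1 - process;        /* the other process */
        interested[process] = TRUE;     /* show interest */
        turn = process;                 /* set the turn last */
        while (turn == process && interested[other] == TRUE)
            ;                           /* busy wait until it is safe to enter */
    }

    void leave_region(int process)
    {
        interested[process] = FALSE;    /* done with the critical region */
    }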

Race Conditions

If the work to be done by a computer can be organized so that some portions of the work can be done in parallel, then a system with multiple processors will yield greater performance than one with a single processor of the same type. This is illustrated in Figure 2.12. With multiprogramming, only one process can execute at a time; meanwhile all other processes are waiting for the processor. With multiprocessing, more than one process can be running simultaneously, each on a different processor.

Race Conditions

If two processes or threads need to update some data at the same time, we may have a race condition. The name comes from the idea that the two are racing each other to be the first to read or write the data. These correctness problems are notoriously difficult to debug: the problem only occurs when the timing is just right, and it is hard to reproduce. Note that we don't actually need true parallelism for a mistake to occur, just preemption at the wrong time. Multiple processors do make this sort of problem more likely to manifest, though.

Producer-Consumer Problem

Imagine we have two tasks: one that produces items and places them in a fixed-size buffer, and one that consumes them. An example you have seen already: a pipe! If the buffer is full, the producer sleeps until there's space. If the buffer is empty, the consumer sleeps until there's data available.

Disable interrupts

In a uniprocessor system, concurrent processes cannot have overlapped execution; they can only be interleaved. Furthermore, a process will continue to run until it invokes an OS service or until it is interrupted. Therefore, to guarantee mutual exclusion, it is sufficient to prevent a process from being interrupted. This capability can be provided in the form of primitives defined by the OS kernel for disabling and enabling interrupts. Because the critical section cannot be interrupted, mutual exclusion is guaranteed. The price of this approach, however, is high. The efficiency of execution could be noticeably degraded because the processor is limited in its ability to interleave processes. Another problem is that this approach will not work in a multiprocessor architecture. When the computer includes more than one processor, it is possible (and typical) for more than one process to be executing at a time. In this case, disabled interrupts do not guarantee mutual exclusion.
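
A minimal sketch of the uniprocessor approach; disable_interrupts() and enable_interrupts() are hypothetical kernel primitives (on x86 they would correspond to the cli and sti instructions):

    void update_shared_data(void)
    {
        disable_interrupts();   /* hypothetical primitive: no preemption from here on */
        /* ... critical section: touch the shared data ... */
        enable_interrupts();    /* hypothetical primitive: preemption is possible again */
    }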

Sleep and Wakeup

Instead of waiting in a loop, wasting CPU, we would like to put the waiting process to sleep and wake it up when the lock is released. Often, sleep and wakeup take as a parameter the address of some variable so we can match up sleeps with wakeups. E.g., in xv6: sleep(void *chan) and wakeup(void *chan).

Atomic Operations

A series of operations that cannot be interrupted. Some operations are atomic with respect to everything that happens on a machine; other operations are atomic only with respect to conflicting processes, threads, interrupt handlers, etc. On typical architectures: individual word loads/stores and ALU instructions, plus synchronization operations (e.g., fetch_and_add, cmp_and_swap). ATM example: the balance updates were NOT atomic. Solution: enforce atomic balance updates. Question: how?
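
One answer to the "how?" question, as a minimal sketch: let the hardware do the read-modify-write in a single atomic operation, here via GCC's __sync_fetch_and_add and __sync_fetch_and_sub builtins (the function names are ours):

    long balance = 500;                           /* shared ATM balance */

    void deposit(long amount) {
        __sync_fetch_and_add(&balance, amount);   /* atomic balance += amount */
    }

    void withdraw(long amount) {
        __sync_fetch_and_sub(&balance, amount);   /* atomic balance -= amount */
    }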

The Lost Wakeup Problem

Suppose the buffer is empty, and the consumer checks that count is 0. Just before the consumer actually goes to sleep, it gets preempted, and the producer puts something in the buffer. Since count is now 1, the producer tries to wake up the consumer, but the consumer isn't asleep yet, so the wakeup does nothing. Control returns to the consumer, who goes to sleep and never wakes up: it has missed its wakeup.

Priority Inversion

Suppose we're using priority scheduling, and we have a high-priority process H and a low-priority process L, so whenever H is runnable, it will always be chosen over L. Now, L enters a critical region, but is then preempted to run H. H wants to enter the critical region, but the lock is already held by L, so it enters a busy-wait loop. But now H is always runnable and will always be chosen over L, so L can never leave its critical region and the system is stuck.

Synchronization Problems

Synchronization can be required for different resources. Memory: e.g., a multithreaded application. OS object: e.g., two processes that read/write the same system file. Hardware device: e.g., two processes that both want to burn a DVD. There are different kinds of synchronization problems: sometimes we just want activities to not interfere with each other; sometimes we care about ordering.

Lock

A synchronization mechanism that enforces atomicity. Semantics: Lock(L): 1. If L is not currently locked, atomically lock it. 2. If L is currently locked, block until it becomes free. Unlock(L): release control of L. You can use a lock to protect data: Lock(L) before accessing the data, Unlock(L) when done (see the sketch below).
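
A small sketch of the pattern, with a POSIX pthread mutex standing in for Lock/Unlock (the deposit function and its name are ours):

    #include <pthread.h>

    long balance = 500;
    pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

    void deposit(long amount) {
        pthread_mutex_lock(&balance_lock);    /* Lock(L): blocks if already held */
        balance += amount;                    /* critical section                */
        pthread_mutex_unlock(&balance_lock);  /* Unlock(L): release control of L */
    }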

The Martian Inversion

The Mars Pathfinder mission used the VxWorks real-time operating system with priority scheduling. Meteorological data gathering was a low-priority task; the communication task was medium-priority. Both shared a data bus, controlled by a lock. This caused a classic priority inversion, hanging the lander until the watchdog timer reset it. The problem was debugged from 140 million miles away by examining system log data, and fixed by uploading a snippet of C code that turned on priority inheritance for the lock. Priority inheritance says that a process holding a lock is elevated to the highest priority of anything waiting for the lock: if you hold a lock, you get promoted so that you can make progress.

Semaphore

The fundamental principle is this: two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. Any complex coordination requirement can be satisfied by the appropriate structure of signals. For signaling, special variables called semaphores are used. To transmit a signal via semaphore s, a process executes the primitive semSignal(s). To receive a signal via semaphore s, a process executes the primitive semWait(s); if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.
To achieve the desired effect, we can view the semaphore as a variable that has an integer value upon which only three operations are defined:
1. A semaphore may be initialized to a nonnegative integer value.
2. The semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked. Otherwise, the process continues execution.
3. The semSignal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
Other than these three operations, there is no way to inspect or manipulate semaphores. We explain these operations as follows. To begin, the semaphore has a zero or positive value. If the value is positive, that value equals the number of processes that can issue a wait and immediately continue to execute. If the value is zero, either by initialization or because a number of processes equal to the initial semaphore value have issued a wait, the next process to issue a wait is blocked, and the semaphore value goes negative. Each subsequent wait drives the semaphore value further into minus territory. The negative value equals the number of processes waiting to be unblocked. Each signal unblocks one of the waiting processes when the semaphore value is negative.
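
A sketch that follows this definition directly (not a real kernel's code): block() and unblock_one() are hypothetical primitives that suspend the caller on, and wake one process from, the semaphore's queue, and the bodies of semWait/semSignal must themselves execute atomically (e.g., under interrupt disabling or a spinlock).

    struct semaphore {
        int count;              /* if negative, -count processes are waiting */
        /* queue of blocked processes omitted */
    };

    void semWait(struct semaphore *s) {
        s->count--;
        if (s->count < 0)
            block(s);           /* hypothetical: suspend the caller on s's queue */
    }

    void semSignal(struct semaphore *s) {
        s->count++;
        if (s->count <= 0)
            unblock_one(s);     /* hypothetical: move one waiting process to ready */
    }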

Mutual Exclusion

The key trouble we ran into was that the read-update-write sequence on a shared resource could be interleaved between two processes (read1-read2-update2-write2-update1-write1). The way to solve this is through mutual exclusion: making sure that only one process has access to the shared resource at a time.

Synchronization

The race condition happened because there were conflicting accesses to a resource. The basic idea behind most synchronization: if two threads, processes, interrupt handlers, etc. are going to have conflicting accesses, force one of them to wait until it is safe to proceed. This is conceptually simple, but difficult in practice: the problem is that we need to protect all possible locations where two (or more) threads or processes might conflict.

Compare & Swap Instruction

This version of the instruction checks a memory location (*word) against a test value (testval). If the memory location's current value is testval, it is replaced with newval; otherwise it is left unchanged. The old memory value is always returned; thus, the memory location has been updated if the returned value is the same as the test value. This atomic instruction therefore has two parts: a compare is made between a memory value and a test value; if the values are the same, a swap occurs. The entire compare&swap function is carried out atomically, that is, it is not subject to interruption. Another version of this instruction returns a Boolean value: true if the swap occurred, false otherwise. Some version of this instruction is available on nearly all processor families (x86, IA64, sparc, IBM z series, etc.), and most operating systems use this instruction for support of concurrency.
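
In C, the semantics described above look roughly like this (a sketch of the semantics only; on real hardware the whole function is a single atomic instruction, and the lock wrapper names are ours):

    int compare_and_swap(int *word, int testval, int newval) {
        int oldval = *word;         /* the old value is always returned        */
        if (oldval == testval)
            *word = newval;         /* values matched, so the swap takes place */
        return oldval;
    }

    /* A spin lock built on top of it: */
    void cas_lock(int *flag)   { while (compare_and_swap(flag, 0, 1) != 0) ; }
    void cas_unlock(int *flag) { *flag = 0; }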

Avoiding Interrupt Deadlocks

To get around this problem, we must make sure that interrupts are disabled whenever we hold a lock that an interrupt handler can also acquire. xv6 actually goes further: all locks in the kernel disable interrupts on acquire and re-enable them on release, to keep things safe.
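
A stripped-down sketch of that idea, loosely modeled on xv6's acquire()/release(); pushcli/popcli are xv6's nesting-aware wrappers for disabling and re-enabling interrupts, and xchg is the atomic exchange shown earlier.

    struct spinlock {
        unsigned int locked;            /* 0 = free, 1 = held */
    };

    void acquire(struct spinlock *lk) {
        pushcli();                      /* disable interrupts first, so an interrupt
                                           handler can never spin on a lock we hold */
        while (xchg(&lk->locked, 1) != 0)
            ;                           /* spin until we get the lock */
    }

    void release(struct spinlock *lk) {
        xchg(&lk->locked, 0);           /* give up the lock */
        popcli();                       /* re-enable interrupts (when outermost) */
    }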

Synchronization (Concurrency Control)

Using atomic operations to eliminate race conditions

Critical Sections

We can formulate the problem by classifying what programs do into two parts. The majority of the time they do things that don't require synchronization; the things they do only affect non-shared resources. Some of the time, they need to access shared memory or files; we call this a critical region or critical section of the program. If we can arrange it so that two programs are never in a critical section at the same time, we can avoid race conditions.

Atomic instruction

We can have a simpler solution if the hardware helps us out a bit with an atomic instruction. For example, "test and set lock": TSL RX, LOCK. This atomically reads the memory at address LOCK into RX and then stores a nonzero value back into LOCK. No other processor is allowed to access the memory at address LOCK until TSL is done.
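
A C sketch of entering and leaving a critical region with a TSL-style instruction; here GCC's __sync_lock_test_and_set builtin stands in for TSL (it compiles to an atomic exchange on x86):

    volatile int lock_var = 0;          /* 0 = free, nonzero = locked */

    void enter_region(void) {
        while (__sync_lock_test_and_set(&lock_var, 1) != 0)
            ;                           /* lock was already set: keep trying */
    }

    void leave_region(void) {
        __sync_lock_release(&lock_var); /* store 0 back into the lock */
    }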

Solution for the Lost Wakeup Problem

We can have the producer and the consumer share a lock. The consumer acquires the lock, checks the value of count, and goes to sleep, passing the lock to the sleep function, which releases it. The producer acquires the lock before calling wakeup; since the consumer holds the lock until it is fully asleep, the producer will wait, ensuring that the wakeup is sent only after the consumer is actually asleep. The two operations that need to be atomic, checking count and going to sleep, are now protected together. The xv6 pipe implementation does just this.
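
The same trick in user space, as a sketch using a POSIX mutex and condition variable in place of xv6's sleep/wakeup: pthread_cond_wait atomically releases the lock while going to sleep, which is exactly the "pass the lock to sleep" step described above.

    #include <pthread.h>

    int count = 0;                                   /* items in the buffer */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    void consumer_wait(void) {
        pthread_mutex_lock(&lock);
        while (count == 0)                           /* check count while holding the lock */
            pthread_cond_wait(&nonempty, &lock);     /* releases the lock and sleeps, atomically */
        count--;                                     /* consume one item */
        pthread_mutex_unlock(&lock);
    }

    void producer_post(void) {
        pthread_mutex_lock(&lock);                   /* producer must take the lock first */
        count++;
        pthread_cond_signal(&nonempty);              /* this wakeup can no longer be lost */
        pthread_mutex_unlock(&lock);
    }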

Concurrency

We now turn to OS and programming language mechanisms that are used to provide concurrency. Table 5.3 summarizes mechanisms in common use. We begin, in this section, with semaphores. The next two sections discuss monitors and message passing. The other mechanisms in Table 5.3 are discussed when treating specific operating system examples, in Chapters 6 and 13.

Issues in Concurrency

We want to run things concurrently for performance reasons, particularly if we have multiple processors (even phones now typically have 2+ CPU cores, and some of the Intel i7s, like the i7-990x, have 6 cores). But when multiple concurrent tasks (processes or threads) need to operate on some shared resources, things can get messy.

Busy Waiting

Whenever a process is waiting to enter a critical section under these schemes, it sits in an infinite loop. This wastes CPU time and can also interact badly with scheduling. Both Peterson's algorithm and the hardware approaches suffer from busy waiting.

Semaphore

[DOWN08] points out three interesting consequences of the semaphore definition:
• In general, there is no way to know before a process decrements a semaphore whether it will block or not.
• After a process increments a semaphore and another process gets woken up, both processes continue running concurrently. There is no way to know which process, if either, will continue immediately on a uniprocessor system.
• When you signal a semaphore, you don't necessarily know whether another process is waiting, so the number of unblocked processes may be zero or one.

Shared Memory Synchronization

Threads share memory. Preemptive thread scheduling is a major problem: a context switch can occur at any time, even in the middle of a line of code (e.g., "X = X + 1;"). The unit of atomicity is the machine instruction, and we cannot assume anything about how fast processes make progress. Individual processes have little control over the order in which processes run, so we need to be paranoid about what the scheduler might do: preemptive scheduling introduces non-determinism.

