CSCI361 Operating Systems Chapter 5
Semaphore implementation
1) Imperative that the semWait and semSignal operations be implemented as atomic primitives 2) Can be implemented in hardware or firmware 3) Software schemes such as Dekker's or Peterson's algorithms can be used 4) Use one of the hardware-supported schemes for mutual exclusion
Solution to Producer/Consumer Problem
1) Introduce an auxiliary variable that can be set in the consumer's critical section for use later on. 2) To handle a finite buffer, add a constraint that the buffer is treated as circular storage; pointer values must be expressed modulo the size of the buffer.
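A minimal sketch of the bounded-buffer solution, assuming POSIX counting semaphores and pthreads (the buffer size N, the item counts, and the thread bodies are illustrative choices, not from the notes): empty counts free slots, full counts filled slots, a binary semaphore guards the buffer, and both indices advance modulo the buffer size.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                          /* finite buffer size */

static int buffer[N];
static int in = 0, out = 0;          /* indices advanced modulo N (circular) */

static sem_t empty;                  /* counts free slots, initially N  */
static sem_t full;                   /* counts filled slots, initially 0 */
static sem_t mutex;                  /* binary semaphore guarding the buffer */

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);            /* block if the buffer is full */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;           /* pointer expressed modulo the buffer size */
        sem_post(&mutex);
        sem_post(&full);             /* signal that an item is available */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);             /* block if the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* signal that a slot is free */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```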
Condition Variable
A Common Concurrency Mechanism, this is a data type used to block a process or thread until a particular condition is true
Event Flags
A Common Concurrency Mechanism, this is a memory word used as a synchronization mechanism. Threads can wait for a single one of these or a combination of these by checking one or more bits of the word. The thread is blocked until all of the required bits are set (AND) or until at least one of the bits is set (OR)
Mailboxes/Messages
A Common Concurrency Mechanism, this is a means for two processes to exchange information and that may be used for synchronization
Monitor
A Common Concurrency Mechanism, this is a programming language construct that encapsulates variables, access procedures, and initialization code within an abstract data type. Provides equivalent functionality to that of semaphores and is easier to control. Implemented in a number of programming languages and has also been implemented as a program library. Its variables may only be accessed via its access procedures, and only one process may be actively accessing the monitor at any one time. The access procedures are critical sections. A _______ may have a queue of processes that are waiting to access it.
Binary Semaphore
A Common Concurrency Mechanism, this is a special integer value used for signaling that takes only 0 or 1 as its value
Semaphores
A Common Concurrency Mechanism, this is an integer value used for signaling among processes. Only three operations may be performed on this, all of which are atomic: Initialize, Increment, and Decrement. Decrement may block a process while Increment may unblock a process.
Mutex
A Common Concurrency Mechanism, this is similar to a binary semaphore except that the process that locks the _____ (sets to 0) must be the one to unlock it (set to 1)
Spinlocks
A Common Concurrency Mechanism, this is where a process executes in an infinite loop waiting on the value of this to indicate availability
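A minimal spinlock sketch, assuming C11 atomics are available; the spin_lock/spin_unlock helper names are illustrative, not a standard API.

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* Busy-wait: keep testing-and-setting until the previous value was clear,
     * consuming processor time the whole while (the defining trait of a spinlock). */
    while (atomic_flag_test_and_set(&lock))
        ;
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock);        /* mark the lock available again */
}
```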
Multiple applications
A concurrency context, multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications.
Structured applications
A concurrency context, some applications can be effectively programmed as a set of concurrent processes
Operating system structure
A concurrency context, the same structuring advantages apply to systems programs, and we have seen that operating systems are themselves often implemented as a set of processes or threads
Interrupt Disabling
A form of a mutual exclusion guarantee on a uniprocessor system since concurrent processes cannot have overlapped execution; they can only be interleaved. This capability can be provided in the form of primitives defined by the OS kernel for disabling and enabling interrupts. The efficiency of execution could be noticeably degraded because the processor is limited in its ability to interleave processes. Another problem is that this approach will not work in a multiprocessor architecture.
Processes indirectly aware of each other
A form of process interaction, these are processes that are not necessarily aware of each other by their respective process IDs but that share access to some object, such as an I/O buffer. Such processes exhibit cooperation in sharing the common object.
Processes unaware of each other
A form of process interaction, these are independent processes that are not intended to work together. The best example of this situation is the multiprogramming of multiple independent processes. These can either be batch jobs or interactive sessions or a mixture. Although the processes are not working together, the OS needs to be concerned about competition for resources. For example, two independent applications may both want to access the same disk or file or printer. The OS must regulate these accesses.
Processes directly aware of each other
A form of process interaction, these are processes that are able to communicate with each other by process ID and that are designed to work jointly on some activity. Again, such processes exhibit cooperation
csignal(c)
A function involved in Monitor Synchronization, this resumes execution of some process blocked after a cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing.
cwait(c)
A function involved in Monitor Synchronization, this suspends execution of the calling process on condition c . The monitor is now available for use by another process.
Atomic operation
A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation. An example of this is updating a database.
Must Be Enforced
A requirement for Mutual Exclusion, this ensures that only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object. This ____ __ _________
Critical Section
A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code
Livelock
A situation where two or more processes continuously change their states in response to changes in other processes without doing any useful work
Nonblocking send, blocking receive
Although the sender may continue on, the receiver is blocked until the requested message arrives. This is probably the most useful combination. It allows a process to send one or more messages to a variety of destinations as quickly as possible. A process that must receive a message before it can do useful work needs to be blocked until such a message arrives. An example is a server process that exists to provide a service or resource to other processes.
need for mutual exclusion
An issue in resource competition, it is important that only one program at a time be allowed in its critical section. We cannot simply rely on the OS to understand and enforce this restriction because the detailed requirements may not be obvious. In the case of the printer, for example, we want any individual process to have control of the printer while it prints an entire file. Otherwise, lines from competing processes will be interleaved. This is the ____ ___ ______ _________
deadlock
An issue in resource competition, suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is waiting for one of the two resources. Neither will release the resource that it already owns until it has acquired the other resource and performed the function requiring both resources. This describes
starvation
An issue in resource competition, suppose that three processes (P1, P2, P3) each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3 and that P1 again requires access before P3 completes its critical section. If the OS grants access to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2 may indefinitely be denied access to the resource, even though there is no deadlock situation. This describes
Blocking send, Blocking receive
Both sender and receiver are blocked until the message is delivered. Sometimes referred to as a rendezvous. Allows for tight synchronization between processes.
Resource Competition
Concurrent processes come into conflict when they are competing for use of the same resource, for example: I/O devices, memory, processor time, the clock. Three control problems must be faced: 1) The need for mutual exclusion 2) deadlock 3) starvation. Together this describes ________ ___________
OS Concerns
Design and management issues raised by the existence of concurrency: 1. The OS must be able to keep track of the various processes. This is done with the use of process control blocks. 2. The OS must allocate and de-allocate various resources for each active process. At times, multiple processes want access to the same resource. These resources include • Processor time: This is the scheduling function, discussed in Part Four. • Memory: Most operating systems use a virtual memory scheme. The topic is addressed in Part Three. • Files: Discussed in Chapter 12. • I/O devices: Discussed in Chapter 11. 3. The OS must protect the data and physical resources of each process against unintended interference by other processes. This involves techniques that relate to memory, files, and I/O devices. 4. The functioning of a process, and the output it produces, must be independent of the speed at which its execution is carried out relative to the speed of other concurrent processes
Lampson and Redell's definition of monitors
Developed for the language Mesa [LAMP80]. Their approach overcomes the problems just listed and supports several useful extensions. The Mesa monitor structure is also used in the Modula-3 systems programming language [NELS91]. In Mesa, the csignal primitive is replaced by cnotify, with the following interpretation: When a process executing in a monitor executes cnotify(x), it causes the x condition queue to be notified, but the signaling process continues to execute. The result of the notification is that the process at the head of the condition queue will be resumed at some convenient future time when the monitor is available. However, because there is no guarantee that some other process will not enter the monitor before the waiting process, the waiting process must recheck the condition. This approach is less prone to error and lends itself to a more modular approach to program construction.
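A small illustration of the recheck requirement, assuming POSIX threads, whose condition variables follow these Mesa-style cnotify semantics; the queue_len counter, MAX limit, and function names are illustrative.

```c
#include <pthread.h>

static pthread_mutex_t m        = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full = PTHREAD_COND_INITIALIZER;
static int queue_len = 0;
enum { MAX = 16 };

void deposit_item(void)
{
    pthread_mutex_lock(&m);
    while (queue_len == MAX)              /* recheck: the wake-up is only a hint */
        pthread_cond_wait(&not_full, &m);
    queue_len++;                          /* safe: condition verified after waking */
    pthread_mutex_unlock(&m);
}

void remove_item(void)
{
    pthread_mutex_lock(&m);
    if (queue_len > 0)
        queue_len--;
    pthread_cond_signal(&not_full);       /* like cnotify: the signaler keeps running */
    pthread_mutex_unlock(&m);
}
```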
Solution to Readers/Writers Problem
For writers, the following semaphores and variables are added to the ones already defined: • A semaphore rsem that inhibits all readers while there is at least one writer desiring access to the data area • A variable writecount that controls the setting of rsem • A semaphore y that controls the updating of writecount
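A sketch of how these pieces fit together, assuming POSIX semaphores; x, z, wsem, and readcount are the "ones already defined" from the readers-priority version, every semaphore starts at 1, and READUNIT/WRITEUNIT are placeholders for the actual access to the shared data area.

```c
#include <semaphore.h>

extern void READUNIT(void);       /* placeholder: read the shared data area  */
extern void WRITEUNIT(void);      /* placeholder: write the shared data area */

static sem_t x, y, z, wsem, rsem;
static int readcount = 0, writecount = 0;

void rw_init(void)                /* every semaphore is initialized to 1 */
{
    sem_init(&x, 0, 1); sem_init(&y, 0, 1); sem_init(&z, 0, 1);
    sem_init(&wsem, 0, 1); sem_init(&rsem, 0, 1);
}

void reader(void)
{
    sem_wait(&z);                 /* at most one reader queues on rsem */
    sem_wait(&rsem);              /* readers held off while a writer is waiting */
    sem_wait(&x);
    if (++readcount == 1)
        sem_wait(&wsem);          /* first reader locks out writers */
    sem_post(&x);
    sem_post(&rsem);
    sem_post(&z);

    READUNIT();

    sem_wait(&x);
    if (--readcount == 0)
        sem_post(&wsem);          /* last reader lets writers in */
    sem_post(&x);
}

void writer(void)
{
    sem_wait(&y);                 /* y controls the updating of writecount */
    if (++writecount == 1)
        sem_wait(&rsem);          /* first waiting writer inhibits new readers */
    sem_post(&y);

    sem_wait(&wsem);
    WRITEUNIT();
    sem_post(&wsem);

    sem_wait(&y);
    if (--writecount == 0)
        sem_post(&rsem);          /* last writer re-admits readers */
    sem_post(&y);
}
```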
A one-to-many relationship
Form of Indirect Process Communication that allows for one sender and multiple receivers. It is useful for applications where a message or some information is to be broadcast to a set of processes.
A one-to-one relationship
Form of Indirect Process Communication that 1) allows a private communications link to be set up between two processes. 2)This insulates their interaction from erroneous interference from other processes.
A many-to-many relationship
Form of Indirect Process Communication that allows multiple server processes to provide concurrent service to multiple clients.
A many-to-one relationship
Form of Indirect Process Communication that is useful for client/server interaction; 1) one process provides service to a number of other processes. 2)In this case, the mailbox is often referred to as a port.
Indirect Addressing
In this case, messages are not sent directly from sender to receiver but rather are sent to a shared data structure consisting of queues that can temporarily hold messages. Such queues are generally referred to as mailboxes . Thus, for two processes to communicate, one process sends a message to the appropriate mailbox and the other process picks up the message from the mailbox. A strength of the use of ________ ___________ is that, by decoupling the sender and receiver, it allows for greater flexibility in the use of messages.
Nonblocking send, nonblocking receive
Neither party is required to wait.
Race Condition
Occurs when multiple processes or threads read and write shared data items, so that the final result depends on the order of execution. The "loser" of the race is the process that updates last and will determine the final value of the variable.
mutual exclusion
Requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources
Hoare's definition of monitors
Requires that if there is at least one process in a condition queue, a process from that queue runs immediately when another process issues a csignal for that condition. Thus, the process issuing the csignal must either immediately exit the monitor or be blocked on the monitor.
Addressing
Schemes for specifying processes in send and receive primitives fall into two categories: 1) Direct ___________ 2) Indirect ___________
Requirements for Mutual Exclusion
The following must be true: 1. Must be enforced 2. A process that halts in its noncritical section must do so without interfering with other processes. 3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation. 4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay. 5. No assumptions are made about relative process speeds or number of processors. 6. A process remains inside its critical section for a finite time only.
Synchronization
The communication of a message between two processes implies some level of _______________ between the two: The receiver cannot receive a message until it has been sent by another process. In addition, we need to specify what happens to a process after it issues a send or receive primitive. Consider the send primitive first. When a send primitive is executed in a process, there are two possibilities: Either the sending process is blocked until the message is received, or it is not. Similarly, when a process issues a receive primitive, there are two possibilities: 1. If a message has previously been sent, the message is received and execution continues. 2. If there is no waiting message, then either (a) the process is blocked until a message arrives, or (b) the process continues to execute, abandoning the attempt to receive.
Difficulties of Concurrency
The following arise with this: 1. The sharing of global resources is fraught with peril. 2. It is difficult for the OS to manage the allocation of resources optimally. 3. It becomes very difficult to locate a programming error
Producer/Consumer Problem
The general statement is this: There are one or more producers generating some type of data (records, characters) and placing these in a buffer. There is a single consumer that is taking items out of the buffer one at a time. The system is to be constrained to prevent the overlap of buffer operations. That is, only one agent (producer or consumer) may access the buffer at any one time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
Distributed processing
The management of multiple processes executing on multiple, distributed computer systems. The recent proliferation of clusters is a prime example of this type of system.
Multiprocessing
The management of multiple processes within a multiprocessor
Multiprogramming
The management of multiple processes within a uniprocessor system
Infinite Buffer
The producer can generate items and store them in the buffer at its own pace. Each time, an index (in) into the buffer is incremented. The consumer proceeds in a similar fashion but must make sure that it does not attempt to read from an empty buffer. Hence, the consumer makes sure that the producer has advanced beyond it (in > out) before proceeding. This describes the ________ ______.
readers/writers problem
There is a data area shared among a number of processes. The data area could be a file, a block of main memory, or even a bank of processor registers. There are a number of processes that only read the data area (readers) and a number that only write to the data area (writers). The conditions that must be satisfied are as follows: 1. Any number of readers may simultaneously read the file. 2. Only one writer at a time may write to the file. 3. If a writer is writing to the file, no reader may read it. Thus, readers are processes that are not required to exclude one another and writers are processes that are required to exclude all other processes, readers and writers alike.
Common Concurrency Mechanisms
These are: 1) Semaphores 2) Binary Semaphore 3) Mutex 4) Condition Variable 5) Monitor 6) Event Flags 7) Mailboxes/Messages 8) Spinlocks
Indirect Process Communication
This can come in several forms such as: 1) A one-to-one relationship 2) A many-to-one relationship 3) A one-to-many relationship 4) A many-to-many relationship The association of processes to mailboxes can be either static or dynamic. Ports are often statically associated with a particular process; that is, the port is created and assigned to the process permanently
Concurrency
This encompasses a host of design issues, including communication among processes, sharing of and competing for resources (such as memory, files, and I/O access), synchronization of the activities of multiple processes, and allocation of processor time to processes.
Special Machine Instruction Advantages
This gives the following advantages: • It is applicable to any number of processes on either a single processor or multiple processors sharing main memory. • It is simple and therefore easy to verify. • It can be used to support multiple critical sections; each critical section can be defined by its own variable.
Atomicity
This guarantees isolation from concurrent processes
Principles of Concurrency
This has 2 principles: 1) Interleaving and overlapping can be viewed as examples of concurrent processing; both present the same problems. 2) On a uniprocessor, the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.
Concurrency contexts
This has 3 different ways of manifesting: 1) Multiple applications 2) Structured Applications 3) Operating System Structure Together these make up ___________ ________
Semaphore Operations
To achieve the desired effect, we can view the semaphore as a variable that has an integer value upon which only three things are defined: 1. A semaphore may be initialized to a nonnegative integer value. 2. The semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked. Otherwise, the process continues execution. 3. The semSignal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked. Other than these three operations, there is no way to inspect or manipulate semaphores. Together these are _________ __________
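A sketch of one possible user-level implementation of these three operations, built on a pthread mutex and condition variable; the extra wakeups field, which prevents a newly arriving thread from stealing a wake-up, is an implementation detail assumed here, not part of the definition above.

```c
#include <pthread.h>

typedef struct {
    int count;                    /* negative value = number of blocked threads */
    int wakeups;                  /* pending wake-ups handed out by semSignal   */
    pthread_mutex_t lock;
    pthread_cond_t  queue;        /* stands in for the queue of blocked threads */
} semaphore;

void semInitialize(semaphore *s, int value)   /* value must be nonnegative */
{
    s->count = value;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void semWait(semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    s->count--;
    if (s->count < 0) {                       /* value became negative: block */
        do {
            pthread_cond_wait(&s->queue, &s->lock);
        } while (s->wakeups < 1);             /* wait for a genuine wake-up */
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

void semSignal(semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    if (s->count <= 0) {                      /* a process was blocked: unblock one */
        s->wakeups++;
        pthread_cond_signal(&s->queue);
    }
    pthread_mutex_unlock(&s->lock);
}
```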
Operating System Design
This is concerned with the management of processes and threads: 1) Multiprogramming 2) Multiprocessing 3) Distributed processing
Monitor Synchronization
This is done by the use of condition variables that are contained within the monitor and accessible only within the monitor. Condition variables are a special data type in monitors, which are operated on by two functions. cwait(c) csignal(c) Note that monitor wait and signal operations are different from those for the semaphore. If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost.
Message Passing
This is one approach to providing both synchronization and communication; it works with distributed systems as well as shared-memory multiprocessor and uniprocessor systems. The actual function is normally provided in the form of a pair of primitives: send(destination, message) and receive(source, message). A process sends information in the form of a message to another process designated by a destination. A process receives information by executing the receive primitive, indicating the source and the message.
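A hedged sketch of the two primitives using POSIX message queues, one real mailbox-style facility; the queue name /demo_mbox and the message sizes are illustrative choices, and on Linux the program links with -lrt. Note that mq_receive blocks until a message arrives, matching the blocking-receive behavior described above.

```c
#include <mqueue.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mbox", O_CREAT | O_RDWR, 0600, &attr);

    /* send(destination, message) */
    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(source, message): blocks until a message is available */
    char buf[64];
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);
    if (n >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mbox");
    return 0;
}
```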
semWait
This operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked. Otherwise, the process continues execution.
semSignal
This operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
Process Interaction
We can classify the ways of process awareness in the following ways: 1) Processes unaware of each other 2) Processes indirectly aware of each other 3) Processes directly aware of each other Together we call this _______ ___________
Message Passing Requirements
When processes interact with one another two fundamental requirements must be satisfied: 1) synchronization - to enforce mutual exclusion 2) communication - to exchange information. Message passing is one approach to providing both of these functions; it works with distributed systems as well as shared-memory multiprocessor and uniprocessor systems
Direct Addressing
With this, the send primitive includes a specific identifier of the destination process. The receive primitive can be handled in one of two ways. One possibility is to require that the process explicitly designate a sending process. Thus, the process must know ahead of time from which process a message is expected. This will often be effective for cooperating concurrent processes. In other cases, however, it is impossible to specify the anticipated source process. An example is a printer server process, which will accept a print request message from any other process. For such applications, a more effective approach is the use of implicit addressing. In this case, the source parameter of the receive primitive possesses a value returned when the receive operation has been performed.
Compare&Swap Instruction
also called a "compare and exchange instruction". This version of the instruction checks a memory location ( *word ) against a test value ( testval ). If the memory location's current value is testval, it is replaced with newval ; otherwise it is left unchanged. The old memory value is always returned; thus, the memory location has been updated if the returned value is the same as the test value. This atomic instruction therefore has two parts: A compare is made between a memory value and a test value; if the values are the same, a swap occurs. The entire compare&swap function is carried out atomically—that is, it is not subject to interruption. Another version of this instruction returns a Boolean value: true if the swap occurred; false otherwise. Some version of this instruction is available on nearly all processor families (x86, IA64, sparc, IBM z series, etc.), and most operating systems use this instruction for support of concurrency.
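A minimal sketch of mutual exclusion built on this instruction, assuming C11 <stdatomic.h>; the variable name bolt follows the textbook's convention, and the enter/leave helper names are illustrative.

```c
#include <stdatomic.h>

static atomic_int bolt = 0;          /* 0 = free, 1 = held */

void enter_critical(void)
{
    int expected = 0;
    /* Atomically: if bolt equals expected (0), store 1 and return true;
     * otherwise expected is overwritten with bolt's current value and
     * false is returned. Busy-wait until the swap succeeds. */
    while (!atomic_compare_exchange_strong(&bolt, &expected, 1))
        expected = 0;                /* reset the test value and try again */
}

void leave_critical(void)
{
    atomic_store(&bolt, 0);          /* release: another CAS can now succeed */
}
```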
Message Format
depends on the objectives of the messaging facility and whether the facility runs on a single computer or on a distributed system. For some operating systems, designers have preferred short, fixed-length messages to minimize processing and storage overhead. If a large amount of data is to be passed, the data can be placed in a file and the message then simply references that file. A more flexible approach is to allow variable-length messages. The message is divided into two parts: 1) a header, which contains information about the message. The header may contain an identification of the source and intended destination of the message, a length field, and a type field to discriminate among various types of messages, plus additional control information, e.g. a pointer field so a linked list of messages can be created; a sequence number, to keep track of the number and order of messages passed between source and destination; and a priority field. 2) a body, which contains the actual contents of the message
Weak Semaphore
the order in which processes are removed from the queue is not specified
Strong Semaphore
the process that has been blocked the longest is released from the queue first (FIFO)
Special Machine Instruction Disadvantages
• Busy waiting is employed: Thus, while a process is waiting for access to a critical section, it continues to consume processor time. • Starvation is possible: When a process leaves a critical section and more than one process is waiting, the selection of a waiting process is arbitrary. Thus, some process could indefinitely be denied access. • Deadlock is possible: Consider the following scenario on a single-processor system. Process P1 executes the special instruction (e.g., compare & swap, exchange ) and enters its critical section. P1 is then interrupted to give the processor to P2, which has higher priority. If P2 now attempts to use the same resource as P1, it will be denied access because of the mutual exclusion mechanism. Thus, it will go into a busy waiting loop. However, P1 will never be dispatched because it is of lower priority than another ready process, P2.
Semaphore Consequences
• In general, there is no way to know before a process decrements a semaphore whether it will block or not. • After a process increments a semaphore and another process gets woken up, both processes continue running concurrently. There is no way to know which process, if either, will continue immediately on a uniprocessor system. • When you signal a semaphore, you don't necessarily know whether another process is waiting, so the number of unblocked processes may be zero or one. Together these define _________ ____________