Module 3


In Peterson's solution, we have two shared variables, what are they?

boolean flag[i] : Initialized to FALSE; initially no process is interested in entering the critical section.
int turn : Indicates which process's turn it is to enter the critical section.
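A minimal C sketch of Peterson's solution built on these two shared variables (the function names and the use of C11 atomics for memory ordering are illustrative assumptions, not part of the card):

#include <stdatomic.h>
#include <stdbool.h>

/* Shared variables for two processes/threads, numbered 0 and 1. */
atomic_bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
atomic_int  turn    = 0;                 /* whose turn it is to enter */

void enter_critical_section(int i) {     /* i is 0 or 1 */
    int j = 1 - i;                       /* the other process */
    atomic_store(&flag[i], true);        /* declare interest */
    atomic_store(&turn, j);              /* politely give the other process priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                /* busy wait (a noted disadvantage) */
}

void leave_critical_section(int i) {
    atomic_store(&flag[i], false);       /* no longer interested */
}

With sequentially consistent atomics the store/load ordering Peterson's argument relies on is preserved; plain non-atomic variables would not be safe on modern hardware.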

For the banker's algorithm to work, it should know what three things?

1. How much of each resource each process could request at maximum [MAX]
2. How much of each resource each process currently holds [Allocated]
3. How much of each resource is currently available in the system [Available]

What are 2 disadvantages of Peterson's Solution

1. It involves busy waiting. 2. It is limited to two processes.

Any solution to the Critical Section problem must satisfy three requirements, what are they?

1. Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
2. Progress: If no process is executing in the critical section and other processes are waiting outside the critical section, then only those processes that are not executing in their remainder section can participate in deciding which will enter the critical section next, and the selection cannot be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

What are 3 limitaions for semaphore?

1. One of the biggest limitations of semaphores is priority inversion.
2. Deadlock: suppose a process tries to wake up another process that is not in a sleep state; the signal is lost, and the waiting process may block indefinitely.
3. The operating system has to keep track of all calls to wait and to signal the semaphore.

Advantages of Thread over Process

1. Responsiveness: If the process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.
2. Faster context switch: Context-switch time between threads is lower than between processes; process context switching requires more overhead from the CPU.
3. Effective utilization of a multiprocessor system: If we have multiple threads in a single process, we can schedule them on multiple processors, making process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared; each thread has its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the threads share a common address space, while between processes we have to use a specific inter-process communication technique.
6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is considered one job, then the number of jobs completed per unit of time increases, raising the throughput of the system.

Name a few different models of concurrency:

1. Shared mutable state model
2. Functional parallelism (e.g., the Clojure language, using the Leiningen build tool)
3. Actor model
4. Channels
5. Reactive streams

What are 2 ways a race condition in critical sections can be avoided?

1. Treat the critical section as an atomic instruction. 2. Proper thread synchronization by using locks or atomic variables.

Can a mutex be locked more than once?

A mutex is a lock. Only one state (locked/unlocked) is associated with it. However, a recursive mutex can be locked more than once (on POSIX-compliant systems); a count is associated with it, yet it retains only one state (locked/unlocked). The programmer must unlock the mutex as many times as it was locked.
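A brief POSIX sketch of creating and using such a recursive mutex (error handling omitted; the helper function names are illustrative):

#include <pthread.h>

pthread_mutex_t m;

void init_recursive_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* PTHREAD_MUTEX_RECURSIVE: the owning thread may lock again; a lock count is kept. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
}

void nested(void) {
    pthread_mutex_lock(&m);    /* second lock by the same thread: count becomes 2 */
    /* ... */
    pthread_mutex_unlock(&m);  /* count drops back to 1 */
}

void outer(void) {
    pthread_mutex_lock(&m);    /* first lock: count becomes 1 */
    nested();
    pthread_mutex_unlock(&m);  /* count drops to 0: mutex is released */
}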

Consider the standard producer-consumer problem. Assume we have a buffer of 4096 bytes. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that the two threads should not operate on the buffer at the same time. How does a mutex handle this?

A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. As long as the producer is filling the buffer, the consumer needs to wait, and vice versa. At any point in time, only one thread can work with the entire buffer. The concept can be generalized using a semaphore.

To review, what is a Semaphore and what are the 2 types?

A semaphore S is an integer variable that can be accessed only through two standard operations: wait() and signal(). The wait() operation reduces the value of the semaphore by 1 and the signal() operation increases its value by 1.

wait(S) {
    while (S <= 0)
        ;        // busy waiting
    S--;
}

signal(S) {
    S++;
}

Semaphores are of two types:
Binary Semaphore - Similar to a mutex lock but not the same thing. It can have only two values, 0 and 1, and its value is initialized to 1. It is used to implement the solution of the critical section problem with multiple processes.
Counting Semaphore - Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.

Consider the standard producer-consumer problem. Assume we have a buffer of 4096 bytes. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that the two threads should not operate on the same data at the same time. How does a semaphore handle this?

A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A counting semaphore can be associated with these four buffers, so the consumer and producer can work on different buffers at the same time.

What is a Semaphore?

A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, which can be released only by the thread that acquired it.

Can we acquire mutex/semaphore in an Interrupt Service Routine?

An ISR runs asynchronously in the context of the currently running thread. It is not recommended to make a blocking call for a synchronization primitive in an ISR: ISRs are meant to be short, and a blocking call on a mutex/semaphore may block the currently running thread. However, an ISR can signal a semaphore or unlock a mutex.

Describe the basic, action, access and condition variable differences between Semaphore and Monitor:

Basic: A semaphore is an integer variable; a monitor is an abstract data type.
Action: The value of semaphore S indicates the number of shared resources available in the system; the monitor type contains shared variables and the set of procedures that operate on them.
Access: With a semaphore, when a process accesses the shared resources it performs a wait() operation on S, and when it releases them it performs a signal() operation on S; with a monitor, when a process wants to access the shared variables it must do so through the monitor's procedures.
Condition variables: A semaphore does not have condition variables; a monitor has condition variables.

Define Semaphore:

Being a process synchronization tool, a semaphore is an integer variable S, initialized to the number of resources present in the system. The value of S can be modified only by the two functions wait() and signal(), apart from initialization. The wait() and signal() operations modify the value of the semaphore indivisibly: while one process is modifying the value of the semaphore, no other process can simultaneously modify it. Further, the operating system distinguishes two categories of semaphore: counting semaphores and binary semaphores.

What are the 2 types of semaphores?

Binary Semaphores and Counting Semaphores

Describe Binary Semaphores:

Binary semaphores can only be 0 or 1. They are also known as mutex locks, as they can provide mutual exclusion. All the processes can share the same binary semaphore, which is initialized to 1. A process entering its critical section performs wait(), which brings the semaphore from 1 to 0; any other process must then wait until the value becomes 1 again. When the first process completes its critical section, it performs signal(), resetting the value to 1 so that some other process can enter its critical section.

What values can a binary semaphore take?

A binary semaphore can take only the two values 0 and 1 and ensures mutual exclusion. There is another type of semaphore, called a counting semaphore, which can take values greater than one.

Define Condition Variables:

Condition variables were introduced as an additional synchronization mechanism. A condition variable allows a process to wait inside the monitor and allows a waiting process to resume immediately when another process releases the resources. A condition variable supports only two operations, wait() and signal(): if a process P invokes wait(), it is suspended in the monitor until another process Q invokes signal(), i.e. a signal() operation invoked by a process resumes the suspended process.

What happens if a non-recursive mutex is locked more than once?

Deadlock. If a thread that has already locked a mutex tries to lock it again, it enters the waiting list of that mutex, which results in deadlock, because no other thread can unlock the mutex. An operating system implementer can exercise care in identifying the owner of the mutex and return immediately if it is already locked by the same thread, to prevent such deadlocks.

Example: Imagine a pair of processes that each need the same two resources but acquire them in opposite order, as shown:

void process_A(void) {
    enter_reg(&resource_1);
    enter_reg(&resource_2);
    use_both_resources();
    leave_reg(&resource_2);
    leave_reg(&resource_1);
}

void process_B(void) {
    enter_reg(&resource_2);
    enter_reg(&resource_1);
    use_both_resources();
    leave_reg(&resource_1);
    leave_reg(&resource_2);
}

Each of the two processes needs both resources, and each uses the polling primitive enter_reg to try to acquire the locks it needs; if the attempt fails, the process just tries again. If process A runs first and acquires resource 1, and then process B runs and acquires resource 2, then no matter which one runs next, it will make no further progress, yet neither of the two processes blocks. Each process uses up its CPU quantum over and over again without any progress being made, but also without any sort of blocking. Thus this situation is not a deadlock (as no process is blocked), but we have something functionally equivalent to deadlock: LIVELOCK.

What we mean by "thread blocking on mutex/semaphore" when they are not available?

Every synchronization primitive has a waiting list associated with it. When the resource is not available, the requesting thread will be moved from the running list of processor to the waiting list of the synchronization primitive. When the resource is available, the higher priority thread on the waiting list gets the resource (more precisely, it depends on the scheduling policies).

Explain the writer code on previous card

If a writer wants to access the object, wait operation is performed on wrt. After that no other writer can access the object. When a writer is done writing into the object, signal operation is performed on wrt.

Define Binary Semaphore:

In a binary semaphore, the value of the semaphore ranges between 0 and 1. It is similar to a mutex lock, but a mutex is a locking mechanism whereas the semaphore is a signalling mechanism. In a binary semaphore, if a process wants to access the resource it performs a wait() operation on the semaphore, decrementing its value from 1 to 0. When the process releases the resource, it performs a signal() operation, incrementing the value back to 1. If the value of the semaphore is 0 and a process wants to access the resource, it performs wait() and blocks itself until the process currently using the resource releases it.

Define Counting Semaphore:

In a counting semaphore, the value of semaphore S is initialized to the number of resources present in the system. Whenever a process wants to access a shared resource, it performs a wait() operation on the semaphore, which decrements its value by one. When it releases the shared resource, it performs a signal() operation, which increments the value by one. When the semaphore count reaches 0, all resources are occupied by processes. If a process needs a resource when the semaphore count is 0, it executes wait() and is blocked until a process using a shared resource releases it and the value of the semaphore becomes greater than 0.

Explain the reader code on previous card

In the reader code, mutex and wrt are semaphores initialized to 1, and rc is a variable initialized to 0. The mutex semaphore ensures mutual exclusion on rc, while wrt handles the writing mechanism and is common to the reader and writer process code. The variable rc denotes the number of readers accessing the object. As soon as rc becomes 1, a wait operation is performed on wrt, which means a writer cannot access the object anymore. After the read operation is done, rc is decremented. When rc becomes 0, a signal operation is performed on wrt, so a writer can access the object again.

How does the implementation of counting semaphore reduce waste of CPU cycle?

In this implementation, whenever a process waits it is added to a waiting queue of processes associated with that semaphore, via a block() system call on that process. When a process finishes, it calls the signal function and one process in the queue is resumed using the wakeup() system call.
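A user-level sketch of such a blocking counting semaphore, where a pthread condition variable stands in for the waiting queue (pthread_cond_wait() plays the role of block(), pthread_cond_signal() the role of wakeup()); the type and function names are illustrative:

#include <pthread.h>

/* A counting semaphore that blocks waiters instead of spinning. */
typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  queue;   /* stands in for the waiting queue */
} csem_t;

void csem_init(csem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void csem_wait(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)
        pthread_cond_wait(&s->queue, &s->lock);  /* sleep; no CPU cycles burned */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->queue);              /* resume one waiting thread */
    pthread_mutex_unlock(&s->lock);
}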

On the basis of synchronization, processes are categorized as one of the following two types:

Independent Process: Execution of one process does not affect the execution of other processes.
Cooperative Process: Execution of one process affects the execution of other processes.

Deadlock explained

Knowing Ross needs money urgently, instead of giving $2 to Chandler you end up giving $2 to Ross, and you are left with only $1. In this state, Chandler still needs $2 more, Ross needs $3 more, and Joey still needs $3 more, but now you don't have enough money to give any of them, and until they complete the tasks they need the money for, no money will be transferred back to you. This kind of situation is called an unsafe state (which can end in deadlock), and it is what the Banker's Algorithm is designed to avoid.

Explain how semaphore implements mutual exclusion:

Let there be two processes P1 and P2 and a semaphore s initialized to 1. Suppose P1 enters its critical section; the value of semaphore s becomes 0. If P2 then wants to enter its critical section, it will wait until s > 0, which can only happen when P1 finishes its critical section and calls the V operation on semaphore s. This is how mutual exclusion is achieved, and it is exactly the behaviour of a binary semaphore.
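A minimal POSIX sketch of this scheme with a semaphore initialized to 1 (a hypothetical example, not from the card; compile with -pthread on Linux):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;                        /* binary semaphore, initialized to 1 */
int shared_data = 0;

void *process(void *arg) {
    sem_wait(&s);               /* P operation: s goes from 1 to 0 (or blocks) */
    shared_data++;              /* critical section */
    sem_post(&s);               /* V operation: s goes back to 1 */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);         /* pshared = 0 (threads only), initial value 1 */
    pthread_t p1, p2;
    pthread_create(&p1, NULL, process, NULL);
    pthread_create(&p2, NULL, process, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("shared_data = %d\n", shared_data);
    sem_destroy(&s);
    return 0;
}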

Banker's algorithm explained

Let's say you've got three friends (Chandler, Ross, and Joey) who need a loan to tide them over for a bit. You have $24 with you. Chandler needs $8, Ross needs $13, and Joey needs $10. You already lent $6 to Chandler, $8 to Ross, and $7 to Joey, so you are left with $24 - $21 (6 + 8 + 7) = $3. Even after receiving $6, Chandler still needs $2 more; similarly, Ross needs $5 more and Joey $3 more. Until they get the amount they need, they can neither finish the tasks they have to do nor return the amount they borrowed. (Like a true friend!) You can pay $2 to Chandler, wait for him to get his work done, and then get back the entire $8. Or you can pay $3 to Joey and wait for him to pay you back after his task is done. You can't pay Ross, because he needs $5 and you don't have enough; you can pay him once Chandler or Joey returns the borrowed amount after their work is done. This state is termed the safe state, where everyone's task can be completed and, eventually, you get all your money back.

Is the code below Livelock, Deadlock or Starvation?

var l1 = .... // lock object like semaphore or mutex etc
var l2 = .... // lock object like semaphore or mutex etc

// Thread1
Thread.Start(() => {
    while (true) {
        if (!l1.Lock(1000)) {
            continue;
        }
        if (!l2.Lock(1000)) {
            continue;
        }
        // do some work
    }
});

// Thread2
Thread.Start(() => {
    while (true) {
        if (!l2.Lock(1000)) {
            continue;
        }
        if (!l1.Lock(1000)) {
            continue;
        }
        // do some work
    }
});

Livelock

What is Livelock?

Livelock occurs when two or more processes continually repeat the same interaction in response to changes in the other processes without doing any useful work. These processes are not in the waiting state, and they are running concurrently. This is different from a deadlock because in a deadlock all processes are in the waiting state.

Why are Monitors preferred over Semaphore?

Monitors are easier to implement than semaphores, and there is less chance of making a mistake with a monitor than with semaphores.

How do monitors help solve the producer-consumer problem?

Monitors make solving the producer-consumer problem a little easier. Mutual exclusion is achieved by placing the critical section of a program inside a monitor.

Is a Mutex a binary semaphore?

No. There is an ambiguity between binary semaphores and mutexes. We might have heard that a mutex is a binary semaphore, but it is not: the purposes of a mutex and a semaphore are different. Perhaps due to the similarity in their implementation, a mutex is sometimes referred to as a binary semaphore.

Are binary semaphore and mutex same?

No. We suggest treating them separately, as explained under signalling vs. locking mechanisms. However, a binary semaphore may experience the same critical issues (e.g. priority inversion) associated with a mutex; we will cover these in a later article. A programmer may prefer a mutex rather than creating a semaphore with count 1.

Is it necessary that a thread must block always when resource is not available?

Not necessarily. If the design knows what has to be done when the resource is not available, the thread can take up that work (a different code branch). To support such application requirements, the OS provides non-blocking APIs, for example the POSIX pthread_mutex_trylock() API: when the mutex is not available the function returns immediately, whereas pthread_mutex_lock() blocks the thread until the resource is available.
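A short sketch of this non-blocking pattern with pthread_mutex_trylock() (the fallback function do_other_work() is a hypothetical placeholder):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void do_other_work(void) {
    /* Placeholder for the alternative code branch. */
    printf("resource busy, doing other work\n");
}

void worker(void) {
    if (pthread_mutex_trylock(&m) == 0) {
        /* Resource acquired: do the protected work. */
        /* ... */
        pthread_mutex_unlock(&m);
    } else {
        /* Mutex not available (EBUSY): take the other code branch
           instead of blocking the way pthread_mutex_lock() would. */
        do_other_work();
    }
}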

What leads to Livelocks?

Livelocks can occur in the most surprising of ways. In some systems, the total number of allowed processes is determined by the number of entries in the process table, so process table slots are a finite resource. If a fork fails because the table is full, waiting a random time and trying again would be a reasonable approach for the program doing the fork. Consider a UNIX system with 100 process slots. Ten programs are running, each of which needs to create 12 (sub)processes. After each process has created 9 processes, the 10 original processes and the 90 new processes have exhausted the table. Each of the 10 original processes now sits in an endless loop forking and failing: no process is blocked, yet none makes progress, which is exactly the livelock situation described above. The probability of this happening is small, but it could happen.

What is a classical software based solution to the critical section problem?

Peterson's Solution

Describe a Producer/Consumer problem using Semaphore:

Problem statement: We have a buffer of fixed size. A producer can produce an item and place it in the buffer; a consumer can pick up items and consume them. We need to ensure that while the producer is placing an item in the buffer, the consumer is not consuming an item at the same time. In this problem, the buffer is the critical section. To solve it, we need two counting semaphores, Full and Empty: "Full" keeps track of the number of items in the buffer at any given time and "Empty" keeps track of the number of unoccupied slots.
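A hedged C sketch of this scheme with POSIX semaphores (the buffer size, the helper names, and the extra mutex protecting the buffer indices are illustrative assumptions):

#include <semaphore.h>
#include <pthread.h>

#define N 16                      /* number of slots in the buffer */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                /* "Empty": counts unoccupied slots, starts at N */
sem_t full_slots;                 /* "Full": counts filled slots, starts at 0      */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* protects buffer/in/out */

void init(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
}

void produce(int item) {
    sem_wait(&empty_slots);       /* wait for a free slot */
    pthread_mutex_lock(&m);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&full_slots);        /* one more item available */
}

int consume(void) {
    sem_wait(&full_slots);        /* wait for an item */
    pthread_mutex_lock(&m);
    int item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&m);
    sem_post(&empty_slots);       /* one more free slot */
    return item;
}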

In the code below, the critical sections of the producer and consumer are inside the monitor. Once inside the monitor, a process is blocked by the Wait and Signal primitives if it cannot continue

See the attached picture (not reproduced here); a sketch of the idea follows.
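As a stand-in for the missing picture, a monitor-style bounded buffer can be sketched in C with a mutex acting as the monitor's implicit lock and pthread condition variables playing the role of the Wait and Signal primitives (all names are illustrative):

#include <pthread.h>

#define N 16

static int buffer[N];
static int count = 0, in = 0, out = 0;

static pthread_mutex_t monitor   = PTHREAD_MUTEX_INITIALIZER; /* monitor lock */
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void insert(int item) {                   /* "monitor procedure" used by the producer */
    pthread_mutex_lock(&monitor);
    while (count == N)
        pthread_cond_wait(&not_full, &monitor);   /* Wait: buffer full */
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);              /* Signal: an item is available */
    pthread_mutex_unlock(&monitor);
}

int remove_item(void) {                   /* "monitor procedure" used by the consumer */
    pthread_mutex_lock(&monitor);
    while (count == 0)
        pthread_cond_wait(&not_empty, &monitor);  /* Wait: buffer empty */
    int item = buffer[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&not_full);               /* Signal: a slot is free */
    pthread_mutex_unlock(&monitor);
    return item;
}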

How are Semaphore and Monitor in OS similar?

Semaphore and Monitor both allow processes to access the shared resources in mutual exclusion. Both are the process synchronization tool. However, they are very different from each other.

How are Semaphore and Monitor in OS different?

A semaphore is an integer variable that can be operated on only by the wait() and signal() operations, apart from initialization. The monitor type, on the other hand, is an abstract data type whose construct allows only one process to be active inside it at a time.

Why is a Semaphore called a signaling mechanism?

A semaphore is a signaling mechanism ("I am done, you can carry on" kind of signal). For example, if you are listening to songs (assume it is one task) on your mobile and your friend calls you at the same time, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.

Another definition of Semaphore?

The semaphore, proposed by Dijkstra in 1965, is a very significant technique for managing concurrent processes by using a simple integer value, known as a semaphore. A semaphore is simply a non-negative variable shared between threads. It is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.

What is concurrent programming?

Simply described, it's when you are doing more than one thing at the same time. Not to be confused with parallelism, concurrency is when multiple sequences of operations are run in overlapping periods of time. In the realm of programming, concurrency is a pretty complex subject. Dealing with constructs such as threads and locks and avoiding issues like race conditions and deadlocks can be quite cumbersome, making concurrent programs difficult to write. Through concurrency, programs can be designed as independent processes working together in a specific composition. Such a structure may or may not be made parallel; however, achieving such a structure in your program offers numerous advantages.

Banker's Algorithm further explained

So we need MAX and REQUEST. If REQUEST is given:
MAX = ALLOCATED + REQUEST
NEED = MAX - ALLOCATED
A resource can be allocated only when REQUEST <= AVAILABLE; otherwise the process waits until resources are available.
Let 'n' be the number of processes in the system and 'm' be the number of resource types.
Available - A 1D array of size 'm'. Available[j] = k means there are k instances of resource type Rj available.
Max - A 2D array of size 'n*m' representing the maximum demand of each process. Max[i,j] = k means that process i may request at most k instances of resource type Rj.
Allocation - A 2D array of size 'n*m' representing the number of resources currently allocated to each process. Allocation[i,j] = k means that process i is currently allocated k instances of resource type Rj.
Need - A 2D array of size 'n*m' giving each process's remaining resource need. Need[i,j] = Max[i,j] - Allocation[i,j].
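A hedged C sketch of the safety check that uses these arrays (the array sizes NP/NR and the function name is_safe are illustrative; request handling and input parsing are omitted):

#include <stdbool.h>
#include <string.h>

#define NP 5   /* number of processes (example size) */
#define NR 3   /* number of resource types (example size) */

/* Returns true if a safe sequence exists for the given state.
   need[i][j] = max[i][j] - alloc[i][j], as defined above. */
bool is_safe(int available[NR], int max[NP][NR], int alloc[NP][NR]) {
    int need[NP][NR], work[NR];
    bool finished[NP] = { false };

    for (int i = 0; i < NP; i++)
        for (int j = 0; j < NR; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    memcpy(work, available, sizeof(work));    /* work starts as Available */

    for (int done = 0; done < NP; ) {
        bool progress = false;
        for (int i = 0; i < NP; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                    /* pretend process i runs to completion */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];   /* it returns everything it holds */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;          /* no process can finish: unsafe state */
    }
    return true;                              /* all processes could finish: safe state */
}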

Process Synchronization are handled by two approaches, what are they?

Software Approach and Hardware Approach

What is a mutex and critical section?

Some operating systems use the term critical section in their APIs for this. Usually a mutex is a costly operation due to the protection protocols associated with it. Ultimately, the objective of a mutex is atomic access. There are other ways to achieve atomic access, such as disabling interrupts, which can be much faster but ruins responsiveness; the alternate (critical section) API makes use of disabling interrupts.

Explain how Semaphore works:

Some points regarding the P and V operations: the P operation is also called wait, sleep, or down, and the V operation is also called signal, wake-up, or up. Both operations are atomic, and the semaphore here is initialized to one. Atomic means that the read, modify, and update of the variable happen together, with no preemption, i.e. no other operation that could change the variable is performed in between. A critical section is surrounded by both operations to implement process synchronization: the critical section of process P lies between the P and V operations. This is the binary semaphore case.

Is the code below Livelock, Deadlock or Starvation?

Queue q = .....

while (q.Count > 0) {
    var c = q.Dequeue();
    .........
    // Some method in a different thread accidentally
    // puts c back in the queue twice within the same time frame,
    q.Enqueue(c);
    q.Enqueue(c);
    // leading to the queue growing twice as fast as it
    // can be consumed, thus starving the computation
}

Starvation

What is starvation?

Starvation is a problem closely related to both livelock and deadlock. In a dynamic system, requests for resources keep happening, so some policy is needed to decide who gets the resource and when. This process, however reasonable, may lead to some processes never getting serviced even though they are not deadlocked. Starvation happens when "greedy" threads make shared resources unavailable for long periods. For instance, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.

Why is a mutex called a locking mechanism?

Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).

Explain how semaphores work with multiple processes:

Suppose there is a resource with 4 instances. We initialize S = 4, and the rest is the same as for a binary semaphore. Whenever a process wants the resource it calls the P (wait) function, and when it is done it calls the V (signal) function. If the value of S becomes zero, a process has to wait until S becomes positive. For example, suppose there are four processes P1, P2, P3, P4 and they all call the wait operation on S (initialized with 4). If another process P5 wants the resource, it must wait until one of the four processes calls the signal function and the value of the semaphore becomes positive.

What is a hardware solution to the synchronization problem?

TestAndSet - TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take either of two values, 0 (unlocked) or 1 (locked). Before entering the critical section, a process inquires about the lock. If it is locked, the process keeps waiting until it becomes free; if it is not locked, the process takes the lock and executes its critical section. With TestAndSet, mutual exclusion and progress are preserved, but bounded waiting is not.
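A small C11 sketch of a spinlock built on a test-and-set primitive (atomic_flag_test_and_set atomically sets the flag and returns its previous value, which is the TestAndSet behaviour described above; the acquire/release names are illustrative):

#include <stdatomic.h>

atomic_flag lock_var = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_var))
        ;                                  /* previous value was set: keep spinning */
}

void release(void) {
    atomic_flag_clear(&lock_var);          /* unlock */
}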

What is banker's algorithm?

The Banker's algorithm, sometimes referred to as the deadlock-avoidance algorithm, was developed by Edsger Dijkstra (another of Dijkstra's algorithms!). It tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, and then checks whether the resulting state could lead to deadlock.

What is another description of the Hardware Approach?

The hardware approach to synchronization can be implemented through a lock & unlock technique. The locking part is done in the entry section, so that only one process is allowed to enter the critical section; after it completes its execution, the process moves to the exit section, where the unlock operation is done so that another process waiting on the lock can repeat this process of execution. This is designed in such a way that all three conditions of the critical section problem are satisfied.

Process vs Thread?

The primary difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces. Threads are not independent of one another the way processes are; as a result, threads share their code section, data section, and OS resources (like open files and signals) with other threads. But, like a process, a thread has its own program counter (PC), register set, and stack space.
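A small C illustration of this difference (a hypothetical example, not from the card): a change made by a pthread is visible to the whole process, while a change made in a fork()ed child stays in the child's separate copy of memory. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int shared = 0;                       /* global: visible to all threads of this process */

void *worker(void *arg) {
    shared = 42;                      /* threads share the address space */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    printf("after thread: shared = %d\n", shared);         /* prints 42 */

    pid_t pid = fork();
    if (pid == 0) {                   /* child process: separate copy of memory */
        shared = 7;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after child process: shared = %d\n", shared);  /* still 42 */
    return 0;
}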

How do we solve the readers-writers problem?

The readers-writers problem is used to manage synchronization so that there are no problems with the object data. For example - If two readers access the object at the same time there is no problem. However if two writers or a reader and writer access the object at the same time, there may be problems. To solve this situation, a writer should get exclusive access to an object i.e. when a writer is accessing the object, no reader or writer may access it. However, multiple readers can access the object at the same time. This can be implemented using semaphores.

What is the readers-writers problem?

The readers-writers problem relates to an object such as a file that is shared between multiple processes. Some of these processes are readers i.e. they only want to read the data from the object and some of the processes are writers i.e. they want to write into the object.

What are events?

The semantics of mutex, semaphore, event, critical section, etc. are similar: all are synchronization primitives. They differ in the cost of using them. We should consult the OS documentation for exact details.

Define Monitor:

To overcome the timing errors that occur while using semaphores for process synchronization, researchers introduced a high-level synchronization construct: the monitor type. A monitor type is an abstract data type used for process synchronization. Being an abstract data type, a monitor contains the shared data variables that are to be shared by all the processes and some programmer-defined operations that allow processes to execute in mutual exclusion within the monitor. A process cannot directly access the shared data variables in the monitor; it has to access them through the procedures defined in the monitor, which allow only one process to access the shared variables at a time. A monitor is a construct in which only one process is active at a time; if another process tries to access the shared variables in the monitor, it is blocked and lined up in a queue to get access to the shared data when the currently accessing process releases it.

A semaphore uses two atomic operations, what are they?

Wait and Signal for process synchronization. A Semaphore is an integer variable, which can be accessed only through two operations wait() and signal().

Explain the race condition:

When more than one process is executing the same code or accessing the same memory or shared variable, there is a possibility that the output or the value of the shared variable is wrong, so all the processes are racing to say that their output is correct. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
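A classic illustration in C (hypothetical example): two threads increment a shared counter without synchronization, so updates are lost and the final value varies from run to run.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                 /* shared variable, no protection */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but interleaved updates are frequently lost. */
    printf("counter = %ld\n", counter);
    return 0;
}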

How does binary semaphore waste CPU cycle time?

Whenever any process waits, it continuously checks the semaphore value (busy waiting) and thereby wastes CPU cycles.

Can a thread acquire more than one lock (Mutex)?

Yes, it is possible that a thread is in need of more than one resource, hence the locks. If any lock is not available the thread will wait (block) on the lock.

Does Peterson's Solution preserve all three conditions, referenced above? :

Yes. 1. Mutual Exclusion is assured as only one process can access the critical section at any time. 2. Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section. 3. Bounded Waiting is preserved as every process gets a fair chance.

Can a race condition occur inside a critical section?

Yes. This happens when the result of multiple thread execution in the critical section differs according to the order in which the threads execute.

Let's get back to the previous safe state.

You give $2 to Chandler and let him complete his work. He returns your $8, which leaves you with $9. Out of this $9, you can give $5 to Ross and let him finish his task with the full $13 and then return that amount to you, which can then be forwarded to Joey to eventually let him complete his task. (Once all the tasks are done, you can take Ross and Joey to Central Perk for not giving them priority.) The goal of the Banker's algorithm is to handle all requests without entering an unsafe state, which can end in deadlock: the moneylender is left without enough money to pay the borrowers, and none of the jobs can complete, leaving tasks unfinished and cash stuck as bad debt. It's called the Banker's algorithm because it could be used in the banking system so that banks never run out of resources and always stay in a safe state.

Go to this link and read about code for banker's algorithm

https://www.hackerearth.com/blog/developers/dijkstras-bankers-algorithm-detailed-explaination/

Difference between Livelock and Deadlock?

A deadlock is a state in which each member of a group of processes is waiting for some other member to release a lock. A livelock, on the other hand, is almost similar to a deadlock, except that the states of the processes involved in a livelock constantly change with regard to one another, with none progressing. Livelock is thus a special case of resource starvation: as in the general definition of starvation, the process is not progressing.

What is the Difference between Deadlock, Starvation, and Livelock?

A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.

What is a Thread?

A thread is a path of execution within a process. A process can contain multiple threads.

Why multithreading?

A thread is also known as lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc. More advantages of multithreading are discussed below

What are Mutex and Semaphore?

In operating system terminology, mutex and semaphore are kernel resources that provide synchronization services (also called synchronization primitives).

Process synchronization problem arises in which type of process category?

Cooperative Process - because resources are shared in Cooperative processes.

Describe Counting Semaphores:

Counting Semaphores can have any value and are not restricted over a certain domain. They can be used to control access to a resource that has a limitation on the number of simultaneous accesses. The semaphore can be initialized to the number of instances of the resource. Whenever a process wants to use that resource, it checks if the number of remaining instances is more than zero, i.e., the process has an instance available. Then, the process can enter its critical section thereby decreasing the value of the counting semaphore by 1. After the process is over with the use of the instance of the resource, it can leave the critical section thereby adding 1 to the number of available instances of the resource.

What is a Critical Section?

Critical Section is a code segment that can be accessed by only one process at a time. Critical section contains shared variables which need to be synchronized to maintain consistency of data variables. [in the figure - the process requests for entry in the Critical Section]

Is the code below Livelock, Deadlock or Starvation?

var p = new object();
lock (p) {
    lock (p) {
        // deadlock. Since p is previously locked
        // we will never reach here...
    }
}

Deadlock

Go to this link and read about different models for concurrency

https://www.toptal.com/software/introduction-to-concurrent-programming

What code is used to define the reader process?

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);

// READ THE OBJECT

wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);

What code is used to define the Writer process?

wait(wrt);

// WRITE INTO THE OBJECT

signal(wrt);

