Midterm 2


What would be the size of a page table for a system with 48-bit addressing assuming a 1 kb frame size and assuming that 4 bytes are needed for every entry. Describe each of the strategies to keep the size of the page table in check?

2^48 addresses / 2^10 bytes per frame = 2^38 pages. Each entry takes 4 = 2^2 bytes, so the table needs 2^2 * 2^38 = 2^40 bytes = 1 TB per process. Strategies to keep the page table in check: hierarchical (multi-level) paging, which pages the page table itself so only the parts in use need be resident; hashed page tables, which hash the page number into a chained table; and inverted page tables, which keep one entry per physical frame instead of one per virtual page.
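The arithmetic above can be checked with a short script (names are illustrative):

```python
# Size of a flat (single-level) page table: one entry per virtual page.
ADDRESS_BITS = 48
FRAME_SIZE = 2 ** 10   # 1 kB frames
ENTRY_SIZE = 4         # bytes per page-table entry

num_pages = 2 ** ADDRESS_BITS // FRAME_SIZE  # 2^38 pages
table_size = num_pages * ENTRY_SIZE          # 2^40 bytes

print(num_pages)                # 274877906944 = 2^38
print(table_size // 2 ** 30)    # 1024 GB = 1 TB
```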

The CPU executing a process generates (5,0,4) as a logical address in a system that uses an inverted page table to compute physical addresses. Let the following be the inverted page table: [(2,1),(5,2),(7,3),(5,0),(6,4)]. Assuming that the page size is 6 find the decimal version of the physical address corresponding to this logical address. Show the result and explain how did you derive it.

The address (5,0,4) means pid 5, page 0, offset 4. Searching the inverted table for the pair (5,0) finds it at index 3, so the page resides in frame 3. Physical address = 3 (frame) * 6 (page size) + 4 (offset) = 22.
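The lookup can be sketched in a few lines (a minimal model; the `translate` helper is illustrative):

```python
# Inverted page table: one (pid, page) entry per physical frame;
# the frame number is simply the entry's index.
inverted_table = [(2, 1), (5, 2), (7, 3), (5, 0), (6, 4)]
PAGE_SIZE = 6

def translate(pid, page, offset):
    frame = inverted_table.index((pid, page))  # search for the (pid, page) pair
    return frame * PAGE_SIZE + offset

print(translate(5, 0, 4))  # frame 3 -> 3*6 + 4 = 22
```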

Let the logical memory of some process be a contiguous sequence of letters from a to z (26 letters, only lower case). Consider a computer system that uses 8-bit addressing scheme with six bits for two-level page indexing with the first level taking four bits, the second taking two, and the offsets another two. Assuming that the following is the content of the outer page table: [4, -, 0, 5, -, -, -, 2, 17, 9, -, 11, 13, -, 12, -] and that the content of the third (counting from zero) frame is: [8, 19, 10, 1] and the content of the sixth frame is: [3, 15, 6, 14] show the binary version of the logical address of letter p knowing that the physical address of p is 63. Show the result and explain how did you derive it.

Physical address 63 with page size 4 gives frame 63 div 4 = 15 and offset 63 mod 4 = 3 (binary 11). Frame 15 holds [m, n, o, p], and offset 3 is indeed p. Frame 15 appears at index 1 (binary 01) of the second-level table [3, 15, 6, 14], which is stored in frame 5 (the "sixth" frame, counting from one). Entry 3 of the outer page table holds 5, pointing to that table, so the outer index is 3 (binary 0011). Putting the pieces together: 0011 01 11, i.e., 00110111.

Define what a monitor is. Explain what problems related to semaphores led to the introduction of monitors? Why is using monitors safer than using semaphores?

A monitor is a high-level language abstraction that provides a convenient and effective mechanism for process synchronization. A monitor type declares the shared variables whose values define the state of an instance, together with the bodies of the functions that operate on them; a function defined within a monitor can access only the variables declared locally within the monitor and its formal parameters. The key guarantee is that only one process may be active in the monitor at a time, so mutual exclusion is enforced by the construct itself. Semaphores motivated monitors because they are easy to misuse: swapping wait() and signal(), duplicating one of them, or omitting one causes timing errors that are hard to detect, since they surface only under particular interleavings. Monitors are safer because the compiler, not the programmer, inserts the synchronization.

Explain what a CPU burst is and how it is used by CPU scheduler. Next, compute a series of predictions for CPU bursts for a process with the following CPU burst history: 10, 4, 6, 2, 10, 8, 2, 8, 10, 4, 6 Assume alpha = 0.75 Start with tau_zero = 8 Show all intermediate details of your computation.

A CPU burst is the interval during which a process uses the CPU between I/O waits. The scheduler uses predictions of the next burst length (e.g., in SJF/SRTF) to decide which process should run next. Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, with alpha = 0.75 and tau_0 = 8:
tau1 = 0.75*10 + 0.25*8 = 7.5 + 2 = 9.5
tau2 = 0.75*4 + 0.25*9.5 = 3 + 2.375 = 5.375
tau3 = 0.75*6 + 0.25*5.375 = 4.5 + 1.34375 = 5.84375
tau4 = 0.75*2 + 0.25*5.84375 = 1.5 + 1.4609375 = 2.9609375
tau5 = 0.75*10 + 0.25*2.9609375 = 7.5 + 0.740234375 = 8.240234375
tau6 = 0.75*8 + 0.25*8.240234375 = 6 + 2.060058594 = 8.060058594
tau7 = 0.75*2 + 0.25*8.060058594 = 1.5 + 2.015014648 = 3.515014648
tau8 = 0.75*8 + 0.25*3.515014648 = 6 + 0.878753662 = 6.878753662
tau9 = 0.75*10 + 0.25*6.878753662 = 7.5 + 1.719688416 = 9.219688416
tau10 = 0.75*4 + 0.25*9.219688416 = 3 + 2.304922104 = 5.304922104
tau11 = 0.75*6 + 0.25*5.304922104 = 4.5 + 1.326230526 = 5.826230526
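The whole series can be reproduced with a short loop (a sketch; the `predictions` name is illustrative):

```python
# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predictions(bursts, alpha, tau0):
    taus = [tau0]
    for t in bursts:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

history = [10, 4, 6, 2, 10, 8, 2, 8, 10, 4, 6]
for i, tau in enumerate(predictions(history, 0.75, 8)):
    print(f"tau{i} = {tau}")
```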

Explain with details what a critical section is. Provide two real-life examples of problems that are similar to the critical section problem and explain why do you think they fall to the same category.

A critical section is a section of code in which shared data is accessed and which is therefore prone to race conditions; no two processes may execute critical sections over the same shared data at the same time. Example 1: you and a roommate are both out of milk, and each goes to buy some without consulting the other, so you end up with too much milk. The milk supply is the shared resource; if "check and restock" were mutually exclusive, only one of you would buy it. Example 2: you need to leave a car at a mechanic, and both you and your spouse independently decide to drive the broken car there; since you both arrive in the same car, there is no car to take you home. In both cases the problem is uncoordinated concurrent access to a shared resource, which is exactly what a critical-section protocol prevents.

Explain what a locality model is. Then, assuming that an OS uses a working-set window of size 4 show all successive working sets (show one set per line) for the following reference set: {3, 4, 3, 3, 6, 5, 5, 6, 2, 1, 2, 4, 3, 3, 4, 2, 1, 1, 1, 2, 3}

The locality model states that a process's memory references cluster into localities (sets of pages used together during a phase of execution); over time a process migrates from one locality to another, and localities may overlap. With a working-set window of size 4, the working set at each reference (once four references have been made) is the set of pages touched by the last four references:
{3,4}
{3,4,6}
{3,5,6}
{3,5,6}
{5,6}
{2,5,6}
{1,2,5,6}
{1,2,6}
{1,2,4}
{1,2,3,4}
{2,3,4}
{3,4}
{2,3,4}
{1,2,3,4}
{1,2,4}
{1,2}
{1,2}
{1,2,3}
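The sets above can be generated mechanically (a sketch; `working_sets` is an illustrative name):

```python
# Working set at reference i: distinct pages among the last DELTA references.
def working_sets(refs, delta):
    return [sorted(set(refs[i - delta:i])) for i in range(delta, len(refs) + 1)]

refs = [3, 4, 3, 3, 6, 5, 5, 6, 2, 1, 2, 4, 3, 3, 4, 2, 1, 1, 1, 2, 3]
for ws in working_sets(refs, 4):
    print(ws)
```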

Explain what a race condition is. Provide two examples with a detailed step-by-step justification for including them.

A race condition occurs when several processes access shared data concurrently and the final outcome depends on the order of execution, which is controlled by the OS scheduler and may vary from run to run. Example 1: two roommates share one shower; whoever wakes up first showers first. The shower is the shared resource, and the outcome depends entirely on the "scheduling" of the two people, not on anything intrinsic to either. Example 2: two people approach a single doorway; whoever arrives first walks through first. Again the result is determined solely by arrival order, just as the winner of a data race is determined by whichever interleaving the scheduler happens to produce.
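The classic lost-update race can be shown deterministically by simulating two interleavings of a read-modify-write counter (a sketch; names are illustrative):

```python
# Lost-update race: if both threads read the counter before either writes,
# one increment is lost. The interleaving is simulated deterministically.
def run(schedule):
    counter = 0
    local = {}          # per-thread private copy of the counter
    for thread, step in schedule:
        if step == "read":
            local[thread] = counter
        else:  # "write"
            counter = local[thread] + 1
    return counter

good = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
bad = [("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]
print(run(good))  # 2 - each increment takes effect
print(run(bad))   # 1 - B overwrites A's update
```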

What is a semaphore? Does it suffer from the busy-waiting problem? Is it possible to implement it so that it does not? Explain how a semaphore can be used to solve the critical section problem. Please note that that requires a proof that using a semaphore indeed is the basis for a solution to the critical section problem. Is the implementation of semaphores a critical section problem itself? Why using spinlocks is not a big problem in this implementation?

A semaphore is an integer variable accessed only through two indivisible (atomic) standard operations, wait() and signal(). The basic implementation does suffer from busy waiting. It can be implemented without busy waiting by attaching a waiting queue to each semaphore: the semaphore then has an integer value and a pointer to a queue of blocked processes, plus two kernel operations, block() (place the caller in the waiting queue) and wakeup() (move a process from the waiting queue to the ready queue). To solve the critical-section problem, initialize a semaphore mutex to 1 and have every process execute wait(mutex) before its critical section and signal(mutex) after it. Proof sketch: mutual exclusion holds because the value starts at 1 and wait/signal are atomic, so at most one process can pass wait() without a matching signal(); progress holds because when the critical section is free the value is 1 and an arriving process is not blocked; bounded waiting holds when the waiting queue is served FIFO. The implementation of semaphores is itself a critical-section problem, because it must guarantee that no two processes execute wait() and signal() on the same semaphore at the same time. Using spinlocks there is not a big problem because the bodies of wait() and signal() are only a few instructions long, so the busy wait is brief and this tiny critical section is rarely occupied.
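The wait()/signal() bracketing can be sketched with a counting semaphore initialized to 1 (Python's `threading.Semaphore`; the worker and counts are illustrative):

```python
import threading

mutex = threading.Semaphore(1)  # binary semaphore guarding the critical section
counter = 0

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()         # wait(mutex)
        counter += 1            # critical section: shared counter update
        mutex.release()         # signal(mutex)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increment is lost
```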

Define what a spinlock is. Then, explain why spinlocks are not appropriate for single processor computers, but are acceptable - and often used - on multiprocessor systems.

A spinlock is a lock on which a waiting process loops continuously in the entry code, repeatedly testing the lock while another process is in its critical section; it is called that because the process "spins" while waiting for the lock to become available. On a single-processor system spinlocks are inappropriate: the thread holding the lock cannot be running while another thread is testing it, because only one thread can run at a time, so the waiter simply wastes CPU cycles until its time slice ends. On multiprocessor systems they are acceptable and common because one thread can spin on one processor while the lock holder runs on another; when critical sections are short, the lock is released quickly and the spinning thread avoids the cost of a context switch.

You have 10 programs that use a certain library. Explain the process of linking the programs with a static version of the library. a dynamic version of the library.

With static linking, the library becomes part of each program at link time: each of the 10 programs gets its own copy of the routines it uses, so the library code is duplicated ten times on disk and in memory. Static libraries conventionally use a ".a" extension. With dynamic linking, each program is linked against a small stub for every library routine; at run time the stub locates the (possibly already memory-resident) library routine, replaces itself with the address of the routine, and executes it. Only one copy of the library needs to be in memory, shared by all 10 programs.

Invent a deadlock avoidance scheme based on the Banker's Algorithm that can be used to solve the Dining Philosophers problem. Provide an example that illustrates how the scheme works. Provide a formal proof that the implementation indeed prevents deadlocks. Do not submit code; rather, plain English description of the algorithm and why it will work.

Treat the five chopsticks as five resource types with one instance each; every philosopher's maximum claim is the two chopsticks adjacent to them. When a philosopher requests a chopstick, pretend to grant it and run a safety check: grant the request only if, in the resulting state, some philosopher could still acquire both of their chopsticks, finish eating, release them, and so unblock the next philosopher, and so on until everyone could finish. Example: with philosophers P0..P4, suppose P0 through P3 each hold their left chopstick; if P4 now requests its left chopstick, granting it would leave every philosopher holding one chopstick and waiting for a neighbor's, an unsafe state, so the request is denied and P4 waits. Proof: the scheme never leaves the set of safe states, and in a safe state at least one philosopher can obtain both chopsticks, eat, and release them, which in turn makes a neighbor able to proceed; by induction every philosopher eventually eats, so a circular wait, and hence a deadlock, can never arise.

Consider the following state of the system: Process Allocation Max Available A B C A B C A B C P0 0 1 0 7 5 3 3 3 2 P1 2 0 0 3 2 2 P2 3 0 2 9 0 2 P3 2 1 1 2 2 2 P4 0 0 2 4 3 3 Prove that granting a request (0, 2, 0) to P0 will end up in a safe state. Use the following notation in your answers. Please note that there are three resources and five processes in the example. Available = [ 15 10 5 ] Allocation = [ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ] Max = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ] Need = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ]

After granting (0,2,0) to P0: Available = (3,1,2), Allocation(P0) = (0,3,0), Need(P0) = (7,2,3).
P3: Need (0,1,1) <= (3,1,2); Available becomes (3,1,2) + (2,1,1) = (5,2,3)
P1: Need (1,2,2) <= (5,2,3); Available becomes (5,2,3) + (2,0,0) = (7,2,3)
P0: Need (7,2,3) <= (7,2,3); Available becomes (7,2,3) + (0,3,0) = (7,5,3)
P4: Need (4,3,1) <= (7,5,3); Available becomes (7,5,3) + (0,0,2) = (7,5,5)
P2: Need (6,0,0) <= (7,5,5); Available becomes (7,5,5) + (3,0,2) = (10,5,7)
All processes can finish in the order <P3, P1, P0, P4, P2>, so the resulting state is safe and the request may be granted.
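The safety algorithm can be sketched directly from its definition (names are illustrative; the safe sequence it prints may differ from the one above, since any valid ordering proves safety):

```python
# Banker's safety check: repeatedly find a process whose Need fits in Work,
# pretend it finishes, and release its Allocation back into Work.
def is_safe(available, allocation, maximum):
    need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
    work, finished, order = list(available), [False] * len(allocation), []
    while len(order) < len(allocation):
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                break
        else:
            return False, order     # no runnable process left: unsafe
    return True, order

# State after granting (0, 2, 0) to P0.
allocation = [[0, 3, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, order = is_safe([3, 1, 2], allocation, maximum)
print(safe, order)
```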

Explain what Belady's Anomaly is. Illustrate the problem by computing page faults in an OS that uses memory with three frames and with four frames. Use the following reference set: {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5}

Belady's Anomaly is the phenomenon that, for some replacement algorithms (FIFO in particular), adding frames can cause more page faults, when more frames should intuitively cause fewer.

FIFO with 3 frames (* = page fault):
*1: [1]
*2: [1 2]
*3: [1 2 3]
*4: [2 3 4]
*1: [3 4 1]
*2: [4 1 2]
*5: [1 2 5]
 1: [1 2 5] hit
 2: [1 2 5] hit
*3: [2 5 3]
*4: [5 3 4]
 5: [5 3 4] hit
9 page faults.

FIFO with 4 frames:
*1: [1]
*2: [1 2]
*3: [1 2 3]
*4: [1 2 3 4]
 1: [1 2 3 4] hit
 2: [1 2 3 4] hit
*5: [2 3 4 5]
*1: [3 4 5 1]
*2: [4 5 1 2]
*3: [5 1 2 3]
*4: [1 2 3 4]
*5: [2 3 4 5]
10 page faults.
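Both counts can be verified with a small FIFO simulator (a sketch; `fifo_faults` is an illustrative name):

```python
from collections import deque

# FIFO page replacement: count faults for a given number of frames.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 - more frames, more faults
```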

List and describe scheduling criteria for scheduling algorithms.

CPU utilization - keep the CPU as busy as possible
Throughput - number of processes that complete their execution per time unit
Turnaround time - amount of time to execute a particular process
Waiting time - amount of time a process has been waiting in the ready queue
Response time - for a time-sharing environment, the amount of time from when a request was submitted until the first response is produced
The objectives of a scheduling algorithm are to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.

Define a deadlock. In the lecture, we said that the solution of the Dining Philosophers problem with semaphores suffers from a potential deadlock problem; explain why. Provide a detailed step-by-step example that leads to a deadlock.

A deadlock is a situation in which two or more processes wait indefinitely for an event that can be caused only by one of the waiting processes. In the semaphore solution, if all five philosophers get hungry at the same time and each picks up the chopstick on their left, every chopstick is taken, each philosopher holds exactly one, and each waits forever for the second chopstick held by a neighbor. Step by step: P0 takes chopstick 0, P1 takes chopstick 1, P2 takes chopstick 2, P3 takes chopstick 3, P4 takes chopstick 4; now each Pi blocks in wait() on chopstick (i+1) mod 5, which is held by P(i+1) mod 5, forming a circular wait in which no one can eat. A real-life analogue: four cars arrive simultaneously at an intersection with no signaling; if each waits for the car blocking its path to move first, none of them can move any further.

Define deadlock prevention and deadlock avoidance. Explain what is the difference between the two.

Deadlock prevention provides a set of methods for ensuring that at least one of the four necessary conditions for deadlock cannot hold. Deadlock avoidance requires that the operating system be given, in advance, additional information about which resources a process will request and use during its lifetime (e.g., its maximum claims), and uses that information to decide at each request whether granting it would keep the system in a safe state. The difference: prevention rules out deadlock statically, by constraining how requests may be made so that a necessary condition can never arise, while avoidance permits all four conditions in principle but dynamically refuses any individual request that could lead to an unsafe state.

Describe and compare approaches to evaluate scheduling algorithms.

Deterministic modeling - takes a particular predetermined workload and defines the performance of each algorithm for that workload
Queueing theory - mathematical analytical models using probabilistic distributions of job arrivals, CPU bursts, etc.
Simulation - simulators driven by probabilistic distributions or trace tapes (captures of real data)
Implementation - measuring the performance of a live system
Deterministic modeling is simple and exact but holds only for the chosen inputs; queueing models are general but approximate; simulation is more accurate but expensive to build; implementation is the most accurate and the most costly.

Compute the memory effective access time in a system with the following characteristics: page faults happen once every 2000 memory accesses on average, disk access time is 5 ms, probability that the dirty bit is set on the victim page is 0.1, memory access time is 100 nanoseconds, and page fault and restart overhead is in total 10 nanoseconds.

Let p = 1/2000 = 0.0005 be the page-fault rate. Servicing a fault costs the 10 ns overhead plus one 5 ms disk read, and with probability 0.1 the victim page is dirty and must first be written back (another 5 ms):
fault time = 10 + 5,000,000 + 0.1 * 5,000,000 = 5,500,010 ns
EAT = (1 - p) * 100 + p * 5,500,010
EAT = 0.9995 * 100 + 0.0005 * 5,500,010
EAT = 99.95 + 2,750.005 = 2,849.955 ns, roughly 2.85 microseconds.
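The computation, under the assumption that a dirty victim adds one extra disk write, can be sketched as:

```python
# Effective access time with demand paging (all times in nanoseconds).
MEM = 100                 # memory access time
FAULT_RATE = 1 / 2000     # one fault per 2000 accesses
DISK = 5_000_000          # 5 ms disk access
DIRTY = 0.1               # probability the victim must be written back first
OVERHEAD = 10             # fault handling + restart overhead

fault_time = OVERHEAD + DISK + DIRTY * DISK  # extra disk write for dirty victims
eat = (1 - FAULT_RATE) * MEM + FAULT_RATE * fault_time
print(eat)  # ~2849.955 ns
```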

Assume that the hit ratio in the MMU that performs paging with an associative memory (translation look-aside buffer (TLB)) is 90%. The access to main memory is 50 nanoseconds, and the lookup time for the TLB is 10 nanoseconds. Compute what is the effective access time of this system. Show the result and explain how did you obtain it.

EAT = hit ratio * (TLB lookup + memory access) + miss ratio * (TLB lookup + two memory accesses)
EAT = 0.9 * (10 + 50) + 0.1 * (10 + 50 + 50)
EAT = 0.9 * 60 + 0.1 * 110
EAT = 54 + 11 = 65 ns
On a TLB hit one memory access suffices; on a miss the page table in memory must be consulted first, which costs a second memory access.

Evaluate CPU scheduling algorithm based entirely on process priorities. Are there any potential pitfalls? If yes, how can the algorithm be changed to address them?

A pure priority scheduler always runs the ready process with the highest priority; SJF and SRTF can be seen as priority scheduling where the priority is the (predicted) length of the next CPU burst. FCFS is the most straightforward policy, since the order of arrival is the order of service, but its average waiting time is comparatively large. SJF is efficient because it clears short processes first rather than making them wait behind a long one, and SRTF improves on it by preempting the running process when a job with a shorter remaining time arrives. The main pitfall of priority-based schemes is starvation: low-priority processes may never execute while higher-priority work keeps arriving. This is addressed by aging, which gradually increases the priority of a process the longer it waits, so every process eventually runs.

Explain what a page fault is, how it is detected, and describe step-by-step the process of page replacement.

A page fault occurs on the first reference to a page that is not in memory: the page-table entry is marked invalid, so the reference traps to the operating system. The OS then consults an internal table for the process: if the reference is genuinely illegal, the process is aborted; if it is a valid reference to a page that simply is not in memory, the page must be brought in. Page replacement, step by step: find a free frame (if none is free, run the replacement algorithm to select a victim frame and, if the victim is dirty, write it out to disk); read the desired page into the frame; update the page and frame tables and set the validation bit to valid; restart the instruction that caused the page fault.

Prove that in exponential averaging the further a value is in the past, the less impact it has on the average value.

Expanding the recurrence tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n gives
tau_{n+1} = alpha*t_n + (1 - alpha)*alpha*t_{n-1} + (1 - alpha)^2*alpha*t_{n-2} + ... + (1 - alpha)^(n+1)*tau_0.
The burst j steps in the past is weighted by (1 - alpha)^j * alpha. Since 0 <= alpha <= 1, we also have 0 <= 1 - alpha <= 1, so each additional factor of (1 - alpha) makes the weight smaller (geometrically). Hence the further a value lies in the past, the less impact it has on the average.

The system that you are designing cannot afford the overhead of deadlock prevention and deadlock avoidance. What is needed to implement deadlock detection scheme? Discuss the options available in dealing with deadlocked systems?

In order to implement deadlock detection, you need a detection algorithm (the OS periodically checks whether a deadlock has actually occurred) and a recovery scheme (a way to break it, such as terminating a process or a whole tree of processes, or preempting resources). For single-instance resources, one option is to maintain a wait-for graph and periodically invoke an algorithm that checks it for cycles; a cycle means deadlock. For multiple-instance resources, vectors and matrices keep track of available resources, allocations, and outstanding requests, and a safety-style algorithm repeatedly looks for a process whose requests can be satisfied; if it reaches a point where no remaining process can finish, those processes are deadlocked.

When can the binding of instructions and data to memory addresses occur? For each binding type, explain the details of the binding process and the implications.

Binding can occur at compile time, load time, or execution time.
Compile time: if it is known at compile time where the process will reside in memory, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: if the memory location is not known at compile time, the compiler must generate relocatable code; binding is done when the program is loaded, and the process cannot be moved in memory afterwards.
Execution time: binding is delayed until run time, so the process can be moved during its execution from one memory segment to another; special hardware (base and limit registers / an MMU) is needed for this to work.

Explain how paging alleviates the problems inherent to contiguous allocation of memory to programs. What is a page table? What is the difference between a page and a frame? What is a free frame list?

Contiguous allocation forces each process into a single block of physical memory, which causes external fragmentation; paging lets the OS place any page of a process into any available frame, so physical memory need not be contiguous and external fragmentation disappears (only internal fragmentation in a process's last page remains). A logical page is the same size as a physical frame; when needed, a page of the process is allocated an available frame. A page table is the per-process table that maps each page number to the frame number holding it (the offset within the page is unchanged by translation). A page is a fixed-size block of logical memory; a frame is a fixed-size block of physical memory. A free frame list is the OS's list of the frames that are currently free to use.

Can a stack the top of which always keeps the last referenced page be the foundation for implementing the LRU page replacement algorithm? How can you find the victim page? Is it an efficient approach to implementing the policy? Consider both array-based and linked-list-based implementations of a stack. For each, explain step-by-step the procedure to keep the last referenced page at the top of the stack.

Yes. If the most recently referenced page is always kept on top of the stack, the least recently used page sinks to the bottom, so the victim is simply the page at the bottom. On every memory reference, the referenced page must be removed from wherever it sits in the stack and placed on top. With an array-based stack this means searching for the entry and shifting every entry above it down one slot, O(n) moves per reference. With a doubly linked list the page still must be found by searching, but moving it to the top is then a constant number of pointer updates (at most six). Either way the stack must be updated on every single memory reference, which makes a pure software implementation far too slow in practice; this is why real systems only approximate LRU.

Explain how to implement efficient page replacement algorithms that approximate LRU. What is a "second chance" approach?

Exact LRU cannot be implemented without special hardware, so it is approximated. Reference-bit approximation: associate a reference bit with each page, initially 0 and set to 1 by the hardware on every reference; pages whose bit is 0 have not been used recently and are replacement candidates. The second-chance (clock) algorithm combines FIFO with the reference bit: examine pages in FIFO order; if the candidate's bit is 1, clear it to 0 and move on (the page gets a second chance), and if the bit is 0, replace that page. The additional-reference-bits (aging) variant keeps a byte per page: periodically the reference bit is shifted into the high-order bit of the byte and cleared, so the byte approximates the page's age/usage, and the page with the smallest value is replaced.

What are the challenges of load time and execution time binding? What are the mechanisms that support relocatable code? Describe the process of loading code into any part of memory.

Load-time binding: the code must be relocatable, and once loaded the process must remain at that location in memory for its lifetime. Execution-time binding: the process must be movable from one memory segment to another during execution, which requires hardware support. The mechanisms that support relocatable code are the "base" (relocation) and "limit" registers: the base register holds the beginning of the process's space in physical memory, and every logical address is checked against the limit register (valid addresses lie between base and base + limit) before being added to the base. Loading code into an arbitrary part of memory then amounts to choosing a free region via one of the allocation approaches, copying the code there, and setting the base and limit registers accordingly; the code itself never needs to know where it resides.

What are the conditions necessary for a deadlock to occur? Do they all need to be satisfied? Will just one be sufficient? Support your answer with details and examples.

Four conditions must hold simultaneously for a deadlock to occur:
Mutual exclusion - only one process at a time can use a resource.
Hold and wait - a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption - a resource can be released only voluntarily by the process holding it, after that process has completed its task.
Circular wait - there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 for one held by P2, ..., Pn-1 for one held by Pn, and Pn for one held by P0.
All four are necessary; one alone is not sufficient. For example, mutual exclusion by itself (one process holding a printer with no one waiting) causes no deadlock, and hold-and-wait without a circular chain merely delays a process until the holder finishes. This is exactly why prevention schemes work by negating any single condition: breaking just one makes deadlock impossible.

Provide a proof that the two-process Peterson's solution satisfies all requirements for a correct solution to the critical section problem. HINT: Analyze all possible scheduling cases for two processes.

Mutual exclusion: P0 and P1 can never be in their critical sections at the same time, because turn is a single variable that holds only one of the values 0 and 1; if both processes reach their while loops with flag[0] = flag[1] = true, the value of turn lets exactly one of them proceed. Progress: if only one process wants to enter, the other's flag is false, so the waiting condition fails immediately and the process enters; if both want to enter, whichever wrote turn last is the one that waits, so the decision is made in bounded time by the competing processes alone. On exit, a process sets its flag to false, releasing the other from its busy wait. Bounded waiting: a process waits at most one turn, because before re-entering, the other process must set turn back to its rival's index, giving the waiting process priority the next time around. For example, when P0 exits it sets flag[0] to false, which breaks P1's loop.

What requirements does a solution to the critical section problem need to satisfy? Provide a commentary for each.

Mutual exclusion: if a process is executing in its critical section, then no other process can be executing in its critical section at the same time. Progress: if no process is executing in its critical section and there exist processes that wish to enter theirs, then the selection of the process that will enter next cannot be postponed indefinitely (and only processes not in their remainder sections take part in the decision). Bounded waiting: a bound must exist on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Consider a computer system that uses 5-bit addressing scheme with 3 bits for page numbers and two bits for offsets. Let the logical memory of some process be a contiguous sequence of letters from a to z (26 letters, only lower case). What is the binary version of the logical address of letter m assuming that the following is the content of the page table: [4, 1, 0, 5, 2, 7, 9, 11]? Find the decimal version of physical address of the letter m. Show the result and explain how did you derive it.

Number of pages: 2^3 (bits for the page number) = 8. Addresses per page: 2^2 (offset bits) = 4. Letter m is the 13th letter (index 12), so it lies on page 12 div 4 = 3 at offset 12 mod 4 = 0. The logical address is built from the page number, not the frame number: page 3 = 011 followed by offset 00, giving the binary logical address 01100 (decimal 12). The page table maps page 3 to frame 5, so the physical address is 5 (frame) * 4 (page size) + 0 (offset) = 20.
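The translation can be sketched as a function (names are illustrative):

```python
# 5-bit logical address: 3-bit page number, 2-bit offset.
page_table = [4, 1, 0, 5, 2, 7, 9, 11]
PAGE_SIZE = 4

def letter_address(letter):
    index = ord(letter) - ord('a')         # position in logical memory a..z
    page, offset = divmod(index, PAGE_SIZE)
    logical = (page << 2) | offset         # page bits followed by offset bits
    physical = page_table[page] * PAGE_SIZE + offset
    return format(logical, '05b'), physical

print(letter_address('m'))  # ('01100', 20)
```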

Consider the following state of the system: Process Allocation Max Available A B C A B C A B C P0 0 1 0 7 5 3 3 3 2 P1 2 0 0 3 2 2 P2 3 0 2 9 0 2 P3 2 1 1 2 2 2 P4 0 0 2 4 3 3 Prove that a request for (3, 3, 0) cannot be granted to P4. Use the following notation in your answers. Please note that there are three resources and five processes in the example. Available = [ 15 10 5 ] Allocation = [ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ] Max = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ] Need = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ]

First check the request itself: (3,3,0) <= Need(P4) = (4,3,1) and (3,3,0) <= Available = (3,3,2), so pretend to grant it: Available becomes (0,0,2), Allocation(P4) = (3,3,2), Need(P4) = (1,0,1). Now run the safety algorithm:
P0: Need (7,4,3) > (0,0,2)
P1: Need (1,2,2) > (0,0,2)
P2: Need (6,0,0) > (0,0,2)
P3: Need (0,1,1) > (0,0,2)
P4: Need (1,0,1) > (0,0,2)
No process's remaining need can be satisfied from Available, so no safe sequence exists: the resulting state is unsafe, and therefore the request cannot be granted.

Consider the following state of the system: Process Allocation Max Available A B C A B C A B C P0 0 1 0 7 5 3 3 3 2 P1 2 0 0 3 2 2 P2 3 0 2 9 0 2 P3 2 1 1 2 2 2 P4 0 0 2 4 3 3 Prove that the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria. Use the following notation in your answers. Please note that there are three resources and five processes in the example. Available = [ 15 10 5 ] Allocation = [ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ][ 0 0 0 ] Max = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ] Need = [ 7 5 3 ][ 3 2 2 ][ 9 0 2 ][ 2 2 2 ][ 4 3 3 ]

P1: Need (1,2,2) <= Available (3,3,2); Available becomes (3,3,2) + (2,0,0) = (5,3,2)
P3: Need (0,1,1) <= (5,3,2); Available becomes (5,3,2) + (2,1,1) = (7,4,3)
P4: Need (4,3,1) <= (7,4,3); Available becomes (7,4,3) + (0,0,2) = (7,4,5)
P2: Need (6,0,0) <= (7,4,5); Available becomes (7,4,5) + (3,0,2) = (10,4,7)
P0: Need (7,4,3) <= (10,4,7); Available becomes (10,4,7) + (0,1,0) = (10,5,7)
Every process can obtain its maximum need and finish in the order <P1, P3, P4, P2, P0>, so the sequence satisfies the safety criteria.

You have a system with 1 MB RAM. The system uses 4 kB pages. Your system loads 5 programs with the following lengths: 167852 bytes 209376 bytes 32866 bytes 254871 bytes 128527 bytes Calculate the scope of internal fragmentation after all programs are loaded into the memory. Show the result and explain how did you compute it.

Page size: 4 * 1024 = 4096 bytes. For each program, internal fragmentation is the unused tail of its last page: ceil(size / 4096) * 4096 - size.
167852 bytes: 41 pages, 41*4096 - 167852 = 84 bytes wasted
209376 bytes: 52 pages, 52*4096 - 209376 = 3616 bytes wasted
32866 bytes: 9 pages, 9*4096 - 32866 = 3998 bytes wasted
254871 bytes: 63 pages, 63*4096 - 254871 = 3177 bytes wasted
128527 bytes: 32 pages, 32*4096 - 128527 = 2545 bytes wasted
Total internal fragmentation: 84 + 3616 + 3998 + 3177 + 2545 = 13420 bytes. (The 197 pages needed fit comfortably in the 256 frames of the 1 MB RAM.)
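The per-program waste and the total can be checked with a loop (a sketch):

```python
import math

# Internal fragmentation: unused tail of each program's last page.
PAGE = 4096
sizes = [167852, 209376, 32866, 254871, 128527]

total_waste = 0
for s in sizes:
    pages = math.ceil(s / PAGE)
    waste = pages * PAGE - s        # bytes wasted in the last page
    print(s, pages, waste)
    total_waste += waste
print(total_waste)  # 13420 bytes
```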

How does a segmentation memory architecture differ from paging? How can they both be integrated in a hybrid architecture? Describe such a hybrid architecture with details on how logical addresses would be translated into their physical equivalents.

Paging is used to obtain a large linear address space without having to buy more physical memory; segmentation allows programs and data to be broken into logically independent address spaces (code, data, stack, etc.) and aids sharing and protection. A hybrid architecture (as on Intel x86) supports segmentation with paging: the CPU generates a logical (segment:offset) address and hands it to the segmentation unit, which uses the segment table to produce a linear address; the linear address is then given to the paging unit, which uses the page tables to generate the physical address in main memory. Together the segmentation and paging units form the equivalent of the MMU.

The following is a resource allocation graph: V={P1, P2, P3, P4, P5, R1, R2, R3, R4} E={(R1, P1), (R2, P2), (R3, P5), (R4, P4), (P1, R2), (P2, R3), (P5, R4), (P4, R1), (P3, R1), (P3, R3)} Using these data detect whether there is a deadlock in the system. Describe the algorithm that you used in your computation using data representations that are feasible to implement on a computer. Note that a computer does not have a pencil and paper.

Store the graph as an adjacency list (or adjacency matrix): R1->P1, R2->P2, R3->P5, R4->P4, P1->R2, P2->R3, P5->R4, P4->R1, P3->R1, P3->R3. Since every resource here has a single instance, a deadlock exists if and only if the graph contains a cycle. Run a depth-first search that marks each vertex as unvisited, on the current path, or finished; reaching a vertex that is already on the current path means a cycle has been found. Here the search finds the cycle P1 -> R2 -> P2 -> R3 -> P5 -> R4 -> P4 -> R1 -> P1, so the system is deadlocked: P1, P2, P5, and P4 wait on one another in a circle, and P3, which requests R1 and R3 held inside the cycle, is blocked as well.
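The cycle check described above can be sketched with a three-color DFS (names are illustrative):

```python
# Resource-allocation graph as an adjacency list; with single-instance
# resources, a cycle in the graph means deadlock.
edges = [("R1", "P1"), ("R2", "P2"), ("R3", "P5"), ("R4", "P4"),
         ("P1", "R2"), ("P2", "R3"), ("P5", "R4"), ("P4", "R1"),
         ("P3", "R1"), ("P3", "R3")]
graph = {}
for u, v in edges:
    graph.setdefault(u, []).append(v)

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / finished
    color = {v: WHITE for e in edges for v in e}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True           # back edge: cycle found
        color[u] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in list(color))

print(has_cycle(graph))  # True: P1->R2->P2->R3->P5->R4->P4->R1->P1
```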

Describe in details time-sharing and real-time Linux scheduling algorithms.

Real-time: FCFS and RR; the highest-priority process always runs first; real-time processes are assigned static priorities from 0 to 9; real-time processes enjoy longer time quanta.
Time-sharing: prioritized, credit-based; the algorithm works as a sequence of epochs; the ready process with the highest credit is given the CPU; credit is lowered when a timer interrupt occurs (the last time quantum is subtracted); when a process's credit reaches 0 it has to wait for another epoch to start; when all ready processes have credit 0, a new epoch starts.

Discuss frame allocation dilemma; i.e., options for distributing frames between processes.

Replacement scopes:
Global replacement - a process selects a replacement frame from the set of all frames, so one process may be given a frame taken away from another.
Local replacement - each process selects only from its own set of allocated frames.
Fixed allocation:
Equal allocation - give each process the same number of frames; equality is not always good, since processes differ in size and behavior.
Proportional allocation - allocate according to the size of the process.
Dynamic allocation:
Priority allocation - use a proportional allocation scheme based on priorities rather than size; if a process generates a page fault, select for replacement a frame from a process with a lower priority number.

Explain with details and examples why resource allocation graph cannot be used for deadlock avoidance in systems that have resources with multiple instances.

A resource-allocation graph (with claim edges) supports deadlock avoidance only when every resource type has a single instance, because in that case a cycle in the graph is both necessary and sufficient for deadlock, so refusing any request that would create a cycle guarantees safety. With multiple instances per resource type, a cycle is necessary but no longer sufficient: a process on the cycle may still be served from a free instance held elsewhere. Example: resource R has two instances; P1 holds one instance of R and requests S, while P2 holds S and requests R. The graph contains a cycle, yet there is no deadlock, because the second instance of R can satisfy P2's request. Since the graph cannot distinguish this safe situation from a genuine deadlock, avoidance in such systems must instead use the Banker's algorithm, which accounts for instance counts.

In the lecture notes, we analyze step-by-step two possible cases for scheduling processes that used Peterson's solution to the synchronization problem. Using similar analysis, analyze two potential scenarios for executing at least two processes that utilize an atomic operation of swapping two variables for entering their critical sections. Show all the steps in your analysis of the scenarios.

The swap-based entry protocol uses a shared boolean lock (initially FALSE) and a per-process local key:
key = TRUE;
do { swap(&lock, &key); } while (key == TRUE);
// critical section
lock = FALSE;
Scenario 1 (no contention): lock = FALSE. P1 sets key1 = TRUE and swaps: now lock = TRUE and key1 = FALSE, so P1 exits the loop and enters its critical section. P2 then sets key2 = TRUE and swaps: lock was already TRUE, so key2 stays TRUE and P2 spins, swapping repeatedly with no effect. P1 leaves its critical section and sets lock = FALSE. On P2's next swap, lock becomes TRUE and key2 becomes FALSE, so P2 enters its critical section and sets lock = FALSE on exit.
Scenario 2 (simultaneous arrival): lock = FALSE; P1 and P2 both set their keys to TRUE. Because swap is atomic, one of them, say P2, executes it first: lock = TRUE, key2 = FALSE, and P2 enters its critical section. P1's swap then finds lock = TRUE, so key1 remains TRUE and P1 spins. When P2 exits and sets lock = FALSE, P1's next swap obtains the lock (key1 = FALSE) and P1 enters. In both scenarios at most one key is FALSE at any moment, so mutual exclusion holds.

Assume that a scheduler uses a round robin scheme with a quantum time of 3. Compute the average turnaround time of the following processes in the ready queue: <Process><CPU Burst Time> P1 6 P2 3 P3 1 P4 7 Assume that no other processes are competing for access to the CPU. Show the detailed timeline of scheduling events.

Assume all four processes are in the ready queue at T0 in the order P1, P2, P3, P4 (remaining burst in parentheses):
T0: CPU: P1(6) QUEUE: P2(3) P3(1) P4(7)
T3: P1's quantum expires CPU: P2(3) QUEUE: P3(1) P4(7) P1(3)
T6: P2 done CPU: P3(1) QUEUE: P4(7) P1(3)
T7: P3 done CPU: P4(7) QUEUE: P1(3)
T10: P4's quantum expires CPU: P1(3) QUEUE: P4(4)
T13: P1 done CPU: P4(4) QUEUE: <empty>
T16: P4's quantum expires, queue empty, P4 continues CPU: P4(1)
T17: P4 done CPU: <idle> QUEUE: <empty>
Turnaround times: P1 = 13, P2 = 6, P3 = 7, P4 = 17.
AVERAGE TURNAROUND TIME: (13 + 6 + 7 + 17) / 4 = 43 / 4 = 10.75
(For reference, the waiting times are P1 = 7, P2 = 3, P3 = 6, P4 = 10, averaging 6.5.)
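The timeline can be replayed with a small simulator (a sketch; assumes all processes arrive at time 0 in the listed order):

```python
from collections import deque

# Round robin: run the head of the queue for at most one quantum,
# requeue it if it still has work left.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining - run == 0:
            completion[name] = time          # turnaround = completion - 0
        else:
            queue.append((name, remaining - run))
    return completion

completion = round_robin({"P1": 6, "P2": 3, "P3": 1, "P4": 7}, 3)
print(completion)
print(sum(completion.values()) / len(completion))  # average turnaround: 10.75
```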

Consider the following task workload pattern: <Process> <CPU Burst Time> <Arrival Time> <Priority> P1 7 1 10 P2 2 3 2 P3 3 5 7 P4 8 11 9 P5 15 14 4 P6 2 14 1 P7 4 21 9 P8 8 25 3 P9 10 28 5 P10 8 32 2 Assuming the multi-level scheduling policy described below scheduling policy, compute the average waiting time. Use the following format to illustrate the details of your computation: T0: P1(10,7) arrival CPU: P1(10,7) [waited 0] Q1: <empty> Q2: <empty> Q3: P1(10,7) T3: P2(2,2) arrival CPU: P1(10,5) [no wait] Q1: P2(2,2) Q2: <empty> Q3: P1(10,5) T5: P3(7,3) arrival CPU: P1(10,3) [no wait] Q1: P2(2,2) Q2: <empty> Q3: P1(10,3) P3(7,3) T6: P1 used its quantum of 5, Q3 adjusted CPU: P2(2,2) [waited 3] Q1: P2(2,2) Q2: P3(6,3) Q3: P1(9,2) ... SCHEDULING POLICY There are three queues Q1, Q2, and Q3: Q1 corresponds to tasks with the priority p < 4 (HIGH) Q2 - for tasks with priorities p > 3 and p < 7 (NORMAL), and Q3 - for tasks with priority p > 6 (LOW). Initially, a new task is added to a queue that corresponds to it's priority. CPU scheduler uses a RR routine (Q1-Q2-Q3) with variable time quantum of across all queues. Processes coming from Q1 are allocated 20 time units, the ones from Q2 - 10, and the lowest priority processes are allowed to run only for 5 time units. A process with the highest priority is selected from within each queue. If there are a number of processes with the same priority, then the FCFS policy is used. If any queue is empty, then a task from the next queue is selected. Decrease the priority of every process by 1 on the expiry of each time quantum (for that queue only). A process is moved to the higher-priority queue if its current priority justifies that. Priority never gets below zero. If a process arrives on time quantum expiry, it's priority is not changed; i.e., the adjustment of the priorities is done before inserting a new process in a queue.

T0: P1(10,7) arrival CPU: P1(10,7) [waited 0] Q1: <empty> Q2: <empty> Q3: P1(10,7)
T3: P2(2,2) arrival CPU: P1(10,5) [no wait] Q1: P2(2,2) Q2: <empty> Q3: P1(10,5)
T5: P3(7,3) arrival CPU: P1(10,3) [no wait] Q1: P2(2,2) Q2: <empty> Q3: P1(10,3) P3(7,3)
T6: P1 used its quantum of 5, Q3 adjusted CPU: P2(2,2) [waited 3] Q1: P2(2,2) Q2: P3(6,3) Q3: P1(9,2)
T8: P2 done, P3 put into CPU CPU: P3(6,3) Q1: <empty> Q2: P3(6,3) Q3: P1(9,2)
T11: P3 ends, P4 arrives and is put into Q3, P1 put into CPU CPU: P1(9,2) Q1: <empty> Q2: <empty> Q3: P1(9,2) P4(9,8)
T13: P1 ends, P4 put into CPU CPU: P4(9,8) Q1: <empty> Q2: <empty> Q3: P4(9,8)
T14: P5 arrives, is put into Q2, P6 arrives, is put into Q1 CPU: P4(9,7) Q1: P6(1,2) Q2: P5(4,15) Q3: P4(9,7)
T18: P4 used its quantum of 5, Q3 adjusted, P6 to CPU CPU: P6(1,2) Q1: P6(1,2) Q2: P5(4,15) Q3: P4(8,3)
T20: P6 done, P5 to CPU CPU: P5(4,15) Q1: <empty> Q2: P5(4,15) Q3: P4(8,3)
T21: P7 arrives, to Q3 CPU: P5(4,14) Q1: <empty> Q2: P5(4,14) Q3: P4(8,3) P7(9,4)
T25: P8 arrives, to Q1 CPU: P5(4,10) Q1: P8(3,8) Q2: P5(4,10) Q3: P4(8,3) P7(9,4)
T28: P9 arrives, to Q2 CPU: P5(4,7) Q1: P8(3,8) Q2: P5(4,7) P9(5,10) Q3: P4(8,3) P7(9,4)
T30: P5 uses its quantum, is moved to Q1, P4 to CPU CPU: P4(8,3) Q1: P8(3,8) P5(3,5) Q2: P9(5,10) Q3: P4(8,3) P7(9,4)
T32: P10 arrives, to Q1 CPU: P4(8,1) Q1: P8(3,8) P5(3,5) P10(2,8) Q2: P9(5,10) Q3: P4(8,1) P7(9,4)
T33: P4 done, P10 to CPU CPU: P10(2,8) Q1: P8(3,8) P5(3,5) P10(2,8) Q2: P9(5,10) Q3: P7(9,4)
T41: P10 done, P9 to CPU CPU: P9(5,10) Q1: P8(3,8) P5(3,5) Q2: P9(5,10) Q3: P7(9,4)
T51: P9 done, P7 to CPU CPU: P7(9,4) Q1: P8(3,8) P5(3,5) Q2: <empty> Q3: P7(9,4)
T55: P7 done, P5 to CPU CPU: P5(3,5) Q1: P8(3,8) P5(3,5) Q2: <empty> Q3: <empty>
T60: P5 done, P8 to CPU CPU: P8(3,8) Q1: P8(3,8) Q2: <empty> Q3: <empty>
T68: P8 done, all processes complete CPU: <empty> Q1: <empty> Q2: <empty> Q3: <empty>

Consider the following task workload pattern: <Process> <CPU Burst Time> P1 7 P2 10 P3 3 P4 8 P5 15 P6 2 P7 4 Assuming the FCFS scheduling policy, compute the average waiting time. Use the following format to illustrate the details of your computation: T01: P1(3) T04: P3(6) T10: P6(5) T15: P4(...) ...

T0: P1(7)
T7: P2(10)
T17: P3(3)
T20: P4(8)
T28: P5(15)
T43: P6(2)
T45: P7(4)
T49: <done>
AVERAGE WAITING TIME: (0 + 7 + 17 + 20 + 28 + 43 + 45) / 7 = 160/7 ≈ 22.86
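As a quick check, the FCFS waiting times can be computed directly: under FCFS each process waits exactly until all earlier arrivals finish. A minimal sketch (assuming, as in the trace, that all processes are ready at T0 in the listed order):

```python
# FCFS: waiting time of each process = sum of the bursts scheduled before it.
bursts = {"P1": 7, "P2": 10, "P3": 3, "P4": 8, "P5": 15, "P6": 2, "P7": 4}

clock = 0
waits = {}
for name, burst in bursts.items():   # dict preserves insertion (arrival) order
    waits[name] = clock              # time spent waiting before first (only) run
    clock += burst

avg_wait = sum(waits.values()) / len(waits)
print(waits)      # {'P1': 0, 'P2': 7, 'P3': 17, 'P4': 20, 'P5': 28, 'P6': 43, 'P7': 45}
print(avg_wait)   # 160/7 ≈ 22.857
```

The start times printed here match the timestamps in the trace above.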

Consider the following task workload pattern: <Process> <CPU Burst Time> <Arrival Time> P1 7 1 P2 10 3 P3 1 5 P4 8 11 P5 15 14 P6 2 15 P7 2 21 Assuming the SRTF scheduling policy, compute the average waiting time. Use the following format to illustrate the details of your computation: T01: P1(3) T04: P3(6) T10: P6(5) T15: P4(...) ...

T1: P1(7)
T3: P1(5) QUEUE: P2(10)
T5: P3(1) QUEUE: P1(3) P2(10)
T6: P1(3) QUEUE: P2(10)
T9: P2(10)
T11: P2(8) QUEUE: P4(8)
T14: P2(5) QUEUE: P4(8) P5(15)
T15: P6(2) QUEUE: P2(4) P4(8) P5(15)
T17: P2(4) QUEUE: P4(8) P5(15)
T21: P7(2) QUEUE: P4(8) P5(15)
T23: P4(8) QUEUE: P5(15)
T31: P5(15)
T46: <done>
AVERAGE WAITING TIME: (1 + 8 + 0 + 12 + 17 + 0 + 0) / 7 = 38/7 ≈ 5.43
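The schedule above can be reproduced with a unit-step SRTF (preemptive SJF) simulation. A sketch, assuming ties are broken in favor of the currently running process, which matches the trace (e.g., P2 keeps the CPU at T11 when P4 arrives with an equal remainder):

```python
# Unit-step SRTF simulation: at every tick, run the ready process with the
# smallest remaining burst; preempt only on a strictly smaller remainder.
jobs = {  # name: (arrival, burst)
    "P1": (1, 7), "P2": (3, 10), "P3": (5, 1), "P4": (11, 8),
    "P5": (14, 15), "P6": (15, 2), "P7": (21, 2),
}

remaining = {n: b for n, (a, b) in jobs.items()}
finish, t, running = {}, 0, None
while remaining:
    ready = [n for n, (a, _) in jobs.items() if a <= t and n in remaining]
    if ready:
        best = min(ready, key=lambda n: remaining[n])
        if running not in ready or remaining[best] < remaining[running]:
            running = best           # dispatch / preempt
        remaining[running] -= 1
        if remaining[running] == 0:
            finish[t + 1] = running  # record completion time
            del remaining[running]
            running = None
    t += 1

print(finish)  # {6: 'P3', 9: 'P1', 17: 'P6', 21: 'P2', 23: 'P7', 31: 'P4', 46: 'P5'}
waits = {n: ft - jobs[n][0] - jobs[n][1] for ft, n in finish.items()}
print(sum(waits.values()) / len(waits))  # 38/7 ≈ 5.43
```

The completion times agree with the context-switch points in the trace (P3 at T6, P1 at T9, ..., P5 at T46).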

How is it possible to run concurrently programs that in total have higher memory requirements than the amount of available physical memory? What is the difference between a physical address and a logical address? How is the logical address mapped to its physical counterpart? What is the lifetime of such binding?

The CPU generates a virtual (logical) address that is bound at run time to a separate physical address, so not all of a program's memory must actually be resident at once; this is what lets the combined memory requirements of concurrently running programs exceed the available physical memory. A physical address is the address seen by the memory unit, i.e., the location of the actual data. A logical address is the virtual address generated by the CPU. The logical address is mapped to its physical counterpart by the Memory Management Unit (MMU), in the simplest scheme by adding the value in the relocation (base) register. With run-time binding, the mapping lasts only as long as the process executes and may change whenever the process is moved in physical memory.
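A minimal sketch of that simplest relocation-register scheme (the base and limit values below are illustrative assumptions, not from the question):

```python
# MMU translation under contiguous allocation: check the logical address
# against the limit register, then add the relocation (base) register.
RELOCATION = 14000   # example base where the process was loaded
LIMIT = 1200         # size of the process's logical address space

def translate(logical: int) -> int:
    if not 0 <= logical < LIMIT:
        # a real MMU raises a trap to the OS (addressing error)
        raise MemoryError(f"trap: logical address {logical} out of range")
    return RELOCATION + logical

print(translate(346))   # 14346
```

Every legal logical address 0..LIMIT-1 maps to the physical range 14000..15199; the binding changes simply by reloading RELOCATION if the process is moved.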

Discuss the use of a resource numbering scheme to prevent deadlocks. Describe how the effectiveness of the scheme can be proven formally. Then, illustrate the theory by designing a deadlock prevention algorithm based on resource numbering for the Dining Philosophers problem.

The resource numbering scheme orders all resource types and requires that each process requests resources in increasing order of enumeration. The effectiveness can be proven by contradiction: if a cycle of waiting processes existed, each process in the cycle would hold a resource numbered higher than the one it requests, which around the cycle implies a resource numbered strictly greater than itself — impossible, so no cycle can form. The algorithm checks whether a process is requesting a resource with a lower number than one it already holds; if so, the process must release the higher-numbered resource before requesting the lower-numbered one. For the dining philosophers, to prevent deadlock a philosopher must pick up the lower-numbered of its two chopsticks first. If it tries to pick up a lower-numbered chopstick while holding the higher-numbered one, it must put down the higher-numbered chopstick and wait until the lower-numbered chopstick is available.
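A sketch of the ordered-acquisition rule for the dining philosophers (chopsticks numbered 0..N-1; each philosopher always locks the lower-numbered chopstick first, so no hold-and-wait cycle can form):

```python
# Deadlock-free dining philosophers via resource numbering.
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N   # each index is touched by exactly one thread

def philosopher(i: int, rounds: int) -> None:
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)  # the numbering rule
    for _ in range(rounds):
        with chopsticks[first]:        # always acquire lower number first
            with chopsticks[second]:
                meals[i] += 1          # eat

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # all philosophers eat 100 times; the run never deadlocks
```

Without the `min`/`max` ordering (everyone grabbing left then right), all five philosophers can each hold one chopstick and wait forever; with it, the philosopher bridging chopsticks N-1 and 0 reaches for chopstick 0 first, breaking the symmetry.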

Explain what virtual memory is, including the role of demand paging realized by the lazy swapper in memory virtualization. Furthermore, describe the benefits of memory virtualization.

Virtual memory is the separation of user logical memory from physical memory, where only part of a program needs to be in memory for execution. Under demand paging, a page is swapped into memory only when it is actually needed; the lazy swapper that realizes this never brings a page in ahead of a reference. The benefits of memory virtualization are: flexible process management — a process does not need to wait for a large enough opening in memory, since only a portion of it has to be resident, which improves response time; the total logical address space of all processes can be larger than the available physical memory, allowing more concurrent processes and users; an address space can be shared by several processes, e.g., dynamic libraries; and less I/O — fewer pages to swap in and out, which improves performance.

Define what a resource allocation graph is. Suppose there is a cycle in a resource allocation graph. Can you tell if there is a deadlock? Support your answer with all necessary details and an example.

A resource allocation graph is a visual representation of which process holds and which process requests which resource. Formally, it is a set of vertices V = V(Process) ∪ V(Resource) and a set of edges E = E(Request) ∪ E(Allocation), where V(Process) is the set of all processes in the system and V(Resource) is the set of all resource types. If the graph contains a cycle and there is one instance per resource type, then there is a deadlock. If there is a cycle but some resource types have several instances, then a deadlock is only possible, not certain. For example, if P1 holds R1 and requests R2 while P2 holds R2 and requests R1, and each resource has a single instance, the cycle P1 → R2 → P2 → R1 → P1 is a deadlock; if R2 had a second instance held by a third process, that process could finish, release its instance, and break the cycle.
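A sketch of cycle detection on such a graph via depth-first search (the edge encoding P→R for requests and R→P for allocations follows the definition above; process/resource names are illustrative):

```python
# DFS cycle detection on a resource-allocation graph.
# For single-instance resources, a detected cycle is a deadlock.
def has_cycle(graph: dict) -> bool:
    visiting, done = set(), set()
    def dfs(node):
        visiting.add(node)                     # node is on the current path
        for nxt in graph.get(node, []):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True                    # back edge -> cycle
        visiting.discard(node)
        done.add(node)
        return False
    return any(n not in done and dfs(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(deadlocked))                    # True
print(has_cycle({"P1": ["R1"], "R1": ["P2"]}))  # False: P2 can finish
```

With multi-instance resources this check is only a necessary condition; a full deadlock detector must track instance counts (as in the banker's-style detection algorithm).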

Provide a formal proof that blocking interrupts can be used as a solution for the critical section problem on a single-processor computer. Would it work on a multiprocessor machine? Justify your answer.

When a process is ready to enter its critical section, it disables interrupts. On a single processor this ensures that no other process can be scheduled while it runs, so mutual exclusion holds. Re-enabling interrupts immediately after the critical section guarantees progress and bounded waiting, since the section is finite and the scheduler resumes as soon as interrupts are back on. On a multiprocessor machine this does not work: disabling interrupts on one CPU does not stop processes running on the other CPUs, and relaying the disable request to all processors is costly and delays entry into every critical section.

Discuss efficient implementation of working sets for memory paging based on reference counts. How can we find quickly the victim page in such a system? Think in terms of fast assembler operations.

The working set keeps the pages a process has referenced in the most recent time window resident, to prevent thrashing; it is an approximation of the locality model. Maintaining an exact working set is expensive, so it is approximated with an interval timer plus reference bits: on every timer tick, each page's reference bit is shifted into a per-page counter and then cleared, building a history of the references in the last time slots. The victim page is then simply the page with the smallest counter, which can be found quickly because the shift, OR, and compare operations involved are all single fast assembler instructions.
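A sketch of this "aging" approximation, assuming an 8-bit counter per page (page names and the tick interface are illustrative):

```python
# Aging approximation of the working set: on each timer tick, shift every
# page's counter right by one and OR the reference bit into the top bit.
# Each per-page update is one shift plus one OR -- cheap assembler ops.
def tick(counters: dict, referenced: set) -> None:
    for page in counters:
        counters[page] = (counters[page] >> 1) | (0x80 if page in referenced else 0)

def victim(counters: dict):
    # the page with the smallest counter was referenced least recently
    return min(counters, key=counters.get)

counters = {"A": 0, "B": 0, "C": 0}
tick(counters, {"A", "B"})   # A and B referenced in this interval
tick(counters, {"A"})        # only A referenced
print(counters)              # {'A': 0xC0, 'B': 0x40, 'C': 0x00}
print(victim(counters))      # 'C' -- not referenced in either interval
```

Recent references dominate because they land in the high-order bits, so comparing raw counter values directly orders pages by approximate recency of use.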

Consider the following requests (Rn) for memory allocation (A) and deallocation (D) from a memory pool of size 20: R1 A 6 R2 A 3 R3 A 5 R4 A 2 R1 D R5 A 4 R4 D R6 A 1 R7 A 2 Compute explicitly (i.e., show all steps) the external fragmentation assuming the first-fit allocation policy. Use the following notation: [xxxxxxxxxxxxxxxxxxxx] [111111xxxxxxxxxxxxxx] [111111222xxxxxxxxxxx] which shows the initial state and then the two following allocations.

[xxxxxxxxxxxxxxxxxxxx]
[111111xxxxxxxxxxxxxx]
[111111222xxxxxxxxxxx]
[11111122233333xxxxxx]
[1111112223333344xxxx]
[xxxxxx2223333344xxxx]
[5555xx2223333344xxxx]
[5555xx22233333xxxxxx]
[55556x22233333xxxxxx]
[55556x2223333377xxxx]
EXTERNAL FRAGMENTATION: free holes of sizes 1 and 4, i.e., 5 units of scattered free memory.
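The three fit policies in this and the next two questions can be replayed with one small allocator sketch, parameterized by policy (the request encoding is an assumption made for the example):

```python
# Replay the allocation/deallocation sequence under first-, worst-, or
# best-fit, returning the memory map after every step ('x' = free).
def run(requests, size=20, policy="first"):
    mem = ["x"] * size
    maps = ["".join(mem)]
    for req in requests:
        if req[1] == "D":                          # deallocate by owner id
            mem = ["x" if c == req[0] else c for c in mem]
        else:                                      # allocate req[2] units
            holes, i = [], 0
            while i < size:                        # scan free holes (start, len)
                if mem[i] == "x":
                    j = i
                    while j < size and mem[j] == "x":
                        j += 1
                    holes.append((i, j - i))
                    i = j
                else:
                    i += 1
            fits = [(s, ln) for s, ln in holes if ln >= req[2]]
            if policy == "first":
                start = fits[0][0]                 # lowest-address fitting hole
            elif policy == "worst":
                start = max(fits, key=lambda h: h[1])[0]   # largest hole
            else:                                  # best fit: smallest hole
                start = min(fits, key=lambda h: h[1])[0]
            for k in range(start, start + req[2]):
                mem[k] = req[0]
        maps.append("".join(mem))
    return maps

reqs = [("1", "A", 6), ("2", "A", 3), ("3", "A", 5), ("4", "A", 2), ("1", "D"),
        ("5", "A", 4), ("4", "D"), ("6", "A", 1), ("7", "A", 2)]
print(run(reqs, policy="first")[-1])   # 55556x2223333377xxxx
print(run(reqs, policy="worst")[-1])   # 5555xx22233333677xxx
print(run(reqs, policy="best")[-1])    # 77xxxx222333336x5555
```

The three final maps reproduce the first-fit, worst-fit, and best-fit answers in this section.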

Consider the following requests (Rn) for memory allocation (A) and deallocation (D) from a memory pool of size 20: R1 A 6 R2 A 3 R3 A 5 R4 A 2 R1 D R5 A 4 R4 D R6 A 1 R7 A 2 Compute explicitly (i.e., show all steps) the external fragmentation assuming the worst-fit allocation policy. Use the following notation: [xxxxxxxxxxxxxxxxxxxx] [111111xxxxxxxxxxxxxx] [111111222xxxxxxxxxxx] which shows the initial state and then the two following allocations.

[xxxxxxxxxxxxxxxxxxxx]
[111111xxxxxxxxxxxxxx]
[111111222xxxxxxxxxxx]
[11111122233333xxxxxx]
[1111112223333344xxxx]
[xxxxxx2223333344xxxx]
[5555xx2223333344xxxx]
[5555xx22233333xxxxxx]
[5555xx222333336xxxxx]
[5555xx22233333677xxx]
EXTERNAL FRAGMENTATION: free holes of sizes 2 and 3, i.e., 5 units of scattered free memory.

Consider the following requests (Rn) for memory allocation (A) and deallocation (D) from a memory pool of size 20: R1 A 6 R2 A 3 R3 A 5 R4 A 2 R1 D R5 A 4 R4 D R6 A 1 R7 A 2 Compute explicitly (i.e., show all steps) the external fragmentation assuming the best-fit allocation policy. Use the following notation: [xxxxxxxxxxxxxxxxxxxx] [111111xxxxxxxxxxxxxx] [111111222xxxxxxxxxxx] which shows the initial state and then the two following allocations.

[xxxxxxxxxxxxxxxxxxxx]
[111111xxxxxxxxxxxxxx]
[111111222xxxxxxxxxxx]
[11111122233333xxxxxx]
[1111112223333344xxxx]
[xxxxxx2223333344xxxx]
[xxxxxx22233333445555]
[xxxxxx22233333xx5555]
[xxxxxx222333336x5555]
[77xxxx222333336x5555]
EXTERNAL FRAGMENTATION: free holes of sizes 4 and 1, i.e., 5 units of scattered free memory.

Assume that an OS with 3 frames uses FIFO replacement policy. For the following reference set {4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3} show with details all page replacements. State clearly how many page faults have occurred. Show frame content (which pages are loaded) for each page reference per line. Use "-" for empty frame; otherwise, use the page number of the page loaded into a particular frame. Precede the number corresponding to the page being referenced with ">". If there is a page fault, indicate which page was replaced by preceding the corresponding number with "*". For example, the following line: - 4 >2 indicates that the first frame is free, the second frame holds page number 4, and the page number 2 held in the third frame has just been referenced. The following line: *6 4 2 indicates that page 6 was referenced and faulted; it indicates that page replacement occurred in the first frame. For example, in FIFO, the following string: 1 1 5 2 4 2 0 2 5 1 0 5 0 2 3 will be processed as follows:
*1 - -
>1 - -
1 *5 -
1 5 *2
*4 5 2
4 5 >2
4 *0 2
4 0 >2
4 0 *5
*1 0 5
1 >0 5
1 0 >5
1 >0 5
1 *2 5
1 2 *3

{4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3}
*4 - -
4 *3 -
4 3 *6
*2 3 6
2 *1 6
>2 1 6
2 1 *5
*4 1 5
4 *3 5
>4 3 5
4 3 *2
4 >3 2
*6 3 2
6 *1 2
6 1 >2
6 1 *3
12 page faults
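The fault count can be verified with a short FIFO sketch, keeping the resident pages in a queue ordered by load time:

```python
# FIFO page replacement: on a fault with full frames, evict the page
# that was loaded earliest (front of the queue).
from collections import deque

refs = [4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3]
frames, faults = deque(), 0
for page in refs:
    if page not in frames:
        faults += 1
        if len(frames) == 3:
            frames.popleft()      # evict the oldest-loaded page
        frames.append(page)       # load the referenced page
print(faults)  # 12
```

Note that a hit does not reorder the queue; only the load time matters, which is what distinguishes FIFO from LRU in the next question.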

Assume that an OS with 3 frames uses LRU replacement policy. For the following reference set {4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3} show with details all page replacements. State clearly how many page faults have occurred. Show frame content (which pages are loaded) for each page reference per line. Use "-" for empty frame; otherwise, use the page number of the page loaded into a particular frame. Precede the number corresponding to the page being referenced with ">". If there is a page fault, indicate which page was replaced by preceding the corresponding number with "*". For example, the following line: - 4 >2 indicates that the first frame is free, the second frame holds page number 4, and the page number 2 held in the third frame has just been referenced. The following line: *6 4 2 indicates that page 6 was referenced and faulted; it indicates that page replacement occurred in the first frame. For example, in FIFO, the following string: 1 1 5 2 4 2 0 2 5 1 0 5 0 2 3 will be processed as follows:
*1 - -
>1 - -
1 *5 -
1 5 *2
*4 5 2
4 5 >2
4 *0 2
4 0 >2
4 0 *5
*1 0 5
1 >0 5
1 0 >5
1 >0 5
1 *2 5
1 2 *3

{4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3}
*4 - -
4 *3 -
4 3 *6
*2 3 6
2 *1 6
>2 1 6
2 1 *5
2 *4 5
*3 4 5
3 >4 5
3 4 *2
>3 4 2
3 *6 2
3 6 *1
*2 6 1
2 *3 1
13 page faults
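The LRU count can likewise be checked with a sketch that exploits Python's insertion-ordered dicts: re-referencing a page moves it to the most-recently-used end, so the victim is always at the front.

```python
# LRU page replacement: on a fault with full frames, evict the page
# whose last reference is the oldest.
refs = [4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3]
frames, faults = {}, 0                       # dict keys keep recency order
for page in refs:
    if page in frames:
        frames.pop(page)                     # hit: refresh recency below
    else:
        faults += 1
        if len(frames) == 3:
            frames.pop(next(iter(frames)))   # evict least recently used
    frames[page] = True                      # (re)insert at MRU end
print(faults)  # 13
```

The one extra fault relative to FIFO comes from the divergence starting at the reference to 4 (LRU keeps page 2 resident where FIFO evicts it).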

Assume that an OS with 3 frames uses OPT replacement policy. For the following reference set {4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3} show with details all page replacements. State clearly how many page faults have occurred. Show frame content (which pages are loaded) for each page reference per line. Use "-" for empty frame; otherwise, use the page number of the page loaded into a particular frame. Precede the number corresponding to the page being referenced with ">". If there is a page fault, indicate which page was replaced by preceding the corresponding number with "*". For example, the following line: - 4 >2 indicates that the first frame is free, the second frame holds page number 4, and the page number 2 held in the third frame has just been referenced. The following line: *6 4 2 indicates that page 6 was referenced and faulted; it indicates that page replacement occurred in the first frame. For example, in FIFO, the following string: 1 1 5 2 4 2 0 2 5 1 0 5 0 2 3 will be processed as follows:
*1 - -
>1 - -
1 *5 -
1 5 *2
*4 5 2
4 5 >2
4 *0 2
4 0 >2
4 0 *5
*1 0 5
1 >0 5
1 0 >5
1 >0 5
1 *2 5
1 2 *3

{4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3}
*4 - -
4 *3 -
4 3 *6
4 3 *2
4 *1 2
4 1 >2
4 *5 2
>4 5 2
4 *3 2
>4 3 2
4 3 >2
4 >3 2
*6 3 2
*1 3 2
1 3 >2
1 >3 2
9 page faults
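Belady's optimal policy can be checked too, since OPT is computable offline when the whole reference string is known: on a fault, evict the resident page whose next use lies farthest in the future (or never occurs).

```python
# OPT (Belady) page replacement on a known reference string.
refs = [4, 3, 6, 2, 1, 2, 5, 4, 3, 4, 2, 3, 6, 1, 2, 3]
frames, faults = set(), 0
for i, page in enumerate(refs):
    if page not in frames:
        faults += 1
        if len(frames) == 3:
            def next_use(p):
                # distance to p's next reference; "never again" sorts last
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.add(page)
print(faults)  # 9
```

Nine faults is the minimum achievable for this string with 3 frames, which is why OPT serves as the yardstick for FIFO (12) and LRU (13) above.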

