OS Final Exam - Problems


9.4 Consider the following page-replacement algorithms. Rank these algorithms on a five-point scale from "bad" to "perfect" according to their page-fault rate. Separate those algorithms that suffer from Belady's anomaly from those that do not.

Ranked from best to worst page-fault rate, with whether each suffers from Belady's anomaly:
1. Optimal replacement: no
2. LRU replacement: no
3. Second-chance replacement: yes
4. FIFO replacement: yes
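Belady's anomaly under FIFO can be demonstrated with a short simulation; the reference string below is the classic textbook example where adding a frame increases the fault count:

```python
def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames, queue, faults = set(), [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:      # evict the oldest resident page
                frames.discard(queue.pop(0))
            frames.add(page)
            queue.append(page)
    return faults

# Reference string that exhibits Belady's anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames: more frames, more faults
```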

7.11 Consider the deadlock situation that could occur in the dining-philosophers problem when the philosophers obtain the chopsticks one at a time. Discuss how the four necessary conditions for deadlock indeed hold in this setting. Discuss how deadlocks can be avoided by eliminating any one of the four conditions.

How the four conditions hold:
1) Mutual exclusion: a chopstick can be used by only one philosopher at a time.
2) Hold and wait: philosophers hold onto the chopstick in hand while they wait for the other chopstick.
3) No preemption: a chopstick allocated to a philosopher cannot be forcibly taken away.
4) Circular wait: each philosopher waits for a chopstick held by a neighbor, forming a cycle.

Eliminating each condition avoids deadlock:
1) Allow simultaneous sharing of chopsticks.
2) Have philosophers relinquish the first chopstick if they are unable to obtain the other chopstick.
3) Allow chopsticks to be forcibly taken away if a philosopher has held a chopstick for a long period of time.
4) Enforce a numbering of chopsticks and always obtain the lower-numbered chopstick before obtaining the higher-numbered one.
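The last fix, numbering the chopsticks and always acquiring the lower-numbered one first, can be sketched with ordinary locks (the philosopher count and the empty "eat" step are illustrative):

```python
import threading

NUM = 5
chopsticks = [threading.Lock() for _ in range(NUM)]

def dine(philosopher: int) -> None:
    left, right = philosopher, (philosopher + 1) % NUM
    # Eliminate circular wait: always lock the lower-numbered chopstick first.
    first, second = sorted((left, right))
    with chopsticks[first]:
        with chopsticks[second]:
            pass  # eat

threads = [threading.Thread(target=dine, args=(i,)) for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # terminates: no cycle in lock acquisition order
print("all philosophers finished")
```

Because every philosopher acquires locks in the same global order, no cycle of waiters can form, so the program always terminates.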

Request 240 bytes: allocated a 256-byte block
Request 120 bytes: allocated a 128-byte block
Request 60 bytes: allocated a 64-byte block
Request 130 bytes: allocated a 256-byte block

Release 240
Release 60
Release 120

Remaining allocated: the 256-byte block holding the 130-byte request
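The block sizes above are consistent with a buddy-style allocator that rounds each request up to the next power of two (an assumption, since the exercise text is not shown). A minimal sketch of that rounding:

```python
def buddy_size(request: int) -> int:
    """Round a request up to the next power of two (buddy-system block size)."""
    size = 1
    while size < request:
        size *= 2
    return size

for req in (240, 120, 60, 130):
    print(req, "->", buddy_size(req))
# 240 -> 256, 120 -> 128, 60 -> 64, 130 -> 256
```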

Compare the circular-wait scheme with the deadlock-avoidance schemes (like Banker's algorithm) with respect to the following issues: a. Runtime overheads b. System throughput

a. Runtime overheads: a deadlock-avoidance scheme tends to increase runtime overhead because of the cost of keeping track of current resource allocations and checking, on each request, that the system remains in a safe state. b. System throughput: however, a deadlock-avoidance scheme allows more concurrent use of resources than schemes that statically prevent the formation of deadlock, such as circular-wait ordering. In that sense, a deadlock-avoidance scheme can increase system throughput.

9.1 Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

A page fault occurs when a process accesses a page that has not yet been brought into main memory. When a page fault occurs, the operating system first verifies that the reference is a valid access to the process's address space; if not, the process is terminated. Otherwise, the OS finds a free frame, schedules a disk operation to read the desired page into that frame, updates the page table to mark the page resident, and restarts the instruction that was interrupted by the fault.

8.14 On a system with paging, a process cannot access memory that it does not own. Why? How could the operating system allow access to other memory? Why should it or should it not?

Because the operating system controls the contents of the page table, it can limit a process to accessing only those physical pages allocated to the process. There is no way for a process to refer to a page it does not own, because the page will not be in its page table. To allow such access, an operating system simply needs to allow entries for non-process memory to be added to the process's page table. This is useful when two or more processes need to exchange data: they just read and write to the same physical addresses (which may be at different logical addresses in each process). However, such access must be granted carefully, since uncontrolled sharing would defeat memory protection.

8.5 What is the effect of allowing two entries in a page table to point to the same page frame in memory? Explain how this effect could be used to decrease the amount of time needed to copy a large amount of memory from one place to another. What effect would updating some byte on one page have on the other page?

By allowing two entries in a page table to point to the same page frame in memory, users can share code and data. If the code is reentrant, much memory space can be saved through the shared use of large programs such as text editors, compilers, and database systems. "Copying" large amounts of memory could be effected by having different page tables point to the same memory location. However, sharing of nonreentrant code or data means that any user having access to the code can modify it and these modifications would be reflected in the other user's copy.

12.11 What are the advantages of the variant of linked allocation that uses a FAT to chain together the blocks of a file?

The advantage is that while accessing a block that is stored at the middle of a file, its location can be determined by chasing the pointers stored in the FAT as opposed to accessing all of the individual blocks of the file in a sequential manner to find the pointer to the target block. Typically, most of the FAT can be cached in memory and therefore the pointers can be determined with just memory accesses instead of having to access the disk blocks.
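Chasing FAT pointers can be sketched as an in-memory table lookup (the block numbers below are hypothetical, chosen only to show a file scattered across the disk):

```python
# A FAT is a table indexed by block number; each entry holds the next
# block of the file (-1 marks end-of-file). Hypothetical layout:
FAT = {4: 7, 7: 2, 2: 10, 10: -1}   # file occupies blocks 4 -> 7 -> 2 -> 10

def nth_block(fat, start, n):
    """Find the disk block holding the n-th logical block of a file by
    chasing FAT pointers; memory accesses only, if the FAT is cached."""
    block = start
    for _ in range(n):
        block = fat[block]
    return block

print(nth_block(FAT, 4, 2))  # third logical block lives in disk block 2
```

With plain linked allocation, finding the same block would require reading each preceding data block from disk just to obtain its next-block pointer.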

9.20 When a page fault occurs, the process requesting the page must block while waiting for the page to be brought from disk into physical memory. Assume that there exists a process with five user-level threads and that the mapping of user threads to kernel threads is one to one. If one user thread incurs a page fault while accessing its stack, would the other user threads belonging to the same process also be affected by the page fault—that is, would they also have to wait for the faulting page to be brought into memory? Explain.

They would not have to wait for the faulting page to be brought into memory. Because the mapping of user threads to kernel threads is one to one, only the kernel thread backing the faulting user thread blocks; the other threads have separate stacks and remain runnable, provided they do not themselves reference the faulting page.

9.32 What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page-fault continuously. The system can detect thrashing by comparing the level of CPU utilization with the level of multiprogramming: low CPU utilization combined with heavy paging activity indicates thrashing. It can be eliminated by reducing the level of multiprogramming.

8.19 Program binaries in many systems are typically structured as follows. Code is stored starting with a small, fixed virtual address, such as 0. The code segment is followed by the data segment that is used for storing the program variables. When the program starts executing, the stack is allocated at the other end of the virtual address space and is allowed to grow towards lower virtual addresses. What is the significance of this structure for the following schemes?

Contiguous memory allocation:
• Requires the operating system to allocate the entire extent of the virtual address space to the program when it starts executing. This could be much larger than the actual memory requirements of the process.

Pure segmentation:
• Gives the operating system flexibility to assign a small extent to each segment at program startup time and to extend the segment if required.

Pure paging:
• Does not require the operating system to allocate the maximum extent of the virtual address space at startup time, but it still requires the operating system to allocate a large page table spanning all of the program's virtual address space. When a program needs to extend the stack or the heap, it allocates a new page, and the corresponding page-table entry is already preallocated.

9.33 Is it possible for a process to have two working sets, one representing data and another representing code? Explain.

Yes, it is possible to have two working sets, one for data accesses and one for code. Many processors provide two TLBs, one for instruction pages and one for data pages, for this very reason.

9.36 A system provides support for user-level and kernel-level threads. The mapping in this system is one to one (there is a corresponding kernel thread for each user thread). Does a multithreaded process consist of:

A working set for each thread, since each thread executes independently and generates its own reference pattern.

8.12 Most systems allow a program to allocate more memory to its address space during execution. Allocation of data in the heap segments of programs is an example of such allocated memory. What is required to support dynamic memory allocation in the following schemes?

a) Contiguous memory allocation: might require relocation of the entire program, since there may not be enough space adjacent to the program for it to grow its allocated memory in place.
b) Pure segmentation: might also require relocation of the segment that needs to be extended, since there may not be enough space adjacent to the segment for it to grow.
c) Pure paging: incremental allocation of new pages is possible in this scheme without requiring relocation of the program's address space.

12.4 Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file?

• Contiguous—if file is usually accessed sequentially, if file is relatively small. • Linked—if file is large and usually accessed sequentially. • Indexed—if file is large and usually accessed randomly.

8.13 Compare the memory organization schemes of contiguous memory allocation, pure segmentation, and pure paging with respect to the following issues: • External fragmentation • Internal fragmentation • Ability to share code between processes

Contiguous memory allocation:
• Suffers from external fragmentation: address spaces are allocated contiguously, and holes develop as old processes die and new processes are initiated.
• Does not suffer from internal fragmentation.
• Does not allow code sharing between processes, since a process's virtual memory is not broken into fine-grained pieces that could be shared.

Pure segmentation:
• Suffers from external fragmentation: a segment of a process is laid out contiguously in physical memory, and fragmentation occurs as segments of dead processes are replaced by segments of new processes.
• Does not suffer from internal fragmentation.
• Allows code sharing: two different processes can share a code segment while keeping distinct data segments.

Pure paging:
• Does not suffer from external fragmentation.
• Suffers from internal fragmentation: processes are allocated memory at page granularity, and if a page is not completely utilized, the remainder of the page is wasted.
• Allows code sharing at the granularity of pages.

9.17 What is the copy-on-write feature, and under what circumstances is its use beneficial? What hardware support is required to implement this feature?

Copy-on-write allows processes to share pages rather than each having a separate copy of the pages. When one process tries to write to a shared page, a trap is generated and the OS makes a separate copy of the page for the writing process. This is commonly used in a fork() operation, where the child is supposed to have a complete copy of the parent's address space. Rather than create a separate copy, the OS lets the parent and child share the parent's pages; since each is supposed to have its own private copy, a page is copied when either of them attempts a write to it. The hardware support required to implement this feature is simply the following: on each memory access, the page table is consulted to check whether the page is write-protected. If it is, a trap occurs and the operating system can perform the copy and resume the process.
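The mechanism can be sketched as a toy model (the class and its bookkeeping are invented for illustration, not a real OS API): frames are shared after a fork, mappings become read-only, and a write "trap" copies the frame before modifying it.

```python
class CowMemory:
    """Toy copy-on-write: processes share frames until one of them writes."""
    def __init__(self):
        self.frames = {0: b"parent data"}   # frame number -> contents
        self.page_tables = {}               # pid -> {page: (frame, writable)}

    def fork(self, parent, child):
        # Child shares the parent's frames; both mappings become read-only.
        self.page_tables[child] = {
            page: (frame, False)
            for page, (frame, _) in self.page_tables[parent].items()
        }
        self.page_tables[parent] = dict(self.page_tables[child])

    def write(self, pid, page, data):
        frame, writable = self.page_tables[pid][page]
        if not writable:                    # "trap": copy the frame first
            new_frame = max(self.frames) + 1
            self.frames[new_frame] = self.frames[frame]
            self.page_tables[pid][page] = (new_frame, True)
            frame = new_frame
        self.frames[frame] = data

mem = CowMemory()
mem.page_tables["parent"] = {0: (0, True)}
mem.fork("parent", "child")
mem.write("child", 0, b"child data")        # triggers the copy
print(mem.frames[0])                        # b'parent data' -- unchanged
```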

8.9 Explain the difference between internal and external fragmentation

Internal fragmentation:
• Definition: unused space within one of the partitions of memory allocated to a process.
• Schemes that suffer from it: fixed-size memory-allocation schemes.
• Reducing it: use multiple variable-sized partitions.
• When the unused space can be used: the space allocated to a process but unused cannot be used by the system until the process releases it.

External fragmentation:
• Definition: there is enough free space in total to satisfy a request for memory, but no single contiguous region of available memory is large enough to allocate to the process.
• Schemes that suffer from it: multiple-partition (variable-sized) allocation schemes.
• Solving it: compaction.
• When the unused space can be used: the scattered free space can be used by the system after compaction is performed.

8.3 Why are page sizes always a power of 2?

Page sizes are always a power of 2 so that the page number and offset can be computed with just shift and mask operations, rather than a costly division.
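For example, with 4 KB pages (2^12 bytes), the offset is simply the low 12 bits of the address:

```python
PAGE_SIZE = 4096                    # 2**12, so the offset is the low 12 bits
OFFSET_BITS = 12
OFFSET_MASK = PAGE_SIZE - 1         # 0xFFF

addr = 0x12345
page_number = addr >> OFFSET_BITS   # shift: drops the offset bits -> 0x12
offset = addr & OFFSET_MASK         # mask: keeps the offset bits -> 0x345
print(hex(page_number), hex(offset))
```

If the page size were not a power of 2, extracting the page number would require an integer division and the offset a modulo operation.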

11.12 Provide examples of applications that typically access files according to the following methods:

Sequential: word processors, music players, video players.
Random: databases, video editors, audio editors.

7.17 Consider a system consisting of four resources of the same type that are shared by three processes, each of which needs at most two resources. Show that the system is deadlock free.

Suppose the system is deadlocked. Then each process is holding one resource and waiting for one more. Since there are three processes and four resources, at most three resources are held, so one resource remains free, and at least one process can obtain its second resource. That process requires no more resources, so it will run to completion and return both resources, contradicting the assumed deadlock. Hence the system is deadlock free.
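The counting argument can also be checked exhaustively: enumerate every possible allocation of the four resources to the three processes and confirm that no state leaves every process blocked (a small brute-force sketch):

```python
from itertools import product

TOTAL, MAX_NEED = 4, 2   # four identical resources; each process needs at most two

# A deadlock state would need: nothing free, and every process still
# holding fewer than MAX_NEED resources (i.e., still waiting for more).
deadlocks = [
    held for held in product(range(MAX_NEED + 1), repeat=3)
    if sum(held) <= TOTAL
    and TOTAL - sum(held) == 0              # no free resource
    and all(h < MAX_NEED for h in held)     # everyone still waiting
]
print(deadlocks)  # [] -- no such state exists, so the system is deadlock free
```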

9.38 Consider a system that allocates pages of different sizes to its processes. What are the advantages of such a paging scheme? What modifications to the virtual memory system provide this functionality?

The program could have a large code segment or use large-sized arrays as data. These portions of the program could be allocated to larger pages, thereby decreasing the memory overhead associated with the page table. The virtual memory system would then have to maintain multiple free lists of pages for the different sizes and would also need more complex address-translation code to take the different page sizes into account.

12.8 Explain how the VFS layer allows an operating system to support multiple types of file systems easily.

VFS introduces a layer of indirection in the file system implementation. In many ways, it is similar to object-oriented programming techniques. System calls can be made generically (independent of file system type). Each file system type provides its function calls and data structures to the VFS layer. A system call is translated into the proper specific functions for the target file system at the VFS layer. The calling program has no file-system-specific code, and the upper levels of the system call structures likewise are file system-independent. The translation at the VFS layer turns these generic calls into file-system-specific operations.
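The layer of indirection can be sketched in an object-oriented style (the file-system classes and mount points below are invented for illustration): the generic `read` call never contains file-system-specific code and dispatches to whichever concrete implementation is mounted.

```python
class Ext2FS:
    """Illustrative concrete file system: provides its own operations."""
    def read(self, path): return f"ext2 read of {path}"

class FAT32FS:
    """A second concrete file system with the same interface."""
    def read(self, path): return f"fat32 read of {path}"

class VFS:
    """Generic layer: routes calls to the file system mounted for a path."""
    def __init__(self):
        self.mounts = {}                    # mount point -> concrete fs

    def mount(self, point, fs):
        self.mounts[point] = fs

    def read(self, path):                   # generic "system call"
        for point, fs in self.mounts.items():
            if path.startswith(point):
                return fs.read(path)        # translated to the specific fs
        raise FileNotFoundError(path)

vfs = VFS()
vfs.mount("/home", Ext2FS())
vfs.mount("/usb", FAT32FS())
print(vfs.read("/usb/notes.txt"))   # dispatched to the FAT32 implementation
```

Adding support for a new file system type means registering one more class with the same interface; no generic code changes.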

7.17 Consider the dining-philosophers problem where the chopsticks are placed at the center of the table and any two of them can be used by a philosopher. Assume that requests for chopsticks are made one at a time. Describe a simple rule for determining whether a particular request can be satisfied without causing deadlock, given the current allocation of chopsticks to philosophers.

When a philosopher requests a first chopstick, do not grant the request if no other philosopher currently holds two chopsticks and only one chopstick remains. Granting it could leave every philosopher holding exactly one chopstick with none able to obtain a second, whereas if some philosopher holds two chopsticks, that philosopher will eventually finish eating and release them.

9.34 Consider the parameter DELTA used to define the working-set window in the working-set model. When DELTA is set to a small value, what is the effect on the page-fault frequency and the number of active (nonsuspended) processes currently executing in the system? What is the effect when DELTA is set to a very high value?

When Δ is set to a small value, there could be a large number of page faults. The set of resident pages for a process might be underestimated, allowing a process to be scheduled even though not all of its required pages are resident. When Δ is set to a very high value, a process's resident set is overestimated, and this might prevent many processes from being scheduled even though their required pages are resident. However, once a process is scheduled, it is unlikely to generate page faults, since its resident set has been overestimated.
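The effect of Δ can be seen by computing the working set directly over a reference string (the reference string below is made up for illustration):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2, 5, 5]
t = len(refs) - 1
print(working_set(refs, t, 3))   # small delta: small estimated resident set
print(working_set(refs, t, 8))   # large delta: nearly every page counted
```

A small Δ yields a small working-set estimate (fewer frames reserved, more page faults, more processes admitted); a large Δ yields a large estimate (more frames reserved, fewer faults, fewer processes admitted).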

9.16 Consider a system that uses pure demand paging. a. When a process first starts execution, how would you characterize the page-fault rate? b. Once the working set for a process is loaded into memory, how would you characterize the page-fault rate? c. Assume that a process changes its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.

a. The page-fault rate is 100%, because each page reference leads to a page fault until the pages required for execution have been loaded into memory. b. Once the working set for a process is loaded into memory, the page-fault rate drops to close to 0%, because the pages required for execution are resident. c. The operating system can suspend one of the processes: its pages are written out (swapped), its frames are allocated to the process with the new working set, and the suspended process can be restarted later.

9.15 A simplified view of thread states is Ready, Running, and Blocked, where a thread is either ready and waiting to be scheduled, is running on the processor, or is blocked (for example, waiting for I/O). This is illustrated in Figure 9.31. Assuming a thread is in the Running state, answer the following questions, and explain your answer: a. Will the thread change state if it incurs a page fault? If so, to what state will it change? b. Will the thread change state if it generates a TLB miss that is resolved in the page table? If so, to what state will it change? c. Will the thread change state if an address reference is resolved in the page table? If so, to what state will it change?

a. Yes, the thread will change state if it incurs a page fault. It will change from the Running state to the Blocked state while the page is brought in from disk. b. Not necessarily. If the page-table entry is not found in the TLB (a TLB miss), the page number is used to index the page table. If the page is already in main memory, the TLB is updated with the new entry and execution continues, since no I/O operation is needed. If the page is not in main memory, a page fault is generated; in that case the thread changes to the Blocked state and waits for the I/O to access the disk. c. No, the thread will not change state if an address reference is resolved in the page table, because the needed page is already in main memory.

Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of the CPU and the paging disk. Three alternative results are shown below. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping? a. CPU utilization 13%, disk utilization 97% b. CPU utilization 87%, disk utilization 3% c. CPU utilization 13%, disk utilization 3%

a. CPU utilization 13%, disk utilization 97%:
• Thrashing is occurring.
• The degree of multiprogramming should be decreased, not increased, to raise CPU utilization.
• Paging is not helping.
b. CPU utilization 87%, disk utilization 3%:
• CPU utilization is very good; the CPU is busy most of the time.
• The degree of multiprogramming should stay the same; increasing it may lead to thrashing.
• Paging helps keep the CPU busy most of the time, so it is helping in this case.
c. CPU utilization 13%, disk utilization 3%:
• CPU utilization is low.
• The degree of multiprogramming can be increased to raise CPU utilization.
• Paging is neither helping nor hurting.

9.27 Consider a demand-paging system with the following time-measured utilizations: CPU utilization 20% Paging disk 97.7% Other I/O devices 5% For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. a. Install a faster CPU. b. Install a bigger paging disk. c. Increase the degree of multiprogramming. d. Decrease the degree of multiprogramming. e. Install more main memory. f. Install a faster hard disk or multiple controllers with multiple hard disks. g. Add prepaging to the page-fetch algorithms. h. Increase the page size.

a. Install a faster CPU: no.
b. Install a bigger paging disk: no.
c. Increase the degree of multiprogramming: no.
d. Decrease the degree of multiprogramming: yes.
e. Install more main memory: likely to improve CPU utilization, as more pages can remain resident and not require paging to or from the disk.
f. Install a faster hard disk or multiple controllers with multiple hard disks: also an improvement; since the disk bottleneck is relieved by faster response and more throughput, the CPU gets data more quickly.
g. Add prepaging to the page-fetch algorithms: again the CPU will get data faster, so it will be more in use. This holds only if the paging behavior is amenable to prefetching (i.e., some of the access is sequential).
h. Increase the page size: this will result in fewer page faults if data is being accessed sequentially. If access is more or less random, more paging activity could ensue, because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as to increase it.

7.12 Assume a multithreaded application uses only reader-writer locks for synchronization. Applying the four necessary conditions for deadlock, is deadlock still possible if multiple reader-writer locks are used?

Yes, deadlock is still possible, because all four necessary conditions can hold:
• Mutual exclusion: a writer holds a reader-writer lock exclusively.
• Hold and wait: a thread holds one reader-writer lock while waiting for another.
• No preemption: a lock cannot be forcibly taken away from the thread holding it.
• Circular wait: multiple threads can each wait for a reader-writer lock already acquired by another thread, forming a cycle.

9.14 Assume that a program has just referenced an address in virtual memory. Describe a scenario in which each of the following can occur. (If no such scenario can occur, explain why.) • TLB miss with no page fault • TLB miss and page fault • TLB hit and no page fault • TLB hit and page fault

• TLB miss with no page fault: the TLB did not contain the page-table entry, but the entry exists in the page table and is valid, so the access proceeds after the TLB is refilled from the page table.
• TLB miss and page fault: the TLB did not contain the entry, and the page table either has no valid entry for the page or marks it as not resident, so the page must be brought in from disk.
• TLB hit and no page fault: the TLB contained the entry, and the page is valid and in memory.
• TLB hit and page fault: this situation cannot occur. A valid TLB entry cannot exist for a page that is not in memory; the operating system updates the page table and invalidates the corresponding TLB entry when a page is flushed out to disk.
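The three possible cases can be sketched as a toy translation function (the page numbers, frame numbers, and table contents below are illustrative):

```python
def translate(addr, tlb, page_table, page_size=4096):
    """Classify an access as TLB hit, TLB miss, or page fault (toy model)."""
    page = addr // page_size
    if page in tlb:
        return "TLB hit, no page fault"        # frame known, page resident
    if page in page_table and page_table[page] is not None:
        tlb[page] = page_table[page]           # refill TLB from page table
        return "TLB miss, no page fault"
    return "TLB miss, page fault"              # invalid or not resident

tlb = {3: 8}                         # page 3 cached in the TLB -> frame 8
page_table = {3: 8, 5: 1, 9: None}   # page 9 is valid but not resident
print(translate(3 * 4096, tlb, page_table))   # TLB hit, no page fault
print(translate(5 * 4096, tlb, page_table))   # TLB miss, no page fault
print(translate(9 * 4096, tlb, page_table))   # TLB miss, page fault
```

Note that the model has no way to produce "TLB hit and page fault": a page enters the TLB only via a valid, resident page-table entry.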

