Chapter 9 Operating Systems


A system provides support for user-level and kernel-level threads. The mapping in this system is one to one (there is a corresponding kernel thread for each user thread). Does a multithreaded process consist of (a) a working set for the entire process or (b) a working set for each thread? Explain

(b) is correct: a working set for each thread. Whenever a new user-level thread is created, a corresponding kernel thread is created as well. Thus, in this one-to-one multithreaded environment, the process consists of a working set for each thread.

Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

A page fault occurs when a process references a page whose valid/invalid bit is set to invalid; in other words, the process addresses a location in logical memory that is not currently in physical memory. When a page fault occurs:
1. The hardware traps to the operating system, suspending the process.
2. The OS checks whether the reference is legal or illegal.
3. If the reference is illegal, the process is aborted. If it is legal, the OS prepares to bring the needed page into physical memory.
4. If a frame is available on the free-frame list, the page is simply loaded into it. If no free frame is available, the OS selects a current frame as a victim, writes that frame back to secondary storage if it has been modified, and loads the needed page into the newly freed frame.
5. Finally, the page table is updated to reflect this change, and the process is restarted at the instruction that caused the trap.
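As a minimal, self-contained sketch of this sequence (the data structures and helper names here are illustrative, not an actual OS interface):

```python
# Toy simulation of the page-fault handling steps above.
# page_table, free_frames, and resident are illustrative structures, not a real OS API.

class PTE:
    def __init__(self, legal):
        self.legal = legal      # is the page part of the logical address space?
        self.valid = False      # is the page currently resident in a frame?
        self.frame = None
        self.dirty = False

def handle_page_fault(page, page_table, free_frames, resident):
    entry = page_table.get(page)
    if entry is None or not entry.legal:
        print(f"page {page}: illegal reference -> abort process")
        return
    if free_frames:                          # a free frame is available
        frame = free_frames.pop()
    else:                                    # no free frame: pick a victim (FIFO here)
        victim = resident.pop(0)
        v = page_table[victim]
        if v.dirty:
            print(f"writing dirty victim page {victim} back to disk")
        v.valid, frame = False, v.frame
    print(f"loading page {page} from disk into frame {frame}")
    entry.frame, entry.valid = frame, True   # update the page table
    resident.append(page)                    # then restart the faulting instruction

# Usage: three legal pages, only two frames
page_table = {p: PTE(legal=True) for p in range(3)}
free_frames, resident = [0, 1], []
for p in [0, 1, 2]:
    handle_page_fault(p, page_table, free_frames, resident)
```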

We have an operating system for a machine that uses base and limit registers, but we have modified the machine to provide a page table. Can the page tables be set up to simulate base and limit registers? How can they be, or why can they not be?

A page table marks each entry with a valid/invalid bit, while base and limit registers specify the starting address and extent of the memory a program may use. Provided the base and limit fall on page boundaries, the registers can be simulated with a page table: enter the frame mappings for the region starting at the base address, mark exactly those pages valid, and leave every page outside the [base, base + limit) range marked invalid. Any reference outside the region then traps, just as a base/limit check would, so the page table simulates base and limit registers.
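A small sketch of the idea, assuming the base and limit fall on page boundaries and using made-up example values:

```python
# Sketch: simulating a base/limit register pair with a page table's valid bits.
# PAGE_SIZE, base, and limit are illustrative example numbers.

PAGE_SIZE = 1024
base, limit = 4096, 3 * PAGE_SIZE            # region [4096, 4096 + 3072)

num_pages = 16
valid = [False] * num_pages
for page in range(base // PAGE_SIZE, (base + limit) // PAGE_SIZE):
    valid[page] = True                       # only pages inside [base, base+limit) are valid

def access(address):
    page = address // PAGE_SIZE
    if page >= num_pages or not valid[page]:
        raise MemoryError(f"address {address} outside the simulated base/limit region")
    return page                              # translation would continue from here

print(access(4096))      # first valid page
print(access(5000))      # still inside the region
# access(8000)           # would raise: beyond base + limit
```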

The slab-allocation algorithm uses a separate cache for each different object type. Assuming there is one cache per object type, explain why this scheme doesn't scale well with multiple CPUs. What could be done to address this scalability issue?

A second strategy for allocating kernel memory is known as slab allocation. A slab is made up of one or more physically contiguous pages, and a cache consists of one or more slabs. There is a single cache for each unique kernel data structure, and each cache is populated with objects that are instantiations of the kernel data structure the cache represents. Because there is only one cache per object type, every CPU allocating or freeing that object type must synchronize on the same cache, so lock contention on the shared cache grows with the number of CPUs and the scheme does not scale well. To address this, each CPU can be given its own small per-CPU cache for each object type, so that most allocations and frees touch only CPU-local state and the shared cache is accessed only in batches to refill or drain the per-CPU caches; a sketch follows.
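A hedged sketch of the per-CPU cache idea, with illustrative structures and batch sizes (a real slab allocator implements the fast path per CPU without taking any lock):

```python
# Illustrative per-CPU object caches backed by a single shared (locked) cache.
# NUM_CPUS, batch size, and class names are assumptions for the sketch.

import threading
from collections import deque

NUM_CPUS = 4

class PerCPUCache:
    """One small front-end cache per CPU, backed by a shared slab cache."""
    def __init__(self, make_object, batch=8):
        self.make_object = make_object
        self.batch = batch
        self.shared = deque()                        # global cache (needs a lock)
        self.shared_lock = threading.Lock()
        self.local = [deque() for _ in range(NUM_CPUS)]  # per-CPU caches, no lock needed

    def alloc(self, cpu):
        if not self.local[cpu]:                      # refill from the shared cache in batches
            with self.shared_lock:
                for _ in range(self.batch):
                    self.local[cpu].append(self.shared.pop() if self.shared
                                           else self.make_object())
        return self.local[cpu].pop()                 # common case: no lock taken

    def free(self, cpu, obj):
        self.local[cpu].append(obj)                  # return to this CPU's cache

cache = PerCPUCache(lambda: object())
obj = cache.alloc(cpu=0)
cache.free(cpu=0, obj=obj)
```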

Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of the CPU and the paging disk. Three alternative results are shown below. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping? a. CPU utilization 13 percent; disk utilization 97 percent b. CPU utilization 87 percent; disk utilization 3 percent c. CPU utilization 13 percent; disk utilization 3 percent

a. CPU utilization is far lower than disk utilization, which means the system is thrashing. The degree of multiprogramming should be decreased, not increased, to raise CPU utilization. Paging is keeping the CPU idle, so it is not helping. b. CPU utilization is high and the paging disk is nearly idle, so the CPU is busy doing useful work most of the time. The degree of multiprogramming should probably stay the same; increasing it could lead to thrashing. The small amount of paging that occurs keeps the CPU supplied with work, so paging is helping in this case. c. Both CPU and disk utilization are low: the CPU is simply not getting enough work. The degree of multiprogramming should be increased to raise CPU utilization. Here, paging is neither really helping nor hurting.

Assume that you are monitoring the rate at which the pointer in the clock algorithm moves. (The pointer indicates the candidate page for replacement.) What can you say about the system if you notice the following behavior: a. Pointer is moving fast. b. Pointer is moving slow.

a. If the pointer is moving fast, the program is accessing a large number of pages simultaneously. It is likely that, in the interval between a page's reference bit being cleared and being checked again, the page has been accessed again and therefore cannot be replaced; this forces the algorithm to scan more pages before a victim is found. b. If the pointer is moving slowly, the virtual memory system is finding candidate pages for replacement quickly, indicating that many of the resident pages are not being accessed.
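A minimal sketch of the clock (second-chance) victim search, with made-up frame contents, showing why many set reference bits make the pointer travel far:

```python
# Minimal clock (second-chance) victim selection. Frame contents and reference
# bits are illustrative; a real VM system maintains these per resident page.

def clock_select_victim(frames, ref_bits, hand):
    """Advance the hand until a page with a clear reference bit is found."""
    steps = 0
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0          # give the page a second chance
        else:
            return hand, steps          # victim found after `steps` advances
        hand = (hand + 1) % len(frames)
        steps += 1

frames   = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 1, 0]                 # mostly-referenced pages: the hand moves far
print(clock_select_victim(frames, ref_bits, hand=0))   # (3, 3): fast-moving pointer

ref_bits = [0, 1, 0, 0]                 # many unreferenced pages: victim found at once
print(clock_select_victim(frames, ref_bits, hand=0))   # (0, 0): slow-moving pointer
```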

Assume that you have a page-reference string for a process with m frames (initially all empty). The page-reference string has length p, and n distinct page numbers occur in it. Answer these questions for any page-replacement algorithms: a. What is a lower bound on the number of page faults? b. What is an upper bound on the number of page faults?

a. Since all the frames are initially empty, each distinct page must be brought into memory at least once. There are n distinct page numbers, so n is the lower bound on the number of page faults. b. The upper bound depends on the number of frames. If the number of frames m is at least the number of distinct pages n, no page ever needs to be evicted, so the upper bound is n regardless of the value of p. If m is less than n, then in the worst case every reference can fault, so the upper bound is p.
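A small FIFO-based fault counter (the reference string and frame counts are illustrative) that demonstrates both bounds:

```python
# Counting page faults for a FIFO-managed set of m frames, to illustrate the
# bounds: at least n faults (n distinct pages) and at most p faults (string length).

from collections import deque

def count_faults(reference_string, m):
    frames, faults = deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == m:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # p = 12 references, n = 5 distinct pages
print(count_faults(refs, m=5))   # 5  -> lower bound n reached when all pages fit
print(count_faults(refs, m=1))   # 12 -> upper bound p reached when every reference misses
```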

The VAX/VMS system uses a FIFO replacement algorithm for resident pages and a free-frame pool of recently used pages. Assume that the free-frame pool is managed using the LRU replacement policy. Answer the following questions: a. If a page fault occurs and the page does not exist in the free-frame pool, how is free space generated for the newly requested page? b. If a page fault occurs and the page exists in the free-frame pool, how is the resident page set and the free-frame pool managed to make space for the requested page? c. What does the system degenerate to if the number of resident pages is set to one? d. What does the system degenerate to if the number of pages in the free-frame pool is zero?

a. When a page fault occurs and the page is not in the free-frame pool, one of the pages in the free-frame pool is evicted to disk, creating space for one of the resident pages to be moved into the free-frame pool; the requested page is then brought into the resident set. b. If the page exists in the free-frame pool, it is moved back into the set of resident pages, while one of the resident pages is moved to the free-frame pool. c. If the number of resident pages is set to one, the system degenerates into the replacement policy used for the free-frame pool, namely LRU. d. If the free-frame pool contains zero pages, the system degenerates into pure FIFO replacement.
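A toy sketch of this scheme, with illustrative sizes, using a FIFO queue for the resident set and an LRU-ordered free-frame pool:

```python
# Sketch of a FIFO resident set in front of an LRU free-frame pool, as in the
# VAX/VMS scheme described above. Sizes and page numbers are made up.

from collections import deque, OrderedDict

RESIDENT_MAX, POOL_MAX = 3, 2
resident = deque()                     # FIFO order: left = oldest
pool = OrderedDict()                   # LRU order: first = least recently added

def reference(page):
    if page in resident:
        return "hit in resident set"
    if page in pool:                   # soft fault: reclaim the frame from the pool
        del pool[page]
    else:                              # hard fault: page must come in from disk
        if len(pool) >= POOL_MAX:
            pool.popitem(last=False)   # evict the LRU pool page (write back if dirty)
    if len(resident) >= RESIDENT_MAX:
        pool[resident.popleft()] = True    # oldest resident page moves to the pool
    resident.append(page)
    return "page fault serviced"

for p in [1, 2, 3, 4, 2, 1]:
    print(p, reference(p), list(resident), list(pool))
```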

Consider a system that uses pure demand paging. a. When a process first starts execution, how would you characterize the page-fault rate? b. Once the working set for a process is loaded into memory, how would you characterize the page-fault rate? c. Assume that a process changes its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.

a. When a process first starts execution under pure demand paging, the page-fault rate is very high, approaching 100 percent: at the start of execution, none of the process's pages are in memory, so each new page reference causes a fault until the pages needed for execution have been loaded. b. Once the working set for the process is loaded into memory, the page-fault rate drops to nearly zero, because all the pages required for the current locality are resident. c. If a process changes its locality and the size of the new working set is too large to fit in available free memory, designers can have the OS suspend (swap out) one of the processes to free frames, or take frames from other processes, until enough memory is available for the new working set.

A simplified view of thread states is Ready, Running, and Blocked, where a thread is either ready and waiting to be scheduled, is running on the processor, or is blocked (for example, waiting for I/O). This is illustrated in Figure 9.31. Assuming a thread is in the Running state, answer the following questions, and explain your answer: a. Will the thread change state if it incurs a page fault? If so, to what state will it change? b. Will the thread change state if it generates a TLB miss that is resolved in the page table? If so, to what state will it change? c. Will the thread change state if an address reference is resolved in the page table? If so, to what state will it change?

a. Yes. The thread changes from the Running state to the Blocked state, because servicing a page fault requires waiting for an I/O operation (bringing the page in from disk) to finish. b. No. On a TLB miss that is resolved in the page table, the page number is simply used to index the page table in memory; no I/O is required, so the thread keeps running. c. No. If the address reference is resolved in the page table, the required page is already in main memory, no I/O operation is needed, and the thread remains in the Running state.

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?

Causes: 1. A process that does not have enough frames to hold its working set page-faults again and again, so CPU utilization decreases. 2. The CPU scheduler, seeing low CPU utilization, increases the degree of multiprogramming, which takes frames away from the existing processes and makes them fault even more. The result is a system dominated by paging operations (swap-in and swap-out): the lack of available frames combined with an increasing degree of multiprogramming leaves the processes spending all their time paging rather than executing, which is thrashing.

Assume that we have a demand-paged memory. The page table is held in registers. It takes 8 milliseconds to service a page fault if an empty frame is available or if the replaced page is not modified and 20 milliseconds if the replaced page is modified. Memory-access time is 100 nanoseconds. Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than 200 nanoseconds?

The page table is held in registers, so there is no extra page-table access overhead. Servicing a page fault takes 8 ms if the replaced page is unmodified and 20 ms if it is modified, the replaced page is modified 70 percent of the time, and the memory-access time is 100 ns. The average page-fault service time is 0.3 x 8 ms + 0.7 x 20 ms = 16.4 ms = 16,400,000 ns. With page-fault rate p, EAT = (1 - p) x 100 + p x 16,400,000 <= 200, so 100 + 16,399,900p <= 200, giving p <= 100 / 16,399,900, i.e. p is approximately 0.000006 (about one fault per 164,000 memory accesses).
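A quick check of the arithmetic (the variable names are only for the computation):

```python
# Verifying the maximum page-fault rate for EAT <= 200 ns with the numbers above.

memory_access = 100                              # ns
service = 0.3 * 8e6 + 0.7 * 20e6                 # ns: 30% clean (8 ms), 70% dirty (20 ms)
# EAT = (1 - p) * memory_access + p * service <= 200
p_max = (200 - memory_access) / (service - memory_access)
print(service, p_max)                            # 16,400,000 ns and p ~= 0.0000061
```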

Suppose that your replacement policy (in a paged system) is to examine each page regularly and to discard that page if it has not been used since the last examination. What would you gain and what would you lose by using this policy rather than LRU or second-chance replacement?

In general this would not be an especially effective policy. By using this algorithm, you would gain more free pages in memory for newly faulted pages, and the mechanism is simpler than maintaining an LRU ordering. However, this comes at the expense of having to bring back into memory pages that LRU or second-chance would not have evicted in the first place, since a page is discarded after a single idle interval even if it is still part of the working set. The policy would be effective for workloads where a page is used once and then never again, because then it does not matter that the page is discarded.

Discuss the hardware support required to support demand paging.

Loading an entire program into memory loads the executable code for all options, regardless of whether an option is ultimately selected by the user. With demand paging, pages are instead loaded only as they are needed. The valid-invalid bit scheme can be used for this purpose: when the bit is set to "valid", the associated page is both legal and in memory; when it is set to "invalid", the page is either not part of the logical address space or is valid but currently on disk. The page-table entry for a page that is in memory is set as usual, while the entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk. The hardware support required is therefore a page table with a valid-invalid bit that can trap to the operating system, the ability to restart any instruction after such a trap, and secondary memory (usually a disk, the swap device) to hold the nonresident pages.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. c. Increase the degree of multiprogramming.

No. The degree of multiprogramming is the number of processes kept in memory at the same time, and those processes share the same fixed amount of memory. Increasing the degree of multiprogramming gives each process fewer frames, which increases paging; when the CPU is already idle because of paging, adding more processes does not improve CPU utilization and can make it worse.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. b. Install a bigger paging disk.

No. A bigger paging disk only provides room for more pages in secondary storage. It does not make page faults less frequent or faster to service, so the CPU still spends the same amount of time waiting on paging and its utilization does not improve.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. h. Increase the page size.

Not necessarily. Increasing the page size reduces the number of page faults when data are accessed sequentially, since each fault brings in more of the data. If data are accessed randomly, however, more paging action can result: fewer pages fit in memory, so more of them remain on disk, and more data must be transferred on every page fault. This change is therefore as likely to decrease CPU utilization as to increase it.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. a. Install a faster CPU.

No. Installing a faster CPU will not improve CPU utilization, because the CPU is idle waiting on paging I/O, not because it is too slow. A faster CPU would simply finish the available work sooner and spend an even larger fraction of its time idle.

When a page fault occurs, the process requesting the page must block while waiting for the page to be brought from disk into physical memory. Assume that there exists a process with five user-level threads and that the mapping of user threads to kernel threads is one to one. If one user thread incurs a page fault while accessing its stack, would the other user threads belonging to the same process also be affected by the page fault—that is, would they also have to wait for the faulting page to be brought into memory? Explain.

Only the thread that incurred the fault while accessing its stack is blocked. The remaining threads do not have to wait for the faulting page, because with a one-to-one mapping each user-level thread has its own kernel thread with its own stack and can be scheduled independently. Hence, when a page fault occurs in one thread, that thread must block while the page is brought in from disk to physical memory, but the other user threads of the same process can continue to run.

You have devised a new page-replacement algorithm that you think may be optimal. In some contorted test cases, Belady's anomaly occurs. Is the new algorithm optimal? Explain your answer.

Optimal algorithm: an optimal algorithm, by definition, never suffers from Belady's anomaly; it replaces the page that will not be used for the longest period of time, so adding frames can never increase its fault count. Therefore, if the newly designed algorithm exhibits Belady's anomaly, even in contorted test cases, it is not an optimal algorithm.

Suppose that a machine provides instructions that can access memory locations using the one-level indirect addressing scheme. What sequence of page faults is incurred when all of the pages of a program are currently nonresident and the first instruction of the program is an indirect memory-load operation? What happens when the operating system is using a per-process frame allocation technique and only two pages are allocated to this process?

Sequence of page faults: a page fault to access the instruction, a page fault to access the memory location containing the pointer to the target location, and a page fault when the target memory location itself is accessed. With only two frames allocated to the process, the OS generates these three page faults, with the third page replacing the page containing the instruction. If the instruction must be fetched again in order to restart the faulting instruction, this sequence of page faults continues indefinitely. If the instruction is cached in a register, it can complete execution after the third page fault.

Suppose that you want to use a paging algorithm that requires a reference bit (such as second-chance replacement or working-set model), but the hardware does not provide one. Sketch how you could simulate a reference bit even if one were not provided by the hardware, or explain why it is not possible to do so. If it is possible, calculate what the cost would be.

Simulating a reference bit: the paging algorithm needs a reference bit, but the hardware does not provide one. It can be simulated using the valid/invalid (protection) bit that the hardware does provide together with a software-maintained reference bit kept by the OS. At the start of each sampling interval, the OS clears the software reference bit for every page and marks the page invalid. The first access to such a page traps to the OS, which sets the software bit to 1 and marks the page valid again, so later accesses in that interval proceed at full speed. The cost is roughly one extra trap (a "fake" page fault) per referenced page per sampling interval, plus the work of re-protecting the pages each time the bits are cleared.
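A toy sketch of this simulation, with illustrative names standing in for the real protection mechanism:

```python
# Simulating a reference bit in software: protect every page at the start of each
# sampling interval; the first access traps, and the handler sets a software bit
# and unprotects the page. Class and function names are illustrative.

class Page:
    def __init__(self):
        self.protected = True      # valid bit deliberately cleared by the OS
        self.soft_ref = 0          # software-maintained "reference bit"

def start_interval(pages):
    for pg in pages.values():      # clear soft bits and re-protect all pages
        pg.protected, pg.soft_ref = True, 0

def access(pages, n):
    pg = pages[n]
    if pg.protected:               # this access traps into the OS ...
        pg.soft_ref = 1            # ... which records the reference
        pg.protected = False       # later accesses in this interval run at full speed
    # the normal memory access continues here

pages = {n: Page() for n in range(4)}
start_interval(pages)
for n in [0, 2, 2, 0]:
    access(pages, n)
print([pg.soft_ref for pg in pages.values()])   # [1, 0, 1, 0]
# Cost: roughly one extra trap per referenced page per sampling interval.
```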

How does the system detect thrashing?

The system detects thrashing by comparing CPU utilization with the degree of multiprogramming: if CPU utilization drops while the degree of multiprogramming is increased (and paging activity is high), the system is thrashing.

What happens when a page fault occurs, with respect to blocking?

The faulting process (or thread) must block while waiting for the page to be brought into main memory from disk.

Consider a system that allocates pages of different sizes to its processes. What are the advantages of such a paging scheme? What modifications to the virtual memory system provide this functionality?

A program may have a large code segment or use large arrays of data. These portions of the program could be allocated to larger pages, decreasing the memory overhead associated with the page table. The virtual memory system would then have to maintain multiple free lists of frames, one for each page size, and would also need more complex address-translation code to take the different page sizes into account.

What is thrashing?

Thrashing is a situation in which paging activity is very high. If a process is thrashing, it spends more time paging than executing.

Once it detects thrashing, what can the system do to eliminate this problem?

To eliminate thrashing, the system can immediately decrease the degree of multiprogramming, for example by suspending or swapping out one or more processes to free frames for the rest.

What is the copy-on-write feature, and under what circumstances is its use beneficial? What hardware support is required to implement this feature?

With copy-on-write, when two processes need the same program values (for example, a parent and the child it forks), it is economical to map a common set of pages into the virtual address spaces of both processes in a write-protected manner rather than copying them. When a write does take place, a copy of the affected page is made so the two processes can modify their own copies without interfering with each other. Its use is beneficial whenever shared pages are rarely or never written, such as immediately after a fork. The hardware support required is per-page write protection in the page table/MMU, so that a store to a protected shared page traps to the operating system, which can then perform the copy. Copy-on-write is used by Windows, Linux, and Solaris.

Is it possible for a process to have two working sets, one representing data and another representing code? Explain.

Yes; in fact, many processors provide separate instruction and data TLBs for this very reason. For example, the code being executed by a process may retain the same working set for a long period of time, while the data the code accesses changes, reflecting a change in the working set for data accesses.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. d. Decrease the degree of multiprogramming.

Yes. All the processes running in a multiprogramming environment share the same memory. If the degree of multiprogramming is decreased, each remaining process gets more frames, so it page-faults less often and spends more of its time executing rather than waiting on the paging disk, which improves CPU utilization.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. g. Add prepaging to the page-fetch algorithms.

Yes, usually. If a prepaging scheme is added to the page-fetch algorithms, pages are brought into memory before they are requested, so the CPU spends less time waiting for pages and utilization improves. The benefit is lost when the prepaged pages are not actually needed by the CPU, since fetching them wastes memory and disk bandwidth.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. f. Install a faster hard disk or multiple controllers with multiple hard disks.

Yes. The disk is accessed whenever a requested page is not in main memory. A faster hard disk, or multiple hard disks with multiple controllers, reduces the time needed to service each page fault, removing the disk bottleneck and increasing throughput to the disks, so processes spend less time blocked and CPU utilization improves.

For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers. e. Install more main memory.

Yes. Installing more main memory allows more pages of each process to remain resident, so working sets fit in memory, fewer page faults occur, and less swapping to the secondary disk is needed. With less time spent waiting on the paging disk, the CPU can spend more of its time executing programs, improving CPU utilization.

Assume that a program has just referenced an address in virtual memory. Describe a scenario in which each of the following can occur. (If no such scenario can occur, explain why.) • TLB miss with no page fault • TLB miss and page fault • TLB hit and no page fault • TLB hit and page fault

• TLB miss with no page fault: possible. The TLB did not contain the page-table entry, but the entry existed in the page table and was valid (the page is in memory).
• TLB miss and page fault: possible. The TLB did not contain the entry, and when the page table was consulted the entry was either invalid or the page was out on disk.
• TLB hit and no page fault: possible. The TLB contained the entry, and the page was valid and in memory.
• TLB hit and page fault: impossible. The TLB cannot hold an entry for a page that is not in memory; the OS updates the page table and removes the TLB entry when a page is flushed out to disk.
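A toy illustration of the three possible cases, using made-up page and frame numbers (None stands in for a valid page that is currently on disk):

```python
# Illustrating the possible outcomes with a toy TLB and page table.
# Page and frame numbers are illustrative.

tlb        = {5: 12}                       # page 5 cached -> frame 12
page_table = {5: 12, 7: 30, 9: None}       # page 9 is valid but currently on disk

def translate(page):
    if page in tlb:
        return f"TLB hit, frame {tlb[page]}"            # a hit implies no page fault
    if page_table.get(page) is not None:
        tlb[page] = page_table[page]                    # TLB miss, resolved in page table
        return f"TLB miss, no fault, frame {page_table[page]}"
    return "TLB miss and page fault"                    # not resident (or invalid)

print(translate(5))   # TLB hit and no page fault
print(translate(7))   # TLB miss with no page fault
print(translate(9))   # TLB miss and page fault
# A "TLB hit and page fault" case never appears: the OS removes a page's TLB
# entry when the page is evicted from memory.
```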

