TDT4186 Kap 8 - Review questions


How does the use of virtual memory improve system utilization?

1. More processes may be maintained in main memory. Because only some of the pieces of any particular process are loaded, there is room for more processes. This leads to more efficient utilization of the processor, because it is more likely that at least one of the more numerous processes will be in a Ready state at any particular time.

2. A process may be larger than all of main memory. One of the most fundamental restrictions in programming is lifted. Without virtual memory, a programmer must be acutely aware of how much memory is available. If the program being written is too large, the programmer must devise ways to structure it into pieces that can be loaded separately in some sort of overlay strategy. With virtual memory based on paging or segmentation, that job is left to the OS and the hardware. As far as the programmer is concerned, he or she is dealing with a huge memory, the size associated with disk storage. The OS automatically loads pieces of a process into main memory as required.

How is a page fault trap dealt with?

1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. (This usually blocks the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault is restarted from the beginning (as soon as this process gets another turn on the CPU).
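The steps above can be sketched in code. This is an illustrative toy model, not a real OS routine: the function names, the dict-based page table, and the `read_from_disk` callback are all assumptions made for the sketch.

```python
from collections import deque

class PageFaultError(Exception):
    """Raised when a process references an address outside its address space."""

def handle_page_fault(page_table, free_frames, valid_pages, page, read_from_disk):
    """Toy page-fault handler following the six steps above."""
    # Steps 1-2: validate the reference; an invalid reference terminates
    # the process (modeled here as an exception).
    if page not in valid_pages:
        raise PageFaultError(f"invalid reference to page {page}")
    # Step 3: locate a free frame from the free-frame list.
    frame = free_frames.popleft()
    # Step 4: schedule the disk read; the real process would block on I/O here.
    read_from_disk(page, frame)
    # Step 5: update the page table and mark the entry valid.
    page_table[page] = {"frame": frame, "valid": True}
    # Step 6 (restarting the faulting instruction) is done by the CPU,
    # not by this handler.
    return frame
```

In a real OS, step 3 may itself trigger page replacement when the free-frame list is empty; that case is omitted here.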

What is the relationship between FIFO and clock page replacement algorithms?

The clock algorithm is a variant of FIFO: the frames are arranged as a circular buffer with a pointer that advances past the oldest pages, so, like FIFO, it tends to replace pages in arrival order, but it gives recently used pages a second chance.

Use-bit clock: When a page needs to be replaced, the OS scans the buffer from the pointer for a frame with its use bit set to 0. Each time it encounters a frame with a use bit of 1, it resets that bit to 0 and continues on. If any frame has a use bit of 0 at the beginning of this scan, the first such frame encountered is chosen for replacement. If all frames have a use bit of 1, the pointer makes one complete cycle through the buffer, setting all the use bits to 0, and stops at its original position, replacing the page in that frame.

Use- and modify-bit clock: Each frame carries a use bit and a modify bit, (u, m). During the buffer scan:
1. First scan: replace the first (0, 0) frame found, without changing any bits.
2. If none is found, scan again for the first (0, 1) frame, setting u = 0 on each frame passed over; replace it.
3. If still none is found, all use bits are now 0; repeat step 1.
4. If necessary, repeat step 2; this time a (0, 1) frame will be found.

Why is it not possible to combine a global replacement policy and a fixed allocation policy?

A fixed allocation policy gives each process a fixed number of frames. With a global replacement policy, the OS might evict a page belonging to a different process than the one that faulted and demanded a page. The faulting process would then hold one more frame than its fixed allocation implies, and the other process one less, which contradicts the fixed allocation.

What is the difference between a resident set and a working set?

The resident set of a process is the set of the process's pages that are actually in main memory at any given time, while the working set is the set of pages the process has referenced during a given time interval. They need not be the same, as some pages in main memory may be "stale": resident, but not referenced recently.
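The working set is often written W(t, Δ): the set of distinct pages referenced in the last Δ references before time t. A small illustrative computation over a page-reference string (the function name is an assumption):

```python
def working_set(reference_string, t, delta):
    """Return W(t, delta): distinct pages among the last `delta`
    references before virtual time t (measured in references)."""
    window = reference_string[max(0, t - delta):t]
    return set(window)
```

A resident set that is larger than the current working set contains exactly the kind of stale pages mentioned above.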

What is demand paging?

Demand paging brings a page into main memory only when it is actually referenced, rather than loading pages in advance.

Paging vs Segmentation

See Table 8.2 - Characteristics of Paging and Segmentation

What is the difference between demand cleaning and precleaning?

A cleaning policy is the opposite of a fetch policy: it is concerned with determining when a modified page should be written out to secondary memory. With demand cleaning, a page is written out to secondary memory only when it has been selected for replacement. A precleaning policy writes modified pages before their page frames are needed, so that pages can be written out in batches.

What is the purpose of a translation lookaside buffer?

The purpose of a TLB is to speed up address translation. Every virtual memory reference requires a page table lookup to find the frame number, so doing that lookup in main memory each time would roughly double memory access time. The Translation Lookaside Buffer (TLB) is a special high-speed cache holding the most recently used page table entries. Given a virtual address, the processor first examines the TLB: if the entry is present (TLB hit), the frame number is retrieved and the real address is formed directly. If the entry is not found (TLB miss), the page number is used to index the process's page table. If that entry shows the page is in main memory, the TLB is updated with the new entry and translation proceeds; if the page is not in main memory, a page fault is issued, the page is brought in, and the TLB is then updated.
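The lookup sequence can be sketched as follows. This is a simplification: a real TLB is associative hardware with a limited number of entries and an eviction policy, while the dict here grows without bound, and the 4 KiB page size is an assumption.

```python
PAGE_SIZE = 4096  # assumed page size for the sketch

def translate(vaddr, tlb, page_table):
    """Return (physical_address, tlb_hit) for a virtual address."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:
        # TLB hit: form the real address directly from the cached frame.
        return tlb[page] * PAGE_SIZE + offset, True
    # TLB miss: index the process page table with the page number.
    frame = page_table.get(page)
    if frame is None:
        # Page not in main memory: a page fault would be issued here.
        raise KeyError(f"page fault on page {page}")
    tlb[page] = frame  # update the TLB with the new entry
    return frame * PAGE_SIZE + offset, False
```

A second reference to the same page then hits in the TLB and skips the page table walk, which is exactly the saving the TLB provides.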

Which considerations determine the size of a page?

Several factors determine the best page size.

Page size versus page table size: A system with a smaller page size uses more pages, requiring a page table that occupies more space. A multi-level paging scheme can decrease the memory cost of allocating a large page table for each process by further dividing the page table into smaller tables, effectively paging the page table itself.

Page size versus TLB usage: Since every access to memory must be mapped from a virtual to a physical address, reading the page table every time can be quite costly, so a very fast cache, the Translation Lookaside Buffer (TLB), is used. The TLB is of limited size, and when it cannot satisfy a given request (a TLB miss) the page tables must be walked (either in hardware or software, depending on the architecture) for the correct mapping. Larger page sizes mean a TLB of the same size covers a larger amount of memory, avoiding costly TLB misses.

Internal fragmentation: Processes rarely require an exact number of pages, so the last page will likely be only partially full, wasting some memory. Larger page sizes lead to more wasted memory, as more potentially unused memory is loaded into main memory. Smaller page sizes ensure a closer match to the actual amount of memory required.

Page size versus disk access: When transferring from a rotational disk, much of the delay is seek time, the time it takes to position the read/write heads above the platters. Because of this, large sequential transfers are more efficient than several smaller ones, so transferring the same amount of data often takes less time with larger pages.
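The page-table-size and internal-fragmentation trade-offs can be made concrete with rough numbers (the function name is illustrative; on average about half a page is wasted per process in the last page):

```python
def page_stats(process_bytes, page_size):
    """Return (page_table_entries, internal_fragmentation_bytes)
    for a process of the given size with the given page size."""
    pages, remainder = divmod(process_bytes, page_size)
    if remainder:
        pages += 1  # the last, partially filled page
    wasted = pages * page_size - process_bytes  # internal fragmentation
    return pages, wasted
```

For a 10,000-byte process, 4 KiB pages need 3 page table entries but waste 2,288 bytes, while 1 KiB pages need 10 entries but waste only 240 bytes: smaller pages mean bigger page tables and less fragmentation, and vice versa.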

Explain thrashing.

Thrashing is a state in which the system spends most of its time swapping pages between main memory and disk rather than executing instructions: the OS keeps evicting pages just before they are referenced again, so nearly every memory reference causes a page fault and throughput collapses.

Why is the principle of locality crucial to the use of virtual memory?

Keeping as many processes as possible resident in main memory is a key aspect of system performance. Virtual memory achieves this by loading only pieces of each process, and this works only because of the principle of locality: over a short period of time, a process's references cluster around a few of its pieces, so the assumption that only a few pieces are needed at a time tends to hold. Without locality, pages would constantly be evicted and faulted back in, and the system would thrash.

What are the drawbacks of using either only a precleaning policy or only a demand cleaning policy?

With demand cleaning, a page is written out to secondary memory only when it has been selected for replacement. A precleaning policy writes modified pages before their page frames are needed, so that pages can be written out in batches. Both have drawbacks:

- With precleaning, a page is written out but remains in main memory until the page replacement algorithm dictates that it be removed. Precleaning allows the writing of pages in batches, but many of those pages may be modified again before they are replaced, making the earlier writes wasted.
- With demand cleaning, the writing of a dirty page is coupled to, and precedes, the reading in of a new page. This minimizes page writes, but a process that suffers a page fault may have to wait for two page transfers before it can be unblocked, which may decrease processor utilization.

A better approach incorporates page buffering: clean only pages that are replaceable, but decouple the cleaning and replacement operations. With page buffering, replaced pages are placed on two lists: modified and unmodified. Pages on the modified list are periodically written out in batches and moved to the unmodified list. A page on the unmodified list is either reclaimed if it is referenced again, or lost when its frame is assigned to another page.
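The two-list page-buffering idea above can be sketched as follows; the class and method names are assumptions made for the sketch, not any real kernel's API:

```python
from collections import deque

class PageBuffer:
    """Toy model of page buffering with modified/unmodified lists."""

    def __init__(self):
        self.modified = deque()    # replaced dirty pages awaiting a batch write
        self.unmodified = deque()  # replaced clean pages, reclaimable

    def release(self, page, dirty):
        """A replaced page goes on the modified or unmodified list."""
        (self.modified if dirty else self.unmodified).append(page)

    def flush_batch(self):
        """Write all dirty pages out in one batch; they become clean."""
        written = list(self.modified)
        self.unmodified.extend(self.modified)
        self.modified.clear()
        return written

    def reclaim(self, page):
        """A referenced page still on the unmodified list is reclaimed
        without any disk I/O; return whether the reclaim succeeded."""
        if page in self.unmodified:
            self.unmodified.remove(page)
            return True
        return False
```

The batch write in `flush_batch` is what makes precleaning cheap here, while `reclaim` is what spares a faulting process the second page transfer of pure demand cleaning.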

