Systems II: Virtual Memory
Benefits of virtual memory
Allows for more efficient process creation. Virtual memory can be implemented via demand paging, demand segmentation, or a combination of the two.
What is demand paging
Bring a page into memory only when it is needed. Benefits include less I/O, less memory needed, faster response, and support for more processes.
Implementations of LRU
Counter implementation - every page-table entry has a counter; every time the page is referenced through that entry, copy the clock into the counter. When a page needs to be replaced, examine the counters and pick the page with the oldest value. Stack implementation - keep a stack of page numbers as a doubly linked list. When a page is referenced, move it to the top; the least recently used page is always at the bottom.
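The stack implementation can be sketched with Python's ordered dictionary standing in for the doubly linked list (the function name and interface are my own, for illustration only):

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU using an ordered map as the 'stack':
    the most recently used page sits at the end, the LRU victim at the front."""
    frames = OrderedDict()  # page -> None, ordered by recency of use
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # referenced: move to top of stack
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults
```

For the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames this counts 10 faults.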
Frame allocation algorithms
Each process needs a minimum number of frames for efficient execution and to avoid thrashing.
Types of frame allocation algorithms
Equal allocation - if there are 100 frames and 5 processes, give each process 20 frames. Proportional allocation - allocate according to the size of each process. Priority allocation - use a proportional allocation scheme using priorities rather than size.
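Proportional allocation can be sketched as follows (a minimal illustration, assuming leftover frames from rounding go to the largest processes; the function name is not from the source):

```python
def proportional_allocation(total_frames, sizes):
    """Allocate frames to processes in proportion to their sizes."""
    total = sum(sizes)
    alloc = [total_frames * s // total for s in sizes]
    # Integer division under-allocates; hand leftovers to the largest processes.
    leftover = total_frames - sum(alloc)
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i])[:leftover]:
        alloc[i] += 1
    return alloc
```

With 62 frames and process sizes 10 and 127, the shares come out to roughly 4 and 58 frames. Priority allocation is the same computation with priorities substituted for sizes.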
Page fault frequency allocation
Establish an acceptable page-fault rate. If the actual rate is too low, the process may lose frames; if it is too high, the process should gain frames. If no free frame is available, swap out one of the processes.
Page replacement algorithm
Global replacement - the OS selects a replacement frame from the set of all frames. Local replacement - the OS selects a frame only from the process's own set of allocated frames. We want the lowest page-fault rate.
Thrashing
If a process does not have enough pages in memory, the page-fault rate is very high. This leads to low CPU utilization; the OS may then think it needs to increase the degree of multiprogramming, so another process is added to the system, causing an even higher page-fault rate. Thrashing = a process is busy swapping pages in and out, i.e. high paging activity.
What is a page fault
The first reference to a page that is not in memory always causes a page fault: the hardware traps to the OS. If the reference is valid but the page is simply not resident, the page is loaded into a free frame, the page table is updated, and the faulting instruction is restarted.
Counting page replacement algorithm
Keep a counter on the number of references that have been made to each page.
Example of Demand paging performance
Memory access = 100 ns; page fault time = 20 ms = 20,000,000 ns. For less than 10% degradation: 110 >= (1 - p) x 100 + p x 20,000,000, which gives p < 0.0000005.
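The bound on p can be checked numerically; a small sketch (the function name and parameters are my own, not from the source):

```python
def max_fault_rate(mem_ns, fault_ns, max_slowdown):
    """Largest page-fault probability p keeping the effective access time
    within the slowdown: (1 + max_slowdown) * mem_ns >= (1 - p) * mem_ns + p * fault_ns."""
    return max_slowdown * mem_ns / (fault_ns - mem_ns)

# 100 ns access, 20 ms fault, at most 10% degradation:
p = max_fault_rate(100, 20_000_000, 0.10)  # about 5e-7
```

In words: to keep degradation under 10%, fewer than one access in two million may fault.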
Notes about performance of demand paging
Memory access is measured in nanoseconds while a page fault takes milliseconds, so even a tiny page-fault rate dominates the effective access time.
Tables in Memory managment techniques
One-level paging - one page table per process. Two-level paging - two levels of page tables per process. Inverted page table - one page table for the whole system. Pure segmentation - one segment table per process. Segmentation with one-level paging - one segment table per process, plus one page table for each entry in the segment table. Segmentation with two-level paging - one segment table per process, plus two levels of page tables for each entry in the segment table.
What is virtual memory
Only parts of the program need to be in memory for program execution. Logical address space can therefore be much larger than physical address space. Allows the degree of multiprogramming to be higher.
What is page replacement
The page-fault service routine includes a page-replacement algorithm: find the location of the desired page on disk; find a free frame (if none is free, use the replacement algorithm to select a victim frame); read the desired page into the frame; and update the page table.
OTher considerations
Page replacement and frame allocation are crucial for efficient virtual memory.
Working set model allocation
Paging works because of the locality model: processes migrate from one locality to another, and localities may overlap. If the total demand of all working sets exceeds the number of frames in memory, we get thrashing.
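The working set WS(t, Δ) is the set of pages referenced in the most recent Δ references up to time t; a minimal sketch (names are illustrative, not from the source):

```python
def working_set(reference_string, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (0-indexed) -- the working set WS(t, delta)."""
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)
```

For the reference string 1,2,1,3,4,4,4 with Δ = 3 at t = 6, the working set is just {4}: the process has settled into a new locality.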
LRU approximation algorithms
Reference-bit algorithm - each page has an associated bit, set to 1 when the page is referenced. Replace a page with bit 0 if one exists; if all bits are set, clear all of them. Additional-reference-bits algorithm - periodically, each reference bit is shifted into the high-order bit of a shift register; the registers thus contain the history of page use for the last eight time periods, and the page with the lowest unsigned number is replaced. Second-chance (clock) algorithm - replaces, in clock order, a page whose reference bit = 0. If the page at the clock hand has reference bit 1, clear the bit and leave the page in memory. Enhanced second-chance algorithm - also uses a modify bit together with the reference bit.
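The second-chance (clock) algorithm can be sketched as follows (a minimal simulation under my own naming, not a definitive implementation):

```python
def second_chance_faults(reference_string, num_frames):
    """Second-chance (clock) replacement: on a fault, sweep the clock hand,
    clearing reference bits, until a page with bit 0 is found to evict."""
    frames = []   # each entry is [page, reference_bit]
    hand = 0
    faults = 0
    for page in reference_string:
        for entry in frames:
            if entry[0] == page:
                entry[1] = 1                      # hit: set the reference bit
                break
        else:
            faults += 1
            if len(frames) < num_frames:
                frames.append([page, 1])          # free frame available
            else:
                while frames[hand][1] == 1:       # bit set: give a second chance
                    frames[hand][1] = 0
                    hand = (hand + 1) % num_frames
                frames[hand] = [page, 1]          # evict victim, install new page
                hand = (hand + 1) % num_frames
    return faults
```

When every resident page has its bit set, the hand sweeps all the way around and the algorithm degenerates into FIFO.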
LRU algorithm
Replace the page that has not been used for the longest period of time (there is no efficient exact implementation).
FIFO algorithm
Replace the oldest page a.k.a. the page that has been in memory for the longest time.
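A FIFO fault-counting sketch (names are my own, for illustration):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: evict the page that has
    been resident longest, regardless of how recently it was used."""
    frames = deque()       # pages in order of arrival, oldest at the left
    resident = set()       # fast membership test
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(frames) == num_frames:
                resident.discard(frames.popleft())  # evict the oldest page
            frames.append(page)
            resident.add(page)
    return faults
```

The reference string 1,2,3,4,1,2,5,1,2,3,4,5 exhibits Belady's anomaly under FIFO: 9 faults with 3 frames but 10 faults with 4 frames.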
Optimal page replacement algorithm
Replaces the page that will not be used for the longest period of time.
What to do about thrashing
Suspend one process, then check again whether demand exceeds the number of frames in memory.
MFU algorithm
Replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
What locality makes demand paging efficient
Locality of reference is important for demand paging. In a paging environment it can be stated as follows: most programs tend to make a large number of references to a small number of pages, not the other way around. Thus only a small fraction of the pages are heavily accessed and the rest are barely used, if at all. If those active pages are loaded into main memory, only a few page faults will occur.
Performance of Demand paging
p = probability of a page fault (0 <= p <= 1). EAT = (1 - p) x memory access + p x page fault time, where page fault time = page fault overhead + time to swap the page in + restart overhead.
LFU algorithm
Replaces the page with the smallest count, on the argument that an actively used page should have a large reference count.