OS Ch. 8: Virtual Memory


advantages of virtual memory

- abstracts main memory into a large, uniform array of storage
- allows programs to be larger than physical memory
- allows processes to share files easily and to implement shared memory (by mapping the same physical frames into multiple processes' page tables)

LRU (least recently used) page replacement

- replace the page that has not been used for the longest period of time
how to implement LRU?
1. counters: record the time of each page's last use
2. stack of page numbers: whenever a page is referenced, it is removed from the stack and placed on top, so the least recently used page is always at the bottom
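The stack implementation can be sketched in a few lines of Python; here an `OrderedDict` stands in for the stack (the function name and the idea of counting faults over a reference string are illustrative, not from the notes):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Simulate LRU replacement; return the number of page faults.

    The OrderedDict plays the role of the stack: the most recently
    referenced page is moved to the end, so the least recently used
    page is always at the front."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: move to top of stack
        else:
            faults += 1                     # page fault
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults
```

On the classic reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, this yields 12 faults.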

FIFO page replacement

-when a page must be replaced, the oldest page is chosen
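A minimal FIFO simulation for comparison (function name and fault-counting framing are illustrative):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Simulate FIFO replacement; return the number of page faults."""
    queue = deque()     # oldest resident page at the left
    resident = set()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(queue) == nframes:
                resident.discard(queue.popleft())  # evict the oldest page
            queue.append(page)
            resident.add(page)
    return faults
```

On the same reference string as above (7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1) with 3 frames, FIFO incurs 15 faults versus 12 for LRU.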

Two methods for allocating memory to kernel processes? Why must kernel memory be allocated differently from user-mode processes?

1. buddy system 2. slab allocation
Kernel memory is allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes. Two primary reasons:
1. the kernel requests memory for data structures of varying sizes, some of which are less than a page in size, so paging is inappropriate (it would waste memory to fragmentation)
2. user pages need not be physically contiguous, but the kernel sometimes requires memory residing in physically contiguous pages (e.g. for hardware devices that interact directly with physical memory)

a slab may be in one of three possible states

1. full 2. empty 3. partial
allocation order:
1. the slab allocator first attempts to satisfy the request with a free object in a partial slab
2. if no partial slab is available, a free object is assigned from an empty slab
3. if no empty slab is available, a new slab is allocated from contiguous physical pages and assigned to the cache

how to prevent another process from paging out a memory frame containing an I/O buffer?

1. perform I/O only between system (kernel) memory and the I/O device, copying data to and from user memory separately (high overhead from the extra copy)
2. allow pages to be locked into memory: if a frame is locked, it cannot be selected for replacement

hardware to support demand paging

1. page table - translates virtual addresses to physical frame addresses in memory
2. secondary memory - swap space; holds pages not present in main memory. this is usually a high-speed disk

Describe buddy system allocation What happens if the kernel requests 21 KB of memory from a 256 KB segment?

The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. A memory request is rounded up to the next highest power of 2, so a 21 KB request is rounded to 32 KB. The 256 KB segment is divided into two 128 KB buddies; one of the 128 KB buddies is divided into two 64 KB buddies; one of the 64 KB buddies is divided into two 32 KB buddies; and one of the 32 KB buddies is used to satisfy the request.
con: internal fragmentation still occurs (here, 32 KB - 21 KB = 11 KB is wasted)
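The splitting sequence for the 21 KB example can be traced with a small sketch (function name and return format are illustrative):

```python
def buddy_splits(segment_kb, request_kb):
    """Trace buddy-system splitting: round the request up to the next
    power of 2, then repeatedly halve the segment until the next split
    would be smaller than the rounded request. Returns the list of
    block sizes produced (ending with the block actually allocated)
    and the allocation size."""
    alloc = 1
    while alloc < request_kb:
        alloc *= 2                      # round up to a power of 2
    sizes = [segment_kb]
    while sizes[-1] // 2 >= alloc:
        sizes.append(sizes[-1] // 2)    # split one buddy in half
    return sizes, alloc
```

For a 21 KB request from a 256 KB segment this produces the split sequence 256, 128, 64, 32 and allocates 32 KB.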

what is thrashing? How to solve thrashing? (locality model of process execution)

a process is thrashing if it spends more time paging than executing

summary: if CPU utilization is too low, the OS increases the degree of multiprogramming by introducing a new process into the system. This process, upon page faulting, takes frames away from other processes. Those processes begin to page fault as well and take frames from still other processes. These faulting processes--which are blocked waiting on the paging device rather than running--drive CPU utilization down further; the scheduler sees the decreasing CPU utilization and further increases the degree of multiprogramming --> self-reinforcing cycle of increasing page faults.

Limit/prevent thrashing by using the locality model of process execution. The locality model states that as a process executes, it moves from locality to locality (locality = a set of pages that are used together). If we allocate enough frames to a process to accommodate its current locality, page faulting will be reduced (the process can keep in memory all the pages that it is actively using).

describe copy on write

allows parent and forked child process to initially share the same pages. If either process writes to a shared page, a copy of the shared page is created and the write is applied to the copy.
examples of when page writes might occur: when the stack or heap must expand

what is a stack algorithm?

an algorithm for which it can be shown that the set of pages in memory for n frames is always a subset of the set of pages that would be in memory with n + 1 frames. for LRU, the set of pages in memory would be the n most recently referenced pages. If the number of frames is increased, these n pages will still be the most recently referenced and so will still be in memory. stack algorithms therefore cannot suffer from Belady's anomaly (more frames can never cause more faults).

what is the working-set model? what happens if there are not enough frames for each process to have WORKING_SET_WINDOW number of frames?

define a number WORKING_SET_WINDOW; the set of pages in the most recent WORKING_SET_WINDOW page references is the working set. if a page is in active use, it is in the working set; if it is no longer being used, it will drop from the working set WORKING_SET_WINDOW time units after its last reference.

if there are not enough frames for each process to hold its working set, a process is suspended and restarted later when frames free up. this prevents thrashing while keeping the degree of multiprogramming as high as possible.
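The definition can be made concrete with a sliding-window sketch (function name and parameters are illustrative):

```python
def working_set(refs, t, window):
    """Return the working set at time t: the distinct pages among the
    most recent `window` references (a sliding-window approximation
    of WORKING_SET_WINDOW)."""
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])
```

For example, with `refs = [1, 2, 1, 3, 4, 4, 3]` and a window of 4, the working set at time 6 is `{3, 4}`: pages 1 and 2 have dropped out because they were last referenced more than 4 references ago.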

what does the dirty bit do?

the dirty (modify) bit is set by the hardware whenever any byte in the page is written into, indicating that the page has been modified. when we select a page for replacement, we examine its dirty bit. if the bit is set, the page has been modified since it was read in from the disk, so we must write the page back to the disk. if the bit is not set, the page has not been modified since it was read into memory, so we need not write it back--we only need to read the new page's data from disk. this can halve the I/O cost of a replacement.

memory-mapped I/O

each I/O controller includes registers to hold commands and data being transferred. in memory-mapped I/O, virtual memory addresses are mapped to device registers, so that reads and writes to these virtual memory addresses cause the data to be transferred to and from the device registers very quickly

how does page replacement work?

if no frame is free, we find a frame that is not currently being used and free it. how to free a frame? write the victim frame's contents to swap space and change the page table to indicate that the page is no longer in memory (the page table itself remains resident). we can now use the freed frame to hold the page for which the process faulted.

in other words:
1. find the location of the desired page on the disk
2. find a free frame:
- if there is a free frame, use it
- if there is no free frame, page replacement is necessary: use a page-replacement algorithm to select a victim frame, write the victim frame to the disk, and change the valid-invalid bit in the page table and update the frame table
3. read the desired page's data from disk into the newly freed frame; change the page and frame tables
4. continue the user process from where the page fault occurred

two page transfers are required (one out: write the victim frame to disk; one in: read the desired page's data from disk). page replacement allows an enormous virtual memory to be provided to programmers on a smaller physical memory.
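The steps above can be sketched as a toy simulator that counts the two kinds of page transfers; for simplicity it selects victims FIFO and, as a worst case, writes every victim back (i.e. it assumes no dirty bit). Names are illustrative:

```python
from collections import deque

def simulate(refs, nframes):
    """Toy demand-paging simulator: returns (disk_reads, disk_writes).
    Each fault costs one read; each eviction costs one write-back."""
    frames = deque()
    reads = writes = 0
    for page in refs:
        if page not in frames:
            reads += 1                    # read the desired page from disk
            if len(frames) == nframes:    # no free frame: replacement needed
                frames.popleft()          # select a victim (FIFO here)
                writes += 1               # write the victim to swap space
            frames.append(page)
    return reads, writes
```

With 3 frames and references 1, 2, 3, 1, 4 there are four faults but only one eviction, so the result is 4 reads and 1 write.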

LRU-approximation page replacement

in each page-table entry, the reference bit is set by the hardware whenever that page is referenced. After some time, we can determine which pages have been used and which have not by examining the reference bits, although we do not know the order of use.
a finer approximation keeps eight additional bits of history per page: at regular intervals, the OS shifts each page's history byte right by one and copies the reference bit into the high-order bit, so the byte records use over the last eight periods; the page with the lowest value is the LRU approximation.
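One timer tick of the eight-bit (additional-reference-bits) scheme can be sketched as follows (function name and dict-based representation are illustrative):

```python
def age_pages(history, ref_bits):
    """One timer tick of the additional-reference-bits algorithm:
    shift each page's 8-bit history right by one and copy its current
    reference bit into the high-order bit. The page with the smallest
    history value is the best LRU approximation."""
    return {p: ((history[p] >> 1) | (ref_bits[p] << 7)) & 0xFF
            for p in history}
```

A page referenced in the current period jumps to at least 0b10000000, while an unreferenced page's history decays toward zero.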

what does the frame table hold?

information about which frames are free and which are allocated, and for each allocated frame, which page (of which process) occupies it

what does the memory management unit do?

maps logical pages to physical frames in memory

what is the optimal page-replacement algorithm?

replace the page that will not be used for the longest period of time. (unfortunately, this requires future knowledge of the reference string, so it cannot be implemented; it is used as a benchmark against which other algorithms are compared)
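Although OPT cannot be implemented online, it can be simulated over a known reference string, which is how the benchmark numbers are produced (function name is illustrative):

```python
def optimal_faults(refs, nframes):
    """Simulate OPT: on a fault with no free frame, evict the resident
    page whose next use lies farthest in the future (pages never used
    again are evicted first)."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # next future reference
                except ValueError:
                    return float('inf')           # never used again
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults
```

On the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, OPT incurs only 9 faults, versus 12 for LRU and 15 for FIFO.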

second-chance algorithm enhanced second-chance algorithm

second-chance algorithm: a FIFO replacement algorithm in which, when a page is selected, we inspect its reference bit. if the value is 0, we replace the page; if the reference bit is 1, we give the page a second chance: clear its reference bit, reset its arrival time (move it to the back of the queue), and move on to the next FIFO page. thus, a page that is given a second chance will not be replaced until all other pages have been replaced (or given second chances).

enhanced second-chance algorithm: consider the reference bit and the modify bit as an ordered pair (ref, mod):
1. (0, 0) - neither recently used nor modified = best page to replace
2. (0, 1) - not recently used but modified = page needs to be written out before replacement
3. (1, 0) - recently used but clean = probably will be used again soon
4. (1, 1) - recently used and modified = probably will be used again soon, and the page will need to be written out to disk before it can be replaced
we replace the first page encountered in the lowest nonempty class.
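The second-chance victim selection can be sketched as follows (function name, queue representation, and return format are illustrative):

```python
from collections import deque

def second_chance(pages, ref_bits):
    """Select a victim by the second-chance algorithm. `pages` is the
    FIFO queue (oldest first); `ref_bits` maps page -> reference bit.
    Pages with bit 1 get the bit cleared and move to the back of the
    queue; the first page found with bit 0 is the victim."""
    queue = deque(pages)
    while True:
        page = queue.popleft()
        if ref_bits[page] == 0:
            return page, list(queue)   # victim and the remaining queue
        ref_bits[page] = 0             # second chance: clear the bit,
        queue.append(page)             # move to the back of the queue
```

With queue [1, 2, 3] and reference bits {1: 1, 2: 0, 3: 1}, page 1 is given a second chance and moved to the back, and page 2 becomes the victim. (If every bit were 1, the loop would clear them all and degenerate into plain FIFO, as the notes describe.)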

With shared memory, how do different processes know what frame the desired data is in?

the OS maps the shared region into each process's page table, so each process's page-table entries for that region point to the same physical frames (the virtual addresses may differ between processes). shared libraries are typically implemented this way.

Describe slab allocation How does slab allocation eliminate fragmentation?

slab = one or more physically contiguous pages
cache = one or more slabs, dedicated to a single kernel data structure

when a cache is created, a number of objects (marked as "free") are allocated to the cache; the number of objects depends on the size of the associated slab. when a new object for a kernel data structure is needed, the allocator assigns any free object from the cache to satisfy the request and marks the object as used. e.g. when the Linux kernel creates a new task, it requests the necessary memory for the task_struct object from the task_struct cache.

slab allocation eliminates fragmentation because each kernel data structure has an associated cache, and each cache is made up of one or more slabs that are divided into chunks exactly the size of the objects being represented, so no memory is wasted.
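The partial-then-empty-then-new allocation order (described in the slab-states card above) can be sketched as a toy cache; the class, its fields, and slot-index return values are illustrative, not the kernel's API:

```python
class Cache:
    """Toy slab cache for fixed-size objects. Each slab is modeled as
    a count of used object slots out of `objs_per_slab`."""
    def __init__(self, objs_per_slab):
        self.objs_per_slab = objs_per_slab
        self.slabs = []                          # used-slot count per slab

    def alloc(self):
        """Allocate one object; return the index of the slab used."""
        # 1. prefer a free object in a partial slab
        for i, used in enumerate(self.slabs):
            if 0 < used < self.objs_per_slab:
                self.slabs[i] += 1
                return i
        # 2. otherwise take an object from an empty slab
        for i, used in enumerate(self.slabs):
            if used == 0:
                self.slabs[i] = 1
                return i
        # 3. otherwise allocate a new slab of contiguous physical pages
        self.slabs.append(1)
        return len(self.slabs) - 1
```

With 2 objects per slab, three allocations fill slab 0 and then force a new slab 1, mirroring the three-step order above.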

what is a NUMA system?

systems in which memory access times vary significantly (depending on which CPU accesses which region of memory) are collectively known as non-uniform memory access (NUMA) systems

How is an executable program loaded from disk into memory using demand paging?

the page table, covering the entire virtual address space, always exists; the memory contents of pages are loaded only when they are demanded during program execution. pages that are never accessed (such as code that is never run) are never loaded into physical memory.

what is memory mapping a file?

treating file I/O as routine memory access. memory mapping = allowing a part of a process's virtual address space to be associated with a file on disk (mapping a disk block to a page in memory). A memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource.

what happens when over-allocation of memory occurs?

a user process is executing; a page fault occurs; the OS finds the desired page on disk, but finds that there are no free frames on the free-frame list--all memory is in use.
solution: page replacement

how to distinguish between pages in memory and pages on the disk?

valid-invalid bit: when bit is set to "valid", the associated page is legal and in memory. if the bit is "invalid", the page is either not valid (i.e. not in the logical address space of the process) or is valid but currently on the disk.

what is a sparse address space? why is using a sparse address space beneficial?

virtual address spaces that include holes. holes can be filled as the stack or heap segments grow, or if we wish to dynamically link libraries during program execution; only the pages actually used consume physical memory.

What is vfork()?

virtual memory fork - intended to be used when the child calls exec() immediately after creation. with vfork(), the parent process is suspended and the child process uses the address space of the parent. vfork() does not use copy-on-write; rather, the child is able to modify the parent's pages, and those changes will be visible to the parent once it resumes.

what is a page fault? how does OS handle a page fault?

a page fault occurs when a process tries to access a page that has not been brought into memory (and is marked as invalid). the paging hardware detects that the invalid bit is set and traps to the OS. the OS then:
1. checks whether the page reference was a valid or an invalid memory access (e.g. an address between the stack and heap that belongs to neither)
2. if the reference was invalid, terminates the process; if it was within the virtual address space but the page is not in memory, brings the page in:
3. finds a free frame in physical memory (by taking one from the free-frame list/frame table)
4. schedules a disk operation to read the desired page into the newly allocated frame
5. when the disk read is complete, marks in the page table that the page is now in memory (sets the valid bit)
6. restarts the interrupted instruction

