Quiz 5


segmentation allows each table to grow or shrink independently; each segment consists of a linear

sequence of addresses (starting at 0 and going up to some maximum value).

Protection

◼ can be done just like in a paging system: protection associated with pages, or lock-and-key associated with frames.

segmentation does not need software bounds checks

(1, x) doesn't exist if x is larger than the segment. Because this check is done in hardware, software checks are no longer necessary.

when a page is to be evicted from memory, does it have to be one of the faulting process' own pages, or can it be a page belonging to another process?

If it must be one of its own pages (local replacement), we are effectively limiting each process to a fixed number of pages; if it can be any page (global replacement), we are not.

Clock Page Replacement Algo

A better approach is to keep all the page frames on a circular list in the form of a clock. The hand points to the oldest page.

Thrashing: a process is busy swapping pages in and out (disk activity) rather than doing useful computation (CPU activity).

◼ It has been observed that as page frames per VM space decrease, the page fault rate increases. If a process does not have "enough" page frames, the page-fault rate is very high.

Page Fault Handling

1. Hardware traps to kernel
2. General registers saved
3. OS determines which virtual page is needed
4. OS checks validity of address, seeks page frame (page (re)placement policy)
5. If selected frame to replace is dirty (modified), write it to disk
6. OS schedules a disk operation to bring new page in from disk
7. After transfer is complete (indicated by an interrupt), page tables are updated
8. Faulting instruction backed up to when it began (complications can arise: see following slides)
9. Faulting process scheduled (from suspended queue to ready queue)
10. Registers restored (context switch when process is scheduled)
11. Program continues from instruction that generated page fault.

Segmented memory address translation steps

1. Look up the VA in the TLB
2. If TLB hit ➔ get PA from TLB; done
3. If TLB miss ➔ look up the VA in the segment table (ST)
   1. If valid bit = 1:
      1. If offset < bounds for this segment, then PA = physical segment address + offset
      2. If offset > bounds, raise a segmentation-violation exception
   2. If valid bit = 0, raise a segment-fault exception
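These segmented translation steps can be sketched as a minimal Python simulation (the table layout, function names, and the TLB-as-dict are illustrative assumptions, not from the slides):

```python
# Hypothetical sketch of segmented address translation.
# Segment table entries are (valid, base, bounds) tuples.

def translate_segmented(seg, offset, seg_table, tlb):
    """Translate a (segment, offset) VA to a physical address."""
    if (seg, offset) in tlb:                        # steps 1-2: TLB hit
        return tlb[(seg, offset)]
    valid, base, bounds = seg_table[seg]            # step 3: TLB miss -> ST
    if not valid:
        raise LookupError("segment fault")          # valid bit = 0
    if offset >= bounds:
        raise ValueError("segmentation violation")  # offset outside segment
    pa = base + offset                              # physical base + offset
    tlb[(seg, offset)] = pa                         # cache the translation
    return pa
```

For example, with a segment table entry (valid=True, base=4096, bounds=512), translating (1, 100) yields physical address 4196.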

Paged memory address translation steps

1. Look up the VA in the TLB
2. If TLB hit ➔ get PA from TLB; done
3. If TLB miss ➔ look up the VA in the page table (PT)
   1. If valid bit = 1, then get PA from PT; done
   2. If valid bit = 0, raise a page-fault exception
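The paged translation steps admit a similar sketch (PAGE_SIZE, the dict-based tables, and the names are assumptions for illustration):

```python
# Hypothetical sketch of paged address translation.
# Page table entries are (valid, frame) tuples.
PAGE_SIZE = 4096

def translate_paged(va, page_table, tlb):
    """Translate a virtual address to a physical address."""
    vpn, offset = divmod(va, PAGE_SIZE)   # split VA into page number + offset
    if vpn in tlb:                        # TLB hit
        return tlb[vpn] * PAGE_SIZE + offset
    valid, frame = page_table[vpn]        # TLB miss -> page table
    if not valid:
        raise LookupError("page fault")   # valid bit = 0
    tlb[vpn] = frame                      # cache VPN -> frame for next time
    return frame * PAGE_SIZE + offset
```

With VA 4100 (page 1, offset 4) mapped to frame 7, the result is 7 × 4096 + 4 = 28676.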

Four times when OS involved with paging

1. Process creation
2. Process execution
3. Page fault time
4. Process termination time

Segmentation: VM so far is one-dimensional. It may be better to have

two or more separate virtual address spaces, e.g. a compiler's tables, with stacks growing and shrinking in unpredictable ways during compilation.

THE LRU Page Replacement Algo

A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again soon. Conversely, pages that have not been used for ages will probably remain unused for a long time. This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. This strategy is called LRU (Least Recently Used) paging.
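The LRU idea above can be sketched with an ordered dictionary tracking recency (a minimal simulation; the function name and fault-counting interface are assumptions):

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Count page faults for LRU replacement over a reference string."""
    frames = OrderedDict()   # insertion order tracks recency of use
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults
```

For the reference string 1, 2, 3, 1, 4 with 3 frames: the hit on 1 keeps it recent, so page 2 (the least recently used) is evicted when 4 arrives, giving 4 faults in total.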

The Second Chance Page Replacement Algo

A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to inspect the R bit of the oldest page. If it is 0, the page is both old and unused, so it is replaced immediately. If the R bit is 1, the bit is cleared, the page is put onto the end of the list of pages, and its load time is updated as though it had just arrived in memory. Then the search continues. This algorithm is called second chance.
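The second-chance scan can be sketched as follows (a minimal model: the deque of (page, R-bit) pairs and the function name are assumptions):

```python
from collections import deque

def second_chance_victim(pages):
    """pages: deque of (page, r_bit) pairs, oldest at the left.
    Returns the evicted page; mutates the deque in place."""
    while True:
        page, r = pages.popleft()
        if r == 0:
            return page            # old and unused -> evict immediately
        pages.append((page, 0))    # R was 1: clear it, move to the tail
```

Given the list A (R=1), B (R=0), C (R=1): A gets its second chance and moves to the tail with R cleared, then B is evicted.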

optimal page replacement algo

At the moment that a page fault occurs, some set of pages is in memory. One of these pages will be referenced on the very next instruction. Other pages may not be referenced until 10, 100, or perhaps 1000 instructions later. Each page can be labeled with the number of instructions that will be executed before that page is first referenced.

Use of Segments

Avoid paging inefficiency: many pages include data representing programming entities that may not be used immediately.

Paging Policies

◼ Fetch strategies (when to fetch pages)
◼ Placement strategies (in what page frame to put the new page)
◼ Replacement strategies (if there are no free page frames, which existing frame to replace)

Segment hit ratio: percentage of time segment found in associative memory

If not found in associative memory, must load from segment tables: ◼ Requires additional memory reference

FIFO Page Replacement

One possibility is to find the product that the supermarket has been stocking the longest (i.e., something it began selling 120 years ago) and get rid of it on the grounds that no one is interested any more. In effect, the supermarket maintains a linked list of all the products it currently sells in the order they were introduced. The new one goes on the back of the list; the one at the front of the list is dropped.

demand paging

Page is needed ➔ reference to it:
◼ invalid reference ➔ abort
◼ valid reference but not in memory ➔ bring to memory
Unix, Linux, Windows NT, etc. implement this.

Segmentation implementation issues: external fragmentation

Segmentation suffers external fragmentation. Use compaction? Two ways:
• Move data in memory
• Incremental compaction while segments are being swapped in and out: swap them out and swap them back in at new locations.

optimal page replacement algo

The only problem with this algorithm is that it is unrealizable. At the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next.

NRU: When a process is started up, all of its page table entries are marked as not in memory. As soon as any page is referenced, a page fault will occur.

The operating system then sets the R bit (in its internal tables), changes the page table entry to point to the correct page, with mode READ ONLY, and restarts the instruction. If the page is subsequently modified, another page fault will occur, allowing the operating system to set the M bit and change the page's mode to READ/WRITE.

The R and M bits can be used to build a simple paging algorithm as follows. When a process is started up, both page bits for all its pages are set to 0 by the operating system. Periodically (e.g., on each clock interrupt), the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been.

When a page fault occurs, the operating system inspects all the pages and divides them into four categories based on the current values of their R and M bits: Class 0: not referenced, not modified. Class 1: not referenced, modified. Class 2: referenced, not modified. Class 3: referenced, modified.
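The four-class scheme above can be sketched in a few lines (the dict interface is an assumption, and `min` is a deterministic stand-in for NRU's random choice within the lowest class):

```python
def nru_class(r, m):
    """Map the (R, M) bits of a page to its NRU class number:
    class 0 = not referenced, not modified ... class 3 = both set."""
    return 2 * r + m

def nru_victim(pages):
    """pages: dict mapping page -> (R, M) bits.
    Picks a page from the lowest-numbered nonempty class
    (deterministically here; real NRU picks at random)."""
    return min(pages, key=lambda p: nru_class(*pages[p]))
```

With pages A (R=1, M=1), B (R=0, M=1), C (R=1, M=0), the classes are 3, 1, and 2, so B (class 1) is chosen.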

Clock Page Replacement Algo when a page fault occurs

When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position. If R is 1, it is cleared and the hand is advanced to the next page. This process is repeated until a page is found with R = 0. Not surprisingly, this algorithm is called clock.
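The clock sweep described above can be sketched as (a minimal model using parallel lists for the circular list; names are illustrative assumptions):

```python
def clock_evict(frames, r_bits, hand, new_page):
    """frames: pages on the circular list; r_bits: parallel list of R bits;
    hand: current hand position. Evicts a page, inserts new_page in its
    place, and returns the advanced hand position."""
    while r_bits[hand]:
        r_bits[hand] = 0                 # R was 1: clear it (second chance)
        hand = (hand + 1) % len(frames)  # advance to the next page
    frames[hand] = new_page              # R == 0: evict and replace
    r_bits[hand] = 1
    return (hand + 1) % len(frames)      # hand advances one position
```

With frames A, B, C, R bits 1, 0, 1, and the hand at A: A's R bit is cleared, then B (R=0) is evicted and replaced.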

Fetch Strategies

When should a page or segment be brought into primary (main) memory from secondary (disk) storage?
◼ Demand fetch (wait until you need it)
◼ Anticipatory fetch (predict which page will be needed): hard to do in practice ➔ not done. Demand paging is more popular and is what is used in popular OSes.

When a page fault occurs, the operating system has to choose a page to evict (remove from memory) to make room for the incoming page. If the page to be removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If, however, the page has not been changed (e.g., it contains program text), the disk copy is already up to date, so no rewrite is needed. The page to be read in just overwrites the page being evicted.

While it would be possible to pick a random page to evict at each page fault, system performance is much better if a page that is not heavily used is chosen. If a heavily used page is removed, it will probably have to be brought back in quickly, resulting in extra overhead.

segments provide

a user with a two-dimensional virtual memory instead of a linear virtual address space.
◼ Dimension 1: a collection of segments (segment number)
◼ Dimension 2: an address space per segment (offset)

use of segments

allow each segment to grow independently
◼ Segment for each user entity: stack, code, data, array
◼ Protect each user entity independently: stack, code, data, array

paging vs. segmentation: can the total address space exceed the size of physical memory?

Both: yes

segmentation implementation issues

Can cache segment tables in registers; can keep the segment table in memory at a per-process location.

Segmentation Implementation Issues: segment table base register and segment table length register

are changed at context switch time.

Another low-overhead paging algorithm is the FIFO (First-In, First-Out) algorithm.

consider a supermarket that has enough shelves to display exactly k different products. One day, some company introduces a new convenience food—instant, freeze-dried, organic yogurt that can be reconstituted in a microwave oven. It is an immediate success, so our finite supermarket has to get rid of one old product in order to stock it.

performance of demand paging

If we plug these numbers into the t_eff formula, we get approximately 1 page fault out of 300,000 accesses.

segmentation benefit

freeing the programmer from having to manage the expanding and contracting tables.

Shared segments can have different protections

from different processes by having different segment table protection access bits

The NRU (Not Recently Used) algorithm removes a page at random from the lowest-numbered nonempty class. Implicit in this algorithm is the idea that it is better to remove a modified page that has not been referenced in at least one clock tick (typically about 20 msec) than a clean page that is in heavy use. The main attraction of NRU is that it is easy to understand, moderately efficient to implement, and gives a performance that, while certainly not optimal, may be adequate.

most computers with virtual memory have two status bits, R and M, associated with each page. R

is set whenever the page is referenced (read or written). M is set when the page is written to (i.e., modified).

Demand Paging

◼ Never bring a page into primary memory until it is needed (bring a page in only when it is needed)
◼ Less I/O needed
◼ Less memory needed
◼ Faster response
◼ More users at any one time

paging vs. segmentation: number of linear address spaces

paging: 1; segmentation: many

paging vs. segmentation: why was each technique invented?

Paging: to get a large linear address space without having to buy more physical memory. Segmentation: to allow programs and data to be broken up into logically independent address spaces and to aid sharing and protection.

paging vs. segmentation: can procedures and data be distinguished and separately protected?

paging: no; segmentation: yes

Protection can add

read, write, execute protection bits to the page table to protect memory
◼ Check is done by hardware during access
◼ Can give a shared memory location different protections from different processes by having different page table protection access bits.

The optimal page replacement algorithm says that the page with the highest label should be removed. If one page will not be used for 8 million instructions and another page will not be used for 6 million instructions, removing the former pushes the page fault that will fetch it back as far into the future as possible. Computers, like people, try to put off unpleasant events for as long as they can.
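The "highest label" rule can be sketched directly, assuming the (unrealizable in practice) knowledge of the future reference string (names are illustrative):

```python
def optimal_victim(frames, future_refs):
    """Evict the resident page whose next use lies farthest in the
    future; a page never used again counts as infinitely far away."""
    def next_use(page):
        try:
            return future_refs.index(page)   # instructions until next use
        except ValueError:
            return float("inf")              # never referenced again
    return max(frames, key=next_use)
```

With frames A, B, C and future references A, C, A, B: the next uses are at positions 0, 3, and 1, so B (farthest away) is evicted.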

FIFO. The operating system maintains a list of all pages currently in memory, with the most recent arrival at the tail and the least recent arrival at the head. On a page fault, the page at the head is removed and the new page added to the tail of the list. When applied to stores, FIFO might remove mustache wax, but it might also remove flour, salt, or butter. When applied to computers the same problem arises: the oldest page may still be useful. For this reason, FIFO in its pure form is rarely used.
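The FIFO list described above maps naturally onto a queue (a minimal fault-counting sketch; the interface is an assumption):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement: evict the page at the
    head (least recent arrival), append new pages at the tail."""
    frames = deque()   # head = oldest arrival, tail = newest
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()    # drop the oldest page, useful or not
            frames.append(page)     # new arrival goes on the tail
    return faults
```

For 1, 2, 3, 4, 1 with 3 frames: page 4 evicts page 1 even though 1 is about to be used again, so the final reference to 1 faults too, giving 5 faults. This is exactly the "oldest page may still be useful" problem.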

Sharing

With segmentation there is no forced protection or sharing: just protect or share what you want.

(virtual) pages are contiguous in memory; not so with segments

Performance of Demand Paging

◼ t_eff = (1 − p) × ma + p × f, where p = probability of a page fault, ma = memory access time, f = page-fault processing time
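The formula is easy to evaluate numerically. The sketch below uses illustrative figures (the 100 ns access time and 8 ms fault service time are assumptions, not from the slides; the fault rate echoes the roughly 1-in-300,000 figure mentioned earlier):

```python
def t_eff(p, ma, f):
    """Effective access time under demand paging:
    (1 - p) * ma + p * f."""
    return (1 - p) * ma + p * f

# Assumed example numbers, all in nanoseconds:
ma_ns = 100            # memory access time
f_ns = 8_000_000       # page-fault service time (8 ms)
p = 1 / 300_000        # roughly one fault per 300,000 accesses
print(t_eff(p, ma_ns, f_ns))   # ≈ 126.7 ns
```

Even a tiny fault probability dominates the effective access time, because f is tens of thousands of times larger than ma.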

Hybrid Schemes: Segmented Paged Virtual Memory

◼ Allows sharing of software components
◼ Allows two-dimensional view of memory
◼ Reduces in-memory page table size

Hybrid Schemes: Segmented Paged Virtual Memory

◼ If segments are too big to fit into memory, segments can be paged.
◼ Overcomes external fragmentation problem of segmented memory

Hybrid Schemes: Segmented Paged Virtual Memory
◼ We don't need to worry about compaction.
◼ On average, half a page per segment is wasted by internal fragmentation (the last page)


Segmentation Implementation Issues

◼ Length of segment table given by segment table length register
◼ We need this because the number of segments might vary by process.
◼ This is not the case for paged memory: the size of the page table is determined by the virtual address space and page size.

Code and data can be shared by mapping them into pages with common page frame mappings

◼ Most OS's reserve a certain part of the VM address space for sharing


Shared pages
◼ If virtual addresses are stored anywhere in the code or data, page numbers must correspond.
◼ What happens when a page is swapped in and out? You have to update all process page tables that are sharing it ➔ a lot of work!
◼ Implies that practically you don't swap shared pages out.

