OS Final Exam Review

Fragmentation -How to reduce external fragmentation?

By compaction:
-----Shuffle memory contents to place all free memory together in one large block
-----Compaction is possible only if relocation is dynamic and is done at execution time
-----I/O problem:
----------Latch the job in memory while it is involved in I/O
----------Do I/O only into OS buffers

Single-level Directory

A single Directory for all users, naming problem, grouping problem

Process Synchronization Race Condition:

A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place.

Page Replacement Algorithms (OPT)

The optimal page-replacement algorithm (also known as OPT, the clairvoyant replacement algorithm, or Bélády's optimal page-replacement policy): when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future.
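As an illustration, a minimal C sketch that counts OPT faults over a small reference string (the reference string, frame count, and the helper name next_use are my own, not from any particular source):

#include <stdio.h>

#define FRAMES 3
#define REFS   12

/* Distance to the next use of 'page' after position 'pos' in the
   reference string; returns n if the page is never used again. */
static int next_use(const int *refs, int n, int pos, int page) {
    for (int i = pos + 1; i < n; i++)
        if (refs[i] == page) return i;
    return n; /* never used again: farthest possible */
}

int main(void) {
    int refs[REFS] = {7,0,1,2,0,3,0,4,2,3,0,3};
    int frames[FRAMES] = {-1,-1,-1};
    int faults = 0;

    for (int i = 0; i < REFS; i++) {
        int hit = 0, free_slot = -1;
        for (int f = 0; f < FRAMES; f++) {
            if (frames[f] == refs[i]) hit = 1;
            if (frames[f] == -1 && free_slot < 0) free_slot = f;
        }
        if (hit) continue;
        faults++;
        if (free_slot >= 0) { frames[free_slot] = refs[i]; continue; }
        /* Evict the page whose next use lies farthest in the future. */
        int victim = 0, farthest = -1;
        for (int f = 0; f < FRAMES; f++) {
            int d = next_use(refs, REFS, i, frames[f]);
            if (d > farthest) { farthest = d; victim = f; }
        }
        frames[victim] = refs[i];
    }
    printf("OPT faults: %d\n", faults);
    return 0;
}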

Several pieces of information are associated with an open file

--------File pointer: On systems that do not include a file offset as part of the read() and write() system calls, the system must track the last read-write location as a current-file-position pointer. This pointer is unique to each process operating on the file and therefore must be kept separate from the on-disk file attributes.
--------File-open count: As files are closed, the operating system must reuse its open-file table entries, or it could run out of space in the table. Multiple processes may have opened a file, and the system must wait for the last file to close before removing the open-file table entry. The file-open count tracks the number of opens and closes and reaches zero on the last close. The system can then remove the entry.
--------Disk location of the file: Most file operations require the system to modify data within the file. The information needed to locate the file on disk is kept in memory so that the system does not have to read it from disk for each operation.
--------Access rights: Each process opens a file in an access mode. This information is stored in the per-process table so the operating system can allow or deny subsequent I/O requests.

File Structure

--------None: sequence of words or bytes
--------Simple record structure:
----------------Lines, fixed length, variable length
--------Complex structures:
----------------Formatted document, relocatable load file
--------Can simulate the last two with the first method by inserting appropriate characters
--------Who decides:
----------------Operating system, program

Conditions for Deadlock -all four must hold for deadlock

-------Mutual exclusion (the resource cannot be shared)
-------Hold & wait
-------No preemption
-------Circular wait

Methods for handling deadlocks

-------We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state. (Deadlock prevention provides a set of methods for ensuring that at least one of the necessary conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.)
-------We can allow the system to enter a deadlocked state, detect it, and recover.
-------We can ignore the problem altogether and pretend that deadlocks never occur in the system. (Used by most operating systems; it is then up to the application developer to write programs that handle deadlocks.)

Segmentation Architecture -logical address consists of a two-tuple -Segment table (and what each table entry has) -Segment-table base register (STBR) -Segment-table length register (STLR)

-----<segment-number, offset>
-----Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
----------Base: contains the starting physical address where the segment resides in memory
----------Limit: specifies the length of the segment
-----STBR: points to the segment table's location in memory
-----STLR: indicates the number of segments used by a program; segment number s is legal if s < STLR
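A minimal C sketch of the address translation this implies, assuming a tiny two-entry segment table (the struct, function name, and base/limit values are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };   /* one segment-table entry */

/* Check s against STLR and the offset against the segment's limit,
   then form the physical address as base + offset. */
unsigned translate(struct segment *table, unsigned stlr,
                   unsigned s, unsigned offset) {
    if (s >= stlr)              { fprintf(stderr, "trap: bad segment\n"); exit(1); }
    if (offset >= table[s].limit) { fprintf(stderr, "trap: bad offset\n");  exit(1); }
    return table[s].base + offset;
}

int main(void) {
    struct segment table[] = { {1400, 1000}, {6300, 400} };
    printf("%u\n", translate(table, 2, 1, 53)); /* -> 6353 */
    return 0;
}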

The ability to execute a program that is only partially in memory would confer many benefits:

-----A program would no longer be constrained by the amount of physical memory that is available. Users would be able to write programs for an extremely large virtual address space, simplifying the programming task.
-----Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput but with no increase in response time or turnaround time.
-----Less I/O would be needed to load or swap user programs into memory, so each user program would run faster.

Effective Access Time

-----Associative lookup = E time units
-----Assume the memory cycle time is 1 microsecond
-----Hit ratio: percentage of times that a page number is found in the associative registers; the ratio is related to the number of associative registers
-----Hit ratio = alpha

Contiguous Allocation -----Relocation registers used to protect user processes from each other, and from changing operating-system code and data

-----Base register contains the value of the smallest physical address
-----Limit register contains the range of logical addresses - each logical address must be less than the limit register
-----MMU maps logical addresses dynamically

Binding of Instructions and Data to Memory -----Address binding of instructions and data to memory addresses can happen at three different stages

-----Compile time: If the memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes
-----Load time: Must generate relocatable code if the memory location is not known at compile time
-----Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (e.g., base and limit registers)

Fragmentation -External and Internal

-----External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
-----Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used

Dynamic Storage-Allocation Problem -How to satisfy a request of size n from a list of free holes

-----First-fit: Allocate the first hole that is big enough (see the sketch below)
-----Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size
----------Produces the smallest leftover hole
-----Worst-fit: Allocate the largest hole; must also search the entire list
----------Produces the largest leftover hole
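A toy C sketch of first-fit over a free-hole list (the hole data and names are hypothetical); best-fit and worst-fit differ only in scanning the entire list for the smallest or largest hole that qualifies:

#include <stdio.h>

#define NHOLES 4

/* Free-hole list; starts and sizes in KB, purely illustrative. */
struct hole { int start, size; } holes[NHOLES] = {
    {100, 50}, {300, 200}, {600, 120}, {900, 300}
};

/* First-fit: return the start of the first hole big enough for n,
   shrinking that hole; -1 if no hole fits. */
int first_fit(int n) {
    for (int i = 0; i < NHOLES; i++) {
        if (holes[i].size >= n) {
            int start = holes[i].start;
            holes[i].start += n;   /* leftover becomes a smaller hole */
            holes[i].size  -= n;
            return start;
        }
    }
    return -1;
}

int main(void) {
    printf("alloc 130K at %d\n", first_fit(130)); /* skips the 50K hole, uses 300 */
    return 0;
}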

Memory-Management Unit (MMU)

-----Hardware device that maps virtual addresses to physical addresses
-----In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory
-----The user program deals with logical addresses; it never sees the real physical addresses

Contiguous Allocation (Cont) Multiple-partition allocation

-----Hole: block of available memory; holes of various sizes are scattered throughout memory
-----When a process arrives, it is allocated memory from a hole large enough to accommodate it

Paging

-----Logical address space of a process can be noncontiguous; a process is allocated physical memory whenever the latter is available
-----Divide physical memory into fixed-sized blocks called frames (size is a power of 2, between 512 bytes and 8,192 bytes)
-----Divide logical memory into blocks of the same size, called pages
-----Keep track of all free frames
-----To run a program of size n pages, find n free frames and load the program
-----Set up a page table to translate logical to physical addresses
-----Suffers from internal fragmentation

Memory Protection -implementation -valid-invalid

-----Memory protection implemented by associating a protection bit with each frame
-----Valid-invalid bit attached to each entry in the page table:
----------"Valid" indicates that the associated page is in the process's logical address space and is thus a legal page
----------"Invalid" indicates that the page is not in the process's logical address space

Segmentation -memory-management scheme -a segment is a logical unit such as:

-----Memory-management scheme that supports the user view of memory
-----A program is a collection of segments; a segment is a logical unit such as:
----------main program
----------procedure
----------function
----------method
----------object
----------local variables, global variables
----------common block
----------stack
----------symbol table
----------arrays

Background of Main Memory

-----Program must be brought (from disk) into memory and placed within a process for it to be run
-----Main memory and registers are the only storage the CPU can access directly
-----Register access takes one CPU clock (or less)
-----Main memory can take many cycles
-----Cache sits between main memory and CPU registers
-----Protection of memory is required to ensure correct operation

Segmentation Architecture (Cont.)

-----Protection
----------With each entry in the segment table associate:
---------------Validation bit = 0 -> illegal segment
---------------Read/write/execute privileges
-----Protection bits associated with segments; code sharing occurs at the segment level
-----Since segments vary in length, memory allocation is a dynamic storage-allocation problem
-----Elements within a segment are identified by their offset from the beginning of the segment

Contiguous Allocation -----Main memory is usually divided into two partitions:

-----Resident operating system, usually held in low memory with the interrupt vector
-----User processes then held in high memory

Two-Level Paging Example -A logical address (on 32-bit machine with 1K page size) is divided into: -Since the page table is paged, the page number is further divided into: -Thus, a logical address is as follows:

-A logical address (on a 32-bit machine with 1K page size) is divided into:
-----a page number consisting of 22 bits
-----a page offset consisting of 10 bits
-Since the page table is paged, the page number is further divided into:
-----a 12-bit page number
-----a 10-bit page offset

A computer provides each process with 65536 Bytes of address space divided into pages of 4096 bytes. A program has text size 32768 bytes, data size 16386 bytes, stack size 15870 bytes [Remember that a page may not contain parts of different segments] Will this program fit in the address space? If the page size were 512 bytes would it fit?

1.) 65536/4096 = 16 pages (total # of pages in the address space)
2.) 32768/4096 + 16386/4096 + 15870/4096 = 8 + 5 + 4 = 17 pages (the data and stack segments must each be rounded up to whole pages)
17 > 16, so NO, the program does not fit.
If the page size were 512 bytes: 65536/512 = 128 pages available, and 32768/512 + 16386/512 + 15870/512 = 64 + 33 + 31 = 128 pages needed, so YES, it fits exactly.

Procedure for handling page fault

1.) Check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.
2.) If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that page, we now page it in.
3.) We find a free frame (by taking one from the free-frame list).
4.) We schedule a disk operation to read the desired page into the newly allocated frame.
5.) When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6.) We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.

The hardware to support demand paging is the same as the hardware for paging and swapping:

1.) Page table. This table has the ability to mark an entry invalid through a valid-invalid bit or a special value of protection bits.
2.) Secondary memory. This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.

Under the normal mode of operation, a process may utilize a resource in only the following sequence:

1.) Request - The process requests the resource. If the request cannot be granted immediately (for example, if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
2.) Use - The process can operate on the resource (for example, if the resource is a printer, the process can print on the printer).
3.) Release - The process releases the resource.

In addition to separating logical memory from physical memory, virtual memory allows files and memory to be shared by two or more processes through page sharing leading to the following benefits:

1.) System libraries can be shared by several processes through mapping of the shared object into a virtual address space. Although each process considers the libraries to be part of its virtual address space, the actual pages where the libraries reside in physical memory are shared by all the processes. Typically, a library is mapped read-only into the space of each process that is linked with it.
2.) Similarly, processes can share memory. Two or more processes can communicate through the use of shared memory. Virtual memory allows one process to create a region of memory that it can share with another process. Processes sharing this region consider it part of their virtual address space, yet the actual physical pages of memory are shared.
3.) Pages can be shared during process creation with the fork() system call, thus speeding up process creation.

Readers writers variations

1.) The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. No reader should wait for other readers to finish simply because a writer is waiting. (Writers may starve.)
2.) The second readers-writers problem requires that, once a writer is ready, that writer perform its write as soon as possible. If a writer is waiting to access the object, no new readers may start reading. (Readers may starve.)

32 bit address 2 level paging system 8kb page size. Outer page table has 1024 entries. How many bits are used to represent the second level page table?

9 bits.
An 8 KB page gives a 13-bit offset (bits 12-0). The outer page table has 1024 = 2^10 entries, so the outer index is 10 bits (bits 31-22). The second-level index is what remains: 32 - (10 + 13) = 9 bits (bits 21-13).

Let p be the probability of a page fault (0<=p<=1). We would expect p to be close to zero - that is, we would expect to have only a few page faults. The effective access times is then:

= (1 - p) x ma + p x page-fault time
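A worked instance (assuming the common textbook numbers of ma = 200 ns and an 8 ms average page-fault service time, which are not given on this card):
EAT = (1 - p) x 200 + p x 8,000,000 ns = 200 + 7,999,800 x p
So even one fault per 1,000 accesses (p = 0.001) gives EAT ≈ 8,200 ns, a slowdown by roughly a factor of 40.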

Given the logical address 0xAEF9 with a page size of 256 bytes, what is the page number? What is the page offset?

0xAEF9 = 1010 1110 1111 1001 in binary. A page size of 256 bytes = 2^8 means the low 8 bits are the offset.
Page number: 1010 1110 = 0xAE
Offset: 1111 1001 = 0xF9

Directory Structure

A collection of nodes containing information about all files. Both the directory structure and the files reside on disk. Backups of these two structures are kept on tapes.

For each use of a kernel managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource.

A system table records whether each resource is free or allocated; for each resource that is allocated, the table also records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.

Page Replacement Algorithms 2nd-chance

A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their reference bit cleared, then second chance algorithm degenerates into pure FIFO.
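A compact C sketch of the circular-queue ("clock") form described above; the frame contents and reference bits are illustrative:

#include <stdio.h>

#define FRAMES 4

int page[FRAMES]   = {3, 7, 2, 9};
int refbit[FRAMES] = {1, 0, 1, 1};
int hand = 0; /* front of the FIFO queue */

/* Pick a victim frame: clear reference bits until an unset one is found. */
int second_chance(void) {
    for (;;) {
        if (refbit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % FRAMES;
            return victim;
        }
        refbit[hand] = 0;            /* give the page a second chance */
        hand = (hand + 1) % FRAMES;  /* move it to the back of the queue */
    }
}

int main(void) {
    int v = second_chance();
    printf("evict frame %d (page %d)\n", v, page[v]); /* frame 1, page 7 */
    return 0;
}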

What defines the logical address space?

A pair of base and limit registers

Why would anyone favor a preemptive kernel over a nonpreemptive one?

A preemptive kernel may be more responsive, since there is less risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the processor to waiting processes. (of course, this risk can also be minimized by designing kernel code that does not behave in this way) Furthermore, a preemptive kernel is more suitable for real-time programming, as it will allow a real-time process to preempt a process currently running in the kernel.

Deadlock

A process requests resources; if the resources are not available at that time, the process enters a waiting state. Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes.

Process Synchronization Critical Section:

A segment of code in a process in which the process may be changing common variables, updating a table, writing a file, etc. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.

The readers-writers problem and its solutions have been generalized to provide reader-writer locks on some systems.

Acquiring a reader-writer lock requires specifying the mode of the lock: either read or write access. When a process wishes only to read shared data, it requests the reader-writer lock in read mode. A process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode, but only one process may acquire the lock for writing, as exclusive access is required for writers.

Physical Address

Addresses seen by the memory unit

Preemptive Kernels

Allows a process to be preempted while it is running in kernel mode. Must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors.

Memory Mapped Files

Consider a sequential read of a file on disk using the standard system calls open(), read(), and write(). Each file access requires a system call and disk access. Alternatively, we can use the virtual memory techniques discussed so far to treat file I/O as routine memory access. Memory Mapping allows a part of the virtual address space to be logically associated with the file. This can lead to significant performance increases.

System Resource-Allocation Graph

Consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

File System File Operations

Creating a file, writing a file, reading a file, repositioning within a file, deleting a file, truncating a file.
Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned to a given value. Repositioning within a file need not involve any actual I/O. This file operation is also known as a file seek.
Truncating a file: The user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, this function allows all attributes to remain unchanged - except for file length - but lets the file be reset to length zero and its file space released.

S5FS: Inode

Device, Inode number, Mode, Link Count, Owner & Group, Size, Disk Map

File-System Structure

Disks provide most of the secondary storage on which file systems are maintained. Two characteristics make them convenient for this purpose:
--------A disk can be rewritten in place; it is possible to read a block from the disk, modify the block, and write it back into the same place.
--------A disk can directly access any block of information it contains. Thus, it is simple to access any file either sequentially or randomly, and switching from one file to another requires only moving the read-write heads and waiting for the disk to rotate.

Effective Access Time (EAT) formula

EAT = (1 + E) x alpha + (2 + E) x (1 - alpha) = 2 + E - alpha

Tree-Structured Directories

Efficient searching, grouping capability, current directory (working directory), e.g., cd /spell/mail/prog, type list.
Absolute or relative path names.
Creating a new file is done in the current directory.
Delete a file:
--------rm <file-name>
Creating a new subdirectory is done in the current directory:
--------mkdir <dir-name>

File System Structure

File systems provide efficient and convenient access to the disk by allowing data to be stored, located, and retrieved easily. I/O control level consists of device drivers and interrupt handlers to transfer information between the main memory and the disk system. The basic file system needs only to issue generic commands to the appropriate device driver to read and write physical blocks on the disk.

Dynamic Storage-Allocation Problem -which is best?

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.

Linear Address in Linux broken into four parts:

Global directory, Middle Directory, Page Table, Offset

Acyclic-Graph Directories

Have shared subdirectories and files.
Two different names (aliasing).
If dict deletes list -> dangling pointer.
Solutions:
--------Backpointers, so we can delete all pointers; variable-size records are a problem
--------Backpointers using a daisy-chain organization
--------Entry-hold-count solution
New directory entry type:
--------Link: another name (pointer) to an existing file
--------Resolve the link: follow the pointer to locate the file

File System Structure How to improve I/O efficiency

I/O transfers between memory and disk are performed in units of blocks. Each block consists of one or more sectors. Depending on the disk drive, sector size varies from 32 bytes to 4,096 bytes; the usual size is 512 bytes.

Basic Page Replacement

If no frame is free, we find one that is not currently being used and free it. We can free a frame by writing its contents to swap space and changing the page table to indicate that the page is no longer in memory. We can now use the freed frame to hold the page for which the process faulted.

Thrashing

If the process does not have the number of frames it needs to support pages in active use, it will quickly page-fault. At this point, it must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately. A process is thrashing if it is spending more time paging than executing.

Address Translation Scheme Address generated by CPU is divided into:

If the size of the logical address space is 2^m, and a page size is 2^n bytes, then the high-order m-n bits of a logical address designate the page number, and the n low-order bits designate the page offset:

 page number | page offset
      p      |      d
   m-n bits  |   n bits

-----Page number (p): used as an index into a page table, which contains the base address of each page in physical memory
-----Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit
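In code, the split is just a shift and a mask; a small C sketch using the 16-bit address and 256-byte pages from the 0xAEF9 example earlier in this review:

#include <stdio.h>

int main(void) {
    /* m = 16-bit logical address, n = 8-bit offset (256-byte pages). */
    unsigned addr = 0xAEF9, n = 8;
    unsigned p = addr >> n;              /* high m-n bits: page number */
    unsigned d = addr & ((1u << n) - 1); /* low n bits: page offset    */
    printf("page 0x%X, offset 0x%X\n", p, d); /* page 0xAE, offset 0xF9 */
    return 0;
}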

Deadlock Characterization

In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from starting.

Page Replacement Algorithms (LRU)

Least Recently Used similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as Adaptive Replacement Cache), it is rather expensive to implement in practice.

How are logical and physical addresses similar and how do they differ?

Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Memory Mapping Basic Mechanism

Memory mapping a file is accomplished by mapping a disk block to a page (or pages) in memory. Initial access to the file proceeds through ordinary demand paging, resulting in a page fault. However, a page-sized portion of the file is read from the file system into a physical page. Note that writes to the file mapped in memory are not immediate (synchronous) writes to the file on disk. Some systems may choose to update the physical file when the operating system periodically checks whether the page in memory has been modified. When the file is closed, all the memory-mapped data are written back to disk and removed from the virtual memory of the process.

It is up to the ________ _________ ____ to map logical pages to physical page frames in memory

Memory-management unit (MMU)

Deadlock Necessary Conditions

Mutual exclusion, hold and wait, no preemption, circular wait.
-------1.) Mutual exclusion - At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
-------2.) Hold and wait - A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
-------3.) No preemption - Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
-------4.) Circular wait - A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
We emphasize that all four conditions must hold for a deadlock to occur. The circular-wait condition implies the hold-and-wait condition, so the four conditions are not completely independent.

Process Synchronization A solution to the critical section problem must satisfy the following three requirements:

Mutual exclusion, progress, bounded waiting.
-Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
-Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
-Bounded Waiting: There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
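Peterson's classic two-process solution shows all three requirements being met in software; a sketch in the style of the other code fragments in this review (i is this process, j = 1 - i is the other; it assumes loads and stores are not reordered, which modern hardware does not guarantee without memory barriers):

/* Shared by both processes: */
int turn;              /* whose turn it is to enter */
int flag[2] = {0, 0};  /* flag[i] == 1: process i wants to enter */

/* The structure of process i: */
do {
    flag[i] = 1;                  /* announce intent to enter */
    turn = j;                     /* yield the turn to the other process */
    while (flag[j] && turn == j)
        ;                         /* busy-wait while j wants in and has the turn */
    /* critical section */
    flag[i] = 0;                  /* exit section */
    /* remainder section */
} while (1);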

File System File Attributes

Name, identifier, type, location, size, protection, time, date, and user identification.
Identifier: This unique tag, usually a number, identifies the file within the file system; it is the non-human-readable name for the file.
Location: This is a pointer to a device and to the location of the file on that device.
Protection: Access-control information determines who can do reading, writing, executing, and so on.
Time, date, and user ID: This information may be kept for creation, last modification, and last use. These data can be useful for protection, security, and usage monitoring.

Open File Descriptor

Open file descriptor table (per process)
--------The file descriptors index the entries in this table, and each entry contains a pointer to an entry in a table that the kernel maintains for all open files.
Open file table (per system)
--------Maintained by the kernel and shared by all processes.
--------Each entry in this table contains the file status flags (read, write, append, etc.), the current file offset, a reference count of the number of descriptor entries that currently point to it, and a pointer to the entry for this file in the v-node table.

File System Calls

The open function converts a filename to a file descriptor and returns the descriptor number. The descriptor returned is always the smallest descriptor that is not currently open in the process. The close function decrements the reference count in the associated file table entry. The kernel will not delete the file table entry until its reference count is zero.

Main Memory Random facts

Paging suffers from internal fragmentation. Segmentation leaves holes in physical memory (external fragmentation).

Deadlocks

The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.

Directory Overview

Search for a file, Create a file, Delete a file, List a directory, Rename a file, Traverse the file system.

Two-level directory

Separate directory for each user
Path name
Can have the same file name for different users
Efficient searching; no grouping capability

Semaphore Process Control Block

The list of waiting processes can be easily implemented by a link field in each process control block. Each semaphore contains an integer value and a pointer to a list of PCBs. Correct usage of semaphores does not depend on a particular queueing strategy for the semaphore lists.

Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management

Semaphore Implementation

The definitions of the wait() and signal() semaphore operations suffer from busy waiting. To overcome the need for busy waiting, we can modify the definitions of the wait() and signal() operations: when a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself. The block operation places the process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Control is transferred to the CPU scheduler, which selects another process to execute.
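In pseudocode, following the usual textbook definition (block() suspends the caller and wakeup(P) moves process P to the ready queue; both are provided by the operating system):

typedef struct {
    int value;
    struct process *list;   /* queue of PCBs waiting on this semaphore */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();             /* suspend the caller */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);           /* move P to the ready queue */
    }
}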

One solution to the problem of external fragmentation is compaction

The goal is to shuffle the memory contents so as to place all free memory together in one large block. Not always possible: if relocation is static and is done at assembly or load time, compaction cannot be done.

Page Replacement Algorithms MFU

The idea is that the most recently used things will remain in the primary cache, giving very quick access. MFU works well if you have a small number of items that are referenced very frequently and a large number of items that are referenced infrequently. A typical desktop user, for example, might have three or four programs that he uses many times a day and hundreds of programs that he uses very infrequently. If you wanted to improve his experience by caching programs in memory so that they start quickly, you're better off caching those things that he uses very frequently.

Belady's Anomaly

The number of faults for four frames (ten) is greater than the number of faults for three frames (nine)! Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases. We would expect that giving more memory to a process would improve its performance.

cause of thrashing

The operating system monitors CPU utilization. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result. The new process tries to get started by taking frames from running processes, causing more page faults and a longer queue for the paging device. As a result, CPU utilization drops even further, and the CPU scheduler tries to increase the degree of multiprogramming even more. Thrashing has occurred, and system throughput plunges. The page-fault rate increases tremendously. As a result, the effective memory access time increases. No work is getting done, because the processes are spending all their time paging. TL;DR: As the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached. If the degree of multiprogramming is increased even further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.

Page fault

The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system's failure to bring the desired page into memory.

Hit Ratio

The percentage of times that the page number of interest is found in the TLB. An 80 percent hit ratio, for example, means that we find the desired page number in the TLB 80% of the time. If it takes 100 ns to access memory, then a mapped-memory access takes 100 ns when the page number is in the TLB. If we fail to find the page number in the TLB, then we must first access memory for the page table and frame number (100 ns) and then access the desired byte in memory (100 ns), for a total of 200 ns.
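Plugging the card's numbers into the EAT formula: with an 80 percent hit ratio, EAT = 0.80 x 100 + 0.20 x 200 = 120 ns; with a 98 percent hit ratio, EAT = 0.98 x 100 + 0.02 x 200 = 102 ns.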

Process Synchronization Entry Section:

The section of code implementing the request for permission to enter the critical section.

Page Replacement Algorithm FIFO

The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name - the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly.

Page Replacement Algorithms (LFU) - Least Frequently Used

The standard characteristics of this method involve the system keeping track of the number of times a block is referenced in memory. When the cache is full and requires more room the system will purge the item with the lowest reference frequency.

Local Replacement Algorithm

Used to limit the effects of thrashing. If one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well. However, thrashing processes will be in the queue for the paging device most of the time. The average service time for a page fault will increase because of the longer average queue for the paging device. Thus, the effective access time will increase even for a process that is not thrashing.

Executable and Linkable Format (ELF)

Virtual address space of a process. Each ELF file is made up of one ELF header, followed by file data. The file data can include:
-----Program header table, describing zero or more segments
-----Section header table, describing zero or more sections
-----Data referred to by entries in the program header table or the section header table
The segments contain information that is necessary for runtime execution of the file, while sections contain important data for linking and relocation. Each byte in the entire file is owned by no more than one section at a time, but there can be orphan bytes that are not covered by any section. In the normal case of a Unix executable, one or more sections are enclosed in one segment.

Sparse address spaces

Virtual address spaces that include holes. Using a sparse address space is beneficial because the holes can be filled as the stack or heap segments grow or if we wish to dynamically link libraries (or possibly other shared objects) during program execution.

It is critical that semaphore operations be executed atomically

We must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time. This is a critical section problem, and in a single processor environment, we can solve it by simply inhibiting interrupts during the time the wait() and signal() operations are executing. This scheme works in a single-processor environment because once interrupts are inhibited, instructions from different processes cannot be interleaved. Only the currently running process executes until interrupts are reenabled and the scheduler can regain control.

(page replacement) We can reduce overhead by using a modify bit (or dirty bit)

When this scheme is used, each page has a modify bit associated with it in the hardware. The bit is set by the hardware whenever any byte in the page is written into, indicating that the page has been modified. When we select a page for replacement, we examine its modify bit. If the bit is set, we know that the page has been modified since it was read in from the disk. In this case, we must write the page to the disk. If the modify bit is not set, however, the page has not been modified since it was read into memory. In this case, we need not write the memory page to the disk: it is already there.

Is the kernel code subject to race conditions?

Yes, at a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions: structures for maintaining memory allocation, maintaining process lists, and for interrupt handling.

Process Synchronization Cooperating Process:

a process that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages.

Indefinite blocking or starvation

a situation in which processes wait indefinitely within the semaphore. Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO order.

Contiguous Allocation (Cont) -----Operating System maintains information about:

a.) allocated partitions b.) free partitions (hole)

Priority-inheritance protocol

According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values.

Copy-on-Write

Allows the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created.
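A small C sketch of the observable behavior (the variable name is mine; the copy-on-write itself is transparent to the program):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int shared = 42;  /* one physical page after fork(), until someone writes */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        shared = 99;  /* this write triggers a private copy for the child */
        printf("child sees %d\n", shared);   /* 99 */
        exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", shared);      /* still 42 */
    return 0;
}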

Locality Model

As a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap. To prevent thrashing, the working-set strategy starts by looking at how many frames a process is actually using.

Binary Semaphore

can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be used instead for providing mutual exclusion.

Counting Semaphore

can range over an unrestricted domain. Can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
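A minimal sketch using POSIX semaphores, assuming a resource with three instances; the printer framing and all names are illustrative:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define NPRINTERS 3
sem_t printers;   /* counting semaphore: number of free printer instances */

void *job(void *arg) {
    sem_wait(&printers);   /* acquire an instance (blocks if the count is 0) */
    printf("thread %ld printing\n", (long)arg);
    sem_post(&printers);   /* release the instance */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&printers, 0, NPRINTERS);  /* initialize to the instance count */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, job, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&printers);
    return 0;
}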

Deadlock Avoidance

deadlock avoidance requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. It can decide for each request whether or not the process should wait. The system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.

The valid-invalid bit scheme

distinguishes between the pages that are in memory and the pages that are on the disk. when this bit is set to "valid", the associated page is both legal and in memory. If the bit is set to "invalid", the page either is not valid (that is, not in the logical address space of the process) or is valid but is currently on the disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a page that is not currently in memory is either simply marked invalid or contains the address of the page on disk. While the process executes and accesses pages that are memory resident, execution proceeds normally.

Nonpreemptive Kernels

does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time.

Memory Mapped I/O

Each I/O controller includes registers to hold commands and the data being transferred. Ranges of memory addresses are set aside and are mapped to the device registers. Reads and writes to these memory addresses cause the data to be transferred to and from the device registers. This method is appropriate for devices that have fast response times, such as video controllers. Memory-mapped I/O is also convenient for serial and parallel ports used to connect modems and printers to a computer. The CPU transfers data through these kinds of devices by reading and writing a few device registers, called an I/O port. To send out a long string of bytes through a memory-mapped serial port, the CPU writes one data byte to the data register and sets a bit in the control register to signal that the byte is available. The device takes the data byte and then clears the bit in the control register to signal that it is ready for the next byte. If the CPU uses polling to watch the control bit, constantly looping to see whether the device is ready, this method of operation is called programmed I/O. If the CPU does not poll the control bit, but instead receives an interrupt when the device is ready for the next byte, the data transfer is said to be interrupt driven.
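A hedged C sketch of the programmed-I/O loop described above; the register addresses and the busy/ready bit layout are invented for illustration and vary by device:

#include <stdint.h>

/* Hypothetical memory-mapped serial port registers. */
#define DATA_REG  ((volatile uint8_t *)0x10000000)
#define CTRL_REG  ((volatile uint8_t *)0x10000004)
#define BUSY_BIT  0x01

/* Programmed I/O: the CPU polls the control bit between bytes. */
void send(const char *s) {
    while (*s) {
        while (*CTRL_REG & BUSY_BIT)
            ;                        /* loop until the device is ready */
        *DATA_REG = (uint8_t)*s++;   /* write one data byte */
        *CTRL_REG |= BUSY_BIT;       /* signal that a byte is available;
                                        the device clears it when done */
    }
}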

We say that a set of processes is in a deadlocked state when

every process in the set is waiting for an event that can be caused only by another process in the set. The events with which we are mainly concerned here are resource acquisition and release.

Logical Address

generated by the CPU; also referred to as virtual address (virtual address is address of process)

Priority Inversion occurs only

in systems with more than two priorities

In a multiprocessor environment, interrupts must be disabled on every processor. Otherwise,

instructions from different processes (running on different processors) may be interleaved in some arbitrary way. SMP systems must provide alternative locking techniques, such as compare_and_swap() or spinlocks, to ensure that wait() and signal() are performed atomically.

The bounded-buffer problem

int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;

The structure of the producer process:

do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
} while (true);

The structure of the consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from buffer to next_consumed */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item in next_consumed */
    ...
} while (true);

We can interpret this code as the producer producing full buffers for the consumer or as the consumer producing empty buffers for the producer.

Instead of swapping in a whole process, the pager brings only those pages into memory that it guessed will be used. Thus,

it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed

Demand paging

Load pages only as they are needed; commonly used in virtual memory systems. Pages are loaded only when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory. Similar to a paging system with swapping, where processes reside in secondary memory (usually a disk). Uses a lazy swapper instead of swapping the entire process into memory. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.

Demand Paging Example Demand-paged memory. The page table is held in registers. It takes 8 ms to service a page fault if an empty frame is available or the replaced page is not modified, and 20 ms if the replaced page is modified. Memory access time is 100 ns. Assume that the page to be replaced is modified 70% of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than 200 nanoseconds?

Memory-access time = 100 ns; page-fault rate = p. The replaced page is clean 30% of the time (8 ms service) and modified 70% of the time (20 ms service).
Average page-fault service time = 0.3 x 8,000,000 ns + 0.7 x 20,000,000 ns = 16,400,000 ns
EAT = (1 - p) x 100 + p x 16,400,000
200 >= (1 - p) x 100 + p x 16,400,000
p <= 6.097 x 10^-6

Two general approaches used to handle critical sections in OS:

preemptive kernels and nonpreemptive kernels

What's the purpose of a multilevel page table

Saving memory: a multilevel page table allocates page-table space only for the parts of the virtual address space a process actually uses, whereas a single-level table must be allocated in full.
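A worked instance (assuming 32-bit addresses, 4 KB pages, and 4-byte entries, none of which are given on this card): a single-level table needs 2^20 entries x 4 bytes = 4 MB per process, allocated in full. A two-level table allocates a 4 KB outer table plus 4 KB second-level tables only for the regions actually in use - e.g., 4 KB + 2 x 4 KB = 12 KB for a small process.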

Another possible solution to the external-fragmentation problem is to permit the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available. Two complementary techniques achieve this solution:

segmentation and paging.

The readers-writers problem

semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

The structure of a writer process:

do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);

The structure of a reader process:

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);   /* first reader locks out writers */
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex); /* last reader lets writers in */
    signal(mutex);
} while (true);

We distinguish two types of processes as readers and writers. If two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue. To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to the database. This synchronization problem is referred to as the readers-writers problem.

50 percent rule

Statistical analysis of first fit reveals that, even with some optimization, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-third of memory may be unusable.

A crucial requirement for demand paging is:

the ability to restart any instruction after a page fault. Because we save the state (registers, condition code, instruction counter) of the interrupted process when the page fault occurs, we must be able to restart the process in exactly the same place and state, except that the desired page is now in memory and is accessible. If we fault when we try to store in C because C is in a page not currently in memory, we will have to get the desired page, bring it in, correct the page table, and restart the instruction. The restart will require fetching the instruction again, decoding it again, fetching the two operands again, and then adding again.

The virtual address space of a process refers to

the logical (or virtual) view of how a process is stored in memory

Virtual memory involves

the separation of logical memory as perceived by users from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.

Process Synchronization Critical Section Problem:

To design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section.

