CS 300 Module 6

Multiprocessor

There are multiple CPUs, so processes/threads can execute concurrently (in parallel). CPU scheduling is needed to decide which process/thread runs on which CPU.

Suppose a page size is 256 (2⁸) bytes, the page of the virtual address 0x123 is _____, the offset is _____. If this page is loaded into frame 4, the physical address for this virtual address is _____.

Suppose a page size is 256 (2⁸) bytes, the page of the virtual address 0x123 is 0x1, the offset is 0x23 (or 35). If this page is loaded into frame 4, the physical address for this virtual address is 0x423.
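As a sanity check, the split and recombination can be computed directly. This is a minimal C sketch of the arithmetic above (the variable names are illustrative, not from any particular OS):

    #include <stdio.h>

    int main(void) {
        unsigned va = 0x123;             /* virtual address */
        unsigned page_size = 256;        /* 2^8 bytes */
        unsigned p = va / page_size;     /* page number: 0x1 */
        unsigned d = va % page_size;     /* offset: 0x23 (35 decimal) */
        unsigned f = 4;                  /* frame the page is loaded into */
        unsigned pa = f * page_size + d; /* physical address: 0x423 */
        printf("page=0x%x offset=0x%x physical=0x%x\n", p, d, pa);
        return 0;
    }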

If contiguous allocation is used and P1's base address is 0x2043, what is the physical address for P1's virtual address 0x110?

0x2153

CPU may be scheduled when these happen:

1) When a child process is forked by its parent process and enters the ready queue
2) When a process goes back from the waiting queue to the ready queue because of the completion of some awaited event, such as an I/O completion
3) When the current process finishes its execution and leaves the CPU
4) When the current process uses up its time slice and leaves the CPU
5) When the current process needs to wait for an I/O operation and leaves the CPU

Thrashing

--Because page faults involve disk operations, they are much slower than memory accesses. To improve memory access time, one objective of the memory management system is to decrease the page fault rate.
-If the physical memory allocated to a process cannot hold its working set, there will be constant page faults, resulting in very low CPU utilization, and no useful work can be done. You may have experienced this before: when you have many applications open, the computer may stop responding or respond very slowly. To mitigate this "thrashing" problem, we can increase the physical memory capacity or decrease the number of processes (e.g., kill some processes).

Paging Schemes

--Paging is usually used by modern general-purpose OSes to overcome the above problems in contiguous allocation. It is often referred to as a virtual memory scheme.
--The virtual address space is divided into multiple fixed-size pages. Physical memory is divided into same-sized frames.
--If a process needs n pages, the system can allocate n non-contiguous frames to it. Since all pages and frames are the same size, any free frame can be used for any page. This eliminates the external fragmentation issue.
-All n required pages can be mapped to non-contiguous frames and can be allocated one by one instead of all at once. This enables the physical memory to accommodate more, and larger, processes.
-It also supports dynamic allocation, where a process wants to increase its memory at run time. If a process needs more pages, the request can easily be fulfilled as long as there are free frames.
-There is no need to swap a whole process; only individual pages need to be swapped, so less I/O is needed.
--HOWEVER, paging has more overhead than contiguous allocation. An additional page table is needed for each process to keep track of the mapping between pages and frames.
--Page tables need to be stored in physical memory.
--In addition, whenever a virtual address is generated, the MMU needs to first access the page table to find the mapped frame number in order to calculate the physical address, and then access that physical address. Therefore, additional memory access time is needed for every memory access.

Translation Look-aside buffer

-Since the page table resides in main memory, each memory access requires an additional page table access first. This doubles the access time for each memory access.
-To improve performance, we can use a cache for the page table. A Translation Look-aside Buffer (TLB) is a special fast-lookup cache for page table entries.
-It caches some page-number-to-frame-number mappings.
-While it takes less time to access the TLB than the page table, the TLB has far fewer entries than a page table, so not all mappings can be found in the TLB. When the CPU generates a virtual address, the MMU first searches the TLB for the frame number. If the mapping is not there (a TLB miss), the MMU then searches the page table and updates the TLB accordingly; see the sketch below.
-Usually the TLB holds only the mappings for the current process: on a context switch from P1 to P2, all of P1's TLB entries are invalidated, and new mappings are loaded from P2's page table. This is called TLB flushing.
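A minimal C sketch of the lookup order described above. A real TLB is an associative hardware cache; this tiny linear-scan version and its names are purely illustrative:

    #include <stddef.h>

    struct tlb_entry { int valid; unsigned page; unsigned frame; };

    /* Return the frame for 'page': first try the TLB, then fall back to the
       page table and cache the mapping in a TLB slot. */
    unsigned lookup(struct tlb_entry tlb[], size_t n,
                    const unsigned page_table[], unsigned page) {
        for (size_t i = 0; i < n; i++)
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame;            /* TLB hit */
        unsigned frame = page_table[page];      /* TLB miss: extra memory access */
        tlb[0] = (struct tlb_entry){1, page, frame};  /* update the TLB */
        return frame;
    }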

Virtual address (logical address)

-To prevent processes from accessing each other's memory content, a virtual (or "logical") address space is assigned to each process.
-Instead of using physical addresses directly, each process can access any data or code in its own virtual address space.
-Virtual addresses are generated by a program during execution. Each virtual address space is independent of the others.
-Virtual addresses are then mapped to physical addresses by a hardware memory management unit (MMU).

Which of the following events can trigger the CPU scheduling?

-fork of a new process
-the current process terminates
-the current process makes a blocking system call such as read()
-the current process uses up its time quantum

Memory Management

-An important service that an OS provides is memory management.
-Main memory (RAM) is an important component in the computer system.
-A program must be loaded into main memory to execute.
-In a multiprogramming system, multiple processes, together with the OS, share main memory.
--Goals of memory management include:
1) Support a high degree of multiprogramming so that more processes can run simultaneously
2) Decrease the average access time so that the overall system speed is improved
3) Isolate the OS from other processes, and the processes from each other, so that they cannot arbitrarily access each other's data
-RAM can be viewed as a one-dimensional array where addresses are array indices and contents are array elements.

Shortest Job First (SJF)

A scheduling algorithm that attempts to minimize waiting time by scheduling the shortest process first, getting the smaller jobs out of the way.
-Each process is inserted into the ready queue, which is sorted by execution time; a selection sketch follows below.
-There are several problems with this: 1) the execution time of each process is hard to predict; 2) long processes will suffer and may starve if shorter processes keep arriving.
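A minimal sketch of the SJF selection step in C, assuming the (predicted) execution time of each ready process is known; the struct and names are illustrative:

    #include <stddef.h>

    struct proc { int pid; int exec_time; };

    /* Pick the ready process with the shortest (predicted) execution time. */
    struct proc *sjf_pick(struct proc ready[], size_t n) {
        if (n == 0) return NULL;
        struct proc *best = &ready[0];
        for (size_t i = 1; i < n; i++)
            if (ready[i].exec_time < best->exec_time)
                best = &ready[i];
        return best;
    }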

Contiguous Allocation

A simple scheme to translate a logical address into a physical address is to add a relocation base: physical address = logical address + base.
-This requires a contiguous allocation scheme where the whole process is allocated as one chunk in main memory starting at base.
-The base and limit registers should be accessible only by the kernel.
Several problems:
1) It results in external fragmentation: various free slots are generated in memory. For example, if P3 finishes its execution, its memory is reclaimed. If a new process P4 arrives but requires more memory than P3 used, then P4 cannot fit in that free slot, leaving external fragmentation.
2) The multiprogramming degree is low. It cannot support many processes running simultaneously.
3) The physical memory needs to be larger than the virtual address space; otherwise, the whole virtual address space cannot be mapped into physical memory.
4) The system needs to know how much memory each process requires in advance.
5) It cannot support dynamic allocation well. For example, if P2 wants more memory, it cannot get more even though there is available memory at other locations.
6) It cannot support shared memory well. If multiple processes want to share a certain memory region, this is hard to support.
7) The overhead of swapping a whole process is large. In the previous example, if P4 needs to run and there is not enough memory available, the system needs to displace some other process, e.g., P3. Then the whole of P3 needs to be transferred to the swap area, which is usually on disk.
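A minimal C sketch of base-plus-limit translation under contiguous allocation, using the P1 example from the card above (the limit value 0x1000 is an assumption for illustration):

    #include <stdio.h>

    /* Translate a logical address under contiguous allocation.
       Returns -1 on a protection fault (address beyond the limit). */
    long translate(unsigned base, unsigned limit, unsigned logical) {
        if (logical >= limit)
            return -1;                   /* would touch another process's memory */
        return (long)base + logical;     /* physical = base + logical */
    }

    int main(void) {
        /* P1's base is 0x2043; its virtual address 0x110 maps to 0x2153. */
        printf("0x%lx\n", (unsigned long)translate(0x2043, 0x1000, 0x110));
        return 0;
    }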

Which of the following actions can be taken to mitigate the thrashing?

A. Increase the physical memory capacity
B. Decrease the degree of multiprogramming
C. Increase the disk access speed
Answer: A and B (per the thrashing card: increase physical memory or reduce the number of processes).

Physical address

Addresses seen by the memory hardware unit.
-If all processes used physical addresses directly, a process could easily access another process's addresses and their contents.

T/F: whenever the current process terminates, the CPU scheduler will pick a process from all processes in the system to run on the CPU next

FALSE. The scheduler picks a process from the ready queue, not from all processes in the system.

T/F: All processes share virtual address space.

False. Each process has its own independent virtual address space.

T/F: a page fault interrupt is generated when MMU cannot find the mapping in TLB

False. A TLB miss only causes the MMU to search the page table; a page fault is generated when the mapping cannot be found in the page table either.

T/F: a race condition occurs when the output of a program does not depend on the scheduling order

False. A race condition occurs when the output does depend on the scheduling order.

Multiprocessor scheduling algorithms

With multicore processors, each CPU core can execute a process at the same time. Multiprocessor scheduling algorithms need to specify not only which process to run next but also on which CPU core.

_____ supports dynamic memory allocation well. _____ has external fragmentation. _____ supports shared memory well.

Paging supports dynamic memory allocation well. Contiguous allocation has external fragmentation. Paging supports shared memory well.

Which of the following scheduling algorithms have starvation problem?

SJF, Static priority scheduling

Suppose it takes 20ns to access TLB, and 200ns to access the physical memory. Suppose a page is already loaded in the memory, it takes _____ ns to access it if the mapping can be found in TLB. It takes _____ ns to access this page if the mapping cannot be found in TLB.

Suppose it takes 20ns to access TLB, and 200ns to access the physical memory. Suppose a page is already loaded in the memory, it takes 220ns to access it if the mapping can be found in TLB (20ns to access TLB + 200ns to access the corresponding frame in the physical memory). It takes 420ns to access this page if the mapping cannot be found in TLB (20ns to access TLB + 200 ns to access the page table + 200ns to access the corresponding frame).
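The arithmetic generalizes to an effective access time once a TLB hit ratio is known; a small C sketch (the 80% hit ratio is just an illustrative assumption):

    #include <stdio.h>

    int main(void) {
        double tlb = 20, mem = 200;     /* ns, from the example above */
        double hit  = tlb + mem;        /* 220 ns: TLB + frame access */
        double miss = tlb + mem + mem;  /* 420 ns: TLB + page table + frame */
        double ratio = 0.80;            /* assumed TLB hit ratio */
        printf("EAT = %.0f ns\n", ratio * hit + (1 - ratio) * miss);
        return 0;
    }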

Suppose the page size is 16 (2⁴) bytes, and initially both page table and TLB are empty, the following virtual addresses are accessed in order: 0x33, 0x30. Accessing 0x33 is _____. Accessing 0x30 is _____.

Suppose the page size is 16 (2⁴) bytes, and initially both the page table and TLB are empty; the following virtual addresses are accessed in order: 0x33, 0x30. Accessing 0x33 is a Page Fault (page 0x3 is not yet in memory; handling the fault loads it and updates the page table and TLB). Accessing 0x30 is a TLB hit (it is on the same page 0x3, whose mapping is now cached in the TLB).

Working Set

The working set of a process is the set of pages in the virtual address space of the process that are currently resident in physical memory. When a process references a page that is not currently in its working set, a page fault occurs. The system page fault handler attempts to resolve the page fault and, if it succeeds, the page is added to the working set.

_____ is the logical view seen by a process. _____ is seen by the hardware memory unit. The MMU translates _____ into _____. _____ provides isolation between processes.

Virtual address is the logical view seen by a process. Physical address is seen by the hardware memory unit. The MMU translates virtual addresses into physical addresses. Virtual addresses provide isolation between processes.

Page Fault

What if a page is not yet in memory? I.e., the MMU searches the page table and cannot find the corresponding frame. In that case, a page fault interrupt is generated by the MMU. The page fault interrupt handler in the OS is executed to find that page on the disk, allocate a new frame for it, and update the page table and TLB.
--Demand paging and prepaging are two schemes for deciding when to load pages. The idea of demand paging is to load a page into memory only when it is accessed. The idea of prepaging is to load a page before it is accessed. An extra bit is added to each entry in the page table to indicate whether a page is already loaded into memory.
-When a memory-related instruction is executed, the CPU first generates a linear virtual address. The MMU splits the virtual address into two parts: the page number and the offset. Then the MMU checks the TLB for the corresponding frame number. If the frame number is found, the MMU combines the frame number and the offset to generate the physical address.
--If it cannot find the frame number in the TLB, it then checks the page table. If it finds the frame number in the page table, the physical address can be generated and the memory access proceeds. If not found, the MMU generates a page fault interrupt. The current instruction is interrupted, the OS takes control, and the interrupt handler executes to load the page. After the interrupt is handled, the interrupted instruction is re-executed.
---The page fault interrupt handler locates the page and loads it into a free frame. Usually the OS kernel maintains a free frame list. If there is no free frame, a page replacement algorithm is executed to find a "victim" page to be paged out. The system can simply discard the victim page if it was not modified, or write it back to the swap area if it was modified. Different page replacement algorithms can be used to select the victim page. A well-known algorithm is LRU (Least Recently Used): pick the page that has not been used for the longest time; a sketch follows below.
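A minimal sketch of LRU victim selection in C, using a per-frame timestamp of last use (a real kernel approximates this with hardware reference bits; the names are illustrative):

    #include <stddef.h>

    struct frame { int page; unsigned long last_used; };

    /* Pick the frame whose page has not been used for the longest time. */
    size_t lru_victim(struct frame frames[], size_t n) {
        size_t victim = 0;
        for (size_t i = 1; i < n; i++)
            if (frames[i].last_used < frames[victim].last_used)
                victim = i;
        return victim;
    }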

First Come First Served (FCFS)

A scheduling rule in which the process that arrives first is scheduled first.
-The process executes until it voluntarily leaves the CPU.
-No starvation: every process will get the chance to execute.
-But a short process can have a long waiting time if it arrives after a long process.

CPU scheduling

Done by the kernel; all related data and code are in the kernel address space and can only be accessed/executed in kernel mode.
Two tasks are involved in CPU scheduling: 1) pick a process from the ready queue to run on the CPU next; 2) perform a context switch, if the picked process is different from the current one. Scheduling overhead includes both the overhead of picking a process and the context switch overhead.
--ONLY a process in the READY queue can be scheduled to run on the CPU.

Address Translation:

How a virtual address is translated into a physical address.
--Each virtual address is broken into two parts: the page number (p) and the offset (d).
--Each physical address is broken into two parts: the frame number (f) and the offset (d).
--Since a page and a frame are the same size, the offset (d) in a virtual page is the same as the offset in a physical frame. By looking at the page table, the MMU can easily find the mapped frame (f) for page (p). Putting the frame number (f) and the offset (d) together gives the physical address.
--Example: if the page size is 16 (2⁴) bytes, then 4 bits are needed to represent the offset (0000, 0001, 0010, 0011, ..., 1111). A hexadecimal digit is 4 binary bits, so we can use 1 hex digit to represent the offset (0x0, 0x1, ..., 0xa, 0xb, ..., 0xf). If a virtual address is 0x2a, it can be split into two parts: the page number 0x2 and the page offset 0xa. If the page table shows that page 2 is mapped to frame 5, then the physical address is 0x5a, since the frame offset is the same as the page offset.
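Because the page size is a power of two, the split is just shifts and masks. A minimal C sketch for the 16-byte-page example above (the page table contents are assumed for illustration):

    #include <stdio.h>

    #define PAGE_SHIFT 4                  /* page size 16 = 2^4 bytes */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

    int main(void) {
        unsigned page_table[16] = {0};
        page_table[0x2] = 0x5;            /* assume page 2 maps to frame 5 */
        unsigned va = 0x2a;
        unsigned p = va >> PAGE_SHIFT;    /* page number: 0x2 */
        unsigned d = va & OFFSET_MASK;    /* offset: 0xa */
        unsigned pa = (page_table[p] << PAGE_SHIFT) | d;  /* 0x5a */
        printf("physical = 0x%x\n", pa);
        return 0;
    }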

Round Robin (RR)

Lets every process take a turn to execute: each process runs for a time quantum in circular order. The objective of RR is fairness and quick response time.
-If the time quantum is too large, a short process may still have to wait a long time; if it is too small, there will be many context switches and the overhead will be too high.
-In general, the quantum can be chosen so that most short processes finish in one round and longer processes need several rounds.
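A minimal sketch of RR dispatch in C, cycling through the ready processes with a circular index (the remaining times and the quantum are illustrative values):

    #include <stdio.h>

    int main(void) {
        int remaining[] = {30, 5, 12};   /* remaining CPU time per process */
        int n = 3, quantum = 10, left = 3, i = 0;
        while (left > 0) {
            if (remaining[i] > 0) {
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                remaining[i] -= run;     /* process i runs for one quantum */
                printf("P%d runs %d\n", i, run);
                if (remaining[i] == 0) left--;  /* process finished */
            }
            i = (i + 1) % n;             /* circular order */
        }
        return 0;
    }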

Uniprocessor

one CPU, processes/threads can interleave their execution

Priority based scheduling

The process with the highest priority in the ready queue is scheduled first.
-Priority is given as a fixed value, and each process executes until it voluntarily releases the CPU.
-Priority can be based on many different factors; therefore, certain processes can be subject to starvation if higher-priority processes keep arriving.

Multiprogramming

To improve CPU utilization, multiprogramming (keeping multiple processes in memory so that the CPU can switch to another one when the current process waits) is used by most OSes.

Multilevel feedback queue

Used by UNIX.
-Uses multiple separate ready queues with different priorities. Each queue can have its own scheduling algorithm.
-A system process queue has higher priority than the user process queues. A foreground process queue has higher priority than a background process queue. RR can be used in the foreground process queue and SJF can be used in the background process queue.

Complete fair scheduling

Used in Linux.
-Measures how much runtime each task has had and tries to ensure every task gets its fair share of CPU time.
-Uses a variable called vruntime to record the virtual execution time of a task.
-A lower vruntime indicates that a task has used less CPU time and therefore has more need of the CPU; a selection sketch follows below.
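A minimal C sketch of the CFS selection idea: pick the task with the smallest vruntime. The real Linux scheduler keeps tasks in a red-black tree keyed by vruntime; this linear scan is just for illustration:

    #include <stddef.h>

    struct task { int pid; unsigned long long vruntime; };

    /* Pick the task that has had the least virtual runtime so far. */
    struct task *cfs_pick(struct task tasks[], size_t n) {
        if (n == 0) return NULL;
        struct task *next = &tasks[0];
        for (size_t i = 1; i < n; i++)
            if (tasks[i].vruntime < next->vruntime)
                next = &tasks[i];
        return next;
    }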

Real Time scheduling

Used to schedule real-time processes, which have timing constraints such as deadlines.
-Earliest Deadline First (EDF) picks the process with the earliest deadline to run on the CPU.

Synchronization problem

A race condition occurs when multiple processes or threads share the same data and the output depends on precisely who runs when.
-For example, suppose two threads execute similar code. If the first thread executes half of its code and is interrupted, the second thread then runs, and afterwards both threads finish, the interleaving can produce incorrect results.
-To ensure proper execution and correct results, the threads need proper synchronization; a sketch of the classic lost-update race follows below.
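A minimal C sketch of the race described above, assuming POSIX threads (compile with cc -pthread): two threads increment a shared counter without synchronization, and increments can be lost because counter++ is not atomic:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared data, no lock */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Often prints less than 2000000 because the updates interleave. */
        printf("counter = %ld\n", counter);
        return 0;
    }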

