CS471 Final Review (Part 1)

How to calculate Average Turn Around Time

(Sum of (completion time - arrival time) over all processes) / (number of processes)
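
For example (illustrative numbers): if P1 arrives at t=0 and completes at t=10, and P2 arrives at t=2 and completes at t=16, the average turnaround time is ((10-0) + (16-2)) / 2 = 12.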

Virtual Memory: Effective Memory Access Time (EMAT) with Paging + TLB | how to calculate with and without TLB

EMAT = (TLB hit ratio) * (TLB lookup time + memory access time) + (TLB miss ratio) * (TLB lookup time + 2 * (memory access time)). If no TLB is used and translation relies only on page tables: EMAT = 2 * (memory access time), one access for the page table entry and one for the data itself.
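
For example (illustrative numbers): with a TLB lookup of 10 ns, a memory access of 100 ns, and a hit ratio of 0.9, EMAT = 0.9 * (10 + 100) + 0.1 * (10 + 2 * 100) = 99 + 21 = 120 ns; without a TLB it would be 2 * 100 = 200 ns.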

Producer-Consumer Problem What it is What the producer/consumer do

(a.k.a. bounded buffer problem) A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. The producer puts items in the buffer; the consumer removes items from the buffer. The buffer is shared between multiple threads.
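
A minimal bounded-buffer sketch in C using POSIX semaphores and pthreads; the buffer size N, the loop counts, and the put/get helpers are illustrative assumptions, not part of the original card:

#include <pthread.h>
#include <semaphore.h>

#define N 8                 /* assumed buffer capacity */
int buffer[N];
int fill = 0, use = 0;
sem_t empty, full, mutex;   /* two counting sems + a binary sem as a lock */

void put(int item) { buffer[fill] = item; fill = (fill + 1) % N; }
int  get(void)     { int item = buffer[use]; use = (use + 1) % N; return item; }

void *producer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&empty);   /* wait for a free slot */
        sem_wait(&mutex);   /* enter critical section */
        put(i);
        sem_post(&mutex);
        sem_post(&full);    /* signal that an item is available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full);    /* wait for an item */
        sem_wait(&mutex);
        int item = get();
        sem_post(&mutex);
        sem_post(&empty);   /* signal that a slot is free */
        (void)item;         /* consume the item here */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N); /* N empty slots initially */
    sem_init(&full, 0, 0);  /* no items initially */
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}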

How to calculate average wait time of a scheduler implementation

(Sum of (time execution began - arrival time) over all processes) / (number of processes); more generally, waiting time = turnaround time - burst time.

Process Scheduling: CFS timeSlice_k equation

timeSlice_k = (weight_k / (sum of weight_i for i = 0 to n-1)) * sched_latency

Enhanced Second-Chance Page replacement algorithm steps Phase 1: Where does scanner begin How is reference bit affected Phase 2: If Phase 1 fails scan for Phase 3: if Phase 2 fails Location of pointer Value of pages Next steps

1. Beginning at the current position of the pointer, scan the pages. The first page with the reference bit = 0 and modify bit = 0 is replaced. No changes are made to the reference bit in this phase. 2. If Phase 1 fails, scan again, looking for a page with the reference bit = 0, and modify bit = 1. a. During this scan, set the reference bit to 0 on each page that is bypassed. 3. If Phase 2 fails, the pointer should be in its original position and all the pages will have the reference bit 0. Repeat Phase 1, and if necessary, Phase 2.

Virtual Memory: Basic Page replacement Location of desired page found in What must first be found What is read into it and what must be updated What happens to the process

1. Find the location of the desired page on disk swap space 2. Locate a free frame: - If there is no free frame, use a page replacement algorithm to select a victim frame - Write the victim page to the disk - update the page tables accordingly. 3. Read the desired page into the free frame. Update the page table 4. Put the process (that experienced the page fault) back to the ready queue

Virtual Memory: Closer look at servicing page faults 1. What does the operating system receive? 2. What is done with registers and process state? 3. You must ensure the interrupt is a __________ 4. What is checked for validity and where is it located? 5. Steps of issuing a read from disk to a free frame? 6. What does the CPU do while the process waits? 7. What does an I/O interrupt indicate? 8. What is saved for the running process? 9. Where must the interrupt come from? 10. What must be updated to show the page is in memory? 11. What does the process have to wait for? 12. What must be restored to restart the interrupted instruction?

1. Trap to the operating system 2. Save the registers and process state 3. Determine that the interrupt was a page fault 4. Check that the page reference was legal and determine the location of the page on the disk 5. Issue a read from the disk to a free frame: a. Wait in a queue for this device until the read request is serviced b. Wait for the device seek and/or latency time c. Begin the transfer of the page to a free frame 6. While waiting, allocate the CPU to another process 7. Receive an interrupt from the disk I/O subsystem (I/O completed) 8. Save the registers and process state for the running process 9. Determine that the interrupt was from the disk 10. Update the page table to show page is now in memory 11. Wait for the CPU to be allocated to this process again 12. Restore the registers, process state, and new page table, and then restart the interrupted instruction

Hard Disk Drive: Platter How does it store data? How many platters can a disk have? What are platters made of?

A circular hard surface on which data is stored by inducing magnetic changes. A disk can have one or more platters. Platters are made of hard metal with magnetic layering that stores data persistently (across power cycles).

Direct Memory Access (DMA) What is its purpose? How CPU communicates? How it impacts CPU?

A device that transfers data between memory and the I/O device CPU tells DMA where the data is and the size of the data No intervention of CPU is needed as DMA carries out the transfer

Issues with disabling interrupts Issues with disabling interrupts in multiprocessor systems? Problem with disabling all CPUs in multiprocessor?

A malicious or buggy program could disable interrupts and never enable them again; the OS loses control and the process runs forever. Multiprocessor systems: disabling interrupts on one CPU does not stop other CPUs from running processes that can enter critical sections. Disabling interrupts on all CPUs: performance penalties.

Working-Set Model builds on what principle What does Δ define Which pages constitute the working set Accuracy of working set dependent on Locality and a small Δ Locality and large Δ

A model based on the locality principle. The parameter Δ defines the working-set window. The set of pages in the most recent Δ page references of process Pi constitutes its working set. The accuracy of the working set depends on the selection of Δ. What if Δ is too small? ➣ it will not encompass the entire locality. What if Δ is too large? ➣ it will encompass several localities.

Process Scheduling: Priority Based How it's done What it is An example

A priority number (integer) is associated with each process The CPU is allocated to the process with the highest priority (smallest integer = higher priority) EX: SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

Process Creation fork()

A running process (parent) executes the fork() system call to spawn a new process (child). The child process has a separate copy of the parent's address space (including code, data and stack). Both parent and child processes continue execution at the instruction following the fork() call. fork() is called once but returns twice: 0 in the child process, and the PID of the child in the parent process.
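
A minimal C sketch of the fork() behavior described above; the printed messages are illustrative:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* called once, returns twice */
    if (pid == 0) {
        printf("child: pid=%d\n", getpid());    /* fork returned 0 */
    } else if (pid > 0) {
        printf("parent: child pid=%d\n", pid);  /* fork returned child's PID */
        wait(NULL);              /* reap the child */
    } else {
        perror("fork");          /* fork failed */
    }
    return 0;
}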

Critical Section

A segment of code in which a thread uses resources (such as certain instance variables) that can be used by other threads, but that must not be used by them at the same time. Critical section must be mutually exclusive in time

Virtual Memory: Translation Look-aside buffer (TLB) Number of entries Type of Cache Entry format

A typical TLB has 32, 64 or 128 entries TLB is a fully associative cache TLB hardware allows parallel searching (if a given entry is there in the TLB or not) for all entries TLB entry looks like: VPN | PFN | other bits

Process Scheduling: Multi-Level Feedback Queue (MLFQ) Basic Rules

A, B, and C are processes. Many distinct queues, each with a different priority. Rule 1: If Priority(A) > Priority(B), A runs and B doesn't. Rule 2: If Priority(A) = Priority(B), A and B run in Round Robin. Rule 3: When a job enters the system, it is placed in the highest-priority queue. Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (move the job down one queue because it is using more CPU; likely a CPU-bound, non-interactive job). Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level (likely an interactive job, for example one waiting for the user's keyboard input: less to do with CPU, more with keyboard I/O). Rule 5: After some time period S, move all the jobs in the system to the topmost queue (priority boost: periodically give all jobs a fair chance so they don't starve).

Condition Variable What is it? How does a thread interact with it?

An object with a queue that threads can put themselves on to wait until some condition is true When some other thread changes the condition to true, one of the threads waiting in the queue can be signaled (woken up) to execute
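
A minimal C sketch of the wait/signal pattern with a pthread condition variable; the ready flag is an assumed example condition:

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int ready = 0;

void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (!ready)                  /* re-check: wakeups can be spurious */
        pthread_cond_wait(&cv, &m); /* releases m, sleeps, reacquires m */
    /* ... proceed now that the condition is true ... */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *signaler(void *arg) {
    pthread_mutex_lock(&m);
    ready = 1;                      /* change the condition */
    pthread_cond_signal(&cv);       /* wake one waiting thread */
    pthread_mutex_unlock(&m);
    return NULL;
}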

Virtual Memory: TLB address space identifier (ASID) What ASID does and its use

The ASID works as a process identifier within the TLB; it is used to look up the correct translation entry for the currently running process

Virtual Memory: Privileged Operations

Address translation should be handled in kernel mode Updating base or bound registers should be privileged too Requires registering an exception handler if the bound check fails

Virtual Memory: Page table address translation speed Where they are stored Memory accesses for lookups

Address translation using page tables can be very slow Page tables are stored in memory too ○ Each data or instruction access requires two memory accesses ○ One for page table (for address translation) and another for accessing the physical memory location itself

Interrupt driven I/O Advantage Disadvantage: Small I/O operation What other device had a similar issue? Hybrid approach? Is CPU involved? What is Livelock? What is coalescing?

Advantage: wastes fewer CPU cycles. Disadvantages: What if the I/O operation is too small? ➢ Creating the interrupt and context switch might take more time than serving the I/O request itself ➢ Similar to the trade-off with spin-locks ➢ Hybrid approach: poll for a short time; if the operation is done, return; else, put in a request, sleep and wait for the interrupt. Involvement of the CPU is not completely eliminated. Livelock: too many hardware interrupts ➢ Coalescing: wait and coalesce an interrupt with other interrupt(s)

Virtual Memory: Multi-level page tables advantage/disadvantage

Advantages ○ Amount of page table space used is in proportion to the amount of address space used ○ When carefully constructed, each portion of the page table fits within a page itself ➣ Extending or shrinking page tables is simply adding or removing pages Disadvantages ○ What happens in case of a TLB miss? ➣ Two memory references required for address translation (one for the directory, another for the page table) ➣ Typical time-space trade-off ○ Added complexity of multiple levels of lookups and maintaining the table

Virtual Memory: Paging advantages

Advantages ○ Fixed size pages allows better free space management ○ Flexibility in placing virtual pages over physical frames ○ Swapping some pages from memory to disk

Programmed I/O: difficulty of implementing | interaction with different I/O devices | impact on CPU

Advantages ○ Simpler implementation ○ Generalizes well across different types of I/O devices Disadvantage ○ Constant polling wastes CPU cycles

Virtual Memory: Paging Advantages

Advantages ○ Simplifies free space management compared to segmentation ➣ Instead of maintaining a free-list of variable sized free space blocks, simply maintain a list of which pages are free ➣Reduces external fragmentation problem

Idea behind threads:

Allow multiple threads of execution within the same process environment The threads of execution can carry out tasks that are independent of each other to a large degree Multiple threads running in parallel in one process is analogous to having multiple processes running in parallel

Virtual Memory: Virtual Address Space | what it is, who sees it, what the OS does with it, why it exists

An abstraction provided by the OS. The process sees virtual addresses; the OS translates each virtual address to a physical address. Why it exists: reduces complexity and is easy to use (a process uses its own "local" virtual addresses), and provides protection against unauthorized access.

What is a process provided by and its purpose?

An abstraction provided by OS to run programs

Virtual Memory: Inverted Page Tables What is it another solution for? How is it done? What do the entries indicate? What must be done to find an entry How is access made faster?

Another solution for large page tables Instead of storing many page tables (one per process), keep one page table that has an entry for each physical frame of memory The entry tells which process is using this page, and which virtual page of that process maps to the physical frame. Finding an entry requires a linear search A hash table is maintained to speed up the lookups

Process Scheduling: Multi-Processor Scheduling Scheduling Process Migration Work Stealing

Apply any scheduling policy for each queue (ex: round robin) Process migration: migrate jobs from one queue to the other, to perform load balancing Work stealing: a queue that is low on jobs will peek into other queues and "steal" some work

Process Scheduling: Shortest Job First

Associate with each process the length of its next CPU burst The CPU is assigned to the process with the smallest (next) CPU burst Two schemes: non-preemptive preemptive - Also known as the Shortest-Remaining-Time-First (SRTF)

Parallelism

Better utilize multiple CPUs to run in parallel A multithreaded application can run on multiple CPUs at the same time A single threaded application can only use one CPU at a time

Responsiveness

Blocking of one thread (say due to an I/O) does not mean the entire process is blocked, other threads can keep running Multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation Example: applications with GUI

Hard Disk Drive: Sectors What does a grouping create? Typical size?

Fixed-size blocks laid out on each track Typical size: 512 bytes

Virtual Memory: Demand Paging When is a page brought into memory? Advantages of approach? What happens when a page is needed?

Bring a page into memory only when it is needed ○ Less I/O needed ○ Less memory needed ○ Faster response ○ Support more processes/users If a page is needed ○ Use the reference to page ○ If not in memory, must bring from the disk swap space to memory

Process Scheduling: CFS Virtual Runtime How it accumulates How CFS uses it How does nice value affect vruntime

CFS tracks virtual runtime (vruntime) of processes Each process accumulates vruntime when it runs Which process to schedule? CFS picks the one with lowest vruntime Nice value of the process decides how much of actual (physical) runtime is counted towards vruntime

context switch

CPU makes a switch from Process A to Process B Save the context of Process A Load the context of Process B

Process Scheduling Metrics

CPU utilization: percentage of time the CPU is busy executing jobs. Turnaround time (TA): amount of time to execute a particular process; TA = T(completion) - T(arrival). Waiting time (TW): amount of time a process has been waiting in the ready queue; TW = T(completion) - T(arrival) - T(burst) = TA - T(burst). Response time (TR): amount of time from when a request was submitted until the first response is produced, not the complete output; TR = T(start execution) - T(arrival).

Process Scheduling: Contradicting Objectives in Multi-Processor Scheduling

Cache affinity: keep executing on the same CPU because your cache is populated already Load balancing: migrate jobs to other CPUs because another CPU is free (although its cache won't be populated)

Hardware Cache What is its speed? Size of contents it holds? What does it hold copies of? Where is it found within a system?

Cache is essentially fast, small memories that hold copies of "popular" data that is found in the main memory of the system

Process Scheduling: Gaming/Fooling the MLFQ scheduler

Can I rewrite my program so that my job's priority never decreases in the MLFQ? Yes, just issue some I/O request periodically, give up CPU for a short time, and maintain a high priority (stay in the top queue)

Virtual Memory: Linear Page Table Structure

Can be an array of page table entries (array index is VPN) stored in memory

Possible solutions to disadvantages of user level threads

Change all system calls to non-blocking ➢ Reduces compatibility with current OS implementations Sometimes it may be possible to tell in advance if a call will block ➢ For example, select system call in some versions of Unix

exec() system call

Child process can execute the exec() system call to load a new executable into its memory Useful when we want the child process to do something different (typically the case)
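
A minimal C sketch combining fork(), exec() and wait(); running "ls -l" is just an illustrative choice of new executable:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL); /* replace the child's image */
        perror("execlp");                       /* only reached if exec fails */
        _exit(1);
    }
    int status;
    waitpid(pid, &status, 0);                   /* parent reaps the child */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}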

Process Termination Exit()

Child process termination (Child) process executes the last statement and terminates Invokes exit() system call to ask OS to terminate it Process may also terminate due to errors Parent may terminate the execution of its child processes Process resources (memory, locks etc.) are de-allocated by the operating system

Process Scheduling: Where does short-term scheduling choose processes to execute from? Where is a process from the ready queue allocated? Execution frequency compared to long-term scheduling?

Chooses process to execute from available set in memory Selects a process from ready queue and allocates CPU Executes much more frequently compared to long-term scheduling

Process address space components

Code segment: contains the executable code Data segment: contains global or static variables (e.g., initialized variables) Stack: contains local variables, function parameters and return addresses Heap: stores dynamically allocated memory (e.g., using malloc())

Virtual Memory: Segmentation What it divides up

Contiguous address space can be wasteful in terms of memory usage First approach to propose the use of non-contiguous address spaces Divide the address space in separate segments (code, heap, stack, etc.) Each segment is placed independently in physical memory Base and bound registers for each segment

Virtual Memory: Multi-Level Page Tables Basic Idea

Convert the linear page tables into a tree-like structure The tree-like structure can help in saving memory for page tables

Time economy of process creation vs thread creation Time economy of process context switch vs thread switch

Creating processes is much more time consuming than creating threads Context switch of a process is more time consuming than thread switching

Process Creation: When are they created / hierarchy

Creation: User actions, for example, start a new program A running process can create a new process System initialization Hierarchy: Parent process creates child processes, which in turn create other processes

Hard Disk Drive: Tracks What is a track composed of?

Data is encoded on each surface in concentric circles of sectors which are called tracks

Virtual Memory: Swap Space What it is What information does the OS keep to swap pages

Dedicated space used for swapping pages in/out OS will remember the disk address of pages swapped out to the swap space from memory

Process Scheduling: What does a Multi-Level Feedback Queue (MLFQ) try to solve?

Designing a scheduler that minimizes response time for interactive jobs while also minimizing turn-around time without prior knowledge of job length

Page Replacement: LRU Approximation Algorithm Difficulty of determining LRU page How to store recency of referenced pages Problem with increasing number of frames

Determining the LRU page is non-trivial ○ Need to maintain a data structure (like an array) just to store which pages were referenced recently ○ As number of frames increase, searching the array becomes slower and slower

Virtual Memory: Multi-level page tables how to create What is done with invalid entries How to track invalid pages

Divide the linear page table into pages ○ If an entire page of page-table entries is invalid, do not allocate that page at all ○ Use a directory structure to track which page of page table is valid

Virtual Memory: Segmentation Addressing Number of parts What they refer to How the stack grows

Divide the virtual address in two parts First part refers to which segment Second part refers to offset within that segment Stack grows backwards

Components of address translation with paging

Divide the virtual address into two pieces ➣ Virtual page number (VPN) ➣ Offset
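
A small C sketch of the split, assuming 32-bit virtual addresses and 4 KB pages (12 offset bits, 20 VPN bits); these sizes are illustrative:

#include <stdint.h>

#define OFFSET_BITS 12
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)   /* 0xFFF */

uint32_t vpn(uint32_t vaddr)    { return vaddr >> OFFSET_BITS; }  /* upper 20 bits */
uint32_t offset(uint32_t vaddr) { return vaddr & OFFSET_MASK; }   /* lower 12 bits */

/* Example: vaddr 0x12345 -> VPN 0x12, offset 0x345 */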

Critical Section: Fairness

Does each process waiting to get the lock get a fair chance to acquire it when it is free? Avoid starvation: cases when a thread/process has to wait forever to acquire the lock The wait should be bounded

CFS: Dynamic CPU Slice What is used to determine CPU slice

Dynamic CPU time slice sched_latency is a parameter used to determine dynamic CPU slice If there are n processes ready to run, the time slice is sched_latency/n What if there are too many processes ready to run? Time slice cannot be lower than a predefined value (e.g. 6 ms)
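
For example (illustrative numbers): with sched_latency = 48 ms and n = 4 ready processes, each process gets a 48/4 = 12 ms slice; with n = 48 the raw slice would be 1 ms, so the predefined minimum (e.g. 6 ms) is used instead.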

Virtual Memory: Demand paging EAT equation

EAT with demand paging = (1 - p) * (EMAT) + p * (page fault overhead + EMAT), where p = page fault rate EAT = effective access time EMAT = effective memory access time
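
For example (illustrative numbers): with EMAT = 200 ns, page fault overhead = 8 ms, and p = 1/1000, EAT = 0.999 * 200 ns + 0.001 * (8 ms + 200 ns) ≈ 8.2 µs, roughly a 40x slowdown from a 0.1% fault rate.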

Process Scheduling: Round Robin What does each process get What happens after this time lapses Where are new processes added Wait time for n process with time quantum q

Each process gets a small unit of CPU time (time quantum or slice) After this time has elapsed, the process is preempted and added to the end of the ready queue Newly-arriving processes (and processes that complete their I/O bursts) are added to the end of the ready queue If there are n processes in the ready queue and the time quantum is q, then no process waits more than (n-1)q time units

Virtual Memory: Size Allocation Algorithms

Equal allocation - If we have n processes and m frames, give each process m/n frames. Proportional allocation - Allocate according to the size of process.

Methods of Device Interaction (i.e. how OS communicates with I/O device) Explicit I/O Instruction Set Memory Mapped I/O

Explicit I/O instruction set ○ Instructions specify a way for the OS to send data to specific device registers ○ Execution of these instructions happens in kernel/privileged mode Memory mapped I/O ○ OS makes the device registers available to the CPU like they are memory locations ○ CPU runs load and store instructions to read and write data

Virtual Memory: Segmentation Disadvantage

External fragmentation of memory

Comparing process scheduling policies (FCFS, SJF, SRTF, RR) FCFS: turn-around time and response time SJF and SRTF: turn around time and response time RR: turnaround time and response time

FCFS Simple but very high turn-around time and response time SJF and SRTF Lower turn-around time and response time Preemption is useful RR High turn-around time but lowest response time

First-in-First-Out (FIFO) page replacement algorithm How it selects victims How it is implemented Difficulty Drawbacks

FIFO replacement algorithm chooses the "oldest" page in the memory as the victim. Implementation: FIFO queue holds identifiers of all the pages in memory. ○ We replace the page at the head of the queue. ○ When a page is brought into memory, it is inserted at the tail of the queue. Easy to understand and implement. Drawbacks ○ The "oldest" page may contain a heavily used variable. ○ Will need to bring back that page in near future

File descriptors What value does it have? How does the kernel use it to identify files? What three file descriptors does a new process start with?

File descriptors are small non-negative integers that the kernel uses to identify the files being accessed by a particular process. A child process inherits its parent process's standard file descriptors when it is created A shell opens three "standard" file descriptors whenever a new process runs ○ standard input ○ standard output ○ standard error

Virtual Memory: Address Translation Steps with paging What is retrieved from virtual addressed? What is used to lookup in page table? What are you looking for in a page table? What is done with the offset?

Find VPN from the virtual address Lookup the VPN in page table, find PFN Carry forward the offset as it is

Space allocation schemes | First Fit What it is Pro/Con

Find the first block that is big enough to fit the requested block Pros Faster, reduces search time Cons Imbalance - beginning of the memory becomes more densely allocated Cannot guarantee lower external fragmentation

Virtual Memory: Page lookup steps with present bit

First check if the page is valid If valid, see if it is present in memory If not present, cause page fault

Virtual Memory: When is a TLB flushed? Problem with frequently flushing a TLB after this event? Is repopulating a fast process? How does frequent flushing affect cache hit performance?

Flush the TLB cache after every context switch ○ Frequent context switches require frequent flushing ○ Repopulating the TLB is not fast (each miss requires a page-table walk), so emptying the TLB after every context switch decreases cache hit performance

Virtual Memory: External Fragmentation

Free memory gets divided into small pieces Memory allocation for new processes becomes a challenge Total free memory might be large enough to hold a new process, but free memory is divided into holes (blocks of free memory)

Timer Interrupts

Generated by a timer within the processor. This allows the operating system to perform certain functions on a regular basis. Example, run 2 processes in round-robin manner A timer interrupt is generated periodically (after every few milliseconds) Allows OS to regain the control of CPU

I/O Interrupt

Generated by an I/O controller, to signal normal completion of an operation or to signal a variety of error conditions.

Virtual Memory: How can process select victim frame in global replacement? How does global replacement affect process fault-rate? How is a victim frame selected in local replacement? How do unused pages affect a process in local replacement?

Global replacement - process selects a replacement frame from the set of all frames; one process can take a frame from another. Under global replacement, the page-fault rate of a given process depends also on the paging behavior of other processes. Local replacement each process selects from only its own set of allocated frames. Less used pages of memory are not made available to a process that may need them.

Virtual Memory: Memory Management Unit What it is Name an approach

Hardware device that is responsible for address translation - One per CPU Base and bound approach MMU holds both the hardware registers: base and bound

Anatomy of Typical I/O Device Hardware Interface: Types of registers and their purpose: I/O device internals: How do they vary? What do complex devices need and implement? What is the firmware and its purpose? What language is it usually written in?

Hardware interface ○ Interface between the internals of the I/O device and the rest of the computer system (connected through a bus) Registers ○ Status: used for reading/setting the status of the device (busy, idle, etc.) ○ Command: used to tell the device to perform a specific task ○ Data: pass data to and from the device I/O device internals ○ Specific to the I/O device itself and manufacturer customizations ○ Complex I/O devices have a micro-controller and memory to implement hardware-specific tasks ○ Firmware (software within the hardware device) to implement hardware functionality ➢ Usually written in lower-level programming languages (embedded C, Verilog, etc.)

CPU Thrashing and paging activity What does it mean about the system?

High-paging activity The system is spending more time paging than executing.

Challenges in lottery Scheduling

How to assign tickets? Users can assign tickets, but ticket assignment is a challenging problem How to account for I/O? Processes that do more I/O might get less CPU time; how to use tickets for CPU-bound and I/O-bound processes is an open problem Limited application

Process Scheduling: Completely Fair Scheduler (CFS) Components What is an important objective? Should searching for a process be CPU intensive?

Idea: extend the proportional fair scheduling concept to make it more practical Components: dynamic CPU time slice, priority (niceness), virtual runtime Efficiency is an important objective: searching for which process to run next should not be a highly CPU-intensive task

Prepaging Idea: Purpose: Tradeoff:

Idea: Prepage some of the pages a process will need, before they are referenced An attempt to reduce the large number of page faults that occurs at process startup Trade-off between the cost of pre-paging activity and the reduction in the number of page faults.

Things to consider when building multi-threaded application

Identify tasks within your program that you think are (more or less) independent Verify if these tasks have similar workload Check what (if any) data is shared between the tasks Create a thread for each task Test and debug

Spatial Locality

If a program accesses data item at address x, it is likely to access data items near x as well

Critical Section: mutual exclusion

If process P is executing in its critical section, then no other processes can be executing in their critical sections.

Semaphore properties: sem_wait How does value of S impact action how many threads can be in the queue

sem_wait decrements the semaphore value S; if the resulting value is greater than or equal to zero, sem_wait returns immediately, and if it is less than zero, sem_wait suspends the execution of the calling thread/process and puts it in the semaphore's queue Multiple threads can be waiting in the queue

Concurrency: Sleeping

If spinning wastes CPU cycles, put the thread to sleep Check if the lock is available or not atomically using TAS If not available, yield (allow other threads/processes to run) Issues: making a process go to sleep needs a context switch, which takes some time

Semaphores vs Locks | Bank teller problem

If there are 4 bank tellers, you need 4 locks, and scaling and managing the locks can be difficult A semaphore is more scalable: just set its value when initialized (e.g. sem_init(&teller, 0, 4);) and it will put threads to wait once that many threads are already inside

Virtual Memory: Page Replacement with no free frames What I/O occurs

If there are no free frames, two page transfers needed at each page fault First to write victim page back to the disk, second to bring the new page from disk to main memory

Virtual Memory: What is a page fault

If there is a reference to a page which is not in memory, this reference will result in a trap Called page fault

Race Condition

In a time-shared system, the atomic completion of instruction execution cannot be guaranteed CPU scheduling can be uncontrolled and can give indeterminate results (a condition called race condition)

Multi-Core CPUs What it is What issue it addresses What must it exploit and when?

Inclusion of multiple computing cores on a single chip Critical to address the power wall Need to exploit parallelism at run-time

Problem with priority based scheduling and a solution to problem

Indefinite Blocking (or Starvation) - low priority processes may never execute Aging - as time progresses increase the priority of the processes that are waiting in the system for a long time

Reaping child processes: wait()

Information about a terminated process is kept by the operating system until the process is reaped by its parent Parent reaps the child process through wait (int *child_status) Child's exit status is passed to the parent Child process is then discarded

Space allocation schemes | Buddy Allocator How it works

Initially free space is of size 2^N bytes When a request for memory is made ➣ Search for free space recursively ➣ Divide the free space by two until the smallest block big enough for the request is found
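
For example (illustrative sizes): starting from a free 64 KB block, a request for 7 KB splits 64 → 32 → 16 → 8 KB, and an 8 KB block (the smallest power-of-two block that fits) is returned.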

Virtual Memory: Hybrid Approach (Segmentation + Paging) How does the number of page tables change? How does a sparse address space impact page table size? How do contents of the base register change? What are segment bits used to determine? What does the base register of a segment locate? What does the page table map together? What does it use to translate a virtual address to a physical address?

Instead of having a single page table, store one page table per segment With sparse address spaces, these page tables will be small Base and bound registers ○ Instead of the base register pointing to where the segment is in physical memory, it points to the physical address of the segment's page table ○ Bound register set to the number of entries in the page table Use the segment bits to first determine the segment (e.g., 00: code, 01: heap, 11: stack) ○ Use the base register of that segment to locate its page table ○ Use the page table to map the VPN to a PFN ○ Translate the virtual address to a physical address using the PFN and offset

Process Scheduling: Proportional Fair Scheduler

Instead of optimizing for lower turn-around time or response time, a scheduler should guarantee that each job obtains a certain percentage of CPU time Two popular fair Schedulers: lottery scheduler Completely Fair Scheduler (CFS) - Linux Scheduler

Interrupt Driven I/O: How it works with OS What does CPU do? What is created after operation finishes? Where do OS and CPU jump to? Why is a process woken?

Instead of polling the device constantly, the OS issues a request and puts the process in the sleep/waiting state ○ Context switch to another process and run it on the CPU ○ When the device finishes the operation, it creates a hardware interrupt ○ OS and CPU jump to the corresponding ISR (Interrupt Service Routine - registered at boot time) ○ The ISR wakes up the process to continue its execution

Virtual Memory: Translation Look-aside buffer (TLB) What it is Purpose Steps when translating

Is a hardware cache that is part of the memory management unit (MMU) Meant to make address translation faster when paging Holds popular virtual page to physical frame translations When translating virtual address to physical address, first check if there is an entry of the mapping in TLB If yes ( TLB hit), directly use it, without consulting page table If not (TLB miss), check in the page table

cache affinity

It is preferable to use the same CPU when a process runs since the associated cache already has all the necessary information

Process Scheduling: Long term scheduling What it does with ready-to-run processes? What does it determine for memory?

Keep ready-to-run processes on disk storage and load them into memory as needed Long-term scheduler or job scheduler determines which processes to be brought in main memory

User level threads disadvantages

Kernel does not see multiple user threads in a process, it only sees a process The implementation of blocking system calls is highly problematic (e.g. read from the keyboard). All the threads in the process risk being blocked!

Kernel Level Threads What are the threads supported by? What does kernel do? How are threads managed? How are calls that might block a thread implemented? When a thread blocks how may the kernel choose a thread?

Kernel threads are supported directly by the OS: The kernel performs thread creation, scheduling and management in the kernel space The kernel has a thread table that keeps track of all threads in the system. All calls that might block a thread are implemented as system calls (greater cost) When a thread blocks, the kernel may choose another thread from the same process, or a thread from a different process (major difference from user-level threads)

Virtual Memory: Paging with larger pages (to shrink page tables)

Large pages mean fewer pages per address space Reduces the number of entries in each page table and the total page table size Problem: internal fragmentation ○ Size of a page is larger than what is typically requested ○ Most pages are partially utilized, resulting in wasted memory

Virtual Memory: TLB Replacement Policy Advantage What does it exploit? Disadvantage? What is the relationship to process and entries? TLB miss relation to process and TLB frames? Simple and efficient solution to disadvantage?

Least Recently Used (LRU) Advantage ➣ Exploits the locality in the memory reference stream Disadvantage ➣ When a process uses more entries than the TLB has ➣ Consider a TLB with n entries and a process that loops over n+1 pages ➣ Example: a TLB with three entries (VPN 1, 2, and 3) and a process looping over VPN 1, 2, 3 and 4 results in a lot of TLB misses Random replacement ○ Randomly choose a page entry to replace with the new one ○ Such random algorithms can be robust against corner/special cases ○ Simple and efficient to implement

Virtual Memory: Major steps in handling page fault What must it locate and where? What does it initiate? Where do pages get moved? What is done with the page table? Where does the process go after servicing the fault?

Locate an empty physical frame in memory Initiate disk I/O Move page (from disk swap space) to the empty frame Update page table; set PFN and set present bit to 1 Move back the process to the ready queue (instruction that experienced the page fault will need to be eventually re-executed)

Impact of I/O and CPU bound process on: long term scheduling short term scheduling

Long term scheduler should select a good mix of I/O bound and CPU bound processes Ensures that both CPU ready queue and I/O devices queues are balanced and not empty

Paging: Controlling page-fault rate

Maintain "acceptable" page-fault rate. If actual rate too low, process loses frame. If actual rate too high, process gains frame.

Free list space allocation scheme what structure does it maintain? How does it use the structure?

Maintain a linked list of free blocks Whenever we need to allocate memory, traverse the list and use some criterion to pick a free block

Virtual Memory: Free space allocation schemes Their purpose

Meant to reduce external fragmentation proactively

Virtual Memory: Dynamic relocation with base and bound method

Moves a process to a different part of memory by changing the value of the base register

Virtual Memory: Page swapping vs. Process Swapping

Moving individual pages of a process (demand paging) versus the entire process

Concurrency: Lock Evaluation Mutual Exclusion? Can it starve a thread? What is its performance?

Mutual Exclusion achieved Can lead to starvation Poor performance

Requirements for solving critical sections

Mutual exclusion Fairness Performance

Spinlock Evaluation

Mutual exclusion achieved Fairness - a process/thread waiting to acquire a lock can spin forever (i.e. no guarantee on fairness) Performance - spinning wastes CPU cycles Most useful when the critical section is small

Process Scheduling: CFS Priority Name of variable and its range of values

Niceness: A classic UNIX mechanism to assign priority to processes Nice value -20 (higher priority) to 19 (lower priority) Default value 0

Critical Section: Disabling Interrupts

No interrupts means: No context switch (recall: timer interrupts) Process will enter critical section and exit it, and it cannot be switched out of CPU in between Atomic execution: all or nothing

wait(int *child_status) system call

Suspends parent process execution until one of its children terminates Return value is the PID of the child process that terminated If child_status != NULL, then the object it points to will be set to a status indicating why the child process terminated

When SJF and SRTF are most optimal

Non-preemptive SJF is optimal if all the processes are ready simultaneously Gives minimum average waiting time for a given set of processes SRTF is optimal (preemptive) if the processes may arrive at different times

Convoy effect

A number of relatively short processes get queued behind a long process

How does the OS detect and respond to thrashing? Which page replacement algorithm is used If a process needs the replaced frame it causes more: What happens to the waiting queue Utilization of CPU OS's effect on paging activity

The OS observes low CPU utilization and increases the degree of multiprogramming. A global page-replacement algorithm is used; it takes away frames belonging to other processes. But those processes need those pages, so they also cause page faults. Many processes join the waiting queue for the paging device, and CPU utilization decreases further. The OS introduces new processes, further increasing the paging activity.

Programmed I/O: How OS interacts with device What does CPU write data to? How is disk asked to copy data? How does the OS confirm completion of transfer?

The OS repeatedly polls the device to see if it is idle or busy ○ Write data to the register ➢ Carried out by the CPU ➢ Example: writing a block of data to disk requires writing the DATA register many times ○ Ask the disk to copy data internally from the register to the disk ○ The OS keeps polling to see if the transfer is complete

Hard Disk Drive: Disk Head How many per platter? Where must the disk head move to read or write a sector?

One per surface of platter To read or write a specific sector, disk head should be moved to the desired track and sector

Space allocation schemes | Worst Fit What it is pro/con

Opposite of the best fit Search through the free list, and find the largest hole that can fit the new block Pros Reduces the size of largest free block Cons Requires an exhaustive search of the free list

Concurrency Problems:

Ordering: Many practical cases where a thread wishes to check whether a specific condition is true before it can continue its execution

Least-Recently-Used (LRU) page replacement algorithm How it does it replace pages?

Page replacement where the page that has gone unused the longest is replaced.

Per Process Items vs Per Thread Items

Per Process: address space, global variables, open files Per Thread: program counter, registers, stack, state

Virtual Memory: External Fragmentation and Compaction What it is Issues with approach

Periodically shuffle/move the existing segments to create larger, contiguous blocks of free memory Frequent memory copying takes time Interrupts process execution: a running process must be stopped before its segments can be moved

Hard Disk Drive: Spindle What connects to it? What does it rotate? Unit of rate of rotation?

Platters are connected to a spindle and a motor which rotates them The rate of rotation is often measured in rotations per minute (RPM)

Process Control Block: Memory Management

Pointer to code segment Pointer to data segment Pointer to stack segment

Interrupt table

Points to Interrupt service routines - segment of code which determines what actions to take for each type of interrupt Once the interrupt has been serviced by the ISR, the control is returned to the interrupted program Need to save the "process state" on its kernel stack before ISR takes over

Process Scheduling: MLFQ Starvation

Presence of interactive jobs can starve long CPU-bound jobs

Problems with limit of physical memory reason for problem Solution to space How does this relate to demand paging How is virtual memory affected

Problem: Physical memory is limited ○ Virtual address space of programs can be really large Solution: not all pages of a process have to be in main memory at all times ○ Demand paging: just load the pages that are needed to run the process at that time ○ Load the pages as needed from hard disk to memory, and swap them out when they are not needed Large virtual memory ○ Since many more virtual pages can be sitting on the disk, processes have an illusion of very large virtual memory ○ OS takes care of loading/swapping pages at run-time

Process Control Block: Process Management

Process ID Registers Program counter (PC) Program Status Word (PSW) Stack pointer Process state Priority Scheduling parameters Parent process Time when process started CPU time used

Process Scheduling: Preemptive shortest job first (a.k.a. shortest remaining time first)

Process in execution can be interrupted by a shorter process that gets put into the queue. Otherwise exactly like SJF

Kernel Level Threads Advantages

Process scheduling can be made thread-aware ➢ Kernel knowing about the threads can better schedule them to run Calls ➢ Blocking calls can be implemented using system calls without any OS modifications

Process Scheduling: First Come First Serve

Processes run to completion, without preemption, in the order they arrive

User mode: Number of available operations? But what if it wants to issue an I/O request or send a packet over the network?

Program code running in user mode can perform a limited set of operations But what if it wants to issue an I/O request or send a packet over the network? It must issue a system call

Program vs. process

Program often refers to a passive entity An executable file stored on disk Process is an instance of a computer program that is being executed Running a program can create many processes

Why are system calls needed?

Protection - a limitation of direct execution When a user program wants to perform a privileged operation, it issues a system call If processes are allowed to do whatever they want, it could be harmful to a computer system System calls provide an interface from user program to restricted kernel operations

Reader-Write Problem with Semaphores Solution to problem

Reader-Writer Problem: a data object (e.g. a file) is to be shared among several concurrent threads, with multiple readers and writers A writer thread must have exclusive access to the data object (no other reader or writer thread) Multiple reader threads may access the shared data simultaneously without a problem Solution: keep a reader_count variable and a binary read/write semaphore Whenever a reader enters, increment reader_count; if reader_count equals one (the first reader), acquire the semaphore so no writer can write Decrement reader_count once a reader is done reading; if it reaches zero, release the semaphore to wake up a waiting writer (see the sketch below)
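
A C sketch of this readers-writers solution using POSIX semaphores; the function names are illustrative:

#include <semaphore.h>

sem_t mutex;           /* protects reader_count */
sem_t rw;              /* held by a writer, or by the group of readers */
int reader_count = 0;

/* initialize somewhere: sem_init(&mutex, 0, 1); sem_init(&rw, 0, 1); */

void reader_enter(void) {
    sem_wait(&mutex);
    reader_count++;
    if (reader_count == 1) sem_wait(&rw);  /* first reader locks out writers */
    sem_post(&mutex);
}

void reader_exit(void) {
    sem_wait(&mutex);
    reader_count--;
    if (reader_count == 0) sem_post(&rw);  /* last reader lets writers in */
    sem_post(&mutex);
}

void writer_enter(void) { sem_wait(&rw); } /* exclusive access */
void writer_exit(void)  { sem_post(&rw); }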

Goal of Optimal Page Replacement

Replace the page that will not be used for the longest period of time in the future

Process Control Block: What it is/components

Representation of process used by the OS One per process Process Management Memory Management File Management

Process Control Block: File Management Components

Root directory Working directory User ID Group ID

Process Scheduling: MLFQ Refined Rules

Rule 1: If Priority(A) > Priority(B), A runs and B doesn't Rule 2: If Priority(A) = Priority(B), A and B run in Round Robin Rule 3: When a job enters the system, it is placed in the highest-priority queue Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue) Rule 5: After some time period S, move all the jobs in the system to the topmost queue

Space allocation schemes | Best Fit How it works Pro/Con

Search through the free list and find the smallest hole that is large enough to accommodate the new block Pros Reduces the wasted space and external fragmentation Cons Requires an exhaustive search of the free list

Process Scheduling: CFS Efficiency Search time of CFS using a queue Most efficient method implementation Where are ready processes kept? What does it do with a process that does I/O? How long do insertion and deletion take? When is it especially efficient?

Searching in a queue can be O(n) Balanced binary tree: red-black tree Keep all the ready processes in a red-black tree If a process does I/O, it is removed from the tree Insertion and deletion can be done in O(log n) Especially efficient when n can be in thousands

Virtual Memory: Paging vs Segmentation Address space size Allocation of segments Problem with segmentation

Segmentation ○ Address space divided into variable sized segments ○ Each segment can be placed at different location on the physical memory ○ Disadvantage: external fragmentation Paging ○ Divide the address space into fixed-sized pieces (called pages)

Virtual Memory: Segment Sharing What it is Name a challenge

Segmentation allows sharing of segments between processes Challenge: Additional checks needed to ensure a process has the right access to read, write and/or execute a segment

What is a locality of pages locality model on process execution

Set of pages that are actively used together According to the locality model, as a process executes, it moves from locality to locality

waitpid (pid, int *child_status, options)

Suspends the current process until a specific child process (with PID == pid) terminates

Process Scheduling: Lottery Scheduler What is a share of a process represented by? What does a percentage of tickets represent? How is which process to run determined?

Share of a process is represented by tickets The percentage of tickets that a process has represents its share of CPU utilization Lottery scheduler is probabilistic ○ Pick a random number and determine the process to run based on whose ticket number won the lottery (see the sketch below)
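
A C sketch of the lottery draw; the proc structure and ticket counts are illustrative assumptions:

#include <stdlib.h>

struct proc { int tickets; /* ... other scheduling state ... */ };

/* total_tickets must equal the sum of procs[i].tickets */
int pick_winner(struct proc procs[], int n, int total_tickets) {
    int winner  = rand() % total_tickets;  /* draw a random ticket */
    int counter = 0;
    for (int i = 0; i < n; i++) {
        counter += procs[i].tickets;
        if (counter > winner) return i;    /* this process holds the winning ticket */
    }
    return n - 1;                          /* not reached if totals are consistent */
}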

Resource sharing

Sharing the address space and other resources may result in high degree of cooperation Faster inter-thread communication through address space

Building a shell

Shell is a user program that shows you a prompt and waits for you to type a command The fork(), wait() and exec() calls essential in building a UNIX shell

Process Scheduling: Benefits of Lottery Scheduling

Simple and light-weight: randomness allows quick decision making on which process to run, so the scheduler does not consume much CPU Tickets can represent more than CPU share, such as general resource share for a process (for example, they can also be used to assign virtual memory); a simple, efficient way to track proportionality between processes Ticket transfer: a process can temporarily hand off its tickets to another process, increasing that process's likelihood of getting CPU share; useful in many applications like client-server (Process A is waiting for Process B to do a task, but Process B does not get to run; Process A can hand over some of its tickets to Process B, making sure it gets to run soon) Ticket inflation: trusting processes collaborate to boost the ticket count of a process that they all agree should run

Virtual memory: Different sized processes How base and bound handle them at runtime and during a context switch

Simply load their size to bound register when process is running Save the value of base and bound registers at the time of context switch

How does MLFQ know about the length or type of a process? How are priority and time slice assigned? What happens to jobs that want more CPU time? How does it compare to SJF?

Since there is no knowledge about length or type of job Start by assigning high-priority and short time slice ➣ A short job will be completed with higher priority ➣ If a job wants more CPU time, reduce its priority This works as an approximation to SJF without the need of knowing job length

Spinning vs. Sleeping vs. Both

Sleeping helps save CPU cycles and allows the process holding the lock to complete its critical section faster Both: spin for a bit, and if the lock becomes available, acquire it (saved a context switch); if not, go to sleep (save CPU cycles)

Device driver disadvantages: Device with special abilities

Some devices have special capabilities but device driver cannot expose them to the generalized interface

Temporal Locality

Something (data/instruction) that was accessed recently will be accessed again soon For example, variables and instructions accessed in a loop

Space allocation schemes | Next Fit What it is Pro/Con

Start the search from last allocated block in the free list, and find the first block that is big enough to fit the requested block Pros Similar to first-fit, except allows for better balanced allocation throughout the memory Faster, reduces search time Cons Cannot guarantee lower external fragmentation

Thread Control Block (TCB) stores per __________ items Where is it linked from?

Stores per thread items, typically linked from PCB of the process

Spinlocks What are they based on? What do they check for and do?

TAS-based locks Continuously spinning/checking if a lock is available
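
A minimal C sketch of a test-and-set spinlock using C11 atomics; the type and function names are illustrative:

#include <stdatomic.h>

typedef struct { atomic_int flag; } spinlock_t;  /* 0 = free, 1 = held; initialize flag to 0 */

void spin_lock(spinlock_t *l) {
    /* atomically set flag to 1; an old value of 1 means someone else holds the lock */
    while (atomic_exchange(&l->flag, 1) == 1)
        ;  /* spin until the exchange returns 0 */
}

void spin_unlock(spinlock_t *l) {
    atomic_store(&l->flag, 0);  /* release the lock */
}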

Virtual Memory: Translation look aside buffer (TLB) and context switch problem

TLB entries can be ambiguous after context switch Ex: Process P1 requests a translation VPN 10 -> PFN 100 Process P1 is descheduled, context switch starts Process P2 Process P2 requests a translation VPN 10 -> PFN 170

Virtual Memory: Translation Look-aside buffer (TLB) advantages On TLB hit what is saved? Result of being properly populated?

TLB hit saves a memory reference otherwise needed to access the page tables When properly populated, TLB can make address translation much faster

What is the context of a process?

The context of a process includes values of the general-purpose registers, program counter, stack pointer, program status word and other information ○ Save the context ➣ OS saves these values from CPU registers to process's PCB ○ Load the context ➣ OS loads values from PCB to CPU registers

How to avoid thrashing

To avoid thrashing, we must provide every process in memory as many frames as it needs to run without an excessive number of page faults.

Moore's Law

The number of transistors per square inch on an integrated chip doubles every 18 months

User level thread advantages

The operating system does not need to support multi-threading Efficiency Since the kernel is not involved, thread switching may be very fast Scheduling Each process may have its own customized thread scheduling algorithm Thread scheduler may be implemented in the user space very efficiently

Wrapper functions

The standard C library includes a set of "wrapper" functions that developers can use The wrapper functions make the necessary checks before making the system calls (which are written in assembly)

How to handle zombie and orphan processes?

The zombie and orphan processes are adopted by the special system process init (pid=1) that periodically cleans them up

Second-Chance Page Replacement Algorithm How it builds on FIFO What value is used to pick a page to replace What value do frequently used pages maintain One way to implement

This is basically the FIFO algorithm with the reference bit. When a page is selected for replacement, we inspect its reference bit. ○ If the reference bit = 0, we directly replace it. ○ If the reference bit = 1, we give that page a second chance and move on to select the next FIFO page. However, we set its reference bit to 0. If a page is used often enough to keep its reference bit set to 1, it will not be replaced. One way to implement the second-chance algorithm is to use a circular queue (see the sketch below).
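
A C sketch of the circular-queue ("clock") implementation hinted at above; the frame array, its size, and the ref_bit field are illustrative assumptions:

#include <stdbool.h>

#define NFRAMES 64               /* assumed number of frames */

struct frame { bool ref_bit; /* ... page info ... */ };
struct frame frames[NFRAMES];
int hand = 0;                    /* circular-queue pointer */

int pick_victim(void) {
    for (;;) {
        if (!frames[hand].ref_bit) {
            int victim = hand;               /* ref bit 0: replace this page */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].ref_bit = false;        /* second chance: clear bit and move on */
        hand = (hand + 1) % NFRAMES;
    }
}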

HDD Seek Time What is it? What is the average seek?

Time it takes to move the head to the desired track Avg seek time is 1/3 of the full seek time

HDD: Equation for Time of I/O (Tio) Equation for Rate of I/O (Rio)

Tio = Tseek + Trotation + Ttransfer Rio = Size of transfer / Tio
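
For example (illustrative numbers): with Tseek = 4 ms, Trotation = 2 ms and Ttransfer = 50 µs for a 4 KB request, Tio ≈ 6.05 ms and Rio = 4 KB / 6.05 ms ≈ 0.66 MB/s.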

Virtual memory: Goal Descriptions Transparency Efficiency Protection

Transparency ➣ OS should translate from virtual addresses to physical address without the process even knowing about it Efficiency ➣Fast translations ➣Better usage of memory resource Protection ➣ Protect one process's memory from other processes ➣ Isolation: process execution should not affect other parts of the memory

Interrupts: different types

Traps (software interrupt) I/O interrupts Timer Interrupts Hardware failure interrupts

Virtual memory: Contents of base register? Contents of bound register? What does it protect? How to determine size?

Two registers: base and bound Base Register: Address in physical memory where the process address space is loaded Bound register: Ensure that address is within the confines of address space Provides protection - within the "bounds" of the process Initialized to size of the process address space

Virtual Memory: Page faults System performance impact What happens to process with page fault How does page fault affect disk drive

affects the system performance negatively The process experiencing the page fault will not be able to continue until the missing page is brought to the main memory The process will be blocked (moved to the waiting state) Dealing with the page fault involves disk I/O ○ Increased demand to the disk drive ○ Increased waiting time for process experiencing the page fault

Virtual Memory: How does OS know where the missing page is on the disk?

Use the page table PFN field to store the page address on swap space

Enhanced Second Chance Page Replacement Algorithm How it builds on second chance algorithm Meaning of each value When is a page replaced

Use the second-chance algorithm by considering both the reference bit and the modify bit together ○ (0,0) neither recently used nor modified - best page to replace. ○ (0,1) not recently used but modified - not quite as good, because the page will need to be written out before replacement. ○ (1,0) recently used but clean - probably will be used again soon. ○ (1,1) recently used and modified - probably will be used again soon, and we will need to write it out to disk We replace the first page encountered in the lowest non-empty class.

Where can race conditions happen?

User processes Kernel processes Between multiple threads of a process

Trap interrupt: How does a user program make one?

User program makes a system call (trap is type of an interrupt)

Virtual Memory: Page faults vs the number of frames and Belady's anomaly

Usually, for a given reference string the number of page faults decreases as we increase the number of frames. Belady's Anomaly More frames => more page faults for some reference strings Depends on which page replacement algorithm is used

Virtual Memory: Translation Look-aside buffer (TLB) other bits

Valid bit: says whether the entry is a valid translation or not (useful on a context switch) Protection bit: determines how a page can be accessed Present bit: indicates whether the page is in physical memory or swapped out to disk Dirty bit: indicates whether the page has been modified since it was brought into memory Global bit: marks globally shared pages (e.g., kernel pages) whose translations are not flushed on a context switch

Working-set model variables WSSi stands for: Demand of frames (D) is determined by: How are D and the number of frames in memory related:

WSSi = size of the working set of process pi
D = Σ WSSi = total demand for frames
If D > the number of frames in memory => thrashing
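A small worked example with hypothetical sizes: if WSS1 = 10, WSS2 = 20, and WSS3 = 30, then D = 60 frames; with only 50 frames in memory, D > 50 and the system thrashes, so one process should be suspended.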

Critical Section: Performance

The evaluation criterion: what is the impact of the lock on the CPU execution of processes/threads that are waiting for or have acquired the lock?

Fork: Copy-On-Write How it relates to parent and child pages: When is a page copied? How does it affect process creation?

Allows both parent and child processes to initially share the same pages in memory. Only if either process modifies a shared page is the page copied. Allows more efficient process creation.

wait() system call

Allows the parent to wait for a child process to finish what it has been doing; see the sketch below.
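A minimal fork()/wait() sketch; waiting also reaps the child, so it does not linger as a zombie:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                  /* child */
            printf("child %d running\n", (int)getpid());
            exit(0);
        }
        int status;
        waitpid(pid, &status, 0);        /* parent blocks until the child exits (and reaps it) */
        printf("parent: child %d finished\n", (int)pid);
        return 0;
    }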

Priority Inversion Problem

Considering priority-based process scheduling on the CPU: a higher-priority process is forced to wait while a lower-priority process holds a resource (e.g., a lock) that it needs, so their effective priorities are inverted.

OS manages processes through:

each process has a unique PID Process address space Process control block

Non-preemptive scheduling

each running process keeps the CPU until it completes or it switches to the waiting (blocked) state.

How to know which process to wake up after lock

When a process yields because the lock is not available, put it in a queue. When the lock becomes available, wake up the first process in the queue.

What happens when a system call is made?

When a user program makes a system call, a trap instruction is issued The trap instruction switches to kernel mode and raises privilege level Once in kernel mode, the privileged operations are performed When finished, OS issues return-from-trap Continue program execution in user mode
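A user-level view of this path: a library wrapper such as write() issues the trap instruction; the kernel performs the privileged I/O and return-from-trap resumes the program in user mode.

    #include <unistd.h>

    int main(void)
    {
        /* write() is a thin wrapper around the trap into the kernel */
        write(STDOUT_FILENO, "hello from user mode\n", 21);
        return 0;
    }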

Kernel Mode

While in kernel mode, code can perform privileged operations such as issuing an I/O

Page Replacement: Reference Bit Initial Value of reference bit? What is reference bit set to when page is referenced? What does reference bit help you use? Is the order of use known?

With each page associate a bit, initially set to 0. When the page is referenced, bit set to 1. By examining the reference bits, we can determine which pages have been used We do not know the order of use, however.

Virtual Memory: Page Replacement Modify Bit (a.k.a. dirty bit)

Write the (victim) page back to disk only if it has been modified since it was brought into memory. Maintain a dirty bit in the page table. If the dirty bit is set, write the page to disk at the time of page replacement; otherwise skip the write. Saves I/O operations when possible.

What happens to child process when parent process does not reap?

Zombie: a child process that has terminated but has not been reaped by its parent. Orphan: when the parent process terminates without waiting for its children, the child processes become orphans.

Preemptive Scheduling Name an example

A running process may also be forced to release the CPU even though it is neither completed nor blocked. Ex: in time-sharing systems, when the running process reaches the end of its time quantum (slice).

Device drivers Who provides abstraction to manage I/O operations What does it interpret

abstraction provided by OS to manage I/O operations Interprets standard calls on file system (without knowing the type of disk and its interconnect) and communicates with the I/O device A piece of software within the OS that knows the specifics of the device ○ It exposes a "generic" interface to the rest of the OS, allowing other programs to interact with the device in one unified way ○ The device driver encapsulates the specifics of how to communicate OS commands to the I/O device

Semaphores and ordering

Ex: if the parent needs to wait on the child after a fork, initialize a binary semaphore to 0 and have the parent call sem_wait after forking; the child calls sem_post once the actions the parent is waiting on are complete. See the sketch below.
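A minimal pthreads sketch of this ordering pattern (using a thread as the "child" rather than a forked process; with fork() the semaphore would need to live in shared memory):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t s;

    void *child(void *arg)
    {
        printf("child: work the parent is waiting on\n");
        sem_post(&s);                 /* signal: work done */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        sem_init(&s, 0, 0);           /* initial value 0: parent will block */
        pthread_create(&t, NULL, child, NULL);
        sem_wait(&s);                 /* parent waits until the child posts */
        printf("parent: continues after child\n");
        pthread_join(t, NULL);
        return 0;
    }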

Hardware failure interrupts

exactly what it sounds like. example is a power failure

Virtual Memory: Page replacement What happens to a pages frames and multiprogramming? What conditions cause issues Objective with page fault rate

Frees frames so that the degree of multiprogramming does not have to be reduced. It can be bad if we free a memory frame that is still in use: the victim page must be written back to disk and the page tables updated before reuse. Primary objective: use the algorithm that provides the lowest page-fault rate.

Test-And-Spin Locks What helps it achieve mutual exclusion? How is it performed? What does it do if lock is zero? If lock is one?

Hardware support to achieve mutual exclusion. TAS is performed atomically using hardware support: it sets lock = 1 and returns the old value. If lock == 0: TAS returns 0 and sets lock = 1 (acquired). If lock == 1: TAS returns 1 and the lock stays 1; keep checking until it returns 0 (spinning). See the sketch below.
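A minimal C11 sketch using atomic_flag as the hardware-supported TAS (the atomic test-and-set returns the old value):

    #include <stdatomic.h>

    typedef struct { atomic_flag flag; } spinlock_t;
    /* initialize with: spinlock_t l = { ATOMIC_FLAG_INIT }; */

    void lock(spinlock_t *l)
    {
        /* returns the OLD value: 0 means the lock was free and we just
         * acquired it; 1 means someone else holds it, so keep spinning */
        while (atomic_flag_test_and_set(&l->flag))
            ;  /* spin */
    }

    void unlock(spinlock_t *l)
    {
        atomic_flag_clear(&l->flag);  /* set lock back to 0 (free) */
    }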

An atomic operation

is performed without interruption

Thread table

keeps track only of the per-thread items (program counter, stack pointer, registers, state, ...)

CPU Power Wall

metaphorical wall signifying the peak power constraint of a system.

Concurrency: Flag Variables Evaluation

not enough to guarantee mutual exclusion Can lead to starvation Poor performance

semaphore what it is what its methods are Are the methods atomically executed

An object with an integer value that can be modified by two operations: sem_wait(sem_t *s) - decrements the value of semaphore s by one and waits if the resulting value is negative; sem_post(sem_t *s) - increments the value of semaphore s by one and, if there are one or more waiting threads, wakes one. Both operations are performed atomically.

Virtual Memory: Page fault rate desired range What do the extremes of the range mean

p = page fault rate, 0 <= p <= 1
p = 0: no page faults
p = 1: every reference is a page fault
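The standard effective-access-time formula built on p, with hypothetical numbers: EAT = (1 - p) x memory access time + p x page-fault service time. With a 200 ns memory access and an 8 ms service time, even p = 0.001 gives EAT ≈ 0.999 x 200 ns + 0.001 x 8,000,000 ns ≈ 8.2 µs, roughly a 40x slowdown.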

Context switch time is:

Pure overhead; the amount of overhead depends on the hardware.

Virtual Memory: Page Table

stores the mapping from virtual pages to physical frames

User level threads

supported above the kernel and are implemented by a thread library at user level The thread library provides support for thread creation, scheduling and management with no support from kernel When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process.

Cache Coherence Problem

The problem of keeping multiple caches that hold copies of the same data up to date and accurate when a CPU writes to its cache.

Multiple threads within a process share...

their address space

HDD: Rotational Delay What is it? What is average rotational delay?

Time it takes to rotate the platter until the desired sector is under the head. Avg rotational delay = R/2, where R = full rotational delay (time for one complete rotation).
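A worked example with a hypothetical 7,200 RPM disk: R = 60 s / 7,200 ≈ 8.33 ms per rotation, so the average rotational delay is R/2 ≈ 4.17 ms.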

Virtual memory: When is virtual address translated to physical address?

The virtual address is translated to a physical address at run time.

Virtual memory fork: What happens to address space: Speed of process creation What happens to parent

vfork() - virtual memory fork() ○ Do not copy the address space ○ Parent and child process share the exact same address space ○ Faster process creation but use with caution ➣ Parent is suspended until child exits or invokes exec() call

Process scheduling: CFS vruntime_i equation

vruntime_i = vruntime_i + (weight_0 / weight_i) * runtime_i
weight is derived from the process's priority (nice value); weight_0 is the weight of a default-priority (nice 0) process, so lower-nice (higher-priority) processes accumulate vruntime more slowly.
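A hedged worked example: with the default nice-0 weight of 1024 and a higher-priority process whose weight is larger (roughly 3121 at nice -5 in Linux's weight table), vruntime grows at about (1024/3121) ≈ 0.33 of real runtime, so that process keeps the smallest vruntime longer and is scheduled more often.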

Semaphore properties: sem_post What does it do? what does a negative value mean?

Increments the semaphore value by one and wakes up a thread from the queue if any are waiting. If the value is negative, its absolute value equals the number of waiting threads.

When do race conditions arise? What is the final result of the data dependent on?

When multiple threads/processes are reading and writing some shared data. The final result depends on who runs precisely when (i.e., on CPU scheduling).

Virtual memory: Sparse address spaces Fixed Size and page utilization Effect mapping unused pages on memory? Solution to prevent unused pages?

○ Out of the fixed-sized virtual address space, many of the pages are actually not used at all ○ Mapping these unused pages to physical frames will waste memory ○ Page table entry includes a "valid bit" which is set if the virtual page is mapped to a physical frame

