Midterm (Intro + Virtualization)

Which function call causes the processor to directly resume the execution of scheduler() in xv6? 1. swtch(&proc->context, cpu->scheduler) 2.trap(struct trapframe *tf) 3.swtch(&cpu->scheduler, proc->context) 4.yield() 5.sched()

1

When a process is created and waiting for memory allocation, which state is it in? 1. initial 2. blocked 3. running 4. ready 5. zombie

1

Which of the following statements are true about paging? 1)It has an internal fragmentation issue. 2)Only the page table needs to be maintained. The key information in the page table is the PFN. The PFN is just the index of a page frame, which requires fewer bits than a base or bound in segmentation. 3)It has an external fragmentation issue.

1,2

Which statements are true about base-and-bounds dynamic relocation? 1)Access to the base and bounds registers is privileged 2)base-and-bounds dynamic relocation has an internal fragmentation issue. 3)Bounds register is used for memory mapping from logical to physical address. 4)Base register is used for memory protection.

1,2

How does MLFQ reduce average waiting time? 1. It mimics the Shortest Remaining Time First (SRTF) policy 2. Use the age (attained service) of a job to approximate the remaining size of a job/process 3. Short jobs are most likely to be finished in the top priority queues

1,2,3

In the era of minicomputers, which of the following were the key features provided by the OS? 1: Multi-programming 2: Concurrency 3: Memory Protection 4: Mobility

1,2,3

Which are the components of machine state that comprise a process? 1. I/O information 2. Memory 3. Registers

1,2,3

Which of the following statements are true for multi-level paging? 1)Unlike the paging + segmentation approach, multi-level paging doesn't have an external fragmentation issue. 2)Multi-level paging lets the OS compress the page table more efficiently. 3)Multi-level paging can introduce more memory access time for a virtual address translation.

1,2,3

Which of the following statements are true for swap space? 1)The overall size of swap space can be larger than the physical memory. 2)Hard drive or Solid state disk can be used for swap space 3)The basic swap space unit is a page. 4)Swap space is as fast as TLB.

1,2,3

Which statements are true regarding Shortest Job First (SJF) scheduling policy? 1.It is not practical since it is hard to determine the length of a next CPU burst before running it 2. Some job might starve 3. It is optimal policy among all non-preemptive scheduling policies in terms of reducing average waiting time

1,2,3

Which statements are true when we try to prove SJF can minimize the average waiting time? 1.For any non-SJF policy, we can swap jobs step by step so that eventually all jobs will follow the SJF order. The number of swaps is finite. 2.Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process. 3.For any non-SJF policy, we can always find a pair of jobs which are not scheduled in the SJF order.

1,2,3

Which statements are true? 1.For any PS scheduling policy, we can use an aging approach to overcome starvation, i.e., gradually increasing the priority over time. 2.NP-PS scheduling policy may result in job starvation. 3.Priority can be defined by the urgency of a job. 4.P-PS scheduling policy never results in job starvation.

1,2,3

How does MLFQ reduce the response time? 1. Low priority jobs could be preempted by a new job 2. It mimics RR with a short quantum size 3. All jobs will only get one chance in the highest priority queue before priority boosting 4. All new jobs will be allocated to the high priority queue with a short quantum size

1,2,3,4

Which of the following are true regarding xv6? 1.xv6 can boot on real hardware 2.xv6 is an implementation of Unix on x86 3.We can install xv6 in a Docker container. 4.xv6 can run with multiple cores

1,2,3,4

Which of the following statements are true for a page fault? 1)When the OS accesses a page table entry and sees the present bit is 0, a page fault will be issued. 2)When the present bit is 0, the PFN value is the disk address of the page in swap space. 3)Upon a page fault, the OS is invoked to service the page fault by swapping the page into memory. 4)The process will be in the blocked state while the page fault is being serviced.

1,2,3,4

Which of the following statements are true for using the hybrid solution, paging + segmentation, to manage page tables? 1)We use the base to hold the physical address of the page table of that segment. 2)It may result in external fragmentation since a page table can be of arbitrary size and needs contiguous pages to store. 3)Unallocated pages between the stack and the heap no longer take up space in a page table. 4)The bounds register is used to indicate the end of the page table (i.e., how many valid pages it has).

1,2,3,4

Which of the following statements are true regarding process and program? 1.Process is active 2.Process is a running program 3.Program is passive and lifeless 4.When you are using a browser with your laptop, an OS is a running process in the background

1,2,3,4

Which of the following statements are true? 1. System calls allow the kernel to provide a gateway through which certain key pieces of functionality are exposed to user programs, such as file access 2. Code that runs in kernel mode can do what it likes 3. Even if a user process takes control of a CPU, the OS can regain control of the CPU with a hardware interrupt such as the timer interrupt 4. Any code that runs in user mode is restricted in what it can do

1,2,3,4

Which statements are true about Round-Robin (RR) scheduling policy? 1.RR scheduling policy can provide time sharing for small quantum size. 2.RR scheduling policy might result in large context switch overhead for small quantum size. 3.RR scheduling policy can provide better response time for small quantum size. 4.RR scheduling policy might result in long average waiting time.

1,2,3,4

Which statements are true? 1.When the quantum size goes to infinity, RR is reduced to FCFS. 2.If the time quantum is 49 and the context switch overhead is 1, then the CPU utilization on average is 98%. 3.When the quantum size goes to 0, RR becomes a processor sharing in theory. 4.A rule of thumb on how to pick the quantum size: 80 percent of the CPU bursts should be shorter than the time quantum.

1,2,3,4

Which statements are true? 1)The loader in the OS sets up the logical address space to run the program. 2)The hardware just provides the low-level mechanism for efficient address translation. 3)Logically, the program in memory always starts at address 0 and ends at the maximum size. 4)The OS must keep track of which locations are free and which are in use, and maintain control over how memory is used.

1,2,3,4

Which of the following are the design goals for OS? 1: To achieve energy efficiency, security, or mobility. 2: To build up some abstractions in order to make the system convenient and easy to use. 3:To provide high performance or to minimize the overheads of the OS. 4:To build a reliable OS since OS is always running. 5:To provide protection between applications, as well as between the OS and applications.

1,2,3,4,5

Which statements are true regarding FIFO (First Come First Serve) scheduling policy? 1.It is fair 2.The ready queue is managed in the FIFO order 3.It might result in long average waiting time 4.Some jobs might starve 5. It has a convoy effect

1,2,3,5

What is an OS? 1. a hardware abstraction layer 2.an intermediary between programs and the computer hardware 3. a set of utilities 4. a resource manager

1,2,4

Which of the following statements are true regarding the separation of fork() and exec()? 1. The separation allows the shell to do a bunch of cool things easily, such as lists, redirection, and pipes 2. The separation lets the shell run other code after the call to fork() but before the call to exec() 3. The separation is not necessary 4. The code run between fork() and exec() can alter the environment of the about-to-be-run program, and thus enables a variety of interesting features to be readily built.

1,2,4
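
A minimal C sketch of what the separation enables, assuming a POSIX system; the file name out.txt and the ls command are illustrative choices, not from the card:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {                       /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {               /* child: code run between fork() and exec() */
        close(STDOUT_FILENO);            /* alter the environment: redirect stdout... */
        open("out.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);   /* ...to a file */
        char *args[] = {"ls", "-l", NULL};
        execvp(args[0], args);           /* the about-to-be-run program inherits the redirection */
        perror("execvp");                /* reached only if exec fails */
        exit(1);
    }
    wait(NULL);                          /* parent waits for the child */
    return 0;
}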

Which of the following statements are true for the interrupt? 1. Before running the scheduling policy, the OS needs to execute a low-level piece of code called a context switch 2. A timer device can be programmed to raise an interrupt periodically 3. Timer interrupt is a cooperative approach 4. After the timer interrupt is raised, the currently running process is halted, and a preconfigured interrupt handler in the OS runs

1,2,4

Which of the following is true: 1. "Given a number of possible programs to run on a CPU, which program the OS should run" is a policy issue 2. "How the OS stops running one program and starts running another on a given CPU" is a policy issue 3. "How a program runs on a CPU" is a mechanism issue

1,3

Which of the following statements are true regarding interrupt or trap? 1.There are 3 potential sources for an interrupt: hardware, processor, and software 2.A system call is a hardware interrupt 3.When executing a trap, on x86 the processor will push the program counter onto a per-process kernel stack 4.A timer interrupt is a software interrupt

1,3

What are the main pieces of OS we will cover in this course? 1.concurrency 2.distribution 3.virtualization 4.persistence

1,3,4

Which of the following statements are true regarding LRU page replacement policy? 1)It uses the recent past as an approximation of the near future. 2)Replace the page that has least frequently been used. 3)It is expensive to implement due to the overhead for accounting memory reference. 4)It can reduce the number of page faults due to the temporal locality of memory access.

1,3,4

Which statements are true regarding single-queue scheduling for multiprocessor? 1. Single-queue scheduling does not require much work to take an existing policy and adapt it to work on more than one CPU. 2.Single-queue allows concurrent access of the single queue without locking. 3.Simple single-queue scheduling design suffers from lack of scalability. 4.Simple single-queue scheduling design suffers from Cache Affinity

1,3,4

Which statements are true regarding cache for CPU? 1. Each cache holds copies of data from the memory that are frequently accessed by its corresponding CPU. 2.Both single-processor and multi-processor designs suffer from the cache coherence problem. 3.A multiprocessor scheduler should consider cache affinity when making its scheduling decisions, perhaps preferring to keep a process on the same CPU if at all possible. 4.Spatial locality refers to if a program accesses a data item at address x it is likely to access data items near x as well. 5.Temporal locality refers to when a piece of data is accessed, it is likely to be accessed again in the near future.

1,3,4,5

Which statements are true regarding multi-queue scheduling for a multi-processor? 1. Simple multi-queue scheduling suffers from load imbalance 2. Multi-queue design suffers from cache affinity issues 3. Different queues can use different scheduling policies 4. Multi-queue design is scalable 5. Migration of some processes can mitigate load imbalance for multi-queue scheduling

1,3,4,5

Which of the following are true regarding Unix? 1.Linux is an open-source Unix-like OS. 2.Windows 10 has Unix at its core. 3.A mainframe OS has Unix at its core. 4.Mac OS X has Unix at its core. 5.Unix is completely written in the C programming language.

1,4

Issues with Page Replacement and Thrashing?

1. CPU utilization is low -> the OS increases the number of processes 2. More processes -> memory becomes oversubscribed 3. Many page faults occur -> more swapping 4. CPU utilization stays low -> repeat

If the page size is 2^3 KB and the physical memory size is 2^14 KB, how many bits are required for the PFN (physical frame number)?

11
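Worked out: number of physical frames = physical memory / page size = 2^14 KB / 2^3 KB = 2^11 frames, so the PFN needs 11 bits.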

We consider paging. If the page size is 2^3 KB, how many bits are required for the offset in a page?

13
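Worked out: 2^3 KB = 2^3 * 2^10 B = 2^13 bytes per page, so the offset needs 13 bits.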

At what stage does the OS initialize the trap table so that the CPU remembers the location of the trap handler? 1)Idle Time 2)Boot Time 3)Run Time

2

There are three major steps of a system call. RUN: The kernel performs the privileged operations. RET: The kernel calls a special return-from-trap instruction, returns into the calling user program, and reduces the privilege level to user mode. TRAP: A program executes a special trap instruction, jumps into the kernel, and raises the privilege level to kernel mode. Which of the following orders is correct? 1)TRAP -> RET -> RUN 2)TRAP -> RUN -> RET 3)RUN -> RET -> TRAP 4)RUN -> TRAP -> RET

2

Which of the following page replacement policies is the optimal one in terms of minimizing the number of page faults? 1)Replace page that has been least frequently used. 2)Replace page that will not be used for longest period of time. 3)Replace page that has been in the page table for longest period of time. 4)Replace page that has not been used for longest period of time.

2

Which of the following approaches can we use to increase TLB hit ratio? 1)Increasing the capacity of physical memory. 2)Increasing the capacity of TLB. 3)Evicting the least-recently-used or LRU entry for TLB overflow.

2,3

Which of the following are true about segmentation? 1)It has an internal fragmentation issue. 2)It requires more base and bounds registers than the simple base-and-bounds relocation. 3)It has an external fragmentation issue.

2,3

Which statements are true about MLFQ parameter tuning? 1. The number of queues does not matter in tuning 2. The low-priority queue contains long-running jobs that are CPU-bound, so they are allocated long time slices (quantum size) 3. The high priority queues are usually given short time slices (quantum size) for interactive jobs

2,3

Which statements are true for xv6 scheduler? 1. It is a multi-queue scheduler 2. Each CPU runs a RR scheduling policy 3. It maintains one global queue across all CPUs 4. It is a multi-processor scheduler

2,3,4

Which of the following statements are true for Clock page replacement policy? 1)It often achieves lower page fault rate than OPT. 2)It often achieves lower page fault rate than FIFO. 3)It often achieves lower page fault rate than LRU. 4)Clock uses single reference bit plus a circular list and a pointer.

2,4

Which statements are true about Multi-Level Feedback Queue (MLFQ)? 1.MLFQ is optimal in reducing average response time 2.MLFQ uses boost priority to avoid starvation 3. MLFQ is optimal in reducing average waiting time 4. MLFQ keeps track of the remaining quantum size to avoid gaming the scheduler

2,4

Including the initial parent process, how many processes are created by running the following program?

#include <stdio.h>
#include <unistd.h>

int main() {
    int i;
    for (i = 0; i < 8; i++)
        fork();
    return 0;
}

256
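Worked out: each fork() doubles the number of processes, so after 8 iterations there are 2^8 = 256 processes, including the original parent.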

We have the following reference string: 3 4 3 2 1 2 19 1 2 3 4 19. The reference stream has now reached the first 19. The VPNs have been allocated to physical frames in the following order: 3, 4, 2, 1. Which VPN will be replaced for the 19 under the FIFO policy?

3
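Worked out: the frames were filled in arrival order 3, 4, 2, 1, so FIFO evicts the oldest arrival, VPN 3.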

Go to paging-linear-translate.py for practice on translating VAs for pages into PAs. What is the VPN of VA=0x00001926? What is the PA for VA=0x00003464?

3, 60516

In the implementation of wait(), the state of child process will be used as a condition. When a parent calls wait(), it waits until one of its child processes changes its state to... 1.blocked 2.ready 3.initial 4. zombie 5.running

4

We have the following reference string: 3 4 3 2 1 2 14 1 2 3 4 14. The reference stream has now reached the first 14. The VPNs have been allocated to physical frames in the following way: 3, 4, 2, 1. Which VPN will be replaced for this 14 under LRU?

4
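Worked out: just before the 14, the most recent uses are 2 (6th reference), 1 (5th), 3 (3rd), and 4 (2nd); VPN 4 is the least recently used, so LRU evicts 4.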

We have the following reference string: 3 4 3 2 1 2 15 1 2 3 4 15. The reference stream has now reached the first 15. The VPNs have been allocated to physical frames in the following way: 3, 4, 2, 1. Which VPN will be replaced for this 15 under the OPT policy?

4
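Worked out: after this 15, the future references are 1 2 3 4 15; VPN 4 is the one used farthest in the future, so OPT evicts 4.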

We have the following reference string: 3 4 3 2 1 2 28 1 2 3 4 28. The reference stream has now reached the first 28. The VPNs have been allocated to physical frames in the following way: 3, 4, 2, 1. Which VPN will be replaced for this 28 under the Second-Chance (Clock) policy?

4

Which of the following statements are true regarding page replacement policies? 1)Random page replacement often achieves a lower page fault rate than FIFO. 2)Counting-based approaches such as LFU and MFU often achieve a lower page fault rate than LRU. 3)Page pre-fetching can reduce the page fault rate without causing any extra overhead. 4)The 80-20 workload shows locality, under which Clock can approximate LRU well.

4

Consider the Solaris scheduling table (lower # = lower priority). If a process with priority 50 finishes its allocated time quantum, what new time quantum will be allocated?

40

Go to segmentation.py for practice translating VA -> PA with segments. VA = 1,015, PA = ? VA = 173, PA = ? VA = 558, PA = ? If invalid, answer -1.

4683, 7063, -1

With a TLB, in the worst-case scenario, which of the following access patterns will happen for one virtual memory reference? 1)One TLB access only 2)One physical memory access only 3)Two physical memory accesses only 4)One TLB access plus one physical memory access 5)One TLB access plus two physical memory accesses

5

If the page size is 2^3 KB and the address space size is 2^6 KB, how many entries are there in the page table?

8
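Worked out: number of pages = address space size / page size = 2^6 KB / 2^3 KB = 2^3 = 8 entries.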

What is a PDE? What is at the Contents of a PDE?

A Page Directory Entry is one entry in the Page Directory. Its contents are a number containing a valid bit and a PFN. If the valid bit is 0, no page of the page table is allocated for that portion of the address space. If the valid bit is 1, the PFN identifies the physical frame that holds the corresponding page of the page table (the allocated PTEs).

What is a PTE? What is at the Contents of a PTE?

A Page Table Entry is one entry in the page of the page table that the covering PDE points to. Its contents are a number containing a present/valid bit and a PFN. If the bit is 0, the page is not allocated or not present in memory. If the bit is 1, the PFN identifies the physical frame that holds the page's data. The PFN in the PTE is what completes a VA-to-PA translation.

What is thrashing?

A process is busy swapping pages in and out, spending more time paging than executing

What is a Swap Space? How do you support it?

Swap space is an area reserved on disk, divided into fixed, page-sized units, used to move pages back and forth between disk and physical memory. To support it, each page table entry carries a present bit that indicates whether the page is in memory (1) or on disk (0); the OS checks this bit when it looks up a page.

How to solve the issue of thrashing?

Allocate enough frames for each process to hold its working set, and use locality so that the frequently used pages in the working set are never replaced.

What is an Address Space? Why is it useful?

An abstraction that gives each running process its own private address space within a single shared physical memory. For each process, its memory appears to start at 0 and end at some maximum virtual address, while the process actually resides at some arbitrary physical address in physical memory. A process doesn't have to worry about bumping into another process's memory, nor about unauthorized memory accesses.

What is EAT?

EAT = Effective Access Time, a probabilistic formula for how long a memory access takes on average. It is applied in TLB scenarios and in disk-swapping scenarios.
General form: EAT = p1*x1 + p2*x2, where p1 is the probability of event x1 and p2 is the probability of event x2.

What is the general EAT equation with Page Fault/Hits and no TLB?

EAT = p * x1 + (1 - p) * x2
x1 = 2 * Memory Access Time (page-table access + data access)
x2 = 2 * Memory Access Time + Swap (page-fault service) Time
p = page hit ratio (fraction of references whose page is in memory)
1 - p = page fault ratio
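
A worked example with assumed numbers (100 ns memory access time, 8 ms page-fault service time, hit ratio p = 0.99; none of these values come from the card): x1 = 2 * 100 = 200 ns, x2 = 200 + 8,000,000 ns, so EAT = 0.99 * 200 + 0.01 * 8,000,200 = 80,200 ns. Even a 1% fault rate makes swapping dominate the effective access time.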

What is Second Chance Page Replacement?

Every resident page has a reference bit. When the page is brought in from disk, its bit is 0; when the page is referenced, the bit is set to 1. A pointer p (the clock hand) points at some frame. When a replacement must occur, the page pointed to by p is replaced if its bit is 0. If the bit is 1, it is cleared to 0 and p is advanced until it finds a page with bit = 0 to replace. After the replacement, the pointer is advanced.
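
A minimal C sketch of this scan, assuming ref[] holds the per-frame reference bits and *hand is the clock pointer (the names are hypothetical):

/* Return the index of the frame to replace; leave *hand just past the victim. */
int clock_pick_victim(int ref[], int nframes, int *hand) {
    for (;;) {
        if (ref[*hand] == 0) {              /* second chance already used: victim found */
            int victim = *hand;
            *hand = (*hand + 1) % nframes;  /* advance the pointer past the victim */
            return victim;
        }
        ref[*hand] = 0;                     /* bit was 1: clear it, giving a second chance */
        *hand = (*hand + 1) % nframes;      /* move the clock hand to the next frame */
    }
}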

Under any page replacement policy, there will always be fewer page faults if we have more physical frames. (T/F)

False. (Belady's anomaly: under FIFO, adding frames can increase the number of page faults for some reference strings.)

What is LFU Page Replacement?

Given a string of references, the page in memory that has been referenced the fewest times is replaced.

What is LRU Page Replacement?

Given a string of references, the page that hasn't been used for the longest time is replaced.

What is FIFO Page Replacement?

Given a string of references, the page that has been in physical memory the longest (the first one brought in) is replaced.

What is OPT Page Replacement?

Given a string of references, the page that will not be used for the longest time in the future is replaced.

What if a process's virtual memory is larger than physical memory? How do you support this?

If physical memory cannot hold all of a process's virtual memory, the OS keeps the most in-demand pages in memory and stores the rest on a larger disk. The standard solution is to reserve a swap space on disk that holds the pages of a process's virtual address space that do not currently fit in physical memory.

What is a Page Fault?

If the OS looks up a page and the present bit is 0, the memory reference has failed to find the page in physical memory. The OS must then go to the swap space on disk, find the page (using the disk address stored in the PTE), and bring it into physical memory, running a page replacement scheme first if no free frame is available.

What is the general form of a PA from a VA translated from a Multi-Paging memory scheme?

PA = PFN (from the contents of the PTE) concatenated with the offset, i.e., PA = PFN * page size + offset.

What is a PDBR? What does it do?

PDBR = Page Directory Base Register. It holds the address of the Page Directory that is used for multi-level paging translations of VAs.
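
A C sketch that ties PDBR, PDE, PTE, and the PA formula above together, assuming 4 KB pages, a 32-bit VA split 10/10/12, a low valid bit in each entry, and physical memory modeled as a plain array of frames (all simplifying assumptions, not the exact x86 layout):

#include <stdint.h>

#define PAGE_SIZE 4096u                          /* assumed: 12-bit offset */
#define VALID     0x1u                           /* assumed: low bit is the valid/present bit */

/* pd is the page directory the PDBR points to; frames[pfn][i] models physical memory. */
uint32_t translate(const uint32_t *pd, uint32_t frames[][PAGE_SIZE / 4], uint32_t va) {
    uint32_t pd_index = (va >> 22) & 0x3FF;      /* top 10 bits index the page directory */
    uint32_t pt_index = (va >> 12) & 0x3FF;      /* next 10 bits index one page of the page table */
    uint32_t offset   = va & (PAGE_SIZE - 1);    /* low 12 bits: offset within the page */

    uint32_t pde = pd[pd_index];
    if (!(pde & VALID))
        return UINT32_MAX;                       /* no PTEs allocated for this region: fault */

    uint32_t pte = frames[pde >> 12][pt_index];  /* the PDE's PFN names the page holding the PTEs */
    if (!(pte & VALID))
        return UINT32_MAX;                       /* page not present in memory: page fault */

    return ((pte >> 12) << 12) | offset;         /* PA = PFN from the PTE concatenated with offset */
}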

What is PFN?

Page Frame Number, the index of a page-sized frame in physical memory.

What is Page Replacement?

Page Replacement is used when the OS is swapping a page from the disk into memory and the memory is already full. A replacement scheme must be used to determine which page is replaced with the newly moved page. The policy with the least number of Page Faults given a string of memory references should be adopted.

What is the storage hierarchy? From top to bottom

Register -> Cache -> Memory(RAM) -> Disk(SSD or HDD)

What is TLB? What is it used for in Memory? What are some important terms related to TLB?

TLB = Translation Lookaside Buffer, used to speed up memory references with paging. With paging, each reference takes two memory accesses: first, the page table entry is fetched from the page table in kernel memory; second, the data is fetched from the page it maps. The TLB speeds this up by caching the frequently used address translations. A TLB hit ratio and miss ratio are associated with it.

How is a Page Fault Handled?

The OS runs a piece of code called the page fault handler. It finds the desired page on disk using the disk address stored in the PTE (the PFN field holds a disk address when the page is not present). The process is put into the blocked state while the page fault is being serviced, which allows the OS to run other ready processes.

What is the Offset? What is at the Contents of an Address specified by the Offset?

The offset is the address of a byte relative to the beginning of its page. The contents at the address specified by the offset are the value stored for the specific VA.

What is a Page Directory?

The first level of multi-level paging. It holds many PDEs, which may or may not be marked valid, so unallocated regions of the address space don't waste page-table space.

Convoy Effect

The phenomenon in which a long process will hold the CPU while many short processes pile up in the FIFO queue.

What is internal fragmentation?

The phenomenon in which portions of the memory allocated to a process are empty or unused. For example, with a base-and-bounds allocation, the region between the heap and the stack is reserved for the process but largely unused because their data is dynamic.

What is external fragmentation?

The phenomenon in which physical memory contains many unallocated pockets that are too small to give to a process, for example as a segment.

What is VPN for Operating Systems?

Virtual Page Number, the index of a page in a process's virtual address space.

Assume the page table of a process is kept in memory. The overhead to one memory access is 76 ns. We assume that a TLB is used and one TLB access requires 9 ns. What TLB hit ratio is needed to reduce the memory effective access time to 138 ns?

alpha = 0.303
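Worked out: a TLB hit costs 9 + 76 = 85 ns; a TLB miss costs 9 + 76 (page table) + 76 (data) = 161 ns. Setting 85*alpha + 161*(1 - alpha) = 138 gives 161 - 76*alpha = 138, so alpha = 23/76 ≈ 0.303.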

In the demo of concurrency, when the loop size is 100, we will never get an incorrect outcome and the outcome will always be 200.

false

In the demo of memory virtualization, if the addresses stored in p are the same, it implies that each process has allocated memory for p at the same physical memory location.

false

In order to achieve persistence of data storage, the hardware storage has to be non-volatile, i.e., when power is down, the data won't be lost.

true

In the demo of CPU virtualization, if we run "./cpu A & ./cpu B & ./cpu C & ./cpu D", it will generate four processes, each of which will print out a letter (A, B, C or D) every second.

true

