

Semaphores can be used for: Mutual exclusion General waiting for another thread to do something Both Neither

Both

Some address translation schemes need a mapping table. Select the appropriate answer below. Only paging systems need a mapping table. Only segmentation systems need a mapping table. Both paging and segmentation systems need mapping tables. Neither a paging system nor a segmentation system needs a mapping table.

Both paging and segmentation systems need mapping tables.

Why is there a lock->acquire() operation at the bottom of CV::wait() in Figure 5.18? The lock must be acquired before the return from a wait() operation. It doesn't hurt to ask for a lock multiple times so that you are sure that you actually get the correct one.

The lock must be acquired before the return from a wait() operation.

Threads created by a single process share the same open files. (T/F)

True

A given process may exhibit different working set sizes at different points in its execution. (T/F)

True Phase change behavior often results in different working set sizes.

What is an advantage of priority boosting when resuming a task that was waiting on I/O? allows CPU-bound tasks to finish quicker better cache reuse increases I/O device utilization increases priority as a task runs

increases I/O device utilization

To be used for mutual exclusion, a semaphore should be initialized to: 0 1 N, where N is the number of threads that will be trying to access the shared data structure or resource

1

Consider a paging system with an 11-bit page offset field in the virtual address. If the addressable unit is bytes, how big is a page in bytes? 11 bytes 1000 bytes 1024 bytes 2000 bytes 2048 bytes 4000 bytes 4096 bytes There is not enough information given to answer this question.

2048 bytes 2^11 = 2048
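The arithmetic generalizes to any offset width. A minimal Python sketch (the address 0x12345 and the helper name split are invented for illustration):

```python
# With an 11-bit page offset, the page size is 2^11 addressable units (bytes here).
OFFSET_BITS = 11
PAGE_SIZE = 1 << OFFSET_BITS          # 2^11 = 2048 bytes

def split(vaddr):
    """Split a virtual address into (virtual page number, page offset)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

print(PAGE_SIZE)        # 2048
print(split(0x12345))   # (36, 837): 0x12345 = 36 * 2048 + 837
```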

Consider the following workload shown as task id, arrival time, and service time triples. Which job departs first and at which time if you are using FIFO scheduling? arr work A 0 8 B 4 2 C 6 1 A departs at time unit 8 A departs at time unit 9 B departs at time unit 6 B departs at time unit 7 B departs at time unit 8 C departs at time unit 7 C departs at time unit 8 C departs at time unit 9

A departs at time unit 8
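The FIFO departure times for this workload can be checked with a short simulation; a Python sketch with an invented helper name:

```python
def fifo_departures(tasks):
    """tasks: (name, arrival, service) triples. Runs each task to completion
    in arrival order with no preemption (FIFO) and returns departure times."""
    t = 0
    done = {}
    for name, arrival, service in sorted(tasks, key=lambda x: x[1]):
        t = max(t, arrival) + service   # idle until arrival if the CPU is free
        done[name] = t
    return done

print(fifo_departures([("A", 0, 8), ("B", 4, 2), ("C", 6, 1)]))
# {'A': 8, 'B': 10, 'C': 11} -> A departs first, at time unit 8
```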

What will happen to RR if you run on a real system with a time quantum that is extremely short? Choose the answer that describes the most important performance impact of this choice on a real system. Longer tasks will starve. There is not enough time to decide which task should run next. The processor will be very responsive to tasks with short service times. The processor will spend too much time switching between tasks and not getting useful work done.

The processor will spend too much time switching between tasks and not getting useful work done.

What bad thing could happen if the user had access to the PSR (processor status register) and could change the value? The user could change the processor status to offline. The user could change the PC (program counter) value in user mode. The user could update the system status to complete even before execution is finished. The user could change the execution mode bit to kernel and gain full access to the system.

The user could change the execution mode bit to kernel and gain full access to the system.

If there were separate UNIX system calls for exists(), create(), and open(), what is NOT true of the following code segment? if (!exists(name)) create(name); fd = open(name); Another process could run just after exists() returns false and then create the non-existing file. Another process could run just before the call to open() and delete the existing file. These statements will always be executed in sequence with no interference from other processes.

These statements will always be executed in sequence with no interference from other processes.

Why should an OS kernel copy system call parameters before checking their validity? The parameters are only available in user memory for a short time and must be copied before being lost. The code in the system call implementation cannot access parameter values in the protected user memory region. The system call can modify the copied parameters to correct errors and then use the corrected parameters in the implementation of the system call. This prevents a user from modifying the parameters after they are checked for validity but before the parameters are used in the implementation of the system call.

This prevents a user from modifying the parameters after they are checked for validity but before the parameters are used in the implementation of the system call.

How many waiting lists are there in a single instance of a RWLock using the implementation in Figure 5.9? One Two Three Four

Three Three: one for the queueing lock "lock", one for the CV "readGo", and one for the CV "writeGo".

A limitation of green threads is that when a blocking call is made by a user-level thread, the kernel is unable to run a different user-level thread in that same process. Scheduler activations are an improvement over green threads since the scheduler activation for a process can inform the user-level thread scheduler that it should choose another user-level thread to run after a blocking call is made. (T/F)

True

A lock acquire operation performed on a busy lock puts the calling thread into the WAITING state. (T/F)

True

A thread enters the Running state when the thread scheduler resumes it. (T/F)

True

Across different workloads, SJF (preemptive) is optimal in terms of average response time. (T/F)

True

Bitmaps are designed for use with fixed-size free blocks. (T/F)

True

For ease in locating a waiting thread, each lock and each condition variable has its own waiting list. (T/F)

True

If tasks are all equal in size, FIFO is optimal in terms of average response time. (T/F)

True

RR avoids starvation. (T/F)

True

The OS kernel executes with full access to all the capabilities of the hardware. (T/F)

True

Threads are less expensive to create and destroy than processes. (T/F)

True

Threads created by a single process share the same memory address space. (T/F)

True

To avoid having to make a system call for every thread operation, some systems support a model where user-level thread operations are implemented entirely in a user-level thread library, without invoking the kernel. (T/F)

True

When a shared object is properly coded, you can always convert any CV::signal() operations to CV::broadcast() operations without changing the semantics of the shared object. However, making those conversions may impact performance. (T/F)

True

If a CV::signal() operation is executed when there are no threads in the condition variable's waiting list, it is essentially a no-op and there is no effect. (T/F)

True A condition variable is memoryless. See the discussion in section 5.4.1.

For a given lock, at most one thread can hold the lock at a given time. (T/F)

True If multiple threads could hold the same lock at the same time, then the lock cannot provide mutual exclusion.

A CV::wait() operation must atomically (a) release the lock, and (b) add the current thread to the waiting list. (T/F)

True If the two actions could be separated, then a second thread could run after the lock had been released but before the current thread had been placed on the waiting list. The second thread could potentially change the condition and signal the condition variable with no effect. The original thread would then be placed on the waiting list, and it may be waiting forever for a signal that was missed.
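The missed-wakeup hazard described above is exactly what the atomicity prevents, and it is why condition-variable code uses the standard Mesa-style wait loop. A Python sketch (Python's threading.Condition performs the atomic release-and-enqueue internally; the Mailbox class is invented for illustration):

```python
import threading

class Mailbox:
    """One-slot mailbox coded in the monitor style."""
    def __init__(self):
        self.lock = threading.Lock()
        self.nonempty = threading.Condition(self.lock)
        self.item = None

    def put(self, value):
        with self.lock:
            self.item = value
            self.nonempty.notify()        # signal: a no-op if nobody is waiting

    def get(self):
        with self.lock:
            while self.item is None:      # re-check the condition (Mesa semantics)
                self.nonempty.wait()      # atomically release the lock and join the
                                          # waiting list; lock reacquired before return
            value, self.item = self.item, None
            return value
```

Because wait() releases the lock and places the caller on the waiting list in one step, a put() that runs in between cannot issue a notify() that the getter misses.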

A single thread can hold multiple different locks at one time. (T/F)

True Threads are allowed to hold multiple locks. This would be appropriate when a thread needs to atomically update two shared objects, perhaps moving information from one to the other. However, we will see in Chapter 6 that any such threads should be coded to acquire the multiple locks in a fixed ordering. Otherwise, a deadlock situation can arise.

If a reader thread doesn't change a shared object, why does it need any synchronization at all for an RWLock? Readers do not really need synchronization, but this helps make the code more elegant and symmetric. A reader may need to access several fields or values in a shared object as part of a read transaction. If there were no synchronization, then a write could occur in between the reads of different fields. The fields could then be inconsistent.

A reader may need to access several fields or values in a shared object as part of a read transaction. If there were no synchronization, then a write could occur in between the reads of different fields. The fields could then be inconsistent. Consider a shared object with two values x and y, where there is a consistency invariant of x+y=10. A reader must read the two values in separate statements. A writer can update the shared object by, for example, decrementing one value and incrementing the other value, so that the invariant is maintained. If a reader does not have synchronization and a writer can run in between a reader's read of x and its read of y, then the x and y values obtained by the reader will not meet the consistency invariant.

A power failure alert is recognized by a computer system as which type of interrupt? Asynchronous interrupt Synchronous interrupt Deferred interrupt None of the above

Asynchronous interrupt

Consider a linked list composed of six variable-sized free blocks of: (A)250 -> (B)450 -> (C)520 -> (D)900 -> (E)400 -> (F)800 The block is identified by the letter, and the block size is given as the number of memory units for the payload size (e.g., bytes). Consider a request for 400 units. First-fit would choose which block? A B C D E F

B

Consider the following workload shown as task id, arrival time, and service time triples. Which job departs first and at which time if you are using SJF (preemptive, i.e., SRTN) scheduling? arr work A 0 8 B 4 2 C 6 1 A departs at time unit 8 A departs at time unit 9 B departs at time unit 6 B departs at time unit 7 B departs at time unit 8 C departs at time unit 7 C departs at time unit 8 C departs at time unit 9

B departs at time unit 6
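The SRTN schedule can be verified with a unit-time simulation; a Python sketch with an invented function name:

```python
def srtn_first_departure(tasks):
    """Preemptive SJF (shortest remaining time next) in unit time steps.
    tasks: (name, arrival, service) triples; returns the first finisher."""
    remaining = {n: w for n, a, w in tasks}
    arrival = {n: a for n, a, w in tasks}
    t = 0
    while True:
        ready = [n for n in remaining if arrival[n] <= t]
        if ready:
            n = min(ready, key=lambda k: remaining[k])  # shortest remaining time wins
            remaining[n] -= 1                           # run it for one time unit
            if remaining[n] == 0:
                return n, t + 1
        t += 1

print(srtn_first_departure([("A", 0, 8), ("B", 4, 2), ("C", 6, 1)]))
# ('B', 6): B preempts A at time 4 and finishes at time 6
```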

This synchronization method or primitive can be used to implement optimistic concurrency control. Compare and swap instruction Mellor-Crummey Scott (MCS) lock Spin lock Test and test and set lock

Compare and swap instruction

Consider the same free list composed of: (A)250 -> (B)450 -> (C)520 -> (D)900 -> (E)400 -> (F)800 Consider the same request for 400 units. Best-fit would choose which block? A B C D E F

E
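Both fit policies on this free list can be checked in a few lines; first_fit and best_fit below are illustrative helper names, not a real allocator API:

```python
free_list = [("A", 250), ("B", 450), ("C", 520), ("D", 900), ("E", 400), ("F", 800)]

def first_fit(blocks, request):
    """Return the first block on the list large enough to hold the request."""
    return next((name for name, size in blocks if size >= request), None)

def best_fit(blocks, request):
    """Return the smallest block that still fits the request."""
    candidates = [(size, name) for name, size in blocks if size >= request]
    return min(candidates)[1] if candidates else None

print(first_fit(free_list, 400))   # B: 450 is the first block >= 400
print(best_fit(free_list, 400))    # E: 400 fits the request exactly
```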

What other scheduling policy does RR start to resemble as the time quantum gets extremely large? FIFO MFQ SJF (preemptive) None of the above

FIFO

A TLB can speed up address translation for a paging system, but it is not useful for a segmentation system. (T/F)

False

A TLB has one entry for each page frame in physical memory. (T/F)

False

A TLB hit means that both read and write access is allowed to a page. (T/F)

False

A page fault is a protection error and should cause the process to abort. (T/F)

False

A tagged TLB will flush all entries with a given tag on a process switch. (T/F)

False

A user process executes with full access to all the capabilities of the hardware. (T/F)

False

Asynchronous I/O requires that a running thread must create a separate I/O thread whenever a read or write is made to a high-latency device. (T/F)

False

Bursty arrival patterns will provide shorter user response times. (T/F)

False

Executable file formats like ELF contain initialized data that will be placed in the heap when the program is loaded. (T/F)

False

High server utilization will provide shorter user response times. (T/F)

False

If a computer system is executing multiple processes, then each process must be an instance of a different program. (T/F)

False

In the MFQ scheduling policy, all priority levels must have the same time quantum values. (T/F)

False

It is correct programming logic to assume that the thread scheduling pattern will be the same on each run when the same program is executed multiple times with the same data and the same command line arguments. (T/F)

False

It is correct programming logic to assume that, once resumed, a thread will run without interruption up to the point of its next system call. (T/F)

False

SJF (preemptive) avoids starvation. (T/F)

False

Semaphores, like condition variables, are memoryless. (T/F)

False

The two modes of execution are called kernel mode and executive mode. (T/F)

False

Threads created by a single process share the same memory stack. (T/F)

False

Threads created by a single process share the same scheduling state (e.g., Ready, Waiting). (T/F)

False

A P() operation on any semaphore will cause the calling thread to be blocked and placed on a waiting queue until a V() operation can be executed by another thread in order to unblock it. (T/F)

False A P() operation only causes the calling thread to be blocked when the semaphore value is 0.
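This behavior, and the related fact that a semaphore has memory (a V() with no waiter is not lost), can be seen with Python's threading.Semaphore; a non-blocking acquire stands in for P() so the sketch never actually blocks:

```python
import threading

sem = threading.Semaphore(1)             # value 1: one stored "permit"

assert sem.acquire(blocking=False)       # P() on value 1 returns at once; value -> 0
assert not sem.acquire(blocking=False)   # P() on value 0 would block (here it just fails)

sem.release()                            # V(): value -> 1, remembered even with no waiter
assert sem.acquire(blocking=False)       # the earlier V() was not lost; value -> 0
```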

To improve the performance of your program, it is ok to sometimes avoid acquiring locks before accessing shared data. This is because it is quite simple for even novice programmers to reason about the execution and memory access interleavings among multiple threads and identify performance optimizations. (T/F)

False The textbook warns you not to be tempted by a performance optimization like this! It is quite difficult for most programmers to reason about the execution and memory access interleavings among multiple threads. You may end up spending hours or weeks tracking down a bug that you could have avoided by following the shared object coding guidelines in the textbook.

The thread_exit() call can immediately garbage collect the exited thread's resources and destroy the exited thread's thread control block. (T/F)

False The thread exit status should be retained until it can be read by the parent's call to thread_join().

A user stack will always be in a valid state. (T/F)

False The user can corrupt the user stack pointer by loading it with an invalid address, or the user stack may have overflowed.

Most current operating systems use true LRU page replacement. (T/F)

False True LRU replacement is too expensive.

Unix and Linux are both microkernel designs. (T/F)

False Unix and Linux are monolithic kernel designs.

The Unix system call exec() creates a new process to run a program. (T/F)

False exec() does not create a new process but loads a program into the existing address space, copies arguments into memory, and starts execution.

In which role does an OS provide common services? Referee Illusionist Glue None of the above

Glue

Dynamic memory allocation is associated with which part of the process memory image? Data segment Heap segment Stack segment Text segment (i.e., machine instructions) None of the above

Heap segment

What is the design goal for virtual machines? High efficiency High overhead High response time None of the above

High efficiency

This synchronization method or primitive is an efficient form of spin locking where each waiting thread spins on a separate memory location. Mellor-Crummey Scott (MCS) lock Optimistic concurrency control Read-Copy-Update (RCU) Test and test and set lock

Mellor-Crummey Scott (MCS) lock

Consider a system call that does not access disk and that does not block further progress of an application, e.g., get_process_id(). Approximately how long does such a system call take? On the order of 0.25 ns On the order of 25 ns On the order of 0.25 ms On the order of 25 ms

On the order of 25 ns

Associate the type of memory fragmentation with the appropriate address translation scheme. Both paging systems and segmentation systems have external fragmentation. Both paging systems and segmentation systems have internal fragmentation. Paging systems have external fragmentation, and segmentation systems have internal fragmentation. Paging systems have internal fragmentation, and segmentation systems have external fragmentation.

Paging systems have internal fragmentation, and segmentation systems have external fragmentation.

What type of problems can occur if you place OS functionality in standalone server processes? Correctness problems Performance problems Protection problems There are no problems with this approach

Performance problems There will be some amount of communication overhead when a user requests a service that is implemented in a server process.

Consider a for loop that iterates through the elements of an array, e.g., for( i = 0; i < N; i++ ){ sum += a[i]; } Assume the index variable and the sum variable are register-allocated rather than memory-allocated. Taking into account only the data references (i.e., ignore the instruction fetches), what type of locality of reference do the array element accesses exhibit? Spatial Temporal Both spatial and temporal Neither

Spatial Memory accesses to a[0], a[1], a[2], ..., a[N-1] exhibit spatial locality. For this loop, none of the array elements are revisited.

How does the size of the virtual memory address space compare to the size of the physical memory address space? The virtual memory address space must be smaller than the physical memory address space. The virtual memory address space must be same size as the physical memory address space. The virtual memory address space must be larger than the physical memory address space. While the virtual memory address space is typically larger than the physical memory address space in modern computer systems, there is no requirement to do this. In fact, some historical systems have implemented a smaller size or equal size virtual memory address space as compared to physical address space.

While the virtual memory address space is typically larger than the physical memory address space in modern computer systems, there is no requirement to do this. In fact, some historical systems have implemented a virtual memory address space that is smaller than, or equal in size to, the physical address space. For example, in 1975 the Digital Equipment Corporation (DEC) KL10 Model A processor used virtual addresses of 18 bits and physical addresses of 22 bits. More recently, ARMv8A processors used virtual addresses of 64 bits and physical addresses of 48 bits.

Load locked and store conditional instructions track memory accesses at what level of granularity? Byte Word Cache line

Word

Which scheduler typically runs after every interrupt? batch job initiator dispatcher swapper

dispatcher

What is the advantage of affinity scheduling? better cache reuse executes the tasks for a parallel process at the same time prevents priority inversion shorter queue lengths in per-processor ready lists

better cache reuse

Why does the book's implementation of RWLock::doneWrite() in Figure 5.10 use readGo.broadcast() rather than readGo.signal()? All readers should be awakened since they can run concurrently. Broadcast operations are always more efficient than signal operations. The broadcast operation has memory while the signal operation is memoryless.

All readers should be awakened since they can run concurrently.

A reference to a page that has a modified bit with value true (i.e., 1) should always cause an exception. (T/F)

False A modified bit is used to indicate that a write back is needed for a modified page before replacing that page. A value of true means that the page has been modified (i.e., written to) since it was first loaded into physical memory and is no longer the same as the original page in the process memory image on secondary storage. Before this page can be replaced in physical memory, it must be written back to secondary storage. Accessing a page with a modified bit with value true by either a read or a write will not cause an exception. Accessing a page with a modified bit with value false with a read will likewise not cause an exception. Accessing a page with a modified bit with value false with a write will not cause an exception in a processor with a hardware-managed TLB and hardware-managed page table walks but will cause the modified bit to change value to true in both the TLB entry and the PTE (to indicate that a previously clean page has been modified). Some operating systems use a background page cleaning algorithm that periodically writes back the modified pages and resets the modified bits in the corresponding PTEs to false and invalidates the entries for these pages in the TLB. This approach uses spare bandwidth to the secondary storage to reduce the number of replacements that encounter modified pages.

A page fault always indicates a logic error in a program; therefore, a process that has a page fault should always be terminated. (T/F)

False A page fault indicates a missing page. If the missing page is part of the memory image for a process, then it is not a logic error but rather is the standard way that the process makes a demand fetch of the page.

A simple system call interface limits the amount of innovation possible in user applications. (T/F)

False A simple interface has not limited innovation for applications running on Unix. In fact, the textbook states that a simple and powerful system call interface was one of the key ideas in Unix and was responsible for much of its success.

A write permission exception (i.e., writing to a page that is marked in the page table as read only) always indicates a logic error in a program; therefore, a process that has a write permission exception should always be terminated. (T/F)

False A write permission exception is used to implement several functions of an OS, e.g., copy-on-write. Similarly, the presence bit can be used to emulate a use bit in software.

Paging systems typically use a write-through policy. (T/F)

False A write-through policy requires an update of secondary storage (often hard disk) on every memory write. This would be much too slow. A write-back policy is the only reasonable choice for a virtual memory system.

Kernel buffering requires that a producer process and its corresponding consumer process perform their respective calls to write() and read() in strict alternation. (T/F)

False Each process can run at its own pace, and, apart from the "buffer full" and "buffer empty" edge conditions, each process can operate on a buffer multiple times before a process switch.

Locality of reference is a property of a computer system's memory hierarchy. For example, adding a cache memory will increase the locality of reference. (T/F)

False Locality of reference is a property of program behavior. It can be exploited to a lesser or greater degree by a computer system's memory hierarchy, e.g., a cache can be used to exploit locality. Locality of reference can be increased by hand or by a compiler by restructuring the computation, e.g., by using loop reordering optimizations.

Modern operating systems use SJF (preemptive) because of its good response times. (T/F)

False Modern operating systems don't have the future knowledge of the task service times that is needed for SJF scheduling.

Producer and consumer processes can execute load/store instructions that directly access a kernel buffer. (T/F)

False Only the OS kernel is permitted to execute load/store instructions that access the kernel buffer. The OS kernel must copy data passed from the producer process into the kernel buffer, and copy data from the kernel buffer to pass to the consumer process.

A CV::signal() operation reacquires the lock associated with the condition variable and passes it to the thread that it wakes up. (T/F)

False The CV::signal() operation merely takes one thread off the condition variable's waiting list, if one is present, changes its scheduling state to READY, and places the thread on the ready list. The CV::wait() operation must itself reacquire the lock before returning from the call.

The Unix fork() system call is used to create a new process, and the Unix exec() system call is used to create an additional thread for the current process. (T/F)

False The UNIX exec() call brings a new executable image into the memory address space of an existing process and starts running that new code. (In versions of UNIX with multithreaded processes like Solaris, a call to exec() terminates the existing threads, and a single thread for the new executable image is started.)

When a computer system is powered on, the first code that is executed is the command interpreter (or shell). (T/F)

False The code in the BIOS is the first code that is executed as part of booting the OS.

Consider Figure 4.12 and the discussion in section 4.8.1. on implementing multithreaded processes. Is the following statement in this context true or false? The ready list used by the scheduler contains a mix of both TCBs and PCBs. (T/F)

False The discussion in section 4.7 regarding implementing single-threaded processes allowed a mix of both TCBs and PCBs in the ready list since "the PCB and TCB each represent one thread". This matches the TCB/PCB organization shown in Figure 4.11. However, with multithreaded processes in which the process threads are visible in the OS kernel along with kernel threads, a PCB no longer defines a separately schedulable task. In this organization, only TCBs are placed in the ready list.

To use a semaphore for mutual exclusion, a thread should place a V() operation at the start of the critical section and a P() operation at the end of the critical section. (T/F)

False The mutual exclusion design pattern for a semaphore, sem, is: sem.P(); // critical section sem.V();
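The P() ... V() design pattern can be written out as a runnable Python sketch: a semaphore initialized to 1 guarding a shared counter (worker and count are names invented here):

```python
import threading

sem = threading.Semaphore(1)   # initialized to 1 for mutual exclusion
count = 0

def worker():
    global count
    for _ in range(10000):
        sem.acquire()          # P(): enter the critical section
        count += 1             # at most one thread is in here at a time
        sem.release()          # V(): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                   # 40000: every increment was protected
```

Reversing the operations (V() first, P() last) would raise the semaphore value above 1 and let multiple threads into the critical section at once.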

What type of problems can occur if you place OS functionality in library routines? Correctness problems Performance problems Protection problems There are no problems with this approach

Protection problems Buggy or malicious code can bypass permission checks if critical functionality is implemented in library routines that are linked with user code and loaded in user-accessible memory.

What is virtualization? Ensuring that only permitted actions are allowed. Allowing communication between applications in carefully controlled ways. Providing an application with the illusion of resources that are not physically present. None of the above

Providing an application with the illusion of resources that are not physically present.

Identify the missing word: The last step in creating a thread is to set its state to ________ and put the new TCB on the ready list, enabling the thread to be scheduled. INIT READY RUNNING WAITING FINISHED

READY

This synchronization method or primitive has a "grace period". Compare and swap instruction Mellor-Crummey Scott (MCS) lock Read-Copy-Update (RCU) Spin lock Test and set instruction

Read-Copy-Update (RCU)

This synchronization method or primitive is an efficient form of reader/writers locking. Compare and swap instruction Mellor-Crummey Scott (MCS) lock Read-Copy-Update (RCU) Spin lock Test and set instruction

Read-Copy-Update (RCU)

In which role does an OS manage resources, protect users and applications, and facilitate sharing? Referee Illusionist Glue None of the above

Referee

Some address translation schemes need variable-length, physical memory allocation routines that rely on algorithms like first-fit or best-fit or perhaps on data structures such as segregated free lists. Select the appropriate answer below. Paging systems need variable-length allocation routines to allocate page frames in physical memory. Segmentation systems need variable-length allocation routines to allocate segments in physical memory. Both paging systems and segmentation systems need variable-length allocation routines to deal with page frames and segments, respectively. Neither paging systems nor segmentation systems need variable-length allocation routines for pages or segments.

Segmentation systems need variable-length allocation routines to allocate segments in physical memory.

Why does the implementation of SpinLock in Figure 5.16 not have a waiting list? The waiting list is a hidden, private variable. The memory controller implements a hardware waiting list of test_and_set instructions. The code implements a busy wait, for which a processor continues to execute the instructions in the loop until the tested value changes.

The code implements a busy wait, for which a processor continues to execute the instructions in the loop until the tested value changes.
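That busy wait can be sketched in Python; since Python exposes no test_and_set instruction, a tiny internal lock emulates the atomic read-modify-write (on real hardware this is a single instruction, and there is indeed no waiting list anywhere):

```python
import threading

class SpinLock:
    """Sketch of a spin lock built on an emulated test_and_set."""
    def __init__(self):
        self._guard = threading.Lock()   # stands in for hardware atomicity only
        self.value = 0

    def _test_and_set(self):
        """Atomically set value to 1 and return the old value."""
        with self._guard:
            old, self.value = self.value, 1
            return old

    def acquire(self):
        while self._test_and_set() == 1:
            pass                         # busy wait: keep retrying until value was 0

    def release(self):
        self.value = 0
```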

What is the only assumption you should make on a return from a CV::wait() operation? The condition that the current thread was waiting upon is now true. The current thread holds the lock. Another thread may hold the lock, so the current thread must call lock.acquire().

The current thread holds the lock.

