Operating Systems Final Exam


What is the percent slowdown in average memory access time?

10%
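The 10% figure follows from the standard effective-access-time calculation with a TLB. The flashcard omits the givens, so the numbers below (100 ns memory access, 90% TLB hit ratio, a miss costing one extra memory access) are illustrative assumptions:

```python
# Effective access time (EAT) with a TLB. Assumed numbers, not from the
# flashcard: a TLB hit costs one memory access; a miss costs two
# (one for the page table, one for the data).
memory_access = 100      # ns (assumed)
hit_ratio = 0.90         # assumed

eat = hit_ratio * memory_access + (1 - hit_ratio) * (2 * memory_access)
slowdown = (eat - memory_access) / memory_access
print(round(eat, 1), f"{slowdown:.0%}")  # 110.0 10%
```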

Thread pools

A pool of threads which await work. Usually faster to service a request with an existing thread than to create a new thread. Allows the number of threads in an application to be bound to the size of the pool.
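As an illustrative sketch (not from the flashcards), Python's `concurrent.futures.ThreadPoolExecutor` shows both points: worker threads are created once and reused for many requests, and `max_workers` bounds the number of threads in the application:

```python
from concurrent.futures import ThreadPoolExecutor

# Simulated request handler; in a real server this would do I/O.
def handle_request(n):
    return n * n

# A pool of 4 worker threads: the 8 tasks reuse existing threads
# instead of paying thread-creation cost per request.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```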

Why are spinlocks only used in multiprocessor systems?

A process in a spinlock requires another process to execute an event to allow it to leave the spinlock, thus it is inappropriate in a single-processor system, where only one process can execute at a time.

What is busy waiting and why is it an issue?

While one process is in its critical section, any other process trying to enter its own critical section must loop continuously in the entry code. It is an issue because a busy-waiting process consumes CPU cycles that some other process could use productively.

What is the main advantage of using dynamic loading?

A routine is loaded into memory only when it is needed. Hence the actual size of a process in memory can be smaller than the actual program size.

The implementation of mutex locks suffers from busy waiting. What changes would be necessary in order for a process waiting for a mutex lock to be blocked and placed in a waiting queue until the lock becomes available?

A waiting queue would be associated with each mutex lock. When a process determines a lock is unavailable, they are placed within the queue. When a process releases the lock, it removes and awakens the first process from the list.
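A minimal Python sketch of this scheme (the class name `QueueMutex` and its internals are invented for illustration): the lock keeps a FIFO waiting queue, `acquire()` blocks the caller instead of spinning, and `release()` removes and awakens the first waiter:

```python
import threading
from collections import deque

class QueueMutex:
    """Sketch of a mutex whose waiters block on a per-lock waiting
    queue instead of busy-waiting."""
    def __init__(self):
        self._guard = threading.Lock()   # protects the internal state
        self._held = False
        self._waiters = deque()          # FIFO queue of blocked acquirers

    def acquire(self):
        with self._guard:
            if not self._held:
                self._held = True
                return
            ev = threading.Event()       # this thread's "sleep" handle
            self._waiters.append(ev)
        ev.wait()                        # block until release() wakes us

    def release(self):
        with self._guard:
            if self._waiters:
                # Wake the first waiter; ownership passes directly to it,
                # so _held stays True.
                self._waiters.popleft().set()
            else:
                self._held = False
```

A thread that finds the lock held parks on an `Event` in the queue; the releaser hands the lock straight to the oldest waiter.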

Why does the linux kernel have a policy where a process cannot hold a spinlock when attempting to acquire a semaphore?

Acquiring a semaphore may cause a process to fall asleep while waiting for the semaphore to become available. Spinlocks can only be held for a short duration and a process that is asleep may cause it to hold the spinlock for too long.

Most scheduling algorithms maintain a ready queue which lists processes eligible to run on a processor. On multicore systems, there are two options: each processing core has its own ready queue; a single-ready queue is shared by all processing cores. What are the advantages and disadvantages of these approaches?

Advantages: If each processing core has its own ready queue, there is no contention over a single ready queue when the scheduler runs concurrently on two or more processors; each scheduler uses only its processor's own private ready queue. With a single shared ready queue, load balancing is not an issue. Disadvantages: a single shared ready queue must be protected with locks to prevent a race condition, so a processing core may be free to run a thread yet must first acquire the lock to retrieve the thread from the queue. When each processing core has its own ready queue, there must be some load balancing between the different ready queues.

What resources are used when a thread is created? How do they differ from those when a process is created?

A thread, being smaller, requires fewer resources: creating one involves allocating a small data structure to hold a register set, stack, and priority. Creating a process requires allocating a process control block (a large data structure). Within the PCB, allocating and managing the memory map is typically the most time-consuming activity.

Preemptive kernel

Allows preemption of process when running in kernel mode.

Thread cancellation - terminating a thread (the target thread) before it is finished

Asynchronous cancellation terminates the target thread immediately. Deferred cancellation allows the target thread to periodically check if it should be cancelled.
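Deferred cancellation can be sketched in Python with a shared flag that the target thread polls at cancellation points (the self-cancel at step 4 exists only to make the demo deterministic):

```python
import threading

# Deferred-cancellation sketch: the target thread checks a flag at safe
# cancellation points instead of being killed immediately.
cancel_requested = threading.Event()
progress = []

def worker():
    for step in range(1000):
        if cancel_requested.is_set():   # cancellation point
            return                      # clean up and exit voluntarily
        progress.append(step)
        if step == 4:
            cancel_requested.set()      # simulate a cancel request mid-work

t = threading.Thread(target=worker)
t.start()
t.join()
print(len(progress))  # 5 - the thread stopped at the next check after the request
```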

Conflicts with average turnaround time and maximum wait time

Average turnaround time is minimized by executing the shortest tasks first. This however could starve long-running tasks and thereby increase waiting time.

Purpose of base and limit registers? What happens if a violation occurs?

Base and limit registers are used to identify the address space belonging to a process. If a violation occurs, then a trap is sent to the OS by the CPU.
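A toy Python version of the hardware check (the register values are made-up examples): an address is legal iff base <= address < base + limit, and anything else traps to the OS:

```python
# Sketch of the base/limit hardware check. An access is legal iff
# base <= address < base + limit; otherwise the CPU traps to the OS.
def check_address(address, base, limit):
    if base <= address < base + limit:
        return "ok"
    return "trap"   # addressing error: the OS handles the trap

print(check_address(300040, base=300040, limit=120900))  # ok
print(check_address(420940, base=300040, limit=120900))  # trap (== base + limit)
```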

How does the CPU protect user processes from one another

By checking to ensure that an address that a process is trying to access belongs to the address space of that process. If it is not, the CPU sends a trap to the OS.

Conflicts with CPU utilization and response time

CPU utilization is increased if the overheads associated with context-switching is minimized. The context switching overheads can be decreased by performing context switches infrequently, which could increase response time for processes.

Conflicts with I/O device utilization and CPU utilization

CPU utilization is maximized by running long-running CPU-bound tasks without performing context switches. I/O utilization is maximized by scheduling I/O-bound jobs as soon as they become ready to run, incurring the overheads of context switches.

What are two solutions to the problem of external fragmentation?

Compaction and paging

Difference between address binding at compile time, load time, and execution time?

Compile time - if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Load time - if the memory location is not known at compile time, relocatable code is generated and final binding is delayed until load time. Execution time - if the process can be moved in memory during execution, binding is delayed until run time; this requires hardware support (e.g., base and limit registers).

Concurrent vs. parallel execution

Concurrent - multiple tasks all make progress non-simultaneously Parallel - multiple tasks make progress simultaneously across multiple processing cores

How does Java allow sharing between threads?

Creating an object and passing a reference to that object to each thread sharing the data.

What may concurrent access to shared data result in?

Data inconsistency

Different types of parallelism

Data parallelism - distributing a subset of data (used by multiple tasks) to the processor executing that task Task parallelism - distributing different tasks across different cores for simultaneous execution

Why are interrupts not appropriate for implementing synchronization primitives in multiprocessor systems?

Disabling interrupts only prevents other processes from executing on the processor in which interrupts are disabled. There are no limitations on other processors, thus the process disabling interrupts cannot guarantee mutual exclusive access to program state.

Challenges for parallel programmers

Dividing activities Balance Data splitting Data dependency Testing and debugging

Cons with many to one model for multithreading

Multiple threads cannot run in parallel, since only one kernel thread is used. If a user thread makes a blocking system call, the entire process is blocked.

How does Dining-Philosopher guarantee that no two neighbors can be eating simultaneously?

Each chopstick is represented by a semaphore shared between two neighbors; a philosopher must successfully wait() on both adjacent chopsticks before eating. Since neighbors share a chopstick, at most one of them can hold it, so no two neighbors can eat simultaneously.

What is contiguous memory?

Each process is contained in a single section of memory that is contiguous to the section containing the next process

How does mutex signal() and wait() allow several processes to enter their critical section simultaneously

Each process calls signal(mutex) before its critical section, which increments the semaphore and never blocks; every process therefore proceeds straight into its critical section, and mutual exclusion is violated.

Relation for Priority and FCFS?

FCFS gives the highest priority to the oldest job.

In semaphore implementation with no busy waiting, what kind of queue is used to implement the semaphore list?

FIFO

What are the three methods of dynamic storage allocation and how are they different?

First fit - allocate the first hole that is big enough Best fit - allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size Worst fit - allocate the largest hole; must also search the entire list unless it is sorted by size
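The three strategies can be sketched over a list of free-hole sizes (the 212-unit request and the hole sizes are illustrative; real allocators also track hole addresses):

```python
# Each function returns the index of the chosen hole, or None if no
# hole is large enough.
def first_fit(holes, request):
    for i, h in enumerate(holes):
        if h >= request:          # stop at the first adequate hole
            return i
    return None

def best_fit(holes, request):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 (500 is the first hole big enough)
print(best_fit(holes, 212))   # 3 (300 is the smallest adequate hole)
print(worst_fit(holes, 212))  # 4 (600 is the largest hole)
```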

Disadvantage with one-to-one model

For each user thread created, there must also be a kernel thread. Too many kernel threads can hurt performance, so many systems limit user threads.

A signal handler processes signals

Generated by a particular event, delivered to a process, then handled

Assuming that no thread belongs to the REALTIME PRIORITY CLASS and that none may be assigned a TIME-CRITICAL priority, what combination of priority class and priority corresponds to the highest possible relative priority in Windows scheduling?

HIGH priority class and HIGHEST priority within that class (numeric priority = 15).

Why is it important for schedules to distinguish I/O-bound programs from CPU-bound programs?

I/O-bound programs only perform a small amount of computation before performing I/O, typically not using the entire CPU quantum. CPU-bound programs use their entire quantum without performing any blocking I/O operations. One can make better use of the computer's resources by giving higher priority to I/O-bound programs and allow them to execute ahead of CPU-bound programs.

Threads contain

ID Program counter Register set Stack

Why is implementing synchronization primitives by disabling interrupts not appropriate in a single-processor system if the synchronization primitives are to be used in user-level programs?

If a user-level program is able to disable interrupts, it may disable the timer interrupt and prevent context switching from taking place. This would allow it to use the processor without letting other processes execute

In the Dining Philosopher problem there exists an issue where a deadlock is possible. Give an example.

If everyone reaches for a chopstick in the same direction, then they will all have a chopstick in one hand with no possibility of obtaining one for the other hand.

What is a Counting semaphore?

Integer value ranges over an unrestricted domain

What is a Binary semaphore?

Integer values are either 1 or 0. Works like a mutex lock.

Internal vs. external fragmentation?

Internal fragmentation occurs when allocated memory is slightly larger than requested memory and there is part of the partition belonging to a process that is unused. External fragmentation occurs when there is total memory space to satisfy a request, but it is not contiguous.

What is an advantage of using a preemptive kernel when designing an OS?

It allows a higher-priority process to preempt a lower-priority process, in kernel mode, to access the CPU. It is useful when a real-time process needs the CPU and a non-real-time process is executing.

Advantages of using a thread pool?

It is faster, since it allows work to be dispatched to an existing thread rather than creating a new one. It also bounds the number of threads in an application, preventing thread creation from exhausting system resources.

What is a downfall of paging and when does it occur?

It may suffer from internal fragmentation. This can occur if the size of a process is not divisible by the size of a page.
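A quick worked example (the process and page sizes are assumed, not from the flashcards): a process that is not a multiple of the page size wastes the tail of its last frame.

```python
import math

page_size = 2048
process_size = 72766     # illustrative size

pages_needed = math.ceil(process_size / page_size)               # last page is partly used
internal_fragmentation = pages_needed * page_size - process_size # wasted tail of last frame
print(pages_needed, internal_fragmentation)  # 36 962
```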

How is data consistency maintained?

It requires mechanisms that ensure orderly execution of cooperating processes

Suppose that a scheduling algorithm (short term CPU scheduling) favors those processes that have used the least processor time recently. Why will this algorithm favor I/O bound programs and not yet permanently starve CPU bound programs?

It will favor I/O because of the relatively short CPU burst request by them. CPU bound programs will not starve because the I/O-bound programs will relinquish the CPU often to perform I/O.

It is incorrect to use these kinds of queues for semaphore implementation with no busy waiting.

LIFO - processes are awakened in last-in, first-out order, so the process that has waited longest may never be awakened (starvation), particularly if it is waiting on an event that must be produced by a process still behind it in the queue.

Thread library: gives programmer an API for thread management

The library can exist entirely in user space (no system calls; invoking a library function is a local function call in user space), or it can be a kernel-level library supported by the OS (invoking a function in the API results in a system call to the kernel).

Logical vs physical address

Logical address - virtual or abstract address generated by the CPU Physical address - the actual address in main memory; the MMU translates logical addresses into physical addresses at run time

Relation for Multilevel feedback queues and FCFS?

Lowest level of MLFQ is FCFS

What is a Semaphore?

More sophisticated synchronization tool than mutex locks.

What is a race condition and why is it a problem?

Multiple processes manipulate the same data concurrently and the outcome depends on the particular order in which the accesses take place. It is a problem because an update made by one process can be lost or overwritten by another, leaving the shared data inconsistent.
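A Python sketch of the classic example: `counter += 1` is a non-atomic read-modify-write, so without the lock the final value depends on thread interleaving; with the lock the result is always 200000 (the lock here is the fix, shown so the demo stays deterministic):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # remove this lock and updates can be lost
            counter += 1  # read-modify-write on shared data

t1 = threading.Thread(target=increment, args=(100000,))
t2 = threading.Thread(target=increment, args=(100000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000
```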

What are the requirements for a critical section problem?

Mutual exclusion - if one process is executing in its critical section, no other processes can be in their critical section. Progress - if no process is executing in its critical section, and some processes want to enter their critical section, then only the processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely. Bounded waiting - there exists a bound or limit on the number of times that other processes can enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Solutions to critical section problem?

Mutual exclusion - only one process can execute in their critical section at a time Progress - if no other process is in their critical section and there exists some processes that wish to enter their critical section, the selection of which process enters next cannot be postponed indefinitely Bounded waiting - a bound must exist on the number of other processes that are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

What is a critical-section problem?

N processes are accessing a segment of code where processes may change common variables, update a table, or write to a file (the critical section)

The nice command sets the nice value of a process on Linux. Why do some systems allow any user to assign a nice value >= 0 yet allow only the root user to assign nice values < 0.

Nice values < 0 are assigned a higher priority and such systems may not allow non-root processes to assign themselves higher priorities.

How does wait(mutex) ... wait(mutex) result in deadlock?

No signal operation is ever called. The first process decrements the semaphore, enters its critical section, and then blocks on the second wait(mutex). All other processes block on the first wait(mutex), so no process can ever proceed.
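The deadlock can be sketched with a binary semaphore and non-blocking acquires (`blocking=False` stands in for "would block forever"):

```python
import threading

mutex = threading.Semaphore(1)

first = mutex.acquire(blocking=False)    # wait(mutex): succeeds, enters critical section
second = mutex.acquire(blocking=False)   # second wait(mutex): nothing ever signals,
                                         # so a blocking call here would hang forever
print(first, second)  # True False
```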

Relation for RR and SJF?

None

Windows XP threads

One-to-one Kernel-level ETHREAD - executive thread block KTHREAD - kernel thread block TEB - thread environment block

PCS vs SCS scheduling

PCS (process-contention scope) scheduling - contention for the CPU is local to a process; the thread library schedules user threads. SCS (system-contention scope) scheduling - the operating system schedules kernel threads. On systems using the one-to-one model, PCS and SCS are the same.

Primary thread libraries:

POSIX Pthreads (specification not implementation) Win32 Java (JVM)

Preemptive vs nonpreemptive scheduling

Preemptive - allows process to be interrupted during its execution, taking away the cpu and allowing it to be used by another process. Nonpreemptive - ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst.

Process synchronization

Processes may execute concurrently, and a process may be interrupted at any time, leaving its execution (and any update to shared data) partially completed.

What is the advantage for having different time-quantum sizes at different levels of a multilevel queueing system?

Processes that need more frequent servicing, such as interactive processes (editors), can be in a queue with a small time quantum. Processes with no need for frequent servicing can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, and making more efficient use of the computer.

What is locking?

Protecting critical regions via locks

How does mutex lock work?

Protects the critical section by first acquire() a lock then release() the lock. A boolean value determines whether the lock is available. Calls to acquire and release must be atomic.

Linux thread

Referred to as tasks Created using clone() system call

Difference between number of clock cycles for register access by the CPU vs. memory access? How is this addressed?

Register access is possible in one clock cycle while the CPU requires multiple clock cycles to access main memory. This is addressed with caches.

Nonpreemptive kernel

Runs until exits kernel mode, blocks, or voluntarily yields CPU. Essentially free of race conditions in kernel mode.

Which scheduling algorithms could result in starvation?

SJF and Priority

Why do interrupt and dispatch latency times need to be bounded in a hard real-time system?

Interrupt latency is the time needed to save the currently executing instruction, determine the type of interrupt, save the current process state, and then invoke the appropriate interrupt service routine. Dispatch latency is the cost associated with stopping one process and starting another. Both need to be minimized so that real-time tasks receive immediate attention. Furthermore, interrupts are sometimes disabled while kernel data structures are being modified, so an interrupt may not be serviced immediately. For hard real-time systems, the period for which interrupts are disabled must be bounded in order to guarantee the desired quality of service.

Actions taken by kernel during context-switch of kernel level threads.

Saving value of CPU registers being switched out and restoring CPU registers of new thread being scheduled

Relation for Priority and SJF?

Shortest job has highest priority

Two level model

Similar to many-to-many, except a user thread may also be bound to a kernel thread. Examples: IRIX, HP-UX, Tru64 UNIX, Solaris 8 and earlier.

How can we guarantee that no two processes can execute the wait() and signal() operations on the same semaphore at the same time on a single-processor system vs. a multiprocessor system?

Single processor system - disabling interrupts Multiprocessor system - disabling interrupts at all CPUs or by using hardware synchronization techniques such as test_and_set() or compare_and_swap().

Many-to-One model examples

Solaris green threads GNU portable threads

Semantics of fork() and exec() system calls

Some unix systems have two versions of fork(), one that duplicates all threads and one that only duplicates the calling thread. The exec() system call replaces the entire process (including all threads) with the program specified in the parameter.

Advantage of many-to-many as opposed to other multithreading models

Supports concurrency and is not affected by blocking calls. Also doesn't suffer from overhead issues as the one-to-one model does.

What is the main advantage of using swapping?

Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

Synchronous vs. asynchronous threading

Synchronous - parent thread creates a child thread and waits for child thread to complete before continuing Asynchronous - parent thread creates child thread and continues execution.
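A minimal Python illustration of synchronous threading: the parent `join()`s the child, so the parent's work after the join always happens once the child has finished. Omitting the `join()` would make the parent continue asynchronously.

```python
import threading

order = []

def child():
    order.append("child done")

t = threading.Thread(target=child)
t.start()
t.join()                            # parent blocks here (synchronous)
order.append("parent continues")
print(order)  # ['child done', 'parent continues']
```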

What is the main advantage of using dynamic linking?

System libraries can be shared among multiple processes during execution, and each process can link to this shared library to access routine. With this approach all processes that use a library execute only a single copy of library code.

In the reader-writer problem, under what circumstances is it possible for a writer to experience starvation?

The last reader releases the lock and another reader re-acquires the lock before the writer is able to acquire it.

How does the signal() operation associated with monitors differ from the corresponding operation defined for a semaphore?

The signal() operation is not efficient in this sense: if a signal is performed and there are no waiting threads, then the signal is simply ignored and the system does not remember that the signal took place. If a subsequent operation is performed, then the corresponding thread simply blocks. In semaphores, however, every signal results in a corresponding increment of the semaphore value even if there are no waiting threads. A future wait operation would immediately succeed because of the earlier increment.

Difference between many-to-one and two-level multithreading model

The two-level model, like many-to-many, multiplexes many user threads onto a smaller or equal number of kernel threads, but it additionally allows a user thread to be bound to a single kernel thread; the many-to-one model maps all user threads onto just one kernel thread.

A multithreaded web server wishes to keep track of the number of requests it services (hits). Consider the two strategies to prevent a race condition: basic mutex lock when updating hits, and atomic integer. Which is more efficient?

The use of locks is overkill as it requires a system call and possibly putting a process to sleep if the lock is unavailable. An atomic integer, however, provides an atomic update on the hits variable and ensures that there is no race condition on hits. Since it can be accomplished without kernel intervention, atomic integers are more efficient.

If the wait() and signal() semaphore operations are not executed atomically, how can mutual exclusion be violated?

The wait operation always decrements the value of the semaphore. If two wait operations are executed when the semaphore is 1, and they are not performed atomically, then both might proceed to decrement the semaphore value, thus violating mutual exclusion.

What is the purpose of the TLB and how is it implemented?

To reduce the number of trips the CPU makes to memory to access the page table. It is implemented as associative memory using fast hardware cache.

What is meant by transient kernel code and how does it affect the OS?

Transient code means the OS can load code at runtime that is not used frequently such as a device driver. This allows the size of the OS in memory to be dynamic.

Scheduler activations provide upcalls - a communication mechanism from kernel to thread library

Upcalls are useful to m:m and 2 level models.

User threads vs kernel threads

User threads - created and managed in user space. Does not involve the kernel Kernel threads - created and managed within the kernel.

Kernel vs. user threads; when is one better than the other?

User-threads are unknown by the kernel, but kernel is aware of kernel threads. On m:1 and m:m systems, user threads are scheduled by the thread library and the kernel schedules kernel threads. All user threads belong to a process, unlike kernel threads. Kernel threads are more expensive to maintain as they are stored in a kernel data structure.

When is multithreading better than single-threaded solutions?

Web server that services each request in separate threads. A parallelized application (such as matrix multiplication; different parts of matrix are simultaneously being worked on). Interactive GUI program (such as debugger; 1 thread monitors user input, another represents the running application, another monitors performance).

One-to-one examples

Windows NT/2000/XP Linux Solaris 9 and later

How is synchronous threading achieved?

Windows: WaitForSingleObject() Unix: pthread_join() Java: join()

Assume OS maps user-threads to kernel threads using m:m model with mapping done through LWPs. This system allows devs to create real-time threads for use in real-time systems. Is it necessary to bind a real-time thread in LWP?

Yes, timing is crucial to real-time applications. By binding an LWP to a real-time thread you are ensuring the thread will be able to run with minimal delay once it is scheduled.

Assume an OS maps user-level threads to the kernel using the many-to-many model and mapping is done using LWPs. The system also allows program developers to create real-time threads. Is it necessary to bind a real-time thread to an LWP?

Yes. Otherwise, a user thread may have to compete for an available LWP prior to being scheduled. By binding the user thread to an LWP, there is no latency while waiting for an available LWP and the real-time user thread can be scheduled immediately.

A CPU scheduling algorithm determines the order of execution for scheduled processes. Given n processes to be scheduled on one processor, how many different schedules are possible?

n!

Signals are used in UNIX to

notify a process that a particular event has occurred

Thread-specific data

thread local storage (TLS)

How can a binary semaphore be used to implement mutual exclusion among n processes?

With mutex initialized to 1, each of the n processes executes: wait(mutex); critical section; signal(mutex); remainder section. At most one process can hold the semaphore at a time.
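In Python, `threading.Semaphore(1)` plays the role of the binary semaphore, with acquire()/release() as wait()/signal() (the process names are invented):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore, initialized to 1
log = []

def process(name):
    mutex.acquire()              # wait(mutex)
    log.append(f"{name} in critical section")
    mutex.release()              # signal(mutex)
    # remainder section

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 3 - every process got through its critical section
```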

