CSC 456 - Exam I

Advantage/Disadvantage of Spinwait

Cost: wasted clock cycles while it spins and does nothing.
Benefit: no overhead from a context switch.

Given a resource-allocation graph, how can you tell if there's a deadlock?

Given the definition of a resource-allocation graph, it can be shown that if the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist:
• If every resource type involved in the cycle has exactly one instance, then a deadlock has occurred, and each process involved in the cycle is deadlocked. In this case, a cycle in the graph is both a necessary and a sufficient condition for the existence of deadlock.
• If resource types have several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.

Degree of multiprogramming

The number of processes in the mix (i.e., in memory at the same time).

How does time-sharing work?

In a time-sharing system, a user's program is preempted at regular intervals, but due to the relatively slow human reaction time this occurrence is usually transparent to the user.

What is multiprogramming?

It allows several jobs to be in memory at the same time, thus ensuring that the CPU always has a job to execute.

Why is WAIT an option and not just ERROR when running Banker's Algorithm?

It's not the user's fault if resources are unavailable at the time of the request. If the request is legal (not more than MAX), the process shouldn't be penalized with an error; it is just made to wait until a safe sequence can be found.

Most common OS solution for deadlocks. Why?

The Linux/UNIX method is to ignore the problem. Deadlock happens so rarely that it's not worth the cost/overhead of implementing prevention, avoidance, or detection and recovery.

Priority inversion

A low-priority process holds a resource needed by a high-priority process, and a medium-priority process preempts the low-priority process, thereby making the high-priority process wait longer than necessary.

Why does forked process start at the same spot?

fork() makes an exact copy of the PCB; only the process ID differs. The child therefore has the same register values and the same program counter / instruction pointer, which determines the next instruction to execute.

With two instances of the same program running- will their processes be identical?

No. They will share the same text sections, but the data, heap, and stack sections differ.

What state is a process in when waiting for the CPU?

Ready. The wait state is for a process waiting on an event (like an I/O completion).

Describe the suspend state

The OS removes the process from the mix completely, which means its resources are reclaimed, but information about the process (its PCB) is saved. It stops contending for resources/CPU. Might be used to balance out the mix.

What is a device queue?

The list of processes waiting for a particular I/O device. Each device has its own device queue.

Why do we want to run our processes concurrently?

There are several reasons for allowing concurrent execution:
• information sharing
• computation speedup
• modularity
• convenience

How does an OS implement semaphores?

Uniprocessor: disable interrupts. However, if user programs are allowed to use semaphores, disabling interrupts on a single processor removes the OS's ability to take over at any time.
Multiprocessor: use other methods like test-and-set, compare-and-swap, and/or a spinwait to ensure atomic operation. Disabling interrupts across multiple processors is complex and expensive and can greatly diminish performance.
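
For illustration, a minimal C11 sketch (not from the course notes) of a lock built on an atomic test-and-set; the names acquire/release are placeholders:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* test-and-set atomically sets the flag and returns its previous
       value, so this spins until it observes "was clear" */
    while (atomic_flag_test_and_set(&lock))
        ;   /* spinwait */
}

void release(void) {
    atomic_flag_clear(&lock);
}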

Progress

(1.) If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and (2.) this selection cannot be postponed indefinitely.

Cancellation of a target thread may occur in what two different scenarios?

1. Asynchronous cancellation: one thread immediately terminates the target thread.
2. Deferred cancellation: the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
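
A hedged POSIX threads sketch of deferred cancellation; the worker function and the one-second sleep are illustrative:

#include <pthread.h>
#include <unistd.h>

void *worker(void *arg) {
    (void)arg;
    int old;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old);  /* the default */
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   /* explicit cancellation point */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);   /* request cancellation of the target thread */
    pthread_join(t, NULL);
    return 0;
}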

Steps for Banker's Algorithm

1. Check Request <= Need. If not, ERROR (the process exceeded its declared maximum).
2. Check Request <= Available. If not, WAIT.
3. Run the Safety Algorithm: "pretend" to allocate the request, then, with the new Available and Need calculated, see if you can find a safe sequence (taking back each process's resources as it finishes).
   • If a safe sequence is found: GRANT.
   • If not: WAIT.
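
A minimal C sketch of the safety check in step 3, assuming the "pretend" allocation has already been applied to avail, need, and alloc; the sizes N and M and all names are illustrative:

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

bool is_safe(int avail[M], int need[N][M], int alloc[N][M]) {
    int work[M];
    bool finished[N] = { false };
    memcpy(work, avail, sizeof(work));

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];   /* take back its resources */
                finished[i] = true;
                progressed = true;
            }
        }
        if (!progressed) break;   /* nobody else can finish */
    }
    for (int i = 0; i < N; i++)
        if (!finished[i]) return false;   /* no safe sequence exists */
    return true;   /* the finish order found is a safe sequence */
}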

Methods of handling deadlock

1. Deadlock prevention: make one of the four necessary conditions impossible, so deadlock can never happen.
2. Deadlock avoidance: keep the system in a safe state via something like the Banker's Algorithm.
3. Detection and recovery: detect a cycle in the resource-allocation graph and recover, e.g., by killing processes.
4. Ignore the problem.

What are the five areas of multicore programming challenges?

1. Identifying tasks. This involves examining applications to find areas that can be divided into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel on individual cores.
2. Balance. While identifying tasks that can run in parallel, programmers must also ensure that the tasks perform equal work of equal value. In some instances, a certain task may not contribute as much value to the overall process as other tasks. Using a separate execution core to run that task may not be worth the cost.
3. Data splitting. Just as applications are divided into separate tasks, the data accessed and manipulated by the tasks must be divided to run on separate cores.
4. Data dependency. The data accessed by the tasks must be examined for dependencies between two or more tasks. When one task depends on data from another, programmers must ensure that the execution of the tasks is synchronized to accommodate the data dependency.
5. Testing and debugging. When a program is running in parallel on multiple cores, many different execution paths are possible. Testing and debugging such concurrent programs is inherently more difficult than testing and debugging single-threaded applications.

A solution to the critical-section problem must satisfy what three requirements

1. Mutual exclusion 2. Progress 3. Bounded waiting

Four conditions for a deadlock to occur

1. Mutual exclusion - at least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait - a process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption - a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait - a set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Solutions to priority inversion problem

1. Priority inheritance: the low-priority process inherits the priority of the higher-priority process waiting for the resource, so it cannot be preempted by the medium-priority process (see the sketch below).
2. Only allow two priorities.
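
For reference, POSIX exposes priority inheritance through a mutex attribute on systems that support _POSIX_THREAD_PRIO_INHERIT; a minimal sketch (function name illustrative):

#include <pthread.h>

pthread_mutex_t m;

void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* a low-priority holder of m temporarily inherits the priority of
       the highest-priority thread blocked on m */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&m, &attr);
    pthread_mutexattr_destroy(&attr);
}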

What are the four major categories for benefits of multithreaded programming?

1. Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. This quality is especially useful in designing user interfaces. For instance, consider what happens when a user clicks a button that results in the performance of a time-consuming operation. A single-threaded application would be unresponsive to the user until the operation had completed. In contrast, if the time-consuming operation is performed in a separate thread, the application remains responsive to the user.
2. Resource sharing. Processes can only share resources through techniques such as shared memory and message passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Empirically gauging the difference in overhead can be difficult, but in general it is significantly more time consuming to create and manage processes than threads.
4. Scalability. The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may be running in parallel on different processing cores. A single-threaded process can run on only one processor, regardless how many are available.

Describe the Bakery Algorithm

1. Variables initialized for all processes: choosing[n] = false, num[n] = {0}.
2. Entry section: process i sets choosing[i] = true while it takes a number one greater than max(num) (the maximum number in the array), then sets choosing[i] = false when done. Process i then loops through all other processes j, spinwaiting while j is choosing, and spinwaiting while j holds a nonzero number less than i's. If two numbers are the same, the process with the lower PID enters the CS first.
3. CS.
4. Exit section: num[i] = 0, so i no longer has a say in who enters the CS.
5. RS.
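
A minimal C sketch of the steps above; names are illustrative, and a real implementation would also need atomic accesses/memory barriers that are omitted here:

#define N 8                 /* number of processes (illustrative) */
volatile int choosing[N];   /* choosing[i]: i is picking a number */
volatile int num[N];        /* num[i]: i's ticket; 0 = not interested */

static int max_num(void) {
    int m = 0;
    for (int k = 0; k < N; k++)
        if (num[k] > m) m = num[k];
    return m;
}

void enter(int i) {                       /* entry section */
    choosing[i] = 1;
    num[i] = max_num() + 1;               /* take the next ticket */
    choosing[i] = 0;
    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                             /* spinwait while j picks */
        /* spinwait while j holds a smaller ticket; lower PID wins ties */
        while (num[j] != 0 &&
               (num[j] < num[i] || (num[j] == num[i] && j < i)))
            ;
    }
}

void leave(int i) {                       /* exit section */
    num[i] = 0;                           /* no longer has a say */
}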

What is a counting semaphore?

A counting semaphore allows you to initialize the semaphore variable S to the number of resources available. When the count reaches 0, the next thread to call wait() will engage in a spinwait until a call to signal() is made.
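
A minimal sketch with POSIX semaphores, which block rather than spin; the pool size of 3 and all names are illustrative:

#include <pthread.h>
#include <semaphore.h>

sem_t pool;   /* counts the identical resources still available */

void *worker(void *arg) {
    (void)arg;
    sem_wait(&pool);   /* S--; blocks when the count has reached 0 */
    /* ... use one of the 3 resources ... */
    sem_post(&pool);   /* S++; wakes a waiter, if any */
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 3);   /* 0 = shared among threads of this process */
    /* ... create worker threads ... */
    return 0;
}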

Describe Independent vs Cooperating process

A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system.

What are the components of a running process?

A program becomes a process when it's loaded into memory.
• Text section - the actual code of the program
• Data section - global variables
• Stack - memory allocation for local variables, function calls, return pointers
• Heap (potentially) - dynamically allocated memory

What is a binary semaphore?

A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). It is used to control access to a common resource by multiple processes in a concurrent system, such as a multitasking operating system. wait() decrements the counter (S--); signal() increments it (S++).
Typical usage, to force Process 2's statement to run after Process 1's: initialize semaphore S = 0.
Process 1: statement1; signal(S); // increments S
Process 2: wait(S); statement2; // spinwait while S <= 0; after signal(S) increments S, wait() moves forward but decrements the counter back to 0, so the next call to wait() will again find S <= 0
Semaphores are given to the user by the operating system to help with synchronization.
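
The same ordering idiom, sketched with POSIX semaphores and threads standing in for the two processes (names illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;   /* initialized to 0: statement2 must wait for statement1 */

void *p1(void *arg) {
    (void)arg;
    printf("statement1\n");
    sem_post(&S);            /* signal(S) */
    return NULL;
}

void *p2(void *arg) {
    (void)arg;
    sem_wait(&S);            /* wait(S): blocks until p1 signals */
    printf("statement2\n");
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&S, 0, 0);
    pthread_create(&b, NULL, p2, NULL);
    pthread_create(&a, NULL, p1, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}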

What is a race condition and how can you prevent it?

A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place. To guard against a race condition, we need to ensure that only one process at a time can be manipulating the relevant data. To make such a guarantee, we require that the processes be synchronized in some way.
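
A classic illustration with POSIX threads: counter++ is a load-modify-store sequence, so without the mutex two threads can interleave it and lose increments; with it, the result no longer depends on the order of access:

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *inc(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                    /* the critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter);   /* always 2000000 with the lock held */
    return 0;
}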

What is Peterson's Solution?

A solution to the CS problem that accounts for mutual exclusion, progress, and bounded wait.

What does fork() do?

A system call to create an exact copy of a running process. The only difference is the PID.

What is a target thread?

A thread that is to be canceled. Example- When a user clicks the stop button on a browser, the threads loading images should be cancelled.

Describe the two types of parallelism.

Data parallelism focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core. Task parallelism involves distributing not data but tasks (threads) across multiple computing cores. Each thread is performing a unique operation. Different threads may be operating on the same data, or they may be operating on different data.

What happens when parent process is killed?

If orphan processes are NOT allowed: kill all descendants of the parent.
If orphan processes are allowed: the system usually has a predefined process that orphans get assigned to (e.g., init on UNIX).

What does fork() return?

In the parent, fork() returns the child's PID; in the child, it returns 0 (see the sketch below).
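
A minimal C sketch showing both return values:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* both processes resume right here */
    if (pid == 0) {
        printf("child: fork() returned 0, my PID is %d\n", getpid());
    } else if (pid > 0) {
        printf("parent: fork() returned child PID %d\n", pid);
        wait(NULL);
    } else {
        perror("fork");          /* fork failed */
    }
    return 0;
}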

Mutual exclusion

If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

How can process avoid busywaiting while waiting for a semaphore variable?

Instead of busywaiting, a process could block itself, which would remove it from the run state and place it in the wait state in a queue associated with the semaphore. The control is passed to the CPU scheduler. Then when a call to signal() is made, a call to wakeup() would take a process out of that queue and add it to the ready queue. The semaphore implementation would change from a simple int variable to a struct which includes the int value and a linked list for processes in the associated waiting queue.
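
A user-level approximation of this scheme, sketched with a mutex and condition variable standing in for block()/wakeup(); the trailing underscores are just to avoid clashing with POSIX's own sem_* names:

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  cv;   /* stands in for the semaphore's waiting queue */
} sem;

void sem_init_(sem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void sem_wait_(sem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->cv, &s->m);   /* block: give up the CPU */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void sem_signal_(sem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);            /* wakeup() one waiter */
    pthread_mutex_unlock(&s->m);
}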

Two process solution to CS - Peterson's solution. Does this satisfy all three requirements?

boolean flag[2]; // intention to enter CS
int turn;
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j); // wait
    CS
    flag[i] = false;
    RS
} while (1);

Mutual exclusion: yes. A process must wait on the turn variable to enter, and turn can only be set to i or j, not both.
Progress: yes. A process sets its flag to false when leaving the CS, so it has no say in who enters the CS while it is in the RS.
Bounded wait: yes. By setting turn to j in the entry section, a waiting process gets a chance to enter before the other process enters again.

Two process solution to CS - Alg #1. Does this satisfy all three requirements?

int turn; // 0 or 1
do {
    while (turn != i); // wait
    CS
    turn = j;
    RS
} while (1);

Mutual exclusion: yes - it is either i's or j's turn.
Progress: no - only processes not in the RS should have a say in who enters the CS. Here, process i can be blocked from entering (turn is set to j) even though j is in its RS and doesn't want to enter.
Bounded wait: yes - turn is set to j when leaving the CS.

Two process solution to CS - Alg #2. Does this satisfy all three requirements?

boolean flag[2]; // intention to enter CS
do {
    flag[i] = true;
    while (flag[j]); // wait
    CS
    flag[i] = false;
    RS
} while (1);

Mutual exclusion: yes - i is blocked if j's flag is up.
Progress: no - both processes could want to enter and be stuck in the entry section (both flags up).
Bounded wait: yes - flag is set to false when leaving the CS.

Describe how raising and lowering the degree of multiprogramming has an effect on performance.

Performance increases as we raise the degree of multiprogramming until we over-allocate resources. Then the system and CPU utilization will go way down, and a process might need to be removed from the mix and switched to the suspend state. If not over-allocated, a high degree of multiprogramming is good, because the CPU and resources are being used as much as possible.

Explain why implementing synchronization primitives by disabling interrupts is not appropriate in a single-processor system if the synchronization primitives are to be used in user-level programs.

Synchronization primitives such as test-and-set and compare-and-swap are designed to be used by the OS, not the user. If user-level code could disable interrupts, a process could maintain complete control over the CPU until it was ready to relinquish it. The OS should always be able to take over, no matter what.

Why is message passing preferred over shared memory model on multicore systems?

Recent research on systems with several processing cores indicates that message passing provides better performance than shared memory on such systems. Shared memory suffers from cache coherence issues, which arise because shared data migrate among the several caches. In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline that ensures changes in the values of shared operands (data) are propagated throughout the system in a timely fashion.

Deadlock prevention

Remove at least one of the four necessary conditions for a deadlock:
1. Mutual exclusion - at least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource, and any other process requesting it must be delayed until it is released. (Too important; we probably shouldn't get rid of this one.)
2. Hold and wait - a process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes. Could use an all-or-nothing strategy, but that could lead to starvation (never lucky enough to get all the resources at once).
3. No preemption - resources cannot be preempted; a resource can be released only voluntarily by the process holding it, after that process has completed its task. You could allow preemption of resources by higher-priority processes, but that could also lead to starvation (the same process is continually preempted).
4. Circular wait - a set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. Can be broken by imposing a total ordering on resource types and requiring processes to request resources in increasing order (see the lock-ordering sketch below).
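
A minimal POSIX threads sketch of breaking circular wait by ordering (names illustrative): every thread that needs both locks takes A before B, so one thread holding B while waiting for A can never arise:

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void use_both(void) {
    pthread_mutex_lock(&A);   /* always first */
    pthread_mutex_lock(&B);   /* always second */
    /* ... use both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
}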

Deadlock avoidance

Requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime. With this additional knowledge, it can decide for each request whether or not the process should wait. The Banker's Algorithm is a deadlock avoidance algorithm.

What are the two fundamental models of interprocess communication?

Shared memory and message passing. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

Describe concurrent execution of threads on a single-core vs multi-core system.

Single-core: threads execute sequentially, interleaved by the scheduler (dynamically or statically, depending on the algorithm used); they never execute at the exact same time. Multicore: threads can execute at the same time on different cores.

What is cascading termination?

Some systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated.

Describe the behavior of fork() in a multithreaded process.

Some systems have two options for how fork() behaves:
1. fork() duplicates all threads in the process.
2. fork() duplicates only the thread that invoked it.
If a call to exec() happens right after forking, option 2 is better: there is no point in duplicating all the threads only for them to be immediately destroyed/overwritten by the program loaded with exec().

What are the multithreading models?

Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. There are three common ways of establishing such a relationship:
• Many-to-one model: very few systems continue to use this model because of its inability to take advantage of multiple processing cores. It allows the developer to create as many user threads as desired, but it does not result in true concurrency, because the kernel can schedule only one thread at a time.
• One-to-one model: maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call, and it allows multiple threads to run in parallel on multiprocessors. The only drawback is that creating a user thread requires creating the corresponding kernel thread, which is overhead, so the number allowed is usually restricted.
• Many-to-many model: multiplexes many user-level threads to a smaller or equal number of kernel threads. The two-level model is a variation on the many-to-many model: it still multiplexes many user-level threads to a smaller or equal number of kernel threads but also allows a user-level thread to be bound to a kernel thread.

Who can affect the mix of IO-bound and CPU-bound processes?

The Long-term scheduler and the Medium-term scheduler, if it exists.

What does it mean for the degree of multiprogramming to be stable?

The average rate of process creation must be equal to the average departure rate of processes leaving the system.

What is the critical section problem?

The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.

Describe exec() and wait()

The exec() system call loads a binary file into memory (destroying the memory image of the program containing the exec() system call) and starts its execution. In this manner, the two processes are able to communicate and then go their separate ways. A wait() system call will move the parent process off the ready queue until the termination of the child. Because the call to exec() overlays the process's address space with a new program, the call to exec() does not return control unless an error occurs.
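
A minimal C sketch tying fork(), exec(), and wait(&status) together; the choice of ls is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the child's image */
        perror("execlp");    /* reached only if exec fails */
        exit(1);
    }
    int status;
    wait(&status);           /* parent waits for the child to terminate */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}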

What is the objective of multiprogramming and timesharing? How does it meet those objectives?

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

If a parent process calls wait(), what happens? What does the &status variable return?

The parent process will enter the wait state until the child process finishes executing. The status variable will contain the exit status of the child process.

What is a zombie process?

The process has terminated, but its parent has not yet called wait() to collect its status. The child process's resources are deallocated, but the process remains in the process table, which contains its exit status. All processes transition to this state when they terminate, but generally they exist as zombies only briefly. Once the parent calls wait(), the process identifier of the zombie process and its entry in the process table are released.

What is the producer consumer problem?

The producer-consumer problem concerns synchronization between two processes: the producer generates or serves content (such as a web server) to be used by the consumer (such as a web client). A buffer established in a region of shared memory can be used to track work that has been produced but not yet consumed, so that the consumer doesn't try to consume work that hasn't been produced yet, and a balance can be maintained between the two. Using blocking send() and receive() with a message-passing model (instead of shared memory) also resolves the problem, because the rendezvous point forces synchronization.
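
A minimal bounded-buffer sketch using two counting semaphores plus a mutex (POSIX; SIZE and all names illustrative):

#include <pthread.h>
#include <semaphore.h>

#define SIZE 8                /* buffer capacity (illustrative) */
int buffer[SIZE];
int in = 0, out = 0;

sem_t empty, full;            /* counts of free and filled slots */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void init(void) {
    sem_init(&empty, 0, SIZE);   /* all slots start free */
    sem_init(&full, 0, 0);       /* nothing produced yet */
}

void produce(int item) {
    sem_wait(&empty);            /* block if no free slot */
    pthread_mutex_lock(&m);
    buffer[in] = item;
    in = (in + 1) % SIZE;
    pthread_mutex_unlock(&m);
    sem_post(&full);             /* one more item to consume */
}

int consume(void) {
    sem_wait(&full);             /* block if nothing produced yet */
    pthread_mutex_lock(&m);
    int item = buffer[out];
    out = (out + 1) % SIZE;
    pthread_mutex_unlock(&m);
    sem_post(&empty);            /* one more free slot */
    return item;
}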

Why might a CPU-bound process be upset about being put into an IO-bound system?

The system has too many IO-bound processes, so any time the CPU-bound process makes an I/O request, it has to wait in a long I/O queue and can be slowed by the convoy effect, where the whole system is slowed down by a few greedy/slow processes.

Bounded waiting

There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

What reasons would a process leave the Run state?

When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request.

When to utilize spinwait

When busy waiting costs fewer clock cycles (the process stays in the run state) than a context switch would. If the CS is really long, then a busy wait might not be the best choice.

Reasons for interprocess communication?

• Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.
• Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.

Kernel data structures prone to race conditions

• List of open files
• Memory allocation
• Process lists
• Interrupt handling

Describe the process states.

• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.

List items in the Process Control Block

• Process state
• Program counter - address of the next instruction to be executed for this process
• CPU register contents
• CPU scheduling information - process priority, pointers to scheduling queues, and any other scheduling parameters
• Memory management information
• Accounting information - amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on
• I/O status information - the list of I/O devices allocated to the process, a list of open files, and so on

