OS Midterm 1


Throughput

# of processes that complete their execution per time unit

DO ROUND ROBIN EXAMPLES!!!


Turnaround time

amount of time to execute a particular process

What are some of the scheduling algorithms?

FCFS, SJF, SRTF

Message system

processes communicate with each other without sharing the same address space

What is Concurrency

supports more than one task making progress

Operating system goals:

Ease of use; a compromise between individual usability and resource utilization; mobile devices are optimized for usability and battery life

How many links can there be between every pair of communicating processes?

Exactly one link for every pair of processes.

IPC facility provides two operations:

send(message) and receive(message); the message size may be fixed or variable.
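As a sketch of these two operations, a pipe from Python's multiprocessing module (an assumed stand-in for any message-passing IPC facility) gives a send/receive pair with no shared address space:

```python
from multiprocessing import Process, Pipe

def child(conn):
    conn.send("ping")          # send(message): no shared address space needed
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # receive(message) blocks until a message arrives
    p.join()
```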

What is a shared memory system?

shared memory requires communicating processes to establish a region of shared memory. Resides in the address space of the Process. Other processes must attach the shared memory to their address space

What does FCFS stand for? Describe it.

First-Come, First-Served (FCFS) scheduling. Non-preemptive: processes are assigned the CPU in the order they arrive at the ready queue. Advantages: easy to implement; fair (no starvation). Disadvantage: low overall system throughput.
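A tiny worked example of the throughput problem (the burst times are hypothetical): one long job arriving first makes every later job wait.

```python
# FCFS waiting times: each process waits for the total burst time of
# every process ahead of it (arrival order = list order, all at time 0).
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue
        elapsed += burst
    return waits

bursts = [24, 3, 3]                    # hypothetical burst times (ms)
waits = fcfs_waiting_times(bursts)
print(waits)                           # [0, 24, 27]
print(sum(waits) / len(waits))         # average waiting time = 17.0
```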

Name the activities performed by the OS to context-switch between processes.

Save the state of P1, service the interrupt, select the next user process P2, restore the state of P2, and restart the CPU.

What does SRTF stand for? Describe it.

Shortest-Remaining-Time-First. After every interrupt, select the process with the shortest remaining burst time. Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. Advantage: can yield minimum average waiting time. Disadvantage: increased overhead.
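The preemption rule can be sketched as a tick-by-tick simulation (the arrival/burst pairs below are hypothetical, in milliseconds):

```python
def srtf_avg_wait(procs):
    """procs: list of (arrival, burst). Simulate SRTF in 1 ms ticks:
    at every tick, run the arrived process with the least remaining time."""
    remaining = {i: b for i, (_, b) in enumerate(procs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1                                   # CPU idle this tick
            continue
        i = min(ready, key=lambda j: remaining[j])   # preempt if shorter
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            del remaining[i]
    waits = [finish[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(procs)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5
```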

Synchronization Hardware. What are Semaphores?

variable or abstract data type that is used for controlling access, by multiple processes, to a common resource in a concurrent system such as a multiprogramming operating system

Compare the overhead of context switch between threads of the same process and the overhead of context switch between processes.

Context-switching between two threads is faster than between two processes. Switching among threads of the same process is done by calls to library modules and is done very quickly.

Define the difference between preemptive and nonpreemptive scheduling.

(1) Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst. (2) If nonpreemptive scheduling is used in a computer center, a process is capable of keeping other processes waiting for a long time.

What are some benefits of a multi-threading program?

*Responsiveness* (May allow a program to continue running if part of it is blocked). *Resource Sharing* (Sharing code, data, memory, and the resources of process) *Economy* (Allocating memory and resources for process creation is costly, Context switching is faster, More efficient use of multiple CPUs, Easier cooperation among threads) *Scalability* (Threads may be running in parallel on different processors, A single-threaded process can only run on one processor, regardless how many are available)

Describe the actions taken by the kernel to context-switch between processes.

1. In response to a clock interrupt, the OS saves the PC and user stack pointer of the currently executing process, and transfers control to the kernel clock interrupt handler, 2. The clock interrupt handler saves the rest of the registers, as well as other machine state, such as the state of the floating point registers, in the process PCB. 3. The OS invokes the scheduler to determine the next process to execute, 4. The OS then retrieves the state of the next process from its PCB, and restores the registers. This restore operation takes the processor back to the state in which this process was previously interrupted, executing in user code with user mode privileges.

Process Synchronization. What are some requirements/ solutions to the critical section?

A Critical Section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point of time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes.

Provide two programming examples of multithreading giving improved performance over a single-threaded solution.

A Web server that services each request in a separate thread. (2) A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. (3) An interactive GUI program such as a debugger where a thread is used to monitor user input, another thread represents the running application, and a third thread monitors performance.

Provide one programming example in which multithreading does provide better performance than a single-threaded solution and one example in which multithreading does not provide better performance than a single-threaded solution.

A Web server that services each request in a separate thread. (2) A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. Any kind of sequential program is not a good candidate to be threaded, for example, calculating an individual tax return.

What are advantages and disadvantages of each of the following? Synchronous and asynchronous communication Fixed-sized and variable-sized messages Consider both the system level and the programmer level.

A benefit of synchronous communication is that it allows a rendezvous between the sender and receiver. A disadvantage of a blocking send is that a rendezvous may not be required and the message could be delivered asynchronously; as a result, message-passing systems often provide both forms of synchronization. With fixed-size messages, a buffer of a given size can hold a known number of messages (easier for designers, more complicated for users). With variable-sized messages, a buffer can hold a varying number of messages of indefinite length (easier for users, more complicated for designers).

Describe the Priority Scheduling Algorithm.

A priority number (integer) is associated with each process: The CPU is allocated to the process with the highest priority (smallest integer= highest priority): Preemptive, Nonpreemptive. SJF is priority scheduling where priority is the inverse of predicted next CPU burst time. Problem = Starvation - low priority processes may never execute. Solution = Aging - as time progresses increase the priority of the process.

What is non-preemptive scheduling?

A process relinquishes the CPU only voluntarily, by terminating or by making a system call that blocks (e.g., an I/O request). If an interrupt occurs, the CPU is given back to the same process after the interrupt is serviced.

What is Operating System?

A program that acts as an intermediary between a user of a computer and the computer hardware.

What are advantages and disadvantages of using kernel-level threads?

Advantages: Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with few threads. Kernel-level threads are especially good for applications that frequently block. Disadvantages: Kernel-level threads are slow and inefficient; thread operations can be hundreds of times slower than user-level thread operations, since the kernel must manage and schedule threads as well as processes. A full thread control block (TCB) is required for each thread to maintain information about it, so there is significant overhead and increased kernel complexity.

OS is a control program

Controls execution of programs to prevent errors and improper use of the computer

What are the goals of Operating Systems for personal computers and mainframe computers?

Convenience and efficiency

Some computer systems do not provide a privileged mode of operation in hardware. Is it possible to construct a secure operating system for these computer systems? Give arguments both that it is and that it is not possible.

An operating system for a machine of this type would need to remain in control (or monitor mode) at all times. This could be accomplished by two methods: (1) software interpretation of all user programs, where the software interpreter provides, in software, what the hardware does not provide; (2) requiring that all programs be written in high-level languages so that all object code is compiler-produced, with the compiler generating the protection checks that the hardware is missing.

Provide two programming examples of multithreading that would not improve performance over a single-threaded solution.

Any kind of sequential program is not a good candidate to be threaded. An example of this is a program that calculates an individual tax return. (2) Another example is a "shell" program such as the C-shell or Korn shell. Such a program must closely monitor its own working space such as open files, environment variables, and current working directory.

what is interprocess communication?

Cooperating processes require an interprocess communication (IPC) mechanism

Explain Scheduling in the threading system.

Both the many-to-many and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application. Scheduler activations provide upcalls, a communication mechanism from the kernel to the thread library. This communication allows an application to maintain the correct number of kernel threads.

What are scheduling criteria?

CPU utilization: keep the CPU as busy as possible. Throughput: number of processes that complete their execution per time unit. Turnaround time: amount of time to execute a particular process. Waiting time: amount of time a process has been waiting in the ready queue. Response time: amount of time from when a request was submitted until the first response is produced, not the complete output (for time-sharing environments).

What is queuing modeling?

Describes the arrival of processes, and CPU and I/O bursts probabilistically

What role do algorithms play in algorithm evaluation?

Determine criteria, then evaluate algorithms, Deterministic modeling: Type of analytic evaluation, Takes a particular predetermined workload and defines the performance of each algorithm for that workload.

Describe the Round Robin Algorithm.

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units. A timer interrupts every quantum to schedule the next process. Performance: a large q behaves like FIFO; a small q means q must still be large with respect to the context-switch time, otherwise the overhead is too high.
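The mechanics above can be sketched as a short simulation (the burst times and quantum are hypothetical, all processes arriving at time 0):

```python
from collections import deque

def round_robin(bursts, q):
    """Simulate RR with quantum q; returns per-process finish (turnaround)
    times, since every process arrives at time 0."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    t, finish = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # preempted: back to the end of the queue
        else:
            finish[i] = t
    return finish

print(round_robin([24, 3, 3], 4))   # [30, 7, 10]
```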

When a process creates a new process using the fork() system call, which of the following states is shared between the parent process and the child process? a.) Stack b.) Heap c.) Shared memory segments

Only the shared memory segments are shared between the parent process and the newly forked child process. Copies of the stack and the heap are made for the child.
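A quick POSIX-only experiment (os.fork is unavailable on Windows) showing that ordinary heap data is copied into the child rather than shared:

```python
import os

data = [0]           # ordinary heap object: copied into the child on fork()

pid = os.fork()      # POSIX only
if pid == 0:         # child
    data[0] = 99     # mutates the child's private copy
    os._exit(0)

os.waitpid(pid, 0)   # parent waits for the child
print(data[0])       # 0: the parent's heap was not affected
```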

What is preemptive scheduling?

An interrupt can cause the removal of a process from the CPU; after the interrupt is serviced, the CPU may be assigned to any process that is ready to run. Consider access to shared data, preemption while in kernel mode, and interrupts occurring during crucial OS activities.

What is peer to peer computing?

In the peer-to-peer model all nodes in the system are considered peers and thus may act as either clients or servers - or both. A node may request a service, or provide such a service to other peers in the system.

Reasons for cooperating processes?

Information sharing, Computation speedup, Modularity, Convenience

What are two differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other?

The kernel sees every kernel-level thread of every process; if one such thread makes a system call, other threads within that process can still run, and kernel-level threads take advantage of multiprocessors. Switching among kernel-level threads of a process is done via interrupts, and their context switching is slower than for user-level threads but faster than process context switching. Switching among user-level threads of a process is done by calls to library modules and is done very quickly, but if a user-level thread makes a system call, other threads within that process will be blocked, and user-level threads do not take advantage of multiprocessors.

What are the steps to process creation? (be able to show how processes are created.)

Know how to read coding examples with forking.

Can a link be associated with more than two processes

Link is associated with exactly two processes. (1 pair)

How are links established?

Link is established automatically between communicating processes.

Describe the differences among short term, medium term, and long term schedulers.

Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue Short-term scheduler (or CPU scheduler) - selects which process should be executed next and allocates CPU Medium-term scheduler - removes processes from memory to reduce the degree of multiprogramming.

OS is a resource allocator

Manages all resources. Decides between conflicting requests for efficient and fair resource use

what are some multi-threading models?

Many-to-One One-to-One Many-to-Many

Distinguish between PCS and SCS scheduling

PCS scheduling is done local to the process. It is how the thread library schedules threads onto available LWPs. SCS scheduling is the situation where the operating system schedules kernel threads. On systems using either many-to-one or many-to-many, the two scheduling models are fundamentally different. On systems using one-to-one, PCS and SCS are the same.

What role do simulations play in algorithm evaluations?

Queueing models limited, Simulations more accurate. (Programmed model of computer system, Clock is a variable, Gather statistics indicating algorithm performance, Data to drive simulation gathered)

Process Synchronization. What is it?

Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.

What advantage is there in having different time-quantum sizes on different levels of a multilevel queueing system?

Processes which need more frequent servicing, such as interactive processes, can be in a queue with a small q. Processes that are computationally intensive can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, making more efficient use of the CPU.

What main services that OS provides?

Program execution I/O operations File System manipulation Communication Error Detection Resource Allocation Protection

What are some Threading issues?

Semantics of fork() and exec() system calls. Thread cancellation of a target thread. Thread pools. Thread-specific data. Scheduler activations.

Two models of IPC?

Shared memory. Message passing

What are two models of interprocess communication? Briefly describe each of them.

Shared-memory model. Strength: shared-memory communication is faster than the message-passing model when the processes are on the same machine. Weaknesses: (1) different processes must ensure that they are not writing to the same location simultaneously; (2) processes that communicate using shared memory need to address problems of memory protection and synchronization. Message-passing model. Strength: easier to implement than the shared-memory model. Weakness: communication using message passing is slower than shared memory because of the time involved in connection setup.

What does SJF stand for? Describe it.

Shortest-Job-First (SJF) scheduling. Associate with each process the length of its next CPU burst and use these lengths to schedule the process with the shortest time (the earlier the I/O begins, the more work done by the system). Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst. Advantage: SJF is optimal, giving the minimum average waiting time for a given set of processes. Disadvantages: the nonpreemptive nature is not good for time sharing, and the difficulty is knowing the length of the next CPU request (one could ask the user).
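A worked example of that optimality claim (hypothetical burst times, all processes available at time 0): running the jobs shortest-first beats arrival order.

```python
def sjf_waiting_times(bursts):
    # Non-preemptive SJF with every process available at time 0:
    # run jobs in order of increasing burst length.
    order = sorted(bursts)
    waits, elapsed = [], 0
    for burst in order:
        waits.append(elapsed)
        elapsed += burst
    return order, waits

order, waits = sjf_waiting_times([6, 8, 7, 3])   # runs as 3, 6, 7, 8
print(sum(waits) / len(waits))   # 7.0 (FCFS arrival order gives 10.25)
```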

What are user level threads?

Support provided at the user-level Managed above the kernel without kernel support Management is done by thread library Does not take advantage of multiprocessors

What are kernel level threads?

Supported and managed by the OS; virtually all modern general-purpose operating systems support them. The kernel sees every thread of every process. If a thread makes a system call, other threads within that process can run. Switching among threads of a process is done via interrupts. Context switching is slower than for user-level threads but faster than process context switching. Takes advantage of multiprocessors.

What are differences between symmetric and asymmetric multiprocessing? What are three advantages and one disadvantages of multiprocessing?

Symmetric processing treats all processors as equals; I/O can be processed on any of them. Asymmetric processing designates one CPU as the master, which is the only one capable of performing I/O; the master distributes computational work among the other CPUs. advantages: Multiprocessor systems can save money, by sharing power supplies, housings, and peripherals. Can execute programs more quickly and can have increased reliability. disadvantages: Multiprocessor systems are more complex in both hardware and software. Additional CPU cycles are required to manage the cooperation, so per-CPU efficiency goes down.

What is a Client Server?

The client-server model firmly distinguishes the roles of the client and server. Under this model, the client requests services that are provided by servers.

One way to evaluate an algorithm?

The first step in determining which algorithm ( and what parameter settings within that algorithm ) is optimal for a particular operating environment is to determine what criteria are to be used, what goals are to be targeted, and what constraints if any must be applied. For example, one might want to "maximize CPU utilization, subject to a maximum response time of 1 second". Once criteria have been established, then different algorithms can be analyzed and a "best choice" determined. The following sections outline some different methods for determining the "best choice".

Is a link unidirectional or bi-directional?

The link may be unidirectional, but is usually bi-directional.

What are advantages and disadvantages of using user-level threads?

The most obvious advantage of this technique is that a user-level threads package can be implemented on an operating system that does not support threads. Other advantages: user-level threads do not require modification to the operating system. Simple representation: each thread is represented simply by a PC, registers, a stack, and a small control block, all stored in the user process address space. Simple management: creating a thread, switching between threads, and synchronizing between threads can all be done without intervention of the kernel. Fast and efficient: thread switching is not much more expensive than a procedure call. Disadvantages: there is a lack of coordination between threads and the operating system kernel, so a process as a whole gets one time slice irrespective of whether it has one thread or 1000 threads, and it is up to each thread to relinquish control to other threads. User-level threads require non-blocking system calls (i.e., a multithreaded kernel); otherwise the entire process blocks in the kernel even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.

The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: the higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function: Priority = (recent CPU usage / 2) + base, where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that recent CPU usage for process P1 is 40, for process P2 is 18, and for process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?

The priorities assigned to the processes are 80, 69, and 65 respectively. The scheduler lowers the relative priority of CPU-bound processes.
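The arithmetic behind that answer, as a few lines of code:

```python
BASE = 60

def recalculated_priority(recent_cpu_usage):
    # Traditional UNIX rule: priority = (recent CPU usage / 2) + base
    return recent_cpu_usage // 2 + BASE

for name, usage in [("P1", 40), ("P2", 18), ("P3", 10)]:
    print(name, recalculated_priority(usage))   # P1 80, P2 69, P3 65

# The CPU-bound process (highest recent usage) gets the largest number,
# i.e., the lowest relative priority.
```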

Describe four circumstances under which CPU scheduling decisions may take place.

When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call. When a process switches from the running state to the ready state, for example in response to an interrupt. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ). When a process terminates.

Thread Basics

Threads of the same process share the entire process address space as well as all open files. Each thread has its own program counter and CPU register set. Cooperation among threads is much easier, and context switching among the threads of the same process is faster. Note the need to protect critical sections. The programmer defines the threads in a process.

Which of the following components of program state are shared across threads in a multithreaded process? a) Register values b) Code c) Global variables d) Stack memory

Threads share the code segment and global variables (as well as the heap and page table). Register values and stack memory are private to each thread.

Describe three different models of establishing relationship between user-level and kernel-level threads.

Three common ways of establishing a relationship between user threads and kernel threads are: Many-to-One One-to-One Many-to-Many

What is Timer for?

A timer prevents an infinite loop or a process from hogging resources; the OS regains control when the timer fires.

The services and functions provided by an operating system can be divided into two sets.

To provide functions that are helpful to the user. To ensure efficient operation of the system itself.

What is deterministic modeling?

Type of analytic evaluation, Takes a particular predetermined workload and defines the performance of each algorithm for that workload.


Under what circumstances is it better to use kernel-level threads than user-level threads?

User-level threads are threads that the OS is not aware of. They exist entirely within a process, and are scheduled to run within that process's timeslices. The OS is aware of kernel-level threads. Kernel threads are scheduled by the OS's scheduling algorithm, and require a "lightweight" context switch to switch between (that is, registers, PC, and SP must be changed, but the memory context remains the same among kernel threads in the same process). User-level threads are much faster to switch between, as there is no context switch; further, a problem-domain-dependent algorithm can be used to schedule among them. CPU-bound tasks with interdependent computations, or a task that will switch among threads often, might best be handled by user-level threads. Kernel-level threads are scheduled by the OS, and each thread can be granted its own timeslices by the scheduling algorithm. The kernel scheduler can thus make intelligent decisions among threads, and avoid scheduling processes which consist of entirely idle threads (or I/O bound threads). A task that has multiple threads that are I/O bound, or that has many threads (and thus will benefit from the additional timeslices that kernel threads will receive) might best be handled by kernel threads. Kernel-level threads require a system call for the switch to occur; user-level threads do not.

Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?

We assume that the user-level threads are not known to the kernel. In that case, the answer is no, because the scheduling is done at the process level. On the other hand, some operating systems allow user-level threads to be assigned to different kernel-level processes for the purposes of scheduling. In this case the multithreaded solution could be faster.


Describe possible states in which a process can be.

new: The process is being created running: Instructions are being executed waiting: The process is waiting for some event to occur ready: The process is waiting to be assigned to a processor terminated: The process has finished execution

Consider a multiprocessor system and a multithreaded program written using the many-to-many threading model. Let the user-level threads in the program be more numerous than the processors in the system. Discuss the performance implications of the following scenarios. a. The number of kernel threads allocated to the program is less than the number of processors b. The number of kernel threads allocated to the program is equal to the number of processors c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads

When the number of kernel threads is less than the number of processors, then some of the processors would remain idle since the scheduler maps only kernel threads to processors and not user-level threads to processors. When the number of kernel threads is exactly equal to the number of processors, then it is possible that all of the processors might be utilized simultaneously. However, when a kernel-thread blocks inside the kernel (due to a page fault or while invoking system calls), the corresponding processor would remain idle. When there are more kernel threads than processors, a blocked kernel thread could be swapped out in favor of another kernel thread that is ready to execute, thereby increasing the utilization of the multiprocessor system

Assume that an operating system maps user-level threads to the kernel using the many-to-many model and that the mapping is done through the use of LWPs. Furthermore, the system allows program developers to create real-time threads. Is it necessary to bind a real-time thread to an LWP?

Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before running. Consider if a real-time thread is running (is attached to an LWP) and then proceeds to block (i.e. must perform I/O, has been preempted by a higher-priority real-time thread, is waiting for a mutual exclusion lock, etc.) While the real-time thread is blocked, the LWP it was attached to has been assigned to another thread. When the real-time thread has been scheduled to run again, it must first wait to be attached to an LWP. By binding an LWP to a real-time thread you are ensuring the thread will be able to run with minimal delay once it is scheduled.

What is the capacity of a link?

Zero capacity: the link cannot have any messages waiting in it; the sender must wait for the receiver. Bounded capacity: finite length of n messages; the sender must wait if the link is full. Unbounded capacity: infinite length.

What is Peterson's solution?

a concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict, using only shared memory for communication.
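A sketch of the classic two-thread form of the algorithm (the original Peterson's solution handles exactly two participants; the filter algorithm generalizes it). Note this relies on sequentially consistent memory, which CPython's GIL effectively provides; on real hardware the flag/turn accesses would need memory fences.

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0               # shared data protected by the critical section

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(10000):
        flag[me] = True
        turn = other                          # politely let the other go first
        while flag[other] and turn == other:
            pass                              # busy-wait (entry section)
        count += 1                            # critical section
        flag[me] = False                      # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)   # 20000: mutual exclusion preserved
```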

What is a message passing system?

a mechanism to allow processes to communicate and to synchronize their action

What is the bounded-buffer problem?

a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue: an N-cell buffer, where each cell can hold one item. Semaphore mutex is initialized to 1, semaphore full to 0, and semaphore empty to N.
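The three semaphores above map directly onto code. A sketch with Python threads standing in for the two processes (buffer size and item count are arbitrary choices):

```python
import threading
from collections import deque

N = 4                            # buffer capacity (N cells)
buffer = deque()
mutex = threading.Semaphore(1)   # protects the buffer itself
full  = threading.Semaphore(0)   # counts filled cells
empty = threading.Semaphore(N)   # counts empty cells

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()           # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait(full): block if the buffer is empty
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # signal(empty)

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
print(out)   # all ten items pass through the 4-cell buffer in order
```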

What are the process states?

new: The process is being created running: Instructions are being executed waiting: The process is waiting for some event to occur ready: The process is waiting to be assigned to a processor terminated: The process has finished execution

What are some differences between client server and peer-to-peer computing?

In peer-to-peer computing there is no central server: each workstation on the network shares its files equally with the others, and there is no central storage or authentication of users. A client/server network has separate dedicated servers and clients.

Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context switching overhead is 0.1millisecond and that all processes are long-running tasks. What is the CPU utilization for a round-robin scheduler when: a. The time quantum is 1 millisecond b. The time quantum is 10 milliseconds

a. Time quantum of 1 millisecond: every task (the ten I/O-bound tasks plus the CPU-bound task) uses its whole 1 ms quantum, so each cycle does 11 ms of useful work; the I/O operations return in time for each task's next turn. The only lost time is the 11 context switches per cycle, 11 × 0.1 = 1.1 ms, for a cycle length of 12.1 ms. CPU utilization = 11 / 12.1 ≈ 90.9%. b. Time quantum of 10 milliseconds: each I/O-bound task runs only 1 ms before blocking (each costing a 0.1 ms switch), while the CPU-bound task uses its full 10 ms quantum. Per cycle, useful work = 10 × 1 + 10 = 20 ms and total time = 10 × (1 + 0.1) + 10 + 0.1 = 21.1 ms. CPU utilization = 20 / 21.1 ≈ 94.8%.
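The same arithmetic, checkable in a few lines (the function only covers the two quanta asked about in the question):

```python
def rr_utilization(quantum, cs=0.1):
    # 10 I/O-bound tasks (1 ms of CPU, then block) + 1 CPU-bound task.
    if quantum == 1:
        useful = 11 * 1               # every task uses its full 1 ms slice
        total = useful + 11 * cs      # 11 context switches per cycle
    else:  # quantum >= 10: I/O tasks yield after 1 ms; CPU task runs 10 ms
        useful = 10 * 1 + 10
        total = 10 * (1 + cs) + 10 + cs
    return useful / total

print(round(rr_utilization(1), 3))    # 0.909
print(round(rr_utilization(10), 3))   # 0.948
```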

Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes: a. FCFS b. RR

a. FCFS discriminates against short jobs, since any short jobs arriving after long jobs will have a longer waiting time. b. RR treats all jobs equally (giving them equal bursts of CPU time), so short jobs will be able to leave the system faster since they will finish first.

Many CPU scheduling algorithms are parameterized. For example, the RR algorithm requires a parameter to indicate the time slice. Multilevel feedback queues require parameters to define the number of queues, the scheduling algorithms for each queue, the criteria used to move processes between queues, and so on. These algorithms are thus really sets of algorithms (for example, the set of RR algorithms for all time slices, and so on). One set of algorithms may include another (for example, the FCFS algorithm is the RR algorithm with an infinite time quantum). What (if any) relation holds between the following pairs of sets of algorithms? a. Priority and SJF b. Multilevel feedback queues and FCFS c. Priority and FCFS d. RR and SJF

a. The shortest job has the highest priority. b. The lowest level of MLFQ is FCFS. c. FCFS gives the highest priority to the job having been in existence the longest. d. None.

What is Dual mode operation?

allows OS to protect itself and other system components (User mode and kernel mode)

Waiting time

amount of time a process has been waiting in the ready queue

Response time

amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Is the size of a message that the link can accommodate fixed or variable?

Either: the system may support fixed-size messages, variable-size messages, or both.

Data parallelism

distributes subsets of the same data across multiple cores, same operation/task on each

Task parallelism

distributing threads across cores, each thread performing unique operation

User vs. Kernel level thread

If the threads do not need to make system calls, use user-level threads because they are faster; if the threads need to make system calls, kernel-level threads are better.

What is Parallelism

implies a system can perform more than one task simultaneously

CPU utilization

keep the CPU as busy as possible

A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given n processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of n.

n!

