5 CPU Scheduling


Which of the scheduling algorithms could result in starvation?

Shortest job first and priority-based scheduling algorithms could result in starvation.

Soft vs Hard Real-Time Scheduling

Soft real-time scheduling gives priority to real-time tasks over non-real-time tasks. Hard real-time scheduling provides timing guarantees for real-time tasks.

5 Criteria for Scheduling Algorithms

(1) CPU utilization (2) throughput (3) turnaround time (4) waiting time (5) response time

Starvation

A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.

Time Quantum

A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.

Aging

A solution to the problem of indefinite blockage of low-priority processes involves gradually increasing the priority of processes that wait in the system for a long time.
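As an illustrative sketch (not taken from any particular OS), aging can be implemented by periodically boosting the priority of every process still waiting in the ready queue; the field names here are hypothetical:

```python
# Hypothetical aging sketch: on each scheduling tick, every waiting
# process's priority value is decreased (lower value = higher priority),
# so a long-waiting low-priority process eventually reaches the top.

def age_priorities(ready_queue, boost=1, highest=0):
    """ready_queue: list of dicts with a 'priority' field."""
    for proc in ready_queue:
        proc["priority"] = max(highest, proc["priority"] - boost)

queue = [{"pid": 1, "priority": 10}, {"pid": 2, "priority": 3}]
for _ in range(5):           # five ticks during which neither process runs
    age_priorities(queue)
print(queue[0]["priority"])  # 5: the low-priority process has aged upward
```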

How can average turnaround time and maximum waiting time conflict?

Average turnaround time is minimized by executing the shortest tasks first. Such a scheduling policy could, however, starve long-running tasks and thereby increase their waiting time.

How can CPU utilization and response time conflict?

CPU utilization is increased if the overhead associated with context switching is minimized. This overhead can be lowered by performing context switches infrequently. Doing so, however, could increase the response time for processes.

How can I/O device utilization and CPU utilization conflict?

CPU utilization is maximized by running long-running CPU-bound tasks without performing context switches. I/O device utilization is maximized by scheduling I/O-bound jobs as soon as they become ready to run, thereby incurring the overheads of context switches.

Earliest-Deadline-First (EDF) Scheduling

Earliest-deadline-first (EDF) scheduling assigns priorities according to deadline. The earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
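A minimal sketch of the EDF selection rule (the task representation as name/deadline pairs is an assumption for illustration):

```python
# EDF sketch: at each scheduling point, pick the ready task with the
# earliest absolute deadline.

def edf_pick(ready):
    """ready: list of (task_name, absolute_deadline) tuples."""
    return min(ready, key=lambda task: task[1])[0]

ready = [("A", 50), ("B", 30), ("C", 75)]
print(edf_pick(ready))  # B: it has the earliest deadline
```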

Preemptive vs Nonpreemptive Scheduling Algorithms

Preemptive: the CPU can be taken away from a process. Nonpreemptive: a process must voluntarily relinquish control of the CPU. Almost all modern operating systems are preemptive.

Completely Fair Scheduler (CFS)

Linux uses the completely fair scheduler (CFS), which assigns a proportion of CPU processing time to each task. The proportion is based on the virtual runtime (vruntime) value associated with each task.

Load Balancing on Multicore Systems

Load balancing on multicore systems equalizes loads between CPU cores, although migrating threads between cores to balance loads may invalidate cache contents and therefore may increase memory access times.

Run Queue

Most scheduling algorithms maintain a run queue, which lists processes eligible to run on a processor. On multicore systems, there are two general options: (1) each processing core has its own run queue, or (2) a single run queue is shared by all processing cores.

What are advantages of each processing core having its own run queue vs a single run queue?

The advantage of giving each processing core its own run queue is that there is no contention over a single run queue when the scheduler runs concurrently on two or more processors; when a scheduling decision must be made for a core, the scheduler need look no further than its private run queue. A disadvantage of a single run queue is that it must be protected with locks to prevent race conditions: a processing core may be available to run a thread, yet it must first acquire the lock to retrieve the thread from the shared queue. However, load balancing is not an issue with a single run queue, whereas when each processing core has its own run queue, some form of load balancing between the queues is required.

Multicore Processors

Multicore processors place one or more CPUs on the same physical chip, and each CPU may have more than one hardware thread. From the perspective of the operating system, each hardware thread appears to be a logical CPU.

Multilevel Feedback Queues

Multilevel feedback queues are similar to multilevel queues, except that a process may migrate between different queues.

Multilevel Queue Scheduling

Multilevel queue scheduling partitions processes into several separate queues arranged by priority, and the scheduler executes the processes in the highest-priority queue. Different scheduling algorithms may be used in each queue.

Priority Scheduling

Priority scheduling assigns each process a priority, and the CPU is allocated to the process with the highest priority. Processes with the same priority can be scheduled in FCFS order or using RR scheduling.

PCS vs SCS Scheduling

PCS scheduling is done locally to the process: it is how the thread library schedules threads onto available LWPs. SCS scheduling is where the operating system schedules kernel threads. On systems using the many-to-one or many-to-many model, the two scheduling models are fundamentally different; on systems using the one-to-one model, PCS and SCS are the same.

Advantages of having different time-quantum sizes at different levels of a multilevel queueing system?

Processes that need more frequent servicing, for instance, interactive processes such as editors, can be in a queue with a small time quantum. Processes with no need for frequent servicing can be in a queue with a larger quantum, requiring fewer context switches to complete the processing, and thus making more efficient use of the computer.

Proportional Share Scheduling

Proportional share scheduling allocates T shares among all applications. If an application is allocated N shares of time, it is ensured of having N∕T of the total processor time.

Shortest-Job-First (SJF)

SJF is provably optimal, providing the minimum average waiting time. Implementing SJF scheduling is difficult, however, because predicting the length of the next CPU burst is difficult.
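A small sketch of nonpreemptive SJF, under the simplifying assumption that all processes arrive at time 0: run bursts in ascending order and accumulate each job's waiting time.

```python
# Nonpreemptive SJF, all arrivals at time 0 (simplifying assumption):
# the shortest burst runs first; each job waits for everything before it.

def sjf_average_waiting_time(bursts):
    waiting = 0      # total waiting time across all processes
    elapsed = 0      # time already spent running earlier jobs
    for burst in sorted(bursts):
        waiting += elapsed
        elapsed += burst
    return waiting / len(bursts)

# Bursts 6, 8, 7, 3: SJF runs them in order 3, 6, 7, 8
print(sjf_average_waiting_time([6, 8, 7, 3]))  # 7.0
```

Waiting times are 0, 3, 9, and 16 for the four jobs, so the average is 28/4 = 7.0; any other ordering of these bursts gives a larger average.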

Rate-Monotonic Real-Time Scheduling

Rate-monotonic real-time scheduling schedules periodic tasks using a static priority policy with preemption. Priorities are assigned inversely based on period: the shorter the period, the higher the priority.

Round-Robin (RR)

Round-robin (RR) scheduling allocates the CPU to each process for a time quantum. If the process does not relinquish the CPU before its time quantum expires, the process is preempted, and another process is scheduled to run for a time quantum.
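A minimal RR simulation, assuming all processes arrive at time 0 (process IDs and the 4 ms quantum are illustrative):

```python
from collections import deque

# Round-robin sketch: each process runs for at most `quantum` time units,
# then rejoins the back of the ready queue if it still needs CPU time.

def round_robin(bursts, quantum):
    """bursts: {pid: burst_length}. Returns the finish time of each pid."""
    queue = deque(bursts.items())
    clock, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((pid, remaining - run))
        else:
            finish[pid] = clock
    return finish

# Bursts of 24, 3, and 3 with a 4 ms quantum
print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
```

With these numbers, P2 finishes at 7, P3 at 10, and P1 (preempted repeatedly) at 30.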

Scheduling on Solaris

Solaris identifies six unique scheduling classes that are mapped to a global priority. CPU-intensive threads are generally assigned lower priorities (and longer time quanta), and I/O-bound threads are usually assigned higher priorities (with shorter time quanta).

Dispatcher

The dispatcher is the module that gives control of the CPU's core to the process selected by the CPU scheduler. This function involves the following: • Switching context from one process to another • Switching to user mode • Jumping to the proper location in the user program to resume that program

Formulae for Length of Next CPU Burst

The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Let t_n be the length of the nth CPU burst and τ_(n+1) be our predicted value for the next CPU burst. Then, for 0 ≤ α ≤ 1:

τ_(n+1) = α t_n + (1 − α) τ_n
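The exponential average τ_(n+1) = α t_n + (1 − α) τ_n can be transcribed directly; α = 0.5 and the initial guess τ_0 = 10 here are assumptions for illustration:

```python
# Exponential averaging for predicting the next CPU burst:
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
# alpha = 0.5 weights recent history and the old prediction equally.

def predict_next_burst(measured_bursts, tau0=10.0, alpha=0.5):
    tau = tau0
    for t in measured_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Measured bursts 6, 4, 6, 4 after an initial guess of 10:
print(predict_next_burst([6, 4, 6, 4], tau0=10.0))  # 5.0
```

The successive predictions are 8, 6, 6, and finally 5.0; the influence of the initial guess decays geometrically.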

First-Come, First-Served (FCFS)

The simplest scheduling algorithm, but it can cause short processes to wait for very long processes.

CPU Scheduling

The task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.

Scheduling on Windows

Windows scheduling uses a preemptive, 32-level priority scheme to determine the order of thread scheduling.

How do you calculate waiting time for a scheduled process?

turnaround time - burst time, i.e., (finishing time - arrival time) - burst time. (Finish time - burst time is correct only when the process arrives at time 0.)

How do you calculate turnaround time for a scheduled process?

finishing time - arrival time. Remember that turnaround time is finishing time minus arrival time, so you must subtract the arrival time to compute the turnaround time.
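A worked example with made-up numbers, tying the two formulas together:

```python
# Turnaround time = finishing time - arrival time.
# Waiting time    = turnaround time - burst time.

def turnaround(finish, arrival):
    return finish - arrival

def waiting(finish, arrival, burst):
    return turnaround(finish, arrival) - burst

# A process arriving at t=2, needing 5 units of CPU, finishing at t=12:
print(turnaround(12, 2))  # 10: total time spent in the system
print(waiting(12, 2, 5))  # 5: time spent in the ready queue, not running
```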

Why interrupt and dispatch latency times must be bounded in a hard real-time system?

Interrupt latency is the time from the arrival of an interrupt to the start of its service, which involves the following tasks: save the currently executing instruction, determine the type of interrupt, save the current process state, and then invoke the appropriate interrupt service routine. Dispatch latency is the cost associated with stopping one process and starting another. Both interrupt and dispatch latency need to be minimized in order to ensure that real-time tasks receive immediate attention. Furthermore, interrupts are sometimes disabled while kernel data structures are being modified, so an interrupt may not be serviced immediately. For hard real-time systems, the time period for which interrupts are disabled must be bounded in order to guarantee the desired quality of service.

CPU Utilization formula

The CPU utilization of a periodic process Pi is the ratio of its burst to its period, ti∕pi. Alternatively, overall CPU utilization can be measured as the percentage of time the CPU is not idle.
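The per-task ratio ti∕pi can be summed across a periodic task set to check whether the CPU has enough capacity; the two tasks below are hypothetical:

```python
# Utilization of a periodic task is burst/period (t_i / p_i).
# The task set is feasible only if the total utilization is <= 1.

def total_utilization(tasks):
    """tasks: list of (burst t_i, period p_i) pairs."""
    return sum(t / p for t, p in tasks)

# Two hypothetical periodic tasks: (t=20, p=50) and (t=35, p=100)
u = total_utilization([(20, 50), (35, 100)])
print(u)  # 0.75: the CPU has enough spare capacity for this task set
```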
