Chapter 6, CPU Scheduling


Round-Robin Scheduling

Similar to FCFS, but preemption is added to enable the system to switch between processes. A time quantum is defined (typically 10 to 100 ms in length). The ready queue is treated as a circular queue, with the CPU allocated to each process for an interval of up to 1 time quantum. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. A process may have a CPU burst shorter than the quantum and will release the CPU voluntarily, OR its burst will be longer and the timer interrupt will go off, preempting it. The average waiting time for RR is often long. In RR, no process is allocated the CPU for more than 1 time quantum in a row UNLESS it is the only runnable process. RR is preemptive.
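As a concrete illustration, here is a minimal simulation sketch of the mechanism described above. The process names and burst lengths are hypothetical (they follow the common textbook example of bursts 24, 3, and 3 ms with a quantum of 4 ms):

```python
# Minimal round-robin sketch. Process IDs and burst lengths are hypothetical;
# times are in ms. The ready queue is a FIFO treated circularly.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: CPU burst length}; returns (pid, completion time) pairs."""
    queue = deque(bursts)            # ready queue, head = next to dispatch
    remaining = dict(bursts)
    clock, done = 0, []
    while queue:
        pid = queue.popleft()        # scheduler picks the process at the head
        run = min(quantum, remaining[pid])
        clock += run                 # process runs for at most one quantum
        remaining[pid] -= run
        if remaining[pid] == 0:      # burst shorter than quantum: CPU released voluntarily
            done.append((pid, clock))
        else:                        # timer interrupt fires: back to the tail
            queue.append(pid)
    return done

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# -> [('P2', 7), ('P3', 10), ('P1', 30)]
```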

Process Affinity

(1) If a process migrates to another processor, the cache contents it built up MUST be invalidated on the first processor and repopulated on the second. (2) Because this cost is high, we want to AVOID migrating a process between processors. A process therefore HAS an affinity for the processor on which it is currently running. Affinity is also affected by memory architecture: on NUMA systems the CPU has faster access to some parts of memory than to others, so the OS prefers to keep a process on a CPU close to the memory it is using.

Preemptive Scheduling (interrupted)

-Has a cost when processes share data. -Preemption temporarily interrupts a task carried out by the computer system with the intention of resuming the task at a later time; it is used to switch to a higher-priority task. BUT this can cause problems, especially if a process is interrupted while updating shared data and leaves it in an inconsistent state, OR if the kernel is busy with an activity on behalf of a process. To deal with this, some operating systems wait either for a system call to complete OR for an I/O block to take place before doing a context switch, so that the kernel will NOT preempt while kernel data structures are in an inconsistent state. For sections of code that must not be accessed concurrently by several processes, interrupts are disabled at entry and re-enabled at exit.

Shortest-Remaining-Time-First Scheduling (preemptive SJF scheduling)

A preemptive SJF algorithm will preempt the currently executing process when the next CPU burst of a newly arrived process is shorter than what REMAINS of the currently executing process. (Tricky: compare against the remaining time, not the original burst length.)

Priority Scheduling

A priority is associated with each process; the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. (SJF is a special case of priority scheduling: the larger the next CPU burst, the lower the priority.) Priorities can be defined internally, using measurable quantities, or set by external criteria. Priority scheduling can be preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. -Preemptive priority scheduling will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. -Nonpreemptive priority scheduling will simply put the new process at the head of the ready queue.
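A minimal nonpreemptive sketch, assuming all processes arrive at time 0 and following the common convention that a lower number means a higher priority (the burst lengths and priorities below are the standard textbook example):

```python
# Nonpreemptive priority scheduling sketch; lower number = higher priority.
def priority_schedule(procs):
    """procs: list of (pid, burst, priority) tuples, all arriving at time 0."""
    order = sorted(procs, key=lambda p: p[2])  # highest priority runs first
    clock, waits = 0, {}
    for pid, burst, _ in order:
        waits[pid] = clock                     # time spent in the ready queue
        clock += burst
    return [p[0] for p in order], waits

order, waits = priority_schedule(
    [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)])
print(order)                                   # ['P2', 'P5', 'P1', 'P3', 'P4']
print(sum(waits.values()) / len(waits))        # average waiting time: 8.2
```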

Hard Affinity

A process WILL never migrate to other processors.

Push Migration

A specific task periodically checks the LOAD on each processor and, if it finds an imbalance, EVENLY distributes the load by moving processes FROM overloaded processors to idle or less busy processors.

Scheduling in Virtualization

A system with virtualization, even one with a single CPU, ACTs like a multiprocessor system. The virtualization software presents one or more virtual CPUs to each virtual machine and schedules the USE of the physical CPUs among these virtual machines. Most environments have 1 host and many guest operating systems. The host creates and manages the virtual machines, and each virtual machine has a guest OS running within it that MAY be used for specific use cases. A VM must accept whatever CPU resources it receives: each individual virtualized OS receives only a portion of the available CPU cycles. Virtualization can therefore undo the good effects of the scheduling algorithms inside the guests.

Coarse-Grained Multithreading

A thread executes on a processor UNTIL a long-latency event such as a memory stall occurs. Because of the delay, the processor MUST switch to another thread to begin execution. The cost of switching between threads is high, since the instruction pipeline must be flushed before the new thread can begin executing.

Why CPU Scheduling?

By switching the CPU among processes, the OS can make the computer productive. Threads are a part of the process model; on an OS that supports them, it is kernel-level threads, not processes, that are scheduled by the OS.

Preemptive Scheduling

CPU-scheduling decisions take place under 4 circumstances: 1. When a process switches from the running state to the waiting state (e.g. an I/O request, or wait()) 2. When a process switches from the running state to the ready state (e.g. when an interrupt occurs) 3. When a process switches from the waiting state to the ready state (e.g. it completes I/O) 4. When the process terminates. For 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. Scheduling only under circumstances 1 and 4 is nonpreemptive (cooperative); scheduling under all four is preemptive.

Ways to Multithread a Processor

Coarse-Grained and Fine-Grained Multithreading

Coarse vs. Fine Grained

Fine-grained multithreading issues instructions for a DIFFERENT thread after every cycle; coarse-grained multithreading switches threads only when a long-latency event occurs.

Load Sharing

If multiple CPUs are available, load sharing becomes possible: the load is distributed across multiple processors, WHICH are assumed to be identical (homogeneous).

Queuing Models (network analysis)

In some systems, there is no static set of processes to use for deterministic modeling. What can be determined is the distribution of CPU and I/O bursts (these can be measured and then estimated or approximated). Queueing-network analysis is the study of using arrival rates and service rates to obtain measures such as average queue length and average waiting time. The computer is modeled as a network of servers, each with a queue of waiting processes: the CPU is a server with its ready queue, and each I/O system is a server with its device queues. Knowing arrival and service rates, queue lengths and waiting times can be related by Little's formula, n = lambda × W (see the Little's Formula card).

Load Balancing

Keeps the workload evenly distributed across all processors in a symmetric multiprocessing system. (1) So that one or more processors do not sit idle while another processor does too much. (2) LB is necessary only where each processor HAS its own queue of ready processes (a common run queue does not require it). There ARE two approaches to load balancing: push migration and pull migration (see the sketch below). Load balancing may counteract the benefits of processor affinity (i.e. the advantage of keeping a process on the same CPU).
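As referenced above, a push-migration sketch under simplifying assumptions (load is measured as run-queue length only; CPU IDs and queue contents are hypothetical):

```python
# Push-migration sketch: a periodic balancer task moves work from the most
# loaded run queue to the least loaded one until the imbalance is <= 1 task.
def push_migrate(run_queues):
    """run_queues: {cpu_id: [pids]}; mutated in place toward an even load."""
    while True:
        busiest = max(run_queues, key=lambda c: len(run_queues[c]))
        idlest = min(run_queues, key=lambda c: len(run_queues[c]))
        if len(run_queues[busiest]) - len(run_queues[idlest]) <= 1:
            return run_queues                                 # balanced enough
        run_queues[idlest].append(run_queues[busiest].pop())  # migrate one task

print(push_migrate({0: ["A", "B", "C", "D"], 1: [], 2: ["E"]}))
# -> {0: ['A', 'B'], 1: ['D', 'C'], 2: ['E']}
```

Note that the migrated tasks lose any cache state built up on CPU 0, which is exactly the affinity trade-off mentioned above.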

Implementation

Even simulations are of limited accuracy; the only completely accurate way to evaluate a scheduling algorithm is to code it up, put it in the OS, and see how it works. The difficulty with this approach is the HIGH cost of coding the algorithm and modifying the OS, and the reaction of users, who just want their processes executed and their results returned. Another difficulty is that the environment in which the algorithm is used will change; the most flexible scheduling algorithms are those that can be altered by system managers or users for a specific application.

Pull Migration

Occurs when an idle processor PULLs a waiting task from a busy processor.

Thread Scheduling

On an OS that supports threads, it is kernel-level threads that are scheduled by the OS. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, a user-level thread must be mapped to a kernel-level thread, and this mapping may use a lightweight process (LWP).

Nonpreemptive (or Cooperative) Scheduling

Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either BY terminating or by switching to the waiting state (this method was used by Windows 3.x and by pre-OS X versions of the Mac OS). Cooperative scheduling is the only method that can be used on hardware platforms that lack the timer required for preemptive scheduling.

Approach to Multiple-Processor Scheduling

One approach to multiprocessor CPU scheduling is asymmetric multiprocessing: a single processor, the MASTER server, handles all scheduling decisions, and the other processors execute only user code. Only the master accesses the system data structures, which reduces the need for data sharing. In symmetric multiprocessing (SMP), each processor is self-scheduling: either all processes share a COMMON ready queue, or each processor has its own private queue of ready processes. Most modern OSs support SMP.

Multilevel Queue Scheduling Algorithm

Partitions the ready queue into several separate queues. Processes are assigned to one queue based on some property such as memory size or process priority, and each process is PERMANENTLY assigned to its queue. Each queue has its own scheduling algorithm. In addition, there is commonly fixed-priority preemptive scheduling between the queues: for example, the foreground queue has absolute priority over the background queue. Another example is separate queues for system processes, interactive processes, batch processes, and student processes, where each queue has absolute priority over the lower-priority queues: no process in a LOWER priority queue can RUN unless the HIGHER priority queues are empty, and if a process in a LOWER priority queue is running when a process enters a higher-priority queue, it will be preempted in favour of the higher-priority process. In addition, if a process in a lower-priority queue WAITS too long, it may be moved to a higher-priority queue to prevent starvation (this aging turns the scheme into a multilevel feedback queue). A multilevel queue scheduler is defined by: the number of queues, the scheduling algorithm for EACH queue, the method for upgrading or demoting a process between queues, and the method for determining which queue a process will enter. A selection sketch follows below.
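A minimal sketch of the fixed-priority selection between queues, using the four hypothetical classes from the example above:

```python
# Multilevel queue sketch: the scheduler always serves the highest nonempty
# queue; within a queue, FCFS is used here for simplicity (each queue could
# run its own algorithm, e.g. RR for interactive, FCFS for batch).
QUEUE_ORDER = ["system", "interactive", "batch", "student"]  # high -> low

def pick_next(ready):
    """ready: {class_name: [pids]}; returns the next pid to run, or None."""
    for cls in QUEUE_ORDER:
        if ready.get(cls):       # first nonempty queue has absolute priority
            return ready[cls].pop(0)
    return None

ready = {"system": [], "interactive": ["P7"], "batch": ["P2", "P4"], "student": []}
print(pick_next(ready))  # -> 'P7'; no batch or student process runs while P7 waits
```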

Multicore Processors

Placing multiple processor cores on the same physical chip. Each core appears to the OS as a separate physical processor. SMP systems that use multicore processors ARE faster and consume less power than systems with one processor per chip. BUT they complicate scheduling: (1) a memory stall may occur, e.g. on a cache miss; (2) the processor can spend up to 50% of its time waiting for data to become available. To address this, multithreaded processor cores HAVE two or more hardware threads assigned to each core, SO that if one hardware thread stalls waiting for memory, the core can switch to the OTHER thread. From the OS's point of view, each hardware thread appears as a logical processor. With eight cores per chip and four hardware threads per core, the OS sees 32 logical processors.

FIFO (first in, first out) Scheduling Algorithm

The process that requests the CPU first is allocated the CPU first (this is first-come, first-served, FCFS). It is managed with a FIFO queue: when the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. The negative is that the average WAITING time is often quite long; it is the mean of the times the individual processes spend waiting in the ready queue, and it depends heavily on the order in which processes arrive.
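A worked sketch of the waiting-time arithmetic, using the hypothetical textbook bursts of 24, 3, and 3 ms; reversing the arrival order shows how strongly FCFS waiting time depends on which process arrives first:

```python
# FCFS average waiting time for processes that all arrive at time 0.
def fcfs_avg_wait(bursts):
    clock, total_wait = 0, 0
    for burst in bursts:       # served strictly in arrival order
        total_wait += clock    # each process waits for all earlier bursts
        clock += burst
    return total_wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # 17.0 ms: long job first (convoy effect)
print(fcfs_avg_wait([3, 3, 24]))  # 3.0 ms: same jobs, short ones first
```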

Simulation

Programming a model of the computer system. There is a variable representing a clock; as it is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler. From this we can obtain performance statistics for each algorithm. Simulations can be expensive, however, requiring hours of computer time and large amounts of storage.

Solaris Scheduling

Solaris USES priority-based thread scheduling, where each thread belongs to one of six classes: time sharing, interactive, real time, system, fixed priority, and fair share. Within each class there are different priorities and different scheduling algorithms. The default scheduling class is time sharing. -The scheduling policy for the time-sharing class dynamically alters priorities and assigns time slices of different lengths USING a multilevel feedback queue: the higher the priority, the smaller the time slice; the lower the priority, the larger the time slice. Interactive processes ARE given higher priority; CPU-bound processes get lower priority. For example, priority 0 has a time quantum of 200 ms. Threads in the real-time class are GIVEN the highest priority (so a real-time process has a guaranteed response from the system); very few processes belong to this class. Once established, the priority of a system thread CANNOT change. The fixed-priority and fair-share classes were introduced in Solaris 9. The fair-share class USES CPU shares INSTEAD of priorities to make scheduling decisions; the shares are allocated to a set of processes. Each scheduling class has a set of priorities, and the scheduler selects the thread with the highest global priority to run. The thread runs on the CPU until it blocks, uses up its time slice, or is preempted. If multiple threads have the same priority, they are scheduled round-robin. The KERNEL maintains 10 threads for servicing interrupts; they do not belong to any scheduling class and execute at the highest priority.

Processor Sets

Solaris allows processes to be assigned to processor sets, WHICH limit which processes can run on which CPUs.

CPU-Scheduling Algorithms (Criteria)

Some criteria: (1) CPU utilization (should range from about 40% on a lightly loaded system to 90% on a heavily used one). (2) Throughput (the number of processes completed per time unit). (3) Turnaround time (how long it takes to execute a process): the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. (4) Waiting time (the time a process spends waiting in the ready queue). (5) Response time (the time it takes to start responding). We want to maximize CPU utilization and throughput, and minimize turnaround time, waiting time, and response time. A system with a reasonable and predictable response time is often MORE desirable than a system that is faster on average. A process consists of several hundred CPU bursts and I/O bursts.

Cache

Stores data so that future requests for the same data can be served faster, speeding up successive memory accesses.

Soft Affinity

The OS has a policy of attempting to keep a process running on the SAME processor, but no guarantee!

Relationship between Windows kernel and WIN32 API

The WIN32 API maps onto the numeric priorities of the Windows kernel. -The WIN32 API identifies several priority classes a process can belong to. Priorities in all classes except the real-time priority class ARE variable, which means the priority of a thread belonging to these classes can change. Within a priority class, a thread HAS a relative priority (e.g. time critical, highest, above normal, normal, below normal, lowest, idle). The PRIORITY of each thread is based on both the priority class it BELONGs to and its RELATIVE priority within that class. Each thread has a base priority, a value in the priority range of the class the thread belongs to; the base priority is the value of the NORMAL relative priority for that class. A thread's priority is never lowered below its base priority.

Convoy Effect

The convoy effect occurs when all of the other processes WAIT for one big process to get off the CPU, which results in lower CPU and device utilization than if the shorter processes were allowed to go first.

Dispatcher

The dispatcher is the module that gives control of the CPU to the process SELECTED by the short-term scheduler. The dispatcher switches context, switches to user mode, and jumps to the proper location in the user program to restart that program. The dispatcher should be as fast as possible, since it is invoked during every process switch.

Shortest-Job First Scheduling

This algorithm associates with each process the length of the PROCESS's next CPU burst. The CPU is assigned to the process with the smallest next CPU burst (ties are broken FCFS). Scheduling depends on the length of the next CPU burst of a process, rather than its total length. Running processes from shortest to longest next burst gives the minimum average waiting time for a given set of processes: moving a short process ahead of a long one decreases the short process's wait more than it increases the long one's. The difficulty with this algorithm is KNOWING the length of the next CPU request. SJF is often used in long-term scheduling; it cannot be implemented exactly in short-term scheduling because we cannot know the length of the next CPU burst, though we can predict it (see the sketch below). SJF can be preemptive or NON-preemptive. The choice arises when a new process arrives while a previous one is executing, and the next CPU burst of the newly arrived process is shorter than what is left of the currently executing process: a preemptive SJF will switch, while a nonpreemptive SJF allows the currently running process to finish its CPU BURST.
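As referenced above, a sketch of the usual way short-term SJF approximates the next CPU burst: an exponential average of the measured bursts, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). The history values and initial guess tau0 = 10 follow the standard textbook figure; alpha = 0.5 is the common choice.

```python
# Exponential-average prediction of the next CPU burst (SJF approximation):
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
# where t_n is the measured length of the n-th burst and tau_n the old estimate.
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    tau = tau0                                # initial guess before any data
    for t in history:
        tau = alpha * t + (1 - alpha) * tau   # recent bursts weigh the most
    return tau

print(predict_next_burst([6, 4, 6, 4]))  # -> 5.0, the estimate for the next burst
```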

Linux Scheduling

The kernel provides a scheduling algorithm that runs in constant time regardless of the NUMBER of tasks in the system (the O(1) scheduler). The scheduler provides support for SMP, including processor affinity and load balancing. The LINUX scheduler is a preemptive, priority-based algorithm with TWO separate priority ranges: (1) a real-time range from 0 to 99 and (2) a nice-value range from 100 to 140. These two ranges map into a global priority scheme where LOWER values indicate higher priorities. Unlike schedulers for many other systems, Linux assigns HIGHER-priority tasks LONGER time quanta and LOWER-priority tasks SHORTER time quanta. A task is eligible for execution on the CPU as long as it has time remaining in its time slice. When a task has used up its time slice, it is considered expired and is not eligible again until all other tasks have exhausted their time quanta. All runnable tasks are kept in a runqueue data structure; each processor maintains its own runqueue and schedules itself independently. Each runqueue CONTAINS two priority arrays: active and expired. -Active = the tasks with time remaining in their time slices. -Expired = all expired tasks. Each priority array contains a list of tasks indexed by priority. The scheduler chooses the task with the highest priority in the active array for execution. When all tasks have exhausted their time slices, the two priority arrays are exchanged. Real-time tasks are assigned static priorities; all other tasks have dynamic priorities.

Performance of RR algorithm

The performance depends on the size of the time quantum. If the quantum is very large, RR behaves the same as an FCFS policy; if it is very small, the RR approach is called processor sharing, in which each of n processes appears to run at 1/n of the speed of the real processor. In RR we want the time quantum to be large WITH respect to the context-switch time (e.g. context switching should cost less than about 10% of the quantum). Turnaround time also depends on the size of the time quantum: average turnaround time does NOT necessarily improve as the time-quantum size increases; it improves when most processes finish their next CPU burst within a single time quantum. A rule of thumb is that 80% of CPU bursts should be shorter than the time quantum.

Problem with priority scheduling algorithm

The problem is indefinite blocking, or starvation. A process may be ready to run but wait indefinitely for the CPU, blocked by a steady stream of higher-priority processes. This is especially bad in a heavily loaded system. A solution to this is aging.

Context Switch

The process of storing the state of a process or thread SO that it can be restored and execution resumed at a later point.

Aging

Aging is the solution to the starvation problem of the priority scheduling algorithm. It is the technique of gradually increasing the priority of a process that waits in the system for a long time. For example, given a range of priorities, we can increase the priority of a waiting process by a fixed step every 15 minutes, so it eventually reaches the top.
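A minimal sketch of the tick-handler idea, assuming a convention where a larger number means a higher priority (the field names and the ceiling are hypothetical):

```python
# Aging sketch: on every aging interval (e.g. 15 minutes, as above), bump the
# priority of every process still waiting in the ready queue, so even
# low-priority processes eventually reach the top and cannot starve.
MAX_PRIORITY = 127   # assumed ceiling; higher number = higher priority here

def age_ready_queue(ready):
    """ready: list of {'pid': ..., 'priority': ...} dicts; mutated in place."""
    for proc in ready:
        proc["priority"] = min(MAX_PRIORITY, proc["priority"] + 1)
    return ready

print(age_ready_queue([{"pid": "P9", "priority": 0}, {"pid": "P3", "priority": 126}]))
# -> P9 creeps up by 1 per interval; P3 is capped at MAX_PRIORITY
```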

CPU-I/O Burst Cycle

The success of CPU scheduling depends on an observed property of processes: PROCESS execution consists of a cycle of CPU execution and I/O wait, and processes alternate between the two states. The final CPU burst ends with a system request to terminate execution. The distribution of CPU-burst durations is roughly exponential or hyperexponential: an I/O-bound program HAS many short CPU bursts, while a CPU-bound program may have a few very LONG CPU bursts.

Dispatch Latency

The time it takes for the dispatcher to stop one process and start running another.

System-Contention Scope

This is where competition for the CPU with SCS scheduling takes place among all threads in the system (systems using the one-to-one model, such as Windows and Solaris, schedule threads using only SCS). Process-contention scope, by contrast, is DONE according to priority: the scheduler selects the highest-priority runnable user-level thread to run on an LWP. PCS will typically preempt the thread currently running in favour of a higher-priority thread.

Multilevel Queue Scheduling

This algorithm is FOR processes classified into different groups (e.g. foreground processes, which are interactive, and background processes, which are batch). These types of processes have different response-time requirements and thus different scheduling needs. Foreground processes get higher priority.

Fine-Grained Multithreading

This form switches between threads AT every instruction cycle. The design of a fine-grained system includes dedicated logic for thread switching, so the COST of switching between threads is small.

Memory Stall

This is when a processor, on accessing memory, SPENDS a lot of time waiting for the data to become available, for example because of a cache miss.

Deterministic Modeling

This method takes a particular predetermined workload and computes the performance of each algorithm on that workload (e.g. the average waiting time). A worked comparison follows below.
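For instance, a sketch comparing two algorithms on one fixed, hypothetical workload; this is exactly what deterministic modeling does: same jobs, known numbers, exact answers.

```python
# Deterministic modeling: evaluate algorithms on one predetermined workload.
def avg_wait(bursts):
    """Average waiting time when bursts run back-to-back in the given order."""
    clock = total = 0
    for b in bursts:
        total += clock
        clock += b
    return total / len(bursts)

workload = [10, 29, 3, 7, 12]               # all processes arrive at time 0
print("FCFS:", avg_wait(workload))          # 28.0 - arrival order
print("SJF: ", avg_wait(sorted(workload)))  # 13.0 - shortest job first
```

The numbers are exact for this workload but say nothing about other workloads, which is the known limitation of deterministic modeling.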

Multithreaded Multicore Processor

This processor requires two different levels of scheduling. 1. One level is the scheduling decisions MADE by the OS as it chooses which software thread to run on each logical processor (the OS can use any scheduling algorithm). 2. The other level specifies how EACH core decides which hardware thread to run.

Objective of multiprogramming

To maximize CPU utilization. Idea: a process is executed until it must wait, e.g. for the completion of some I/O request. Instead of letting the CPU sit idle, the OS takes the CPU away from the waiting process and gives it to another process to execute.

Contention Scope

User-level and kernel-level threads are scheduled differently. On the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process-contention scope (PCS), since competition for the CPU takes place among threads belonging to the same process. To decide which KERNEL thread to schedule onto a CPU, the kernel uses system-contention scope (SCS).

CPU Scheduling - Algorithms

When selecting an algorithm, we must first define the relative importance of the criteria, e.g.: (1) maximize CPU utilization under the constraint that maximum response time is under 1 second; (2) maximize throughput such that turnaround time is, on average, linearly proportional to total execution time.

CPU Scheduler

When the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (CPU scheduler). This scheduler selects a process from the PROCESSES in memory that are READY to execute and allocates the CPU to it. All processes in the ready queue are lined up waiting for the CPU; the records in the queue are generally process control blocks (PCBs).

Cache Miss

When the data being accessed is not in cache memory and must be fetched from main memory.

Windows Scheduling

Windows schedules threads USING a priority-based preemptive scheduling algorithm that always ENSURES the highest-priority ready thread will run. Dispatcher: the portion of the Windows kernel that handles scheduling. -A thread selected to run will run UNTIL it is preempted by a higher-priority thread, terminates, or makes a blocking system call. -If a higher-priority real-time thread becomes ready while a lower-priority thread is running, the lower-priority thread will be preempted. The DISPATCHER uses a 32-level priority scheme: the variable class CONTAINS threads with priorities from 1 to 15, and the real-time class CONTAINS threads with priorities from 16 to 31. There is also a priority-0 thread used for memory management. The dispatcher traverses the queues TO find a thread ready to run; if no ready thread is found, the dispatcher executes a special thread called the idle thread.

Little's Formula

n = lambda × W, where: n = the average queue length (excluding the process currently being served); W = the average waiting time in the queue; lambda = the average arrival rate of new processes into the queue (e.g. 3 processes per second). We expect that during the time W that a process waits, lambda × W new processes will arrive. If the system is in a steady state, the number of processes leaving the queue must equal the number of processes that arrive. Example: if on average 7 processes arrive per second (lambda = 7) and there are n = 14 processes in the queue, then the average waiting time is W = n / lambda = 2 seconds per process.
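The arithmetic from the example above, written out as a worked equation:

```latex
n = \lambda \times W
\quad\Rightarrow\quad
W = \frac{n}{\lambda}
  = \frac{14\ \text{processes}}{7\ \text{processes/second}}
  = 2\ \text{seconds per process}
```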

