Op Sys Exam 3

In a multiprogramming environment, there are usually more jobs to be executed than could possibly be run at one time. Before the operating system can schedule them, it needs to resolve three limitations of the system:

(1) There are a finite number of resources (such as disk drives, printers, and tape drives). (2) Some resources, once they're allocated, can't be shared with another job (e.g., printers). (3) Some resources require operator intervention—that is, they can't be reassigned automatically from job to job (such as tape drives).

Natural Wait

A common term used to identify an I/O request from a program in a multiprogramming environment that would cause a process to wait "naturally" before resuming execution.

When is a Job's PCB created?

A job's PCB is created when the Job Scheduler accepts the job and is updated as the job progresses from the beginning to the end of its execution.

Queues

A linked list of PCBs that indicates the order in which jobs or processes will be serviced.
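
A minimal sketch (not from the text) of such a queue in Python, with a bare-bones PCB holding only a job name and a link field:

    # Sketch of a READY queue as a singly linked list of PCBs (illustrative only).
    class PCB:
        def __init__(self, job_name):
            self.job_name = job_name
            self.next = None          # link to the next PCB in the queue

    class ReadyQueue:
        def __init__(self):
            self.front = None
            self.rear = None

        def enqueue(self, pcb):
            # New PCBs are linked to the end of the queue.
            if self.rear is None:
                self.front = self.rear = pcb
            else:
                self.rear.next = pcb
                self.rear = pcb

        def dequeue(self):
            # Jobs are serviced from the front of the queue.
            pcb = self.front
            if pcb is not None:
                self.front = pcb.next
                if self.front is None:
                    self.rear = None
                pcb.next = None
            return pcb

    q = ReadyQueue()
    q.enqueue(PCB("Job A"))
    q.enqueue(PCB("Job B"))
    print(q.dequeue().job_name)   # Job A is serviced first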

Turnaround Time

A measure of a system's efficiency that tracks the time required to execute a job and return output to the user.
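
A small worked example of the usual calculation, turnaround time = completion time minus arrival time, with invented times:

    # Turnaround time = completion time - arrival time (times here are invented).
    arrival_time = 2        # job entered the system at t = 2
    completion_time = 10    # output returned to the user at t = 10
    turnaround_time = completion_time - arrival_time
    print(turnaround_time)  # 8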

Shortest Job Next (SJN)

A nonpreemptive scheduling algorithm (also known as shortest job first, or SJF) that handles jobs based on the length of their CPU cycle time. It's easiest to implement in batch environments where the estimated CPU time required to run the job is given in advance by each user at the start of each job. However, it doesn't work in interactive systems because users don't estimate in advance the CPU time required to run their jobs. The SJN algorithm is optimal only when all of the jobs are available at the same time and the CPU estimates are available and accurate.

Policy Type: Nonpreemptive
Best for: Batch
Advantage: Minimizes average waiting time
Disadvantage: Indefinite postponement of some jobs
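
A minimal Python sketch of SJN under the stated assumptions (all jobs available at the same time, accurate CPU estimates); the job names and cycle times are invented:

    # Sketch of nonpreemptive SJN: all jobs available at t = 0 with known CPU estimates.
    def sjn_schedule(jobs):
        """jobs: dict of job name -> estimated CPU cycle; returns (order, avg waiting time)."""
        order = sorted(jobs, key=lambda name: jobs[name])  # shortest estimate first
        waiting, clock = {}, 0
        for name in order:
            waiting[name] = clock      # time spent waiting before the CPU is allocated
            clock += jobs[name]
        avg_wait = sum(waiting.values()) / len(waiting)
        return order, avg_wait

    order, avg_wait = sjn_schedule({"A": 5, "B": 2, "C": 6, "D": 4})
    print(order, avg_wait)   # ['B', 'D', 'A', 'C'] 4.75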

First Come First Served (FCFS)

A nonpreemptive scheduling algorithm that handles jobs according to their arrival time: the earlier they arrive, the sooner they're served. It's a very simple algorithm to implement because it uses a FIFO queue. This algorithm is fine for most batch systems, but it is unacceptable for interactive systems because interactive users expect quick response times. With FCFS, as a new job enters the system its PCB is linked to the end of the READY queue and it is removed from the front of the queue when the processor becomes available—that is, after it has processed all of the jobs before it in the queue.

Policy Type: Nonpreemptive
Best for: Batch
Advantage: Easy to implement
Disadvantage: The average turnaround times vary widely and are seldom minimized
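
A minimal Python sketch of FCFS using a FIFO queue; it assumes all jobs arrive at time 0 in the listed order, and the job names and CPU cycles are invented:

    # Sketch of FCFS: jobs listed in arrival order, all assumed to arrive at t = 0.
    from collections import deque

    def fcfs_turnaround(jobs):
        """jobs: list of (name, cpu_cycle) in arrival order; returns each job's turnaround time."""
        ready = deque(jobs)                     # FIFO READY queue
        clock, turnaround = 0, {}
        while ready:
            name, cpu_cycle = ready.popleft()   # serve the job at the front
            clock += cpu_cycle
            turnaround[name] = clock            # finish time - arrival time (arrival = 0)
        return turnaround

    print(fcfs_turnaround([("A", 15), ("B", 2), ("C", 1)]))
    # {'A': 15, 'B': 17, 'C': 18} -> average turnaround varies widely with arrival order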

Thread

A portion of a process that can run independently. For example, if your system allows processes to have a single thread of control and you want to see a series of pictures on a friend's Web site, you can instruct the browser to establish one connection between the two sites and download one picture at a time. However, if your system allows processes to have multiple threads of control, then you can request several pictures at the same time and the browser will set up multiple connections and download several pictures at once.
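
An illustrative Python sketch of the difference, with the downloads simulated by sleeps (the picture names and timings are invented):

    # Illustration of multiple threads of control: "downloading" several pictures at once.
    # The downloads are simulated with sleeps; picture names and durations are invented.
    import threading, time

    def download(picture):
        time.sleep(0.5)                 # stand-in for network transfer time
        print(f"finished {picture}")

    pictures = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]

    # One thread of control: pictures arrive one at a time (about 1.5 s total).
    for p in pictures:
        download(p)

    # Multiple threads of control: one connection per picture (about 0.5 s total).
    threads = [threading.Thread(target=download, args=(p,)) for p in pictures]
    for t in threads:
        t.start()
    for t in threads:
        t.join()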

Round Robin

A preemptive process scheduling algorithm that is used extensively in interactive systems. It's easy to implement and isn't based on job characteristics but on a predetermined slice of time that's given to each job to ensure that the CPU is equally shared among all active processes and isn't monopolized by any one job. This time slice is called a time quantum and its size is crucial to the performance of the system. It usually varies from 100 milliseconds to 1 or 2 seconds.

Jobs are placed in the READY queue using a first-come, first-served scheme and the Process Scheduler selects the first job from the front of the queue, sets the timer to the time quantum, and allocates the CPU to this job. If processing isn't finished when time expires, the job is preempted and put at the end of the READY queue and its information is saved in its PCB. In the event that the job's CPU cycle is shorter than the time quantum, one of two actions will take place: (1) If this is the job's last CPU cycle and the job is finished, then all resources allocated to it are released and the completed job is returned to the user; (2) if the CPU cycle has been interrupted by an I/O request, then information about the job is saved in its PCB and it is linked at the end of the appropriate I/O queue. Later, when the I/O request has been satisfied, it is returned to the end of the READY queue to await allocation of the CPU.

The efficiency of round robin depends on the size of the time quantum in relation to the average CPU cycle. If the quantum is too large—that is, if it's larger than most CPU cycles—then the algorithm reduces to the FCFS scheme. If the quantum is too small, then the amount of context switching slows down the execution of the jobs and the amount of overhead is dramatically increased. For a job with a CPU cycle of 8 milliseconds, for example, the amount of context switching increases as the time quantum decreases in size.

Policy Type: Preemptive
Best for: Interactive
Advantage: Provides reasonable response times to interactive users; provides fair CPU allocation
Disadvantage: Requires selection of good time quantum
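
A minimal Python sketch of round robin with a fixed time quantum; I/O requests are ignored for brevity, and the job names, CPU cycles, and quantum are invented:

    # Sketch of round robin with a fixed time quantum (I/O waits ignored for brevity).
    from collections import deque

    def round_robin(jobs, quantum):
        """jobs: list of (name, cpu_cycle); prints the order in which CPU slices are given."""
        ready = deque(jobs)                      # first-come, first-served READY queue
        while ready:
            name, remaining = ready.popleft()    # allocate the CPU to the front job
            slice_used = min(quantum, remaining)
            remaining -= slice_used
            print(f"{name} runs {slice_used} ms")
            if remaining > 0:
                ready.append((name, remaining))  # preempted: back to the end of the queue
            else:
                print(f"{name} finished")        # resources released, job returned to user

    round_robin([("A", 8), ("B", 4), ("C", 9)], quantum=4)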

Task

A process, also called a task, is a single instance of a program in execution.

Preemptive Scheduling Policy

A scheduling strategy that interrupts the processing of a job and transfers the CPU to another job is called a preemptive scheduling policy; it is widely used in time-sharing environments.

High-level Scheduler

A synonym for the Job Scheduler. It is only concerned with selecting jobs from a queue of incoming jobs and placing them in the process queue, whether batch or interactive, based on each job's characteristics. The Job Scheduler's goal is to put the jobs in a sequence that will use all of the system's resources as fully as possible.

Low-level Scheduler

A synonym for the Process Scheduler. The Process Scheduler is the low-level scheduler that assigns the CPU to execute the processes of those jobs placed on the READY queue by the Job Scheduler. This becomes a crucial function when the processing of several jobs has to be orchestrated.

Processor

Also known as the CPU (central processing unit), the processor is the part of the machine that performs the calculations and executes the programs.

Process Scheduling Algorithm

An algorithm used by the Process Scheduler to allocate the CPU and move jobs through the system. Early operating systems used nonpreemptive policies designed to move batch jobs through the system as efficiently as possible. Most current systems, with their emphasis on interactive use and response time, use an algorithm that takes care of the immediate requests of interactive users.

Program

An inactive unit, such as a file stored on a disk. It is not a process. To an operating system, a program or job is a unit of work that has been submitted by the user.

Job Scheduler

The high-level scheduler of the Processor Manager that selects jobs from a queue of incoming jobs based on each job's characteristics.

CPU-bound jobs

CPU-bound jobs (such as finding the first 300 prime numbers) have long CPU cycles and shorter I/O cycles.

Context Switching

The act of saving a job's processing information in its PCB so the job can be swapped out of memory, and of loading the processing information from the PCB of another job into the appropriate registers so the CPU can process it. Context switching is required by all preemptive scheduling policies.
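
An illustrative Python sketch of what a context switch saves and restores; the register names and PCB layout are invented:

    # Sketch of a context switch: save the running job's registers into its PCB,
    # then load the next job's saved registers. Register names are invented.
    def context_switch(cpu_registers, old_pcb, new_pcb):
        old_pcb["registers"] = dict(cpu_registers)           # save outgoing job's state
        cpu_registers.clear()
        cpu_registers.update(new_pcb.get("registers", {}))   # restore incoming job's state

    cpu = {"program_counter": 120, "accumulator": 7}
    pcb_a, pcb_b = {}, {"registers": {"program_counter": 48, "accumulator": 0}}
    context_switch(cpu, pcb_a, pcb_b)
    print(pcb_a["registers"])   # Job A's state, saved for when it resumes
    print(cpu)                  # Job B's state, now loaded on the CPU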

Scheduling Order

Each job is initiated by the Job Scheduler based on certain criteria. Once a job is selected for execution, the Process Scheduler determines when each step, or set of steps, is executed—a decision that's also based on certain criteria.

Process Identification

Each job is uniquely identified by the user's identification and a pointer connecting it to its descriptor (supplied by the Job Scheduler when the job first enters the system and is placed on HOLD).

Process Control Block (PCB)

Each process in the system is represented by a data structure called a Process Control Block (PCB) that performs the same function as a traveler's passport. The PCB contains the basic information about the job, including what it is, where it's going, how much of its processing has been completed, where it's stored, and how much it has spent on resources.
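
An illustrative Python sketch of a PCB as a data structure; the field names loosely follow the cards in this set but are otherwise invented:

    # Sketch of a PCB as a data structure; field names follow the cards in this set but are invented.
    from dataclasses import dataclass, field

    @dataclass
    class ProcessControlBlock:
        process_id: int                 # process identification
        status: str = "HOLD"            # HOLD, READY, RUNNING, or WAITING
        program_counter: int = 0        # part of the process state
        registers: dict = field(default_factory=dict)
        priority: int = 0
        cpu_time_used: int = 0          # accounting information
        memory_address: int = 0         # where the job is stored

    pcb = ProcessControlBlock(process_id=42)
    pcb.status = "READY"                # updated as the job progresses
    print(pcb)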

Middle-level Scheduler

In a highly interactive environment, there's also a third layer of the Processor Manager called the middle-level scheduler. In some cases, especially when the system is overloaded, the middle-level scheduler finds it is advantageous to remove active jobs from memory to reduce the degree of multiprogramming, which allows jobs to be completed faster. The jobs that are swapped out and eventually swapped back in are managed by the middle-level scheduler.

Nonpreemptive Scheduling Policy

Functions without external interrupts (interrupts external to the job). Therefore, once a job captures the processor and begins execution, it remains in the RUNNING state uninterrupted until it issues an I/O request (natural wait) or until it is finished (with exceptions made for infinite loops, which are interrupted by both preemptive and nonpreemptive policies).

I/O-bound jobs

I/O-bound jobs (such as printing a series of documents) have many brief CPU cycles and long I/O cycles.

Interrupts

Another type of interrupt occurs when the time quantum expires and the processor is deallocated from the running job and allocated to another one. The control program that handles the interruption sequence of events is called the interrupt handler. When the operating system detects a nonrecoverable error, the interrupt handler typically follows this sequence (a sketch in code follows this list):

1. The type of interrupt is described and stored—to be passed on to the user as an error message.
2. The state of the interrupted process is saved, including the value of the program counter, the mode specification, and the contents of all registers.
3. The interrupt is processed: The error message and state of the interrupted process are sent to the user; program execution is halted; any resources allocated to the job are released; and the job exits the system.
4. The processor resumes normal operation.
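
An illustrative Python sketch of that four-step sequence; the field names and the error type are invented:

    # Sketch of the four-step interrupt-handling sequence for a nonrecoverable error.
    def handle_nonrecoverable_error(interrupt_type, process):
        error_message = f"Error: {interrupt_type}"        # 1. describe and store the type
        saved_state = {                                   # 2. save the interrupted process's state
            "program_counter": process["program_counter"],
            "mode": process["mode"],
            "registers": dict(process["registers"]),
        }
        print(error_message, saved_state)                 # 3. notify the user,
        process["resources"].clear()                      #    release resources,
        process["status"] = "FINISHED"                    #    and remove the job from the system
        return "resume normal operation"                  # 4. processor resumes

    job = {"program_counter": 88, "mode": "user", "registers": {"acc": 3},
           "resources": ["printer"], "status": "RUNNING"}
    handle_nonrecoverable_error("illegal instruction", job)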

Process Status

Information stored in the job's PCB that indicates the current status of the job—HOLD, READY, RUNNING, or WAITING—and the resources responsible for that status.

As a job moves through the system, it's always in one of five states (or at least three)...

It changes from HOLD to READY to RUNNING to WAITING, and eventually to FINISHED. These transitions between states are initiated by either the Job Scheduler or the Process Scheduler.

Priority Scheduling

Priority scheduling is a nonpreemptive algorithm and one of the most common scheduling algorithms in batch systems, even though it may give slower turnaround to some users. This algorithm gives preferential treatment to important jobs. It allows the programs with the highest priority to be processed first, and they aren't interrupted until their CPU cycles (run times) are completed or a natural wait occurs. If two or more jobs with equal priority are present in the READY queue, the processor is allocated to the one that arrived first (first-come, first-served within priority).

Policy Type: Nonpreemptive
Best for: Batch
Advantage: Ensures fast completion of important jobs
Disadvantage: Indefinite postponement of some jobs
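
A minimal Python sketch of nonpreemptive priority scheduling; it assumes a larger number means a higher priority and breaks ties by arrival order, with invented jobs:

    # Sketch of nonpreemptive priority scheduling; assumes a higher number = higher priority,
    # with first-come, first-served ordering used to break ties.
    def priority_schedule(jobs):
        """jobs: list of (name, priority, cpu_cycle) in arrival order; returns the run order."""
        indexed = list(enumerate(jobs))   # remember arrival order for the tie-break
        indexed.sort(key=lambda item: (-item[1][1], item[0]))
        return [job[0] for _, job in indexed]

    # A and C share the top priority, so the tie is broken by arrival order: A arrived first.
    print(priority_schedule([("A", 3, 5), ("B", 1, 2), ("C", 3, 4)]))
    # ['A', 'C', 'B']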

Process Scheduler

The low-level scheduler of the Processor Manager that establishes the order in which processes in the READY queue will be served by the CPU.

Multiprogramming

Requires that the processor be allocated to each job or to each process for a period of time and deallocated at an appropriate moment. If the processor is deallocated during a program's execution, it must be done in such a way that it can be restarted later as easily as possible.

Job Status

The condition of a job as it moves through the system from the beginning to the end of its execution.

Shortest Remaining Time (SRT)

The preemptive version of the SJN algorithm. The processor is allocated to the job closest to completion—but even this job can be preempted if a newer job in the READY queue has a time to completion that's shorter. This algorithm can't be implemented in an interactive system because it requires advance knowledge of the CPU time required to finish each job. It is often used in batch environments where it is desirable to give preference to short jobs, even though SRT involves more overhead than SJN because the operating system has to frequently monitor the CPU time for all the jobs in the READY queue and must perform context switching for the jobs being swapped (switched) at preemption time (not necessarily swapped out to the disk, although this might occur as well). The turnaround time is the completion time of each job minus its arrival time.

Policy Type: Preemptive
Best for: Batch
Advantage: Ensures fast completion of short jobs
Disadvantage: Overhead incurred by context switching
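
A minimal Python sketch of SRT that advances the clock one millisecond at a time; the arrival times and CPU cycles are invented:

    # Sketch of SRT: at every millisecond the job with the least remaining time runs,
    # so a newly arrived shorter job preempts the one on the CPU. Times are invented.
    def srt(jobs):
        """jobs: dict of name -> (arrival_time, cpu_cycle); returns turnaround times."""
        remaining = {name: cpu for name, (arr, cpu) in jobs.items()}
        clock, finished = 0, {}
        while remaining:
            ready = [n for n in remaining if jobs[n][0] <= clock]
            if not ready:
                clock += 1
                continue
            current = min(ready, key=lambda n: remaining[n])  # job closest to completion
            remaining[current] -= 1
            clock += 1
            if remaining[current] == 0:
                del remaining[current]
                finished[current] = clock - jobs[current][0]  # completion time - arrival time
        return finished

    print(srt({"A": (0, 6), "B": (1, 3), "C": (2, 1)}))
    # {'C': 1, 'B': 4, 'A': 10}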

Process State

This contains all of the information needed to indicate the current state of the job, such as:

• Process Status Word — the current instruction counter and register contents when the job isn't running but is either on HOLD or is READY or WAITING. If the job is RUNNING, this information is left undefined.
• Register Contents — the contents of the registers if the job has been interrupted and is waiting to resume processing.
• Main Memory — pertinent information, including the address where the job is stored and, in the case of virtual memory, the mapping between virtual and physical memory locations.
• Resources — information about all resources allocated to this job. Each resource has an identification field listing its type and a field describing details of its allocation, such as the sector address on a disk. These resources can be hardware units (disk drives or printers, for example) or files.
• Process Priority — used by systems using a priority scheduling algorithm to select which job will be run next.

Accounting

This contains information used mainly for billing purposes and performance measurement. It indicates what kind of resources the job used and for how long. Typical charges include:

• Amount of CPU time used from beginning to end of its execution.
• Total time the job was in the system until it exited.
• Main storage occupancy — how long the job stayed in memory until it finished execution. This is usually a combination of time and space used; for example, in a paging system it may be recorded in units of page-seconds.
• Secondary storage used during execution. This, too, is recorded as a combination of time and space used.
• System programs used, such as compilers, editors, or utilities.
• Number and type of I/O operations, including I/O transmission time, which includes utilization of channels, control units, and devices.
• Time spent waiting for I/O completion.
• Number of input records read (specifically, those entered online or coming from optical scanners, card readers, or other input devices), and number of output records written.

Priorities can also be determined by the Processor Manager based on characteristics intrinsic to the jobs such as:

• Memory requirements. Jobs requiring large amounts of memory could be allocated lower priorities than those requesting small amounts of memory, or vice versa.
• Number and type of peripheral devices. Jobs requiring many peripheral devices would be allocated lower priorities than those requesting fewer devices.
• Total CPU time. Jobs having a long CPU cycle, or estimated run time, would be given lower priorities than those having a brief estimated run time.
• Amount of time already spent in the system. This is the total amount of elapsed time since the job was accepted for processing. Some systems increase the priority of jobs that have been in the system for an unusually long time to expedite their exit. This is known as aging (see the sketch after this list).
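
An illustrative Python sketch of aging; the threshold and priority boost are invented values:

    # Sketch of aging: jobs waiting longer than a threshold get their priority raised
    # so they aren't postponed indefinitely. Threshold and increment are invented.
    def age_jobs(jobs, clock, threshold=100, boost=1):
        """jobs: list of dicts with 'arrival' and 'priority'; higher number = higher priority."""
        for job in jobs:
            time_in_system = clock - job["arrival"]
            if time_in_system > threshold:
                job["priority"] += boost     # expedite jobs that have waited too long

    waiting_jobs = [{"name": "A", "arrival": 0, "priority": 1},
                    {"name": "B", "arrival": 90, "priority": 1}]
    age_jobs(waiting_jobs, clock=150)
    print(waiting_jobs)   # Job A (in the system for 150 units) is boosted; Job B is not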

What's a good process scheduling policy? Several criteria come to mind, but notice in the list below that some contradict each other:

• Minimize response time. Quickly turn around interactive requests. This could be done by running only interactive jobs and letting the batch jobs wait until the interactive load ceases.
• Minimize turnaround time. Move entire jobs in and out of the system quickly. This could be done by running all batch jobs first (because batch jobs can be grouped to run more efficiently than interactive jobs).
• Minimize waiting time. Move jobs out of the READY queue as quickly as possible. This could only be done by reducing the number of users allowed on the system, so the CPU would be available immediately whenever a job entered the READY queue.
• Maximize CPU efficiency. Keep the CPU busy 100 percent of the time. This could be done by running only CPU-bound jobs (and not I/O-bound jobs).
• Ensure fairness for all jobs. Give everyone an equal amount of CPU and I/O time. This could be done by not giving special treatment to any job, regardless of its processing characteristics or priority.

Job Transition States

The transitions between states are handled as follows (a sketch of the legal transitions appears after this list):

• The transition from HOLD to READY is initiated by the Job Scheduler according to some predefined policy. At this point, the availability of enough main memory and any requested devices is checked.
• The transition from READY to RUNNING is handled by the Process Scheduler according to some predefined algorithm (i.e., FCFS, SJN, priority scheduling, SRT, or round robin—all of which are discussed in this set).
• The transition from RUNNING back to READY is handled by the Process Scheduler according to some predefined time limit or other criterion, for example a priority interrupt.
• The transition from RUNNING to WAITING is handled by the Process Scheduler and is initiated by an instruction in the job such as a command to READ, WRITE, or other I/O request, or one that requires a page fetch.
• The transition from WAITING to READY is handled by the Process Scheduler and is initiated by a signal from the I/O device manager that the I/O request has been satisfied and the job can continue. In the case of a page fetch, the page fault handler will signal that the page is now in memory and the process can be placed on the READY queue.
• Eventually, the transition from RUNNING to FINISHED is initiated by the Process Scheduler or the Job Scheduler either when (1) the job is successfully completed and it ends execution, or (2) the operating system indicates that an error has occurred and the job is being terminated prematurely.
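
An illustrative Python sketch of the five states and the legal transitions between them:

    # Sketch of the five job states and the legal transitions between them.
    LEGAL_TRANSITIONS = {
        "HOLD":    {"READY"},                  # initiated by the Job Scheduler
        "READY":   {"RUNNING"},                # initiated by the Process Scheduler
        "RUNNING": {"READY", "WAITING", "FINISHED"},
        "WAITING": {"READY"},                  # I/O satisfied or page now in memory
        "FINISHED": set(),
    }

    def move(job, new_state):
        if new_state not in LEGAL_TRANSITIONS[job["state"]]:
            raise ValueError(f"illegal transition {job['state']} -> {new_state}")
        job["state"] = new_state

    job = {"name": "A", "state": "HOLD"}
    for state in ("READY", "RUNNING", "WAITING", "READY", "RUNNING", "FINISHED"):
        move(job, state)
    print(job)   # {'name': 'A', 'state': 'FINISHED'}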

