Lecture 9: Operating Systems


Non-preemptive scheduling

First-come, first-served (FCFS), shortest time remaining (STR), and priority scheduling.

Functions of the Supervisor and the Scheduler

The supervisor calls the appropriate interrupt handler and then transfers control to the scheduler. The scheduler updates the status of any process or thread affected by the last interrupt, decides which thread to dispatch to the CPU, updates thread control information and the stack to reflect the scheduling decision, and dispatches the selected thread.

Timer Interrupts

Generated at regular intervals by the CPU to give the scheduler an opportunity to suspend the currently executing thread. Not a "real" interrupt in the sense that there is no device interrupt handler to call; the supervisor passes control directly to the scheduler. An important CPU hardware feature for multitasking OSs: it guarantees that no thread can hold the CPU for a long period. Timer interrupts drive the round-robin scheduler.

Real-Time Scheduling

Guarantees a minimum amount of CPU time to a thread if the thread makes an explicit real-time scheduling request when it is created. Guarantees a thread enough resources to complete its function within a specified time. Often used in transaction processing, data acquisition, and automated process control.

Basic Services

Programs that accept commands and requests from a user and from the user's programs; manage, load, and execute programs; manage the hardware resources of the computer; and act as an interface between the user and the system.

Thread States

Ready: waiting for access to the CPU. Running: retains control of the CPU until the thread or its parent process terminates normally or an interrupt occurs. Blocked: waiting for some event to occur.

Virtual Resource

The resources that are apparent to a process or user.

Interrupt Processing

A thread can be blocked while waiting for resources. The thread is put into a wait (blocked) state and its state is stored on the stack. An interrupt handler processes the blocked thread's request. The thread remains in the blocked state until the request is satisfied or a timeout occurs; once the resource has been allocated, the thread is moved from the blocked state to the ready or running state.

Operating System Layers

Using layers makes the OS easier to maintain. Command layer (command language or job control language), service layer (accessed through service calls), and kernel.

Virtual Memory - Basic Ideas

Virtual memory (VM) increases the apparent amount of memory by using far less expensive hard disk space. Provides for process separation. Demand paging: pages reside on the hard disk and are brought into memory as needed. Page table: keeps track of what is in memory and what is still out on the hard disk.

Real Resource

A computer system's physical devices and associated system software.

Priority Scheduling

A nonpreemptive process scheduling policy (or algorithm) that allows for the execution of high-priority jobs before low-priority jobs.

First-Come, First Served (FCFS)

A priority sequencing rule that specifies that the job or customer arriving at the workstation first has the highest priority.

Process Control Block (PCB)

A block of data maintained for each process in the system. Contains all relevant information about the process: location of its code in memory, stack pointer value, process ID, priority value, and more. PID (process ID): a unique identifier for each process.
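
As a rough illustration, a simplified PCB might look like the following C struct; the field names and state values are invented for this sketch, and a real kernel's PCB (for example, Linux's task_struct) holds far more.

#include <stdint.h>

enum proc_state { PROC_READY, PROC_RUNNING, PROC_BLOCKED };

struct pcb {
    uint32_t        pid;        /* unique process identifier (PID)          */
    enum proc_state state;      /* ready, running, or blocked               */
    int             priority;   /* scheduling priority value                */
    void           *code_base;  /* location of the process code in memory   */
    void           *stack_ptr;  /* saved stack pointer value                */
    struct pcb     *next;       /* link used when PCBs form a queue or list */
};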

Thrashing

A condition that can arise when a system is heavily loaded. Thrashing occurs when every frame of memory is in use and programs are allocated just enough pages to meet their minimum requirements. When a page fault occurs, the page brought in replaces a page that will itself be needed again almost immediately, triggering another fault. The resulting stream of page faults degrades system performance.

CPU Allocation

A multitasking OS can execute dozens, hundreds, or thousands of threads in the same time frame. Because most computers have only one or a few CPUs, threads usually share CPUs (concurrent or interleaved execution), and the OS makes rapid decisions about which threads receive CPU control and for how long that control is retained. CPU allocation provides the mechanism for accepting threads into the system and for allocating CPU time to execute those threads; it meets the objective of optimizing the use of computer system resources by allowing multiple threads to execute concurrently.

Multitasking Resource Allocation Goals

A multitasking operating system manages hardware resources (CPU, memory, I/O) to achieve the following: Meet the resource needs of processes. Prevent processes from interfering with one another. Efficiently use hardware and other resources

Process vs. Thread

A process is a unit of execution that contains all of the resources to execute as a stand-alone entity. A thread is usually a subset of a process, and is the smallest unit of executable code that can be scheduled on its own. A process has at least one thread, but a thread is not necessarily a process. Processes are sometimes considered 'heavy-weight' while threads are considered 'light-weight', referring to the amount of resources allocated to each type. Processes have unique address spaces; threads within a process share the address space of the process
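
To illustrate the shared address space, here is a minimal POSIX-threads sketch (assuming a Unix-like system with pthreads; the names are arbitrary): both threads update the same global variable because they execute within one process.

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;   /* one copy, visible to every thread of the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);   /* threads share memory, so access is synchronized */
    shared_counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 2 */
    return 0;
}

Compile with -pthread. Two separate processes, by contrast, would each see their own copy of shared_counter.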

Preemptive Scheduling

A running thread controls the CPU by controlling the content of the instruction pointer. In preemptive scheduling, a thread can be removed involuntarily from the running state. CPU control is lost whenever an interrupt is received. CPU then transfers control to the OS. The portion of the operating system that receives control is called the supervisor.

Types of Operating Environments

A single-process, non-threaded (SPNT) OS runs one process at a time. Example: Microsoft's DOS. A single-process, multi-threaded (SPMT) OS runs only one process at a time but supplies an interface that allows multiple threads to execute within that process. Example: General Software's Embedded DOS. A multi-process, non-threaded (MPNT) OS has many processes, each with a single thread of execution. Example: OSs on minicomputers. Linux and Windows are examples of multi-process, multi-threaded (MPMT) environments, as is macOS, a Unix-based graphical operating system.

Round Robin (cont)

A variation on round robin used by some UNIX systems calculates a dynamic priority based on the ratio of CPU time to the total time the process has been in the system. The smallest ratio is treated as the highest priority and is assigned the CPU next. Both Windows and Linux use such a dynamic priority scheduling algorithm as their primary criterion for dispatch selection; the algorithms on both systems adjust priority based on each process's use of resources.
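
The selection rule can be sketched as follows; the proc_stats struct and pick_next helper are hypothetical rather than the actual Windows or Linux code, and the sketch assumes every process has been in the system for a nonzero time.

#include <stddef.h>

struct proc_stats {
    double cpu_time;      /* CPU time the process has consumed so far      */
    double time_in_sys;   /* total time since the process entered the system */
};

/* Pick the process with the smallest cpu_time / time_in_sys ratio, i.e.  */
/* the one treated as highest priority. Assumes n >= 1 and that           */
/* time_in_sys > 0 for every entry.                                       */
static size_t pick_next(const struct proc_stats *p, size_t n)
{
    size_t best = 0;
    double best_ratio = p[0].cpu_time / p[0].time_in_sys;
    for (size_t i = 1; i < n; i++) {
        double r = p[i].cpu_time / p[i].time_in_sys;
        if (r < best_ratio) {
            best_ratio = r;
            best = i;
        }
    }
    return best;
}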

NUR - Not Used Recently

Add two additional bits to each entry in the page tables. One bit (the reference bit) is set (changed to 1) whenever the page is referenced (used). The other bit is set (changed to 1) whenever the data on the page is modified, that is, written to; it is called the dirty bit.

GUI

Advantages: easy to learn, requires little training, amenable to multitasking. Disadvantages: harder to implement; greater hardware/software requirements; requires lots of memory; the software is complex and difficult to write.

Services Required by Concurrent Processing

Allocates resources such as memory, CPU time, and I/O devices to programs. Protects users and programs from each other and provides for inter-program communication. Provides feedback to the system administrators to permit performance optimization of the computer system

Virtual Memory Management (cont.)

As pages in secondary storage are needed for current processing, the OS copies them into page frames in memory. If necessary, pages currently in memory are written to secondary storage to make room for other pages being loaded.

Processes

Basic unit of work in the OS. A process is a program together with all the resources that are associated with it as it is executed; that is, a process contains all of the resources needed to execute as a stand-alone entity. Program: a file or listing. Process: a program being executed. Processes are managed independently by the OS, can request and receive hardware resources and OS services, can be stand-alone or part of a group that cooperates to achieve a common purpose, and can communicate with other processes executing on the same computer or on other computers.
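
As a concrete illustration of a process as an independently managed unit of work, here is a minimal Unix-style sketch using fork() and waitpid(); it assumes a POSIX system, and the printed messages are arbitrary.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                     /* create a second process        */
    if (pid == 0) {
        /* Child: an independent process with its own PID and address space. */
        printf("child:  pid=%d\n", (int)getpid());
        return 0;
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);              /* parent waits for the child     */
        printf("parent: pid=%d, child was %d\n", (int)getpid(), (int)pid);
    } else {
        perror("fork");                     /* fork failed                    */
        return 1;
    }
    return 0;
}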

Interrupt Processing

CPU automatically suspends currently executing process, pushes current register values onto the stack, and transfers control to OS master interrupt handler. Suspended process's state remains on the stack until interrupt processing is completed (Process is put into a blocked state). Once interrupt has been processed, OS can leave suspended process in blocked state, move it to ready state, or return it to running state.

Virtual Memory vs. Caching

Cache speeds up memory access; virtual memory increases the amount of perceived storage. Virtual memory also provides independence from the configuration and capacity of the memory system, at a low cost per bit compared to main memory.

Scheduling

Decision-making process used by the OS to determine which ready thread moves to the running state. The portion of the operating system that makes scheduling decisions is called the scheduler. Typical methods: preemptive scheduling, non-preemptive scheduling, and real-time scheduling.

Virtual Memory Tradeoffs

Disadvantages: the swap file takes up space on disk; paging consumes CPU resources. Advantages: programs share memory space; more programs can run at the same time; programs run even if they cannot fit into memory all at once; process separation.

Dynamic Address Translation

Every memory reference in a fetch-execute cycle goes through the same translation process. The address that would normally be sent to the memory address register (MAR) is mapped through the page table and then sent to the MAR.

As the Machine Is Powered Up

Execution begins with bootstrap loader stored in ROM (BIOS for PC). Looks for OS program in a fixed location. Loads OS into RAM. Transfers control to starting location of OS. Loader program in OS used to load and execute user programs

Page Replacement Algorithms

FIFO - first-in, first-out. LRU - least recently used. NUR - not used recently.

Virtual Machines (cont)

Hypervisor: a layer of software and/or hardware that separates one or more operating systems from the hardware. A hypervisor may consist purely of software, or of a mixture of software and hardware if the CPU provides hardware virtualization support. Type 1 (native): the hypervisor software interfaces directly with the computer hardware. Type 2 (hosted): the hypervisor software runs as a program on a standard operating system.

Sharing the CPU During I/O Breaks

I/O represents a large percentage of a typical program's execution time, so the CPU would otherwise sit idle while waiting for I/O to complete.

Handling Page Fault (cont2)

If the page being replaced has been altered, it must first be written back to its own image in the backing store before the new required page can be loaded. This is required because the page may have to be reloaded again later; that way, the backing store always contains the latest version of the program and data as the program executes.

NUR-Not Used Recently (cont.)

It is a commonly used algorithm. The memory manager software first attempts to find a page with both bits set to 0: a page that has not been used for a while and has not been modified, so the new page can simply be written over it. The second choice is a page whose dirty bit is set but whose reference bit is unset. The third choice is a page that has been referenced but not modified. Finally, the least desirable choice is a page that has been recently referenced and modified.
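
A sketch of this selection order, using an invented page-table-entry struct holding the reference and dirty bits; it returns the index of the most desirable victim under the four classes just listed.

#include <stdbool.h>
#include <stddef.h>

struct nur_pte {
    bool referenced;   /* set whenever the page is used                   */
    bool dirty;        /* set whenever the page is written to             */
};

/* Classes: 0 = unreferenced & clean (best), 1 = unreferenced & dirty,    */
/* 2 = referenced & clean, 3 = referenced & dirty (worst).                */
static size_t nur_victim(const struct nur_pte *table, size_t n)
{
    size_t best = 0;
    int best_class = 4;
    for (size_t i = 0; i < n; i++) {
        int cls = (table[i].referenced ? 2 : 0) + (table[i].dirty ? 1 : 0);
        if (cls < best_class) {
            best_class = cls;
            best = i;
            if (cls == 0)
                break;          /* cannot do better than class 0          */
        }
    }
    return best;                /* assumes n >= 1                         */
}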

UNIX and Linux Commands

Many system administrators prefer the command line. A GUI executes commands in response to mouse clicks; a command interpreter (shell) accepts keyboard commands and runs them.

Scheduling Objectives

Maximize throughput: maximize the number of jobs completed in a given time period. Minimize turnaround time: minimize the time between submission of a job and its completion. Maximize CPU utilization: keep the CPU busy. Maximize resource allocation: maximize the use of all resources by balancing processes that require heavy CPU time with those emphasizing I/O.

Memory Management

Memory management is the point at which the operating system and hardware meet; it is concerned with managing the main store and the disk drives. When computers first appeared, an address generated by the computer corresponded to the location of an operand in physical memory. Even today, 8-bit microprocessors do not use memory management. In contrast, the logical address generated by high-performance computers in PCs and workstations is not the physical address of the operand accessed in memory.

OS Overview

Most important component of system software. Primary purpose: Manages all hardware resources and allocates them to users and applications as needed. Manages CPU, memory, processes, secondary storage (files), I/O devices, and users. Performs many low-level tasks on behalf of users and application programs. Accesses files and directories, creates and moves windows, and accesses resources over a network

Concurrent Operations

Multitasking can be achieved by concurrent processing. The OS acts as a controller to provide concurrent processing. OS makes rapid decisions about which programs receive CPU control and for how long that control is retained. Programs share CPUs (called concurrent or interleaved execution)

Multitasking Systems

OS support for running multiple programs at one time is called multitasking. Multitasking (or multiprogramming) operating systems are the norm for general-purpose computers. Allows flexibility of application and system software. Multitasking operating systems must be able to handle multiple programs and users. Multiuser systems have to be multitasking.

Shortest Remaining Time

Also called Shortest Job First (SJF). Chooses the next process to be dispatched based on the expected amount of CPU time needed to complete the process. Maximizes throughput by selecting jobs that require only a small amount of CPU time. Longer jobs can be starved because short jobs are pushed ahead of them. When SJF is implemented, it generally includes a dynamic priority factor that raises the priority of jobs as they wait, until they reach a priority at which they will be processed next regardless of length.
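
A minimal sketch of the SJF selection step, with an invented job struct holding the expected CPU-time estimate; it returns the index of the ready job with the smallest estimate, or a sentinel if none is ready.

#include <stddef.h>

struct job {
    double expected_cpu;   /* estimated CPU time needed to complete       */
    int    ready;          /* nonzero if the job is waiting to run        */
};

/* Return the index of the shortest ready job, or (size_t)-1 if none.     */
static size_t sjf_pick(const struct job *jobs, size_t n)
{
    size_t best = (size_t)-1;
    for (size_t i = 0; i < n; i++) {
        if (!jobs[i].ready)
            continue;
        if (best == (size_t)-1 || jobs[i].expected_cpu < jobs[best].expected_cpu)
            best = i;
    }
    return best;
}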

Threads

Processes can subdivide themselves into more easily managed subunits called threads. A process has at least one thread, but a thread is not necessarily a process. A thread is a portion of a process that can be scheduled and executed independently. Each thread consists of a program counter, a register set, and a stack space, but shares all resources allocated to its parent process, including primary storage, files, and I/O devices. Advantage: reduced OS overhead for resource allocation and process management.

Scheduling Objectives (cont)

Promote graceful degradation. As the system load becomes heavy, it should degrade gradually in performance. Provide minimal and consistent response time. An algorithm that allows a large variation in the response time may not be considered acceptable to users. Prevent starvation. Starvation is a situation that occurs when a process is never given the CPU time that it needs to execute.

Additional Services

Provides tools and services for concurrent processing for multitasking. Provides interfaces for the user and the user's programs. Provides file support services. Provides I/O support services. Provides means of starting the computer. Handles all interrupt processing. Provides network services

LRU - Least Recently Used

Replace the page that has not been used for the longest time, on the assumption that the page probably will not be needed again. The algorithm performs fairly well but requires a considerable amount of overhead. To implement it, the page table must record the time every time a page is referenced. Then, when page replacement is required, every page must be checked to find the one with the oldest recorded time. If the number of pages is large, this can take a considerable amount of time.
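
A sketch of timestamp-based LRU as described: every reference records a logical clock value, and the victim is the valid entry with the oldest timestamp. The struct, field names, and clock parameter are invented for the example.

#include <stddef.h>
#include <stdint.h>

struct lru_entry {
    uint64_t last_used;   /* logical clock value at the last reference     */
    int      valid;       /* nonzero if the entry maps a page in memory    */
};

/* Called on every reference to page i.                                    */
static void lru_touch(struct lru_entry *table, size_t i, uint64_t now)
{
    table[i].last_used = now;
}

/* Scan the whole table for the valid entry with the oldest timestamp.     */
static size_t lru_victim(const struct lru_entry *table, size_t n)
{
    size_t best = (size_t)-1;
    for (size_t i = 0; i < n; i++) {
        if (!table[i].valid)
            continue;
        if (best == (size_t)-1 || table[i].last_used < table[best].last_used)
            best = i;
    }
    return best;   /* (size_t)-1 if no valid entry exists                  */
}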

Resource Allocation Process

Resource allocation ensures that overall system objectives are achieved efficiently and effectively. The OS keeps detailed records of available resources, knows which resources are used to satisfy which requests, schedules resources based on specific allocation policies to meet present and anticipated demands, and updates its records to reflect resource commitment and release by processes and users.

Achieving Multitasking

Sharing the CPU during I/O breaks: while one program is waiting for I/O to take place, another program is using the CPU to execute instructions; CPU idle time can be used to execute other programs. Time-slicing: the CPU is switched rapidly back and forth between different programs, executing a few instructions from each, driven by a periodic clock-generated interrupt (discussed in Lecture 6).

Types of OS

Single-tasking systems. Multiuser, multitasking systems. Distributed systems. Network server systems. Embedded systems, real-time systems.

Process Control Data Structures

The OS keeps track of each process by creating and updating a data structure called a PCB (Process Control Block) for each active process. The PCB is created when the process is created, updated when the process changes, and deleted when the process terminates. It is used by the OS to perform many functions (e.g., resource allocation, secure resource access, and protecting active processes from interference by other active processes). PCBs are normally organized into a larger data structure (a linked list, process queue, or process list).

Threads (cont)

The OS keeps track of thread-specific information in a thread control block (TCB). Each PCB contains pointers to its related TCBs. Threads can execute concurrently on a single processor or simultaneously on multiple processors. Like processes, threads can be created and destroyed and can be in ready, running, and blocked states.

Kernel

The central module of an operating system. Always loaded into memory at start-up time and will remain resident as long as the computer is running. Contains essential services required by other parts of the operating system and applications. Typically responsible for memory management, process and task management, and secondary storage management

User Interface Types

The command layer, sometimes called the shell, is the user interface to the OS. Types: CLI - Command Line Interface. A text interface that accepts user input from the keyboard. Batch system commands. Menu-driven environment. GUI (pronounced goo-ee) - Graphical User Interface

Dynamic Address Translation

The dynamic address translation (DAT) hardware automatically and invisibly translates every individual address in a program (the logical or virtual address) into a corresponding physical location (the physical address). A lookup in the program's page table locates the entry for the page number and then translates, or maps, the virtual memory reference into a physical memory location consisting of the corresponding frame number and the same offset. This operation is implemented in hardware by the processor's memory management unit (MMU).
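
A software sketch of this mapping, assuming 4 KB pages (a 12-bit offset) and an invented single-level page-table-entry layout; a real MMU performs the same split-and-lookup in hardware, usually with multi-level tables and a TLB.

#include <stdint.h>

#define PAGE_SHIFT 12                        /* 4 KB pages: 12-bit offset  */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

struct pt_entry {
    uint32_t frame;    /* frame number the page is loaded into             */
    int      present;  /* nonzero if the page is currently in memory       */
};

/* Translate a virtual address, or return -1 to signal a page fault.       */
/* Assumes the page number indexes a valid entry of page_table.            */
static int64_t translate(const struct pt_entry *page_table, uint32_t virt_addr)
{
    uint32_t page   = virt_addr >> PAGE_SHIFT;        /* page number       */
    uint32_t offset = virt_addr & (PAGE_SIZE - 1);    /* same offset       */

    if (!page_table[page].present)
        return -1;                                    /* page fault        */

    return ((int64_t)page_table[page].frame << PAGE_SHIFT) | offset;
}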

Virtual Memory Management

The only portions of a process that must be in memory at any point during execution are the next instruction to be fetched and any operands stored in memory; only a few bytes of any process must reside in memory at any one time. The OS minimizes the amount of process code and data stored in memory at one time, which frees large quantities of memory for use by other processes and substantially increases the number of processes that can execute concurrently. During process execution, one or more pages are allocated to frames in memory, and the rest are held in secondary storage (auxiliary storage).

Memory Address

The physical memory address is divided into two parts: a frame number and an offset (so called because it represents the offset from the beginning of the frame). The first address in a frame is 0. The instruction and data memory address references in a program are logical, or virtual, memory references. The logical (virtual) address is likewise separated into a page number and an offset. Since each page fits exactly into a frame, the offset of a particular address from the beginning of a page is exactly the same as its offset from the beginning of the frame where the page is physically loaded. For example, with 4 KB pages (a 12-bit offset) the virtual address 0x2345 splits into page number 0x2 and offset 0x345; if page 0x2 is loaded into frame 0x7, the corresponding physical address is 0x7345.

Handling Page Fault (cont3)

The process of page replacement is known as page swapping. Most OSs perform page swapping only when it is required as a result of a page fault. This procedure is called demand paging. Since the virtual memory mapping assures that any program page can be loaded anywhere into memory, there is no need to be concerned about allocating particular locations in memory. Any free frame will do.

VM Terminology

The processes are divided into pages: fixed-size chunks, e.g., 4 KB. Memory is likewise divided into page-sized chunks called page frames (or frames), the same size as the process pages. For each process, the OS creates a page table, residing in memory, that stores information about all of that process's pages. The OS memory manager software maintains the page tables for each program (process).

Round Robin

The simplest preemptive algorithm. Round robin gives each process a quantum of CPU time; if the process does not complete within its quantum, it is forced back to the ready state to await another turn. It is inherently fair and maximizes throughput, since shorter jobs are processed quickly.
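
A minimal round-robin sketch, assuming an invented task list and a fixed quantum: each ready task runs for at most one quantum per pass, and unfinished tasks wait for their next turn.

#include <stdio.h>

#define QUANTUM 4   /* time units per turn (arbitrary for the sketch)      */

struct task {
    const char *name;
    int remaining;              /* CPU time still needed                   */
};

int main(void)
{
    struct task tasks[] = { {"A", 6}, {"B", 3}, {"C", 9} };
    int n = 3, done = 0;

    /* Cycle through the tasks, giving each at most one quantum per pass.  */
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining <= 0)
                continue;                       /* already finished        */
            int slice = tasks[i].remaining < QUANTUM
                            ? tasks[i].remaining : QUANTUM;
            tasks[i].remaining -= slice;
            printf("%s runs for %d units (%d left)\n",
                   tasks[i].name, slice, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                done++;
        }
    }
    return 0;
}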

FIFO - First-In, First-Out

The simplest realistic page replacement algorithm: the oldest page remaining in the page table is selected for replacement. Not a good page replacement algorithm, because a page that has been in memory for a long period of time is probably there because it is heavily used. It is also subject to Belady's anomaly, in which increasing the number of page frames allocated to a process results in more page faults.
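
A hedged sketch of FIFO replacement using a rotating victim index over the frames; the frame_page array and fifo_place helper are invented for illustration.

#include <stddef.h>

#define NFRAMES 8                       /* arbitrary number of frames       */

static int    frame_page[NFRAMES];      /* which page each frame holds      */
static size_t next_victim = 0;          /* frame loaded longest ago         */

/* Place an incoming page, evicting frames in first-in, first-out order.    */
/* Assumes the frames were originally filled in index order 0..NFRAMES-1.   */
static size_t fifo_place(int page)
{
    size_t victim = next_victim;
    frame_page[victim] = page;                     /* evict and load         */
    next_victim = (next_victim + 1) % NFRAMES;     /* the oldest rolls over  */
    return victim;
}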

Process States

Three primary process operating states. Ready state: the process is waiting for access to a CPU. Running state: the process retains control of the CPU until it terminates normally or an interrupt occurs. Blocked state: the process is suspended while an interrupt is being processed, waiting for some event to occur (completion of a service request or correction of an error condition).

Command Line Interface

User interface for an OS devoid of all graphical trappings.

Virtual Machines

Virtualization: using a powerful computer to simulate a number of computers. Virtual machine: a simulated computer. Each virtual machine has its own access to the hardware resources of the host machine and an operating system that runs as a guest of the host machine.

Handling Page Fault (cont.)

When a page fault interrupt occurs, the OS memory manager software answers the interrupt and selects a memory frame in which to place the required page. It then loads the required page from its program image in the backing store (disk or SSD) into the selected memory frame. If every memory frame is already in use by other pages, the memory manager must pick a page in memory to be replaced.

Handling Page Fault

When the program is loaded, an exact page-by-page image of the program is also stored in a known auxiliary storage location (called a backing store, swap space, or swap file). The auxiliary storage area is usually on disk or SSD. When an instruction or data reference is on a page that does not have a corresponding frame in memory, the CPU hardware causes a special type of interrupt called a page fault (or page fault trap).

Address Mapping for Multiple Processes in Multitasking System

With virtual storage, each process in a multitasking system has its own virtual memory, and its own page table. Physical memory is shared among the different processes. Since all the pages are of the same size, any frame may be placed anywhere in memory. The pages selected do not have to be contiguous. The ability to load any page into any frame solves the problem of finding enough contiguous memory space in which to load programs of different sizes

CLI

Advantages: more flexible and powerful, faster for experienced users, and commands can be combined. Disadvantages: more difficult to learn and use.

Windows Interfaces

Also known as graphical user interfaces (GUIs). Mouse-driven and icon-based. Windows are allocated to the use of a particular program or process and contain a title bar, menu bar, and widgets. Widgets resize and move windows and scroll data and images within a window.

Inverted Page Table

An inverted page table lists every memory frame with its associated process and page. The table represents what is in physical memory at every instant. Any frame without an associated page entry is available for allocation.
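
A minimal sketch of such a table, with one invented entry per physical frame recording the owning process and page; scanning for the IPT_FREE sentinel finds a frame available for allocation.

#include <stddef.h>
#include <stdint.h>

#define NFRAMES  1024                /* arbitrary frame count for the sketch */
#define IPT_FREE (-1)                /* sentinel: frame holds no page        */

struct ipt_entry {
    int32_t pid;                     /* owning process, or IPT_FREE          */
    int32_t page;                    /* page number within that process      */
};

static struct ipt_entry ipt[NFRAMES];   /* one entry per physical frame      */

static void ipt_init(void)
{
    for (size_t f = 0; f < NFRAMES; f++) {
        ipt[f].pid  = IPT_FREE;
        ipt[f].page = IPT_FREE;
    }
}

/* Any frame without an associated page entry is available for allocation.  */
static size_t find_free_frame(void)
{
    for (size_t f = 0; f < NFRAMES; f++)
        if (ipt[f].pid == IPT_FREE)
            return f;
    return (size_t)-1;               /* memory full: a page must be replaced */
}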

How Are Memory Frames Managed and Assigned to Pages?

Physical memory is shared among all of the active processes in a system. Since each process has its own page table, it is not practical to identify available memory frames by accumulating data from all of those tables. Instead, there must be a single resource that identifies the entire pool of available memory frames from which the memory manager may draw when required. This can be done with an inverted page table.

