CS 490 Final
What is the difference between a mode switch and a process switch?
A mode switch may occur without changing the state of the process that is currently in the running state. A process switch involves taking the currently executing process out of the running state in favor of another process. The process switch involves saving more state information.
What is the difference between a multiprocessor and a multicore system?
A multicore computer is a special case of a multiprocessor, in which all of the processors are on a single chip.
Process
A process comprises: •A program in execution (the program's code) •Associated data in memory •The process control block (PCB)
Starvation
A runnable process is overlooked indefinitely by the scheduler: even though it is ready to proceed, it is never chosen to run
Critical Section
A section of code in a process that requires access to shared resources and must not be executed when another process is in a corresponding section of code
Semaphore
A special variable type with an integer value. Operations: initialize, semWait (decrement the value; block the caller if the result is negative), and semSignal (increment the value; wake a blocked process if one is waiting)
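The semaphore's wait and signal operations can be sketched in Python with a lock and condition variable. This is a minimal illustration (the class and method names are invented for this sketch; real code would use `threading.Semaphore` directly):

```python
import threading

class CountingSemaphore:
    """Minimal counting semaphore: a lock-protected integer plus a
    condition variable to block waiters (illustrative, not production)."""
    def __init__(self, value=1):
        self._value = value                  # initialize
        self._cond = threading.Condition()

    def sem_wait(self):
        # Decrement; block while no permits are available.
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def sem_signal(self):
        # Increment and wake one blocked waiter, if any.
        with self._cond:
            self._value += 1
            self._cond.notify()

sem = CountingSemaphore(2)   # allow two concurrent holders
sem.sem_wait()
sem.sem_wait()               # value now 0; a third wait would block
sem.sem_signal()             # value back to 1
```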
Memory Hierarchy
A structure that uses multiple levels of memories; as the distance from the processor increases, the size of the memories and the access time both increase.
Multicore Computer
Also known as a chip multiprocessor. Combines two or more processors (cores) on a single piece of silicon (die); each core consists of all of the components of an independent processor. In addition, multicore chips also include L2 cache and, in some cases, L3 cache
Atomic Operation
An action implemented as an uninterruptable instruction or function
What is an interrupt? How do operating systems detect them?
An interrupt is a mechanism by which other modules (I/O, memory) may interrupt the normal sequencing of the processor. The processor checks for an interrupt flag at the end of each fetch-execute cycle to determine if something needs to be dealt with.
What are the benefits of organizing memory in a hierarchy?
Cache memory is a memory that is smaller and faster than main memory and that is interposed between the processor and main memory. The cache acts as a buffer for recently used memory locations. There can be multiple caches; the faster and closer to the CPU a cache is, the more costly it is per byte. A hierarchy of memory allows the hardware to help speed up memory accesses without the CPU having to intervene as much, and it spreads out the cost.
Serial Processing
Computers ran from a console with display lights, toggle switches, some form of input device, and a printer. There was no OS. Scheduling: most installations used a hardcopy sign-up sheet to reserve computer time
Processor
Controls the operation of the computer Performs the data processing functions Referred to as the Central Processing Unit (CPU)
Principle of Locality
Programs tend to cluster their memory references in time and space (temporal and spatial locality), so data can be organized such that the percentage of accesses to each successively lower level is substantially less than that of the level above
Elements of a PCB
Identifier, State, Priority, Program Counter, Memory Pointers, Context Data, I/O Status Information, Accounting Data
Kernel Level Threads
In a pure KLT system, the OS is aware of all threads and performs the management
Microprocessor
Invention that brought about desktop and handheld computing. A processor on a single chip; the fastest general-purpose processor. Multiprocessors: each chip (socket) may contain multiple processors (cores)
Cache Memory
Invisible to the OS Interacts with other memory management hardware Processor must access memory at least once per instruction cycle Processor execution is limited by memory cycle time Exploit the principle of locality with a small, fast memory
Define jacketing.
Jacketing converts a blocking system call into a non-blocking system call by using an application level I/O routine to check the status of the I/O device. This enables a language to support threads that need to wait on I/O without blocking the entire process that contains the threads from doing any other useful work.
I/O Model
Moves data between the computer and external environments such as: storage, communications equipment, terminals
Race Condition
Multiple threads or processes read and write a shared data item, and the final result depends on the relative timing of their execution
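The hazard can be sketched in Python: `counter += 1` is a read-modify-write, so unguarded concurrent updates can lose increments, while guarding the update with a lock makes the final result independent of timing (the function names here are illustrative):

```python
import threading

# Unsafe version: `counter += 1` is a read, an add, and a write, so two
# threads can interleave and overwrite each other's update (lost updates).
counter = 0
def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1

# Safe version: the lock serializes the read-modify-write.
lock = threading.Lock()
safe_counter = 0
def safe_increment(n):
    global safe_counter
    for _ in range(n):
        with lock:
            safe_counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)   # always 400000: the lock removes the timing dependence
```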
What is multiprocessing?
Multiprocessing is a mode of operation in which processes execute physically in parallel on different CPUs/computing hardware.
What is multiprogramming?
Multiprogramming is a mode of operation that provides for the interleaved execution of two or more computer programs by a single processor.
operating system structure
OS services can be developed as a set of concurrent processes, as well
Process Creation
Parent creates child. Resource sharing: a child's resources are a subset of the parent's resources, but they are not shared between parent and child
SMP Advantages
Performance Scaling Availability Incremental Growth
Two State Model
A process is either Running or Not Running; this model is too simplistic for a realistic operating system
What does it mean to preempt a process?
Process preemption occurs when an executing process is interrupted by the operating system so that another process can be executed.
Instruction Register
Holds the most recently fetched instruction. The processor interprets the instruction and performs the required action: processor-memory transfer, processor-I/O transfer, data processing, or control
Types of interrupts
Program Timer I/O Hardware Failure
OS Convenience Advantages
Program development Program Execution Access to I/O devices Controlled Access to Files System Access Accounting Various APIs Error Detection and response
System Bus
Provides for communication among processors, main memory, and I/O modules
microkernel
Structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs
What is swapping and what is its purpose?
Swapping is moving the memory contents of a process into/out of secondary storage to free up/restore memory for the process. Freeing up memory allows the OS to admit/execute other processes that need memory resources.
Programmed I/O
The I/O module performs the requested action then sets the appropriate bits in the I/O status register. The processor periodically checks the status of the I/O module until it determines the instruction is complete
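The polling loop can be sketched with a hypothetical device object standing in for the I/O status register (FakeDevice and its behavior are invented for illustration):

```python
class FakeDevice:
    """Hypothetical I/O module: reports BUSY for a few polls, then READY."""
    def __init__(self, polls_until_done=3):
        self._polls_left = polls_until_done
        self.data = None

    def status(self):
        # Stand-in for the processor reading the I/O status register.
        self._polls_left -= 1
        if self._polls_left <= 0:
            self.data = "payload"
            return "READY"
        return "BUSY"

dev = FakeDevice()
while dev.status() != "READY":   # programmed I/O: the CPU busy-waits,
    pass                         # doing no other useful work while polling
print(dev.data)   # payload
```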
OS Resource Management Techniques
The OS frequently relinquishes control and must depend on the processor to allow it to regain control The OS is itself software that needs to run on the CPU and use the same resources as the applications it manages
Explain the difference between a monolithic kernel and a microkernel design?
The kernel is the portion of the operating system that includes the most heavily used portions of software; generally, the kernel is maintained permanently in main memory. Microkernels implement only the most important core features in the memory-resident portion of the operating system, whereas monolithic kernels include all possible OS features in the core. Microkernels use memory more efficiently by allowing some operating system features to be managed as if they were normal processes that can be swapped in/out of memory. This makes it easier to change, extend, or update an operating system.
Coarse Grained Threads
The process is developed as individual modules that could be assigned to processors. Each is performing its own specialized tasks. Ex: Client/Server applications
What does it mean to say that a process/program is CPU bound?
The process spends a much greater fraction of its time computing instructions than waiting for I/O
What does it mean to say that a process/program is I/O bound?
The process spends a much greater fraction of its time waiting for I/O instead of computing instructions on the CPU/ALU.
Mutual-exclusion
The requirement that when one process is in a critical section that accesses shared resources, no other process may be in its corresponding critical section
Multiprogramming
The technique of keeping multiple programs in main memory at the same time, competing for the CPU
Time-Sharing System
This enables many people to share the system and perceive it as dedicated to themselves. It required the ability to load multiple applications into memory at the same time
In the discussion of ULTs vs KLTs, it was pointed out that a disadvantage of ULTs is that when a ULT executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. Why is that so?
This is because ULTs are usually implemented by the programming language within a single process and, therefore, the threaded structure of the process that contains the threads is not visible to the OS at all. The OS cannot manage those threads separately from the process. So, from the OS point of view, the process is issuing a blocking call, not a thread.
Direct Memory Access (DMA)
Transfers the entire block of data directly to and from memory without going through the processor
Deadlock
Two or more processes are unable to proceed because each is waiting for the other to finish
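A classic deadlock arises when two threads acquire the same two locks in opposite orders; a standard remedy is to impose a fixed global acquisition order. A minimal Python sketch (the bank-transfer scenario is invented for illustration):

```python
import threading

accounts = {"a": 100, "b": 100}
locks = {name: threading.Lock() for name in accounts}

def transfer(src, dst, amount):
    # Acquire both locks in a fixed global order (alphabetical by name).
    # Without this, concurrent a->b and b->a transfers could each hold
    # one lock and wait forever for the other: a deadlock.
    first, second = sorted((src, dst))
    with locks[first]:
        with locks[second]:
            accounts[src] -= amount
            accounts[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 30))
t2 = threading.Thread(target=transfer, args=("b", "a", 10))
t1.start(); t2.start()
t1.join(); t2.join()
print(accounts)   # {'a': 80, 'b': 120} regardless of scheduling order
```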
Livelock
Two or more processes continuously change their states in response to the other(s) without doing any useful work
process attributes
User State information (registers, stack pointers, hardware flags) Control Information (state, priority, event identities, data pointers, etc.)
User Mode
a process is executing normally
kernel mode
a process is executing some instructions via the os that are privileged. Often, when a process calls an OS utility, it enters kernel mode to execute the instructions, and returns to user mode when the operation exits.
User Level Threads
all details of thread management are performed by the application itself via thread libraries
Interrupt-Driven I/O
allows the CPU to do other things while the I/O module works; the module interrupts the processor when the operation completes. Transfer rate is limited by the speed with which the processor can test and service a device.
Address Register
contain main memory addresses of data and instructions, or they contain a portion of the address that is used in the calculation of the complete or effective address
Instruction Execution
fetch-decode-execute Program counter (PC) holds address of the instruction to be fetched next
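The cycle can be sketched as a toy interpreter: fetch the instruction at the PC, advance the PC, then decode and execute (the opcode set here is invented for illustration):

```python
# Toy fetch-decode-execute loop. Memory holds (opcode, operand) pairs;
# the program counter (PC) always holds the address of the next fetch.
memory = [("LOAD", 5), ("ADD", 3), ("STORE", None), ("HALT", None)]
pc, acc, result = 0, 0, None

while True:
    opcode, operand = memory[pc]   # fetch the instruction at the PC
    pc += 1                        # advance the PC before executing
    if opcode == "LOAD":           # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        result = acc
    elif opcode == "HALT":
        break

print(result)   # 8
```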
process location
how much physical memory does it need, what portion of the process image is actually loaded into real memory
Process Control Block
A data structure created and managed by the operating system software. Contains sufficient information about a process so it is possible to interrupt a running process and resume its execution as if the interruption did not happen. Supports multiprogramming
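The PCB's contents can be sketched as a plain data structure; the field names below are illustrative, not an actual OS layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of typical PCB fields (names invented for illustration)."""
    pid: int                                          # identifier
    state: str = "New"                                # process state
    priority: int = 0
    program_counter: int = 0                          # where to resume
    registers: dict = field(default_factory=dict)     # saved context data
    memory_pointers: list = field(default_factory=list)
    io_status: list = field(default_factory=list)     # open files, pending I/O
    cpu_time_used: float = 0.0                        # accounting data

pcb = PCB(pid=42)
pcb.state = "Ready"   # the OS updates the PCB as the process changes state
```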
Monitor
is a higher level programming language construct that provides equivalent functionality to semaphores but is easier to control and verify
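A monitor can be approximated in Python with one lock plus condition variables, as in this bounded-buffer sketch (the class and method names are illustrative):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock guards the shared data, and
    condition variables let threads wait inside the monitor (illustrative)."""
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        lock = threading.Lock()                      # the monitor's single lock
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):
        with self._not_full:                         # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()                # wait(not_full)
            self._items.append(item)
            self._not_empty.notify()                 # signal(not_empty)

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.popleft()
            self._not_full.notify()
            return item

buf = BoundedBuffer(capacity=2)
buf.put("x")
print(buf.get())   # x
```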
Dispatch Queue
maintains pointers to the PCBs of the waiting processes
Fine Grained Threads
many similar or identical tasks are spread across processors to compute part of a solution Ex: image processing where each thread works on a portion of the image file
structured applications
modular design sometimes encompasses developing problem solutions built from multiple concurrent processes
multiple applications
multiprogramming was invented to handle this, and allow processing time to be shared among multiple active applications
Benefits of Threads
responsiveness, resource sharing, economy, scalability
Cache Size
Even small caches have a significant impact on performance
Symmetric Multiprocessors
Stand-alone computer system with the following characteristics: •Two or more similar processors of comparable capability •Processors share the same main memory and are interconnected by a bus or other internal connection scheme •Processors share access to I/O devices •All processors can perform the same functions •The system is controlled by an integrated operating system that provides interaction between processors and their programs at the job, task, file, and data element levels
Main Memory
stores data and information and is usually volatile; its contents are lost when electrical power is turned off. It plays a major role in a computer's performance.
uniprogramming
The system runs only one program at a time.
Fault Tolerance
the ability of a system to respond to unexpected failures or system crashes: a backup system immediately and automatically takes over with no loss of service
Process-Based OS
the operating system executes as a collection of processes side-by-side with the user processes
Block Size
the unit of data exchanged between cache and main memory
Memory Tables
•Allocation of main memory to processes •Allocation of secondary memory to processes •Protection attributes of the memory blocks •Additional information for managing virtual memory
Process Switching events
•Clock interrupt - the OS dispatcher can choose some time slice/interval for which the selected process runs on the CPU. At the end of that time, a clock interrupt signals the system, and the running process is moved to the Ready state to await its next turn •I/O interrupt - when a pending I/O event is completed, the interrupt informs the OS the event is complete •Memory fault - the process needs to pull more of its contents into memory •Other - traps, hardware faults, etc.
I/O Tables
•Devices available or assigned to processes •Status of I/O operations in progress •Memory locations used for data transfers
File Tables
•File status •Location on secondary memory
Simple Batch
•Monitor controls the sequence of events •Resident Monitor is software always in memory •Monitor reads in job and gives control •Job returns control to monitor
Process Tables
•OS storage for information about each process, including PCB
Five State Model
•Running - the process that is currently being executed on the processor •Ready - a process is ready to execute (not waiting on pending events) •Blocked - (aka: Waiting) - the process cannot resume execution until some event occurs •New - a process is just created, but not yet loaded into memory •Exit - a process has completed and its PCB storage can be reclaimed
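The legal transitions among the five states can be sketched as an adjacency map (the `move` helper is invented for illustration):

```python
# Legal transitions in the five-state model, as an adjacency map.
TRANSITIONS = {
    "New":     {"Ready"},                     # admitted by the OS
    "Ready":   {"Running"},                   # dispatched to the CPU
    "Running": {"Ready", "Blocked", "Exit"},  # timeout / wait for event / finish
    "Blocked": {"Ready"},                     # awaited event occurs
    "Exit":    set(),                         # terminal state
}

def move(state, target):
    """Return the new state, rejecting transitions the model forbids."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = move("New", "Ready")
s = move(s, "Running")
s = move(s, "Blocked")
print(s)   # Blocked
```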