C191 Operating Systems

command interpreter

Allows users to directly enter commands to be performed by the operating system

network operating system

An operating system that provides features such as file sharing across the network, along with a communication scheme that allows different processes on different computers to exchange messages.

real-time operating system

An operating system with well-defined, fixed time constraints that reacts to events as they occur; used in devices such as thermostats, mobile phones, and spacecraft

working set model

Based on the assumption of locality, using a moving working-set window. Prevents thrashing, while allowing the highest degree of multiprogramming.

current file position pointer

Because a process is usually either reading from or writing to a file, the current operation location can be kept as a per-process ___

prepaging

Bringing in all the initial pages, to prevent the high amount of page faults in the beginning.

power of 2 allocator

Buddy system - Keeps dividing fixed sized pages of size 2^n into buddies of size 2^(n-1), until there is a piece small enough to satisfy the request.

user interface

CLI, GUI, batch

shortest job first (SJF)

CPU is assigned the process that has the smallest next CPU burst
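
A small worked example (with illustrative burst lengths): given ready processes with next CPU bursts of 6, 8, 7, and 3 ms, SJF runs them in the order 3, 6, 7, 8, giving waiting times of 0, 3, 9, and 16 ms and an average wait of 7 ms; serving them FCFS in the order listed would average 10.25 ms.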

stack algorithms

Class of algorithms for page-replacement, that never suffer from Belady's anomaly (e.g. LRU).

privileged instructions

Designated machine instructions that may cause harm; the hardware allows them to execute only in kernel mode

counting semaphore

Is a type of semaphore usage wherein the value of a ___________________________ can range over an unrestricted domain.

binary semaphore

Is a type of semaphore usage wherein the value of a _______________can range only between 0 and 1.
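
A minimal C sketch of counting-semaphore usage with the POSIX semaphore API (the pool size N and the helper use_resource are illustrative); initializing the count to 1 instead would give a binary semaphore.

    /* Counting semaphore guarding a pool of N identical resources. */
    #include <semaphore.h>
    #include <stdio.h>

    #define N 3                     /* number of resource instances (illustrative) */
    static sem_t pool;

    static void use_resource(int id) {
        sem_wait(&pool);            /* wait(): blocks while the count is 0 */
        printf("thread %d holds a resource\n", id);
        sem_post(&pool);            /* signal(): releases one instance */
    }

    int main(void) {
        sem_init(&pool, 0, N);      /* counting semaphore initialized to N */
        use_resource(1);
        sem_destroy(&pool);
        return 0;
    }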

RAID 0

Non-redundant striping

CPU burst

Process execution begins with a ___________.

system calls

Provide an interface to the services made available by the OS

deadlock prevention

Provides a set of methods to prevent deadlocks by constraining how requests for resources can be made

dispatch queue

Queue used in GCD for assignment to a thread from the thread pool.

elevator algorithm

SCAN algorithm is sometimes called this because the disk arm behaves like an elevator in a building, first servicing all the requests going up and then reversing to service requests the other way

priority inversion

Scheduling problem when lower-priority process holds a lock needed by higher-priority process

interrupt vector

A table of memory locations of interrupt handlers, indexed by interrupt number, used to dispatch each interrupt to the correct handler

swap space

The space on the disk reserved for the full virtual memory space of a process.

timer

Unit that can generate an interrupt after a specified amount of time

programmed I/O (PIO)

Watching status bits and feeding data into controller register byte by byte.

anonymous pipes

Windows version of ordinary pipes

readers-writers problem

Writers to a shared data set must have exclusive access to prevent errors

producer, consumer

a ___ process produced information that is consumed by a ___ process

relative block number

a block number provided by the user to the operating system; is an index relative to the beginning of the file

core dump

a capture of the memory of the process

list

a collection of data values as a sequence

mailbox set

a collection of mailboxes which can be grouped together and treated as one mailbox for the purposes of the task

I/O subsystem

a collection of modules within the operating system that controls all I/O requests

hashed page table

a common approach for handling address spaces larger than 32 bits; the virtual page number is hashed into the table

queueing diagram

a common representation of process scheduling

network

a communication path between two or more systems

resource allocator

a computer acts as the manager for CPU time, memory space, file-storage space, etc

interpretation

a computer language is not compiled to native code, but instead executed in high-level form

inode

a data structure for storing file system metadata

tree

a data structure that can be used to represent data hierarchically

system resource allocation graph

a directed graph that precisely describes deadlocks

disk blocks

a disk drive might be divided into several thousand individual units

DTrace

a facility that dynamically adds probes to a running system

crash

a failure in the kernel

executable file

a file containing a list of instructions stored on disk

acyclic graph

a graph with no cycles; allows directories to share subdirectories and files

mirrored volume

a logical disk consists of two physical disks and every write is carried out on both disks

file

a logical storage unit

paging

a memory management scheme that allows the physical address space of a process to be noncontiguous

busy waiting

a method by which processes, waiting for an event to occur, continuously test to see if the condition has changed and remain in unproductive, resource consuming wait loops

storage area networks (SAN)

a network which provides access to consolidated block-level data storage

dispatched

a new process is put in the ready queue, where it waits until it is selected for execution, or ___

general tree

a parent can have an unlimited number of children

binary tree

a parent may have at most two children, left child and right child

link

a pointer to another file or subdirectory

processor affinity

a process has an affinity for the processor on which it is currently running

thread

a process is a program that performs a single ___ of execution

zombie

a process that has terminated but whose parent has not yet called wait()
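
A minimal POSIX C sketch of the parent/child relationship behind this term: until the parent calls wait(), a terminated child remains a zombie.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {              /* child */
            printf("child %d exiting\n", (int)getpid());
            exit(0);                 /* child becomes a zombie ...         */
        } else if (pid > 0) {        /* parent */
            wait(NULL);              /* ... until the parent reaps it here */
            printf("parent reaped child %d\n", (int)pid);
        }
        return 0;
    }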

thrashing

a process that spends more time paging than executing

process

a program in execution

operating system

a program that manages a computer's hardware

turnstile

a queue structure containing threads blocked on a lock

shared memory

a region of memory that is shared by cooperating processes is established

section object

a region of shared memory associated with the channel

dynamic loading

a routine is not loaded until it is called

critical section

a segment of code in which the process may be changing common variables, updating a table, writing a file, etc

blocks

a self contained unit of work

unnamed semaphore

a semaphore that can be used only by threads belonging to the same process

named semaphore

a semaphore that has an actual name in the file system and can be shared by multiple unrelated processes

text file

a sequence of characters organized into lines (and possibly pages)

source file

a sequence of functions, each of which is further organized as declarations followed by executable statements

memory transaction

a sequence of memory read-write operations that are atomic

middleware

a set of software frameworks that provide additional services to application developers

small computer systems interface (SCSI)

a set of standards for physically connecting and transferring data between computers and peripheral devices

program counter

a single threaded process has one of these to specify the next instruction to execute

peterson's solution

a software-based solution to the critical section problem
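
A sketch of the classic two-thread form in C (the function names are illustrative). On modern hardware the plain loads and stores may be reordered, so this is the textbook version rather than production code.

    #include <stdbool.h>

    static volatile bool flag[2];       /* flag[i]: thread i wants to enter */
    static volatile int turn;           /* whose turn it is to defer        */

    void enter_region(int i) {          /* i is 0 or 1; j is the other thread */
        int j = 1 - i;
        flag[i] = true;                 /* announce interest                */
        turn = j;                       /* give the other thread priority   */
        while (flag[j] && turn == j)
            ;                           /* busy-wait until it is safe       */
    }

    void leave_region(int i) {
        flag[i] = false;                /* exit the critical section        */
    }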

translation lookaside buffer (TLB)

a special high-speed cache for page table entries. Functions just like a memory cache and contains the page table entries that have been most recently used.

push migration

a specific task periodically checks the load on each processor and evenly distributes the load by moving processes from overloaded to idle or less busy processor

bitmap

a string of binary digits that can be used to represent the status of items

monitor

a synchronization construct that allows threads to have mutual exclusion and the ability to wait for a certain condition to become true

dining philosophers problem

a synchronization problem that is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner

hard real time system

a task must be serviced by its deadline

kernel mode

the mode in which the system executes a task on behalf of the operating system

user mode

the mode in which the system executes a task on behalf of the user

checksums

a technique used to verify the integrity of data

coarse grained multithreading

a thread executes on a processor until a long latency event such as a memory stall occurs

green thread

a thread library available for Solaris

target thread

a thread that is to be canceled

single-threaded

a traditional process has a single thread of control

clustered system

a type of multiprocessing system which gathers together multiple CPUs

spinlock

a type of mutex lock where the process "spins" while waiting for the lock to become available
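
A minimal sketch of the idea in C11 using an atomic test-and-set flag (function names are illustrative).

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                       /* "spin": busy-wait until the lock is free */
    }

    void release(void) {
        atomic_flag_clear(&lock);   /* make the lock available again */
    }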

critical section object

a user-mode mutex that can be acquired and released without kernel intervention

Circular SCAN (C-SCAN) Scheduling

a variant of SCAN designed to provide a more uniform wait time. moves the head from one end of the disk to the other, servicing requests along the way, when it reaches the end, it immediately returns to the beginning of the disk without servicing requests on the return trip

page fault

access to a page that's marked invalid

network attached storage

accessing disk storage via a remote host in a distributed file system

pipe

acts as a conduit allowing two processes to communicate
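
A minimal POSIX C sketch of an ordinary pipe: the parent writes into one end and the child reads from the other.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        char buf[32];

        if (pipe(fd) == -1) return 1;
        if (fork() == 0) {              /* child: consumer */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
            close(fd[0]);
            return 0;
        }
        close(fd[0]);                   /* parent: producer */
        write(fd[1], "hello", strlen("hello"));
        close(fd[1]);
        wait(NULL);
        return 0;
    }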

forward mapped page table

address translation working from the outer page table inward

shared lock

akin to a reader lock in that several processes can acquire the lock concurrently

priority inheritance protocol

all processes that are accessing resources needed by a higher priority process inherit the higher priority until they are finished with the resources

proportional allocation

allocate available memory to each process according to its size
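
For example (illustrative numbers): with 62 free frames and two processes of 10 pages and 127 pages, each process receives floor(s_i / S * m) frames, so the small process gets floor(10/137 * 62) = 4 frames and the large one gets floor(127/137 * 62) = 57.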

first fit

allocate the first hole that is big enough

worst fit

allocate the largest hole

best fit

allocate the smallest hole that is big enough
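
A small worked example covering the three placement strategies above: with free holes of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in that order), a 212 KB request goes into the 500 KB hole under first fit, the 300 KB hole under best fit, and the 600 KB hole under worst fit.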

hard affinity

allowing a process to specify a subset of processors on which it may run

page address extension (PAE)

allows 32 bit processors to access a physical address space larger than 4GB

memory mapping

allows a part of the virtual address space to be logically associated with a file

preemptive kernel

allows a process to be preempted while it is running in kernel mode

global replacement

allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process. one process can take a frame from another

anonymous access

allows a user to transfer files without having an account on the remote system

user mode scheduling (UMS)

allows applications to create and manage threads independently of the kernel

direct memory access (DMA)

allows certain hardware subsystems to access main system memory independent of the CPU

copy on write

allows parent and child processes to share the same pages

dual booted

allows us to install multiple operating systems on a single system

RAID 6

also called the P + Q redundancy scheme

starvation

also called indefinite blocking, a situation in which processes wait indefinitely within the semaphore

multicore

also called a multiprocessor; places multiple computing cores on a single chip

process identifier

also called pid, an integer number that provides a unique value for each process in the system

imperative

also called procedural language, used for implementing algorithms that are state based

trap

also known as an exception, is a software generated interrupt caused by an error

long term scheduler

also known as the job scheduler; selects processes from a mass-storage device and loads them into memory for execution

time sharing

also known as multitasking. CPU executes multiple jobs by switching among them, but the switches are so frequent that the users can interact with each program while it is running

multiprocessor systems

also known as parallel systems or multicore systems have two or more processors in close communication, sharing the computer bus, clock, memory, and peripheral devices

blocking send

also known as synchronous send, the sending process is blocked until the message is received by the receiving process or by the mailbox

virtual address

also referred to as logical address

TLB reach

amount of memory accessible from the TLB and is the number of entries multiplied by the page size
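
For example, a TLB with 64 entries and a 4 KB page size has a reach of 64 × 4 KB = 256 KB.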

logical address

an address generated by the CPU

physical address

an address seen by the memory unit; that is, the one loaded into the memory-address register of memory

sector slipping

an alternative to sector sparing in which all the sectors from the defective sector to the next spare are moved down one position, remapping the bad sector

swap map

an array of integer counters, each corresponding to a page slot in the swap area

socket

an endpoint for communication

GNU/Linux

an example of open-source software.

secondary storage

an extension of main memory and is able to hold large quantities of data permanently

semaphore

an integer variable that is accessed only through two standard operations: wait() and signal()

lightweight process (LWP)

an intermediate data structure between the user and kernel threads in many to many or two level models

system call

an interrupt triggered by software

targeted latency

an interval of time during which every runnable task should run at least once

signaled state

an object in ___ is available, and a thread will not block when acquiring the object

nonsignaled state

an object in ___ is not available and a thread will block when attempting to acquire the object

soft affinity

an operating system has a policy of attempting to keep a process running on the same processor but not guaranteeing that it will do so

matchmaker

an operating system provides a rendezvous daemon on a fixed RPC port

virtualization

an operating system that is natively compiled for a particular CPU that runs within another operating system also native to that CPU

interrupt driven

an operating system will sit quietly, waiting for something to happen because it is ___

volume

any entity containing a file system

protection

any mechanism for controlling access of processes or users to resources defined by the OS

locality model

as a process executes, it moves from locality to locality.

external fragmentation

as processes are removed from memory, the free memory space is broken into little pieces. exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous

hash map

associates key:value pairs using a hash function

bounded buffer

assumes a fixed buffer size
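
A C sketch of the classic shared circular buffer used by a producer and a consumer; this busy-waiting version leaves one slot empty to distinguish a full buffer from an empty one, and real code would add synchronization (for example, semaphores or a mutex).

    #define BUFFER_SIZE 8

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;         /* next free slot / next item to remove */

    void produce(int item) {
        while ((in + 1) % BUFFER_SIZE == out)
            ;                           /* buffer full: spin */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }

    int consume(void) {
        while (in == out)
            ;                           /* buffer empty: spin */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }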

nonblocking receive

asynchronous receive, the receiver receives either a valid message or a null

nonblocking send

asynchronous send, the sending process sends the message and resumes operations

valid invalid bit

attached to each entry in the page table to indicate if page is in process's logical address space

load balancing

attempts to keep the workload evenly distributed across all processors in an SMP system

exclusive lock

behaves like a write lock, only one process at a time can acquire this lock

RAID 3

bit interleaved parity

RAID 5

block interleaved distributed parity

RAID 4

block interleaved parity

block level striping

blocks of a file are striped across multiple disks

parallel regions

blocks of code that may run in parallel

pages

breaking logical memory into blocks of the same size called ___

frames

breaking physical memory into fixed sized blocks called ___

indexed allocation

bringing all the pointers together into one location at the index block

coalescing

the technique by which freed buddies are combined to form a larger segment

cooperating process

can affect or be affected by other processes executing in the system

fault tolerant

can suffer a failure of any single component and still continue operation

cancellation point

cancellation only occurs when a thread reaches this point

cache management

careful selection of the cache size and a replacement policy can result in greatly increased performance

clusters

groups of blocks; some file systems collect blocks into these multiples to reduce overhead

RAID 0 + 1

combination of RAID 0 and RAID 1.

scheduler activation

communication between the user-thread library and the kernel

system contention scope (SCS)

competition for the CPU with this scheduling takes place among all threads in the system

queueing network analysis

computing utilization, average queue length, average wait time, etc.

dynamic storage allocation problem

concerns how to satisfy a request of size n from a list of free holes

hot spare

configured to be used as a replacement in case of disk failure

job queue

consists of all processes in the system

job pool

consists of all processes residing on disk awaiting allocation of main memory

I/O control

consists of device drivers and interrupt handlers to transfer information between the main memory and disk system

services

constantly running system program processes

system wide open file table

contains a copy of the FCB of each open file

per process open file table

contains a pointer to the appropriate entry in the system wide open file table

data section

contains global variables

open file table

contains information about all open files

mount table

contains information about each mounted volume

file control block

contains information about the file, including ownership, permissions, and location of the file contents

boot control block

contains information needed by the system to boot an operating system from a volume

error correcting code

contains information to determine if bits have been corrupted, and if so, identify which bits have changed and calculate what the correct value should be

current directory

contains most of the files that are of current interest to the process

root partition

contains the operating system kernel

variable class

contains threads having priorities from 1 to 15

real time class

contains threads with priorities ranging from 16 to 31

volume control block

contains volume (or partition) details such as the number of blocks in the partition, size of blocks, etc

interprocess communication (IPC)

cooperating processes require this mechanism to allow them to exchange data and information

caching

copying information into a faster storage system (the cache) on a temporary basis

thread pool

create a number of threads at process startup and place them into a pool where they sit and wait for work

implicit threading

the transfer of the creation and management of threads from application developers to compilers and run-time libraries

distributions

custom builds of Linux

security

defend a system from external and internal attacks

constant linear velocity (CLV)

the density of bits per track is kept uniform

mechanism

determines how to do something

policy

determines what will be done

garbage collection

determine when the last reference has been deleted and the disk space can be reallocated

SCAN algorithm

disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk, where the movement is reversed and servicing continues

RAID 1 + 0

disks are mirrored in pairs and then resulting mirror pairs are striped

priority paging

distinguishing pages that have been allocated to processes from pages allocated to regular files

task parallelism

distributing tasks (threads) across multiple computing cores

parallelization

divides a program into separate components that run in parallel on individual cores in a computer or computers in a cluster

partitions

dividing memory into several fixed-sized ___, each of which may contain exactly one process

nonpreemptive kernel

does not allow a process running in kernel mode to be preempted

probes

DTrace creates these

mirroring

duplicating every disk

linked allocation

each file is a linked list of disk blocks

main queue

each process has its own serial queue, also known as this

contiguous memory allocation

each process is contained in a single section of memory that is contiguous to the section containing the next process

direct communication

each process that wants to communicate must explicitly name the recipient or sender of the communication

asymmetric processing

each processor is assigned a specific task. boss-worker relationship

symmetric processing (SMP)

each processor is self scheduling, all process may be in a common ready queue or each processor may have its own private queue of ready processes

symmetric multiprocessing (SMP)

each processor performs all tasks. all processors are peers, no boss-worker relationship

page slots

each swap area contains a series of 4 KB ___ which are used to hold swapped pages

thread local storage (TLS)

each thread needs its own copy of certain data

asynchronous procedure calls (APC)

enables a user thread to specify a function that is to be called when the user thread receives notification of a particular event

logical memory

the memory as seen by the user program; the operating system assigns logical addresses in this view and maps them onto physical memory

abstract data type (ADT)

encapsulates data with a set of functions to operate on data that are independent of any specific implementation

page number, page offset

every address generated by the CPU is divided into two parts
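
A minimal C sketch of the split, assuming for illustration a 4 KB page size (12 offset bits).

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u
    #define OFFSET_BITS 12u             /* log2(PAGE_SIZE) */

    int main(void) {
        uint32_t addr   = 0x0001A2F4;               /* illustrative logical address */
        uint32_t page   = addr >> OFFSET_BITS;      /* high-order bits: page number */
        uint32_t offset = addr & (PAGE_SIZE - 1);   /* low-order bits: page offset  */
        printf("page %u, offset %u\n", (unsigned)page, (unsigned)offset);
        return 0;
    }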

solid state disks

faster than hard drives and are nonvolatile

debugging

finding and fixing errors in a system

data parallelism

focuses on distributing subsets of the same data across multiple computing cores and performing the same operation on each core

belady's anomaly

for some page replacement algorithms, the page-fault rate may increase as the number of allocated frames increases
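
For example, with FIFO replacement and the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, three frames produce 9 page faults while four frames produce 10.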

variable timer

generally implemented by a fixed-rate clock and a counter; the operating system sets the counter so that an interrupt occurs after a chosen interval (for example, anywhere from 1 millisecond to 1 second)

cpu bound process

generates I/O requests infrequently, and uses more of its time doing computations

escalate privileges

giving a user extra permissions for an activity

aging

gradually increasing the priority of processes

group identifiers

group functionality being implemented as a system-wide list of group names and ___

clustering

handles page faults by bringing in the faulting page and several pages following the faulting page

turnaround time

how long it takes to execute a process

resource utilization

how various hardware and software resources are shared

orphan

if a parent terminates without invoking wait(), its child processes are left as ___

multithreaded

if a process has multiple threads of control, it can perform more than one task at a time

cascading termination

if a process terminates then all its children must also be terminated

load sharing

if multiple CPUs are available, ___ becomes possible - but scheduling problems become correspondingly more complex

software transactional memory (STM)

implements transactional memory exclusively in software; no special hardware is needed

performance tuning

improves performance by removing process bottlenecks

multiprogramming

increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute

page table length register (PTLR)

indicates the size of the page table

local descriptor table (LDT)

information about the first partition is kept here

global descriptor table (GDT)

information about the second partition is kept here

sequential access

information in the file is processed in order, one record after the other

linked list

items in a list that are linked to one another

CPU utilization

keeping the CPU as busy as possible

loadable kernel modules

kernel has a set of core components and links in additional services via modules

mode bit

kernel is 0, user is 1

context switch

kernel saves the context of the old process in PCB and loads the saved context of the new process scheduled to run

file organization module

knows about files and their logical blocks as well as physical blocks

logical blocks

large one dimensional arrays where the ___ is the smallest unit of transfer

index

like a ___ at the back of a book, contains pointers to various blocks

device queue

list of processes waiting for a particular I/O device

user file directory (UFD)

lists only the files of a single user

demand paging

loading pages only as they are needed

hot-standby mode

machine does nothing but monitor the active server. if the server fails, this host becomes the active server

tertiary storage

magnetic tape drives, CDs and DVDs

CPU scheduling

makes a decision of which job to run first when multiple jobs are ready to run at the same time

logical file system

manages metadata information

control program

manages the execution of user programs to prevent errors and improper use of the computer

one to one model

maps each user thread to a kernel thread

many to one model

maps many user level threads to one kernel thread

segmentation

memory management scheme that supports the programmer's view of memory as a collection of variable-sized segments

RAID 2

memory style error correcting codes

heap

memory that is dynamically allocated during process run time

ports

messages are sent to and received from mailboxes called these

working set minimum

minimum number of pages the process is guaranteed to have in memory

RAID 1

mirrored disks

disk arm

moves all the heads as a unit

bladeserver

multiple processor boards, I/O boards, and networking boards are placed in the same chassis

many to many model

multiplexing many user level threads to a small or equal number of kernel threads

pure demand paging

never bring a page into memory until it is required

lazy swapper

never swaps a page into memory unless that page will be needed

extended file attributes

newer file systems that include character encoding of the file and security features such as a file checksum

throughput

number of processes that are completed per time unit

pull migration

occurs when an idle processor pulls a waiting task from a busy processor

immutable shared file

once a file is declared as shared by its creator, it cannot be modified

frame table

one entry for each physical page frame indicating whether the latter is free or allocated

inverted page table

one entry for each real page of memory

hole

one large block of available memory

asymmetric clustering

one machine in hot-standby mode, while another is running the applications

slab

one or more physically contiguous pages

compaction

one solution to the problem of external fragmentation, shuffles the memory contents to place all the free memory together in one large block

50 percent rule

given N allocated blocks, roughly another 0.5 N blocks are lost to fragmentation, so about one third of memory may be unusable

asynchronous cancellation

one thread immediately terminates the target thread

deterministic modeling

one type of analytic evaluation. This method takes a particular predetermined workload and defines the performance of each algorithm for that workload

asymmetric multiprocessing

only one processor accesses the system data structures, reducing the need for data sharing

transient

operating system code that comes and goes as needed

mandatory

a file-locking approach (as opposed to advisory locking) in which, once a process acquires an exclusive lock, the operating system prevents any other process from accessing the locked file

short term scheduler

or cpu scheduler, selects from processes that are ready to execute and allocates cpu to one of them

modify bit

or dirty bit, a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified

distributed information systems

or distributed naming services, provide unified access to the information needed for remote computing

sector sparing

or forwarding, controller being told to replace each bad sector logically with one of the spare sectors

low-level formatting

or physical formatting, dividing a disk into sectors that the disk controller can read and write before a disk can store data

local replacement algorithm

or priority replacement algorithm, if one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well

reentrant code

or pure code, code that can be shared and never changes during execution

position time

or random access time, consists of the time necessary to move the disk arm to the desired cylinder and the time necessary for the desired sector to rotate to the disk head

time quantum

or time slice, a small unit of time

device directory

or volume table of contents; each volume that contains a file system must also contain information about the files in that system, recording information such as name, location, and size for all files on that volume

message passing

packets of information in predefined formats are moved between processes by the operating system

most frequently used (MFU)

page replacement algorithm that is based on the argument that the page with the smallest count was probably just brought in and has yet to be used

registers

page table is implemented as a set of dedicated ___.

zero fill on demand

pages that have been zeroed out before being allocated, thus erasing the previous contents

virtual memory

a technique that uses part of secondary storage (the hard disk) as an extension of memory; addresses generated by the processor are translated into real memory addresses, and it is mainly used to provide an address space larger than physical memory

monolithic

a kernel structure in which all operating-system functionality is placed in a single large program running in a single address space; partitioning tasks into small components or modules avoids this kind of system

relative path name

path name that defines a path from the current directory

absolute path name

a path name that begins at the root and follows a path down to the specified file, giving the directory names on the path

profiling

periodically samples the instruction pointer to determine which code is being executed

unbounded buffer

places no practical limit on the size of the buffer

multicore processing

placing multiple processor cores on the same physical CPU chip

page directory

points to an inner page table that is indexed by the contents of the innermost 10 bits in the linear address

page table base register (PTBR)

points to the page table

conflict phase of dispatch latency

preemption of any process running in the kernel, and release by low-priority processes of resources needed by the high-priority process

shortest remaining time first

preemptive SJF scheduling is sometimes called this

physical memory

primary memory (RAM) available in the system. the only memory directly accessible to the CPU

I/O burst

process execution begins with a CPU burst, which is then followed by an ___

cycle

process execution consists of a ___ of CPU execution and I/O wait

swapping

processes are swapped in and out of main memory to the disk

ready queue

processes residing in main memory that are ready and waiting to execute and kept on a list. generally stored as a linked list

shared memory model

processes use shared memory to create and gain access to regions of memory owned by other processes

text section

program code

shared libraries

programs linked before the new library was installed will continue using the older library

application programs

programs such as word processors, spreadsheets, compilers, etc. programs not associated with the operation of the system

mutex lock

protects critical regions and prevents race conditions

adaptive mutex

protects access to every critical data item

system programs

provide a convenient environment for program development and execution

soft real time systems

provide no guarantee as to when a critical real time process will be scheduled

portals

provide web access to internal systems

trace tapes

provides a way to compare two algorithms on the same set of real inputs

interactive

provides direct communication between the user and system

domain name system (DNS)

provides host name to network address translations for the entire internet

distributed lock manager (DLM)

provides shared access by supplying access control and locking to ensure that no conflicting operations occur in clustering technology

thread library

provides the programmer with an API for creating and managing threads

nice value

range from -20 to +19. lower value indicates higher relative priority which receives a higher proportion of CPU

memory mapped I/O

ranges of memory addresses are set aside and are mapped to the device registers

virtual run time

the scheduler records how long each task has run by maintaining the task's ___

cache coherency

refers to a number of ways to make sure all the caches of the resource have the same data and that the data in the caches make sense

virtual address space

refers to the logical view of how a process is stored in memory

remainder section

remaining code of a critical section that does not include entry section or exit section

distributed file system (DFS)

remote directories are visible from a local machine

medium term scheduler

removes a process from memory and reduces the degree of multiprogramming

microkernel

removes all nonessential components from the kernel and implements them as system and user-level programs

Least Recently Used (LRU)

replace the page that has not been used for the longest period of time
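
A small C sketch of the idea: each resident page records the "time" of its last reference, and the victim is the page whose timestamp is oldest (the reference string and frame count are illustrative).

    #include <stdio.h>

    #define FRAMES 3
    #define NONE  -1

    int main(void) {
        int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};       /* sample reference string */
        int n = (int)(sizeof ref / sizeof ref[0]);
        int page[FRAMES], last_used[FRAMES];
        int faults = 0;

        for (int f = 0; f < FRAMES; f++) page[f] = NONE;

        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int f = 0; f < FRAMES; f++)
                if (page[f] == ref[t]) hit = f;
            if (hit >= 0) { last_used[hit] = t; continue; }   /* refresh timestamp */

            faults++;
            int victim = -1;
            for (int f = 0; f < FRAMES; f++)                  /* prefer a free frame */
                if (page[f] == NONE) { victim = f; break; }
            if (victim < 0) {
                victim = 0;
                for (int f = 1; f < FRAMES; f++)              /* else the LRU frame */
                    if (last_used[f] < last_used[victim]) victim = f;
            }
            page[victim] = ref[t];
            last_used[victim] = t;
        }
        printf("page faults: %d\n", faults);
        return 0;
    }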

context

represented in the PCB of the process. the register set, stacks, and private storage area

superblock object

represents an entire file system

dentry object

represents an individual directory entry

inode object

represents an individual file

file object

represents an open file

module entry point

represents the function that is invoked when the module is loaded into the kernel

binary search tree

requires an ordering between a parent's two children in which left child < right child

contiguous allocation

requires that each file occupy a set of contiguous blocks on the disk

local replacement

requires that each process select from only its own set of allocated frames

deadlock avoidance

requires that the OS be given in advance additional information concerning which resources a process will request and use during its lifetime

periodic

requiring the CPU at constant intervals

admission control algorithm

the scheduler either admits a process, guaranteeing that it will complete on time, or rejects the request if it cannot guarantee that the task will be serviced by its deadline

proportional share

schedulers that operate by allocating shares among all applications

rate monotonic

schedules periodic tasks using a static priority policy with preemption

multilevel feedback queue

scheduling algorithm that allows a process to move between queues

multilevel queue

scheduling algorithm that partitions the ready queue into several separate queues

scheduling classes

scheduling in the Linux system is based on ___; each class is assigned a specific priority

earliest deadline first (EDF)

scheduling that dynamically assigns priorities according to deadline, the earlier the deadline the higher the priority

process scheduler

selects an available process (from a set of available processes) for program execution on the CPU

queue

sequentially ordered data structure that uses the first in, first out (FIFO) principle for adding and removing items

stack

sequentially ordered data structure that uses the last in, first out (LIFO) principle for adding and removing items

system call interface

serves as the link to system calls made available by the operating system.

high-availability

service will continue even if one or more systems in the cluster fail

race condition

several processes access and manipulate the same data concurrently and the outcome depends on the order in which the access takes place

user defined signal handler

signal handler which overrides the default action

round robin

similar to FCFS but preemption is added to enable the system to switch between processes

clustered page table

similar to hashed page tables except that each entry refers to several pages (such as 16) rather than a single page

boot block

simple code (small enough to fit in a single disk block) that knows only the address on disk and the length of the remainder of the bootstrap program

application programming interface (API)

specifies a set of functions that are available to an application programmer including parameters and return values

access control list

specifies user names and types of access allowed for each user

consistency semantics

specify how multiple users of a system are to access a shared file simultaneously

I/O bound process

spends more of its time doing I/O than it spends doing computations

data striping

splitting the bits of each byte across multiple disks, this type is also called bit-level striping

redundancy

store extra information that's not normally needed but can be used to rebuild lost information in the event of a disk failure

kernel threads

support for threads at the kernel level, managed directly by operating system

user threads

support for threads provided at the user level, managed without kernel support

tracks

surface of a platter is logically divided into circular ___

anonymous memory

swap space used for pages not associated with a file

fine grained multithreading

switches between threads at a much finer level of granularity than coarse grained multithreading

blocking receive

synchronous receive, the receiver blocks until a message is available

dynamically linked libraries

system libraries are linked to user programs when the programs are run

static linking

system libraries are treated like any other object module and are combined by the loader into the binary program image

system processes/daemons

system programs that are loaded into memory at boot time that run the entire time the kernel is running

non-uniform memory access (NUMA)

systems in which memory access times vary significantly

server systems

systems that satisfy requests generated by client systems

little-endian

systems that store the least significant byte first

big-endian

systems that store the most significant byte first

hash function

takes data as its input, performs a numeric operation on this data, and returns a numeric value

network computers

terminals that understand web-based computing

thread cancellation

terminating a thread before it has completed

PThreads

the POSIX standard defining an API for thread creation and synchronization

base register, limit register

the ___ holds the smallest legal physical memory address; the ___ specifies the size of the range
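
A minimal C sketch of the legality check the hardware performs with these two registers: an address is legal only if base <= address < base + limit; anything else traps to the operating system.

    #include <stdbool.h>
    #include <stdint.h>

    static bool address_is_legal(uint32_t addr, uint32_t base, uint32_t limit) {
        return addr >= base && addr < base + limit;   /* otherwise: trap to the OS */
    }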

optimal page replacement algorithm

the algorithm with the lowest page-fault rate of all algorithms; it replaces the page that will not be used for the longest period of time and never suffers from Belady's anomaly

waiting time

the amount of time a process spends waiting in the ready queue

event latency

the amount of time that elapses from when an event occurs to when it is serviced

linear address

the base and limit information about a segment is used to generate this

message passing model

the communicating processes exchange messages with one another to transfer information

read end

the consumer reads from this end of the pipe

exit section

the critical section is followed by this

state

the current activity of a process. new, running, waiting, ready, terminated

constant angular velocity (CAV)

the density of bits decreases from inner tracks to outer tracks to keep the data rate constant

module exit point

the function that is called when the module is removed from the kernel

bootstrap program

the initial program that allows a computer to run, initializes all aspects of the system. locates kernel, loads it into main memory, and starts its execution

upcall

the kernel informs an application about certain events

default signal handler

the kernel runs this when handling a signal

iSCSI

the latest network attached storage protocol

media services

the layer that provides services for graphics, audio, and video

core services

the layer that provides support for cloud computing and databases

mount point

the location within a file structure where the file system is to be attached

dispatcher

the module that gives control of the CPU to the process selected by the short term scheduler

hard disk drive (HDD)

the most common secondary storage device

degree of multiprogramming

the number of processes in memory

Moore's law

the number of transistors doubles every 18 months

interrupt

the occurrence of an event is signaled by this from the hardware or software

kernel

the one program running at all times on the computer

open count

the open file table also has an ___ associated with each file to indicate how many processes have the file open

layered approach

the operating system is broken into a number of layers, layer 0 is hardware, layer n is user interface

variable partition

the operating system keeps a table indicating which parts of memory are available and which are occupied

page table

the page number is used as an index into a ___. contains the base address of each page in physical memory

instruction register

the part of the CPU's control unit that holds the instruction currently being executed or decoded

hit ratio

the percentage of virtual address translations that are resolved in the TLB rather than the page table
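
For example, assuming a 100 ns memory access and ignoring the TLB lookup time itself, an 80 percent hit ratio gives an effective access time of 0.80 × 100 + 0.20 × 200 = 120 ns, since a TLB miss requires an extra memory access to read the page table.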

interrupt latency

the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt

log file

the place where error information is written when a process fails

booting

the procedure of starting a computer by loading the kernel

first come first served (FCFS)

the process that requests the CPU first is allocated the CPU first

input queue

the processes on the disk that are waiting to be brought into memory for execution

write end

the producer writes to one end of the pipe

transfer rate

the rate at which data flows between the drive and computer

memory management unit

the run time mapping from virtual to physical addresses is done by this hardware device

entry section

the section of code in which a process requests permission to enter its critical section

scheduler

the selection process used by the operating system to select a process from a queue is carried out by the ___

logical address space

the set of all logical addresses generated by a program

physical address space

the set of all physical addresses corresponding to logical addresses

cylinder

the set of tracks that are at one arm position makes up a ___

sectors

the surface of a platter is logically divided into circular tracks, which are subdivided into ___

communications

the category of system calls implemented using either the message-passing model or the shared-memory model

protection

the system call that provides a mechanism for controlling access to the resources provided by a computer system

device manipulation

the category of system calls for requesting, releasing, reading, and writing the resources (main memory, disk drives, files, etc.) controlled by the operating system

information maintenance

the system call where the system transfers information between the user program and the operating system

file manipulation

the system call where you can create and delete files

process control

the system call where you can halt the execution of a running program

write pointer

the system must keep a ___ to the location in the file where the next write is to take place. it must be updated whenever a write occurs

read pointer

the system needs to keep a ___ to the location in the file where the next read is to take place

deferred cancellation

the target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion

mean time to repair

the time it takes (on average) to replace a failed disk and restore the data on it

rotational latency

the time necessary for the desired sector to rotate to the disk head

seek time

the time necessary to move the disk arm to the desired cylinder

enabling control blocks

data structures describing the actions to be performed when probes fire

high-performance computing

these systems supply significantly greater computational power than single-processor or even SMP systems because they run applications concurrently on all computers in the cluster

cleanup handler

this function is invoked when a cancellation request is found to be pending

process control block (PCB)

this is what each process is represented by in the operating system. also called a task control block

process contention scope (PCS)

this scheme is known as ___ since competition for the CPU takes place among threads belonging to the same process

response time

time from submission of a request until the first response is produced

bandwidth

total number of bytes transferred divided by the total time between the first request for service and the completion of the last transfer

double buffering

transfers between operating system buffers and process memory that occur only when the process is swapped in

symmetric clustering

two or more hosts are running applications and are monitoring each other

deadlocked

two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes

shared memory

two or more processes read and write to a shared section of memory

device driver

understands the device controller and provides the rest of the operating system with a uniform interface to the device

user identifiers (user IDs)

unique numerical IDs to distinguish among users

address space identifier (ASID)

uniquely identifies each process and is used to provide address space protection for that process

internal fragmentation

unused memory that is internal to a partition

upcall handler

upcalls are handled by the thread library with this

advanced local procedure call (ALPC)

used for communication between processes on the same machine

signal

used in UNIX to notify a process that a particular event has occurred

firewalls

used to protect networks from security breaches

registry

used to store and retrieve configuration information

hardware transactional memory (HTM)

uses hardware cache hierarchies and cache coherency protocols to manage and resolve conflicts involving shared data residing in separate processors' caches

analytic evaluation

uses the given algorithm and system workload to produce a formula or number to evaluate the performance of the algorithm for that workload

raw disk

using a disk partition as a large sequential array of logical blocks without any file system data structures

LOOK and C-LOOK

versions of SCAN and C-SCAN in which the arm goes only as far as the final request in each direction before reversing, rather than continuing to the end of the disk

state save, state restore

we perform a ___ of the current state of the CPU, and then a ___ to resume operations

crash dump

when a crash occurs, error information is saved to a log file, and the memory state is saved here

multiple partition method

when a partition is free, a process is selected from the input queue and is loaded into the free partition

swapping

when a process is reintroduced into memory and its execution can be continued where it left off

memory stall

when a processor spends a significant amount of time waiting for the data while accessing memory

volatile

when a storage device loses its contents when the power is turned off, it is called ___

rendezvous

when both send and receive are blocking we have a ___ between the sender and receiver

idle thread

when no ready thread is found, the dispatcher will execute a special thread called this

job scheduling

when several jobs are ready to be brought into memory and there isn't enough room, then the system has to choose among them

automatic working set trimming

when the amount of free memory falls below the threshold, the virtual memory manager uses a tactic known as ___ to restore the value above the threshold

head crash

when the head damages the magnetic surface

emulation

when the source CPU is different from the target CPU

dispatcher object

Windows provides these for thread synchronization outside the kernel

connection ports, communication ports

Windows uses two types of ports to establish and maintain a connection between two processes

priority scheduling

A priority is associated with each process, and the CPU is allocated to the process with the highest priority.

two level model

A variation on the many-to-many model that multiplexes many user-level threads to a smaller or equal number of kernel threads but also allows a user-level thread to be bound to a kernel thread.

graceful degradation

Ability to continue providing service proportional to the level of surviving hardware

