ITC262 - Exam Revision

Benefits of threads

Less time to create than processes; less time to terminate than processes; less time to switch between two threads within the same process than to switch between processes; enhanced efficiency since threads share the same resources and don't require kernel intervention for protection and communication within the same process.

simple paging

Main mem divided into equal-sized frames. Processes divided into equal-sized pages (the same length as the frames) and loaded into mem, not necessarily contiguously. No external frag; a small amount of internal frag (in a process's last page).

Explain the concept of a process and mark its differences from a program

A process is an instance of a program being executed. A program is essentially a set of instructions written in a computer language.

Briefly define round-robin scheduling.

A clock interrupt is generated at periodic intervals. When the interrupt occurs, the currently running process is placed in the ready queue, and the next ready job is selected on an FCFS basis. Pre-emptive.
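The time slicing above can be sketched in a few lines (process names, burst times, and the quantum are made up for illustration):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin: 'bursts' maps process name -> CPU time needed.
    Returns the order in which processes complete."""
    ready = deque(bursts.items())          # FCFS ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()  # dispatch at the clock interrupt
        if remaining <= quantum:
            finished.append(name)          # completes within this time slice
        else:
            ready.append((name, remaining - quantum))  # pre-empted, re-queued
    return finished

print(round_robin({"A": 3, "B": 6, "C": 2}, quantum=2))  # ['C', 'A', 'B']
```

C finishes first because it fits entirely in its first slice, while A and B are pre-empted and rejoin the back of the queue.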

mutual exclusion

Only one process can use a critical resource at a time; the process is said to be in a critical section of its execution while this occurs. Mutual exclusion can give rise to deadlock and starvation.

dynamic partitioning

Partitions are created dynamically, so each process is loaded into a partition of exactly the same size as that process. No internal fragmentation. Inefficient for the processor, since it needs to perform compaction to counter external fragmentation.

page buffering

Places pages chosen for replacement in a buffer (still in main mem). A free page list is updated with this detail, and the page is removed from the page table. If referenced while in the buffer, it is moved back to the process's resident set. Reduces I/O since pages can be written to disk in clusters instead of one at a time.

The process list contains

Pointer to the memory location of the process; process index register; PC value; base & limit registers (the region of mem).

resident set

portion of a process actually in main mem

the process image is the collection of

program, data, stack, and attributes defined in the process control block

advantages of segmentation

Programmer doesn't need to know data structure sizes ahead of time (the OS manages growth); easier recompilation; sharing of program/data between processes; protection by way of access privileges.

mutex

Same as a binary semaphore, except that the process that locks it (sets the value to 0) must be the one to unlock it (set it to 1).

Multithreading refers to

the ability of an OS to support multiple, concurrent paths of execution within a single process.

scheduling issues for multiprocessors

The assignment of processes to processors; the use of multiprogramming on individual processors; the actual dispatching of a process.

The producer/consumer problem

the problem is to make sure that the producer won't try to add data into a buffer if it's full, and that the consumer won't try to remove data from an empty buffer.
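A sketch of the standard semaphore solution (buffer size and item count are arbitrary): `empty` counts free slots, `full` counts filled slots, and a lock protects the buffer itself.

```python
import threading

BUF_SIZE = 4
buffer = []
empty = threading.Semaphore(BUF_SIZE)  # counts free slots in the buffer
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # blocks if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()

def consumer(n):
    for _ in range(n):
        full.acquire()                 # blocks if the buffer is empty
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(10)))     # True: nothing lost, nothing duplicated
```

The producer blocks on `empty` when the buffer is full, the consumer blocks on `full` when it is empty, which is exactly the condition pair described above.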

memory management is

The task of subdividing memory to facilitate multiple user processes on a multiprogramming system. Vital to ensure the processor can be kept busy.

A pure kernel-level thread approach puts all thread management in the kernel and uses an API to the kernel thread facility

true

A monitor supports synchronisation by the use of condition variables contained within and accessible only to the monitor

true

If a process is assigned statically to one processor, other processors can be left idle; a dedicated short-term queue is used for each processor

true

The kernel is not aware of user level threads

true

Windows uses a pure kernel level thread approach

true

Dynamic load balancing, in which threads are moved from a queue for one processor to a queue for another, is the approach taken by Linux

true

if a page is not in main mem a page fault occurs

true

if the semaphore is currently 0 the next process to issue a semWait() is blocked

true

segment table entries include a p-bit (present in main mem) and an m-bit (modified since placed in main mem)

true

strong semaphores guarantee freedom from starvation while weak semaphores do not

true

to overcome having large page tables in memory, they themselves are stored in virtual memory and subject to paging

true

using semWait and semSignal is difficult since they can be scattered throughout code

true

The monitor is a programming language construct that provides equivalent functionality to that of semaphores and that is easier to control

true - it allows a programmer to lock any object

each process has a page table

true - it is loaded into main mem with the process. Each page table entry uses a P bit to indicate whether that page is in main mem; if it is, the PTE also includes the frame number

interrupt disabling for mutual exclusion works for uniprocessors only

true - since processes cannot have overlapped execution on a uniprocessor. On a multiprocessor, disabling interrupts on one processor does not stop other processors entering the critical section, so starvation and deadlock remain possible

The DMA module transfers the entire block of data, one word at a time, directly to or from memory without going through the processor

true - the exchange of data between the DMA and I/O modules takes place off the system bus

A translation lookaside buffer is implemented in hardware to speed up page table lookups

true. The TLB is checked first for the virtual address; if not present (TLB miss), the CPU checks the page table. If the page is not in main mem, a page fault occurs and the OS is invoked to retrieve it from disk. If present, the page is pulled from main mem and the TLB is updated
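The lookup order can be sketched as follows (the page size, page table contents, and addresses are made up for illustration):

```python
PAGE_SIZE = 4096

def translate(vaddr, page_table, tlb):
    """Sketch of virtual-to-physical translation with a TLB front-end."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                        # TLB hit: no page-table walk needed
        frame = tlb[vpn]
    elif vpn in page_table:               # TLB miss: consult the page table
        frame = page_table[vpn]
        tlb[vpn] = frame                  # update the TLB for next time
    else:
        raise LookupError("page fault")   # OS must fetch the page from disk
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 9}                 # vpn -> frame number
tlb = {}
print(translate(4100, page_table, tlb))   # vpn 1, offset 4 -> 9*4096 + 4 = 36868
```

After the first reference the mapping for vpn 1 sits in the TLB, so a second reference to the same page skips the page-table walk.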

What terminology can be used to refer to a system configuration that includes an I/O module which is a separate processor with a specialised instruction set? A. Direct Memory Access (DMA) B. I/O channel C. I/O processor D. Any of the above

D. Any of the above

Under microkernel design, how do operating system components external to the microkernel communicate with each other?

D. Message passing

What is it called when only one process is allowed to have access to a shared resource?

D. Mutual exclusion

two approaches for scheduling on a multiprocessor

Master/slave scheduling - key kernel functions always run on the same processor - other jobs are scheduled by the master onto the other processors - if a slave needs I/O, the request is sent to the master - failure of the master breaks the system - the master can be a bottleneck. Peer scheduling - more complicated - anything can run on any processor

What is multiprogramming?

Multiprogramming is a mode of operation that provides for the interleaved execution of two or more computer programs by a single processor.

What are the four conditions that create deadlock?

1) Mutual exclusion 2) Hold and wait 3) No preemption 4) Circular wait: a closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain. The first three are necessary conditions; deadlock occurs when circular wait joins them.

Why is the average search time to find a record in a file less for an indexed sequential file than for a sequential file?

In a sequential file, a search may involve sequentially testing every record until the one with the matching key is found. The indexed sequential file provides a structure that allows a less exhaustive search to be performed.

Nonprocess Kernel

Kernel executed outside of any process (privileged mode). Has its own stack. When a process is interrupted, its mode context is saved and control passes to the kernel

Process based OS

Major kernel functions organised into different processes Modular design Functions can be run on dedicated processor for better perf

Disadvantages of user level threads

Many OS calls are blocking - a blocking system call from one thread blocks the whole process. In a pure ULT strategy, a multithreaded app cannot take advantage of multiprocessing

What is the difference between binary and general semaphores?

A binary semaphore may only take on the values 0 and 1. A general semaphore may take on any integer value.

What is the relationship between a pathname and a working directory?

The pathname is an explicit enumeration of the path through the tree structured directory to a particular point in the directory. The working directory is a directory within that tree structure that is the current directory that a user is working on.

What operations can be performed on a semaphore?

1) A semaphore may be initialised to a nonnegative integer. 2) The wait operation decrements the semaphore value. If the value becomes negative, then the process executing the wait is blocked. 3) The signal operation increments the semaphore value. If the value is not positive, then a process blocked by a wait operation is unblocked.
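The three operations above can be sketched as a teaching implementation (built here on a Python condition variable; real code would simply use `threading.Semaphore`):

```python
import threading

class Semaphore:
    """initialise / wait / signal as described; a negative value
    records how many threads are blocked."""
    def __init__(self, value=0):
        self.value = value
        self._cond = threading.Condition()
        self._wakeups = 0            # signals not yet consumed by a waiter

    def sem_wait(self):
        with self._cond:
            self.value -= 1
            if self.value < 0:       # value went negative: block
                while self._wakeups == 0:
                    self._cond.wait()
                self._wakeups -= 1

    def sem_signal(self):
        with self._cond:
            self.value += 1
            if self.value <= 0:      # someone is blocked: unblock one
                self._wakeups += 1
                self._cond.notify()

s = Semaphore(2)
s.sem_wait(); s.sem_wait()
print(s.value)   # 0: both waits went straight through without blocking
```

The `_wakeups` counter makes a signal "stick" until exactly one blocked waiter consumes it, matching the semantics described above.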

Give four general examples of the use of threads in a single-user multiprocessing system

1) Foreground/background work; 2) asynchronous processing; 3) speedup of execution by parallel processing of data; 4) modular program structure.

What are three contexts in which concurrency arises?

1) Multiple applications 2) Structured applications 3) Operating-system structure

Linux Process states

Running - executing or ready to execute. Interruptible - blocked state, waiting for an event: I/O, a resource, or a signal from another process. Uninterruptible - blocked state, waiting directly on hardware and won't handle any signals. Stopped - halted and only resumed by positive action from another process. Zombie - terminated, but its task structure remains in the process table

Briefly define shortest-process-next scheduling.

SPN is a non-pre-emptive policy in which the process with the shortest expected processing time is selected next.

Briefly define the disk scheduling policies

FIFO: Items are processed from the queue in sequential first-come-first-served order. SSTF: Select the disk I/O request that requires the least movement of the disk arm from its current position. SCAN: The disk arm moves in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction (end of disk). The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order. C-SCAN: Similar to SCAN, but restricts scanning to one direction only. Thus, when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
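The policies can be compared by total head movement. A sketch of FIFO and SSTF over an illustrative request queue (the track numbers and start position are made up, textbook-style values):

```python
def fifo(start, requests):
    """Total head movement when servicing requests in arrival order."""
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

def sstf(start, requests):
    """Always service the pending request nearest the current head position."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]    # head starts at track 53
print(fifo(53, queue), sstf(53, queue))        # 640 236
```

SSTF cuts the head movement sharply here, at the cost of possible starvation of far-away requests.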

Fragmentation of a disk can be removed by the process of compaction. Compaction involves a relocation of the files. But disks do not have relocation registers or base registers. How, then, can files be relocated in a disk?

Files on secondary storage can be relocated by reading them into main memory and then writing them back to new locations in secondary memory. The procedure involves bringing a data block of the file to main memory, relocating it (i.e., assigning it a new location), and then storing it back in the new location. The process continues until all the required portions of the file are relocated. Consequently, this requires a considerable overhead.

Modern resident set size policies are

Fixed-allocation - a process gets a fixed number of frames. Variable-allocation - the page frame allocation varies over the lifetime of the process; increases overhead since the OS needs to assess active process behaviour

What is the difference between a page and a frame?

In a paging system, programs and data stored on disk are divided into equal, fixed-size blocks called pages, and main memory is divided into blocks of the same size called frames. Exactly one page fits in one frame.

List the three control problems associated with competing processes and briefly define each

1) Mutual exclusion: competing processes may access a resource that both wish to use only one at a time; mutual exclusion mechanisms must enforce this one-at-a-time policy. 2) Deadlock: if competing processes need exclusive access to more than one resource, deadlock can occur if each process has gained control of one resource and is waiting for the other. 3) Starvation: one of a set of competing processes may be indefinitely denied access to a needed resource because other members of the set are monopolising that resource.

List three degrees of awareness between processes and briefly define each.

1) Processes unaware of each other: These are independent processes that are not intended to work together. They compete for resources. 2) Processes indirectly aware of each other: These are processes that are not necessarily aware of each other by their respective process IDs, but that share access to some object, such as an I/O buffer. 3) Processes directly aware of each other: These are processes that are able to communicate with each other by process ID and which are designed to work jointly on some activity.

What conditions are generally associated with the readers/writers problem?

1. Any number of readers may simultaneously read the file. 2. Only one writer at a time may write to the file. 3. If a writer is writing to the file, no reader may read it.
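These conditions are what the classic "first" readers-writers solution enforces; a sketch with made-up function names:

```python
import threading

read_count = 0
count_lock = threading.Lock()     # protects read_count
resource = threading.Lock()       # held by one writer, or by readers as a group

def start_read():
    global read_count
    with count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()    # first reader locks writers out

def end_read():
    global read_count
    with count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()    # last reader lets writers back in

def write(action):
    with resource:                # one writer at a time, and no readers
        action()

start_read(); start_read()                # any number of readers at once
print(resource.acquire(blocking=False))   # False: a writer would block now
end_read(); end_read()
```

This "readers-preference" variant can starve writers if readers keep arriving, which is why writer-preference variants also exist.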

List and briefly define five different categories of synchronisation granularity.

1. Fine: Parallelism inherent in a single instruction stream 2. Medium: Parallel processing or multitasking within a single application 3. Coarse: Multiprocessing of concurrent processes in a multiprogramming environment 4. Very Coarse: Distributed processing across network nodes to form a single computing environment 5. Independent: Multiple unrelated processes

requirements to provide mutual exclusion

1. Mutual exclusion must be enforced. 2. A process halting in its noncritical section must not interfere with other processes. 3. It must be impossible for a process requiring access to a critical section to be delayed forever: no starvation or deadlock. 4. When no process is in its critical section, any process that requests entry must be permitted to enter without delay. 5. No assumptions are made about relative process speeds or the number of processors. 6. No process may remain in its critical section indefinitely.

What is the main function of a dispatcher? Give some examples of events when it is invoked.

A dispatcher, or a short-term scheduler, allocates processes in the ready queue to the CPU for immediate processing. It makes the fine-grained decision of which process to execute next. It has to work very frequently since, generally, a process is executed in the CPU for a very short interval. The dispatcher is invoked whenever an event that may lead to the blocking of the current process occurs. It is also invoked when an event that may provide an opportunity to preempt a currently running process in favour of another occurs. Examples of such events include clock interrupts, I/O interrupts, operating system calls, and signals (e.g., semaphores).

What is the difference between a file and a database?

A file is a collection of similar records, and is treated as a single entity by users and applications and may be referenced by name. A database is a collection of related data. The essential aspects of a database are that the relationships that exist among elements of data are explicit and that the database is designed for use by a number of different applications.

What is a file management system?

A file management system is that set of system software that provides services to users and applications in the use of files.

Which characteristics of monitors mark them as high-level synchronisation tools?

A monitor is a collection of procedures, variables, and data structures which are grouped together in a module. The characteristics that mark it as a high-level synchronisation tool and give it an edge over primitive tools are: 1) As the variables and procedures are encapsulated, local data variables are accessible only by the monitor's procedures and not by any external procedure, thus eliminating the erroneous updating of variables. 2) A process enters the monitor by invoking one of its procedures. Therefore, not all processes can use the monitor, and those that can, must do so only in the manner defined in its construct. 3) Only one process may execute in the monitor at a time; all other processes that invoke the monitor are blocked and wait for the monitor to become available. 4) Monitors can control the time of accessing a variable by inserting appropriate functions.
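A monitor-like sketch in Python (the `BoundedCounter` example is made up): one lock guards every method, and a condition variable holds threads that must wait inside the monitor.

```python
import threading

class BoundedCounter:
    """Monitor-style object: a single lock guards every method, and a
    condition variable holds threads that must wait inside the monitor."""
    def __init__(self, limit):
        self.limit = limit
        self.value = 0
        self._lock = threading.Lock()          # only one thread in the monitor
        self._not_full = threading.Condition(self._lock)

    def increment(self):
        with self._lock:
            while self.value >= self.limit:
                self._not_full.wait()          # cwait: release lock and sleep
            self.value += 1

    def decrement(self):
        with self._lock:
            self.value -= 1
            self._not_full.notify()            # csignal: wake one waiter

c = BoundedCounter(2)
c.increment(); c.increment()   # a third increment would wait inside the monitor
print(c.value)                 # 2
```

The caller never touches the lock or condition variable directly, which is the encapsulation property listed above.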

What is the key difference between a mutex and a binary semaphore?

A mutex is a mutual exclusion object that is created so that multiple processes can take turns in accessing a shared variable or resource. A binary semaphore is a synchronisation variable used for signalling among processes; it can take on only two values: 0 and 1. The mutex and the binary semaphore are used for similar purposes. The key difference between the two is that the process that locks a mutex must be the one to unlock it; in a semaphore implementation, however, if the operation wait(s) is executed by one process, the operation signal(s) can be executed by any process.
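The ownership rule can be sketched as a teaching class (not a real API — Python's `threading.Lock` does not itself enforce ownership):

```python
import threading

class Mutex:
    """Lock that enforces ownership: only the locking thread may unlock.
    (A teaching sketch; real code would use threading.Lock or RLock.)"""
    def __init__(self):
        self._sem = threading.Semaphore(1)
        self._owner = None

    def lock(self):
        self._sem.acquire()                    # the "value" goes to 0
        self._owner = threading.get_ident()

    def unlock(self):
        if self._owner != threading.get_ident():
            raise RuntimeError("only the owner may unlock a mutex")
        self._owner = None
        self._sem.release()                    # the "value" goes back to 1

m = Mutex()
m.lock()
m.unlock()     # fine: the same thread that locked it unlocks it
```

With a plain binary semaphore, the `unlock` (signal) could legally come from any thread; the ownership check is exactly what distinguishes the mutex.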

What does the program counter (PC) contain?

A pointer to the location in memory of the next program instruction

What is a race condition?

A race condition occurs when multiple processes or threads read and write data items so that the final outcome depends on the order of execution of instructions in the multiple processes.
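The classic lost-update race, with one bad interleaving written out sequentially so the outcome is visible and deterministic:

```python
# Two threads each intend to run: temp = counter; counter = temp + 1.
# This interleaving, written out step by step, shows the lost update:
counter = 0
t1_temp = counter        # thread 1 reads 0
t2_temp = counter        # thread 2 reads 0, before thread 1 writes back
counter = t1_temp + 1    # thread 1 writes 1
counter = t2_temp + 1    # thread 2 also writes 1 - one increment is lost
print(counter)           # 1, although two increments ran
```

Under a different order (thread 1 finishing before thread 2 starts) the result would be 2 — the final outcome depends on the order of execution, which is the definition above.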

Within a process, there may be one or more threads, each with

A thread execution state (Running, Ready, etc.); a saved thread context when not running; an execution stack; some per-thread static storage for local variables; access to the memory and resources of its process, shared with all other threads in that process

How is sharing achieved in a segmentation system?

A. By referencing a segment in the segment tables of more than one process;

Which of the following is true of the relationship between processes and threads? A. It takes far less time to create a new thread in an existing process than to create a new process. B. It takes less time to terminate a process than a thread. C. It takes less time to switch between two different processes than to switch between two threads within the same process. D. All of the above.

A. It takes far less time to create a new thread in an existing process than to create a new process.

Which page replacement policy is impossible to implement because it would require the OS to have perfect knowledge of future events?

A. Optimal policy;

Comparing preemptive and nonpreemptive scheduling policies, which policy generally incurs a greater overhead?

A. Preemptive scheduling;

What is the name of the aspect of disk performance that represents the time it takes to position the head at the desired track?

A. Seek time

What characteristic of programs causes the principle of locality of reference to be valid?

A. The same data and the same instructions are often reused

Which of the following is an important feature of threads? A. The threads of a process can interact through shared memory. B. The threads of a process can interact through shared files. C. The threads of a process share a user stack. D. The threads of a process share a thread control block.

A. The threads of a process can interact through shared memory.

Identify the advantages and disadvantages of pre-emptive scheduling.

Advantages: 1. It ensures fairness to all processes regardless of their priority in most cases. 2. It reduces the monopolisation of the CPU by a large process. 3. It increases the scheduling capacity of the system. 4. Depending on the CPU time a process needs, it also gives a quick response time for processes. Disadvantages: 1. Pre-emption carries extra overhead due to increased context switching and cache- and bus-related costs. 2. Pre-emptive scheduling results in increased dispatcher activity and, subsequently, more time spent dispatching.

What are some advantages and disadvantages of sequential file organisation?

Advantages: 1. Simplicity of organisation. 2. Ease of access to adjacent records. 3. Simple retrieval algorithms that require no additional data structures. 4. Fast retrieval of sequential data based on the primary key. 5. Creation of an automatic backup copy. Disadvantages: 1. The average access time is equal to the time required to access half the file. 2. Even simple queries are time consuming. 3. Insertion of records mid-way is time consuming because it requires shifting all records after the inserted record in order to maintain the physical order of the file. 4. Record deletion results in wasted space and is therefore not a problem-free alternative to shifting records.

Why is the principle of locality crucial to the use of virtual memory?

Algorithms can be designed to exploit the principle of locality to avoid thrashing. In general, the principle of locality allows algorithms to predict which resident pages are least likely to be referenced in the near future and are therefore good candidates for being swapped out.

What is the difference between a page and a segment?

Pages are fixed-size blocks into which a program is divided transparently to the programmer. Segmentation is an alternative way in which the user program can be subdivided: the program and its associated data are divided into a number of segments. Segments need not all be the same length, although there is a maximum segment length.

What is an inode in UNIX?

An index-node or inode is a data structure on a traditional UNIX file system that stores all the information about a regular file, directory, or any other file system object except its data and name. It contains the key information needed by the operating system for that particular file. Several file names may be associated with a single inode, but an active inode is associated with exactly one file and each file is controlled by exactly one inode. The attributes of the file as well as its permissions and other control information are stored in the inode. The exact inode structure varies from one UNIX implementation to another. Some of the information included in a typical inode is the size of the file, the user ID of the file's owner, the group ID of the file, access permissions, timestamps, block pointers, and directory entries.

What is an instruction trace?

An instruction trace for a program is the sequence of instructions that execute for that process.

When to process switch

Any time the OS gains control from the current process

Briefly define FCFS (First Come First Served) scheduling.

As each process becomes ready, it joins the ready queue. When the currently-running process ceases to execute, the process that has been in the ready queue the longest is selected for running. Non-pre-emptive

Steps to create a process

Assign a PID; allocate space; initialise the PCB; set linkages; create/expand other data structures

To which of the following situations does busy waiting refer?

B. A process waiting for permission to enter a critical section is constantly testing a variable to gain entrance.

What potential problem does fixed file blocking experience?

B. Internal fragmentation;

When there is parallel processing or multitasking within a single application, which is the BEST description for its degree of synchronisation granularity?

B. Medium

The concept of virtual memory is based on which basic techniques?

B. Segmentation and paging;

Which of the following best describes the items that need to be either saved and replaced, or updated and moved, as part of the overheads that are included in a process switch?

B. The processor registers and the PCB;

What is the difference between block-oriented devices and stream-oriented devices? Give a few examples of each.

Block-oriented devices store information in blocks that are usually of fixed size, and transfers are made one block at a time. Generally, it is possible to reference data by its block number. Disks and tapes are examples of block-oriented devices. Stream-oriented devices transfer data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, mouse and other pointing devices, and most other devices that are not secondary storage are stream oriented.

In a fixed-partitioning scheme, what are the advantages of using unequal-size partitions?

By using unequal-size fixed partitions: 1. It is possible to provide one or two quite large partitions and still have a large number of partitions. The large partitions can allow the entire loading of large programs. 2. Internal fragmentation is reduced because a small program can be put into a small partition.

What is the principal design issue in the round robin scheduling technique?

C. Determining the length of the time quantum;

Of the following four placement algorithms used in dynamic memory partitioning, which is generally the fastest and best of the four? A. Next-fit B. Worst-fit C. First-fit D. Best-fit

C. First-fit
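A sketch of how first-fit compares to best-fit (the hole sizes and request are made up):

```python
def first_fit(holes, size):
    """Index of the first free block large enough - the fastest simple scan."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest free block that still fits - must scan them all."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]            # free block sizes, in address order
print(first_fit(holes, 212), best_fit(holes, 212))   # 1 3
```

First-fit stops at the 500-unit hole; best-fit scans everything and picks the tightest 300-unit hole, which tends to leave many tiny unusable fragments behind.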

Which operating system supports the "single process and single thread" model of processes?

C. MS-DOS

Which process scheduler executes the least frequently?

C. The long-term scheduler

What is the situation called where the processor spends most of its time swapping process pieces rather than executing instructions?

C. Thrashing

Which of the following is usually associated with an error or other unintended exceptional condition caused by the process currently running? A. Interrupt B. Memory fault C. Trap D. Supervisor call

C. Trap

If the operating system (OS) is viewed as a software layer and the layer above the OS is the "application program" then what is the layer below the OS?

C. hardware

How can the hold-and-wait condition be prevented?

Can be prevented by making a process request all its required resources at one time and blocking until all requests can be granted simultaneously. Inefficient in a few ways: 1) A process may be held up for a long time waiting for all of its resource requests to be filled, when it could have proceeded with only some of the resources 2) Resources allocated to a process may be unused for a long time, denying access to other processes. 3) Process may not know in advance what resources will be required 4) With modular programming or a multithreaded structure, an application would need to be aware of all resources that will be requested at all levels or in all modules to make simultaneous requests

What are the drawbacks of using either only a pre-cleaning policy or only a demand cleaning policy?

Cleaning refers to determining when a modified page should be written out to secondary memory. Two common cleaning policies are demand cleaning and pre-cleaning. There are problems with using either of the two policies exclusively. On the one hand, pre-cleaning involves a page being written out but remaining in main memory until the page replacement algorithm dictates that it be removed. Pre-cleaning allows the writing of pages in batches, but it makes little sense to write out hundreds or thousands of pages only to find that the majority of them have been modified again before they are replaced. The transfer capacity of secondary memory is limited and is wasted by unnecessary cleaning operations. On the other hand, with demand cleaning, the writing of a dirty page is coupled to, and precedes, the reading in of a new page. This technique may minimise page writes, but it means that a process that suffers a page fault may have to wait for two page transfers before it can be unblocked. This may decrease processor utilisation.

What grain size of parallelism is appropriate for a multiprogrammed uniprocessor?

Coarse-grained or very coarse-grained parallelism is appropriate for a multiprogrammed uniprocessor. In this situation the synchronisation between processes is at a very gross level, so it can easily be handled as a set of concurrent processes whose interaction among themselves is limited. The processes get CPU time on the uniprocessor in accordance with whatever scheduling algorithm is in use.

What is the distinction between competing processes and cooperating processes?

Competing processes need access to the same resource at the same time, such as a disk, file, or printer. Cooperating processes either share access to a common object, such as a memory buffer or are able to communicate with each other and cooperate in the performance of some application or activity.

List and briefly define three file allocation methods.

Contiguous allocation: a single contiguous set of blocks is allocated to a file at the time of file creation. Chained allocation: allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. Indexed allocation: the file allocation table contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.

What condition or conditions lead to process termination? A. Normal completion B. Bounds violation C. Parent termination D. All of the above

D. All of the above

Strategies to deal with deadlocks

Deadlock prevention - disallow one of the three conditions for deadlock occurrence, or prevent circular wait condition from happening Deadlock avoidance - do not grant a resource request if this allocation might lead to deadlock Deadlock detection - grant resource requests when possible, but periodically check for the presence of deadlock and take action to recover

What is demand paging?

Demand paging is a page fetch policy in which a page is brought into main memory only when it is referenced, i.e., pages loaded only when they are demanded during program execution. Pages not accessed are never brought into main memory. When a process starts, there is a flurry of page faults. As more pages are brought in, the principle of locality suggests that most future references will be to those pages that have been brought in recently. After a while, the system generally settles down and the number of page faults drops to a low level.
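The fault behaviour of demand paging can be sketched with FIFO replacement (the reference string is a made-up textbook-style example):

```python
from collections import deque

def count_page_faults(refs, frames):
    """Demand paging: a page is loaded only when referenced; FIFO replacement."""
    resident = deque()                 # the resident set, oldest page first
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1                # page fault: bring the page in now
            if len(resident) == frames:
                resident.popleft()     # evict the oldest resident page
            resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, frames=3))   # 9
```

Curiously, the same reference string with frames=4 gives 10 faults — more frames, more faults — which is Belady's anomaly for FIFO replacement.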

Why would you expect improved performance using a double buffer rather than a single buffer for I/O?

Double buffering allows two operations to proceed in parallel rather than in sequence. Specifically, a process can transfer data to (or from) one buffer while the operating system empties (or fills) the other.

Give examples of reusable and consumable resources

Examples of reusable resources are processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores. Examples of consumable resources are interrupts, signals, messages, and information in I/O buffers.

For which kinds of applications is gang scheduling of threads most useful?

Gang scheduling is when a set of related threads are scheduled to run on a set of processors at the same time, on a one-to-one basis. Gang scheduling is most useful for medium-grained to fine-grained parallel applications whose performance severely degrades when any part of the application is not running while other parts are ready to run. It is also beneficial for any parallel application, even one that is not performance sensitive.

What are some reasons to allow two or more processes to all have access to a particular region of memory?

If a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than have its own separate copy. Also, processes that are cooperating on some task may need to share access to the same data structure.

Compare direct and indirect addressing with respect to message passing.

In direct addressing, each communicating process has to name the recipient or the sender of the message explicitly. In indirect addressing, messages are sent to and received from ports or mailboxes (shared data structures consisting of queues that can temporarily hold messages).

Discuss the concept of dynamic scheduling.

In dynamic scheduling, the scheduling decisions are made at run-time, thus permitting the number of threads in the process to be altered dynamically. The scheduling responsibility of the OS is primarily limited to processor allocation, and it proceeds according to the following policy: When a job requests processors, 1. If there are idle processors, use them to satisfy the request. 2. If the job that is making the request is a new arrival, allocate it a single processor by taking one away from any job currently allocated more than one processor. 3. If any portion of the request cannot be satisfied, it remains outstanding until either a processor becomes available for it or the job rescinds the request. Upon release of one or more processors (including job departures), 1. Scan the current queue of unsatisfied requests for processors. 2. Assign a single processor to each job in the list that currently has no processors. 3. Then scan the list again, allocating the rest of the processors on an FCFS basis.

If purely priority-based scheduling is used in a system, what are the problems that the system will face?

In pure priority-based scheduling algorithms, a process with a higher priority is always selected at the expense of a lower-priority process. The problem with a pure priority scheduling scheme is that lower-priority processes may suffer starvation. This will happen if there is always a steady supply of higher-priority ready processes. If the scheduling is non-pre-emptive, a lower priority process will run to completion if it gets the CPU. However, in pre-emptive schemes, a lower priority process may have to wait indefinitely in the ready or suspended queue.

What is the difference between internal and external fragmentation?

Internal fragmentation refers to the wasted space internal to a partition due to the fact that the block of data loaded is smaller than the partition. External fragmentation is a phenomenon associated with dynamic partitioning and refers to the fact that a large number of small areas of main memory external to any partition accumulate.

When will the OS gain control?

Interrupt
Trap (exception)
Supervisor call (open file, etc.)

In paging memory management, how does an increase in the page size affect internal fragmentation?

It increases
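The effect is easy to quantify: only the final page of a process is partially filled, so on average half a page is wasted per process, and larger pages waste more. A quick illustration (the helper name is hypothetical):

```python
def internal_fragmentation(process_size, page_size):
    # Wasted space in the final, partially filled page.
    remainder = process_size % page_size
    return 0 if remainder == 0 else page_size - remainder

# The same 10,000-byte process under two page sizes:
print(internal_fragmentation(10_000, 1_024))   # -> 240 bytes wasted
print(internal_fragmentation(10_000, 4_096))   # -> 2288 bytes wasted
```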

On a uniprocessor, when a decision is made to change the state of a Running process so that it will not be Running, which best describes the code that has made that decision?

It is OS code accessed following an interrupt and a mode switch

What is the purpose of the system stack?

It is for use in controlling the execution of procedure calls and return

Which of the following is a problem with the banker's algorithm?

It is more restrictive than deadlock detection.
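For context, the restrictiveness comes from the banker's safety check: a request is granted only if some completion order still lets every process finish, even when no deadlock would actually occur. A sketch of that safety check (function and variable names are hypothetical, not from any particular textbook):

```python
def is_safe(available, allocation, need):
    # True if some ordering lets every process run to completion.
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progressed = True
    return all(finished)

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe(available, allocation, need))  # -> True
```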

List reasons why a mode switch between threads may be cheaper than a mode switch between processes.

Less state information is involved. Usually doesn't require a big change in Virtual Memory. Usually doesn't need the caches to be flushed.

Thread scheduling approaches in a multiprocessor

Load sharing - a global ready queue from which threads are assigned to any idle processor
Gang scheduling - a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis
Dedicated processor assignment - each program, for the duration of its execution, is allocated processors equal to its number of threads
Dynamic scheduling - the number of threads in a process can be altered during execution

Page replacement scope policies are

Local - the page chosen for replacement is restricted to the resident set of the process in question
Global - all unlocked pages across all processes are considered

Processor scheduling types

Long-term scheduling - the decision to add to the pool of processes to be executed (NEW > READY/SUSPEND or READY). Determines the degree of multiprogramming: more processes means smaller processor time slices each
Medium-term scheduling - the decision to add to the number of processes that are partially or fully in main memory (READY/SUSPEND > READY or BLOCKED/SUSPEND > BLOCKED). Part of the swapping function
Short-term scheduling (dispatcher) - the decision as to which available process will be executed by the processor (READY > RUNNING)
I/O scheduling - the decision as to which process's pending I/O request shall be handled by an available I/O device

Why can't you disallow mutual exclusion in order to prevent deadlocks?

Mutual exclusion restricts the usage of a resource to one user at a time. If mutual exclusion is disallowed, then all non-sharable resources become sharable. While this may not hamper some activities (like a read-only file being accessed by a number of users), it poses serious problems for activities that require non-sharable resources (like writing to a file). Preventing mutual exclusion in these situations gives undesirable results. Also, there are some resources (like printers) that are inherently non-sharable, and it is impossible to disallow mutual exclusion. Thus, in general, mutual exclusion cannot be disallowed for practical purposes.

What are the three conditions that must be present for deadlock to be possible?

Mutual exclusion: only one process may use a resource at a time. Hold and wait: a process may hold allocated resources while awaiting assignment of others. No preemption: no resource can be forcibly removed from a process holding it.

Advantages of user level threads

No kernel involvement for thread switching
Scheduling can be application-specific
Can run on any OS

What are typical access rights that may be granted or denied to a particular user for a particular file?

None, knowledge of, read, write, execute, change protection, delete.

Execution of the kernel within a user process

The OS executes in the context of a user process, and the process image contains a kernel stack. During an interrupt, the mode switch is done within the same process; switching back may require a process switch, involving the process-switching routine.

Design and management concerns of the OS raised by the existence of concurrency

OS must keep track of processes (using PCBs)
OS must allocate and deallocate resources
OS must protect data and physical resources against interference
Functioning and output of a process must be independent of its execution speed

List four design issues for which the concept of concurrency is relevant

OS needs to be able to facilitate the following: 1) Communication among processes 2) Sharing of and competing for resources 3) Synchronisation of the activities of multiple processes 4) Allocation of processor time to processes

page replacement algorithms

Optimal
Least Recently Used (LRU)
FIFO
Clock policy - can be extended to also use the modified (m) bit, giving replacement preference to pages that are unchanged
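The relative behaviour of these policies can be compared by counting page faults over a reference string. A small sketch of FIFO and LRU (illustrative only; the reference string and function names are arbitrary choices, not from the source):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    # Evict the page that has been resident longest.
    mem, q, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(q.popleft())
            mem.add(p)
            q.append(p)
    return faults

def lru_faults(refs, frames):
    # Evict the page unused for the longest time.
    mem = OrderedDict()        # ordered oldest-use first
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)
            mem[p] = True
    return faults

refs = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # -> 9 7
```

With three frames, LRU faults less often than FIFO on this string because it keeps the recently re-referenced pages resident.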

How is a thread different from a process?

Processes don't share memory by default - threads within a process do
Process creation, execution, and switching are time-consuming - the thread equivalents are efficient
Processes are loosely coupled and don't share resources - threads are tightly coupled and share resources
Interprocess communication is difficult and requires system calls - inter-thread communication is easier and more efficient

List and briefly explain five storage management responsibilities of a typical OS.

Process isolation: the OS must prevent independent processes from interfering with each other's memory, both data and instructions.
Automatic allocation and management: programs should be dynamically allocated across the memory hierarchy as required. Allocation should be transparent to the programmer, who is relieved of concerns relating to memory limitations, and the OS can achieve efficiency by assigning memory to jobs only as needed.
Support of modular programming: programmers should be able to define program modules, and to create, destroy, and alter the size of modules dynamically.
Protection and access control: sharing of memory, at any level of the memory hierarchy, creates the potential for one program to address the memory space of another. This is desirable when sharing is needed by particular applications; at other times, it threatens the integrity of programs and even of the OS itself. The OS must allow portions of memory to be accessible in various ways by various users.
Long-term storage: many application programs require means for storing information for extended periods of time, after the computer has been powered down.

What does it mean to pre-empt a process?

Process pre-emption occurs when an executing process is interrupted so that another process (often one of higher priority) can be executed.

How is processor scheduling done in the batch portion of an OS?

Processor scheduling in a batch system, or in the batch portion of an OS, is done by a long-term scheduler. Newly submitted jobs are routed to disk and held in a batch queue, from which the long-term scheduler creates processes and places them in the ready queue so that they can be executed. In order to do this, the scheduler has to make two decisions:
a. When can the OS take on one or more additional processes? This is generally determined by the desired degree of multiprogramming. The greater the number of processes created, the smaller the percentage of CPU time for each. The scheduler may decide to add one or more new jobs either when a job terminates or when the fraction of time for which the processor is idle exceeds a certain threshold.
b. Which job or jobs should it accept and turn into processes? This decision can be based on a simple first-come-first-served (FCFS) basis, but the scheduler may also include other criteria such as priority, expected execution time, and I/O requirements.

List and briefly define three techniques for performing I/O.

Programmed I/O: the processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to complete before proceeding.
Interrupt-driven I/O: the processor issues an I/O command on behalf of a process, continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work. The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed.
Direct memory access (DMA): a DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.

What criteria are important in choosing a file organization?

Rapid access, ease of update, economy of storage, simple maintenance, reliability.

What requirements is memory management intended to satisfy?

Relocation, protection, sharing, logical organisation, physical organisation

What are the two separate and potentially independent characteristics embodied in the concept of process?

Resource ownership and scheduling/execution.

Briefly define shortest-remaining-time scheduling.

SRT is a pre-emptive version of shortest-process-next (SPN). In this case, the scheduler always chooses the process that has the shortest expected remaining processing time. When a new process joins the ready queue, it may in fact have a shorter remaining time than the currently running process. Accordingly, the scheduler may pre-empt whenever a new process becomes ready.

Briefly define feedback scheduling.

Scheduling is pre-emptive (on a time quantum basis), and a dynamic priority mechanism is used. When a process first enters the system, it is placed in RQ0 (see Figure 9.4 of the textbook or slides). After its first execution, when it returns to the Ready state, it is placed in RQ1. Each subsequent time that it is pre-empted, it is demoted to the next lower priority queue. A shorter process will complete quickly, without migrating very far down the hierarchy of ready queues. A longer process will gradually drift downward. Thus, newer, shorter processes are favoured over older, longer processes. Within each queue, except the lowest-priority queue, a simple FCFS mechanism is used. Once in the lowest-priority queue, a process cannot go lower, but is returned to this queue repeatedly until it completes execution.

List some of the methods that may be adopted to recover from deadlocks.

Some methods to recover from deadlocks are:
a. Abort all deadlocked processes. Though this is a common solution adopted in operating systems, the overhead is very high.
b. Back up each deadlocked process to some previously defined checkpoint and restart all processes. This requires that rollback and restart mechanisms be built into the system. The risk in this approach is that the original deadlock may recur.
c. Detect the deadlocked processes in a circular-wait condition and successively abort them until the circular wait is eliminated and the deadlock no longer exists. The order in which processes are selected for abortion should be based on some criterion of minimum cost.
d. Successively pre-empt resources until the deadlock no longer exists. A process that has a resource pre-empted from it must be rolled back to a point prior to its acquisition of that resource.

What are the advantages of organising programs and data into modules?

Some of the advantages of organising programs and data into modules are: 1. Modules can be written and compiled independently. All references from one module to another can be resolved by the system at run time. 2. Each module can be given different degrees of protection (like read only, read-write, execute only, read-write-execute, etc.). The overhead associated with this is quite nominal. 3. A module can be shared among different processes by incorporating appropriate mechanisms.

What are the desirable properties of a file system?

Some of the desirable properties of a file system are: 1. Long-term existence: Files are stored on disk or other secondary storage and do not disappear when a user logs off. 2. Sharable between processes: Files have names and can have associated access permissions that permit controlled sharing. 3. Structure: Depending on the file system, a file can have an internal structure that is convenient for particular applications. In addition, files can be organized into hierarchical or more complex structure to reflect the relationships among files.

What is starvation with respect to concurrency control by mutual exclusion?

Starvation refers to a situation where a runnable process is indefinitely overlooked by the scheduler. In the context of concurrency control using mutual exclusion, this occurs when many processes are contending to enter the critical section and one process is indefinitely denied access: although it is ready to execute in its critical section, it is never chosen and as a result never runs to completion.

What is a Linux Elevator? Point out some problems associated with it.

The Linux Elevator is the default disk scheduler in Linux 2.4. It maintains a single queue for disk read and write requests, and it performs both sorting and merging functions on the queue. To put it simply, it keeps the list of requests sorted by block number. Thus, the drive moves in a single direction as the disk requests are handled, satisfying each request as it is encountered. The elevator scheme has two problems: (1) A distant block request can be delayed for a substantial time because the queue is dynamically updated. (2) A stream of write requests (e.g., to place a large file on the disk) can block a read request for a considerable time and thus block a process. Typically, a write request is issued asynchronously. That is, once a process issues the write request, it need not wait for the request to be satisfied. When an application issues a write, the kernel copies the data into an appropriate buffer, to be written out as time permits. Once the data are captured in the kernel's buffer, the application can proceed. Hence, for a read operation, the process must wait until the requested data are delivered to the application before proceeding.

What is the purpose of a translation lookaside buffer?

The TLB is a cache that contains the most recently used page table entries. Its purpose is to avoid the extra main-memory access otherwise needed to read a page table entry on every memory reference.

How can the circular-wait condition be prevented?

The circular-wait condition can be prevented by defining a linear ordering of resource types. If a process has been allocated resources of type R, then it may subsequently request only those resources of types following R in the ordering.
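In application code, the same idea appears as lock ordering: give every lock a fixed rank and always acquire in ascending rank order, so no cycle can form in the wait-for graph. A sketch in Python (the helper names are hypothetical):

```python
import threading

# Each lock gets a fixed rank; acquisition always proceeds in ascending rank.
lock_rank = {}

def make_lock(rank):
    lk = threading.Lock()
    lock_rank[id(lk)] = rank
    return lk

def acquire_in_order(*locks):
    # Sorting by rank enforces the linear ordering of "resource types".
    ordered = sorted(locks, key=lambda l: lock_rank[id(l)])
    for lk in ordered:
        lk.acquire()
    return ordered

r1, r2 = make_lock(1), make_lock(2)
# Even if a code path asks for (r2, r1), it still locks r1 first,
# so two threads can never hold each other's next lock.
ordered = acquire_in_order(r2, r1)
print([lock_rank[id(l)] for l in ordered])  # -> [1, 2]
for lk in reversed(ordered):
    lk.release()
```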

What are the differences between a blocking I/O and a non-blocking I/O?

The differences between the two can be explained as follows: If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process. On the other hand, if the I/O instruction from the process is non-blocking, then the processor continues to execute instructions from the process that issued the I/O command. Operating systems generally use blocking system calls for application interface. An example where non-blocking I/O is used is a user interface that receives keyboard and mouse input while processing and displaying data on screen.

How is the execution context of a process used by the OS?

The execution context, or process state, is the internal data by which the OS can supervise and control the process. This internal information is separated from the process, because the OS has information not permitted to the process. The context includes all the information that the OS needs to manage the process and that the processor needs to execute the process properly. The context includes the contents of the various processor registers, such as the program counter and data registers. It also includes information of use to the OS, such as the priority of the process and whether the process is waiting for the completion of an I/O event.

What is the kernel of an OS?

The kernel is a portion of the operating system that includes the most heavily used portions of software. Generally, the kernel is maintained permanently in main memory. The kernel runs in a privileged mode and responds to calls from processes and interrupts from devices.

What is a pathname? State the two alternate ways to assign pathnames.

The pathname of a file is a symbolic name together with its location, by which users can identify one file from another. An absolute pathname comprises the total path of a file starting from the root directory; absolute pathnames are always unique. For example, the path C:/dirA/dir1/myfile denotes that the drive C: contains a directory dirA, which contains a subdirectory dir1, in which the file myfile is stored. Alternatively, a user can designate the current directory as the working directory, and all pathnames not beginning at the root directory are taken relative to the working directory; these are called relative pathnames. For example, if the working directory is dir1 (as above), the relative pathname of the file is simply myfile.

With respect to mutual exclusion using interrupt disabling - Mention the requirements for this exclusion and state which of them are met when interrupts are disabled

The requirements that should be met in order to provide support for mutual exclusion are:
1) Only one process may execute in its critical section at a time among many contending processes, i.e., mutual exclusion is enforced.
2) A process executing in its non-critical section must not interfere with any other process.
3) No deadlocks or livelocks may exist.
4) A process must be allowed to enter its critical section within a finite amount of time (the bounded-waiting condition).
5) A process may not execute in its critical section for an indefinite amount of time.
6) When no process is in its critical section, any process that wishes to enter should be allowed to do so immediately.
7) No assumptions may be made about the relative speeds or the number of processors.
When interrupt disabling is used, mutual exclusion is guaranteed, since a critical section cannot be interrupted, and the question of deadlocks does not arise. However, the bounded-waiting condition is not met, so starvation may occur. Further, the time a process stays in the critical section cannot be bounded.

What is the difference between a resident set and a working set?

The resident set of a process is the current number of pages of that process in main memory. The working set of a process is the number of pages of that process that have been referenced over a specified time window.
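The working set over a window of size W is simply the set of distinct pages referenced in the last W references. A sketch (the function name is hypothetical):

```python
def working_set(refs, t, window):
    # Distinct pages referenced in the last `window` references up to time t.
    return set(refs[max(0, t - window + 1): t + 1])

refs = [1, 2, 1, 3, 2, 4, 2, 1]
# At time 5 with window 3, references at times 3..5 are 3, 2, 4:
print(working_set(refs, 5, 3))  # -> {2, 3, 4}
```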

What scheduling criteria affect the performance of a system?

The scheduling criteria that affect the performance of the system are:
1. Turnaround time: the total time elapsed between the submission of a process and its completion. It is the sum of the time spent waiting to get into memory, time spent waiting in the ready queue, CPU time, and time spent on I/O operations.
2. Response time: for an interactive process, the time from the submission of a request to when the response begins to be received. The scheduling discipline should attempt to achieve low response time and to maximise the number of interactive users receiving an acceptable response time.
3. Waiting time: the total time spent by a job waiting in the ready queue or the suspended queue in a multiprogramming environment.
4. Deadlines: when process completion deadlines can be specified, the scheduling discipline should subordinate other goals to maximising the percentage of deadlines met.
5. Throughput: the average amount of work completed per unit time. The scheduling policy should attempt to maximise the number of processes completed per unit of time. This clearly depends on the average length of a process, but it is also influenced by the scheduling policy, which may affect utilisation.
6. Processor utilisation: the average fraction of time the processor is busy executing user programs or system modules. Generally, the higher the CPU utilisation, the better. This is a significant criterion for expensive shared systems; in single-user systems and some others, such as real-time systems, it is less important than the criteria above.
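Turnaround and waiting time can be computed directly by walking the schedule: turnaround = completion − arrival, waiting = turnaround − service. A sketch for FCFS (the function name is hypothetical; jobs are assumed listed in arrival order):

```python
def fcfs_metrics(arrivals, bursts):
    # Turnaround = completion - arrival; waiting = turnaround - burst.
    time, turnaround, waiting = 0, [], []
    for a, b in zip(arrivals, bursts):
        time = max(time, a) + b          # CPU idles until the job arrives
        turnaround.append(time - a)
        waiting.append(time - a - b)
    return turnaround, waiting

t, w = fcfs_metrics([0, 1, 2], [5, 3, 2])
print(t, w)  # -> [5, 7, 8] [0, 4, 6]
```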

How does the use of virtual memory improve system utilisation?

The use of virtual memory improves system utilisation in the following ways:
a. More processes may be maintained in main memory: virtual memory allows loading only portions of a process into main memory, so more processes can enter the system, resulting in higher CPU utilisation.
b. A process may be larger than all of main memory: virtual memory theoretically allows a process to be as large as the available disk storage; however, system constraints (such as the address space) will limit the size.

Explain thrashing.

Thrashing is a phenomenon in virtual memory schemes, in which the processor spends most of its time swapping pieces rather than executing instructions.

What is relocation of a program?

To relocate a program is to load and execute a given program to an arbitrary place in the memory; therefore, once a program is swapped out to the disk, it may be swapped back anywhere in the main memory. To allow this, each program is associated with a logical address. The logical address is generated by the CPU and is converted, with the aid of the memory manager, to the physical address in the main memory. A CPU register contains the values that are added to the logical address to generate the physical address.
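The translation described above amounts to one addition and one bounds check per access. A sketch (the function name is hypothetical):

```python
def translate(logical_addr, base, limit):
    # Hardware adds the base register to every logical address;
    # the limit register bounds-checks the access first.
    if logical_addr >= limit:
        raise MemoryError("address outside process partition")
    return base + logical_addr

# A process loaded at physical address 40,000 with a 10,000-byte partition:
print(translate(1_234, base=40_000, limit=10_000))  # -> 41234
```

If the process is later swapped back in at a different physical address, only the base register changes; the program's logical addresses are untouched.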

Linux namespaces enable a process (or several processes sharing the same namespace) to have a different view of the system than processes associated with other namespaces

True

There are six Linux namespaces

True - mnt, pid, net, ipc, uts, and user

New processes are created in Linux by cloning the attributes of the current process?

True - process creation in Linux is performed by clone(); fork() is effectively clone() with a particular flag setting

Process or task in Linux is represented by a 'task_struct'

True: Contains info on state, scheduling, PID, IPC, File system details for open files, virtual address space etc.

Windows threads can be in one of six states

True:
Ready - ready for execution and can be scheduled
Standby - selected to run on the next available processor
Running - the kernel dispatcher performs a thread switch and it executes; goes back to Ready if pre-empted or its time slice is exhausted
Waiting - blocked, waiting for synchronisation purposes, or directed by the event subsystem
Transition - ready to run but waiting for an unavailable resource
Terminated - by itself, another thread, or its parent

With respect to mutual exclusion using interrupt disabling - Identify the problems associated with this mechanism

Two major problems are associated with the interrupt-disabling mechanism: 1) Since all interrupts are disabled before entry into the critical section, the ability of the processor to interleave processes is greatly restricted. 2) This fails to work in a multiprocessor environment since the scope of an interrupt capability is within one processor only. Thus, mutual exclusion cannot be guaranteed.

Briefly define highest-response-ratio-next scheduling.

When the current process completes or is blocked, choose the ready process with the greatest value of R, where R = (w + s)/s, with w = time spent waiting for the processor and s = expected service time. This favours short processes, but long processes see R increase (through w) over time, so starvation is avoided. Non-pre-emptive.
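The selection rule is a one-line maximisation over the ready queue. A sketch (the function name is hypothetical):

```python
def hrrn_pick(waiting_times, service_times):
    # Response ratio R = (w + s) / s; pick the ready process with max R.
    ratios = [(w + s) / s for w, s in zip(waiting_times, service_times)]
    return max(range(len(ratios)), key=lambda i: ratios[i])

# Three ready processes: w = time waited so far, s = expected service time.
w = [10, 2, 6]
s = [5, 1, 2]
# R = [3.0, 3.0, 4.0], so the process at index 2 is chosen.
print(hrrn_pick(w, s))  # -> 2
```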

What is the difference between demand cleaning and pre-cleaning?

With demand cleaning, a page is written out to secondary memory only when it has been selected for replacement. A pre-cleaning policy writes modified pages before their page frames are needed so that pages can be written out in batches.

binary semaphore

a semaphore whose value can only be 0 or 1

State some utilities of buffering.

a. Buffering smooths peaks in I/O demand.
b. It alleviates coordination problems arising from a disparity in the speeds of the producer and the consumer of a data stream. For example, a file received by a modem is to be stored on a hard disk that is a thousand times faster than the modem. A buffer created in main memory receives the bytes from the modem, and when an entire buffer has been filled, it is written onto the disk as a block.
c. Buffering adapts devices that have different data-transfer sizes to transfer data among themselves.

Which considerations determine the size of a page?

a. Page size versus page table size
b. Page size versus TLB usage
c. Internal fragmentation of pages
d. Page size versus disk access

semaphore

an integer used for signalling among processes. Operations are atomic:
- initialise
- decrement - may result in blocking
- increment - may result in unblocking

semwait()

decrements the semaphore value (if the resulting value is negative, the calling process blocks; the magnitude of a negative value is the number of blocked processes)

Mode switching occurs

during interrupt processing. No process switch occurs and the current process does not change state; the processor state is saved in the PCB. Typically, saving/restoring is done in hardware

circular wait can be prevented by defining a linear ordering of resource types

e.g. if a process has been allocated R1, its subsequent requests must be for resources later in the ordering (R2, R3, etc.); it cannot go back and request R0. This can be inefficient, however.

simple segmentation

each process is divided into a number of segments. A process is loaded by loading all of its segments into dynamic partitions, which need not be contiguous. No internal fragmentation, improved memory utilisation, and reduced overhead compared to dynamic partitioning. External fragmentation occurs.

Two objectives paramount in designing the I/O facility are

efficiency - to ensure I/O operations do not bottleneck the system
generality - it is desirable to handle all devices in a uniform manner

Kernel level thread disadvantages

Transferring control between threads within the same process is expensive, since it requires a mode switch to the kernel
Creation and destruction of threads in the kernel is costly

Kernel level thread advantages

If a thread gets blocked, the OS can run another one
The kernel can simultaneously schedule multiple threads from the same process on multiple processors
Kernel routines can themselves be multithreaded

semSignal()

increments the semaphore value (if the resulting value is not positive, a process blocked by semWait is unblocked; a positive value is the number of processes that can issue a wait without blocking)
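Taken together, the semaphore, semWait(), and semSignal() cards describe the textbook counting semaphore. A minimal sketch built on a condition variable (illustrative only; Python's own threading.Semaphore is the production equivalent, and the class and method names here are hypothetical):

```python
import threading

class Semaphore:
    """Counting semaphore: a negative count records how many callers are blocked."""
    def __init__(self, value=1):
        self._count = value
        self._cond = threading.Condition()

    def sem_wait(self):
        with self._cond:
            self._count -= 1
            if self._count < 0:
                self._cond.wait()      # block until a sem_signal wakes us

    def sem_signal(self):
        with self._cond:
            self._count += 1
            if self._count <= 0:
                self._cond.notify()    # unblock one waiting process

s = Semaphore(2)
s.sem_wait(); s.sem_wait()   # two "processes" enter; a third wait would block
print(s._count)              # -> 0
s.sem_signal()
print(s._count)              # -> 1
```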

