OS Midterm

Mutex Lock Process/Logic

- Boolean variable "available" indicates whether the lock is available or not - If the lock is available, a call to acquire() succeeds and the lock is then considered unavailable - A process that attempts to acquire an unavailable lock is blocked until the lock is released
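
A minimal sketch of this acquire()/release() logic, assuming the textbook-style busy-waiting implementation (the type and function names here are illustrative, not taken from the book's figures):

    /* Hypothetical busy-waiting mutex lock sketch. */
    typedef struct {
        int available;              /* 1 = lock is free, 0 = lock is held */
    } mutex_t;

    void acquire(mutex_t *m) {
        while (!m->available)
            ;                       /* busy wait until the lock is released */
        m->available = 0;           /* lock is now considered unavailable */
    }

    void release(mutex_t *m) {
        m->available = 1;           /* lock becomes available again */
    }

Note: in a real implementation the test-and-set inside acquire() must itself be atomic (e.g., via a hardware instruction); this sketch only shows the logic described above.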

Caveat of new definition of semaphore

- Haven't completely eliminated busy waiting - Moved it from the entry section to the critical sections of application programs - Limited busy waiting to the critical sections of the wait() and signal() operations, which are short → the critical section is almost never occupied → busy waiting occurs rarely, and then only for a short time

Proof of properties 2 & 3:

- Long...brain isn't processing what's going on (pg 209)...will come back to this later

Proof of mutual exclusion property

- Pi enters its critical section only if either flag[j] == false or turn == i - If both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true; but turn can equal only 0 or 1, so only one process could have passed its while test, a contradiction (pg 208-209)

Peterson's Solution Characteristics

- Restricted to two processes that alternate execution between their critical sections and remainder sections

Changes to semaphore definition to avoid busy waiting

- Add a struct process *list so that when a process must wait on a semaphore it is added to the list of waiting processes - The signal() operation removes one process from the list of waiting processes and awakens it - Since the order of the decrement and the test in wait()'s implementation is switched, the semaphore value can be negative (its magnitude is the number of waiting processes)
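
A rough sketch of this modified semaphore, following the textbook's pseudocode. The wait()/signal() names mirror the book; the process struct, the queue helpers, block(), and wakeup() are assumed to be provided by the kernel and are only declared here:

    struct process;                                  /* PCB type, assumed */
    void block(void);                                /* suspend the calling process (assumed) */
    void wakeup(struct process *p);                  /* resume process p (assumed) */

    typedef struct {
        int value;                                   /* may go negative: |value| = number of waiters */
        struct process *list;                        /* queue of processes waiting on this semaphore */
    } semaphore;

    void add_waiter(semaphore *S);                   /* enqueue the caller on S->list (assumed) */
    struct process *remove_waiter(semaphore *S);     /* dequeue one waiter from S->list (assumed) */

    void wait(semaphore *S) {
        S->value--;                                  /* decrement first, then test */
        if (S->value < 0) {
            add_waiter(S);                           /* caller joins the waiting list */
            block();                                 /* and is suspended */
        }
    }

    void signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            struct process *P = remove_waiter(S);    /* pick one waiting process */
            wakeup(P);                               /* and move it to the ready queue */
        }
    }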

Hardware-based solutions cons

- complicated - generally inaccessible to application programmers

Peterson's Two required shares data items

- int turn: indicates whose turn it is to enter its critical section (e.g., if turn == i, Pi is allowed to execute in its critical section) - boolean flag[2]: indicates whether a process is ready to enter its critical section (e.g., flag[i] == true means Pi is ready to execute its critical section) - To enter the critical section, process Pi sets flag[i] to true and turn to j (the other process)
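
A sketch of Peterson's entry and exit code for process Pi (j is the other process), built directly on the two shared data items above; the function names are illustrative:

    int turn;                 /* whose turn it is to enter the critical section */
    int flag[2];              /* flag[i] == 1 means Pi is ready to enter */

    void peterson_enter(int i) {
        int j = 1 - i;
        flag[i] = 1;          /* Pi announces it is ready */
        turn = j;             /* and politely gives the turn to the other process */
        while (flag[j] && turn == j)
            ;                 /* busy wait while Pj is ready AND it is Pj's turn */
    }

    void peterson_exit(int i) {
        flag[i] = 0;          /* Pi leaves its critical section */
    }

On modern hardware this also needs memory barriers or atomics to work reliably, since compilers and CPUs may reorder these loads and stores.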

Prevent interrupts from occurring while shared variable was being modified in a single processor

- Possible solution to the critical-section problem - The current sequence of instructions would be allowed to execute in order without preemption - No other instructions would run → no unexpected modifications to shared variables - Approach taken by nonpreemptive kernels - Not feasible in a multiprocessor environment

Mutex Lock

- Protects critical regions to prevent race conditions - A process must acquire the lock before entering a critical section and release the lock when exiting the critical section

Con of implementation of a semaphore with a waiting queue

- situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes (aka a deadlock)

Semaphore requirements

- when one process modifies the semaphore value, no other process can simultaneously modify the same value - in wait(S), the testing of the integer value of S and its possible modifications must be executed without interruption

5.1 Background Header

---------------------------------

5.2 The Critical-Section Problem Header

---------------------------------

5.3 Peterson's Solution Header

---------------------------------

5.4 Synchronization Hardware Header

---------------------------------

5.5 Mutex Locks Header

---------------------------------

5.6 Semaphores Header

---------------------------------

5.7 Classic Problems of Synchronization Header

---------------------------------

CHAPTER 1 HEADER (Introduction)

---------------------------------

CHAPTER 2 HEADER (OS Structures)

---------------------------------

CHAPTER 3 HEADER (Processes)

---------------------------------

CHAPTER 4 HEADER (Threads)

---------------------------------

CHAPTER 5 HEADER (Process Synchronization)

---------------------------------

CHAPTER 6 HEADER (CPU Scheduling)

---------------------------------

Solution to the Critical-Section Problem must satisfy what three requirements?

1. Mutual Exclusion 2. Progress 3. Bounded Waiting

What are the three main purposes of an operating system?

1. To provide an environment for a user to execute programs on computer hardware in a convenient and efficient manner. 2. To fairly and efficiently allocate computer resources needed to solve a problem 3. Two Functions - Supervise execution of user programs to prevent errors or improper use AND to manage operation and control of I/O devices

Including the initial parent process, how many processes are created by the program shown in Figure 3.32? - http://imgur.com/MusplPP

16 Processes (2^n) where n is the number of forks in a program

What will be printed at Line A of the program shown in Figure 3.30? - http://imgur.com/9zLLHwE

5 The value of pid in the child process will be 0, which means that only in the child process will the value be incremented by 15 (it is the child's copy of the data). When the child process ends, and control is given back to the parent process, it will enter the else statement and print the unchanged value of 5.
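
A minimal sketch of the kind of program this explanation describes (reconstructed from the answer above, not copied from Figure 3.30; the exact variable names are assumptions):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 5;

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                               /* child */
            value += 15;                              /* only the child's copy becomes 20 */
        } else if (pid > 0) {                         /* parent */
            wait(NULL);                               /* wait for the child to finish */
            printf("PARENT: value = %d\n", value);    /* Line A: prints 5 */
        }
        return 0;
    }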

Including the initial parent process, how many processes are created by the program shown in Figure 3.31? - http://imgur.com/ozDmiJx

8 Processes (2^n) where n is the number of forks in a program

Using the program in Figure 3.34, identify the values of pid at lines A , B , C , and D . (Assume that the actual pids of the parent and child are 2600 and 2603, respectively.) - http://imgur.com/IBcZAeE

A = 0 B = 2603 C = 2603 D = 2600 fork() returns 0 in the child and the child's pid (2603) in the parent; getpid() returns the caller's own pid.

Progress

A process can be considered for entry into its critical section only if it is actually waiting to enter; a process still executing its remainder section is not considered. The selection of the next process to enter cannot be postponed indefinitely.

Processor Affinity

A process should stay on the processor it is currently executing on, to avoid the cost of invalidating and repopulating the processor's cache

Zombie Process

A process which has terminated but has not yet been reaped by its parent. Not alive, but still in the process table.

Hard Affinity

The process specifies a set of processors on which it may run and will never migrate off of them

Process

A program in execution. Needs resources to accomplish its task.

Indefinite Blocking/Starvation

A situation in which processes wait indefinitely within the semaphore. - May occur if we remove processes in the semaphore's list in LIFO order

Nice Value

A task is being nice if it lowers its priority so other tasks get more time

How is an interrupt different from a trap?

A trap is a software-generated interrupt, while an interrupt is triggered by a hardware signal to the processor.

Message Passing

A way for processes to communicate and synchronize their actions without sharing the same address space. Send(message) Receive(message)

Multiprocessor Systems

AKA Parallel Systems, Tightly-Coupled Systems Advantages - Increased throughput, economy of scale, increased reliability

Graceful Degradation

Ability of a system to maintain limited functionality even when a large portion of it has been destroyed or rendered inoperative (when a core fails)

Concurrency

Ability to allow more than one task to make progress

Parallelism

Ability to perform more than one task simultaneously

Virtual Run Time

Adjusted record of how long a process has run

Advantages and Disadvantages to Layered Approach of Operating Systems

Advantage - Simple to construct and debug Disadvantage - Hard to define the layers, the more layers a system has, the less efficient it is

Advantages and Disadvantages of Private Run Queue Between Processors

Advantages - Easy to manage processor affinity Fewer Race conditions Disadvantages - Requires something to manage load balancing

Advantages and Disadvantages of Shared Run Queue Between Processors

Advantages - Load balancing handled automatically Disadvantages - Requires something to manage processor affinity Possible race conditions

Discuss the performance of the following scenario of a program on a multiprocessor system. The number of kernel threads allocated to the program is equal to the number of processing cores

All kernel threads can be running at the same time, one per core. If one of them blocks, its processing core becomes idle.

Priority-Inheritance Protocol

All processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question.

When a thread calls exit(), does it terminate all threads or just the calling thread?

All threads in a process

Preemptive kernel

Allows a process to be preempted while it is running in kernel mode - must be carefully designed to ensure that shared kernel data are free from race conditions - may be more responsive, since there is less risk of a kernel-mode process running for an arbitrarily long period - more suitable for real-time programming, since it allows a real-time process to preempt a process currently running in the kernel

Cycle

Alternating between CPU Execution and I/O Wait

Waiting Time

Amount of time a process has been waiting in the ready queue

Response Time

Amount of time it takes from when a request was submitted until the first response is produced Time of Submission --> Time of First Response

Turnaround Time

Amount of time to execute a particular process. FORMULA = FinishTime - ArrivalTime Time of Submission ---> Time of Completion Sum of the periods spent waiting (to get into memory and in the ready queue), Executing, and doing I/O

Pull Migration

An idle processor pulls a waiting task from a busy processor

Interrupt

An interrupt is a hardware-generated change of control flow within the system

Non-blocking is considered _______

Asynchronous Non-blocking Send - Sender sends message and continues Non-blocking Receive - Receiver receives a message or null

It is critical that semaphore operations be executed _______.

Atomically Need to guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time; otherwise we are back to the critical-section problem

5 Criteria to Compare Scheduling Algorithms

CPU Utilization Throughput Turnaround Time Waiting Time Response Time WTR TC Water Taco

If a cache can be made as large as a device that it is caching, why not make it that large and eliminate the device?

Cache memory is more expensive than the slower memory it caches. Cache memory is also volatile, meaning its contents are lost when power is removed. For these reasons it cannot simply replace larger, nonvolatile devices.

Give 2 reasons why caches are useful

Caches are useful when two or more components need to exchange data, and the components perform transfers at different speeds. Caches are used to pre-fetch the next instruction in order to save time and CPU Cycles.

What problems do caches solve?

Caches solve the problem of transferring data between components of different speeds by creating a buffer of intermediate speed. If the fast component finds the data it needs, it doesn't have to wait for the slower component.

Cooperating Process

Can affect or be affected by other processes

Chapter 6 Problems

http://pastebin.com/ewzUZ3C9

Semaphore

Contains an integer variable that's accessed through only two standard atomic operations: wait() and signal()

Thread Control Block (TCB)

Contains information about each thread's CPU registers, program counter, and execution stack

Process Control Block (PCB)

Contains information associated with each process: state, PC, CPU registers, scheduling info, accounting info, I/O status

Crash Dump

Contains kernel memory at the time of OS failure

Core Dump

Contains the memory captured of a process when it fails

Nonpreemptive

Cooperative Scheduling, CPU holds onto a process until it terminates or waits

Critical-Section Problem

Design a protocol that the processes can use to cooperate so that when one process is executing its critical section, no other process is allowed to execute its own critical section.

A lot of Symmetric Multiprocessing (SMP) systems have different levels of caches; one level is local to each processing core, and another level is shared among all processing cores. Why are caching systems designed this way?

Different levels of cache are based upon size and speed. The closer a cache is physically to the thing it's caching, the faster they can communicate. Faster, smaller caches are placed local to each core, and a slower, larger cache is shared among the processors.

Mode Bit

Distinguishes when the system is running user code or kernel code 0 = Kernel 1 = User

Preemptive SJF

Does jobs in the order defined by their next CPU burst time; can interrupt a running process if the burst time of the new process is less than the remaining time of the running process.

Shortest Job First Scheduling (SJF)

Does jobs in the order defined by their next CPU Burst time. Optimal - Gives minimum average waiting time for a given set of processes

Nonpreemptive kernel

Doesn't allow a process running in kernel mode to be preempted - the process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU - free from race conditions on kernel data, since only one process is active in the kernel at a time

What would the bootstrap program that allows the choice of Operating Systems need to do?

During boot-up, the bootstrap program will determine which OS to boot into, based on a choice by the user after producing options. Instead of booting straight into an OS, the computer runs the bootstrap program on startup.

One-to-One Thread Relationship

Each user thread maps to one kernel thread - Allows more concurrency - More overhead of creating the kernel threads

General Structure of Typical Process

Entry Section --> Critical Section --> Exit Section --> Remainder Section
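
The same structure as a code skeleton (each section shown only as a placeholder comment):

    void typical_process(void) {
        while (1) {
            /* entry section: request permission to enter the critical section */

            /* critical section: manipulate shared variables, tables, files, ... */

            /* exit section: announce that the critical section is free */

            /* remainder section: all other code */
        }
    }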

Deadlocked Set

Every process in the set is waiting for an event that can be caused only by another process in the set. In the case of semaphores, the events are resource acquisition and release

First Come First Served Scheduling (FCFS)

FIFO Queue, Does jobs in the order that they arrive. Long average waiting time. Nonpreemptive

3 Types of User Interfaces to a System

GUI (Graphical) Command Line Batch (Files written and executed)

Dispatcher

Gives control of the CPU to the next process according to the Short Term Scheduler

CPU Utilization

Goal - Keep the CPU as busy as possible (40% to 90%)

Why is it important for the scheduler to distinguish I/O -bound programs from CPU -bound programs?

I/O bound programs must be run very often, so that they can poll devices. I/O programs will not use up an entire quanta, and can therefore be scheduled differently than a CPU bound program that requires many quanta.

In Chapter 3, we discussed Google's Chrome browser and its practice of opening each new tab in a separate process. Would the same benefits have been achieved if instead Chrome has been designed to open each new website in a separate thread? Explain.

If Chrome used a new thread for each new tab, it would lose the benefit it gets from using processes: with threads, if one of the tabs crashed, the entire process (googlechrome.exe) would crash.

Discuss the performance of the following scenario of a program on a multiprocessor system. The number of kernel threads allocated to the program is greater than the number of processing cores but less than the number of user-level threads

If a kernel thread is blocked (and the processor is now idle) then it can be swapped out with another thread that is in the ready state.

Cascading Termination

If a parent has terminated, all of its children must also be terminated

Mutual exclusion

If a process is executing in its critical section, no other process can be executing its critical section

How could a system be designed to allow a choice of operating systems from which to boot?

If a system was designed to dualboot operating systems, both could be stored on their own disk or partition.

Aging

Increasing the priority of older processes

Priority Inversion

Indirectly, a process with a lower priority (M) has affected how long the high-priority process H must wait for the low-priority process L to relinquish resource R - only occurs in systems with more than two priorities

Cooperating processes need __________

Interprocess Communication (IPC)

What are the advantages of using loadable kernel modules?

It is difficult to know which features an OS will need while it is being designed. The advantage of a loadable kernel module is that functionality can be added to and removed from the kernel while it is running, without the need to recompile or reboot the kernel.

How does timesharing provide the illusion of parallelism?

It rapidly switches between processes in a system, allowing each process to make progress. This looks like things are running at the same time (parallelism), but actually they are only running concurrently.

When using the exec() command, does it replace the entire process (and all threads) or does it replace only the calling thread?

It replaces the entire process, including all threads.

Suppose that a scheduling algorithm (at the level of short-term CPU scheduling) favors those processes that have used the least processor time in the recent past. Why will this algorithm favor I/O -bound programs and yet not permanently starve CPU -bound programs?

It will favor the I/O-bound programs because of their short CPU bursts. It will not permanently starve the CPU-bound programs, though, because the I/O-bound programs relinquish the CPU frequently to do their I/O.

Load Balancing

Keeping the workload evenly distributed across all processors.

Privileged instructions are only executable in _____ mode

Kernel

Under what circumstances is one type better than the other (user threads vs kernel threads)?

Kernel threads are better in a multiprocessor environment, where the kernel can schedule threads on different processors. A task that has multiple threads that are I/O bound, or that has many threads (and thus will benefit from the additional timeslices that kernel threads will receive) might best be handled by kernel threads. User-level threads are much faster to switch between, as there is no context switch, which makes them ideal for tasks with small cpu bursts.

Describe some of the challenges of designing operating systems for mobile devices compared with designing operating systems for traditional PCs.

Less storage means the OS must manage memory carefully Limited battery means the OS must manage power consumption carefully Less processing power and fewer processors means the OS must carefully allocate processors to applications

Bootstrap Program

Loaded at power-up or reboot Known as firmware Loads kernel and starts execution Stored in ROM

Quanta

MAX amount of time before switching to a new process. Should be chosen such that 80% of CPU bursts are smaller than the quanta

Many-to-Many Thread Relationship

Many user threads are mapped to many kernel threads.

Many-to-One Thread Relationship

Many user threads mapped to a single kernel thread. - Unable to run in parallel on multiprocessors. - Entire process will block if any thread makes a blocking system call

Goals of Optimization Regarding the 5 Criteria for Comparing Scheduling Algorithms

Maximize CPU Utilization and Throughput Minimize Turnaround, Waiting, and Response time

Exit section

May follow the critical section

Clustered Systems

Multiple individual systems working together

Reader-Writer Problem

mutex protects updates to read_count (the readers stacking up); rw_mutex is held by a writer for exclusive access, and by the readers as a group: the first reader in acquires it and the last reader out releases it
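
A sketch of the first readers-writers solution built on these two semaphores. POSIX sem_wait()/sem_post() are used here in place of the chapter's wait()/signal(); initialization (mutex = 1, rw_mutex = 1, read_count = 0) is assumed:

    #include <semaphore.h>

    sem_t mutex;         /* protects read_count; initialized to 1 */
    sem_t rw_mutex;      /* held by a writer, or by the readers as a group; initialized to 1 */
    int read_count = 0;  /* number of readers currently reading */

    void writer(void) {
        sem_wait(&rw_mutex);         /* exclusive access for the writer */
        /* ... perform writing ... */
        sem_post(&rw_mutex);
    }

    void reader(void) {
        sem_wait(&mutex);
        if (++read_count == 1)
            sem_wait(&rw_mutex);     /* first reader in locks out writers */
        sem_post(&mutex);

        /* ... perform reading ... */

        sem_wait(&mutex);
        if (--read_count == 0)
            sem_post(&rw_mutex);     /* last reader out lets writers back in */
        sem_post(&mutex);
    }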

5 Process States

New - Being created Ready - Waiting to run Running - Instructions being executed Waiting - Waiting for some event to occur Terminated - Finished execution RRWTN Round Robin Waitin

Spinlock Advantages

No context switch (which may take considerable time) is required when a process must wait on a lock

Unbounded Buffer Problem (Producer & Consumer)

No limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. Solution:

Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system? Explain.

No, a multithreaded solution can not achieve better performance, because user-level threads are not recognized by the kernel, and the kernel is the only thing that can spread out threads across different processors.

Atomic

Non-interruptible

A process switching from Running to Waiting is ___________

Nonpreemptive

A process terminating from any other state is ___________

Nonpreemptive

The following instruction should be __________. "Issue a trap instruction"

Not Privileged

The following instruction should be __________. "Read the clock"

Not Privileged

The following instruction should be __________. "Switch from user to kernel mode"

Not Privileged

Throughput

Number of processes that complete their execution per time unit.

Moore's Law

Number of transistors on a semiconductor chip doubles every 18 months.

Soft Affinity

OS will attempt to keep a process on a single processor, but no guarantees

When a process creates a new process using the fork() operation, which of the following is shared between the parent process and child process? Stack, Heap, Shared memory segments

Only the shared memory segments are shared between the parent and child processes. Copies of the stack and heap are made for the child.

Convoy Effect

Other Processes in a queue must wait for the CPU hog to finish

Targeted Latency

Period of time where every process is run at least once

A process switching from Running to Ready is ___________

Preemptive

A process switching from Waiting to Ready is ___________

Preemptive

Two general approaches to handle critical sections in operating systems

Preemptive Kernels and Nonpreemptive Kernels

Explain the difference between preemptive and nonpreemptive scheduling.

Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes with its current CPU burst.

Priority Inversion Solution

Priority-Inheritance Protocol

The following instruction should be __________. "Access I/O device"

Privileged

The following instruction should be __________. "Clear memory"

Privileged

The following instruction should be __________. "Modify entries in device-status table"

Privileged

The following instruction should be __________. "Set value of timer"

Privileged

The following instruction should be __________. "Turn off interrupts"

Privileged

It is important that a long-term scheduler selects a good ___________ of I/O bound and CPU-bound processes.

Process Mix

Cooperating Process

Process that can affect or be affected by other processes executing in the system - Can either directly share a logical address space (code & data) through the use of threads or be allowed to share data only through files or messages

Orphan Process

Process whose parent has terminated

Heavyweight Process

Process with one thread

Completely Fair Scheduler

Processes with smaller virtual run time are able to preempt processes with larger virtual run time.

In a multi-threaded process, each thread has it's own _________

Program counter

Locking

Protecting critical regions through the use of locks

Thread Library

Provides programmer with an API for creating and managing threads

Remainder section

Remaining code

Kernel Threads

Run solely in kernel space. Kernel must support threading.

Priority Scheduling

Running higher priority tasks first. Can result in indefinite blocking/starvation

Multilevel Feedback Queue Scheduling

Same as multilevel queue scheduling, except that processes can move up or down between the queues based on a classifier (priority, CPU-burst size, etc.). Long processes sink to the bottommost queue and are served in FCFS order.

Benefits of Multithreaded Programming

Scalability - Multithreading on multiprocessor systems increases parallelism Economy - Allocating memory and resources for processes is costly. Because threads automatically share memory, it is more economical to context switch between threads rather than processes. Responsiveness - Program can keep running if another part is blocked Resource Sharing - Threads automatically share memory by default, which allows an application to have several threads of activity in the same address space. SERR

Critical Section

Segment of code within a process that may be changing common variables, updating a table, writing a file, etc

Short-term scheduling (CPU scheduler)

Selects from the processes that are ready to execute and allocates CPU to one of them

Medium-term scheduling (memory manager)

Selects processes from the ready queue or blocked queue and removes them from memory, then reinstates them later to continue running. Reduces the degree of multiprogramming.

Long-term scheduling (job scheduler)

Selects which processes should be brought into the ready queue. Controls degree of multiprogramming

Race condition

Several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place. This is bad, as you want your program's outcome to be consistent and predictable, not variable.

Interprocess Communication (IPC)

Shared Memory & Message Passing

Describe the differences among short-term, medium-term, and long-term scheduling.

Short-term scheduling is invoked frequently (once every few milliseconds). Long-term scheduling is invoked infrequently (once every few seconds or minutes). Medium-term scheduling swaps processes out of memory and later back in, reducing the degree of multiprogramming.

Only _______-threaded processes have program counters.

Single

Trap (Exception)

Software generated interrupt. Can be caused by an error or by a request from a user program.

Discuss the performance of the following scenario of a program on a multiprocessor system. The number of kernel threads allocated to the program is less than the number of processing cores

Some of the processors would be idle because there are not enough kernel threads to be spread around to all of the processors available.

Program counter

Specifies the location of the next instruction to execute.

I/O Bound Process

Spends more time doing I/O than computations, many short CPU bursts

CPU-Bound Process

Spends more time doing computations, few very long CPU bursts

Preemptive

Supports interrupts, CPU can switch to another process before the current process finishes. Can lead to race conditions if not carefully coded

Blocking is considered _______

Synchronous Blocking Send - Sender is blocked until the message is received. Blocking Receive - Receiver is blocked until a message is available

What is the purpose of system calls?

System calls allow user-level processes to request services of the operating system.

Homogeneous Multiprocessors

Systems where each processor is identical

Round Robin Scheduling

Tasks are circled through, each task being executed for 1 quanta of time. If the task is completed in that amount of time, it leaves the list. If it is not, it will remain in the list and be visited on the next go around.

Describe the actions taken when context-switching between kernel threads.

The CPU registers of the current thread are saved, and the CPU registers of the thread to be swapped in are restored.

What will happen if a long-term scheduler chooses all CPU-bound processes?

The I/O waiting queue will almost always be empty, devices will go unused.

What problems do caches cause?

The cache data must be kept consistent with the realtime data in a component. If a component has a value change, the data in the cache must also be updated. This causes problems on multiprocessor systems where more than one process could be accessing the same data.

Describe the actions taken by a kernel to context-switch between processes.

The kernel must save the state of the currently running process and the registers associated with it. It must then restore the state and registers of the process scheduled to run next. The context of a process is represented in the PCB. Context-switch time is pure overhead; the system does no useful work while switching. Helpful Diagram - http://imgur.com/Uc28Pip

Push Migration

The load on each processor is checked periodically, and processes are moved from overloaded processors to idle or less-busy ones.

Multiprogramming

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization (more than one program loaded in memory at a time)

Timesharing (Multitasking)

The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.

Kernel

The one program running at all times on the computer

Joy's Law

The performance of computers doubles every 18 months, until 2003.

Spinlock

The process "spins" while waiting for the locks to become available (see busy waiting)

What will happen if a long-term scheduler chooses all I/O-bound processes?

The ready queue will almost always be empty, and the short-term scheduler will have little to do.

Multilevel Queue Scheduling

There are multiple levels of priority, where each level has its own inner queue.

Bounded Buffer Problem (Producer & Consumer)

There is a fixed limit on the size of the buffer. The consumer must wait if the buffer is empty, and the producer must wait if the buffer is full. Solution: http://pastebin.com/Vn8muVSH
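
The pastebin link is not reproduced here; a typical semaphore-based sketch of the bounded-buffer solution looks like the following (the buffer size, item type, and initialization values are assumptions):

    #include <semaphore.h>

    #define BUFFER_SIZE 8            /* assumed fixed capacity */

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;

    sem_t empty;                     /* counts empty slots, initialized to BUFFER_SIZE */
    sem_t full;                      /* counts full slots, initialized to 0 */
    sem_t mutex;                     /* protects the buffer, initialized to 1 */

    void produce(int item) {
        sem_wait(&empty);            /* wait for a free slot */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);             /* one more full slot */
    }

    int consume(void) {
        sem_wait(&full);             /* wait for an available item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);            /* one more empty slot */
        return item;
    }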

Bounded Waiting

There's a limit to the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its own critical section and before that request is granted Simply: If P1 wants to execute its critical section, and it asks for permission, P2 can't just keep going on the merry-go-round indefinitely. It has to give P1 a turn eventually. There is a limit to the number of times P2 can continue before P1 will take over.

How are iOS and Android similar?

They are based on existing kernels (Linux & Mac OS X) They have architecture that uses software stacks They provide frameworks for developers

User Threads (Supported by Kernel)

Threads run in user space but are scheduled by the kernel. Kernel must support threading.

User Threads (Not supported by Kernel)

Threads run in user space, and are scheduled in user space. Supported just by user-space threading library.

Dispatch Latency

Time to stop one process and start another process

Effects of disabling interrupts on multiprocessor

Time-consuming, since the message must be passed to all processors Message passing delays entry into each critical section → system efficiency decreases Also need to consider the effect on the system's clock if it is kept updated by interrupts; the timer will get messed up.

What is the purpose of interrupts?

To let the CPU know that a device controller is done with its operation

OS Services

UI Resource Allocation Program Execution I/O Operations Protection & Security File System Manipulation Accounting Communications Error Detection URPIP FACE

A way to ensure bounded waiting for semaphores

Use a FIFO queue when adding and removing processes

exec() system call

Used after a fork to replace the processes' memory space with a new program
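
A minimal sketch of the usual fork-then-exec pattern (launching /bin/ls is only an illustrative choice):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: replace this process's memory space with a new program */
            execlp("/bin/ls", "ls", NULL);
            perror("execlp failed");         /* reached only if exec fails */
            return 1;
        } else if (pid > 0) {
            wait(NULL);                      /* parent waits for the child to finish */
            printf("child complete\n");
        }
        return 0;
    }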

Signal Handler

Used to process signals. 1. Signal is generated. 2. Signal is delivered to process 3. Signal is handled.
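
A small sketch of installing and handling a signal with the standard signal() call (catching SIGINT is just an example choice):

    #include <signal.h>
    #include <unistd.h>

    /* Step 3: the handler that processes the delivered signal. */
    void handle_sigint(int sig) {
        (void)sig;                           /* unused */
        write(STDOUT_FILENO, "caught SIGINT\n", 14);
    }

    int main(void) {
        signal(SIGINT, handle_sigint);   /* register the handler; step 1 (generation) happens
                                            when the user presses Ctrl-C, step 2 (delivery)
                                            when the kernel interrupts this process */
        while (1)
            pause();                     /* sleep until a signal is delivered */
    }

Real handlers should stick to async-signal-safe calls such as write().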

What are two differences between user-level threads and kernel-level threads?

User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads. User threads are scheduled by the thread library and the kernel threads scheduled by the kernel. Kernel threads do not have to be associated with a process, whereas every user thread belongs to a process. Kernel threads are generally more expensive to maintain than user threads as they must be represented with a kernel data structure.

Spinlock Disadvantages

Wastes CPU cycles that some other process might be able to use productively

Kernel Mode

When a user application requests a service from the OS, the system must transition into kernel mode to fulfill this request.

User Mode

When the computer system is executing on behalf of a user application.

Entry Section

Where a process requests permission to enter its critical section

Symmetric Multiprocessing

Where each processor is self-scheduling. All processes could be in a common ready queue, or they might have their own private queue.

Asymmetric Multiprocessing

Where one processor makes all scheduling decisions and handles I/O processing and other system activities. It is the master server. The other processors execute only user code. Only one processor accesses the system data structures, reducing the need for data sharing.

Can traps be generated intentionally by a user program? If so, for what purpose?

Yes. A user program can intentionally generate a trap with an explicit call (e.g., a system call) when it needs an OS service; traps are also generated by exceptions/errors.

Is it possible to have concurrency but not parallelism? Explain

Yes. Concurrency means that more than one process or thread is progressing at the same time. However, it does not imply that the processes are running simultaneously. The scheduling of tasks allows for concurrency, but parallelism is supported only on systems with more than one processing core.

Multiple-processor solution to semaphore critical-section problem

Alternative locking techniques (such as compare_and_swap()) instead of disabling interrupts, which is difficult and can diminish performance on multiprocessor machines
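
A sketch of what such a lock can look like, using a compare-and-swap operation to protect the short critical sections inside wait() and signal(). The GCC/Clang builtin __sync_val_compare_and_swap is used here as a stand-in for the textbook's compare_and_swap(); the function names are illustrative:

    static volatile int sem_lock = 0;    /* 0 = free, 1 = held */

    void sem_internal_acquire(void) {
        /* atomically: if sem_lock == 0, set it to 1; returns the old value */
        while (__sync_val_compare_and_swap(&sem_lock, 0, 1) != 0)
            ;                            /* spin until we win the swap */
    }

    void sem_internal_release(void) {
        __sync_lock_release(&sem_lock);  /* set the lock back to 0 */
    }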

Calls to either acquire() or release() must be performed _______

atomically. Thus they are often implemented using one of the hardware mechanisms

Which of the following components of program state are shared across threads in a multithreaded process? (a) Register values (b) Heap memory (c) Global variables (d) Stack memory

b & c (b) Heap memory: Shared (c) Global variables: Shared

What is the average turnaround time of the following values - http://imgur.com/400ll50 - if: A) using FCFS B) using SJF C) using SJF, but with the 1 second being idle time

http://www.cs.rpi.edu/~moorthy/courses/os00/soln1234.html A) 10.53 B) 9.53 C) 6.86

How are iOS and Android different?

iOS is closed-source, Android is open-source iOS applications use Objective-C or Swift, Android is strictly Java Android uses a VM, iOS executes code natively

Processes within a system may be ___________ or ____________

independent, cooperating

Single-processor solution to semaphore critical-section problem

inhibit interrupts during the time wait() and signal() operations are executing. Interrupt is inhibited → instructions from different processes cannot be interleaved → currently running process executes until interrupts are reenabled and the scheduler regains control

Semaphore Solution to busy waiting

instead of busy waiting, when a process executes wait(), it can block itself → process is placed into a waiting queue associated with the semaphore → state of process is switched to the waiting state → control is transferred to the CPU scheduler, which selects another process to execute

How do you terminate just the calling thread?

pthread_exit()
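
A small sketch showing pthread_exit() terminating only the calling thread while the rest of the process keeps running (the worker function name is illustrative; build with gcc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg) {
        (void)arg;
        printf("worker thread exiting\n");
        pthread_exit(NULL);               /* terminates only this thread */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);          /* main keeps running and joins the thread */
        printf("main still alive\n");
        return 0;
    }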

Mutex lock cons

requires busy waiting

A _________ challenge arises when a higher-priority process needs to read/modify kernel data that's being accessed by a lower-priority process

scheduling

Binary semaphore

Value can range only between 0 and 1; behaves similarly to a mutex lock

Counting semaphore

value can range over an unrestricted domain - can be used to control access to a given resource consisting of a finite number of instances

Busy waiting

while a process is in its critical section, other processes that try to enter their critical section must loop continuously in the call to acquire()

