Operating Systems Exam 1


compare and contrast direct and indirect communications

With direct communication, the sender and receiver name each other explicitly in the send/receive system calls. With indirect communication, messages are sent to and received from a shared mailbox (or port), so sender and receiver do not need to name each other.

Explain the disadvantages of the operating system as a software layer separating user programs from the hardware.

Disadvantages: the OS layer adds overhead (every hardware access pays the cost of going through it), and everything depends on it, so if the OS fails the whole system crashes.

List specific reasons for a process leaving the running state, and how these are related to preemption by the CPU scheduler.

A process terminates (system call, infrequent); a process voluntarily yields the CPU (system call, very rare); a process blocks (system call); a process has had enough CPU time (timer interrupt, which is how the scheduler preempts it); another process finishes waiting (hardware interrupt, which may also lead to preemption).

Contrast the system call mechanism with an ordinary subroutine call and explain how a system call is implemented

An ordinary subroutine call stays in user mode and jumps directly to a known address. A system call instead requests a service from the kernel: each call has a number associated with it, the program traps into the kernel, the system-call interface switches to kernel mode, looks up and executes the handler associated with that number, then returns the results and switches back to user mode.

Contrast the role and execution environment of a user process vs. the OS kernel

A user process runs in user mode, with access only to its own memory and unprivileged instructions; anything else it must request from the kernel via a system call. The kernel runs in kernel mode, with full access to the hardware, all of memory, and privileged instructions.

Explain the advantages of the operating system as a software layer separating user programs from the hardware.

Advantages: the OS allows you to run many programs at once, share memory and devices among programs safely, and interact with devices through a uniform interface. The OS also makes sure the system operates efficiently.

Describe the most important steps in a context switch starting from timer interrupts

- A timer interrupt occurs, telling the CPU to leave the running process
- The scheduler decides to run a different process
- Save a copy of the PC and CPU registers for the original process (into its PCB)
- Set memory bounds for the new process
- Restore the PC and CPU registers for the new process
- Return from the interrupt into the new process

Describe asynchronous and deferred models of thread cancellation and the consequences of using one or the other

Asynchronous: terminate the target thread immediately, something like kill(); fine for processes but problematic for threads, which may hold locks or be mid-update of shared data. Deferred: define cancellation points where the target thread checks whether it should terminate, so it can clean up and exit safely.

Describe how base and limit registers function as a simple mechanism for memory protection.

Base register: the address of the first byte a program can access in user mode. Limit register: the number of bytes the program can access, starting from the base. On every user-mode memory access, the hardware checks base <= address < base + limit and traps to the OS on a violation.

Explain the meaning and importance of each of the five CPU scheduling criteria (CPU utilization, throughput, etc.) and determine values for these based on a CPU schedule. Explain why typical operating systems can't satisfy these requirements

CPU utilization: how often the CPU is running a user process. Throughput: how many processes finish per time unit. Turnaround time: time from when a job is submitted until it is done. Waiting time: time a process spends in the ready state. Response time: time from when a job becomes runnable until it starts producing output. The OS can't optimize all of these at once because they conflict: for example, minimizing response time requires frequent preemption, which adds context-switch overhead and lowers throughput.

Implement simple programs and program fragments using the Pthreads API, and explain the behavior of code fragments using this API (including pthread_create(), pthread_join())

pthread_create() starts a new thread running a given start routine with a single void* argument; pthread_join() blocks until the named thread terminates and retrieves its return value. Both return 0 on success and an error number on failure.

List and explain the different types of protection (e.g., memory, CPU, I/O) an operating system is expected to provide to its processes, and how these are achieved with the help of hardware.

Dual-mode operation: the OS needs to do things users can't, so the CPU keeps a mode bit distinguishing user from kernel mode. I/O protection: the OS forces users to request I/O through system calls. Memory protection: hardware (e.g. base/limit registers) controls what memory a program can use. CPU protection: a timer interrupt lets the OS get the CPU back from a running process.

Explain how transitions between the user and kernel mode occur

First an interrupt occurs, either hardware (a device) or software (a trap, i.e. an error or a deliberate system call). The CPU stops what it is doing, switches to kernel mode, and jumps to the corresponding interrupt handler. Returning from the handler switches back to user mode.

Describe types of processor affinity

Soft processor affinity: the OS attempts to keep a process running on the same processor, but may move it if needed. Hard processor affinity: a process can specify the set of processors on which it may run, and the OS must honor it.

Describe temporal and spatial locality

Temporal locality: a program is likely to access again data it accessed recently (e.g. x = x + 2 reads and then writes x). Spatial locality: a program is likely to access data near what it accessed recently (e.g. walking through an array element by element in a loop).

Implement critical section solutions based on atomic test-and-set or compare-and swap CPU instructions

test_and_set: atomically return the old value of a memory location and set that location to true; the caller loops until the old value was false. compare_and_swap: atomically do if (target == expected) { target = newValue; /* success */ } else { /* failure, target unchanged */ }, reporting whether the swap happened.

Explain the behavior and advantages/disadvantages various CPU scheduling algorithms

First Come First Serve: schedule processes in the order they arrive. Adv: simple. Dis: short processes held up behind long ones (convoy effect).
Shortest Job First: choose the process with the shortest next CPU burst; non-preemptive.
Shortest Remaining Time First: always run the process with the least time remaining in the queue; preemptive.
Adv of SJF and SRTF: optimal for average waiting time. Dis: impossible to implement exactly, since burst lengths must be predicted.
Priority scheduling: can be preemptive or non-preemptive; smallest number is highest priority; risks starvation of low-priority processes.
Round Robin: processes take turns via a time quantum and a ready queue; priority doesn't matter, and response time is good at the cost of context-switch overhead.

describe the memory layout of a (single-threaded) process.

Text section: code and other global, read-only data. Data: writable global variables. Stack: local variables, return addresses, etc. Heap: dynamically allocated memory (malloc(), new). Room is left between the stack and heap so both can grow.

Given a collection of processes arriving in the ready queue, develop a schedule corresponding to one of the CPU scheduling algorithms

Draw a Gantt chart: apply the algorithm's rules to place each process on the timeline, then read off start, finish, and waiting times.

Explain the difference between user- and kernel mode

User mode: access to only a restricted subset of CPU features; can't run privileged instructions. Kernel mode: access to all CPU instructions and registers.

Explain differences in implementation and tradeoffs for user-level and kernel-level implementations of threads

User-level threads: multithreading implemented by a reusable library; the kernel is unaware of the threads. Pros: low context-switch overhead, reduced consumption of kernel resources. Con: if one thread makes a blocking call, every thread in the process blocks. Kernel-level threads: the kernel is aware of the multiple threads in a process, maintains a thread control block for each, and schedules each independently, so one thread blocking doesn't block the others, at the cost of higher overhead.

Describe the general technique of caching

Use a faster, smaller type of storage that temporarily holds a subset of a larger, slower storage. On access, check the cache first (a hit avoids the slow storage); on a miss, fetch from the backing store and keep a copy, evicting an older entry if the cache is full.

Explain how memory is laid out and how thread context is maintained in a multithreaded process

The layout is the same as a single-threaded process, except the stack region contains one stack per thread. The threads share the text, data, and heap sections, so they can stay in the same memory layout; each thread's private context (PC, CPU registers, stack pointer) is kept in its own thread control block and stack.

For a particular schedule, compute waiting time, average waiting time, turnaround time and response time

Waiting time: time spent ready but not running. Turnaround time: time from arrival to completion. Response time: time when the process first gets the CPU minus its arrival time. Average waiting time: the sum of the waiting times divided by the number of processes.

Identify deficiencies in flawed critical section solutions and describe them in terms of critical section requirements

Typical flaws to look for: mutual exclusion violated (two processes can be in the critical section at once), progress violated (e.g. strict alternation forces a process to wait for a thread that isn't interested in entering), or bounded waiting violated (a process can be passed over indefinitely).

Describe how the traditional model of a (single-threaded) process can be generalized to include multiple threads.

We want multiple threads of control within one process to share almost everything: the same code, resources, and data. Each thread runs its own subroutines and sections of code, so each needs its own CPU registers, program counter, and region of memory for a stack.

Describe how and why multiple scheduling behaviors may be combined in a multilevel feedback queue to simultaneously accommodate processes with a variety of behaviors

Why: different schedulers suit different kinds of jobs, so the ready queue is partitioned into multiple queues, each with its own scheduler. How: queues are selected by priority; a ready process enters Q0; if it uses its whole quantum it is demoted to Q1, and if it again uses its whole quantum, to Q2; after some time, processes are moved back to the topmost queue (aging).

Describe how write-through and write-back policies work to keep the backing store consistent with the cache.

Write-through is when the data is written simultaneously in the cache and memory. Write-back is when the data is written to the cache and is updated in the memory at a later time

Describe the bounded buffer problem

buffer with limited capacity, producer waits if buffer is full, consumer waits if buffer is empty, queue order within buffer

types of thread scheduling based on contention scope

Contention scope: which other threads a thread competes with for CPU time. Process contention scope: compete with threads within the same process. System contention scope: compete with threads in all other processes.

Describe the behavior of counting and binary semaphores

Counting semaphore: the value can be any non-negative integer. Binary semaphore: the value can only be 0 or 1 (1 by default).

Implement simple programs using POSIX API for processes, explain the behavior of these system calls, and explain the behavior of example programs using them (including fork(), wait(), exit() and exec*()).

fork() creates a child process that is a duplicate of the parent; it returns 0 in the child and the child's pid in the parent. wait() blocks the parent until a child terminates and retrieves its exit status (which makes otherwise nondeterministic output deterministic). exit() terminates the calling process with a status code. exec*() replaces the current process image with a new program.

how transitions among these states may occur because of events inside and outside the process.

new to ready: admitted; ready to running: scheduler dispatch; running to ready: interrupt; waiting to ready: I/O or event completion; running to waiting: I/O or event wait; running to terminated: exit

Explain the meanings of various process states (e.g., new, ready)

new: being created and can't run yet. ready: runnable but not on a CPU yet. running: executing instructions on a CPU. waiting: has requested something and can't run until it completes. terminated: has finished running, still has a pid, but will never run again.

List and describe steps performed during system boot

- Start running firmware at a known address - Load a boot loader from secondary storage - Bootstrap loader copies kernel into memory and starts executing it - Kernel initializes data structures (e.g., interrupt vectors) - Kernel starts running at least one system utility (e.g., Unix init or systemd on Linux)

implement a bounded buffer solution using semaphores

A Semaphore lock: For preventing concurrent access to the buffer, Initialized to 1, Just like semaphore-based critical section solution A Semaphore fullCount: Counts the number of filled buffer slots, Initialized to 0 A Semaphore emptyCount: Counts the number of empty slots, Initialized to the buffer capacity

Describe the blocking behavior of a process with user-level threads

If one user-level thread makes a blocking call, the CPU halts until it is finished, stopping other threads from being run

Use semaphores to solve simple synchronization problems (like the ones on the inclass exercises and the examples) using our semaphore pseudocode (e.g., sem s = 1; acquire( s ); release( s );)

If you want operation A to happen before operation B (in different threads), declare a global sem s = 0. The thread performing A calls release(s) after A; the thread performing B calls acquire(s) before B, so B blocks until A has completed.

Define the term, Process

It's an abstraction for a running program.

Describe the basic steps of process creation

Load the code and static data into memory; allocate memory for the heap and stack; initialize file descriptors (stdin, stdout, stderr); jump to main(), giving the CPU to the process.

Describe the different, general models for mapping process threads to kernel threads (e.g., many-to-one, one-to-one) the relevant tradeoffs.

Many-to-one: many user-level threads implemented on a single kernel thread; cheap, but one blocking call blocks them all. One-to-one: each user thread maps to a kernel-level thread; true parallelism at the cost of kernel resources. Many-to-many: user threads are mapped independently onto kernel threads, which needs extra scheduling. Two-level model: many-to-many plus the ability to bind an individual user thread to a kernel thread.

Describe different approaches to operating system organization, in particular, microkernel vs. monolithic kernel, and the relative advantages/disadvantages

Microkernel: a smaller kernel, because most OS services are moved out of the kernel into user-space processes. Advantages: less code in the kernel makes it more resistant to failures and security breaches, and easier to extend with new features; disadvantage: slower, due to message passing between services. Monolithic kernel: the whole OS in one big kernel; larger and faster, with less code to write outside the kernel, but if one service crashes the whole kernel crashes.

Describe the parent-child relationship among processes and how this yields a process tree

Parent and child can share all, some, or none of the parent's resources. The child can be a duplicate of the parent (fork, often followed by exec) or a new program (a create-process call). Since every process except the first has exactly one parent and may have many children, the processes form a tree.

Describe the role of system programs, how the operating system provides an interface to end users.

System programs (shells, compilers, editors, utilities) provide a convenient environment for developing and executing programs. For most end users, these programs, not the raw system calls, define their view of the operating system.

Describe the different types of storage available and the relative size/performance tradeoffs. (4)

Registers: < 1 KB, fastest access time and bandwidth, managed by the compiler. Cache: < 16 MB, fast access time and bandwidth, managed by hardware. Main memory: GBs, slow access time and bandwidth, managed by the OS. Disk storage: > 100 GB, slowest access time and bandwidth, managed by the OS.

Describe the jobs and responsibilities of a modern operating system and the types of services normally provided to user processes.

Runs programs, shares memory and the CPU among them, and handles the interactions between user applications and the hardware. Mediating between applications and hardware is the main job.

Speculate about possible executions of multiple threads and their interactions through shared variables.

Shared variables like semaphores allow multiple threads to synchronize and properly share resources and data in an efficient way

Explain the necessity of burst length prediction in SJF and SRTF

Since SJF and SRTF need the length of each process's next CPU burst, which can't be known in advance, they can't be implemented exactly. Instead the scheduler predicts the next burst from past behavior, typically with exponential averaging: predicted_next = a * last_actual_burst + (1 - a) * last_prediction.

Define starvation, recognize scenarios when it can occur and how aging may be used as a countermeasure

Starvation: low-priority processes may never execute. It can occur in priority scheduling when higher-priority work keeps arriving. Aging counters it by gradually increasing the priority of waiting processes, so eventually each one has high enough priority to run.

Describe the behavior of semaphores and their relevant operations, acquire() and release(). Make judgments about the possible behavior of a multi-threaded program using these mechanisms

Synchronization help from the OS; a semaphore behaves like an integer. acquire: wait until the semaphore has a positive value, then decrement it (atomically). release: increment the semaphore value, potentially waking a waiting thread.

Describe the function and contents of the PCB

The PCB is where the OS saves CPU registers and other information during a context switch PCB contains process state, process ID, Copies of PC and CPU registers, memory bounds, accounting information, resources in use(open files), pointer for linking PCB to other lists

Explain the need for dual-mode operation

The operating system needs to do things with the hardware that user programs must not be able to do directly; a hardware mode bit lets the CPU enforce that distinction.

Describe the critical section problem using appropriate terms and the requirements for a solution (e.g., bounded waiting, progress).

The problem: cooperating processes share data and must synchronize so that only one at a time executes the section of code (the critical section) that manipulates the shared data. A solution must provide:
- Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
- Progress: if no process is in its critical section and some processes wish to enter, only the processes not executing in their remainder sections take part in deciding which enters next, and the decision cannot be postponed indefinitely.
- Bounded waiting: there is a limit on how many times other processes can enter their critical sections after a process has requested entry and before that request is granted.

Describe how process context is maintained by the operating system

The process context (PC, CPU registers, memory bounds, state, etc.) is saved in and restored from the Process Control Block.

Describe how and why PCBs may move among OS scheduling queues.

PCBs are stored in a process table (the list of all PCBs) and linked onto scheduling queues: the ready queue (PCBs waiting for the CPU) and device/event wait queues (PCBs waiting for I/O or events). A PCB moves between queues as its process is dispatched, preempted, blocks, or has its wait completed.

Explain why threads can be an important and valuable resource in designing an application for a modern computer system

Threads allow us to have lower cost, faster context switching, efficient communication, and simplified program structure

Compare and contrast IPC mechanisms based on message passing with POSIX anonymous pipes and shared memory

Message passing: messages are exchanged via send/receive operations through the kernel, with no shared state. POSIX anonymous pipes: a byte stream between related processes, used through pipe(), read(), write(), and close(); also kernel-mediated. Shared memory: a region mapped into both processes; fastest, since the kernel is involved only in setup, but the processes must synchronize access themselves.

Implement Peterson's solution, describe the advantages and disadvantages of this solution

A two-thread software solution that assumes loads and stores are atomic and not reordered. Each thread sets wantIn[me] when it wants to enter the critical section, then sets turn to let the other thread go first (breaking ties), and spins while the other wants in and it is the other's turn. Adv: solves the critical-section problem for two threads in pure software. Dis: depends on busy waiting, works for exactly 2 threads, is tricky to understand and implement, and doesn't work reliably on modern hardware, where compilers and CPUs may reorder the loads and stores unless memory barriers are used.

