ECE 3220 Final
1  #define NTHREADS 10
2  thread_t threads[NTHREADS];
3  main() {
4      for (i = 0; i < NTHREADS; i++) { thread_create(&threads[i], &go, i); }
5      for (i = 0; i < NTHREADS; i++) {
6          exitValue = thread_join(threads[i]);
7          printf("Thread %d returned with %ld\n", i, exitValue);
8      }
9      printf("Main thread done.\n");
10 }
11 void go (int n) {
12     printf("Hello from thread %d\n", n);
13     thread_exit(100 + n);
14 }
if the second for loop (lines 5-8) is deleted, what are the possible outputs of the program? why?
"main thread done" plus 0 and 10 "hello from thread i" with prints in any order. when main ends it exits and returns the PCB. All TCB returned at same time
https://drive.google.com/file/d/1Z7jSYpVT1SGc5_c-YCKJLpSEq1XKf4x4/view?usp=sharing how many resources?
(1,0)
https://drive.google.com/file/d/1xzrTfmBbwurAtn4V0sHpF8rF4dNf9EP-/view?usp=sharing which processes can be given more resources?
B
https://drive.google.com/open?id=1waDDBZfGDGUnmMrgw-x0PEJhFcaTcNup which is a ready process (on proc2)?
B
https://drive.google.com/file/d/1xzrTfmBbwurAtn4V0sHpF8rF4dNf9EP-/view?usp=sharing give a sequence of requests and releases that recover all resources
either order works: B requests, B releases, A requests, A releases, C requests, C releases; or B requests, B releases, C requests, C releases, A requests, A releases
https://drive.google.com/file/d/1Z7jSYpVT1SGc5_c-YCKJLpSEq1XKf4x4/view?usp=sharing which processes can be given more resources?
C
https://drive.google.com/file/d/1Z7jSYpVT1SGc5_c-YCKJLpSEq1XKf4x4/view?usp=sharing give a sequence of requests and releases that recover all resources
either order works: C requests, C releases, A requests, A releases, B requests, B releases; or C requests, C releases, B requests, B releases, A requests, A releases
https://drive.google.com/open?id=1waDDBZfGDGUnmMrgw-x0PEJhFcaTcNup Which is a new process? (ready to run on main)
D
explain the steps that an os goes through when the cpu receives an interrupt
1) save PC, IR, and PSR 2) switch to kernel mode 3) disable/defer future interrupts 4) load new PC from interrupt vector table
https://drive.google.com/file/d/1xzrTfmBbwurAtn4V0sHpF8rF4dNf9EP-/view?usp=sharing which processes cannot be given any more resources?
A, C
1  #define NTHREADS 10
2  thread_t threads[NTHREADS];
3  main() {
4      for (i = 0; i < NTHREADS; i++) { thread_create(&threads[i], &go, i); }
5      for (i = 0; i < NTHREADS; i++) {
6          exitValue = thread_join(threads[i]);
7          printf("Thread %d returned with %ld\n", i, exitValue);
8      }
9      printf("Main thread done.\n");
10 }
11 void go (int n) {
12     printf("Hello from thread %d\n", n);
13     thread_exit(100 + n);
14 }
where is the parameter n of go() (line 11) stored?
in per-thread state: the thread library records the argument in the TCB when the thread is created, and n lives on that thread's own stack while go() runs, so each thread has its own copy
what is a race condition among threads?
occurs when the behavior of a program depends on the interleaving of operations of different threads: the output is determined by the (effectively random) order in which independent operations from different threads execute
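A minimal sketch of such a race, assuming POSIX threads (the worker function and iteration count are just illustrative): two threads increment a shared counter without a lock, so the final value depends on how their read-modify-write operations interleave.

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                 /* shared state, no lock */

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                      /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* often prints less than 200000: lost updates from interleaved increments */
    printf("counter = %d\n", counter);
    return 0;
}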
all unix/linux I/O operations use the same set of system calls. list four of them
open, close, read, write
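A minimal sketch, assuming hypothetical file names, that uses all four calls to copy one file to another; the same calls work on files, pipes, and devices.

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    int in  = open("input.txt", O_RDONLY);                          /* hypothetical source file */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* hypothetical destination */
    if (in < 0 || out < 0)
        return 1;
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);
    close(in);
    close(out);
    return 0;
}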
what is the difference between efficiency and overhead?
overhead is the added cost of implementing an abstraction; efficiency is the lack of overhead
paging / segmentation - can contain internal fragmentation
paging
paging / segmentation - fixed length allocation
paging
what could fail if memory protection is not enforced?
process could access other data structures in the kernel and gain information that should be kept secure
what could fail if privileged instructions are not enforced?
a process could directly perform I/O operations and read/write anyone's files on the disk
what could fail if timer interrupts are disabled?
process could run in an infinite loop and never yield the processor back to the kernel
how does an OS keep a user application from directly access hardware without controls?
the OS implements protection through dual-mode operation (kernel mode / user mode). instructions that access hardware directly are privileged instructions, and code running in user mode cannot execute privileged instructions
Given the code below, explain what must happen for the UNIX wait (line 8) to return immediately and successfully.
1   main() {
2       int child_pid = fork();
3       if (child_pid == 0) {
4           printf("I am process #%d\n", getpid());
5           return 0;
6       } else {
7           printf("I am process #%d\n", getpid());
8           wait(child_pid);
9           printf("I am the parent of process #%d\n", child_pid);
10          return 0;
11      }
12  }
the child process created by the fork has to run to completion (exit) before the parent reaches the wait call; the kernel then already holds the child's exit status, so wait returns immediately and successfully
Referee/Illusionist/Glue Libraries
glue
Referee/Illusionist/Glue disk details such as sector size are hidden
glue
Referee/Illusionist/Glue represents the largest section of code in the OS
glue
what is the difference between host and guest operating systems?
host OS runs on native hardware and provides a virtual machine abstraction. the guest OS runs on a virtual machine.
Referee/Illusionist/Glue Virtual machine
illusionist
Referee/Illusionist/Glue able to support multiple users simultaneously
illusionist
Referee/Illusionist/Glue guest OS
illusionist
https://drive.google.com/file/d/1xzrTfmBbwurAtn4V0sHpF8rF4dNf9EP-/view?usp=sharing how many resources are left?
1
list four of the six best practices for coding with shared objects and locks
1) consistent structure 2) synchronize with locks and condition variables 3) acquire the lock at the beginning of a method and release it at the end 4) hold the lock when using a condition variable 5) wait in a while() loop 6) do not sleep while holding the lock (a sketch of several of these follows below)
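A minimal sketch of these practices in C with POSIX threads, assuming a hypothetical shared queue (initialization of the mutex and condition variable is omitted): the lock is acquired at the start of each method and released at the end, and the condition variable is waited on in a while loop with the lock held.

#include <pthread.h>

#define MAX_ITEMS 16

typedef struct {                           /* all state guarded by one lock */
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    int items[MAX_ITEMS];
    int count;
} queue_t;

int queue_get(queue_t *q) {
    pthread_mutex_lock(&q->lock);          /* acquire at the start of the method */
    while (q->count == 0)                  /* wait in a while loop, holding the lock */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int item = q->items[--q->count];
    pthread_mutex_unlock(&q->lock);        /* release at the end of the method */
    return item;
}

void queue_put(queue_t *q, int item) {
    pthread_mutex_lock(&q->lock);
    if (q->count < MAX_ITEMS)
        q->items[q->count++] = item;
    pthread_cond_signal(&q->not_empty);    /* signal while still holding the lock */
    pthread_mutex_unlock(&q->lock);
}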
Suppose you build a system using a staged architecture with some fixed number of threads operating in each stage. Assuming each stage is individually deadlock free, describe a way to guarantee that your system as a whole cannot deadlock. You only need to break one of the four necessary conditions for deadlock.
1) no limited resources -- make the queues unboundedly large so that a stage never blocks when sending a message 2) allow preemption -- when a queue/pipe fills up, preempt a stage that is holding a lock and restart its work later 3) no multiple independent requests -- each stage may hold at most one lock at a time 4) no circular waiting -- arrange the stages in a directed acyclic order so there cannot be a cycle of stages waiting for each other
Suppose a machine with 32-bit virtual addresses and 40-bit physical addresses is designed with a two-level page table, subdividing the virtual address into three pieces as follows: The first 10 bits are the index into the top-level page table, the second 10 bits are the index into the second-level page table, and the last 12 bits are the offset into the page. There are 4 protection bits per page, so each page table entry takes 4 bytes. what is the page size?
a 12-bit offset means the page size is 2^12 = 4096 bytes = 4 KB
1:  class TooSimpleFutexLock {
2:  private:
3:      int val;
4:  public:
5:      TooSimpleFutexLock() : val(0) { }   // Constructor
6:      void acquire() {
7:          int c;
8:          // atomic_inc returns *old* value
9:          while ((c = atomic_inc(val)) != 0) {
10:             futex_wait(&val, c + 1);
11:         }
12:     }
13:     void release() {
14:         val = 0;
15:         futex_wake(&val, 1);
16:     }
17: };
A corner case (lines 9-10) can occur when multiple threads try to acquire the lock at the same time. It can show up as occasional slowdowns and bursts of CPU usage. What is the problem?
two or more threads can fall into a livelock-like loop: after one thread atomically increments val, another thread increments it again before the first calls futex_wait, so val no longer equals the expected value (c + 1) and futex_wait returns immediately. the threads keep looping, re-incrementing val and invalidating each other's expected value, burning CPU in bursts until their futex_wait calls finally see the value they expect and block (or one thread acquires the lock)
Given the code below, how many different copies of the variable x are there? What are their values when their processes finish?
main() {
    int child = fork();
    int x = 5;
    if (child == 0) {
        x += 5;
    } else {
        child = fork();
        x += 10;
        if (child) {
            x += 5;
        }
    }
}
three copies: the original parent finishes with x = 20, the second child (forked in the else branch) with x = 15, and the first child with x = 10
Suppose a machine with 32-bit virtual addresses and 40-bit physical addresses is designed with a two-level page table, subdividing the virtual address into three pieces as follows: The first 10 bits are the index into the top-level page table, the second 10 bits are the index into the second-level page table, and the last 12 bits are the offset into the page. There are 4 protection bits per page, so each page table entry takes 4 bytes. how much memory is consumed by first and second level page tables and wasted by internal fragmentation for a process that has 64kb of memory starting at address 0?
8 KB. Each page table has 2^10 entries × 4 bytes per entry = 4 KB, exactly one page. Mapping 64 KB of memory requires one page frame for the top-level page table (only the first entry points to a second-level table; the rest are invalid) and one page frame for the second-level page table (the first 16 entries map valid page frames; the remainder are invalid). Since the process size is a multiple of the page size, there is no additional internal fragmentation for representing the program. Thus the space overhead of translation is 2 pages, or 8 KB.
https://drive.google.com/open?id=1waDDBZfGDGUnmMrgw-x0PEJhFcaTcNup which is a running process?
A
https://drive.google.com/file/d/1Z7jSYpVT1SGc5_c-YCKJLpSEq1XKf4x4/view?usp=sharing which processes cannot be given any more resources?
A and B
What's the difference between reliability and availability?
reliability is the correct operation of a system; availability is the fraction of time the system is working
suppose you do your homework assignments in SJF order. why will this not work?
SJF does not account for deadlines: short assignments always jump ahead, so a long assignment that is due soon may not be finished in time
how is a microkernel 'better' than a monolithic kernel?
smaller core kernel, more modular libs, easier OS updates, simpler code base
how do condition variables work with locks to stop busy waiting?
a condition variable is a synchronization object that lets a thread wait for a change to shared state. a busy-waiting thread keeps polling the shared state looking for the change and wastes CPU time; with a condition variable the waiting thread sleeps (releasing the lock) and is signaled when another thread makes the change, as sketched below
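A minimal sketch of the contrast, assuming POSIX threads and a hypothetical flag "ready" guarded by mutex m: the busy-waiting version keeps polling and burns CPU; the condition-variable version sleeps until the change is signaled.

#include <pthread.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static int ready = 0;                      /* hypothetical shared state */

void wait_busy(void) {                     /* busy waiting: polls, wasting CPU time */
    for (;;) {
        pthread_mutex_lock(&m);
        if (ready) { pthread_mutex_unlock(&m); return; }
        pthread_mutex_unlock(&m);          /* loop around and poll again */
    }
}

void wait_cv(void) {                       /* condition variable: sleeps until signaled */
    pthread_mutex_lock(&m);
    while (!ready)                         /* recheck the condition after every wakeup */
        pthread_cond_wait(&cv, &m);        /* releases m while asleep, reacquires on wakeup */
    pthread_mutex_unlock(&m);
}

void set_ready(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&cv);              /* wake a waiter instead of being polled */
    pthread_mutex_unlock(&m);
}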
how do locks and condition variables prevent a race condition among threads?
a lock provides synchronization (mutual exclusion): only one thread can execute a critical section of code at a time, so conflicting accesses to shared state cannot interleave. condition variables let a thread wait, while holding the lock, until the shared state satisfies the condition it needs, so the check and the update happen atomically
for the 'hello world' program, we mention that the kernel must copy the string from the user program to screen memory. why must the screen's buffer memory be protected?
if applications could write directly into the screen's buffer memory, a malicious or flawed program could overwrite other programs' output or draw misleading content (e.g., a fake system prompt), bypassing the OS's control of the display
how do semaphores differ from locks and condition variables in synchronization?
a semaphore has memory: a V() is remembered even if no thread is currently waiting, whereas a signal on a condition variable with no waiter is lost. a semaphore's value can be any non-negative integer, while a lock is binary (held or free)
why do threads have variable speed?
a thread has no control over when it runs or doesn't run. the scheduler decides: all the threads will get CPU time, but the order and timing are not predictable
how could a multi level feedback queue of 3+ levels cause starvation?
a steady workload of high-priority jobs keeps levels 1 and 2 busy; those jobs run to completion before ever being demoted to the lower levels, so jobs sitting in the lower levels never get to run
https://drive.google.com/open?id=1waDDBZfGDGUnmMrgw-x0PEJhFcaTcNup which is a waiting process (on proc2)?
C
describe the three types of user-mode to kernel-mode transfers
system calls (requested explicitly by the program), interrupts (asynchronous external events such as a timer or I/O device), and processor exceptions (triggered by the executing instruction, e.g. a page fault or divide by zero)
what information is shared between threads of the same process?
code, global data, and the heap (each thread has its own registers and stack)
for systems that use paged segmentation, what translation state does the kernel need to change on a process context switch?
contents of the segment table
describe what info is needed in a thread control block
a copy of the thread's processor registers (including the PC and stack pointer), a pointer to the thread's stack, and per-thread scheduling state such as thread status and priority
how can a semaphore be used as a mutual exclusion lock?
create the semaphore and initialize its value to 1; P()/wait acts as acquire (1 -> 0, later callers block) and V()/signal acts as release (0 -> 1, waking one waiter). a sketch follows below
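A minimal sketch with POSIX semaphores (the worker function and shared counter are just illustrative): the semaphore is initialized to 1, sem_wait acts as acquire, and sem_post acts as release, so at most one thread is in the critical section at a time.

#include <semaphore.h>
#include <pthread.h>

static sem_t mutex;                  /* semaphore used as a lock */
static int shared = 0;               /* hypothetical shared data */

static void *worker(void *arg) {
    sem_wait(&mutex);                /* acquire: 1 -> 0, later callers block */
    shared++;                        /* critical section */
    sem_post(&mutex);                /* release: 0 -> 1, wakes one waiter */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);          /* initial value 1 = mutual exclusion lock */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&mutex);
    return 0;
}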
describe the difference between external and internal fragmentation
external fragmentation is the unused physical memory space between segments. internal fragmentation is the unused physical space inside a frame (page) unit
T/F a system call (e.g. open()) is an example of an asynchronous interrupt
false
T/F an OS should never create more processes than the available number of processors
false
T/F an interrupt causes a new thread
false
T/F an interrupt creates a new process
false
T/F on modern processors, all instructions are atomic
false
T/F threads are more expensive for an OS to create than processes
false
processor utilization: 20% disk: 99.7% network: 5% explain how a faster CPU will affect processor utilization
a faster CPU finishes its work sooner and spends even more time waiting for the disk. this will SIGNIFICANTLY DECREASE CPU utilization
explain why an os supports communication through a file system?
file storage is a way to store data for later use (time insensitive); programs can run at different times and still communicate
which unix/linux system call creates a new process? exec or fork?
fork
list the four conditions necessary for a deadlock to occur
limited resources, no preemption, multiple independent requests, circular waiting
processor utilization: 20% disk: 99.7% network: 5% explain how a faster network will affect processor utilization
little time is spent on the network. a faster network will have NO EFFECT on CPU utilization
https://drive.google.com/open?id=1mhtH0hRg622X6n4FyodFxoD-orJBtkgm what three tasks does the kernel stub need to perform before completing the system call (after 2 but before 3)
1) locate the system call's arguments 2) validate the parameters (e.g., check that pointers and lengths are legal) 3) copy before check -- copy user data into kernel memory before validating it, so the user program cannot change it after the check (a sketch follows below)
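A purely illustrative user-space sketch of the copy-before-check idea (sys_example, its arguments, and the validation rule are all hypothetical, not a real kernel interface): the stub copies the caller's data into its own buffer first and validates the copy, so the caller cannot change the data after it has been checked.

#include <string.h>

#define KBUF_SIZE 256

/* hypothetical kernel-side stub for a system call that takes a caller-supplied string */
int sys_example(const char *user_buf, size_t len) {
    char kbuf[KBUF_SIZE];

    if (len >= KBUF_SIZE)                 /* sanity-check the located arguments */
        return -1;

    memcpy(kbuf, user_buf, len);          /* copy BEFORE checking; a real kernel would use a
                                             guarded copy routine so a bad pointer cannot crash it */
    kbuf[len] = '\0';

    for (size_t i = 0; i < len; i++)      /* validate the private copy: the caller can no
                                             longer change it between check and use */
        if (kbuf[i] == '\0')
            return -1;

    /* ... perform the operation using kbuf ... */
    return 0;
}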
MTTR, and therefore availability, can be improved by reducing the time to reboot a system after a failure. What techniques might you use to speed up booting? Would your techniques always work after a failure?
take a checkpoint at a regular interval as the OS is running; after a failure, restore the most recent checkpoint instead of booting from scratch. this will not always work: if the checkpoint itself is corrupted, or the state that caused the failure was captured in the checkpoint, restoring it will just fail again
what happens if we run the following program? main() { while (fork() >= 0) ; }
a fork bomb: processes are created until the OS hits its limit (process table or memory exhausted), and no other processes can be created; as soon as one process dies, another fork succeeds and takes its place, effectively locking up the OS
explain why an os supports communication through shared regions of memory
memory sharing is useful for large amounts of data (complex data structures) to prevent duplication
1  #define NTHREADS 10
2  thread_t threads[NTHREADS];
3  main() {
4      for (i = 0; i < NTHREADS; i++) { thread_create(&threads[i], &go, i); }
5      for (i = 0; i < NTHREADS; i++) {
6          exitValue = thread_join(threads[i]);
7          printf("Thread %d returned with %ld\n", i, exitValue);
8      }
9      printf("Main thread done.\n");
10 }
11 void go (int n) {
12     printf("Hello from thread %d\n", n);
13     thread_exit(100 + n);
14 }
what is the minimum and maximum number of times that the main thread enters the waiting state at thread_join (line 6)?
min = 0 -- all child threads can be created and run to completion before the main thread reaches the call to thread_join, so every join returns immediately. max = 10 -- no child thread has finished by the time the main thread reaches the joins, so main waits once for each of the 10 threads
how is a microkernel 'poorer' than a monolithic kernel?
more overhead: extra system calls and message passing are needed for communication between the user-level services/libraries and the kernel, which costs performance
explain why an os supports communication through messages (pipes/sockets) passed between applications
most communication between applications occurs through messages; message passing is an effective way for programs to communicate in real time
chopstick philosopher problem - for a system with n philosophers, what is the minimum number of chopsticks that ensures deadlock freedom? why?
n+1. because in any state, if any philosopher wishes to eat, some philosopher can finish eating, returning chopsticks to the pool, and allowing any other philosopher to eat
https://drive.google.com/file/d/1Z7jSYpVT1SGc5_c-YCKJLpSEq1XKf4x4/view?usp=sharing will this sequence of requests work? C Request (1,0), C Releases (2, 1), B Requests (0, 1), A Requests (2, 0)
no
can a unix/linux pipe be used for full-duplex communication? why or why not
no - a pipe only supports half-duplex communication; data can flow only one way through a pipe, from the write end to the read end (see the sketch below). full-duplex communication requires two pipes or a socket
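A minimal sketch: pipe() returns one read end and one write end, so data can flow only from fd[1] to fd[0]; full-duplex communication would need a second pipe (or a socketpair).

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];
    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                     /* child: reads from the read end only */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child got: %s\n", buf);
        }
        return 0;
    }
    close(fd[0]);                          /* parent: writes to the write end only */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}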
for SJF, if the scheduler assigns a task to the processor and no other task becomes schedulable in the meantime, will the scheduler ever preempt the current task? why/why not
no, if a job is scheduled, then it must be the shortest one. unless an even shorter one arrives, it will remain the shortest one until it finishes.
a virtual memory system that uses paging is vulnerable to external fragmentation. why or why not?
no. external fragmentation is unusable space in the gaps between variable-sized memory regions -- with paging, all memory is allocated in fixed size units
Referee/Illusionist/Glue controls communications between users and processes
referee
Referee/Illusionist/Glue enforcing file access permissions
referee
Referee/Illusionist/Glue isolates running programs from one another
referee
describe the advantage of an architecture that incorporates segmentation and paging over ones that are pure paging
relative to pure paging (single or multi level) segmentation (plus paging) supports variable sized memory regions, with a separate page table for each region
describe the advantages of an architecture that incorporates segmentation and paging over ones that are pure segmentation
relative to pure segmentation (single or multi level): paging (plus segmentation) allows the TLB to perform efficient lookups, as the base element is fixed size
1:  class TooSimpleFutexLock {
2:  private:
3:      int val;
4:  public:
5:      TooSimpleFutexLock() : val(0) { }   // Constructor
6:      void acquire() {
7:          int c;
8:          // atomic_inc returns *old* value
9:          while ((c = atomic_inc(val)) != 0) {
10:             futex_wait(&val, c + 1);
11:         }
12:     }
13:     void release() {
14:         val = 0;
15:         futex_wake(&val, 1);
16:     }
17: };
The goal of this code is to avoid making expensive system calls in the uncontested case of an acquire on a FREE lock or a release of a lock with no other waiting threads. This code fails to meet this goal at line 15. Why?
release always makes a system call, even if there are no waiting threads
older computer OSes tended to be batch systems (one processor) and newer computer OSes tend to be interactive systems (2+ processors). considering users, what design feature has become more important in newer OSes?
response time
what is the difference between security and privacy?
security is the protection of a computer's operations so that a malicious attacker cannot perform unauthorized operations. privacy is the protection of user data from unauthorized access
paging / segmentation - variable length allocation
segmentation
most round-robin schedulers use a fixed-size quantum. give an argument against a small quantum
small time quantum will increase overhead due to the cost of switching contexts and cache interference
Unix/Linux uses one system call "open(args)" for I/O. why does Unix NOT use separate "open/create/exists" calls for I/O?
open(args) is a single general-purpose system call: its flag arguments control the behavior of the I/O (e.g., create the file if it does not exist, or fail if it already exists), so those checks happen atomically inside one call instead of racing across several. a single call also keeps the interface to user applications simple; a sketch follows below
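A minimal sketch of flags folding create/exists behavior into the one call (the file name is a placeholder): O_CREAT | O_EXCL asks open() to create the file and to fail atomically if it already exists, so no separate exists()/create() calls or race window are needed.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.log", O_WRONLY | O_CREAT | O_EXCL, 0644);  /* hypothetical file */
    if (fd < 0) {
        perror("open");                    /* e.g. EEXIST if the file is already there */
        return 1;
    }
    write(fd, "ok\n", 3);
    close(fd);
    return 0;
}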
when a user process is interrupted or causes a processor exception, the x86 hardware switches the stack pointer to a kernel stack, before saving the current process state. explain why.
the user stack pointer may be corrupted or invalid. switching to the kernel stack ensures that there is a valid memory region in which to store the process state. the kernel stack is inaccessible to the user and is protected, so the kernel can save the state information safely
processor utilization: 20% disk: 99.7% network: 5% explain how a faster disk will affect processor utilization
the disk is the bottleneck. a faster disk will reduce the time spent waiting for disk I/O. this will SIGNIFICANTLY INCREASE CPU utilization
T/F a loadable device driver means that the kernel does not have to be recompiled to use the device
true
T/F a timer interrupt is an example of an asynchronous interrupt
true
T/F an os kernel can use internal threads
true
T/F when a user attempts to execute a privileged instruction (in user mod) the os should stop the process
true
T/F if a multi threaded program runs correctly in all cases on a single time-sliced processor, it will run correctly if each thread is run on a separate processor of a shared-memory multiprocessor
true -- to run correctly in all cases on a single processor implies that the program's threads execute correctly under any interleaving of their operations, that is, they are independent of the order of instruction execution; a shared-memory multiprocessor just produces another such interleaving
1:  class TooSimpleFutexLock {
2:  private:
3:      int val;
4:  public:
5:      TooSimpleFutexLock() : val(0) { }   // Constructor
6:      void acquire() {
7:          int c;
8:          // atomic_inc returns *old* value
9:          while ((c = atomic_inc(val)) != 0) {
10:             futex_wait(&val, c + 1);
11:         }
12:     }
13:     void release() {
14:         val = 0;
15:         futex_wake(&val, 1);
16:     }
17: };
A corner case (lines 9-10) can cause the mutual exclusion correctness condition to be violated, allowing two threads to both believe they hold the lock. What is the problem?
val can be repeatedly incremented by each thread, so it is possible for it to wrap around to 0, allowing multiple threads to simultaneously believe they have acquired the lock.
1: void RWLock::doneRead() {
2:     lock.acquire();
3:     activeReaders--;
4:     if (activeReaders == 0 && waitingWriters > 0) {
5:         writeGo.signal();
6:     }
7:     lock.release();
8: }
why do we use writeGo.signal() rather than writeGo.broadcast() at line 5?
when a reader finishes, at most one writer can make progress, and any one of the waiting writers will do; a broadcast would wake every waiting writer only for all but one to immediately go back to sleep, which is wasteful and unnecessary
https://drive.google.com/file/d/1xzrTfmBbwurAtn4V0sHpF8rF4dNf9EP-/view?usp=sharing will the sequence of requests work? B Request (1), B Releases (3), A Request (1), C Request (2)
yes
can fork return an error? why or why not?
yes, if system runs out of resources (memory, TCB, etc)
can exec() return an error? why or why not?
yes, if the executable doesn't exist or the user doesn't have access to the executable
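A minimal sketch checking both failure cases (the /bin/ls target is just illustrative): fork() returns -1 when the system is out of resources, and exec only returns at all when it fails, for example because the executable does not exist or is not accessible.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {                        /* fork failed: out of memory, process table full, ... */
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execl("/bin/ls", "ls", (char *)NULL);
        perror("exec");                   /* reached only if exec failed */
        _exit(127);
    }
    wait(NULL);
    return 0;
}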
suppose that you create a local variable v in one thread t1 and pass a pointer to v to another thread, t2. is it possible that a write by t2 to v will cause t1 to execute the wrong code?
yes. threads share an address space and have no protection from reads/writes by other threads, so t2 can write through the pointer to v, which lives on t1's stack. if t1 has already returned from the function (or the stack slot has been reused, e.g. for a return address), t2's write corrupts t1's stack and can cause t1 to execute the wrong code; a sketch follows below
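A minimal sketch of the hazard with POSIX threads (deliberately buggy; the names are illustrative): t1 passes the address of a local variable to t2 and then returns, so t2's later write lands in stack memory that t1 has since reused.

#include <pthread.h>
#include <unistd.h>

static int *shared_ptr;                 /* t1 hands t2 a pointer into t1's stack */

static void *t2_body(void *arg) {
    sleep(1);                           /* runs after t1's stack frame is gone */
    *shared_ptr = 42;                   /* scribbles on whatever now occupies that slot */
    return NULL;
}

static void t1_body(pthread_t *t2) {
    int v = 0;                          /* local variable on t1's stack */
    shared_ptr = &v;
    pthread_create(t2, NULL, t2_body, NULL);
    /* t1 returns here; v's stack slot may later hold a return address or other
       frame data, so t2's write can make t1 execute the wrong code */
}

int main(void) {
    pthread_t t2;
    t1_body(&t2);
    pthread_join(t2, NULL);
    return 0;
}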
