COSC 423 midterm
dining-philosophers problem - solving using monitors
5 philosophers and 5 chopsticks. deadlock and starvation can happen
Is it possible to have concurrency but not parallelism? Explain.
A system has concurrency when it can interleave and make progress on multiple processes over the same period of time. A system has parallelism when it can execute two or more processes at the same instant, which requires multiple processing cores. A single-core system with a process scheduler therefore has concurrency but not parallelism.
Identify the values of pid at lines A, B, C, and D. (Assume the pids of the parent and child are 2600 and 2603, respectively.)

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    pid_t pid, pid1;

    /* fork a child process */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        return 1;
    } else if (pid == 0) { /* child process */
        pid1 = getpid();
        printf("child: pid = %d", pid);    /* A */
        printf("child: pid1 = %d", pid1);  /* B */
    } else { /* parent process */
        pid1 = getpid();
        printf("parent: pid = %d", pid);   /* C */
        printf("parent: pid1 = %d", pid1); /* D */
        wait(NULL);
    }
    return 0;
}
A=0, B=2603, C=2603, and D=2600.
monitors
ADT in which only one process at a time can be active within the monitor. Has variables of a condition type: these can only be used through x.wait() and x.signal().
signal-and-wait: the signaling process waits until the signaled process leaves the monitor, or waits on another condition.
signal-and-continue: the signaled process waits until the signaler leaves the monitor, or waits on another condition.
Which of the following components of program state are shared across threads in a multithreaded process? a. Register values b. Heap memory c. Global variables d. Stack memory
Global variables and heap memory
What is the relationship between a guest operating system and a host operating system in a system like VMware? What factors need to be considered in choosing the host operating system?
The guest operating system runs inside a virtual machine; the virtual machine manager (such as VMware) runs on the host operating system and presents the guest with a virtualized copy of the underlying hardware, so the guest behaves as if it had the machine to itself. In choosing the host, you need to consider whether it supports the VMM software, whether it has enough CPU, memory, and disk capacity to run guests alongside its own workload, and how well its hardware and device support match what the guests need.
Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system? Explain! You should consider that an OS might implement user-level threads in different ways.
If the OS maps all user-level threads onto a single kernel-level thread (many-to-one), then running a threaded program on a multiprocessor will have no impact, since only one thread can be scheduled at a time. Conversely, if the OS maps user-level threads onto multiple kernel-level threads (many-to-many or one-to-one), then the threads can run in parallel on different processors and performance can improve.
The CPU can continue to execute other programs while the DMA controller is transferring data. Does this data transfer interfere with the execution of the other programs? If so, describe what kinds of interference may occur.
Yes, two kinds of interference can occur. First, the DMA controller competes with the CPU for memory-bus cycles ("cycle stealing"), which can slightly slow the CPU's own memory accesses. Second, if the operating system is designed poorly, the memory block given to the I/O device for DMA might be accessed by the CPU while the device is still using it; both would then be updating the same memory locations, and one could overwrite data that is critical to the process the other is serving.
What purpose do interrupts serve? How is a trap different from an interrupt? Can a user program deliberately generate traps? If so, to what purpose?
Interrupts tell the CPU to stop whatever it is currently doing, save its state for later, and run the handler routine associated with the interrupt. Interrupts can be generated by hardware devices (such as I/O controllers) or by software; a trap is a software-generated interrupt, caused either by an error (like division by zero) or by a deliberate request from a user program. A user program deliberately generates a trap to make a system call, i.e., to request a service from the operating system.
What output will be generated at Line X and Line Y in the following program?

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

#define SIZE 5

int nums[SIZE] = {0, 1, 2, 3, 4};

int main() {
    int i;
    pid_t pid;

    pid = fork();
    if (pid == 0) {
        for (i = 0; i < SIZE; i++) {
            nums[i] *= -i;
            printf("CHILD: %d ", nums[i]);   /* LINE X */
        }
    } else if (pid > 0) {
        wait(NULL);
        for (i = 0; i < SIZE; i++)
            printf("PARENT: %d ", nums[i]);  /* LINE Y */
    }
    return 0;
}
Line X: This line is only reached if the current process is the child process. The for loop will print 0, -1, -4, -9, -16. Line Y: Only reached if the current process is the parent. The for loop will print 0, 1, 2, 3, 4.
What is the main advantage of the microkernel approach to systems design? How do user programs and system services interact in a microkernel architecture? What are the disadvantages of using the microkernel approach?
Microkernels are the smallest and simplest kernels: only essential functionality (scheduling, memory management, interprocess communication) stays in the kernel, and everything else runs as user-level services. This makes the system easier to extend and to port to new architectures, and a failed service is less likely to crash the whole system. Client programs and system services communicate by message passing through the microkernel. The main disadvantage is performance overhead: messages must be copied between services, and running system services in user space adds extra mode and context switches.
The hardware of some computer systems don't provide a privileged mode of operation. Might it be possible to construct a secure operating system for these computer systems? Give arguments both that it is and that it is not possible.
It might be possible: if all user programs were run through a trusted interpreter or compiler that refuses to execute any instruction that could compromise security (direct hardware access, arbitrary memory references), the system could remain secure, though this would severely limit what user programs can do and how fast they run. The argument that it is not possible: without a privileged mode, nothing in hardware stops a program from executing any instruction directly, so a program that gains raw access to memory or devices can bypass any software protection.
Consider a symmetric multiprocessing system similar to what is shown in Figure 1.8 ("Symmetric multiprocessing architecture"). Give an example of how data residing in memory could have two different values in each of the local caches.
Processor0 receives an instruction to read the data in memory address 0x3001, add it to 0x3002 and store it back into 0x3001. While the value is being computed (and before it has been stored back into 0x3001), processor1 receives instructions to also read the data in memory address 0x3001. This would result in the cache of each processor containing different values for 0x3001.
calculating a job's waiting time
Ta = job arrival time
Tt = job termination time (when it completed its CPU burst)
Tb = duration of the job's CPU burst
Tw = Tt - Ta - Tb
For preemptive and non-preemptive versions alike, draw the Gantt chart first to find each job's Tt.
Direct memory access (DMA) is a technique for interacting with I/O devices in such a way that the CPU's load is not appreciably increased. How does the CPU interact with the device to coordinate the transfer of data between the device and the CPU? How does the CPU know when the transfer is complete?
The CPU sets up buffers, pointers, and counters for the I/O device and tells the DMA controller where the data should go. From there the transfer is handled entirely by the DMA controller, which moves data directly between the device and memory. The DMA controller raises an interrupt when the whole transfer is complete, which is how the CPU knows it is done.
Describe the actions taken by a kernel to context-switch between processes.
When the kernel receives an interrupt, it saves the state of the current process (PC, registers, process state) into the process' PCB. It then loads the saved state of the new process into the registers and begins execution.
readers-writers problem
Writers must have exclusive access to the shared data set to prevent inconsistent updates. Multiple readers can access it at the same time, since reading cannot corrupt the data; allowing readers to overlap improves concurrency.
Describe how you could obtain a statistical profile of the amount of time a program spends executing different sections of its code. Discuss the importance of such a statistical profile.
You would periodically interrupt the program (e.g., with a timer) and sample the program counter to record which section of code was executing; over many samples, the fraction of hits landing in each section estimates the fraction of time spent there. Such a profile shows where the program actually spends its time, which is essential for targeted optimization and debugging.
calculating response time
the amount of time between a process's arrival and when it first begins execution: p1 takes 7 seconds starting at time 0. p2 executes immediately after p1 but arrived at time 3, so the response time of p2 is 7 - 3 = 4.
semaphores
an int that is accessed only through the atomic operations wait() and signal(). processes that cannot proceed block until they can.
counting semaphores: value is unrestricted - useful when a resource has multiple instances that several processes may hold at once.
binary semaphores: value is 0 or 1 - behaves very much like a mutex lock.
explain compare_and_swap()
atomic: compare_and_swap(&value, expected, new) sets value to new only if value currently equals expected, and always returns the original value of value. can be used to implement mutual exclusion with a global lock variable initialized to 0: a process enters its critical section only if it successfully swaps lock from 0 to 1, and sets it back to 0 when it is complete.
explain test_and_set()
atomic - if two cores call it at the same time, the calls execute sequentially. test_and_set() sets a boolean lock to true and returns its previous value; a process may enter its critical section only if the returned value was false, which implements mutual exclusion.
first come first serve
allocates the CPU to processes in the order they arrive in the ready queue. non-preemptive: each process keeps the CPU until its burst finishes. simple, but short jobs can get stuck waiting behind a long one (the convoy effect).
Provide pseudo-code describing how the producer consumer problem might be implemented with mailboxes (see the section, Naming, on indirect communication).
The idea: use two mailboxes, one holding "empty slot" messages and one holding produced items; the blocking behavior of receive() on an empty mailbox gives the needed synchronization.
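One possible pseudocode sketch, assuming the textbook's indirect-communication primitives send(mailbox, msg) and receive(mailbox, msg), with hypothetical mailbox names mayProduce and mayConsume:

```
mailbox mayProduce    // holds "empty slot" tokens
mailbox mayConsume    // holds produced items

// initialization: prime mayProduce with N empty messages,
// one per buffer slot
for i = 1 to N:
    send(mayProduce, emptyMessage)

producer:
    loop:
        receive(mayProduce, m)   // blocks when no free slot remains
        m = produceItem()
        send(mayConsume, m)

consumer:
    loop:
        receive(mayConsume, m)   // blocks when nothing has been produced
        consumeItem(m)
        send(mayProduce, emptyMessage)   // return the slot
```

The N tokens circulating between the two mailboxes bound the buffer, so no semaphores are needed beyond the mailboxes themselves.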
java synchronization with monitors
each java object has an associated lock. calling a synchronized method requires acquiring that lock; a thread that cannot get it is placed in the object's entry set and waits. this gives monitor semantics: no two threads can be active inside synchronized methods of the same object at the same time.
priority scheduling
each process is given a priority, and the CPU is allocated to the highest-priority process first. the priority range varies by system - e.g., 0 to 7 or 0 to 4,095 - and some systems treat high numbers as high priority while others do the reverse.
calculating turnaround time
exit (completion) time - arrival time, computed for each process.
POSIX synchronization
provides mutex locks, condition variables, and named and unnamed semaphores
What advantages might a peer-to-peer system enjoy over client-server systems?
in a p2p system, if one node goes down the rest of the network can continue to operate; if the server in a client-server system goes down, the whole service is unavailable. In addition, p2p systems can distribute work and data across all nodes instead of concentrating load on a single server.
What are the two models of interprocess communication? What are the strengths and weaknesses of the two approaches?
message passing: packets of information are copied between processes by the OS. easier to set up than shared memory and needs no explicit synchronization by the processes, but slower because every message goes through the kernel.
shared memory: two or more processes read and write the same memory block. faster than message passing, since the kernel is only involved in setup, but problems arise when processes are not synchronized correctly or if an unintended program accesses the memory block.
mutex lock
mutex lock (short for mutual exclusion lock): must be acquired before a process can enter its critical section and released when it leaves, via acquire() and release(). A process that attempts to acquire an unavailable lock blocks until the lock becomes available.
Explain the needed elements of solving the critical-section problem: mutual exclusion, progress and bounded waiting.
mutual exclusion: if process P is executing in its critical section, then no other process may execute in its critical section
progress: if no process is executing in its critical section and some processes wish to enter theirs, then only processes not in their remainder sections may participate in deciding which enters next, and that decision cannot be postponed indefinitely
bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted
preemption and non-preemption
non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it terminates or switches to the waiting state (e.g., for I/O). preemptive: the CPU can be taken away from a running process, e.g., when a timer interrupt moves it from running back to ready, or when an I/O completion moves a higher-priority process from waiting to ready.
Give an example of a situation in which ordinary pipes are more suitable than named pipes and an example of a situation in which named pipes are more suitable than ordinary pipes.
ordinary pipes: for a typical parent-child producer-consumer setup, ordinary pipes provide all of the needed functionality with minimal setup (just pipe() and fork()). named pipes: exist as filesystem objects, so processes with no parent-child relationship can communicate through them, and they persist after the communicating processes exit; on Windows, named pipes can even connect processes on different machines.
Bounded-buffer problem
producer blocks if buffer is full, consumer blocks if buffer is empty
Describe the three general methods for a process to pass parameters to the operating system.
registers: the simplest - parameters are stored in registers when the system call is made. Limits the number and size of parameters. memory block: the address of a block (or table) of parameters is passed in a register instead, allowing more and larger parameters. stack: parameters are pushed onto the stack by the program and popped off by the operating system.
round robin
similar to FCFS but with preemption. The CPU scheduler goes around the ready queue (which is circular) and allocates the CPU to each process for a time interval of up to 1 time quantum.
calculating throughput
the number of completed processes per unit of time: if p1 takes 3 seconds and p2 takes 5 seconds, both complete within 3 + 5 = 8 seconds, so throughput = 2/8 = 0.25 processes per second
multilevel queue scheduling
there are multiple ready queues, partitioned by priority, process type, etc. scheduling is needed among the queues themselves (e.g., fixed priority or time-slicing between queues), and each queue can use its own scheduling algorithm internally
shortest job first
picks the process with the shortest next CPU burst. since the next burst length can't be known in advance, it is predicted by exponential averaging of previous bursts: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the most recent actual burst and tau(n) was the previous prediction. the preemptive version is shortest-remaining-time-first: a newly arrived process preempts the running one if its burst is shorter than the running process's remaining time.