CS153 Final


Explain the concept of Copy on Write (CoW).

"Copy on Write" means that when memory is duplicated (e.g., at fork), the copy is deferred: both sides share the same physical pages, and a page is actually copied only when one of them writes to it.

Run "sleep 3" and "exec sleep 3" in your shell, one at a time. Describe what happens and explain why it happens this way.

"sleep 3": a new process is forked from the shell process and exec's the sleep program; the shell waits for this child process to exit. As a result, the shell is blocked for 3 seconds. After the child terminates, the shell continues to accept commands. "exec sleep 3": in this case, no child process is forked. The shell process itself is overwritten by the sleep program. As a result, after sleeping for 3 seconds, the process terminates and the terminal window disappears, because there is no shell left to return to. If you perform this test in a remote shell (e.g., over ssh), the connection will be closed.

In this bank example, suppose you have $1000, you withdraw $100, and your father withdraws $200 at another ATM simultaneously. What can the remaining balance be?

withdraw(account, amount) {
    balance = get_balance(account);
    balance = balance - amount;
    put_balance(account, balance);
    return balance;
}

- $900 - $800 - $700

Which of the following are true? - A process is more costly to create than a thread. - Communicating between threads is more costly than communicating between processes. - Each thread has its own stack. - Each thread has its own address space.

- A process is more costly to create than a thread. - Each thread has its own stack.

Which of the following are true? - In UNIX, we execute a new program calling a fork() system call. - A process with multiple threads has multiple stacks - Each thread has its own address space - When a kernel-level thread makes a blocking system call, the other threads in the same process are blocked. - User-level threads are faster to create, schedule and manipulate.

- A process with multiple threads has multiple stacks - User-level threads are faster to create, schedule and manipulate.

What can an operating system do?

- Access Control - Support Concurrency - Provide Persistent Storage - Virtualize Memory - Support Networking - Virtualize CPU

Which of the following are true? - An application can NOT call arbitrary functions in the OS kernel. - The OS kernel is always running to monitor and control the execution of applications - Only the OS kernel can directly access I/O devices.

- An application can NOT call arbitrary functions in the OS kernel. - Only the OS kernel can directly access I/O devices.

Which of the following are operating systems? - Android - Google Chrome - QEMU - UNIX - iOS - Zoom - XV6 - Adobe Acrobat

- Android - UNIX - iOS - XV6

What is the timer interrupt used for?

- Context switch - Sleep()

Describe how to add a new system call in XV6.

- Define syscall number - Add an entry in the syscall table - Implement syscall function - Add a user-space declaration
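The four steps above touch a handful of files in the x86 xv6-public tree; the sketch below uses a hypothetical sys_hello() syscall and its message as an example (names are illustrative, not from the xv6 source):

```c
/* 1. syscall.h — define the syscall number */
#define SYS_hello 22

/* 2. syscall.c — add an entry in the syscall table */
extern int sys_hello(void);
/*   [SYS_hello] sys_hello,   added to static int (*syscalls[])(void) */

/* 3. sysproc.c — implement the syscall function */
int sys_hello(void) {
  cprintf("hello from the kernel\n");
  return 0;
}

/* 4. user-space declaration and trap stub:
   user.h:  int hello(void);
   usys.S:  SYSCALL(hello)   */
```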

Which of the following are true? - A symbolic link shares the same inode with its target file - Each file or directory has a unique inode. A directory is an inode. - A directory contains a list of names and their inodes. - A hard link shares the same inode with its target file

- Each file or directory has a unique inode. A directory is an inode. - A directory contains a list of names and their inodes. - A hard link shares the same inode with its target file

Order the steps in the virtual memory system when CPU issues a memory read.

- Given a virtual address (VA), get VPN and VPO. - Given a VPN, look up TLB to find the corresponding PPN. If PPN is found, skip the next step. - Given a VPN, look up Page Table and find the PTE. If the PTE is valid, find PPN; otherwise, raise page fault. - Given PPN and VPO, get the physical address (PA) - Given a physical address (PA), read Hardware Cache. If cache hit, done; otherwise, next step. - Given a physical address (PA), read Main Memory

Which of the following are true? - Applications run directly on the CPU and can directly access I/O devices. - The OS kernel runs in Ring 3, and the applications run in Ring 0 - If an application executes a protected instruction, the CPU will raise a fault. - Applications cannot call arbitrary kernel functions. - The OS is sleeping most of the time and can be woken up by events like interrupts, system calls, faults and signals.

- If an application executes a protected instruction, the CPU will raise a fault. - Applications cannot call arbitrary kernel functions. - The OS is sleeping most of the time and can be woken up by events like interrupts, system calls, faults and signals.

The following events are asynchronous

- Interrupt - Signal

Which are true about paging? - It is easy to allocate memory - A page table is used to translate virtual address to physical address - Paging may have external fragmentation - Two processes can share one page

- It is easy to allocate memory - A page table is used to translate virtual address to physical address - Two processes can share one page

Which of the following belong to a process after the introduction of threads? - Program Counter - PID - Address Space - OS-Level Resources (e.g., open files, network connections) - Stack Pointer

- PID - Address Space - OS-Level Resources (e.g., open files, network connections)

What does PCB (Process Control Block) contain?

- PID - Process Name - Open Files - Execution State (e.g., Ready, Running, Waiting, etc.) - Context (CPU registers for context switch)

Which of the following are preemptive scheduling algorithms? - Round Robin - SRJF - FIFO - SJF

- Round Robin - SRJF

Which of the following scheduling algorithms may cause starvation? (assume no aging mechanism is in place) - SJF - SJRF - FIFO - Round Robin - Priority Scheduling - MLFQ

- SJF - SJRF - Priority Scheduling

Which one is the slowest in disk access? - Seek Time - Rotational Latency - Data Transfer

- Seek Time

The following events are synchronous

- System Call - Fault

Which of the following is true? - The design of multi-level page tables reduces the footprint of page tables. - L1/L2 cache is addressed by virtual address, not physical address - TLB speeds up the address translation - Page tables are stored in TLB, not memory

- The design of multi-level page tables reduces the footprint of page tables. - TLB speeds up the address translation

Which of the following are true? - To achieve synchronizations, software locks don't really work, and architectural support is necessary. - Disabling interrupts can achieve mutual exclusion, but it doesn't work for multiprocessors. - When sem_post() is called with the semaphore value being 0, the calling thread will be blocked. - To protect a critical region, we can use a semaphore and initialize it to 0. - A wait queue is associated with each semaphore.

- To achieve synchronizations, software locks don't really work, and architectural support is necessary. - Disabling interrupts can achieve mutual exclusion, but it doesn't work for multiprocessors. - A wait queue is associated with each semaphore.

Which of the following are true? - User-level threads are invisible to the OS kernel - Kernel-level threads are faster to create, schedule and manipulate. - When a user-level thread makes a blocking system call, the other threads in the same process are blocked.

- User-level threads are invisible to the OS kernel - When a user-level thread makes a blocking system call, the other threads in the same process are blocked.

Which ring does OS run on?

0

Consider a simple memory system in lec20.pdf. For each of the following virtual addresses, answer the following questions: VPN? VPO? TLB miss or hit? Page fault? Physical address if applicable? Cache miss or hit if applicable? Actual value if applicable? - 0x0032

0x0032 = 000000 00 110010 (TLBT | TLBI | VPO)
TLB miss. VPN = 0x00, PPN = 0x28, no page fault.
Physical address: 101000 110010
CO = 2, CI = 0xC, CT = 0x28
Cache miss; the value comes from memory.

Consider a simple memory system in lec20.pdf. For each of the following virtual addresses, answer the following questions: VPN? VPO? TLB miss or hit? Page fault? Physical address if applicable? Cache miss or hit if applicable? Actual value if applicable? - 0x028D

0x028D = 000010 10 001101 (TLBT | TLBI | VPO)
TLB miss. Look up the page table: VPN = 0x0A, PPN = 0x09, no page fault.
Physical address: 001001 001101
CO = 1, CI = 3, CT = 0x09
Cache miss; the value comes from memory.

Consider a simple memory system in lec20.pdf. For each of the following virtual addresses, answer the following questions: VPN? VPO? TLB miss or hit? Page fault? Physical address if applicable? Cache miss or hit if applicable? Actual value if applicable? - 0x03C6

0x03C6 = 000011 11 000110 (TLBT | TLBI | VPO)
VPN = 0x0F, TLBI = 0x3, TLBT = 0x03, PPN = 0x0D
TLB hit, no page fault.
Physical address: 001101 000110
CO = 2, CI = 1, CT = 0x0D
Cache miss; the value comes from memory.

Can you explain the first eight steps in the timeline below for file read?

1) read root inode to locate root data
2) read root data to look up foo and find foo's inode
3) read foo inode to locate foo data
4) read foo data to find bar's inode
5) read bar inode to retrieve bar's metadata as part of the file open operation
6) read bar inode to locate bar data
7) read bar data
8) write bar inode to update its last access timestamp

Can you explain the first 10 steps in the timeline below for file write?

1) read root inode to locate root data
2) read root data to look up foo and find foo's inode
3) read foo inode to locate foo data
4) read foo data to read all entries under this folder
5) read inode bitmap to find a free inode
6) write inode bitmap to mark that free inode as taken
7) write foo data to insert an entry for bar and its inode
8) read bar inode so we can write it next
9) write bar inode to initialize and set up the inode
10) write foo inode to update its timestamp

Consider a UNIX-style inode with 10 direct pointers, one single-indirect pointer, and one double-indirect pointer only. Assume that the block size is 8K bytes, and the size of a pointer is 4 bytes. a) What is the largest file size that can be indexed in this system?

10 * 8K + 8K/4 * 8K + 8K/4 * 8K/4 * 8K = 80K + 16M + 32G

Consider a UNIX-style inode with 10 direct pointers, one single-indirect pointer, and one double-indirect pointer only. Assume that the block size is 8K bytes, and the size of a pointer is 4 bytes. b) How many blocks (including indirect blocks) are needed to store a file of size 100 bytes, 10K bytes, 10M bytes, and 4G bytes?

100 bytes: 1 block.
10K: one block for the first 8K and one block for the remaining 2K, so 2 blocks.
10M: 10M is 10240K. The first 80K is stored in 10 direct blocks. The remaining 10160K needs 10160K / 8K = 1270 data blocks, reached through a single-indirect pointer that points to 1 indirect block. So in total 10 + 1 + 1270 = 1281 blocks.
4G: we need 4G / 8K = 2^19 data blocks plus some indirect blocks. We need one indirect block pointed to by the single-indirect pointer, and the first-level indirect block pointed to by the double-indirect pointer. The question is how many second-level indirect blocks the double-indirect pointer needs. Each second-level indirect block covers 8K/4 * 8K = 16M, so we need ceil((4G - 80K - 16M) / 16M) second-level blocks, which is (4G - 80K)/16M - 1 rounded up, i.e., 4G/16M - 1 = 2^8 - 1. So we need 1 + 1 + 2^8 - 1 = 2^8 + 1 indirect blocks in total, and the total number of blocks is 2^19 + 2^8 + 1.

Evaluate the following code snippet:

#define N 2
int main()
{
    for (int i = 0; i < N; i++) {
        fork();
    }
    printf("hello world!\n");
    return 0;
}

How many "Hello World" messages will be printed, and why?

4 "Hello World!" messages: each of the two fork() calls doubles the number of processes (2^2 = 4), and each process prints one message.

What is a process?

A process is a program in execution, an instantiation of a program, an OS abstraction of execution.

Priority Scheduling

Chooses the next job based on priority. However, this can lead to starvation of low priority jobs.

Explain every line and every column of the ls command output.

Columns (left to right, assuming "ls -li" so the inode number is shown):
1) Inode number
2) Permissions
3) Link count
4) Owner
5) Group
6) File size
7) Last-modified time
8) Directory / file name

Each line starts with the inode number of a directory or file. The first character of the permissions field gives the file type: 'd' for a directory, 'l' for a symbolic link, and '-' for a regular file. The remaining nine characters form three groups of three permissions, for the owner, the group, and other users ('r' = read, 'w' = write, 'x' = execute). Next is a number giving the current link count of the directory/file, followed by the owner and the group name. The number after the group name is the file size. The line ends with the time the directory/file was last modified and the directory/file name.

Why is it more efficient and also saves memory?

Copy on Write is more efficient because during fork we don't need to copy the entire set of memory pages from the parent to the child process, which can be very slow; we simply make both processes point to the same set of pages. Copying happens only when absolutely necessary, i.e., when the parent or child process writes to a shared page. Copy on Write saves memory too, because the child process often writes only a small number of pages, and we copy only those pages instead of all the pages belonging to the process.

A group of 3 people go to a restaurant. They wait until the last person arrives before they start ordering. Implement this scenario using threads and semaphores. Treat each person as a thread. (hint: use pthread_create, sem_init, sem_wait, sem_post.)

Creation of 3 threads for 3 people is omitted.

sem_t s[3]; // initialize each s[i] to 0

void person1() {
    printf("Person 1 has arrived.\n");
    sem_post(&s[0]);
    sem_post(&s[0]);
    sem_wait(&s[1]);
    sem_wait(&s[2]);
    order();
}

void person2() {
    printf("Person 2 has arrived.\n");
    sem_post(&s[1]);
    sem_post(&s[1]);
    sem_wait(&s[0]);
    sem_wait(&s[2]);
    order();
}

void person3() {
    printf("Person 3 has arrived.\n");
    sem_post(&s[2]);
    sem_post(&s[2]);
    sem_wait(&s[0]);
    sem_wait(&s[1]);
    order();
}

SJF (Shortest job first)

Chooses the job with the shortest total running time. It is non-preemptive; the preemptive variant (SRJF) picks the job with the least work remaining.

OS model: It can be woken up by four kinds of events. What are these events? Give an example for each kind of events.

Fault (or exception): page fault, divide-by-zero, general-protection System call (or trap): open(), close(), sleep(), etc. Interrupt: keystroke, network packets, timer, etc. Signal: SIGKILL, software timer, etc.

What does a process contain?

It contains all the state for a program in execution, including static and dynamic memory, control registers, general-purpose registers, and a set of OS resources (such as open files, network connections, etc.)

What does a thread contain?

It contains program counter, a set of general-purpose registers, stack pointer, and some book-keeping information about it.

Evaluate the following code snippet:

#define N 2
int main()
{
    for (int i = 0; i < N; i++) {
        fork();
    }
    printf("hello world!\n");
    return 0;
}

If N is 10, how many messages will be printed, and why?

It will be 2^10=1024 messages, because one fork() will result in 2 processes, N fork() calls will result in 2^N processes. Each process will print one message.

Consider the following program:

int main()
{
    int count = 0;
    int pid;
    while (!(pid = fork()) && count++ < 4) {
        printf("hello world!\n");
    }
    return 0;
}

How many "Hello world\n" messages will be printed on the screen? Explain your reasoning.

It will print out 4 "Hello world\n" messages. fork() returns a nonzero pid in the parent, so !(pid = fork()) is false and the parent leaves the loop immediately (count++ < 4 is not even evaluated, due to short-circuit evaluation). The child gets 0 from fork(), so it evaluates count++ < 4 and prints the message if the old value of count was below 4. Thus P1 forks P2 and exits the loop; P2 prints (count becomes 1) and forks P3; P3 prints (count becomes 2) and forks P4; P4 prints (count becomes 3) and forks P5; P5 prints (count becomes 4) and forks P6; P6 finds count++ < 4 false and exits without printing. So P2 through P5 each print once: 4 messages.

Explain the execution state graph (Slide 12 in Lecture 5). Define each state, and explain what causes a transition from one state to another.

New: The process is about to be created.
Ready: The process is in the ready queue and is ready to be scheduled.
Running: The process is currently running on the CPU.
Waiting: The process is waiting for certain operations (e.g., I/O) to complete.
Transition from New to Ready: a system call like fork() in UNIX or CreateProcess() in Windows.
Transition from Ready to Running: the scheduler in the OS picks one process from the ready queue to run.
Transition from Running to Ready: the scheduler puts the currently running process back into the ready queue.
Transition from Running to Waiting: the running process performs an operation (e.g., I/O, page fault, semaphore) that causes it to be put on the wait queue of that operation.
Transition from Waiting to Ready: the process is woken up; it is removed from the wait queue and inserted into the ready queue.
Transition from Running to Terminated: the process completes its execution (e.g., calls exit() or ExitProcess()).

Which scheduling algorithm has a shorter average response time?

SJF

RR (Round robin)

Schedules each task to the resource for a fixed period of time (aka time quantum).

FIFO (First in first out)

Schedules tasks in the order that they arrive.

Save the following code in mem_test.c. Compile it using GCC ("gcc -o mem_test mem_test.c") and run it (./mem_test) in a Linux shell.

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/types.h>
#include <unistd.h>

#define MEM_SIZE (128*1024*1024) // 128MB

int main(int argc, char *argv[])
{
    void *p = malloc(MEM_SIZE);
    bzero(p, MEM_SIZE);
    printf("pid = %d\n", getpid());
    getchar();
    free(p);
    return 0;
}

a) In another shell window, run the utility "free -m" to check free memory (in megabytes) in the system before and after you press the enter key in the mem_test program. How much is the difference in free memory before and after you press enter? Is this difference expected?

The difference is roughly 128MB. This difference is expected: the program allocates 128MB of memory on the heap, and bzero() writes to every page, so 128MB of physical memory is actually in use while the program waits at getchar(); it is released when you press enter and the program exits.

Save the following code in mem_test.c. Compile it using GCC ("gcc -o mem_test mem_test.c") and run it (./mem_test) in a Linux shell.

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <sys/types.h>
#include <unistd.h>

#define MEM_SIZE (128*1024*1024) // 128MB

int main(int argc, char *argv[])
{
    void *p = malloc(MEM_SIZE);
    bzero(p, MEM_SIZE);
    printf("pid = %d\n", getpid());
    getchar();
    free(p);
    return 0;
}

b) Comment out the line with "bzero". Compile the program and redo the steps in (a). How much is the difference in free memory before and after you press enter? Can you explain why the difference is much smaller?

The difference is very small, almost unnoticeable. The reason is that the allocation is lazy in the virtual memory system: the 128MB region is merely marked as allocated in the process's address space, without being mapped to actual physical pages until it is touched. Since bzero() no longer writes to the pages, almost no physical memory is consumed.

OS model: It is called limited direct execution or sleeping beauty model. Explain this concept.

The user-level code runs directly on the CPU with limited privileges, which means it cannot perform restricted operations directly. Restricted operations must be requested via system calls, which transition execution from user level (the least privileged level) to kernel level (the most privileged level). In this way, the OS is mostly sleeping and only wakes up on certain events, such as system calls, interrupts, and faults.

An implementation of the producer/consumer problem:

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&mutex);
        sem_wait(&empty);
        put(i);
        sem_post(&full);
        sem_post(&mutex);
    }
}

void *consumer(void *arg) {
    int i, tmp;
    for (i = 0; i < loops; i++) {
        sem_wait(&mutex);
        sem_wait(&full);
        tmp = get();
        sem_post(&empty);
        sem_post(&mutex);
        printf("%d\n", tmp);
    }
}

This implementation can cause a deadlock: the consumer can acquire mutex and then block on sem_wait(&full) while still holding it; the producer then blocks forever on sem_wait(&mutex), so neither thread makes progress. The mutex must be acquired after waiting on empty/full, not before.

An implementation of the producer/consumer problem:

sem_t empty;
sem_t full;

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&empty);
        put(i);
        sem_post(&full);
    }
}

void *consumer(void *arg) {
    int tmp = 0;
    while (tmp != -1) {
        sem_wait(&full);
        tmp = get();
        sem_post(&empty);
        printf("%d\n", tmp);
    }
}

int main(int argc, char *argv[]) {
    sem_init(&empty, 0, MAX);
    sem_init(&full, 0, 0);
}

This implementation is incorrect because there is a race condition: with multiple producers or multiple consumers, two threads can race on the shared buffer indices inside put()/get(), since there is no mutex protecting them.

int pthread_lock(pthread_mutex_t *mutex)
{
    while (*mutex == 1)
        ;        /* spin */
    *mutex = 1;
    return 0;
}

This implementation is wrong because there is a race condition on *mutex: two threads can both observe *mutex == 0, both exit the while loop, and both set *mutex = 1, so both enter the critical section. The test and the set must be one atomic operation (e.g., test-and-set).

What is a thread?

A thread is a sequential execution stream within a process.

OS model: How does it prevent user-mode applications from performing dangerous and privileged operations (e.g., I/O, read and write OS memory, read and write privileged CPU registers)?

User-mode applications run in a less-privileged CPU mode (Ring 3 on x86 processors). If one performs a dangerous, privileged operation, the CPU immediately raises a fault, which pauses the user-mode execution, traps into kernel mode, and lets the OS kernel handle the fault.

Explain how Copy on Write works when a child process is forked from its parent.

When a child process is forked from its parent, both the parent's and the child's page tables point to the same set of physical pages, and these pages are marked read-only in both page tables. When either the parent or the child later writes to one of these pages, the CPU raises a protection fault; the kernel then makes a new copy of that page and maps the copy, with write permission, into the page table of the process that performed the write.

In the following implementation of two process barrier, what X and Y should be? sem_t arrive1, arrive2; sem_init(&arrive1, 0, X); sem_init(&arrive2, 0, Y); void Worker1() { ... sem_post(&arrive1); /*signal arrival */ sem_wait(&arrive2); /*await arrival of other process */ ... } void Worker2() { ... sem_post(&arrive2); sem_wait(&arrive1); ... }

X=0, Y=0

How to execute a new program in UNIX?

fork() and then exec()

What about N people? Creation of N threads for N people can be omitted.

sem_t s[N]; // initialize each s[i] to 0

void person(int p)   // p is between 0 and N-1
{
    int i;
    printf("Person %d has arrived.\n", p + 1);
    for (i = 0; i < N - 1; i++)
        sem_post(&s[p]);       // signal the other people that this person has arrived
    for (i = 0; i < N; i++) {
        if (i != p)
            sem_wait(&s[i]);   // wait for the other people
    }
    printf("Person %d starts ordering.\n", p + 1);
    order();
}

Complete the following program to implement the Producer/Consumer problem correctly.

int buffer[MAX];
int fill = 0;
int use = 0;
sem_t empty;
sem_t full;
sem_t mutex;

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        -----------
        -----------
        put(i);
        -----------
        -----------
    }
}

void *consumer(void *arg) {
    int tmp = 0;
    while (tmp != -1) {
        -----------
        -----------
        tmp = get();
        -----------
        -----------
        printf("%d\n", tmp);
    }
}

int main(int argc, char *argv[]) {
    sem_init(&empty, 0, ----);
    sem_init(&full, 0, ----);
    sem_init(&mutex, 0, ----);
}

Producer: sem_wait(&empty); sem_wait(&mutex); ... sem_post(&mutex); sem_post(&full);
Consumer: sem_wait(&full); sem_wait(&mutex); ... sem_post(&mutex); sem_post(&empty);
Initial values: empty = MAX, full = 0, mutex = 1.

Consider a process that has been allocated 5 pages of memory: P1, P2, P3, P4, and P5. The process accesses these pages in the following order: (30 Points) P1 P2 P3 P4 P1 P2 P5 P1 P2 P3 P4 P5 (i) Illustrate Belady's anomaly by precisely describing the execution of the FIFO page eviction algorithm in two cases: a) where the machine has 3 pages of physical memory, and b) where the machine has 4 pages of physical memory, and by comparing the number of page faults incurred in these two cases. (When the process begins executing, none of its pages are present in memory.)

With 3 pages of physical memory:
Ref:    P1  P2  P3  P4  P1  P2  P5  P1  P2  P3  P4  P5
Fault:   Y   Y   Y   Y   Y   Y   Y   -   -   Y   Y   -
Evict:   -   -   -  P1  P2  P3  P4   -   -  P1  P2   -
With 4 pages of physical memory:
Ref:    P1  P2  P3  P4  P1  P2  P5  P1  P2  P3  P4  P5
Fault:   Y   Y   Y   Y   -   -   Y   Y   Y   Y   Y   Y
Evict:   -   -   -   -   -   -  P1  P2  P3  P4  P5  P1
With more physical memory available, there should be fewer page faults. However, in this example, there are 9 page faults with 3 pages but 10 page faults with 4 pages.

Consider a process that has been allocated 5 pages of memory: P1, P2, P3, P4, and P5. The process accesses these pages in the following order: (30 Points) P1 P2 P3 P4 P1 P2 P5 P1 P2 P3 P4 P5 (ii) Show how the LRU page eviction algorithm would work in the same scenarios a) and b) described above.

With 3 pages of physical memory (10 page faults total):
Ref:    P1  P2  P3  P4  P1  P2  P5  P1  P2  P3  P4  P5
Fault:   Y   Y   Y   Y   Y   Y   Y   -   -   Y   Y   Y
Evict:   -   -   -  P1  P2  P3  P4   -   -  P5  P1  P2
With 4 pages of physical memory (8 page faults total):
Ref:    P1  P2  P3  P4  P1  P2  P5  P1  P2  P3  P4  P5
Fault:   Y   Y   Y   Y   -   -   Y   -   -   Y   Y   Y
Evict:   -   -   -   -   -   -  P3   -   -  P4  P5  P1

