Threads

What happens if a thread in a multi-threaded process crashes? How can you improve the fault tolerance (robustness) of a multi-threaded program?

If one thread crashes, all of the threads crash and the entire process terminates. To improve robustness, the programmer has to be extremely careful to check for errors and validate limits before proceeding with an action.
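
For illustration only, a minimal sketch (assuming POSIX threads) of that kind of defensive checking: verify each pthread call's return value before relying on the thread it was supposed to create.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical worker; real code would do the actual task here. */
    static void *worker(void *arg) {
        (void)arg;
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int err = pthread_create(&tid, NULL, worker, NULL);
        if (err != 0) {                  /* pthread functions return an error code, not -1/errno */
            fprintf(stderr, "pthread_create failed: %d\n", err);
            exit(EXIT_FAILURE);          /* stop instead of proceeding without the thread */
        }
        pthread_join(tid, NULL);
        return 0;
    }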

What are the disadvantages of threads?

Because data can be shared and accessed by other threads, you must be careful to synchronize accesses. If a signal is received, all of the threads see the same signal handler (signal handlers are per-process, not per-thread). The programmer must ensure that a function is thread-safe before using it in a multithreaded program.
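
A hedged sketch of why synchronization matters, assuming POSIX threads: two threads increment a shared counter, and only the mutex keeps updates from being lost.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                         /* shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);               /* without this, concurrent updates can be lost */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);          /* 200000 only because of the mutex */
        return 0;
    }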

What kind of applications benefit the most from user-level thread support?

Computation-heavy programs that do not block often, because user-level threads have less context-switching overhead (no kernel involvement on a switch).

What are the benefits of threads over processes?

Threads do not need IPC mechanisms or shared memory to be set up in order to communicate, and thread context switches are less expensive than process context switches.

Describe how a hybrid implementation of kernel- and user-level threads works.

Each kernel-level thread has several user-level threads multiplexed onto it: the kernel schedules the kernel-level threads, and a user-level library schedules the user-level threads within each one.

What is an event-based programming model?

Have an infinite event loop that continuously and sequentially checks which event has occurred and runs the corresponding handler.
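
A minimal sketch of such a loop, assuming poll(2) as the wait primitive; fd_net, fd_timer, and the two handlers are hypothetical stand-ins for real event sources and callbacks.

    #include <poll.h>
    #include <stdio.h>

    /* Hypothetical handlers; real ones would read the fd and react to the event. */
    static void handle_network(int fd) { printf("network event on fd %d\n", fd); }
    static void handle_timer(int fd)   { printf("timer event on fd %d\n", fd); }

    /* fd_net and fd_timer are event sources assumed to be set up elsewhere. */
    void event_loop(int fd_net, int fd_timer) {
        struct pollfd fds[2] = {
            { .fd = fd_net,   .events = POLLIN },
            { .fd = fd_timer, .events = POLLIN },
        };
        for (;;) {                               /* infinite loop: wait, dispatch, repeat */
            if (poll(fds, 2, -1) < 0)            /* block until some event occurs */
                break;
            if (fds[0].revents & POLLIN)
                handle_network(fd_net);          /* handlers run sequentially, one at a time */
            if (fds[1].revents & POLLIN)
                handle_timer(fd_timer);
        }
    }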

Explain how a web server could use threads to improve concurrency when serving client requests

Have one thread (a dispatcher) receive incoming requests and hand each request off to a worker thread that processes it, so multiple requests can be served concurrently.
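
One possible sketch of that thread-per-request pattern, assuming POSIX threads and a listening socket (listen_fd) created and bound elsewhere; handle_client is a hypothetical worker.

    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical worker: would read the request and write a response. */
    static void *handle_client(void *arg) {
        int client_fd = *(int *)arg;
        free(arg);
        /* ... read request, send response ... */
        close(client_fd);
        return NULL;
    }

    /* Dispatcher thread: only accepts connections and hands each one off. */
    void serve(int listen_fd) {
        for (;;) {
            int *client_fd = malloc(sizeof *client_fd);
            *client_fd = accept(listen_fd, NULL, NULL);
            if (*client_fd < 0) { free(client_fd); continue; }
            pthread_t tid;
            pthread_create(&tid, NULL, handle_client, client_fd);
            pthread_detach(tid);                 /* worker cleans up after itself */
        }
    }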

In an event-based programming model, why are long-running event handlers problematic? How do threads solve this problem?

If other events occur while a long handler is running, they may be missed, or they must wait until the long handler completes before they can be handled. Threads can interleave: a thread does not have to wait for another thread to finish before it can proceed (as long as there is no dependency between the threads).
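
A hedged sketch of the fix, assuming POSIX threads: the handler hands the long-running work to a detached thread and returns immediately, so the event loop keeps dispatching; long_task and on_expensive_event are hypothetical names.

    #include <pthread.h>
    #include <unistd.h>

    /* Hypothetical long-running work, e.g. a large file transfer or computation. */
    static void *long_task(void *arg) {
        (void)arg;
        sleep(10);                               /* stands in for seconds of real work */
        return NULL;
    }

    /* Called from the event loop; returns at once so new events are not missed. */
    void on_expensive_event(void) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, long_task, NULL) == 0)
            pthread_detach(tid);                 /* the loop never waits for the worker */
    }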

Why does context-switching between threads incur less overhead than between processes?

Only the thread context (register values) needs to be loaded; the virtual address space stays the same for a thread context switch. A process switch must load the new context and also switch to a different address space, and the data cached for the old process becomes useless, so the new process pays a cold-start penalty.

What kind of applications benefit the most from kernel-level thread support?

Programs that block often, because a blocked kernel-level thread can be descheduled without blocking all of the other threads of that process.

What are threads? How do they differ from processes? How are they similar?

Threads are another way to multitask (the other being processes). They differ because threads share the same address space as the other threads of their process, whereas processes each have their own. They are similar because both threads and processes execute their own sequence of instructions (execution path) within the process where they were spawned: multiple "threads of execution".

What do threads share? What do threads not share?

Threads share: address space, code, global variables, heap, file descriptors, and signals/signal handlers. Per thread: own registers, program counter, stack, stack pointer, and thread ID.
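
A small sketch (assuming POSIX threads) that makes the split visible: both threads read the same global, while each prints its own thread ID and a different stack address for its local variable.

    #include <pthread.h>
    #include <stdio.h>

    static int shared_global = 42;               /* one copy, visible to every thread */

    static void *show(void *arg) {
        (void)arg;
        int local_on_stack = 0;                  /* each thread gets its own copy */
        printf("tid=%lu global=%d &local=%p\n",
               (unsigned long)pthread_self(),    /* per-thread ID (printed as an integer for illustration) */
               shared_global,
               (void *)&local_on_stack);         /* different stack address in each thread */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, show, NULL);
        pthread_create(&b, NULL, show, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }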

What combinations of user/kernel level threads and global/local scheduling are feasible, and why?

User-level, local scheduling: feasible. Threads are managed by user libraries, not the OS, and no knowledge of other processes' threads is needed to schedule them. User-level, global scheduling: not possible. The OS does not know the user-level threads exist, so it cannot schedule them. Kernel-level, local scheduling: feasible. The kernel can still schedule the threads of a process by knowing only about that process's threads (and not others). Kernel-level, global scheduling: feasible. Threads are scheduled independently by the OS alongside the threads of OTHER processes.

What are the benefits and disadvantages of using user-level and kernel-level threads?

User-level. Pros: faster context switch between threads. Cons: if the process is preempted, none of its threads can execute; slow for threads that block a lot (one blocking thread blocks them all). Kernel-level. Pros: threads can be scheduled on different cores; good for threads that block often. Cons: slower context switch.

When would you prefer a) event-based programming b) multi-threaded programming Why?

a) Event-based programming: when user interaction is high (like GUI programs that wait for user actions), when there are many different possible actions and only one should be processed at a time, or when the overhead of context switching cannot be tolerated. b) Multi-threaded programming: when you need to divide up the work and do not need much user input to decide how to proceed through a task.

Briefly explain: a) user-level threads b) kernel-level threads c) local thread scheduling d) Global thread scheduling

a) User-level threads: the OS has no idea these threads exist; when the process is preempted, all of its threads are preempted. b) Kernel-level threads: the OS is aware of the threads as separate entities and can schedule them individually. c) Local thread scheduling: each process gets a time slice from the OS, and that slice is divided among the process's threads; scheduling decisions can be made knowing only the threads of that process (threads are NOT interleaved with other processes' threads during the PROCESS time slice). d) Global thread scheduling: each thread gets its own time slice, and any thread from ANY process can be scheduled; since knowledge of other processes is required, global thread scheduling can only be implemented with kernel-level threads.

