OS - Final exam (Week 6 - 10)


A directed arc from a process to a resource means that..

The process is currently blocked waiting for that resource.

A directed arc from a resource node to a process node means that....

The resource has previously been requested by, granted to, and is currently held by that process.

A set of processes is deadlocked if...?

Each process in the set is waiting for an event that can be caused only by another process in the set.

What are the two types of resources?

Preemptable resources and nonpreemptable resources.

Define Banker's Algorithm for multiple resources

1) Look for a row, R, whose unmet resource needs are all smaller than or equal to A. If no such row exists, the system will eventually deadlock. 2) Assume the process of the chosen row requests all the resources it needs and finishes. Mark that process as terminated and add all of its resources to the A vector. 3) Repeat steps 1 and 2 until either all processes are marked terminated (safe state) or no process is left whose resource needs can be met (deadlock).
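
The safety check above can be sketched in C. This is a minimal illustration under assumed names, not textbook code: need[i][j] holds the unmet needs, alloc[i][j] the resources currently held, and avail[] is the A vector; the process and resource counts are illustrative.

    #include <stdbool.h>

    #define NPROC 5
    #define NRES  3

    bool is_safe(int need[NPROC][NRES], int alloc[NPROC][NRES], int avail[NRES])
    {
        int a[NRES];
        bool done[NPROC] = { false };

        for (int j = 0; j < NRES; j++) a[j] = avail[j];   /* working copy of A */

        for (int finished = 0; finished < NPROC; ) {
            int i, j;
            /* Step 1: find a row whose unmet needs all fit in A */
            for (i = 0; i < NPROC; i++) {
                if (done[i]) continue;
                for (j = 0; j < NRES; j++)
                    if (need[i][j] > a[j]) break;
                if (j == NRES) break;              /* row i can run to completion */
            }
            if (i == NPROC) return false;          /* no candidate: unsafe/deadlock */

            /* Step 2: assume it finishes and returns what it holds to A */
            for (j = 0; j < NRES; j++) a[j] += alloc[i][j];
            done[i] = true;
            finished++;
        }
        return true;                               /* step 3: all terminated, safe */
    }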

Requirements to avoid race conditions

1) no two processes may be simultaneously inside their critical regions 2) no assumptions may be made about speeds or the number of CPUs 3) no process running outside its critical region may block other processes 4) no process should have to wait forever to enter its critical region

Recovery from deadlock - what are the possible methods of recovery?

1) Recovery through preemption 2) Recovery through rollback (restart the process from an earlier checkpoint) 3) Recovery through killing processes

List the sequence of events required to use a resource

1) request the resource 2) use the resource 3) release the resource

What are the conditions for resource deadlocks?

1. Mutual exclusion condition - Each resource is either currently assigned to exactly one process or is available. 2. Hold-and-wait condition - Processes currently holding resources that were granted earlier can request new resources. 3. No-preemption condition - Resources previously granted cannot be forcibly taken away from a process; they must be explicitly released by the process holding them. 4. Circular wait condition - There must be a circular list of two or more processes, each of which is waiting for a resource held by the next member of the chain.

Define non-preempt-able resource

A nonpreemptable resource cannot be taken away from its current owner without potentially causing failure

Define preempt-able resource

A preemptable resource is one that can be taken away from the process owning it with no ill effects.

Provide an example of a non-preemptable resource

If a process has begun to burn a Blu-ray, suddenly taking the Blu-ray recorder away from it and giving it to another process will result in a garbled Blu-ray.

What is a resource?

A resource is anything that must be acquired, used, and released over the course of time. A resource can be a hardware device or a piece of information (e.g., a record in a database).

How are arrays defined and accessed in C programming?

Array Definition and Initialization: Defined with a type, name, and size, and can be initialized with a set of values. Example: int myArray[3] = {100, 200, 300}; defines an array of integers with three elements. Accessing Elements: Direct access by index: myArray[0] gives 100, myArray[1] gives 200, myArray[2] gives 300. Using pointer arithmetic: *(myArray) gives 100, *(myArray + 1) gives 200, *(myArray + 2) gives 300. Understanding Pointers and Arrays: The name of the array myArray acts like a pointer to the first element of the array. Incrementing the pointer (myArray + n) accesses the nth element.
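
A minimal C program illustrating the indexing and pointer-arithmetic forms described above:

    #include <stdio.h>

    int main(void)
    {
        int myArray[3] = {100, 200, 300};

        /* Index notation and pointer arithmetic reach the same elements */
        printf("%d %d %d\n", myArray[0], myArray[1], myArray[2]);        /* 100 200 300 */
        printf("%d %d %d\n", *myArray, *(myArray + 1), *(myArray + 2));  /* 100 200 300 */
        return 0;
    }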

What is a barrier in process synchronization, and how is it used in multi-phase applications?

Barrier Mechanism: A synchronization technique used to coordinate multiple processes, ensuring that no single process proceeds to the next phase of execution until all have reached a certain point, or barrier. Operating Principle: Processes approaching a barrier must all wait until the last one arrives. When the final process reaches the barrier, all processes are simultaneously allowed to proceed to the next phase. Use Case Example: Commonly applied in computational problems with multiple phases, such as the relaxation techniques used in physics and engineering simulations, to ensure that a new phase does not begin until all calculations from the previous phase are complete
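
A small sketch using the POSIX barrier API (pthread_barrier_*), assuming a platform that provides it; the thread count and messages are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    static pthread_barrier_t barrier;

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("worker %ld: phase 1 done\n", id);
        pthread_barrier_wait(&barrier);   /* nobody starts phase 2 until all arrive */
        printf("worker %ld: phase 2\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        pthread_barrier_init(&barrier, NULL, NWORKERS);
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }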

What are the two main types of semaphores in operating systems?

Binary Semaphore (Mutex): Purpose: Provides mutual exclusion for access to a resource. Values: Can only take two values, 0 or 1. Behavior: Acts as a lock, where 1 indicates the resource is free, and 0 indicates it is locked. Counting Semaphore: Purpose: Manages access to a resource pool with a finite number of instances. Values: Can take on a range of values from 0 to n, where n is the number of resources in the pool. Behavior: Allows up to n processes to access the resource concurrently and blocks additional processes until a resource becomes free.
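
A brief sketch of both kinds using POSIX unnamed semaphores; the function names setup and use_pooled_resource are illustrative.

    #include <semaphore.h>

    sem_t mutex;   /* binary semaphore: initial value 1, used as a lock  */
    sem_t pool;    /* counting semaphore: initial value n, resource pool */

    void setup(unsigned int n)
    {
        sem_init(&mutex, 0, 1);   /* second argument 0 = shared among threads of one process */
        sem_init(&pool, 0, n);
    }

    void use_pooled_resource(void)
    {
        sem_wait(&pool);          /* blocks when all n instances are in use   */
        sem_wait(&mutex);         /* binary: at most one thread past this point */
        /* ... update shared bookkeeping for the pool ... */
        sem_post(&mutex);
        /* ... use the acquired resource instance ... */
        sem_post(&pool);          /* return the instance to the pool */
    }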

Recovery from deadlock - explain the recovery through rollback process

Checkpoint a process periodically - Checkpointing a process means that its state (e.g. the memory image and the resource state) is written to a file so that it can be restarted later. - New checkpoints should not overwrite old ones but should be written to new files. Recovery - To do the recovery, a process that owns a needed resource is rolled back to a point in time before it acquired that resource by starting at one of its earlier checkpoints

What are condition variables in Pthreads, and how do they work with mutexes?

Condition Variables: Purpose: Allow threads to sleep and wait for certain conditions to be met rather than just obtaining a lock. Use Case: Typically used with mutexes, where a mutex guards the shared data and a condition variable queues threads waiting for certain conditions on that data. Behavior: No Memory: Condition variables do not remember signals. If a signal is sent and no thread is waiting, it is lost. Wait Operation: A thread must lock a mutex before it can wait on a condition variable to ensure safe access to the shared resource. Signal Operation: Another thread can signal the condition variable to wake up one waiting thread, while broadcast can wake up all waiting threads. Common Pattern: Lock Mutex: Secure access to the shared resource. Check Condition: If not met, wait on the condition variable. Unlock Mutex: Once condition is met or upon receiving a signal, continue with the resource and unlock the mutex.
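
A minimal sketch of this pattern with Pthreads; the flag data_available and the function names are illustrative. The while loop re-checks the condition precisely because condition variables have no memory.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static bool data_available = false;

    void consumer_wait(void)
    {
        pthread_mutex_lock(&lock);             /* 1) lock the mutex             */
        while (!data_available)                /* 2) check the condition        */
            pthread_cond_wait(&ready, &lock);  /*    releases lock while asleep */
        data_available = false;
        /* ... consume the shared data ... */
        pthread_mutex_unlock(&lock);           /* 3) unlock when done           */
    }

    void producer_signal(void)
    {
        pthread_mutex_lock(&lock);
        data_available = true;                 /* record the state in a shared flag, */
        pthread_cond_signal(&ready);           /* since a signal with no waiter      */
        pthread_mutex_unlock(&lock);           /* would otherwise be lost            */
    }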

Mutual exclusion with busy waiting

Continuously testing a variable until some value appears is called busy waiting. It should usually be avoided, since it wastes CPU time. Only when there is a reasonable expectation that the wait will be short is busy waiting used.

What is a message queue in Linux IPC and how is it used?

Definition: A message queue is an IPC mechanism that allows processes to send and receive messages in a queued fashion. Key Characteristics: Ordering: Messages are typically received in the order they were sent. Persistence: Message queues can be persistent across system reboots, depending on how they are configured. Blocking and Non-blocking Operations: Processes can send and receive messages either by blocking until the operation completes or non-blocking where the call returns immediately if the queue is full (for send) or empty (for receive). System V IPC Message Queue Functions: msgget: Creates a message queue or opens an existing one. msgsnd: Sends a message to a queue. msgrcv: Receives a message from a queue. msgctl: Performs control operations on a queue, like getting the status or deleting the queue. POSIX Message Queue Functions: mq_open: Opens a message queue descriptor or creates a new one. mq_close: Closes a message queue descriptor. mq_send: Sends a message to a queue. mq_receive: Receives a message from a queue. mq_unlink: Removes a message queue from the system. Usage: Message queues are used for tasks that require asynchronous communication between processes, like event notification or passing data chunks.
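
A minimal POSIX message queue sketch (link with -lrt on older systems); the queue name "/demo_q", the message size, and the queue depth are illustrative assumptions.

    #include <mqueue.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);  /* create or open */
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        mq_send(q, "hello", strlen("hello") + 1, 0);   /* send with priority 0 */

        char buf[128];
        unsigned prio;
        ssize_t n = mq_receive(q, buf, sizeof(buf), &prio);
        if (n >= 0) printf("got: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_q");                          /* remove the queue name */
        return 0;
    }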

What are named semaphores in Linux IPC, and what is their typical syntax?

Definition: Named semaphores are synchronization mechanisms identified by unique names and can be shared between processes and threads. Syntax: Creating or opening a named semaphore: sem_t *sem_open(const char *name, int oflag, ...); Closing a named semaphore: int sem_close(sem_t *sem); Removing a named semaphore: int sem_unlink(const char *name); Usage: Primarily used for synchronization between threads across different processes. Because they are named, processes can reference the semaphore using its unique name. Syntax Difference with Unnamed Semaphores: Named semaphores use persistent system-wide names, whereas unnamed semaphores are referenced through variables within a process and typically do not have a persistent identifier.

What are sockets in Linux?

Definition: Sockets are communication endpoints used for sending and receiving data between processes, either within the same machine or over a network. Key Points: Support various communication protocols, with TCP and UDP being the most common. Identified by an IP address combined with a port number. Function as the fundamental building block for network communication in Linux.

What are unnamed semaphores in Linux IPC, and what is their typical syntax?

Definition: Unnamed semaphores are used for synchronizing threads within the same process or between processes sharing a common address space (such as parent and child processes). Syntax: Initializing an unnamed semaphore: int sem_init(sem_t *sem, int pshared, unsigned int value); Destroying an unnamed semaphore: int sem_destroy(sem_t *sem); Usage: Useful when the semaphore's scope is limited to a single process or among processes that share the same memory space. Syntax Difference with Named Semaphores: Unlike named semaphores, unnamed semaphores are not identified by names but by their memory address. They are typically used when all threads or processes have access to the same memory address where the semaphore is located.

What are the key differences between pipes and named pipes, and can you give a simple example of each?

Differences: Relation Restriction: Pipes are typically used for communication between processes with a parent-child relationship, while named pipes can be used between any processes. Directionality: Pipes are unidirectional; named pipes can be bidirectional. Persistence: Pipes exist only as long as the creating process, while named pipes are persistent and exist independently of the creating process, until they are removed. Namespace: Pipes do not have names in the filesystem; named pipes (FIFOs) appear as files and are accessed via a pathname. Examples: Pipe Example: A shell script that uses | to pass the output of one command to another, like ls | sort. Named Pipe Example: Two unrelated processes where one writes log messages to /tmp/logger_pipe and another reads from it.
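
A small sketch of both mechanisms; the FIFO path /tmp/logger_pipe is taken from the example above, and error handling is omitted for brevity.

    #include <unistd.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Unnamed pipe: parent writes, child reads (related processes only) */
        int fd[2];
        pipe(fd);                        /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {
            char buf[32];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
            _exit(0);
        }
        close(fd[0]);
        write(fd[1], "via pipe", strlen("via pipe"));
        close(fd[1]);

        /* Named pipe (FIFO): any process that knows the pathname can open it;
           opening it for writing would block until some reader opens it too */
        mkfifo("/tmp/logger_pipe", 0600);
        return 0;
    }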

Mutual exclusion with busy waiting: disabling interrupts

Each process disables all interrupts just after entering its critical region and re-enables them just before leaving it. The CPU is only switched from process to process as a result of clock or other interrupts. Once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.

In a computer with a multiprocessor, if interrupts are disabled in one of its processor, what occurs?

Disabling interrupts affects only the CPU that executed the disable instruction. The other ones will continue running and can access the shared memory

What does the mmap function do in Linux, and what are its key parameters?

Function Overview: mmap() is used to map files or devices into memory. It returns the starting address of the mapped area on success, or MAP_FAILED on error. Key Parameters: addr: Specifies the preferred starting address for the mapping; if NULL, the kernel chooses the address. length: The size of the mapping. prot: Protection flags (e.g., PROT_EXEC, PROT_READ, PROT_WRITE, PROT_NONE). flags: Determines whether updates to the mapping are visible to other processes mapping the same region, and whether updates are carried through to the underlying file (MAP_SHARED or MAP_PRIVATE). fd: A file descriptor of the file to be mapped. offset: The offset in the file where the mapping starts; often zero.
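
A minimal read-only mapping sketch; the file name data.bin is an illustrative assumption.

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        off_t len = lseek(fd, 0, SEEK_END);        /* size of the file to map */
        void *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* The file contents can now be read through the pointer p */
        printf("first byte: %d\n", ((unsigned char *)p)[0]);

        munmap(p, len);
        close(fd);
        return 0;
    }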

What is socketpair() in Linux, and what is its typical use?

Function Purpose: socketpair() is used to create a pair of connected, indistinguishable sockets in the Unix domain, often used for IPC. Key Aspects: Domain: Typically AF_UNIX as it's used for communication between processes on the same host. Type: Can be stream (SOCK_STREAM) or datagram (SOCK_DGRAM). Protocol: Usually set to 0 to choose the default protocol for the given type. Syntax Example: int sv[2]; socketpair(AF_UNIX, SOCK_STREAM, 0, sv); creates a pair of connected stream sockets, with sv[0] and sv[1] being the file descriptors for the two sockets.

Explain how monitors ensure mutual exclusion without concern for the scheduler's behavior.

Guaranteed Mutual Exclusion: Monitors automatically ensure that only one process can operate within a monitor's procedures at any given time. When a producer process finds a buffer full and begins a wait operation, it does so with the assurance that the consumer process cannot enter the monitor. This exclusivity prevents the scheduler from switching to the consumer in the midst of the producer's wait operation, thus eliminating timing errors. Consumer Access: The consumer process is barred from entering the monitor while the producer is in the wait state. Only after the producer completes its wait operation and is no longer runnable will the consumer be allowed to enter the monitor.

Critical region - Example

Here process A enters its critical region at time T1. A little later, at time T2 process B attempts to enter its critical region but fails because another process is already in its critical region and we allow only one at a time. Consequently, B is temporarily suspended until time T3 when A leaves its critical region, allowing B to enter immediately. Eventually B leaves (at T4) and we are back to the original situation with no processes in their critical regions

Define starvation

In a dynamic system, requests for resources happen all the time. Some policy is needed to make a decision about who gets which resource when. This policy, although seemingly reasonable, may lead to some processes never getting service even though they are not deadlocked, e.g., 1) priority scheduling in interactive systems, 2) SSF (Shortest Seek First) in disk arm scheduling algorithms. Starvation can be avoided by using a first-come, first-served resource allocation policy.

Recovery from deadlock - explain the killing processes method

It is best to kill a process that can be rerun from the beginning with no ill effects

What are the key System V shared memory functions in IPC for Linux?

Key Functions: ftok: Generates a System V IPC key from a pathname and project identifier. shmget: Allocates a System V shared memory segment. shmat: Attaches the shared memory segment to the calling process's address space. shmdt: Detaches the shared memory segment from the calling process's address space. shmctl: Performs control operations on the shared memory segment. Usage: These functions collectively allow for the creation, access, and management of shared memory between multiple processes, facilitating inter-process communication.
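
A compact sketch tying the System V calls together; the key path, segment size, and permissions are illustrative.

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        key_t key = ftok("/tmp", 'A');                   /* key from pathname + project id */
        int shmid = shmget(key, 4096, IPC_CREAT | 0600); /* allocate a 4 KB segment        */
        if (shmid < 0) { perror("shmget"); return 1; }

        char *mem = shmat(shmid, NULL, 0);               /* attach to our address space */
        if (mem == (void *)-1) { perror("shmat"); return 1; }

        strcpy(mem, "shared hello");                     /* visible to other attachers  */
        printf("%s\n", mem);

        shmdt(mem);                                      /* detach                      */
        shmctl(shmid, IPC_RMID, NULL);                   /* mark segment for removal    */
        return 0;
    }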

What are the POSIX shared memory functions in IPC for Linux?

Key Functions: shm_open: Creates or opens a named shared memory object. ftruncate: Sets the size of the shared memory object. mmap: Maps the shared memory object into the virtual address space of the calling process. shm_unlink: Removes the name of the shared memory object, effectively deleting it. close: Closes the file descriptor allocated by shm_open. fstat: Obtains information about the shared memory object. fchown: Changes the ownership of a shared memory object. fchmod: Changes the permissions of a shared memory object. Usage: POSIX shared memory provides a more standardized and potentially simpler way to handle shared memory across different UNIX-like operating systems.
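
A compact sketch of the POSIX sequence (shm_open, ftruncate, mmap, shm_unlink); the object name "/demo_shm" and size are illustrative.

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }

        ftruncate(fd, 4096);                          /* size the shared memory object */
        char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(mem, "hello from POSIX shm");          /* any process mapping the same
                                                         name sees this content        */
        munmap(mem, 4096);
        close(fd);
        shm_unlink("/demo_shm");                      /* delete the name               */
        return 0;
    }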

Interprocess Communication (IPC):

Mechanism for exchanging data: Enables processes within an operating system to communicate and share data efficiently, ensuring coordination and synchronization. Variety of forms: Includes pipes (named and unnamed), message queues, semaphores, shared memory, and sockets, catering to different needs and scenarios. Critical for multitasking: Facilitates the division of tasks among multiple processes, enhancing performance and resource utilization in complex computing environments.

What is memory mapping in the context of Linux IPC?

Memory Mapping: An IPC mechanism where a file or a file-like resource is mapped into a process's virtual address space. Types of Memory Mapping: File Mapping: Directly correlates a portion of the file with a segment of the address space. Reading or writing to this memory directly affects the file contents. Anonymous Mapping: Acts similarly to dynamically allocated memory (malloc), with the initial content being zeroed. Function: mmap() creates a new mapping, which can be used for IPC by allowing multiple processes to access the same memory segment.

What is message passing, and why is it preferred over semaphores and monitors in certain contexts?

Message Passing Overview: Definition: A method for processes to communicate and synchronize their actions by sending messages to each other. Usage: Predominantly used in parallel programming and distributed systems where shared memory is not feasible or practical. Advantages Over Semaphores and Monitors: Abstraction Level: More high-level than semaphores, which are considered too low-level for some applications. Language Support: Monitors are integrated into only a few programming languages, whereas message passing can be implemented across various environments. Distributed Communication: Unlike semaphores and monitors, message passing allows for information exchange between different machines, making it suitable for distributed computing. Example System: MPI (Message-Passing Interface): A standardized and portable message-passing system designed to function on a wide variety of parallel computers. Widely used in high-performance computing, especially for scientific calculations.
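
A minimal MPI send/receive sketch in C, assuming an MPI implementation such as MPICH or Open MPI is installed (compile with mpicc, run with mpirun -np 2); the value and tag are illustrative.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1, tag 0 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }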

How is the producer-consumer problem addressed using message passing?

Message-Based Synchronization: Utilizes a fixed number of messages (N), similar to slots in a shared-memory buffer. The consumer initiates the process by sending N empty messages to the producer to represent the empty slots in the buffer. Exchange Mechanism: The producer, when it has an item to send, takes one empty message, fills it with the item (making it a full message), and sends it back to the consumer. This simulates placing an item in the buffer. Flow Control: Producer Faster Than Consumer: All messages may become full, causing the producer to wait for an empty message to be returned by the consumer. Consumer Faster Than Producer: All messages may be empty, causing the consumer to wait for a full message from the producer. Resulting Dynamics: Ensures that neither producer nor consumer overruns the other, maintaining a balance in the production and consumption rates.

What is a monitor in the context of operating systems, and how does it enforce mutual exclusion?

Monitor Definition: A high-level synchronization construct that includes procedures, variables, and data structures encapsulated within a module or package. Access Control: Processes can call the procedures inside a monitor but cannot access its internal data structures directly. Mutual Exclusion: Property: Only one process can execute in a monitor at a time, ensuring mutual exclusion. Benefit: Prevents race conditions by serializing access to the resources that the monitor controls.

What is a mutex, and how does it compare to a semaphore?

Mutex (Mutual Exclusion Object): Simplified Semaphore: A binary semaphore specifically used for mutual exclusion, to protect critical sections and prevent race conditions. States: Has two states, unlocked (0) and locked (non-zero). Usage: Protects access to shared data by allowing only one thread at a time to own the mutex. Implementation: Can be implemented in user space using atomic instructions like Test-and-Set Lock (TSL) or Exchange (XCHG). Storage: Typically represented by an integer for practical purposes, even though only 1 bit is theoretically needed.

Mutual exclusion with busy waiting: Peterson's solution

Peterson's solution defines the procedures for entering and leaving the critical region. Before using the shared variables (i.e., before entering its critical region), each process calls enter_region with its own process number, 0 or 1, as parameter. This call will cause it to wait, if need be, until it is safe to enter. After it has finished with the shared variables, the process calls leave_region to indicate that it is done and to allow the other process to enter, if it so desires.
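
The standard two-process version of these procedures looks roughly like this, with shared variables turn and interested[] as in the usual presentation:

    #define FALSE 0
    #define TRUE  1
    #define N     2                      /* number of processes */

    int turn;                            /* whose turn is it?              */
    int interested[N];                   /* all values initially 0 (FALSE) */

    void enter_region(int process)       /* process is 0 or 1 */
    {
        int other = 1 - process;         /* number of the other process    */
        interested[process] = TRUE;      /* show that you are interested   */
        turn = process;                  /* set flag                       */
        while (turn == process && interested[other] == TRUE)
            ;                            /* busy wait                      */
    }

    void leave_region(int process)
    {
        interested[process] = FALSE;     /* indicate departure from critical region */
    }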

What are pipes and named pipes in inter-process communication (IPC)?

Pipes: Used for communication between parent and child processes (or related processes). Unidirectional, meaning data flows in a single direction from the writer to the reader. Example: pipe(fd) where fd[0] is set up for reading, and fd[1] is set up for writing. Named Pipes (FIFOs): Allow communication between unrelated processes that do not have a parent-child relationship. Bidirectional, can be read or written by multiple processes. Example: mkfifo(pathname, mode) to create a named pipe in the filesystem that any process can access using the pathname.

What is the Readers and Writers Problem and what is an effective way to synchronize access to a database?

Problem Context: An airline reservation system with processes needing to read and write to a shared database. Multiple readers can access the database simultaneously, but writers require exclusive access. Challenges: Ensuring that no reader is in the database when a writer is writing. Preventing writers from being starved out by readers continually accessing the database. Potential Solutions: Use reader-writer locks that prioritize writers to avoid writer starvation. Implement a read-write lock allowing concurrent reads or one exclusive write. Maintain a queue to balance the order of readers and writers accessing the database.
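
One common way to express the read-write lock idea with Pthreads; the function names are illustrative and writer-priority policies are left out.

    #include <pthread.h>

    static pthread_rwlock_t db_lock = PTHREAD_RWLOCK_INITIALIZER;

    void read_record(void)
    {
        pthread_rwlock_rdlock(&db_lock);   /* many readers may hold this at once */
        /* ... read from the shared database ... */
        pthread_rwlock_unlock(&db_lock);
    }

    void write_record(void)
    {
        pthread_rwlock_wrlock(&db_lock);   /* writer gets exclusive access */
        /* ... update the shared database ... */
        pthread_rwlock_unlock(&db_lock);
    }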

Describe the Producer-Consumer Problem in operating systems.

Problem Context: Two processes, a producer and a consumer, share a common, fixed-size buffer. Producer: Puts information into the buffer. If the buffer is full, it goes to sleep until space is available. Consumer: Takes information out of the buffer. If the buffer is empty, it goes to sleep until the producer adds new items. Synchronization Requirement: Ensures that the producer doesn't overwrite un-consumed items and the consumer doesn't read the same item multiple times or read from an empty buffer. Solution Techniques: Semaphores, mutexes, or condition variables are used to solve this problem, ensuring mutual exclusion and proper sequencing.
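
A classic semaphore-based sketch: empty_slots counts free slots, full_slots counts filled slots, and a mutex protects the buffer indices; the names and buffer size are illustrative.

    #include <pthread.h>
    #include <semaphore.h>

    #define N 100                          /* number of slots in the buffer */

    static int buffer[N], in_pos = 0, out_pos = 0;
    static sem_t empty_slots, full_slots;  /* counting semaphores: N and 0  */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void init(void) { sem_init(&empty_slots, 0, N); sem_init(&full_slots, 0, 0); }

    void produce(int item)
    {
        sem_wait(&empty_slots);            /* sleep if the buffer is full   */
        pthread_mutex_lock(&lock);
        buffer[in_pos] = item;  in_pos = (in_pos + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);             /* wake a sleeping consumer      */
    }

    int consume(void)
    {
        sem_wait(&full_slots);             /* sleep if the buffer is empty  */
        pthread_mutex_lock(&lock);
        int item = buffer[out_pos];  out_pos = (out_pos + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);            /* wake a sleeping producer      */
        return item;
    }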

What is the Dining Philosophers Problem and how can it be addressed to avoid deadlock?

Problem Description: Five philosophers sit at a table with a plate of spaghetti. Each requires two forks to eat, placed between each pair of plates. Philosophers pick up the two adjacent forks to eat and put them down when they are finished. Challenges: Preventing a deadlock where each philosopher holds one fork and waits forever for the second. Avoiding resource starvation where a philosopher never gets both forks. Potential Solutions: Enforce an order of fork pickup (e.g., always pick up the lower-numbered fork first). Allow a philosopher to pick up forks only if both are available. Implement a dining arbitrator to control access to forks.
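
A sketch of the "lower-numbered fork first" ordering using one mutex per fork; because every philosopher acquires forks in the same global order, a circular wait cannot form. The names are illustrative.

    #include <pthread.h>

    #define N 5
    static pthread_mutex_t fork_lock[N];      /* one mutex per fork */

    void init_forks(void)
    {
        for (int i = 0; i < N; i++)
            pthread_mutex_init(&fork_lock[i], NULL);
    }

    void take_forks(int i)                    /* philosopher i uses forks i and (i+1)%N */
    {
        int left = i, right = (i + 1) % N;
        int first  = left < right ? left : right;   /* always lock the lower number */
        int second = left < right ? right : left;   /* first, then the higher one   */
        pthread_mutex_lock(&fork_lock[first]);
        pthread_mutex_lock(&fork_lock[second]);
    }

    void put_forks(int i)
    {
        pthread_mutex_unlock(&fork_lock[i]);
        pthread_mutex_unlock(&fork_lock[(i + 1) % N]);
    }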

InterProcess Communication Example: A Print spooler (PART 2)

Process A: local variable next_free_slot = 7; a clock interrupt occurs and the CPU switches to process B. Process B: local variable next_free_slot = 7; it stores the name of its file in slot 7 and updates in to 8; a clock interrupt occurs. Process A: its local variable next_free_slot is still 7, so it writes its file name in slot 7, erasing the name process B just put there. Result: Process B will never receive any output.

Mutual exclusion with busy waiting: strict alternation

Process Coordination: Alternates strict turns between two processes, ensuring only one accesses the critical section at a time. Busy Waiting Loops: Utilizes while loops to make processes wait ('busy waiting') until it's their turn to enter the critical region. Drawback: Can lead to idle time if a process does not need the critical section when its turn arrives, hence potentially inefficient.
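
A sketch of the classic two-process form; critical_region() and noncritical_region() are placeholder routines, and turn is the shared variable.

    #define TRUE 1

    void critical_region(void);       /* placeholder: work on shared data   */
    void noncritical_region(void);    /* placeholder: other work            */

    int turn = 0;                     /* shared: whose turn to enter        */

    void process_0(void)
    {
        while (TRUE) {
            while (turn != 0) ;       /* busy wait until it is our turn     */
            critical_region();
            turn = 1;                 /* hand the turn to process 1         */
            noncritical_region();
        }
    }

    void process_1(void)
    {
        while (TRUE) {
            while (turn != 1) ;       /* busy wait                          */
            critical_region();
            turn = 0;                 /* hand the turn back to process 0    */
            noncritical_region();
        }
    }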

What are the key Pthread functions for managing condition variables, and what are their purposes?

Pthread Condition Variable Functions: Pthread_cond_init: Initializes a new condition variable. Pthread_cond_destroy: Destroys a condition variable and frees up any resources it holds. Pthread_cond_wait: Blocks a thread until a condition variable is signaled. This function releases the associated mutex and re-acquires it upon waking up. Pthread_cond_signal: Wakes up at least one thread that is waiting on the specified condition variable. Pthread_cond_broadcast: Wakes up all threads waiting on the specified condition variable. Usage Notes: The pthread_cond_wait function must be called with a mutex locked by the calling thread to avoid a race condition. pthread_cond_signal and pthread_cond_broadcast are used to notify other threads that a particular condition has changed.

What are the primary mutex operations in Pthreads, and what do they do?

Pthread Operations: Pthread_mutex_init: Initializes a new mutex. Pthread_mutex_destroy: Deallocates a mutex that is no longer needed. Pthread_mutex_lock: Locks a mutex. If the mutex is already locked, the calling thread is blocked until the mutex becomes available. Pthread_mutex_trylock: Attempts to lock a mutex. If the mutex is already locked, the call fails and returns immediately, allowing the thread to do other work or try again later. Pthread_mutex_unlock: Unlocks a mutex, potentially allowing another waiting thread to proceed. Note: These function calls are part of the POSIX Threads (Pthreads) library, which provides a standard set of threading APIs for C/C++.
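
A short sketch of these calls protecting a shared counter; the counter and function names are illustrative.

    #include <pthread.h>

    static pthread_mutex_t m;
    static long counter;                      /* shared data protected by m */

    void setup(void)    { pthread_mutex_init(&m, NULL); }
    void teardown(void) { pthread_mutex_destroy(&m); }

    void increment(void)
    {
        pthread_mutex_lock(&m);               /* blocks if another thread holds m */
        counter++;
        pthread_mutex_unlock(&m);
    }

    int try_increment(void)
    {
        if (pthread_mutex_trylock(&m) != 0)   /* returns immediately if locked */
            return 0;                         /* could not get the lock        */
        counter++;
        pthread_mutex_unlock(&m);
        return 1;
    }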

What is the purpose of the sleep and wakeup mechanism in operating systems?

Purpose: Manage process synchronization and prevent race conditions. Sleep Operation: A process goes to sleep if it cannot proceed, releasing the CPU to other processes. Wakeup Operation: A process is awakened by another process when it can proceed. Usage Scenario: Typically used in producer-consumer problems, deadlock resolution, and resource allocation.

How does the Read-Copy-Update (RCU) technique handle the removal of nodes in a data structure without using locks?

RCU Node Removal Steps: Decouple Node: Initially, the node to be removed (e.g., B) is decoupled from the data structure. Readers may still access the old node (B), while new readers will see the new version without B. Waiting Period: The system waits until it is certain that there are no more readers accessing the old nodes (B and D). This ensures that any read operations that started before the removal will not be disrupted. Final Removal: Once it is safe (all old readers have finished), the nodes B and D are fully removed from the structure. New readers only see the updated structure. Advantages of RCU: Non-blocking Reads: The RCU allows read operations to continue without the need for locks, even while removals are being prepared. Safe Updates: Ensures that updates do not interfere with ongoing read operations and only become visible when they are fully ready.

What is the Read-Copy-Update (RCU) technique in concurrent programming, and how does it allow for lock avoidance?

RCU Overview: A synchronization mechanism that allows readers to access data structures without locks while writers can simultaneously make updates. Operation: Reading: Readers traverse the current version of a data structure, such as a tree, without obtaining locks, ensuring uninterrupted read access. Writing: When updating, a writer creates a copy of the node or structure, applies changes, and then adjusts pointers to replace the old version. Example Process: Original Tree: Readers access nodes A, B, C, D, E. Adding a Node: Writer prepares node X and links it to E. Current readers of A and E are unaffected. Replacing with Update: Once X is ready, the writer changes the tree's pointers so that new readers see X while current readers finish with the old version. Advantages: No Locks Needed for Readers: Enhances performance by avoiding reader-writer locks. No Stale Data: Readers either see the old data or the new data, not an inconsistent mixture. Graceful Transitions: Readers are not forced to restart their read if a write occurs.

What are UDP sockets and how are they used in Linux?

UDP Sockets: User Datagram Protocol (UDP) sockets provide a connectionless datagram service that offers a direct way to send and receive packets over an IP network. Characteristics: Connectionless: No need to establish a connection before sending data. Datagram-Based: Data is read in chunks (datagrams), and each send call results in a discrete packet being transmitted. Efficiency: No overhead for establishing and maintaining a connection. Syntax Example: int socket(AF_INET, SOCK_DGRAM, 0); creates a UDP socket.
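
A minimal UDP sender sketch; the address 127.0.0.1 and port 9999 are illustrative.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <string.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);       /* UDP socket */

        struct sockaddr_in dest;
        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port = htons(9999);
        inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

        /* Each sendto() transmits one discrete datagram; no connection needed */
        sendto(s, "ping", 4, 0, (struct sockaddr *)&dest, sizeof(dest));
        close(s);
        return 0;
    }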

What is a race condition, and how can it manifest in the producer-consumer problem?

Race Condition Definition: A situation in operating systems where the outcome depends on the non-deterministic ordering of events, leading to unexpected results. Example Scenario: In the producer-consumer context, a race condition can occur when managing a shared buffer. Buffer Variables: count tracks the number of items in the buffer, and N is its maximum capacity. Problem Illustration: The buffer is empty, and the consumer checks count, finding it to be 0. The scheduler switches to the producer before the consumer can go to sleep. The producer adds an item, increments count to 1, and tries to wake the consumer, thinking it's asleep. The wakeup call is missed because the consumer wasn't actually asleep yet. The consumer, upon resuming, reads the old value of count (0) and goes to sleep, leading to both processes eventually sleeping indefinitely. Consequence: Both the producer and consumer might end up sleeping forever if the buffer fills up, due to the lost wakeup signal. Solution Insight: This highlights the importance of properly synchronizing access to shared resources and the careful management of signaling between processes to avoid race conditions.

Deadlock example - Hardware

See attached image

Deadlock example - Software

See attached image

What are semaphores, and how do they function in operating systems?

Semaphores: Introduced by Dijkstra in 1965, semaphores are a synchronization mechanism. Purpose: Manage concurrent processes by using an integer variable to track the state or count of available resources. Atomic Operations: P(s) Operation (Proberen/Test): If s == 0 (semaphore value), the process sleeps; otherwise, s is decremented (s = s - 1), representing the allocation of a resource. V(s) Operation (Verhogen/Increment): Increments the semaphore value (s = s + 1), releasing a resource and potentially waking up sleeping processes. Atomicity Guarantee: Ensures that once a semaphore operation begins, no other process can access the semaphore until the operation has completed or is blocked. Analogy: Compared to train semaphores, where signals indicate track status: stop (red), caution (yellow), or clear (green).

What are signals in the context of Linux IPC, and how can they be managed?

Signal Definition: A signal is an asynchronous notification sent to a process to notify it that an event has occurred. It acts as a software interrupt that can be sent by the operating system or invoked programmatically. Signal Management: Handling a Signal: A program can define a custom handler function to respond to specific signals (signal(signum, handler)). Ignoring a Signal: A signal can be explicitly ignored by setting its action to SIG_IGN. Default Action: If not caught or ignored, the signal is handled by executing the default action associated with that signal (SIG_DFL). Usage Example: The kill command can send signals to processes. For example, kill -SIGQUIT [pid] sends the SIGQUIT signal to the process with the specified pid. Common Signals: SIGINT: Interrupt from keyboard (Ctrl+C). SIGTERM: Termination signal. SIGKILL: Kill signal that cannot be caught, blocked, or ignored. SIGSEGV: Invalid memory reference. SIGQUIT: Quit from keyboard. Note: Signals are a fundamental aspect of UNIX and Linux systems and are used for error reporting, program control, and inter-process communication.
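
A small sketch that catches SIGINT with a custom handler and ignores SIGQUIT; keeping the handler to a single flag assignment is the usual safe pattern.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigint = 0;

    static void handle_sigint(int signum)   /* custom handler for SIGINT */
    {
        (void)signum;
        got_sigint = 1;                     /* only set a flag inside the handler */
    }

    int main(void)
    {
        signal(SIGINT, handle_sigint);      /* catch Ctrl+C   */
        signal(SIGQUIT, SIG_IGN);           /* ignore SIGQUIT */

        while (!got_sigint)
            pause();                        /* sleep until a signal arrives */

        printf("caught SIGINT, exiting cleanly\n");
        return 0;
    }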

Race conditions

Situations where two or more processes are reading or writing some shared data and the final result depends on who runs precisely when, are called race conditions

What is sleep?

Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up

InterProcess Communication Example: A Print spooler (PART 1)

Slots 0 to 3 are empty (the files have already been printed) and slots 4 to 6 are full (with the names of files queued for printing). More or less simultaneously, processes A and B decide they want to queue a file for printing. Shared variables: out, which points to the next file to be printed, and in, which points to the next free slot in the directory.

Mutual exclusion

Some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing

What is the ostrich algorithm?

Stick your head in the sand and pretend there is no problem. Mathematicians find it unacceptable and say that deadlocks must be prevented at all costs. Engineers ask how often the problem is expected. If deadlocks occur on the average once every five years, just ignore it.

What are the differences between System V IPC and POSIX IPC mechanisms in Linux?

System V IPC: Message Queue: msgctl, msgsnd, msgrcv, msgget Semaphore: semop, semctl, semget Shared Memory: shmat, shmdt, shmget POSIX IPC: Message Queue: mq_open, mq_close, mq_send, mq_receive, mq_notify, mq_setattr, mq_getattr, mq_timedsend, mq_timedreceive, mq_unlink Semaphore: sem_open, sem_close, sem_post, sem_wait, sem_trywait, sem_timedwait, sem_getvalue, sem_destroy, sem_init, sem_unlink Shared Memory: shm_open, ftruncate, mmap, munmap, shm_unlink, fstat, close, fchown, fchmod Key Differences: Standardization: POSIX IPC is standardized by the IEEE and tends to be more portable across different UNIX-like operating systems. Functionality: POSIX IPC provides more extensive functions for message queues, semaphores, and shared memory with better support for real-time systems. Documentation: In Linux, the use of IPC mechanisms can be explored more through the manual pages (man mq_overview, man sem_overview, and man shm_overview).

What are TCP sockets and how are they characterized in Linux?

TCP Sockets: Transmission Control Protocol (TCP) sockets provide reliable, ordered, and error-checked delivery of a stream of bytes. Use a three-way handshake to establish a connection before data can be sent. Characteristics: Connection-Oriented: Require a connection to be established between two endpoints. Stream-Based: Data is read as a byte stream, no message boundaries. Reliability: Ensures that data arrives intact and in order. Syntax Example: int socket(AF_INET, SOCK_STREAM, 0); creates a TCP socket.
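
A minimal TCP client sketch; the address 127.0.0.1 and port 8080 are illustrative, and the server side (bind/listen/accept) is omitted.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);        /* TCP socket */

        struct sockaddr_in server;
        memset(&server, 0, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(8080);
        inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

        /* connect() performs the three-way handshake before any data moves */
        if (connect(s, (struct sockaddr *)&server, sizeof(server)) < 0) {
            perror("connect");
            return 1;
        }
        write(s, "hello", 5);                           /* bytes join one ordered stream */
        close(s);
        return 0;
    }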

Recovery from deadlock - explain the preemption process

Take a resource away from some other process. In some cases it may be possible to temporarily take a resource away from its current owner and give it to another process. Whether this works depends on the nature of the resource.

What does the CPU do when executing the TSL instruction?

The CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done

Define Banker's Algorithm for a single resource

The banker's algorithm considers each request as it occurs, seeing whether granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later

When does a deadlock occur?

When all the members of a set of processes are blocked waiting for an event that only other members of the same set can cause. This situation causes all of the processes to wait forever.

Define safe state

There is some scheduling order in which every process can run to completion even if all of them suddenly request their maximum number of resources immediately. When a process requests a resource, the system must decide whether the resource should be granted. In a safe state the system can guarantee that all processes will finish; from an unsafe state, no such guarantee can be given.

What is the problem with Peterson's solution and the solutions using TSL or XCHG?

They both have the defect of requiring busy waiting, wasting CPU time

Explain the issue here: t_P0(non-critical region) > t_P1(non-critical region). Process 0 leaves its critical region, sets turn to 1, and enters its non-critical region. Process 1 enters its critical region, leaves its critical region, and sets turn to 0. Process 1 enters its non-critical region, quickly finishes its job, and goes back to the while loop. Since turn is 0, process 1 has to wait for process 0 to finish its non-critical region before it can enter its critical region.

This violates the third condition of providing mutual exclusion: No process running outside its critical region may block other processes

T / F - Deadlocks can occur on hardware resources or on software resources

True

Define livelock

Consider two people trying to pass each other on the street: both politely step aside, and yet no progress is possible, because they keep stepping the same way at the same time. Livelock is similar to deadlock in that it can stop all forward progress, but it is technically different since it involves processes that are not actually blocked.

Attacking the Hold-and-Wait Condition

(Continued from the attached image.) A slightly different way to break the hold-and-wait condition is to require a process requesting a resource to first temporarily release all the resources it currently holds. Then it tries to get everything it needs all at once.

The part of the program where the shared memory is accessed is called the _______ or ________

critical region or critical section

Mutual Exclusion with Busy Waiting:The TSL Instruction

enter_region:
    TSL REGISTER,LOCK     | copy lock to register and set lock to 1
    CMP REGISTER,#0       | was lock zero?
    JNE enter_region      | if it was nonzero, lock was set, so loop
    RET                   | return to caller; critical region entered

leave_region:
    MOVE LOCK,#0          | store a 0 in lock
    RET                   | return to caller

Deadlock modeling - four conditions can be modeled using directed graphs. What are the two kinds of nodes that the graphs can have?

Process nodes (drawn as circles) and resource nodes (drawn as squares) - see attached image.

A lock that uses busy waiting is called a ______

spin lock

