CS 446 Final (Concepts)


What is a canonical device? Why is it important to modern OS IO?

A canonical device is a standard representation of a device that provides a consistent and uniform interface for the operating system to communicate with different types of devices. It is important to modern OS IO as it allows the OS to treat different devices in a similar manner, simplifying device management and enabling interoperability.

What is a critical region? How is it related to process scheduling?

A critical region is a section of code or a region of shared data that must not be concurrently accessed by multiple processes or threads. It is related to process scheduling because the proper synchronization mechanisms, such as mutexes or semaphores, need to be employed to protect critical regions from simultaneous access and potential race conditions.

What is a device driver, and why is it necessary in modern OS? What components of the device driver enable devices to be used across operating systems?

A device driver is a software component that allows the operating system to communicate and interact with specific hardware devices. It is necessary in modern OS to provide a standardized interface for accessing and controlling diverse hardware. The use of standardized APIs (Application Programming Interfaces) and hardware abstraction layers enables device drivers to be used across different operating systems.

What is a directory?

A directory, also known as a folder, is a file system object used for organizing and storing files and other directories.

What is a dump, and what is an advantage of doing it incrementally?

A dump refers to creating a backup or snapshot of the system's memory or file system. Incremental dumps back up only the changes since the last dump, saving time and storage space compared to full backups.

What is a lock? What conditions must be met for it to function correctly?

A lock is a synchronization mechanism used to protect shared resources in a multi-threaded or multi-process environment. It allows only one thread or process to access the shared resource at a time, ensuring mutual exclusion and preventing data races. For a lock to function correctly, the following conditions must be met: 1. Mutual Exclusion: only one thread or process can hold the lock at a time. 2. Atomicity: the acquire operation must test and set the lock state as a single indivisible step, so two threads cannot both observe the lock as free. 3. Progress: if no thread holds the lock, some waiting thread must eventually acquire it. 4. Bounded Waiting: no thread should wait forever while other threads repeatedly acquire and release the lock. (Note that hold-and-wait, no-preemption, and circular wait are the conditions under which deadlock *occurs*, not requirements for a lock to work.)

What is a page table? How is it used to manage memory?

A page table is a data structure used by the operating system to map virtual addresses to physical addresses in a paged memory system. It contains entries for each page of a process, specifying the corresponding physical frame. Page tables enable efficient memory management by translating every virtual memory access; an access to a page not present in memory triggers a page fault that the OS services.
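
The translation step above can be sketched in a few lines of C. This is an illustrative toy, not a real MMU: the page size, the `page_table` array contents, and the `translate` name are all assumptions made for the example.

```c
#include <stdint.h>

#define PAGE_SIZE 4096  /* 4 KiB pages, a common real-world choice */

/* Toy page table: index = virtual page number, value = physical frame number. */
static const uint32_t page_table[] = {7, 3, 0, 5};

/* Split a virtual address into page number and offset, then rebase the
   offset onto the physical frame the page table maps that page to. */
uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */
    return page_table[vpn] * PAGE_SIZE + offset;
}
```

A real MMU performs exactly this split in hardware, consulting the TLB first and walking the page table only on a TLB miss.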

What is the difference between a parent and child process?

A parent process creates and controls one or more child processes. Child processes are created by the parent process and inherit certain characteristics, such as environment variables and file descriptors. A process tree represents the hierarchical relationship between parent and child processes.
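
A minimal sketch of the parent/child relationship using POSIX `fork`/`waitpid` (POSIX is assumed; the `run_child` helper is invented for the example):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with a given status code; the parent waits
   for it and returns that code, showing the parent/child link. */
int run_child(int code) {
    pid_t pid = fork();
    if (pid == 0)             /* child: inherits environment, descriptors */
        _exit(code);
    int status;
    waitpid(pid, &status, 0); /* parent: reap the child */
    return WEXITSTATUS(status);
}
```

The child here inherits the parent's open file descriptors and environment, and the parent observes the child's termination — the essence of the process tree.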

Compare and contrast a process and a program.

A process and a program are both related to the execution of software. They represent different stages in the software's lifecycle, with a program being a static set of instructions, and a process being the dynamic execution of those instructions.

What is the difference between a process and thread?

A process is an instance of a program that is being executed, while a thread is a lightweight unit of execution within a process. Processes have their own memory space, while threads share the same memory space.

How are a thread and a process different? What do threads have shared access to, and what is not shared between threads?

A process is an instance of a program that runs independently and has its own memory space, resources, and execution context. A thread, on the other hand, is a lightweight unit of execution within a process. Threads share the same memory space and resources of the process, allowing them to have shared access to variables and data. However, each thread has its own stack and execution context, which are not shared between threads.

What is a process? What are the three process states, and how do they communicate with each other?

A process is an instance of a running program. The three process states are Running, Blocked, and Ready. Processes communicate with each other through inter-process communication mechanisms such as shared memory, message passing, or signals.

What is a program, and how is it related to a process?

A program is a set of instructions or code stored on disk. A process is the execution of a program, including the current state, memory, and resources allocated to it. A program becomes a process when it is loaded into memory and executed by the operating system.

What is the difference between a process and a program?

A program is a set of instructions or code that is stored on disk, while a process is an instance of a program in execution, with its own memory, resources, and state.

What is a race condition, and how do we avoid it?

A race condition occurs when multiple threads or processes access shared data simultaneously, resulting in an unexpected and incorrect outcome. It can lead to data corruption and inconsistent program behavior. Race conditions can be avoided by using synchronization mechanisms such as locks, semaphores, or mutexes to enforce mutual exclusion and ensure that only one thread can access the shared data at a time.
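
A sketch of the fix in C with pthreads (names like `run_counters` are invented for the example): two threads increment a shared counter, but because each read-modify-write happens under a mutex, no update is lost. Without the lock/unlock pair, the final count would be unpredictable.

```c
#include <pthread.h>

#define ITERS 100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter under the mutex, making
   the read-modify-write atomic with respect to the other thread. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_counters(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```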

What is a regular file?

A regular file is a common file type that contains data organized in a specific format, such as text or binary data. It can be created, read, modified, and deleted like any other file.

How many processes can be executed at once on a single CPU?

A single CPU core can execute only one process at any instant. The CPU switches between different processes using a technique called context switching, which gives the illusion of concurrent execution.

What is a thread pool?

A thread pool is a collection of pre-initialized threads that are ready to perform tasks. It helps manage and reuse threads efficiently, eliminating the overhead of creating and destroying threads for each task. The thread pool maintains a queue of tasks and assigns them to available threads, allowing for concurrent execution and better resource utilization.

What is a trap instruction, and why is it useful when a user needs to escalate privileges?

A trap instruction is a software interrupt triggered by a user process to transfer control to the operating system. It is useful when a user needs to escalate privileges because it allows the user to request privileged operations or services from the operating system, such as accessing protected resources or executing privileged instructions.

What is the difference between a user thread and a kernel thread?

A user thread is managed by user-level thread libraries and the operating system is unaware of its existence. User threads are faster to create and manage but are limited by the capabilities and scheduling decisions of the underlying user-level thread library. On the other hand, a kernel thread is managed and supported directly by the operating system. Kernel threads provide more flexibility and can take advantage of multiprocessor systems, but they have higher overhead due to kernel involvement in thread management.

What is an I-Node, and what is it used for?

An I-Node (index node or inode) is a data structure used in Unix-like file systems to store metadata about a file. It contains information such as file permissions, owner, timestamps, and pointers to the file's data blocks.
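
The inode's metadata is exactly what `stat(2)` reports. A small sketch (POSIX assumed; the path and `file_size` helper are arbitrary choices for the example):

```c
#include <sys/stat.h>

/* Read a file's inode metadata with stat(2) and return its size.
   st_size is one of many inode fields (others include st_mode for
   permissions, st_uid for the owner, and st_mtime for timestamps). */
long file_size(const char *path) {
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    return (long)st.st_size;
}
```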

What is an operating system, and what are its two main roles/purposes?

An operating system is software that manages computer hardware and software resources. Its two main roles/purposes are resource management and providing an interface for users and applications to interact with the system.

What does an operating system manage? Provide an example of each

An operating system manages computer hardware, software, and resources. Examples: memory management (allocating RAM among processes), process management (scheduling the CPU), file management (organizing data on disk), and device management (driving I/O hardware such as printers and disks).

What is the difference between an absolute file path and a relative path? Provide an example.

An absolute file path specifies the complete location of a file starting from the root directory. For example, "/home/user/documents/file.txt". In contrast, a relative path specifies the file's location relative to the current directory. For example, "documents/file.txt".

Compare 3 different operating system types discussed in class, and give 1 real world example of each (batch, server, pc, sensor, etc).

Batch OS - used for running large, repetitive jobs, example: payroll processing. Server OS - used for managing and providing services to networked computers, example: Windows Server. PC OS - used for personal computers, example: Windows, macOS, Linux.

What does it mean to be interrupt driven?

Being interrupt driven means that a system or device operates by responding to interrupts, which are signals that indicate the need for immediate attention or action.

How are buffers used to improve file system performance?

Buffers temporarily store data in memory before reading from or writing to a file system, reducing disk I/O operations and improving file system performance.

What is caching?

Caching is a technique of storing frequently accessed data or instructions in a faster and closer location to the processor, such as cache memory. It aims to reduce access latency and improve overall system performance.

What is concurrency? What is entailed in allowing an OS to be concurrent?

Concurrency allows multiple processes and events to be in progress at the same time. Allowing an OS to be concurrent involves managing tasks so that they can share resources, communicate, and avoid conflicts such as race conditions and deadlock.

What is the tradeoff between creating processes in the user space versus creating them in kernel space? Consider both time and space.

Creating processes in user space is faster and requires less overhead compared to creating them in kernel space. However, processes in user space have limited access to system resources and rely on system calls to interact with the kernel. Processes in kernel space have unrestricted access to resources but require additional overhead and privilege checks.

What is the tradeoff between creating threads in the user space, versus creating them in kernel space? Consider both time and space.

Creating threads in user space is faster and requires fewer system resources. However, user-level threads cannot utilize multiple processors fully. Creating threads in kernel space provides better parallelism and allows threads to run on different processors. However, it requires more system resources and incurs additional overhead for context switching.

What is DMA? When is it useful and how is it related to device controllers?

DMA (Direct Memory Access) is a technique that allows devices to transfer data directly to and from memory without involving the CPU. DMA is useful for improving IO performance by reducing CPU overhead and enabling efficient data transfers. Device controllers utilize DMA to access memory independently and perform data transfers with minimal CPU involvement.

What is deadlock? How can deadlock be managed? How can deadlock be recovered from?

Deadlock occurs when two or more processes or threads are unable to proceed because each is waiting for a resource held by another, resulting in a circular waiting condition. Deadlock can be managed using various techniques: Prevention: Ensure that at least one of the four necessary conditions for deadlock (mutual exclusion, hold and wait, no preemption, circular wait) is not satisfied. Avoidance: Use resource allocation strategies and algorithms to dynamically analyze and predict potential deadlock situations before granting resource requests. Detection and Recovery: Detect deadlock using algorithms like resource allocation graphs or deadlock detection algorithms, and then take actions such as process termination or resource preemption to recover from deadlock.

What are the different methods of allocating memory to processes? Be sure to discuss the tradeoffs between each implementation.

Different memory allocation methods include contiguous allocation (fixed or variable partitions), paging (fixed-size pages), and segmentation (variable-size segments). Contiguous allocation gives fast, simple address translation but suffers external fragmentation as holes accumulate between processes. Paging eliminates external fragmentation but incurs internal fragmentation in the last page of each process and overhead for page tables. Segmentation matches allocation to logical program units, but like contiguous allocation it can leave external fragmentation.

What is efficiency as it pertains to multiprogramming?

Efficiency in multiprogramming refers to maximizing CPU utilization and throughput by keeping the CPU busy with productive work. It involves effective scheduling, minimizing idle time, and optimizing resource allocation to achieve higher efficiency in executing multiple processes concurrently.

Choose two operating system types (batch, multiprocessor, PC, embedded, real-time, or any other discussed in class) and compare and contrast them.

Embedded operating systems and mobile operating systems are similar in that they are designed to run on specialized devices with limited resources and specific functionalities. However, they differ in their target devices, with embedded systems being used in a wide range of dedicated devices, while mobile systems are tailored for smartphones and tablets.

What does every process have to define before execution?

Every process has to define its own address space before execution.

Provide 3 examples of an input device, 3 examples of an output device, and 3 examples of a device that acts as both.

Examples of input devices include a keyboard, mouse, and touchscreen. Examples of output devices include a monitor, printer, and speakers. Examples of devices that act as both input and output devices include a touchscreen display, modem, and network interface card.

Contiguous allocation can lead to fragmentation. Is that fragmentation internal or external? Why?

External fragmentation happens because contiguous allocation requires files to be stored in consecutive blocks on the disk. As files are created, modified, and deleted, free blocks of varying sizes become scattered throughout the disk. These scattered free blocks form gaps of unused space between allocated files and over time, these gaps can become fragmented, making it difficult to find contiguous blocks of sufficient size to store new files.

What factors influence the read and write times of a hard drive?

Factors that influence the read and write times of a hard drive include the rotational speed of the platters (RPM), data density, seek time, and the interface used for data transfer (e.g., SATA or SCSI).

True or False: Linux enforces file extension meaning.

False. Linux treats file extensions as a naming convention only; the kernel does not enforce any meaning for them.

Compare and contrast HDD and SSD storage.

HDD (Hard Disk Drive) and SSD (Solid-State Drive) are storage devices. HDDs use rotating platters and magnetic heads to read and write data, while SSDs use flash memory. SSDs offer faster data access and transfer speeds, are more durable, and consume less power compared to HDDs. However, SSDs are generally more expensive per unit of storage capacity.

How is IO scheduled when there are concurrent requests?

IO scheduling is typically performed by the operating system using algorithms such as First-Come, First-Served (FCFS), Shortest Seek Time First (SSTF), or SCAN. These algorithms determine the order in which IO requests are serviced to optimize efficiency and fairness.

A file contains a company's employee records. The records are randomly accessed and updated. If the employee records are fixed size, is contiguous or linked the best way to allocate? Why?

If the employee records are fixed size and randomly accessed, contiguous allocation is the best way to allocate them. It enables direct access to records using their physical location, resulting in efficient random access and updates.
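
The reason contiguous allocation wins here is that record lookup reduces to arithmetic. A sketch (the 128-byte record size is an assumption for the example):

```c
#include <stdint.h>

#define RECORD_SIZE 128  /* assumed fixed size of one employee record */

/* With fixed-size records stored contiguously, record i lives at a byte
   offset computed directly from i -- no list traversal or index lookup. */
uint64_t record_offset(uint64_t index) {
    return index * RECORD_SIZE;
}
```

A program would `lseek` to `record_offset(i)` and read exactly `RECORD_SIZE` bytes; with linked allocation, reaching record i would instead require following i block pointers.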

Compare and contrast internal and external fragmentation and provide an example.

Internal fragmentation occurs when allocated memory blocks contain unused portions, resulting in wasted memory within a process. External fragmentation occurs when free memory segments are scattered throughout the system, making it difficult to allocate contiguous blocks to processes. For example, internal fragmentation can happen when allocating fixed-size memory blocks, while external fragmentation can occur when using variable-size memory allocation schemes.

What is interprocess communication?

Interprocess communication (IPC) is the mechanism used by processes or threads to exchange data and coordinate their actions. It allows communication and coordination between different processes running on the same system or across a network.

Compare and Contrast kernel space and user space.

Kernel space is the portion of the operating system where the kernel resides, with unrestricted access to system resources. User space is where applications and user-level processes run, with limited access to system resources.

What is livelock?

Livelock is a situation where processes or threads are actively trying to resolve a deadlock but end up repeatedly changing states without making progress.

Define each of the following: mutex, semaphore, busy wait, condition variable. Compare and contrast them with each other. Be sure to discuss the advantages and disadvantages of each.

Mutex: A mutex is a synchronization primitive that allows only one thread to access a shared resource at a time. It provides exclusive access and avoids race conditions. Advantages: simplicity and efficiency. Disadvantages: can lead to deadlocks if not used correctly. Semaphore: A semaphore is a synchronization object that controls access to a shared resource by multiple threads. It allows a specified number of threads to access the resource simultaneously. Advantages: can control the number of concurrent accesses. Disadvantages: potential for deadlocks and priority inversion. Busy Wait: Busy wait is a synchronization technique where a thread continuously checks for a condition to become true. It wastes CPU cycles and is inefficient. Advantages: simplicity. Disadvantages: high CPU usage and inefficiency. Condition Variable: A condition variable is a synchronization primitive that allows threads to wait until a certain condition becomes true. It is used in conjunction with mutexes. Advantages: efficient resource utilization. Disadvantages: can lead to potential issues like missed signals.

Why is it a bad idea to not use any memory abstraction when designing an OS?

Not using any memory abstraction can result in inefficient memory management, limited process isolation, and difficulties in handling dynamic memory requirements. It hinders effective resource utilization and can lead to conflicts and crashes between processes.

Define the following page replacement algorithms: optimal, not recently used, first-in, first-out, second-chance, clock, least recently used, working set, or WSClock. Compare and contrast them as you define.

Optimal: Evicts the page that will not be used for the longest time in the future. It provides the best possible replacement, but its implementation is impractical due to the need for future knowledge. Not Recently Used (NRU): Classifies pages based on recent use (referenced or not referenced) and selects pages for replacement randomly from the least-recently-used class. First-In, First-Out (FIFO): Replaces the oldest page in memory, following a queue-like structure. Second-Chance: Enhances FIFO by giving pages a second chance before being replaced if they have been referenced recently. Clock: Uses a circular list of pages and a clock hand to find and replace the oldest unreferenced page. Least Recently Used (LRU): Replaces the page that has not been used for the longest time, based on past references. Working Set: Maintains a fixed-size window of recently referenced pages and replaces pages outside the window. WSClock: Combines elements of both the working set and clock algorithms to account for temporal locality and reference patterns.
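
FIFO, the simplest of these to implement, can be sketched as a fault counter over a reference string (3 frames and the helper name are assumptions for the example):

```c
#include <stddef.h>

#define NFRAMES 3

/* Count page faults under FIFO replacement: on a fault with all frames
   full, evict the page that entered memory earliest (circular queue). */
int fifo_faults(const int *refs, size_t n) {
    int frames[NFRAMES];
    size_t used = 0, oldest = 0;
    int faults = 0;
    for (size_t i = 0; i < n; i++) {
        int hit = 0;
        for (size_t j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (used < NFRAMES) {
            frames[used++] = refs[i];        /* free frame available */
        } else {
            frames[oldest] = refs[i];        /* evict the oldest page */
            oldest = (oldest + 1) % NFRAMES;
        }
    }
    return faults;
}
```

The classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 yields 9 faults with 3 frames — and, counterintuitively, more faults with 4 frames (Belady's anomaly), which LRU and other stack algorithms avoid.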

What is parallelism and how is it different from multithreading?

Parallelism refers to the execution of multiple tasks simultaneously, utilizing multiple processors or cores. It involves true simultaneous execution of independent tasks to achieve faster processing. Multithreading, on the other hand, involves the execution of multiple threads within a single task or process. While multithreading can provide concurrency and improved performance on a single processor, it does not necessarily guarantee true parallel execution.

What is persistence? What is its role in ease-of-use?

Persistence refers to the ability of data to remain stored after the computer is turned off. It plays a vital role in ease-of-use by ensuring that data can be easily retrieved after the system is powered on.

Compare and contrast preemption and context switching; when is one used over the other?

Preemption is the act of forcibly interrupting the execution of a process to allow another process to run. Context switching involves saving the state of the currently running process and loading the state of another process. Preemption is used when a higher-priority process needs to execute, while context switching occurs during preemption or when a process blocks or completes execution.

What is process synchronization, and why do we need it? Provide an example where synchronization is necessary.

Process synchronization refers to coordinating the execution of multiple processes or threads to ensure they access shared resources in a controlled manner. It is needed to prevent race conditions, conflicts, and inconsistencies. An example is a bank account where multiple threads try to withdraw or deposit money simultaneously. Synchronization ensures that the account balance remains consistent and avoids issues like overdrawn or incorrect balances.

What is RAID? How is it related to mirroring, striping, and parity? What are the various levels, and how are they different? How does increasing security using RAID affect resources and speed?

RAID (Redundant Array of Independent Disks) is a data storage technology that combines multiple physical disks into a single logical unit to improve performance, fault tolerance, or both. RAID uses different techniques such as mirroring (RAID 1), striping (RAID 0), and parity (RAID 5) to achieve data redundancy and performance benefits. RAID levels include RAID 0, RAID 1, RAID 5, RAID 10, and more, each offering different levels of performance, fault tolerance, and resource utilization. Increasing security using RAID introduces additional overhead for data redundancy, which can impact both resources and speed.
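
The parity idea behind RAID 5 is plain XOR, sketched here for two data blocks (block size and function names are assumptions; real arrays rotate parity across many disks):

```c
#define BLOCK 8

/* RAID 5-style parity: the parity block is the XOR of the data blocks,
   so any single lost block can be rebuilt by XOR-ing the survivors. */
void make_parity(const unsigned char *d0, const unsigned char *d1,
                 unsigned char *parity) {
    for (int i = 0; i < BLOCK; i++)
        parity[i] = d0[i] ^ d1[i];
}

/* Rebuild a lost data block from the surviving block and the parity. */
void rebuild(const unsigned char *survivor, const unsigned char *parity,
             unsigned char *out) {
    for (int i = 0; i < BLOCK; i++)
        out[i] = survivor[i] ^ parity[i];
}
```

This is also where the overhead comes from: every write must update parity as well as data, costing extra I/O compared to RAID 0.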

What are the pros and cons of using random access?

Random access allows direct access to any part of a file, providing flexibility and efficiency for accessing specific data points. However, it may introduce additional complexity and overhead compared to sequential access.

What are the Read, Write, and Append file operations, and how do they differ?

The read operation retrieves data from a file, the write operation stores data at the current file position (overwriting any existing content there), and the append operation always adds new data at the end of the file. The main difference lies in where each operation touches the existing file content.

What are the pros and cons of using sequential file access?

Sequential file access offers simplicity and efficiency for processing data sequentially but can be inefficient for random access operations or accessing specific portions of large files.

Compare and contrast the Shortest Job First scheduling algorithm with the Shortest Remaining Time Next scheduling algorithm.

Shortest Job First schedules processes based on the shortest burst time, while Shortest Remaining Time Next schedules processes based on the remaining time to completion. Shortest Job First is suitable for batch systems with known burst times, while Shortest Remaining Time Next is useful in interactive systems where burst times may vary. Both algorithms aim to minimize waiting time and improve throughput.
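
SJF's effect on waiting time can be sketched directly: sort the bursts ascending, and each job waits for the sum of the jobs ahead of it (the helper name and burst values in the test are assumptions for the example):

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Shortest Job First on a batch of bursts: run shortest first, so each
   job's waiting time is the total length of everything scheduled before
   it. Returns the total waiting time across all jobs. */
int sjf_total_wait(int *bursts, size_t n) {
    qsort(bursts, n, sizeof(int), cmp_int);
    int wait = 0, elapsed = 0;
    for (size_t i = 0; i < n; i++) {
        wait += elapsed;        /* this job waited for all earlier jobs */
        elapsed += bursts[i];
    }
    return wait;
}
```

Sorting shortest-first is provably optimal for average waiting time in a batch; Shortest Remaining Time Next applies the same idea preemptively as new jobs arrive.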

What is starvation?

Starvation: A situation where a process is unable to acquire the necessary resources to make progress. It can occur when resource allocation is unfair or when resources are continuously allocated to higher-priority processes, neglecting lower-priority ones.

Compare and contrast swapping, paging, and segmentation. Provide at least one example of a situation where each is preferable.

Swapping: Entire process is moved in and out of memory. Example: Swapping processes between main memory and disk to handle high demand. Paging: Process and memory are divided into fixed-size pages. Example: Allocating memory in a system with limited physical memory. Segmentation: Process and memory are divided into variable-sized segments. Example: Memory management in complex applications with varying memory requirements.

Describe how the MBR, boot block, and BIOS are used to load the OS.

The BIOS (Basic Input/Output System) initiates the boot process, locates the Master Boot Record (MBR) in the first sector of the bootable device, and reads the boot loader from the MBR. The boot loader then locates and loads the necessary components, including the OS, typically from the boot block.

What is the MMU, and how is it related to the TLB and paging?

The Memory Management Unit (MMU) is hardware responsible for virtual memory management. It translates virtual addresses to physical addresses. The Translation Lookaside Buffer (TLB) is a cache within the MMU that stores frequently accessed page translations, reducing memory access latency. Paging is a memory management technique facilitated by the MMU and TLB.

Compare and contrast the Round Robin scheduling algorithm with the Completely Fair Scheduling (CFS) algorithm and the Multi-Level Feedback Queue algorithm.

The Round Robin scheduling algorithm assigns a fixed time quantum to each process, while the CFS algorithm dynamically adjusts the time allocated to each process based on its priority. The Multi-Level Feedback Queue algorithm uses multiple priority queues and allows processes to move between queues based on their behavior. Round Robin is suitable for time-sharing systems, CFS is commonly used in desktop environments, and Multi-Level Feedback Queue is used in systems with varying task priorities.

What are the basic steps that happen when an operating system abstracts a process?

The basic steps involved when an operating system abstracts a process include process creation, process scheduling, context switching, and process termination.

What is the basic thread API - that is, what are the various pthread methods we've used in class, and what do they do? You should be able to write a very basic bit of multithreaded code and understand what each component in that code does.

The basic thread API typically includes functions such as pthread_create, pthread_join, pthread_exit, pthread_mutex_init, pthread_mutex_lock, pthread_mutex_unlock, pthread_cond_init, pthread_cond_wait, and pthread_cond_signal. These functions allow for the creation, synchronization, and termination of threads.
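
A minimal sketch of the create/join pattern (the `square`/`squares_sum` names are invented; the integer-through-`void*` cast is a common idiom for small payloads):

```c
#include <pthread.h>
#include <stdint.h>

/* Thread body: receives its argument through the void* parameter and
   returns its result through the void* return value. */
static void *square(void *arg) {
    intptr_t x = (intptr_t)arg;
    return (void *)(x * x);
}

/* Create two threads, then join them to retrieve their results. */
long squares_sum(long a, long b) {
    pthread_t t1, t2;
    void *r1, *r2;
    pthread_create(&t1, NULL, square, (void *)(intptr_t)a);
    pthread_create(&t2, NULL, square, (void *)(intptr_t)b);
    pthread_join(t1, &r1);   /* blocks until t1 calls return/pthread_exit */
    pthread_join(t2, &r2);
    return (long)(intptr_t)r1 + (long)(intptr_t)r2;
}
```

`pthread_exit` inside the thread body is equivalent to returning; the mutex and condition-variable calls listed above follow the same init/use pattern for synchronization between such threads.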

What is the file API and its components? What does each component do?

The file API (Application Programming Interface) provides functions and methods to interact with files. Its components include file creation, opening, reading, writing, closing, and manipulating file attributes, each serving a specific purpose in file management.
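
The POSIX flavor of this API can be sketched as a round trip (POSIX assumed; the path, flags choice, and `roundtrip` helper are arbitrary for the example):

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

/* Create/truncate a file, write a message, close it, then reopen it
   read-only and read the bytes back into buf. Returns bytes read. */
int roundtrip(const char *path, const char *msg, char *buf, size_t buflen) {
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) return -1;
    write(fd, msg, strlen(msg));
    close(fd);

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, buf, buflen - 1);
    close(fd);
    if (n < 0) return -1;
    buf[n] = '\0';
    return (int)n;
}
```

Each component maps to the roles named above: `open` with `O_CREAT` handles creation and opening, `read`/`write` move data, `close` releases the descriptor, and the mode argument (0644) sets file attributes.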

What is the difference between an operating system and the kernel?

The kernel is the core component of the operating system, responsible for managing system resources, while the operating system is a broader term encompassing all software that manages computer resources.

Based on the hardware support for IO, describe the use of the north bridge, south bridge, and peripheral IO in the role of IO.

The north bridge is responsible for high-speed communication between the CPU, memory, and graphics card. The south bridge handles lower-speed communication and connects to peripheral devices such as USB ports, audio controllers, and disk controllers. Peripheral IO includes various input/output devices connected to the system, such as keyboards, mice, printers, and storage devices.

What security does the operating system control?

The operating system controls access to resources, authentication, and authorization, and implements security measures to prevent unauthorized access and attacks.

What are the process states? Which states can be transitioned to from a specific state?

The process states include: New, Ready, Running, Blocked, and Terminated. The transitions are constrained: New → Ready (admitted by the OS), Ready → Running (scheduler dispatch), Running → Ready (preempted), Running → Blocked (waiting for I/O or an event), Blocked → Ready (the awaited event completes), and Running → Terminated (exit). A process cannot, for example, go directly from Blocked to Running; it must pass through Ready.

What is the producer-consumer problem, and where is it useful in the CS world?

The producer-consumer problem involves coordinating the interaction between threads or processes where one produces data (producer) and the other consumes it (consumer). The challenge is to ensure that the producer does not overwrite the data before it is consumed, and the consumer does not consume data that has not been produced yet. This problem is common in scenarios such as multi-threaded applications, message passing systems, and concurrent data processing.
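
The standard solution — a bounded buffer guarded by a mutex and two condition variables — can be sketched as follows (buffer capacity, item count, and all helper names are assumptions for the example):

```c
#include <pthread.h>
#include <stdint.h>

#define CAP    4
#define NITEMS 100

static int buf[CAP];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

/* Producer waits while the buffer is full, then inserts and signals. */
static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= NITEMS; i++) {
        pthread_mutex_lock(&m);
        while (count == CAP)
            pthread_cond_wait(&not_full, &m);
        buf[in] = i;
        in = (in + 1) % CAP;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Consumer waits while the buffer is empty, then removes and signals. */
static void *consumer(void *arg) {
    intptr_t sum = 0;
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&not_empty, &m);
        sum += buf[out];
        out = (out + 1) % CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&m);
    }
    return (void *)sum;
}

long run_producer_consumer(void) {
    pthread_t p, c;
    void *result;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, &result);
    return (long)(intptr_t)result;  /* sum of 1..NITEMS */
}
```

The `while` (not `if`) around each `pthread_cond_wait` is essential: a woken thread must re-check the condition, since the buffer state may have changed between signal and wakeup.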

What is the purpose of RAID? Is RAID 0 consistent with the purpose of RAID? Why or why not?

The purpose of RAID is to improve data storage performance, reliability, and fault tolerance. RAID 0, also known as striping, enhances performance by distributing data across multiple disks but does not provide redundancy. Therefore, RAID 0 is not consistent with the primary purpose of RAID, which is to ensure data integrity and availability in case of disk failures.

What is the purpose of a process scheduling algorithm? How is it related to the process table?

The purpose of a process scheduling algorithm is to determine the order and allocation of CPU time to different processes. It ensures efficient utilization of system resources and provides fairness. The process table is a data structure that stores information about all active processes, including their state, priority, and resource usage. The scheduling algorithm selects processes from the process table for execution based on their scheduling criteria.

What is the purpose of a process scheduling algorithm?

The purpose of a process scheduling algorithm is to determine the order and priority of executing processes or threads in a system. It ensures efficient CPU utilization, fair resource allocation, and responsiveness to user interactions.

What is the purpose of the CPU idle process?

The CPU idle process runs when no other process is ready to execute. It gives the scheduler something well-defined to dispatch and typically puts the CPU into a halted or low-power state until an interrupt signals that real work has arrived.

What is the role of threading in concurrency?

The role of threading in concurrency is to allow multiple threads to execute concurrently within a single process, thereby increasing the overall efficiency and performance of the system.

What is the shell, and how is it related to system calls?

The shell is the interface through which a user interacts with the operating system, executing commands and managing processes. It uses system calls to communicate with the kernel and access system resources.

Compare and contrast the shortest seek time and elevator algorithms and their role in improving random access seek times.

Both algorithms reorder pending disk requests to reduce seek time. Shortest seek time first (SSTF) always services the pending request closest to the current head position, minimizing head movement in the short term; however, it can starve requests far from the head if nearby requests keep arriving. The elevator (SCAN) algorithm instead sweeps the head in one direction, servicing every request along the way, then reverses and services the rest, like an elevator stopping at floors in order. SCAN gives up a little average seek time compared to SSTF, but it bounds how long any request can wait and so avoids starvation.
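
A minimal sketch of the two orderings, assuming a hypothetical request queue and a starting head position of cylinder 53:

```python
def sstf(head, requests):
    """Shortest Seek Time First: always service the closest pending cylinder."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan(head, requests, direction=1):
    """Elevator (SCAN): sweep in one direction, then reverse."""
    up = sorted(c for c in requests if c >= head)
    down = sorted((c for c in requests if c < head), reverse=True)
    return up + down if direction == 1 else down + up

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # made-up request queue
print(sstf(53, queue))   # [65, 67, 37, 14, 98, 122, 124, 183]
print(scan(53, queue))   # [65, 67, 98, 122, 124, 183, 37, 14]
```

Note how SSTF jumps back and forth (67 then 37 then 14 then 98), while SCAN finishes the upward sweep before reversing.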

What are the two directory organization systems, and what are their pros and cons?

The single-level directory system is simple and straightforward, where all files are placed in a single directory. However, this system lacks scalability and can quickly become disorganized and difficult to manage as the number of files increases. In contrast, the hierarchical directory system organizes files in a tree-like structure with multiple levels of directories. This system offers better organization and allows for efficient categorization and easy navigation of files. However, it requires more complex navigation as users need to traverse through multiple directory levels to access specific files.

What are the 6 major goals of every operating system?

The six major goals of every operating system are resource allocation, process management, memory management, device management, security, and user interface. Resource allocation: The OS should manage and allocate computer resources effectively. Process management: The OS should manage and schedule processes efficiently. Memory management: The OS should allocate and manage memory effectively. Device management: The OS should manage input/output devices and provide a consistent interface to them. Security: The OS should provide secure access to system resources and protect against unauthorized access. User interface: The OS should provide an intuitive and user-friendly interface.

What are the three multithreading models, and how do they work?

The three multithreading models are many-to-one, one-to-one, and many-to-many. Many-to-One: Multiple user-level threads are mapped to a single kernel thread. It provides concurrency but lacks parallelism since only one thread can execute at a time. One-to-One: Each user-level thread is mapped to a separate kernel thread. It allows for true parallelism as multiple threads can execute simultaneously. Many-to-Many: Multiple user-level threads are mapped to multiple kernel threads. It provides a balance between concurrency and parallelism, allowing for efficient utilization of system resources.

What are the two general categories of Data Transfer for IO? When is one preferred over the other and why?

The two general categories of data transfer for IO are programmed IO and DMA (Direct Memory Access). Programmed IO is preferred when data transfer is infrequent or involves smaller amounts of data, while DMA is preferred for high-speed data transfer and to offload CPU involvement.

What is thrashing? Why is it bad? How do we prevent it?

Thrashing: Excessive paging activity in which the system spends more time swapping pages in and out than doing useful work, because processes do not have enough frames to hold their working sets. It is bad because it causes constant disk I/O and leaves the CPU largely idle. To prevent thrashing, we can reduce the degree of multiprogramming (suspend some processes so the rest have enough frames) or use a working-set or page-fault-frequency policy to allocate each process enough frames.

In a system with multiple directories, what can we do to have two different files with the same name (for example, example.txt in A and B, with different contents)?

To have two different files with the same name in different directories, you can utilize separate directory paths. For example, A/example.txt and B/example.txt would represent two different files with different contents.

In a system with multiple directories, how do we modify a file in directory A from directory B and have the changes seen in both directories?

To modify a file in directory A from directory B and have the changes reflected in both directories, you can create a hard link to the file. Hard linking establishes multiple directory entries for the same file, allowing modifications made in one directory to be visible in another.
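
A sketch of this using Python's `os.link` (the directory names A and B and the file name shared.txt are assumptions for illustration):

```python
import os
import tempfile

# Hypothetical layout: directories A and B inside a temp directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "A"))
os.makedirs(os.path.join(root, "B"))

a_path = os.path.join(root, "A", "shared.txt")
b_path = os.path.join(root, "B", "shared.txt")

with open(a_path, "w") as f:
    f.write("original\n")

os.link(a_path, b_path)        # second directory entry pointing at the same inode

with open(b_path, "w") as f:   # modify the file via B ...
    f.write("updated\n")

with open(a_path) as f:        # ... and the change is visible via A
    print(f.read())            # updated

# Both paths resolve to the same inode.
assert os.stat(a_path).st_ino == os.stat(b_path).st_ino
```

Because both directory entries name the same inode, there is only one file on disk; either path can be used to read or modify it.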

True or False: Your HDD is an IO device.

True, a hard disk drive (HDD) is an input/output (IO) device used for reading and writing data.

True or False: Linux cares about capitalization in file paths.

True. Linux file paths are case-sensitive: /home/user/Notes.txt and /home/user/notes.txt name two different files.
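
A quick illustration (the file names are arbitrary); the result assumes a case-sensitive filesystem, which is the Linux default:

```python
import os
import tempfile

d = tempfile.mkdtemp()
open(os.path.join(d, "Notes.txt"), "w").close()
open(os.path.join(d, "notes.txt"), "w").close()

# On a case-sensitive filesystem these are two distinct files;
# on a case-insensitive one the second open would reuse the first.
print(sorted(os.listdir(d)))
```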

True or False: context switching allows a CPU to swap from process A to process B and then return to the exact same place in instructions for process A after executing process B.

True. Context switching involves saving the current state of a process, including registers and program counters, and loading the saved state of another process. It allows the CPU to switch between processes and resume execution from the exact point where they were paused.

True or False: init is always the last process to be run at boot.

True, in the sense that init (process ID 1) is the last process the kernel itself launches at boot: after kernel initialization completes, the kernel starts init, which is the first user-space process and which in turn spawns every other user-space process.

True or False: a directory is a type of file.

True. A directory is a specialized file whose contents are entries mapping names to inodes (or equivalent on-disk structures).

True or False: on a machine with one CPU containing a single core, we can only run one process at a time.

True. On a machine with a single CPU and core, only one process can execute at any given time. The CPU can rapidly switch between processes, giving the illusion of concurrent execution.

What is the difference in how users locate a file vs how the operating system locates it vs how a process locates a file?

Users locate a file by its path name. The operating system resolves that path, component by component, to an inode (or equivalent on-disk structure), which records where the file's data blocks live. A running process locates an open file through a file descriptor, an index into its open-file table that the kernel maps back to the inode.

Compare and contrast using a byte sequence file structure versus using a tree file structure.

Using a byte sequence file structure involves storing files as a continuous sequence of bytes, allowing efficient sequential access but less efficient random access. In contrast, a tree file structure organizes files hierarchically, enabling efficient random access but introducing overhead for maintaining the tree structure.

Compare and contrast virtual and physical memory.

Virtual Memory: An abstraction that allows a process to access more memory than physically available. It uses disk storage as an extension of main memory. Physical Memory: Actual physical memory (RAM) available in the system. It directly corresponds to the addressable memory space.

What is virtualization? What portions of the OS is it associated with?

Virtualization is the process of creating virtual instances of resources, such as servers or operating systems, on a physical system. It is associated with both the hardware and software components of the operating system.

What do we do if we want to execute a program that is too large to hold in physical memory all at once?

We use demand paging or demand segmentation. The program's required pages or segments are loaded into memory as needed during execution.

Describe how paging handles page faults. You may draw a diagram if you think that would be helpful. Provide at least one example of a page replacement algorithm. What is the space-time trade-off associated with paging?

When a process references a page that is not in physical memory, the MMU raises a page fault and the CPU traps to the kernel. The kernel verifies that the reference is valid, locates the page on secondary storage (the swap area or the file backing it), and finds a free frame; if no frame is free, a page replacement algorithm selects a victim page to evict, writing it back to disk first if it is dirty. The faulting page is then read into the frame, the page table is updated, and the faulting instruction is restarted. Example replacement algorithms include FIFO (evict the oldest resident page) and LRU (evict the least recently used page). The space-time trade-off: paging lets a process use more address space than physical memory provides (a space win) at the cost of slow disk I/O on every fault (a time cost); likewise, smaller pages reduce internal fragmentation but enlarge page tables and tend to increase fault frequency.
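
As a sketch of one replacement algorithm, FIFO can be simulated in a few lines (the reference string and frame count here are made up for illustration):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with a fixed number of frames."""
    memory = deque()
    faults = 0
    for page in reference_string:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the oldest resident page
            memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, frames=3))    # 10
```

Rerunning with more frames generally lowers the fault count, which is exactly the space-time trade-off: more physical memory spent on frames buys fewer slow disk accesses.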

What are the two ways to share resources? Which portions of the OS are associated with each?

The two ways to share resources are message passing (inter-process communication mediated by the kernel) and shared memory. Message passing goes through kernel space on every exchange, while shared memory is set up by the kernel once and then accessed directly from user space.

