OS Final Example Questions
Which of these statements is true? (a) A mode switch precedes a context switch. (b) A context switch precedes a mode switch. (c) A context switch can occur without a mode switch. (d) A mode switch is just a different name for a context switch.
(a) A mode switch precedes a context switch.
A process control block is: (a) A structure that stores information about a single process. (b) The kernel's structure for keeping track of all the processes in the system. (c) A linked list of blocked processes (those waiting on some event). (d) A kernel interface for controlling processes (creating, deleting, suspending).
(a) A structure that stores information about a single process.
Which of the following techniques avoids the need for spinlocks? (a) Event counters (b) Test-and-set (c) Compare-and-swap (d) All of the above.
(a) Event counters
A dedicated system call instruction, such as SYSCALL, is: (a) Faster than a software interrupt. (b) More secure than a software interrupt. (c) More flexible than a software interrupt. (d) All of the above.
(a) Faster than a software interrupt.
Process aging: (a) Increases the priority of a process if it sits in the ready state for a long time. (b) Decreases the priority of a process each time the process gets to run. (c) Increases the priority of a process as it gets older. (d) Decreases the priority of a process as it gets older.
(a) Increases the priority of a process if it sits in the ready state for a long time.
The compare-and-swap instruction (CAS, or CMPXCHG on Intel systems) allows you to: (a) Modify a memory location only if its contents match a given value. (b) Exchange the contents of two memory locations if their values are different. (c) Exchange the contents of two memory locations if a lock is not set. (d) Exchange the contents of two memory locations if a lock is set.
(a) Modify a memory location only if its contents match a given value.
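As a rough illustration, here is the CAS semantics in Python, plus the spinlock it enables. The function names are made up, and a real CAS such as CMPXCHG executes as a single atomic hardware instruction, which plain Python code cannot reproduce:

```python
def compare_and_swap(cell, expected, new):
    """Modify cell[0] only if it currently holds `expected`; return the
    old value either way. In hardware this whole sequence runs as one
    atomic instruction (e.g., CMPXCHG)."""
    old = cell[0]
    if old == expected:
        cell[0] = new
    return old

def spin_acquire(lock):
    # A spinlock built on CAS: loop until we flip the lock from 0 to 1.
    while compare_and_swap(lock, 0, 1) != 0:
        pass  # busy wait

# cell = [5]
# compare_and_swap(cell, 5, 7)   # succeeds: cell[0] becomes 7
# compare_and_swap(cell, 5, 9)   # fails: cell[0] stays 7
```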
To implement a user-level threads package, it helps if the operating system provides: (a) Non-blocking system calls. (b) Kernel threads. (c) An execve mechanism. (d) Direct memory access
(a) Non-blocking system calls.
Every process gets the same share of the CPU with a: (a) Round-robin scheduler. (b) Shortest remaining time first scheduler. (c) Priority scheduler. (d) Multilevel feedback queues.
(a) Round-robin scheduler.
Which scheduler does not risk starvation of processes? (a) Round-robin scheduler. (b) Shortest remaining time first scheduler. (c) Priority scheduler. (d) Multilevel feedback queues.
(a) Round-robin scheduler.
Disk controllers tend to use Direct Memory Access (DMA) over Programmed I/O (PIO) because: (a) The CPU does not have to copy the disk data byte by byte. (b) Most disks are not programmable. (c) Transferring data to or from a disk is performed by the kernel. (d) The entire disk appears as memory to an operating system.
(a) The CPU does not have to copy the disk data byte by byte.
A context switch always takes place when: (a) The operating system saves the state of one process and loads another. (b) A process makes a system call. (c) A hardware interrupt takes place. (d) A process makes a function call.
(a) The operating system saves the state of one process and loads another.
Condition variables support these operations: (a) Wait / notify (b) Read-and-increment / wait-for-value (c) Increment / decrement-and-wait (d) Set-value / wait-for-value
(a) Wait / notify
When does preemption take place? (a) When a quantum expires. (b) When a process issues an I/O request. (c) When a process exits. (d) All of the above.
(a) When a quantum expires.
A condition variable enables a thread to go to sleep and wake up when: (a) The value of the variable is greater than or equal to some number N. (b) Another thread sends a signal to that variable. (c) Another thread increments the variable. (d) Another thread reads the variable
(b) Another thread sends a signal to that variable.
A multilevel feedback queue scheduler generally assigns a long quantum to: (a) High priority processes. (b) Low priority processes. (c) New processes. (d) Old processes.
(b) Low priority processes.
Which of the following is a policy, not a mechanism? (a) Create a thread. (b) Prioritize processes that are using the graphics card. (c) Send a message from one process to another. (d) Delete a file.
(b) Prioritize processes that are using the graphics card.
In contrast to a cooperative scheduler, a preemptive scheduler supports the following state transition: (a) Ready → running (b) Running → ready (c) Ready → blocked (d) Blocked → running
(b) Running → ready
What information is stored in a thread control block (TCB)? (a) List of open files. (b) Stack pointer. (c) Memory map. (d) Thread owner ID.
(b) Stack pointer.
A quantum is: (a) The absolute minimum time that a process can run. (b) The maximum time that a process can run before being preempted. (c) The amount of time that a process runs before it blocks on I/O. (d) The fraction of a time slice during which the process is running.
(b) The maximum time that a process can run before being preempted.
Which of the following does NOT cause a trap? (a) A user program divides a number by zero. (b) The operating system kernel executes a privileged instruction. (c) A programmable interval timer reaches its specified time. (d) A user program executes an interrupt instruction.
(b) The operating system kernel executes a privileged instruction. The kernel is already running in privileged mode, so executing a privileged instruction will not cause a violation.
A thread that is blocked on a semaphore is awakened when another thread: (a) Tries to decrement a semaphore's value below 0. (b) Tries to increment the semaphore. (c) Causes the semaphore's value to reach a specific number. (d) Tries to block on the same semaphore
(b) Tries to increment the semaphore.
The wait system call on UNIX systems puts a process to sleep until: (a) A semaphore wakes it up. (b) The specified elapsed time expires. (c) A child process terminates. (d) The process is preempted by another process.
(c) A child process terminates
Priority inversion occurs when: (a) A scheduler repeatedly schedules a low-priority thread over a high-priority one. (b) A high-priority thread is blocked, causing the scheduler to run a low-priority thread. (c) A scheduler schedules a high-priority thread when it would be better to schedule a low priority one. (d) A high-priority thread wakes up a low-priority thread, causing it to be scheduled.
(c) A scheduler schedules a high-priority thread when it would be better to schedule a low priority one.
Process aging is: (a) Computing the next CPU burst time via a weighted exponential average of previous bursts. (b) The measurement of elapsed CPU time during a process' execution. (c) Boosting a process' priority temporarily to get it scheduled to run. (d) Giving a process a longer quantum as it gets older.
(c) Boosting a process' priority temporarily to get it scheduled to run.
Multiprogramming is: (a) An executable program that is composed of modules built using different programming languages. (b) Having multiple processors execute different programs at the same time. (c) Keeping several programs in memory at once and switching between them. (d) When a program has multiple threads that run concurrently.
(c) Keeping several programs in memory at once and switching between them.
Switching between user level threads of the same process is often more efficient than switching between kernel threads because: (a) User level threads require tracking less state. (b) User level threads share the same memory address space. (c) Mode switching is not necessary. (d) Execution stays within the same process with user level threads
(c) Mode switching is not necessary. Options (a), (b), and (d) apply to kernel threads as well.
A process scheduler is responsible for moving processes between these states: (a) Ready and Blocked (b) Running and Blocked (c) Ready and Running (d) Ready, Running, and Blocked
(c) Ready and Running
Threads within the same process do not share the same: (a) Text segment (instructions). (b) Data segment. (c) Stack. (d) Open files.
(c) Stack.
A Thread Control Block (TCB) stores: (a) User (owner) ID (b) Memory map (c) The machine state (registers, program counter) (d) Open file descriptors
(c) The machine state (registers, program counter)
A process exists in the zombie (also known as defunct) state because: (a) It is running but making no progress. (b) The user may need to restart it without reloading the program. (c) The parent may need to read its exit status. (d) The process may still have children that have not exited.
(c) The parent may need to read its exit status.
What's the biggest problem with spinlocks? (a) They are vulnerable to race conditions. (b) They are fundamentally buggy. (c) They waste CPU resources. (d) They rely on kernel support and cannot be implemented at user level.
(c) They waste CPU resources
A race condition is: (a) When one process is trying to beat another to execute a region of code. (b) When a process cannot make progress because another one is blocking it. (c) When the outcome of processes is dependent on the exact order of execution among them. (d) A form of locking where processes coordinate for exclusive access to a critical section.
(c) When the outcome of processes is dependent on the exact order of execution among them.
Which process state transition is not valid? (a) Ready → Running (b) Running → Ready (c) Running → Blocked (d) Blocked → Running
(d) Blocked → Running
Which of the following is not a system call? (a) Duplicate an open file descriptor. (b) Get the current directory. (c) Decrement a semaphore. (d) Create a new linked list.
(d) Create a new linked list.
Which of the following is most likely to be a system call? (a) The implementation of a while loop in C. (b) Parse a token from a string. (c) Get the cosine of a number. (d) Get the time of day.
(d) Get the time of day
What does a time-sharing system need that a multiprogramming system does not? (a) Trap mechanism (b) Kernel mode execution privileges (c) Shorter time slices (d) Interval Timer
(d) Interval Timer
With DMA (Direct Memory Access): (a) The processor can read or write directly to a device. (b) The kernel can read or write directly to a process' memory without intermediate buffers. (c) A process can read or write to kernel memory without intermediate buffers. (d) The device can read or write directly to the system's memory.
(d) The device can read or write directly to the system's memory.
Two threads are considered to be asynchronous when: (a) They have no reliance on one another. (b) The outcome of a thread is dependent on the specific sequence of execution of both threads. (c) Only one thread is allowed to access a shared resource at a time. (d) The threads require occasional synchronization.
(d) The threads require occasional synchronization.
When a process is first launched, the operating system does not know the size of this segment: (a) text (b) data (c) bss (d) heap
(d) heap
On POSIX systems, one process can send a signal to another process via: (a) notify (b) signal (c) wait (d) kill
(d) kill
What are three advantages of threads over processes?
1. Creating threads and switching among threads is more efficient than doing so with processes.
2. Some programming is easier since all memory is shared among threads: there is no need to use messaging or create shared memory segments.
3. Depending on the implementation, a separate (or custom) scheduler may be used to schedule threads. This is more common for user-level threads.
List two events that may take a process to a ready state.
1. Startup: created → ready
2. Preemption: running → ready
3. I/O complete: blocked → ready
In a thread-aware operating system, a process control block (PCB) no longer includes: (a) saved registers. (b) process owner. (c) open files. (d) memory map.
Answer: (a) Since each thread has its own register set, the registers have to be saved per thread in the Thread Control Block (TCB). The owner, process memory, and open files are shared among all threads in a process so they can still be tracked in the Process Control Block.
Which component of a process is not shared across threads? (a) Register values. (b) Heap memory. (c) Global variables. (d) Program memory.
Answer: (a) All memory is shared across threads that belong to the same process. The unique component per thread is the register set: all the processor's registers, including the stack pointer and program counter.
Which of these is not a component of the operating system? (a) Boot loader. (b) Process scheduler. (c) System memory manager. (d) File system driver.
Answer: (a) The boot loader is used to load the operating system but is not needed once the OS is loaded.
A semaphore puts a thread to sleep: (a) if it tries to decrement the semaphore's value below 0. (b) if it increments the semaphore's value above 0. (c) until another thread issues a notify on the semaphore. (d) until the semaphore's value reaches a specific number.
Answer: (a) The two operations on a semaphore are down(s) and up(s). Down(s) decrements the semaphore s but does not allow its value to go below 0; if the decrement would do that, the value stays at 0 and the process blocks until another process does an up(s).
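The down/up behavior described above can be sketched in Python with a mutex and condition variable. This is an illustrative sketch, not how a kernel implements it, and the class and method names are made up:

```python
import threading

class CountingSemaphore:
    """Sketch of down()/up(): the value never goes below 0; a thread
    that tries to push it below 0 sleeps until an up() arrives."""
    def __init__(self, value=0):
        self.value = value
        self.cond = threading.Condition()

    def down(self):
        with self.cond:
            while self.value == 0:
                self.cond.wait()      # block until someone calls up()
            self.value -= 1

    def up(self):
        with self.cond:
            self.value += 1
            self.cond.notify()        # wake one blocked thread, if any
```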
Implementing preemption in operating systems relies on: (a) a programmable interval timer. (b) being able to switch to kernel mode. (c) having a modular operating system design. (d) programmable I/O.
Answer: (a) To get preemption to work, you need to be able to get control away from the currently running process. Programming an interval timer to generate periodic interrupts forces the operating system to get control at regular intervals so that it can decide whether or not to preempt the running process.
A processor switches execution from user mode to privileged mode via: (a) a software interrupt. (b) a programmable I/O. (c) a hardware interrupt. (d) memory mapped I/O.
Answer: (a) or (c). Tricky, since there are two valid answers: a CPU will enter kernel (privileged) mode via either a hardware interrupt or a trap (a software interrupt). N.B., if the question asked how a process switches from user to kernel mode, then the answer would be (a).
The memory map of a multithreaded process looks similar to that of a singlethreaded process except that the multithreaded one has: (a) a copy of the data segment per thread. (b) a stack for each thread. (c) a heap for each thread. (d) a heap and stack for each thread.
Answer: (b) Each thread requires a separate stack to be allocated for it (from the memory that is shared among all threads). The heap contains dynamically allocated memory (e.g., via malloc) and is shared by all. The data segment contains global variables, which are also shared.
Multiprogramming (vs. multitasking) allows the operating system to: (a) interrupt a process to run another process. (b) switch execution to another process when the first process blocks on I/O or makes a system call. (c) allow a single process to take advantage of multiple processors. (d) distribute multiple processes across multiple processors in a system.
Answer: (b). The distinction between multiprogramming and multitasking is that multitasking allows preemption. Multiprogramming relies on a process relinquishing the CPU by calling the kernel (a system call; Windows had a yield system call that did nothing but relinquish the processor). (c) is true for both multiprogramming and multitasking; (a) is true only for multitasking; (d) may be true for both if it is a multiprocessing system.
A CPU burst is: (a) an example of priority inversion where a low priority process gets access to the CPU. (b) a temporary increase in the priority of a process to ensure that it gets the CPU. (c) an unexpected increase in a process' need for computation. (d) the period of time that a process uses the processor between I/O requests.
Answer: (d) A CPU burst is the period of time that a process is executing instructions before it gets to a waiting state on some event (usually I/O but also things like messages or semaphores).
A process control block (PCB) exists only for processes in: (a) the ready state. (b) ready and running states. (c) ready and blocked states. (d) ready, running, and blocked states
Answer: (d) The PCB exists for processes in all states until the process exits and the parent picks up its exit via wait.
A shortest remaining time first scheduler: (a) dynamically adjusts the quantum based on the process. (b) favors processes that use the CPU for long stretches of time. (c) gives each process an equal share of the CPU. (d) tries to optimize mean response time for processes.
Answer: (d) A SRTF scheduler tries to estimate the next CPU burst of each process by weighting past CPU bursts. It then sorts processes based on this estimated burst time. This causes short-CPU-burst processes to be scheduled first and optimizes the average response time.
Which statement about multilevel feedback queues is false? Multilevel feedback queues: (a) assign processes dynamically into priority classes. (b) give low priority processes a longer quantum. (c) favor interactive processes. (d) prioritize processes into classes based on their estimated CPU burst
Answer: (d) A multilevel feedback queue scheduler does not compute an estimate of the CPU burst. It simply drops a process to the next lower priority queue if the process used up its entire time slice in the current CPU burst. If it did not, then it remains in its current queue.
Compared to a non-preemptive scheduler, a preemptive scheduler can move processes from the: (a) running to the blocked state. (b) ready to the running state. (c) blocked to the ready state. (d) running to the ready state.
Answer: (d) A preemptive scheduler can stop a running process and move it back to the ready state so that another process can run in its place.
A round robin scheduler: (a) favors processes that are expected to have a short CPU burst. (b) favors high priority processes. (c) dynamically adjusts the priority of processes based on their past CPU usage. (d) gives each process an equal share of the CPU.
Answer: (d) A round robin scheduler is basically first-come, first-served with preemption: a process can run until it blocks or until its time slice expires. Then it's put at the end of the queue and every other process that's ready to run gets to run. No estimation of CPU burst, no priorities.
The alternative to programmed I/O (PIO) for data transfer is: (a) memory-mapped I/O. (b) interrupt-driven I/O. (c) independent I/O. (d) direct memory access (DMA).
Answer: (d) PIO requires using the CPU to read data from device registers that are mapped onto the system's memory. The alternative is DMA, in which the device controller accesses the memory bus to transfer data to/from memory. Not (a): Memory-mapped I/O is the same as PIO: the processor has to issue instructions to transfer data to/from the device memory. Not (b): Interrupts may occur with either mode to let you know that the device is ready. The term "interrupt-driven I/O" doesn't really make much sense. Not (c): No such thing.
Process aging: (a) helps estimate the length of the next compute period. (b) increases the runtime of a process. (c) is a count of the total CPU time used by the process and is stored in the PCB. (d) improves the chances of a process getting scheduled to run.
Answer: (d) Process aging is when you temporarily increase the priority of a low-priority process that has not been run in a while to ensure that it gets a chance to get scheduled to run.
Computing the weighted exponential average of previous CPU cycles is used for: (a) determining the length of a quantum for the process. (b) allowing a priority scheduler to assign a priority to the process. (c) having a round-robin scheduler sort processes in its queue. (d) estimating the length of the next cycle
Answer: (d) This is used to estimate the next CPU burst time. The assumption is that the CPU burst period will be related to previous CPU burst periods.
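The weighted exponential average is usually written tau(n+1) = alpha*t(n) + (1 - alpha)*tau(n), where t(n) is the measured burst and tau(n) the previous estimate. A minimal sketch (the function name and the sample alpha values are illustrative):

```python
def next_burst_estimate(alpha, last_burst, last_estimate):
    """Weighted exponential average: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n).
    An alpha near 1 weights recent behavior heavily; an alpha near 0
    weights accumulated history heavily."""
    return alpha * last_burst + (1 - alpha) * last_estimate

# With alpha = 0.5, a previous estimate of 10 and a measured burst of 6
# yield a new estimate of 8.0.
```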
How does Multilevel Feedback Queue scheduling try to approximate Shortest Remaining Time First (SRTF) scheduling? How does it not have the disadvantage of SRTF?
Both try to give high priority to processes that exhibited short CPU bursts. The advantage of MFQ is that there is no need for an estimation function; the decision is automatic based on whether the process used up its allotted time slice last time. Moreover, there is no need to keep a sorted queue. Insertions can be O(1).
What is busy waiting? How can it lead to priority inversion?
Busy waiting, also known as a spin lock, is the situation when a thread is looping, continuously checking whether a condition exists that will allow it to continue. It is often used to check if you can enter a critical section or to poll a device to see if data is ready (or written/sent). Busy waiting can lead to priority inversion under the following situation: Process A is a low priority process that is in a critical section. Process B is a high priority process that is executing a spin lock trying to enter the critical section. A priority scheduler always lets B run over A since it's a higher-priority process. A never gets to run so it can release its lock on the critical section and allow B to run. The situation is called priority inversion because B is not doing useful work; it is essentially blocked by A from making progress.
Explain what is meant by chain loading.
Chain loading is the process of using multiple boot loaders. A primary boot loader loads and runs a program that happens to be another boot loader. That, in turn, may load a third boot loader or an operating system. Why do we use chain loading? Often, there are severe size constraints on the primary boot loader (e.g., with a BIOS, it has to fit in under 440 bytes of memory). The primary boot loader may load a secondary boot loader that can give you a choice of which OS to load, check for errors, and parse a file system.
If short time slices improve interactive performance, what is the downside?
Context switch overhead: The number of context switches increases.
The order in which disk blocks are scheduled in flash memory is not critical. True or False
True. There is no concept of seeking.
What is the difference between deadlock and starvation?
Deadlock: a process can make no progress even if it is scheduled because there is a circular dependency on resources coupled with exclusive (locked) access. Starvation: the scheduler never schedules the process (N.B.: it does not need to be a low-priority process; that's up to the scheduler).
A process executes faster when running in kernel mode than when running in user mode. True or False?
False
A system call always results in a context switch. True or False
False
Programmed I/O (PIO) uses fewer CPU resources than DMA. True or False
False
UNIX-derived systems execute new programs via a two-step process of fork and execve. Other systems provide a single system call to run a new program. Explain an advantage of using this two-step approach.
It allows a child process to do prep work using settings from the parent process, such as setting up standard input, output, and error file descriptors (including pipes) and the current directory prior to running the program.
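A sketch of that prep work, using Python's POSIX wrappers (assumes a Unix-like system with echo on the PATH; the function name is made up): between fork and exec, the child redirects its standard output into a pipe, something a single spawn-style call would need special options for.

```python
import os

def run_with_stdout_pipe(path, argv):
    """Fork, do prep work in the child, then exec a new program."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                     # child
        os.close(r)
        os.dup2(w, 1)                # prep work: stdout now feeds the pipe
        os.close(w)
        try:
            os.execvp(path, argv)    # replace the process image; no return
        finally:
            os._exit(127)            # only reached if exec fails
    os.close(w)                      # parent
    with os.fdopen(r) as pipe_end:
        output = pipe_end.read()
    os.waitpid(pid, 0)               # pick up the child's exit status
    return output
```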
What is the difference between a mode switch and a context switch?
Mode switch: a change of the CPU execution mode from one privilege level to another, e.g., user → kernel via a trap or system call. Context switch: saving one process' execution context and restoring that of another process.
Each page table needs to contain a reference to every page frame in the system. True or False
False. A page table references only the page frames that the process uses.
Is it possible to construct a secure operating system on processors that do not provide a privileged mode of operation? Explain why or why not.
No. Since there is no distinction between privileged (kernel) and unprivileged (user) mode, a user process can do anything that the operating system kernel can do, such as accessing I/O devices or disabling interrupts. A process can, for example, read or modify any part of the disk or keep other processes from running. Important! The most common wrong answer was explaining that you cannot have an operating system without privileged/unprivileged modes. This is not true. Many early processors, such as the 8086, did not have privileged modes. A process was able to access all of memory, modify interrupts, and access any I/O devices if it wanted to.
Given that we can create user-level code to control access to critical sections (e.g., Peterson's algorithm), why is it important for an operating system to provide synchronization facilities such as semaphores in the kernel?
The question is about offering synchronization services via the kernel rather than via user-level code. Kernel-provided synchronization avoids busy waiting: the waiting thread can go to sleep, which gives better CPU utilization and avoids priority inversion (where that is an issue).
What should be done in the kernel to limit the danger of a fork bomb?
Set a per-user limit on the number of processes that can be created. Set a limit on the number of child processes (including arbitrarily deep grandchildren) that a process can create.
To a programmer, a system call looks just like a function call. Explain the difference in the underlying implementation.
The main difference is that a system call invokes a mode switch via a trap (or a system call instruction, which essentially does the same thing). [other distinctions could be the way parameters are passed and that the specific system call is identified by a number since all system calls share the same entry point in the kernel]
Multilevel queues allow multiple processes to share the same priority level. True or False
True
Software interrupts are synchronous with the current process. True or False
True
Switching among threads in the same process is more efficient than switching among processes. True or False
True
The value of a semaphore can never be negative. True or False
True
What is the potential danger in running this code? while (1) fork();
Unbounded growth in the number of processes will lead to a kernel memory shortage (process list) and user memory shortage (each process requires memory).
What is the advantage of having different time slice lengths at different levels in a multilevel feedback queue?
We expect interactive (I/O-intensive) processes not to use long stretches of the CPU (short CPU bursts). We reward these by giving them a high priority for execution. Processes that have longer CPU bursts get progressively longer time slices but increasingly lower priority levels. This means that they'll get scheduled less often but, when they do, they will get to run for a longer time. This reduces the overhead of context switching.
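The time-slice and demotion rules above can be sketched as follows. The base quantum of 8 ticks, the doubling factor, and the number of levels are illustrative assumptions, not fixed by the algorithm:

```python
def quantum(level, base=8):
    """Time slice for a queue level; each lower-priority level doubles it.
    (Base of 8 ticks and the doubling factor are illustrative choices.)"""
    return base << level

def next_level(level, used_full_slice, lowest=3):
    """Demote a process that consumed its entire quantum; otherwise it
    stays at its current level."""
    if used_full_slice and level < lowest:
        return level + 1
    return level
```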
Under what conditions will you reach a point of diminishing returns where adding more memory may improve performance only a little?
When there is enough memory in the computer to hold the working set of each running process.
A clock page replacement algorithm tries to approximate a Least Recently Used (LRU) algorithm. True or False
True. Pages that were referenced (reference bit set) are skipped over; only if we don't find unreferenced pages do we go back and grab one that was recently referenced.
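A sketch of that sweep (list-based for illustration; a real implementation works over page frame metadata, and the function name is made up):

```python
def clock_select_victim(frames, hand):
    """frames: list of [page, referenced] pairs arranged in a circle.
    Sweep from `hand`, giving referenced pages a second chance by
    clearing their bit; evict the first unreferenced page found."""
    n = len(frames)
    while True:
        if frames[hand][1]:               # recently referenced?
            frames[hand][1] = 0           # second chance: clear and move on
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n   # victim index, next hand position
```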
The Shortest Seek Time First (SSTF) algorithm has a danger of starving some requests. True or False
True. If there's a continuous stream of disk activity, requests for outlying blocks may be deferred indefinitely if there are always blocks closer to the current head position ready to be scheduled.
A system with 32-bit addresses, 1 GB (2^30 bytes) main memory, and a 1 megabyte (20-bit) page size will have an inverted page table that contains: a. 1,024 (1K, 2^10) entries. b. 4,096 (4K, 2^12) entries. c. 1,048,576 (1M, 2^20) entries. d. 4,294,967,296 (4G, 2^32) entries.
a. 1,024 (1K, 2^10) entries. An inverted page table has one entry per page frame. Main memory = 1 GB; page size = 1 MB. Number of page frames = 1 GB / 1 MB = 1K (1,024).
A system with 32-bit addresses, 1 GB (2^30 bytes) main memory, and a 1 megabyte (20-bit) page size will have a page table that contains: a. 4,096 (4K, 2^12) entries. b. 4,294,967,296 (4G, 2^32) entries. c. 1,048,576 (1M, 2^20) entries. d. 1,024 (1K, 2^10) entries.
a. 4,096 (4K, 2^12) entries. The amount of main memory does not matter. If the page offset takes 20 bits, then the page number takes the remaining 32 - 20 = 12 bits: 2^12 = 4,096 entries.
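The arithmetic behind this answer and the inverted-page-table question can be checked directly (Python used here just as a calculator; the function names are made up):

```python
def page_table_entries(address_bits, offset_bits):
    # One entry per virtual page: 2^(address bits - offset bits).
    return 1 << (address_bits - offset_bits)

def inverted_page_table_entries(memory_bytes, page_bytes):
    # One entry per physical page frame.
    return memory_bytes // page_bytes

# 32-bit addresses, 1 MB (20-bit) pages -> 4,096 page table entries.
# 1 GB of memory, 1 MB pages -> 1,024 inverted page table entries.
```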
In a conventional paging system, a page table entry (PTE) will not contain: a. A logical page number. b. A physical page frame number. c. A page residence bit. d. Page permissions
a. A logical page number. a) Right. The logical number is used as an index into the page table. b) The physical page frame number is the frame that corresponds to the logical page (virtual address). It's the most important item in the PTE. c) This tells us if the PTE entry maps to a valid page or not. d) This defines the access permissions for a page (e.g., read-write, no-execute, read-only).
Segmentation is a form of: a. Base & limit addressing. b. Direct-mapped paging. c. Multi-level page tables. d. Base & limit addressing followed by a page table lookup.
a. Base & limit addressing. a) Yes. Each segment (code, data, stack, others) gets its own base & limit. b) No. Segmentation ≠ paging. c) No. Segmentation ≠ paging. d) No. The Intel architecture is the only one that supports a hybrid mode with segmentation followed by paging but that is a hybrid mode, not segmentation.
A buffer cache is useful only for: a. Block devices. b. Character devices. c. Network devices. d. Block and network devices.
a. Block devices. - A buffer cache is used for block addressable storage. - Data in character and network devices is not addressable.
Unlike full data journaling, ordered journaling: a. Improves performance by not writing file data into the journal. b. Makes sure that all journal entries are written in a consistent sequence. c. Provides improved consistency by writing the data before any metadata is written. d. Imposes no order between writing data blocks and writing metadata journal entries.
a. Improves performance by not writing file data into the journal. a) Right. File data is written first, then metadata is journaled. b) Full data journaling does this too. c) No. It provides worse consistency. d) No. It imposes a strict order. File data first, then journal.
Monitoring page fault frequency of a process allows us to: a. Manage page frame allocation per process. b. Adjust the size of the TLB for optimum performance. c. Determine if the process is I/O bound or CPU intensive. d. Identify the number of illegal instructions and invalid memory accesses within a program.
a. Manage page frame allocation per process. a) Yes. It lets us decide if a process is thrashing (too few pages) or not paging enough (too many pages in memory). b) Nothing to do with the TLB. Also, you cannot adjust the size of the TLB; it's fixed in hardware. c) No. You can determine if it's generating page faults but that's it. d) Invalid address references also generate page faults but that's not what monitoring the page fault frequency accomplishes.
Memory compaction refers to: a. Moving regions of memory to get rid of external fragmentation. b. Compressing a region of memory to have it consume less space. c. Getting rid of large empty (zero) sections within a region of memory. d. Configuring a system to run in less memory than is actually available
a. Moving regions of memory to get rid of external fragmentation.
The use of clusters in a file system does NOT: a. Reduce internal fragmentation in a file. b. Increase the amount of contiguous blocks in a file. c. Reduce the number of blocks we need to keep track of per file. d. Improve file data access performance.
a. Reduce internal fragmentation in a file. A cluster is just a logical block: a fixed group of disk blocks. a) Clustering reduces external fragmentation but increases internal fragmentation. With larger cluster sizes, a file may get more space than it needs. b) Yes. A cluster = contiguous blocks. c) Yes. We need to keep track of clusters, not individual blocks. d) Yes. (1) Accessing contiguous blocks is faster on disks, (2) Lower likelihood of needing to access indirect blocks.
The reason for using a multilevel page table is to: a. Reduce the amount of memory used for storing page tables. b. Make table lookups more efficient than using a single-level table. c. Make it easier to find unused page frames in the system. d. Provide a hierarchy to manage different sections of a program.
a. Reduce the amount of memory used for storing page tables. a) Yes! b) No. A multi-step lookup is less efficient than a single lookup. c) No. Traversing all page tables across all processes to look for unused frames is horribly inefficient. d) No. That's segmentation.
A File Allocation Table: a. Stores a list of blocks for every single file in the file system. b. Stores file names and the blocks of data that each file in the file system uses. c. Is a table-driven way to store file data. d. Is a bitmap identifying unused blocks that can be used for file data.
a. Stores a list of blocks for every single file in the file system. a) Yes. A FAT implements linked allocation. Each FAT entry represents a cluster. The table contains blocks for all files. b) File names are stored as data in directory files. c) This makes no sense. d) No. That's a block bitmap.
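A sketch of the linked allocation described above: each FAT entry names the next cluster of a file. The dict-based table and the -1 end marker are simplifications; real FAT variants use reserved cluster values for end-of-chain:

```python
END_OF_CHAIN = -1   # stand-in for FAT's reserved end-of-chain values

def fat_chain(fat, first_cluster):
    """Follow a file's cluster chain through the FAT, starting from the
    first cluster recorded in the file's directory entry."""
    chain = []
    cluster = first_cluster
    while cluster != END_OF_CHAIN:
        chain.append(cluster)
        cluster = fat[cluster]     # each entry points at the next cluster
    return chain
```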
Which cannot be a valid page size? a. 32 bytes. b. 1,024 bytes. c. 3,072 bytes. d. 1,048,576 bytes.
c. 3,072 bytes. A page size must be a power of two. 3,072 is not a power of two.
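The power-of-two check is a one-line bit trick (a power of two has exactly one set bit, so n & (n-1) clears it to zero):

```python
def is_valid_page_size(n):
    """Page sizes must be nonzero powers of two so that a virtual
    address splits cleanly into page-number and offset bits."""
    return n > 0 and (n & (n - 1)) == 0
```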
Dynamic linking: a. Brings in libraries as needed to create the final executable file that can then be loaded and run. b. Links libraries to create the final executable file without having the user identify them. c. Loads libraries when the executable file first references those libraries. d. Is a technique for re-linking an executable file with newer versions of libraries.
c. Loads libraries when the executable file first references those libraries.
A disk drive using the Circular LOOK (C-LOOK) algorithm just wrote block 1203 and then read block 1204. The following blocks are queued for I/O: 900, 1200, 1800, 2500. In what order will they be scheduled? a. 1200, 900, 1800, 2500 b. 1200, 900, 2500, 1800 c. 1800, 2500, 1200, 900 d. 1800, 2500, 900, 1200
d. 1800, 2500, 900, 1200 C-LOOK schedules requests in sequence based on the current position and direction of the disk head. Requests are scheduled in one direction only (disk seeks back to the earliest block) Current block: 1204. Blocks to be scheduled next: 1800, 2500. Then seek back to 900 (lowest block) and schedule I/O for 900,1200.
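The C-LOOK ordering described above can be sketched as follows (the function name is made up; a real scheduler maintains a sorted queue rather than re-sorting per request):

```python
def c_look_order(current, requests):
    """Serve pending requests at or beyond the current head position in
    ascending order, then wrap around to the lowest outstanding request
    and continue in the same direction."""
    ahead = sorted(r for r in requests if r >= current)
    behind = sorted(r for r in requests if r < current)
    return ahead + behind
```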
Which scheduling algorithm makes the most sense for flash memory? a. Shortest seek time first (SSTF). b. SCAN. c. LOOK. d. First come, first served (FCFS).
d. First come, first served (FCFS). There is no penalty for (or concept of) seek time in flash memory. Hence, scheduling is pointless. FCFS is just plain FIFO ordering and does not attempt to resequence the queue of blocks.
The following is not an example of a character device: a. Mouse. b. Sound card. c. USB-connected printer. d. Flash memory.
d. Flash memory. a-c are all character devices. Flash memory is a block device and can hold a file system.