COMP301


what does a C hello world look like

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, World!\n");
    return 0;
}

City streets are vulnerable to a circular blocking condition called gridlock, in which intersections are blocked by cars that then block cars behind them that then block the cars that are trying to enter the previous intersection, etc. All intersections around a city block are filled with vehicles that block the oncoming traffic in a circular manner. Gridlock is a resource deadlock and a problem in competition synchronization. New York City's prevention algorithm, called "don't block the box," prohibits cars from entering an intersection unless the space following the intersection is also available. Which prevention algorithm is this? Can you provide any other prevention algorithms for gridlock?

"Don't block the box" is a pre-allocation strategy, negating the hold-and-wait deadlock precondition, since we assume that cars can enter the street space following the intersection, thus freeing the intersection. Another strategy might allow cars to temporarily pull into garages and release enough space to clear the gridlock. Some cities have a traffic control policy to shape traffic; as city streets become more congested, traffic supervisors adjust the settings for red lights in order to throttle traffic entering heavily congested areas. Lighter traffic ensures less competition over resources and thus lowers the probability of gridlock occurring.

Disk requests come in to the disk driver for cylinders 10, 22, 20, 2, 40, 6, and 38, in that order. A seek takes 6 msec per cylinder. How much seek time is needed for (a) First-come, first-served. (b) Closest cylinder next. (c) Elevator algorithm (initially moving upward). In all cases, the arm is initially at cylinder 20.

(a) 10 + 12 + 2 + 18 + 38 + 34 + 32 = 146 cylinders = 876 msec. (b) 0 + 2 + 12 + 4 + 4 + 36 +2 = 60 cylinders = 360 msec. (c) 0 + 2 + 16 + 2 + 30 + 4 + 4 = 58 cylinders = 348 msec.
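The three policies can be sketched in C to check this arithmetic; the request list and starting cylinder come from the problem, while the function names and structure are ours.

```c
/* Seek-distance sketch for the three policies above.
   Requests and the starting cylinder are taken from the problem. */
#include <stdlib.h>
#include <string.h>

#define NREQ 7
static const int REQS[NREQ] = {10, 22, 20, 2, 40, 6, 38};

/* (a) First-come, first-served: serve requests in arrival order. */
int fcfs(int pos, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) { total += abs(req[i] - pos); pos = req[i]; }
    return total;
}

/* (b) Closest cylinder next (shortest seek first). */
int ssf(int pos, const int *req, int n) {
    int total = 0, done[NREQ] = {0};
    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && (best < 0 || abs(req[i] - pos) < abs(req[best] - pos)))
                best = i;
        total += abs(req[best] - pos);
        pos = req[best];
        done[best] = 1;
    }
    return total;
}

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* (c) Elevator, initially moving upward: sweep up, then sweep down. */
int elevator_up(int pos, const int *req, int n) {
    int s[NREQ], total = 0, start = pos;
    memcpy(s, req, n * sizeof(int));
    qsort(s, n, sizeof(int), cmp);
    for (int i = 0; i < n; i++)          /* upward sweep */
        if (s[i] >= start) { total += s[i] - pos; pos = s[i]; }
    for (int i = n - 1; i >= 0; i--)     /* downward sweep */
        if (s[i] < start) { total += pos - s[i]; pos = s[i]; }
    return total;
}
```

With the problem's data, fcfs(20, REQS, NREQ) returns 146, ssf returns 60, and elevator_up returns 58 cylinders; multiplying each by 6 msec per cylinder reproduces (a)-(c).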

Using the page table of Fig. 3-9, give the physical address corresponding to each of the following virtual addresses: (a) 20 (b) 4100 (c) 8300

(a) 8212. (b) 4100. (c) 24684.

In which of the four I/O software layers is each of the following done? (a) Computing the track, sector, and head for a disk read. (b) Writing commands to the device registers. (c) Checking to see if the user is permitted to use the device. (d) Converting binary integers to ASCII for printing.

(a) Device driver. (b) Device driver. (c) Device-independent software. (d) User-level software.

One way to place a character on a bitmapped screen is to use BitBlt from a font table. Assume that a particular font uses characters that are 16 × 24 pixels in true RGB color. (a) How much font table space does each character take? (b) If copying a byte takes 100 nsec, including overhead, what is the output rate to the screen in characters/sec?

(a) Each pixel takes 3 bytes in RGB, so the table space is 16 × 24 × 3 bytes, which is 1152 bytes. (b) At 100 nsec per byte, each character takes 115.2 μsec. This gives an output rate of about 8681 chars/sec.

Suppose that a system uses DMA for data transfer from disk controller to main memory. Further assume that it takes t1 nsec on average to acquire the bus and t2 nsec to transfer one word over the bus (t1 >> t2). After the CPU has programmed the DMA controller, how long will it take to transfer 1000 words from the disk controller to main memory, if (a) word-at-a-time mode is used, (b) burst mode is used? Assume that commanding the disk controller requires acquiring the bus to send one word and acknowledging a transfer also requires acquiring the bus to send one word.

(a) Word-at-a-time mode: 1000 × [(t1 + t2) + (t1 + t2) + (t1 + t2)], where the first term is for acquiring the bus and sending the command to the disk controller, the second term is for transferring the word, and the third term is for the acknowledgement. All in all, a total of 3000 × (t1 + t2) nsec. (b) Burst mode: (t1 + t2) + t1 + 1000 × t2 + (t1 + t2), where the first term is for acquiring the bus and sending the command to the disk controller, the second term is for the disk controller to acquire the bus, the third term is for the burst transfer, and the fourth term is for acquiring the bus and doing the acknowledgement. All in all, a total of 3t1 + 1002t2 nsec.
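The two formulas can be checked with a small sketch; the function names and the sample t1/t2 values are ours.

```c
/* Sketch of the two DMA timing formulas above (times in nsec).
   t1 = bus acquisition time, t2 = one-word transfer time. */

long word_at_a_time(long t1, long t2, long nwords) {
    /* Per word: command (t1 + t2) + transfer (t1 + t2) + acknowledge (t1 + t2). */
    return nwords * 3 * (t1 + t2);
}

long burst_mode(long t1, long t2, long nwords) {
    /* Command (t1 + t2) + controller bus acquisition t1
       + burst of nwords word transfers + final acknowledgement (t1 + t2). */
    return (t1 + t2) + t1 + nwords * t2 + (t1 + t2);
}
```

For example, with the made-up values t1 = 400 and t2 = 10, word-at-a-time costs 1,230,000 nsec while burst mode costs 11,220 nsec, showing why t1 >> t2 makes burst mode dominate.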

Suppose that a machine has 38-bit virtual addresses and 32-bit physical addresses. (a) What is the main advantage of a multilevel page table over a single-level one? (b) With a two-level page table, 16-KB pages, and 4-byte entries, how many bits should be allocated for the top-level page table field and how many for the next-level page table field? Explain.

(a) A multilevel page table reduces the number of actual pages of the page table that need to be in memory because of its hierarchic structure. In fact, in a program with lots of instruction and data locality, we only need the top-level page table (one page), one instruction page, and one data page. (b) Allocate 12 bits for each of the two page table fields. The offset field requires 14 bits to address 16 KB. That leaves 24 bits for the page fields. Since each entry is 4 bytes, one page can hold 2^12 page table entries and therefore requires 12 bits to index one page. So allocating 12 bits for each of the page fields will address all 2^38 bytes.
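The 12/12/14 split can be sketched as bit-field extraction; the macro and function names here are our own labels, not from the text.

```c
/* Sketch: splitting a 38-bit virtual address per the answer above:
   12-bit top-level index | 12-bit second-level index | 14-bit offset. */
#include <stdint.h>

#define OFFSET_BITS 14
#define PT2_BITS    12
#define PT1_BITS    12

uint32_t top_index(uint64_t va) {
    return (uint32_t)((va >> (OFFSET_BITS + PT2_BITS)) & ((1u << PT1_BITS) - 1));
}

uint32_t second_index(uint64_t va) {
    return (uint32_t)((va >> OFFSET_BITS) & ((1u << PT2_BITS) - 1));
}

uint32_t page_offset(uint64_t va) {
    return (uint32_t)(va & ((1u << OFFSET_BITS) - 1));
}
```

For example, virtual address 16384 (= 2^14, the start of the second page) has offset 0 and second-level index 1.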

A personal computer salesman visiting a university in South-West Amsterdam remarked during his sales pitch that his company had devoted substantial effort to making their version of UNIX very fast. As an example, he noted that their disk driver used the elevator algorithm and also queued multiple requests within a cylinder in sector order. A student, Harry Hacker, was impressed and bought one. He took it home and wrote a program to randomly read 10,000 blocks spread across the disk. To his amazement, the performance that he measured was identical to what would be expected from first-come, first-served. Was the salesman lying?

Not necessarily. A UNIX program that reads 10,000 blocks issues the requests one at a time, blocking after each one is issued until after it is completed. Thus the disk driver sees only one request at a time; it has no opportunity to do anything but process them in the order of arrival. Harry should have started up many processes at the same time to see if the elevator algorithm worked.

In the preceding question, which resources are preemptable and which are nonpreemptable?

The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that.

Explain the differences between deadlock, livelock, and starvation.

A deadlock occurs when a set of processes are blocked waiting for an event that only some other process in the set can cause. On the other hand, processes in a livelock are not blocked. Instead, they continue to execute checking for a condition to become true that will never become true. Thus, in addition to the resources they are holding, processes in livelock continue to consume precious CPU time. Finally, starvation of a process occurs because of the presence of other processes as well as a stream of new incoming processes that end up with higher priority than the process being starved. Unlike deadlock or livelock, starvation can terminate on its own, e.g. when existing processes with higher priority terminate and no new processes with higher priority arrive.

Some digital consumer devices need to store data, for example as files. Name a modern device that requires file storage and for which contiguous allocation would be a fine idea.

A digital still camera records some number of photographs in sequence on a nonvolatile storage medium (e.g., flash memory). When the camera is reset, the medium is emptied. Thereafter, pictures are recorded one at a time in sequence until the medium is full, at which time they are uploaded to a hard disk. For this application, a contiguous file system inside the camera (e.g., on the picture storage medium) is ideal.

A disk rotates at 7200 RPM. It has 500 sectors of 512 bytes around the outer cylinder. How long does it take to read a sector?

At 7200 RPM, there are 120 rotations per second, so 1 rotation takes about 8.33 msec. Dividing this by 500 we get a sector time of about 16.67 μsec.

Why are optical storage devices inherently capable of higher data density than magnetic storage devices? Note: This problem requires some knowledge of high-school physics and how magnetic fields are generated.

A magnetic field is generated between two poles. Not only is it difficult to make the source of a magnetic field small, but also the field spreads rapidly, which leads to mechanical problems trying to keep the surface of a magnetic medium close to a magnetic source or sensor. A semiconductor laser generates light in a very small place, and the light can be optically manipulated to illuminate a very small spot at a relatively great distance from the source.

A local area network is used as follows. The user issues a system call to write data packets to the network. The operating system then copies the data to a kernel buffer. Then it copies the data to the network controller board. When all the bytes are safely inside the controller, they are sent over the network at a rate of 10 megabits/sec. The receiving network controller stores each bit a microsecond after it is sent. When the last bit arrives, the destination CPU is interrupted, and the kernel copies the newly arrived packet to a kernel buffer to inspect it. Once it has figured out which user the packet is for, the kernel copies the data to the user space. If we assume that each interrupt and its associated processing takes 1 msec, that packets are 1024 bytes (ignore the headers), and that copying a byte takes 1 μsec, what is the maximum rate at which one process can pump data to another? Assume that the sender is blocked until the work is finished at the receiving side and an acknowledgement comes back. For simplicity, assume that the time to get the acknowledgement back is so small it can be ignored.

A packet must be copied four times during this process, which takes 4.1 msec. There are also two interrupts, which account for 2 msec. Finally, the transmission time is 0.83 msec, for a total of 6.93 msec per 1024 bytes. The maximum data rate is thus 147,763 bytes/sec, or about 12% of the nominal 10 megabit/sec network capacity. (If we include protocol overhead, the figures get even worse.)

If an instruction takes 1 nsec and a page fault takes an additional n nsec, give a formula for the effective instruction time if page faults occur every k instructions.

A page fault every k instructions adds an extra overhead of n/k nsec to the average, so the average instruction takes 1 + n/k nsec.

what's a pointer?

A pointer is a variable that stores the memory address of some data.
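A minimal C sketch of this idea; the helper names are ours.

```c
/* A pointer holds the address of some data; dereferencing it with *
   reads the data, and assigning through it writes the data. */

int read_through(const int *p) { return *p; }   /* read via the pointer */
void write_through(int *p, int v) { *p = v; }   /* write via the pointer */

int pointer_demo(void) {
    int x = 42;
    int *p = &x;                     /* p stores the address of x */
    if (read_through(p) != 42) return 0;
    write_through(p, 7);             /* changes x through the pointer */
    return x == 7;
}
```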

The discussion of the ostrich algorithm mentions the possibility of process-table slots or other system tables filling up. Can you suggest a way to enable a system administrator to recover from such a situation?

A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.

Write a shell script that produces a file of sequential numbers by reading the last number in the file, adding 1 to it, and then appending it to the file. Run one instance of the script in the background and one in the foreground, each accessing the same file. How long does it take before a race condition manifests itself? What is the critical region? Modify the script to prevent the race. (Hint: use ln file file.lock to lock the data file.)

A possible shell script might be:

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200)
do
        count=`expr $count + 1`
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical region, and unlocking it upon leaving the critical region. This can be done like this:

if ln numbers numbers.lock
then
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
        rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible. Variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

Explain how hard links and soft links differ with respect to i-node allocations.

A single i-node is pointed to by all directory entries of hard links for a given file. For a soft link, a new i-node is created for the link itself, and that i-node essentially holds the path name of the original file being linked to.

Describe the stack

A stack is a LIFO (last in, first out) structure.
• Each running process has its own stack
• Declaring a new local variable will use the next available memory from the stack
• Function parameters are stored on the stack too
• Function parameters are copies of the original variable
• When a block ends, the memory used by local variables is released by popping them off the stack
• The freed memory can now be used by another block
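The points above can be sketched in C; the function names are ours.

```c
/* Locals and parameters live on the stack, and parameters are copies,
   so a callee cannot change the caller's variable. */

void try_to_change(int n) {
    n = 99;            /* modifies only this frame's copy on the stack */
}

int locals_demo(void) {
    int x = 1;         /* local variable: storage comes from the stack */
    try_to_change(x);  /* x is copied into the callee's stack frame */
    return x;          /* still 1: only the copy was changed */
}
```

locals_demo() returns 1, confirming that the callee's assignment never reaches the caller's variable.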

What is a trap instruction? Explain its use in operating systems.

A trap instruction switches the execution mode of a CPU from the user mode to the kernel mode. This instruction allows a user program to invoke functions in the operating system kernel.

In Fig. 2-8, a multithreaded Web server is shown. If the only way to read from a file is the normal blocking read system call, do you think user-level threads or kernel-level threads are being used for the Web server? Why?

A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

How many pebibytes are there in a zebibyte?

A zebibyte is 2^70 bytes; a pebibyte is 2^50 bytes. 2^20 pebibytes fit in a zebibyte.

Instructions related to accessing I/O devices are typically privileged instructions, that is, they can be executed in kernel mode but not in user mode. Give a reason why these instructions are privileged.

Access to I/O devices (e.g., a printer) is typically restricted for different users. Some users may be allowed to print as many pages as they like, some users may not be allowed to print at all, while some users may be limited to printing only a certain number of pages. These restrictions are set by system administrators based on some policies. Such policies need to be enforced so that user-level programs cannot interfere with them.

what arithmetic is allowed with pointers?

Addition and subtraction (not multiplication or division), and comparison operators, e.g. <, >, ==, !=.
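A sketch of these operations in C; the helper names are ours.

```c
/* Pointer +/- moves by whole elements, pointer subtraction yields an
   element count, and comparisons are valid between pointers into the
   same array. */

long elem_distance(const int *p, const int *q) {
    return (long)(q - p);            /* element count, not a byte count */
}

int arith_demo(void) {
    int a[5] = {10, 20, 30, 40, 50};
    const int *p = a;
    const int *q = a + 3;            /* addition: q points at a[3] */
    if (*q != 40) return 0;
    if (elem_distance(p, q) != 3) return 0;
    if (!(p < q && p != q)) return 0; /* relational and equality comparisons */
    p++;                              /* p now points at a[1] */
    return *p == 20;
}
```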

Describe two advantages and two disadvantages of thin client computing.

Advantages of thin clients include low cost and no need for complex management for the clients. Disadvantages include (potentially) lower performance due to network latency and (potential) loss of privacy because the client's data/information is shared with the server.

A swapping system eliminates holes by compaction. Assuming a random distribution of many holes and many data segments and a time to read or write a 32-bit memory word of 4 nsec, about how long does it take to compact 4 GB? For simplicity, assume that word 0 is part of a hole and that the highest word in memory contains valid data.

Almost the entire memory has to be copied, which requires each word to be read and then rewritten at a different location. Reading 4 bytes takes 4 nsec, so reading 1 byte takes 1 nsec and writing it takes another 1 nsec, for a total of 2 nsec per byte compacted. This is a rate of 500,000,000 bytes/sec. To copy 4 GB (2^32 bytes, which is about 4.295 × 10^9 bytes), the computer needs 2^32/500,000,000 sec, which is about 8.6 sec. This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied. However, if there are many holes and many data segments, the holes will be small, so k will be small and the error in the calculation will also be small.
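The arithmetic can be checked with a one-function sketch (the function name is ours).

```c
/* 2 nsec of read+write work per compacted byte, converted to seconds. */

double compaction_seconds(double bytes, double nsec_per_byte) {
    return bytes * nsec_per_byte / 1e9;
}
```

compaction_seconds(4294967296.0, 2.0) evaluates to about 8.59, i.e., compacting the full 2^32 bytes takes roughly 8.6 seconds at 2 nsec per byte.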

Explain the tradeoffs between precise and imprecise interrupts on a superscalar machine.

An advantage of precise interrupts is simplicity of code in the operating system, since the machine state is well defined; with imprecise interrupts, OS writers have to figure out which instructions have been partially executed and up to what point. However, precise interrupts increase the complexity of chip design and the chip area, which may result in a slower CPU.

How can the associative memory device needed for a TLB be implemented in hardware, and what are the implications of such a design for expandability?

An associative memory essentially compares a key to the contents of multiple registers simultaneously. For each register there must be a set of comparators that compare each bit in the register contents to the key being searched for. The number of gates (or transistors) needed to implement such a device is a linear function of the number of registers, so expanding the design gets expensive linearly.

Suppose that a computer can read or write a memory word in 5 nsec. Also suppose that when an interrupt occurs, all 32 CPU registers, plus the program counter and PSW are pushed onto the stack. What is the maximum number of interrupts per second this machine can process?

An interrupt requires pushing 34 words onto the stack. Returning from the interrupt requires fetching 34 words from the stack. This overhead alone is 340 nsec. Thus the maximum number of interrupts per second is no more than about 2.94 million, assuming no work for each interrupt.

For the above problem, can another video stream be added and have the system still be schedulable?

Another video stream consumes about 333 msec of CPU time per second (11 msec every 33 msec), bringing the total demand to about 1067 msec per second of real time, so the system is not schedulable.

To a programmer, a system call looks like any other call to a library procedure. Is it important that a programmer know which library procedures result in system calls? Under what circumstances and why?

As far as program logic is concerned, it does not matter whether a call to a library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.

Show how counting semaphores (i.e., semaphores that can hold an arbitrary value) can be implemented using only binary semaphores and ordinary machine instructions.

Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphore's counter and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If the counter is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.
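A sketch of this construction, using POSIX semaphores confined to the values 0 and 1 as the binary semaphores M and B; the type and function names are ours, and waiting on B stands in for the explicit blocked-process list described above.

```c
/* Counting semaphore built from two binary semaphores plus a counter. */
#include <semaphore.h>

typedef struct {
    sem_t m;      /* binary semaphore M: protects count */
    sem_t b;      /* binary semaphore B: blocked downs wait here */
    int count;    /* number of ups minus number of downs */
} csem;

void csem_init(csem *s, int initial) {
    sem_init(&s->m, 0, 1);   /* M starts at 1 (down on M will succeed) */
    sem_init(&s->b, 0, 0);   /* B starts at 0 (a down on B blocks) */
    s->count = initial;
}

void csem_down(csem *s) {
    sem_wait(&s->m);         /* down on M: exclusive access to count */
    s->count--;
    if (s->count >= 0) {
        sem_post(&s->m);     /* resource available: up on M and continue */
    } else {
        sem_post(&s->m);     /* release M, then block */
        sem_wait(&s->b);     /* down on B: wait for an up */
    }
}

void csem_up(csem *s) {
    sem_wait(&s->m);         /* down on M */
    s->count++;
    if (s->count <= 0)
        sem_post(&s->b);     /* someone was blocked: up on B wakes one */
    sem_post(&s->m);         /* up on M */
}
```

Note the window in csem_down between releasing M and waiting on B: because B has semaphore (not pure lock) semantics, an up posted in that window is remembered and consumed when the blocked down finally waits. Under heavy contention B can transiently exceed 1, so a strictly binary implementation needs the explicit blocked-process list the text describes.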

Consider the idea behind Fig. 4-21, but now for a disk with a mean seek time of 6 msec, a rotational rate of 15,000 rpm, and 1,048,576 bytes per track. What are the data rates for block sizes of 1 KB, 2 KB, and 4 KB, respectively?

At 15,000 rpm, the disk takes 4 msec to go around once. The average access time (in msec) to read k bytes is then 6 + 2 + (k/1,048,576) × 4 (seek, half a rotation, then transfer). For blocks of 1 KB, 2 KB, and 4 KB, the access times are about 8.0039 msec, 8.0078 msec, and 8.0156 msec, respectively (hardly any different). These give rates of about 128 KB/sec, 256 KB/sec, and 511 KB/sec, respectively.

A system simulates multiple clocks by chaining all pending clock requests together as shown in Fig. 5-30. Suppose the current time is 5000 and there are pending clock requests for time 5008, 5012, 5015, 5029, and 5037. Show the values of Clock header, Current time, and Next signal at times 5000, 5005, and 5013. Suppose a new (pending) signal arrives at time 5017 for 5033. Show the values of Clock header, Current time and Next signal at time 5023.

At time 5000: Current time = 5000; Next signal = 8; Header → 8 → 4 → 3 → 14 → 8. At time 5005: Current time = 5005; Next signal = 3; Header → 3 → 4 → 3 → 14 → 8. At time 5013: Current time = 5013; Next signal = 2; Header → 2 → 14 → 8. At time 5023: Current time = 5023; Next signal = 6; Header → 6 → 4 → 4.
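The delta list can be sketched in C to verify these chains; the node layout and function names are ours.

```c
/* Delta list: each node holds the number of ticks after its predecessor
   at which its signal fires. */
#include <stdlib.h>

typedef struct node { int delta; struct node *next; } node;

/* Insert a signal `when` ticks from now, keeping the deltas consistent. */
node *insert(node *head, int when) {
    node **pp = &head;
    while (*pp && when >= (*pp)->delta) { when -= (*pp)->delta; pp = &(*pp)->next; }
    node *n = malloc(sizeof *n);
    n->delta = when;
    n->next = *pp;
    if (*pp) (*pp)->delta -= when;   /* successor now fires relative to us */
    *pp = n;
    return head;
}

/* Advance the clock by `ticks`, removing entries whose signals fire. */
node *advance(node *head, int ticks) {
    while (head && ticks >= head->delta) {
        ticks -= head->delta;
        node *dead = head;
        head = head->next;
        free(dead);
    }
    if (head) head->delta -= ticks;  /* partial progress toward next signal */
    return head;
}
```

Replaying the problem: inserting 8, 12, 15, 29, 37 (the request times minus 5000) builds the chain 8 → 4 → 3 → 14 → 8; advance by 13 ticks leaves 2 → 14 → 8; and after inserting the 5033 request at time 5017 and advancing to 5023, the chain is 6 → 4 → 4.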

Consider a computer system that has cache memory, main memory (RAM) and disk, and an operating system that uses virtual memory. It takes 1 nsec to access a word from the cache, 10 nsec to access a word from the RAM, and 10 ms to access a word from the disk. If the cache hit rate is 95% and main memory hit rate (after a cache miss) is 99%, what is the average time to access a word?

Average access time = 0.95 × 1 nsec (word is in the cache) + 0.05 × 0.99 × 10 nsec (word is in RAM, but not in the cache) + 0.05 × 0.01 × 10,000,000 nsec (word on disk only) = 5001.445 nsec = 5.001445 μsec.
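The weighted average can be written as a small function (the name is ours).

```c
/* Expected access time: each level is weighted by the probability that
   the word is found there and not at a faster level. */

double avg_access_nsec(double cache_hit, double ram_hit,
                       double t_cache, double t_ram, double t_disk) {
    return cache_hit * t_cache
         + (1 - cache_hit) * ram_hit * t_ram
         + (1 - cache_hit) * (1 - ram_hit) * t_disk;
}
```

avg_access_nsec(0.95, 0.99, 1.0, 10.0, 1e7) evaluates to 5001.445, matching the answer.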

What is the essential difference between a block special file and a character special file?

Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.

Main memory units are preempted in swapping and virtual memory systems. The processor is preempted in time-sharing environments. Do you think that these preemption methods were developed to handle resource deadlock or for other purposes? How high is their overhead?

Both virtual memory and time-sharing systems were developed mainly to assist system users. Virtualizing hardware shields users from the details of prestating needs, resource allocation, and overlays, in addition to preventing deadlock. The cost of context switching and interrupt handling, however, is considerable. Specialized registers, caches, and circuitry are required. Probably this cost would not have been incurred for the purpose of deadlock prevention alone.

One way to prevent deadlocks is to eliminate the hold-and-wait condition. In the text it was proposed that before asking for a new resource, a process must first release whatever resources it already holds (assuming that is possible). However, doing so introduces the danger that it may get the new resource but lose some of the existing ones to competing processes. Propose an improvement to this scheme.

Change the semantics of requesting a new resource as follows. If a process asks for a new resource and it is available, it gets the resource and keeps what it already has. If the new resource is not available, all existing resources are released. With this scenario, deadlock is impossible and there is no danger that the new resource is acquired but existing ones lost. Of course, this scheme only works if releasing a resource is possible (you can release a scanner between pages or a CD recorder between CDs).

Which of the following instructions should be allowed only in kernel mode? (a) Disable all interrupts. (b) Read the time-of-day clock. (c) Set the time-of-day clock. (d) Change the memory map.

Choices (a), (c), and (d) should be restricted to kernel mode.

The banker's algorithm is being run in a system with m resource classes and n processes. In the limit of large m and n, the number of operations that must be performed to check a state for safety is proportional to m^a n^b. What are the values of a and b?

Comparing a row in the matrix to the vector of available resources takes m operations. This step must be repeated on the order of n times to find a process that can finish and be marked as done. Thus, marking a process as done takes on the order of mn steps. Repeating the algorithm for all n processes means that the number of steps is then mn^2. Thus, a = 1 and b = 2.
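The operation count being described is the safety check itself, which can be sketched as follows; the matrix sizes and names are ours.

```c
/* Banker's-algorithm safety check: repeatedly scan for a process whose
   remaining need fits in Available; each scan is O(mn) work and up to n
   scans are needed, giving the O(m n^2) count derived above. */
#include <string.h>

#define N 3  /* processes */
#define M 2  /* resource classes */

int is_safe(int need[N][M], int alloc[N][M], const int avail_in[M]) {
    int avail[M], done[N] = {0}, progress = 1;
    memcpy(avail, avail_in, sizeof avail);
    while (progress) {
        progress = 0;
        for (int i = 0; i < N; i++) {          /* one O(mn) scan */
            if (done[i]) continue;
            int fits = 1;
            for (int j = 0; j < M; j++)        /* m comparisons per row */
                if (need[i][j] > avail[j]) { fits = 0; break; }
            if (fits) {                        /* i can finish: reclaim it */
                for (int j = 0; j < M; j++) avail[j] += alloc[i][j];
                done[i] = 1;
                progress = 1;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!done[i]) return 0;                /* someone never finishes */
    return 1;
}
```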

One way to eliminate circular wait is to have a rule saying that a process is entitled only to a single resource at any moment. Give an example to show that this restriction is unacceptable in many cases.

Consider a process that needs to copy a huge file from a tape to a printer. Because the amount of memory is limited and the entire file cannot fit in this memory, the process will have to loop through the following statements until the entire file has been printed:

Acquire tape drive
Copy the next portion of the file into memory (limited memory size)
Release tape drive
Acquire printer
Print file from memory
Release printer

This will lengthen the execution time of the process. Furthermore, since the printer is released after every print step, there is no guarantee that all portions of the file will get printed on continuous pages.

There are several design goals in building an operating system, for example, resource utilization, timeliness, robustness, and so on. Give an example of two design goals that may contradict one another.

Consider fairness and real time. Fairness requires that each process be allocated its resources in a fair way, with no process getting more than its fair share. On the other hand, real time requires that resources be allocated based on the times when different processes must complete their execution. A real-time process may get a disproportionate share of the resources.

Suppose that there is a resource deadlock in a system. Give an example to show that the set of processes deadlocked can include processes that are not in the circular chain in the corresponding resource allocation graph.

Consider three processes, A, B, and C, and two resources, R and S. Suppose A is waiting for R, which is held by B; B is waiting for S, held by A; and C is also waiting for R, held by B. All three processes, A, B, and C, are deadlocked. However, only A and B belong to the circular chain.

A student has claimed that ''in the abstract, the basic page replacement algorithms (FIFO, LRU, optimal) are identical except for the attribute used for selecting the page to be replaced.'' (a) What is that attribute for the FIFO algorithm? LRU algorithm? Optimal algorithm? (b) Give the generic algorithm for these page replacement algorithms.

(a) The attributes are: (FIFO) load time; (LRU) latest reference time; and (Optimal) nearest reference time in the future. (b) There is a labeling algorithm and a replacement algorithm. The labeling algorithm labels each page with the attribute given in part (a). The replacement algorithm evicts the page with the smallest label.

Consider a magnetic disk consisting of 16 heads and 400 cylinders. This disk has four 100-cylinder zones with the cylinders in different zones containing 160, 200, 240, and 280 sectors, respectively. Assume that each sector contains 512 bytes, average seek time between adjacent cylinders is 1 msec, and the disk rotates at 7200 RPM. Calculate the (a) disk capacity, (b) optimal track skew, and (c) maximum data transfer rate.

(a) The capacity of a zone is heads × cylinders × sectors/track × bytes/sector.
Capacity of zone 1: 16 × 100 × 160 × 512 = 131,072,000 bytes
Capacity of zone 2: 16 × 100 × 200 × 512 = 163,840,000 bytes
Capacity of zone 3: 16 × 100 × 240 × 512 = 196,608,000 bytes
Capacity of zone 4: 16 × 100 × 280 × 512 = 229,376,000 bytes
Total capacity: 720,896,000 bytes.
(b) A rotation rate of 7200 RPM means 120 rotations/sec, so in the 1-msec track-to-track seek time, 0.120 of a rotation is covered. In zone 1 the disk head passes over 0.120 × 160 sectors in 1 msec, so the optimal track skew for zone 1 is 19.2 sectors. Likewise, the optimal track skew is 24 sectors for zone 2 (0.120 × 200), 28.8 sectors for zone 3 (0.120 × 240), and 33.6 sectors for zone 4 (0.120 × 280).
(c) The maximum data transfer rate occurs when the cylinders in the outermost zone (zone 4) are being read or written. In that zone, 280 sectors are read 120 times per second, so the data rate is 280 × 120 × 512 = 17,203,200 bytes/sec.

what is dma

Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory (Random-access memory), independent of the central processing unit (CPU).

Students working at individual PCs in a computer laboratory send their files to be printed by a server that spools the files on its hard disk. Under what conditions may a deadlock occur if the disk space for the print spool is limited? How may the deadlock be avoided?

Disk space on the spooling partition is a finite resource. Every block that comes in de facto claims a resource and every new one arriving wants more resources. If the spooling space is, say, 10 MB and the first half of ten 2-MB jobs arrive, the disk will be full and no more blocks can be stored so we have a deadlock. The deadlock can be avoided by allowing a job to start printing before it is fully spooled and reserving the space thus released for the rest of that job. In this way, one job will actually print to completion, then the next one can do the same thing. If jobs cannot start printing until they are fully spooled, deadlock is possible.

A DMA controller has five channels. The controller is capable of requesting a 32-bit word every 40 nsec. A response takes equally long. How fast does the bus have to be to avoid being a bottleneck?

Each bus transaction has a request and a response, each taking 40 nsec, or 80 nsec per bus transaction. This gives 12.5 million bus transactions/sec. If each one is good for 4 bytes, the bus has to handle 50 MB/sec. The fact that these transactions may be sprayed over five I/O devices in round-robin fashion is irrelevant. A bus transaction takes 80 nsec, regardless of whether consecutive requests are to the same device or different devices, so the number of channels the DMA controller has does not matter. The bus does not know or care.

In a system with threads, is there one stack per thread or one stack per process when user-level threads are used? What about when kernel-level threads are used? Explain.

Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.

Consider a real-time system with two voice calls of periodicity 5 msec each with CPU time per call of 1 msec, and one video stream of periodicity 33 ms with CPU time per call of 11 msec. Is this system schedulable?

Each voice call needs 200 samples of 1 msec, or 200 msec of CPU time per second. Together the two calls use 400 msec of CPU time. The video runs once every 33 msec, or about 30.3 times a second, and needs 11 msec per frame, for a total of about 333 msec. The sum is about 733 msec per second of real time, so the system is schedulable.

A real-time system needs to handle two voice calls that each run every 6 msec and consume 1 msec of CPU time per burst, plus one video at 25 frames/sec, with each frame requiring 20 msec of CPU time. Is this system schedulable?

Each voice call runs 166.67 times/second and uses up 1 msec per burst, so each voice call needs 166.67 msec per second or 333.33 msec for the two of them. The video runs 25 times a second and uses up 20 msec each time, for a total of 500 msec per second. Together they consume 833.33 msec per second, so there is time left over and the system is schedulable.
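Both schedulability answers reduce to summing CPU utilization (CPU time per burst divided by period) and checking that the total does not exceed 1. A small sketch using the task parameters from the two questions:

```python
def utilization(tasks):
    """tasks: (period_ms, cpu_ms) pairs; total fraction of the CPU needed."""
    return sum(cpu / period for period, cpu in tasks)

# Two 5-msec voice calls (1 msec each) plus a 33-msec video (11 msec/frame).
u1 = utilization([(5, 1), (5, 1), (33, 11)])
# Two 6-msec voice calls (1 msec each) plus 25-fps video (40-msec period, 20 msec/frame).
u2 = utilization([(6, 1), (6, 1), (40, 20)])
print(u1, u2, u1 <= 1 and u2 <= 1)   # both under 100%, so both are schedulable
```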

Two computer science students, Carolyn and Elinor, are having a discussion about i-nodes. Carolyn maintains that memories have gotten so large and so cheap that when a file is opened, it is simpler and faster just to fetch a new copy of the i-node into the i-node table, rather than search the entire table to see if it is already there. Elinor disagrees. Who is right?

Elinor is right. Having two copies of the i-node in the table at the same time is a disaster, unless both are read only. The worst case is when both are being updated simultaneously. When the i-nodes are written back to the disk, whichever one gets written last will erase the changes made by the other one, and disk blocks will be lost.

To use cache memory, main memory is divided into cache lines, typically 32 or 64 bytes long. An entire cache line is cached at once. What is the advantage of caching an entire line instead of a single byte or word at a time?

Empirical evidence shows that memory access exhibits the principle of locality of reference: if one location is read, the probability of accessing nearby locations next is very high, particularly the following memory locations. So, by caching an entire cache line, the probability of a cache hit on the next access is increased. Also, modern hardware can do a block transfer of 32 or 64 bytes into a cache line much faster than reading the same data as individual words.

A computer has a pipeline with four stages. Each stage takes the same time to do its work, namely, 1 nsec. How many instructions per second can this machine execute?

Every nanosecond one instruction emerges from the pipeline. This means the machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instruction pops out the end of the pipeline.

A portable operating system is one that can be ported from one system architecture to another without any modification. Explain why it is infeasible to build an operating system that is completely portable. Describe two high-level layers that you will have in designing an operating system that is highly portable.

Every system architecture has its own set of instructions that it can execute. Thus a Pentium cannot execute SPARC programs and a SPARC cannot execute Pentium programs. Also, different architectures differ in bus architecture used (such as VME, ISA, PCI, MCA, SBus, ...) as well as the word size of the CPU (usually 32 or 64 bit). Because of these differences in hardware, it is not feasible to build an operating system that is completely portable. A highly portable operating system will consist of two high-level layers: a machine-dependent layer and a machine-independent layer. The machine-dependent layer addresses the specifics of the hardware and must be implemented separately for every architecture. This layer provides a uniform interface on which the machine-independent layer is built. The machine-independent layer has to be implemented only once. To be highly portable, the size of the machine-dependent layer must be kept as small as possible.

Figure 1-23 shows that a number of UNIX system calls have no Win32 API equivalents. For each of the calls listed as having no Win32 equivalent, what are the consequences for a programmer of converting a UNIX program to run under Windows?

Consider a swapping system in which memory consists of the following hole sizes in memory order: 10 MB, 4 MB, 20 MB, 18 MB, 7 MB, 9 MB, 12 MB, and 15 MB. Which hole is taken for successive segment requests of (a) 12 MB (b) 10 MB (c) 9 MB for first fit? Now repeat the question for best fit, worst fit, and next fit.

First fit takes 20 MB, 10 MB, 18 MB. Best fit takes 12 MB, 10 MB, and 9 MB. Worst fit takes 20 MB, 18 MB, and 15 MB. Next fit takes 20 MB, 18 MB, and 9 MB.
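The four placement strategies can be simulated directly. This is a minimal sketch (hole coalescing and per-process bookkeeping are ignored; the `simulate` helper is illustrative, not a standard library routine):

```python
def simulate(holes, requests, strategy):
    """Place each request in a hole chosen by the given strategy;
    returns the original size of the hole chosen for each request."""
    holes = holes[:]                    # hole sizes in memory order
    placed, next_idx = [], 0
    for req in requests:
        if strategy == "first":
            idx = next(i for i, h in enumerate(holes) if h >= req)
        elif strategy == "best":
            idx = min((i for i in range(len(holes)) if holes[i] >= req),
                      key=lambda i: holes[i])
        elif strategy == "worst":
            idx = max(range(len(holes)), key=lambda i: holes[i])
        else:                           # next fit: resume the scan at the last hole used
            n = len(holes)
            idx = next((next_idx + k) % n for k in range(n)
                       if holes[(next_idx + k) % n] >= req)
            next_idx = idx
        placed.append(holes[idx])
        holes[idx] -= req               # the remainder stays as a smaller hole
    return placed

holes = [10, 4, 20, 18, 7, 9, 12, 15]
requests = [12, 10, 9]
for s in ("first", "best", "worst", "next"):
    print(s, simulate(holes, requests, s))
```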

The IBM 360 had a scheme of locking 2-KB blocks by assigning each one a 4-bit key and having the CPU compare the key on every memory reference to the 4-bit key in the PSW. Name two drawbacks of this scheme not mentioned in the text.

First, special hardware is needed to do the comparisons, and it must be fast, since it is used on every memory reference. Second, with 4-bit keys, only 16 programs can be in memory at once (one of which is the operating system).

For each of the following decimal virtual addresses, compute the virtual page number and offset for a 4-KB page and for an 8-KB page: 20000, 32768, 60000.

For a 4-KB page size the (page, offset) pairs are (4, 3616), (8, 0), and (14, 2656). For an 8-KB page size they are (2, 3616), (4, 0), and (7, 2656).
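The page number is simply the quotient, and the offset the remainder, of dividing the virtual address by the page size, as a two-line sketch shows:

```python
def split(vaddr, page_size):
    """Virtual page number and offset for a given page size."""
    return vaddr // page_size, vaddr % page_size

for addr in (20000, 32768, 60000):
    print(addr, split(addr, 4096), split(addr, 8192))
```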

A computer has 32-bit virtual addresses and 4-KB pages. The program and data together fit in the lowest page (0-4095). The stack fits in the highest page. How many entries are needed in the page table if traditional (one-level) paging is used? How many page table entries are needed for two-level paging, with 10 bits in each part?

For a one-level page table, there are 2^32/2^12 or 1M pages needed. Thus the page table must have 1M entries. For two-level paging, the main page table has 1K entries, each of which points to a second page table. Only two of these are used. Thus, in total only three page table entries are needed, one in the top-level table and one in each of the lower-level tables.

For a given class, the student records are stored in a file. The records are randomly accessed and updated. Assume that each student's record is of fixed size. Which of the three allocation schemes (contiguous, linked and table/indexed) will be most appropriate?

For random access, table/indexed and contiguous will both be appropriate, while linked allocation is not, as it typically requires multiple disk reads for a given record.

Five batch jobs, A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process switching overhead. (a) Round robin. (b) Priority scheduling. (c) First-come, first-served (run in order 10, 6, 2, 4, 8). (d) Shortest job first. For (a), assume that the system is multiprogrammed, and that each job gets its fair share of the CPU. For (b) through (d), assume that only one job at a time runs, until it finishes. All jobs are completely CPU bound.

For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 20 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.
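A short simulation recomputes these turnaround times from the job parameters in the question. The fair-share round-robin model below is an idealization (every runnable job advances at the same rate), matching the assumption stated for part (a):

```python
jobs = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}   # run times, minutes
prio = {"A": 3, "B": 5, "C": 2, "D": 1, "E": 4}    # 5 = highest priority

def mean_turnaround(order):
    """Run each job to completion in the given order; mean finishing time."""
    t, finishes = 0, []
    for j in order:
        t += jobs[j]
        finishes.append(t)
    return sum(finishes) / len(finishes)

def round_robin_fair_share():
    """Idealized round robin: every unfinished job gets an equal CPU share."""
    remaining, t, finishes = dict(jobs), 0.0, []
    while remaining:
        shortest = min(remaining.values())
        t += shortest * len(remaining)     # wall time until the next completion
        for j in [j for j, r in remaining.items() if r == shortest]:
            finishes.append(t)
            del remaining[j]
        for j in remaining:
            remaining[j] -= shortest
    return sum(finishes) / len(jobs)

rr   = round_robin_fair_share()
pri  = mean_turnaround(sorted(jobs, key=prio.get, reverse=True))
fcfs = mean_turnaround(["A", "B", "C", "D", "E"])
sjf  = mean_turnaround(sorted(jobs, key=jobs.get))
print(rr, pri, fcfs, sjf)   # 22.0 20.0 19.2 14.0
```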

Consider the following C program: int X[N]; int step = M; /* M is some predefined constant */ for (int i = 0; i < N; i += step) X[i] = X[i] + 1; (a) If this program is run on a machine with a 4-KB page size and 64-entry TLB, what values of M and N will cause a TLB miss for every execution of the inner loop? (b) Would your answer in part (a) be different if the loop were repeated many times? Explain.

For these sizes (a) M has to be at least 4096 to ensure a TLB miss for every access to an element of X. Since N affects only how many times X is accessed, any value of N will do. (b) M should still be at least 4,096 to ensure a TLB miss for every access to an element of X. But now N should be greater than 64K to thrash the TLB, that is, X should exceed 256 KB.

For each of the following system calls, give a condition that causes it to fail: fork, exec, and unlink.

Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.

Consider the following two-dimensional array: int X[64][64]; Suppose that a system has four page frames and each frame is 128 words (an integer occupies one word). Programs that manipulate the X array fit into exactly one page and always occupy page 0. The data are swapped in and out of the other three frames. The X array is stored in row-major order (i.e., X[0][1] follows X[0][0] in memory). Which of the two code fragments shown below will generate the lowest number of page faults? Explain and compute the total number of page faults. Fragment A for (int j = 0; j < 64; j++) for (int i = 0; i < 64; i++) X[i][j] = 0; Fragment B for (int i = 0; i < 64; i++) for (int j = 0; j < 64; j++) X[i][j] = 0;

Fragment B, since the code has more spatial locality than Fragment A. The inner loop causes only one page fault for every other iteration of the outer loop. (There will be only 32 page faults.) [Aside (Fragment A): Since a frame is 128 words, one row of the X array (64 words) occupies half of a page. The entire array fills 64 × 64/128 = 32 frames. The inner loop of the code steps through consecutive rows of X for a given column. Thus, every other reference to X[i][j] will cause a page fault. The total number of page faults will be 64 × 64/2 = 2048.]
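The fault counts can be verified by simulating the three data frames with LRU replacement (any demand-paging policy gives the same counts here, since Fragment B touches pages strictly sequentially and Fragment A never revisits a page soon enough for it to still be resident):

```python
def page_faults(accesses, frames):
    """Count faults under LRU with the given number of page frames."""
    resident, faults = [], 0
    for p in accesses:
        if p in resident:
            resident.remove(p)
        else:
            faults += 1
            if len(resident) == frames:
                resident.pop(0)            # evict the least recently used page
        resident.append(p)                 # p is now most recently used
    return faults

WORDS_PER_PAGE = 128                       # one frame/page = 128 words
def page(i, j):                            # X[i][j] in row-major order, 64 ints/row
    return (i * 64 + j) // WORDS_PER_PAGE

frag_a = [page(i, j) for j in range(64) for i in range(64)]   # column-major walk
frag_b = [page(i, j) for i in range(64) for j in range(64)]   # row-major walk
print(page_faults(frag_a, 3), page_faults(frag_b, 3))         # 2048 32
```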

Can you think of any situations where supporting virtual memory would be a bad idea, and what would be gained by not having to support virtual memory? Explain

General virtual memory support is not needed when the memory requirements of all applications are well known and controlled. Some examples are smart cards, special-purpose processors (e.g., network processors), and embedded processors. In these situations, we should always consider the possibility of using more real memory. If the operating system did not have to support virtual memory, the code would be much simpler and smaller. On the other hand, some ideas from virtual memory may still be profitably exploited, although with different design requirements. For example, program/thread isolation might still be provided, with paging to flash memory.

On all current computers, at least part of the interrupt handlers are written in assembly language. Why?

Generally, high-level languages do not allow the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process' stack area. Also, interrupt service routines must execute as rapidly as possible.

What are the advantages and disadvantages of using the heap?

• Global variables are stored on the heap
• Again, each program has its own heap space
• Accessible anywhere in your program (generally speaking)
• The heap and the stack are completely separate
• Globals are powerful but should be used wisely
• Most variables are better stored on the stack
• Hard to keep track of globals in large projects
• Needless globals require lots of heap space
• Variable declarations should be close to where they are used
• Improves code readability / maintainability

Name one advantage of hard links over symbolic links and one advantage of symbolic links over hard links.

Hard links do not require any extra disk space, just a counter in the i-node to keep track of how many there are. Symbolic links need space to store the name of the file pointed to. Symbolic links can point to files on other machines, even over the Internet. Hard links are restricted to pointing to files within their own partition.

Local Area Networks utilize a media access method called CSMA/CD, in which stations sharing a bus can sense the medium and detect transmissions as well as collisions. In the Ethernet protocol, stations requesting the shared channel do not transmit frames if they sense the medium is busy. When such transmission has terminated, waiting stations each transmit their frames. Two frames that are transmitted at the same time will collide. If stations immediately and repeatedly retransmit after collision detection, they will continue to collide indefinitely. (a) Is this a resource deadlock or a livelock? (b) Can you suggest a solution to this anomaly? (c) Can starvation occur with this scenario?

Here are the answers, albeit a bit complicated. (a) This is a competition synchronization anomaly. It is also a livelock. We might term it a scheduling livelock. It is not a resource livelock or deadlock, since stations are not holding resources that are requested by others, and thus a circular chain of stations holding resources while requesting others does not exist. It is not a communication deadlock, since stations are executing independently and would complete their transmissions were they scheduled sequentially. (b) Ethernet and slotted Aloha require that stations that detect a collision of their transmission must wait a random number of time slots before retransmitting. The interval within which the time slot is chosen is doubled after each successive collision, dynamically adjusting to heavy traffic loads. After sixteen successive retransmissions a frame is dropped. (c) Because access to the channel is probabilistic, and because newly arriving stations can compete and be allocated the channel before stations that have retransmitted some number of times, starvation is enabled.

A computer science student assigned to work on deadlocks thinks of the following brilliant way to eliminate deadlocks. When a process requests a resource, it specifies a time limit. If the process blocks because the resource is not available, a timer is started. If the time limit is exceeded, the process is released and allowed to run again. If you were the professor, what grade would you give this proposal and why?

I'd give it an F (failing) grade. What does the process do? Since it clearly needs the resource, it just asks again and blocks again. This is no better than staying blocked. In fact, it may be worse since the system may keep track of how long competing processes have been waiting and assign a newly freed resource to the process that has been waiting longest. By periodically timing out and trying again, a process loses its seniority.

Describe the effects of a corrupted data block for a given file for: (a) contiguous, (b) linked, and (c) indexed (or table based).

If a data block gets corrupted in a contiguous allocation system, then only this block is affected; the remainder of the file's blocks can be read. In the case of linked allocation, the corrupted block cannot be read; also, location data about all blocks starting from this corrupted block is lost. In case of indexed allocation, only the corrupted data block is affected.

In the solution to the dining philosophers problem (Fig. 2-47), why is the state variable set to HUNGRY in the procedure take_forks?

If a philosopher blocks, her neighbors can later see that she is hungry by checking her state in test, so she can be awakened when the forks are available.

Consider the previous problem again, but now with p processes each needing a maxi- mum of m resources and a total of r resources available. What condition must hold to make the system deadlock free?

If a process has m resources it can finish and cannot be involved in a deadlock. Therefore, the worst case is where every process has m − 1 resources and needs another one. If there is one resource left over, one process can finish and release all its resources, letting the rest finish too. Therefore the condition for avoiding deadlock is r ≥ p(m − 1) + 1.
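The bound can be sanity-checked by brute force over small systems. The model below is a simplification: processes acquire resources one at a time, so a state is stuck exactly when every process holds fewer than m resources (none can finish) and no resource is left free (none can acquire its next one):

```python
from itertools import product

def deadlock_possible(p, m, r):
    """Brute force over allocations in which every process holds at most
    m - 1 resources (so none can finish).  With one-at-a-time requests,
    such a state is stuck exactly when no resource is left free."""
    return any(sum(alloc) == r for alloc in product(range(m), repeat=p))

# The claimed frontier: deadlock is possible iff r <= p*(m - 1),
# i.e., the system is deadlock free iff r >= p*(m - 1) + 1.
for p in range(1, 4):
    for m in range(1, 4):
        for r in range(1, 10):
            assert deadlock_possible(p, m, r) == (r <= p * (m - 1))
print("bound r >= p*(m-1) + 1 verified for small p, m, r")
```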

Round-robin schedulers normally maintain a list of all runnable processes, with each process occurring exactly once in the list. What would happen if a process occurred twice in the list? Can you think of any reason for allowing this?

If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.

A certain file system uses 4-KB disk blocks. The median file size is 1 KB. If all files were exactly 1 KB, what fraction of the disk space would be wasted? Do you think the wastage for a real file system will be higher than this number or lower than it? Explain your answer.

If all files were 1 KB, then each 4-KB block would contain one file and 3 KB of wasted space. Trying to put two files in a block is not allowed because the unit used to keep track of data is the block, not the semiblock. This leads to 75% wasted space. In practice, every file system has large files as well as many small ones, and these files use the disk much more efficiently. For example, a 32,769-byte file would use 9 disk blocks for storage, giving a space efficiency of 32,769/36,864, which is about 89%.
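The block-count and efficiency figures follow from ceiling division, as in this sketch:

```python
BLOCK = 4096   # bytes per disk block

def blocks_needed(size):
    return (size + BLOCK - 1) // BLOCK      # ceiling division

def efficiency(size):
    return size / (blocks_needed(size) * BLOCK)

print(efficiency(1024))                          # 0.25: a 1-KB file wastes 3 KB
print(blocks_needed(32769), efficiency(32769))   # 9 blocks, about 0.889
```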

Cinderella and the Prince are getting divorced. To divide their property, they have agreed on the following algorithm. Every morning, each one may send a letter to the other's lawyer requesting one item of property. Since it takes a day for letters to be delivered, they have agreed that if both discover that they have requested the same item on the same day, the next day they will send a letter canceling the request. Among their property is their dog, Woofer, Woofer's doghouse, their canary, Tweeter, and Tweeter's cage. The animals love their houses, so it has been agreed that any division of property separating an animal from its house is invalid, requiring the whole division to start over from scratch. Both Cinderella and the Prince desperately want Woofer. So that they can go on (separate) vacations, each spouse has programmed a personal computer to handle the negotiation. When they come back from vacation, the computers are still negotiating. Why? Is deadlock possible? Is starvation possible? Discuss your answer.

If both programs ask for Woofer first, the computers will starve with the endless sequence: request Woofer, cancel request, request Woofer, cancel request, and so on. If one of them asks for the doghouse and the other asks for the dog, we have a deadlock, which is detected by both parties and then broken, but it is just repeated on the next cycle. Either way, if both computers have been programmed to go after the dog or the doghouse first, either starvation or deadlock ensues. There is not really much difference between the two here. In most deadlock problems, starvation does not seem serious because introducing random delays will usually make it very unlikely. That approach does not work here.

In light of the answer to the previous question, does compacting the disk ever make any sense?

If done right, yes. While compacting, each file should be organized so that all of its blocks are consecutive, for fast access. Windows has a program that defragments and reorganizes the disk. Users are encouraged to run it periodi- cally to improve system performance. But given how long it takes, running once a month might be a good frequency.

Multiple jobs can run in parallel and finish faster than if they had run sequentially. Suppose that two jobs, each needing 20 minutes of CPU time, start simultaneously. How long will the last one take to complete if they run sequentially? How long if they run in parallel? Assume 50% I/O wait.

If each job has 50% I/O wait, then it will take 40 minutes to complete in the absence of competition. If run sequentially, the second one will finish 80 minutes after the first one starts. With two jobs, the approximate CPU utilization is 1 − 0.5² = 0.75. Thus, each one gets 0.375 CPU minute per minute of real time. To accumulate 20 minutes of CPU time, a job must run for 20/0.375 minutes, or about 53.33 minutes. Thus running sequentially the jobs finish after 80 minutes, but running in parallel they finish after 53.33 minutes.
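The same numbers fall out of the standard CPU-utilization formula (utilization with n jobs is 1 − p^n, where p is the I/O-wait fraction):

```python
io_wait = 0.5          # fraction of time each job spends waiting on I/O
cpu_needed = 20.0      # minutes of CPU time per job

# Sequential: one job at a time, so the CPU is busy (1 - io_wait) of the time.
per_job = cpu_needed / (1 - io_wait)           # 40 minutes of wall time each
sequential_finish = 2 * per_job                # 80 minutes

# Parallel: CPU utilization with n jobs is 1 - io_wait**n.
n = 2
share = (1 - io_wait ** n) / n                 # 0.375 CPU-minute per real minute
parallel_finish = cpu_needed / share           # about 53.33 minutes
print(sequential_finish, parallel_finish)
```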

If a CPU's maximum voltage, V, is cut to V/n, its power consumption drops to 1/n² of its original value and its clock speed drops to 1/n of its original value. Suppose that a user is typing at 1 char/sec, but the CPU time required to process each character is 100 msec. What is the optimal value of n and what is the corresponding energy saving in percent compared to not cutting the voltage? Assume that an idle CPU consumes no energy at all.

If n = 10, the CPU can still get its work done on time, but the energy used drops appreciably. If the energy consumed in 1 sec at full speed is E, then running at full speed for 100 msec then going idle for 900 msec uses E/10. Running at 1/10 speed for a whole second uses E/100, a saving of 9E/100. The percent savings by cutting the voltage is 90%.

Can the count = write(fd, buffer, nbytes); call return any value in count other than nbytes? If so, why?

If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.

Explain how time quantum value and context switching time affect each other, in a round-robin scheduling algorithm.

If the context switching time is large, then the time quantum value has to be proportionally large. Otherwise, the overhead of context switching can be quite high. Choosing large time quantum values can lead to an inefficient system if the typical CPU burst times are less than the time quantum. If context switching is very small or negligible, then the time quantum value can be chosen with more freedom.

Why are output files for the printer normally spooled on disk before being printed?

If the printer were assigned as soon as the output appeared, a process could tie up the printer by printing a few characters and then going to sleep for a week.

If a system has only two processes, does it make sense to use a barrier to synchronize them? Why or why not?

If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.
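As an illustration, Python's threading.Barrier does exactly this for two threads: neither can enter phase 2 until both have finished phase 1.

```python
import threading

barrier = threading.Barrier(2)   # both parties must arrive before either continues
log = []

def worker(name):
    log.append(f"{name} done with phase 1")
    barrier.wait()               # block here until the other thread arrives
    log.append(f"{name} in phase 2")

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)   # both phase-1 lines always precede both phase-2 lines
```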

Copy on write is an interesting idea used on server systems. Does it make any sense on a smartphone?

If the smartphone supports multiprogramming, which the iPhone, Android, and Windows phones all do, then multiple processes are supported. If a process forks and pages are shared between parent and child, copy on write definitely makes sense. A smartphone is smaller than a server, but logically it is not so different.

All the trajectories in Fig. 6-8 are horizontal or vertical. Can you envision any circumstances in which diagonal trajectories are also possible?

If the system had two or more CPUs, two or more processes could run in par- allel, leading to diagonal trajectories.

Suppose that file 21 in Fig. 4-25 was not modified since the last dump. In what way would the four bitmaps of Fig. 4-26 be different?

In (a) and (b), 21 would not be marked. In (c), there would be no change. In (d), 21 would not be marked.

What's the difference between timesharing and multiprogramming OSs?

In a timesharing system, multiple users can access and perform computations on a computing system simultaneously using their own terminals. Multiprogramming systems allow a user to run multiple programs simultaneously. All timesharing systems are multiprogramming systems but not all multiprogramming systems are timesharing systems since a multiprogramming system may run on a PC with only one user.

Can a measure of whether a process is likely to be CPU bound or I/O bound be determined by analyzing source code? How can this be determined at run time?

In simple cases it may be possible to see if I/O will be limiting by looking at source code. For instance, a program that reads all its input files into buffers at the start will probably not be I/O bound, but a program that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

Give an example of a deadlock taken from politics.

In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary elections are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.

In this problem you are to compare reading a file using a single-threaded file server and a multithreaded server. It takes 12 msec to get a request for work, dispatch it, and do the rest of the necessary processing, assuming that the data needed are in the block cache. If a disk operation is needed, as is the case one-third of the time, an additional 75 msec is required, during which time the thread sleeps. How many requests/sec can the server handle if it is single threaded? If it is multithreaded?

In the single-threaded case, the cache hits take 12 msec and cache misses take 87 msec. The weighted average is 2/3 × 12 + 1/3 × 87. Thus, the mean re- quest takes 37 msec and the server can do about 27 per second. For a multi- threaded server, all the waiting for the disk is overlapped, so every request takes 12 msec, and the server can handle 83 1/3 requests per second.
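The throughput figures are a weighted-average calculation, sketched below:

```python
hit_ms = 12                 # request fully served from the block cache
miss_ms = 12 + 75           # a cache miss adds a 75-msec disk operation
miss_rate = 1 / 3

# Single-threaded: the server is tied up for the full weighted request time.
mean_ms = (1 - miss_rate) * hit_ms + miss_rate * miss_ms     # 37 msec
single_threaded = 1000 / mean_ms                             # about 27 requests/sec

# Multithreaded: disk waits overlap with other requests' CPU work,
# so only the 12 msec of processing per request counts.
multithreaded = 1000 / hit_ms                                # about 83.3 requests/sec
print(single_threaded, multithreaded)
```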

A slight modification of the elevator algorithm for scheduling disk requests is to always scan in the same direction. In what respect is this modified algorithm better than the elevator algorithm?

In the worst case, a read/write request is not serviced for almost two full disk scans in the elevator algorithm, while it is at most one full disk scan in the modified algorithm.

For an external USB hard drive attached to a computer, which is more suitable: a write-through cache or a block cache?

In this case, it is better to use a write-through cache since it writes data to the hard drive while also updating the cache. This will ensure that the updated file is always on the external hard drive even if the user accidentally removes the hard drive before disk sync is completed.

Explain the difference between internal fragmentation and external fragmentation. Which one occurs in paging systems? Which one occurs in systems using pure segmentation?

Internal fragmentation occurs when the last allocation unit is not full. External fragmentation occurs when space is wasted between two allocation units. In a paging system, the wasted space in the last page is lost to internal fragmentation. In a pure segmentation system, some space is invariably lost between the segments. This is due to external fragmentation.

You have been hired by a cloud computing company that deploys thousands of servers at each of its data centers. They have recently heard that it would be worthwhile to handle a page fault at server A by reading the page from the RAM memory of some other server rather than its local disk drive. (a) How could that be done? (b) Under what conditions would the approach be worthwhile? Be feasible?

It can certainly be done. (a) The approach has similarities to using flash memory as a paging device in smartphones, except now the virtual swap area is RAM located on a remote server. All of the software infrastructure for the virtual swap area would have to be developed. (b) The approach might be worthwhile by noting that the access time of disk drives is in the millisecond range while the access time of RAM via a network connection is in the microsecond range if the software overhead is not too high. But the approach might make sense only if there is lots of idle RAM in the server farm. And then, there is also the issue of reliability. Since RAM is volatile, the virtual swap area would be lost if the remote server went down.

Does Peterson's solution to the mutual-exclusion problem shown in Fig. 2-24 work when process scheduling is preemptive? How about when it is nonpreemptive?

It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.

Files in MS-DOS have to compete for space in the FAT-16 table in memory. If one file uses k entries, that is k entries that are not available to any other file. What constraint does this place on the total length of all files combined?

It constrains the sum of all the file lengths to being no larger than the disk. This is not a very serious constraint. If the files were collectively larger than the disk, there would be no place to store all of them on the disk.

A file whose file descriptor is fd contains the following sequence of bytes: 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5. The following system calls are made: lseek(fd, 3, SEEK_SET); read(fd, &buffer, 4); where the lseek call makes a seek to byte 3 of the file. What does buffer contain after the read has completed?

It contains the bytes: 1, 5, 9, 2.
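This can be demonstrated with the POSIX-style calls in Python's os module (a throwaway temp file stands in for fd's file):

```python
import os, tempfile

data = bytes([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
fd, path = tempfile.mkstemp()
try:
    os.write(fd, data)
    os.lseek(fd, 3, os.SEEK_SET)   # set the file offset to byte 3
    buffer = os.read(fd, 4)        # read the next four bytes
finally:
    os.close(fd)
    os.unlink(path)
print(list(buffer))   # [1, 5, 9, 2]
```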

Consider a system in which threads are implemented entirely in user space, with the run-time system getting a clock interrupt once a second. Suppose that a clock interrupt occurs while some thread is executing in the run-time system. What problem might occur? Can you suggest a way to solve it?

It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag, then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.

In Fig. 5-9(b), the interrupt is not acknowledged until after the next character has been output to the printer. Could it have equally well been acknowledged right at the start of the interrupt service procedure? If so, give one reason for doing it at the end, as in the text. If not, why not?

It could have been done at the start. A reason for doing it at the end is that the code of the interrupt service procedure is very short. By first outputting another character and then acknowledging the interrupt, if another interrupt happens immediately, the printer will be working during the interrupt, making it print slightly faster. A disadvantage of this approach is slightly longer dead time when other interrupts may be disabled.

Suppose that we have a message-passing system using mailboxes. When sending to a full mailbox or trying to receive from an empty one, a process does not block. Instead, it gets an error code back. The process responds to the error code by just trying again, over and over, until it succeeds. Does this scheme lead to race conditions?

It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.

In Fig. 3-3 the base and limit registers contain the same value, 16,384. Is this just an accident, or are they always the same? If it is just an accident, why are they the same in this example?

It is an accident. The base register is 16,384 because the program happened to be loaded at address 16,384. It could have been loaded anywhere. The limit register is 16,384 because the program contains 16,384 bytes. It could have been any length. That the load address happens to exactly match the program length is pure coincidence.

What would happen if the bitmap or free list containing the information about free disk blocks was completely lost due to a crash? Is there any way to recover from this disaster, or is it bye-bye disk? Discuss your answers for UNIX and the FAT-16 file system separately.

It is not a serious problem at all. Repair is straightforward; it just takes time. The recovery algorithm is to make a list of all the blocks in all the files and take the complement as the new free list. In UNIX this can be done by scanning all the i-nodes. In the FAT file system, the problem cannot occur because there is no free list. But even if there were, all that would have to be done to recover it is to scan the FAT looking for free entries.

Is it possible that a resource deadlock involves multiple units of one type and a single unit of another? If so, give an example.

It is possible that one process holds some or all of the units of one resource type and requests another resource type, while another process holds the second resource while requesting the available units of the first resource type. If no other process can release units of the first resource type and the resource cannot be preempted or used concurrently, the system is deadlocked. For example, two processes are both allocated memory cells in a real memory system. (We assume that swapping of pages or processes is not supported, while dynamic requests for memory are supported.) The first process locks another resource, perhaps a data cell. The second process requests the locked data and is blocked. The first process needs more memory in order to execute the code to release the data. Assuming that no other processes in the system can complete and release memory cells, a deadlock exists in the system.

Synchronization within monitors uses condition variables and two special operations, wait and signal. A more general form of synchronization would be to have a single primitive, waituntil, that had an arbitrary Boolean predicate as parameter. Thus, one could say, for example, waituntil x < 0 or y + z < n. The signal primitive would no longer be needed. This scheme is clearly more general than that of Hoare or Brinch Hansen, but it is not used. Why not? (Hint: Think about the implementation.)

It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the run-time system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.

Consider a system that has two CPUs, each CPU having two threads (hyperthreading). Suppose three programs, P0, P1, and P2, are started with run times of 5, 10 and 20 msec, respectively. How long will it take to complete the execution of these programs? Assume that all three programs are 100% CPU bound, do not block during execution, and do not change CPUs once assigned.

It may take 20, 25 or 30 msec to complete the execution of these programs depending on how the operating system schedules them. If P0 and P1 are scheduled on the same CPU and P2 is scheduled on the other CPU, it will take 20 msec. If P0 and P2 are scheduled on the same CPU and P1 is scheduled on the other CPU, it will take 25 msec. If P1 and P2 are scheduled on the same CPU and P0 is scheduled on the other CPU, it will take 30 msec. If all three are on the same CPU, it will take 35 msec.

A user at a terminal issues a command to an editor to delete the word on line 5 occupying character positions 7 through and including 12. Assuming the cursor is not on line 5 when the command is given, what ANSI escape sequence should the editor emit to delete the word?

It should move the cursor to line 5, position 7, and then delete 6 characters. The sequence is ESC [ 5 ; 7 H ESC [ 6 P.

Consider Fig. 4-27. Is it possible that for some particular block number the counters in both lists have the value 2? How should this problem be corrected?

It should not happen, but due to a bug somewhere it could happen. It means that some block occurs in two files and also twice in the free list. The first step in repairing the error is to remove both copies from the free list. Next a free block has to be acquired and the contents of the sick block copied there. Finally, the occurrence of the block in one of the files should be changed to refer to the newly acquired copy of the block. At this point the system is once again consistent.

One way to use contiguous allocation of the disk and not suffer from holes is to compact the disk every time a file is removed. Since all files are contiguous, copying a file requires a seek and rotational delay to read the file, followed by the transfer at full speed. Writing the file back requires the same work. Assuming a seek time of 5 msec, a rotational delay of 4 msec, a transfer rate of 80 MB/sec, and an average file size of 8 KB, how long does it take to read a file into main memory and then write it back to the disk at a new location? Using these numbers, how long would it take to compact half of a 16-GB disk?

It takes 9 msec to start the transfer. To read 2^13 bytes at a transfer rate of 80 MB/sec requires 0.0977 msec, for a total of 9.0977 msec. Writing it back takes another 9.0977 msec. Thus, copying a file takes 18.1954 msec. To compact half of a 16-GB disk would involve copying 8 GB of storage, which is 2^20 files. At 18.1954 msec per file, this takes 19,079.25 sec, which is 5.3 hours. Clearly, compacting the disk after every file removal is not a great idea.

The CDC 6600 computers could handle up to 10 I/O processes simultaneously using an interesting form of round-robin scheduling called processor sharing. A process switch occurred after each instruction, so instruction 1 came from process 1, instruction 2 came from process 2, etc. The process switching was done by special hardware, and the overhead was zero. If a process needed T sec to complete in the absence of competition, how much time would it need if processor sharing was used with n processes?

It will take nT sec.

A group of operating system designers for the Frugal Computer Company are thinking about ways to reduce the amount of backing store needed in their new operating system. The head guru has just suggested not bothering to save the program text in the swap area at all, but just page it in directly from the binary file whenever it is needed. Under what conditions, if any, does this idea work for the program text? Under what conditions, if any, does it work for the data?

It works for the program if the program cannot be modified. It works for the data if the data cannot be modified. However, it is common that the program cannot be modified and extremely rare that the data cannot be modified. If the data area on the binary file were overwritten with updated pages, the next time the program was started, it would not have the original data.

In the text it was stated that the model of Fig. 2-11(a) was not suited to a file server using a cache in memory. Why not? Could each process have its own cache?

It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.

It has been suggested that the first part of each UNIX file be kept in the same disk block as its i-node. What good would this do?

Many UNIX files are short. If the entire file fits in the same block as the i-node, only one disk access would be needed to read the file, instead of two, as is presently the case. Even for longer files there would be a gain, since one fewer disk access would be needed.

If a disk has double interleaving, does it also need cylinder skew in order to avoid missing data when making a track-to-track seek? Discuss your answer.

Maybe yes and maybe no. Double interleaving is effectively a cylinder skew of two sectors. If the head can make a track-to-track seek in fewer than two sector times, then no additional cylinder skew is needed. If it cannot, then additional cylinder skew is needed to avoid missing a sector after a seek.

When a user program makes a system call to read or write a disk file, it provides an indication of which file it wants, a pointer to the data buffer, and the count. Control is then transferred to the operating system, which calls the appropriate driver. Suppose that the driver starts the disk and terminates until an interrupt occurs. In the case of reading from the disk, obviously the caller will have to be blocked (because there are no data for it). What about the case of writing to the disk? Need the caller be blocked awaiting completion of the disk transfer?

Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.

One mode that some DMA controllers use is to have the device controller send the word to the DMA controller, which then issues a second bus request to write to mem- ory. How can this mode be used to perform memory to memory copy? Discuss any advantage or disadvantage of using this method instead of using the CPU to perform memory to memory copy.

Memory to memory copy can be performed by first issuing a read command that will transfer the word from memory to the DMA controller and then issuing a write to memory to transfer the word from the DMA controller to a different address in memory. This method has the advantage that the CPU can do other useful work in parallel. The disadvantage is that this memory to memory copy is likely to be slow, since the DMA controller is much slower than the CPU and the data transfer takes place over the system bus, as opposed to the dedicated CPU-memory bus.

What is the difference between kernel and user mode? Explain how having two distinct modes aids in designing an operating system.

Most modern CPUs provide two modes of execution: kernel mode and user mode. The CPU can execute every instruction in its instruction set and use every feature of the hardware when executing in kernel mode. However, it can execute only a subset of instructions and use only a subset of features when executing in user mode. Having two modes allows designers to run user programs in user mode and thus deny them access to critical instructions.

Is there any reason why you might want to mount a file system on a nonempty directory? If so, what is it?

Mounting a file system makes any files already in the mount-point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being repaired.

Suppose that process A in Fig. 6-12 requests the last tape drive. Does this action lead to a deadlock?

No. D can still finish. When it finishes, it returns enough resources to allow E (or A) to finish, and so on.

If a multithreaded process forks, a problem occurs if the child gets copies of all the parent's threads. Suppose that one of the original threads was waiting for keyboard input. Now two threads are waiting for keyboard input, one in each process. Does this problem ever occur in single-threaded processes?

No. If a single-threaded process is blocked on the keyboard, it cannot fork.

Systems that support sequential files always have an operation to rewind files. Do systems that support random-access files need this, too?

No. If you want to read the file again, just randomly access byte 0.

When segmentation and paging are both being used, as in MULTICS, first the segment descriptor must be looked up, then the page descriptor. Does the TLB also work this way, with two levels of lookup?

No. The search key uses both the segment number and the virtual page number, so the exact page can be found in a single match.

A 255-GB disk has 65,536 cylinders with 255 sectors per track and 512 bytes per sector. How many platters and heads does this disk have? Assuming an average cylinder seek time of 11 msec, average rotational delay of 7 msec, and reading rate of 100 MB/sec, calculate the average time it will take to read 400 KB from one sector.

Number of heads = 255 GB / (65,536 × 255 × 512 bytes) = 32. Number of platters = 32 / 2 = 16. The time for a read operation to complete is seek time + rotational latency + transfer time. The seek time is 11 msec, the rotational latency is 7 msec, and the transfer time for 400 KB at 100 MB/sec is 4 msec, so the average transfer takes 22 msec.

Give a list of applications for each of these systems (one per operating system type).

Obviously, there are a lot of possible answers. Here are some.
Mainframe operating system: Claims processing in an insurance company.
Server operating system: Speech-to-text conversion service for Siri.
Multiprocessor operating system: Video editing and rendering.
Personal computer operating system: Word processing application.
Handheld computer operating system: Context-aware recommendation system.
Embedded operating system: Programming a DVD recorder for recording TV.
Sensor-node operating system: Monitoring temperature in a wilderness area.
Real-time operating system: Air traffic control system.
Smart-card operating system: Electronic payment.

Oliver Owl's night job at the university computing center is to change the tapes used for overnight data backups. While waiting for each tape to complete, he works on writing his thesis that proves Shakespeare's plays were written by extraterrestrial visitors. His text processor runs on the system being backed up since that is the only one they have. Is there a problem with this arrangement?

Ollie's thesis may not be backed up as reliably as he might wish. A backup program may pass over a file that is currently open for writing, as the state of the data in such a file may be indeterminate.

A computer manufacturer decides to redesign the partition table of a Pentium hard disk to provide more than four partitions. What are some consequences of this change?

One fairly obvious consequence is that no existing operating system will work because they all look there to see where the disk partitions are. Changing the format of the partition table will cause all the operating systems to fail. The only way to change the partition table is to simultaneously change all the operating systems to use the new format.

One reason GUIs were initially slow to be adopted was the cost of the hardware needed to support them. How much video RAM is needed to support a 25-line × 80-column character monochrome text screen? How much for a 1200 × 900-pixel 24-bit color bitmap? What was the cost of this RAM at 1980 prices ($5/KB)? How much is it now?

A 25 × 80 character monochrome text screen requires a 2000-byte buffer. The 1200 × 900 pixel 24-bit color bitmap requires 3,240,000 bytes. In 1980 these two options would have cost $10 and $15,820, respectively. For current prices, check on how much RAM currently costs, probably pennies per MB.

In UNIX and Windows, random access is done by having a special system call that moves the ''current position'' pointer associated with a file to a given byte in the file. Propose an alternative way to do random access without having this system call.

One way is to add an extra parameter to the read system call that tells what address to read from. In effect, every read then has a potential for doing a seek within the file. The disadvantages of this scheme are (1) an extra parameter in every read call, and (2) requiring the user to keep track of where the file pointer is.

If a disk controller writes the bytes it receives from the disk to memory as fast as it re- ceives them, with no internal buffering, is interleaving conceivably useful? Discuss your answer.

Possibly. If most files are stored in logically consecutive sectors, it might be worthwhile interleaving the sectors to give programs time to process the data just received, so that when the next request is issued, the disk would be in the right place. Whether this is worth the trouble depends strongly on the kind of programs run and how uniform their behavior is.

In the discussion on stable storage, a key assumption is that a CPU crash that corrupts a sector leads to an incorrect ECC. What problems might arise in the five crash-recovery scenarios shown in Fig. 5-27 if this assumption does not hold?

Problems arise in the scenarios shown in Fig. 5-27(b) and 5-27(d), because they may look like scenario 5-27(c) if the ECC of the corrupted block is correct. In this case, it is not possible to detect which disk contains the valid (old or new) block, and recovery is not possible.

RAID level 3 is able to correct single-bit errors using only one parity drive. What is the point of RAID level 2? After all, it also can only correct one error and takes more drives to do so.

RAID level 2 can not only recover from crashed drives, but also from undetected transient errors. If one drive delivers a single bad bit, RAID level 2 will correct this, but RAID level 3 will not.

Compare RAID levels 0 through 5 with respect to read performance, write performance, space overhead, and reliability.

Read performance: RAID levels 0, 2, 3, 4, and 5 allow for parallel reads to service one read request. However, RAID level 1 further allows two read requests to proceed simultaneously.
Write performance: All RAID levels provide similar write performance.
Space overhead: There is no space overhead in level 0 and 100% overhead in level 1. With a 32-bit data word and six parity drives, the space overhead is about 18.75% in level 2. For a 32-bit data word, the space overhead in level 3 is about 3.13%. Finally, assuming 33 drives in levels 4 and 5, the space overhead is 3.13% in them.
Reliability: There is no reliability support in level 0. All other RAID levels can survive one disk crash. In addition, in levels 3, 4, and 5, a single random bit error in a word can be detected, while in level 2, a single random bit error in a word can be detected and corrected.

What is the difference between a physical address and a virtual address?

Real memory uses physical addresses. These are the numbers that the memory chips react to on the bus. Virtual addresses are the logical addresses that refer to a process' address space. Thus a machine with a 32-bit word can generate virtual addresses up to 4 GB regardless of whether the machine has more or less memory than 4 GB.

Explain how the system can recover from the deadlock in the previous problem using (a) recovery through preemption, (b) recovery through rollback, and (c) recovery through killing processes.

Recovery through preemption: After processes P2 and P3 complete, process P1 can be forced to preempt 1 unit of RS3. This will make A = (0 2 1 3 2), and allow process P4 to complete. Once P4 completes and releases its resources, P1 may complete. Recovery through rollback: Roll P1 back to the state checkpointed before it acquired RS3. Recovery through killing processes: Kill P1.

Assuming that it takes 2 nsec to copy a byte, how much time does it take to completely rewrite the screen of an 80 character × 25 line text mode memory-mapped screen? What about a 1024 × 768 pixel graphics screen with 24-bit color?

Rewriting the text screen requires copying 2000 bytes, which can be done in 4 μsec. Rewriting the graphics screen requires copying 1024 × 768 × 3 = 2,359,296 bytes, or about 4.72 msec.

Five jobs are waiting to be run. Their expected run times are 9, 6, 3, 5, and X. In what order should they be run to minimize average response time? (Your answer will depend on X.)

Shortest job first is the way to minimize average response time.
0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

A bitmap terminal contains 1600 by 1200 pixels. To scroll a window, the CPU (or controller) must move all the lines of text upward by copying their bits from one part of the video RAM to another. If a particular window is 80 lines high by 80 characters wide (6400 characters, total), and a character's box is 8 pixels wide by 16 pixels high, how long does it take to scroll the whole window at a copying rate of 50 nsec per byte? If all lines are 80 characters long, what is the equivalent baud rate of the terminal? Putting a character on the screen takes 5 μsec. How many lines per second can be displayed?

Scrolling the window requires copying 79 lines of 80 characters, or 6320 characters. Copying 1 character (16 bytes) takes 800 nsec, so the whole window takes 5.056 msec. Writing 80 characters to the screen takes 80 × 5 μsec = 400 μsec, so scrolling and displaying a new line take 5.456 msec. This gives about 183.3 lines/sec. At 80 characters per line and, say, 10 bits per character on a serial line, this corresponds to an equivalent rate of roughly 146,600 bps.

Explain how separation of policy and mechanism aids in building microkernel-based operating systems.

Separation of policy and mechanism allows OS designers to implement a small number of basic primitives in the kernel. These primitives are simplified, because they are not dependent on any specific policy. They can then be used to implement more complex mechanisms and policies at the user level.

Suppose that two processes A and B share a page that is not in memory. If process A faults on the shared page, the page table entry for process A must be updated once the page is read into memory. (a) Under what conditions should the page table update for process B be delayed even though the handling of process A's page fault will bring the shared page into memory? Explain. (b) What is the potential cost of delaying the page table update?

Sharing pages brings up all kinds of complications and options: (a) The page table update should be delayed for process B if it will never access the shared page or if it accesses it when the page has been swapped out again. Unfortunately, in the general case, we do not know what process B will do in the future. (b) The cost is that this lazy page fault handling can incur more page faults. The overhead of each page fault plays an important role in determining if this strategy is more efficient. (Aside: This cost is similar to that faced by the copy-on-write strategy for supporting some UNIX fork system call implementations.)

Consider a file whose size varies between 4 KB and 4 MB during its lifetime. Which of the three allocation schemes (contiguous, linked, and table/indexed) will be most appropriate?

Since the file size changes a lot, contiguous allocation will be inefficient, requiring reallocation of disk space as the file grows in size and compaction of free blocks as the file shrinks in size. Both linked and table/indexed allocation will be efficient; between the two, table/indexed allocation will be more efficient for random-access scenarios.

Contiguous allocation of files leads to disk fragmentation, as mentioned in the text, because some space in the last disk block will be wasted in files whose length is not an integral number of blocks. Is this internal fragmentation or external fragmentation? Make an analogy with something discussed in the previous chapter.

Since the wasted storage is between the allocation units (files), not inside them, this is external fragmentation. It is precisely analogous to the external fragmentation of main memory that occurs with a swapping system or a system using pure segmentation.

Two processes, A and B, each need three records, 1, 2, and 3, in a database. If A asks for them in the order 1, 2, 3, and B asks for them in the same order, deadlock is not possible. However, if B asks for them in the order 3, 2, 1, then deadlock is possible. With three resources, there are 3! or six possible combinations in which each process can request them. What fraction of all the combinations is guaranteed to be deadlock free?

Suppose that process A requests the records in the order 1, 2, 3. If process B also asks for record 1 first, one of them will get it and the other will block. This situation is always deadlock free since the winner can now run to completion without interference. Of the four other combinations, some may lead to deadlock and some are deadlock free. The six cases are as follows:
1, 2, 3: deadlock free
1, 3, 2: deadlock free
2, 1, 3: possible deadlock
2, 3, 1: possible deadlock
3, 1, 2: possible deadlock
3, 2, 1: possible deadlock
Since four of the six may lead to deadlock, there is a 1/3 chance of avoiding a deadlock and a 2/3 chance of getting one.

After receiving a DEL (SIGINT) character, the display driver discards all output currently queued for that display. Why?

Suppose that the user inadvertently asked the editor to print thousands of lines. Then he hits DEL to stop it. If the driver did not discard output, output might continue for several seconds after the DEL, which would make the user hit DEL again and again and get frustrated when nothing happened.

The four conditions (mutual exclusion, hold and wait, no preemption, and circular wait) are necessary for a resource deadlock to occur. Give an example to show that these conditions are not sufficient for a resource deadlock to occur. When are these conditions sufficient for a resource deadlock to occur?

Suppose that there are three processes, A, B, and C, and two resource types, R and S. Further assume that there is one instance of R and two instances of S. Consider the following execution scenario: A requests R and gets it; B requests S and gets it; C requests S and gets it (there are two instances of S); B requests R and is blocked; A requests S and is blocked. At this stage all four conditions hold. However, there is no deadlock. When C finishes, one instance of S is released and can be allocated to A. Now A can complete its execution and release R, which can be allocated to B, which can then complete its execution. These four conditions are sufficient if there is only one instance of each resource type.

In the example given in Fig. 1-17, the library procedure is called read and the system call itself is called read . Is it essential that both of these have the same name? If not, which one is more important?

System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.

what does putting * in front of a pointer do?

The '*' operator will "dereference" a pointer • Allows access to the original variable that it is pointing to • You can now read or modify the value of the original variable • Do not confuse with the '*' used to declare the pointer

Measurements of a certain system have shown that the average process runs for a time T before blocking on I/O. A process switch requires a time S, which is effectively wasted (overhead). For round-robin scheduling with quantum Q, give a formula for the CPU efficiency for each of the following: (a) Q = ∞ (b) Q > T (c) S < Q < T (d) Q = S (e) Q nearly 0

The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus, (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50%. Finally, for (e), as Q → 0 the efficiency goes to 0.
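The case analysis can be captured in one small function (a sketch; it assumes Q, T, S > 0):

```c
/* CPU efficiency under round robin: T/(T+S) when Q >= T, else Q/(Q+S). */
double rr_efficiency(double q, double t, double s)
{
    if (q >= t)
        return t / (t + s);   /* cases (a) and (b): one switch per run of T */
    return q / (q + s);       /* cases (c)-(e); equals 0.5 when Q == S */
}
```

Note that Q/(Q + S) gives 50% at Q = S and tends to 0 as Q → 0, matching cases (d) and (e).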

One of the first timesharing machines, the DEC PDP-1, had a (core) memory of 4K 18-bit words. It held one process at a time in its memory. When the scheduler decided to run another process, the process in memory was written to a paging drum, with 4K 18-bit words around the circumference of the drum. The drum could start writing (or reading) at any word, rather than only at word 0. Why do you suppose this drum was chosen?

The PDP-1 paging drum had the advantage of no rotational latency. This saved half a rotation each time memory was written to the drum.

In Windows, when a user double clicks on a file listed by Windows Explorer, a program is run and given that file as a parameter. List two different ways the operating system could know which program to run.

The Windows way is to use the file extension. Each extension corresponds to a file type and to some program that handles that type. Another way is to remember which program created the file and run that program. The Macintosh works this way.

Suppose four cars each approach an intersection from four different directions simultaneously. Each corner of the intersection has a stop sign. Assume that traffic regulations require that when two cars approach adjacent stop signs at the same time, the car on the left must yield to the car on the right. Thus, as four cars each drive up to their individual stop signs, each waits (indefinitely) for the car on the left to proceed. Is this anomaly a communication deadlock? Is it a resource deadlock?

The above anomaly is not a communication deadlock since these cars are independent of each other and would drive through the intersection with a minimal delay if no competition occurred. It is not a resource deadlock, since no car is holding a resource that is requested by another car. Nor would the mechanisms of resource pre-allocation or of resource preemption assist in controlling this anomaly. This anomaly is one of competition synchronization, however, in which cars are waiting for resources in a circular chain and traffic throttling may be an effective strategy for control. To distinguish from resource deadlock, this anomaly might be termed a ''scheduling deadlock.'' A similar deadlock could occur following a law that required two trains merging onto a shared railroad track to wait for the other to proceed. Note that a policeman signaling one of the competing cars or trains to proceed (and not the others) can break this dead state without rollback or any other overhead.

In the WSClock algorithm of Fig. 3-20(c), the hand points to a page with R = 0. If τ = 400, will this page be removed? What about if τ = 1000?

The age of the page is 2204 − 1213 = 991. If τ = 400, it is definitely out of the working set and was not recently referenced so it will be evicted. The τ = 1000 situation is different. Now the page falls within the working set (barely), so it is not removed.

Consider the procedure put_forks in Fig. 2-47. Suppose that the variable state[i] was set to THINKING after the two calls to test, rather than before. How would this change affect the solution?

The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similarly, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.

A program contains an error in the order of cooperation and competition mechanisms, resulting in a consumer process locking a mutex (mutual exclusion semaphore) before it blocks on an empty buffer. The producer process blocks on the mutex before it can place a value in the empty buffer and awaken the consumer. Thus, both processes are blocked forever, the producer waiting for the mutex to be unlocked and the consumer waiting for a signal from the producer. Is this a resource deadlock or a communication deadlock? Suggest methods for its control.

The anomaly is not a resource deadlock. Although processes are sharing a mutex, i.e., a competition mechanism, resource pre-allocation and deadlock avoidance methods are all ineffective for this dead state. Linearly ordering resources is also ineffective. Indeed, one could argue that linear orders may be the problem; locking the mutex should be the last step before entering a critical section and unlocking it the first step after leaving. A circular dead state does exist in which both processes wait for an event that can only be caused by the other process. This is a communication deadlock. To make progress, a time-out will work to break this deadlock if it preempts the consumer's mutex. Writing careful code or using monitors for mutual exclusion are better solutions.

It has been observed that a thin-client system works well with a 1-Mbps network in a test. Are any problems likely in a multiuser situation? (Hint: Consider a large number of users watching a scheduled TV show and the same number of users browsing the World Wide Web.)

The bandwidth on a network segment is shared, so 100 users requesting different data simultaneously on a 1-Mbps network will each see a 10-Kbps effective speed. With a shared network, a TV program can be multicast, so the video packets are only broadcast once, no matter how many users there are and it should work well. With 100 users browsing the Web, each user will get 1/100 of the bandwidth, so performance may degrade very quickly.

The beginning of a free-space bitmap looks like this after the disk partition is first formatted: 1000 0000 0000 0000 (the first block is used by the root directory). The system always searches for free blocks starting at the lowest-numbered block, so after writing file A, which uses six blocks, the bitmap looks like this: 1111 1110 0000 0000. Show the bitmap after each of the following additional actions: (a) File B is written, using five blocks. (b) File A is deleted. (c) File C is written, using eight blocks. (d) File B is deleted.

The beginning of the bitmap looks like:
(a) After writing file B: 1111 1111 1111 0000
(b) After deleting file A: 1000 0001 1111 0000
(c) After writing file C: 1111 1111 1111 1100
(d) After deleting file B: 1111 1110 0000 1100
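The sequence above can be checked with a short simulation (a sketch, not from the text; it assumes first-fit allocation from the lowest-numbered free block, as the question states):

```python
# Sketch: simulate the 16-block free-space bitmap with first-fit allocation.

def show(bits):
    """Render the bitmap in the 4-bit groups used above."""
    s = "".join(str(b) for b in bits)
    return " ".join(s[i:i + 4] for i in range(0, 16, 4))

def alloc(bitmap, nblocks):
    """Mark the first nblocks free blocks used; return their indices."""
    got = []
    for i, bit in enumerate(bitmap):
        if bit == 0:
            bitmap[i] = 1
            got.append(i)
            if len(got) == nblocks:
                break
    return got

def free(bitmap, blocks):
    for i in blocks:
        bitmap[i] = 0

bitmap = [0] * 16
bitmap[0] = 1                      # block 0: root directory
a = alloc(bitmap, 6)               # file A: six blocks
b = alloc(bitmap, 5)               # (a) file B: five blocks
print(show(bitmap))                # → 1111 1111 1111 0000
free(bitmap, a)                    # (b) delete file A
print(show(bitmap))                # → 1000 0001 1111 0000
c = alloc(bitmap, 8)               # (c) file C: eight blocks
print(show(bitmap))                # → 1111 1111 1111 1100
free(bitmap, b)                    # (d) delete file B
print(show(bitmap))                # → 1111 1110 0000 1100
```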

What is the biggest advantage of implementing threads in user space? What is the biggest disadvantage?

The biggest advantage is the efficiency. No traps to the kernel are needed to switch threads. The biggest disadvantage is that if one thread blocks, the entire process blocks.

Free disk space can be kept track of using a free list or a bitmap. Disk addresses require D bits. For a disk with B blocks, F of which are free, state the condition under which the free list uses less space than the bitmap. For D having the value 16 bits, express your answer as a percentage of the disk space that must be free.

The bitmap requires B bits. The free list requires DF bits. The free list requires fewer bits if DF < B. Alternatively, the free list is shorter if F/B < 1/D, where F/B is the fraction of blocks free. For 16-bit disk addresses, the free list is shorter if 1/16 = 6.25% or less of the disk is free.

Consider an application where students' records are stored in a file. The application takes a student ID as input and subsequently reads, updates, and writes the corresponding student record; this is repeated until the application quits. Would the "block read-ahead" technique be useful here?

The block read-ahead technique reads blocks sequentially, ahead of their use, in order to improve performance. In this application, the records will likely not be accessed sequentially since the user can input any student ID at a given instant. Thus, the read-ahead technique will not be very useful in this scenario.

Consider a disk that has 10 data blocks starting from block 14 through 23. Let there be 2 files on the disk: f1 and f2. The directory structure lists that the first data blocks of f1 and f2 are respectively 22 and 16. Given the FAT table entries as below, what are the data blocks allotted to f1 and f2? (14,18); (15,17); (16,23); (17,21); (18,20); (19,15); (20, −1); (21, −1); (22,19); (23,14). In the above notation, (x, y) indicates that the value stored in table entry x points to data block y.

The blocks allotted to f1 are: 22, 19, 15, 17, 21. The blocks allotted to f2 are: 16, 23, 14, 18, 20.
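A quick way to check the chains is to walk the FAT (a sketch, not from the text; the dictionary simply encodes the entries given above, with −1 as the end-of-file mark):

```python
# Sketch: follow FAT chains for f1 and f2 using the table entries given above.
FAT = {14: 18, 15: 17, 16: 23, 17: 21, 18: 20,
       19: 15, 20: -1, 21: -1, 22: 19, 23: 14}

def chain(fat, first):
    """Return the list of data blocks of a file starting at block 'first'."""
    blocks = []
    b = first
    while b != -1:                 # -1 marks the end of the chain
        blocks.append(b)
        b = fat[b]
    return blocks

print(chain(FAT, 22))              # f1 → [22, 19, 15, 17, 21]
print(chain(FAT, 16))              # f2 → [16, 23, 14, 18, 20]
```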

You are given the following data about a virtual memory system: (a) The TLB can hold 1024 entries and can be accessed in 1 clock cycle (1 nsec). (b) A page table entry can be found in 100 clock cycles or 100 nsec. (c) The average page replacement time is 6 msec. If page references are handled by the TLB 99% of the time, and only 0.01% lead to a page fault, what is the effective address-translation time?

The chance of a hit is 0.99 for the TLB, 0.0099 for the page table, and 0.0001 for a page fault (i.e., only 1 in 10,000 references will cause a page fault). The effective address-translation time in nsec is then 0.99 × 1 + 0.0099 × 100 + 0.0001 × 6 × 10^6 ≈ 602 nsec (602 clock cycles). Note that the effective address-translation time is quite high because it is dominated by the page-replacement time even though page faults occur only once in 10,000 references.
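The arithmetic can be checked directly (a sketch, not from the text; all times in nsec):

```python
# Sketch: effective address-translation time from the three cases above.
tlb_hit, pt_hit, fault = 0.99, 0.0099, 0.0001
t_tlb, t_pt, t_fault = 1, 100, 6_000_000   # nsec (6 msec page replacement)

t_eff = tlb_hit * t_tlb + pt_hit * t_pt + fault * t_fault
print(round(t_eff, 2))             # → 601.98, i.e. about 602 nsec
```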

A computer system has enough room to hold five programs in its main memory. These programs are idle waiting for I/O half the time. What fraction of the CPU time is wasted?

The chance that all five processes are idle is 1/32, so the CPU idle time is 1/32.

Assume that you are trying to download a large 2-GB file from the Internet. The file is available from a set of mirror servers, each of which can deliver a subset of the file's bytes; assume that a given request specifies the starting and ending bytes of the file. Explain how you might use threads to improve the download time.

The client process can create separate threads; each thread can fetch a different part of the file from one of the mirror servers. This can help reduce the download time. Of course, there is a single network link being shared by all threads. This link can become a bottleneck as the number of threads becomes very large.

Below is an execution trace of a program fragment for a computer with 512-byte pages. The program is located at address 1020, and its stack pointer is at 8192 (the stack grows toward 0). Give the page reference string generated by this program. Each instruction occupies 4 bytes (1 word) including immediate constants. Both instruction and data references count in the reference string. Load word 6144 into register 0 Push register 0 onto the stack Call a procedure at 5120, stacking the return address Subtract the immediate constant 16 from the stack pointer Compare the actual parameter to the immediate constant 4 Jump if equal to 5152

The code and reference string are as follows:
LOAD 6144,R0     1(I), 12(D)
PUSH R0          2(I), 15(D)
CALL 5120        2(I), 15(D)
SUB 16,SP        10(I)
CMP 4,param      10(I), 15(D)
JEQ 5152         10(I)
The code (I) indicates an instruction reference, whereas (D) indicates a data reference.

Here are some questions for practicing unit conversions: (a) How long is a nanoyear in seconds? (b) Micrometers are often called microns. How long is a megamicron? (c) How many bytes are there in a 1-PB memory? (d) The mass of the earth is 6000 yottagrams. What is that in kilograms?

The conversions are straightforward:
(a) A nanoyear is 10^−9 × 365 × 24 × 3600 = 31.536 msec.
(b) 1 meter.
(c) There are 2^50 bytes, which is 1,125,899,906,842,624 bytes.
(d) It is 6 × 10^24 kg or 6 × 10^27 g.

A small computer on a smart card has four page frames. At the first clock tick, the R bits are 0111 (page 0 is 0, the rest are 1). At subsequent clock ticks, the values are 1011, 1010, 1101, 0010, 1010, 1100, and 0001. If the aging algorithm is used with an 8-bit counter, give the values of the four counters after the last tick.

The counters are:
Page 0: 01101110
Page 1: 01001001
Page 2: 00110111
Page 3: 10001011
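The counters can be verified by simulating the aging algorithm (a sketch, not from the text; at each tick every counter is shifted right one bit and the page's R bit enters at the left):

```python
# Sketch: the aging algorithm with an 8-bit counter per page.
ticks = ["0111", "1011", "1010", "1101", "0010", "1010", "1100", "0001"]

counters = [0, 0, 0, 0]            # one 8-bit counter per page
for r_bits in ticks:
    for page in range(4):
        # shift right, insert this tick's R bit as the new high-order bit
        counters[page] = (counters[page] >> 1) | (int(r_bits[page]) << 7)

for page, c in enumerate(counters):
    print(f"Page {page}: {c:08b}")
```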

How much cylinder skew is needed for a 7200-RPM disk with a track-to-track seek time of 1 msec? The disk has 200 sectors of 512 bytes each on each track.

The disk rotates at 120 RPS, so 1 rotation takes 1000/120 msec. With 200 sectors per rotation, the sector time is 1/200 of this number or 5/120 = 1/24 msec. During the 1-msec seek, 24 sectors pass under the head. Thus the cylinder skew should be 24.

A thin-client terminal is used to display a Web page containing an animated cartoon of size 400 pixels × 160 pixels running at 10 frames/sec. What fraction of a 100-Mbps Fast Ethernet is consumed by displaying the cartoon?

The display size is 400 × 160 × 3 bytes, which is 192,000 bytes. At 10 fps this is 1,920,000 bytes/sec or 15,360,000 bits/sec. This consumes 15% of the Fast Ethernet.

Consider the directory tree of Fig. 4-8. If /usr/jim is the working directory, what is the absolute path name for the file whose relative path name is ../ast/x?

The dotdot component moves the search to /usr, so ../ast puts it in /usr/ast. Thus ../ast/x is the same as /usr/ast/x.

A disk manufacturer has two 5.25-inch disks that each have 10,000 cylinders. The newer one has double the linear recording density of the older one. Which disk properties are better on the newer drive and which are the same? Are any worse on the newer one?

The drive capacity and transfer rates are doubled. The seek time and average rotational delay are the same. No properties are worse.

A computer whose processes have 1024 pages in their address spaces keeps its page tables in memory. The overhead required for reading a word from the page table is 5 nsec. To reduce this overhead, the computer has a TLB, which holds 32 (virtual page, physical page frame) pairs, and can do a lookup in 1 nsec. What hit rate is needed to reduce the mean overhead to 2 nsec?

The effective instruction time is 1h + 5(1 − h), where h is the hit rate. If we equate this formula with 2 and solve for h, we find that h must be at least 0.75.
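The algebra can be written out in one step (a sketch, not from the text):

```python
# Sketch: solve 1*h + 5*(1 - h) = 2 for the required TLB hit rate.
# mean = h + 5(1 - h) = 5 - 4h, so h = (5 - mean) / 4.
mean_target = 2.0
h = (5 - mean_target) / 4
print(h)                           # → 0.75
```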

A fast-food restaurant has four kinds of employees: (1) order takers, who take customers' orders; (2) cooks, who prepare the food; (3) packaging specialists, who stuff the food into bags; and (4) cashiers, who give the bags to customers and take their money. Each employee can be regarded as a communicating sequential process. What form of interprocess communication do they use? Relate this model to processes in UNIX.

The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes.

CPU architects know that operating system writers hate imprecise interrupts. One way to please the OS folks is for the CPU to stop issuing new instructions when an interrupt is signaled, but allow all the instructions currently being executed to finish, then force the interrupt. Does this approach have any disadvantages? Explain your answer.

The execution rate of a modern CPU is determined by the number of instructions that finish per second and has little to do with how long an instruction takes. If a CPU can finish 1 billion instructions/sec it is a 1000-MIPS machine, even if an instruction takes 30 nsec. Thus there is generally little attempt to make instructions finish quickly. Holding the interrupt until the last instruction currently executing finishes may increase the latency of interrupts appreciably. Furthermore, some administration is required to get this right.

The family-of-computers idea was introduced in the 1960s with the IBM System/360 mainframes. Is this idea now dead as a doornail or does it live on?

It is still alive. For example, Intel makes Core i3, i5, and i7 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.

Consider the page sequence of Fig. 3-15(b). Suppose that the R bits for the pages B through A are 11011011, respectively. Which page will second chance remove?

The first page with a 0 bit will be chosen, in this case D.

In the text we gave an example of how to draw a rectangle on the screen using the Windows GDI: Rectangle(hdc, xleft, ytop, xright, ybottom); Is there any real need for the first parameter (hdc), and if so, what? After all, the coordinates of the rectangle are explicitly specified as parameters.

The first parameter is essential. First of all, the coordinates are relative to some window, so hdc is needed to specify the window and thus the origin. Second, the rectangle will be clipped if it falls outside the window, so the window coordinates are needed. Third, the color and other properties of the rectangle are taken from the context specified by hdc. It is quite essential.

A process running on CTSS needs 30 quanta to complete. How many times must it be swapped in, including the very first time (before it has run at all)?

The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

How many disk operations are needed to fetch the i-node for a file with the path name /usr/ast/courses/os/handout.t? Assume that the i-node for the root directory is in memory, but nothing else along the path is in memory. Also assume that all directories fit in one disk block.

The following disk reads are needed:
directory for /
i-node for /usr
directory for /usr
i-node for /usr/ast
directory for /usr/ast
i-node for /usr/ast/courses
directory for /usr/ast/courses
i-node for /usr/ast/courses/os
directory for /usr/ast/courses/os
i-node for /usr/ast/courses/os/handout.t
In total, 10 disk reads are required.
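The count generalizes: every path component below the root costs two reads, one for the enclosing directory's data and one for the component's i-node. A sketch (not from the text):

```python
# Sketch: count disk reads to resolve an absolute path when only the
# root i-node is already in memory.
path = "/usr/ast/courses/os/handout.t"
components = path.strip("/").split("/")   # 5 components
reads = 2 * len(components)               # directory data + i-node per component
print(reads)                              # → 10
```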

A soft real-time system has four periodic events with periods of 50, 100, 200, and 250 msec each. Suppose that the four events require 35, 20, 10, and x msec of CPU time, respectively. What is the largest value of x for which the system is schedulable?

The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.

A UNIX file system has 4-KB blocks and 4-byte disk addresses. What is the maximum file size if i-nodes contain 10 direct entries, and one single, double, and triple indirect entry each?

The i-node holds 10 pointers. The single indirect block holds 1024 pointers. The double indirect block is good for 1024^2 pointers. The triple indirect block is good for 1024^3 pointers. Adding these up, we get a maximum file size of 1,074,791,434 blocks. At 4 KB per block, that is about 4 TB.
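The totals can be verified with a few lines (a sketch, not from the text):

```python
# Sketch: maximum file size with 10 direct entries plus one single, one
# double, and one triple indirect entry; a 4-KB block holds 1024
# 4-byte block addresses.
ptrs_per_block = 4096 // 4         # 1024
blocks = 10 + ptrs_per_block + ptrs_per_block**2 + ptrs_per_block**3
print(blocks)                      # → 1074791434
print(blocks * 4096)               # ≈ 4.4 × 10^12 bytes, about 4 TB
```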

Given a disk-block size of 4 KB and block-pointer address value of 4 bytes, what is the largest file size (in bytes) that can be accessed using 10 direct addresses and one indirect block?

The indirect block can hold 1024 addresses. Added to the 10 direct addresses, there are 1034 addresses in all. Since each one points to a 4-KB disk block, the largest file is 4,235,264 bytes.

Consider the i-node shown in Fig. 4-13. If it contains 10 direct addresses and these were 8 bytes each and all disk blocks were 1024 bytes, what would the largest possible file be?

The indirect block can hold 128 disk addresses. Together with the 10 direct disk addresses, the maximum file has 138 blocks. Since each block is 1 KB, the largest file is 138 KB.

A machine-language instruction to load a 32-bit word into a register contains the 32-bit address of the word to be loaded. What is the maximum number of page faults this instruction can cause?

The instruction could lie astride a page boundary, causing two page faults just to fetch the instruction. The word fetched could also span a page boundary, generating two more faults, for a total of four. If words must be aligned in memory, the data word can cause only one fault, but an instruction to load a 32-bit word at address 4094 on a machine with a 4-KB page is legal on some machines (including the x86).

Consider a system in which it is desired to separate policy and mechanism for the scheduling of kernel threads. Propose a means of achieving this goal.

The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

What are the advantages and disadvantages of optical disks versus magnetic disks?

The main advantage of optical disks is that they have much higher recording densities than magnetic disks. The main advantage of magnetic disks is that they are an order of magnitude faster than optical disks.

A computer with an 8-KB page, a 256-MB main memory, and a 64-GB virtual address space uses an inverted page table to implement its virtual memory. How big should the hash table be to ensure a mean hash chain length of less than 1? Assume that the hash-table size is a power of two.

The main memory has 2^28/2^13 = 32,768 pages. A 32K hash table will have a mean chain length of 1. To get under 1, we have to go to the next size, 65,536 entries. Spreading 32,768 entries over 65,536 table slots will give a mean chain length of 0.5, which ensures fast lookup.

In some systems it is possible to map part of a file into memory. What restrictions must such systems impose? How is this partial mapping implemented?

The mapped portion of the file must start at a page boundary and be an integral number of pages in length. Each mapped page uses the file itself as backing store. Unmapped memory uses a scratch file or partition as backing store.

The designers of a computer system expected that the mouse could be moved at a maximum rate of 20 cm/sec. If a mickey is 0.1 mm and each mouse message is 3 bytes, what is the maximum data rate of the mouse assuming that each mickey is reported separately?

The maximum rate the mouse can move is 200 mm/sec, which is 2000 mickeys/sec. If each report is 3 bytes, the output rate is 6000 bytes/sec.

A system has four processes and five allocatable resources. The current allocation and maximum needs are as follows:

            Allocated   Maximum
Process A   1 0 2 1 1   1 1 2 1 3
Process B   2 0 1 1 0   2 2 2 1 0
Process C   1 1 0 1 0   2 1 3 1 0
Process D   1 1 1 1 0   1 1 2 2 1
Available:  0 0 x 1 1

What is the smallest value of x for which this is a safe state?

The needs matrix is as follows:
Process A   0 1 0 0 2
Process B   0 2 1 0 0
Process C   1 0 3 0 0
Process D   0 0 1 1 1
If x is 0, we have a deadlock immediately. If x is 1, process D can run to completion. When it is finished, the available vector is 1 1 2 2 1. Unfortunately we are now deadlocked. If x is 2, after D runs, the available vector is 1 1 3 2 1 and C can run. After it finishes and returns its resources the available vector is 2 2 3 3 1, which will allow B to run and complete, and then A to run and complete. Therefore, the smallest value of x that avoids a deadlock is 2.

Consider a 4-TB disk that uses 4-KB blocks and the free-list method. How many block addresses can be stored in one block?

The number of blocks on the disk = 4 TB / 4 KB = 2^30. Thus, each block address can be 32 bits (4 bytes), the nearest power of 2. Thus, each block can store 4 KB / 4 = 1024 addresses.

Many versions of UNIX use an unsigned 32-bit integer to keep track of the time as the number of seconds since the origin of time. When will these systems wrap around (year and month)? Do you expect this to actually happen?

The number of seconds in a mean year is 365.25 × 24 × 3600. This number is 31,557,600. The counter wraps around 2^32 seconds after 1 January 1970. The value of 2^32/31,557,600 is 136.1 years, so wrapping will happen in 2106.1, which is early February 2106. Of course, by then, all computers will be at least 64 bits, so it will not happen at all.

If FIFO page replacement is used with four page frames and eight pages, how many page faults will occur with the reference string 0172327103 if the four frames are initially empty? Now repeat this problem for LRU.

The page frames for FIFO are as follows (youngest page on top):
x 0 1 7 2 3 3 3 3 0 0
x x 0 1 7 2 2 2 2 3 3
x x x 0 1 7 7 7 7 2 2
x x x x 0 1 1 1 1 7 7
The page frames for LRU are as follows (most recently used on top):
x 0 1 7 2 3 2 7 1 0 3
x x 0 1 7 2 3 2 7 1 0
x x x 0 1 7 7 3 2 7 1
x x x x 0 1 1 1 3 2 7
FIFO yields six page faults; LRU yields seven.
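The fault counts can be verified by simulating both policies (a sketch, not from the text; the lists model a FIFO queue and an LRU recency order):

```python
# Sketch: count page faults for FIFO and LRU on the reference string above.
refs = [0, 1, 7, 2, 3, 2, 7, 1, 0, 3]
NFRAMES = 4

def fifo_faults(refs):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == NFRAMES:
                frames.pop(0)      # evict the page loaded longest ago
            frames.append(p)
    return faults

def lru_faults(refs):
    frames, faults = [], 0         # front = least recently used
    for p in refs:
        if p in frames:
            frames.remove(p)       # re-insert below as most recently used
        else:
            faults += 1
            if len(frames) == NFRAMES:
                frames.pop(0)      # evict the least recently used page
        frames.append(p)
    return faults

print(fifo_faults(refs), lru_faults(refs))  # → 6 7
```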

A machine has a 32-bit address space and an 8-KB page. The page table is entirely in hardware, with one 32-bit word per entry. When a process starts, the page table is copied to the hardware from memory, at one word every 100 nsec. If each process runs for 100 msec (including the time to load the page table), what fraction of the CPU time is devoted to loading the page tables?

The page table contains 2^32/2^13 entries, which is 524,288. Loading the page table takes 52 msec. If a process gets 100 msec, this consists of 52 msec for loading the page table and 48 msec for running. Thus 52% of the time is spent loading page tables.

In the discussion on global variables in threads, we used a procedure create_global to allocate storage for a pointer to the variable, rather than the variable itself. Is this essential, or could the procedures work with the values themselves just as well?

The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create_global, which is all right, but what type should the second parameter of set_global be, and what type should the value of read_global be?

On early computers, every byte of data read or written was handled by the CPU (i.e., there was no DMA). What implications does this have for multiprogramming?

The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100% busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

A typical printed page of text contains 50 lines of 80 characters each. Imagine that a certain printer can print 6 pages per minute and that the time to write a character to the printer's output register is so short it can be ignored. Does it make sense to run this printer using interrupt-driven I/O if each character printed requires an interrupt that takes 50 μsec all-in to service?

The printer prints 50 × 80 × 6 = 24,000 characters/min, which is 400 characters/sec. Each character uses 50 μsec of CPU time for the interrupt, so collectively in each second the interrupt overhead is 20 msec. Using interrupt-driven I/O, the remaining 980 msec of time is available for other work. In other words, the interrupt overhead costs only 2% of the CPU, which will hardly affect the running program at all.

Can the priority inversion problem discussed in Sec. 2.3.4 happen with user-level threads? Why or why not?

The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread to run. There is no preemption. With kernel-level threads this problem can arise.

A RAID can fail if two or more of its drives crash within a short time interval. Suppose that the probability of one drive crashing in a given hour is p. What is the probability of a k-drive RAID failing in a given hour?

The probability of 0 failures, P0, is (1 − p)^k. The probability of 1 failure, P1, is kp(1 − p)^(k−1). The probability of a RAID failure is then 1 − P0 − P1, which is 1 − (1 − p)^k − kp(1 − p)^(k−1).
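The formula can be wrapped in a small helper (a sketch, not from the text; the values of k and p in the example call are made up for illustration):

```python
# Sketch: probability that a k-drive RAID fails in a given hour, i.e. that
# two or more drives crash, given per-drive crash probability p.
def raid_failure(k, p):
    p0 = (1 - p) ** k                  # no drive crashes
    p1 = k * p * (1 - p) ** (k - 1)    # exactly one drive crashes
    return 1 - p0 - p1

print(raid_failure(5, 0.001))      # tiny: on the order of 1e-5
```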

Consider a multiprogrammed system with degree of 6 (i.e., six programs in memory at the same time). Assume that each process spends 40% of its time waiting for I/O. What will be the CPU utilization?

The probability that all processes are waiting for I/O is 0.4^6, which is 0.004096. Therefore, CPU utilization = 1 − 0.004096 = 0.995904.

Suppose that in Fig. 6-6 Cij + Rij > Ej for some i. What implications does this have for the system?

The process is asking for more resources than the system has. There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.

Why is the process table needed in a timesharing system? Is it also needed in personal computer systems running UNIX or Windows with a single user?

The process table is needed to store the state of a process that is currently suspended, either ready or blocked. Modern personal computer systems have dozens of processes running even when the user is doing nothing and no programs are open. They are checking for updates, loading email, and doing many other things. On a UNIX system, use the ps -a command to see them. On a Windows system, use the task manager.

It has been observed that the number of instructions executed between page faults is directly proportional to the number of page frames allocated to a program. If the available memory is doubled, the mean interval between page faults is also doubled. Suppose that a normal instruction takes 1 microsec, but if a page fault occurs, it takes 2001 μsec (i.e., 2 msec) to handle the fault. If a program takes 60 sec to run, during which time it gets 15,000 page faults, how long would it take to run if twice as much memory were available?

The program is getting 15,000 page faults, each of which uses 2 msec of extra processing time. Together, the page fault overhead is 30 sec. This means that of the 60 sec used, half was spent on page fault overhead, and half on running the program. If we run the program with twice as much memory, we get half as many memory page faults, and only 15 sec of page fault overhead, so the total run time will be 45 sec.

How long does it take to load a 64-KB program from a disk whose average seek time is 5 msec, whose rotation time is 5 msec, and whose tracks hold 1 MB (a) for a 2-KB page size? (b) for a 4-KB page size? The pages are spread randomly around the disk and the number of cylinders is so large that the chance of two pages being on the same cylinder is negligible.

The seek plus rotational latency is 10 msec. For 2-KB pages, the transfer time is about 0.009766 msec, for a total of about 10.009766 msec. Loading 32 of these pages will take about 320.31 msec. For 4-KB pages, the transfer time is doubled to about 0.019531 msec, so the total time per page is 10.019531 msec. Loading 16 of these pages takes about 160.31 msec. With such fast disks, all that matters is reducing the number of transfers (or putting the pages consecutively on the disk).

The aging algorithm with a = 1/2 is being used to predict run times. The previous four runs, from oldest to most recent, are 40, 20, 40, and 15 msec. What is the prediction of the next time?

The sequence of predictions is 40, 30, 35, and now 25.
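The prediction sequence can be reproduced in a few lines (a sketch, not from the text; with a = 1/2, each new prediction is the average of the previous prediction and the latest measured run time):

```python
# Sketch: exponential aging with a = 1/2 over the four measured run times.
runs = [40, 20, 40, 15]
prediction = runs[0]               # the first run is taken at face value
history = [prediction]
for t in runs[1:]:
    prediction = (prediction + t) / 2
    history.append(prediction)
print(history)                     # → [40, 30.0, 35.0, 25.0]
```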

Give a simple example of a page reference sequence where the first page selected for replacement will be different for the clock and LRU page replacement algorithms. Assume that a process is allocated three frames, and the reference string contains page numbers from the set 0, 1, 2, 3.

The sequence: 0, 1, 2, 1, 2, 0, 3. In LRU, page 1 will be replaced by page 3. In clock, page 0 will be replaced: since all pages have their R bits set and the hand is at page 0, every page gets a second chance, and the hand comes back around to page 0, which is then evicted.

Consider the following solution to the mutual-exclusion problem involving two processes P0 and P1. Assume that the variable turn is initialized to 0. Process P0's code is presented below.
/* Other code */
while (turn != 0) { } /* Do nothing and wait. */
Critical Section /* . . . */
turn = 0;
/* Other code */
For process P1, replace 0 by 1 in the above code. Determine if the solution meets all the required conditions for a correct mutual-exclusion solution.

The solution satisfies mutual exclusion since it is not possible for both processes to be in their critical section. That is, when turn is 0, P0 can execute its critical section, but not P1. Likewise, when turn is 1. However, this assumes P0 must run first. If P1 produces something and it puts it in a buffer, then while P0 can get into its critical section, it will find the buffer empty and block. Also, this solution requires strict alternation of the two processes, which is undesirable.

A system has two processes and three identical resources. Each process needs a maximum of two resources. Is deadlock possible? Explain your answer.

The system is deadlock free. Suppose that each process has one resource. There is one resource free. Either process can ask for it and get it, in which case it can finish and release both resources. Consequently, deadlock is impossible.

A computer provides each process with 65,536 bytes of address space divided into pages of 4096 bytes each. A particular program has a text size of 32,768 bytes, a data size of 16,386 bytes, and a stack size of 15,870 bytes. Will this program fit in the machine's address space? Suppose that instead of 4096 bytes, the page size were 512 bytes, would it then fit? Each page must contain either text, data, or stack, not a mixture of two or three of them.

The text is eight pages, the data are five pages, and the stack is four pages. The program does not fit because it needs 17 4096-byte pages. With a 512-byte page, the situation is different. Here the text is 64 pages, the data are 33 pages, and the stack is 31 pages, for a total of 128 512-byte pages, which fits. With the small page size it is OK, but not with the large one.

The performance of a file system depends upon the cache hit rate (fraction of blocks found in the cache). If it takes 1 msec to satisfy a request from the cache, but 40 msec to satisfy a request if a disk read is needed, give a formula for the mean time required to satisfy a request if the hit rate is h. Plot this function for values of h varying from 0 to 1.0.

The time needed is h + 40 × (1 − h). The plot is just a straight line.

The amount of disk space that must be available for page storage is related to the maximum number of processes, n, the number of bytes in the virtual address space, v, and the number of bytes of RAM, r. Give an expression for the worst-case disk-space requirements. How realistic is this amount?

The total virtual address space for all the processes combined is nv, so this much storage is needed for pages. However, an amount r can be in RAM, so the amount of disk storage required is only nv − r. This amount is far more than is ever needed in practice because rarely will there be n processes actually running, and even more rarely will all of them need the maximum allowed virtual memory.

In Fig. 2-2, three process states are shown. In theory, with three states, there could be six transitions, two out of each state. However, only four transitions are shown. Are there any circumstances in which either or both of the missing transitions might occur?

The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.

Section 3.3.4 states that the Pentium Pro extended each entry in the page table hierarchy to 64 bits but could still address only 4 GB of memory. Explain how this statement can be true when page table entries have 64 bits.

The virtual address was changed from (PT1, PT2, Offset) to (PT1, PT2, PT3, Offset). But the virtual address still used only 32 bits. The bit configuration of a virtual address changed from (10, 10, 12) to (2, 9, 9, 12).

Virtual machines have become very popular for a variety of reasons. Nevertheless, they have some downsides. Name one.

The virtualization layer introduces extra memory usage and processor overhead, which shows up as reduced performance compared to running directly on the hardware.

A notebook computer is set up to take maximum advantage of power saving features including shutting down the display and the hard disk after periods of inactivity. A user sometimes runs UNIX programs in text mode, and at other times uses the X Window System. She is surprised to find that battery life is significantly better when she uses text-only programs. Why?

The windowing system uses much more memory for its display and uses virtual memory more than the text mode. This makes it less likely that the hard disk will be inactive for a period long enough to cause it to be automatically powered down.

Calculate the maximum data rate in bytes/sec for the disk described in the previous problem.

There are 120 rotations in a second. During one of them, 500 × 512 bytes pass under the head. So the disk can read 256,000 bytes per rotation or 30,720,000 bytes/sec.

When an interrupt or a system call transfers control to the operating system, a kernel stack area separate from the stack of the interrupted process is generally used. Why?

There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program's memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.

Can a system be in a state that is neither deadlocked nor safe? If so, give an example. If not, prove that all states are either deadlocked or safe.

There are states that are neither safe nor deadlocked but that lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:

        Has        Needs
A:   2 0 0 0    1 0 2 0
B:   1 0 0 0    0 1 3 1
C:   0 1 2 1    1 0 1 0

Available: 0 1 2 1

This state is not deadlocked because many actions can still occur; for example, A can still get two scanners. However, if each process asks for its remaining requirements, we have a deadlock.

In the discussion of stable storage using nonvolatile RAM, the following point was glossed over. What happens if the stable write completes but a crash occurs before the operating system can write an invalid block number in the nonvolatile RAM? Does this race condition ruin the abstraction of stable storage? Explain your answer.

There is a race but it does not matter. Since the stable write itself has already completed, the fact that the nonvolatile RAM has not been updated just means that the recovery program will not know which block was being written. It will read both copies. Finding them identical, it will change neither, which is the correct action. The only effect of a crash just before the nonvolatile RAM was updated is that the recovery program will have to make two more disk reads than it otherwise would.

A computer has 4 GB of RAM of which the operating system occupies 512 MB. The processes are all 256 MB (for simplicity) and have the same characteristics. If the goal is 99% CPU utilization, what is the maximum I/O wait that can be tolerated?

There is enough room for 14 processes in memory. If a process spends a fraction p of its time waiting for I/O, the probability that all 14 are waiting for I/O is p^14. Setting p^14 = 0.01 and solving gives p ≈ 0.72, so we can tolerate processes with up to 72% I/O wait.

It has been suggested that efficiency could be improved and disk space saved by storing the data of a short file within the i-node. For the i-node of Fig. 4-13, how many bytes of data could be stored inside the i-node?

There must be a way to signal that the address-block pointers hold data, rather than pointers. If there is a bit left over somewhere among the attributes, it can be used. This leaves all nine pointers for data. If the pointers are k bytes each, the stored file could be up to 9k bytes long. If no bit is left over among the attributes, the first disk address can hold an invalid address to mark the following bytes as data rather than pointers. In that case, the maximum file is 8k bytes.

What kind of hardware support is needed for a paged virtual memory to work?

There needs to be an MMU that can remap virtual pages to physical pages. Also, when a page not currently mapped is referenced, there needs to be a trap to the operating system so it can fetch the page.

In early UNIX systems, executable files (a.out files) began with a very specific magic number, not one chosen at random. These files began with a header, followed by the text and data segments. Why do you think a very specific number was chosen for executable files, whereas other file types had a more-or-less random magic number as the first word?

These systems loaded the program directly in memory and began executing at word 0, which was the magic number. To avoid trying to execute the header as code, the magic number was a BRANCH instruction with a target address just above the header. In this way it was possible to read the binary file directly into the new process' address space and run it at 0, without even knowing how big the header was.

The Intel 8086 processor did not have an MMU or support virtual memory. Nevertheless, some companies sold systems that contained an unmodified 8086 CPU and did paging. Make an educated guess as to how they did it. (Hint: Think about the logical location of the MMU.)

They built an MMU and inserted it between the 8086 and the bus. Thus all 8086 physical addresses went into the MMU as virtual addresses. The MMU then mapped them onto physical addresses, which went to the bus.

We discussed making incremental dumps in some detail in the text. In Windows it is easy to tell when to dump a file because every file has an archive bit. This bit is missing in UNIX. How do UNIX backup programs know which files to dump?

They must keep track of the time of the last dump in a file on disk. At every dump, an entry is appended to this file. At dump time, the file is read and the time of the last entry noted. Any file changed since that time is dumped.

Modern operating systems decouple a process address space from the machine's physi- cal memory. List two advantages of this design.

This allows an executable program to be loaded in different parts of the machine's memory in different runs. Also, it enables program size to exceed the size of the machine's memory.

Assume two processes are issuing a seek command to reposition the mechanism to access the disk and enable a read command. Each process is interrupted before executing its read, and discovers that the other has moved the disk arm. Each then reissues the seek command, but is again interrupted by the other. This sequence continually repeats. Is this a resource deadlock or a livelock? What methods would you recommend to handle the anomaly?

This anomaly is not a resource deadlock; it is a livelock, an anomaly of competition synchronization. The processes are never blocked from resources, and the resources are even requested in a linear order, yet no progress is made. Resource pre-allocation would prevent it. As a heuristic, processes may time out and release their resources if they do not complete within some interval of time, sleep for a random period, and then try again.

In order to control traffic, a network router, A, periodically sends a message to its neighbor, B, telling it to increase or decrease the number of packets that it can handle. At some point in time, router A is flooded with traffic and sends B a message telling it to cease sending traffic. It does this by specifying that the number of bytes B may send (A's window size) is 0. As traffic surges decrease, A sends a new message telling B to restart transmission. It does this by increasing the window size from 0 to a positive number. That message is lost. As described, neither side will ever transmit. What type of deadlock is this?

This is clearly a communication deadlock, and can be controlled by having A time out and retransmit its enabling message (the one that increases the window size) after some period of time (a heuristic). It is possible, however, that B has received both the original and the duplicate message. No harm will occur if the update on the window size is given as an absolute value and not as a differential. Sequence numbers on such messages are also effective to detect duplicates.

A student in a compiler design course proposes to the professor a project of writing a compiler that will produce a list of page references that can be used to implement the optimal page replacement algorithm. Is this possible? Why or why not? Is there anything that could be done to improve paging efficiency at run time?

This is probably not possible except for the unusual and not very useful case of a program whose course of execution is completely predictable at compilation time. If a compiler collects information about the locations in the code of calls to procedures, this information might be used at link time to rearrange the object code so that procedures were located close to the code that calls them. This would make it more likely that a procedure would be on the same page as the calling code. Of course this would not help much for procedures called from many places in the program.

Virtual memory provides a mechanism for isolating one process from another. What memory management difficulties would be involved in allowing two operating systems to run concurrently? How might these difficulties be addressed?

This question addresses one aspect of virtual machine support. Recent attempts include Denali, Xen, and VMware. The fundamental hurdle is how to achieve near-native performance, that is, as if the executing operating system had memory to itself. The problem is how to quickly switch to another operating system and therefore how to deal with the TLB. Typically, you want to give some number of TLB entries to each kernel and ensure that each kernel operates within its proper virtual memory context. But sometimes the hardware (e.g., some Intel architectures) wants to handle TLB misses without knowledge of what you are trying to do. So, you need to either handle the TLB miss in software or provide hardware support for tagging TLB entries with a context ID.

Why would a thread ever voluntarily give up the CPU by calling thread yield? After all, since there is no periodic clock interrupt, it may never get the CPU back.

Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.

Consider the following piece of C code: void main( ) { fork( ); fork( ); exit( ); } How many child processes are created upon execution of this program?

Three processes are created. After the initial process forks, there are two processes running, a parent and a child. Each of them then forks, creating two additional processes. Then all the processes exit.

What type of multiplexing (time, space, or both) can be used for sharing the following resources: CPU, memory, disk, network card, printer, keyboard, and display?

Time multiplexing: CPU, network card, printer, keyboard. Space multiplexing: memory, disk. Both: display.

Suppose that a 10-MB file is stored on a disk on the same track (track 50) in consecutive sectors. The disk arm is currently situated over track number 100. How long will it take to retrieve this file from the disk? Assume that it takes about 1 ms to move the arm from one cylinder to the next and about 5 ms for the sector where the beginning of the file is stored to rotate under the head. Also, assume that reading occurs at a rate of 200 MB/s.

Time to retrieve the file = 50 cylinders × 1 ms (move the arm to track 50) + 5 ms (rotational delay for the first sector) + 10/200 × 1000 ms (read 10 MB) = 50 + 5 + 50 = 105 ms.

In an electronic funds transfer system, there are hundreds of identical processes that work as follows. Each process reads an input line specifying an amount of money, the account to be credited, and the account to be debited. Then it locks both accounts and transfers the money, releasing the locks when done. With many processes running in parallel, there is a very real danger that a process having locked account x will be unable to lock y because y has been locked by a process now waiting for x. Devise a scheme that avoids deadlocks. Do not release an account record until you have completed the transactions. (In other words, solutions that lock one account and then release it immediately if the other is locked are not allowed.)

To avoid circular wait, number the resources (the accounts) with their account numbers. After reading an input line, a process locks the lower-numbered account first, then when it gets the lock (which may entail waiting), it locks the other one. Since no process ever waits for an account lower than what it already has, there is never a circular wait, hence never a deadlock.

How could an operating system that can disable interrupts implement semaphores?

To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of them is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.

Is the open system call in UNIX absolutely essential? What would the consequences be of not having it?

To start with, if there were no open, on every read it would be necessary to specify the name of the file to be opened. The system would then have to fetch the i-node for it, although that could be cached. One issue that quickly arises is when to flush the i-node back to disk; it could time out. It would be a bit clumsy, but it might work.

The clock interrupt handler on a certain computer requires 2 msec (including process switching overhead) per clock tick. The clock runs at 60 Hz. What fraction of the CPU is devoted to the clock?

Two msec 60 times a second is 120 msec/sec, or 12% of the CPU.

A computer with a 32-bit address uses a two-level page table. Virtual addresses are split into a 9-bit top-level page table field, an 11-bit second-level page table field, and an offset. How large are the pages and how many are there in the address space?

Twenty bits are used for the virtual page numbers, leaving 12 over for the offset. This yields a 4-KB page. Twenty bits for the virtual page number implies 2^20 pages.

Explain how an OS can facilitate installation of a new device without any need for recompiling the OS.

UNIX does it as follows. There is a table indexed by device number, with each table entry being a C struct containing pointers to the functions for opening, closing, reading, and writing and a few other things from the device. To install a new device, a new entry has to be made in this table and the pointers filled in, often to the newly loaded device driver.

Suppose that the virtual page reference stream contains repetitions of long sequences of page references followed occasionally by a random page reference. For example, the sequence: 0, 1, ... , 511, 431, 0, 1, ... , 511, 332, 0, 1, ... consists of repetitions of the sequence 0, 1, ... , 511 followed by a random reference to pages 431 and 332. (a) Why will the standard replacement algorithms (LRU, FIFO, clock) not be effective in handling this workload for a page allocation that is less than the sequence length? (b) If this program were allocated 500 page frames, describe a page replacement approach that would perform much better than the LRU, FIFO, or clock algorithms.

Under these circumstances: (a) Every reference will page fault unless the number of page frames is 512, the length of the entire sequence. (b) If there are 500 frames, map pages 0–498 to fixed frames and vary only one frame.

Suppose that a machine has 48-bit virtual addresses and 32-bit physical addresses. (a) If pages are 4 KB, how many entries are in the page table if it has only a single level? Explain. (b) Suppose this same system has a TLB (Translation Lookaside Buffer) with 32 entries. Furthermore, suppose that a program contains instructions that fit into one page and it sequentially reads long integer elements from an array that spans thousands of pages. How effective will the TLB be for this case?

Under these circumstances: (a) We need one entry for each page. Since the page-number field has 48 − 12 = 36 bits, a single-level page table needs 2^36 entries (about 64 billion). (b) Instruction addresses will hit 100% in the TLB. The data pages will have a 100% hit rate until the program has moved onto the next data page. Since a 4-KB page contains 1,024 long integers, there will be one TLB miss and one extra memory access for every 1,024 data references.

A simple operating system supports only a single directory but allows it to have arbitrarily many files with arbitrarily long file names. Can something approximating a hierarchical file system be simulated? How?

Use file names such as /usr/ast/file. While it looks like a hierarchical path name, it is really just a single name containing embedded slashes.

Can a thread ever be preempted by a clock interrupt? If so, under what circumstances? If not, why not?

User-level threads cannot be preempted by the clock unless the whole process' quantum has been used up (although transparent clock interrupts can happen). Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same proc- ess to run next if it so desires.

The readers and writers problem can be formulated in several ways with regard to which category of processes can be started when. Carefully describe three different variations of the problem, each one favoring (or not favoring) some category of processes. For each variation, specify what happens when a reader or a writer becomes ready to access the database, and what happens when a process is finished.

Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.

In Fig. 2-12 the register set is listed as a per-thread rather than a per-process item. Why? After all, the machine has only one set of registers.

When a thread is stopped, it has values in the registers. They must be saved, just as when a process is stopped. Multiprogramming threads is no different from multiprogramming processes, so each thread needs its own register save area.

A machine has 48-bit virtual addresses and 32-bit physical addresses. Pages are 8 KB. How many entries are needed for a single-level linear page table?

With 8-KB pages and a 48-bit virtual address space, the number of virtual pages is 2^48 / 2^13 = 2^35 (about 34 billion), so a single-level linear page table needs 2^35 entries.

The primary additive colors are red, green, and blue, which means that any color can be constructed from a linear superposition of these colors. Is it possible that someone could have a color photograph that cannot be represented using full 24-bit color?

With a 24-bit color system, only 2^24 colors can be represented. This is not all of them. For example, suppose that a photographer takes pictures of 300 cans of pure blue paint, each with a slightly different amount of pigment. The first might be represented by the (R, G, B) value (0, 0, 1). The next one might be represented by (0, 0, 2), and so forth. Since the B coordinate is only 8 bits, there is no way to represent 300 different values of pure blue. Some of the photographs will have to be rendered as the wrong color. Another example is the color (120.24, 150.47, 135.89). It cannot be represented, only approximated by (120, 150, 136).

Can two threads in the same process synchronize using a kernel semaphore if the threads are implemented by the kernel? What if they are implemented in user space? Assume that no threads in any other processes have access to the semaphore. Discuss your answers.

With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

In Sec. 2.3.4, a situation with a high-priority process, H, and a low-priority process, L, was described, which led to H looping forever. Does the same problem occur if round-robin scheduling is used instead of priority scheduling? Discuss.

With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.

A computer uses a programmable clock in square-wave mode. If a 500-MHz crystal is used, what should be the value of the holding register to achieve a clock resolution of (a) a millisecond (a clock tick once every millisecond)? (b) 100 microseconds?

With these parameters: (a) Using a 500-MHz crystal, the counter is decremented every 2 ns. For a tick every millisecond, the holding register should be 1,000,000 ns / 2 ns = 500,000. (b) To get a clock tick every 100 μs, the holding register value should be 100,000 ns / 2 ns = 50,000.

Suppose that an operating system does not have anything like the select system call to see in advance if it is safe to read from a file, pipe, or device, but it does allow alarm clocks to be set that interrupt blocked system calls. Is it possible to implement a threads package in user space under these conditions? Discuss.

Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems develop. This is not an attractive way to build a threads package.

In the discussion on stable storage, it was shown that the disk can be recovered to a consistent state (a write either completes or does not take place at all) if a CPU crash occurs during a write. Does this property hold if the CPU crashes again during a recovery procedure? Explain your answer.

Yes, the disk remains consistent even if the CPU crashes during a recovery procedure. Consider Fig. 5-31. There is no recovery involved in (a) or (e). Suppose the CPU crashes during recovery in (b). If it crashes before the block from drive 2 has been completely copied to drive 1, the situation remains as before: the subsequent recovery procedure will detect an ECC error in drive 1 and again copy the block from drive 2 to drive 1. If it crashes after the block has been copied, the situation is the same as in case (e). Suppose the CPU crashes during recovery in (c). If it crashes before the block from drive 1 has been completely copied to drive 2, the situation is the same as in case (d): the subsequent recovery procedure will detect an ECC error in drive 2 and copy the block from drive 1 to drive 2. If it crashes after the copy, the situation is the same as in case (e). Finally, suppose the CPU crashes during recovery in (d). If it crashes before the block from drive 1 has been completely copied to drive 2, the situation remains as before: the subsequent recovery procedure will detect an ECC error in drive 2 and again copy the block from drive 1 to drive 2. If it crashes after the copy, the situation is the same as in case (e).

Fig. 6-3 shows the concept of a resource graph. Do illegal graphs exist, that is, graphs that structurally violate the model we have used of resource usage? If so, give an example of one.

Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus, a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules unless there are multiple copies of the resources. Arcs from squares to squares or from circles to circles also violate the rules.

In Fig. 2-15 the thread creations and messages printed by the threads are interleaved at random. Is there a way to force the order to be strictly thread 1 created, thread 1 prints message, thread 1 exits, thread 2 created, thread 2 prints message, thread 2 exits, and so on? If so, how? If not, why not?

Yes, it can be done. After each call to pthread_create, the main program could do a pthread_join to wait until the thread just created has exited before creating the next thread.

Does the busy waiting solution using the turn variable (Fig. 2-23) work when the two processes are running on a shared-memory multiprocessor, that is, two CPUs sharing a common memory?

Yes, it still works, but it still is busy waiting, of course.

The producer-consumer problem can be extended to a system with multiple producers and consumers that write (or read) to (from) one shared buffer. Assume that each producer and consumer runs in its own thread. Will the solution presented in Fig. 2-28, using semaphores, work for this system?

Yes, it will work as is. At a given time instant, only one producer (consumer) can add (remove) an item to (from) the buffer.

Can the resource trajectory scheme of Fig. 6-8 also be used to illustrate the problem of deadlocks with three processes and three resources? If so, how can this be done? If not, why not?

Yes. Do the whole thing in three dimensions. The z-axis measures the number of instructions executed by the third process.

In the text, we described a multithreaded Web server, showing why it is better than a single-threaded server and a finite-state machine server. Are there any circumstances in which a single-threaded server might be better? Give an example.

Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes and can easily be kept in the server's memory to provide fast lookup.

A distributed system using mailboxes has two IPC primitives, send and receive . The latter primitive specifies a process to receive from and blocks if no message from that process is available, even though messages may be waiting from other processes. There are no shared resources, but processes need to communicate frequently about other matters. Is deadlock possible? Discuss.

Yes. Suppose that all the mailboxes are empty. Now A sends to B and waits for a reply, B sends to C and waits for a reply, and C sends to A and waits for a reply. All the conditions for a communications deadlock are now fulfilled.

Some operating systems provide a system call rename to give a file a new name. Is there any difference at all between using this call to rename a file and just copying the file to a new file with the new name, followed by deleting the old one?

Yes. The rename call does not change the creation time or the time of last modification, but creating a new file causes it to get the current time as both the creation time and the time of last modification. Also, if the disk is nearly full, the copy might fail.

When a computer is being developed, it is usually first simulated by a program that runs one instruction at a time. Even multiprocessors are simulated strictly sequentially like this. Is it possible for a race condition to occur when there are no simultaneous events like this?

Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds 1 to the variable, we have a race condition.

A computer has a three-stage pipeline as shown in Fig. 1-7(a). On each clock cycle, one new instruction is fetched from memory at the address pointed to by the PC and put into the pipeline and the PC advanced. Each instruction occupies exactly one memory word. The instructions already in the pipeline are each advanced one stage. When an interrupt occurs, the current PC is pushed onto the stack, and the PC is set to the address of the interrupt handler. Then the pipeline is shifted right one stage and the first instruction of the interrupt handler is fetched into the pipeline. Does this machine have precise interrupts? Defend your answer.

Yes. The stacked PC points to the first instruction not fetched. All instructions before that have been executed, and the instruction pointed to and its successors have not been executed. This is the condition for precise interrupts. Precise interrupts are not hard to achieve on a machine with a single pipeline. The trouble comes when instructions are executed out of order, which is not the case here.

In the dining philosophers problem, let the following protocol be used: An even-numbered philosopher always picks up his left fork before picking up his right fork; an odd-numbered philosopher always picks up his right fork before picking up his left fork. Will this protocol guarantee deadlock-free operation?

Yes. There will always be at least one fork free and at least one philosopher who can obtain both forks simultaneously. Hence, there will be no deadlock. You can check this for N = 2, N = 3, and N = 4 and then generalize.

Give five different path names for the file /etc/passwd. (Hint: Think about the directory entries ''.'' and ''..''.)

You can go up and down the tree as often as you want using ''..''. Some of the many paths are:

/etc/passwd
/./etc/passwd
/././etc/passwd
/./././etc/passwd
/etc/../etc/passwd
/etc/../etc/../etc/passwd
/etc/../etc/../etc/../etc/passwd
/etc/../etc/../etc/../etc/../etc/passwd

Suppose that you were to design an advanced computer architecture that did process switching in hardware, instead of having interrupts. What information would the CPU need? Describe how the hardware process switching might work.

You could have a register containing a pointer to the current process-table entry. When I/O completed, the CPU would store the current machine state in the current process-table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process-table entry (the service procedure). This process would then be started up.

how is a string stored in c

char hello[6] = {'h', 'e', 'l', 'l', 'o', '\0'};

how do you free allocated memory

free(pointerName);

how do you allocate memory

int *ptr = (int *) malloc(sizeof (int));

what does putting the & symbol in front of a var do

it returns the address of the variable. e.g. given int i = 3; int value = i; int *pointer = &i; then value contains a copy of 3 and pointer contains the memory address of i, i.e. it is a pointer to the variable.

what does the Type of pointer mean? e.g. int* height

it tells us the type of data stored at the memory address. e.g. int* height means that an integer is stored at the memory location height points to

In many UNIX systems, the i-nodes are kept at the start of the disk. An alternative design is to allocate an i-node when a file is created and put the i-node at the start of the first block of the file. Discuss the pros and cons of this alternative.

Some pros are as follows. First, no disk space is wasted on unused i-nodes. Second, it is not possible to run out of i-nodes. Third, less disk movement is needed since the i-node and the initial data can be read in one operation. Some cons are as follows. First, directory entries will now need a 32-bit disk address instead of a 16-bit i-node number. Second, an entire disk block will be used even for files which contain no data (empty files, device files). Third, file-system integrity checks will be slower because of the need to read an entire block for each i-node and because i-nodes will be scattered all over the disk. Fourth, files whose size has been carefully designed to fit the block size will no longer fit the block size due to the i-node, hurting performance.

where are function arguments stored when passed to a function

on the stack (in the classic C calling convention; many modern ABIs pass the first few arguments in registers, spilling the rest to the stack)

what are the 2 main functions of an operating system

resource management and providing an extended machine (i.e. useful abstractions that are more convenient than the real hardware)

where are global variables stored

in the data segment (static storage, including the BSS for zero-initialized globals), which is separate from both the stack and the heap

how does the function know how to get back to the rest of the program

the return address is pushed onto the stack when the function is called; when the function finishes, it is popped and execution jumps back to the caller

what can you do if you want a variable that only has scope for a particular function (like a local variable) but still persists for the lifetime of my program (like a global)?

use a static variable: a static var can only be accessed inside its scope, but its value persists between calls. If you declare static int i = 0; in a function, the var is set to 0 only on the first call.

