CSci 451 Ch1-5 Book Qx Answers


Figure 3.9b contains seven states. In principle, one could draw a transition between any two states, for a total of 42 different transitions. a. List all of the possible transitions and give an example of what could cause each transition. b. List all of the impossible transitions and explain why.

Important new transitions are the following:
• Blocked -> Blocked/Suspend: If there are no ready processes, then at least one blocked process is swapped out to make room for another process that is not blocked. This transition can be made even if there are ready processes available, if the OS determines that the currently running process, or a ready process that it would like to dispatch, requires more main memory to maintain adequate performance.
• Blocked/Suspend -> Ready/Suspend: A process in the Blocked/Suspend state is moved to the Ready/Suspend state when the event for which it has been waiting occurs. Note that this requires that the state information concerning suspended processes be accessible to the OS.
• Ready/Suspend -> Ready: When there are no ready processes in main memory, the OS will need to bring one in to continue execution. In addition, it might be the case that a process in the Ready/Suspend state has higher priority than any of the processes in the Ready state. In that case, the OS designer may dictate that it is more important to get at the higher-priority process than to minimize swapping.
• Ready -> Ready/Suspend: Normally, the OS would prefer to suspend a blocked process rather than a ready one, because the ready process can now be executed, whereas the blocked process is taking up main memory space and cannot be executed. However, it may be necessary to suspend a ready process if that is the only way to free up a sufficiently large block of main memory. Also, the OS may choose to suspend a lower-priority ready process rather than a higher-priority blocked process if it believes that the blocked process will be ready soon.
Several other transitions that are worth considering are the following:
• New -> Ready/Suspend and New -> Ready: When a new process is created, it can either be added to the Ready queue or the Ready/Suspend queue. In either case, the OS must create a process control block and allocate an address space to the process. It might be preferable for the OS to perform these housekeeping duties at an early time, so that it can maintain a large pool of processes that are not blocked. With this strategy, there would often be insufficient room in main memory for a new process; hence the use of the New -> Ready/Suspend transition. On the other hand, we could argue that a just-in-time philosophy of creating processes as late as possible reduces OS overhead and allows the OS to perform the process-creation duties at a time when the system is clogged with blocked processes anyway.
• Blocked/Suspend -> Blocked: Inclusion of this transition may seem to be poor design. After all, if a process is not ready to execute and is not already in main memory, what is the point of bringing it in? But consider the following scenario: A process terminates, freeing up some main memory. There is a process in the Blocked/Suspend queue with a higher priority than any of the processes in the Ready/Suspend queue, and the OS has reason to believe that the blocking event for that process will occur soon. Under these circumstances, it would seem reasonable to bring a blocked process into main memory in preference to a ready process.
• Running -> Ready/Suspend: Normally, a running process is moved to the Ready state when its time allocation expires. If, however, the OS is preempting the process because a higher-priority process on the Blocked/Suspend queue has just become unblocked, the OS could move the running process directly to the Ready/Suspend queue and free some main memory.
• Any State -> Exit: Typically, a process terminates while it is running, either because it has completed or because of some fatal fault condition. However, in some operating systems, a process may be terminated by the process that created it or when the parent process is itself terminated. If this is allowed, then a process in any state can be moved to the Exit state.
Recall that the reason for all of this elaborate machinery is that I/O activities are much slower than computation, and therefore the processor in a uniprogramming system is idle most of the time. But the arrangement of Figure 3.8b does not entirely solve the problem. It is true that, in this case, memory holds multiple processes and that the processor can move to another process when one process is blocked. But the processor is so much faster than I/O that it will be common for all of the processes in memory to be waiting for I/O. Thus, even with multiprogramming, a processor could be idle most of the time.

What to do? Main memory could be expanded to accommodate more processes. But there are two flaws in this approach. First, there is a cost associated with main memory, which, though small on a per-byte basis, begins to add up as we get into the gigabytes of storage. Second, the appetite of programs for memory has grown as fast as the cost of memory has dropped. So larger memory results in larger processes, not more processes.

Another solution is swapping, which involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the Ready state, the OS swaps one of the blocked processes out onto disk into a suspend queue. This is a queue of existing processes that have been temporarily kicked out of main memory, or suspended. The OS then brings in another process from the suspend queue, or it honors a new-process request. Execution then continues with the newly arrived process. Swapping, however, is an I/O operation, and therefore there is the potential for making the problem worse, not better. But because disk I/O is generally the fastest I/O on a system (e.g., compared to tape or printer I/O), swapping will usually enhance performance. With the use of swapping as just described, one other state must be added to our process behavior model (Figure 3.9a): the Suspend state.

When all of the processes in main memory are in the Blocked state, the OS can suspend one process by putting it in the Suspend state and transferring it to disk. The space that is freed in main memory can then be used to bring in another process. When the OS has performed a swapping-out operation, it has two choices for selecting a process to bring into main memory: it can admit a newly created process, or it can bring in a previously suspended process. It would appear that the preference should be to bring in a previously suspended process, to provide it with service rather than increasing the total load on the system. But this line of reasoning presents a difficulty. All of the processes that have been suspended were in the Blocked state at the time of suspension. It clearly would not do any good to bring a blocked process back into main memory, because it is still not ready for execution. Recognize, however, that each process in the Suspend state was originally blocked on a particular event. When that event occurs, the process is no longer blocked and is potentially available for execution.
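The legal transitions discussed above can be sketched as a small table of moves. The state names follow the discussion; the dictionary layout and the function name are illustrative choices, and the exact transition set (for example, whether New can go straight to Exit) varies by operating system.

```python
# Legal transitions in the seven-state process model discussed above.
# Each state maps to the set of states it may move to.
LEGAL = {
    "New": {"Ready", "Ready/Suspend", "Exit"},
    "Ready": {"Running", "Ready/Suspend", "Exit"},
    "Running": {"Ready", "Blocked", "Ready/Suspend", "Exit"},
    "Blocked": {"Ready", "Blocked/Suspend", "Exit"},
    "Blocked/Suspend": {"Blocked", "Ready/Suspend", "Exit"},
    "Ready/Suspend": {"Ready", "Exit"},
    "Exit": set(),
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if src -> dst is a legal transition in this model."""
    return dst in LEGAL.get(src, set())
```

A scheduler built on this model would consult such a table before moving a process, rejecting, say, a direct Blocked -> Running move.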

Give three reasons for process creation.

1. OS may permit resource sharing and resource protection. 2. if a user requests that a file be printed, the OS can create a process that will manage the printing. The requesting process can thus proceed independently of the time required to complete the printing task. 3. an application process may generate another process to receive data that the application is generating and to organize those data into a form suitable for later analysis. The new process runs in parallel to the original process and is activated from time to time when new data are available. This arrangement can be very useful in structuring the application.

The program execution of Figure 1.4 is described in the text using six steps. Expand this description to show the use of the MAR and MBR.

1. The PC contains 300, the address of the first instruction. This value is copied into the MAR; the word at that location (the value 1940 in hexadecimal) is read from memory into the MBR; the MBR is copied into the IR; and the PC is incremented to 301. 2. The first 4 bits (first hexadecimal digit) in the IR indicate that the AC is to be loaded from memory. The remaining 12 bits (three hexadecimal digits) specify the address, 940, which is placed in the MAR; the contents of location 940 (0003) are read into the MBR; and the MBR is copied into the AC. 3. The PC (301) is copied into the MAR; the next instruction (5941) is read into the MBR and then the IR; and the PC is incremented to 302. 4. The address 941 is placed in the MAR; the contents of location 941 (0002) are read into the MBR; the MBR is added to the AC, giving 0005. 5. The PC (302) is copied into the MAR; the next instruction (2941) is read into the MBR and then the IR; and the PC is incremented to 303. 6. The address 941 is placed in the MAR; the AC (0005) is copied into the MBR; and the MBR is written to location 941. In each step, the MAR specifies the address in memory for the next read or write, and the MBR contains the data to be written into memory or receives the data read from memory.
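The steps above can be sketched as a tiny fetch-execute loop that makes the MAR and MBR explicit. The memory contents mirror the Figure 1.4 example; the helper names and the use of a dictionary for memory are illustrative choices, not part of the original machine description.

```python
# Sketch of the Figure 1.4 run with the MAR and MBR made explicit.
# Memory contents (hex) follow the example: program at 300-302, data at 940-941.
memory = {0x300: 0x1940, 0x301: 0x5941, 0x302: 0x2941,
          0x940: 0x0003, 0x941: 0x0002}
PC, AC = 0x300, 0
MAR = MBR = IR = 0

def fetch():
    """PC -> MAR; memory[MAR] -> MBR; MBR -> IR; then increment the PC."""
    global MAR, MBR, IR, PC
    MAR = PC            # address of the next instruction
    MBR = memory[MAR]   # the word read from memory arrives in the MBR
    IR = MBR            # the instruction moves from the MBR to the IR
    PC += 1

def execute():
    """Decode opcode (top 4 bits) and address (low 12 bits), via MAR/MBR."""
    global MAR, MBR, AC
    opcode, addr = IR >> 12, IR & 0xFFF
    MAR = addr
    if opcode == 0x1:       # load AC from memory
        MBR = memory[MAR]
        AC = MBR
    elif opcode == 0x5:     # add memory word to AC
        MBR = memory[MAR]
        AC += MBR
    elif opcode == 0x2:     # store AC to memory
        MBR = AC
        memory[MAR] = MBR

for _ in range(3):
    fetch()
    execute()
# After the three instructions, AC holds 5 and location 941 holds 5.
```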

List four characteristics of a suspended process

1. The process is not immediately available for execution. 2. The process may or may not be waiting on an event. If it is, this blocked condition is independent of the suspend condition, and occurrence of the blocking event does not enable the process to be executed immediately. 3. The process was placed in a suspended state by an agent: either itself, a parent process, or the OS, for the purpose of preventing its execution. 4. The process may not be removed from this state until the agent explicitly orders the removal.

What is multiprogramming?

A mode of operation that provides for the interleaved execution of two or more computer programs by a single processor. The same as multitasking, using different terminology.

What is swapping and what is its purpose?

A process that interchanges the contents of an area of main storage with the contents of an area in secondary memory. Its purpose is to free main memory: when no resident process is ready, the OS can swap a blocked process out to disk and bring in another process that can run, allowing more processes to be active than will fit in main memory at once.

What is a process?

A program in execution. A process is controlled and scheduled by the operating system. Same as task.

How is the execution context of a process used by the OS?

Also known as the process state, the execution context is the internal data the operating system uses to control and supervise a process. It includes the contents of the processor registers as well as information such as the process's priority and whether it is waiting on an I/O event; the OS saves this context when it interrupts the process and restores it to resume execution.

Control

An instruction may specify that the sequence of execution be altered.

What is an interrupt?

An interrupt is a hardware-generated change-of-flow within the system. An interrupt handler is summoned to deal with the cause of the interrupt; control is then returned to the interrupted context and instruction.

Scheduling

Any processor may perform scheduling, which complicates the task of enforcing a scheduling policy and assuring that corruption of the scheduler data structures is avoided. If kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from the same process simultaneously on multiple processors.

What are the steps performed by an OS to create a new process?

1. Assign a unique process identifier to the new process. At this time, a new entry is added to the primary process table, which contains one entry per process.
2. Allocate space for the process. This includes all elements of the process image. Thus, the OS must know how much space is needed for the private user address space (programs and data) and the user stack. These values can be assigned by default based on the type of process, or they can be set based on user request at job creation time. If a process is spawned by another process, the parent process can pass the needed values to the OS as part of the process-creation request. If any existing address space is to be shared by this new process, the appropriate linkages must be set up. Finally, space for a process control block must be allocated.
3. Initialize the process control block. The process identification portion contains the ID of this process plus other appropriate IDs, such as that of the parent process. The processor state information portion will typically be initialized with most entries zero, except for the program counter (set to the program entry point) and system stack pointers (set to define the process stack boundaries). The process control information portion is initialized based on standard default values plus attributes that have been requested for this process. For example, the process state would typically be initialized to Ready or Ready/Suspend. The priority may be set by default to the lowest priority unless an explicit request is made for a higher priority. Initially, the process may own no resources (I/O devices, files) unless there is an explicit request for these or unless they are inherited from the parent.
4. Set the appropriate linkages. For example, if the OS maintains each scheduling queue as a linked list, then the new process must be put in the Ready or Ready/Suspend list.
5. Create or expand other data structures. For example, the OS may maintain an accounting file on each process to be used subsequently for billing and/or performance assessment purposes.
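The creation steps above can be sketched in miniature. The field names, defaults, and use of a plain list for the Ready queue are assumptions for illustration only, not any particular OS's layout.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the process-creation steps above; all field names
# and defaults are assumptions, not a real OS's PCB layout.
@dataclass
class PCB:
    pid: int                          # step 1: unique process identifier
    parent_pid: Optional[int] = None  # step 3: ID of the parent, if any
    program_counter: int = 0          # step 3: set to the program entry point
    state: str = "Ready"              # typically Ready or Ready/Suspend
    priority: int = 0                 # lowest priority by default

ready_queue = []                      # step 4: the Ready scheduling queue
_next_pid = 0

def create_process(entry_point, parent=None):
    """Walk the steps above: assign a pid, build the PCB, link it in."""
    global _next_pid
    _next_pid += 1                                    # step 1: unique id
    pcb = PCB(pid=_next_pid,                          # steps 2-3: build PCB
              parent_pid=parent.pid if parent else None,
              program_counter=entry_point)
    ready_queue.append(pcb)                           # step 4: link into Ready
    return pcb
```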

A computer has a cache, main memory, and a disk used for virtual memory. If a referenced word is in the cache, 20 ns are required to access it. If it is in main memory but not in the cache, 60 ns are needed to load it into the cache (this includes the time to originally check the cache), and then the reference is started again. If the word is not in main memory, 12 ms are required to fetch the word from disk, followed by 60 ns to copy it to the cache, and then the reference is started again. The cache hit ratio is 0.9 and the main-memory hit ratio is 0.6. What is the average time in ns required to access a referenced word on this system?

Considering only references satisfied by the cache or main memory, the average access time is 0.9 × 20 ns + 0.1 × 80 ns = 26 ns. (A cache miss that hits main memory costs 60 ns to load the cache plus 20 ns to restart the reference, or 80 ns in total.) Taking the 0.6 main-memory hit ratio to apply to the 0.1 of references that miss the cache, and noting that a reference that goes to disk costs 12 ms + 60 ns + 20 ns = 12,000,080 ns, the overall average is 0.9 × 20 ns + 0.1 × (0.6 × 80 ns + 0.4 × 12,000,080 ns) ≈ 480,026 ns, or roughly 0.48 ms. Clearly we need very high hit ratios to make a virtual memory system work well.
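The expected-value calculation can be checked directly, using the figures from the question and taking the 0.6 main-memory hit ratio as conditional on a cache miss:

```python
# Expected access time for the cache/memory/disk system in the question above.
t_cache = 20                           # ns for a cache hit
t_mem = 60 + t_cache                   # ns: load into cache, then retry (80 ns)
t_disk = 12_000_000 + 60 + t_cache     # ns: disk fetch + copy to cache + retry

h_cache, h_mem = 0.9, 0.6              # h_mem is conditional on a cache miss
avg = h_cache * t_cache + (1 - h_cache) * (h_mem * t_mem + (1 - h_mem) * t_disk)
print(f"{avg:.0f} ns")                 # about 480,026 ns, i.e. roughly 0.48 ms
```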

storage management responsibilities of a typical OS 2

Automatic allocation and management

Why does Figure 3.9b have two blocked states?

Because the suspend (swapping) mechanism is independent of the wait-for-event mechanism, a blocked process can be in either of two situations: still resident in main memory (Blocked) or swapped out to secondary storage (Blocked/Suspend). Two distinct states are needed because the OS must track the two conditions separately. Swapping a blocked process out frees main memory for processes that can actually run, and when the awaited event occurs for a swapped-out process, it moves to Ready/Suspend rather than Ready, since it must still be brought back into main memory before it can be dispatched.

distinct actions that a machine instruction can specify? 4

Control

Processor

Controls the operation of the computer and performs its data processing functions.

Processor-memory

Data may be transferred from processor to memory or from memory to processor.

Processor-I/O

Data may be transferred to or from a peripheral device by transferring between the processor and an I/O module

distinct actions that a machine instruction can specify? 3

Data processing

Suppose that we have a multiprogrammed computer in which each job has identical characteristics. In one computation period, T , for a job, half the time is spent in I/O and the other half in processor activity. Each job runs for a total of N periods. Assume that a simple round-robin scheduling is used, and that I/O operations can overlap with processor operation. Define the following quantities: • Turnaround time = actual time to complete a job • Throughput = average number of jobs completed per time period T • Processor utilization = percentage of time that the processor is active (not waiting) Compute these quantities for one, two, and four simultaneous jobs, assuming that the period T is distributed in each of the following ways: a. I/O first half, processor second half b. I/O first and fourth quarters, processor second and third quarter

a. I/O first half, processor second half.
When there is one job, it can do I/O or run on the processor whenever it wants, so:
Turnaround time = N×T; Throughput = 1/N jobs per period T; Processor utilization = 50%.
When there are two jobs, one starts right away and does I/O. When it switches to run on the CPU, the second can start its I/O. This delays the second job by T/2, but otherwise they alternate between I/O and CPU. Assume the jobs are long, so the extra half period is insignificant. Then:
Turnaround time = N×T; Throughput = 2/N; Processor utilization = 100%.
When there are four jobs, the CPU is shared round-robin among the four, as is the I/O, so the jobs are interleaved: each job can execute for one period T, then must wait for T before doing another period. Again assume the jobs are long, so any initial wait is insignificant. Then:
Turnaround time = (2N-1)×T; Throughput = 2/N; Processor utilization = 100%.
b. I/O first and fourth quarters, processor second and third quarters.
The answers for this part are the same as for part a. This is easy to see for the cases of one job and two jobs. When there are four jobs, the CPU and the I/O are again shared round-robin, with each job's period interleaved as I/O, CPU, CPU, I/O, and the same steady-state figures result.
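The figures above can be reproduced with a small formula-based sketch. It ignores the startup transient, so the four-job turnaround comes out as 2N×T rather than (2N-1)×T; the function name and the closed-form shortcuts are assumptions of this sketch, not part of the original analysis.

```python
# Steady-state quantities for k identical jobs, each needing T/2 of CPU per
# period T over N periods, with I/O overlapping computation (transients ignored).
def quantities(k, N):
    """Return (turnaround in units of T, throughput per T, utilization)."""
    cpu_demand = k / 2                # total CPU demand per period, in units of T
    util = min(1.0, cpu_demand)       # CPU saturates once k reaches 2
    slowdown = max(1.0, cpu_demand)   # past saturation, jobs run proportionally slower
    turnaround = N * slowdown         # in units of T
    throughput = k / turnaround       # jobs completed per period T
    return turnaround, throughput, util
```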

List main elements of a computer 3

I/O modules

In virtually all systems that include DMA modules, DMA access to main memory is given higher priority than processor access to main memory. Why?

If a processor is held up in attempting to read or write memory, usually no damage occurs except a slight loss of time. However, a DMA transfer may be to or from a device that is receiving or sending data in a stream (e.g., disk or network), and cannot be stopped. Thus, if the DMA module is held up (denied continuing access to main memory), data will be lost.

Simultaneous concurrent processes or threads

Kernel routines need to be reentrant to allow several processors to execute the same kernel code simultaneously. With multiple processors executing the same or different parts of the kernel, kernel tables and management structures must be managed properly to avoid data corruption or invalid operations.

storage management responsibilities of a typical OS 5

Long-term storage

List main elements of a computer 2

Main memory

Long-term storage

Many application programs require means for storing information for extended periods of time, after the computer has been powered down.

Suppose the hypothetical processor of Figure 1.3 also has two I/O instructions: 0011 Load AC from I/O 0111 Store AC to I/O In these cases, the 12-bit address identifies a particular external device. Show the program execution (using format of Figure 1.4 ) for the following program: 1. Load AC from device 5. 2. Add contents of memory location 940. 3. Store AC to device 6. Assume that the next value retrieved from device 5 is 3 and that location 940 contains a value of 2.

Assume the memory contents (in hex) are as follows:
300: 3005
301: 5940
302: 7006
940: 0002
The execution then proceeds as follows:
Step 1: 3005 → IR
Step 2: 3 → AC (AC loaded from device 5)
Step 3: 5940 → IR
Step 4: 3 + 2 = 5 → AC
Step 5: 7006 → IR
Step 6: AC → Device 6 (the value 5 is written to device 6)
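The run above can be sketched as a compact interpreter for the hypothetical machine with the two extra I/O opcodes (0011 = load AC from device, 0111 = store AC to device). The device dictionaries are an illustrative stand-in for real I/O hardware; device 5 supplies the value 3, as stated in the problem.

```python
# Sketch of the three-instruction program above on the hypothetical machine.
memory = {0x300: 0x3005, 0x301: 0x5940, 0x302: 0x7006, 0x940: 0x0002}
devices_in = {5: 3}     # device number -> next value it delivers
devices_out = {}        # device number -> last value written to it
PC, AC = 0x300, 0

for _ in range(3):
    IR = memory[PC]; PC += 1
    opcode, addr = IR >> 12, IR & 0xFFF
    if opcode == 0x3:       # 0011: load AC from I/O device `addr`
        AC = devices_in[addr]
    elif opcode == 0x5:     # 0101: add memory word at `addr` to AC
        AC += memory[addr]
    elif opcode == 0x7:     # 0111: store AC to I/O device `addr`
        devices_out[addr] = AC
# Afterwards the AC holds 5 and device 6 has received 5.
```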

Memory management:

Memory management on a multiprocessor must deal with all of the issues found on uniprocessor computers and is discussed in Part Three. In addition, the OS needs to exploit the available hardware parallelism to achieve the best performance. The paging mechanisms on different processors must be coordinated to enforce consistency when several processors share a page or segment and to decide on page replacement. The reuse of physical pages is the biggest problem of concern; that is, it must be guaranteed that a physical page can no longer be accessed with its old contents before the page is put to a new use.

design issues for an SMP operating system. 4

Memory management:

I/O modules

Move data between the computer and its external environment.

A DMA module is transferring characters to main memory from an external device transmitting at 9600 bits per second (bps). The processor can fetch instructions at the rate of 1 million instructions per second. By how much will the processor be slowed down due to the DMA activity?

The device transmits 9600 bits per second, or 9600/8 = 1200 characters (bytes) per second, so the DMA module steals at most 1200 memory cycles per second from the processor. Relative to the processor's rate of 10^6 instruction fetches per second, the slowdown is 1200/1,000,000, or about 0.12%.
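The arithmetic for the DMA question above is short enough to check directly; the variable names are illustrative.

```python
# 9600 bps -> 1200 one-byte characters per second; each character transferred
# by DMA steals one memory cycle from a processor fetching 10**6 instructions
# per second (cycle stealing).
chars_per_sec = 9600 // 8          # 1200 stolen cycles per second
slowdown = chars_per_sec / 1_000_000
print(f"{slowdown:.4%}")           # about 0.12%
```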

Consider a computer system that contains an I/O module controlling a simple keyboard/printer Teletype. The following registers are contained in the CPU and connected directly to the system bus: INPR: Input Register, 8 bits OUTR: Output Register, 8 bits FGI: Input Flag, 1 bit FGO: Output Flag, 1 bit IEN: Interrupt Enable, 1 bit Keystroke input from the Teletype and output to the printer are controlled by the I/O module. The Teletype is able to encode an alphanumeric symbol to an 8-bit word and decode an 8-bit word into an alphanumeric symbol. The Input flag is set when an 8-bit word enters the input register from the Teletype. The Output flag is set when a word is printed. a. Describe how the CPU, using the first four registers listed in this problem, can achieve I/O with the Teletype. b. Describe how the function can be performed more efficiently by also employing IEN.

a. With programmed I/O: to accept input, the CPU repeatedly tests FGI; when FGI = 1, a character has arrived in INPR, so the CPU copies INPR into an internal register and clears FGI, allowing the Teletype to deliver the next character. To print, the CPU repeatedly tests FGO; when FGO = 1 the printer is free, so the CPU copies the character into OUTR and clears FGO, and the I/O module sets FGO again once the character has been printed. The drawback is that the CPU busy-waits on the flags. b. With IEN set, the I/O module raises an interrupt whenever FGI or FGO becomes 1. The CPU can then execute other work and transfers a character only when interrupted, eliminating the busy-wait loops and using the processor far more efficiently. Clearing IEN disables Teletype interrupts when they are not wanted.

Contrast the scheduling policies you might use when trying to optimize a time-sharing system with those you would use to optimize a multiprogrammed batch system.

A time-sharing system is optimized for response time to interactive users, so it favors preemptive policies such as round-robin with a short time quantum, giving every user frequent, small slices of the processor. A multiprogrammed batch system is optimized for throughput and processor utilization, so it can use policies with little or no preemption (e.g., first-come-first-served or shortest-job-first), accepting longer waits for individual jobs in exchange for lower context-switching overhead and better overall completion rates.

For what types of entities does the OS maintain tables of information for management purposes

Memory, I/O devices, files, and processes. The OS maintains memory tables, I/O tables, file tables, and process tables (the process control blocks) to keep track of these four classes of entities.

For the seven-state process model of Figure 3.9b , draw a queueing diagram similar to that of Figure 3.8b

In words: extend Figure 3.8b as follows. A Ready queue feeds the processor; a process leaving the processor returns to the Ready queue on timeout, joins one of the per-event queues when it must wait, or leaves the system on release (Exit). Add two queues held on secondary storage: a Ready/Suspend queue and a Blocked/Suspend queue. New processes enter either the Ready queue (admit) or the Ready/Suspend queue. Suspension moves processes from the Ready queue to the Ready/Suspend queue and from an event queue to the Blocked/Suspend queue; activation moves them from Ready/Suspend back to Ready. When an event occurs, waiting processes move from the corresponding event queue to the Ready queue, and from the Blocked/Suspend queue to the Ready/Suspend queue.

Generalize Equations ( 1.1 ) and (1.2) in Appendix 1A to n -level memory hierarchies

Let h_i be the hit ratio at level i, i.e., the probability that a reference is satisfied at level i given that it missed levels 1 through i-1, with h_n = 1 (the last level always hits), and let T_i be the access time of level i. Generalizing (1.1):
Ts = Σ (i = 1 to n) of (1 - h_1)(1 - h_2)...(1 - h_(i-1)) × h_i × (T_1 + T_2 + ... + T_i)
For n = 2 with h_1 = H, this reduces to Ts = H × T1 + (1 - H) × (T1 + T2). Generalizing (1.2), with C_i the cost per bit and S_i the size of level i:
Cs = (C1 S1 + C2 S2 + ... + Cn Sn) / (S1 + S2 + ... + Sn)
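The n-level formulas can be written as short functions; the function names and the convention that a level-i access pays the access times of every level above it are the assumptions spelled out in the generalization.

```python
# Generalized (1.1): h[i] is the hit ratio at level i conditional on missing
# the levels above it (the last entry should be 1.0), t[i] is that level's
# access time, and a level-i access pays t[0] + ... + t[i].
def avg_access_time(h, t):
    total, miss_prob, cum_t = 0.0, 1.0, 0.0
    for hi, ti in zip(h, t):
        cum_t += ti                     # cumulative time down to this level
        total += miss_prob * hi * cum_t # reach this level, then hit here
        miss_prob *= (1 - hi)           # probability of missing this level too
    return total

# Generalized (1.2): average cost per bit, weighted by level capacities.
def avg_cost_per_bit(c, s):
    return sum(ci * si for ci, si in zip(c, s)) / sum(s)
```

As a sanity check, two levels with H = 0.9, T1 = 20, T2 = 80 give 0.9×20 + 0.1×(20+80) = 28, matching Equation (1.1).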

Suppose a stack is to be used by the processor to manage procedure calls and returns. Can the program counter be eliminated by using the top of the stack as a program counter?

No. During ordinary sequential execution, the processor still needs a register that holds the address of the next instruction and is incremented after every fetch; the top of the stack holds return addresses only around procedure calls and returns, not the address of the instruction currently being fetched. Whatever register takes on that fetch-address role is, in effect, a program counter, so the PC cannot simply be replaced by the stack top.

A computer consists of a CPU and an I/O device D connected to main memory M via a shared bus with a data bus width of one word. The CPU can execute a maximum of 106 instructions per second. An average instruction requires five processor cycles, three of which use the memory bus. A memory read or write operation uses one processor cycle. Suppose that the CPU is continuously executing "background" programs that require 95% of its instruction execution rate but not any I/O instructions. Assume that one processor cycle equals one bus cycle. Now suppose that very large blocks of data are to be transferred between M and D. a. If programmed I/O is used and each one-word I/O transfer requires the CPU to execute two instructions, estimate the maximum I/O data transfer rate, in words per second, possible through D. b. Estimate the same rate if DMA transfer is used.

Note that the condition that each one-word I/O transfer requires the CPU to execute two instructions applies in the first case only; for DMA, no instruction execution is needed in the middle of the transfer, only bus cycles. a. The background programs leave 5% of the instruction rate free: 10^6 × 0.05 = 50,000 instructions per second. At two instructions per word, the maximum programmed-I/O rate is 25,000 words per second. b. With one processor cycle per bus cycle, the bus offers 10^6 × 5 = 5 × 10^6 cycles per second. The background programs consume 10^6 × 0.95 × 3 = 2.85 × 10^6 of these for their memory references, leaving 2.15 × 10^6 free bus cycles per second. At one word per bus cycle, DMA can transfer about 2.15 × 10^6 words per second.
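The two estimates can be reproduced with integer arithmetic from the question's figures; the variable names are illustrative.

```python
# Maximum I/O rates from the question above: 10**6 instructions/sec, 5 cycles
# per instruction (3 of them on the bus), 95% background load, one bus cycle
# (= one processor cycle) per word moved.
instr_rate = 1_000_000
spare_instr = instr_rate * 5 // 100        # 50,000 instructions/sec left free
programmed_io = spare_instr // 2           # two instructions per word moved

total_bus = instr_rate * 5                 # 5,000,000 bus cycles per second
used_bus = instr_rate * 95 // 100 * 3      # background: 3 bus cycles per instr
dma_io = total_bus - used_bus              # one word per free bus cycle
print(programmed_io, dma_io)               # 25000 2150000 words per second
```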

List three general categories of information in a process control block

1. Process identification data. These always include a unique identifier for the process (almost invariably an integer) and, in a multiuser multitasking system, data such as the identifier of the parent process, the user identifier, and the user group identifier. The process ID is particularly relevant, since it is often used to cross-reference the OS tables defined above, e.g., to identify which process is using which I/O devices or memory areas.
2. Processor state data. These are the pieces of information that define the status of a process when it is suspended, allowing the OS to restart it later and still execute correctly. They always include the contents of the CPU general-purpose registers, the CPU process status word, and the stack and frame pointers.
3. Process control information. This is used by the OS to manage the process itself. It includes: the process scheduling state (distinct from the task state discussed above), e.g., "ready" or "suspended", along with other scheduling information such as a priority value and the amount of time elapsed since the process gained control of the CPU or was suspended (for a suspended process, the identity of the awaited event must also be recorded); process structuring information, such as the IDs of the process's children or of other processes related to the current one in some functional way, which may be represented as a queue, a ring, or another data structure; interprocess communication information, such as the flags, signals, and messages associated with communication among independent processes; and process privileges, in terms of allowed and disallowed access to system resources.

storage management responsibilities of a typical OS 1

Process isolation

List main elements of a computer 1

Processor

distinct actions that a machine instruction can specify? 2

Processor-I/O

distinct actions that a machine instruction can specify? 1

Processor-memory

Support of modular programming

Programmers should be able to define program modules, and to create, destroy, and alter the size of modules dynamically.

Automatic allocation and management

Programs should be dynamically allocated across the memory hierarchy as required. Allocation should be transparent to the programmer. Thus, the programmer is relieved of concerns relating to memory limitations, and the OS can achieve efficiency by assigning memory to jobs only as needed.

storage management responsibilities of a typical OS 4

Protection and access control

System bus

Provides for communication among processors, main memory, and I/O modules.

The following state transition table is a simplified model of process management, with the labels representing transitions between states of READY, RUN, BLOCKED, and NONRESIDENT.

From \ To     READY   RUN   BLOCKED   NONRESIDENT
READY           -      1       -           5
RUN             2      -       3           -
BLOCKED         4      -       -           6

Give an example of an event that can cause each of the above transitions. Draw a diagram if that helps.

1 (READY to RUN): the scheduler dispatches the process, allocating it the CPU.
2 (RUN to READY): the process's time quantum expires, or a higher-priority process preempts it.
3 (RUN to BLOCKED): the process issues an I/O request or waits for an event.
4 (BLOCKED to READY): the awaited event completes (e.g., I/O completion).
5 (READY to NONRESIDENT): memory is overcommitted, so a ready process is temporarily swapped out of memory.
6 (BLOCKED to NONRESIDENT): memory is overcommitted, so a blocked process is temporarily swapped out of memory.
Transitions such as RUN to NONRESIDENT, READY to BLOCKED, and BLOCKED to RUN are not possible in this model; a NONRESIDENT process returns to READY or BLOCKED when memory becomes available or the reason for swapping no longer holds, but those transitions are not labeled in the table.

What does it mean to preempt a process?

Reclaiming a resource from a process before the process has finished using it.

design issues for an SMP operating system. 5

Reliability and fault tolerance:

For the processing model of Figure 3.6 , briefly define each state.

• Running: The process that is currently being executed. For this chapter, we will assume a computer with a single processor, so at most one process at a time can be in this state.
• Ready: A process that is prepared to execute when given the opportunity.
• Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion of an I/O operation.
• New: A process that has just been created but has not yet been admitted to the pool of executable processes by the OS. Typically, a new process has not yet been loaded into main memory, although its process control block has been created.
• Exit: A process that has been released from the pool of executable processes by the OS, either because it halted or because it aborted for some reason.

design issues for an SMP operating system. 2

Scheduling

Protection and access control

Sharing of memory, at any level of the memory hierarchy, creates the potential for one program to address the memory space of another. This is desirable when sharing is needed by particular applications. At other times, it threatens the integrity of programs and even of the OS itself. The OS must allow portions of memory to be accessible in various ways by various users

design issues for an SMP operating system. 1

Simultaneous concurrent processes or threads

Consider a 32-bit microprocessor, with a 16-bit external data bus, driven by an 8-MHz input clock. Assume that this microprocessor has a bus cycle whose minimum duration equals four input clock cycles. What is the maximum data transfer rate across the bus that this microprocessor can sustain in bytes/s? To increase its performance, would it be better to make its external data bus 32 bits or to double the external clock frequency supplied to the microprocessor? State any other assumptions you make and explain. Hint: Determine the number of bytes that can be transferred per bus cycle.

Since the minimum bus cycle lasts 4 clock cycles and the bus clock is 8 MHz, the maximum bus cycle rate = 8 M / 4 = 2 M cycles/s. Data transferred per bus cycle = 16 bits = 2 bytes, so the maximum data transfer rate = 2 M × 2 = 4 Mbytes/s. Either improvement doubles the rate to 8 Mbytes/s, but each has costs. Doubling the external clock frequency may require adopting a new chip-manufacturing technology (assuming each instruction still takes the same number of clock cycles), and the memory chips must also become twice as fast so they do not slow down the microprocessor. Widening the external data bus to 32 bits requires wider (possibly new) on-chip data bus drivers/latches and modifications to the bus control logic, and the word length of the memory must double so that 32-bit quantities can be sent and received.
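The arithmetic above can be checked with a short script (a sketch; the 8-MHz clock, 4-cycle bus cycle, and 16-bit bus width come from the problem statement):

```python
# Maximum sustained data transfer rate across the microprocessor bus.
clock_hz = 8_000_000           # 8-MHz input clock
cycles_per_bus_cycle = 4       # minimum bus cycle = 4 input clock cycles
bus_width_bytes = 2            # 16-bit external data bus

bus_cycles_per_sec = clock_hz / cycles_per_bus_cycle   # 2,000,000 cycles/s
rate = bus_cycles_per_sec * bus_width_bytes            # bytes per second
print(rate)                    # 4,000,000 bytes/s = 4 MB/s

# Either improvement doubles the rate to 8 MB/s:
print(bus_cycles_per_sec * 4)        # widen the data bus to 32 bits
print(2 * bus_cycles_per_sec * 2)    # double the external clock frequency
```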

In general, what are the strategies for exploiting spatial locality and temporal locality?

Spatial locality is generally exploited by using larger cache blocks and by incorporating prefetching mechanisms (fetching items whose use is expected) into the cache control logic. Temporal locality is exploited by keeping recently used instruction and data values in cache memory and by exploiting a cache hierarchy.

Consider the following code:

for (i = 0; i < 20; i++)
    for (j = 0; j < 10; j++)
        a[i] = a[i] * j;

a. Give one example of the spatial locality in the code. b. Give one example of the temporal locality in the code.

Spatial locality occurs when the array a is accessed sequentially. Temporal locality occurs when i is used repeatedly with the same value in the second loop.

What is the distinction between spatial locality and temporal locality?

Spatial locality refers to the tendency of execution to involve a number of memory locations that are clustered. Temporal locality refers to the tendency for a processor to access memory locations that have been used recently.

Main memory

Stores data and programs.

storage management responsibilities of a typical OS 3

Support of modular programming

design issues for an SMP operating system. 3

Synchronization:

List main elements of a computer 4

System bus

Assume that at time 5 no system resources are being used except for the processor and memory. Now consider the following events: At time 5: P1 executes a command to read from disk unit 3. At time 15: P5's time slice expires. At time 18: P7 executes a command to write to disk unit 3. At time 20: P3 executes a command to read from disk unit 2. At time 24: P5 executes a command to write to disk unit 3. At time 28: P5 is swapped out. At time 33: An interrupt occurs from disk unit 2: P3's read is complete. At time 36: An interrupt occurs from disk unit 3: P1's read is complete. At time 38: P8 terminates. At time 40: An interrupt occurs from disk unit 3: P5's write is complete. At time 44: P5 is swapped back in. At time 48: An interrupt occurs from disk unit 3: P7's write is complete. For each time 22, 37, and 47, identify which state each process is in. If a process is blocked, further identify the event on which is it blocked.

T = 22:
P1: blocked, waiting on the read from disk unit 3
P3: blocked, waiting on the read from disk unit 2
P5, P8: ready/running
P7: blocked, waiting on the write to disk unit 3
T = 37:
P1, P3, P8: ready/running
P5: blocked/suspend (swapped out, waiting on the write to disk unit 3)
P7: blocked, waiting on the write to disk unit 3
T = 47:
P1, P3, P5: ready/running
P7: blocked, waiting on the write to disk unit 3
P8: exit

Process isolation

The OS must prevent independent processes from interfering with each other's memory, both data and instructions.

Reliability and fault tolerance:

The OS should provide graceful degradation in the face of processor failure. The scheduler and other portions of the OS must recognize the loss of a processor and restructure management tables accordingly.

Data processing

The processor may perform some arithmetic or logic operation on data.

In IBM's mainframe OS, OS/390, one of the major modules in the kernel is the System Resource Manager. This module is responsible for the allocation of resources among address spaces (processes). The SRM gives OS/390 a degree of sophistication unique among operating systems. No other mainframe OS, and certainly no other type of OS, can match the functions performed by SRM. The concept of resource includes processor, real memory, and I/O channels. SRM accumulates statistics pertaining to utilization of processor, channel, and various key data structures. Its purpose is to provide optimum performance based on performance monitoring and analysis. The installation sets forth various performance objectives, and these serve as guidance to the SRM, which dynamically modifies installation and job performance characteristics based on system utilization. In turn, the SRM provides reports that enable the trained operator to refine the configuration and parameter settings to improve user service. This problem concerns one example of SRM activity. Real memory is divided into equal-sized blocks called frames, of which there may be many thousands. Each frame can hold a block of virtual memory referred to as a page. SRM receives control approximately 20 times per second and inspects each and every page frame. If the page has not been referenced or changed, a counter is incremented by 1. Over time, SRM averages these numbers to determine the average number of seconds that a page frame in the system goes untouched. What might be the purpose of this and what action might SRM take?

The system operator can review this quantity to determine the degree of "stress" on the system. By reducing the number of active jobs allowed on the system, this average can be kept high. A typical guideline is that this average should be kept above 2 minutes [IBM86]. This may seem like a lot, but it isn't.

Why is round-robin scheduling well suited to a time-sharing system, while letting each job run to completion (first-come-first-served) is better suited to a batch system?

With a time-sharing system, the primary concern is turnaround time. A round-robin scheduler would give every process a chance to run on the CPU for a short time, and reduce the average turnaround time. If the scheduler instead let one job run until completion, then the first job would have a short turnaround time, but later ones would have to wait for a long time. In a batch system, the primary concern is throughput. In this case, the time spent switching between jobs is wasted, so a more efficient scheduling algorithm would be first-come, first-served, letting each job run on the processor as long as it wants.
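The turnaround argument can be made concrete with a toy simulation (a sketch; the burst times are invented, and all jobs are assumed to arrive at time 0, so turnaround time equals finish time):

```python
from collections import deque

def fcfs(bursts):
    """Run each job to completion in submission order; return finish times."""
    t, finish = 0, []
    for b in bursts:
        t += b
        finish.append(t)
    return finish

def round_robin(bursts, quantum=1):
    """Give each job at most `quantum` time units per turn; return finish times."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = t        # job done: record its finish time
        else:
            queue.append(i)      # not done: back to the end of the queue
    return finish

# A long job submitted ahead of a short one:
print(fcfs([10, 1]))         # [10, 11] -> average turnaround 10.5
print(round_robin([10, 1]))  # [11, 2]  -> average turnaround 6.5
```

Round robin lets the short job finish almost immediately, cutting the average turnaround, while FCFS avoids all switching overhead.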

Synchronization:

With multiple active processes having potential access to shared address spaces or shared I/O resources, care must be taken to provide effective synchronization. Synchronization is a facility that enforces mutual exclusion and event ordering.

Define the main categories of processor registers 1

a memory address register (MAR)

Consider a memory system with the following parameters: Tc = 100 ns, Cc = 0.01 cents/bit, Tm = 1,200 ns, Cm = 0.001 cents/bit. a. What is the cost of 1 MByte of main memory? b. What is the cost of 1 MByte of main memory using cache memory technology? c. If the effective access time is 10% greater than the cache access time, what is the hit ratio H?

a) Cost of 1 MB of main memory: 1 × 2^20 bytes × 8 bits/byte × 0.001 cents/bit ≈ 8,389 cents ≈ $83.89.
b) Cost of 1 MB of main memory using cache technology: 1 × 2^20 × 8 × 0.01 cents/bit ≈ 83,886 cents ≈ $838.86.
c) From equation 1.1, Ts = H×T1 + (1 − H)(T1 + T2) = T1 + (1 − H)T2, with T1 = 100 ns and T2 = 1,200 ns. Requiring an effective access time of 1.1 × T1 = 110 ns:
110 = 100 + (1 − H) × 1,200
1 − H = 10/1,200, so H = 1,190/1,200 ≈ 0.992.
To design a 1 MB main memory/cache system meeting this target, the size ratio S1/S2 must be about 0.1 (for strong locality, from the performance curve): 0.1 MB of cache and 0.9 MB of main memory. Cost = $83.89 + $75.50 = $159.38.

A main memory system consists of a number of memory modules attached to the system bus. When a write request is made, the bus is occupied for 100 ns by the data, address, and control signals. During the same 100 ns, and for 500 ns thereafter, the memory module executes one cycle accepting and storing the data. The operation of the memory modules may overlap, but only one request can be on the bus at any time. Assume that there are eight such modules connected to the bus. What is the maximum possible rate (in bytes per second) at which data can be stored?

Each module is busy for 600 ns per request (100 ns on the bus plus 500 ns internally), while the bus can accept a new request every 100 ns. Since 600/100 = 6 modules are enough to keep the bus saturated and eight are available, the bus is the bottleneck: one write completes every 100 ns. Assuming one byte is stored per request, the maximum rate is 1 byte / 100 ns = 10,000,000 bytes/s.
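The hit-ratio arithmetic can be checked numerically (a sketch using the problem's T1 = 100 ns and T2 = 1,200 ns):

```python
T1, T2 = 100, 1200             # cache and main-memory access times (ns)

def effective_access_time(h):
    # Ts = H*T1 + (1-H)*(T1+T2) = T1 + (1-H)*T2
    return T1 + (1 - h) * T2

# Hit ratio needed for Ts no more than 10% above the cache access time:
target = 1.1 * T1              # 110 ns
h = 1 - (target - T1) / T2
print(h)                       # ~0.9917 (= 1190/1200)
print(effective_access_time(h))
```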

Consider a hypothetical 32-bit microprocessor having 32-bit instructions composed of two fields. The first byte contains the opcode and the remainder an immediate operand or an operand address. a. What is the maximum directly addressable memory capacity (in bytes)? b. Discuss the impact on the system speed if the microprocessor bus has 1. a 32-bit local address bus and a 16-bit local data bus, or 2. a 16-bit local address bus and a 16-bit local data bus. c. How many bits are needed for the program counter and the instruction register?

a. 2^(32−8) = 2^24 = 16,777,216 bytes = 16 MB (the first byte, 8 bits, holds the opcode, leaving 24 bits of address).
b.1. With a 32-bit local address bus and a 16-bit local data bus, instruction and data transfers take three bus cycles each: one for the address and two for the data. The whole address can be transferred to memory at once and decoded there; but because the data bus is only 16 bits, fetching a 32-bit instruction or operand requires two bus cycles (accesses to memory).
b.2. With a 16-bit local address bus and a 16-bit local data bus, instruction and data transfers take four bus cycles each: two for the address and two for the data. The processor must perform two transmissions to send the whole 24-bit address to memory, which requires more complex memory-interface control to latch the two halves of the address before performing the access. In addition to this two-step address issue, since the data bus is also 16 bits, the microprocessor again needs two bus cycles to fetch the 32-bit instruction or operand.
c. The program counter needs 24 bits (to hold a 24-bit address); the instruction register needs 32 bits (to hold a full 32-bit instruction).
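The capacity and bus-cycle counts can be tabulated in a few lines (a sketch; the 8-bit opcode and 32-bit instruction word come from the problem statement):

```python
import math

instruction_bits = 32
opcode_bits = 8
address_bits = instruction_bits - opcode_bits   # 24-bit addresses

capacity = 2 ** address_bits
print(capacity)        # 16,777,216 bytes = 16 MB directly addressable

def bus_cycles(address_bus_bits, data_bus_bits):
    """Cycles to send a 24-bit address plus fetch a 32-bit word."""
    addr = math.ceil(address_bits / address_bus_bits)     # address transfer(s)
    data = math.ceil(instruction_bits / data_bus_bits)    # data transfer(s)
    return addr + data

print(bus_cycles(32, 16))   # 3 cycles: 1 address + 2 data
print(bus_cycles(16, 16))   # 4 cycles: 2 address + 2 data
```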

A multiprocessor with eight processors has 20 attached tape drives. There is a large number of jobs submitted to the system that each require a maximum of four tape drives to complete execution. Assume that each job starts running with only three tape drives for a long period before requiring the fourth tape drive for a short period toward the end of its operation. Also assume an endless supply of such jobs. a. Assume the scheduler in the OS will not start a job unless there are four tape drives available. When a job is started, four drives are assigned immediately and are not released until the job finishes. What is the maximum number of jobs that can be in progress at once? What are the maximum and minimum number of tape drives that may be left idle as a result of this policy? b. Suggest an alternative policy to improve tape drive utilization and at the same time avoid system deadlock. What is the maximum number of jobs that can be in progress at once? What are the bounds on the number of idling tape drives?

a. If this conservative policy is used, at most 20/4 = 5 jobs can be active simultaneously. Because one of the four drives allocated to each job is idle most of the time, at most 5 drives may be idle at a time; in the best case, none of the drives are idle.
b. To improve drive utilization, allocate each job three tape drives initially and the fourth on demand. Under this policy, at most floor(20/3) = 6 jobs can be active simultaneously, leaving 20 − 18 = 2 drives to satisfy fourth-drive requests, so deadlock is avoided. The minimum number of idle drives is 0 and the maximum is 2.
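The bounds for both policies reduce to simple integer arithmetic (a sketch of the numbers above):

```python
drives, need_max, need_start = 20, 4, 3

# a. Conservative policy: reserve all four drives up front.
jobs_a = drives // need_max           # concurrent jobs
idle_max_a = jobs_a                   # each job's 4th drive may sit idle
idle_min_a = 0                        # when every job is using all four
print(jobs_a, idle_min_a, idle_max_a)   # 5 0 5

# b. Allocate three drives up front, the fourth on demand.
jobs_b = drives // need_start         # concurrent jobs
spare = drives - jobs_b * need_start  # drives held back for 4th-drive requests
print(jobs_b, 0, spare)                  # 6 0 2
```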

An I/O-bound program is one that, if run alone, would spend more time waiting for I/O than using the processor. A processor-bound program is the opposite. Suppose a short-term scheduling algorithm favors those programs that have used little processor time in the recent past. Explain why this algorithm favors I/O-bound programs and yet does not permanently deny processor time to processor-bound programs.

a. Because I/O-bound processes use little processor time, the algorithm will favor them. b. A processor-bound process is still not permanently denied the processor: while it is waiting, it accumulates no recent processor time, so its standing under the algorithm steadily improves until it is eventually scheduled.
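One common way to "favor processes that have used little processor time recently" is a decaying usage average, in the spirit of classic UNIX schedulers. A minimal sketch (the decay factor, burst sizes, and process names are invented for illustration):

```python
# Pick the process with the smallest recent-CPU count; decay every count
# each interval so a heavy user's history fades and it is never starved.
recent = {"io-bound": 0.0, "cpu-bound": 0.0}

def pick(recent):
    return min(recent, key=recent.get)

schedule_log = []
for _ in range(20):
    p = pick(recent)
    schedule_log.append(p)
    burst = 1 if p == "io-bound" else 8   # I/O-bound uses little CPU per run
    recent[p] += burst
    for q in recent:                      # exponential decay of usage history
        recent[q] *= 0.5

print(schedule_log.count("io-bound"), schedule_log.count("cpu-bound"))
```

The I/O-bound process wins most scheduling decisions, yet the CPU-bound process keeps getting turns because its usage count decays while it waits.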

Consider a hypothetical microprocessor generating a 16-bit address (e.g., assume that the program counter and the address registers are 16 bits wide) and having a 16-bit data bus. a. What is the maximum memory address space that the processor can access directly if it is connected to a "16-bit memory"? b. What is the maximum memory address space that the processor can access directly if it is connected to an "8-bit memory"? c. What architectural features will allow this microprocessor to access a separate "I/O space"? d. If an input and an output instruction can specify an 8-bit I/O port number, how many 8-bit I/O ports can the microprocessor support? How many 16-bit I/O ports? Explain

a. The maximum memory address space = 2^16 = 64 Kbytes.
b. The maximum memory address space = 2^16 = 64 Kbytes. In both (a) and (b) the microprocessor can address 64 Kbytes; the difference is that an access to the 8-bit memory transfers 8 bits at a time, whereas an access to the 16-bit memory can transfer 8 or 16 bits.
c. Separate I/O instructions are needed; their execution generates distinct I/O signals, different from the memory signals generated by memory-reference instructions. At least one additional output pin is needed to carry this distinction.
d. With an 8-bit I/O port number the microprocessor can support 2^8 = 256 8-bit input ports and 256 8-bit output ports, and likewise 256 16-bit input ports and 256 16-bit output ports. The width of a port does not change the number of ports, which depends only on the number of bits used to represent the port number (8 bits in both cases).
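The two counts are both powers of two (a quick numeric check of parts a, b, and d):

```python
address_bits = 16
memory_space = 2 ** address_bits
print(memory_space)    # 65,536 bytes = 64 KB, regardless of memory width

port_number_bits = 8
ports = 2 ** port_number_bits
print(ports)           # 256 input and 256 output ports, whether 8- or 16-bit wide
```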

objectives of an OS design? 3

ability to evolve

What characteristics distinguish the various elements of a memory hierarchy? 3

access time

What characteristics distinguish the various elements of a memory hierarchy? 2

capacity

What is the kernel of an OS?

the portion of the OS that contains its most heavily used functions; it usually resides in main memory and runs in privileged (kernel) mode

objectives of an OS design? 1

convenience

What characteristics distinguish the various elements of a memory hierarchy? 1

cost

How are multiple interrupts dealt with? 2

define priorities for interrupts and to allow an interrupt of higher priority to cause a lower-priority interrupt handler to be interrupted

How are multiple interrupts dealt with? 1

disable interrupts while an interrupt is being processed

objectives of an OS design? 2

efficiency

What is cache memory?

intended to provide memory access time approaching that of the fastest memories available

What is the difference between an interrupt and a trap?

interrupt: A suspension of a process, such as the execution of a computer program, caused by an event external to that process and performed in such a way that the process can be resumed.
trap: An unprogrammed conditional jump to a specified address that is automatically activated by hardware; the location from which the jump was made is recorded.

What is multithreading?

is a technique in which a process, executing an application, is divided into threads that can run concurrently.
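A minimal illustration in Python (a sketch; the worker function, thread count, and shared list are invented for the example):

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    # Each thread executes this function concurrently within the same
    # process, sharing the process's memory (here, the `results` list).
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))   # [0, 1, 4, 9]
```

All four threads run inside one process and one address space, which is exactly what distinguishes them from separate processes.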

Why are two modes (user and kernel) needed?

Kernel mode gives the OS direct access to the hardware and to all of memory; user mode withholds these privileges, so user programs cannot interfere with the OS or with other processes. The two modes thus protect the system while still letting the OS do its privileged work.

Define the main categories of processor registers 2

memory buffer register (MBR)

What is the difference between a mode switch and a process switch?

mode switch: A hardware operation that causes the processor to execute in a different mode (kernel or user). When the mode switches from user to kernel, the program counter, processor status word, and other registers are saved; when it switches from kernel back to user, this information is restored.
process switch: An operation that switches the processor from one process to another, by saving the process control block, registers, and other information for the first process and replacing them with the process information for the second.

Explain the difference between a monolithic kernel and a microkernel.

monolithic kernel: A large kernel containing virtually the complete operating system, including scheduling, file system, device drivers, and memory management. All the functional components of the kernel have access to all of its internal data structures and routines. Typically, a monolithic kernel is implemented as a single process, with all elements sharing the same address space.
microkernel: A small privileged operating system core that provides process scheduling, memory management, and communication services, and relies on other processes to perform some of the functions traditionally associated with the operating system kernel.

What is the difference between a multiprocessor and a multicore system?

multiprocessor: A computer system with two or more processors; the processors may be on separate chips (sockets), each chip possibly containing multiple cores with multiple levels of cache.
multicore computer: Also known as a chip multiprocessor; combines two or more processors (called cores) on a single piece of silicon (called a die).

Explain the distinction between a real address and a virtual address.

real address: A physical address in main memory.
virtual address: The address of a storage location in virtual memory.

Describe the round-robin scheduling technique

round robin: A scheduling algorithm in which processes are activated in a fixed cyclic order; that is, all processes sit in a circular queue, and each in turn receives the processor for at most one time slice (quantum). A process that cannot proceed because it is waiting for some event (e.g., termination of a child process or an input/output operation), or whose quantum expires, returns control to the scheduler.

MAR

specifies the address in memory for the next read or write

What common events lead to the creation of a process?

A new batch job is submitted; an interactive user logs on; the OS creates a process on behalf of a user to provide a service (e.g., printing); an existing process spawns a child process.

disabled interrupt

the processor ignores any new interrupt request signal

What is an instruction trace?

The sequence of instructions that execute for a process; the behavior of an individual process can be characterized by listing this sequence.

MBR

which contains the data to be written into memory or which receives the data read from memory

