COP 4610 Final Study Guide


Consider a paged virtual memory system. (a) What is page replacement? (b) What does the acronym LRU stand for? (c) Using four frames, and the page reference string: 12342165234176231236, where each digit is a page number, describe the LRU algorithm. (d) Using the same data, describe the FIFO algorithm. (e) Using the same data, describe the optimal replacement algorithm. (f) What is Belady's anomaly?

(a) What is page replacement? Whenever a referenced page is not present in memory, a page fault occurs and the operating system replaces one of the resident pages with the newly needed page. (b) What does the acronym LRU stand for? LRU = least recently used. c), d), e) https://imgur.com/a/e1dPvlk f) Belady's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. It is commonly experienced when using the first-in first-out (FIFO) page-replacement algorithm.
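The LRU and FIFO behavior in parts (c) and (d) can be sketched with a small simulation. This is an illustrative counter, not any particular OS implementation; it runs the reference string from the question with four frames:

```python
def count_faults(refs, nframes, policy):
    """Count page faults for 'lru' or 'fifo' replacement."""
    frames = []                      # LRU: least recent first; FIFO: oldest first
    faults = 0
    for page in refs:
        if page in frames:
            if policy == "lru":      # a hit refreshes recency under LRU only
                frames.remove(page)
                frames.append(page)
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict least-recently-used / oldest page
            frames.append(page)
    return faults

refs = [int(d) for d in "12342165234176231236"]
print(count_faults(refs, 4, "lru"))   # 14 faults with four frames
print(count_faults(refs, 4, "fifo"))  # 15 faults with four frames
```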

Why is it important to balance file-system I/O among the disks and controllers on a system in a multitasking environment?

A system can perform only at the speed of its slowest bottleneck. Disks or disk controllers are frequently the bottleneck in modern systems as their individual performance cannot keep up with that of the CPU and system bus. By balancing I/O among disks and controllers, neither an individual disk nor a controller is overwhelmed, so that bottleneck is avoided.

What is one advantage and one disadvantage of log-structured file-systems?

A disadvantage is that data can be lost if it has been written but not checkpointed. This can be mitigated by decreasing the time between checkpoints or allowing applications to ask to wait until the next checkpoint before proceeding. An advantage however is that most reads are absorbed by the cache; writes always append to the log, so they are sequential and very fast. Blocks are also located on disk in exactly (or almost exactly) the order in which they were last written. Even if reads miss the cache, they will have good locality if the order in which files are read mimics the order in which they are written.

Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

A page fault occurs when an access to a page that has not been brought into main memory takes place. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located, and I/O is requested to read the needed page into the free frame. Upon completion of I/O, the process table and page table are updated, and the instruction is restarted.

Describe the advantages and disadvantages of increasing the level of multiprogramming

Advantages include high CPU utilization: the CPU is busy most of the time and rarely idle. Multiple tasks can make progress concurrently, short jobs complete sooner than they would behind long jobs, multiple users can be supported, and response time and total turnaround time are lower. Disadvantages include more complicated scheduling, which makes the system harder to program, and tracking all the tasks and processes can be difficult. Under a high load of tasks, long jobs may have to wait a long time.

Similarly, some systems support many types of structures for a file's data, while others simply support a stream of bytes. What are the advantages and disadvantages of each approach?

An advantage of having the system support different file structures is that the support comes from the system; individual applications are not required to provide the support. In addition, if the system provides the support for different file structures, it can implement the support presumably more efficiently than an application. The disadvantage of having the system provide support for defined file types is that it increases the size of the system. In addition, applications that may require different file types other than what is provided by the system may not be able to run on such systems. An alternative strategy is for the operating system to define no support for file structures and instead treat all files as a series of bytes. This is the approach taken by UNIX systems. The advantage of this approach is that it simplifies the operating system support for file systems, as the system no longer has to provide the structure for different file types. Furthermore, it allows applications to define file structures, thereby alleviating the situation where a system may not provide a file definition required for a specific application.

Explain external and internal fragmentation. How can they be avoided?

As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous: storage is fragmented into a large number of small holes. Internal fragmentation is unused memory that is internal to a partition, caused by rounding the actual requested allocation up to the allocation granularity. External fragmentation can be avoided by paging, which makes compaction (the usual remedy for external fragmentation) unnecessary. Internal fragmentation can be reduced by dynamic partitioning, assigning the smallest partition that is still large enough for the process.
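As a small numeric sketch of internal fragmentation: an allocation rounded up to a granularity wastes the tail of the last unit (the 4096-byte granularity below is just an assumed example):

```python
def internal_fragmentation(request_bytes, granularity):
    # allocations are rounded up to a whole number of granularity units;
    # the unused tail of the last unit is internal fragmentation
    allocated = -(-request_bytes // granularity) * granularity
    return allocated - request_bytes

print(internal_fragmentation(5000, 4096))  # 8192 bytes allocated, 3192 wasted
```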

Compare the performance characteristics of bitwise and blockwise striping techniques for multidisk systems.

Bit-level (or byte-level) striping means that each file is split into parts one bit (or byte) in size. The first unit is written to the first drive, the second to the second drive, and so on, until the stripe wraps around and the process starts over at the first drive. Block-level striping splits each file into parts one block in size (512 bytes by default, though it can be configured otherwise); the size of this block is commonly referred to as the stripe size. If a file is smaller than the stripe size, it simply gets stored on a single disk. Bit-level striping is expensive, requires many drives, and is substandard in transactional environments. Block-level striping improves random-access performance.
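Which disk holds a given file byte under block-level striping follows from simple round-robin arithmetic. A minimal sketch, with assumed stripe size and disk count:

```python
def locate(byte_offset, stripe_size, num_disks):
    stripe = byte_offset // stripe_size       # which stripe-sized chunk of the file
    disk = stripe % num_disks                 # stripes are dealt out round-robin
    offset_in_stripe = byte_offset % stripe_size
    return disk, stripe // num_disks, offset_in_stripe

# byte 1024 with 512-byte stripes on 4 disks falls in stripe 2, on disk 2
print(locate(1024, 512, 4))
```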

How do caches help improve performance? Why do systems not use more or larger caches if they are so useful?

Caches allow components of differing speeds to communicate more efficiently by storing data from the slower device, temporarily, in a faster device (the cache). Caches are, almost by definition, more expensive than the device they are caching for, so increasing the number or size of caches would increase system cost.

Discuss file block allocation --- Linked, Indexed, etc. Calculate which block contains which file-byte.

Contiguous allocation: each file occupies a contiguous set of blocks on the disk. If a file requires n blocks and is given block b as its starting location, the blocks assigned to the file are b, b+1, b+2, ..., b+n-1. For example, a file 'mail' starting at block 19 with a length of 6 blocks occupies blocks 19 through 24. Linked-list allocation: each file is a linked list of disk blocks, which need not be contiguous; the directory entry contains pointers to the first and last file blocks. A file 'jeep', for instance, could have its blocks scattered across the disk, with the last block holding -1 as a null pointer that does not point to any other block. Indexed allocation: a special block known as the index block contains the pointers to all the blocks occupied by a file, and each file has its own index block. The ith entry in the index block contains the disk address of the ith file block, and the directory entry contains the address of the index block.
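For the "calculate which block contains which file-byte" part, contiguous allocation makes the arithmetic direct (the 512-byte block size is an assumed example):

```python
def contiguous_block(start_block, byte_offset, block_size=512):
    # byte N of the file sits block_size bytes at a time past the start block
    return start_block + byte_offset // block_size

# file 'mail' starts at block 19; byte 1000 falls in its second block
print(contiguous_block(19, 1000))  # -> 20
```

For linked allocation the same byte requires walking offset // block_size pointers from the first block, and for indexed allocation one lookup of entry offset // block_size in the index block.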

Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file?

Contiguous—if file is usually accessed sequentially, if file is relatively small. Linked—if file is large and usually accessed sequentially. Indexed—if file is large and usually accessed randomly.

Explain the copy-on-write method. If an OS uses paging and copy-on-write, explain what happens during the fork() system call. Explain what is shared between the parent and child process, and what is not.

Copy-on-write (CoW or COW), also called implicit sharing or shadowing, is a resource-management technique used to efficiently implement a "duplicate" or "copy" operation on modifiable resources. If a resource is duplicated but not modified, it is not necessary to create a new resource; the resource can be shared between the copy and the original. fork() creates a new process (child) which is a copy of the calling process (parent). With COW, the parent's data, stack, and heap are shared by the parent and child and have their protection changed by the kernel to read-only. If either process tries to modify these regions, the kernel makes a copy of only that piece of memory, typically a page.
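The effect is observable directly on a Unix-like system, where os.fork is available. This sketch assumes a POSIX environment; the write in the child lands in a private copy of the page, so the parent's view is untouched:

```python
import os

data = [1, 2, 3]              # resides in pages shared copy-on-write after fork()

pid = os.fork()
if pid == 0:                  # child: the write below triggers a private page copy
    data[0] = 99
    os._exit(0)

os.waitpid(pid, 0)
print(data[0])                # parent still sees 1: the child's copy was private
```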

Some systems automatically delete all user files when a user logs off or a job terminates, unless the user explicitly requests that they be kept. Other systems keep all files unless the user explicitly deletes them. Discuss the relative merits of each approach.

Deleting all files not specifically saved by the user has the advantage of minimizing the file space needed for each user by not saving unwanted or unnecessary files. Saving all files unless specifically deleted is more secure for the user in that it is not possible to lose files inadvertently by forgetting to save them.

Why is it advantageous to the user for an operating system to dynamically allocate its internal tables? What are the penalties to the operating system for doing so?

Dynamic tables allow more flexibility as system use grows: tables are never exceeded, avoiding artificial capacity limits. Unfortunately, kernel structures and code are more complicated, so there is more potential for bugs. The use of one resource can also take away more system resources (by growing to accommodate the requests) than with static tables.

An operating system supports a paged virtual memory. The central processor has a cycle time of 1 microsecond. It costs an additional 1 microsecond to access a page other than the current one. Pages have 1,000 words, and the paging device is a drum that rotates at 3,000 revolutions per minute and transfers 1 million words per second. The following statistical measurements were obtained from the system: • One percent of all instructions executed accessed a page other than the current page. • Of the instructions that accessed another page, 80 percent accessed a page already in memory. • When a new page was required, the replaced page was modified 50 percent of the time. Calculate the effective instruction time on this system, assuming that the system is running one process only and that the processor is idle during drum transfers.

Effective instruction time = 0.99 × (1 μs) + 0.008 × (2 μs) + 0.002 × (10,000 μs + 1,000 μs) + 0.001 × (10,000 μs + 1,000 μs) = (0.99 + 0.016 + 22.0 + 11.0) μs ≈ 34.0 μs. (At 3,000 rpm the drum's average rotational latency is half a revolution, 10,000 μs, and transferring a 1,000-word page at 1 million words per second takes 1,000 μs; every fault that must go to the drum pays one read, and the half that replace a modified page pay an additional write.)
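The arithmetic can be checked mechanically; all times below are in microseconds:

```python
cycle = 1.0                        # basic instruction time, us
cross_page = 1.0                   # extra cost to access another page, us
latency = 0.5 * 60e6 / 3000        # half a drum revolution at 3,000 rpm = 10,000 us
transfer = 1000 / 1e6 * 1e6        # 1,000 words at 1M words/s = 1,000 us

t = (0.99  * cycle                     # same-page instructions
   + 0.008 * (cycle + cross_page)      # other page, already in memory
   + 0.002 * (latency + transfer)      # page fault: read in the new page
   + 0.001 * (latency + transfer))     # half the faults also write the dirty page
print(t)                               # ~34.0 us
```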

Discuss Swap-space management: swap-space use and location.

Swap space helps the operating system pretend that it has more RAM than it actually has: it is space on the hard disk that serves as a substitute for physical memory. Applications that are unused or rarely used can be kept in the swap file. Having sufficient swap space helps the system keep some physical memory free at all times, and the physical memory freed through swapping can be used by the OS for other important tasks.

In a virtual memory system, processes can use memory with no size limit. Is this TRUE or FALSE. Why?

False: virtual memory is bounded by the size of the virtual address space, which is set by the architecture and the operating system (for example, a 32-bit address space limits a process to 4 GB, while 64-bit versions of Windows allow terabytes of virtual memory per process). Virtual memory is also physically limited by the available disk space for backing store.

Researchers have suggested that, instead of having an access list associated with each file (specifying which users can access the file, and how), we should have a user control list associated with each user (specifying which files a user can access, and how). Discuss the relative merits of these two schemes.

File control list. Since the access control information is concentrated in one single place, it is easier to change access control information, and this requires less space. User control list. This requires less overhead when opening a file.

Discuss the hardware support required to support demand paging.

For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident or not and whether the program has read or write privileges for accessing the page. These checks have to be performed in hardware. A TLB could serve as a cache and improve the performance of the lookup operation.

Compare local page replacement versus global page replacement.

Global page-replacement algorithms replace pages without regard to the process to which they belong; a page may be selected from anywhere in memory, which can sometimes cause more page faults. Local page-replacement algorithms replace pages only from the set that belongs to the faulting process. The advantage of local page replacement is its scalability: each process can handle its page faults independently, without contending for a shared global data structure.

Could you simulate a multilevel directory structure with a single-level directory structure in which arbitrarily long names can be used? If your answer is yes, explain how you can do so, and contrast this scheme with the multilevel directory scheme. If your answer is no, explain what prevents your simulation's success. How would your answer change if file names were limited to seven characters?

If arbitrarily long names can be used, then it is possible to simulate a multilevel directory structure. This can be done, for example, by using the character "." to indicate the end of a subdirectory. Thus, for example, the name jim.java.F1 specifies that F1 is a file in subdirectory java which in turn is in the root directory jim. If file names were limited to seven characters, then the above scheme could not be utilized and thus, in general, the answer is no. The next best approach in this situation would be to use a specific file as a symbol table (directory) to map arbitrarily long names (such as jim.java.F1) into shorter arbitrary names (such as XX00743), which are then used for actual file access
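The symbol-table workaround for seven-character names can be sketched as follows; the XX-prefixed short names mirror the example above, and the exact format is arbitrary:

```python
symtab = {}

def short_name(long_name):
    # map an arbitrarily long path-like name to a unique 7-character name
    if long_name not in symtab:
        symtab[long_name] = f"XX{len(symtab):05d}"
    return symtab[long_name]

print(short_name("jim.java.F1"))   # a 7-character name usable for file access
```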

What are the tradeoffs involved in rereading code pages from the file system versus using swap space to store them?

If code pages are stored in swap space, they can be transferred more quickly to main memory (because swap space allocation is tuned for faster performance than general file system allocation). Using swap space can require startup time if the pages are copied there at process invocation rather than just being paged out to swap space on demand. Also, more swap space must be allocated if it is used for both code and data pages.

Consider a file currently consisting of 100 blocks. Assume that the file-control block (and the index block, in the case of indexed allocation) is already in memory. Calculate how many disk I/O operations are required for contiguous, linked, and indexed (single-level) allocation strategies, if, for one block, the following conditions hold. In the contiguous-allocation case, assume that there is no room to grow at the beginning but there is room to grow at the end. Also assume that the block information to be added is stored in memory. a. The block is added at the beginning. b. The block is added in the middle. c. The block is added at the end. d. The block is removed from the beginning. e. The block is removed from the middle. f. The block is removed from the end.

Image: https://imgur.com/a/B8ym28U

Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, and seven frames? Remember that all frames are initially empty, so your first unique pages will cost one fault each. • LRU replacement • FIFO replacement • Optimal replacement

Image: https://imgur.com/a/r1szKOn
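The optimal (OPT) algorithm for this string can also be simulated: on a fault it evicts the resident page whose next use lies farthest in the future. A sketch that runs all seven frame counts; the assertions in comments note only properties that hold by construction (the string has no consecutive repeats and seven distinct pages):

```python
def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the resident page whose next use is farthest away
            def next_use(p):
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6]
print([opt_faults(refs, n) for n in range(1, 8)])
# with 1 frame every reference faults (20); with 7 frames only the
# 7 distinct pages fault, and the counts never increase with more frames
```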

Segmentation is similar to paging but uses variable-sized "pages." Define two segment-replacement algorithms, one based on the FIFO page-replacement scheme and the other on the LRU page-replacement scheme. Remember that since segments are not the same size, the segment that is chosen for replacement may be too small to leave enough consecutive locations for the needed segment. Consider strategies for systems where segments cannot be relocated and strategies for systems where they can.

In FIFO, we find the first segment large enough to accommodate the incoming segment. If relocation is not possible and no one segment is large enough, select a combination of segments whose memories are contiguous, which are "closest to the first of the list" and which can accommodate the new segment. If relocation is possible, rearrange the memory so that the first N segments large enough for the incoming segment are contiguous in memory. Add any leftover space to the free-space list in both cases. In LRU, we select the segment that has not been used for the longest period of time and that is large enough, adding any leftover space to the free space list. If no one segment is large enough, select a combination of the "oldest" segments that are contiguous in memory (if relocation is not available) and that are large enough. If relocation is available, rearrange the oldest N segments to be contiguous in memory and replace those with the new segment.

Is disk scheduling, other than FCFS scheduling, useful in a single-user environment? Explain your answer.

In a single-user environment, the I/O queue usually is empty. Requests generally arrive from a single process for one block or for a sequence of consecutive blocks. In these cases, FCFS is an economical method of disk scheduling. But LOOK is nearly as easy to program and will give much better performance when multiple processes are performing concurrent I/O, such as when a Web browser retrieves data in the background while the operating system is paging and another application is active in the foreground.

Why must the bit map for file allocation be kept on mass storage, rather than in main memory?

In case of a system crash (memory failure), the free-space list would not be lost, as it would be if the bit map had been stored in main memory.

Compare LOOK and C-LOOK for disk scheduling. Which one would you prefer? Why?

LOOK works like an elevator: the head scans toward one end, servicing requests along the way, and when no requests remain in that direction it reverses and services the requests it passed. A request that arrives behind the head is not serviced until the head comes back on its next sweep. C-LOOK is the circular variant (an enhancement of C-SCAN, which is circular scanning): the head services requests in one direction only. Where C-SCAN travels all the way to the physical end of the disk before jumping back, C-LOOK goes only as far as the last pending request in that direction, then jumps back to the furthest request at the other end rather than to the end of the disk. (SCAN scans the whole surface; LOOK looks for pending requests before moving, so it does not traverse every track.) LOOK tends to give slightly lower total seek time at light and moderate loads, whereas C-LOOK provides more uniform waiting times and reduces starvation, which matters most at high loads.
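Total head movement under the two policies can be compared with a small simulation. The request queue and start cylinder below are the classic textbook example, and the C-LOOK jump back is counted as head movement here (an assumption; some treatments ignore it):

```python
def look(head, requests):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    movement, pos = 0, head
    for r in up + down:            # sweep up, then reverse and sweep down
        movement += abs(r - pos)
        pos = r
    return movement

def clook(head, requests):
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    movement, pos = 0, head
    for r in up + wrapped:         # sweep up, jump to lowest request, sweep up again
        movement += abs(r - pos)
        pos = r
    return movement

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(look(53, reqs), clook(53, reqs))  # 299 vs 322 cylinders of movement
```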

Explain LRU algorithm. Depict it on a system with 4 physical memory pages when the processes access the virtual memory pages in the following order: 10, 3, 100, 10, 1, 2, 4, 3, 4, 3, 100, 10, 100, 3.

LRU page replacement replaces the page that has not been used for the longest time. For example, if page 2 has just been used and page 4 has not been used for a while, page 4 will be replaced. Unlike FIFO, where on a system with 4 physical memory pages a page could be evicted after four more insertions regardless of use, a page that is used often enough can stay resident indefinitely under LRU. With the given reference string and four physical pages, LRU incurs nine page faults (on the references to 10, 3, 100, 1, 2, 4, 3, 100, and 10) and hits on the remaining five references.

Why is rotational latency usually not considered in disk scheduling? How would you modify SSTF, SCAN, and C-SCAN to include latency optimization?

Most disks do not export their rotational position information to the host. Even if they did, the time for this information to reach the scheduler would be subject to imprecision and the time consumed by the scheduler is variable, so the rotational position information would become incorrect. Further, the disk requests are usually given in terms of logical block numbers, and the mapping between logical blocks and physical locations is very complex.

What is mount point? What is a partition?

A mount point is the location within the file structure where a file system is to be attached. A partition is a region of the hard disk created by dividing the disk. Data can be stored and retrieved on these partitions, and to access a partition the partition table must be read.

Explain the mounting operation. Why must a file system be mounted before being accessed?

Mounting is the act of associating a storage device to a particular location in the directory tree and it applies to anything that is made accessible as files, not just actual storage devices. Mounting a file system attaches that file system to a directory (mount point) and makes it available to the system so if it is not mounted then it won't be accessible.

You have devised a new page-replacement algorithm that you think may be optimal. In some contorted test cases, Belady's anomaly occurs. Is the new algorithm optimal? Explain your answer.

No. An optimal algorithm will not suffer from Belady's anomaly, because, by definition, an optimal algorithm replaces the page that will not be used for the longest time, and its number of page faults can never increase when more frames are added. Since the new algorithm exhibits Belady's anomaly (more frames producing more faults on some reference strings), it cannot be optimal.

What is device polling and interrupt servicing?

Polling is when the CPU repeatedly checks the status of an I/O device; interrupt servicing is the opposite: the device notifies the CPU about a change in its status.

Explain the shortcomings of RAID level 4. How does RAID level 5 overcome these?

RAID level 4, or block-interleaved parity organization, uses block-level striping, as in RAID 0, and in addition keeps a parity block on a separate disk for corresponding blocks from N other disks. RAID level 5, or block-interleaved distributed parity, differs from level 4 in that it spreads data and parity among all N+1 disks, rather than storing data on N disks and parity on one; for each block, one of the disks stores the parity and the others store data. A RAID 4 array will not operate as quickly as RAID 5, because all parity traffic goes through a single disk drive rather than being distributed across the disks in the array. A RAID 5 array is faster because there is no single parity disk to create a write bottleneck: a RAID 4 array can only write as fast as its parity disk.
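The parity mechanism both levels rely on is a bytewise XOR across the data blocks, and a lost block is rebuilt by XOR-ing the parity with the survivors. A sketch with made-up block contents:

```python
def xor_blocks(blocks):
    # bytewise XOR of equal-sized blocks
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"\x0f\x10", b"\xa5\x01", b"\x33\xff"]
parity = xor_blocks(data)

# suppose the disk holding data[1] fails: XOR of the parity block with
# the surviving data blocks reconstructs the lost block
recovered = xor_blocks([parity, data[0], data[2]])
print(recovered == data[1])  # True
```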

Consider the page table shown in Figure 9.30 for a system with 12-bit virtual and physical addresses and with 256-byte pages. The list of free page frames is D, E, F (that is, D is at the head of the list, E is second, and F is last). Convert the following virtual addresses to their equivalent physical addresses in hexadecimal. All numbers are given in hexadecimal. (A dash for a page frame indicates that the page is not in memory.) Image: https://imgur.com/a/YFYHCmt a. 9EF b. 111 c. 700 d. 0FF

The size of the virtual/physical address space is 2^12 = 4096 bytes and the page size is 256 = 2^8 bytes, so 12 - 8 = 4 bits identify the page. Each hex digit is 4 bits, so the first hex digit is the page number and the last two digits form the 8-bit offset, which is unchanged by translation. Pages 7 and 0 are not in memory, so they receive the free page frames in order: D first, then E, as the question states. a. 9EF → 0EF; b. 111 → 211; c. 700 → D00; d. 0FF → EFF.
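The translations can be checked mechanically. The page-to-frame mapping below is taken from the answer above (page 9 maps to frame 0, page 1 to frame 2, and pages 7 and 0 receive the free frames D and E):

```python
page_to_frame = {0x9: 0x0, 0x1: 0x2, 0x7: 0xD, 0x0: 0xE}

def translate(vaddr, page_bits=8):
    page = vaddr >> page_bits                  # top 4 bits of a 12-bit address
    offset = vaddr & ((1 << page_bits) - 1)    # low 8 bits pass through unchanged
    return (page_to_frame[page] << page_bits) | offset

for v in (0x9EF, 0x111, 0x700, 0x0FF):
    print(f"{v:03X} -> {translate(v):03X}")
```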

Why do some systems keep track of the type of a file, while others leave it to the user and others simply do not implement multiple file types? Which system is "better"?

Some systems allow different file operations based on the type of the file (for instance, an ascii file can be read as a stream while a database file can be read via an index to a block). Other systems leave such interpretation of a file's data to the process and provide no help in accessing the data. The method that is "better" depends on the needs of the processes on the system, and the demands the users place on the operating system. If a system runs mostly database applications, it may be more efficient for the operating system to implement a database type file and provide operations, rather than making each program implement the same thing (possibly in different ways). For general-purpose systems it may be better to only implement basic file types to keep the operating system size smaller and allow maximum freedom to the processes on the system.

Explain why SSTF scheduling tends to favor middle cylinders over the innermost and outermost cylinders.

The center of the disk is the location having the smallest average distance to all other tracks. Thus, the disk head tends to move away from the edges of the disk. Here is another way to think of it. The current location of the head divides the cylinders into two groups. If the head is not in the center of the disk and a new request arrives, the new request is more likely to be in the group that includes the center of the disk; thus, the head is more likely to move in that direction.

Explain the purpose of the open() and close() operations.

The open() operation informs the system that the named file is about to become active. The close() operation informs the system that the named file is no longer in active use by the user who issued the close operation.

We have an operating system for a machine that uses base and limit registers, but we have modified the machine to provide a page table. Can the page tables be set up to simulate base and limit registers? How can they be, or why can they not be?

The page table can be set up to simulate base and limit registers provided that the memory is allocated in fixed-size segments. In this way, the base of a segment can be entered into the page table and the valid/invalid bit used to indicate that portion of the segment as resident in the memory. There will be some problem with internal fragmentation.

What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?

There would be multiple paths to the same file, which could confuse users or encourage mistakes (deleting a file with one path deletes the file in all the other paths).

One problem with contiguous allocation is that the user must preallocate enough space for each file. If the file grows to be larger than the space allocated for it, special actions must be taken. One solution to this problem is to define a file structure consisting of an initial contiguous area (of a specified size). If this area is filled, the operating system automatically defines an overflow area that is linked to the initial contiguous area. If the overflow area is filled, another overflow area is allocated. Compare this implementation of a file with the standard contiguous and linked implementations.

This method requires more overhead than the standard contiguous allocation. It requires less overhead than the standard linked allocation.

Define and explain thrashing:

Thrashing is when a process is spending more time paging than executing. As the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached. If the degree of multiprogramming is increased even further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming. This is mostly caused by the fact that new processes try to get started by taking frames from running processes, causing more page faults and a longer queue for the paging device. Eventually this comes to the point where the page fault rate increases tremendously, effective memory-access time increases, and no work is getting done, because the processes are spending all their time paging.

In a virtual memory system, describe how a virtual address reference is translated into a physical address.

To translate a virtual address to a physical address we need to know the physical memory size, the page size, and the number of bits in a virtual address. So, if a virtual address is 16 bits long, there are 2^16 addresses in the virtual address space. If the page size is 4 KB, the offset uses log2(4096) = 12 bits. Hence 16 - 12 = 4, so there are 2^4 = 16 virtual pages. Since the page size in the virtual address space is always the same as the frame size in main memory, the 12-bit offset remains the same in the physical address as in the virtual address. Given a virtual address, we use the first 4 bits as the page number and look it up in the page table to find the frame number. A virtual address of 0xACA1 has A as the page number (10); if the corresponding frame is 5 (0101), the resulting physical address is 0x5CA1.
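The 0xACA1 example works out as follows; the single page-table entry mapping page 0xA to frame 5 is the assumption stated in the text:

```python
PAGE_BITS = 12                       # 4 KB pages -> 12 offset bits
page_table = {0xA: 0x5}              # assumed entry: page 0xA resides in frame 5

def v2p(vaddr):
    page = vaddr >> PAGE_BITS        # top 4 bits of a 16-bit address
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    return (page_table[page] << PAGE_BITS) | offset

print(hex(v2p(0xACA1)))  # -> 0x5ca1
```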

SSTF favors accesses to middle cylinders over the innermost and outermost cylinders. Is this statement true or false and why?

True, because the center of the disk is the location having the smallest average distance to all other tracks. Thus, the disk head tends to move away from the edges of the disk. Here is another way to think of it. The current location of the head divides the cylinders into two groups. If the head is not in the center of the disk and a new request arrives, the new request is more likely to be in the group that includes the center of the disk; thus, the head is more likely to move in that direction.

Is there any way to implement truly stable storage? Explain your answer.

Truly stable storage would never lose data. The fundamental technique for stable storage is to maintain multiple copies of the data, so that if one copy is destroyed, some other copy is still available for use. But for any scheme, we can imagine a large enough disaster that all copies are destroyed.

Calculate the average turnaround time for various disk scheduling algorithms given IO times, the seek curve, and request sequence.

Turnaround time is the total time to service all requests under the given scheduling algorithm. If we know how long it takes to cross one cylinder and how long it takes to service a request, we can figure out in total how long it took to service all requests.

Explain how the VFS layer allows an operating system to support multiple types of file systems easily.

VFS introduces a layer of indirection in the file system implementation. In many ways, it is similar to object-oriented programming techniques. System calls can be made generically (independent of file system type). Each file system type provides its function calls and data structures to the VFS layer. A system call is translated into the proper specific functions for the target file system at the VFS layer. The calling program has no file-system-specific code, and the upper levels of the system call structures likewise are file system-independent. The translation at the VFS layer turns these generic calls into file-system-specific operations.
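The indirection can be sketched with a dispatch table keyed by mount point. The file-system classes and paths here are made-up stand-ins, not a real VFS API:

```python
class FatFS:                      # hypothetical concrete file systems, each
    def read(self, path):         # supplying the same set of operations
        return f"fat:{path}"

class ExtFS:
    def read(self, path):
        return f"ext:{path}"

mounts = {"/dos": FatFS(), "/": ExtFS()}   # VFS mount table

def vfs_read(path):
    # generic "system call": pick the longest matching mount point,
    # then dispatch to that file system's own read() implementation
    mp = max((m for m in mounts if path.startswith(m)), key=len)
    return mounts[mp].read(path)

print(vfs_read("/dos/a.txt"))  # handled by FatFS
print(vfs_read("/etc/fstab"))  # handled by ExtFS
```

The caller of vfs_read has no file-system-specific code, which is the point of the VFS layer.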

Could a RAID level 1 organization achieve better performance for read requests than a RAID level 0 organization (with nonredundant striping of data)? If so, how?

Yes, a RAID Level 1 organization could achieve better performance for read requests. When a read operation is performed, a RAID Level 1 system can decide which of the two copies of the block should be accessed to satisfy the request. This choice could be based on the current location of the disk head and could therefore result in performance optimizations by choosing a disk head that is closer to the target data.
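
A minimal sketch of that scheduling decision, assuming a simple model where seek cost is proportional to cylinder distance:

```python
def pick_mirror(head_positions, target_cyl):
    """Return the index of the mirror whose head is closest to the
    target cylinder, i.e. the copy with the shorter seek."""
    return min(range(len(head_positions)),
               key=lambda i: abs(head_positions[i] - target_cyl))
```

With heads at cylinders 10 and 90, a read of cylinder 80 goes to the second mirror, while a read of cylinder 20 goes to the first.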

Suppose that you want to use a paging algorithm that requires a reference bit (such as second-chance replacement or working-set model), but the hardware does not provide one. Sketch how you could simulate a reference bit even if one were not provided by the hardware or explain why it is not possible to do so. If it is possible, calculate what the cost would be.

You can use the valid/invalid bit supported in hardware to simulate the reference bit. Initially set the bit to invalid. On the first reference, a trap to the operating system is generated. The operating system sets a software reference bit to 1 and resets the valid/invalid bit to valid. The cost is one extra trap (and its kernel processing) on the first reference to each page; once the software bit is set, subsequent references proceed at full hardware speed.
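
The mechanism can be sketched as follows (a simulation of the idea, not real MMU code):

```python
class Page:
    def __init__(self):
        self.valid = False       # hardware valid/invalid bit
        self.referenced = False  # software reference bit kept by the OS

def access(page):
    """Touch a page; return the number of traps this access caused."""
    traps = 0
    if not page.valid:           # hardware raises a fault/trap
        traps += 1
        page.referenced = True   # OS records the reference in software
        page.valid = True        # further accesses run at full speed
    return traps

p = Page()
cost = access(p) + access(p) + access(p)  # only the first access traps
```

Three accesses incur exactly one trap, which is the simulated-reference-bit overhead per page.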

It is sometimes said that tape is a sequential-access medium, whereas a magnetic disk is a random-access medium. In fact, the suitability of a storage device for random access depends on the transfer size. The term "streaming transfer rate" denotes the rate for a data transfer that is underway, excluding the effect of access latency. In contrast, the "effective transfer rate" is the ratio of total bytes per total seconds, including overhead time such as access latency. Suppose we have a computer with the following characteristics: the level-2 cache has an access latency of 8 nanoseconds and a streaming transfer rate of 800 megabytes per second; the main memory has an access latency of 60 nanoseconds and a streaming transfer rate of 80 megabytes per second; the magnetic disk has an access latency of 15 milliseconds and a streaming transfer rate of 5 megabytes per second; and a tape drive has an access latency of 60 seconds and a streaming transfer rate of 2 megabytes per second.
a. Random access causes the effective transfer rate of a device to decrease, because no data are transferred during the access time. For the disk described, what is the effective transfer rate if an average access is followed by a streaming transfer of (1) 512 bytes, (2) 8 kilobytes, (3) 1 megabyte, and (4) 16 megabytes?
b. The utilization of a device is the ratio of effective transfer rate to streaming transfer rate. Calculate the utilization of the disk drive for each of the four transfer sizes given in part a.
c. Suppose that a utilization of 25 percent (or higher) is considered acceptable. Using the performance figures given, compute the smallest transfer size for disk that gives acceptable utilization.
d. Complete the following sentence: A disk is a random-access device for transfers larger than ___ bytes and is a sequential-access device for smaller transfers.
e. Compute the minimum transfer sizes that give acceptable utilization for cache, memory, and tape.
f. When is a tape a random-access device, and when is it a sequential-access device?

a. ETR = transfer size / transfer time, where transfer time = (X/STR) + latency for a transfer of size X. For 512 bytes: transfer time = 15 ms + (512 B / 5 MB per second) = 15.0977 ms, so ETR = 512 B / 15.0977 ms ≈ 33.12 KB/sec. ETR for 8 KB ≈ 0.47 MB/sec; for 1 MB ≈ 4.65 MB/sec; for 16 MB ≈ 4.98 MB/sec.
b. Utilization = ETR/STR. For 512 B: 33.12 KB/sec ÷ 5 MB/sec ≈ 0.0065 = 0.65%. For 8 KB ≈ 9.4%; for 1 MB ≈ 93%; for 16 MB ≈ 99.6%.
c. Set 0.25 = ETR/STR and solve for the transfer size X. STR = 5 MB/s, so ETR = 1.25 MB/s, and 1.25 × ((X/5) + 0.015) = X, giving 0.25X + 0.01875 = X, hence X = 0.025 MB (25 KB).
d. A disk is a random-access device for transfers larger than K bytes (where K is the threshold from part c, here 0.025 MB) and is a sequential-access device for smaller transfers.
e. Cache: STR = 800 MB/s, ETR = 200 MB/s, latency = 8 × 10^-9 s. 200(X/800 + 8 × 10^-9) = X, so 0.25X + 1600 × 10^-9 = X and X ≈ 2.24 bytes. Memory: STR = 80 MB/s, ETR = 20 MB/s, L = 60 × 10^-9 s. 20(X/80 + 60 × 10^-9) = X, so 0.25X + 1200 × 10^-9 = X and X ≈ 1.68 bytes. Tape: STR = 2 MB/s, ETR = 0.5 MB/s, L = 60 s. 0.5(X/2 + 60) = X, so 0.25X + 30 = X and X = 40 MB.
f. It depends upon how the tape is being used. Assume we are using the tape to restore a backup. In this instance, the tape acts as a sequential-access device, since we sequentially read its contents. As another example, assume we are using the tape to access a variety of records stored on it. In this instance, access to the tape is arbitrary and hence considered random.
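
The arithmetic in parts a–c generalizes to any device; a small sketch (the 1 MB = 2^20 bytes convention and function names are assumptions chosen to match the worked answers above):

```python
def effective_transfer_rate(size, str_rate, latency):
    """Bytes/second actually achieved: total bytes / (access latency + streaming time)."""
    return size / (latency + size / str_rate)

def utilization(size, str_rate, latency):
    """Ratio of effective transfer rate to streaming transfer rate."""
    return effective_transfer_rate(size, str_rate, latency) / str_rate

def min_transfer_size(str_rate, latency, target=0.25):
    """Smallest transfer size achieving the target utilization.
    Solving target = X / (latency*STR + X) gives X = target*latency*STR/(1-target)."""
    return target * latency * str_rate / (1 - target)

MB = 2**20
disk_str, disk_lat = 5 * MB, 0.015
for size in (512, 8 * 1024, 1 * MB, 16 * MB):
    print(size, effective_transfer_rate(size, disk_str, disk_lat),
          utilization(size, disk_str, disk_lat))
print("min disk transfer (MB):", min_transfer_size(disk_str, disk_lat) / MB)
```

Plugging in the tape's figures (STR = 2 MB/s, latency = 60 s) reproduces the 40 MB threshold from part e.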

In some systems, a subdirectory can be read and written by an authorized user, just as ordinary files can be. a. Describe the protection problems that could arise. b. Suggest a scheme for dealing with each of these protection problems.

a. One piece of information kept in a directory entry is file location. If a user could modify this location, then he could access other files defeating the access-protection scheme. b. Do not allow the user to directly write onto the subdirectory. Rather, provide system operations to do so.

Consider a system that supports 5,000 users. Suppose that you want to allow 4,990 of these users to be able to access one file. a. How would you specify this protection scheme in UNIX? b. Can you suggest another protection scheme that can be used more effectively for this purpose than the scheme provided by UNIX?

a. There are two methods for achieving this: i. Create an access-control list naming all 4,990 users. ii. Put these 4,990 users in one group and set the group access accordingly. The second scheme cannot always be implemented, since the number of user groups may be restricted by the system. b. Reverse the default: universal access to the file applies to all users unless a user's name appears in the access-control list with different permissions. With this scheme, you simply put the names of the remaining ten users in the access-control list with no access privileges allowed.

Consider a reference string 1,2,3,4,2,5,7,2,3,2,1,7,8 a) How many page faults would there be using FIFO replacement and 4 page frames? b) How many faults with LRU and 4 page frames? c) How many faults using an optimal algorithm and 4 page frames?

a. 10 b. 9 c. 7 (optimal replacement evicts the resident page whose next use lies farthest in the future: 4 faults to load 1,2,3,4, then faults on 5, 7, and 8 only)
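
Fault counts like these are easy to mis-tally by hand; small simulators for the three policies (a sketch, with the reference string from this question built in):

```python
def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)              # evict the oldest arrival
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0                 # mem ordered LRU -> MRU
    for p in refs:
        if p in mem:
            mem.remove(p)               # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)              # evict least recently used
        mem.append(p)
    return faults

def opt_faults(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # evict the page used farthest in the future (or never)
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            mem.append(p)
    return faults

refs = [1, 2, 3, 4, 2, 5, 7, 2, 3, 2, 1, 7, 8]
```

The same functions also work on the 20-digit string from the LRU/FIFO/optimal question earlier in this guide.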

Consider a computer system with the following characteristics: Hard disk with 16 usable surfaces (8 double-sided platters), 110 tracks/surface, 64 sectors/track, 1024 bytes/sector. The disk rotates at 360 rpm., and the disk controller has an onboard 1 KB buffer. The buffer cannot be written while it is being read and cannot be read while it's being written. The CPU is clocked at 50 MHz, 5 clock cycles per instruction on average, with bus access needed for 3 cycles/instructions on average (ignore the effect of CPU and buffer cache) a) What is the capacity of the disk in KB (1 KB = 1024 bytes)? b) Ignore all memory transfer delays, and compute the maximum attainable memory/disk transfer throughput (bytes transferred per second) for the case of interrupt-driven I/O. It takes 100 instructions (setup) plus 2 cycles per byte (memory/buffer transfer) to process the disk interrupt handler.

a. First find the capacity of one cylinder: 16 surfaces (8 platters × 2 sides) × 64 sectors/track × 1024 bytes/sector = 2^4 × 2^6 × 2^10 = 2^20 bytes = R, and R/1024 = 1024 KB. Capacity of the disk = R × 110 tracks/surface = 112640 KB. b.
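
The part-a arithmetic can be checked quickly:

```python
# Capacity check: 16 usable surfaces x 110 tracks/surface x
# 64 sectors/track x 1024 bytes/sector.
surfaces, tracks_per_surface, sectors_per_track, bytes_per_sector = 16, 110, 64, 1024

cylinder_bytes = surfaces * sectors_per_track * bytes_per_sector  # 2**20 B = 1024 KB
capacity_bytes = cylinder_bytes * tracks_per_surface
capacity_kb = capacity_bytes // 1024
print(capacity_kb)
```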

A virtual memory paging scheme has a four page frame memory and an eight page virtual address space with 512 (01000 octal) byte pages. The following page table relates virtual pages to physical page frames for a process: a. What are the actual physical addresses of the following virtual addresses (in octal): b. Explain briefly what happens if there is no physical address for a requested virtual address. https://imgur.com/a/Yy86Fx1

a. 1. 01000 octal = 512 decimal = virtual page 1, offset 0; page 1 maps to frame 1, so the physical address is 1×512 + 0 = 512 (01000 octal). 2. 00030 octal = 24 decimal = virtual page 0, offset 24; page 0 maps to frame 3, so 3×512 + 24 = 1560 (03030 octal). 3. 06100 octal = 3136 decimal = virtual page 6, offset 64; page 6 maps to frame 0, so 0×512 + 64 = 64 (00100 octal). 4. 05200 octal = 2688 decimal = virtual page 5, offset 128; page 5 maps to frame 2, so 2×512 + 128 = 1152 (02200 octal). b. When there is no physical frame mapped for a requested virtual address, the hardware raises a page-fault exception. Logically the page may still be accessible to the process, but a mapping must be added to the process's page table, and the faulting page must be fetched from disk.
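
A translation sketch for this scheme. The page table below (virtual page → physical frame) is inferred from the frame numbers in the answers, since the original figure is an external image; treat the mapping as an assumption.

```python
PAGE_SIZE = 512
page_table = {0: 3, 1: 1, 5: 2, 6: 0}   # assumed mapping from the answers

def translate(vaddr):
    """Split vaddr into page number and offset, then map via the table.
    A missing entry models the page-fault case in part b."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault on virtual page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(oct(translate(0o1000)))  # virtual 01000 -> frame 1
```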

Consider the two-dimensional array A: int A[][] = new int[100][100]; A[0][0] is at location 200 in a paged memory system with pages of size 200. A small process that manipulates the matrix resides in page 0 (locations 0 to 199). Thus, every instruction fetch will be from page 0. For three-page frames, how many page faults are generated by the following array-initialization loops? Use LRU replacement and assume that page frame 1 contains the process and the other two are initially empty. Image: https://imgur.com/a/9Mgao1m

a. 5,000. The inner loop changes rows, and each page holds two rows of the array (page size 200, 100 ints per row), so every other assignment touches a new page: 100 × 100 / 2 = 5,000 faults. b. 50. The inner loop walks along a row, so a new page is needed only after every two rows: 100 / 2 = 50 faults.

Consider the following page-replacement algorithms. Rank these algorithms on a five-point scale from "bad" to "perfect" according to their page-fault rate. Separate those algorithms that suffer from Belady's anomaly from those that do not. a. LRU replacement b. FIFO replacement c. Optimal replacement d. Second-chance replacement

a. Rank 2 (LRU), and no. b. Rank 4 (FIFO), and yes. c. Rank 1 (optimal), and no. d. Rank 3 (second-chance), and yes. Second-chance approximates LRU, so it generally has a lower fault rate than plain FIFO, but both can exhibit Belady's anomaly.

Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of the CPU and the paging disk. Three alternative results are shown below. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping? a. CPU utilization 13 percent; disk utilization 97 percent b. CPU utilization 87 percent; disk utilization 3 percent c. CPU utilization 13 percent; disk utilization 3 percent

a. Thrashing is occurring: the paging disk is saturated while the CPU starves. The degree of multiprogramming should be decreased, not increased; the paging is not helping. b. CPU utilization is sufficiently high to leave the system alone; alternatively, the degree of multiprogramming can be increased, since the paging disk is nearly idle. c. Both CPU and disk are underutilized, so increase the degree of multiprogramming.

Assume that you have a page-reference string for a process with m frames (initially all empty). The page-reference string has length p, and n distinct page numbers occur in it. Answer these questions for any page-replacement algorithms: a. What is a lower bound on the number of page faults? b. What is an upper bound on the number of page faults?

a. n (each of the n distinct pages must fault at least once when first referenced) b. p (at worst, every one of the p references faults)

Describe the in-memory and on-disk data structures required for file-management.

https://imgur.com/a/1XXNmQG https://imgur.com/a/GkcZEcz

Draw the life-cycle of an IO request.

https://imgur.com/a/K77SWzZ
1. Request I/O.
2. Can the request already be satisfied? If yes, go to step 8.
3. Send the request to the device driver.
4. Monitor the device.
5. I/O done; interrupt raised.
6. Store data in the device-driver buffer.
7. Determine which I/O completed.
8. Transfer data to the process.
9. I/O done; input or output is available.

Explain the steps in handling a page fault. Depict it on a figure with a page table (with valid bits), an OS module for handling page faults (both swap-out and swap-in), a swap file on the disk, and a physical memory.

https://imgur.com/wYgWsLe A page fault occurs (3.c.), meaning the requested page must be retrieved from secondary storage (i.e., the disk) where it is currently stored. The page supervisor therefore accesses the disk, restores into main memory the page corresponding to the virtual address that caused the fault (4.), updates the page table and the TLB with a new mapping between the virtual address and the physical address where the page was placed (3.a.), and finally tells the MMU to retry the request so that a TLB hit takes place (1 & 2.a.).

Describe the various RAID levels, two sentences each. Discuss the advantages and disadvantages of RAID level 5 v/s RAID level 1+0

· RAID 0 involves striping. Data are split into blocks that are written across all the drives in the array. +Good performance, and all storage capacity is used. -Not fault tolerant.
· RAID 1 involves mirroring. Data are stored twice by writing them to both the data drive and a mirror drive. +Good performance, and if a drive fails data does not have to be rebuilt, just copied. -Effective storage capacity is only half of total drive capacity.
· RAID 5 involves striping with parity. Data blocks are striped across the drives, and on one drive per stripe a parity checksum of the block data is written. Using the parity data, the array can recalculate the data of one of the other blocks should it no longer be available. +Reads are fast, and if a drive fails you still have access to all data. -Drive failures reduce throughput, and the technology is complex.
· RAID 6 involves striping with double parity. Parity data are written to two drives, so it requires at least 4 drives and can withstand 2 drives failing simultaneously. +If two drives fail, you still have access to all data. -Write transactions are slower, drive failures reduce throughput, and the technology is complex.
· RAID 1+0 involves mirroring and striping. It provides security by mirroring all data on secondary drives while using striping across each set of drives to speed up data transfers. +If one disk fails, rebuild time is very fast. -Half of the storage capacity goes to mirroring (expensive redundancy).
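
The capacity trade-offs above can be summarized in a small helper (a sketch: n identical drives of size c each; level names are strings, and minimum drive counts are not enforced):

```python
def usable_capacity(level, n, c):
    """Usable capacity of n drives of size c under the given RAID level."""
    if level == "0":
        return n * c            # striping: all capacity usable
    if level == "1":
        return c                # mirroring: one drive's worth survives
    if level == "5":
        return (n - 1) * c      # one drive's worth of parity
    if level == "6":
        return (n - 2) * c      # two drives' worth of parity
    if level == "1+0":
        return n * c // 2       # half the capacity lost to mirroring
    raise ValueError(level)
```

For four 1 TB drives, RAID 5 yields 3 TB usable while RAID 1+0 yields 2 TB, which is the capacity side of the RAID 5 vs. RAID 1+0 comparison the question asks about.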

Discuss Blocking I/O (a), Nonblocking I/O(b), Asynchronous I/O system calls.

→ A blocking system call is one that suspends the calling process (puts it on a wait queue) until the event on which the call blocked occurs, after which the blocked process is woken up and is ready for execution.
→ A nonblocking system call does not put the calling thread into a wait/suspended state; it returns immediately with whatever is available, possibly nothing.
→ Asynchronous I/O is a form of input/output processing that allows other processing to continue before the transmission has finished; the call returns immediately and completion is signaled later (akin to a nonblocking system call).
→ In synchronous I/O, a thread starts an I/O operation and immediately enters a wait state until the I/O request has completed (i.e., a blocking system call).
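
The blocking/nonblocking distinction can be demonstrated on a POSIX pipe using the standard library's `os.set_blocking` (a sketch; assumes a Unix-like system). A nonblocking read of an empty pipe fails immediately with `BlockingIOError` instead of suspending the caller.

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)        # switch the read end to nonblocking

try:
    os.read(r, 1)                # empty pipe: would block, so it raises
    result = "data"
except BlockingIOError:
    result = "would block"

os.write(w, b"x")
data = os.read(r, 1)             # data available: succeeds immediately
os.close(r)
os.close(w)
```

With blocking mode left on (the default), the first `os.read` would instead suspend the process until data arrived, which is exactly the blocking-call behavior described above.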

