Operating Systems Final Exam Cheat Sheet


9. Know how to design deadlock prevention algorithms for certain necessary conditions of deadlocks.
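
One standard design attacks the circular-wait condition: impose a total order on resources and acquire locks only in that order. A minimal Java sketch under that assumption (the Account class and transfer method are hypothetical, for illustration only):

    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical example: transferring between two accounts without deadlock.
    // Circular wait is prevented by always locking the account with the smaller id first.
    class Account {
        final int id;                        // global ordering key
        final ReentrantLock lock = new ReentrantLock();
        long balance;

        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    class Transfers {
        static void transfer(Account from, Account to, long amount) {
            // acquire locks in a fixed global order (smaller id first)
            Account first  = from.id < to.id ? from : to;
            Account second = from.id < to.id ? to : from;
            first.lock.lock();
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance   += amount;
            } finally {
                second.lock.unlock();
                first.lock.unlock();
            }
        }
    }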


32. Know how to calculate the access privileges of Unix/Linux systems from octal values.

+------------+-------+-------+
| Permission | Octal | Field |
+------------+-------+-------+
| rwx------  | 700   | User  |
| ---rwx---  | 070   | Group |
| ------rwx  | 007   | Other |
+------------+-------+-------+
The three octal digits correspond to user, group, other (ugo), in that order; 000 means no permissions for anyone.
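
To check such calculations, note that each octal digit encodes read (4), write (2) and execute (1) for user, group and other respectively. The helper below is only an illustrative sketch, not part of any standard API:

    class OctalPermissions {
        // Convert a three-digit octal mode such as 754 into an rwx string like "rwxr-xr--"
        static String toSymbolic(int octalMode) {
            int mode = Integer.parseInt(Integer.toString(octalMode), 8); // interpret the digits as octal
            StringBuilder sb = new StringBuilder();
            for (int shift = 6; shift >= 0; shift -= 3) {                // user, group, other
                int bits = (mode >> shift) & 7;
                sb.append((bits & 4) != 0 ? 'r' : '-');
                sb.append((bits & 2) != 0 ? 'w' : '-');
                sb.append((bits & 1) != 0 ? 'x' : '-');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(toSymbolic(700)); // rwx------
            System.out.println(toSymbolic(70));  // ---rwx---   (i.e. 070)
            System.out.println(toSymbolic(754)); // rwxr-xr--
        }
    }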

40. Many operating systems, such as Windows 7, make changes to their file systems for SSD drives. List at least 3 changes.

1. Microsoft's implementation of the "Trim" feature is supported in Windows 7.
2. An SSD can identify itself differently from an HDD in ATA, as defined by ATA8-ACS Identify Word 217: Nominal media rotation rate.
3. The alignment of the NTFS partition to the SSD geometry is important for SSD performance.

6. Explain four necessary conditions of a deadlock.

1. Mutual exclusion. There must be some resource that can't be shared between processes.
2. Hold and wait. Some process must be holding one resource while waiting for another.
3. No preemption. Only the process holding a resource can release it.
4. Circular wait. There must be some cycle of waiting processes P1, P2, ..., Pn such that each process Pi is waiting for a resource held by the next process in the cycle.

15. Know how to calculate the size of page tables (in bytes) given the number of bits for addresses and the number of bits for offsets.

32-bit addresses: logical memory space = 2^32 bytes. Page size: 4 KB = 2^12 bytes. Physical memory size = 1 GB = 2^30 bytes. Number of pages = 2^32 / 2^12 = 2^20, so the page table has 2^20 entries. If each page-table entry is 4 bytes, the page table occupies 2^20 x 4 B = 4 MB.
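
The same calculation as a small sketch (the 4-byte entry size is an assumption, since it is not stated above):

    public class PageTableSize {
        public static void main(String[] args) {
            int logicalAddressBits = 32;       // 2^32-byte logical address space
            int offsetBits = 12;               // 4 KB pages => 12 offset bits
            long entrySizeBytes = 4;           // assumed page-table-entry size

            long numEntries = 1L << (logicalAddressBits - offsetBits); // 2^20 entries
            long tableSizeBytes = numEntries * entrySizeBytes;         // 4 MB

            System.out.println("entries    = " + numEntries);      // 1048576
            System.out.println("table size = " + tableSizeBytes);  // 4194304 bytes = 4 MB
        }
    }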

11. What is the MMU? Why must the MMU be built in hardware?

A Memory-Management Unit is a hardware device that handles the run-time mapping from virtual to physical addresses. It is responsible for handling all memory and caching operations associated with the processor. One job of the MMU is to oversee and regulate the processor's use of RAM and cache memory: it translates every virtual address issued by the CPU into a physical RAM address. Because every single memory reference must be translated, the translation has to run at processor speed; doing it in software would slow every instruction enormously, which is why the MMU must be built into hardware. The benefits of this address translation (virtual memory) are:
1) Saves space: many programs do not need all of their code and data at once (or ever), so there is no need to allocate memory for it.
2) Allows flexibility for applications and the OS: the indirection allows moving programs around in memory, the OS can adjust the amount of memory allocated based on runtime behavior, and processes can address more or less memory than is physically installed in the machine.
3) Isolation and protection: one process cannot access memory addresses in others (exception: shared memory).

38. Why should a file be stored contiguously on magnetic disks, but need not be on solid-state disks (SSDs)?

A contiguous file is a file whose parts are stored sequentially on the drive. Unlike a fragmented file, a contiguous file requires less time to retrieve from a magnetic disk because the head does not have to seek between scattered fragments, so storing files contiguously gives optimized read and write speeds. "Defragmenting" a hard drive restores fragmented files to a contiguous state by moving data so that each file can be stored contiguously. On an SSD there are no moving heads, so there is no seek time or rotational latency: random accesses are roughly as fast as sequential ones, so a file gains nothing from being stored contiguously (and defragmenting an SSD only adds unnecessary write wear).

26. What is the dirty bit used in paging? Explain why using it can improve the performance of the system.

A dirty (modify) bit is a bit associated with each page that the hardware sets whenever the page is written to. When a page is selected for replacement, the dirty bit tells the OS whether the page must be written back to disk: if the bit is set, the page has been modified and must be written out; if it is clear, the copy on disk is still valid and the frame can simply be reused. This scheme can significantly reduce the time required to service a page fault, since it roughly halves the I/O (no write-back needed) whenever the victim page has not been modified.

36. How many primary partitions can a hard drive (used in Windows/Linux/Mac systems) have? Why? Can we have more than four partitions on a hard disk?

A hard drive can contain at most four primary partitions. This is because the MBR partition table only has room for exactly four records describing partitions. However, more than four partitions are possible: one of the four primary partitions can be made an extended partition, which acts as a container for multiple logical partitions.

17. What is hashed page table? Compare the advantages and disadvantages between multilevel page table and hashed page table schemes.

A hashed page table is a page table in which the virtual page number is hashed into the table; each table location holds a chain of elements that hash to the same slot. Each element contains the virtual page number, the value of the mapped page frame, and a pointer to the next element in the chain. Hashed page tables are common for address spaces larger than 32 bits, and they map page numbers to page frames.
Advantages: multilevel page tables (MPTs) reduce the space needed for large, sparse address spaces; hashed tables give fast lookups because they hash virtual page numbers directly into the table.
Disadvantages: MPTs may have a long address-translation time, since each level adds a memory reference; and if a needed second-level page table is not in memory, it must first be loaded from disk. Both schemes add some space overhead, but far less than keeping an entire single-level page table resident.
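
A minimal sketch of the chaining idea in software (class and field names are invented for illustration; real translation is done by the MMU in hardware):

    // Each bucket in the hash table is a chain of (virtualPageNumber -> frameNumber) elements.
    class HashedPageTable {
        private static class Entry {
            final long vpn;      // virtual page number
            final long frame;    // mapped physical frame
            Entry next;          // pointer to the next element in the chain
            Entry(long vpn, long frame, Entry next) { this.vpn = vpn; this.frame = frame; this.next = next; }
        }

        private final Entry[] buckets;

        HashedPageTable(int size) { buckets = new Entry[size]; }

        private int hash(long vpn) { return (Long.hashCode(vpn) & 0x7fffffff) % buckets.length; }

        void map(long vpn, long frame) {
            int b = hash(vpn);
            buckets[b] = new Entry(vpn, frame, buckets[b]);  // prepend to the chain
        }

        // Returns the frame number, or -1 to signal that the mapping is absent (page fault).
        long lookup(long vpn) {
            for (Entry e = buckets[hash(vpn)]; e != null; e = e.next)
                if (e.vpn == vpn) return e.frame;
            return -1;
        }
    }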

1. What are monitors and condition variables? Explain why condition variables are required for certain scenarios where monitors are used. You need to understand the entry queue and the wait queue. a. Yield vs wait

A monitor type presents a set of programmer-defined operations that are executed with mutual exclusion within the monitor. The monitor type also contains the declaration of variables whose values define the state of an instance of that type, along with the bodies of methods or functions that operate on those variables. It provides a convenient and effective mechanism for process synchronization: only one process may be active within the monitor at a time. A condition variable is an explicit queue that threads can put themselves on when some state of execution is not as desired (by waiting on the condition); some other thread, when it changes said state, can then wake one (or more) of those waiting threads and thus allow them to continue (by signaling on the condition). Threads blocked trying to enter the monitor sit on the entry queue; threads that have called wait() inside the monitor sit on the wait queue of that condition and release the monitor lock until they are signaled, after which they must re-acquire the lock through the entry queue. This is the key difference from yield(): yield() merely gives up the CPU without releasing the lock or checking any condition, so it cannot be used to wait for a state change.

22. What is a page fault? What happens after a page fault?

A page fault occurs when a program attempts to access a page that is not currently stored in physical memory (RAM). The fault traps to the operating system, which locates the page on the backing store (an HDD or SSD), finds a free frame, transfers the page into RAM, and updates the page table. The fault handler then returns to the original process, causing the faulting instruction to be restarted. The CPU re-issues the offending virtual address to the MMU; because the page is now resident in physical memory, the MMU returns a physical address, and main memory returns the requested word to the processor.

Authenticity

Authentication is the process of constraining the set of potential senders of a message. It is complementary to encryption. It is useful for proving that a message has not been modified. An authentication algorithm consists of a set of keys, a set of messages, a set of authenticators, a function for generating authenticators from messages, and a function for verifying authenticators on messages. Example: Authentication is used when a traveler shows his or her ticket and driver's license at the airport so he or she can check bags and receive a boarding pass. Airports need to authenticate that the person is who he or she claims to be and has purchased a ticket before issuing a boarding pass.

Availability

Availability is a guarantee of reliable access to the information by authorized people. It is important to ensure that the information concerned is readily accessible to the authorized viewer at all times. Some types of security attack attempt to deny access to the appropriate user, either for the sake of inconveniencing them, or because there is some secondary effect. For example, by breaking the web site for a particular search engine, a rival may become more popular.

3. Write Java code to solve simple synchronization problems such as bounded buffer and read-writer locker.

Bounded Buffer:

    import java.io.IOException;
    import java.io.InputStreamReader;

    class Consumer extends Thread {
        private final Buffer buffer;

        public Consumer(Buffer b) { buffer = b; }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                char c = buffer.delete();
                System.out.print(c);
            }
        }
    }

    class Producer extends Thread {
        private final Buffer buffer;
        private InputStreamReader in = new InputStreamReader(System.in);

        public Producer(Buffer b) { buffer = b; }

        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    int c = in.read();
                    if (c == -1) break; // -1 is eof
                    buffer.insert((char) c);
                }
            } catch (IOException e) {}
        }
    }

Read-Writer Lock:

    public class ReadWriteLock {
        private int readers = 0;
        private int writers = 0;
        private int writeRequests = 0;

        public synchronized void lockRead() throws InterruptedException {
            while (writers > 0 || writeRequests > 0) {
                wait();                      // a writer is active or waiting
            }
            readers++;
        }

        public synchronized void unlockRead() {
            readers--;
            notifyAll();
        }

        public synchronized void lockWrite() throws InterruptedException {
            writeRequests++;                 // give writers priority over new readers
            while (readers > 0 || writers > 0) {
                wait();
            }
            writeRequests--;
            writers++;
        }

        public synchronized void unlockWrite() {
            writers--;
            notifyAll();
        }
    }
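
The Producer and Consumer classes above assume a Buffer class that is not shown; a minimal sketch of a bounded buffer built on Java's built-in monitor (synchronized / wait / notifyAll) could look like this, with an arbitrary capacity:

    class Buffer {
        private final char[] items;
        private int count = 0, in = 0, out = 0;

        Buffer(int capacity) { items = new char[capacity]; }

        public synchronized void insert(char c) {
            try {
                while (count == items.length) wait();   // buffer full: wait on the monitor
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
            items[in] = c;
            in = (in + 1) % items.length;
            count++;
            notifyAll();                                // wake any waiting consumers
        }

        public synchronized char delete() {
            try {
                while (count == 0) wait();              // buffer empty: wait on the monitor
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); return 0; }
            char c = items[out];
            out = (out + 1) % items.length;
            count--;
            notifyAll();                                // wake any waiting producers
            return c;
        }
    }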

21. What is the difference between buddy method and slab allocation method?

Buddy: The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units sized as a power of 2. The advantage of the buddy system is how quickly adjacent buddies can be combined to form larger segments, using a technique called coalescing. The drawback is that rounding each request up to the next power of 2 is very likely to cause internal fragmentation within allocated segments. Slab: The basic idea behind the slab allocator is to keep caches of commonly used kernel objects in an initialized state, available for use by the kernel. Without an object-based allocator, the kernel would spend much of its time allocating, initializing and freeing the same objects; the slab allocator caches freed objects so that their basic structure is preserved. The slab-allocation algorithm uses these caches to store kernel objects. Its benefits: 1) no memory is wasted to fragmentation, and 2) memory requests can be satisfied quickly.

10. Know how to run banker algorithm for both deadlock detection and deadlock avoidance. Safe/unsafe state.

The banker's algorithm is a resource-allocation algorithm that tests whether the system is in a safe state, i.e. whether there is some order in which every process can obtain its remaining need and finish. The same safety check is used for deadlock avoidance (run on the Need matrix before granting a request) and for deadlock detection (run on the outstanding Request matrix).

Safety algorithm (n processes, m resource types; n, m, available[], allocation[][], need[][] and safetySequence are class fields initialized elsewhere):

    public static boolean isSystemInSafeState() {
        int[] work = available.clone();          // resources currently free
        boolean[] finish = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (finish[i]) continue;
                boolean canRun = true;           // can P_i's remaining need be met from work?
                for (int j = 0; j < m; j++) {
                    if (need[i][j] > work[j]) { canRun = false; break; }
                }
                if (canRun) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j]; // P_i finishes and releases its allocation
                    finish[i] = true;
                    safetySequence += "P" + (i + 1) + " ";
                    progress = true;
                }
            }
        }
        for (boolean f : finish) if (!f) return false;   // some process can never finish: unsafe state
        return true;                                     // safetySequence holds one safe ordering
    }

Deadlock avoidance: when process P_i requests resources, tentatively grant the request (subtract it from available, add it to allocation[i], subtract it from need[i]) and run the safety check; if the resulting state is unsafe, roll the changes back and make P_i wait. Deadlock detection: run the same loop on the current allocation with the Request matrix in place of Need; every process whose finish flag is still false at the end is deadlocked.

12. Explain the difference between internal and external fragmentation

External fragmentation occurs when the blocks that have been assigned to active objects are scattered through the heap in such a way that the remaining, unused space is composed of many small blocks: there may be a lot of free space in total, but no single piece of it is large enough to satisfy a future request. Internal fragmentation occurs when a storage-management algorithm allocates a block that is larger than required to hold a given object; the extra space inside the block is then unused. Paging causes internal fragmentation: because it uses fixed-size blocks of memory, it eliminates external fragmentation, but if a process needs less than a whole number of pages, the unused part of its last page is wasted.

25. Understand FIFO, optimal and LRU algorithm and know how to count page faults.

FIFO Page Replacement: when a page must be replaced, we select the oldest page, i.e. the one that was brought into memory first. Optimal Page Replacement: when a replacement is needed, we look ahead in the reference string and replace the page that will not be used for the longest time in the future; this gives the lowest possible fault rate but requires future knowledge. LRU Page Replacement: this method uses the recent past as an approximation of the near future and replaces the page that has not been referenced for the longest time. (A small page-fault counter for FIFO is sketched below.)
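
As a sketch of how to count page faults in code, the method below simulates FIFO replacement for a given reference string and frame count (the reference string in main is the classic textbook example):

    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.Queue;
    import java.util.Set;

    public class FifoPageFaults {
        static int countFaults(int[] refs, int frames) {
            Queue<Integer> fifo = new ArrayDeque<>();  // arrival order of resident pages
            Set<Integer> resident = new HashSet<>();
            int faults = 0;
            for (int page : refs) {
                if (resident.contains(page)) continue;                    // hit
                faults++;                                                 // miss: page fault
                if (fifo.size() == frames) resident.remove(fifo.poll());  // evict the oldest page
                fifo.add(page);
                resident.add(page);
            }
            return faults;
        }

        public static void main(String[] args) {
            int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
            System.out.println(countFaults(refs, 3)); // 15 page faults with 3 frames
        }
    }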

20. Why is paging not used in kernel memory management?

Kernel memory is allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes, for two main reasons: 1. The kernel requests memory for data structures of varying sizes, some of which are less than a page in size. As a result, the kernel must use memory conservatively and attempt to minimize waste due to fragmentation. This is important because many operating systems do not subject kernel code or data to the paging system. 2. Pages allocated to user-mode processes do not necessarily have to be in contiguous physical memory. However, certain hardware devices interact directly with physical memory, without the benefit of a virtual-memory interface, and consequently may require memory residing in physically contiguous pages.

39. What is RAID? Know how to calculate the read and write throughput, usable space and reliability for a. RAID 0 b. RAID 1 c. RAID 10 d. RAID 5. i. Error rate?

RAID storage uses multiple disks in order to provide fault tolerance, to improve overall performance, and to increase storage capacity in a system, in contrast with older storage devices that used only a single disk drive. RAID lets you store data redundantly (in multiple places) in a balanced way; it is used frequently on servers but is not generally necessary for personal computers. For N disks of capacity C each: a. RAID 0: striping at the block level with no redundancy (no mirroring or parity). Usable space N*C, read and write throughput roughly N times one disk, but any single-disk failure loses data. b. RAID 1: disk mirroring. Usable space C for a two-disk mirror; reads can be served from either copy (up to 2x), writes go to both disks (about 1x), and the array survives the failure of one disk in the pair. c. RAID 10: combines RAID 1 + RAID 0 by striping over mirrored pairs. Usable space N*C/2, good read and write throughput, and it tolerates one failure per mirror pair. d. RAID 5: disk striping with distributed parity. Usable space (N-1)*C, good read throughput, writes are slower because each write must also update parity (read-modify-write), and the parity allows the array to reconstruct data after one disk failure.
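
A back-of-the-envelope sketch of the usable-space calculations under an idealized model (N identical disks of capacity c; the helper and its numbers are illustrative, not a standard API):

    public class RaidEstimates {
        // Idealized usable capacity for n disks of capacity c (same units in, same units out).
        static double usable(String level, int n, double c) {
            switch (level) {
                case "RAID0":  return n * c;            // striping, no redundancy
                case "RAID1":  return c;                // full mirror (classically n = 2)
                case "RAID10": return n * c / 2;        // striped mirrors
                case "RAID5":  return (n - 1) * c;      // one disk's worth of parity
                default: throw new IllegalArgumentException(level);
            }
        }

        public static void main(String[] args) {
            // Example: 4 disks of 2 TB each
            System.out.println(usable("RAID0", 4, 2));   // 8.0 TB, no failure tolerated
            System.out.println(usable("RAID10", 4, 2));  // 4.0 TB, one failure per mirror pair
            System.out.println(usable("RAID5", 4, 2));   // 6.0 TB, one failure tolerated
        }
    }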

27. What is thrashing? How to reduce the possibility of thrashing? Which method reduces the possibility of thrashing?

Thrashing is computer activity that makes little or no progress, usually because memory or other resources have become exhausted or too limited to perform the needed operations: processes spend more time paging than executing. When this happens, a pattern typically develops in which a process makes a request of the operating system, the operating system tries to find resources by taking them from some other process, which in turn makes new requests that cannot be satisfied. We can limit the effects of thrashing by using a local replacement algorithm: with local replacement, if one process starts thrashing, it cannot steal frames from another process and cause the latter to thrash as well. To prevent thrashing, we must provide each process with as many frames as it needs, estimated using the locality (working-set) model.

23. What is demand paging? What are the advantages of demand paging?

With demand-paged virtual memory, pages are loaded only when they are demanded during program execution; pages that are never accessed are thus never loaded into physical memory. A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory. Advantages:
• Pages that are never accessed are never loaded, which saves memory for other programs and increases the degree of multiprogramming.
• There is less loading latency at program startup.
• There is less initial disk overhead because of fewer page reads.
• It does not need extra hardware support beyond what paging needs, since the protection-fault mechanism can be used to detect page faults.
• Pages can be shared by multiple programs until one of them modifies a page; copy-on-write is used to save resources.
• It allows running programs larger than physical memory, which is far easier for the programmer than the old manual overlay technique.

2. Understand how monitors and condition variables are implemented in Java.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // Dining-philosophers monitor written with Java's explicit Lock and Condition objects
    // (wait/signal on a monitor condition become await()/signal() on a Condition).
    class DinPhil {
        enum State { THINKING, HUNGRY, EATING }

        private final State[] state = new State[5];
        private final ReentrantLock lock = new ReentrantLock();   // the monitor lock (entry queue)
        private final Condition[] self = new Condition[5];        // one wait queue per philosopher

        public DinPhil() {
            for (int i = 0; i < 5; i++) {
                state[i] = State.THINKING;
                self[i] = lock.newCondition();
            }
        }

        public void takeForks(int i) throws InterruptedException {
            lock.lock();
            try {
                state[i] = State.HUNGRY;
                test(i);
                while (state[i] != State.EATING)
                    self[i].await();          // waiting releases the monitor lock
            } finally {
                lock.unlock();
            }
        }

        public void returnForks(int i) {
            lock.lock();
            try {
                state[i] = State.THINKING;
                test((i + 4) % 5);            // let the left neighbor try to eat
                test((i + 1) % 5);            // let the right neighbor try to eat
            } finally {
                lock.unlock();
            }
        }

        private void test(int i) {
            if (state[(i + 4) % 5] != State.EATING
                    && state[i] == State.HUNGRY
                    && state[(i + 1) % 5] != State.EATING) {
                state[i] = State.EATING;
                self[i].signal();             // wake the waiting philosopher
            }
        }
    }

7. Know how to draw and explain resource-allocation graph.

A resource-allocation graph tracks which resources are held by which processes and which processes are waiting for resources of a particular type. A directed edge from a process to a resource (a request edge) means the process is waiting for an instance of that resource; an edge from a resource instance to a process (an assignment edge) means that instance is held by the process. If the graph contains no cycle, no deadlock exists. If every resource type has a single instance, a cycle implies a deadlock; with multiple instances, a cycle indicates only the possibility of deadlock.

30. What are differences between hard link and soft link?

A symbolic (soft) link is a separate file that contains the path of the original file, whereas a hard link is an additional directory entry that refers to the same inode as the original file. If you delete the original file, the soft link becomes useless, because it points to a non-existent path. With a hard link it is the opposite: the data is still accessible through the hard link, because the file's contents are not freed until the last hard link to the inode is removed. In a nutshell, a soft link
-can cross file systems
-allows you to link between directories
-has a different inode number and file permissions than the original file
-its permissions are not updated when the original's change
-stores only the path of the original file, not the contents.
A hard link
-cannot cross file-system boundaries
-cannot link directories
-has the same inode number and permissions as the original file
-its permissions are updated if we change the permissions of the source file
-refers to the actual contents of the original file, so you can still view the contents even if the original directory entry is moved or removed.

31. Explain why there are no hard links for directories in almost all operating systems.

Allowing hard links to directories would break the directed acyclic graph structure of the filesystem, possibly creating directory loops and dangling directory subtrees, which would make fsck and any other file tree walkers error prone.

37. How to calculate the maximum possible file size of ext2/ext3 (Linux file system)?

Ext2: Maximum file size is 16 GB to 2 TB, depending on block size. The journaling feature is not available. It is normally used for flash-based storage media like USB flash drives, SD cards, etc. Ext3: Maximum file size is also 16 GB to 2 TB; it adds journaling and provides the facility to upgrade from ext2 to ext3 in place, without having to back up and restore data. The limit is calculated from the inode's block pointers: with block size B and 4-byte block pointers, there are 12 direct pointers plus one single, one double and one triple indirect block, so the maximum file size is (12 + B/4 + (B/4)^2 + (B/4)^3) x B. For B = 1 KB this is roughly 16 GB; larger blocks raise the limit (other 32-bit fields cap it at about 2 TB for 4 KB blocks).

Confidentiality

Confidentiality is a set of rules that limits access to information. Measures undertaken to ensure confidentiality are designed to prevent sensitive information from reaching the wrong people, while making sure that the right people can in fact get it: Access must be restricted to those authorized to view the data in question. Example: A good example of methods used to ensure confidentiality is an account number or routing number when banking online. Data encryption is a common method of ensuring confidentiality. User IDs and passwords constitute a standard procedure; two-factor authentication is becoming the norm.

35. Know how to find the cluster of a file in the indexed file allocation methods used in Linux file system.

Consider indexed file allocation using index nodes (inodes). An inode contains, among other things, 7 direct indexes, one single indirect index, one double indirect index, and one triple indirect index. If the disk sector is 512 bytes, what is the maximum file size in this allocation scheme? With a 2-byte index we could address only 65,536 disk blocks, i.e. 512 x 65,536 = 32 MB, but the triple indirect structure can express far more blocks than that. Therefore we go to a 4-byte indexing scheme (3-byte indexing schemes are not attractive and still not sufficient). With 4-byte indexes, a 512-byte block holds 128 of them, so the maximum file size is 7*512 + 128*512 + 128*128*512 + 128*128*128*512 = 1,082,199,552 bytes, or about 1 GB.
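
The same arithmetic as a quick sketch (512-byte blocks and 4-byte block pointers, so 128 pointers fit in one indirect block):

    public class InodeMaxFileSize {
        public static void main(String[] args) {
            long block = 512;                 // block (sector) size in bytes
            long ptrsPerBlock = block / 4;    // 4-byte block pointers -> 128 per block
            long direct = 7;                  // direct indexes in the inode

            long maxBytes = direct * block
                          + ptrsPerBlock * block                                // single indirect
                          + ptrsPerBlock * ptrsPerBlock * block                 // double indirect
                          + ptrsPerBlock * ptrsPerBlock * ptrsPerBlock * block; // triple indirect

            System.out.println(maxBytes);     // 1082199552 bytes, roughly 1 GB
        }
    }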

24. Explain how copy-on-write works and what the advantages of this scheme are.

Copy-on-write is an optimization strategy used in computer programming. The fundamental idea is that if multiple callers ask for resources which are initially indistinguishable, you can give them pointers to the same resource. This function can be maintained until a caller tries to modify its "copy" of the resource, at which point a true private copy is created to prevent the changes becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that if a caller never makes any modifications, no private copy need ever be created.
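
As a toy, user-level illustration of the idea only (real copy-on-write for fork() is done by the kernel at page granularity using the MMU's protection bits; the class below is invented for illustration):

    import java.util.Arrays;

    // Two "processes" share the same backing array until one of them writes.
    class CowBuffer {
        private int[] data;
        private boolean shared;

        private CowBuffer(int[] data, boolean shared) { this.data = data; this.shared = shared; }

        static CowBuffer wrap(int[] data) { return new CowBuffer(data, false); }

        // A "fork": both objects point at the same array, marked shared.
        CowBuffer fork() {
            this.shared = true;
            return new CowBuffer(this.data, true);
        }

        int read(int i) { return data[i]; }            // reads never copy

        void write(int i, int value) {
            if (shared) {                              // first write: make a private copy
                data = Arrays.copyOf(data, data.length);
                shared = false;
            }
            data[i] = value;
        }
    }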

4. What is a deadlock?

Deadlock is when two or more tasks never make progress because each is waiting for some resource held by another process. Deadlocks are described in terms of processes (things that can block) and resources (things processes can wait for). Processes may or may not correspond to full-blown processes as used elsewhere. It is typically assumed that each resource may have multiple instances, where a process is indifferent to which instance it gets and nobody blocks unless the processes collectively request more instances than the resource can provide.

5. Explain the difference between deadlock and starvation?

Deadlock refers to the situation in which processes are stuck in a circular wait for resources: each blocked process in the set holds a resource and waits for a resource that is held by some other process in the set. Deadlock can be eliminated by, for example: providing effectively infinite resources, not allowing waiting, not allowing sharing, preempting resources, or requiring all requests to be made at the start. Starvation, on the other hand, occurs when a particular process must wait indefinitely because it never gets a chance to proceed. The process either waits forever or is rolled back and restarted again and again; this can happen, for instance, when the same process is chosen as the victim every time deadlock recovery rolls a process back. It is commonly found in priority-based scheduling systems. Common causes of starvation: uncontrolled management of resources, strictly enforced process priorities, use of random selection, scarcity of resources, and the absence of an aging mechanism to boost long-waiting processes.

34. Know how to find all clusters of a file by giving Linked Allocation Table and the entry of the file in a directory.

Disk files can be stored as linked lists, at the expense of the storage space consumed by each link (e.g., a 512-byte block may hold only 508 bytes of data, with 4 bytes used for the pointer to the next block). Linked allocation involves no external fragmentation, does not require file sizes to be known in advance, and allows files to grow dynamically at any time. Unfortunately, linked allocation is efficient only for sequential-access files, since random access requires walking the chain from the beginning for each new location. Allocating clusters of blocks reduces the space wasted on pointers, at the cost of internal fragmentation. A linked allocation table (such as the FAT) keeps all the pointers together in one table at the start of the volume: starting from the first cluster recorded in the file's directory entry, you follow the chain of table entries until an end-of-file marker is reached.
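
A sketch of following such a chain through a linked allocation table (the table contents below are made up for illustration; -1 marks end-of-chain):

    public class ClusterChain {
        // table[c] holds the number of the next cluster of the file, or -1 for end-of-chain.
        static void printClusters(int[] table, int startCluster) {
            for (int c = startCluster; c != -1; c = table[c])
                System.out.print(c + " ");
            System.out.println();
        }

        public static void main(String[] args) {
            //  cluster index:  0   1   2   3   4   5   6   7
            int[] table =     {-1,  4, -1,  2,  7, -1, -1, -1};
            // The directory entry says the file starts at cluster 1:
            printClusters(table, 1);   // prints: 1 4 7
        }
    }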

19. Microsoft Windows uses a method that combines paging and segmentation. Explain its advantages.

In a combined paging/segmentation system, a user's address space is broken up into a number of segments, and each segment is broken up into fixed-size pages equal in length to a main-memory frame. Segmentation is visible to the programmer; paging is transparent to the programmer. This combination reduces memory usage compared with pure paging, allows individual pages to be shared by copying page-table entries, and allows whole segments to be shared by sharing segment-table entries (which is the same as sharing the page table for that segment). Although it is a flexible solution, the problem of internal fragmentation is still present.

Integrity

Integrity is the assurance that the information is trustworthy and accurate. Integrity involves maintaining the consistency, accuracy, and trustworthiness of data over its entire life cycle. Data must not be changed in transit, and steps must be taken to ensure that data cannot be altered by unauthorized people. These measures include file permissions and user access controls. Example: For example, if you save a file with important information that must be relayed to members of your organization, but someone opens the file and changes some or all of the information, the file has lost its integrity. The consequences could be anything from coworkers missing a meeting you planned for a specific date and time, to 50,000 machine parts being produced with the wrong dimensions.

28. What are mandatory lock and advisory locking mechanisms?

Mandatory Lock: When a process acquires a lock on a file, the operating system enforces the lock, preventing other processes from accessing the file and thereby protecting file integrity. A benefit of mandatory locks is that users are guaranteed that only one process may write to the file at a time, which reduces conflicts; a drawback is that processes may become deadlocked waiting on a resource. Advisory Lock: Advisory locking is a cooperative locking scheme in which the participating processes must follow/obey a locking protocol. As long as the processes follow the locking protocol/API and respect its return values, the underlying API takes care that the file-locking semantics work correctly; the OS does not stop a process that ignores the protocol from accessing the file.

16. Why is a multiple-level page-table scheme needed in some systems? What are the advantages and disadvantages of a multiple-level page-table scheme?

Multi-level page tables are a mapping structure in which a virtual address is treated as a structured word containing a small number of fixed-width fields, each indexing one level of the table. They are used when a single page table is too big to fit in one contiguous region of memory. Advantages:
-Allocating memory for the table is easy and cheap: any free page will do, and the OS can take the first one off its free list, which eliminates external fragmentation of the table.
-Data (page frames) can be scattered all over physical memory and the pages are mapped appropriately anyway.
-It works naturally with demand paging and prepaging, and makes swapping more efficient: there are no fragmentation considerations, just swap out the page least likely to be used.
Disadvantages:
-Potentially poor space overhead, since the tables need one entry per virtual-memory page; this can be improved with variable page sizes (super-pages), guarded page tables, inverted page tables, or a Page Table Length Register (PTLR) that limits the virtual-memory size.
-Longer memory-access times, because each level adds a page-table lookup; this can be improved using a TLB.
-Internal fragmentation, as with any paging scheme.
(See the two-level address-split sketch below.)
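
As an illustration of the address split, the sketch below assumes a 32-bit virtual address divided 10/10/12 (two 10-bit table indexes plus a 12-bit offset for 4 KB pages); the split itself is an assumption, since the answer above does not fix one:

    public class TwoLevelSplit {
        public static void main(String[] args) {
            long va = 0x1234_5678L;            // example 32-bit virtual address

            long offset = va & 0xFFF;          // low 12 bits: offset within the 4 KB page
            long p2 = (va >> 12) & 0x3FF;      // next 10 bits: index into a second-level table
            long p1 = (va >> 22) & 0x3FF;      // top 10 bits: index into the outer page table

            System.out.printf("p1=%d p2=%d offset=%d%n", p1, p2, offset);
            // The outer-table entry for p1 points to a second-level table;
            // its entry for p2 gives the frame, and frame*4096 + offset is the physical address.
        }
    }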

29. Know the file structures after a file system is mounted to another one.

On UNIX, file systems can be mounted at any directory. Mounting is implemented by setting a flag in the in-memory copy of the inode for that directory. The flag indicates that the directory is a mount point. A field then points to an entry in the mount table, indicating which device is mounted there. The mount table entry contains a pointer to the superblock of the file system on that device. This scheme enables the operating system to traverse its directory structure, switching seamlessly among file systems of varying types.

13. What are pages? What are frames? What is the relation between pages and frames?

Pages are the fixed-size blocks into which the logical (virtual) memory space is divided; frames are fixed-size blocks of the same size in the physical memory space. When a process is to be executed, its pages are loaded from their source into any available memory frames, and the page table records which frame holds each page. The backing store is divided into fixed-size blocks of the same size as the memory frames.

18. What is the difference between paging and segmentation methods?

Paging: It is a memory-management scheme that permits the physical address space of a process to be noncontiguous. It avoids external fragmentation and the need for compaction, and it solves the considerable problem of fitting memory chunks of varying sizes onto the backing store. Segmentation: It is a memory-management scheme that supports the user's view of memory. A logical address space is a collection of segments; each segment has a name and a length, and an address specifies both the segment name and the offset within the segment, so the user specifies each address by two quantities. Differences:
- A page is a fixed-size block; a segment is of variable size.
- Paging may lead to internal fragmentation; segmentation may lead to external fragmentation.
- In paging the CPU divides the address into a page number and an offset; in segmentation the user specifies each address by a segment number and an offset (checked against the segment limit).
- The hardware decides the page size; the segment size is specified by the user.
- Paging uses a page table that contains the base address of each page frame; segmentation uses a segment table that contains each segment's base address and length (limit).

8. What is deadlock prevention and what is deadlock avoidance. Explain their differences.

Prevention:
• The goal is to ensure that at least one of the four necessary conditions for deadlock can never hold.
• It constrains how requests can be made, which can be costly: resources may sit idle and throughput may drop.
• The system does not require additional prior information regarding the overall potential use of each resource by each process; it does not need to know all the details of all resources in existence, available and requested.
• The resource-allocation strategy is conservative: it under-commits resources (for example, all resources may have to be requested at once).
Avoidance:
• The goal is to ensure that the system never enters an unsafe state.
• The system requires additional prior information: it must know in advance the maximum number and type of resources each process may request, along with what is currently available and allocated, in order to decide whether the next state will be safe or unsafe.
• Deadlock-avoidance techniques include the banker's algorithm, wait/die, wound/wait, etc.
• The resource-allocation strategy sits midway between detection and prevention: each request is examined and granted only if at least one safe completion order still exists.
• Neither approach preempts resources.

14. What is the TLB? Explain how the TLB works. Why can we not use dedicated registers to implement page tables?

Translation Lookaside Buffer is a memory cache that is used to reduce the time taken to access a user memory location. The TLB is associative, high-speed memory. It is used with page tables. The TLB contains a few page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found its frame number is immediately available and is used to access memory. If not found, a memory reference to the page table must be made. The implementation of the page table is vital to the efficiency of the virtual memory technique, for each memory reference must also include a reference to the page table. The fastest solution is a set of dedicated registers to hold the page table but this method is impractical for large page tables because of the expense. But keeping the page table in main memory could cause intolerable delays because even only one memory access for the page table involves a slowdown of 100 percent and large page tables can require more than one memory access.
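
As a rough software analogy only (a real TLB is associative hardware, not a Java object), a TLB can be modeled as a small LRU cache of page-to-frame mappings:

    import java.util.LinkedHashMap;
    import java.util.Map;

    class TlbModel {
        private final int capacity;
        private final LinkedHashMap<Long, Long> entries;   // page number -> frame number

        TlbModel(int capacity) {
            this.capacity = capacity;
            // accessOrder = true turns the map into an LRU structure
            this.entries = new LinkedHashMap<>(capacity, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
                    return size() > TlbModel.this.capacity;   // evict the least recently used entry
                }
            };
        }

        Long lookup(long page) { return entries.get(page); }   // null => TLB miss, walk the page table

        void insert(long page, long frame) { entries.put(page, frame); }
    }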

33. Know the commands in Linux to change access privileges of files/directories.

chmod - change permissions
chown - change ownership
sudo - gain admin rights

a - all
u - user
g - group
o - other

r - read: look at the contents of a file / find out what files are in a directory
w - write: change or delete the contents of a file / create or remove files in a directory
x - execute: run a file as a program / change to the directory or copy from the directory

+-----+---+--------------------------+
| rwx | 7 | Read, write and execute  |
| rw- | 6 | Read, write              |
| r-x | 5 | Read and execute         |
| r-- | 4 | Read                     |
| -wx | 3 | Write and execute        |
| -w- | 2 | Write                    |
| --x | 1 | Execute                  |
| --- | 0 | No permissions           |
+-----+---+--------------------------+

