CSE 2431 Final


Memory Mapping (created via mmap())

A file-access method in which a file is mapped into the process's memory space so that standard memory access instructions read and write the contents of the file; an alternative to the use of read() and write() calls. mmap() creates a new mapping in the virtual address space of the calling process.
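A minimal sketch in Python, whose mmap module wraps the mmap() system call; the scratch file and its contents are made up for illustration.

```python
import mmap
import os
import tempfile

# Create a scratch file to map (the path and contents are illustrative).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")

# Map the whole file. Slicing and assignment now read and write file
# bytes directly -- no read()/write() calls involved.
with mmap.mmap(fd, 0) as m:
    first = m[0:5]             # ordinary memory access reads the file
    m[0:5] = b"HELLO"          # ordinary assignment writes it back

# The change is visible through the conventional file interface too.
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 11)
os.close(fd)
os.remove(path)
```

Because the mapping is shared with the file, the assignment through `m` is immediately visible to a subsequent read() on the same file.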

Segmentation

Segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process. The details about each segment are stored in a table called a segment table.

Memory management unit

The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU), the part of the processor that assists with address translation.

Random-access memory (RAM): SRAM and DRAM

Static RAM (SRAM) is faster and significantly more expensive than dynamic RAM (DRAM). SRAM is used for cache memories, both on and off the CPU chip. DRAM is used for main memory plus the frame buffer of a graphics system.

Swap space

The allocated space on the hard drive where pages of virtual memory are stored when physical memory is full.

Dynamic loading; dynamic linking, shared libraries

The benefit of this approach is that it avoids linking and loading libraries that may end up not being used into an executable file. Instead, the library is conditionally linked and is loaded if it is required during program run time.

Copy-on-write (COW) and fork()

The goal of copy-on-write (COW) fork() is to defer allocating and copying physical memory pages for the child until the copies are actually needed, if ever. COW fork() creates just a page table for the child, with PTEs for user memory pointing to the parent's physical pages.
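The isolation that COW preserves can be seen from user code with Python's os.fork() (a wrapper over fork()); the page copying itself is invisible, but the child's write provably does not disturb the parent's copy. A sketch, assuming a POSIX system:

```python
import os

counter = [0]            # data shared copy-on-write at fork time

pid = os.fork()
if pid == 0:             # child process
    counter[0] = 42      # first write forces a private page copy
    os._exit(0)          # leave without running parent-only code

os.waitpid(pid, 0)       # parent waits for the child to finish
# The parent still sees its own, unmodified page: counter[0] == 0.
```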

Primary, secondary storage

The goal of secondary storage is to retain data until you overwrite or delete it, meaning it exclusively relies upon non-volatile storage media such as HDDs and SSDs. This is in contrast to primary storage, which includes both volatile and non-volatile storage media for quick access to frequently used data.

I/O bus: - SATA, PCI Express, SCSI interfaces - Device controller

The input/output (I/O) bus is the pathway used for input and output devices to communicate with the computer processor. - SATA (Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. - PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe or PCI-e, is a high-speed serial computer expansion bus standard, designed to replace the older PCI, PCI-X, and AGP bus standards. - SCSI (Small Computer Systems Interface) is a smart bus, controlled with a microprocessor, that allows you to add up to 15 peripheral devices to the computer. - A device controller handles the incoming and outgoing signals of the CPU by acting as a bridge between the CPU and the I/O devices.

Memory-mapped files

A memory-mapped file makes the contents of a file appear in virtual memory. This mapping between a file and memory space enables an application, including multiple cooperating processes, to modify the file by reading and writing directly to the memory.

Page faults and handling

A page fault happens when a program tries to access a page that is not present in physical memory (main memory). The fault transfers control to the operating system, which must bring the page in from secondary storage, such as a hard disk. Page fault handling proceeds as follows: -First, check an internal table for the process to determine whether the reference was a valid or invalid memory access. -If the reference is invalid, terminate the process. Otherwise, the page must be paged in. -Find a free frame on the free-frame list. -Schedule a disk operation to read the required page into that frame. -When the I/O operation completes, update the process's page table with the new frame number and set the valid bit; the page reference is now valid. -Restart the instruction that caused the fault.

Error-correcting codes

An error correcting code (ECC) is an encoding scheme that transmits messages as binary numbers, in such a way that the message can be recovered even if some bits are erroneously flipped. They are used in practically all cases of message transmission, especially in data storage where ECCs defend against data corruption.

Operations on directories

Every directory supports a number of common operations on files: creating a file, searching for a file, deleting a file, and renaming a file.

File (abstract data type): attributes, operations

File attributes are metadata associated with a file, such as its name, type, size, location, timestamps, and protection (the access rights that grant or deny ways a user can access the file). The user performs file operations with the help of commands provided by the operating system.

Protection

Protection is about safeguarding critical information from unauthorized access by implementing stringent access control measures and sound permission hygiene. Apart from enabling and monitoring access controls, decluttering data storage also plays a role in securing files.

Swapping

Swapping is a simple memory management policy done by moving a complete process into or out of memory. Process data moved includes the process control block (PCB), data variables (i.e., heap and stack), and instructions (in machine language).

Directory listing

A directory listing enumerates the files and subdirectories that a directory contains; on Unix systems it is produced with the ls command.

Partitions and mounting

Mounting is the software process that activates a particular disk (or partition) by making its contents available to the computer's file system. The file system on the mounted device is attached at a mount point in the computer's directory tree.

Mounting, partitions

A partition is a logical division of a disk that can hold its own file system. Even after a physical connection is made between a device and the computer, the computer cannot access the device's contents until it is mounted.

Solid-state disks (SSDs)

A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies to store data persistently, typically using flash memory, and functions as secondary storage in the hierarchy of computer storage.

Solid state disks (SSDs).

A storage technology, based on flash memory, that in some situations is an attractive alternative to the conventional rotating disk: SSDs are built of semiconductor memory with no moving parts, and thus have much faster random access times than rotating disks, use less power, and are more rugged.

reference string

A sequence of memory (page) references that a page-replacement algorithm is run against to determine the number of page faults it generates.

virtual file systems

A virtual file system (VFS) or virtual filesystem switch is an abstract layer on top of a more concrete file system.

Page replacement

the act of overwriting a page in memory with a different page loaded from the disk when needed

Memory hierarchy

An organization of storage into levels (registers, cache, main memory, secondary storage) that minimizes average access time by keeping frequently accessed data in the smaller, faster levels.

File-system mounting

Mounting is a process in which the operating system adds the directories and files from a storage device to the user's computer file system. The file system is attached to an empty directory (the mount point); by doing so, the system user can access the data on the storage device through the system file manager. Storage devices can be internal hard disks, external hard disks, USB flash drives, SSDs, memory cards, network-attached storage devices, CDs and DVDs, remote file systems, or anything else.

Shared libraries

Shared libraries are files used by multiple applications.

Allocating frames to processes: 1.) Minimum # of frames 2.) Equal allocation 3.) Proportional allocation (# of frames per priority) 4.) Global vs. local allocation

1.) At least a minimum number of frames should be allocated to each process. This constraint is supported by two reasons. First, as fewer frames are allocated, the page-fault rate increases, decreasing the performance of the executing process. Second, there should be enough frames to hold all the different pages that any single instruction can reference. 2.) Equal allocation: in a system with x frames and y processes, each process gets an equal number of frames, i.e., x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames which are not allocated to any process can be used as a free-frame buffer pool. 3.) Proportional allocation: frames are allocated to each process according to the process size. 4.) Global replacement: when a process needs a page which is not in memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process; that is, one process can take a frame from another. Local replacement: when a process needs a page which is not in memory, it can bring in the new page and allocate it a frame from its own set of allocated frames only.
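Equal and proportional allocation (items 2 and 3) reduce to simple arithmetic. A sketch; the 48-frame and 62-frame figures below are the usual textbook examples:

```python
def equal_allocation(frames, nprocs):
    # Each process gets floor(frames / nprocs); the remainder stays
    # in a free-frame buffer pool.
    per_proc = frames // nprocs
    return per_proc, frames - per_proc * nprocs

def proportional_allocation(frames, sizes):
    # Process i gets floor(size_i / total_size * frames) frames.
    total = sum(sizes)
    return [s * frames // total for s in sizes]
```

With 48 frames and 9 processes, equal allocation gives each process 5 frames with 3 left over; with 62 frames and process sizes 10 and 127 pages, proportional allocation gives roughly 4 and 57 frames.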

Demand paging

Demand paging follows that pages should only be brought into memory if the executing process demands them.

file types and extensions

A file type defines how the data in a file is organized and is usually identified by the file extension and the applications associated with the file. A file format defines the way data is stored in a file; a single file format may be identified by several different extensions.

Device drivers

In computing, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton.

Application I/O interfaces: -Block devices - Character streams - Network sockets and select() system call - Nonblocking and asynchronous I/O

- A block device is (in very general terms) a piece of hardware that provides data access in blocks (contiguous groups of bytes), as opposed to character devices, which provide access to individual bytes. - A character-stream device transfers data one character (byte) at a time; keyboards and serial ports are typical examples. - The select() call monitors activity on a set of sockets, looking for sockets ready for reading, writing, or with an exception condition pending. - A nonblocking read() returns immediately with whatever data is available; an asynchronous read() requests a transfer that will be performed in its entirety but that will complete at some future time.
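The select() behavior can be demonstrated with Python's select module; a pipe stands in for a socket pair here, which is an assumption of this sketch (select() accepts any readable file descriptors):

```python
import os
import select

r, w = os.pipe()                       # read end r, write end w

# Nothing written yet: a zero-timeout select() reports no readiness.
ready, _, _ = select.select([r], [], [], 0)
before = list(ready)

os.write(w, b"x")                      # now there is data to read

# The read end shows up in the ready-for-reading set.
ready, _, _ = select.select([r], [], [], 0)
after = list(ready)

os.close(r)
os.close(w)
```

The zero timeout makes select() itself nonblocking; passing a timeout of None would instead block until some descriptor becomes ready.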

Allocation methods - Contiguous - Linked (e.g., FAT32) - Indexed (e.g., Linux ext4)

- In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file requires n blocks and is given a block b as the starting location, then the blocks assigned to the file will be: b, b+1, b+2,......b+n-1. This means that given the starting block address and the length of the file (in terms of blocks required), we can determine the blocks occupied by the file. The directory entry for a file with contiguous allocation contains: Address of starting block and Length of the allocated portion. - In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk blocks can be scattered anywhere on the disk. The directory entry contains a pointer to the starting and the ending file block. Each block contains a pointer to the next block occupied by the file. - In this scheme, a special block known as the Index block contains the pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block.
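The three schemes above can be sketched as block-number computations; the block numbers and FAT-style pointer table below are made up for illustration:

```python
def contiguous_blocks(start, length):
    # Contiguous: given starting block b and length n, the file
    # occupies b, b+1, ..., b+n-1.
    return list(range(start, start + length))

def linked_blocks(next_block, start):
    # Linked (FAT-style): follow per-block pointers until the
    # end-of-file marker (-1 here, by assumption).
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = next_block[b]
    return blocks

def indexed_blocks(index_block):
    # Indexed: the index block simply lists every data block;
    # entry i gives the disk address of file block i.
    return list(index_block)
```

For example, a contiguous file starting at block 9 with length 3 occupies blocks 9, 10, 11, while a linked file may be scattered anywhere on the disk.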

I/O bus access: - Polling - Interrupts - Direct memory access (DMA)

- Polling is the process where the computer or controlling device repeatedly checks an external device for its readiness or state, often at a low hardware level. - Interrupt-driven I/O is a way of controlling input/output activity whereby a peripheral or terminal that needs to make or receive a data transfer sends a signal. This causes a program interrupt to be raised, which is serviced at a time appropriate to the priority level of the I/O interrupt. - Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).

Access control (Unix): -Owner, group, universe permission -chmod, chown, du, df, mount

- The first three positions (after the "-" or "d") designate the owner's permissions: r indicates the owner can read the file, w that the owner can write to the file, and x that the owner can execute the file. The second three positions designate permissions for the group; a group might, for example, be able to read a file but not write to or execute it. (Execution permission is usually only given for particular files or a specific directory.) The last three positions are for the world/anyone; to allow your Web pages to be viewed using a browser, you need this set to include "read." - In Unix operating systems, the chmod command is used to change the access mode of a file. Syntax: chmod [options] [mode] [file_name]. The chown command is used to change a file's owner or group. Syntax to change the owner: chown owner_name file_name; syntax to change the group: chown :group1 file1.txt. The du command is a standard Linux/Unix command that allows a user to obtain disk-usage information quickly. The df command reports the amount of space used and available on the file system. The mount command attaches a file system; to enable ACLs (access control lists), for instance, the file system must be mounted with the acl option. You can use fstab entries to make this permanent, and the acl option may already be active as one of the default mount options.
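The three permission triads map directly onto octal mode bits. A sketch using Python's os.chmod (a wrapper over the chmod() system call); the scratch file is illustrative:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()   # create a scratch file
os.close(fd)

# 0o754 = rwxr-xr--: owner rwx, group r-x, others r--
os.chmod(path, 0o754)
mode = stat.S_IMODE(os.stat(path).st_mode)

owner_can_write = bool(mode & stat.S_IWUSR)   # owner write bit
others_can_write = bool(mode & stat.S_IWOTH)  # world write bit

os.remove(path)
```

Each octal digit encodes one triad (4 = read, 2 = write, 1 = execute), which is why chmod modes are conventionally written in octal.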

Types of directories: - Single-level directory - Two-level directory - Tree-structured directory - Directed acyclic graph(DAG) - General graph-structured directory

- The most basic way is to keep a single large list of all files on a drive. However, when the number of files grows or the system has more than one user, a single-level directory structure becomes a severe constraint: no two files with the same name are allowed. - The two-level directory structure enables the use of the same file name across several user directories. There is a single master file directory that contains an entry for each user's file directory; at the second level, there is a separate directory for each user, which contains that user's files. The mechanism prevents a user from accessing another user's directory without authorization. - In the tree directory structure, searching is more efficient, and the concept of a current working directory is used. Files can be arranged in logical groups; we can put files of the same type in the same directory. - The tree model forbids the existence of the same file in several directories. We can allow such sharing by making the directory an acyclic graph structure: two or more directory entries can lead to the same subdirectory or file, but no directory entry may point back up the directory structure. - In a general graph directory structure, cycles are allowed, so a directory can be reached from more than one parent directory. When general graph directories are allowed, commands such as searching a directory and its subdirectories must be used with caution: if cycles are allowed, a naive search is infinite.

Disk scheduling algorithms: 1.) FCFS 2.) SSTF 3.) SCAN, C-SCAN 4.) LOOK, C-LOOK

1.) FCFS is the simplest of all disk scheduling algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue. 2.) In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. 3.) In the SCAN algorithm, the disk arm moves in a particular direction, servicing the requests in its path; after reaching the end of the disk, it reverses direction and again services the requests in its path. C-SCAN is a variant in which the disk arm, instead of reversing direction, jumps to the other end of the disk and starts servicing requests from there. 4.) The LOOK algorithm is similar to SCAN, except that the disk arm, instead of going all the way to the end of the disk, goes only as far as the last pending request in front of the head and then reverses direction. In C-LOOK, the disk arm goes only as far as the last request in its direction of travel and then jumps to the last pending request at the other end.
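FCFS and SSTF can be compared by summing head movement; the request queue and starting head position below are the classic textbook figures:

```python
def fcfs_seek(head, requests):
    # FCFS: service requests in arrival order, summing head movement.
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_seek(head, requests):
    # SSTF: always service the pending request closest to the head.
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total
```

With the head at cylinder 50 and the queue 82, 170, 43, 140, 24, 16, 190, FCFS moves the head 642 cylinders while SSTF needs only 208, illustrating why seek-aware scheduling matters.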

Partitions: 1.) Fixed size 2.) Variable size (holes)

1.) Fixed partitioning is defined as the system of dividing memory into non-overlapping partitions whose sizes are fixed, unmovable, and static. A process may be loaded into a partition of equal or greater size and is confined to its allocated partition. 2.) In variable partitioning, space in main memory is allocated strictly according to the need of the process, so there is no internal fragmentation: no unused space is left inside a partition.

Allocation algorithms: 1.) First Fit 2.) Worst fit 3.) Best Fit

1.) In first fit, the first partition that is sufficient, searching from the top of main memory, is allocated. 2.) The worst-fit algorithm searches for the largest free partition and allocates the process to it; this is designed to leave the largest possible leftover hole for future use. 3.) Best fit searches for the smallest free block of memory that is big enough to accommodate the process being allocated memory.
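The three strategies differ only in which sufficient hole they choose; a sketch, with the hole sizes below made up for illustration:

```python
def choose_hole(holes, size, strategy):
    # holes: sizes of the free partitions, in memory order.
    # Returns the index of the chosen hole, or None if nothing fits.
    fits = [i for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None
    if strategy == "first":
        return fits[0]                             # first sufficient hole
    if strategy == "best":
        return min(fits, key=lambda i: holes[i])   # smallest sufficient hole
    if strategy == "worst":
        return max(fits, key=lambda i: holes[i])   # largest hole
    raise ValueError(strategy)
```

For holes of sizes 100, 500, 200, 300, 600 and a 212-unit request: first fit picks the 500 hole, best fit the 300 hole, and worst fit the 600 hole.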

Page-replacement algorithms: 1.) FIFO (Belady's anomaly) 2.) Optimal 3.) LRU 4.) Second chance (clock)

1.) In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal. FIFO can exhibit Belady's anomaly: adding frames can increase the number of page faults. 2.) The optimal page replacement (OPT) algorithm minimizes the number of page faults by looking at future accesses and replacing the page that will not be used for the longest period of time. 3.) In the Least Recently Used (LRU) algorithm, the page to be replaced is the one least recently used. The idea is based on locality of reference: the least recently used page is not likely to be used again soon. 4.) In the Second Chance page replacement policy, the candidate pages for removal are considered in a round-robin manner, and a page that has been accessed between consecutive considerations will not be replaced. The page replaced is the one that, when considered in round-robin order, has not been accessed since its last consideration.
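FIFO and Belady's anomaly can be checked with a short simulation; the reference string below is the standard one used to exhibit the anomaly:

```python
from collections import deque

def fifo_faults(reference_string, nframes):
    # Count page faults under FIFO replacement with nframes frames.
    in_memory, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in in_memory:
            faults += 1
            if len(in_memory) == nframes:          # no free frame left
                in_memory.remove(order.popleft())  # evict the oldest page
            in_memory.add(page)
            order.append(page)
    return faults
```

On the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, three frames give 9 faults but four frames give 10: more memory, more faults.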

Steps of cache request: 1.) Set selection 2.) Line matching 3.) Word extraction

1.) In this step, the cache extracts the s set index bits from the middle of the address for w. These bits are interpreted as an unsigned integer that corresponds to a set number. In other words, if we think of the cache as a one-dimensional array of sets, then the set index bits form an index into this array. 2.) Now that we have selected some set i in the previous step, the next step is to determine if a copy of the word w is stored in one of the cache lines contained in set i. In a direct-mapped cache, this is easy and fast because there is exactly one line per set. A copy of w is contained in the line if and only if the valid bit is set and the tag in the cache line matches the tag in the address of w. 3.) Once we have a hit, we know that w is somewhere in the block. This last step determines where the desired word starts in the block.
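The set-selection and line-matching steps both start by slicing the address into tag, set index, and block offset fields; a sketch, with the bit widths and address chosen for illustration:

```python
def split_address(addr, block_bits, set_bits):
    # Address layout: | tag | set index | block offset |
    offset = addr & ((1 << block_bits) - 1)
    set_index = (addr >> block_bits) & ((1 << set_bits) - 1)
    tag = addr >> (block_bits + set_bits)
    return tag, set_index, offset
```

With 4-byte blocks (2 offset bits) and 8 sets (3 index bits), the address 0b110101100 selects set 3; the cache then compares tag 13 against the tag of each line in that set, and on a hit the offset (0 here) locates the desired word in the block.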

Page table structures: 1.) Multi-level (≥ 2 levels) 2.) Inverted (hashing)

1.) Multilevel Paging is a paging scheme that consists of two or more levels of page tables in a hierarchical manner. It is also known as hierarchical paging. The entries of the level 1 page table are pointers to a level 2 page table and entries of the level 2 page tables are pointers to a level 3 page table and so on. 2.) Inverted Page Table (IPT) is a data structure used to map physical memory pages to virtual memory pages. Unlike a traditional Page Table, which is a per-process data structure, an IPT is a system-wide data structure that contains an entry for each physical page in memory.
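A two-level lookup starts by slicing the virtual address into indexes for each level plus an offset. A sketch assuming one common 32-bit layout (10-bit level-1 index, 10-bit level-2 index, 12-bit offset, i.e. 4 KB pages):

```python
def split_va(va, index_bits=10, offset_bits=12):
    # Layout assumed: | L1 index (10) | L2 index (10) | offset (12) |
    offset = va & ((1 << offset_bits) - 1)
    l2 = (va >> offset_bits) & ((1 << index_bits) - 1)
    l1 = va >> (offset_bits + index_bits)
    return l1, l2, offset
```

For the address 0x00403004, the walk uses level-1 entry 1 to find a second-level table, level-2 entry 3 to find the frame, then adds offset 4 within the page.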

E-way set-associative cache (meaning; different values of E)

In a direct-mapped cache, each set holds exactly one cache line (E = 1). A set-associative cache relaxes this constraint so that each set holds more than one cache line: a cache with 1 < E < C/B lines per set is called an E-way set-associative cache. When E = C/B, a single set contains all the lines, and the cache is fully associative.

Nonvolatile vs. volatile memory

DRAMs and SRAMs are volatile in the sense that they lose their information if the supply voltage is turned off. Nonvolatile memories, on the other hand, retain their information even when they are powered off.

Directory implementation: - linear list - Hash table

Directory implementation in the operating system can be done using a singly linked (linear) list or a hash table. The efficiency, reliability, and performance of a file system are greatly affected by the selection of directory-allocation and directory-management algorithms. - The implementation of directories using a singly linked (linear) list is easy to program but time-consuming to execute: the directory is a linear list of file names with pointers to the data blocks. - In the hash table approach, a key-value pair is generated for each file in the directory. A hash function on the file name determines the key, and this key points to the corresponding entry in the directory. This method efficiently decreases the directory search time, as the entire list need not be searched on every operation: using the key, the hash table entry is checked, and when the file is found it is fetched.

Disk formatting, partitions

Disk formatting is the process of preparing a data storage device such as a hard disk drive, solid-state drive, floppy disk, memory card or USB flash drive for initial use. A partition is a logical division of a hard disk that is treated as a separate unit by operating systems (OSes) and file systems.

Memory controller

Each DRAM chip is connected to some circuit, known as the memory controller, that can transfer w bits at a time to and from each DRAM chip. To read the contents of supercell(i, j), the memory controller sends the row address i to the DRAM, followed by the column address j.

File System implementation - Boot control block - Volume control block - Directory structure

File system implementation is the process of designing, developing, and implementing the software components that manage the organization, allocation, and access to files on a storage device in an operating system. - A boot control block (per volume), a.k.a. the boot block in UNIX or the partition boot sector in Windows, contains information about how to boot the system off of this disk. - A volume control block (per volume), a.k.a. the superblock in UNIX or the master file table in Windows (NTFS), contains information such as the partition table, the number of blocks in the file system, and pointers to free blocks and free FCBs. - The directory structure is the organization of files into a hierarchy of folders. It should be stable and scalable; it should not fundamentally change, only be added to.

Comparison of allocation algorithms: 1.) Fragmentation: external and internal 2.) Compaction, which resolves fragmentation

Fragmentation is the condition that results when free memory is divided into small, scattered chunks, whereas compaction is a process of moving allocated memory around to create larger contiguous blocks of free memory. Fragmentation is generally considered to be a problem that needs to be avoided, whereas compaction is a technique that can be used to address fragmentation when it does occur.

Free-space management - Free space list - Bit vector - Linked list

Free space management is a critical aspect of operating systems, as it involves managing the available storage space on the hard disk or other secondary storage devices. - The free space list consists of all free disk blocks that are not allocated to any file or directory. To save a file on the disk, the operating system searches the free space list for the required amount of disk space and then allocates that space to the file. - A bit vector (bitmap) represents each block with a single bit; by one common convention, 1 means the block is free and 0 means it is allocated. This is a very compact representation and makes finding the first free block, or a run of free blocks, simple. - A linked list is a linear collection of data elements whose order is not given by their physical placement in memory; instead, each element points to the next. In free-space management, the free blocks themselves are linked together, with each free block holding a pointer to the next.
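The bit-vector scheme can be sketched in a few lines; the convention that 1 means free is an assumption of this sketch (real systems use either polarity):

```python
def first_free(bitmap):
    # Convention assumed here: bit i == 1 means block i is free.
    for i, bit in enumerate(bitmap):
        if bit:
            return i
    return None                 # no free block available

def allocate(bitmap, i):
    bitmap[i] = 0               # mark block i as in use

def free(bitmap, i):
    bitmap[i] = 1               # return block i to the free pool
```

In hardware-friendly implementations the same search is done a word at a time, skipping words that are all zeros and using bit-scan instructions on the first nonzero word.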

Victim frame (modified/dirty bit)

If there is no free frame, use a page-replacement algorithm to select an existing frame to be replaced, known as the victim frame. Assigning a modify bit, or dirty bit to each page, indicates whether or not it has been changed since it was last loaded in from disk. If the dirty bit has not been set, then the page is unchanged, and does not need to be written out to disk. Otherwise the page write is required. It should come as no surprise that many page replacement strategies specifically look for pages that do not have their dirty bit set, and preferentially select clean pages as victim pages. It should also be obvious that unmodifiable code pages never get their dirty bits set.

Contiguous memory and allocation

In contiguous memory allocation, we allocate contiguous blocks of memory to each process when it is brought in the main memory to be executed.

Cache, caching

In general, a cache is a small, fast storage device that acts as a staging area for the data objects stored in a larger, slower device. The process of using a cache is known as caching.

Hard disk drives: platter, arm, etc. -Cylinders, tracks, sectors, gaps -sector density

In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter. -A cylinder is formed when all drive heads are in the same position on the disk: the tracks, stacked on top of each other, form a cylinder. A sector is the smallest physical storage unit on the disk. In addition to the gaps left between the tracks, gaps are also left between the sectors; these gaps physically separate blocks of data and help the hard drive controller when reading from or writing to the disk. -Sector (bit) density is the number of bits stored per unit length or area of the magnetic recording medium.

Linux-specific file system structures: i-node, directory, entry (dentry), superblock, file objects

Inodes keep track of all the files on a Linux system. Except for the file name and the actual content of the file, inodes save everything else. It's like a file-based data structure that holds metadata about all of the files in the system. In Linux, most of the operations are performed on files, for example, text files or images. Directories (folders) are used to help you organize your files. Think of directories like folders in a file cabinet. They have names, just like files, but their function is to contain other files, and other directories. A dentry is the glue that holds inodes and files together by relating inode numbers to file names. Dentries also play a role in directory caching which, ideally, keeps the most frequently used files on-hand for faster access. File system traversal is another aspect of the dentry as it maintains a relationship between directories and their files. The superblock is essentially file system metadata and defines the file system type, size, status, and information about other metadata structures (metadata of metadata). The superblock is very critical to the file system and therefore is stored in multiple redundant copies for each file system. A file object allows us to use, access and manipulate all the user accessible files. One can read and write any such files.

Principle of locality: temporal and spatial locality

Locality is typically described as having two distinct forms: temporal locality and spatial locality. In a program with good temporal locality, a memory location that is referenced once is likely to be referenced again multiple times in the near future. In a program with good spatial locality, if a memory location is referenced once, then the program is likely to reference a nearby memory location in the near future.
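Both forms of locality show up in a plain nested-loop sum; a sketch for illustration:

```python
def sum_row_major(matrix):
    # Temporal locality: `total` is referenced on every iteration.
    # Spatial locality: each row's elements are visited in address
    # order (stride-1), matching row-major layout in memory.
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total
```

Visiting the elements column by column instead would compute the same sum but with large-stride accesses and poorer spatial locality, which is why row-major traversal of row-major arrays is typically faster in practice.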

Magnetic tapes

Magnetic tape is a medium for magnetic storage made of a thin, magnetizable coating on a long, narrow strip of plastic film.

ROM, PROM, EPROM, EEPROM, flash memory

ROMs are referred to collectively as read-only memories, even though some types of ROMs can be written to as well as read; they are distinguished by the number of times they can be reprogrammed and by the mechanism for reprogramming them. A programmable ROM (PROM) can be programmed exactly once. An erasable programmable ROM (EPROM) can be erased and reprogrammed on the order of 1,000 times. An electrically erasable PROM (EEPROM) is akin to an EPROM but does not require a physically separate programming device, and thus can be programmed in place on printed circuit cards; it can be reprogrammed on the order of 10^5 times before it wears out. Flash memory is a type of nonvolatile memory based on EEPROMs.

Kernel I/O subsystem: I/O scheduling, buffering, caching, spooling

I/O scheduling involves determining the best order in which to execute I/O requests to improve system performance, share device access fairly among processes, and reduce the average waiting time for I/O to complete. A buffer is a memory area that stores data being transferred between two devices or between a device and an application. A cache is a region of fast memory that holds a copy of data; access to the cached copy is much faster than access to the original. A spool is a buffer that holds the output for a device, such as a printer, that cannot accept interleaved data streams.

Unix system calls for files (e.g., open(), lseek(), read(), write(), close())

The open() function provides a wide range of options to specify file access mode, permissions, and flags. Syntax: int open(const char *pathname, int flags, mode_t mode). lseek() repositions the file offset and can specify offsets past the current end of the file; if data is written at such a point, read operations in the gap between this data and the old end of the file return bytes containing zeros. Syntax: off_t lseek(int fildes, off_t offset, int whence). read() reads data from a file into a buffer: ssize_t read(int fildes, void *buf, size_t nbyte). write() writes up to nbyte bytes from the buffer starting at buf to the file referred to by the file descriptor: ssize_t write(int fildes, const void *buf, size_t nbyte). close() closes a file descriptor so that it no longer refers to any file and may be reused: int close(int fildes).
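These calls can be exercised through Python's os module, whose functions are thin wrappers over the same syscalls; the scratch file and byte values are illustrative. Note how seeking past end of file and writing leaves a zero-filled gap:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()        # stands in for open(..., O_CREAT|O_RDWR)
os.write(fd, b"abcdef")              # file offset is now 6

os.lseek(fd, 2, os.SEEK_SET)         # reposition the offset to byte 2
middle = os.read(fd, 2)              # reads the two bytes there

os.lseek(fd, 8, os.SEEK_SET)         # seek past the current end of file
os.write(fd, b"Z")                   # the gap will read back as zeros

os.lseek(fd, 0, os.SEEK_SET)
contents = os.read(fd, 16)           # whole file, hole included

os.close(fd)                         # the descriptor may now be reused
os.remove(path)
```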

File Systems

The overall structure of an operating system, in which files are named, organized, and stored. FAT and NTFS are types of file systems.

Paging: 1.) Pages (fixed size) 2.) Page table 3.) Address space identifier (ASID) 4.) Frames 5.) Translation look-aside buffer (TLB) 6.) Protection 7.) Page sharing

The process of retrieving processes in the form of pages from secondary storage into main memory is known as paging: 1.) A page (also called a virtual page or memory page) is a fixed-length block of contiguous virtual memory; a page size of 4 KB is common. 2.) A page table is a data structure used by a virtual memory system in a computer operating system to store the mappings between virtual addresses and physical addresses. 3.) The ASID is a number assigned by the OS to each individual task, allowing TLB entries from different address spaces to coexist. 4.) A frame is a fixed-size block of physical memory into which a page is loaded; frames are the same size as pages. 5.) A translation lookaside buffer (TLB) is a memory cache that stores recent translations of virtual memory addresses to physical memory addresses. 6.) The paging process is protected by inserting an additional bit, called the valid/invalid bit, into each page-table entry. This bit indicates whether a page is currently in memory: if the bit is set to valid, the page is in memory; if it is set to invalid, the page is not, and a reference to it traps to the OS. 7.) Page sharing is a mechanism for more effective use of physical memory by storing only once memory pages that are identical across two or more processes (or, in a hypervisor, across virtual machines).

Disk performance: transfer rate, seek time, rotational latency

The transfer rate determines how long the actual data read or write takes. Seek time measures the delay for the disk head to reach the correct track. Rotational latency accounts for the time for the desired sector to rotate under the head.

Virtual (logical) vs. physical addresses

The use of virtual addresses allows a program to behave as if it has exclusive use of the main memory, even though other processes are also running and using memory. On the other hand, a physical address is a location in the actual hardware memory, such as RAM.

Process's Virtual memory address space

The virtual address space for a process is the set of virtual memory addresses that it can use. The address space for each process is private and cannot be accessed by other processes unless it is shared.

Cache (re)placement policy

The process of overwriting an existing block is known as replacing or evicting the block; the block that is evicted is sometimes referred to as a victim block. The decision about which block to replace is governed by the cache's replacement policy (for example, least recently used, LRU).

Thrashing: 1.) cause, solutions 2.) CPU utilization vs. degree of multiprogramming 3.) Locality of reference 4.) Working set model

Thrashing occurs in a system with virtual memory when a computer's real storage resources are overcommitted, leading to a constant state of paging and page faults that slows most application-level processing. 1.) Causes: a high degree of multiprogramming, a lack of frames, and a poor page replacement policy. Solutions draw on locality of reference and the working set model. 2.) Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute; the idea is to keep multiple jobs in main memory, and if one job becomes occupied with I/O, the CPU can be assigned to another. Multitasking is a logical extension of multiprogramming. Past a certain degree of multiprogramming, however, utilization drops sharply as the system begins to thrash. 3.) A locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from one locality to another; a program is generally composed of several different localities, which may overlap. 4.) The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM.

Booting

To boot (to boot up, to start up or booting) a computer is to load an operating system (OS) into the computer's main memory or RAM.

cold, conflict, and capacity misses

An empty cache is sometimes referred to as a cold cache, and misses of this kind are called compulsory misses or cold misses. Restrictive placement policies lead to a type of miss known as a conflict miss, in which the cache is large enough to hold the referenced data objects, but because they map to the same cache block, the cache keeps missing. When the size of the working set exceeds the size of the cache, the cache experiences what are known as capacity misses; in other words, the cache is simply too small to handle this particular working set.

File access methods (sequential, random)

Sequential access: information in the file is processed in order, one record after the other. Random access (also called direct access): data can be accessed directly at any location within the file, without the need to read or write all the records that come before it.

