CMPSC472 TEST 1


The basic components in a computer system, what they do and how they work (CPU, memory, I/O, etc.)

- Input: accepts (or reads) the list of instructions and data from the outside world, converts these instructions and data into a computer-acceptable format, and supplies the converted instructions and data to the computer system for further processing.
- Output: accepts the results produced by the computer, which are in coded form and hence cannot easily be understood by us; converts these coded results into human-acceptable (readable) form; and supplies the converted results to the outside world.
- CPU: the main unit inside the computer, responsible for all events inside it. It controls all internal and external devices and performs arithmetic and logical operations. The operations a microprocessor performs are called the instruction set of that processor. The instruction set is hard-wired into the CPU and determines the machine language for the CPU; the more complicated the instruction set, the slower the CPU works, and processors differ from one another in their instruction sets. The control unit and the arithmetic and logic unit of a computer system are jointly known as the central processing unit (CPU). The CPU is the brain of any computer system: all major calculations and comparisons are made inside it, and it is also responsible for activating and controlling the operations of the other units of the system.
- Main memory: physical memory that is internal to the computer. The word "main" distinguishes it from external mass storage devices such as disk drives; another term for main memory is RAM. The computer can manipulate only data that is in main memory, so every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory is crucial because it determines how many programs can execute at one time and how much data can be readily available to a program. Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data: when one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room.
- Secondary memory: computer memory that is non-volatile and persistent in nature and is not directly accessed by the processor. It allows a user to store data that may be instantly and easily retrieved, transported, and used by applications and services. Secondary memory is also known as secondary storage.
- Bus: in computer architecture, a communication system that transfers data between components inside a computer, or between computers. The expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols.

The role of the process control block and its contents

- The role of the PCB is central in process management: PCBs are accessed and/or modified by most OS utilities, including those involved with scheduling, memory and I/O resource access, and performance monitoring. It can be said that the set of PCBs defines the current state of the operating system. Data structuring for processes is often done in terms of PCBs; for example, pointers to other PCBs inside a PCB allow the creation of queues of processes in various scheduling states ("ready", "blocked", etc.). Contents:
- Process state: new, ready, running, waiting, or terminated, depending on CPU scheduling.
- Process number: a unique identification number for each process in the operating system.
- Program counter: a pointer to the address of the next instruction to be executed for this process.
- CPU registers: the contents of the processor's register set, which must be saved so the process can resume execution in the running state.
- CPU scheduling information: the scheduling state ("ready", "suspended", etc.), a priority value, the amount of time elapsed since the process gained control of the CPU or since it was suspended, and, for a suspended process, identification of the event the process is waiting for.
- Process structuring information: the IDs of the process's children, or of other processes related to the current one in some functional way, which may be represented as a queue, a ring, or another data structure.
- Interprocess communication information: flags, signals, and messages associated with communication among independent processes.
- Process privileges: allowed/disallowed access to system resources.
- Memory management information: page table, memory limits, and segment table, depending on the memory system used by the operating system.
- Accounting information: the amount of CPU time used for process execution, time limits, execution ID, etc.
- I/O status information: a list of I/O devices allocated to the process.

Kernel and user modes (how they differ, how the system knows which one it is in, etc.)

- Kernel mode: generally reserved for the lowest-level, most trusted functions of the operating system. - User mode: due to the protection afforded by this sort of isolation, crashes in user mode are recoverable. Differences: 1. In kernel mode, the executing code has complete and unrestricted access to the underlying hardware; it can execute any CPU instruction and reference any memory address. In contrast, in user mode the executing code has no ability to directly access hardware or reference arbitrary memory; code running in user mode must delegate to system APIs to access hardware or memory. 2. Crashes in kernel mode are catastrophic; they will halt the entire PC. Crashes in user mode are recoverable. How the system knows which mode it is in: the hardware maintains a mode bit in a processor status register; it is set to kernel mode when an interrupt, trap, or system call transfers control to the operating system, and reset to user mode before control returns to a user program.

The difference between a thread and a process

1. Threads share the address space of the process that created them; processes have their own address space. 2. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process. 3. Threads can directly communicate with other threads of their process; processes must use interprocess communication to communicate with sibling processes. 4. New threads are easily created; new processes require duplication of the parent process. 5. It is faster for an operating system to switch between threads than it is to switch between different processes.
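The first two differences can be observed directly. A minimal sketch, assuming a POSIX system (it uses os.fork); the function names here are ours, invented for illustration:

```python
# Threads share the creating process's address space; a forked child only
# gets a copy of it (POSIX sketch; function names are ours).
import os
import threading

state = {"value": 0}   # lives in this process's memory

def _bump():
    state["value"] = 42

def thread_write_is_visible():
    t = threading.Thread(target=_bump)
    t.start()
    t.join()
    return state["value"]        # 42: the thread mutated our memory

def child_write_is_not_visible():
    state["value"] = 0
    pid = os.fork()
    if pid == 0:                 # child works on its own copy
        state["value"] = 42
        os._exit(0)
    os.waitpid(pid, 0)
    return state["value"]        # still 0 in the parent

if __name__ == "__main__":
    print(thread_write_is_visible(), child_write_is_not_visible())  # 42 0
```

The thread's write lands in the shared data area; the child's write lands only in its private copy.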

Provide two programming examples in which multithreading does not provide better performance than a single-threaded solution.

1. Any kind of sequential program is not a good candidate to be threaded. An example of this is a program that calculates an individual tax return. 2. Another example is a "shell" program such as the C-shell or Korn shell. Such a program must closely monitor its own working space such as open files, environment variables, and current working directory.

Describe the actions taken by a kernel to context-switch between processes.

1. In response to a clock interrupt, the OS saves the PC and user stack pointer of the currently executing process and transfers control to the kernel clock interrupt handler. 2. The clock interrupt handler saves the rest of the registers, as well as other machine state, such as the state of the floating-point registers, in the process PCB. 3. The OS invokes the scheduler to determine the next process to execute. 4. The OS then retrieves the state of the next process from its PCB and restores the registers. This restore operation takes the processor back to the state in which this process was previously interrupted, executing in user code with user-mode privileges.

Using Amdahl's Law, calculate the speedup gain of an application that has a 60 percent parallel component for (a) two processing cores and (b) four processing cores.

Speedup = 1 / ((1 - p) + p/N) with p = 0.6. (a) Two cores: 1 / ((1 - 0.6) + 0.6/2) = 1 / 0.70 ≈ 1.43. (b) Four cores: 1 / ((1 - 0.6) + 0.6/4) = 1 / 0.55 ≈ 1.82.
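The arithmetic can be checked in a few lines (the function name is ours):

```python
# Checking the two answers: speedup = 1 / ((1 - p) + p / n), with p = 0.6.
def amdahl_speedup(p, n):
    """Theoretical speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.6, 2), 2))  # 1.43
print(round(amdahl_speedup(0.6, 4), 2))  # 1.82
```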

System calls (what they are, why we need them, how they work, different types)

A system call, sometimes referred to as a kernel call, is a request in a Unix-like operating system made via a software interrupt by an active process for a service performed by the kernel. System calls can also be viewed as clearly defined, direct entry points into the kernel through which programs request services from the kernel. They allow programs to perform tasks that would not normally be permitted. System calls can be roughly grouped into five major categories:
- Process control: load, execute; end, abort; create process (for example, fork on Unix-like systems, or NtCreateProcess in the Windows NT Native API); terminate process; get/set process attributes; wait for time, wait event, signal event; allocate and free memory.
- File management: create file, delete file; open, close; read, write, reposition; get/set file attributes.
- Device management: request device, release device; read, write, reposition; get/set device attributes; logically attach or detach devices.
- Information maintenance: get/set time or date; get/set system data; get/set process, file, or device attributes.
- Communication: create, delete communication connection; send, receive messages; transfer status information; attach or detach remote devices.
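A hedged sketch exercising a few of these categories through Python's os module, assuming a POSIX system (os.waitstatus_to_exitcode needs Python 3.9+):

```python
# File-management and process-control system calls, issued via os (POSIX).
import os

# File management: create, write, reposition, read, close, delete.
fd = os.open("demo.txt", os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)      # reposition to the start of the file
data = os.read(fd, 5)
os.close(fd)
os.unlink("demo.txt")             # delete the file

# Process control: create a process, terminate it, wait for it.
pid = os.fork()
if pid == 0:
    os._exit(7)                   # child terminates with status 7
_, status = os.waitpid(pid, 0)
print(data, os.waitstatus_to_exitcode(status))  # b'hello' 7
```

Each os call here maps more or less directly onto the kernel entry point of the same name.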

Interrupts and traps and how they are processed, as well as similarities and differences

A trap is an exception in a user process, caused by, for example, division by zero or an invalid memory access. It is also the usual way to invoke a kernel routine (a system call), because those run with a higher privilege than user code. Handling is synchronous: the user code is suspended and continues afterwards. In a sense traps are "active"; most of the time, the code expects the trap to happen and relies on this fact. An interrupt is something generated by the hardware (devices like the hard disk, graphics card, I/O ports, etc.). Interrupts are asynchronous (they don't happen at predictable places in the user code) or "passive", since the interrupt handler has to wait for them to happen eventually. You can also see a trap as a kind of CPU-internal interrupt, since the trap handler looks like an interrupt handler: registers and stack pointers are saved, there is a context switch, and execution can resume in some cases where it left off. In short: an interrupt is a hardware-generated change of flow within the system, and an interrupt handler is summoned to deal with its cause, after which control returns to the interrupted context and instruction; a trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O operation and so obviate the need for device polling; a trap can be used to call operating system routines or to catch arithmetic errors.

The advantages/disadvantages to using threads:

Advantages: Less overhead to establish and terminate vs. a process: because very little memory copying is required (just the thread stack), threads are faster to start than processes. Faster task-switching: in many cases, it is faster for an operating system to switch between threads than between different processes. Data sharing with other threads in a process: for tasks that require sharing large amounts of data, the fact that threads all share a process's memory pool is very beneficial. Disadvantages: Global variables are shared between threads; inadvertent modification of shared variables can be disastrous. A memory crash in one thread kills the other threads sharing the same memory, unlike with processes. With user-level threads, if one of the threads in the process blocks, the whole process blocks.

The application of Amdahl's law to speed up (parallelism) and how to apply it

Amdahl's law: S_latency = 1 / ((1 - p) + p/s), where S_latency is the theoretical speedup in latency of the execution of the whole task, p is the fraction of the code that runs in parallel, 1 - p is the fraction that runs serially, and s is the number of processors.

What is the main advantage of the microkernel approach to system design? How do user programs and system services interact in a microkernel architecture? What are the disadvantages of using the microkernel approach?

An OS called Mach modularized the kernel using the microkernel approach. This method structures the OS by removing all non-essential components from the kernel and implementing them as system- and user-level programs. The microkernel provides minimal process and memory management together with a communication facility; user programs and system services communicate indirectly via message passing. All new services are added in user space and consequently do not require modification of the kernel. The microkernel approach provides more security and reliability, since most services run as user processes rather than in the kernel. Disadvantage: it suffers from a performance decrease due to increased system-function (message-passing) overhead.

Keeping in mind the various definitions of operating system, consider whether the operating system should include applications such as Web browsers and mail programs. Argue both that it should and that it should not, and support your answers.

An argument in favor of including popular applications with the operating system is that if the application is embedded within the operating system, it is likely to be better able to take advantage of features in the kernel and therefore have performance advantages over an application that runs outside of the kernel. Arguments against embedding applications within the operating system typically dominate, however: (1) the applications are applications, and not part of an operating system; (2) any performance benefits of running within the kernel are offset by security vulnerabilities; (3) it leads to a bloated operating system.

What is the purpose of interrupts? How does an interrupt differ from a trap? Can traps be generated intentionally by a user program? If so, for what purpose?

An interrupt is a hardware-generated change of flow within the system. An interrupt handler is summoned to deal with the cause of the interrupt; control is then returned to the interrupted context and instruction. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O to obviate the need for device polling. A trap can be used to call operating system routines or to catch arithmetic errors.

What are the three main purposes of an operating system?

An operating system serves three main purposes. (1) It provides an environment in which a user can execute programs conveniently and efficiently. (2) As a resource allocator, it manages and allocates the computer's resources (CPU time, memory space, I/O devices) among programs and users. (3) As a control program, it serves two major functions: supervision of the execution of user programs to prevent errors and improper use of the computer, and management of the operation and control of I/O devices.

What resources are used when a thread is created? How do they differ from those used when a process is created?

Because a thread is smaller than a process, thread creation typically uses fewer resources than process creation. Creating a process requires allocating a process control block (PCB), a rather large data structure. The PCB includes a memory map, list of open files, and environment variables. Allocating and managing the memory map is typically the most time-consuming activity. Creating either a user or kernel thread involves allocating a small data structure to hold a register set, stack, and priority.

Parallel vs. concurrent processes

Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant; e.g., multitasking on a single-core machine. Parallelism is when tasks literally run at the same instant, e.g., on a multicore machine.

How cache works (block transfers, etc.)

Cache is a small, high-speed memory that stores data from some frequently used addresses of main memory. Data moves between main memory and the cache in fixed-size blocks (cache lines): on a miss, the entire block containing the requested address is transferred into the cache, which exploits locality of reference so that subsequent accesses to nearby addresses hit.

Give two reasons why caches are useful. What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?

Caches are useful when two or more components need to exchange data, and the components perform transfers at differing speeds. Caches solve the transfer problem by providing a buffer of intermediate speed between the components. If the fast device finds the data it needs in the cache, it need not wait for the slower device. The data in the cache must be kept consistent with the data in the components. If a component has a data value change, and the datum is also in the cache, the cache must also be updated. This is especially a problem on multiprocessor systems where more than one process may be accessing a datum. A component may be eliminated by an equal-sized cache, but only if: (a) the cache and the component have equivalent state-saving capacity (that is, if the component retains its data when electricity is removed, the cache must retain data as well), and (b) the cache is affordable, because faster storage tends to be more expensive.

Interprocess communication (shared memory, message passing) [We didn't spend much time on this]

Concept: inter-process communication or interprocess communication (IPC) refers specifically to the mechanisms an operating system provides to allow processes to share data. Typically, applications can use IPC categorized as clients and servers, where the client requests data and the server responds to client requests. Approaches: Shared memory: Multiple processes are given access to the same block of memory which creates a shared buffer for the processes to communicate with each other. Message passing: Allows multiple programs to communicate using message queues and/or non-OS managed channels, commonly used in concurrency models.
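Both approaches can be sketched in miniature, assuming a POSIX system (os.fork); the function names are ours:

```python
# Shared memory vs. message passing, in miniature (POSIX sketch).
import multiprocessing as mp
import os

def message_passing():
    """Client/server over a pipe: a kernel-managed message channel."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child acts as the "server"
        os.close(r)
        os.write(w, b"pong")     # send the reply message
        os._exit(0)
    os.close(w)
    reply = os.read(r, 4)        # client blocks until the message arrives
    os.waitpid(pid, 0)
    return reply

def _increment(counter):
    with counter.get_lock():     # both processes touch the same block
        counter.value += 1

def shared_memory():
    """A counter in a shared block of memory, updated by a child process."""
    counter = mp.Value("i", 0)
    p = mp.Process(target=_increment, args=(counter,))
    p.start()
    p.join()
    return counter.value

if __name__ == "__main__":
    print(message_passing(), shared_memory())  # b'pong' 1
```

With the pipe, data is copied through the kernel; with the shared Value, both processes read and write the same buffer and must synchronize with a lock.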

Describe the actions taken by a kernel to context-switch between kernel-level threads

Context switching between kernel threads typically requires saving the value of the CPU registers from the thread being switched out and restoring the CPU registers of the new thread being scheduled.

The problems introduced by multiprogramming

1. Copying or stealing another user's programs or data. Today, everyone depends on computers for daily tasks: research, accounting, programming; creating, printing, deleting, and searching files; and leisure activities such as gaming. In a multiprogramming and time-sharing environment where several users share the system simultaneously, there is no assurance that each user's files, programs, and data will remain private or restricted from other users, especially when a user does not know how to protect his files from exposure. Two examples of this situation are TeamViewer and the built-in Remote Desktop Service (formerly Terminal Services), which are used similarly but execute differently. With the Remote Desktop Service, you can specify how many users may connect to your computer and what their limitations are; but once a user is connected, he has the freedom to do whatever he wants, whether creating, deleting, or searching for files, and at worst he can copy and steal confidential information and data and transfer it to his own computer, an illegal transfer of electronic data. This is possible because a multiprogramming, time-sharing environment lets you run programs at the same time as other tasks; generally speaking, it is a multitasking activity.
2. Using system resources (CPU, memory, disk space, peripherals) with improper accounting. When several users use the system simultaneously, there is no guarantee of a specific allocation or limit on each user's use of system resources. As time goes by, each user occupies bytes of memory and disk space, CPU time for his programs, and time on peripherals. As usage grows, there is a real possibility of a shortage of memory and disk space, especially when resources are not used with proper accounting or allocation. When this happens, all users sharing and connected to the system are interrupted, and at worst their programs crash and their paperwork and the like are not saved. One way to ease this scenario is virtualization, which can host multiple virtualized environments within a single OS instance (VMware software is an example).

EAT formula(Effective Access Time)

EAT (effective access time) = H × (cost of L1) + (1 − H) × (cost of L2), where H is the hit rate of the faster level (L1).
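Evaluating the formula with illustrative numbers (the hit rate and costs below are assumptions for the example, not values from the course):

```python
# EAT with an assumed 95% hit rate, 10 ns L1 cost, 100 ns L2 cost.
def effective_access_time(hit_rate, cost_l1, cost_l2):
    """EAT = H * cost_L1 + (1 - H) * cost_L2."""
    return hit_rate * cost_l1 + (1 - hit_rate) * cost_l2

print(round(effective_access_time(0.95, 10, 100), 2))  # 14.5
```

Note how even a 5% miss rate pulls the average well above the 10 ns hit cost.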

What happens when a context switch occurs

In a context switch, the state of the process currently executing must be saved somehow, so that when it is rescheduled, this state can be restored. The process state includes all the registers that the process may be using, especially the program counter, plus any other operating-system-specific data that may be necessary. This is usually stored in a data structure called a process control block (PCB). Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which process is chosen from the ready queue.

The concept of a thread

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler.

What a process actually is and how it is represented in the system

In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently. This page shows how a process is represented in the system: https://www.tutorialspoint.com/operating_system/os_processes.htm

System boot (what happens and how)

In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.

When the computer's power is first turned on, the CPU initializes itself, triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores that first instruction, an instruction to run the power-on self test (POST), at a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices such as hard drives and floppy drives, ports, and other hardware devices such as the keyboard and mouse, to ensure they are functioning properly.

Once the POST has determined that all components are functioning properly and the CPU has successfully initialized, the BIOS looks for an OS to load. The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs the OS loads from the C drive on the hard drive, though the BIOS can also load the OS from a floppy disk, CD or ZIP drive. The order of drives the CMOS consults to locate the OS is called the boot sequence, which can be changed by altering the CMOS setup. Looking to the appropriate boot drive, the BIOS first encounters the boot record, which tells it where to find the beginning of the OS and the program that will initialize it. The BIOS copies the OS's files into memory, the OS initializes, and the OS then takes over control of the boot process.

Now in control, the OS performs another inventory of the system's memory and memory availability (which the BIOS already checked) and loads the device drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system's applications to perform tasks.

Interrupts and how they work

In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts.
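POSIX signals provide a user-space analogue of this sequence: register a handler, suspend normal flow when the signal arrives, run the handler, then resume. A minimal sketch:

```python
# Software-interrupt sketch with POSIX signals: register a handler (the
# "interrupt service routine"), raise the signal, and observe that normal
# execution resumes once the handler returns.
import os
import signal

events = []

def handler(signum, frame):
    events.append(signum)             # the handler runs first...

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)  # deliver the "interrupt" to ourselves
events.append("resumed")              # ...then normal flow continues
print(events)                         # handler entry, then "resumed"
```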

What are the advantages of using loadable kernel modules?

It is difficult to predict what features an operating system will need when it is being designed. The advantage of using loadable kernel modules is that functionality can be added to and removed from the kernel while it is running; there is no need to either recompile or reboot the kernel.

Microkernels vs. monolithic kernels (including benefits and disadvantages of one over another)

Monolithic kernel is a single large process running entirely in a single address space. It is a single static binary file. All kernel services exist and execute in the kernel address space. The kernel can invoke functions directly. Examples of monolithic kernel based OSs: Unix, Linux. In microkernels, the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user-space. All servers are kept separate and run in different address spaces. Servers invoke "services" from each other by sending messages via IPC (Interprocess Communication). This separation has the advantage that if one server fails, other servers can still work efficiently. Examples of microkernel based OSs: Mac OS X and Windows NT.

What is the purpose of the command interpreter? Why is it usually separate from the kernel?

It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel since the command interpreter is subject to changes.
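A toy command interpreter makes the point concrete: it reads one command line and turns it into fork/exec/wait system calls. This is a hedged sketch assuming a POSIX system and Python 3.9+; run_command is our name:

```python
# Minimal command-interpreter core: one command line in, one child out.
import os
import shlex

def run_command(line):
    """Execute one command line in a child process; return its exit code."""
    argv = shlex.split(line)
    pid = os.fork()                 # system call: create a new process
    if pid == 0:
        os.execvp(argv[0], argv)    # system call: replace the child's image
    _, status = os.waitpid(pid, 0)  # system call: wait for the child
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    # A real shell would loop reading input; one canned command suffices.
    print(run_command("echo hello from the shell sketch"))
```

Because all the real work happens through system calls, the interpreter itself can live outside the kernel and be replaced freely.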

The relationship between the number of bits in an address and how many unique locations that address can reference (such as Problem 1.3)

An n-bit address can reference 2^n unique locations. How those addresses then map into a cache depends on the cache organization (set associative, direct mapped, etc.); as we saw in homework 1, problem 3, the same addresses can be mapped differently under different organizations.
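The underlying relationship, independent of any cache mapping, is simply:

```python
# n address bits name 2**n distinct locations.
def addressable_locations(bits):
    return 2 ** bits

print(addressable_locations(16))  # 65536 (a 16-bit address space)
print(addressable_locations(32))  # 4294967296 (a 32-bit address space)
```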

User/kernel mode and privileged instructions

Kernel mode: In kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC. User mode: In user mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer executes in user mode.

Tradeoffs between using multiple processes vs. multiple threads

Advantages of multiple threads: less overhead to establish and terminate vs. a process (because very little memory copying is required, just the thread stack, threads are faster to start than processes); faster task-switching (in many cases it is faster for an operating system to switch between threads than between different processes); data sharing with other threads in a process (for tasks that require sharing large amounts of data, the fact that threads all share a process's memory pool is very beneficial). Threads are a useful choice when you have a workload consisting of lightweight tasks (in terms of processing effort or memory size), for example a web server servicing page requests. Processes are a useful choice for parallel programming with workloads where tasks take significant computing power, memory, or both.

Multiprocessor/multicore/multithreading/multitasking

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. Multicore is usually the term used to describe two or more CPUs working together on the same chip. Multithreading is a technique by which a single set of code can be used by several processors at different stages of execution. In computing, multitasking is the concept of performing multiple tasks (also known as processes) over a certain period of time by executing them concurrently.

Process states, how a process changes from one state to another, and the queues associated with them

New: When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. Ready: A "ready" process has been loaded into main memory and is awaiting execution on a CPU. Running: A process moves into the running state when it is chosen for execution. Blocked: A process is blocked when it is waiting for some event to occur, such as an I/O operation completing or a signal arriving. (Exhausting its CPU time allocation, by contrast, returns a running process to the ready state.) Terminated: A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. (The following two states and queues were summarized by me; review the class notes.) Ready-suspended: The process was in the ready state and has been swapped out; when it is activated, it goes back to the ready state. Blocked-suspended: The process was in the blocked state and has been swapped out. (1) When it is activated, it goes back to the blocked state. (2) When its event or I/O completes, it moves to the ready-suspended state. Queues: Ready queue: When a new process is admitted, it is moved into the ready queue to wait for the scheduler to dispatch it. Event queues (Professor Null didn't give the queues specific names): 1. When a process blocks waiting for an event (e.g., I/O completion), it is moved to the queue for that event. 2. When the event completes, the process is moved back to the ready queue. 3. I think there must also be queues such as a blocked-suspended queue and a ready-suspended queue.

When a process creates a new process using the fork() operation, which of the following states is shared between the parent process and the child process? a. Stack b. Heap c. Shared memory segments

Only the shared memory segments are shared between the parent process and the newly forked child process. Copies of the stack and the heap are made for the newly created process.
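This can be observed directly. A POSIX sketch in which an anonymous MAP_SHARED mapping stands in for a shared memory segment, while a bytearray stands in for heap data:

```python
# After fork(), a MAP_SHARED mapping is shared with the child; ordinary
# (heap) data is merely copied (POSIX sketch).
import mmap
import os

seg = mmap.mmap(-1, 1)      # anonymous shared mapping (MAP_SHARED default)
heap = bytearray(b"\x00")   # ordinary heap-allocated data

pid = os.fork()
if pid == 0:
    seg[0] = 1              # shared segment: the parent will see this
    heap[0] = 1             # child's private copy: the parent will not
    os._exit(0)
os.waitpid(pid, 0)
print(seg[0], heap[0])      # 1 0
```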

The fetch-decode-execute instruction cycle

Each cycle, the CPU fetches the instruction at the address in the program counter, increments the program counter, decodes the instruction to determine the operation and its operands, executes it, and repeats.
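A toy machine makes the cycle concrete (the three opcodes below are invented for the sketch, not a real instruction set):

```python
# Minimal fetch-decode-execute loop for a made-up accumulator machine.
LOAD, ADD, HALT = 0, 1, 2

def run(program):
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        opcode, operand = program[pc]  # fetch the instruction at pc
        pc += 1                        # advance the program counter
        if opcode == LOAD:             # decode, then execute
            acc = operand
        elif opcode == ADD:
            acc += operand
        elif opcode == HALT:
            return acc

print(run([(LOAD, 2), (ADD, 3), (HALT, 0)]))  # 5
```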

Process trees

A process tree shows running processes as a tree. The tree is rooted at a given pid, or at init if no pid is given. If a user name is specified, all process trees rooted at processes owned by that user are shown.

Suspension and swapping (what they are and how they differ)

Suspension: a process is placed in a suspended state, making it ineligible to run until it is explicitly activated (resumed); the decision may come from the OS, a parent process, or a user. Swapping: a process may be swapped out, that is, removed from main memory and placed on external storage by the scheduler, typically to free memory. The difference: suspension is a change in the process's scheduling state, while swapping is the memory-management mechanism of moving the process image to disk; a suspended process is usually swapped out, which is why the suspend states are also described as swapped-out states.

What is the purpose of system calls?

System calls allow user-level processes to request services of the operating system.

What an OS does with processes (create, terminate, manage) and how

The creation of a process: each process is named by a process ID number, and a unique process ID is allocated to each process when it is created. Creating a process typically involves building the process image in an address space (loading the executable code for the task from some mass storage medium), creating and initializing a PCB for the process, and inserting the PCB into one of the process control queues (typically the ready queue). Termination of a process: when a process terminates, the kernel releases the resources owned by the process and notifies the process's parent of its termination.

1.5 How does the distinction between kernel mode and user mode function as a rudimentary form of protection (security) system?

The distinction between kernel mode and user mode provides a rudimentary form of protection in the following manner. Certain instructions could be executed only when the CPU is in kernel mode. Similarly, hardware devices could be accessed only when the program is executing in kernel mode. Control over when interrupts could be enabled or disabled is also possible only when the CPU is in kernel mode. Consequently, the CPU has very limited capability when executing in user mode, thereby enforcing protection of critical resources.

What are the five major activities of an operating system with regard to process management?

The five major activities are: a. The creation and deletion of both user and system processes b. The suspension and resumption of processes c. The provision of mechanisms for process synchronization d. The provision of mechanisms for process communication e. The provision of mechanisms for deadlock handling

Which of the following instructions should be privileged? a. Set value of timer. b. Read the clock. c. Clear memory. d. Issue a trap instruction. e. Turn off interrupts. f. Modify entries in device-status table. g. Switch from user to kernel mode. h. Access I/O device

The following operations need to be privileged: Set value of timer, clear memory, turn off interrupts, modify entries in device-status table, access I/O device. The rest can be performed in user mode.

The scheduler (dispatcher) in an OS and how it uses the ready queue

The process scheduler is the part of the operating system that decides which process runs at a given point in time. How it uses the ready queue (summarized by myself): when a new process is admitted, it is moved into the ready queue, where it waits for the scheduler to select and dispatch it to the CPU.

The concepts of improper synchronization, failed mutual exclusion, nondeterminate program execution, and deadlocks

Improper synchronization - the software attempts to use a shared resource in an exclusive manner but does not prevent, or incorrectly prevents, use of the resource by another thread or process. Failed mutual exclusion - two or more threads or processes are inside their critical sections at the same time. Nondeterminate program execution - when unsynchronized processes share data, the results of a program can depend on the relative timing of the processes, so the same input may produce different output on different runs. Deadlocks - a deadlock occurs when each of two (or more) processes is waiting for a resource that another is still holding, so none of them can finish.

Which of the following components of program state are shared across threads in a multithreaded process? a. Register values b. Heap memory c. Global variables d. Stack memory

The threads of a multithreaded process share heap memory and global variables. Each thread has its separate set of register values and a separate stack.

What are the three major activities of an operating system with regard to memory management?

The three major activities are: a. Keep track of which parts of memory are currently being used and by whom. b. Decide which processes are to be loaded into memory when memory space becomes available. c. Allocate and deallocate memory space as needed.

The OS responsibilities for memory management

The three major activities are: a. Keep track of which parts of memory are currently being used and by whom. b. Decide which processes are to be loaded into memory when memory space becomes available. c. Allocate and deallocate memory space as needed.

What are the different types of cache associations, and how do they work?

The various cache organizations: Direct mapped - each memory block maps to exactly one cache line (line = block number mod number of lines); lookup is fast and cheap, but two blocks that map to the same line evict each other. Fully associative - a block may be placed in any cache line, so all tags must be searched in parallel; flexible but expensive in hardware. Set associative - a compromise: a block maps to exactly one set but may occupy any of the N lines within that set (N-way), so only N tags are compared per lookup. (PICTURE INSIDE THE GUIDE)

Main purposes of OS

To provide an environment for a computer user to execute programs on computer hardware in a convenient and efficient manner. To allocate the separate resources of the computer as needed to solve the problem given; the allocation process should be as fair and efficient as possible. As a control program it serves two major functions: (1) supervision of the execution of user programs to prevent errors and improper use of the computer, and (2) management of the operation and control of I/O devices.

The various processor registers and what they do

User-accessible registers can be read or written by machine instructions. The most common division of user-accessible registers is into data registers and address registers. Data registers can hold numeric values such as integers and, in some architectures, floating-point values, as well as characters and small bit arrays; in some older and low-end CPUs, a special data register known as the accumulator is used implicitly for many operations. Address registers hold addresses and are used by instructions that indirectly access primary memory. General-purpose registers (GPRs) can store both data and addresses, i.e., they are combined data/address registers; rarely, the register file is unified to include floating point as well. Status registers hold truth values (flags) often used to determine whether some instruction should or should not be executed. The instruction register (IR) holds the instruction currently being executed. Registers related to fetching information from RAM: the memory address register (MAR), which holds the address to be read or written, and the memory buffer register (MBR), also called the memory data register (MDR), which holds the data traveling over the data bus. Registers are what the CPU is working with right now. http://ecomputernotes.com/fundamental/input-output-and-memory/what-is-registers-function-performed-by-registers-types-of-registers

Under what circumstances does a multithreaded solution using multi- ple kernel threads provide better performance than a single-threaded solution on a single-processor system?

When a kernel thread suffers a page fault, another kernel thread can be switched in to use the interleaving time in a useful manner. A single-threaded process, on the other hand, will not be capable of performing useful work when a page fault takes place. Therefore, in scenarios where a program might suffer from frequent page faults or has to wait for other system events, a multithreaded solution would perform better even on a single-processor system.

Explain the role of the init process on UNIX and Linux systems in regard to process termination.

When a process is terminated, it briefly moves to the zombie state and remains in that state until the parent invokes a call to wait(). When this occurs, the process id as well as entry in the process table are both released. However, if a parent does not invoke wait(), the child process remains a zombie as long as the parent remains alive. Once the parent process terminates, the init process becomes the new parent of the zombie. Periodically, the init process calls wait() which ultimately releases the pid and entry in the process table of the zombie process.

When processes are created; when/why they are terminated

When created: -- System initialization: when the operating system boots, many background and foreground processes are created. Background processes are the ones used for background purposes of the operating system; they do not interact with users. -- Creation by a running process: a running process issues a system call to create a new process, typically to help the existing process do its job efficiently and swiftly. -- Creation by a user: for example, when a user starts a new browser window, a new process is created. When terminated: -- Exit: when a process has finished its task, it terminates by making an exit call. -- Error: a process is terminated when there is a fatal error in the program. -- Killed by another process: a process can be terminated when another process kills it by calling kill; a process must have authorization to kill another process.

Is it possible to have concurrency but not parallelism? Explain.

Yes. Concurrency means that more than one process or thread is progressing at the same time. However, it does not imply that the processes are running simultaneously. The scheduling of tasks allows for concurrency, but parallelism is supported only on systems with more than one processing core.

In a multiprogramming and time-sharing environment, several users share the system simultaneously. This situation can result in various security problems. a. What are two such problems? b. Can we ensure the same degree of security in a time-shared machine as in a dedicated machine? Explain your answer.

a) What are two such problems? 1. One user can read the private data of another user (privacy). 2. One user can corrupt the private data of another user (integrity). 3. One user can prevent another user from getting anything done (denial of service). b) Can we ensure the same degree of security in a time-shared machine as we have in a dedicated machine? Explain your answer. There are two answers; either one is correct. Yes: if we can ensure that the operating system prevents any sharing of data between users, either for reading or writing, and fairly shares the computer, then we can achieve the same level of security. No: we can never be sure that our software doesn't have bugs, so we can never be sure that we prevent all sharing of data and fairly allocate computer resources.

Provide three programming examples in which multithreading provides better performance than a single-threaded solution.

a. A Web server that services each request in a separate thread. b. A parallelized application such as matrix multiplication where different parts of the matrix may be worked on in parallel. c. An interactive GUI program such as a debugger where a thread is used to monitor user input, another thread represents the running application, and a third thread monitors performance.

List five services provided by an operating system, and explain how each creates convenience for users. In which cases would it be impossible for user-level programs to provide these services? Explain your answer.

a. Program execution. The operating system loads the contents (or sections) of a file into memory and begins its execution. A user-level program could not be trusted to properly allocate CPU time. b. I/O operations. Disks, tapes, serial lines, and other devices must be communicated with at a very low level. The user need only specify the device and the operation to perform on it, while the system converts that request into device- or controller-specific commands. User-level programs cannot be trusted to access only devices they should have access to and to access them only when they are otherwise unused. c. File-system manipulation. There are many details in file creation, deletion, allocation, and naming that users should not have to perform. Blocks of disk space are used by files and must be tracked. Deleting a file requires removing the name file information and freeing the allocated blocks. Protections must also be checked to assure proper file access. User programs could neither ensure adherence to protection methods nor be trusted to allocate only free blocks and deallocate blocks on file deletion. d. Communications. Message passing between systems requires messages to be turned into packets of information, sent to the network controller, transmitted across a communications medium, and reassembled by the destination system. Packet ordering and data correction must take place. Again, user programs might not coordinate access to the network device, or they might receive packets destined for other processes. e. Error detection. Error detection occurs at both the hardware and software levels. At the hardware level, all data transfers must be inspected to ensure that data have not been corrupted in transit. All data on media must be checked to be sure they have not changed since they were written to the media. 
At the software level, media must be checked for data consistency; for instance, whether the number of allocated and unallocated blocks of storage match the total number on the device. Such errors are frequently process-independent (for instance, the corruption of data on a disk), so there must be a global program (the operating system) that handles all types of errors. Also, by having errors processed by the operating system, processes need not contain code to catch and correct all the errors possible on a system.

User vs. kernel-level threads (ULT vs. KLT) and the pros and cons of each

a. User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads. b. User threads are scheduled by the thread library; the kernel schedules kernel threads. c. Kernel threads need not be associated with a process, whereas every user thread belongs to a process. Pros and cons: ULTs are cheap to create and switch (no kernel-mode transition) and portable across operating systems, but a blocking system call by one ULT blocks the whole process, and ULTs cannot run in parallel on multiple processors. KLTs can block and be scheduled independently and can exploit multiple processors, but they are more expensive to create and maintain because each must be represented by a kernel data structure.

What are two differences between user-level threads and kernel-level threads? Under what circumstances is one type better than the other?

a. User-level threads are unknown by the kernel, whereas the kernel is aware of kernel threads. b. On systems using either M:1 or M:N mapping, user threads are scheduled by the thread library and the kernel schedules kernel threads. c. Kernel threads need not be associated with a process whereas every user thread belongs to a process. Kernel threads are generally more expensive to maintain than user threads as they must be represented with a kernel data structure.

The fork, wait, and exec system calls, what they do, how they work (be able to trace code)

fork(): constructs a new logical address space and context for the child process of the parent; the data, stack, and heap are cloned; the text (code) segment may be shared rather than copied; the registers are identical, including the PC. fork() returns different values in parent and child: the child gets 0 and the parent gets the pid of the child. wait(int *status): the parent process issues a wait system call, which suspends the execution of the parent while the child executes; when the child process terminates, it returns an exit status to the operating system, which is then returned to the waiting parent process, and the parent resumes execution. exec(): transforms the calling process into the called program; code and data are replaced, the heap and stack are reset to empty, and the program counter is set to the start of the new program.

Command line vs GUI -

A command-line interface (CLI) accepts typed text commands; it uses fewer resources, is scriptable, and is faster for experienced users, but has a steeper learning curve. A graphical user interface (GUI) presents windows, icons, menus, and a pointer; it is easier to learn and more convenient for casual use, but consumes more resources and is harder to automate. See http://www.computerhope.com/issues/ch000619.htm for a full comparison.

The concept of multithreading:

Multithreading is the ability of a central processing unit (CPU), or of a single core in a multi-core processor, to execute multiple processes or threads concurrently.

The services of the OS (what an OS is, why it exists, etc.)

operating system - the software that supports a computer's basic functions, such as scheduling tasks, executing applications, and controlling peripherals. An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.

