Midterm Review


System Programs

- File manipulation
- Status information (sometimes stored in a file)
- File modification
- Programming language support
- Program loading and execution
- Communications
- Background services
- Application programs

Midterm. Multilevel Queue. Understand, for example, why the FCFS (first-come, first-served) scheduling algorithm is used.

FCFS is the simplest CPU-scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. On the negative side, the average waiting time under the FCFS policy is often quite long. The FCFS scheduling algorithm is nonpreemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.
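
As a quick illustration of why FCFS waiting times can be long, here is a minimal sketch that computes the average waiting time for a few hypothetical CPU bursts arriving in order (the burst lengths are made-up values, not from the slides):

    /* FCFS: processes run to completion in arrival order (FIFO queue). */
    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};            /* hypothetical CPU-burst lengths (ms) for P1, P2, P3 */
        int n = sizeof(burst) / sizeof(burst[0]);
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;              /* each process waits for every earlier burst */
            wait += burst[i];
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);
        /* prints 17.00 ms; running the same bursts in the order {3, 3, 24}
           would give 3.00 ms -- the long CPU burst at the front is what hurts FCFS */
        return 0;
    }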

Midterm. Define kernel mode.

When a user application requests a service from the operating system (via a system call), the system must transition from user mode to kernel mode to fulfill the request; kernel mode is when a task is executed on behalf of the operating system.

In terms of file management, what kinds of activities does the OS perform?

• Free-space management
• Storage allocation
• Disk scheduling

Midterm: Know difference between Rate Monotonic Scheduling and EDF (Earliest Deadline First).

• Rate Monotonic Scheduling
  - Static priority policy.
  - Priority is based on the inverse of the period, 1/p.
  - Shorter period = higher priority; longer period = lower priority.
• EDF (Earliest Deadline First)
  - Priorities are based on deadlines.
  - Earlier deadline = higher priority; later deadline = lower priority.
  - Priorities are dynamic, not static (they can be adjusted on the fly). See the sketch below.
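
A minimal sketch of the difference, using two hypothetical periodic tasks (all numbers are invented for illustration): RMS fixes priority by period once, while EDF re-evaluates priorities from the current deadlines.

    #include <stdio.h>

    struct task { const char *name; int period; int next_deadline; };

    int main(void) {
        struct task t1 = {"T1", 50, 150};    /* shorter period, but its next deadline is later */
        struct task t2 = {"T2", 100, 120};   /* longer period, but its deadline comes first */

        /* RMS: static priority -- the task with the shorter period always wins */
        struct task *rms_pick = (t1.period < t2.period) ? &t1 : &t2;

        /* EDF: dynamic priority -- whichever deadline is nearest wins right now */
        struct task *edf_pick = (t1.next_deadline < t2.next_deadline) ? &t1 : &t2;

        printf("RMS dispatches %s, EDF dispatches %s\n", rms_pick->name, edf_pick->name);
        return 0;
    }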

Midterm. Review code for Producer-Consumer.

• Remember that the producer and consumer are concurrent processes that share a counter variable (one increments it, the other decrements it). • Sharing the counter alone is not sufficient to handle the race condition; a synchronization tool is needed. (A sketch of the shared buffer and the two routines follows below.)
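
A minimal sketch of the textbook-style shared buffer and the two routines (BUFFER_SIZE and the item type are assumptions; the two routines would run in separate concurrent processes or threads):

    #define BUFFER_SIZE 10

    typedef int item;                        /* placeholder item type */

    item buffer[BUFFER_SIZE];
    int in = 0, out = 0;
    int counter = 0;                         /* shared: producer increments, consumer decrements */

    void producer(item next_produced) {
        while (counter == BUFFER_SIZE)
            ;                                /* busy-wait while the buffer is full */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;                           /* not atomic -- this is where the race lives */
    }

    item consumer(void) {
        while (counter == 0)
            ;                                /* busy-wait while the buffer is empty */
        item next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;                           /* not atomic -- this is where the race lives */
        return next_consumed;
    }

Each routine is correct on its own; the unsynchronized counter++ / counter-- is exactly what the race-condition and critical-section cards below address.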

Midterm. What are the 5 threading issues?

1. Semantics of fork() and exec() system calls
2. Signal handling - synchronous and asynchronous
3. Thread cancellation of a target thread - asynchronous or deferred
4. Thread-local storage
5. Scheduler activations (refer slide 4.33)

Question: What is the purpose of system programs?

Convenient environment for program development and execution. System programs can be thought of as bundles of useful system calls. They provide basic functionality to users so that users do not need to write their own programs to solve common problems. They are associated with the operating system but are not necessarily part of the kernel; they aid in managing the operating system while it is running.

Compare and contrast monolithic (simple) vs layered structure.

A monolithic kernel is an operating system architecture where the entire operating system works in kernel space. The monolithic model differs from other operating system architectures (such as the microkernel architecture) in that it alone defines a high-level virtual interface over computer hardware. A set of primitives or system calls implements all operating system services such as process management, concurrency, and memory management. Device drivers can be added to the kernel as modules. refer section 2.7.1 https://www.tutorialspoint.com/monolithic-system-architecture

Multilevel Queue

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues (Figure 6.6). The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm. In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.

Midterm. Define process.

A program in execution; it is a unit of work within the system; program is a passive entity, process is an active entity.

Question: What is a buffer?

A region of memory used to temporarily hold data while it is being moved from one place to another; allows each device or process to operate without being held up by another.

Thread

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.
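
A minimal pthreads sketch of that sharing (compile with -pthread; the variable and function names are illustrative): the two threads share the global in the data section, but each has its own stack, registers, and program counter.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                          /* data section: visible to every thread in the process */

    void *worker(void *arg) {
        shared += 1;                         /* unsynchronized, fine only for this single-writer demo */
        return NULL;                         /* this thread's private stack disappears here */
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);  /* second thread of control in the same process */
        pthread_join(tid, NULL);                   /* wait for it to finish */
        printf("shared = %d\n", shared);           /* prints 1 */
        return 0;
    }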

Midterm. Understand diagram of process state.

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
These names are arbitrary, and they vary across operating systems. The states that they represent are found on all systems, however. Certain operating systems also more finely delineate process states. It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however.

Midterm. Critical-Section

Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. • One solution: mutex locks. A process must acquire() the lock before entering its critical section (preventing other processes from entering theirs) and release() it on exit; simple implementations busy-wait, so they are also called spinlocks. A minimal pthread sketch follows below.
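
A minimal pthread mutex sketch of the entry/exit sections around a critical section (the shared counter and worker function are assumptions, not course code):

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long counter = 0;                        /* shared data updated inside the critical section */

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);       /* entry section: acquire the lock */
            counter++;                       /* critical section */
            pthread_mutex_unlock(&lock);     /* exit section: release the lock */
        }
        return NULL;                         /* remainder section would follow */
    }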

Midterm. If processes communicate, how do they perform interprocess communication?

Cooperating processes (which can affect or be affected by other processes executing in the system, e.g., by sharing data) require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: shared memory and message passing. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Both of the models just mentioned are common in operating systems, and many systems implement both. (Although there are systems that provide distributed shared memory, we do not consider them in this text.)

Process Creation

During the course of execution, a process may create several new processes. As mentioned earlier, the creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes. refer section 3.3

Midterm: (True/False) Different Threads share code, data, files, registers, and heaps.

False. Threads of the same process share code, data, and open files (and the heap), but each thread has its own registers and its own stack.

Midterm. Dual-mode

In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code. The approach taken by most computer systems is to provide hardware support that allows us to differentiate among various modes of execution.

Midterm. Message passing

In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Message passing is also easier to implement in a distributed system than shared memory. refer section 3.4.2
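
As one concrete example of message passing on UNIX, here is a minimal sketch using an ordinary pipe between a parent and child (the message text is arbitrary; pipes are just one of several message-passing mechanisms):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[32];

        pipe(fd);                            /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                   /* child acts as the sender */
            close(fd[0]);
            write(fd[1], "hello", 6);        /* send a small message (6 bytes incl. '\0') */
            close(fd[1]);
            _exit(0);
        }
        close(fd[1]);                        /* parent acts as the receiver */
        read(fd[0], buf, sizeof(buf));
        printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }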

Midterm. Shared Memory

In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. Shared memory can be faster than message passing, since message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention. In shared-memory systems, system calls are required only to establish the shared-memory region; once it is established, accesses are treated as routine memory accesses and need no kernel assistance. refer section 3.4.1
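
A minimal POSIX shared-memory sketch (writer side): the object name and size are arbitrary choices, error checking is omitted, and some systems need -lrt when linking. Note that after the setup system calls, the actual data exchange is an ordinary memory write:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";      /* hypothetical shared-memory object name */
        const int SIZE = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);              /* create/open the object */
        ftruncate(fd, SIZE);                                          /* set its size */
        char *ptr = mmap(0, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);  /* map it in */

        sprintf(ptr, "hello from the writer");   /* routine memory access -- no further kernel help */

        /* a reader would shm_open the same name, mmap it, and read the string;
           shm_unlink(name) removes the object when both sides are done */
        return 0;
    }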

Midterm. Compare and contrast monolithic (simple) vs layered structure.

Layering provides a distinct advantage in an operating system. All the layers can be defined separately and interact with each other as required. It is also easier to create, maintain, and update the system if it is done in the form of layers: a change in one layer's specification does not affect the rest of the layers. Each layer in the operating system can interact only with the layers directly above and below it. The lowest layer handles the hardware and the uppermost layer deals with user applications. In the layered approach, the operating system is broken into a number of layers (levels): the bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface. refer section 2.7.2

Question: Can there only be two modes?

No. Can have more.

Question: Why buffer?

Data can be picked up after the other operation completes. By storing information in a buffer, the operating system needs less time and effort to access it (since buffers use RAM instead of the hard disk), which is useful in situations where the rate at which data is received differs from the rate at which it is processed.

Question: Will the parent and child share resources?

Resource sharing options:
- Parent and children share all resources
- Children share subset of parent's resources
- Parent and child share no resources

Question: What is the purpose of the command interpreter?

It takes commands as input from the user and executes them.

Midterm. Multilevel Queue. Understand, for example, why the Round Robin (RR) scheduling algorithm is used.

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. Because every switch between processes costs a context switch, we want the time quantum to be large with respect to the context-switch time; on the other hand, if the time quantum is too large, RR scheduling degenerates to an FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum. refer section 6.3.4
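
A minimal sketch of the quantum-driven preemption, using made-up burst times, a 4 ms quantum, and the simplifying assumptions that all processes arrive together and do no I/O:

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};            /* hypothetical remaining CPU time per process (ms) */
        int n = 3, quantum = 4, clock = 0, remaining = 3;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;
                int slice = burst[i] < quantum ? burst[i] : quantum;  /* preempt after one quantum */
                printf("t=%2d: P%d runs %d ms\n", clock, i + 1, slice);
                clock += slice;
                burst[i] -= slice;
                if (burst[i] == 0) remaining--;
            }
        }
        return 0;
    }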

Question: From which address space does P3 execute?

There are also two address-space possibilities for the new process: 1. The child process is a duplicate of the parent process (it has the same program and data as the parent). 2. The child process has a new program loaded into it.
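
Both possibilities appear in the classic UNIX fork/exec pattern, sketched minimally below (the choice of "ls" as the new program is just an illustration): the child begins as a duplicate of the parent (possibility 1) and then, optionally, exec loads a new program into it (possibility 2).

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                  /* child starts as a duplicate of the parent's address space */

        if (pid == 0) {
            execlp("ls", "ls", NULL);        /* replace the duplicate with a new program image */
            _exit(1);                        /* reached only if exec fails */
        } else if (pid > 0) {
            wait(NULL);                      /* parent waits for the child to terminate */
            printf("child complete\n");
        }
        return 0;
    }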

Define microkernel.

This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility; the main function of the microkernel is to provide communication between the client program and the various services that are also running in user space. The idea is to leave as few functions as possible in the kernel and implement the rest outside it.

Question: How much shared memory is allocated in IPC?

Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size (more reasonable in practice, because the size is fixed). In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

Midterm. Race Condition

We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently. A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.

Question: Will parent and child be able to execute concurrently?

When a process creates a new process, two possibilities for execution exist: 1. The parent continues to execute concurrently with its children. 2. The parent waits until some or all of its children have terminated.

Midterm. Define user mode.

When the computer system is executing on behalf of a user application; task executed on behalf of user.

Midterm. Understand Producer-Consumer (Multiprocessing scenario).

A common paradigm for cooperating processes. Although the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes concurrently execute the statements "counter++" and "counter--". Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.
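
A minimal sketch that reproduces one bad outcome by hand: counter++ and counter-- each really perform a load / modify / store, and the statements below interleave those steps explicitly (register1 and register2 stand in for CPU registers):

    #include <stdio.h>

    int main(void) {
        int counter = 5;
        int register1, register2;            /* stand-ins for CPU registers */

        register1 = counter;                 /* producer loads counter   (register1 == 5) */
        register1 = register1 + 1;           /* producer increments      (register1 == 6) */
        register2 = counter;                 /* consumer loads counter   (register2 == 5) */
        register2 = register2 - 1;           /* consumer decrements      (register2 == 4) */
        counter = register1;                 /* producer stores          (counter == 6)   */
        counter = register2;                 /* consumer stores last     (counter == 4)   */

        printf("counter = %d\n", counter);   /* prints 4, not the correct value 5 */
        return 0;
    }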

Question: Interrupt timeline (high 1, low 0)

An interrupt is an input signal to the processor indicating an event that needs immediate attention. An interrupt signal alerts the processor and serves as a request for the processor to interrupt the currently executing code, so that the event can be processed in a timely manner. If the request is accepted, the processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, unless the interrupt indicates a fatal error, the processor resumes normal activities after the interrupt handler finishes. Interrupts may be hardware interrupts (raised by devices) or software interrupts (traps generated by the processor or by executing code).

Question: How does hardware distinguish between the different modes (user and kernel)?

By a mode bit added to the hardware: the mode bit is 1 in user mode and 0 in kernel mode.

Question: What is a system call?

Also called a monitor call; the method used by a process to request action by the operating system. It usually takes the form of a trap to a specific location in the interrupt vector and is treated by the hardware as a software interrupt: control passes through the interrupt vector to a service routine in the operating system, and the mode bit is set to kernel mode.

Explain system call. (short-answer)

System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions. Categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.
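
A minimal sketch of how a program typically reaches a system call: it calls a C library routine such as write(), which traps into the kernel on the program's behalf.

    #include <unistd.h>

    int main(void) {
        /* file descriptor 1 is standard output; write() wraps the write system call */
        write(1, "hello via a system call\n", 24);
        return 0;
    }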

