Test 1

Which of the following instructions should be allowed only in kernel mode? (a) Disable all interrupts. (b) Read the time-of-day clock. (c) Set the time-of-day clock. (d) Change the memory map.

(a), (c), and (d). Reading the time-of-day clock is harmless, but disabling interrupts, setting the clock, and changing the memory map could all interfere with other processes, so those instructions must be restricted to kernel mode. In general, instructions involving I/O, interrupts, or memory protection are allowed only in kernel mode.

What is the key difference between a TRAP and an interrupt?

(pp. 29, 50-51) An interrupt is an asynchronous hardware signal sent from a device, via the interrupt controller, to the CPU (for example, to report that an I/O operation has finished). A TRAP is caused synchronously by the running program itself, either deliberately, to switch into kernel mode and execute a system call, or because of an exception such as dividing by zero.

Many true object-oriented programming languages use message passing rather than function calling. What are the design issues for message passing?

- Messages can be lost by the network, so a program must account for unreliable message passing.
- A messaging system must deal with how processes are named, so that the process specified in a send or receive call is unambiguous.
- A messaging system must take care of authentication, so that each side knows it is communicating with the process it thinks it is.
- The case where the sender and receiver are processes on the same computer must be handled efficiently, since copying messages is costly.
Other notes pointed out in class: Mac OS X is a message-passing-oriented system, and message passing is a lot slower!

List and explain the four types of things that will kill or stop a process

1. Normal exit (voluntary): the program exits when it has finished its task. 2. Error exit (voluntary): the program discovers an error and terminates itself, for example when a file or argument it needs does not exist. 3. Fatal error (involuntary): a bug in the program, such as dividing by zero, executing an illegal instruction, or referencing memory outside its allocated range. 4. Killed by another process (involuntary): another process asks the operating system (e.g., with a kill system call) to terminate it.

In a multitasking OS, what are the states that a process can be in? Explain each.

1. Running: the process is actually using the CPU at that instant. 2. Ready: the process is runnable but temporarily stopped, waiting for a CPU to become available. 3. Blocked: the process is unable to run until some external event (such as I/O completion) takes place.

Suppose that a 10-MB file is stored on a disk on the same track (track #: 50) in consecutive sectors. The disk arm is currently situated over track number 100. How long will it take to retrieve this file from the disk? Assume that moving the arm from one cylinder to the next takes about 1 ms and it takes about 5 ms for the sector where the beginning of the file is stored to rotate under the head. Also, assume that reading occurs at a rate of 100 MB/s.

50 ms to move the arm 50 cylinders (from track 100 to track 50) + 5 ms of rotational delay + 100 ms to read the 10-MB file at 100 MB/s = 155 ms, or 0.155 seconds.

A computer has a pipeline of 4 stages with each stage doing work in 1 nsec. How many instructions per second can this machine execute?

Once the pipeline is full, an instruction completes every nanosecond, so the machine executes about one billion (10^9) instructions per second. (The 3 nsec spent filling the pipeline at the start is negligible; counting it over exactly one second gives 999,999,997.)

Define Daemon and fork.

A daemon is simply a process that stays hidden in the background and handles periodic tasks such as e-mail or printing. A fork is a UNIX system call responsible for creating a new process. The fork creates an exact clone of the calling process; after the fork, the two processes, the parent and the child, have the same memory image, the same environment strings, and the same open files.
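A minimal C sketch of fork on a POSIX system (the printed messages are just illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();           /* clone the calling process */

    if (pid < 0) {
        perror("fork");           /* process table full or out of memory */
        exit(1);
    } else if (pid == 0) {
        /* child: same memory image, environment, and open files as parent */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);
    } else {
        /* parent: fork returned the child's pid */
        waitpid(pid, NULL, 0);    /* wait for the child to terminate */
        printf("parent: child %d finished\n", pid);
    }
    return 0;
}
```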

What is a TRAP Instruction and what is its use in Operating Systems?

A TRAP instruction is a software interrupt. TRAP instructions are used to switch from user mode to kernel mode and start execution at a fixed address within the kernel. The TRAP instruction is similar to a call instruction in the sense that execution continues at a distant location and the return address is saved on the stack for later use; unlike a normal call, however, a TRAP cannot jump to an arbitrary address, only to the fixed kernel entry point.

What is a barrier? Why is a barrier useful to multithreaded programs?

A barrier divides an application into phases and enforces the rule that no process (or thread) may proceed into the next phase until all of them are ready to proceed. A barrier is useful to multithreaded programs because if one thread depends on results computed by the others, it must be held back at the barrier until those results are available.
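A hedged sketch using POSIX barriers; the thread count and the "phase" messages are made up for illustration:

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    long id = (long)arg;

    printf("thread %ld: phase 1 work\n", id);
    /* no thread continues until all NTHREADS have reached this point */
    pthread_barrier_wait(&barrier);
    printf("thread %ld: phase 2 work\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```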

What is a critical region? What four conditions must hold for a good solution? Explain each.

A critical region (critical section) is the part of a program in which shared memory or another shared resource is accessed; executions of critical regions by different processes must not be interleaved, or the integrity of the shared data is threatened. The four conditions for a good solution are: (1) No two processes may be simultaneously inside their critical regions. (2) No assumptions may be made about speeds or the number of CPUs. (3) No process running outside its critical region may block other processes. (4) No process should have to wait forever to enter its critical region.

Explain how a Web Server would use threads to improve performance.

A main (dispatcher) thread handles incoming requests and creates a new worker thread for each one. The main thread can then immediately go back to waiting for the next request, while the worker thread handles all communication and work for its request, including any blocking disk or network I/O. This overlaps I/O with computation and lets many requests be served concurrently.
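A toy dispatcher/worker sketch in C with pthreads; get_next_request is a stand-in for reading a request from the network, and real servers often use a pre-created pool of workers rather than one thread per request:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* stand-in for reading a request from the network */
static int get_next_request(void) { static int n; return ++n; }

/* worker thread: handles one request, then exits */
static void *handle_request(void *arg)
{
    int req = *(int *)arg;
    free(arg);
    printf("worker: serving request %d\n", req);
    sleep(1);                       /* pretend to do disk/network I/O */
    return NULL;
}

int main(void)
{
    /* dispatcher loop: accept a request, hand it to a new thread */
    for (int i = 0; i < 3; i++) {
        int *req = malloc(sizeof *req);
        *req = get_next_request();
        pthread_t tid;
        pthread_create(&tid, NULL, handle_request, req);
        pthread_detach(tid);        /* dispatcher never waits on workers */
    }
    sleep(2);                       /* let the workers finish before exiting */
    return 0;
}
```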

What is a Monitor? How are they useful to multithreaded programs?

A monitor is a higher-level synchronization construct: a collection of procedures, variables, and data structures grouped together in a module. Only one thread may be active inside the monitor at any instant, and this mutual exclusion is enforced automatically rather than by the programmer; condition variables let threads wait inside the monitor for events. Monitors are useful to multithreaded programs because they make it much harder to get the locking wrong (and thus to cause races or deadlock) than with bare mutexes and semaphores.
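C has no built-in monitors, but the idea can be emulated with a mutex plus condition variables. A minimal sketch, assuming a one-slot buffer; the names deposit/withdraw are illustrative:

```c
#include <pthread.h>
#include <stdbool.h>

/* "Monitor" state: the shared data may only be touched inside the
   procedures below, which all acquire the monitor lock first. */
static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static bool full = false;
static int  slot;

void deposit(int item)                        /* monitor procedure */
{
    pthread_mutex_lock(&mon);                 /* one thread active inside */
    while (full)
        pthread_cond_wait(&not_full, &mon);   /* wait inside the monitor */
    slot = item;
    full = true;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&mon);
}

int withdraw(void)                            /* monitor procedure */
{
    pthread_mutex_lock(&mon);
    while (!full)
        pthread_cond_wait(&not_empty, &mon);
    int item = slot;
    full = false;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&mon);
    return item;
}
```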

What is a mutex? What is its goal?

A mutex is a variable that can be in one of two states: unlocked or locked (much like a bool type). It is usually implemented as an integer, with 0 representing the unlocked state and any other value representing locked. Mutexes are used to manage access to critical regions by threads (or processes): a thread locks the mutex before entering the critical region and unlocks it when leaving.
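A minimal POSIX example; the shared counter is just an illustration of data that needs protection:

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;                 /* shared data protected by the mutex */

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* blocks if another thread holds it */
        counter++;                   /* critical region */
        pthread_mutex_unlock(&lock); /* let the next waiter in */
    }
    return NULL;
}
```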

What is a pop-up thread? Why use a pop-up thread?

A pop-up thread is a special thread that is created to handle new messages that come in. Pop-up threads are useful because, since they are new threads, they do not have any history that must be restored. Thus the thread can be created quickly and is given the incoming message to process. This reduces latency between message arrival and the processing of that message.

What are process hierarchies? What is a handle? What is the first process to run in most UNIX OSes?

A process hierarchy is formed when a process creates another process, producing a parent and a child; the child can in turn create further processes, and so on. A handle is a special token (used in Windows) given to a parent process so it can control its child; the handle can also be passed to other processes, which is why Windows has no strict process hierarchy. The first process to run in most UNIX systems is a special process called init, which is present in the boot image. When it starts running, it reads a file telling how many terminals there are, then forks off one new process per terminal. These processes wait for someone to log in; if a login succeeds, the login process executes a shell to accept commands.

What are the differences between the process table and the thread table?

A process table is a data structure used for context switching and scheduling. It is implemented differently in different operating systems; each entry, sometimes called a context block, holds the saved program counter, stack pointer, registers, and other state for one process. A thread table holds the same kind of saved execution state for threads, but the threads of a process share resources such as the address space and open files. Depending on the OS, the thread table may be a subset within the process table, or (for user-level threads) be kept in user space by the run-time system. In other words, each process has its own independent address space and memory, while its threads share those resources; a thread table is essentially a process table for entities that share resources.

What is a process table/process and process control block?

A process table is a kernel data structure with one entry per process, storing the information used to track and manage every process. Each entry, the process control block, holds the information needed to manage (and later restart) one process: its saved registers, state, memory allocation, open files, and so on.

What is a Race Condition? Why would you not like your bank to have race conditions when dealing with you money?

A race condition occurs when two or more threads are able to access shared data and try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any point, you don't know the order in which the threads will access the shared data, so the final result depends on the exact timing: both threads are "racing" to access and change the data. A bank must avoid race conditions on account balances because, if two updates to your balance run concurrently, both may read the same old balance and one update is silently lost, making money appear or disappear.
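A hedged illustration of why a bank would care; the account and amounts are invented. Two threads deposit concurrently without any locking, so both may read the same old balance and one deposit is lost:

```c
#include <pthread.h>
#include <stdio.h>

static long balance = 100;           /* shared account balance, no lock */

static void *deposit(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        long old = balance;          /* read */
        balance = old + 1;           /* write: another thread may have updated
                                        balance in between, and that update
                                        is silently lost                    */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, deposit, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* expected 200100, but the race usually makes it come out lower */
    printf("final balance: %ld\n", balance);
    return 0;
}
```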

Explain Fig 2-10. Why would you want to add interrupts?

A server can be constructed in three different ways. The first is with threads, which gives parallelism (to improve performance) while still using blocking system calls, making programming easier. The second is a single-threaded process, which uses blocking system calls but has no parallelism. The third is a finite-state machine, which gets parallelism without threads by using nonblocking system calls together with interrupts (or signals). You need the interrupts because, with nonblocking calls, there is otherwise no way to learn that an operation has completed and move the computation from one state to the next.

Windows 3.1 had threads use thread_yield. Why was this a bad idea, and why is it not used in modern Linux or Windows 7?

thread_yield lets a thread voluntarily give up the CPU so another thread can run. There is no such call for processes because processes compete fiercely for CPU time, whereas the threads of a process are assumed to cooperate, since their code was traditionally written by the same programmer (less true today, when many programmers work on the same project). Relying on voluntary yields is cooperative scheduling, and that is the problem: a thread has no good way to know when it has "run long enough," and a buggy or selfish thread that never calls thread_yield can monopolize the CPU and starve every other thread, hanging the system. Modern Linux and Windows instead use preemptive scheduling, in which a clock interrupt lets the scheduler take the CPU away after a quantum, so correctness no longer depends on each thread's good manners.

A portable operating system is one that can be ported from one system architecture to another without any modification. A. Explain why it is infeasible to build an operating system that is completely portable. B. Describe two high-level layers that you will have in designing an operating system that is highly portable.

A) It is infeasible to build a completely portable operating system because some code must manipulate hardware that differs from one architecture to another: the MMU and memory map, the interrupt mechanism, device registers, the CPU's register set and mode switches, and so on. That machine-dependent code has to be rewritten for every architecture. B) A highly portable design separates the system into two high-level layers: a machine-dependent layer (a hardware abstraction layer) that hides the details of the particular CPU, MMU, and devices behind a fixed interface, and a machine-independent layer above it that implements the rest of the operating system (processes, memory management policy, file systems, system calls) purely in terms of that interface. Only the lower layer needs to change when the system is ported.

What are kernel level threads? What are user level threads? Why use each?

Kernel-level threads are created and scheduled by the operating system kernel, which keeps a thread table; because the kernel knows about them, a thread that blocks on a system call does not block its whole process, and the threads of one process can run in parallel on multiple cores. User-level threads are implemented entirely by a run-time library in user space, with the kernel seeing only a single process; they are very cheap to create and switch (no kernel traps) and work even on an OS without thread support, but one blocking system call blocks every thread in the process, and they cannot exploit multiple cores. Use kernel threads when you need true parallelism and blocking calls; use user threads when you need large numbers of lightweight threads and very fast switching.

To a programmer, a system call looks like any other call to a library procedure. Is it important that a programmer know which library procedures result in system calls? Under what circumstances and why?

According to the text, the difference between a system call and a procedure call is that a system call traps into the kernel and a procedure call does not. For program logic it usually does not matter which library procedures result in system calls, since they look the same to the caller. It does matter when performance is an issue: entering the kernel involves a mode switch and is far more expensive than a library procedure that runs entirely in user space, so a programmer writing performance-critical code needs to know which calls are cheap and which ones trap into the kernel, and should minimize or batch the expensive ones.

Explain what each of the things stored in question 8 does [address space, program counter, stack pointer, etc.]. Why must we save this info?

All of them perform functions that pertain to a process's current state. This info must be saved by the OS so it can restore it to continue running the process later. This allows context switching.

Explain Batch, Interactive, and Real Time scheduling. Why is Real Time the hardest?

Batch: In batch systems, there are no users impatiently waiting at their terminals for a quick response to a short request. Consequently, nonpreemptive algorithms, or preemptive algorithms with long time periods for each process, are often acceptable. This approach reduces process switches and thus improves performance. The batch algorithms are actually fairly general and often applicable to other situations as well. Interactive: In an environment with interactive users (such as servers with multiple users, all of whom are in a big hurry), preemption is essential to keep one process from hogging the CPU and denying service to the others. Even if no process intentionally ran forever, one process might shut out all the others indefinitely due to a program bug. Real Time: In systems with real-time constraints, preemption is sometimes not needed because the processes know that they may not run for long periods of time and usually do their work and block quickly. The difference with interactive systems is that real-time systems run only programs that are intended to further the application at hand. Real Time scheduling is the hardest because unlike with Batch or Interactive, where a long wait may be irritating, in systems with Real Time scheduling it may be intolerable. Because of this, the process times need to be predictable and always fall within certain time constraints (hard vs. soft real-time scheduling).

Explain why Fig. 2-6 is the greatest argument for running several processes at one time.

Because of I/O wait times, a computer without multiprogramming would spend the majority of its time idle, which is a complete waste and makes the computer seem slow, since it is waiting for I/O all the time. With multiprogramming, CPU utilization increases because the CPU can run processes that are not currently waiting for I/O.
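Fig. 2-6 plots the simple probabilistic model CPU utilization = 1 - p^n, where p is the fraction of time a process waits for I/O and n is the degree of multiprogramming. A small sketch of that calculation (the 80% I/O-wait figure is just an example):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 0.80;                      /* fraction of time spent in I/O wait */
    for (int n = 1; n <= 10; n++)         /* degree of multiprogramming */
        printf("n=%2d  utilization=%.0f%%\n", n, (1.0 - pow(p, n)) * 100.0);
    return 0;
}
```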

What is the essential difference between a block special file and character special file?

Block special files are used to model devices that consist of a collection of randomly addressable blocks such as disks. However, character special files are used to model printers, modems, and other devices that accept or output a character stream.

Explain how separation of policy and mechanism aids in building microkernel-based operating systems.

By putting the policy for a function in a user-mode process and keeping only the mechanism in the kernel, the core functionality is all that is left in the kernel. In this way, the microkernel concept is followed, moving the bulk of the code outside the kernel itself.

Contrast and compare threads & processes

Compare: threads and processes are both ways of doing multiple activities at (apparently) the same time in an application. Contrast: a process is an instance of an executing program with its own address space and resources; a process can contain multiple threads, and those threads are lighter-weight units of execution that exist only inside their process and share its address space, open files, and other resources.

Explain the value of Fig 2-29. Why is each item important to that section?

Fig 2-29 illustrates the mutex_lock and mutex_unlock code. It is better than enter_region because, when the mutex is already locked, mutex_lock calls thread_yield to give the CPU to another thread instead of busy waiting; since there is no clock to stop a user-level thread that spins forever, yielding is what lets the lock holder eventually run and release the mutex.

List and explain what a process does when interrupting a running process. Process states: 1. Running 2. Ready 3. Blocked

A process can be in one of three states. The first state is Running: the process is actually using the CPU at that instant. The second state is Ready: the process is runnable but temporarily stopped to let another process run. The third state is Blocked: the process is unable to run until some external event happens (like a key press or I/O completion). Four transitions are possible among these three states. The first occurs when the operating system discovers that a process cannot continue right now (Running to Blocked). The second and third are handled by the process scheduler, a native part of the OS, when it takes the CPU away from a process or hands it back (Running to Ready, and Ready to Running). The fourth occurs when the external event a process was waiting for happens (Blocked to Ready).

A file whose file descriptor is fd contains the following sequence of bytes: 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5. The following system calls are made: lseek(fd, 3, SEEK_SET); read(fd, &buffer, 4); where the lseek call makes a seek to byte 3 of the file. What does buffer contain after the read has completed?

Because the whence parameter of lseek is SEEK_SET, the offset (3) is relative to the start of the file, so the file position moves to byte 3. The subsequent read then reads the next 4 bytes (offsets 3 through 6) into the buffer, so after the call the buffer contains 1, 5, 9, 2.
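A small C demonstration (the file name bytes.dat is illustrative). After seeking to offset 3, reading 4 bytes yields the values 1, 5, 9, 2:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char data[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5};
    unsigned char buffer[4];

    /* build a file containing the byte sequence from the question */
    int fd = open("bytes.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
    write(fd, data, sizeof data);

    lseek(fd, 3, SEEK_SET);              /* offset 3 from the start of the file */
    ssize_t n = read(fd, buffer, 4);     /* reads the bytes at offsets 3..6 */

    for (ssize_t i = 0; i < n; i++)
        printf("%d ", buffer[i]);        /* prints: 1 5 9 2 */
    printf("\n");
    close(fd);
    return 0;
}
```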

What is IPC? List issues with IPC?

IPC, or InterProcess Communication, is communication between processes in a well-structured way, without using interrupts and without race conditions. The first issue is how one process can pass information to another. The second is making sure two or more processes do not get in each other's way. The third is proper sequencing when dependencies are present (e.g., a consumer must wait until the producer has produced something).

What is a semaphore? Where and how can they be used?

In computer science, a semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes (or threads) to a common resource in a parallel programming environment. It holds a count and supports two atomic operations: down (P / wait), which blocks when the count is zero and otherwise decrements it, and up (V / signal / post), which increments the count and wakes a sleeper if one is waiting. A semaphore initialized to 1 (a binary semaphore) can be used for mutual exclusion around a critical region; counting semaphores are used for synchronization, for example tracking the number of full and empty slots in the producer-consumer problem.

What is a finite-state machine? What is its value?

In the context of the text, a finite-state machine is a design in which each computation keeps a saved state and a set of events can occur to change that state; it lets a single-threaded server simulate the effect of multiple threads by switching between saved states as events arrive. (Page 99) Definition from class: a finite-state machine has a limited number of allowed states (VIM, as opposed to other word processors).

In the example given in Fig. 1-17, the library procedure is called read and the system call itself is called read. Is it essential that both of these have the same name? If not, which is more important?

It is not essential that both have the same name. The name of the library procedure is the one that matters to the programmer, since that is what programs actually call; the system call itself is identified inside the library procedure (often written in assembly) by a number placed in a register, so it could have any name, or no name at all.

What is the hybrid implementation of the previous question? Why use it?

It is an implementation that combines user-level and kernel-level threads: the kernel is aware only of the kernel-level threads, and multiple user-level threads are multiplexed on top of each of them. The advantage is flexibility: the programmer can decide how many kernel threads to use and how many user threads to multiplex on each, getting cheap user-level switching while still allowing real parallelism and blocking calls. Some operating systems (for example, older versions of Solaris) have used this model.

The family of computers idea was introduced in the 1960s with the IBM System/360 mainframes. Is this idea now dead as a doornail or does it live on?

It lives happily on today. A family of computers is a group of machines that differ in cost and performance but share the same architecture, so they can all run the same software on the same OS. Every major OS today runs on such a family, from cheap laptops to large servers built around the same architecture.

What is multiprogramming?

Multiprogramming is the ability to keep multiple jobs in memory and run them on a single system at the same time: when one job blocks (for example, waiting for I/O), the CPU switches to another. This technique keeps the CPU (traditionally a very expensive component) from sitting idle for extended periods.

What is mutual exclusion? List and explain busy waiting, spin lock.

Mutual exclusion is making sure that if one process is using a shared variable or file, other processes are excluded from using it at the same time. Busy waiting is continually testing a variable until it takes on the value you are waiting for (e.g., until a lock becomes free). A spin lock is a lock that is acquired by busy waiting: the waiting process keeps testing in a loop. The downside is that it wastes a lot of CPU time while it is spinning.

Is there any reason why you would want to mount a file system on a non-empty directory? If so, why?

No, because if it is mounted on a non-empty directory those files in the non-empty directory will not be accessible until it is unmounted.

Compare and contrast the real time algorithms.

Real-time scheduling algorithms can be static or dynamic. Static: This requires that all information is available in advance and deadlines are known before the process begins. With this information, scheduling decisions are made before the system starts running. Dynamic: This does not have the restrictions of static algorithms, and decisions are made at runtime.

There are several design goals in building an operating system, for example, resource utilization, timeliness, robustness, and so on. Give an example of two design goals that may contradict one another.

Resource utilization and timeliness may contradict one another. To maximize utilization, a resource is multiplexed: different programs or users take turns using it, or they share it and each get part of it. Deciding how a resource is multiplexed is the operating system's job, and the more heavily a resource is shared in order to keep it busy, the longer an individual request may have to wait, so timeliness suffers.

Compare and contrast the interactive system algorithms.

Round Robin: one of the oldest, simplest, fairest, and most widely used algorithms. Each process is assigned a time interval (quantum) and is allowed to run for that interval; at the end of the quantum the next process runs, and the preempted process goes to the back of the queue to wait its turn. It is easy to implement, but a short quantum means many context switches, eating extra CPU time. Priority Scheduling: each process is assigned a priority, and the runnable process with the highest priority runs first. To prevent a high-priority process from running indefinitely, the scheduler may decrease the priority of the currently running process, allowing others to run; alternatively, processes may be grouped into priority classes, with Round Robin used within each class. Multiple Queues: the design of one of the earliest priority schedulers, in CTSS (Compatible Time Sharing System) on the IBM 7094 (c. 1960s). There are several priority groups, and each group is given a number of quanta when it runs (e.g., group 1 = 1 quantum, group 2 = 2 quanta, group 3 = 4 quanta, etc.). If a process does not finish within its allotment, it moves to the next group and runs again on its next turn, continuing until it completes. Shortest Process Next: by aging a process (taking a weighted average of its previous run times), the scheduler estimates how long its next burst will take; when estimates are known for all processes, the shortest runs first, then the next shortest, and so on. Guaranteed Scheduling: to guarantee a minimum share of CPU time, the scheduler promises each process 1/n of the CPU (where n is the number of processes). The system keeps track of how much CPU each process has actually used, and a process that has used more than its allotted share is moved down until the others catch up. Lottery Scheduling: gives similarly predictable results by handing out "lottery tickets" for CPU time to each process; whenever a scheduling decision must be made, a ticket is drawn at random and the process holding it runs, so a process with more tickets gets a proportionally larger share (see the sketch below). Fair-Share Scheduling: especially useful on multi-user systems, to prevent a single user from monopolizing system resources. CPU time is divided among users, equally or with weights, and each user's processes share only that user's slice, regardless of how many processes each user has (User 1 = 9 processes, User 2 = 1 process). (pp. 154-160)
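As a concrete illustration, a minimal sketch of the lottery-scheduling draw described above; the process names and ticket counts are invented:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct proc { const char *name; int tickets; };

/* pick the next process to run by drawing a random ticket */
static int lottery_pick(const struct proc *p, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += p[i].tickets;

    int winner = rand() % total;          /* winning ticket number */
    for (int i = 0; i < n; i++) {
        winner -= p[i].tickets;
        if (winner < 0)
            return i;                     /* this process holds the ticket */
    }
    return n - 1;                         /* not reached */
}

int main(void)
{
    struct proc procs[] = { {"video", 50}, {"editor", 30}, {"backup", 20} };
    srand((unsigned)time(NULL));
    for (int slot = 0; slot < 5; slot++)  /* five scheduling decisions */
        printf("run %s\n", procs[lottery_pick(procs, 3)].name);
    return 0;
}
```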

Explain what a scheduler does? What is compute-bound (memory) and I/O-bound? What difference does this make to the scheduler?

Scheduling is allocating system resources, particularly processor time, to threads and processes; it is an essential component of multitasking. A compute-bound (CPU-bound) program spends most of its time computing, with long CPU bursts and infrequent I/O waits, so it would run faster with a faster CPU. An I/O-bound program spends most of its time waiting for I/O, with short CPU bursts, so it needs fewer cycles and a faster CPU helps it little. The scheduler must allocate the CPU around these differences: I/O-bound processes are typically given priority because they need the CPU only briefly before issuing their next I/O request, which keeps the devices busy while compute-bound processes fill in the remaining CPU time.

What is the difference between scheduling policy and mechanism?

Separating scheduling policy from mechanism means the kernel implements the scheduling mechanism (for example, priority scheduling), while the policy, the setting of the priorities, is parameterized so that user processes can control it. For example, a parent process that knows which of its children are most important can set their priorities, even though the kernel does the actual scheduling.

Define sequential process, multiprogramming, and multiuser.

Sequential process is an instance of an executing program, which includes all of the values stored in the program counter, registers, and variables for the program. Multiprogramming is when the CPU switches between processes to make them seem to run in parallel. Multiuser is a term used to describe an operating system or some software where more than one user can use the system or software at once. (http://en.wikipedia.org/wiki/Multi-user)

What is spooling? Do you think that advanced personal computers will have spooling as a standard feature in the future?

Spooling is the process, introduced with third-generation operating systems, in which jobs were continuously read from cards onto the computer's disk; when the computer finished running a job, the operating system loaded the next one from the disk and ran it. This allowed continuous operation of the CPU and was also used for output. Personal computers already have spooling as a standard feature: printing, for example, is normally spooled to disk and fed to the printer in the background, and that will remain the case.

What is the purpose of a system call in an operating system?

System calls allow user-level processes to request services of the operating system. System calls provide an interface to communicate with the operating system kernel.

List and explain the principal events of process creation.

System initialization: when an operating system is booted, several processes are created. They are background processes not associated with users but with specific functions. Execution of a process-creation system call by a running process: running processes often issue system calls to create one or more new processes to help them do their job. A user request to create a new process: users can start a program by typing a command or double-clicking an icon, thereby creating new processes. Initiation of a batch job: users can submit batch jobs to the system; when the OS decides it has the resources to run another job, it creates a new process and runs the next job from the queue.

What is TSL? How does it work? Why is this atomic? (Atomic: happens without a switch in processes.)

TSL, or Test and Set Lock, is an instruction that reads the contents of a memory word into a register and then stores a nonzero value into that word, all as one indivisible operation: the CPU executing TSL locks the memory bus so that no other CPU can access the word until the instruction is done. It is atomic because neither a process switch nor another processor can intervene between the test (the read) and the set (the write), which is exactly what is needed to build a lock without race conditions.
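A hedged C11 sketch of the same idea in software, using atomic_flag_test_and_set as the atomic read-modify-write (on real hardware a TSL or XCHG instruction plays this role); the enter_region/leave_region names follow the textbook's convention:

```c
#include <stdatomic.h>
#include <sched.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void enter_region(void)
{
    /* atomically set the flag and get its previous value;
       if it was already set, someone else holds the lock, so keep trying */
    while (atomic_flag_test_and_set(&lock))
        sched_yield();            /* give up the CPU instead of pure busy waiting */
}

void leave_region(void)
{
    atomic_flag_clear(&lock);     /* store 0: the lock is free again */
}
```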

On early computers, every byte of data read or written was handled by the CPU (i.e., there was no DMA). What implications does this have for multiprogramming?

Without DMA, the CPU itself has to move every byte to or from the device, so while a program is doing I/O the CPU is fully occupied with that transfer and cannot be given to another program. The main benefit of multiprogramming, keeping the CPU busy with one job while another waits for I/O, therefore largely disappears, and multiprogramming gains little (switching among programs could still be useful for interactive timesharing, but not for overlapping computation with I/O).

What is Peterson's solution? How many processes can it work with?

Peterson's solution is a software-only algorithm for mutual exclusion that needs no special hardware support. For two processes it uses a shared turn variable and one "interested" flag per process: before entering its critical region, a process sets its flag, sets turn to itself, and then busy-waits as long as the other process is interested and turn still points to itself. The original formulation works for exactly two processes, though the algorithm can be generalized to more.
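A sketch of the two-process version in C, closely following the enter_region/leave_region form used in the textbook; note that on modern hardware it also needs memory barriers or atomics to be strictly correct:

```c
#define FALSE 0
#define TRUE  1
#define N     2                       /* number of processes */

static int turn;                      /* whose turn is it? */
static int interested[N];             /* all values initially FALSE */

void enter_region(int process)        /* process is 0 or 1 */
{
    int other = 1 - process;          /* number of the other process */

    interested[process] = TRUE;       /* show that you are interested */
    turn = process;                   /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                             /* busy wait until it is safe to enter */
}

void leave_region(int process)
{
    interested[process] = FALSE;      /* indicate departure from critical region */
}
```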

What is the Producer-Consumer Problem? Where can there be race conditions in this problem?

The producer-consumer (bounded-buffer) problem: two processes share a common, fixed-size buffer; one of them, the producer, puts information into the buffer, and the other, the consumer, takes it out. Trouble arises when the producer wants to put a new item into a buffer that is already full (or the consumer wants to take an item from an empty one) and must sleep until the other side has made room or produced something. The race condition is on the shared bookkeeping, such as the count of items in the buffer: if the consumer reads count == 0 and is preempted just before going to sleep, the producer can insert an item and send a wakeup that is lost because the consumer is not yet asleep; when the consumer later sleeps, both can end up sleeping forever. Which interleaving occurs depends on exactly when the scheduler switches between them.
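A hedged sketch of the classic counting-semaphore solution using POSIX semaphores; the buffer size and item values are arbitrary:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                           /* number of slots in the buffer */

static int buffer[N], in_pos, out_pos;
static sem_t mutex;                   /* guards the buffer (initial value 1) */
static sem_t empty;                   /* counts empty slots (initial value N) */
static sem_t full;                    /* counts full slots  (initial value 0) */

static void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);             /* wait for an empty slot */
        sem_wait(&mutex);             /* enter critical region */
        buffer[in_pos] = item;
        in_pos = (in_pos + 1) % N;
        sem_post(&mutex);             /* leave critical region */
        sem_post(&full);              /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);              /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out_pos];
        out_pos = (out_pos + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);             /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```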

Consider a computer system that has cache memory, main memory (RAM) and disk, and the operating system uses virtual memory. It takes 2 nsec to access a word from the cache, 10 nsec to access a word from the RAM, and 10 ms to access a word from the disk. If the cache hit rate is 95% and main memory hit rate (after a cache miss) is 99%, what is the average time to access a word?

Average access time = 0.95 × 2 nsec (cache hit) + 0.05 × 0.99 × 10 nsec (cache miss, RAM hit) + 0.05 × 0.01 × 10 ms (miss both, go to disk) = 1.9 nsec + 0.495 nsec + 5,000 nsec ≈ 5,002 nsec, i.e., about 5 microseconds. The rare disk accesses dominate the average.

Define such that one can program the Dining Philosophers Problem.

The key to solving the Dining Philosophers problem is to make all checking and changing of fork status a critical region, so that only one philosopher may actively pick up or check forks at a time. To accomplish this, use a semaphore/mutex that is downed when thread X wants to try to eat; any other thread that attempts to eat at the "same time" must sleep (wait), because another thread is already checking forks. When thread X is done checking forks, the operating system can choose another sleeping thread and let it run. In other words, by using semaphores around the fork-checking functions, one can ensure no race conditions occur. Also, when a philosopher picks up one fork and cannot pick up the other, he or she should put the first fork back down; otherwise, if all five grab their left fork at once, you get deadlock.
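A compact sketch along the lines of the textbook's state-array solution, where a mutex serializes all checking and changing of fork state and a per-philosopher semaphore blocks anyone who cannot get both forks; the think/eat delays are placeholders:

```c
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N 5
#define LEFT  ((i + N - 1) % N)
#define RIGHT ((i + 1) % N)
#define THINKING 0
#define HUNGRY   1
#define EATING   2

static int state[N];                /* state of each philosopher */
static sem_t mutex;                 /* one thread checks/changes forks at a time */
static sem_t s[N];                  /* one semaphore per philosopher */

static void test(int i)             /* can philosopher i start eating? */
{
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);            /* wake philosopher i if blocked */
    }
}

static void take_forks(int i)
{
    sem_wait(&mutex);               /* enter critical region */
    state[i] = HUNGRY;
    test(i);                        /* try to acquire both forks */
    sem_post(&mutex);
    sem_wait(&s[i]);                /* block if the forks were not acquired */
}

static void put_forks(int i)
{
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT);                     /* maybe the left neighbour can eat now */
    test(RIGHT);                    /* maybe the right neighbour can eat now */
    sem_post(&mutex);
}

static void *philosopher(void *arg)
{
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        usleep(1000);               /* think */
        take_forks(i);
        usleep(1000);               /* eat */
        put_forks(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++)
        sem_init(&s[i], 0, 0);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```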

List the things that the Operating Systems must know to work with a process. Hint: The OS will save these things.

The process' address space, program counter, stack pointer, memory allocation, open file status, its accounting and scheduling information.

What is the reader/writer problem?

The readers/writers problem models access to a database: many readers may read at the same time, but a writer needs exclusive access, so the question is who gets priority (given that you want all users to see accurate, up-to-date information). In the solution discussed in the book, an arriving reader is admitted as long as other readers are already active, and a writer must wait until no readers remain; a variant instead makes newly arriving readers queue behind a waiting writer so that writers are not starved. The first approach gives greater concurrency for readers but lower performance for writers.

Define the word "process."

The text defines a process as an abstraction of a running program. In other words, a process is an instance of a computer program that is being executed.

Compare and contrast the batch scheduling algorithms.

The goals of batch scheduling are: throughput (maximize jobs per hour), turnaround time (minimize the time between submission and termination), and CPU utilization (keep the CPU busy all the time). The main algorithms: 1. First-come, first-served: processes are assigned the CPU in the order they request it. 2. Shortest job first: when run times are known in advance, the shortest job is scheduled first, which minimizes average turnaround time. 3. Shortest remaining time next: a preemptive version in which the scheduler always chooses the process whose remaining run time is shortest.

What is meant by the following: throughput, turnaround time, response time, and proportionality?

Throughput: the number of jobs per hour that the system completes. Turnaround time: the statistically average time from the moment that a batch job is submitted until the moment it is completed. Response time: the time between issuing a command and getting the result. Proportionality: the idea of users about how long things should take. Users can accept wait times when things that are perceived as complex take a long time, but become upset and/or angry when things take a long time that they feel should not take long at all.

The client-server model is popular in distributed systems. Can it also be used in a single-computer system?

Yes, the client and server services can be run on the same machine. In fact, in a single-computer system, certain optimizations can be made. Whether the model is used in a distributed system or in a single-machine system, the concept of message passing between the services remains the same.

Can the count = write(fd, buffer, nbytes); call return any value in count other than nbytes? If so, why?

Yes. The count can be less than nbytes if, for example, the disk fills up or the process hits its file-size limit after only part of the data has been written; on an error the call returns -1. (For a read, a short count is returned when end-of-file is reached before nbytes bytes have been read.) A value larger than nbytes is never returned.

Why is the process table needed in a timesharing system? Is it also needed in personal computer systems in which only one process exists, that process taking over the entire machine until it is finished?

a. The process table stores the information needed for each process (registers, program counter, state, and so on) so that a stopped process can later be restored and resumed, which is exactly what a timesharing system must do every time it switches among processes. b. No; in a system with a single process that is never switched out, there is no process-switching state to record.

For each of the following system calls give a condition that causes it to fail: fork, exec, and unlink.

fork: fails if the process table is full, or if the system is out of memory or swap space (or the user has hit a per-user process limit). exec: typically fails for one of three reasons: a) the file is not an executable (e.g., a JPEG or other file type), b) the file does not exist, or c) the current user does not have permission to execute it. unlink: fails if the file does not exist or the user does not have permission to remove the specified file.

Figure 1-23 shows that a number of UNIX system calls have no Win32 API equivalents. For each of the calls listed as having no Win32 equivalent, what are the consequences for a programmer of converting a UNIX program to run under Windows?

execve - The programmer must create each new process with a single CreateProcess call, which combines the effect of fork plus execve; there is no way to first duplicate the current process and then separately replace its memory image. link - The programmer cannot make two directory entries in different places refer to the same underlying file. mount/umount - The programmer cannot dynamically attach or detach file systems to build a single naming tree, and so must work with drive letters or other conventions instead. chmod - The programmer cannot set UNIX-style file permissions. kill - The programmer cannot send signals to specific processes, so any UNIX program that relies on signalling must be reworked.

