Chapter 6 Review Questions

9. What is a buffer? Why might one be used?

A buffer is an area of RAM implemented in a device controller, an I/O device, or a storage device. It's used to resolve differences in data transfer unit size or in the rates at which data is produced and consumed.
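
As a rough illustration (a minimal Python sketch, not how a real controller is built), the buffer below absorbs a unit-size mismatch between a producer that delivers one byte at a time and a consumer that reads fixed 512-byte blocks; the block size and function names are made up for the example.

    # A buffer resolving a unit-size mismatch: the producer adds single
    # bytes, the consumer removes whole 512-byte blocks.
    BLOCK_SIZE = 512          # unit size expected by the consumer
    buffer = bytearray()      # the buffer: an area of RAM

    def produce_byte(b):
        """Producer side: append one byte as it becomes available."""
        buffer.append(b)

    def consume_block():
        """Consumer side: remove one full block, or nothing if not enough data."""
        if len(buffer) < BLOCK_SIZE:
            return None                   # consumer waits; producer keeps filling
        block = bytes(buffer[:BLOCK_SIZE])
        del buffer[:BLOCK_SIZE]
        return block

    # Fill the buffer slowly, then drain it in one block-sized transfer.
    for i in range(BLOCK_SIZE):
        produce_byte(i % 256)
    assert consume_block() is not None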

2. What is a bus master? What is the advantage of having devices other than the CPU be a bus master?

A bus master is any device that can initiate a transfer across the bus. Allowing devices other than the CPU to be bus masters frees CPU cycles to perform other tasks. It also allows a form of parallelism in which peripheral-to-peripheral data transfers can occur simultaneously with instruction execution.

10. How can a cache be used to improve performance when reading data from and writing data to a storage device?

A cache controller attempts to guess what data will be requested next and prefetch this data into the cache. If the cache controller guesses correctly, data can be supplied more quickly. A cache controller confirms a write operation as soon as data is written to the cache but before it's written to the storage device. This improves the performance of a program waiting for write confirmation by reducing the interval between the write request and the write confirmation.
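
A minimal Python sketch of the two behaviors described above, using a dict for the cache and a list for the storage device; the block numbering, prefetch depth, and function names are illustrative, not a real cache controller interface.

    storage = [f"block-{i}" for i in range(100)]   # stands in for the storage device
    cache = {}                                      # block number -> cached data
    dirty = set()                                   # blocks written but not yet flushed

    def read_block(n, prefetch=2):
        """Serve from the cache if possible; otherwise fetch and prefetch ahead."""
        if n not in cache:
            # Guess that the next few sequential blocks will be requested soon.
            for k in range(n, min(n + 1 + prefetch, len(storage))):
                cache[k] = storage[k]
        return cache[n]

    def write_block(n, data):
        """Confirm the write as soon as the cache is updated."""
        cache[n] = data
        dirty.add(n)
        return "confirmed"      # caller continues before the storage device is touched

    def flush():
        """Later, copy dirty blocks from the cache to the storage device."""
        for n in dirty:
            storage[n] = cache[n]
        dirty.clear()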

8. What functions does a device controller perform?

A device controller translates logical accesses into physical accesses and translates messages between the bus protocol and the protocols used to control attached devices. A device controller can also perform multiplexing (that is, allowing multiple devices to share a single bus port).

13. What is a multicore processor? What are its advantages compared with multiple-processor architecture? Why have multicore processors been used only since the mid-2000s?

A multicore processor is a single microchip containing two or more fully functional CPUs. Compared with a multiple-processor architecture using the same number of CPUs of equivalent power, its main advantage is more efficient inter-CPU communication, which increases total computational power when multiple CPUs cooperate on the same task. Multicore processors didn't become available until the mid-2000s because only then had semiconductor manufacturing advanced to the point that enough transistors could be placed on a single chip to implement multiple CPUs and their memory caches.

7. What's the difference between a physical access and a logical access?

A physical access describes the storage location to be read or written in terms of the physical organization of storage locations (for example, track, sector, and head for a disk drive). A logical access assumes that the location to be read or written is contained in a linear address space. Therefore, the location is described by a single unsigned integer.
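
For example, a disk controller might map a logical block address (a single unsigned integer) to a cylinder/head/sector location. The Python sketch below uses the conventional conversion formulas with an assumed geometry of 16 heads and 63 sectors per track.

    HEADS = 16                 # heads (surfaces) per cylinder
    SECTORS = 63               # sectors per track

    def logical_to_physical(lba):
        """Map a single unsigned integer to (cylinder, head, sector)."""
        cylinder = lba // (HEADS * SECTORS)
        head = (lba // SECTORS) % HEADS
        sector = (lba % SECTORS) + 1       # sectors are conventionally numbered from 1
        return cylinder, head, sector

    print(logical_to_physical(0))      # (0, 0, 1)
    print(logical_to_physical(1008))   # (1, 0, 1) with this geometry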

6. Describe the execution of the push and pop operations.

A push operation copies register values to the top of the stack and increments the stack pointer (held in a special-purpose register) to point to the new stack top. A pop operation removes stack contents on a last in, first out (LIFO) basis: it decrements the stack pointer and copies one set of register values from the top of the stack back into the CPU registers. The last register value copied is the instruction pointer, which effectively transfers control back to the process whose state was just removed from the stack.
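
A minimal Python sketch of push and pop, with a list standing in for the memory region reserved for the stack and a dict standing in for one saved register set; the register names and values are made up for the example.

    stack = []                             # memory region reserved for the stack

    def push(registers):
        """Copy the current register values (including the instruction
        pointer) to the top of the stack."""
        stack.append(dict(registers))      # the stack pointer advances implicitly

    def pop():
        """Remove the most recently pushed register set (LIFO) so it can be
        copied back into the CPU registers; restoring the instruction
        pointer resumes the suspended process."""
        return stack.pop()

    # A process is suspended, an interrupt handler runs, then the process resumes.
    push({"ip": 0x4000, "ax": 7, "sp": 0xFF00})
    restored = pop()
    assert restored["ip"] == 0x4000        # execution continues where it stopped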

4. What is an interrupt? How is an interrupt generated? How is it processed?

An interrupt is a signal to the OS that a request or event has occurred that requires its attention. Interrupts are numeric codes and can be generated by peripheral devices, an explicit software instruction, or the CPU itself. Peripheral device interrupts are sent over the system bus, detected by the CPU, and placed in an interrupt register; software- and CPU-generated interrupts are placed in the interrupt register by the CPU. To process an interrupt, the CPU suspends the currently executing process (pushing its register values onto the stack), uses the interrupt code to look up the corresponding handler in the interrupt table, and transfers control to that handler. When the handler finishes, the suspended process's register values are popped and execution resumes.
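
A minimal Python sketch of that processing sequence: an interrupt code is placed in an interrupt register, and the CPU checks the register and dispatches to a handler through an interrupt table. The codes and handler actions are invented for the example.

    interrupt_table = {
        1: lambda: print("handle keyboard input"),
        2: lambda: print("handle disk transfer complete"),
        3: lambda: print("handle arithmetic overflow"),
    }

    interrupt_register = None      # holds the code of the most recent interrupt

    def raise_interrupt(code):
        """A device, a software instruction, or the CPU places a code here."""
        global interrupt_register
        interrupt_register = code

    def check_and_dispatch():
        """At the end of each fetch/execute cycle the CPU checks the register
        and, if it is set, suspends the current process and runs the handler."""
        global interrupt_register
        if interrupt_register is not None:
            handler = interrupt_table[interrupt_register]
            interrupt_register = None
            handler()

    raise_interrupt(2)
    check_and_dispatch()           # prints "handle disk transfer complete"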

12. Describe how scaling up differs from scaling out. Given the speed difference between a typical system bus and a typical high-speed network, is it reasonable to assume that both approaches can yield similar increases in total computational power?

Compared with scaling out, scaling up uses fewer but more powerful computer systems to increase available computing power. Scaling out uses a larger number of less powerful systems, often distributed across locations or organized into multicomputer configurations. All other things being equal, scaling up typically yields greater increases in computing power because communication within a single computer is faster than communication between computers. However, all other things are rarely equal. Applications that rely heavily on external data or other resources aren't slowed as much by scaling out, and scaling out brings other benefits, including flexibility and the ability to use lower-cost hardware. The best approach for an organization depends on many factors besides raw computational speed. Today, for all but the most computationally demanding tasks, scaling out tends to offer greater net benefits than scaling up.
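
A back-of-the-envelope comparison helps make the speed argument concrete. The figures below are assumed round numbers (roughly 20 GB/s for a system bus and 10 Gbps for a network link), not measurements of any particular hardware.

    # Moving 1 GB of shared data between cooperating CPUs over a system bus
    # versus over a high-speed network (assumed round numbers).
    data_bytes = 1 * 10**9

    bus_bytes_per_sec = 20 * 10**9         # assumed system bus throughput
    net_bytes_per_sec = 10 * 10**9 / 8     # assumed 10 Gbps network link

    print(f"bus transfer:     {data_bytes / bus_bytes_per_sec:.3f} s")
    print(f"network transfer: {data_bytes / net_bytes_per_sec:.3f} s")
    # The order-of-magnitude gap is why scaling up wins on raw speed when
    # tasks must share a lot of data, even though scaling out is often
    # cheaper and more flexible.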

11. What's the difference between lossy and lossless compression? For what types of data is lossy compression normally used?

Lossless compression compresses data without losing any information; the original data can be recovered entirely by decompression. Lossy compression discards some data that can never be recovered. Lossy compression is most commonly used with continuous audio or video streams because the human brain doesn't detect the missing data or "fills in" the missing pieces automatically. Examples of lossy compression methods include MPEG, JPEG, and MP3.
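
A minimal Python sketch of one simple lossless method, run-length encoding, showing that decompression recovers the original data exactly; real formats such as MPEG, JPEG, and MP3 are far more sophisticated.

    def rle_encode(data):
        """Replace runs of repeated values with (value, count) pairs."""
        encoded = []
        for value in data:
            if encoded and encoded[-1][0] == value:
                encoded[-1] = (value, encoded[-1][1] + 1)
            else:
                encoded.append((value, 1))
        return encoded

    def rle_decode(encoded):
        """Expand (value, count) pairs back into the original sequence."""
        return [value for value, count in encoded for _ in range(count)]

    original = list("AAAABBBCCD")
    compressed = rle_encode(original)          # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
    assert rle_decode(compressed) == original  # nothing was lost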

5. What is a stack? Why is it needed?

The stack is an area of memory that holds register values of suspended processes. It's needed because register values represent a suspended program's state. These values must be restored to CPU registers to allow a program to resume execution at the point it was suspended. Multiple sets of register values must be stored when interrupts of higher priority cause lower-priority interrupt handlers to be suspended and placed on the stack.

1. What is the system bus? What are its primary components?

The system bus is the communication channel that connects all computer components. It physically consists of parallel transmission lines that can be grouped into those carrying memory addresses (the address bus), those carrying control and status signals (the control bus), and those carrying data (the data bus). Logically, the bus also includes the bus protocol, the set of rules governing access to and use of those transmission lines.

3. What characteristics of the CPU and of the system bus should be balanced to achieve maximum system performance?

The width of the data bus should equal or exceed the CPU word size so that a full word can be transferred in a single bus cycle. Ideally, the bus clock rate would also match the CPU clock rate, though in practice this is difficult or impossible to achieve.
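
A quick calculation shows why the balance matters. The word size and clock rates below are assumed round numbers, not specifications of any particular system.

    # Can the bus keep the CPU supplied with data? (assumed round numbers)
    word_size_bytes = 8                    # 64-bit CPU word
    cpu_clock_hz = 3 * 10**9               # assumed 3 GHz CPU
    bus_width_bytes = 8                    # data bus at least as wide as the word
    bus_clock_hz = 1 * 10**9               # assumed 1 GHz bus clock

    cpu_demand = word_size_bytes * cpu_clock_hz      # bytes/s the CPU could consume
    bus_supply = bus_width_bytes * bus_clock_hz      # bytes/s the bus can deliver

    print(f"CPU demand: {cpu_demand / 10**9:.0f} GB/s")
    print(f"Bus supply: {bus_supply / 10**9:.0f} GB/s")
    # The shortfall shows why bus clock rates lag CPU clock rates and why
    # caches and wider buses are used to bridge the gap.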

