Homework 8

False

The L1 cache is slower and typically larger than the L2 cache.

memory subsystem

The Pentium 4 processor core consists of four major components: out-of-order execution logic, fetch/decode unit, execution units, and __________.

execution units

The _________ execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers.

FIFO

The _________ replacement algorithm replaces the block in the set that has been in the cache longest.

L3

The __________ cache is slower and typically larger than the L2 cache.

fetch/decode unit

The __________ fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache.

exclusive

The __________ policy dictates that a piece of data in one cache is guaranteed not to be found in all lower levels of caches.

inclusive

The __________ policy dictates that a piece of data in one cache is guaranteed to be also found in all lower levels of caches.

LRU

The __________ replacement algorithm replaces the block in the set that has been in the cache longest with no reference to it.
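
For illustration, a minimal sketch of LRU replacement for a single cache set, using Python's OrderedDict; the set size and tag values are invented for the example, not taken from the homework.

from collections import OrderedDict

class LRUSet:
    """One cache set with LRU replacement: the resident block that has
    gone longest without a reference is the one evicted."""
    def __init__(self, ways):
        self.ways = ways             # associativity (blocks per set)
        self.blocks = OrderedDict()  # tag -> data, oldest reference first

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)     # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)  # evict the least recently used
        self.blocks[tag] = "data"            # fill from the next memory level
        return "miss"

s = LRUSet(ways=2)
print([s.access(t) for t in (0x1A, 0x2B, 0x1A, 0x3C)])  # miss, miss, hit, miss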

True

The cache contains a copy of portions of the main memory.

True

The direct mapping technique is simple and inexpensive to implement.

True

The evolution of cache organization is seen clearly in the evolution of Intel microprocessors.

True

The key advantage of the split cache design is that it eliminates contention for the cache between the instruction fetch/decode unit and the execution unit.

True

The length of a line, not including tag and control bits, is the line size.

True

The main disadvantage of the write through technique is that it generates substantial memory traffic and may create a bottleneck.

associative

The principal disadvantage of _________ mapping is the complex circuitry required to examine the tags of all cache lines in parallel.

direct mapping

The simplest technique for logically organizing cache is __________.
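
A minimal sketch of how direct mapping interprets an address as tag, line, and word fields; the 16-byte line and 256-line geometry is an assumption made up for the example.

LINE_SIZE = 16    # bytes per line -> 4 word bits
NUM_LINES = 256   # lines in cache -> 8 line bits
WORD_BITS = 4
LINE_BITS = 8

def split_address(addr):
    word = addr & (LINE_SIZE - 1)                  # byte offset within the line
    line = (addr >> WORD_BITS) & (NUM_LINES - 1)   # the single line this block may use
    tag  = addr >> (WORD_BITS + LINE_BITS)         # stored to identify which block is resident
    return tag, line, word

print(split_address(0x12345))   # (18, 52, 5), i.e. tag 0x12, line 0x34, word 0x5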

line size

The term _________ refers to the number of data bytes, or block size, contained in a line.

True

There is an additional time element for comparing tag fields for both associative and set-associative caches.

exclusive

Three inclusion policies are found in contemporary cache systems: the inclusive policy, the __________ policy, and the non-inclusive policy.

associative

Three mapping techniques that are used for logically organizing cache are direct, __________, and set-associative.

frame

To distinguish between the data transferred and the chunk of physical memory, the term _________ is sometimes used with reference to caches.

write-through

Using the __________ technique, all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
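
An illustrative sketch of the write-through policy (class and field names are invented): every store updates main memory as well as the cache, which is why memory stays valid but memory traffic grows.

class WriteThroughCache:
    def __init__(self):
        self.cache = {}    # line address -> data held in the cache
        self.memory = {}   # stand-in for main memory

    def write(self, addr, value):
        if addr in self.cache:
            self.cache[addr] = value   # keep the cached copy current
        self.memory[addr] = value      # every write also goes to main memory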

True

Victim cache was originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time.

hardware transparency

With __________, additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches.

True

With the exception of smaller embedded systems, all modern computer systems employ one or more layers of cache memory.

tag

________ is a portion of a cache line that is used for addressing purposes.

LFU

_________ is a replacement algorithm that replaces the block in the set that has experienced the fewest references.
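
For contrast with LRU, a minimal LFU sketch (illustrative only, with invented names): a reference count is kept per resident block, and the block with the fewest references is the victim.

class LFUSet:
    def __init__(self, ways):
        self.ways = ways
        self.refs = {}   # tag -> reference count for blocks in the set

    def access(self, tag):
        if tag in self.refs:
            self.refs[tag] += 1
            return "hit"
        if len(self.refs) >= self.ways:
            victim = min(self.refs, key=self.refs.get)  # fewest references
            del self.refs[victim]
        self.refs[tag] = 1
        return "miss"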

FIFO

_________ is easily implemented as a round-robin or circular buffer technique.
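
An illustrative sketch of FIFO replacement as a round-robin (circular buffer) pointer over one set; the set size is arbitrary.

class FIFOSet:
    def __init__(self, ways):
        self.slots = [None] * ways   # tags currently resident in the set
        self.next = 0                # circular pointer to the next victim

    def access(self, tag):
        if tag in self.slots:
            return "hit"
        self.slots[self.next] = tag                    # replace the oldest resident
        self.next = (self.next + 1) % len(self.slots)  # advance the round-robin pointer
        return "miss"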

block

_________ is the minimum unit of transfer between cache and main memory.

set-associative

_________ mapping is a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.
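
A minimal sketch of the compromise: a block maps to exactly one set (as in direct mapping) but may occupy any way within that set (as in associative mapping). The 4-set, 2-way, 16-byte-line geometry is invented for the example.

NUM_SETS, WAYS, LINE_SIZE = 4, 2, 16

sets = [[] for _ in range(NUM_SETS)]   # each set holds up to WAYS tags

def access(addr):
    block = addr // LINE_SIZE          # word field: byte offset dropped
    index = block % NUM_SETS           # set field: selects exactly one set
    tag   = block // NUM_SETS          # tag field: searched within that set only
    ways = sets[index]
    if tag in ways:
        return "hit"
    if len(ways) == WAYS:
        ways.pop(0)                    # a replacement policy is needed, FIFO here
    ways.append(tag)
    return "miss"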

high-performance

__________ computing deals with supercomputers and their software, especially for scientific applications that involve large amounts of data, vector and matrix computation, and the use of parallel algorithms.

CAM (content-addressable memory)

__________ is constructed of static RAM cells but is considerably more expensive and holds much less data than regular SRAM chips.

cache

__________ memory is designed to combine the memory access time of expensive, high-speed memory with the large memory size of less expensive, lower-speed memory.
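
A hedged illustration of why the combination pays off, using the standard effective-access-time calculation; the timings and hit ratio below are made-up figures, not values from the homework.

hit_time, memory_time, hit_ratio = 2.0, 60.0, 0.95   # ns, ns, fraction of hits
effective = hit_ratio * hit_time + (1 - hit_ratio) * (hit_time + memory_time)
print(effective)   # 5.0 ns: close to cache speed while offering main-memory capacity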

physical

A ________ cache stores data using main memory physical addresses.

True

A logical cache stores data using virtual addresses.

virtual

A logical cache, also known as a virtual cache, stores data using __________ addresses.

True

Associative mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache.
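
An illustrative sketch of fully associative lookup (cache size invented): the incoming tag must be checked against every line, which in hardware is the parallel tag-comparison circuitry mentioned earlier.

NUM_LINES, LINE_SIZE = 8, 16
lines = [None] * NUM_LINES             # one tag per line; any block may go anywhere

def access(addr):
    tag = addr // LINE_SIZE            # the whole block number serves as the tag
    if tag in lines:                   # conceptually, all tags are examined at once
        return "hit"
    victim = lines.index(None) if None in lines else 0   # a replacement policy is needed
    lines[victim] = tag
    return "miss"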

locality

Because of the phenomenon of ________________, when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that there will be future references to that same memory location or to other words in the block.
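
An illustrative sketch of locality (block size invented): once a block is fetched for one word, sequential accesses to neighbouring words in the same block hit.

LINE_SIZE = 16                        # bytes per block
loaded_blocks = set()

def access(addr):
    block = addr // LINE_SIZE
    if block in loaded_blocks:
        return "hit"
    loaded_blocks.add(block)          # fetch the whole block on a miss
    return "miss"

# Walking an array 4 bytes at a time: one miss per 16-byte block, then hits.
print([access(a) for a in range(0, 32, 4)])
# ['miss', 'hit', 'hit', 'hit', 'miss', 'hit', 'hit', 'hit']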

False

Cache design for high-performance computing is the same as it is for other hardware platforms and applications.

False

FIFO is the most popular replacement algorithm.

set

For set-associative mapping, the cache control logic interprets a memory address as three fields: Tag, _________, and Word.

thrashing

The phenomenon in which a program happens to reference words repeatedly from two different blocks that map into the same line, so that the blocks are continually swapped in the cache and the hit ratio is low, is known as __________.
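
A small worked illustration of thrashing in a direct-mapped cache (the 4-line, 16-byte-line geometry is invented): two blocks that share a line evict each other on every reference, so every access misses.

NUM_LINES, LINE_SIZE = 4, 16
cache = [None] * NUM_LINES              # one block number remembered per line

def access(addr):
    block = addr // LINE_SIZE
    line = block % NUM_LINES            # both 0x000 and 0x040 map to line 0
    hit = cache[line] == block
    cache[line] = block
    return "hit" if hit else "miss"

print([access(a) for a in (0x000, 0x040, 0x000, 0x040)])  # all misses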

False

One of the disadvantages of a direct-mapped cache is that it allows simple and fast speculation.

