Chapter 5
A logical cache stores data using virtual addresses.
True
Associative mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache.
True
The direct mapping technique is simple and inexpensive to implement.
True
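As a worked illustration (not from the text): direct mapping assigns main memory block j to cache line i = j mod m, where m is the number of cache lines, so the line number can be read straight off the address bits. A minimal Python sketch, with the geometry (256 lines of 16 bytes) chosen purely for illustration:

    # Assumed geometry for illustration: 256 lines of 16 bytes each.
    NUM_LINES = 256
    LINE_SIZE = 16

    def direct_map(address):
        """Split a byte address into (tag, line, word) for a direct-mapped cache."""
        word = address % LINE_SIZE        # offset within the line
        block = address // LINE_SIZE      # main memory block number j
        line = block % NUM_LINES          # i = j mod m
        tag = block // NUM_LINES          # remaining high-order bits
        return tag, line, word

    print(direct_map(0x1234))  # (1, 35, 4): block 0x123 lands in line 35 with tag 1

Only the single tag stored at that one line needs to be checked on a lookup, which is why the circuitry stays simple and cheap.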
The evolution of cache organization is seen clearly in the evolution of Intel microprocessors.
True
The key advantage of the split cache design is that it eliminates contention for the cache between the instruction fetch/decode unit and the execution unit.
True
The length of a line, not including tag and control bits, is the line size.
True
The main disadvantage of the write through technique is that it generates substantial memory traffic and may create a bottleneck.
True
There is an additional time element for comparing tag fields for both associative and set-associative caches.
True
Victim cache was originally proposed as an approach to reduce the conflict misses of direct-mapped caches without affecting their fast access time.
True
With the exception of smaller embedded systems, all modern computer systems employ one or more layers of cache memory.
True
The principal disadvantage of _________ mapping is the complex circuitry required to examine the tags of all cache lines in parallel. A. set-associative B. associative C. stack mapping D. direct
associative
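A software model of that lookup has to loop over the lines, whereas the hardware compares every tag simultaneously; that parallel comparison logic is exactly the complex circuitry the question refers to. A minimal sketch (the list-of-tuples cache representation is an assumption for illustration):

    def associative_lookup(cache, address, line_size=16):
        """Fully associative search: a block may occupy any line,
        so the entire block number serves as the tag."""
        tag = address // line_size       # no line field in the address
        word = address % line_size
        for entry_tag, data in cache:    # hardware does these compares in parallel
            if entry_tag == tag:
                return data[word]        # hit
        return None                      # miss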
Three mapping techniques that are used for logically organizing cache are direct, __________, and set-associative.
associative
_________ is the minimum unit of transfer between cache and main memory. A. Frame B. Tag C. Block D. Line
block
__________ memory is designed to combine the memory access time of expensive, high-speed memory with the large memory size of less expensive, lower-speed memory.
cache
The simplest technique for logically organizing cache is __________. A. stack mapping B. associative mapping C. direct mapping D. set-associative mapping
direct mapping
The __________ policy dictates that a piece of data in one cache is guaranteed not to be found in all lower levels of caches. A. write allocate B. exclusive C. non-inclusive D. inclusive
exclusive
Three inclusion policies are found in contemporary cache systems: the inclusive policy, the __________ policy, and the non-inclusive policy.
exclusive
The _________ execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers. A. execution units B. memory subsystem C. out-of-order execution logic D. fetch/decode unit
execution units
The __________ fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache. A. execution units B. out-of-order execution logic C. fetch/decode unit D. memory subsystem
fetch/decode unit
To distinguish between the data transferred and the chunk of physical memory, the term _________ is sometimes used with reference to caches. A. tag B. frame C. line D. block
frame
With __________ additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches. A. noncacheable memory B. hardware transparency C. bus watching D. write through
hardware transparency
__________ computing deals with supercomputers and their software, especially for scientific applications that involve large amounts of data, vector and matrix computation, and the use of parallel algorithms.
high-performance
The __________ policy dictates that a piece of data in one cache is guaranteed to also be found in all lower levels of caches. A. Non-inclusive B. write allocate C. inclusive D. exclusive
inclusive
The term _________ refers to the number of data bytes, or block size, contained in a line.
line size
Because of the phenomenon of __________, when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that there will be future references to that same memory location or to other words in the block.
locality of reference
The Pentium 4 processor core consists of four major components: out-of-order execution logic, fetch/decode unit, execution units, and __________.
memory subsystem
A __________ cache stores data using main memory physical addresses.
physical
For set-associative mapping, the cache control logic interprets a memory address as three fields: Tag, _________, and Word.
set
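As an illustrative decoding (the field widths are assumed, not from the text): with 16-byte lines and 128 sets, the low 4 bits of the address select the word, the next 7 bits select the set, and the remaining high-order bits form the tag, which is compared against every line in the selected set:

    LINE_SIZE = 16   # assumed: 16-byte lines -> 4 word bits
    NUM_SETS = 128   # assumed: 128 sets -> 7 set bits

    def split_set_associative(address):
        """Interpret a byte address as (tag, set, word) fields."""
        word = address % LINE_SIZE
        set_index = (address // LINE_SIZE) % NUM_SETS
        tag = address // (LINE_SIZE * NUM_SETS)
        return tag, set_index, word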
__________ mapping is a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.
set-associative
_________ is a portion of a cache line that is used for addressing purposes. A. Tag B. Line C. Block D. Frame
tag
If a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continually swapped in the cache and the hit ratio will be low, a phenomenon known as __________. A. tagging B. split caching C. cache missing D. thrashing
thrashing
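A tiny simulation makes the effect concrete; the geometry and access pattern below are invented for illustration. In a 256-line direct-mapped cache, blocks 5 and 261 both map to line 5, so alternating between them misses on every reference:

    NUM_LINES = 256
    lines = {}        # line index -> block currently resident there
    misses = 0
    for block in [5, 5 + NUM_LINES] * 50:    # two conflicting blocks, 100 references
        i = block % NUM_LINES                # both map to line 5
        if lines.get(i) != block:            # the other block evicted us: miss
            misses += 1
            lines[i] = block
    print(misses)  # 100 -> the hit ratio is zero

Any associativity at all (even two-way) would let both blocks coexist and eliminate the thrashing.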
A logical cache, also known as a virtual cache, stores data using __________ addresses.
virtual
Using the __________ technique, all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
write through
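A minimal sketch of the policy, with plain dictionaries standing in for the cache and main memory (an assumption for illustration):

    def write_through(cache, memory, address, value):
        """Write-through: main memory is updated on every store,
        so it always holds valid data. The cost is the memory
        traffic this generates, as noted above."""
        memory[address] = value      # always write main memory
        if address in cache:         # refresh the cached copy on a write hit
            cache[address] = value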
___________ is constructed of static RAM cells but is considerably more expensive and holds much less data than regular SRAM chips.
Content-addressable memory (CAM)
The _________ replacement algorithm replaces the block in the set that has been in the cache longest. A. LFU B. LCA C. FIFO D. LRU
FIFO
__________ is easily implemented as a round-robin or circular buffer technique. A. FIFO B. LRU C. LFU D. CBT
FIFO
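A minimal sketch of that circular-buffer idea (the per-set structure is an assumption for illustration): one pointer per set names the next victim and simply advances round-robin after each replacement:

    class FifoSet:
        """One cache set using FIFO replacement via a circular pointer."""
        def __init__(self, ways):
            self.blocks = [None] * ways   # resident block numbers
            self.next_victim = 0          # round-robin pointer

        def replace(self, new_block):
            self.blocks[self.next_victim] = new_block   # evict the oldest block
            self.next_victim = (self.next_victim + 1) % len(self.blocks)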
Cache design for high-performance computing is the same as it is for other hardware platforms and applications.
False
FIFO is the most popular replacement algorithm.
False
One of the disadvantages of a direct-mapped cache is that it allows simple and fast speculation.
False
The L1 cache is slower and typically larger than the L2 cache.
False
The cache contains a copy of portions of the main memory.
True
The __________ cache is slower and typically larger than the L2 cache. A. L1 B. L2 C. L3 D. L4
L3
The __________ replacement algorithm replaces the block in the set that has been in the cache longest with no reference to it. A. LRU B. CLM C. FIFO D. LFU
LRU
__________ is a replacement algorithm that replaces the block in the set that has experienced the fewest references.
LFU
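The two policies differ only in the per-line bookkeeping they consult; a minimal sketch (the dictionaries are assumptions for illustration):

    def lru_victim(last_used):
        """last_used: block -> time of its most recent reference.
        LRU evicts the block unreferenced for the longest time."""
        return min(last_used, key=last_used.get)

    def lfu_victim(ref_count):
        """ref_count: block -> number of references so far.
        LFU evicts the block with the fewest references."""
        return min(ref_count, key=ref_count.get)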