T/F
An asynchronous bus is not clocked.
T
Conflict misses do not occur in fully associative cache memories.
T
A direct-mapped cache of size N has about the same miss rate as a 2-way set-associative cache of size N/2.
T
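This is the classic 2:1 cache rule of thumb (an empirical observation, not an identity). A quick way to see where the conflict misses come from is to simulate both organizations on the same trace. The sketch below is illustrative only: block size of one word, LRU replacement, and a deliberately conflict-heavy trace are all assumptions.

    def simulate(trace, num_sets, ways):
        """Count misses for a set-associative cache with LRU replacement.
        Addresses are block addresses, i.e. block size is one word."""
        sets = [[] for _ in range(num_sets)]      # each set: tags, LRU first
        misses = 0
        for addr in trace:
            idx, tag = addr % num_sets, addr // num_sets
            tags = sets[idx]
            if tag in tags:
                tags.remove(tag)                  # hit: refresh LRU position
            else:
                misses += 1
                if len(tags) == ways:
                    tags.pop(0)                   # evict least recently used
            tags.append(tag)
        return misses

    N = 8                                          # direct-mapped capacity in blocks
    trace = [0, 8, 0, 8, 1, 9, 1, 9] * 4           # assumed conflict-heavy trace
    dm  = simulate(trace, num_sets=N,      ways=1) # direct-mapped, size N
    sa2 = simulate(trace, num_sets=N // 4, ways=2) # 2-way, size N/2
    print(dm, sa2)                                 # 32 vs 4 misses on this trace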
For a given capacity and block size, a set-associative cache implementation will typically have a higher hit time than a direct-mapped implementation.
T
If I/O devices are connected to the CPU through the main memory bus, processor performance may decrease, since I/O commands could interfere with CPU memory accesses.
T
MIPS (million instructions per second) can be defined as (instruction count) / (CPU time in seconds) × 1/10^6.
T
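A worked check of this definition (the instruction count and CPU time below are made-up numbers):

    # MIPS = instruction_count / (cpu_time_seconds * 10**6)
    instruction_count = 4_000_000    # assumed: program executes 4M instructions
    cpu_time_seconds = 2.0           # assumed: measured CPU time
    print(instruction_count / (cpu_time_seconds * 10**6))   # 2.0 -> 2 MIPS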
Memory buses are usually chosen for speed, whereas I/O buses are adopted primarily for compatibility (industry standards) and cost.
T
Memory interleaving is a technique for reducing memory access time through increased bandwidth utilization of the data bus.
T
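The point of interleaving is that consecutive block addresses map to different banks, so back-to-back accesses can overlap instead of serializing on one bank. A minimal sketch of the usual low-order mapping (the 4-bank configuration is an assumption):

    NUM_BANKS = 4                       # assumed 4-way interleaved memory

    def bank_of(addr):
        """Low-order interleaving: consecutive addresses hit consecutive banks."""
        return addr % NUM_BANKS

    # A sequential burst touches every bank once before reusing any of them,
    # so the four accesses can proceed in parallel.
    print([bank_of(a) for a in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]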
Software Pipelining can be used on superscalar processors.
T
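Software pipelining restructures a loop so operations from different iterations overlap, which gives a superscalar core independent instructions to issue together. A sketch in Python-as-pseudocode, just to show the prologue/kernel/epilogue shape; the load/compute split and the loop body are assumptions:

    def load(a, i):
        return a[i]                      # stands in for a memory load

    def compute(x):
        return x * 2 + 1                 # stands in for the loop's ALU work

    def pipelined_sum(a):
        total = 0
        cur = load(a, 0)                 # prologue: first load runs alone
        for i in range(1, len(a)):       # kernel: load(i) overlaps compute(i-1)
            nxt = load(a, i)             # these two lines are independent, so a
            total += compute(cur)        # superscalar core can issue them together
            cur = nxt
        total += compute(cur)            # epilogue: drain the last compute
        return total

    print(pipelined_sum([1, 2, 3]))      # 15, same result as the plain loop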
TLB misses in a virtual memory system can occur and can be handled by software.
T
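On machines with software-managed TLBs (MIPS is the classic example), a TLB miss traps to a handler that walks the page table and refills the TLB itself. A minimal sketch; the single-level page table and all data structures here are assumptions:

    PAGE_SIZE = 4096
    tlb = {}                                    # virtual page number -> frame number
    page_table = {0: 7, 1: 3, 2: 9}             # assumed single-level page table

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in tlb:                      # TLB miss: trap to software
            if vpn not in page_table:
                raise MemoryError("page fault") # handler may escalate to the OS
            tlb[vpn] = page_table[vpn]          # refill the TLB, then retry
        return tlb[vpn] * PAGE_SIZE + offset

    print(hex(translate(0x1abc)))               # vpn 1 -> frame 3 -> 0x3abc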
"Y is p% faster than X" can be defined as: 𝑡𝑖𝑚𝑒 𝑜𝑓 𝑌/𝑡𝑖𝑚𝑒 𝑜𝑓 𝑋 = 1 + 𝑝/100
F
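The statement has the ratio inverted: "Y is p% faster than X" means time of X / time of Y = 1 + p/100, with the slower machine's time in the numerator. A worked check with made-up times:

    time_x, time_y = 15.0, 10.0        # assumed: Y finishes in 10 s, X in 15 s
    p = (time_x / time_y - 1) * 100    # correct form: slower time on top
    print(p)                           # 50.0 -> Y is 50% faster than X
    # The statement's form, time_y / time_x = 1 + p/100, gives p = -33.3,
    # which clearly does not say "Y is p% faster than X".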
'Polling' is always a better mechanism than 'interrupts' for handling I/O operations on a network interface card.
F
ATM has a variable message size and Ethernet has a fixed message size.
F
Asynchronous buses cannot be long due to clock skew restrictions.
F
Both DRAM and SRAM must be refreshed periodically using a dummy read/write operation.
F
Branch history tables typically eliminate more stall cycles than branch target buffers.
F
For a given capacity and block size, a set-associative cache implementation will typically have a lower hit time than a direct-mapped implementation.
F
For a given capacity and block size, a set-associative cache implementation will typically have a lower miss penalty than a direct-mapped implementation.
F
In a write-back cache, a read miss always causes a write to the lower memory level.
F
In a write-through cache, a read miss can cause a write to the lower memory level.
F
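Both of these answers follow from the dirty bit. In a write-back cache, a read miss writes the evicted block to the lower level only when that block is dirty; in a write-through cache, blocks are never dirty, so a read miss never writes downward. A minimal sketch of the read-miss path (reduced to a single decision to keep it short):

    def read_miss_writes(victim_dirty, write_back):
        """Writes to the lower level caused by one read miss."""
        # Write-back: flush the victim only if its dirty bit is set.
        # Write-through: the lower level is already up to date, so never.
        return 1 if (write_back and victim_dirty) else 0

    print(read_miss_writes(victim_dirty=True,  write_back=True))   # 1: flush dirty victim
    print(read_miss_writes(victim_dirty=False, write_back=True))   # 0: clean victim dropped
    print(read_miss_writes(victim_dirty=False, write_back=False))  # 0: write-through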
Increasing the size of a cache results in lower miss rates and higher performance.
F
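Why false? Average memory access time is AMAT = hit time + miss rate × miss penalty, and a larger cache usually raises the hit time even as it lowers the miss rate. A worked example with assumed numbers where the larger cache loses:

    def amat(hit_time, miss_rate, miss_penalty):
        return hit_time + miss_rate * miss_penalty

    # Assumed numbers: doubling the cache trims the miss rate a little
    # but stretches the hit time (longer wires, more tags to compare).
    small = amat(hit_time=1.0, miss_rate=0.040, miss_penalty=50)   # 3.00 cycles
    large = amat(hit_time=1.4, miss_rate=0.035, miss_penalty=50)   # 3.15 cycles
    print(small, large)   # the larger cache is slower on average here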
It is possible to eliminate all the stalls due to data dependencies with a special hardware mechanism.
F
Loop-carried dependencies can be completely eliminated by a hardware mechanism at runtime.
F
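For example, in the assumed recurrence below each iteration reads the value the previous iteration wrote, so the iterations form a chain that no hardware mechanism can break in general:

    a = [1.0] * 8
    # Loop-carried dependence: iteration i needs the result of iteration
    # i-1, so the iterations cannot be overlapped or reordered.
    for i in range(1, len(a)):
        a[i] = a[i - 1] * 2.0 + 1.0
    print(a)    # [1.0, 3.0, 7.0, 15.0, ...]: each value built from the last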
Loops with intra-iteration dependencies can be executed in parallel.
F
Magnetic disks are volatile storage devices.
F
Page tables in virtual memory systems are stored in a special cache memory.
F
RAID 5 can recover from a two-disk failure.
F
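RAID 5 keeps one (distributed) parity block per stripe, and XOR parity can reconstruct exactly one missing disk; with two disks gone, the single XOR equation has two unknowns. A minimal single-failure reconstruction sketch (the 4-disk stripe is an assumption):

    from functools import reduce

    stripe = [0b1011, 0b0110, 0b1100]                 # data blocks on 3 disks
    parity = reduce(lambda x, y: x ^ y, stripe)       # parity block on the 4th

    # Lose any ONE data disk: XOR of the survivors and the parity restores it.
    lost = 1
    survivors = [b for i, b in enumerate(stripe) if i != lost]
    rebuilt = reduce(lambda x, y: x ^ y, survivors + [parity])
    print(rebuilt == stripe[lost])                    # True

    # Lose TWO disks and no combination of the remaining blocks can pin
    # down either missing one.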
RAID 5 uses more check disks than RAID 3 for the same number of data disks.
F
SRAMs must be refreshed periodically to prevent loss of information.
F
The main difference between DRAM (the technology used to implement main memory) and SRAM (the technology used to implement caches) is that DRAM is optimized for access speed while SRAM is optimized for density.
F
The multi-cycle data path is always faster than the single-cycle data path.
F
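A worked comparison with assumed numbers shows why "always" fails. The single-cycle clock must fit the slowest instruction; the multi-cycle design uses a short clock but several cycles per instruction, so the winner depends on the instruction mix:

    # Assumed stage latencies (ns): IF, ID, EX, MEM, WB for a load.
    stages = {"IF": 200, "ID": 100, "EX": 200, "MEM": 200, "WB": 100}

    # Single-cycle: the clock covers all stages of the slowest instruction.
    single_cycle_time = sum(stages.values())          # 800 ns per instruction

    # Multi-cycle: clock = slowest stage; cycle count varies per instruction.
    clock = max(stages.values())                      # 200 ns
    cycles = {"load": 5, "store": 4, "alu": 4, "branch": 3}
    mix = {"load": 0.25, "store": 0.10, "alu": 0.50, "branch": 0.15}  # assumed mix
    multi_cycle_time = clock * sum(mix[k] * cycles[k] for k in mix)
    print(single_cycle_time, multi_cycle_time)        # 800 vs 820: multi-cycle loses here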
Victim caches decrease miss penalty while they increase miss rate.
F
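Victim caches work the other way around: a small fully associative buffer holds recently evicted blocks, so conflicting accesses that would have gone to the next level hit in the buffer instead, lowering the effective cost of misses rather than raising the miss rate. A minimal sketch (direct-mapped main cache plus a 2-entry victim buffer; the sizes are assumptions):

    from collections import deque

    NUM_SETS = 4
    cache = {}                          # set index -> tag (direct-mapped)
    victims = deque(maxlen=2)           # tiny fully associative victim buffer

    def access(addr):
        idx, tag = addr % NUM_SETS, addr // NUM_SETS
        if cache.get(idx) == tag:
            result = "hit"
        elif (idx, tag) in victims:
            victims.remove((idx, tag))
            result = "victim hit"       # avoided a trip to the next level
        else:
            result = "miss"
        if result != "hit":             # install the block, save the old one
            if idx in cache:
                victims.append((idx, cache[idx]))
            cache[idx] = tag
        return result

    # Two blocks that conflict in the direct-mapped cache (same index 0):
    print([access(a) for a in (0, 4, 0, 4)])
    # ['miss', 'miss', 'victim hit', 'victim hit'] -- conflicts are absorbed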
Virtually addressed caches would have a lower hit time than physically addressed caches.
F