CHAPTER 4
Associative mapping
overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache
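As a sketch (the names and the 4-word block size are illustrative assumptions), a fully associative lookup must compare the incoming block's tag against every cache line, since the block may reside anywhere:

```python
BLOCK_SIZE = 4  # assumed words per block

class CacheLine:
    def __init__(self):
        self.valid = False
        self.tag = None  # in associative mapping, the whole block number is the tag

def lookup(cache, address):
    """Return (hit, line_index); the block may be in ANY line, so search all."""
    tag = address // BLOCK_SIZE
    for i, line in enumerate(cache):
        if line.valid and line.tag == tag:
            return True, i
    return False, None

cache = [CacheLine() for _ in range(8)]
cache[5].valid, cache[5].tag = True, 25     # block 25 happens to occupy line 5
hit, idx = lookup(cache, 25 * BLOCK_SIZE + 2)  # any address within block 25
```

The cost of this flexibility is that the tag comparison must be done against every line (in hardware, in parallel), which is why pure associative caches are expensive.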
Fetch/decode unit Out-of-order execution logic Execution units Memory subsystem
four major components of the processor core
logical cache
also known as a virtual cache, stores data using virtual addresses.
Sequential access
- Memory is organized into units of data called records
- Access must be made in a specific linear sequence
- Access time is variable
Hardware transparency
Additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches. Thus, if one processor modifies a word in its cache, this update is written to main memory, and any matching word in another cache is similarly updated.
write back
An alternative technique that minimizes memory writes: updates are made only in the cache, and a modified block is written back to main memory only when it is replaced.
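A minimal write-back sketch, assuming a dirty bit per line (the names `write` and `evict` are illustrative): main memory is touched only when a dirty block is replaced.

```python
def write(line, value):
    line["data"] = value
    line["dirty"] = True   # main memory is now stale; no memory write occurs yet

def evict(line, memory, block_addr):
    if line["dirty"]:
        memory[block_addr] = line["data"]  # the only write to main memory
        line["dirty"] = False

memory = {7: 0}
line = {"data": 0, "dirty": False}
write(line, 42)
stale = memory[7]        # still 0: the update went only to the cache
evict(line, memory, 7)   # replacement triggers the write back
```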
Random access
Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant.
Bus watching with write through
Each cache controller monitors the address lines to detect write operations to memory by other bus masters. If another master writes to a location in shared memory that also resides in the cache memory, the cache controller invalidates that cache entry
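A sketch of the snooping behavior described above (the dict-based caches and the function name are illustrative): on a bus write, every other cache invalidates any copy of the written location, while write through keeps main memory valid.

```python
def bus_write(caches, writer, addr, value, memory):
    memory[addr] = value        # write through: main memory updated at once
    caches[writer][addr] = value
    for i, cache in enumerate(caches):
        if i != writer and addr in cache:
            del cache[addr]     # snooping controller invalidates its stale entry

memory = {100: 1}
caches = [{100: 1}, {100: 1}]   # both caches hold location 100
bus_write(caches, 0, 100, 9, memory)   # processor 0 writes
```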
Fetch/decode unit
Fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache.
Access time (latency)
For random-access memory, this is the time it takes to perform a read or write operation, that is, the time from the instant that an address is presented to the memory to the instant that data have been stored or made available for use.
Read Only Memory
Non-erasable memory; its contents cannot be altered, except by destroying the storage unit
Noncacheable memory
Only a portion of main memory is shared by more than one processor. It can be identified using chip-select logic or high-address bits.
Out-of-order execution logic
Schedules execution of the micro-operations subject to data dependencies and resource availability; thus, micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream.
write through
Using this technique, all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
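Write through can be sketched in a few lines (names illustrative): every write updates both the cache and main memory, so memory never holds a stale value, at the cost of extra memory traffic.

```python
def write_through(cache, memory, addr, value):
    cache[addr] = value
    memory[addr] = value   # memory is always valid; every write generates traffic

cache, memory = {}, {3: 0}
write_through(cache, memory, 3, 7)
```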
Memory subsystem
This unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources
Execution units
These units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers.
Memory cycle time
This concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence.
Associative
This is a random access type of memory that enables one to make a comparison of desired bit locations within a word for a specified match, and to do this for all words simultaneously.
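The simultaneous comparison can be simulated in software (a sketch only; a real associative memory performs the comparison in parallel hardware): a mask selects the desired bit positions, and every stored word is checked for a match on those bits.

```python
def cam_match(words, key, mask):
    """Indices of all words whose masked bits equal the masked key bits."""
    return [i for i, w in enumerate(words) if (w & mask) == (key & mask)]

words = [0b1010, 0b1110, 0b0011]
matches = cam_match(words, 0b1000, 0b1000)  # compare only the high bit
```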
Transfer rate
This is the rate at which data can be transferred into or out of a memory unit.
locality of reference
This principle states that memory references tend to cluster.
Direct access
involves a shared read-write mechanism, but individual blocks or records have a unique address based on physical location; access time is variable
Set-associative mapping
is a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
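A sketch of the address decomposition (the block size and set count are assumed parameters): the set index is fixed as in direct mapping, while the tag is compared associatively against the k lines within that set.

```python
BLOCK_SIZE = 4   # assumed words per block
NUM_SETS = 16    # assumed number of sets (v)

def decompose(address):
    offset = address % BLOCK_SIZE
    block = address // BLOCK_SIZE
    set_index = block % NUM_SETS   # fixed set, as in direct mapping
    tag = block // NUM_SETS        # searched within the set, as in associative mapping
    return tag, set_index, offset

tag, set_index, offset = decompose(1234)   # block 308 -> tag 19, set 4, offset 2
```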
secondary memory or auxiliary memory
other terms for external, non-volatile memory used to store program and data files
Sequential access Direct access Random access Associative
four methods of accessing units of data
Temporal locality
refers to the tendency for a processor to access memory locations that have been used recently
Spatial locality
refers to the tendency of execution to involve a number of memory locations that are clustered
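Spatial locality can be illustrated by counting hits against already-fetched blocks (an unbounded block set with no eviction, used only for illustration, and an assumed 4-word block): a sequential scan hits on 3 of every 4 words, while a stride equal to the block size never re-hits.

```python
BLOCK_SIZE = 4  # assumed words per block

def hit_ratio(addresses):
    resident = set()   # blocks fetched so far (no eviction, for simplicity)
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        if block in resident:
            hits += 1
        resident.add(block)
    return hits / len(addresses)

seq = hit_ratio(range(100))                    # sequential scan: high locality
stride = hit_ratio(range(0, 400, BLOCK_SIZE))  # one access per block: no re-use
```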
Location
refers to whether memory is internal or external to the computer.
Access time (latency) Memory cycle time Transfer rate
three performance parameters used for memory