Chapter 3: CPUs


random access memory (RAM).

Any individual 1 or 0 = a bit
4 bits = a nibble
8 bits = a byte
16 bits = a word
32 bits = a double word
64 bits = a paragraph or quad word

Double-Sided DIMMs

Every type of RAM stick comes in one of two types: single-sided RAM and double-sided RAM. As their name implies, single-sided sticks have chips on only one side of the stick. Double-sided sticks have chips on both sides (see Figure 4-17)

Table 4-1: DDR Speeds

system crystal

The system crystal determines the speed at which a CPU and the rest of the PC operate. The system crystal is usually a quartz oscillator, very similar to the one in a wristwatch, soldered to the motherboard

Latency

Time it takes for a bit to travel from its sender to its receiver. The delay in RAM's response time is called its latency; shorthand such as CL17 or CL19 uses initials for the technical name: column address strobe (CAS) latency. If both have the same speed rating, RAM with a lower latency—such as CL17—is slightly faster than RAM with a higher latency—such as CL19—because it responds more quickly. The CL refers to clock cycle delays. The 17 means that the memory delays 17 clock cycles before delivering the requested data; the 19 means a 19-cycle delay. Because it's measured in clock cycles, CL is relative to the clock speed of the RAM—a 19-cycle delay takes up more real-world time at a lower clock speed than it does at a higher clock speed.
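As a rough sketch of that last point (the clock speeds below are hypothetical examples chosen for illustration, not figures from this section), you can convert CL into real-world time by multiplying the cycle delay by the length of one clock cycle:

# Sketch: convert CAS latency (in clock cycles) to nanoseconds.
def cas_latency_ns(cl_cycles, clock_mhz):
    cycle_time_ns = 1000 / clock_mhz   # one clock cycle, in nanoseconds
    return cl_cycles * cycle_time_ns

print(cas_latency_ns(17, 1600))  # ~10.6 ns
print(cas_latency_ns(19, 1600))  # ~11.9 ns: higher CL, slower response
print(cas_latency_ns(19, 1200))  # ~15.8 ns: same CL, lower clock, more real time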

clock speed

Visualize the system crystal as a metronome for the CPU. The quartz oscillator repeatedly fires a charge on the CLK wire, setting the beat, if you will, for the CPU's activities. If the system crystal sets a beat slower than the CPU's clock speed, the CPU will work just fine, though at the slower speed of the system crystal. If the system crystal forces the CPU to run faster than its clock speed, it can overheat and stop working.

To stay up to date with these companies, techs should research their websites:

Your first stop should be the manufacturers' Web sites. Both companies put out a lot of information on their products.
www.intel.com
www.amd.com

Note

Note Bits and bytes are abbreviated differently. Bits get a lowercase b, whereas bytes get a capital B. So for example, 4 Kb is 4 kilobits, but 4 KB is 4 kilobytes. The big-B little-b standard applies all the way up the food chain, so 2 Mb = 2 megabits; 2 MB = 2 megabytes; 4 Gb = 4 gigabits; 4 GB = 4 gigabytes; and so on.
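A quick check of the convention, as a sketch (the values are just examples): converting between bits (b) and bytes (B) is a multiply or divide by 8.

# 8 bits = 1 byte, so little-b and big-B figures differ by a factor of 8.
kilobits = 4
print(kilobits / 8)   # 4 Kb = 0.5 KB
megabytes = 2
print(megabytes * 8)  # 2 MB = 16 Mb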

cache (to reduce wait-states)

To reduce wait states, CPUs come with built-in, very high-speed RAM called static RAM (SRAM). This SRAM preloads as many instructions as possible and keeps copies of already run instructions and data in case the CPU needs to work on them again (see Figure 3-27). SRAM used in this fashion is called a cache. Figure 3-27: SRAM cache

threads

Windows started sending many programs to the CPU. Each of these programs breaks down into some number of little pieces, called threads, and data. Each thread is a series of instructions designed to do a particular job with the data.

SO-DIMM (small outline DIMM)

Laptops and other small devices use the small-outline DIMM (SO-DIMM) form factor.
Figure 4-10: A 168-pin DIMM above a 144-pin SO-DIMM

Hz (Hertz)

1 hertz (1 Hz) = 1 cycle per second
1 megahertz (1 MHz) = 1 million cycles per second
1 gigahertz (1 GHz) = 1 billion cycles per second

AMD

AMD has made CPUs that clone the function of Intel CPUs. If Intel invented the CPU used in the original IBM PC, how could AMD make clone CPUs without getting sued? Chipmakers have a habit of exchanging technologies through cross-license agreements. Way back in 1976, AMD and Intel signed just such an agreement, giving AMD the right to copy certain types of CPUs. The trouble started with the Intel 8088. Intel needed AMD's help to supply enough CPUs to satisfy IBM's demands. But after a few years, Intel had grown tremendously and no longer wanted AMD to make CPUs. AMD said, "Too bad. See this agreement you signed?" Throughout the 1980s and into the 1990s, AMD made pin-for-pin identical CPUs that matched the Intel lines of CPUs (see Figure 3-19). You could yank an Intel CPU out of a system and snap in an AMD CPU—no problem! Figure 3-19: Electronically identical Intel and AMD 486 CPUs from the early 1990s

Integrated Memory Controller

All current microprocessors have an integrated memory controller (IMC): memory control circuitry moved from the motherboard chip into the CPU itself to optimize the flow of information into and out of the CPU. Building the controller into the CPU enables faster control over things like the large L3 cache shared among multiple cores.

Clock Multipliers

All modern CPUs run at some multiple of the system clock speed. The system bus on my Ryzen 7 machine, for example, runs at 100 MHz. The clock multiplier goes up to ×32 at full load to support the 3.2 GHz maximum speed. Originally, CPUs ran at the speed of the bus, but engineers early on realized the CPU was the only thing doing any work much of the time. If the engineers could speed up just the internal operations of the CPU and not anything else, they could speed up the whole computing process. Figure 3-22 shows a nifty program called CPU-Z displaying my CPU details. Note that all I'm doing is typing at the moment, so the CPU has dropped the clock multiplier down to ×15.5 and the CPU core speed is only 1546 MHz.
Figure 3-22: CPU-Z showing the clock speed, multiplier, and bus speed of a Ryzen 7 processor hardly breaking a sweat
Today's CPUs report to the motherboard through a function called CPUID (CPU identifier), and the speed and multiplier are set automatically. (You can manually override this automatic setup on many motherboards. See "Overclocking," later in this chapter, for details.)
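A minimal sketch of the relationship described here, using the bus speed and multipliers from the CPU-Z example (the arithmetic is the point, not the exact figures):

# Core clock = system (bus) clock x clock multiplier.
def core_speed_mhz(bus_mhz, multiplier):
    return bus_mhz * multiplier

print(core_speed_mhz(100, 32))    # 3200 MHz = 3.2 GHz at full load
print(core_speed_mhz(100, 15.5))  # ~1550 MHz while idling; CPU-Z reads 1546 MHz
                                  # because the real bus clock is a hair under 100 MHz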

DRAM

Because of its low cost, high speed, and capability to contain a lot of data in a relatively small package, DRAM has been the standard RAM used in all computers—not just PCs—since the mid-1970s. DRAM can be found in just about everything, from automobiles to automatic bread makers.

Arithmetic Logic Unit (ALU)

One part of the CPU, the arithmetic logic unit (ALU), or integer unit, handles integer math: basic math for numbers with no decimal point. A perfect example of integer math is 2 + 3 = 5.

CPUs also have special circuitry to handle floating-point math (numbers with decimal points), called the floating-point unit (FPU).

general-purpose registers

CPUs contain a large number of registers, but for the moment let's concentrate on the four most common ones: the general-purpose registers. Intel named them AX, BX, CX, and DX.

EXAM TIP

CompTIA only uses the term CPU, not microprocessor. Expect to see CPU on the 1001 exam.

Computers use dynamic RAM (DRAM) for the main system memory.

Computers use dynamic RAM (DRAM) for the main system memory. DRAM needs both a constant electrical charge and a periodic refresh of the circuits; otherwise, it loses data—that's what makes it dynamic rather than static in content.

DDR SDRAM for laptops comes in ____.

DDR SDRAM for laptops comes in either 200-pin SO-DIMMs or 172-pin micro-DIMMs.
Figure 4-13: 172-pin DDR SDRAM micro-DIMM

DDR2

DDR2 is DDR RAM with some improvements in its electrical characteristics, enabling it to run even faster than DDR while using less power. The big speed increase from DDR2 comes from clock doubling the input/output circuits on the chips. This does not speed up the core RAM—the part that holds the data—but speeding up the input/output and adding special buffers (sort of like a cache) makes DDR2 run much faster than regular DDR. DDR2 uses a 240-pin DIMM that's not compatible with DDR (see Figure 4-15). Likewise, the DDR2 200-pin SO-DIMM is incompatible with the DDR SO-DIMM. You'll find motherboards running both single-channel and dual-channel DDR2.
Note: DDR2 RAM sticks will not fit into DDR sockets, nor are they electronically compatible.
Figure 4-15: 240-pin DDR2 DIMM
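As a hedged sketch of the clock-doubling idea (standard DDR2-800 numbers are used here for illustration; they are not taken from Table 4-2):

# DDR2 doubles the I/O circuit clock relative to the RAM core and still
# transfers data on both edges of that I/O clock.
core_clock_mhz = 200                      # speed of the RAM core itself
io_clock_mhz = core_clock_mhz * 2         # clock-doubled input/output circuits
transfers_per_second = io_clock_mhz * 2   # double data rate on the I/O clock
print(transfers_per_second)               # 800 -> sold as DDR2-800
print(transfers_per_second * 8)           # 6400 MBps -> stick labeled PC2-6400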

DDR3

DDR3 boasts higher speeds, more efficient architecture, and around 30 percent lower power consumption than DDR2 RAM. Just like its predecessor, DDR3 uses a 240-pin DIMM, albeit one that is slotted differently to make it difficult for users to install the wrong RAM in their system without using a hammer (see Figure 4-16). DDR3 SO-DIMMs for portable computers have 204 pins. Neither fits into a DDR2 socket. Figure 4-16: DDR2 DIMM on top of a DDR3 DIMM

DDR3

DDR3 doubles the buffer of DDR2 from 4 bits to 8 bits, giving it a huge boost in bandwidth over older RAM. Not only that, but some DDR3 (and later) modules also include a feature called XMP, or Extreme Memory Profile, that enables power users to overclock their RAM easily, boosting their already fast memory. DDR3 modules also use higher-density memory chips, enabling modules of up to 16 GB. AMD's version of XMP is called AMP, for AMD Memory Profile.

DDR4

DDR4 uses a 288-pin DIMM that is not backward compatible with DDR3 slots. DDR4 SO-DIMMs have 260 pins and are not compatible with DDR3 204-pin SO-DIMM slots. Some motherboard manufacturers have released boards that support both DDR3 and DDR4 by providing both slot types.

MEMORY

Devices that in any way hold ones and zeros that the CPU accesses are known generically as memory

DDR SDRAM

Double Data Rate SDRAM is memory that transfers data twice as fast as SDRAM. DDR SDRAM increases performance by transferring data twice per clock cycle. DDR SDRAM for desktops comes in 184-pin DIMMs.

EXAM TIP

EXAM TIP Some manufacturers (and CompTIA) drop the hyphen: SODIMM. You might see the RAM package spelled as SO-DIMM, SODIMM, or even SoDIMM.

EXAM TIP

EXAM TIP Be sure you are familiar with single-, dual-, triple-, and quad-channel memory architectures.

EXAM TIP

EXAM TIP The CompTIA A+ 1001 exam covers DDR2, DDR3, and DDR4. You won't find DDR, DDR3L, or DDR3U on the exam.

EXAM TIP

EXAM TIP The CompTIA A+ 1001 objectives refer to virtualization support as the virtual technology CPU feature.

EXAM TIP

EXAM TIP The primary benefit of 64-bit computing is to support more than 4 GB of memory, the limit with 32-bit processing.

EXAM TIP

EXAM TIP Typically, the CompTIA A+ exams expect you to know that L1 cache will be the smallest and fastest cache; L2 will be bigger and slower than L1; and L3 will be the biggest and slowest cache. (This is not completely true anymore, with L1 and L2 running the same speed in many CPUs, but it is how it will appear on the exams.)

EXTERNAL DATA BUS

EXTERNAL DATA BUS So far, the entire PC consists of only a CPU and RAM. But the CPU and the RAM need some connection so they can talk to each other. To do so, extend the external data bus (EDB) from the CPU so it can talk to the RAM (see Figure 3-16).
Figure 3-16: Extending the EDB

Pipelining To get a command from the data bus, do the calculation, and then send the answer back out onto the data bus, a CPU takes at least four steps (each of these steps is called a stage):

Fetch: Get the data from the EDB.
Decode: Figure out what type of command needs to be executed.
Execute: Perform the calculation.
Write: Send the data back onto the EDB.

Intel and AMD Now

In January 1995, after many years of legal wrangling, Intel and AMD settled and decided to end the licensing agreements. As a result of this settlement, AMD chips are no longer compatible with sockets or motherboards made for Intel CPUs—even though in some cases the chips look similar. Today, if you want to use an AMD CPU, you must purchase a motherboard designed for AMD CPUs. If you want to use an Intel CPU, you must purchase a motherboard designed for Intel CPUs. You have a choice: Intel or AMD.

Intel

Intel Corporation thoroughly dominates the personal computer market with its CPUs and motherboard support chips. Intel currently produces a dozen or so models of CPU for both desktop and portable computers. Most of Intel's desktop and laptop processors are sold under the Core, Pentium, and Celeron brands. Its high-end server chips are called Xeon.

Microarchitecture

Intel and AMD continually develop faster, smarter, and generally more capable CPUs. In general, each company comes up with a major new design, called a microarchitecture, about every three years. They try to minimize the number of model names in use, however, most likely for marketing purposes. This means that they release CPUs labeled as the same model, but the CPUs inside can be very different from earlier versions of that model. Both companies use code names to keep track of different variations within models.
Figure 3-20: Same branding, but different capabilities

Model Names

Intel and AMD differentiate product lines by using different product names, and these names have changed over the years. For a long time, Intel used Pentium for its flagship model, just adding model numbers to show successive generations—Pentium, Pentium II, Pentium III, and so on. AMD used the Athlon brand in a similar fashion.

Virtualization Support

Intel and AMD have built in support for running more than one operating system at a time, a process called virtualization.

an example of code names using the Intel i7 processor

Intel released the first Core i7 in the summer of 2008. By spring of 2012, the original microarchitecture—code named Nehalem—had gone through five variations, none of which worked on motherboards designed for one of the other variations. Plus, in 2011, Intel introduced the Sandy Bridge version of the Core i7 that eventually had two desktop versions and a mobile version, all of which used still other sockets. Just about every year since then has seen a new Core i7 based on improved architectures with different code names such as Ivy Bridge, Haswell, Broadwell, and so on. (And I'm simplifying the variations here.)

Multicore Processing

Microarchitecture hit a plateau back in 2002 when CPU clock speeds hit a practical limit of roughly 4 GHz, motivating the CPU makers to find new ways to get more processing power for CPUs. Intel and AMD both decided at virtually the same time to move beyond the single-core CPU and combine two CPUs (or cores) into a single chip, creating a dual-core architecture. A dual-core CPU has two execution units—two sets of pipelines—but the two sets of pipelines share caches and RAM. A single-core CPU has only one set of everything. Today, multicore CPUs—with four, six, or eight cores—are common. Higher-end CPUs have up to 32 cores! With each generation of multicore CPU, both Intel and AMD have tinkered with the mixture of how to allocate the cache among the cores. Figure 3-31: CPU-Z showing the cache details of a Haswell Core i7

Parallel Execution

Modern CPUs can process multiple commands and parts of commands in parallel, known as parallel execution. Early processors had to do everything in a strict, linear fashion. The CPUs accomplish this parallelism through:
- multiple pipelines,
- dedicated cache, and
- the capability to work with multiple threads or programs at one time.
To understand the mighty leap in efficiency gained from parallel execution, you need insight into the processing stages.

SDRAM

Most modern systems use some form of synchronous DRAM (SDRAM). SDRAM is still DRAM, but it is synchronous—tied to the system clock, just like the CPU and MCC, so the MCC knows when data is ready to be grabbed from SDRAM. This results in little wasted time.

Multithreading

Multithreading At the peak of the single-CPU 32-bit computing days, Intel released a CPU called the Pentium 4 that took parallelism to the next step with Hyper-Threading. Hyper-Threading enabled the Pentium 4 to run multiple threads at the same time, what's generically called simultaneous multithreading, effectively turning the CPU into two CPUs on one chip—with a catch. Figure 3-30 shows the Task Manager on an ancient Windows XP system running a Hyper-Threaded Pentium 4. Note how the CPU box is broken into two groups—Windows thinks this one CPU is two CPUs. Multithreading enhances a CPU's efficiency but with a couple of limitations. First, the operating system and the application must be designed to take advantage of the feature. Second, although the CPU simulates the actions of a second processor, it doesn't double the processing power because the main execution resources are not duplicated.

Note

Note By the time you read this, the 64-GB limit on DDR4 might have been eclipsed. Check www.newegg.com for the latest.

Note

Note DDR2 RAM sticks will not fit into DDR sockets, nor are they electronically compatible.

Note

Note Some memory manufacturers call the technology error checking and correction (ECC). Don't be thrown off if you see the phrase—it's the same thing, just a different marketing slant for error correction code.

NOTE:

Note The industry describes how much heat a busy CPU generates with a figure (measured in watts) called its thermal design power (TDP). The TDP can give you a rough idea of how much energy a CPU draws and what kind of cooling it will need. It can also help you select more efficient CPUs. TDP has been trending down over time (especially in recent years), but it may help to have a sense of what these values look like in the real world. The CPUs in a smartphone or tablet typically have a TDP from 2 to 15 watts, laptop CPUs range from 7 to 65 watts, and desktop CPUs tend to range from 50 to 140 watts.

Note

Note To keep up with faster processors, motherboard manufacturers began to double and even quadruple the throughput of the frontside bus. Techs sometimes refer to these as double-pumped and quad-pumped frontside buses.

Early Days: Smart, discrete circuits inside the CPU handle each of these stages. In early CPUs, when a command was placed on the data bus, each stage did its job and the CPU handed back the answer before starting the next command, requiring at least four clock cycles to process a command. In every clock cycle, three of the four circuits sat idle.

Now: Today, the circuits are organized in a conveyor-belt fashion called a pipeline. With pipelining, each stage does its job with each clock-cycle pulse, creating a much more efficient process. The CPU has multiple circuits doing multiple jobs, so let's add pipelining to the Man in the Box analogy. Now, it's Men in the Box.
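A rough back-of-the-envelope sketch of why the pipeline matters (the command count is an arbitrary example): without pipelining, each command ties up all four stages for four clock cycles, while a full pipeline finishes one command per cycle once it fills.

# Four-stage pipeline: Fetch, Decode, Execute, Write.
stages = 4
commands = 100                                  # arbitrary example workload
cycles_without_pipeline = commands * stages     # 400 cycles; 3 of 4 circuits idle each cycle
cycles_with_pipeline = stages + (commands - 1)  # 103 cycles: fill once, then 1 command/cycle
print(cycles_without_pipeline, cycles_with_pipeline)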

ADDRESS BUS

Once the MCC is in place to grab any discrete byte of RAM, the CPU needs to be able to tell the MCC which line of code it needs. The CPU therefore gains a second set of wires, called the address bus, with which it can communicate with the MCC. Different CPUs have different numbers of wires (which, you will soon see, is very significant). The 8088 had 20 wires in its address bus

64-Bit Processing

Over successive generations of microprocessors, engineers have upgraded many physical features of CPUs. The EDB gradually increased in size, from 8- to 16- to 32- to 64-bits wide. The address bus similarly jumped, going from 20- to 24- to 32-bits wide (where it stayed for a decade). Most new CPUs support 64-bit processing, meaning they can run a compatible 64-bit operating system, such as Windows 10, and 64-bit applications.

wait-states

Pipelining CPUs work fantastically well as long as the pipelines stay filled with instructions. Because the CPU runs faster than the RAM can supply it with code, you'll always get pipeline stalls—called wait states—because the RAM can't keep up with the CPU.

Throttling

Saving energy by making the CPU run more slowly when demand is light is generically called throttling. Figure 3-21: Desktop vs. mobile, fight!

triple-channel architecture or quad-channel architecture,

Some motherboards that support DDR3 also support features called triple-channel architecture or quad-channel architecture, which work a lot like dual-channel but with three or four sticks of RAM instead of two. More recently, Intel and AMD systems have switched back to dual-channel, although there are plenty of systems out there using triple- or quad-channel memory.

Table 4-2 shows some of the common DDR2 speeds.

Table 4-2: DDR2 Speeds

Table 4-4: Standard DDR4 Varieties

NX bit

Technology that enables the CPU to protect certain sections of memory. This feature, coupled with implementation by the operating system, stops malicious attacks from getting to essential operating system files. Microsoft calls the feature Data Execution Prevention (DEP).
Figure 3-32: DEP in Windows 10

Random Access

The CPU accesses any one row of RAM as easily and as fast as any other row, which explains the "random access" part of RAM. Not only is RAM randomly accessible, it's also fast. By storing programs on RAM, the CPU can access and run them very quickly. RAM also stores any data that the CPU actively uses.

clock

The CPU does nothing until activated by the clock. Each time you press the button to sound the bell, the Man in the Box reads the next set of lights on the EDB. Of course, a real computer doesn't use a bell. The bell on a real CPU is a special wire called the clock wire (most diagrams label the clock wire CLK). A charge on the CLK wire tells the CPU that another piece of information is waiting to be processed.

early days of cache: and today:

The SRAM cache inside the early CPUs was tiny, only about 16 KB, but it improved performance tremendously. In fact, it helped so much that many motherboard makers began adding a cache directly to the motherboards. These caches were much larger, usually around 128 to 512 KB. When the CPU looked for a line of code, it first went to the built-in cache; if the code wasn't there, the CPU went to the cache on the motherboard. The cache on the CPU was called the L1 cache because it was the one the CPU first tried to use. The cache on the motherboard was called the L2 cache, not because it was on the motherboard, but because it was the second cache the CPU checked. Eventually, engineers took this cache concept even further and added the L2 cache onto the CPU package. Many modern CPUs include three caches: an L1, an L2, and an L3 cache (see Figure 3-28). Figure 3-28: CPU-Z displaying the cache information for a Ryzen 7 processor

Arithmetic Logic Unit (ALU)

The arithmetic logic unit (ALU)—that's the Man in the Box—still crunches numbers many millions of times per second. CPUs rely on memory to feed them lines of programming as quickly as possible.

NOTE (ARM)

The ever-growing selection of mobile devices, such as the Apple iPhone and iPad and most Android devices, uses a CPU architecture developed by ARM Holdings, called ARM. ARM-based processors use a simpler, more energy-efficient design, the reduced instruction set computing (RISC) architecture. They can't match the raw power of the Intel and AMD complex instruction set computing (CISC) chips, but the savings in cost and battery life make ARM-based processors ideal for mobile devices. (Note that the clear distinction between RISC and CISC processors has blurred. Each design today borrows features of the other design to increase efficiency.) ARM Holdings designs ARM CPUs but doesn't manufacture them. Many other companies—most notably, Qualcomm—license the design and manufacture their own versions. See Chapter 24, "Understanding Mobile Devices."

Parity and ECC

The first type of error-detecting RAM was known as parity RAM.
Figure 4-18: Ancient parity RAM stick
Compared with non-parity RAM, parity RAM stored an extra bit of data (called the parity bit) that the MCC used to verify whether the data was correct. Parity wasn't perfect. It wouldn't always detect an error, and if the MCC did find an error, it couldn't correct the error. For years, parity was the only available way to tell if the RAM made a mistake.

hardware-based virtualization

The key issue from a CPU standpoint is that virtualization used to work entirely through software. Programmers had to write a ton of code to enable a CPU—that was designed to run one OS at a time—to run more than one OS at the same time. Think about the issues involved. How does the memory get allocated, for example, or how does the CPU know which OS to update when you type something or click an icon? With hardware-based virtualization support, CPUs took a lot of the burden off the programmers and made virtualization a whole lot easier.

With a 64-bit address bus, CPUs can address 2^64 bytes of memory, or more precisely, 18,446,744,073,709,551,616 bytes of memory—that's a lot of RAM! This number is so big that gigabytes and terabytes are no longer convenient, so we now go to an exabyte (2^60 bytes), abbreviated EB. A 64-bit address bus can address 16 EB of RAM.
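Worked out directly as a quick sketch:

# 64 address wires -> 2^64 addressable bytes; 1 exabyte (EB) = 2^60 bytes.
address_space_bytes = 2 ** 64
print(address_space_bytes)             # 18446744073709551616
print(address_space_bytes // 2 ** 60)  # 16 -> a 64-bit address bus covers 16 EB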

The primary benefit to moving to 64-bit computing is that modern systems can support much more than the 4 GB of memory supported with 32-bit processing. In practical terms, 64-bit computing greatly enhances the performance of programs that work with large files, such as video editing applications. You'll see a profound improvement moving from 4 GB to 8 GB or 16 GB of RAM with such programs.

ECC RAM

Today's PCs that need to watch for RAM errors use a special type of RAM called error correction code RAM (ECC RAM). ECC is a major advance in error checking on DRAM. ECC detects and corrects any time a single bit is flipped, on-the-fly. It can detect but not correct a double-bit error. The checking and fixing come at a price, however, as ECC RAM is always slower than non-ECC RAM. ECC DRAM comes in every DIMM package type and can lead to some odd-sounding numbers. You can find DDR2, DDR3 (Figure 4-19), or DDR4 RAM sticks, for example, that come in 240-pin, 72-bit versions. Similarly, you'll see 200-pin, 72-bit SO-DIMM format. The extra 8 bits beyond the 64-bit data stream are for the ECC. Figure 4-19: Stick of ECC DDR3 with 9 memory chips
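The odd-sounding widths fall straight out of the arithmetic; here's a small sketch (the 8-bit-wide chip organization is an assumption, used only to match the nine-chip stick in Figure 4-19):

# ECC DIMMs carry 8 extra bits alongside the normal 64-bit data path.
data_bits = 64
ecc_bits = 8
width = data_bits + ecc_bits
print(width)        # 72 -> the "72-bit" DIMM and SO-DIMM versions
print(width // 8)   # 9  -> nine chips, assuming each chip is 8 bits wide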

address space

Total amount of memory addresses that an address bus can contain. Because the 8088 had a 20-wire address bus, the most RAM it could handle was 2^20, or 1,048,576 bytes. The 8088, therefore, had an address space of 1,048,576 bytes. No one called it that, though; instead, you say that the 8088 had one megabyte (1 MB) of address space.
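The 8088 arithmetic from this definition, as a one-off sketch:

# 20 address wires -> 2^20 distinct addresses, one byte each.
address_wires = 20
address_space_bytes = 2 ** address_wires
print(address_space_bytes)            # 1048576
print(address_space_bytes / 2 ** 20)  # 1.0 -> 1 MB of address space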

dual-channel architecture

Using two sticks of RAM (either RDRAM or DDR) to increase throughput. Dual-channel DDR requires two identical sticks of DDR, and they must snap into two paired slots. Many motherboards offer four slots (see Figure 4-14). By populating the same-colored slots with identical sticks of RAM, you enable the dual-channel feature.
Figure 4-14: A motherboard showing four RAM slots

Registered and Buffered Memory

When shopping for memory, especially for ECC memory, you are bound to come across the terms registered RAM or buffered RAM. Either term refers to a small register installed on some memory modules to act as a buffer between the DIMM and the memory controller. This little extra bit of circuitry helps compensate for electrical problems that crop up in systems with lots of memory modules, such as servers. The key thing to remember is that a motherboard will use either buffered or unbuffered RAM (that's typical consumer RAM), not both. If you insert the wrong module in a system you are upgrading, the worst that will happen is a blank screen and a lot of head scratching.

DDR4 megatransfers per second (MT/s),

With DDR4, most techs have switched from bit rate to megatransfers per second (MT/s), a way to describe the number of data transfer operations happening at any given second. For DDR4, the number is pretty huge. Table 4-4 shows some DDR4 speeds and labels.
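A sketch of how an MT/s figure turns into a module label (DDR4-2400 is used as a common example; the full list of speed grades lives in Table 4-4):

# PC4 label = transfers per second (in millions) x 8 bytes per transfer.
transfer_rate_mts = 2400        # DDR4-2400
bytes_per_transfer = 8          # DIMMs move 64 bits (8 bytes) at a time
print(transfer_rate_mts * bytes_per_transfer)  # 19200 -> labeled PC4-19200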

plus research and read other tech websites

You can also find many high-quality tech Web sites devoted to reporting on the latest CPUs. When a client needs an upgrade, surf the Web for recent articles and make comparisons. Because you'll understand the underlying technology from your CompTIA A+ studies, you'll be able to follow the conversations with confidence. Here's a list of some of the sites I use:
http://arstechnica.com
www.anandtech.com
www.tomshardware.com
www.bit-tech.net

Graphics Processing Unit (GPU)

A chip that controls the manipulation and display of graphics on a display device. It turns out that graphics processors can handle certain tasks much more efficiently than the standard CPU. Integrating a GPU into the CPU enhances the overall performance of the computer while at the same time reducing energy use, size, and cost.

DDR4

DDR4 is the mainstream memory today. It offers higher density and lower voltages than DDR3, and can handle faster data transfer rates. In theory, manufacturers could create DDR4 DIMMs up to 512 GB. DIMMs running DDR4 currently top out at 64 GB, compared to the 16-GB max of DDR3, and run at only 1.2 V. (There's a performance version that runs at 1.35 V and a low-voltage version at 1.05 V too.)

MCC (memory controller chip)

memory controller chip (MCC) We need some type of chip between the RAM and the CPU to make the connection. The CPU needs to be able to say which row of RAM it wants, and the chip should handle the mechanics of retrieving that row of data from the RAM and putting it on the EDB. This chip comes with many names, but for right now just call it the memory controller chip (MCC). The MCC contains special circuitry so it can grab the contents of any line of RAM and place that data or command on the EDB. This in turn enables the CPU to act on that code.
Figure 3-17: The MCC grabs a byte of RAM

clock speed

the maximum number of clock cycles that a CPU can handle in a given period of time is referred to as its clock speed. The clock speed is the fastest speed at which a CPU can operate, determined by the CPU manufacturer. The Intel 8088 processor had a clock speed of 4.77 MHz (4.77 million cycles per second), extremely slow by modern standards, but still a pretty big number compared to using a pencil and paper. High-end CPUs today run at speeds in excess of 5 GHz (5 billion cycles per second). You'll see these "hertz" terms a lot in this chapter, so here's what they mean:

DDR sticks use a rather interesting naming convention based on the number of bytes per second of data throughput the RAM can handle. To determine the bytes per second, take the MHz speed and multiply by 8 bytes (the width of all DDR SDRAM sticks).

To name a stick of DDR memory: 400 MHz multiplied by 8 is 3200 megabytes per second (MBps). Put the abbreviation "PC" in front to make the new term: PC3200. To name a DDR chip: many techs also use the naming convention for the individual DDR chips; for example, DDR400 refers to a 400-MHz DDR SDRAM chip running on a 200-MHz clock. Even though the term DDRxxx is really just for individual DDR chips and the term PCxxxx is for DDR sticks, this tradition of two names for every speed of RAM is a bit of a challenge because you'll often hear both terms used interchangeably.
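The same math as a tiny sketch, using the 400-MHz example above:

# DDR stick name: effective data rate (MHz) x 8-byte stick width, prefixed with "PC".
ddr_data_rate_mhz = 400     # "DDR400" chip: 200-MHz clock, double data rate
stick_width_bytes = 8       # all DDR SDRAM sticks are 64 bits (8 bytes) wide
print(ddr_data_rate_mhz * stick_width_bytes)  # 3200 -> stick labeled "PC3200"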

Technology

We will take a look at eight technologies for computing:
Clock multipliers
64-bit processing
Virtualization support
Parallel execution
Multicore processing
Integrated memory controller (IMC)
Integrated graphics processing unit (GPU)
Security

