Chapter 1


1.3 Cont.

-Form factor = size and shape.
-The number of pulses emitted each second by the clock is its frequency. Clock frequencies are measured in cycles per second, or hertz. Most computers today operate in the gigahertz (GHz) range, generating billions of pulses per second.
-The number of instructions per second that a microprocessor can actually execute is proportional to its clock speed and to the number of instructions it completes per clock cycle (IPC). The number of clock cycles required to carry out a particular machine instruction is a function of both the machine's organization and its architecture. (A short calculation illustrating this appears at the end of this section.)
-SDRAM (Synchronous Dynamic Random Access Memory): SDRAM is much faster than conventional (nonsynchronous) memory because it can synchronize itself with a microprocessor's bus. DDR4 = Double Data Rate Type 4.
-No matter how fast a bus is, it still takes some time to get data from memory to the processor. To provide even faster access to data, many systems contain a special memory called cache. Level 1 cache (L1) is a small, fast memory built into the microprocessor chip that helps speed up access to frequently used data. Level 2 cache (L2) is a collection of fast, built-in memory chips situated between the microprocessor and main memory.
Motherboard:
-The southbridge, an integrated circuit that controls the hard disk and I/O (including sound and video cards), is a hub that connects slower I/O devices to the system bus.
-The BIOS flash chip contains the instructions in ROM that your computer uses when it is first powered up.
SSD vs. HDD:
-The computer in the ad has a hard disk drive (HDD), meaning it stores information magnetically and spins to provide access to various segments. It also has a 128GB solid-state drive. Solid-state drives (SSDs) use memory chips (similar to what you might find in a thumb drive). HDDs are typically larger and less expensive than SSDs, but SSDs provide faster access and thus better performance while requiring less power.
-SATA: Serial Advanced Technology Attachment, or Serial ATA. IDE: Integrated Drive Electronics.
Ports and Buses:
-Whereas the system bus is responsible for all data movement internal to the computer, ports allow movement of data to and from devices external to the computer.
-Serial ports transfer data by sending a series of electrical pulses across one or two data lines.
-Another type of port some computers have is a parallel port. Parallel ports use at least eight data lines, which are energized simultaneously to transmit data.
-Many new computers no longer come with serial or parallel ports, but instead have only USB ports. USB (universal serial bus) is a popular external bus that supports plug-and-play installation (the ability to configure devices automatically) as well as hot plugging (the ability to add and remove devices while the computer is running).
Expansion Slots:
-Peripheral Component Interconnect (PCI) is one I/O bus standard that supports the connection of multiple peripheral devices. PCI, developed by Intel, operates at high speeds and also supports plug-and-play. PCIe has not only superseded PCI and PCI-X, but in the graphics world it has also progressively replaced the AGP (accelerated graphics port) graphics interface designed by Intel specifically for 3D graphics. PCIe operates serially, giving each peripheral device its own dedicated bus.
LCD = Liquid Crystal Display
-LCDs use a liquid crystal material sandwiched between two pieces of polarized glass.
Electric currents cause the crystals to move around, allowing differing levels of backlighting to pass through, creating the text, colors, and pictures that appear on the screen. This is done by turning different pixels (small "picture elements," or dots on the screen) on and off.
-Most LCDs manufactured today use active matrix technology, whereas passive matrix technology is reserved for smaller devices such as calculators and clocks. Active matrix technology uses one transistor per pixel; passive matrix technology uses transistors that activate entire rows and columns. Although passive technology is less costly, active technology renders a better image because it drives each pixel independently.
-LCD monitor specifications often list a response time, which indicates the rate at which the pixels can change colors. Originally, response times measured the time to go from black to white and back to black. Many manufacturers now list the response time for gray-to-gray transitions. (Because manufacturers measure this differently, listed response times are not directly comparable: a monitor rated at 8ms may perform the same as one rated at 5ms.)
-Luminance (or image brightness) is a measure of the amount of light an LCD monitor emits. This measure is typically given in candelas per square meter (cd/m2).
-Contrast ratio measures the difference in intensity between bright whites and dark blacks. Contrast ratios can be static (the ratio of the brightest point to the darkest point that the monitor can produce at a given instant in time) or dynamic (the ratio of the darkest point in one image to the lightest point in another image produced at a separate point in time). Static specifications are typically preferred. A low static ratio (such as 300:1) makes it more difficult to discern shades; a good static ratio is 500:1 (with ranges from 400:1 to 3000:1). LCD monitors can have dynamic ratios of 12,000,000:1 and higher, but a higher dynamic number does not necessarily mean the monitor is better than one with a much lower static ratio.
-Color depth: this number reflects the number of colors that can be displayed on the screen at one time. Common depths are 8-bit, 16-bit, 24-bit, and 32-bit.
-GPU: The job of the graphics card is to take the binary data from your computer and translate it into signals that control all the pixels on the monitor; the graphics card therefore acts as a middleman between the computer's processor and the monitor. The GPU is no ordinary processor; it is designed to perform the complex calculations required for image rendering as efficiently as possible and contains special programs allowing it to perform this task more effectively. Graphics cards typically contain their own dedicated RAM used to hold temporary results and information, including the location and color of each pixel on the screen. A frame buffer (part of this RAM) is used to store rendered images until they are ready to be displayed. The memory on a graphics card connects to a digital-to-analog converter (DAC), a device that converts a binary image to analog signals that a monitor can understand and sends them via a cable to the monitor.
Online:
-USB and PCI modems are available that allow you to connect your computer to the internet using the phone line; many of these also allow you to use your computer as a fax machine.
-A computer can also connect directly to a network. Networking allows computers to share files and peripheral devices. Computers can connect to a network via either a wired or a wireless technology.
-Wired computers use Ethernet technology, an international standard networking technology for wired networks.
-Although each Wi-Fi standard has a theoretical bandwidth, no speed is listed for the wireless connection in the ad because performance is typically limited by the bandwidth of the wireless network you are using.
-Large computers include mainframes, enterprise-class servers, and supercomputers. Small computers include personal systems, workstations, and handheld devices.
-According to some estimates, as many as 65% of internet users in the United States connect exclusively via mobile platforms.
Touch Screens:
-Touchscreens come in two general types: resistive and capacitive. Capacitive touchscreens react to the electrical properties of human skin. Resistive screens are less sensitive than capacitive screens, but they provide higher resolution. Unlike resistive screens, capacitive screens support multitouch, which is the ability to detect the simultaneous press of two or more fingers.
-Military and medical touchscreens use two different technologies, surface acoustic wave touch sense and infrared touch sense, which respectively send ultrasonic and infrared waves across the surface of a ruggedized touchscreen.
-With a size, shape, and weight similar to a paperback book, tablet computers are replacing paper textbooks in some U.S. school districts.
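The relationship between clock frequency, cycles per instruction, and instruction throughput noted above, along with the color-depth figures, can be made concrete with a short calculation. The clock rate, CPI value, and depths below are illustrative assumptions, not figures from the notes.

```python
# Clock speed vs. instruction throughput, plus color depth.
# All numeric values here are illustrative assumptions.

clock_hz = 3.2e9      # a 3.2 GHz clock emits 3.2 billion pulses per second
avg_cpi = 1.5         # assumed average clock cycles per instruction (CPI);
                      # depends on the machine's organization and architecture

ipc = 1 / avg_cpi                # instructions completed per cycle
throughput = clock_hz * ipc      # instructions executed per second
print(f"IPC ~ {ipc:.2f}, throughput ~ {throughput/1e9:.2f} billion instructions/s")

# Color depth: each extra bit doubles the number of displayable colors.
for depth in (8, 16, 24):
    print(f"{depth}-bit color depth -> {2**depth:,} colors")
```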

Section 1.2

1.2.1: The Main Components of a Computer
-Modern computers are actually implementations of algorithms that execute other algorithms. This chain of nested algorithms leads us to the following principle:
*Principle of equivalence of hardware and software: Any task done by software can also be done using hardware, and any operation performed directly by hardware can be done using software.*
-Computers consist of 3 pieces: Processor, Memory, and I/O.
-*Processors consist of an ALU (Arithmetic Logic Unit) and a Control Unit.
-*2 Types of Memory: Long-term and Temporary.
-Typically memory is "hierarchical," meaning that there are different levels of memory, varying in size and speed. The goal of this memory hierarchy is to give the best performance at the lowest cost.
*-The ALU must be connected to the registers, and both must be connected to the memory; this is done by a special pathway called a bus. The collection of ALU, registers, and bus is called a datapath, an extremely important component of any computer, because it is the hardware that is ultimately responsible for running programs.
1.2.2: System Components
*-We refer to the combination of hardware and software as a computer system.
-Types of Software: System Software (the most important software - example: the OS), Application Software, and Utility Software.
-Both application software and utility software use the system software to communicate with the hardware, reiterating how important system software really is.
***-Principle of Equivalence: an extremely important concept, particularly when it comes to the design of computer architecture.
*Cost and speed are often the determining factors when making design decisions regarding whether to implement something in hardware or software.
*There is often a hardware/software design tradeoff. If the goal is performance, functionality will typically be moved to hardware as much as possible. However, if the goal is to achieve compatibility for the future, software is often used, because it is easy to update. Note that by "equivalent" we mean "functionally equivalent." In principle, any function performed by one can be performed by the other. However, at the lowest level, we must have hardware. (A toy illustration of this tradeoff appears at the end of this section.)
*If someone has a hardware device that provides a specific function, this device is eligible for a patent. However, the principle says that this machine can be exactly duplicated in software, yet patent status is often denied for software.
*The choices made for implementation in hardware versus software are often based simply on which is more practical, more efficient, more profitable, or provides better performance.
-Although we often refer to a computer system as simply a computer, it is important to note that technically, the term computer refers to the hardware only. However, it has become quite common to refer to the combination of hardware and software as a computer.
1.2.3: Classification of Computing Devices
-Five categories of computer systems: 1. Supercomputers 2. Mainframes 3. Personal Computers 4. Mobile Devices 5. Embedded Systems
-Supercomputers are simply very powerful mainframes that are used for compute-intensive applications, such as weather calculations, advanced biological modeling, and genetic research.
-Mainframe computers are used by companies for specific applications such as data processing (a good example is the systems used by airlines for ticket reservations) and financial transactions.
-Mobile devices include any handheld portable computing device, such as your smartphone, e-reader, or tablet.
-Embedded Computer: these are computer systems that perform dedicated tasks specifically designed for the product in which they are enclosed. These systems can be programmable or not, depending on the application, and their main goals are speed, low power usage, size, and reliability.
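As a toy illustration of the principle of equivalence of hardware and software discussed in 1.2.2, the sketch below emulates multiplication in software using only shifts and adds, the kind of job a dedicated hardware multiplier would otherwise do. The function is an invented example, not code from the text.

```python
# Principle of equivalence, informally: a hardware multiplier and a software
# shift-and-add loop compute the same function. This is a toy software
# stand-in for the hardware operation.

def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds."""
    result = 0
    while b > 0:
        if b & 1:        # low bit of b set: include the current shifted a
            result += a
        a <<= 1          # shift a left  (a doubles)
        b >>= 1          # shift b right (move to the next bit)
    return result

print(shift_add_multiply(123, 456) == 123 * 456)   # True: same result either way
```

The software version is easy to change but slower; a dedicated hardware multiplier is fast but fixed once built, which is exactly the cost/speed tradeoff described above.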

Section 1.9 - Von Neumann Model

In the earliest electronic computing machines, programming was synonymous with connecting wires to plugs. No layered architecture existed, so programming a computer was as much a feat of electrical engineering as it was an exercise in algorithm design. John W. Mauchly and J. Presper Eckert conceived of an easier way to change the behavior of their calculating machine. They reckoned that memory devices, in the form of mercury delay lines, could provide a way to store program instructions. This would forever end the tedium of rewiring the system each time it had a new problem to solve, or an old one to debug. Mauchly and Eckert documented their idea, proposing it as the foundation for their next computer, the EDVAC (Electronic Discrete Variable Automatic Computer). After reading Mauchly and Eckert's proposal for the EDVAC, von Neumann published and publicized the idea. So effective was he in the delivery of this concept that history has credited him with its invention. All stored-program computers have come to be known as von Neumann systems using the von Neumann architecture. Although we are compelled by tradition to say that stored-program computers use the von Neumann architecture, we shall not do so without paying proper tribute to its true inventors: John W. Mauchly and J. Presper Eckert.
-The von Neumann architecture has three main characteristics:
1) It consists of three hardware systems: a central processing unit (CPU) with a control unit, an arithmetic logic unit (ALU), registers (small storage areas), and a program counter; a main memory system, which holds programs that control the computer's operation; and an I/O system.
2) It has the capacity to carry out sequential instruction processing.
3) It contains a single path, either physically or logically, between the main memory system and the control unit of the CPU, forcing alternation of instruction and execution cycles. ****This single path is often referred to as the von Neumann bottleneck.
***-This architecture runs programs in what is known as the von Neumann execution cycle (also called the fetch-decode-execute cycle), which describes how the machine works (a toy simulation of this loop follows below):
1) The control unit fetches the next program instruction from memory, using the program counter to determine where the instruction is located.
2) The instruction is decoded into a language the ALU can understand.
3) Any data operands required to execute the instruction are fetched from memory and placed in registers in the CPU.
4) The ALU executes the instruction and places the results in registers or memory.
-The ideas present in the von Neumann architecture have been extended so that programs and data stored in a slow-to-access storage medium, such as a hard disk, can be copied to a fast-access, volatile storage medium such as RAM prior to execution.
***-This architecture has also been streamlined into what is currently called the system bus model. The data bus moves data from main memory to the CPU registers (and vice versa). The address bus holds the address of the data that the data bus is currently accessing. The control bus carries the necessary control signals that specify how the information transfer is to take place.
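The fetch-decode-execute cycle listed above can be sketched as a toy simulator. The one-address instruction set (LOAD/ADD/STORE/HALT), the memory layout, and the sample program are all invented for illustration; they are not from the text.

```python
# Toy von Neumann machine: a single memory holds both the program and the
# data, and the control loop below repeats fetch, decode, execute.

memory = {
    0: ("LOAD", 10),      # copy memory[10] into the accumulator
    1: ("ADD", 11),       # add memory[11] to the accumulator
    2: ("STORE", 12),     # write the accumulator to memory[12]
    3: ("HALT", None),
    10: 5, 11: 7, 12: 0,  # data stored in the same memory as instructions
}

pc = 0        # program counter
acc = 0       # accumulator register
running = True

while running:
    opcode, operand = memory[pc]   # 1) fetch the instruction at the program counter
    pc += 1
    if opcode == "LOAD":           # 2) decode; 3) fetch operand; 4) execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])   # 12, i.e., 5 + 7
```

Because one memory and one path serve both instructions and data, the loop alternates between fetching instructions and fetching operands, which is the von Neumann bottleneck in miniature.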
-Other enhancements to the von Neumann architecture include using index registers for addressing, adding floating-point data, using interrupts and asynchronous I/O, adding virtual memory, and adding general registers.
Seven-Atom Transistor:
-Experts estimate it may shrink microchips by a factor of 100, while enabling an exponential speedup in processing. This means our computers could become one hundred times smaller and, at the same time, one hundred times faster.
-This new seven-atom transistor is significant for another reason. Recall Moore's Law; this law is not so much a law of nature as an expectation of innovation and a significant driving force in chip design. Moore's Law has held since 1965, but in order for it to do so, chip manufacturers have jumped from one technology to another. Gordon Moore himself has predicted that, if restricted to CMOS silicon, his law will fail sometime around 2020. The discovery of this seven-atom transistor gives new life to Moore's Law—and we suspect that Gordon Moore is breathing a sigh of relief over its discovery.
-However, noted physicist Stephen Hawking has explained that chip manufacturers are limited in their quest to "enforce" Moore's Law by two fundamental constraints: the speed of light and the atomic nature of matter, implying that Moore's Law will eventually fail, regardless of the technology being used.

Section 1.5 - 1.5.3

The vacuum tube technology of the first generation was not very dependable. Although system reliability wasn't as bad as the doomsayers predicted, vacuum tube systems often experienced more downtime than uptime.
****-In 1948, three researchers with Bell Laboratories—John Bardeen, Walter Brattain, and William Shockley—invented the transistor. This new technology not only revolutionized devices such as televisions and radios—it also pushed the computer industry into a new generation. Because transistors consume less power than vacuum tubes, are smaller, and work more reliably, the circuitry in computers consequently became smaller and more reliable.
-Control Data Corporation (CDC), under the supervision of Seymour Cray, built the CDC 6600, the world's first supercomputer.
Transistors:
-The transistor, short for transfer resistor, is the solid-state version of a switch that can be opened or closed by electrical signals. Whereas mechanical switches have moving parts, transistors do not. This allows them to be very small, which is why the transistor is ubiquitous in electronics today. A transistor actually performs two different functions: it can behave either as a switch or as an amplifier.
-Loudspeakers and hearing aids are good examples of using transistors as amplifiers—a small electric current at one end becomes a much larger current at the other end.
-Although transistors are used in many devices for amplification, when discussing computers, the ability of a transistor to work as a switch is much more relevant. A low current flowing through one transistor can switch on a larger current through another transistor.
-Transistors are made from silicon. Silicon by itself is not a good conductor of electricity, but when it is combined with trace amounts of neighboring elements from the periodic table, it conducts electricity in an effective and easily controlled manner.
-Boron, aluminum, and gallium can be found to the left of silicon and germanium on the periodic table. Because they lie to the left of silicon and germanium, they have one less electron in their outer electron shell, or valence. So if you add a small amount of aluminum to silicon, the silicon ends up with a slight imbalance in its outer electron shell, and therefore attracts electrons from any pole that has a negative potential (an excess of electrons). When modified (or doped) in this way, silicon or germanium becomes a P-type material.
-Similarly, if we add a small amount of an element that lies to the right of silicon, such as phosphorus or arsenic, we get extra electrons in the valences of the silicon crystals. This gives us an N-type material.
*****(P-type uses elements to the left of silicon; N-type uses elements to the right.)
-A transistor essentially works when electrons flow between the two different types of silicon; we can layer them in ways that allow us to make various components. For example, if we join N-type silicon to P-type silicon with contacts on one side, electrons can flow from the N-side to the P-side, but not the other way, producing one-way current. We can create N-P-N layers, or P-N-P layers, and so on, each able to either amplify or switch current.

Section 1.5 - 1.5.6

Moore's Law:
-More than one skeptic raised an eyebrow when, in 1965, Intel founder Gordon Moore stated, "The density of transistors in an integrated circuit will double every year." The current version of this prediction is usually conveyed as "the density of silicon chips doubles every 18 months." This assertion has become known as Moore's Law. Moore intended this postulate to hold for only 10 years. However, advances in chip manufacturing processes have allowed this assertion to hold for more than 50 years.
Rock's Law:
-Using current technology, Moore's Law cannot hold forever. There are physical and financial limitations that must ultimately come into play. At the current rate of miniaturization, it would take about 500 years to put the entire solar system on a chip! Clearly, the limit lies somewhere between here and there. Cost may be the ultimate constraint. Rock's Law, proposed by Arthur Rock, an early investor in Intel, is a corollary to Moore's Law: "The cost of capital equipment to build semiconductors will double every four years." Essentially, even if we continue to make chips smaller and faster, the ultimate question may be whether we can afford to build them. Certainly, if Moore's Law is to hold, Rock's Law must fall. It is evident that for these two things to happen, computers must shift to a radically different technology. (A quick back-of-the-envelope projection of both laws follows below.)
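The two laws above are both simple exponentials, so a quick projection shows how they pull in opposite directions. The starting transistor count, starting fab cost, and time spans below are arbitrary assumptions for illustration.

```python
# Moore's Law (density doubles every ~18 months) vs. Rock's Law
# (fab cost doubles every 4 years). Starting values are made up.

def moores_law(transistors_now: float, years: float) -> float:
    return transistors_now * 2 ** (years / 1.5)

def rocks_law(fab_cost_now: float, years: float) -> float:
    return fab_cost_now * 2 ** (years / 4.0)

for years in (4, 8, 12):
    density = moores_law(1e9, years)   # assume 1 billion transistors today
    cost = rocks_law(5e9, years)       # assume a $5 billion fab today
    print(f"after {years:2d} years: ~{density:.1e} transistors/chip, "
          f"fab cost ~${cost/1e9:.0f}B")
```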

Section 1.11 - Parallel Processors and Parallel Computing

-Today, parallel processing solves some of our biggest problems in much the same way that settlers of the Old West solved their biggest problems using parallel oxen. If they were using an ox to move a tree and the ox was not big enough or strong enough, they certainly didn't try to grow a bigger ox—they used two oxen. If our computer isn't fast enough or powerful enough, instead of trying to develop a faster, more powerful computer, why not simply use multiple computers? This is precisely what parallel computing does.
-The first parallel processing systems were built in the late 1960s and had only two processors. The 1970s saw the introduction of supercomputers with as many as 32 processors, and the 1980s brought the first systems with more than 1000 processors. Finally, in 1999, IBM announced funding for the development of a supercomputer architecture called the Blue Gene series. The first computer in this series, the Blue Gene/L, is a massively parallel computer containing 131,000 dual-core processors, each with its own dedicated memory. IBM has continued to add computers to this series. The Blue Gene/P appeared in 2007 and has quad-core processors. The latest computer designed for this series, the Blue Gene/Q, uses 16-core processors, with 1024 compute nodes per rack, scalable up to 512 racks. Installations of the Blue Gene/Q computer include Nostromo (being used for biomedical data in Poland), Sequoia (being used at Lawrence Livermore National Laboratory for nuclear simulations and scientific research), and Mira (used at Argonne National Laboratory).
-Dual-core and quad-core processors (and higher, as we saw in Blue Gene/Q) are examples of multicore processors. But what is a multicore processor? Essentially, it is a special type of parallel processor. Parallel processors are often classified as either "shared memory" processors (in which all processors share the same global memory) or "distributed memory" computers (in which each processor has its own private memory). The following discussion is limited to shared memory multicore architectures—the type used in personal computers.
-Multicore architectures are parallel processing machines that allow for multiple processing units (often called cores) on a single chip. Dual-core means two cores; quad-core machines have four cores; and so on. But what is a core? Instead of a single processing unit in an integrated circuit (as found in typical von Neumann machines), independent multiple cores are "plugged in" and run in parallel. Each processing unit has its own ALU and set of registers, but all processors share memory and some other resources. "Dual core" is different from "dual processor." Dual-processor machines have two processors, but each processor plugs into the motherboard separately. The important distinction to note is that all cores in multicore machines are integrated into the same chip. This means you could, for example, replace a single-core (uniprocessor) chip in your computer with a dual-core processor chip (provided that your computer had the appropriate socket for the new chip).
-Just because your computer has multiple cores does not mean it will run your programs more quickly. Application programs (including operating systems) must be written to take advantage of multiple processing units (this statement is true for parallel processing in general). Multicore computers are very useful for multitasking—when users are doing more than one thing at a time.
These "multiple tasks" can be assigned to different processors and carried out in parallel, provided the operating system is able to manipulate many tasks at once. In addition to multitasking, multithreading can also increase the performance of any application with inherent parallelism. Programs are divided into threads, which can be thought of as mini-processes. If an application is multithreaded, separate threads can run in parallel on different processing units. *****-To summarize, parallel processing refers to a collection of different architectures, from multiple separate computers working together, to multiple processors sharing memory, to multiple cores integrated onto the same chip. Parallel processors are technically not classified as von Neumann machines because they do not process instructions sequentially. However, many argue that parallel processing computers contain CPUs, use program counters, and store both programs and data in main memory, which makes them more like an extension to the von Neumann architecture rather than a departure from it; these people view parallel processing computers as sets of cooperating von Neumann machines. In this regard, perhaps it is more appropriate to say that parallel processing exhibits "non-von Neumann-ness." -Even parallel computing has its limits, however. As the number of processors increases, so does the overhead of managing how tasks are distributed to those processors. Some parallel processing systems require extra processors just to manage the rest of the processors and the resources assigned to them. No matter how many processors we place in a system, or how many resources we assign to them, somehow, somewhere, a bottleneck is bound to develop. The best we can do, however, is make sure the slowest parts of the system are the ones that are used the least. This is the idea behind Amdahl's Law. ***Amdahl's law states that the performance enhancement possible with a given improvement is limited by the amount that the improved feature is used. The underlying premise is that every algorithm has a sequential part that ultimately limits the speedup that can be achieved by multiprocessor implementation. *-If parallel machines and other non-von Neumann architectures give such huge increases in processing speed and power, why isn't everyone using them everywhere? The answer lies in their programmability True multiprocessor programming is more complex than both uniprocessor and multicore programming and requires people to think about problems in a different way, using new algorithms and programming tools. One of these programming tools is a set of new programming languages. Most of our programming languages are von Neumann languages, created for the von Neumann architecture. Many common languages have been extended with special libraries to accommodate parallel programming, and many new languages have been designed specifically for the parallel programming environment. We have very few programming languages for the remaining (nonparallel) non-von Neumann platforms, and fewer people who really understand how to program in these environments efficiently. ***Deep Blue - Chess Playing Computer The problem of championship chess playing had long been considered so hard that many believed a machine could never beat a human Grandmaster. On May 11, 1997, a machine called Deep Blue did just that. Deep Blue was a massively parallel system consisting of 30 RS/6000-based nodes supplemented with 480 chips built especially to play chess. 
Deep Blue included a database of 700,000 complete games, with separate systems for openings and endgames. It evaluated 200 million positions per second on average, which enabled Deep Blue to produce a 12-move lookahead.
***Watson - Jeopardy! Computer:
With Deep Blue's stunning win over Kasparov now in the history books, IBM Research manager Charles Lickel began looking for a new challenge. Playing Jeopardy! is enormously more difficult than playing chess. In chess, the problem domain is clearly defined, with fixed, unambiguous rules and a finite (although very large) solution space. Jeopardy! questions, on the other hand, cover a nearly infinite problem space compounded by the vagaries of human language, odd relations between concepts, puns, and vast amounts of unstructured factual information. To make the game fair, Watson had to emulate a human player as closely as possible. No connection to the internet or any other computers was permitted, and Watson was required to physically press a plunger to "buzz in" with an answer.
Once a clue was read, Watson initiated several parallel processes. Each process examined different aspects of the clue, narrowed the solution space, and formulated a hypothesis as to the answer. A typical desktop computer would need about two hours to come up with a good hypothesis. Watson had to do it in less than three seconds. It achieved this feat through a massively parallel architecture dubbed DeepQA (Deep Question and Answer). The system relied on 90 IBM POWER 750 servers. Each server was equipped with four POWER7 processors, and each POWER7 processor had eight cores, giving a total of 2880 processor cores. While playing Jeopardy!, each core had access to 16TB of main memory and 4TB of clustered storage.
The DeepQA algorithms provided Watson with the ability to synthesize information—in a humanlike manner—from this universe of raw data. Watson drew inferences and made assumptions using hard facts and incomplete information. Watson could see information in context: the same question, in a different context, might well produce a different answer. On the third day of its match, February 16, 2011, Watson stunned the world by soundly beating both reigning Jeopardy! champs, Ken Jennings and Brad Rutter.
Watson's Jeopardy! success has now been matched by its medical school success. Although Watson's applications and abilities have been growing, Watson's footprint has been shrinking. In the span of only a few years, system performance has improved by 240% with a 75% reduction in physical resources. Watson can now be run on a single Power 750 server, leading some to claim that "Watson on a chip" is just around the corner. In Watson, we have not merely seen an amazing Jeopardy! player or crack oncologist. What we have seen is the future of computing. Rather than people being trained to use computers, computers will train themselves to interact with people—with all their fuzzy and incomplete information. Tomorrow's systems will meet humans on human terms.
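As a small worked example of Amdahl's Law from the summary above, the sketch below computes the speedup for a program whose parallel fraction and processor counts are made-up assumptions.

```python
# Amdahl's Law: speedup = 1 / ((1 - f) + f / N), where f is the fraction of
# the work that can be parallelized and N is the number of processors.
# The 90% parallel fraction and the processor counts are assumptions.

def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

for n in (2, 4, 16, 1024):
    print(f"{n:5d} processors -> speedup {amdahl_speedup(0.90, n):5.2f}x")

# Even with 1024 processors, a program that is 10% sequential cannot
# exceed a 10x speedup: the sequential part dominates.
```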

Section 1.5 - 1.5.4

*-The real explosion in computer use came with the integrated circuit generation. Jack Kilby invented the integrated circuit (IC), or microchip, made of germanium. *-Six months later, Robert Noyce (who had also been working on integrated circuit design) created a similar device using silicon instead of germanium. This is the silicon chip upon which the computer industry was built. The IC generation also saw the introduction of time-sharing and multiprogramming (the ability for more than one person to use the computer at a time). Multiprogramming, in turn, necessitated the introduction of new operating systems for these computers.

Section 1.8

-Although the internet is obviously our main conduit for information and financial transactions, it not so obviously is the fabric for supervisory control and data acquisition (SCADA) networks. SCADA systems operate vital portions of our physical infrastructure, including power generation facilities, transportation networks, sewage systems, and oil and gas pipelines, to name only a few. For example, an array of connected computers essentially runs the American power grid. This interconnectivity is so crucial that a potential attack to shut it down has become a foremost priority of the Department of Defense. Whether it be from a planned attack or a simple outage, the internet, although robust, is fragile.
-Because of their inherent communication needs, SCADA systems were the first wave of control systems to connect via the internet. The present wave involves less critical but much more numerous collections of control and sensory nodes known as the Internet of Things (IoT) or machine-to-machine (M2M) communication.
*-The IoT (Internet of Things) encompasses everything from the smallest RFID chips to home appliances to security systems. These gadgets are typically embedded in everyday objects and communicate wirelessly. IoT devices are being used in homes for energy management and allow users to remotely control devices as well as collect and report energy usage. They are also being used in medical devices for remote health monitoring. Farmers can use IoT devices to monitor local weather conditions and get up-to-the-minute reports on their crops. Smart homes, smart cities, and connected smart cars are already here. Cisco Systems has estimated that there may be as many as 50 billion IoT devices in use by 2020. It is possible that without the internet, these devices might be rendered useless—including large portions of our crucial SCADA infrastructure.
-The internet was designed with survivability in mind. If one portion fails, another can be used just as easily. Indeed, if the internet infrastructure in the northeastern United States could survive Superstorm Sandy, one might be led to believe that the internet can survive anything. Several internet observers are becoming increasingly worried, however. Some of the loudest alarms are sounding about the possibility of cyber warfare—particularly cyber warfare directed at SCADA systems. As previously mentioned, a well-executed SCADA attack could bring down vast swaths of a country's power systems and transportation networks. But even a less aggressive cyber attack aimed at only a few strategic components could cause a catastrophic cascading failure. In this scenario, the failed components overwhelm other components along the way, causing their failure as well.
-A second concern is the increasing bandwidth demand brought on by the IoT. It is impossible to characterize the type of traffic that these devices will place on the network. As more devices are added to the network, the routing information grows larger and it takes more time to select a route and forward data. When decisions can't be made in a timely manner, information is lost and must be retransmitted, putting even more traffic on the network. At its worst, this situation could lead to a condition known as congestive collapse, where the routers, hardware components that direct the internet traffic, go offline because they can no longer handle their incoming traffic, even in defiance of our best efforts at congestion control.
-Recognizing that the IoT may push packet traffic past the limits of network scalability, scientists have proposed more intelligent, or cognitive, methods to route data. This idea takes advantage of the intelligence of the end nodes in the network. That is to say, end nodes could engage in direct peer-to-peer exchanges, rather than needing to involve a host.
*-Despite such devices being only inches from each other, their packets may travel hundreds of miles. Indeed, with 50 billion transfers like this a day, there is no question that congestive collapse is a real concern. The internet keeps working harder to keep ahead of its many threats. But this hard work leads ultimately to exhaustion. The time has come for the internet to work smarter. Otherwise, we may be forced to recall how we ever lived without it.

Section 1.3

-Back in the 1960s, someone decided that because the powers of 2 were close to the powers of 10, the same prefix names could be used for both. For example, 2^10 is close to 10^3, so "kilo" is used to refer to them both. The result has been mass confusion: Does a given prefix refer to a power of 10 or a power of 2? Does "a kilo" mean 10^3 of something or 2^10 of something? Although there is no definitive answer to this question, there are accepted standards of usage.
-Power-of-10 prefixes are ordinarily used for power, electrical voltage, frequency (such as computer clock speeds), and multiples of bits (such as data speeds in number of bits per second).
-The International Electrotechnical Commission, with help from the National Institute of Standards and Technology, has approved standard names and symbols for binary prefixes to differentiate them from decimal prefixes.
In computer architecture:
-Lowercase b: bit. Uppercase B: byte.
-Powers of 10 go up in steps of 10^3 = 1000 (kilo, mega, giga, tera, etc.); powers of 2 go up in steps of 2^10 = 1024 (kibi, mebi, gibi, tebi, etc., although the decimal names kilo, mega, giga, tera are often reused for them).
**KB can be either a power of 2 or a power of 10; there is no definite standard.
**A convention some professionals use to distinguish base 10 from base 2 when using these prefixes is to use a lowercase letter for the prefix to indicate a power of 10 (so 1KB = 1024 bytes, but 1kB = 1000 bytes). Under the IEC binary-prefix standard, a kilobyte is 10^3 bytes (kB) and a kibibyte is 2^10 bytes (KiB); "kibi" means "kilobinary" and is the form used in computing.
-Generally, negative powers refer to powers of 10, not powers of 2. For this reason, the new binary prefix standards do not include any new names for the negative powers.
-Zettabyte: 10^21 bytes (its binary counterpart, the zebibyte, is 2^70 bytes). Yottabyte: 10^24 bytes (the yobibyte is 2^80 bytes).
****-It is important to understand that these prefixes can refer to both base 10 and base 2 values. For example, 1K could mean 1000, or it could mean 1024.
-Storage and manufacturers: It is increasingly common for disk capacities to be given in base 10 rather than base 2, which means about 7% less space than the base-2 reading suggests (see the sketch below). It is important to determine whether a manufacturer uses base 10 or base 2 when calculating the storage space of a drive.
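The roughly 7% discrepancy mentioned above can be checked with a few lines. The 500 GB drive size is an arbitrary example.

```python
# Base-10 vs. base-2 capacity: the same label means different byte counts.

DECIMAL_GB = 10**9          # gigabyte  (GB):  1,000,000,000 bytes
BINARY_GIB = 2**30          # gibibyte (GiB): 1,073,741,824 bytes

advertised_gb = 500                         # capacity printed on the box (base 10)
total_bytes = advertised_gb * DECIMAL_GB
in_gib = total_bytes / BINARY_GIB

print(f"{advertised_gb} GB (base 10) = {in_gib:.1f} GiB (base 2)")
print(f"shortfall vs. the base-2 reading: {(1 - in_gib/advertised_gb)*100:.1f}%")
# ~465.7 GiB, about 7% less than the number on the label suggests.
```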

Section 1.7

-Cloud computing is the general term for any type of virtual computing platform provided over the internet that offers a set of shared resources, such as storage, networking, applications, and various other processes. A cloud computing platform is defined in terms of the services that it provides rather than its physical configuration. Its name derives from the cloud icon that symbolizes the internet on schematic diagrams. But the metaphor carries well into the actual cloud infrastructure, because the computer is more abstract than real. The "computer" and "storage" appear to the user as a single entity in the cloud but usually span several physical servers. The storage is usually located on an array of disks that are not directly connected to any particular server. System software is designed to give this configuration the illusion of being a single system; thus, we say that it presents a virtual machine to the user.
-SLAs (service-level agreements) are important in the context of cloud computing as well. They specify minimum levels required for service, protection, and security of data; various parameters of performance (such as availability); ownership and location of data; user rights and costs; and specifics regarding failure to perform and mediation. Unlike outsourcing, cloud computing is typically pay-as-you-go, so you pay only for what you use.
-Cloud computing services can be defined and delivered in a number of ways based on levels of the computer hierarchy (a rough sketch of who manages what in each model appears at the end of this section):
SaaS = Software as a Service
PaaS = Platform as a Service
IaaS = Infrastructure as a Service
SaaS - Level 6: User - Executable Programs
PaaS - Level 5: High-Level Language - C++, Java, etc.; Level 4: Assembly Language - Assembly Code; Level 3: System Software - OS, Library Code
IaaS - Level 2: Machine - Instruction Set Architecture; Level 1: Control - Microcode or Hardwired; Level 0: Digital Logic - Circuits, Gates, etc.
-At the top of the hierarchy, where we have executable programs, a cloud provider might offer an entire application over the internet, with no components installed locally. Basically, you use an application that is running completely on someone else's hardware. This is called software as a service, or SaaS. The consumer of this service does not maintain the application or need to be concerned with the infrastructure in any way. SaaS applications tend to focus on narrow, non-business-critical applications. Well-known examples include Gmail, Dropbox, GoToMeeting, Google Docs, Zoho, and Netflix.
-A great disadvantage of SaaS is that the consumer has little control over the behavior of the product. This may be problematic if a company has to make radical changes to its processes or policies in order to use a SaaS product.
-Companies that desire to have more control over their applications, or that need applications for which SaaS is unavailable, might instead opt to deploy their own applications on a cloud-hosted environment called platform as a service, or PaaS. PaaS provides server hardware, operating systems, database services, security components, and backup and recovery services so you can develop applications of your own. The PaaS provider manages the performance and availability of the environment, whereas the customer manages the applications hosted in the PaaS cloud.
-PaaS is not a good fit in situations where rapid configuration changes are required. This would be the case if a company's main business is software development.
The formality of the change processes necessary to a well-run PaaS operation impedes rapid software deployment (by forcing a company to play by the service provider's rules). Indeed, in any company whose staff is capable of managing operating system and database software, the infrastructure as a service (IaaS) cloud model might be the best option. IaaS, the most basic of the models, allows you to buy access to basic computer hardware and thus provides only server hardware, secure network access to the servers, and backup and recovery services. The customer is responsible for all system software, including the operating system and databases.
-Not only do PaaS and IaaS liberate the customer from the difficulties of data center management, they also provide elasticity: the ability to add and remove resources based on demand. A customer pays for only as much infrastructure as is needed.
-Cloud storage is a limited type of IaaS. The general public can obtain small amounts of cloud storage inexpensively through services such as Dropbox, Google Drive, and Amazon Drive—to name only a few among a crowded field.
-SLA management remains an important activity in the relationship between the service provider and the service consumer. Moreover, once an enterprise moves its assets to the cloud, it might be difficult to transition back to a company-owned data center, should the need arise. Thus, any notion of moving assets to the cloud must be carefully considered, and the risks clearly understood.
-The cloud also presents a number of challenges to computer scientists. First and foremost is the technical configuration of the data center. The infrastructure must provide for uninterrupted service, even during maintenance activities.
***-With the cost and complexity of data centers continuing to rise—with no end in sight—cloud computing is almost certain to become the platform of choice for medium- to small-sized businesses. It provides reduced infrastructure costs, as well as hardware and software managed by someone else, and allows users to pay only for what they use. But the cloud is not worry-free. In addition to potential privacy and security risks, you might end up trading technical challenges for even more vexing supplier management challenges.
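As a rough summary of the division of responsibility across the three service models described in this section, here is a small sketch. The three-layer breakdown is a simplification chosen for illustration, not a definition from the text.

```python
# Simplified view of who manages which layers in SaaS, PaaS, and IaaS.

LAYERS = ["hardware & network", "OS & database", "application"]

PROVIDER_MANAGES = {
    "IaaS": {"hardware & network"},
    "PaaS": {"hardware & network", "OS & database"},
    "SaaS": {"hardware & network", "OS & database", "application"},
}

for model in ("IaaS", "PaaS", "SaaS"):
    customer = [layer for layer in LAYERS if layer not in PROVIDER_MANAGES[model]]
    print(f"{model}: provider manages {sorted(PROVIDER_MANAGES[model])}; "
          f"customer manages {customer if customer else 'only its own usage'}")
```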

Section 1.6

-If a machine is to be capable of solving a wide range of problems, it must be able to execute programs written in different languages, from Fortran and C to Lisp and Prolog. The only physical components we have to work with are wires and gates. A formidable open space—a semantic gap—exists between these physical components and a high-level language such as C++. For a system to be practical, the semantic gap must be invisible to most of the system's users.
-Programming experience teaches us that when a problem is large, we should break it down and use a divide-and-conquer approach. In programming, we divide a problem into modules and then design each module separately.
-Through the principle of abstraction, we can imagine the machine to be built from a hierarchy of levels, in which each level has a specific function and exists as a distinct hypothetical machine. We call the hypothetical computer at each level a virtual machine. Each level's virtual machine executes its own particular set of instructions, calling upon machines at lower levels to carry out the tasks when necessary.
Abstraction Levels of Modern Computing Systems:
Level 6: User - Executable Programs
Level 5: High-Level Language - C++, Java, etc.
Level 4: Assembly Language - Assembly Code
Level 3: System Software - OS, Library Code
Level 2: Machine - Instruction Set Architecture
Level 1: Control - Microcode or Hardwired
Level 0: Digital Logic - Circuits, Gates, etc.
-Level 6, the User Level, is composed of applications and is the level with which everyone is most familiar. At this level, we run programs such as word processors, graphics packages, or games. The lower levels are nearly invisible from the User Level.
-Level 5, the High-Level Language Level, consists of languages such as C, C++, Fortran, Lisp, Pascal, and Prolog. These languages must be translated (using either a compiler or an interpreter) to a language the machine can understand. Compiled languages are translated into assembly language and then assembled into machine code. (They are translated to the next lower level.) The user at this level sees very little of the lower levels.
-Level 4, the Assembly Language Level, encompasses some type of assembly language. As previously mentioned, compiled higher-level languages are first translated to assembly, which is then directly translated to machine language. This is a one-to-one translation, meaning that one assembly language instruction is translated to exactly one machine language instruction. By having separate levels, we reduce the semantic gap between a high-level language, such as C++, and the actual machine language (which consists of 0s and 1s).
-Level 3, the System Software Level, deals with operating system instructions. This level is responsible for multiprogramming, protecting memory, synchronizing processes, and various other important functions. Often, instructions translated from assembly language to machine language are passed through this level unmodified.
-Level 2, the Instruction Set Architecture (ISA), or Machine Level, consists of the machine language recognized by the particular architecture of the computer system. Programs written in a computer's true machine language on a hardwired computer (see below) can be executed directly by the electronic circuits without any interpreters, translators, or compilers.
-Level 1, the Control Level, is where a control unit makes sure that instructions are decoded and executed properly and that data is moved where and when it should be.
The control unit interprets the machine instructions passed to it, one at a time, from the level above, causing the required actions to take place. Control units can be designed in one of two ways: they can be hardwired or they can be microprogrammed.
-In hardwired control units, control signals emanate from blocks of digital logic components. These signals direct all the data and instruction traffic to appropriate parts of the system. Hardwired control units are typically very fast because they are actually physical components. However, once implemented, they are very difficult to modify for the same reason.
-A microprogram is a program written in a low-level language that is implemented directly by the hardware. Machine instructions produced in Level 2 are fed into this microprogram, which then interprets the instructions by activating hardware suited to execute the original instruction. One machine-level instruction is often translated into several microcode instructions; this is not the one-to-one correlation that exists between assembly language and machine language. (A toy illustration of this expansion follows below.) Microprograms are popular because they can be modified relatively easily. The disadvantage of microprogramming is, of course, that the additional layer of translation typically results in slower instruction execution.
-Level 0, the Digital Logic Level, is where we find the physical components of the computer system: the gates and wires. These are the fundamental building blocks, the implementations of the mathematical logic, that are common to all computer systems.
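To illustrate the one-to-many expansion from machine instructions to microcode described above, here is a toy sketch. The instruction mnemonic and the micro-operation names are invented for illustration.

```python
# Toy microprogrammed control unit: one machine-level instruction expands
# into several micro-operations (the micro-op names are invented examples).

MICROPROGRAM = {
    "ADD R1, MEM[addr]": [
        "place addr on the address bus",
        "assert memory-read on the control bus",
        "latch the operand from the data bus into a temporary register",
        "route R1 and the temporary register into the ALU",
        "signal the ALU to add",
        "write the ALU result back to R1",
    ],
}

def execute(machine_instruction: str) -> None:
    for step, micro_op in enumerate(MICROPROGRAM[machine_instruction], start=1):
        print(f"  micro-op {step}: {micro_op}")

print("ADD R1, MEM[addr] ->")
execute("ADD R1, MEM[addr]")   # one machine instruction, six micro-operations
```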

Section 1.5 - 1.5.5

-In the third generation of electronic evolution, multiple transistors were integrated onto one chip. As manufacturing techniques and chip technologies advanced, increasing numbers of transistors were packed onto one chip.
*-There are now various levels of integration: (1) SSI (small-scale integration), in which there are 10 to 100 components per chip; (2) MSI (medium-scale integration), in which there are 100 to 1000 components per chip; (3) LSI (large-scale integration), in which there are 1000 to 10,000 components per chip; and finally, (4) VLSI (very-large-scale integration), in which there are more than 10,000 components per chip. This last level, VLSI, marks the beginning of the fourth generation of computers. The term ULSI (ultra-large-scale integration) has been suggested for integrated circuits containing more than 1 million transistors.
-Other useful terminology includes: (1) WSI (wafer-scale integration), building superchip ICs from an entire silicon wafer; (2) 3D-IC (three-dimensional integrated circuit); and (3) SOC (system-on-a-chip), an IC that includes all the necessary components for the entire computer.
*-VLSI allowed Intel, in 1971, to create the world's first microprocessor, the 4004, which was a fully functional, 4-bit system that ran at 108KHz. Intel also introduced the random access memory (RAM) chip, accommodating four kilobits of memory on a single chip. This allowed computers of the fourth generation to become smaller and faster than their solid-state predecessors.
-The Personal Computer was IBM's third attempt at producing an "entry-level" computer system.
-The IBM architecture continues to be the de facto standard for microcomputing, with each year heralding bigger and faster systems.
-Since the 1960s, mainframe computers have seen stunning improvements in price-performance ratios, thanks to VLSI technology.
-The processing power brought by VLSI to supercomputers defies comprehension. The first supercomputer, the CDC 6600, could perform 10 million instructions per second and had 128KB of main memory. By contrast, the supercomputers of today contain thousands of processors, can address terabytes of memory, and will soon be able to perform a quadrillion instructions per second.
The Integrated Circuit and Its Production:
-Integrated circuits are found all around us, from computers to cars to refrigerators to cell phones. The most advanced circuits contain hundreds of millions (and even billions) of components in an area about the size of your thumbnail.
-How are these circuits made? They are manufactured in semiconductor fabrication facilities. Because the components are so small, all precautions must be taken to ensure a sterile, particle-free environment, so manufacturing is done in a "clean room." The process begins with the chip design, which eventually results in a mask, the template or blueprint that contains the circuit patterns.
-The silicon wafer is then covered by an insulating layer of oxide, followed by a layer of photosensitive film called photo-resist. This photo-resist has regions that break down under UV light and other regions that do not. UV light is then shone through the mask (a process called photolithography). Bare oxide is left on portions where the photo-resist breaks down under the UV light. Chemical "etching" is then used to dissolve the revealed oxide layer and also to remove the remaining photo-resist not affected by the UV light.
The "doping" process embeds certain impurities into the silicon that alters the electrical properties of the unprotected areas, basically creating the transistors. The chip is then covered with another layer of both the insulating oxide material and the photo-resist, and the entire process is repeated hundreds of times, each iteration creating a new layer of the chip

Section 1.1

-Program optimization and system tuning are perhaps the most important motivations for learning how computers work.
-We must become familiar with how various circuits and components fit together to create working computer systems. We do this through the study of computer organization.
-Computer organization addresses issues such as control signals (how the computer is controlled), signaling methods, and memory types. It encompasses all physical aspects of computer systems. It helps us to answer the question: How does a computer work?
-The study of computer architecture, on the other hand, focuses on the structure and behavior of the computer system and refers to the logical and abstract aspects of system implementation as seen by the programmer.
-The computer architecture for a given machine is the combination of its hardware components plus its instruction set architecture (ISA). The ISA allows you to talk to the machine.
-Studying computer architecture helps us to answer the question: How do I design a computer?
-Neither computer organization nor computer architecture can stand alone. They are interrelated and interdependent. We can truly understand each of them only after we comprehend both of them.

Section 1.4 - Standards Organizations

-The Institute of Electrical and Electronics Engineers (IEEE) is an organization dedicated to the advancement of the professions of electronic and computer engineering. The IEEE actively promotes the interests of the worldwide engineering community by publishing an array of technical literature. It also sets standards for various computer components, signaling protocols, and data representation, to name only a few areas of its involvement. The IEEE has a democratic, albeit convoluted, procedure established for the creation of new standards. Its final documents are well respected and usually endure for several years before requiring revision.
-The International Telecommunications Union (ITU) is based in Geneva, Switzerland. The ITU was formerly known as the Comité Consultatif International Télégraphique et Téléphonique, or the International Consultative Committee on Telephony and Telegraphy. As its name implies, the ITU concerns itself with the interoperability of telecommunications systems, including telephone, telegraph, and data communication systems.
-Many countries, including those in the European Community, have commissioned umbrella organizations to represent their interests in various international groups. The group representing the United States is the American National Standards Institute (ANSI). Great Britain has its British Standards Institution (BSI) in addition to having a voice on the CEN (Comité Européen de Normalisation), the European committee for standardization.
***-The International Organization for Standardization (ISO) is the entity that coordinates worldwide standards development, including the activities of ANSI with BSI, among others. ISO is not an acronym, but derives from the Greek word isos, meaning "equal." The ISO consists of more than 2800 technical committees, each of which is charged with some global standardization issue. Its interests range from the behavior of photographic film to the pitch of screw threads to the complex world of computer engineering. The proliferation of global trade has been facilitated by the ISO. Today, the ISO touches virtually every aspect of our lives.

Section 1.5 - 1.5.2

-The wired world that we know today was born from the invention of a single electronic device called a vacuum tube by Americans and—more accurately—a valve by the British. In 1911, Owen Willans Richardson analyzed the behavior behind these devices. He concluded that when a negatively charged filament was heated, electrons "boiled off" just as water molecules can be boiled to create steam. He aptly named this phenomenon thermionic emission. Vacuum tubes opened the door to modern computing.
-Although Babbage is often called the "father of computing," his machines were mechanical, not electrical or electronic. In the 1930s, Konrad Zuse (1910-1995) picked up where Babbage left off, adding electrical technology and other improvements to Babbage's design. Zuse's computer, the Z1, used electromechanical relays instead of Babbage's hand-cranked gears. The Z1 was programmable and had a memory, an arithmetic unit, and a control unit. Although his machine was designed to use vacuum tubes, Zuse, who was building his machine on his own, could not afford the tubes. Thus, the Z1 correctly belongs in the first generation, although it had no tubes.
-Digital computers, as we know them today, are the outcome of work done by a number of people in the 1930s and 1940s.
***-Three people clearly stand out as the inventors of modern computers: John Atanasoff, John Mauchly, and J. Presper Eckert. John Atanasoff (1904-1995) has been credited with the construction of the first completely electronic computer. The Atanasoff-Berry Computer (ABC) was a binary machine built from vacuum tubes.
*-John Mauchly (1907-1980) and J. Presper Eckert (1919-1995) were the two principal inventors of the ENIAC, introduced to the public in 1946. The ENIAC is recognized as the first all-electronic, general-purpose digital computer. Although they probably knew that computers would be able to function most efficiently using the binary numbering system, Mauchly and Eckert designed their system to use base 10 numbers, in keeping with the appearance of building a huge electronic adding machine. Realizing that an electronic device could shorten ballistic table calculation from days to minutes, the army funded the ENIAC. And the ENIAC did indeed shorten the time to calculate a table from 20 hours to 30 seconds.

Section 1.5 - 1.5.1

-Wilhelm Schickard (1592-1635) has been credited with the invention of the first mechanical calculator, the Calculating Clock (exact date unknown). This device was able to add and subtract numbers containing as many as six digits. -In 1642, Blaise Pascal (1623-1662) developed a mechanical calculator called the Pascaline to help his father with his tax work. The Pascaline could do addition with carry and subtraction. It was probably the first mechanical adding device actually used for a practical purpose. -The Pascaline was so well conceived that its basic design was still being used at the beginning of the twentieth century. -Gottfried Wilhelm von Leibniz (1646-1716), a noted mathematician, invented a calculator known as the Stepped Reckoner that could add, subtract, multiply, and divide. None of these devices could be programmed or had memory; they required manual intervention throughout each step of their calculations. -The Difference Engine by Charles Babbage (1791-1871): Some people refer to Babbage as "the father of computing." Babbage built his Difference Engine in 1822. The Difference Engine got its name because it used a calculating technique called the method of differences (a short sketch of this technique appears at the end of this section). The machine was designed to mechanize the solution of polynomial functions and was actually a calculator, not a computer. ***-Although Babbage died before he could build it, the Analytical Engine was designed to be more versatile than his earlier Difference Engine. The Analytical Engine would have been capable of performing any mathematical operation. The Analytical Engine included many of the components associated with modern computers: an arithmetic processing unit to perform calculations (Babbage referred to this as the mill), a memory (the store), and input and output devices. ***Ada, Countess of Lovelace and daughter of the poet Lord Byron, suggested that Babbage write a plan for how the machine would calculate numbers. This is regarded as the first computer program, and Ada is considered to be the first computer programmer. It is also rumored that she suggested using the binary number system rather than the decimal number system to store data. -How to get data into the computer? Punched cards: Using cards to control the behavior of a machine did not originate with Babbage, but with one of his friends, Joseph-Marie Jacquard (1752-1834). In 1801, Jacquard invented a programmable weaving loom that could produce intricate patterns in cloth. Jacquard gave Babbage a tapestry that had been woven on this loom using more than 10,000 punched cards. To Babbage, it seemed only natural that if a loom could be controlled by cards, then his Analytical Engine could be as well. ***The punched card proved to be the most enduring means of providing input to a computer system. In the latter half of the nineteenth century, most machines used wheeled mechanisms, which were difficult to integrate with early keyboards because keyboards were levered devices. But levered devices could easily punch cards, and wheeled devices could easily read them. So a number of devices were invented to encode and then "tabulate" card-punched data. The most important of the late-nineteenth-century tabulating machines was the one invented by Herman Hollerith (1860-1929). Hollerith's machine was used for encoding and compiling 1890 census data. This census was completed in record time, boosting both Hollerith's finances and the reputation of his invention. Hollerith later founded the company that would become IBM.
His 80-column punched card, the Hollerith card, was a staple of automated data processing for more than 50 years. -The Mechanical Turk: Elaborate clocks began appearing at the beginning of the 1700s. Complex and ornate models graced cathedrals and town halls. These clockworks eventually morphed into mechanical robots called automata. Empress Maria Theresa challenged Wolfgang von Kempelen to build an automaton to surpass all that had ever been brought to her court. He delivered a turban-wearing, pipe-smoking, chess-playing automaton. For all appearances, "The Turk" was a formidable opponent for even the best players of the day. As an added touch, the machine contained a set of baffles enabling it to rasp "Échec!" as needed. So impressive was this machine that for 84 years it drew crowds across Europe and the United States. A human chess player, however, was cleverly concealed inside its cabinet; the Turk thus pulled off one of the first and most impressive "computer" hoaxes in the history of technology. It would take another 200 years before a real machine could match the Turk, without the trickery.
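To make the method of differences mentioned above for Babbage's Difference Engine concrete, here is a minimal sketch in Python. The quadratic p(x) = 2x^2 + 3x + 1 and the range of x values are illustrative assumptions, not anything taken from Babbage's design; the point is only that once the difference table is seeded, every new table entry is produced with additions alone, which is exactly the repetitive work the Difference Engine mechanized.

def p(x):
    # Illustrative quadratic; a degree-n polynomial works the same way with n difference rows.
    return 2 * x * x + 3 * x + 1

value = p(0)                            # current table entry, starting at p(0)
first_diff = p(1) - p(0)                # first difference
second_diff = p(2) - 2 * p(1) + p(0)    # second difference, constant for a quadratic

for x in range(8):
    assert value == p(x)                # sanity check against direct evaluation
    print(x, value)
    value += first_diff                 # next table entry: one addition, no multiplication
    first_diff += second_diff           # next first difference: one more addition

Each new value costs two additions and no multiplications, which is why a purely mechanical adding mechanism could tabulate polynomial functions automatically.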

Section 1.10 - Non-von Neumann Architectures

The von Neumann bottleneck continues to baffle engineers looking for ways to build fast systems that are inexpensive and compatible with the vast body of commercially available software. Non-von Neumann architectures are those in which the model of computation departs from the characteristics of the von Neumann architecture. For example, an architecture that does not store programs and data in memory, or that does not process a program sequentially, would be considered a non-von Neumann machine. A computer that has two buses, one for data and a separate one for instructions, would likewise be considered a non-von Neumann machine. Computers designed using the Harvard architecture have two buses, allowing data and instructions to be transferred simultaneously, and also have separate storage for data and instructions (a small sketch contrasting this with the von Neumann organization follows this passage). Many modern general-purpose computers use a modified version of the Harvard architecture in which they have separate pathways for data and instructions but not separate storage. Pure Harvard architectures are typically used in microcontrollers (an entire computer system on a chip), such as those found in embedded systems in appliances, toys, and cars. -Many non-von Neumann machines are designed for special purposes. The first recognized non-von Neumann processing chip was designed strictly for image processing. ***-A number of different subfields fall into the non-von Neumann category, including neural networks (using ideas from models of the brain as a computing paradigm) implemented in silicon, cellular automata, cognitive computers (machines that learn by experience rather than through programming, e.g., IBM's SyNAPSE computer, a machine that models the human brain), quantum computation (a combination of computing and quantum physics), dataflow computation, and parallel computers. ***These all have something in common: the computation is distributed among different processing units that act in parallel. They differ in how weakly or strongly the various components are connected. Of these, parallel computing is currently the most popular.
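As a rough illustration of the von Neumann versus Harvard contrast described above, here is a minimal sketch in Python. The two-field instruction format, the tiny memories, and the toy program are invented purely for illustration and do not correspond to any real machine. In the von Neumann toy, one memory and one pathway serve both instructions and data, so an instruction fetch and its operand read are two trips over the same bus; in the Harvard toy, instruction memory and data memory are separate, so the two accesses could travel over separate buses at the same time.

# --- von Neumann style: a single memory holds both program and data ---
unified_mem = [("ADD", 4), ("ADD", 5), ("HALT", None),   # program
               None,                                      # unused padding
               10, 32]                                    # data

def run_von_neumann():
    acc, pc = 0, 0
    while True:
        op, addr = unified_mem[pc]       # trip 1 over the single bus: fetch instruction
        if op == "HALT":
            return acc
        acc += unified_mem[addr]         # trip 2 over the same bus: read the operand
        pc += 1

# --- Harvard style: separate instruction and data memories (and buses) ---
instr_mem = [("ADD", 0), ("ADD", 1), ("HALT", None)]
data_mem = [10, 32]

def run_harvard():
    acc, pc = 0, 0
    while True:
        op, addr = instr_mem[pc]         # fetched over the instruction bus
        if op == "HALT":
            return acc
        acc += data_mem[addr]            # read over the separate data bus;
        pc += 1                          # in hardware the two accesses can overlap

print(run_von_neumann(), run_harvard())  # both toy machines compute 10 + 32 = 42

The same idea underlies the modified Harvard designs mentioned above: separate instruction and data pathways (for example, split level-1 caches) let fetches and data accesses overlap even when both ultimately come from one main memory.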

