History of the PC


Computer

A computer is a programmable machine designed to sequentially and automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily, allowing the computer to solve more than one kind of problem. An important class of computer operations on some computing platforms is the accepting of input from human operators and the output of results formatted for human consumption. The interface between the computer and the human operator is known as the user interface.

Conventionally, a computer consists of some form of memory, at least one element that carries out arithmetic and logic operations, and a sequencing and control unit that can change the order of operations based on the information that is stored. Peripheral devices allow information to be entered from an external source and allow the results of operations to be sent out. A computer's processing unit executes a series of instructions that make it read, manipulate and then store data. Conditional instructions change the sequence of instructions as a function of the current state of the machine or its environment.

The first electronic digital computers were developed in the mid-20th century (1940-1945). Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[1] In this era mechanical analog computers were used for military applications. Modern computers based on integrated circuits are millions to billions of times more capable than the early machines and occupy a fraction of the space.[2] Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous.

History of computing

Main article: History of computing hardware

The first recorded use of the word "computer" was in 1613, referring to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century.
From the end of the 19th century onwards, the word began to take on its more familiar meaning: a machine that carries out computations.[3]

Limited-function early computers

The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are nonetheless worth mentioning. Some mechanical aids to computing were very successful and survived for centuries, until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC,[4] whose descendant won a speed competition against a modern desk calculating machine in Japan in 1946;[5] the slide rule, invented in the 1620s, which was carried on five Apollo space missions, including to the Moon;[6] and, arguably, the astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the Greeks around 80 BC.[7] The Greek mathematician Hero of Alexandria (c. 10-70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.[8] This is the essence of programmability.

Around the end of the 10th century, the French monk Gerbert d'Aurillac brought back from Spain the drawings of a machine invented by the Moors that answered either yes or no to the questions it was asked.[9] In the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids that saw no further development (Albertus Magnus complained that he had wasted forty years of his life when Thomas Aquinas, terrified by his machine, destroyed it).[10]

In 1642, the Renaissance saw the invention of the mechanical calculator,[11] a device that could perform all four arithmetic operations without relying on human intelligence.[12] The mechanical calculator was at the root of the development of computers in two separate ways. Initially, it was in trying to develop more powerful and more flexible calculators[13] that the computer was first theorized by Charles Babbage[14][15] and then developed.[16] Secondly, development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel[17] of the first commercially available microprocessor integrated circuit.

First general-purpose computers

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template, which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

"The Most Famous Image in the Early History of Computing":[18] this portrait of Jacquard was woven in silk on a Jacquard loom and required 24,000 punched cards to create (1839). It was only produced to order. Charles Babbage owned one of these portraits; it inspired his use of perforated cards in his analytical engine.[19]

It was the fusion of automatic calculation with programmability that produced the first recognizable computers.
In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[20] Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed; nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. This machine was given to the Science Museum in South Kensington in 1910.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..."[21] To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of ideas and technologies that would later prove useful in the realization of practical computers had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine, providing a blueprint for the electronic digital computer.[22] Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine".[22]

The Zuse Z3, 1941, is considered the world's first working programmable, fully automatic computing machine. The ENIAC, which became operational in 1946, is considered to be the first general-purpose electronic computer. EDSAC was one of the first computers to implement the stored-program (von Neumann) architecture. Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.

The Atanasoff-Berry Computer (ABC) was the world's first electronic digital computer, albeit not programmable.[23] Atanasoff is considered to be one of the fathers of the computer.[24] Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry,[25] the machine was not programmable, being designed only to solve systems of linear equations. The computer did employ parallel computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC computer derived from the Atanasoff-Berry Computer.

The first program-controlled computer was invented by Konrad Zuse, who built the Z3, an electromechanical computing machine, in 1941.[26] The first programmable electronic computer was the Colossus, built in 1943 by Tommy Flowers.
George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication, including complex arithmetic and programmability.[27]

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:

- Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic, and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it in this sense the world's first operational computer.[28]
- The non-programmable Atanasoff-Berry Computer (commenced in 1937, completed in 1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
- The secret British Colossus computers (1943),[29] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
- The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.[30]
- The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Stored-program architecture

Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of which was completed in 1948 at the University of Manchester in England: the Manchester Small-Scale Experimental Machine (SSEM or "Baby"). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of −1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but was supplanted by the more common binary architecture.

Semiconductors and microprocessors

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s they had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953.[31] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased the speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and these started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers became as common in the household as the television and the telephone. Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Programs

The defining feature of modern computers, which distinguishes them from all other machines, is that they can be programmed. That is to say that some set of instructions (the program) can be given to the computer, and it will process them. While some computers may have radically different notions of "instructions" and "output" (see quantum computing), modern computers based on the von Neumann architecture typically execute machine code that takes the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

Main articles: Computer program and Computer programming

A 1970s punched card containing one line from a FORTRAN program. The card reads "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

This section applies to most common RAM machine-based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on.
These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from, and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

          mov #0, sum     ; set sum to 0
          mov #1, num     ; set num to 1
    loop: add num, sum    ; add num to sum
          add #1, num     ; add 1 to num
          cmp num, #1000  ; compare num to 1000
          ble loop        ; if num <= 1000, go back to 'loop'
          halt            ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in about a millionth of a second.[32]

Bugs

Main article: Software bug

The actual first computer bug: a moth found trapped on a relay of the Harvard Mark II computer.

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program or the entire system to "hang" (become unresponsive to input such as mouse clicks or keystrokes), to fail completely, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[33] Rear Admiral Grace Hopper is credited with having first used the term "bugs" in computing, after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[34]

Machine code

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on.
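To make the opcode idea concrete, here is a minimal sketch, not from the original article, of a toy stored-program machine written in C. The instruction names, opcode numbers, and register layout are invented for illustration; real instruction sets differ in the details, but the principle is the same: the program is just a list of numbers held in memory, fetched, decoded, and executed one at a time. The program encoded below performs the same 1-to-1,000 summation as the assembly example above.

    /* toy_machine.c - a toy stored-program machine (illustrative only).
       Opcodes, registers, and the instruction encoding are invented
       for this sketch; they do not correspond to any real CPU. */
    #include <stdio.h>

    enum { MOVI, ADD, ADDI, BLE, HALT };        /* invented opcodes */

    /* One instruction: an opcode plus up to three numeric operands. */
    typedef struct { int op, a, b, c; } Instr;

    int main(void) {
        int reg[2] = {0, 0};                    /* reg[0] = sum, reg[1] = num */

        /* The program itself is just data: a list of numbered instructions
           that computes sum = 1 + 2 + ... + 1000. */
        Instr program[] = {
            { MOVI, 0, 0,    0 },               /* 0: sum <- 0                   */
            { MOVI, 1, 1,    0 },               /* 1: num <- 1                   */
            { ADD,  0, 1,    0 },               /* 2: sum <- sum + num           */
            { ADDI, 1, 1,    0 },               /* 3: num <- num + 1             */
            { BLE,  1, 1000, 2 },               /* 4: if num <= 1000 goto 2      */
            { HALT, 0, 0,    0 },               /* 5: stop                       */
        };

        int pc = 0;                             /* program counter */
        for (;;) {                              /* fetch-decode-execute loop */
            Instr in = program[pc++];
            switch (in.op) {
            case MOVI: reg[in.a] = in.b;                 break;
            case ADD:  reg[in.a] += reg[in.b];           break;
            case ADDI: reg[in.a] += in.b;                break;
            case BLE:  if (reg[in.a] <= in.b) pc = in.c; break;
            case HALT: printf("sum = %d\n", reg[0]);     /* prints 500500 */
                       return 0;
            }
        }
    }

Compiled with any C compiler (for example, cc toy_machine.c && ./a.out), the sketch prints sum = 500500, which matches the closed-form value 1000 × 1001 / 2.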
The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data.

Mainframe computer

An IBM 704 mainframe (1964).

Mainframes (often colloquially referred to as "big iron"[1]) are powerful computers used primarily by corporate and governmental organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing. The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were firmly established in the 1960s. Several minicomputer operating systems and architectures arose in the 1970s and 1980s and were known alternately as mini-mainframes or minicomputers; two examples are Digital Equipment Corporation's PDP-8 and the Data General Nova. Many defining characteristics of the "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Description

Most modern mainframe design is defined not so much by single-task computational speed (typically measured as a MIPS rate, or FLOPS in the case of floating point calculations) as by redundant internal engineering and the resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high hardware and computational utilization rates to support massive throughput. These machines often run for long periods of time without interruption, given their inherent high stability and reliability. Software upgrades usually require resetting the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM's z/OS and Parallel Sysplex, or Unisys' XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.

Mainframes are defined by high availability, one of the main reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features, which, if improperly implemented, may instead inhibit the benefits provided. In addition, mainframes are more secure than other computer types: the National Institute of Standards and Technology (NIST) vulnerabilities database, US-CERT, rates traditional mainframes such as IBM zSeries, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, Linux and Unix.[4]

In the 1960s, most mainframes had no explicitly interactive interface.
They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back-office functions such as customer billing. Teletype devices were also commonly used, at least by system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the mainstream during the 1990s with the advent of personal computers provided with GUIs. After 2000, most modern mainframes have partially or entirely phased out classic terminal access for end-users in favour of Web user interfaces.

Historically, mainframes acquired their name in part because of their substantial size, and because of their requirements for specialized heating, ventilation and air conditioning (HVAC) and electrical power, essentially posing a "main framework" of dedicated infrastructure. These high-infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes can reduce data center energy costs for power and cooling, and that they could reduce physical space requirements compared to server farms.[5]

Characteristics

Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate as a host for a collection of virtual machines. In this role, a single mainframe can provide the services of many conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with a specificity and granularity not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system).

Many mainframe customers run two machines: one in their primary data center, and one in their backup data center (fully active, partially active, or on standby) in case there is a catastrophe affecting the first building. Test, development, training, and production workloads for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case), or with shared, geographically dispersed storage provided by EMC or Hitachi.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing.
Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files; gigabyte- to terabyte-size record files are not unusual.[6] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster. Other server families also offload I/O processing and emphasize throughput computing.

Mainframe return on investment (ROI), as for any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors.

Mainframes also have execution-integrity characteristics for fault-tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), and then shift workloads "in flight" to functioning processors, including spares, without any impact on operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together; a simplified software analogy of this idea is sketched below, after the Market overview. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.

Market

IBM mainframes dominate the mainframe market at well over 90% market share.[7] Unisys manufactures ClearPath mainframes based on earlier Burroughs products, as well as ClearPath mainframes based on its OS 1100 product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but the two companies have not collaborated on subsequent Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain mainframe hardware businesses in the Japanese market.[8][9]

The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER, SPARC, MIPS, and Xeon) for lower-end systems. Bull uses a mix of custom and Xeon processors. NEC and Bull both use a mixture of Xeon and Itanium processors for their mainframes. IBM continues to pursue a different business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes, using both homegrown CPUs and Xeon processors. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[10][11]
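Returning to the lock-stepping idea described above: the following is a deliberately simplified software analogy, not from the original article and not how the hardware feature is actually implemented. Real lock-stepping happens per instruction in paired circuitry, with transparent retry and spare processors; the C sketch below, with invented function names, only illustrates the underlying pattern of redundant execution, result comparison, retry, and failover.

    /* lockstep.c - simplified software analogy of lock-step execution
       (illustrative only; everything here is invented for the sketch). */
    #include <stdio.h>

    typedef long (*Unit)(long);                 /* a redundant execution unit */

    static long square(long x) { return x * x; }

    /* Run the same work on two units and compare the results. */
    static int lockstep(Unit a, Unit b, long input, long *out) {
        long ra = a(input);
        long rb = b(input);
        if (ra == rb) {                         /* agreement: accept result */
            *out = ra;
            return 0;
        }
        /* Disagreement: retry once (the analogue of instruction retry);
           if the mismatch persists, report it so the caller can shift
           the workload to a known-good spare unit. */
        ra = a(input);
        rb = b(input);
        if (ra == rb) {
            *out = ra;
            return 0;
        }
        return -1;                              /* fault isolated; fail over */
    }

    int main(void) {
        long result;
        if (lockstep(square, square, 21, &result) == 0)
            printf("result = %ld\n", result);   /* prints: result = 441 */
        else
            printf("mismatch persists; shifting workload to spare\n");
        return 0;
    }

The point of the pattern is that a transient fault in one unit is caught by comparison rather than silently corrupting results, which is why this class of hardware is trusted for workloads such as financial transaction processing.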
History

Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group of manufacturers was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA. Later, as the group shrank, it was referred to as IBM and the BUNCH. IBM's dominance grew out of its 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into IBM's current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers.

Shrinking demand and tough competition started a shakeout in the market in the early 1970s: RCA sold out to UNIVAC and GE also left; in the 1980s Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems, given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996.

That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g.
providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000 IBM introduced the 64-bit z/Architecture, acquired numerous software companies such as Cognos, and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, but MIPS shipments (a measure of mainframe capacity) increased 4% per year over the past two years.[12]

Differences from supercomputers

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (high-performance computing) which are limited by processing speed and memory size, while mainframes are used for problems which are limited by data movement in input/output devices, reliability, and the need to handle many business transactions concurrently. The differences are as follows:

- Mainframes are measured in millions of instructions per second (MIPS), assuming typical instructions are integer operations, while supercomputers are measured in floating point operations per second (FLOPS). Examples of integer operations include moving data around in memory or checking values. Floating point operations are mostly addition, subtraction, and multiplication with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations. In terms of raw computational ability, supercomputers are more powerful.[13]
- Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[14] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money). A transaction could refer to a set of operations including disk reads/writes, operating system calls, or some form of data transfer from one subsystem to another; such an operation does not count toward the raw processing power of a computer. Transaction processing is not exclusive to mainframes; it is also used by microprocessor-based servers and online networks.

See also: Computer types, Failover

References

1. "IBM preps big iron fiesta". The Register. July 20, 2005.
2. Oxford English Dictionary, online edition, "mainframe, n."
3. Ebbers, Mike; O'Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (PDF). IBM International Technical Support Organization. Retrieved 2007-06-01.
4. "National Vulnerability Database". Retrieved September 20, 2011.
5. "Get the facts on IBM vs the Competition - The facts about IBM System z 'mainframe'". IBM. Retrieved December 28, 2009.
6. "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. Retrieved 2008-05-16.
7. "IBM Tightens Stranglehold Over Mainframe Market; Gets Hit with Antitrust Complaint in Europe". CCIA. 2008-07-02. Retrieved 2008-07-09.
^ [1] ^ [2] ^ "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007. ^ "IBM Helps Clients Modernize Applications on the Mainframe". IBM. November 7, 2007. ^ "IBM 4Q2009 Financial Report: CFO's Prepared Remarks". IBM. January 19, 2010. ^ World's Top Supercomputer Retrieved on December 25, 2009 ^ Transaction Processing Performance Council Retrieved on December 25, 2009. [edit] External links Wikimedia Commons has media related to: Mainframe computers IBM eServer zSeries mainframe servers Univac 9400, a mainframe from the 1960s, still in use in a German computer museum Lectures in the History of Computing: Mainframes Articles and Tutorials at Mainframes360.com: Mainframes[hide] v · d · e Computer sizes Classes of computers Larger Super · Minisuper · Mainframe Mini Midrange · Supermini · Server Micro Personal (Workstation · Desktop · Home · SFF (Nettop)) · Plug · Portable · Arcade system board · Video game console Mobile Portable/Mobile data terminal · Electronic organizer · Pocket computer Laptop Desktop replacement computer · Subnotebook (Netbook · Smartbook · Ultrabook) Tablet computer Ultra-mobile PC · Mobile Internet device (Internet tablet) Information appliance Handheld PC (Palm-size PC · Pocket computer) · PDA (EDA) · Mobile phone (Feature phone · Smartphone) · PMP (DAP) · E-book reader · Handheld game console Calculators Scientific · Programmable · Graphing Wearable computer Calculator watch · Wristop · Virtual retinal display · Head-mounted display (Head-up display) Others Microcontroller · Nanocomputer · Pizza Box Case · Single-board computer · Smartdust · Wireless sensor network View page ratings Rate this page What's this? Trustworthy Objective Complete Well-written I am highly knowledgeable about this topic (optional) Submit ratings Categories: Mainframe computers Log in / create account Article Discussion Read Edit View history Main page Contents Featured content Current events Random article Donate to Wikipedia Interaction Help About Wikipedia Community portal Recent changes Contact Wikipedia Toolbox Print/export Languages العربية Azərbaycanca Български Català Česky Dansk Deutsch Eesti Ελληνικά Español Euskara فارسی Français 한국어 Hrvatski Bahasa Indonesia Italiano עברית ລາວ Latviešu Lietuvių Magyar മലയാളം Bahasa Melayu Nederlands 日本語 ‪Norsk (bokmål)‬ Polski Português Română Русский Shqip Simple English Slovenščina Suomi Svenska ไทย Türkçe Українська Tiếng Việt ייִדיש 中文 This page was last modified on 6 December 2011 at 03:32. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of use for details. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Contact us Privacy policy About Wikipedia Disclaimers Mobile view Please read: A personal appeal from Ward Cunningham, inventor of the wiki Read now Mainframe computer From Wikipedia, the free encyclopedia For other uses, see Mainframe (disambiguation). This article has been nominated to be checked for its neutrality. Discussion of this nomination can be found on the talk page. (July 2009) This article contains weasel words: vague phrasing that often accompanies biased or unverifiable information. Such statements should be clarified or removed. 
(January 2010) An IBM 704 mainframe (1964) Mainframes (often colloquially referred to as "big iron"[1]) are powerful computers used primarily by corporate and governmental organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing. The term originally referred to the large cabinets that housed the c

Mainframe computer
From Wikipedia, the free encyclopedia. For other uses, see Mainframe (disambiguation).

An IBM 704 mainframe. Mainframes (often colloquially referred to as "big iron"[1]) are powerful computers used primarily by corporate and governmental organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing. The term originally referred to the large cabinets that housed the central processing unit and main memory of early computers.[2][3] Later the term was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were firmly established in the 1960s. Several smaller architectures and operating systems, known variously as mini-mainframes or minicomputers, arose in the 1970s and 1980s; two examples are Digital Equipment Corporation's PDP-8 and the Data General Nova. Many defining characteristics of the "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Contents: 1 Description 2 Characteristics 3 Market 4 History 5 Differences from supercomputers 6 See also 7 References 8 External links

Description
Most modern mainframe design is defined not so much by single-task computational speed (typically measured as a MIPS rate, or FLOPS for floating-point calculations) as by redundant internal engineering and the resulting high reliability and security, extensive input/output facilities, strict backward compatibility with older software, and high hardware and computational utilization rates to support massive throughput. These machines often run for long periods without interruption, given their inherent stability and reliability. Software upgrades usually require restarting the operating system or portions of it, and are non-disruptive only when using virtualizing facilities such as IBM's z/OS and Parallel Sysplex or Unisys's XPCL, which support workload sharing so that one system can take over another's applications while it is being refreshed. Mainframes are defined by high availability, one of the main reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features; if implemented improperly, they may instead inhibit the benefits they are meant to provide. In addition, mainframes are more secure than other computer types: the National Institute of Standards and Technology (NIST) vulnerabilities database, US-CERT, rates traditional mainframes such as IBM zSeries, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits compared with thousands for Windows, Linux and Unix.[4] In the 1960s, most mainframes had no explicitly interactive interface.
They accepted sets of punched cards, paper tape, or magnetic tape, and operated solely in batch mode to support back-office functions such as customer billing. Teletype devices were also common, for system operators and programmers. By the early 1970s many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the mainstream during the 1990s with the advent of personal computers provided with GUIs. After 2000, most modern mainframes partially or entirely phased out classic terminal access for end users in favour of web user interfaces. Historically, mainframes acquired their name in part because of their substantial size and because of their requirements for specialized heating, ventilation and air conditioning (HVAC) and electrical power, essentially posing a "main framework" of dedicated infrastructure. These infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM has claimed that its newer mainframes can reduce data center energy costs for power and cooling and reduce physical space requirements compared to server farms.[5]

Characteristics
Nearly all mainframes can run (or host) multiple operating systems and thereby operate as a host to a collection of virtual machines. In this role, a single mainframe can replace many smaller physical servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot-swap system capacity without disrupting system function, with a specificity and granularity not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center and one in their backup data center (fully active, partially active, or on standby) in case there is a catastrophe affecting the first building. Test, development, training, and production workloads for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case) or with shared, geographically dispersed storage provided by EMC or Hitachi. Mainframes are designed to handle very high-volume input and output (I/O) and emphasize throughput computing.
Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files; gigabyte- to terabyte-size record files are not unusual.[6] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster. Other server families also offload I/O processing and emphasize throughput computing. Mainframe return on investment (ROI), as with any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and on several other risk-adjusted cost factors. Mainframes also have execution-integrity characteristics for fault-tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact on operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.

Market
IBM mainframes dominate the mainframe market with well over 90% market share.[7] Unisys manufactures ClearPath mainframes based on earlier Burroughs products and ClearPath mainframes based on the OS1100 product line. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but the two companies have not collaborated on subsequent Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain mainframe hardware businesses in the Japanese market.[8][9] The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER, SPARC, MIPS, and Xeon) for lower-end systems. Bull uses a mix of custom and Xeon processors, and both NEC and Bull use a mixture of Xeon and Itanium processors for their mainframes. IBM continues to pursue a different business strategy of mainframe investment and growth: it has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes and that use homegrown CPUs as well as Xeon processors. IBM is also rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits.[10][11]
History
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group of manufacturers was first known as "IBM and the Seven Dwarfs": IBM, Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA. Later, as the group shrank, it was referred to as IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data, and Honeywell). IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 series of mainframes. The latter architecture has continued to evolve into IBM's current zSeries mainframes which, along with the Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers. Shrinking demand and tough competition started a shakeout in the market in the early 1970s: RCA sold out to UNIVAC and GE also left the business; in the 1980s Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems, given the IT policies and practices of the time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s there was a rough consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks; InfoWorld's Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996. That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software, as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor that has increased mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or hundreds of virtual machines on a single mainframe; Linux allows users to take advantage of open-source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly the People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g.
providing unified, extremely high-volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.). In late 2000 IBM introduced the 64-bit z/Architecture; over the following decade it also acquired numerous software companies, such as Cognos, and brought their products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the overall downturn in the server hardware market or to model cycle effects: in the fourth quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year, although MIPS shipments (a measure of mainframe capacity) increased 4% per year over the preceding two years.[12]

Differences from supercomputers
A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for scientific and engineering problems (high-performance computing) which are limited by processing speed and memory size, while mainframes are used for problems which are limited by data movement in input/output devices and by reliability, and for handling multiple business transactions concurrently. The differences are as follows: Mainframes are measured in millions of instructions per second (MIPS), with the assumption that typical instructions are integer operations, whereas supercomputers are measured in floating-point operations per second (FLOPS). Examples of integer operations include moving data around in memory or checking values; floating-point operations are mostly addition, subtraction, and multiplication with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations. In terms of raw computational ability, supercomputers are more powerful.[13] Mainframes are built to be reliable for transaction processing as it is commonly understood in the business world: a commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council,[14] would include updating a database system for such things as inventory control (goods), airline reservations (services), or banking (money); a transaction could refer to a set of operations including disk reads and writes, operating system calls, or some form of data transfer from one subsystem to another. Such an operation does not by itself reflect the raw processing power of a computer. Transaction processing is not exclusive to mainframes; it is also used by microprocessor-based servers and online networks.

See also: Computer types; Failover

References
1. "IBM preps big iron fiesta". The Register. July 20, 2005.
2. Oxford English Dictionary, online edition, "mainframe, n."
3. Ebbers, Mike; O'Brien, W.; Ogden, B. (2006). "Introduction to the New Mainframe: z/OS Basics" (PDF). IBM International Technical Support Organization. Retrieved June 1, 2007.
4. "National Vulnerability Database". Retrieved September 20, 2011.
5. "Get the facts on IBM vs the Competition - The facts about IBM System z 'mainframe'". IBM. Retrieved December 28, 2009.
6. "Largest Commercial Database in Winter Corp. TopTen Survey Tops One Hundred Terabytes". Press release. Retrieved May 16, 2008.
7. "IBM Tightens Stranglehold Over Mainframe Market; Gets Hit with Antitrust Complaint in Europe". CCIA. July 2, 2008. Retrieved July 9, 2008.
8. [1]
9. [2]
10. "IBM Opens Latin America's First Mainframe Software Center". Enterprise Networks and Servers. August 2007.
11. "IBM Helps Clients Modernize Applications on the Mainframe". IBM. November 7, 2007.
12. "IBM 4Q2009 Financial Report: CFO's Prepared Remarks". IBM. January 19, 2010.
13. World's Top Supercomputer. Retrieved December 25, 2009.
14. Transaction Processing Performance Council. Retrieved December 25, 2009.

External links
IBM eServer zSeries mainframe servers
Univac 9400, a mainframe from the 1960s, still in use in a German computer museum
Lectures in the History of Computing: Mainframes
Articles and Tutorials at Mainframes360.com

Modem
From Wikipedia, the free encyclopedia. For other uses, see Modem (disambiguation).

A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio. The most familiar example is a voice-band modem that turns the digital data of a personal computer into modulated electrical signals in the voice frequency range of a telephone channel. These signals can be transmitted over telephone lines and demodulated by another modem at the receiver side to recover the digital data. Modems are generally classified by the amount of data they can send in a given unit of time, usually expressed in bits per second (bit/s, or bps). Modems can alternatively be classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying, that is, tones of different frequencies, with two possible frequencies corresponding to two distinct symbols (one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU V.22 standard, which could transmit and receive four distinct symbols (two bits per symbol), handled 1,200 bit/s by sending 600 symbols per second (600 baud) using phase-shift keying.

Contents: 1 History 1.1 The Carterfone decision 1.2 The Smartmodem and the rise of BBSes 1.3 Softmodem 2 Narrow-band/phone-line dialup modems 2.1 Increasing speeds (V.21, V.22, V.22bis) 2.2 Increasing speeds (one-way proprietary standards) 2.3 4,800 and 9,600 bit/s (V.27ter, V.32) 2.3.1 Error correction and compression 2.4 Breaking the 9.6k barrier 2.4.1 V.34/28.8k and 33.6k 2.4.2 V.61/V.70 Analog/Digital Simultaneous Voice and Data 2.5 Using digital lines and PCM (V.90/92) 2.6 Using compression to exceed 56k 2.6.1 Compression by the ISP 2.7 List of dialup speeds 3 Radio modems 3.1 WiFi and WiMax 4 Mobile modems and routers 5 Broadband 6 Home networking 7 Deep-space telecommunications 8 Voice modem 9 Popularity 10 See also 11 References 12 External links 12.1 Standards organizations and modem protocols 12.2 General modem info (drivers, chipsets, etc.) 12.3 Other

History
News wire services in the 1920s used multiplex equipment that met the definition of a modem, but the modem function was incidental to the multiplexing function, so they are not commonly included in the history of modems. Modems grew out of the need to connect teletype machines over ordinary phone lines instead of the more expensive leased lines which had previously been used for current-loop-based teleprinters and automated telegraphs.
In 1943, IBM adapted this technology to its unit record equipment and was able to transmit punched-card data at 25 bits per second. Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the U.S. and Canada. SAGE modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems. In the summer of 1960 the name Data-Phone was introduced to replace the earlier term digital subset. The 202 Data-Phone was a half-duplex asynchronous service that was marketed extensively in late 1960. In 1962, the 201A and 201B Data-Phones were introduced; they were synchronous modems using two-bit-per-baud phase-shift keying (PSK). The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full-duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels each running on its own pair of wires. The famous Bell 103A dataset standard was also introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the KSR33, the ASR33, and the IBM 2741. AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems.

The Carterfone decision
The Novation CAT acoustically coupled modem. For many years, the Bell System (AT&T) maintained a monopoly on the use of its phone lines, allowing only Bell-supplied devices to be attached to its network; before 1968, it controlled which devices could be electrically connected to its phone lines. This led to a market for 103A-compatible modems that were mechanically connected to the phone through the handset, known as acoustically coupled modems. Particularly common models from the 1970s were the Novation CAT and the Anderson-Jacobson, the latter spun off from an in-house project at Stanford Research Institute (now SRI International). Hush-a-Phone v. FCC was a seminal ruling in United States telecommunications law, decided by the D.C. Circuit Court of Appeals on November 8, 1956. The court found that it was within the FCC's authority to regulate the terms of use of AT&T's equipment; subsequently, the FCC examiner found that as long as a device was not physically attached to the phone line, it would not threaten to degrade the system. Later, in the Carterfone decision of 1968, the FCC passed a rule setting stringent AT&T-designed tests for electronically coupling a device to the phone lines. AT&T's tests were complex, making electronically coupled modems expensive, so acoustically coupled modems remained common into the early 1980s. In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full-duplex operation at 1,200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic.
It was similar in design to Vadic's model, but used the lower frequency set for transmission. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200 bit/s mode, AT&T's 212A mode, and 103A operation.

The Smartmodem and the rise of BBSes
US Robotics Sportster 14,400 fax modem (1994). The next major advance in modems was the Smartmodem, introduced in 1981 by Hayes Communications. The Smartmodem was an otherwise standard 103A 300 bit/s modem, but it was attached to a small controller that let the computer send commands to it and enabled it to operate the phone line. The command set included instructions for picking up and hanging up the phone, dialing numbers, and answering calls. The basic Hayes command set remains the basis for computer control of most modern modems. Prior to the Hayes Smartmodem, dial-up modems almost universally required a two-step process to activate a connection: first, the user had to manually dial the remote number on a standard phone handset, and then plug the handset into an acoustic coupler. Hardware add-ons, known simply as dialers, were used in special circumstances and generally operated by emulating someone dialing a handset. With the Smartmodem, the computer could dial the phone directly by sending the modem a command, eliminating the need for an associated phone instrument for dialing and the need for an acoustic coupler; the Smartmodem instead plugged directly into the phone line. This greatly simplified setup and operation, and terminal programs that maintained lists of phone numbers and sent the dialing commands became common. The Smartmodem and its clones also aided the spread of bulletin board systems (BBSs). Modems had typically been either call-only, acoustically coupled models used on the client side, or much more expensive answer-only models used on the server side. The Smartmodem could operate in either mode depending on the commands sent from the computer. There was now a low-cost server-side modem on the market, and BBSs flourished. Almost all modern modems can interoperate with fax machines. Digital faxes, introduced in the 1980s, are simply a particular image format sent over a high-speed (commonly 14.4 kbit/s) modem. Software running on the host computer can convert any image into fax format, which can then be sent using the modem. Such software was at one time an add-on, but has since become largely universal.

Softmodem
A PCI Winmodem/softmodem (left) next to a traditional ISA modem (right). Main article: Softmodem. A Winmodem or softmodem is a stripped-down modem that replaces tasks traditionally handled in hardware with software. In this case the modem is a simple interface designed to create voltage variations on the telephone line and to sample the line voltage levels (digital-to-analog and analog-to-digital converters). Softmodems are cheaper than traditional modems, since they have fewer hardware components. One downside is that the software generating and interpreting the modem tones is not simple (most of the protocols are complex), and the performance of the computer as a whole often suffers when it is being used.
For online gaming this can be a real concern. Another problem is the lack of portability: non-Windows operating systems (such as Linux) often do not have an equivalent driver to operate the modem.

Narrow-band/phone-line dialup modems
A typical modern modem contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is now often incorporated into a single chip, but the division remains conceptually. In operation the modem can be in one of two modes: data mode, in which data is sent to and from the computer over the phone line, and command mode, in which the modem listens to the data coming from the computer for commands and carries them out. A typical session consists of powering up the modem (often inside the computer itself), which automatically assumes command mode, then sending it the command for dialing a number. After the connection is established to the remote modem, the modem automatically goes into data mode, and the user can send and receive data. When the user is finished, the escape sequence, "+++" followed by a pause of about a second, may be sent to the modem to return it to command mode, and then a command (e.g. "ATH") is sent to hang up the phone. Note that on many modem controllers it is possible to issue commands to disable the escape sequence, so that it is not possible for data being exchanged to trigger the mode change inadvertently. The commands themselves are typically from the Hayes command set, although that term is somewhat misleading: the original Hayes commands were useful only for 300 bit/s operation and were then extended for Hayes's 1,200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets. The term Hayes command set is still used today, although modern command sets have three or four times the number of commands of the original standard.

Increasing speeds (V.21, V.22, V.22bis)
The 300 bit/s modems used audio frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can easily be sent over the phone lines. In the Bell 103 system, the originating modem sends 0s by playing a 1,070 Hz tone and 1s at 1,270 Hz, with the answering modem putting its 0s on 2,025 Hz and 1s on 2,225 Hz. These frequencies were chosen carefully: they lie in the range that suffers minimal distortion on the phone system, and they are not harmonics of each other. In the 1,200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies to those in the 300 bit/s systems, but slightly out of phase; by comparing the phase of the two signals, 1s and 0s could be recovered. Voiceband modems generally remained at 300 and 1,200 bit/s (V.21 and V.22) into the mid-1980s. A V.22bis 2,400 bit/s system similar in concept to the 1,200 bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards and 2,400 bit/s operation was becoming common. For more information on baud rates versus bit rates, see the companion article list of device bandwidths.
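To make the relationship between bits, tones, and symbol rate concrete, here is a minimal sketch of Bell 103-style FSK on the originate side: a 0 is sent as a 1,070 Hz tone and a 1 as a 1,270 Hz tone, one bit per symbol at 300 baud. The 8 kHz sample rate and the helper name fsk_modulate are arbitrary choices for this illustration, not anything specified by the standard.

```python
import math

# Bell 103 originate-side frequencies (Hz): 0 -> 1,070, 1 -> 1,270
MARK_SPACE = {0: 1070.0, 1: 1270.0}
BAUD = 300            # symbols per second; one bit per symbol, so 300 bit/s
SAMPLE_RATE = 8000    # assumed sample rate for this illustration

def fsk_modulate(bits):
    """Return a list of audio samples encoding `bits` as Bell 103-style FSK tones."""
    samples_per_bit = SAMPLE_RATE // BAUD   # ~26 samples per bit at 8 kHz (rounded)
    samples = []
    phase = 0.0
    for bit in bits:
        freq = MARK_SPACE[bit]
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            phase += step                    # continuous phase across bit boundaries
    return samples

# Example: modulate one ASCII character ('A' = 0x41), least significant bit first
bits = [(0x41 >> i) & 1 for i in range(8)]
audio = fsk_modulate(bits)
print(len(audio), "samples for 8 bits, i.e.", len(audio) / SAMPLE_RATE, "seconds")
```

Note that 300 baud does not divide 8,000 samples per second evenly; a real modem's timing is more careful, but the rounding is harmless for a demonstration of the principle.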
Increasing speeds (one-way proprietary standards)
Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the user's terminals spent the majority of their time receiving information; the modem in the Minitel terminal thus operated at 1,200 bit/s for reception and 75 bit/s for sending commands back to the servers. Three U.S. companies became famous for high-speed versions of the same concept. Telebit introduced its Trailblazer modem in 1984, which used a large number of 36 bit/s channels to send data one way at rates up to 18,432 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could change direction on the fly. The Trailblazer modems also supported a feature that allowed them to spoof the UUCP g protocol, commonly used on Unix systems to send e-mail, and thereby speed UUCP up tremendously. Trailblazers thus became extremely common on Unix systems and maintained their dominance in this market well into the 1990s. U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9,600 bit/s (in early versions at least) and provided a larger backchannel. Rather than offer spoofing, USR instead created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers. Hayes was forced to compete, and introduced its own 9,600 bit/s standard, Express 96 (also known as Ping-Pong), which was generally similar to Telebit's PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.

4,800 and 9,600 bit/s (V.27ter, V.32)
Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, which results in a small amount of the outgoing signal bouncing back. This signal could confuse the modem, which could not distinguish between the echo and the signal from the remote modem. This was why earlier modems split the signal frequencies into "answer" and "originate": each modem could then ignore its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of the available phone signal bandwidth still imposed a half-speed limit on modems. Echo cancellation got around this problem: measuring the echo delays and magnitudes allowed the modem to tell whether the received signal was from itself or from the remote modem, and to create an equal and opposite signal to cancel its own. Modems were then able to send over the whole frequency spectrum in both directions at the same time, leading to the development of 4,800 and 9,600 bit/s modems. Increases in speed have relied on increasingly sophisticated communications theory. The 1,200 and 2,400 bit/s modems used phase-shift keying (PSK), which could transmit two or three bits per symbol. The next major advance encoded four bits into a combination of amplitude and phase, known as quadrature amplitude modulation (QAM). The new V.27ter and V.32 standards were able to transmit 4 bits per symbol, at a rate of 1,200 or 2,400 baud, giving an effective bit rate of 4,800 or 9,600 bit/s. The carrier frequency was 1,650 Hz.
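As a rough illustration of the "amplitude plus phase" idea, the sketch below maps groups of 4 bits onto a generic square 16-point QAM constellation and then multiplies bits per symbol by the symbol rate to get the bit rate. The grid layout used here is chosen only for illustration; it is not the actual constellation defined by V.32.

```python
# Generic square 16-QAM: each 4-bit group selects one of 16 (amplitude, phase) points,
# represented here as complex numbers on a 4x4 grid. This is an illustrative layout,
# not the specific constellation used by V.32.
LEVELS = [-3, -1, 1, 3]

def qam16_map(bits):
    """Map a bit sequence (length a multiple of 4) to complex constellation points."""
    symbols = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        in_phase = LEVELS[b[0] * 2 + b[1]]      # first two bits pick the I level
        quadrature = LEVELS[b[2] * 2 + b[3]]    # last two bits pick the Q level
        symbols.append(complex(in_phase, quadrature))
    return symbols

bits = [1, 0, 1, 1, 0, 0, 1, 0]          # 8 bits -> 2 symbols
print(qam16_map(bits))                   # [(1+3j), (-3+1j)]

# Throughput: bits per symbol times symbols per second (baud)
bits_per_symbol = 4
baud = 2400
print(bits_per_symbol * baud, "bit/s")   # 9600
```

The point of the exercise is simply that each transmitted symbol carries several bits at once, so the bit rate can exceed the symbol rate without using more bandwidth.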
For many years, most engineers considered this rate to be the limit of data communications over telephone networks.

Error correction and compression
Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error-correction systems built into the modems, made most famous by Microcom's MNP systems. A string of MNP standards came out in the 1980s, each increasing the effective data rate by minimizing overhead, from about 75% of the theoretical maximum in MNP 1 to 95% in MNP 4. MNP 5 took this a step further, adding data compression to the system and thereby increasing the data rate above the modem's rating; generally the user could expect an MNP 5 modem to transfer at about 130% of the modem's nominal data rate. Details of MNP were later released, the protocols became popular on a series of 2,400 bit/s modems, and they ultimately led to the development of the ITU V.42 and V.42bis standards, which were incompatible with MNP but similar in concept: error correction and compression. Another common feature of these high-speed modems was the concept of fallback, or speed hunting, allowing them to talk to less-capable modems. During call initiation the modem would play a series of signals into the line and wait for the remote modem to respond to them; they would start at high speeds and progressively get slower until they heard an answer. Thus, two USR modems would be able to connect at 9,600 bit/s, but when a user with a 2,400 bit/s modem called in, the USR would fall back to the common 2,400 bit/s speed. This would also happen if a V.32 modem and an HST modem were connected: because they used different standards at 9,600 bit/s, they would fall back to their highest commonly supported standard, 2,400 bit/s. The same applied to a V.32bis modem and a 14,400 bit/s HST modem, which could still communicate with each other only at 2,400 bit/s.

Breaking the 9.6k barrier
In 1980, Gottfried Ungerboeck of the IBM Zurich Research Laboratory applied channel coding techniques to search for new ways to increase the speed of modems. His results were astonishing but conveyed only to a few colleagues.[1] Finally, in 1982, he agreed to publish what is now a landmark paper in the theory of information coding. By applying parity-check coding to the bits in each symbol, and mapping the encoded bits onto a two-dimensional diamond pattern, Ungerboeck showed that it was possible to double the speed with the same error rate. The new technique was called mapping by set partitions, now known as trellis modulation. Error-correcting codes encode code words (sets of bits) in such a way that they are far apart from each other, so that a word corrupted by a few errors is still closest to the original word (and not confused with another); this can be thought of as analogous to sphere packing, or packing pennies on a surface: the further two bit sequences are from one another, the easier it is to correct minor errors. The technique found its way into the 14,400 bit/s V.32bis standard, which was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method referred to as V.32ter (also known as V.32 terbo or tertiary), but neither non-standard modem sold well.
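The sphere-packing analogy above can be made concrete with a toy example: the minimum Hamming distance between valid code words determines how many flipped bits can be corrected by snapping a received word back to its nearest neighbour. The four 6-bit code words below are invented purely for illustration and have nothing to do with the actual codes used by V.32bis or trellis modulation.

```python
from itertools import combinations

def hamming(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# A toy codebook: four 6-bit code words kept deliberately far apart.
# This only illustrates the distance idea, not any real modem code.
codebook = ["000000", "001111", "110011", "111100"]

d_min = min(hamming(a, b) for a, b in combinations(codebook, 2))
print("minimum distance:", d_min)                  # 4
print("correctable errors:", (d_min - 1) // 2)     # 1

def decode(received):
    """Map a received word to the nearest valid code word."""
    return min(codebook, key=lambda c: hamming(c, received))

print(decode("001101"))   # one bit flipped from 001111 -> decodes back to 001111
```

The same intuition underlies trellis modulation: by spending some of the extra bits per symbol on redundancy, the valid signal points are kept far enough apart that line noise rarely pushes a received symbol closer to the wrong one.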
V.34/28.8k and 33.6k
An ISA modem manufactured to conform to the V.34 protocol. Any remaining interest in these non-standard systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting for it, several companies decided to release hardware early and introduced modems they referred to as "V.FAST". In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), the manufacturers were forced to use more flexible parts, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips. The ITU V.34 standard represents the culmination of these joint efforts. It employs the most powerful coding techniques of its day, including channel encoding and shape encoding. From the mere 4 bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing symbol rates from 2,400 to 3,429 baud, to create 14.4, 28.8, and 33.6 kbit/s rates.
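A quick back-of-the-envelope check of those figures: dividing the connection rate by the symbol rate gives the effective number of bits carried per symbol. The pairings below are illustrative, using the 9.6 kbit/s figure quoted above and two commonly cited V.34 operating points; V.34 actually negotiates among several symbol rates depending on line conditions.

```python
# Effective bits per symbol = bit rate / symbol rate (baud).
# Illustrative pairings based on the rates mentioned above; a real V.34 link
# adapts its symbol rate and constellation to the quality of the line.
examples = [
    ("V.32",  9600, 2400),
    ("V.34", 28800, 3200),
    ("V.34", 33600, 3429),
]

for name, bit_rate, baud in examples:
    print(f"{name}: {bit_rate} bit/s at {baud} baud "
          f"-> about {bit_rate / baud:.1f} bits per symbol")
# V.32: 9600 bit/s at 2400 baud -> about 4.0 bits per symbol
# V.34: 28800 bit/s at 3200 baud -> about 9.0 bits per symbol
# V.34: 33600 bit/s at 3429 baud -> about 9.8 bits per symbol
```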

Please read: A personal appeal from Ward Cunningham, inventor of the wiki Read now Modem From Wikipedia, the free encyclopedia For other uses, see Modem (disambiguation). This article needs additional citations for verification. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (October 2010) A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light emitting diodes to radio. The most familiar example is a voice band modem that turns the digital data of a personal computer into modulated electrical signals in the voice frequency range of a telephone channel. These signals can be transmitted over telephone lines and demodulated by another modem at the receiver side to recover the digital data. Modems are generally classified by the amount of data they can send in a given unit of time, usually expressed in bits per second (bit/s, or bps). Modems can alternatively be classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying, that is to say, tones of different frequencies, with two possible frequencies corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU V.22 standard, which was able to transmit and receive four distinct symbols (two bits per symbol), handled 1,200 bit/s by sending 600 symbols per second (600 baud) using phase shift keying.Contents [hide] 1 History 1.1 The Carterfone decision 1.2 The Smartmodem and the rise of BBSes 1.3 Softmodem 2 Narrow-band/phone-line dialup modems 2.1 Increasing speeds (V.21, V.22, V.22bis) 2.2 Increasing speeds (one-way proprietary standards) 2.3 4,800 and 9,600 bit/s (V.27ter, V.32) 2.3.1 Error correction and compression 2.4 Breaking the 9.6k barrier 2.4.1 V.34/28.8k and 33.6k 2.4.2 V.61/V.70 Analog/Digital Simultaneous Voice and Data 2.5 Using digital lines and PCM (V.90/92) 2.6 Using compression to exceed 56k 2.6.1 Compression by the ISP 2.7 List of dialup speeds 3 Radio modems 3.1 WiFi and WiMax 4 Mobile modems and routers 5 Broadband 6 Home networking 7 Deep-space telecommunications 8 Voice modem 9 Popularity 10 See also 11 References 12 External links 12.1 Standards organizations and modem protocols 12.2 General modem info (drivers, chipsets, etc.) 12.3 Other [edit] History News wire services in 1920s used multiplex equipment that met the definition, but the modem function was incidental to the multiplexing function, so they are not commonly included in the history of modems. TeleGuide terminal Modems grew out of the need to connect teletype machines over ordinary phone lines instead of more expensive leased lines which had previously been used for current loop-based teleprinters and automated telegraphs. 
In 1943, IBM adapted this technology to their unit record equipment and were able to transmit punched cards at 25 bits/second.[citation needed] Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the U.S. and Canada. SAGE modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems. In the summer of 1960[citation needed], the name Data-Phone was introduced to replace the earlier term digital subset. The 202 Data-Phone was a half-duplex asynchronous service that was marketed extensively in late 1960[citation needed]. In 1962[citation needed], the 201A and 201B Data-Phones were introduced. They were synchronous modems using two-bit-per-baud phase-shift keying (PSK). The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels running on their own set of two wires each. The famous Bell 103A dataset standard was also introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the KSR33, the ASR33, and the IBM 2741. AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems. [edit] The Carterfone decision The Novation CAT acoustically coupled modem For many years, the Bell System (AT&T) maintained a monopoly on the use of its phone lines, allowing only Bell-supplied devices to be attached to its network. Before 1968, AT&T maintained a monopoly on what devices could be electrically connected to its phone lines. This led to a market for 103A-compatible modems that were mechanically connected to the phone, through the handset, known as acoustically coupled modems. Particularly common models from the 1970s were the Novation CAT and the Anderson-Jacobson, spun off from an in-house project at Stanford Research Institute (now SRI International). Hush-a-Phone v. FCC was a seminal ruling in United States telecommunications law decided by the DC Circuit Court of Appeals on November 8, 1956. The District Court found that it was within the FCC's authority to regulate the terms of use of AT&T's equipment. Subsequently, the FCC examiner found that as long as the device was not physically attached it would not threaten to degenerate the system. Later, in the Carterfone decision of 1968, the FCC passed a rule setting stringent AT&T-designed tests for electronically coupling a device to the phone lines. AT&T's tests were complex, making electronically coupled modems expensive,[citation needed] so acoustically coupled modems remained common into the early 1980s. In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full duplex operation at 1,200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic. 
It was similar in design to Vadic's model, but used the lower frequency set for transmission. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation. [edit] The Smartmodem and the rise of BBSes US Robotics Sportster 14,400 Fax modem (1994) The next major advance in modems was the Smartmodem, introduced in 1981 by Hayes Communications. The Smartmodem was an otherwise standard 103A 300-bit/s modem, but was attached to a small controller that let the computer send commands to it and enable it to operate the phone line. The command set included instructions for picking up and hanging up the phone, dialing numbers, and answering calls. The basic Hayes command set remains the basis for computer control of most modern modems. Prior to the Hayes Smartmodem, dial-up modems almost universally required a two-step process to activate a connection: first, the user had to manually dial the remote number on a standard phone handset, and then secondly, plug the handset into an acoustic coupler. Hardware add-ons, known simply as dialers, were used in special circumstances, and generally operated by emulating someone dialing a handset. With the Smartmodem, the computer could dial the phone directly by sending the modem a command, thus eliminating the need for an associated phone instrument for dialing and the need for an acoustic coupler. The Smartmodem instead plugged directly into the phone line. This greatly simplified setup and operation. Terminal programs that maintained lists of phone numbers and sent the dialing commands became common. The Smartmodem and its clones also aided the spread of bulletin board systems (BBSs). Modems had previously been typically either the call-only, acoustically coupled models used on the client side, or the much more expensive, answer-only models used on the server side. The Smartmodem could operate in either mode depending on the commands sent from the computer. There was now a low-cost server-side modem on the market, and the BBSs flourished. Almost all modern modems can interoperate with fax machines. Digital faxes, introduced in the 1980s, are simply a particular image format sent over a high-speed (commonly 14.4 kbit/s) modem. Software running on the host computer can convert any image into fax-format, which can then be sent using the modem. Such software was at one time an add-on, but since has become largely universal. [edit] Softmodem A PCI Winmodem/softmodem (on the left) next to a traditional ISA modem (on the right). Main article: Softmodem A Winmodem or softmodem is a stripped-down modem that replaces tasks traditionally handled in hardware with software. In this case the modem is a simple interface designed to create voltage variations on the telephone line and to sample the line voltage levels (digital to analog and analog to digital converters). Softmodems are cheaper than traditional modems, since they have fewer hardware components. One downside is that the software generating and interpreting the modem tones is not simple (as most of the protocols are complex), and the performance of the computer as a whole often suffers when it is being used. 
For online gaming this can be a real concern. Another problem is lack of portability such that non-Windows operating systems (such as Linux) often do not have an equivalent driver to operate the modem. [edit] Narrow-band/phone-line dialup modems A standard modem of today contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is often incorporated into a single chip nowadays, but the division remains in theory. In operation the modem can be in one of two modes, data mode in which data is sent to and from the computer over the phone lines, and command mode in which the modem listens to the data from the computer for commands, and carries them out. A typical session consists of powering up the modem (often inside the computer itself) which automatically assumes command mode, then sending it the command for dialing a number. After the connection is established to the remote modem, the modem automatically goes into data mode, and the user can send and receive data. When the user is finished, the escape sequence, "+++" followed by a pause of about a second, may be sent to the modem to return it to command mode, then a command (e.g. "ATH") to hang up the phone is sent. Note that on many modem controllers it is possible to issue commands to disable the escape sequence so that it is not possible for data being exchanged to trigger the mode change inadvertently. The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful for 300 bit/s operation only, and then extended for their 1,200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets. We call this the Hayes command set even today, although it has three or four times the numbers of commands as the actual standard. [edit] Increasing speeds (V.21, V.22, V.22bis) The 300 bit/s modems used audio frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system the originating modem sends 0s by playing a 1,070 Hz tone, and 1s at 1,270 Hz, with the answering modem putting its 0s on 2,025 Hz and 1s on 2,225 Hz. These frequencies were chosen carefully, they are in the range that suffer minimum distortion on the phone system, and also are not harmonics of each other. In the 1,200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at the similar frequencies as in the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, 1s and 0s could be pulled back out, Voiceband modems generally remained at 300 and 1,200 bit/s (V.21 and V.22) into the mid 1980s. A V.22bis 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards and 2,400-bit/s operation was becoming common. For more information on baud rates versus bit rates, see the companion article list of device bandwidths. 
Increasing speeds (one-way proprietary standards)

Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the users' terminals spent the majority of their time receiving information. The modem in the Minitel terminal thus operated at 1,200 bit/s for reception and 75 bit/s for sending commands back to the servers.

Three U.S. companies became famous for high-speed versions of the same concept. Telebit introduced its Trailblazer modem in 1984, which used a large number of 36 bit/s channels to send data one way at rates up to 18,432 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could change direction on the fly. The Trailblazer modems also supported a feature that allowed them to spoof the UUCP g protocol, commonly used on Unix systems to send e-mail, thereby speeding UUCP up tremendously. Trailblazers thus became extremely common on Unix systems and maintained their dominance in this market well into the 1990s.

U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9,600 bit/s (in early versions at least) and provided a larger backchannel. Rather than offer spoofing, USR created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers. Hayes was forced to compete, and introduced its own 9,600-bit/s standard, Express 96 (also known as Ping-Pong), which was generally similar to Telebit's PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.

4,800 and 9,600 bit/s (V.27ter, V.32)

Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, which results in a small amount of the outgoing signal bouncing back. This signal can confuse the modem, which cannot easily distinguish between the echo and the signal from the remote modem. This was why earlier modems split the signal frequencies into "answer" and "originate": each modem could then simply ignore its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of the available phone signal bandwidth still imposed a half-speed limit on modems.

Echo cancellation got around this problem. Measuring the echo delays and magnitudes allowed the modem to tell whether the received signal was from itself or from the remote modem, and to create an equal and opposite signal to cancel its own. Modems were then able to send over the whole frequency spectrum in both directions at the same time, leading to the development of 4,800 and 9,600 bit/s modems.

Increases in speed have used increasingly complicated communications theory. The 1,200 and 2,400 bit/s modems used the phase-shift keying (PSK) concept, which could transmit two or three bits per symbol. The next major advance encoded four bits into a combination of amplitude and phase, known as quadrature amplitude modulation (QAM). The new V.27ter and V.32 standards were able to transmit 4 bits per symbol at a rate of 1,200 or 2,400 baud, giving an effective bit rate of 4,800 or 9,600 bit/s. The carrier frequency was 1,650 Hz. For many years, most engineers considered this rate to be the limit of data communications over telephone networks.
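The echo-cancellation idea described above lends itself to a small numerical sketch. The Python fragment below is illustrative only and is not any standard's actual algorithm: it uses a least-mean-squares adaptive filter to learn a made-up echo path and subtract the modem's own transmitted symbols from the received signal. The filter length, step size, and echo coefficients are all assumptions chosen for the example.

# Toy LMS echo canceller: learn the echo of our own signal, subtract it,
# and what remains is (mostly) the far-end modem's signal.
import random

TAPS = 8                                         # length of the adaptive filter
MU = 0.01                                        # adaptation step size
echo_path = [0.5, -0.2, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0]   # assumed line echo

weights = [0.0] * TAPS
history = [0.0] * TAPS

random.seed(0)
for _ in range(5000):
    tx = random.choice((-1.0, 1.0))              # our own transmitted symbol
    history = [tx] + history[:-1]
    echo = sum(h * e for h, e in zip(history, echo_path))
    far_end = random.choice((-1.0, 1.0)) * 0.3   # signal from the remote modem
    received = echo + far_end
    estimate = sum(h * w for h, w in zip(history, weights))
    error = received - estimate                  # echo removed: mostly the far-end signal
    weights = [w + MU * error * h for w, h in zip(weights, history)]

# The learned coefficients approximately recover the assumed echo path.
print("learned echo path:", [round(w, 2) for w in weights])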
Error correction and compression

Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error-correction systems built into the modems, made most famous by Microcom's MNP systems. A string of MNP standards came out in the 1980s, each increasing the effective data rate by minimizing overhead, from about 75% of the theoretical maximum in MNP 1 to 95% in MNP 4. MNP 5 took this a step further, adding data compression to the system and thereby increasing the data rate above the modem's rating; generally the user could expect an MNP 5 modem to transfer at about 130% of the modem's nominal data rate. Details of MNP were later released and became popular on a series of 2,400-bit/s modems, and ultimately led to the development of the V.42 and V.42bis ITU standards. V.42 and V.42bis were not compatible with MNP but were similar in concept: error correction and compression.

Another common feature of these high-speed modems was the concept of fallback, or speed hunting, allowing them to talk to less-capable modems. During call initiation the modem would play a series of signals into the line and wait for the remote modem to respond to them. They would start at high speeds and progressively get slower until they heard an answer. Thus, two USR modems would be able to connect at 9,600 bit/s, but when a user with a 2,400-bit/s modem called in, the USR would fall back to the common 2,400-bit/s speed. The same would happen if a V.32 modem and an HST modem were connected: because they used different standards at 9,600 bit/s, they would fall back to their highest commonly supported standard, 2,400 bit/s. Likewise, a V.32bis modem and a 14,400 bit/s HST modem would still only be able to communicate with each other at 2,400 bit/s.

Breaking the 9.6k barrier

In 1980, Gottfried Ungerboeck of the IBM Zurich Research Laboratory applied channel coding techniques to search for new ways to increase the speed of modems. His results were astonishing but conveyed only to a few colleagues.[1] Finally, in 1982, he agreed to publish what is now a landmark paper in the theory of information coding.[citation needed] By applying parity-check coding to the bits in each symbol, and mapping the encoded bits onto a two-dimensional diamond pattern, Ungerboeck showed that it was possible to double the speed with the same error rate. The new technique was called mapping by set partitions, and is now known as trellis modulation.

Error-correcting codes encode code words (sets of bits) in such a way that they are far from each other, so that in case of error the received word is still closest to the original word and is not confused with another. The idea can be thought of as analogous to sphere packing, or to packing pennies on a surface: the farther apart two bit sequences are, the easier it is to correct minor errors.

V.32bis was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method it referred to as V.32ter (also known as V.32 terbo or tertiary), but neither non-standard modem sold well.
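The codeword-spacing idea above can be made concrete with a toy decoder. In the Python sketch below, the four five-bit code words form a hypothetical code (not one used by any modem standard) with minimum Hamming distance 3, so a single flipped bit still leaves the received word closest to the code word that was actually sent.

# Toy nearest-codeword decoding: widely spaced code words survive more bit errors.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

codebook = ["00000", "01011", "10101", "11110"]   # minimum pairwise distance is 3
received = "01010"                                # "01011" with its last bit flipped

decoded = min(codebook, key=lambda word: hamming(word, received))
print(decoded)                                    # -> 01011: the single error is corrected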
V.34/28.8k and 33.6k

(Image: An ISA modem manufactured to conform to the V.34 protocol.)

Any interest in these systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware early and introduced modems they referred to as V.FAST. In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), the manufacturers were forced to use more flexible parts, generally a DSP and a microcontroller, as opposed to purpose-designed ASIC modem chips.

Today, the ITU standard V.34 represents the culmination of these joint efforts. It employs the most powerful coding techniques, including channel encoding and shape encoding. From a mere 4 bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s connection rates.
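Those headline rates follow directly from the symbol rate and the bits carried per symbol. The Python lines below are a rough back-of-the-envelope check using the figures quoted above; the intermediate 3,200-baud value and the fractional bits per symbol are approximations chosen for the example, since the real standard's mapping does not work out to a whole number of bits per symbol.

# Rough check: bit rate is approximately symbol (baud) rate x bits per symbol.
def approx_bit_rate(baud, bits_per_symbol):
    return baud * bits_per_symbol

print(approx_bit_rate(2400, 6))       # 14400 bit/s -> the 14.4k rate
print(approx_bit_rate(3200, 9))       # 28800 bit/s -> the 28.8k rate
print(approx_bit_rate(3429, 9.8))     # ~33600 bit/s -> roughly the 33.6k rate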

Server (computing)

From Wikipedia, the free encyclopedia. For other uses, see Server (disambiguation).

(Image: Servers in a data center. Several servers are mounted on a rack and connected to a display.)
(Image: A rack-mountable server with the top cover removed to reveal the internals.)

In the context of client-server architecture, a server is a computer program running to serve the requests of other programs, the "clients". Thus, the "server" performs some computational task on behalf of the "clients". The clients either run on the same computer or connect through the network.

In the most common use, a server is a physical computer (a hardware system) dedicated to running one or more such services (as a host),[1] to serve the needs of the users of the other computers on the network. Depending on the computing service that it offers, it could be a database server, file server, mail server, print server, web server, or other. In the context of Internet Protocol (IP) networking, a server is a program that operates as a socket listener.[2]

Servers often provide essential services across a network, either to private users inside a large organization or to public users via the Internet. For example, when you enter a query in a search engine, the query is sent from your computer over the Internet to the servers that store the relevant web pages, and the results are sent back by the server to your computer.

Contents: 1 Usage 2 Server hardware 3 Server operating systems 4 Types of servers 5 Energy consumption of servers 6 Size classes 7 See also 8 References 9 External links

Usage

The term server is used quite broadly in information technology. Despite the many server-branded products available (such as server versions of hardware, software or operating systems), in theory any computerized process that shares a resource with one or more client processes is a server. To illustrate this, take the common example of file sharing. While the existence of files on a machine does not classify it as a server, the mechanism by which the operating system shares these files with clients is the server.

Similarly, consider a web server application such as the multiplatform Apache HTTP Server. This web server software can be run on any capable computer. For example, while a laptop or personal computer is not typically known as a server, it can in these situations fulfil the role of one and hence be labelled as one: it is the machine's purpose as a web server that classifies it as a server.

In the hardware sense, the word server typically designates computer models intended for hosting software applications under the heavy demand of a network environment. In this client-server configuration, one or more machines, either a computer or a computer appliance, share information with each other, with one acting as a host for the other(s). While nearly any personal computer is capable of acting as a network server, a dedicated server will contain features making it more suitable for production environments. These features may include a faster CPU, more high-performance RAM, and typically more than one large hard drive. More obvious distinctions include marked redundancy in power supplies, network connections, and even the servers themselves.
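To make the "socket listener" sense of the word concrete, here is a minimal sketch in Python. The loopback address, port 9090, and the echoed greeting are arbitrary choices for the example; a production server would handle many clients concurrently and speak a real protocol.

# Minimal TCP socket listener: one process waits on a port and answers each client.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 9090))
    listener.listen()
    print("serving on 127.0.0.1:9090 (Ctrl+C to stop)")
    while True:
        client, address = listener.accept()       # block until a client connects
        with client:
            request = client.recv(1024)           # read the client's request bytes
            client.sendall(b"hello from the server: " + request)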
Between the 1990s and 2000s, an increase in the use of dedicated hardware saw the advent of self-contained server appliances. One well-known product is the Google Search Appliance, a unit that combines hardware and software in an out-of-the-box package. Simpler examples of such appliances include switches, routers, gateways, and print servers, all of which are available in a near plug-and-play configuration.

Modern operating systems such as Microsoft Windows or Linux distributions appear designed with a client-server architecture in mind. These operating systems attempt to abstract the hardware, allowing a wide variety of software to work with components of the computer. In a sense, the operating system can be seen as serving hardware to the software, which in all but low-level programming languages must interact with it through an API. These operating systems may be able to run programs in the background, called services or daemons. Such programs, like the aforementioned Apache HTTP Server software, may wait in a sleep state until their services are required. Since any software that provides services can be called a server, modern personal computers can be seen as a forest of servers and clients operating in parallel.

The Internet itself is also a forest of servers and clients. Merely requesting a web page from a few kilometers away involves satisfying a stack of protocols that involve many examples of hardware and software servers, among them the routers, modems, domain name servers, and various other servers necessary to provide the World Wide Web.

Server hardware

(Image: A server rack seen from the rear.)

Hardware requirements for servers vary, depending on the server application. Absolute CPU speed is not usually as critical to a server as it is to a desktop machine.[citation needed] A server's duty to provide service to many users over a network leads to different requirements, such as fast network connections and high I/O throughput. Since servers are usually accessed over a network, they may run in headless mode without a monitor or input device, and processes that are not needed for the server's function are not run. Many servers do not have a graphical user interface (GUI), as it is unnecessary and consumes resources that could be allocated elsewhere. Similarly, audio and USB interfaces may be omitted.

Servers often run for long periods without interruption, and availability must often be very high, making hardware reliability and durability extremely important. Although servers can be built from commodity computer parts, mission-critical enterprise servers are ideally very fault-tolerant and use specialized hardware with low failure rates in order to maximize uptime, for even a short-term failure can cost more than purchasing and installing the system. For example, it may take only a few minutes of downtime at a national stock exchange to justify the expense of entirely replacing the system with something more reliable. Servers may incorporate faster, higher-capacity hard drives, larger computer fans or water cooling to help remove heat, and uninterruptible power supplies that ensure the servers continue to function in the event of a power failure. These components offer higher performance and reliability at a correspondingly higher price. Hardware redundancy, that is, installing more than one instance of modules such as power supplies and hard disks arranged so that if one fails another is automatically available, is widely used.
ECC memory, which detects and corrects errors, is used; non-ECC memory is more likely to cause data corruption.[citation needed] To increase reliability, most servers use memory with error detection and correction, redundant disks, redundant power supplies and so on. Such components are also frequently hot-swappable, allowing technicians to replace them on the running server without shutting it down. To prevent overheating, servers often have more powerful fans.

As servers are usually administered by qualified engineers, their operating systems are also tuned more for stability and performance than for user friendliness and ease of use, with Linux taking a noticeably larger share than it does on desktop computers.[citation needed]

As servers need a stable power supply, good Internet access and increased security, and are also noisy, it is usual to house them in dedicated server centers or special rooms. This requires limiting power consumption, as the extra energy used generates more heat and can cause the temperature in the room to exceed acceptable limits; hence server rooms are normally equipped with air conditioning. Server cases are usually flat and wide, adapted to storing many devices next to each other in a server rack. Unlike ordinary computers, servers can usually be configured, powered up and down, or rebooted remotely, using out-of-band management.

Many servers take a long time for the hardware to start up and load the operating system. Servers often do extensive pre-boot memory testing and verification, and start up remote management services. The hard drive controllers then start up banks of drives sequentially, rather than all at once, so as not to overload the power supply with startup surges, and afterwards they initiate RAID system pre-checks to confirm correct operation of the redundancy. It is common for a machine to take several minutes to start up, but it may not need restarting for months or years.

Server operating systems

Server-oriented operating systems tend to have certain features in common that make them more suitable for the server environment, such as: a GUI that is optional or not available at all; the ability to reconfigure and update both hardware and software to some extent without restarting; advanced backup facilities to permit regular and frequent online backups of critical data; transparent data transfer between different volumes or devices; flexible and advanced networking capabilities; automation capabilities such as daemons in UNIX and services in Windows; and tight system security, with advanced user, resource, data, and memory protection. Server-oriented operating systems can, in many cases, interact with hardware sensors to detect conditions such as overheating or processor and disk failure, and consequently alert an operator or take remedial measures themselves.

Because servers must supply a restricted range of services to perhaps many users, while a desktop computer must carry out a wide range of functions required by its user, the requirements of an operating system for a server are different from those of a desktop machine. While it is possible for an operating system to make a machine both provide services and respond quickly to the requirements of a user, it is usual to use different operating systems on servers and desktop machines. Some operating systems are supplied in both server and desktop versions with a similar user interface.
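As a small illustration of the sensor-monitoring behaviour mentioned above, the Python sketch below polls a temperature reading and raises an alert when a threshold is crossed. The read_cpu_temperature() helper, the 85 degree limit, and the polling interval are all hypothetical; a real server operating system would read such values through IPMI, lm-sensors, or a vendor agent rather than this toy loop.

# Toy hardware-monitoring loop: poll a sensor, alert if a limit is exceeded.
import time

TEMP_LIMIT_C = 85.0

def read_cpu_temperature():
    # Hypothetical stand-in: a real implementation would query a hardware sensor.
    return 72.5

def monitor(poll_seconds=30, max_polls=3):
    for _ in range(max_polls):
        temperature = read_cpu_temperature()
        if temperature > TEMP_LIMIT_C:
            print(f"ALERT: CPU at {temperature:.1f} C exceeds {TEMP_LIMIT_C} C")
        else:
            print(f"ok: CPU at {temperature:.1f} C")
        time.sleep(poll_seconds)

monitor(poll_seconds=1)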
Windows and Mac OS X server operating systems are deployed on a minority of servers, as are other proprietary mainframe operating systems such as z/OS. The dominant operating systems among servers are UNIX-based or open-source kernel distributions, such as Linux (the kernel).[citation needed]

The rise of the microprocessor-based server was facilitated by the development of Unix to run on the x86 microprocessor architecture. The Microsoft Windows family of operating systems also runs on x86 hardware and, since Windows NT, has been available in versions suitable for server use. While the roles of server and desktop operating systems remain distinct, improvements in the reliability of both hardware and operating systems have blurred the distinction between the two classes. Today, many desktop and server operating systems share similar code bases, differing mostly in configuration. The shift towards web applications and middleware platforms has also lessened the demand for specialist application servers.

Types of servers

In a general network environment the following types of servers may be found:
Application server, a server dedicated to running certain software applications
Catalog server, a central search point for information across a distributed network
Communications server, a carrier-grade computing platform for communications networks
Database server, provides database services to other computer programs or computers
Fax server, provides fax services for clients
File server, provides file services
Game server, a server that video game clients connect to in order to play online together
Home server, a server for the home
Name server or DNS server
Print server, provides printer services
Proxy server, acts as an intermediary for requests from clients seeking resources from other servers
Sound server, provides multimedia broadcasting and streaming
Standalone server, an emulator for client-server (web-based) programs
Web server, a server that HTTP clients connect to in order to send commands and receive responses along with data contents

Almost the entire structure of the Internet is based upon a client-server model. High-level root nameservers, DNS servers, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world and supporting, among other things:
World Wide Web
Domain Name System
E-mail
FTP file transfer
Chat and instant messaging
Voice communication
Streaming audio and video
Online gaming
Database servers

Virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are also technologies that operate on an inter-server level. Other services do not use dedicated servers, for example peer-to-peer file sharing, some implementations of telephony (e.g. Skype), and supplying television programs to several users (e.g. Kontiki, SlingBox).

Energy consumption of servers

In 2010, servers were responsible for 2.5% of energy consumption in the United States. A further 2.5% of United States energy consumption was used by the cooling systems required to cool the servers. In 2010 it was estimated that, if current trends continued, by 2020 servers would use more of the world's energy than air travel.[3]

Size classes

Sizes include rack servers, tower servers, miniature (home) servers, and mini rack servers.

See also

Server definitions, Home server, File server, Print server, Media server
References

1. "What is a server?"
2. Comer, Douglas E.; Stevens, David L. (1993). Internetworking with TCP/IP, Vol. III: Client-Server Programming and Applications. Department of Computer Sciences, Purdue University, West Lafayette, IN: Prentice Hall. pp. 11d. ISBN 0134742222.
3. "ARM chief calls for low-drain wireless". The Inquirer. 29 June 2010. Retrieved 30 June 2010.

External links

Google's first server, now held at the Computer History Museum

Video card

From Wikipedia, the free encyclopedia. (Redirected from Graphics card)

A video card connects to the motherboard via one of: ISA, MCA, VLB, PCI, AGP, PCI-X, PCI Express, or others; and to the display via one of: VGA connector, Digital Visual Interface, composite video, S-Video, component video, HDMI, DMS-59, DisplayPort, or others.

A video card, display card, graphics card, or graphics adapter is an expansion card which generates output images to a display. Most video cards offer added functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors (multi-monitor). Other modern high-performance video cards are used for more graphically demanding purposes, such as PC games.

Video hardware is often integrated into the motherboard; however, all modern motherboards provide expansion ports to which a video card can be attached. In this configuration it is sometimes referred to as a video controller or graphics controller. Modern low-end to mid-range motherboards often include a graphics chipset manufactured by the developer of the northbridge (e.g. an nForce chipset with Nvidia graphics or an Intel chipset with Intel graphics) on the motherboard. This graphics chip usually has a small quantity of embedded memory and takes some of the system's main RAM, reducing the total RAM available. This is usually called integrated graphics or on-board graphics, and is low-performance and undesirable for those wishing to run 3D applications. A dedicated graphics card, on the other hand, has its own RAM and processor specifically for processing video images, and thus offloads this work from the CPU and system RAM. Almost all of these motherboards allow the integrated graphics chip to be disabled in the BIOS, and have an AGP, PCI, or PCI Express slot for adding a higher-performance graphics card in place of the integrated graphics.

Contents: 1 Components (1.1 Graphics Processing Unit, 1.2 Video BIOS, 1.3 Video memory, 1.4 RAMDAC, 1.5 Outputs, 1.6 Motherboard interface, 1.7 Power demand) 2 See also 3 References 4 External links

Components

A modern video card consists of a printed circuit board on which the components are mounted. These include:

Graphics Processing Unit

Main article: Graphics processing unit

A GPU is a dedicated processor optimized for accelerating graphics. The processor is designed specifically to perform floating-point calculations, which are fundamental to 3D graphics rendering and 2D picture drawing. The main attributes of the GPU are the core clock frequency, which typically ranges from 250 MHz to 4 GHz, and the number of pipelines (vertex and fragment shaders), which translate a 3D image characterized by vertices and lines into a 2D image formed by pixels. Modern GPUs are massively parallel and fully programmable. Their computing power is orders of magnitude higher than that of CPUs; as a consequence, they challenge CPUs in high-performance computing and put pressure on processor manufacturers.
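As a toy illustration of the vertex-to-pixel translation mentioned above, the Python sketch below applies a simple pinhole perspective projection to a few 3D vertices and maps the result onto a hypothetical 640x480 pixel grid. Real GPU pipelines do this with matrices, clipping, and rasterization across thousands of shader cores, so this shows only the geometric core of the idea; the focal length and screen size are arbitrary assumptions.

# Toy perspective projection: world-space vertices -> 2D pixel coordinates.
WIDTH, HEIGHT = 640, 480
FOCAL = 400.0                            # distance to the projection plane, in pixel units

def project(vertex):
    x, y, z = vertex                     # camera looks down +z; z must be > 0
    screen_x = FOCAL * x / z             # similar triangles: scale x by focal/z
    screen_y = FOCAL * y / z
    pixel_x = int(WIDTH / 2 + screen_x)  # shift origin to the screen centre
    pixel_y = int(HEIGHT / 2 - screen_y) # flip y: screen y grows downwards
    return pixel_x, pixel_y

triangle = [(-1.0, -1.0, 5.0), (1.0, -1.0, 5.0), (0.0, 1.0, 4.0)]
for vertex in triangle:
    print(vertex, "->", project(vertex))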
Video BIOS

The video BIOS or firmware contains the basic program, usually hidden, that governs the video card's operations and provides the instructions that allow the computer and software to interact with the card. It may contain information on the memory timing, on the operating speeds and voltages of the graphics processor and RAM, and other information. It is sometimes possible to change the BIOS (e.g. to enable factory-locked settings for higher performance), although this is typically done only by video card overclockers and has the potential to irreversibly damage the card.

Video memory

Type: Memory clock rate (MHz) | Bandwidth (GB/s)
DDR: 166 - 950 | 1.2 - 30.4
DDR2: 533 - 1000 | 8.5 - 16
GDDR3: 700 - 2400 | 5.6 - 156.6
GDDR4: 2000 - 3600 | 128 - 200
GDDR5: 900 - 5600 | 130 - 230

The memory capacity of most modern video cards ranges from 128 MB to 8 GB.[1][2] Since video memory needs to be accessed by both the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4 and GDDR5. The effective memory clock rate in modern cards is generally between 400 MHz and 3.8 GHz. Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.

RAMDAC

The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs, such as a CRT display. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work above 75 Hz and never below 60 Hz, in order to minimize flicker.[3] (With LCD displays, flicker is not a problem.) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCDs, plasma displays and TVs work in the digital domain and do not require a RAMDAC. A few remaining legacy LCD and plasma displays feature only analog inputs (VGA, component, SCART, etc.); these require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
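To give a feel for the refresh-rate limits mentioned above, the Python lines below compute the minimum pixel rate a RAMDAC (or any display engine) must sustain for a given resolution and refresh rate. This is only a lower bound: real video timings add blanking intervals that raise the required pixel clock noticeably, and the modes chosen are just examples.

# Minimum pixel rate = visible pixels per frame x frames per second.
# Real display modes add blanking overhead on top of this lower bound.
def minimum_pixel_rate_mhz(width, height, refresh_hz):
    return width * height * refresh_hz / 1e6

for width, height, refresh in ((1024, 768, 60), (1024, 768, 85), (1600, 1200, 75)):
    rate = minimum_pixel_rate_mhz(width, height, refresh)
    print(f"{width}x{height} at {refresh} Hz needs at least ~{rate:.1f} MHz")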
Outputs

(Image: Video In Video Out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for high-definition television (HDTV), and DE-15 for Video Graphics Array (VGA).)

The most common connection systems between the video card and the computer display are:

Video Graphics Array (VGA) (DE-15): an analog-based standard adopted in the late 1980s and designed for CRT displays, also called the VGA connector. Some problems of this standard are electrical noise, image distortion, and sampling error in evaluating pixels.

Digital Visual Interface (DVI): a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. In some rare cases, high-end CRT monitors also use DVI. It avoids image distortion and electrical noise, mapping each pixel from the computer to a display pixel at the display's native resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via a simple adapter) standard RGB signal output to an older CRT or LCD monitor with a VGA input.

Video In Video Out (VIVO) for S-Video, composite video and component video: a 9-pin mini-DIN connector, frequently used for VIVO connections, included to allow connection to televisions, DVD players, video recorders and video game consoles. These often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either four connectors (S-Video in and out plus composite video in and out) or six connectors (S-Video in and out, component PB out, component PR out, component Y out [also composite out], and composite in).

High-Definition Multimedia Interface (HDMI): an advanced digital audio/video interconnect released in 2003, commonly used to connect game consoles and DVD players to a display. HDMI supports copy protection through HDCP.

DisplayPort: an advanced license- and royalty-free digital audio/video interconnect released in 2007. DisplayPort is intended to replace VGA and DVI for connecting a display to a computer.

Other types of connection systems include composite video, an analog system with lower resolution that uses the RCA connector; component video, which uses three cables, each with an RCA connector (YCBCR for digital component, or YPBPR for analog component), and is used in projectors, DVD players and some televisions; DB13W3, an analog standard once used by Sun Microsystems, SGI and IBM; and DMS-59, a connector that provides two DVI or VGA outputs on a single connector.

Motherboard interface

Main articles: Bus (computing) and Expansion card

Chronologically, the main connection systems between the video card and the motherboard were:

S-100 bus: designed in 1974 as part of the Altair 8800, it was the first industry-standard bus for the microcomputer industry.
ISA: introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It was an 8- or 16-bit bus clocked at 8 MHz.
NuBus: used in the Macintosh II, it was a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: introduced in 1987 by IBM, it was a 32-bit bus clocked at 10 MHz.
EISA: released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It was a 32-bit bus clocked at 8.33 MHz.
VLB: an extension of ISA, it was a 32-bit bus clocked at 33 MHz.
PCI: replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding manual jumper adjustments. It is a 32-bit bus clocked at 33 MHz.
UPA: an interconnect bus architecture introduced by Sun Microsystems in 1995. It had a 64-bit bus clocked at 67 or 83 MHz.
USB: although mostly used for miscellaneous devices, such as secondary storage devices and toys, USB displays and display adapters exist.
AGP: first used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: an extension of the PCI bus, introduced in 1998. It improves upon PCI by extending the width of the bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: abbreviated PCIe, it is a point-to-point interface released in 2004. In 2006 it provided double the data-transfer rate of AGP.
It should not be confused with PCI-X, an enhanced version of the original PCI specification.

The following table[4] compares a selection of the features of some of those interfaces:

Bus: Width (bits) | Clock rate (MHz) | Bandwidth (MB/s) | Style
ISA XT: 8 | 4.77 | 8 | Parallel
ISA AT: 16 | 8.33 | 16 | Parallel
MCA: 32 | 10 | 20 | Parallel
NuBus: 32 | 10 | 10-40 | Parallel
EISA: 32 | 8.33 | 32 | Parallel
VESA: 32 | 40 | 160 | Parallel
PCI: 32-64 | 33-100 | 132-800 | Parallel
AGP 1x: 32 | 66 | 264 | Parallel
AGP 2x: 32 | 66 | 528 | Parallel
AGP 4x: 32 | 66 | 1000 | Parallel
AGP 8x: 32 | 66 | 2000 | Parallel
PCIe x1: 1 lane | 2500 / 5000 | 250 / 500 | Serial
PCIe x4: 1 x 4 lanes | 2500 / 5000 | 1000 / 2000 | Serial
PCIe x8: 1 x 8 lanes | 2500 / 5000 | 2000 / 4000 | Serial
PCIe x16: 1 x 16 lanes | 2500 / 5000 | 4000 / 8000 | Serial
PCIe x16 2.0: 1 x 16 lanes | 5000 / 10000 | 8000 / 16000 | Serial
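The parallel-bus rows in the table can be roughly reproduced from width and clock alone, and the PCIe rows from the per-lane signalling rate; the Python lines below show the arithmetic. The 8b/10b encoding factor applies to PCIe 1.x and 2.0, but treat the results as ballpark figures, since real buses lose further bandwidth to protocol overhead.

# Peak bandwidth of a parallel bus: width (bits) x clock (MHz) / 8 bits per byte,
# times the number of transfers per clock (AGP 2x/4x/8x move data more than once per cycle).
def parallel_mb_s(width_bits, clock_mhz, transfers_per_clock=1):
    return width_bits * clock_mhz * transfers_per_clock / 8

# PCIe is serial: lanes x per-lane rate (MT/s) x 8b/10b efficiency / 8 bits per byte.
def pcie_mb_s(lanes, mtransfers_per_s):
    return lanes * mtransfers_per_s * (8 / 10) / 8

print(parallel_mb_s(32, 33))        # PCI:          ~132 MB/s
print(parallel_mb_s(32, 66))        # AGP 1x:       ~264 MB/s
print(parallel_mb_s(32, 66, 8))     # AGP 8x:       ~2112 MB/s (table rounds to ~2000)
print(pcie_mb_s(16, 2500))          # PCIe 1.x x16: ~4000 MB/s
print(pcie_mb_s(16, 5000))          # PCIe 2.0 x16: ~8000 MB/s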
Power demand

As the processing power of video cards has increased, so has their demand for electrical power. Current high-performance video cards tend to consume a great deal of power. While CPU and power supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the video card may be the biggest electricity user in a computer.[5][6] Although power supplies have increased their output as well, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts.[7] Modern video cards with a power consumption of over 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) sockets that connect directly to the power supply.

See also

ATI, NVIDIA: duopoly of 3D chip GPU and graphics card designers
ATI Crossfire: ATI's proprietary mechanism for scaling graphics performance
Computer display standards: detailed list of standards like SVGA, WXGA, WUXGA, etc.
Feature connector
GeForce, Radeon: examples of GPUs
GPGPU (e.g. CUDA, AMD FireStream)
Graphics hardware and FOSS
Framebuffer: the computer memory used to store a screen image
Mini-DIN connector
List of video card manufacturers
Scalable Link Interface: NVIDIA's proprietary mechanism for scaling graphics performance
Texture mapping: a means of adding image details to a 3D scene
Video In Video Out (VIVO)
Z-buffering: a means of determining visibility

References

1. ATI FireGL V8650.
2. NVIDIA Quadro FX 5800.
3. "Refresh rate recommended".
4. "Buses features".
5. "Faster, Quieter, Lower: Power Consumption and Noise Level of Contemporary Graphics Cards". X-bit labs. http://www.xbitlabs.com/articles/video/display/power-noise.html
6. "Video Card Power Consumption". Coding Horror. http://www.codinghorror.com/blog/archives/000662.html
7. Maxim Integrated Products. "Power-Supply Management Solution for PCI Express x16 Graphics 150W-ATX Add-In Cards".
Mueller, Scott (2005). Upgrading and Repairing PCs. 16th edition. Que Publishing. ISBN 0-7897-3173-8.

External links

Graphics Card Comparisons, Performance, and Specifications
How Graphics Cards Work at HowStuffWorks
http://www.gpureview.com
Digital video recorder

[Image: Foxtel iQ, a combined digital video recorder and satellite receiver.]
[Image: V+, a combined digital video recorder and cable TV receiver.]

A digital video recorder (DVR), sometimes referred to by the merchandising term personal video recorder (PVR), is a consumer electronics device or application software that records video in a digital format to a disk drive, USB flash drive, SD memory card or other local or networked mass storage device. The term includes set-top boxes (STB) with direct-to-disk recording, portable media players (PMP) and portable media recorders (PMR) with recording capability, camcorders that record onto Secure Digital memory cards, and software for personal computers that enables video capture and playback to and from a hard disk. A television set with built-in digital video-recording facilities was introduced by LG in 2007,[1] followed by other manufacturers. DVR adoption has accelerated rapidly in recent years: in January 2006, ACNielsen recorded that 1.2% of U.S. households had a DVR, but by February 2011 the figure had grown to 42.2% of viewers in the United States.[2]

History

Hard-disk based digital video recorders
[Image: back view of a TiVo Series2 5xx-generation unit.]
Consumer digital video recorders ReplayTV and TiVo were launched at the 1999 Consumer Electronics Show in Las Vegas, USA. Microsoft also demonstrated a unit with DVR capability, but full DVR features did not become available until the end of 1999, in Dish Network's DISHplayer receivers. TiVo shipped its first units on March 31, 1999. ReplayTV won the "Best of Show" award in the video category,[3] with Netscape co-founder Marc Andreessen as an early investor and board member,[4] but TiVo was more successful commercially. While early legal action by media companies forced ReplayTV to remove many features such as automatic commercial skip and the sharing of recordings over the Internet,[5] newer devices have steadily regained these functions while adding complementary abilities, such as recording onto DVDs and programming and remote control facilities using PDAs, networked PCs, and Web browsers. Hard-disk based digital video recorders make the "time shifting" feature (traditionally done by a VCR) much more convenient, and also allow for "trick modes" such as pausing live TV, instant replay of interesting scenes, chasing playback where a recording can be viewed before it has been completed, and skipping of advertising. Most DVRs use the MPEG format for compressing the digitized video signals.
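Time shifting and "chasing playback" work because the recorder keeps writing the incoming stream to disk while a separate reader follows behind at its own position. The sketch below is a minimal illustration of that idea, assuming a hypothetical recorder that appends fixed-size chunks to a file while playback reads from an earlier offset; the file name and chunk size are arbitrary choices, not taken from any particular DVR implementation.

# Minimal sketch of chasing playback: a writer appends live data while a
# reader plays back from an earlier position in the same file.

CHUNK = 188 * 1024            # 188-byte MPEG-TS packets fit evenly in this size
RECORDING = "live_buffer.ts"  # illustrative file name

def append_live_chunk(data: bytes) -> None:
    """Called as each chunk arrives from the tuner; appends it to the recording."""
    with open(RECORDING, "ab") as f:
        f.write(data)

def read_for_playback(offset: int) -> bytes:
    """Playback reads from wherever the viewer currently is, which may be far
    behind the live write position (paused TV, instant replay)."""
    with open(RECORDING, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)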
Video recording capabilities have become an essential part of the modern set-top box, as TV viewers have wanted to take control of their viewing experiences. As consumers have been able to converge increasing amounts of video content on their set-tops, delivered by traditional broadcast (cable, satellite and terrestrial) as well as IP networks, the ability to capture programming and view it whenever they want has become a must-have function for many consumers.

Digital video recorders tied to a video service
At the 1999 CES, Dish Network demonstrated the hardware that would later gain DVR capability with the assistance of Microsoft software, which also included WebTV Networks internet TV.[6] By the end of 1999 the DISHplayer had full DVR capabilities, and within a year over 200,000 units were sold.[7][8] In the UK, digital video recorders are often referred to as "plus boxes" (such as BSkyB's Sky+ and Virgin Media's V+, which integrates an HD capability, and the subscription-free Freesat+ and Freeview+). British Sky Broadcasting markets a popular combined EPG and DVR as Sky+. TiVo launched a UK model in 2000; while it is no longer on sale, the subscription service is still maintained. The South Africa-based satellite TV broadcaster Multichoice recently launched its DVR, which is available on its DStv platform. In addition to ReplayTV and TiVo, there are a number of other suppliers of digital terrestrial (DTT) DVRs, including Thomson, Topfield, Fusion, Pace Micro Technology, Humax, AC Ryan Playon and Advanced Digital Broadcast (ADB). Many satellite, cable and IPTV companies are incorporating digital video recording functions into their set-top boxes, such as with DirecTiVo, DISHPlayer/DishDVR, the Scientific Atlanta Explorer 8xxx from Time Warner, Total Home DVR from AT&T U-verse, the Motorola 6412 from Comcast and others, the Moxi Media Center by Digeo (available through Charter, Adelphia, Sunflower, Bend Broadband, and soon Comcast and other cable companies), and Sky+. Astro introduced its DVR system, called Astro MAX, which was the first PVR in Malaysia; it was phased out two years after its introduction.

In the case of digital television, no encoding is necessary in the DVR, since the signal is already a digitally encoded MPEG stream; the digital video recorder simply stores the digital stream directly to disk. Having the broadcaster involved with, and sometimes subsidizing, the design of the DVR can lead to features such as the ability to use interactive TV on recorded shows, pre-loading of programs, or directly recording encrypted digital streams. It can, however, also force the manufacturer to implement non-skippable advertisements and automatically expiring recordings. In the United States, the FCC ruled that starting on July 1, 2007, consumers would be able to purchase a set-top box from a third-party company rather than being forced to purchase or rent it from their cable company.[9] This ruling applies only to "navigation devices", otherwise known as cable television set-top boxes, and not to the security functions that control the user's access to the content of the cable operator.[10] The overall net effect on digital video recorders and related technology is unlikely to be substantial, as standalone DVRs are already readily available on the open market.

Introduction of dual tuners
In 2003 many satellite and cable providers introduced dual-tuner digital video recorders.
In the UK, BSkyB introduced its first PVR, Sky+, with dual-tuner support in 2001. These machines have two independent tuners within the same receiver. The main use for this feature is the ability to record a live program while watching another live program, or to record two programs at the same time, possibly while watching a previously recorded one. Kogan Technologies introduced a dual-tuner PVR in the Australian market that allows free-to-air television to be recorded on a removable hard drive. Some dual-tuner DVRs can also output to two separate television sets at the same time. The PVR manufactured by UEC (Durban, South Africa) and used by Multichoice, and the Scientific Atlanta 8300DVB PVR, can show two programs while recording a third, using a triple tuner. Where several digital subchannels are transmitted on a single RF channel, some PVRs can record two channels and show a third, so long as all three subchannels are carried on no more than two RF channels (or one).[11] In the United States, DVRs were used by 32 percent of all TV households in 2009, and 38 percent by 2010, with viewership among 18- to 40-year-olds 40 percent higher in homes that have them.[12]

Integrated TV-set digital video recorders
[Image: an integrated LCD DVR; even with all the DVR components inside, the LCD monitor is still slim.]
Digital video recorders are often integrated into LCD and LED TV sets. These systems simplify wiring and installation because they do not use external ports (SCART or HDMI), and only one device, one power connection and one remote control are needed instead of two. There are examples of security systems integrated into such DVRs, which are capable of recording more input streams in parallel. Some include wireless interfaces such as Bluetooth and Wi-Fi, so they can play and record files to or from cellular phones and other devices. Such devices can also be used as disguised observation systems, displaying pictures or videos like a typical store display.

VESA-compatible digital video recorders
[Image: the underside of a VESA-compatible DVR, developed by Lorex Technology.]
VESA-compatible DVRs are designed small and light enough to mount to the back of an LCD monitor that has clear access to VESA mounting holes (100 x 100 mm). This allows users to use their own monitor, saving cost and space.

PC-based digital video recorders
Software and hardware are available that can turn personal computers running Microsoft Windows, Linux, and Mac OS X into DVRs; this is a popular option for home-theater PC (HTPC) enthusiasts.

NAS DVR
An increasing number of pay-TV operators are offering their subscribers the ability to create their own digital recording platform capable of storing video, audio, photos, etc. These customizable hardware and software platforms enable subscribers to attach their own NAS (network-attached storage) hard drives or solid-state/flash memory to set-tops that do not have their own internal storage. This minimizes an operator's investment, while offering subscribers the flexibility to create a digital recording solution that meets their specific requirements. One such product is DVR-Lite™, a vertically integrated hardware and software platform from Advanced Digital Broadcast, available on its Set-Back Box, which allows external storage to be added by subscribers.
Linux
There are many free DVR applications available for Linux, each released as free and open-source software under the GNU General Public License: MythTV, VDR and LinuxMCE. A commercial and proprietary application called SageTV is available for most popular Linux distributions.

Mac OS
Elgato makes a series of digital video recording devices called EyeTV. The software supplied with each device is also called EyeTV, and is available separately for use on compatible third-party tuners from manufacturers such as Pinnacle, TerraTec, and Hauppauge. SageTV provided DVR software for the Mac but no longer sells it;[13] the previously sold software supported the Hauppauge HVR-950, myTV.PVR and HDHomeRun hardware. SageTV software also included the ability to watch YouTube and other online video with a remote control. MythTV (see above) also runs under Mac OS X, but most recording devices are currently supported only under Linux. Precompiled binaries are available for the MythTV front end, allowing a Mac to watch video from (and control) a MythTV server running under Linux. Apple provides applications in the FireWire software developer kit which allow any Mac with a FireWire port to record the MPEG-2 transport stream from a FireWire-equipped cable box (for example the Motorola 62xx, including HD streams). Applications can also change channels on the cable box via the FireWire interface. Only broadcast channels can be recorded, as the rest of the channels are encrypted. FireRecord (formerly iRecord) is a free scheduled-recording program derived from this SDK.

Windows
There are several free digital video recording applications available for Microsoft Windows, including GB-PVR, MediaPortal, and Orb (web-based remote interface). There are also several commercial applications, including CyberLink, SageTV, Beyond TV, Showshifter, InterVideo WinDVR, the R5000-HD and Meedio (now discontinued: Yahoo! bought most of the company's technology, discontinued the Meedio line, and rebranded the software as Yahoo! Go - TV, which is now a free product but works only in the U.S.[14]). Most TV tuner cards come bundled with software which allows the PC to record television to hard disk.[15] For example, Leadtek's WinFast DTV1000 digital TV card comes bundled with the WinFast PVR2 software, which can also record analog video from the card's composite video input socket.[16] Windows Media Center is DVR software from Microsoft bundled with the Media Center edition of Windows XP, the Home Premium and Ultimate editions of Windows Vista, and most editions of Windows 7.

Source video
Television and video are terms that are sometimes used interchangeably, but they differ in their technical meaning. Video is the visual portion of television, whereas television is the combination of video and audio modulated onto a carrier frequency (i.e., a television channel) for delivery. Most DVRs can record both.

Analog sources overview
The first digital video recorders were designed to record analog television in NTSC, PAL or SECAM formats. Recording an analog signal requires a few steps. A TV tuner card tunes to a particular frequency and then functions as a frame grabber, breaking the scan lines into individual pixels and quantizing them into a format that a computer can work with. The series of frames, along with the audio (also sampled and quantized), is then compressed into a manageable format, such as MPEG-2, usually in software.
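The capture chain just described, sample each scan line, quantize the samples to fixed-depth pixels, then hand complete frames to a compressor, can be sketched in a few lines. The snippet below is only a schematic illustration with assumed numbers (an NTSC-like 720 x 480 frame and 8-bit samples in the range 0.0 to 1.0); the encode step is a stand-in for a real software MPEG-2 encoder, which is not shown.

# Schematic of the analog capture path: sample -> quantize -> frame -> encoder.
# Frame size, bit depth and the encoder stub are illustrative assumptions.

WIDTH, HEIGHT = 720, 480   # NTSC-like frame dimensions
LEVELS = 256               # 8-bit quantization

def quantize(sample: float) -> int:
    """Map an analog sample in [0.0, 1.0] to an 8-bit pixel value."""
    return min(LEVELS - 1, int(sample * LEVELS))

def grab_frame(analog_lines):
    """analog_lines: HEIGHT lists of WIDTH float samples from the tuner."""
    return [[quantize(s) for s in line] for line in analog_lines]

def encode(frames):
    """Stand-in for the software compressor (e.g. an MPEG-2 encoder)."""
    return b"".join(bytes(pixel for line in f for pixel in line) for f in frames)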
Analog broadcast copy protection
Many mass-produced consumer DVRs implement a copy-protection system called CGMS-A (Copy Generation Management System, Analog). This encodes a pair of bits in the vertical blanking interval (VBI) of the analog video signal that specify one of the following settings:
- Copying is freely allowed.
- Copying is prohibited.
- Only one copy of this material may be made.
- This is a copy of material for which only one copy was allowed to be made, so no further copies are allowed.
CGMS-A information may be present in analog broadcast TV signals, and is preserved when the signal is recorded and played back by analog VCRs, which do not interpret the bits. The restrictions nevertheless come into effect when the tape is copied onto a PVR. DVRs such as TiVo also detect and act upon[17] analog protection systems such as Macrovision and DCS Copy Protection, which were originally designed to block copying on analog VCRs.

Digital sources overview
Recording digital signals is generally a straightforward capture of the binary MPEG data being received; no expensive hardware is required to quantize and compress the signal, as the television broadcaster has already done this in the studio. DVD-based PVRs available on the market as of 2006 are not capable of capturing the full range of the visual signal available with high-definition television (HDTV), largely because HDTV standards were finalized later than the standards for DVDs. However, DVD-based PVRs can still be used (albeit at reduced visual quality) with HDTV, since currently available HDTV sets also have standard A/V connections.

ATSC broadcast
ATSC television broadcasting is primarily used in North America. The ATSC data stream can be directly recorded by a digital video recorder, though many DVRs record only a subset of this information (which can later be transferred to DVD). An ATSC DVR will also act as a set-top box, allowing older televisions or monitors to receive digital television.

Copy protection
The U.S. FCC attempted to limit the abilities of DVRs with its "broadcast flag" regulation. Digital video recorders that had not won prior approval from the FCC for implementing "effective" digital rights management would have been banned from interstate commerce from July 2005, but the regulation was struck down on May 6, 2005.

DVB
See also: DVB-T receiver
DVB digital television contains audio/visual signals that are broadcast over the air in a digital rather than analog format. The DVB data stream can be directly recorded by the DVR. Autonomous devices (that is, devices usable without a computer or tablet) that can store recordings on an external hard disk are called a telememory.[18]

Digital cable and satellite television
Recording satellite or digital cable signals on a digital video recorder can be more complex than recording analog signals or broadcast digital signals. There are several different transmission schemes, and the video streams may be encrypted to restrict access to subscribers only. A satellite or cable set-top box decrypts the signal if it is encrypted, and decodes the MPEG stream into an analog signal for viewing on the television. To record cable or satellite digital signals, the signal must be captured after it has been decrypted but before it is decoded; this is how DVRs built into set-top boxes work. Cable and satellite providers often offer their own digital video recorders along with a service plan.
These DVRs have access to the encrypted video stream, and generally enforce the provider's restrictions on copying of material even after recording.

DVD
Many DVD-based DVRs have the capability to copy content from a source DVD (ripping). In the U.S. this is prohibited under the Digital Millennium Copyright Act if the disc is encrypted, so most such DVRs will not allow recording of video streams from encrypted movie discs.

Digital camcorders
A digital camcorder combines a camera and a digital video recorder. Some DVD-based DVRs incorporate connectors that can be used to capture digital video from a camcorder. Some editing of the resulting DVD is usually possible, such as adding chapter points. Some digital video recorders can now record to solid-state flash memory cards (so-called flash camcorders). They generally use Secure Digital cards, can include wireless connections (Bluetooth and Wi-Fi), and can play SWF files. Some digital video recorders write video and graphics in real time to the flash card in an edit-ready form, called DTE or "direct to edit". These are used to speed up the editing workflow in video and television production, since linear videotapes do not then need to be transferred to the edit workstation (see Non-linear editing system).

File formats, resolutions and file systems
DVRs can usually record and play H.264, MPEG-4 Part 2, MPEG-2 (.mpg and .TS), VOB and ISO image video, with MP3 and AC3 audio tracks. They can also display images (JPEG and PNG) and play music files (MP3 and Ogg). Some devices can be updated to play and record new formats. Recordings from standard-definition television are usually 480i/p or 576i/p, while HDTV recordings are usually 720p or 1080i. DVRs usually record to proprietary filesystems for copy protection, although some can use FAT filesystems.

Applications

TV recording
TV DVRs generally use the electronic program guide (EPG).

Security
Digital video recorders configured for physical security applications record video signals from closed-circuit television cameras for detection and documentation purposes. Many are designed to record audio as well. DVRs have evolved into feature-rich devices whose capabilities go well beyond the simple recording of video images previously done with VCRs. A DVR CCTV system provides a multitude of advanced functions over VCR technology, including video searches by event, time, date and camera. There is also much more control over quality and frame rate, allowing disk space usage to be optimized, and the DVR can be set to overwrite the oldest security footage should the disk become full. In some DVR security systems, remote access to security footage from a PC can also be achieved by connecting the DVR to a LAN or the Internet. videoNEXT also makes an NVR surveillance application for Mac OS X. Some of the latest professional digital video recorders include video analytics firmware, enabling functionality such as a "virtual tripwire" or even the detection of abandoned objects on the scene. Security DVRs may be categorized as either PC-based or embedded. A PC-based DVR is architecturally a conventional personal computer with video capture cards designed to capture video images. An embedded DVR is specifically designed as a digital video recorder, with its operating system and application software contained in firmware or read-only memory.
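Because the operator can trade quality and frame rate against disk space, the practical question for a security installation is how many days of footage fit on the disk before the oldest recordings start being overwritten. The sketch below does that arithmetic under assumed numbers; the per-camera bitrate and disk size are illustrative choices, not figures from the article.

# Rough retention estimate for a security DVR: how long until the disk fills
# and the oldest footage starts being overwritten.
# The bitrate and disk size below are illustrative assumptions.

def retention_days(disk_gb: float, cameras: int, bitrate_mbit_s: float) -> float:
    bytes_per_day_per_cam = bitrate_mbit_s * 1e6 / 8 * 86400
    total_bytes_per_day = bytes_per_day_per_cam * cameras
    return disk_gb * 1e9 / total_bytes_per_day

# Example: a 2 TB disk with 8 cameras each recording at 2 Mbit/s
print(round(retention_days(2000, 8, 2.0), 1))   # roughly 11.6 days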
Hardware features
Hardware features of security DVRs vary between manufacturers and may include, but are not necessarily limited to:
- Designed for rack-mounting or desktop configurations.
- Single or multiple video inputs, with connector types consistent with the analog or digital video provided, such as coaxial cable, twisted pair or optical fiber cable. The most common numbers of inputs are 1, 2, 4, 8, 16 and 32. Systems may be configured with a very large number of inputs by networking or bussing individual DVRs together (a sizing sketch follows this list).
- Looping video outputs for each input.
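Since common capture counts are small powers of two (1 to 32 inputs per unit) and larger systems are built by networking several recorders together, sizing an installation reduces to dividing the camera count by the inputs per unit and rounding up. A minimal sketch with purely illustrative numbers:

import math

# How many networked DVR units are needed to cover a camera count,
# given the inputs available on each unit (e.g. 1, 2, 4, 8, 16 or 32).

def dvr_units_needed(cameras: int, inputs_per_unit: int) -> int:
    return math.ceil(cameras / inputs_per_unit)

print(dvr_units_needed(40, 16))   # 3 units for 40 cameras with 16 inputs each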

Please read: A personal appeal from Ward Cunningham, inventor of the wiki Read now Digital video recorder From Wikipedia, the free encyclopedia Foxtel iQ, a combined digital video recorder and satellite receiver. V+, a combined digital video recorder and cable TV receiver. A digital video recorder (DVR), sometimes referred to by the merchandising term personal video recorder (PVR), is a consumer electronics device or application software that records video in a digital format to a disk drive, USB flash drive, SD memory card or other local or networked mass storage device. The term includes set-top boxes (STB) with direct to disk recording facility, portable media players (PMP) with recording, recorders (PMR) as camcorders that record onto Secure Digital memory cards and software for personal computers which enables video capture and playback to and from a hard disk. A television set with built-in digital video-recording facilities was introduced by LG in 2007,[1] followed by other manufacturers. DVR adoption has rapidly accelerated in recent years: in January 2006, ACNielsen recorded 1.2% of U.S. households having a DVR but by February 2011, this number had grown to 42.2% of viewers in the United States.[2]Contents [hide] 1 History 1.1 Hard-disk based digital video recorders 1.2 Introduction of dual tuners 2 Integrated TV-set digital video recorders 3 VESA Compatible digital video recorders 4 PC-based digital video recorders 5 NAS DVR 5.1 Linux 5.2 Mac OS 5.3 Windows 6 Source video 6.1 Analog sources overview 6.1.1 Analog broadcast copy protection 6.2 Digital sources overview 6.2.1 ATSC broadcast 6.2.1.1 Copy protection 6.2.2 DVB 6.2.3 Digital cable and satellite television 6.2.4 DVD 6.2.5 Digital camcorders 7 File formats, resolutions and file systems 8 Applications 8.1 TV recording 8.2 Security 8.2.1 Hardware features 8.2.2 Software features 9 Privacy concerns 10 The future of TV advertisements 11 Patent and copyright litigation 12 See also 13 Notes 14 References 15 External links [edit] History [edit] Hard-disk based digital video recorders Back view of a TiVo Series2 5xx-generation unit. Consumer digital video recorders ReplayTV and TiVo were launched at the 1999 Consumer Electronics Show in Las Vegas, USA. Microsoft also demonstrated a unit with DVR capability, but this did not become available until the end of 1999 for full DVR features in Dish Network's DISHplayer receivers. TiVo shipped their first units on March 31, 1999. ReplayTV won the "Best of Show" award in the video category[3] with Netscape co-founder Marc Andreessen as an early investor and board member,[4] but TiVo was more successful commercially. While early legal action by media companies forced ReplayTV to remove many features such as automatic commercial skip and the sharing of recordings over the Internet,[5] newer devices have steadily regained these functions while adding complementary abilities, such as recording onto DVDs and programming and remote control facilities using PDAs, networked PCs, and Web browsers. Hard-disk based digital video recorders make the "time shifting" feature (traditionally done by a VCR) much more convenient, and also allow for "trick modes" such as pausing live TV, instant replay of interesting scenes, chasing playback where a recording can be viewed before it has been completed, and skipping of advertising. Most DVRs use the MPEG format for compressing the digitized video signals. 
Video recording capabilities have become an essential part of the modern set-top box, as TV viewers have wanted to take control of their viewing experiences. As consumers have been able to converge increasing amounts of video content on their set-tops, delivered by traditional 'broadcast' [ Cable, Satellite and terrestrial] as well as IP networks, the ability to capture programming and view it whenever they want has become a must-have function for many consumers. Digital video recorders tied to a video service At the 1999 CES, Dish Network demonstrated the hardware that would later have DVR capability with the assistance of Microsoft software.[6] which also included WebTV Networks internet TV.[6] By the end of 1999 the Dishplayer had full DVR capabilities and within a year, over 200,000 units were sold.[7][8] In the UK, digital video recorders are often referred to as "plus boxes" (such as BSKYB's Sky+ and Virgin Media's V+ which integrates an HD capability, and the subscription free Freesat+ and Freeview+). British Sky Broadcasting markets a popular combined EPG and DVR as Sky+. TiVo launched a UK model in 2000, and while no longer on sale, the subscription service is still maintained. South African based Africa Satellite TV beamer Multichoice recently launched their DVR which is available on their Dstv platform. In addition to ReplayTV and TiVo, there are a number of other suppliers of digital terrestrial (DTT) DVRs, including Thomson, Topfield, Fusion, Pace Micro Technology, Humax, AC Ryan Playon and Advanced Digital Broadcast (ADB). Many satellite, cable and IPTV companies are incorporating digital video recording functions into their set-top box, such as with DirecTiVo, DISHPlayer/DishDVR, Scientific Atlanta Explorer 8xxx from Time Warner, Total Home DVR from AT&T U-verse, Motorola 6412 from Comcast and others, Moxi Media Center by Digeo (available through Charter, Adelphia, Sunflower, Bend Broadband, and soon Comcast and other cable companies), or Sky+. Astro introduced their DVR system, called Astro MAX, which was the first PVR in Malaysia. Sadly, it was phased out two years after its introduction. In the case of digital television, there is no encoding necessary in the DVR since the signal is already a digitally encoded MPEG stream. The digital video recorder simply stores the digital stream directly to disk. Having the broadcaster involved with, and sometimes subsidizing, the design of the DVR can lead to features such as the ability to use interactive TV on recorded shows, pre-loading of programs, or directly recording encrypted digital streams. It can, however, also force the manufacturer to implement non-skippable advertisements and automatically expiring recordings. In the United States, the FCC has ruled that starting on July 1, 2007, consumers will be able to purchase a set-top box from a third-party company, rather than being forced to purchase or rent the set-top box from their cable company. [9] This ruling only applies to "navigation devices," otherwise known as a cable television set-top box, and not to the security functions that control the user's access to the content of the cable operator.[10] The overall net effect on digital video recorders and related technology is unlikely to be substantial as standalone DVRs are currently readily available on the open market. [edit] Introduction of dual tuners In 2003 many Satellite and Cable providers introduced dual-tuner digital video recorders. 
In the UK, BSkyB introduced their first PVR Sky+ with dual tuner support in 2001. These machines have two independent tuners within the same receiver. The main use for this feature is the capability to record a live program while watching another live program simultaneously or to record two programs at the same time, possibly while watching a previously recorded one. Kogan Technologies introduced a dual-tuner PVR in the Australian market allowing free-to-air television to be recorded on a removable hard drive. Some dual-tuner DVRs also have the ability to output to two separate television sets at the same time. The PVR manufactured by UEC (Durban, South Africa) and used by Multichoice and Scientific Atlanta 8300DVB PVR have the ability to view two programs while recording a third using a triple tuner. Where several digital subchannels are transmitted on a single RF channel, some PVRs can record two channels and view a third, so long as all three subchannels are on two channels (or one).[11] In the United States, DVRs were used by 32 percent of all TV households in 2009, and 38 percent by 2010, with viewership among 18- to 40-year-olds 40 percent higher in homes that have them.[12] [edit] Integrated TV-set digital video recorders Integrated LCD DVR Side view: Even with all the DVR components inside the LCD monitor is still slim. Media type LCD DVR Digital video recorders are often integrated in the LCD and LED TV-sets. These systems let the user simplify the wiring and installation, because they do not use ports (SCART or HDMI), and they only need to use only one device and power and the same remote control instead of two. There are examples of security systems integrated into such DVRs, and thus they are capable of recording more input streams in parallel. Some of them include wireless ports such as (Bluetooth and WiFi), so they can play and record files to or from cellular phones and other devices. Such devices can also be used as disguised observation systems, displaying pictures or videos as typical store display. [edit] VESA Compatible digital video recorders VESA Compatible DVR The underside of a VESA compatible DVR Media type DVR Developed by Lorex Technology VESA compatible DVR are designed small and light enough to mount to the back of an LCD monitor that has clear access to VESA mounting holes (100x100mm). This allows users to use their own personal monitor to save on cost and space. [edit] PC-based digital video recorders Software and hardware is available which can turn personal computers running Microsoft Windows, Linux, and Mac OS X into DVRs, and is a popular option for home-theater PC (HTPC) enthusiasts. [edit] NAS DVR An increasing number of Pay-TV operators are offering their subscribers the ability to create their own digital recording platform capable of storing video, audio, photos, etc. These customizable hardware and software platforms enable subscribers to attach their own NAS (Network Attached Storage) hard drives or solid state/flash memory to set-tops which do not have their own internal storage. This minimizes an operator's investment, while offering subscribers the flexibility to create a digital recording solution that meets their specific requirements. One such product is DVR-Lite(™), a vertically integrated hardware and software platform from Advanced Digital Broadcast, available on its Set-Back Box, which allows external storage to added by subscribers. 
Linux

There are many free DVR applications available for Linux, each released as free and open source software under the GNU General Public License: MythTV, VDR and LinuxMCE. A commercial, proprietary application called SageTV is available for most popular Linux distributions.

Mac OS

Elgato makes a series of digital video recording devices called EyeTV. The software supplied with each device is also called EyeTV and is available separately for use with compatible third-party tuners from manufacturers such as Pinnacle, TerraTec, and Hauppauge. SageTV provided DVR software for the Mac but no longer sells it.[13] Previously sold versions support the Hauppauge HVR-950, myTV.PVR and HDHomeRun hardware, and the software also included the ability to watch YouTube and other online video with a remote control. MythTV (see above) also runs under Mac OS X, but most recording devices are currently supported only under Linux; precompiled binaries are available for the MythTV front end, allowing a Mac to watch video from (and control) a MythTV server running under Linux. Apple provides applications in the FireWire software developer kit that allow any Mac with a FireWire port to record the MPEG-2 transport stream from a FireWire-equipped cable box (for example the Motorola 62xx, including HD streams); applications can also change channels on the cable box over the FireWire interface. Only broadcast channels can be recorded, as the remaining channels are encrypted. FireRecord (formerly iRecord) is a free scheduled-recording program derived from this SDK.

Windows

There are several free digital video recording applications available for Microsoft Windows, including GB-PVR, MediaPortal, and Orb (which has a web-based remote interface). There are also several commercial applications, including CyberLink, SageTV, Beyond TV, Showshifter, InterVideo WinDVR, the R5000-HD and Meedio (now a dead product: Yahoo! bought most of the company's technology, discontinued the Meedio line, and rebranded the software as Yahoo! Go - TV, which is now free but only works in the U.S.[14]). Most TV tuner cards come bundled with software that allows the PC to record television to hard disk.[15] For example, Leadtek's WinFast DTV1000 digital TV card comes bundled with the WinFast PVR2 software, which can also record analog video from the card's composite video input socket.[16] Windows Media Center is DVR software from Microsoft bundled with the Media Center edition of Windows XP, the Home Premium and Ultimate editions of Windows Vista, and most editions of Windows 7.

Source video

Television and video are terms that are sometimes used interchangeably but differ in their technical meaning. Video is the visual portion of television, whereas television is the combination of video and audio modulated onto a carrier frequency (i.e., a television channel) for delivery. Most DVRs can record both.

Analog sources overview

The first digital video recorders were designed to record analog television in the NTSC, PAL or SECAM formats. Recording an analog signal requires a few steps: a TV tuner card tunes to a particular frequency and then functions as a frame grabber, breaking each scan line into individual pixels and quantizing them into a format a computer can process. The series of frames, along with the audio (also sampled and quantized), is then compressed into a manageable format such as MPEG-2, usually in software.
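To make the chain just described more concrete, the following is a minimal sketch of the tune, frame-grab, quantize and compress steps. It is illustrative only: the AnalogTuner class, the compress_placeholder function and the record helper are hypothetical stand-ins, not the API of any real tuner driver, encoder or DVR product.

```python
# Minimal sketch of the analog capture chain described above:
# tune -> frame-grab -> quantize -> compress.  Every class and function
# here is a hypothetical placeholder; a real DVR would sit on top of a
# tuner driver and a software MPEG-2 encoder instead.

from typing import Iterator, List

Frame = List[List[int]]  # one frame as rows of quantized 8-bit pixel values


class AnalogTuner:
    """Stand-in for a TV tuner card locked onto one analog channel."""

    def __init__(self, frequency_mhz: float) -> None:
        self.frequency_mhz = frequency_mhz

    def grab_frames(self, count: int) -> Iterator[Frame]:
        # A real card digitizes each scan line into pixels here; this
        # placeholder just produces blank 720x576 frames.
        for _ in range(count):
            yield [[0] * 720 for _ in range(576)]


def compress_placeholder(frames: Iterator[Frame], audio: bytes) -> bytes:
    # Stand-in for the software compression step (e.g. to MPEG-2).
    payload = b"".join(bytes(row) for frame in frames for row in frame)
    return b"FAKE-MPEG2" + audio + payload


def record(frequency_mhz: float, seconds: int, path: str) -> None:
    tuner = AnalogTuner(frequency_mhz)
    frames = tuner.grab_frames(count=25 * seconds)  # 25 fps, PAL-style
    audio = b""  # sampled and quantized audio would be interleaved here
    with open(path, "wb") as out:
        out.write(compress_placeholder(frames, audio))


if __name__ == "__main__":
    record(frequency_mhz=511.25, seconds=1, path="capture.bin")
```

In a real DVR the placeholder compression step would be replaced by a hardware or software MPEG-2 encoder, which is where most of the processing cost of analog recording lies.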
Analog broadcast copy protection

Many mass-produced consumer DVRs implement a copy-protection system called CGMS-A (Copy Generation Management System - Analog). This encodes a pair of bits in the vertical blanking interval (VBI) of the analog video signal that specify one of the following settings:
- Copying is freely allowed
- Copying is prohibited
- Only one copy of this material may be made
- This is a copy of material for which only one copy was allowed to be made, so no further copies are allowed

CGMS-A information may be present in analog broadcast TV signals and is preserved when the signal is recorded and played back by analog VCRs, which do not themselves interpret the bits; the restrictions still take effect when the tape is later copied onto a PVR. DVRs such as TiVo also detect and act upon[17] analogue protection systems such as Macrovision and DCS Copy Protection, which were originally designed to block copying on analog VCRs.
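As an illustration of how a recorder might interpret those two bits, here is a minimal sketch that maps a two-bit CGMS-A value onto the four states listed above. The specific bit assignments follow the commonly cited CGMS convention (00 for unrestricted copying through 11 for no copying) but are an assumption made for illustration and should be checked against the specification; the CopyState and decode_cgms_a names are hypothetical.

```python
# Minimal sketch of decoding the two CGMS-A bits into the four states
# listed above.  The bit assignments are the commonly cited CGMS
# convention and are used here as an illustrative assumption only.

from enum import Enum


class CopyState(Enum):
    COPY_FREELY = 0b00   # copying is freely allowed
    COPY_NO_MORE = 0b01  # a permitted single copy was already made
    COPY_ONCE = 0b10     # only one copy of this material may be made
    COPY_NEVER = 0b11    # copying is prohibited


def decode_cgms_a(bits: int) -> CopyState:
    """Map a two-bit CGMS-A value (0-3) to a copy-control state."""
    if not 0 <= bits <= 0b11:
        raise ValueError("CGMS-A carries exactly two bits")
    return CopyState(bits)


# Example: a recorder would allow recording only in the first two cases.
state = decode_cgms_a(0b10)
allow_recording = state in (CopyState.COPY_FREELY, CopyState.COPY_ONCE)
```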
Digital sources overview

Recording digital signals is generally a straightforward capture of the binary MPEG data being received. No expensive hardware is required to quantize and compress the signal, as the television broadcaster has already done this in the studio. DVD-based PVRs on the market as of 2006 are not capable of capturing the full range of the visual signal available with high-definition television (HDTV), largely because HDTV standards were finalized later than the standards for DVDs. However, DVD-based PVRs can still be used (albeit at reduced visual quality) with HDTV, since currently available HDTV sets also have standard A/V connections.

ATSC broadcast

ATSC television broadcasting is used primarily in North America. The ATSC data stream can be recorded directly by a digital video recorder, though many DVRs record only a subset of this information (which can later be transferred to DVD). An ATSC DVR will also act as a set-top box, allowing older televisions or monitors to receive digital television.

Copy protection

The U.S. FCC attempted to limit the abilities of DVRs with its "broadcast flag" regulation. Digital video recorders that had not won prior approval from the FCC for implementing "effective" digital rights management would have been banned from interstate commerce from July 2005, but the regulation was struck down on May 6, 2005.

DVB

See also: DVB-T receiver

DVB digital television carries audio/visual signals that are broadcast over the air in a digital rather than analog format. The DVB data stream can be recorded directly by the DVR. Autonomous devices (that is, devices that can be used without a computer or tablet) that can store recordings on an external hard disk are called telememories.[18]

Digital cable and satellite television

Recording satellite or digital cable signals on a digital video recorder can be more complex than recording analog signals or broadcast digital signals. There are several different transmission schemes, and the video streams may be encrypted to restrict access to subscribers only. A satellite or cable set-top box both decrypts the signal, if it is encrypted, and decodes the MPEG stream into an analog signal for viewing on the television. To record cable or satellite digital signals, the signal must be captured after it has been decrypted but before it is decoded; this is how DVRs built into set-top boxes work. Cable and satellite providers often offer their own digital video recorders along with a service plan. These DVRs have access to the encrypted video stream and generally enforce the provider's restrictions on copying of material even after recording.

DVD

Many DVD-based DVRs can copy content from a source DVD (ripping). In the U.S. this is prohibited under the Digital Millennium Copyright Act if the disc is encrypted, so most such DVRs will not record video streams from encrypted movie discs.

Digital camcorders

A digital camcorder combines a camera and a digital video recorder. Some DVD-based DVRs incorporate connectors that can be used to capture digital video from a camcorder, and some editing of the resulting DVD is usually possible, such as adding chapter points. Some digital video recorders can now record to solid-state flash memory cards; these are called flash camcorders. They generally use Secure Digital cards, can include wireless connections (Bluetooth and Wi-Fi), and can play SWF files. Some digital video recorders combine video and graphics in real time onto the flash card; these are called DTE, or "direct to edit", recorders and are used to speed up the editing workflow in video and television production, since linear videotapes then do not need to be transferred to the edit workstation (see non-linear editing system).

File formats, resolutions and file systems

DVRs can usually record and play H.264, MPEG-4 Part 2, MPEG-2 .mpg, MPEG-2 .TS, VOB and ISO image video, with MP3 and AC3 audio tracks. They can also display images (JPEG and PNG) and play music files (MP3 and Ogg). Some devices can be updated to play and record new formats. Recordings from standard-definition television are usually 480i/p or 576i/p, while HDTV recordings are usually 720p or 1080i. DVRs usually record to proprietary file systems for copy-protection purposes, although some can use FAT file systems.

Applications

TV recording

TV DVRs generally use the electronic program guide (EPG).

Security

Digital video recorders configured for physical-security applications record video signals from closed-circuit television cameras for detection and documentation purposes; many are designed to record audio as well. DVRs have evolved into feature-rich devices that provide services well beyond the simple recording of video images previously done with VCRs. A DVR CCTV system offers many advantages over VCR technology, including video searches by event, time, date and camera. There is also much more control over quality and frame rate, allowing disk-space usage to be optimized, and the DVR can be set to overwrite the oldest security footage should the disk become full. In some DVR security systems, remote access to security footage from a PC can be achieved by connecting the DVR to a LAN or to the internet. videoNEXT also makes an NVR surveillance application for Mac OS X. Some of the latest professional digital video recorders include video-analytics firmware enabling functionality such as a "virtual tripwire" or even the detection of abandoned objects in the scene.

Security DVRs may be categorized as either PC-based or embedded. A PC-based DVR's architecture is a classical personal computer with video-capture cards designed to capture video images. An embedded DVR is specifically designed as a digital video recorder, with its operating system and application software contained in firmware or read-only memory.
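The overwrite-oldest behaviour mentioned in the security section above is, in effect, a circular store over recording files. The sketch below shows one way such a policy could look; the directory layout, the *.mpg naming, the size budget and the prune_oldest name are hypothetical and not taken from any particular DVR product.

```python
# Minimal sketch of the overwrite-oldest policy described above: when
# the recording folder exceeds a size budget, delete the oldest files
# first.  Paths, naming and the size budget are hypothetical examples.

import os
from pathlib import Path


def prune_oldest(recording_dir: str, max_bytes: int) -> None:
    """Delete the oldest recordings until the directory fits the budget."""
    files = sorted(Path(recording_dir).glob("*.mpg"), key=os.path.getmtime)
    total = sum(f.stat().st_size for f in files)
    for oldest in files:
        if total <= max_bytes:
            break
        total -= oldest.stat().st_size
        oldest.unlink()  # overwrite-oldest: drop this recording


if __name__ == "__main__":
    # Example: keep at most ~500 GB of CCTV footage in /var/recordings.
    prune_oldest("/var/recordings", max_bytes=500 * 10**9)
```

A real security DVR would typically apply the same idea within its own (often proprietary) file system rather than over ordinary files.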
Hardware features

Hardware features of security DVRs vary between manufacturers and may include, but are not necessarily limited to:
- Designs for rack mounting or desktop configurations.
- Single or multiple video inputs, with connector types consistent with the analogue or digital video provided, such as coaxial cable, twisted pair or optical-fiber cable. The most common numbers of inputs are 1, 2, 4, 8, 16 and 32. Systems may be configured with a very large number of inputs by networking or bussing individual DVRs together.
- Looping video outputs for each input

