Computer Science


Machine Language

A string of binary bits, carried as electrical signals, that is unintelligible to humans.

Multicore Systems

A multi-core processor is an integrated circuit to which two or more processor cores have been attached for enhanced performance, reduced power consumption, and more efficient simultaneous processing of multiple tasks.

Serial Computing

A problem is broken into a discrete series of instructions. Instructions are executed sequentially, one after another, on a single processor. Only one instruction may execute at any moment in time.

Serial Computer

A serial computer is typified by bit-serial architecture — i.e., internally operating on one bit or digit for each clock cycle. Machines with serial main storage devices such as acoustic or magnetostrictive delay lines and rotating magnetic devices were usually serial computers.

File Virtualization

Addresses NAS challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored.

Von Neumann Continued

All parts of the computer are connected together by a bus. Memory and devices are controlled by the CPU. Data passes along the bus to and from the CPU. Memory holds both programs and data. Memory is addressed linearly, meaning that there is an address for each and every memory location. Memory is addressed by location number, without regard to the data contained within.

ALU

Arithmetic and Logic Unit - Deals with all arithmetic and logic within the computer. The part of the central processing unit that deals with operations such as addition, subtraction, and multiplication of integers and Boolean operations. It receives control signals from the control unit telling it to carry out these operations.

CPU Performance

CPU Time = Seconds/Program = Instructions/Program X Cycles/Instruction X Seconds/Cycle. CPU performance therefore depends on instruction count, CPI (cycles per instruction) and clock cycle time. All three are affected by the instruction set architecture.
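
As a worked example of this formula in C (the figures below are invented for illustration), a program of one million instructions with a CPI of 2 running on a 1 GHz clock takes 2 ms:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only. */
        double instructions = 1e6;   /* instructions per program        */
        double cpi          = 2.0;   /* average cycles per instruction  */
        double cycle_time   = 1e-9;  /* seconds per cycle (1 GHz clock) */

        /* CPU Time = Instructions/Program x Cycles/Instruction x Seconds/Cycle */
        double cpu_time = instructions * cpi * cycle_time;
        printf("CPU time: %g seconds\n", cpu_time);  /* prints 0.002 */
        return 0;
    }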

CPU

Central Processing Unit - Brain of the computer; fetches, decodes and executes instructions.

Disadvantages of RISC

Code quality. The performance of a RISC processor depends greatly on the code it is executing. If the programmer (or compiler) does a poor job of instruction scheduling, the processor can spend quite a bit of time stalling: waiting for the result of one instruction before it can proceed with a subsequent instruction. Code expansion. CISC machines perform complex actions with a single instruction where RISC machines may require multiple instructions for the same action, so code expansion can be a problem. Code expansion refers to the increase in size when a program that had been compiled for a CISC machine is re-compiled for a RISC machine; the exact expansion depends primarily on the quality of the compiler and the nature of the machine's instruction set. System design. RISC machines also require very fast memory systems to feed them instructions. RISC-based systems typically contain large memory caches, usually on the chip itself; this is known as a first-level cache.

CISC

Commonly implemented in large computers; a CISC design uses a single complex instruction to carry out an operation that would otherwise require multiple simpler instructions.

Comparison

Comparison operations compare values in order to determine such things as whether one number is greater than, less than or equal to another. These operations can be performed by subtraction of one of the numbers from the other, and as such can be handled by the aforementioned logic gates. However, it is not strictly necessary for the result of the calculation to be stored in this instance: the amount by which the values differ is not required. Instead, the appropriate status flags in the flag register are set and checked to determine the result of the operation.
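
A minimal sketch in C of how a comparison can set status flags via subtraction; the flag names and structure here are illustrative, not any particular architecture's flag register:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative status flags; real flag registers vary by architecture. */
    struct flags { int zero; int negative; };

    /* Compare a and b by subtracting: the difference itself is discarded,
       only the flags are kept. */
    struct flags compare(int32_t a, int32_t b) {
        int32_t diff = a - b;
        struct flags f;
        f.zero     = (diff == 0);  /* set when a == b */
        f.negative = (diff < 0);   /* set when a <  b */
        return f;
    }

    int main(void) {
        struct flags f = compare(3, 7);
        if (f.zero)          printf("equal\n");
        else if (f.negative) printf("less than\n");  /* printed here */
        else                 printf("greater than\n");
        return 0;
    }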

Characteristics of CISC

Complex instruction-decoding logic, driven by the need for a single instruction to support multiple addressing modes. Small number of general-purpose registers: instructions operate directly on memory, and only a limited amount of chip space is dedicated to general-purpose registers. Several special-purpose registers: many CISC designs set aside special registers for the stack pointer, interrupt handling, and so on; this can simplify the hardware design somewhat, at the expense of making the instruction set more complex. 'Condition code' register: this register reflects whether the result of the last operation is less than, equal to, or greater than zero, and records whether certain error conditions occur.

Advantages of Von Neumann

The control unit gets data and instructions in the same way from memory, which simplifies the design and development of the control unit. Data from memory and from devices are accessed in the same way. Memory organisation is in the hands of programmers.

Direct Addressing

For direct addressing, the operands of the instruction contain the memory address where the data required for execution is stored. For the instruction to be processed the required data must be first fetched from that location.

Logical Tests

Further logic gates are used within the ALU to perform a number of different logical tests, including seeing if an operation produces a result of zero. Most of these logical tests are used to then change the values stored in the flag register, so that they may be checked later by separate operations or instructions. Others produce a result which is then stored, and used later in further processing.

GPU

GPUs are processors which can be used for a range of tasks beyond processing computer game graphics. GPUs are used to display high-quality video content, such as Blu-ray, over connections like HDMI. Video editing also requires many calculations, especially where edits or effects have been applied. The decoding and encoding of video is also carried out by the GPU.

Decode

Here, the control unit checks the instruction that is now stored within the instruction register. It determines which opcode and addressing mode have been used, and as such what actions need to be carried out in order to execute the instruction in question.

Pipelining

In computers, a pipeline is the continuous and somewhat overlapped movement of instructions to the processor, or of the arithmetic steps taken by the processor to perform an instruction. Pipelining is the use of a pipeline. Without a pipeline, a computer processor gets the first instruction from memory, performs the operation it calls for, and then goes to get the next instruction from memory, and so forth. While fetching (getting) the instruction, the arithmetic part of the processor is idle; it must wait until it gets the next instruction.

With pipelining, the computer architecture allows the next instructions to be fetched while the processor is performing arithmetic operations, holding them in a buffer close to the processor until each instruction operation can be performed. The staging of instruction fetching is continuous. The result is an increase in the number of instructions that can be performed during a given time period. Pipelining is sometimes compared to a manufacturing assembly line, in which different parts of a product are being assembled at the same time even though some parts may have to be assembled before others. Even if there is some sequential dependency, the overall process can take advantage of those operations that can proceed concurrently.

Computer processor pipelining is sometimes divided into an instruction pipeline and an arithmetic pipeline. The instruction pipeline represents the stages in which an instruction is moved through the processor, including its being fetched, perhaps buffered, and then executed. The arithmetic pipeline represents the parts of an arithmetic operation that can be broken down and overlapped as they are performed. Pipelines and pipelining also apply to computer memory controllers and to moving data through various memory staging places.
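
A rough sketch in C of the timing benefit, assuming an idealized pipeline with no stalls (the stage and instruction counts are made-up figures):

    #include <stdio.h>

    int main(void) {
        int stages = 5;          /* e.g. fetch, decode, execute, memory, write-back */
        int instructions = 100;  /* made-up workload size */

        /* Without pipelining, each instruction occupies the whole processor. */
        int serial_cycles = stages * instructions;

        /* With an ideal pipeline (no stalls), once the first instruction
           fills the pipe, one instruction completes per cycle. */
        int pipelined_cycles = stages + (instructions - 1);

        printf("serial: %d cycles, pipelined: %d cycles\n",
               serial_cycles, pipelined_cycles);  /* 500 vs 104 */
        return 0;
    }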

Multiplication and Division

In most modern processors, the multiplication and division of integer values is handled by specific floating-point hardware within the CPU. Earlier processors used either additional chips known as maths co-processors, or used a completely different method to perform the task.

Control Unit

This controls the movement of instructions in and out of the processor, and also controls the operation of the ALU. It consists of a decoder, control logic circuits, and a clock to ensure everything happens at the correct time. It is also responsible for performing the instruction execution cycle.

Opcode Short Codes

MOV - Moves a data value from one location to another
ADD - Adds two data values using the ALU, and returns the result to the accumulator
STO - Stores the contents of the accumulator in the specified location
END - Marks the end of the program in memory
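
A minimal sketch in C of a fetch-decode-execute loop interpreting these four opcodes; the encoding, the memory layout, and the treatment of MOV as a load into the accumulator are simplifications invented for illustration:

    #include <stdio.h>

    /* Invented opcode values for illustration; real encodings differ. */
    enum { MOV = 0, ADD = 1, STO = 2, END = 3 };

    int main(void) {
        /* Each instruction: opcode, operand (a memory address). */
        int program[][2] = {
            { MOV, 10 },  /* load memory[10] into the accumulator  */
            { ADD, 11 },  /* add memory[11] to the accumulator     */
            { STO, 12 },  /* store the accumulator into memory[12] */
            { END, 0  },
        };
        int memory[16] = { 0 };
        memory[10] = 4;
        memory[11] = 5;

        int pc = 0, acc = 0;  /* program counter, accumulator */
        for (;;) {
            int opcode  = program[pc][0];  /* fetch and decode */
            int operand = program[pc][1];
            pc++;                          /* point at the next instruction */
            if      (opcode == MOV) acc = memory[operand];
            else if (opcode == ADD) acc += memory[operand];
            else if (opcode == STO) memory[operand] = acc;
            else break;                    /* END */
        }
        printf("memory[12] = %d\n", memory[12]);  /* prints 9 */
        return 0;
    }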

Features of RISC

One-cycle execution time: RISC processors have a CPI (cycles per instruction) of one cycle. Pipelining: a technique that allows simultaneous execution of parts, or stages, of instructions to process instructions more efficiently. Large number of registers: the RISC design philosophy generally incorporates a larger number of registers to prevent large amounts of interaction with memory.

Disadvantages of Von Neumann

One bus has a bottleneck effect: only one piece of information can be accessed at a time. Instructions stored in the same memory as the data can be accidentally overwritten by an error in a program.

Logic

Problems that need to be solved logically.

Disadvantages of Harvard

Producing a computer with two buses and two memory stores is more expensive and takes more time.

Quantum Computing

Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors.

Reduced Instruction Set Architecture(RISC)

RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.

Bit Shifting

Shifting operations move bits left or right within a word, with different operations filling the gaps created in different ways. This is accomplished via the use of a shift register, which uses pulses from the clock within the control unit to trigger a chain reaction of movement across the bits that make up the word.
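
For example, in C (logical shifts on an unsigned 8-bit word, with zeros filling the gaps):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t word = 0x2D;  /* 0010 1101 */

        /* Logical shift left: bits move left, zeros fill the gap on the right. */
        uint8_t left = (uint8_t)(word << 1);  /* 0101 1010 = 0x5A */

        /* Logical shift right: bits move right, zeros fill the gap on the left. */
        uint8_t right = word >> 1;            /* 0001 0110 = 0x16 */

        printf("left: 0x%02X, right: 0x%02X\n", left, right);
        return 0;
    }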

Characteristics of RISC

Simple instructions: limited, fixed-length instructions, and no instructions combine load/store with arithmetic. Few data types: supports simple data types such as integers and characters rather than complex data structures such as records. Simple addressing modes: uses simple addressing modes and fixed-length instructions to facilitate pipelining; memory-indirect addressing isn't provided. Identical general-purpose registers: any register can be used in any context. Harvard architecture: follows the Harvard memory model, with the instruction stream and data stream conceptually separated.

Advantages of Harvard

Since it has two separate memories, it allows parallel access to data and instructions: an instruction fetch and a data access can take place at the same time. The two memories can also differ in width and technology to suit their contents.

Advantages of RISC

Speed: since a simplified instruction set allows for a pipelined, superscalar design, RISC processors often achieve two to four times the performance of CISC processors using comparable semiconductor technology and the same clock rates. Simpler hardware: because the instruction set of a RISC processor is so simple, it uses up much less chip space; smaller chips allow a semiconductor manufacturer to place more parts on a single silicon wafer, which can lower the per-chip cost dramatically. Shorter design cycle: since RISC processors are simpler than corresponding CISC processors, they can be designed more quickly and can take advantage of other technological developments sooner than corresponding CISC designs, leading to greater leaps in performance between generations. Efficient code: higher-level language compilers produce more efficient code than formerly, because they have always tended to use the smaller set of instructions found in a RISC computer. Simplicity: the simplicity of RISC allows more freedom in choosing how to use the space on a microprocessor.

Storage Virtualization

Storage systems typically use special hardware and software along with disk drives to provide very fast, reliable storage for computing and data.

Complex Instruction Set Architecture(CISC)

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction.

Operand

The Operand indicates where the data required for the operation can be found and how it can be accessed.

Accumulator

The accumulator is used to hold the result of operations performed by the arithmetic and logic unit, as covered in the section on the ALU.

Execute

The actual actions which occur during the execute cycle of an instruction depend on both the instruction itself, and the addressing mode specified to be used to access the data that may be required. However, four main groups of actions do exist, which are discussed in full later on.

Address Bus

The address bus contains the connections between the microprocessor and memory that carry the signals relating to the addresses which the CPU is processing at that time, such as the locations that the CPU is reading from or writing to. The width of the address bus corresponds to the maximum addressing capacity of the bus, or the largest address within memory that the bus can work with. The addresses are transferred in binary format, with each line of the address bus carrying a single binary digit. Therefore the maximum address capacity is equal to two to the power of the number of lines present (2^lines). For example, a 16-line address bus can address 2^16 = 65,536 distinct locations.

Parallel

The computational problem should be able to: be broken apart into discrete pieces of work that can be solved simultaneously; execute multiple program instructions at any moment in time; be solved in less time with multiple compute resources than with a single compute resource. The compute resources are typically a single computer with multiple processors/cores, or an arbitrary number of such computers connected by a network.

Control Bus

The control bus carries the signals relating to the control and co-ordination of the various activities across the computer, which can be sent from the control unit within the CPU. Different architectures result in differing numbers of lines of wire within the control bus, as each line is used to perform a specific task. For instance, different, specific lines are used for each of read, write and reset requests.

Control Logic Circuits

The control logic circuits are used to create the control signals themselves, which are then sent around the processor. These signals tell the arithmetic and logic unit and the register array what actions and steps they should perform, what data they should use to perform those actions, and what should be done with the results.

Fetch

The fetch cycle retrieves the instruction from the memory address held in the program counter, stores it in the instruction register, and moves the program counter on one so that it points to the next instruction.

Flag register / status

The flag register is specially designed to contain all the appropriate 1-bit status flags, which are changed as a result of operations involving the arithmetic and logic unit. Further information can be found in the section on the ALU.

Memory

The memory is not an actual part of the CPU itself, and is instead housed elsewhere on the motherboard. However, it is here that the program being executed is stored, and as such is a crucial part of the overall structure involved in program execution.

Opcode

The opcode is a short code which indicates what operation is to be performed. Each operation has a unique opcode. Once the opcode is known, the execution cycle can occur. Different actions need to be carried out depending on the opcode, with no two opcodes requiring the same actions to occur. Four groups of actions can occur: transfer of data between the CPU and memory; transfer of data between the CPU and an I/O device; data processing (arithmetic or logical operations on data); and control operations (such as altering the sequence of execution).

Timer or Clock

The timer or clock ensures that all processes and instructions are carried out and completed at the right time. Pulses are sent to the other areas of the CPU at regular intervals (related to the processor clock speed), and actions only occur when a pulse is detected. This ensures that the actions themselves also occur at these same regular intervals, meaning that the operations of the CPU are synchronized.

Other general purpose registers

These registers have no specific purpose, but are generally used for the quick storage of pieces of data that are required later in the program execution. In the model used here these are assigned the names A and B, with suffixes of L and U indicating the lower and upper sections of the register respectively.

Addition and Subtraction

These two tasks are performed by constructs of logic gates, such as half adders and full adders. While they may be termed 'adders', they can also perform subtraction via use of inverters and 'two's complement' arithmetic.
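
A minimal sketch of this idea in C, modelling a ripple-carry adder bit by bit and performing subtraction by adding the two's complement (invert every bit and add one):

    #include <stdint.h>
    #include <stdio.h>

    /* Ripple-carry addition built from the logic of a full adder:
       sum = a XOR b XOR carry-in; carry-out = majority(a, b, carry-in). */
    uint8_t add(uint8_t a, uint8_t b) {
        uint8_t sum = 0, carry = 0;
        for (int i = 0; i < 8; i++) {
            uint8_t x = (a >> i) & 1, y = (b >> i) & 1;
            sum   |= (uint8_t)((x ^ y ^ carry) << i);
            carry  = (x & y) | (x & carry) | (y & carry);
        }
        return sum;
    }

    /* Subtraction via two's complement: invert b and add one. */
    uint8_t subtract(uint8_t a, uint8_t b) {
        return add(a, add((uint8_t)~b, 1));
    }

    int main(void) {
        printf("%d\n", add(12, 30));       /* prints 42 */
        printf("%d\n", subtract(30, 12));  /* prints 18 */
        return 0;
    }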

Von Neumann Architecture

This describes the design architecture for an electronic digital computer with parts consisting of a processing unit containing: ALU, Control Unit, Register Array, Memory to store both data and instructions, External Mass Storage, Input and Output. Programs consist of a sequence of instructions. Instructions are executed in the order they are stored in memory. Instructions, characters, data and numbers are represented in binary form.

Harvard Architecture

This is a computer architecture with physically separate storage and signal pathways for instructions and data.

Register Array

This is a small amount of internal memory that is used for the quick storage and retrieval of data and instructions. All processors include some common registers used for specific functions, namely the program counter, instruction register, accumulator, memory address register and stack pointer.

System Bus

This comprises the control bus, data bus and address bus. It is used for connections between the processor, memory and peripherals, and for the transfer of data between the various parts.

Block Virtualization

This is the abstraction (separation) of logical storage from physical storage so that it may be accessed without regard to physical storage or heterogeneous structure. This separation gives the administrators of the storage system greater flexibility in how they manage storage for end users.

Parallel Computing

This is the simultaneous use of multiple compute resources to solve a computational problem: a problem is broken into parts that can be solved concurrently; each part is further broken down into a series of instructions; instructions from each part execute simultaneously on different processors; and an overall control/coordination mechanism is employed.
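
A minimal sketch of these ideas in C using POSIX threads: the problem (summing an array) is broken into two parts that execute simultaneously, with the main thread acting as the coordination mechanism (compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000
    static int data[N];

    struct part { int lo, hi; long sum; };

    /* Each thread sums its own slice of the array. */
    static void *sum_part(void *arg) {
        struct part *p = arg;
        for (int i = p->lo; i < p->hi; i++)
            p->sum += data[i];
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < N; i++) data[i] = 1;

        struct part parts[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
        pthread_t threads[2];
        for (int t = 0; t < 2; t++)
            pthread_create(&threads[t], NULL, sum_part, &parts[t]);
        for (int t = 0; t < 2; t++)  /* overall coordination: wait for both */
            pthread_join(threads[t], NULL);

        printf("total = %ld\n", parts[0].sum + parts[1].sum);  /* prints 1000 */
        return 0;
    }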

Data Bus

This is used for the exchange of data between the processor, memory and peripherals, and is bi-directional so that it allows data flow in both directions along the wires. Again, the number of wires used in the data bus (sometimes known as the 'width') can differ. Each wire is used for the transfer of signals corresponding to a single bit of binary data. As such, a greater width allows greater amounts of data to be transferred at the same time.

Decoder

This is used to decode the instructions that make up a program when they are being processed, and to determine what actions must be taken in order to process them. These decisions are normally taken by looking at the opcode of the instruction, together with the addressing mode used. This is covered in greater detail in the instruction execution section of this tutorial.

Instruction Register

This is used to hold the current instruction in the processor while it is being decoded and executed, so that the time taken by the whole execution process is reduced. This is because the time needed to access the instruction register is much less than that needed for continual checking of the memory location itself.

Program Counter

This register is used to hold the memory address of the next instruction that has to be executed in a program. This ensures the CPU knows at all times where it has reached, is able to resume execution at the correct point after an interruption, and executes the program correctly.

Memory Address Register

Used for storage of memory addresses, usually the addresses involved in the instructions held in the instruction register. The control unit then checks this register when needing to know which memory address to check or obtain data from.

Virtual Storage

Virtual storage is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console.

Parallel Computers

Virtually all stand-alone computers today are parallel from a hardware perspective: multiple functional units (L1 cache, L2 cache, branch, fetch, decode, floating-point, graphics processing (GPU), integer, etc.); multiple execution units/cores; multiple hardware threads.

Execution Cycle

Once a program has been loaded into memory, it has to be executed: the CPU repeatedly fetches, decodes and executes its instructions.

Memory Buffer/Data Register

When an instruction or data is obtained from the memory or elsewhere, it is first placed in the memory buffer register. The next action to take is then determined and carried out, and the data is moved on to the desired location.

Indirect Addressing

When using indirect addressing, the operands give a location in memory similarly to direct addressing. However, rather than the data being at this location, there is instead another memory address given where the data actually is located. This is the most flexible of the modes, but also the slowest as two data look ups are required.

Von Neumann vs Harvard

With Von Neumann architecture the CPU can be either reading an instruction or reading/writing data from/to the memory. Both cannot occur at the same time since the instructions and data use the same bus system. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data access do not contend for a single memory pathway. Also, a Harvard architecture machine has distinct code and data address spaces.

Semantic Gap

With the objective of improving the efficiency of software development, several powerful programming languages have been developed, providing a high level of abstraction, conciseness and power. This evolution widens the semantic gap between high-level languages and the machine's instruction set. To enable efficient compilation of high-level language programs, CISC and RISC designs are the two options. CISC designs involve very complex architectures, including a large number of instructions and addressing modes, whereas RISC designs involve a simplified instruction set adapted to the real requirements of user programs.

Immediate Addressing

With immediate addressing, no look up of data is actually required. The data is located within the operands of the instruction itself, not in a separate memory location. This is the quickest of the addressing modes to execute, but the least flexible. As such it is the least used of the three in practice.
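
A small sketch in C contrasting the three addressing modes defined above, using an array as a stand-in for memory (the addresses and values are arbitrary):

    #include <stdio.h>

    int main(void) {
        int memory[8] = { 0 };
        memory[3] = 99;  /* the data we ultimately want */
        memory[5] = 3;   /* a cell holding the *address* of the data */

        /* Immediate: the data is in the instruction itself, no look-up. */
        int immediate = 99;

        /* Direct: the operand (3) is the address where the data is stored. */
        int direct = memory[3];

        /* Indirect: the operand (5) names a cell that holds the data's
           address, so two look-ups are required. */
        int indirect = memory[memory[5]];

        printf("%d %d %d\n", immediate, direct, indirect);  /* 99 99 99 */
        return 0;
    }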

RISC

A type of microprocessor architecture that utilizes a small, highly optimized set of instructions, rather than the more specialized set of instructions often found in other types of architecture. The prime difference between RISC and CISC design is the number and complexity of instructions. CISC designs include complex instruction sets so as to provide an instruction set that closely supports the operations and data structures used by higher-level languages.

