Numerical Analysis


Mantissa (fraction) bits

The "fraction part" of a number. For example, the fractional part of 12.345 is .345. In IEEE 754 floating point, the mantissa (fraction) bits store the fractional part of the normalized significand.

Approximate (absolute) Error (1st approx-2nd approx)

(1st approx. - 2nd approx.)

Approximate Relative Error ((1st approx- 2nd approx)/2nd approx)

(1st approx. - 2nd approx.)/2nd approx.

True Relative Error ((exact-approx)/exact)

(exact - approx.)/exact
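The four error definitions above can be computed side by side. A minimal Python sketch, using two Newton-style approximations of sqrt(2) (the specific values are chosen only for illustration):

```python
# Two successive approximations of sqrt(2), values for illustration only.
exact = 2 ** 0.5      # "true" value
approx1 = 1.5         # 1st approximation
approx2 = 1.4167      # 2nd approximation

approx_error = approx1 - approx2                  # approximate (absolute) error
approx_rel_error = (approx1 - approx2) / approx2  # approximate relative error
true_error = exact - approx2                      # true (absolute) error
true_rel_error = (exact - approx2) / exact        # true relative error

print(approx_error, true_error)
```

Note that the approximate errors need only the two approximations, while the true errors require knowing the exact value.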

Binary to decimal conversion

Each bit after the binary point stands for 1/2, 1/4, 1/8, 1/16, and so on; to convert, add up the values of the positions where the bit is 1.
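A minimal Python sketch of this place-value sum (the function name is just illustrative):

```python
def binary_fraction_to_decimal(bits: str) -> float:
    """Convert a binary fraction such as '101' (meaning .101) to decimal.

    Bit i after the binary point is worth 1 / 2**(i+1); the decimal
    value is the sum of those weights wherever the bit is 1.
    """
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(bits))

print(binary_fraction_to_decimal("101"))  # .101 (base 2) = 0.5 + 0.125 = 0.625
```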

kByte

1000 bytes

MByte

1,000,000 bytes (1000^2)

GByte

1,000,000,000 bytes (1000^3)

ASCII character set (7 bits, 128 characters)

The 128 characters that make up the ASCII coding scheme; the most universal coding set.

Unicode character set (16 bits, numerous families and characters)

A 16-bit character set designed to cover all the world's major living languages, in addition to scientific symbols and dead languages.

Byte

8 bits

Coded Exponent Bits

The bits in a floating-point word that store the biased exponent: 8 bits in IEEE 754 single precision, 11 bits in double precision.

Sign bit

A binary bit that is added to the leftmost position of a binary number to indicate whether that number represents a positive or a negative quantity.

Accuracy

A description of how close a measurement is to the true value of the quantity measured.

Truth Table

A list of all possible input values to a digital circuit, listed in ascending binary order, and the output response for each input combination.
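A small Python sketch that generates such a table for a two-input gate (function names are illustrative):

```python
from itertools import product

def truth_table(gate, n_inputs=2):
    """List every input combination in ascending binary order,
    together with the gate's output for that combination."""
    return [(*bits, gate(*bits)) for bits in product((0, 1), repeat=n_inputs)]

def AND(a, b):
    return a & b

for row in truth_table(AND):
    print(row)  # (inputs..., output): last row (1, 1, 1)
```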

XOR gate

A logic circuit whose output is 1 when exactly one of its inputs is 1 (one or the other, but not both).

Algorithm

A methodical, logical rule or procedure that guarantees solving a particular problem.

Full-Adder

A unit that adds two input bits. A full adder can also add a bit carried in from another addition, whereas a half adder can only add the two inputs together.
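The relationship between the two adders can be sketched in Python; a full adder is commonly built from two half adders and an OR (function names are illustrative):

```python
def half_adder(a: int, b: int):
    """Add two bits; return (sum, carry)."""
    return a ^ b, a & b

def full_adder(a: int, b: int, carry_in: int):
    """Add two bits plus a carry-in; return (sum, carry_out)."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2   # a carry from either half adder propagates out

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 11 in binary
```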

Integer Format

An integer format is a data type in computer programming. Data is typed by the kind of information that is being stored, to what accuracy numeric data is stored, and how that information is to be manipulated in processing. Integers represent whole units. Integers occupy less space in memory, but this space-saving feature limits the magnitude of the integer that can be stored.

AND gate

Digital circuit that implements the AND operation. The output of this circuit is HIGH only if all of its inputs are HIGH.

OR gate

Digital circuit that implements the OR operation. The output of this circuit is HIGH (logic level 1) if any or all of its inputs are HIGH.

Coded Exponent Bias

In IEEE 754 floating point numbers, the exponent is biased in the engineering sense of the word - the value stored is offset from the actual value by the exponent bias. Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder. To solve this problem the exponent is biased before being stored, by adjusting its value to put it within an unsigned range suitable for comparison.

By arranging the fields so that the sign bit is in the most significant bit position, the biased exponent in the middle, then the mantissa in the least significant bits, the resulting value will be ordered properly, whether it's interpreted as a floating point or integer value. This allows high-speed comparisons of floating point numbers using fixed-point hardware. To calculate the bias for an arbitrary-sized floating point number, apply the formula 2^(k−1) − 1, where k is the number of bits in the exponent.
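A short Python sketch of the bias formula, checked against the actual bits of a double-precision number:

```python
import struct

# Bias for a k-bit exponent field: 2**(k-1) - 1
bias_single = 2 ** (8 - 1) - 1    # 127  (single precision, k = 8)
bias_double = 2 ** (11 - 1) - 1   # 1023 (double precision, k = 11)

# Extract the coded (biased) exponent of a 64-bit float and un-bias it.
bits = struct.unpack(">Q", struct.pack(">d", 12.345))[0]
coded_exponent = (bits >> 52) & 0x7FF   # the 11 exponent bits
true_exponent = coded_exponent - bias_double
print(coded_exponent, true_exponent)    # 12.345 = 1.543125 * 2**3
```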

DO FOR LOOP

In computer science a for loop is a programming language statement which allows code to be repeatedly executed. A for loop is classified as an iteration statement.
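A minimal Python example of a for loop as an iteration statement:

```python
# A for loop executes its body once for each value in a sequence.
total = 0
for i in range(1, 5):   # i takes the values 1, 2, 3, 4 in turn
    total += i
print(total)            # 1 + 2 + 3 + 4 = 10
```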

Double Precision (64 bit word)

Refers to a type of floating-point number that has more precision (that is, more digits to the right of the decimal point) than a single-precision number. The term double precision is something of a misnomer because the precision is not really double. The word double derives from the fact that a double-precision number uses twice as many bits as a regular floating-point number. For example, if a single-precision number requires 32 bits, its double-precision counterpart will be 64 bits long.

Single Precision (32 bit word)

Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point (a sign bit, an 8-bit exponent, and a 23-bit fraction).

Decimal to binary conversion

Step 1: Begin with the decimal fraction and multiply by 2. The whole-number part of the result is the first binary digit to the right of the point. Because .625 x 2 = 1.25, the first binary digit to the right of the point is a 1. So far, we have .625 = .1??? . . . (base 2).

Step 2: Next we disregard the whole-number part of the previous result (the 1 in this case) and multiply by 2 once again. The whole-number part of this new result is the second binary digit to the right of the point. We continue this process until we get a zero as our decimal part or until we recognize an infinite repeating pattern. Because .25 x 2 = 0.50, the second binary digit to the right of the point is a 0. So far, we have .625 = .10?? . . . (base 2).

Step 3: Disregarding the whole-number part of the previous result (this result was .50, so there is actually no whole-number part to disregard in this case), we multiply by 2 once again. The whole-number part of the result is now the next binary digit to the right of the point. Because .50 x 2 = 1.00, the third binary digit to the right of the point is a 1. So now we have .625 = .101?? . . . (base 2).

Step 4: In fact, we do not need a Step 4. We are finished in Step 3, because we had 0 as the fractional part of our result there. Hence the representation is .625 = .101 (base 2).
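The steps above can be sketched as a short Python function (the name and bit limit are illustrative; the limit guards against infinitely repeating patterns):

```python
def fraction_to_binary(x: float, max_bits: int = 16) -> str:
    """Convert a decimal fraction 0 <= x < 1 to a binary string
    by repeated multiplication by 2, as in the procedure above."""
    bits = ""
    while x != 0 and len(bits) < max_bits:
        x *= 2
        whole = int(x)       # the whole-number part is the next bit
        bits += str(whole)
        x -= whole           # disregard the whole-number part and repeat
    return bits

print(fraction_to_binary(0.625))  # '101', i.e. .625 = .101 (base 2)
```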

True (absolute) Error (exact-approx)

The difference between the measured or inferred value of a quantity and its actual value, given by exact - approx.

Flow Chart

The graphical representation of all activities in a process including tasks, delays, decisions, movement, etc.

Sum bit

the output bit of an adder that represents the sum for that column (the XOR of the input bits), excluding any carry

Carry Bit

a bit carried into the next higher column when a column's sum is too large for one digit; e.g., in binary, 1 + 1 = 10 gives a sum bit of 0 and a carry bit of 1

If-Then-Else Conditional Logic

a programming language statement that compares two or more sets of data and tests the results. If the results are true, the THEN instructions are taken; if not, the ELSE instructions are taken.
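A minimal Python sketch of the THEN/ELSE branching (the function and its names are illustrative):

```python
def classify(error, tolerance):
    if abs(error) <= tolerance:   # IF the test is true...
        return "converged"        # ...the THEN branch runs
    else:
        return "keep iterating"   # otherwise the ELSE branch runs

print(classify(0.0001, 0.001))    # converged
print(classify(0.5, 0.001))       # keep iterating
```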

Subroutine or Function

a set of instructions or blocks that can be inserted into a program and repeated whenever required

BINARY ADDITION

addition of base-2 numbers, column by column, using the rules 0 + 0 = 0, 0 + 1 = 1, and 1 + 1 = 10 (sum bit 0, carry bit 1)
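This column-by-column process, with carries propagating leftward, can be sketched in Python using only bitwise operations (the function name is illustrative):

```python
def binary_add(a: int, b: int) -> int:
    """Add two non-negative integers with bitwise operations,
    mimicking column-by-column binary addition with carries."""
    while b:
        carry = (a & b) << 1   # carry bits, shifted into the next column
        a = a ^ b              # column sums without the carries
        b = carry              # repeat until no carries remain
    return a

print(bin(binary_add(0b1011, 0b0110)))  # 0b10001 (11 + 6 = 17)
```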

BINARY MULTIPLICATION

multiplication of base-2 numbers; each partial product is either the multiplicand (shifted left to the position of the multiplier bit) or zero, and the shifted partial products are added together
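A Python sketch of this shift-and-add scheme (the function name is illustrative):

```python
def binary_multiply(a: int, b: int) -> int:
    """Shift-and-add multiplication: each 1-bit of b contributes
    a copy of a, shifted left by that bit's position."""
    result = 0
    shift = 0
    while b:
        if b & 1:                 # this multiplier bit is 1:
            result += a << shift  # add the shifted partial product
        b >>= 1
        shift += 1
    return result

print(binary_multiply(0b101, 0b11))  # 15 (5 * 3)
```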

Pseudo-code

a written representation of what a program does, using plain English instead of computer programming language

Implicit leading bit (assumed=1)

In normalized IEEE 754 format, every significand begins with a leading 1, so that bit is not stored; because it is known to be there, omitting it frees one extra stored bit and makes the representation more precise.
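A Python sketch that rebuilds a double-precision value from its stored fields, restoring the implicit leading 1:

```python
import struct

x = 12.345
bits = struct.unpack(">Q", struct.pack(">d", x))[0]
sign = bits >> 63                                 # 1 sign bit
exponent = ((bits >> 52) & 0x7FF) - 1023          # 11 bits, bias removed
fraction = (bits & ((1 << 52) - 1)) / 2 ** 52     # 52 stored mantissa bits
value = (-1) ** sign * (1 + fraction) * 2 ** exponent  # implicit 1 restored
print(value)  # 12.345
```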

WHILE DO LOOP

an entry-condition loop. This means that the expression or test condition is evaluated first; if it is true, the code executes the body of the loop, then tests the condition again.

DO WHILE LOOP

an exit-condition loop. This means that the body is always executed first, and then the expression or test condition is evaluated; if it is true, the code executes the body of the loop again.
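Python has a while loop but no do-while; the usual emulation of the exit-condition form runs the body first and tests afterward:

```python
# Emulating DO WHILE: the body always runs at least once,
# and the test happens AFTER the body (exit-condition loop).
x = 10.0
iterations = 0
while True:
    x = x / 2          # body
    iterations += 1
    if x < 1:          # test after the body: do ... while (x >= 1)
        break
print(iterations, x)   # 4 iterations: 5.0, 2.5, 1.25, 0.625
```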

Overflow bit

caused by a carry that produces a number too large for the current bit size you are working with. For example, if your hardware can only store 8-bit numbers and you multiply a 5-bit number by a 4-bit number, the answer can be a 9-bit number. This is wider than 8 bits, which creates an overflow error.
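The 5-bit-times-4-bit example can be simulated in Python by masking to 8 bits (values chosen to match the example):

```python
# Simulating 8-bit unsigned storage: results wider than 8 bits overflow.
a, b = 0b11111, 0b1111        # largest 5-bit and 4-bit values (31 and 15)
product = a * b               # 465 = 0b111010001, a 9-bit result
overflowed = product > 0xFF   # does not fit in 8 bits
stored = product & 0xFF       # what an 8-bit register would actually keep
print(product, overflowed, stored)  # 465 True 209
```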

Modeling Error

errors that arise due to over-idealizations of the geometry, boundary and intermediate support conditions, connection stiffness, member releases, interfaces between various structural sub-systems, movement systems, etc.

Computation Error

errors that occur during mathematical operations

Truncation error (of infinite Taylor series)

the error made by truncating an infinite sum and approximating it by a finite sum. For instance, if we approximate the sine function by the first two non-zero terms of its Taylor series, sin(x) ≈ x − x^3/6 for small x, the resulting error is a truncation error.
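A quick Python check of this example: the truncation error is close to the first omitted term, x^5/120.

```python
import math

# Truncation error of approximating sin(x) by x - x**3/6 for small x.
x = 0.1
approx = x - x ** 3 / 6
truncation_error = math.sin(x) - approx
print(truncation_error)  # close to x**5/120, the first omitted Taylor term
```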

Calculation Error

an error introduced by a mistake made while carrying out a calculation, as opposed to errors inherent in the method or the number representation

Condition number (uncertainty in function output/ uncertainty in function input)

measures how much the output value of the function can change for a small change in the input argument
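For a differentiable function, the relative condition number at x is |x f'(x) / f(x)|. A Python sketch estimating it with a central finite difference (the function name and step size are illustrative):

```python
import math

def condition_number(f, x, h=1e-6):
    """Estimate the relative condition number |x * f'(x) / f(x)|
    using a central-difference approximation of the derivative."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * dfdx / f(x))

print(condition_number(math.exp, 2.0))    # ~2: exp(x) has condition number x
print(condition_number(math.log, 1.001))  # large: log is ill-conditioned near 1
```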

Floating Point Format

In computing, floating point describes a method of representing an approximation of a real number in a way that can support a wide range of values. The numbers are, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent. The base for the scaling is normally 2, 10, or 16. The typical number that can be represented exactly is of the form: significand × base^exponent.

Precision

refers to the closeness of agreement among repeated individual results

Taylor Series Expansion

representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point.
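A minimal Python sketch summing such a series for e^x about 0 (the function name and term count are illustrative):

```python
import math

def exp_taylor(x: float, n_terms: int = 20) -> float:
    """Approximate e**x by the first n_terms of its Taylor series
    about 0: the sum of x**k / k! for k = 0, 1, 2, ..."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

print(exp_taylor(1.0), math.exp(1.0))  # both close to 2.718281828...
```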

Bit

the smallest unit of measurement used to quantify computer data. It holds a single binary value, 0 or 1.

Half-Adder

the half-adder adds two single binary digits A and B. It has two outputs, sum (S) and carry (C). The carry signal represents an overflow into the next digit of a multi-digit addition.

Approximation Error

error that arises when the measured data are imprecise or when approximations are used instead of the real data

Extended ASCII character set (8bits, 256 characters)

uses 8 bits instead of 7, allowing 128 more characters than standard ASCII. These extra characters include foreign-language symbols and drawing symbols.

