Numerical Methods Test 1

as the number of calculations (iterations) increases

round-off error gets compounded

Approximate Percent Relative Error

((current approximate - previous approximate) / current approximate)*100

True Percent Relative Error

((true value - approximate value) / true value)*100
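
A minimal sketch of both error formulas in code, using a hypothetical iteration that approximates sqrt(2); the specific iterates below are made up for illustration.

```python
import math

true_value = math.sqrt(2)
previous_approx = 1.5     # hypothetical earlier iterate
current_approx = 1.4167   # hypothetical later iterate

# True percent relative error: needs the (usually unknown) true value.
e_t = (true_value - current_approx) / true_value * 100

# Approximate percent relative error: uses two successive approximations.
e_a = (current_approx - previous_approx) / current_approx * 100

print(f"True percent relative error:        {e_t:.4f}%")
print(f"Approximate percent relative error: {e_a:.4f}%")
```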

Sensitivity Analysis

How sensitive the error in F(x) is to errors in each of its variables. The greater the value of Kx, the more sensitive the error in F(x) is to the error in x (the sign of Kx does not matter).
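
A hedged sketch of how such a sensitivity value might be computed, assuming Kx is the relative condition number Kx = x*F'(x)/F(x); the card does not spell out this formula, so treat it as an assumption.

```python
import math

def condition_number(f, dfdx, x):
    # Assumed definition: Kx = |x * F'(x) / F(x)| (sign ignored, per the card).
    return abs(x * dfdx(x) / f(x))

# Example: F(x) = tan(x) becomes extremely sensitive near x = pi/2.
f = math.tan
dfdx = lambda x: 1.0 / math.cos(x) ** 2

for x in (0.5, 1.0, 1.5, 1.57):
    print(f"x = {x:5.2f}  Kx = {condition_number(f, dfdx, x):12.2f}")
```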

Graphical Method

Used to obtain a root of F(x): plot F(x) vs. x and determine the roots visually. Advantages: simple, alerts us to the presence of multiple roots, and gives a sense of the function's behavior. Disadvantages: gives only a rough estimate of the root and requires a large number of function evaluations.
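
A minimal sketch of the graphical idea without an actual plot: tabulate F(x) over a grid (the same values you would plot) and flag sign changes, which signal roots. The function and grid below are illustrative choices.

```python
import math

def f(x):
    return math.sin(x)                 # roots at pi, 2*pi, 3*pi, ...

xs = list(range(1, 11))                # grid x = 1, 2, ..., 10
for x_left, x_right in zip(xs, xs[1:]):
    if f(x_left) * f(x_right) < 0:
        print(f"sign change in [{x_left}, {x_right}] -> root nearby")
```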

Relative Round-Off error for Floating Point Representation is bounded by

x = true value; ~x = approximate (rounded) value; delta~x = x - ~x. Machine epsilon = b^(1-t), where b = base and t = number of digits in the mantissa. The bound is |delta~x / x| <= (machine epsilon)/2.
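
A minimal check of this bound for IEEE double precision (b = 2, t = 53 mantissa bits, so machine epsilon = 2^(1-53) = 2^-52), using exact rational arithmetic to measure the rounding error of 0.1.

```python
import sys
from fractions import Fraction

eps = sys.float_info.epsilon            # 2**-52 for doubles
x_true = Fraction(1, 10)                # exact 0.1
x_rounded = Fraction(0.1)               # exact value of the nearest double
rel_error = abs((x_true - x_rounded) / x_true)

print(f"machine epsilon     = {eps:.3e}")
print(f"relative round-off  = {float(rel_error):.3e}")
print("bound |delta~x / x| <= eps/2 holds:", rel_error <= Fraction(eps) / 2)
```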

True Absolute Error (E_t)

true value - approximate value

Modified Secant Method

(Open Method) A modified version of the Newton-Raphson Method, used often when finding the derivative of the function is complicated or impossible. 1. Obtain an initial estimate of the root (x0). 2. Obtain a revised estimate of the root: x_i+1 = xi - F(xi)*del(x) / (F(xi + del(x)) - F(xi)). 3. Obtain |Ea| and determine completion.
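
A minimal sketch of these three steps in code; the test function, perturbation del(x), and tolerances are illustrative choices, not part of the card.

```python
def modified_secant(f, x0, delx=1e-6, es=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) * delx / (f(x + delx) - f(x))   # step 2
        ea = abs((x_new - x) / x_new) * 100              # step 3: |Ea| in percent
        x = x_new
        if ea < es:
            break
    return x

# Example: root of f(x) = x**2 - 2 near x0 = 1 (i.e. sqrt(2)).
print(modified_secant(lambda x: x**2 - 2, x0=1.0))
```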

Approximate Absolute Error (E_a)

(current approximate - previous approximate)

The Newton-Raphson (NR) Method

(Open Method; the most popular method) Finds where the tangent to F(x) at xi crosses the x-axis and uses that crossing as the revised root. 1. Obtain an initial estimate of the root (x0). 2. Obtain a revised estimate of the root: x_i+1 = xi - F(xi)/F'(xi). 3. Obtain |Ea| and determine completion.
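
A minimal sketch of the Newton-Raphson steps in code; the test function, derivative, and tolerances are illustrative choices.

```python
def newton_raphson(f, dfdx, x0, es=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / dfdx(x)              # step 2: tangent-line update
        ea = abs((x_new - x) / x_new) * 100     # step 3: |Ea| in percent
        x = x_new
        if ea < es:
            break
    return x

# Example: root of f(x) = x**2 - 2 starting from x0 = 1.
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))
```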

Typical Characteristics of Numerical Methods

1. A large number of simple, repetitive calculations (iterations) --> programming language. 2. Solutions are approximate --> think about accuracy. 3. Approximations change with each repetition (iteration) --> convergence.

Open Methods

1. Start with an initial estimate of the root (x0). 2. Systematically revise the estimate to converge to the true root. Examples: fixed-point iteration, the Newton-Raphson (NR) Method, the Secant Method, and the Modified Secant Method.

Comparing the False Position Method and the Bisection Method

1. The False Position Method usually reaches the answer faster. 2. The Bisection Method generally converges more slowly, but that's not always the case. 3. The bracket in the Bisection Method keeps getting smaller, while the bracket in the False Position Method does not continually shrink. 4. Both bracketing methods are guaranteed to converge.

Comparing the Newton-Raphson Method and the Modified Secant Method

1. The Newton-Raphson Method converges faster than the Modified Secant Method. 2. If the derivative of the function is zero, the NR Method runs into problems (division by zero). 3. Changing the initial guess can fix the problem when a revised estimate such as x2 comes out as zero or infinity. 4. In both cases you could end up diverging away from the root. 5. Graphing the function first can help you avoid diverging. 6. Both Open Methods are the most popular methods because of their speed of convergence.

reducing truncation and round off error

1. The more 'rectangles' (iterations) you use, the more precise your answer will be and the more the truncation error decreases (see the sketch below). 2. The more significant digits you carry, the more precise your answer will be and the more the round-off error is reduced. 3. Every approximate solution produced by a numerical method contains both round-off error and truncation error.
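
A minimal sketch of point 1 above: approximating the integral of x^2 on [0, 1] (true value 1/3) with left-endpoint rectangles; increasing the number of rectangles shrinks the truncation error. The integrand and interval are illustrative choices.

```python
def left_riemann(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

true_value = 1.0 / 3.0
for n in (10, 100, 1000, 10000):
    approx = left_riemann(lambda x: x**2, 0.0, 1.0, n)
    print(f"n = {n:6d}  approx = {approx:.6f}  error = {abs(true_value - approx):.2e}")
```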

Newton-Raphson Method for a system of non-linear equations

1. Start with an initial guess (x0, y0). 2. Obtain revised estimates. 3. Obtain |Ea| and determine completion: both Eax and Eay must be less than the specified Es for the roots to be accepted; if not, repeat step 2. x_i+1 = xi - (F1*dF2/dy - F2*dF1/dy) / (dF1/dx*dF2/dy - dF1/dy*dF2/dx), y_i+1 = yi - (F2*dF1/dx - F1*dF2/dx) / (dF1/dx*dF2/dy - dF1/dy*dF2/dx).
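
A minimal sketch of this two-equation update; the example system, partial derivatives, and initial guess are illustrative choices.

```python
def newton_system(f1, f2, df1dx, df1dy, df2dx, df2dy, x, y, es=1e-8, max_iter=50):
    for _ in range(max_iter):
        det = df1dx(x, y) * df2dy(x, y) - df1dy(x, y) * df2dx(x, y)  # Jacobian determinant
        x_new = x - (f1(x, y) * df2dy(x, y) - f2(x, y) * df1dy(x, y)) / det
        y_new = y - (f2(x, y) * df1dx(x, y) - f1(x, y) * df2dx(x, y)) / det
        eax = abs((x_new - x) / x_new) * 100
        eay = abs((y_new - y) / y_new) * 100
        x, y = x_new, y_new
        if eax < es and eay < es:        # both |Ea| values must satisfy Es
            break
    return x, y

# Example system: x**2 + y**2 = 5 and x*y = 2 (one solution is x = 2, y = 1).
print(newton_system(
    f1=lambda x, y: x**2 + y**2 - 5, f2=lambda x, y: x * y - 2,
    df1dx=lambda x, y: 2 * x, df1dy=lambda x, y: 2 * y,
    df2dx=lambda x, y: y, df2dy=lambda x, y: x,
    x=1.5, y=0.5))
```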

Bracketing Methods

All bracketing methods: 1. Start with a bracket [xL, xU] that contains the root. 2. Systematically refine the bracket to converge to the root. 3. To start, check that f(xL)*f(xU) < 0, which guarantees there is at least one root within [xL, xU]. Examples: the False Position Method and the Bisection Method.

Round-Off Error

Error because of using a finite number of digits to represent numerical values, instead of using all of the digits.

Truncation Error

Error caused by using an approximation of a method/formula, typically due to using a finite number of terms/steps/iterations or using finite values instead of infinitesimal values.

Fixed Point Representation +/-_ _ _ . _ _ _

Largest Positive Number: +999.999; Smallest Positive Number: +000.001; Zero: 000.000; Largest Negative Number: -000.001; Smallest Negative Number: -999.999. 1. Has a limited range (min, max). 2. Only a finite number of numbers can be represented. 3. The spacing between adjacent numbers is constant.

Strategies for Rounding Numbers

Rounding Half Up, Rounding Half to Even, and Chopping
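
A minimal sketch of the three strategies using Python's decimal module; the value 2.5 is chosen because the strategies disagree on it, and chopping is shown here as rounding toward zero.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_DOWN

x = Decimal("2.5")
print("half up:     ", x.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # -> 3
print("half to even:", x.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # -> 2
print("chopping:    ", x.quantize(Decimal("1"), rounding=ROUND_DOWN))       # -> 2
```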

Optimal Iterations

The point on the Error vs. Iterations graph where the round-off error and truncation error curves cross. This is the number of iterations that gives the most accurate answer with the least combined round-off and truncation error. The total error decreases leading up to the optimal iteration point, then increases after it (see the sketch below).
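
A hedged illustration of the same crossover using the step size of a forward-difference derivative (a smaller h playing the role of more iterations): the total error first falls while truncation error dominates, then rises again once round-off error dominates.

```python
import math

x = 1.0
true_deriv = math.cos(x)
for k in range(1, 16, 2):
    h = 10.0 ** (-k)
    approx = (math.sin(x + h) - math.sin(x)) / h     # forward difference
    print(f"h = 1e-{k:02d}  total error = {abs(approx - true_deriv):.3e}")
```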

False Position Method

The process of refining the root once f(xL)*f(xU) < 0 is satisfied: 1. Obtain an estimate of the root (xr) and refine the bracket: xr = xU - f(xU)*(xU - xL) / (f(xU) - f(xL)). 2. If f(xL)*f(xr) < 0, set xU = xr; if f(xL)*f(xr) > 0, set xL = xr; if f(xL)*f(xr) = 0, then f(xr) = 0 and xr is the solution. 3. Obtain |Ea| from two successive approximations and compare it to the desired Es.
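
A minimal sketch of these steps; the example function, bracket, and tolerance are illustrative choices.

```python
def false_position(f, xl, xu, es=1e-6, max_iter=100):
    assert f(xl) * f(xu) < 0, "bracket must contain a root"
    xr_old = xl
    for _ in range(max_iter):
        xr = xu - f(xu) * (xu - xl) / (f(xu) - f(xl))   # step 1
        test = f(xl) * f(xr)
        if test < 0:          # root lies in [xl, xr]
            xu = xr
        elif test > 0:        # root lies in [xr, xu]
            xl = xr
        else:                 # f(xr) == 0: exact root found
            return xr
        ea = abs((xr - xr_old) / xr) * 100              # step 3: |Ea| in percent
        xr_old = xr
        if ea < es:
            break
    return xr

# Example: root of f(x) = x**2 - 2 within the bracket [1, 2].
print(false_position(lambda x: x**2 - 2, 1.0, 2.0))
```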

Bisection Method

The process of refining the root once f(xL)*f(xU) < 0 is satisfied: 1. Obtain an estimate of the root (xr) and a new bracket: xr = (xL + xU)/2. 2. If f(xL)*f(xr) < 0, set xU = xr; if f(xL)*f(xr) > 0, set xL = xr; if f(xL)*f(xr) = 0, then f(xr) = 0 and xr is the solution. 3. Calculate |Ea| and determine completion.
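
A minimal sketch of the bisection steps; the example function, bracket, and tolerance are illustrative choices.

```python
def bisection(f, xl, xu, es=1e-6, max_iter=100):
    assert f(xl) * f(xu) < 0, "bracket must contain a root"
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2                      # step 1: midpoint of the bracket
        test = f(xl) * f(xr)
        if test < 0:
            xu = xr
        elif test > 0:
            xl = xr
        else:                                   # f(xr) == 0: exact root found
            return xr
        ea = abs((xr - xr_old) / xr) * 100      # step 3: |Ea| in percent
        xr_old = xr
        if ea < es:
            break
    return xr

# Example: root of f(x) = x**2 - 2 within the bracket [1, 2].
print(bisection(lambda x: x**2 - 2, 1.0, 2.0))
```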

Big-O Notation: relationship between step size, number of steps, and the truncation error

The smaller the step size, the smaller the truncation error; equivalently, over a fixed interval, the more steps that are taken, the smaller the truncation error.
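
A minimal sketch of the step-size relationship: halving h roughly halves the error of an O(h) forward difference and roughly quarters the error of an O(h^2) central difference. The function and evaluation point are illustrative choices.

```python
import math

x, true_deriv = 1.0, math.cos(1.0)
for h in (0.1, 0.05, 0.025):
    fwd = (math.sin(x + h) - math.sin(x)) / h              # O(h) truncation error
    cen = (math.sin(x + h) - math.sin(x - h)) / (2 * h)    # O(h**2) truncation error
    print(f"h = {h:6.3f}  forward error = {abs(fwd - true_deriv):.2e}"
          f"  central error = {abs(cen - true_deriv):.2e}")
```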

Abel's Impossibility Theorem

There is no general algebraic expression for the roots of a polynomial of degree five or higher.

Floating Point Representation +/-m*b^(+/-e) m=mantissa b=base (10) e= exponent

Form 1: [+/- sign of m][+/- sign of e][storage for e][storage for m]. Form 2: +/-0.(digits of m) x 10^(+/- digits of e). Largest Positive Number: +0.999x10^+99; Smallest Positive Number: +0.100x10^-99; Largest Negative Number: -0.100x10^-99; Smallest Negative Number: -0.999x10^+99. 1. Limited range (min, max). 2. Only a finite number of numbers can be represented. 3. Spacing between adjacent numbers increases with the magnitude of the number (see the sketch below). 4. The round-off error scales with the size of the number, so the relative round-off error stays roughly constant, which makes the representation precise in relative terms. 5. Modern computers use floating point representation.
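
A minimal sketch of point 3: the spacing between adjacent representable floating point numbers grows with the magnitude of the number (math.ulp requires Python 3.9+).

```python
import math

for value in (1.0, 1_000.0, 1_000_000.0, 1e12):
    print(f"value = {value:>16}  spacing to next float = {math.ulp(value):.3e}")
```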

