Numerical Methods - Exam 1
True or False: the bisection method will converge to the true value of the root for this problem
False
Binary
(101.111)_2 * 2^2 = (10111.1)_2 = 2^4 + 2^2 + 2^1 + 2^0 + 2^(-1) = 16 + 4 + 2 + 1 + 0.5 = 23.5
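Conversions like this can be checked with Python's base-2 parser (a quick sketch):

```python
# (101.111)_2 is the integer (101111)_2 divided by 2^3;
# multiplying by 2^2 then shifts the binary point two places right.
x = int('101111', 2) / 2**3    # 5.875
print(x * 2**2)                # 23.5
```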
True or False: if a function has multiple roots, then the bisection method will find all of them
False; bisection converges to just one root in [a,b]
True or false: Newton's Method converges to a root r of f for any starting value x0
False. For a poor starting value x0 the iterates may diverge, cycle back and forth, or converge to a different root (e.g. if f'(x_n) = 0 the tangent line never crosses the axis).
Backward Substitution Python Code
x_n = b_n / a_nn
for i from n-1 down to 1
    sum = b_i
    for j from i+1 to n
        sum = sum - a_ij * x_j
    end for
    x_i = sum / a_ii
end for
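The pseudocode can be written as runnable Python (0-indexed; back_sub is a name chosen for this sketch):

```python
def back_sub(A, b):
    # Solve Ax = b for an upper-triangular matrix A.
    n = len(b)
    x = [0.0] * n
    x[n-1] = b[n-1] / A[n-1][n-1]
    for i in range(n-2, -1, -1):       # i from n-2 down to 0
        s = b[i]
        for j in range(i+1, n):
            s = s - A[i][j] * x[j]
        x[i] = s / A[i][i]
    return x

# Example: 2x + y = 5, 3y = 9  ->  y = 3, x = 1
print(back_sub([[2, 1], [0, 3]], [5, 9]))   # [1.0, 3.0]
```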
Newton's Method
Given a function f on [a,b] and a guess x0, take the tangent line to f at x0. Where the tangent line crosses 0 is the next guess x1; repeat.
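The tangent-line update x_(n+1) = x_n - f(x_n)/f'(x_n) can be sketched in Python (the names newton and df are choices for this illustration, not from the notes):

```python
def newton(f, df, x0, nmax=20, eps=1e-12):
    # Follow the tangent line at (x, f(x)) to its zero crossing, repeatedly.
    x = x0
    for _ in range(nmax):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^2 - 2 starting from x0 = 1
print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))   # ~1.4142135623730951
```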
def epsilon():
    eps = 1.0
    while 1.0 + eps > 1.0:
        eps = eps / 2.0
    eps = 2.0 * eps
    print(eps)

The pseudocode as given was wrong: it misspelled eps as esp on the first line, and its break inside the loop would exit after a single halving. The idea is to halve eps until 1.0 + eps is no longer distinguishable from 1.0, then double back one step to recover machine epsilon.
False Position Method
Goal: to find x in [a,b] so that f(x) = 0, using the secant line through (a, f(a)) and (b, f(b)):
y = mx + k, with m = (f(b) - f(a))/(b - a)
k = f(a) - a(f(b) - f(a))/(b - a) = (b f(a) - a f(b))/(b - a)
SO... y = [(f(b) - f(a))/(b - a)] x + [(b f(a) - a f(b))/(b - a)]
Issues with Newton's Method:
1) The tangent line may cross 0 nowhere near the root at all. 2) The tangent lines can keep bouncing back and forth between the same points (a cycle). 3) Where the derivative is 0, the tangent line is horizontal and never crosses 0.
Problems with Naive Gaussian Elimination:
Have a system that looks like the following:
[ 0 1 | 1 ]
[ 1 1 | 2 ]
**zero in pivot position** We could swap these rows... but a larger system may have many more zeros, so we need a systematic pivoting rule.
True or False: The forward elimination subroutine of Gaussian elimination has more operations than the back substitution subroutine
TRUE. Forward elimination takes on the order of n^3 operations, whereas back substitution takes on the order of n^2.
Iterative Methods for solving linear systems - why are they needed?
Take advantage of some structure in your system to reduce the number of computations
def F(n):
    v = [0] * n
    v[0] = 1
    v[1] = 1
    for k in range(2, n):
        v[k] = v[k-1] + v[k-2]
    print(v)

F(3)   # prints [1, 1, 2]

The printed answer is a list of length n (here 3) holding the first n Fibonacci numbers.
True or false: each iterate in Newton's method roughly doubles the precision of the previous iterate
True. EX: if |r - x_(n+1)| <= C|r - x_n|^2 with C = 1 and |r - x_n| < 10^-6, then |r - x_(n+1)| <= |r - x_n|^2 < (10^-6)^2 = 10^-12, so the number of correct digits roughly doubles.
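The digit-doubling is easy to see numerically; a minimal sketch using Newton's method on f(x) = x^2 - 2 (this example function is a choice for the illustration):

```python
# Newton's method on f(x) = x^2 - 2: the error roughly squares each step,
# so the number of correct digits roughly doubles per iteration.
import math

x = 1.5
for _ in range(4):
    x = x - (x*x - 2) / (2*x)
    print(abs(x - math.sqrt(2)))   # errors shrink like ~2e-3, ~2e-6, ~2e-12, ~0
```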
Naive Gauss elimination
Forward elimination reduces the matrix to upper-triangular form; back substitution then finishes the job, effectively leaving ones on the diagonal and zeros elsewhere, with the solution stored in the b vector.
Point Slope Formula
y - y1 = m(x - x1)
Scale Factor
s_i = max_j |a_ij| — S holds the largest absolute value in each row of the coefficient matrix. So if our augmented matrix is...
[ 3 -13  9   3 | -19 ]
[-6   4  1 -18 | -34 ]
[ 6  -2  2   4 |  16 ]
[12  -8  6  10 |  26 ]
S = [13, 18, 6, 12] **absolute values; the right-hand side column is not included**
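A quick sketch of computing the scale vector for the example system (the last column is the right-hand side, so it is excluded):

```python
# Scale factors: largest absolute coefficient in each row,
# ignoring the right-hand-side column.
A = [[ 3, -13, 9,   3, -19],
     [-6,   4, 1, -18, -34],
     [ 6,  -2, 2,   4,  16],
     [12,  -8, 6,  10,  26]]
s = [max(abs(a) for a in row[:-1]) for row in A]
print(s)   # [13, 18, 6, 12]
```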
def Z(x, n):
    ret = 1
    nterm = 1
    for k in range(1, n + 1):
        nterm = nterm * (-1) * (x**2) / k    # next term of the series for e^(-x^2)
        newret = ret + nterm
        if newret - ret == 0:
            print("Terminated at n=", k)
            break
        ret = newret
    print(ret)

Z(10, 300)   # try different values of x
For x far from 0, such as x = 10, the partial sums blow up before settling. This is either because 1) the Taylor polynomial fails to be a good approximation outside some range near 0, or 2) propagation of roundoff errors: the intermediate terms are huge and alternating in sign, so cancellation destroys the small true value.
Which initial value, x0, will have issues with Newton's Method for the function f(x) = sin(x) + x
x0 = pi, since f'(pi) = cos(pi) + 1 = 0: the tangent line there is horizontal and never crosses the axis.
Accuracy of E...Bisection Method
(b - a)/2^(n+1) < E
(b - a)/E < 2^(n+1)
log_2((b - a)/E) - 1 < n
Example with [a, b] = [0, 5] and E = 10^-6:
log_2((5 - 0)/10^-6) - 1 < n
Representation of Bases
(d1 d2 d3 . d4 d5 ...) * B^e EX: 102.635 * 10^4 = 1,026,350 OR, normalized, 1.02635 * 10^6
Row Operations
1) replace a row by a nonzero multiple of that row 2) replace a row with the sum of that row and another row 3) swap two rows
Using 3 significant digit arithmetic add the following numbers from smallest to largest and then largest to smallest. Be sure to round after EACH addition! Which order produces the better answer (true solution is 10.7)?
10.5, 0.141, 0.0052, 0.0049
Smallest to largest: 0.0049 + 0.0052 = 0.0101; 0.0101 + 0.141 = 0.151; 0.151 + 10.5 = 10.7
Largest to smallest: 10.5 + 0.141 = 10.6; 10.6 + 0.0052 = 10.6; 10.6 + 0.0049 = 10.6
Adding smallest to largest gives the better answer: the small terms accumulate before being swamped by the large one.
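The experiment can be reproduced in Python; round_sig is a helper written for this sketch (not from the notes) that rounds to 3 significant digits after each addition:

```python
import math

def round_sig(x, sig=3):
    # Helper for this illustration: round x to `sig` significant digits.
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

nums = [0.0049, 0.0052, 0.141, 10.5]

s2l = 0.0
for v in nums:                 # smallest to largest
    s2l = round_sig(s2l + v)

l2s = 0.0
for v in reversed(nums):       # largest to smallest
    l2s = round_sig(l2s + v)

print(s2l, l2s)                # 10.7 10.6
```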
arctan(x) = sum_{n=0}^inf (-1)^n x^(2n+1) / (2n+1) = x - x^3/3 + x^5/5 - x^7/7 + x^9/9 - ...
At x = 1: 1 - 1/3 + 1/5 - 1/7 + ... = arctan(1) = pi/4
So the sum is approximating pi/4, and in the code M should be 4 (to scale the sum up to pi).
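The partial sums at x = 1 can be checked numerically (leibniz is a name chosen for this sketch; the series converges slowly):

```python
# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... approach pi/4.
import math

def leibniz(nterms):
    return sum((-1)**n / (2*n + 1) for n in range(nterms))

print(4 * leibniz(100000))   # close to pi
print(math.pi)
```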
Issues with finding roots
A function like x^2: f(a) and f(b) have the same sign, so bisection isn't going to work here. A function with multiple roots: bisection will converge, but only to one of them, not all of them.
Fibonacci sequence
A sequence of numbers in which each number is the sum of the preceding two. 1, 1, 2, 3, 5, 8, 13,...
Scaled Partial Pivoting
Allowing row swaps (tracked with an index vector) to control which entries become the pivots; the scaled version picks the pivot row with the largest ratio |a_ik|/s_i.
sum_{k=0}^{n} k*x^k

def P(x, n):
    ret = 0
    for k in range(0, n + 1):
        ret = ret + k * (x**k)
    print(ret)

P(3.0, 5)   # prints 1641.0

sum_{k=0}^{5} k*3^k = 0*3^0 + 1*3^1 + 2*3^2 + 3*3^3 + 4*3^4 + 5*3^5 = 0 + 3 + 18 + 81 + 324 + 1215 = 1641
Taylor Series
Error term: E_(n+1) = f^(n+1)(xi) / (n+1)! * (x - c)^(n+1), where xi is some point between c and x.
Machine Numbers
Every number you can represent on the computer is a rational number, but not every rational number can be represented. The numbers you can represent are a subset of the rationals called the machine numbers (exactly which numbers these are depends on the machine's floating-point format).
True or False: Gaussian elimination with scaled partial pivoting is most appropriate for matrices where entries in different rows are differing magnitudes
FALSE. The issue is entries of differing magnitudes in the SAME row:
[ 1 .0001 ]
[ 2     2 ]
The problem is not that 2 and .0001 are different magnitudes; it's that 1 and .0001, in the same row, are.
True or False: The bisection method for finding roots has exponential convergence since the error in each iteration is reduced by a factor of 1/2
FALSE. It has linear convergence:
|x_n - r| <= (1/2)|x_(n-1) - r|, i.e. e_n <= (1/2) e_(n-1)
(The error shrinks by a constant factor each step, which is linear, not exponential, convergence.)
True or False: Iterative methods such as Jacobi and Gauss-Seidel (in exact arithmetic) will produce the exact solution to Ax=b in a finite number of iterations
FALSE. Iterative methods take an initial guess and, if we are lucky, converge toward the exact solution. They approach it in the limit rather than hitting it exactly in a finite number of iterations.
True or False: Machine epsilon is the smallest number representable in a particular computer
FALSE. Machine epsilon gives us an upper bound on relative rounding error. It IS small (the smallest e such that 1 + e is distinguishable from 1), but it is not the smallest representable number.
Truncation Error
Introduced by limitations in approximating infinite processes as finite ones
Rounding Error
Introduced by limitations of storage
11000010100000100000000000000100
Sign is negative since the first bit is 1. Next 8 bits: 10000101 are the exponent: 2^7 + 2^2 + 2^0 = 128 + 4 + 1 = 133, so e = 133 - 127 = 6. Mantissa: 00000100000000000000100. Value = -(1.00000100000000000000100)_2 * 2^6 = -(1000001.00000000000000100)_2 = -(2^6 + 2^0 + 2^-15) = -65.000030517578125
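The hand decoding can be cross-checked with the struct module:

```python
# Reinterpret the 32-bit pattern as an IEEE single-precision float.
import struct

bits = '11000010100000100000000000000100'
value = struct.unpack('>f', int(bits, 2).to_bytes(4, 'big'))[0]
print(value)   # -65.000030517578125
```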
Real Numbers
No hope that a computer can represent ALL real numbers
IEEE Floating-Point Format
Sign: 1 bit; 0 = positive, 1 = negative. Exponent: 8 bits, from 00000000 (smallest) to 11111111 (largest = 255), stored with a bias of 127, so e = stored - 127; the two extreme patterns are reserved, leaving -126 <= e <= 127 for normalized numbers. Mantissa: f1...f23 (23 bits).
def cool(a):
    x = a / 2.0
    for k in range(1, 50):
        x = (x*x + a) / (2.0*x)
    print(x)

cool(3)   # 1.7320508075688774

The code computes the square root of the given number a: it is Newton's method applied to f(x) = x^2 - a.
Absolute Error
The unsigned difference between the actual value and an estimate: |actual - estimated|. (For rounding, the estimate is the nearest representable value.)
Other problems with NGE
Zero pivots (division by zero), and rounding error in the computer (e.g. when only 4 digits are stored).
How to do Gauss - Seidel Method
[ 5 -1  2 |  12 ]
[ 3  8 -2 | -25 ]
[ 1  1  4 |   6 ]

5x - y + 2z = 12
3x + 8y - 2z = -25
x + y + 4z = 6

Solve each equation for its diagonal unknown:
x = 12/5 + (1/5)y - (2/5)z
y = -25/8 - (3/8)x + (2/8)z
z = 6/4 - (1/4)x - (1/4)y

X0 = [0, 0, 0], initial guess. Next iteration (each new value is used as soon as it is computed):
x = 12/5 + (1/5)(0) - (2/5)(0) = 2.4
y = -25/8 - (3/8)(2.4) + (2/8)(0) = -4.025
z = 6/4 - (1/4)(2.4) - (1/4)(-4.025) = 1.90625
X1 = [2.4, -4.025, 1.90625]
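A compact sketch of the sweep above (gauss_seidel is a name chosen for this illustration; the key point is that each update reuses components already computed in the SAME sweep):

```python
def gauss_seidel(A, b, x0, niter):
    # Gauss-Seidel: overwrite x in place, so later rows in a sweep
    # see the values updated earlier in that sweep.
    n = len(b)
    x = list(x0)
    for _ in range(niter):
        for i in range(n):
            s = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = s / A[i][i]
    return x

A = [[5, -1, 2], [3, 8, -2], [1, 1, 4]]
b = [12, -25, 6]
x1 = gauss_seidel(A, b, [0, 0, 0], 1)
print(x1)   # ~ [2.4, -4.025, 1.90625]
```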
How to do Jacobi Method
[ 5 6 7 | 18 ]
[ 9 1 0 | 10 ]
[ 2 3 4 |  9 ]

5x + 6y + 7z = 18
9x + y = 10
2x + 3y + 4z = 9

Solve each equation for its diagonal unknown:
x = 18/5 - (6/5)y - (7/5)z
y = 10 - 9x
z = 9/4 - (1/2)x - (3/4)y

Initial guess: [0, 0, 0]. First iteration (all three updates use only the previous iterate):
x = 18/5 - (6/5)(0) - (7/5)(0) = 3.6
y = 10 - 9(0) = 10
z = 9/4 = 2.25
X1 = [3.6, 10, 2.25]
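A sketch of the same sweep (jacobi is a name chosen here; note this matrix is not diagonally dominant, so Jacobi is not guaranteed to converge on it — the example only reproduces the first iteration):

```python
def jacobi(A, b, x0, niter):
    # Jacobi: every update in a sweep uses ONLY the previous iterate x,
    # collected into x_new before replacing x.
    n = len(b)
    x = list(x0)
    for _ in range(niter):
        x_new = [0.0] * n
        for i in range(n):
            s = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = s / A[i][i]
        x = x_new
    return x

A = [[5, 6, 7], [9, 1, 0], [2, 3, 4]]
b = [18, 10, 9]
x1 = jacobi(A, b, [0, 0, 0], 1)
print(x1)   # [3.6, 10.0, 2.25]
```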
Determinant
Written as a square array of numbers or variables enclosed between two vertical bars. For a 2x2 matrix A = [a b; c d], det(A) = ad - bc.
A = [1 0; 0 1]: det(A) = 1*1 - 0*0 = 1
A = [2 0; 0 3]: det(A) = 2*3 - 0*0 = 6
Forward Elimination Code
def FE(A, b):
    n = len(b)
    for k in range(0, n - 1):            # pivot row
        for i in range(k + 1, n):        # rows below the pivot
            xmulti = A[i][k] / A[k][k]
            for j in range(k + 1, n):
                A[i][j] = A[i][j] - xmulti * A[k][j]
            b[i] = b[i] - xmulti * b[k]
    print(A, b)
Naive Gaussian Elimination Python Code
def NGE(A, b):
    n = len(A)       # number of rows of A
    m = len(A[0])    # number of cols of A
    # forward elimination
    for i in range(n - 1):
        for k in range(i + 1, n):
            xmult = A[k][i] / A[i][i]
            for j in range(i + 1, m):
                A[k][j] = A[k][j] - xmult * A[i][j]
            b[k] = b[k] - xmult * b[i]
    # back substitution (the solution overwrites b)
    b[n-1] = b[n-1] / A[n-1][n-1]
    for i in range(n - 2, -1, -1):
        sum = b[i]
        for j in range(i + 1, n):
            sum = sum - A[i][j] * b[j]
        b[i] = sum / A[i][i]
    print(b)
Write python code to print out the average of the numbers in a list named A.
def avg(A):
    n = len(A)
    total = 0    # named total to avoid shadowing the builtin sum
    for i in range(0, n):
        total = total + A[i]
    average = total / n
    return average
Bisection Method - Python Code
def bisection(f, a, b, nmax, eps):
    fa = f(a)
    fb = f(b)
    if fa * fb >= 0:
        print('The function has the same sign at each endpoint of [a,b] and may not have a root. Code terminated')
        return
    error = b - a
    for n in range(0, nmax + 1):
        error = error / 2
        c = a + error
        fc = f(c)
        if error < eps:
            print('The method has converged to the desired accuracy. The root is ' + str(c) + '.')
            return
        if fa * fc < 0:    # sign change in [a, c]: keep the left half
            b = c
            fb = fc
        else:              # sign change in [c, b]: keep the right half
            a = c
            fa = fc
    print('After ' + str(nmax) + ' iterations, the bisection method returned a root of ' + str(c) + '.')
False position python code:
def falseposition(f, a, b, nmax):
    fa = f(a)
    fb = f(b)
    if fa * fb >= 0:
        print('The function has the same sign at each endpoint of [a,b] and may not have a root. Code terminated.')
        return
    for n in range(0, nmax + 1):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if fa * fc < 0:    # sign change in [a, c]: keep the left part
            b = c
            fb = fc
        else:              # sign change in [c, b]: keep the right part
            a = c
            fa = fc
    print('After ' + str(nmax) + ' iterations, the false position method returned a root of ' + str(c) + '.')
How many iterations of the bisection method would you need to approximate the unique root of f(x) = e^x sin(x) - 1 in the interval [0, pi/2] to an error of 10^-6?
e_n <= (b - a)/2^(n+1)
e_n <= (pi/2 - 0)/2^(n+1) < 10^-6
pi/2^(n+2) < 10^-6
10^6 pi < 2^(n+2)
log_2(10^6 pi) < n + 2
log_2(10^6 pi) - 2 < n
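The final inequality can be evaluated numerically as a sanity check:

```python
# Smallest n with log2(10^6 * pi) - 2 < n.
import math

n_min = math.log2(1e6 * math.pi) - 2
print(n_min)              # about 19.58
print(math.ceil(n_min))   # so n = 20 iterations suffice
```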
Relative Error
The absolute error relative to the true value: |actual - estimated| / |actual|
The bisection method is used for which of the following types of problems
estimating roots of nonlinear equations
Consider the function f(x) = 3x - 1. You begin searching for your root on the interval [0, 1]. Your estimate is x1 = 1/2. You continue producing estimates by bisection method. What is your third estimate, x3?
f(0) = -1, f(1) = 2; f(1/2) = 1/2, so the root is in [0, 1/2].
Now... f(1/4) = -1/4, so the root is in [1/4, 1/2].
Then... the third midpoint is x3 = 3/8 = 0.375
Suppose f(x) = x^2 - 3x - 2 and x0 = 1. Find the first iterate in Newton's Method, x1.
f(1) = -4; f'(x) = 2x - 3, so f'(1) = -1 (slope).
Tangent line: y - y1 = m(x - x1) -> y + 4 = -1(x - 1) -> y = -x - 3
Set y = 0: 0 = -x - 3, so x1 = -3 (next iterate)
Bisection Method: by hand
Have a function f(x) on [a,b]: compute f(a) and f(b) (they must have opposite signs). Take the midpoint c of a and b; f(c) gives our first guess. So if f(a) > 0 and f(b) < 0 and then f(c) < 0, c replaces b: our next interval is [a,c], and we start over by looking for the midpoint again.
[ 3 -13  9   3 | -19 ]
[-6   4  1 -18 | -34 ]
[ 6  -2  2   4 |  16 ]
[12  -8  6  10 |  26 ]
Start with the first column. The ratios |a_i1|/s_i are {3/13, 6/18, 6/6, 12/12} = {3/13, 1/3, 1, 1}; the first largest is 6/6, so row 3 is the pivot row. Subtracting the multiple a_i1/6 of row 3 from each other row (e.g. 1/2 of row 3 from row 1) zeroes the rest of the first column:
[ 0 -12  8   1 | -27 ]
[ 0   2  3 -14 | -18 ]
[ 6  -2  2   4 |  16 ]
[ 0  -4  2   2 |  -6 ]
We keep track of the pivot order with an index vector L: L0 = [1, 2, 3, 4] becomes L1 = [3, 2, 1, 4].
Accuracy of the root in the bisection method
|c_n - r| < (b - a)/2^(n+1). So...
|c_0 - r| < (b - a)/2
|c_1 - r| < (b - a)/4
...
|c_n - r| < (b - a)/2^(n+1)
Calculating unit rounding error
|x - fl(x)| / |x|, where fl() is the floating-point representation of x. It is the relative error one can expect from a single floating-point rounding.