Quiz 2 (Lectures 5-7) Numerical Methods
Lecture 6
Advanced Root Finding
Golden Ratio
Denoted phi (φ). Euclid defined it by (l1 + l2)/l1 = l1/l2 = φ. This leads to φ^2 - φ - 1 = 0, and φ is the positive root of that quadratic: φ = (1 + sqrt(5))/2 ≈ 1.6180
Lecture 7
Optimization
Secant Method
x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i)/(f(x_{i-1}) - f(x_i)). Two initial estimates of x are required, but they do not need to bracket the root
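A minimal MATLAB sketch of the iteration; the test function, the starting pair, and the tolerance are assumptions for illustration.

    % Secant method: solve f(x) = 0 from two starting values
    f = @(x) x.^3 - 6*x.^2 + 11*x - 6.1;   % assumed test function
    x0 = 2.5; x1 = 3.5;                    % two estimates (need not bracket the root)
    for i = 1:50
        x2 = x1 - f(x1)*(x0 - x1)/(f(x0) - f(x1));   % secant update
        if abs((x2 - x1)/x2) < 1e-8, break, end      % approximate relative error
        x0 = x1; x1 = x2;
    end
    x2   % root estimate (about 3.047 for this f)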
Using the Incremental Search Method
How do we find the two initial guesses? 1. Plot the function 2. Use the incremental search tool
Graphical Methods
Useful for obtaining quick rough estimates
General Bracketing Cases
Ways that roots may occur: 1. If two guess points have the same sign, then there are an even number of roots (possibly zero) between them 2. A single root may be bracketed by negative and positive values 3. Three roots may be bracketed by a single pair of negative and positive values
Bracketing Methods AKA two point methods
Bisection and False Position require two initial guesses to surround the root
Lecture 5
Root Finding
Open Methods Key Points
1. Based on formulas that require only a single starting value of x or two starting values that do not necessarily bracket the root 2. May diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods
Visualizing a 2D Function
1. Create vectors x and y which span the range to be plotted a. x=linspace(x1,x2,nx); y=linspace(y1,y2,ny); 2. Create a grid on which the function will be evaluated using the meshgrid command a. [X,Y]=meshgrid(x,y); 3. Define the function to be evaluated at each point in [X,Y] a. Z=2+X-Y+2*X.*Y+Y.^2; 4. Plot the function using the mesh, meshc, or surfc command a. mesh(X,Y,Z)
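Putting the four steps together; the plotted range and grid resolution below are assumptions chosen for illustration.

    % Visualize f(x,y) = 2 + x - y + 2xy + y^2 over an assumed range
    x = linspace(-2, 0, 40);            % assumed x-range
    y = linspace(0, 3, 40);             % assumed y-range
    [X, Y] = meshgrid(x, y);            % grid of evaluation points
    Z = 2 + X - Y + 2*X.*Y + Y.^2;      % elementwise evaluation on the grid
    mesh(X, Y, Z)                       % meshc or surfc add contour lines
    xlabel('x'), ylabel('y'), zlabel('f(x,y)')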
Golden-Section Search
1. Find the minimum of the function f(x)=x^2/10-2sin(x) on 0<=x<=4 a. Compute d=(φ-1)(xu-xl), then x1=xl+d and x2=xu-d b. Evaluate the function at these two points c. Determine which is smaller d. Estimate the error
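A minimal golden-section sketch for this example; the iteration cap and tolerance are assumptions. For clarity it recomputes both interior points on every pass, whereas the efficient version reuses one of them (see the advantages listed later).

    % Golden-section search: minimize f(x) = x^2/10 - 2 sin(x) on [0, 4]
    f = @(x) x.^2/10 - 2*sin(x);
    phi = (1 + sqrt(5))/2;
    xl = 0; xu = 4;
    for i = 1:100
        d  = (phi - 1)*(xu - xl);   % golden-ratio distance
        x1 = xl + d;                % interior point nearer xu
        x2 = xu - d;                % interior point nearer xl (note x1 > x2)
        if f(x1) < f(x2)
            xl = x2;                % minimum lies in [x2, xu]
        else
            xu = x1;                % minimum lies in [xl, x1]
        end
        if (xu - xl) < 1e-6, break, end
    end
    xopt = (xl + xu)/2              % about 1.4276, where f is about -1.7757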
Parabolic Interpolation
1. Given three points, the approximate maximum can be calculated from x4 = x2 - (1/2)*[(x2 - x1)^2 [f(x2) - f(x3)] - (x2 - x3)^2 [f(x2) - f(x1)]] / [(x2 - x1)[f(x2) - f(x3)] - (x2 - x3)[f(x2) - f(x1)]] 2. Once this approximate maximum x4 is known, we can drop x1 or x3 depending upon the location of the maximum a. If x1<x4<x2, then x3 is dropped b. If x2<x4<x3, then x1 is dropped
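A one-step sketch of the update; the test function (the maximization form of the earlier golden-section example) and the three starting points are assumptions.

    % One parabolic-interpolation step toward the maximum of f
    f  = @(x) 2*sin(x) - x.^2/10;   % assumed test function
    x1 = 0; x2 = 1; x3 = 4;         % three points surrounding the maximum
    x4 = x2 - 0.5*((x2 - x1)^2*(f(x2) - f(x3)) - (x2 - x3)^2*(f(x2) - f(x1))) / ...
              ((x2 - x1)*(f(x2) - f(x3)) - (x2 - x3)*(f(x2) - f(x1)))
    % Drop x1 or x3 so the three retained points still straddle the maximum:
    if x4 > x2
        x1 = x2; x2 = x4;           % x2 < x4 < x3: drop x1
    else
        x3 = x2; x2 = x4;           % x1 < x4 < x2: drop x3
    end

For these starting points x4 comes out near 1.5055, already close to the true maximum at about 1.4276.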
Issues with Incremental Search
1. If the increment length is too small, the search can be very time consuming 2. If the increment length is too great, the algorithm may miss roots or bracket regions with multiple roots 3. Tangent points and discontinuous functions are not handled and may result in brackets with more than one root
Golden-Section Method
1. In determining the minimum of a function, the golden ratio helps to minimize computational cost in searching for the minimum *Note that x1 is always greater than x2 2. Our goal is to reduce the range over which we are searching at each iteration while keeping the point at which the minimum value occurs within the range 3. If f(x1)>f(x2), then the function is decreasing from x1 toward x2 and the minimum must be between xl and x1, so we set xu=x1 4. Similarly, if f(x1)<f(x2), then the minimum must be between x2 and xu, so we set xl=x2
Basic Objectives
1. Knowing how to determine a root graphically 2. Understanding the incremental search method and its shortcomings 3. Knowing how to solve a roots problem with the bisection method 4. Knowing how to estimate the error of bisection and why it differs from error estimates for other types of root location algorithms 5. Understanding the false position method and how it differs from bisection
Newton-Raphson Method AKA Newton Method
1. The most widely used root-finding method; iterates via x_{i+1} = x_i - f(x_i)/f'(x_i) 2. Convenient for functions whose derivatives can be evaluated analytically 3. Inflection points and maxima/minima can result in poor convergence or failure to converge: a. An inflection point occurs in the vicinity of a root b. Oscillations around a local maximum or minimum c. The initial guess is close to a max/min (near-zero slope), causing the estimate to jump to a location several roots away d. A zero slope causes the algorithm to fail (division by zero)
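A minimal sketch of the iteration above; the function, its derivative, the starting guess, and the tolerance are assumptions.

    % Newton-Raphson: requires f and its analytic derivative
    f  = @(x) exp(-x) - x;          % assumed test function
    df = @(x) -exp(-x) - 1;         % its derivative
    x  = 0;                         % single initial guess
    for i = 1:50
        xnew = x - f(x)/df(x);      % Newton-Raphson update
        if abs((xnew - x)/xnew) < 1e-8, break, end
        x = xnew;
    end
    x   % converges to about 0.5671 for this f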
Bracketing Cases: Exceptions
1. Tangent points. Here we have end points of opposite signs, but there are an even number of roots between bounds 2. Discontinuous functions. Here we have two end points of opposite signs but still have an even number of roots
False Position vs. Secant
1. The False Position Method and the Secant Method are very similar 2. The False Position Method will always converge since it brackets the root 3. The Secant Method can diverge, since it does not bracket the root between positive and negative values
Root Finding Questions and Answers
1. What would you do if you were asked to find a real root? a. Apply the algorithm and iterate until the error is close to 0% 2. What happens if you choose an initial value which is far from an actual root? a. The first iteration can shoot way off and take many iterations to converge (or never converge) 3. What if a function has more than one root and you're asked to "find the real roots"? a. Graph it, get an approximate idea of where each root is and apply the algorithm for each root (choosing a different initial estimate)
Incremental Search Method
A numerical tool used by bracketing methods to identify brackets (NOT A BRACKETING METHOD!). The basic idea: the function changes sign on either side of a root, so we step along in small increments and look for sign changes.
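A minimal sketch of the idea; the function, search range, and increment are assumptions. Note the tie-in to the issues listed earlier: too coarse an increment can step over closely spaced roots.

    % Incremental search: flag subintervals where f changes sign
    f  = @(x) sin(10*x) + cos(3*x);   % assumed test function
    xs = 3:0.1:6;                     % assumed range and increment
    fs = f(xs);
    idx = find(sign(fs(1:end-1)) ~= sign(fs(2:end)));   % sign changes
    brackets = [xs(idx)' xs(idx+1)']  % each row brackets at least one root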
Bisection Method
A variation of the incremental search method in which the interval from xl to xu is always divided in half. Steps: 1. Choose xl and xu such that they bound the root of interest (the function changes sign over the interval); check that f(xl)*f(xu)<0 2. Calculate the midpoint (the new estimate of the root): xr=(xl+xu)/2 3. Determine in which subinterval the root lies: a. If f(xl)*f(xr)<0, the root lies in the lower subinterval. Set xu=xr and return to step 2 b. If f(xl)*f(xr)>0, the root lies in the upper subinterval. Set xl=xr and return to step 2 c. If f(xl)*f(xr)=0, xr is the root. Terminate the algorithm
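A minimal sketch of these steps; the function, bracket, and tolerance are assumptions.

    % Bisection: halve the bracket [xl, xu] until it is small enough
    f  = @(x) x.^3 - 10;             % assumed test function
    xl = 2; xu = 3;                  % f(xl) < 0 < f(xu)
    if f(xl)*f(xu) >= 0, error('Initial guesses do not bracket a root'), end
    for i = 1:50
        xr = (xl + xu)/2;            % midpoint estimate of the root
        test = f(xl)*f(xr);
        if test < 0
            xu = xr;                 % root lies in the lower subinterval
        elseif test > 0
            xl = xr;                 % root lies in the upper subinterval
        else
            break                    % f(xr) is exactly zero
        end
        if (xu - xl) < 1e-8, break, end
    end
    xr   % approximates 10^(1/3) = 2.1544...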
Bisection Method Advantages and Disadvantages
Advantages 1. Easy to understand 2. Easy to implement 3. Always finds a root 4. The number of iterations required to attain a given absolute error can be computed a priori (in advance) Disadvantages 1. Relatively slow 2. Requires two guesses that bound the root 3. Multiple roots can cause problems 4. Doesn't consider the values of f(xl) and f(xu) (if f(xl) is closer to zero, the root is likely closer to xl)
Parabolic Interpolation Advantages and Disadvantages
Advantages: - Converges faster than golden-section search Disadvantages: - Unreliable; convergence is not guaranteed
Golden-Section Method Advantages and Disadvantages
Advantages: - We do not need to calculate both x1 and x2 after the first iteration (one interior point is reused) - Reliable and computationally efficient Disadvantages: - Slow (the interval shrinks by a fixed factor of about 0.618 per iteration)
False Position Method Advantages and Disadvantages
Advantages: 1. Faster than bisection 2. Always converges to a single root Disadvantages: 1. The method is one-sided; one of the bracketing points tends to stay fixed, which can lead to poor convergence for functions with a significant amount of curvature
False Position
Also called the linear interpolation method and Regula Falsi (straight line of falsehood). Based on the (false) assumption that the function can be approximated by a straight line. Steps: 1. Find a pair of values xl and xu such that f(xl) and f(xu) have different signs 2. Estimate the value of the root as xr = xu - f(xu)(xl - xu)/(f(xl) - f(xu)) and evaluate f(xr) 3. Use the new point to replace one of the original points, keeping the two points on opposite sides of the x axis: a. If f(xr)*f(xl)<0, then xu=xr b. If f(xr)*f(xl)>0, then xl=xr c. If f(xr)=0, then you have found the root and need go no further!
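The same assumed problem as in the bisection sketch, solved with false position; the tolerance is an assumption.

    % False position: replace the midpoint with a linear-interpolation estimate
    f  = @(x) x.^3 - 10;             % assumed test function
    xl = 2; xu = 3;                  % f changes sign over [xl, xu]
    xr = xl;
    for i = 1:100
        xrold = xr;
        xr = xu - f(xu)*(xl - xu)/(f(xl) - f(xu));   % step 2
        if f(xr)*f(xl) < 0
            xu = xr;                 % step 3a
        elseif f(xr)*f(xl) > 0
            xl = xr;                 % step 3b
        else
            break                    % step 3c: landed exactly on the root
        end
        if abs((xr - xrold)/xr) < 1e-10, break, end
    end
    xr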
Bisection Method Error
Approximate Error = |present - previous|, or Percent Approximate Relative Error = |(present - previous)/present| * 100%
Textbook Example 5.1
Determine the mass, m, of a bungee jumper with a drag coefficient, cd, of 0.25 [kg/m] which results in a velocity of 36 [m/s] after 4 [s] of free fall. v(t)=36 [m/s], cd=0.25 [kg/m], t=4 [s], g=9.81 [m/s^2], v(t) = sqrt(g*m/cd)*tanh(sqrt(cd*g/m)*t)
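A sketch of one way to solve this; it uses MATLAB's fzero (discussed under Brent's Method below), and the starting guess of 140 kg is an assumption.

    % Example 5.1: find the mass m at which v(4) = 36 m/s
    g = 9.81; cd = 0.25; t = 4; v = 36;
    f = @(m) sqrt(g*m/cd).*tanh(sqrt(cd*g./m)*t) - v;   % root of f is the mass
    m = fzero(f, 140)   % about 142.74 kg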
Brent's Method
Developed by Richard Brent (1973). Combines the robustness of the bracketing methods (bisection) with the speed of the open methods (secant and/or inverse quadratic interpolation). It is the basic principle behind MATLAB's fzero function.
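A quick usage sketch; the function and guesses are assumptions. fzero accepts either a single starting value or a two-element bracket over which the function changes sign.

    % Two ways to call fzero
    r1 = fzero(@(x) cos(x) - x, 1)       % single initial guess
    r2 = fzero(@(x) cos(x) - x, [0 1])   % bracketing interval
    % both return about 0.7391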
One-Dimensional Optimization
It is possible for a function to have several local maxima and minima, but there is always one global maximum and one global minimum. The techniques we will learn in this lecture focus on finding the minimum value of a function (a maximum can be found by minimizing -f(x)).
Modified Secant Method
A method intended for functions whose derivatives are difficult to evaluate analytically; the derivative is approximated using a small perturbation fraction δ: f'(xi) ≈ [f(xi + δxi) - f(xi)]/(δxi) So what is an appropriate value of δ? a. Too small -> round-off error issues b. Too big -> inefficient or divergent
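A minimal sketch; the function, starting guess, and perturbation fraction δ are assumptions.

    % Modified secant: one guess plus a small fractional perturbation delta
    f     = @(x) x.^3 - 6*x.^2 + 11*x - 6.1;   % assumed test function
    x     = 3.5;                               % single initial guess
    delta = 0.01;                              % assumed perturbation fraction
    for i = 1:50
        dx   = delta*x;                        % delta * x_i
        xnew = x - dx*f(x)/(f(x + dx) - f(x)); % finite-difference Newton step
        if abs((xnew - x)/xnew) < 1e-8, break, end
        x = xnew;
    end
    x   % converges to the root near 3.047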
Open Methods
Newton-Raphson and the Secant Method proceed by systematic trial and error from one initial guess (Newton-Raphson) or two guesses that need not bracket the root (Secant).
Finding Maximums and Minimums
The first derivative of a function is its slope; a zero slope indicates a local extremum. The second derivative indicates whether the local extremum is a maximum (f''<0) or a minimum (f''>0).
Inverse Quadratic Interpolation
A vertical parabola y = f(x) fit through three points may never cross the x axis; the solution is to calculate a "sideways" parabola through the same three points and evaluate it at y = 0 - Quadratic fit (parabola): y = f(x) - "Sideways" parabola: x = f(y) x_{i+1} = [y_{i-1}*y_i/((y_{i-2} - y_{i-1})(y_{i-2} - y_i))]*x_{i-2} + [y_{i-2}*y_i/((y_{i-1} - y_{i-2})(y_{i-1} - y_i))]*x_{i-1} + [y_{i-2}*y_{i-1}/((y_i - y_{i-2})(y_i - y_{i-1}))]*x_i
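A one-step sketch of the formula; the test function and the three points are assumptions.

    % One inverse-quadratic-interpolation step: fit x = g(y), evaluate at y = 0
    f  = @(x) x.^2 - 9;              % assumed test function (roots at +/- 3)
    xa = 1; xb = 2; xc = 4;          % play the roles of x_{i-2}, x_{i-1}, x_i
    ya = f(xa); yb = f(xb); yc = f(xc);
    xnew = yb*yc/((ya - yb)*(ya - yc))*xa ...
         + ya*yc/((yb - ya)*(yb - yc))*xb ...
         + ya*yb/((yc - ya)*(yc - yb))*xc    % about 3.22, near the root at 3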