Chapter 5: Roots > Bracketing Methods
desired error (Ea,d) is defined as
Δx/2, where Δx is the bracket width of the final iteration
two stopping criteria of root searching (bisection method)
* the approximate error εa falls below a prespecified stopping criterion εs
* a maximum allowable number of iterations is exceeded
(with bisection, each iteration halves the error)
bracketing methods include
- bisection method
- false position
bracketing methods
- making two initial guesses that bracket the root, i.e., one on either side of it
for the bisection method the approximate error is reduced by a factor of ___ after each iteration
2
Although we have emphasized the use of relative errors for obvious reasons, there will be cases where (usually through knowledge of the problem context) you will be able to specify an absolute error. For these cases, bisection along with Eq. (5.6) can provide a useful root-location algorithm.
:P
Ea provides an upper bound for Et. For the bound to be exceeded, the true root would have to fall outside the bracketing interval, which by definition can never occur for bisection.
:{ (this is one benefit of the bisection method)
the false position method uses the value of xr computed with Eq. (5.7) to replace whichever of the two initial guesses, xl or xu, yields a function value with the same sign as f(xr)
In this way the values of xl and xu always bracket the true root. The process is repeated until the root is estimated adequately.
why does the true error vary by a lot in bisection method? why is it ragged?
The "ragged" nature of the true error is due to the fact that, for bisection, the true root can lie anywhere within the bracketing interval.
when are the true and approximate errors far apart? when are they close?
The true and approximate errors are far apart when the interval happens to be centered on the true root. They are close when the true root falls at either end of the interval.
what is a function function
a function that operates on other functions, which are passed to it as input arguments. The function that is passed to the function function is referred to as the passed function. A simple example is the built-in function fplot, which plots the graphs of functions.
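The book's example is MATLAB's fplot; as a rough Python analogue of a function function (the name `tabulate` and the demo numbers are made up for illustration), here is a function that receives a passed function as an argument:

```python
def tabulate(f, xl, xu, n):
    """A simple "function function": evaluates the passed function f
    at n equally spaced points between xl and xu."""
    step = (xu - xl) / (n - 1)
    return [(xl + i * step, f(xl + i * step)) for i in range(n)]

# the lambda is the "passed function" here
pts = tabulate(lambda x: x**2 - 2, 0.0, 2.0, 5)
```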
if f(xl) and f(xu) have different signs, then how many roots are possible
at least one; in fact there will be an odd number of roots in the interval
absolute error for bisection method is
Ea^n = Δx⁰/2^n: it is *solely* dependent on the absolute error at the start of the process (the gap between the two initial guesses, Δx⁰) and the number of iterations n.
why is the desired error useful
because if we know the desired error beforehand, we can calculate the number of iterations needed a priori: n = log2(Δx⁰/Ea,d) (Eq. 5.6).
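This a priori calculation can be sketched in Python, rounding n = log2(Δx⁰/Ea,d) up to a whole iteration (the function name and example numbers are mine):

```python
import math

def bisection_iterations(xl, xu, ead):
    """Iterations n needed so the bisection error bound
    (xu - xl) / 2**n falls to the desired absolute error ead."""
    return math.ceil(math.log2((xu - xl) / ead))

# bracket of width 1 and a desired absolute error of 0.001
n = bisection_iterations(0.0, 1.0, 0.001)   # -> 10
```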
approaches that allow one to guess the root based on trial and error are
- bracketing methods
- open methods
difference between open and bracketing methods?
bracketing methods home in (converge) on the root slowly through successive iterations, while open methods don't always work because they can diverge. However, when open methods do work, they usually converge more quickly. Bracketing methods also require two initial guesses that bracket the root, whereas open methods do not.
how do you find the roots for tangential equations or discontinuous equations?
by coupling graphical methods with bracketing methods such as bisection
brackets are found how?
by forming two guesses, xl and xu, between which the function changes sign
incremental search
capitalizes on the sign-change observation (the T/F card above) by locating an interval where the function changes sign
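A minimal Python sketch of an incremental search (the function name and demo values are mine): it scans the interval in fixed steps and records every subinterval where the sign changes.

```python
def incremental_search(f, xl, xu, dx):
    """Scan [xl, xu] in steps of dx and return each (a, b) subinterval
    where f changes sign -- every one brackets at least one root."""
    brackets = []
    a = xl
    fa = f(a)
    while a + dx <= xu:
        b = a + dx
        fb = f(b)
        if fa * fb < 0:          # sign change -> root bracketed
            brackets.append((a, b))
        a, fa = b, fb
    return brackets

# f(x) = x^2 - 5 has roots near -2.236 and 2.236
found = incremental_search(lambda x: x**2 - 5, -5.0, 5.0, 0.5)
```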
what price do you pay for a smaller step size in Euler's method?
computational price!
what is linspace()
creates a row vector of n equally spaced points between xl and xu: x = linspace(xl, xu, n)
why will |εa| always be greater than |εt| for bisection
each time an approximate root is located with bisection as xr = (xl + xu)/2, we know that the true root lies somewhere within an interval of Δx = xu − xl. Therefore, the root must lie within ±Δx/2 of our estimate. (look at next card)
if f(xl) and f(xu) have the same sign, then how many roots are possible
either no roots or an even number of roots
how do you solve for implicit parameters?
by reformulating the equation as f(x) = 0 and finding its roots (graphical/bisection/other bracketing methods)
Although the approximate error does not provide an exact estimate of the true error, it captures ...... of the true error
general downward trend
what is an anonymous function
a handy means to define simple user-defined functions without developing a full-blown M-file; they are created with function handles, e.g. f = @(x) x.^2
how will the choice of increment length pose a problem for incremental search?
if the length is *too small*, the search can be very time consuming. if the length is *too great*, there is a possibility that closely spaced roots might be missed.
false position is similar to the bisection method except that it uses a different strategy to come up with its new root estimate. Rather than bisecting the interval,
it locates the root by joining f(xl) and f(xu) with a straight line (Fig. 5.8). The intersection of this line with the x axis represents an improved estimate of the root.
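A Python sketch of this chord-intercept idea (the stopping test on the approximate relative error and the demo function are my additions, not from the card):

```python
def false_position(f, xl, xu, es=1e-6, max_it=100):
    """False position: join f(xl) and f(xu) with a straight line and
    take the line's x intercept as the new root estimate."""
    fl, fu = f(xl), f(xu)
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)   # chord's x intercept
        fr = f(xr)
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if fl * fr < 0:          # root in lower subinterval
            xu, fu = xr, fr
        else:                    # root in upper subinterval
            xl, fl = xr, fr
    return xr

# f(x) = x^2 - 2 on [1, 2]: converges to sqrt(2)
root = false_position(lambda x: x**2 - 2, 1.0, 2.0)
```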
false position AKA
linear interpolation method
number of function input arguments (nargin)
nargin returns the number of function input arguments given in the call to the currently executing function. Use this syntax in the body of a function only.
are the two latter statements always true? If not, when are they not?
no, they're not true when you have functions that are tangential to the x axis or when they are discontinuous
are graphical methods best option?
not really, because they lack precision. But they can be used to obtain rough estimates of roots (where roots may lie and where some root-finding methods may fail)
What's the second benefit of the bisection method?
the number of iterations required to attain a desired absolute error can be computed a priori from the prescribed error tolerance and the initial guesses
open methods
require one or more initial guesses, but there is no need for them to bracket the root
what is a passed function
the function that is passed as an input argument to a function function
false position often performs better than bisection, but there are cases where it does not. (1/2)
slow convergence of the false position method on f(x) = x^10 − 1: the root lies close to xu rather than xl, and there is significant curvature in the plot, so the straight-line estimates creep toward the root and it converges more slowly than bisection
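A small Python experiment illustrating this (the bracket [0, 1.3] and the 0.5% stopping tolerance are my assumptions, not stated on this card), counting iterations until the approximate relative error falls below the tolerance:

```python
def root_iterations(step, f, xl, xu, es=0.005, max_it=200):
    """Count iterations until the approximate relative error drops below es."""
    xr = xl
    for count in range(1, max_it + 1):
        xr_old = xr
        xr = step(xl, xu, f)
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            return count
        if f(xl) * f(xr) < 0:    # root in lower subinterval
            xu = xr
        else:                    # root in upper subinterval
            xl = xr
    return max_it

def bisect_step(xl, xu, f):
    return (xl + xu) / 2         # bisection: midpoint

def chord_step(xl, xu, f):
    # false position: x intercept of the line joining f(xl) and f(xu)
    return xu - f(xu) * (xl - xu) / (f(xl) - f(xu))

f = lambda x: x**10 - 1          # true root at x = 1
n_bisect = root_iterations(bisect_step, f, 0.0, 1.3)
n_false = root_iterations(chord_step, f, 0.0, 1.3)
```

Because of the curvature, the chord's intercept stays far from the root for many iterations, so false position needs noticeably more iterations than bisection here.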
how to come up with initial guesses for bracketing and open methods?
sometimes it is intuitive based on the problem; other times it won't be so obvious, and one way to get a guess is through an *incremental search*
Eulers method
substituting a finite-difference approximation for the derivative into the differential equation, which gives *new = old + slope × step*
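A minimal Python sketch of this update rule (the function name, test ODE, and step size are mine, not from the card):

```python
def euler(f, t0, y0, h, n):
    """Euler's method for dy/dt = f(t, y):
    y[i+1] = y[i] + f(t[i], y[i]) * h, i.e. new = old + slope * step."""
    t, y = t0, y0
    ys = [y0]
    for _ in range(n):
        y = y + f(t, y) * h      # new = old + slope * step
        t = t + h
        ys.append(y)
    return ys

# dy/dt = y, y(0) = 1: Euler approximates e^t (here (1.1)^10 at t = 1)
approx = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```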
what are examples of roots that might be missed regardless of the incremental length?
tangential roots and roots of even multiplicity (e.g., double roots), where the function touches the x axis without crossing it
what is a potential problem with an incremental search?
the choice of the increment length.
why is stopping the bisection method when the approximate error falls below the error criterion εs = 0.5 acceptable?
because the computation can then be terminated with confidence that the root is known to be at least as accurate as the prespecified acceptable level.
the dependent variables represent the ...... while the parameters represent ......
the state or performance of the system; its properties or composition
(T/F) if f(x) is continuous on the interval [xl, xu] and f(xl) and f(xu) are opposite in sign, that is, f(xl)f(xu) < 0, then there is *at least* 1 real root within the interval
true
(T/F) Bisection does not take into account the shape of the function!
true, it can be good or bad depending on the function
(T/F) for bisection method approximate error (Ea) is always greater than true error (Et)
true; it's because of the true error's ragged nature. (look at next card)
varargin
varargin is an input variable in a function definition statement that enables the function to accept any number of input arguments. When the function executes, varargin is a 1-by-N cell array, where N is the number of inputs that the function receives after the explicitly declared inputs. However, if the function receives no inputs after the explicitly declared inputs, varargin is an empty cell array.
bisection method
a variation of the incremental search method in which the interval is always divided in half. If a function changes sign over an interval, the function value at the midpoint is evaluated. The location of the root is then determined as lying within the subinterval where the sign change occurs. This subinterval then becomes the interval for the next iteration. The process is repeated until the root is known to the required precision.
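A compact Python sketch of the algorithm just described (the function name, tolerance, and demo equation are my choices):

```python
def bisect(f, xl, xu, es=1e-8, max_it=100):
    """Bisection: halve the bracket, keep the half where the sign changes."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = (xl + xu) / 2       # evaluate the midpoint
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:    # sign change in the lower half
            xu = xr
        else:                    # sign change in the upper half
            xl = xr
    return xr

# f(x) = x^2 - 2 on [0, 2]: converges to sqrt(2)
root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
```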
implicit
when you cannot solve for a variable because an equation won't allow you to isolate it