Chapter 21: Numerical Differentiation


other uses of diff() for testing certain characteristics of vectors are...

1) checking for equal or unequal spacing 2) checking whether a vector is in ascending or descending order

EXAMPLE 21.2 Richardson Extrapolation page 529

*NOTE:* This uses the first-derivative centered difference formula of O(h^2) to calculate D(h2) and D(h1): f′(xi) = [f(xi+1) − f(xi−1)] / (2h). This example yields an exact result because the function being analyzed is a fourth-order polynomial. Also, Richardson extrapolation is equivalent to fitting a *higher-order polynomial* through the data and then evaluating the derivatives by centered divided differences.
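
A minimal MATLAB sketch of the idea (f(x) = x^4 here is an assumed stand-in, not the book's example function):

    f  = @(x) x.^4;                      % any fourth-order polynomial works
    x  = 0.5;  h1 = 0.25;  h2 = h1/2;
    D1 = (f(x+h1) - f(x-h1))/(2*h1);     % centered difference, O(h^2)
    D2 = (f(x+h2) - f(x-h2))/(2*h2);     % centered difference at h1/2
    D  = 4/3*D2 - 1/3*D1                 % Richardson estimate, O(h^4)
    % exact: f'(0.5) = 4*0.5^3 = 0.5, and D matches it exactly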

diff() syntax

-- passed a one-dimensional vector of length n, it returns a vector of length n − 1 containing the differences between adjacent elements (x2 − x1, x3 − x2, ...), which can then be used to determine the *finite divided difference approximations* of first derivatives. Syntax: diff(x)
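
A quick sanity check (made-up vector):

    x = [1 4 9 16];
    diff(x)            % returns [3 5 7]: x(2)-x(1), x(3)-x(2), x(4)-x(3)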

two ways to improve derivative estimates when employing finite differences

1) decrease the step size 2) use a higher-order formula that employs more points

how does gradient() calculate the differences between elements to allow one to evaluate derivatives at original x values?

1) uses a *forward difference* on the first two points to calculate the first point in the returned vector 2) uses a *centered difference*, f′(xi) = [f(xi+1) − f(xi−1)] / (2h), to calculate the intermediate points in the returned vector 3) uses a *backward difference* on the last two points to calculate the last point in the returned vector
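
A small sketch verifying the three rules with made-up data (h = 1):

    y = [0 1 4 9 16];
    g = gradient(y)
    % g(1)   = y(2)-y(1)          = 1       (forward difference)
    % g(2:4) = (y(3:5)-y(1:3))/2  = [2 4 6] (centered differences)
    % g(5)   = y(5)-y(4)          = 7       (backward difference)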

*Derivatives and integrals for data with errors* starts here

:P

*Derivatives of unequally spaced data* starts here

:P

*NOTE* all applications for derivatives discussed up to this point can also be used for partial derivatives :P

:P

*Partial Derivatives* start here

:P

*Richardson Extrapolation* starts here

:P

*Numerical Differentiation with MATLAB* starts here

:P using diff() and gradient()

which differentiation approach is more complicated, Lagrange or finite differences? are there advantages?

Lagrange; but there are advantages: 1) it *doesn't require equally spaced data* 2) you can calculate the derivative at any value in between the data points used to create the interpolating polynomial 3) the derivative with Lagrange is as accurate as the *centered difference approximation*. In fact, setting x = x1 in the Lagrange formula discussed right before this card yields the O(h^2) centered difference approximation

[x,y] = meshgrid() does what

[X,Y] = meshgrid(x,y) returns 2-D grid coordinates based on the coordinates contained in vectors x and y. X is a matrix where each *row is a copy of x*, and Y is a matrix where each *column is a copy of y*. The grid represented by the coordinates X and Y has length(y) rows and length(x) columns.
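
For example (made-up vectors):

    x = -2:1:2;  y = 0:1:3;
    [X, Y] = meshgrid(x, y);   % X: each row is a copy of x
                               % Y: each column is a copy of y
    size(X)                    % [length(y) length(x)] = [4 5]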

gradient() syntax with two dimensional matrices (i.e. partial derivatives)

[fx,fy] = gradient(f, h), where: fx = differences in the x (column) direction; fy = differences in the y (row) direction; h = spacing between points

why is gradient() favorable over diff() sometimes

it allows one to evaluate derivatives at the original x values, unlike diff(), which evaluates derivatives at the midpoints between adjacent values of the original x values

gradient()

given a vector of length n, it also returns differences between values, but the returned vector has length n, unlike with diff(). (More specifics in later cards.) *Spacing between points is assumed to be one.* Syntax with spacing between points = 1: gradient(x). Syntax with spacing between points != 1: gradient(x, h), where h is the spacing between intervals

checking for unequal spacing using diff()

any(diff(diff(x))~=0) takes the differences twice to see whether the second differences all equal 0. If they don't, some points aren't equally spaced. For example, with x = 0:0.1:0.8, diff(x) returns a vector of length n − 1 equal to [0.1 0.1 ...]; taking diff again returns all zeros, meaning the points are equally spaced.
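
A runnable version of the check (integer spacing used here on purpose; with non-integer steps like 0.1, floating-point roundoff can make the second differences slightly nonzero, so a tolerance test may be safer in practice):

    x = 0:2:10;
    any(diff(diff(x)) ~= 0)    % 0 (false): equally spaced
    xu = [0 1 3 4];
    any(diff(diff(xu)) ~= 0)   % 1 (true): unequally spaced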

why does error amplify when there's a slight change in data?

because differentiation is subtractive, and random positive and negative errors tend to add.

why is the *improved* centered finite difference approximation so exact sometimes?

because the improved formula has an O(h^4) error, meaning it is *equivalent to passing a fourth-order polynomial through the data points*

why do the finite difference approximations derived back in chapter 4 have the accuracy they do, e.g., O(h^2)?

because of the number of terms of the Taylor series that were retained during the derivation of the formulas.

derivative equation when h2 = h1/2, D ≈ (4/3)D(h2) − (1/3)D(h1): which difference approximation is this used for? what order is it at now? what order was it before?

the centered difference approximation; the order improved from O(h^2) to O(h^4)

using diff() to check whether a vector is in ascending or descending order

considering that diff() computes x2 − x1, if a vector is in ascending order, the values returned by diff() should all be positive; if they are all negative, the vector is in descending order. Use less-than-or-equal (or greater-than-or-equal) comparisons to check whether some points are the same.
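
One way to express the checks (made-up vector):

    x = [1 2 2 5];
    all(diff(x) >  0)          % 0: not strictly ascending (repeated point)
    all(diff(x) >= 0)          % 1: ascending if ties are allowed
    all(diff(x) <  0)          % 0: not descending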

which is more accurate diff() or gradient()?

diff() is more accurate because it uses smaller intervals to obtain the derivatives, while gradient() uses intervals twice the size of those used by diff()

the mathematical definition of the derivative begins with a

difference approximation

(2/5)

don't forget the dot operator when doing divided differences using diff(). diff(y)./diff(x) makes sense because dy/dx is the derivative.

partial derivatives using centered differences:

∂f/∂x ≈ [f(xi+1, yj) − f(xi−1, yj)] / (2Δx)   (eq 21.22)
∂f/∂y ≈ [f(xi, yj+1) − f(xi, yj−1)] / (2Δy)   (eq 21.23)

mixed partial derivative as a finite difference approximation step 2) use finite differences to evaluate each of the partial derivatives in y

that is, apply eq 21.23 to eq 21.25 (previous card); the result is eq 21.26

mixed partial derivative as a finite difference approximation step 3) simplify eq 21.26 to yield

eq 21.27: ∂²f/∂x∂y ≈ [f(xi+1, yj+1) − f(xi+1, yj−1) − f(xi−1, yj+1) + f(xi−1, yj−1)] / (4ΔxΔy)
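
A quick numerical check of eq 21.27 with an assumed test function, f(x, y) = x^2*y^3, whose exact mixed partial is 6xy^2:

    f = @(x,y) x.^2 .* y.^3;
    x = 1;  y = 2;  dx = 1e-4;  dy = 1e-4;
    mixed = ( f(x+dx,y+dy) - f(x+dx,y-dy) ...
            - f(x-dx,y+dy) + f(x-dx,y-dy) ) / (4*dx*dy)
    % analytic value: 6*x*y^2 = 24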

(T/F) Lagrange interpolating polynomial requires equally spaced data points

false; it doesn't require equally spaced data points.

(T/F) Richardson extrapolation and the finite difference approximations in the tables above require unequally spaced data

false; they require equally spaced data

one way to handle nonequispaced (unequally spaced) data is to

fit a Lagrange interpolating polynomial to a set of adjacent points that bracket the location at which you want to evaluate the derivative. The resulting polynomial can then be differentiated analytically to yield a formula that can be used to estimate the derivative

mixed partial derivative

for higher-order partial derivatives, we might want to differentiate a function with respect to two or more different variables, e.g., taking the partial derivative of f(x, y) with respect to both independent variables: ∂²f/∂x∂y = ∂/∂x(∂f/∂y)   (eq 21.24)

why does richardson extrapolation require equally spaced data

so that the successively halved intervals (h2 = h1/2) can be generated

which of the two functions just described is well suited for partial derivatives?

gradient()

EXAMPLE 21.1 High-Accuracy Differentiation Formulas page 526 (1/3)

have

a high value of the second derivative means what

high curvature

Richardson extrapolation works better and makes more accurate approximations w (higher/lower) order polynomials

higher

what can a gradient vector tell you in the case of mountain climbing

if plotting the positive gradient vector, it can tell you the steepest route to the peak of the mountain; if plotting the negative of the gradient, it will tell you the path a rolling ball would take going downhill

how does gradient(x, h) work

if the spacing between points is not 1, the function divides all the resulting differences by h to get the derivatives at the actual values

keeping the second-derivative term in the forward difference approximation does what to the original equation?

improves its accuracy to O(h^2)

forward difference approximation w/ Taylor expansion derivation: what's the error/accuracy?

in chapter 4 the second-derivative term was truncated, giving the forward difference approximation f′(xi) = [f(xi+1) − f(xi)] / h, which is O(h)

how was the improved forward finite difference approximation derived?

instead of deleting the second-derivative term from the Taylor series expansion, the second derivative is substituted with its forward difference approximation, f″(xi) ≈ [f(xi+2) − 2f(xi+1) + f(xi)] / h², and the result is then solved for f′(xi). See the card "forward difference approximation w/ Taylor expansion derivation" for the Taylor series expansion formula

primary approach for determining derivatives for imprecise data is to use

use least-squares regression to fit a smooth, differentiable function to the data. In the absence of other information about the data, a lower-order polynomial regression might be a good first choice.

if the polynomial is of lower order will Richardson extrapolation work exact

no; it'll make an improved estimate, but it won't be exact. The approach can be applied iteratively, using a Romberg algorithm, until the result falls below an acceptable error criterion (like Richardson extrapolation).

producing

the negatives of the resultant partial derivatives are displayed, meaning they are POINTING DOWNHILL. The function's peak occurs at x = −1 and y = 1.5, and the function drops away in all directions. As indicated by the *lengthening arrows*, the gradient drops off more *steeply* to the *northeast and the southwest*.

do changes in data cause huge changes/errors in integration? why or why not?

no because integration is a summing process, meaning random negative and positive errors will cancel out, whereas differentiating is a subtractive process, meaning the random negative and positive errors will add up!

(3/5)

note that when using diff(), considering that it returns the differences between adjacent elements in a vector of length *n − 1*, the new x values corresponding to the divided-difference derivatives are the *midpoints between adjacent elements in the original x vector*. Calculate the new x values with xm = (x(1:n-1) + x(2:n))./2; don't forget the dot operator.
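
A sketch of the workflow (sin(x) is an assumed stand-in, not the Example 21.4 function):

    x  = 0:0.1:0.8;
    y  = sin(x);
    d  = diff(y)./diff(x);            % n-1 divided-difference estimates
    n  = length(x);
    xm = (x(1:n-1) + x(2:n))./2;      % midpoints between adjacent x values
    plot(xm, d, 'o', xm, cos(xm))     % compare against the true derivative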

*visualizing fields case study* with gradient() quiver() contour() will be explained in the following cards

page 538

Fourier's law of heat conduction

quantifies the observation that heat flows from regions of high to low temperature: q(x) = −k dT/dx, where q(x) = heat flux (W/m²), k = coefficient of thermal conductivity [W/(m·K)], T = temperature (K), and x = distance (m). Meaning: *the derivative, or gradient, provides a measure of the intensity of the spatial temperature change, which drives the transfer of heat*
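
A small sketch of estimating the flux from measured temperatures (all values made up):

    k  = 0.5;                         % thermal conductivity, W/(m*K)
    dx = 0.01;                        % spacing between measurements, m
    T  = [100 90 82 75 70 66];        % temperatures, K (made-up data)
    q  = -k * gradient(T, dx)         % Fourier's law: q = -k*dT/dx, W/m^2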

differentiation

represents the rate of change of a dependent variable with respect to an independent variable

*summary* of *centered* finite difference formulas for higher order accuracies

second equation is more accurate because it incorporates more terms of the Taylor Series expansion page 529

*summary* of *forward* finite difference formulas for higher order accuracies

second equation is more accurate because it incorporates more terms of the Taylor Series expansion page 527

*summary* of *backward* finite difference formulas for higher order accuracies

second equation is more accurate because it incorporates more terms of the Taylor Series expansion page 528

mixed partial derivative as a finite difference approximation step 1) form a difference in x of the partial derivatives in y

that is, apply eq 21.22 to eq 21.24; the result is eq 21.25

the second derivative tells us what

the rate of change of the *slope*, AKA the curvature; it tells whether an extremum is a maximum or a minimum!

what would a differentiated lagrange interpolating polynomial for three points look like (3)

the resulting polynomial would be second order (use the formulas from chapter 17 to get the polynomial). Differentiating it analytically gives f′(x) = y0(2x − x1 − x2)/[(x0 − x1)(x0 − x2)] + y1(2x − x0 − x2)/[(x1 − x0)(x1 − x2)] + y2(2x − x0 − x1)/[(x2 − x0)(x2 − x1)], where (x0, y0), (x1, y1), (x2, y2) are the three points and x is the value at which you want to estimate the derivative. A sketch implementing this appears below.
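
A minimal MATLAB sketch of that formula (the function name dlagrange3 is made up for this card):

    % derivative of the 2nd-order Lagrange polynomial through
    % (x0,y0), (x1,y1), (x2,y2), evaluated at x
    function dy = dlagrange3(x0, x1, x2, y0, y1, y2, x)
        dy = y0*(2*x - x1 - x2)/((x0 - x1)*(x0 - x2)) ...
           + y1*(2*x - x0 - x2)/((x1 - x0)*(x1 - x2)) ...
           + y2*(2*x - x0 - x1)/((x2 - x0)*(x2 - x1));
    end

With equally spaced points and x = x1, this collapses to the O(h^2) centered difference (y2 − y0)/(2h).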

(2/2)

the results are not as accurate as those obtained with the diff() function in Example 21.4. This is because gradient() employs intervals that are twice as wide (0.2) as those used for diff() (0.1).

derivative graph

the slope of the tangent to the curve at xi

aside from unequal spacing, what's another problem with empirical data ?

there are errors present, and differentiating only amplifies that error

The vector which represents the steepest slope has a magnitude of.... and direction of....

magnitude = sqrt[(∂f/∂x)² + (∂f/∂y)²] and direction θ = tan⁻¹[(∂f/∂y)/(∂f/∂x)], where θ is the angle measured counterclockwise from the x axis

Richardson extrapolation h2 = h1/2

D ≈ (4/3)D(h2) − (1/3)D(h1); this formula is convenient when expressed as a computer algorithm

(T/F) A function that depends on two variables is a *surface* rather than a curve. (i.e. partial derivatives)

true

(T/F) Lagrange is as accurate as centered difference approximation

true

partial derivatives

used for functions that depend on more than one variable; derived by holding all but one variable constant and differentiating with respect to the remaining variable.

quiver() syntax

used to make a gradient vector plot: quiver(x,y,u,v), where x, y are matrices containing the position coordinates and u, v are matrices containing the partial derivatives

richardson extrapolation

uses two derivative estimates to compute a third, more accurate approximation. There's a more common formula in the next card for when h2 = h1/2.

when does the difference approximation become a derivative?

when the Δx in the difference approximation tends to zero, as it does in figures (a) through (c)

when can we assure that we use evenly spaced data ?

when we are given a function, since we can evaluate it wherever we like. If we are given tabulated data, we can't ensure that, because we can't manipulate the points without creating more error

develop a contour plot of the results

contour(x, y, z), where x = x points, y = y points, z = function values at x and y
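
Putting the case-study pieces together, a minimal sketch (the function below is an assumed example whose peak sits at x = −1, y = 1.5, matching the plot described earlier; check page 538 for the book's exact function):

    f = @(x,y) y - x - 2*x.^2 - 2*x.*y - y.^2;
    [X, Y] = meshgrid(-2:0.25:0, 1:0.25:3);
    Z = f(X, Y);
    [fx, fy] = gradient(Z, 0.25);     % partial derivatives, h = 0.25
    contour(X, Y, Z);  hold on
    quiver(X, Y, -fx, -fy)            % negatives point downhill
    hold off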

