Numerical Methods: Formulas
How can you derive the error of numerical differentiation?
From the interpolation error or through Taylor series/expansions.
How does the Secant Method work?
Idea: Like Newton, but use a finite difference to approximate f′: x^(k+1) = x^(k) − f(x^(k))·(x^(k) − x^(k−1))/(f(x^(k)) − f(x^(k−1))) Remarks: i) Only f is needed ii) Converges with p ≈ 1.618 (golden ratio) iii) Can fail, e.g. if f(x^(k)) = f(x^(k−1)) iv) Generalizes to systems
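The iteration above can be sketched in a few lines; the test function x² − 2 and the starting guesses are illustrative, not from the card:

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant iteration: Newton's method with f' replaced by the
    finite difference (f(x1) - f(x0)) / (x1 - x0)."""
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # failure mode from remark iii
            raise ZeroDivisionError("f(x_k) == f(x_(k-1))")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # root of x^2 - 2
print(root)
```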
What is the midpoint (quadrature) rule?
Q = (b-a) f((a+b)/2)
What is the trapezoidal (quadrature) rule?
Q = (b-a)/2 (f(a) + f(b))
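Both rules translate directly into code; as a minimal sketch, with ∫₀¹ x² dx = 1/3 as an illustrative test integral:

```python
def midpoint_rule(f, a, b):
    return (b - a) * f((a + b) / 2)

def trapezoidal_rule(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))

f = lambda x: x * x  # exact integral over [0, 1] is 1/3
print(midpoint_rule(f, 0.0, 1.0))     # 0.25 (underestimates a convex f)
print(trapezoidal_rule(f, 0.0, 1.0))  # 0.5  (overestimates a convex f)
```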
What is the formula for the Dahlquist test equation (DTE)?
The simple initial value problem ẏ(t) = λ·y(t), λ ∈ ℂ (or ℝ), y(t₀) = y₀, is, in the context of absolute stability, known as the Dahlquist test equation.
What is the formula for Implicit trapezoidal method (IT)?
k₁ = f(t_j, y_j), k₂ = f(t_j + h, y_j + h/2·(k₁ + k₂)), y_(j+1) = y_j + h/2·(k₁ + k₂) Remark: p = 2
What's the Lagrange Interpolation formula? What are Lagrange Polynomials (LPs)?
pₙ(x) = Σ_(j=0..n) y_j·Lⁿ_j(x) with the Lagrange polynomials Lⁿ_j(x) = Π_(k=0..n, k≠j) (x − x_k)/(x_j − x_k) (x: nodes, y: data, n: degree) Properties of the LPs: Lⁿ_j(x) has degree n; Lⁿ_j(x_k) = δ_kj
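A direct (unoptimized) sketch of the formula; the function names and the degree-2 data set are illustrative:

```python
def lagrange_basis(x_nodes, j, x):
    """L^n_j(x) = product over k != j of (x - x_k) / (x_j - x_k)."""
    L = 1.0
    for k, xk in enumerate(x_nodes):
        if k != j:
            L *= (x - xk) / (x_nodes[j] - xk)
    return L

def lagrange_interp(x_nodes, y_data, x):
    """p_n(x) = sum over j of y_j * L^n_j(x)."""
    return sum(yj * lagrange_basis(x_nodes, j, x)
               for j, yj in enumerate(y_data))

# Degree-2 example: data from y = x^2, which the IP must reproduce exactly.
nodes, data = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_interp(nodes, data, 1.5))  # 2.25
print(lagrange_basis(nodes, 1, 1.0))      # 1.0 at its own node (Kronecker property)
print(lagrange_basis(nodes, 1, 2.0))      # 0.0 at the other nodes
```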
What is the stability function (SF) of a method?
A one-step method for solving an IVP, applied to the DTE, can be written as y_(j+1) = g(z)·y_j, where z = hλ; g(z) is called the stability function (SF) of the method.
How does the Bisection method work?
Assume we know a and b s.t. f(a) < 0 and f(b) > 0 ↷ a root is bracketed. Idea: bisect the interval and keep the subinterval that still fulfills the assumption; repeat. Remarks: i) Easy and robust, needs only f ii) A priori error estimate: ε^(k) ≤ (b−a)/2^(k+1) iii) Slow (linear) convergence iv) Does not extend easily to systems (multiple dimensions)
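The bracketing idea in code, as a sketch; the test function and bracket are illustrative:

```python
def bisect(f, a, b, k_max=60):
    """Assumes f(a) < 0 < f(b); each step halves the bracket, so after
    k steps the error is at most (b - a) / 2^(k+1)."""
    for _ in range(k_max):
        m = (a + b) / 2
        if f(m) < 0:
            a = m  # root is in [m, b], assumption still holds
        else:
            b = m  # root is in [a, m]
    return (a + b) / 2

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)  # f(0) < 0 < f(2)
print(root)
```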
What's the formula for the backward finite difference?
f′(x) ≈ (f(x) − f(x−h))/h Error: O(h)
What's the formula for the forward finite difference?
f′(x) ≈ (f(x+h) − f(x))/h Error: O(h)
What's the formula for the centered finite difference?
f′(x) ≈ (f(x+h) − f(x−h))/(2h) Error: O(h²)
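All three difference quotients, with a small convergence check; the choice of f = sin and the step sizes are illustrative:

```python
import math

def fd_forward(f, x, h):
    return (f(x + h) - f(x)) / h            # error O(h)

def fd_backward(f, x, h):
    return (f(x) - f(x - h)) / h            # error O(h)

def fd_centered(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # error O(h^2)

# f'(1) = cos(1) for f = sin: halving h should roughly halve the
# forward/backward error but quarter the centered error.
exact = math.cos(1.0)
for h in (1e-2, 5e-3):
    print(h,
          abs(fd_forward(math.sin, 1.0, h) - exact),
          abs(fd_centered(math.sin, 1.0, h) - exact))
```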
What is the Stability Region (SR) of a method?
Given a one-step method with SF g(z), for λ ∈ ℂ we call SR = {z = hλ ∈ ℂ : |g(z)| < 1} the stability region.
What is the Stability Interval (SI) of a method?
Given a one-step method with SF g(z), for λ ∈ ℝ we call SI = {x = hλ ∈ ℝ : |g(x)| < 1} the stability interval.
What's the interpolation condition (IC)?
Given a set of n+1 distinct nodes x₀ < x₁ < ... < xₙ and corresponding data points y₀, y₁, ..., yₙ, find the degree-n polynomial that fulfills pₙ(x_j) = y_j (interpolation condition (IC)). The ICs represent (n+1) equations for the (n+1) coefficients of the IP.
How does the Newton Method work?
Idea: linearize f ↷ x^(k+1) = x^(k) − f(x^(k))/f′(x^(k)) Remarks: i) Needs f′ ii) Quadratic convergence if the initial guess is close enough iii) Can fail, e.g. if f′(x) = 0 iv) Generalizes to systems easily!
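A sketch of the iteration, with the failure mode from remark iii guarded explicitly; the test function is illustrative:

```python
def newton(f, df, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        dfx = df(x)
        if dfx == 0.0:  # failure mode from remark iii
            raise ZeroDivisionError("f'(x) = 0")
        step = f(x) / dfx
        x = x - step    # x_(k+1) = x_k - f(x_k) / f'(x_k)
        if abs(step) <= tol:
            break
    return x

r = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)  # root of x^2 - 2
print(r)
```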
What is the Absolute stability or simply A-stability (AS)?
In practice, we want the numerical solution to decay when λ < 0, as the exact one does: |y_(j+1)| < |y_j|. With the SF (still λ < 0): |y_(j+1)| = |g(z)|·|y_j| < |y_j| ↷ |g(z)| < 1, a requirement on the stability function. A method is called A-stable if the whole left complex half-plane is contained in the stability region: {z ∈ ℂ | Re(z) < 0} ⊂ SR. Hence, for A-stable methods there is no step size restriction for Re(λ) < 0. Remarks: i) In general, explicit RK methods are not A-stable (step size restriction for decaying behaviour) ii) A-stable: IE, IM
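A quick numerical illustration, using the standard stability functions g(z) = 1 + z for explicit Euler and g(z) = 1/(1 − z) for implicit Euler:

```python
# Stability functions on the DTE (standard results, stated for illustration):
#   explicit Euler:  g(z) = 1 + z        -> |g(z)| < 1 only on a disk, not A-stable
#   implicit Euler:  g(z) = 1 / (1 - z)  -> |g(z)| < 1 on the whole left half-plane
z = -3.0  # z = h*lambda, well inside the left half-plane
print(abs(1 + z))        # 2.0 > 1: explicit Euler amplifies, step size too large
print(abs(1 / (1 - z)))  # 0.25 < 1: implicit Euler decays, like the true solution
```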
What is a method to control the error of an IVP solver with adaptive step size?
Perform each step with two methods, where the first is less accurate and its result is compared to that of a second, more accurate method; the digits that agree are assumed to be correct. Remarks: i) More accurate method: a. a higher-order method (e.g. p+1) b. the same method with smaller time steps (substeps) ii) The above scheme is far from complete (e.g. logic to increase the step size is needed) iii) There is no guarantee that the error estimate is accurate (reflects the true error); however, it often works very well in practice.
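A sketch of variant i.b (substeps): one explicit Euler step compared against two half-steps of the same method. The test problem y′ = −y, the tolerance, and the accept/reject logic are illustrative assumptions; as remark ii says, step-growing logic is omitted:

```python
import math

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def controlled_step(f, t, y, h, tol):
    coarse = euler_step(f, t, y, h)        # one full step
    half = euler_step(f, t, y, h / 2)      # two substeps (remark i.b)
    fine = euler_step(f, t + h / 2, half, h / 2)
    if abs(fine - coarse) <= tol:          # error estimate small enough?
        return t + h, fine, h              # accept (logic to grow h omitted)
    return t, y, h / 2                     # reject, retry with smaller step

# Illustrative run on y' = -y, y(0) = 1 up to t = 1.
t, y, h = 0.0, 1.0, 0.5
while t < 1.0:
    t, y, h = controlled_step(lambda t, y: -y, t, y, min(h, 1.0 - t), 1e-3)
print(y)  # close to exp(-1) ≈ 0.368
```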
What is the Simpson (quadrature) rule?
Q = (b-a)/6 (f(a) + 4f((a+b)/2) + f(b))
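In code, with the (standard) fact that Simpson's rule is exact for polynomials up to degree 3; the test integrand is illustrative:

```python
def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Exact for cubics: integral of x^3 over [0, 1] is 1/4.
q = simpson(lambda x: x ** 3, 0.0, 1.0)
print(q)  # 0.25
```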
How does Newton's method work for nonlinear systems of n equations?
Solve the linear system J(x^(k))·Δx = −F(x^(k)), then set x^(k+1) = x^(k) + Δx. Remarks: i) Needs the Jacobian J (can also be approximated by finite differences) ii) Converges quadratically (order 2) for a close enough initial guess iii) Can go wrong when J is singular (the analogue of division by zero)
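A NumPy sketch; the 2×2 example system (unit circle intersected with y = x) and the starting point are illustrative:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, maxit=50):
    """Solve F(x) = 0: at each step solve the linear system
    J(x_k) d = F(x_k) and set x_(k+1) = x_k - d (never form J^(-1))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        d = np.linalg.solve(J(x), F(x))  # raises if J is singular (remark iii)
        x = x - d
        if np.linalg.norm(d) <= tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 1 and y = x; root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
sol = newton_system(F, J, [1.0, 0.0])
print(sol)
```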
What are Stopping Criteria (SC)?
Stopping criteria (SC): (SC1) |x^(k) − x^(k−1)| ≤ atol (absolute criterion) (SC2) |x^(k) − x^(k−1)| ≤ rtol·|x^(k)| (relative criterion) (SC3) |x^(k) − x^(k−1)| ≤ tol·(1 + |x^(k)|) (hybrid) (SC4) |f(x^(k))| ≤ ftol (function tolerance)
What's the interpolation polynomial (IP)?
The degree-n polynomial approximation of a function. Interpolation polynomial (IP): pₙ(x) = c₀ + c₁x + c₂x² + ... + cₙxⁿ. The ICs represent (n+1) equations for the (n+1) coefficients of the IP.
How can the local truncation error be computed?
With Taylor series. Example: local truncation error of explicit Euler, where ϕ(t_j, y_(j−1), y_j, h) = f(t_(j−1), y_(j−1)): e_j = y(t_j) − (y(t_(j−1)) + h·f(t_(j−1), y(t_(j−1)))) (remark: y_j ≈ y(t_j)) = y(t_(j−1) + h) − (y(t_(j−1)) + h·f(t_(j−1), y(t_(j−1)))) = y(t_(j−1)) + h·ẏ(t_(j−1)) + h²/2·ÿ(t_(j−1)) + ... − (y(t_(j−1)) + h·f(t_(j−1), y(t_(j−1)))) ODE: h·ẏ(t_(j−1)) = h·f(t_(j−1), y(t_(j−1))) ⇒ e_j = h²/2·ÿ(t_(j−1)) + ... = O(h²)
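The derivation can be checked numerically: on y′ = y, y(0) = 1, one Euler step gives 1 + h, so the local error e^h − (1 + h) divided by h² should tend to 1/2 (the test problem is an illustrative choice):

```python
import math

# One explicit Euler step from t = 0 on y' = y, y(0) = 1 (exact solution e^t):
# the local error is e^h - (1 + h) = h^2/2 + O(h^3), so e/h^2 -> 1/2.
ratios = []
for h in (0.1, 0.05, 0.025):
    e = abs(math.exp(h) - (1.0 + h))
    ratios.append(e / h ** 2)
print(ratios)  # each value close to 1/2, matching e_j = h^2/2 * y'' + ...
```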
What is the formula for Implicit midpoint method (IM)?
k₁ = f(t_j + h/2, y_j + h/2·k₁), y_(j+1) = y_j + h·k₁ Remark: p = 2
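Since k₁ appears on both sides, each step needs an implicit solve; the sketch below uses fixed-point iteration (a simple illustrative choice; Newton also works), and checks the order p = 2 on y′ = −y:

```python
import math

def implicit_midpoint_step(f, t, y, h, sweeps=50):
    """One IM step; the implicit relation k1 = f(t + h/2, y + h/2*k1)
    is solved by fixed-point iteration."""
    k1 = f(t, y)  # initial guess: explicit slope
    for _ in range(sweeps):
        k1 = f(t + h / 2, y + h / 2 * k1)
    return y + h * k1

# Illustrative test problem: y' = -y, y(0) = 1 on [0, 1]; p = 2 means
# halving the step size should reduce the error by about a factor of 4.
def solve(h, n):
    y = 1.0
    for j in range(n):
        y = implicit_midpoint_step(lambda t, y: -y, j * h, y, h)
    return y

err_coarse = abs(solve(0.1, 10) - math.exp(-1))
err_fine = abs(solve(0.05, 20) - math.exp(-1))
print(err_coarse / err_fine)  # ≈ 4, consistent with order p = 2
```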
What is the formula for Heun's method/explicit trapezoidal method (ET)?
k₁ = f(t_j, y_j), k₂ = f(t_j + h, y_j + h·k₁), y_(j+1) = y_j + h·(k₁ + k₂)/2
What is the formula for the Runge/modified Euler/explicit midpoint (EM) method?
k₁ = f(t_j, y_j), k₂ = f(t_j + h/2, y_j + h/2·k₁), y_(j+1) = y_j + h·k₂, j = 0, ..., N−1. The kₙ are called slope approximations.
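Unlike the implicit midpoint method, both slopes here are explicit; a sketch on the illustrative test problem y′ = −y, again checking second-order convergence:

```python
import math

def explicit_midpoint_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)  # slope approximation at the midpoint
    return y + h * k2

def solve(h, n):  # y' = -y, y(0) = 1 (illustrative test problem)
    y = 1.0
    for j in range(n):
        y = explicit_midpoint_step(lambda t, y: -y, j * h, y, h)
    return y

err_coarse = abs(solve(0.1, 10) - math.exp(-1))
err_fine = abs(solve(0.05, 20) - math.exp(-1))
print(err_coarse / err_fine)  # ≈ 4: halving h quarters the error (order 2)
```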