Chapter 12: Iterative Methods
steps for newton raphson method
1) calculate the partial derivatives 2) plug in the initial guesses 3) solve for the *determinant* of the Jacobian (look at the formula for it) 4) evaluate the functions at the initial guesses 5) substitute all values into the update equation
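Those five steps can be sketched in code. This is a hypothetical Python translation, not the book's MATLAB M-file; the test system f1 = x1^2 + x1*x2 - 10, f2 = x2 + 3*x1*x2^2 - 57 is assumed as the example.

```python
# Hedged sketch of the two-equation Newton-Raphson steps (not the book's M-file).
def newton_raphson_2var(f1, f2, partials, x1, x2, tol=1e-8, maxit=50):
    for _ in range(maxit):
        j11, j12, j21, j22 = partials(x1, x2)  # steps 1-2: partials at the guesses
        det = j11 * j22 - j12 * j21            # step 3: determinant of the Jacobian
        u, v = f1(x1, x2), f2(x1, x2)          # step 4: functions at the guesses
        dx1 = (u * j22 - v * j12) / det        # step 5: substitute into the
        dx2 = (v * j11 - u * j21) / det        #         update equations
        x1, x2 = x1 - dx1, x2 - dx2
        if abs(dx1) < tol and abs(dx2) < tol:
            break
    return x1, x2

# Assumed example system: f1 = x1^2 + x1*x2 - 10, f2 = x2 + 3*x1*x2^2 - 57
f1 = lambda x1, x2: x1**2 + x1 * x2 - 10
f2 = lambda x1, x2: x2 + 3 * x1 * x2**2 - 57
partials = lambda x1, x2: (2 * x1 + x2, x1, 3 * x2**2, 1 + 6 * x1 * x2)
root = newton_raphson_2var(f1, f2, partials, 1.5, 3.5)  # converges toward (2, 3)
```

From the guesses (1.5, 3.5) the iterates move toward the true roots (2, 3).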
what are the shortcomings of the newton raphson M-file?
1) the jacobian is inconvenient to evaluate 2) excellent initial guesses are necessary to ensure convergence
when are iterative methods like gauss seidel important?
Good examples are the very large systems of linear algebraic equations that can occur when solving partial differential equations in a variety of engineering and scientific problem contexts.
equations for gauss seidel
the left-hand equations give the first iteration; the right-hand equations give all subsequent iterations
overrelaxation is designed to
accelerate the convergence of an *already* convergent system
Relaxation is designed to enhance ....
convergence
(2/2)
convergence depends on how equations are formulated and initial guesses
matrix notation to express newton raphson method for equation in last card is
each component will be explained in successive cards; the equation has the form [J]{xi+1} = [J]{xi} - {f} *this can be solved using gauss elimination*
what type of equations does gauss seidel use?
equations that solve for the unknown on the diagonal
when lambda is between 1 and 2 what happens? what's it called?
extra weight is placed on the present value, which assumes the new value is moving in the correct direction toward the true solution, but at *too slow a rate*. Thus, the added weight of λ is intended to improve the estimate by pushing it closer to the truth. This type of modification is called *overrelaxation* and is designed to accelerate the convergence of an already convergent system. The approach is also called successive overrelaxation, or SOR.
(T/F) as long as one of the error criterions for one of the x solutions has been met, the loop can stop
false; they all have to fall below the stopping criterion; good practice is to test the maximum of the ea values
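A minimal sketch of that stopping test (function and variable names are assumed): requiring every unknown's approximate percent error to fall below the tolerance is the same as checking the maximum.

```python
# Sketch: the loop may stop only when the LARGEST approximate percent error
# across all unknowns falls below the stopping criterion es.
def all_converged(x_new, x_old, es=1e-5):
    ea = [abs((new - old) / new) * 100
          for new, old in zip(x_new, x_old) if new != 0]
    return max(ea) < es
```

Checking only one unknown's error could stop the loop while another unknown is still far from converged.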
{xi+1} means
the column vector of final (updated) values at iteration i + 1
{f} means
the column vector of function values evaluated at iteration i
multiplying the matrix equation of newton raphson by the inverse of the jacobian results in the *generalized* form {xi+1} = {xi} - [J]^-1 {f}
implementing this equation in matlab is on card 49
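A sketch of that generalized update, {xi+1} = {xi} - [J]^-1 {f}, in Python with NumPy rather than the book's MATLAB. The linear system is solved by elimination instead of forming the inverse explicitly; the function name and example system are assumptions.

```python
import numpy as np

# Generalized Newton-Raphson: solve [J]{dx} = {f} (LU/Gauss elimination inside
# np.linalg.solve), then update {x_{i+1}} = {x_i} - {dx}.
def newton_system(f, jac, x0, tol=1e-8, maxit=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(jac(x), f(x))
        x = x - dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

# Assumed example: f1 = x1^2 + x1*x2 - 10, f2 = x2 + 3*x1*x2^2 - 57
f = lambda x: np.array([x[0]**2 + x[0]*x[1] - 10,
                        x[1] + 3*x[0]*x[1]**2 - 57])
jac = lambda x: np.array([[2*x[0] + x[1], x[0]],
                          [3*x[1]**2, 1 + 6*x[0]*x[1]]])
root = newton_system(f, jac, [1.5, 3.5])
```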
{xi} means
initial values
how is the inconvenience of calculating the jacobian fixed?
instead of calculating the partial derivatives analytically, finite-difference approximations are used
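A sketch of that finite-difference fix (the names and exact step-size rule here are assumptions; the idea is perturbing each variable by a small fraction of its value):

```python
# Approximate J[i][j] = df_i/dx_j with a forward difference instead of
# analytical partial derivatives.
def jacobian_fd(funcs, x, delta=1e-6):
    n = len(x)
    f0 = [f(x) for f in funcs]
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        h = delta * abs(x[j]) if x[j] != 0 else delta  # step scaled to x[j]
        xp = list(x)
        xp[j] += h
        for i in range(n):
            J[i][j] = (funcs[i](xp) - f0[i]) / h
    return J

# Check against known partials of f1 = x1^2 + x1*x2 - 10 at (2, 3): df1/dx1 = 7
fs = [lambda x: x[0]**2 + x[0]*x[1] - 10,
      lambda x: x[1] + 3*x[0]*x[1]**2 - 57]
J = jacobian_fd(fs, [2.0, 3.0])
```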
does relaxation make a non convergent system into a convergent?
it tries to
Gauss-Seidel
iterative method for solving a linear system using the following equations, where j is the current iteration and j - 1 the previous one. initial guesses must be made for all x2 and x3 (zero is a good one). the stopping criterion is presented in the next card
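A minimal Python sketch of those equations (the 3x3 system below is assumed for illustration; it is diagonally dominant, so convergence is guaranteed):

```python
# Gauss-Seidel: each row is solved for its diagonal unknown, and the latest
# available x's are used immediately within the same sweep.
def gauss_seidel(A, b, tol=1e-8, maxit=100):
    n = len(b)
    x = [0.0] * n                          # zero initial guesses
    for _ in range(maxit):
        ea = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][k] * x[k] for k in range(n) if k != i)
            x[i] = (b[i] - s) / A[i][i]    # solve row i for the diagonal unknown
            if x[i] != 0:
                ea = max(ea, abs((x[i] - old) / x[i]) * 100)
        if ea < tol:                       # max approximate percent error
            break
    return x

# Assumed diagonally dominant test system (solution: x = 3, -2.5, 7)
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel(A, b)
```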
underrelaxation benefits?
makes a divergent system converge, or hastens convergence by damping out oscillations
what exactly is being depicted in the convergence test for gauss seidel?
making sure the matrix is diagonally dominant
does a matrix have to be diagonally dominant to ensure convergence?
no, it can still converge if it's not diagonally dominant *it is sufficient, but not necessary*
[J] means
the Jacobian matrix: the matrix of the partial derivatives evaluated at iteration i
what type of equations does relaxation use?
rearrange the equations so that they are diagonally dominant
Jacobi iteration
similar to gauss seidel, but instead of using the latest available x's, it computes a set of new x's on the *basis of the set of old x's*; as new values are generated, they are not immediately used but are retained for the next iteration
displaying jacobi vs seidel
so the previous-iteration values x1(j-1), x2(j-1), and x3(j-1) are retained and used throughout the current iteration
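The contrast can be sketched in code: Jacobi builds the entire new vector from the old one before using any of it (the system below is an assumed diagonally dominant example):

```python
# Jacobi: new x's are computed entirely from the OLD x's; the new values are
# retained and only swapped in for the next iteration.
def jacobi(A, b, tol=1e-8, maxit=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        x_new = [(b[i] - sum(A[i][k] * x[k] for k in range(n) if k != i)) / A[i][i]
                 for i in range(n)]
        ea = max(abs((new - old) / new) * 100
                 for new, old in zip(x_new, x) if new != 0)
        x = x_new                     # old vector replaced only after a full sweep
        if ea < tol:
            break
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = jacobi(A, b)
```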
successive substitution
solve each of the nonlinear equations for one unknown, make initial guesses for all unknowns, and iterate to compute new values that will *hopefully* converge. *NOTE* each unknown can be isolated in two different ways, and sometimes not both rearrangements converge; see example 12.3 on page 292
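A sketch, assuming the system f1 = x1^2 + x1*x2 - 10 and f2 = x2 + 3*x1*x2^2 - 57 and a rearrangement that happens to converge (the alternative isolations of the same unknowns may diverge from these guesses):

```python
import math

# Successive substitution: each equation is solved for one unknown and
# iterated with the latest values. This rearrangement converges; others may not.
#   x1 = sqrt(10 - x1*x2)          from  x1^2 + x1*x2 - 10 = 0
#   x2 = sqrt((57 - x2)/(3*x1))    from  x2 + 3*x1*x2^2 - 57 = 0
x1, x2 = 1.5, 3.5                  # initial guesses for all unknowns
for _ in range(50):
    x1 = math.sqrt(10 - x1 * x2)
    x2 = math.sqrt((57 - x2) / (3 * x1))
# x1, x2 approach the true roots (2, 3)
```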
diagonally dominant
the absolute value of the diagonal coefficient of each equation is greater than the sum of the absolute values of the other coefficients in that row
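That row-by-row test can be sketched as follows (function name assumed):

```python
# Diagonal dominance: |a_ii| must exceed the sum of the absolute values of the
# other coefficients in the same row, for every row.
def is_diagonally_dominant(A):
    return all(abs(row[i]) > sum(abs(a) for k, a in enumerate(row) if k != i)
               for i, row in enumerate(A))

dominant = is_diagonally_dominant([[3, -0.1, -0.2],
                                   [0.1, 7, -0.3],
                                   [0.3, -0.2, 10]])      # True
not_dominant = is_diagonally_dominant([[1, 2], [3, 1]])   # False
```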
For two-variable case, first order taylor series can be written as:
f1,i+1 = f1,i + (x1,i+1 - x1,i) ∂f1,i/∂x1 + (x2,i+1 - x2,i) ∂f1,i/∂x2
f2,i+1 = f2,i + (x1,i+1 - x1,i) ∂f2,i/∂x1 + (x2,i+1 - x2,i) ∂f2,i/∂x2
the root estimate corresponds to the values of x1 and x2 where f1,i+1 and f2,i+1 equal zero
(T/F) When the multiequation Newton-Raphson works, it exhibits the same speedy quadratic convergence as the single-equation version. However, just as with successive substitution, it can diverge if the initial guesses are not sufficiently close to the true roots.
true
(T/F) in essence the jacobian is analogous to the derivative of a multivariate function
true
(T/F) Nonlinear systems can also be solved using the same strategy as the Gauss-Seidel method.
true; successive substitution
The *weighted* average of the present and the previous results occurs when lambda is a value between *0 and 1*, what is this called?
underrelaxation
if lambda is equal to 1 then the result goes...
unmodified, because (1 - λ) equals 0, so only λ(xi) remains and the result is the same as plain gauss seidel *no relaxation*
Newton Raphson Method for systems of nonlinear equations
uses derived first order taylor series expansion (partial derivatives)
Relaxation
uses the same approach as gauss seidel (same equations) but replaces each new value with the weighted average of the current and previous iterates, xi = λ(xi_new) + (1 - λ)(xi_old), where λ is the weighting factor with a value between *0 and 2*
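A sketch of Gauss-Seidel with relaxation folded in (the test system is an assumed diagonally dominant example; λ < 1 underrelaxes, λ > 1 overrelaxes):

```python
# After each Gauss-Seidel update, blend the new and old values:
#   x_i = lam * x_new + (1 - lam) * x_old,  with 0 < lam < 2
def gauss_seidel_relaxed(A, b, lam=1.0, tol=1e-8, maxit=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        ea = 0.0
        for i in range(n):
            old = x[i]
            new = (b[i] - sum(A[i][k] * x[k] for k in range(n) if k != i)) / A[i][i]
            x[i] = lam * new + (1 - lam) * old   # weighted average
            if x[i] != 0:
                ea = max(ea, abs((x[i] - old) / x[i]) * 100)
        if ea < tol:
            break
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel_relaxed(A, b, lam=0.95)   # mild underrelaxation
```

With lam=1.0 the update reduces to plain Gauss-Seidel (no relaxation).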
Which is better gauss seidel or jacobi?
usually seidel, because using the best available estimates makes it the method of preference, but jacobi sometimes performs better
does a diagonally dominant matrix ensure convergence for gauss seidel?
yes
can gauss seidel diverge?
yes, but it is much more predictable than the fixed-point iteration used back in chapter 4 to find the root of a nonlinear equation, because if the following condition (diagonal dominance) holds, convergence is guaranteed
for relaxation, the first ea will be 100% because
you start with initial guesses of the x solutions equalling 0
the denominator of the equations that solve for i + 1 is called what
*determinant* of the Jacobian of the system
*NON LINEAR SYSTEMS START HERE*
aight, imma walk myself through the derivations of taylor series in next few cards..
*NOTE* Overrelaxation is also known as successive overrelaxation (SOR)
Successive Substitution for a Nonlinear System example 12.3 page 292 (1/2)