Math

Permutation Matrix

A matrix with exactly one 1 in each row and each column and zeros everywhere else; it is obtained by permuting the rows of the identity matrix. For every permutation matrix there is some power c that yields the identity matrix again.

Gaussian Quadrature

A method of numerical integration, loosely in the spirit of the trapezoidal rule. It works by summing the function's values at freely chosen points (the Gauss points) multiplied by designated weights. The high accuracy of Gaussian quadrature comes from the fact that it integrates very-high-degree polynomials exactly. We choose N = 2n - 1 because a polynomial of degree 2n - 1 has 2n coefficients, so the number of unknowns (n weights plus n Gauss points) equals the number of equations; taking N = 2n would yield a contradiction.
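A minimal sketch, assuming NumPy is available, of the two-point Gauss-Legendre rule on [-1, 1]; with n = 2 points it integrates polynomials up to degree 2n - 1 = 3 exactly.

import numpy as np

# Gauss-Legendre nodes and weights for n = 2 (nodes +-1/sqrt(3), weights 1)
nodes, weights = np.polynomial.legendre.leggauss(2)

f = lambda x: x**3 + 2 * x**2 + 1        # a degree-3 polynomial
approx = np.sum(weights * f(nodes))      # weighted sum of the point values
exact = 10 / 3                           # integral of f over [-1, 1]
print(approx, exact)                     # both 3.3333..., agreeing to machine precision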

Metric

A metric is a function d that measures the distance between each pair of elements of a set X, d: X x X -> [0, infinity). It must satisfy: 1. non-negative 2. d(x,y) = 0 iff x = y 3. symmetry, d(x,y) = d(y,x) 4. the triangle inequality, d(x,z) <= d(x,y) + d(y,z)

Singular Matrix

A square matrix is called Singular if its inverse does not exist. The inverse of a matrix does not exist if and only if the determinant of the matrix is 0.

Residual

The error between the actual and estimated values; for a linear system Ax = b with approximate solution x, the residual is r = b - Ax.

Hermite Cubic Polynomials

Four polynomials that form a basis for third degree polynomials on the interval [a,b]; a cubic is specified by prescribing its values and first derivatives at the endpoints (the standard basis on [0,1] is listed below).
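For reference, with the interval normalized to [0,1] (a common convention; here t = (x - a)/(b - a) is this sketch's assumption), the standard cubic Hermite basis functions are
H00(t) = 2t^3 - 3t^2 + 1, H10(t) = t^3 - 2t^2 + t, H01(t) = -2t^3 + 3t^2, H11(t) = t^3 - t^2,
and p(t) = f(0) H00(t) + f'(0) H10(t) + f(1) H01(t) + f'(1) H11(t) matches the values and first derivatives at both endpoints.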

Spline

function that is piecewise defined by polynomial functions and has a high degree of smoothness at the connecting points (known as knots)

Linear Regression

given a set of data, linear regression is the process of defining a linear relationship that fits the data well. In other words, drawing the line of best fit using whichever relevant method (most often least squares)

sup

Short for supremum; essentially another notation for maximum. It means the smallest value that is greater than or equal to every element in the set (the least upper bound), and it coincides with the maximum when the maximum exists.

B Spline

Basis function of degree n that helps define the B-spline curve. B-splines are joined continuously up to order (n-2), and each point on the curve is affected by n control points.

Jordan Block

Square block within a matrix having a single eigenvalue repeated on its diagonal, 1s on its superdiagonal, and zeros everywhere else.

Control Points

Given points used to approximate a curve in various methods of interpolation.

Operative Statement

Loustau's name for the Newton's Method update formula for finding a new x value: x_(n+1) = x_n - f(x_n)/f'(x_n).

Matrix Norm

Magnitude of a matrix A, computed by taking the maximum over nonzero vectors x of the vector norm of Ax divided by the vector norm of x. For the 1-norm this works out to the maximum absolute column sum of the matrix; for the infinity-norm it is the maximum absolute row sum.

First Forward/Backward/Central Difference

Method of approximating the first derivative
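For a step size h, the standard first-difference approximations are
forward: f'(x) ≈ (f(x+h) - f(x)) / h, backward: f'(x) ≈ (f(x) - f(x-h)) / h, central: f'(x) ≈ (f(x+h) - f(x-h)) / (2h).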

Second Central Difference

Method of approximating the second derivative
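For a step size h, the standard formula is f''(x) ≈ (f(x+h) - 2f(x) + f(x-h)) / h^2.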

Control Polygon

Piecewise linear curve connecting neighboring control points

Cubic Spline Interpolation

Spline interpolation in which the cubic pieces are joined so that the 1st and 2nd derivatives match at the knots. There is a minimum of four control points, as well as six knots (a small library-based sketch appears below).
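A minimal sketch, assuming SciPy is available; scipy.interpolate.CubicSpline is one library construction of an interpolating cubic spline, not necessarily the exact formula intended above.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])      # four control points
y = np.array([1.0, 2.0, 0.0, 2.0])
spline = CubicSpline(x, y)              # piecewise cubics with matching 1st and 2nd derivatives
print(spline(1.5))                      # evaluate the interpolant between knots
print(spline(x) - y)                    # zeros: the spline passes through every control point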

Knots

The parameter values t_i at which the polynomial pieces of a spline are joined; the 't's used in Spline Interpolation.

Gradient

The vector consisting of the partial derivatives of a given function with respect to each of its variables.

Convex Hull

The smallest convex set that contains a given set X. For instance, when X is a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around X.

Equivalent Systems

Two linear systems Ax=b and Cy=d are equivalent systems most simply if they have the same solutions. On a more complex note, these two systems are equivalent if there exists a matrix E, a product of elementary matrices, such that C=EA and d=Eb.

Vandermonde Matrix

The matrix whose rows are the successive powers 1, x_i, x_i^2, ..., x_i^n of the interpolation points; used in Polynomial Interpolation with the monomial basis.

Bezier Curve

method used to draw smooth parametric curves with respect to a set of control points; the curve starts at the first control point, ends at the last, and is shaped by the interior points

Newton's Quotient (Difference Quotient)

the quotient (f(x+h) - f(x))/h used in the limit definition of a derivative

Bezier Interpolation

for (CGFloat T = 0; T < 1; T += 0.01) {
    CGPoint A = pointInLine(p0, p1, T);  // interpolate along each edge of the control polygon
    CGPoint B = pointInLine(p1, p2, T);
    CGPoint C = pointInLine(p2, p3, T);
    CGPoint D = pointInLine(A, B, T);    // repeat on the intermediate points
    CGPoint E = pointInLine(B, C, T);
    CGPoint F = pointInLine(D, E, T);    // F lies on the Bezier curve at parameter T
    drawPoint(F);                        // and draw the point on the screen
}

Richardson Method

An (admittedly involved) adaptation of the Jacobi and Gauss-Seidel Methods. Again, the part that changes is the use of a new x value as soon as we find it; this time, however, we extend the idea by correcting the approximation based on its residual (error). In its basic form the update is x^(k+1) = x^(k) + (b - Ax^(k)).

LU Decomposition

1. Use Gaussian elimination (with row subtraction as the only operation when combining rows) to transform the original matrix into an upper (or lower) triangular matrix, U. 2. Remember the constants you multiplied by to change the designated values to 0. Create a new matrix, L, with 1s on the diagonal and those multipliers in the positions they eliminated. 3. Then LU = A. 4. To solve Ax = b, write LUx = b; setting Ux = y, first solve Ly = b for y, then solve Ux = y for x. 5. The point of LU decomposition is to turn the original, difficult, non-triangular system into a product of two triangular systems, which are far easier to solve.
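A minimal NumPy sketch of steps 1-4 without pivoting (it assumes no zero pivots appear; library routines such as scipy.linalg.lu add row pivoting).

import numpy as np

def lu_solve(A, b):
    """Factor A = LU by Gaussian elimination, then solve Ly = b and Ux = y."""
    n = len(b)
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):                        # eliminate below the k-th pivot
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]                 # multiplier that zeros out U[i, k]
            L[i, k] = m                           # remember it in L
            U[i, k:] -= m * U[k, k:]
    y = np.zeros(n)
    for i in range(n):                            # forward substitution: Ly = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # back substitution: Ux = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0], [4.0, 3.0]])
b = np.array([3.0, 7.0])
print(lu_solve(A, b), np.linalg.solve(A, b))      # both give [1, 1]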

Eigenvalue

An eigenvalue is a scalar value, lambda, such that Av = lambda*v for some nonzero vector v (the corresponding eigenvector); multiplying the eigenvector by the matrix gives the same result as scaling it by lambda.

Eigenvector

An eigenvector is a nonzero vector v that, when the linear transformation is applied to it, only changes by a scalar factor: Av = lambda*v. It is computed by solving (A - lambda*I)v = 0 for each eigenvalue lambda.

Taylor Polynomial

An nth order Taylor Polynomial is a polynomial approximation of a curve built from the function's derivatives up to the nth derivative at a chosen point (formula below).
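For reference, the nth order Taylor polynomial of f centered at a is
P_n(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2! + ... + f^(n)(a)(x - a)^n / n!.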

Linear Operator

An operator T on vectors (or matrices) is said to be linear if it preserves addition and scalar multiplication: T(u + v) = T(u) + T(v) and T(c*v) = c*T(v).

Condition Number

Can be the condition number of many things, but usually refers to the condition number of a matrix with respect to inversion (solving a linear system). Computed by taking the product of the matrix norm of A with the matrix norm of A inverse.

Jacobi Method

Given a system of linear equations, the Jacobi Method solves each equation for its own variable. Then, starting from an initial guess for each value, it loops, and the successively calculated guesses converge to the actual solution (when the method converges, e.g. for diagonally dominant systems).
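A minimal NumPy sketch of the iteration (illustrative variable names; the example matrix is diagonally dominant so the loop converges).

import numpy as np

def jacobi(A, b, iterations=50):
    """Solve Ax = b by repeatedly solving each equation for its own variable."""
    x = np.zeros_like(b, dtype=float)       # initial guess of all zeros
    D = np.diag(A)                          # diagonal entries
    R = A - np.diagflat(D)                  # off-diagonal part
    for _ in range(iterations):
        x = (b - R @ x) / D                 # every component uses only the previous iterate
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))  # the two results should agree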

Spline Interpolation

Given a degree n, Spline Interpolation approximates with a polynomial not over the entire interval our points cover, but separately on each point-to-point interval. For example, in linear spline interpolation we use a set of line segments connecting each point to the next; in quadratic spline interpolation we use a quadratic function, or parabola, to connect one point to the next, and so on.

Guide Points

Given set of points for B Spline Polynomial Interpolation

Transpose

In linear algebra, the transpose of a matrix is an operation that flips the matrix over its diagonal; that is, it switches the row and column indices, producing another matrix denoted A^T.

Seed

Initial (random) guess for an algorithm like Newton's Method or the Secant Method

Piecewise Polynomial Interpolation

Interpolation that works point by point, defining a relationship only between a point and the next (this varies with the degree of the interpolation: linear piecewise interpolation works point by point, quadratic in sets of three points, cubic in sets of four, and so on).

Gauss-Seidel Method

Let us take Jacobi's Method one step further. Where the true solution is x = (x1, x2, ... , xn), if x1(k+1) is a better approximation to the true value of x1 than x1(k) is, then it would make sense that once we have found the new value x1(k+1) to use it (rather than the old value x1(k)) in finding x2(k+1), ... , xn(k+1). So x1(k+1) is found as in Jacobi's Method, but in finding x2(k+1), instead of using the old value of x1(k) and the old values x3(k), ... , xn(k), we now use the new value x1(k+1) and the old values x3(k), ... , xn(k), and similarly for finding x3(k+1), ... , xn(k+1).
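A minimal NumPy sketch (illustrative names; the example system is diagonally dominant so the sweeps converge), showing how each newly computed component is reused immediately within the same sweep.

import numpy as np

def gauss_seidel(A, b, iterations=50):
    """Like Jacobi, except each updated component is used right away in the current sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]   # new values for j < i, old for j > i
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))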

Elementary Matrix

Matrix formed by performing one elementary row operation on the identity matrix

Upper Triangular Matrix

Matrix whose entries below the diagonal are all zero, so the (mostly) nonzero values sit on and above the diagonal. A lower triangular matrix is the opposite situation.

Operator Norm

Measures the size of a linear operator; calculated as the maximum factor by which the operator stretches a vector, which for a matrix is just its induced matrix norm.

Newton's Method

Method for finding the roots of a function f(x). Pick an initial guess x-value, x0, then calculate a more accurate x-value by subtracting the quotient of f(x0) and f'(x0): x1 = x0 - f(x0)/f'(x0). Repeat until f(xn) is where you want it (close enough to 0).
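A minimal sketch of the iteration (the function, tolerance, and iteration cap are illustrative).

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Repeatedly subtract f(x)/f'(x) until f(x) is close enough to 0."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        x = x - f(x) / fprime(x)
    return x

# Example: root of x^2 - 2 starting from the seed x0 = 1 (converges to sqrt(2))
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))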

Secant Method

Method for finding the roots of a function f(x) without needing its derivative. Pick two initial guess x-values, then repeatedly compute a new x-value from the secant line through the two most recent points, x_(n+1) = x_n - f(x_n)(x_n - x_(n-1)) / (f(x_n) - f(x_(n-1))), replacing the older of the two. Continue the loop until the value converges to your desired 0-value. (Choosing guesses whose y-values lie on opposite sides of the x-axis and always replacing the one with the same sign is the closely related false-position method.)
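A minimal sketch keeping the two most recent iterates (illustrative function and tolerance; no sign bracketing is enforced here).

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Newton's Method with f'(x) replaced by a difference quotient through the last two points."""
    for _ in range(max_iter):
        if abs(f(x1)) < tol:
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1

# Example: root of x^2 - 2 from the seeds 1 and 2
print(secant(lambda x: x**2 - 2, 1.0, 2.0))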

Jordan Canonical Form

Next best thing to a matrix being fully diagonalized: a block-diagonal matrix made of Jordan blocks, each having its eigenvalue repeated on the diagonal and 1s on the superdiagonal.

Orthonormal Basis

An orthogonal basis that has been normalized by dividing each vector by its norm, so every basis vector has length 1.

Gram-Schmidt Process

The Gram-Schmidt Process is an algorithm used to turn a set of vectors into an orthonormal basis. Suppose {x1, x2, ..., xn} is a basis for a subspace W of R^n. An orthogonal basis {v1, ..., vn} is formed by taking v1 = x1 and vk = xk - (<xk, v1>/<v1, v1>) v1 - ... - (<xk, v(k-1)>/<v(k-1), v(k-1)>) v(k-1), i.e. subtracting from each xk its projections onto the earlier v's. The basis is then normalized by dividing each vector by its norm.
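A minimal NumPy sketch of the projection-and-subtract step followed by normalization (the input vectors are illustrative).

import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X by subtracting projections onto earlier vectors."""
    Q = []
    for x in X.T:                             # process one column at a time
        v = x.astype(float).copy()
        for q in Q:
            v -= (x @ q) * q                  # remove the component of x along q (q is unit length)
        Q.append(v / np.linalg.norm(v))       # normalize
    return np.column_stack(Q)

X = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = gram_schmidt(X)
print(Q.T @ Q)                                # (close to) the identity matrix: columns are orthonormal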

Determinant

The determinant of a matrix A, denoted det(A), can be computed by cofactor expansion along the top row: the alternating-sign sum of each top-row entry times the determinant of the submatrix obtained by deleting that entry's row and column.

Relative Error

The difference between the actual and estimated values divided by the actual value

Error

The difference between the actual and estimated values

Gaussian Elimination

The elimination method used to solve systems of linear equations. More often than not this is performed in matrix format, multiplying a row by a constant and then adding that new row to another. The goal is to ultimately end up with the identity matrix, or with a triangular matrix (solved by back substitution) if the identity isn't possible.

Characteristic Equation (Characteristic Polynomial)

The equation used to solve for a matrix's eigenvalues. It is obtained by taking the determinant of the difference between the matrix A and lambda times the identity matrix, and setting it equal to 0: det(A - lambda*I) = 0. The solutions for lambda are the eigenvalues.
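A small worked example: for A = [[2, 1], [1, 2]], det(A - lambda*I) = (2 - lambda)^2 - 1 = lambda^2 - 4*lambda + 3 = 0, so the eigenvalues are lambda = 1 and lambda = 3.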

Runge's Phenomenon

The observation that using higher degrees in Polynomial Interpolation does not always give a better result. When interpolating a set of equidistant points, the higher the degree of the polynomial, the more it oscillates (especially near the ends of the interval), resulting in messier approximations.

Local Control

The property that a small change in one of the input interpolation points only affects the result close to, or "local" to, that point. Because even a small change in a single input value changes the interpolating polynomial everywhere, yielding significantly different results, we say Polynomial Interpolation lacks local control.

Gelfand Theorem

The limit, as m approaches infinity, of the matrix norm of A raised to the m power, then raised to the one-over-m power: lim ||A^m||^(1/m). The result equals the spectral radius of the matrix A.

Least Square Error

The most common way to measure error. Take the sum of the squared differences between the actual and calculated values, divide by the number of points, and take the square root of the whole thing.
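In symbols, for n data points with actual values y_i and calculated values yhat_i:
E = sqrt( (1/n) * sum_{i=1}^{n} (y_i - yhat_i)^2 ).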

Vector Norm

The norm is the magnitude or length of a vector v. It is computed by taking the square root of the sum of the squares of all of its elements. Properties: 1. non-negative (zero only for the zero vector) 2. scalars factor out in absolute value, ||c*v|| = |c|*||v|| 3. triangle inequality

Triangle Inequality

The norm of the sum of two vectors is less than or equal to the sum of the norms of the two vectors: ||u + v|| <= ||u|| + ||v||.

Parameterization

The process of expressing a shape or curve in terms of one or more parameters in a chosen coordinate system. Most common are Cartesian coordinates (x,y,z), cylindrical polar coordinates (ρ, φ, z), and spherical coordinates (r,φ,θ), but other coordinate systems exist. Ex. circle = (cos(t), sin(t))

Polynomial Interpolation

The process of finding a polynomial of a given degree n, that passes through all of the given points in a data set. Arguably the most basic approach using a monomial basis takes a Vandermonde Matrix and creates a system of equations in which we solve for the desired coefficients.

Interpolation

The process of finding new data points based on a set of determined data points. The process has many different approaches including Polynomial Interpolation, Hermite Interpolation, Spline Interpolation, and Bezier Interpolation.

Lagrange Polynomial Interpolation

The process of finding the polynomial of lowest degree that passes through each of the given points in a data set, via the formula below.
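For reference, for points (x_0, y_0), ..., (x_n, y_n) with distinct x_i, the Lagrange form is
P(x) = sum_{i=0}^{n} y_i * L_i(x), where L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).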

Spectral Radius

The spectral radius of a square matrix is given by finding the eigenvalues of the matrix and then choosing the largest amongst their absolute values.
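A short NumPy check (the matrix is illustrative; its eigenvalues are 1 and 3).

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
rho = max(abs(np.linalg.eigvals(A)))    # largest absolute value among the eigenvalues
print(rho)                              # 3.0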

Hessian

The square matrix composed of second order partial derivatives of a certain function f(x).

Krylov Subspace

Used in iterative processes for solving systems of linear equations. Given an n x n matrix A and a vector b, the Krylov subspace is the span of {b, Ab, A^2 b, ..., A^(n-1) b}.

Implicit Finite Difference Method

Using the backward (in time) and central (in space) difference methods

Crank-Nicolson Finite Difference Method

Using the central difference methods; it averages the explicit and implicit schemes, making it centered in time as well as space

Explicit Finite Difference Method

Using the forward (in time) and central (in space) difference methods

Least Square Method

We want to build a function that minimizes the least square error between the actual and calculated points. It is easiest to assume a linear function y = ax + b; taking the derivative of the least square error with respect to each parameter and setting it equal to 0 gives a small linear system (the normal equations) that defines the parameters.
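A minimal NumPy sketch of that linear fit via the normal equations (the data values are illustrative; np.polyfit(x, y, 1) would give the same answer).

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

# Design matrix for y ~ a*x + b; setting the derivatives of the squared error
# to zero leads to the normal equations (M^T M)[a, b]^T = M^T y.
M = np.column_stack([x, np.ones_like(x)])
a, b = np.linalg.solve(M.T @ M, M.T @ y)
print(a, b)                              # slope and intercept of the least-squares line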

Fourier Series

an approximation of a function using an infinite sum of sine and cosine functions

One Dimensional Heat Equation

equation representing the diffusion of heat across a thin rod
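For reference, with u(x, t) the temperature along the rod and α the thermal diffusivity, the equation reads ∂u/∂t = α ∂²u/∂x².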

Von Neumann Stability Analysis

estimates how the errors in a finite difference scheme behave over time, letting us know whether FDM is stable (and thus efficient) to use in a specific setting. The process is unstable if the error grows with time.

Finite Difference Methods

methods like the explicit FDM, implicit FDM, and Crank-Nicolson method, used to solve differential equations by approximating them with difference equations, in which finite differences approximate the derivatives

