MATH 1554: Linear Algebra Terms

(A^T)^-1

(A^-1)^T

if A is not invertible, then an eigenvalue is

0

stochastic matrices have an eigenvalue of

1

if A is a square matrix with orthonormal columns then det A =

1 or -1

if A is invertible, then det (A^-1) =

1/det A

(A^T)^T

A

symmetric matrix

A = A^T

n x n matrices A and B are similar if

A = PBP^-1 for some invertible matrix P

A is diagonalizable if it is similar to a diagonal matrix D, written as

A = PDP^-1

(AB)^-1

B^-1A^-1

span

set of all linear combinations of vectors

regular stochastic matrix if

there is some k such that P^k contains only positive entries, where P is a stochastic matrix

stochastic matrix

square matrix whose columns are probability vectors
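
A minimal NumPy check of the two cards above, using a made-up 2-state matrix P: each column sums to 1, and although P itself has a zero entry, P^2 is all positive, so P is regular with k = 2.

```python
import numpy as np

# hypothetical stochastic matrix: each column is a probability vector
P = np.array([[0.0, 0.5],
              [1.0, 0.5]])
print(P.sum(axis=0))                             # [1. 1.] -- columns sum to 1
print(np.all(P > 0))                             # False: P itself has a zero entry
print(np.all(np.linalg.matrix_power(P, 2) > 0))  # True: P^2 is all positive, so P is regular
```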

upper triangular

all entries below the main diagonal are zero

lower triangular

all entries above the main diagonal are zero

vertical shear

{[1 0] [k 1]}

horizontal shear

{[1 k] [0 1]}

counterclockwise rotation standard matrix in R2

{[cos θ -sin θ] [sin θ cos θ]}

clockwise rotation standard matrix in R2

{[cos θ sin θ] [-sin θ cos θ]}
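
A short NumPy sketch of these two matrices with an arbitrary angle θ = π/6: the clockwise matrix is the transpose (and inverse) of the counterclockwise one.

```python
import numpy as np

theta = np.pi / 6                       # 30 degrees
c, s = np.cos(theta), np.sin(theta)
R_ccw = np.array([[c, -s],
                  [s,  c]])             # counterclockwise rotation
R_cw = R_ccw.T                          # clockwise rotation is the transpose
print(R_ccw @ np.array([1.0, 0.0]))     # e1 rotates to (cos θ, sin θ)
print(np.allclose(R_cw @ R_ccw, np.eye(2)))  # True: they undo each other
```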

distance between vector u and v

‖u-v‖

length of a vector

‖u‖ = √(u · u) = √(u₁² + ... + uₙ²)
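
A quick NumPy illustration of length and distance, with made-up vectors u and v:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([0.0, 1.0])
print(np.sqrt(u @ u))          # 5.0, same as np.linalg.norm(u)
print(np.linalg.norm(u - v))   # distance ‖u - v‖ between u and v
```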

if 2 rows are interchanged odd number of times to produce B, then det B =

-det A

ordering of geometric and algebraic multiplicities of eigenvalues

1 ≤ g ≤ a

echelon form

1. All nonzero rows are above any rows of all zeros. 2. Each leading entry of a row is in a column to the right of the leading entry of the row above it. 3. All entries in a column below a leading entry are zeros.

row reduced echelon form

1. All nonzero rows are above any rows of all zeros. 2. Each leading entry of a row is in a column to the right of the leading entry of the row above it. 3. All entries in a column below a leading entry are zeros. 4. All leading entries are 1. 5. Each leading entry is the only nonzero entry in its column.

homogeneous coordinates

1s on the main diagonal and 0s everywhere else, except for the translation entries in the rightmost column
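
A small NumPy sketch of homogeneous coordinates, assuming a made-up translation by (3, -1): the extra column carries the translation.

```python
import numpy as np

# translate by (tx, ty) using homogeneous coordinates
tx, ty = 3.0, -1.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])
p = np.array([2.0, 5.0, 1.0])    # the point (2, 5) in homogeneous form
print(T @ p)                     # [5. 4. 1.] -> translated point (5, 4)
```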

(A^-1)^-1

A

A is diagonalizable if

A has n linearly independent eigenvectors

eigenvector calculation

solve (A - λI)v = 0 for nonzero vectors v

invertible = non-singular

there exists a matrix C such that AC = CA = I; then C = A^-1

(A+B)^T

A^T + B^T

normal equations

A^T A x̂ = A^T b
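
A NumPy sketch of solving the normal equations, using a made-up inconsistent system; the result is checked against np.linalg.lstsq.

```python
import numpy as np

# made-up overdetermined system Ax = b with no exact solution
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # solve A^T A x̂ = A^T b
print(x_hat)                                # [ 5. -3.], the least-squares solution
print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```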

if v is an eigenvector of A with eigenvalue λ, then

Av = λv

nontrivial solution

Ax = 0 has this if there is a free variable

(AB)^T

B^TA^T

google page rank equation

G = pP* + (1 - p)K, where K is the matrix with every entry equal to 1/n; p is the damping factor, and taking 0 ≤ p < 1 makes G a regular stochastic matrix
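
A NumPy sketch of building G, with a made-up 3-page matrix P* and the commonly used damping value p = 0.85:

```python
import numpy as np

# hypothetical 3-page link matrix P* (columns are probability vectors)
P_star = np.array([[0.0, 0.5, 1/3],
                   [0.5, 0.0, 1/3],
                   [0.5, 0.5, 1/3]])
n = 3
p = 0.85                             # damping factor, 0 <= p < 1
K = np.full((n, n), 1.0 / n)         # every entry equal to 1/n
G = p * P_star + (1 - p) * K         # the Google matrix
print(np.all(G > 0))                 # True: all entries positive
print(np.allclose(G.sum(axis=0), 1)) # True: G is a regular stochastic matrix
```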

Row A is orthogonal to

Nul A

Row A perp equals

Nul A

Diagonalize to A = PDP^-1

the columns of P are eigenvectors of A, and D is diagonal with the corresponding eigenvalues in the same order
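
A NumPy sketch of this factorization for a made-up diagonalizable matrix; np.linalg.eig returns the eigenvalues and a matrix whose columns are the eigenvectors, in matching order.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2
eigvals, P = np.linalg.eig(A)         # columns of P are eigenvectors
D = np.diag(eigvals)                  # eigenvalues on the diagonal, same order
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^-1
```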

google page rank

the matrix P is formed from the probabilities of moving between pages; a page with no links is given equal probability for all pages, producing P*

A=QR

Q is obtained through the Gram-Schmidt process, and R = Q^T A
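
A quick NumPy check of this factorization (np.linalg.qr in place of hand Gram-Schmidt), with a made-up A whose columns are linearly independent:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])
Q, R = np.linalg.qr(A)            # Q has orthonormal columns
print(np.allclose(R, Q.T @ A))    # True: R = Q^T A
print(np.allclose(A, Q @ R))      # True: A = QR
```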

quadratic form

Q(x) = x^T Ax

negative definite

Q<0 for all x ≠0

positive definite

Q>0 for all x ≠0

positive semidefinite

Q≥0 for all x

linear transformations

T: Rn → Rm; the domain of T is Rn and the codomain of T is Rm

unique least-squares solution

R x̂ = Q^T b, where A = QR
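
A NumPy sketch using the same made-up A and b as the normal-equations example above; solving R x̂ = Q^T b gives the same least-squares solution.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])
Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)   # R x̂ = Q^T b, R upper triangular
print(x_hat)                          # [ 5. -3.], matching the normal equations
```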

PCP^-1 for complex eigenvalues

find the complex eigenvalue λ = a - bi and an associated eigenvector v; then P = (Re v  Im v) and C = {[a -b] [b a]}
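
A NumPy sketch of this construction for a made-up matrix with eigenvalues 2 ± i; any eigenvector associated with λ = a - bi works.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [1.0,  3.0]])                 # eigenvalues 2 +/- i
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmin(eigvals.imag)                 # pick lambda = a - bi (b > 0)
lam, v = eigvals[i], eigvecs[:, i]
a, b = lam.real, -lam.imag
P = np.column_stack([v.real, v.imag])       # P = (Re v  Im v)
C = np.array([[a, -b],
              [b,  a]])
print(np.allclose(A, P @ C @ np.linalg.inv(P)))  # True: A = P C P^-1
```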

an mxn matrix has orthonormal columns if

U^T U = I; orthonormal columns are automatically linearly independent

LU factorization

To solve Ax = b with A = LU: solve Ly = b for y, then solve Ux = y for x. To get the LU decomposition: reduce A to echelon form U by a sequence of row-replacement operations; place each multiplier in L, with the opposite sign, in the same position.
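
A minimal Python sketch of both halves of this card: a no-pivoting LU routine that stores the multipliers in L, then the two triangular solves (the matrix and right-hand side are made up).

```python
import numpy as np

def lu_nopivot(A):
    """LU factorization without row exchanges (assumes no zero pivots)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]   # row op is R_i -> R_i - m*R_j ...
            L[i, j] = m             # ... so L stores +m (opposite sign)
            U[i, :] -= m * U[j, :]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
L, U = lu_nopivot(A)
y = np.linalg.solve(L, b)     # forward solve Ly = b
x = np.linalg.solve(U, y)     # back solve Ux = y
print(np.allclose(A @ x, b))  # True
```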

determinant of 2x2

ad - bc

eigenvalues of triangular matrix

are the entries on the main diagonal

if A and B are similar their eigenvalues

are the same

row reductions can or cannot change eigenvalues?

can change

calculate eigenvalue

det(A - λI) = 0; the roots are the eigenvalues
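
A NumPy illustration: np.poly gives the characteristic polynomial coefficients of a made-up matrix, and the roots of that polynomial match np.linalg.eigvals.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
coeffs = np.poly(A)             # characteristic polynomial of A
print(coeffs)                   # [ 1. -7. 10.] -> λ² - 7λ + 10
print(np.roots(coeffs))         # [5. 2.] -- the eigenvalues
print(np.linalg.eigvals(A))     # same values, computed directly
```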

det A^T =

det A

if a multiple of a row of A is added to another row to produce B then det B =

det A

det (AB)

det A · det B

invertibility does not tell anything about

diagonalizability

geometric multiplicity of eigenvalue

dimension of Nul(A - λI)

free variables

variables whose columns do not contain a pivot

orthogonal definition

u · v = 0; a set of nonzero orthogonal vectors is linearly independent

horizontal contraction/expansion

standard matrix {[k 0] [0 1]}; only e1 changes (it is scaled by k)

vertical contraction/expansion

standard matrix {[1 0] [0 k]}; only e2 changes (it is scaled by k)

equivalent statements for diagonalizability

g = a for every eigenvalue; the eigenvectors of A form a basis for Rn

linearly dependent

the equation x₁v₁ + ... + xₚvₚ = 0 has a nontrivial solution

2x2 matrix invertibility

if ad - bc ≠ 0, then it is invertible; the inverse equals 1/(ad - bc) times the matrix {[d -b] [-c a]}
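
A quick NumPy check of the 2x2 inverse formula against np.linalg.inv, with made-up entries:

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b],
              [c, d]])
det = a * d - b * c                       # -2, nonzero so A is invertible
A_inv = (1 / det) * np.array([[ d, -b],
                              [-c,  a]])
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```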

Ax = b has a solution

if and only if b is a linear combination of the columns of A

subspace

a set that contains the zero vector and is closed under scalar multiplication and vector addition

if det A ≠ 0, then A is invertible or not?

invertible

equivalent statements

invertible; row equivalent to identity; n pivotal columns; only trivial solution for Ax = 0; linearly independent columns; Ax = b has 1 solution for all b in Rn; columns of A span Rn; A^T is invertible

if A is nxn and has n distinct eigenvalues,

it is diagonalizable

if one row of A is multiplied by a scalar k to produce B, then det B =

k det A

algebraic multiplicity of eigenvalue

multiplicity as a root of characteristic polynomial

dimension of a subspace

number of vectors in any basis of H; for Col A, this is the number of pivot columns

if A is symmetric with eigenvectors v1 and v2 with two different eigenvalues, then v1 and v2 are

orthogonal

orthonormal basis

an orthogonal basis in which every vector has unit length

one-to-one transformation

pivot in every column; Ax = 0 has only the trivial solution; the columns are linearly independent

onto transformation

pivot in every row; the columns span Rm; Ax = b has a solution for every b

basic variables

variables whose columns contain a pivot position

reflection through x1 axis, reflection through x2 axis, reflection through x1 = x2 axis, reflection through x1 = -x2 axis

plot e1 = (1,0) and e2 = (0,1), apply the reflection to each, and use the images as the columns of the standard matrix

steady state vectors

probability vector q such that Pq = q, where P is a stochastic matrix; equivalently, (P - I)q = 0
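
A NumPy sketch: for a made-up regular stochastic matrix, take an eigenvector for eigenvalue 1 (a solution of (P - I)q = 0) and rescale it into a probability vector.

```python
import numpy as np

P = np.array([[0.7, 0.4],
              [0.3, 0.6]])                 # regular stochastic matrix
eigvals, eigvecs = np.linalg.eig(P)
q = eigvecs[:, np.argmin(np.abs(eigvals - 1))].real  # eigenvector for eigenvalue 1
q = q / q.sum()                            # rescale so the entries sum to 1
print(q)                                   # [0.571... 0.428...]
print(np.allclose(P @ q, q))               # True: Pq = q
```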

determinant of a triangular matrix

product of entries on main diagonal

any matrix invertibility

row reduce the augmented matrix (A|In) to RREF; A is invertible if and only if the left block becomes In, in which case the right block is A^-1
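
A SymPy sketch of this procedure (SymPy is used here because its row reduction is exact), with a made-up invertible A:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2],
            [3, 4]])
aug = A.row_join(eye(2))        # form the augmented matrix (A | I)
R, pivots = aug.rref()          # row reduce to RREF
print(R[:, :2] == eye(2))       # True: left block is I, so A is invertible
print(R[:, 2:])                 # right block is A^-1: Matrix([[-2, 1], [3/2, -1/2]])
```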

Markov chains

a sequence of probability vectors x0, x1, x2, ... together with a stochastic matrix P such that xk+1 = Pxk

linearly independent

the equation x₁v₁ + ... + xₚvₚ = 0 has only the trivial solution

Col A

subspace of Rm spanned by the columns of A; the pivot columns of A form a basis

Nul A

subspace of Rn; the set of all solutions of Ax = 0

trace

sum of diagonal elements

if a matrix is square and diagonal then it is symmetric or not?

symmetric

if vector x is in Nul A^T

then A^Tx = 0; x is orthogonal to rows of A^T; x is orthogonal to columns of A; Col A is orthogonal to Nul A^T

if it is a regular stochastic matrix, then

there is a unique steady-state vector

standard vector

vector ei in Rn in which every entry is 0, except entry i, which is 1

Gram-Schmidt Process

vp = xp - ((xp · v1)/(v1 · v1))v1 - ... - ((xp · vp-1)/(vp-1 · vp-1))vp-1; {v1, ..., vp} is an orthogonal basis for W = Span{x1, ..., xp}
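
A minimal Python implementation of this process, assuming the columns of the made-up X are linearly independent:

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X -> orthogonal basis for Col X (columns assumed independent)."""
    V = []
    for x in X.T:
        v = x.copy()
        for w in V:
            v = v - (x @ w) / (w @ w) * w   # subtract projection onto each earlier v
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
V = gram_schmidt(X)
print(V)                                     # columns (1,1,1) and (-1,0,1)
print(np.isclose(V[:, 0] @ V[:, 1], 0.0))    # True: the new basis is orthogonal
```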

probability vector

vector x with non-negative elements that sum to 1

W perp

vector z is orthogonal to subspace W if z is orthogonal to every vector in W; set of all vectors orthogonal to W is a subspace

projection onto either axis

the coordinate along that axis is kept and the other coordinate becomes 0; e.g., projection onto the x1-axis sends (x1, x2) to (x1, 0)

change of variable

x = Py; Q(x) = y^T D y = λ₁y₁² + λ₂y₂²
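
A NumPy sketch with a made-up symmetric A: np.linalg.eigh gives an orthogonal P, and in the y-coordinates the quadratic form has no cross-product term.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])                  # symmetric, eigenvalues 2 and 4
eigvals, P = np.linalg.eigh(A)              # orthogonal P: columns are eigenvectors
x = np.array([1.0, 2.0])
y = P.T @ x                                 # change of variable x = Py
Q_x = x @ A @ x                             # Q(x) = x^T A x
Q_y = eigvals[0] * y[0]**2 + eigvals[1] * y[1]**2   # λ₁y₁² + λ₂y₂²
print(np.isclose(Q_x, Q_y))                 # True: both equal 19.0
```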

Markov chain

x1 = Px0 = P(c1v1 + ... + cnvn), where x0 is written as a linear combination of eigenvectors of P

closest vector in span of vector u is

ŷ = ((y · u)/(u · u))u = projection of y onto u
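
A quick NumPy check of this projection formula, with made-up y and u; the residual y - ŷ is orthogonal to u.

```python
import numpy as np

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])
y_hat = (y @ u) / (u @ u) * u            # projection of y onto u
print(y_hat)                             # [8. 4.], the closest point to y in Span{u}
print(np.isclose((y - y_hat) @ u, 0.0))  # True: the residual is orthogonal to u
```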

