Linear Algebra Theorems
Algorithm for an LU Factorization
1. Reduce A to an echelon form U by a sequence of row replacement operations, if possible. 2. Place entries in L such that the same sequence of row operations reduces L to I.
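The two steps above can be sketched numerically. A minimal Doolittle-style sketch in numpy, assuming A can be reduced by row replacements alone (no row interchanges needed); the sample matrix is an assumption of this sketch:

```python
import numpy as np

# LU factorization by row replacement only -- a sketch assuming every
# pivot is nonzero in place, so no row interchanges are ever needed.
def lu_no_pivot(A):
    A = A.astype(float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for j in range(n):
        for i in range(j + 1, n):
            # the multiplier that zeros out U[i, j] is recorded in L
            L[i, j] = U[i, j] / U[j, j]
            U[i] -= L[i, j] * U[j]
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)   # L is unit lower triangular, U is an echelon form
```

The multipliers stored in L are exactly the entries that the same sequence of row replacements would zero out, which is why that sequence reduces L to I.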
Definition of a Basis
A _____ for a subspace H of Rn is a linearly independent set in H that spans H.
Definition of a subspace
A ______ of Rn is any set H in Rn that has 3 properties: a) The zero vector is in H. b) For each u and v in H, the sum u + v is in H. c) For each u in H and each scalar c, the vector cu is in H.
Pivot position
A _____________ in a matrix A is a location in A that corresponds to a leading 1 in the reduced echelon form of A. A pivot column is a column of A that contains a pivot position.
Chapter 3 Theorem 3 - Properties of Determinants
A is square; a) if a multiple of one row of A is added to another row to produce matrix B, then det B = det A. b) if two rows of A are interchanged to produce B, then det B = - det A. c) If one row of A is multiplied by k to produce B, then det B = k det A.
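The three properties can be checked numerically on a sample matrix (a spot check, not a proof; the random 3x3 matrix is an assumption of this sketch):

```python
import numpy as np

# Checking the three row-operation effects on det A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)

# (a) row replacement: add 5 * row 0 to row 1 -> det unchanged
B = A.copy(); B[1] += 5 * B[0]
# (b) row interchange: swap rows 0 and 2 -> det negated
C = A.copy(); C[[0, 2]] = C[[2, 0]]
# (c) row scaling: multiply row 1 by k -> det scaled by k
k = 7.0
D = A.copy(); D[1] *= k
```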
Chapter 1 Theorem 2 - Existence and Uniqueness Theorem
A linear system is consistent IFF the rightmost column of the augmented matrix is not a pivot column; that is, IFF an echelon form of the augmented matrix has no row of the form [0 ... 0 b] with b nonzero. If a linear system is consistent, then the solution set contains either a unique solution (when there are no free variables) or infinitely many solutions (when there is at least one free variable).
Definition of One-to-One
A mapping T : Rn -> Rm is said to be one-to-one if each b in Rm is the image of at most one x in Rn.
Definition of Onto
A mapping T : Rn -> Rm is said to be onto Rm if each b in Rm is the image of at least one x in Rn.
Chapter 3 Theorem 4 - Invertibility
A square matrix A is invertible IFF det A ≠ 0
Transformation Linearity
A transformation (or mapping) T is linear if: (i) T(u+v) = T(u) + T(v) for all u,v in the domain of T; (ii) T(cu) = cT(u) for all scalars c and all u in the domain of T.
Definition of an elementary matrix
An _______ is one that is obtained by performing a single elementary row operation on an identity matrix.
Chapter 1 Theorem 7 - Characterization of Linearly Dependent Sets
An indexed set S = {v1,...,vp} of two or more vectors is linearly dependent IFF at least one of the vectors in S is a linear combination of the others. In fact, if S is linearly dependent and v1 ≠ 0, then some vj (j>1) is a linear combination of the preceding vectors v1,...,v(j-1)
Linear In/dependence
An indexed set of vectors {v1,...,vp} in Rn is said to be linearly independent if the vector equation x1v1+x2v2+...+xpvp=0 has only the trivial solution. The set {v1,...,vp} is said to be linearly dependent if there exist weights c1,...,cp, not all zero, such that c1v1+c2v2+...+cpvp=0.
Chapter 2 Theorem 7
An nxn matrix A is invertible IFF A is row equivalent to In, and in this case, any sequence of elementary row operations that reduces A to In also transforms In into A^(-1).
Definition of Multiplication
Each column of AB is a linear combination of the columns of A using weights from the corresponding column of B.
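This column view of the product can be verified directly (the 2x2 sample matrices are an assumption of this sketch):

```python
import numpy as np

# Column j of AB equals the linear combination of the columns of A
# using the entries of column j of B as weights.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
AB = A @ B
col0 = B[0, 0] * A[:, 0] + B[1, 0] * A[:, 1]   # weights from B's column 0
```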
Elementary Matrices and Invertibility
Each elementary matrix E is invertible. The inverse of E is the elementary matrix of the same type that transforms E back into I.
Chapter 1 Theorem 1
Each matrix is row equivalent to one and only one reduced echelon matrix.
Chapter 1 Theorem 4
For a particular mxn matrix A, the following statements are either all true or all false: a) For each b in Rm, the equation Ax = b has a solution. b) Each b in Rm is a linear combination of the columns of A. c) The columns of A span Rm. d) A has a pivot position in every row.
The Invertible Matrix Theorem Part II
For a square nxn matrix A, the following are all true or all false: m. The columns of A form a basis of Rn. n. Col A = Rn. o. dim Col A = n. p. rank A = n. q. Nul A = {0}. r. dim Nul A = 0.
Definition of the Determinant
For n≥2, the determinant of a square matrix A = [aij] is the sum of n terms of the form +/- a1j det A1j, with plus and minus signs alternating, where the entries a11, a12,...,a1n are from the first row of A. In symbols, det A = a11 det A11 - a12 det A12 + ... + (-1)^(1+n) a1n det A1n.
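The definition translates directly into a recursive computation (exponential time, so an illustration only; the 3x3 sample matrix is an assumption of this sketch):

```python
import numpy as np

# Cofactor expansion along the first row, straight from the definition.
def det_cofactor(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # A1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = [[1, 5, 0], [2, 4, -1], [0, -2, 0]]
```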
Chapter 3 Theorem 6, Multiplicative Property
If A and B are square, det AB = (det A)(det B)
Chapter 3 Theorem 9
If A is a 2x2 matrix, the area of the parallelogram determined by the columns of A is |det A|. If A is a 3x3 matrix, the volume of the parallelepiped determined by the columns of A is |det A|.
Chapter 3 Theorem 2
If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A.
Chapter 2 Theorem 5
If A is an invertible square matrix, then for each b in Rn, the equation Ax = b has the unique solution x = A^(-1)b.
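A quick numerical check of the theorem (the 2x2 system is an assumption of this sketch; in practice `np.linalg.solve` is preferred over forming the inverse explicitly):

```python
import numpy as np

# Ax = b with A invertible: the unique solution is x = A^(-1) b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x_inv = np.linalg.inv(A) @ b       # x = A^(-1) b, per the theorem
x_solve = np.linalg.solve(A, b)    # same solution, better numerics
```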
Chapter 1 Theorem 5
If A is an m x n matrix, u and v are vectors in Rn, and c is a scalar, then a) A(u+v) = Au + Av, b) A(cu) = c(Au)
Chapter 1 Theorem 3
If A is an m x n matrix, with columns a1,...,an, and if b is in Rm, the matrix equation Ax = b has the same solution set as the vector equation x1a1 + x2a2 +...+ xnan = b, which, in turn, has the same solution set as the system of linear equations whose augmented matrix is [a1 a2 ... an b].
Chapter 3 Theorem 5
If A is square, det AT = det A
Properties of T
If T is a linear transformation then T(0) = 0 and T(cu + dv) = cT(u) + dT(v).
Chapter 2 Theorem 14, The Rank Theorem
If a matrix A has n columns, then rank A + dim Nul A = n.
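The rank theorem can be spot-checked numerically; here dim Nul A is recovered as the number of near-zero singular values (the rank-1 sample matrix and the 1e-10 tolerance are assumptions of this sketch):

```python
import numpy as np

# Rank theorem check: rank A + dim Nul A = n (number of columns).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # second row is twice the first: rank 1
rank = np.linalg.matrix_rank(A)
sing = np.linalg.svd(A, compute_uv=False)
dim_nul = A.shape[1] - np.count_nonzero(sing > 1e-10)
```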
Chapter 1 Theorem 9
If a set S = {v1,...,vp} in Rn contains the zero vector, then the set is linearly dependent.
Chapter 1 Theorem 8
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1,...,vp} in Rn is linearly dependent if p>n.
Use of an elementary matrix
If an elementary row operation is performed on an mxn matrix A, the resulting matrix can be written as EA, where the mxm matrix E is created by performing the same row operation on Im.
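A small numerical illustration (the 3x4 sample matrix and the chosen row operation are assumptions of this sketch):

```python
import numpy as np

# Performing a row operation on I_3 and left-multiplying by the result
# reproduces the same row operation on A.
A = np.arange(12.0).reshape(3, 4)

# Row operation: add -2 * row 0 to row 2, applied to the identity
E = np.eye(3)
E[2, 0] = -2.0

# The same row operation applied directly to A
direct = A.copy()
direct[2] += -2.0 * direct[0]
```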
Row-Vector Rule for Computing Ax
If the product Ax is defined then the ith entry in Ax is the sum of the products of corresponding entries from row i of A and from the vector x.
Subset of Rn Spanned
If v1,...,vp are in Rn then the set of all linear combinations of v1,...,vp is denoted by Span{v1,...,vp} and is called the subset of Rn spanned (or generated) by v1,...,vp. That is, Span{v1,...,vp} is the collection of all vectors that can be written in the form c1v1+c2v2+...+cpvp with c1,...,cp scalars.
Chapter 2 Theorem 4 - Invertibility
Let A = [a b; c d]. If ad - bc ≠ 0, then A is invertible and A^(-1) = 1/(ad-bc) [d -b; -c a]. If ad - bc = 0, A is not invertible.
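The 2x2 formula checked against numpy (the sample entries are an assumption of this sketch):

```python
import numpy as np

# 2x2 inverse formula: A^(-1) = 1/(ad - bc) * [[d, -b], [-c, a]].
a, b, c, d = 4.0, 7.0, 2.0, 6.0
A = np.array([[a, b], [c, d]])
det = a * d - b * c                         # 10, nonzero, so A is invertible
A_inv = (1.0 / det) * np.array([[d, -b], [-c, a]])
```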
Invertibility of Square Matrices
Let A and B be square matrices. If AB = I, then A and B are both invertible, with B = A^-1 and A=B^-1.
Chapter 2 Theorem 3
Let A and B denote matrices whose sizes are appropriate for the following sums and products. a) (AT)T = A. b) (A+B)T = AT + BT. c) For any scalar r, (rA)T = rAT. d) (AB)T = BTAT.
Chapter 2 Theorem 8 - The Invertible Matrix Theorem
Let A be a square matrix; the following are either all true or all false: a) A is an invertible matrix, b) A is row equivalent to the nxn identity matrix, c) A has n pivot positions, d) The equation Ax = 0 has only the trivial solution, e) The columns of A form a linearly independent set, f) The linear transformation x |-> Ax is one-to-one, g) The equation Ax = b has at least one solution for each b in Rn, h) the columns of A span Rn, i) The linear transformation x |-> Ax maps Rn onto Rn, j) There is an nxn matrix C s.t. CA = I, k) There is an nxn matrix D s.t. AD = I, l) AT is an invertible matrix.
Chapter 3 Theorem 7, Cramer's Rule
Let A be an invertible square matrix. For any b in Rn, the unique solution x of Ax = b has entries given by xi = (det Ai(b))/(det A), i = 1,2,...,n
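Cramer's rule translated into code (an illustration only, since determinant-based solving is far slower than elimination; the 2x2 system is an assumption of this sketch):

```python
import numpy as np

# Cramer's rule: x_i = det(A_i(b)) / det(A), where A_i(b) is A with
# column i replaced by b.
def cramer(A, b):
    dA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace column i by b
        x[i] = np.linalg.det(Ai) / dA
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = cramer(A, b)
```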
Chapter 3 Theorem 8, An Inverse Formula
Let A be an invertible square matrix; A^(-1) = (adj A)/(det A)
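The adjugate formula spelled out: adj A is the transpose of the matrix of cofactors Cij = (-1)^(i+j) det Aij (the 3x3 sample matrix is an assumption of this sketch):

```python
import numpy as np

# A^(-1) = (adj A) / (det A), with adj A built entry by entry.
def adjugate(A):
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Aij: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T          # adjugate is the transpose of the cofactor matrix

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [4.0, 0.0, 1.0]])
A_inv = adjugate(A) / np.linalg.det(A)
```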
Chapter 2 Theorem 2
Let A be an mxn matrix, and let B and C have sizes for which the indicated sums and products are defined. a) Associative law of multiplication: A(BC)=(AB)C. b&c) Distributive laws. d) r(AB) = (rA)B=A(rB) for any scalar r. e) Identity for matrix multiplication: ImA = A = AIn.
Chapter 2 Theorem 1
Let A, B, and C be matrices of the same size, and let r and s be scalars. a) A+B=B+A. b) (A+B)+C =A+(B+C). c) A+0=A. d) r(A+B)=rA+rB. e) (r+s)A = rA+sA. f) r(sA) = (rs)A
Chapter 2 Theorem 11
Let C be the consumption matrix for an economy, let d be the final demand. If C and d have nonnegative entries and if each column sum of C is less than 1, then (1 - C)^(-1) exists and the production vector x = (I-C)^(-1)d has nonnegative entries and is the unique solution of x = Cx + d.
Chapter 2 Theorem 15, The Basis Theorem
Let H be a p-dimensional subspace of Rn. Any linearly independent set of exactly p elements in H is automatically a basis for H. Also, any set of p elements of H that spans H is automatically a basis for H.
Chapter 1 Theorem 12
Let T : Rn -> Rm be a linear transformation and let A be the standard matrix for T. Then: a) T maps Rn onto Rm IFF the columns of A span Rm; b) T is one-to-one IFF the columns of A are linearly independent.
Chapter 1 Theorem 11
Let T : Rn -> Rm be a linear transformation. Then T is one-to-one if and only if the equation T(x) = 0 has only the trivial solution.
Chapter 2 Theorem 9
Let T : Rn -> Rn be a linear transformation and let A be the standard matrix for T. Then T is invertible IFF A is an invertible matrix. In that case, the linear transformation S given by S(x) = A^(-1)x is the unique function satisfying equations (1) S(T(x)) = x for all x in Rn; (2) T(S(x)) = x for all x in Rn.
Chapter 1 Theorem 10
Let T: Rn -> Rm be a linear transformation. Then there exists a unique matrix A s.t. T(x) = Ax for all x in Rn. In fact, A is the mxn matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in Rn. A= [T(e1) ... T(en)]
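Building the standard matrix column by column from T(ej) (the sample map T(x) = (x1 + 2 x2, 3 x1) is an assumption of this sketch):

```python
import numpy as np

# Column j of the standard matrix A is T(e_j).
def T(x):
    return np.array([x[0] + 2 * x[1], 3 * x[0]])

e1, e2 = np.eye(2)                    # standard basis vectors of R^2
A = np.column_stack([T(e1), T(e2)])   # A = [T(e1) T(e2)]
x = np.array([5.0, -1.0])
```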
Algorithm for Finding A^(-1)
Row reduce the augmented matrix [A I]. If A is row equivalent to I, then [A I] is row equivalent to [I A^(-1)]. Otherwise, A does not have an inverse.
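The algorithm sketched in code, using Gauss-Jordan elimination on [A I] with partial pivoting (the sample matrix is an assumption of this sketch, and singular inputs are not handled):

```python
import numpy as np

# Row reduce [A | I]; if A reduces to I, the right half becomes A^(-1).
def invert_by_row_reduction(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A I]
    for col in range(n):
        # interchange: bring up the row with the largest entry in this column
        p = col + np.argmax(np.abs(M[col:, col]))
        M[[col, p]] = M[[p, col]]
        M[col] /= M[col, col]                     # scale pivot to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]        # clear the rest of the column
    return M[:, n:]                               # right half is A^(-1)

A = np.array([[0.0, 2.0], [1.0, 3.0]])
A_inv = invert_by_row_reduction(A)
```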
Chapter 1 Theorem 6
Suppose the equation Ax = b is consistent for some given b, and let p be a solution. Then the solution set of Ax = b is the set of all vectors of the form w = p + vh, where vh is any solution of the homogeneous equation Ax = 0.
Definition of the Coordinates of X Relative to the Basis
Suppose the set B = {b1,...,bp} is a basis for a subspace H. For each x in H, the ______ are the weights c1,...,cp s.t. x = c1b1 + ... + cpbp, and the vector [x]B = [c1, ..., cp] in Rp is called the coordinate vector of x (relative to B) or the B-coordinate vector of x.
Definition of Rank
The ___ of a matrix A, denoted by rank A, is the dimension of the column space of A.
Definition of a Column Space - Col(A)
The _____ of a matrix A is the set of all linear combinations of the columns of A.
Definition of a Null Space
The _____ of a matrix A is the set of all solutions of the homogeneous equation Ax = 0.
Definition of Dimension, dim
The ______ of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero.
Chapter 2 Theorem 12
The _______ of an mxn matrix A is a subspace of Rn. Equivalently, the set of all solutions of a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of Rn.
Linear Independence of Matrix Columns
The columns of a matrix A are linearly independent IFF the equation Ax = 0 has only the trivial solution.
Existence of Solutions
The equation Ax = b has a solution IFF b is a linear combination of the columns of A.
Homogeneous Linear Systems
The homogeneous equation Ax = 0 has a nontrivial solution IFF the equation has at least one free variable.
Chapter 2 Theorem 13
The pivot columns of a matrix A form a basis for the column space of A.
Product of A and x
The product of A and x is the linear combination of the columns of A using the corresponding entries in x as weights. That is, Ax = [a1 a2 ... an][x1; ...; xn] = x1a1 + x2a2 + ... + xnan
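The two readings of Ax computed side by side (the 3x2 sample matrix is an assumption of this sketch):

```python
import numpy as np

# Ax as a matrix-vector product vs. as a linear combination of columns.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x = np.array([10.0, -1.0])
as_product = A @ x
as_combination = x[0] * A[:, 0] + x[1] * A[:, 1]   # x1*a1 + x2*a2
```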
Transpose Properties
The transpose of a product of matrices equals the product of their transposes in the reverse order.
Transformation: Reflection through the x1-axis
[ 1 0 ] [ 0 -1]
Transformation: Reflection through the origin
[-1 0] [0 -1]
Transformation: Reflection through the x2-axis
[-1 0] [0 1]
Transformation: Reflection through the line x2 = -x1
[0 -1] [-1 0]
Transformation: Projection onto the x2-axis
[0 0] [0 1]
Transformation: Reflection through the line x2 = x1
[0 1] [1 0]
Transformation: Projection onto the x1-axis
[1 0] [0 0]
Transformation: Vertical contraction and expansion
[1 0] [0 k]
Transformation: Vertical Shear
[1 0] [k 1]
Transformation: Horizontal Shear
[1 k] [0 1]
Transformation: Horizontal contraction and expansion
[k 0] [0 1]
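A few of the 2x2 standard matrices above applied to a sample point (the point (1, 2) and the shear factor k = 0.5 are assumptions of this sketch):

```python
import numpy as np

# Applying three of the standard transformation matrices to p = (1, 2).
p = np.array([1.0, 2.0])

reflect_x1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection through x1-axis
horiz_shear = np.array([[1.0, 0.5], [0.0, 1.0]])   # horizontal shear, k = 0.5
project_x2 = np.array([[0.0, 0.0], [0.0, 1.0]])    # projection onto x2-axis
```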
Chapter 2 Theorem 6
a. If A is an invertible matrix, then A^(-1) is invertible and (A^(-1))^(-1) = A. b. If A and B are square matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B in the reverse order. That is, (AB)^(-1) = B^(-1) A^(-1). c. If A is an invertible matrix, then so is AT and the inverse of AT is the transpose of A^(-1). That is, (AT)^(-1) = (A^(-1))T