Math 2210: Linear Algebra


Main Lemma: Let A be an n by n matrix. Then the following two conditions are equivalent:

1) For every column vector B, there exists a column vector X such that AX = B.
2) The reduced row echelon form of A is the identity matrix.

Elementary Row Operations

1. (Replacement) Replace one row by the sum of itself and a multiple of another row.
2. (Interchange) Interchange two rows.
3. (Scaling) Multiply all entries in a row by a nonzero constant.
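
The three operations can be sketched in code on a matrix stored as a list of rows; the helper names below are illustrative, not part of the course notes:

```python
# A minimal sketch of the three elementary row operations.
# Row indices i, j are 0-based here.

def replacement(A, i, j, c):
    """Replace row i by (row i) + c * (row j); other rows are unchanged."""
    return [[a + c * b for a, b in zip(A[i], A[j])] if k == i else row[:]
            for k, row in enumerate(A)]

def interchange(A, i, j):
    """Interchange rows i and j."""
    B = [row[:] for row in A]
    B[i], B[j] = B[j], B[i]
    return B

def scaling(A, i, c):
    """Multiply all entries in row i by a nonzero constant c."""
    assert c != 0
    return [[c * a for a in row] if k == i else row[:] for k, row in enumerate(A)]

A = [[1, 2], [3, 4]]
print(replacement(A, 1, 0, -3))  # [[1, 2], [0, -2]]
```

Each operation returns a new matrix, mirroring the fact that row operations are invertible and never destroy information.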

A valid procedure for finding the determinant of an arbitrary 3x3 matrix A

1. Cofactor expansion along the jth row of A.
2. Augmenting A with its first two columns (in order) and then crosshatching the resulting matrix [A | a1 a2] (the rule of Sarrus).
3. Cofactor expansion along the ith column of A.
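
Method 2 is the rule of Sarrus: sum the three down-right diagonal products of the augmented matrix and subtract the three up-right diagonal products. A minimal sketch (function name is illustrative):

```python
def det3_sarrus(A):
    """Determinant of a 3x3 matrix via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = A
    # down-right diagonals minus up-right diagonals of [A | a1 a2]
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

Note that crosshatching works only for 3x3 matrices; cofactor expansion works for any size.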

Theorem 6

1. Every permutation is a product of cycles.
2. Every permutation is a product of transpositions.

Matrix Properties- Warnings

1. In general, AB does not equal BA.
2. The cancellation laws do not hold for matrix multiplication. That is, if AB = AC, it is not true in general that B = C.
3. If a product AB is the zero matrix, you cannot conclude in general that either A = 0 or B = 0.
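
All three warnings can be verified with small 2x2 examples; the `matmul` helper and the particular matrices below are illustrative:

```python
# Concrete counterexamples for the three warnings.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0], [0, 0]]   # A "forgets" the second row of anything it multiplies
B = [[1, 2], [3, 4]]
C = [[1, 2], [0, 0]]   # differs from B only in the row A forgets
Z = [[0, 1], [0, 0]]   # nonzero, but Z*Z = 0

print(matmul(A, Z) == matmul(Z, A))  # False: AB != BA in general
print(matmul(A, B) == matmul(A, C))  # True, yet B != C: no cancellation
print(matmul(Z, Z))                  # [[0, 0], [0, 0]]: a nonzero Z with Z*Z = 0
```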

Using Row Reduction to solve a linear system

1. Write the augmented matrix of the system.
2. Use the row reduction algorithm to obtain an equivalent augmented matrix in echelon form. Decide whether the system is consistent. If there is no solution, stop; otherwise, continue.
3. Continue with row reduction to obtain the reduced echelon form.
4. Write the system of equations corresponding to the matrix obtained in step 3.
5. Rewrite each nonzero equation from step 4 so that its one basic variable is expressed in terms of any free variables appearing in the equation.
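
The steps above can be sketched as a small exact-arithmetic row reducer; this sketch handles the unique-solution case only, and the function name is illustrative:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form, with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # interchange
        p = M[r][c]
        M[r] = [x / p for x in M[r]]      # scale pivot to 1
        for i in range(rows):
            if i != r and M[i][c] != 0:   # clear the rest of the pivot column
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

# Solve x + 2y = 5, 3x + 4y = 6 via its augmented matrix:
R = rref([[1, 2, 5], [3, 4, 6]])
print([row[-1] for row in R])  # x = -4, y = 9/2
```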

Existence and Uniqueness Theorem

A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column; that is, if and only if an echelon form of the augmented matrix has no row of the form [0 ... 0 b] with b nonzero. If a linear system is consistent, then the solution set contains either (i) a unique solution, when there are no free variables, or (ii) infinitely many solutions, when there is at least one free variable.
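
The criterion can be checked mechanically: reduce the augmented matrix to echelon form and look for a row [0 ... 0 b] with b nonzero. A sketch, with an illustrative helper name:

```python
from fractions import Fraction

def is_consistent(M):
    """Forward-eliminate the augmented matrix M; apply the theorem's test."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):             # never pivot on the rightmost column
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    # inconsistent iff some row is all zeros except a nonzero last entry
    return not any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in M)

print(is_consistent([[1, 2, 3], [2, 4, 6]]))  # True  (infinitely many solutions)
print(is_consistent([[1, 2, 3], [2, 4, 7]]))  # False (row [0 0 1] appears)
```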

Vector

A matrix with only one column is called a column vector, or simply a vector, e.g. u = [1 2]^T. This vector is in R2, since it has two entries; it is represented geometrically by a point in a two-dimensional coordinate space.

Lemma 1: Ω(AB) = (ΩA)B

A row operation applied to a product AB of matrices gives the same result as applying the row operation to the first factor A and then multiplying B on the left by the result.

Vector fact 2

A scalar multiple of a continuous function is continuous

Linearly Dependent Vectors

A set of two vectors {v1, v2} is linearly dependent if at least one of the vectors is a multiple of the other. The set is linearly independent if and only if neither of the vectors is a multiple of the other.

Corollary 2 to Theorem 1

A square nxn matrix A has a left inverse if and only if it has a right inverse, and these are equal. Thus the inverse of A is unique.

Basic Variable

A variable corresponding to a pivot column

Free Variable

A variable which does not correspond to a pivot column

Vector Equation

A vector equation x1a1 + x2a2 + ... + xnan = b has the same solution set as the linear system whose augmented matrix is [a1 a2 ... an b]. In particular, b can be generated by a linear combination of a1, ..., an if and only if there exists a solution to the linear system corresponding to the above matrix.

Lemma 12

AB is invertible if and only if both A and B are invertible

A square matrix A is called invertible if there is a matrix B (the inverse of A) such that...

AB=BA=I

Row Echelon Form

All non-zero rows are above all zero rows. Each leading entry of a row is in a column to the right of the leading entry in the row above it. All entries below a pivot are zeros.

Characterization of Linearly Dependent Sets

An indexed set S = {v1,..., vp} of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others.

Linear Independence

An indexed set of vectors {v1,..., vp} in Rn is said to be linearly independent if the vector equation x1v1+x2v2+...+xpvp= 0 has only the trivial solution.
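
The definition yields a concrete test: place v1, ..., vp as the columns of a matrix and row-reduce; the equation x1v1 + ... + xpvp = 0 has only the trivial solution exactly when every column is a pivot column. A sketch (helper name illustrative):

```python
from fractions import Fraction

def is_independent(vectors):
    """vectors: a list of vectors in R^n, each given as a list of n numbers."""
    # build the matrix whose columns are the given vectors
    M = [[Fraction(v[i]) for v in vectors] for i in range(len(vectors[0]))]
    rows, cols = len(M), len(M[0])
    r = pivots = 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        pivots += 1
    return pivots == cols   # independent iff every column is a pivot column

print(is_independent([[1, 0], [0, 1]]))          # True
print(is_independent([[1, 2], [2, 4]]))          # False: v2 = 2*v1
print(is_independent([[1, 0], [0, 1], [1, 1]]))  # False: three vectors in R2
```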

Theorem 1.7

An nxn matrix A is invertible if and only if A is row equivalent to In, and in this case, any sequence of elementary row operations that reduces A to In also transforms In into A^−1 .

Lemma 3

Any interchange of two numbers in the ordering j1,..., jn can be obtained by performing an odd number of interchanges of adjacent numbers. This says that every transposition is the product of an odd number of adjacent transpositions (i, i + 1).

Right inverse

B is called a ( ) of A if AB = I

Left inverse

B is called a ( ) of A if BA = I

Diagonal Matrix

The diagonal entries of an m × n matrix A = [aij] are a11, a22, a33, .... A diagonal matrix is a square n × n matrix whose non-diagonal entries are all zero.

Uniqueness of the Reduced Echelon Form

Each matrix is row equivalent to one and only one reduced echelon matrix

Corollary 1 to Theorem 2

Every invertible matrix A is a product of elementary row matrices

Theorem 8

For all n × n matrices A and B, det(AB) = det(A) det(B)
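
A numerical spot-check of Theorem 8 with 2x2 matrices (helper names illustrative):

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    (a, b), (c, d) = M
    return a * d - b * c

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
print(det2(matmul2(A, B)), det2(A) * det2(B))  # both equal 10
```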

Axioms for Scalar Multiplication

For all reals a, b and all v in V: a(bv) = (ab)v.
For all vectors v in V: 1v = v.
For all vectors v in V: 0v = 0.

Axioms relating Scalar Multiplication and Vector Addition

For all reals a, b and all vectors v in V: (a + b)v = av + bv.
For all reals a and all vectors v, w in V: a(v + w) = av + aw.

Algebraic Properties of Rn

For all u, v, w in Rn and all scalars c and d:
1. u + v = v + u
2. (u + v) + w = u + (v + w)
3. u + 0 = 0 + u = u
4. u + (−u) = −u + u = 0
5. c(u + v) = cu + cv
6. (c + d)u = cu + du
7. c(du) = (cd)u
8. 1u = u

Lemma 7

If A has two identical rows, its determinant is zero

Theorem 1.5

If A is an invertible n×n matrix, then for each b in Rn, the equation Ax=b has the unique solution x=A^−1b.

Corollary 3 to Lemma 10

If B is a square matrix, and A is an invertible matrix, then detBA=(detB)(detA)

Corollary 2 to Lemma 10

If B is an invertible square matrix and A is any square matrix, then detBA=(detB)(detA)

Corollary 4 to Lemma 10

If B is invertible, then detB does not equal 0

Lemma 11

If M is a square matrix and X is a non-zero vector and MX=0, then detM=0

Lemma 10

If O is a row operation and A is a square matrix, then det(OA) = (det OI)(det A)

Corollary 1 to Lemma 10

If O1,..., Ok are row operations and A is a square matrix, then det(O1...OkA)=(det(O1 I))...(det(OkI))(det A)

Theorem 1.9

If a set S = {v1,..., vp} in R n contains the zero vector, then the set is linearly dependent.

Theorem 1.8

If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set {v1, ..., vp} in Rn is linearly dependent if p > n.

Corollary 1 to Theorem 1

If nxn matrix A has a left inverse B, then B is also a right inverse of A

Row Equivalent

If the augmented matrices of two linear systems are row equivalent, then the two systems have the same solution set

Parallelogram Rule for Addition

If u and v in R2 are represented as points in the plane, then u+v corresponds to the fourth vertex of the parallelogram whose other vertices are u, 0, and v

Subset of Rn spanned by v1, ...vp

If v1,..., vp are in Rn, then the set of all linear combinations of v1,..., vp is denoted by Span {v1,..., vp} and is called the subset of Rn spanned (or generated) by v1,..., vp. That is, Span {v1,... vp} is the collection of all vectors that can be written in the form c1v1+c2v2+...+ cpvp with c1,..., cp scalars.

Corollary to Theorem 4

In Rn, any n + 1 vectors are linearly dependent

Theorem 4

In Rn, n vectors v1,..., vn span Rn if and only if they are independent

Theorem 2.3

Let A and B denote matrices whose sizes are appropriate for the following sums and products. a. (A^T)^T=A b. (A+B)^T=A^T+B^T c. For any scalar r, (rA)^T=rA^T d. (AB)^T =B^TA^T

Matrix Properties- Advanced

Let A be an m × n matrix, and let B and C have sizes for which the indicated sums and products are defined. a. A(BC) = (AB)C (associative law of multiplication) b. A(B+C)=AB+AC (left distributive law) c. (B+C)A=BA+CA (right distributive law) d. r(AB)=(rA)B=A(rB) for any scalar r e. ImA=A=AIn (identity for matrix multiplication)

Matrix Multiplication

If A is an m×n matrix and B is an n×p matrix with columns b1, ..., bp, then the product AB is the m×p matrix whose columns are Ab1, ..., Abp. That is, AB = A[b1 b2 ... bp] = [Ab1 Ab2 ... Abp].
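
The column definition can be implemented directly: assemble AB column-by-column as [Ab1 ... Abp]. A sketch with illustrative helper names:

```python
def matvec(A, x):
    """A times a column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    """AB built from its columns Ab1, ..., Abp."""
    p = len(B[0])
    cols = [matvec(A, [row[j] for row in B]) for j in range(p)]  # each Abj
    return [[cols[j][i] for j in range(p)] for i in range(len(A))]

A = [[1, 2], [3, 4]]         # 2x2
B = [[5, 6, 7], [8, 9, 10]]  # 2x3, so AB is 2x3
print(matmul(A, B))  # [[21, 24, 27], [47, 54, 61]]
```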

Theorem 2

Let A be an n by n matrix. Then A has the identity matrix as its reduced echelon form if and only if A is invertible.

Corollary to Main Lemma

Let A be an n by n matrix. Then AX = B has a solution X for every column matrix B if and only if A is invertible

Lemma 9

Let A be any nxn matrix, let I be the nxn identity matrix. Let O be a row operation, let OI be the corresponding row matrix. Then det(OA)=det((OI)A)=(det(OI))detA

Matrix Properties- Basic

Let A, B, and C be matrices of the same size, and let r and s be scalars. a. A+B=B+A b. (A+B)+C=A+(B+C) c. A+0=A d. r(A+B)=rA+rB e. (r+s)A=rA+sA f. r(sA)=(rs)A

Theorem 2.4

Let A = [a b; c d]. If ad − bc ≠ 0, then A is invertible and A^−1 = (1/(ad − bc)) [d −b; −c a]. If ad − bc = 0, then A is not invertible.
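
The 2x2 formula of Theorem 2.4, sketched with exact rationals (function name illustrative):

```python
from fractions import Fraction

def inverse2(M):
    """Inverse of a 2x2 matrix via 1/(ad - bc) * [d -b; -c a], or None."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        return None                        # not invertible
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

A = [[1, 2], [3, 4]]
Ainv = inverse2(A)
print(Ainv)                        # [[-2, 1], [3/2, -1/2]] as Fractions
print(inverse2([[1, 2], [2, 4]]))  # None, since ad - bc = 0
```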

Lemma 2

Let C be the reduced row echelon form of A. Let P be the product of the row matrices reducing A to C, which is an invertible matrix such that PA=C. Then for any column vector B, the solutions X to AX=B are the same as the solutions X to CX=PB

Lemma 8

Let matrix B be obtained from matrix A by adding a constant c times row k to row i. Then detB=detA

Theorem 3

Let v1, ..., vm span Rn. Then m is greater than or equal to n

Lemma 5

Multiply the ith row of matrix A by a non-zero scalar c to get matrix B. Then detB = c*detA

Lemma 6

Interchange rows i and j of matrix A to get matrix B. Then detB = −detA.

Theorem 1- If nxn matrix A has a right inverse B, then that B is also a left inverse of A

Proof: AB = I implies that for any column vector C, A(BC) = (AB)C = IC = C, so AX = C always has a solution, namely X = BC. By the Corollary to the Main Lemma, A is invertible; multiplying AB = I on the left by A^−1 gives B = A^−1, so BA = A^−1A = I.

Theorem 5 Proof

Proof: If Y is a solution to AY=0, we show that X0+Y is a solution of AX=B. By the distributive law for matrix multiplication A(X0+Y)=AX0+AY=AX0+0=AX0=B

Algorithm for finding A^−1

Row reduce the augmented matrix [A I]. If A is row equivalent to I, then [A I] is row equivalent to [I A^−1]. Otherwise, A does not have an inverse.
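
The algorithm sketched directly: row-reduce [A | I]; if the left block becomes I, the right block is A^−1 (function name illustrative):

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan inversion of a square matrix, or None if singular."""
    n = len(A)
    # augment A with the identity matrix: M = [A | I]
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next((i for i in range(c, n) if M[i][c] != 0), None)
        if p is None:
            return None                   # A is not row equivalent to I
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]         # the right block is A^-1

print(inverse([[2, 0], [0, 4]]))  # [[1/2, 0], [0, 1/4]] as Fractions
```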

Proof of Theorem 2

Suppose that A has the identity matrix as its reduced echelon form, and let P be the product of the elementary row matrices used to reduce A to I. Then PA = I, so A has a left inverse.

Theorem 5:

Suppose that AX = B is a matrix equation with fixed A, B. Suppose X0 is a particular solution to AX = B. Then the solutions of AX = B are all of the form X0 + Y , where Y is a solution of the (homogeneous) system AY = 0 . These are the only solutions
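
A numerical illustration of Theorem 5: take a particular solution X0 of AX = B and any solution Y of AY = 0; then X0 + Y also solves AX = B. The matrices and helper name below are illustrative:

```python
def matvec(A, x):
    """A times a column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [2, 4]]   # singular, so AY = 0 has nonzero solutions
B = [3, 6]
X0 = [1, 1]            # a particular solution: A*X0 = B
Y = [2, -1]            # a homogeneous solution: A*Y = 0

print(matvec(A, X0))                               # [3, 6]
print(matvec(A, Y))                                # [0, 0]
print(matvec(A, [x + y for x, y in zip(X0, Y)]))   # [3, 6] again
```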

Augmented Matrix

The coefficient matrix with an added column containing the constants from the right of the equations

Linearly Independent Columns

The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has only the trivial solution.

Lemma 4

The determinant of A =(aij) and the determinant of the transpose A^t=(bij)=(aji) of A are the same

Pivot

The first nonzero entry of each row in a row echelon form matrix (scaled to 1 in reduced echelon form)

Linear Dependence

The indexed set of vectors {v1, ..., vp} in Rn is said to be linearly dependent if there exist weights c1, ..., cp, not all zero, such that c1v1 + c2v2 + ... + cpvp = 0

Lemma 2 Proof

The matrix equation AX = B can be interpreted as a set of non-homogeneous equations. Row operations do not change the solution set

Reduced Row Echelon Form

The matrix is in Reduced Echelon Form. Each pivot is 1. The pivot is the only non-zero entry in the pivot column.

Given A = [ai j], the (i,j) cofactor of A is...

The number Cij = (−1)^(i+j) det(Aij), where Aij is the submatrix obtained by deleting row i and column j of A

Theorem 7

The number of interchanges needed to carry j1, ..., jn into 1, ..., n is always even or always odd. That is, if we represent a particular permutation as a product of transpositions, it always takes either an even number or an odd number of transpositions.
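
Theorem 7 in code: since each adjacent interchange changes the inversion count by exactly one, the parity of any sequence of interchanges equals the parity of the number of inversions, so the sign of a permutation is well defined. A sketch (function name illustrative):

```python
def sign(perm):
    """Sign of a permutation of 1..n, via its inversion count."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

print(sign((1, 2, 3)))  # 1: identity, zero interchanges
print(sign((2, 1, 3)))  # -1: one transposition
print(sign((2, 3, 1)))  # 1: a 3-cycle = product of two transpositions
```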

Vector fact 3

The set V of continuous functions is a vector space.

Vector fact 1

The sum of continuous functions is continuous

Consistent

The system of equations has either one or infinitely many solutions.

Inconsistent

The system of equations has no solution

Coefficient Matrix

The system of linear equations written with coefficients of each variable aligned in columns

Equivalent

The systems have the same solution sets

Axioms for Vector Addition

There is a vector 0 in V such that for all v in V, 0 + v = v.
For every vector v in V, there is a unique vector −v in V such that v + (−v) = 0.
For all vectors u, v, w in V, u + (v + w) = (u + v) + w.
For all vectors u, v in V, u + v = v + u.

Every basis of Rn has precisely n elements

We have shown that all spanning sets in Rn have at least n elements, and that all independent sets in Rn have at most n elements. So all independent spanning sets must have exactly n elements, neither more nor less

Permutation of {1,..., n} is

a 1-1, onto map sigma: {1, ..., n} -> {1, ..., n}. Each permutation sigma contributes one term to the determinant sum. The inverse sigma^-1 of a permutation is a permutation, and the composition (product) (sigma o tau)(i) = sigma(tau(i)) of permutations sigma, tau is also a permutation.

System of linear equations

a collection of one or more linear equations

Solution of a system of linear equations

a list (s1, s2, ..., sn) of numbers that makes each equation in the system true when substituted for the variables

Vector space over the real numbers

a non-empty set V equipped with two operations: vector sum and scalar multiplication

Theorem 1.6

a. If A is an invertible matrix, then A^−1 is invertible and (A^−1)^−1=A. b. If A and B are invertible n×n matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B in the reverse order. That is, (AB)^−1=B^−1A^−1 . c. If A is an invertible matrix, then so is A^T , and the inverse of A^T is the transpose of A^−1 . That is, (A^T)^−1=(A^−1)^T

Linear equation

an equation that can be written in the form: a1x1+a2x2+...+anxn= b where a1,..., an and b are real or complex numbers and x1,... xn are variables

A basis for Rn is

an independent spanning set

Vector sum

assigns to each pair v, w in V an element v + w of V

Scalar multiplication

assigns to each real a and each v in V an element av in V

Determinant of a 2 × 2 matrix

For A = [a b; c d], det A = ad − bc

Vectors v1,..., vk are independent (also called linearly independent) if

for all scalars c1,..., ck, if c1v1+...+ckvk=0, then c1=...=ck =0.

Consistent system of linear equations

if it has either one solution or infinite solutions

Inconsistent system of linear equations

if it has no solution

n is called

the dimension (or linear dimension) of Rn

Solution set

the set of all possible solutions

Vectors v1,..., vk are dependent (also called linearly dependent) if

there exist scalars c1,..., ck, not all zero, such that c1v1+...+ckvk=0.

Vectors v1,..., vk are a basis for Rn if

they are both independent and span Rn

Let v1,..., vn be n vectors in Rn. Then v1,..., vn are independent if and only if

v1,..., vn span Rn.

