Belbruno Linear Quiz #4


Theorem 26 (Invertible Matrices)

1) (A^-1)^-1=A 2) (AB)^-1=(B^-1)(A^-1) 3) (A^T)^-1=(A^-1)^T
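A quick numerical sanity check of these three identities (a minimal sketch using NumPy; A and B are arbitrary invertible example matrices):

```python
import numpy as np

# Arbitrary invertible 2x2 example matrices
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 3.0], [0.0, 2.0]])

assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)                          # 1) (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))   # 2) (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)                      # 3) (A^T)^-1 = (A^-1)^T
```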

Theorem 23 (Properties of Transpose)

1) (A^T)^T=A 2) (A+B)^T=A^T+B^T 3) (rA)^T=rA^T 4) (AB)^T=(B^T)(A^T)
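The same kind of spot check works for the transpose properties (a sketch with arbitrary example matrices and scalar r):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])
r = 3.0

assert np.allclose((A.T).T, A)              # 1) (A^T)^T = A
assert np.allclose((A + B).T, A.T + B.T)    # 2) (A+B)^T = A^T + B^T
assert np.allclose((r * A).T, r * A.T)      # 3) (rA)^T = rA^T
assert np.allclose((A @ B).T, B.T @ A.T)    # 4) (AB)^T = B^T A^T
```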

Theorem 22 (Properties of Matrix Multiplication)

1) A(BC)=(AB)C 2) A(B+C)=AB+AC 3) (B+C)A=BA+CA 4) r(AB)=(rA)B=A(rB) 5) IA=A=AI

Verifying that H is a subspace

1) The zero vector is in H=Span{v1, v2} because 0=0v1+0v2 is a linear combination of v1 and v2 2) Take two arbitrary vectors in H, u=s1v1+s2v2 and v=t1v1+t2v2; then u+v=(s1+t1)v1+(s2+t2)v2, which is again in H 3) cu is in H because cu=c(s1v1+s2v2)=cs1v1+cs2v2 Note: A line not through the origin is not a subspace

Proof for Theorem 26

1) Find a matrix C such that (A^-1)C=I and C(A^-1)=I; these equations are satisfied by C=A, so A^-1 is invertible and (A^-1)^-1=A 2) (AB)(B^-1*A^-1)=A(BB^-1)A^-1=AIA^-1=AA^-1=I, and likewise (B^-1*A^-1)(AB)=B^-1(A^-1A)B=B^-1IB=B^-1B=I 3) You are not responsible for this proof

Warnings Regarding Matrix Multiplication

1) The order of multiplication matters: AB does not generally equal BA, because each column of AB is a linear combination of the columns of A with weights from the corresponding column of B, while each column of BA is a linear combination of the columns of B with weights from the corresponding column of A 2) If AB=AC, B does not have to equal C 3) If AB=0, neither matrix is necessarily the 0 matrix
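A small NumPy illustration of all three warnings (the matrices are made-up examples chosen so the failures are easy to see):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # not invertible
B = np.array([[0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 0.0], [0.0, 2.0]])
D = np.array([[1.0, 2.0], [3.0, 4.0]])

# 1) Order matters: AD and DA differ
print(np.allclose(A @ D, D @ A))   # False

# 2) AB = AC even though B != C
print(np.allclose(A @ B, A @ C))   # True
print(np.allclose(B, C))           # False

# 3) AB is the zero matrix although neither A nor B is
print(A @ B)                       # [[0. 0.], [0. 0.]]
```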

Proof for Theorem 212

1) The zero vector is in NulA because A0=0 2) Take u and v in NulA, so Au=0 and Av=0; then A(u+v)=Au+Av=0+0=0 3) A(cu)=c(Au)=c0=0

Theorem 21

1) A+B=B+A 2) (A+B)+C=A+(B+C) 3) A+0=A 4) r(A+B)=rA+rB 5) (r+s)A=rA+sA 6) r(sA)=(rs)A

Diagonal Matrix

A diagonal matrix is an n x n matrix whose non-diagonal entries are all 0; the identity matrix is one example

Theorem 27

A is invertible if and only if A is row equivalent to I. In this case, any sequence of row operations that reduces A to I also transforms I into A^-1

Subspaces

A subspace of R^n, denoted H, is a set satisfying 1) The zero vector (the origin) is in H 2) For each u and v in H, u+v is also in H 3) For each u in H and each scalar c, cu is also in H In practice, H is typically the span of a set of vectors Note: R^n itself is a subspace of R^n, and the set containing only the zero vector is a subspace

Powers of a Matrix

A^k=A...A (k factors). Therefore, A^k x is the result of left-multiplying x by A k times. If k=0, A^0 x should be x, so A^0=I
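For example, repeated multiplication agrees with NumPy's matrix_power, and the zeroth power is the identity (arbitrary example matrix):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])   # arbitrary example matrix
x = np.array([2, 3])

A3 = A @ A @ A                                            # A^3 by repeated multiplication
assert np.array_equal(A3, np.linalg.matrix_power(A, 3))
assert np.array_equal(A3 @ x, A @ (A @ (A @ x)))          # A^3 x = x left-multiplied by A three times
assert np.array_equal(np.linalg.matrix_power(A, 0), np.eye(2, dtype=int))   # A^0 = I
```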

Elementary Matrices

An elementary matrix E is one obtained by performing a single row operation on I; it represents that row operation in matrix form. Left-multiplying a matrix by E performs the row operation represented by E on that matrix. If an elementary row operation is performed on an m x n matrix A, the result can be written as EA, where E is the m x m matrix created by performing the same row operation on I. Because row operations are reversible, E is also invertible
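A minimal sketch of one elementary matrix in action, using the row operation "add -3 times row 0 to row 1" (rows 0-indexed as in NumPy) on a made-up 3x2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # arbitrary 3x2 matrix

# E is built by applying "add -3 * row 0 to row 1" to the 3x3 identity
E = np.eye(3)
E[1, 0] = -3.0

# Left-multiplying by E performs that same row operation on A
manual = A.copy()
manual[1] = manual[1] - 3.0 * manual[0]
assert np.allclose(E @ A, manual)

# The operation is reversible, so E is invertible; its inverse adds +3 * row 0 back to row 1
E_inv = np.eye(3)
E_inv[1, 0] = 3.0
assert np.allclose(E @ E_inv, np.eye(3))
```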

Invertible Linear Transformations

Since a linear transformation T: R^n -> R^n is really just multiplication by its standard matrix, T is invertible if there exists a function S: R^n -> R^n such that S(T(x))=x and T(S(x))=x for all x; S is then the inverse of T, written T^-1

Proof for Theorem 25

Given Ax=b, let x=(A^-1)b. Then A(A^-1*b)=(A*A^-1)b=Ib=b (therefore, (A^-1)b is a solution of Ax=b). Now let u be any solution, so Au=b; then (A^-1)Au=(A^-1)b, Iu=(A^-1)b, u=(A^-1)b (therefore, (A^-1)b is the unique solution of Ax=b)

Proof for 28.7 (Ax=b for each b in R^n)

Given Ax=b, let x=(A^-1)b. Then A(A^-1)b=Ib=b. Therefore x=(A^-1)b is a solution, so Ax=b has a solution for every b

Finding A^-1

Because an invertible matrix A is row equivalent to I, we can find A^-1 by following the row reduction of A to I and performing the same row operations on I. Practically, we place A and I side by side in the augmented matrix [A | I] and perform identical row operations on each side to produce [I | A^-1]. If A does not row reduce to I, it is not invertible. Equivalently, [A | I]=[A e1 e2 ... en] represents the systems Ax=e1, ..., Ax=en; because AA^-1=I, the columns of A^-1 are exactly the solutions of these systems
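A minimal Gauss-Jordan sketch of this [A | I] procedure, assuming a square A and taking no special care about numerical conditioning (the helper name inverse_via_row_reduction is made up for illustration):

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row reduce [A | I] to [I | A^-1] by Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])        # the augmented matrix [A | I]
    for col in range(n):
        # Find a usable pivot at or below the diagonal; if none, A is not invertible
        pivots = np.nonzero(np.abs(M[col:, col]) > 1e-12)[0]
        if pivots.size == 0:
            raise ValueError("A does not row reduce to I, so it is not invertible")
        M[[col, col + pivots[0]]] = M[[col + pivots[0], col]]   # row swap
        M[col] /= M[col, col]                                    # scale pivot row to a leading 1
        for row in range(n):                                     # zero out the rest of the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                              # right half is now A^-1

A = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(inverse_via_row_reduction(A), np.linalg.inv(A))
```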

Inverse of a Matrix

Both matrices must be square to be inverses of each other. For C to be the inverse of A, both statements must be true: AC=I and CA=I; then C=A^-1 and is uniquely determined by A, so A(A^-1)=I. An invertible matrix is also called nonsingular

Column Space

The column space of a matrix A, ColA, is the set of all linear combinations of the columns of A; ColA is therefore the span of the columns of A. To determine whether a vector b is in ColA, row reduce the augmented matrix [A b] and check whether the system is consistent
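A numerical version of this membership test (a sketch; instead of literally row reducing [A b] it uses the equivalent check rank(A) = rank([A b]), and the matrix and vectors are made-up examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # rank-1 example: its columns span a line
b_in = np.array([2.0, 4.0, 6.0])    # a multiple of the first column, so it is in Col A
b_out = np.array([1.0, 0.0, 0.0])   # not on that line

def in_column_space(A, b):
    # [A b] is consistent exactly when the augmented column adds no new pivot,
    # i.e. when rank(A) == rank([A b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(in_column_space(A, b_in))    # True
print(in_column_space(A, b_out))   # False
```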

Proof to Theorem 21

Each equality in Theorem 21 is proven by showing that the matrix on the left side has the same size as the matrix on the right and that corresponding columns are equal. For example, if the jth columns of A, B, and C are the vectors aj, bj, and cj, then the jth columns of (A+B)+C and A+(B+C) are (aj+bj)+cj and aj+(bj+cj), which are equal because vector addition is associative

Proof for Theorem 27

If A is invertible, then Ax=b has a unique solution for every b, so A has a pivot in every row; because A is square, the pivots lie along the diagonal, so the reduced echelon form of A is I. Because A~I, there exist elementary matrices E1, ..., Ep such that A ~ E1A ~ E2(E1A) ~ ... ~ Ep(Ep-1...E1A)=I, that is, Ep...E1A=I. Since Ep, ..., E1 are all invertible, (Ep...E1)^-1(Ep...E1)A=(Ep...E1)^-1 I, so IA=(Ep...E1)^-1, A=(Ep...E1)^-1, and A^-1=[(Ep...E1)^-1]^-1=Ep...E1. Thus A^-1 is produced by applying to I the same sequence of row operations that reduces A to I

Theorem 25

If A is invertible, then for each b in R^n the equation Ax=b has the unique solution x=(A^-1)b Note: This method of finding x is seldom used in practice because row reducing [A b] is generally faster
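For concreteness, a small NumPy comparison of the two approaches (the matrix and right-hand side are arbitrary; np.linalg.solve plays the role of the faster row-reduction-style method):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary invertible matrix
b = np.array([3.0, 5.0])

x_via_inverse = np.linalg.inv(A) @ b     # x = A^-1 b
x_via_solve = np.linalg.solve(A, b)      # solves Ax = b directly, without forming A^-1

assert np.allclose(x_via_inverse, x_via_solve)
assert np.allclose(A @ x_via_solve, b)
```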

Matrix Multiplication

If A is m x n and B is n x p, AB=[A(b1) A(b2) ... A(bp)], so (AB)x=[A(b1) A(b2) ... A(bp)]x Each column of AB is a linear combination of the columns of A using the entries of the corresponding column of B as weights Note: The number of columns in A must match the number of rows in B Note: AB does not necessarily equal BA
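A short sketch of this column-by-column definition with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0, 7.0], [8.0, 9.0, 10.0]])   # (2x2)(2x3) -> 2x3

# The j-th column of AB is A times the j-th column of B
AB_by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
assert np.allclose(AB_by_columns, A @ B)

# Equivalently, column 0 of AB is a combination of the columns of A
# weighted by the entries of column 0 of B
col0 = B[0, 0] * A[:, 0] + B[1, 0] * A[:, 1]
assert np.allclose(col0, (A @ B)[:, 0])
```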

Proof for 28.3 (A has n pivots)

If A is n x n and Ax=0 has only the trivial solution, there are no free variables, so A has n pivots

Proof for 28.2 (A~I)

If A is square and has n pivots, then the pivots must lie on the diagonal. Therefore, the row reduced form of A is I

Proof for 28.4 (AX=0 has only trivial solutions)

Suppose Ax=0 and there is a matrix C such that CA=I. Then CAx=C0, so Ix=0 and x=0; the only solution is the trivial one

Proof for Theorem 29

If T is invertible, then T is onto: given any b, let x=S(b); then T(x)=T(S(b))=b, so each b is in the range of T. Therefore A, the standard matrix of T, is invertible. Conversely, if A is invertible, let S(x)=(A^-1)x; then S is a linear transformation which satisfies S(T(x))=S(Ax)=A^-1(Ax)=Ix=x, and similarly T(S(x))=x

Matrix Addition

If two matrices A and B are both m x n, then their sum A+B is the m x n matrix whose columns are the sums of the corresponding columns of A and B Note: The sum of two matrices is defined only when they are the same size

Theorem 24 (Formula for 2x2 Inverse)

Let A be the 2x2 matrix with rows (a b) and (c d), i.e. A=(a b, c d). If ad-bc is nonzero, then A is invertible and A^-1=(1/(ad-bc)) (d -b, -c a)
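A direct translation of the formula into code, checked against NumPy's inverse (the function name inverse_2x2 and the sample entries are made up for illustration):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    det = a * d - b * c                 # the determinant ad - bc
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is not invertible")
    return (1.0 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[4.0, 7.0], [2.0, 6.0]])  # ad - bc = 24 - 14 = 10
assert np.allclose(inverse_2x2(4.0, 7.0, 2.0, 6.0), np.linalg.inv(A))
```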

Proof A(BC)=(AB)C

Let C=[c1...cp], BC=[Bc1...Bcp], A(BC)=[A(Bc1)...A(Bcp)]=[(AB)c1...(AB)cp]=(AB)C

Null Space

The null space of a matrix A is the set NulA of all solutions of Ax=0 To test whether a vector v is in NulA, compute Av and check whether it is the 0 vector
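The test is a one-liner in NumPy (a sketch; A, v, and w are made-up examples):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
v = np.array([1.0, 1.0, -1.0])   # Av = 0, so v is in Nul A
w = np.array([1.0, 0.0, 0.0])    # Aw = (1, 2), so w is not

def in_null_space(A, x):
    return np.allclose(A @ x, 0.0)   # x is in Nul A exactly when Ax is the zero vector

print(in_null_space(A, v))   # True
print(in_null_space(A, w))   # False
```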

Proof for 28.8, 28.9 (A spans R^n, T(x)|->Ax maps onto)

Per Theorem 4, if Ax=b has a solution for every b, then the columns of A span R^n, and per Theorem 12.1, the transformation T(x)=Ax maps R^n onto R^n

Proof for 28.10, 28.11 (CA=I, AD=I)

Since A is invertible by 28.1, take B with AB=I; then A^-1(AB)=A^-1 I, so IB=A^-1 and B=A^-1, and (AB)B^-1=IB^-1, so AI=B^-1 and A=B^-1. Thus 28.1 implies 28.10 and 28.11

Theorem 29 (Inverse Linear Transformations)

T is invertible if and only if A, the standard matrix for T, is invertible. If so, S(x)=(A^-1)x satisfies S(T(x))=x and T(S(x))=x

Transposition of a Matrix

The columns of the transpose A^T are formed from the corresponding rows of the original matrix A Note: (AB)^T does not generally equal A^T * B^T; rather, (AB)^T=B^T * A^T

Determinant

The determinant of a 2x2 matrix (a b, c d) is the scalar D=ad-bc, which appears as the denominator in the 2x2 inverse formula If D is nonzero, the matrix is invertible If D=0, the matrix is not invertible

Diagonal entries

The diagonal entries of an m x n matrix are a11, a22, a33, ..., where the first number in the subscript is the row and the second is the column. These entries form the main diagonal of the matrix

Row-Column Rule for computing AB

The entry in row i and column j of AB is the sum of the products of corresponding entries from row i of A and column j of B: (AB)ij=ai1b1j+ai2b2j+...+ainbnj Note: rowi(AB)=rowi(A) * B
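A small sketch of the row-column rule, computing each entry as a dot product and comparing to NumPy's built-in product (example matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

def entry_ij(A, B, i, j):
    # (AB)_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj
    return sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

AB = np.array([[entry_ij(A, B, i, j) for j in range(B.shape[1])]
               for i in range(A.shape[0])])
assert np.allclose(AB, A @ B)

# Row i of AB equals (row i of A) times B
assert np.allclose(A[0] @ B, (A @ B)[0])
```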

Theorem 28 (Invertible Matrix Theorem)

For an n x n matrix A, the following statements are equivalent 1) A is invertible 2) A~I 3) A has n pivots 4) Ax=0 has only the trivial solution 5) The columns of A are linearly independent 6) The transformation x|->Ax is one-to-one 7) Ax=b has at least one solution for each b in R^n 8) The columns of A span R^n 9) The transformation x|->Ax maps R^n onto R^n 10) There is an nxn matrix C such that CA=I 11) There is an nxn matrix D such that AD=I 12) A^T is invertible Note: This theorem describes all invertible matrices; its negation describes all noninvertible matrices

Inverse of products

A product of n x n invertible matrices is invertible, and its inverse is the product of the inverses of the matrices in reverse order

Theorem 212

The null space of an m x n matrix A is a subspace of R^n; that is, the set of all solutions of Ax=0 is a subspace of R^n

Order of Matrix Multiplication

The order of the matrices is very important in matrix multiplication. AB can be described as A right-multiplied by B, or as B left-multiplied by A. If AB=BA, A and B commute with one another

Product of a Transpose

The transpose of a product of matrices is the product of their transposes in reverse order: (AB)^T=B^T * A^T

Proof for 28.5, 28.6 (A's linearly independent, T(x) is 1-1)

These are implied because Ax=0 has only the trivial solution

Equivalent Matrices

Two matrices are equal if they have the same size and their corresponding columns are equal

Scalar Multiple of a Matrix

rA, where r is a scalar, is the matrix obtained by multiplying each entry of A by r

