Linear Algebra (up to midterm)


Zero determinants and square matrices

Let A be an (n x n) matrix.
1. If A has a row or column consisting entirely of zeroes, then det(A) = 0
2. If A has two identical rows or two identical columns, then det(A) = 0

Invertible Matrix Theorem

Let A be an (n x n) matrix. The following are equivalent:
1. A is invertible (or nonsingular)
2. Ax = 0 has only the trivial solution x = 0 (where x and 0 are vectors)
3. A is row equivalent to I
4. A is a product of elementary matrices
(in the book this is called Equivalent Conditions for Nonsingularity, page 65; see 10/13 notes for proofs)

Determinant of elementary matrices

det(E*A) = det(E) * det(A), where:
det(E) = -1 if E is of type 1 (row swap)
det(E) = alpha if E is of type 2 (row multiplied by the scalar alpha)
det(E) = 1 if E is of type 3 (multiple of one row added to another)
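A quick pure-Python check of these facts (the det/matmul helpers and the example matrices are my own, not from the notes):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # type 1: swap rows 1 and 2
E2 = [[1, 0, 0], [0, 3, 0], [0, 0, 1]]   # type 2: multiply row 2 by 3
E3 = [[1, 0, 2], [0, 1, 0], [0, 0, 1]]   # type 3: R1 + 2*R3 -> R1

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]    # arbitrary 3x3 example, det = 8
assert det(E1) == -1 and det(E2) == 3 and det(E3) == 1
for E in (E1, E2, E3):
    assert det(matmul(E, A)) == det(E) * det(A)
```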

equivalent systems

have the same solution set

reduced row echelon form

satisfies the REF criteria and also:
4. Each leading 1 is the only non-zero entry in its column
Note: there can still be non-zero entries elsewhere; not every column has to contain a leading 1

Vector space axioms

A set V forms a vector space if these are satisfied:
A1. x+y = y+x
A2. (x+y)+z = x+(y+z)
A3. x+0 = x
A4. x+(-x) = 0
A5. a(x+y) = ax+ay
A6. (a+b)x = ax+bx
A7. (ab)x = a(bx)
A8. 1*x = x
for all vectors x, y, z in V, where a and b are scalars and 0 is the zero vector

Elementary matrix

The result of performing one of the three elementary row operations on the identity matrix. Left-multiplying an arbitrary (m x n) matrix by an elementary matrix results in the arbitrary matrix with that elementary operation performed on it.

Nullspace N(A)

The set of all solutions to the homogeneous system Ax = 0, which forms a subspace of R^n:
N(A) = {x ∈ R^n | Ax = 0}

Spanning set

the set {v1, ... , vn} is a spanning set for V if and only if every vector in V can be written as a linear combination of v1, v2, ..., vn

ASK PROF: "Ax=0 has only the trivial solution"

The trivial solution is the zero vector, so this statement says that Ax = 0 only when x = 0. Equivalently, Ax = b has a unique solution for any n-vector b, and the system has n leading variables (no free variables).

Linear Dependence

v1, ..., vn are linearly dependent if at least one vector in the list can be written as a linear combination of the other vectors

Minimal spanning set

A spanning set with no unnecessary elements (i.e., all elements in the set are needed to span the vector space); a spanning set is minimal if its elements are linearly independent

Lower triangular matrix

A square matrix A=(aij) where aij=0 for all i < j (i.e., zeroes above the diagonal)
ex:
1 0 0
4 2 0
5 6 3

Strictly lower triangular matrix

A square matrix A=(aij) where aij=0 for all i ≤ j (i.e., zeroes on and above the diagonal)
ex:
0 0 0
4 0 0
5 6 0

Upper triangular matrix

A square matrix A=(aij) where aij=0 for all i > j (i.e., zeroes below the diagonal)
ex:
1 3 1
0 5 2
0 0 3

Strictly upper triangular matrix

A square matrix A=(aij) where aij=0 for all i ≥ j (i.e., zeroes on and below the diagonal)
ex:
0 3 1
0 0 2
0 0 0

Diagonal matrix

A square matrix A=(aij) where aij=0 for all i ≠ j (i.e., everything off the diagonal is zero)
ex:
1 0 0
0 5 0
0 0 3

Identity matrix

A square matrix with 1's on the diagonal and 0's elsewhere. IA = AI = A; it commutes with all matrices of its size.

Linear independence

A subset v1,...,vn of a vector space V is called linearly independent if a linear combination vanishes only in the obvious way:
a1v1 + ... + anvn = 0 implies a1 = ... = an = 0
(the only scalars that make the combination equal the zero vector are all zero)

Linear combination

A sum of the form a1v1 + a2v2 + ... + anvn, where a1, ..., an are all scalars and v1, ..., vn are vectors in the vector space V

Inverse of two multiplied matrices

(AB)^(-1) = B^(-1) * A^(-1) (note the reversed order)

Properties of the transpose of a matrix

(A^T)^T = A
(xA)^T = x * A^T
(A+B)^T = A^T + B^T
(AB)^T = B^T * A^T
where x is a scalar
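A small numeric sanity check of the reversal rule (AB)^T = B^T * A^T (the 2x2 matrices are my own example):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(M):
    """Transpose: rows of M become columns."""
    return [list(col) for col in zip(*M)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert T(T(A)) == A                        # (A^T)^T = A
assert T(matmul(A, B)) == matmul(T(B), T(A))   # order reverses
```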

Vectors v1, v2, ..., vn form a basis for vector space V if and only if:

1. v1, ..., vn are linearly independent
2. v1, ..., vn span V
(the elements of a minimal spanning set form the basic building blocks for the whole space)

Cramer's Rule Breakdown

1) Find the determinant of the coefficient matrix A
2) Substitute column vector b into the first column of A and find the determinant of that new matrix
3) Divide the determinant from step 2 by the determinant from step 1. This is your solution for x1.
4) Repeat steps 2-3 for each column of A to get the remaining xj values
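A 2x2 walk-through of the steps above (the system 2x + y = 5, x + 3y = 10 is my own example; its solution is x = 1, y = 3):

```python
def det2(M):
    """2x2 determinant."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 1], [1, 3]]
b = [5, 10]

d = det2(A)                      # step 1: det of the coefficient matrix (= 5)
xs = []
for j in range(2):               # steps 2-4: replace column j of A by b
    Aj = [row[:] for row in A]
    for i in range(2):
        Aj[i][j] = b[i]
    xs.append(det2(Aj) / d)      # step 3: ratio of determinants

print(xs)  # [1.0, 3.0]
```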

Additional vector space properties (lemma)

1. 0x = 0
2. x + y = 0 implies y = -x
3. (-1)x = -x
(proofs on page 121 of textbook; learn proofs for test)

Properties of row equivalent matrices

1. A ~R A
2. If A ~R B, then B ~R A
3. If A ~R B and B ~R C, then A ~R C

row echelon form (3 rules)

1. All zero rows are at the bottom
2. The first non-zero entry from the left in each row must be 1 (called the leading 1)
3. Each leading 1 is to the right of all leading 1's in the rows above it

Row operations and the determinant

1. Swapping rows changes the sign of the determinant
2. Multiplying a row by a scalar multiplies the determinant by that scalar
3. Adding a multiple of one row to another does not change the determinant
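Checking the three rules on a 3x3 example (the matrix and helper are my own; det3(A) happens to equal 1):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]                 # det3(A) == 1
swapped = [A[1], A[0], A[2]]                          # rule 1: swap rows 1, 2
scaled  = [[2 * x for x in A[0]], A[1], A[2]]         # rule 2: row 1 times 2
added   = [[x + 3*y for x, y in zip(A[0], A[2])], A[1], A[2]]  # rule 3

assert det3(swapped) == -det3(A)
assert det3(scaled) == 2 * det3(A)
assert det3(added) == det3(A)
```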

Equal matrices

Two matrices A and B are equal if:
1. They are the same size
2. aij = bij for all i, j

det(aA) where a is a scalar and A is an n x n matrix

= a^n * det(A)
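A quick 2x2 check of this identity, scaling every entry by a = 3 (numbers are my own example):

```python
a, n = 3, 2
A = [[1, 2], [3, 4]]                                # det(A) = -2
aA = [[a * x for x in row] for row in A]            # scale EVERY entry
det2 = lambda M: M[0][0]*M[1][1] - M[0][1]*M[1][0]
assert det2(aA) == a**n * det2(A)                   # -18 == 9 * (-2)
```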

Pivot column

A column that contains a pivot position

Consistency theorem for linear systems

A linear system Ax=b is consistent if and only if (<=>) b can be written as a linear combination of the columns of A

Inhomogeneous

A linear system in which bi ≠ 0 for at least one i

Homogeneous

A linear system in which bi = 0 for all i (i.e., the right-hand side is all zeroes). Written as Ax = 0. Homogeneous systems are ALWAYS consistent because they have the trivial solution x = 0.

Pivot position

A location that corresponds to a leading 1 in the RREF form of a matrix.

Row equivalent

A matrix B is said to be row equivalent to A if there exists a finite sequence of elementary matrices E1, E2, ..., Ek such that:
B = Ek * E(k-1) * ... * E1 * A
In other words, B is row equivalent to A if B can be obtained from A by a finite number of elementary row operations. Notation: ~R

Determinants and invertibility

A matrix is invertible if and only if its determinant is not zero

Singular matrix

A matrix that does not have an inverse

Invertible matrix

A matrix that has an inverse

Computing the inverse of nonsingular matrix A using determinants

A^(-1) = (1/det(A)) * adj(A)
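A 2x2 instance of this formula (the matrix is my own example; for 2x2, adj([[a, b], [c, d]]) = [[d, -b], [-c, a]]):

```python
from fractions import Fraction

A = [[Fraction(4), Fraction(7)],
     [Fraction(2), Fraction(6)]]
detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]          # 24 - 14 = 10
adjA = [[A[1][1], -A[0][1]],
        [-A[1][0], A[0][0]]]                      # adjoint of a 2x2
Ainv = [[x / detA for x in row] for row in adjA]  # A^-1 = (1/det) * adj
# sanity check: A * A^-1 should be the identity
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```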

Underdetermined

An m x n system in which m < n (fewer equations than unknowns). May or may not be consistent; if it is consistent, it always has infinitely many solutions.

Overdetermined

An m x n system in which m > n (more equations than unknowns). Almost always inconsistent.

Symmetric matrix

An n x n matrix is said to be symmetric if A^T=A

Solutions to an underdetermined homogeneous system

It ALWAYS has nontrivial solutions (infinitely many), in addition to the trivial one!

Linear system in matrix notation

Ax=b, where x and b are both vectors

Inverse

B is an inverse of A if AB=BA=I B can also then be written as A^(-1)

Vector space closure properties

C1. If x ∈ V and a is scalar, then ax ∈ V C2. If x, y ∈ V, then x + y ∈ V

Cramer's Rule vs. Gauss-Jordan

Cramer's rule is practical for 2x2 or 3x3 systems; for anything larger, GJ is better

Inverse of E

E is invertible, and E^(-1) is an elementary matrix of the same type as E.
To find it: reverse the elementary row operation that took I to E (e.g., if E was obtained by multiplying row 1 of I by 2, then E^(-1) is obtained by multiplying row 1 of I by 1/2).
Check by multiplying: E^(-1) * E = I

Determinant of triangular matrices

If A is either upper triangular or lower triangular, then det(A) is the product of the diagonal entries

Solving a linear system in matrix notation

If A is invertible, x = A^(-1) * b where x and b are both vectors

The inverse of a transpose

If A is invertible, then A^T is invertible and (A^T)^(-1) = (A^(-1))^T

Span

For vectors v1, v2, ..., vn in a vector space v, the span is the set of all linear combinations of the vectors, denoted by span(v1, v2, ..., vn)

Properties of matrices

IA = A and AI = A A(BC) = (AB)C = ABC A(B+C) = AB+AC (B+C)A = BA+CA x(AB) = (xA)B = A(xB) where x is a scalar

Singularity and linear dependence

If a matrix is singular (det(A)=0), its columns (and rows) are linearly dependent

Commutative matrices

Matrices commute if AB=BA

Free variables

Variables not corresponding to a leading 1; these end up represented by alpha, beta, etc. in the solution set and therefore make the solution set infinite

REF/RREF is unique?

The RREF of a matrix is unique! This is not the case for REF. The leading 1's in the RREF of a matrix always appear in the same positions.

Cofactor

Represented by cij, let cij = (-1)^(i + j) * det(Aij) where Aij is the submatrix formed by deleting row i and column j from A

Determinants and transposes

The determinant of a matrix equals the determinant of its transpose det(A) = det(A^T)

T/F: C[a,b] is a vector space, where it denotes the set of all real-valued functions that are defined and continuous on the closed interval [a,b]

True (f+g)(x) = f(x) + g(x) and (af)(x) = af(x) for continuous functions f and g (sum of 2 continuous functions is always continuous and a constant times a continuous function is always continuous)

T/F: P^n (set of all polynomials of degree < n) is a vector space

True: (p+q)(x) = p(x) + q(x) and (ap)(x) = a*p(x)

Checking for linear independence

Put the given vectors as the columns of a square matrix and find its determinant: if it is not zero (the matrix is nonsingular), the homogeneous system a1v1 + a2v2 = 0 has only the trivial solution a1 = a2 = 0, so the vectors are linearly independent.
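A minimal sketch of the determinant test on two vectors in R^2 (the vectors are my own example):

```python
# Put the vectors as the COLUMNS of a square matrix and test det != 0.
v1, v2 = (1, 2), (3, 4)                    # my example vectors in R^2
M = [[v1[0], v2[0]],
     [v1[1], v2[1]]]                       # columns are v1 and v2
d = M[0][0]*M[1][1] - M[0][1]*M[1][0]      # 2x2 determinant: 4 - 6 = -2
print("independent" if d != 0 else "dependent")  # prints "independent"
```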

The inverse of an inverse

The matrix itself! (A^(-1))^(-1) = A. Also, inverses are UNIQUE.

Gauss-Jordan Inversion

The same sequence of elementary row operations that brings A to I also brings I to A^(-1):
1. Write the augmented matrix (A | I)
2. Bring the A side to RREF using row operations, applying each operation across the whole augmented matrix. When A has become I, the right side is A^(-1).
If A ~R I, then (A | I) ~R (I | A^(-1))
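The two steps above can be sketched in pure Python with exact arithmetic (the function and the 2x2 example are my own, not from the notes):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Row-reduce (A | I) to (I | A^-1); assumes A is invertible."""
    n = len(A)
    # step 1: build the augmented matrix (A | I)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    # step 2: bring the left side to RREF, carrying the right side along
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]         # swap a nonzero pivot up
        M[col] = [x / M[col][col] for x in M[col]]  # scale to a leading 1
        for r in range(n):
            if r != col:                            # clear the rest of the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                   # right half is A^-1

A = [[2, 1], [5, 3]]                 # det = 1
Ainv = gauss_jordan_inverse(A)
assert Ainv == [[3, -1], [-5, 2]]    # Fractions compare equal to ints
```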

Associated homogeneous system

The system obtained from an inhomogeneous system by setting all bi's to 0

The transpose of a matrix

The transpose of an m x n matrix A=(aij) is the n x m matrix B=(bij) where bij = aji. The rows of A are the columns of B. Notation: A^T

Finding N(A)

To find the nullspace:
1) Use Gauss-Jordan to put the augmented matrix in REF, with 0 as the solution vector
2) Identify the leading and free variables
3) Set the free variables equal to alpha, beta, gamma, etc. and substitute to solve for the leading variables
4) Write the solution vector x in terms of those values for the leading and free variables
5) Separate x by Greek letter for the final answer, e.g. if x = (a-b, -2a+b, a, b), then an appropriate answer is
N(A) = {a(1, -2, 1, 0) + b(-1, 1, 0, 1) | a, b ∈ R}
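A small verification sketch using my own example (not the matrix behind the answer in the notes): here A has rank 1, so x2 and x3 are free, and x1 = -x2 - 2x3 gives one basis vector per free variable.

```python
A = [[1, 1, 2],
     [2, 2, 4]]                    # second row is twice the first (rank 1)
basis = [(-1, 1, 0), (-2, 0, 1)]   # one basis vector per free variable

for v in basis:
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    assert Av == [0, 0]            # each basis vector solves Ax = 0
```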

T/F: the span is a subspace of V

True (given that the vectors being spanned are elements of V)

Pivot row

The row used to eliminate the entries beneath its pivot in the rows below it

Leading variables

Variables corresponding to the leading 1's of the augmented matrix in REF. These can be solved for in the solution set.

inconsistent

an m x n system with no solutions (empty solution set)

Gauss-Jordan Algorithm

Works bottom-up to derive RREF from REF:
- Use Gaussian elimination to get the matrix in REF
- Find the row containing the first leading 1 from the RIGHT (i.e., one of the bottom rows) and subtract suitable multiples of this row from the rows above it to make each entry above that leading 1 zero
- Repeat with the remaining rows

Gaussian elimination algorithm

Works top-down to get REF:
- Find the first column from the left containing a non-zero entry a, move that row to the top, and multiply it by 1/a to get a leading 1
- Subtract multiples of that row from the rows below it to make each entry beneath the leading 1 zero
- Repeat the above steps on the remaining rows
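The steps above can be sketched as a short function (my own code, using exact Fraction arithmetic so the leading 1's come out clean):

```python
from fractions import Fraction

def ref(M):
    """Bring M to row echelon form by the steps above (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                          # no nonzero entry: next column
        M[r], M[pivot] = M[pivot], M[r]       # move the pivot row up
        M[r] = [x / M[r][c] for x in M[r]]    # scale to get a leading 1
        for i in range(r + 1, rows):          # zero out entries below
            M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return M

R = ref([[0, 2, 4], [1, 1, 1]])
assert R == [[1, 1, 1], [0, 1, 2]]   # Fractions compare equal to ints
```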

Elementary matrix type 1

a matrix obtained by interchanging two rows of I
ex (interchanging rows 2 and 3):
I =
1 0 0
0 1 0
0 0 1
E1 =
1 0 0
0 0 1
0 1 0

Elementary matrix type 2

a matrix obtained by multiplying a row of I by a nonzero constant
ex (multiplying row 3 by 3):
I =
1 0 0
0 1 0
0 0 1
E2 =
1 0 0
0 1 0
0 0 3

Elementary matrix type 3

a matrix obtained from I by adding a multiple of one row to another row
ex (R1 + 3R3 --> R1):
I =
1 0 0
0 1 0
0 0 1
E3 =
1 0 3
0 1 0
0 0 1

Defining a vector space

a non-empty set of vectors, denoted by V, in which the vectors satisfy the 8 axioms and the 2 closure properties of the operations
To be a vector space, the set must:
1. Be non-empty
2. Satisfy C1 (ax in the space) and C2 (x+y in the space)
ex: W = {(a, 1) | a real} is not a vector space: (3,1) and (5,1) are in W, but their sum (8,2) is not

Subspaces

a subset S of V that is closed under the operations of addition and scalar multiplication (i.e., the sum of 2 elements of S is an element of S, and the product of a scalar and an element of S is an element of S); such an S is itself a vector space

consistent

a system of linear equations that has either 1 or infinite solutions

The adjoint of a matrix

adj(A) =
C11 C21 ... Cn1
C12 C22 ... Cn2
...
C1n C2n ... Cnn
Thus, to form the adjoint we replace each entry by its cofactor and then transpose the resulting matrix (the (i, j) entry of adj(A) is the (j, i) cofactor of A)

Cofactor expansion of the determinant (along the 1st column)

det(A) = a11*c11 + a21*c21 + ..... + an1 *cn1

Matrix multiplication

cij = summation from k=1 to n of (aik * bkj)
In other words, take the sum of the products of the elements in the ith row of A with the corresponding elements in the jth column of B.
In general, AB does not equal BA.
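The formula translates directly into a triple loop (the function and the 2x2 examples are my own; the second print shows that AB ≠ BA here):

```python
def matmul(A, B):
    """cij = sum over k of aik * bkj (ith row of A dot jth column of B)."""
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]           # swaps columns (on the right) or rows (on the left)
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA in general
```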

Cofactor expansion along the jth column

det(A) = summation from (i=1 to n) of aij*cij or, det(A) = a1j*c1j + a2j*c2j + ...... + anj*cnj

Cofactor expansion along the ith row

det(A) = summation from (j=1 to n) of aij*cij or, det(A) = ai1*ci1 + ai2*ci2 + ..... + ain*cin

det(AB) = ? for two n x n matrices A and B

det(AB) = det(A) * det(B) (proof on page 103 of text)

Cramer's Rule (solving Ax=b using determinants)

xj = det(Aj(b)) / det(A) for all j=1, ..., n (i.e., continue for all columns/x values) where x is the unique solution to Ax=b, A is an invertible n x n matrix, b is an n-vector of real numbers, and Aj is the matrix obtained by replacing the jth column of A by b

