Linear Algebra Set # 2

Computing Aⁿ

If A is diagonalizable, write A = QDQ⁻¹, where the columns of Q are eigenvectors of A and D is the diagonal matrix of the corresponding eigenvalues. Then Aⁿ = QDⁿQ⁻¹, and Dⁿ is computed entrywise along the diagonal.
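A minimal NumPy sketch of this computation (the matrix A below is an arbitrary diagonalizable example):

    import numpy as np

    # If A = Q D Q^(-1) with D diagonal, then A^n = Q D^n Q^(-1),
    # and D^n is just the entrywise n-th power of the diagonal.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    eigvals, Q = np.linalg.eig(A)            # columns of Q are eigenvectors
    n = 5
    An = Q @ np.diag(eigvals**n) @ np.linalg.inv(Q)
    print(np.allclose(An, np.linalg.matrix_power(A, n)))  # True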

Corollary (Cayley-Hamilton Theorem for Matrices).

Let A be an n × n matrix, and let f(t) be the characteristic polynomial of A. Then f(A) = O, the n × n zero matrix.
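A quick numerical check for the 2 × 2 case, where f(t) = t² − tr(A)t + det(A) (a sketch; A is an arbitrary example):

    import numpy as np

    # Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A) A + det(A) I = O.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    f_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
    print(np.allclose(f_of_A, np.zeros((2, 2))))  # True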

Lemma (Section 6.4)

Let T be a linear operator on a finite-dimensional inner product space V. If T has an eigenvector, then so does T*.

Theorem 7.2

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits. Suppose that λ is an eigenvalue of T with multiplicity m. Then (a) dim(Kλ) ≤ m. (b) Kλ = Null((T − λI)^m).

Theorem 5.9 Test for Diagonalizability, Union of EigenSpaces

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits, and let λ₁, …, λk be the distinct eigenvalues of T. Then: 1) T is diagonalizable iff the multiplicity of each λi equals dim(Eλi). 2) If T is diagonalizable and βi is an ordered basis for Eλi for each i, then β = β₁ ∪ β₂ ∪ ⋯ ∪ βk is an ordered basis for V consisting of eigenvectors of T.

Theorem 5.7 Dimension of Eigenspace

Let T be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of T having multiplicity m. Then 1 ≤ dim(Eλ) ≤ m.

Theorem 3.7

Let T: V → W and U: W → Z be linear transformations on finite-dimensional vector spaces V, W, and Z, and let A and B be matrices such that the product AB is defined. Then, (a) rank(UT) ≤ rank(U). (b) rank(UT) ≤ rank(T). (c) rank(AB) ≤ rank(A). (d) rank(AB) ≤ rank(B).

Theorem 5.10

Let W1, W2, …, Wk be subspaces of a finite-dimensional vector space V. The following conditions are equivalent: 1) V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk. 2) V = ΣWi and, whenever v1 + v2 + ⋯ + vk = 0 with vi ∈ Wi, then vi = 0 for all i. 3) Each vector v ∈ V can be written uniquely as v = v1 + v2 + ⋯ + vk with vi ∈ Wi. 4) If γi is an ordered basis for Wi (1 ≤ i ≤ k), then γ₁ ∪ γ₂ ∪ ⋯ ∪ γk is an ordered basis for V. 5) For each i = 1, 2, …, k there exists an ordered basis γi for Wi such that γ₁ ∪ γ₂ ∪ ⋯ ∪ γk is an ordered basis for V.

Regular Sum: not all vectors have a unique representation

Let W1, W2, …, Wk be subspaces of a vector space V. Their sum is W1 + W2 + ⋯ + Wk = {v1 + v2 + ⋯ + vk : vi ∈ Wi for 1 ≤ i ≤ k}. Unlike a direct sum, the representation need not be unique. *Example in R³: with W₁ the xy-plane and W₂ the yz-plane, (a, b, c) = (a, 0, 0) + (0, b, c), yet (a, b, 0) + (0, 0, c) also works. Not unique.

Definitions:

Two vectors x and y are orthogonal if ⟨x, y⟩ = 0.

Theorem 4.7, Theorem 4.8 Determinant Properties

For any A, B ∈ Mn×n(F), det(AB) = det(A) · det(B), and det(Aᵗ) = det(A).

Corollary & Lemma to Theorem 4.4

If A ∈ Mn×n(F) has a row or column consisting entirely of zeros, then det(A) = 0. Also, cofactor expansion can be done along any row or column.

Theorem 4.5 Determinants after matrix operations

If A ∈ Mn×n(F) and B is a matrix obtained from A by interchanging any two rows of A, then det(B) = −det(A): interchanging rows negates the determinant of the matrix.

Chapter 6.

Inner Products

Corollary 1

Let A be an m × n matrix of rank r. Then there exist invertible matrices B and C of sizes m × m and n × n, respectively, such that D = BAC, where D is the m × n matrix whose first r diagonal entries are 1 and whose remaining entries are 0 (an identity block padded with zeros).

Elementary Row/Column Operations type 1, 2, 3

Let A be an m × n matrix. The following are elementary operations: (1) interchanging any two rows [columns] of A; (2) multiplying any row [column] of A by a nonzero scalar; (3) adding any scalar multiple of a row [column] of A to another row [column]. *Recall: multiplying a matrix by an elementary matrix performs the operation the elementary matrix represents, even for non-square matrices.

Corollary 2 Properties of Rank

Let A be an m × n matrix. Then (a) rank(Aᵗ) = rank(A). (b) The rank of any matrix equals the maximum number of its linearly independent rows; that is, the rank of a matrix is the dimension of the subspace generated by its rows. (c) The rows and columns of any matrix generate subspaces of the same dimension, numerically equal to the rank of the matrix.

Theorem 3.1 Matrix → Matrix through Elementary Matrices

Let A be an m × n matrix. Suppose that B is obtained from A by performing an elementary row [column] operation. Then there exists an m × m [n × n] elementary matrix E such that B = EA [B = AE]. As expected, E is obtained by performing on I the same operation that took A to B. Corollary: elementary operations on a matrix are rank-preserving.

Theorem 4.2 Inverse of 2×2 matrices

Let A = [a b; c d] ∈ M2×2(F). Then the determinant of A is nonzero if and only if A is invertible. Moreover, if A is invertible, then A⁻¹ = (1/det(A)) · [d −b; −c a].

Theorem 5.3

Let A ∈ Mn×n(F). (a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient (−1)ⁿ. (b) A has at most n distinct eigenvalues.

Theorem 5.2 Computing eigenvalues (and finding eigenvectors)

Let A ∈ Mn×n(F). Then a scalar λ is an eigenvalue of A if and only if det(A − λIn) = 0. To find the eigenvectors, compute the null space of (A − λIn) for each corresponding eigenvalue.
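A NumPy sketch (arbitrary example matrix): np.linalg.eig returns the eigenvalues and, as columns, eigenvectors spanning each null space N(A − λI):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [1.0, 3.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    for lam, v in zip(eigvals, eigvecs.T):
        # each column v lies in N(A - λI), i.e. Av = λv
        print(lam, np.allclose(A @ v, lam * v))  # True for each pair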

Theorem 3.8

Let Ax = 0 be a homogeneous system of m linear equations in n unknowns over a field F. Let K denote the set of all solutions to Ax = 0. Then K = N(LA); hence K is a subspace of Fn of dimension n - rank(LA) = n - rank(A). Corollary: If m < n, the system Ax = 0 has a nonzero solution.

Theorem 3.11 properties of Ax = b

Let Ax = b be a system of linear equations. Then the system is consistent if and only if rank(A) = rank(A|b).

Theorem 3.10 Unique solution to Ax = b, and finding it

Let Ax = b be a system of n linear equations in n unknowns. If A is invertible, then the system has exactly one solution, namely x = A⁻¹b. Conversely, if the system has exactly one solution, then A is invertible.
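Sketch (arbitrary invertible A): in practice np.linalg.solve is preferred to forming A⁻¹ explicitly, but both give the same unique solution:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])
    x = np.linalg.solve(A, b)                    # the unique solution
    print(np.allclose(x, np.linalg.inv(A) @ b))  # True: same as A^(-1) b
    print(np.allclose(A @ x, b))                 # True: it solves the system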

Theorem 5.21 Relating Operator T to Invariant W

Let T be a linear operator on a finite-dimensional vector space V, and let W be a T-invariant subspace of V. Then the characteristic polynomial of T_W divides the characteristic polynomial of T.

Direct Sum: adding subspaces that have no overlap

Let W1, W2, …, Wk be subspaces of a vector space V. We call V the direct sum of the subspaces W1, W2, …, Wk, and write V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk, if V is the sum of the Wi and Wj ∩ Σ(i≠j) Wi = {0} for each j. (The subspaces together build all of V, and each one meets the sum of the others only in zero.)

Algebraic Multiplicity, corresponding to λ

Let λ be an eigenvalue of a linear operator or matrix with characteristic polynomial f(t). The (algebraic) multiplicity of λ is the largest positive integer k for which (t − λ)^k is a factor of f(t). *Example: if f(t) = (t − 3)²(t − 7), then the multiplicity of 3 is 2 and the multiplicity of 7 is 1.

Orientation of vectors

Orient(u, v) = det(u, v)/|det(u, v)| = ±1. Right-handed systems yield +1; left-handed systems yield −1.
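A small sketch of this formula in NumPy (the helper name orient is illustrative):

    import numpy as np

    def orient(u, v):
        # sign of det of the matrix with rows u, v: +1 right-handed, -1 left-handed
        d = np.linalg.det(np.array([u, v], dtype=float))
        return d / abs(d)   # undefined (division by zero) if u, v are dependent

    print(orient([1, 0], [0, 1]))   # 1.0  (standard basis: right-handed)
    print(orient([0, 1], [1, 0]))   # -1.0 (swapped: left-handed)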

Splits Theorem 5.6

The characteristic polynomial of any diagonalizable linear operator splits, i.e., factors completely into linear factors: f(t) = c(t − a₁)(t − a₂)⋯(t − aₙ).

Theorem 4.3.

The determinant of an n × n matrix is a linear function of each row when the remaining rows are held fixed. That is, for 1 ≤ r ≤ n, if row r of A is written as u + kv (with k a scalar and u, v row vectors), then det(A) = det(B) + k·det(C), where B and C are obtained from A by replacing row r with u and v, respectively.

Theorem 4.1 Linearity in each row

The function det: M2×2(F) → F is a linear function of each row of a 2 × 2 matrix when the other row is held fixed. That is, if u, v, and w are rows in F² and k is a scalar, then det(u + kv, w) = det(u, w) + k·det(v, w) and det(w, u + kv) = det(w, u) + k·det(w, v).

Geometric Multiplicity

The geometric multiplicity of λ is dim(Eλ), the dimension of the eigenspace corresponding to λ, i.e. the nullity of (A − λI), which equals n − rank(A − λI). Geometric multiplicity ≤ algebraic multiplicity, and it is always at least 1.

Theorem 3.9 relating non homogeneous/ homogeneous solutions.

Let K be the solution set of a system of linear equations Ax = b, and let KH be the solution set of the corresponding homogeneous system Ax = 0. Then for any solution s to Ax = b, K = {s} + KH = {s + k: k ∈ KH}.

Theorem 4.6. Determinants after matrix operations

Let A ∈ Mn×n(F). Adding a multiple of one row to another does not change det(A). Scaling a row of A by k changes the determinant to k·det(A).

Corollary 2 determinant(A) ≠ 0.

A matrix A is invertible if and only if det(A) ≠ 0. If A is invertible, then det(A⁻¹) = 1/det(A).

Inverse of a Matrix

An n×n matrix is invertible if and only if its rank is n. To find the inverse of A, create the augmented matrix [A | In] and row-reduce until you reach [In | B]; then B = A⁻¹.
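A sketch of the [A | In] → [In | A⁻¹] procedure via Gauss-Jordan elimination (assumes A is invertible; inverse_by_row_reduction is an illustrative helper):

    import numpy as np

    def inverse_by_row_reduction(A):
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
        for i in range(n):
            p = i + np.argmax(np.abs(M[i:, i]))      # pick a usable pivot row
            M[[i, p]] = M[[p, i]]                    # type 1: interchange rows
            M[i] /= M[i, i]                          # type 2: scale pivot row to 1
            for j in range(n):
                if j != i:
                    M[j] -= M[j, i] * M[i]           # type 3: eliminate column i
        return M[:, n:]                              # right half is now A^(-1)

    A = np.array([[2.0, 1.0], [1.0, 1.0]])
    print(np.allclose(inverse_by_row_reduction(A) @ A, np.eye(2)))  # True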

Determinant of n×n Matrices

Choose a row or column. At each element aij of it, cover row i and column j and find the determinant of the remaining (n−1) × (n−1) matrix (the minor). Multiply aij by this minor and by the sign (−1)^(i+j), then add all those terms along the chosen row or column.
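A direct (exponential-time) sketch of cofactor expansion along the first row, for illustration only:

    import numpy as np

    def det_cofactor(A):
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # cover row 0, col j
            total += (-1) ** j * A[0, j] * det_cofactor(minor)     # sign * entry * minor
        return total

    A = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]])
    print(det_cofactor(A), np.linalg.det(A))   # both -3.0 (up to rounding)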

Adjugate Matrix

Find the matrix of cofactors (minors with the ± checkerboard sign pattern applied), then transpose it: adj(A) = (cofactor matrix)ᵗ, and A⁻¹ = (1/det(A)) · adj(A).
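Sketch of the cofactor/adjugate construction (the helper name adjugate is illustrative):

    import numpy as np

    def adjugate(A):
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # checkerboard sign
        return C.T                                                # adj(A) = C^t

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))  # True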

Matrix of Jordan Blocks (made of eigenvalues)

A Jordan block corresponding to an eigenvalue λ of T is a square matrix Ai with λ in every diagonal entry and 1 in each entry directly above the diagonal. The block-diagonal matrix [T]β built from such blocks is called a Jordan canonical form of T. We also say that the ordered basis β is a Jordan canonical basis for T.

Determinant

If A is an n×n matrix, |A| denotes the determinant of A. For a 2×2 matrix A = [a b; c d], |A| = ad − bc. The determinant is not linear in the matrix as a whole: in general |A| + |B| ≠ |A + B|.

Cramer's Rule (Theorem 4.9)

Solving the system Ax = b: x₁ = det(A with column 1 replaced by b) / det(A), x₂ = det(A with column 2 replaced by b) / det(A), and the same method applies for every entry of x. (Requires det(A) ≠ 0.)
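A sketch of Cramer's rule, checked against a direct solve (assumes det(A) ≠ 0; cramer is an illustrative helper):

    import numpy as np

    def cramer(A, b):
        d = np.linalg.det(A)
        x = np.empty(len(b))
        for k in range(len(b)):
            Ak = A.copy()
            Ak[:, k] = b                        # replace column k with b
            x[k] = np.linalg.det(Ak) / d
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([5.0, 10.0])
    print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))  # True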

Area of a Parallelogram, n-dimensional volume

Enter your vectors as the rows of a matrix A, then find det(A): the absolute value |det(A)| is the area of the parallelogram they span. The same applies to a parallelepiped and to higher dimensions.
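Sketch (example vectors chosen arbitrarily):

    import numpy as np

    # 2D: parallelogram spanned by u and v
    u, v = np.array([3.0, 0.0]), np.array([1.0, 2.0])
    print(abs(np.linalg.det(np.array([u, v]))))   # 6.0, the area

    # 3D: parallelepiped spanned by the rows of W
    W = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])
    print(abs(np.linalg.det(W)))                  # 6.0, the volume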

Similar Matrices & row equivalent matrices

Two n-by-n matrices A and D are called similar if D = P⁻¹AP for some invertible P. Use this fact to find a diagonal matrix D similar to A. *Note: row-equivalent matrices seldom have the same eigenvalues; similar matrices always do.
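Sketch: when A has enough independent eigenvectors, taking P's columns to be those eigenvectors makes P⁻¹AP diagonal (arbitrary example matrix):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    eigvals, P = np.linalg.eig(A)     # columns of P: eigenvectors of A
    D = np.linalg.inv(P) @ A @ P
    print(np.allclose(D, np.diag(eigvals)))  # True: D is diagonal and similar to A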

Rank of a matrix A & Theorem 3.5

We define the rank of A, denoted rank(A), to be the rank of the linear transformation LA: Fn → Fm. Equivalently, it is the size of the largest collection of linearly independent rows of A (the row rank), or columns for that matter; it can also be seen as the dimension of the subspace those rows/columns generate. rank(A) = rank(LA) = dim(R(LA)).

Determinants of elementary matrices.

(a) If E is an elementary matrix obtained by interchanging any two rows of I, then det(E) = −1. (b) If E is an elementary matrix obtained by multiplying some row of I by the nonzero scalar k, then det(E) = k. (c) If E is an elementary matrix obtained by adding a multiple of some row of I to another row, then det(E) = 1.
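A quick numerical illustration of the three cases:

    import numpy as np

    I = np.eye(3)
    E1 = I[[1, 0, 2]]              # type 1: interchange rows 0 and 1
    E2 = I.copy(); E2[1, 1] = 7.0  # type 2: scale row 1 by k = 7
    E3 = I.copy(); E3[2, 0] = 4.0  # type 3: add 4 * (row 0) to row 2

    print(round(np.linalg.det(E1)))  # -1
    print(round(np.linalg.det(E2)))  # 7
    print(round(np.linalg.det(E3)))  # 1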

Chapter 5.

Chapter 7.

Theorem 3.4 Properties of Rank

Let A be an m × n matrix. If P and Q are invertible m×m and n×n matrices, respectively, then (a) rank(AQ) = rank(A), (b) rank(PA) = rank(A), and therefore (c) rank(PAQ) = rank(A).

Right/Left handed Coordinate systems

A coordinate system {u, v} is called right-handed if u can be rotated in a counterclockwise direction through an angle θ (0 < θ < π) to coincide with v. Otherwise {u, v} is called a left-handed system.

What is Diagonalization? (Chapter 5)

Given a linear operator T on V: does there exist an ordered basis β for V such that [T]β is a diagonal matrix, and if so, how do we find it? Eigenvectors/eigenvalues are also useful for nondiagonalizable operators.

Theorem 5.1 Diagonal Matrices & EigenValues

A linear operator T on a finite-dimensional vector space V is diagonalizable iff there exists an ordered basis β for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable and β = {v₁, v₂, …, vn} is an ordered basis of eigenvectors of T, then D = [T]β is a diagonal matrix whose entry Djj is the eigenvalue corresponding to vj. *Transformations with no eigenvectors/eigenvalues aren't diagonalizable.

Diagonalizable

A linear operator T on a finite-dimensional vector space V is called diagonalizable if there exists an ordered basis β for V such that [T]β is a diagonal matrix. A square matrix A is called diagonalizable if LA is diagonalizable.

Theorem 5.11, Diagonalizability: direct sum of Eigen-spaces spans V

A linear operator T on a finite-dimensional vector space V is diagonalizable iff V is the direct sum of the eigenspaces of T.

Inner Product

An inner product space is a vector space V equipped with an inner product ⟨·,·⟩, which satisfies, for all x, y, z ∈ V and scalars a: ⟨x, y⟩ = conj(⟨y, x⟩) (conjugate symmetry); ⟨ax, y⟩ = a⟨x, y⟩; ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩; ⟨x, x⟩ > 0 whenever x ≠ 0; and consequently ⟨x, 0⟩ = ⟨0, x⟩ = 0. E.g., the dot product of column vectors x, y ∈ Rⁿ.
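Sketch: checking the axioms numerically for the standard dot product on Rⁿ (random test vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = rng.normal(size=(3, 4))
    a = 2.5

    print(np.isclose(x @ y, y @ x))                 # symmetry (on R^n, no conjugate needed)
    print(np.isclose((a * x) @ y, a * (x @ y)))     # scalars pull out of the first slot
    print(np.isclose((x + y) @ z, x @ z + y @ z))   # additivity in the first slot
    print(x @ x > 0)                                # positivity for x != 0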

Elementary Matrices (In with a single operation)

An n × n elementary matrix is a matrix obtained by performing a single elementary operation of type 1, 2, or 3 on I. Multiplying a matrix by an elementary matrix performs the operation of type 1, 2, or 3 that the elementary matrix represents.

Systems of Linear equations

Ax = b. The system is homogeneous if b = 0, i.e., Ax = 0. The solution set of the system is the set of all vectors x that satisfy it. A system Ax = b is called consistent if its solution set is nonempty; otherwise it is called inconsistent.

Corollary Rank Deficiency & determinants

If A ∈ Mn×n(F) has rank less than n, then det(A) = 0.

Theorem 3.2, Corollary Invertible Elementary Matrices

Elementary matrices are invertible, and the inverse of an elementary matrix is an elementary matrix of the same type. Every invertible matrix is a product of elementary matrices.

Jordan canonical form

Every linear operator whose characteristic polynomial splits has a Jordan canonical form that is unique up to the order of the Jordan blocks.

Theorem 5.25 Invariant subspaces can be joined to build V. (matrix form)

Let T be a linear operator on a finite-dimensional vector space V, and let W1, W2, …, Wk be T-invariant subspaces of V. For each i (1 ≤ i ≤ k), let βi be an ordered basis for Wi, and let β = β₁ ∪ β₂ ∪ ⋯ ∪ βk. Let A = [T]β and Bi = [T_Wi]βi for i = 1, 2, …, k. Then A = B₁ ⊕ B₂ ⊕ ⋯ ⊕ Bk.

Theorem 5.24 Char Poly of Invariants build up Char Poly of V

Let T be a linear operator on a finite-dimensional vector space V, and suppose that V = W1 ⊕ W2 ⊕ ⋯ ⊕ Wk, where Wi is a T-invariant subspace of V for each i (1 ≤ i ≤ k). Suppose that fi(t) is the characteristic polynomial of T_Wi (1 ≤ i ≤ k). Then f₁(t)·f₂(t)⋯fk(t) is the characteristic polynomial of T.

Theorem 5.22 T-Cyclic: a basis and finding Char poly.

Let T be a linear operator on a finite-dimensional vector space V, and let W denote the T-cyclic subspace of V generated by a nonzero vector v ∈ V. Let k = dim(W). Then (a) {v, T(v), T²(v), …, T^(k−1)(v)} is a basis for W. (b) If a₀v + a₁T(v) + ⋯ + a_(k−1)T^(k−1)(v) + T^k(v) = 0, then the characteristic polynomial of T_W is f(t) = (−1)^k (a₀ + a₁t + ⋯ + a_(k−1)t^(k−1) + t^k).

Theorem 5.23 (Cayley-Hamilton): plugging T into the characteristic polynomial

Let T be a linear operator on a finite-dimensional vector space V, and let f(t) be the characteristic polynomial of T. Then f(T) = T₀, the zero transformation. That is, T "satisfies" its characteristic equation.

Theorem 7.4

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits, and let λ₁, λ₂, …, λk be the distinct eigenvalues of T with corresponding multiplicities m₁, m₂, …, mk. For 1 ≤ i ≤ k, let βi be an ordered basis for Kλi. Then the following statements are true. (a) βi ∩ βj = ∅ for i ≠ j. (b) β = β₁ ∪ β₂ ∪ ⋯ ∪ βk is an ordered basis for V. (c) dim(Kλi) = mi for all i.

Theorem 7.3

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits, and let λ₁, λ₂, …, λk be the distinct eigenvalues of T. Then, for every x ∈ V, there exist vectors vi ∈ Kλi, 1 ≤ i ≤ k, such that x = v₁ + v₂ + ⋯ + vk.

Definition EigenSpace

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. Define Eλ = {x ∈ V: T(x) = λx} = Null(T − λI). The set Eλ is called the eigenspace of T corresponding to the eigenvalue λ. Analogously, we define the eigenspace of a square matrix A to be the eigenspace of LA.

Theorem 5.5 & Corollary Linear Independence of Eigenspaces.

Let T be a linear operator on a vector space V, and let λ₁, λ₂, …, λk be distinct eigenvalues of T. If v₁, v₂, …, vk are eigenvectors of T such that λi corresponds to vi (1 ≤ i ≤ k), then {v₁, v₂, …, vk} is linearly independent. Additionally, if T (on an n-dimensional space) has n distinct eigenvalues, then T is diagonalizable. *Note: the converse fails; a diagonalizable T need not have n distinct eigenvalues.

T-Cyclic subspace generated by x (good way to check if Invariant)

Let T be a linear operator on a vector space V, and let x be a nonzero vector in V. The subspace W = span({x, T(x), T²(x), …}) is the T-cyclic subspace of V generated by x. *Useful for: finding the characteristic polynomial & the Cayley-Hamilton theorem.
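Sketch: computing a basis of the T-cyclic subspace generated by x, with a matrix A standing in for T (example chosen so that W is all of F³):

    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
    x = np.array([1.0, 0.0, 0.0])

    basis = [x]
    while True:
        candidate = basis + [A @ basis[-1]]         # try appending the next power A^j x
        if np.linalg.matrix_rank(np.array(candidate)) < len(candidate):
            break                                   # it became dependent: stop
        basis = candidate

    print(len(basis))  # 3 = dim(W): here x, Ax, A^2 x form a basis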

Theorem 5.4

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. A vector v ∈ V is an eigenvector of T corresponding to λ iff v ≠ 0 and v ∈ N(T − λI).

Generalized Eigen-Space

Let T be a linear operator on a vector space V, and let λ be a scalar. A nonzero vector x in V is called a generalized eigenvector of T corresponding to λ if (T − λI)^p(x) = 0 for some positive integer p. From the generalized eigenspaces we select ordered bases whose union is an ordered basis β for V such that [T]β is a matrix of Jordan blocks.

Theorem 7.1

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. (a) Kλ is a T-invariant subspace of V containing Eλ (the eigenspace of T corresponding to λ). (b) For any scalar μ ≠ λ, the restriction of T − μI to Kλ is one-to-one.

generalized eigenspace of T corresponding to λ

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T. The generalized eigenspace of T corresponding to λ, denoted Kλ, is the subset of V defined by Kλ = {x ∈ V: (T − λI)^p(x) = 0 for some positive integer p}.

Eigenvector, Eigenvalue

Let T be a linear operator on a vector space V. A nonzero vector v ∈ V is called an eigenvector of T if ∃ a scalar λ such that T(v) = λv. The scalar λ is called the eigenvalue corresponding to the eigenvector v. *aka: characteristic/proper vector & proper/characteristic value

T-Invariant Subspace: closure under the transformation

Let T be a linear operator on a vector space V. A subspace W of V is called a T-invariant subspace of V if T(W) ⊆ W, that is, if T(v) ∈ W for all v ∈ W. *Examples: the trivial subspaces {0} and V, plus R(T), N(T), and Eλ for any eigenvalue λ of T.

characteristic polynomial

Let T be a linear operator on an n-dimensional vector space V with ordered basis β, and let A = [T]β ∈ Mn×n(F). Then f(t) = det(A − t·In) is called the characteristic polynomial of T. *By Theorem 5.2, the eigenvalues of a matrix are the zeros of its characteristic polynomial.

Test for Diagonalizability

Let T be a linear operator on an n-dimensional vector space V. Then T is diagonalizable iff both of the following hold: 1. The characteristic polynomial of T splits. 2. For each eigenvalue λ of T, multiplicity(λ) = n − rank(T − λI); more simply, the geometric multiplicity equals the algebraic multiplicity for every single λ of T.
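Sketch of the rank test on a matrix that fails it (a Jordan block):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [0.0, 3.0]])   # single eigenvalue λ = 3, algebraic multiplicity 2
    n, lam = 2, 3.0
    geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n))
    print(geometric)  # 1 != 2, so A is not diagonalizable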

Theorem 5.8 Union of Linearly Independent subsets is Independent

Let T be a linear operator, and let λ₁, λ₂, …, λk be distinct eigenvalues of T. For each i = 1, 2, …, k, let Si be a finite linearly independent subset of the eigenspace Eλi. Then S₁ ∪ S₂ ∪ ⋯ ∪ Sk is a linearly independent subset of V.

*Lemma

Let T be a linear operator, and let λ₁, λ₂, …, λk be distinct eigenvalues of T. For each i = 1, 2, …, k, let vi ∈ Eλi, the eigenspace corresponding to λi. If v₁ + v₂ + ⋯ + vk = 0, then vi = 0 for all i.

Theorem 3.3

Let T: V → W be a linear transformation between finite-dimensional vector spaces, and let β and γ be ordered bases for V and W, respectively. Then rank(T) = rank([T] from β to γ).

