Linear Algebra


Diagonalisable

A linear map T:V->V is diagonalisable if there exists a basis of V relative to which the matrix representing T is diagonal
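As a numerical illustration (the 2x2 matrix is a hypothetical example, not from the source): if the eigenvectors of a matrix A form a basis, then changing to that basis diagonalises the map.

```python
import numpy as np

# Hypothetical example: T is represented by A in the standard basis.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of V are eigenvectors; if V is invertible, the eigenvector
# basis diagonalises T: V^{-1} A V = D is diagonal.
eigvals, V = np.linalg.eig(A)
D = np.linalg.inv(V) @ A @ V

# The off-diagonal entries vanish, so A is diagonalisable.
print(np.allclose(D, np.diag(eigvals)))  # True
```

Here the eigenvalues 5 and 2 are distinct, which guarantees a basis of eigenvectors.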

Adjugate/adjoint matrix

Let K be the cofactor matrix of A. The adjugate is its transpose: Adj(A) = K^T = ((-1)^(i+j) k_ji), 1 =< i, j =< n, where k_ji is the (j,i) minor of A (1). Using (1), one obtains the key identity (2): A*Adj(A) = Adj(A)*A = det(A)*I
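Identity (2) can be checked numerically. A minimal sketch with NumPy, assuming a hypothetical 3x3 example matrix; `adjugate` is an illustrative helper, not a NumPy function:

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: adj(A)[i, j] = (-1)^(i+j) * M_ji, where
    M_ji is the minor obtained by deleting row j and column i of A."""
    n = A.shape[0]
    adj = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [0.0, 2.0, 1.0]])

# Identity (2): A * Adj(A) = Adj(A) * A = det(A) * I
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))  # True
```

Note that (2) holds even when A is singular, which is what the Cayley-Hamilton proof below exploits.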

Remark 1

Corollary 1 says that we can define cT(x), the characteristic polynomial of a linear map T:V->V, as the characteristic polynomial of any matrix A representing T; the choice of basis does not matter, since similar matrices have the same characteristic polynomial.

Characteristic polynomial

Let A be an nxn matrix over some field F. Its characteristic polynomial is defined by PA(x) = det(xI - A). The polynomial PA has degree n, and its zeros are the eigenvalues of A.
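A quick numerical check (the 2x2 matrix is a hypothetical example): `np.poly` returns the coefficients of det(xI - A), and its roots coincide with the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of PA(x) = det(xI - A),
# highest power first; here PA(x) = x^2 - 4x + 3.
coeffs = np.poly(A)

# The zeros of PA are exactly the eigenvalues of A (here 1 and 3).
print(sorted(np.roots(coeffs).real))          # [1.0, 3.0]
print(sorted(np.linalg.eigvals(A).real))      # [1.0, 3.0]
```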

Cayley-Hamilton Theorem

Let A be an nxn matrix with characteristic polynomial PA(x). Then PA(A) = 0, the zero nxn matrix.
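The theorem is easy to verify numerically for a particular matrix (a hypothetical 2x2 example): evaluate the characteristic polynomial at A itself.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Coefficients of PA(x) = det(xI - A), highest power first.
c = np.poly(A)   # [1, -5, -2]: PA(x) = x^2 - 5x - 2

# Evaluate PA at the matrix A itself: A^2 - 5A - 2I.
P = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)

print(np.allclose(P, 0))  # True: PA(A) is the zero matrix
```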

Linear transformation

Let U, V be finite-dimensional vector spaces over a field F with bases P = [u1,...,un] of U and Q = [v1,...,vm] of V, and let A be an mxn matrix with entries in F. The linear transformation TA:U->V determined by A with respect to the bases P and Q is defined by TA(uj) = Σ(i=1)^m aijvi for j = 1,...,n. Conversely, given a linear map T, the matrix A with these entries is written A = M(T,P,Q).

Corollary 1

Let V be a vector space with ordered bases P1 and P2. Let T:V->V be a linear transformation. Let A= M(T,P1,P1) and B= M(T,P2,P2). Then PA=PB
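Since M(T,P2,P2) = C^-1 M(T,P1,P1) C for a change-of-basis matrix C, the corollary amounts to the fact that similar matrices share a characteristic polynomial. A sketch with NumPy, using hypothetical example matrices:

```python
import numpy as np

# Hypothetical basis-change check: B = C^{-1} A C represents the same
# map in a different basis, so PA = PB.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
C = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # any invertible change-of-basis matrix

B = np.linalg.inv(C) @ A @ C

print(np.allclose(np.poly(A), np.poly(B)))  # True
```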

Theorem: Let α be a linear map on V. Then the following conditions are equivalent for an element λ of K: (a) λ is an eigenvalue of α; (b) λ is a root of the characteristic polynomial of α; (c) λ is a root of the minimal polynomial of α.

PROOF: (b) implies (a): Suppose that cα(λ) = 0, that is, det(λI - α) = 0. Then λI - α is not invertible, so its kernel is non-zero. Pick a non-zero vector v in Ker(λI - α). Then (λI - α)v = 0, so that α(v) = λv; that is, λ is an eigenvalue of α. (c) implies (b): Suppose that λ is a root of mα(x). Then (x - λ) divides mα(x). But mα(x) divides cα(x), by the Cayley-Hamilton theorem; so (x - λ) divides cα(x), whence λ is a root of cα(x). (a) implies (c): Let λ be an eigenvalue of α with eigenvector v, so α(v) = λv. By induction, α^k(v) = λ^k v for any k, and so f(α)(v) = f(λ)v for any polynomial f. Choosing f = mα, we have mα(α) = 0 by definition, so mα(λ)v = 0; since v ≠ 0, we have mα(λ) = 0 //

Let A,B be similar matrices. Then PA=PB

PROOF: Let C = C(P',P) be the change-of-basis matrix; then B = C^-1AC, and the lemma above applies //

Let T:V->V be a linear map. The following are equivalent: (a) T is diagonalisable; (b) V is the direct sum of the eigenspaces of T; (c) T = λ1P1 + ... + λmPm, where λ1,...,λm are the distinct eigenvalues of T and P1,...,Pm are the corresponding projections, satisfying Pi^2 = Pi; P1 + ... + Pm = I; PiPj = 0 for i ≠ j.

PROOF: Let λ1,...,λm be the distinct eigenvalues of T, and let vi1,...,vimi be a basis for the λi-eigenspace of T. Then T is diagonalisable if and only if the union of these bases is a basis for V. So (a) and (b) are equivalent. Now suppose that (b) holds. Then the preceding results on projections and their converse show that there are projections P1,...,Pm satisfying the conditions of (c), where Im(Pi) is the λi-eigenspace. In this case it is easily checked that T and ΣλiPi agree on every vector in V, so they are equal. So (b) implies (c). Finally, if T = ΣλiPi, where the Pi satisfy the conditions of (c), then V is the direct sum of the spaces Im(Pi), and Im(Pi) is the λi-eigenspace. So (c) implies (b). //
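The spectral decomposition in (c) can be built explicitly from an eigenvector basis: Pi = vi (row i of V^-1), where vi is the i-th eigenvector. A sketch with NumPy on a hypothetical 2x2 matrix with distinct eigenvalues:

```python
import numpy as np

# Diagonalisable map with distinct eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

# Projection onto the i-th eigenspace along the others:
# P_i = (i-th eigenvector) times (i-th row of V^{-1}).
P = [np.outer(V[:, i], Vinv[i, :]) for i in range(2)]

# Conditions of (c): idempotent, pairwise products zero, sum to I,
# and T = sum of lambda_i P_i.
print(np.allclose(P[0] @ P[0], P[0]))               # True
print(np.allclose(P[0] @ P[1], 0))                  # True
print(np.allclose(P[0] + P[1], np.eye(2)))          # True
print(np.allclose(lam[0] * P[0] + lam[1] * P[1], A))  # True
```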

Cayley-Hamilton Theorem Proof

PROOF: Start from (2) with A replaced by xI - A: (3) (xI - A)*adj(xI - A) = det(xI - A)*I = PA(x)*I. Note: we cannot simply substitute x = A into (3), since x is a scalar variable; moreover, at "x = A" we would have xI - A = 0, and writing adj(xI - A) as det(xI - A)*(xI - A)^-1 makes no sense when the determinant is 0. By definition, adj(xI - A) is a matrix whose entries are determinants of (n-1)x(n-1) submatrices, hence polynomials in x of degree at most n-1. Thus we can write adj(xI - A) as a polynomial whose coefficients are matrices: (4) adj(xI - A) = x^(n-1)B(n-1) + ... + xB1 + B0, for suitable nxn matrices B0,...,B(n-1); the highest power is x^(n-1) because the entries of adj(xI - A) are (n-1)x(n-1) determinants. Using (3) and (4): (5) PA(x)I = (xI - A)(x^(n-1)B(n-1) + ... + B0) = x^n B(n-1) + x^(n-1)(B(n-2) - AB(n-1)) + ... + x(B0 - AB1) - AB0. Since A is nxn, PA(x) is a monic polynomial of degree n, so we can write (6) PA(x)I = (x^n + c(n-1)x^(n-1) + ... + c1x + c0)I. Equating coefficients in (5) and (6): (7) B(n-1) = I; B(n-2) - AB(n-1) = c(n-1)I; ... ; B0 - AB1 = c1I; -AB0 = c0I. To recover PA(A) from the system (7), multiply the first equation by A^n, the second by A^(n-1), ..., the last by A^0 = I, and sum. On the left the sum telescopes: A^n B(n-1) + A^(n-1)(B(n-2) - AB(n-1)) + ... + A(B0 - AB1) - AB0 = 0. On the right we get A^n + c(n-1)A^(n-1) + ... + c1A + c0I = PA(A). Hence PA(A) = 0, the zero matrix //
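The system (7) can be tested numerically: build the matrices Bk top-down from B(n-1) = I and B(k-1) = ABk + ckI, then verify the last equation -AB0 = c0I. A sketch with NumPy on a hypothetical 2x2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

# c = [1, c_{n-1}, ..., c_0]: coefficients of PA(x) = det(xI - A).
c = np.poly(A)

# Solve system (7) top-down: B_{n-1} = I, then B_{k-1} = A B_k + c_k I.
B = np.eye(n)
for k in range(n - 1, 0, -1):
    B = A @ B + c[n - k] * np.eye(n)

# Last equation of (7): -A B_0 = c_0 I, which is exactly PA(A) = 0
# after the telescoping sum in the proof.
print(np.allclose(-A @ B, c[n] * np.eye(n)))  # True
```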

Theorem: For any linear map T:V->V, its minimal polynomial mT(x) divides its characteristic polynomial cT(x)

PROOF: Suppose not: then we can divide cα(x) by mα(x), getting a quotient q(x) and a non-zero remainder r(x); that is, cα(x) = mα(x)q(x) + r(x). Substituting α for x and using the fact that cα(α) = mα(α) = 0, we find that r(α) = 0. But the degree of r is less than the degree of mα, so this contradicts the definition of mα as the polynomial of least degree satisfied by α //

If P:V->V is a projection, then V = Im(P) ⊕ Ker(P) (a direct sum)

PROOF: We have two things to do. (i) Im(P) + Ker(P) = V: take any vector v ∈ V, and let w = P(v) ∈ Im(P). We claim that v - w ∈ Ker(P). This holds because P(v - w) = P(v) - P(w) = P(v) - P(P(v)) = P(v) - P^2(v) = 0, since P^2 = P. Now v = w + (v - w) is the sum of a vector in Im(P) and one in Ker(P). (ii) Im(P) ∩ Ker(P) = {0}: take v ∈ Im(P) ∩ Ker(P). Then v = P(w) for some vector w, and 0 = P(v) = P(P(w)) = P^2(w) = P(w) = v //
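The decomposition v = P(v) + (v - P(v)) in step (i) can be seen concretely. A sketch with NumPy; the projection below (onto the x-axis along the line y = x) is a hypothetical example:

```python
import numpy as np

# A projection onto the x-axis along the line y = x.
P = np.array([[1.0, -1.0],
              [0.0,  0.0]])
print(np.allclose(P @ P, P))       # True: P^2 = P, so P is a projection

v = np.array([3.0, 5.0])
w = P @ v                          # the component in Im(P)
k = v - w                          # the component in Ker(P)

print(np.allclose(P @ k, 0))       # True: v - P(v) lies in Ker(P)
print(np.allclose(w + k, v))       # True: v = w + (v - w)
```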

Suppose that π1, π2, ..., πr are projections on V satisfying: (a) π1 + π2 + ... + πr = I, where I is the identity transformation; (b) πiπj = 0 for i ≠ j. Then V = U1 ⊕ U2 ⊕ ... ⊕ Ur, where Ui = Im(πi)

PROOF: We have to show that any vector v can be uniquely written in the form v = u1 + ... + ur, where ui ∈ Ui for i = 1,...,r. Existence: v = I(v) = π1(v) + ... + πr(v) = u1 + ... + ur, where ui = πi(v) ∈ Im(πi) for i = 1,...,r. Uniqueness: suppose v = u1' + ... + ur' with ui' = πi(wi) ∈ Ui. Applying πj gives πj(v) = Σi πjπi(wi) = πj^2(wj) = πj(wj) = uj', since πjπi = 0 for i ≠ j; so each uj' = πj(v) is determined by v //

Projection

The linear map P:V->V is a projection if P^2=P (Where, as usual, P^2 is defined by P^2(v)=P(P(v)) for all v ε V)

Minimal polynomial

The minimal polynomial mA of A is the monic polynomial of minimal degree such that mA(A) = 0. (A polynomial anx^n + ... + a0 is monic if an = 1.) Similarly, given a linear map T:V->V, the minimal polynomial of T is the minimal polynomial of any matrix representing T; this is well defined, since similar matrices satisfy the same polynomials.
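For instance (an illustrative example, not from the source), the minimal polynomial can have strictly smaller degree than the characteristic polynomial when an eigenvalue is repeated:

```python
import numpy as np

# For A = diag(1, 1, 2) the characteristic polynomial is
# (x-1)^2 (x-2), but the minimal polynomial is only
# (x-1)(x-2) = x^2 - 3x + 2: the monic polynomial of least
# degree annihilating A.
A = np.diag([1.0, 1.0, 2.0])

m_of_A = A @ A - 3 * A + 2 * np.eye(3)
print(np.allclose(m_of_A, 0))      # True: mA(A) = 0

# No monic degree-1 polynomial works: A - cI is never 0,
# since A is not a scalar multiple of the identity.
```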

