MATH 221

how to write a vector wrt an orthogonal basis

ex. f₁ = (1, 2 col), f₂ = (-2, 1 col); f₁ and f₂ are an orthogonal basis of R². v = (7, 10 col) wrt this basis is c₁f₁+c₂f₂. Dot with f₁: f₁·v = f₁·(c₁f₁+c₂f₂) = c₁(f₁·f₁) + c₂(f₁·f₂), and c₂(f₁·f₂) = 0 since f₁ and f₂ are orthogonal ∴ f₁·v = c₁(f₁·f₁) = c₁|f₁|² → solve for c₁ (dot with f₂ for c₂). THIS ONLY WORKS FOR ORTHOGONAL BASES: check that each pair is ⊥ before using.
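
A small numeric sketch of this card (Python/numpy; uses the example basis above):

import numpy as np

f1 = np.array([1.0, 2.0])      # orthogonal basis vectors of R^2
f2 = np.array([-2.0, 1.0])
v  = np.array([7.0, 10.0])

assert np.isclose(f1 @ f2, 0)  # check the pair is orthogonal before using the formula

c1 = (f1 @ v) / (f1 @ f1)      # c1 = f1·v / |f1|^2
c2 = (f2 @ v) / (f2 @ f2)      # c2 = f2·v / |f2|^2
print(c1, c2)                  # 5.4 and -0.8 for this example
assert np.allclose(c1 * f1 + c2 * f2, v)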

cofactor expansion can be done across any row or col to get same result → pick row/col with most 0's

expansion along the ith row: det(A) = (-1)^(i+1) aᵢ₁det(Aᵢ₁) + (-1)^(i+2) aᵢ₂det(Aᵢ₂) + ... ; expansion along the jth col: det(A) = (-1)^(1+j) a₁ⱼdet(A₁ⱼ) + (-1)^(2+j) a₂ⱼdet(A₂ⱼ) + ...
easy case: a triangular matrix, e.g. rows (a₁₁ * *), (0 a₂₂ *), (0 0 a₃₃). expansion along the first column: det(A) = a₁₁det(A₁₁) - 0·det(A₂₁) + 0·det(A₃₁) = a₁₁·(det of the (n-1)x(n-1) block) = a₁₁a₂₂·(det of the (n-2) block) = a₁₁a₂₂a₃₃... = product of the diagonal entries (the pivots) ∴ free var → a zero pivot → det(A)=0 → A⁻¹ doesn't exist
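
A minimal recursive sketch of cofactor expansion along the first row (Python; illustrative only, numpy.linalg.det is the practical choice):

import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 0 and col j
        total += (-1) ** j * A[0, j] * det_cofactor(minor)     # checkerboard sign + - + -
    return total

T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])
print(det_cofactor(T))    # 42.0 = product of the diagonal entries
print(np.linalg.det(T))   # agrees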

T/F: If 2 matrices are row equivalent, that is they can be reduced by elementary row operations to the same matrix, their determinant is the same

false, since row operations affect determinants: all invertible matrices can be reduced to the identity matrix, but they don't all have the same det

clicker: suppose u1...ur are vectors in Rⁿ and v is in span{u1...ur}; then the set {v, u1...ur} is linearly indep.

false: if v is in the span, then by defn the set is dependent (v is a combination of the others)

AB = A * cols B

a faster way of doing matrix multiplication than entry-by-entry dot products: each column of AB is A times the corresponding column of B

eigenspace thrm

for A = nxn matrix: if we get n different eigenvalues, then it is a thrm that each contributes 1 eigenvector and these are linearly independent → an eigenvector basis exists. if there are fewer than n eigenvalues → at least one eigenvalue has multiplicity d>1 and produces 1 ≤ dim nul(A-λ₀I) ≤ d eigenvectors. if λ₀ doesn't produce d eigenvectors → an eigenvector basis may not exist

If the det(M)≠0 then

- given any X there is one and only one A which will satisfy the equation.
- given any A there is one and only one X which will satisfy the equation.
invertible → one to one

thrm: if u₁....ur orthogonal and non zero, then u₁...ur are linearly independent

ie. orthogonal sets are linearly independent. proof: suppose c₁u₁+c₂u₂+...+crur = 0; dot both sides with uᵢ → the cross terms vanish → cᵢ|uᵢ|² = 0 → cᵢ = 0 since uᵢ ≠ 0, so every cᵢ = 0 and the set is independent

what properties of matrix A would give RHS = 0 if (x₁...xn col) not 0

if A has a free variable, there are infinitely many solns and x^ doesn't have to be 0 for the RHS to be 0.

to find det(A-λI) for n>2 → use expansion

if an eigenvalue has multiplicity d>1, it must produce d eigenvectors for the eigenvectors to span Rⁿ. if an eigenvalue produces 2 eigenvectors, its eigenspace is a plane, for ex.; vectors on this plane get scaled by that eigenvalue

what to do if Ax=b is inconsistent?

if you can't get Ax-b=0, make Ax-b as close to 0 as possible, ie. Ax as close as possible to b: pick b₀ in col(A) closest to b and solve Ax=b₀ instead. b₀ = best approximation to b within col(A)

defn of basis of H: minimal # of vectors needed to span all of H. proof that the vectors are lin. indep:

if not lin. indep, then u1 can be written in terms of u2, u3, ..., where u1, u2, u3, ... are the basis of H. then H = span{u2...ud}, since u1 can be generated from u2...ud. that is (d-1) vectors, contradicting the defn that a basis is the minimal set of vectors needed to span H

assuming A is diagonalizable, ie. the eigenvector basis exists, Aⁿ for large n can be calculated easily, and Aⁿ tells you how the system modelled by A evolves over time

in the diagonal matrix Dⁿ: |λ| < 1: λⁿ → 0; λ = ±1: λⁿ stays at ±1; |λ| > 1: λⁿ → ∞ ∴ the biggest eigenvalue dominates the system; if all eigenvalues are small (complex or real) → extinction

Ax = b consistent only if b is

in span{cols A}, where A = (u₁...un) and x = (x₁...xn column)

the directions corresponding to the free variables in a soln to a matrix eqn are _____ meaning _____

independent meaning no way to turn one free variable vector into another ∴ 3 free vars → 3 dimensions 2 free vars → plane

For a transformation T: Rⁿ→R^m that takes vectors with n entries as inputs to produce m entries as outputs we call

input space : Rⁿ→domain output space: R^m →codomain set of actual images of T(x) (might not be full codomain) → range of T

don't assume that A is ...

invertible (or Ax=b consistent) if it's not explicitly stated; for a lot of T/F questions the statement is false because Ax=b could be inconsistent

key point: if u1...ur are lin. indep. then span{u1...ur}

is all possible vectors c1u1+...+crur, each of which can be uniquely identified with its weights (c1...cr)

a basis {u₁...ur} is orthonormal if ...

it consists of unit vectors that are orthogonal to each other. then {u₁...ur} is an orthonormal basis

each independent vector can be uniquely identified by

its weights (c1...cn)

dimension of H defn

the number of vectors in the largest linearly independent set in H = the number in the smallest possible set that spans all of H

"find c to make A consistent/reversible "

make sure to check if 0 is a soln as well

another way of thinking about matrix multiplication is to see ...

matrix A (n x r) as transforming vector x (r sized) to the multiplication Ax which is in Rⁿ ie. vec. x in R^r → Ax in Rⁿ by A. # cols of A (r) → # rows of A (n)

matrix addition works for ...

matrices of the same size

least squares line, simplest relation: y = B₀ + B₁x (like y = mx + b). (xi, yi) = observed data point; (xi, B₀+B₁xi) = predicted y value on the line; the difference = residual

measuring accuracy of the line:
- ∑(residuals)² → the best fit line is the one that minimises this
- called the line of regression of y on x since all errors are assumed to be in y
- B₀ and B₁ = regression coefficients

If A=B, then CA=CB and AC=BC. If AB=CD, does AEB=CED?

no, order matters; you can't put stuff in the middle

if det(A-λI)≠0 for every real number λ →

no real eigenvalues λ

Clicker q: if p and q both solve Ax=b where b≠0, do combinations of p and q also solve it?

no. Ap = b = Aq, so A(p+q) = Ap + Aq = b + b = 2b ≠ b

all matrix transformations are linear, and conversely ...

every linear transformation T: Rⁿ → R^m can also be written as T(x) = Ax, ie. it is a matrix transformation (this needs proof: A = (Te₁ ... Teₙ))

If the soln of a matrix eqn has n vectors, do we really get an n-dimensional space?

not always: not if one of the vectors can be generated from the others

how to use eigenvalues and eigenvectors goal: given A=nxn, find a basis for Rⁿ consisting of eigenvectors of A, ie. want set of vectors f₁ ...fn linearly independent and spanning Rⁿ

nul(A-λ₁I) gives f₁ = eigenvector for eigenvalue λ₁, so Af₁ = λ₁f₁.
given r in Rⁿ → r = c₁f₁ + c₂f₂ + ... + cnfn (1)
Ar = A(c₁f₁) + ... + A(cnfn) = c₁Af₁ + ... + cnAfn = λ₁c₁f₁ + ... + λncnfn (2)
B = basis of Rⁿ = {f₁...fn}; r = (c₁...cn col) wrt B by (1); Ar = (λ₁c₁...λncn col) wrt B by (2)
point: A may be complicated, but relative to the eigenvector basis B, A is simple
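
A numeric sketch of this point (Python/numpy; the 2x2 matrix is a made-up example, not from the notes):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lams, F = np.linalg.eig(A)    # eigenvalues and eigenvector matrix (columns f1, f2)

r = np.array([3.0, -1.0])
c = np.linalg.solve(F, r)     # coordinates of r wrt the eigenvector basis B
# relative to B, applying A just scales each coordinate by its eigenvalue
assert np.allclose(A @ r, F @ (lams * c))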

defn 1 of linear independence geometrically: defn 2 of linear independence geometrically:

1: on a grid, you can't get to the same point two different ways (no diagonals) 2: no loops can be formed with 2 different vectors

A⁻¹ = "1/A" = the matrix that reverses multiplication by A; defined only for nxn A

the inverse of the transformation T which sends x → Ax

an orthogonal basis for a subspace H is a basis consisting of orthogonal vectors

orthogonal bases are extremely handy because it's very easy to solve equations and find coordinates wrt the basis

the solution of Ax=b is just the soln of Ax=0 ...

plus any one particular solution of Ax=b

difference between codomain and range; range of T = span{cols A}

range is subset of codomain ie. for codomain of R², both (2,3 col) and (3 0 col) are in it, but both might not be in range of T(x), the actual images

the general linear model, for fitting to something other than straight line XB=y still, but X changes depending on number of parameters, B

residual vector ε: ε = y - XB, so y = XB + ε = the linear model. With X and y, the goal is to find B that minimizes the length of ε, by solving the normal eqns

The inverse of a product ABC is the product of the inverses in _____ order

reverse; socks and shoes rule: (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹

Since the matrices A and B are related by left multiplication by an invertible matrix M, they share many characteristics, including rank and row space. The reason is that left multiplication by M can be understood as a set of (invertible) row operations, and rank and row space are preserved under (invertible) row operations.

row space basis = the pivot (nonzero) rows of the echelon form

matrix multiplication BA is only defined when output of A has...

same size as input of B

span{v1...vp} defn

set of all vectors that can be generated or written as a linear combination of a fixed set {v1...vp}, ie. all vectors that can be written in the form c1v1 + c2v2 + ... + cpvp. the subset of Rⁿ spanned by the vectors v1...vp

nullspace of mxn matrix A defn: Nul(A) = { x∈Rⁿ : Ax=0 }

set of all vectors x ∈ Rⁿ that solve Ax=0 thinking of A as T(x) → nul(A) is set of all inputs mapped to the origin by transformation x→Ax

if A⁻¹ exists, then Ax=b is consistent for every b in Rⁿ

since A must have a pivot in every row, so the cols of A span Rⁿ (and with a pivot in every col, the cols of A are also lin. indep.)

In general, if W= span of u₁...ur , W⊥ found by:

solving the system of equations x·u₁ = ... = x·ur = 0, ie. finding the null space of the matrix whose rows are u₁...ur

notation: v = (c1...cd col)_B where B = {u1...ud}. ci = coordinates of the vector wrt basis B. ci = free vars from the solution of Ax=0; B = the special solns from the soln of Ax=0

standard basis is e1 e2 for R² however, (1,1 col) and (1,-1 col) are also a basis (B) for R² the vector (2 0 col) can be written as (2 0) wrt standard basis, or (1 1) wrt basis B since (2 0) = 1(1 1) +1(1 -1)

subspace defn

subspace in Rⁿ is generalization of a line in R² or a plane in R³ through the origin

nul(A) of mxn matrix is..

subspace of input space Rⁿ since it follows the 3 subspace criteria

col(A) of mxn matrix is...

subspace of output space R^m since it follows the 3 subspace criteria

spans of vectors are...

subspaces

A = (T(e₁) T(e₂) ... T(eₙ)) = (Te₁ Te₂ ...) is a matrix such that T(x) = Ax for all x, so long as T is linear

the cols of matrix A are the outputs of T for the input vectors e₁, e₂, ..., eₙ

if the only eigenvalue for a 2x2 matrix has multiplicity 2 but produces only 1 eigenvector, does the eigenvector basis exist?

the eigenspace is a line since there is only 1 free variable → doesn't span R² → no eigenvector basis exists

a basis for nul(A) is given by ...

the n-r 'special solns' of Ax=0

in inhomogeneous systems Ax=b where b ≠ 0, the matrix eqn may or may not be consistent. If consistent...

the solution is closely related to the corresponding homogeneous system Ax=0. The soln set of Ax=b is just a translation of the soln set of Ax=0, whether it is a plane or a line. Ax=b, being translated, doesn't include the trivial 0 vector as a solution.

Ax=0, a homogeneous eqn, is always consistent because

the trivial solution x^ = 0 works

key concept: If T is a linear transformation and v is a vector obtained as a linear combination v = c₁u₁ + ... + crur

then T(v) = the same linear comb of Tu₁...Tur; by rules 1 & 2: T(v) = c₁Tu₁ + c₂Tu₂ + ... + crTur

for square nxn matrices, BA and AB are both defined, however... / A² makes sense when...

they may not be the same / A is a square matrix

dim col(A) + dim nul(A) =

n, the total # of columns (variables) in A

Suppose that a transformation T is linear, how do you find A?

transformation of the vector (p, q col): (p, q col) = p·e₁ + q·e₂. by rule 2: T(pe₁ + qe₂) = T(pe₁) + T(qe₂); by rule 1: = pTe₁ + qTe₂ = (Te₁ Te₂)(p, q col) = A(p, q col) ∴ find Te₁ and Te₂ by applying T (e.g. rotating) to e₁ = (1,0) and e₂ = (0,1)

The orthogonal projection of y onto v is the same as the orthogonal projection of y onto cv whenever c≠ 0.

true

use basis of H as follows:

u1...ud is a basis of H means: any v in H can be written uniquely as a lin comb of u1...ud. existence: c1...cd exist because H = span{u1...ud}; uniqueness: from the lin indep of u1...ud

another defn (2) of linear independence

u1...un are linearly independent if c1u1 + ... + cnun = 0 only when all c = 0, ie. the only soln to Ax=0 with augmented matrix (u1...un | 0) is the trivial solution.

W is spanned by the vectors u₁ = (1,3,-2 col) and u₂ = (5,1,4 col). Write b = (1,3,5 col) as the sum of a vector in W and a vector ⊥ to W

uᵢ·uⱼ = 0 → orthogonal basis. b = b₀ + (b-b₀), and the error b-b₀ ⊥ col(A), ie. ⊥ (1,3,-2) & (5,1,4). b₀ = c₁(1,3,-2 col) + c₂(5,1,4 col), so b = c₁(1,3,-2 col) + c₂(5,1,4 col) + error. To find c₁ → dot everything with u₁; to get c₂ → dot with u₂. Find the error b-b₀ knowing b and b₀. point: b₀ is the orthogonal projection of b onto col(A), proj_col(A) b

how to get b₀, best approx without orthogonal basis? method works even for u₁...ur dependent, only requires that u₁...ur span W

u₁...ur span W; b₀ = c₁u₁ + ... + crur; b = b₀ + error = c₁u₁ + ... + crur + error, and error ⊥ W. method: take the dot product with u₁...ur successively to get a system of eqns for c₁...cr to solve → r unknowns, r eqns, one for each dot with u₁...ur
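
A sketch of this method (Python/numpy; dotting b = c₁u₁ + ... + crur + error with each uᵢ gives the r-by-r system built below; the vectors reuse the earlier example):

import numpy as np

u1 = np.array([1.0, 3.0, -2.0])
u2 = np.array([5.0, 1.0, 4.0])
b  = np.array([1.0, 3.0, 5.0])

U = np.column_stack([u1, u2])   # cols span W
G = U.T @ U                     # entries u_i · u_j  (r x r system matrix)
rhs = U.T @ b                   # entries u_i · b
c = np.linalg.solve(G, rhs)     # weights c1...cr
b0 = U @ c                      # best approximation to b within W
assert np.allclose(U.T @ (b - b0), 0)   # the error is ⊥ every u_i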

key formula relating dot product to transposes

u·v = vᵀu = |v||u|cosθ; for A orthogonal → (Au)·(Av) = u·v

analyzing dynamical systems

vectors in Rⁿ = states of the system, ex. a population state; A = transition, change of state. Pₙ₊₁ = APₙ → Pₙ = AⁿP₀, which can be computed via the diagonalization formula

difference between set of vectors that span subspace H and a set of vectors that are the basis for subspace H

- vectors spanning don't need to be lin. indep.; any number of vectors can span H
- basis vectors span AND are lin. indep.

linear independent vectors defn 1

vectors u₁...un are linearly independent if different weights c₁...cn give different values of b = c₁u₁ + ... + cnun ∴ the information contained in the weights (c1...cn) uniquely identifies the vector and you can't get the vector 2 different ways

any x in Rⁿ can be split into x=w₁+w₂ where w₁ and w₂ are:

w₁ is the best approx to x in W and w₂ is the error (x-w₁); or w₂ is the best approx to x in W⊥ and w₁ is the error. consequence: |x|² = |w₁|² + |w₂|²

overview of matrix multiplication:

x in Rⁿ → Ax in R^m → B(Ax) in R^p BA is the matrix which takes x in Rⁿ → B(Ax) in R^p BA ≠ AB

transformation : reflection in x axis A (xy col) → (x, -y col) input and output same sized →square matrix

x(1, 0 col) + y(0, -1 col) = (x, -y col); reflection matrix: [ [1,0],[0,-1] ]. no matter what vector x is, this matrix will result in reflection in the x axis

General fact relating Ax=b and Ax=0 for same Ax: if x=p solves Ax=b and x=q solves Ax=0, then...

x = p+q also solves Ax=b. proof: A(p+q) = Ap + Aq = b + 0 = b ∴ can get every soln of Ax=b from any one soln of Ax=b (ex. p) by adding to it the solns q of Ax=0. x=p is any one vector that satisfies Ax=b

ex. y = <2,3 col>, u = <4,-7 col>. y = sum of 2 ortho vectors, 1 in W = span{u} and 1 in W⊥, the orthogonal complement; ie. want y = w₁ + w₂

y = w₁ + w₂ = c₁u + w₂: w₁ in span{u} and w₂ = error ⊥ span{u}. find c₁ by dotting with u → y·u = c₁(u·u) + w₂·u = c₁|u|² + 0; -13 = c₁·65 → c₁ = -13/65 = -1/5. w₁ = (-1/5)u, w₂ = y - w₁
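
The same computation in Python/numpy, checking the numbers above:

import numpy as np

y = np.array([2.0, 3.0])
u = np.array([4.0, -7.0])

c1 = (y @ u) / (u @ u)        # -13/65 = -1/5
w1 = c1 * u                   # projection of y onto span{u}
w2 = y - w1                   # error component
assert np.isclose(w2 @ u, 0)  # w2 ⊥ u
print(w1, w2)                 # (-0.8, 1.4) and (2.8, 1.6)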

Is A² = I where A is transformation that flips point over y=3x

yes T(Tx) = x A(Ax) = x A²x=x for all x ∴A²=I →A=A⁻¹ Reflections are their own inverses

A=orthogonal matrix. v,u = 2 vectors in Rⁿ is angle btwn u and v same as angle btwn Av, Au?

yes. u·v = |u||v|cosθ and Au·Av = |Au||Av|cosθ₂; |u| = |Au| and |v| = |Av| for orthogonal matrices. Au·Av = (Au)ᵀ(Av) = uᵀAᵀAv = uᵀv = u·v ∴ since both lengths and the dot product are preserved, cosθ (and hence the angle) is preserved → orthogonal matrices preserve length and angle

For every vector v and subspace W, is the vector v-proj(w)v ⊥ W?

yes, since by defn proj_W(v) = v₀ is the part of v lying in W, and v - v₀ is the error, which is ⊥ W

Clicker q: if p and q both solve Ax = 0 then does p-q?

yes. Ap = 0 = Aq, so A(p-q) = Ap - Aq = 0 - 0 = 0. If p and q both solve Ax=0, then so does cp + dq for any constants c and d

W⊥, orthogonal complement, defn

"w perp" set of all vectors ⊥ to all vectors in W (all errors) x in W⊥ → x⊕w=0 for all w in W If W is plane in R³→W⊥ is normal lines of W If W line in R³→W is plane with normal vector w

dot product of 2 vectors x·y, x = (x₁...xn), y = (y₁...yn): = xᵀy = x transpose * y, or |x||y|cosθ

'transpose' is switching the rows and cols of A; length (magnitude) of a vector = sqrt(sum of squares)

A = u1 u2 u3 A = 3x3 matrix suppose x = (1 1 1 col) lies in nul(A) Then, col(A) = span{u1 u2} TF

(111) in nul(A) →A(111)=(000) ∴u1 + u2 + u3 = 0 by defn, col(A) = span{colsA} does span{u1 u2} = span{u1 u2 u3} ie. is u3 dependent? YES since u3 = -u1 -u2 ∴ true

matrix normal eqn for solving inconsistent Ax=b when A not orthogonal , instead of usual method dotting with every u₁...ur.

AᵀAx = Aᵀb; solving for x gives the least squares solution. could have free vars if the cols of A are not lin. indep.

If A⁻¹ and B⁻¹ both exist , then so does (AB)⁻¹

(AB)⁻¹ = (B⁻¹A⁻¹) AB = do B first, then A RHS → undo A first, then B

transpose rule (AB)ᵀ =

BᵀAᵀ

cols of BA =

(Te1 Te2) where Te1 = B(Ae₁), Te2 = B(Ae₂). cols of A = u1...un = Ae1...Aen; cols of BA = Bu1...Bun = BAe1...BAen ∴ to get BA, multiply B by the cols of A

the conditions that guarantee that det(M)=0:

- When A=0 there is more than one X which satisfies the equation. - There is some value of A for which no value of X satisfies the equation.

If the det(M)=0 then

- some values of A will have no values of X which satisfy the equation.
- some values of A (such as A=0, ie. solns other than the trivial one exist) will allow more than one X to satisfy the equation.
- given any X there is one and only one A which will satisfy the equation. X → 1 A, but an A need not come from exactly 1 X

relationship btwn W and W⊥

- (W⊥)⊥ = W
- no common vectors except 0
- any vector x in Rⁿ can be written uniquely as x = w₁ + w₂ with w₁ in W and w₂ in W⊥
- sum of the dimensions of W and W⊥ = n, for W and W⊥ in Rⁿ

matrix transformations ↔ linear transformations

- Every matrix transformation T(x) = Ax satisfies the two rules → linear
- Every linear transformation satisfies the 2 rules → can be written as T(x) = Ax, so it's a matrix transformation

conditions for existence of A⁻¹

- rref form is the identity matrix
- A has a pivot in every row/col → no free vars
- note: A is the coefficient matrix

the zero vector is in the eigenspace for all A since

0 is always a soln to every Ax = λx eqn

methods of finding dets

1) cofactor exp along a row/col
2) row reduce to echelon form, keeping track of the steps along the way
3) a combination of these two, ex. 2) then 1)

given any A, you can turn it into a triangular matrix; knowing the det of the triangular form, you can find det(A) ∴ keep track of the steps taken to go from A → ∆A, to relate det(A) to det(∆A)

1) flip 2 rows → det changes sign
2) multiply a row by k → det is multiplied by k
3) add/subtract k*(row i) to row j → no change
→ only works for numerical matrices; if variables exist, use cofactor expansion

general procedure for finding eigenvalues

1) form A-λI from A 2) det(A-λI) = P(λ), the characteristic polynomial with degree n for nxn matrix 3) solve P(λ)=0 to get roots

proof breakdown of fact that dim(W) + dim(W⊥)=n

1) show that the zero vector is the only common vector in W and W⊥ 2) show that any x can be written as x=w₁+w₂ (uniqueness of decomposition) 3) show that basis of W and W⊥ span Rⁿ

ex. find the best fitting line for (2,1), (5,2), (7,3), (8,3)

1) x coords → design matrix X: X = [1,2; 1,5; 1,7; 1,8]
2) y coords → observation vector y = [1 2 3 3 col]
3) use the normal matrix eqn to solve, ie. find XᵀX and Xᵀy, then find B in XᵀXB = Xᵀy
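
The same example in Python/numpy, solving the normal equations XᵀXB = Xᵀy:

import numpy as np

x = np.array([2.0, 5.0, 7.0, 8.0])
y = np.array([1.0, 2.0, 3.0, 3.0])

X = np.column_stack([np.ones_like(x), x])   # design matrix, rows [1, x_i]
B = np.linalg.solve(X.T @ X, X.T @ y)       # [B0, B1]: least squares line y = B0 + B1*x
print(B)                                     # [2/7, 5/14], roughly [0.29, 0.36]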

y= blah u=blah , find distance btwn y and the line thru u and the origin 3 ways?

1) y = c₁u + error(w₂) → find c₁, then error = y - w₁, and compute its length 2) find a vector ⊥ to u and project y onto that line 3) use Pythagoras: |y|² = |w₁|² + |w₂|², where w₁ is the projection of y onto u

steps in finding a least square soln of Ax=b for non orthogonal bases

1) calculate AᵀA 2) calc Aᵀb 3) solve AᵀAx = Aᵀb for x with row reductions, or x₀ = (AᵀA)⁻¹Aᵀb for 2x2 matrices

Ax=b inconsistent has a unique least squares soln when:

1) cols linearly independent 2) AᵀA invertible. b - Ax₀ is the 'least squares error'

calculating 3x3 determinants by cofactor expansion across the 1st row works for higher nxn matrices as well

1. get Aij for every ij in row 1: for A = (1 2 3; 4 5 6; 7 8 9), A₁₁ = (5 6; 8 9), A₁₂ = (4 6; 7 9), A₁₃ = (4 5; 7 8)
2. get aij for every ij; aij is the entry of A in the ij position: a₁₁ = 1, a₁₂ = 2, ...
3. calc all det(Aij), which is known since each is 2x2
4. make a checkerboard of signs + - + - starting with +
5. formula: det(A) = +a₁₁det(A₁₁) - a₁₂det(A₁₂) + a₁₃det(A₁₃)

How to find bases for nul(A) and col(A)

1. rref(A)
2. basis of col(A) = the pivot cols of the original A (not of rref(A))
3. basis of nul(A) → solve Ax=0; the special solutions form the basis
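
A sketch of these steps in Python with sympy (exact arithmetic; the matrix is a made-up example):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

rrefA, pivot_cols = A.rref()                 # step 1: rref(A) and the pivot column indices
col_basis = [A.col(j) for j in pivot_cols]   # step 2: pivot cols of the ORIGINAL A
nul_basis = A.nullspace()                    # step 3: special solutions of Ax = 0
print(pivot_cols, col_basis, nul_basis)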

cases where dependence is easy to test

1. when there are more vectors than entries, ie. more vectors u1...ur in Rⁿ than rows (r > n) → can't have a pivot in every column → free vars present → dependent
2. if there exists a non-trivial soln of Ax=0 (by inspection), eg. for 2 vectors of any size
3. if one of the vectors is 0

Given Tu₁ and Tu₂, how would you find Tv, a random vector?

1. write v as a linear comb of u₁ and u₂, ie. find the weights (using a matrix or eyeballing) 2. Tv = c₁Tu₁ + c₂Tu₂

multiple regression - 2 indep variables, 1 dependent y=B₀f₀(u,v)+B₁f₁(u,v)+...Bkfk(u,v) where f₀....fk are known and B₀...Bk are unknown weights, still linear

2 variables → least squares plane X=[1, u₁, v₁...; 1,u₂,v₂...etc] once X is defined, solve using normal matrix eqns to find B vector as usual

sizes of A and B in matrix multiplication

A = mxn ;Rⁿ→R^m B = pxm ;R^m →R^p (pxm) x (mxn) = (pxn) BA = p rows, n cols

by writing vectors in terms of the eigenvector basis...

A essentially gets replaced by a diagonal matrix with the eigenvalues as the entries: (λ₁ 0; 0 λ₂)(c₁, c₂ col) wrt B = (λ₁c₁, λ₂c₂ col) wrt B = Ax written wrt B

ABC =

A(BC) = (AB)C

A is "diagonizable" if it can be written as

A=PDP⁻¹

A⁻¹ if it exists, is another nxn matrix with

AA⁻¹ = A⁻¹A = Iₙ. the linear trans of A followed by the linear trans of A⁻¹ = the identity matrix, nothing changes: Iₙx = x for all x

how to find A⁻¹

AA⁻¹ = I. cols of A⁻¹ = u1, u2, u3; then Au1, Au2, Au3 = e1, e2, e3. Solve: Au1=e1, Au2=e2, Au3=e3
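
A numeric sketch of this idea (Python/numpy; the 3x3 matrix is a made-up example):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
I = np.eye(3)

# each column u_i of A^-1 solves A u_i = e_i; solving against all of I does this column by column
A_inv = np.linalg.solve(A, I)
assert np.allclose(A @ A_inv, I)
assert np.allclose(A_inv @ A, I)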

cancellations

AB = AC and A⁻¹ exists → A⁻¹(AB) = A⁻¹(AC) → IB = IC → B = C. Order matters, so if AB = CA, then A⁻¹CA ≠ C in general and we can't conclude B = C

det(AB) = det(A)*det(B), det(BA) = det(B)*det(A)

AB ≠ BA but their determinants are the same

which are true A. The column space of A is the range of the mapping x→Ax. B. The null space of an m×n matrix is in Rm. C. Col(A) is the set of all vectors that can be written as Ax for some x. D. The null space of A is the solution set of the equation Ax=0. E. If the equation Ax=b is consistent, then Col(A) is Rm. F. The kernel of a linear transformation is a vector space.

ACDF

matrix eqns Ax = b can be written as vector eqns and augmented matrix as well Ax = [a1 a2 ... an] [ x1 ... xn col]

Ax = b; x1a1 + x2a2 + ... + xnan = b; [a1 a2 ... an | b]. all 3 notations can be solved via row reducing the augmented matrix

proof that A has no free vars if A⁻¹ exists

Ax=0 A⁻¹Ax=A⁻¹0 Ix=A⁻¹0 x=0 → trivial soln only soln → no free vars

a matrix A is one to one if..

Ax=b has at most one soln for each b in Rⁿ, ie. different x's give different b's

since the eigenvalue eqn Ax=λx has 2 unknowns (both x and λ), it can't be solved directly. finding λ:

Ax = λx → Ax = λIx → Ax - λIx = 0 → (A-λI)x = 0. x ≠ 0 by the eigenvalue defn → A-λI has a free var → can't be invertible → det = 0 ∴ find λ by solving det(A-λI) = 0, which is aλ² + bλ + c = 0 for a 2x2 matrix

summary of eigenvectors and eigenvalues

Ax = λx, x ≠ 0, λ = scalar. finding λ → solve det(A-λI) = 0. find nul(A-λI) for each λ previously found to get x, the eigenvector. → the eigenvector basis simplifies complex matrices into simple ones, basically turning A into a scalar
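
A quick check of this summary in Python/numpy (made-up 2x2 matrix):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, V = np.linalg.eig(A)        # eigenvalues and eigenvectors (cols of V)
for lam, x in zip(lams, V.T):
    assert np.allclose(A @ x, lam * x)                    # Ax = λx with x ≠ 0
    assert np.allclose((A - lam * np.eye(2)) @ x, 0)      # x lies in nul(A - λI)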

If A has free variables, ...

A⁻¹ cannot exist → analogous to "can't divide by 0"

if det(A) ≠ 0 →

A⁻¹ exists

for these transformations, what is the inverse? 1) T(x) rotates x by 45° 2) T(x) rotates x by 90°

1) A⁻¹ = A⁷ (since A⁸ = I for a 45° rotation) 2) A⁻¹ = A³ (since A⁴ = I for a 90° rotation)

A and B are n×n matrices. Check the true statements below: A. If det(A) is zero, then two rows or two columns are the same, or a row or a column is zero. B. If two row interchanges are made in succession, then the determinant of the new matrix is equal to the determinant of the original matrix. C. The determinant of A is the product of the diagonal entries in A. D. det(Aᵀ) = (−1)det(A)

B

matrix multiplication notation

BA = B(Ax) = multiply by A first, then B

Let m<n. Then U= {u1,u2,...,um} in Rn can form a basis for Rn if the correct n−m vectors are added to U // TF

F

Let m>n. Then U= {u1,u2,...,um} in Rn can form a basis for Rn if the correct m−n vectors are removed from U. TF

F

clicker q: if there is a pivot in every coloumn of an augmented matrix, the system is consistent. T/F

F, because if the RHS column has a pivot, the system is inconsistent. (A pivot in every column of the coefficient matrix only guarantees uniqueness of a solution, not existence.)

Whenever a system has free variables, the solution set contains many solutions

F →maybe inconsistent

The equation Ax = 0 gives an explicit descriptions of its solution set.

FALSE - The equation gives an implicit description of the solution set.

If A is an m × n matrix, then the range of the transformation x → Ax is R^m

FALSE Rm is the codomain, the range is where we actually land.

The solution set of a linear system involving variables x1, ..., xn is a list of numbers (s1, ...sn) that makes each equation in the system a true statement when the values s1, ...,sn are substituted for x1, ..., xn, respectively.

FALSE This describes one element of the solution set, not the entire set

The solution set of Ax = b is the set of all vectors of the form w = p + vh where vh is any solution of the equation Ax = 0

FALSE This is only true when there exists some vector p such that Ap = b.

If an n × p matrix U has orthonormal columns, then UUᵀx = x for all x in Rⁿ.

FALSE This only holds if U is square

If S is a linearly dependent set, then each vector is a linear combination of the other vectors in S.

FALSE- For example, [1, 1] , [2, 2] and [5, 4] are linearly dependent but the last is not a linear combination of the first two

A matrix with orthonormal columns is an orthogonal matrix.

FALSE. It must be a square matrix.

If the linear transformation x → Ax maps Rⁿ into Rⁿ then A has n pivot positions.

FALSE. Since A is n × n, the linear transformation x → Ax automatically maps Rⁿ into Rⁿ; this doesn't tell us anything about the pivots of A.

The solution set of Ax = b is obtained by translating the solution set of Ax = 0.

FALSE. This only applies to a consistent system.

T/F: the orthogonal projection y^ of y onto a subspace W can sometimes depend on the orthogonal basis for W used to compute y^

False: the projection y^ is just the closest vector in W to y and doesn't depend on the choice of basis

Suppose a matrix A has n rows and m columns.

If n < m then the m columns of A may span Rⁿ. This can happen even if the matrix has a zero or repeated column: an arbitrary number of vectors can span; they don't need to be linearly independent

When does it occur that Ax=b is consistent for all b?

Matrices A whose echelon form has a pivot in every row (equivalently, a pivot in the bottom row) produce matrix eqns Ax=b that are consistent for all b

row space of A, ie. col(Aᵀ), ⊥ Nul(A)

Nul(Aᵀ) ⊥ col(A)

nxn matrices behave like usual numbers and follow same conventions except...

ORDER MATTERS A(x+y) = Ax + Ay (A-B)C = AC - BC

uses of diagonalization eqn A=PDP⁻¹

P = eigenvector matrix, D = diagonal eigenvalue matrix. Aⁿ = (PDP⁻¹)ⁿ = PDP⁻¹·PDP⁻¹·...; the inner P and P⁻¹ cancel, leaving Aⁿ = PDⁿP⁻¹. useful for analyzing how dynamical systems evolve, ex. a migration model
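
A sketch of the power formula in Python/numpy (the transition matrix is a made-up migration-style example):

import numpy as np

A = np.array([[0.95, 0.03],
              [0.05, 0.97]])       # made-up transition matrix (columns sum to 1)

lams, P = np.linalg.eig(A)         # eigenvalues and eigenvector matrix
D = np.diag(lams)

n = 20
A_n = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)   # A^n = P D^n P^-1
assert np.allclose(A_n, np.linalg.matrix_power(A, n))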

subspaces of R²

R² → 2D subspace y=mx → 1D subspace 0 vector → 0D subspace

det(A) volume interpretation

Seeing A=(u1...un) as a transformation T(x) with u1=T(e₁), det(A) tells us how volume changes as the box with sides e₁, e₂, e₃.... is transformed to the box with sides u₁, u₂, u₃...

A, B matrices such that AB is defined. Suppose the first 2 cols of B are equal. Then the first 2 cols of AB are equal. T/F?

T, because the cols of AB are A(Be₁), A(Be₂), ... = A times the cols of B → the first 2 cols of AB are equal

Given A, write T for the corresponding matrix transformation

T(x^) = Ax^ analogous to f(x) = mx but with vectors

If W is a subspace of Rⁿ and if v is in both W and W ⊥, then v must be the zero vector.

TRUE

If the columns of an n × p matrix U are orthonormal, then UUᵀy is the orthogonal projection of y onto the column space of U.

TRUE

If y = z1 + z2 where z1 is in a subspace W and z2 is in W ⊥, then z1 must be the orthogonal Projection of y onto W .

TRUE

The general least squares problem is to find an x that makes Ax as close as possible to b. Any solution of AᵀAx = Aᵀb is a least squares solution of Ax = b. T/F?

TRUE, this is how we can find the least squares solution.

If the columns of A are linearly independent, then the equation Ax = b has exactly one least squares solution.

TRUE. Then AᵀA is invertible, so we can solve AᵀAx = Aᵀb for x by taking the inverse.

If ||u||² + ||v||² = ||u + v||² , then u and v are orthogonal.

TRUE By Pythagorean Theorem

A least-squares solution of Ax = b is a vector x^ that satisfies Ax^ = b^ where b^ is the orthogonal projection of b onto ColA.

TRUE Remember the projection gives us the best approximation.

The equation Ax = b is homogeneous if the zero vector is a solution.

TRUE. If the zero vector is a solution then b = Ax = A0 = 0. So the equation is Ax = 0, thus homogenous.

Two vectors are linearly dependent if and only if they lie on a line through the origin.

TRUE. If they lie on a line through the origin then the origin, the zero vector, is in their span thus they are linearly dependent.

The effect of adding p to a vector is to move the vector in the direction parallel to p.

TRUE. We can also think of adding p as sliding the vector along p

knowledge of Te₁ and Te₂ sufficient to find any T(x) since

Te₁ and Te₂ = the cols of A with which x is multiplied. This matrix A is the standard matrix for the linear trans. T

If the columns of an m × n matrix A are orthonormal, then the linear mapping x→ Ax preserves length.

True

For an m × n matrix A, vectors in the null space of A are orthogonal to vectors in the row space of A

True; row space = col(Aᵀ)

T/F If A = invertible nxn matrix, then cols of A span Rⁿ

True since Ax=b consistent for all b in Rⁿ when A⁻¹ exists

TF: A = orthogonal matrix T = lin. trans sending x → Ax then, |Tx| = |x| ? ie. T preserves length?

True. Tx = Ax by defn. |Tx|² = Tx·Tx = Ax·Ax = (Ax)ᵀ(Ax) by the fact that u·v = vᵀu, = xᵀAᵀAx by the transpose rules, and AᵀA = I since A is orthogonal, so = xᵀx = |x|² ∴ |x| = |Tx|, so length is preserved. point: orthogonal matrices preserve length

typical layout of discrete dynamical system

Vₜ₊₁ = MVₜ; Vₜ = state of the system at time t, Vₜ₊₁ = state at time t+1, M = fixed matrix. discrete dynamical system because t is discrete. Vₙ = MⁿV₀

example of finding W⊥ from W

W = subspace of R⁴ spanned by u₁ = <1,1,1,1 col> and u₂ = <1,1,-1,1 col>, which are not ⊥ since u₁·u₂ ≠ 0. x in W⊥: x = <x₁,x₂,x₃,x₄ col> with x·u₁ = x·u₂ = 0 → x₁+x₂+x₃+x₄ = 0 and x₁+x₂-x₃+x₄ = 0. find x and W⊥ by putting these in a matrix and solving. W and W⊥ are both planes → n = 2+2 = 4
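
The same example computed in Python with sympy: put u₁, u₂ as the rows of a matrix and take its null space.

from sympy import Matrix

u1 = [1, 1, 1, 1]
u2 = [1, 1, -1, 1]

M = Matrix([u1, u2])          # rows are the spanning vectors of W
W_perp_basis = M.nullspace()  # x·u1 = x·u2 = 0  ⇔  Mx = 0
print(W_perp_basis)           # 2 basis vectors: dim W + dim W⊥ = 2 + 2 = 4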

Ax=b notation in linear models

X=design matrix B=parameter vector y=observation vector

reflection in line y=x (pq) → (qp)

[ [0,1],[1,0] ]

rotation by angle θ ccw

[ [cosθ, -sinθ], [sinθ, cosθ] ]

A⁻¹ for 2x2 matrix formula

[[a,b],[c,d]]⁻¹ = (1/(ad-bc)) [[d,-b],[-c,a]] if ad-bc=0 → free vars and A⁻¹ dne
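
The formula as a small Python function (sketch):

import numpy as np

def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the ad - bc formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: free variables, A^-1 does not exist")
    return (1.0 / det) * np.array([[d, -b],
                                   [-c, a]])

print(inv2x2(1, 2, 3, 4))   # matches np.linalg.inv([[1, 2], [3, 4]])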

kernel = nul space

a matrix is onto if the range equals the codomain, ie. the cols span R^m (a pivot in every row)

eigenvalue defn

a number λ such that Ax = λx has a non-zero soln, x.

Suppose T: Rⁿ → R^m is a transformation. We say that T is linear if: linear = corresponding to matrix transformation

a) T(x+y) = T(x) + T(y) where x and y are vectors in Rⁿ b) T(cx) = cT(x) where c is constant

Thrm for m x n coefficient matrix, the A part in a matrix eqn:

a) for each b in R^m, the eqn Ax = b has a soln
b) each b in R^m is a linear combination of the columns of A
c) the cols of A span R^m
d) A has a pivot position in every row
a)-d) are equivalent statements

A set of vectors H in Rⁿ is called a subspace if ...

a) the 0 vector lies in H
b) u, v in H → u+v also in H
c) u in H, c a scalar → cu in H
ie. if all lin combs of vectors in H are in H

solutions of matrix eqns can be written :

as x = Σ xᵢuᵢ where the xᵢ are the free variables and the uᵢ are the direction vectors, ie. isolate the free variables

diagonalization

assume there exists an eigenvector basis for Rⁿ. P = [f₁ ... fₙ] = eigenvector matrix. AP = A[f₁ ... fₙ] = (λ₁f₁ ... λₙfₙ) = (f₁ ... fₙ)·D, where D = diag(λ₁, λ₂, ..., λₙ) is the diagonal eigenvalue matrix → AP = PD. since the cols of P are independent → the eigenvector matrix is invertible, P⁻¹ exists → P⁻¹AP = D → A = PDP⁻¹

column space of mxn matrix A defn:

the set of all b ∈ R^m that are linear combinations of the cols of A. thinking of A as a transformation T, col(A) is the range of the transformation x → Ax

proof of AᵀAx = Aᵀb

b - b₀ ⊥ col(A) → b - Ax₀ ⊥ col(A). a col of A = aj; aj·(b - Ax₀) = 0 → (aj)ᵀ(b - Ax₀) = 0; (aj)ᵀ = a row of Aᵀ ∴ Aᵀ(b - Ax₀) = 0 → AᵀAx₀ = Aᵀb

dimension of nul(A) "nullity" is the number of ____ in A

free (non-pivot) columns in A; = n - r, where r = # of pivot cols

dimension of col(A) "rank" is number of pivot cols in A, denoted 'r'

basis for col(A) given by the r pivot cols of A

for 2x2 matrix, Ax=b can be solved 2ways

by row reduction as usual or by x=A⁻¹b

when trying to find best approx for Ax=b, minimize |Ax-b| as a function of vector x

b₀ is the orthogonal projection of b onto col(A). thrm: b - b₀ ⊥ all vectors in col(A); b - b₀ = the error vector

proof that the weights c1...cn in c1u1+...+cnun are unique when u1...un are linearly independent

c1u1+...+cnun = v^ and d1u1+...+dnun = v^ → (c1-d1)u1+...+(cn-dn)un = 0. If you could get the same vector 2 ways, then subtracting the two expressions gives 0; but by independence the only way the combination can equal 0 is if every weight ci-di = 0, so ci = di ∴ can't get to the same vector 2 different ways

row equivalent

can get from one to the other using elementary row operations.

dependence means that at least one of the u in (u1...un)

can be produced from the span of the others if c1u1+c2u2...+cnun=0 but c1≠0, then -c1u1=c2u2+...cnun and u1=(-1/c1)(c2u2+...cnun)

How to determine if vector x is in nul(A)

check whether Ax = 0

How to check if vector b in col(A)

check if Ax=b is consistent

How to test if a transformation is linear

check if it follows the two rules

how to check if a vector b is in the range of T(x)

check if the linear system Ax=b is consistent if consistent → in range

to check if a set of vectors is a basis for H...

check if they span H, and check if they're linearly indep.

general formula for ci in terms of orthogonal bases

cᵢ = (v·fᵢ)/|fᵢ|² ∴ don't need to row reduce Ax=v to get the cᵢ; can just use the formula, AS LONG AS IT'S AN ORTHOGONAL BASIS

A = nxn matrix, if Ax=b inconsistent for some b in Rⁿ, what can be said about Ax=0?

the cols of A don't span Rⁿ, and there is a non-trivial soln to Ax=0

mean deviation form of fitting linear systems

compute x̄, the average of the x values; x* = x - x̄. Using x* in the design matrix X, the two cols of X will be orthogonal and you can just use the dot product formula to find B

if given 2 of the 3 vectors in an orthogonal basis of R³, use ___ to find the 3rd

cross product

det(Aⁿ) = det(A)ⁿ

det(A⁻¹)=1/det(A) ∴det(A) ≠ 0 → A⁻¹ exists

A*A⁻¹ = I

det(I) = det(A)*det(A⁻¹) = 1

all possible bases of H contain the same number of vectors, this number is the ____

dimension of H

what does this mean: is b^ in the span{v1...vp}

does x1v1 + x2v2 + ... + xpvp = b, or does [v1 ... vp | b] have a solution?

fast way to calc BA

dot products of vectors: the entry in the i-j position of BA = the dot product of row i of B with col j of A

how to find eigenvectors knowing eigenvalues

the eigenvector is x in Ax=λx: the transform of x by A is parallel to x, it lies on the same line. if |λ|>1 → expands, |λ|<1 → contracts. the eigenvector is found by finding nul(A-λI) knowing λ

how to find b₀ by dot product

the error vector (b-b₀) ⊥ the vectors spanning col(A), so (error vector)·(each spanning vector of col(A)) = 0

A = vectors (u₁....un) det(u₁ u₂ u₃...) = ±volume of box with sides u₁...un

ex. for 2 vectors, det(u₁ u₂) = 2d area of parallelogram

fitting other curves in general y=B₀f₀(x)+B₁f₁(x)+...Bkfk(x) f₀...fk = known functions B₀...Bk are unknown parameters still a linear model since unknowns are linear

ex. for y=B₀+B₁x+B₂x² y=XB+ε y=[y₁,y₂...col] X=[1,x₁,x₁²..; 1,x₂,x₂²...etc] B=[B₀,B₁,B₂] ε=[ε₁,ε₂...]

