matrix quiz 4
special case: a V -> V transformation
fix a basis B for V; then [T(x)]_B = A[x]_B, where A = [T]_B
complex conjugation
flip the sign of the imaginary part: if z = x + iy, then the conjugate is z-bar = x - iy, and z * z-bar = x^2 + y^2 = |z|^2 (the modulus squared)
characteristic polynomial
for any n x n matrix A, treating lambda as a variable, det(A - lambda I) is a polynomial of degree n. The eigenvalues of A are the roots of the characteristic polynomial; in particular, there are at most n eigenvalues
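A quick numerical illustration with numpy (the matrix here is just an assumed example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    coeffs = np.poly(A)              # coefficients of det(A - lambda I): [1, -4, 3]
    roots = np.roots(coeffs)         # roots of the characteristic polynomial
    eigvals = np.linalg.eigvals(A)   # eigenvalues computed directly
    print(np.sort(roots), np.sort(eigvals))   # both give [1. 3.]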
auxiliary polynomial
for y_{k+n} + a1*y_{k+n-1} + ... + an*y_k = 0: p(x) = x^n + a1*x^(n-1) + ... + an. You find n from the order of the difference equation (e.g. if the equation relates y_{k+2}, y_{k+1}, y_k, then n = 2)
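For example (a recurrence chosen only to illustrate): y_{k+2} - 3y_{k+1} + 2y_k = 0 has auxiliary polynomial x^2 - 3x + 2 = (x - 1)(x - 2), so {1^k} and {2^k} are solutions; a quick check in numpy:

    import numpy as np

    roots = np.roots([1, -3, 2])     # roots of the auxiliary polynomial: [2., 1.]
    print(roots)
    # verify that y_k = 2^k satisfies y_{k+2} - 3*y_{k+1} + 2*y_k = 0
    for k in range(5):
        print(2**(k + 2) - 3 * 2**(k + 1) + 2 * 2**k)   # prints 0 every time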
how to find modulus and angle
plot the point (x, y) given by the real and imaginary parts of the complex number. The modulus is sqrt(x^2 + y^2), and the angle is measured from the positive x-axis using trig
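A quick numpy check (the value of z is purely illustrative):

    import numpy as np

    z = 1 + 1j                  # x = 1, y = 1
    r = np.abs(z)               # modulus: sqrt(1^2 + 1^2) = sqrt(2)
    theta = np.angle(z)         # angle from the positive x-axis: pi/4
    print(r, theta)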
similar
if P^-1 A P = B for some invertible matrix P, then A is similar to B
multiplicity
if lambda is a root appearing e times in the characteristic polynomial, then e is its multiplicity. For example, in a diagonal matrix, if a1 = a2 then that value is a multiple root with multiplicity >= 2
markov chain
if P is a stochastic matrix and x is a probability vector, then Px is also a probability vector. A Markov chain is a signal {x_k} of probability vectors with x_{k+1} = P x_k for a stochastic matrix P, so that x_k = P^k x_0
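A small simulation sketch (the stochastic matrix and starting vector are assumed examples):

    import numpy as np

    P = np.array([[0.9, 0.5],     # each column sums to 1, so P is stochastic
                  [0.1, 0.5]])
    x = np.array([1.0, 0.0])      # initial probability vector x_0
    for k in range(20):
        x = P @ x                 # x_{k+1} = P x_k
    print(x)                      # approaches the steady-state vector of P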
fundamental theorem of algebra
if p(t) is a polynomial over C of positive degree, then it has a root in C. Consequently, p(t) = a0 + a1*t + ... + an*t^n with an != 0 factors completely as an(t - z1)(t - z2)...(t - zn), where z1, ..., zn are its roots
steady state vector
a probability vector q such that Pq = q
-the identity matrix is a stochastic matrix, and for it every probability vector is a steady-state vector
-if A, B are n x n stochastic matrices, so is AB
-to find a steady-state vector, rewrite Pq = q as (P - I)q = 0: form P - I, then solve for the PROBABILITY vector q that it sends to the zero vector
-if P is a regular stochastic matrix, then it has a unique steady-state vector, and any Markov chain converges to it: lim k->infinity x_k = steady-state vector
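One way to carry out the (P - I)q = 0 computation numerically (same assumed P as in the Markov chain sketch above): the null space of P - I is spanned by an eigenvector of P for eigenvalue 1, so take that eigenvector and rescale it into a probability vector.

    import numpy as np

    P = np.array([[0.9, 0.5],
                  [0.1, 0.5]])
    vals, vecs = np.linalg.eig(P)
    q = vecs[:, np.isclose(vals, 1.0)].ravel()   # spans the null space of P - I
    q = q / q.sum()                              # rescale so the entries sum to 1
    print(q)                                     # approximately [0.833, 0.167]
    print(P @ q)                                 # Pq = q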
finding complex eigenvalues
plug into the characteristic polynomial and use the quadratic formula. Simplify the expression under the square root; if it is negative (or complex), write it as a complex number and find its modulus and angle to take its square root z. Then combine z with the rest of the formula, using the plus or minus, to get the two eigenvalues
modulus of z
r = square root of x^2 + y^2
z in polar form
z = r(cos(theta) + i sin(theta))
multiplication of z in terms of modulus and angles
r1*r2 (cos(theta1 + theta2) + i sin(theta1 + theta2))
how to find the eigenvalues and corresponding eigenvector
solve det(A - lambda I) = 0 for the eigenvalues. Then, for each eigenvalue, write (A - lambda I) times the vector (x, y) equal to the zero vector, write out the equations on both sides, and they give the relationship between the variables that determines the eigenvector
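The same computation done numerically with np.linalg.eig (the matrix is only an example):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    vals, vecs = np.linalg.eig(A)
    print(vals)    # eigenvalues: 5 and 2
    print(vecs)    # the columns are the corresponding eigenvectors
    # check that A v = lambda v for the first eigenpair
    print(np.allclose(A @ vecs[:, 0], vals[0] * vecs[:, 0]))   # True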
homogeneous linear difference equation
a linear difference equation whose right-hand side signal is zero, i.e. {z_k} = 0
addition of 2 complex numbers
(x1+x2) + i(y1+y2)
multiplication of 2 complex numbers
(x1x2-y1y2) + i (x1y2+x2y1)
i^2
-1
diagonalization procedure
1) calculate the eigenvalues (if the number of distinct eigenvalues = n, then it can be diagonalized)
2) find the eigenspaces W1, ..., Wp for the eigenvalues ai (where Wi = null space of (A - ai I)). If W1, ..., Wp do not generate R^n, the matrix can't be diagonalized
3) if they do generate R^n, take a basis of R^n made up of bases of the Wi's. Then P = [v1 ... vn] and P^-1 A P is a diagonal matrix
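A numerical sketch of the procedure (the matrix is an assumed example with 2 distinct eigenvalues):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    vals, vecs = np.linalg.eig(A)    # eigenvalues and a basis of eigenvectors
    P = vecs                         # columns of P are the eigenvectors
    D = np.linalg.inv(P) @ A @ P     # diagonal, with the eigenvalues on the diagonal
    print(np.round(D, 10))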
eigenvector
a NONZERO vector x in R^n such that Ax = lambda x for some scalar lambda. This means (A - lambda I)x = 0 (the zero vector), so A - lambda I is not invertible and det(A - lambda I) = 0. If an n x n matrix has distinct eigenvalues, then the corresponding eigenvectors are linearly independent
calculating A^m with similar diagonal matrix
A = P B P^-1, so A^m = P B^m P^-1, and if B is diagonal then B^m just has a1^m, ..., an^m down the diagonal
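A sketch of this computation, continuing the assumed example matrix from above:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    vals, P = np.linalg.eig(A)
    m = 5
    Bm = np.diag(vals ** m)                    # B^m: eigenvalues raised to the m-th power
    Am = P @ Bm @ np.linalg.inv(P)             # A^m = P B^m P^-1
    print(np.allclose(Am, np.linalg.matrix_power(A, m)))   # True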
converting to first order equations
x_{k+1} = A x_k, and you try to find a nonzero vector v and a number lambda such that Av = lambda v. Then we can take x_k = lambda^k v for all k, since A x_k = lambda^k A v = lambda^k (lambda v) = lambda^{k+1} v = x_{k+1}
complex numbers notation
C is the set of complex numbers
linear transformation with ordered bases
[T(x)]_C = A[x]_B; if A is diagonalizable, then T is diagonalizable. To get A, choose ordered bases (e.g. the standard basis): plug 1, t, t^2, etc. into the rule for T, read off the coefficients you get, and make those coefficient vectors the columns of A. Then if [x]_B = (a0, a1, a2), plug these coefficients into the corresponding spots, and A[x]_B gives the new coefficients, i.e. [T(x)]_C
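A concrete sketch, using differentiation on polynomials of degree <= 2 as an assumed example of T (the card itself does not fix a particular transformation):

    import numpy as np

    # T(p) = p', with ordered bases B = C = {1, t, t^2}
    # T(1) = 0, T(t) = 1, T(t^2) = 2t; their coordinate vectors are the columns of A
    A = np.array([[0, 1, 0],
                  [0, 0, 2],
                  [0, 0, 0]])
    x_B = np.array([3, 5, 4])    # coordinates of p(t) = 3 + 5t + 4t^2
    print(A @ x_B)               # [5, 8, 0], i.e. p'(t) = 5 + 8t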
signal
a function from the integers (or the non-negative integers) to the real numbers, written {x_k}, where k is in Z and x_k is in R
stochastic matrix
an n x n matrix all of whose columns are probability vectors
stochastic matrix eigenvalue
a steady-state vector x means that x is a probability vector and Px = x, so P has an eigenvalue of 1
argument of z
the angle theta, measured from the positive x-axis
solution to homogeneous linear difference equation
for y_{k+1} + a1*y_k = 0, the solutions are the scalar multiples of {(-a1)^k}; in general, each root r of the auxiliary polynomial gives a solution {r^k}, and if there are no real roots then there is no real solution of this form
linear difference equations
a0*y_{k+n} + a1*y_{k+n-1} + ... + an*y_k = z_k for all k. Usually a0 = 1 and an != 0 (a linear difference equation of order n)
characteristic polynomial of A relation to invertible matrix P
the characteristic polynomials of A and B = P^-1 A P are the same
how to show that a matrix has a particular eigen value
form A - lambda I with that eigenvalue plugged in for lambda, then compute the determinant and check whether it equals 0
eigenvalues and row operations
row operations do not preserve eigenvalues, so you cannot row reduce A before computing them
eigenvalue for an identity matrix
eigenvalue = 1 and any nonzero vector is an eigenvector bc Ax = x
probability vector
every entry in the vector is greater than or equal to 0, and the sum of all entries in the vector equals 1
how to do diagonalization example
find the characteristic polynomial and the eigenvalues. Then get the eigenspaces by forming A - lambda I for each eigenvalue and finding its null space. Put basis vectors of these null spaces into the columns of a matrix P; you can verify by checking that P^-1 A P = [[lambda1, 0], [0, lambda2]]. If a 2x2 matrix has only one eigenvalue and its eigenspace is one-dimensional, the eigenvectors do not span R^2 (the null space would be all of R^2 only if A - lambda I were the zero matrix), so the matrix can't be diagonalized
how to find eigenvector
to find the eigenvectors associated with an eigenvalue, form A - lambda I and row reduce to find its null space
eigenvalues for upper triangular
the eigenvalues are the diagonal entries ai: det(A - lambda I) is the product (a1 - lambda)(a2 - lambda)... of the diagonal entries, so setting lambda = ai puts a 0 on the diagonal and makes the determinant 0
linear difference equation w eigenvector xo
x_{k+1} = lambda^{k+1} x_0 (since x_k = lambda^k x_0 when x_0 is an eigenvector with eigenvalue lambda)
finding the eigenvalues that correspond to eigenvectors
multiply the vector by A, set the result equal to lambda times the vector, and solve for lambda
dimension of the eigenspace
row reduce A - lambda I; the dimension of the eigenspace is the dimension of its null space
S
the set of all signals, which is a vector space. To show signals are linearly independent, set c1*(first signal) + c2*(second signal) + ... = 0, evaluate each signal at k = 0, 1, 2, ..., and put these values as the columns of a matrix; if that matrix is invertible (det != 0), the only solution is the trivial one, so the signals are linearly independent
diagonalizable
similar to a diagonal matrix. If A is diagonalizable, then R^n has a basis consisting of eigenvectors of A (the converse is also true: if the eigenvectors form a basis of R^n, then A is diagonalizable). A is diagonalizable if it has n distinct eigenvalues. CAUTION: not all matrices are diagonalizable (e.g. if there are no real roots, or if there is no basis of eigenvectors)
square root of z
sqrt(r) * (cos(theta/2) + i sin(theta/2))
how to check if basis of eigenvectors
take the eigenvalues and solve for the eigenvectors. If you do not get n linearly independent eigenvectors (e.g. for a 2x2 matrix only a one-dimensional eigenspace), then they do not form a basis of R^n
Vandermonde matrix
the matrix whose top row is all ones and whose columns go 1, lambda1, lambda1^2, ... down the column, and likewise for lambda2, lambda3, etc.; its determinant is nonzero if lambda_i != lambda_j whenever i != j
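A quick numpy check (the lambda values are arbitrary, just distinct):

    import numpy as np

    lams = np.array([1.0, 2.0, 3.0])            # distinct values
    V = np.vander(lams, increasing=True).T      # columns are (1, lambda_i, lambda_i^2)
    print(V)
    print(np.linalg.det(V))                     # nonzero because the lambdas are distinct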
eigenspace
the null space of A-lamda I
T: S -> S
for a linear difference equation of order n, the left-hand side defines a linear map T: S -> S; the solution space of the homogeneous equation is just Ker T, and Ker T is n-dimensional
coordinate changes
V is a finite-dimensional space and B is an ordered basis; [v]_B is the coordinate vector of v with respect to B, where v = c1*b1 + ... + cn*bn is a unique expression. The coordinate transformation is one-to-one and onto: with U(ei) = bi, (T o U)(ei) = T(U(ei)) = T(bi) = ei, so T o U is the identity on R^n and U o T is the identity on V. To relate [v]_B and [v]_C, apply the change-of-coordinates matrix: P_{B->C} [v]_B = [v]_C, where P_{B->C} = [[b1]_C ... [bn]_C]. To go from B to C, you write each b in terms of C: put all of C's vectors as the columns of a matrix, then go column by column for B, putting each b_i in the augmented column and solving for that column
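A small numerical sketch of the B-to-C change of coordinates in R^2 (the two bases are assumed examples):

    import numpy as np

    # assumed bases of R^2, written as columns
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    C = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    # P_{B->C}: solve C x = b_i for each column b_i of B (the augmented-column step)
    P_BtoC = np.linalg.solve(C, B)
    v_B = np.array([2.0, 3.0])       # coordinates of some v relative to B
    v = B @ v_B                      # the actual vector v
    v_C = P_BtoC @ v_B               # its coordinates relative to C
    print(np.allclose(C @ v_C, v))   # True: same vector, described in either basis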