# Math 262 Intro to Linear Algebra Final Exam

Eigenvectors and Eigenvalues

Almost all vectors change direction when they are multiplied by A. Certain exceptional vectors x are in the same direction as Ax. Those are the "eigenvectors". Multiply an eigenvector by A, and the vector Ax is a number times the original x. The basic equation is Ax = λx. The number λ is an eigenvalue of A.
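A quick numeric sketch of the defining equation Ax = λx; the 2×2 matrix and eigenvector below are hand-picked illustrations, not from the exam:

```python
# Illustrative check of Ax = λx; A and x are example choices.
A = [[2, 1],
     [1, 2]]
x = [1, 1]   # an eigenvector of this A
Ax = [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]
lam = Ax[0] / x[0]          # the eigenvalue λ
print(Ax, lam)              # [3, 3] 3.0, i.e. Ax = 3x
```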

It can be shown that the set V = R³ with vector addition and scalar multiplication defined by (x1, x2, x3) ⊕ (y1, y2, y3) = (x1+y1+1, x2+y2+1, x3+y3) and a ⊗ (x1, x2, x3) = (a+ax1-1, a+ax2-1, ax3) is a vector space. a) Find the zero vector.

By theorem 4.2a (0 ⊗ u = 0 for all u ∈ V) we have 0 ⊗ (x1, x2, x3) = (0+0x1-1, 0+0x2-1, 0x3) = (-1, -1, 0). Thus, 0 = (-1, -1, 0).
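A minimal sketch checking that (-1, -1, 0) really acts as the zero vector under the exotic addition defined above (the test vector is an arbitrary choice):

```python
# Check: x ⊕ 0 = x when 0 = (-1, -1, 0), using the exotic addition
# x ⊕ y = (x1 + y1 + 1, x2 + y2 + 1, x3 + y3).
def oplus(x, y):
    return (x[0] + y[0] + 1, x[1] + y[1] + 1, x[2] + y[2])

zero = (-1, -1, 0)
x = (5, -2, 7)                  # an arbitrary test vector
print(oplus(x, zero) == x)      # True
```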

det(A+B) = det(A) + det(B) for n×n matrices A and B

False
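A concrete counterexample (A = B = I₂ is my choice) showing why the statement fails:

```python
# det(A+B) vs det(A)+det(B) for A = B = I2: det(2I) = 4, but det(A)+det(B) = 2.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 0], [0, 1]]
B = [[1, 0], [0, 1]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
print(det2(S), det2(A) + det2(B))   # 4 2
```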

For x, y ∈ Rⁿ, ||x + y|| ≥ ||x|| + ||y||

False

If A is a 3x7 matrix then the smallest possible dimension of the null space of A is 3.

False (by rank-nullity, the nullity is at least 7 - 3 = 4)

If A is an m×n matrix and B is an n×r matrix, then the matrix products AB and A^T B^T are defined

False (A^T is n×m and B^T is r×n, so A^T B^T is defined only when m = r)

If an n×n matrix A cannot be written as a product of elementary matrices, then Ax=0 has only the trivial solution

False (such an A is singular, so Ax = 0 has nontrivial solutions)

Cramer's rule

In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution.
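A minimal 2×2 sketch of Cramer's rule (the example system is mine): each unknown is a ratio of determinants, with the right-hand side replacing the corresponding column.

```python
# Cramer's rule for the example system
#   x + 2y = 5
#  3x + 4y = 11
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
b = [5, 11]
D = det2(A)   # -2, nonzero so the solution is unique
x = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / D   # b replaces column 1
y = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / D   # b replaces column 2
print(x, y)   # 1.0 2.0
```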

How do you show that a matrix is symmetric?

A matrix A is symmetric if A = A^t

Suppose B is a 3x3 matrix where det(B) = -2. Furthermore, let A be the following matrix:

A =
[ -1  2  0 ]
[  2  4 -3 ]
[  3  0  4 ]

Part a), b), c), d) and e) on the next cards

How to find a basis

Write the vectors as the columns of a matrix, row reduce, and keep the original vectors whose columns contain leading ones.

Rank and nullity theorem

rank(A) + nullity(A) = n, the number of columns of A

Let L : V → W be a linear transformation. Then ker L = range L =

ker L is a subspace of V; range L is a subspace of W

standard matrix

Apply T to each vector of the natural (standard) basis and use the images T(e1), ..., T(en) as the columns of the matrix.

Elementary matrix

An elementary matrix is a matrix that can be obtained by performing a single elementary row operation on an identity matrix.

The SUM of two n*n SYMMETRIC matrices is SYMMETRIC

true

Let x = [3 0 sqrt2 5]^T. Find a unit vector in the opposite direction as x.

||x|| = sqrt(3^2 + 0^2 + (sqrt2)^2 + 5^2) = sqrt(9+0+2+25) = sqrt36 = 6

u = -(1/||x||) x = [-3/6 0 -sqrt2/6 -5/6]^T = [-1/2 0 -sqrt2/6 -5/6]^T
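A numeric check of the computation: the norm works out to 6, and the opposite-direction vector has unit length.

```python
import math

# Check: ||x|| = 6 and u = -(1/||x||)x is a unit vector.
x = [3, 0, math.sqrt(2), 5]
norm = math.sqrt(sum(c * c for c in x))           # sqrt(9 + 0 + 2 + 25)
u = [-c / norm for c in x]
print(round(norm, 10))                            # 6.0
print(math.isclose(sum(c * c for c in u), 1.0))   # True
```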

Suppose that for two matrices A and B, A^(-1) = [2 -3; 7 4] and B^(-1) = [4 3; 2 5] (each 2x2 matrix written row by row). Compute (AB)^(-1).

(AB)^(-1) = B^(-1) × A^(-1) = [4 3; 2 5][2 -3; 7 4] = [29 0; 39 14]
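A numeric check of the inverse-of-a-product rule with the given matrices:

```python
# (AB)^-1 = B^-1 A^-1 for the inverses stated in the problem.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ainv = [[2, -3], [7, 4]]
Binv = [[4, 3], [2, 5]]
ABinv = matmul(Binv, Ainv)
print(ABinv)   # [[29, 0], [39, 14]]
```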

Theorem: Let V be a vector space with operations vector addition ⊕ and scalar multiplication ⊗. Then 0⊗u = 0 for all u∈V. Complete the proof below by supplying the appropriate properties of a vector space. To be clear, the answers to parts (a)-(d) are the property numbers from the vector space definition.

Proof:
- Let V be a vector space and u∈V. Note that 0⊗u = (0+0)⊗u = 0⊗u ⊕ 0⊗u by property (a).
- Now add the negative of 0⊗u to both sides to obtain
  0⊗u ⊕ -(0⊗u) = (0⊗u ⊕ 0⊗u) ⊕ -(0⊗u) by property (b)
  = 0⊗u ⊕ 0 by property (c)
  = 0⊗u by property (d).
- Since 0⊗u ⊕ -(0⊗u) = 0, we have that 0⊗u = 0.

(a) 6 (b) 2 (c) 4 (d) 3

Given subspaces H and K of a vector space V, the sum of H and K, written as H + K, is the set of all vectors in V that can be written as the sum of two vectors, one in H and the other in K; that is, H+K = {w | w = u+v for some u∈H and some v∈K}. For example, if w1∈H+K then w1 = u1 + v1 where u1 ∈ H and v1 ∈ K. Prove that H+K is a subspace of V.

- Clearly H+K ⊆ V because H ⊆ V and K ⊆ V. Furthermore, H+K ≠ ∅ because H and K are nonempty, as they are subspaces.
- Let w1, w2 ∈ H+K, which implies that w1 = u1 + v1 and w2 = u2 + v2 for some u1, u2 ∈ H and v1, v2 ∈ K. Note that w1 + w2 = (u1+v1) + (u2+v2) = (u1+u2) + (v1+v2).
- Since (u1+u2) ∈ H and (v1+v2) ∈ K because H and K are closed under vector addition, we have that w1 + w2 ∈ H+K.
- Let a ∈ R and note that aw1 = a(u1+v1) = au1 + av1.
- Since au1 ∈ H and av1 ∈ K because H and K are closed under scalar multiplication, we have that aw1 ∈ H+K.
Thus, H+K ⊆ V, H+K ≠ ∅, and H+K is closed under vector addition and scalar multiplication. Hence, H+K is a subspace of V.

Consider the following linear system of equations: x1 + 3x2 = 2 3x1 + hx2 = k Find all values of h and k such that the system has (a) no solution, (b) a unique solution, and (c) an infinite number of solutions. Please write your answers in the box below.

- First write this in augmented form:

  [ 1  3 | 2 ]               [ 1   3  |  2  ]
  [ 3  h | k ]  R2←R2-3R1 →  [ 0  h-9 | k-6 ]

- There is no solution when h-9 = 0 and k-6 ≠ 0 → no solution when h = 9 and k ≠ 6.
- There is an infinite number of solutions when h-9 = 0 and k-6 = 0 → infinitely many solutions when h = 9 and k = 6.
- There is a unique solution when h ≠ 9, which is the complement of parts (a) and (c).

(a) h = 9 and k ≠ 6  (b) h ≠ 9  (c) h = 9 and k = 6
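The case analysis after the row reduction (the second row becomes [0, h-9 | k-6]) can be sketched as a small decision function:

```python
# Classify the system by the reduced second row [0, h-9 | k-6].
def classify(h, k):
    if h - 9 != 0:
        return "unique"        # pivot in column 2
    return "infinite" if k - 6 == 0 else "none"   # 0 = 0 vs 0 = nonzero

print(classify(9, 7), classify(5, 6), classify(9, 6))   # none unique infinite
```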

Let r be a real number and A =[a(ij)] be an n×n matrix. Prove directly from the definitions of scalar multiplication and transpose that (rA)^t = rA^t. Be sure to show all steps.

- Let A = [a(ij)] be an n×n matrix and r a real number.
- Then for all i,j we have ent_ij((rA)^t) = ent_ji(rA) = r · ent_ji(A) = r · ent_ij(A^t) = ent_ij(rA^t).
- Thus, (rA)^t = rA^t.

Let A = [a(ij)] and B = [b(ij)] be compatible matrices, and r be a real number. Prove directly from the definitions of scalar and matrix multiplications that A(rB) = r(AB). Be sure to show all steps.

- Let A = [a(ij)] be an m×p matrix, B = [b(ij)] be a p×n matrix, and r be a real number. Then for all i,j we have
  ent_ij(A(rB)) = ∑(k=1 to p) a(ik) · ent_kj(rB) = ∑(k=1 to p) a(ik)(r·b(kj)) = r ∑(k=1 to p) a(ik)b(kj) = r · ent_ij(AB) = ent_ij(r(AB)).
- Hence, A(rB) = r(AB).

Let A and B be n×n matrices. Using matrix algebra, show that if A and B are both skew-symmetric then (AB)^t = BA. Be sure to show all steps

- Let A and B be skew-symmetric matrices, so A^t = -A and B^t = -B.
- Note that (AB)^t = B^t × A^t = (-B)(-A) = (-1)(-1)BA = BA.
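A numeric spot-check of (AB)^t = BA for two skew-symmetric matrices; the 3×3 examples are mine:

```python
# Verify (AB)^t = BA for skew-symmetric A and B.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[0, 1, -2], [-1, 0, 3], [2, -3, 0]]    # A^t = -A
B = [[0, 4, 5], [-4, 0, -6], [-5, 6, 0]]    # B^t = -B
print(transpose(matmul(A, B)) == matmul(B, A))   # True
```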

Suppose that x1 and x2 are both solutions to the linear system Ax=b, and that x3 is a solution to the corresponding homogeneous system Ax=0. Show that y=½x1 + ½x2 + ½x3 is a solution Ax=b. Be sure to show all steps

- Let x1 and x2 be solutions to Ax=b, and x3 be a solution to Ax=0.
- Thus, Ax1 = b, Ax2 = b, and Ax3 = 0.
- Observe that A(½x1+½x2+½x3) = A(½x1)+A(½x2)+A(½x3) = ½Ax1+½Ax2+½Ax3 = ½b + ½b + ½0 = b.
- Thus, y = ½x1+½x2+½x3 is a solution to Ax=b.
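A numeric check of the argument with a concrete singular system (the example data is mine): x1 and x2 both solve Ax = b, and x3 solves Ax = 0.

```python
# Verify that y = ½x1 + ½x2 + ½x3 solves Ax = b.
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[1, 1], [2, 2]]
b = [3, 6]
x1 = [1, 2]       # A x1 = b
x2 = [3, 0]       # another solution of Ax = b
x3 = [1, -1]      # A x3 = 0
y = [0.5 * (x1[i] + x2[i] + x3[i]) for i in range(2)]
print(matvec(A, y) == b)   # True
```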

Determine all values of k so that the set S = {kt² + 2k, t²+t+1, kt+1} is a linearly dependent set in P₂ (the set of all polynomials of degree at most 2)

- The set S is linearly dependent when there are nontrivial solutions to a₁(kt²+2k) + a₂(t²+t+1) + a₃(kt+1) = 0t²+0t+0, or equivalently, (ka₁+a₂)t² + (a₂+ka₃)t + (2ka₁+a₂+a₃) = 0t²+0t+0.
- This leads to the following linear system:
  ka₁ + a₂ = 0
  a₂ + ka₃ = 0
  2ka₁ + a₂ + a₃ = 0
- Nontrivial solutions exist exactly when the coefficient matrix is singular: det = k(1-k) + 2k² = k² + k = k(k+1) = 0.
- Thus S is linearly dependent when k = 0 or k = -1.
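A check of the dependence condition, using all three coefficient equations from the expansion above (t², t, and constant terms; columns correspond to a₁, a₂, a₃): the coefficient matrix is singular exactly for k = 0 and k = -1.

```python
# Determinant of the coefficient matrix of the system as a function of k.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def coef(k):
    # rows: t^2, t, constant coefficients; columns: a1, a2, a3
    return [[k, 1, 0], [0, 1, k], [2 * k, 1, 1]]

print([det3(coef(k)) for k in (0, -1, 2)])   # [0, 0, 6]
```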

Let L : R³ → R³ be defined by L(x,y,z) = (x,y,0). (Projection onto the xy-plane.)

- ker L = {(x,y,z) | (x,y,0) = (0,0,0)}: ker L consists of the (x,y,z) that satisfy x = 0 and y = 0, with z arbitrary. Thus ker L = span{(0,0,1)}.
- range L = span{(1,0,0), (0,1,0)}.
- L is not one-to-one (e.g. L(1,2,3) = L(1,2,5) = (1,2,0)).
- L is not onto (range L ≠ R³).
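The projection above made concrete, with the not-one-to-one example from the answer:

```python
# Projection onto the xy-plane: L(x, y, z) = (x, y, 0).
def L(v):
    x, y, z = v
    return (x, y, 0)

print(L((1, 2, 3)) == L((1, 2, 5)))   # True, so L is not one-to-one
print(L((0, 0, 1)))                   # (0, 0, 0): (0,0,1) is in ker L
```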

Proof Worksheet

Let S = {v1, v2, ..., vn} be a linearly independent subset of a vector space V. Suppose that w∈V and that S∪{w} = {v1, v2, ..., vn, w} is linearly dependent. Then w∈span(S).

Task #1: Write out what it means for S = {v1, v2, ..., vn} to be a linearly independent subset of a vector space V according to the definition.
Task #2: Write out what it means for S∪{w} = {v1, v2, ..., vn, w} to be linearly dependent according to the definition.
Task #3: Write out what it means for w∈span(S) according to the definition.

Linearly dependent and independent

A set of vectors {v1, v2, ..., vn} in Vm is said to be linearly independent if the vector equation x1v1 + x2v2 + ... + xnvn = 0m has only the trivial solution. Otherwise the set of vectors is said to be linearly dependent.
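A sketch of one practical test: for n vectors in Rⁿ specifically, the set is linearly independent exactly when the matrix having them as columns has nonzero determinant (the 3×3 examples are mine).

```python
# Independence test via determinant (only valid for n vectors in R^n).
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

indep = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # columns e1, e2, e3
dep = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]     # column 2 = 2 * column 1
print(det3(indep) != 0, det3(dep) != 0)     # True False
```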

Transformation

A transformation (or function or mapping), T, from Vn to Vm is a rule that assigns each vector x ∈ Vn to a vector T(x) ∈ Vm. The set Vn is called the domain of T and the set Vm is called the codomain (or target set) of T. If T is a transformation from Vn to Vm, then we write T : Vn → Vm.

Vector spaces

A vector space is a non-empty set of objects, V, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the ten axioms listed below. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d.
1. The sum of u and v, denoted by u + v, is in V.
2. The scalar multiple of u by c, denoted by cu, is in V.
3. u + v = v + u.
4. (u + v) + w = u + (v + w).
5. There is a zero vector, 0, in V such that u + 0 = u.
6. There is a vector -u in V such that u + (-u) = 0.
7. c(u + v) = cu + cv.
8. (c + d)u = cu + du.
9. c(du) = (cd)u.
10. 1u = u.

linear transformation

A transformation T : Vn → Vm is said to be a linear transformation if both of the following conditions are satisfied: 1. T(u + v) = T(u) + T(v) for all vectors u and v in Vn. 2. T(cu) = cT(u) for all vectors u in Vn and for all scalars c.

Solve the equation A = (C+X)B+C for X, given that A, B and C are n×n nonsingular matrices

A = (C+X)B + C
A - C = (C+X)B
(A-C)B^(-1) = (C+X)BB^(-1)
(A-C)B^(-1) = C + X
X = (A-C)B^(-1) - C
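A numeric spot-check of the algebra with small invertible matrices (the 2×2 examples are mine): plug X back into the original equation and confirm it holds.

```python
# Verify X = (A - C)B^-1 - C satisfies A = (C + X)B + C for sample matrices.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

A = [[5, 1], [2, 7]]
B = [[2, 1], [1, 1]]     # det(B) = 1, so B is nonsingular
C = [[1, 0], [0, 1]]
X = sub(matmul(sub(A, C), inv2(B)), C)
print(add(matmul(add(C, X), B), C) == A)   # True: (C + X)B + C = A
```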

Basis

A basis for a vector space, V, is a finite indexed set of vectors, B = {v1, v2, ..., vn}, in V that is linearly independent and also spans V (meaning that Span(B) = V).

Homogeneous and Non-homogeneous

A homogeneous linear system is one of the form Ax = 0m, where A is an m×n matrix. A non-homogeneous linear system is a linear system that is not homogeneous.

rank

If A is an m×n matrix, then the rank of A, denoted by rank(A), is defined to be the dimension of the column space of A. Thus rank(A) = dim(Col(A)).

Isomorphic

If V and W are vector spaces and if there exists an isomorphism T : V → W, then V is said to be isomorphic to W. Isomorphism is an equivalence relation on the set of all vector spaces. This is because: 1. Every vector space is isomorphic to itself. (Why?) 2. If V is isomorphic to W, then W is isomorphic to V. (Why?) 3. If V is isomorphic to W and W is isomorphic to X, then V is isomorphic to X. (Why?)

Subspace

If V is a vector space and W is a non-empty subset of V, then W is said to be a subspace of V if W is itself a vector space under the same operations of addition and multiplication by scalars that hold in V.

Linear combinations

Suppose that u1, u2, ..., un are vectors in Vm. (Note that this is a set of n vectors, each of which has dimension m.) Also suppose that c1, c2, ..., cn are scalars. Then the vector c1u1 + c2u2 + ... + cnun is called a linear combination of the vectors u1, u2, ..., un.

Linear Combination

Suppose that v1, v2, ..., vn are vectors in a vector space, V, and suppose that c1, c2, ..., cn are scalars. Then c1v1 + c2v2 + ... + cnvn is said to be a linear combination of the vectors v1, v2, ..., vn.

Linear independence

We are now going to define the notion of linear independence of a list of vectors. This concept will be extremely important in the following, especially when we introduce bases and the dimension of a vector space. Definition 3. A list of vectors (v1, . . ., vm) is called linearly independent if the only solution for a1, . . ., am ∈ F to the equation a1v1 + · · · + amvm = 0 is a1 = · · · = am = 0.

Find the values of a, b, and c so that the following matrix multiplication holds

a = 2, b = 1/2, c = 1/10

Leading entry

The leading entry in a row of a matrix is the first (from the left) non-zero entry in that row.

Nullspace (kernel)

The nullspace (also called the kernel) of a linear transformation T : V → W is the set of all vectors in V that are transformed by T into the zero vector in W. This space (which is a subspace of V) is denoted by ker(T). Thus ker(T) = { v∈V | T(v) = 0w }.

Trivial solution

The solution x = 0n is called the trivial solution of the homogeneous system Ax = 0m. A non-trivial solution of the homogeneous system Ax = 0m is a solution x ∈ Vn such that x ≠ 0n.

Any matrix that is row equivalent to the identity matrix is nonsingular

True

Every vector space has at least one subspace

True

Every n-dimensional vector space is isomorphic to Rn

True

If A is a 4x4 matrix with rank(A) = 3, then the linear system Ax = b has an infinite number of solutions for any vector b.

False (if b is not in the column space of A the system is inconsistent; when it is consistent, rank(A) = 3 < 4 does give infinitely many solutions)

If S = {V1, V2, ..., Vn} spans an n-dimensional vector space V, then S must be a basis for V.

True

If a set {V1, V2, ..., Vp} spans a finite-dimensional vector space V and T is a set of more than p vectors in V, then T is linearly dependent

True

The (i,j)th entry of the adjoint of an n×n matrix A is (-1)^(i+j)det(M(ji))

True

The reduced row echelon form of a matrix is unique

True

If a linear system of equations has two different solutions, then it must have infinitely many solutions

True

Orthogonal

Two vectors, u and v, in Vn are said to be orthogonal to each other if u^T v = v^T u = [0]. If u and v are any two vectors in Vn, then u^T v = v^T u. Thus we could simply state the above definition by saying that u and v are orthogonal to each other if u^T v = [0].

e) Find det(4I₃), where I₃ is the 3x3 identity matrix

det(4I₃) = det [4 0 0; 0 4 0; 0 0 4] = 4³ = 64

a) Compute det(A) by performing a cofactor expansion along the third row. Please be sure to show your work

det(A) = (3)(-1)^(3+1) det[2 0; 4 -3] + (0)(-1)^(3+2) det[-1 0; 2 -3] + (4)(-1)^(3+3) det[-1 2; 2 4]
= 3[(2)(-3) - (4)(0)] + 0 + 4[(-1)(4) - (2)(2)]
= 3(-6) + 4(-8) = -18 - 32 = -50
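A numeric check of the cofactor expansion along the third row of A:

```python
# Expand det(A) along row 3 (index 2): sum of sign * entry * 2x2 minor.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(M, i, j):
    return [[M[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

A = [[-1, 2, 0], [2, 4, -3], [3, 0, 4]]
detA = sum((-1) ** (2 + j) * A[2][j] * det2(minor(A, 2, j)) for j in range(3))
print(detA)   # -50
```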

d) Find det(AB^t)

det(AB^t)=det(A)det(B^t) = det(A)det(B) = (-50)(-2) = 100

b) Find det (B^-1)

det(B^-1) = 1/det(B) = 1/(-2) = -1/2

c) Find det (B^3)

det(B^3) = det(BBB) = det(B)^3 = (-2)^3 = -8