Linear Algebra


7.1 14. b. part 2 Which of the following matrix properties are needed to show that B is symmetric? Assume R, S, and T are nxn matrices and select all properties that apply.

(R^T)^T = R and (RS)^T = S^(T)R^(T)

6.1 15. a. Which of the following statements should be used to find cz*u?

(cu)*v = c(u*v). Because cz*u = 0, cz is orthogonal to u.

6.1 15. b. Which of the following statements should be used to find (z1+z2)*u?

(u+v)*w = u*w + v*w. Because (z1+z2)*u = 0, z1+z2 is orthogonal to u.

7.1 14. a. part 1 Given any x in Rn, compute Bx and show that Bx is the orthogonal projection of x onto u. Bx=(uu^T)x by definition.

(uu^T)x = u(u^(T)x) because matrix multiplication is associative.
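
A quick numerical check of this claim (a sketch only; numpy and the sample vectors u and x below are my own illustrative choices, with u normalized to unit length as the later parts of the exercise assume):

    import numpy as np

    u = np.array([3.0, 4.0]) / 5.0            # a unit vector in R^2
    x = np.array([2.0, 7.0])                  # an arbitrary vector
    B = np.outer(u, u)                        # B = u u^T
    # Bx equals the orthogonal projection of x onto u, (u . x) u
    print(np.allclose(B @ x, (u @ x) * u))    # True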

4.5 11. part 5 The coordinate vector in P3 for -2+4t^2 is...

(-2, 0, 4, 0)

4.5 11. part 6 The coordinate vector in P3 for -12t+8t^3 is...

(0, -12, 0, 8)

4.5 11. part 10 What is the dimension of the vector space P3?

4

4.5 11. part 9 Since there are ___ pivots in this matrix, the columns of this matrix ____________ a linearly independent set.

4; form

4.3 10. In the vector space of all​ real-valued functions, find a basis for the subspace spanned by {sin t, sin 2t, sin t cos t}.

A basis for this subspace is {sin t, sin 2t}

5.3 12. part 3 What is the inverse of​ A?

A^-1 = PD^(-1)P^(-1). Therefore, A^-1 is also diagonalizable.
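
A small numerical sanity check (a sketch; the matrix A below is an arbitrary invertible, diagonalizable example of my choosing, and numpy is assumed):

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 3.0]])    # invertible; eigenvalues 5 and 2
    eigvals, P = np.linalg.eig(A)             # columns of P are eigenvectors
    # A^-1 = P D^-1 P^-1, so A^-1 is diagonalized by the same P
    lhs = np.linalg.inv(A)
    rhs = P @ np.diag(1.0 / eigvals) @ np.linalg.inv(P)
    print(np.allclose(lhs, rhs))              # True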

6.1 12. c. Assume all vectors are in Rn. If the distance from u to v equals the distance from u to −​v, then u and v are orthogonal.

By the definition of orthogonal, u and v are orthogonal if and only if u*v=0. Expanding the squared distances gives ||u−v||^2 = ||u||^2 + ||v||^2 − 2u*v and ||u+v||^2 = ||u||^2 + ||v||^2 + 2u*v, so u*v=0 holds if and only if 2u*v = −2u*v, which happens if and only if the squared distance from u to v equals the squared distance from u to −v. Requiring the squared distances to be equal is the same as requiring the distances to be equal, so the given statement is true.
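
The equivalence is easy to see numerically as well (a sketch with numpy; the orthogonal pair below is an assumption chosen for illustration):

    import numpy as np

    u = np.array([1.0, 2.0])
    v = np.array([-2.0, 1.0])                 # u . v = 0
    d_to_v = np.linalg.norm(u - v)            # distance from u to v
    d_to_neg_v = np.linalg.norm(u + v)        # distance from u to -v
    print(np.isclose(d_to_v, d_to_neg_v))     # True
    print(np.isclose(u @ v, 0.0))             # True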

4.6 10. Suppose the solutions of a homogeneous system of eight linear equations in nine unknowns are all multiples of one nonzero solution. Will the system necessarily have a solution for every possible choice of constants on the right sides of the equations? Explain.

Consider the system Ax=0, where A is an 8x9 matrix. Because the solutions of the system are all multiples of one nonzero solution, dimNulA=1. By the Rank Theorem, rankA=9-dimNulA=8. Since dimColA=rankA=8 and Col A is a subspace of R8, it follows that Col A = R8. This means that each vector b in R8 must be in Col A, and so Ax=b has a solution for all b. That is, the system will have a solution for every possible choice of constants on the right sides of the equations.

2.5 8. Part 4 The proof is complete because...

Ep...E1 A = Ep...E1 (BC) = (Ep...E1 B)C = IC = C

4.2 12. c. The column space of A, Col(A) is the set of all solutions of Ax=b

False because Col(A)={b: b=Ax for some x in Rn}

4.1 10. d. R2 is a subspace of R3

False because R2 is not even a subset of R3

1.1 10. b. Is the statement "A 5×6 matrix has six rows" true or false? Explain.

False, because a 5×6 matrix has five rows and six columns.

4.5 10. a. The set R2 is a two-dimensional subspace of R3.

False, the set R2 is not even a subset of R3

4.6 5. b. If V=R2, B={b1,b2}, and C={c1,c2}, then row reduction of [ c1 c2 b1 b2] to [I P] produces a matrix P that satisfies [x]B=P[x]C for all x in V.

False. Matrix P satisfies [x]C=P[x]B for all x in V.
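
Since row reducing [ c1 c2 b1 b2 ] to [ I P ] amounts to computing P = C^-1 B (where B and C denote the matrices with columns b1, b2 and c1, c2), the relation [x]C = P[x]B can be verified numerically; the bases below are illustrative assumptions:

    import numpy as np

    Bmat = np.array([[1.0, 1.0], [0.0, 1.0]])   # columns b1, b2
    Cmat = np.array([[1.0, 2.0], [1.0, 3.0]])   # columns c1, c2
    P = np.linalg.solve(Cmat, Bmat)             # P = C^-1 B
    xB = np.array([2.0, -1.0])                  # some B-coordinate vector
    x = Bmat @ xB                               # the vector x itself
    xC = np.linalg.solve(Cmat, x)               # its C-coordinates
    print(np.allclose(xC, P @ xB))              # True: [x]_C = P [x]_B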

1.3 11. e. The weights c1, ..., cp in a linear combination c1v1 + ... + cpvp cannot all be zero.

False. Setting all the weights equal to zero results in the vector 0.

1.8 10. c. If A is an mxn matrix, then the range of the transformation x --> Ax is Rm

False. The range of the transformation is the set of all linear combinations of the columns of​ A, because each image of the transformation is of the form Ax.

1.5 9. e. The solution set of Ax=b is the set of all vectors of the form w = p + vh, where vh is any solution of the equation Ax=0

False. The solution set could be empty. The statement is only true when the equation Ax=b is consistent for some given b​, and there exists a vector p such that p is a solution.

1.9 10. e. The standard matrix of a horizontal shear transformation from R2 to R2 has the form [a 0] [0 d]

False. The standard matrix has the form [1 k] [0 1]

6.2 10. b. Assume all vectors are in Rn. If a set S = {u1, ..., up} has the property that ui*uj=0 whenever i=/=j, then S is an orthonormal set.

False. To be​ orthonormal, the vectors in S must be unit vectors as well as being orthogonal to each other.

1.3 11. a. When u and v are nonzero​ vectors, ​ Span{u​,v​} contains only the line through u and the line through v and the origin.

False. ​Span{u​,v​} includes linear combinations of both u and v.

5.4 11. part 1 Let V be a vector space with a basis B = {b1, ..., bn}, let W be the same space V with a basis C = {c1, ..., cn} and let I be the identity transformation I: V --> W. Find the matrix for I relative to B and C.

For each j, I(bj) = bj and [I(bj)]c = [bj]c

1.8 12. Let T: Rn --> Rm be a linear transformation, and let {v1, v2, v3} be a linearly dependent set in Rn. Explain why the set {T(v1), T(v2), T(v3)} is linearly dependent.

Given that the set {v1, v2, v3} is linearly dependent, there exist c1, c2, c3, not all zero, such that c1v1+c2v2+c3v3=0. It follows that c1T(v1)+c2T(v2)+c3T(v3)=0. Therefore, the set {T(v1), T(v2), T(v3)} is linearly dependent.

7.1 14. b. part 1 Symbolically define symmetry.

If B is symmetric, then B=B^T

2.2 5. Use matrix algebra to show that if A is invertible and D satisfies AD=I, then D=A^-1

Left multiply each side of the equation AD=I by A^-1 to obtain A^-1AD=A^-1 I, ID=A^-1, and D=A^-1

4.4 12. part 1 Let B = {b1, . . . ,bn} be a basis for a vector space V. Explain why the B-coordinate vectors of b1, . . . ,bn are the columns e1, . . . , en of the n × n identity matrix.

Let B = {b1, . . . , bn} be a basis for a vector space V. By the definition of a basis, b1, ..., bn are in V. By the Unique Representation Theorem, for each x in V there exists a unique set of scalars c1, ..., cn such that x=c1b1+...+cnbn. Since bj = 0b1+...+1bj+...+0bn, the unique B-coordinate vector of bj is [bj]B = ej, the jth column of the n × n identity matrix.

5.4 11. part 2 Combine these identities with the definition of the transformation matrix to find the matrix for I relative to B and C.

M = [ [b1]c [b2]c ... [bn]c ]

5.4 10. part 5 How can the condition determined in the previous step be shown?

Multiply PQ^-1 by QP^-1 and show that the identity matrix results. Therefore, B is similar to C because C can be written in the form R^(-1)BR

1.2 8. Part 2 In the given augmented​ matrix, is the rightmost column a pivot​ column?

No

1.5 12. a. A is a 3x3 matrix with three pivot positions. Does the equation Ax=0 have a nontrivial solution?

No

1.2 11. A system of linear equations with fewer equations than unknowns is sometimes called an underdetermined system. Can such a system have a unique​ solution? Explain.

No, it cannot have a unique solution. Because there are more variables than​ equations, there must be at least one free variable. If the linear system is consistent and there is at least one free​ variable, the solution set contains infinitely many solutions. If the linear system is​ inconsistent, there is no solution.

1.2 7. Part 7 Free variables are variables that can take on any value. How many free variables are in the​ system?

None, because x1, x2, x3, and x4 are all fixed values

7.1 13. part 3 Rewrite R using this identity

P^(T)AP

7.1 13. part 2 P^-1 equals...

P^T because P is orthogonal

6.4 8. part 3 Conversely, suppose y belongs to Col Q. Then y=Qx for some x. Since R is invertible, what does the equation A=QR imply?

Q=AR^-1

7.1 13. part 6 R is symmetric because

R^T = P^(-1)AP = R

7.1 13. part 4 Evaluate the transpose and simplify

R^T = P^(T)A^(T)P

4.2 13. part 1 Let V and W be vector​ spaces, and let T​ : V→W be a linear transformation. Given a subspace U of​ V, let​ T(U) denote the set of all images of the form ​T(x​), where x is in U. Show that​ T(U) is a subspace of W. To show that ​T(U) is a subspace of W​, first show that the zero vector of W is in ​T(U).

Since U is a subspace of​ V, the zero vector of​ V, 0V​, is in U. Since T is​ linear, ​T(0V​)=0W​, where 0W is the zero vector of W. So 0W is in​ T(U).

5.2 9. Use a property of determinants to show that A and A^T have the same characteristic polynomial

Start with det(A^(T) - λI) = det(A^(T) - λI^(T)) = det((A - λI)^(T)), then use the formula det A^T = det A.
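
A numerical spot check of the same fact (a sketch; np.poly returns the characteristic-polynomial coefficients of a square array, and the matrix below is an arbitrary example):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [3.0, -1.0, 4.0],
                  [0.0, 2.0, 5.0]])
    # A and A^T have the same characteristic polynomial
    print(np.allclose(np.poly(A), np.poly(A.T)))   # True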

2.2 6. Suppose A is nxn and the equation Ax=0 has only the trivial solution. Explain why A has n pivot columns and A is row equivalent to In.

Suppose A is nxn and the equation Ax=0 has only the trivial solution. Then there are no free variables in this equation, thus A has n pivot columns. Since A is square and the n pivot positions must be in different rows, the pivots in an echelon form of A must be on the main diagonal. Hence A is row equivalent to the nxn identity matrix, In.

4.3 12. part 3 Based on the previous statements, this follows...

T(c1v1+...+cpvp)=0

2.1 15. Prove the theorem (AB)^T = B^T A^T

The (i,j) entry of (AB)^T is the (j,i) entry of AB, which is aj1b1i+...+ajnbni. The entries in row i of B^T are b1i, ..., bni, and the entries in column j of A^T are aj1, ..., ajn. Hence the (i,j) entry in B^T A^T is aj1b1i+...+ajnbni. Therefore, (AB)^T = B^T A^T.
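
The identity is also easy to spot-check numerically (a sketch; the random shapes below are arbitrary assumptions, chosen so that AB is defined):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((4, 2))
    print(np.allclose((A @ B).T, B.T @ A.T))   # True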

1.2 6. a. In some​ cases, a matrix may be row reduced to more than one matrix in reduced echelon​ form, using different sequences of row operations.

The statement is false. Each matrix is row equivalent to one and only one reduced echelon matrix.

4.1 10. c. A vector space is also a subspace of itself.

True because the axioms for a vector space include all the conditions for being a subspace.

7.1 12. a. If B=PDP^T where P^T=P^-1 and D is a diagonal matrix, then B is a symmetric matrix.

True. B^T = (PDP^T)^T = (P^T)^(T)D^(T)P^(T) = PDP^(T) = B

6.1 12. a. Assume all vectors are in Rn. v*v = ||v||^2

True. By the definition of the length of a vector v, ||v|| = sqrt (v*v)

6.1 13. c. Assume all vectors are in Rn. If x is orthogonal to every vector in a subspace W, then x is in W^perp

True. If x is orthogonal to every vector in W, then x is said to be orthogonal to W. The set of all vectors x that are orthogonal to W is denoted W^perp

2.1 11. e. The transpose of a sum of matrices equals the sum of their transposes.

True. This is a generalized statement that follows from the theorem (A+B)^T = A^T + B^T

6.1 12. b. Assume all vectors are in Rn. For any scalar c, u*(cv) = c(u*v)

True. This is a valid property of the inner product.

1.5 12. b. A is a 3x3 matrix with three pivot positions. Does the equation Ax=b have at least one solution for every possible b?

Yes

5.3 12. part 2 What does it mean if A is​ invertible?

Zero is not an eigenvalue of​ A, so the diagonal entries in D are not​ zero, so D is invertible.

6.1 13. e. Assume all vectors are in Rn. For an mxn matrix A, vectors in the null space of A are orthogonal to vectors in the row space of A.

True. By the theorem of orthogonal complements, (Row A)^perp = Nul A. It follows, by the definition of orthogonal complements, that vectors in the null space of A are orthogonal to vectors in the row space of A.

7.1 14. a. part 3 Therefore, Bx=(u*x)u is the orthogonal projection of x onto u because u is ________________

a unit vector

3.2 17. find a formula for det(rA) when A is an nxn matrix

det(rA) = r^(n) * detA
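
A quick check of the formula (a sketch; the size n and scalar r below are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 4, 2.5
    A = rng.standard_normal((n, n))
    print(np.isclose(np.linalg.det(r * A), r**n * np.linalg.det(A)))  # True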

4.5 11. part 11 Since the dimension of P3 is _____ the number of elements in the ________ set formed by the given Hermite polynomials, the given set of Hermite polynomials ______ a basis for P3.

equal to; linearly independent; forms

3.2 16. part 4 Show that if A is invertible, then det A^-1 = 1/det A Therefore, why is det A^-1 = 1/det A ?

Since (det A)(det A^-1) = det I = 1, basic algebra gives det A^-1 = 1/det A.
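
Numerically (a sketch; the invertible matrix below is an illustrative assumption):

    import numpy as np

    A = np.array([[2.0, 1.0], [4.0, 3.0]])     # det A = 2
    print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                     1.0 / np.linalg.det(A)))  # True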

4.6 11. Consider an mxn matrix A. Which of the subspaces Row A, Col A, Nul A, Row A^T, Col A^T, Nul A^T are in Rm and which are in Rn? How many distinct subspaces are in this list?

Subspaces in Rm: Col A, Row A^T, Nul A^T. Subspaces in Rn: Row A, Col A^T, Nul A. There are 4 distinct subspaces in the given list.

7.1 14. a. part 2 Multiplying a column vector of Rn on the right by __________ is the same as multiplying the column vector by the scalar u*x.

u^(T)x

7.1 13. part 7 Why does this imply that R is diagonal? Since R is ______________ all entries below the diagonal are _________. Since R is also ____________ each entry above the diagonal is equal to the corresponding entry below the diagonal. Thus, all _______________ entries in R must equal ____________. This is the definition of a diagonal matrix.

upper triangular; 0; symmetric; non-diagonal; 0

4.4 13. part 2 Add w=k1v1+...+k8v8 and 0=c1v1+...+c8v8

w+0 = (k1+c1)v1+...+(k8+c8)v8

1.2 7. Part 6 Write the system of equations corresponding to the augmented matrix found above to determine the number of free variables.

x1 = a, x2 = b, x3 = c, x4 = d

6.4 8. part 4 So...

y=AR^(-1)x=A(R^(-1)x) which shows that y is in Col A.

6.4 8. part 1 Suppose A=QR where R is an invertible matrix. Show that A and Q have the same column space. [Hint: given y in Col A, show that y=Qx for some x. Also, given y in Col Q, show that y=Ax for some x.] If y is in Col A, then which of these is true?

y=Ax for some x

6.7 8. part 5 Use these results to simplify the expression on the right side of the​ Cauchy-Schwarz inequality.

||u|| ||v|| = a+b

6.7 8. part 4 Evaluate ||v|| for v = (sqrt(b), sqrt(a)), where b>=0

||v|| = sqrt(b+a)

4.5 10. c. A vector space is​ infinite-dimensional if it is spanned by an infinite set.

​False, because a basis for the vector space may have only finitely many​ elements, which would make the vector space​ finite-dimensional.

1.1 11. a. Is the statement​ "Two matrices are row equivalent if they have the same number of​ rows" true or​ false? Explain.

​False, because if two matrices are row equivalent it means that there exists a sequence of row operations that transforms one matrix to the other.

3.1 14. b. The determinant of a triangular matrix is the sum of the entries on the main diagonal.

​False, because the determinant of a triangular matrix is the product of the entries along the main diagonal.

4.5 10. b. The number of variables in the equation Ax=0 equals the dimension of Nul A.

​False, because the number of free variables is equal to the dimension of Nul A.

4.5 10. d. If dim V=n and if S spans​ V, then S is a basis of V.

​False, in order for S to be a​ basis, it must also have n elements.

4.5 10. e. The only three-dimensional subspace of R3 is R3 itself.

​True, because any three linearly independent vectors in R3 span all of R3, so there is no three-dimensional subspace of R3 that is not R3 itself.

4.2 12. e. The range of a linear transformation is a vector space.

​True, the range of a linear transformation​ T, from a vector space V to a vector space​ W, is a subspace of W.

1.8 11. b. Every matrix transformation is a linear transformation.

​True. Every matrix transformation has the properties ​T(u+v​)=​T(u​)+​T(v​) and ​T(cu​)=​cT(u​) for all u and v​, in the domain of T and all scalars c.

1.2 7. Part 5 Use the augmented matrix to determine if the linear system is consistent. Is the linear system represented by the augmented matrix​ consistent?

​Yes, because the rightmost column of the augmented matrix is not a pivot column.

4.5 11. part 4 The coordinate vector in P3 for 2t is...

(0, 2, 0, 0)

3.2 16. part 3 Show that if A is invertible, then det A^-1 = 1/det A To what scalar must this new determinant be equal?

1

7.1 14. c. u is an eigenvector of ____________ because __________________

1; Bu = (uu^T)u = u(u^(T)u) = u(1) = u

4.5 11. part 3 Express each of the Hermite polynomials as coordinate vectors relative to the standard polynomial basis of P3. The coordinate vector in P3 for 1 is...

(1, 0, 0, 0)

4.5 11. part 8 Form a matrix using the four coordinate vectors in P3 for the Hermite polynomials. In order from left to right, use the vectors for 1, 2t, -2+4t^2, and -12t+8t^3.

[ 1 0 -2 0 ] [ 0 2 0 -12 ] [ 0 0 4 0 ] [ 0 0 0 8 ]
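
The pivot count can be confirmed numerically (a sketch with numpy; since the matrix is triangular with nonzero diagonal entries, it has full rank):

    import numpy as np

    # Columns: coordinate vectors of 1, 2t, -2+4t^2, -12t+8t^3
    H = np.array([[1.0, 0.0, -2.0,   0.0],
                  [0.0, 2.0,  0.0, -12.0],
                  [0.0, 0.0,  4.0,   0.0],
                  [0.0, 0.0,  0.0,   8.0]])
    print(np.linalg.matrix_rank(H))            # 4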

6.7 8. part 6 To compare the geometric mean sqrt(ab) with the arithmetic mean (a+b)/(2), substitute the expressions found for |<u,v>| and ||u|| ||v|| into the Cauchy-Schwarz inequality and divide both sides of the inequality by __________. The conclusion is that the geometric mean sqrt(ab) is _________________________ the arithmetic mean (a+b)/(2).

2; less than or equal to
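
The whole argument can be replayed numerically (a sketch; the values of a and b below are arbitrary nonnegative assumptions):

    import numpy as np

    a, b = 9.0, 4.0
    u = np.array([np.sqrt(a), np.sqrt(b)])
    v = np.array([np.sqrt(b), np.sqrt(a)])
    # |<u,v>| = 2 sqrt(ab) and ||u|| ||v|| = a + b, so sqrt(ab) <= (a+b)/2
    print(abs(u @ v), np.linalg.norm(u) * np.linalg.norm(v))  # 12.0 13.0
    print(np.sqrt(a * b) <= (a + b) / 2)                      # True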

2.5 8. Part 3 Applying the same sequence of row operations to __________ amounts to left-multiplying _________ by the product ______________.

A; A; Ep...E1

5.4 10. part 1 Verify the following statement. The matrices are square. If B is similar to A and C is similar to A, then B is similar to C. How is the phrase "B is similar to A" expressed symbolically?

A = P^(-1)BP

5.4 10. part 2 How is the phrase​ "C is similar to​ A" expressed​ symbolically?

A = Q^(-1)CQ

7.1 13. part 5 A^T equals

A because A is symmetric

1.9 5. Let T: R^n -> R^m be a linear transformation. Then there exits a unique matrix A such that T(x) = Ax for all x in R^n. In fact...

A is the mxn matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in R^n: A = [T(e1) . . . T(en)]
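
For instance, the standard matrix of a sample transformation can be assembled column by column from the images T(ej) (a sketch; the shear map below, with k = 3, is my own example, not part of the exercise):

    import numpy as np

    def T(x):
        # a horizontal shear on R^2 with k = 3
        return np.array([x[0] + 3.0 * x[1], x[1]])

    # Columns of A are T(e1), T(e2)
    A = np.column_stack([T(e) for e in np.eye(2)])
    x = np.array([2.0, 5.0])
    print(A)                          # the shear matrix [[1, 3], [0, 1]]
    print(np.allclose(T(x), A @ x))   # True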

1.2 8. Part 1 Suppose a system of linear equations has a 3×5 augmented matrix whose fifth column is not a pivot column. Is the system​ consistent? Why or why​ not? To determine if the linear system is​ consistent, use the portion of the Existence and Uniqueness​ Theorem, shown below.

A linear system is consistent if and only if the rightmost column of the augmented matrix "is not" a pivot column. That​ is, if and only if an echelon form of the augmented matrix has "no row" of the form​ [0 ... 0​ b] with b nonzero.

3.2 16. part 1 Show that if A is invertible, then det A^-1 = 1/det A What theorems should be used to examine the quantity det A^-1?

A square matrix A is invertible if and only if det A =/= 0. If A and B are nxn matrices, then det AB = (det A)(det B).

4.4 13. part 3 What conclusions can be drawn from the statements above?

At least one of the weights in w+0=w=(k1+c1)v1+...+(k8+c8)v8 differs from the corresponding weight in w=k1v1+...+k8v8. Thus, each w in V can be expressed in more than one way as a linear combination of v1,...,v8.

2.5 8. Part 2 Given the relevant pieces of information from the previous​ step, there exist elementary matrices E1, ..., Ep corresponding to row operations that reduce ______________ to I, in the sense that _______________ __________________= I

B; Ep...E1 B = I

7.1 14. b. part 5 Apply the necessary properties to show that B^2 = B.

B^2 = (uu^T)(uu^T) = u(u^(T)u)u^T = uu^T = B

7.1 14. b. part 3 Apply the necessary properties to show that B is symmetric.

B^T = (uu^T)^T = (u^T)^(T)u^(T) = uu^T = B
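
Both properties of B = uu^T (idempotence from part 5 and the symmetry just shown) can be sanity-checked numerically (a sketch; the unit vector below is an illustrative assumption):

    import numpy as np

    u = np.array([1.0, 2.0, 2.0]) / 3.0    # unit vector, so u^T u = 1
    B = np.outer(u, u)                     # B = u u^T
    print(np.allclose(B, B.T))             # True: B is symmetric
    print(np.allclose(B @ B, B))           # True: B^2 = B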

6.1 15. b. What can you conclude about z1+z2? Why?

Because z1+z2 is orthogonal to u, it is in W^perp

5.4 10. part 3 Set these two equations equal to each other and solve for C.

C = QP^(-1)BPQ^(-1)

6.1 15. a. How can two vectors be shown to be orthogonal?

Determine if the dot product of the two vectors is zero.

4.3 9. e. If B is an echelon form of a matrix​ A, then the pivot columns of B form a basis for ColA.

False because the columns of an echelon form B of A are not necessarily in the column space of A.

4.3 9. d. The standard method for producing a spanning set for Nul A sometimes fails to produce a basis for Nul A.

False because the method always produces an independent set.

4.3 9. a. A linearly independent set in a subspace H is a basis for H.

False because the subspace spanned by the set must also coincide with H

1.1 10. c. Is the statement "The solution set of a linear system involving variables x1, ..., xn is a list of numbers (s1, ..., sn) that makes each equation in the system a true statement when the values s1, ..., sn are substituted for x1, ..., xn, respectively" true or false? Explain.

False, because the description applies to a single solution. The solution set consists of all possible solutions.

3.1 14. a. The cofactor expansion of det A down a column is a negative of the cofactor expansion along a row

False, because the determinant of A can be computed by cofactor expansion across any row or down any column. Since the determinant of A is well​ defined, both of these cofactor expansions will be equal

1.1 11. c. Is the statement​ "Two equivalent linear systems can have different solution​ sets" true or​ false? Explain.

False, because two systems are called equivalent if they have the same solution set.

6.3 9. e. All vectors and subspaces are in Rn. If an nxp matrix U has orthonormal columns, then UU^T x=x for all x in Rn.

False. Let W be the subspace spanned by the columns of U. Then UU^(T)x = projw x. If p=n, then W will be all of Rn, so the statement is true for all x in Rn. If p < n, then W will not be all of Rn, so the statement is not true for all x in Rn.

5.2 10. b. det A^T = (-1)det A

False. det A^T = det A for any nxn matrix A

7.1 12. b. An orthogonal matrix is orthogonally diagonalizable.

False. A matrix is orthogonally diagonalizable if and only if it is symmetric. An orthogonal matrix is not necessarily symmetric.

6.3 9. d. All vectors and subspaces are in Rn. The best approximation to y by elements of a subspace of W is given by the vector y-projw y.

False. The Best Approximation Theorem says that the best approximation to y is projw y.

5.2 10. a. If A is 3x3, with columns a1, a2, a3, then det A equals the volume of the parallelepiped determined by a1, a2, a3

False. |det A| equals the volume of the parallelepiped determined by a1, a2, a3. It is possible that |det A| =/= det A.

5.3 11. b. Assume​ A, B,​ P, and D are n×n matrices. If A is diagonalizable, then A has n distinct eigenvalues.

False. A diagonalizable matrix can have fewer than n eigenvalues and still have n linearly independent eigenvectors.

5.3 11. a. Assume​ A, B,​ P, and D are n×n matrices. A matrix A is diagonalizable if A has n eigenvectors.

False. A diagonalizable matrix must have n linearly independent eigenvectors.

1.5 10. a. A homogeneous system of equations can be inconsistent.

False. A homogeneous equation can be written in the form Ax=0, where A is an mxn matrix and 0 is the zero vector in Rm. Such a system Ax=0 always has at least one solution, namely x=0. Thus, a homogeneous system of equations cannot be inconsistent.

1.9 10. d. A mapping T: Rn --> Rm is one to one if each vector in Rn maps onto a unique vector in Rm.

False. A mapping T is said to be one to one if each b in Rm is the image of at most one x in Rn.

1.8 10. d. Every linear transformation is a matrix transformation.

False. A matrix transformation is a special linear transformation of the form x --> Ax where A is a matrix

1.5 10. b. If x is a nontrivial solution of Ax=0​, then every entry in x is nonzero.

False. A nontrivial solution of Ax=0 is a nonzero vector x that satisfies Ax=0. ​Thus, a nontrivial solution x can have some zero entries so long as not all of its entries are zero.

1.3 11. d. The vector v results when a vector u−v is added to the vector v.

False. Adding u−v to v results in u.

5.1 11. e. A is an nxn matrix. To find the eigenvalues of​ A, reduce A to echelon form.

False. An echelon form of a matrix A usually does not display the eigenvalues of A.

5.3 11. d. Assume​ A, B,​ P, and D are n×n matrices. If A is invertible, then A is diagonalizable.

False. An invertible matrix may have fewer than n linearly independent​ eigenvectors, making it not diagonalizable.

4.4 10. b. The correspondence [x]B --> x is called the coordinate mapping.

False. By the definition, the correspondence x --> [x]B is called the coordinate mapping.

6.1 12. d. Assume all vectors are in Rn. For a square matrix A, vectors in Col A are orthogonal to vectors in Nul A.

False. By the theorem of orthogonal​ complements, it is known that vectors in Col A are orthogonal to vectors in Nul A^T. Using the definition of orthogonal​ complements, vectors in Col A are orthogonal to vectors in Nul A if and only if the rows and columns of A are the​ same, which is not necessarily true.

2.4 8. b. If A = [ A11 A12 ] [ A21 A22 ] and B = [B1] [B2] then the partitions of A and B are conformable for block multiplication.

False. For a product AB to​ exist, the column partition of A must match the row partition of B. From the given​ information, it is not known whether these partitions match.

1.4 12. d. If the coefficient matrix A has a pivot position in every​ row, then the equation Ax=b is inconsistent.

False. If A has a pivot position in every​ row, the echelon form of the augmented matrix could not have a row such as​ [0 0 0​ 1], and Ax=b must be consistent.

1.4 11. c. The equation Ax = b is consistent if the augmented matrix [ A b ] has a pivot position in every row

False. If the augmented matrix [ A b ] has a pivot position in every row, the equation Ax=b may or may not be consistent. One pivot position may be in the column representing b.

1.4 12. f. If A is an m×n matrix whose columns do not span Rm​, then the equation Ax=b is consistent for every b in Rm.

False. If the columns of A do not span Rm, then A does not have a pivot position in every row, and row reducing [ A b ] could result in a row of the form [ 0 0 ... 0 c ], where c is a nonzero real number.

5.1 12. d. A is an nxn matrix. The eigenvalues of a matrix are on its main diagonal.

False. If the matrix is a triangular​ matrix, the values on the main diagonal are eigenvalues.​ Otherwise, the main diagonal may or may not contain eigenvalues.

4.6 9. b. Row operations preserve the linear dependence relations among the rows of A.

False. Row operations may change the linear dependence relations among the rows of A.

5.2 10. d. A row replacement operation on A does not change the eigenvalues

False. Row operations on a matrix usually change its eigenvalues.

6.1 13. b. Assume all vectors are in Rn. For any scalar c, ||cv|| = c||v||

False. ||cv|| = |c| ||v||, which is always nonnegative. When c is negative, c||v|| is negative, so in that case ||cv|| =/= c||v||.

1.7 14. If v1, v2, v3 are in R3 and v3 is not a linear combination of v1, v2, then {v1, v2, v3} is linearly independent.

False. Take v1 and v2 to be multiples of one vector and take v3 to be not a multiple of that vector. For example, v1 = (1, 1, 1), v2 = (2, 2, 2), and v3 = (1, 0, 0). Since at least one of the vectors is a linear combination of the other two, the three vectors are linearly dependent.

1.9 10. a. If A is a 4x3 matrix, then the transformation x --> Ax maps R3 onto R4.

False. The columns of A do not span R4.

4.6 9. a. If B is any echelon form of​ A, then the pivot columns of B form a basis for the column space of A.

False. The columns of an echelon form B of A are often not in the column space of A.

5.1 12. a. A is an nxn matrix. If Ax=λx for some scalar λ​, then x is an eigenvector of A.

False. The condition that Ax=λx for some scalar λ is not sufficient to determine if x is an eigenvector of A. The vector x must be nonzero.

5.1 11. a. A is an nxn matrix. If Ax=λx for some vector x, then λ is an eigenvalue of A.

False. The condition that Ax=λx for some vector x is not sufficient to determine if λ is an eigenvalue. The equation Ax=λx must have a nontrivial solution.

1.8 10. b. If A is a 3x5 matrix and T is a transformation defined by T(x)=Ax, then the domain of T is R3.

False. The domain is actually R5, because in the product Ax, if A is an mxn matrix then x must be a vector in Rn.

1.5 9. d. The equation x=p+tv describes a line through v parallel to p.

False. The effect of adding p to v is to move v in a direction parallel to the line through p and 0. So the equation x=p+tv describes a line through p parallel to v.

1.5 9. b. The equation Ax=0 gives an explicit description of its solution set

False. The equation Ax=0 gives an implicit description of its solution set. Solving the equation amounts to finding an explicit description of its solution set.

1.4 11. a. The equation Ax=b is referred to as a vector equation.

False. The equation Ax=b is referred to as a matrix equation because A is a matrix.

1.5 9. c. The homogeneous equation Ax=0 has the trivial solution if and only if the equation has at least one free variable.

False. The homogeneous equation Ax=0 always has the trivial solution.

2.1 11. d. For appropriately sized matrices A, B, and C, (ABC)^T = C^T A^T B^T.

False. The correct left-to-right order is (ABC)^T = C^T B^T A^T. The order cannot be changed in general.

2.1 11. b. If A and B are 3x3 matrices and B=[b1 b2 b3], then AB= [Ab1 + Ab2 + Ab3]

False. The matrix [ Ab1 + Ab2 + Ab3 ] is a 3x1 matrix, and AB must be a 3x3 matrix. The plus signs should be spaces between the 3 columns, giving AB = [ Ab1 Ab2 Ab3 ].

1.8 11. c. If T: Rn --> Rm is a linear transformation and if c is in Rm, then a uniqueness question is "Is c in the range of T?"

False. The question "is c in the range of T?" is the same as "does there exist an x whose image is c?" This is an existence question.

1.7 10. d. If a set in Rn is linearly dependent, then the set contains more than n vectors. True or false?

False. There exists a set in Rn that is linearly dependent and contains n vectors. One example is a set in R2 consisting of two vectors where one of the vectors is a scalar multiple of the other.

5.1 12. b. A is an nxn matrix. If v1 and v2 are linearly independent​ eigenvectors, then they correspond to distinct eigenvalues.

False. There may be linearly independent eigenvectors that both correspond to the same eigenvalue.

2.3 10. d. If the equation Ax=b has at least one solution for each b in Rn, then the transformation x-> Ax is not one-to-one

False; by the invertible matrix theorem, if Ax=b has at least one solution for each b in Rn, then the transformation x --> Ax is one-to-one.

4.1 10. e. A subset H of a vector space V is a subspace of V if the following conditions are satisfied: (i) the zero vector of V is in H, (ii) u, v, and u+v are in H, and (iii) c is a scalar and cu is in H.

False; parts (ii) and (iii) should state that u and v represent all possible elements of H.

2.3 10. e. If there is a b in Rn such that the equation Ax=b is consistent, then the solution is unique

False; the fact that there is a b in Rn so that the equation Ax=b is consistent does not imply that Ax=b has at least one solution for each b in Rn. Thus, there could be more than one solution.

2.3 10. b. If the linear transformation x-->Ax maps Rn into Rn then the row reduced echelon form of A is I

False; the invertible matrix theorem states that the linear transformation x --> Ax must map Rn onto Rn, not into Rn for A to be invertible. Therefore, the row reduced echelon form of A may not be I.

6.7 8. part 1 Given a>=0 and b>=0, let u = (sqrt(a), sqrt(b)) and v = (sqrt(b), sqrt(a)). Use the Cauchy-Schwarz inequality to compare the geometric mean sqrt(ab) with the arithmetic mean (a+b)/(2).

For all u and v in an inner product space V, the Cauchy-Schwarz inequality states that | <u, v> | <= ||u|| ||v||

6.1 15. c. What properties must W^perp have for it to be a subspace of Rn?

For each u in W^perp and each scalar c, the vector cu is in W^perp. For each u and v in W^perp, the sum u+v is in W^perp. The zero vector is in W^perp.

4.5 11. part 7 How can these vectors be shown to be linearly​ independent?

Form a matrix using the vectors as columns and determine the number of pivots in the matrix.

5.3 12. part 1 Show that if A is both diagonalizable and​ invertible, then so is A^−1. What does it mean if A is​ diagonalizable?

If A is​ diagonalizable, then A=PDP^−1 for some invertible P and diagonal D.

5.1 13. Show that if A^2 is the zero matrix, then the only eigenvalue of A is 0.

If Ax=λx for some x =/= 0, then 0 = A^(2)x = A(Ax) = A(λx) = λAx = λ^(2)x. Since x is nonzero, λ must be zero. Thus, the only eigenvalue of A is 0.
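
A concrete instance (a sketch; the 2×2 nilpotent matrix below is a standard example of my choosing, and numpy is assumed):

    import numpy as np

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # A^2 is the zero matrix
    print(np.allclose(A @ A, 0.0))           # True
    print(np.linalg.eigvals(A))              # [0. 0.]: the only eigenvalue is 0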

2.3 12. Explain why the columns of A^2 span Rn whenever the columns of an nxn matrix A are linearly independent

If the columns of A are linearly independent and A is square, then A is invertible, by the IMT. Thus, A^2, which is the product of invertible matrices, is also invertible. So, by the IMT, the columns of A^2 span Rn.

2.2 7. Suppose A is nxn and the equation Ax=b has a solution for each b in Rn. Explain why A must be invertible. [Hint: Is A row equivalent to In?]

If the equation Ax=b has a solution for each b in Rn, then A has a pivot position in each row. Since A is square, the pivots must be on the diagonal of A. It follows that A is row equivalent to In. Therefore, A is invertible.

1.3 13. a. Let A = [ 6 4 10 ] [ -3 6 3 ] [ 3 -2 1 ] and b = [ 14 ] [ 1 ] [ 5 ]. Denote the columns of A by a1, a2, a3, and let W = Span{a1, a2, a3}. Is b in W?

Is b in W? No, because the​ row-reduced form of the augmented matrix has a pivot in the rightmost column.

1.3 12. a. Let A = [ 1 0 -7 ] [ 0 3 -2 ] [ -3 6 3 ] and b = [ 2 ] [ 1 ] [ -2 ] Denote the columns of A by a1, a2, a3, and let W = Span{a1, a2, a3} Is b in ​{a1​,a2​,a3​}? How many vectors are in ​{a1​,a2​,a3​}?

Is b in ​{a1​,a2​,a3​}? No How many vectors are in ​{a1​,a2​,a3​}? 3

4.5 11. part 1 The first four Hermite polynomials are 1, 2t, -2+4t^2, and -12t+8t^3. These polynomials arise naturally in the study of certain important differential equations in mathematical physics. Show that the first four Hermite polynomials form a basis of P3. To show that the first four Hermite polynomials form a basis of P3, what theorem should be used?

Let V be a p-dimensional vector space, p >= 1. Any linearly independent set of exactly p elements in V is automatically a basis for V.

2.3 13. Let A and B be nxn matrices. Show that if AB is invertible so is B.

Let W be the inverse of AB. Then WAB=I and (WA)B=I. Therefore, matrix B is invertible by part j of the IMT

4.4 13. part 1 Suppose {v1, ..., v8} is a linearly dependent spanning set for a vector space V. Show that each w in V can be expressed in more than one way as a linear combination of v1,...,v8. [Hint: Let w=k1v1+...+k8v8 be an arbitrary vector in V. Use the linear dependence of {v1,...,v8} to produce another representation of w as a linear combination of v1,...,v8.]

Let w=k1v1+...+k8v8 be an arbitrary vector in V. Since the set {v1,...,v8} is linearly dependent, there exist scalars c1,...,c8, not all zero, such that 0=c1v1+...+c8v8

1.2 8. Part 3 In the echelon form of the augmented​ matrix, is there a row of the form​ [0 0 0 0​ b] with b​ nonzero?

No. Therefore, by the Existence and Uniqueness Theorem, the linear system is "consistent".

1.4 14. b. Could a set of n vectors in Rm span all of Rm when n is less than​ m? Explain.

No. The matrix A whose columns are the n vectors has m rows. To have a pivot in each row, A would have to have at least m columns (one for each pivot), which is impossible when n is less than m.

1.4 14. a. Could a set of three vectors in R4 span all of R4​? Explain.

No. The matrix A whose columns are the three vectors has four rows. To have a pivot in each row, A would have to have at least four columns (one for each pivot), which is impossible with only three columns.

2.1 14. Let A be an m x n matrix, and let B and C have sizes for which the indicated sums and products are defined. Prove that (B+C)A=BA+CA.

Prove that (B+C)A=BA+CA. The (i,j)-entry of (B+C)A equals the (i,j)-entry of BA+CA, because sum_{k=1}^{n} (bik+cik)akj is equal to sum_{k=1}^{n} bik akj plus sum_{k=1}^{n} cik akj.

2.1 14. Let A be an m x n matrix, and let B and C have sizes for which the indicated sums and products are defined. Prove that A(B+C)=AB+AC and that (B+C)A=BA+CA. Use the row-column rule. The (i,j)-entry in A(B+C) can be written in either of the two ways below: ai1(b1j+c1j)+...+ain(bnj+cnj), or sum_{k=1}^{n} aik(bkj+ckj).

Prove that A(B+C)=AB+AC. The (i,j)-entry of A(B+C) equals the (i,j)-entry of AB+AC, because sum_{k=1}^{n} aik(bkj+ckj) is equal to sum_{k=1}^{n} aik bkj plus sum_{k=1}^{n} aik ckj.

5.4 10. part 4 From this​ equation, what must be true for B to be similar to​ C?

QP^-1 must be the inverse of PQ^-1

7.1 13. part 1 Suppose A=PRP^-1, where P is orthogonal and R is upper triangular. Show that if A is symmetric, then R is symmetric and hence is actually a diagonal matrix. Solve A=PRP^-1 for R.

R = P^(-1)AP

7.1 14. b. part 4 Which of the following matrix properties are needed to show that B^2 = B? Assume that R, S, and T are nxn matrices and select all properties that apply.

R(ST) = (RS)T and R^2 = RR

6.1 15. c. Which of the following statements finishes the proof that W^perp is a subspace of Rn?

Since 0 is orthogonal to every vector, it follows that 0 is in W^perp, and so W^perp is a subspace of Rn.

2.5 9. Suppose A=​QR, where Q and R are n×n​, R is invertible and upper​ triangular, and Q has the property that Q^(T)Q=I. Show that for each b in Rn​, the equation Ax=b has a unique solution. What computations with Q and R will produce the​ solution?

Since Q is square and Q^(T)Q=I, Q is invertible by the IMT, with inverse Q^-1 = Q^T. Thus, A, being a product of nxn invertible matrices, is invertible. By Theorem 5, the equation Ax=b then has a unique solution for each b. The equation Ax=b is equivalent to QRx=b, which is equivalent to Q^(T)QRx=Q^(T)b, which reduces to Rx=Q^(T)b, and finally x=R^(-1)Q^(T)b. A good algorithm for finding x is to compute Q^(T)b and then row reduce the matrix [R Q^(T)b]. The reduction is fast in this case because R is a triangular matrix.
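
The computation x = R^(-1)Q^(T)b can be imitated numerically (a sketch; a random square A is almost surely invertible, and np.linalg.solve stands in for the fast back-substitution on [R Q^(T)b]):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))
    b = rng.standard_normal(4)
    Q, R = np.linalg.qr(A)            # Q^T Q = I, R upper triangular
    x = np.linalg.solve(R, Q.T @ b)   # solve R x = Q^T b
    print(np.allclose(A @ x, b))      # True: x solves Ax = b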

4.3 12. part 4 The conclusion

Since T is linear, T(0)=0. Since T is one-to-one, T(c1v1+...+cpvp)=0=T(0) implies that c1v1+...+cpvp=0, where c1, ..., cp are not all zero. Therefore, {v1,...,vp} is linearly dependent.

​4.2 13. part 6 The preceding result helps to show why​ T(U) is closed under multiplication by scalars. Recall that every element of​ T(U) can be written as ​T(x​) for some x in U.

Since T is​ linear, ​T(cx​)=​cT(x​) and c​T(x​) is in​ T(U). Thus,​ T(U) is closed under multiplication by scalars.

4.2 13. part 4 Use these results to explain why​ T(U) is closed under vector addition in W.

Since T is​ linear, ​T(x​)+​T(y​)=​T(x+y​). So ​T(x​)+​T(y​) is in​ T(U), and​ T(U) is closed under vector addition in W.

4.2 13. part 2 Let v and w be in​ T(U). Relate v and w to vectors in U.

Since T(U) is the set of all images from U, there exist x and y in U such that T(x)=v and T(y)=w.

4.2 13. part 3 ​Next, show that​ T(U) is closed under vector addition in W. Let ​T(x​) and ​T(y​) be in​ T(U), for some x and y in U.

Since x and y are in U and U is a subspace of​ V, x+y is also in U.

​4.2 13. part 5 Next, show that​ T(U) is closed under multiplication by scalars. Let c be any scalar and x be in U.

Since x is in U and U is a subspace of​ V, cx is in U.​ Thus, ​T(cx​) is in​ T(U).

1.2 7. Part 2 Use the given assumption that the coefficient matrix of the linear system of four equations in four variables has a pivot in each column to determine the dimensions of the coefficient matrix.

The coefficient matrix has four rows and four columns.

2.1 13. Suppose the fifth column of B is the sum of the last two columns. What can be said about the fifth column of AB? Why?

The fifth column of AB is the sum of the last two columns of AB. If B is [ b1 b2 ... bp ], then the fifth column of AB is Ab5 by definition. It is given that b5 = bp-1 + bp. By matrix vector multiplication, Ab5 = A(bp-1 + bp) = Abp-1 + Abp

2.1 13. Suppose the first column of B is the sum of the second and third columns. What can be said about the first column of AB? Why?

The first column of AB is the sum of the second and third columns of AB. If B is [ b1 b2 ... bp ] then the first column of AB is Ab1 by definition. It is given that b1 = b2 + b3. By matrix vector multiplication, Ab1 = A(b2 + b3) = Ab2 + Ab3
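
Both versions of this column argument can be spot-checked numerically (a sketch; the random matrices below are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 5))
    B[:, 0] = B[:, 1] + B[:, 2]       # first column = second + third
    AB = A @ B
    print(np.allclose(AB[:, 0], AB[:, 1] + AB[:, 2]))   # True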

​4.2 13. part 7 Use these results to explain why​ T(U) is a subspace of W.

The image of the transformation​ T(U) is a subspace of W because​ T(U) contains the zero vector of W and is closed under vector addition and multiplication by a scalar.

1.8 4. How many rows and columns must a matrix A have in order to define a mapping from R6 into R7 by the rule T(x)=Ax?

The matrix must have 7 rows and 6 columns.

1.3 13. b. Show that the second column of A is in W.

The second column of A is the vector a2. The vector a2 is in W because a2 can be written as a linear combination c1a1 + c2a2 + c3a3, where c1, c2, and c3 are scalars. Thus, the second column of A is in W because a2 = 0a1 + 1a2 + 0a3.

4.1 3. Determine if the given set is a subspace of P2. Justify your answer. The set of all polynomials of the form p(t)=at^2 where a is in R

The set is a subspace of P2. The set contains the zero vector of P2, the set is closed under vector addition, and the set is closed under multiplication by scalars.

1.2 6. b. The row reduction algorithm applies only to augmented matrices for a linear system.

The statement is false. The algorithm applies to any​ matrix, whether or not the matrix is viewed as an augmented matrix for a linear system.

1.2 6. e. If one row in an echelon form of an augmented matrix is [ 0 0 0 5 0 ], then the associated linear system is inconsistent.

The statement is false. The indicated row corresponds to the equation 5x4 = 0, which does not by itself make the system inconsistent.

1.2 6. d. Finding a parametric description of the solution set of a linear system is the same as solving the system.

The statement is false. The solution set of a linear system can only be expressed using a parametric description if the system has at least one solution.

1.7 10. c. If a set contains fewer vectors than there are entries in the​ vectors, then the set is linearly independent. True or​ false?

The statement is false. There exists a set that contains fewer vectors than there are entries in the vectors that is linearly dependent. One example is a set consisting of two vectors where one of the vectors is a scalar multiple of the other vector.

1.2 6. c. A basic variable in a linear system is a variable that corresponds to a pivot column in the coefficient matrix.

The statement is true. It is the definition of a basic variable.

4.6 12. Let A be an mxn matrix. Explain why the equation Ax=b has a solution for all b in Rm if and only if the equation A^(T)x=0 has only the trivial solution.

The system Ax=b has a solution for all b in Rm if and only if the columns of A span Rm, that is, if and only if dim Col A = m. The equation A^(T)x=0 has only the trivial solution if and only if dim Nul A^T = 0. Since Col A = Row A^T, dim Col A = dim Row A^T = rank A^T = m − dim Nul A^T by the Rank Theorem. Thus, dim Col A = m if and only if dim Nul A^T = 0.

1.2 7. Part 8 Why does this system have a unique solution?

The system is consistent and has no free variables.

1.2 9. Suppose the coefficient matrix of a system of linear equations has a pivot position in every row. Explain why the system is consistent.

The system is consistent because the rightmost column of the augmented matrix is not a pivot column.

1.2 7. Part 1 Suppose the coefficient matrix of a linear system of four equations in four variables has a pivot in each column. Explain why the system has a unique solution. What must be true of a linear system for it to have a unique​ solution? Select all that apply.

The system is consistent. The system has no free variables.

6.1 15. c. Of these​ properties, all but one have already been proven for W^perp. Which has​ not?

The zero vector is in W^perp

4.1 4. Determine if the given set is a subspace of P8. Justify your answer. All polynomials of degree at most 8, with positive real numbers as coefficients.

The zero vector of P8 is not in the set because zero is not a positive real number. The set is closed under vector addition because the sum of two positive real numbers is a positive real number. The set is not closed under multiplication by scalars because the product of a scalar and a positive real number is not necessarily a positive real number. The set is not a subspace of P8.

4.3 12. part 1 Suppose that {v1, ..., vp} is a subset of V and T is a one-to-one linear transformation, so that an equation T(u)=T(v) always implies u=v. Show that if the set of images {T(v1), ..., T(vp)} is linearly dependent, then {v1, ..., vp} is linearly dependent. If the set {T(v1), ..., T(vp)} is linearly dependent then...

There exist scalars c1, ..., cp, not all zero, such that c1T(v1)+...+cpT(vp)=0

1.2 10. Suppose a 4x7 coefficient matrix for a system has four pivot columns. Is the system​ consistent? Why or why​ not?

There is a pivot position in each row of the coefficient matrix. The augmented matrix will have eight columns and will not have a row of the form [ 0 0 0 0 0 0 0 1 ], so the system is consistent.

1.3 12. c. The vector a3 is in W = Span{a1, a2, a3} because a3 can be written as a linear combination c1a1 + c2a2 + c3a3 where c1, c2, and c3 are scalars.

Thus, a3 is in W because a3 = 0a1 + 0a2 + 1a3

4.1 12. Let H and K be subspaces of a vector space V. The intersection of H and K, written as H∩K, is the set of v in V that belong to both H and K. Show that H∩K is a subspace of V. Give an example in R2 to show that the union of two subspaces is not, in general, a subspace.

To show that H∩K is a subspace of V, first show that the zero vector of V is in H∩K. Both H and K contain the zero vector of V because they are subspaces of V. Thus, the zero vector of V is in H∩K. --- Next, show that H∩K is closed under vector addition. Let u and v be in H∩K. Then u and v are in H. Likewise, u and v are in K. --- What does this imply about u+v? Since H is a subspace, u+v is in H, and since K is a subspace, u+v is in K. --- Use these results to explain why H∩K is closed under vector addition. Since u+v is in H and u+v is in K, u+v is in H∩K. Thus, H∩K is closed under vector addition. --- Next, show that H∩K is closed under multiplication by a scalar. Let u be in H∩K and let c be a scalar. Thus, u must be in H and K. What does this imply about cu? Since H is a subspace, cu is in H, and since K is a subspace, cu is in K. --- Use these results to explain why H∩K is closed under multiplication by a scalar. Since cu is in H and cu is in K, cu is in H∩K. Thus, H∩K is closed under multiplication by a scalar. --- Use these results to explain why H∩K is a subspace of V. The set H∩K is a subspace of V because H∩K contains the zero vector of V and is closed under vector addition and multiplication by a scalar. --- Give an example in R2 to show that the union of two subspaces is not, in general, a subspace. Let H be the x-axis and let K be the y-axis. Then both H and K are subspaces of R2, but H∪K is not closed under vector addition. Thus H∪K is not a subspace of R2.

4.1 10. b. If u is a vector in a vector space​ V, then ​(−​1)u is the same as the negative of u.

True because for each u in V, -u=(-1)u

4.2 12. b. The column space of an m×n matrix is in Rm.

True because the column space of an mxn matrix A is a subspace of Rm

4.2 12. a. A null space is a vector space.

True because the null space of an m×n matrix A is a subspace of Rn

4.6 9. d. The row space of A^T is the same as the column space of A.

True because the rows of A^T are the columns of (A^T)^T=A.

4.3 9. b. If a finite set S of nonzero vectors spans a vector space​ V, then some subset of S is a basis for V.

True by the Spanning Set Theorem.

4.3 9. c. A basis is a linearly independent set that is as large as possible.

True by the definition of a basis.

4.1 10. a. A vector is any element of a vector space.

True by the definition of a vector space

1.1 11. d. Is the statement​ "A consistent system of linear equations has one or more​ solutions" true or​ false? Explain.

True, a consistent system is defined as a system that has at least one solution.

7.1 12. c. The dimension of an eigenspace of a symmetric matrix equals the multiplicity of the corresponding eigenvalue.

True, according to the Spectral Theorem.

1.1 11. b. Is the statement​ "Elementary row operations on an augmented matrix never change the solution set of the associated linear​ system" true or​ false? Explain.

True, because the elementary row operations replace a system with an equivalent system.

1.1 10. d. Is the statement​ "Two fundamental questions about a linear system involve existence and​ uniqueness" true or​ false? Explain.

True, because two fundamental questions address whether the solution exists and whether there is only one solution.

1.1 10. a. Is the statement​ "Every elementary row operation is​ reversible" true or​ false? Explain.

True, because​ replacement, interchanging, and scaling are all reversible.

4.2 12. d. The null space of A, Nul(A) is the kernel of mapping x-->Ax

True, the kernel of a linear transformation T, from a vector space V to a vector space W, is the set of all u in V such that ​T(u​)=0. ​Thus, the kernel of a matrix transformation ​T(x​)=Ax is the null space of A.

4.2 12. f. The set of all solutions of a homogeneous linear differential equation is the kernel of a linear transformation.

True, the linear transformation maps each function f to a linear combination of f and at least one of its​ derivatives, exactly as these appear in the homogeneous linear differential equation.

6.3 9. b. All vectors and subspaces are in Rn. In the Orthogonal Decomposition Theorem, each term in y^hat = (y*u1)/(u1*u1) (u1) + ... + (y*up)/(up*up) (up) is itself an orthogonal projection of y onto a subspace of W.

True. Since (y*ui)/(ui*ui) (ui) is the projection of y onto ui and the span of each ui is a one-dimensional subspace of W, each resulting projection must be onto the subspace spanned by ui.

6.3 9. c. All vectors and subspaces are in Rn. If y=z1+z2, where z1 is in a subspace W of Rn and z2 is in W^perp, then z1 must be the orthogonal projection of y onto W.

True. Since the orthogonal decomposition of y into components that exist in W and W^perp is unique, z1 must correspond to the orthogonal projection of y onto W.

6.2 10. c. Assume all vectors are in Rn. If the columns of an m×n matrix A are​ orthonormal, then the linear mapping x-->Ax preserves lengths.

True. ||Ax||=||x||
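
A numerical illustration (a sketch; the orthonormal columns below come from a QR factorization of a random matrix, which is an assumption of convenience):

    import numpy as np

    rng = np.random.default_rng(4)
    U, _ = np.linalg.qr(rng.standard_normal((3, 2)))   # orthonormal columns
    x = rng.standard_normal(2)
    print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))   # True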

6.3 9. a. All vectors and subspaces are in Rn. If W is a subspace of Rn and if v is in both W and W^perp, then v must be the zero vector.

True. If v is in W, then projw v = v Since the W^perp component of v is equal to v-projw v, the W^perp component of v must be 0. A similar argument can be formed for the W component of v based on the orthogonal projection of v onto the subspace W^perp. Thus, v must be 0.

1.5 9. a. A homogeneous equation is always consistent.

True. A homogeneous equation can be written in the form Ax=0, where A is an mxn matrix and 0 is the zero vector in Rm. Such a system Ax=0 always has at least one solution, namely, x=0. Thus a homogeneous equation is always consistent.

1.8 10. a. A linear transformation is a special type of function.

True. A linear transformation T is a function from Rn to Rm that assigns to each vector x in Rn a vector T(x) in Rm. The set Rn is called the domain of T, and Rm is called the codomain of T.

5.1 11. c. A is an nxn matrix. A number c is an eigenvalue of A if and only if the equation ​(A−​cI)x=0 has a nontrivial solution.

True. A number c is an eigenvalue of A if and only if the equation Ax=cx has nontrivial​ solutions, and Ax=cx and ​(A−​cI)x=0 are equivalent equations.

4.4 10. c. In some cases, a plane in R3 can be isomorphic to R2.

True. A plane in R3 that passes through the origin is isomorphic to R2.

1.5 10. d. The equation Ax=b is homogeneous if the zero vector is a solution.

True. A system of linear equations is said to be homogeneous if it can be written in the form Ax=0, where A is an mxn matrix and 0 is the zero vector in Rm. If the zero vector is a solution, then b = Ax = A0 = 0.

1.3 11. c. Asking whether the linear system corresponding to an augmented matrix [ a1 a2 a3 b ] has a solution amounts to asking whether b is in Span {a1,a2,a3}.

True. The linear system corresponding to [ a1 a2 a3 b ] has a solution exactly when b can be written as a linear combination of a1, a2, and a3, that is, exactly when b is in Span {a1,a2,a3}.

5.1 12. e. A is an nxn matrix. An eigenspace of A is a null space of a certain matrix.

True. An eigenspace of A corresponding to the eigenvalue λ is the null space of the matrix ​(A−λ​I).

6.2 10. e. Assume all vectors are in Rn. An orthogonal matrix is invertible.

True. An orthogonal matrix is a square invertible matrix U such that U^-1 = U^T

2.1 11. c. If A is an nxn matrix then (A^2)^T= (A^T)^2

True. Applying the property (AB)^T = B^T A^T to the equation A^2 = AA gives (A^2)^T = A^T A^T, which simplifies to (A^2)^T = (A^T)^2

5.1 12. c. A is an nxn matrix. A​ steady-state vector for a stochastic matrix is actually an eigenvector.

True. A​ steady-state vector for a stochastic matrix is actually an eigenvector because it satisfies the equation Ax=x.

6.1 13. d. Assume all vectors are in Rn. If ||u||^(2) + ||v||^(2) = ||u+v||^2 then u and v are orthogonal.

True. By the Pythagorean Theorem, two vectors u and v are orthogonal if and only if ||u+v||^2 = ||u||^(2) + ||v||^(2)

5.1 11. d. A is an nxn matrix. Finding an eigenvector of A may be​ difficult, but checking whether a given vector u is in fact an eigenvector is easy.

True. Checking whether a given vector u is in fact an eigenvector is easy because it only requires checking that u is a nonzero vector and finding if Au is a scalar multiple of u.

1.8 11. a. The range of the transformation x --> Ax is the set of all linear combinations of the columns of A.

True. Each image T(x) is of the form Ax. Thus, the range is the set of all linear combinations of the columns of A.

6.2 10. a. Assume all vectors are in Rn. Not every orthogonal set in Rn is linearly independent.

True. Every orthogonal set of nonzero vectors is linearly​ independent, but not every orthogonal set is linearly independent.

1.8 11. e. A linear transformation T: Rn --> Rm always maps the origin of Rn to the origin of Rm.

True. For a linear transformation, T(0) is equal to 0.

1.5 10. c. The effect of adding p to a vector is to move the vector in a direction parallel to p.

True. Given v and p in R2 or R3, the effect of adding p to v is to move v in a direction parallel to the line through p and 0.

5.1 11. b. A is an nxn matrix. A matrix A is not invertible if and only if 0 is an eigenvalue of A.

True. If 0 is an eigenvalue of​ A, then there are nontrivial solutions to the equation Ax=0x. The equation Ax=0x is equivalent to the equation Ax=0​, and Ax=0 has nontrivial solutions if and only if A is not invertible.

1.4 11. f. If A is an m×n matrix and if the equation Ax=b is inconsistent for some b in Rm​, then A cannot have a pivot position in every row.

True. If A is an m×n matrix and if the equation Ax=b is inconsistent for some b in Rm​, then the equation Ax=b has no solution for some b in Rm.

1.4 12. e. The solution set of a linear system whose augmented matrix is [ a1 a2 a3 b ] is the same as the solution set of Ax=b, if A = [ a1 a2 a3 ].

True. If A is an m×n matrix with columns [ a1 a2 ... an ], and b is a vector in Rm, the matrix equation Ax=b has the same solution set as the system of linear equations whose augmented matrix is [ a1 a2 ... an b ]

4.6 9. e. If A and B are row​ equivalent, then their row spaces are the same.

True. If B is obtained from A by row​ operations, the rows of B are linear combinations of the rows of A and​ vice-versa.

6.2 10. d. Assume all vectors are in Rn. The orthogonal projection of y onto v is the same as the orthogonal projection of y onto cv whenever c =/= 0.

True. If c is any nonzero scalar and if v is replaced by cv in the definition of the orthogonal projection of y onto v​, then the orthogonal projection of y onto cv is exactly the same as the orthogonal projection of y onto v.

2.4 8. a. If A = [ A1 A2 ] and B = [ B1 B2 ], with A1 and A2 the same sizes as B1 and B2, respectively, then A+B = [ A1+B1 A2+B2 ].

True. If matrices A and B are the same size and are partitioned in exactly the same way, the matrix sum A+B will be partitioned the same way.

1.4 11. e. If the columns of an m×n matrix A span Rm​, then the equation Ax=b is consistent for each b in Rm

True. If the columns of A span Rm​, then the equation Ax=b has a solution for each b in Rm.

1.7 10. b. If three vectors in R3 lie in the same plane in R3, then they are linearly dependent. True or false?

True. If three vectors in R3 lie in the same plane in R3, then at least one of the vectors is a linear combination of the other two. Since at least one of the vectors is a linear combination of the other two, the three vectors are linearly dependent.

6.1 12. e. Assume all vectors are in Rn. If vectors v1, ..., vp span a subspace W and if x is orthogonal to each vj for j=1, ..., p, then x is in W^perp

True. If x is orthogonal to each vj, then x is also orthogonal to any linear combination of the vj. Since every vector in W can be written as a linear combination of v1, ..., vp, x is orthogonal to every vector in W, and so x is in W^perp.

5.2 10. c. The multiplicity of a root r of the characteristic equation of A is called the algebraic multiplicity of r as an eigenvalue of​ A

True. This is precisely the definition of the algebraic multiplicity of an eigenvalue of A.

5.3 11. c. Assume​ A, B,​ P, and D are n×n matrices. If AP=PD, with D diagonal, then the nonzero columns of P must be eigenvectors of A.

True. Let v be a nonzero column in P and let λ be the corresponding diagonal element in D. Then AP=PD implies that Av=λv​, which means that v is an eigenvector of A.

6.1 13. a. Assume all vectors are in Rn. u*v-v*u=0

True. Since the inner product is commutative, u*v=v*u. Subtracting v*u from each side of this equation gives u*v-v*u=0

1.7 10. a. If u and v are linearly independent, and if w is in Span{u,v}, then {u,v,w} is linearly dependent. True or false?

True. Since w is in Span{u,v}, w is a linear combination of u and v. Since w is a linear combination of u and v, the set {u,v,w} is linearly dependent.

1.5 10. e. If Ax=b is​ consistent, then the solution set of Ax=b is obtained by translating the solution set of Ax=0.

True. Suppose the equation Ax=b is consistent for some given b, and let p be a solution. Then the solution set of Ax=b is the set of all vectors of the form w=p+vh, where vh is any solution of the homogeneous equation Ax=0.

4.6 5. a. The sets B and C are bases for a vector space V. The columns of P (C <-- B) are linearly independent.

True. The columns of P (C <-- B) are linearly independent because they are the coordinate vectors of the linearly independent set B.

4.6 9. c. The dimension of the null space of A is the number of columns of A that are not pivot columns.

True. The dimension of Nul A equals the number of free variables in the equation Ax=0, and each free variable corresponds to a column of A that is not a pivot column.

1.4 11. b. A vector b is a linear combination of the columns of a matrix A if and only if the equation Ax=b has at least one solution.

True. The equation Ax = b has the same solution set as the vector equation x1a1 + x2a2 + ... + xnan = b, so b is a linear combination of the columns of A if and only if Ax=b has at least one solution.

1.4 12. b. If the equation Ax=b is​ consistent, then b is in the set spanned by the columns of A.

True. The equation Ax=b has a nonempty solution set if and only if b is a linear combination of the columns of A.

1.4 11. d. The first entry in the product Ax is a sum of products.

True. By the row-vector rule, the first entry in Ax is the sum of the products of the entries in the first row of A with the corresponding entries in x.

1.8 11. d. A linear transformation preserves the operations of vector addition and scalar multiplication.

True. A linear transformation T satisfies T(cu+dv) = cT(u)+dT(v) for all u, v in its domain and all scalars c, d; therefore, vector addition and scalar multiplication are preserved.

1.4 12. c. Any linear combination of vectors can always be written in the form Ax for a suitable matrix A and vector x.

True. Any linear combination c1v1 + ... + cpvp can be written as Ax, where A = [ v1 ... vp ] is the matrix whose columns are the vectors and x is the vector of weights (c1, ..., cp).

1.4 12. a. Every matrix equation Ax=b corresponds to a vector equation with the same solution set.

True. The matrix equation Ax=b is simply another notation for the vector equation x1a1 + x2a2 + ... + xnan = b where a1, ..., an are the columns of A.

2.1 11. a. The first row of AB is the first row of A multiplied on the right by B.

True. The row-column rule for computing the product AB states that (AB)ij = ai1b1j + ai2b2j + ... + ainbnj, where (AB)ij denotes the (i,j)-entry of AB and A is an mxn matrix.

4.4 10. a. If B is the standard basis for Rn, then the B-coordinate vector of an x in Rn is x itself.

True. The standard basis consists of the columns e1, ..., en of the nxn identity matrix, and x = x1e1 + ... + xnen, so [x]B = (x1, ..., xn) = x.

1.9 10. c. The columns of the standard matrix for a linear transformation from Rn to Rm are the images of the columns of the nxn identity matrix under T.

True. The standard matrix is the mxn matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in Rn.

1.9 10. b. Every linear transformation from Rn to Rm is a matrix transformation.

True. There exists a unique matrix A such that T(x)=Ax for all x in Rn.

1.8 10. e. A transformation T is linear if and only if T(c1v1+c2v2)=c1T(v1)+c2T(v2) for all v1 and v2 in the domain of T and for all scalars c1 and c2

True. This equation correctly summarizes the properties necessary for a transformation to be linear.

1.3 11. b. Any list of five real numbers is a vector in ℝ5.

True. ℝ5 denotes the collection of all lists of five real numbers.

2.3 10. a. If there is an nxn matrix D such that AD=I, then DA=I

True; by the Invertible Matrix Theorem, if AD=I, then A and D are both invertible, with D=A^-1 and A=D^-1. Therefore, DA=I.

2.3 10. c. If the columns of A are linearly independent then the columns of A span Rn

True; by the Invertible Matrix Theorem, if the columns of A are linearly independent, then the columns of A must span Rn.

2.4 10. In the study of engineering control of physical systems, a standard set of differential equations is transformed by Laplace transforms into the system of linear equations shown below, where A is nxn, B is nxm, C is mxn, s is a variable, the vector u is in Rm, y is in Rm, and x is in Rn.

[ A-sIn  B  ] [ x ]   [ 0 ]
[ C      Im ] [ u ] = [ y ]

Assume A-sIn is invertible and view the equation above as a system of two matrix equations. Solve the top equation for x and substitute into the bottom equation. The result is an equation of the form W(s)u=y, where W(s) is a matrix that depends on s. Find W(s) and describe how it is related to the partitioned system matrix on the left.

The top equation gives (A-sIn)x + Bu = 0, so x = -(A-sIn)^-1 Bu. Substituting into the bottom equation Cx + u = y yields y = (Im - C(A-sIn)^-1 B)u, so

W(s) = Im - C(A-sIn)^-1 B

How is W(s) related to the partitioned system matrix on the left side of the matrix equation? W(s) is the Schur complement of the block A-sIn in that partitioned matrix.
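
As a sanity check on this algebra (not part of the original exercise), here is a minimal NumPy sketch; the sizes n and m, the value of s, and all matrix entries are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 4, 2, 1.5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
u = rng.standard_normal(m)

# Top equation: (A - s I)x + B u = 0, so x = -(A - s I)^-1 B u
x = -np.linalg.solve(A - s * np.eye(n), B @ u)

# Bottom equation: C x + u = y
y = C @ x + u

# Schur complement of the block A - s I in the partitioned system matrix
W = np.eye(m) - C @ np.linalg.inv(A - s * np.eye(n)) @ B
print(np.allclose(W @ u, y))   # True: W(s)u = y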

2.5 8. Part 1 Suppose A=​BC, where B is invertible. Show that any sequence of row operations that reduces B to I also reduces A to C. The converse is not​ true, since the zero matrix may be factored as 0=B*0.

Which of the following pieces of information in the problem statement are relevant for showing that any sequence of row operations that reduces B to I also reduces A to​ C? Select all that apply.

[ ] A. The zero matrix may be factored as 0=B*0.
[ ] B. The converse is not true.
[x] C. A=BC
[x] D. B is invertible.

1.5 13. a. A is a 2x5 matrix with two pivot positions. Does the equation Ax=0 have a nontrivial solution?

Yes. With only two pivots among five columns, the equation Ax=0 has free variables, so it has nontrivial solutions.

1.5 13. b. A is a 2x5 matrix with two pivot positions. Does the equation Ax=b have at least one solution for every possible b?

Yes. With two pivot positions, A has a pivot in every row, so Ax=b is consistent for every b in R2.

1.3 12. b. Set up the appropriate augmented matrix for determining if b is in W.

[ 1  0 -7  2 ]
[ 0  3 -2  1 ]
[ -3 6  3 -2 ]

Is b in W? ​Yes, because the​ row-reduced form of the augmented matrix does not have a pivot in the rightmost column.

How many vectors are in W? Infinitely many.
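
A minimal sympy sketch of this check (using sympy here is an illustrative choice, not part of the exercise); the entries are taken from the augmented matrix above.

from sympy import Matrix

M = Matrix([[ 1, 0, -7,  2],
            [ 0, 3, -2,  1],
            [-3, 6,  3, -2]])

rref_form, pivot_cols = M.rref()
print(pivot_cols)   # (0, 1, 2): no pivot in the rightmost (augmented) column,
                    # so the system is consistent and b is in W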

1.2 7. Part 3 Let the coefficient matrix be in reduced echelon form with a pivot in each​ column, since each matrix is equivalent to one and only one reduced echelon matrix. Construct a matrix with the dimensions determined in the previous step that is in reduced echelon form and has a pivot in each column.

[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]

1.2 7. Part 4 Now find an augmented matrix in reduced echelon form that represents a linear system of four equations in four variables for which the corresponding coefficient matrix has a pivot in each column. Choose the correct answer below.

[ 1 0 0 0 a ]
[ 0 1 0 0 b ]
[ 0 0 1 0 c ]
[ 0 0 0 1 d ]

1.9 11. Determine if the specified linear transformation is (a) one-to-one and (b) onto. Justify each answer. T(x1,x2,x3) = (x1-7x2+2x3, x2-5x3) a. Is the linear transformation one-to-one? b. Is the linear transformation onto?

a. T is not one-to-one because the columns of the standard matrix A are linearly dependent. b. T is onto because the columns of the standard matrix A span R2.
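
A quick NumPy sketch of this reasoning; the standard matrix A is read off from the formula for T, with columns T(e1), T(e2), T(e3).

import numpy as np

A = np.array([[1.0, -7.0,  2.0],
              [0.0,  1.0, -5.0]])

rank = np.linalg.matrix_rank(A)
print(rank == A.shape[0])   # True: a pivot in every row, so T is onto R2
print(rank == A.shape[1])   # False: 2 pivots for 3 columns, so the columns are
                            # linearly dependent and T is not one-to-one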

6.1 15. Let W be a subspace of Rn, and let W^perp be the set of all vectors orthogonal to W. Show that W^perp is a subspace of Rn using the following steps.

a. Take z in W^perp, and let u represent any element of W. Then z*u=0. Take any scalar c and show that cz is orthogonal to u. (Since u was an arbitrary element of W, this will show that cz is in W^perp.)
b. Take z1 and z2 in W^perp, and let u be any element of W. Show that z1+z2 is orthogonal to u. What can you conclude about z1+z2? Why?
c. Finish the proof that W^perp is a subspace of Rn.

4.4 12. part 2 Since b1, ..., bn are in V and since for each x in V there exists a unique set of scalars c1, ..., cn such that x = c1b1 + ... + cnbn, what is true of each bk for k=1, ..., n?

bk = c1b1 + ... + cnbn for some unique set of scalars c1, ..., cn

4.4 12. part 3 Rewrite the expression for bk given that the scalars c1, ..., cn are unique by the Unique Representation Theorem

bk = c1b1 + ... + cnbn = 0*b1 + ... + 1*bk + ... + 0*bn Thus, the coordinate vector [bk]B of bk is ek, or the kth column of the nxn identity matrix.

4.3 12. part 2 Since T is linear...

c1T(v1)+...+cpT(vp)=T(c1v1+...+cpvp)

3.2 16. part 2 Show that if A is invertible, then det A^-1 = 1/det A. Consider the quantity (det A)(det A^-1). To what is this equal?

det I, where I is the identity matrix. Since AA^-1 = I, we have (det A)(det A^-1) = det(AA^-1) = det I = 1, so det A^-1 = 1/det A.
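
A quick numerical spot check of this identity (purely illustrative, with an arbitrary invertible matrix):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # det A = 5, so A is invertible

lhs = np.linalg.det(np.linalg.inv(A))
rhs = 1.0 / np.linalg.det(A)
print(np.isclose(lhs, rhs))   # True: det(A^-1) = 1/det(A)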

6.4 8. part 2 Then...

y = QRx = Q(Rx), which shows that y is a linear combination of the columns of Q using the entries in Rx as weights. Therefore, y belongs to Col Q.
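
A minimal NumPy sketch of this argument, with an arbitrary 5x3 matrix standing in for the A of the exercise:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
Q, R = np.linalg.qr(A)            # A = QR, where Q has orthonormal columns

x = rng.standard_normal(3)
y = A @ x                         # y is in Col A

weights = Q.T @ y                 # equals Rx, since Q^T Q = I
print(np.allclose(Q @ weights, y))   # True: y is a combination of Q's columns, so y is in Col Q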

6.1 15. b. What are the values of z1*u and z2*u?

z1*u = 0 and z2*u = 0

4.5 11. part 2 Write the standard basis of the space P3 of polynomials, in order of ascending degree.

{1, t, t^2, t^3}

6.7 8. part 2 Evaluate |<u, v>| for u = (sqrt(a), sqrt(b)) and v = (sqrt(b), sqrt(a)), where a >= 0 and b >= 0.

| <u, v> | = 2 sqrt(ab)

6.7 8. part 3 Evaluate ||u|| for u = (sqrt(a), sqrt(b)), where a >= 0 and b >= 0.

||u|| = sqrt(a+b). By symmetry, ||v|| = sqrt(a+b) as well, so the Cauchy-Schwarz inequality |<u, v>| <= ||u|| ||v|| gives 2 sqrt(ab) <= a + b, that is, sqrt(ab) <= (a+b)/2.

