5.5-6.4 True or False
5.5 #11: If A = PDP⁻¹, where P is an invertible matrix and D is a diagonal matrix, then the solution of y' = Ay is P⁻¹z, where z is a solution of z' = Dz.
False The solution of y' = Ay is y = Pz, where z is a solution of z' = Dz, since y' = Pz' = PDz = PDP⁻¹y = Ay
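A quick numerical sketch (the matrix P, eigenvalues, and initial values are assumed for illustration): the substitution y = Pz satisfies y' = Ay, while the statement's y = P⁻¹z does not.

```python
import numpy as np

# Hypothetical 2x2 example: A = P D P^{-1}.
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
D = np.diag([2.0, -1.0])
A = P @ D @ np.linalg.inv(P)

# If z(t) solves z' = Dz, then z(t) = (c1*e^{2t}, c2*e^{-t}).
t = 0.7
z  = np.exp(np.diag(D) * t) * np.array([3.0, -4.0])   # z(t)
dz = np.diag(D) * z                                    # z'(t) = D z(t)

y_correct = P @ z                    # y = Pz
y_claimed = np.linalg.inv(P) @ z     # the statement's y = P^{-1}z

# y = Pz satisfies y' = Ay, since y' = Pz' = PDz = PDP^{-1}(Pz) = Ay:
print(np.allclose(P @ dz, A @ y_correct))                 # True
# y = P^{-1}z does not: its derivative P^{-1}z' differs from A(P^{-1}z).
print(np.allclose(np.linalg.inv(P) @ dz, A @ y_claimed))  # False
```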
5.5 #9: If P⁻¹AP is a diagonal matrix D, then the change of variable z = Py transforms the system y' = Ay into z' = Dz
False The change of variable that works is z = P⁻¹y; then z' = P⁻¹y' = P⁻¹Ay = P⁻¹AP z = Dz
6.1 #64:The norm of a multiple of a vector is the same multiple of the norm of the vector.
False ||cv|| = |c|·||v||, so the claim fails for negative multiples; e.g. ||(-2)v|| = 2||v||, not -2||v||
6.1 #65: The norm of a sum of vectors is the sum of the norms of the vectors.
False By the triangle inequality, ||u+v|| ≤ ||u|| + ||v||, and the inequality can be strict; e.g. ||e₁ + (-e₁)|| = 0, but ||e₁|| + ||-e₁|| = 2
6.1 #70: If u•v = 0, then u = 0 or v = 0.
False Nonzero vectors can be orthogonal; e.g. e₁•e₂ = 0 with e₁ ≠ 0 and e₂ ≠ 0
6.1 #76: If A is an nxn matrix and u and v are vectors in R^n, then Au•v = u•Av
False The correct identity is Au•v = u•Aᵀv; the statement holds for all u and v only when A is symmetric
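A numeric counterexample (A, u, and v assumed for illustration), along with the identity that does hold:

```python
import numpy as np

# A non-symmetric A breaks the claimed identity:
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

print(np.dot(A @ u, v))      # 0.0
print(np.dot(u, A @ v))      # 1.0 -- so Au•v != u•Av here
# The correct identity uses the transpose: Au•v = u•(A^T v).
print(np.dot(A @ u, v) == np.dot(u, A.T @ v))   # True
```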
6.2 #49: Combining the vectors in two orthonormal subsets of R^n produces another orthonormal subset of R^n
False A vector from one subset need not be orthogonal to a vector from the other; the two subsets could even share a vector
6.3 #34: If F and G are subsets of R^n and F⊥ = G⊥, then F = G
False e.g. F = {e₁} and G = span{e₁} have the same orthogonal complement, but F ≠ G
6.2 #52: In the QR factorization of a matrix, both factors are upper triangular matrices.
False In the factorization A = QR, Q has orthonormal columns and only R is upper triangular
6.1 #71: For all vectors u and v in R^n, |u•v| = ||u||•||v||.
False By the Cauchy-Schwarz inequality, |u•v| ≤ ||u||·||v||, with equality only when one of u, v is a multiple of the other
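A quick check with assumed vectors, including the equality case for parallel vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

lhs = abs(np.dot(u, v))
rhs = np.linalg.norm(u) * np.linalg.norm(v)
print(lhs <= rhs)            # True: Cauchy-Schwarz always holds
print(np.isclose(lhs, rhs))  # False here: u and v are not parallel
# Equality does hold for parallel vectors, e.g. v = 3u:
print(np.isclose(abs(np.dot(u, 3 * u)),
                 np.linalg.norm(u) * np.linalg.norm(3 * u)))  # True
```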
6.2 #41: Any orthogonal subset of R^n is linearly independent.
False If the subset contains the zero vector, then it is not linearly independent
6.4 #31: For any inconsistent system of linear equations Ax = b, the vector z for which ||Az-b|| is a minimum is unique.
False The minimizers of ||Az - b|| are exactly the solutions of the normal equations AᵀAz = Aᵀb; when the columns of A are linearly dependent, there are infinitely many such z
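A small sketch with an assumed rank-deficient A: two different vectors achieve the same minimal residual norm.

```python
import numpy as np

# A has two identical columns (rank 1), and b lies outside Col(A):
A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 1.0, 1.0])

# Two different z's with the same value of ||Az - b||:
z1 = np.array([1.0, 0.0])
z2 = np.array([0.0, 1.0])
print(np.linalg.norm(A @ z1 - b))   # 1.0
print(np.linalg.norm(A @ z2 - b))   # 1.0 -- same minimum, different z
```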
6.4 #30: The method of least squares can be used only to approximate data with a straight line.
False It can be used to fit polynomials of any degree, or any model that is linear in its unknown coefficients
6.1 #62: The dot product of two vectors in R^n is a vector in R^n
False It is a scalar
6.1 #63: The norm of a vector equals the dot product of the vector with itself.
False It is the square root of the dot product of the vector with itself
5.5 #1: The row sums of the transition matrix of a Markov chain are all 1.
False The columns sum to 1
6.4 #28: For a given set of data plotted in the xy-plane, the least-squares line is the unique line in the plane that minimizes the sum of the vertical distances from the data points to the line.
False The least-squares line minimizes the sum of the SQUARES of the vertical distances from the data points to the line
5.5 #2: If the transition matrix of a Markov chain contains zero entries, then it is not regular.
False A transition matrix A is regular if A^k has only positive entries for some power k ≥ 1, so A itself may contain zero entries as long as some power of A does not
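A sketch with an assumed column-stochastic matrix: A has a zero entry, yet A² is entrywise positive, so A is regular.

```python
import numpy as np

# Column-stochastic A (each column sums to 1) with one zero entry:
A = np.array([[0.5, 1.0],
              [0.5, 0.0]])
print(np.all(A > 0))        # False: A itself has a zero entry
print(np.all(A @ A > 0))    # True:  A^2 is entrywise positive, so A is regular
```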
6.1 #78: If u and v are orthogonal vectors in R^n, then ||u+v|| = ||u|| + ||v||.
False For orthogonal u and v, ||u+v||² = ||u||² + ||v||² (Pythagorean theorem), which equals (||u|| + ||v||)² only when u = 0 or v = 0
6.2 #50: If x is orthogonal to y and y is orthogonal to z, then x is orthogonal to z.
False Counterexample: let x = z be any nonzero vector and let y be orthogonal to it; then x•y = 0 and y•z = 0, but x•z = z•z = ||z||² ≠ 0, so x is not orthogonal to z
5.5 #8: The general solution of y' = ky is y = ke^t.
False Substituting y = ke^t gives y' = ke^t = y, so it solves y' = y, not y' = ky. The general solution is y = ce^(kt)
5.5 #10: A differential equation a₃y''' + a₂y'' + a₁y' + a₀y = 0 where a₃, a₂, a₁, a₀ are scalars, can be written as a system of linear differential equations.
True
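One way to sketch the conversion, using an assumed set of coefficients and the state vector x = (y, y', y'')ᵀ:

```python
import numpy as np

# Rewrite a3*y''' + a2*y'' + a1*y' + a0*y = 0 (a3 != 0) as x' = Ax.
# The coefficients below are assumed for illustration.
a3, a2, a1, a0 = 1.0, 2.0, 3.0, 4.0

A = np.array([
    [0.0,      1.0,      0.0],      # (y)'   = y'
    [0.0,      0.0,      1.0],      # (y')'  = y''
    [-a0/a3,  -a1/a3,  -a2/a3],     # (y'')' = -(a0*y + a1*y' + a2*y'')/a3
])
print(A)
```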
5.5 #12: In a Fibonacci sequence, each term after the first two is the sum of the two preceding terms.
True
5.5 #3: If A is the transition matrix of a Markov chain and p is any probability vector, then Ap is a probability vector.
True
5.5 #5: If A is the transition matrix of a regular Markov chain, then as m approaches infinity, the vectors A^m * p approach the same probability vector for every probability vector p.
True
5.5 #7: Every regular transition matrix has a unique probability vector that is an eigenvector corresponding to eigenvalue 1.
True
6.1 #61: Vectors must be of the same size for their dot product to be defined.
True
6.1 #67: The orthogonal projection of a vector on a line is a vector that lies along the line.
True
6.1 #68: The norm of a vector is always a nonnegative real number.
True
6.1 #69: If the norm of v equals 0, then v = 0.
True
6.1 #72: For all vectors u and v in R^n, u•v = v•u.
True
6.1 #74: For all vectors u and v in R^n and every scalar c, (cu)•v = u•(cv).
True
6.1 #75: For all vectors u, v, and w in R^n, u•(v+w) = u•v + u•w.
True
6.1 #77: For every vector v in R^n, ||v|| = ||-v||.
True
6.1 #79: If w is the orthogonal projection of u on a line through the origin of R², then u-w is orthogonal to every vector on the line.
True
6.1 #80: If w is the orthogonal projection of u on a line through the origin of R², then w is the vector on the line closest to u.
True
6.2 #44: If S is an orthogonal set of n nonzero vectors in R^n, then S is a basis for R^n.
True
6.2 #46: For any nonzero vector v, (1/||v||)v is a unit vector
True
6.2 #51: The Gram-Schmidt process transforms a linearly independent set into an orthogonal set.
True
6.3 #33: For any nonempty subset S of R^n, S⊥⊥ = S
False S⊥⊥ equals the span of S, so the equality holds only when S is a subspace; e.g. for S = {e₁}, S⊥⊥ = span{e₁} ≠ S
6.3 #35: The orthogonal complement of any nonempty subset of R^n is a subspace of R^n.
True
6.4 #29: If [a₀, a₁] is a solution of the normal equations for the data, then y = a₀ + a₁x is the equation of the least-squares line.
True
6.2 #43: Any subset of R^n consisting of a single vector is an orthogonal set.
True A set is orthogonal when every pair of distinct vectors in it is orthogonal; a single-vector set has no such pairs, so the condition holds vacuously
6.2 #45: If {v₁, v₂, ... vk} is an orthonormal basis for a subspace W and w is a vector in W then w = (w•v₁)v₁ + (w•v₂)v₂ + ... + (w•vk)vk
True Because ||vᵢ|| = 1, the projection of w on vᵢ, ((w•vᵢ)/||vᵢ||²)vᵢ, simplifies to (w•vᵢ)vᵢ, and the sum of the projections of w onto an orthonormal basis of W is w itself
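A numeric check with an assumed orthonormal basis {v₁, v₂} of a plane W in R³:

```python
import numpy as np

# Orthonormal basis of a plane W in R^3 (assumed example):
v1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
v2 = np.array([0.0, 0.0, 1.0])

# Any w in W = span{v1, v2}:
w = 2.0 * v1 - 5.0 * v2

# Reconstruct w from its coordinates w•v_i:
w_rebuilt = np.dot(w, v1) * v1 + np.dot(w, v2) * v2
print(np.allclose(w, w_rebuilt))    # True
```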
6.4 #32: Every consistent system of linear equations Ax = b has a unique solution of least norm.
True Among all solutions of Ax = b, the one that lies in the row space of A (equivalently, is orthogonal to the null space of A) is the unique solution of least norm
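A sketch using the pseudoinverse, which returns the minimum-norm solution (the system below is assumed for illustration):

```python
import numpy as np

# Underdetermined but consistent system: x1 + x2 = 2.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x_min = np.linalg.pinv(A) @ b     # pseudoinverse gives the least-norm solution
x_other = np.array([2.0, 0.0])    # another solution of Ax = b

print(x_min)                                             # [1. 1.]
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))   # True
```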
6.2 #48: Every orthonormal subset is linearly independent.
True The vectors in an orthonormal set are nonzero (each has norm 1) and pairwise orthogonal, and every orthogonal set of nonzero vectors is linearly independent
5.5 #4: If A is the transition matrix of a Markov chain and p is any probability vector, then as m approaches infinity, the vectors A^m * p approach a probability vector.
False The limit need not exist when A is not regular: for A = [[0, 1], [1, 0]] and p = (1, 0), the vectors A^m p alternate between (1, 0) and (0, 1). When A is regular, A^m p does approach the steady-state probability vector
6.2 #42: Every nonzero subspace of R^n has an orthogonal basis.
True Apply the Gram-Schmidt process to any basis of the subspace; it produces an orthogonal basis for the same subspace
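A minimal sketch of the Gram-Schmidt process (without normalization), applied to an assumed basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list of vectors into an orthogonal list
    spanning the same subspace (no normalization)."""
    ortho = []
    for v in vectors:
        w = v.astype(float)
        for u in ortho:
            w = w - (np.dot(v, u) / np.dot(u, u)) * u   # subtract projection on u
        ortho.append(w)
    return ortho

basis = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
u1, u2 = gram_schmidt(basis)
print(np.isclose(np.dot(u1, u2), 0.0))    # True: the output is orthogonal
```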
5.5 #6: Every regular transition matrix has 1 as an eigenvalue.
True Each column of A sums to 1, so Aᵀ has (1, ..., 1) as an eigenvector with eigenvalue 1, and A and Aᵀ have the same eigenvalues; the probability eigenvector of A corresponding to eigenvalue 1 is the steady-state vector
6.1 #73: The distance between vectors u and v in R^n is ||u-v||.
True The vector u-v joins the terminal points of u and v, so its length ||u-v|| is the distance between them
6.2 #47: The set of standard vectors e₁, e₂, ..., en is an orthonormal basis for R^n.
True The standard vectors are pairwise orthogonal (eᵢ•eⱼ = 0 for i ≠ j), each has norm 1, and together they form a basis for R^n
6.1 #66: The squared norm of a sum of orthogonal vectors is the sum of the squared norms of the vectors.
True ||u + v||² = ||u||² + 2(u•v) + ||v||², which reduces to ||u||² + ||v||² exactly when u•v = 0, i.e. when u and v are orthogonal