LA_3

For any scalar c, ||cv|| = c||v||. t/f?

FALSE - ||cv|| = |c| ||v||, so the absolute value of c is needed.

A matrix with orthonormal columns is an orthogonal matrix? t/f

False - It also has to be a square matrix.

The best approximation to y by elements of a subspace W is given by the vector y - proj_w y? t/f

False - The best approximation is proj_w y itself, not y - proj_w y.

If the vectors in an orthogonal set of nonzero vectors are normalized, then some of the new vectors may not be orthogonal? t/f

False - Normalizing only affects the magnitude of vectors, not their direction.

If a set S = {u_1, ... u_p} has the property that u_i . u_j = 0 whenever i != j, then S is an orthonormal set? t/f

False - S is an orthogonal set. To be orthonormal the magnitude of all vectors must be 1. That is achieved through normalization

If W = Span {x1, x2, x3} with {x1, x2, x3} linearly independent, and if {v1, v2, v3} is an orthogonal set in W, then {v1, v2, v3} is a basis for W? t/f

False - The three orthogonal vectors must be nonzero to be a basis for a three-dimensional subspace.

If an n x p matrix U has orthonormal columns, then UU^Tx = x for all x in Rn? t/f

False - UU^Tx = x if and only if x is in the column space of U, which is not always all of Rn (for example, when p < n).
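
A quick numerical check, sketched with NumPy (the matrix U and the vectors below are made-up examples): build a 3 x 2 matrix U with orthonormal columns, so p < n; then UU^Tx reproduces x only when x lies in Col U.

import numpy as np

# 3 x 2 matrix with orthonormal columns (p = 2 < n = 3), chosen for illustration
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

x_in  = U @ np.array([2.0, -1.0])      # lies in Col U
x_out = np.array([1.0, 1.0, 1.0])      # has a component outside Col U

print(np.allclose(U @ U.T @ x_in, x_in))     # True
print(np.allclose(U @ U.T @ x_out, x_out))   # False: UU^T only projects onto Col U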

If L is a line through 0 and if y-hat is the orthogonal projection of y onto L, then ||y-hat|| gives the distance from y to L? t/f

False - distance is ||y - y-hat||
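
As a small NumPy sketch (the vectors u and y are made up for illustration), the projection of y onto the line L spanned by u is y-hat = (y.u / u.u) u, and the distance from y to L is ||y - y-hat||, not ||y-hat||:

import numpy as np

u = np.array([3.0, 4.0])           # direction of the line L through 0
y = np.array([1.0, 7.0])

y_hat = (y @ u) / (u @ u) * u      # orthogonal projection of y onto L
print(np.linalg.norm(y - y_hat))   # 3.4, the distance from y to L
print(np.linalg.norm(y_hat))       # 6.2, the length of y-hat, which is not the distance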

The orthogonal projection y-hat of y onto a subspace W can sometimes depend on the orthogonal basis for W used to compute y-hat? t/f

False. y-hat does not depend on the particular orthogonal basis used to compute it. If a different orthogonal basis for W were used to construct an orthogonal projection of y, then this projection would also be the closest point in W to y, namely, y-hat.

A positive definite quadratic form Q satisfies Q(x) > 0 for all x in Rn? t/f

False. Q is positive definite if Q(x) > 0 for all x != 0; since Q(0) = 0, the inequality cannot hold for all x in Rn.

Orthogonal diagonalization requires n linearly independent and orthonormal eigenvectors. When is this possible?

This is possible if and only if A is symmetric.

For any scalar c, u.(cv) = c(u.v) t/f?

True

If the columns of an m x n matrix A are orthonormal, then the linear mapping x -> Ax preserves lengths? t/f

True

If the eigenvalues of a symmetric matrix A are all positive, then the quadratic form x^T * Ax is positive definite? t/f

True

If vectors v_1, ..., v_p span a subspace W and if x is orthogonal to each v_j for j = 1, ..., p, then x is in W_perp. t/f?

True

If y = z1 + z2, where z1 is in a subspace W and z2 is in W_perp, then z1 must be the orthogonal projection of y onto W? t/f

True

Not every linearly independent set in Rn is an orthogonal set? t/f

True

The orthogonal projection of y onto v is the same as the orthogonal projection of y onto cv whenever c != 0? t/f

True

In a QR factorization, say A = QR (when A has linearly independent columns), the columns of Q form an orthonormal basis for the column space of A? t/f

True. The QR Factorization: If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for Col A and R is an n x n upper triangular invertible matrix with positive entries on its diagonal.
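
A hedged NumPy sketch of the factorization (the matrix A is an arbitrary example; numpy.linalg.qr does not promise positive diagonal entries in R, so the signs are adjusted below to match the theorem's convention):

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])              # linearly independent columns

Q, R = np.linalg.qr(A)                  # reduced QR: Q is 3 x 2, R is 2 x 2

# Flip signs so that R has positive diagonal entries, as in the theorem
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

print(np.allclose(A, Q @ R))            # A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))  # columns of Q are orthonormal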

If y is in a subspace W, then the orthogonal projection of y onto W is y itself? t/f

True. If y is in W = Span {u_1, ..., u_p}, then proj_w y = y.

Not every orthogonal set in Rn is linearly independent? t/f

True - If the set contains the zero vector among other orthogonal vectors, then it is not linearly independent.

If x is not in a subspace W, then x - proj_w x is not zero? t/f

True - If x is not in a subspace W, then x cannot equal the projection of x onto W, because the projection of x onto W is in W

An orthogonal matrix is invertible? t/f

True - It is a square matrix whose columns are linearly independent, so it has full rank.

If W is a subspace of Rn and if v is in both W and W_perp, then v must be the zero vector? t/f

True - Such a v is orthogonal to every vector in W, including itself, so v.v = 0 and therefore v = 0 (see the Orthogonal Decomposition Theorem).

If the columns of an n x p matrix U are orthonormal, then UU^Ty is the orthogonal projection of y onto the column space of U? t/f

True - The columns of U form an orthonormal basis for W = Col U, so UU^Ty = proj_w y.

In the Orthogonal Decomposition Theorem, each term in formula (2) for y hat is itself an orthogonal projection of y onto a subspace of W? t/f

True - When W is a one-dimensional subspace, the formula (2) for proj_w y contains just one term. Thus, when dim W > 1, each term in (2) is itself an orthogonal projection of y onto a one-dimensional subspace spanned by one of the u's in the basis for W
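
A NumPy sketch of formula (2), assuming {u_1, u_2} is an orthogonal basis for W (the vectors are made up for illustration): proj_w y = (y.u_1 / u_1.u_1) u_1 + (y.u_2 / u_2.u_2) u_2, each term being a projection onto a one-dimensional subspace.

import numpy as np

u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])          # orthogonal to u1; together they span W
y  = np.array([2.0, 3.0, 5.0])

proj = lambda y, u: (y @ u) / (u @ u) * u
y_hat = proj(y, u1) + proj(y, u2)        # sum of one-dimensional projections

print(y_hat)                             # [2. 3. 0.], the closest point in W to y
print(np.isclose((y - y_hat) @ u1, 0))   # y - y_hat is orthogonal to W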

For each y and each subspace W, the vector y - proj_w y is orthogonal to W? t/f

True - by Orthogonal Decomposition Theorem

If x is orthogonal to every vector in a subspace W, then x is in W_perp. t/f?

True, according to the definition of W_perp

The principal axes of a quadratic form x^T * Ax are eigenvectors of A? t/f

True.

A Cholesky factorization of a symmetric matrix A has the form A = R^T*R, for an upper triangular matrix R with positive diagonal entries? t/f

True. A fast way to determine whether a symmetric matrix A is positive definite is to attempt to factor A in the form A = R^T * R, where R is upper triangular with positive diagonal entries. (A slightly modified algorithm for an LU factorization is one approach.) Such a Cholesky factorization is possible if and only if A is positive definite.
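
A hedged NumPy sketch (the matrices are arbitrary examples): numpy.linalg.cholesky returns a lower triangular L with A = LL^T, so R = L^T gives the A = R^T R form stated here, and the call raises LinAlgError when A is not positive definite.

import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])               # symmetric and positive definite

L = np.linalg.cholesky(A)                # lower triangular, A = L @ L.T
R = L.T                                  # upper triangular with positive diagonal

print(np.allclose(A, R.T @ R))           # A = R^T R

try:
    np.linalg.cholesky(np.array([[1.0, 2.0], [2.0, 1.0]]))   # not positive definite
except np.linalg.LinAlgError:
    print("Cholesky failed: the matrix is not positive definite")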

A quadratic form has no cross-product terms if and only if the matrix of the quadratic form is a diagonal matrix? t/f

True. In some cases, quadratic forms are easier to use when they have no cross-product terms—that is, when the matrix of the quadratic form is a diagonal matrix. Fortunately, the cross-product term can be eliminated by making a suitable change of variable.

The matrix of a quadratic form is a symmetric matrix? t/f

True. Recall - A quadratic form on Rn is a function Q defined on Rn whose value at a vector x in Rn can be computed by an expression of the form Q(x) = x^T * Ax, where A is an n x n symmetric matrix. The matrix A is called the matrix of the quadratic form.

u.v - v.u = 0 t/f?

True. The dot product is commutative

What is the Pythagorean Theorem of orthogonality

Two vectors u and v are orthogonal if and only if ||u + v||^2 = ||u||^2 + ||v||^2.
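
A small NumPy check of the identity (the vectors are chosen only for illustration):

import numpy as np

u = np.array([3.0, 0.0])
v = np.array([0.0, 4.0])                              # orthogonal to u

lhs = np.linalg.norm(u + v) ** 2                      # 25
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2 # 9 + 16
print(np.isclose(lhs, rhs))                           # True exactly because u.v = 0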

When are two vectors orthogonal?

Two vectors u and v in Rn are orthogonal (to each other) if u.v = 0.

Describe the least-squares error of an approximation

When a least-squares solution x hat is used to produce Ax hat as an approximation to b, the distance from b to Ax hat is called the least-squares error of this approximation.

What happens when the vectors in an orthogonal set of nonzero vectors are normalized to have unit length?

When the vectors in an orthogonal set of nonzero vectors are normalized to have unit length, the new vectors will remain orthogonal, and the new set will then be an orthonormal set

Introduce the general least-squares problem

Think of Ax as an approximation to b. The smaller the distance between b and Ax, given by ||b - Ax||, the better the approximation. The general least-squares problem is to find an x that makes ||b - Ax|| as small as possible. The adjective "least-squares" arises from the fact that ||b - Ax|| is the square root of a sum of squares.
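
A hedged NumPy sketch of the problem (A and b are made-up data): numpy.linalg.lstsq minimizes ||b - Ax||, and the least-squares error defined in an earlier card is exactly that minimal distance.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x_hat, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x_hat)                              # [ 5. -3.], the x that minimizes ||b - Ax||
print(np.linalg.norm(b - A @ x_hat))      # the least-squares error of the approximation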

What is a unit vector?

A vector whose length is 1 is called a unit vector. If we divide a nonzero vector v by its length (that is, multiply by 1/||v||), we obtain a unit vector u because the length of u is (1/||v||)||v|| = 1. The process of creating u from v is also called normalizing v, and we say that u is in the same direction as v.

A symmetric matrix is a matrix A such that...

A^T = A. Such a matrix is necessarily square. Its main diagonal entries are arbitrary, but its other entries occur in pairs—on opposite sides of the main diagonal.

What is an orthonormal set?

A set {u_1, ... u_p} is an orthonormal set if it is an orthogonal set of unit vectors. If W is the subspace spanned by such a set, then {u_1, ... u_p} is an orthonormal basis for W, since the set is automatically linearly independent. The simplest example of an orthonormal set is the standard basis {e_1, ... e_p} for Rn. Any nonempty subset of {e_1, ... e_p} is orthonormal, too

What is a quadratic form on Rn?

A quadratic form on Rn is a function Q defined on Rn whose value at a vector x in Rn can be computed by an expression of the form Q(x) = x^T * Ax, where A is an n x n symmetric matrix. The matrix A is called the matrix of the quadratic form.

What is an Orthogonal Set?

A set of vectors {u_1, ... u_p} in Rn is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if u_i . u_j = 0 whenever i != j. A graphic example may be three line segments that are mutually perpendicular.

The theorem behind the Gram-Schmidt Process shows that any nonzero subspace W of Rn has an orthogonal basis, because...

...because an ordinary basis {x_1, ..., x_p} is always available, and the Gram-Schmidt process depends only on the existence of orthogonal projections onto subspaces of W that already have orthogonal bases.

If z is orthogonal to u1 and to u2 and if W = Span {u_1, u_2}, then z must be in W_perp? t/f

True. 1. A vector z is in W_perp if and only if z is orthogonal to every vector in a set that spans W. 2. W_perp is a subspace of Rn.

The vector y hat in the Best Approximation Theorem is called the best approximation to y by elements of W. State two other relevant points.

1. Some problems require that a given y be replaced, or approximated, by a vector v in some fixed subspace W. The distance from y to v, given by ||y - v||, can be regarded as the "error" of using v in place of y. The theorem says that this error is minimized when v = y hat. 2. y hat does not depend on the particular orthogonal basis used to compute it. If a different orthogonal basis for W were used to construct an orthogonal projection of y, then this projection would also be the closest point in W to y, namely, y hat.

An m x n matrix U has orthonormal columns if and only if... what?

An m x n matrix U has orthonormal columns if and only if U^TU = I.

An n x n matrix A is orthogonally diagonalizable if and only if ...?

An n x n matrix A is orthogonally diagonalizable if and only if A is a symmetric matrix.

An n x n matrix A is said to be orthogonally diagonalizable if ...?

An n x n matrix A is said to be orthogonally diagonalizable if there are an orthogonal matrix P (with P^-1 = P^T) and a diagonal matrix D such that A = PDP^T = PDP^-1. An n x n matrix A is orthogonally diagonalizable if and only if A is a symmetric matrix.

The set of eigenvalues of a matrix A is sometimes called the spectrum of A. Describe the spectral theorem for symmetric matrices - a to d.

An n x n symmetric matrix A has the following properties: a. A has n real eigenvalues, counting multiplicities. b. The dimension of the eigenspace for each eigenvalue lambda equals the multiplicity of lambda as a root of the characteristic equation. c. The eigenspaces are mutually orthogonal, in the sense that eigenvectors corresponding to different eigenvalues are orthogonal. d. A is orthogonally diagonalizable
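
A hedged NumPy illustration of properties (a)-(d) (the symmetric matrix A is an arbitrary example): numpy.linalg.eigh is designed for symmetric matrices and returns real eigenvalues together with orthonormal eigenvectors, i.e., an orthogonal diagonalization A = PDP^T.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric

eigvals, P = np.linalg.eigh(A)           # real eigenvalues, orthonormal eigenvectors
D = np.diag(eigvals)

print(np.allclose(P.T @ P, np.eye(2)))   # P is an orthogonal matrix
print(np.allclose(A, P @ D @ P.T))       # A = PDP^T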

Complete this sentence, "An orthogonal basis for a subspace W of Rn is a basis for W... "

An orthogonal basis for a subspace W of Rn is a basis for W that is also an orthogonal set.

Describe key characteristics of an orthogonal, square invertible matrix U?

An orthogonal matrix is a square invertible matrix U such that U^-1 = U^T, and such a matrix has orthonormal columns. It is easy to see that any square matrix with orthonormal columns is an orthogonal matrix. Surprisingly, such a matrix must have orthonormal rows, too.

Constructing an orthonormal basis by hand...?

An orthonormal basis is constructed easily from an orthogonal basis {v_1, ..., v_p}: simply normalize (i.e., "scale") all the v_k. When working problems by hand, this is easier than normalizing each v_k as soon as it is found (because it avoids unnecessary writing of square roots).

Theorem: The QR Factorization

If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for Col A and R is an n x n upper triangular invertible matrix with positive entries on its diagonal.

What is an outcome of an orthogonal set S of nonzero vectors in Rn?

If S = {u_1, ... u_p} is an orthogonal set of nonzero vectors in Rn, then S is linearly independent and therefore a basis for the subspace spanned by S. The outcome is an Orthogonal Basis

What is an orthogonal complement, and how is it denoted (W_perp)?

If a vector z is orthogonal to every vector in a subspace W of Rn, then z is said to be orthogonal to W. The set of all vectors z that are orthogonal to W is called the orthogonal complement of W and is denoted by W_perp.

What is an inner product

If u and v are vectors in Rn, then we regard u and v as n x 1 matrices. The transpose u^T is a 1 x n matrix, and the matrix product u^Tv is a 1 x 1 matrix, which we write as a scalar. The number u^Tv is called the inner product of u and v, and it is often written as u.v; it is also called the dot product.

If x represents a variable vector in Rn, what is a change of variable?

If x represents a variable vector in Rn, then a change of variable is an equation of the form x = Py, or equivalently, y = P^-1x, where P is an invertible matrix and y is a new variable vector in Rn. Here y is the coordinate vector of x relative to the basis of Rn determined by the columns of P.

If {u_1, ... u_p} is an orthogonal basis for W and if y happens to be in W, then the formula for proj_w y is exactly the same as the representation of y. Define the formula

If y is in W = Span {u_1, ... u_p}, then proj_w y = y.

If you orthogonally diagonalize A, what can be said about the associated unit eigenvectors?

If you can orthogonally diagonalize A, the eigenvectors are automatically orthogonal if they correspond to distinct eigenvalues. In this case they also provide an orthonormal basis for Rn.

What does it mean when the normal equations for a least-squares problem are ill-conditioned

In some cases, the normal equations for a least-squares problem can be ill-conditioned; that is, small errors in the calculations of the entries of A^TA can sometimes cause relatively large errors in the solution x hat. If the columns of A are linearly independent, the least-squares solution can often be computed more reliably through a QR factorization of A.
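
A hedged sketch of the QR route mentioned here, reusing the same made-up A and b as in the least-squares example above: with A = QR, the least-squares solution satisfies R x hat = Q^T b, which avoids forming A^TA.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)                   # A = QR, columns of Q orthonormal
x_hat = np.linalg.solve(R, Q.T @ b)      # solve R x_hat = Q^T b

# Same answer as the normal equations, but without forming A^T A explicitly
print(np.allclose(x_hat, np.linalg.solve(A.T @ A, A.T @ b)))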

Define the relationship between the orthogonal complement, the row space and Null space of A

Let A be an m x n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T: (Row A)_perp = Nul A and (Col A)_perp = Nul A^T.

Theorem of Quadratic Forms and Eigenvalues

Let A be an n x n symmetric matrix. Then a quadratic form x^T * Ax is: a. positive definite if and only if the eigenvalues of A are all positive b. negative definite if and only if the eigenvalues of A are all negative, or c. indefinite if and only if A has both positive and negative eigenvalues. Also, Q is said to be positive semidefinite if Q(x) >= 0 for all x, and to be negative semidefinite if Q(x) <= 0 for all x.
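
A hedged NumPy classification sketch based on this theorem (the example matrices are made up): inspect the signs of the eigenvalues of the symmetric matrix of the quadratic form.

import numpy as np

def classify(A):
    # Classify the quadratic form x^T A x by the signs of A's eigenvalues
    w = np.linalg.eigvalsh(A)            # real eigenvalues of a symmetric matrix
    if np.all(w > 0):
        return "positive definite"
    if np.all(w < 0):
        return "negative definite"
    if np.any(w > 0) and np.any(w < 0):
        return "indefinite"
    return "semidefinite"

print(classify(np.array([[3.0, 0.0], [0.0, 7.0]])))   # positive definite
print(classify(np.array([[3.0, 0.0], [0.0, -7.0]])))  # indefinite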

The Principal Axes Theorem

Let A be an n x n symmetric matrix. Then there is an orthogonal change of variable, x = Py, that transforms the quadratic form x^T * Ax into a quadratic form y^T * Dy with no cross-product term. The columns of P in the theorem are called the principal axes of the quadratic form x^T * Ax. The vector y is the coordinate vector of x relative to the orthonormal basis of Rn given by these principal axes.
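
A hedged NumPy sketch of the change of variable (A is an arbitrary symmetric example): with A = PDP^T from numpy.linalg.eigh, substituting x = Py turns x^T * Ax into y^T * Dy, which has no cross-product term.

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])               # symmetric; Q(x) = x1^2 + 4 x1 x2 + x2^2

eigvals, P = np.linalg.eigh(A)           # columns of P are the principal axes
D = np.diag(eigvals)

y = np.array([0.5, -2.0])                # any coordinate vector
x = P @ y                                # change of variable x = Py

print(np.isclose(x @ A @ x, y @ D @ y))  # x^T A x equals y^T D y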

Consider: P is square, has orthonormal columns, and is an orthogonal matrix. Describe A = PDP^-1 and the resulting theorem.

P^-1 = P^T, so A = PDP^-1 = PDP^T. The resulting theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.

What is the Gram-Schmidt process?

The Gram-Schmidt process is a simple algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of Rn
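
A hedged sketch of the process in NumPy (the input basis X is made up): each new v_k is x_k minus its projections onto the previously produced v's.

import numpy as np

def gram_schmidt(X):
    # Orthogonal basis for Col X, assuming the columns of X are linearly independent
    V = []
    for x in X.T:                            # iterate over the columns of X
        v = x.copy()
        for u in V:                          # subtract the projection onto each earlier v
            v = v - (x @ u) / (u @ u) * u
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
V = gram_schmidt(X)
print(np.isclose(V[:, 0] @ V[:, 1], 0))      # the columns of V are orthogonal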

What factorization is possible for any m x n matrix A?

The factorization A = QDP^-1 is possible for any m x n matrix A! A special factorization of this type, called the singular value decomposition, is one of the most useful matrix factorizations in applied linear algebra

The most important aspect of the least-squares problem is...

The most important aspect of the least-squares problem is that no matter what x we select, the vector Ax will necessarily be in the column space, Col A. So we seek an x that makes Ax the closest point in Col A to b.

Describe the singular values of A.

The singular values of A are the square roots of the eigenvalues of A^TA, denoted by sigma_1, ... sigma_n, and they are arranged in decreasing order. The singular values of A are the lengths of the vectors Av_1, ... Av_n.
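
A hedged NumPy check of this description (A is an arbitrary example): the singular values from numpy.linalg.svd agree with the square roots of the eigenvalues of A^TA, arranged in decreasing order.

import numpy as np

A = np.array([[4.0, 11.0, 14.0],
              [8.0,  7.0, -2.0]])

sigma = np.linalg.svd(A, compute_uv=False)          # singular values, decreasing
eigs  = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^TA, decreasing

print(np.allclose(sigma, np.sqrt(eigs[:len(sigma)])))   # sigma_i = sqrt(lambda_i)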

Define proj_w y

The vector y hat is called the orthogonal projection of y onto W and often is written as proj_w y.

