Math 244 midterm review (detail)


(4.3.5) TRUE. Choosing any vector v in S, the scalar multiple 0v = 0 still belongs to S, since S is closed under scalar multiplication.

A nonempty subset S of a vector space V that is closed under scalar multiplication contains the zero vector of V

(4.4.5) TRUE. To say that a set S of vectors in V spans V is to say that every vector in V belongs to span(S). So V is a subset of span(S). But of course, every vector in span(S) belongs to the vector space V, and so span(S) is a subset of V. Therefore, span(S) = V.

A set S of vectors in a vector space V spans V if and only if the linear span of S is V

(4.4.10) FALSE. For instance, it is easily verified that {x^2, x^2 + x, x^2 + 1} is a spanning set for P2, and yet it contains only polynomials of degree 2.

A spanning set for the vector space P2 must contain a polynomial of each degree 0, 1, and 2
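To see concretely that this spanning set reaches the lower-degree polynomials, here is a minimal Python sketch; the coefficient-tuple representation of a + bx + cx^2 as (a, b, c) and the helper names `add`/`scale` are my own conventions, not from the text:

```python
# The set S = {x^2, x^2 + x, x^2 + 1} contains only degree-2 polynomials,
# yet it spans P2, since 1 = (x^2 + 1) - x^2 and x = (x^2 + x) - x^2.

def add(p, q):
    # add two polynomials given as coefficient tuples
    return tuple(a + b for a, b in zip(p, q))

def scale(k, p):
    # multiply a polynomial by the scalar k
    return tuple(k * a for a in p)

x2   = (0, 0, 1)   # x^2
x2_x = (0, 1, 1)   # x^2 + x
x2_1 = (1, 0, 1)   # x^2 + 1

one = add(x2_1, scale(-1, x2))   # recovers the constant polynomial 1
x   = add(x2_x, scale(-1, x2))   # recovers the polynomial x
```

With 1, x, and x^2 all in the span, every a + bx + cx^2 is reachable.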

(4.1.5) FALSE. There is no such name for a vector whose components are all positive.

A vector whose components are all positive is called a positive vector

(4.4.7) FALSE. There are vector spaces that do not contain finite spanning sets. For instance, if V is the vector space consisting of all polynomials with coefficients in R, then since a finite spanning set could not contain polynomials of arbitrarily large degree, no finite spanning set is possible for V.

Every vector space V has a finite spanning set

(4.1.4) TRUE. The vector (−1)·(x1, x2, ..., xn) is precisely (−x1, −x2, ..., −xn), and this is the additive inverse of (x1, x2, ..., xn).

For every vector (x1, x2, ..., xn) in R^n, the vector (−1)·(x1, x2, ..., xn) is an additive inverse

(4.4.3) TRUE. Every vector in V can be expressed as a linear combination of the vectors in S, and therefore it is also true that every vector in W can be expressed as a linear combination of the vectors in S. Therefore, S spans W, and S is a spanning set for W.

If S is a spanning set for a vector space V and W is a subspace of V, then S is a spanning set for W

(4.2.5) TRUE. This is part of Theorem 4.2.6: the additive inverse of each vector is unique.

The additive inverse of a vector v in a vector space V is unique

(5.7.2) TRUE. The eigenspace Eλ is equal to the null space of the n × n matrix A − λI, and this null space is a subspace of R^n.

Each eigenspace of an n × n matrix is a subspace of R^n

(5.7.3) TRUE. The dimension of an eigenspace never exceeds the algebraic multiplicity of the corresponding eigenvalue.

If A has an eigenvalue λ of algebraic multiplicity 3, then the eigenspace Eλ cannot be more than three-dimensional

(4.4.8) FALSE. To illustrate this, consider V = R^2 and the spanning set S = {(1, 0), (0, 1), (1, 1)}. The proper subset S' = {(1, 0), (0, 1)} is still a spanning set for V. Therefore, it is possible for a proper subset of a spanning set for V to still be a spanning set for V.

If S is a spanning set for a vector space V, then any proper subset S' of S is not a spanning set for V

(4.4.4) FALSE. To illustrate this, consider V = R^2 and the spanning set S = {(1, 0), (0, 1), (1, 1)}. Then the vector v = (2, 2) can be expressed as a linear combination of the vectors in S in more than one way: v = 2(1, 1) and v = 2(1, 0) + 2(0, 1). Many other illustrations, using a variety of different vector spaces, are also possible.

If S is a spanning set for a vector space V, then every vector v in V must be uniquely expressible as a linear combination of the vectors in S
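The two representations of v = (2, 2) can be checked directly; a small sketch (the `comb` helper is my own illustration, not from the text):

```python
# The spanning set S = {(1,0), (0,1), (1,1)} for R^2 represents v = (2,2)
# in more than one way, so representations need not be unique.

def comb(coeffs, vecs):
    # linear combination of 2-vectors with the given coefficients
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(2))

S = [(1, 0), (0, 1), (1, 1)]
rep1 = comb([0, 0, 2], S)   # 2*(1,1)
rep2 = comb([2, 2, 0], S)   # 2*(1,0) + 2*(0,1)
```

Both coefficient lists produce the same vector (2, 2).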

(4.4.2) FALSE. In order to say that S spans V, it must be true that all vectors in V can be expressed as a linear combination of the vectors in S, not simply "some" vector.

If some vector v in a vector space V is a linear combination of vectors in a set S, then S spans V

(4.2.2) FALSE. The statement would be true if it were required that v be nonzero. However, if v = 0, then rv = sv = 0 for all values of r and s, and r and s need not be equal. We conclude that the statement is not true.

If v is a vector in a vector space V, and r and s are scalars such that rv = sv, then r = s

(4.2.4) TRUE. We have (x + y) + ((−x) + (−y)) = (x + (−x)) + (y + (−y)) = 0 + 0 = 0, where we have used the vector space axioms in these steps. Therefore, the additive inverse of x + y is (−x) + (−y).

If x and y are vectors in a vector space V, then the additive inverse of x + y is (−x) + (−y)

(4.4.1) TRUE. By its very definition, the linear span of a set of vectors is closed under addition and under scalar multiplication. Therefore, it is a subspace of V.

The linear span of a set of vectors in a vector space V forms a subspace of V

(4.4.6) FALSE. This is not necessarily the case. For example, the linear span of the vectors (1, 1, 1) and (2, 2, 2) is simply a line through the origin, not a plane.

The linear span of two vectors in R^3 is a plane through the origin

(4.3.1) FALSE. The null space of an m × n matrix A is a subspace of R^n, not R^m.

The null space of an m × n matrix A with real elements is a subspace of R^m

(4.3.3) TRUE. If b = 0, then the line is y = mx, which is a line through the origin of R^2, a one-dimensional subspace of R^2. On the other hand, if b ≠ 0, then the origin does not lie on the given line, and therefore, since the line does not contain the zero vector, it cannot form a subspace of R^2 in this case.

The points in R^2 that lie on the line y = mx + b form a subspace of R^2 if and only if b = 0

(4.5.2) TRUE. We have seven column vectors, and each of them belongs to R^5. Therefore, the number of vectors present exceeds the number of components in those vectors, and hence they must be linearly dependent.

The set of column vectors of a 5 × 7 matrix A must be linearly dependent
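One way to see this computationally: the rank of a 5 × 7 matrix is at most 5, so its 7 columns can never be linearly independent. A rough sketch using exact arithmetic; the `rank` routine and the sample matrix below are my own illustrations, not from the text:

```python
from fractions import Fraction

def rank(rows):
    # row-reduce a matrix (list of rows) and count the pivots
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Any 5x7 matrix has rank at most 5, so its 7 columns must be dependent.
A = [[1, 0, 0, 0, 0, 1, 2],
     [0, 1, 0, 0, 0, 1, 0],
     [0, 0, 1, 0, 0, 0, 1],
     [0, 0, 0, 1, 0, 1, 1],
     [0, 0, 0, 0, 1, 0, 1]]
cols_independent = rank(A) == 7   # columns independent only if rank == 7
```

Here `rank(A)` is 5, so `cols_independent` is `False`, as the theorem predicts.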

(4.5.3) FALSE. For instance, the 7 × 5 zero matrix does not have linearly independent columns.

The set of column vectors of a 7 × 5 matrix A must be linearly independent

(4.2.8) FALSE. This set is not closed under scalar multiplication. If k < 0 and x is a positive real number, the result kx is a negative real number, and therefore no longer belongs to the set of positive real numbers.

The set of positive real numbers, with the usual operations of addition and scalar multiplication, forms a vector space

(4.1.3) TRUE. The solution set refers to collections of values of the unknowns that solve the linear system. Since this system has 6 unknowns, the solution set will consist of vectors belonging to R^6.

The solution set to a linear system of 4 equations and 6 unknowns consists of a collection of vectors in R^6

(4.4.12) TRUE. This is explained in True-False Review Question 7 above.

The vector space P of all polynomials with real coefficients cannot be spanned by a finite set S

(4.4.9) TRUE. The general matrix [[a, b, c], [0, d, e], [0, 0, f]] in this vector space can be written as aE11 + bE12 + cE13 + dE22 + eE23 + fE33, and therefore the matrices in the set {E11, E12, E13, E22, E23, E33} span the vector space.

The vector space of 3 × 3 upper triangular matrices is spanned by the matrices Eij, where 1 ≤ i ≤ j ≤ 3
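A small sketch of the spanning claim, rebuilding a general upper triangular matrix from the Eij matrices; the helper names are my own, not from the text:

```python
# A 3x3 upper triangular matrix [[a,b,c],[0,d,e],[0,0,f]] equals
# a*E11 + b*E12 + c*E13 + d*E22 + e*E23 + f*E33, where Eij has a 1
# in position (i, j) and zeros elsewhere.

def E(i, j):
    # standard basis matrix with a single 1 in position (i, j), 1-indexed
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(3)] for r in range(3)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(k, A):
    return [[k * x for x in row] for row in A]

a, b, c, d, e, f = 1, 2, 3, 4, 5, 6
M = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
positions = [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
for coeff, (i, j) in zip([a, b, c, d, e, f], positions):
    M = madd(M, mscale(coeff, E(i, j)))
```

The accumulated `M` is exactly [[a, b, c], [0, d, e], [0, 0, f]].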

(4.2.1) TRUE. This is part 1 of Theorem 4.2.6: "the zero vector is unique."

The zero vector in a vector space V is unique

(4.1.11) FALSE. If the three vectors lie on the same line or the same plane, the resulting object may determine only a one-dimensional segment or a two-dimensional area. For instance, if x = y = z = (1, 0, 0), then the vectors x, y, and z rest on the segment from (0, 0, 0) to (1, 0, 0) and do not determine a three-dimensional solid region.

Three vectors x, y, and z in R^3 always determine a 3-dimensional solid region in R^3

(4.1.2) TRUE. The unique additive inverse of (x, y, z) is (−x, −y, −z).

Each vector (x, y, z) in R^3 has exactly one additive inverse

(4.1.7) TRUE. When the vector x is scalar-multiplied by zero, each component becomes zero: 0x = 0. This is the zero vector in R^n.

For every vector x in R^n, the vector 0x is the zero vector in R^n

(4.3.7) FALSE. For instance, if we consider V = R^3, then the xy-plane forms a subspace of V and the x-axis forms a subspace of V. These two subspaces have in common all points along the x-axis. Other examples abound as well.

If V is a vector space, then two different subspaces of V can contain no common vectors other than 0

(4.3.4) FALSE. The spaces R^m and R^n, with m < n, are not comparable. Neither of them is a subset of the other, and therefore neither of them can form a subspace of the other.

If m < n, then R^m is a subspace of R^n

(4.4.11) FALSE. For instance, consider m = 2 and n = 3. Then one spanning set for R^2 is {(1, 0), (0, 1), (1, 1), (2, 2)}, which consists of four vectors. On the other hand, one spanning set for R^3 is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, which consists of only three vectors.

If m < n, then any spanning set for R^n must contain more vectors than any spanning set for R^m

(4.5.9) FALSE. The illustration given in part (c) of Example 4.5.22 gives an excellent case in point here. Basically, you cannot conclude anything from this result; the set may or may not be linearly dependent.

If the Wronskian of a set of functions is identically 0 at every point of an interval I, then the set of functions is linearly dependent

(4.5.5) TRUE. This is stated in Theorem 4.5.21: "Let f1, f2, ..., fk be functions in C^(k−1)(I). If W[f1, f2, ..., fk] is nonzero at some point x0 in I, then {f1, f2, ..., fk} is linearly independent on I."

If the Wronskian of a set of functions is nonzero at some point x0 in an interval I, then the set of functions is linearly independent
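For example, for the functions 1, x, x^2, the Wronskian matrix is upper triangular and its determinant is 2 at every point, so the set is linearly independent. A minimal sketch with the derivatives entered by hand (my own illustration, not from the text):

```python
# Wronskian of f1 = 1, f2 = x, f3 = x^2 at a point x0: the rows hold the
# functions and their first two derivatives, written out by hand.

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def wronskian(x0):
    # rows: (1, x, x^2), (0, 1, 2x), (0, 0, 2) evaluated at x0
    return det3([[1, x0, x0 ** 2],
                 [0, 1, 2 * x0],
                 [0, 0, 2]])
```

Since the matrix is upper triangular, the Wronskian is 1 · 1 · 2 = 2 at every x0, nonzero somewhere (indeed everywhere), so {1, x, x^2} is linearly independent.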

(4.1.12) FALSE. The components of kx only remain even integers if k is an integer. But, for example, if k = π, then the components of kx are not integers at all, let alone even integers.

If x and y are vectors in R^2 whose components are even integers and k is a scalar, then x + y and kx are also vectors in R^2 whose components are even integers

(4.5.6) TRUE. If we can write v = c1v1 + c2v2 + ··· + ckvk, then {v, v1, v2, ..., vk} is a linearly dependent set.

If it is possible to express one of the vectors in a set S as a linear combination of the others, then S is a linearly dependent set

(4.1.8) TRUE. This is seen geometrically from addition and subtraction of geometric vectors.

The parallelogram whose sides are determined by vectors x and y in R^2 has diagonals determined by the vectors x + y and x − y

(4.2.3) FALSE. This set is not closed under scalar multiplication. In particular, if k is an irrational number and v is a nonzero integer, then kv is not an integer.

The set Z of integers, together with the usual operations of addition and scalar multiplication, forms a vector space

(4.2.7) FALSE. This set is not closed under addition, since 1 + 1 is not in the set {0, 1}. Therefore, axiom (A1) fails, and hence this set does not form a vector space. (It is worth noting that the set is also not closed under scalar multiplication.)

The set {0, 1}, with the usual operations of addition and scalar multiplication, forms a vector space

(4.2.6) TRUE. This is called the trivial vector space. Since 0 + 0 = 0 and k·0 = 0, it is closed under addition and scalar multiplication. Both sides of the remaining axioms yield 0, and 0 is the zero vector and is its own additive inverse.

The set {0}, with the usual operations of addition and scalar multiplication, forms a vector space

(4.1.10) TRUE. Recalling that i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1), we have 5i − 6j + sqrt(2)k = 5(1, 0, 0) − 6(0, 1, 0) + sqrt(2)(0, 0, 1) = (5, −6, sqrt(2)), as stated.

The vector 5i − 6j + sqrt(2)k in R^3 is the same as (5, −6, sqrt(2))
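The componentwise arithmetic is easy to verify; a tiny sketch:

```python
import math

# i = (1,0,0), j = (0,1,0), k = (0,0,1); form 5i - 6j + sqrt(2) k componentwise.
i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
v = tuple(5 * a - 6 * b + math.sqrt(2) * c for a, b, c in zip(i, j, k))
```

Each component picks up a contribution from exactly one of i, j, k, giving (5, −6, sqrt(2)).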

(5.8.1) TRUE. The terms "diagonalizable" and "nondefective" are synonymous. The diagonalizability of a matrix A hinges on the ability to form an invertible matrix S whose columns are a full set of linearly independent eigenvectors of the matrix. This, in turn, requires the original matrix to be nondefective.

A square matrix A is diagonalizable if and only if it is nondefective

(5.7.6) FALSE. Many examples show that this statement is false, including the n × n identity matrix I_n for n ≥ 2. The matrix I_n is not defective, and yet λ = 1 occurs with algebraic multiplicity n.

If a matrix A has a repeated eigenvalue, then it is defective

(5.6.4) FALSE. Many examples of this can be found. As a simple one, consider A = [[0, 0], [0, 0]] and B = [[0, 1], [0, 0]]. We have det(A − λI) = (−λ)^2 = λ^2 = det(B − λI). Note that every nonzero vector in R^2 is an eigenvector of A corresponding to λ = 0. However, only vectors of the form (a, 0) with a ≠ 0 are eigenvectors of B. Therefore, A and B do not have precisely the same set of eigenvectors. In this case, every eigenvector of B is also an eigenvector of A, but not conversely.

If two matrices A and B have exactly the same characteristic [eigen] polynomial, then A and B must have exactly the same set of eigenvectors

(5.6.2) TRUE. When we compute det(A − λI) for an upper or lower triangular matrix A, the determinant is the product of the entries lying along the main diagonal of A − λI: det(A − λI) = (a11 − λ)(a22 − λ)···(ann − λ). The roots of this characteristic equation are precisely the values a11, a22, ..., ann along the main diagonal of the matrix A.

The eigenvalues of an upper or lower triangular matrix A are the entries appearing on the main diagonal of A

(5.6.7) FALSE. This is not true, in general, when the linear combination formed involves eigenvectors corresponding to different eigenvalues. For example, let A = [[1, 0], [0, 2]], with eigenvalues λ = 1 and λ = 2. It is easy to see that corresponding eigenvectors are, respectively, v1 = (1, 0) and v2 = (0, 1). However, note that A(v1 + v2) = (1, 2), which is not of the form λ(v1 + v2), and therefore v1 + v2 is not an eigenvector of A. As a more trivial illustration, note that if v is an eigenvector of A, then 0v is a linear combination of {v} that is no longer an eigenvector of A.

A linear combination of a set of eigenvectors of a matrix A is again an eigenvector of A
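The 2 × 2 counterexample from the explanation can be checked numerically; the helper names below are my own:

```python
# A = [[1, 0], [0, 2]] has eigenvectors v1 = (1, 0) (eigenvalue 1) and
# v2 = (0, 1) (eigenvalue 2). Their sum (1, 1) maps to (1, 2), which is
# not a scalar multiple of (1, 1), so it is not an eigenvector.

def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

def is_parallel(u, v):
    # the 2-D cross product is zero exactly when u and v are parallel
    return u[0] * v[1] - u[1] * v[0] == 0

A = [[1, 0], [0, 2]]
v1, v2 = (1, 0), (0, 1)
w = (v1[0] + v2[0], v1[1] + v2[1])   # w = v1 + v2 = (1, 1)
Aw = matvec(A, w)                     # (1, 2)
```

`Aw` is not parallel to `w`, even though each of v1 and v2 individually is mapped to a parallel vector.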

(5.8.7) TRUE. Since I_n^(−1) · A · I_n = A, A is similar to itself.

A square matrix A is always similar to itself

(5.8.4) FALSE. An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Besides, every matrix actually has infinitely many eigenvectors, obtained by taking scalar multiples of a single eigenvector v.

An n × n matrix is diagonalizable if and only if it has n eigenvectors

(5.7.7) TRUE. Eigenvectors corresponding to distinct eigenvalues are always linearly independent, as proved in the text in this section.

If (λ1, v1) and (λ2, v2) are two eigenvalue/eigenvector pairs of a matrix A with λ1 ≠ λ2, then {v1, v2} is linearly independent

(5.8.6) TRUE. Assuming that A is diagonalizable, there exists an invertible matrix S and a diagonal matrix D such that S^(−1)AS = D. Therefore, D^2 = (S^(−1)AS)^2 = (S^(−1)AS)(S^(−1)AS) = S^(−1)A(SS^(−1))AS = S^(−1)A^2S. Since D^2 is still a diagonal matrix, this equation shows that A^2 is diagonalizable.

If A is a diagonalizable matrix, then so is A^2
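The identity D^2 = S^(−1)A^2S can be checked on a small hand-picked example; the particular S and D below are my own choices, not from the text:

```python
# If S^-1 A S = D (diagonal), then S^-1 A^2 S = D^2, so A^2 is
# diagonalized by the same matrix S. Small 2x2 check.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S     = [[1, 1], [0, 1]]
S_inv = [[1, -1], [0, 1]]   # inverse of S, verified below
D     = [[1, 0], [0, 2]]

A  = matmul(matmul(S, D), S_inv)           # A = S D S^-1
A2 = matmul(A, A)
conjugated = matmul(matmul(S_inv, A2), S)  # S^-1 A^2 S, should equal D^2
```

Conjugating A^2 by the same S recovers the diagonal matrix D^2 = diag(1, 4).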

(5.8.2) TRUE. If we assume that A is diagonalizable, then there exists an invertible matrix S and a diagonal matrix D such that S^(−1)AS = D. Since A is invertible, we can take the inverse of each side of this equation to obtain D^(−1) = (S^(−1)AS)^(−1) = S^(−1)A^(−1)S, and since D^(−1) is still a diagonal matrix, this equation shows that A^(−1) is diagonalizable.

If A is an invertible, diagonalizable matrix, then so is A^(−1) [A inverse]

(5.8.8) TRUE. The sum of the dimensions of the eigenspaces of such a matrix is even, and therefore not equal to n. This means we cannot obtain n linearly independent eigenvectors for A, and therefore A is defective (and not diagonalizable).

If A is an n × n matrix with n odd whose eigenspaces are all even-dimensional, then A is not diagonalizable

(5.6.6) TRUE. The characteristic polynomial of an n × n matrix A, det(A − λI), is a polynomial of degree n in the indeterminate λ. Since such a polynomial always possesses n roots (with possible repeated or complex roots) by the Fundamental Theorem of Algebra, the statement is true.

If A is an n × n matrix, then A has n eigenvalues, including possible repeated eigenvalues and complex eigenvalues

(5.6.5) TRUE. Geometrically, every nonzero point v = (x, y) in R^2 is oriented in a different direction from the origin after a 90° rotation than it is initially. Therefore, the vectors v and Av are never parallel, so A has no real eigenvalues.

If A is the 2 × 2 matrix of the linear transformation T: R^2 -> R^2 that rotates the points of the xy-plane counterclockwise by 90 degrees, then A has no real eigenvalues

(5.7.4) TRUE. Eigenvectors corresponding to distinct eigenvalues are linearly independent. Therefore, if we choose one (nonzero) vector from each distinct eigenspace, the chosen vectors will form a linearly independent set.

If S is a set consisting of exactly one nonzero vector from each eigenspace of a matrix A, then S is linearly independent

(5.8.5) TRUE. Assume A is an n × n matrix such that p(λ) = det(A − λI) has no repeated roots. This implies that A has n distinct eigenvalues. Corresponding to each eigenvalue, we can select an eigenvector. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, this yields n linearly independent eigenvectors for A. Therefore, A is nondefective and hence diagonalizable.

If the characteristic [eigen] polynomial p(λ) of a matrix A has no repeated roots, then A is diagonalizable

(5.7.5) TRUE. Since each eigenvalue of the matrix A occurs with algebraic multiplicity 1, we can simply choose one eigenvector from each eigenspace to obtain a basis of eigenvectors for A. Thus, A is nondefective.

If the eigenvalues of a 3 × 3 matrix A are λ = −1, 2, 3, then A is nondefective

(5.8.3) FALSE. For instance, the matrices A = I_2 and B = [[1, 1], [0, 1]] both have the eigenvalue λ = 1 (with multiplicity 2). However, A and B are not similar. [Reason: If A and B were similar, then S^(−1)AS = B for some invertible matrix S, but since A = I_2, this would imply that B = I_2, contrary to our choice of B above.]

If two matrices A and B have the same set of eigenvalues (including multiplicities), then they are similar

(5.6.8) TRUE. This is basically a fact about roots of polynomials. Complex roots of real polynomials always occur in complex conjugate pairs. Therefore, if λ = a + ib (b ≠ 0) is an eigenvalue of A, then so is λ = a − ib.

If λ is an eigenvalue of the matrix A, then λ^2 is an eigenvalue of A^2
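The statement's claim follows from A^2 v = A(λv) = λ(Av) = λ^2 v whenever Av = λv. A quick numerical check on a hand-picked matrix (my own example, not from the text):

```python
# A = [[2, 1], [0, 3]] has eigenvalue 3 with eigenvector v = (1, 1);
# applying A twice should scale v by 3^2 = 9.

def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

A = [[2, 1], [0, 3]]
v = (1, 1)
Av  = matvec(A, v)    # 3 * v
A2v = matvec(A, Av)   # 9 * v, i.e. lambda^2 * v
```

So v is also an eigenvector of A^2, now with eigenvalue λ^2 = 9.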

(4.6.1). FALSE. It is not enough that S spans V . It must also be the case that S is linearly independent.

A basis for a vector space V is a set S of vectors that spans V

(5.6.1) FALSE. If v = 0, then Av = λv = 0 holds for every λ, but by definition an eigenvector must be a nonzero vector.

An eigenvector corresponding to the eigenvalue λ of a matrix A is any vector v such that Av = λv

(4.5.1) FALSE. For instance, consider the vector space V = R^2. Here are two different minimal spanning sets for V: {(1, 0), (0, 1)} and {(1, 0), (1, 1)}. Many other examples of this abound.

Every vector space V possesses a unique minimal spanning set

(4.6.11) FALSE. The set of all 3 × 3 upper triangular matrices forms a 6-dimensional subspace of M3(R), not a 3-dimensional subspace. One basis is given by {E11, E12, E13, E22, E23, E33}.

The set of all 3 × 3 upper triangular matrices forms a three-dimensional subspace of M3(R)

(4.1.1) FALSE. The vectors (x, y) and (x, y, 0) do not belong to the same set, so they are not even comparable, let alone equal to one another.

The vector (x, y) in R^2 is the same as the vector (x, y, 0) in R^3

(4.5.8) FALSE. None of the vectors (1, 0), (0, 1), and (1, 1) in R^2 is proportional to another, and yet they form a linearly dependent set of vectors.

A set of three vectors in a vector space V is linearly dependent if and only if all three vectors are proportional to one another

(4.6.3) TRUE. Any set of two non-proportional vectors in R^2 will form a basis for R^2.

A vector space V can have many different bases

(5.7.1) TRUE. This is the definition of a nondefective matrix.

An n × n matrix A is nondefective if it has n linearly independent eigenvectors

(4.5.4) TRUE. Any linear dependencies within the subset are also linear dependencies within the original, larger set of vectors. Therefore, if the nonempty subset were linearly dependent, then the original set would also be linearly dependent. In other words, if the original set is linearly independent, then so is the nonempty subset.

Any nonempty subset of a linearly independent set of vectors is linearly independent

(4.6.4) FALSE. We have dim[Pn] = n + 1 and dim[R^n] = n.

dim[Pn] = dim[R^n]

(4.6.10) TRUE. We can build such a subset by choosing vectors from the set as follows. Choose v1 to be any vector in the set. Now choose v2 in the set such that v2 ∉ span{v1}. Next, choose v3 in the set such that v3 ∉ span{v1, v2}. Proceed in this manner until it is no longer possible to find a vector in the set that is not spanned by the collection of previously chosen vectors. This point will occur eventually, since V is finite-dimensional. Moreover, the chosen vectors form a linearly independent set, since each vi is chosen from outside span{v1, v2, ..., v(i−1)}. Thus, the set we obtain in this way is a linearly independent set of vectors that spans V, and hence forms a basis for V.

Every set of vectors that spans a finite-dimensional vector space V contains a subset which forms a basis for V

(4.6.6) TRUE. We have dim[P3] = 4, and so any set of more than four vectors in P3 must be linearly dependent (a maximal linearly independent set in a 4-dimensional vector space consists of four vectors).

Five vectors in P3 must be linearly dependent

(4.3.6) FALSE. This set is not closed under addition. For instance, the point (1, 1, 0) lies in the xy-plane and the point (0, 1, 1) lies in the yz-plane, but (1, 1, 0) + (0, 1, 1) = (1, 2, 1) does not belong to S. Therefore, S is not a subspace of V.

If V = R^3 and S consists of all points on the xy-plane, the xz-plane, and the yz-plane, then S is a subspace of V

(4.6.2) FALSE. For example, R^2 is not a subspace of R^3, since R^2 is not even a subset of R^3.

If V and W are vector spaces of dimensions m and n, respectively, and if n > m, then W is a subspace of V

(4.6.5) FALSE. For example, if V = R^2, then the set S = {(1, 0), (2, 0), (3, 0)}, consisting of 3 > 2 vectors, fails to span V, a 2-dimensional vector space.

If V is an n-dimensional vector space, then any set S of m vectors with m > n must span V

(4.6.9). FALSE. Only linearly independent sets with fewer than n vectors can be extended to a basis for V .

if V is an n-dimensional vector space, then every set S with fewer than n vectors can be extended to a basis for V

(4.5.7). TRUE. This is a rephrasing of the statement in True-False Review Question 5 above.

if a set S in a vector space V contains a linearly dependent subset, then S itself is a linearly dependent set

(4.1.6) FALSE. The correct result is (s + t)(x + y) = (s + t)x + (s + t)y = sx + tx + sy + ty.

If s and t are scalars and x and y are vectors in R^n, then (s + t)(x + y) = sx + ty

(5.6.3)TRUE. The eigenvalues of a matrix are precisely the set of roots of its characteristic equation. Therefore, two matrices A and B that have the same characteristic equation will have the same eigenvalues.

if two matrices A and B have the same characteristic polynomial, then A and B must have the same set of eigenvalues

(4.1.9) FALSE. If k < 0, then kx is a vector in the third quadrant. For instance, (1, 1) lies in the first quadrant, but (−2)(1, 1) = (−2, −2) lies in the third quadrant.

If x is a vector in the first quadrant of R^2, then any scalar multiple kx of x is still a vector in the first quadrant of R^2

(4.6.8) TRUE. Since M3(R) is 9-dimensional, any set of 10 vectors in this vector space must be linearly dependent by Theorem 4.6.4: "If a finite-dimensional vector space has a basis consisting of m vectors, then any set of more than m vectors is linearly dependent."

Ten vectors in M3(R) must be linearly dependent

(4.3.2) FALSE. It is not necessarily the case that 0 belongs to the solution set of the linear system. In fact, 0 belongs to the solution set if and only if the system is homogeneous.

The solution set of any linear system of m equations in n variables forms a subspace of C^n

(4.6.7) FALSE. For instance, the two vectors 1 + x and 2 + 2x in P3 are linearly dependent.

Two vectors in P3 must be linearly independent

