Linear Algebra

If U, V and W are vector spaces such that W is a subspace of V and U is a subspace of V, then W = U. T/F

"If U, V and W are vector spaces such that W is a subspace of V and U is a subspace of V", it's possible that parts of W exist that are not part of U and vice versa. Hence, it's not necessarily true that W would be equal to U, and we would have to regard this statement as FALSE.

If V and W are both subspaces of a vector space U, then the intersection of V and W is also a subspace. T/F

"If V and W are both subspaces of a vector space U, then its TRUE that the intersection of V and W is also a subspace" on the premise that if V and W are subspaces of vector V, then the intersection of V and W is also a subspace.

If W is a subspace of R^2, then W must contain the vector (0,0). T/F

"If W is a subspace of R2 , then W must contain the vector (0,0)" is a TRUE statement on the premise of the additive inverse property, which holds when we consider x and y to be vectors in R2, and when we let y = -x, if follows that x+y is equal to x + (-x), and to the zero vector. QED.

If W is a subspace of a vector space V, then it has closure under addition as defined in V.

"If W is a subspace of a vector space V, then it has closure under addition as defined in V" is a TRUE statement by the definition of a subspace, which states that every subspace of a vector space has all the properties of the vector space.

Ten Axioms

1) Closure under addition 2) Associative property of addition 3) Additive identity 4) Additive inverse 5) Commutative property of addition 6) Closure under scalar multiplication 7) Associative property of scalar multiplication 8) Scalar identity 9 & 10) Distributive properties

A real vector space is a set X with:

1) a special element 0 (the zero vector) 2) the operations of vector addition and scalar multiplication (together with additive inverses), satisfying the ten vector space axioms

Definition of row equivalence:

Two matrices are row equivalent when one can be obtained from the other by a finite sequence of elementary row operations.
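
A minimal sketch of row equivalence (the matrix and the particular operations below are illustrative, not from the card): applying elementary row operations in NumPy produces a matrix that is row equivalent to the original by construction.

```python
import numpy as np

# Start from an arbitrary matrix and apply the three elementary row operations.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 7.0]])

B = A.copy()
B[1] = B[1] - 2 * B[0]   # R2 <- R2 - 2*R1 (add a multiple of one row to another)
B[0] = 3 * B[0]          # R1 <- 3*R1     (multiply a row by a nonzero constant)
B[[0, 1]] = B[[1, 0]]    # swap R1 and R2 (interchange two rows)

print(B)  # B is row equivalent to A, since only elementary row operations were used
```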

Under what conditions will a set consisting of a single vector be linearly independent?

A set consisting of a single vector v is linearly independent if and only if v is nonzero: the equation cv = 0 then has only the trivial solution c = 0.

trivial solution

The trivial solution of a homogeneous system Ax = 0 is the solution in which every variable equals 0, i.e. x is the zero vector.

A matrix equation is of the form:

Ax = b, where: 1) A is the matrix of coefficients (constants) 2) x is the column vector of unknowns 3) b is the column vector of constants on the right-hand side
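
A small illustrative example (the system is arbitrary, not from the card) of writing a 2 x 2 system as Ax = b and solving it numerically:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # matrix of coefficients
b = np.array([5.0, 10.0])       # column vector of constants (right-hand side)

x = np.linalg.solve(A, b)       # column vector of unknowns
print(x)                        # [1. 3.], i.e. x1 = 1, x2 = 3
```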

Additive Identity:

In C[0,1], the zero function is continuous, so it serves as the additive identity (f + 0 = f for every f); the additive identity property is therefore satisfied.

The set of all ordered triples (x,y,z) of real numbers, where y >= 0, with the standard operations of R^3 is a vector space. T/F

FALSE. The triple (1,2,3) is in the set since y = 2 >= 0, but multiplying it by the scalar -2 gives (-2,-4,-6), whose y-coordinate -4 is less than 0. The result is not in the set, so the set is not closed under scalar multiplication and is not a vector space.

The additive inverse of a vector is not unique. T/F

False

The inverse of the product of two matrices is the product of their inverses, that is (AB)^-1 = (A^-1)(B^-1). T/F

False. The correct identity reverses the order: (AB)^-1 = (B^-1)(A^-1).
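
A quick numerical check of the order reversal, using two arbitrarily chosen invertible 2 x 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])

lhs = np.linalg.inv(A @ B)
correct = np.linalg.inv(B) @ np.linalg.inv(A)   # (AB)^-1 = B^-1 A^-1
claimed = np.linalg.inv(A) @ np.linalg.inv(B)   # the statement's (A^-1)(B^-1)

print(np.allclose(lhs, correct))  # True
print(np.allclose(lhs, claimed))  # False for these matrices
```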

A set S of k vectors, k > 2, is linearly independent if and only if at least one of the vectors can be written as a linear combination of the other vectors. T/F

False. The theorem on properties of linearly dependent sets states that a set S is linearly dependent (not independent) if and only if at least one of its vectors can be written as a linear combination of the other vectors in S.

Adding a multiple of one column of a matrix to another column changes only the sign of the determinant. T/F

False. Adding a multiple of one column of a matrix to another column does not change the value of the determinant.
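
A small illustrative check of this determinant property with an arbitrary 3 x 3 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0],
              [2.0, 0.0, 5.0]])

B = A.copy()
B[:, 2] = B[:, 2] + 4 * B[:, 0]   # add 4 * column 0 to column 2

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True: determinant unchanged
```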

If dim(V) = n, then there exists a set of n-1 vectors in V that will span V. T/F

False. A set that spans V must contain at least n = dim(V) vectors. A set of n-1 vectors has fewer than n vectors and therefore cannot span V.

The set of all first-degree polynomials with the standard operations is a vector space. T/F

False. The set is not closed under addition: the sum of two first-degree polynomials need not be a first-degree polynomial. For example, x + (-x) = 0, the zero polynomial, which is not of the first degree.

If dim(V) = n, then there exists a set of n+1 vectors in V that will span V. T/F

True. A spanning set for V must contain at least n vectors, and such a set of n+1 vectors exists: take a basis of n vectors for V and add any additional vector from V. The enlarged set of n+1 vectors still spans V.

Inverse:

For each element x in X, its additive inverse -x is also a member of X.

Cramer's Rule:

Given a square system of linear equations Ax = b with det(A) != 0, Cramer's Rule is a handy way to solve for just one of the variables without solving the whole system: x_i = det(A_i) / det(A), where A_i is the matrix A with its i-th column replaced by b.
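
A minimal sketch of Cramer's Rule in code (the helper name cramer_solve_for and the example system are illustrative, not from the card):

```python
import numpy as np

def cramer_solve_for(A, b, i):
    """Solve for the single unknown x_i of Ax = b via Cramer's Rule.

    Replace column i of A with b and divide the two determinants.
    Requires det(A) != 0.
    """
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Cramer's Rule requires det(A) != 0")
    A_i = A.copy()
    A_i[:, i] = b                 # A with its i-th column replaced by b
    return np.linalg.det(A_i) / det_A

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer_solve_for(A, b, 0))  # ~1.0: the first unknown, found without solving for the second
```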

Addition:

Given two elements x and y in the set X, the sum x + y must also be a member of X.

Scalar Identity

Holds in C[0,1]: multiplying any continuous function f by the scalar 1 gives 1*f = f, the same continuous function.

If dim(V) = n, then any set of n+1 vectors in V must be linearly dependent. T/F

True. If S is a basis for a vector space V of dimension n, then every set containing more than n vectors in V is linearly dependent. In particular, any set of n+1 vectors in V must be linearly dependent.

If dim(V) = n, then any set of n-1 vectors in V must be linearly independent. T/F

False. If dim(V) = n, a set of n-1 vectors could still be linearly dependent; for instance, the vectors could all be multiples of one another. Therefore the statement that any set of n-1 vectors in V must be linearly independent is false.

Prove whether the set S = { f ∈ C[0,1] : ∫₀¹ f(x) dx = 0 } is a subspace of C[0,1].

In order for S to be a subspace of C[0,1], it must satisfy three (3) conditions: 1) 0 ∈ S: the zero function f(x) = 0 on [0,1] satisfies ∫₀¹ 0 dx = 0, so the zero vector of the space belongs to S. 2) Closure under addition: if f, g ∈ S, then ∫₀¹ (f+g)(x) dx = ∫₀¹ f(x) dx + ∫₀¹ g(x) dx = 0 + 0 = 0, so f + g ∈ S. 3) Closure under scalar multiplication: if f ∈ S and c is a scalar, then ∫₀¹ (cf)(x) dx = c ∫₀¹ f(x) dx = c * 0 = 0, so cf ∈ S. Because conditions 1, 2 and 3 are satisfied, S is a subspace of C[0,1].

If A is a fixed 2 x 3 matrix, prove that the set is not a subspace of R^3.

In order for the set to be closed under addition, x1 + x2 must belong to it whenever x1 and x2 do. If x1, x2 ∈ R^3 belong to the set, then Ax1 and Ax2 both equal the fixed nonzero vector b on the right-hand side of the defining equation, so A(x1 + x2) = Ax1 + Ax2 = b + b = 2b, which is not equal to b. Hence x1 + x2 is not in the set, the set is not closed under addition, and therefore it is not a subspace of R^3.

Prove that any set of vectors containing the zero vector is linearly dependent.

Let V be a vector space and let S = {v1, v2, ..., vn} be any set of vectors in V that contains the zero vector, say v1 = 0. Choosing the coefficients c1 = 1 and c2 = ... = cn = 0, we can write 1*0 + 0*v2 + ... + 0*vn = 0. Because the coefficient of v1 is 1, this is a non-trivial linear combination of the vectors in S that equals the zero vector, which is exactly the definition of linear dependence. Therefore any set of vectors containing the zero vector is linearly dependent.

Is the set of all n × n invertible matrices a subspace?

No. The sum of two invertible matrices is not necessarily invertible (for example, A + (-A) = 0 is not invertible), and the zero matrix itself is not in the set, so the set is not a subspace.
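
A tiny illustrative example: the identity matrix and its negative are both invertible, yet their sum is the zero matrix, which is singular.

```python
import numpy as np

I = np.eye(2)
A = I            # invertible: det(A) = 1
B = -I           # invertible: det(B) = 1

S = A + B        # the 2 x 2 zero matrix
print(np.linalg.det(A), np.linalg.det(B), np.linalg.det(S))  # 1.0 1.0 0.0
```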

Determine whether the set of all third-degree polynomials with the standard operations is a vector space.

No; it fails closure under addition. For example, x^3 + (-x^3 + x) = x, which is not a third-degree polynomial.

When W is the set of all non-negative functions in C(−∞, ∞).

Not a subspace: W is not closed under scalar multiplication, since multiplying a non-negative function by a negative scalar (e.g. (-1)*x^2) gives a function that is not non-negative.

A vector space consists of four entities: 1) a set of vectors 2) a set of scalars 3 & 4) two operations T/F

TRUE. We can reference the definition of a vector space, stating that V is a set on which two operations (vector addition and scalar multiplication) are defined and that V is a vector space if all axioms are satisfied for every vector u, v and w in V and for every scalar (real number) c and d.

7 & 8) Distributive Property:

The distributive properties hold in C[0,1]: for continuous functions f, g and scalars c, d, c(f + g) = cf + cg and (c + d)f = cf + df, since both equalities hold pointwise at every x in [0,1].

Homogeneous Linear Equations

A homogeneous system Ax = 0 always has the trivial solution x = 0; the important question is whether it also has a non-trivial solution.
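
As a minimal numerical sketch (the matrix and the SVD-based helper are illustrative, not part of the card), one way to look for non-trivial solutions of Ax = 0 is to compute a basis of the null space of A:

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Return a matrix whose columns form a basis of the solutions of Ax = 0.

    Right singular vectors belonging to (numerically) zero singular values
    span the null space; if the result has any columns, non-trivial
    solutions exist.
    """
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so non-trivial solutions exist
N = null_space_basis(A)
print(N.shape[1])                 # 2 basis vectors for the solution space
print(np.allclose(A @ N, 0.0))    # True: every column solves Ax = 0
```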

Under what conditions will a diagonal matrix (a matrix in which the entries outside the main diagonal are all zero) be invertible?

The inverse exists if and only if every entry on the main diagonal is nonzero; the inverse is then the diagonal matrix whose diagonal entries are the reciprocals of the original ones.
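
A short illustrative check that the inverse of a diagonal matrix is the diagonal matrix of reciprocals, and that a zero diagonal entry forces the determinant to 0:

```python
import numpy as np

D = np.diag([2.0, -3.0, 0.5])              # all diagonal entries nonzero
D_inv = np.diag(1.0 / np.diag(D))          # reciprocals along the diagonal
print(np.allclose(D @ D_inv, np.eye(3)))   # True

print(np.linalg.det(np.diag([2.0, 0.0, 0.5])))  # 0.0: not invertible
```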

Closure Under Scalar Multiplication:

The product of a scalar and a continuous function is also continuous. Therefore any scalar c multiplied by a continuous function stays within the vector space C[0,1].

Every vector space V contains two proper subspaces that are the zero subspace and itself. T/F

The statement asserting that "Every vector space V contains two proper subspaces that are the zero subspace and itself" is FALSE because any subspace, x, is by definition, only a proper subspace of V if it is neither the whole space or V nor the zero subspace.

Every vector space V contains at least one subspace that is the zero subspace. T/F

The statement asserting that "Every vector space V, contains at least one subspace that is a zero subspace" is TRUE on the premise that every vector contains at least two subspaces, which include the vector space itself and the zero subspace.

The set of all integers with the standard operations is a vector space. T/F

The statement that the set of all integers with the standard operations is a vector space is FALSE. Multiplying the vector u = 3 by the scalar c = 5/2 gives 15/2, which is not an integer, so the set is not closed under scalar multiplication.

Closure Under Addition:

The sum of any two continuous functions is again a continuous function. Therefore the sum of two functions in C[0,1] stays within the vector space C[0,1].

How many possible forms can a 2 x 2 matrix in Reduced Row-Echelon Form (RREF) take?

There are four possibilities: [[1,0],[0,1]], [[1,t],[0,0]] for any value t, [[0,1],[0,0]], and [[0,0],[0,0]]. This is because: 1) a leading entry must be a 1 with a 0 below it, so a first column that is not all zeros has only one accepted form; 2) the only other possible entry for the bottom-right corner (besides the 1 of the identity matrix) is a 0, in which case there may still be a free value of any size in the upper-right corner; 3) if the first column is all zeros, the matrix is either [[0,1],[0,0]] or the zero matrix.
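
As a sketch of the four forms (using SymPy's Matrix.rref(), which returns the reduced matrix together with the tuple of pivot columns; the example matrices are illustrative):

```python
from sympy import Matrix

print(Matrix([[2, 4], [1, 3]]).rref())   # identity form, pivots (0, 1)
print(Matrix([[1, 2], [2, 4]]).rref())   # [[1, 2], [0, 0]], pivot (0,)
print(Matrix([[0, 3], [0, 6]]).rref())   # [[0, 1], [0, 0]], pivot (1,)
print(Matrix([[0, 0], [0, 0]]).rref())   # zero matrix, no pivots
```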

A vector can be a real number, an n-tuple, a matrix, a polynomial, a continuous function and so on.

True

If A can be row reduced to the identity matrix, then A is nonsingular.

True

Once a theorem has been proved for an abstract vector space, you need not provide separate proofs for n-tuples, matrices and polynomials.

True

The set of all n-tuples is called n-space and is denoted by R^n.

True

The standard operations in Rn are vector addition and scalar multiplication. T/F

True

If the matrices A, B and C satisfy BA = CA, and A is invertible, then B = C

True. Given: 1) A is invertible 2) BA = CA 3) I is the identity matrix. Right-multiply both sides by A^-1: (BA)A^-1 = (CA)A^-1, so B(AA^-1) = C(AA^-1), giving BI = CI, i.e. B = C. Hence, given BA = CA and A invertible, B = C.

A vector space consists of four entities: 1) a set of vectors 2) a set of scalars 3 & 4) two operations

True; the two operations are vector addition and scalar multiplication.

To show that a set is not a vector space, it is sufficient to show that just one axiom is not satisfied. T/F

True. By the definition of a vector space, V is a vector space if and only if all ten axioms are satisfied for every vector u, v and w in V and every scalar c and d; showing that a single axiom fails is therefore enough to show the set is not a vector space.

If a subset S spans a vector space V, then every vector in V can be written as a linear combination of the vectors in S.

True, by the definition of span: a subset S of a vector space V is said to span V if every vector in V can be written as a linear combination of vectors in S.

Multiplying a column of a matrix by a nonzero constant results in the determinant being multiplied by the same nonzero constant. T/F

True. For square matrices A and B, if B is obtained from A by multiplying a row (or column) of A by a nonzero constant c, then det B = c (det A), by the corresponding theorem on elementary row and column operations and determinants.
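
A small illustrative check: scaling one column of a 2 x 2 matrix by c = 5 scales the determinant by 5.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 7.0]])
B = A.copy()
B[:, 1] = 5 * B[:, 1]     # multiply column 1 by the nonzero constant 5

print(np.linalg.det(A))   # 1.0
print(np.linalg.det(B))   # 5.0 = 5 * det(A)
```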

If one row of a square matrix is a multiple of another row, then the determinant is 0. T/F

True. If one row of a square matrix is c times another row, factor the constant c out of that row (which multiplies the determinant by c); the remaining matrix has two equal rows, and by the theorem "if A is a square matrix and two rows (or columns) are equal, then det A = 0", its determinant is 0. Hence the original determinant is c * 0 = 0.
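
A tiny illustrative check with a 2 x 2 matrix whose second row is 3 times its first row:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])   # row 2 = 3 * row 1
print(np.linalg.det(A))      # 0.0
```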

The set of all pairs of real numbers of the form (0,y) with the standard operations on R^2 is a vector space. T/F

True. The set of all pairs of real numbers of the form (0, y) with the standard operations on R^2 is a vector space because it satisfies all ten vector space axioms; it is the y-axis, a subspace of R^2.

Two matrices are column-equivalent when one matrix can be obtained by performing elementary column operations on the other.

True, by the definition of column equivalence: two matrices are column-equivalent when one can be obtained by performing elementary column operations on the other. So if B can be obtained from A through successive elementary column operations, then B is column-equivalent to A.

Interchanging two rows of a given matrix changes the sign of its determinant. T/F

True. Given a square matrix A, if B is obtained by interchanging two rows (or columns) of A, then det B = -det A, by the corresponding theorem on elementary row and column operations and determinants.
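
A small illustrative check that interchanging two rows negates the determinant:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 7.0]])
A_swapped = A[[1, 0], :]           # interchange rows 0 and 1

print(np.linalg.det(A))            # 1.0
print(np.linalg.det(A_swapped))    # -1.0
```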

Definition of column equivalence:

Two matrices are column equivalent if their transpose matrices are row equivalent.

Is W a subspace of V? W is the set of all functions continuous on [-1,1] V is the set of all functions integrable on [-1,1]

Every continuous function on [-1,1] is also integrable on [-1,1], so W is a subset of V, and the zero function belongs to W. 1) Closed under addition: the sum of two continuous functions is continuous, so f + g belongs to W. 2) Closed under scalar multiplication: a scalar multiple cf of a continuous function is continuous, so cf belongs to W. Thus W is a subspace of V.

Determine whether the set of all polynomials of degree four or less with the standard operations is a vector space.

Yes. The set of all polynomials of degree four or less (including the zero polynomial) is closed under addition and scalar multiplication with the standard operations, so it is a vector space.

Associative Property:

[(f+g) + h] = [f+(g+h)], and therefore the associative property holds.

non-trivial solution

a non-zero solution for a homogeneous system

Skew-Symmetric

a skew-symmetric matrix is a square matrix whose transpose equals its negative

Symmetric

a symmetric matrix is a square matrix that is equal to its transpose
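
To make the two definitions above concrete, a minimal NumPy sketch (the matrices are illustrative) checking the transpose conditions for a symmetric and a skew-symmetric matrix:

```python
import numpy as np

S = np.array([[ 1.0, 2.0],
              [ 2.0, 5.0]])
K = np.array([[ 0.0, 3.0],
              [-3.0, 0.0]])

print(np.array_equal(S, S.T))    # True: S is symmetric (equal to its transpose)
print(np.array_equal(K.T, -K))   # True: K is skew-symmetric (transpose equals its negative)
```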

A linear equation is of the form:

a1x1 + a2x2 + ... + anxn = b, where: 1) the coefficients a1, ..., an are real numbers 2) b is a constant

Associative Property

c(df) = (cd)f. And therefore, the associative property holds.

A free variable:

can take on any real value, set equal to t

if det A != 0, then Ax=0 has...

exactly one solution, the trivial solution x = 0

Additive Inverse:

f(x) + (-f)(x) = 0. And therefore, the function -f is the inverse of f.

Commutative Property:

f + g = g + f. And therefore, the commutative property holds.

A set of vectors (all of the same dimension) is linearly independent if the vector equation c1v1 + c2v2 + ... + cnvn = 0 ...

has only the trivial solution, c1 = c2 = ... = cn = 0

We know a set of vectors is not linearly independent...

if the vector equation has a solution other than the trivial solution 0
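
A practical illustrative check (the vectors are arbitrary examples): stacking the vectors as columns and comparing the matrix rank with the number of vectors detects whether a non-trivial solution exists.

```python
import numpy as np

vectors = [np.array([1.0, 0.0, 2.0]),
           np.array([0.0, 1.0, 1.0]),
           np.array([1.0, 1.0, 3.0])]   # third vector = first + second

M = np.column_stack(vectors)
independent = np.linalg.matrix_rank(M) == len(vectors)
print(independent)   # False: c1*v1 + c2*v2 + c3*v3 = 0 has a non-trivial solution
```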

if det A = 0, then Ax=0 has...

infinitely many solutions

A variable that depends on the other variables is not a free variable, but rather the _____ variable.

pivot

A matrix's size is equal to:

rows x columns

A matrix is defined as:

rows x columns; the sizes determine whether two matrices can be multiplied: the number of columns of the first matrix must equal the number of rows of the second.
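
A short illustrative example of how the sizes determine whether a product is defined:

```python
import numpy as np

A = np.ones((2, 3))   # a 2 x 3 matrix (2 rows, 3 columns)
B = np.ones((3, 4))   # a 3 x 4 matrix

print((A @ B).shape)  # (2, 4): defined because A has 3 columns and B has 3 rows
# B @ A would raise a ValueError: B has 4 columns but A has only 2 rows.
```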

