334 Master Set

Picture of an equilibrium solution

This shows the velocity value for which dv/dt = 0 (pg. 4)

Characteristic equation of a second order, homogeneous equation

The two roots of the characteristic equation, which may be real and different, real but repeated, or complex conjugates, allow us to derive solutions to the given differential equation (pg. 140)
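
As a sketch of the three cases, here is a small Python helper that computes the roots of ar² + br + c = 0 and labels which case they fall into (the example equation y'' − 3y' + 2y = 0 is illustrative, not from the text):

```python
import cmath

def characteristic_roots(a, b, c):
    """Roots of the characteristic equation a*r**2 + b*r + c = 0
    for the ODE a*y'' + b*y' + c*y = 0, plus which case they fall into."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        case = "real and different"   # y = c1*exp(r1*t) + c2*exp(r2*t)
    elif disc == 0:
        case = "real but repeated"    # y = (c1 + c2*t)*exp(r1*t)
    else:
        case = "complex conjugates"   # y = exp(l*t)*(c1*cos(m*t) + c2*sin(m*t))
    return r1, r2, case

# y'' - 3y' + 2y = 0 has characteristic roots r = 2 and r = 1
print(characteristic_roots(1, -3, 2))
```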

Theorem 2.4.1: All first-order linear initial value problems have a unique solution

The theorem asserts both the existence and uniqueness of solutions to first order, linear differential equations with an initial value (pg. 69)

What are systems of differential equations?

(pg. 20)

example of a system of ordinary differential equations

(pg. 20)

Condition for critical damping

(pg. 200)

Forced vibrations with damping equation and picture

(pg. 207-209)

Forced vibrations with damping solution

(pg. 209)

Example of a nonlinear differential equation

(pg. 21)

Condition for resonance

(pg. 210)

amplitude modulation

(pg. 214)

beat

(pg. 214)

Differential operator L of order n

(pg. 221)

Fundamental set of solutions

(pg. 223)

Linearly dependent functions

(pg. 223)

Generalized nth-order, linear, nonhomogeneous, ordinary differential equation

(pg. 225)

What does it mean for a system to be locally linear?

(pg. 521)

Jacobian matrix

(pg. 522)

Liapunov function

(pg. 558)

Theorem 9.6.3

(pg. 560)

Theorem 9.6.4: when is the function V(x, y) = ax² + bxy + cy² positive definite? When is it negative definite?

(pg. 561)

Theorem 2.4.2: this tells us about the solutions of nonlinear first order differential equations. The fundamental existence and uniqueness theorem.

(pg. 70)

phase line

(pg. 81)

A critical threshold

(pg. 87)

Logistic growth with threshold

(pg. 87)

Semi-stable solutions

(pg. 89)

What are the ways of solving nonhomogeneous systems of differential equations?

1) Diagonalization 2) Method of Undetermined Coefficients 3) Variation of Parameters 4) Laplace Transforms

How do we diagonalize an nxn matrix A?

1) Find n linearly independent eigenvectors 2) Construct the nxn matrix T by making the eigenvectors found in part 1) the columns of T. 3) Find the inverse of T 4) Construct the nxn diagonal matrix where the entries of D are the eigenvalues of A corresponding to the eigenvector columns of T (in the right order) 5) Check that TDT⁻¹ = A (pg. 426)
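
The five steps can be checked numerically. This minimal sketch uses a hypothetical 2x2 matrix whose eigenvalues (5 and 2) and eigenvectors were found by hand from det(A − λI) = 0; it carries out steps 2-5 and verifies TDT⁻¹ = A:

```python
def matmul(X, Y):
    """Plain nested-loop matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Hypothetical example: A has eigenvalues 5 and 2 with eigenvectors (1, 1), (1, -2).
A = [[4, 1], [2, 3]]
T = [[1, 1], [1, -2]]        # step 2: eigenvectors as columns
T_inv = inv2(T)              # step 3
D = [[5, 0], [0, 2]]         # step 4: eigenvalues on the diagonal, matching column order
check = matmul(matmul(T, D), T_inv)   # step 5: should reproduce A
```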

Forces at play in an oscillating spring problem

1) The force of gravity mg 2) The spring force -k u(t) 3) The damping force -γu'(t) 4) The driving (aka external aka applied) force F(t) (pg. 194)

center

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A center is a critical point that arises when the matrix A has purely imaginary complex conjugate eigenvalues (the real parts are zero). (pg. 503)

System of differential equations initial value problem

(pg. 362)

What is a solution to a system of differential equations?

(pg. 362)

General system of n first order linear equations

(pg. 363)

Theorem 7.1.2

(pg. 363)

What is Φ(t) (phi; special fundamental matrix)? How is it found? Why is it useful?

(pg. 422-423)

FIXME: finish reading 7.7 and learn about Diagonal matrices

(pg. 423-427)

FIXME: why did we learn about diagonalization in section 7.7? Would we want to diagonalize the matrix A in the equation x' = Ax ?

(pg. 426)

generalized eigenvector

(pg. 433)

repeated eigenvalues with only one linearly independent eigenvector

(pg. 433)

Linear approximation in terms of the Jacobian

(Class notes)

Use the following diagram to explain the difference between stability and asymptotic stability

(b) both situations start with an initial displacement. (a) with air resistance, the motion of the pendulum oscillates to zero (asymptotic stability). (c) without air resistance, the pendulum oscillates forever (stability) (pg. 512)

Taylor series for sinx

(centered at x₀ = 0??)

Theorem: Finding portions of the basin of attraction for a critical point at the origin

(class)

Theorem: the Laplace transform of the convolution of two functions

(in class)

Stability theorem

(lecture)

The derivatives of a power function represented by a power series with a given interval of convergence

(pg 250)

Tangent line method AKA Euler method

(pg. 103)
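
A minimal sketch of the tangent-line idea, assuming the standard update y_{k+1} = y_k + h·f(t_k, y_k); the test problem dy/dt = y is illustrative, not from pg. 103:

```python
import math

def euler(f, t0, y0, t_end, n):
    """Tangent-line (Euler) method: repeatedly step along the tangent
    y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = y, y(0) = 1: the exact value y(1) = e, so the approximation
# should approach 2.71828... as n grows.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
```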

Wronskian determinant (order of 2)

(pg. 148-149)

Theorem 3.2.3: what the Wronskian tells us about the solutions of second order homogeneous initial value problems

(pg. 149)

Theorem 3.2.5

(pg. 151)

Euler's formula for arbitrary complex numbers

(pg. 160)

Complex roots of the characteristic equation

(pg. 162)

unstable critical points

Critical points that are not stable (pg. 509)

general solution vs. fundamental set of solutions (pg. 150)

FIXME

What is 0!

It is defined to be 1 (google)

Expanded mxn matrix A.

Notice how the indexing works (pg. 386)

Show that the difference of two solutions to a non-homogeneous equation is a solution to the corresponding homogeneous equation

(pg. 225)

Theorem 4.1.3: Fundamental sets of nth order, linear, homogeneous, ordinary differential equations on an interval I are linearly independent on I and vice versa

(pg. 225)

The generalized, nth order, linear, homogeneous equation

The right hand side is 0 (pg. 222)

Characteristic polynomial for the nth order, linear, homogeneous differential equation with real constant coefficients

(pg. 228)

Real and Unequal roots to the generalized characteristic equation

(pg. 229)

Quotient of two power series that share the same interval of convergence

(pg. 249)

How to find the Taylor series coefficients for a given function f(x)

(pg. 250)

What is a Taylor series?

(pg. 250)

What does it mean for a function to be analytic?

(pg. 250-251)

Bessel equation

(pg. 254)

Legendre equation

(pg. 254)

Generalized definitions of ordinary and singular points

(pg. 266)

General formula of a first order, ordinary differential equation

(pg. 31)

Definition of the Laplace transform. When does this exist?

(pg. 312)

Write the generalized form of the first order linear differential equation

(pg. 32)

Laplace transform of cos(at)

(pg. 321)

Laplace transform of exp(at)

(pg. 321)

Laplace transform of sin(at)

(pg. 321)

Laplace transform of sinh(at) and cosh(at)

(pg. 321)

Laplace transform table

(pg. 321)

Laplace transform of the Dirac delta function

(pg. 346)

How do we take the Laplace transform of the product of a shifted Dirac delta function and some other function f(t)?

(pg. 346) Something with this?

Notation for convolution of two functions f and g

(pg. 350)

Theorem 6.6.1: How to find the inverse transform of the product of two transforms F(s)G(s) using convolutions

(pg. 350)

When are two matrices equal?

(pg. 369)

Zero matrix

(pg. 369)

Matrix scalar multiplication

(pg. 370)

Matrix subtraction

(pg. 370)

How to find the length or magnitude of a vector x (where the elements may be real or complex)

(pg. 372)

Identity matrix

(pg. 372)

What does it mean for a matrix to be nonsingular or invertible? What about singular or noninvertible?

(pg. 372)

Elementary row operations

(pg. 373)

Linearly dependent vs. independent set of vectors

(pg. 382)

Eigenvector equation

(pg. 384)

normalized eigenvectors

(pg. 385)

The Wronskian of a set of n solutions to the homogeneous system of equations

(pg. 392)

Theorem 7.4.3

(pg. 393)

Summary of 7.4

(pg. 394)

fundamental modes

(pg. 415)

Solution of the IVP (for systems of first order DFEQs) in terms of the fundamental matrix

(pg. 421-422)

Systems of nonhomogeneous equations: Method of Diagonalization

(pg. 440-441) (see example 1)

Systems of nonhomogeneous equations: method of undetermined coefficients

(pg. 442) (see example 2)

Systems of nonhomogeneous equations: method of variation of parameters

(pg. 443 - 445) (see example 3)

Systems of nonhomogeneous equations: method of Laplace Transforms

(pg. 446-447) (see example 4)

Equilibrium solutions aka critical points

(pg. 496)

trajectory, phase plane, and phase portrait

(pg. 496)

rate constant aka growth rate

(pg. 5)

Table 9.1.1: MEMORIZE AND UNDERSTAND THIS TABLE

(pg. 504)

Distinguish between autonomous and nonautonomous systems

(pg. 508)

Figure 9.2.1: Asymptotic stability vs. stability

(pg. 510)

Schematic perturbation of real and equal eigenvalues

(pg. 520)

Inverse of a 2x2 matrix

review

Reduction of order

(pg. 171)

Summary of Method of Undetermined Coefficients

(pg. 181)

table 3.5.1: forms to particular solutions of nonhomogeneous equations

(pg. 182)

Example of an ordinary differential equation

(pg. 19)

Examples of partial differential equation

(pg. 19)

Natural frequency of an undamped free oscillating system

(pg. 197)

Damped vibration picture

(pg. 199)

Explain the method of undetermined coefficients for nth order differential equations

1) The method of undetermined coefficients is used for nth order, linear, nonhomogeneous equations with constant coefficients. 2) Find the general solution to the corresponding homogeneous equation. 3) Find a particular solution to the original differential equation. 4) The sum of the results in steps 2 and 3 forms a general solution to the original equation. 5) If initial conditions are given, we can use them to find values for the remaining constants in the general solution (pg. 236-237) (pg. 181)
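
A numerical sanity check of the recipe, using the hypothetical example y'' + y = t (not from the text): the homogeneous solution is c₁cos t + c₂sin t, and the guess y_p = At + B gives the particular solution y_p = t.

```python
import math

# Hypothetical example: y'' + y = t.
# Step 2: homogeneous solution y_h = c1*cos(t) + c2*sin(t).
# Step 3: guess y_p = A*t + B; substituting gives A = 1, B = 0, so y_p = t.
# Step 4: general solution y = c1*cos(t) + c2*sin(t) + t.
def y(t, c1=1.0, c2=0.0):
    return c1 * math.cos(t) + c2 * math.sin(t) + t

def residual(t, h=1e-4):
    """Finite-difference estimate of y'' + y - t, which should be ~0."""
    ypp = (y(t + h) - 2.0 * y(t) + y(t - h)) / (h * h)
    return ypp + y(t) - t
```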

What is a Maclaurin series?

A Taylor series expansion of a function about 0

Logistic growth model for population

A better model than exponential growth because it models environmental limitations. It behaves exponentially for small populations (pg. 80)
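
A quick numerical sketch of this behavior, with made-up parameters r = 1 and K = 100: Euler-stepping the logistic equation dy/dt = ry(1 − y/K) shows the population saturating at K rather than growing without bound.

```python
def logistic_sim(r, K, y0, t_end, n):
    """Euler-integrate the logistic equation dy/dt = r*y*(1 - y/K)."""
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * r * y * (1 - y / K)
    return y

# Hypothetical parameters: starting from a small population of 1,
# the solution levels off near the carrying capacity K = 100.
final = logistic_sim(1.0, 100.0, 1.0, 20.0, 20000)
```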

What is an initial condition? What is an initial value problem?

A condition a solution of a differential equation must satisfy. In general, if we have some differential equation f' = F(t, f) and an initial condition f(0) = c, then we are looking for the specific function f, out of the family of solutions, such that f(0) = c. When we are given a differential equation and an initial condition, we have an initial value problem (pg. 12)

What does it mean for a matrix to be diagonaliziable? When is a matrix diagonalizable?

A diagonalizable matrix must be square (nxn) and have n linearly independent eigenvectors (pg. 425)

Definition of non-linear differential equations

A differential equation is nonlinear if it's not linear (pg. 21)

Theorem 6.1.2: Conditions under which the Laplace transform exists

A function f has a Laplace transform if it is piecewise continuous and of exponential order (pg. 312)

What does it mean for a function to be piecewise continuous?

A function f is said to be piecewise continuous on an interval α ≤ t ≤ β if the interval can be partitioned by a finite number of points α = t₀ < t₁ < ... < tₙ = β so that 1) f is continuous on each OPEN subinterval tᵢ₋₁ < t < tᵢ, and 2) f approaches a finite limit as the endpoints of each subinterval are approached from within the subinterval. In other words, f is piecewise continuous on α ≤ t ≤ β if it is continuous there except for a finite number of jump discontinuities (pg. 310)

Exponential order

A function is of exponential order if it can be bounded above by an exponential function. A function must be of exponential order in order to have a Laplace transform (pg. 313)

What is a general solution? What are integral curves?

A general solution to a differential equation is one that represents a family of solution curves (by letting its parameter(s) vary). The solution curves are called integral curves and each integral curve is associated with a particular value of the parameter(s) (pg. 12)

What does it mean for a matrix to be singular? Nonsingular?

A matrix is singular if its determinant is zero. Singular matrices do not have inverses. If the determinant is nonzero, the matrix is nonsingular. Nonsingular matrices have inverses and are square (pg. 378)

Liapunov's second method (aka direct method)

A method for determining the FIXME (pg. 555)

phase portrait

A plot that shows a representative sample of trajectories for a given system of differential equations (we've only drawn two-dimensional phase portraits) (pg. 396)

Singular point

A point x₀ such that P(x₀) = 0. P is a function of x (a polynomial?) in the following equation (pg. 254)

Ordinary point

A point x₀ such that P(x₀) ≠ 0. P is a function of x (a polynomial?) in the following equation (pg. 254)

What does it mean for a power series to converge?

A power series is said to converge at a point x if the following limit exists for that x. A series may converge for all x, or may converge for some values of x and not for others (pg. 247-248)

Homogeneous, linear, second order differential equations. Nonhomogeneous

A second order linear differential equation is homogeneous if it can be written as a linear combination of y'', y', and y equal to zero, i.e. the term involving neither y nor its derivatives vanishes (the coefficients may still be functions of t); otherwise it is nonhomogeneous (pg. 138)

Linear, second order differential equations (general formula)

A second order differential equation is linear if we can write it as a linear combination of y, y', and y'' (the coefficients can be functions of our independent variable t) (pg. 138)

separatrix

A trajectory that bounds a basin of attraction (pg. 514)

What is the radius of convergence of a power series? What is the interval of convergence?

All power series centered about x₀ are convergent at x₀. However, for many power series, there exists a number p > 0 for which the power series converges in the interval |x - x₀| < p. We call p the radius of convergence and the interval |x - x₀| < p the interval of convergence (pg. 248)

Give a definition of exact equations

An equation having the form of (6) whose coefficients satisfy the condition in (7) (pg. 95-96)

Definition of an improper integral

An improper integral over an unbounded interval is defined as the limit of integrals over finite intervals (pg. 309)

Matrix transpose

An mxn matrix becomes an nxm matrix. The rows and columns are switched (pg. 369)

What is a separable first order differential equation?

Any first order differential equation that can be put into the following form (pg. 42)

Nonlinear second order differential equations

Any second order differential equation that isn't linear (pg. 138)

What is the geometric series?

Any series that can be put in the following form

What is a power series?

Any series that can be put in the following form. The c_n terms are called the coefficients of the series. The a term is a constant and we say that we have a power series about a.

Show that any first order differential equation can be written in the following form

Begin with the general first order differential equation dy/dx = f(x, y) (pg. 42)

node aka nodal source

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A critical point (the origin) from which solutions diverge. These occur when the eigenvalues are REAL and satisfy 0 < r₂ < r₁ (pg. 497)

node aka nodal sink

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A critical point (the origin) to which all solutions converge as t→∞. These can occur when eigenvalues are REAL and satisfy r₁<r₂<0 (pg. 497)

improper or degenerate node

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A critical point that occurs when A has a single eigenvalue with geometric multiplicity 1 (i.e. a single linearly independent eigenvector). (pg. 500)

proper node aka star point

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A critical point that occurs when the eigenvalues of matrix A are equal but we still have two linearly independent eigenvectors. The picture shows the case when r₁=r₂ < 0. If r₁=r₂ > 0 then we simply reverse the direction of the arrows (pg. 499)

Saddle point

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A critical point that occurs when the eigenvalues of matrix A are REAL and have OPPOSITE signs

Spiral point, spiral sink, spiral source

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). A spiral point is a critical point that arises when the matrix A has complex conjugate eigenvalues. If the real part of the eigenvalues is positive, solutions diverge away from the origin and we call it a spiral source. On the other hand, if the real parts are negative, solutions converge to the origin and we call it a spiral sink (pg. 501)

Spiral point. When do these occur? When are they asymptotically stable?

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). The eigenvalues of matrix A are COMPLEX CONJUGATES with nonzero real part. When these conditions are met, the critical point (the origin) is called a spiral point. If the real part of the eigenvalues is negative, solutions spiral into the origin and we say it is asymptotically stable. If they are positive, the solutions spiral away from the origin and the critical point is unstable (pg. 410)

saddle point

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). The eigenvalues of matrix A are REAL and have OPPOSITE sign. When this occurs the critical point (the origin) is classified as a saddle point (pg. 400)

node (source vs. sink). When do these occur?

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). The eigenvalues of matrix A are REAL, different, and have the SAME sign. When this occurs the critical point (the origin) is classified as a node. If the eigenvalues are both negative, we have a nodal sink. If the eigenvalues are both positive, we have a nodal source (pg. 402)

Center (pg. 410)

Context: System of 2x2 homogeneous differential equations: x' = Ax (A is a 2x2 matrix with real elements). The eigenvalues of matrix A are purely imaginary (complex with zero real parts). In this case, solutions travel in closed curves about the origin (either clockwise or counterclockwise) (pg. 410)

improper node

Context: x' = Ax (where A is 2x2 with constant coefficients). A critical point that occurs when we have repeated eigenvalues but only one linearly independent eigenvector (pg. 432)

Properties of convolutions

Convolution is commutative, distributive, and associative. The convolution of a function with zero is zero (pg. 351)
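
These properties can be spot-checked on the discrete analogue of convolution (a sketch; the example sequences are arbitrary):

```python
def conv(f, g):
    """Discrete (finite-sequence) analogue of the convolution integral:
    out[k] = sum over i+j == k of f[i]*g[j]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

f, g = [1.0, 2.0, 3.0], [4.0, 5.0]
# conv(f, g) == conv(g, f) illustrates commutativity; convolving with
# the zero sequence gives all zeros.
```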

How to transform an nth order differential equation into a system of equations

FIXME

How do we classify the critical point if both eigenvalues are complex with nonzero real part?

FIXME (pg. 501)

How do we classify the critical point if both eigenvalues are real and equal???

FIXME. Two cases: they're equal and positive, and they're equal and negative (pg. 498)

Theorem 9.6.2: When the origin is an unstable critical point. Conditions for instability

FIXME: don't really understand this (pg. 558)

True or false: if a series is convergent, then it must converge absolutely

False: E.g. the series ∑(-1)^n/n converges but does not converge absolutely.
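
A quick numerical illustration: partial sums of ∑(-1)ⁿ⁺¹/n settle near ln 2 ≈ 0.693, while the series of absolute values (the harmonic series) keeps growing.

```python
import math

def alt_harmonic(n):
    """Partial sum of the alternating harmonic series sum((-1)**(k+1)/k)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def harmonic(n):
    """Partial sum of the harmonic series sum(1/k), which grows like log(n)."""
    return sum(1 / k for k in range(1, n + 1))
```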

Repeated root of the generalized characteristic equation

For an equation of order n, if a root of Z(r) = 0, say r =r₁, has multiplicity s (where s ≤ n), then the following are corresponding solutions to the differential equation (pg. 232)

Fundamental set of solutions of a system of differential equations

Fundamental sets are 1) linearly independent and 2) Span the solution space (pg. 392)

What is an isolated critical point?

Given a nonlinear system, a critical point is said to be isolated if there exists a circle centered about the critical point within which there are no other critical points (pg. 520)

Critical points

Given an autonomous first order differential equation dy/dt = f(y), the roots of f(y) are the critical points (pg. 80)

Eigenvalues and Eigenvectors

Given an eigenvector equation (see picture), the scalars λ are called eigenvalues of matrix A, while the NONZERO solutions of the equation obtained by using such a value of λ are called the eigenvectors corresponding to that eigenvalue (pg. 384)

What is the rate of growth or decline in a population problem?

Given dy/dt = ry, the rate of growth or decline is r (pg. 79)

homogeneous vs non homogeneous systems of equations

Given the matrix equation Ax = b, if b is the zero vector, then the system is homogeneous; otherwise, it is non homogeneous (pg. 378)

Theorem 3.2.7 (Abel's Theorem)

I don't really understand this (pg. 154)

Homogeneous equations with constant coefficients. How do we solve these equations? What is its characteristic equation?

If P, Q, and R of a homogeneous differential equation are (real??) constants, then the equation can always be solved easily in terms of the elementary functions of calculus (pg. 139)

What does it mean for a power series to converge absolutely?

A power series converges absolutely at x if the series of absolute values Σ|aₙ(x − x₀)ⁿ| converges there. If a series converges absolutely, then the series also converges; however, the converse is not necessarily true (pg. 248)

Linear vs. nonlinear systems of differential equations

If each of the functions F₁, ..., Fn in the following equation is a linear function of the dependent variables x₁, ..., xn, then the system of equations is said to be linear; otherwise, it is nonlinear (pg. 363)

globally asymptotically stable

If every trajectory approaches a critical point, then we say that it is globally asymptotically stable (pg. 527)

How to tell if the row or column vectors of a square matrix are linearly independent

If the determinant of the matrix containing the vectors is nonzero, then the row vectors are linearly independent and the column vectors are linearly independent
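
A sketch of this test using a hand-rolled 3x3 determinant (the example matrices are made up):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Third row equals row1 + row2, so the rows are linearly dependent.
dependent = det3([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
# A nonzero determinant means the rows (and columns) are independent.
independent = det3([[1, 2, 3], [0, 1, 4], [5, 6, 0]])
```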

homogeneous system of first order linear equations vs. non homogeneous

If the functions g₁(t), ..., gn(t) are all zero, then we say the system is homogeneous. Otherwise, it is non homogeneous (pg. 363)

Orthogonal vectors

If the inner product of two vectors is zero, the two vectors are orthogonal (pg. 373)

What is an equilibrium solution in general?

If we are given a differential equation dy/dt = f(y, t), it's the value of y such that dy/dt = 0 (pg. 3)

Theorem 3.2.6: complex valued solutions to homogeneous second order differential equations

If we have a complex valued solution of Eq. (2), then its real part and imaginary part are also solutions of the equation(pg. 153)

Theorem 7.4.1: The principle of superposition

If we have two solutions to the homogeneous system of equations, then any linear combination of these solutions is also a solution. It is proved by differentiating the linear combination and using the fact that the two solutions satisfy the homogeneous system of equations. By repeatedly applying this theorem, we can show that any finite linear combination of solutions of Eq. (3) is also a solution (pg. 391)

What is an integrating factor?

In the context of solving first order, linear differential equations, it's a function µ(t) we multiply our equation by to solve it (pg. 32 see example 2)

Figure 9.3.1: Schematic perturbation of purely imaginary eigenvalues

In the first case (left picture), the critical point is at first classified as a stable center because the eigenvalues are purely imaginary. After the perturbation, the eigenvalues become complex with positive real part, making the critical point an unstable spiral point. In the second case (right picture) the purely imaginary eigenvalues are shifted to complex with negative real part. Thus the stable center is shifted to a stable spiral point (pg. 520)

asymptotically stable critical points

In words this says that solutions that start "sufficiently" close to our critical point not only must stay "close" but must eventually approach the critical point as t approaches infinity. Note that asymptotic stability is a stronger condition than stability (pg. 510)

Definition of stable critical points

In words, this says that solutions that start "sufficiently close" to our critical point stay close (pg. 509)

Example 1: For what values of c does the improper integral of exp(ct) where t ≥ 0 converge? Where does it diverge?

It converges if c < 0 and diverges if c ≥ 0 (pg. 310)

What is the harmonic series? Does it diverge or converge?

The harmonic series is Σ 1/n. It diverges

What is the ratio test?

It helps us test the convergence of power series at a given point. It tells us that if the limit of the ratio of successive terms in a power series is less than 1 (at the given point), then the series is absolutely convergent and thus convergent (at the given point). If the limit is greater than 1, the series is divergent (at the given point). If the limit is 1, the test is inconclusive and the series may be conditionally convergent, absolutely convergent, or divergent at that point (pg. 248)
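
A numerical sketch of the test (the two example series are illustrative): estimate the limit of |a_{n+1}/a_n| by evaluating the ratio at a large index.

```python
import math

def ratio_limit(term, n=100):
    """Approximate lim |a_{n+1}/a_n| by evaluating the ratio at a large index n."""
    return abs(term(n + 1) / term(n))

# Terms of sum x**n / n! at x = 3: ratios 3/(n+1) -> 0 < 1, so it converges there.
r_conv = ratio_limit(lambda n: 3.0 ** n / math.factorial(n))
# Terms of sum n! * x**n at x = 0.5: ratios (n+1)/2 -> infinity, so it diverges there.
r_div = ratio_limit(lambda n: math.factorial(n) * 0.5 ** n)
```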

Unit impulse function aka Dirac delta function

It imparts an impulse of magnitude one at t = 0 but is zero for all values of t other than zero. Although the function is unbounded at t = 0, the area under the curve (its integral over any interval containing zero) is one (pg. 344)

Matrix addition

It is both commutative and associative (pg. 369)

What does it mean for a function to vanish in an interval?

It means it takes on the value of zero at some point in the interval (internet)

What is the following statement saying?

It tells us that if a power series converges at a point x₁ with |x₁ - x₀| = ρ > 0, then the power series converges for every x with |x - x₀| < ρ. It also tells us that if the power series diverges at x₁, then it diverges for every x with |x - x₀| > |x₁ - x₀| (pg. 248)

Example 3: For what values of p does the improper integral of t^(-p) where t ≥ 1 converge?

It will converge if p > 1 but will diverge if p ≤ 1 (pg. 310)

Matrix adjoint

It's the transpose of the conjugate, or equivalently the conjugate of the transpose (pg. 369)

positive definite, negative definite, positive semidefinite, and negative semidefinite

Let V(x,y) be a function defined on some domain D containing the origin. Also let V(0,0) = 0. Then, (i) V is positive definite if V(x,y) > 0 for (x,y) ≠ (0, 0), (ii) V is negative definite if V(x, y) < 0 for (x,y) ≠ (0, 0), (iii) V is positive semidefinite if V(x,y) ≥ 0 for (x,y) ≠ (0, 0), and (iv) V is negative semidefinite if V(x,y) ≤ 0 for (x,y) ≠ (0, 0) (pg. 557)

Sum or difference of power series that share the same interval of convergence

The sum or difference of two power series with the same interval of convergence is another power series that converges on the same interval (except possibly at the end points??) (pg. 249)

self-adjoint aka Hermitian matrices

Matrices that are equal to their adjoints. These include real symmetric matrices. Hermitian matrices always have the following useful properties 1) All eigenvalues are real 2) There always exists a full set of n linearly independent eigenvectors, regardless of the algebraic multiplicities of the eigenvalues 3) The inner product of two eigenvectors corresponding to different eigenvalues is zero. Thus if all eigenvalues are simple, then the associated eigenvectors form an orthogonal set of vectors 4) Corresponding to an eigenvalue of algebraic multiplicity m, it is possible to choose m eigenvectors that are mutually orthogonal. Thus the full set of n eigenvectors can always be chosen to be orthogonal as well as linearly independent (pg. 387)

fundamental matrix

Note that a fundamental matrix is nonsingular since its columns are linearly independent vectors. (pg. 421)

resonance

Notice that resonance occurs when the driving frequency is the same as the natural frequency (pg. 211)

Theorem 6.2.1: how do we get the Laplace transform of f'?

Notice that the Laplace transform of f' is expressed in terms of the Laplace transform of f. Notice that we also need f(0). Don't forget to multiply the Laplace transform of f by s (pg. 317)

Taylor Series of sinx and cosx

Notice that these converge for all x

Theorem 2.6.1. How do we tell if an equation is exact and, if so, how do we find a function ψ(x,y) that satisfies it?

Notice that this theorem relies on Clairaut's theorem. This tells us that if we have an equation of the form *, then there exists a function ψ that satisfies * iff M and N satisfy (10). In other words, given an equation of the form *, we check to see if it's exact by checking equation (10). The condition (10) is both necessary and sufficient for * to be exact (pg. 96)

Laplace Transform of the Heaviside function (unit step function)

Notice the integral is zero for t < c. The integrand for t ≥ c is just e^(-st). The result follows easily after that (pg. 329)

Generalized Wronskian

Notice we only go up to the (n-1)st derivative, but its size is nxn (pg. 223)

Ordinary differential equations vs. partial differential equations

Ordinary differential equations have solution functions that are in terms of a single independent variable. Partial differential equations have solution functions that are in terms of multiple independent variables (pg. 19)

Damped free vibrations

Oscillating problem without an applied force. Means we get a second order, linear, homogeneous differential equation (pg. 198)

Undamped free vibrations

Oscillations that are free of damping and applied forces. In this situation the corresponding differential equation is the following (see picture). The solution of this equation is sinusoidal and has a constant amplitude, meaning the system will oscillate indefinitely. The angular frequency for this situation is given by ω₀ = √(k/m) (pg. 196)

Under what conditions is our critical point a node? When is this asymptotically stable? Unstable?

Our critical point is a node if both eigenvalues are real, have the same sign, and are unequal. In the case that they're negative, the node is classified as asymptotically stable (nodal sink; solutions converge to the origin). In the case that they're positive, the node is classified as unstable (nodal source; solutions diverge away from the origin) (pg. 504)

Under what conditions is our critical point a saddle point? Is this stable or unstable?

Our critical point is a saddle point if the eigenvalues are real and have opposite signs. The saddle point is always unstable (pg. 504)

How to express the steady state solution with a single trigonometric function

R is the amplitude of the steady state solution, m is the mass of the object, γ is the damping coefficient, ω₀ is the natural frequency, and ω is the driving frequency (pg. 210)

What does it mean for two matrices, A and B, to be similar?

Review

How to find the inverse of a matrix A using elementary row operations

Set up the augmented matrix [A|I] and row reduce it to [I|A⁻¹]. In other words, the same set of row operations that transforms A to I will transform I to A⁻¹ (pg. 373)
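
A minimal Gauss-Jordan sketch of this procedure (partial pivoting is added for numerical safety; the 2x2 example matrix is made up):

```python
def invert(A):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]
    to [I | A^-1] with elementary row operations."""
    n = len(A)
    # Build [A | I].
    M = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivot: swap up the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row, then eliminate this column from every other row.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

# det = 10, so the exact inverse is [[0.6, -0.7], [-0.2, 0.4]].
A_inv = invert([[4.0, 7.0], [2.0, 6.0]])
```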

Shifted Dirac delta function

(pg. 345)

What does it mean for an improper integral to converge? To diverge?

Since an improper integral is defined in terms of limits, if the limit exists, we say the integral converges; otherwise, we say it diverges (pg. 309)

Algebraic multiplicity of eigenvalues. Geometric multiplicity. Simple

Since the characteristic equation for an nxn matrix A is a polynomial of degree n, the eigenvalues can have an algebraic multiplicity m (m is the number of times the eigenvalue is a root of the characteristic equation). Each eigenvalue can have q linearly independent eigenvectors where 1 ≤ q ≤ m. We call q the geometric multiplicity of the eigenvalue; it represents the number of linearly independent eigenvectors it has. If each eigenvalue of A is simple (has algebraic multiplicity 1), then each eigenvalue also has geometric multiplicity 1 (pg. 386)

Hooke's Law

States that the force exerted by a spring is proportional to its displacement. The negative sign indicates that the force is in the opposite direction of the displacement. The constant k is called the spring constant and has units of force per unit length. A high spring constant corresponds to a stiff spring while a low spring constant corresponds to a loose spring.

Radius of convergence for ratios of polynomials

Suppose p(x) and q(x) are polynomials with no common factors. The radius of convergence of p(x)/q(x) about x₀ is the distance from x₀ to the nearest zero of q(x) in the complex plane
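A quick numerical check of this rule for 1/(1 + x²) about x₀ = 0: the zeros of q(x) = 1 + x² are ±i, so the radius of convergence is |±i − 0| = 1 (which is why the real-valued series for 1/(1 + x²) diverges at |x| = 1 even though the function is smooth on the whole real line):

```python
import numpy as np

zeros = np.roots([1, 0, 1])                  # roots of x^2 + 1, i.e. ±i
x0 = 0.0
radius = min(abs(z - x0) for z in zeros)     # distance to the nearest zero of q
```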

Matrix conjugate

Take the complex conjugate of each element (pg. 369)

Intrinsic growth rate

The r value in the logistic equation dy/dt = r(1 − y/K)y: the growth rate the population would have in the absence of limiting factors. The K value is called the saturation level or the environmental carrying capacity; most solutions to the logistic equation approach the line y = K. (pg. 80 and pg. 82)

period

The amount of time for an oscillating system to complete a single full cycle (pg. 197)

Theorem 4.1.1: The generalized fundamental existence and uniqueness of solutions to nth-order, linear, ordinary initial value problems

The theorem asserts that all nth-order, linear, ordinary initial value problems have a unique solution that exists on an interval I, where I is the interval in which the coefficient functions are all continuous, and where t₀ ∈ I (pg. 222)

General solution of the homogeneous system of equations

The basis for an nth order system of differential equations, aka fundamental set of solutions. The set is linearly independent and the set spans the solution space (pg. 392)

How to know if two series are identical

The coefficients need to be identical (pg. 250)

Definition of linear differential equations

The differential equation F(t, y, y', ..., y^(n)) = 0 is linear if F is a linear function of y, y', ..., y^(n). Notice it doesn't need to be linear in the independent variable (pg. 20-21)

Dot product

The dot product of two vectors x and y with the same number of elements is x^T y = y^T x (pg. 371)

Characteristic equation

The equation we use to find the eigenvalues of a system. It's a polynomial equation of degree n in λ. The values of λ that satisfy the characteristic equation may be real or complex and are called the eigenvalues of the matrix A (pg. 384)
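For a numeric matrix, numpy can produce the characteristic polynomial and its roots directly; a sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
# det(A - λI) = λ^2 - 7λ + 10 = (λ - 5)(λ - 2).
# np.poly(A) returns the characteristic polynomial's coefficients,
# highest degree first; np.roots then yields the eigenvalues.
coeffs = np.poly(A)                  # [1, -7, 10]
eigenvalues = np.roots(coeffs)       # 5 and 2
```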

Explain how the method of variation of parameters works for nth order linear nonhomogeneous equations

The following formula is developed in Section 4.4 and generalizes the results found for second order equations. Here y_m is the mth solution to the corresponding homogeneous equation, g(s) is the nonhomogeneous term, W(s) is the Wronskian of the n solutions to the homogeneous equation, and W_m(s) is the Wronskian obtained by replacing the mth column with the column (0, 0, ..., 0, 1). For an nth order differential equation, we need to find n solutions to the corresponding homogeneous equation, take (n−1) derivatives of those solutions, form n Wronskians, calculate n integrals, and finally add the results. (pg. 242)

What is the impulse of a force?

The force integrated over time, where g(t) is the force function and τ is the half-length of the time interval over which it acts. It's a measure of the strength of a forcing function (pg. 343)

Quasi frequency

The frequency at which a damped system oscillates (pg. 199)

Theorem 9.6.1: When the origin is an asymptotically stable critical point. Conditions for asymptotically stability and stability

If there exists a positive definite function V such that V̇ is negative definite on some domain containing the origin, then the origin is an asymptotically stable critical point; if V̇ is only negative semidefinite, then the origin is stable. The function V is called a Liapunov function (pg. 558)

Theorem 7.1.1: existence and uniqueness theorem for systems of differential equations

The functions F₁, ..., Fₙ and their first partial derivatives with respect to x₁, ..., xₙ need to be continuous in a region containing the initial point; the system then has a unique solution on some interval containing t₀ (pg. 362)

Summary of how to solve second order, linear, homogeneous differential equations with constant coefficients.

The general solution varies based on whether the roots of the characteristic equation are real and different, complex and different, or real but equal (pg. 171)
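A minimal sketch of this three-way classification in Python (the helper name is my own; it assumes the equation is written y'' + by' + cy = 0 with leading coefficient 1):

```python
import numpy as np

def general_solution_form(b, c):
    """Classify y'' + b y' + c y = 0 by the roots of r^2 + b r + c = 0
    and return the shape of the general solution as a string."""
    disc = b * b - 4 * c
    r = np.roots([1, b, c])
    if disc > 0:                                   # real, distinct roots
        return f"c1*exp({r[0]:.3g}*t) + c2*exp({r[1]:.3g}*t)"
    if disc == 0:                                  # repeated real root
        return f"(c1 + c2*t)*exp({r[0]:.3g}*t)"
    lam, mu = r[0].real, abs(r[0].imag)            # complex conjugate pair
    return f"exp({lam:.3g}*t)*(c1*cos({mu:.3g}*t) + c2*sin({mu:.3g}*t))"

# y'' - 3y' + 2y = 0 has roots 1 and 2, giving c1*e^t + c2*e^(2t)
```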

Amplitude of an undamped free oscillating system

The maximum displacement from equilibrium

What is the order of a differential equations?

The order of a differential equation is the order of the highest derivative that appears in the equation (pg. 20)

In the context of forced vibrations with damping, what is the transient solution?

The part of the solution that fades away with time. It comes from the general solution of the homogeneous equation (pg. 209)

What is the steady state solution in the context of forced vibrations with damping?

The part of the solution that persists with time (or as long as the external force is applied). It has the same frequency as the external force and comes from the particular solution of the differential equation (pg. 210)

Quasi period

The period we calculate when using the quasi frequency (pg. 199)

What is linearization?

The process of approximating a nonlinear equation by a linear one is linearization (pg. 21)

What is a similarity transformation?

The process of transforming an nxn matrix A (where A has n linearly independent eigenvectors) into a diagonal matrix D, where the entries of D are the eigenvalues of the matrix A (pg. 425)

Product of two power series that share the same interval of convergence

The product is a power series that has at least the same interval of convergence (pg. 249)

Matrix multiplication

The product of two matrices is defined if the number of columns of the first matrix equals the number of rows of the second matrix. The resulting matrix has the same number of rows as the first matrix and the same number of columns as the second matrix. The ijth element of the resulting matrix is found by taking the dot product of the ith row of the first matrix with the jth column of the second matrix. For appropriately sized matrices, matrix multiplication is associative and distributive, but is not typically commutative (pg. 370)
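A quick sketch of these shape rules in numpy (a 2x3 times a 3x2 gives a 2x2, while reversing the order gives a 3x3, so AB ≠ BA here for size reasons alone):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2x3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])             # 3x2

C = A @ B   # 2x2: each entry is a row of A dotted with a column of B
D = B @ A   # 3x3: different shape entirely, so multiplication can't commute
```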

Theorem 3.5.1: The difference of two solutions to a second order, linear, nonhomogeneous differential equation is a solution to the corresponding homogeneous equation

The proof is straightforward and takes advantage of the linearity of the differential equations (pg. 176)

Theorem 7.4.5

The real and imaginary parts of a complex solution to a homogeneous system of differential equations are also solutions (pg. 394)

basin of attraction aka region of asymptotic stability

The set of points whose trajectory converges to a critical point is called the basin of attraction of that critical point (pg. 514)

Unstable equilibrium solution

The solution y = 0 in the following picture is an unstable equilibrium solution. The only way to guarantee that the solution remains near zero is to make sure that the initial value is exactly zero. (pg. 83)

Asymptotically stable solution

The solution y = K in the following picture is an asymptotically stable solution to the logistic equation. Our solution will approach K as long as y₀ ∈ (0, ∞). "Solutions that start close, stay close." (pg. 83)

Theorem 5.3.1

The theorem helps us find a lower bound on the radius of convergence for power series solutions to a second order differential equation. It tells us that this lower bound can't be less than the smaller of the radii of convergence of the series for p = Q/P and q = R/P. If P, Q, and R are polynomials, and if we assume any common factors of Q and P have been canceled, then the radius of convergence of the power series for Q/P about the point x₀ is precisely the distance from x₀ to the nearest zero of P (this can be in the complex plane) (pg. 267)

Corollary 6.2.2: how do we get the Laplace transform of the nth derivative of f?

The theorem tells us to find the Laplace transform of f as well as its first n−1 derivatives. (pg. 318)

Row Reduction AKA Gaussian elimination

The transformation of a matrix by a sequence of elementary row operations is referred to as row reduction or Gaussian elimination (pg. 373)

Threshold level

The value of T in the following equation (pg. 85)

Interval of convergence of a power series

The values of x for which a series converges expressed as an interval about a specific x value x₀. The end points of the interval need to be checked as the power series may or may not converge at those points (pg. 249)

Phase plane

The x₁x₂-plane. It's where we visualize solutions to systems of differential equations where n=2 (pg. 396)

critical points for an autonomous system

These are important because they correspond to constant or equilibrium solutions (pg. 509)

Complex roots to the generalized characteristic equation

These must come in conjugate pairs λ±iµ. If r = λ+iµ is a complex root of the generalized complex equation then e^(λ+iµ) is a solution to the differential equation as well as its real and imaginary parts e^(λt)cos(µt) and e^(λt)sin(µt) (pg. 230)

What is a direction field? AKA slope field?

A grid of short line segments whose slopes are given by evaluating the differential equation at each point; it lets us visualize the behavior of solutions without solving the equation. These are best constructed using a computer (pg. 3-5)

What does the term autonomous mean?

Autonomous refers to a class of first order equations in which the independent variable does not appear explicitly (pg. 78)

General solution of the logistic equation given an initial value y(0) = y₀

Partial fraction decomposition is used to derive the solution y = y₀K / (y₀ + (K − y₀)e^(−rt)) (pg. 83)
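A sketch of the closed-form logistic solution in Python (symbols follow the text's dy/dt = r(1 − y/K)y; note how every solution with y₀ > 0 approaches the carrying capacity K):

```python
import numpy as np

def logistic(t, y0, r, K):
    """Closed-form solution of the logistic IVP dy/dt = r(1 - y/K)y,
    y(0) = y0, obtained by partial fraction decomposition."""
    return y0 * K / (y0 + (K - y0) * np.exp(-r * t))

# At t = 0 the solution equals y0; for large t it approaches K.
y_start = logistic(0.0, y0=1.0, r=0.5, K=10.0)
y_late = logistic(50.0, y0=1.0, r=0.5, K=10.0)
```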

Theorem 6.3.1: The Laplace Transform of a shifted function

This Theorem tells us that a translation of f(t) a distance c in the positive x-direction corresponds to multiplying its Laplace transform F(s) by e^(-cs) (pg. 330)

Solution of undamped free vibration problem

This gives the displacement of the system at any time t. R is the maximum displacement from equilibrium. ω₀ = √(k/m) gives the angular frequency. The phase or phase angle is given by δ. Notice that the amplitude is constant so the oscillations will continue indefinitely (pg. 196)

Rational root theorem

This helps us find the rational roots to nth order polynomials (pg. 230)

Scalar (aka inner) product

This is defined for vectors with the same number of elements. Notice that if two vectors have only real elements, then the scalar product is also their dot product (pg. 371)

Derive the equilibrium solutions for the logistic equation

This is done by setting the logistic equation equal to zero and solving for y. We get y = 0 and y = K. They correspond to no change or variation in the value of y as t increases (pg. 80)

Show that the Laplace transform is a linear operator

This means that the Laplace transform of a linear combination of two functions is the same linear combination of their Laplace transforms: L{c₁f + c₂g} = c₁L{f} + c₂L{g}. This follows from the linearity of the integral (pg. 314)
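Linearity can be verified symbolically; a sketch with sympy, using e^(−t) and sin t as the two functions (my choice of examples):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f, g = sp.exp(-t), sp.sin(t)
L = lambda h: sp.laplace_transform(h, t, s, noconds=True)

# L{3f + 2g} should equal 3 L{f} + 2 L{g}
lhs = L(3 * f + 2 * g)
rhs = 3 * L(f) + 2 * L(g)    # 3/(s + 1) + 2/(s^2 + 1)
diff = sp.simplify(lhs - rhs)
```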

Picture of a direction field

This picture shows that if v is above the terminal velocity, vt, then dv/dt is negative and v will decrease to approach vt. If v is below the terminal velocity, then dv/dt is positive and v will increase to approach vt. (example 2 pg. 4)

Explain how to use Laplace transforms to solve IVPs.

Take the Laplace transform of both sides of the equation, use the initial values to turn it into an algebraic equation in Y(s), solve for Y(s), and then apply the inverse transform to recover y(t). This process allows us to use algebra, rather than calculus, to solve IVPs.
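A minimal worked example with sympy, solving y' + y = 0, y(0) = 1 (my choice of IVP): transforming gives the algebraic equation sY(s) − 1 + Y(s) = 0, which we solve and invert.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Transformed equation: s*Y(s) - y(0) + Y(s) = 0 with y(0) = 1
Y = sp.symbols('Y')
Ysol = sp.solve(sp.Eq(s * Y - 1 + Y, 0), Y)[0]    # Y(s) = 1/(s + 1)

# Invert the transform to recover the solution y(t) = e^(-t) for t > 0
y = sp.inverse_laplace_transform(Ysol, s, t)
```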

Section 7.4 problem 3

This showed that the Wronskians of two fundamental sets of solutions of the homogeneous system of equations differ only by a multiplicative constant (pg. 395 and pg. 393)

Theorem 3.2.2 (Principle of Superposition)

This tells us that any linear combination of two solutions of a given homogeneous linear second order differential equation is also a solution, see proof on pg. 148 (pg. 147)

Theorem 4.1.2

This tells us that if an nth order, linear, homogeneous differential equation has n solutions whose Wronskian is nonzero, then those n solutions form a fundamental set of solutions. (pg. 223)

Theorem 3.2.4:

This tells us that the linear combination of two solutions of a second order homogeneous differential equation contains all the solutions of the equation if and only if the Wronskian of those solutions is not everywhere zero. In other words, to find the general solution, and therefore all the solutions, of an equation of form (2), we need only find two solutions of the given equation whose Wronskian is nonzero. (pg. 149-150)
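The Wronskian condition is easy to check symbolically; a sketch with sympy, using cos t and sin t (two solutions of y'' + y = 0):

```python
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.cos(t), sp.sin(t)     # two solutions of y'' + y = 0

# W(y1, y2) = y1*y2' - y2*y1' = cos^2(t) + sin^2(t) = 1, never zero,
# so c1*cos(t) + c2*sin(t) is the general solution.
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))
```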

Theorem 7.4.2: The basis of a homogeneous system of equations

This tells us that if we have n linearly independent solutions of an nth order homogeneous system of differential equations, then every solution of the system can be expressed as a linear combination of those n solutions, and this linear combination is unique. In other words, the n solutions form a basis for the solution set (i.e. they're linearly independent and they span the solution space) (pg. 392)

Theorem 6.3.2:

This tells us that multiplication of a function f(t) by e^(ct) results in a translation of the transform F(s) a distance c in the positive s direction, and conversely (pg. 332)

Theorem 9.3.2

This tells us that the stability of the critical points of the nonlinear system x' = Ax + g(x) have the same stability as the critical points of the corresponding linearization x' = Ax (provided the eigenvalues aren't purely imaginary) (pg. 523)

Theorem 6.1.1: The improper integral comparison test

This theorem provides the tools needed to test the convergence of a given function f. It tells us to find a function g where |f| ≤ g when t ≥ M and that if the improper integral of g converges, the improper integral of f will also converge. Conversely, if f ≥ g ≥ 0 for t ≥ M, and if the improper integral of g diverges, then the improper integral of f will also diverge. The functions most useful for comparison purposes are exp(ct) and t^(-p) (pg. 311)

Theorem 7.4.4

This theorem states that the homogeneous system of equations always has at least one fundamental set of solutions. The vectors are called unit vectors (pg. 393)

Theorem 9.3.1: When is the critical point of the two-dimensional system x' = Ax asymptotically stable? Stable? Unstable?

This theorem tells us that the critical point of the system x' = Ax is asymptotically stable when the eigenvalues are real and negative or have negative real part. The critical point is stable, but not asymptotically stable, if the eigenvalues are purely imaginary. The critical point is unstable if the eigenvalues are real and either of them is positive, or if they have positive real part (pg. 519)

Theorem 3.5.2: The general solution to the second order, linear, nonhomogeneous equation

This theorem tells us that to construct a general solution to a second order, linear, nonhomogeneous equation, we need to find a fundamental set of solutions to the corresponding homogeneous equation as well as any solution to the nonhomogeneous one, and then add them (pg. 176)

Theorem 3.2.1: The existence and uniqueness theorem for linear second order differential equations

This theorem tells us three things: 1) The initial value problem has a solution (existence) 2) The solution is unique 3) The solution is defined on the interval where the coefficients are continuous and is at least twice differentiable there (pg. 146)

Theorem 3.6.1: Method of Variation of parameters

This theorem tells us to find a fundamental solution set to the corresponding homogeneous equation, to calculate their Wronskian, and to calculate the integrals. The result gives a particular solution to the nonhomogeneous equation. The constants obtained in the integration end up being absorbed in the corresponding general solution to the homogeneous equation, so they can be assumed to be zero. (pg. 189-190)

T/F: The condition detA ≠ 0 is both necessary and sufficient for A to be invertible

True. (detA ≠ 0 ) iff (A is invertible). If detA = 0, then A is singular (pg. 373)
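A quick numerical illustration with numpy, using a matrix whose rows are proportional (so det A = 0 and inversion fails):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # second row is twice the first
detA = np.linalg.det(A)           # ~0, so A is singular

try:
    np.linalg.inv(A)              # raises LinAlgError for a singular matrix
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
```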

Logistic equation (Verhulst equation).

Used to model population growth. It better models populations than exponential models (pg. 80)

Exponential growth model for population

We assume that the rate of change of the population is proportional to the current population value: dy/dt = ry. Given an initial condition y(0) = y₀, the solution is y = y₀e^(rt) (pg. 79)

General nth order linear differential equation form

We assume that the coefficient functions are continuous, real-valued functions on some interval I, and that P₀ is nowhere zero in this interval (pg. 221)

General second order differential equation (t is the independent variable and y is the dependent variable).

We have a second order differential equation if we can isolate the second order derivative and get a function of t, y and dy/dt (pg. 137)

Jordan Forms

What are these? How are they used? Why are they useful? (pg. 434)

Critically damped

When an oscillating system has the minimum damping needed such that the system will return immediately to equilibrium after a displacement (pg. 200)

Overdamped

When our damping coefficient is more than 2√(km) (pg. 200)
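The three damping regimes for m y'' + γ y' + k y = 0 can be summarized in a small sketch (the helper name is my own; the critical value γ_crit = 2√(km) separates the cases):

```python
import math

def damping_regime(m, gamma, k):
    """Classify the damping of m y'' + γ y' + k y = 0 relative to
    the critical damping coefficient γ_crit = 2*sqrt(k*m)."""
    gamma_crit = 2 * math.sqrt(k * m)
    if gamma > gamma_crit:
        return "overdamped"            # two negative real roots, no oscillation
    if gamma == gamma_crit:
        return "critically damped"     # repeated root, fastest return
    return "underdamped"               # complex roots, decaying oscillation
```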

Integral Transform

When we use an integral to transform a function f into another function F, where we call F the transform of f. (pg. 312)

How can you check if a function is a solution to a differential equation?

You plug the function into the differential equation and see if it's satisfied (pg. 22)

recurrence relation

a recurrence relation is an equation that recursively defines a sequence or multidimensional array of values. Once one or more initial terms are given, each further term of the sequence or array is defined as a function of the preceding terms (pg. 256)
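The Fibonacci numbers are the standard example of a recurrence, aₙ = aₙ₋₁ + aₙ₋₂ with a₀ = 0, a₁ = 1; power series solutions of ODEs define their coefficients by recurrences of the same shape. A sketch:

```python
def fibonacci(n):
    """Compute the nth Fibonacci number by iterating the recurrence
    a_n = a_(n-1) + a_(n-2) from the initial terms a_0 = 0, a_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```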

What are differential equations?

equations containing derivatives (pg. 1)

partial fraction decomposition

http://lpsa.swarthmore.edu/BackGround/PartialFraction/PartialFraction.html

Explain how to use the method of variation of parameters

http://tutorial.math.lamar.edu/Classes/DE/VariationofParameters.aspx (pg. 188-190)

unit step aka Heaviside function

The function u_c(t) that equals 0 for t < c and 1 for t ≥ c (pg. 328)

How to horizontally shift a function using a Heaviside function

Multiply the shifted function by the Heaviside function: u_c(t)f(t − c) is zero until t = c and then follows f shifted c units to the right (pg. 330)

A given differential equation has infinitely many fundamental sets of solutions (pg. 153)

see example 6

Phase or phase angle

δ measures the displacement of the wave from its normal position corresponding to δ = 0 (pg. 197)

