Econometrics Final


biased

HC standard errors are __________ but are generally more accurate than uncorrected standard errors in large samples.

unacceptably

How high is too high for r in the case of multicollinearity? r is high if it causes ___________ large variances.

previous

It is called serial correlation when the stochastic error term in a time-series is affected by events that took place in a ___________ time period.

variance

Normally, a larger sample will reduce the __________ of the estimated coefficients, diminishing the impact of multicollinearity.

coefficients

OLS requires the equation to be linear in the ___________.

Xs

One model has the variance of the error term related to an exogenous variable Zi, which may or may not be one of the _____ in the equation.

same

Positive Serial Correlation: rho (ρ) > 0 implies that the error term tends to have the _______ sign from one time period to the next.

e

While logs come in more than one variety, we'll use natural logs, which are calculated to the base of ___, a constant equal to 2.7183.

coefficients (Beta estimates)

With multicollinearity, coefficients become sensitive to changes in specification; thus, adding/dropping variables and/or observations will often cause major changes in ___________ when significant multicollinearity exists.

no

With ___ serial correlation, different observations of the error term are completely uncorrelated with each other conforming to CA IV.

NOT

With heteroskedasticity, the error term variance is ______ constant and depends on the observation.

can

HC standard errors _____ be used in t-tests and other hypothesis tests.
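
As a rough illustration of the last two cards, here is a minimal Python sketch (statsmodels, simulated data, made-up variable names): HC-corrected standard errors change only the standard errors and the resulting t-tests, not the coefficient estimates.

```python
# Minimal sketch (assumed setup, simulated data): OLS with and without
# heteroskedasticity-consistent (HC) standard errors via statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, x)         # error variance grows with x
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                   # uncorrected standard errors
hc = sm.OLS(y, X).fit(cov_type="HC1")      # HC ("robust") standard errors

print(np.allclose(ols.params, hc.params))  # True: same coefficient estimates
print(ols.bse, hc.bse)                     # standard errors differ
print(hc.tvalues)                          # t-tests based on the HC standard errors
```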

switch

Negative Serial Correlation: rho (ρ) < 0 implies that the error term has a tendency to ______ signs from negative to positive and back again in consecutive observations.

OLS

Heteroskedasticity typically causes _______ to no longer be the minimum variance estimator.

a^b = x

A logarithm, or log, is the exponent to which a given base must be raised in order to produce a specific number: log_a(x) = b means ________ (equation).
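
A quick numeric check of this definition, using Python's math module (the base and numbers below are just an arbitrary example):

```python
# Quick check of the definition: log_a(x) = b means a**b == x.
import math

a, x = 10, 1000
b = math.log(x, a)            # log of x to the base a
print(b)                      # 3.0 (approximately)
print(a ** b)                 # ~1000, recovering x

print(math.log(math.e ** 2))  # natural log (base e ~ 2.7183): ln(e^2) = 2
```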

distributing

A distributed lag model explains the current value of Y as a function of current and past values of X, thus "__________" the impact of X over a number of time periods.

masks

A dominant variable (a form of perfect multicollinearity) is so highly correlated with the dependent variable that it _______ the effects of the other independent variables.

percentage

A left-hand semi-log is preferred for any model in which the dependent variable adjusts in _______________ to a unit change in an independent variable.

period's

A random walk is a time-series variable in which the next period's value equals this _______ value plus the stochastic error term.

constant

A time series variable Xt is stationary if: the mean of Xt is _________ over time, the variance of Xt is constant over time, and the simple correlation coefficient between Xt and Xt-k depends on the length of the lag (k) but on no other variable (for all k).

variance

A time series variable Xt is stationary if: the mean of Xt is constant over time, the ___________ of Xt is constant over time, and the simple correlation coefficient between Xt and Xt-k depends on the length of the lag (k) but on no other variable (for all k).

simple correlation

A time series variable Xt is stationary if: the mean of Xt is constant over time, the variance of Xt is constant over time, and the ___________ ___________ coefficient between Xt and Xt-k depends on the length of the lag (k) but on no other variable (for all k).

serially

After GLS is applied: 1. The error term is not ___________ correlated. 2. The slope coefficient β1 is the same as the slope coefficient of the original equation. 3. The dependent variable has changed, which means the GLS Adj-R^2 is not comparable to the OLS Adj-R^2.

original

After GLS is applied: 1. The error term is not serially correlated. 2. The slope coefficient beta 1 (β1) is the same as the slope coefficient of the _______ equation. 3. The dependent variable has changed, which means the GLS Adj-R^2 is not comparable to the OLS Adj-R^2.

OLS

After GLS is applied: 1. The error term is not serially correlated. 2. The slope coefficient β1 is the same as the slope coefficient of the original equation. 3. The dependent variable has changed, which means the GLS Adj-R^2 is not comparable to the _______ Adj-R^2.

biased

Newey-West standard errors are ________ but are generally more accurate than uncorrected standard errors.

autoregressive

An ______________ pattern is when positive differences follow positive differences and negative differences follow negative differences.

interaction

An ______________ term is an independent variable that is a multiple of two or more other independent variables.

function

An equation is linear in the coefficients only if the Betas: 3. Do not themselves include some sort of __________.

not

An equation is linear in the coefficients only if the coefficients - the Betas: 2. Are _____ multiplied or divided by other coefficients

simplest

An equation is linear in the coefficients only if the coefficients: 1. Appear in their ________ form (not raised to a power)

straight line

An equation is linear in the variables if plotting the function in terms of Y and X generates a _____ ____.

coefficient

An interaction term has its own regression ________________(hspwr & hspwr^2).

constant

An intercept dummy changes the ____________ or intercept term.

absolute

As a result, logs can be used in econometrics if we want to reduce the ________ size of the numbers associated with the same actual meaning.

decline

As long as 0 < λ < 1, the coefficients smoothly __________.

variance

As measurement errors decrease in size, so should the ____________ of the error term.

serial correlation

As rho (ρ) approaches 1 in absolute value, the lagged error term (ε(t-1)) becomes more important in determining the current error term (εt), and a high degree of _______ ________ exists.

graph

Before testing for heteroskedasticity, start with asking: 1. Are there any obvious specification errors? 2. Are there any early warning signs of heteroskedasticity? 3. Does a _________ of the residuals show any evidence of heteroskedasticity?

unbiased

Best linear ___________ estimator (BLUE): "best" means giving the lowest variance of the estimate, as compared to other unbiased, linear estimators.

residuals

Breusch-Pagan test has three steps: Step 1: Estimate the original equation and obtain the __________.

dependent

Breusch-Pagan test has three steps: Step 2: Use the squared residuals as the dependent variable in an auxiliary equation.

proportionality

Breusch-Pagan test investigates whether the squared residuals can be explained by possible ____________ factors.

homoskedastic

Breusch-Pagan test: Test the overall significance of the auxiliary equation with a chi-square test. H0: α1 = α2 = 0 (___________); HA: H0 is false (heteroskedastic).

N x R2

Breusch-Pagan test: The appropriate test statistic here is __________.

cannot

Breusch-Pagan test: If N x R2 < critical chi-square value, then we ___________ reject the null hypothesis of homoskedasticity.

>

Breusch-Pagan test: If N x R2 ___ critical chi-square value (from Table B-6), then we reject the null hypothesis of homoskedasticity and conclude that it's likely that we have heteroskedasticity.

slope

Breusch-Pagan test: this test statistic has a chi-square distribution with degrees of freedom equal to the # of ______ coefficients.
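
A minimal sketch of the Breusch-Pagan procedure from the cards above, using statsmodels' het_breuschpagan (which builds the auxiliary regression of the squared residuals and reports the N x R^2 statistic). The data and variable names are simulated/hypothetical:

```python
# Minimal sketch of the Breusch-Pagan test (simulated data, hypothetical names).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=300), rng.normal(size=300)
y = 1 + 2 * x1 - x2 + rng.normal(scale=1 + np.abs(x1))  # heteroskedastic errors
X = sm.add_constant(np.column_stack([x1, x2]))

results = sm.OLS(y, X).fit()                            # Step 1: get residuals
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(results.resid, X)  # Steps 2-3
print(lm_stat, lm_pval)  # LM = N x R^2; a small p-value (LM above the chi-square
                         # critical value) rejects H0: homoskedasticity
```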

V

Classical Assumption ____ assumes homoskedasticity.

nonstationarity

Cointegration consists of matching the degree of _______________ of the variables in a way that makes the error term stationary.

less

Decision rule for the Dickey-Fuller test for a unit root: reject the null hypothesis of a unit root (nonstationarity) if the calculated test statistic (τ) is _____ than or equal to the critical value τc.

stationary

Decision rule for the Dickey-Fuller test for a unit root: if we reject the null hypothesis (τ ≤ τc) for the series Yt, then it is ___________.

less

Decision rule for the Dickey-Fuller test for cointegration: reject the null hypothesis of a unit root (nonstationarity) if the calculated test statistic (τ) is ________ than or equal to the critical value τc (i.e., τ ≤ τc).
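
A minimal sketch of a Dickey-Fuller-style unit-root test, assuming statsmodels' augmented Dickey-Fuller implementation (adfuller) is an acceptable stand-in; the random-walk series below is simulated purely for illustration:

```python
# Minimal sketch of a unit-root test on a simulated random walk, using the
# augmented Dickey-Fuller test from statsmodels as a stand-in.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=250))   # random walk: next value = this period's
                                      # value plus a stochastic error term

adf_stat, pvalue, usedlag, nobs, crit_values, _ = adfuller(y)
print(adf_stat, crit_values["5%"])
# Reject H0 (unit root / nonstationarity) only if the statistic is less than or
# equal to the critical value; for a random walk it typically is not.
```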

severity

Detecting multicollinearity: multicollinearity exists in every equation; the important question is how much exists. The __________ can change from sample to sample.

transformed

Even if theory is not clear, it is wise to avoid choosing a functional form on the basis of fit alone for two reasons: 1. Adj-R^2 values are difficult to compare if the dependent variable is ___________.

forecast errors

Even if theory is not clear, it is wise to avoid choosing a functional form on the basis of fit alone for two reasons: 2. An incorrect functional form may provide a reasonable fit within the sample but has the potential to make large ______ ______ when used outside the range of the sample.

specification

Existence of multicollinearity might not mean anything. If you delete a variable that belongs in the model, you cause ______________ bias (omitted variable bias).

F-test

Expansion of the test Granger originally suggested: test whether A Granger-causes Y. First, estimate the original model; then test the null that the coefficients of the lagged A's are jointly equal to zero with an _________. If you reject the null, then A is said to Granger-cause Y. We can run the test in the other direction, i.e., use A as the dependent variable and test the null that the coefficients of the lagged Y's are jointly equal to zero with an F-test.
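
A minimal sketch of the F-test version of Granger causality described above, assuming statsmodels' grangercausalitytests is an acceptable stand-in; the two series are simulated so that x leads y by one period:

```python
# Minimal sketch of the Granger-causality F-test with statsmodels; the series
# are simulated so that x leads y by one period (names are illustrative).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = np.zeros(200)
y[1:] = 0.8 * x[:-1] + rng.normal(scale=0.5, size=199)

# Column order matters: the test asks whether the SECOND column Granger-causes
# the first, i.e. whether lagged x helps explain y beyond lagged y.
data = pd.DataFrame({"y": y, "x": x})
res = grangercausalitytests(data[["y", "x"]], maxlag=2)
print(res[1][0]["ssr_ftest"])  # F statistic and p-value for the lag-1 test
```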

previous

First-order serial correlation is when the current value of the error term is a function of the _________ value of the error term.

chi-square

For large samples, LM has a _____________ distribution with degrees of freedom equal to one (the number of restrictions).

explode

For rho (ρ) to exceed 1 is unreasonable, since the error term effectively would "__________".

before

Granger causality is a circumstance in which one time-series variable consistently and predictably changes __________ another variable.

dependent

Heteroskedasticity often occurs in data sets in which there is a large disparity between largest and smallest observed values of the ___________ variable.

data

Heteroskedasticity can also occur in any model, time series or cross-sectional, where the quality of ______ collection changes dramatically.

change

Heteroskedasticity can occur in a time-series model with a significant amount of _________ in the dependent variable.

impure

Heteroskedasticity caused by an error in specification is referred to as __________ heteroskedasticity.

biased

Heteroskedasticity causes OLS estimates of the standard errors to be __________, leading to unreliable hypothesis testing.

greater

If LM is ________ than critical value, we reject the null and conclude that there is serial correlation in the original model.

function

If X2 is accidentally omitted, the error term becomes ε*t = β2X2t + εt. ε*t will tend to exhibit detectable serial correlation when: 1. X2 itself is serially correlated (X2t is a _________ of X2t-1). 2. The size of ε is small compared to the size of β2X2.

LM

If _____ (N*R^2) is greater than critical value, we reject the null and conclude that there is serial correlation in the original model.

causes

If one variable precedes ("Granger-causes") another, we can't be sure the first variable "________" the other to change.

problem

If r is high then the two variables are quite correlated and multicollinearity is a potential ____________.

NO

If rho (ρ) = 0, there is ____ serial correlation.

fall

If the standard error increases, the t-score must ______ and confidence intervals also increase.

standard

If there are no obvious specification errors, the heteroskedasticity is probably pure in nature and one of the following two remedies should be considered. 1. Heteroskedasticity-corrected __________ errors 2. Redefining the variables

Redefining

If there are no obvious specification errors, the heteroskedasticity is probably pure in nature and one of the following two remedies should be considered. 1. Heteroskedasticity-corrected standard errors 2. ___________ the variables

regression

If we run a regression in which the dependent variable and one or more independent variables are spuriously correlated, the result is a spurious ___________.

0

If you omit the constant term and its true value is not ______, then its impact is forced into the other coefficients

stationary

If |γ| < 1, then Yt is ___________.

nonstationary

If |γ| = 1, then Yt is ___________.

nonstationary

If |γ| > 1, then Yt is ___________.

imperfectly

Imperfect multicollinearity occurs when two (or more) explanatory variables are ___________ linearly related.

coefficients

Imperfect multicollinearity: linear functional relationship between two or more independent variables so strong that it can significantly affect the estimations of __________.

omitted

Impure heteroskedasticity almost always originates from an _____________ variable rather than an incorrect functional form.

spurious correlation

In a country with extensive inflation almost any nominal variable will appear to be highly correlated with all other nominal variables. Reason: Nominal variables are not adjusted for inflation, so every nominal variable will have a powerful inflationary component. Such a problem is an example of ____________ _________.

multicollinear

In a distributed lag model, OLS causes a number of problems: various lags of X are likely to be severely ____________, making the estimated coefficients imprecise.

coefficient

In a distributed lag model OLS causes a number of problems: Degrees of freedom tend to decrease for two reasons: Each lagged X has a _________ to estimate. Each lagged X decreases the sample size by 1.

1

In a distributed lag model OLS causes a number of problems: Degrees of freedom tend to decrease for two reasons: Each lagged X has a coefficient to estimate. Each lagged X decreases the sample size by ____.

declining

In a distributed lag model OLS causes a number of problems: There is no guarantee that the estimated betas will follow the smoothly ___________ pattern theory would suggest

100

In a left-hand semi-log, if X1 increases by one unit, then Y will change by Beta-1 × ______ percent, holding X2 constant.

decreases

In a right-hand semi-log functional form, if Beta-1 > 0, the impact of changes in X1 on Y _________ as X1 gets bigger.

bigger

In a right-hand semi-log functional form, if Beta-1 > 0, the impact of changes in X1 on Y decreases as X1 gets _______.

X1

In a right-hand semi-log functional form, if Beta-1 > 0, the impact of changes in ____ on Y decreases as X1 gets bigger.

0.01

In a right-hand semi-log model, a 1% change in X1 is associated with a change in Y of ________ × Beta-1, holding X2 constant.

elasticity

In double-log form, slope coefficients can be interpreted as __________.
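
A minimal sketch checking this interpretation with simulated data (true elasticity of 0.7); the names are illustrative only:

```python
# Minimal sketch: in a double-log regression the slope is the (constant)
# elasticity. Data are simulated with a true elasticity of 0.7.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(1, 100, 500)
y = 3 * x ** 0.7 * np.exp(rng.normal(scale=0.1, size=500))

fit = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()
print(fit.params[1])  # ~0.7: a 1% change in x is associated with a ~0.7% change in y
```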

double-log

In some cases of heteroskedasticity, the only redefinition needed is to switch from a linear to __________ functional form.

constant

In the linear model the slopes are __________ but the elasticities are not.

without

It's possible to have large multicollinearity effects _________ having a large V.I.F.

time

Lagged independent variables can be used when X is expected to impact Y after a period of _______.

neither

Left-Hand Semi-log Form has _________ a constant slope nor a constant elasticity.

Y (dependent)

In the Left-Hand Semi-log Form, the logarithm is used on the _________ variable.

strength

The magnitude of rho (ρ) indicates the ___________ of the serial correlation.

effects

Major consequences of multicollinearity: the variances and standard errors of the estimates increase, and it becomes difficult to precisely identify the separate _________ of the multicollinear variables.

specification

Major consequences of multicollinearity: 4. Estimated coefficients will become sensitive to changes in ____________.

unaffected

Major consequences of multicollinearity: The overall fit of the equation and estimation of the coefficients of nonmulticollinear variables will be largely ___________.

t-scores

Multicollinearity tends to decrease _______, mainly because of the formula for the t-statistic: t = B/SE(B).

betas

Newey-West standard errors take account of the serial correlation by changing the estimated standard errors without changing the estimated __________.
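
A minimal sketch of this point using statsmodels' HAC (Newey-West) covariance option with simulated AR(1) errors; the lag choice of 4 is arbitrary:

```python
# Minimal sketch: Newey-West (HAC) standard errors leave the betas unchanged
# and only correct the standard errors. AR(1) errors are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 200
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # serially correlated errors
y = 1 + 0.5 * x + e
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                         # uncorrected
nw = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})   # Newey-West
print(np.allclose(ols.params, nw.params))  # True: same estimated betas
print(ols.bse, nw.bse)                     # different standard errors
```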

lag

Not all situations imply a simultaneous relationship between the dependent and independent variables. In many cases, time elapses between a change in an independent variable and its effect on the dependent variable. The period of time between the cause and the effect is called a ______.

perfect

OLS is incapable of generating estimates of regression coefficients where _________ multicollinearity is present.

BLUE

OLS is still ______ with multicollinearity.

nonstationarity

One cause of spurious correlation is _______________.

VI

Perfect multicollinearity is a violation of CA _____.

movements

Perfect multicollinearity is the case where the variation in one explanatory variable can be completely explained by ___________ in another explanatory variable.

independent

Polynomial functional forms express Y as a function of the __________ variables, X, some of which are raised to powers other than one.

bias

Pure heteroskedasticity does NOT cause _______ in the coefficient estimates.

nothing

Remedies for Multicollinearity: Remedy 1: Do _______

redundant

Remedies for Multicollinearity: Remedy 2: Drop a __________ variable

Increase

Remedies for Multicollinearity: Remedy 3: ______________ the size of the sample

natural

Right hand Semi-log functional form is a variant of the double-log equation in which some but NOT all of the variables are expressed in __________ log.

nonstationarity

The standard method of testing for ___________ is the Dickey-Fuller test.

IV

Serial correlation breaks CA ____.

overestimates

Serial correlation causes the OLS estimates of the standard errors, SE(B), to be biased, leading to unreliable hypothesis testing. Typically, OLS underestimates SE(B) and thus ________________ the t-scores.

normal

Serial correlation in a distributed lag model causes the _________ effects: no bias in the coefficients, OLS is no longer the minimum variance estimator, and bias in the standard errors.

bias

Serial correlation in a dynamic model causes ____.

bias

Serial correlation in a dynamic model causes ____ in the coefficients produced by OLS.

III

Serial correlation in a dynamic model violates Classical Assumption ______, which assumes the error term is not correlated with any of the explanatory variables.

discrete

Simplest way to visualize pure heteroskedasticity is to picture a world in which observations of error term can be grouped into two distributions: "wide" and "narrow." This case is referred to as __________ heteroskedasticity.

Lagrange

Since serial correlation causes bias in dynamic models, it is important to test for it; the __________ Multiplier (LM) test is valid.

random walk

Some variables are nonstationary because they rapidly increase over time. Adding a time trend to the regression model can help avoid spurious regression in this case. Unfortunately, for many variables, this does not alleviate nonstationarity. Nonstationarity often takes the form of a "_______________"

causal

Spurious correlation is a strong relationship between two or more variables that is NOT caused by a real underlying _________ relationship

TSS

Suppose you were trying to compare a linear equation with a semi-log version of the same equation: the ______ of the dependent variable is not the same in the two models, so you cannot use Adj-R^2 to compare them.

violation

Suppressing the constant term leads to a _________ of the Classical Assumptions.

auxiliary

The White Test: The test statistic is N x R2, and the degrees of freedom equal the number of slope coefficients in the _____________ equation.

reported

The Dickey-Fuller critical values depend on the version of test used and are _________ by Stata.

unit

The Dickey-Fuller test examines the null hypothesis that the variable in question has a ______ root.

intercept

The Durbin-Watson d test is only applicable if the following three assumptions are met: 1. The regression model includes an _______ term.

normally distributed

The Durbin-Watson d test is only applicable if the following three assumptions are met: 2. The serial correlation is first-order in nature, where rho (ρ) is the autocorrelation coefficient and u is a _________ _________ error term.

dependent (i.e., Yt-1)

The Durbin-Watson d test is only applicable if the following three assumptions are met: 3. The regression model does not include a lagged ___________ variable (i.e., Yt-1) as an independent variable.

first

The Durbin-Watson d test is used to determine if there is _________-order serial correlation.

inconclusive

The Durbin-Watson test has three regions: an acceptance region, a rejection region, and an ______________ region.

negative

The Durbin-Watson test is unusual in two respects: econometricians almost never test for ___________ serial correlation.

residuals

The LM test involves three steps (assume an equation with two independent variables): 1. Obtain __________ from the estimated equation. 2. Specify the auxiliary equation: et = α0 + α1X1t + α2X2t + α3e(t-1) + ut. 3. Use OLS to estimate the auxiliary equation and test the null hypothesis that α3 = 0 with the test statistic LM = N x R^2.

auxiliary

The LM test involves three steps (assume an equation with two independent variables): 1. Obtain residuals from the estimated equation. 2. Specify the ____________ equation: et = α0 + α1X1t + α2X2t + α3e(t-1) + ut. 3. Use OLS to estimate the auxiliary equation and test the null hypothesis that α3 = 0 with the test statistic LM = N x R^2.

α3 = 0

The LM test involves three steps: 1. Obtain residuals from the estimated equation. 2. Specify the auxiliary equation: et = α0 + α1X1t + α2X2t + α3e(t-1) + ut. 3. Use OLS to estimate the auxiliary equation and test the null hypothesis that __________ with the test statistic LM = N x R^2.
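
A "by hand" sketch of the three LM steps above, with simulated data and hypothetical variable names; the auxiliary regression adds the lagged residual to the original X's, and LM = N x R^2 is compared to a chi-square critical value with one degree of freedom:

```python
# "By hand" sketch of the LM test for serial correlation (simulated data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
T = 200
x1, x2 = rng.normal(size=T), rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.5 * e[t - 1] + rng.normal()   # first-order serial correlation
y = 1 + 2 * x1 - x2 + e

# Step 1: estimate the original equation and keep the residuals
X = sm.add_constant(np.column_stack([x1, x2]))
resid = sm.OLS(y, X).fit().resid

# Step 2: auxiliary equation with the lagged residual as an extra regressor
X_aux = sm.add_constant(np.column_stack([x1[1:], x2[1:], resid[:-1]]))
aux = sm.OLS(resid[1:], X_aux).fit()

# Step 3: LM = N x R^2, chi-square with 1 degree of freedom
LM = len(resid[1:]) * aux.rsquared
print(LM, stats.chi2.ppf(0.95, df=1))  # reject H0 (no serial correlation) if LM
                                       # exceeds the critical value
```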

original

The Lagrange multiplier (LM) test tests for serial correlation by analyzing how well the lagged residuals explain the residual of the _____________ equation in an equation that also includes all the original explanatory variables.

significant

The Lagrange multiplier (LM): if the lagged residuals are jointly statistically ___________, then the null hypothesis of no serial correlation is rejected, implying that there is evidence of serial correlation.

iterative

The Prais-Winsten method is a two-step, _________ technique.

up

The White test has the following weakness: As the number of explanatory variables in original regression rises, the number of right hand variables in the White test auxiliary equation goes ________ much faster.

Estimate

The White test has three steps: Step 1: ___________ the original equation and obtain the residuals.

square

The White test has three steps: Step 2: Use the squared residuals as the dependent variable in an auxiliary equation with each X from the original equation, the _________ of each X, and the product of each X times every other X as explanatory variables.

chi-square

The White test has three steps: Step 3: Test the overall significance of the auxiliary equation with a ________ test (as in the Breusch-Pagan test). H0: α1 = α2 = α3 = α4 = α5 = 0 (homoskedastic errors); HA: H0 is false.

independent

The White test investigates whether the squared residuals can be explained by the equation's ____________ variables, their squares, and their cross-products.
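
A minimal sketch of the White test using statsmodels' het_white, which builds the auxiliary regression of squared residuals on the X's, their squares, and their cross-products; the data are simulated:

```python
# Minimal sketch of the White test via statsmodels' het_white (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(7)
x1, x2 = rng.normal(size=300), rng.normal(size=300)
y = 1 + x1 + x2 + rng.normal(scale=1 + x1 ** 2)   # heteroskedastic errors
X = sm.add_constant(np.column_stack([x1, x2]))

results = sm.OLS(y, X).fit()
lm_stat, lm_pval, f_stat, f_pval = het_white(results.resid, X)
print(lm_stat, lm_pval)  # N x R^2 from the auxiliary regression on the X's,
                         # their squares, and cross-products; a small p-value
                         # rejects homoskedasticity
```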

double-log

The ___________ functional form is when the natural log of Y is the dependent variable and the natural log of X is the independent variable(s).

elasticities

The ____________ in a double-log equation are constant and the slopes are not.

model

The consequences of serial correlation depend on the type of _______: distributed lag or dynamic.

bias

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 1. Pure serial correlation does not cause _____ in the coefficient estimates.

estimates

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 1. Pure serial correlation does not cause bias in the coefficient ____________.

minimum variance

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 2. Serial correlation causes OLS to no longer be the _____________ ___________ estimator of all the linear unbiased estimators.

standard errors

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 3. Serial correlation causes the OLS estimates of the ___________ _________ (SE(B)) to be biased, leading to unreliable hypothesis testing. Typically, OLS underestimates SE(B) and thus overestimates the t-scores.

biased

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 3. Serial correlation causes the OLS estimates of the standard errors (SE(B)) to be _________, leading to unreliable hypothesis testing. Typically, OLS underestimates SE(B) and thus overestimates the t-scores.

unreliable

The existence of serial correlation in the error term of an equation violates CA IV, and the estimation of the equation with OLS has at least three consequences: 3. Serial correlation causes the OLS estimates of the standard errors (SE(B)) to be biased, leading to _________ hypothesis testing. Typically, OLS underestimates SE(B) and thus overestimates the t-scores.

severe

The higher the VIF, the more _________ the effects of multicollinearity, but there are no formal critical VIF values.

slope

The linear regression model is based on the assumption that the ________ of the relationship between the dependent variable Y and the independent variable X is constant.

significant

The overall fit of the equation and the estimation of the coefficients of nonmulticollinear variables will be largely unaffected: the overall fit (Adj-R^2) will not fall much, if at all, with __________ multicollinearity.

high

The overall fit of the equation and estimation of the coefficients of nonmulticollinear variables will be largely unaffected: Combination of ______ Adj-R^2 and no statistically significant variables is an indication of multicollinearity.

reject

The overall fit of the equation and estimation of the coefficients of nonmulticollinear variables will be largely unaffected: It is possible for an F-test of overall significance to ________ the null even though none of the individual t-tests do.

decreasing

The right-hand semi-log form should be used when the relationship between X1 and Y is hypothesized to have an "increasing at a _____________ rate" form.

> <

The sign of rho (ρ) indicates the nature of the serial correlation in an equation: Positive Serial Correlation: ρ __ 0; Negative Serial Correlation: ρ __ 0.

intercept

The slope dummy changes both the ____________ and the slope

slope

The slope dummy changes both the intercept and the _________.

e^b = x

The symbol for a natural log is "ln," so ln(x) = b means that ________ (equation).

overstated

The t-scores and overall fit of such spurious regressions are likely to be ____________ and untrustworthy.

cointegrated

The time-series Xt and Yt are ____________ when, even though Xt and Yt are nonstationary, it's possible for linear combinations of these nonstationary variables to be stationary.

together

The use of an incorrect functional form tends to group positive and negative residuals _________, causing positive impure serial correlation.

without

The use of r to detect multicollinearity has a major limitation: Groups of variables acting together can cause multicollinearity even _______ any single simple correlation coefficient being high.

distributed lag

There are a number of different tests for Granger causality. They all involve ____________ _______ models in one form or another.

Improving

There are essentially three strategies for attempting to rid a dynamic model of serial correlation: 1. ___________ the specification 2. Instrumental variables 3. Modified GLS

unbiased

There are five major consequences of multicollinearity: 1. Estimates will remain ________.

increase

There are five major consequences of multicollinearity: 2. The variances and standard errors of the estimates will ________.

White

There are many tests for heteroskedasticity; two popular: 1. Breusch-Pagan test 2. ________ test

BP

There are many tests for heteroskedasticity; two popular: 1. _____ test 2. White test

true

There are no generally accepted, _______ statistical tests for multicollinearity.

Newey-West

There are two main remedies for pure serial correlation: 1. Generalized Least Squares (GLS) 2. _____________ standard errors

Generalized

There are two main remedies for pure serial correlation: 1. _____________ Least Squares (GLS) 2. Newey-West standard errors
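
A minimal sketch of the GLS remedy, assuming statsmodels' GLSAR (an iterative Cochrane-Orcutt-style feasible GLS, a close relative of the Prais-Winsten method mentioned elsewhere) is an acceptable stand-in; the data are simulated with AR(1) errors:

```python
# Minimal sketch of feasible GLS for first-order serial correlation via GLSAR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 200
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1 + 0.5 * x + e
X = sm.add_constant(x)

gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)  # estimate rho, then GLS
print(gls.model.rho)  # estimated first-order autocorrelation coefficient
print(gls.params)     # slope comparable to beta1 in the original equation
```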

Durbin-Watson

There are two main ways to detect serial correlation: Formal: testing for serial correlation using the _______-_________ d test.

residuals

There are two main ways to detect serial correlation: Informal: observing a pattern in the ___________.

Simple correlation coefficient

Two common tests for multicollinearity: 1. __________ ____________ ____________ 2. Variance inflation factors (VIFs)

points

Time series data involve a single entity over multiple _________ in time.

stochastic error

Time-series have some characteristics that make them more difficult to deal with than cross-section. The _____________ ________ term in a time-series is often affected by events that took place in a previous time period. This is called serial correlation!

fixed

Time-series have some characteristics that make them more difficult to deal with than cross-section. The order of observations in a time series is _________.

complex

Time-series have some characteristics that make them more difficult to deal with than cross-section. The theory underlying time-series analysis can be quite ___________.

smaller

Time-series have some characteristics that make them more difficult to deal with than cross-section. Time-series samples tend to be much ________ than cross-sectional ones.

spurious

To ensure equations are not ________, we must test for nonstationarity.

sample size

To test for positive serial correlation using the Durbin-Watson test, determine the __________ ________, T, and the number of explanatory variables, K, and then use Statistical Table B-4 to find the upper critical d value (dU) and the lower critical d value (dL).

OLS residuals

To test for positive serial correlation using the Durbin-Watson test, the following steps are required: Obtain the _____ __________ from the equation to be tested and calculate the d statistic.

reject

To test for positive serial correlation using the Durbin-Watson test, the following steps are required: Set up the test hypotheses and decision rule. Hypotheses: H0: ρ ≤ 0 (no positive serial correlation); HA: ρ > 0 (positive serial correlation). If d < dL, _________ the null (H0); thus, there is evidence of positive serial correlation.
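
A minimal sketch of the Durbin-Watson steps above: compute d from the OLS residuals with statsmodels and compare it to the dL/dU bounds, which still have to be looked up in Table B-4. Data are simulated:

```python
# Minimal sketch: Durbin-Watson d statistic from OLS residuals (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(9)
T = 100
x = rng.normal(size=T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = 0.6 * e[t - 1] + rng.normal()   # positive serial correlation
y = 1 + 2 * x + e

resid = sm.OLS(y, sm.add_constant(x)).fit().resid
d = durbin_watson(resid)
print(d)  # d near 0: positive serial correlation; near 2: none; near 4: negative
```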

small

Two alternatives for correcting serial correlation in a dynamic model: 2. In a _______ sample, use OLS in the face of serial correlation.

distributed lag

Two alternatives for correcting serial correlation in dynamic models: 1. Estimate a ___________ _____ model.

severity

Variance inflation factor (VIF) is a method of detecting the __________ of multicollinearity by looking at the extent to which a given variable can be explained by all the other variables in an equation.

all

Variance inflation factor (VIF) is a method of detecting the severity of multicollinearity by looking at the extent to which a given explanatory variable can be explained by ____ other explanatory variables in an equation.
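
A minimal sketch of the VIF idea with statsmodels' variance_inflation_factor; x1 and x2 are simulated to be nearly (but imperfectly) collinear, so their VIFs come out large:

```python
# Minimal sketch of variance inflation factors (simulated data).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(10)
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.1, size=300)   # almost a linear function of x1
x3 = rng.normal(size=300)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

for i in range(1, X.shape[1]):              # skip the constant column
    print(variance_inflation_factor(X, i))  # each VIF regresses one X on the others
```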

t

We use __ instead of i to denote time series data.

cointegrated

When performing the Dickey-Fuller test for cointegration, if we reject the null hypothesis (τ ≤ τc) for the residuals, then the Yt and Xt series are ____________.
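
A minimal sketch of an Engle-Granger-style cointegration check (a Dickey-Fuller-type test on the residuals), assuming statsmodels' coint is an acceptable stand-in; the two series are simulated to share a common random-walk component:

```python
# Minimal sketch of a residual-based cointegration test via statsmodels' coint.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(11)
common = np.cumsum(rng.normal(size=300))   # shared nonstationary component
x = common + rng.normal(size=300)
y = 2 * common + rng.normal(size=300)

t_stat, pvalue, crit_values = coint(y, x)
print(t_stat, crit_values)  # reject "no cointegration" when the statistic is
                            # at or below the critical value (small p-value)
```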

constant

Where perfect multicollinearity is present you cannot "hold all the other independent variables in the equation _________."

between

With multicollinearity, adding/dropping variables and/or observations will often cause major changes in the beta estimates; this occurs because OLS is forced to emphasize small differences ____________ variables in order to distinguish the effect of one multicollinear variable from another.

proportionality

Z is called a _____________ factor.

r

The range of ___ is +1 to -1, and the sign of r indicates the direction of the correlation.

Impure

_________ Serial Correlation is serial correlation caused by a specification error.

Pure

_________ heteroskedasticity—referred to as heteroskedasticity—occurs when CA V is violated (in a correctly specified equation).

Magnitude

The __________ of rho (ρ) indicates the strength of the serial correlation.

Interaction

____________ terms can involve: -Two quantitative variables -Two dummy variables or -Quantitative variable and dummy variable

autocorrelation

Rho (ρ) is the first-order _______________ coefficient.

relationship

Rho (ρ) measures the functional ___________ between the value of an observation of the error term and the value of a previous observation of the error term.

positive

The Durbin-Watson d statistic for T observations means: Extreme ___________ serial correlation: d = 0.

negative

The Durbin-Watson d statistic for T observations means: Extreme ___________ serial correlation: d ≈ 4.

no

The Durbin-Watson d statistic for T observations means: _____ serial correlation: d ≈ 2.

power

The natural log of 7.389 is 2. Why? Because 2 is the _________ of e that produces 7.389.

