# Econometrics

How to do an F-Test with R^2

F = ((R^2u - R^2r)/q) / ((1 - R^2u)/(n - k)) *q = number of restrictions; n - k = DoF of the unrestricted model
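A quick numeric check of the formula; the R^2 values and sample dimensions below are hypothetical:

```python
def f_from_r2(r2_u, r2_r, q, n, k):
    """F-statistic from the R^2 of the unrestricted (u) and restricted (r)
    models: q = number of restrictions, n - k = residual DoF of the
    unrestricted model."""
    return ((r2_u - r2_r) / q) / ((1 - r2_u) / (n - k))

# hypothetical regression output: dropping 2 variables lowers R^2 from 0.60 to 0.55
F = f_from_r2(r2_u=0.60, r2_r=0.55, q=2, n=100, k=5)
```

Compare F against the critical value of F(q, n - k) at the chosen significance level.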

Dummy Variables:

*Additive dummy variables vertically shift the line
*Multiplicative dummy variables rotate the line
*Interactive dummy variables combine two dummy variables together

Implications and causes of Endogenous Variables:

*Causes: measurement error, autoregression with autocorrelated errors, simultaneity, and omitted variables
*Effects: estimators are biased and inconsistent (with only a lagged dependent variable and no serial correlation, OLS is biased but consistent)
-Use Newey-West SEs or IV estimation

Weak Stationarity:

*Holds when the mean and variance are constant over time, and the covariance between two observations depends only on how far apart they are
• E(y) = µ/(1 - ø)
• V(y) = sigma^2/(1 - ø^2)

How are IV regression SE different?:

*IV gives standard errors which are much bigger than OLS standard errors
*If the first-stage R^2 is 1 (the instruments predict x perfectly), then V(bOLS) = V(bIV)
• V(bOLS) = sigma^2/(summation of x^2)
• V(bIV) = sigma^2/(R^2 * summation of x^2)

What is the Economic Interpretation of the Error Correction Term in an ECM?

*If ECT > 0, X(t-1) > predicted X (above equilibrium)
*If ECT < 0, X(t-1) < predicted X (below equilibrium)
*A coefficient of B on the error correction term implies roughly a 100*B% adjustment toward equilibrium in each period (e.g. each quarter)

Endogeneity:

*When an explanatory variable is correlated with the error term
*Test whether any of the explanatory variables have a non-zero covariance with the error term
*IV regression or Newey-West SEs to address it

How to test for Instrument Relevance?:

-Regress the endogenous variable x on the instruments (and the exogenous variables), and test the joint significance of the coefficients on the instruments

List of all Tests

1) Chow Test for Structural Change
2) 2nd Chow Test for Predictive Failure
3) White's Test for Heteroscedasticity
4) Breusch-Godfrey (LM) Test for Serial Correlation
5) Durbin-Watson Test for Serial Correlation
6) Augmented Dickey-Fuller Test for a Unit Root
7) J-Test for Instrument Exogeneity
8) Hausman-Wu Test for Endogeneity of a RHS Variable
9) ADF with adjusted MacKinnon CVs for Cointegration
10) Ramsey RESET Test for Functional Misspecification
11) Hausman Test for FE vs. RE

OLS advantages compared to LDV:

1) Coefficients are easy to interpret
2) The model is easy to estimate
3) IV estimation is possible

Time Series Assumptions:

1) Contemporaneous Exogeneity: the errors are uncorrelated with the regressors in the same time period
2) Stationarity: the mean and variance cannot change over time, and the covariance between one period and an earlier period must depend only on the gap between them
3) Weak Dependence: the correlation between today's value and a previous value must go to zero as the gap grows
4) The OLS estimator is not unbiased, but it is consistent (need more than ~40 observations)
• Random sampling is replaced with stationarity and weak dependence; the model can be estimated by OLS if both hold

What are the problems with Limited Dependent Variable Models?:

1) The error term is not normal, as it can only take on two values (a two-point distribution)
2) The error term is heteroscedastic
3) The predicted probability is not bounded between 0 and 1

How to determine lag length in ADF/Unit Root?:

1) No more than the cube root of N
2) The number of significant lags in the PACF
3) Look at the graph
*If the lag length is too large, the power of the test will suffer
*If the lag length is too small, serial correlation left in the errors will bias the test
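The cube-root rule of thumb above can be written as a tiny helper (the epsilon nudge is just a floating-point guard for perfect cubes):

```python
import math

def max_adf_lags(n):
    """Rule-of-thumb cap on ADF lag length: no more than the cube root of
    the sample size. The 1e-9 nudge guards against floating-point error
    when n is a perfect cube (e.g. 64 ** (1/3) evaluating just below 4)."""
    return math.floor(n ** (1 / 3) + 1e-9)

lags_100 = max_adf_lags(100)  # cube root of 100 is about 4.64
lags_27 = max_adf_lags(27)    # cube root of 27 is exactly 3
```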

What is the difference between Pooled OLS, FE and RE?:

1) RE and Pooled OLS are more efficient, but are biased when the covariance between ai (the individual effects) and x is non-zero
*FE doesn't pool data together for estimation on individuals with few observations, and is therefore always consistent
*RE and Pooled OLS can't be used when cov(ai, x) ≠ 0, because they are then inconsistent

Three S's to discuss with results involving regression coefficients:

1) Sign 2) Size 3) Significance

What 5 steps accompany any hypothesis test?:

1) Specify H0 2) Specify H1 3) Specify alpha 4) Calculate test statistic & p-value 5) Conclude

OLS Assumptions:

1) Strict Exogeneity: the error term is independent of the x variables
2) Homoscedasticity: V(e|x) = sigma^2
3) Normality: the distribution of the error terms is normal
4) Orthogonality: the covariance between x and the residuals is 0; the error term is independent of the regressors

R^2:

R^2 = 1 - RSS/TSS = ESS/TSS
• Not a useful mechanism for comparing models, because you can always make it larger by throwing in any arbitrary variable

How to test for Predictive Failure?:

2nd Chow Test: add dummy variables that come into effect after observation n, and test whether their coefficients are jointly equal to zero *(Make sure to allow for a default category with the dummy variables)

Spurious Regression:

A relationship in which two or more variables are not related to each other, yet it may be wrongly inferred that they are, due either to coincidence or to the presence of an unseen third factor *If you regress two purely unrelated series on each other, you will find a "significant" relationship about 20% of the time

Fixed Effects Model:

A model with different intercepts for each individual but the same slopes (the fixed effects are constant over time) *Cannot include time-invariant variables *Less efficient than RE, but consistent even when cov(ai, x) is non-zero

AutoCorrelation Function:

A pictorial representation of serial correlation over time, measured as the correlation between z_t and z_{t-k} for different values of the lag k: Corr(z_t, z_{t-k})

White Noise Process:

A process with the same mean and the same variance at every date, and no correlation over time *z_t = µ + e_t

Partial AutoCorrelation Function:

A representation of serial correlation over time that picks up only the partial effect of z_{t-j} on z_t, holding constant z_{t-1}, z_{t-2}, ..., z_{t-j+1}

Durbin-Watson Statistic:

A test for serial correlation
*DW statistic ≈ 2 - 2(summation of e_t*e_{t-1})/(summation of e_t^2)
*Ho: ø = 0; H1: ø > 0 (or ø < 0)
*0 < DW < 2 suggests positive serial correlation; 2 < DW < 4 suggests negative serial correlation
• Breusch-Godfrey overtook Durbin-Watson because of the inconclusive regions
• Inconclusive region between dl and du (for positive SC)
• Inconclusive region between 4-du and 4-dl (for negative SC)
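A minimal sketch of the exact DW statistic computed from a residual series (the residual values are made up to illustrate the two cases):

```python
def durbin_watson(e):
    """Exact Durbin-Watson statistic: sum of squared first differences of
    the residuals over the sum of squared residuals. Approximately equal
    to 2 - 2*rho_hat, where rho_hat is the first-order residual
    autocorrelation."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(v ** 2 for v in e)

# positively correlated residuals hover near the same value -> DW near 0
dw_pos = durbin_watson([1.0, 0.9, 1.1, 1.0, 0.8, 1.2])
# alternating residuals are negatively correlated -> DW near 4
dw_neg = durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
```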

Time heterogeneity:

Allows for differences over time *(Differences between time periods) *Captures anything specific to a time period that doesn't vary across individuals

Error Correction Model:

An approach for estimating both the short-run and long-run effects of one time series on another; it directly estimates the speed at which the dependent variable returns to equilibrium after a change in the other variables, exploiting the fact that last period's deviation from the long-run equilibrium (its "error") influences the short-run dynamics
ECM: ∆y_t = a + ø∆y_{t-1} + ∂∆x_{t-1} + λe_{t-1} + u_t, where λe_{t-1} is the error correction term
*The coefficient on the error term will always be negative, because a value below equilibrium should positively influence next period's change (and a value above should negatively influence it)
*The coefficient on the disequilibrium term measures the speed of getting back to equilibrium

How to choose order of test for Serial Correlation?:

Annual (p=1), quarterly (p=4), monthly (p=12)

Random Effects Model:

Assumptions: the usual OLS assumptions, plus **1) individuals' shocks are unrelated to their x variables: cov(ai, x) = 0 *Exploits the non-zero covariance in the error terms induced by the shocks (ai) to transform the equation, such that the transformed error term is serially uncorrelated within i

Why do the Mackinnon Critical Values have to be adjusted in a test for Cointegration?:

Because OLS finds the minimum of the RSS, and hence the most stationary-looking residuals, so a test with unadjusted critical values would reject non-stationarity too often

Why is there a default category with Dummy Variables?:

Because of perfect collinearity; the default category is a linear function of all the other dummies, and you can't include the same information twice

How to test for Serial Correlation?:

Breusch-Godfrey (LM), or Durbin-Watson

Individual heterogeneity:

Captures anything specific about an individual that doesn't change over time *(Different between individuals)

How to test for Structural change?:

Chow Test: test that the parameters of equations 2a (1→n) and 2b (n+1→N) taken together are the same as the parameters of equation 1 (1→N, the restricted model) *F-test with 1 = restricted, 2a+2b = unrestricted

Newey-West Standard Errors:

Consistent in the presence of heteroscedasticity and serial correlation *Roughly OLS SE * (1 + f), where f reflects the correlation in the errors: negative correlation shrinks the SEs (-f), positive correlation inflates them (+f)

Weak Dependence:

Correlation between today's value of Y and a previous value of Y must go to zero as the gap grows *An assumption for time series data

Most Important thing to do with Dummy Variables when putting them in an equation:

Define what the Dummy Variables are!

How to test for Non-Stationarity (Unit Root):

Dickey-Fuller test for a Unit Root:
*Regress ∆Y on the lagged level Y(t-1), (x) lagged values of ∆Y for a test of order (x), and an error term
*Test whether the coefficient ∂ on the lagged level equals 0
*Ho: ∂ = 0 (unit root); H1: ∂ < 0 (stationarity)
*Use MacKinnon critical values
Model A: no constant/no trend; Model B: constant/no trend; Model C: constant/trend
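A minimal numpy sketch of the Model B (constant, no trend, no lagged differences) Dickey-Fuller regression; the simulated AR(1) series and its coefficient of 0.2 are made up for illustration, and the resulting t-ratio would still need to be compared against MacKinnon critical values:

```python
import numpy as np

def dickey_fuller(y):
    """Model B Dickey-Fuller regression: delta_y_t = a + delta*y_{t-1} + e_t.
    Returns the estimated delta and its t-ratio (compare the t-ratio
    against MacKinnon critical values, not standard t tables)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                     # dependent variable: first differences
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # constant + lagged level
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ b
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
    return b[1], b[1] / se[1]

# simulate a strongly mean-reverting (stationary) AR(1) with phi = 0.2
rng = np.random.default_rng(0)
e = rng.standard_normal(200)
y_stat = np.empty(200)
y_stat[0] = e[0]
for t in range(1, 200):
    y_stat[t] = 0.2 * y_stat[t - 1] + e[t]

delta, tratio = dickey_fuller(y_stat)   # delta should be clearly negative
```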

Bias:

E(b1) = B1 + B2*cov(x,z)/var(x), where z is the omitted variable and B2 is its coefficient *The sign of the bias depends on the sign of B2 and the sign of the covariance

Strict Exogeneity in Time Series:

The error term is uncorrelated with the regressors in every time period (past, present, and future)

Contemporaneous Exogeneity:

The errors are uncorrelated with the regressors in the same time period

Breusch-Godfrey test for Serial Correlation:

Estimate the residuals from the original equation, then regress them on a constant, all variables, and (x) lagged residuals for a test of order (x), and test whether the coefficients on the lagged residuals equal 0
*H0: ∂ = 0 (no serial correlation); H1: ∂ ≠ 0 (serial correlation)
*LM test: uses n*R^2 ~ X^2, where n = the number of observations used to estimate the regression on the residuals
*Apparent serial correlation could be caused by an omitted relevant variable

Central Limit Theorem:

The standardized sum (or mean) of enough independent draws approaches a normal distribution, whatever the underlying distribution (a common rule of thumb is at least 30) *As the DoF go to infinity, the t distribution becomes the Z distribution

How to go from T-ratio to F-statistic?:

The t-ratio squared is the F-statistic (for a single restriction); find the F-statistic, then solve for the RSS

Moving Average Process:

Goldfish! Z_t = ∂e_{t-1} + e_t *Remembers for only 1 period: the ACF cuts off after lag 1

How to test for Endogeneity?:

Hausman-Wu Test: tests the endogeneity of a right-hand-side variable X
*Save the residuals from the instrument relevance (first-stage) regression of the suspect variable on the exogenous variables and the instruments, add them to the original equation, and test whether their coefficient equals 0
*H0: ∂ = 0 (exogenous); H1: ∂ ≠ 0 (endogenous)

The order of a series:

How many times a series needs to be differenced to be stationary

Weak Instruments:

Instruments are weak if an F-test on them gives a value less than 10; F > 10 implies a bias of less than about 10% *F-test for the joint significance of the coefficients on the instruments in a regression of the troublesome variable on all other variables and the instruments

Durbins H-test:

*Better than the DW statistic when there is a lagged dependent variable
*ø̂ = (summation of e_t*e_{t-1})/(summation of e_t^2), i.e. ø̂ ≈ 1 - DW/2
*h = ø̂ * (n/(1 - n*(OLS variance on the LDV coefficient)))^0.5
*Separate rule of thumb: if the DW statistic is smaller than the R^2, the regression is most likely spurious
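A small sketch of Durbin's h computed from a reported DW statistic; the DW value, sample size, and coefficient variance below are hypothetical:

```python
import math

def durbins_h(dw, n, var_b_ldv):
    """Durbin's h statistic: rho_hat * sqrt(n / (1 - n*var_b_ldv)), where
    rho_hat is recovered from the DW statistic as 1 - DW/2 and var_b_ldv
    is the estimated OLS variance of the coefficient on the lagged
    dependent variable. Only defined when n * var_b_ldv < 1."""
    rho_hat = 1 - dw / 2
    return rho_hat * math.sqrt(n / (1 - n * var_b_ldv))

h = durbins_h(dw=1.8, n=100, var_b_ldv=0.004)  # hypothetical regression output
```

Compare h against standard normal critical values (it is asymptotically N(0,1) under no serial correlation).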

Gauss-Markov Theorem:

In a linear regression model where the errors are uncorrelated, have equal variance, and E(e) = 0, the BLUE (Best Linear Unbiased Estimator) of the coefficients is given by OLS (Ordinary Least Squares) *Best = giving the lowest variance among linear unbiased estimators

What are Dynamic Models, and what is their Complication?:

Dynamic models include lags of the dependent and independent variables in panel data *Complication: the lagged dependent variable is correlated with the error term, regardless of whether that error term is itself autocorrelated

IV Assumptions:

Instrument Exogeneity: instruments are unrelated to the error term, cov(z, e) = 0
Instrument Relevance: instruments must have a non-zero correlation with the endogenous variables
• Want an F-test value larger than 10 on the relevance test, otherwise you have weak instruments: approximately E(b) = B + 1/F

What is the advantage of Panel Data?:

It allows you to control for unobserved heterogeneity

How to test for Instrument Exogeneity?:

J-Test: estimate the error term from the IV regression, then regress it on all the variables and the instruments, testing the hypothesis that the coefficients on the instruments equal zero
H0: ∂ = 0 (exogenous); H1: ∂ ≠ 0 (endogenous)
J = mF ~ X^2(m - g), where m = number of instruments, g = number of RHS endogenous variables, and F = the F-statistic from the error-term regression
*Can only be tested if the number of instruments is strictly greater than the number of RHS endogenous regressors

How to Interpret Coefficients?:

Linear-Linear: • change in y from a unit increase in x
Linear-Log: • change in y of b/100 from a 1% increase in x
Log-Linear: • 100*b% change in y from a unit increase in x • 100(exp(b) - 1) exactly; the approximation is fine up to about 10% (the 10% rule!)
Log-Log: • b% change in y for a 1% change in x
Watch out for quadratics! • y = Bx + Ax^2 • B + 2Ax = change in y from a unit increase in x
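The exact log-linear interpretation can be checked numerically; the coefficient value below is hypothetical:

```python
import math

def log_linear_pct_change(b, dx=1.0):
    """Exact % change in y for a dx-unit increase in x in a log-linear
    model: 100*(exp(b*dx) - 1). The usual 100*b reading is only an
    approximation, good for small coefficients (the 10% rule)."""
    return 100 * (math.exp(b * dx) - 1)

exact = log_linear_pct_change(0.05)   # exact % effect of a unit increase
approx = 100 * 0.05                   # the 100*b approximation
```

For b = 0.05 the two agree closely; for large coefficients the gap grows and the exact formula should be used.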

Measurement Errors:

Measurement errors in explanatory variables cause the coefficient estimates to be both biased and inconsistent *For the estimator to be unbiased, the measurement error v would need E(v) = 0 and Cov(v, x) = 0

Covariance:

Measures the average cross product of deviations of x around its mean with deviations of y around its mean *Cov(x, y) = E[(x - µx)(y - µy)] = E(xy) - E(x)E(y)

How to Calculate RSS:

Multiply the Variance (or SE^2) of the regression by the DoF

How to Calculate TSS

Multiply the Variance of the Dependent Variable by N-1
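The RSS and TSS recipes above combine into a quick way to recover R^2 from regression summary output; the numbers below are hypothetical:

```python
def r_squared_from_summary(se_reg, dof, var_y, n):
    """Recover R^2 from summary statistics:
    RSS = SE_regression^2 * DoF, TSS = Var(y) * (n - 1), R^2 = 1 - RSS/TSS."""
    rss = se_reg ** 2 * dof
    tss = var_y * (n - 1)
    return 1 - rss / tss

# hypothetical output: SE of regression 2.0 with 96 DoF, Var(y) = 10, n = 100
r2 = r_squared_from_summary(se_reg=2.0, dof=96, var_y=10.0, n=100)
```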

What happens to coefficients and SE when a variable (x) is divided by C?:

Multiply that coefficient by C; all other coefficients are unchanged *The SE of that coefficient is also multiplied by C, so its t-ratio is unchanged

Is R^2 a useful mechanism for distinguishing between models?

No, because you can always make it larger by adding in new variables

How to test for Incorrect Functional Form?:

Ramsey RESET Test for functional misspecification
-Run the normal regression, save the fitted values ŷ, then regress y on all variables plus ŷ^2, ŷ^3, ..., and do an F-test on the joint significance of the coefficients on ŷ^2, ŷ^3, ...
-With one independent variable, all you need to do is regress y on x and x^2, then t-test whether the coefficient on x^2 is significant
*This test has very low power, implying that even if you pass it, you may still have problems of misspecification/incorrect functional form

Stationarity:

Requires that the mean and variance can not change over time, and that the covariance between one year and a previous year must be the same across time

Pooled OLS:

Same intercept and same slope *Estimates are consistent for large N, T or N&T

Correlation:

Scale free measure of covariance *Corr(x,y)= Cov(x,y)/(Var(x)*Var(Y))^.5
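The covariance and correlation formulas above in a self-contained sketch, using a tiny made-up sample where y is an exact linear function of x:

```python
import math

def cov(x, y):
    """Average cross product of deviations around the means."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def corr(x, y):
    """Scale-free measure: Cov(x,y) / sqrt(Var(x) * Var(y))."""
    return cov(x, y) / math.sqrt(cov(x, x) * cov(y, y))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]  # exact linear function of x, so correlation is 1
r = corr(x, y)
```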

Perron's Result:

Shows that a trend stationary series which exhibits a structural break in the level or the growth, cannot be easily distinguished from a nonstationary process

How to test for Cointegration?:

Step 1: Find the long-run equilibrium equation, and test the stationarity of the error term with an ADF test Step 2: Estimate an ECM ∆y= a + ø∆yt-1 + ∂∆xt-1 + aet-1 + u (only needed for CoInt. analysis) *If you don't have cointegration in step 1, then you can't build an error correction model as you don't have an equilibrium model *Mackinnon N= Number of I(1) variables in the equation

How is the T-ratio related to the F-statistic?:

T ratio squared is the F-statistic

Hausman Test:

Tests for correlation between unobserved heterogeneity, ai, and the explanatory variables. It tests for estimator differences between fixed effects and random effects models H0: Corr(ai, xit) = 0 (RE is best) H1: Corr(ai, xit) ≠ 0 (RE is inappropriate, FE is best)

Difference between Dickey-Fuller and ADF?:

The Augmented Dickey Fuller test of order (x) includes (x) lagged ∆Y

In an F-test the number of restrictions must equal?:

The number of coefficients in the unrestricted model minus the number of coefficients in the restricted model

Serial Correlation:

The presence of some form of linear dependence over time: correlation of a series with itself at different points in time *Doesn't cause bias in the estimators, but invalidates the SEs *With a lagged dependent variable, the OLS estimator becomes biased and inconsistent

How to calculate Power:

The probability that a test correctly rejects a false null hypothesis
1) Define the region of acceptance
2) Specify the critical values of rejection
3) Compute power as the probability that the sample estimate falls outside the region of acceptance, using the true parameter value
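The three steps above can be sketched for a two-sided z-test; the true mean, sigma, and sample size below are hypothetical:

```python
import math

def norm_cdf(z):
    """Standard normal CDF built from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_z_test(mu_true, mu_null, sigma, n, z_crit=1.96):
    """Power of a two-sided z-test of H0: mu = mu_null at the 5% level.
    Step 1-2: acceptance region is mu_null +/- z_crit*SE.
    Step 3: probability the estimate lands outside that region,
    evaluated under the true parameter mu_true."""
    se = sigma / math.sqrt(n)
    lower, upper = mu_null - z_crit * se, mu_null + z_crit * se
    return norm_cdf((lower - mu_true) / se) + (1 - norm_cdf((upper - mu_true) / se))

p = power_z_test(mu_true=0.5, mu_null=0.0, sigma=1.0, n=25)
```

Note that when the true mean equals the null, "power" collapses to the test size (5%).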

How is Variance related to SE?:

The square root of the variance is the SE

Contemporaneous Effect

The effect in the current period (the immediate effect, before any lags operate)

How to calculate ME for Probit model

ME = ø(X'B)*B, where ø is the standard normal pdf *Find the value at the means; this equals a Z-score, find the density associated with it and multiply that by the coefficient *Φ(X'B | x=1) - Φ(X'B | x=0) for binary!
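A small sketch of both probit marginal effects using the standard normal pdf and CDF; the index and coefficient values are illustrative:

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def probit_me_continuous(xb, beta):
    """Continuous-variable ME: density at X'B times the coefficient."""
    return norm_pdf(xb) * beta

def probit_me_binary(xb_with, xb_without):
    """Dummy-variable ME: difference in predicted probabilities."""
    return norm_cdf(xb_with) - norm_cdf(xb_without)

me_cont = probit_me_continuous(0.0, 0.5)    # at X'B = 0 the density peaks
me_bin = probit_me_binary(0.5, -0.5)        # dummy flips the index by 1.0
```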

How do structural breaks affect data?:

They make the data look less stationary than it really is (per Perron's result, a trend-stationary series with a break resembles a unit-root process)

Mackinnon Critical Values:

Used for the Augmented Dickey-Fuller test, because standard critical values would reject the null hypothesis of a unit root (non-stationarity) too frequently *Need to be adjusted for cointegration *n = the number of I(1) variables in the equation

How is the Variance related to the Expectation of a Variable?:

V(x) = E(x^2) - [E(x)]^2

Perfect collinearity:

When one of the variables is an exact linear function of some other x variables in the model *e.g. in Y = B1x1 + B2x2, with x2 = 2 + 3x1

Cointegration:

When two processes share a common stochastic trend, so that some linear combination of them is stationary • Any linear combination of two I(1) series will be I(1) unless they are cointegrated • Can only find cointegration between two series of the same order, because the regression would otherwise be unbalanced

How to test for Heteroscedasticity?:

White's Test for Heteroscedasticity *Estimate the residuals, save the squared residuals (the variance proxy), then regress the squared residuals on a constant, all variables, their squares, and their cross products *Test whether the coefficients on the regressors are jointly equal to zero *Ho: homoscedasticity; H1: heteroscedasticity *Stata: estat hettest (or estat imtest, white for White's version) *Apparent heteroscedasticity could be caused by an omitted relevant variable

Auto Regressive Process:

Z_t = øz_{t-1} + e_t, where the absolute value of ø is less than 1 (so the process is stationary) *Smooth decay in the ACF

What is the equation for Bias?:

b = B1 + ∂*Cov(x, z)/Var(x), where z is the omitted variable and ∂ is its coefficient

How to Calculate Confidence Intervals:

x ± Z*(SE)
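The interval formula as a one-liner; the estimate and SE below are hypothetical:

```python
def confidence_interval(estimate, se, z=1.96):
    """Confidence interval: estimate +/- z * SE (z = 1.96 for 95%)."""
    return estimate - z * se, estimate + z * se

lo, hi = confidence_interval(2.0, 0.5)  # hypothetical coefficient and SE
```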

Probit Model:

• pdf: (2π)^(-1/2) exp(-z^2/2)
• Assumes that the error term is normally distributed
• Assume that sigma = 1, because we can only estimate B/sigma, not either separately
• Continuous ME: ø(X'B)*B, where ø is the normal pdf
• Binary ME: Φ(X'B | x=1) - Φ(X'B | x=0)

Continuous vs. Binary Marginal Effects

• Continuous ME: the slope of the CDF at X'B times B1 (logit: F(X'B)*(1 - F(X'B))*B1; probit: ø(X'B)*B1)
• Binary ME: (CDF | x=1) - (CDF | x=0)

LDV (Logit/Probit) Assumptions:

• The estimator is consistent (approaches the true parameter as n increases)
• The estimator is asymptotically normally distributed
• The estimator is the most efficient of all alternative estimators (smallest SEs)

Logit Model:

• Fatter tails than the normal distribution
• CDF: exp(x)/(1 + exp(x))
• Continuous ME: F(X'B)*(1 - F(X'B))*B1
• Binary ME: F(X'B | x=1) - F(X'B | x=0)
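The logit CDF and its marginal effect in a self-contained sketch; the index values are illustrative:

```python
import math

def logit_cdf(xb):
    """Logistic CDF: exp(x)/(1 + exp(x))."""
    return math.exp(xb) / (1 + math.exp(xb))

def logit_me_continuous(xb, beta):
    """Continuous-variable logit ME: F(X'B) * (1 - F(X'B)) * beta."""
    f = logit_cdf(xb)
    return f * (1 - f) * beta

p_half = logit_cdf(0.0)              # index of 0 gives probability 0.5
me_peak = logit_me_continuous(0.0, 1.0)  # ME is largest where F = 0.5
```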

What models do you use X^2 in?:

• Hausman test FE vs. RE. • LM test (Breusch Godfrey)