Chapter 9 - Serial correlation
What is the most commonly assumed form of serial correlation
first order serial correlation - when the current value of the error term is a function of the previous value of the error term: ε_t = ρ*ε_(t-1) + u_t, where u_t is a classical (not serially correlated) error term
best remedy for impure serial correlation
try to correct the specification error: add the omitted variable or fix the functional form
first order auto-correlation coefficient
measures the functional relationship between the value of an observation of the error term and the value of the previous observation of the error term
what is rho
ρ, the first-order autocorrelation coefficient
what sign is used for the first order autocorrelation coefficient
ρ (rho)
How to use the LM test
1. Obtain the residuals from the estimated equation. 2. Specify the auxiliary equation, which includes as independent variables all those on the right-hand side of the original equation as well as the lagged residuals. 3. Use OLS to estimate the auxiliary equation and test the null hypothesis that α3 = 0 (the coefficient on the lagged residual) with the test statistic LM = N*R^2, where N = sample size and R^2 = unadjusted coefficient of determination, both of the auxiliary equation of step 2. LM is compared with a critical value from the chi-square distribution, with degrees of freedom equal to the number of lagged residuals included.
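A minimal sketch of these three steps in Python (using numpy, statsmodels and scipy); the simulated data and variable names are invented for illustration, not taken from the book:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Illustrative data: y regressed on a single explanatory variable x
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

# Step 1: estimate the original equation by OLS and keep the residuals
X = sm.add_constant(x)
e = sm.OLS(y, X).fit().resid

# Step 2: auxiliary regression of the residual on all original regressors
# plus the lagged residual (the first observation is lost to the lag)
X_aux = np.column_stack([X[1:], e[:-1]])
aux = sm.OLS(e[1:], X_aux).fit()

# Step 3: LM = N * R^2 of the auxiliary equation, compared with a
# chi-square critical value (1 degree of freedom for one lagged residual)
LM = aux.nobs * aux.rsquared
print(f"LM = {LM:.2f}, p-value = {stats.chi2.sf(LM, df=1):.3f}")
```

statsmodels also provides this test directly as statsmodels.stats.diagnostic.acorr_breusch_godfrey.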
How do you calculate the durbin-watson test
1. Obtain the OLS residuals from the regression and calculate the d statistic. 2. Consult the statistical tables to find the upper critical value dU and the lower critical value dL (based on the sample size and the number of independent variables). 3. Then follow the decision rule based on your null hypothesis.
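A minimal sketch of step 1 in Python; the data below are invented purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

def durbin_watson_d(e):
    """d = sum of (e_t - e_(t-1))^2 over t = 2..T, divided by sum of e_t^2 over t = 1..T."""
    e = np.asarray(e)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Illustrative data and OLS residuals
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 0.8 * x + rng.normal(size=50)
e = sm.OLS(y, sm.add_constant(x)).fit().resid
print(durbin_watson_d(e))  # close to 2 when there is no serial correlation
```

statsmodels also exposes this statistic as statsmodels.stats.stattools.durbin_watson.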
What are the characteristics of time series that make them more difficult to deal with than cross sectional data
1. The order of the observations is fixed. 2. Time-series samples tend to be much smaller than cross-sectional ones. 3. The theory underlying time-series analysis can be quite complex. 4. The stochastic error term in a time-series data set is often affected by events that took place in the previous period - this is serial correlation.
Consequences of serial correlation
1. Pure serial correlation does not cause bias in the coefficient estimates. 2. Serial correlation causes OLS to no longer be the minimum-variance estimator of all linear unbiased estimators. 3. Serial correlation causes the OLS estimates of SE(β hat) to be biased, leading to unreliable hypothesis testing.
When can the Durbin-Watson test be used
1. The regression model includes an intercept (standard) 2. The autocorrelation is first-order in nature 3. The regression model does not include a lagged dependent variable as an independent variable.
Prais-Winsten Method
An iterative process that first estimates ρ and then estimates the GLS equation, as follows: 1. Run a standard OLS regression and estimate ρ from the residuals of that regression. 2. Transform the regression model (using the GLS quasi-differencing method) with the value of ρ estimated in step 1. 3. Run OLS on the transformed equation. Repeat steps 1-3, re-estimating ρ from the new residuals, until there is only a very small change in ρ.
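A rough sketch of this iteration in Python, written as an illustrative implementation rather than code from the book (the function name, data and convergence settings are my own choices):

```python
import numpy as np
import statsmodels.api as sm

def prais_winsten(y, X, tol=1e-6, max_iter=50):
    """Iteratively estimate rho and re-fit the quasi-differenced equation.
    X is assumed to already contain a constant column."""
    beta = sm.OLS(y, X).fit().params      # step 1: initial OLS estimates
    rho_old = np.inf
    for _ in range(max_iter):
        e = y - X @ beta                  # residuals from the original equation
        rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)   # estimate rho
        if abs(rho - rho_old) < tol:      # stop once rho barely changes
            break
        rho_old = rho
        # step 2: quasi-difference the data; Prais-Winsten keeps the first
        # observation, scaled by sqrt(1 - rho^2), instead of dropping it
        y_star = np.r_[np.sqrt(1 - rho**2) * y[0], y[1:] - rho * y[:-1]]
        X_star = np.vstack([np.sqrt(1 - rho**2) * X[0], X[1:] - rho * X[:-1]])
        beta = sm.OLS(y_star, X_star).fit().params   # step 3: GLS estimates
    return beta, rho

# Illustrative usage with simulated AR(1) errors (rho = 0.6)
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
u = rng.normal(size=n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.6 * eps[t - 1] + u[t]
y = 1.0 + 0.5 * x + eps
beta_gls, rho_hat = prais_winsten(y, sm.add_constant(x))
```

statsmodels' GLSAR class (with its iterative_fit method) implements a closely related feasible-GLS iteration.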
What is another name for the Generalised Least Squares (GLS) equation
the quasi-differenced equation
Remedies for impure autocorrelation
• Correct specification errors • Change the functional form • Formulate as a first-difference model • Specify the dynamic structure of the model
If there is a lagged dependent variable as an independent variable what is used to calculate autocorrelation
Durbin H statistic
Two unusual aspects of the Durbin-Watson test
1. Econometricians almost never test the one-sided null hypothesis that there is negative serial correlation. 2. The Durbin-Watson test has an acceptance region, a rejection region and an inconclusive region.
Given the null hypothesis of no serial correlation and a two-sided alternative hypothesis
H0: ρ = 0; H1: ρ ≠ 0. Decision rule: reject H0 if d < dL or d > 4 - dL; do not reject H0 if dU < d < 4 - dU; otherwise the test is inconclusive.
Given the null hypothesis of no positive serial correlation and a one-sided alternative hypothesis
H0: ρ ≤ 0 (no positive serial correlation); H1: ρ > 0 (positive serial correlation). Decision rule: reject H0 if d < dL; do not reject H0 if d > dU; inconclusive if dL ≤ d ≤ dU.
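A small helper that encodes both the two-sided rule above and this one-sided rule; dL and dU must be looked up in the Durbin-Watson tables for the given sample size and number of explanatory variables (the example values are approximate 5% values for T = 25 and one explanatory variable):

```python
def dw_decision(d, d_l, d_u, two_sided=False):
    """Apply the Durbin-Watson decision rules, given table values dL and dU."""
    if two_sided:
        # H0: rho = 0 vs H1: rho != 0
        if d < d_l or d > 4 - d_l:
            return "reject H0"
        if d_u < d < 4 - d_u:
            return "do not reject H0"
        return "inconclusive"
    # One-sided test: H0: rho <= 0 vs H1: rho > 0
    if d < d_l:
        return "reject H0 (evidence of positive serial correlation)"
    if d > d_u:
        return "do not reject H0"
    return "inconclusive"

print(dw_decision(0.90, 1.29, 1.45))  # reject H0
```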
LOOK AT EXAMPLES IN BOOK
What remedy for autocorrelation do we usually prefer
Newey-West SEs
Newey-West Standard Errors
Newey-West SEs are standard errors that take serial correlation into account without changing the β hat estimates. The new standard errors are still biased, but they are closer to their true values than the OLS standard errors (though not as accurate as GLS). Newey-West standard errors tend to be larger than OLS standard errors, thus producing lower t-scores.
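In statsmodels, Newey-West (HAC) standard errors can be requested when fitting by OLS: the coefficient estimates are identical, only the standard errors (and hence the t-scores) change. The data and the lag length below are purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=80)
y = 1.0 + 0.5 * x + rng.normal(size=80)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()                                        # ordinary OLS SEs
nw = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West SEs

print(ols.params, nw.params)  # identical coefficient estimates
print(ols.bse, nw.bse)        # different standard errors
```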
Pure serial correlation
Occurs when Classical Assumption IV, which assumes uncorrelated observations of the error term, is violated in a correctly specified equation
What was previously the best-known GLS method, and which method is better now
Perhaps the best known method for estimating GLS equations is the Cochrane-Orcutt method; however, it is better to use the Prais-Winsten method.
Generalised Least Squares
Rids an equation of pure first-order serial correlation and restores the minimum variance property to its estimation.
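For the single-explanatory-variable case Y_t = β0 + β1*X_t + ε_t with ε_t = ρ*ε_(t-1) + u_t, the standard textbook quasi-differencing transformation is:

Y_t - ρ*Y_(t-1) = β0*(1 - ρ) + β1*(X_t - ρ*X_(t-1)) + u_t

Because u_t is a classical, non-serially-correlated error term, OLS applied to this transformed equation regains the minimum-variance property.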
impure serial correlation
Serial correlation that is caused by a specification error such as an omitted variable or an incorrect functional form
The LaGrange Multiplier test (LM)
Tests for serial correlation by analysing how well the lagged residuals explain the residual of the original equation, in an equation that also includes all the original explanatory variables. If the lagged residuals are statistically significant, then the null hypothesis of no serial correlation is rejected.
Downfalls of the GLS method
1. The GLS method changes the coefficient estimates, which we would rather avoid, because pure autocorrelation does not bias the coefficient estimates and so there is nothing there to correct. 2. The second problem is more important: GLS works well if ρ hat is close to the true ρ, but ρ hat is biased in small samples, and when this occurs the bias is passed on to the β hats.
Using the Durbin Watson test what is the null hypothesis
There is no autocorrelation
why are the t-scores different with Newey-West standard errors
Newey-West changes the standard errors, and because a t-score is the coefficient estimate divided by its standard error, the t-scores change with them.
the Durbin-Watson test d statistic for T observations is
d = Σ(e_t - e_(t-1))^2 / Σ(e_t)^2, where the numerator sum runs from t = 2 to T, the denominator sum runs from t = 1 to T, and the e_t's are the OLS residuals. d = 0 indicates extreme positive autocorrelation (the value of the error term in period t equals the error term in t-1); d = 4 indicates extreme negative autocorrelation; d = 2 indicates no autocorrelation - when there is no serial correlation, the mean of the distribution of d is 2.
What do we call it when we get a positive/negative value for ρ
Positive autocorrelation - implies the error term tends to keep the same sign from one observation to the next: positive for a run of observations, then negative for a run, then positive again. Negative autocorrelation - implies the error term tends to switch sign from one observation to the next: positive, then negative, then positive, then negative. Negative autocorrelation is much less likely than positive autocorrelation.
what does the magnitude of ρ indicate
the strength of the serial correlation in an equation
Durbin-Watson test
used to determine if there is first-order serial correlation in the error term of an equation by examining the residuals of a particular estimation of that equation
when is a t-score usually considered significant
when its absolute value is greater than 1.96 (the 5% two-sided critical value for large samples)
Remedies for pure autocorrelation
• Generalized Least Squares • Newey-West Standard Errors