Week 3 - Lecture 6 - Autoregressive Models
if we have a random walk process we cannot use .............. because we know it is ..... and ..........
OLS regressions - non-stationary - strongly dependent
random walk process: take the model yt = yt-1 + ut where ut ~ N(0,1) - in the AR(1) process, there is no intercept term, the AR coefficient = 1 which implies .............. - how do we get today's value yt? - the shock ut has what distribution?
- the AR coefficient = 1 implies non-stationarity - we get today's value yt by taking yesterday's value yt-1 and adding a shock ut - ut has a standard normal distribution
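A quick numpy sketch of this card (my own illustration, not from the lecture): a random walk is just the running sum of its shocks.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Random walk: y_t = y_{t-1} + u_t with u_t ~ N(0, 1) (here y_0 = u_0).
u = rng.standard_normal(T)
y = np.cumsum(u)   # today's value is the accumulated sum of all past shocks
```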
random walk process: - consider two time series processes {yt} and {xt} we impose what 2 conditions?
1. yt and xt are independent of each other (yt has no correlation with xt) 2. they are non-stationary and strongly dependent (which is a violation of TS1)
AR(1) Autocorrelations: What is an ACF?
A plot of auto-correlations for increasing lags
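A hand-rolled sketch of what an ACF computes (the function name `acf` is mine, not from the lecture): for each lag k, correlate the series with its k-period-lagged self.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelations rho_1 .. rho_max_lag of a series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[k:], x[:-k]) / denom
                     for k in range(1, max_lag + 1)])

# Demo on a simulated AR(1) with beta1 = 0.6: rho_k should be close to 0.6^k.
rng = np.random.default_rng(1)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
rhos = acf(y, 3)   # roughly [0.6, 0.36, 0.22]
```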
Estimating an AR(1) Model: - assuming the process is generated by an autoregressive process, then we can estimate the values for β0, β1 using what?
ARIMA Function
Estimating an AR(1) Model: instead of estimating yt = β0 + β1yt-1 + ut, ARIMA estimates what?
ARIMA estimates (yt - µ) = β1(yt-1 - µ) + ut, where µ = β0/(1-β1) is the unconditional mean
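A minimal numpy sketch of this demeaned parameterisation (variable names are mine): estimate µ by the sample mean, demean, then regress the series on its own lag without an intercept.

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, T = 1.0, 0.5, 20_000

# Simulate y_t = beta0 + beta1 * y_{t-1} + u_t, starting at the unconditional mean.
y = np.zeros(T)
y[0] = beta0 / (1 - beta1)
for t in range(1, T):
    y[t] = beta0 + beta1 * y[t - 1] + rng.standard_normal()

mu_hat = y.mean()              # estimates mu = beta0 / (1 - beta1) = 2
yd = y - mu_hat                # demean, as in the ARIMA parameterisation
beta1_hat = np.dot(yd[1:], yd[:-1]) / np.dot(yd[:-1], yd[:-1])
```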
AR(1) Autocorrelations: To look at how the autocorrelations (powers of β1) change as the lag k (distance in time) increases, we use what?
Auto-correlation Function (ACF)
Estimating an AR(1) Model: What does ARIMA mean?
Autoregressive Integrated Moving Average
AR(1) Autocorrelations: Cov(yt,yt-1) = ........... --> ρ1 = corr(yt,yt-1) = β1
Cov(yt,yt-1) = σ^2 β1 --> ρ1 = corr(yt,yt-1) = β1
why is a random walk process non-stationary? - non-stationary because β1 = 1 * take the expected value of yt, E[yt]: y0 = 0, y1 = y0 + u1 what is E(y1)?
E(y1) = E(y0) + E(u1) = 0
why is a random walk process non-stationary? * take the expected value of yt, E[yt]: y0 = 0, y1 = y0 + u1, y2 = y1 + u2 what is E(y2)?
E(y2) = E(y1) + E(u2) = 0
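The zero-mean result on these two cards can be checked by simulation (my own sketch): across many independent random-walk paths, the average value at every horizon stays near 0.

```python
import numpy as np

rng = np.random.default_rng(8)

# 100,000 random-walk paths starting from y0 = 0; the cross-path mean
# at every horizon t stays near E[y_t] = 0.
paths = np.cumsum(rng.standard_normal((100_000, 50)), axis=1)
max_abs_mean = np.abs(paths.mean(axis=0)).max()
```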
AR(1) Model: we take the model yt = β0 + β1yt-1 + ut - acknowledge that {yt: t=0,1,2...} is a stochastic process and we are interested in the unconditional moments - E[yt] = µ = β0/(1-β1) the expected value of yt is a function of the parameters of the AR model, and not dependent on time - var(yt) = σ^2 = σu^2/(1-β1^2) * if both E[yt] and var[yt] are constant over time then it is ...............
covariance stationary
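These unconditional moments can be verified by simulation (my own sketch, not from the lecture): simulate many independent AR(1) paths and check that the cross-sectional mean and variance match the formulas above.

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, n_paths, T = 1.0, 0.5, 100_000, 200

# Simulate many independent AR(1) paths and inspect the cross-section at time T.
y = np.full(n_paths, beta0 / (1 - beta1))
for _ in range(T):
    y = beta0 + beta1 * y + rng.standard_normal(n_paths)

# Theory: E[y_t] = beta0/(1 - beta1) = 2 and var(y_t) = 1/(1 - beta1^2) = 4/3,
# both constant over time -> covariance stationary.
```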
AR(1) Autocorrelations: the autocorrelations and autocovariances are also functions of what?
functions of the AR(1) Parameters
Estimating an AR(1) Model: what is meant by moving average?
how does the history of the shocks affect today's value?
Estimating an AR(1) Model: what is meant by integrated?
how many times do we have to difference the data until we get stationary data?
Estimating an AR(1) Model: what is meant by autoregressive?
how much the past values of the process impact today
AR(1) Autocorrelations: this looks at what?
how today's value is correlated with the past
AR(1) Autocorrelations: What does an ACF look like if the process is weakly dependent?
the spikes decay relatively quickly to 0 if the process is weakly dependent
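The contrast on this card can be shown numerically (my own sketch): at a distant lag, a weakly dependent AR(1) is essentially uncorrelated with its past, while a random walk is still highly correlated.

```python
import numpy as np

def acf1(x, k):
    """Sample autocorrelation of x at a single lag k."""
    x = x - x.mean()
    return np.dot(x[k:], x[:-k]) / np.dot(x, x)

rng = np.random.default_rng(4)
T = 5000
u = rng.standard_normal(T)

walk = np.cumsum(u)            # beta1 = 1: strongly dependent random walk
ar = np.zeros(T)               # beta1 = 0.5: weakly dependent AR(1)
for t in range(1, T):
    ar[t] = 0.5 * ar[t - 1] + u[t]

# At lag 20, 0.5^20 is essentially 0, while the random walk is still
# highly correlated with its own past.
weak, strong = acf1(ar, 20), acf1(walk, 20)
```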
consequences of estimating a model using OLS (random walk processes): what will our results show if we use non-stationary, strongly dependent data in an OLS regression?
the results mean nothing - we cannot interpret the parameters (a spurious regression)
univariate time series models: AR models are used to show what?
show how the value of a variable at a point in time is related to its history
why is a random walk process non-stationary? what is the main condition(s) for OLS?
stationary and weak dependence (random walk fails both conditions)
AR(1) Model: we take the model yt = β0 + β1yt-1 + ut - acknowledge that {yt: t=0,1,2...} is a .. process and we are interested in what unconditional moments?
stochastic process - E[yt] = µ = β0/(1-β1) the expected value of yt is a function of the parameters of the AR model, and not dependent on time - var(yt) = σ^2 = σu^2/(1-β1^2)
Transforming non-stationary data: - take a random walk process yt = yt-1 + ut what can we do to transform this model to make it stationary?
take the difference between the two periods: ∆yt = yt - yt-1 = ut - since ut is independent of all other error terms, the differenced series is i.i.d. noise, which is stationary and weakly dependent
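A one-line numpy sketch of this transformation (my own illustration): differencing a random walk recovers exactly the i.i.d. shocks.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(2000)
y = np.cumsum(u)        # random walk: non-stationary

dy = np.diff(y)         # delta y_t = y_t - y_{t-1} = u_t
# The differenced series is exactly the i.i.d. shocks, hence stationary.
```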
AR(1) Autocorrelations: - with strong dependency, if β1 = 1 this implies what?
that the correlation is the same between two points close together in time and between a point and the distant past -> strong dependence
why is a random walk process non-stationary? referring to the variance of yt, why is it not covariance stationary?
var(yt) = tσ^2 - with a random walk process, var(y1) = var(y0) + var(u1) = 1, since var(y0) = 0 and var(u1) = 1, and var(y2) = var(y1) + var(u2) = 2 - the variance grows with t, so it is not constant over time
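The var(yt) = tσ^2 result can be checked across many simulated paths (my own sketch): the cross-path variance at horizon t grows linearly in t.

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, T = 50_000, 100

# Many random-walk paths; var(y_t) = t * sigma^2 = t here, since sigma^2 = 1.
paths = np.cumsum(rng.standard_normal((n_paths, T)), axis=1)
var_10, var_100 = paths[:, 9].var(), paths[:, 99].var()   # roughly 10 and 100
```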
AR(1) Autocorrelations: Cov(yt,yt-1) = σ^2 β1 --> ρ1 = corr(yt,yt-1) = β1 what happens as we increase the separation of the observations? e.g. yt and yt-2?
we increase the power on the β1 parameter: cov(yt,yt-2) = σ^2 β1^2 --> ρ2 = corr(yt,yt-2) = β1^2
AR(1) Autocorrelations: Cov(yt,yt-1) = σ^2 β1 --> ρ1 = corr(yt,yt-1) = β1 as we increase the separation of the observations we increase the power on the β1 parameter: cov(yt,yt-2) = σ^2 β1^2 --> ρ2 = corr(yt,yt-2) = β1^2 * for weak dependency we want what to happen?
we want the correlation, as the lag k goes further into the past, to become smaller and smaller, approaching 0
random walk process: weak dependence implies what?
weak dependency implies that the history of the process has a diminishing impact on today's value
AR(1) Model: - for an AR(1) process to be stationary what condition must be met?
|β1| < 1 - in E[yt] = β0/(1-β1), yesterday's shock has a diminishing impact on today and the process reverts to its mean
consequences of estimating a model using OLS (random walk processes): - yt and xt are random walk processes and are independent (no correlation) - in the model yt = α0 + α1xt + εt, what is α1?
the true α1 = 0 because yt and xt have no correlation (but an OLS regression will tend to estimate a spuriously significant α1)
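The spurious-regression point can be demonstrated by simulation (my own sketch, numpy only): regress one random walk on another, independent one, and count how often the usual t-test "finds" a relationship.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_sims, reject = 200, 500, 0

for _ in range(n_sims):
    # Two independent random walks: the true alpha1 is 0 by construction.
    y = np.cumsum(rng.standard_normal(T))
    x = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    alpha = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ alpha
    s2 = resid @ resid / (T - 2)
    se1 = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(alpha[1] / se1) > 1.96:     # nominal 5% two-sided t-test
        reject += 1

rejection_rate = reject / n_sims       # far above the nominal 0.05
```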
random walk process: xt = xt-1 + vt, vt ~ N(0,1) why do we fail to meet the stationarity condition?
because the AR coefficient β1 = 1
AR(1) Model: we take the model yt = β0 + β1yt-1 + ut what is β1yt-1? what is ut?
β1yt-1: the explanatory variable is the lag of the dependent variable, yt-1 - ut is i.i.d. with zero mean and var(ut) = σu^2 (constant variance)