Linear Regression and Ordinary Least Squares
What is assumption A1 for classic linear regression
Linearity in parameters
What is assumption A2 for classic linear regression
There are at least as many observations as parameters, and there is no linear dependence among the explanatory variables.
What do we have to assume for finite-sample inference
A0, A1, A2, A5
What is the objective of our multiple linear regressions
To estimate the set of population parameters (β0, β1, β2, ..., βK).
What are the two types of heterogeneity
(1) Observed/systematic, i.e., E(yi | x1i, x2i, ..., xKi); (2) unobserved/idiosyncratic, i.e., ui.
What is sum of squared deviations
Σ(Xi − X̄)², the sum of squared differences of the observations from their sample mean.
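A minimal numeric sketch of that formula (the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical sample values, for illustration only
x = np.array([2.0, 4.0, 5.0, 7.0, 12.0])

# Sum of squared deviations from the sample mean: sum((x_i - xbar)^2)
ssd = np.sum((x - x.mean()) ** 2)
print(ssd)  # 58.0
```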
How many assumptions are there for the classic linear regression model
6 (A0 to A5)
multiple linear regression model
A linear regression model with two or more independent variables.
What is the Var(Ui) if A3 and A4 hold
Var(ui) = E(ui²) − (E(ui))² = E(ui²) = σ².
What is the critical identifying assumption
E(u|x) = E(u), i.e., u is said to be mean independent of x; with an intercept, E(u) = 0, and therefore E(u|x) = E(u) = 0.
Mean independence assumption for multiple linear regression model now states...
E(u|x1, x2, ..., xK) = E(u); with an intercept, E(u) = 0, and so E(u|x1, x2, ..., xK) = E(u) = 0.
The population regression function (PRF) is then given by
E(y|x) = β0 + β1x.
How do you measure 'fit', i.e., R²
R² = ESS/TSS
What is assumption A3 for classic linear regression
Exogeneity: E(ui | x1i, x2i, ..., xKi) = 0, i.e., the error terms are mean independent of, and hence uncorrelated with, the regressors.
What is the first step of deriving the OLS
Find ∂SSD/∂β0 and find ∂SSD/∂βk for each k = 1, ..., K.
What is assumption A4 for classic linear regression
Homoskedasticity: Var(ui | x1i, x2i, ..., xKi) = σ², i.e., the error terms have identical conditional variances.
If ∆u = 0, what does this imply
It implies that β1 = Δy/Δx, so we observe the ceteris paribus effect of x on y: from y = β0 + β1x + u, Δy = β1Δx + Δu, and with Δu = 0 this reduces to Δy = β1Δx.
What is the second step of deriving OLS
Set ∂SSD/∂β0 = 0 and ∂SSD/∂βk = 0 for each k and solve the resulting equations; a unique solution exists provided assumptions A1 and A2 hold.
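A minimal sketch of what these two steps produce in practice: setting the derivatives to zero yields the normal equations X'Xβ̂ = X'y, which can be solved directly. The simulated data and parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
u = rng.normal(size=n)

# Illustrative (made-up) population parameters: beta0 = 1.0, beta1 = 2.0, beta2 = -0.5
y = 1.0 + 2.0 * x1 - 0.5 * x2 + u

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), x1, x2])

# Setting dSSD/dbeta_k = 0 for all k gives the normal equations X'X beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # roughly [1.0, 2.0, -0.5]
```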
What is assumption A5 for classic linear regression
Normality: ui | x1i, x2i, ..., xKi ∼ Normal(0, σ²), i.e., the error terms are normally distributed with mean 0 and variance σ².
What is the OLS estimator sensitive to
Outliers
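A small sketch of that sensitivity with made-up numbers: adding a single extreme observation noticeably shifts the fitted slope.

```python
import numpy as np

def ols_slope(x, y):
    # Simple-regression slope: sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
print(ols_slope(x, y))            # close to 1

# A single extreme observation pulls the estimate well away from 1
x_out = np.append(x, 6.0)
y_out = np.append(y, 20.0)
print(ols_slope(x_out, y_out))    # roughly 3
```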
What is assumption A0 for classic linear regression
Random sampling
What is a dummy variable?
A variable that takes only the values 1 or 0, typically indicating the presence or absence of some attribute.
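A minimal sketch of constructing a 0/1 dummy from a categorical attribute (the variable names and values are hypothetical):

```python
import numpy as np

# Hypothetical categorical attribute
gender = np.array(["F", "M", "F", "F", "M"])
female = (gender == "F").astype(int)  # 1 if "F", 0 otherwise
print(female)  # [1 0 1 1 0]
```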
What is the equation that links RSS, ESS, TSS
TSS = ESS + RSS
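A small numeric check of the decomposition, and of R² = ESS/TSS, on simulated data (all parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 0.5 + 1.5 * x + rng.normal(size=n)   # made-up parameters

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat

tss = np.sum((y - y.mean()) ** 2)        # total sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)    # explained sum of squares
rss = np.sum((y - y_hat) ** 2)           # residual sum of squares

print(np.isclose(tss, ess + rss))        # True when an intercept is included
print(ess / tss)                         # R-squared
```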
What does E(β̂k) = βk, k = 0, ..., K mean
The OLS estimator is unbiased
What is β̂
The best linear unbiased estimator (BLUE) in the classical linear regression model with fixed regressors: the Gauss-Markov Theorem proves that the OLS estimator is BLUE, i.e., that β̂ has minimum variance among linear unbiased estimators.
What is a well known way to estimate parameters in linear models
The method of ordinary least squares (OLS)
To construct confidence intervals and perform hypothesis tests, we ...
Use the estimate for the asymptotic variance to form standard errors. Note that standard errors are, in fact, asymptotic standard errors. We then compare test statistics of various kinds to critical values from the standard normal distribution.
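A minimal sketch of that procedure under homoskedasticity, on illustrative simulated data: estimate σ², form standard errors from the estimated variance of β̂, compare a test statistic with a standard normal critical value, and build a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.3 * x + rng.normal(size=n)   # made-up parameters

X = np.column_stack([np.ones(n), x])
k = X.shape[1]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

# Homoskedastic variance estimate: sigma2_hat * (X'X)^(-1)
sigma2_hat = resid @ resid / (n - k)
var_beta = sigma2_hat * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(var_beta))

# Test H0: beta1 = 0 against the 5% standard normal critical value,
# then form a 95% confidence interval for beta1
z = beta_hat[1] / se[1]
print(abs(z) > 1.96)
print(beta_hat[1] - 1.96 * se[1], beta_hat[1] + 1.96 * se[1])
```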
What is the simple linear regression model
A model of the relationship between a dependent variable and a single independent variable, y = β0 + β1x + u.
What does the OLS estimator do
It chooses the value of (β0, β1, ..., βK) that minimizes the sum of squared deviations (SSD), i.e., the expression Σi (yi − β0 − β1x1i − ... − βKxKi)².
Unbiasedness and consistency of the OLS estimator do not require the
conditional homoskedasticity assumption.
How does the Gauss-Markov Theorem prove that β̂ is the best linear unbiased estimator
It shows that β̂ has the minimum conditional variance among unbiased linear estimators in the classical linear regression model with stochastic regressors.
How can we measure precision
We can estimate confidence intervals around the true parameters and perform a range of hypothesis tests; to do so, we need to know something about the (conditional) distribution of β̂.