Maximum Likelihood Estimates
What is a joint PDF?
The probability (density) of obtaining some particular value of y given some parameter values θ, written f(y; θ).
When is GLS efficient?
When it reduces to OLS.
What are the steps of MLE?
1. Write down the likelihood function L(θ; y) = f(y; θ). We assume the observations y are independent, therefore L(θ; y) = fy1(y1; θ)fy2(y2; θ)fy3(y3; θ)... = the product of the marginal PDFs.
2. Calculate the log likelihood ln L(θ; y). Because of the independence assumption, it is just the sum of the log marginal PDFs.
3. Find the maximum by taking the derivative of the log likelihood with respect to θ (the score) and setting it equal to zero.
4. Solve all of the equations in the score simultaneously and find expressions for θ. These are the ML estimates, θ̂_ML (see the numerical sketch below).
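A minimal numerical sketch of these steps for the normal linear model y = Xβ + u, maximising the log likelihood with scipy rather than solving the score by hand; the data, function and variable names are illustrative, not taken from these notes.

import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, y, X):
    # params = (beta..., log_sigma); log_sigma keeps sigma > 0
    beta, log_sigma = params[:-1], params[-1]
    sigma2 = np.exp(2 * log_sigma)
    resid = y - X @ beta
    n = len(y)
    # Step 2: log likelihood = sum of log marginal normal PDFs (independence)
    ll = -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * resid @ resid / sigma2
    return -ll  # minimising -lnL is the same as maximising lnL

# Simulated example data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=200)

# Steps 3-4: the first-order conditions are solved numerically here
res = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1] + 1), args=(y, X))
beta_hat_ml, sigma_hat_ml = res.x[:-1], np.exp(res.x[-1])
print(beta_hat_ml, sigma_hat_ml)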
For what type of model does OLS ALWAYS minimise the sum of squared residuals?
A linear model
What do ML estimates always require?
They always require an assumption about the distribution of the error term.
Why can we use the PDF of U as the PDF of y?
Because Xβ is fixed, the randomness in y comes only from u. Hence the distribution of y must be exactly the same as the distribution of u, although they have different means. fu is the joint PDF of the error vector.
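As a worked equation (assuming the linear model y = Xβ + u with fixed X, as in the other cards):
\[
f_y(y;\theta) = f_u(y - X\beta;\theta),
\]
so y has the same distribution as u, shifted to have mean Xβ instead of 0.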
When can the F-test be used?
Only when we know the distribution (of the errors).
Is FGLS consistent?
Yes, FGLS gives consistent but biased estimates. It does not give correct standard errors.
What is a likelihood function?
It is the joint PDF read the other way round: it takes y as given and tells us the likelihood of obtaining some parameter values θ, i.e. what is the most likely combination of parameters given the observed data.
What assumption do we make about the observations y for MLE? What does the Likelihood Function Equal now?
Independence. The likelihood function then equals the product of the marginal PDFs.
What Estimator do we use when there is a non-linear relationship?
Maximum Likelihood Estimator
What does Beta hat OLS minimise?
It minimises the sum of squared residuals for the linear model.
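For reference, the standard closed-form solution to this minimisation (a textbook result, not specific to these notes):
\[
\hat\beta_{OLS} = \arg\min_\beta\,(y - X\beta)'(y - X\beta) = (X'X)^{-1}X'y.
\]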
How do you do Weighted Least Squares?
Only GLS can do weighted least squares: WLS is GLS with a diagonal error variance matrix (see the formula below).
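A sketch of the WLS formula as a special case of GLS, assuming a diagonal weight matrix W = diag(w1, ..., wn), e.g. wi = 1/σi²:
\[
\hat\beta_{WLS} = (X'WX)^{-1}X'Wy,
\]
which is the GLS estimator with Var(u) = W^{-1}.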
What do White standard errors do?
They produce the same coefficients as OLS but give correct (heteroskedasticity-robust) standard errors.
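The variance estimator behind White standard errors, in its usual sandwich form (xi is the i-th row of X, ûi the OLS residual):
\[
\widehat{\operatorname{Var}}(\hat\beta_{OLS}) = (X'X)^{-1}\Big(\sum_{i=1}^{n}\hat u_i^2\,x_i x_i'\Big)(X'X)^{-1}.
\]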
When does GLS reduce down to OLS?
When E(u) = 0, X is fixed and of full rank, and Var(u) = σ²I.
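Worked equation: the GLS estimator and its reduction to OLS when Var(u) = Σ = σ²I:
\[
\hat\beta_{GLS} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y
\;\xrightarrow{\;\Sigma=\sigma^2 I\;}\;
(X'X)^{-1}X'y = \hat\beta_{OLS}.
\]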
When can't we use OLS?
When we have a non-linear relationship between y and x
What is the likelihood function for the linear model?
Xβ is just a fixed component, i.e. it occurs with probability 1, hence we can take the PDF of u instead. β and σ² are what we want estimates for.
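A sketch of the resulting likelihood, assuming normally distributed errors u ~ N(0, σ²I) (the distributional assumption ML always needs):
\[
L(\beta,\sigma^2; y) = (2\pi\sigma^2)^{-n/2}\exp\!\Big(-\tfrac{(y - X\beta)'(y - X\beta)}{2\sigma^2}\Big),
\qquad
\ln L = -\tfrac{n}{2}\ln(2\pi\sigma^2) - \tfrac{(y - X\beta)'(y - X\beta)}{2\sigma^2}.
\]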
Is the ML estimator consistent?
Yes, it is a general property of MLE.