Maximum Likelihood Estimation
maximum likelihood estimate (MLE) θ^
the value of θ that maximizes the likelihood function, i.e. the value of θ that makes it most likely to have observed the sample x
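In symbols (standard notation, not taken from the card), the MLE maximizes the likelihood, or equivalently the log-likelihood:

```latex
\hat{\theta} \;=\; \arg\max_{\theta} L(\theta; x) \;=\; \arg\max_{\theta} \ln L(\theta; x)
```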
Finite sample properties of MLE (3)
1) invariance - if θ^ is the MLE of θ, then the MLE of a continuous function h(θ) is h(θ^) (see the sketch after this card)
2) unbiasedness, full efficiency - if there is an unbiased estimator θ^u of θ whose variance attains the CRLB, the MLE is unique and equal to θ^u
3) sufficiency - if S(x) is a sufficient statistic for θ and a unique MLE θ^ of θ exists, then θ^ is a function of S(x)
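To illustrate property 1), a minimal Python sketch assuming an i.i.d. normal model with simulated data (the model and numbers are my own illustration, not from the card): the MLE of σ² is the mean squared deviation, so by invariance the MLE of σ = h(σ²) = √σ² is its square root.

```python
import numpy as np

# Simulated i.i.d. N(mu, sigma^2) sample, purely for illustration
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=500)

# Closed-form MLEs for the normal model
mu_hat = x.mean()                        # MLE of mu
sigma2_hat = ((x - mu_hat) ** 2).mean()  # MLE of sigma^2 (divides by n, not n - 1)

# Invariance: the MLE of h(sigma^2) = sqrt(sigma^2) = sigma is h applied to the MLE
sigma_hat = np.sqrt(sigma2_hat)

print(mu_hat, sigma2_hat, sigma_hat)
```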
How to perform inference using MLE?
estimate the asymptotic variance of θ^ with an estimator of the inverse Fisher information matrix; minus the inverse of the Hessian, -Hn(θ^)⁻¹, is also valid. Then use the asymptotic normality of θ^ to form confidence intervals and test statistics.
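A minimal sketch of this recipe, assuming an i.i.d. normal model and simulated data (the model, data, and parameterization are illustrative assumptions): maximize the log-likelihood numerically and read standard errors off an estimate of -Hn(θ^)⁻¹.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated i.i.d. N(mu, sigma^2) data, purely for illustration
rng = np.random.default_rng(1)
x = rng.normal(loc=1.5, scale=2.0, size=400)

def negloglik(theta):
    mu, log_sigma = theta        # optimize log(sigma) so that sigma stays positive
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (x - mu) ** 2 / sigma**2)

res = minimize(negloglik, x0=np.array([0.0, 0.0]), method="BFGS")
theta_hat = res.x

# BFGS returns an approximation to the inverse Hessian of the negative log-likelihood,
# i.e. an estimate of -Hn(theta_hat)^-1, usable as the asymptotic covariance matrix
avar = res.hess_inv
std_errors = np.sqrt(np.diag(avar))   # standard errors for (mu, log_sigma)

print(theta_hat, std_errors)
```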
Cramer Rao Lower Bound on the variance of an unbiased estimator θ^u - significance?
the MLE is asymptotically efficient in the class of consistent and asymptotically normally distributed estimators (its asymptotic variance attains the CRLB), as long as the distributional assumptions are correct
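For reference, the bound itself in the scalar case (standard notation, not from the card), with the second equality holding under the usual regularity conditions:

```latex
\operatorname{Var}(\hat{\theta}_u) \;\ge\; I(\theta)^{-1},
\qquad
I(\theta)
= \mathbb{E}\!\left[\left(\frac{\partial \ln L(\theta; x)}{\partial \theta}\right)^{2}\right]
= -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln L(\theta; x)}{\partial \theta^{2}}\right]
```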
likelihood of having observed the values x
evaluate the joint density f(x;θ) at the particular observed values of x; viewed as a function of θ, this gives the likelihood L(θ;x)
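A minimal sketch of this evaluation, assuming an i.i.d. normal model with known scale and made-up observations (all choices here are illustrative): the likelihood at each candidate θ is just the joint density evaluated at the observed x.

```python
import numpy as np
from scipy.stats import norm

# Made-up observed sample, purely for illustration
x = np.array([1.2, 0.7, 1.9, 1.4, 0.3])

def likelihood(mu):
    # joint density of the i.i.d. sample, evaluated at the observed x, as a function of mu
    return np.prod(norm.pdf(x, loc=mu, scale=1.0))

grid = np.linspace(-1.0, 3.0, 401)
L = np.array([likelihood(mu) for mu in grid])
print(grid[L.argmax()])   # close to the sample mean, the analytical MLE of mu here
```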
joint density
f(x;θ)
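Under an i.i.d. sampling assumption (not stated on the card), the joint density factors into a product, and the log-likelihood into a sum:

```latex
f(x; \theta) = \prod_{i=1}^{n} f(x_i; \theta),
\qquad
\ln L(\theta; x) = \sum_{i=1}^{n} \ln f(x_i; \theta)
```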
Hessian
matrix of second-order partial derivatives of the log-likelihood function with respect to θ
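In symbols (standard notation, matching the -Hn(θ^)⁻¹ expression used above):

```latex
H_n(\theta)
= \frac{\partial^{2} \ln L(\theta; x)}{\partial \theta \, \partial \theta'}
= \left[ \frac{\partial^{2} \ln L(\theta; x)}{\partial \theta_j \, \partial \theta_k} \right]_{j,k}
```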
Negative definite Hessian
means the log-likelihood is strictly concave at θ^, so θ^ is a (local) maximum rather than a minimum or saddle point
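A negative definite matrix has all eigenvalues strictly negative, which is easy to check numerically; the Hessian below is a made-up example, not from the card.

```python
import numpy as np

# Hypothetical Hessian of the log-likelihood at theta_hat (illustrative values only)
H = np.array([[-4.0,  1.0],
              [ 1.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(H)           # H is symmetric, so eigvalsh applies
is_negative_definite = bool(np.all(eigenvalues < 0))
print(eigenvalues, is_negative_definite)      # all negative -> theta_hat is a local maximum
```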