

maximum likelihood estimate

the parameter value at which the probability (or density) of the observed data is greatest; it is the numerical answer/estimate obtained from the problem.
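Written as a formula (using the likelihood function L(θ) defined later in this set, for an observed sample x_1, ..., x_n):

\hat{\theta} = \arg\max_{\theta} L(\theta)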

theta(e) vs theta hat

theta(e) is the numerical estimate that comes out of a particular observed sample; theta hat (θ̂) is the estimator, the statistic (a function of the random sample X1, ..., Xn) whose observed value is that estimate.

unbiased estimator

A statistic is said to be an unbiased estimator of a given parameter when the mean of the sampling distribution of that statistic equals the parameter being estimated. For example, the mean of a sample is an unbiased estimate of the mean of the population from which the sample was drawn. If E[u(X_1, X_2, \dots, X_n)] = \theta holds, then the statistic u(X_1, X_2, \dots, X_n) is an unbiased estimator of the parameter θ. Otherwise, u(X_1, X_2, \dots, X_n) is a biased estimator of θ.
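A quick simulation sketch of this definition (assuming NumPy; the Normal(5, 2) population, n = 10, and 100,000 replications are arbitrary choices for the demo): the sample mean comes out unbiased for μ, while the variance computed with a 1/n divisor is biased for σ².

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000
samples = rng.normal(mu, sigma, size=(reps, n))

# E[X-bar] matches mu, so the sample mean is unbiased.
print("average of X-bar:", samples.mean(axis=1).mean())                    # ~ 5.0

# The 1/n variance averages (n-1)/n * sigma^2, so it is biased;
# the 1/(n-1) version (ddof=1) corrects this.
print("average of 1/n variance:    ", samples.var(axis=1, ddof=0).mean())  # ~ 3.6, below sigma^2 = 4
print("average of 1/(n-1) variance:", samples.var(axis=1, ddof=1).mean())  # ~ 4.0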

maximum likelihood estimator

The statistic, obtained by maximum likelihood estimation, used to estimate a population parameter; its observed value is the maximum likelihood estimate. For example, for the Bernoulli parameter p (and for the normal mean) the maximum likelihood estimator is the sample mean (1/n)∑ X_i.

method of moments

Definitions. (1) E(X^k) is the kth (theoretical) moment of the distribution (about the origin), for k = 1, 2, ... (2) E[(X - \mu)^k] is the kth (theoretical) moment of the distribution (about the mean), for k = 1, 2, ... (3) M_k = \frac{1}{n}\sum_{i=1}^{n} X_i^k is the kth sample moment, for k = 1, 2, ... (4) M_k^* = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^k is the kth sample moment about the mean, for k = 1, 2, ... The method of moments equates the theoretical moments to the corresponding sample moments (E(X^k) = M_k for as many k as there are unknown parameters) and solves for the parameters.
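A minimal worked example (the Exponential(λ) model here is an illustrative choice, not from the original card): the first theoretical moment is E(X) = 1/λ and the first sample moment is M_1 = \bar{X}, so equating them gives

\frac{1}{\lambda} = \bar{X} \quad\Rightarrow\quad \tilde{\lambda} = \frac{1}{\bar{X}}.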

the estimation process starts with a random sample from a random variable X with pdf f_X(x; θ) and ends with an estimate theta(e) for the unknown parameter θ.

If the X_i are independent Bernoulli random variables with unknown parameter p, then the probability mass function of each X_i is f(x_i; p) = p^{x_i}(1-p)^{1-x_i} for x_i = 0 or 1 and 0 < p < 1. Therefore, the likelihood function L(p) is, by definition:

L(p) = \prod_{i=1}^{n} f(x_i; p) = p^{x_1}(1-p)^{1-x_1} \times p^{x_2}(1-p)^{1-x_2} \times \cdots \times p^{x_n}(1-p)^{1-x_n}

for 0 < p < 1. Simplifying by summing up the exponents, we get:

L(p) = p^{\sum x_i}(1-p)^{n-\sum x_i}

Now, in order to implement the method of maximum likelihood, we need to find the p that maximizes the likelihood L(p). The "trick" is to take the derivative of ln L(p) with respect to p rather than taking the derivative of L(p). In this case, the natural logarithm of the likelihood function is:

\log L(p) = \left(\sum x_i\right)\log(p) + \left(n - \sum x_i\right)\log(1-p)

Taking the derivative of the log-likelihood and setting it to 0, we get:

\frac{\partial \log L(p)}{\partial p} = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1-p} = 0

Now, multiplying through by p(1-p), we get:

\left(\sum x_i\right)(1-p) - \left(n - \sum x_i\right)p = 0

Upon distributing, we see that two of the resulting terms cancel each other out:

\sum x_i - p\sum x_i - np + p\sum x_i = 0

leaving us with:

\sum x_i - np = 0

Now, all we have to do is solve for p. In doing so, you'll want to make sure that you always put a hat ("^") on the parameter, in this case p, to indicate it is an estimate:

\hat{p} = \frac{\sum_{i=1}^{n} x_i}{n}

or, alternatively, an estimator:

\hat{p} = \frac{\sum_{i=1}^{n} X_i}{n}
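A short numerical sketch of the same result (assuming NumPy and SciPy; the simulated data with true p = 0.3 and n = 200 are arbitrary choices): maximizing the Bernoulli log-likelihood numerically reproduces the closed-form answer, the sample mean.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.3, size=200)   # simulated Bernoulli(0.3) data for the demo

# Negative of the log-likelihood derived above: log L(p) = (sum x) log p + (n - sum x) log(1 - p)
def neg_log_lik(p):
    s, n = x.sum(), len(x)
    return -(s * np.log(p) + (n - s) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print("numerical MLE:", res.x)
print("closed-form p-hat (sample mean):", x.mean())   # the two agree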

order statistics

Let X_1, X_2, ..., X_n be continuous, independent and identically distributed (iid) random variables with common PDF f(x) and common CDF F(x) (i.e. a random sample). Then: 1. The PDF of the maximum X_max = max(X_1, X_2, ..., X_n) is f_{X_max}(x) = n(F(x))^{n-1} f(x). 2. The PDF of the minimum X_min = min(X_1, X_2, ..., X_n) is f_{X_min}(x) = n(1 - F(x))^{n-1} f(x).
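A quick check of these formulas for an assumed Uniform(0, 1) sample, where F(x) = x and f(x) = 1 on (0, 1):

f_{X_{max}}(x) = n x^{n-1}, \qquad f_{X_{min}}(x) = n(1 - x)^{n-1}, \qquad 0 < x < 1.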

MVUE

Minimum-variance unbiased estimator

how to find the maximum likelihood estimate given a random sample from a fixed random variable with a given pdf.

Suppose that X is a discrete random variable with the following probability mass function, where 0 ≤ θ ≤ 1 is a parameter:

x:         0       1       2             3
P(X = x):  2θ/3    θ/3     2(1 − θ)/3    (1 − θ)/3

The following 10 independent observations were taken from such a distribution: (3, 0, 2, 1, 3, 2, 1, 0, 2, 1). What is the maximum likelihood estimate of θ?

Solution: Since the sample is (3, 0, 2, 1, 3, 2, 1, 0, 2, 1), the likelihood is

L(θ) = P(X = 3) P(X = 0) P(X = 2) P(X = 1) P(X = 3) × P(X = 2) P(X = 1) P(X = 0) P(X = 2) P(X = 1).

Substituting from the probability distribution given above (the sample contains two 0s, three 1s, three 2s, and two 3s), we have

L(\theta) = \left(\frac{2\theta}{3}\right)^{2}\left(\frac{\theta}{3}\right)^{3}\left(\frac{2(1-\theta)}{3}\right)^{3}\left(\frac{1-\theta}{3}\right)^{2} = \frac{2^{5}}{3^{10}}\,\theta^{5}(1-\theta)^{5}.

Clearly, the likelihood function L(θ) is not easy to maximize directly. Let us look at the log-likelihood function

l(\theta) = \log L(\theta) = C + 5\log\theta + 5\log(1-\theta),

where C = \log(2^{5}/3^{10}) is a constant which does not depend on θ. It can be seen that the log-likelihood function is easier to maximize compared to the likelihood function. Setting the derivative of l(θ) with respect to θ equal to zero,

\frac{dl(\theta)}{d\theta} = \frac{5}{\theta} - \frac{5}{1-\theta} = 0,

and the solution gives us the MLE, θ̂ = 0.5. We remember that the method of moments estimate is θ̂ = 5/12, which is different from the MLE.
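A brief grid-search sketch confirming this answer numerically (assuming NumPy; the 0.001-step grid is an arbitrary choice):

import numpy as np

data = [3, 0, 2, 1, 3, 2, 1, 0, 2, 1]

def pmf(x, theta):
    # P(X = 0), P(X = 1), P(X = 2), P(X = 3) from the table above
    return [2 * theta / 3, theta / 3, 2 * (1 - theta) / 3, (1 - theta) / 3][x]

thetas = np.linspace(0.001, 0.999, 999)
log_lik = [sum(np.log(pmf(x, t)) for x in data) for t in thetas]
print("theta maximizing the log-likelihood:", thetas[int(np.argmax(log_lik))])   # 0.5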

likelihood function

The ___ is the joint probability distribution of the data, treated as a function of the unknown parameters (coefficients).
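For an iid sample x_1, ..., x_n from a density or mass function f(x; θ), this is the product form used throughout the examples above:

L(\theta) = \prod_{i=1}^{n} f(x_i; \theta).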

relative efficiency

The most efficient estimator among a group of unbiased estimators is the one with the smallest variance. Relative efficiency of estimator 1 with respect to estimator 2 = variance of the first estimator / variance of the second estimator. An estimator is said to be efficient if, within the class of unbiased estimators, it has minimum variance.
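An illustrative example (the normal-population mean-vs-median comparison is an assumption added here, not part of the original card): both the sample mean and the sample median are unbiased for μ when sampling from a normal population, with Var(\bar{X}) = \sigma^2/n and Var(median) ≈ \pi\sigma^2/(2n) for large n, so the relative efficiency Var(median)/Var(\bar{X}) ≈ π/2 ≈ 1.57 and the sample mean is the more efficient estimator.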

random sample

a sample that fairly represents a population because each member has an equal chance of inclusion

Know the maximum likelihood estimators for the Bernoulli, Poisson, exponential, and normal with mean and variance
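For reference, the standard results, each obtainable with the same log-likelihood technique shown in the Bernoulli derivation above:

Bernoulli(p):           \hat{p} = \bar{X}
Poisson(λ):             \hat{\lambda} = \bar{X}
Exponential (rate λ):   \hat{\lambda} = 1/\bar{X}   (equivalently, mean parameter \hat{\theta} = \bar{X})
Normal(μ, σ²):          \hat{\mu} = \bar{X},  \hat{\sigma}^{2} = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^{2}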


