STAT 211

Handout 6 (Chapter 6): Point Estimation

A point estimate of a parameter θ is a single number that can be regarded as the most plausible value of θ. A point estimator θ̂ (so that θ̂ = θ + error of estimation) is an unbiased estimator of θ if E(θ̂) = θ for every possible value of θ. Otherwise, it is biased, and Bias = E(θ̂) − θ.

Read Example 6.2 in your textbook.

Example: When X is a binomial r.v. with parameters n and p, the sample proportion X/n is an unbiased estimator of p.
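A quick Monte Carlo check illustrates this unbiasedness (a sketch with assumed values n = 10 and p = 0.3, not taken from the text; averaging the estimator X/n over many simulated samples should settle near p):

```python
import random

random.seed(1)
n, p, reps = 10, 0.3, 100_000

def binomial_draw(n, p):
    # number of successes in n Bernoulli(p) trials
    return sum(1 for _ in range(n) if random.random() < p)

# average the estimator X/n over many replications
mean_phat = sum(binomial_draw(n, p) / n for _ in range(reps)) / reps
print(round(mean_phat, 3))  # settles near p = 0.3, as unbiasedness predicts
```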

The standard error of an estimator θ̂ is its standard deviation σ_θ̂. Substituting estimates for any unknown quantities in σ_θ̂ gives the estimated standard error, denoted s_θ̂ or σ̂_θ̂. The best point estimator, the minimum variance unbiased estimator (MVUE), is unbiased and has minimum variance among all unbiased estimators.

Minimum Variance Unbiased Estimator (MVUE): Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is the MVUE.

Example: For a normal distribution, the sample mean X̄ is the MVUE for the mean μ.


Example: The following graph was generated by drawing 500 samples of size 5 from N(0,1) and calculating the sample mean and the sample median for each sample.
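The experiment just described can be reproduced with the standard library (a sketch; the seed is arbitrary). The sample means cluster more tightly around 0 than the sample medians, which is the point of the comparison:

```python
import random
import statistics

random.seed(2)
means, medians = [], []
for _ in range(500):
    sample = [random.gauss(0, 1) for _ in range(5)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# both estimators are centered near 0, but X-bar has the smaller spread,
# consistent with X-bar being the MVUE for the normal mean
print(round(statistics.pstdev(means), 3), round(statistics.pstdev(medians), 3))
```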

Example: A sample of 15 students who had taken a calculus class yielded the following information on the brand of calculator owned: T H C T H H C T T C C H S S S (T = Texas Instruments, H = Hewlett Packard, C = Casio, S = Sharp).

(a) Estimate the true proportion of all such students who own a Texas Instruments calculator.

(b) Only Hewlett Packard calculators use reverse Polish logic, and three out of four HP calculators do. Estimate the true proportion of all such students who own a calculator that does not use reverse Polish logic.
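Both parts can be computed directly from the listed brands (a sketch; the 3/4 figure comes from part (b)):

```python
brands = list("THCTHHCTTCCHSSS")  # the 15 observed brands
n = len(brands)

# (a) sample proportion owning a Texas Instruments calculator
p_ti = brands.count("T") / n  # 4 of 15

# (b) only HP calculators use reverse Polish logic, and 3 out of 4 of them
# do, so "no reverse Polish logic" = all non-HP owners + 1/4 of HP owners
p_hp = brands.count("H") / n
p_no_rpl = (1 - p_hp) + 0.25 * p_hp  # 12/15 = 0.8

print(round(p_ti, 3), round(p_no_rpl, 3))
```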

Exercise 6-3:

The given data yield the following summary statistics.

Variable     N    Mean  Median  TrMean   StDev  SE Mean
thickness   16  1.3481  1.3950  1.3507  0.3385   0.0846

Variable   Minimum  Maximum      Q1      Q3
thickness   0.8300   1.8300  1.0525  1.6425

(a) Give a point estimate of the mean value of coating thickness.

(b) Give a point estimate of the median value of coating thickness.

(c) Give a point estimate of the value that separates the largest 10% of all values in the coating thickness distribution from the remaining 90%.

(d) Estimate P(X < 1.5), the proportion of all thickness values less than 1.5.

(e) Give the estimated standard error of the estimator used in (a).
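A sketch of the computations from the summary statistics alone. Parts (c) and (d) additionally assume the thickness distribution is approximately normal, since the raw data are not reproduced here:

```python
from statistics import NormalDist

n, xbar, med, s = 16, 1.3481, 1.3950, 0.3385

est_mean = xbar                                      # (a)
est_median = med                                     # (b)
z90 = NormalDist().inv_cdf(0.90)                     # ≈ 1.2816
est_90th = xbar + z90 * s                            # (c) normal-based 90th percentile
est_below_1_5 = NormalDist().cdf((1.5 - xbar) / s)   # (d) P(X < 1.5) under normality
se_mean = s / n ** 0.5                               # (e) matches the SE Mean column

print(round(est_90th, 3), round(est_below_1_5, 3), round(se_mean, 4))
```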

Exercise 6-8: In a random sample of 80 components of a certain type, 12 are found to be defective.

(a) Give a point estimate of the proportion of all such components that are not defective.

(b) Suppose 5 of these components are randomly selected and connected in series, so that the system works only if all 5 components work. Estimate the proportion of all such systems that work properly.
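A short sketch of both parts; part (b) treats the 5 components as independent, so a series system works only when every component does, and the point estimate from (a) is plugged in:

```python
n, defective = 80, 12

p_good = (n - defective) / n   # (a) estimate: 68/80 = 0.85

# (b) series system of 5 independent components: all must work
p_system = p_good ** 5         # 0.85**5 ≈ 0.4437

print(p_good, round(p_system, 4))
```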

Exercise 6-12:

X: yield of 1st type of fertilizer. E(X) = μ₁, Var(X) = σ₁²

Y: yield of 2nd type of fertilizer. E(Y) = μ₂, Var(Y) = σ₂²

The pooled estimator σ̂² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²]/(n₁ + n₂ − 2), based on samples of sizes n₁ and n₂, is an unbiased estimator for σ² if σ₁² = σ₂² = σ², since then E(σ̂²) = [(n₁ − 1)σ² + (n₂ − 1)σ²]/(n₁ + n₂ − 2) = σ².

Exercise 6-13:

f(x; θ) = 0.5(1 + θx), −1 ≤ x ≤ 1, so that E(X) = θ/3.

θ̂ = 3X̄ is an unbiased estimator for θ, since E(θ̂) = 3E(X̄) = 3(θ/3) = θ.

METHODS OF OBTAINING POINT ESTIMATORS

  1. The Method of Moments (MME)

Let X₁, X₂, …, Xₙ be a random sample from a pmf or pdf. For k = 1, 2, …, the kth population moment of the distribution is E(Xᵏ). The kth sample moment is (1/n)Σᵢ₌₁ⁿ Xᵢᵏ.

Steps to follow: If you have only one unknown parameter,

(i) calculate E(X).

(ii) equate it to the first sample moment, X̄ = (1/n)ΣXᵢ.

(iii) solve for the unknown parameter (such as θ₁).

If you have two unknown parameters, you also need the following to solve for two unknowns with two equations.

(iv) calculate E(X²).

(v) equate it to the second sample moment, (1/n)ΣXᵢ².

(vi) solve for the second unknown parameter (such as θ₂).

If you have more than two unknown parameters, repeat the same steps for k = 3, … until you can solve for all of them.
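The steps above can be sketched numerically. The data set below is hypothetical; the exponential case illustrates one parameter (E(X) = 1/λ, so λ̂ = 1/X̄), and treating the same numbers as normal data illustrates the two-parameter case:

```python
data = [0.5, 1.2, 0.3, 2.1, 0.9]           # assumed observations
n = len(data)

xbar = sum(data) / n                       # first sample moment, step (ii)
lam_mme = 1 / xbar                         # exponential: solve 1/lam = x-bar

m2 = sum(x * x for x in data) / n          # second sample moment, step (v)
mu_mme = xbar                              # normal: E(X) = mu
sigma2_mme = m2 - xbar ** 2                # normal: E(X^2) = sigma^2 + mu^2

print(round(lam_mme, 3), round(mu_mme, 3), round(sigma2_mme, 3))
```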

  2. The Method of Maximum Likelihood (MLE)

The likelihood function is the joint pmf or pdf of X₁, …, Xₙ, regarded as a function of the unknown θ values once the x's are observed. The maximum likelihood estimates are the θ values that maximize the likelihood function.

Steps to follow:

(i) Determine the likelihood function.

(ii) Take the natural logarithm of the likelihood function.

(iii) Take the first derivative with respect to each unknown θ and equate it to zero (if you have m unknown parameters, you will have m equations as a result).

(iv) Solve for the unknown θ's.

(v) Check that the solution really maximizes the function by looking at the second derivative.
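A sketch of steps (i)–(v) for the exponential distribution with a hypothetical sample, where the maximization can be done by hand: L = λⁿe^(−λΣxᵢ), ln L = n ln(λ) − λΣxᵢ, and setting the derivative n/λ − Σxᵢ to zero gives λ̂ = n/Σxᵢ = 1/X̄:

```python
import math

data = [0.5, 1.2, 0.3, 2.1, 0.9]    # assumed observations
n, total = len(data), sum(data)

lam_mle = n / total                  # step (iv): solve n/lam - sum(x) = 0

def log_lik(lam):
    # step (ii): ln L for the exponential model
    return n * math.log(lam) - lam * total

# step (v), informally: nearby values give a lower log-likelihood
# (the second derivative, -n/lam**2, is negative everywhere)
assert log_lik(lam_mle) > log_lik(0.9 * lam_mle)
assert log_lik(lam_mle) > log_lik(1.1 * lam_mle)
print(round(lam_mle, 3))  # for the exponential, the MLE equals the MME
```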

The Invariance Principle: Let θ̂₁, θ̂₂, …, θ̂ₘ be the MLE's of the parameters θ₁, θ₂, …, θₘ. Then the MLE of any function h(θ₁, θ₂, …, θₘ) of these parameters is the function h(θ̂₁, θ̂₂, …, θ̂ₘ) of the MLE's.

Example:

(1) Let X₁, …, Xₙ be a random sample of normally distributed random variables with mean μ and standard deviation σ.

The method of moments and the maximum likelihood estimates of μ and σ² are μ̂ = X̄ and σ̂² = (1/n)Σ(Xᵢ − X̄)².

(2) Let X₁, …, Xₙ be a random sample of exponentially distributed random variables with parameter λ.

The method of moments estimate and the maximum likelihood estimate of λ are both λ̂ = 1/X̄.

(3) Let X be a binomially distributed random variable with parameters n and p.

The method of moments estimate and the maximum likelihood estimate of p are both p̂ = X/n.

(4) Let X₁, …, Xₙ be a random sample of Poisson distributed random variables with parameter λ.

The method of moments estimate and the maximum likelihood estimate of λ are both λ̂ = X̄.
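Each of the four cases reduces to a simple sample quantity; a sketch applying the formulas to one hypothetical data set (the same numbers are reused across models purely for illustration):

```python
data = [2, 0, 3, 1, 4, 2]                            # hypothetical counts
n = len(data)
xbar = sum(data) / n

mu_hat = xbar                                        # (1) normal mean
sigma2_hat = sum((x - xbar) ** 2 for x in data) / n  # (1) normal variance, divisor n
lam_exp_hat = 1 / xbar                               # (2) exponential rate
lam_pois_hat = xbar                                  # (4) Poisson mean

# note the divisor n (not n - 1) in sigma2_hat: this estimate is biased,
# which previews the question about unbiasedness
print(mu_hat, round(sigma2_hat, 3), lam_exp_hat, lam_pois_hat)
```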

Are all the estimates above unbiased? Some are, but others are not (will be discussed in class).

Exercise 6-20: A random sample of n bike helmets is selected.

X: the number among the n that are flawed, x = 0, 1, 2, …, n

p = P(a helmet is flawed)

(a) What is the maximum likelihood estimate (MLE) of p if n = 20 and x = 3?

(b) Is the estimator in (a) unbiased?

(c) What is the MLE of (1 − p)⁵, the probability that none of the next five helmets examined is flawed?

(d) Instead of selecting 20 helmets to examine, suppose helmets are examined in succession until 3 flawed ones are found. How would X and the estimator of p differ?
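Parts (a) and (c) as a short sketch: the binomial MLE is p̂ = x/n, and the invariance principle gives the MLE of (1 − p)⁵ by plugging p̂ in:

```python
n, x = 20, 3

p_hat = x / n                     # (a) MLE of p: 3/20 = 0.15
none_flawed = (1 - p_hat) ** 5    # (c) MLE of (1 - p)^5 by invariance

print(p_hat, round(none_flawed, 4))  # 0.15 and ≈ 0.4437
```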

Exercise 6-22: Let X₁, …, Xₙ be a random sample from the pdf f(x; θ) = (θ + 1)x^θ, 0 ≤ x ≤ 1.

(a)

E(X) = ∫₀¹ x(θ + 1)x^θ dx = (θ + 1)/(θ + 2).

Set E(X) = X̄ and then solve for θ: θ̂ = (2X̄ − 1)/(1 − X̄).

The given data yield x̄ = 0.80, so the method of moments estimate for θ is θ̂ = (2(0.80) − 1)/(1 − 0.80) = 3.

(b) L = Likelihood = Πᵢ (θ + 1)xᵢ^θ = (θ + 1)ⁿ(Πᵢ xᵢ)^θ

ln(L) = n ln(θ + 1) + θ Σ ln(xᵢ)

d ln(L)/dθ = n/(θ + 1) + Σ ln(xᵢ) = 0, then solve for θ: θ̂ = −n/Σ ln(xᵢ) − 1.

The given data (n = 10) yield Σ ln(xᵢ) = −2.4295, so the maximum likelihood estimate for θ is θ̂ = 10/2.4295 − 1 = 3.1161.
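With the pdf f(x; θ) = (θ + 1)x^θ on 0 ≤ x ≤ 1 (the model this exercise uses), the method of moments gives E(X) = (θ + 1)/(θ + 2), hence θ̂ = (2X̄ − 1)/(1 − X̄), and maximizing ln L = n ln(θ + 1) + θΣ ln(xᵢ) gives θ̂ = −n/Σ ln(xᵢ) − 1. A sketch with the quoted summary values (n = 10 is consistent with the numbers shown):

```python
xbar, n, sum_log = 0.80, 10, -2.4295      # summary values for the data

theta_mme = (2 * xbar - 1) / (1 - xbar)   # method of moments
theta_mle = -n / sum_log - 1              # maximum likelihood

print(round(theta_mme, 4), round(theta_mle, 4))  # 3.0 and 3.1161
```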

Proposition: Under very general conditions on the joint distribution of the sample, when the sample size is large, the MLE of any parameter θ is approximately unbiased and has a variance that is nearly as small as can be achieved by any estimator.