Statistics 550 Notes 3

Reading: Section 1.3

Decision Theoretic Framework: Framework for evaluating and choosing statistical inference procedures

I. Motivating Example

A cofferdam protecting a construction site was designed to withstand flows of up to 1870 cubic feet per second (cfs). An engineer wishes to estimate the probability that the dam will be overtopped during the upcoming year. Over the previous 25 years, the annual maximum flood level has exceeded 1870 cfs 5 times. The engineer models the data on whether the flood level exceeded 1870 cfs in each year as independent Bernoulli trials with a common probability $\theta$ that the flood level exceeds 1870 cfs in a given year.

Some possible estimates of $\theta$ based on iid Bernoulli trials $X_1, \ldots, X_n$ (here $X_i = 1$ if the flood level exceeded 1870 cfs in year $i$, with $n = 25$ and $\sum_i X_i = 5$):

(1) $\hat{\theta}_1 = \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$, the sample proportion;

(2) $\hat{\theta}_2 = \frac{\sum_{i=1}^{n} X_i + 1}{n + 2}$, the posterior mean for a uniform prior on $\theta$;

(3) $\hat{\theta}_3 = \frac{\sum_{i=1}^{n} X_i + 2}{n + 4}$, the posterior mean for a Beta(2,2) prior on $\theta$ (called the Wilson estimate, recommended by Moore and McCabe, Introduction to the Practice of Statistics).
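
As a numerical check, here is a minimal Python sketch (not part of the notes) computing the three estimates for the cofferdam data, with 5 exceedances in 25 years:

```python
# A sketch computing the three estimates of theta for the cofferdam data.
n, s = 25, 5  # n years of record; s years in which 1870 cfs was exceeded

theta_1 = s / n              # (1) sample proportion: 0.2
theta_2 = (s + 1) / (n + 2)  # (2) posterior mean, uniform prior: 0.2222...
theta_3 = (s + 2) / (n + 4)  # (3) posterior mean, Beta(2,2) prior (Wilson): 0.2414...

print(theta_1, theta_2, theta_3)
```

Note that the two posterior means shrink the sample proportion toward 1/2 by different amounts.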

How should we decide which of these estimates to use?

The answer depends in part on how errors in the estimation of $\theta$ affect us.

Example 1 of decision problem: The firm wants the engineer to provide her best “guess” of $\theta$, the probability of an overflow, i.e., to estimate $\theta$ by some $\hat{\theta}$. The firm wants the probability of an overflow to be at most 0.05. Based on the estimate $\hat{\theta}$ of $\theta$, the engineer’s firm plans to spend an additional $g(\hat{\theta})$ dollars to shore up the dam, where $g$ is an increasing function. By spending this money, the firm will make the probability of an overflow be some reduced value $h(\theta, \hat{\theta}) \le \theta$. The cost of an overflow to the firm is $C$ dollars. The expected cost to the firm of using an estimate $\hat{\theta}$ of $\theta$ (for a true initial probability of overflow $\theta$) is $g(\hat{\theta}) + C\, h(\theta, \hat{\theta})$.

We want to choose an estimate which provides low expected cost.

Example 2 of decision problem: Another decision problem besides estimating $\theta$ might be that the firm wants to decide whether $\theta \le 0.05$ or $\theta > 0.05$; if $\theta > 0.05$, the firm would like to build additional support for the dam. This is an example of a testing problem: deciding whether the parameter lies in one of two subsets that form a partition of the parameter space. The cost to the firm of making the wrong decision about whether $\theta \le 0.05$ or $\theta > 0.05$ depends on what type of error was made (deciding that $\theta > 0.05$ when in fact $\theta \le 0.05$, or deciding that $\theta \le 0.05$ when in fact $\theta > 0.05$).

The decision theoretic framework involves:

(1) clarifying the objectives of the study;

(2) pointing out what the different possible actions are;

(3) providing assessments of the risk, accuracy, and reliability of statistical procedures;

(4) providing guidance in the choice of procedures for analyzing outcomes of experiments.

II. Components of the Decision Theory Framework (Section 1.3.1)

We observe data $X$ from a distribution $P_\theta$, where we do not know the true $\theta$ but only know that $\theta \in \Theta$ (the statistical model $\{P_\theta : \theta \in \Theta\}$).

The true parameter vector $\theta$ is sometimes called the “state of nature.”

Action space: The action space $\mathcal{A}$ is the set of possible actions, decisions, or claims that we can contemplate making after observing the data $X$.

For Example 1, the action space is the set of possible estimates of $\theta$ (the probability of the dam being overtopped), $\mathcal{A} = [0, 1]$.

For Example 2, the action space is {decide that $\theta \le 0.05$, decide that $\theta > 0.05$}.

Loss function: The loss function $l(\theta, a)$ is the loss incurred by taking the action $a$ when the true parameter vector is $\theta$.

The loss function is assumed to be nonnegative. We want the loss to be small.

Relationship between loss function and utility function in economics: The loss function is related to the utility function in economics. If the utility of taking the action $a$ when the true state of nature is $\theta$ is $U(\theta, a)$, then we can define the loss as $l(\theta, a) = \sup_{a' \in \mathcal{A}} U(\theta, a') - U(\theta, a)$, which makes the loss nonnegative.

When there is uncertainty about the outcome of interest after taking the action (as in Example 1), then we can replace the utility with the expected utility under the von Neumann–Morgenstern axioms for decision making under uncertainty (W. Nicholson, Microeconomic Theory, 6th ed., Ch. 12).

Ideally, we choose the loss function based on the economics of the decision problem as in Example 1. However, more commonly, the loss function is chosen to qualitatively reflect what we are trying to do and to be mathematically convenient.

Commonly used loss functions for point estimation of a real-valued parameter $\theta$:

Denote our estimate of $\theta$ by $\hat{\theta}$.

The most commonly used loss function is

quadratic (squared error) loss: $l(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2$.

Other choices that are less computationally convenient, but that perhaps more realistically penalize large errors less heavily, are:

(1) absolute value loss, $l(\theta, \hat{\theta}) = |\theta - \hat{\theta}|$;

(2) Huber’s loss function,

$l(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2$ if $|\theta - \hat{\theta}| \le k$ and $l(\theta, \hat{\theta}) = 2k|\theta - \hat{\theta}| - k^2$ if $|\theta - \hat{\theta}| > k$,

for some constant $k$;

(3) zero-one loss function,

$l(\theta, \hat{\theta}) = 0$ if $|\theta - \hat{\theta}| \le k$ and $l(\theta, \hat{\theta}) = 1$ if $|\theta - \hat{\theta}| > k$,

for some constant $k$.
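
For concreteness, here is a small Python sketch of these loss functions (the Huber and zero-one forms follow the reconstructions above, with tuning constant k):

```python
def quadratic_loss(theta, est):
    # squared error loss
    return (est - theta) ** 2

def absolute_loss(theta, est):
    # absolute value loss
    return abs(est - theta)

def huber_loss(theta, est, k=1.0):
    # quadratic for errors up to k, linear growth beyond k (continuous at k)
    e = abs(est - theta)
    return e ** 2 if e <= k else 2 * k * e - k ** 2

def zero_one_loss(theta, est, k=0.1):
    # loss 0 if the estimate is within k of theta, loss 1 otherwise
    return 0.0 if abs(est - theta) <= k else 1.0
```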

Decision procedures: A decision procedure or decision rule $\delta$ specifies how we use the data $X$ to choose an action $a \in \mathcal{A}$. A decision procedure is a function $\delta(X)$ from the sample space of the experiment to the action space $\mathcal{A}$.

For Example 1, decision procedures include $\delta(X_1, \ldots, X_n) = \bar{X}$ and $\delta(X_1, \ldots, X_n) = \frac{\sum_{i=1}^{n} X_i + 1}{n + 2}$.

Risk function: The loss of a decision procedure will vary over repetitions of the experiment because the data from the experiment are random. The risk function is the expected loss from using the decision procedure $\delta$ when the true parameter vector is $\theta$:

$R(\theta, \delta) = E_\theta[l(\theta, \delta(X))]$.

Example: For quadratic loss in point estimation of $\theta$, the risk function is the mean squared error:

$R(\theta, \hat{\theta}) = E_\theta[(\hat{\theta}(X) - \theta)^2]$.

This mean squared error can be decomposed as bias squared plus variance.

Proposition 3.1: $E_\theta[(\hat{\theta} - \theta)^2] = \left(E_\theta[\hat{\theta}] - \theta\right)^2 + \mathrm{Var}_\theta(\hat{\theta})$, i.e., MSE = bias$^2$ + variance.

Proof: We have

$E_\theta[(\hat{\theta} - \theta)^2] = E_\theta\left[\left((\hat{\theta} - E_\theta[\hat{\theta}]) + (E_\theta[\hat{\theta}] - \theta)\right)^2\right] = E_\theta[(\hat{\theta} - E_\theta[\hat{\theta}])^2] + 2(E_\theta[\hat{\theta}] - \theta)E_\theta[\hat{\theta} - E_\theta[\hat{\theta}]] + (E_\theta[\hat{\theta}] - \theta)^2$.

The middle term is zero because $E_\theta[\hat{\theta} - E_\theta[\hat{\theta}]] = 0$, leaving $\mathrm{Var}_\theta(\hat{\theta}) + (E_\theta[\hat{\theta}] - \theta)^2$. ∎
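
The decomposition can be checked by simulation. Here is a Python sketch using the Wilson estimate from Section I under Bernoulli trials with $\theta = 0.2$ and $n = 25$ (illustrative choices):

```python
import random

random.seed(0)
theta, n, reps = 0.2, 25, 200_000

# Simulate the Wilson estimate (sum of X_i + 2) / (n + 4) repeatedly.
ests = []
for _ in range(reps):
    s = sum(random.random() < theta for _ in range(n))
    ests.append((s + 2) / (n + 4))

mean_est = sum(ests) / reps
mse = sum((e - theta) ** 2 for e in ests) / reps
bias_sq = (mean_est - theta) ** 2
var = sum((e - mean_est) ** 2 for e in ests) / reps
print(mse, bias_sq + var)  # the two numbers agree (up to rounding)
```
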
Example 3: Suppose that an iid sample $X_1, \ldots, X_n$ is drawn from the uniform distribution on $[0, \theta]$, where $\theta > 0$ is an unknown parameter, so that the density of $X_i$ is

$f(x; \theta) = \frac{1}{\theta}$ for $0 \le x \le \theta$, and $f(x; \theta) = 0$ otherwise.

Several point estimators:

1. $W_1 = \max(X_1, \ldots, X_n)$. Note: $W_1$ is biased, $E_\theta[W_1] = \frac{n}{n+1}\theta$.

2. $W_2 = \frac{n+1}{n}\max(X_1, \ldots, X_n)$. Note: Unlike $W_1$, $W_2$ is unbiased because $E_\theta[W_2] = \frac{n+1}{n} \cdot \frac{n}{n+1}\theta = \theta$.

3. $W_3 = 2\bar{X}$. Note: $W_3$ is unbiased, $E_\theta[W_3] = 2E_\theta[\bar{X}] = 2 \cdot \frac{\theta}{2} = \theta$.

Comparison of the three estimators for the uniform example using the mean squared error criterion:

1. $W_1 = \max(X_1, \ldots, X_n)$.

The sampling distribution for $W_1$ has density

$f_{W_1}(w) = \frac{n w^{n-1}}{\theta^n}, \quad 0 \le w \le \theta$,

and

$E_\theta[W_1] = \int_0^\theta w \, \frac{n w^{n-1}}{\theta^n}\, dw = \frac{n}{n+1}\theta$.

To calculate $\mathrm{Var}_\theta(W_1)$, we calculate $E_\theta[W_1^2] = \int_0^\theta w^2 \, \frac{n w^{n-1}}{\theta^n}\, dw = \frac{n}{n+2}\theta^2$ and use the formula $\mathrm{Var}_\theta(W_1) = E_\theta[W_1^2] - (E_\theta[W_1])^2 = \left[\frac{n}{n+2} - \frac{n^2}{(n+1)^2}\right]\theta^2 = \frac{n}{(n+1)^2(n+2)}\theta^2$.

Thus,

$MSE(W_1) = \mathrm{Var}_\theta(W_1) + (E_\theta[W_1] - \theta)^2 = \frac{n}{(n+1)^2(n+2)}\theta^2 + \frac{\theta^2}{(n+1)^2} = \frac{2\theta^2}{(n+1)(n+2)}$.

2. $W_2 = \frac{n+1}{n}\max(X_1, \ldots, X_n) = \frac{n+1}{n}W_1$.

Note $E_\theta[W_2^2] = \frac{(n+1)^2}{n^2}E_\theta[W_1^2] = \frac{(n+1)^2}{n^2} \cdot \frac{n}{n+2}\theta^2 = \frac{(n+1)^2}{n(n+2)}\theta^2$.

Thus, $\mathrm{Var}_\theta(W_2) = \frac{(n+1)^2}{n(n+2)}\theta^2 - \theta^2 = \frac{(n+1)^2 - n(n+2)}{n(n+2)}\theta^2 = \frac{\theta^2}{n(n+2)}$,

and because $W_2$ is unbiased, $MSE(W_2) = \mathrm{Var}_\theta(W_2) = \frac{\theta^2}{n(n+2)}$.

3. $W_3 = 2\bar{X}$.

To find the mean squared error, we use the fact that if $X_1, \ldots, X_n$ are iid with mean $\mu$ and variance $\sigma^2$, then $\bar{X}$ has mean $\mu$ and variance $\sigma^2/n$.

We have $E_\theta[X_i] = \frac{\theta}{2}$ and $\mathrm{Var}_\theta(X_i) = \frac{\theta^2}{12}$.

Thus, $E_\theta[\bar{X}] = \frac{\theta}{2}$ and $\mathrm{Var}_\theta(\bar{X}) = \frac{\theta^2}{12n}$, so $E_\theta[W_3] = \theta$ and $\mathrm{Var}_\theta(W_3) = 4 \cdot \frac{\theta^2}{12n} = \frac{\theta^2}{3n}$.

$W_3$ is unbiased and has mean squared error $MSE(W_3) = \frac{\theta^2}{3n}$.

The mean square errors of the three estimators are the following:

Estimator / MSE
$W_1$ / $\frac{2\theta^2}{(n+1)(n+2)}$
$W_2$ / $\frac{\theta^2}{n(n+2)}$
$W_3$ / $\frac{\theta^2}{3n}$

For n=1, the three estimators have the same MSE.

For $n > 1$,

$\frac{\theta^2}{n(n+2)} < \frac{2\theta^2}{(n+1)(n+2)} \le \frac{\theta^2}{3n}$,

with equality in the second inequality only when $n = 2$. So $W_2$ is best, $W_1$ is second best, and $W_3$ is the worst.
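
These formulas can be verified by simulation; a Python sketch with the illustrative choices $\theta = 1$ and $n = 10$:

```python
import random

random.seed(0)
theta, n, reps = 1.0, 10, 100_000
sq_err = {"W1": 0.0, "W2": 0.0, "W3": 0.0}

for _ in range(reps):
    x = [random.uniform(0, theta) for _ in range(n)]
    m = max(x)
    ests = {"W1": m, "W2": (n + 1) / n * m, "W3": 2 * sum(x) / n}
    for name, est in ests.items():
        sq_err[name] += (est - theta) ** 2

for name in sq_err:
    print(name, sq_err[name] / reps)
# Formulas give MSE(W1) = 2/132 = 0.0152, MSE(W2) = 1/120 = 0.0083,
# MSE(W3) = 1/30 = 0.0333.
```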

III. Admissibility/Inadmissibility of Decision Procedures

A decision procedure $\delta$ is inadmissible if there exists another decision procedure $\delta'$ such that $R(\theta, \delta') \le R(\theta, \delta)$ for all $\theta \in \Theta$ and $R(\theta, \delta') < R(\theta, \delta)$ for at least one $\theta \in \Theta$. The decision procedure $\delta'$ is then said to dominate $\delta$; there is no justification for using $\delta$ rather than $\delta'$.

In Example 3, for $n > 1$, $W_1$ and $W_3$ are inadmissible point estimators of $\theta$ under squared error loss (each is dominated by $W_2$).

A decision procedure $\delta$ is admissible if it is not inadmissible, i.e., if there does not exist a decision procedure $\delta'$ such that $R(\theta, \delta') \le R(\theta, \delta)$ for all $\theta \in \Theta$ and $R(\theta, \delta') < R(\theta, \delta)$ for at least one $\theta \in \Theta$.

IV. Selection of a decision procedure:

We would like to choose a decision procedure which has a “good” risk function.

Ideal: We would like to construct a decision procedure $\delta$ that is at least as good as all other decision procedures for all $\theta$, i.e., such that $R(\theta, \delta) \le R(\theta, \delta')$ for all $\theta \in \Theta$ and all other decision procedures $\delta'$.

This is generally impossible!

Example 4: For $X_1, \ldots, X_n$ iid $N(\theta, 1)$, $-\infty < \theta < \infty$, the constant estimator $\delta(X_1, \ldots, X_n) \equiv \theta_0$ (for a fixed number $\theta_0$, ignoring the data) is an admissible point estimator of $\theta$ for squared error loss.

Proof: Suppose $\delta \equiv \theta_0$ is inadmissible. Then there exists a decision procedure $\delta'$ that dominates $\delta$. This implies that $R(\theta_0, \delta') \le R(\theta_0, \delta) = 0$. Hence, $E_{\theta_0}[(\delta'(X) - \theta_0)^2] = 0$. Since $(\delta'(X) - \theta_0)^2$ is nonnegative, this implies $\delta'(X) = \theta_0$ with probability 1 under $P_{\theta_0}$.

Let $S$ be the event that $\delta'(X) \ne \theta_0$. We will show that $P_\theta(S) = 0$ for all $\theta$. This means that $\delta'(X) = \theta_0$ with probability 1 for all $\theta$, which means that $R(\theta, \delta') = R(\theta, \delta)$ for all $\theta$; this contradicts $\delta'$ dominating $\delta$ and proves that $\delta$ is admissible.

To show that $P_\theta(S) = 0$ for all $\theta$, we use the importance sampling idea that the expectation of a random variable $h(X)$ under a density $f$ can be evaluated as the expectation of the random variable $h(X)f(X)/g(X)$ under a density $g$, as long as $f$ and $g$ have the same support:

$E_f[h(X)] = \int h(x)f(x)\,dx = \int h(x)\frac{f(x)}{g(x)}g(x)\,dx = E_g\left[h(X)\frac{f(X)}{g(X)}\right]. \quad (0.1)$

Since $P_{\theta_0}(S) = 0$, the random variable $1_S(X)\frac{f_\theta(X)}{f_{\theta_0}(X)}$ (where $f_\theta$ is the joint density of $X = (X_1, \ldots, X_n)$ under $\theta$)

is zero with probability one under $P_{\theta_0}$. Thus, by (0.1), $P_\theta(S) = E_{\theta_0}\left[1_S(X)\frac{f_\theta(X)}{f_{\theta_0}(X)}\right] = 0$ for all $\theta$. ■

Comparison of risk under squared error loss for $\bar{X}$ and $\delta \equiv \theta_0$: $R(\theta, \bar{X}) = 1/n$ for all $\theta$, while $R(\theta, \delta) = (\theta - \theta_0)^2$, which is zero at $\theta = \theta_0$ but grows without bound as $\theta$ moves away from $\theta_0$.

Although $\delta \equiv \theta_0$ is admissible, it does not have good risk properties for many values of $\theta$.
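
A short Python sketch tabulating the two risk functions under the formulation above (the choices $n = 25$ and $\theta_0 = 0$ are illustrative):

```python
n, theta0 = 25, 0.0

for theta in [-1.0, -0.5, -0.2, 0.0, 0.2, 0.5, 1.0]:
    risk_xbar = 1 / n                   # X-bar is unbiased with variance 1/n
    risk_const = (theta - theta0) ** 2  # zero variance, pure squared bias
    print(f"theta = {theta:+.2f}: R(theta, Xbar) = {risk_xbar:.3f}, "
          f"R(theta, theta0) = {risk_const:.3f}")
```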

Approaches to choosing a decision procedure with good risk properties:

(1) Restrict the class of decision procedures and try to choose an optimal procedure within this class, e.g., for point estimation, we might only consider unbiased estimators $\hat{\theta}$ of $\theta$ such that $E_\theta[\hat{\theta}] = \theta$ for all $\theta$.

(2) Compare risk functions by a global criterion. We shall discuss the Bayes and minimax criteria.

Example 5 (Example 1.3.5 from Bickel and Doksum)

We are trying to decide whether to drill a location for oil. There are two possible states of nature: $\theta_1$ = the location contains oil and $\theta_2$ = the location doesn’t contain oil. We are considering three actions: $a_1$ = drill for oil, $a_2$ = sell the location, or $a_3$ = sell partial rights to the location.

The following loss function $l(\theta, a)$ is decided on:

 / $a_1$ (Drill) / $a_2$ (Sell) / $a_3$ (Partial rights)
$\theta_1$ (Oil) / 0 / 10 / 5
$\theta_2$ (No oil) / 12 / 1 / 6

An experiment is conducted to obtain information about $\theta$, resulting in the random variable $X$ (the rock formation) with possible values 0, 1 and frequency function $p(x \mid \theta)$ given by the following table:

 / $x = 0$ / $x = 1$
$\theta_1$ (Oil) / 0.3 / 0.7
$\theta_2$ (No oil) / 0.6 / 0.4

$X = 1$ represents the presence of a certain geological formation that is more likely to be present when there is oil.

The possible nonrandomized decision procedures $\delta_1, \ldots, \delta_9$ are:

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$\delta(0)$ / $a_1$ / $a_1$ / $a_1$ / $a_2$ / $a_2$ / $a_2$ / $a_3$ / $a_3$ / $a_3$
$\delta(1)$ / $a_1$ / $a_2$ / $a_3$ / $a_1$ / $a_2$ / $a_3$ / $a_1$ / $a_2$ / $a_3$

The risk of $\delta$ at $\theta$ is

$R(\theta, \delta) = E_\theta[l(\theta, \delta(X))] = l(\theta, \delta(0))\, P_\theta(X = 0) + l(\theta, \delta(1))\, P_\theta(X = 1)$.

The risk functions are

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$R(\theta_1, \delta)$ / 0 / 7 / 3.5 / 3 / 10 / 6.5 / 1.5 / 8.5 / 5
$R(\theta_2, \delta)$ / 12 / 7.6 / 9.6 / 5.4 / 1 / 3 / 8.4 / 4 / 6

The decision rules 2, 3, 8 and 9 are inadmissible but the decision rules 1, 4, 5, 6 and 7 are all admissible.
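
The risk table and the admissibility check can be reproduced with a short Python sketch (the rule numbering matches the table above):

```python
# Loss l(theta, a) and frequency function p(x | theta) from the tables above.
loss = {
    "oil":    {"a1": 0,  "a2": 10, "a3": 5},
    "no oil": {"a1": 12, "a2": 1,  "a3": 6},
}
p = {
    "oil":    {0: 0.3, 1: 0.7},
    "no oil": {0: 0.6, 1: 0.4},
}
actions = ["a1", "a2", "a3"]
rules = [(a0, a1) for a0 in actions for a1 in actions]  # rules 1..9: (delta(0), delta(1))

def risk(theta, rule):
    # R(theta, delta) = l(theta, delta(0)) p(0|theta) + l(theta, delta(1)) p(1|theta)
    return sum(loss[theta][rule[x]] * p[theta][x] for x in (0, 1))

risks = [(risk("oil", r), risk("no oil", r)) for r in rules]
for i, (r1, r2) in enumerate(risks, start=1):
    dominated = any(s1 <= r1 and s2 <= r2 and (s1 < r1 or s2 < r2)
                    for (s1, s2) in risks)
    print(f"rule {i}: R(oil) = {r1:.1f}, R(no oil) = {r2:.1f}"
          + ("  <- inadmissible" if dominated else ""))
```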

V. Bayes Criterion

The Bayesian point of view leads to a natural global criterion.

Suppose a person’s prior distribution for $\theta$ is $\pi(\theta)$ and the model is that $X$ given $\theta$ has probability density function (or probability mass function) $p(x \mid \theta)$. Then the joint (subjective) pdf (or pmf) of $(X, \theta)$ is $p(x \mid \theta)\,\pi(\theta)$.

The Bayes risk of a decision procedure $\delta$ for a prior distribution $\pi$, denoted by $r(\pi, \delta)$, is the expected value of the risk over the joint distribution of $(X, \theta)$:

$r(\pi, \delta) = E_\pi[R(\theta, \delta)]$.

For a person with subjective prior probability distribution $\pi$, the decision procedure which minimizes $r(\pi, \delta)$ minimizes the person’s (subjective) expected loss and is the best procedure from this person’s point of view. The decision procedure which minimizes the Bayes risk for a prior $\pi$ is called the Bayes rule for the prior $\pi$.

Example 5 continued: For the prior $\pi(\theta_1) = 0.2$ and $\pi(\theta_2) = 0.8$, the Bayes risks are:

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$r(\pi, \delta)$ / 9.6 / 7.48 / 8.38 / 4.92 / 2.8 / 3.7 / 7.02 / 4.9 / 5.8

Thus, rule 5 is the Bayes rule for this prior distribution.

The Bayes rule depends on the prior. For the prior $\pi(\theta_1) = 0.5$ and $\pi(\theta_2) = 0.5$, the Bayes risks are:

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$r(\pi, \delta)$ / 6 / 7.3 / 6.55 / 4.2 / 5.5 / 4.75 / 4.95 / 6.25 / 5.5

Thus, rule 4 is the Bayes rule for this prior distribution.
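
Both Bayes-risk calculations can be reproduced from the risk table with a few lines of Python (a sketch; rule numbering as above):

```python
risks = {1: (0, 12), 2: (7, 7.6), 3: (3.5, 9.6), 4: (3, 5.4), 5: (10, 1),
         6: (6.5, 3), 7: (1.5, 8.4), 8: (8.5, 4), 9: (5, 6)}

for pi_oil in (0.2, 0.5):  # prior probability of theta_1 (oil)
    bayes = {i: pi_oil * r1 + (1 - pi_oil) * r2
             for i, (r1, r2) in risks.items()}
    best = min(bayes, key=bayes.get)
    print(f"pi(oil) = {pi_oil}: Bayes rule is rule {best}, r = {bayes[best]:.2f}")
# Prints rule 5 (r = 2.80) for pi(oil) = 0.2 and rule 4 (r = 4.20) for pi(oil) = 0.5.
```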

A non-subjective interpretation of Bayes rules: The Bayes approach leads us to compare procedures on the basis of

$r(\pi, \delta) = \sum_\theta R(\theta, \delta)\,\pi(\theta)$ if $\theta$ is discrete with frequency function $\pi$, or

$r(\pi, \delta) = \int R(\theta, \delta)\,\pi(\theta)\,d\theta$ if $\theta$ is continuous with density $\pi$.

Such comparisons make sense even if we do not interpret $\pi$ as a prior density or frequency function, but only as a weight function that reflects the importance we place on doing well at the different possible values of $\theta$.

For example, in Example 5, if we felt that doing well at both $\theta_1$ and $\theta_2$ is equally important, we would set $\pi(\theta_1) = \pi(\theta_2) = 0.5$.

VI. Minimax Criterion

The minimax criterion minimizes the worst possible risk. That is, we prefer $\delta$ to $\delta'$ if and only if

$\sup_\theta R(\theta, \delta) < \sup_\theta R(\theta, \delta')$.

A procedure $\delta^*$ is minimax (over a class of considered decision procedures) if it satisfies

$\sup_\theta R(\theta, \delta^*) = \inf_\delta \sup_\theta R(\theta, \delta)$.

Among the nine decision rules considered for Example 5, rule 4 is the minimax rule:

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$R(\theta_1, \delta)$ / 0 / 7 / 3.5 / 3 / 10 / 6.5 / 1.5 / 8.5 / 5
$R(\theta_2, \delta)$ / 12 / 7.6 / 9.6 / 5.4 / 1 / 3 / 8.4 / 4 / 6
$\max_\theta R(\theta, \delta)$ / 12 / 7.6 / 9.6 / 5.4 / 10 / 6.5 / 8.4 / 8.5 / 6
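
The minimax computation as a Python sketch, again from the risk table:

```python
risks = {1: (0, 12), 2: (7, 7.6), 3: (3.5, 9.6), 4: (3, 5.4), 5: (10, 1),
         6: (6.5, 3), 7: (1.5, 8.4), 8: (8.5, 4), 9: (5, 6)}
max_risk = {i: max(pair) for i, pair in risks.items()}
minimax_rule = min(max_risk, key=max_risk.get)
print(minimax_rule, max_risk[minimax_rule])  # rule 4, maximum risk 5.4
```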

Game theory motivation for the minimax criterion: Suppose we play a two-person zero-sum game against Nature. Then the minimax decision procedure is the minimax strategy for the game.

Comments on the minimax criterion: The minimax criterion is very conservative. It aims to give maximum protection against the worst that can happen. The principle would be compelling if the statistician believed that Nature was a malevolent “opponent,” but in fact Nature is just the inanimate state of the world.

Although the minimax criterion is conservative, in many cases the principle does lead to reasonable procedures.

VII. Other Global Criteria for Decision Procedures

Two compromises between Bayes and minimax criteria that have been proposed are:

(1) Restricted risk Bayes: Suppose that $M$ is the maximum risk of the minimax decision procedure. Then one may be willing to consider decision procedures whose maximum risk exceeds $M$, if the excess is controlled, say, if

$R(\theta, \delta) \le (1 + \epsilon)M$ for all $\theta$, (0.2)

where $\epsilon$ is the proportional increase in risk that one is willing to tolerate. A restricted risk Bayes decision procedure for the prior $\pi$ is then obtained by minimizing the Bayes risk $r(\pi, \delta)$ among all decision procedures $\delta$ that satisfy (0.2).

For Example 5 and the prior $\pi(\theta_1) = 0.2$, $\pi(\theta_2) = 0.8$:

Rule / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9
$r(\pi, \delta)$ / 9.6 / 7.48 / 8.38 / 4.92 / 2.8 / 3.7 / 7.02 / 4.9 / 5.8
Max risk / 12 / 7.6 / 9.6 / 5.4 / 10 / 6.5 / 8.4 / 8.5 / 6

For $\epsilon = 0.1$ (maximum risk allowed = $(1 + 0.1) \times 5.4 = 5.94$), decision rule 4 is the restricted risk Bayes procedure; for $\epsilon = 0.25$ (maximum risk allowed = $(1 + 0.25) \times 5.4 = 6.75$), decision rule 6 is the restricted risk Bayes procedure.
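
A Python sketch of the restricted risk Bayes selection for this prior, with $M = 5.4$ (the minimax value from Section VI):

```python
risks = {1: (0, 12), 2: (7, 7.6), 3: (3.5, 9.6), 4: (3, 5.4), 5: (10, 1),
         6: (6.5, 3), 7: (1.5, 8.4), 8: (8.5, 4), 9: (5, 6)}
M, pi_oil = 5.4, 0.2

for eps in (0.1, 0.25):
    # Bayes risks of the rules whose maximum risk is at most (1 + eps) * M
    eligible = {i: pi_oil * r1 + (1 - pi_oil) * r2
                for i, (r1, r2) in risks.items()
                if max(r1, r2) <= (1 + eps) * M}
    best = min(eligible, key=eligible.get)
    print(f"eps = {eps}: rule {best}, Bayes risk {eligible[best]:.2f}")
# Prints rule 4 for eps = 0.1 and rule 6 for eps = 0.25.
```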

(2) Gamma minimaxity: Let $\Gamma$ be a class of prior distributions. A decision procedure $\delta^*$ is $\Gamma$-minimax (over a class of considered decision procedures) if

$\sup_{\pi \in \Gamma} r(\pi, \delta^*) = \inf_\delta \sup_{\pi \in \Gamma} r(\pi, \delta)$.

Thus, the procedure minimizes the maximum Bayes risk over the priors in the class $\Gamma$.

Computational issues: We will study how to find Bayes and minimax point estimators further in Chapter 3. The restricted risk Bayes procedure is appealing, but it is difficult to compute.

VIII. Randomized decision procedures

A randomized decision procedure is a decision procedure which assigns to each possible outcome $x$ of the data $X$ a random variable $\delta(x)$, where the values of $\delta(x)$ are actions in the action space. When $X = x$, a draw from the distribution of $\delta(x)$ will be taken and will constitute the action taken.

We will show in Chapter 3 that for any prior, there is always a nonrandomized decision procedure that has at least as small a Bayes risk as any randomized decision procedure (so we can ignore randomized decision procedures in looking for the Bayes rule).

Students of game theory will realize that a randomized decision procedure may lead to a lower maximum risk than any nonrandomized decision procedure.

Example: For Example 5, a randomized decision procedure $\delta^*$ is to flip a fair coin and use decision rule 4 if the coin lands heads and decision rule 6 if the coin lands tails – i.e., $\delta^*(x) = \delta_4(x)$ with probability 0.5 and $\delta^*(x) = \delta_6(x)$ with probability 0.5. The risk of this randomized decision procedure is

$R(\theta_1, \delta^*) = 0.5(3) + 0.5(6.5) = 4.75$ and $R(\theta_2, \delta^*) = 0.5(5.4) + 0.5(3) = 4.2$,

which has lower maximum risk (4.75) than decision rule 4 (maximum risk 5.4), the minimax rule among nonrandomized decision rules.
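
A one-line check of the mixture's risk in Python:

```python
r4, r6 = (3, 5.4), (6.5, 3)  # (R(theta_1), R(theta_2)) for rules 4 and 6
mixed = tuple(0.5 * a + 0.5 * b for a, b in zip(r4, r6))
print(mixed, max(mixed))  # (4.75, 4.2); maximum risk 4.75 < 5.4
```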

Randomized decision procedures are somewhat impractical – they make the statistician’s inferences seem less credible if she has to explain to a scientist that she flipped a coin after observing the data to determine the inferences.

We will show in Section 1.5 that a randomized decision procedure cannot lower the maximum risk if the loss function is convex.
