Statistics 510: Notes 7

Reading: Sections 3.4, 3.5

Next week’s homework will be e-mailed and posted on the web site www-stat-wharton.upenn.edu/~dsmall/stat510-f05 by tonight.

I. Independent Events

Review:

$E$ is independent of $F$ if knowledge that $F$ has occurred does not change the probability that $E$ occurs, i.e.,

$P(E \mid F) = P(E)$.

Independence can be expressed in the following way:

Two events $E$ and $F$ are said to be independent if $P(E \cap F) = P(E)P(F)$.

Two events $E$ and $F$ that are not independent are said to be dependent.

Example 1: Suppose that we toss two fair dice (green and red). Let E be the event that the sum of the two dice is 6 and F be the event that the green die equals 4. Are E and F independent?
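One way to check (a sketch, not part of the original notes) is to enumerate the 36 equally likely outcomes and compare $P(E \cap F)$ with $P(E)P(F)$:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely (green, red) outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

E = {(g, r) for (g, r) in outcomes if g + r == 6}   # sum of the dice is 6
F = {(g, r) for (g, r) in outcomes if g == 4}       # green die equals 4

def prob(event):
    return Fraction(len(event), len(outcomes))

print(prob(E & F))            # P(E and F) = 1/36
print(prob(E) * prob(F))      # P(E)P(F) = (5/36)(1/6) = 5/216
# Since 1/36 != 5/216, E and F are dependent.
```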

Suppose $E$ is independent of $F$. We will now show that $E$ is also independent of $F^c$.

Proof: Assume that $E$ is independent of $F$. Since $E = (E \cap F) \cup (E \cap F^c)$, and $E \cap F$ and $E \cap F^c$ are mutually exclusive, we have that

$P(E) = P(E \cap F) + P(E \cap F^c) = P(E)P(F) + P(E \cap F^c)$,

or equivalently,

$P(E \cap F^c) = P(E) - P(E)P(F) = P(E)\bigl(1 - P(F)\bigr) = P(E)P(F^c)$,

so $E$ is independent of $F^c$.

By similar reasoning, it follows that if $E$ is independent of $F$, then (i) $E^c$ is independent of $F$ and (ii) $E^c$ is independent of $F^c$.

Independence for more than two events

Independence becomes more complicated when we consider more than two events. We consider events $E_1, \ldots, E_n$ to be mutually independent if knowing that some subset of the events has occurred does not affect the probability that an event $E_j$ has occurred, where $E_j$ is not in that subset.

Consider three events $E_1, E_2, E_3$. Does pairwise independence of the events (i.e., $E_1$ is independent of $E_2$, $E_1$ is independent of $E_3$, and $E_2$ is independent of $E_3$) guarantee mutual independence? No.

Example 2: A fair coin is tossed twice. Let $E_1$ denote the event of heads on the first toss, $E_2$ denote the event of heads on the second toss, and $E_3$ denote the event that exactly one head is thrown. Verify that $E_1, E_2, E_3$ are pairwise independent but that $P(E_1 \cap E_2 \cap E_3) \neq P(E_1)P(E_2)P(E_3)$.
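The following enumeration (a sketch, not part of the original notes) verifies the three pairwise product rules and shows that the triple product rule fails:

```python
from fractions import Fraction
from itertools import product

# Sample space of two fair coin tosses; each of the 4 outcomes is equally likely.
outcomes = list(product("HT", repeat=2))

E1 = {o for o in outcomes if o[0] == "H"}          # heads on the first toss
E2 = {o for o in outcomes if o[1] == "H"}          # heads on the second toss
E3 = {o for o in outcomes if o.count("H") == 1}    # exactly one head

def prob(event):
    return Fraction(len(event), len(outcomes))

# Pairwise independence: each intersection has probability 1/4 = (1/2)(1/2).
for A, B in [(E1, E2), (E1, E3), (E2, E3)]:
    assert prob(A & B) == prob(A) * prob(B)

# But if E1 and E2 both occur there are two heads, so E3 cannot also occur.
print(prob(E1 & E2 & E3))                # 0
print(prob(E1) * prob(E2) * prob(E3))    # 1/8
```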

We define events $E_1, \ldots, E_n$ to be mutually independent if

$P(E_{i_1} \cap E_{i_2} \cap \cdots \cap E_{i_k}) = P(E_{i_1})P(E_{i_2})\cdots P(E_{i_k})$

for every subset $E_{i_1}, E_{i_2}, \ldots, E_{i_k}$ of these events. If $E_1, \ldots, E_n$ are mutually independent, then knowing that some subset of the events has occurred does not affect the probability that an event $E_j$ has occurred, where $E_j$ is not in that subset.

Example 3: Suppose that a fair coin is flipped three times. Let $E_1$ be the event of a head on the first flip; $E_2$ a tail on the second flip; and $E_3$ a head on the third flip. Are $E_1$, $E_2$, and $E_3$ mutually independent?
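A brute-force check (a sketch, not part of the original notes) confirms the product rule for every subset of the three events, which is exactly the definition above:

```python
from fractions import Fraction
from itertools import combinations, product

# Sample space of three fair coin flips; each of the 8 outcomes is equally likely.
outcomes = list(product("HT", repeat=3))

E1 = {o for o in outcomes if o[0] == "H"}   # head on the first flip
E2 = {o for o in outcomes if o[1] == "T"}   # tail on the second flip
E3 = {o for o in outcomes if o[2] == "H"}   # head on the third flip
events = [E1, E2, E3]

def prob(event):
    return Fraction(len(event), len(outcomes))

# Mutual independence requires the product rule for every subset of the events.
for k in (2, 3):
    for subset in combinations(events, k):
        p_inter = prob(set.intersection(*subset))
        p_prod = Fraction(1)
        for A in subset:
            p_prod *= prob(A)
        assert p_inter == p_prod
print("all product-rule checks pass, so E1, E2, E3 are mutually independent")
```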

Example 4: Recall the Los Angeles Times (August 24, 1987) article from Notes 3 on the infectivity of AIDS

“Several studies of sexual partners of people infected with the virus show that a single act of unprotected vaginal intercourse has a surprisingly low risk of infecting the uninfected partner – perhaps one in 100 to one in 1000. For an average, consider the risk to be one in 500. If there are 100 acts of intercourse with an infected partner, the odds of infection increase to one in five.

Statistically, 500 acts of intercourse with one infected partner or 100 acts with five partners lead to a 100% probability of infection (statistically, not necessarily in reality).”

Suppose that virus transmissions in 500 acts of intercourse are mutually independent events and that the probability of transmission in any one act is 1/500. Under this model, what is the probability of infection?
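Under this independence model the probability of infection is one minus the probability that none of the 500 acts transmits the virus, i.e., $1 - (1 - 1/500)^{500}$. The snippet below (a sketch, not from the notes) evaluates it:

```python
# Probability of no transmission in any of the 500 independent acts,
# each with transmission probability 1/500.
p_no_transmission = (1 - 1/500) ** 500

# Probability of at least one transmission (infection under this model).
p_infection = 1 - p_no_transmission
print(p_infection)   # about 0.632, close to 1 - 1/e, not the article's "100%"
```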

Repeated Independent Trials

Example 4 is a special case of a common setup in which the overall probability experiment consists of a sequence of identical, independent subexperiments. (In Example 4, the subexperiments are whether the virus is transmitted in a single act of intercourse.)

Example 5: On her way to work, a commuter encounters four traffic signals. The distances between the four signals are great enough that the probability of getting a green light at any intersection is independent of what happened at any prior intersection. If each light is green for 40 seconds of every minute, what is the probability that the driver has to stop at least three times?
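Since each light is red for 20 of every 60 seconds, the probability of stopping at any one light is 1/3, and the number of stops in four independent trials can be tallied directly. A sketch (not part of the original notes):

```python
from fractions import Fraction
from math import comb

p_stop = Fraction(1, 3)   # each light is red 20 of every 60 seconds
n = 4                     # four independent signals

# P(stop at least 3 times) = P(exactly 3 stops) + P(exactly 4 stops)
p_at_least_3 = sum(
    comb(n, k) * p_stop**k * (1 - p_stop)**(n - k) for k in (3, 4)
)
print(p_at_least_3)   # 1/9
```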

Repeated independent trials problems sometimes involve experiments consisting of a countably infinite number of subexperiments. Solving these problems often requires using the formula for the sum of a geometric series:

$\sum_{k=0}^{\infty} a r^k = \frac{a}{1-r}$ if $|r| < 1$.
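As a quick numeric sanity check (not part of the original notes), the partial sums approach $a/(1-r)$:

```python
# Partial sums of a geometric series with a = 1 and r = 1/2 approach a/(1 - r) = 2.
a, r = 1.0, 0.5
partial = sum(a * r**k for k in range(50))
print(partial, a / (1 - r))   # both print (essentially) 2.0
```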

Example 6: Independent trials, each consisting of rolling a pair of fair dice, are performed. What is the probability that an outcome of 5 appears before an outcome of 7, where the outcome of a roll is the sum of the two dice?
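One route (sketched below, not part of the original notes) is to sum, over $n$, the probability that the first $n-1$ rolls are neither 5 nor 7 and the $n$th roll is 5, which is a geometric series:

```python
from fractions import Fraction

p5 = Fraction(4, 36)    # ways to roll a sum of 5: (1,4), (2,3), (3,2), (4,1)
p7 = Fraction(6, 36)    # ways to roll a sum of 7: six of them
p_neither = 1 - p5 - p7

# P(5 before 7) = sum over n >= 1 of p_neither**(n-1) * p5
#               = p5 / (1 - p_neither)   by the geometric series formula
p_5_before_7 = p5 / (1 - p_neither)
print(p_5_before_7)     # 2/5
```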

II. $P(\cdot \mid F)$ is a probability

The conditional probability $P(\cdot \mid F)$ is a probability function on the events in the sample space $S$ and satisfies the usual axioms of probability:

(a) $0 \le P(E \mid F) \le 1$

(b) $P(S \mid F) = 1$

(c) If $E_1, E_2, \ldots$ are mutually exclusive events, then $P\left(\bigcup_{i=1}^{\infty} E_i \,\middle|\, F\right) = \sum_{i=1}^{\infty} P(E_i \mid F)$

Thus, all the formulas we have derived for manipulating probabilities in Chapter 2 apply to conditional probabilities.

Example 7: The following is a simple genetic model. Assume that genes in an organism occur in pairs and that each member of the pair can be either of the types a or A. The possible genotypes of an organism are then AA, Aa and aa (Aa and aA are equivalent). When two organisms mate, each independently contributes one of its two genes; either one of the pair is transmitted with probability .5.

A female chimp has given birth. It is not certain, however, which of two male chimps is the father. Before any genetic analysis has been performed, it is felt that the probability that male number 1 is the father is p, and the probability that male number 2 is the father is 1-p. DNA obtained from the mother, male number 1, and male number 2 indicates that at one specific location of the genome the mother has the gene pair AA, male number 1 has the gene pair aa, and male number 2 has the gene pair Aa. If the baby chimp turns out to have the gene pair Aa, what is the probability that male number 1 is the father?
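Since the mother is AA, the baby receives A from her; male 1 (aa) would pass a for sure, while male 2 (Aa) passes a with probability 1/2. Bayes' rule then gives the updated probability, sketched below (the function name is ours, not from the notes):

```python
from fractions import Fraction

def prob_male1_given_Aa(p):
    """Bayes' rule for P(male 1 is the father | baby has gene pair Aa).

    P(baby Aa | male 1 is father) = 1 since male 1 is aa;
    P(baby Aa | male 2 is father) = 1/2 since male 2 is Aa.
    """
    return (p * 1) / (p * 1 + (1 - p) * Fraction(1, 2))

# In general the answer simplifies to 2p/(1+p), which exceeds the prior p.
print(prob_male1_given_Aa(Fraction(1, 2)))   # 2/3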

Conditional independence: An important concept in probability theory is that of the conditional independence of events. We say that events $E_1$ and $E_2$ are conditionally independent given F if, given that F occurs, the conditional probability that $E_1$ occurs is unchanged by information as to whether or not $E_2$ occurs.

More formally, $E_1$ and $E_2$ are said to be conditionally independent given $F$ if

$P(E_1 \mid E_2 \cap F) = P(E_1 \mid F)$

or, equivalently,

$P(E_1 \cap E_2 \mid F) = P(E_1 \mid F)\,P(E_2 \mid F)$.

Example 8: An insurance company believes that people can be divided into two classes: those who are accident-prone and those who are not. Its statistics show that an accident-prone person will have an accident at some time within a fixed 1-year period with probability .4, whereas this probability decreases to .2 for a non-accident-prone person. Thirty percent of the population is accident-prone. Consider a two-year period. Assume that the event that a person has an accident in the first year is conditionally independent of the event that a person has an accident in the second year given whether or not the person is accident-prone. What is the conditional probability that a randomly selected person will have an accident in the second year given that the person had an accident in the first year?
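Conditioning on the person's type and using the assumed conditional independence of the two years gives the answer directly; a sketch (variable names are ours, not from the notes):

```python
p_prone = 0.3          # probability a randomly selected person is accident-prone
p_acc_prone = 0.4      # P(accident in a given year | accident-prone)
p_acc_not = 0.2        # P(accident in a given year | not accident-prone)

# Condition on the person's type; the two years are conditionally independent given the type.
p_A1 = p_prone * p_acc_prone + (1 - p_prone) * p_acc_not
p_A1_and_A2 = p_prone * p_acc_prone**2 + (1 - p_prone) * p_acc_not**2

# P(accident in year 2 | accident in year 1) = P(A1 and A2) / P(A1)
print(p_A1_and_A2 / p_A1)   # 0.076 / 0.26, about 0.292
```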