CHAPTER 7, Sections 7.1 & 7.2 Revised Jan 27, 2011

Inference for the Mean of a Population when sigma is unknown, Sec 7.1

Previously we made the assumption that we knew the population standard deviation, σ. We then developed a confidence interval and used tests of significance to evaluate evidence for or against a hypothesis, all with a known σ.

In many situations, σ is unknown. In this section, we will continue doing inference for the population mean, but we will use the sample standard deviation, s, as a substitute for the unknown population standard deviation. The procedures are called t procedures.

Confidence Interval for a Mean

First, Assumptions for Inference about the population mean:

  • Our data are a simple random sample (SRS) of size n taken from a normally distributed population with mean µ and standard deviation σ, both of which are unknown.
  • Unless a small sample is used, the assumption that the data come from an SRS is more important than the assumption that the population distribution is normal.

Because we do not know the population sigma, σ, we make two changes in our procedure:

  1. The sample standard deviation, s, is used in place of the unknown σ to estimate the standard deviation of the sample mean. The result is called the standard error of the mean, SEM:

SEM = s/√n

where s is the sample standard deviation and n is the sample size.

  2. We calculate a different test statistic, t instead of z, and our P-value comes from the t distribution instead of the Normal distribution. (A short computational sketch of the SEM follows this list.)
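As an optional aside (Python and scipy/numpy are not part of the SPSS workflow used in these notes), the sketch below shows the SEM calculation on the vitamin C data that appears later in this section; it is only a cross-check of the formula above.

import math

data = [26, 31, 23, 22, 11, 22, 14, 31]   # the vitamin C sample used later in this section
n = len(data)
xbar = sum(data) / n
# the sample standard deviation s uses n - 1 in the denominator
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
sem = s / math.sqrt(n)                    # SEM = s / sqrt(n)
print(n, xbar, round(s, 3), round(sem, 3))   # 8 22.5 7.191 2.542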

The t-distributions:

  • The t-distribution is used when we do not know the population sigma, σ. The t-distributions have density curves similar in shape to the standard normal curve, but with more spread.
  • The t-distributions have more probability in the tails and less in the center when compared with the standard normal distribution. This is because substituting the estimate s for the unknown value of σ introduces more uncertainty.
  • For small sample sizes, s tends to underestimate σ. As the sample size increases, the t density curve approaches the N(0,1) curve, because s estimates σ more accurately as the sample size grows.

The t Distributions

Suppose that an SRS of size n is drawn from a population, but the values of µ and σ are unknown. Then the one-sample t statistic

t = (Xbar - µ)/(s/√n)

has the t distribution with n-1 degrees of freedom. There is a separate t distribution for every sample size. For any given sample size, the degrees of freedom = n-1. Each line of the t table is for one specific value of the degrees of freedom.

For any line of the t table, there are columns which show the value of t that corresponds to certain specific probabilities in the right tail of the t distribution.
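For readers who want to check t-table entries without SPSS, here is a minimal Python sketch (scipy is an assumption, not course software) that reproduces one line of a typical t table and shows how the entries relate to right-tail areas.

from scipy import stats

df = 7                                     # one line of the t table: df = n - 1 = 7
right_tail_areas = [0.10, 0.05, 0.025, 0.02, 0.01, 0.005]
# the t value whose right-tail area equals each probability above
t_values = [stats.t.ppf(1 - p, df) for p in right_tail_areas]
print([round(t, 3) for t in t_values])     # for example, 2.365 is the .025 entry for df = 7
# as df grows, these entries approach the corresponding N(0,1) values:
print(round(stats.norm.ppf(1 - 0.025), 3))   # 1.96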

The One-Sample t Confidence Interval

Suppose that an SRS of size n is drawn from a normal population having unknown mean µ and unknown standard deviation σ. A level C confidence interval for µ is

Xbar ± t*(s/√n)

where t* is the value for the t(n-1) density curve with area C between –t* and t*. Xbar is the point estimate, and t*(s/√n) is the margin of error.

This interval is exact when the population distribution is normal and is approximately correct for large n in other cases.
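Here is a minimal Python/scipy sketch of this interval (an optional cross-check, not the course's SPSS method; the function name is just illustrative). Applied to Example 1 below (n = 16, Xbar = 77, s = 3, 90% confidence), it gives roughly 77 ± 1.31.

from math import sqrt
from scipy import stats

def t_confidence_interval(xbar, s, n, level=0.95):
    # level-C one-sample t interval: Xbar +/- t* (s/sqrt(n)), df = n - 1
    t_star = stats.t.ppf((1 + level) / 2, n - 1)   # area C between -t* and t*
    margin = t_star * s / sqrt(n)
    return xbar - margin, xbar + margin

# Example 1 below: Bob's golf scores, n = 16, Xbar = 77, s = 3, 90% confidence
print(t_confidence_interval(77, 3, 16, level=0.90))   # about (75.7, 78.3)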

Examples:

  1. Suppose X, Bob’s golf scores, are approximately normally distributed with unknown mean and standard deviation. An SRS of n = 16 scores is selected, and a sample mean of Xbar = 77 and a sample standard deviation of s = 3 are calculated. Calculate a 90% confidence interval for µ.
  2. (Example 7.1 in Textbook) Corn soy blend, CSB, is a highly nutritious, low-cost fortified food that can be incorporated into different food preparations worldwide. As part of a study to evaluate appropriate vitamin C levels in this commodity, measurements were taken on samples of CSB produced in a factory. The following data are the amounts of vitamin C, measured in milligrams per 100 grams of blend, for a random sample of size 8 from a production run. Compute a 95% confidence interval for µ, where µ is the population mean vitamin C content of the CSB.

26 31 23 22 11 22 14 31

By hand:

Xbar = 22.5   s = 7.191   n = 8   df = 7   t* = 2.365

22.5 ± 2.365 (7.191/√8) = 22.5 ± 6.01, so the 95% confidence interval is (16.49, 28.51).

Using SPSS:
analyze > descriptive statistics > explore
Move “vitaminC” to “dependent list”.
Click “statistics” and select “descriptives” and change/keep a 95% confidence interval.
Click “continue” followed by “OK”.

Descriptives

Statistic / Std. Error
Vitamin C / Mean / 22.50 / 2.542
95% Confidence Interval for Mean / Lower Bound / 16.49
Upper Bound / 28.51
5% Trimmed Mean / 22.67
Median / 22.50
Variance / 51.714
Std. Deviation / 7.191
Minimum / 11
Maximum / 31
Range / 20
Interquartile Range / 14
Skewness / -.443 / .752
Kurtosis / -.631 / 1.481
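As an optional cross-check of the hand calculation and the SPSS output above (scipy/numpy are assumptions, not course software), the same 95% interval can be computed from the raw data:

import numpy as np
from scipy import stats

vitamin_c = np.array([26, 31, 23, 22, 11, 22, 14, 31])
n = len(vitamin_c)
xbar = vitamin_c.mean()
s = vitamin_c.std(ddof=1)             # sample standard deviation (n - 1 denominator)
t_star = stats.t.ppf(0.975, n - 1)    # 95% confidence, df = 7
margin = t_star * s / np.sqrt(n)
print(round(xbar - margin, 2), round(xbar + margin, 2))   # about 16.49 and 28.51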

The One-Sample t test:

  1. State the Null and Alternative hypotheses.
  2. Find the test statistic:
    Suppose that an SRS of size n is drawn from a normal population having unknown mean µ and unknown σ. To test the hypothesis Ho: µ = µ0 based on an SRS of size n, compute the one-sample t statistic

    t = (Xbar - µ0)/(s/√n)
  3. Calculate the p-value.
    In terms of a random variable T having the t(n-1) distribution, the P-value for a test of Ho: µ = µ0 against
    Ha: µ > µ0 is P(T ≥ t), one side right
    Ha: µ < µ0 is P(T ≤ t), one side left
    Ha: µ ≠ µ0 is 2P(T ≥ |t|), two side
    These P-values are exact if the population distribution is normal and are approximately correct for large n in other cases.
  4. State the conclusions in terms of the problem. Use the given α or if the value is not specified use α = 0.05 as the default. Then compare the P-value to the α level.

If P-value ≤ α, then reject Ho. We have sufficient evidence to reject Ho.

If P-value > α, then we fail to reject Ho. We lack sufficient evidence to reject Ho.

Either way, we should use the words of the story in the conclusion.
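Outside of SPSS, the same steps can be sketched in Python (scipy is an assumption; the function below works from summary statistics and is only an illustration). The usage line applies it to Example 1 in the list that follows.

from math import sqrt
from scipy import stats

def one_sample_t_test(xbar, s, n, mu0, alternative="two-sided"):
    # one-sample t statistic and P-value from summary statistics
    t = (xbar - mu0) / (s / sqrt(n))
    df = n - 1
    if alternative == "greater":        # Ha: mu > mu0, one side right
        p = stats.t.sf(t, df)
    elif alternative == "less":         # Ha: mu < mu0, one side left
        p = stats.t.cdf(t, df)
    else:                               # Ha: mu != mu0, two side
        p = 2 * stats.t.sf(abs(t), df)
    return t, p

# Example 1 below (maze times): mu0 = 18, Xbar = 16, s = 3, n = 30, Ha: mu < 18
t, p = one_sample_t_test(16, 3, 30, 18, alternative="less")
print(round(t, 3), round(p, 4))   # t is about -3.65; P is far below alpha = 0.1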

Examples:

  1. Experiments on learning in animals sometimes measure how long it takes mice to find their way through a maze. Suppose the population mean time is 18 seconds for one particular maze. A researcher thinks that loud noise will decrease the time it takes a mouse to complete the maze. She measures how long each of 30 mice takes to complete the maze with a loud noise stimulus. She gets a sample mean Xbar = 16 seconds and a sample standard deviation s = 3 seconds. Do a one-sided hypothesis test of the researcher’s assertion with α = 0.1.
  2. (Example 7.2 in Textbook) Suppose that we know that sufficient vitamin C was added to the CSB mixture to produce a mean vitamin C content in the final product of 40 mg/100 g. It is suspected that some of the vitamin C is lost in the production process. To test this hypothesis we can conduct a one-sided test to determine if there is sufficient evidence to conclude that vitamin C is lost. A sample of 8 batches of CSB was tested for vitamin C. The sample mean Xbar = 22.50 and the sample standard deviation s = 7.191. Use the α = 0.05 level.

By hand: Ho: µ = 40   Ha: µ < 40   one side left test

t = (22.50 - 40)/(7.191/√8) = -6.883 with df = 7. From the t table, the one-sided P-value is less than 0.0005, so at α = 0.05 we reject Ho and conclude that vitamin C is lost in the production process.

Using SPSS:

analyze > compare means > One sample T test
Move “vitaminc” into the “test variable box” and type in 40 for the test value.
To change the confidence interval, click “options” and change the confidence interval from 95% to whatever level is desired. I did not do this, as I will keep the 95% default.

Click “continue”. Lastly, click “OK”.

One-Sample Statistics

N / Mean / Std. Deviation / Std. Error Mean
Vitamin C / 8 / 22.50 / 7.191 / 2.542

One-Sample Test

Test Value = 40
t / df / Sig. (2-tailed) / Mean Difference / 95% Confidence Interval of the Difference
Lower / Upper
vitamin C / -6.883 / 7 / .000 / -17.500 / -23.51 / -11.49
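As an optional cross-check of the SPSS output above (scipy/numpy are assumptions, not course software), the same test can be run on the raw vitamin C data:

import numpy as np
from scipy import stats

vitamin_c = np.array([26, 31, 23, 22, 11, 22, 14, 31])
t, p_two_sided = stats.ttest_1samp(vitamin_c, popmean=40)   # SPSS "Test Value = 40"
p_one_sided = p_two_sided / 2   # Ha: mu < 40, and t is negative, so halving is valid
print(round(t, 3), p_one_sided)   # t = -6.883 with df = 7; the one-sided P-value is essentially 0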

Matched Pairs Design:

A common design to compare two treatments is the matched pairs design. In one type of matched pairs design, two subjects who are similar in important respects are matched as a pair, and each treatment is given to one of the subjects in the pair.

A second common design does not use matched subjects. Instead, each subject receives both treatments in random order. Ex: each subject does a left-hand test and a right-hand test.

Another common design uses before-treatment and after-treatment observations on each of the subjects in the experiment. Each subject thereby provides data on the difference, or improvement, or reduction associated with the treatment.

Assumptions for the matched pair t test:

  1. The data values are paired and we analyze the line-by-line differences.
  2. The line-by-line differences are independent of each other.
  3. The line-by-line differences are normally distributed with unknown population mean and unknown population standard deviation.

Paired t Procedures:

To compare the responses to the two treatments in a matched pairs design, determine the difference between the two treatments for each subject and analyze the observed differences.

Example: (Problem 7.31 is done by hand and using SPSS):

The researchers studying vitamin C in CSB in example 7.1 were also interested in a similar commodity called wheat soy blend (WSB). Both these commodities are mixed with other ingredients and cooked. Loss of vitamin C as a result of cooking was a concern of the researchers. One preparation used in Haiti called gruel can be made from WSB, salt, sugar, milk, banana, and other optional items to improve the taste. Five samples of gruel prepared in Haitian households were obtained. The vitamin C content of these 5 samples was measured before and after cooking.

Set up appropriate hypotheses and carry out a significance test for these data.

Hypotheses: Ho: µ of differences (before – after) = 0, where µ = pop mean of differences

Ha: µ of differences (before – after) > 0

Here are the data:

Sample / 1 / 2 / 3 / 4 / 5 / Xbar / s
Before / 73 / 79 / 86 / 88 / 78 / 80.8 / 6.140
After / 70 / 77 / 79 / 86 / 67 / 75.8 / 7.530
Difference / 3 / 2 / 7 / 2 / 11 / 5.0 / 3.937

BY HAND:

Xbar of differences = 5.0   Std Dev of differences = s = 3.937

Test statistic: t = 5.0/(3.937/√5) = 2.840 with df = 4. From the t table, the P-value is between .02 and .025.

Exact P-value = .0234 (from computer).

Conclusion: if α = .05: Reject Ho. There is sufficient evidence to conclude that vitamin C is lower after cooking.

Using SPSS:
> Analyze > Compare Means > Paired – Sample T test.
Move “before and after” to “paired variable box” (whichever variable is listed first will come first in the subtraction)
Click “OK”

Paired Samples Statistics
Mean / N / Std. Deviation / Std. Error Mean
Pair 1 / Before / 80.8000 / 5 / 6.14003 / 2.74591
After / 75.8000 / 5 / 7.52994 / 3.36749
Paired Samples Test
Paired Differences / t / df / Sig. (2-tailed)
Mean / Std. Deviation / Std. Error Mean / 95% Confidence Interval of the Difference
Lower / Upper
Pair 1 / Before - After / 5.00000 / 3.93700 / 1.76068 / .11156 / 9.88844 / 2.840 / 4 / .047

(P-values for t tests are given by SPSS as Sig. (2-tailed) and must be divided by 2 for one-sided tests.) One-sided P-value = .047/2 ≈ .0235.
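The same paired analysis can be cross-checked outside SPSS with a short Python/scipy sketch (an optional aside; scipy/numpy are assumptions, not course software):

import numpy as np
from scipy import stats

before = np.array([73, 79, 86, 88, 78])
after = np.array([70, 77, 79, 86, 67])

t, p_two_sided = stats.ttest_rel(before, after)   # paired t test on the before - after differences
print(round(t, 3), round(p_two_sided / 2, 4))     # t = 2.840; one-sided P is about .0235

# equivalent by-hand view: the differences are analyzed as a one-sample problem
d = before - after
print(d.mean(), round(d.std(ddof=1), 3))          # 5.0 and 3.937, as in the table above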

A confidence interval or statistical test is called robust if the confidence level or P-value does not change very much when the assumptions of the procedure are violated. The t procedures are robust against non-normality of the population when there are no outliers, especially when the distribution is roughly symmetric and unimodal.

Robustness and use of the One-Sample t and Matched Pairs t procedures:

  • Unless a small sample is used, the assumption that the data come from an SRS is more important than the assumption that the population distribution is normal.
  • n<15: Use t procedures only if the data are close to normal with no outliers.
  • n is 15 - 39: The t procedure can be used except in the presence of outliers or strong skewness.
  • n≥40: The t procedure can be used even for clearly skewed distributions.

Comparing Two Means:

Two-Sample Problems: Sec 7.2

A situation in which two populations or two treatments based on separate samples are compared.

A two-sample problem can arise:

  • from a randomized comparative experiment which randomly divides the units into two groups and imposes a different treatment on each group.
  • from a comparison of two random samples selected separately from two different populations.

Note: Do not confuse two-sample designs with matched pair designs! In the two-sample t problems, each group is composed of separate subjects, ie, no subject is in both groups, and each subject furnishes only one piece of data, ie, no subject is tested twice.

Assumptions for Comparing Two Means:

  • Two independent simple random samples, from two distinct populations, are compared. The same variable is measured on both samples. The sample observations are independent, i.e., neither sample has an influence on the other.
  • Both populations are normally distributed.
  • The means µ1 and µ2 and the standard deviations σ1 and σ2 of both populations are unknown.

Typically we want to compare the two population means by giving a confidence interval for their difference, µ1 – µ2, or by testing the hypothesis of no difference, Ho: µ1 = µ2.

The Two-Sample t Confidence Interval:

Suppose that an SRS of size n1 is drawn from a normal population with unknown mean µ1, and that an independent SRS of size n2 is drawn from another normal population with unknown mean µ2. The level C confidence interval for the difference between the population means, µ1 – µ2, is given by

(Xbar1 – Xbar2) ± t*√(s1²/n1 + s2²/n2)

The value of t* is determined for the confidence level, C, desired. The confidence interval formula is still composed of the same two components, (1) a point estimate for the difference between population means, and (2) a margin of error which expresses the uncertainty involved.

Note that the two sample sizes do not have to be equal.

Here, t* is the value for the t(k) density curve with area C between –t* and t*. The value of the degrees of freedom, k, is approximated by software, or, if we do the calculations by hand, we use the df based on the smaller of the two sample sizes (the smaller of n1 – 1 and n2 – 1).
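Here is a minimal Python sketch of this interval using the conservative by-hand df, the smaller of n1 – 1 and n2 – 1 (scipy is an assumption and the function name is only illustrative). Software such as SPSS instead uses the approximated df described above, which is at least this large, so its intervals are slightly narrower.

from math import sqrt
from scipy import stats

def two_sample_t_interval(xbar1, s1, n1, xbar2, s2, n2, level=0.95):
    # level-C CI for mu1 - mu2 with the conservative df = min(n1, n2) - 1
    se = sqrt(s1**2 / n1 + s2**2 / n2)       # standard error of Xbar1 - Xbar2
    df = min(n1, n2) - 1
    t_star = stats.t.ppf((1 + level) / 2, df)
    diff = xbar1 - xbar2
    margin = t_star * se
    return diff - margin, diff + margin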

Two-Sample t Procedure For Tests Of Significance:

  1. Write the hypotheses in terms of the difference between the means.

Ho: µ1 – µ2 = 0 vs. Ha: µ1 – µ2 > 0, one side right, or

Ho: µ1 – µ2 = 0 vs. Ha: µ1 – µ2 < 0, one side left, or

Ho: µ1 – µ2 = 0 vs. Ha: µ1 – µ2 ≠ 0, two side

  2. Calculate the test statistic. An SRS of size n1 is drawn from a normal population with unknown mean µ1, and an SRS of size n2 is drawn from another normal population with unknown mean µ2. Sigma 1 and sigma 2 are also unknown.
    To test the hypothesis Ho: µ1 = µ2, the two-sample t statistic is:

    t = (Xbar1 – Xbar2) / √(s1²/n1 + s2²/n2)

    and we use P-values or critical values for the t(k) distribution, where the degrees of freedom k is either approximated by software or is based on the df for the smaller of the two sample sizes. Note: SPSS calculates a value for degrees of freedom using a complicated formula which takes the sample sizes and sample standard deviations into account. This value does not have to be an integer. The df value can be as small as the df for the smaller sample, or it can be as large as the sum of the two dfs. The closer the two sample sizes and the two sample standard deviations are to each other, the higher the calculated df value will be.
  3. Calculate the P-value. (A computational sketch of steps 2 and 3 follows this list.)
    Note: Unless we use software, we can only get a range for the P-value. We use the following formulas:
    P(T ≥ t) for a one side right test
    P(T ≤ t) for a one side left test
    2P(T ≥ |t|) for a two side test

Note: If doing the calculations by hand, use df = the smaller of n1 – 1 and n2 – 1. The resulting procedure is conservative.

  4. State the conclusions in terms of the problem. Choose a significance level such as α = 0.05, then compare the P-value to the α level.
    If P-value ≤ α, then reject Ho; we have sufficient evidence to reject Ho.
    If P-value > α, then we fail to reject Ho; we lack sufficient evidence to reject Ho.
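Here is the computational sketch referred to in step 3: a minimal Python/scipy version of the two-sample t statistic and P-value using the conservative df (scipy is an assumption; by hand, the t table is used instead and only a range for the P-value is obtained).

from math import sqrt
from scipy import stats

def two_sample_t_test(xbar1, s1, n1, xbar2, s2, n2, alternative="two-sided"):
    # two-sample t statistic with the conservative df = min(n1, n2) - 1
    t = (xbar1 - xbar2) / sqrt(s1**2 / n1 + s2**2 / n2)
    df = min(n1, n2) - 1
    if alternative == "greater":       # Ha: mu1 > mu2, one side right
        p = stats.t.sf(t, df)
    elif alternative == "less":        # Ha: mu1 < mu2, one side left
        p = stats.t.cdf(t, df)
    else:                              # Ha: mu1 != mu2, two side
        p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

SPSS reports the software-approximated df described in step 2 instead, which lies between this conservative value and n1 + n2 - 2.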

Robustness and use of the Two-Sample t Procedures:

The two-sample t procedures are more robust than the one-sample t methods, particularly when the distributions are not symmetric. They are robust in the following circumstances:

  • If the two samples are of equal size and the two populations that the samples come from have similar distributions, then the t distribution is accurate for a variety of distributions even when the sample sizes are as small as n1 = n2 = 5.
  • When the two population distributions are different, larger samples are needed.
  • n1 + n2 < 15: Use two-sample t procedures only if the data are close to normal. If the data are clearly non-normal or if outliers are present, do not use t.
  • n1 + n2 is between 15 and 39: The t procedures can be used except in the presence of outliers or strong skewness.
  • n1 + n2 ≥ 40: The t procedures can be used even for clearly skewed distributions.

Examples:

  1. The U.S. Department of Agriculture (USDA) uses many types of surveys to obtain important economic estimates. In one pilot study they estimated wheat prices in July and in September using independent samples. Here is a brief summary from the report:

Month / n / Xbar / SEM
July / 90 / 3.50 / 0.023
September / 45 / 3.61 / 0.029
  a. Note that the standard error of the sample mean (SEM) was reported instead of the sample standard deviations. We find the standard deviation for each of the samples as follows: s = SEM × √n, so s = 0.023 × √90 ≈ 0.218 for July and s = 0.029 × √45 ≈ 0.195 for September.

  b. Use a significance test to examine whether or not the price of wheat was the same in July and September. Be sure to give details and carefully state your conclusion.

Ho: µ july = µ sept

Ha: µ july ≠ µ sept two side test because direction not indicated

  c. Give a 95% confidence interval for the increase in the population mean between July and September. (A computational sketch for parts a, b, and c is given below.)
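As an optional cross-check for parts a, b, and c (scipy is an assumption; by hand you would use the t table with the conservative df = 44), the reported SEMs can be converted back to standard deviations and used directly, since the standard error of the difference is √(SEM1² + SEM2²):

from math import sqrt
from scipy import stats

n_july, xbar_july, sem_july = 90, 3.50, 0.023
n_sept, xbar_sept, sem_sept = 45, 3.61, 0.029

# part a: recover each sample standard deviation from s = SEM * sqrt(n)
s_july = sem_july * sqrt(n_july)           # about 0.218
s_sept = sem_sept * sqrt(n_sept)           # about 0.195

# part b: two-sample t statistic for Ho: mu_july = mu_sept vs. a two-sided Ha
se_diff = sqrt(sem_july**2 + sem_sept**2)
t = (xbar_sept - xbar_july) / se_diff
df = min(n_july, n_sept) - 1               # conservative df = 44
p_two_sided = 2 * stats.t.sf(abs(t), df)
print(round(t, 2), round(p_two_sided, 4))  # t is about 2.97; P is well below 0.05

# part c: 95% confidence interval for the increase (September minus July)
t_star = stats.t.ppf(0.975, df)
margin = t_star * se_diff
print(round(xbar_sept - xbar_july - margin, 3), round(xbar_sept - xbar_july + margin, 3))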
  2. The Survey of Study Habits and Attitudes (SSHA) is a psychological test designed to measure the motivation, study habits, and attitudes toward learning of college students. These factors, along with ability, are important in explaining success in school. Scores on the SSHA range from 0 to 200. A selective private college gives the SSHA to an SRS of both male and female first-year students.

Here are the data for 18 women:

154 109 137 115 152 140 145 178 101

103 126 126 137 165 165 129 200 148

Here are the data for 20 men:

108 140 114 91 180 115 126 92 169 146

109 132 75 88 113 151 70 115 187 104

  a. Examine each sample graphically, with special attention to outliers and skewness. Is use of a t procedure acceptable for these data?

STEMPLOT
MEN / STEM / WOMEN
50 / 7 /
8 / 8 /
21 / 9 /
984 / 10 / 139
5543 / 11 / 5
6 / 12 / 669
2 / 13 / 77
60 / 14 / 058
1 / 15 / 24
9 / 16 / 55
/ 17 / 8
70 / 18 /
/ 19 /
/ 20 / 0

  b. Most studies have found that the mean SSHA score for men is lower than the mean score in a comparable group of women. Test this supposition here. That is, state the hypotheses, carry out the test and obtain a P-value, and give your conclusions. (A computational sketch follows the SPSS steps below.)

Using SPSS:
Note: The data need to be typed in using two columns. In the first column, put all of the scores. In the second column, define the grouping variable as gender and enter “women” next to the women’s scores and “men” next to the men’s scores.
Analyze > Compare means > Independent Sample T test
Move “score” to “Test Variable” box and “gender” to “grouping variable” box.
Click “define groups” and enter “women” for group 1 and “men” for group 2.
Click “continue” followed by “OK”.
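As an optional cross-check of the SPSS steps above (scipy/numpy are assumptions, not course software), the same comparison can be run on the raw scores; equal_var=False gives the unpooled two-sample t used in this section:

import numpy as np
from scipy import stats

women = np.array([154, 109, 137, 115, 152, 140, 145, 178, 101,
                  103, 126, 126, 137, 165, 165, 129, 200, 148])
men = np.array([108, 140, 114, 91, 180, 115, 126, 92, 169, 146,
                109, 132, 75, 88, 113, 151, 70, 115, 187, 104])

# Ho: mu_women = mu_men   vs.   Ha: mu_women > mu_men (one side right)
t, p_two_sided = stats.ttest_ind(women, men, equal_var=False)   # unpooled two-sample t
print(round(women.mean(), 2), round(men.mean(), 2))
# t is positive (in the direction of Ha), so halving the two-sided P-value is valid
print(round(t, 3), p_two_sided / 2)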