Stat 6305 — Unit 2: Partial Solutions

2.1.1. Enter the data into three appropriately labeled columns of a Minitab worksheet (unstacked format).
Print the data.

MTB > print c1 c2 c3

Data Display

 Row  Beef  Meat  Poultry
   1   186   173      129
   2   181   191      132
   3   176   182      102
   4   149   190      106
   5   184   172       94
   6   190   147      102
   7   158   146       87
   8   139   139       99
   9   175   175      170
  10   148   136      113
  11   152   179      135
  12   111   153      142
  13   141   107       86
  14   153   195      143
  15   190   135      152
  16   157   140      146
  17   131   138      144
  18   149
  19   135
  20   132

2.1.2. Just from looking at the number of digits in the observations, what do you suspect may be true of the Poultry group?

Since there are several two-digit numbers in the Poultry group and only three-digit numbers in the Beef and Meat groups, it appears that the mean caloric value of the Poultry hot dogs is lower than that of the Beef or Meat hot dogs.

2.2.1. Use the menus to make high-resolution ("professional graphics") dotplots on the same scale. Discuss the differences between this graphic and the collection of three dotplots shown above [in the questions].

The differences between this compound dotplot and the one in the text are that the dots in this plot are larger and easier to see, the scale is customized to the range of the data, and the font looks more “professional.” (The professional plot has a lot of wasted space, so it can accommodate more groups if necessary, but it takes more computer memory. When you cut and paste character graphs, include the line before and after the plot to avoid "breaking" the graph, and make sure the transferred plot has enough horizontal space and appears in a Courier (monospaced) typeface.)
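
As an aside for those who prefer R, here is a minimal sketch of comparable dotplots on a common scale. The Beef, Meat, and Poultry vectors are the same data (they also appear in the R permutation-test code near the end of this handout), and the later R sketches in these solutions reuse them.

# Hot dog calorie data (same values as in the Minitab worksheet)
Beef = c(186, 181, 176, 149, 184, 190, 158, 139, 175, 148,
         152, 111, 141, 153, 190, 157, 131, 149, 135, 132)
Meat = c(173, 191, 182, 190, 172, 147, 146, 139, 175, 136,
         179, 153, 107, 195, 135, 140, 138)
Poultry = c(129, 132, 102, 106, 94, 102, 87, 99, 170, 113,
            135, 142, 86, 143, 152, 146, 144)
# Stacked dotplots of the three groups on one horizontal scale
stripchart(list(Beef=Beef, Meat=Meat, Poultry=Poultry),
           method="stack", pch=16, xlab="Calories")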

2.2.2. Make a compact display of the data, involving numerical or graphical descriptive methods, suitable for presentation in a report. The purpose is to display the important features of the data for a non-statistical audience.

To communicate the crucial aspects of these data to a non-statistical audience, one might present the following:

Descriptive Statistics: Beef, Meat, Poultry

           Total
Variable   Count    Mean  Minimum  Maximum
Beef          20  156.85   111.00   190.00
Meat          17  158.71   107.00   195.00
Poultry       17  122.47    86.00   170.00

One might also include the standard deviation of each group. Dotplots contain more information than boxplots, but for some purposes boxplots might be better; see below. (But a really non-statistical audience probably won't know what standard deviations and quartiles are.) In Minitab 14 and later you can use menus (which generate subcommands) to tailor exactly which descriptive statistics the 'describe' command prints.

Clearly, there is more than one right answer to this question. The point is to think about what descriptive methods you are presenting, to whom, and why.
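
As an additional illustration (not required), here is a rough R equivalent of the descriptive table above, using the Beef, Meat, and Poultry vectors from the sketch in 2.2.1:

# Count, mean, SD, minimum, and maximum for each group
sapply(list(Beef=Beef, Meat=Meat, Poultry=Poultry),
       function(x) c(n=length(x), mean=mean(x), sd=sd(x), min=min(x), max=max(x)))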

2.3.1. How would you cut and paste from your browser to enter the hot dog data into a single column? How would you use the set command to enter the subscripts?

To cut and paste from the browser into a single column, select a row of numbers and use Ctrl-C to copy it. Type SET C4 at the MTB > prompt in the Session window, paste the data after the DATA> prompt, and press Enter; repeat until all the data have been entered. Type END when done.

To enter the subscripts, perform the following commands:

MTB > set c5

DATA> 20(1) 17(2) 17(3)

DATA> end
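
The same stacked column and subscripts can be built in R; a minimal sketch, reusing the Beef, Meat, and Poultry vectors from 2.2.1 (the labeled factor Type is used in the later R sketches):

Calories = c(Beef, Meat, Poultry)            # stacked data, like C4
Group = rep(1:3, c(20, 17, 17))              # subscripts, like SET C5: 20(1) 17(2) 17(3)
Type = factor(Group, labels=c("Beef", "Meat", "Poultry"))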

2.3.2. “Minitab boxplots for comparing groups can most conveniently be made using stacked data. Use the menus to learn how to make three boxplots on the same scale for the three groups of hot dog calorie measurements.”

To create the boxplots, select Graph ► Boxplot ► Multiple Y’s, Simple and enter C1, C2, C3 in the Graph variables dialog box.
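
In R, comparable side-by-side boxplots can be drawn directly from the stacked data built in the sketch for 2.3.1:

# Three boxplots on the same scale, one per type of hot dog
boxplot(Calories ~ Type, ylab="Calories")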

2.4.1. Use the Fisher LSD method to interpret the pattern of differences among group means for the hot dog data.

The one-way ANOVA with the Fisher LSD multiple comparisons gives:

One-way ANOVA: Hot Dogs versus Group

Source  DF     SS    MS      F      P
Group    2  14491  7245  12.19  0.000
Error   51  30320   595
Total   53  44811

S = 24.38   R-Sq = 32.34%   R-Sq(adj) = 29.68%

Level   N    Mean  StDev
1      20  156.85  22.64
2      17  158.71  25.24
3      17  122.47  25.48

Pooled StDev = 24.38

(Text plot of the individual 95% CIs for the means, based on the pooled StDev, omitted.)

Fisher 95% Individual Confidence Intervals
All Pairwise Comparisons among Levels of Group
Simultaneous confidence level = 87.93%

Group = 1 subtracted from:

Group   Lower  Center   Upper
2      -14.29    1.86   18.00
3      -50.53  -34.38  -18.23

Group = 2 subtracted from:

Group   Lower  Center   Upper
3      -53.03  -36.24  -19.45

For both 1 - 3 and  - 3, zero is not in the confidence interval and, therefore, there is a significant difference between groups 1 and 3 and groups 2 and 3. However, zero is contained in the confidence interval for 1 -  and, therefore, there is no statistically significant difference between groups 1 and 2. These results are illustrated graphically as:

158.71   156.85   122.47
 Meat     Beef    Poultry
----------------

In an underline diagram, levels must be arranged in ascending (or descending) order of their sample means.

Means are often shown together with the names of the levels.

This is an "unbalanced" design: the groups are of unequal sizes. Thus the values of LSD may differ from one comparison to another:

Here the value of LSD used to compare Meat vs. Poultry will be different from the value used to compare Meat vs. Beef.

LSD12 = t* · sp · (1/n1 + 1/n2)^(1/2) = (2.008)(24.38)(1/20 + 1/17)^(1/2) = 16.15. Compare with [18.00 – (–14.29)] / 2 = 16.15.

LSD23 = (2.008)(24.38)(1/17 + 1/17)^(1/2) = 16.79. Compare with [–19.45 – (–53.03)] / 2 = 16.79.
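
These LSD values can be checked in R (a sketch using the Calories and Type objects from the 2.3.1 sketch); pairwise.t.test with no p-value adjustment is the usual R analogue of Fisher's unprotected comparisons:

fit = lm(Calories ~ Type)
anova(fit)                                   # F = 12.19, P = 0.000, MSE about 595
MSE = anova(fit)["Residuals", "Mean Sq"]     # pooled variance estimate
tcrit = qt(0.975, 51)                        # 2.008
tcrit * sqrt(MSE * (1/20 + 1/17))            # LSD for Beef vs. Meat, about 16.15
tcrit * sqrt(MSE * (1/17 + 1/17))            # LSD for Meat vs. Poultry, about 16.79
# Unadjusted (Fisher-style) pairwise t-tests using the pooled SD
pairwise.t.test(Calories, Type, p.adjust.method = "none")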

2.4.3. Use the Tukey HSD method to interpret the pattern of differences among group means.

The one-way ANOVA results with the Tukey HSD comparisons are as follows:

One-way ANOVA: Hot Dogs versus Group

Source  DF     SS    MS      F      P
Group    2  14491  7245  12.19  0.000
Error   51  30320   595
Total   53  44811

S = 24.38   R-Sq = 32.34%   R-Sq(adj) = 29.68%

Level   N    Mean  StDev
1      20  156.85  22.64
2      17  158.71  25.24
3      17  122.47  25.48

Pooled StDev = 24.38

(Text plot of the individual 95% CIs for the means, based on the pooled StDev, omitted.)

Tukey 95% Simultaneous Confidence Intervals
All Pairwise Comparisons among Levels of Group
Individual confidence level = 98.05%

Group = 1 subtracted from:

Group   Lower  Center   Upper
2      -17.54    1.86   21.25
3      -53.77  -34.38  -14.98

Group = 2 subtracted from:

Group   Lower  Center   Upper
3      -56.40  -36.24  -16.07

The summary results (5% level) are the same for Tukey's HSD as for Fisher's LSD: zero is not in the confidence intervals for μ1 − μ3 and μ2 − μ3, and zero is in the confidence interval for μ1 − μ2. These results are illustrated graphically as:

 Meat     Beef    Poultry
----------------

Strictly speaking, the Tukey procedure is meant only for balanced designs. See the approximation in Ott/Longnecker for slightly unbalanced data.

The harmonic mean sample size is 3 / (1/20 + 1/17 + 1/17) = 17.89. The correct Studentized range value has t = 3 and df = 51, which is not available in the table. Use q ≈ 3.42 by interpolation (or round df down to 40 and use q ≈ 3.44). Thus, W = HSD ≈ 3.42 (595 / 17.89)^(1/2) = 19.7, or HSD ≈ 3.41 (595 / 18)^(1/2) = 19.6, for making all three comparisons.

[In R, qtukey(.95, 3, 51) returns 3.414.]

In Minitab the Tukey CI for the difference between levels 1 and 2 is (–17.54, 21.25), which has length 2·HSD = 38.79, so Minitab is using HSD = 19.4. The CI for 1 vs. 3 is (–53.77, –14.98), which also implies HSD = 19.4. But Minitab has HSD = 20.2 for comparing levels 2 and 3. In contrast to O/L p. 447, where the harmonic mean of all the group sizes is used for each comparison, Minitab seems to be taking the harmonic mean of only the two sample sizes for the two groups being compared. (A simulation could settle which method is more accurate in our particular case.)
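
The Tukey intervals can be checked directly in R with TukeyHSD (a sketch using Calories and Type from the 2.3.1 sketch); like Minitab, it uses the pairwise (Tukey-Kramer) form for unequal sample sizes:

fit = aov(Calories ~ Type)
TukeyHSD(fit, conf.level = 0.95)
# Hand check of the half-width for levels 2 vs. 3:
qtukey(0.95, 3, 51) * sqrt((595/2) * (1/17 + 1/17))    # about 20.2
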
2.5.1. Use menus in Minitab to do an Anderson-Darling test of the null hypothesis that the residuals fit a normal distribution. Say what menu path you used. What is your conclusion?

In addition to the method shown, one approach is to store the residuals in a column of the worksheet. Then use Graph ► Probability Plot to obtain a normal probability plot with the Anderson-Darling test, as shown above the question. The P-value is 0.018, indicating a significant departure from normality (discussed in the notes).
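
In R, the same check can be sketched with ad.test from the nortest package (assumed to be installed); shapiro.test is a base-R alternative:

fit = aov(Calories ~ Type)             # Calories and Type as in the 2.3.1 sketch
nortest::ad.test(residuals(fit))       # Anderson-Darling test of normality of the residuals
shapiro.test(residuals(fit))           # Shapiro-Wilk test, for comparison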

2.5.2. Use menus to do Bartlett's test for homogeneity of variances. Say what menu path you used. What is your conclusion? Why is doing Hartley's Fmax test by hand not an option here?

To test for equality of variances using Bartlett's test, use the following menu path:

Stat ► ANOVA ► Test for Equal Variances. (The 2 Variances item under Basic Statistics handles only two groups.)

The resulting output gives a p-value for Bartlett's test of 0.863, which is greater than 0.05. Thus, the null hypothesis of equal variances cannot be rejected. (Hartley's Fmax test by hand is not an option here because its critical-value tables assume equal group sizes, and these groups have n = 20, 17, and 17.)
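
Both tests have direct R analogues; a minimal sketch (leveneTest is in the car package, which is assumed to be installed, and by default uses the median-centered version of Levene's test discussed below):

bartlett.test(Calories ~ Type)         # Bartlett's test of equal variances
car::leveneTest(Calories ~ Type)       # median-centered (Brown-Forsythe) Levene test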

To find out about Levene's test go to the "?" on the menu bar and then do a search for "Levene". Here is part of the explanation you will retrieve:

"The computational method for Levene's Test is a modification of Levene's procedure [2, 7]. This method considers the distances of the observations from their sample median rather than their sample mean. Using the sample median rather than the sample mean makes the test more robust for smaller samples."

Here "robust" means relatively unlikely to give an incorrect answer if the data are not normally distributed.

You can get explanations of most Minitab procedures in this way. Some of them (for example this one) are written in a sufficiently user-friendly way as to be helpful, some are pretty obscure.

2.5.3. “Use menus (stacked one-way ANOVA) to find the "fits" for this ANOVA. How are the "fits" found for each group in this case? Make a scatterplot of residuals vs. fits (GRAPH ► Plot, Y = residuals, X = fits) and interpret the result.”

The “fits” are the mean values for each group:

Beef:     156.850
Meat:     158.706
Poultry:  122.471

A scatterplot of residuals vs. fits follows:

If the residuals are a random sample from a normal population, their values should not depend on the mean of the group from which they come. From the plot, it appears that the residual values are unrelated to the group means.

Because the fits are distinct for each group (type of hot dog), the effect is something like a vertical dotplot of residuals broken out by group.
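
A quick R sketch of the same plot (the fits are just the three group means, using the objects from the 2.3.1 sketch):

fit = aov(Calories ~ Type)
tapply(Calories, Type, mean)                 # the three fitted values
plot(fitted(fit), residuals(fit),
     xlab="Fitted value (group mean)", ylab="Residual")
abline(h=0, lty=2)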

2.6.1. An approximate nonparametric test results from ranking the data and performing a standard ANOVA on the ranks (ignoring that the ranks are not normal). If we rank the data in 'Calories', then the smallest Calorie value will be assigned rank 1 and the largest will be assigned rank 54. Below are commands to carry out the procedure, which gives results similar to those already seen. Notice that the resulting confidence intervals are on the rank scale. Approximately what are the Calorie values that correspond to the endpoints shown for the three confidence intervals?

MTB > name c15 'RnkCal'

MTB > rank 'Calorie' 'RnkCal'

MTB > onew 'RnkCal' 'Type'

The output of the ANOVA using rank as the response variable is as follows:

One-way ANOVA: RnkCal versus Group

Source  DF     SS    MS      F      P
Group    2   3933  1966  10.93  0.000
Error   51   9177   180
Total   53  13110

S = 13.41   R-Sq = 30.00%   R-Sq(adj) = 27.25%

Level   N   Mean  StDev
1      20  33.13  13.60
2      17  33.47  14.51
3      17  14.91  11.98

Pooled StDev = 13.41

(Text plot of the individual 95% CIs for the mean ranks, based on the pooled StDev, omitted; the endpoints are listed below.)

Note that the above confidence intervals are calculated as ȳi ± t.025,51 · sW / (ni)^(1/2), where t.025,51 = 2.00758 and sW = 13.41. The intervals are of different lengths because the sample sizes differ, but all three CIs are based on the same pooled estimate sW = 13.41, because we're assuming all three populations have the same variance.

To transform the above confidence intervals from the rank scale to the calorie scale, the approximate rank-valued endpoint is replaced by the corresponding calorie value. This is common practice in communicating with clients, who usually want to have information expressed in terms of their original measurements and may not even know what ranks are.

Group   Left Rank Endpoint   Right Rank Endpoint   Left Calorie Endpoint   Right Calorie Endpoint
  1            27.11                 39.15                    145                      170
  2            26.94                 40.00                    145                      172
  3             8.38                 21.44                    108                      139

These can be obtained from the sorted list of observations with their respective ranks, shown below:

RnkCal / Calories / RnkCal / Calories
1.0 / 86 / 29.0 / 147
2.0 / 87 / 30.0 / 148
3.0 / 94 / 31.5 / 149
4.0 / 99 / 31.5 / 149
5.5 / 102 / 33.5 / 152
5.5 / 102 / 33.5 / 152
7.0 / 106 / 35.5 / 153
8.0 / 107 / 35.5 / 153
9.0 / 111 / 37.0 / 157
10.0 / 113 / 38.0 / 158
11.0 / 129 / 39.0 / 170
12.0 / 131 / 40.0 / 172
13.5 / 132 / 41.0 / 173
13.5 / 132 / 42.5 / 175
16.0 / 135 / 42.5 / 175
16.0 / 135 / 44.0 / 176
16.0 / 135 / 45.0 / 179
18.0 / 136 / 46.0 / 181
19.0 / 138 / 47.0 / 182
20.5 / 139 / 48.0 / 184
20.5 / 139 / 49.0 / 186
22.0 / 140 / 51.0 / 190
23.0 / 141 / 51.0 / 190
24.0 / 142 / 51.0 / 190
25.0 / 143 / 53.0 / 191
26.0 / 144 / 54.0 / 195
27.5 / 146
27.5 / 146
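
As an additional check (not in the original notes), the rank-scale ANOVA and the confidence-interval endpoints can be reproduced in R with the objects from the earlier sketches; the Kruskal-Wallis test is the standard nonparametric counterpart:

RnkCal = rank(Calories)                  # ranks 1 through 54, ties get average ranks
anova(lm(RnkCal ~ Type))                 # ANOVA on the ranks: F = 10.93
# Check of the group-1 interval on the rank scale: 33.13 ± t * sW / sqrt(20)
33.13 + c(-1, 1) * qt(0.975, 51) * 13.41 / sqrt(20)    # about (27.11, 39.15)
kruskal.test(Calories ~ Type)            # closely related rank-based test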

Permutation Test. (Additional Method not shown in the notes).

The R code below simulates the permutation distribution of the F-statistic and compares the result with the
F-distribution. The data are not normal, but the standard F-test gives the same result as the permutation test.

Beef =

c(186, 181, 176, 149, 184, 190, 158, 139, 175, 148,

152, 111, 141, 153, 190, 157, 131, 149, 135, 132)

Meat =

c(173, 191, 182, 190, 172, 147, 146, 139, 175, 136,

179, 153, 107, 195, 135, 140, 138)

Poultry =

c(129, 132, 102, 106, 94, 102, 87, 99, 170, 113,

135, 142, 86, 143, 152, 146, 144)

Calories = c(Beef, Meat, Poultry)

Type = as.factor(c(rep(1, length(Beef)), rep(2, length(Meat)), rep(3, length(Poultry))))

m = 10000; f = numeric(m)                        # number of permutations; storage for F-statistics
for (i in 1:m) {
  ptyp = sample(Type, length(Type))              # randomly permute the group labels
  f[i] = anova(lm(Calories ~ ptyp))$F[1] }       # F-statistic for the permuted labels
hist(f, prob=T, col="wheat", ylim=c(0,1))        # permutation distribution of F
ff = seq(0, max(f), length=1000)
lines(ff, df(ff, 2, 51), lwd=2, col="blue")      # theoretical F(2, 51) density, for comparison
fval = anova(lm(Calories ~ Type))$F[1]; fval     # observed F-statistic
mean(f > fval)                                   # permutation P-value

> fval = anova(lm(Calories ~ Type))$F[1]; fval

[1] 12.18682

> mean(f > fval)

[1] 1e-04

NOTE: Type must be a factor vector; if numeric, the ANOVA tables are for regression of Calories on Type.

2.7.1. Use Minitab's capability to generate random samples. Sample 10 observations from each of 15 groups, all known to have a normal distribution with mean 100 and standard deviation 10. Thus, we know that there are no real differences among means. Yet, by comparing the two groups that happen to have the largest and smallest group sample means, we can often find a bogus significant difference with the Fisher procedure. In practice, you shouldn't look at Fisher LSDs unless the main F-test rejects.

Here is a (slightly simplified, but still accurate) version of the commands generated by the menu steps in the questions. The advantage of using commands is that, once entered and used, they can be copied from earlier in the Session window, pasted at the active MTB > prompt (at the very bottom of the Session window), and used again for another simulation without any further typing. The rest of the prompts will appear when you press Enter.

MTB > set c22

DATA> 1(1:15)10

DATA> end.

MTB > rand 150 c21;

SUBC> norm 100 10.

MTB > onew C21 C22;

SUBC> fisher.

Here is the printout for one run. You are looking for Fisher CIs that don't cover 0. (An early hint of where to look comes from the default CIs at the start; look among intervals that don't cover the means of other intervals.)

There is a lot of output from each run. Here we highlight the Fisher CIs that don't cover 0. There is no guarantee that your first run will produce any such Fisher CIs, but there is a fairly high probability. If you don't get any on the first run, try again. We show two runs: the first has an example (a close call), the second has stronger examples.
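
For R users, here is a rough sketch of one run of the same simulation, with rnorm in place of Minitab's RANDOM command and unadjusted pairwise t-tests playing the role of the Fisher comparisons:

set.seed(1)                                  # any seed; results vary from run to run
g = factor(rep(1:15, each=10))               # like SET C22: 1(1:15)10
y = rnorm(150, mean=100, sd=10)              # like RANDOM 150 C21; NORMAL 100 10.
anova(lm(y ~ g))                             # main F-test; usually not significant
# Unadjusted pairwise p-values; a few are often below 0.05 just by chance
round(pairwise.t.test(y, g, p.adjust.method="none")$p.value, 3)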

First run. All output shown.

One-way ANOVA: C21 versus C22

Source DF SS MS F P

C22 14 1103.7 78.8 0.91 0.551     Note: Not significant! Shouldn't even look at Fisher LSD.

Error 135 11712.8 86.8

Total 149 12816.5

S = 9.315 R-Sq = 8.61% R-Sq(adj) = 0.00%

Individual 95% CIs For Mean Based on

Pooled StDev

Level N Mean StDev ------+------+------+------+

1 10 101.31 9.76 (------*------)

2 10 98.78 9.32 (------*------)

3 10 104.10 9.45 (------*------)

4 10 99.47 6.21 (------*------)

5 10 98.12 6.84 (------*------)

6 10 99.41 7.18 (------*------)

7 10 97.58 9.88 (------*------)

8 10 102.35 10.12 (------*------)

9 10 96.24 7.56 (------*------)

10 10 99.26 10.12 (------*------)

11 10 104.07 11.91 (------*------)

12 10 104.27 10.31 (------*------)

13 10 101.13 7.99 (------*------)

14 10 104.52 12.41 (------*------)

15 10 104.29 8.20 (------*------)

------+------+------+------+

95.0 100.0 105.0 110.0

Pooled StDev = 9.31

Fisher 95% Individual Confidence Intervals

All Pairwise Comparisons among Levels of C22

Simultaneous confidence level = 19.24%

C22 = 1 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

2 -10.770 -2.532 5.706 (------*------)

3 -5.449 2.790 11.028 (------*------)

4 -10.082 -1.844 6.394 (------*------)

5 -11.433 -3.195 5.043 (------*------)

6 -10.147 -1.908 6.330 (------*------)

7 -11.976 -3.738 4.500 (------*------)

8 -7.202 1.037 9.275 (------*------)

9 -13.311 -5.073 3.166 (------*------)

10 -10.289 -2.050 6.188 (------*------)

11 -5.479 2.759 10.997 (------*------)

12 -5.280 2.959 11.197 (------*------)

13 -8.418 -0.180 8.058 (------*------)

14 -5.036 3.202 11.440 (------*------)

15 -5.267 2.971 11.209 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 2 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

3 -2.916 5.322 13.560 (------*------)

4 -7.550 0.688 8.926 (------*------)

5 -8.901 -0.663 7.575 (------*------)

6 -7.614 0.624 8.862 (------*------)

7 -9.444 -1.206 7.032 (------*------)

8 -4.669 3.569 11.807 (------*------)

9 -10.779 -2.540 5.698 (------*------)

10 -7.756 0.482 8.720 (------*------)

11 -2.947 5.291 13.530 (------*------)

12 -2.747 5.491 13.729 (------*------)

13 -5.886 2.352 10.590 (------*------)

14 -2.504 5.734 13.972 (------*------)

15 -2.735 5.503 13.741 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 3 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

4 -12.872 -4.634 3.605 (------*------)

5 -14.223 -5.985 2.254 (------*------)

6 -12.936 -4.698 3.540 (------*------)

7 -14.766 -6.528 1.711 (------*------)

8 -9.991 -1.753 6.485 (------*------)

9 -16.100 -7.862 0.376 (------*------)     (Close: RH end barely positive.)

10 -13.078 -4.840 3.398 (------*------)

11 -8.269 -0.031 8.208 (------*------)

12 -8.069 0.169 8.407 (------*------)

13 -11.208 -2.970 5.269 (------*------)

14 -7.826 0.412 8.651 (------*------)

15 -8.057 0.181 8.420 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 4 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

5 -9.589 -1.351 6.887 (------*------)

6 -8.302 -0.064 8.174 (------*------)

7 -10.132 -1.894 6.344 (------*------)

8 -5.357 2.881 11.119 (------*------)

9 -11.467 -3.228 5.010 (------*------)

10 -8.444 -0.206 8.032 (------*------)

11 -3.635 4.603 12.842 (------*------)

12 -3.435 4.803 13.041 (------*------)

13 -6.574 1.664 9.902 (------*------)

14 -3.192 5.046 13.284 (------*------)

15 -3.423 4.815 13.053 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 5 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

6 -6.952 1.287 9.525 (------*------)

7 -8.781 -0.543 7.695 (------*------)

8 -4.007 4.232 12.470 (------*------)

9 -10.116 -1.878 6.361 (------*------)

10 -7.094 1.145 9.383 (------*------)

11 -2.284 5.954 14.192 (------*------)

12 -2.084 6.154 14.392 (------*------)

13 -5.223 3.015 11.253 (------*------)

14 -1.841 6.397 14.635 (------*------)

15 -2.072 6.166 14.404 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 6 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

7 -10.068 -1.830 6.409 (------*------)

8 -5.293 2.945 11.183 (------*------)

9 -11.403 -3.164 5.074 (------*------)

10 -8.380 -0.142 8.096 (------*------)

11 -3.571 4.667 12.906 (------*------)

12 -3.371 4.867 13.105 (------*------)

13 -6.510 1.728 9.967 (------*------)

14 -3.128 5.110 13.349 (------*------)

15 -3.359 4.879 13.118 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 7 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

8 -3.463 4.775 13.013 (------*------)

9 -9.573 -1.334 6.904 (------*------)

10 -6.551 1.688 9.926 (------*------)

11 -1.741 6.497 14.736 (------*------)

12 -1.541 6.697 14.935 (------*------)

13 -4.680 3.558 11.796 (------*------)

14 -1.298 6.940 15.178 (------*------)

15 -1.529 6.709 14.947 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 8 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

9 -14.348 -6.109 2.129 (------*------)

10 -11.325 -3.087 5.151 (------*------)

11 -6.516 1.722 9.961 (------*------)

12 -6.316 1.922 10.160 (------*------)

13 -9.455 -1.217 7.022 (------*------)

14 -6.073 2.165 10.404 (------*------)

15 -6.304 1.934 10.172 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 9 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

10 -5.216 3.022 11.261 (------*------)

11 -0.407 7.832 16.070 (------*------)     (Close: LH end barely negative)

12 -0.207 8.031 16.270 (------*------)     (Close: LH end barely negative)

13 -3.346 4.893 13.131 (------*------)

14 0.036 8.275 16.513 (------*------)     Borderline example: look at the numbers.

15 -0.195 8.043 16.282 (------*------)     (LH end barely negative)

------+------+------+------+--

-10 0 10 20

C22 = 10 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

11 -3.429 4.809 13.048 (------*------)

12 -3.229 5.009 13.247 (------*------)

13 -6.368 1.870 10.109 (------*------)

14 -2.986 5.252 13.491 (------*------)

15 -3.217 5.021 13.260 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 11 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

12 -8.039 0.200 8.438 (------*------)

13 -11.177 -2.939 5.299 (------*------)

14 -7.795 0.443 8.681 (------*------)

15 -8.027 0.212 8.450 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 12 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

13 -11.377 -3.139 5.100 (------*------)

14 -7.995 0.243 8.482 (------*------)

15 -8.226 0.012 8.250 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 13 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

14 -4.856 3.382 11.620 (------*------)

15 -5.087 3.151 11.389 (------*------)

------+------+------+------+--

-10 0 10 20

C22 = 14 subtracted from:

C22 Lower Center Upper ------+------+------+------+--

15 -8.469 -0.231 8.007 (------*------)

------+------+------+------+--

-10 0 10 20

Second run. Here we save space by showing only the interesting clumps of output.

MTB > set c22

DATA> 1(1:15)10

DATA> end.

MTB > rand 150 c21;

SUBC> norm 100 10.

MTB > onew C21 C22;

SUBC> fisher.

One-way ANOVA: C21 versus C22

Source DF SS MS F P

C22 14 1748 125 1.15 0.321 Main F-test not significant.

Error 135 14653 109

Total 149 16401

S = 10.42 R-Sq = 10.66% R-Sq(adj) = 1.39%

Individual 95% CIs For Mean Based on

Pooled StDev

Level N Mean StDev -+------+------+------+------

1 10 99.96 9.29 (------*------)

2 10 99.41 14.84 (------*------)

3 10 100.00 13.15 (------*------)

4 10 96.66 10.06 (------*------)

5 10 101.82 10.18 (------*------)

6 10 95.18 10.41 (------*------)

7 10 103.64 12.28 (------*------)

8 10 97.17 7.79 (------*------)

9 10 103.94 8.42 (------*------)

10 10 100.03 9.68 (------*------)

11 10 98.30 8.49 (------*------)

12 10 99.35 8.98 (------*------)

13 10 98.53 7.73 (------*------)

14 10 101.46 12.66 (------*------)

15 10 89.65 9.32 (------*------)     (Involved in all 3 examples)

-+------+------+------+------

84.0 91.0 98.0 105.0

Pooled StDev = 10.42

Fisher 95% Individual Confidence Intervals

All Pairwise Comparisons among Levels of C22

Simultaneous confidence level = 19.24%

...

C22 = 7 subtracted from:

C22 Lower Center Upper +------+------+------+------

8 -15.68 -6.46 2.75 (------*------)

9 -8.91 0.30 9.51 (------*------)

10 -12.82 -3.60 5.61 (------*------)

11 -14.55 -5.34 3.88 (------*------)

12 -13.50 -4.28 4.93 (------*------)

13 -14.32 -5.11 4.11 (------*------)

14 -11.39 -2.17 7.04 (------*------)

15 -23.20 -13.99 -4.77 (------*------)     Strong example

+------+------+------+------

-24 -12 0 12

...

C22 = 9 subtracted from:

C22 Lower Center Upper +------+------+------+------

10 -13.12 -3.90 5.31 (------*------)

11 -14.85 -5.64 3.58 (------*------)

12 -13.80 -4.58 4.63 (------*------)

13 -14.62 -5.41 3.81 (------*------)

14 -11.69 -2.47 6.74 (------*------)

15 -23.50 -14.29 -5.07 (------*------)     Strong example

+------+------+------+------

-24 -12 0 12

...

C22 = 14 subtracted from:

C22 Lower Center Upper +------+------+------+------

15 -21.03 -11.81 -2.60 (------*------)     Strong example

+------+------+------+------

-24 -12 0 12

Note: Working at the 5% level, you would expect that about once in 20 runs the main F-test would give a P-value less than 5%, leading to a totally wrong interpretation (5% = 1/20).
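
That expectation, and the much higher chance of finding at least one bogus "significant" Fisher comparison when the F-test is ignored, can be checked by repeating the simulation many times; a rough R sketch (not part of the original notes):

m = 1000                                     # number of simulated experiments
g = factor(rep(1:15, each=10))
reject.F = any.LSD = logical(m)
for (i in 1:m) {
  y = rnorm(150, 100, 10)
  reject.F[i] = anova(lm(y ~ g))[1, "Pr(>F)"] < 0.05
  p = pairwise.t.test(y, g, p.adjust.method="none")$p.value
  any.LSD[i] = any(p < 0.05, na.rm=TRUE)
}
mean(reject.F)     # should be near 0.05
mean(any.LSD)      # typically much larger: the multiple-comparisons problem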

Based on notes by Elizabeth Ellinger, Spring 2004, as expanded and modified by Bruce E. Trumbo, Winter 2008. Copyright © 2008 by Bruce E. Trumbo. All rights reserved.