BISC 1113, Lab 3

ANOVA for Comparing More Than Two Samples

An important caution when using statistical tests:

In many experiments there will be more than two treatment groups. The t-test should not be used to compare more than two groups because when multiple t-tests are performed on the same data set, there is an increased likelihood of falsely reporting a significant difference.

When testing whether or not the means of several independent groups of observations are significantly different, a powerful parametric statistical approach is Analysis of Variance (ANOVA). It was developed by one of the founders of modern statistical theory, Sir Ronald Fisher, in the 1920s.

An ANOVA answers the question: “Is there a significant difference among the treatment means?” It indicates only whether a difference occurs among the means of the groups. It does not indicate by itself which pairs of means might be different. To determine which pairs of means differ a "post hoc" test is performed. In this course the post hoc test we will use is called the Tukey-Kramer HSD (“Honestly Significant Difference”) test. John Tukey, active in the mid-Twentieth Century, pioneered many statistical testing procedures.

The output of the ANOVA will give you a P-value, the interpretation of which is similar to that derived from a t-test. If P ≤ 0.05, there is a significant difference among the means. Follow the instructions below to explore how you can use the program JMP to run an ANOVA and Tukey-Kramer HSD test. If you encounter problems, ask your instructor for additional directions.

In the following example, 20 individuals of one strain of flour beetle were distributed among three food types, with three replicate containers per food type (i.e., n = 3), for a total of nine containers. The beetles were counted after 10 weeks, and these counts are the data we will analyze. This is a single-factor experiment in which the factor (food type) has three levels. It describes a statistical model that apportions the sources of variation within the entire data set into the variation explained by the different factor or treatment levels (food types) and the unexplained variation caused by differences among replicates within each food type.

The ANOVA evaluates the relative contributions of these sources of variation by computing their ratio. This is the F-ratio, and it is closely related to the t-statistic. The F-ratio is a measure of the average variation of response among treatment levels (the treatment effect) relative to the average variation of response within treatments. If this ratio is sufficiently large, it indicates that there is likely structure in the data that we have identified by breaking the overall data set into our treatment groups. In other words, we have evidence of a likely treatment effect and of differences among treatment levels; the groups are unlikely to simply represent random samples from one common population. With a significantly large F-ratio, the ANOVA tells us that we have explained a large portion of the overall variation in the data set by breaking it up into treatment levels, rather than by merely taking the overall mean of all replicates (nine in this case).

F-ratios are like signal-to-noise ratios: the “signal” is the amount of variation explained by our treatment(s), and the “noise” is the random, “natural”, unexplained variation among replicates. This noise is estimated as error, meaning the “wandering” of the observed data from the values expected based on the treatment means.
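If you are curious how this idea looks outside of JMP, here is a minimal sketch in Python using scipy.stats.f_oneway, which performs exactly this one-way partitioning of variation. The counts below are hypothetical numbers invented for illustration, not the lab's data:

```python
# Minimal sketch of a one-way ANOVA outside JMP (hypothetical counts, not the lab data).
from scipy import stats

# Three food types, three replicate containers each (n = 3 per level).
corn  = [180, 205, 203]
wheat = [360, 395, 400]
white = [100, 150, 146]

# f_oneway partitions variation into among-treatment vs. within-treatment components
# and returns the F-ratio and its P-value.
f_ratio, p_value = stats.f_oneway(corn, wheat, white)
print(f"F = {f_ratio:.2f}, P = {p_value:.4f}")
```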

The amount of total variation in the overall data set that is explained by apportioning the data by treatment level is measured as the treatment sum of squares (TrSS); dividing it by the treatment degrees of freedom gives the treatment mean square (TrMS). This is a variance term (remember how we calculated a variance as SS/(n − 1)?). The same is done for the error (sometimes called residual) term, yielding the error mean square (MSE). The F-ratio is the ratio of the mean squares (variances) for the treatment and error terms: Fs = TrMS/MSE, with Fs being the F-ratio calculated from our experiment's samples (hence the subscript “s”).

As in t-testing, the distribution of the F-ratio under the assumption of no treatment effects (i.e., assuming the null hypothesis to be true) has been tabulated. F-ratios have two types of degrees of freedom (df): one is simply the number of treatment levels minus 1, and the second, called the error (residual) df, is the df associated with the “within-treatment” variance term (the mean square error). When an experimentally derived Fs-ratio exceeds the value of F tabulated under the null hypothesis (i.e., when its P-value is at or below 0.05), we can say that there are significant differences among the means of our treatment levels.
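To connect these formulas to actual arithmetic, here is a minimal sketch (again with hypothetical numbers) that computes the treatment and error mean squares by hand and then evaluates the P-value of Fs at the appropriate degrees of freedom; scipy.stats.f.sf plays the role of the printed F-tables:

```python
# Hand computation of Fs = TrMS / MSE for a single-factor design (hypothetical data).
import numpy as np
from scipy import stats

groups = {
    "corn":  np.array([180.0, 205.0, 203.0]),
    "wheat": np.array([360.0, 395.0, 400.0]),
    "white": np.array([100.0, 150.0, 146.0]),
}

all_obs = np.concatenate(list(groups.values()))
grand_mean = all_obs.mean()

# Treatment sum of squares: variation of the group means around the grand mean.
tr_ss = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
# Error (residual) sum of squares: variation of replicates around their own group mean.
err_ss = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

df_treatment = len(groups) - 1            # number of treatment levels minus 1
df_error = len(all_obs) - len(groups)     # total replicates minus number of levels

tr_ms = tr_ss / df_treatment              # treatment mean square (a variance, SS/df)
mse = err_ss / df_error                   # error mean square
f_s = tr_ms / mse                         # the F-ratio: "signal" over "noise"

# P-value: probability of an F this large or larger if the null hypothesis were true.
p_value = stats.f.sf(f_s, df_treatment, df_error)
print(f"Fs = {f_s:.2f}, df = {df_treatment},{df_error}, P = {p_value:.4f}")
```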

Certain assumptions are made in this analysis (e.g., normality of the data, equal sample variances), but moderate departures from these are usually tolerable and often can be rectified by transforming the raw data (e.g., taking their logarithms) and re-running the analysis. If a P-value for the Fs-ratio is marginally significant (e.g., P = 0.03), one should be cautious in interpretation, especially when small sample sizes are involved.
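As a small illustration of the transformation idea, re-running the one-way test on log-transformed counts is a one-line change (hypothetical numbers again):

```python
# Re-running the one-way ANOVA on log-transformed counts (hypothetical data).
import numpy as np
from scipy import stats

corn, wheat, white = [180, 205, 203], [360, 395, 400], [100, 150, 146]
f_log, p_log = stats.f_oneway(np.log(corn), np.log(wheat), np.log(white))
print(f"After log transform: F = {f_log:.2f}, P = {p_log:.4f}")
```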

If (and only if) a significant Fs-ratio is obtained in the overall ANOVA, then you are free to examine “interesting” differences between individual means. This is done by post hoc testing (also called unplanned, or multiple, comparisons). There are a number of these tests, but the HSD test is a good one because it is statistically conservative (helping to avoid a false claim of significance).
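For the post hoc step, here is a minimal sketch using pairwise_tukeyhsd from the statsmodels package (hypothetical data); JMP's Tukey-Kramer HSD report is the analogue of this output:

```python
# Tukey HSD pairwise comparisons, run only after a significant overall F-ratio.
from statsmodels.stats.multicomp import pairwise_tukeyhsd

counts = [180, 205, 203, 360, 395, 400, 100, 150, 146]
food   = ["corn"] * 3 + ["wheat"] * 3 + ["white"] * 3

# Each row of the result compares one pair of means and flags whether to reject
# the hypothesis that the pair is equal, at the 0.05 family-wise level.
result = pairwise_tukeyhsd(endog=counts, groups=food, alpha=0.05)
print(result.summary())
```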

Getting started in JMP:

ONE-WAY ANOVA - multiple treatments of one experimental group

  • Enter the data into a JMP data sheet as shown below. Note that replicate containers of the same flour type are given identical names in the flour-type column – THIS IS VERY IMPORTANT.

Once your data are entered:

  • Under the Analyze menu, select Fit Y by X
  • The value that you “set” is the independent variable. This variable belongs on the X axis. In this case the categorical (or nominal) variable is the flour type. Set this as your X value.
  • The dependent (response) variable is what you measure, and in this example it would be the number of adults. Set this continuous variable as your Y value.
  • Click on OK and you will get something like this:

Here are your raw data plotted by factor or treatment level (type of flour).

  • Pull down the red arrow in the upper left-hand corner and select Means, ANOVA. The following screen will appear:

To test differences between pairs of means, go back to the red arrow and select Compare Means, then All Pairs, Tukey HSD. You will see the following print-out:

A good reference site on ANOVAs and experimental design hosted by Valerie Easton and John H. McColl will help you understand the ANOVA. Visit this site at

Expressing the results of an ANOVA as a table in a lab report or scientific paper:

One is always on solid ground presenting the overall ANOVA results in a condensed table. Here is an example of a minimized ANOVA table that you might find in some published scientific papers. Note the conventional use of * to denote significance: * = P < 0.05; ** = 0.001 < P < 0.01; *** = P < 0.001. This is a fairly universal notation scheme. With exact probabilities now easily available from computer programs, include the last column (P) at the right. Other options would be to add the SS for each term and the total. Just the essentials are included here, so it is assumed that all the other arithmetic in calculating the F-ratio and evaluating its significance is correct! You can see in this figure how it is possible to combine graphical data and statistical analysis. Such concision is encouraged!

Here is a concise and clear option for expressing the Tukey results in a figure:

Alternatively, the Tukey test results could be recorded in a table (but, in both cases, you must also report the overall ANOVA results somewhere, e.g., in the Results text):

Table 1. Means (± SE) for a test of the hypothesis that food type affects flour beetle (Tribolium castaneum) population growth in a 70-d period. Tukey levels¹ not sharing the same letter are significantly different.

                     Number of beetles
Type of flour      Mean ± SE        Tukey level
Corn               196 ± 13.3       B
Wheat              385 ± 17.3       A
White              132 ± 42.1       B

¹Tukey HSD test.

Results text and stats: How to express ANOVA stat results in the text of your report:

Start with the BIOLOGY: Minimize the statistical analysis.

All the populations grew considerably in the three flour types. After 70 d, the mean number of adult beetles in the starting population (20) had increased by a factor of 20 in wheat flour, but only by about 10-fold in corn and about 7-fold in white flour (Fig. 1). Mean population sizes in corn and white flour were not significantly different from each other, but both were significantly different from the mean population size in wheat flour (ANOVA, F = 22.7, df = 2,6, P = 0.002; Tukey HSD test).

Note that the Tukey test results are included here in text form, after the results of the significant overall ANOVA are given first.

The 2-Way ANOVA: multiple treatments of multiple experimental groups

The 2-way ANOVA is useful when examining the effects of two categorical variables, both individually and together, on an experimental response. For example, suppose you wanted to see how heart rate (the experimental response) was affected by heart medication (variable 1) before and during exercise (variable 2). The ANOVA allows you to look at the effect of the medications and the effect of exercise, as well as whether or not the medications and the exercise interacted.

Using our plant data, we might ask: is transpiration (the experimental response) affected by environmental condition (variable 1 – the 2 light and wind conditions) or by plant type (variable 2 – the 4 plants)?

  • Input the data as illustrated below. The Treatment, Plant, and Replicate columns have the data type 'character'/'nominal', and the Transpiration Rate and Total Resistance columns are 'numeric'/'continuous'.

This test is very similar to the one-way ANOVA, so here we provide guidelines only. Ask for help if you are confused.

Detailed instructions for a 2-way ANOVA in JMP:

  • Under the Analyze menu, select Fit Model
  • To specify the model, click on Transpiration Rate or Total Resistance in the Select Columns field and then click Y (indicating the dependent variable) in the Pick Role Variables field.
  • To construct the test
  • highlight Treatment in the Select Columns field and then click Add in the Construct Model Effects field
  • Next highlight Plant in the Select Columns field and then click Add in the Construct Model Effects field
  • Last highlight both Treatment and Plant by clicking on Treatment and then holding down the shift key as you click on Plant. Click on Cross to add the interaction between Treatment and Plant.
  • The Model Specification window should now look like this

  • To run the model click on Run Model
  • Note there is no post hoc test for the 2-way ANOVA
  • Scroll down through the output until you find Effect Tests. Click on the arrow so the effects table is visible. You will see three P-values.
  • Copy and paste this table into a Word document for later inclusion of an edited version in the results. To do this, click on the white cross symbol in the formatting bar, then place the cross over the inverted triangle to the left of Effect Tests and click. The Effect Tests table should be highlighted in blue.
  • Under the Edit menu select Copy, and then paste the table into a Word file for later use in your results.

It is useful to look at the LS Means plot for each effect (use the drop-down menu under the red arrows). Just as with the t-test and 1-way ANOVA: if there is an effect of environmental conditions (HLHW vs. LLLW) on transpiration rate (or total resistance), the P-value will be less than 0.05. If there is an effect of plant type on transpiration rate, the P-value will be less than 0.05. If the two variables interact (Treatment × Plant), then that P-value will also be less than 0.05. A significant interaction row (P < 0.05) means that you need to know both which plant species you are working with and the environmental conditions in order to predict transpiration rate.
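If you want to see where those three P-values come from outside of JMP, here is a minimal sketch using the statsmodels formula interface. The column names and numbers below are hypothetical stand-ins for your own data sheet; the C(Treatment) and C(Plant) rows of the output are the two main effects and C(Treatment):C(Plant) is the interaction:

```python
# Two-way ANOVA with interaction (hypothetical stand-in data, not the class data set).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "Treatment": ["HLHW"] * 8 + ["LLLW"] * 8,
    "Plant":     ["A", "A", "B", "B", "C", "C", "D", "D"] * 2,
    "Transpiration": [5.1, 5.4, 4.2, 4.0, 6.3, 6.1, 3.9, 4.1,
                      2.0, 2.3, 1.8, 1.6, 2.9, 3.1, 1.5, 1.7],
})

# C(...) marks a column as categorical; '*' expands to both main effects plus the interaction.
model = ols("Transpiration ~ C(Treatment) * C(Plant)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # one row per effect: Treatment, Plant, Treatment:Plant, plus residual
```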

  • The plots to the right on the screen labeled Treatment, Plant and Treatment X Plant are there to aid your interpretation of the effects table.
  • Under each red triangle, pull down the LS Means Plot to visualize how these data are represented.
  • For Treatment, you will see a plot of the means and SE for each Treatment. The interpretation of the treatment effect should be very clear. Note that all the plant species data have been pooled for this test.
  • Next look at the LS Means plot for the Plant effect. Which plant species seem to have higher transpiration rates than the others? Note that environmental conditions (Treatments) have been pooled for this test.
  • The Treatment X Plant plot shows both the effect of the different environmental conditions (Treatment) and the differences among the plant species (Plant). Which had the greater effect—environmental conditions or plant species? Did the plant species all respond similarly to the different environmental conditions?

If you have conducted this test for transpiration, repeat the methods but with Total Resistance as the Y variable. Is the interpretation of the test results the same?

Reporting the 2-way ANOVA in your lab report:

Results EFFECTS TABLE:

Edit the row titles and delete all but the means, F-ratios, and P-values from the effects table you copied and pasted. The table title should include:

Table #: ANOVA effect tests comparing transpiration rate among 4 plant species under 2 environmental conditions.

mean ± SD / F-ratio / P

RESULT TEXT:

Include your ANOVA results in the text portion, following format guidelines similar to those already presented for t-tests and one-way ANOVAs (ANOVA, F-ratios: --, --, --; df: --, --, --; P: --, --, --), or cite the table (Table 1).

More info on Anova’s:

Rice Virtual Lab in Statistics:

SPSS Tutorial:

Making Pretty Graphs in JMP

You can stick with Excel but if you want to try using JMP, here is how to do it:

  • Graph > Chart > select Plant as your X level, and any of the continuous variables as your Statistics.
  • Pull down the Statistics menu, select Mean, and click OK.
  • On your new graph, under the Chart menu, select Y Options > Std Error Bars.
  • You can clean it up by fixing axis labels and removing non-essential legends.
  • To copy into Word, use the white cross in the JMP menu to highlight what you want, then copy and paste. Voila! Is that better than Excel?
  • Be sure you plot SD, not SE.
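If you would rather script the graph instead, here is a minimal matplotlib sketch (hypothetical means and standard deviations) of a bar chart of group means with error bars; note that it plots SD, as the caution above requires:

```python
# Bar chart of group means with standard-deviation error bars (hypothetical values).
import matplotlib.pyplot as plt

plants = ["A", "B", "C", "D"]
means  = [5.2, 4.1, 6.2, 4.0]    # mean transpiration rate per plant (made-up numbers)
sds    = [0.4, 0.3, 0.5, 0.2]    # standard deviations, not standard errors

plt.bar(plants, means, yerr=sds, capsize=4, color="lightgray", edgecolor="black")
plt.xlabel("Plant")
plt.ylabel("Transpiration rate")
plt.tight_layout()
plt.savefig("transpiration_means.png", dpi=200)   # or plt.show()
```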