KNOWLEDGEBASE - ARTICLE #1081

If the overall one-way ANOVA has P>0.05, must all the multiple comparisons tests be "not significant"? What about the opposite? If the overall P value is less than 0.05, must at least one multiple comparisons test be "significant"?

Do the multiple comparisons tests following one-way ANOVA provide useful information even if the overall ANOVA results are not statistically significant?

Since multiple comparison tests are often called 'post tests', you'd think they logically must follow a statistically significant one-way ANOVA. In fact, this isn't so.

"An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177)

Will the results of multiple tests be valid if the overall P value for the ANOVA is greater than 0.05?

Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means.

The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check whether the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, no individual comparisons are made. But the protected LSD test is outmoded and no longer recommended.
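A minimal Python sketch of that two-step "protected" logic may help. It assumes SciPy is installed, the group names and data are invented, and the pairwise step is just unadjusted t tests built on the pooled within-group mean square:

import numpy as np
from scipy import stats

groups = {
    "A": np.array([4.1, 5.0, 4.6, 5.3]),
    "B": np.array([5.9, 6.4, 5.7, 6.1]),
    "C": np.array([4.8, 5.2, 4.9, 5.5]),
}

# Step 1: the overall one-way ANOVA.
f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"Overall ANOVA: F = {f_stat:.2f}, P = {p_overall:.3f}")

if p_overall >= 0.05:
    # The "protection": stop here and make no pairwise comparisons.
    print("Protected LSD stops: no pairwise comparisons are made.")
else:
    # Step 2: unadjusted pairwise t tests using the pooled error term.
    k = len(groups)
    n_total = sum(g.size for g in groups.values())
    df_within = n_total - k
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_within

    names = list(groups)
    for i in range(k):
        for j in range(i + 1, k):
            yi, yj = groups[names[i]], groups[names[j]]
            se = np.sqrt(ms_within * (1 / yi.size + 1 / yj.size))
            t = (yi.mean() - yj.mean()) / se
            p = 2 * stats.t.sf(abs(t), df_within)
            print(f"{names[i]} vs {names[j]}: t = {t:.2f}, P = {p:.3f}")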

Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant?

Yes, it is possible. The one exception is Scheffe's test (which no GraphPad product offers). It is intertwined with the overall F test: if the overall ANOVA has a P value greater than 0.05, Scheffe's test cannot find any significant differences. In that case, performing Scheffe post tests after a nonsignificant ANOVA is a waste of time, but won't lead to invalid conclusions. Other multiple comparison tests, however, can (sometimes) find significant differences even when the overall ANOVA showed no significant differences among groups.
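As an illustration, the following Python sketch (it assumes SciPy and statsmodels are installed; the data are invented) runs the overall ANOVA and Tukey's multiple comparisons test independently on the same data. The two calculations are separate, so their conclusions need not agree:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Three made-up groups of measurements.
a = np.array([10.2, 11.1, 10.8, 11.5, 10.9])
b = np.array([11.0, 11.9, 11.4, 12.1, 11.6])
c = np.array([10.6, 11.4, 11.0, 11.8, 11.2])

# Overall one-way ANOVA.
f_stat, p_overall = stats.f_oneway(a, b, c)
print(f"Overall ANOVA: F = {f_stat:.2f}, P = {p_overall:.3f}")

# Tukey's test, computed without reference to the overall F test.
values = np.concatenate([a, b, c])
labels = np.repeat(["A", "B", "C"], [a.size, b.size, c.size])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))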

How can I understand the apparent contradiction between an ANOVA saying, in effect, that all group means are identical and a post test finding differences?

The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, so any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means.

The post tests are more focused, so they have power to find differences between particular pairs of groups even when the overall ANOVA is not significant.

If the overall ANOVA finds a statistically significant difference among group means, will multiple comparison testing be certain to find a statistically significant difference between at least one pair of means?

If one-way ANOVA reports a P value of <0.05, you reject the null hypothesis that all the data come from populations with the same mean. In this case, it seems to make sense that at least one of the multiple comparisons tests will find a significant difference between pairs of means. But this is not necessarily true.

It is possible that the overall mean of groups A and B combined differs significantly from the combined mean of groups C, D and E. Or perhaps the mean of group A differs from the combined mean of groups B through E. Scheffe's post test detects differences like these (but this test is not offered by GraphPad InStat or Prism). If the overall ANOVA P value is less than 0.05, then Scheffe's test will definitely find a significant difference somewhere, if you look at the right comparison (also called a contrast). The multiple comparisons tests offered by GraphPad InStat and Prism only compare individual group means, so it is quite possible for the overall ANOVA to reject the null hypothesis that all group means are the same, yet for the post tests to find no significant difference between any pair of group means.
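For illustration only, here is a rough Python sketch of a Scheffe-style contrast comparing the combined mean of groups A and B with the combined mean of groups C, D and E. The data are invented, it assumes NumPy and SciPy, and it is not meant to reproduce any GraphPad calculation:

import numpy as np
from scipy import stats

groups = {
    "A": np.array([3.8, 4.2, 4.0, 4.5]),
    "B": np.array([4.1, 4.6, 4.3, 4.8]),
    "C": np.array([5.0, 5.4, 5.2, 5.6]),
    "D": np.array([5.1, 5.5, 5.3, 5.7]),
    "E": np.array([5.2, 5.6, 5.4, 5.8]),
}
k = len(groups)
n = np.array([g.size for g in groups.values()])
means = np.array([g.mean() for g in groups.values()])
df_within = n.sum() - k
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_within

# Contrast: the average of A and B versus the average of C, D and E.
# The coefficients sum to zero, as any contrast's must.
coef = np.array([0.5, 0.5, -1 / 3, -1 / 3, -1 / 3])
contrast = coef @ means
se = np.sqrt(ms_within * np.sum(coef ** 2 / n))

# Scheffe criterion: significant at alpha = 0.05 if (contrast/SE)^2
# exceeds (k - 1) times the critical F for the overall ANOVA.
f_crit = stats.f.ppf(0.95, k - 1, df_within)
significant = (contrast / se) ** 2 > (k - 1) * f_crit
print(f"Contrast = {contrast:.3f}, SE = {se:.3f}, significant: {significant}")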

Are the results of the overall ANOVA useful at all?

ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- do the data provide convincing evidence that the means are not all identical -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and are answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results.

Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
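For example, this small Python sketch (invented data, equal group sizes, SciPy 1.7 or later for the studentized range distribution) computes Tukey's "honestly significant difference" directly from the within-group mean square, without ever using F or its P value:

import numpy as np
from scipy import stats

# Three made-up groups of equal size.
a = np.array([7.1, 7.9, 7.4, 7.6])
b = np.array([8.0, 8.6, 8.3, 8.5])
c = np.array([7.5, 8.1, 7.8, 8.0])
groups = [a, b, c]

k, n = len(groups), a.size
df_within = k * n - k
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within

# Tukey's HSD needs only MSwithin (and its degrees of freedom), not F or P.
q_crit = stats.studentized_range.ppf(0.95, k, df_within)
hsd = q_crit * np.sqrt(ms_within / n)
print(f"Two means differ significantly if they differ by more than {hsd:.2f}.")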

Beware of trends

One-way ANOVA totally ignores the order of the columns. So if each column represents a time point, a dose, or anything else quantitative, ANOVA ignores that part of the experimental design. The only exception is the multiple comparisons test for trend (built into Prism), which tests for, essentially, a correlation between column order and column mean.
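As a rough sketch of what such a trend test does, the following Python code tests a linear contrast across ordered column means. The data are invented, the columns are assumed to be ordered and equally spaced, and this follows the standard linear-contrast approach rather than necessarily matching Prism's exact implementation:

import numpy as np
from scipy import stats

# Four made-up columns in order (say, increasing dose).
columns = [
    np.array([2.1, 2.5, 2.3, 2.6]),
    np.array([2.8, 3.1, 2.9, 3.2]),
    np.array([3.4, 3.8, 3.6, 3.9]),
    np.array([4.0, 4.4, 4.2, 4.5]),
]
k = len(columns)
n = np.array([c.size for c in columns])
means = np.array([c.mean() for c in columns])
df_within = n.sum() - k
ms_within = sum(((c - c.mean()) ** 2).sum() for c in columns) / df_within

# Linear contrast coefficients: the column order, centered to sum to zero.
coef = np.arange(k) - (k - 1) / 2
slope = coef @ means
se = np.sqrt(ms_within * np.sum(coef ** 2 / n))
t = slope / se
p = 2 * stats.t.sf(abs(t), df_within)
print(f"Test for linear trend: t = {t:.2f}, P = {p:.4f}")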



Keywords: post-hoc Tukey Newman Keuls Dunnett
