If one-way ANOVA reports a P value less than 0.05, you reject the null hypothesis that all the data come from populations with the same mean. It seems logical, then, that at least one of the follow-up multiple comparisons tests will find a significant difference between some pair of means.
This is not necessarily true.
It is possible that the overall mean of groups A and B combined differs significantly from the combined mean of groups C, D, and E. Perhaps the mean of group A differs from the mean of groups B through E combined. Scheffe's post test detects differences like these (but this test is not offered by Prism). If the overall ANOVA P value is less than 0.05, then Scheffe's test will definitely find a significant difference somewhere (if you test the right comparison, also called a contrast). The multiple comparisons tests offered by Prism only compare pairs of group means, and it is quite possible for the overall ANOVA to reject the null hypothesis that all group means are the same, yet for the follow-up tests to find no significant difference between any pair of group means.
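To make the idea of a contrast concrete, here is a minimal sketch of how Scheffe's method tests a complex contrast such as the combined mean of groups A and B against the combined mean of groups C, D, and E. This is not Prism's code, and the group data below are made-up numbers chosen only for illustration:

```python
from scipy import stats

# Made-up data for five groups (illustration only)
groups = {
    "A": [12, 13, 14, 15],
    "B": [11, 12, 13, 14],
    "C": [9, 10, 11, 12],
    "D": [8, 9, 10, 11],
    "E": [9, 10, 11, 12],
}
k = len(groups)                                   # number of groups
N = sum(len(v) for v in groups.values())          # total observations
means = {g: sum(v) / len(v) for g, v in groups.items()}

# Mean square within (residual) from the ANOVA table
ss_within = sum(sum((x - means[g]) ** 2 for x in v) for g, v in groups.items())
ms_within = ss_within / (N - k)

# Contrast: mean of (A, B) versus mean of (C, D, E); coefficients sum to zero
c = {"A": 0.5, "B": 0.5, "C": -1 / 3, "D": -1 / 3, "E": -1 / 3}
L = sum(c[g] * means[g] for g in groups)
se2 = ms_within * sum(c[g] ** 2 / len(groups[g]) for g in groups)

# Scheffe's criterion: the contrast is significant at alpha = 0.05 when
# L^2 / se2 exceeds (k - 1) * F(0.05; k - 1, N - k)
f_contrast = L ** 2 / se2
scheffe_crit = (k - 1) * stats.f.ppf(0.95, k - 1, N - k)
print(f_contrast > scheffe_crit)
```

The inflated critical value, (k - 1) times the F critical value, is what lets Scheffe's method test any contrast you like while still controlling the overall error rate.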
You may find it surprising, but all the multiple comparisons tests offered by Prism are valid even if the overall ANOVA did not find a significant difference among means. Any of these tests can find significant differences even when the overall ANOVA did not. Because they are more focused, these tests have power to find differences between particular groups even when the overall ANOVA is not significant.
"An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177)
"...these methods [e.g., Bonferroni, Tukey, Dunnett, etc.] should be viewed as substitutes for the omnibus test because they control alphaEW at the desired level all by themselves. Requiring a significant omnibus test before proceeding to perform any of these analyses, as is sometimes done, only serves to lower alphaEW below the desired level (Bernhardson, 1975) and hence inappropriately decreases power" (Maxwell and Delaney, p. 236)
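The point of these quotations can be illustrated with a small worked sketch (made-up numbers, not Prism's algorithm): a dataset where the overall ANOVA is not significant, yet an unrestricted Fisher's LSD comparison between two particular groups is. Here one group is shifted upward while five identical groups dilute the between-group signal:

```python
from scipy import stats

# Made-up data: group A is shifted upward; groups B through F are identical
A = [9.5, 10.5, 11.5, 12.5, 13.5]
others = [[7, 8, 9, 10, 11] for _ in range(5)]
all_groups = [A] + others
k = len(all_groups)
n = 5
N = k * n

# Overall one-way ANOVA
f_stat, p_anova = stats.f_oneway(*all_groups)

# Unrestricted Fisher's LSD comparison of A versus one other group,
# using MSwithin (the pooled scatter from the ANOVA table)
means = [sum(g) / n for g in all_groups]
ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(all_groups, means))
ms_within = ss_within / (N - k)
t = (means[0] - means[1]) / (ms_within * (1 / n + 1 / n)) ** 0.5
p_lsd = 2 * stats.t.sf(abs(t), N - k)

print(p_anova > 0.05, p_lsd < 0.05)  # omnibus not significant, but LSD is
```

Demanding a significant omnibus F before running this focused comparison would have hidden a difference the focused test can detect.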
There are two exceptions, but both are tests not offered by Prism.
• Scheffe's test (not available in Prism) is intertwined with the overall F test. If the overall ANOVA has a P value greater than 0.05, then no post test using Scheffe's method will find a significant difference.
• Restricted Fisher's Least Significant Difference (LSD) test (not available in Prism). In this form of the LSD test, the multiple comparisons tests are performed only if the overall ANOVA finds a statistically significant difference among group means. This restricted LSD test is outmoded and no longer recommended. The LSD test in Prism is unrestricted -- its results do not depend on the overall ANOVA P value, and it does not correct for multiple comparisons.
ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- do the data provide convincing evidence that the means are not all identical? -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and are answered by multiple comparison tests. In those cases, you can safely ignore the overall ANOVA results and jump right to the multiple comparisons results (although some people disagree with this practice).
Note that the multiple comparison calculations all use the mean square within (residual) value from the ANOVA table. So even if you don't care about the value of F or the overall P value, the post tests still require that the ANOVA table be computed.
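A minimal sketch (with made-up numbers) of why the ANOVA table is still needed: the standard error used by the post tests is built from MSwithin, which pools scatter across all the groups, not just the two being compared:

```python
# Made-up data for three groups (illustration only)
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
k = len(groups)
N = sum(len(g) for g in groups)
means = [sum(g) / len(g) for g in groups]

# MSwithin, exactly as it appears in the ANOVA table
ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
ms_within = ss_within / (N - k)

# Standard error for comparing the means of the first two groups,
# pooled across ALL groups via MSwithin
se = (ms_within * (1 / len(groups[0]) + 1 / len(groups[1]))) ** 0.5
print(ms_within, round(se, 4))
```

Because every pairwise comparison borrows this pooled scatter estimate, the ANOVA table must be computed even when F itself is ignored.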
1. J. Hsu, Multiple Comparisons: Theory and Methods. ISBN 0412982811.
2. S.E. Maxwell and H.D. Delaney, Designing Experiments and Analyzing Data: A Model Comparison Perspective, Second Edition. ISBN 978-0805837186.