Confidence intervals from multiple comparisons tests 

If you don't need to make a decision from each comparison, you don't need each comparison to be reported as "statistically significant" or not. In this situation, ignore the conclusions about statistical significance and the P values. Instead, focus on how large each difference is and how wide each confidence interval is. When interpreting confidence intervals, you need to decide how large a difference you would consider scientifically important, and how small a difference you would consider scientifically trivial. Use scientific judgment and common sense to answer these questions. Statistical calculations cannot help, as the answers depend on the context and goals of the experiment.
If you do want to focus on confidence intervals, then make sure you pick a multiple comparisons method that can report them: the methods of Tukey, Dunnett, Bonferroni, and Sidak.
Note that the confidence intervals reported with multiple comparisons tests (except for Fisher's LSD) adjust for multiple comparisons. Given the usual assumptions, you can be 95% confident that every true population value is contained within its corresponding confidence interval, which leaves a 5% chance that one or more of the intervals do not include the population value. They are sometimes called simultaneous confidence intervals.
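To see why simultaneous intervals are wider than unadjusted ones, consider the per-comparison confidence level needed to achieve 95% simultaneous coverage. The sketch below computes the standard Bonferroni and Sidak adjustments for a hypothetical family of six comparisons; the number six, and the use of z rather than t critical values, are assumptions for illustration only (real software uses t distributions with the appropriate error degrees of freedom).

```python
# Per-comparison confidence level needed for 95% simultaneous
# coverage across k comparisons (k = 6 is a hypothetical example).
from statistics import NormalDist

alpha = 0.05   # desired family-wise error rate
k = 6          # number of comparisons (assumed for illustration)

# Bonferroni: split alpha equally among the k comparisons.
alpha_bonf = alpha / k
# Sidak: slightly less conservative; exact for independent comparisons.
alpha_sidak = 1 - (1 - alpha) ** (1 / k)

# Each interval must be built at this higher per-comparison confidence
# level, so each interval is wider than an unadjusted 95% interval.
z_bonf = NormalDist().inv_cdf(1 - alpha_bonf / 2)
z_sidak = NormalDist().inv_cdf(1 - alpha_sidak / 2)

print(f"Bonferroni: per-comparison level {1 - alpha_bonf:.4%}, z = {z_bonf:.3f}")
print(f"Sidak:      per-comparison level {1 - alpha_sidak:.4%}, z = {z_sidak:.3f}")
```

Both adjusted critical values exceed the unadjusted value of about 1.96, which is why simultaneous intervals are wider than the per-comparison intervals that Fisher's LSD reports.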