If you choose the Bonferroni, Tukey, Dunnett or Dunn (nonparametric) multiple comparisons test, Prism can compute a multiplicity adjusted P value for each comparison. This is a choice on the Options tab of the ANOVA dialog. It is checked by default.

A separate adjusted P value is computed for each comparison in a family of comparisons.

The value of each adjusted P value depends on the entire family. The adjusted P value for one particular comparison would have a different value if there were a different number of comparisons or if the data in the other comparisons were changed.

Because the adjusted P value is determined by the entire family of comparisons, it cannot be compared to an individual P value computed by a t test or Fisher's Least Significant Difference test.

Choosing to compute adjusted P values won't change Prism's reporting of statistical significance. Instead, Prism will report an additional set of results: the adjusted P value for each comparison.

Multiplicity adjusted P values are not reported by most programs. If you choose to report adjusted P values, be sure to explain that they are multiplicity adjusted P values, and to give a reference. Avoid ambiguous terms such as "exact P values".

## What are multiplicity adjusted P values?

Before defining adjusted P values, let's review the meaning of a P value from a single comparison. The P value is the answer to two equivalent questions:

If the null hypothesis were true, what is the chance that random sampling would result in a difference this large or larger?

What is the smallest value of the significance threshold (alpha) at which this result would be statistically significant?

The latter form of the question is less familiar, but equivalent to the first. It leads to a definition of the adjusted P value, which is the answer to this question:

What is the smallest significance level, when applied to the entire family of comparisons, at which this particular comparison will be deemed statistically significant?

The idea is pretty simple. There is nothing special about significance levels of 0.05 or 0.01; you can set the significance level to any probability you want. The adjusted P value is the smallest familywise significance level at which a particular comparison would be declared statistically significant by the multiple comparisons test.
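To make the family dependence concrete, here is a minimal sketch, assuming the simple Bonferroni method (Tukey, Dunnett, and Dunn adjustments involve their own distributions and are not shown; the P values below are made up for illustration). For Bonferroni, each adjusted P value is the raw P value multiplied by the number of comparisons in the family, capped at 1:

```python
# Hypothetical raw P values from one family of comparisons
raw_p = [0.0108, 0.0400, 0.2500]
m = len(raw_p)  # family size

# Bonferroni multiplicity adjustment: p_adj = min(1, m * p).
# Each adjusted value depends on m, so adding or removing comparisons
# from the family changes every adjusted P value in it.
adj_p = [min(1.0, m * p) for p in raw_p]
print([round(p, 4) for p in adj_p])
```

Note that if the same three comparisons were part of a larger family, every adjusted value would grow, even though the raw P values are unchanged.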

Here is a simple way to think about it. Imagine you perform the multiple comparisons twice. The first time, you set the familywise significance level to 5%. The second time, you set it to 1%. If a particular comparison is statistically significant in the first analysis (5% significance level) but not in the second (1% significance level), its adjusted P value must be between 0.01 and 0.05, say 0.0323.
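That thought experiment can be carried to its limit: instead of trying just 5% and 1%, search over all familywise significance levels for the smallest one at which the comparison flips to significant. The sketch below does this by bisection, assuming a Bonferroni-style rule in which a comparison is significant when its raw P value is at most alpha / m (the function names and the raw P value 0.0108 are illustrative, not from Prism):

```python
def significant(raw_p, m, alpha):
    # Bonferroni rule: compare the raw P value to alpha / m,
    # where alpha is the familywise significance level.
    return raw_p <= alpha / m

def adjusted_p(raw_p, m, tol=1e-9):
    """Bisect for the smallest familywise alpha at which this
    comparison is declared significant; that alpha is, by
    definition, the multiplicity adjusted P value."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if significant(raw_p, m, mid):
            hi = mid   # significant here, so the threshold is at or below mid
        else:
            lo = mid   # not significant here, so the threshold is above mid
    return hi

# For Bonferroni the search recovers min(1, m * raw_p):
print(round(adjusted_p(0.0108, 3), 4))  # between 0.01 and 0.05
```

The bisection is overkill for Bonferroni, where the answer is simply min(1, m × p), but the same "smallest familywise alpha" search describes what adjusted P values mean for any multiple comparisons method.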