# Fixed bug: Multiple comparisons after two-way repeated measures ANOVA with three or more columns (fixed in 4.03/4.0c)

There is a bug in how GraphPad Prism 3 and 4 (up to 4.02 and 4.0b) compute the post tests following two-way repeated measures ANOVA.

**When does the bug occur and how does it affect the results?**

The bug only occurs when you have arranged the data so that each row represents a different time point and related (repeated measures) values are stacked into subcolumns. The post test calculations are correct when each column represents a different time point.

Depending on your data, the effect of the bug varies. In many cases it won't be noticeable. In some cases, the analysis mistakenly has too much power and finds 'significant' differences when it shouldn't; in others, it has too little power and misses 'significant' differences it should have found.

**How can I get the right answer?**

This bug has been fixed in GraphPad Prism 4.03 (Windows) and 4.0c (Mac).

There is also a workaround that lets you get the right post test results quite easily from affected versions of Prism 3 and 4. The trick is to do the ANOVA twice. First do repeated-measures ANOVA to get the overall results (but not post tests). Then do ordinary (not repeated measures) two-way ANOVA to get the post-test results. Why does this work? When a post test compares two groups at one time point, the fact that the experiment was done in a repeated-measures fashion is completely irrelevant: you are comparing one group of subjects treated one way with another group of subjects treated the other way. It doesn't matter whether you used those same subjects at other time points (repeated measures) or different subjects at other time points (ordinary ANOVA). So the post tests following ordinary ANOVA are exactly right. With the bug fixed in 4.03 and 4.0c, the post tests following ordinary ANOVA and repeated-measures ANOVA (when each row represents a different time point) are identical.

**Is two-way repeated measures ANOVA the right way to analyze my data?**

Before recalculating results, first consider whether repeated-measures two-way ANOVA is the best way to analyze your data. I think this method is overused by biologists when one of the factors is time or concentration.

**Detailed explanation**

The post tests compare the difference between two group means with the square root of an appropriate Mean Square (MS) value from the ANOVA table, taking into account the corresponding number of degrees of freedom (DF). Prism only offers post tests comparing two columns at a certain row, but the repeated measures factor can be by row or by column.

When each column represents a different time point, there is no bug. You are looking at one treatment (for each post test) and comparing two time points. This is sort of like a paired t test. In this case, the ANOVA table partitions the overall variation into that due to the treatment factor, that due to the repeated measurement factor, that due to variation between subjects, and remaining experimental (residual) error. The appropriate MS value to use in the post test is MS_{residual}, and the corresponding DF value is DF_{residual}. Prism has always handled this situation correctly. For this case (each column is a different time point), the post test correctly corrects for the repeated measurements.

When each row represents a different time point, the bug causes Prism to do the calculations incorrectly. For each post test, you are comparing two groups at one time point, so for that comparison the fact that the study was repeated measures really doesn't matter. At each time point, you have one group of subjects treated one way and another group of subjects treated the other way. While the ANOVA table separates variation between subjects from experimental (residual) error, this distinction is irrelevant to the post test. So the correct MS value to use in the post test combines both MS_{residual} and MS_{subject}. This MS value (called MS_{withincells}; it is not shown on the ANOVA table) can be computed from the sum-of-squares (SS) terms in the ANOVA table:

MS_{withincells} = (SS_{subject} + SS_{residual}) / (DF_{subject} + DF_{residual})

The appropriate degrees of freedom for the post test is the sum of DF_{subject} and DF_{residual}.

MS_{withincells} can also be defined in this equivalent form:

MS_{withincells} = (MS_{subject} * DF_{subject} + MS_{residual} * DF_{residual}) / (DF_{subject} + DF_{residual})

You can see that MS_{withincells} is a weighted average of MS_{subject} and MS_{residual}, so its value lies between MS_{subject} and MS_{residual}.
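As a sanity check, the two formulas can be evaluated side by side. The ANOVA-table numbers below are made up purely for illustration:

```python
# Illustrative (made-up) values as they might appear in a repeated-measures
# ANOVA table: 10 subjects, 4 time points.
SS_subject, DF_subject = 120.0, 9      # between-subjects variation
SS_residual, DF_residual = 60.0, 27    # residual (error) variation

MS_subject = SS_subject / DF_subject
MS_residual = SS_residual / DF_residual

# Form 1: directly from the sums of squares.
MS_withincells = (SS_subject + SS_residual) / (DF_subject + DF_residual)

# Form 2: weighted average of the two mean squares.
MS_withincells_2 = (MS_subject * DF_subject + MS_residual * DF_residual) \
                   / (DF_subject + DF_residual)

assert abs(MS_withincells - MS_withincells_2) < 1e-12

# As a weighted average, its value lies between the two mean squares.
assert MS_residual < MS_withincells < MS_subject

# Degrees of freedom for the post test.
DF_withincells = DF_subject + DF_residual

print(MS_withincells, DF_withincells)  # 5.0 36
```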

When you perform regular (not repeated measures) ANOVA, the MS_{residual} term accounts for all the variability not explained by either the row or column factor. So the MS_{residual} term in ordinary two-way ANOVA is identical to MS_{withincells} as defined above for repeated measures ANOVA.
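This identity can be checked numerically. The sketch below uses made-up data (two treatment groups, three time points, four subjects per group) and plain Python: it computes the ordinary-ANOVA within-cell sum of squares directly, partitions the same variation into its subject and residual pieces as repeated-measures ANOVA does, and confirms that MS_{withincells} equals the ordinary residual mean square.

```python
from statistics import mean

# Hypothetical data, rows-as-time layout: 2 treatment groups,
# 3 time points, 4 subjects per group.  data[group][subject][time]
data = [
    [[3.1, 4.0, 5.2], [2.8, 3.9, 4.9], [3.5, 4.4, 5.8], [2.9, 3.6, 4.7]],
    [[4.2, 5.1, 6.3], [3.9, 5.0, 6.0], [4.6, 5.5, 6.9], [4.0, 4.8, 5.9]],
]
a = len(data)          # treatment levels
n = len(data[0])       # subjects per group
b = len(data[0][0])    # time points

# Ordinary two-way ANOVA residual: deviations from each
# (treatment, time) cell mean, i.e. the within-cell sum of squares.
SS_within = 0.0
for g in range(a):
    for t in range(b):
        cell = [data[g][s][t] for s in range(n)]
        m = mean(cell)
        SS_within += sum((y - m) ** 2 for y in cell)

# Repeated-measures partition of that same variation.
SS_subject = 0.0   # between subjects (nested in treatment groups)
SS_residual = 0.0  # time-by-subject interaction within groups
for g in range(a):
    group_mean = mean(y for s in range(n) for y in data[g][s])
    cell_means = [mean(data[g][s][t] for s in range(n)) for t in range(b)]
    for s in range(n):
        subj_mean = mean(data[g][s])
        SS_subject += b * (subj_mean - group_mean) ** 2
        for t in range(b):
            dev = data[g][s][t] - subj_mean - cell_means[t] + group_mean
            SS_residual += dev ** 2

DF_subject = a * (n - 1)
DF_residual = a * (n - 1) * (b - 1)
DF_within = a * b * (n - 1)

# The two partitions describe the same variation...
assert abs(SS_subject + SS_residual - SS_within) < 1e-9
assert DF_subject + DF_residual == DF_within

# ...so MS_withincells equals the ordinary two-way ANOVA residual MS.
MS_withincells = (SS_subject + SS_residual) / (DF_subject + DF_residual)
MS_ordinary_residual = SS_within / DF_within
assert abs(MS_withincells - MS_ordinary_residual) < 1e-9
```

The assertions hold for any balanced data set, not just these numbers, because the subject and residual deviations are orthogonal pieces of the within-cell deviation.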

The bug is that Prism 3 and 4 (up to 4.02 and 4.0b) use the MS_{subject} and DF_{subject} terms when performing these post tests.

When MS_{subject} and MS_{residual} have similar values, the bug has very little impact. When MS_{subject} and MS_{residual} have very different values, the effect of the bug on the post tests can be profound. Since MS_{subject} can either be larger or smaller than MS_{residual}, the effect of the bug can go in either direction.
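To see how the error propagates, the arithmetic below (with made-up ANOVA-table numbers) computes the standard two-group post-test statistic, t = difference / sqrt(MS * (1/n1 + 1/n2)), once with the buggy MS_{subject} term and once with the correct MS_{withincells}:

```python
import math

def post_test_t(mean_diff, MS, n1, n2):
    """Two-group post-test t using a pooled mean square from the ANOVA table."""
    return mean_diff / math.sqrt(MS * (1.0 / n1 + 1.0 / n2))

# Hypothetical table: large between-subject scatter, small residual scatter.
MS_subject, DF_subject = 40.0, 9
MS_residual, DF_residual = 4.0, 27
MS_withincells = (MS_subject * DF_subject + MS_residual * DF_residual) \
                 / (DF_subject + DF_residual)   # 13.0

diff, n = 5.0, 5  # observed difference between two groups at one time point

t_buggy = post_test_t(diff, MS_subject, n, n)        # wrong MS term
t_correct = post_test_t(diff, MS_withincells, n, n)  # correct MS term

# Here MS_subject > MS_withincells, so the buggy t is too small
# (too little power).
assert t_buggy < t_correct
```

Swapping the hypothetical values so that MS_{subject} < MS_{residual} reverses the inequality, and the buggy post test becomes too liberal instead.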

**References:**

G Keppel and TD Wickens, *Design and Analysis: A Researcher's Handbook*, page 452.

SE Maxwell and HD Delaney, *Designing Experiments and Analyzing Data*, page 604.