
Correct for multiple comparisons by controlling the False Discovery Rate

Controlling the False Discovery Rate (FDR) is an effective way to cope with multiple comparisons. Prism 6 offered this approach only as part of the multiple t test analysis. Prism now offers it in three places.

As a follow-up to ANOVA

Prism has long offered multiple comparisons tests after ANOVA to control the Type I error rate for the family of comparisons. Now Prism 7 lets you use an alternative strategy for multiple comparisons following ANOVA (one-, two- or three-way): controlling the False Discovery Rate (FDR).
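The same strategy can be sketched outside Prism. The Python snippet below is an illustration only, not Prism's own algorithm; the group names and measurements are invented. It runs a one-way ANOVA with SciPy and then applies the Benjamini-Hochberg FDR procedure from statsmodels to the pairwise comparison P values:

# Sketch: one-way ANOVA followed by pairwise t tests whose P values
# are corrected with the Benjamini-Hochberg FDR procedure.
# The group names and data are invented for illustration.
from itertools import combinations

from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {
    "control":   [4.1, 3.8, 4.4, 4.0, 4.2],
    "treated_a": [5.0, 5.3, 4.9, 5.2, 5.1],
    "treated_b": [4.3, 4.5, 4.1, 4.6, 4.4],
}

# Overall one-way ANOVA
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {anova_p:.4f}")

# All pairwise t tests, then control the FDR across that family
pairs = list(combinations(groups, 2))
pvals = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
discovery, q_vals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for (a, b), p, q, flag in zip(pairs, pvals, q_vals, discovery):
    print(f"{a} vs {b}: P = {p:.4f}, q = {q:.4f}, discovery = {flag}")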


Multiple comparisons for P values computed elsewhere

Prism offers an analysis for a stack of P values computed elsewhere. You enter a set of P values into a column and choose this analysis. Prism graphs the rank of each P value against the P value itself, which is a standard way to visualize the distribution of a set of P values.

Which P values are small enough that the corresponding findings should be flagged as "statistically significant"? Choose to control either the False Discovery Rate or the familywise Type I error rate using the method of Bonferroni, Sidak, or Holm.
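As a rough illustration of what this analysis does (outside Prism, with invented P values), the sketch below ranks a stack of P values and then flags them using either an FDR method or one of the familywise methods, via the multipletests function in statsmodels:

# Sketch: rank a stack of P values and flag them under different
# correction schemes. The P values below are invented.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.0002, 0.004, 0.019, 0.030, 0.041, 0.22, 0.49, 0.74])

# Rank of each P value vs. the P value itself (the quantity Prism plots)
order = np.argsort(pvals)
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(pvals) + 1)
for p, r in sorted(zip(pvals, ranks)):
    print(f"rank {r}: P = {p}")

# Flag "discoveries" (FDR) or "significant" results (familywise methods)
for method in ("fdr_bh", "bonferroni", "sidak", "holm"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, reject)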



Multiple t tests, one per row

Prism 6 introduced an analysis to run multiple t tests, one per row. If you choose the method that controls the False Discovery Rate, Prism 7 also reports q values (also called adjusted P values) for each comparison.
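The sketch below shows the same idea in Python, using made-up data with two groups of five replicates per row; the Benjamini-Hochberg adjusted P values from statsmodels stand in for the q values Prism reports:

# Sketch: one unpaired t test per row, then Benjamini-Hochberg
# adjusted P values ("q values"). The data are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_rows = 20
group_a = rng.normal(0.0, 1.0, size=(n_rows, 5))
group_b = rng.normal(0.5, 1.0, size=(n_rows, 5))  # true shift of 0.5

# One unpaired t test per row
pvals = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]

# Benjamini-Hochberg adjusted P values, often reported as q values
discovery, q_vals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for i, (p, q, flag) in enumerate(zip(pvals, q_vals, discovery), start=1):
    print(f"row {i:2d}: P = {p:.4f}, q = {q:.4f}, discovery = {flag}")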


Three algorithms for using the FDR method

Whenever you choose the FDR approach to decide which P values are small enough to be called a "discovery", Prism lets you pick one of three methods for controlling the FDR.
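As a rough analogue outside Prism (the method labels below are statsmodels' names for FDR-controlling procedures, not necessarily the same three choices Prism presents), the sketch compares how different FDR algorithms treat the same invented set of P values:

# Illustration: compare several FDR-controlling procedures from
# statsmodels on the same invented P values.
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.012, 0.041, 0.09, 0.35, 0.61])

for method in ("fdr_bh", "fdr_by", "fdr_tsbky"):
    reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: discoveries = {reject.sum()}, q = {np.round(adjusted, 4)}")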


