Defining the FDR

Here again is the table from the previous page predicting the results from many comparisons. The only difference is that I changed the term "statistically significant" to "discovery", because that term is more commonly used with the false discovery rate approach.


"Discovery"

"Not a discovery"

Total

No difference.

Null hypothesis true

A

B

A+B

A difference truly exists

C

D

C+D

Total

A+C

B+D

A+B+C+D

The top row represents the results of comparisons where the null hypothesis is true -- the treatment really doesn't work. Nonetheless, some of those comparisons will, by chance, yield a P value small enough for the comparison to be deemed a "discovery".

The second row shows the results of comparisons where a difference truly exists. Even so, not every experiment will yield a P value small enough to call the finding a "discovery".

A, B, C and D represent numbers of comparisons, so the sum of A+B+C+D equals the total number of comparisons you are making.

Of course, you can only make this table in theory. If you collected actual data, you'd never know whether the null hypothesis is true or not, so you could not assign results to row 1 or row 2.
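In a simulation, though, you do know the truth, so the table can be filled in. Here is a minimal Python sketch (not part of the Prism documentation; the number of comparisons, sample sizes, effect size, and the assumption that 10% of the comparisons reflect a real effect are all arbitrary choices for illustration) that runs many t tests and tallies A, B, C, and D:

```python
# Simulate many two-group comparisons where we know which nulls are true,
# so we can tally the table's cells. All parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_comparisons, n_per_group, alpha = 1000, 20, 0.05
effect = 1.0  # true difference between group means, in SD units

A = B = C = D = 0
for i in range(n_comparisons):
    null_true = i >= n_comparisons // 10        # only 10% of effects are real
    shift = 0.0 if null_true else effect
    group1 = rng.normal(0.0, 1.0, n_per_group)
    group2 = rng.normal(shift, 1.0, n_per_group)
    discovery = stats.ttest_ind(group1, group2).pvalue < alpha
    if null_true:                               # top row of the table
        A, B = A + discovery, B + (not discovery)
    else:                                       # second row of the table
        C, D = C + discovery, D + (not discovery)

print(f"A={A}  B={B}  C={C}  D={D}  (total {A + B + C + D})")
```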

The usual approach to statistical significance and multiple comparisons asks the question:

If the null hypothesis is true, what is the chance of getting "statistically significant" results?

The False Discovery Rate (FDR) answers a different question:

If a comparison is a "discovery", what is the chance that the null hypothesis is true?

In the table above, the false discovery rate is the ratio A/(A+C).
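To see how different the answers to the two questions can be, here is a quick numeric illustration with made-up counts (chosen so that 900 of 1000 null hypotheses are true, the significance threshold is 5%, and the power to detect a real difference is 80%):

```python
# Hypothetical counts for the table: 900 true nulls, 100 real differences.
A, B, C, D = 45, 855, 80, 20

print(f"P(discovery | null true) = {A / (A + B):.0%}")  # the usual question: 5%
print(f"P(null true | discovery) = {A / (A + C):.0%}")  # the FDR question: 36%
```

Even though only 5% of the true-null comparisons reach significance, more than a third of the "discoveries" in this scenario are false.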

Controlling the FDR with Q

When dealing with multiple comparisons, you may want to choose an acceptable false discovery rate (usually called Q) and then use that value when deciding which comparisons are "discoveries" and which are not, with the intention that the actual false discovery rate will be no higher than Q.

If you are only making a single comparison, you can't do this without defining the prior odds and using Bayesian reasoning. But if you have many comparisons, simple methods let you control the FDR approximately. You set the desired value of Q, and the FDR method decides whether each P value is small enough to be designated a "discovery". If you set Q to 10%, you expect (in the long run) at least about 90% of the discoveries to reflect true differences, and no more than 10% to be false positives. In other words, you expect A/(A+C) to be no more than 10% (the value you set for Q).
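One widely used method of this kind is the Benjamini-Hochberg step-up procedure. The sketch below implements it, assuming independent comparisons; take it as an illustration of the approach rather than the exact algorithm any particular program uses:

```python
# Benjamini-Hochberg step-up procedure: given a set of P values and a
# desired FDR Q, decide which comparisons are "discoveries".
import numpy as np

def bh_discoveries(pvalues, Q=0.10):
    """Return a boolean array, True where the comparison is a discovery."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                    # ranks: smallest P value first
    cutoffs = Q * np.arange(1, m + 1) / m    # rank-dependent cutoffs k*Q/m
    below = p[order] <= cutoffs
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest rank meeting its cutoff
        discoveries[order[: k + 1]] = True   # all smaller P values qualify too
    return discoveries

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.530]
print(bh_discoveries(pvals, Q=0.10))  # first six are discoveries at Q = 10%
```

Note the "step-up" logic: 0.039 fails its own rank's cutoff, yet it (and everything up to 0.060, which meets its cutoff) still counts as a discovery, because the rule admits every P value at or below the largest one that meets its cutoff.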

q values or adjusted P values

There are two ways to think about the false discovery rate.

You enter a value for Q (note the uppercase: the desired false discovery rate), and the program tells you which comparisons are discoveries and which are not. In Prism, you enter Q as a percentage.

For each comparison, the program computes a q value (note the lowercase). This value is also called an adjusted P value. Think of it this way: if you had set Q to this value, the comparison you are looking at now would be right at the border of being a discovery or not. Prism reports q as a decimal fraction.
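To make that definition concrete, here is a sketch of how such adjusted P values can be computed with the Benjamini-Hochberg procedure used above; each q value is the smallest Q at which that comparison would just qualify as a discovery (the example P values are made up):

```python
# Compute BH-adjusted P values ("q values" in the sense described above).
import numpy as np

def bh_qvalues(pvalues):
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    raw = p[order] * m / np.arange(1, m + 1)     # m * p / rank
    # A q value may not exceed the q value of any larger P value, so take
    # a running minimum from the largest P value downward.
    q_sorted = np.minimum.accumulate(raw[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0.0, 1.0)
    return q

print(np.round(bh_qvalues([0.001, 0.008, 0.039, 0.041, 0.042, 0.060]), 3))
# -> [0.006 0.024 0.05  0.05  0.05  0.06 ]
```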
