# Data Analysis Blog

### Testing for equivalence

In most experimental situations, your goal is to show that one treatment is better than another. But in some situations, your goal is just the opposite: to show that one treatment is indistinguishable from another. You are not trying to prove that one treatment makes a statistically significant difference in the outcome. Rather, you are trying to show that the two treatments are essentially equivalent, and that any difference between them is too small to matter in practice.

It is tempting to simply run a standard statistical test (e.g. a t test if the outcome is on a continuous scale, Fisher's test if the outcome is binary) and interpret the result at face value. If the difference is statistically significant, it seems clear that the two treatments are not equivalent. And if the difference is not statistically significant, it seems to make sense to conclude that the two treatments are equivalent.

But that approach is wrong, and leads to invalid conclusions. A high P value does not prove the treatments are equivalent; it may simply mean the experiment was too small to detect a real difference. Absence of evidence of a difference is not evidence of equivalence.
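A quick numerical sketch illustrates the trap. The numbers below are hypothetical (a true 4-unit difference between groups, only 10 subjects per arm), and the test uses a simple normal approximation rather than an exact t test, but the point carries over: the P value is far from significant, yet the confidence interval shows that large differences remain entirely plausible.

```python
from statistics import NormalDist

# Hypothetical small trial: a true 4-unit difference in group means,
# but only 10 subjects per arm (normal approximation for simplicity).
mean1, mean2, sd, n = 14.0, 10.0, 8.0, 10
diff = mean1 - mean2
se = (2 * sd**2 / n) ** 0.5                 # standard error of the difference
z = diff / se
p = 2 * (1 - NormalDist().cdf(z))           # two-sided P value
lo, hi = diff - 1.96 * se, diff + 1.96 * se # 95% CI for the difference

print(f"P = {p:.2f}")                       # P = 0.26: "not significant"
print(f"95% CI: ({lo:.1f}, {hi:.1f})")      # 95% CI: (-3.0, 11.0)
```

The P value (0.26) would tempt you to declare the treatments equivalent, but the confidence interval says the true difference could plausibly be anywhere from -3 to +11 units, which is hardly proof of equivalence.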

To correctly interpret experiments testing for equivalence: look at the confidence intervals, use plenty of common sense, and don't bother with P values or statements of statistical significance. I've included a short explanation in the Prism 6 Statistics guide.
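The confidence-interval approach can be sketched in a few lines of code. This is a minimal illustration, not a definitive implementation: the function name and the numbers are made up, the equivalence margin must be chosen on scientific grounds before the experiment, and a normal approximation stands in for an exact t-based interval. The logic (declare equivalence only if the entire confidence interval lies inside the margin) is the same one used by the two-one-sided-tests (TOST) procedure.

```python
from statistics import NormalDist

def equivalence_by_ci(mean1, mean2, sd1, sd2, n1, n2, margin, level=0.90):
    """Declare equivalence only if the whole CI for the difference in
    means lies inside (-margin, +margin).  Uses a normal approximation,
    which is adequate for reasonably large samples."""
    diff = mean1 - mean2
    se = (sd1**2 / n1 + sd2**2 / n2) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)  # 1.645 for a 90% CI
    lo, hi = diff - z * se, diff + z * se
    return (lo, hi), (-margin < lo and hi < margin)

# Hypothetical example: two treatments, 200 subjects each, and a
# pre-specified margin of 5 units as the largest unimportant difference.
(lo, hi), equivalent = equivalence_by_ci(120.1, 119.4, 8.0, 8.0, 200, 200,
                                         margin=5)
print(f"90% CI for the difference: ({lo:.2f}, {hi:.2f})")
print(f"Equivalent within the margin: {equivalent}")
```

Here the 90% interval (about -0.62 to 2.02) lies entirely inside the +/- 5 unit margin, so the data support equivalence. Had the interval crossed the margin, the only honest conclusion would be that the data are inconclusive.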