Coping with outliers
When analyzing data, you'll sometimes find that one value is far from the others. Such a value is called an "outlier", a term that is usually not defined rigorously. When you encounter an outlier, you may be tempted to delete it from the analyses. But first stop, take a deep breath, and think.
Is the assumption of a Gaussian distribution reasonable?
Most outlier tests assume that the data (all but the outlier) were sampled from a Gaussian distribution. If this assumption is not true, the method may flag as "outliers" values that actually belong to the same distribution as the others. This is especially a problem with lognormal distributions, whose long right tail routinely produces high values that look like outliers on a linear scale.
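To see why the Gaussian assumption matters, here is a minimal sketch that applies a Grubbs-style calculation (largest deviation from the mean, in SD units) to hypothetical data with a lognormal-like long right tail, first on the raw scale and then after a log transform. The data are made up, and the critical value 2.126 is the standard published two-sided Grubbs value for N = 8 at alpha = 0.05:

```python
import math
import statistics

def max_deviation_in_sds(values):
    """Largest |value - mean| divided by the sample SD (the Grubbs statistic)."""
    mean = statistics.mean(values)
    return max(abs(x - mean) for x in values) / statistics.stdev(values)

# Hypothetical data with the long right tail typical of a lognormal distribution.
data = [0.30, 0.55, 0.74, 1.00, 1.22, 1.65, 2.46, 4.95]
critical = 2.126  # published two-sided Grubbs critical value, N = 8, alpha = 0.05

# On the raw scale the largest value is flagged as an "outlier" ...
print(max_deviation_in_sds(data) > critical)                         # True
# ... but after log-transforming, no value stands out at all.
print(max_deviation_in_sds([math.log(x) for x in data]) > critical)  # False
```

The largest raw value looks extreme only because the test assumed symmetry; on the log scale, where the distribution is roughly Gaussian, it is unremarkable.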
Is the number correct?
Was the number entered into the computer incorrectly (maybe two digits were transposed)? If so, fix the problem.
Were there any experimental problems with that value? Did that tube or plate or filter look funny? If so, you have justification to exclude the value from that tube without performing any calculations.
Is the outlier caused by biological diversity?
If each value comes from a different person or animal, the outlier may be a correct value. It is an outlier not because of an experimental mistake, but rather because that individual may be different from the others. This may be the most exciting finding in your data!
After answering no to those three questions...
After answering no to those three questions, you have to decide what to do with the outlier. There are two possibilities.
- One possibility is that the outlier was due to chance. In this case, you should keep the value in your analyses. The value came from the same population as the other values, and so should be included.
- The other possibility is that the outlier was due to a mistake: bad pipetting, a voltage spike, holes in filters, etc. Since including an erroneous value in your analyses will give invalid results, you should remove it. In other words, the value comes from a different population than the others and is misleading.
The problem, of course, is that you can never be sure which of these possibilities is correct.
Clearly, no mathematical calculation can tell you for sure whether the outlier came from the same population as the others or from a different one. But statistical calculations can answer this question: if the values really were all sampled from a Gaussian distribution, what is the chance that you'd find one value as far from the others as you observed? If this probability is small, you can conclude that the outlier is likely to be an erroneous value, and you have justification to exclude it from your analyses.
Statisticians have devised several methods for detecting outliers. All of them follow the same three steps. First, quantify how far the outlier is from the other values: this can be the difference between the outlier and the mean of all points, between the outlier and the mean of the remaining values, or between the outlier and the next closest value. Next, standardize that difference by dividing by some measure of scatter, such as the SD of all values, the SD of the remaining values, or the range of the data. Finally, compute a P value answering this question: if all the values were really sampled from a Gaussian population, what is the chance of randomly obtaining an outlier so far from the other values? If the P value is small, you conclude that the deviation of the outlier from the other values is statistically significant.
Grubbs' method for assessing outliers is particularly easy to understand. It is also called the ESD method (extreme studentized deviate).
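As a sketch of how Grubbs' test works: compute the distance of the most extreme value from the sample mean, divide by the sample SD, and compare the result to a critical value. The data below are hypothetical, and the critical values are the standard published two-sided values for alpha = 0.05:

```python
import statistics

# Critical values of G (two-sided, alpha = 0.05) from published Grubbs tables.
GRUBBS_CRITICAL_05 = {5: 1.715, 6: 1.887, 7: 2.020, 8: 2.126, 9: 2.215, 10: 2.290}

def grubbs_statistic(values):
    """G = distance of the most extreme value from the mean, in SD units."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD (n - 1 denominator)
    return max(abs(x - mean) for x in values) / sd

# Hypothetical replicate measurements; 15.0 looks suspicious.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 15.0]
g = grubbs_statistic(data)
is_outlier = g > GRUBBS_CRITICAL_05[len(data)]
print(round(g, 3), is_outlier)  # 2.03 True
```

Here G is about 2.03, which exceeds the critical value of 1.887 for N = 6, so the test flags 15.0 as a significant outlier at the 5% level.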
The most that Grubbs' test (or any outlier test) can do is tell you that a value is unlikely to have come from the same Gaussian population as the other values in the group. You then need to decide what to do with that value. I would recommend removing significant outliers from your calculations in situations where experimental mistakes are common, so long as biological variability is not a possibility and you document your decision. Others feel that you should never remove an outlier unless you noticed an experimental problem.