
How it works: ROUT method


The basics of ROUT

The ROUT method was developed to identify outliers during nonlinear regression. Learn more about the ROUT method.

Briefly, it first fits a model to the data using a robust method where outliers have little impact. Then it uses a new outlier detection method, based on the false discovery rate, to decide which points are far enough from the prediction of the model to be called outliers.

When you ask Prism to detect outliers in a stack of column data, it simply adapts this method. It considers the values you entered to be Y values and fits the model Y = M, where M is a robust mean. (If you wanted to do this with Prism's nonlinear regression analysis, you would need to assign arbitrary X values to each row and then fit the model Y = X*0 + M.)
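To make the idea concrete, the sketch below (not Prism's code) shows one way to carry out the robust fit of Y = M on a column of values: the median stands in for the robust mean, and a robust analog of the standard deviation is taken from the 68.27th percentile of the absolute residuals. The estimators Prism actually uses may differ in detail.

```python
import numpy as np

def robust_fit_column(y):
    """Robustly fit the model Y = M to a column of values.

    A minimal sketch: the median serves as the robust mean M, and the
    scale (a robust analog of the standard deviation) is taken from the
    68.27th percentile of the absolute residuals. Prism's actual robust
    fit may use different estimators.
    """
    y = np.asarray(y, dtype=float)
    m = np.median(y)                                # robust mean M
    resid = y - m                                   # residuals from the fit
    rsdr = np.percentile(np.abs(resid), 68.27)      # robust scale estimate
    return m, resid, rsdr

# Example: a small column of values with one suspiciously large entry
m, resid, rsdr = robust_fit_column([3.1, 2.9, 3.0, 3.2, 9.5])
```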

This method can detect any number of outliers (up to 30% of the sample size).

Prism can perform the ROUT test with as few as three values in a data set.

What is Q?

The ROUT method is based on the False Discovery Rate (FDR), so you specify Q, which is the maximum desired FDR. The interpretation of Q depends on whether there are any outliers in the data set.

When there are no outliers (and the distribution is entirely Gaussian), Q is very similar to alpha. Assuming all the data come from a Gaussian distribution, Q is the chance of (falsely) identifying one or more outliers.

When there are outliers in the data, Q is the maximum desired false discovery rate. If you set Q to 1%, you are aiming for no more than 1% of the identified outliers to be false (points that are really just from the tail of the Gaussian distribution) and at least 99% to be actual outliers (from a different distribution).
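The FDR calculation itself is not spelled out on this page. As a rough illustration only, the sketch below applies a generic Benjamini-Hochberg step-up rule to per-point P values computed from the residuals of the robust fit sketched above. The published ROUT procedure uses its own FDR-based thresholds, so treat this as an analogy, not Prism's algorithm.

```python
import numpy as np
from scipy import stats

def flag_outliers_fdr(resid, rsdr, df, Q=0.01):
    """Flag points with implausibly large residuals while controlling the FDR.

    A hedged sketch: each residual is turned into a t-like ratio and a
    two-tailed P value, then the Benjamini-Hochberg step-up rule is applied
    with the desired FDR Q. The actual ROUT thresholds differ in detail.
    """
    resid = np.asarray(resid, dtype=float)
    t_ratio = np.abs(resid) / rsdr                  # distance of each point, in robust SDs
    p = 2 * stats.t.sf(t_ratio, df)                 # two-tailed P value per point
    n = len(p)
    order = np.argsort(p)                           # smallest P values first
    bh_thresholds = Q * np.arange(1, n + 1) / n     # Benjamini-Hochberg cutoffs
    passed = p[order] <= bh_thresholds
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    is_outlier = np.zeros(n, dtype=bool)
    is_outlier[order[:k]] = True                    # the k most extreme points are flagged
    return is_outlier

# Example, reusing the robust fit sketch above (df = N - 1 for the one-parameter model Y = M)
# is_outlier = flag_outliers_fdr(resid, rsdr, df=len(resid) - 1, Q=0.01)
```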

Comparing ROUT to Grubbs' method

I performed simulations to compare the Grubbs' and ROUT methods of detecting outliers. Briefly, the data were sampled from a Gaussian distribution. In most cases, outliers (drawn from a uniform distribution with specified limits) were added. Each experimental design was simulated 25,000 times, and I tabulated the number of simulations with zero, one, two, or more than two outliers.

When there are no outliers, the ROUT and Grubbs' tests perform almost identically. The value of Q specified for the ROUT method is equivalent to the value of alpha you set for the Grubbs' test.

When there is a single outlier, the Grubbs' test is slightly better able to detect it. The ROUT method has both more false negatives and more false positives. In other words, it is slightly more likely to miss the outlier, and also more likely to find two outliers even when the simulation only included one. This is not surprising, as Grubbs' test was designed to detect a single outlier. While the difference between the two methods is clear, it is not substantial.

When there are two outliers in a small data set, the ROUT test does a much better job. The iterative Grubbs' test is subject to masking, while the ROUT test is not. Whether or not masking is an issue depends on how large the sample is and how far the outliers are from the mean of the other values. In situations where masking is a real possibility, the ROUT test works much better than Grubbs' test. For example, when n=10 with two outliers, the Grubbs' test never found both outliers and missed both in 98.8% of the simulations (in the remaining 1.2% of simulations, the Grubbs' test found one of the two outliers). In contrast, the ROUT method identified both outliers in 92.8% of those simulations, and missed both in only 6% of simulations.
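The exact simulation code is not reproduced here, but the sketch below gives the flavor of this design under assumed, illustrative parameters (10 values total, two of them drawn from a uniform distribution; the limits, number of simulations, and alpha are not the ones used above). It runs the iterative Grubbs' test on each simulated data set and tabulates how many outliers it finds, which makes the masking problem easy to see.

```python
import numpy as np
from scipy import stats

def grubbs_once(y, alpha=0.05):
    """One pass of Grubbs' test: return the index of the most extreme value
    if it qualifies as an outlier at the given alpha, otherwise None."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if n < 3:
        return None
    g = np.abs(y - y.mean()) / y.std(ddof=1)        # Grubbs' statistic for each point
    i = int(np.argmax(g))
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return i if g[i] > g_crit else None

# Assumed design: 8 Gaussian values plus 2 uniform "outliers" (10 values total).
rng = np.random.default_rng(0)
counts = []
for _ in range(1000):
    y = np.concatenate([rng.normal(0.0, 1.0, 8), rng.uniform(4.0, 6.0, 2)])
    found = 0
    while True:                                     # iterative Grubbs': remove and retest
        idx = grubbs_once(y)
        if idx is None:
            break
        y = np.delete(y, idx)
        found += 1
    counts.append(found)
print("outliers found per simulated data set:", np.bincount(counts))
```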

Summary:

Grubbs' test is slightly better than the ROUT method for the task it was designed for: detecting a single outlier from a Gaussian distribution.

The ROUT method is much better than the iterative Grubbs' test at detecting two outliers in some situations.

Reference

Motulsky HJ and Brown RE. Detecting outliers when fitting data with nonlinear regression – a new method based on robust nonlinear regression and the false discovery rate. BMC Bioinformatics 2006, 7:123. Download from http://www.biomedcentral.com/1471-2105/7/123.

 
