
What is stepwise regression?

Many multiple regression programs can choose variables automatically. You give the program data on lots of variables, and it decides which ones to actually use. The appeal of automatic variable selection is clear. You just put all the data into the program, and it makes all the decisions for you. Why stepwise? Because the automatic procedure fits several models in steps, adding (or removing) variables from the model to find the "best" one.

Prism does not offer automatic variable selection.

What is the problem with stepwise regression?

The problem is multiple comparisons. How many models does a multiple regression program compare when given data with k independent variables and instructed to use the all-subsets method to compare the fit of every possible model? Each variable can be included or excluded from the final model, so the program will compare 2^k models. For example, if the investigator starts with 20 variables, then automatic variable selection compares 2^20 models (more than a million), even before considering interactions.
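You can verify the count with a quick sketch using only Python's standard library (this is an illustration, not a Prism feature):

```python
# Counting the candidate models for k = 20 variables.
import math

k = 20
# Each of the k variables is either included or excluded, so there
# are 2^k possible subsets (models):
print(2 ** k)  # 1048576, i.e. more than a million

# Equivalently, summing the number of subsets of each size m gives
# the same total, since the C(k, m) counts sum to 2^k:
print(sum(math.comb(k, m) for m in range(k + 1)))  # 1048576
```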

When you read a paper presenting results of multiple regression, you may not even know the number of variables with which the investigator started. Peter Flom (1) explains why this ignorance makes it impossible to interpret the results of multiple regression with stepwise variable selection:

If you toss a coin ten times and get ten heads, then you are pretty sure that something weird is going on. You can quantify exactly how unlikely such an event is, given that the probability of heads on any one toss is 0.5. If you have 10 people each toss a coin ten times, and one of them gets 10 heads, you are less suspicious, but you can still quantify the likelihood. But if you have a bunch of friends (you don’t count them) toss coins some number of times (they don’t tell you how many) and someone gets 10 heads in a row, you don’t even know how suspicious to be. That’s stepwise.
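To make the analogy concrete, the first two probabilities can be worked out exactly. Here is a small Python sketch (the numbers come from the coin-toss setup above, not from Flom's paper):

```python
# Probability calculations for the coin-toss analogy.
p_ten_heads = 0.5 ** 10
print(p_ten_heads)          # 0.0009765625: ten heads from one person is very unlikely

n_people = 10
p_someone = 1 - (1 - p_ten_heads) ** n_people
print(round(p_someone, 4))  # ~0.0097: across ten people, much less surprising

# With an unknown number of friends tossing an unknown number of times,
# no such calculation is possible -- and that is the stepwise situation.
```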

The consequences of automatic variable selection are pervasive and serious (1,2); the short simulation after this list demonstrates them:

The final model fits too well. R² is too high.

The best-fit parameter values are too far from zero. This makes sense: because variables whose estimated coefficients were close to zero have been eliminated, the coefficients that remain tend to have absolute values that are larger than they should be.

The confidence intervals are too narrow, so you think you know the parameter values with more precision than is warranted.

When you test whether the parameters are statistically significant, the P values are too small and cannot be interpreted.
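To see these consequences directly, here is a minimal simulation sketch (assuming NumPy and statsmodels are available; this is not a Prism feature) that runs all-subsets selection on pure noise and reports the winning model's R² and P values:

```python
# All-subsets selection applied to pure noise: the selected model still
# reports a respectable R^2 and small P values.
import itertools

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k = 50, 10                    # 50 observations, 10 candidate predictors
X = rng.standard_normal((n, k))  # predictors: pure noise
y = rng.standard_normal(n)       # response: pure noise, unrelated to X

# Fit every one of the 2^k - 1 non-empty models and keep the one
# with the lowest AIC (a common selection criterion).
best_fit, best_subset = None, None
for size in range(1, k + 1):
    for subset in itertools.combinations(range(k), size):
        fit = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
        if best_fit is None or fit.aic < best_fit.aic:
            best_fit, best_subset = fit, subset

print("selected variables:", best_subset)
print("R^2:", round(best_fit.rsquared, 3))
print("P values:", np.round(best_fit.pvalues[1:], 4))
# Even though y has no relationship to X, the selected model typically
# shows a nonzero R^2 and one or more "significant" P values -- the
# inflation described in the list above.
```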

References

1. Flom, P. L., & Cassell, D. L. (2007). Stopping stepwise: Why stepwise and similar selection methods are bad, and what you should use. NorthEast SAS Users Group.

2. Harrell, F. (2015). Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis (2nd ed.). Springer. ISBN: 978-3319194240.
