East Carolina University
Department of Psychology


Stepwise Regression  =  Voodoo Regression

    It is pretty cool, but not necessarily very useful, and just plain dangerous in the hands of somebody not well educated in multiple regression techniques, including the effects of collinearity, redundancy, and suppression.  Here are some quotes from others, collected from the now-departed STAT-L list.


Derksen, S., & Keselman, H. J. (1992). Backward, forward and stepwise automated subset selection algorithms: Frequency of obtaining authentic and noise variables. British Journal of Mathematical and Statistical Psychology, 45, 265-282.

Date:         Fri, 5 Mar 1993 18:26:18 GMT
Sender:       STATISTICAL CONSULTING <STAT-L@MCGILL1.BITNET>
From:         Steve Blinkhorn <steve@PRD.CO.UK>
Subject:      Re: Stepwise Procedure....
 
    A brief abstract of the BJMSP article I referred to in my earlier posting has been requested, so here is the abstract from the paper, plus odd extracts from elsewhere:
 
    The use of automated subset search algorithms is reviewed and issues concerning model selection and selection criteria are discussed.  In addition, a Monte Carlo study is reported which presents data regarding the frequency with which authentic and noise variables are selected by automated subset algorithms.  In particular, the effects of the correlation between predictor variables, the number of candidate predictor variables, the size of the sample, and the level of significance for entry and deletion of variables were studied for three automated subset selection algorithms: BACKWARD ELIMINATION, FORWARD SELECTION, and STEPWISE.  Results indicated that:

  1. the degree of correlation between the predictor variables affected the frequency with which authentic predictor variables found their way into the final model;
  2. the number of candidate predictor variables affected the number of noise variables that gained entry to the model;
  3. the size of the sample was of little practical importance in determining the number of authentic variables contained in the final model;
  4. the population multiple coefficient of determination could be faithfully estimated by adopting a statistic that is adjusted by the total number of candidate predictor variables rather than the number of variables in the final model;
  5. the degree of collinearity between predictor variables was the most important factor influencing the selection of authentic variables;
  6. even in the most favourable case investigated ..... 20 per cent of the variables finding their way into the model were noise. In the worst case .... 74 per cent of the selected variables were noise;
  7. the average number of authentic variables found in the final subset models was always less than half the number of available authentic predictor variables;
  8. the "data mining" approach to model building is likely to result in final models containing a large percentage of noise variables which will be interpreted incorrectly as authentic.
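The flavour of that Monte Carlo is easy to reproduce yourself. Here is a minimal sketch in Python (my illustration, not the authors' code): it builds a few authentic predictors and several pure-noise predictors, runs a naive forward selection on repeated samples, and counts what gets into the model. The sample size, coefficients, and entry alpha are arbitrary assumptions, and unlike the study it does not vary the correlation among predictors.

# A minimal re-creation, in spirit, of the Derksen & Keselman Monte
# Carlo: how often do noise predictors enter a forward-selection model?
# All settings below are illustrative assumptions, not the parameters
# of the original study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_true, n_noise, reps, alpha_enter = 100, 4, 8, 200, 0.15

def forward_select(X, y, alpha):
    """Greedy forward selection: repeatedly add the candidate with the
    smallest p-value until none falls below the entry threshold."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            pvals[j] = fit.pvalues[-1]      # p-value of candidate j
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

true_hits = noise_hits = 0
for _ in range(reps):
    X = rng.standard_normal((n, n_true + n_noise))
    beta = np.r_[np.full(n_true, 0.5), np.zeros(n_noise)]  # only the first 4 are authentic
    y = X @ beta + rng.standard_normal(n)
    chosen = forward_select(X, y, alpha_enter)
    true_hits += sum(j < n_true for j in chosen)
    noise_hits += sum(j >= n_true for j in chosen)

print(f"authentic variables per final model: {true_hits / reps:.2f} of {n_true}")
print(f"noise variables per final model:     {noise_hits / reps:.2f} of {n_noise}")

Run it a few times with different seeds; noise variables will find their way into a good share of the final models, in the spirit of findings 2 and 6 in the list above.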

Conclusions (mine, not theirs):

  1. Treat all claims based on stepwise algorithms as if they were made by Saddam Hussein on a bad day with a headache having a friendly chat with George Bush.
  2. Buy a subscription to the British Journal of Mathematical and Statistical Psychology before the pound goes back up to $2.

======================Frank Harrell Jr, 19 Feb 1996======
Frank E Harrell Jr feh@biostat.mc.duke.edu
Associate Professor of Biostatistics
Division of Biometry Duke University Medical Center
----------------------------------------------------------------------
Subject: Reasons not to do stepwise (or all possible regressions)

Here are SOME of the problems with stepwise variable selection.

1. It yields R-squared values that are badly biased high.
2. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
3. The method yields confidence intervals for effects and predicted values that are falsely narrow (see Altman & Andersen, 1989, Statistics in Medicine).
4. It yields P-values that do not have the proper meaning, and the proper correction for them is a very difficult problem.
5. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani, 1996, and the sketch below).
6. It has severe problems in the presence of collinearity.
7. It is based on methods (e.g., F tests for nested models) that were intended to be used to test pre-specified hypotheses.
8. Increasing the sample size doesn't help very much (see Derksen & Keselman, 1992).
9. It allows us to not think about the problem.
10. It uses a lot of paper.

Note that 'all possible subsets' regression does not solve any of these problems.
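Harrell's point 5 hints at the remedy: do selection and shrinkage together rather than pretending the selected coefficients are unbiased. Below is a minimal sketch of the lasso (Tibshirani, 1996, in the references below), using scikit-learn as an assumed dependency; the data and settings are illustrative only.

# The lasso shrinks and selects simultaneously: coefficients of weak
# predictors go exactly to zero, and the survivors are pulled toward
# zero instead of being biased away from it. Illustrative data only.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 100, 12
X = rng.standard_normal((n, p))
beta = np.r_[np.full(4, 0.5), np.zeros(p - 4)]   # 4 authentic, 8 noise
y = X @ beta + rng.standard_normal(n)

lasso = LassoCV(cv=5).fit(X, y)                  # penalty chosen by cross-validation
print("nonzero coefficient indices:", np.flatnonzero(lasso.coef_).tolist())
print("estimates for the authentic four:", np.round(lasso.coef_[:4], 2))

Note that the surviving estimates typically sit somewhat below the true 0.5; that is the shrinkage doing its job.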


References
----------

Altman, D. G., & Andersen, P. K. (1989). Bootstrap investigation of the stability of a Cox regression model. Statistics in Medicine, 8, 771-783.
Shows that stepwise methods yield confidence limits that are far too narrow.

Derksen, S., & Keselman, H. J. (1992). Backward, forward and stepwise automated subset selection algorithms: Frequency of obtaining authentic and noise variables. British Journal of Mathematical and Statistical Psychology, 45, 265-282.

Roecker, E. B. (1991). Prediction error and its estimation for subset-selected models. Technometrics, 33, 459-468.
Shows that all-possible-subsets regression can yield models that are "too small".

Mantel, N. (1970). Why stepdown procedures in variable selection. Technometrics, 12, 621-625.

Hurvich, C. M., & Tsai, C. L. (1990). The impact of model selection on inference in linear regression. American Statistician, 44, 214-217.

Copas, J. B. (1983). Regression, prediction and shrinkage (with discussion). Journal of the Royal Statistical Society B, 45, 311-354.
Shows why the number of CANDIDATE variables, and not the number in the final model, is the number of d.f. to consider.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288.

==========================Ira Bernstein, 29 Apr 1996==========
From: "IRA H BERNSTEIN" <BERNSTEI@albert.uta.edu>
Subject: Re: When should Stepwise reg be used?

I think that there are two distinct questions here: (a) _when_ is stepwise selection appropriate and (b) _why_ is it so popular.

I would probably argue only slightly with "never" as an answer to when stepwise selection should be used, since I don't know what knowledge we would lose if all papers using stepwise regression were to vanish from the journals at the same time the programs providing it became terminally virus-laden. However, I have been in situations that looked like "I have good reason to look at variables A, B, and C; then look at D and E; but I have no basis to favor F over G or vice versa past that point." Older versions of SPSS (I haven't used newer versions since switching to SAS a decade ago) allowed this mixture, and I would personally not object to it as long as the strategy were defined in advance and made clear to readers.
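The mixture Bernstein describes, forcing in the predictors you have theory for and letting only the theory-free leftovers compete, is easy to do by hand in any package. Here is a minimal Python/statsmodels sketch with hypothetical variables A through G (names and data invented for illustration):

# Hierarchical entry by hand: A..E enter on theoretical grounds; only
# F and G are screened statistically. All variables here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120
data = {v: rng.standard_normal(n) for v in "ABCDEFG"}
y = data["A"] + 0.5 * data["B"] + 0.3 * data["F"] + rng.standard_normal(n)

forced = np.column_stack([data[v] for v in "ABCDE"])
base = sm.OLS(y, sm.add_constant(forced)).fit()
print("R-squared for the theory block A..E:", round(base.rsquared, 3))

for v in "FG":                       # only now do F and G compete
    X = sm.add_constant(np.column_stack([forced, data[v]]))
    fit = sm.OLS(y, X).fit()
    print(f"entry p-value for {v}: {fit.pvalues[-1]:.4f}")

The point, as Bernstein says, is that the strategy is fixed in advance; the software is not deciding which questions get asked.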

As to part (b), I think that there are two groups that are inclined to favor its usage. One consists of individuals with little formal training in data analysis who confuse knowledge of data analysis with knowledge of the syntax of SAS, SPSS, etc. They seem to figure that "if it's there in a program, it's gotta be good and better than actually thinking about what my data might look like". They are fairly easy to spot and to condemn in a right-thinking group of well-trained data analysts (like ourselves). However, there is also a second group who are often well trained (and may be here in this group, ready to flame me). They believe in statistics uber alles--given any properly obtained database, a suitable computer program can objectively make substantive inferences without active consideration of the underlying hypotheses. If stepwise selection is the parent of this line of blind data analysis, then automatic variable respecification in confirmatory factor analysis is the child.


==========================Kent Campbell, 30 Apr 1996=========
From: campbell@acs.ryerson.ca (Kent Campbell)
Subject: Re: When should Stepwise reg be used?

Try generating some random data sets and then analyzing them with stepwise regression. It is quite likely that you will discover all sorts of "significant" relationships. I have done this in a controlled manner and found that the Type I error rate (using the default settings in SPSS) is much higher than 5%. So one reason why stepwise is so popular is that it produces statistically significant results when fed garbage.
Best wishes,
Kent.
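Campbell's experiment takes only a few lines to repeat. Here is a minimal sketch (my code, not his; the entry alpha mimics the SPSS default of .05, and a single entry step is enough to see the problem):

# Feed pure noise to the entry step of stepwise selection and count how
# often something "significant" gets in. Settings are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, p, reps, alpha_enter = 50, 10, 200, 0.05

false_entries = 0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)       # y is unrelated to every predictor
    pvals = [sm.OLS(y, sm.add_constant(X[:, [j]])).fit().pvalues[1]
             for j in range(p)]
    if min(pvals) < alpha_enter:     # would stepwise admit a variable?
        false_entries += 1

print(f"runs in which a 'significant' predictor entered: "
      f"{false_entries / reps:.0%} (nominal rate: 5%)")

With ten candidate predictors, the chance that at least one clears the .05 entry test on pure noise is roughly 1 - .95^10, about 40%, which is exactly the inflation Campbell describes.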
============================Carl Huberty, 13 Feb 1996=========
From: carl huberty <CHUBERTY@UGA.CC.UGA.EDU>
Subject: Re: When are stepwise and backward regression methods appropriate?

About the only time stepwise methods are remotely appropriate is when you have a large number of variables and you want to do some "pre-screening" of the variable set -- and even then you would need a "large" N/p ratio to do such an analysis. There are MUCH better ways to assess variable ordering and to determine good variable subsets. DOWN WITH STEPWISE!!

Carl

===========================Ronan M Conroy, 7 May 1996============
From: rconroy@rcsi.ie (Ronan M Conroy)
Subject: Re: When should Stepwise reg be used?

I am struck by the fact that Judd and McClelland in their excellent book "Data Analysis: A Model Comparison Approach" (Harcourt Brace Jovanovich, ISBN 0-15-516765-0) devote less than 2 pages to stepwise methods. What they do say, however, is worth repeating:

1. Stepwise methods will not necessarily produce the best model if there are redundant predictors (a common problem).

2. All-possible-subset methods produce the best model for each possible number of terms, but the best larger models need not contain the best smaller ones as subsets, causing serious conceptual problems about the underlying logic of the investigation.

3. Models identified by stepwise methods have an inflated risk of capitalising on chance features of the data. They frequently fail when applied to new datasets, and they are rarely tested in this way (a holdout check of exactly this is sketched after this post).

4. Since the interpretation of coefficients in a model depends on the other terms included, "it seems unwise," to quote J and McC, "to let an automatic algorithm determine the questions we do and do not ask about our data". RC adds that abusers of stepwise methods frequently would rather not think about their data, for reasons that are fun to describe over a second Guinness.

5. I quote this last point directly, as it is sane and succinct:

"It is our experience and strong belief that better models and a better understanding of one's data result from focused data analysis, guided by substantive theory." (p 204)

They end with a quote from Henderson and Velleman's paper "Building multiple regression models interactively" (Biometrics, 1981, 37, 391-411):

"The data analyst knows more than the computer"

and add "failure to use that knowledge produces inadequate data analysis."

Personally, I would no more let an automatic routine select my model than I would let some best-fit procedure pack my suitcase.
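Judd and McClelland's point 3 above, that stepwise models frequently fail on fresh data and are rarely tested that way, is also cheap to check. A minimal sketch follows (illustrative settings; a crude one-pass entry step stands in for full stepwise): select on one half of a pure-noise sample, then score on the held-out half.

# A stepwise-style model that looks fine in-sample tends to fall apart
# on a holdout. Pure-noise data; every setting here is illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, p = 100, 15
X = rng.standard_normal((2 * n, p))
y = rng.standard_normal(2 * n)       # no real relationships at all
Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]

# entry step on the training half: keep predictors with p < .05
pvals = np.array([sm.OLS(ytr, sm.add_constant(Xtr[:, [j]])).fit().pvalues[1]
                  for j in range(p)])
keep = np.flatnonzero(pvals < 0.05)

if keep.size:
    fit = sm.OLS(ytr, sm.add_constant(Xtr[:, keep])).fit()
    pred = fit.predict(sm.add_constant(Xte[:, keep]))
    ss_res = np.sum((yte - pred) ** 2)
    ss_tot = np.sum((yte - yte.mean()) ** 2)
    print(f"training R-squared: {fit.rsquared:.3f}")
    print(f"holdout R-squared:  {1 - ss_res / ss_tot:.3f}")  # typically at or below zero
else:
    print("nothing entered this time; try another seed")

On noise, the holdout R-squared hovers at or below zero no matter how respectable the training fit looks.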


 

Links

Back to the Stat Help Page
Contact Information for the Webmaster,
Dr. Karl L. Wuensch


This page most recently revised on 28-August-2015