Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references.
New in the Second Edition:
A separate chapter on Bayesian methods
Complete revision of the chapter on estimation
A major example from the field of near-infrared spectroscopy
More emphasis on cross-validation
Greater focus on bootstrapping
Stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible
Software available on the Internet for implementing many of the algorithms presented
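To give a flavor of the stochastic search methods listed above, here is a small illustrative sketch (our own, not code from the book or its companion software; the function names and parameters are assumptions): a random-restart sequential-replacement search that swaps one chosen variable for one excluded variable whenever the swap lowers the residual sum of squares, restarting from several random subsets to escape poor local optima.

```python
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of a least-squares fit on the given columns."""
    coef, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
    resid = y - X[:, subset] @ coef
    return float(resid @ resid)

def sequential_replacement(X, y, k, starts=10, rng=None):
    """Random-restart sequential replacement: from a random k-subset, keep
    swapping one selected variable for one excluded variable while any single
    swap lowers the RSS; repeat from several starts, keep the best subset."""
    rng = rng or np.random.default_rng()
    p = X.shape[1]
    best_subset, best_rss = None, np.inf
    for _ in range(starts):
        subset = list(rng.choice(p, size=k, replace=False))
        improved = True
        while improved:
            improved = False
            for i in range(k):
                for j in set(range(p)) - set(subset):
                    trial = subset[:i] + [j] + subset[i + 1:]
                    if rss(X, y, trial) < rss(X, y, subset):
                        subset, improved = trial, True
        current = rss(X, y, subset)
        if current < best_rss:
            best_subset, best_rss = sorted(map(int, subset)), current
    return best_subset, best_rss

# Toy usage: two of eight candidate predictors carry the signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(100)
subset, value = sequential_replacement(X, y, k=2, starts=5, rng=rng)
print(subset)
```

Each accepted swap strictly lowers the RSS, so every start terminates at a local optimum; the random restarts are what make the search "stochastic" and usable when exhaustive enumeration of subsets is infeasible.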
Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting and choosing models that are linear in their parameters and to understanding and correcting the bias introduced by selecting a model that fits only slightly better than others. The presentation is clear and concise, and the book belongs on the shelf of anyone researching, using, or teaching subset selection techniques.
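The selection bias referred to above is easy to demonstrate in a small simulation (an illustrative sketch of ours, not an example from the book): when many pure-noise predictors are screened and the best-fitting one is reported, its apparent R² is far more optimistic than that of a predictor fixed in advance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 50, 20, 200
selected_r2, fixed_r2 = [], []

for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)  # pure noise: no predictor is truly useful
    # squared correlation of each candidate predictor with y
    r2 = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(p)])
    selected_r2.append(r2.max())  # R^2 of the best-fitting single predictor
    fixed_r2.append(r2[0])        # R^2 of a predictor chosen in advance

print(f"mean R^2, predictor fixed in advance: {np.mean(fixed_r2):.3f}")
print(f"mean R^2, best of {p} predictors:     {np.mean(selected_r2):.3f}")
```

The pre-chosen predictor averages an R² near 1/(n-1), while the selected one looks several times better despite carrying no real information — the optimism that the book's estimation chapters set out to quantify and correct.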
Table of Contents
OBJECTIVES
Prediction, Explanation, Elimination or What?
How Many Variables in the Prediction Formula?
Alternatives to Using Subsets
'Black Box' Use of Best-Subsets Techniques
LEAST-SQUARES COMPUTATIONS
Using Sums of Squares and Products Matrices
Orthogonal Reduction Methods
Gauss-Jordan v. Orthogonal Reduction Methods
Interpretation of Projections
Appendix A: Operation Counts for All-Subsets Regression
FINDING SUBSETS WHICH FIT WELL
Objectives and Limitations of this Chapter
Sequential Replacement Algorithm
Replacing Two Variables at a Time
Generating All Subsets
Using Branch-and-Bound Techniques
Ridge Regression and Other Alternatives
The Non-Negative Garrote and the Lasso
Conclusions and Recommendations
HYPOTHESIS TESTING
Is There Any Information in the Remaining Variables?
Is One Subset Better than Another?
Appendix A: Spjøtvoll's Method - Detailed Description
WHEN TO STOP?
What Criterion Should We Use?
Cross-Validation and the PRESS Statistic
Likelihood and Information-Based Stopping Rules
Appendix A: Approximate Equivalence of Stopping Rules
ESTIMATION OF REGRESSION COEFFICIENTS
Choice Between Two Variables
Selection Bias in the General Case, and its Reduction
Conditional Likelihood Estimation
Estimation of Population Means
Estimating Least-Squares Projections
Appendix A: Changing Projections to Equate Sums of Squares
BAYESIAN METHODS
'Spike and Slab' Prior
Normal Prior for Regression Coefficients
Picking the Best Model
CONCLUSIONS AND SOME RECOMMENDATIONS
"Overall, this is a fine volume … and should be in the possession of all involved in the business of linear regression analysis."
-Zentralblatt für Mathematik
"Miller is to be commended for pulling together a lot of literature … and going straight to the guts of a complex problem. The book is essential reading for anyone doing or pondering research in this area. I also recommend it highly to anyone teaching regression…"
-Journal of the American Statistical Association