2004 FeatureSelectionL1VsL2Regulariz

From GM-RKB

Subject Headings:

Notes

Cited By

Quotes

Abstract

We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn "well") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L1 regularized logistic regression can be effective even if the number of irrelevant features is exponential in the number of training examples. We also give a lower bound showing that any rotationally invariant algorithm (including logistic regression with L2 regularization, SVMs, and neural networks trained by backpropagation) has a worst-case sample complexity that grows at least linearly in the number of irrelevant features.
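To make the contrast concrete, the following is a minimal sketch (not from the paper) that fits L1- and L2-regularized logistic regression on synthetic data in which only a handful of features are relevant. The data-generation scheme, the use of scikit-learn, and the regularization strength C are illustrative assumptions, not the paper's experimental setup; the intent is only to show the qualitative effect the abstract describes, with L1 driving most irrelevant coefficients to zero.

```python
# Illustrative sketch (assumed setup, not the paper's experiments):
# compare L1- vs L2-regularized logistic regression when only a few
# of many features carry signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test = 100, 2000
n_relevant, n_irrelevant = 3, 1000     # assumed sizes for illustration
n_features = n_relevant + n_irrelevant

# Labels depend only on the first n_relevant features.
w_true = np.zeros(n_features)
w_true[:n_relevant] = 2.0
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))
y_train = (X_train @ w_true + 0.1 * rng.standard_normal(n_train)) > 0
y_test = (X_test @ w_true) > 0

for penalty, solver in [("l1", "liblinear"), ("l2", "lbfgs")]:
    clf = LogisticRegression(penalty=penalty, C=0.1, solver=solver, max_iter=5000)
    clf.fit(X_train, y_train)
    n_nonzero = int(np.sum(np.abs(clf.coef_) > 1e-6))
    print(f"{penalty}: test accuracy = {clf.score(X_test, y_test):.3f}, "
          f"nonzero coefficients = {n_nonzero}")
```

On runs like this, the L1-penalized model typically keeps only a small number of nonzero coefficients and generalizes noticeably better from the small training set, while the L2-penalized model spreads weight across all features, consistent with the rotational-invariance argument in the paper.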

References

(Ng, 2004) ⇒ Andrew Y. Ng. (2004). "Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance." In: Proceedings of the Twenty-first International Conference on Machine Learning (ICML 2004). doi:10.1145/1015330.1015435. http://www-robotics.stanford.edu/~ang/papers/icml04-l1l2.pdf