Bootstrap Aggregating Algorithm
- AKA: Bagging.
- See: Bootstrap Learning Algorithm, Wagging Algorithm, AdaBoost, Meta-Algorithm, Overfitting.
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Bootstrap_aggregating Retrieved:2014-10-28.
- QUOTE: Bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the model averaging approach.
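The resample-and-average scheme described above can be sketched in a few lines of plain Python. This is a minimal illustration, not any particular library's API; `fit` and `predict` are hypothetical stand-ins for an arbitrary base procedure, and the toy mean-of-targets learner exists only to make the sketch runnable.

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) points WITH replacement from the training set.
    return [rng.choice(data) for _ in data]

def bagged_predict(train, fit, predict, x, n_estimators=25, seed=0):
    # Fit one base model per bootstrap sample, then average the
    # (regression) predictions; majority vote would be the
    # classification analogue.
    rng = random.Random(seed)
    models = [fit(bootstrap_sample(train, rng)) for _ in range(n_estimators)]
    return sum(predict(m, x) for m in models) / n_estimators

# Toy base procedure (hypothetical): predict the sample mean of the targets.
fit_mean = lambda sample: sum(y for _, y in sample) / len(sample)
predict_mean = lambda model, x: model

train = [(float(i), float(i)) for i in range(10)]  # y = x on 0..9
pred = bagged_predict(train, fit_mean, predict_mean, x=5.0)
```

With an unstable base procedure such as a deep decision tree, the averaging over bootstrap replicates is what yields the variance reduction the definition refers to.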
- (Domingos, 2012) ⇒ Pedro Domingos. (2012). “A Few Useful Things to Know About Machine Learning.” In: Communications of the ACM Journal, 55(10). doi:10.1145/2347736.2347755
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). “Bagging.” In: (Sammut & Webb, 2011) p.73
- (Bühlmann, 2005) ⇒ Peter Bühlmann. (2005). “16.2 Bagging and Related Methods.” website
- QUOTE: Bagging (Breiman, 1996), a sobriquet for bootstrap aggregating, is an ensemble method for improving unstable estimation or classification schemes. Breiman (Breiman, 1996) motivated bagging as a variance reduction technique for a given base procedure, such as decision trees or methods that do variable selection and fitting in a linear model. It has attracted much attention, probably due to its implementational simplicity and the popularity of the bootstrap methodology. At the time of its invention, only heuristic arguments were presented for why bagging would work. It was later shown in (Bühlmann & Yu, 2002) that bagging is a smoothing operation, which turns out to be advantageous when aiming to improve the predictive performance of regression or classification trees. In the case of decision trees, the theory in (Bühlmann & Yu, 2002) confirms Breiman's intuition that bagging is a variance reduction technique, also reducing the mean squared error (MSE). The same holds for subagging (subsample aggregating), defined in Sect. 16.2.3, which is a computationally cheaper version of bagging. However, for other (even complex) base procedures, the variance and MSE reduction effect of bagging does not necessarily hold; this has also been shown in (Buja & Stuetzle, 2002) for the simple case where the estimator is a $U$-statistic.
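The subagging variant mentioned in the quote replaces the bootstrap resample with a smaller subsample drawn without replacement, which is what makes it computationally cheaper. A minimal sketch in plain Python follows; the helper names are hypothetical, and half-sampling is assumed here only as an illustrative subsample size.

```python
import random

def subagged_predict(train, fit, predict, x, m=None, n_estimators=25, seed=0):
    # Subagging: each base model is fit on a subsample of size m drawn
    # WITHOUT replacement; predictions are averaged as in bagging.
    rng = random.Random(seed)
    if m is None:
        m = len(train) // 2  # half-sampling, assumed for illustration
    models = [fit(rng.sample(train, m)) for _ in range(n_estimators)]
    return sum(predict(mod, x) for mod in models) / n_estimators

# Toy base procedure (hypothetical): predict the sample mean of the targets.
fit_mean = lambda sample: sum(y for _, y in sample) / len(sample)
predict_mean = lambda model, x: model

train = [(float(i), float(i)) for i in range(10)]  # y = x on 0..9
pred = subagged_predict(train, fit_mean, predict_mean, x=5.0)
```

Because each model trains on only m points rather than a full-size bootstrap replicate, subagging does less work per estimator while keeping the same aggregation step.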
- (Buja & Stuetzle, 2002) ⇒ Andreas Buja, and Werner Stuetzle. (2002). “Observations on Bagging.” Preprint (2002). Available from http://ljsavage.wharton.upenn.edu/~buja See: (Buja & Stuetzle, 2006).
- (Chang et al., 2003) ⇒ E.Y. Chang, B. Li, G. Wu, and K. Goh. (2003). “Statistical Learning for Effective Visual Information Retrieval.” In: Proceedings of the 2003 IEEE International Conference on Image Processing (ICIP 2003).
- (Bühlmann & Yu, 2002) ⇒ Peter Bühlmann, and Bin Yu. (2002). “Analyzing Bagging.” In: Annals of Statistics 30.
- (Bauer & Kohavi, 1999) ⇒ Eric Bauer, and Ron Kohavi. (1999). “An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting and Variants.” In: Machine Learning, 36(1-2).