Bootstrap Aggregating Algorithm


A bootstrap aggregating algorithm is an ensemble meta-algorithm that repeatedly draws a random sample with replacement from the training set (a bootstrap sample), fits a base model to each sample, and aggregates the resulting predictions (for example, by averaging for regression or majority voting for classification).
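The procedure above can be sketched in plain Python. This is a minimal illustration, not a reference implementation: the base learner here (a one-dimensional regression stump chosen to minimize squared error) and all function names are assumptions made for the example, not part of the definition itself.

```python
import random
import statistics

def fit_stump(xs, ys):
    """Fit a 1-D regression stump: pick the split that minimizes squared error."""
    best = None
    for s in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= s]
        right = [y for x, y in zip(xs, ys) if x > s]
        if not left or not right:
            continue
        lm, rm = statistics.mean(left), statistics.mean(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, s, lm, rm)
    if best is None:  # all xs identical: fall back to a constant predictor
        m = statistics.mean(ys)
        return lambda x: m
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def bagging(xs, ys, n_estimators=25, seed=0):
    """Bootstrap-aggregate the base learner: resample with replacement, fit, average."""
    rng = random.Random(seed)
    n = len(xs)
    models = []
    for _ in range(n_estimators):
        # bootstrap sample: n draws WITH replacement from the training set
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    # aggregate by averaging the base-model predictions
    return lambda x: statistics.mean(m(x) for m in models)
```

For instance, bagging stumps on a step-shaped dataset yields predictions near 0 on the left of the step and near 1 on the right, with the averaging smoothing out the individual stumps' split points.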

  • (Bühlmann, 2005) ⇒ Peter Bühlmann. (2005). “16.2 Bagging and Related Methods.” website
    • QUOTE: Bagging (Breiman, 1996), a sobriquet for bootstrap aggregating, is an ensemble method for improving unstable estimation or classification schemes. Breiman (Breiman, 1996) motivated bagging as a variance reduction technique for a given base procedure, such as decision trees or methods that do variable selection and fitting in a linear model. It has attracted much attention, probably due to its implementational simplicity and the popularity of the bootstrap methodology. At the time of its invention, only heuristic arguments were presented for why bagging would work. Later, it was shown in (Bühlmann & Yu, 2002) that bagging is a smoothing operation which turns out to be advantageous when aiming to improve the predictive performance of regression or classification trees. In the case of decision trees, the theory in (Bühlmann & Yu, 2002) confirms Breiman's intuition that bagging is a variance reduction technique, reducing also the mean squared error (MSE). The same also holds for subagging (subsample aggregating), defined in Sect. 16.2.3, which is a computationally cheaper version of bagging. However, for other (even complex) base procedures, the variance and MSE reduction effect of bagging is not necessarily true; this has also been shown in (Buja & Stuetzle, 2002) for the simple case where the estimator is a $U$-statistic.
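The subagging variant mentioned in the quote differs from bagging only in how samples are drawn: subsamples of size $m < n$ are taken without replacement, which makes each base fit cheaper. A minimal sketch follows; the choice of the sample mean as the base estimator, the function name, and the default parameters are illustrative assumptions, not taken from the source.

```python
import random
import statistics

def subagging_means(ys, m, n_estimators=50, seed=0):
    """Subagging sketch: aggregate a base estimator (here, the sample mean)
    fit on subsamples of size m drawn WITHOUT replacement."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_estimators):
        sub = rng.sample(ys, m)               # subsample without replacement
        estimates.append(statistics.mean(sub))  # illustrative base estimator
    return statistics.mean(estimates)          # aggregate by averaging
```

Because each subsample is smaller than the full training set, subagging trades a little extra sampling variability for a lower per-fit cost, which is the computational advantage the quote refers to.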

  • (Bühlmann & Yu, 2002) ⇒ Peter Bühlmann, and Bin Yu. (2002). “Analyzing Bagging.” In: Annals of Statistics, 30.