Boosting Learning Algorithm

From GM-RKB

A Boosting Learning Algorithm is a nonparametric learning algorithm/ensemble learning algorithm that makes repeated use of a weak learner to create a Strong Learner (ensemble model).



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Boosting_(machine_learning) Retrieved:2015-2-9.
    • Boosting is a machine learning ensemble meta-algorithm for reducing bias primarily and also variance in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant (1988, 1989):[1] Can a set of weak learners create a single strong learner? A weak learner is defined to be a classifier which is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification. Robert Schapire's affirmative answer in a 1990 paper to the question of Kearns and Valiant has had significant ramifications in machine learning and statistics, most notably leading to the development of boosting. When first introduced, the hypothesis boosting problem simply referred to the process of turning a weak learner into a strong learner. "Informally, [the hypothesis boosting] problem asks whether an efficient learning algorithm […] that outputs a hypothesis whose performance is only slightly better than random guessing [i.e. a weak learner] implies the existence of an efficient algorithm that outputs an hypothesis of arbitrary accuracy [i.e. a strong learner]." Algorithms that achieve hypothesis boosting quickly became simply known as "boosting". Freund and Schapire's arcing (Adapt[at]ive Resampling and Combining),[2] as a general technique, is more or less synonymous with boosting.[3]
  1. Michael Kearns (1988); Thoughts on Hypothesis Boosting, Unpublished manuscript (Machine Learning class project, December 1988)
  2. Yoav Freund and Robert E. Schapire (1997); A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences, 55(1):119-139
  3. Leo Breiman (1998); Arcing Classifier (with Discussion and a Rejoinder by the Author), Annals of Statistics, vol. 26, no. 3, pp. 801-849: "The concept of weak learning was introduced by Kearns and Valiant (1988, 1989), who left open the question of whether weak and strong learnability are equivalent. The question was termed the boosting problem since [a solution must] boost the low accuracy of a weak learner to the high accuracy of a strong learner. Schapire (1990) proved that boosting is possible. A boosting algorithm is a method that takes a weak learner and converts it into a strong learner. Freund and Schapire (1997) proved that an algorithm similar to arc-fs is boosting."
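The weak-to-strong conversion described above can be sketched as an AdaBoost-style loop; the decision-stump weak learner and the 1-D data below are illustrative assumptions, not pseudocode from the cited papers.

```python
# Minimal AdaBoost-style boosting sketch: repeatedly fit a weak learner
# (a decision stump) on re-weighted data, then combine by weighted vote.
import math

def train_stump(X, y, w):
    """Return (weighted error, threshold, polarity) of the best decision stump."""
    best = None
    for thresh in sorted(set(X)):
        for polarity in (1, -1):
            preds = [polarity if x >= thresh else -polarity for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                    # start with uniform example weights
    ensemble = []                        # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, thresh, pol = train_stump(X, y, w)
        err = max(err, 1e-10)            # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # this weak learner's vote weight
        ensemble.append((alpha, thresh, pol))
        # Re-weight: increase weights of misclassified examples so the
        # next weak learner focuses on them.
        for i in range(n):
            pred = pol if X[i] >= thresh else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]     # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    """Strong learner: sign of the weighted vote of all weak learners."""
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

Even though each stump alone may be only slightly better than chance on the re-weighted data, the weighted majority vote can be made arbitrarily accurate on the training set, which is the sense in which boosting turns a weak learner into a strong one.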

2004

  • (Zhao & Yu, 2004) ⇒ Peng Zhao, and Bin Yu. (2004). “Boosted Lasso.” Tech Report, Statistics Department, U. C. Berkeley.
    • Another vastly popular and recent idea is Boosting. Since its inception in 1990 (Schapire, 1990; Freund, 1995; Freund and Schapire, 1996), it has become one of the most successful machine learning methods and is now understood as an iterative gradient descent method leading to an additive model (Breiman, 1998; Mason et al., 1999; Friedman et al., 2000).
  • (Melli, 2004) ⇒ Gabor Melli. (2004). Review of: Yoav Freund, and Robert E. Schapire, “A Short Introduction to Boosting.”
    • Kearns & Valiant (1989) proved that learners performing only slightly better than random can be combined to form an arbitrarily good ensemble hypothesis.
    • Schapire (1990) provided the first polynomial-time Boosting algorithm.
    • Freund (1995) “Boosting a Weak Learning Algorithm by Majority.”
    • Freund & Schapire (1995) introduced AdaBoost, which solved many practical problems of earlier boosting algorithms. “Ada” stands for adaptive.
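The “iterative gradient descent method leading to an additive model” view noted in Zhao & Yu can be sketched for squared loss, where the negative gradient is simply the residual; the regression-stump base learner and the data below are assumptions for illustration, not code from the cited papers.

```python
# Boosting as functional gradient descent on squared loss: at each round,
# fit a small base learner to the residuals (the negative gradient) and
# add it, scaled by a learning rate, to the growing additive model F.
def fit_stump_regression(X, r):
    """One-split regression stump minimizing squared error against residuals r."""
    best = None
    for thresh in sorted(set(X)):
        left = [ri for xi, ri in zip(X, r) if xi < thresh]
        right = [ri for xi, ri in zip(X, r) if xi >= thresh]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        sse = (sum((ri - lmean) ** 2 for ri in left)
               + sum((ri - rmean) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, thresh, lmean, rmean)
    return best[1], best[2], best[3]

def gradient_boost(X, y, rounds=50, lr=0.1):
    F = [0.0] * len(X)                   # additive model, initialized at zero
    stumps = []
    for _ in range(rounds):
        # For squared loss, the negative gradient at each point is the residual.
        residuals = [yi - fi for yi, fi in zip(y, F)]
        thresh, lmean, rmean = fit_stump_regression(X, residuals)
        stumps.append((thresh, lmean, rmean))
        F = [fi + lr * (lmean if xi < thresh else rmean)
             for xi, fi in zip(X, F)]
    return stumps

def predict(stumps, x, lr=0.1):
    """Evaluate the additive model: the lr-scaled sum of all fitted stumps."""
    return sum(lr * (lmean if x < t else rmean) for t, lmean, rmean in stumps)
```

Each round takes a small step in function space toward lower loss, so the final model is an additive combination of base learners, matching the Breiman/Friedman interpretation of boosting referenced above.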

2000

  • (Webb, 2000) ⇒ Geoffrey I. Webb. (2000). “MultiBoosting: A Technique for Combining Boosting and Wagging.” Machine Learning. 40(2).

1995

  • (Freund, 1995) ⇒ Yoav Freund. (1995). “Boosting a Weak Learning Algorithm by Majority.” Information and Computation, 121.

1990

  • (Schapire, 1990) ⇒ Robert E. Schapire. (1990). “The Strength of Weak Learnability.” In: Machine Learning, 5(2).

1988

  • (Kearns, 1988) ⇒ Michael Kearns. (1988). “Thoughts on Hypothesis Boosting.” Unpublished manuscript.