Bagged Trees Algorithm


A Bagged Trees Algorithm is a bagging algorithm that uses a decision tree learning algorithm as its base learner.



References

2015

  • http://en.wikipedia.org/wiki/Random_forest#Tree_bagging
    • The training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners. Given a training set [math]X = x_1, \ldots, x_n[/math] with responses [math]Y = y_1, \ldots, y_n[/math], bagging repeatedly (B times) selects a random sample with replacement of the training set and fits trees to these samples … After training, predictions for unseen samples x' can be made by averaging the predictions from all the individual regression trees on x': [math]\displaystyle{ \hat{f} = \frac{1}{B} \sum_{b=1}^B \hat{f}_b(x') }[/math] or by taking the majority vote in the case of classification trees.

      This bootstrapping procedure leads to better model performance because it decreases the variance of the model, without increasing the bias. This means that while the predictions of a single tree are highly sensitive to noise in its training set, the average of many trees is not, as long as the trees are not correlated. Simply training many trees on a single training set would give strongly correlated trees (or even the same tree many times, if the training algorithm is deterministic); bootstrap sampling is a way of de-correlating the trees by showing them different training sets.
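      One way to make the variance claim concrete (a standard argument, not part of the quoted text): if the B tree predictions are identically distributed with variance [math]\sigma^2[/math] and pairwise correlation [math]\rho[/math], their average has variance [math]\displaystyle{ \rho\sigma^2 + \frac{1-\rho}{B}\sigma^2 }[/math]. As B grows this approaches [math]\rho\sigma^2[/math], so adding more trees only helps down to a floor set by the correlation between trees; bootstrap sampling lowers [math]\rho[/math] and therefore lowers that floor.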
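      A minimal sketch of the bagging procedure described above, assuming scikit-learn's DecisionTreeRegressor as the base tree learner; the function names fit_bagged_trees and predict_bagged are illustrative, not from the quoted source:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_bagged_trees(X, y, n_trees=100):
    """Fit n_trees regression trees, each on a bootstrap sample of (X, y)."""
    n = len(X)
    trees = []
    for _ in range(n_trees):
        # Sample n row indices with replacement (the bootstrap sample).
        idx = rng.integers(0, n, size=n)
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return trees

def predict_bagged(trees, X_new):
    """Average the per-tree predictions: f_hat(x') = (1/B) * sum_b f_b(x')."""
    return np.mean([tree.predict(X_new) for tree in trees], axis=0)

# Illustrative usage on synthetic data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)
trees = fit_bagged_trees(X, y, n_trees=50)
print(predict_bagged(trees, np.array([[0.5]])))
```

      For classification, the averaging step would be replaced by a majority vote over the individual trees' predicted classes.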

2006