# Thomas G. Dietterich


Thomas G. Dietterich is a machine learning researcher and professor (emeritus) at Oregon State University, known in particular for his work on ensemble methods and error-correcting output codes.

## References

- Professional Homepage: http://web.engr.oregonstate.edu/~tgd/
- DBLP Page: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/d/Dietterich:Thomas_G=.html
- Google Scholar Author Page: http://scholar.google.com/citations?user=09kJn28AAAAJ

### 2004

- (Dietterich et al., 2004) ⇒ Thomas G. Dietterich, Adam Ashenfelter, and Yaroslav Bulatov. (2004). “Training Conditional Random Fields via Gradient Tree Boosting.” In: Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004). doi:10.1145/1015330.1015428

### 2002

- (Dietterich, 2002) ⇒ Thomas G. Dietterich. (2002). “Machine Learning for Sequential Data: A Review.” In: Structural, Syntactic, and Statistical Pattern Recognition; Lecture Notes in Computer Science, 2396.

### 2000

- (Dietterich, 2000a) ⇒ Thomas G. Dietterich. (2000). “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization.” In: Machine Learning Journal, 40(2). doi:10.1023/A:1007607513941
- (Dietterich, 2000b) ⇒ Thomas G. Dietterich. (2000). “Ensemble Methods in Machine Learning.” In: First International Workshop on Multiple Classifier Systems. doi:10.1007/3-540-45014-9_1.
- ABSTRACT: Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.
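The core idea described in the abstract — classifying a new point by a weighted vote over the predictions of a set of classifiers — can be sketched in a few lines. This is an illustrative toy, not the paper's experimental setup; the classifiers and weights below are hypothetical stand-ins:

```python
from collections import Counter

def ensemble_predict(classifiers, weights, x):
    """Weighted-vote ensemble prediction: each classifier casts a vote
    for its predicted label, weighted by its assigned weight; the label
    with the largest total vote weight wins."""
    votes = Counter()
    for clf, w in zip(classifiers, weights):
        votes[clf(x)] += w
    return votes.most_common(1)[0][0]

# Hypothetical base classifiers: each maps an input to a class label.
clfs = [lambda x: "A", lambda x: "B", lambda x: "A"]

print(ensemble_predict(clfs, [1.0, 1.0, 1.0], None))  # majority class "A"
```

With uniform weights this reduces to simple majority voting (as in bagging); boosting-style methods instead assign each classifier a weight reflecting its accuracy on the training data.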

### 1997

- (Margineantu & Dietterich, 1997) ⇒ Dragos D. Margineantu, and Thomas G. Dietterich. (1997). “Pruning Adaptive Boosting.” In: Proceedings of the Fourteenth International Conference on Machine Learning (ICML 1997).

### 1995

- (Dietterich & Bakiri, 1995) ⇒ Thomas G. Dietterich, and Ghulum Bakiri. (1995). “Solving Multiclass Learning Problems via Error-Correcting Output Codes.” In: Journal of Artificial Intelligence Research, 2.
- ABSTRACT: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k “classes”). The definition is acquired by studying collections of training examples of the form [x_i, f(x_i)]. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
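The decoding step of the error-correcting output-code scheme described above — each class is assigned a binary codeword, one binary learner is trained per bit, and a new example is assigned to the class whose codeword is nearest (in Hamming distance) to the vector of bit predictions — can be sketched as follows. The code matrix here is a small hypothetical example, not one from the paper:

```python
import numpy as np

# Hypothetical 4-bit codewords for k = 3 classes (one row per class).
code_matrix = np.array([
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
])

def ecoc_decode(bit_predictions, code_matrix):
    """Assign the class whose codeword has the smallest Hamming
    distance to the vector of binary-learner outputs."""
    bits = np.asarray(bit_predictions)
    distances = (code_matrix != bits).sum(axis=1)
    return int(distances.argmin())

# One bit flipped relative to class 1's codeword [0, 1, 0, 1]:
print(ecoc_decode([0, 1, 0, 0], code_matrix))  # decodes to class 1
```

Because decoding picks the nearest codeword, a code matrix whose rows are well separated in Hamming distance can correct errors made by individual binary learners, which is the source of the generalization gains the abstract reports.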
