Decision Tree Training System


A Decision Tree Training System is a supervised model-based training system that implements a decision tree training algorithm to solve a decision tree training task.
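Such a system can be illustrated with a minimal sketch (hypothetical code, not from any cited source): an ID3-style inducer that greedily splits the training data on the categorical feature with the highest information gain, recursing until each node is pure.

```python
# Minimal ID3-style decision tree inducer (illustrative sketch only).
# Rows are dicts of categorical feature values; labels are class labels.
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_feature(rows, labels, features):
    """Pick the feature whose split maximizes information gain."""
    base = entropy(labels)
    def gain(f):
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[f], []).append(y)
        remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        return base - remainder
    return max(features, key=gain)

def induce(rows, labels, features):
    """Recursively grow a tree; returns a nested dict or a leaf class label."""
    if len(set(labels)) == 1:        # pure node -> leaf
        return labels[0]
    if not features:                 # no features left -> majority-vote leaf
        return Counter(labels).most_common(1)[0][0]
    f = best_feature(rows, labels, features)
    remaining = [g for g in features if g != f]
    tree = {f: {}}
    for v in {row[f] for row in rows}:
        sub = [(r, y) for r, y in zip(rows, labels) if r[f] == v]
        srows, slabels = zip(*sub)
        tree[f][v] = induce(list(srows), list(slabels), remaining)
    return tree

def classify(tree, row):
    """Follow branches until a leaf label is reached."""
    while isinstance(tree, dict):
        f = next(iter(tree))
        tree = tree[f][row[f]]
    return tree
```

The greedy, top-down recursion shown here is the common structure shared by inducers such as ID3, C4.5, and CART; real systems add pruning, numeric-feature handling, and missing-value strategies on top of it.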



References

2017b

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Decision_tree_learning Retrieved:2017-10-15.
    • Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees.

      In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making). This page deals with decision trees in data mining.
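The regression-tree case described in the excerpt above can be sketched with a hypothetical one-split "stump" (illustrative code, not from the source): each leaf predicts the mean target value of its training examples, and the split threshold is chosen to minimize squared error.

```python
# Illustrative regression stump: one split on a numeric feature, with each
# leaf predicting the mean target value of the examples that fall into it.
def fit_stump(xs, ys):
    """Return (threshold, left_mean, right_mean) minimizing squared error."""
    best = None
    for t in sorted(set(xs))[:-1]:                 # candidate split points
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

def predict(stump, x):
    """Route x to a leaf and return that leaf's mean target value."""
    t, lm, rm = stump
    return lm if x <= t else rm
```

A full regression tree applies this split search recursively to each side; the classification-tree counterpart stores a class label (or class distribution) in each leaf instead of a mean.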

2011


  1. Breiman L, Friedman JH, Olshen R, Stone C (1984) Classification and regression trees. Wadsworth & Brooks, Pacific Grove
  2. Kass GV (1980) An exploratory technique for investigating large quantities of categorical data. Appl Stat 29:119–127
  3. Hunt EB, Marin J, Stone PJ (1966) Experiments in induction. Academic, New York
  4. Quinlan JR (1983) Learning efficient classification procedures and their application to chess end games. In: Michalski RS, Carbonell JG, Mitchell TM (eds) Machine learning. An artificial intelligence approach, Tioga, Palo Alto, pp 463–482
  5. Quinlan JR (1986) Induction of decision trees. Mach Learn 1:81–106
  6. Breiman L (2001) Random forests. Mach Learn 45(1):5–32
  7. Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: Saitta L (ed) Proceedings of the 13th International Conference on Machine Learning, Bari. Morgan Kaufmann, pp 148–156