Classification Tree Learning Algorithm
A [[Classification Tree Learning Algorithm]] is a [[decision tree learning algorithm]] that is a [[supervised classification algorithm]].
* <B>Context:</B>
** It can (typically) make use of a [[Classification Tree Splitting Criterion]] (such as the information gain measure sketched below).
** It can (typically) make use of a [[Classification Tree Pruning Function]].
** It can be implemented by a [[Classification Tree Learning System]] (to solve a [[classification tree learning task]], which requires [[classification tree]]s).
** It can be a [[Symbolic Learning Algorithm]].
** …
* <B>Example(s):</B>
** [[C4.5 Algorithm]].
** [[ID3 Algorithm]].
** [[FOIL Algorithm]].
** [[CART Algorithm]].
** …
* <B>Counter-Example(s):</B>
** a [[Ranking Tree Learning Algorithm]].
** a [[Regression Tree Learning Algorithm]].
* <B>See:</B> [[Random Forests Algorithm]], [[Kernel-based Classification Algorithm]].
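One common choice of [[Classification Tree Splitting Criterion]] is information gain: the mutual information between a candidate feature <math>x_j</math> and the class <math>y</math>. As a minimal sketch, for a Boolean feature it can be written as:
: <math>\operatorname{InfoGain}(x_j, y) = H(y) - \sum_{v \in \{0,1\}} P(x_j = v)\, H(y \mid x_j = v),</math>
where <math>H</math> denotes Shannon entropy. This is the criterion used by the LearnDT pseudocode quoted in the References below.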
----
----
== References ==

=== 2012 ===
* ([[2012_AFewUsefulThingstoKnowAboutMach|Domingos, 2012]]) ⇒ [[Pedro Domingos]]. ([[2012]]). “[http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf A Few Useful Things to Know About Machine Learning].” In: [[Communications of the ACM Journal]], 55(10). [http://dx.doi.org/10.1145/2347736.2347755 doi:10.1145/2347736.2347755]
** QUOTE: Algorithm 1 (below) shows a bare-bones decision tree learner for Boolean domains, using information gain and greedy search. InfoGain(<math>x_j, y</math>) is the mutual information between feature <math>x_j</math> and the class <math>y</math>. MakeNode(<math>x, c_0, c_1</math>) returns a node that tests feature <math>x</math> and has <math>c_0</math> as the child for <math>x = 0</math> and <math>c_1</math> as the child for <math>x = 1</math>.
 <B>LearnDT</B>(TrainSet)
 if all examples in TrainSet have the same class y_* <B>then</B>
     return MakeLeaf(y_*)
 if no feature x_j has InfoGain(x_j, y) > 0 <B>then</B>
     y_* ← Most frequent class in TrainSet
     return MakeLeaf(y_*)
 x_* ← argmax_{x_j} InfoGain(x_j, y)
 TS_0 ← Examples in TrainSet with x_* = 0
 TS_1 ← Examples in TrainSet with x_* = 1
 return MakeNode(x_*, LearnDT(TS_0), LearnDT(TS_1))
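The LearnDT pseudocode above can be transcribed into a short runnable Python sketch. The representation below is an assumption made for illustration (it is not code from Domingos, 2012): each training example is an (x, y) pair where x is a dict mapping feature names to 0/1 values, MakeLeaf is a bare class label, and MakeNode is a (feature, child_for_0, child_for_1) tuple.
 from collections import Counter
 from math import log2
 
 def entropy(labels):
     """Shannon entropy H(y) of a sequence of class labels."""
     n = len(labels)
     return -sum((c / n) * log2(c / n) for c in Counter(labels).values())
 
 def info_gain(values, labels):
     """InfoGain(x_j, y): mutual information between a Boolean feature and the class."""
     n = len(labels)
     gain = entropy(labels)
     for v in (0, 1):
         subset = [y for x, y in zip(values, labels) if x == v]
         if subset:
             gain -= (len(subset) / n) * entropy(subset)
     return gain
 
 def learn_dt(train_set, features):
     """Greedy decision-tree learner for Boolean domains (LearnDT above)."""
     labels = [y for _, y in train_set]
     if len(set(labels)) == 1:          # all examples share one class -> MakeLeaf(y_*)
         return labels[0]
     gains = {f: info_gain([x[f] for x, _ in train_set], labels) for f in features}
     best = max(gains, key=gains.get)   # x_* <- argmax InfoGain(x_j, y)
     if gains[best] <= 0:               # no informative feature -> majority-class leaf
         return Counter(labels).most_common(1)[0][0]
     ts0 = [(x, y) for x, y in train_set if x[best] == 0]
     ts1 = [(x, y) for x, y in train_set if x[best] == 1]
     return (best, learn_dt(ts0, features), learn_dt(ts1, features))
 
 # Usage: learning Boolean AND yields ('x1', 0, ('x2', 0, 1)).
 data = [({'x1': a, 'x2': b}, a & b) for a in (0, 1) for b in (0, 1)]
 print(learn_dt(data, ['x1', 'x2']))
Note that a feature is constant within each subset it induces, so its gain there is zero; the zero-gain guard therefore also acts as the stopping rule, and no explicit feature-removal step is needed.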
[[Category:Concept]]