SVM-based Classification Algorithm


An SVM-based Classification Algorithm is an SVM training algorithm that is a supervised classification algorithm.
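A minimal sketch of such an algorithm, assuming scikit-learn's SVC and the Iris dataset purely for illustration (the kernel and cost parameter below are example choices, not prescribed by this page):

  # Illustrative sketch: an SVM training algorithm used as a supervised classifier.
  # The dataset, kernel, and cost parameter C are assumptions for the example.
  from sklearn.datasets import load_iris
  from sklearn.model_selection import train_test_split
  from sklearn.svm import SVC

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  clf = SVC(kernel="rbf", C=1.0)    # SVM-based classifier
  clf.fit(X_train, y_train)         # supervised training on labeled examples
  print(clf.score(X_test, y_test))  # held-out classification accuracy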



References

2004

  • (Hastie et al., 2004) ⇒ Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. (2004). “The Entire Regularization Path for the Support Vector Machine.” In: The Journal of Machine Learning Research, 5.
    • QUOTE: The support vector machine (SVM) is a widely used tool for classification. Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model.
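    As a rough illustration of the quote's point that the cost parameter can be critical, the sketch below fits separate SVM models over a grid of C values and compares cross-validated accuracy; it does not implement the regularization-path algorithm of Hastie et al., and the dataset, kernel, and grid are assumptions chosen for the example.

  # Illustrative sweep over the SVM cost parameter C; each value is fitted
  # as a separate model (not the Hastie et al. path-following algorithm).
  import numpy as np
  from sklearn.datasets import load_breast_cancer
  from sklearn.model_selection import cross_val_score
  from sklearn.pipeline import make_pipeline
  from sklearn.preprocessing import StandardScaler
  from sklearn.svm import SVC

  X, y = load_breast_cancer(return_X_y=True)
  for C in np.logspace(-2, 2, 5):
      model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=C))
      scores = cross_val_score(model, X, y, cv=5)
      print(f"C={C:g}: mean CV accuracy = {scores.mean():.3f}")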

2006