# Support Vector-based Classification Model Training Algorithm

A Support Vector-based Classification Model Training Algorithm is a Support Vector-based Model Training Algorithm that can produce a Support Vector-based Classification Model.

**AKA:** SVM Classifier, SVM-based Classifier, SVM Classification Algorithm.

**Context:**
- It can be a Kernel-based Predictive Classifier.
- It uses Support Vectors to define the Decision Boundary.
- It can be based on the Margin between the two Classes.
- The optimal hyperplane is the one with the maximal margin of separation between the two classes.
- It can be learned by a Support Vector Machine Learning Algorithm.
- It can be:
- …
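The ideas above can be sketched in code. This is a minimal, hypothetical example assuming scikit-learn's `SVC` is available; the data points are illustrative, not from the source:

```python
# Sketch: training a linear support-vector classifier on toy data.
# Assumes scikit-learn is installed; the data points are hypothetical.
from sklearn.svm import SVC

# Two linearly separable classes in 2-D
X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0],
     [3.0, 3.0], [4.0, 4.0], [3.0, 4.0]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Only the support vectors (the points nearest the boundary)
# define the learned maximum-margin decision boundary.
print(len(clf.support_vectors_))
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # prints [0 1]
```

Note that the decision boundary depends only on the support vectors; removing any other training point would leave the fitted classifier unchanged.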

**Counter-Example(s):**
- an SVM Ranker.
- an SVM Estimator.

**See:** Kernel Function, Supervised Machine Learning Algorithm, SVMlight, Maximum-Margin Hyperplane, Maximum-Margin Classifier.

## References

### 2009

- http://en.wikipedia.org/wiki/Support_vector_machine#Motivation
- Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a *new* data point will be in. In the case of support vector machines, a data point is viewed as a *p*-dimensional vector (a list of *p* numbers), and we want to know whether we can separate such points with a (*p* − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the *maximum-margin hyperplane* and the linear classifier it defines is known as a *maximum margin classifier*.
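The geometric claim in this passage can be checked numerically: the distance from a point x to a hyperplane w·x + b = 0 is |w·x + b| / ‖w‖, and the margin of a separating hyperplane is the smallest such distance over the data. A small sketch with hypothetical points and a hypothetical hyperplane:

```python
# Sketch: computing the margin of a given separating hyperplane.
# The points and the hyperplane below are hypothetical examples.
import math

# Hyperplane w.x + b = 0 in 2-D: here x1 + x2 - 4 = 0
w = [1.0, 1.0]
b = -4.0

# Labeled points: +1 on one side of the hyperplane, -1 on the other
points = [([1.0, 1.0], -1), ([1.0, 0.0], -1),
          ([3.0, 3.0], +1), ([3.0, 4.0], +1)]

norm_w = math.hypot(*w)

def signed_distance(x):
    # Signed distance of x from the hyperplane w.x + b = 0
    return (w[0] * x[0] + w[1] * x[1] + b) / norm_w

# The hyperplane separates the classes iff every signed distance has
# the same sign as its label; the margin is the minimum such distance.
margin = min(label * signed_distance(x) for x, label in points)
print(round(margin, 4))  # distance to the nearest point on each side
```

A maximum-margin classifier chooses w and b to make this minimum distance as large as possible over all separating hyperplanes.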

- http://www.kernel-methods.net/tutorials/KMtalk.pdf
- http://www.cs.cornell.edu/Courses/cs478/2007sp/lectures/08-svm_kernels.pdf
- http://www.cs.unm.edu/~jmk/cs531/PatternRec-SVM.ppt

### 2004

- (Moschitti, 2004) ⇒ Alessandro Moschitti. (2004). “A study on Convolution Kernels for Shallow Semantic Parsing.” In: Proceedings of the 42nd Conference on Association for Computational Linguistic (ACL 2004).
- (Hastie et al., 2004) ⇒ Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. (2004). “The Entire Regularization Path for the Support Vector Machine.” In: The Journal of Machine Learning Research, 5.
- The **support vector machine** (SVM) is a widely used tool for **classification**. Many efficient implementations exist for fitting a **two-class SVM model**. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters.
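As the passage notes, the user must supply tuning parameters when fitting a two-class SVM. A hedged sketch using scikit-learn's `SVC`, where `C` is the regularization cost and `gamma` is the RBF kernel parameter; the data and parameter values are illustrative, not recommendations:

```python
# Sketch: the user-supplied tuning parameters of a two-class SVM.
# Assumes scikit-learn; data and parameter values are illustrative.
from sklearn.svm import SVC

# Two well-separated clusters, one per class
X = [[0, 0], [0, 1], [1, 0], [1, 1],
     [4, 4], [4, 5], [5, 4], [5, 5]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# C is the regularization cost parameter; gamma parameterizes
# the RBF kernel. Both are chosen by the user (e.g. via a grid
# search), not learned from the training data directly.
clf = SVC(kernel="rbf", C=10.0, gamma=0.5)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on this separable toy set
```

In practice these parameters are selected by cross-validation; the cited Hastie et al. (2004) paper studies how the fitted model varies as the regularization parameter changes.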


### 2001

- (Schölkopf & Smola, 2001) ⇒ Bernhard Schölkopf, and Alexander J. Smola. (2002). “Learning With Kernels." MIT Press. ISBN:0262194759

### 1999

- (Joachims, 1999) ⇒ Thorsten Joachims. (1999). “Making Large-Scale SVM Learning Practical.” In: Advances in Kernel Methods - Support Vector Learning, Bernhard Schölkopf, C. Burges, and Alexander J. Smola (ed.), MIT Press.

### 1995

- (Vapnik, 1995) ⇒ Vladimir N. Vapnik. (1995). “The Nature of Statistical Learning Theory.” Springer. ISBN:0387945598.