Support Vector Machine (SVM) Training Algorithm

From GM-RKB

A Support Vector Machine (SVM) Training Algorithm is a kernel-based supervised learning algorithm that can produce a support vector-based predictive model.



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Support_vector_machine Retrieved:2018-4-8.
    • In machine learning, support vector machines (SVMs, also support vector networks[1]) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
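The training problem described above can be sketched in a few lines of NumPy. This is an illustrative subgradient-descent minimization of the primal hinge-loss objective, not the standard quadratic-programming solver; the function names (`train_linear_svm`, `predict`) and hyperparameter defaults are hypothetical choices for the sketch.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM by subgradient descent on the hinge loss.

    X: (n, d) array of examples; y: labels in {-1, +1}.
    Minimizes lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w + b))).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # examples that violate the margin
        grad_w = lam * w - (y[mask] @ X[mask]) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    # Side of the separating hyperplane determines the category.
    return np.sign(X @ w + b)

# Toy linearly separable data: two examples per category.
X = np.array([[-2.0, 0.0], [-1.0, -1.0], [1.0, 1.0], [2.0, 0.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w, b = train_linear_svm(X, y)
```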

      In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
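The kernel trick can be made concrete with a small equivalence check, assuming NumPy: the polynomial kernel (x·z + 1)² evaluated directly in a 2-dimensional input space equals an inner product under an explicit 6-dimensional feature map, which the kernel-based computation never has to construct.

```python
import numpy as np

def poly_kernel(x, z):
    # (x . z + 1)^2, computed entirely in the input space: O(d) work.
    return (x @ z + 1.0) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D input; the kernel trick
    # avoids ever forming this higher-dimensional vector.
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     1.0])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
k_implicit = poly_kernel(x, z)   # kernel evaluation in input space
k_explicit = phi(x) @ phi(z)     # inner product in feature space
```

Both routes give the same number, which is why an SVM can operate linearly in the high-dimensional feature space while only ever computing kernel values on the original inputs.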

      When data are not labeled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data into groups and then map new data to these formed groups. The support vector clustering[2] algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machine algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications.

2011

  • Yoshua Bengio. (2011). "Why is kernelized SVM much slower than linear SVM?." In: Quora
    • QUOTE: ... Basically, a kernel-based SVM requires on the order of [math]n^2[/math] computation for training and order of [math]nd[/math] computation for classification, where n is the number of training examples and d the input dimension (and assuming that the number of support vectors ends up being a fraction of n, which is shown to be expected in theory and in practice). Instead, a 2-class linear SVM requires on the order of [math]nd[/math] computation for training (times the number of training iterations, which remains small even for large n) and on the order of [math]d[/math] computations for classification. ...
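Bengio's complexity comparison can be seen in the sizes of the objects each method manipulates. A small NumPy sketch (dimensions chosen arbitrarily for illustration): kernel-based training must work, at least implicitly, with the n × n Gram matrix, while a linear SVM only maintains a d-dimensional weight vector and touches each example once per pass.

```python
import numpy as np

n, d = 1000, 20
X = np.random.default_rng(0).standard_normal((n, d))

# Kernel SVM training operates (implicitly) on the n x n Gram matrix
# of pairwise kernel values -- storage and work grow with n^2.
gram = X @ X.T

# A linear SVM maintains only a d-dimensional weight vector; one
# pass over the data costs O(n * d) multiplies, and classifying a
# single new example costs O(d).
w = np.zeros(d)
scores = X @ w
```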

1971

  • (Vapnik & Chervonenkis, 1971) ⇒ Vladimir N. Vapnik, and A. Chervonenkis. (1971). “On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities.” Theory of Probability and Its Applications.

  1. Cortes, Corinna; Vapnik, Vladimir N. (1995). “Support-vector Networks.” Machine Learning. 20 (3): 273–297. doi:10.1007/BF00994018.
  2. Ben-Hur, Asa; Horn, David; Siegelmann, Hava; Vapnik, Vladimir N. (2001). “Support Vector Clustering.” Journal of Machine Learning Research. 2: 125–137.