Kernel-based SVM Algorithm
Latest revision as of 17:59, 23 August 2023
A Kernel-based SVM Algorithm is an SVM algorithm that ...
- Example(s):
  - a Radial-Kernel SVM.
- See: Linear SVM.
References
2016
- https://www.quora.com/Why-is-kernelized-SVM-much-slower-than-linear-SVM
- QUOTE: Basically, a kernel-based SVM requires on the order of n^2 computation for training and on the order of nd computation for classification, where n is the number of training examples and d the input dimension (and assuming that the number of support vectors ends up being a fraction of n, which is shown to be expected in theory and in practice). Instead, a 2-class linear SVM requires on the order of nd computation for training (times the number of training iterations, which remains small even for large n) and on the order of d computation for classification.
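The classification costs quoted above can be illustrated with a minimal sketch (hypothetical code, not from the source): a kernel SVM must evaluate one kernel per support vector at prediction time, O(n_sv * d), while a linear SVM needs only a single dot product, O(d), regardless of training-set size. The function names and toy values below are illustrative assumptions.

```python
import math

def rbf_kernel(u, v, gamma=0.5):
    # Radial (RBF) kernel: exp(-gamma * ||u - v||^2); O(d) per evaluation.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_decision(support_vectors, dual_coefs, b, x):
    # Kernel SVM decision value: one kernel evaluation per support vector,
    # so O(n_sv * d) per query, where n_sv grows with n.
    return sum(c * rbf_kernel(sv, x)
               for sv, c in zip(support_vectors, dual_coefs)) + b

def linear_decision(w, b, x):
    # Linear SVM decision value: a single dot product, O(d) per query,
    # independent of the number of training examples.
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

For example, with an explicit weight vector w = [1, 0] the linear decision on x = [2, 3] costs two multiplications, while the kernel decision must touch every stored support vector, which is why kernelized prediction slows down as the training set grows.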