Kernel-based SVM Algorithm


A Kernel-based SVM Algorithm is an SVM algorithm that uses a kernel function to compare training examples, implicitly operating in a high-dimensional feature space without ever computing the feature mapping explicitly (the kernel trick).
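
One way to see where the complexity figures cited in the reference below come from (an added sketch, not part of the source page): a trained kernel SVM classifies a new point x by evaluating

    f(x) = \operatorname{sign}\Big( \sum_{i \in \mathrm{SV}} \alpha_i \, y_i \, K(x_i, x) + b \Big)

where SV is the set of support vectors. Each kernel evaluation K(x_i, x) costs on the order of d operations, so when the number of support vectors grows as a fraction of n, classifying one point costs on the order of nd operations, which is the figure quoted below.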



References

2016

  • https://www.quora.com/Why-is-kernelized-SVM-much-slower-than-linear-SVM
    • QUOTE: Basically, a kernel-based SVM requires on the order of n^2 computation for training and on the order of nd computation for classification, where n is the number of training examples and d the input dimension (and assuming that the number of support vectors ends up being a fraction of n, which is shown to be expected in theory and in practice). Instead, a 2-class linear SVM requires on the order of nd computation for training (times the number of training iterations, which remains small even for large n) and on the order of d computations for classification.
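
The scaling contrast in the quote can be observed empirically. Below is a minimal timing sketch (added here, not from the source) using scikit-learn's SVC (kernelized) and LinearSVC estimators; the synthetic dataset of n=5000 examples and d=50 features is an arbitrary choice for illustration.

    import time

    from sklearn.datasets import make_classification
    from sklearn.svm import SVC, LinearSVC

    # Synthetic 2-class data: n training examples of dimension d.
    X, y = make_classification(n_samples=5000, n_features=50, random_state=0)

    for name, model in [("kernelized SVM (RBF)", SVC(kernel="rbf")),
                        ("linear SVM", LinearSVC())]:
        start = time.perf_counter()
        model.fit(X, y)  # kernel SVM training scales roughly with n^2, linear SVM with nd
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.2f}s to train on n=5000, d=50")

Exact timings depend on the solver, hardware, and data, but increasing n_samples should widen the gap roughly in line with the quoted n^2-versus-nd training costs.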