Kernel-based Learning Algorithm

A [[Kernel-based Learning Algorithm]] is a [[supervised learning algorithm]] that uses a [[kernel function]] (a [[similarity function]] over pairs of instances that implicitly maps them into a high-dimensional [[feature space]] while remaining cheap to compute in the original space, typically as an [[inner product]]).
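
This idea can be shown with a minimal sketch (illustrative code, not from the source; the helper names phi and poly_kernel are hypothetical): for the degree-2 homogeneous [[polynomial kernel]] k(x, z) = (x · z)² on R², evaluating the kernel in the original 2-D space gives exactly the inner product of an explicit 3-D feature map.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial kernel on R^2."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

def poly_kernel(x, z):
    """Kernel evaluated in the original 2-D space: k(x, z) = (x . z)^2."""
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

# Both quantities agree: the kernel computes an inner product in the
# 3-D feature space without ever constructing phi(x) explicitly.
print(poly_kernel(x, z))       # 16.0
print(np.dot(phi(x), phi(z)))  # 16.0
```
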
== References ==

=== 2017 ===
* (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Kernel_method Retrieved:2017-1-19.
** In [[machine learning]], '''kernel methods''' are a class of algorithms for [[pattern analysis]], whose best known member is the [[support vector machine]] (SVM). The general task of pattern analysis is to find and study general types of relations (for example [[Cluster analysis|clusters]], [[ranking]]s, [[principal components]], [[correlation]]s, [[Statistical classification|classifications]]) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into [[feature vector]] representations via a user-specified ''feature map'': in contrast, kernel methods require only a user-specified ''kernel'', i.e., a [[similarity function]] over pairs of data points in raw representation.        <P>        Kernel methods owe their name to the use of [[Positive-definite kernel|kernel function]]s, which enable them to operate in a high-dimensional, ''implicit'' feature space without ever computing the coordinates of the data in that space, but rather by simply computing the [[inner product]]s between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "'''kernel trick'''". Kernel functions have been introduced for sequence data, [[Graph kernel|graphs]], text, images, as well as vectors.        <P>        Algorithms capable of operating with kernels include the [[kernel perceptron]], support vector machines (SVM), [[Gaussian process]]es, [[principal components analysis]] (PCA), [[canonical correlation analysis]], [[ridge regression]], [[spectral clustering]], [[Adaptive filter|linear adaptive filter]]s and many others. Any [[linear model]] can be turned into a non-linear model by applying the kernel trick to the model: replacing its features (predictors) by a kernel function.        <P>        Most kernel algorithms are based on [[convex optimization]] or [[Eigenvalue, eigenvector and eigenspace|eigenproblem]]s and are statistically well-founded. Typically, their statistical properties are analyzed using [[statistical learning theory]] (for example, using [[Rademacher complexity]]).
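
The last two points of the quoted passage lend themselves to a short worked sketch (an illustrative example under assumed settings, not part of the cited source; the RBF kernel choice, gamma, lambda, and the toy sine data are all assumptions): [[ridge regression]], a linear model, becomes a non-linear regressor once it is fit in the dual form, where training data enter only through kernel evaluations.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

# Toy 1-D regression task with a non-linear target.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

# Kernel ridge regression: solve (K + lambda * I) alpha = y in the dual,
# so the data appear only through kernel evaluations (the kernel trick).
lam = 0.1
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predict at new points: f(x) = sum_i alpha_i * k(x_i, x).
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = rbf_kernel(X_test, X) @ alpha
print(np.round(y_pred, 2))  # close to sin() at the five test points
```
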


=== 2016 ===


=== 2011 ===

=== 2006 ===

=== 2004 ===