Kernel Matrix


A Kernel Matrix is a symmetric, positive semidefinite matrix whose [math](i,j)[/math] entry is the value of a kernel function evaluated on the [math]i[/math]-th and [math]j[/math]-th data points; it thereby encodes the pairwise similarities, and hence the relative positions, of all points in the kernel's implicit feature space.
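
A minimal sketch of how such a matrix can be built, assuming NumPy and using a Gaussian (RBF) kernel purely as an illustrative choice:

    import numpy as np

    def rbf_kernel(x, z, gamma=0.5):
        # Gaussian (RBF) kernel: k(x, z) = exp(-gamma * ||x - z||^2)
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernel_matrix(X, kernel):
        # K[i, j] = kernel(x_i, x_j) for every pair of rows of X.
        n = X.shape[0]
        K = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                K[i, j] = kernel(X[i], X[j])
        return K

    X = np.random.default_rng(0).normal(size=(5, 3))  # five points in R^3
    K = kernel_matrix(X, rbf_kernel)

    print(np.allclose(K, K.T))                      # symmetric
    print(np.all(np.linalg.eigvalsh(K) >= -1e-10))  # positive semidefinite (up to rounding)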



References

2018a

2018b

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Kernel_method#Mathematics:_the_kernel_trick Retrieved:2018-8-10.
    • Theoretically, a Gram matrix [math] \mathbf{K} \in \mathbb{R}^{n \times n} [/math] with respect to [math] \{\mathbf{x}_1, \dotsc, \mathbf{x}_n\} [/math] (sometimes also called a "kernel matrix"), where [math] K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j) [/math], must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function [math] k [/math] that do not satisfy Mercer's condition may still perform reasonably if [math] k [/math] at least approximates the intuitive idea of similarity. Regardless of whether [math] k [/math] is a Mercer kernel, [math] k [/math] may still be referred to as a "kernel". If the kernel function [math] k [/math] is also a covariance function as used in Gaussian processes, then the Gram matrix [math] \mathbf{K} [/math] can also be called a covariance matrix.
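
      A quick empirical check of this PSD requirement on a given sample is to inspect the eigenvalues of the Gram matrix. The sketch below assumes NumPy; the tanh similarity used for contrast is a standard example of a function that need not satisfy Mercer's condition:

          import numpy as np

          def gram_matrix(X, k):
              # K_ij = k(x_i, x_j) over all pairs of rows of X.
              return np.array([[k(xi, xj) for xj in X] for xi in X])

          def is_psd(K, tol=1e-10):
              # A symmetric matrix is PSD iff all its eigenvalues are >= 0 (up to tolerance).
              return bool(np.all(np.linalg.eigvalsh((K + K.T) / 2) >= -tol))

          X = np.random.default_rng(1).normal(size=(20, 4))

          rbf = lambda x, z: np.exp(-0.5 * np.sum((x - z) ** 2))  # a Mercer kernel
          sig = lambda x, z: np.tanh(2.0 * (x @ z) + 1.0)         # tanh similarity: not PSD in general

          print(is_psd(gram_matrix(X, rbf)))  # True: RBF Gram matrices are always PSD
          print(is_psd(gram_matrix(X, sig)))  # may be False on a given sample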

2018c

2018d

2018e

2017a

2017b

2016

2015

2007

  • (Nguyen & Ho, 2007) ⇒ Canh Hao Nguyen, and Tu Bao Ho (2007, January). "Kernel Matrix Evaluation". In IJCAI (pp. 987-992).
    • QUOTE: Training example set [math]\{x_i\}_{i=1,\cdots, n} \subset X [/math] with the corresponding target vector [math]y=\{ y_i \}^T_{i=1, \cdots, n} \in \{-1, 1\}^n[/math]. Suppose that [math]y_1 = \cdots = y_{n_+} = 1[/math] and [math]y_{n_+ + 1} = \cdots = y_{n_+ + n_-} = -1[/math]; [math]n_+[/math] examples belong to class [math]1[/math], [math]n_-[/math] examples belong to class [math]-1[/math], and [math]n_+ + n_- = n[/math]. Under a feature map [math]\phi[/math], the kernel matrix is defined as:

      [math]K = \{k_{ij} = \langle \phi(x_i), \phi(x_j)\rangle\}_{i=1, \cdots, n,\;j=1,\cdots, n}[/math]
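
      As a concrete (hypothetical) instance of this definition, the sketch below uses the explicit degree-2 feature map [math]\phi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)[/math] on [math]\mathbb{R}^2[/math], for which the resulting kernel matrix coincides with the one given directly by the polynomial kernel [math]k(x, z) = \langle x, z\rangle^2[/math] (NumPy assumed):

          import numpy as np

          def phi(x):
              # Explicit degree-2 feature map on R^2 (an illustrative choice):
              # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), so <phi(x), phi(z)> = <x, z>^2.
              return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

          rng = np.random.default_rng(0)
          X = rng.normal(size=(6, 2))          # n = 6 training examples
          y = np.array([1, 1, 1, -1, -1, -1])  # n_+ = 3 positives followed by n_- = 3 negatives

          Phi = np.array([phi(x) for x in X])  # images of the examples under the feature map
          K = Phi @ Phi.T                      # kernel matrix K_ij = <phi(x_i), phi(x_j)>

          print(np.allclose(K, (X @ X.T) ** 2))  # True: same matrix as k(x, z) = <x, z>^2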

2004



  1. Theorem 7.2.10 Let [math] v_1,\ldots,v_m [/math] be vectors in an inner product space with inner product [math] \langle{\cdot,\cdot}\rangle [/math] and let [math] G = [\langle{v_j,v_i}\rangle]_{i,j=1}^m \in M_m [/math]. Then
    (a) [math] G [/math] is Hermitian and positive semidefinite;
    (b) [math] G [/math] is positive definite if and only if the vectors [math] v_1,\ldots,v_m [/math] are linearly independent;
    (c) [math] \operatorname{rank}G=\dim\operatorname{span}\{v_1,\ldots,v_m\} [/math].
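
    A numerical illustration of the three parts of this theorem, assuming NumPy; the vectors below are arbitrary, with the third chosen as the sum of the first two so that the set is linearly dependent:

        import numpy as np

        # v3 = v1 + v2, so the three vectors span only a 2-dimensional subspace of R^3.
        V = np.array([[1.0, 0.0, 2.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 1.0, 3.0]])

        G = V @ V.T  # Gram matrix G_ij = <v_j, v_i> (real case, so G is symmetric)

        eigs = np.linalg.eigvalsh(G)
        print(np.all(eigs >= -1e-10))  # (a) True: G is positive semidefinite
        print(np.all(eigs > 1e-10))    # (b) False: not positive definite, since the v_i are dependent
        print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(V))  # (c) both 2 = dim span{v_1, v_2, v_3}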