Karhunen-Loève Transform


A Karhunen-Loève Transform is a representation of a stochastic process as an infinite linear combination of orthogonal functions (analogous to a Fourier series representation of a function on a bounded interval).
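For concreteness: the Wiener process on [0, 1] has a closed-form Karhunen-Loève basis, with eigenfunctions ek(t) = √2 sin((k − 1/2)πt) and eigenvalues λk = ((k − 1/2)π)⁻². The NumPy sketch below samples approximate Wiener paths by truncating this expansion; the grid size, truncation order, and variable names are illustrative choices, not taken from the sources quoted here.

```python
import numpy as np

# Sample approximate Wiener-process paths on [0, 1] by truncating the
# Karhunen-Loeve expansion X_t = sum_k Z_k e_k(t), where the Z_k are
# independent N(0, lambda_k) coefficients.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)                      # time grid on [0, 1]
K = 200                                             # truncation order

k = np.arange(1, K + 1)
freqs = (k - 0.5) * np.pi                           # (k - 1/2) * pi
basis = np.sqrt(2.0) * np.sin(np.outer(t, freqs))   # e_k(t), shape (len(t), K)

xi = rng.standard_normal(K)                         # i.i.d. standard normal
path = basis @ (xi / freqs)                         # Z_k = xi_k * sqrt(lambda_k), sqrt(lambda_k) = 1/freqs
```

Increasing K makes the sampled paths converge (in mean square) to true Wiener paths, illustrating the sense in which the expansion represents the process.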



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Karhunen–Loève_theorem Retrieved:2015-2-16.
    • In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. Stochastic processes given by infinite series of this form were first considered by Damodar Dharmananda Kosambi. There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error.

      In contrast to a Fourier series, where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen–Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. One can think of the Karhunen–Loève transform as adapting to the process in order to produce the best possible basis for its expansion.

      In the case of a centered stochastic process {Xt}t ∈ [a, b] (centered means E[Xt] = 0 for all t ∈ [a, b]) satisfying a technical continuity condition, Xt admits a decomposition :[math]\displaystyle{ X_t = \sum_{k=1}^\infty Z_k e_k(t) }[/math]

      where the Zk are pairwise uncorrelated random variables and the functions ek are continuous real-valued functions on [a, b] that are pairwise orthogonal in L2([a, b]). It is therefore sometimes said that the expansion is bi-orthogonal, since the random coefficients are orthogonal in the probability space while the deterministic functions are orthogonal in the time domain. The general case of a process that is not centered can be brought back to the case of a centered process by considering Xt − E[Xt], which is a centered process.

      Moreover, if the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the Karhunen–Loève transform. An important example of a centered real stochastic process on [0, 1] is the Wiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions.

      The above expansion into uncorrelated random variables is also known as the Karhunen–Loève expansion or Karhunen–Loève decomposition. The empirical version (i.e., with the coefficients computed from a sample) is known as the Karhunen–Loève transform (KLT), principal component analysis, proper orthogonal decomposition (POD), empirical orthogonal functions (a term used in meteorology and geophysics), or the Hotelling transform.
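The empirical version can be sketched in a few lines of NumPy: estimate the sample covariance, eigendecompose it, and project the centered data. The data matrix, shapes, and seed below are illustrative assumptions, not from the quoted source.

```python
import numpy as np

# Empirical Karhunen-Loeve transform (equivalently PCA): diagonalize the
# sample covariance and project the centered data onto the eigenvectors.

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))  # rows = observations

Xc = X - X.mean(axis=0)                   # center the data (empirical E[X] = 0)
Sigma = (Xc.T @ Xc) / (len(Xc) - 1)       # sample covariance matrix

lam, Phi = np.linalg.eigh(Sigma)          # eigenvalues/eigenvectors (ascending)
lam, Phi = lam[::-1], Phi[:, ::-1]        # reorder by decreasing variance

Y = Xc @ Phi                              # empirical KL coefficients

# The coefficients are empirically uncorrelated: cov(Y) is diagonal with
# the eigenvalues on the diagonal, mirroring the bi-orthogonality above.
assert np.allclose(np.cov(Y, rowvar=False), np.diag(lam))
```

This is exactly the eigenvector construction spelled out in the 2013 reference below.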


2013

  • http://fourier.eng.hmc.edu/e161/lectures/klt/node3.html
    • QUOTE: Now we consider the Karhunen-Loeve Transform (KLT) (also known as Hotelling Transform and Eigenvector Transform), which is closely related to the Principal Component Analysis (PCA) and widely used in data analysis in many fields.

      Let [math]\displaystyle{ {\bf\phi}_k }[/math] be the eigenvector corresponding to the kth eigenvalue [math]\displaystyle{ \lambda_k }[/math] of the covariance matrix [math]\displaystyle{ {\bf\Sigma}_x }[/math], i.e., :[math]\displaystyle{ {\bf\Sigma}_x {\bf\phi}_k=\lambda_k{\bf\phi}_k\;\;\;\;\;\;(k=1,\cdots,N) }[/math] or in matrix form: :[math]\displaystyle{ \left[ \begin{array}{ccc}\cdots &\cdots &\cdots \\ \cdots & \sigma_{ij} &\cdots \\ \cdots &\cdots &\cdots \end{array} \right] \left[ \begin{array}{c} \\ {\bf\phi}_k \\ \\ \end{array} \right]=\lambda_k\left[ \begin{array}{c} \\ {\bf\phi}_k \\ \\ \end{array} \right] \;\;\;\;\;\;(k=1,\cdots,N) }[/math]

      As the covariance matrix [math]\displaystyle{ {\bf\Sigma}_x={\bf\Sigma}_x^{*T} }[/math] is Hermitian (symmetric if [math]\displaystyle{ {\bf x} }[/math] is real), its eigenvectors [math]\displaystyle{ {\bf\phi}_i }[/math] are orthogonal: :[math]\displaystyle{ \langle {\bf\phi}_i,{\bf\phi}_j\rangle={\bf\phi}^T_i {\bf\phi}^*_j = \left\{ \begin{array}{ll} 1 & i=j \\ 0 & i\ne j \end{array} \right. }[/math] and we can construct an [math]\displaystyle{ N \times N }[/math] unitary (orthogonal if [math]\displaystyle{ {\bf x} }[/math] is real) matrix [math]\displaystyle{ {\bf\Phi}\stackrel{\triangle}{=}[{\bf\phi}_1, \cdots,{\bf\phi}_{N}] }[/math] satisfying :[math]\displaystyle{ {\bf\Phi}^{*T} {\bf\Phi} = {\bf I},\;\;\;\;\mbox{i.e.,}\;\;\;\; {\bf\Phi}^{-1}={\bf\Phi}^{*T} }[/math]

      The [math]\displaystyle{ N }[/math] eigen-equations above can be combined and expressed as :[math]\displaystyle{ {\bf\Sigma}_x{\bf\Phi}={\bf\Phi}{\bf\Lambda} }[/math] or in matrix form: :[math]\displaystyle{ \left[ \begin{array}{ccc}\ddots &\cdots &\cdots \\ \vdots & \sigma_{ij} &\vdots \\ \cdots &\cdots &\ddots \end{array} \right] [{\bf\phi}_1,\cdots,{\bf\phi}_{N}]=[{\bf\phi}_1,\cdots,{\bf\phi}_{N}] \left[ \begin{array}{ccc} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_{N} \end{array} \right] }[/math] where [math]\displaystyle{ {\bf\Lambda}=diag(\lambda_1, \cdots, \lambda_{N}) }[/math] is a diagonal matrix. Left-multiplying both sides by [math]\displaystyle{ {\bf\Phi}^{*T}={\bf\Phi}^{-1} }[/math], the covariance matrix [math]\displaystyle{ {\bf\Sigma}_x }[/math] can be diagonalized: :[math]\displaystyle{ {\bf\Phi}^{*T}{\bf\Sigma}_x{\bf\Phi}={\bf\Phi}^{-1} {\bf\Sigma}_x {\bf\Phi} = {\bf\Phi}^{-1}{\bf\Phi}{\bf\Lambda}={\bf\Lambda} }[/math]

      Given a signal vector [math]\displaystyle{ {\bf x} }[/math], we can define a unitary (orthogonal if [math]\displaystyle{ {\bf x} }[/math] is real) Karhunen-Loève Transform of [math]\displaystyle{ {\bf x} }[/math] as: :[math]\displaystyle{ {\bf y}=\left[ \begin{array}{c} y_1\\ \vdots \\ y_{N} \end{array} \right] ={\bf\Phi}^{*T} {\bf x}=\left[ \begin{array}{c} {\bf\phi}^{*T}_1 \\ \vdots \\ {\bf\phi}^{*T}_{N} \end{array}\right]\left[\begin{array}{c}x_1\\ \vdots \\ x_N\end{array}\right] }[/math] where the ith component [math]\displaystyle{ y_i }[/math] of the transform vector is the projection of [math]\displaystyle{ {\bf x} }[/math] onto [math]\displaystyle{ {\bf\phi}_i }[/math]: :[math]\displaystyle{ y_i=\langle {\bf\phi}_i,{\bf x} \rangle={\bf\phi}_i^T{\bf x}^* }[/math] Left-multiplying both sides of the transform [math]\displaystyle{ {\bf y}={\bf\Phi}^{*T} {\bf x} }[/math] by [math]\displaystyle{ {\bf\Phi}=({\bf\Phi}^{*T})^{-1} }[/math], we get the inverse transform: :[math]\displaystyle{ {\bf x}={\bf\Phi} {\bf y}=[{\bf\phi}_1,\cdots,{\bf\phi}_{N}] \left[ \begin{array}{c} y_1\\ \vdots \\ y_{N} \end{array} \right] = \sum_{i=1}^{N} y_i {\bf\phi}_i }[/math]

      We see that by this transform, the signal vector [math]\displaystyle{ {\bf x} }[/math] is now expressed in an N-dimensional space spanned by the N eigenvectors [math]\displaystyle{ {\bf\phi}_i }[/math] ([math]\displaystyle{ i=1,\cdots,N }[/math]) as the basis vectors of the space.
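A minimal NumPy sketch of the finite-dimensional transform derived above: eigendecompose a Hermitian covariance, verify the diagonalization, and apply the forward and inverse KLT. The covariance matrix and signal vector below are synthetic placeholders, not data from the source.

```python
import numpy as np

# Unitary Karhunen-Loeve transform of a complex signal vector x:
# diagonalize Sigma_x, then y = Phi^{*T} x and x = Phi y.

rng = np.random.default_rng(2)
N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Sigma_x = A @ A.conj().T                  # Hermitian (positive definite) covariance

lam, Phi = np.linalg.eigh(Sigma_x)        # Sigma_x Phi = Phi Lambda

# Phi is unitary, so Phi^{*T} Sigma_x Phi = Lambda (the diagonalization above).
assert np.allclose(Phi.conj().T @ Sigma_x @ Phi, np.diag(lam))

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = Phi.conj().T @ x                      # forward KLT: y = Phi^{*T} x
x_rec = Phi @ y                           # inverse KLT: x = Phi y = sum_i y_i phi_i
assert np.allclose(x, x_rec)
```

For real signals the conjugate transposes reduce to plain transposes and the transform is orthogonal, matching the parenthetical remarks in the quote.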

1981