Covariance Matrix


A covariance matrix is a symmetric positive semi-definite matrix of covariances between the elements of a random vector (in which the main-diagonal entries are variances and the off-diagonal entries are covariances).
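As a quick numerical illustration (a minimal Python/NumPy sketch with hypothetical values, not part of the source), both defining properties, symmetry and positive semi-definiteness, can be checked directly:

    import numpy as np

    # A concrete 2x2 covariance matrix (hypothetical values): the main
    # diagonal holds the variances, the off-diagonal entry the covariance.
    Sigma = np.array([[4.0, 2.4],
                      [2.4, 9.0]])

    assert np.allclose(Sigma, Sigma.T)             # symmetric
    assert np.all(np.linalg.eigvalsh(Sigma) >= 0)  # positive semi-definite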



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/covariance_matrix Retrieved:2015-2-16.
    • In probability theory and statistics, a covariance matrix (also known as a dispersion matrix or variance–covariance matrix) is a matrix whose element in the i, j position is the covariance between the i-th and j-th elements of a random vector (that is, of a vector of random variables). Each element of the vector is a scalar random variable, either with a finite number of observed empirical values or with a finite or infinite number of potential values specified by a theoretical joint probability distribution of all the random variables.

      Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2×2 matrix would be necessary to fully characterize the two-dimensional variation.

      Because the covariance of the i th random variable with itself is simply that random variable's variance, each element on the principal diagonal of the covariance matrix is the variance of one of the random variables. Because the covariance of the i th random variable with the j th one is the same thing as the covariance of the j th random variable with the i th one, every covariance matrix is symmetric. In addition, every covariance matrix is positive semi-definite.
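The two-dimensional example above can be made concrete. The following sketch (illustrative Python/NumPy code, not from the quoted source) draws correlated 2-D points and shows that the 2×2 covariance matrix captures the cross-variation that the per-axis variances alone miss:

    import numpy as np

    rng = np.random.default_rng(0)

    # y depends linearly on x plus noise, so the points vary along a
    # diagonal direction that neither axis variance describes by itself.
    x = rng.normal(size=1000)
    y = 0.8 * x + rng.normal(scale=0.5, size=1000)
    points = np.column_stack([x, y])

    Sigma = np.cov(points, rowvar=False)
    print(Sigma)  # off-diagonal entries near 0.8 carry the extra information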

2013

  • http://en.wikipedia.org/wiki/Covariance_matrix#Definition
    • QUOTE: Throughout this article, boldfaced unsubscripted X and Y are used to refer to random vectors, and unboldfaced subscripted Xi and Yi are used to refer to random scalars.

      If the entries in the column vector :[math]\displaystyle{ \mathbf{X} = \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix} }[/math] are random variables, each with finite variance, then the covariance matrix Σ is the matrix whose (i, j) entry is the covariance :[math]\displaystyle{ \Sigma_{ij} = \mathrm{cov}(X_i, X_j) = \mathrm{E}\left[ (X_i - \mu_i)(X_j - \mu_j) \right] }[/math] where :[math]\displaystyle{ \mu_i = \mathrm{E}(X_i)\, }[/math] is the expected value of the i-th entry in the vector X. In other words, we have :[math]\displaystyle{ \Sigma = \begin{bmatrix} \mathrm{E}[(X_1 - \mu_1)(X_1 - \mu_1)] & \mathrm{E}[(X_1 - \mu_1)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_1 - \mu_1)(X_n - \mu_n)] \\ \mathrm{E}[(X_2 - \mu_2)(X_1 - \mu_1)] & \mathrm{E}[(X_2 - \mu_2)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_2 - \mu_2)(X_n - \mu_n)] \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{E}[(X_n - \mu_n)(X_1 - \mu_1)] & \mathrm{E}[(X_n - \mu_n)(X_2 - \mu_2)] & \cdots & \mathrm{E}[(X_n - \mu_n)(X_n - \mu_n)] \end{bmatrix}. }[/math] The inverse of this matrix, [math]\displaystyle{ \Sigma^{-1} }[/math], is the inverse covariance matrix, also known as the concentration matrix or precision matrix;[1] see precision (statistics). The elements of the precision matrix have an interpretation in terms of partial correlations and partial variances.

  1. Wasserman, Larry (2004). All of Statistics: A Concise Course in Statistical Inference. ISBN 0-387-40272-1. 
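The entrywise definition quoted above maps directly onto code. Below is a sketch (Python/NumPy, estimating the expectations with a hypothetical sample rather than a theoretical distribution) that builds Σ from E[(X_i − μ_i)(X_j − μ_j)] and inverts it to obtain the precision matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))    # 500 draws of a 3-dimensional vector

    mu = X.mean(axis=0)              # mu_i = E(X_i), estimated by sample means
    D = X - mu                       # centered rows: samples of (X - mu)

    # Sigma_ij = E[(X_i - mu_i)(X_j - mu_j)], averaged over the sample
    # (the n-1 divisor gives the standard unbiased estimate, as np.cov does)
    Sigma = D.T @ D / (len(X) - 1)
    assert np.allclose(Sigma, np.cov(X, rowvar=False))

    # Inverse covariance matrix (concentration / precision matrix)
    precision = np.linalg.inv(Sigma)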


  • http://fourier.eng.hmc.edu/e161/lectures/klt/node3.html
    • QUOTE: … Let [math]\displaystyle{ {\bf\phi}_k }[/math] be the eigenvector corresponding to the k-th eigenvalue [math]\displaystyle{ \lambda_k }[/math] of the covariance matrix [math]\displaystyle{ {\bf\Sigma}_x }[/math], i.e., :[math]\displaystyle{ {\bf\Sigma}_x {\bf\phi}_k=\lambda_k{\bf\phi}_k\;\;\;\;\;\;(k=1,\cdots,N) }[/math] or in matrix form: :[math]\displaystyle{ \left[ \begin{array}{ccc} \cdots & \cdots & \cdots \\ \cdots & \sigma_{ij} & \cdots \\ \cdots & \cdots & \cdots \end{array} \right] \left[ \begin{array}{c} {\bf\phi}_k \end{array} \right] = \lambda_k \left[ \begin{array}{c} {\bf\phi}_k \end{array} \right] \;\;\;\;\;\;(k=1,\cdots,N) }[/math] As the covariance matrix [math]\displaystyle{ {\bf\Sigma}_x={\bf\Sigma}_x^{*T} }[/math] is Hermitian (symmetric if [math]\displaystyle{ {\bf x} }[/math] is real), its eigenvectors [math]\displaystyle{ {\bf\phi}_i }[/math] are orthogonal:
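To make the quoted eigenvector relation concrete, the sketch below (illustrative Python/NumPy, not from the source) diagonalizes a real covariance matrix with a symmetric eigensolver and checks both that Σ_x φ_k = λ_k φ_k holds and that the eigenvectors are mutually orthogonal:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 4))
    Sigma = np.cov(X, rowvar=False)   # real symmetric covariance matrix

    # eigh targets Hermitian/symmetric matrices; eigenvectors are columns of Phi
    lam, Phi = np.linalg.eigh(Sigma)

    # Each column phi_k satisfies Sigma @ phi_k = lambda_k * phi_k ...
    assert np.allclose(Sigma @ Phi, Phi * lam)

    # ... and the eigenvectors are mutually orthogonal (Phi.T @ Phi = I)
    assert np.allclose(Phi.T @ Phi, np.eye(4))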
