Cross-Correlation Measure


A Cross-Correlation Measure is a correlation measure of two time series as a function of the time lag of one relative to the other.



References

2016

  • (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/cross-correlation Retrieved: 2016-04-01.
    • In signal processing, cross-correlation is a measure of similarity of two series as a function of the lag of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology.
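
A minimal sketch of the "searching a long signal for a shorter, known feature" use: slide the template across the signal with NumPy's correlate and take the lag with the largest response. The signal shape, noise level, and embedding position below are illustrative assumptions, not from the source.

```python
import numpy as np

# Build a noisy signal with a known feature embedded at lag 200
# (all values here are arbitrary, chosen for demonstration).
rng = np.random.default_rng(0)
feature = np.sin(np.linspace(0, 2 * np.pi, 50))   # known short template
signal = rng.normal(scale=0.3, size=500)          # background noise
signal[200:250] += feature                        # embed the feature

# 'valid' mode slides the template over the signal and returns one
# correlation value per lag; the largest one marks the best match.
corr = np.correlate(signal, feature, mode="valid")
print(int(np.argmax(corr)))   # ~200, where the template was embedded
```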

      For continuous functions f and g, the cross-correlation is defined as: [math]\displaystyle{ (f \star g)(\tau)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(t)\ g(t+\tau)\,dt, }[/math] where [math]\displaystyle{ f^* }[/math] denotes the complex conjugate of [math]\displaystyle{ f }[/math] and [math]\displaystyle{ \tau }[/math] is the lag.
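
The continuous definition can be checked numerically. Below is a small sketch that approximates the integral by quadrature for two real-valued Gaussian bumps; the choice of f and g is an assumption for illustration.

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    return np.exp(-t ** 2)            # illustrative real-valued f

def g(t):
    return np.exp(-(t - 1.0) ** 2)    # copy of f shifted right by 1

def xcorr(tau):
    # f is real, so the conjugate f*(t) is simply f(t)
    val, _ = quad(lambda t: f(t) * g(t + tau), -np.inf, np.inf)
    return val

# The correlation peaks at tau = 1, the shift that aligns g with f.
print(xcorr(0.0), xcorr(1.0))
```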

      Similarly, for discrete functions, the cross-correlation is defined as: [math]\displaystyle{ (f \star g)[n]\ \stackrel{\mathrm{def}}{=} \sum_{m=-\infty}^{\infty} f^*[m]\ g[m+n]. }[/math] The cross-correlation is similar in nature to the convolution of two functions.
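
A direct transcription of the discrete sum, assuming finite sequences that are zero outside their support (the sequences themselves are arbitrary toy values). It agrees with NumPy's built-in, which uses the same convention (first argument slid, second argument conjugated).

```python
import numpy as np

# (f ⋆ g)[n] = Σ_m f*[m] g[m+n], for all lags n with any overlap.
def cross_correlate(f, g):
    f, g = np.asarray(f), np.asarray(g)
    lags = range(-(len(f) - 1), len(g))
    return np.array([
        sum(np.conj(f[m]) * g[m + n]
            for m in range(len(f)) if 0 <= m + n < len(g))
        for n in lags
    ])

f = [1, 2, 3]
g = [0, 1, 0.5]
print(cross_correlate(f, g))            # [0.  3.  3.5 2.  0.5]
print(np.correlate(g, f, mode="full"))  # same values, same convention
```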

      In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
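
A quick numerical check of the zero-lag property on an arbitrary example sequence: the autocorrelation peaks at lag 0, where its value equals the signal energy Σ|x[m]|².

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0, 0.5])
auto = np.correlate(x, x, mode="full")
zero_lag = auto[len(x) - 1]              # lag-0 entry of the 'full' output
print(zero_lag, np.sum(np.abs(x) ** 2))  # both 14.25
print(np.argmax(auto) == len(x) - 1)     # True: the peak sits at lag 0
```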

      In probability and statistics, the term cross-correlation refers to the correlations between the entries of two random vectors X and Y, while the autocorrelations of a random vector X are the correlations between the entries of X itself, which form the correlation matrix (matrix of correlations) of X. This is analogous to the distinction between the autocovariance of a random vector and the cross-covariance of two random vectors. A further distinction is that in probability and statistics the definition of correlation always includes a standardising factor, so that correlations take values between −1 and +1.
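
A sketch of that standardising factor: subtract the mean and divide by the standard deviation before averaging the product, so the result always lies in [−1, +1]. The data below are assumed toy values.

```python
import numpy as np

def corr(a, b):
    # standardise both variables, then average the elementwise product
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                          # perfect positive linear relation
print(corr(x, y), corr(x, -y))       # 1.0 and -1.0
print(np.corrcoef(x, y)[0, 1])       # NumPy's built-in agrees: 1.0
```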

      If [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are two independent random variables with probability density functions f and g, respectively, then the probability density of the difference [math]\displaystyle{ Y - X }[/math] is formally given by the cross-correlation (in the signal-processing sense) [math]\displaystyle{ f \star g }[/math]; however, this terminology is not used in probability and statistics. In contrast, the convolution [math]\displaystyle{ f * g }[/math] (equivalent to the cross-correlation of f(t) and g(−t)) gives the probability density function of the sum [math]\displaystyle{ X + Y }[/math].
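
An illustrative sketch on discretised Gaussian densities (the choice of N(0, 1) and N(1, 1) is an assumption for demonstration): convolving the two densities gives the density of the sum X + Y, while cross-correlating them, implemented here as convolution with one argument reversed, gives the density of the difference Y − X.

```python
import numpy as np

dx = 0.01
t = np.arange(-5, 5, dx)
f = np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)         # X ~ N(0, 1)
g = np.exp(-0.5 * (t - 1) ** 2) / np.sqrt(2 * np.pi)   # Y ~ N(1, 1)

sum_density = np.convolve(f, g) * dx          # f * g : density of X + Y
diff_density = np.convolve(f[::-1], g) * dx   # f ⋆ g : density of Y - X

# Value grids the two outputs live on.
sum_grid = 2 * t[0] + dx * np.arange(len(sum_density))
diff_grid = dx * (np.arange(len(diff_density)) - (len(t) - 1))

print(sum_grid[np.argmax(sum_density)])    # ~1.0, since X + Y ~ N(1, 2)
print(diff_grid[np.argmax(diff_density)])  # ~1.0, since Y - X ~ N(1, 2)
```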