Allan variance

From GM-RKB

The Allan variance is a measure of frequency stability in clocks, oscillators and amplifiers.



References

2016

The Allan variance (AVAR), also known as two-sample variance, is a measure of frequency stability in clocks, oscillators and amplifiers. It is named after David W. Allan and expressed mathematically as
[math]\displaystyle{ \sigma_y^2(\tau). \, }[/math]
The Allan deviation (ADEV) is the square root of Allan variance. It is also known as sigma-tau, and is expressed mathematically as
[math]\displaystyle{ \sigma_y(\tau).\, }[/math]
The M-sample variance is a measure of frequency stability using M samples, time T between measurements and observation time [math]\displaystyle{ \tau }[/math]. M-sample variance is expressed as
[math]\displaystyle{ \sigma_y^2(M, T, \tau).\, }[/math]
The Allan variance is intended to estimate stability due to noise processes and not that of systematic errors or imperfections such as frequency drift or temperature effects. The Allan variance and Allan deviation describe frequency stability, i.e. the stability in frequency. See also the section entitled “Interpretation of value” below.
There are also different adaptations or alterations of Allan variance, notably the modified Allan variance MAVAR or MVAR, the total variance, and the Hadamard variance. There also exist time stability variants such as time deviation TDEV or time variance TVAR. Allan variance and its variants have proven useful outside the scope of timekeeping and are a set of improved statistical tools to use whenever the noise processes are not unconditionally stable, thus a derivative exists.
The general M-sample variance remains important since it allows dead time in measurements and bias functions allows conversion into Allan variance values. Nevertheless, for most applications the special case of 2-sample, or "Allan variance" with [math]\displaystyle{ T = \tau }[/math] is of greatest interest (...)
Definition: The Allan variance is defined as
[math]\displaystyle{ \sigma_y^2(\tau) = \langle\sigma_y^2(2, \tau, \tau)\rangle }[/math]
which is conveniently expressed as
[math]\displaystyle{ \sigma_y^2(\tau) = \frac{1}{2}\langle(\bar{y}_{n+1}-\bar{y}_n)^2\rangle = \frac{1}{2\tau^2}\langle(x_{n+2}-2x_{n+1}+x_n)^2\rangle }[/math]
where [math]\displaystyle{ \tau }[/math] is the observation period, [math]\displaystyle{ \bar{y}_n }[/math] is the nth fractional-frequency average over the observation time [math]\displaystyle{ \tau }[/math], and [math]\displaystyle{ x_n }[/math] is the nth reading of the time-error series.
The samples are taken with no dead-time between them, which is achieved by letting
[math]\displaystyle{ T = \tau \, }[/math]
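Under these conditions (T = τ, no dead time), the two-sample definition above can be estimated directly from a series of time-error readings, since the inner term is just a second difference of the phase record. A minimal sketch in Python with NumPy (the function names and synthetic data are illustrative, not from the source; averaging over all consecutive triples gives the simple overlapping estimator):

```python
import numpy as np

def allan_variance(x, tau0):
    """Estimate the 2-sample Allan variance from a time-error series.

    x    : time-error (phase) readings x_n, in seconds, taken every tau0 seconds
    tau0 : observation interval tau (no dead time, i.e. T = tau)

    Implements sigma_y^2(tau) = (1 / (2 tau^2)) * <(x_{n+2} - 2 x_{n+1} + x_n)^2>,
    with the ensemble average replaced by the sample mean over the record.
    """
    x = np.asarray(x, dtype=float)
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]      # second differences of the phase
    return np.mean(d2 ** 2) / (2.0 * tau0 ** 2)

def allan_deviation(x, tau0):
    # ADEV (sigma-tau) is the square root of the Allan variance.
    return np.sqrt(allan_variance(x, tau0))
```

As a sanity check, a perfectly linear time-error ramp (a constant fractional-frequency offset) has vanishing second differences, so its Allan variance is essentially zero; this reflects the point above that the statistic targets noise processes rather than a fixed frequency offset.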