Dickey–Fuller Test


A Dickey–Fuller Test is a statistical test of the null hypothesis that a unit root is present in an autoregressive model.



References

2016

Explanation: A simple AR(1) model is
[math]\displaystyle{ y_{t}=\rho y_{t-1}+u_{t}\, }[/math]
where [math]\displaystyle{ y_{t} }[/math] is the variable of interest, [math]\displaystyle{ t }[/math] is the time index, [math]\displaystyle{ \rho }[/math] is a coefficient, and [math]\displaystyle{ u_{t} }[/math] is the error term. A unit root is present if [math]\displaystyle{ \rho = 1 }[/math], in which case the model is non-stationary.
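The following minimal sketch (in Python with numpy; the helper name simulate_ar1 is illustrative, not from the original text) simulates this AR(1) model for a stationary coefficient and for the unit-root case [math]\displaystyle{ \rho = 1 }[/math], where the series becomes a random walk whose variance keeps growing over time:

```python
# A minimal sketch: simulate y_t = rho * y_{t-1} + u_t for a stationary
# case (rho = 0.5) and the unit-root case (rho = 1, a random walk).
import numpy as np

def simulate_ar1(rho: float, n: int = 500, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)  # error term u_t
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + u[t]
    return y

stationary = simulate_ar1(rho=0.5)
unit_root = simulate_ar1(rho=1.0)

# With a unit root, the variance grows with t instead of settling down,
# one symptom of non-stationarity.
for name, y in [("rho=0.5", stationary), ("rho=1.0", unit_root)]:
    print(f"{name}: var(first half)={y[:250].var():.2f}, "
          f"var(second half)={y[250:].var():.2f}")
```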
The regression model can be written as
[math]\displaystyle{ \nabla y_{t}=(\rho-1)y_{t-1}+u_{t}=\delta y_{t-1}+ u_{t}\, }[/math]
where [math]\displaystyle{ \nabla }[/math] is the first difference operator. This model can be estimated, and testing for a unit root is equivalent to testing [math]\displaystyle{ \delta = 0 }[/math] (where [math]\displaystyle{ \delta \equiv \rho - 1 }[/math]). Since the test is done over the residual term rather than raw data, it is not possible to use the standard t-distribution to provide critical values. Therefore, this statistic [math]\displaystyle{ t }[/math] has a specific distribution known simply as the Dickey–Fuller table.
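As an illustration, the sketch below estimates the regression [math]\displaystyle{ \nabla y_{t}=\delta y_{t-1}+u_{t} }[/math] by OLS on a simulated random walk and cross-checks the resulting t-statistic against statsmodels' adfuller (passing maxlag=0 reduces the augmented test to the simple Dickey–Fuller test). The statsmodels call is real, but treat the snippet as a sketch rather than a canonical implementation:

```python
# A minimal sketch of the Dickey–Fuller regression dy_t = delta * y_{t-1} + u_t,
# computed by hand with OLS and cross-checked against statsmodels' adfuller.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))  # random walk: rho = 1, unit root

dy = np.diff(y)    # first differences, dy_t
ylag = y[:-1]      # lagged level, y_{t-1}

# OLS estimate of delta (no constant) and its conventional t-statistic
delta = (ylag @ dy) / (ylag @ ylag)
resid = dy - delta * ylag
se = np.sqrt((resid @ resid) / (len(dy) - 1) / (ylag @ ylag))
t_stat = delta / se
print(f"delta = {delta:.4f}, t-statistic = {t_stat:.2f}")

# The same statistic via statsmodels; it must be compared against the
# Dickey–Fuller critical values, not the standard t-distribution.
# regression="n" (no constant) matches the model above; older
# statsmodels versions spell this option "nc".
result = adfuller(y, maxlag=0, regression="n", autolag=None)
print(f"DF statistic = {result[0]:.2f}, p-value = {result[1]:.4f}")
print("critical values:", result[4])
```

Comparing the hand-computed t-statistic with the adfuller output shows they coincide; the non-standard critical values returned by the test are exactly the Dickey–Fuller table mentioned above.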