Hausman Test


A Hausman Test is a statistical hypothesis test that assesses the consistency of an estimator by comparing it to an alternative, less efficient estimator that is known to be consistent.



References

2016

(...) Consider the linear model y = bX + e, where y is the dependent variable, X is a vector of regressors, b is a vector of coefficients and e is the error term. We have two estimators for b: b0 and b1. Under the null hypothesis, both of these estimators are consistent, but b1 is efficient (has the smallest asymptotic variance), at least in the class of estimators containing b0. Under the alternative hypothesis, b0 is consistent, whereas b1 isn’t.
Then the Wu–Hausman statistic is
[math]\displaystyle{ H=(b_{1}-b_{0})'\big(\operatorname{Var}(b_{0})-\operatorname{Var}(b_{1})\big)^\dagger(b_{1}-b_{0}), }[/math]
where † denotes the Moore–Penrose pseudoinverse. Under the null hypothesis, this statistic has asymptotically the chi-squared distribution with the number of degrees of freedom equal to the rank of the matrix Var(b0) − Var(b1).
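
The statistic above translates directly into code. The following is a minimal Python sketch, assuming the two estimates b0 and b1 and their estimated asymptotic covariance matrices var_b0 and var_b1 are already available as NumPy arrays; the function name hausman_statistic and all variable names are illustrative, not part of any standard library.

    import numpy as np
    from scipy import stats

    def hausman_statistic(b0, b1, var_b0, var_b1):
        """Wu-Hausman statistic H, its degrees of freedom, and p-value."""
        diff = b1 - b0
        var_diff = var_b0 - var_b1              # Var(b0) - Var(b1)
        # Moore-Penrose pseudoinverse, as in the formula above
        h = diff @ np.linalg.pinv(var_diff) @ diff
        df = np.linalg.matrix_rank(var_diff)    # df = rank of Var(b0) - Var(b1)
        p_value = stats.chi2.sf(h, df)          # asymptotic chi-squared tail
        return h, df, p_value
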
If we reject the null hypothesis, it means that b1 is inconsistent. This test can be used to check for the endogeneity of a variable (by comparing instrumental variable (IV) estimates to ordinary least squares (OLS) estimates). It can also be used to check the validity of extra instruments by comparing IV estimates using a full set of instruments Z to IV estimates that use a proper subset of Z. Note that in order for the test to work in the latter case, we must be certain of the validity of the subset of Z and that subset must have enough instruments to identify the parameters of the equation.
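
As a concrete illustration of the endogeneity check, the sketch below simulates data with an endogenous regressor, computes OLS (the efficient estimator b1 under the null) and a just-identified IV estimator (the consistent b0), and feeds both into the hausman_statistic sketch above. The data-generating process, the homoskedastic variance formulas, and all names are assumptions made for this example.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000
    z = rng.normal(size=n)                   # instrument
    u = rng.normal(size=n)                   # confounder causing endogeneity
    x = 0.8 * z + u + rng.normal(size=n)     # regressor correlated with error
    e = u + rng.normal(size=n)
    y = 1.0 + 2.0 * x + e

    X = np.column_stack([np.ones(n), x])     # regressors with intercept
    Z = np.column_stack([np.ones(n), z])     # instruments with intercept

    # OLS: b1, efficient under the null of exogeneity
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    res_ols = y - X @ b_ols
    s2_ols = res_ols @ res_ols / (n - X.shape[1])
    var_ols = s2_ols * np.linalg.inv(X.T @ X)

    # IV: b0, consistent under both hypotheses (just-identified case)
    b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
    res_iv = y - X @ b_iv
    s2_iv = res_iv @ res_iv / (n - X.shape[1])
    ZX_inv = np.linalg.inv(Z.T @ X)
    var_iv = s2_iv * ZX_inv @ (Z.T @ Z) @ ZX_inv.T

    h, df, p = hausman_statistic(b_iv, b_ols, var_iv, var_ols)
    print(f"H = {h:.2f}, df = {df}, p = {p:.4f}")  # small p => reject exogeneity
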
Hausman also showed that the covariance of an efficient estimator with its difference from an inefficient estimator is zero.
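
This zero-covariance result is what licenses the simple variance term in the statistic: since Cov(b1, b1 − b0) = 0 implies Cov(b0, b1) = Var(b1), expanding the variance of the difference gives
[math]\displaystyle{ \operatorname{Var}(b_{1}-b_{0}) = \operatorname{Var}(b_{1}) + \operatorname{Var}(b_{0}) - 2\operatorname{Cov}(b_{0},b_{1}) = \operatorname{Var}(b_{0}) - \operatorname{Var}(b_{1}), }[/math]
which is exactly the matrix that is pseudo-inverted in H.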