Rao's Score Test

From GM-RKB

A Rao's Score Test is a statistical hypothesis test that assesses whether a statistical model parameter is equal to a specific value.

  • AKA: Lagrange Multiplier (LM) Test.
  • Context:
    • It can be based on the gradient of the Likelihood Function, which is known as the 'score'.
    • It can be used when the true value of a parameter is hypothesized to be close to a certain value, making it most powerful in such scenarios.
    • It can be advantageous in situations where the unconstrained Maximum Likelihood Estimate is difficult to compute, as it only requires the computation of the restricted estimator.
    • It can have an asymptotic Chi-Squared Distribution under the null hypothesis, a fact first proved by C. R. Rao in 1948.
    • It can be used in various fields of study, including economics, psychology, biology, and more, for hypothesis testing in complex statistical models.
    • It can be part of a suite of tests including the Wald Test and Likelihood-Ratio Test, offering a computational advantage because only the restricted (null) model needs to be estimated.
    • It can be particularly useful in cases where the parameter space includes boundary points, or when the alternative hypothesis is not well-specified.
    • It can be used for tasks, such as:
      • Testing if a coefficient in a regression model significantly differs from zero.
      • Assessing whether a particular variable has an effect in a logistic regression model.
      • Evaluating the presence of a unit root in time series analysis.
      • Testing for heteroskedasticity in regression models.
      • Assessing the independence of two categorical variables.
      • Evaluating the goodness-of-fit for a distribution.
      • ...
  • Example(s):
  • Counter-Example(s):
    • Wald Test, which uses the estimated coefficients of the model to test hypotheses.
    • Likelihood-Ratio Test, which compares the likelihoods of two models to test hypotheses.
    • Durbin-Watson Test, used for detecting autocorrelation in the residuals from a statistical regression analysis.
    • Dickey-Fuller Test, a specific form of unit root test for stationarity in time series data.
    • ...
  • See: Fisher Information, Likelihood-Ratio Test, Unit Root Test, KPSS Test, Econometrics, Boundary Point.
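
As one illustration of the tasks listed above, the Lagrange-multiplier test for heteroskedasticity can be sketched from scratch in its studentized (Koenker) Breusch-Pagan form: regress the squared OLS residuals on the regressors and compute LM = n·R², which is asymptotically χ² under homoskedasticity. The function name and simulated data below are illustrative, not from the source:

```python
import numpy as np
from scipy.stats import chi2

def breusch_pagan_lm(y, X):
    """Lagrange-multiplier (score) test for heteroskedasticity,
    studentized (Koenker) Breusch-Pagan variant.
    X must include a constant column; returns (LM statistic, p-value)."""
    n = len(y)
    # OLS fit and squared residuals
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e2 = (y - X @ beta) ** 2
    # Auxiliary regression of squared residuals on the same regressors
    gamma, *_ = np.linalg.lstsq(X, e2, rcond=None)
    fitted = X @ gamma
    ss_res = np.sum((e2 - fitted) ** 2)
    ss_tot = np.sum((e2 - e2.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    lm = n * r2
    df = X.shape[1] - 1  # regressors excluding the constant
    return lm, chi2.sf(lm, df)

# Illustrative data whose error variance grows with |x|
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
y = 1.0 + 2.0 * x + rng.normal(scale=1.0 + np.abs(x))
lm, p = breusch_pagan_lm(y, X)
```

Note that only the model under the null (homoskedastic OLS) is estimated, which is exactly the computational advantage of score-type tests mentioned above.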


References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Score_test Retrieved:2023-11-26.
    • In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite sample distributions of score tests are generally unknown, they have an asymptotic χ2-distribution under the null hypothesis as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance. Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier test that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper. The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space.Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood ratio test about the alternative hypothesis.

2016

  • (Wikipedia, 2016) ⇒ http://en.wikipedia.org/wiki/Score_test Retrieved 2016-08-07
    • Rao's score test, or the score test (often known as the Lagrange multiplier test in econometricsis a statistical test of a simple null hypothesis that a parameter of interest [math]\displaystyle{ \theta }[/math] is equal to some particular value [math]\displaystyle{ \theta_0 }[/math]. It is the most powerful test when the true value of [math]\displaystyle{ \theta }[/math] is close to [math]\displaystyle{ \theta_0 }[/math]. The main advantage of the Score-test is that it does not require an estimate of the information under the alternative hypothesis or unconstrained maximum likelihood. This constitutes a potential advantage in comparison to other tests, such as the Wald test and the generalized likelihood ratio test (GLRT). This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. (...) Let [math]\displaystyle{ L }[/math] be the likelihood function which depends on a univariate parameter [math]\displaystyle{ \theta }[/math] and let [math]\displaystyle{ x }[/math] be the data. The score [math]\displaystyle{ U(\theta) }[/math] is defined as
[math]\displaystyle{ U(\theta)=\frac{\partial \log L(\theta \mid x)}{\partial \theta}. }[/math]
The Fisher information is
[math]\displaystyle{ \mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log L(X;\theta)\right|\theta \right]\,. }[/math]
The statistic to test [math]\displaystyle{ \mathcal{H}_0:\theta=\theta_0 }[/math] is
[math]\displaystyle{ S(\theta_0) = \frac{U(\theta_0)^2}{\mathcal{I}(\theta_0)} }[/math]
which has an asymptotic distribution of [math]\displaystyle{ \chi^2_1 }[/math], when [math]\displaystyle{ \mathcal{H}_0 }[/math] is true.
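
The univariate definitions above translate directly into code. A minimal sketch for [math]\displaystyle{ n }[/math] Bernoulli trials with [math]\displaystyle{ x }[/math] successes, where the score and Fisher information have closed forms (the function name and the numbers in the call are illustrative, not from the source):

```python
from scipy.stats import chi2

def score_test_bernoulli(x, n, theta0):
    """Rao's score test of H0: theta = theta0 for n Bernoulli(theta)
    trials with x successes. Returns (statistic, p-value)."""
    # Score at theta0: U(theta0) = x/theta0 - (n - x)/(1 - theta0)
    U = x / theta0 - (n - x) / (1 - theta0)
    # Fisher information: I(theta0) = n / (theta0 * (1 - theta0))
    I = n / (theta0 * (1 - theta0))
    # Test statistic S = U^2 / I, asymptotically chi-squared with 1 df
    S = U**2 / I
    return S, chi2.sf(S, df=1)

# e.g. 60 successes in 100 trials, testing H0: theta = 0.5
S, p = score_test_bernoulli(x=60, n=100, theta0=0.5)
# S = (60/0.5 - 40/0.5)^2 / (100/0.25) = 40^2 / 400 = 4.0
```

For this example the statistic reduces to [math]\displaystyle{ (x - n\theta_0)^2 / (n\theta_0(1-\theta_0)) }[/math], the familiar one-sample proportion test without continuity correction.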