Likelihood Ratio Test


A Likelihood Ratio Test is a statistical hypothesis test based on the ratio of the likelihoods of the observed data under the null and the alternative hypothesis (or model).



References

2009

  • http://en.wikipedia.org/wiki/Likelihood-ratio_test
  • http://en.wikipedia.org/wiki/Likelihood-ratio_test
    • QUOTE: In statistics, a likelihood ratio test is a statistical test used to compare the fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks' theorem.

      In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.

      Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (usually denoted D) is twice the difference in these log-likelihoods:

      [math]\displaystyle{ \begin{align} D & = -2\ln\left( \frac{\text{likelihood for null model}}{\text{likelihood for alternative model}} \right) \\ & = -2\ln(\text{likelihood for null model}) + 2\ln(\text{likelihood for alternative model}) \end{align} }[/math]
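
A minimal sketch of this test in Python, assuming a null Gaussian model with its mean fixed at zero nested inside an alternative Gaussian with a free mean; the data, models, and variable names are illustrative, not from the quoted source:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.4, scale=1.0, size=100)  # illustrative sample

# Null model: mean fixed at 0; the sd is still estimated by maximum likelihood.
sigma0 = np.sqrt(np.mean(x**2))
ll_null = stats.norm.logpdf(x, loc=0.0, scale=sigma0).sum()

# Alternative model: mean and sd both estimated by maximum likelihood
# (numpy's std uses ddof=0 by default, which is the MLE).
ll_alt = stats.norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()

# D = -2 ln(L_null / L_alt) = 2 (ll_alt - ll_null), as in the expression above.
D = 2.0 * (ll_alt - ll_null)

# By Wilks' theorem, D is approximately chi-squared under the null model,
# with degrees of freedom equal to the difference in free parameters (here 1).
p_value = stats.chi2.sf(D, df=1)
print(f"D = {D:.3f}, p = {p_value:.4f}")
```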

2003

  • (Johnson, 2003) ⇒ Don Johnson. (2003). “The Likelihood Ratio Test."
    • QUOTE: In a binary hypothesis testing problem, four possible outcomes can result. Model ℳ0 did in fact represent the best model for the data and the decision rule said it was (a correct decision) or said it wasn't (an erroneous decision). The other two outcomes arise when model ℳ1 was in fact true with either a correct or incorrect decision made. The decision process operates by segmenting the range of observation values into two disjoint decision regions ℜ0 and ℜ1. All values of r fall into either ℜ0 or ℜ1. If a given r lies in ℜ0, for example, we will announce our decision "model ℳ0 was true"; if in ℜ1, model ℳ1 would be proclaimed. To derive a rational method of deciding which model best describes the observations, we need a criterion to assess the quality of the decision process. Optimizing this criterion will specify the decision regions. … The Bayes' decision criterion seeks to minimize a cost function associated with making a decision.
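
A minimal sketch of the decision rule Johnson describes, assuming two fully specified Gaussian models ℳ0 = N(0, 1) and ℳ1 = N(1, 1) (an illustrative choice, not from the source); the range of observation values is split into the decision regions ℜ0 and ℜ1 by thresholding the likelihood ratio:

```python
from scipy import stats

def decide(r, eta=1.0):
    """Return 0 if r lies in region R0 (announce model M0), else 1 (model M1).

    eta is the decision threshold; under the Bayes decision criterion it is
    determined by the prior probabilities and the costs assigned to the four
    possible outcomes.
    """
    # Likelihood ratio Lambda(r) = p(r | M1) / p(r | M0) for N(1,1) vs N(0,1).
    lam = stats.norm.pdf(r, loc=1.0) / stats.norm.pdf(r, loc=0.0)
    return int(lam >= eta)

# With equal priors and symmetric costs, eta = 1 and the boundary between
# the two decision regions falls at r = 0.5.
for r in (-0.3, 0.2, 0.5, 1.4):
    print(f"r = {r:+.1f} -> announce M{decide(r)}")
```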

2000

  • (Hosmer & Lemeshow) ⇒ David W. Hosmer, and Stanley Lemeshow. (2000). “Applied Logistic Regression, 2nd edition." John Wiley and Sons. ISBN:0471356328
    • QUOTE: The comparison of observed to predicted values using the likelihood function is based on the following expression

      [math]\displaystyle{ D = -2\ln\left[ \frac{\text{likelihood of the fitted model}}{\text{likelihood of the saturated model}} \right] \qquad (1.9) }[/math]

      The quantity inside the large brackets in the expression is called the likelihood ratio. Using minus twice its log is necessary to obtain a quantity whose distribution is known and can therefore be used for hypothesis testing purposes. Such a test is called the likelihood ratio test.
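
A minimal sketch of expression (1.9) for binary outcome data, assuming fitted probabilities from some logistic model (the y and p_hat values below are illustrative placeholders, not values from the source):

```python
import numpy as np

# Observed binary outcomes and fitted probabilities from some logistic model
# (illustrative placeholders, not values from the source).
y = np.array([0, 0, 1, 1, 1])
p_hat = np.array([0.2, 0.4, 0.5, 0.7, 0.9])

# Log-likelihood of the fitted model for binary data.
ll_fitted = np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

# With one binary observation per covariate pattern, the saturated model
# reproduces each y_i exactly, so its likelihood is 1 and log-likelihood 0.
ll_saturated = 0.0

# Expression (1.9): D = -2 ln(likelihood of fitted / likelihood of saturated).
D = -2.0 * (ll_fitted - ll_saturated)
print(f"D = {D:.3f}")
```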

1978

  • (Turnbull & Weiss, 1978) ⇒ B. W. Turnbull and L. Weiss. (1978). “A Likelihood Ratio Statistic for Testing Goodness of Fit with Randomly Censored Data.” In: Biometrics, 34(3). http://www.jstor.org/stable/2530599