Cook's Distance

A Cook's Distance is an estimate of the influence of a data point when performing a least-squares regression analysis.
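As a brief practical illustration (not part of the original definition), Cook's distance is available directly from the statsmodels library; the sketch below assumes a small synthetic dataset with one artificially perturbed observation.

import numpy as np
import statsmodels.api as sm

# Synthetic data (assumption for illustration only): a simple linear
# relationship with one artificially perturbed observation.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(size=50)
y[0] += 10.0                          # make one point unusually influential

X = sm.add_constant(x)                # design matrix with an intercept column
results = sm.OLS(y, X).fit()          # ordinary least-squares fit

influence = results.get_influence()
cooks_d, p_values = influence.cooks_distance   # one D_i per observation
print(cooks_d.argmax(), cooks_d.max())         # the perturbed point stands out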



References

2015

  • (Wikipedia, 2015) ⇒ https://www.wikiwand.com/en/Cook's_distance Retrieved: 2016-07-24
    • In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis. In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity; or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.
Definition
Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis. For the algebraic expression, first define
[math]\displaystyle{ \underset{n \times 1}{\mathbf{y}} = \underset{n \times p}{\mathbf{X}} \quad \underset{p \times 1}{\boldsymbol{\beta}} \quad + \quad \underset{n \times 1}{\boldsymbol{\epsilon}} }[/math]
where [math]\displaystyle{ \boldsymbol{\epsilon} \sim \mathcal{N}\left( 0, \sigma^{2} \mathbf{I} \right) }[/math] is the error term, [math]\displaystyle{ \boldsymbol{\beta} = \left[ \beta_{0} \, \beta_{1} \dots \beta_{p-1} \right]^{\mathsf{T}} }[/math] is the coefficient vector, and [math]\displaystyle{ \mathbf{X} }[/math] is the design matrix including a constant. The least squares estimator then is [math]\displaystyle{ \mathbf{b} = \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y} }[/math], and consequently the fitted (predicted) values for the mean of [math]\displaystyle{ \mathbf{y} }[/math] are
[math]\displaystyle{ \mathbf{\hat{y}} = \mathbf{X} \mathbf{b} = \mathbf{X} \left( \mathbf{X}^{\mathsf{T}} \mathbf{X} \right)^{-1} \mathbf{X}^{\mathsf{T}} \mathbf{y} = \mathbf{H} \mathbf{y} }[/math]
where [math]\displaystyle{ \mathbf{H} \equiv \mathbf{X} (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{X}^{\mathsf{T}} }[/math] is the projection matrix (or hat matrix). The [math]\displaystyle{ i }[/math]-th diagonal element of [math]\displaystyle{ \mathbf{H} \, }[/math], given by [math]\displaystyle{ h_{i} \equiv \mathbf{x}_i^{\mathsf{T}} (\mathbf{X}^{\mathsf{T}} \mathbf{X})^{-1} \mathbf{x}_{i} }[/math], is known as the leverage of the [math]\displaystyle{ i }[/math]-th observation. Similarly, the [math]\displaystyle{ i }[/math]-th element of the residual vector [math]\displaystyle{ \mathbf{e} = \mathbf{y} - \mathbf{\hat{y}} = \left( \mathbf{I} - \mathbf{H} \right) \mathbf{y} }[/math] is denoted by [math]\displaystyle{ e_{i} }[/math]. With this, we can define Cook's distance as
[math]\displaystyle{ D_i = \frac{e_{i}^{2}}{s^{2} p}\left[\frac{h_{i}}{(1-h_{i})^2}\right], }[/math]
where [math]\displaystyle{ s^{2} \equiv \left( n - p \right)^{-1} \mathbf{e}^{\mathsf{T}} \mathbf{e} }[/math] is the mean squared error of the regression model.
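The following is a minimal NumPy sketch of this formula, assuming the design matrix X already includes the constant column and y is the response vector; the function name cooks_distance is introduced here only for illustration.

import numpy as np

def cooks_distance(X, y):
    """Cook's distance D_i for every observation of an OLS fit.

    X: n x p design matrix (including the constant column).
    y: length-n response vector.
    """
    n, p = X.shape
    # Hat matrix H = X (X^T X)^{-1} X^T; its diagonal entries are the leverages h_i.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    # Residual vector e = (I - H) y and mean squared error s^2 = e^T e / (n - p).
    e = y - H @ y
    s2 = (e @ e) / (n - p)
    # D_i = e_i^2 / (s^2 p) * h_i / (1 - h_i)^2
    return (e ** 2 / (s2 * p)) * h / (1.0 - h) ** 2

Applied to the synthetic data from the earlier sketch, cooks_distance(X, y) should agree with the statsmodels values up to floating-point error, with the perturbed observation yielding the largest D_i.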