# Coefficient of Determination

A Coefficient of Determination is a statistic that measures the proportion of the variability of a continuous dependent variable that can be accounted for by a regression model (typically a linear regression model) on one or more regressors.

**Context:**
- It can be calculated by squaring the r-value (Pearson Product-Moment Correlation Coefficient).
- It can be interpreted as the proportion of a response variable's variation that is explained by a regressed model and its regressor variables.
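The first point above can be sketched in a few lines of NumPy; the data values here are hypothetical, chosen only to illustrate squaring the Pearson r-value.

```python
import numpy as np

# Hypothetical sample data (illustrative values, not from the article)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Pearson Product-Moment Correlation Coefficient between x and y, then squared
r = np.corrcoef(x, y)[0, 1]
r_squared = r ** 2  # coefficient of determination for simple linear regression
```

For simple linear regression with an intercept, this squared correlation coincides with the [math]R^2[/math] computed from sums of squares.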

**Example(s):**
- [math]R^2 = 1[/math] can indicate that the fitted model explains all variability (not that there is a cause-and-effect relationship).
- [math]R^2 = 0[/math] can indicate that there is no linear relationship between the response variable and the regressors (for straight-line regression, this means the fitted model is a constant line with slope = 0 and intercept equal to the mean of the response, [math]\bar{y}[/math]).
- [math]R^2 = 0.7[/math] can indicate that approximately seventy percent of the variation in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability.
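The [math]R^2 = 0[/math] case above can be checked directly: if the model is the constant line at the mean of the response, the residual sum of squares equals the total sum of squares, so [math]R^2 = 1 - SS_{\rm err}/SS_{\rm tot} = 0[/math]. The data below are hypothetical.

```python
import numpy as np

# Hypothetical data: predicting every point by the mean of y is the
# "constant line" case described above (slope = 0, intercept = mean of y)
y = np.array([3.0, 5.0, 4.0, 6.0, 2.0])
f = np.full_like(y, y.mean())

ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
ss_err = np.sum((y - f) ** 2)         # residual sum of squares
r_squared = 1 - ss_err / ss_tot       # 0.0: the constant-mean model explains nothing
```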

**Counter-Example(s):**

**See:** Shrinkage.

## References

### 2012

- http://en.wikipedia.org/wiki/Overfitting
- QUOTE:... Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new data set than on the data set used for fitting. In particular, the value of the coefficient of determination will shrink relative to the original training data.

- http://en.wikipedia.org/wiki/Coefficient_of_determination
- QUOTE: In statistics, the **coefficient of determination**, [math]R^2[/math], is used in the context of statistical models whose main purpose is the prediction of future outcomes on the basis of other related information. It is the proportion of variability in a data set that is accounted for by the statistical model.^{[1]} It provides a measure of how well future outcomes are likely to be predicted by the model.

    There are several different definitions of [math]R^2[/math] which are only sometimes equivalent. One class of such cases includes that of linear regression. In this case, if an intercept is included, then [math]R^2[/math] is simply the square of the sample correlation coefficient between the outcomes and their predicted values, or in the case of simple linear regression, between the outcomes and the values of the single regressor being used for prediction. In such cases, the coefficient of determination ranges from 0 to 1. Important cases where the computational definition of [math]R^2[/math] can yield negative values, depending on the definition used, arise where the predictions which are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data, and where linear regression is conducted without including an intercept. Additionally, negative values of [math]R^2[/math] may occur when fitting non-linear trends to data.^{[2]} In these instances, the mean of the data provides a fit to the data that is superior to that of the trend under this goodness of fit analysis.

- http://en.wikipedia.org/wiki/Coefficient_of_determination#Definitions
- QUOTE: The better the linear regression (on the right) fits the data in comparison to the simple average (on the left graph), the closer the value of [math]R^2[/math] is to one. The areas of the blue squares represent the squared residuals with respect to the linear regression. The areas of the red squares represent the squared residuals with respect to the average value.

    A data set has values [math]y_i[/math], each of which has an associated modelled value [math]f_i[/math] (also sometimes referred to as [math]\hat{y}_i[/math]). Here, the values [math]y_i[/math] are called the observed values and the modelled values [math]f_i[/math] are sometimes called the predicted values.

    The "variability" of the data set is measured through different sums of squares:
    - [math]SS_\text{tot}=\sum_i (y_i-\bar{y})^2[/math], the total sum of squares (proportional to the sample variance);
    - [math]SS_\text{reg}=\sum_i (f_i -\bar{y})^2[/math], the regression sum of squares, also called the explained sum of squares;
    - [math]SS_\text{err}=\sum_i (y_i - f_i)^2[/math], the sum of squares of residuals, also called the residual sum of squares.

    In the above, [math]\bar{y}[/math] is the mean of the observed data: [math]\bar{y}=\frac{1}{n}\sum_{i=1}^n y_i[/math], where [math]n[/math] is the number of observations.

    The notations [math]SS_{R}[/math] and [math]SS_{E}[/math] should be avoided, since in some texts their meaning is reversed to **R**esidual sum of squares and **E**xplained sum of squares, respectively. The most general definition of the coefficient of determination is: [math]R^2 \equiv 1 - {SS_{\rm err}\over SS_{\rm tot}}[/math].
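The general definition [math]R^2 = 1 - SS_{\rm err}/SS_{\rm tot}[/math] can be sketched as follows; the data values are hypothetical, and the fit is an ordinary least-squares straight line with an intercept, so the result also equals the squared sample correlation.

```python
import numpy as np

# Hypothetical data; fit a least-squares straight line with an intercept
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

slope, intercept = np.polyfit(x, y, 1)
f = slope * x + intercept               # modelled values f_i

ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
ss_err = np.sum((y - f) ** 2)           # residual sum of squares
r_squared = 1 - ss_err / ss_tot

# With an intercept included, this equals the squared sample correlation
assert np.isclose(r_squared, np.corrcoef(x, y)[0, 1] ** 2)
```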

- ↑ Steel, R. G. D., and Torrie, J. H. (1960). *Principles and Procedures of Statistics*. New York: McGraw-Hill, pp. 187, 287.
- ↑ Cameron, A. C., and Windmeijer, F. A. G. (1997). "An R-squared measure of goodness of fit for some common nonlinear regression models." *Journal of Econometrics*, Volume 77, Issue 2, April 1997, Pages 329-342.