Coefficient of Determination

From GM-RKB

A Coefficient of Determination is a statistic that measures the amount of variability in a continuous dependent variable that can be accounted for by a regression model (typically a linear regression model) fit on one or more regressors.

  • AKA: R-Squared.
  • Context:
  • Example(s)
    • [math]\displaystyle{ R^2 = 1 }[/math] can indicate that the fitted model explains all variability (not that there is a cause-and-effect relationship).
    • [math]\displaystyle{ R^2 = 0 }[/math] can indicate that there is no linear relationship between the response variable and the regressors (for straight-line regression, this means the fitted model reduces to a constant line with slope = 0 and intercept = [math]\displaystyle{ \bar{y} }[/math]).
    • [math]\displaystyle{ R^2 = 0.7 }[/math] can indicate that approximately seventy percent of the variation in the response variable can be explained by the explanatory variables; the remaining thirty percent may be due to unknown lurking variables or inherent variability.
  • Counter-Example(s):
  • See: Shrinkage.
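The values above can be reproduced numerically; the following is a minimal sketch (the data and variable names are illustrative, not from this article) that fits a straight line and computes [math]\displaystyle{ R^2 = 1 - SS_\text{err}/SS_\text{tot} }[/math]:

```python
import numpy as np

# Illustrative data, roughly y = 2x with small noise (hypothetical values).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Ordinary least-squares straight-line fit.
slope, intercept = np.polyfit(x, y, 1)
y_pred = slope * x + intercept

ss_err = np.sum((y - y_pred) ** 2)    # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_err / ss_tot

print(round(r_squared, 4))  # close to 1: the line explains nearly all variability
```

Because the data lie almost exactly on a line, the computed value is close to 1; replacing `y` with values unrelated to `x` would push it toward 0.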


References

2022

  • (Wikipedia, 2022) ⇒ https://en.wikipedia.org/wiki/Coefficient_of_determination Retrieved:2022-6-28.
    • In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).

      It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.

      There are several definitions of R2 that are only sometimes equivalent. One class of such cases includes that of simple linear regression where r2 is used instead of R2. When only an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.

      There are cases where the computational definition of R2 can yield negative values, depending on the definition used. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.

      The coefficient of determination can be more (intuitively) informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on the test datasets in the article.

      When evaluating the goodness-of-fit of simulated (Ypred) vs. measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs = m·Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).
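The point that R2 can be negative when the predictions were not fit to the data at hand can be shown in a few lines; this sketch (with made-up values) compares externally supplied "predictions" against observations:

```python
import numpy as np

# Hypothetical observations and externally supplied predictions
# that were NOT derived by fitting a model to these data.
y_obs = np.array([1.0, 2.0, 3.0, 4.0])
y_bad = np.array([10.0, 10.0, 10.0, 10.0])

ss_err = np.sum((y_obs - y_bad) ** 2)         # residual sum of squares
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_err / ss_tot

print(r2)  # negative: the mean of y_obs fits better than y_bad does
```

A negative value signals exactly the situation described above: by the R2 criterion, the constant mean of the data outperforms the supplied predictions.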

2012

  • http://en.wikipedia.org/wiki/Coefficient_of_determination#Definitions
    • QUOTE: [Figure caption: The better the linear regression (on the right graph) fits the data in comparison to the simple average (on the left graph), the closer the value of [math]\displaystyle{ R^2 }[/math] is to one. The areas of the blue squares represent the squared residuals with respect to the linear regression. The areas of the red squares represent the squared residuals with respect to the average value.]

      A data set has values yi, each of which has an associated modelled value fi (also sometimes referred to as ŷi). Here, the values yi are called the observed values and the modelled values fi are sometimes called the predicted values.

      The "variability" of the data set is measured through different sums of squares:
      • [math]\displaystyle{ SS_\text{tot}=\sum_i (y_i-\bar{y})^2, }[/math] the total sum of squares (proportional to the sample variance);
      • [math]\displaystyle{ SS_\text{reg}=\sum_i (f_i -\bar{y})^2, }[/math] the regression sum of squares, also called the explained sum of squares;
      • [math]\displaystyle{ SS_\text{err}=\sum_i (y_i - f_i)^2, }[/math] the sum of squares of residuals, also called the residual sum of squares.

      In the above, [math]\displaystyle{ \bar{y} }[/math] is the mean of the observed data: [math]\displaystyle{ \bar{y}=\frac{1}{n}\sum_i^n y_i, }[/math] where n is the number of observations.

      The notations [math]\displaystyle{ SS_{R} }[/math] and [math]\displaystyle{ SS_{E} }[/math] should be avoided, since in some texts their meaning is reversed to Residual sum of squares and Explained sum of squares, respectively. The most general definition of the coefficient of determination is [math]\displaystyle{ R^2 \equiv 1 - {SS_{\rm err}\over SS_{\rm tot}}. }[/math]
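The three sums of squares defined in the quote can be checked directly; a small sketch with illustrative data (not from the article) also verifies that, for ordinary least squares with an intercept, SS_tot = SS_reg + SS_err, so R2 equally equals SS_reg/SS_tot:

```python
import numpy as np

# Illustrative observed values y_i and regressor x_i (hypothetical).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 10.0])

b1, b0 = np.polyfit(x, y, 1)  # OLS straight line with an intercept
f = b1 * x + b0               # modelled values f_i
y_bar = y.mean()              # mean of the observed data

ss_tot = np.sum((y - y_bar) ** 2)  # total sum of squares
ss_reg = np.sum((f - y_bar) ** 2)  # regression (explained) sum of squares
ss_err = np.sum((y - f) ** 2)      # residual sum of squares

r2 = 1 - ss_err / ss_tot
# For OLS with an intercept the decomposition SS_tot = SS_reg + SS_err holds.
print(round(r2, 4), np.isclose(ss_tot, ss_reg + ss_err))
```

The decomposition (and hence the equivalence of the two R2 formulas) relies on the fitted residuals being orthogonal to the regressors, which holds for least-squares fits that include an intercept but not in general.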

