Weighted Least Squares Algorithm


A Weighted Least Squares Algorithm is a generalized least squares algorithm where all the off-diagonal entries of Ω (the correlation matrix of the residuals) are 0.
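For illustration (a hypothetical three-observation case, not from the source), a diagonal Ω has the form [math]\displaystyle{ \Omega = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2) }[/math]: each observation may carry its own variance (heteroscedasticity), but there are no cross-correlations between residuals.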



References

2016

  • (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Least_squares#Weighted_least_squares Retrieved:2016-2-7.
    • A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity).

      The expressions given above are based on the implicit assumption that the errors are uncorrelated with each other and with the independent variables and have equal variance. The Gauss–Markov theorem shows that, when this is so, [math]\displaystyle{ \hat{\boldsymbol{\beta}} }[/math] is a best linear unbiased estimator (BLUE). If, however, the measurements are uncorrelated but have different uncertainties, a modified approach might be adopted. Aitken showed that when a weighted sum of squared residuals is minimized, [math]\displaystyle{ \hat{\boldsymbol{\beta}} }[/math] is the BLUE if each weight is equal to the reciprocal of the variance of the measurement:

      : [math]\displaystyle{ S = \sum_{i=1}^{n} W_{ii}{r_i}^2,\qquad W_{ii}=\frac{1}{{\sigma_i}^2}. }[/math]

      The gradient equations for this sum of squares are

      : [math]\displaystyle{ -2\sum_i W_{ii}\frac{\partial f(x_i,\boldsymbol{\beta})}{\partial \beta_j} r_i=0,\qquad j=1,\ldots,m, }[/math]

      which, in a linear least squares system, give the modified normal equations

      : [math]\displaystyle{ \sum_{i=1}^{n}\sum_{k=1}^{m} X_{ij}W_{ii}X_{ik}\hat{\beta}_k=\sum_{i=1}^{n} X_{ij}W_{ii}y_i, \qquad j=1,\ldots,m. }[/math]

      When the observational errors are uncorrelated and the weight matrix, W, is diagonal, these may be written as

      : [math]\displaystyle{ \mathbf{\left(X^TWX\right)\hat{\boldsymbol{\beta}}=X^TWy}. }[/math]

      If the errors are correlated, the resulting estimator is the BLUE if the weight matrix is equal to the inverse of the variance-covariance matrix of the observations.
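      To make the weighted normal equations concrete, here is a minimal NumPy sketch; the design matrix X, response y, and per-observation variances sigma2 below are illustrative assumptions, not values from the source:

        import numpy as np

        # Hypothetical data: n = 5 observations, m = 2 parameters (intercept, slope).
        X = np.column_stack([np.ones(5), np.arange(5.0)])
        y = np.array([0.9, 2.1, 2.9, 4.2, 5.1])
        sigma2 = np.array([0.1, 0.1, 0.4, 0.4, 1.0])  # unequal measurement variances

        # Aitken weights: W_ii = 1 / sigma_i^2, assembled as a diagonal matrix.
        W = np.diag(1.0 / sigma2)

        # Solve the weighted normal equations (X^T W X) beta = X^T W y.
        beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)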

      When the errors are uncorrelated, it is convenient to simplify the calculations by factoring the weight matrix as [math]\displaystyle{ w_{ii}=\sqrt{W_{ii}} }[/math]. The normal equations can then be written as

      : [math]\displaystyle{ \mathbf{\left(X'^TX'\right)\hat{\boldsymbol{\beta}}=X'^Ty'}, }[/math]

      where

      : [math]\displaystyle{ \mathbf{X'}=\mathbf{wX}, \qquad \mathbf{y'}=\mathbf{wy}. }[/math]

      For non-linear least squares systems, a similar argument shows that the normal equations should be modified as follows:

      : [math]\displaystyle{ \mathbf{\left(J^TWJ\right)\boldsymbol{\Delta}\boldsymbol{\beta}=J^TW\boldsymbol{\Delta}y}. }[/math]

      Note that for empirical tests, the appropriate W is not known for sure and must be estimated. For this, feasible generalized least squares (FGLS) techniques may be used.
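      A minimal sketch of the same factoring trick, reusing the hypothetical X, y, and sigma2 from the previous snippet: scaling each row of X and y by [math]\displaystyle{ w_{ii} = 1/\sigma_i }[/math] turns the weighted problem into an ordinary least squares problem on the transformed data.

        import numpy as np

        X = np.column_stack([np.ones(5), np.arange(5.0)])
        y = np.array([0.9, 2.1, 2.9, 4.2, 5.1])
        sigma2 = np.array([0.1, 0.1, 0.4, 0.4, 1.0])

        # Factor the weight matrix: w_ii = sqrt(W_ii) = 1 / sigma_i.
        w = 1.0 / np.sqrt(sigma2)

        # Transformed system X' = wX, y' = wy, then ordinary least squares.
        X_prime = X * w[:, None]
        y_prime = y * w
        beta_hat, *_ = np.linalg.lstsq(X_prime, y_prime, rcond=None)
        # beta_hat matches the weighted-normal-equations solution above.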

2003