Linear Function Fitting Algorithm


A Linear Function Fitting Algorithm is a parametric regression algorithm that assumes a linear functional form (with a probabilistic error term) for the relationship between the predictors and the response, and that can be implemented by a linear regression system (to solve a linear regression task).
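
For a concrete sense of the task, here is a minimal sketch (assuming NumPy; the data and variable names are illustrative) of fitting the parameters of a linear function to noisy observations by least squares:

```python
import numpy as np

# Illustrative data: noisy observations of y = 2.0*x + 1.0.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])

# Fit the linear function's parameters by least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta
print(f"fitted linear function: y = {slope:.2f}*x + {intercept:.2f}")
```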



References


2009

  • (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Linear_regression#Estimation_methods
    • QUOTE: Numerous procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency. Some of the more common estimation techniques for linear regression are summarized below.
      • Ordinary least squares (OLS) is the simplest and thus most common estimator. It is conceptually simple and computationally straightforward. OLS estimates are commonly used to analyze both experimental and observational data. The OLS method minimizes the sum of squared residuals, and leads to a closed-form expression for the estimated value of the unknown parameter β: [math]\displaystyle{ \hat\beta = (X'X)^{-1} X'y = \big(\, \tfrac{1}{n}{\textstyle\sum} x_i x'_i \,\big)^{-1} \big(\, \tfrac{1}{n}{\textstyle\sum} x_i y_i \,\big) }[/math] The estimator is unbiased and consistent if the errors have finite variance and are uncorrelated with the regressors[1]: [math]\displaystyle{ \operatorname{E}[\,x_i\varepsilon_i\,] = 0. }[/math] It is also efficient under the assumption that the errors have finite variance and are homoscedastic, meaning that [math]\displaystyle{ \operatorname{E}[\,\varepsilon_i^2|x_i\,] }[/math] does not depend on i. The condition that the errors are uncorrelated with the regressors will generally be satisfied in an experiment, but in the case of observational data, it is difficult to exclude the possibility of an omitted covariate z that is related to both the observed covariates and the response variable. The existence of such a covariate will generally lead to a correlation between the regressors and the response variable, and hence to an inconsistent estimator of β. The condition of homoscedasticity can fail with either experimental or observational data. If the goal is either inference or predictive modeling, the performance of OLS estimates can be poor if multicollinearity is present, unless the sample size is large.

        In simple linear regression, where there is only one regressor (with a constant), the OLS coefficient estimates have a simple form that is closely related to the correlation coefficient between the covariate and the response.

      • Generalized least squares (GLS) is an extension of the OLS method that allows efficient estimation of β when either heteroscedasticity, or correlations, or both are present among the error terms of the model, as long as the form of heteroscedasticity and correlation is known independently of the data. To handle heteroscedasticity when the error terms are uncorrelated with each other, GLS minimizes a weighted analogue to the sum of squared residuals from OLS regression, where the weight for the ith case is inversely proportional to [math]\displaystyle{ \operatorname{var}(\varepsilon_i) }[/math]. This special case of GLS is called “weighted least squares”. The GLS solution to the estimation problem is [math]\displaystyle{ \hat\beta = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y, }[/math] where Ω is the covariance matrix of the errors. GLS can be viewed as applying a linear transformation to the data so that the assumptions of OLS are met for the transformed data. For GLS to be applied, the covariance structure of the errors must be known up to a multiplicative constant.
      • Iteratively reweighted least squares (IRLS) is used when heteroscedasticity, or correlations, or both are present among the error terms of the model, but where little is known about the covariance structure of the errors independently of the data.[2] In the first iteration, OLS, or GLS with a provisional covariance structure is carried out, and the residuals are obtained from the fit. Based on the residuals, an improved estimate of the covariance structure of the errors can usually be obtained. A subsequent GLS iteration is then performed using this estimate of the error structure to define the weights. The process can be iterated to convergence, but in many cases, only one iteration is sufficient to achieve an efficient estimate of β.[3][4]
      • Instrumental variables regression (IV) can be performed when the regressors are correlated with the errors. In this case, we need the existence of some auxiliary instrumental variables [math]\displaystyle{ z_i }[/math] such that [math]\displaystyle{ \operatorname{E}[\,z_i\varepsilon_i\,] = 0 }[/math]. If Z is the matrix of instruments, then the estimator can be given in closed form as: [math]\displaystyle{ \hat\beta = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y }[/math]
      • Optimal instruments regression is an extension of classical IV regression to the situation where [math]\displaystyle{ \operatorname{E}[\,\varepsilon_i|z_i\,] = 0 }[/math].
      • Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.[5]
      • Quantile regression focuses on the conditional quantiles of y given X rather than the conditional mean of y given X. Linear quantile regression models a particular conditional quantile, often the conditional median, as a linear function β′x of the predictors.
      • Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family [math]\displaystyle{ f_\theta }[/math] of probability distributions.[6] When [math]\displaystyle{ f_\theta }[/math] is a normal distribution with mean zero and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
      • Adaptive estimation. If we assume that error terms are independent from the regressors [math]\displaystyle{ \varepsilon_i \perp \mathbf{x}_i }[/math], the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[7]
      • Mixed models are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as parametric models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as normal random variables, there is a close connection between mixed models and generalized least squares.[8] Fixed effects estimation is an alternative approach to analyzing this type of data.
      • Principal component regression (PCR) [9][10] is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis, then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. Partial least squares regression is an extension of the PCR method that does not suffer from this deficiency.
      • Total least squares (TLS) [11] is an approach to least squares estimation of the linear regression model that treats the covariates and response variable in a more geometrically symmetric manner than OLS. It is one approach to handling the "errors in variables" problem, and is also sometimes used even when the covariates are assumed to be error-free.
      • Ridge regression,[12][13][14] and other forms of penalized estimation such as the Lasso,[15] deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimators generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
      • Least angle regression [16] is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
      • The Theil–Sen estimator is a simple robust estimation technique that chooses the slope of the fitted line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers.
    • Other robust estimation techniques, including the α-trimmed mean approach, and L-, M-, S-, and R-estimators have been introduced.
  1. Lai, T. L.; Robbins, H.; Wei, C. Z. (1978). "Strong consistency of least squares estimates in multiple regression". Proceedings of the National Academy of Sciences USA 75 (7). 
  2. del Pino, Guido (1989). "The Unifying Role of Iterative Generalized Least Squares in Statistical Algorithms". Statistical Science 4 (4): 394–403. doi:10.1214/ss/1177012408. JSTOR 2245853. 
  3. Carroll, Raymond J. (1982). "Adapting for Heteroscedasticity in Linear Models". The Annals of Statistics 10 (4): 1224–1233. doi:10.1214/aos/1176345987. JSTOR 2240725. 
  4. Cohen, Michael; Dalal, Siddhartha R.; Tukey, John W. (1993). "Robust, Smoothly Heterogeneous Variance Regression". Journal of the Royal Statistical Society. Series C (Applied Statistics) 42 (2): 339–353. JSTOR 2986237. 
  5. Narula, Subhash C.; Wellington, John F. (1982). "The Minimum Sum of Absolute Errors Regression: A State of the Art Survey". International Statistical Review 50 (3): 317–326. doi:10.2307/1402501. JSTOR 1402501. 
  6. Lange, Kenneth L.; Little, Roderick J. A.; Taylor, Jeremy M. G. (1989). "Robust Statistical Modeling Using the t Distribution". Journal of the American Statistical Association 84 (408): 881–896. doi:10.2307/2290063. JSTOR 2290063. 
  7. Stone, C. J. (1975). "Adaptive maximum likelihood estimators of a location parameter". The Annals of Statistics 3 (2): 267–284. doi:10.1214/aos/1176343056. JSTOR 2958945. 
  8. Goldstein, H. (1986). "Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares". Biometrika 73 (1): 43–56. doi:10.1093/biomet/73.1.43. JSTOR 2336270. 
  9. Hawkins, Douglas M. (1973). "On the Investigation of Alternative Regressions by Principal Component Analysis". Journal of the Royal Statistical Society. Series C (Applied Statistics) 22 (3): 275–286. JSTOR 2346776. 
  10. Jolliffe, Ian T. (1982). "A Note on the Use of Principal Components in Regression". Journal of the Royal Statistical Society. Series C (Applied Statistics) 31 (3): 300–303. JSTOR 2348005. 
  11. Nievergelt, Yves (1994). "Total Least Squares: State-of-the-Art Regression in Numerical Analysis". SIAM Review 36 (2): 258–264. doi:10.1137/1036055. JSTOR 2132463. 
  12. Swindel, Benee F. (1981). "Geometry of Ridge Regression Illustrated". The American Statistician 35 (1): 12–15. doi:10.2307/2683577. JSTOR 2683577. 
  13. Draper, Norman R.; van Nostrand, R. Craig (1979). "Ridge Regression and James-Stein Estimation: Review and Comments". Technometrics 21 (4): 451–466. doi:10.2307/1268284. JSTOR 1268284. 
  14. Hoerl, Arthur E.; Kennard, Robert W.; Hoerl, Roger W. (1985). "Practical Use of Ridge Regression: A Challenge Met". Journal of the Royal Statistical Society. Series C (Applied Statistics) 34 (2): 114–120. JSTOR 2347363. 
  15. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–288. JSTOR 2346178. 
  16. Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics 32 (2): 407–451. doi:10.1214/009053604000000067. JSTOR 3448465. 
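
Several of the estimation methods quoted above reduce to short linear-algebra computations. As a first illustration, here is a minimal sketch of the closed-form OLS estimator β̂ = (X′X)⁻¹X′y, assuming NumPy (the data are synthetic, and np.linalg.solve replaces the explicit inverse for numerical stability):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve the normal equations (X'X) beta = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Synthetic check: recover a known coefficient vector.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
print(ols(X, y))  # approximately [1.0, 2.0, -0.5]
```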
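
Similarly, the GLS solution β̂ = (X′Ω⁻¹X)⁻¹X′Ω⁻¹y and its weighted least squares special case might be sketched as follows, assuming the error covariance Ω (or the per-observation variances) is known up to a multiplicative constant, as the quote requires:

```python
import numpy as np

def gls(X, y, Omega):
    """Generalized least squares with a known error covariance matrix Omega."""
    Oinv = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)

def wls(X, y, weights):
    """Weighted least squares: uncorrelated errors, weight_i proportional to 1/var(eps_i)."""
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Illustrative heteroscedastic example: noise grows with the regressor.
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 5.0, size=n)])
sigma = 0.2 + 0.3 * X[:, 1]
y = X @ np.array([0.5, 1.5]) + rng.normal(scale=sigma)
print(wls(X, y, weights=1.0 / sigma**2))  # approximately [0.5, 1.5]
```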
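
The iteratively reweighted least squares loop described in the quote (fit, estimate the error variance structure from the residuals, refit with the implied weights) could be sketched like this; the working variance model used here, squared residuals regressed on the predictors, is just one illustrative choice:

```python
import numpy as np

def irls(X, y, n_iter=3):
    """IRLS for heteroscedastic errors: alternate between estimating beta and
    estimating a working variance model from the residuals."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)              # initial OLS fit
    for _ in range(n_iter):
        resid = y - X @ beta
        # Illustrative variance model: regress squared residuals on X.
        gamma = np.linalg.solve(X.T @ X, X.T @ resid**2)
        var_hat = np.clip(X @ gamma, 1e-8, None)          # keep working variances positive
        W = np.diag(1.0 / var_hat)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted refit
    return beta
```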
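
The instrumental variables closed form β̂ = (X′Z(Z′Z)⁻¹Z′X)⁻¹X′Z(Z′Z)⁻¹Z′y quoted above coincides with the two-stage least squares estimator; a sketch, assuming the analyst supplies a matrix of instruments Z:

```python
import numpy as np

def iv_2sls(X, y, Z):
    """Instrumental variables (two-stage least squares) estimator."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the column space of Z
    return np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
```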
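
Principal component regression, described in the quote as a two-stage procedure, might be sketched as follows (the centering convention and the choice of k, the number of retained components, are illustrative assumptions):

```python
import numpy as np

def pcr(X, y, k):
    """Principal component regression: OLS of y on the top-k principal
    components of the centered predictors, mapped back to the X scale."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                              # (p, k) principal directions
    T = Xc @ V_k                                # component scores
    gamma = np.linalg.solve(T.T @ T, T.T @ y)   # OLS fit in the reduced space
    beta = V_k @ gamma                          # coefficients on the original predictors
    intercept = y.mean() - x_mean @ beta
    return intercept, beta
```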
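
One standard way to compute the total least squares fit, which treats errors in the covariates and the response symmetrically, is via the SVD of the augmented matrix [X y]; a sketch under that formulation:

```python
import numpy as np

def tls(X, y):
    """Total least squares via the right singular vector of [X y] with the
    smallest singular value."""
    p = X.shape[1]
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]
    # Solve [X y] @ [beta; -1] ~ 0; assumes v[p] is not (near) zero.
    return -v[:p] / v[p]
```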
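
Ridge regression's penalized estimate has the closed form β̂ = (X′X + λI)⁻¹X′y, which makes the bias-variance trade-off described above explicit; a minimal sketch (λ is an illustrative tuning parameter, and penalizing the intercept column is a simplification):

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge regression: least squares with an L2 penalty lam * ||beta||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```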
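
Finally, the Theil–Sen estimator for the one-regressor case takes the median of the slopes over all pairs of sample points; the intercept rule below (median of y_i − slope·x_i) is one common convention:

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen simple linear fit: slope = median of pairwise slopes."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)   # a common intercept convention
    return slope, intercept
```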
