Linear Least-Squares Regression Algorithm

A Linear Least-Squares Regression Algorithm is a least-squares function fitting algorithm and a linear regression algorithm that can be implemented by a linear least-squares system (to solve a linear least-squares optimization task).



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics) Retrieved:2015-12-27.
    • In statistics and mathematics, linear least squares is an approach for fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model. The resulting fitted model can be used to summarize the data, to predict unobserved values from the same system, and to understand the mechanisms that may underlie the system.

      Mathematically, linear least squares is the problem of approximately solving an overdetermined system of linear equations, where the best approximation is defined as that which minimizes the sum of squared differences between the data values and their corresponding modeled values. The approach is called "linear" least squares since the assumed function is linear in the parameters to be estimated. Linear least squares problems are convex and have a closed-form solution that is unique, provided that the number of data points used for fitting equals or exceeds the number of unknown parameters, except in special degenerate situations. In contrast, non-linear least squares problems generally must be solved by an iterative procedure, and the problems can be non-convex with multiple optima for the objective function. If prior distributions are available, then even an underdetermined system can be solved using the Bayesian MMSE estimator.

      In statistics, linear least squares problems correspond to a particularly important type of statistical model called linear regression which arises as a particular form of regression analysis. One basic form of such a model is an ordinary least squares model. The present article concentrates on the mathematical aspects of linear least squares problems, with discussion of the formulation and interpretation of statistical regression models and statistical inferences related to these being dealt with in the articles just mentioned. See outline of regression analysis for an outline of the topic.
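The closed-form solution described in the quote above can be illustrated with a short sketch. The snippet below is not from the quoted source; it assumes NumPy and uses made-up synthetic data, and it shows both the normal-equations route and NumPy's SVD-based np.linalg.lstsq solver for the same overdetermined system.

```python
import numpy as np

# Illustrative sketch (hypothetical data): solve an overdetermined linear
# system X beta ≈ y in the least-squares sense, assuming NumPy is available.
rng = np.random.default_rng(0)
n, p = 50, 3                                    # more equations (n) than unknowns (p)
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=n)    # noisy observations

# Closed-form solution via the normal equations (X^T X) beta = X^T y;
# the solution is unique here because X has full column rank.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferred route: np.linalg.lstsq minimizes ||X beta - y||^2
# with an SVD-based solver and also handles rank-deficient X.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_normal)   # both estimates should be close to [2.0, -1.0, 0.5]
print(beta_lstsq)
```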

2014

  • http://mathworld.wolfram.com/LeastSquaresFitting.html
    • QUOTE: The linear least squares fitting technique is the simplest and most commonly applied form of linear regression and provides a solution to the problem of finding the best fitting straight line through a set of points. In fact, if the functional relationship between the two quantities being graphed is known to within additive or multiplicative constants, it is common practice to transform the data in such a way that the resulting line is a straight line, say by plotting T vs. sqrt(l) instead of T vs. l in the case of analyzing the period T of a pendulum as a function of its length l. For this reason, standard forms for exponential, logarithmic, and power laws are often explicitly computed. The formulas for linear least squares fitting were independently derived by Gauss and Legendre.
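As a rough illustration of the transform-then-fit practice described in the quote above, the sketch below uses hypothetical pendulum measurements and assumes a model T = a·sqrt(l) with a single unknown constant a; transforming the predictor to sqrt(l) turns the fit into an ordinary linear least-squares problem.

```python
import numpy as np

# Hypothetical pendulum data (lengths in m, periods in s), assuming the
# relationship T = a * sqrt(l) holds up to the unknown constant a.
l = np.array([0.25, 0.50, 0.75, 1.00])
T = np.array([1.02, 1.41, 1.75, 2.01])

# Transform the predictor so the model is a straight line through the
# origin in the (sqrt(l), T) plane, then fit it by linear least squares.
X = np.sqrt(l).reshape(-1, 1)
a, *_ = np.linalg.lstsq(X, T, rcond=None)
print(a)   # estimated slope; roughly 2*pi/sqrt(g) ≈ 2.0 for an ideal pendulum
```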

2011

  • http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Motivational_example
  • As a result of an experiment, four [math]\displaystyle{ (x, y) }[/math] data points were obtained: [math]\displaystyle{ (1, 6), }[/math] [math]\displaystyle{ (2, 5), }[/math] [math]\displaystyle{ (3, 7), }[/math] and [math]\displaystyle{ (4, 10) }[/math]. It is desired to find a line [math]\displaystyle{ y=\beta_1+\beta_2 x }[/math] that best fits these four points. In other words, we would like to find the numbers [math]\displaystyle{ \beta_1 }[/math] and [math]\displaystyle{ \beta_2 }[/math] that approximately solve the overdetermined linear system [math]\displaystyle{ \begin{alignat}{3} \beta_1 + 1\beta_2 &&\; = \;&& 6 & \\ \beta_1 + 2\beta_2 &&\; = \;&& 5 & \\ \beta_1 + 3\beta_2 &&\; = \;&& 7 & \\ \beta_1 + 4\beta_2 &&\; = \;&& 10 & \\ \end{alignat} }[/math] of four equations in two unknowns in some "best" sense.

      The least squares approach to solving this problem is to try to make as small as possible the sum of squares of "errors" between the right- and left-hand sides of these equations, that is, to find the minimum of the function : [math]\displaystyle{ \begin{align}S(\beta_1, \beta_2) =& \left[6-(\beta_1+1\beta_2)\right]^2 +\left[5-(\beta_1+2\beta_2) \right]^2 \\ &+\left[7-(\beta_1 + 3\beta_2)\right]^2 +\left[10-(\beta_1 + 4\beta_2)\right]^2.\end{align} }[/math]

      The minimum is determined by calculating the partial derivatives of [math]\displaystyle{ S(\beta_1, \beta_2) }[/math] with respect to [math]\displaystyle{ \beta_1 }[/math] and [math]\displaystyle{ \beta_2 }[/math] and setting them to zero. This results in a system of two equations in two unknowns, called the normal equations, which give, when solved, [math]\displaystyle{ \beta_1=3.5 }[/math] and [math]\displaystyle{ \beta_2=1.4 }[/math]

      and the equation [math]\displaystyle{ y=3.5+1.4x }[/math] of the line of best fit. The residuals, that is, the discrepancies between the [math]\displaystyle{ y }[/math] values from the experiment and the [math]\displaystyle{ y }[/math] values calculated using the line of best fit, are then found to be [math]\displaystyle{ 1.1, }[/math] [math]\displaystyle{ -1.3, }[/math] [math]\displaystyle{ -0.7, }[/math] and [math]\displaystyle{ 0.9 }[/math]. The minimum value of the sum of squares is [math]\displaystyle{ S(3.5, 1.4)=1.1^2+(-1.3)^2+(-0.7)^2+0.9^2=4.2. }[/math]
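The worked example above can be reproduced directly. The following sketch (assuming NumPy) builds the 4×2 design matrix, solves the normal equations, and recovers the coefficients, residuals, and minimum sum of squares quoted in the text.

```python
import numpy as np

# The four data points from the example above.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([6.0, 5.0, 7.0, 10.0])

# Design matrix for the line y = beta1 + beta2 * x
# (the column of ones corresponds to the intercept beta1).
X = np.column_stack([np.ones_like(x), x])

# Solve the normal equations (X^T X) beta = X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)                      # [3.5 1.4]

residuals = y - X @ beta
print(residuals)                 # [ 1.1 -1.3 -0.7  0.9]
print(np.sum(residuals ** 2))    # 4.2
```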

1976

  • (Edwards, 1976) ⇒ Allen Louis Edwards. (1976). “An Introduction to Linear Regression and Correlation.” W. H. Freeman. ISBN: 0716705613