Linear Least-Squares Regression System

From GM-RKB

A Linear Least-Squares Regression System is a linear regression system and a least-squares regression system that implements a linear least-squares algorithm to solve a linear least-squares regression task.
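At its core, such a system fits coefficients x that minimize ||A x - b||^2 for a design matrix A. One direct route is the normal equations (A^T A) x = A^T b; a minimal NumPy sketch on illustrative, noise-free data:

```python
import numpy as np

# Noise-free data from y = 3x + 2, so least squares recovers the line exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * x + 2.0

# Design matrix with a column of ones for the intercept
A = np.vstack([x, np.ones_like(x)]).T

# Normal equations: (A^T A) coef = A^T y
coef = np.linalg.solve(A.T @ A, A.T @ y)
print(coef)  # approximately [3. 2.]
```

In practice, solvers like numpy.linalg.lstsq (used in the example below) are preferred over forming A^T A explicitly, since they handle ill-conditioned and rank-deficient systems more robustly.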

Input:

import numpy as np
import matplotlib.pyplot as plt

# True model y = 5x + 1, sampled at 30 points with added Gaussian noise
f = np.poly1d([5, 1])
x = np.linspace(0, 10, 30)
y = f(x) + 6 * np.random.normal(size=len(x))
xn = np.linspace(0, 10, 200)

# Design matrix [x, 1] so that y is modeled as cf[0]*x + cf[1]
a = np.vstack([x, np.ones(len(x))]).T
cf = np.linalg.lstsq(a, y, rcond=None)[0]
print('Regression Coefficients', cf)

# Evaluate the fitted line on a fine grid and plot it with the data
yn = cf[1] + cf[0] * xn
plt.plot(x, y, 'bo', label='data')
plt.plot(xn, yn, 'g-', label='lsq')
plt.xlabel('$x$', size=24)
plt.ylabel('$y$', size=24)
plt.legend()
plt.show()

Output:

Regression Coefficients [ 4.95315091 0.02915413]

[Figure: linalg.lstsq.png — the noisy data points with the fitted least-squares line]
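For a straight-line fit like the one above, np.polyfit with degree 1 solves the same least-squares problem as the explicit design-matrix route; a quick cross-check with a fixed seed (seed and data values are illustrative):

```python
import numpy as np

# Fixed seed so both fits see identical noisy data
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 5 * x + 1 + 6 * rng.normal(size=len(x))

# Least-squares fit via the design matrix [x, 1]
a = np.vstack([x, np.ones(len(x))]).T
cf = np.linalg.lstsq(a, y, rcond=None)[0]

# Same fit via polyfit(degree=1): returns [slope, intercept]
pf = np.polyfit(x, y, 1)
print(np.allclose(cf, pf))  # True
```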


References



2017c

  • (Scipy, 2017) ⇒ The Scipy community (2008-2009). “numpy.linalg.lstsq.” https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html Last updated on Jun 10, 2017.
    • numpy.linalg.lstsq(a, b, rcond=-1)

      Return the least-squares solution to a linear matrix equation.

      Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2. The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation (...)
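As the excerpt notes, when a is square and of full rank the least-squares solution coincides, up to round-off, with the exact solution; a small illustration (the matrix and vector are illustrative):

```python
import numpy as np

# A square, full-rank system: lstsq and solve agree up to round-off
a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x_lstsq = np.linalg.lstsq(a, b, rcond=None)[0]
x_exact = np.linalg.solve(a, b)
print(np.allclose(x_lstsq, x_exact))  # True
```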

2017d

  • (Scipy, 2017) ⇒ The Scipy community (2008-2009). “scipy.optimize.lsq_linear.” https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.lsq_linear.html Last updated on Jun 10, 2017.
    • scipy.optimize.lsq_linear(A, b, bounds=(-inf, inf), method='trf', tol=1e-10, lsq_solver=None, lsmr_tol=None, max_iter=None, verbose=0)

      Solve a linear least-squares problem with bounds on the variables.

      Given a m-by-n design matrix A and a target vector b with m elements, lsq_linear solves the following optimization problem:

      minimize 0.5 * ||A x - b||**2

      subject to lb <= x <= ub

      This optimization problem is convex, hence a found minimum (if iterations have converged) is guaranteed to be global(...)
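A minimal sketch of calling lsq_linear with a non-negativity bound, on illustrative data whose unconstrained fit would give a negative slope, so the constraint is active:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Overdetermined system; the unconstrained least-squares slope is negative
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.5, 0.0])

# Constrain both coefficients to be non-negative
res = lsq_linear(A, b, bounds=(0, np.inf))
print(res.x)  # close to [0.5, 0.]: the slope is clamped at its lower bound
```

Because the problem is convex, the returned res.x is the global constrained minimizer, as the excerpt above guarantees.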


2017f

  • (Scipy, 2017) ⇒ The Scipy community (2008-2009). “scipy.sparse.linalg.lsmr.” https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.lsmr.html Last updated on Jun 10, 2017.
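lsmr is an iterative least-squares solver aimed at large sparse systems; a minimal sketch on a small dense problem (the data are illustrative, chosen so the system is consistent):

```python
import numpy as np
from scipy.sparse.linalg import lsmr

# Overdetermined straight-line fit, solved iteratively with lsmr
A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
b = np.array([1.0, 3.0, 5.0])  # exactly y = 2x + 1

# lsmr returns a tuple; the first element is the solution vector
x = lsmr(A, b)[0]
print(x)  # close to [2. 1.]
```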
