Linear Least-Squares L2-Regularized Regression Task

From GM-RKB

A Linear Least-Squares L2-Regularized Regression Task is a linear least-squares regression task that is also a regularized linear regression task, one whose regularization term applies the L2-norm to the coefficient vector.



References

2017a

  • (Wikipedia, 2017) ⇒ "Tikhonov regularization." https://en.wikipedia.org/wiki/Tikhonov_regularization Retrieved: 2017-8-20.
    • Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems. In statistics, the method is known as ridge regression, in machine learning it is known as weight decay, and with multiple independent discoveries, it is also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the constrained linear inversion method, and the method of linear regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-squares problems.

      Suppose that for a known matrix [math]\displaystyle{ A }[/math] and vector [math]\displaystyle{ \mathbf{b} }[/math], we wish to find a vector [math]\displaystyle{ \mathbf{x} }[/math] such that: [math]\displaystyle{ A\mathbf{x}=\mathbf{b} }[/math] The standard approach is ordinary least squares linear regression. However, if no [math]\displaystyle{ \mathbf{x} }[/math] satisfies the equation or more than one [math]\displaystyle{ \mathbf{x} }[/math] does — that is, the solution is not unique — the problem is said to be ill-posed. In such cases, ordinary least squares estimation leads to an overdetermined (over-fitted), or more often an underdetermined (under-fitted), system of equations.

      Most real-world phenomena have the effect of low-pass filters in the forward direction where [math]\displaystyle{ A }[/math] maps [math]\displaystyle{ \mathbf{x} }[/math] to [math]\displaystyle{ \mathbf{b} }[/math]. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of [math]\displaystyle{ \mathbf{x} }[/math] that is in the null-space of [math]\displaystyle{ A }[/math], rather than allowing for a model to be used as a prior for [math]\displaystyle{ \mathbf{x} }[/math].

      Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as: [math]\displaystyle{ \|A\mathbf{x}-\mathbf{b}\|^2 }[/math] where [math]\displaystyle{ \left \| \cdot \right \| }[/math] is the Euclidean norm.
      In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization: [math]\displaystyle{ \|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2 }[/math] for some suitably chosen Tikhonov matrix, [math]\displaystyle{ \Gamma }[/math]. In many cases, this matrix is chosen as a multiple of the identity matrix ([math]\displaystyle{ \Gamma= \alpha I }[/math]), giving preference to solutions with smaller norms; this is known as L2 regularization. In other cases, lowpass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous.

      This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by [math]\displaystyle{ \hat{x} }[/math], is given by: [math]\displaystyle{ \hat{x} = (A^\top A+ \Gamma^\top \Gamma )^{-1}A^\top\mathbf{b} }[/math] The effect of regularization may be varied via the scale of matrix [math]\displaystyle{ \Gamma }[/math]. For [math]\displaystyle{ \Gamma = 0 }[/math] this reduces to the unregularized least squares solution, provided that [math]\displaystyle{ (A^\top A)^{-1} }[/math] exists.

      L2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
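The closed-form solution above can be sketched numerically. The following NumPy snippet (with made-up matrix sizes and data for illustration) computes [math]\displaystyle{ \hat{x} = (A^\top A+ \Gamma^\top \Gamma )^{-1}A^\top\mathbf{b} }[/math] with [math]\displaystyle{ \Gamma = \alpha I }[/math], alongside the unregularized solution it reduces to when [math]\displaystyle{ \Gamma = 0 }[/math]:

```python
import numpy as np

# Made-up small system A x = b, following the article's notation.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 4))
b = rng.normal(size=10)

# Tikhonov matrix chosen as a multiple of the identity (L2 regularization).
alpha = 0.5
Gamma = alpha * np.eye(4)

# Explicit regularized solution: x_hat = (A^T A + Gamma^T Gamma)^(-1) A^T b
x_hat = np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

# With Gamma = 0 this reduces to the ordinary least-squares solution.
x_ols = np.linalg.solve(A.T @ A, A.T @ b)
```

Because [math]\displaystyle{ \Gamma= \alpha I }[/math] penalizes the norm of the solution, the norm of `x_hat` comes out smaller than that of `x_ols`.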

2017b

  • (Zhang, 2017) ⇒ Xinhua Zhang (2017). “Regularization" in “Encyclopedia of Machine Learning and Data Mining” (Sammut & Webb, 2017) pp 1083 - 1088 ISBN: 978-1-4899-7687-1, DOI: 10.1007/978-1-4899-7687-1_718
    • QUOTE: An Illustrative Example: Ridge Regression

      Ridge regression is illustrative of the use of regularization. It tries to fit the label [math]\displaystyle{ y }[/math] by a linear model [math]\displaystyle{ \left \langle \mathbf{w},\mathbf{x}\right \rangle }[/math] (inner product). So we need to solve a system of linear equations in [math]\displaystyle{ \mathbf{w} }[/math]: [math]\displaystyle{ (\mathbf{x}_{1},\ldots, \mathbf{x}_{n})^{\top }\mathbf{w} =\mathbf{ y} }[/math], which is equivalent to a linear least-squares problem: [math]\displaystyle{ \min _{\mathbf{w}\in \mathbb{R}^{p}}\left \|X^{\top }\mathbf{w} -\mathbf{ y}\right \|^{2} }[/math]. If the rank of X is less than the dimension of [math]\displaystyle{ \mathbf{w} }[/math], then the system is underdetermined and the solution is not unique.

      To approach this ill-posed problem, one needs to introduce additional assumptions on what models are preferred, i.e., the regularizer. One choice is to pick a matrix [math]\displaystyle{ \Gamma }[/math] and regularize [math]\displaystyle{ \mathbf{w} }[/math] by [math]\displaystyle{ \left \|\Gamma \mathbf{w}\right \|^{2} }[/math]. As a result we solve [math]\displaystyle{ \min _{\mathbf{w}\in \mathbb{R}^{p}}\left \|X^{\top }\mathbf{w} -\mathbf{ y}\right \|^{2} +\lambda \left \|\Gamma ^{\top }\mathbf{w}\right \|^{2} }[/math], and the solution has a closed form [math]\displaystyle{ \mathbf{w}^{{\ast}} = (XX^{\top } +\lambda \Gamma \Gamma ^{\top })^{-1}X\mathbf{y} }[/math]. [math]\displaystyle{ \Gamma }[/math] can be simply the identity matrix, which encodes our preference for small norm models.

      The use of regularization can also be justified from a Bayesian point of view. If we treat [math]\displaystyle{ \mathbf{w} }[/math] as a multivariate random variable and take the likelihood to be [math]\displaystyle{ \exp \left (-\left \|X^{\top }\mathbf{w} -\mathbf{ y}\right \|^{2}\right ) }[/math], then the minimizer of [math]\displaystyle{ \left \|X^{\top }\mathbf{w} -\mathbf{ y}\right \|^{2} }[/math] is just a maximum likelihood estimate of [math]\displaystyle{ \mathbf{w} }[/math]. However, we may also assume a prior distribution over [math]\displaystyle{ \mathbf{w} }[/math], e.g., a Gaussian prior [math]\displaystyle{ p(\mathbf{w}) \sim \exp \left (-\lambda \left \|\Gamma ^{\top }\mathbf{w}\right \|^{2}\right ) }[/math]. Then the solution of the ridge regression is simply the maximum a posteriori estimate of [math]\displaystyle{ \mathbf{w} }[/math].
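The Bayesian reading can be made explicit by multiplying the likelihood and the prior and taking the negative logarithm of the posterior (a one-step derivation using the symbols above):

```latex
% Posterior \propto likelihood \times prior:
% p(\mathbf{w} \mid X, \mathbf{y}) \propto
%   \exp\!\left(-\left\|X^{\top}\mathbf{w} - \mathbf{y}\right\|^{2}\right)
%   \cdot \exp\!\left(-\lambda \left\|\Gamma^{\top}\mathbf{w}\right\|^{2}\right)
-\log p(\mathbf{w} \mid X, \mathbf{y})
  = \left\|X^{\top}\mathbf{w} - \mathbf{y}\right\|^{2}
  + \lambda \left\|\Gamma^{\top}\mathbf{w}\right\|^{2}
  + \mathrm{const}
```

so maximizing the posterior over [math]\displaystyle{ \mathbf{w} }[/math] is exactly minimizing the regularized ridge objective.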

2017c

  • (Scikit-Learn, 2017) ⇒ "1.1.2. Ridge Regression" http://scikit-learn.org/stable/modules/linear_model.html#ridge-regression
    • QUOTE: 1.1.2. Ridge Regression

      Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares,

      [math]\displaystyle{ \underset{w}{\min\,} {\left\| X w - y\right\|_2}^2 + \alpha {\left\|w\right\|_2}^2 }[/math]

      Here, [math]\displaystyle{ \alpha \geq 0 }[/math] is a complexity parameter that controls the amount of shrinkage: the larger the value of [math]\displaystyle{ \alpha }[/math], the greater the amount of shrinkage and thus the coefficients become more robust to collinearity.
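A minimal usage sketch of scikit-learn's `Ridge` estimator follows; the toy data below is invented for illustration, while `alpha`, `fit`, and `coef_` are the documented API. It shows the shrinkage behavior described above: a larger `alpha` yields coefficients with a smaller norm.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Invented toy design with two nearly collinear columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)   # near-duplicate of column 0
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=50)

# Larger alpha => greater shrinkage of the coefficient vector.
ridge_small = Ridge(alpha=1e-3).fit(X, y)
ridge_large = Ridge(alpha=100.0).fit(X, y)
```

Comparing `np.linalg.norm(ridge_large.coef_)` against `np.linalg.norm(ridge_small.coef_)` makes the effect of the complexity parameter concrete on this collinear design.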