Regularized Linear Regression Task


A Regularized Linear Regression Task is a linear regression task that is based on the minimization of a regularized objective function.
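As a minimal sketch of such a task (not code from this page's sources), the NumPy snippet below solves one common instance, ridge regression, where the regularized objective ||Xw − y||² + λ||w||² has a closed-form minimizer; the helper name `ridge_fit`, the toy data, and the choice λ = 0.1 are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form minimizer of the regularized objective ||X w - y||^2 + lam * ||w||^2."""
    # Normal equations with a Tikhonov (L2) term added to the Gram matrix.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
print(ridge_fit(X, y, lam=0.1))  # close to [1.0, -2.0, 0.5]; shrinks toward 0 as lam grows
```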



References

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Regularization_(mathematics)#Use_of_regularization_in_classification Retrieved: 2017-08-20.
    • One particular use of regularization is in the field of classification. Empirical learning of classifiers (learning from a finite data set) is always an underdetermined problem, because in general we are trying to infer a function of any [math]\displaystyle{ x }[/math] given only some examples [math]\displaystyle{ x_1, x_2, \ldots, x_n }[/math].

      A regularization term (or regularizer) [math]\displaystyle{ R(f) }[/math] is added to a loss function: [math]\displaystyle{ \min_f \sum_{i=1}^{n} V(f(\hat x_i), \hat y_i) + \lambda R(f) }[/math] where [math]\displaystyle{ V }[/math] is an underlying loss function that describes the cost of predicting [math]\displaystyle{ f(x) }[/math] when the label is [math]\displaystyle{ y }[/math], such as the square loss or hinge loss, and [math]\displaystyle{ \lambda }[/math] is a parameter which controls the importance of the regularization term. [math]\displaystyle{ R(f) }[/math] is typically chosen to impose a penalty on the complexity of [math]\displaystyle{ f }[/math]. Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm. A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. (A runnable sketch of this objective for a linear model appears after this excerpt.)

      Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more (the sparsity case is sketched below).

      The same idea arose in many fields of science. For example, the least-squares method can be viewed as a very simple form of regularization. A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular (see the total-variation sketch below).
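The generic objective above leaves [math]\displaystyle{ V }[/math] and [math]\displaystyle{ R }[/math] open. As a minimal sketch, the following snippet specializes it to a linear model with square loss and an L2 regularizer, minimized by plain gradient descent; the function names, step size, and toy data are illustrative assumptions, not a prescribed solver.

```python
import numpy as np

def fit_regularized(X, y, grad_V, grad_R, lam, lr=0.01, steps=2000):
    """Gradient descent on (1/n) * sum_i V(f(x_i), y_i) + lam * R(w), with f(x) = x . w."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * (grad_V(X, y, w) + lam * grad_R(w))
    return w

def grad_square(X, y, w):
    # Gradient of the averaged square loss V(f(x), y) = (x . w - y)^2.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def grad_l2(w):
    # Gradient of the L2 regularizer R(w) = ||w||^2.
    return 2.0 * w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -1.0]) + 0.1 * rng.normal(size=100)
print(fit_regularized(X, y, grad_square, grad_l2, lam=0.5))
# Increasing lam pulls the recovered weights further toward zero.
```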
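To illustrate the sparsity claim, here is a hedged sketch of L1-regularized (lasso) linear regression solved with iterative soft-thresholding (ISTA); the helper names, step-size choice, and toy data are assumptions for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks entries toward zero, zeroing small ones.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, steps=3000):
    """ISTA for min_w (1/2n) ||X w - y||^2 + lam * ||w||_1 (sparsity-inducing)."""
    n = len(y)
    lr = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = soft_threshold(w - lr * X.T @ (X @ w - y) / n, lr * lam)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 10))
w_true = np.zeros(10)
w_true[[0, 3]] = [2.0, -1.5]             # only 2 of 10 features actually matter
y = X @ w_true + 0.05 * rng.normal(size=80)
print(lasso_ista(X, y, lam=0.1))         # most coefficients come out exactly zero
```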
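And for the total variation remark, a sketch of 1-D TV denoising that replaces each |x_{i+1} − x_i| with the smooth surrogate sqrt((x_{i+1} − x_i)² + ε) so plain gradient descent applies; the smoothing, step size, and test signal are illustrative choices, not the standard TV solvers.

```python
import numpy as np

def tv_denoise_1d(y, lam, eps=1e-2, lr=0.02, steps=5000):
    """Gradient descent on 0.5 * ||x - y||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps),
    a smoothed 1-D total variation objective (eps rounds off the kink of |.| at 0)."""
    x = y.copy()
    for _ in range(steps):
        d = np.diff(x)                    # forward differences x[i+1] - x[i]
        g = d / np.sqrt(d ** 2 + eps)     # derivative of each smoothed |difference|
        grad = x - y
        grad[:-1] -= lam * g              # each difference term touches x[i] ...
        grad[1:] += lam * g               # ... and x[i+1] with opposite signs
        x -= lr * grad
    return x

rng = np.random.default_rng(3)
signal = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant ground truth
noisy = signal + 0.2 * rng.normal(size=100)
print(np.round(tv_denoise_1d(noisy, lam=1.0)[45:55], 2))  # jump kept, noise flattened
```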