Supervised Model-based Estimation Algorithm
Latest revision as of 20:46, 29 December 2022
A Supervised Model-based Estimation Algorithm is a supervised estimation algorithm that is a model-based learning algorithm.
- AKA: Regression Method.
- Context:
- It can range from being an Eager Model-based Estimation Algorithm to being a Lazy Model-based Estimation Algorithm.
- It can be implemented by a Supervised Model-based Estimation System (to solve a supervised model-based estimation task).
- Example(s):
- Counter-Example(s):
- See: Model-based.
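The eager/lazy distinction above can be sketched in code. The following is a minimal illustration (not from the source) contrasting an eager estimator, which fits a parametric model at training time and then discards the data, with a lazy estimator, which keeps the data and defers estimation to query time (here a k-nearest-neighbor average); the toy data and function names are assumptions for the sketch.

```python
import numpy as np

# Hypothetical noise-free toy data on the line y = 2x + 1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

def eager_fit(X, y):
    """Eager model-based estimation: fit parameters now, discard the data."""
    A = np.hstack([X, np.ones((len(X), 1))])      # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
    return lambda x: coef[0] * x + coef[1]

def lazy_fit(X, y, k=2):
    """Lazy model-based estimation: keep the data, estimate at query time."""
    def predict(x):
        idx = np.argsort(np.abs(X[:, 0] - x))[:k]  # k nearest training points
        return y[idx].mean()
    return predict

eager = eager_fit(X, y)
lazy = lazy_fit(X, y)
```

Both estimators agree on this noise-free toy data (e.g. at x = 1.5 each returns 4.0), but they trade off training cost against query cost differently.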
References
1996
- (Tibshirani, 1996) ⇒ Robert Tibshirani. (1996). “Regression Shrinkage and Selection via the Lasso.” In: Journal of the Royal Statistical Society, Series B, 58(1).
- QUOTE: Consider the usual regression situation: we have data $(\mathbf{x}^i, y_i),\ i = 1, 2, \dots, N$, where $\mathbf{x}^i = (x_{i1}, \dots, x_{ip})^T$ and $y_i$ are the regressors and response for the $i$th observation. The ordinary least squares (OLS) estimates are obtained by minimizing the residual squared error.
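The OLS estimate described in the quote can be sketched numerically: given data $(\mathbf{x}^i, y_i)$, choose the coefficient vector minimizing the residual squared error $\sum_i (y_i - \mathbf{x}^{iT}\boldsymbol{\beta})^2$. The data, dimensions, and variable names below are assumptions for illustration; the minimization is delegated to NumPy's least-squares solver.

```python
import numpy as np

# Hypothetical toy data in the quote's notation: N observations, p regressors.
rng = np.random.default_rng(0)
N, p = 50, 3
X = rng.normal(size=(N, p))           # row i is x^i = (x_i1, ..., x_ip)
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                     # responses y_i (noise-free here)

# OLS estimate: minimize the residual squared error sum_i (y_i - x^i.T @ beta)^2.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

On noise-free data the OLS fit recovers the generating coefficients exactly (up to floating-point tolerance); with noise it returns the residual-squared-error minimizer instead.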