Eager Model-based Estimation Algorithm

From GM-RKB
An [[Eager Model-based Estimation Algorithm]] is an [[eager estimation algorithm]] that is a [[model-based estimation algorithm]] (which eagerly produces a [[number prediction model]] from all [[training data]]).
* <B>Context:</B>
** It can be applied by an [[Eager Model-based Estimation System]] (to solve an [[eager model-based estimation task]]).
** It can range from being a [[Parametric Regression Algorithm]] to being a [[Non-Parametric Regression Algorithm]].
** It can (often) be a [[Function Fitting Algorithm]].
** …
* <B>Example(s):</B>
** an [[Ordinary Linear Regression Algorithm]].
** a [[Generalized Linear Regression Algorithm]].
** a [[Logistic Regression Algorithm]].
** …
* <B>Counter-Example(s):</B>
** [[Lazy Model-based Regression Algorithm]].
** [[Eager Instance-based Regression Algorithm]].
** [[Eager Model-based Classification Algorithm]].
* <B>See:</B> [[Error Term]], [[Eager Model-based Learning]], [[Conditional Expectation]], [[Average Value]], [[Data-Driven Prediction Algorithm]].
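The eager pattern described above (produce the model once, from all training data, before any query arrives) can be sketched with an ordinary least squares fit. The toy data and the `predict` helper below are hypothetical illustrations, not part of this concept page:

```python
import numpy as np

# Hypothetical toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=50)

# Eager step: fit the number prediction model ONCE, from ALL training data.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # design matrix with intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # coef = [slope, intercept]

# Prediction afterwards uses only the fitted parameters;
# the training set is no longer consulted (unlike a lazy algorithm).
def predict(x):
    return coef[0] * x + coef[1]
```

This contrasts with a [[Lazy Model-based Regression Algorithm]], which would defer model construction until a query point is presented.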
 
----
----
 
== References ==
 
=== 2016 ===
* (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Regression_analysis Retrieved:2016-6-3.
** In [[statistical model]]ing, '''regression analysis''' is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a [[dependent variable]] and one or more [[independent variable]]s (or 'predictors'). More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the [[conditional expectation]] of the dependent variable given the independent variables – that is, the [[average value]] of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a [[quantile]], or other [[location parameter]] of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a [[function (mathematics)|function]] of the independent variables called the '''regression function'''. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function, which can be described by a [[probability distribution]].        <P>        Regression analysis is widely used for [[prediction]] and [[forecasting|forecast]]ing, where its use has substantial overlap with the field of [[machine learning]]. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer [[causality|causal relationships]] between the independent and dependent variables. However, this can lead to illusions or false relationships, so caution is advisable; for example, [[correlation does not imply causation]]. 
Many techniques for carrying out regression analysis have been developed. Familiar methods such as [[linear regression]] and [[ordinary least squares]] regression are [[parametric statistics|parametric]], in that the regression function is defined in terms of a finite number of unknown [[parameter]]s that are estimated from the [[data]]. [[Nonparametric regression]] refers to techniques that allow the regression function to lie in a specified set of [[function (mathematics)|functions]], which may be [[dimension|infinite-dimensional]]. The performance of regression analysis methods in practice depends on the form of the [[data collection|data generating process]], and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small [[effect size|effects]] or questions of [[causality]] based on [[observational study|observational data]], regression methods can give misleading results. <ref> David A. Freedman, ''Statistical Models: Theory and Practice'', Cambridge University Press (2005) </ref> <ref> R. Dennis Cook; Sanford Weisberg [http://links.jstor.org/sici?sici=0081-1750%281982%2913%3C313%3ACAIAIR%3E2.0.CO%3B2-3 Criticism and Influence Analysis in Regression], ''Sociological Methodology'', Vol. 13. (1982), pp. 313–361 </ref> In a narrower sense, regression may refer specifically to the estimation of continuous response variables, as opposed to the discrete response variables used in [[statistical classification|classification]].  
The case of a continuous output variable may be more specifically referred to as '''metric regression''' to distinguish it from related problems.
<references/>
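The parametric/nonparametric contrast drawn in the passage above can be illustrated with a short sketch. The synthetic data are hypothetical, and the Nadaraya–Watson kernel smoother is used here as one representative nonparametric technique (the cited article does not name it):

```python
import numpy as np

# Hypothetical data: a nonlinear signal with additive noise.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)

# Parametric: the regression function is restricted to a line,
# defined by a finite number of parameters (slope, intercept).
slope, intercept = np.polyfit(x, y, 1)

# Nonparametric (Nadaraya-Watson kernel smoother): the regression function
# is allowed to lie in a flexible function class; E[Y | X = x0] is estimated
# by a locally weighted average of the observed y values.
def kernel_estimate(x0, bandwidth=0.3):
    weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel
    return np.sum(weights * y) / np.sum(weights)
```

Both variants are eager in the sense of this page when the fit (or the retained sample used for smoothing) is fixed from the full training data before queries arrive; they differ in whether the regression function is constrained to a finite-parameter family.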
 
----
 
__NOTOC__
[[Category:Concept]]

Latest revision as of 17:51, 4 October 2023
