Cross-Validation Evaluation Task

From GM-RKB

A [[Cross-Validation Evaluation Task]] is an [[Out-of-Sample Evaluation Task]] that estimates how accurately a [[predictive model]] will perform in practice.
 
== References ==
 
=== 2020 ===
* (SciKit-Learn, 2020) ⇒ https://scikit-learn.org/stable/modules/cross_validation.html Retrieved: 2020-02-15.
** QUOTE: However, by [[partitioning]] the available [[data]] into three [[set]]s, we drastically reduce the number of [[sample]]s which can be used for [[learning the model]], and the results can depend on a particular [[random choice]] for the pair of ([[Training Dataset|train]], [[Validation Dataset|validation]]) [[set]]s. <P>A solution to this problem is a procedure called [[Cross-Validation Task|cross-validation (CV for short)]]. A [[test set]] should still be held out for final [[evaluation]], but the [[validation set]] is no longer needed when doing [[CV]]. In the basic approach, called [[k-fold CV]], the [[training set]] is split into k smaller [[set]]s (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the [[K-Fold Cross-Validation Task|k “folds”]]:
*** A [[model]] is trained using $k-1$ of the [[fold]]s as [[training data]];
*** the resulting [[model]] is validated on the remaining part of the [[data]] (i.e., it is used as a [[test set]] to compute a [[performance measure]] such as [[accuracy]]).
:: The [[performance measure]] reported by [[k-fold cross-validation]] is then the [[average]] of the values computed in the [[loop]]. This approach can be [[computationally expensive]], but does not waste too much [[data]] (as is the case when fixing an arbitrary [[validation set]]), which is a major advantage in problems such as [[inverse inference]] where the number of samples is very small.<P><div style="text-align:center"><html><img src="https://scikit-learn.org/stable/_images/grid_search_cross_validation.png" width=50%/></html></div>
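
The quoted k-fold procedure might be sketched as follows (a minimal illustration assuming scikit-learn; the choice of k=5, a logistic-regression classifier, and the iris dataset are assumptions for the example, not prescribed by the quoted text):
<pre>
# Sketch of the quoted k-fold procedure (illustrative choices: k=5,
# logistic regression, iris data; none are prescribed by the quote).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
k_fold = KFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train_idx, val_idx in k_fold.split(X):
    # A model is trained using k-1 of the folds as training data ...
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    # ... and validated on the remaining fold (accuracy, in this sketch).
    scores.append(model.score(X[val_idx], y[val_idx]))

# The reported performance measure is the average over the k folds.
print(sum(scores) / len(scores))
</pre>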
 
=== 2019 ===
* (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Cross-validation_(statistics) Retrieved:2019-5-1.
** QUOTE: Cross-validation, sometimes called rotation estimation, [1] [2] [3] or out-of-sample testing is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run (training dataset), and a dataset of unknown data (or first seen data) against which the model is tested (called the validation dataset or testing set). [4] [5] The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem). One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model's predictive performance. In summary, cross-validation combines (averages) measures of fitness in prediction to derive a more accurate estimate of model prediction performance. [6]
*** 1. Geisser, Seymour (1993). Predictive Inference. New York, NY: Chapman and Hall. ISBN 978-0-412-03471-8.
*** 2. Kohavi, Ron (1995). "A study of cross-validation and bootstrap for accuracy estimation and model selection". Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. 2 (12): 1137–1143.
*** 3. Devijver, Pierre A.; Kittler, Josef (1982). Pattern Recognition: A Statistical Approach. London, GB: Prentice-Hall.
*** 4. "What is the difference between test set and validation set?". Retrieved 10 October 2018.
*** 5. "Newbie question: Confused about train, validation and test data!". Archived from the original on 2015-03-14. Retrieved 2013-11-14.
*** 6. Grossman, Robert; Seni, Giovanni; Elder, John; Agarwal, Nitin; Liu, Huan (2010). "Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions". Synthesis Lectures on Data Mining and Knowledge Discovery. Morgan & Claypool. 2: 1–126. doi:10.2200/S00240ED1V01Y200912DMK002.
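
The round-based averaging described in the quote might be sketched with scikit-learn's <code>cross_val_score</code> helper (a minimal illustration; the estimator, dataset, and choice of 5 rounds are assumptions for the example, not part of the quoted text):
<pre>
# Run 5 rounds of cross-validation and combine (average) the per-round scores.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=5, scoring="accuracy")
print(scores)          # one score per round (fold)
print(scores.mean())   # combined (averaged) estimate of predictive performance
</pre>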
 
=== 2005 ===
** ABSTRACT: It is widely known that [[significant]] [[in-sample]] [[evidence of predictability]] does not guarantee [[significant]] [[out-of-sample]] [[predictability]]. This is often interpreted as an indication that [[in-sample evidence]] is likely to be spurious and should be discounted. [[In this paper, we]] question this interpretation. Our analysis shows that neither data mining nor dynamic misspecification of the model under the null nor unmodelled structural change under the null are plausible explanations of the observed tendency of in-sample tests to reject the no-predictability null more often than [[out-of-sample]] tests. [[We]] provide an alternative explanation based on the higher power of in-sample tests of predictability in many situations. [[We]] conclude that results of in-sample tests of predictability will typically be more credible than results of [[out-of-sample]] tests.
 
----
 
__NOTOC__
 
[[Category:Concept]]
 
[[Category:Machine Learning]]
 
[[Category:Statistical Inference]]
 