Predictive Inference


A Predictive Inference is a data-driven inference that involves the prediction of future observations based on past observations.
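As a minimal illustration of the idea, the sketch below predicts the next observation directly from past binary observations via a posterior predictive probability. The Beta-Bernoulli model and all names here are illustrative assumptions, not part of this entry's sources.

```python
# Minimal sketch of predictive inference: predict a future observation
# directly from past observations via a posterior predictive probability.
# The Beta-Bernoulli model and all names below are illustrative choices.

def posterior_predictive_bernoulli(observations, alpha=1.0, beta=1.0):
    """P(next observation = 1 | past 0/1 observations) under a Beta(alpha, beta) prior.

    The Beta-Bernoulli posterior predictive has the closed form
    (alpha + #successes) / (alpha + beta + n); with alpha = beta = 1
    this is Laplace's rule of succession.
    """
    successes = sum(observations)
    n = len(observations)
    return (alpha + successes) / (alpha + beta + n)

past = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]       # ten past observations, seven successes
print(posterior_predictive_bernoulli(past))  # (1 + 7) / (2 + 10) = 0.666...
```

Note that the inference targets the next observable value itself, not a model parameter; the parameter only appears as an intermediate quantity that is averaged out.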

See: Statistical Inference, Data Processing, Predictive Model, Statistical Modelling, Inductive Inference.



References

2021

  • (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/predictive_inference Retrieved:2021-11-27.
    • Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.

      Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti. The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics). De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser.
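For context (a standard formulation, not quoted from the Wikipedia entry), de Finetti's representation theorem makes the link between exchangeability and prediction precise: the joint law of an infinite exchangeable binary sequence is a mixture of i.i.d. Bernoulli laws, so past observations inform future ones only through the latent mixing measure.

```latex
% De Finetti's representation theorem for an infinite exchangeable
% sequence of binary random variables X_1, X_2, \ldots: the joint law
% is a mixture of i.i.d. Bernoulli laws over a latent parameter \theta
% with mixing measure \mu on [0, 1].
P(X_1 = x_1, \ldots, X_n = x_n)
  \;=\; \int_0^1 \theta^{\sum_{i=1}^{n} x_i}\,
        (1 - \theta)^{\,n - \sum_{i=1}^{n} x_i} \, d\mu(\theta)
```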

2000

  • (Shimodaira, 2000) ⇒ Hidetoshi Shimodaira. (2000). “Improving Predictive Inference under Covariate Shift by Weighting the Log-likelihood Function.” Journal of Statistical Planning and Inference, 90(2).
    • ABSTRACT: A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is effective in cases such as sample surveys or design of experiments, where the observed covariate follows a different distribution than that in the whole population. Under misspecification of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is defined by the expected Kullback–Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct specification of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed.
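The following is a minimal sketch of the paper's weighting idea under covariate shift: each sample's log-likelihood is weighted by the density ratio of the covariate in the target population to that in the observed data, which for Gaussian errors reduces to weighted least squares. The Gaussian covariate densities and the misspecified linear fit to a quadratic truth below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of covariate-shift weighting (after Shimodaira, 2000): weight each
# sample's log-likelihood by w(x) = p_population(x) / p_observed(x), the
# ratio of the covariate density in the target population to that in the
# observed data. All distributions and the model below are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Covariate shift: observed x ~ N(0.5, 0.5^2), target population x ~ N(0, 0.3^2).
x_train = rng.normal(0.5, 0.5, size=200)
y_train = -x_train + x_train**2 + rng.normal(0, 0.1, size=200)  # true curve is quadratic

w = norm.pdf(x_train, 0.0, 0.3) / norm.pdf(x_train, 0.5, 0.5)   # density-ratio weights

# Misspecified model: degree-1 polynomial. With Gaussian errors, maximizing
# the weighted log-likelihood reduces to weighted least squares.
X = np.vander(x_train, N=2, increasing=True)          # columns: 1, x
beta_uniform = np.linalg.lstsq(X, y_train, rcond=None)[0]
Xw = X * np.sqrt(w)[:, None]                          # reweight design and response
beta_weighted = np.linalg.lstsq(Xw, y_train * np.sqrt(w), rcond=None)[0]

print("uniform-weight fit :", beta_uniform)   # tuned to the observed density
print("density-ratio fit  :", beta_weighted)  # better near the target density
```

Under correct specification of the model the two fits agree asymptotically, matching the abstract's remark that the uniform weight (ordinary maximum likelihood) is then optimal.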

1993

  • (Geisser, 1993) ⇒ Seymour Geisser. (1993). “Predictive Inference: An Introduction.” Chapman and Hall/CRC.
    • ABSTRACT: The author's research has been directed towards inference involving observables rather than parameters. In this book, he brings together his views on predictive or observable inference and its advantages over parametric inference. While the book discusses a variety of approaches to prediction including those based on parametric, nonparametric, and nonstochastic statistical models, it is devoted mainly to predictive applications of the Bayesian approach. It not only substitutes predictive analyses for parametric analyses, but it also presents predictive analyses that have no real parametric analogues. It demonstrates that predictive inference can be a critical component of even strict parametric inference when dealing with interim analyses. This approach to predictive inference will be of interest to statisticians, psychologists, econometricians, and sociologists.
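For reference, the Bayesian predictive density at the core of this approach is the posterior predictive distribution; the standard formula (added here for context, not quoted from the book) averages the sampling model over the posterior of the parameter:

```latex
% Bayesian posterior predictive density (standard formula): the density of
% a future observation y_new given observed data y, obtained by averaging
% the sampling model over the posterior of the parameter \theta.
p(y_{\mathrm{new}} \mid y)
  \;=\; \int p(y_{\mathrm{new}} \mid \theta)\, p(\theta \mid y)\, d\theta
```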