Approximate Bayesian Inference Algorithm


An approximate Bayesian inference algorithm is a Bayesian inference algorithm that is also an approximation algorithm: it estimates posterior quantities rather than computing them exactly, trading accuracy for tractability when exact inference is intractable.

  • (Mott & Lester, 2006) ⇒ Bradford W. Mott, and James C. Lester. (2006). “U-director: A decision-theoretic narrative planning architecture for storytelling environments.” In: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems. doi:10.1145/1160633.1160808
    • QUOTE: Because exact inference in Bayesian networks is known to be extraordinarily inefficient (in the worst case NP-hard), U-DIRECTOR exploits recent advances in approximate Bayesian inference via stochastic sampling. The accuracy of these methods depends on the number of samples used. Moreover, stochastic sampling methods typically have an “anytime” property which is particularly attractive for real-time applications. … a performance analysis was conducted to measure the network update time using an exact Bayesian inference algorithm (Clustering [17]) and two approximate Bayesian inference algorithms (EPIS-BN [36] and Likelihood weighting [31]).
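The likelihood weighting method named in the quote can be sketched on a toy network. The two-node Rain → WetGrass model below is a hypothetical illustration (not from the paper); the "anytime" property shows up in that more samples simply tighten the estimate:

```python
import random

# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# Illustrates likelihood weighting, one of the approximate
# stochastic-sampling algorithms the quote mentions.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass=True | Rain)

def likelihood_weighting(n_samples, evidence_wet=True, seed=0):
    """Estimate P(Rain=True | WetGrass=evidence_wet) by likelihood weighting."""
    rng = random.Random(seed)
    weighted_true = total = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN        # sample the non-evidence variable
        p_evid = P_WET_GIVEN_RAIN[rain]     # weight = likelihood of the evidence
        w = p_evid if evidence_wet else 1.0 - p_evid
        total += w
        if rain:
            weighted_true += w
    return weighted_true / total

# Exact answer by Bayes' rule: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) ~= 0.6923;
# the sampled estimate converges toward it as n_samples grows.
estimate = likelihood_weighting(100_000)
```

Because each sample contributes independently, the loop can be interrupted at any point and still return a usable estimate, which is the anytime behavior the quote highlights.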


  • (Bishop, 2006) ⇒ Christopher M. Bishop. (2006). “Pattern Recognition and Machine Learning.” Springer, Information Science and Statistics. ISBN:0387310738
    • QUOTE: A central task in the application of probabilistic models is the evaluation of the posterior distribution p(Z|X) of the latent variables Z given the observed (visible) data variables X, and the evaluation of expectations computed with respect to this distribution. The model might also contain some deterministic parameters, which we will leave implicit for the moment, or it may be a fully Bayesian model in which any unknown parameters are given prior distributions and are absorbed into the set of latent variables denoted by the vector Z. For instance, in the EM algorithm we need to evaluate the expectation of the complete-data log likelihood with respect to the posterior distribution of the latent variables. … In such situations, we need to resort to approximation schemes, and these fall broadly into two classes, according to whether they rely on stochastic or deterministic approximations.
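The stochastic class Bishop describes can be illustrated with a minimal Monte Carlo sketch. The Beta posterior and the expectation chosen below are illustrative assumptions (not from the book): a conjugate coin-bias posterior where the exact answer is available in closed form, so the sampling estimate can be checked against it:

```python
import random

# Illustrative assumption: a Beta(8, 4) posterior for a coin's bias theta
# (e.g. a Beta(1, 1) prior updated with 7 heads and 3 tails). We estimate
# the posterior expectation E[theta^2] by drawing samples from the
# posterior -- the "stochastic" approximation class Bishop mentions.
a, b = 8.0, 4.0

def mc_expectation(f, n_samples, seed=0):
    """Monte Carlo estimate of E[f(theta)] under the Beta(a, b) posterior."""
    rng = random.Random(seed)
    return sum(f(rng.betavariate(a, b)) for _ in range(n_samples)) / n_samples

est = mc_expectation(lambda t: t * t, 200_000)
# Closed-form check: E[theta^2] = a(a+1) / ((a+b)(a+b+1)) for Beta(a, b).
exact = (a * (a + 1)) / ((a + b) * (a + b + 1))
```

For this toy posterior the exact value is computable, but the sampling estimator works unchanged when the posterior is only known up to samples, which is precisely when stochastic approximation earns its keep.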


  • (Ghahramani, 2004) ⇒ Zoubin Ghahramani. (2004). “Bayesian Methods in Machine Learning.” Seminar Talk, Oct 18 2004 at University of Birmingham.
    • QUOTE: Bayesian methods can be applied to a wide range of probabilistic models commonly used in machine learning and pattern recognition. The challenge is to discover approximate inference methods that can deal with complex models and large scale data sets in reasonable time. In the past few years Variational Bayesian (VB) approximations have emerged as an alternative to MCMC methods. …
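The Variational Bayesian alternative the talk refers to can be sketched with coordinate-ascent updates for the standard mean-field treatment of a Gaussian with unknown mean and precision (q(mu) Normal, q(tau) Gamma). The priors and data settings below are illustrative assumptions, not from the talk:

```python
import random
import statistics

def vb_gaussian(data, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB for x_i ~ N(mu, 1/tau) with conjugate priors
    mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0).
    Returns the variational posterior means E[mu] and E[tau]."""
    n = len(data)
    xbar = sum(data) / n
    e_tau = a0 / b0                       # initial guess for E[tau]
    a_n = a0 + (n + 1) / 2.0              # shape update is fixed across iterations
    for _ in range(iters):
        # Update q(mu) = Normal(mu_n, 1/lam_n).
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = (lam0 + n) * e_tau
        # Update q(tau) = Gamma(a_n, b_n), using
        # E[(x - mu)^2] = (x - mu_n)^2 + 1/lam_n under q(mu).
        sq = sum((x - mu_n) ** 2 + 1.0 / lam_n for x in data)
        b_n = b0 + 0.5 * (sq + lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n))
        e_tau = a_n / b_n
    return mu_n, a_n / b_n

# Illustrative run on synthetic data: true mean 5.0, true precision 1/4.
rng = random.Random(0)
data = [rng.gauss(5.0, 2.0) for _ in range(500)]
e_mu, e_tau = vb_gaussian(data)
```

Unlike an MCMC chain, the loop above is deterministic given the data: each pass tightens the factorized approximation, and a few dozen iterations suffice, which is the speed argument usually made for VB over sampling.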


  • (Yuan & Druzdzel, 2003) ⇒ C. Yuan, and M. Druzdzel. (2003). “An Importance Sampling Algorithm Based on Evidence Pre-Propagation.” In: Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence.
  • (Tzeras & Hartmann, 1993) ⇒ Kostas Tzeras, and Stephan Hartmann. (1993). “Automatic Indexing Based on Bayesian Inference Networks.” In: Proceedings of the ACM SIGIR 1993 Conference. doi:10.1145/160688.160691


  • (Shachter & Peot, 1990) ⇒ R. Shachter, and M. Peot. (1990). “Simulation approaches to general probabilistic inference on belief networks.” In: Proceedings of the Fifth Annual Conference on Uncertainty in Artificial Intelligence.


  • (Kass & Steffey, 1989) ⇒ R. Kass, and D. Steffey. (1989). “Approximate Bayesian Inference in Conditionally Independent Hierarchical Models (parametric empirical Bayes models).” In: Journal of the American Statistical Association, 84(407).