1992 ABayesianMethodForTheIndOfProbNets


Subject Headings: Bayesian Belief Network, Bayesian Network Training.

Notes

Cited by

  • ~2,298 …


Quotes

Author Keywords

probabilistic networks, Bayesian belief networks, machine learning, induction

Abstract

This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.

1. Introduction

In this paper, we present a Bayesian method for constructing a probabilistic network from a database of records, which we call cases. Once constructed, such a network can provide insight into probabilistic dependencies that exist among the variables in the database. One application is the automated discovery of dependency relationships. The computer program searches for a probabilistic-network structure that has a high posterior probability given the database, and outputs the structure and its probability. A related task is computer-assisted hypothesis testing: The user enters a hypothetical structure of the dependency relationships among a set of variables, and the program calculates the probability of the structure given a database of cases on the variables.
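
To make the hypothesis-testing task concrete, the sketch below scores a hypothesized structure against a database of cases using the closed-form marginal likelihood P(D | BS) that the paper derives for discrete variables, complete data, and uniform parameter priors; the posterior P(BS | D) is then proportional to P(BS) P(D | BS). The data layout, function names, and the small example at the end are illustrative assumptions, not taken from the paper.

    # Minimal sketch: log P(D | BS) under the paper's counting formula
    #   prod_i prod_j [ (r_i - 1)! / (N_ij + r_i - 1)! ] prod_k N_ijk!
    # using lgamma, since Gamma(m) = (m - 1)!. Data layout is hypothetical.
    from collections import Counter
    from math import lgamma

    def log_marginal_likelihood(cases, parents, arity):
        """cases:   list of dicts mapping variable name -> discrete value
        parents: dict mapping variable name -> tuple of parent names (BS)
        arity:   dict mapping variable name -> number of values r_i
        """
        total = 0.0
        for xi, pi in parents.items():
            r = arity[xi]
            # N_ijk: cases with parent instantiation j and x_i = value k.
            n_ijk = Counter((tuple(c[p] for p in pi), c[xi]) for c in cases)
            # N_ij: cases with parent instantiation j (unseen j contribute 0).
            n_ij = Counter(tuple(c[p] for p in pi) for c in cases)
            for nij in n_ij.values():
                total += lgamma(r) - lgamma(nij + r)   # (r-1)!/(N_ij+r-1)!
            for nijk in n_ijk.values():
                total += lgamma(nijk + 1)              # N_ijk!
        return total

    # Hypothetical usage: under equal structure priors, the higher log score
    # identifies the structure with the higher posterior probability.
    cases = [
        {"x1": "present", "x2": "present"},
        {"x1": "present", "x2": "present"},
        {"x1": "absent", "x2": "absent"},
    ]
    arity = {"x1": 2, "x2": 2}
    print(log_marginal_likelihood(cases, {"x1": (), "x2": ("x1",)}, arity))
    print(log_marginal_likelihood(cases, {"x1": (), "x2": ()}, arity))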

A Bayesian belief-network structure, BS, is augmented by conditional probabilities, BP, to form a Bayesian belief network B. Thus, B = (BS, BP). For brevity, we call B a belief network. For each node in a belief-network structure, there is a conditional-probability function that relates this node to its immediate predecessors (parents). We shall use [math]\displaystyle{ \pi_i }[/math] to denote the parent nodes of variable xi. If a node has no parents, then a prior-probability function, P(xi), is specified. A set of probabilities is shown in table 2 for the belief-network structure in figure 1. We used the probabilities in table 2 to generate the cases in table 1 by applying Monte Carlo simulation.
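
As a concrete illustration of B = (BS, BP) and of generating cases by Monte Carlo simulation, the sketch below represents a three-node chain of binary present/absent variables and forward-samples cases from it. The chain structure and the probability values are placeholders, not the paper's figure 1 or table 2.

    # Hypothetical belief network B = (BS, BP) over present/absent variables,
    # plus forward (Monte Carlo) sampling of cases from it.
    import random

    # BS: each variable mapped to the tuple of its parents pi_i.
    BS = {"x1": (), "x2": ("x1",), "x3": ("x2",)}

    # BP: P(x_i = present | parent instantiation), keyed by parent values.
    BP = {
        "x1": {(): 0.6},
        "x2": {("present",): 0.8, ("absent",): 0.3},
        "x3": {("present",): 0.9, ("absent",): 0.2},
    }

    def sample_case(bs, bp):
        """Draw one case, sampling each node after its parents (ancestral order)."""
        case = {}
        for xi in bs:  # insertion order of BS is already a topological order
            parent_vals = tuple(case[p] for p in bs[xi])
            p_present = bp[xi][parent_vals]
            case[xi] = "present" if random.random() < p_present else "absent"
        return case

    cases = [sample_case(BS, BP) for _ in range(10)]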

We shall use the term conditional probability to refer to a probability statement, such as P(x2 = present | x1 = present). We use the term conditional-probability assignment to denote a numerical assignment to a conditional probability, as, for example, the assignment P(x2 = present | x1 = present) = 0.8. The network structure BS1 in figure 1 and the probabilities BP1 in table 2 together define a belief network which we denote as B1. Belief networks are capable of representing the probabilities over any discrete sample space: The probability of any sample point in that space can be computed from the probabilities in the belief network. The key feature of belief networks is their explicit representation of the conditional independence and dependence among events. In particular, investigators have shown (Kiiveri, Speed, & Carlin, 1984; Pearl, 1988; Shachter, 1986) that the joint probability of any particular instantiation of all [math]\displaystyle{ n }[/math] variables in a belief network can be calculated as follows:


[math]\displaystyle{ P(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid \pi_i) }[/math]
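
The factorization can be read directly as code: multiply, for each variable, its conditional probability given the instantiated values of its parents. The sketch below reuses the same illustrative chain network as above (again, a placeholder, not the paper's example).

    # Joint probability of one complete instantiation via the factorization
    # P(x_1, ..., x_n) = prod_i P(x_i | pi_i). Network values are hypothetical.
    BS = {"x1": (), "x2": ("x1",), "x3": ("x2",)}
    BP = {
        "x1": {(): 0.6},
        "x2": {("present",): 0.8, ("absent",): 0.3},
        "x3": {("present",): 0.9, ("absent",): 0.2},
    }

    def joint_probability(instantiation, bs, bp):
        """Multiply each variable's conditional probability given its parents."""
        prob = 1.0
        for xi, parents in bs.items():
            p_present = bp[xi][tuple(instantiation[p] for p in parents)]
            prob *= p_present if instantiation[xi] == "present" else 1.0 - p_present
        return prob

    # e.g. P(x1=present, x2=present, x3=absent) = 0.6 * 0.8 * (1 - 0.9) = 0.048
    print(joint_probability({"x1": "present", "x2": "present", "x3": "absent"}, BS, BP))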

Author: Gregory F. Cooper, Edward Herskovits
Title: A Bayesian Method for the Induction of Probabilistic Networks from Data
Journal: Machine Learning (ML)
URL: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.88.4436
DOI: 10.1007/BF00994110
Year: 1992