Directed Conditional Graphical Model Family

A Directed Conditional Graphical Model Family is a conditional graphical model family that is a directed graphical model family.



References

2017

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Bayesian_network Retrieved:2017-6-23.
    • A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

      Formally, Bayesian networks are DAGs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (there is no path from one of the variables to the other in the Bayesian network) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if [math]\displaystyle{ m }[/math] parent nodes represent [math]\displaystyle{ m }[/math] Boolean variables then the probability function could be represented by a table of [math]\displaystyle{ 2^m }[/math] entries, one entry for each of the [math]\displaystyle{ 2^m }[/math] possible combinations of its parents being true or false. Similar ideas may be applied to undirected, and possibly cyclic, graphs; such as Markov networks.

      Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
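
The quoted passage describes each node's probability function as a table with [math]\displaystyle{ 2^m }[/math] entries over its [math]\displaystyle{ m }[/math] Boolean parents, and the use of such a network for diagnostic queries (computing disease probabilities from symptoms). The following minimal Python sketch is not from the cited source; the two-node structure Disease → Symptom and all probability values are illustrative assumptions. It represents each CPT as an explicit table keyed by parent assignments and answers the diagnostic query by enumerating the joint distribution.

```python
# A minimal sketch, not taken from the cited source: a two-node Bayesian
# network Disease -> Symptom over Boolean variables. Each node's conditional
# probability table (CPT) has 2^m rows for its m Boolean parents, as the
# quoted passage describes; the probability values here are purely illustrative.

# CPTs: map an assignment of the parent variables to P(variable = True | parents).
p_disease = {(): 0.01}                       # P(Disease=True); Disease has no parents
p_symptom = {(True,): 0.90, (False,): 0.05}  # P(Symptom=True | Disease)

def joint(d, s):
    """Joint probability P(Disease=d, Symptom=s) via the chain rule over the DAG."""
    pd = p_disease[()] if d else 1 - p_disease[()]
    ps = p_symptom[(d,)] if s else 1 - p_symptom[(d,)]
    return pd * ps

# The diagnostic query from the quote, P(Disease=True | Symptom=True),
# computed by enumerating the joint distribution and normalizing.
numerator = joint(True, True)
evidence = sum(joint(d, True) for d in (True, False))
print(f"P(Disease=True | Symptom=True) = {numerator / evidence:.3f}")  # ~0.154
```

With these illustrative numbers, observing the symptom raises the probability of the disease from the 0.01 prior to roughly 0.154; exact enumeration like this scales exponentially, which is why the efficient inference algorithms mentioned above matter in practice.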

2003

  • (Korb & Nicholson, 2003) ⇒ Kevin B. Korb, and Ann E. Nicholson. (2003). “Bayesian Artificial Intelligence.” CRC Press.
    • QUOTE: Bayesian networks (BNs) are graphical models for reasoning under uncertainty, where the nodes represent variables (discrete or continuous) and arcs represent direct connections between them. These direct connections are often causal connections. In addition, BNs model the quantitative strength of the connections between variables, allowing probabilistic beliefs about them to be updated automatically as new information becomes available. … The only constraint on the arcs allowed in a BN is that there must not be any directed cycles.
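
Korb & Nicholson's one structural constraint, that the arcs must not form any directed cycle, can be checked with a topological sort. The sketch below is illustrative and not from the cited text; the arc lists and the use of Kahn's algorithm are assumptions made for the example.

```python
# A minimal sketch, not from Korb & Nicholson: testing the quoted structural
# constraint that a Bayesian network's arcs must not contain any directed cycle.
# Kahn's algorithm tries to order the nodes topologically; every node can be
# ordered if and only if the directed graph is acyclic.
from collections import defaultdict, deque

def is_dag(arcs):
    """Return True if the directed graph given as (parent, child) arcs is acyclic."""
    children = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for parent, child in arcs:
        children[parent].append(child)
        indegree[child] += 1
        nodes.update((parent, child))
    queue = deque(n for n in nodes if indegree[n] == 0)
    ordered = 0
    while queue:
        node = queue.popleft()
        ordered += 1
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return ordered == len(nodes)

print(is_dag([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]))  # True: a valid BN structure
print(is_dag([("A", "B"), ("B", "C"), ("C", "A")]))               # False: contains a directed cycle
```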

1996

  • (Buntine, 1996) ⇒ Wray Buntine. (1996). “A Guide to the Literature on Learning Probabilistic Networks from Data.” In: IEEE Transactions on Knowledge and Data Engineering, 8.
