# Markov Logic Network (MLN)

A Markov Logic Network (MLN) is a Markov Network defined by a set of 2-tuples [math]\displaystyle{ (F,w) }[/math], where [math]\displaystyle{ F }[/math] is a first-order logic formula and [math]\displaystyle{ w }[/math] is a real-number weight (used as an exponential factor).

**Context:**
- It can be a member of a Markov Logic Network Family.
- It can be thought of as an Undirected Par-Factor Graph.
- It can be used as a Statistical Relational Language that combines First-Order Logic and Markov Networks.
- It can support Soft Logical Constraints which become less probable given conflicting data.
- It can range from being a Tractable Markov Logic Network to being an Intractable Markov Logic Network.
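The semantics sketched in the context above can be illustrated concretely. The following is a minimal, hypothetical sketch (not a real MLN library; all names are invented for illustration): each formula carries a weight, and a possible world's probability is proportional to the exponential of the weighted count of true formula groundings, so worlds violating a soft constraint become less probable rather than impossible.

```python
import itertools
import math

# Toy domain: two constants {A, B}; predicates Smokes(p) and Cancer(p).
people = ["A", "B"]
atoms = [f"Smokes({p})" for p in people] + [f"Cancer({p})" for p in people]

def n_implication(world):
    """Count true groundings of the rule Smokes(p) => Cancer(p)."""
    return sum(1 for p in people
               if (not world[f"Smokes({p})"]) or world[f"Cancer({p})"])

w = 1.5  # weight of the soft rule (illustrative value)

def score(world):
    # Unnormalized weight of a world: exp(w * n(world)).
    return math.exp(w * n_implication(world))

# Enumerate all 2^4 possible worlds and normalize to get probabilities.
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(score(x) for x in worlds)

def prob(world):
    return score(world) / Z

# Worlds violating the rule are penalized, not forbidden: a soft constraint.
```

Note that a world containing a smoker without cancer still has nonzero probability; it is merely down-weighted by a factor of [math]\displaystyle{ e^{w} }[/math] per violated grounding.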

**Example(s):**
**Counter-Example(s):**
**See:** Markov Logic; Statistical Relational Learning; Markov Random Field; Log-Linear Model; Graphical Model; Satisfiability; Inductive Logic Programming; Knowledge-based Model Construction; Markov Chain Monte Carlo; Pseudo-Likelihood; Bayesian Network; Graphical Model Learning; Markov Chain.

## References

### 2017

- (Sammut & Webb, 2017) ⇒ Claude Sammut. (2017). "Markov Chain Monte Carlo". In: Sammut & Webb (eds.), Encyclopedia of Machine Learning and Data Mining. DOI:10.1007/978-1-4899-7687-1_952.
- QUOTE: A Markov Chain Monte Carlo (MCMC) algorithm is a method for sequential sampling in which each new sample is drawn from the neighborhood of its predecessor. This sequence forms a Markov chain, since the transition probabilities between sample values are only dependent on the last sample value. MCMC algorithms are well suited to sampling in high-dimensional spaces.
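The sampling scheme described in the quote can be sketched with a basic Metropolis sampler (an illustrative sketch, not code from the cited reference; the target density and step size are assumptions): each proposal is drawn from the neighborhood of the previous sample, and acceptance depends only on the current state, so the samples form a Markov chain.

```python
import random
import math

def target(x):
    """Unnormalized target density: a standard normal (assumed for illustration)."""
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        # Propose from the neighborhood of the predecessor.
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples
```

Because the acceptance test only ever consults the current sample, the transition probabilities depend solely on the last value, which is exactly the Markov property the quote describes.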

### 2013

- (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Markov_logic_network
- QUOTE: A **Markov logic network** (or MLN) is a probabilistic logic which applies the ideas of a Markov network to first-order logic, enabling uncertain inference. Markov logic networks generalize first-order logic, in the sense that, in a certain limit, all unsatisfiable statements have a probability of zero, and all tautologies have probability one.
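The limit behavior described above follows from the MLN's log-linear joint distribution (Richardson & Domingos, 2006): [math]\displaystyle{ P(X=x) = \frac{1}{Z} \exp\!\left(\sum_i w_i\, n_i(x)\right) }[/math], where [math]\displaystyle{ n_i(x) }[/math] is the number of true groundings of formula [math]\displaystyle{ F_i }[/math] in world [math]\displaystyle{ x }[/math] and [math]\displaystyle{ Z }[/math] is the normalizing partition function. As all weights [math]\displaystyle{ w_i \to \infty }[/math], each formula becomes a hard constraint: worlds violating any formula receive probability zero, recovering first-order logic in the limit.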

### 2009a

- (Domingos & Lowd, 2009) ⇒ Pedro Domingos, and Daniel Lowd. (2009). “Markov Logic: An Interface Layer for Artificial Intelligence." Morgan & Claypool. doi:10.2200/S00206ED1V01Y200907AIM007

### 2009b

- (Kok & Domingos, 2009) ⇒ Stanley Kok, and Pedro Domingos. (2009). “Learning Markov Logic Network Structure via Hypergraph Lifting.” In: Proceedings of ICML 2009.

### 2006

- (Richardson & Domingos, 2006) ⇒ Matthew Richardson, and Pedro Domingos. (2006). “Markov Logic Networks.” In: Machine Learning, 62. doi:10.1007/s10994-006-5833-1.
- QUOTE: We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause).