Maximum Entropy-based Learning Algorithm

From GM-RKB

A Maximum Entropy-based Learning Algorithm is a supervised discriminative classification algorithm that selects, among all distributions consistent with the observed feature constraints, the one with maximum entropy (i.e., the least biased estimate given the available information).
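As a minimal sketch (not any reference implementation from GM-RKB), a maximum entropy classifier over discrete classes is equivalent to multinomial logistic regression: class probabilities take softmax form, and the weights are fit by maximizing the log-likelihood. The data and hyperparameters below are illustrative assumptions.

```python
# Sketch of a maximum entropy (softmax / multinomial logistic) classifier
# trained by plain gradient ascent on the mean log-likelihood.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(X, y, n_classes, lr=0.5, epochs=500):
    """Fit weights W so that P(c | x) = softmax(x @ W)[c]."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]            # one-hot encoding of the labels
    for _ in range(epochs):
        P = softmax(X @ W)
        W += lr * X.T @ (Y - P) / n     # gradient of the mean log-likelihood
    return W

# Toy data: two well-separated 1-D classes, with a bias feature appended.
X = np.array([[0.0, 1.0], [0.2, 1.0], [1.8, 1.0], [2.0, 1.0]])
y = np.array([0, 0, 1, 1])
W = train_maxent(X, y, n_classes=2)
pred = softmax(X @ W).argmax(axis=1)
```

With zero weights the model starts at the uniform (maximum entropy) prediction and departs from it only as far as the training evidence demands.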



References

2004

  • (Caticha, 2004) ⇒ Ariel Caticha. (2004). “Relative Entropy and Inductive Inference.” In: AIP Conference Proceedings, Vol. 707, No. 1, pp. 75-96. American Institute of Physics.
    • QUOTE: We discuss how the method of maximum entropy, MaxEnt, can be extended beyond its original scope, as a rule to assign a probability distribution, to a full‐fledged method for inductive inference. ...

      ... The method of maximum entropy, MaxEnt, as conceived by Jaynes [1], is a method to assign probabilities on the basis of partial information of a certain kind. The type of information in question is called testable information and consists in the specification of the family of acceptable distributions. The information is “testable” in the sense that one should be able to test whether any candidate distribution belongs or not to the family.

      The purpose of this paper is to discuss how MaxEnt can be extended beyond its original scope, as a rule to assign a probability distribution, to a full-fledged method for inductive inference. To distinguish it from MaxEnt the extended method will henceforth be abbreviated as ME. [2] ...

1957

  • (Jaynes, 1957) ⇒ E. T. Jaynes. (1957). “Information Theory and Statistical Mechanics.” In: Physical Review, 106(4).
    • QUOTE: Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum entropy estimate. It is the least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information.
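Jaynes's maximum entropy estimate can be illustrated with his well-known die example (the specific constraint value below is an illustrative assumption, not taken from this page): among all distributions over faces 1..6 with mean 4.5, the maximum entropy distribution has the exponential form p_i ∝ exp(λ·i), and the Lagrange multiplier λ can be found numerically so that the constraint holds.

```python
# Sketch: maximum entropy distribution over die faces 1..6 subject to
# the testable constraint E[face] = 4.5.  The solution is p_i ∝ exp(lam*i);
# we solve for lam by bisection, since the mean is increasing in lam.
import numpy as np

faces = np.arange(1, 7)

def dist(lam):
    """Exponential-family (Gibbs) distribution for multiplier lam."""
    w = np.exp(lam * faces)
    return w / w.sum()

def mean(lam):
    return dist(lam) @ faces

# mean(0) = 3.5 (uniform die) and mean(5) is close to 6, so the root
# for a target mean of 4.5 lies in [0, 5].
lo, hi = 0.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(mid) < 4.5:
        lo = mid
    else:
        hi = mid
p = dist(lo)
```

The result is maximally noncommittal in Jaynes's sense: it satisfies the mean constraint exactly while staying as close to uniform as that constraint allows.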