Progol Algorithm

A Progol Algorithm is an inductive logic programming algorithm that combines inverse entailment with general-to-specific search via a refinement graph.


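The loop below is a minimal Python sketch of that scheme, for orientation only: `build_bottom_clause`, `refinements`, and `covers` are hypothetical placeholders for mode-directed bottom-clause construction, the refinement operator, and coverage testing, and the scoring is a simplified form of Progol's compression measure rather than the system's actual C implementation.

<syntaxhighlight lang="python">
# Illustrative sketch of Progol's overall scheme (not the actual C system).
# A clause is modelled as a list of literals, head first; build_bottom_clause,
# refinements, and covers are hypothetical callables supplied by the caller.
import heapq
import itertools

def compression(clause, pos, neg, covers):
    """Simplified compression score: positives covered minus negatives
    covered minus clause length (cf. Progol's f = p - (n + c + h))."""
    p = sum(1 for e in pos if covers(clause, e))
    n = sum(1 for e in neg if covers(clause, e))
    return p - n - len(clause)

def best_clause(bottom, pos, neg, covers, refinements):
    """A*-like general-to-specific search over clauses subsuming `bottom`:
    start from the head alone and specialise by adding body literals
    drawn from the bottom clause."""
    tie = itertools.count()                 # heap tie-breaker
    start = bottom[:1]                      # most general clause: head only
    best, best_score = start, compression(start, pos, neg, covers)
    frontier = [(-best_score, next(tie), start)]
    while frontier:
        neg_score, _, clause = heapq.heappop(frontier)
        if -neg_score > best_score:
            best, best_score = clause, -neg_score
        for child in refinements(clause, bottom):   # each child one literal longer
            heapq.heappush(frontier,
                           (-compression(child, pos, neg, covers),
                            next(tie), child))
    return best

def progol(pos, neg, background, covers, build_bottom_clause, refinements):
    """Covering loop: repeatedly generalise one uncovered positive example
    and remove the positives the chosen clause covers."""
    theory, remaining = [], list(pos)
    while remaining:
        e = remaining[0]
        bottom = build_bottom_clause(e, background)  # most specific clause for e
        h = best_clause(bottom, remaining, neg, covers, refinements)
        if not covers(h, e):        # no compressive generalisation found:
            h = bottom              # fall back to the example's own clause
        theory.append(h)
        remaining = [x for x in remaining if not covers(h, x)]
    return theory
</syntaxhighlight>

The real system bounds and prunes this search and only retains clauses with positive compression; those refinements are omitted here for brevity.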
== References ==

=== 2019a ===

=== 2019b ===
* (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/PROGOL Retrieved:2019-11-13.
** QUOTE: Progol is Stephen Muggleton's implementation of inductive logic programming used in computer science that combines "Inverse Entailment" with "general-to-specific search" through a refinement graph. "Inverse Entailment" is used with mode declarations to derive the most-specific clause within the mode language which entails a given example. This clause is used to guide a refinement-graph search. <P> Unlike the searches of Ehud Shapiro's model inference system (MIS) and J. Ross Quinlan's FOIL, Progol's search is efficient and has a provable guarantee of returning a solution having the maximum "compression" in the search-space. To do so it performs an admissible A*-like search, guided by compression, over clauses which subsume the most specific clause. <P> Progol deals with noisy data by using the "compression measure" to trade off the description of errors against the hypothesis description length. Progol allows arbitrary Prolog programs as background knowledge and arbitrary definite clauses as examples. Despite this, bench-tests show that the efficiency of Progol compares favourably with FOIL.

=== 2010 ===

=== 2003 ===
* (Ray et al., 2003) ⇒ [[Oliver Ray]], [[Krysia Broda]], and [[Alessandra Russo]] (2003, September). "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.6602&rep=rep1&type=pdf Hybrid Abductive Inductive Learning: A Generalisation Of Progol]". In: International Conference on Inductive Logic Programming. [https://doi.org/10.1007/978-3-540-39917-9_21 DOI:10.1007/978-3-540-39917-9_21].
** QUOTE: The [[Muggleton Progol System|Progol system of Muggleton]] [[[#1995|10]]] is a [[state-of-the-art]] and widely applied [[ILP system]] that has been successful in significant [[real world application]]s. [[Progol5]] [[[#2000|7]]] is the latest [[system]] in the [[Progol Algorithm|Progol]] family, which is based on the [[inference]] method of [[Bottom Generalisation]] ...


=== 2001a ===
* (Muggleton & Firth, 2001) ⇒ [[Stephen Muggleton]], and [[John Firth]] (2001). "[http://kameken.clique.jp/AI2007/tutorial4.4.pdf Relational Rule Induction with CProgol 4.4: A Tutorial Introduction]". In: Relational data mining (pp. 160-188). Springer, Berlin, Heidelberg. [https://doi.org/10.1007/978-3-662-04599-2_7 DOI: 10.1007/978-3-662-04599-2_7]
** QUOTE: This chapter describes the theory and use of [[CPROGOL4.4]], a [[publicly distributed version]] of the [[PROGOL]] family of [[ILP system]]s. In order to follow the examples in this chapter, it is assumed that the reader is familiar with the aims of [[ILP]], [[Horn clause logic]] and [[Prolog notation]] for [[clause]]s. It is also assumed that the [[reader]] is familiar with the concepts of [[deductive logic]] in order to understand the [[theoretical foundation]]s of [[PROGOL]] (...)
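As a concrete illustration of the mode-declaration machinery the tutorial covers: CProgol takes these declarations in Prolog-like syntax (e.g. <code>:- modeh(1, daughter(+person,+person))?</code> and <code>:- modeb(*, parent(+person,-person))?</code>), where <code>+</code> marks an input variable that must already occur in the clause under construction, <code>-</code> an output variable that may introduce a new one, and <code>#</code> a ground constant. The Python rendering below is a simplification for exposition, not part of any Progol API.

<syntaxhighlight lang="python">
# Hypothetical Python encoding of Progol-style mode declarations.
from dataclasses import dataclass

@dataclass
class Mode:
    recall: int | str      # bound on instantiations to try ('*' = unbounded)
    predicate: str
    markers: tuple         # one of '+', '-', '#' per argument position

def admissible(args, mode, clause_vars):
    """A candidate body literal is admissible only if every '+' (input)
    argument reuses a variable already present in the clause."""
    return all(marker != '+' or a in clause_vars
               for marker, a in zip(mode.markers, args))

# Example: modeb(*, parent(+person, -person)) with clause variables {A, B}:
parent_mode = Mode('*', 'parent', ('+', '-'))
print(admissible(('A', 'C'), parent_mode, {'A', 'B'}))  # True:  A is an input var
print(admissible(('D', 'C'), parent_mode, {'A', 'B'}))  # False: D not yet in clause
</syntaxhighlight>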


=== 2001b ===
* (Ozaki & Furukawa, 2001) ⇒ [[Tomonobu Ozaki]], and [[Koichi Furukawa]] (2001, September). "Application Of Pruning Techniques For Propositional Learning To Progol". In: International Conference on Inductive Logic Programming. [https://doi.org/10.1007/3-540-44797-0_17 DOI:10.1007/3-540-44797-0_17]
** QUOTE: By introducing [[unordered search]] to [[Progol Algorithm|Progol]], we can incorporate the effective [[pruning mechanism]] into [[Progol Algorithm|Progol]], which is basically independent of the order of [[literal]] in [[MSH]].


=== 2000 ===
* (Muggleton & Bryant, 2000) ⇒ [[Stephen H. Muggleton]], and [[Christopher H. Bryant]] (2000, July). "[https://core.ac.uk/download/pdf/104787.pdf Theory Completion Using Inverse Entailment]". In: International conference on inductive logic programming. [https://doi.org/10.1007/3-540-44960-4_8 DOI:10.1007/3-540-44960-4_8].
** QUOTE: [[Progol5.0]] is tested on two different [[data-set]]s. The first [[dataset]] involves a grammar which translates numbers to their representation in [[English]]. The second [[dataset]] involves hypothesising the function of unknown genes within a network of metabolic pathways. On both [[dataset]]s near complete [[recovery]] of performance is achieved after relearning when randomly chosen portions of [[background knowledge]] are removed. [[Progol5.0]]’s [[running time]]s for experiments in this paper were typically under 6 seconds on a standard laptop [[PC]].


=== 1998 ===
* (Eineborg & Lindberg, 1998) ⇒ [[Martin Eineborg]], and [[Nikolaj Lindberg]] (1998, July). "[https://link.springer.com/chapter/10.1007%2FBFb0027315 Induction Of Constraint Grammar-Rules Using Progol]". In: International Conference on Inductive Logic Programming (pp. 116-124). Springer, Berlin, Heidelberg. [https://doi.org/10.1007/BFb0027315 DOI:10.1007/BFb0027315]
** QUOTE: The paper reports a pilot study aiming at [[Rule Induction Task|inducing rule]]s for [[WSD|disambiguating word]]s with different possible [[part of speech]] readings in unrestricted [[Swedish]] text. The [[rule]]s, which are inspired by [[Constraint Grammar]], are learnt using the [[Progol inductive logic programming system]]. The [[training data]] is sampled from the [[Part-Of-Speech Tagging|part of speech tagged]] one million word [[Stockholm-Umeå Corpus]]. The results show that the [[Rule Induction|induction]] of [[disambiguation]] [[rule]]s using [[Progol Algorithm|Progol]] is a realistic way of [[learning rule]]s of good quality with a minimum of manual effort. When tested on unseen data, 97% of the words retain the correct reading after [[tagging]] leaving an ambiguity of 1.15 readings per word.


=== 1997 ===
* (Cussens, 1997) ⇒ [[James Cussens]] (1997, September). "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54.7266&rep=rep1&type=pdf Part-Of-Speech Tagging Using Progol]". In: International Conference on Inductive Logic Programming. [https://doi.org/10.1007/3540635149_38 DOI:10.1007/3540635149_38]
** QUOTE: A [[Tagging System|system for ‘tagging’]] words with their [[part-of-speech (POS) tag]]s is constructed. The system has two components: a [[lexicon]] containing the set of possible [[POS]] [[tag]]s for a given word, and [[rule]]s which use a word's context to eliminate possible tags for a word. The [[Inductive Logic Programming (ILP) system]] [[Progol Algorithm|Progol]] is used to [[Rule Induction|induce these rule]]s in the form of [[definite clause]]s. The final theory contained 885 [[clause]]s. For background knowledge, [[Progol Algorithm|Progol]] uses a simple [[grammar]], where the [[programming language tag|tag]]s are [[terminal]]s and [[programming language predicate|predicate]]s such as [[noun (noun phrase)]] are [[non-terminal]]s. [[Progol Algorithm|Progol]] was altered to allow the caching of information about clauses generated during the [[induction process]] which greatly increased [[efficiency]]. The system achieved a per-word accuracy of 96.4% on known words drawn from sentences without [[quotation mark]]s (...)
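To make the two-component architecture concrete, here is a small schematic in Python. The lexicon entries and the single elimination rule are invented for illustration; Cussens' actual system used a full lexicon and 885 induced definite clauses as elimination rules.

<syntaxhighlight lang="python">
# Schematic of the tagger described above: a lexicon proposes candidate POS
# tags, then context rules eliminate readings.
lexicon = {
    "the":  {"det"},
    "dog":  {"noun"},
    "runs": {"noun", "verb"},    # ambiguous: plural noun or 3sg verb
}

def no_verb_after_det(left_tags, candidate, right_tags):
    """Illustrative elimination rule: drop a 'verb' reading immediately
    after a determiner."""
    return candidate == "verb" and "det" in left_tags

RULES = [no_verb_after_det]

def tag(sentence):
    """Look up candidate tags, then let each rule eliminate readings,
    never removing a word's last remaining tag."""
    cands = [set(lexicon.get(w, {"unknown"})) for w in sentence]
    for i, tags in enumerate(cands):
        left = cands[i - 1] if i > 0 else set()
        right = cands[i + 1] if i + 1 < len(cands) else set()
        for rule in RULES:
            for t in list(tags):
                if len(tags) > 1 and rule(left, t, right):
                    tags.discard(t)
    return cands

print(tag(["the", "runs"]))   # [{'det'}, {'noun'}] -- 'verb' eliminated
print(tag(["the", "dog"]))    # [{'det'}, {'noun'}]
</syntaxhighlight>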


=== 1995 ===
* ([[Muggleton, 1995]]) ⇒ [[Stephen Muggleton]] (1995). "[https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.1630&rep=rep1&type=pdf Inverse Entailment And Progol]". New Generation Computing, 13(3-4), 245-286. [https://doi.org/10.1007/BF03037227 DOI:10.1007/BF03037227]
** QUOTE: This paper firstly provides a re-appraisal of the development of techniques for [[inverting deduction]], secondly introduces [[Mode-Directed Inverse Entailment (MDIE)]] as a generalisation and enhancement of previous approaches and thirdly describes an implementation of [[MDIE]] in the [[Progol system]]. [[Progol Algorithm|Progol]] is implemented in [[C]] and available by [[anonymous ftp]]. The re-assessment of previous techniques in terms of [[inverse implication]] leads to new [[result]]s for [[learning]] from [[positive data]] and [[inverting implication]] between [[pair]]s of [[clause]]s.
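For orientation, the core construction behind [[Mode-Directed Inverse Entailment (MDIE)]] can be stated compactly in the single-clause, Horn setting the paper works in. Any hypothesis <math>H</math> explaining example <math>E</math> given background knowledge <math>B</math> satisfies, by contraposition,

: <math>B \wedge H \models E \quad\Longleftrightarrow\quad B \wedge \neg E \models \neg H .</math>

Letting <math>\neg\bot</math> denote the conjunction of all ground literals true in every model of <math>B \wedge \neg E</math>, the literals of <math>\neg H</math> are among those of <math>\neg\bot</math>, so

: <math>\neg\bot \models \neg H \quad\Longrightarrow\quad H \models \bot ,</math>

and the search for hypotheses can be confined to clauses that θ-subsume the most specific ("bottom") clause <math>\bot</math>.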
----