2002 FastExactInferenceWithAFactoredModel

Subject Headings: Stanford Parser, Natural Language Parser

Notes

Cited By

Quotes

Abstract

We present a novel generative model for natural language tree structures in which semantic (lexical dependency) and syntactic (PCFG) structures are scored with separate models. This factorization provides conceptual simplicity, straightforward opportunities for separately improving the component models, and a level of performance comparable to similar, non-factored models. Most importantly, unlike other modern parsing models, the factored model admits an extremely effective A* parsing algorithm, which enables efficient, exact inference.
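The following is a minimal, hypothetical Python sketch (not the authors' implementation) of the two ideas named in the abstract: a factored score obtained by adding a PCFG (syntactic) log-probability to a lexical-dependency log-probability, and A* best-first search whose priority adds the score accumulated so far to an admissible estimate of the best score still obtainable, so the first complete item popped is exactly optimal. The search space here is a toy word-by-word lattice rather than the chart-based parser described in the paper, and all names and numbers are invented for illustration.

    import heapq
    from typing import List, Tuple

    def factored_logprob(syntax_logprob: float, dependency_logprob: float) -> float:
        """Factored model: log P(T, D) = log P_pcfg(T) + log P_dep(D)."""
        return syntax_logprob + dependency_logprob

    def astar_best_sequence(
        choices: List[List[Tuple[str, float]]],   # per-word (label, log-prob) options
    ) -> Tuple[List[str], float]:
        """Exact best labelling via A* over a toy lattice.

        The heuristic for the remaining words is the sum of each word's best
        available log-prob; it never underestimates the true completion score,
        so the first complete item popped is provably optimal.
        """
        # Admissible completion estimate for every suffix of the input.
        best_rest = [0.0] * (len(choices) + 1)
        for i in range(len(choices) - 1, -1, -1):
            best_rest[i] = best_rest[i + 1] + max(lp for _, lp in choices[i])

        # Agenda entries: (-(g + h), position, accumulated log-prob, labels so far).
        agenda = [(-best_rest[0], 0, 0.0, [])]
        while agenda:
            _, pos, g, labels = heapq.heappop(agenda)
            if pos == len(choices):
                return labels, g                  # optimal by admissibility of h
            for label, lp in choices[pos]:
                g2 = g + lp
                f2 = g2 + best_rest[pos + 1]
                heapq.heappush(agenda, (-f2, pos + 1, g2, labels + [label]))
        raise ValueError("empty input")

    if __name__ == "__main__":
        # Toy per-word options whose log-probs already combine the two factors.
        options = [
            [("NP-head:dog", factored_logprob(-0.4, -0.7)),
             ("VP-head:dog", factored_logprob(-1.2, -2.0))],
            [("VP-head:barks", factored_logprob(-0.3, -0.5)),
             ("NP-head:barks", factored_logprob(-1.5, -1.8))],
        ]
        print(astar_best_sequence(options))  # best labels and their factored score

In the paper itself the admissible estimate comes from per-factor outside scores computed on a parse chart; the sketch only shows why such an estimate makes the best-first search exact.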

References

  • [1] D. Hindle and M. Rooth. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103–120, 1993.
  • [2] Michael Collins. Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania, 1999.
  • [3] Eugene Charniak. A maximum-entropy-inspired parser. NAACL 1, pp. 132–139, 2000.
  • [4] R. Bod. What is the minimal set of fragments that achieves maximal parse accuracy? ACL 39, pp. 66–73, 2001.
  • [5] I. A. Mel'čuk. Dependency Syntax: Theory and Practice. State University of New York Press, Albany, NY, 1988.
  • [6] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, GCNU, University College London, 2000.
  • [7] Eugene Charniak. Tree-bank grammars. Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI ’96), pp. 1031–1036, 1996.
  • [8] M. Johnson. PCFG models of linguistic tree representations. Computational Linguistics, 24:613–632, 1998.
  • [9] J. Eisner and G. Satta. Efficient parsing for bilexical context-free grammars and head-automaton grammars. ACL 37, pp. 457–464, 1999.
  • [10] D. Klein and Christopher D. Manning. Parsing with treebank grammars: Empirical bounds, theoretical models, and the structure of the Penn treebank. ACL 39/EACL 10, pp. 330–337, 2001.
  • [11] J. K. Baker. Trainable grammars for speech recognition. D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pp. 547–550, 1979.
  • [12] John D. Lafferty, D. Sleator, and D. Temperley. Grammatical trigrams: A probabilistic model of link grammar. Proceedings of AAAI Fall Symposium on Probabilistic Approaches to Natural Language, 1992.
  • [13] D. Klein and Christopher D. Manning. Parsing and hypergraphs. Proceedings of the 7th International Workshop on Parsing Technologies (IWPT-2001), 2001.
  • [14] Eugene Charniak, S. Goldwater, and M. Johnson. Edge-based best-first chart parsing. Proceedings of the Sixth Workshop on Very Large Corpora, pp. 127–133, 1998.
  • [15] D. M. Magerman. Statistical decision-tree models for parsing. ACL 33, pp. 276–283, 1995.
  • [16] Michael Collins. A new statistical parser based on bigram lexical dependencies. ACL 34, pp. 184–191, 1996.
  • [17] Michael Collins. Discriminative reranking for natural language parsing. ICML 17, pp. 175–182, 2000.
  • [18] J. Goodman. Parsing algorithms and metrics. ACL 34, pp. 177–183, 1996.


Author: Dan Klein, Christopher D. Manning
Title: Fast Exact Inference with a Factored Model for Natural Language Parsing
Year: 2002
URL: http://nlp.stanford.edu/~manning/papers/lex-parser.pdf