2014 Looking for Hyponyms in Vector Space

From GM-RKB

Subject Headings: Is-a-Type-of Relation Mention, Distributional Word Embedding, Neural Network-based Word Embedding, word2vec, Dependency-based Vector Space Model.

Notes

Cited By

Quotes

Abstract

The task of detecting and generating hyponyms is at the core of semantic understanding of language, and has numerous practical applications. We investigate how neural network embeddings perform on this task, compared to dependency-based vector space models, and evaluate a range of similarity measures on hyponym generation. A new asymmetric similarity measure and a combination approach are described, both of which significantly improve precision. We release three new datasets of lexical vector representations trained on the BNC and our evaluation dataset for hyponym generation.
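The abstract contrasts symmetric similarity measures with an asymmetric one for ranking hyponym candidates: since hyponymy is a directional relation ("terrier" is a "dog", not vice versa), a measure that scores the two directions differently can improve precision. The paper's own measure is not reproduced here; the sketch below is only a minimal illustration of the general idea, using a hypothetical weighted cosine in which the candidate hypernym's salient dimensions dominate the comparison (the `gamma` parameter and the weighting scheme are assumptions, not taken from the paper).

```python
import numpy as np

def cosine(u, v):
    # Symmetric baseline: cosine(u, v) == cosine(v, u).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def directional_sim(hyper, hypo, gamma=2.0):
    # Illustrative asymmetric measure (an assumption, not the paper's):
    # weight each dimension by the normalised magnitude of the candidate
    # hypernym vector, so features salient in the broader term dominate.
    # gamma sharpens the weighting. The result is a weighted cosine,
    # bounded in [-1, 1], but generally different in the two directions.
    w = np.abs(hyper) ** gamma
    w = w / w.sum()
    num = np.sum(w * hyper * hypo)
    den = np.sqrt(np.sum(w * hyper ** 2)) * np.sqrt(np.sum(w * hypo ** 2))
    return float(num / den)

# Toy vectors standing in for learned embeddings.
dog = np.array([1.0, 0.0, 2.0])
terrier = np.array([1.0, 1.0, 0.5])

sym = cosine(dog, terrier)                 # same in both directions
fwd = directional_sim(dog, terrier)        # dog as hypernym candidate
bwd = directional_sim(terrier, dog)        # reversed direction
```

To generate hyponyms for a query word, one would score every vocabulary word `w` with `directional_sim(query, w)` and return the top-ranked candidates; the asymmetry is what lets the ranking distinguish "dog → terrier" from "terrier → dog".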


References


Author: Marek Rei, Ted Briscoe
Title: Looking for Hyponyms in Vector Space
Year: 2014