Christopher D. Manning

From GM-RKB

Christopher D. Manning is a person and a natural language processing researcher at Stanford University.



References

2016

  • (Manning, 2016) ⇒ Christopher D. Manning. (2016). “Texts as Knowledge Bases.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: Much of text understanding sits either at the end of the spectrum where there is no representation of linguistic conceptual structure (bag-of-words models) or near the other extreme where complex representations are employed (first-order logic, AMR, ...). I've been interested in how far one can get with just a little bit of appropriate linguistic structure. I will summarize two recent case studies, one using deep learning and the other natural logic. Enabling a computer to understand a document so that it can use the knowledge within it, for example, to answer reading comprehension questions, is a central, yet still unsolved, goal of NLP. I'll introduce our recent work on the DeepMind QA dataset, a recently released large dataset constructed from news articles. On the one hand, we show that (simple) neural network models are surprisingly good at solving this task, achieving state-of-the-art accuracies; on the other hand, we did a careful hand-analysis of a small subset of the problems and argue that we are quite close to a performance ceiling on this dataset, and that what this task requires is still quite far from genuine deep/complex understanding. I will then turn to the use of Natural Logic, a weak proof theory on surface linguistic forms which can nevertheless model many of the common-sense inferences that we wish to make over human language material. I will show how it can support common-sense reasoning and be part of a more linguistically based approach to open information extraction which outperforms previous systems. I show how to augment this approach with a shallow lexical classifier to handle situations where we cannot find any supporting premises. With this augmentation, the system gets very promising results on answering 4th-grade science questions, improving over the classifier in isolation, a strong IR baseline, and prior work. Joint work with Gabor Angeli and Danqi Chen.
  • (Nivre et al., 2016) ⇒ Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, Daniel Zeman. (2016). “Universal Dependencies V1: A Multilingual Treebank Collection.” In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16).
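The abstract above contrasts structure-free bag-of-words models with heavily structured representations. As an illustrative sketch only (not code from the talk), a bag-of-words model reduces a text to unordered token counts, discarding all syntactic structure:

```python
from collections import Counter

def bag_of_words(text):
    """Represent a text as a multiset of its lowercased tokens.

    All word order and syntactic/conceptual structure is discarded;
    only token identity and frequency survive.
    """
    return Counter(text.lower().split())

doc = "Texts as knowledge bases treat texts as data"
counts = bag_of_words(doc)
print(counts["texts"])  # 2: word order is gone, only counts remain
```

Structured representations such as first-order logic or AMR sit at the opposite end: they encode predicates, arguments, and relations rather than mere counts.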
