AKBC-2016 Workshop


An AKBC-2016 Workshop is an AKBC Workshop (the 5th Workshop on Automated Knowledge Base Construction) that took place in 2016.



References

2016

INVITED TALKS
  • (Etzioni, 2016) ⇒ Oren Etzioni. (2016). “The Allen AI Science Challenge: Results, Lessons, and Open Questions.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: AI2 is utilizing 8th grade science questions as a task to both drive and assess progress in AI. This talk will report on the latest results from this effort, known as the Aristo project, and their implications for AKBC and for NLP & KR more broadly.
  • (McCallum, 2016) ⇒ Andrew McCallum. (2016). “Universal Schema for Representation and Reasoning from Natural Language.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: Interest in creating KBs has often been motivated by the desire to support reasoning over information that would otherwise be expressed in noisy free text and spread across multiple documents. However, distilling knowledge into a restricted KB can lose important semantic diversity and context. Traditionally a KB has a single hand-designed schema of entity- and relation-types. In contrast, universal schema operates on the union of many input schemas, including a great diversity of free textual expressions. However, previous work on universal schema still distills many textual contexts of the relation between an entity pair into a single embedded vector. In this talk I will introduce universal schema, then describe recent work leading toward (a) having the textual entity- and relation-mentions themselves represent the KB, (b) using universal schema and neural attention models to provide generalization, (c) logical reasoning on top of this text-KB, and (d) future work on reinforcement learning to guide the search for proofs of the answers to queries. (A toy sketch of this embedding-based scoring appears after the talk list below.)
  • (Manning, 2016) ⇒ Christopher Manning. (2016). “Texts as Knowledge Bases.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: Much of text understanding is either towards the end of the spectrum where there is no representation of linguistic/conceptual structure (bag-of-words models) or near the other extreme where complex representations are employed (first order logic, AMR, ...). I've been interested in how far one can get with just a little bit of appropriate linguistic structure. I will summarize two recent case studies, one using deep learning and the other natural logic. Enabling a computer to understand a document so that it can use the knowledge within it, for example, to answer reading comprehension questions is a central, yet still unsolved, goal of NLP. I'll introduce our recent work on the DeepMind QA dataset, a recently released large dataset constructed from news articles. On the one hand, we show that (simple) neural network models are surprisingly good at solving this task and achieving state-of-the-art accuracies; on the other hand, we did a careful hand-analysis of a small subset of the problems and argue that we are quite close to a performance ceiling on this dataset, and that what this task requires is still quite far from genuine deep/complex understanding. I will then turn to the use of Natural Logic, a weak proof theory on surface linguistic forms which can nevertheless model many of the common-sense inferences that we wish to make over human language material. I will show how it can support common-sense reasoning and be part of a more linguistically based approach to open information extraction that outperforms previous systems. I show how to augment this approach with a shallow lexical classifier to handle situations where we cannot find any supporting premises. With this augmentation, the system gets very promising results on answering 4th grade science questions, improving over the classifier in isolation, a strong IR baseline, and prior work. Joint work with Gabor Angeli and Danqi Chen. (A toy natural-logic inference step is sketched after the talk list below.)
  • (Liang, 2016) ⇒ Percy Liang. (2016). “Querying Unnormalized and Incomplete Knowledge Bases.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: In an ideal world, one might construct a perfect knowledge base and use it to answer compositional queries. However, real-world knowledge bases are far from perfect: they can be inaccurate and incomplete. In this talk, I show two ways that we can cope with these imperfections by directly learning to answer queries on the imperfect knowledge base. First, we treat semi-structured web tables as an unnormalized knowledge base and perform semantic parsing on it to answer compositional questions. Second, we show how to embed an incomplete knowledge base to support compositional queries directly in vector space. Finally, we discuss some ideas for combining the best of both worlds. (A toy vector-space path-query sketch appears after the talk list below.)
  • (Bordes, 2016) ⇒ Antoine Bordes. (2016). “Memory Networks for Language Understanding: Successes and Challenges.” Invited Talk at the 5th Workshop on Automated Knowledge Base Construction (AKBC-2016).
    • ABSTRACT: This talk will first briefly review Memory Networks, an attention-based neural network architecture introduced by Weston et al. (ICLR 2015), which has been shown to reach promising performance for question answering on synthetic data. Then, we will explore and discuss the successes and remaining challenges that arise when applying Memory Networks to human-generated natural language in the context of large-scale question answering, in cases where answers have to be extracted either from Knowledge Bases or directly from raw text. (A single attention hop of a Memory Network is sketched after the talk list below.)
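
The universal-schema scoring idea in McCallum's abstract can be made concrete with a small sketch. The Python toy below (hypothetical relation and entity-pair names, untrained random vectors; a minimal sketch in the spirit of universal schema, not the speaker's implementation) scores entity pairs against the union of KB relations and raw textual patterns with a single dot-product model:

    # Toy universal-schema scorer: one embedding space over the union of
    # KB relations and textual surface patterns. Hypothetical example.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16

    # The "schema" is the union of structured and textual relations.
    relations = ["kb:employee_of", "text:X works for Y", "text:X joined Y"]
    entity_pairs = [("Oren Etzioni", "AI2"), ("Andrew McCallum", "UMass")]

    rel_emb = {r: rng.normal(size=dim) for r in relations}
    pair_emb = {p: rng.normal(size=dim) for p in entity_pairs}

    def score(pair, relation):
        """Higher score = the relation is more likely to hold for the pair."""
        return float(pair_emb[pair] @ rel_emb[relation])

    for r in relations:
        print(r, round(score(entity_pairs[0], r), 3))

After training such a model (e.g., with a ranking loss over observed KB and text cells), a pair seen only with a textual pattern can still score highly for an aligned KB relation, which is the generalization the abstract describes.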
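
As a toy illustration of the natural-logic step mentioned in Manning's abstract (a hypothetical two-entry hypernym lexicon; not the talk's system), the sketch below checks whether a single lexical substitution preserves entailment under a monotonicity marking:

    # Natural-logic toy: hypernym substitution is sound in upward-monotone
    # contexts ("a cat ran" => "an animal ran") and flips direction in
    # downward-monotone contexts ("no animal ran" => "no cat ran").
    HYPERNYMS = {"cat": "animal", "mouse": "rodent"}  # assumed toy lexicon

    def entails(premise_word, hypothesis_word, monotone_up=True):
        """One lexical-substitution inference step."""
        if premise_word == hypothesis_word:
            return True
        if monotone_up:
            return HYPERNYMS.get(premise_word) == hypothesis_word
        return HYPERNYMS.get(hypothesis_word) == premise_word

    print(entails("cat", "animal", monotone_up=True))   # True
    print(entails("animal", "cat", monotone_up=False))  # True
    print(entails("cat", "animal", monotone_up=False))  # False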
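
One common way to support compositional queries "directly in vector space", as in Liang's abstract, is additive path composition over entity and relation embeddings (in the style of Guu et al., 2015). The sketch below uses hand-rigged toy vectors, a hypothetical illustration rather than the talk's actual model:

    # Answer a two-hop path query by composing relation vectors and taking
    # the nearest entity. Toy vectors are rigged so e + r equals the target.
    import numpy as np

    rng = np.random.default_rng(1)
    dim = 8
    entities = ["stanford", "california", "usa"]
    E = {e: rng.normal(size=dim) for e in entities}
    R = {"located_in": rng.normal(size=dim)}

    E["california"] = E["stanford"] + R["located_in"]
    E["usa"] = E["california"] + R["located_in"]

    def answer(start, *relation_path):
        """Compose relations additively, then take the nearest entity."""
        v = E[start].copy()
        for r in relation_path:
            v = v + R[r]
        return min(entities, key=lambda e: np.linalg.norm(E[e] - v))

    # "What country is Stanford located in?" as a two-hop query:
    print(answer("stanford", "located_in", "located_in"))  # usa

Because the query is answered by nearest-neighbor lookup in the embedding space, it can return an answer even when the corresponding fact is missing from the KB, which is how the embedding copes with incompleteness.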
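
Finally, a single attention hop of a Memory Network, in the spirit of the end-to-end variant of Sukhbaatar et al. (2015), can be sketched as follows (random toy embeddings; a minimal illustration, not the configuration discussed in the talk):

    # One attention hop: match the query against every memory, softmax the
    # scores, and read out a probability-weighted sum of the memories.
    import numpy as np

    rng = np.random.default_rng(2)
    dim, n_memories = 16, 5

    memories = rng.normal(size=(n_memories, dim))  # embedded facts/sentences
    query = rng.normal(size=dim)                   # embedded question

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    attention = softmax(memories @ query)  # relevance of each memory
    readout = attention @ memories         # retrieved evidence vector

    # The readout (combined with the query) feeds an answer layer; stacking
    # several such hops lets the model chain multiple facts.
    print("attention weights:", np.round(attention, 3))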