Test:2017 ChainsofReasoningoverEntitiesRe


Subject Headings:

Notes

Cited By


Quotes

Abstract

Our goal is to combine the rich multistep inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). Neelakantan et al. (2015) use RNNs to compose the distributed semantics of multi-hop paths in KBs; however, for multiple reasons, the approach lacks accuracy and practicality. This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, entities, and entity-types; (2) we use neural attention modeling to incorporate multiple paths; (3) we learn to share strength in a single RNN that represents logical composition across all relations. On a large-scale Freebase + ClueWeb prediction task, we achieve 25% error reduction, and a 53% error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84% versus previous state-of-the-art. The code and data are available at this https URL

References


Author: Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum
Title: Chains of Reasoning over Entities, Relations, and Text Using Recurrent Neural Networks
Year: 2017