2016 ADecomposableAttentionModelforN

  
 
[[2016_ADecomposableAttentionModelforN|We]] propose a simple [[neural architecture for natural language inference]]. </s>
[[2016_ADecomposableAttentionModelforN approach|Our approach]] uses [[attention mechanism|attention]] to decompose [[the problem]] into [[subproblem]]s that can be solved separately, thus making it trivially [[parallelizable]]. </s>
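The decomposition described above is commonly summarized as three per-token steps: attend, compare, and aggregate. Below is a minimal numpy sketch of that structure, assuming hypothetical callables <code>F</code>, <code>G</code>, <code>H</code>, and <code>linear</code> stand in for the learned feed-forward and output layers; the toy dimensions and random weights are illustrative only and are not the authors' implementation.
<pre>
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b, F, G, H, linear):
    """Minimal attend / compare / aggregate pass (sketch).

    a: (la, d) premise embeddings; b: (lb, d) hypothesis embeddings.
    F, G, H, linear: assumed row-wise callables standing in for the
    learned feed-forward networks and the final linear classifier.
    """
    # Attend: unnormalized alignment scores e_ij = F(a_i) . F(b_j)
    e = F(a) @ F(b).T                       # (la, lb)
    beta  = softmax(e, axis=1) @ b          # subphrase of b aligned to each a_i
    alpha = softmax(e, axis=0).T @ a        # subphrase of a aligned to each b_j

    # Compare: each position is processed independently (hence parallelizable)
    v1 = G(np.concatenate([a, beta],  axis=1))   # (la, h)
    v2 = G(np.concatenate([b, alpha], axis=1))   # (lb, h)

    # Aggregate: sum over positions, then classify
    v = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])
    return linear(H(v))

# Toy usage with random projections standing in for the learned networks.
rng = np.random.default_rng(0)
d, h, classes = 8, 16, 3
Wf, Wg, Wh, Wo = (rng.normal(size=s) for s in [(d, h), (2*d, h), (2*h, h), (h, classes)])
relu = lambda x: np.maximum(x, 0)
scores = decomposable_attention(
    rng.normal(size=(5, d)), rng.normal(size=(7, d)),
    F=lambda x: relu(x @ Wf), G=lambda x: relu(x @ Wg),
    H=lambda v: relu(v @ Wh), linear=lambda v: v @ Wo)   # (3,) class scores
</pre>
Because the compare step applies the same feed-forward network to each aligned token pair independently, no recurrence over word order is required, which is what makes the computation easy to parallelize.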
 
On the [[Stanford Natural Language Inference (SNLI) dataset]], [[2016_ADecomposableAttentionModelforN|we]] obtain [[state-of-the-art results]] with almost an [[order of magnitude]] [[fewer parameter]]s than [[previous work]] and without relying on any [[word-order information]]. </s>
 
Adding [[intra-sentence attention]] that takes a [[minimum amount]] of order into account yields further improvements. </s>
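In the intra-sentence variant, each sentence is first self-aligned, with the alignment scores offset by a distance-sensitive bias so that a small amount of word order is captured. A minimal sketch under the same assumptions as above (<code>F_intra</code> and <code>bias</code> are hypothetical callables; the learned bias in the paper is bucketed so that long distances share a single value):
<pre>
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_attention(a, F_intra, bias):
    """Self-align a sentence, adding a distance-sensitive bias to the scores.

    a: (la, d) word embeddings; F_intra: assumed row-wise feed-forward callable;
    bias(i - j): scalar offset, e.g. a bucketed lookup shared for large offsets.
    Returns [a_i; a'_i], the order-aware input representation for each token.
    """
    la = a.shape[0]
    f = F_intra(a)
    dist_bias = np.array([[bias(i - j) for j in range(la)] for i in range(la)])
    scores = f @ f.T + dist_bias            # (la, la) self-alignment scores
    a_prime = softmax(scores, axis=1) @ a   # self-aligned phrase for each i
    return np.concatenate([a, a_prime], axis=1)
</pre>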


Author: Ankur P. Parikh, Oscar Tackstrom, Dipanjan Das and Jakob Uszkoreit
Title: A Decomposable Attention Model for Natural Language Inference
Year: 2016