A Decomposable Attention Model for Natural Language Inference
Year: 2016
Authors: Parikh, Ankur P.; Täckström, Oscar; Das, Dipanjan; Uszkoreit, Jakob
Venue: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Type: Publication
Abstract: We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
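As a concrete illustration of the decomposition the abstract describes, below is a minimal NumPy sketch of the model's three steps (attend, compare, aggregate). It is not the authors' implementation: the feed-forward nets F, G, and H are stand-ins with random, untrained weights, the dimensions are arbitrary placeholders, and the intra-sentence attention extension is omitted. The point it shows is that each comparison in the middle step is independent across positions, which is what makes the approach trivially parallelizable.

```python
import numpy as np

def mlp(x, W1, W2):
    """Two-layer feed-forward net with ReLU, applied row-wise."""
    return np.maximum(x @ W1, 0.0) @ W2

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b, params):
    """Forward pass over premise a (m x d) and hypothesis b (n x d)
    of embedded tokens, following the attend/compare/aggregate split."""
    # Attend: unnormalized alignment scores e_ij = F(a_i)^T F(b_j).
    Fa = mlp(a, *params["F"])                  # (m, h)
    Fb = mlp(b, *params["F"])                  # (n, h)
    e = Fa @ Fb.T                              # (m, n)
    beta = softmax(e, axis=1) @ b              # subphrase of b aligned to each a_i
    alpha = softmax(e, axis=0).T @ a           # subphrase of a aligned to each b_j
    # Compare: each aligned pair is processed independently (parallelizable).
    v1 = mlp(np.concatenate([a, beta], axis=1), *params["G"])   # (m, h)
    v2 = mlp(np.concatenate([b, alpha], axis=1), *params["G"])  # (n, h)
    # Aggregate: order-insensitive sum over positions, then classify.
    v = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])        # (2h,)
    return mlp(v[None, :], *params["H"])[0]    # logits over the 3 NLI labels

# Tiny usage example with random embeddings and random (untrained) weights.
rng = np.random.default_rng(0)
d, h, m, n = 8, 16, 5, 7
params = {
    "F": (rng.standard_normal((d, h)), rng.standard_normal((h, h))),
    "G": (rng.standard_normal((2 * d, h)), rng.standard_normal((h, h))),
    "H": (rng.standard_normal((2 * h, h)), rng.standard_normal((h, 3))),
}
logits = decomposable_attention(rng.standard_normal((m, d)),
                                rng.standard_normal((n, d)), params)
print(logits.shape)  # (3,)
```

Running the example prints (3,): the logits over the three SNLI labels (entailment, contradiction, neutral). Note that the final sum in the aggregate step discards word order entirely, matching the abstract's claim that the base model uses no word-order information.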
Topics: Natural language processing, Artificial intelligence, Programming language
Popularity: Reflects the current impact/attention (the "hype") of the article in the research community at large, based on the underlying citation network.
Influence: Reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically).
Citation Count: An alternative to the "Influence" indicator; it also reflects the overall/total impact of the article in the research community at large, based on the underlying citation network (diachronically).
Impulse: Reflects the initial momentum of the article directly after its publication, based on the underlying citation network.