Inferential Role Semantics for Natural Language

Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017

Peter Blouw, Chris Eliasmith


Cognitive models have long been used to study linguistic phenomena spanning the domains of phonology, syntax, and semantics. Of these domains, semantics is unique in that there is little clarity concerning what a model ought to do to provide an account of how the meanings of complex linguistic expressions are understood. To address this problem, we introduce a neural model that is trained to generate sentences that follow from an input sentence. The model is trained using the Stanford Natural Language Inference dataset, and to evaluate its performance, we report entailment prediction accuracies on test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both ground-truth and model-generated entailments for a random selection of test sentences. Taken together, these analyses indicate that the model accounts for important inferential relationships amongst linguistic expressions.
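The evaluation described above reports entailment prediction accuracy on held-out test pairs. A minimal sketch of that kind of evaluation is given below; the example triples and the `predict` function are hypothetical stand-ins (the paper's actual predictor is a trained neural model, and the real data is the SNLI test split):

```python
# Sketch of SNLI-style entailment-accuracy evaluation.
# The triples and predict() below are illustrative stand-ins,
# not the paper's model or data.

# SNLI-style examples: (premise, hypothesis, gold label)
test_pairs = [
    ("A man is playing a guitar.", "A man is making music.", "entailment"),
    ("A dog runs in the park.", "A cat sleeps indoors.", "contradiction"),
    ("A woman reads a book.", "A woman reads a novel.", "neutral"),
]

def predict(premise, hypothesis):
    """Toy word-overlap predictor standing in for the trained model."""
    shared = set(premise.lower().split()) & set(hypothesis.lower().split())
    return "entailment" if len(shared) >= 3 else "neutral"

correct = sum(predict(p, h) == gold for p, h, gold in test_pairs)
accuracy = correct / len(test_pairs)
print(f"entailment prediction accuracy: {accuracy:.2f}")
```

The point is only the shape of the evaluation loop: each test premise is paired with a hypothesis, the model's predicted relation is compared against the gold label, and accuracy is the fraction of matches.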

Conference Proceedings

Philadelphia, Pennsylvania
Cognitive Science Society
Thora Tenbrink, Glenn Gunzelmann, Andrew Howes, Eddy Davelaar
