Improving Rule-based Reasoning in LLMs using Neurosymbolic Representations

Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025

Varun Dhanraj, Chris Eliasmith

Abstract

Large language models (LLMs) continue to face challenges in reliably solving reasoning tasks, particularly those that require precise rule following, as is common in mathematical reasoning. This paper introduces a novel neurosymbolic method that improves LLM reasoning by encoding hidden states into neurosymbolic vectors, enabling problem-solving within a neurosymbolic vector space. The results are decoded and merged with the original hidden state, significantly boosting the model's performance on numerical reasoning tasks. By offloading computation through neurosymbolic representations, this method enhances efficiency, reliability, and interpretability. Our experimental results demonstrate an average of 88.6% lower cross-entropy loss and 15.4 times more problems solved correctly compared to chain-of-thought prompting and supervised fine-tuning (LoRA), without degrading the model's performance on other tasks.
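The pipeline the abstract describes (encode hidden state, solve symbolically, decode and merge) is easiest to see with a toy example. The sketch below is a minimal illustration, not the authors' implementation: it assumes the neurosymbolic vectors are Holographic Reduced Representations (HRRs), in which role-filler pairs are bound by circular convolution and retrieved by circular correlation; the vocabulary, dimensionality, and the rule being applied are all hypothetical, and the learned encoder/decoder between LLM hidden states and the symbolic space is omitted.

import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the symbolic vector space (illustrative)

def normalize(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution binds a role vector to a filler vector.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, a):
    # Circular correlation approximately recovers the filler bound
    # to role `a` inside the composite vector `s`.
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(a))))

def cleanup(v):
    # Snap a noisy vector to the nearest vocabulary item.
    return max(vocab, key=lambda k: float(np.dot(normalize(v), vocab[k])))

# Random unit vectors stand in for symbolic roles and number fillers.
vocab = {name: normalize(rng.standard_normal(D))
         for name in ["X", "Y", "3", "5", "8"]}

# Encode the problem state "X = 3, Y = 5" as one composite vector.
state = bind(vocab["X"], vocab["3"]) + bind(vocab["Y"], vocab["5"])

# Apply the rule "answer = X + Y" in vector space: unbind each role,
# clean up, add the decoded integers, and re-encode the result.
x_val = int(cleanup(unbind(state, vocab["X"])))  # -> 3
y_val = int(cleanup(unbind(state, vocab["Y"])))  # -> 5
answer = vocab[str(x_val + y_val)]               # symbolic "8"
print(cleanup(answer))                           # prints "8"

In the paper's full method, a vector like `answer` would then be decoded back into the LLM's hidden-state space and merged with the original hidden state; this sketch shows only the symbolic step.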

Conference Proceedings

Booktitle
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month
November
Address
Suzhou, China
Publisher
Association for Computational Linguistics
DOI
10.18653/v1/2025.emnlp-main.1556
Pages
30577–30596
ISBN
979-8-89176-332-6
Editors
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng

Cite

Plain text
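
Varun Dhanraj and Chris Eliasmith. 2025. Improving Rule-based Reasoning in LLMs using Neurosymbolic Representations. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 30577–30596, Suzhou, China. Association for Computational Linguistics.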

BibTeX
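
% Entry assembled from the metadata above; the citation key follows
% the ACL Anthology naming convention and is illustrative.
@inproceedings{dhanraj-eliasmith-2025-improving,
    title = "Improving Rule-based Reasoning in {LLM}s using Neurosymbolic Representations",
    author = "Dhanraj, Varun and Eliasmith, Chris",
    editor = "Christodoulopoulos, Christos and Chakraborty, Tanmoy and Ros{\'e}, Carolyn and Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    pages = "30577--30596",
    doi = "10.18653/v1/2025.emnlp-main.1556",
    isbn = "979-8-89176-332-6",
}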