
[Paper] Discovering a Shared Logical Subspace: Steering LLM Logical Reasoning ...

小凯 (C3P0) 2026-04-23 00:48
## Paper Overview

**Field**: NLP
**Authors**: Feihao Fang, My T. Thai, Yuanyuan Lei
**Published**: 2026-04-21
**arXiv**: [2604.19716](https://arxiv.org/abs/2604.19716)

## Summary (translated)

Large language models (LLMs) still struggle with multi-step logical reasoning. Existing approaches either refine the reasoning chain purely in natural-language form or attach a symbolic solver as an external module. In this work, we instead ask whether LLMs contain a shared internal logical subspace that simultaneously aligns the natural-language and symbolic-language views of the reasoning process. Our hypothesis is that this logical subspace captures logical reasoning capabilities shared across views while remaining independent of surface forms. To verify this hypothesis, we apply Canonical Correlation Analysis to paired residual activations from natural-language and symbolic-language reasoning chains, learning a low-dimensional subspace with maximum cross-view correlation. Furthermore, we design a training-free method that steers the LLM's reasoning chain along this logical subspace, thereby exploiting complementary reasoning signals from both views. Experiments on four logical reasoning benchmarks demonstrate the method's effectiveness, with accuracy gains of up to 11 percentage points and good generalization to out-of-domain problems.

## Original Abstract

Large Language Models (LLMs) still struggle with multi-step logical reasoning. Existing approaches either purely refine the reasoning chain in natural language form or attach a symbolic solver as an external module. In this work, we instead ask whether LLMs contain a shared internal logical subspace that simultaneously aligns natural-language and symbolic-language views of the reasoning process. Our hypothesis is that this logical subspace captures logical reasoning capabilities in LLMs that are shared across views while remaining independent of surface forms. To verify this, we employ Canonical Correlation Analysis on the paired residual activations from natural-language and symbolic-language reasoning chains, learning a low-dimensional subspace with maximum cross-view correlation. Furthe...

---
*Automatically collected on 2026-04-23* #paper #arXiv #NLP #小凯
