## Paper Overview
- **Field**: NLP
- **Authors**: Jason Chan, Robert Gaizauskas, Zhixue Zhao
## Translated Abstract
This paper argues that formal logic has a structural limitation in neurosymbolic fact-checking. The authors point out that there are systematic divergences between logically sound conclusions and the inferences humans actually make, which prevents such approaches from detecting misleading claims. Drawing on research in cognitive science and pragmatics, the paper presents a typology of cases showing how logically sound conclusions systematically elicit human inferences that are unsupported by the underlying premises. The authors therefore advocate a complementary approach: treating the human-like reasoning tendencies of LLMs as a feature rather than a bug, and using these models to validate the outputs of the formal components in neurosymbolic systems, in order to surface potentially misleading conclusions.
## Original Abstract
As large language models (LLMs) are increasingly integrated into fact-checking pipelines, formal logic is often proposed as a rigorous means by which to mitigate bias, errors and hallucinations in these models' outputs. For example, some neurosymbolic systems verify claims by using LLMs to translate natural language into logical formulae and then checking whether the proposed claims are logically sound, i.e. whether they can be validly derived from premises that are verified to be true. We argue that such approaches structurally fail to detect misleading claims due to systematic divergences between conclusions that are logically sound and inferences that humans typically make and accept. Drawing on studies in cognitive science and pragmatics, we present a typology of cases in which logically sound conclusions systematically elicit human inferences that are unsupported by the underlying premises. Consequently, we advocate for a complementary approach: leveraging the human-like reasoning tendencies of LLMs as a feature rather than a bug, and using these models to validate the outputs of formal components in neurosymbolic systems against potentially misleading conclusions.
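To make the abstract's core argument concrete, here is a minimal sketch, not the authors' system: a brute-force truth-table entailment checker over a toy propositional encoding. It shows how a formal verifier can certify a claim as "logically sound" (validly derivable from verified premises) even when the claim is pragmatically misleading, e.g. weakening a verified conjunction "p and q" to the claim "p or q", which typically invites the human inference that at most one of p, q holds. All names and the encoding below are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# truth-table entailment over propositional formula trees.
from itertools import product

Formula = tuple  # ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g)

def eval_formula(f: Formula, model: dict) -> bool:
    """Evaluate a formula tree against a truth assignment."""
    op = f[0]
    if op == 'var':
        return model[f[1]]
    if op == 'not':
        return not eval_formula(f[1], model)
    if op == 'and':
        return eval_formula(f[1], model) and eval_formula(f[2], model)
    if op == 'or':
        return eval_formula(f[1], model) or eval_formula(f[2], model)
    raise ValueError(f"unknown operator: {op}")

def variables(f: Formula) -> set:
    """Collect all propositional variables in a formula tree."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def entails(premises: list, claim: Formula) -> bool:
    """True iff every assignment satisfying all premises satisfies the claim."""
    vs = sorted(set().union(*(variables(p) for p in premises), variables(claim)))
    for values in product([False, True], repeat=len(vs)):
        model = dict(zip(vs, values))
        if all(eval_formula(p, model) for p in premises) \
                and not eval_formula(claim, model):
            return False
    return True

# Verified premise: both targets were met (p and q).
premise = ('and', ('var', 'p'), ('var', 'q'))
# Claim under check: "target p or target q was met".
claim = ('or', ('var', 'p'), ('var', 'q'))

# The formal component approves the claim as logically sound...
print(entails([premise], claim))  # True
# ...yet a reader of the weaker disjunction typically infers that NOT
# both targets were met, an inference the verified premise refutes.
# The paper's proposal: add an LLM pass over such approved conclusions
# to flag this kind of pragmatic divergence.
```

The soundness check succeeds by construction (disjunction introduction is valid), which is precisely why, as the paper argues, a purely formal pipeline cannot flag the misleading reading.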
#Paper #arXiv #AI #小凯 #AutoCollected