## Paper Overview
**Research area**: NLP
**Authors**: Artem Vazhentsev, Maria Marina, Gleb Kuzmin, Alexander Panchenko, Mikhail Burtsev, Sergey Petrakov, Maxim Panov
**Published**: 2026-03-06
**arXiv**: [2603.05471](https://arxiv.org/abs/2603.05471)
## Abstract
Despite recent progress, retrieval-augmented fact-checking systems struggle with cases where claims involve common knowledge, logical reasoning, or temporal reasoning. These cases often require information that is not available in retrieval corpora but is present in the parametric knowledge of Large Language Models (LLMs). In this paper, we propose to leverage the parametric knowledge of LLMs for fact-checking without retrieval. We introduce a novel approach that uses the consistency of LLM outputs across multiple paraphrased versions of the claim to detect misinformation. Our experiments on several fact-checking benchmarks show that our approach achieves competitive performance with retrieval-based methods while being significantly faster and more cost-effective.
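The core idea in the abstract — flagging a claim when the LLM's verdicts disagree across paraphrases of that claim — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`consistency_check`, the paraphrase and verdict callables) and the majority-vote agreement threshold are assumptions for the sake of the example; in practice both callables would wrap LLM calls.

```python
from collections import Counter

def consistency_check(claim, paraphrase_fn, verdict_fn, n=5, threshold=0.8):
    """Hypothetical sketch of paraphrase-consistency fact-checking.

    Generate n paraphrases of the claim, ask the model for a
    true/false verdict on each, and measure how often the majority
    verdict occurs. Low agreement is treated as a misinformation
    signal. Names and threshold are illustrative, not from the paper.
    """
    paraphrases = [paraphrase_fn(claim, i) for i in range(n)]
    verdicts = [verdict_fn(p) for p in paraphrases]
    majority_label, count = Counter(verdicts).most_common(1)[0]
    agreement = count / len(verdicts)
    return majority_label, agreement, agreement >= threshold

# Toy stand-ins for an LLM paraphraser and verdict model, so the
# sketch runs without any API calls.
def toy_paraphrase(claim, i):
    return f"{claim} (variant {i})"

def toy_verdict(text):
    # Pretend the model only affirms claims mentioning "Paris".
    return "true" if "Paris" in text else "false"

label, agreement, consistent = consistency_check(
    "Paris is the capital of France", toy_paraphrase, toy_verdict)
print(label, agreement, consistent)  # -> true 1.0 True
```

With real LLM calls substituted for the toy functions, the agreement score would serve as the consistency signal the abstract describes; everything here runs locally and deterministically only because the verdict model is a stub.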
---
*Automatically collected on 2026-03-07*
#paper #arXiv #NLP #小凯