Paper Summary
Field: ML · Authors: Shang Zhou, Wenhao Chai, Kaiyuan Liu · Published: 2026-05-16 · arXiv: 2505.08635
Abstract (Translation)
Test-time compute scaling is a primary axis for improving LLM reasoning. Existing methods mostly scale depth by extending a single reasoning trace. Scaling breadth by sampling multiple candidates in parallel is straightforward, but it introduces a selection bottleneck: without a ground-truth verifier, choosing the best candidate is hard because pointwise LLM judging is noisy and biased. To address this, we introduce OpenDeepThink, a population-based test-time compute framework that selects via pairwise Bradley-Terry comparison. Each generation, the LLM judges random pairs of candidates and the votes are aggregated via Bradley-Terry into a global ranking; top-ranked candidates are preserved, the top three quarters are mutated using the natural-language critiques produced during comparison, and the bottom quarter is discarded. OpenDeepThink raises the effective Codeforces Elo of Gemini 3.1 Pro by +405 points over eight rounds of sequential LLM calls (about 27 minutes of wall-clock time). The pipeline transfers without retuning to both weaker and stronger models; on the multi-domain HLE benchmark, gains concentrate in objectively verifiable domains and reverse in subjective ones. We release CF-73, a curated set of 73 expert-graded Codeforces problems with International Grandmaster annotations and 99% agreement between local evaluation and official verdicts.
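The core aggregation step above, turning noisy pairwise judge votes into a global ranking, is the standard Bradley-Terry model. A minimal sketch of fitting it with the classic MM (Zermelo) iteration, assuming a win-count matrix as input (the paper's exact fitting procedure is not specified here):

```python
import numpy as np

def bradley_terry(wins, iters=500, tol=1e-9):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times candidate i was judged better than j.
    Returns normalized strengths p; sorting by p (descending) gives
    the global ranking used for selection.
    """
    n = wins.shape[0]
    p = np.ones(n)                 # uniform initial strengths
    games = wins + wins.T          # total comparisons per pair
    for _ in range(iters):
        p_new = np.empty(n)
        for i in range(n):
            # MM update: p_i <- W_i / sum_j games_ij / (p_i + p_j)
            denom = sum(games[i, j] / (p[i] + p[j])
                        for j in range(n) if j != i and games[i, j] > 0)
            p_new[i] = wins[i].sum() / denom if denom > 0 else p[i]
        p_new /= p_new.sum()       # strengths are scale-invariant; normalize
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

Because only the ranking matters for selection, the normalization is arbitrary; any positive rescaling of `p` yields the same ordering.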
Original Abstract
Test-time compute scaling is a primary axis for improving LLM reasoning. Existing methods primarily scale depth by extending a single reasoning trace. Scaling breadth by sampling multiple candidates in parallel is straightforward, but introduces a selection bottleneck: choosing the best candidate without a ground-truth verifier, since pointwise LLM judging is noisy and biased. To address this, we introduce OpenDeepThink, a population-based test-time compute framework that selects via pairwise Bradley-Terry comparison. Each generation, the LLM judges random pairs of candidates and aggregates votes via Bradley-Terry into a global ranking; top-ranked candidates are preserved and the top three quarters are mutated using the natural-language critiques produced during comparison; the bottom quar...
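One plausible reading of the per-generation selection step described in the abstract, sketched under the assumptions that the single best candidate is kept verbatim and that `mutate` stands in for the paper's LLM rewrite call driven by natural-language critiques (both names are hypothetical):

```python
def next_generation(ranked, mutate):
    """One selection/variation step (a sketch, not the paper's exact code).

    ranked: candidates sorted best-first by Bradley-Terry score.
    mutate: hypothetical function that revises a candidate using the
            critiques produced during pairwise judging (an LLM call
            in the actual pipeline).
    """
    n = len(ranked)
    survivors = ranked[: (3 * n) // 4]        # bottom quarter is discarded
    elite = survivors[0]                      # top candidate preserved as-is
    children = [mutate(c) for c in survivors] # top three quarters are mutated
    return [elite] + children                 # population size stays at n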
--- *Auto-collected on 2026-05-16*
#paper #arXiv #ML #小凯