## Paper Summary
**Field**: CV
**Authors**: Hao Gao, Shaoyu Chen, Yifan Zhu
**Published**: 2025-04-17
**arXiv**: [2504.13094](https://arxiv.org/abs/2504.13094)
## Abstract (translated from Chinese)
High-level autonomous driving requires motion planners that can model multimodal future uncertainty while remaining robust in closed-loop interaction. Although diffusion-based planners are effective at modeling complex trajectory distributions, when trained purely with imitation learning they often suffer from stochastic instability and a lack of corrective negative feedback. To address these issues, we propose RAD-2, a unified generator-discriminator framework for closed-loop planning. Specifically, a diffusion-based generator produces diverse trajectory candidates, while an RL-optimized discriminator reranks these candidates according to their long-term driving quality. This decoupled design avoids applying sparse scalar rewards directly to the full high-dimensional trajectory space, thereby improving optimization stability. To further strengthen the reinforcement learning, we introduce Temporally Consistent Group Relative Policy Optimization, which exploits temporal coherence to mitigate the credit-assignment problem. In addition, we propose On-policy Generator Optimization, which converts closed-loop feedback into structured longitudinal optimization signals and progressively steers the generator toward high-reward trajectory manifolds. To support efficient large-scale training, we introduce BEV-Warp, a high-throughput simulation environment that performs closed-loop evaluation directly in bird's-eye-view feature space via spatial warping. RAD-2 reduces the collision rate by 56% relative to strong diffusion-based planners. Real-world deployment further demonstrates improved perceived safety and driving smoothness in complex urban traffic.
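The generate-then-rerank design described in the abstract can be sketched as follows. This is a toy illustration only: `generate_candidates` stands in for the diffusion generator (here, random-walk sampling) and `score` stands in for the RL-trained discriminator (here, a hypothetical smoothness heuristic), neither of which matches the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_candidates(n=16, horizon=8):
    """Stand-in for the diffusion generator: sample n trajectory
    candidates, each a sequence of (x, y) waypoints (hypothetical)."""
    steps = rng.normal(size=(n, horizon, 2))
    return steps.cumsum(axis=1)  # random walks as placeholder trajectories

def score(traj):
    """Stand-in for the RL-optimized discriminator: a toy heuristic
    that penalizes jerky motion (not the paper's learned reward)."""
    jerk = np.diff(traj, n=2, axis=0)
    return -float((jerk ** 2).sum())

# Decoupled pipeline: generate diverse candidates, then rerank and
# execute the highest-scoring one.
candidates = generate_candidates()
best = max(candidates, key=score)
```

The point of the decoupling is that the discriminator only ranks a small discrete candidate set, so a sparse scalar reward never has to shape the full high-dimensional trajectory space directly.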
## Original Abstract
High-level autonomous driving requires motion planners capable of modeling multimodal future uncertainties while remaining robust in closed-loop interactions. Although diffusion-based planners are effective at modeling complex trajectory distributions, they often suffer from stochastic instabilities and the lack of corrective negative feedback when trained purely with imitation learning. To address these issues, we propose RAD-2, a unified generator-discriminator framework for closed-loop planning. Specifically, a diffusion-based generator is used to produce diverse trajectory candidates, while an RL-optimized discriminator reranks these candidates according to their long-term driving quality. This decoupled design avoids directly applying sparse scalar rewards to the full high-dimensional...
---
*Auto-collected on 2026-04-18*