## Paper Summary
**Research Area**: CV
**Authors**: Zehao Wang, Huaide Jiang, Shuaiwu Dong, Yuping Wang, Hang Qiu, Jiachen Li
**Published**: 2026-03-26
**arXiv**: [2603.25740](https://arxiv.org/abs/2603.25740)
## Abstract
Human driving behavior is inherently personal: it is shaped by long-term habits and influenced by short-term intentions. Individuals differ in how they accelerate, brake, merge, yield, and overtake across diverse situations. However, existing end-to-end autonomous driving systems either optimize for generic objectives or rely on fixed driving modes, lacking the ability to adapt to individual preferences or interpret natural-language intent. To address this gap, we propose Drive My Way (DMW), a personalized Vision-Language-Action (VLA) driving framework that aligns with users' long-term driving habits and adapts to real-time user instructions. DMW learns a user embedding from our personalized driving dataset, collected across multiple real drivers, and conditions the policy on this embedding during planning, while natural-language instructions provide additional short-term guidance. Closed-loop evaluation on the Bench2Drive benchmark shows that DMW improves style-instruction adaptation, and a user study shows that its generated behaviors can be recognized as each driver's own style, highlighting personalization as a key capability for human-centric autonomous driving. Our data and code are available at https://dmw-cvpr.github.io/.
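The central mechanism the abstract describes, conditioning the driving policy on a learned per-user embedding while a language instruction adds short-term guidance, can be sketched roughly as below. This is a minimal illustrative toy, not the paper's actual architecture: the embedding table, dimensions, concatenation scheme, and linear policy head are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
NUM_USERS, USER_DIM = 4, 8
SCENE_DIM, INSTR_DIM = 16, 8
ACTION_DIM = 2  # e.g. steering, acceleration

# Lookup table of per-user embeddings (stand-in for learned long-term habits).
user_table = rng.normal(size=(NUM_USERS, USER_DIM))

# A toy linear head standing in for the VLA planner's policy.
W = rng.normal(size=(SCENE_DIM + USER_DIM + INSTR_DIM, ACTION_DIM)) * 0.1

def policy(scene_feat, user_id, instr_embed):
    """Condition the action on scene features, the user's embedding,
    and an embedding of the natural-language instruction."""
    user_embed = user_table[user_id]
    x = np.concatenate([scene_feat, user_embed, instr_embed])
    return np.tanh(x @ W)  # bounded action vector

scene = rng.normal(size=SCENE_DIM)
instr = rng.normal(size=INSTR_DIM)  # e.g. an encoding of "overtake smoothly"

# Same scene and instruction, different drivers -> different actions,
# which is the personalization effect the framework targets.
a0 = policy(scene, user_id=0, instr_embed=instr)
a1 = policy(scene, user_id=1, instr_embed=instr)
```

In the sketch, swapping `user_id` while holding the scene and instruction fixed changes the output action, mirroring how a user embedding lets one policy produce driver-specific behavior.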
---
*Automatically collected on 2026-03-28*
#paper #arXiv #CV #小凯