## Paper Summary
**Research area**: CV
**Authors**: Mayur Deshmukh, Hiroyasu Akada, Helge Rhodin
**Published**: 2025-04-10
**arXiv**: [2504.07880](https://arxiv.org/abs/2504.07880)
## Abstract (Translated)
Event cameras offer multiple advantages for monocular egocentric 3D human pose estimation from head-mounted devices, including millisecond temporal resolution, high dynamic range, and negligible motion blur. Existing methods leverage these properties effectively but fall short in 3D estimation accuracy, which is insufficient for many applications (e.g., immersive VR/AR). This is because their designs are not fully tailored to event streams (e.g., their asynchronous and continuous nature), making the estimates highly sensitive to self-occlusions and temporal jitter. This paper rethinks the setting and introduces E-3DPSM, an event-driven continuous pose state machine for event-based egocentric 3D human pose estimation. E-3DPSM aligns continuous human motion with fine-grained event dynamics: it evolves a latent state and predicts continuous changes in 3D joint positions associated with observed events, which are fused with direct 3D human pose predictions to yield stable, drift-free final 3D pose reconstructions. E-3DPSM runs in real time at 80 Hz on a single workstation and sets new records on two benchmarks, improving accuracy by up to 19% (MPJPE) and temporal stability by up to 2.7x.
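The abstract's core idea, a latent state that evolves with incoming events, predicts continuous 3D joint deltas, and is fused with direct per-window pose estimates to suppress drift, can be sketched at a high level. The snippet below is purely illustrative: all function names, shapes, and the simple exponential-smoothing dynamics are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

NUM_JOINTS = 16  # illustrative skeleton size; the paper's joint count may differ

def evolve_state(state, events):
    """Update the latent state from an asynchronous event batch (placeholder dynamics)."""
    return 0.9 * state + 0.1 * events.mean(axis=0)

def predict_delta(state):
    """Map the latent state to a small continuous change of the 3D joint positions."""
    return 0.01 * np.tanh(state).reshape(NUM_JOINTS, 3)

def fuse(pose_integrated, pose_direct, alpha=0.8):
    """Blend the integrated (smooth) and direct (drift-free) pose estimates."""
    return alpha * pose_integrated + (1.0 - alpha) * pose_direct

# Toy rollout over a few event batches (random stand-ins for event features
# and for the direct pose-regression branch).
rng = np.random.default_rng(0)
state = np.zeros(NUM_JOINTS * 3)
pose = np.zeros((NUM_JOINTS, 3))
for _ in range(5):
    events = rng.normal(size=(128, NUM_JOINTS * 3))
    state = evolve_state(state, events)
    pose_integrated = pose + predict_delta(state)          # continuous update branch
    pose_direct = rng.normal(scale=0.05, size=(NUM_JOINTS, 3))  # direct estimate branch
    pose = fuse(pose_integrated, pose_direct)

print(pose.shape)  # (16, 3)
```

The fusion step mirrors the abstract's stated motivation: integrating event-driven deltas alone would accumulate drift, while the direct estimate anchors the trajectory.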
## Original Abstract
Event cameras offer multiple advantages in monocular egocentric 3D human pose estimation from head-mounted devices, such as millisecond temporal resolution, high dynamic range, and negligible motion blur. Existing methods effectively leverage these properties, but suffer from low 3D estimation accuracy, insufficient in many applications (e.g., immersive VR/AR). This is due to the design not being fully tailored towards event streams (e.g., their asynchronous and continuous nature), leading to high sensitivity to self-occlusions and temporal jitter in the estimates. This paper rethinks the setting and introduces E-3DPSM, an event-driven continuous pose state machine for event-based egocentric 3D human pose estimation. E-3DPSM aligns continuous human motion with fine-grained event dynamics; ...
---
*Automatically collected on 2026-04-12*
#paper #arXiv #CV #小凯