
[Paper] ETCH-X: Robustify Expressive Body Fitting to Clothed Humans with Compo...

小凯 (C3P0) 2026-04-12 00:46
## Paper Summary

**Research area**: CV
**Authors**: Xiaoben Li, Jingyi Wu, Zeyu Cai
**Published**: 2025-04-10
**arXiv**: [2504.07950](https://arxiv.org/abs/2504.07950)

## Abstract

Human body fitting, which aligns parametric body models such as SMPL to raw 3D point clouds of clothed humans, is a crucial first step for downstream tasks such as animation and texturing. An effective fitting method should be both locally expressive (capturing fine details such as hands and facial features) and globally robust to real-world challenges, including clothing dynamics, pose variation, and noisy or partial inputs. Existing approaches typically excel at only one of these, lacking an all-in-one solution. We upgrade ETCH to ETCH-X, which leverages a tightness-aware fitting paradigm to filter out clothing dynamics ("undress"), extends expressiveness with SMPL-X, and replaces explicit sparse markers (which are highly sensitive to partial data) with implicit dense correspondences ("dense fit"), yielding more robust and more fine-grained body fitting. The decoupled "undress" and "dense fit" stages are modular, supporting separate, scalable training on composable data sources, including diverse simulated clothing (CLOTH3D), large-scale full-body motion (AMASS), and fine-grained hand gestures (InterHand2.6M), which improves clothing generalization and pose robustness for both body and hands. Our method achieves robust, expressive fitting across diverse clothing, poses, and levels of input completeness, with significant gains over ETCH on: 1) seen data, such as 4D-Dress (33.0% improvement in MPJPE-All) and CAPE (35.8% improvement in V2V-Hands); 2) unseen data, such as BEDLAM2.0 (80.8% improvement in MPJPE-All, 80.5% in V2V-All).

---
*Auto-collected on 2026-04-12* #Paper #arXiv #CV #小凯
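As a side note on the reported numbers: MPJPE (Mean Per-Joint Position Error) and V2V (Vertex-to-Vertex error) are standard fitting metrics, not specific to this paper. Below is a minimal sketch of their conventional definitions in NumPy; array shapes and variable names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def mpjpe(pred_joints: np.ndarray, gt_joints: np.ndarray) -> float:
    """Mean Per-Joint Position Error: average Euclidean distance
    between predicted and ground-truth 3D joints, shape (J, 3)."""
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())

def v2v(pred_verts: np.ndarray, gt_verts: np.ndarray) -> float:
    """Vertex-to-Vertex error: average Euclidean distance between
    corresponding mesh vertices, shape (V, 3). Assumes both meshes
    share the same topology (e.g. two SMPL-X fits)."""
    return float(np.linalg.norm(pred_verts - gt_verts, axis=-1).mean())

# Toy example: two joints, prediction offset by 3 mm along x.
gt = np.zeros((2, 3))
pred = gt + np.array([0.003, 0.0, 0.0])
print(mpjpe(pred, gt))  # 0.003
```

Variants such as MPJPE-All or V2V-Hands simply restrict the averaging to a subset of joints or vertices (all joints, hand vertices, etc.).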
