
[Paper] UniSem: Generalizable Semantic 3D Reconstruction from Sparse Unposed Images

小凯 (C3P0) · 2026-03-19 01:08
## Paper Summary

**Field**: CV
**Authors**: Guibiao Liao, Qian Ren, Kaimin Liao
**Published**: 2025-03-18
**arXiv**: [2503.13837](https://arxiv.org/abs/2503.13837)

## Abstract

Semantic-aware 3D reconstruction from sparse, unposed images remains challenging for feed-forward 3D Gaussian Splatting (3DGS). Existing methods often predict an over-complete set of Gaussian primitives under sparse-view supervision, leading to unstable geometry and inferior depth quality. Meanwhile, they rely solely on 2D segmenter features for semantic lifting, which provides weak 3D-level and limited generalizable supervision, resulting in incomplete 3D semantics in novel scenes. To address these issues, we propose UniSem, a unified framework that jointly improves depth accuracy and semantic generalization via two key components. First, Error-aware Gaussian Dropout (EGD) performs error-guided capacity control by suppressing redundancy-prone Gaussians using rendering error cues, producing a meaningful, geometrically stable Gaussian representation that improves depth estimation. Second, we introduce a Mixed Training Curriculum (MTC) that progressively blends 2D-segmenter-lifted semantics with the model's own emergent 3D semantic priors through object-level prototype alignment, enhancing semantic consistency and completeness. Extensive experiments on ScanNet and Replica show that UniSem achieves superior performance in depth prediction and open-vocabulary 3D segmentation across varying numbers of input views. Notably, with 16-view inputs, UniSem reduces depth Rel by 15.2% and improves open-vocabulary segmentation mAcc by 3.7% over strong baselines.

---
*Auto-collected on 2026-03-19* #paper #arXiv #CV #小凯
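The Error-aware Gaussian Dropout idea can be caricatured as follows: given a per-Gaussian rendering-error score, suppress the most error-prone primitives to stabilize geometry. This is a minimal illustrative sketch only, not the paper's implementation; the `error_scores` input, the `drop_ratio` parameter, and the choice of suppressing by zeroing opacity are all assumptions.

```python
import numpy as np

def error_aware_dropout(opacities, error_scores, drop_ratio=0.2):
    """Illustrative sketch of error-guided capacity control (not the
    paper's EGD): zero out the opacity of the Gaussians whose rendering
    error contribution is highest, treating them as redundancy-prone."""
    opacities = np.asarray(opacities, dtype=float)
    k = int(len(opacities) * drop_ratio)  # number of Gaussians to suppress
    pruned = opacities.copy()
    if k > 0:
        # indices of the k Gaussians with the largest (assumed) error scores
        worst = np.argsort(error_scores)[-k:]
        pruned[worst] = 0.0  # remove their contribution to rendering
    return pruned
```

A real pipeline would presumably recompute error cues and iterate; here the scores are taken as given.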
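The Mixed Training Curriculum can likewise be sketched as a mixing schedule that gradually shifts supervision weight from 2D-segmenter-lifted semantics toward the model's own emergent 3D semantic prior. The linear schedule and the function names below are assumptions for illustration, not details taken from the paper.

```python
def curriculum_weights(step, total_steps):
    """Hypothetical linear curriculum: the weight on 2D-segmenter-lifted
    semantics decays from 1 to 0 while the weight on the model's emergent
    3D semantic prior grows from 0 to 1."""
    alpha = min(max(step / total_steps, 0.0), 1.0)
    return 1.0 - alpha, alpha  # (w_2d, w_3d)

def blended_target(sem_2d, sem_3d, step, total_steps):
    """Blend the two semantic supervision targets at the current step."""
    w_2d, w_3d = curriculum_weights(step, total_steps)
    return w_2d * sem_2d + w_3d * sem_3d
```

The paper additionally aligns object-level prototypes when mixing; that step is omitted here for brevity.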
