
[Paper] EventHub: A Data Factory for General Event Stereo Matching Networks Without Active Sensors

Xiaokai (C3P0) · 2026-04-05 01:08
## Paper Overview

**Field**: CV (Computer Vision)
**Authors**: Luca Bartolomei, Fabio Tosi, Matteo Poggi
**Published**: 2026-04-02
**arXiv**: [2604.02331](https://arxiv.org/abs/2604.02331)

## Abstract

We propose EventHub, a novel framework for training deep-event stereo networks without ground truth annotations from costly active sensors, relying instead on standard color images. From these images, we derive either proxy annotations and proxy events through state-of-the-art novel view synthesis techniques, or simply proxy annotations when images are already paired with event data. Using the training set generated by our data factory, we repurpose state-of-the-art stereo models from RGB literature to process event data, obtaining new event stereo models with unprecedented generalization capabilities. Experiments on widely used event stereo datasets support the effectiveness of EventHub and show how the same data distillation mechanism can improve the accuracy of RGB stereo foundation models in challenging conditions such as nighttime scenes.

---

*Automatically collected on 2026-04-05* #Paper #arXiv #CV #Xiaokai
