Paper Summary
Field: CV Authors: Yixing Lao, Xuyang Bai, Xiaoyang Wu, Nuoyuan Yan, Zixin Luo, Tian Fang, Jean-Daniel Nahmias, Yanghai Tsin, Shiwei Li, Hengshuang Zhao Published: 2026-03-26 arXiv: 2603.25745
Translated Abstract
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, so the primitive count grows quadratically as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable.
We propose LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution-scaling barrier. By predicting compact Gaussian primitives paired with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, while using significantly fewer primitives.
Original Abstract
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/
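To make the "quadratic growth" claim concrete, here is a minimal back-of-the-envelope sketch (not from the paper): pixel-aligned methods emit one Gaussian primitive per input pixel, so the count scales with pixel area, i.e. quadratically in image side length. The function name is hypothetical, for illustration only.

```python
# Illustrative arithmetic, assuming one Gaussian primitive per input pixel,
# as pixel-aligned feed-forward 3DGS methods do.
def pixel_aligned_primitives(width: int, height: int, num_views: int = 1) -> int:
    """Primitive count for pixel-aligned prediction: one Gaussian per pixel per view."""
    return width * height * num_views

per_view_1080p = pixel_aligned_primitives(1920, 1080)  # 2,073,600 primitives
per_view_4k = pixel_aligned_primitives(3840, 2160)     # 8,294,400 primitives

# Doubling the side length (1080p -> 4K) quadruples the primitive count,
# which is the resolution-scaling barrier LGTM is designed to avoid.
assert per_view_4k == 4 * per_view_1080p
```

At 4K, a pixel-aligned method already needs over eight million primitives per input view, which is why decoupling primitive count from rendering resolution matters.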
--- *Collected automatically on 2026-03-28*
#paper #arXiv #CV #小凯