## Paper Summary
**Research area**: CV
**作者**: Shivam Duggal, Xingjian Bai, Zongze Wu, Richard Zhang, Eli Shechtman, Antonio Torralba, Phillip Isola, William T. Freeman
**Published**: 2026-03-23
**arXiv**: [2603.22283](https://arxiv.org/abs/2603.22283)
## Abstract
Latent diffusion models (LDMs) enable high-fidelity synthesis by operating in learned latent spaces. However, training state-of-the-art LDMs requires complex staging: a tokenizer must be trained first, before the diffusion model can be trained in the frozen latent space. We propose UNITE, an autoencoder architecture for unified tokenization and latent diffusion. UNITE consists of a Generative Encoder that serves as both image tokenizer and latent generator via weight sharing. Our key insight is that tokenization and generation can be viewed as the same latent inference problem under different conditioning regimes: tokenization infers latents from fully observed images, whereas generation infers them from noise together with text or class conditioning. Motivated by this, we introduce a single-stage training procedure that jointly optimizes both tasks via two forward passes through the same Generative Encoder. The shared parameters let gradients from both tasks shape the latent space together, fostering a common latent language. Across image and molecular modalities, UNITE reaches near state-of-the-art performance without adversarial losses or pretrained encoders (e.g., DINO), achieving FID scores of 2.12 and 1.73 on ImageNet 256×256 with Base and Large models, respectively.
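To make the two-forward-pass idea concrete, here is a minimal PyTorch-style sketch. Everything in it is an assumption for illustration: the module names (`GenerativeEncoder`, `train_step`), the toy MLP backbone, and the MSE objectives are hypothetical stand-ins, not the paper's actual architecture or losses. It only shows the mechanism the abstract describes: one weight-shared encoder run once as a tokenizer on the clean image and once as a generator on noise plus conditioning, with both losses updating the same parameters in a single stage.

```python
import torch
import torch.nn as nn

class GenerativeEncoder(nn.Module):
    """Hypothetical weight-shared encoder: tokenizer and latent generator."""

    def __init__(self, dim=256, num_classes=1000):
        super().__init__()
        # Toy MLP backbone; a stand-in for the real architecture.
        self.backbone = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        self.cond_embed = nn.Embedding(num_classes, dim)

    def forward(self, x, cond=None):
        # Same weights serve both passes; conditioning is added when present.
        h = x if cond is None else x + self.cond_embed(cond)
        return self.backbone(h)  # latent z

encoder = GenerativeEncoder()
decoder = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))
opt = torch.optim.AdamW(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)

def train_step(image_feats, labels):
    # Pass 1 (tokenization): infer latents from the fully observed image,
    # trained here with a simple reconstruction loss through a decoder.
    z_tok = encoder(image_feats)
    recon_loss = nn.functional.mse_loss(decoder(z_tok), image_feats)

    # Pass 2 (generation): infer latents from noise + class conditioning.
    # Regressing toward the tokenizer's latents is a simplification of
    # whatever diffusion-style objective the paper actually uses.
    noise = torch.randn_like(image_feats)
    z_gen = encoder(noise, cond=labels)
    gen_loss = nn.functional.mse_loss(z_gen, z_tok.detach())

    # Single-stage training: both losses back-propagate into the same
    # encoder, so gradients from both tasks shape one latent space.
    loss = recon_loss + gen_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random "image features" and class labels.
loss = train_step(torch.randn(8, 256), torch.randint(0, 1000, (8,)))
print(f"joint loss: {loss:.4f}")
```

In this sketch the generation target `z_tok` is detached so the second pass chases the tokenizer's latents rather than dragging them toward noise; that is a choice made here to keep the toy stable, not something the abstract specifies.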
---
*Automatically collected on 2026-03-25*
#paper #arXiv #CV #小凯