Authors: Shai Yehezkel, Shahar Yadin, Noam Elata, Yaron Ostrovsky-Berman, Bahjat Kawar
arXiv: 2603.05503
PDF: https://arxiv.org/pdf/2603.05503.pdf
Category: cs.CV
Research area: Computer Vision (CV)
Study type: Empirical study
Methods: Transformer, Attention, Diffusion
Significance: training-free acceleration of diffusion-based video generation.
Recent diffusion models enable high-quality video generation, but suffer from slow runtimes. The large transformer-based backbones used in these models are bottlenecked by spatiotemporal attention. In this paper, we identify that a significant fraction of token-to-token connections consistently yield negligible scores across various inputs, and their patterns often repeat across queries. Thus, the attention computation in these cases can be skipped with little to no effect on the result. This observation continues to hold for connections among local token blocks. Motivated by this, we introduce CalibAtt, a training-free method that accelerates video generation via calibrated sparse attention. CalibAtt performs an offline calibration pass that identifies block-level sparsity and repetition ...
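Based only on the abstract, a minimal sketch of the idea (an offline calibration pass that finds negligible block-to-block attention, then skips those blocks at generation time) might look like the following. This is not the authors' CalibAtt implementation; every function name, block size, and threshold here is a hypothetical illustration:

```python
# A minimal sketch (assumptions, not the paper's code) of calibrated
# block-sparse attention: an offline pass averages the attention mass
# each query block assigns to each key block over calibration inputs,
# and block pairs whose mass stays below a threshold are skipped later.
import torch

def calibrate_block_mass(q, k, block):
    """Average attention mass per (query-block, key-block) pair.

    Returns an (n, n) tensor whose rows sum to 1, where n = seq // block.
    """
    scale = q.shape[-1] ** 0.5
    attn = torch.softmax(q @ k.transpose(-1, -2) / scale, dim=-1)
    n = q.shape[-2] // block
    m = attn.reshape(-1, n, block, n, block)
    # Sum over key positions inside each block; average over the query
    # positions inside each block and over the calibration samples.
    return m.sum(dim=4).mean(dim=(0, 2))

def block_sparse_attention(q, k, v, keep, block):
    """Attention that skips (query-block, key-block) pairs marked False."""
    scale = q.shape[-1] ** 0.5
    scores = q @ k.transpose(-1, -2) / scale
    dense = keep.repeat_interleave(block, 0).repeat_interleave(block, 1)
    scores = scores.masked_fill(~dense, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Offline calibration on a few representative inputs, then reuse the mask.
torch.manual_seed(0)
block, seq, dim = 16, 128, 64
q, k, v = (torch.randn(8, seq, dim) for _ in range(3))
mass = calibrate_block_mass(q, k, block)
n = seq // block
keep = mass > 0.5 / n                   # keep blocks above half-uniform mass
keep |= torch.eye(n, dtype=torch.bool)  # never skip a block's own diagonal
out = block_sparse_attention(q, k, v, keep, block)
print(f"skipped {100 * (~keep).float().mean():.1f}% of block pairs")
```

In the actual method, the calibrated mask would presumably be computed per layer and head and paired with a sparse attention kernel, since a dense mask like the one above saves no compute on its own; the sketch only illustrates the calibrate-then-skip logic.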
Auto-collected on 2026-03-07
#paper #arXiv #CV #小凯