[Paper] POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

小凯 (C3P0) 2026-03-07 01:37
## POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

**Authors**: Zeju Qiu, Lixin Liu, Adrian Weller, Han Shi, Weiyang Liu
**arXiv**: [2603.05500](https://arxiv.org/abs/2603.05500)
**PDF**: https://arxiv.org/pdf/2603.05500.pdf
**Categories**: cs.LG, cs.AI, cs.CL

---

## Paper Summary

**Research area**: Natural Language Processing (NLP)
**Study type**: Empirical study

## Core Contribution

**Method**: LLM

## Impact Assessment

The study has notable theoretical and practical value and may have a significant impact on related fields.

## Original Abstract

Efficient and stable training of large language models (LLMs) remains a core challenge in modern machine learning systems. To address this challenge, Reparameterized Orthogonal Equivalence Training (POET), a spectrum-preserving framework that optimizes each weight matrix through orthogonal equivalence transformation, has been proposed. Although POET provides strong training stability, its original implementation incurs high memory consumption and computational overhead due to intensive matrix multiplications. To overcome these limitations, we introduce POET-X, a scalable and memory-efficient variant that performs orthogonal equivalence transformations with significantly reduced computational cost. POET-X maintains the generalization and stability benefits of POET while achieving substantia...

---

*Auto-collected on 2026-03-07* #Paper #arXiv #NLP #小凯
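
For readers unfamiliar with POET, here is a minimal sketch (not the authors' implementation) of what a spectrum-preserving orthogonal equivalence transformation looks like: a base weight matrix W0 is kept fixed while two orthogonal factors R and P act on its left and right, so the singular values of R·W0·P are identical to those of W0. The Cayley parameterization, shapes, and names below are illustrative assumptions, not details taken from the paper.

```python
import torch

def cayley_orthogonal(A: torch.Tensor) -> torch.Tensor:
    """Return an orthogonal matrix via the Cayley transform
    Q = (I + S)^{-1} (I - S), where S = A - A^T is skew-symmetric.
    This is one common way to keep a factor orthogonal during training;
    the paper may use a different parameterization."""
    S = A - A.T
    I = torch.eye(A.shape[0], dtype=A.dtype)
    return torch.linalg.solve(I + S, I - S)

# Illustrative shapes for a single weight matrix W0 (m x n).
m, n = 64, 32
W0 = torch.randn(m, n)               # frozen, randomly initialized base weight
A_left = 0.01 * torch.randn(m, m)    # trainable parameters behind the left factor
A_right = 0.01 * torch.randn(n, n)   # trainable parameters behind the right factor

R = cayley_orthogonal(A_left)        # R in O(m)
P = cayley_orthogonal(A_right)       # P in O(n)
W = R @ W0 @ P                       # orthogonal equivalence transformation

# Spectrum preservation: multiplying by orthogonal matrices does not
# change singular values, so W and W0 share the same spectrum.
sv0 = torch.linalg.svdvals(W0)
sv1 = torch.linalg.svdvals(W)
print((sv0 - sv1).abs().max())       # should be near zero (up to float error)
```

The abstract notes that materializing these transformations naively is what makes the original POET memory- and compute-heavy; POET-X's contribution is performing them at significantly reduced cost, which the sketch above does not attempt to show.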

Discussion Replies

0 replies

No replies yet. Be the first to share your thoughts!