Paper Summary
Field: ML · Authors: Charuka Herath, Yogachandran Rahulamathavan, Varuna De Silva · Published: 2025-03-18 · arXiv: 2503.13827
Abstract (translated from Chinese)
Federated Learning (FL) enables privacy-preserving intelligence on Internet of Things (IoT) devices, but incurs a significant carbon footprint due to the high energy cost of frequent uplink transmissions. While pre-trained models are increasingly available on edge devices, their potential to reduce the energy overhead of fine-tuning remains underexplored. In this work, we propose QuantFL, a sustainable FL framework that leverages pre-trained initialisation to enable aggressive, computationally lightweight quantisation. We demonstrate that pre-training naturally concentrates update statistics, allowing us to use memory-efficient bucket quantisation without the energy-intensive overhead of complex error-feedback mechanisms. On MNIST and CIFAR-100, QuantFL reduces total communication by 40% (≈40% total bit reduction with a full-precision downlink; ≥80% with uplink or downlink quantisation) while matching or exceeding uncompressed baselines under strict bandwidth budgets; BU reaches 89.00% (MNIST) and 66.89% (CIFAR-100) test accuracy with orders of magnitude fewer bits. We also account for both uplink and downlink costs, and provide ablations on quantisation levels and initialisation. QuantFL offers a practical "green" recipe for scalable training over battery-constrained IoT networks.
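The abstract does not spell out QuantFL's exact quantiser, but the "memory-efficient bucket quantisation" it refers to is generically done by splitting a flat update vector into fixed-size buckets and encoding each bucket with one float scale plus a few bits per element. The sketch below is a minimal illustration of that generic idea, not the paper's implementation; the function names, bucket size, and level count are assumptions for illustration.

```python
import numpy as np

def bucket_quantize(update, bucket_size=128, levels=16):
    """Quantise a flat update vector bucket by bucket (generic sketch,
    not QuantFL's exact scheme). Each bucket ships one float scale plus
    ~log2(levels) bits per element instead of 32 bits per element."""
    n = update.size
    codes = np.empty(n, dtype=np.uint8)
    scales = np.empty(-(-n // bucket_size))  # ceil(n / bucket_size)
    for b, start in enumerate(range(0, n, bucket_size)):
        chunk = update[start:start + bucket_size]
        scale = float(np.abs(chunk).max()) or 1.0  # avoid divide-by-zero
        scales[b] = scale
        # Map [-scale, scale] linearly onto the codes {0, ..., levels-1}.
        norm = (chunk / scale + 1.0) / 2.0
        codes[start:start + chunk.size] = np.round(norm * (levels - 1))
    return codes, scales

def bucket_dequantize(codes, scales, bucket_size=128, levels=16):
    """Invert bucket_quantize: codes back to approximate floats."""
    out = np.empty(codes.size)
    for b, start in enumerate(range(0, codes.size, bucket_size)):
        chunk = codes[start:start + bucket_size].astype(np.float64)
        out[start:start + chunk.size] = (chunk / (levels - 1) * 2.0 - 1.0) * scales[b]
    return out
```

With 16 levels each element costs 4 bits plus amortised per-bucket overhead, which is roughly the ≥80% bit reduction regime the abstract describes for quantised links; the reconstruction error per element is bounded by the bucket scale divided by (levels − 1).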
Original Abstract
Federated Learning (FL) enables privacy-preserving intelligence on Internet of Things (IoT) devices but incurs a significant carbon footprint due to the high energy cost of frequent uplink transmission. While pre-trained models are increasingly available on edge devices, their potential to reduce the energy overhead of fine-tuning remains underexplored. In this work, we propose QuantFL, a sustainable FL framework that leverages pre-trained initialisation to enable aggressive, computationally lightweight quantisation. We demonstrate that pre-training naturally concentrates update statistics, allowing us to use memory-efficient bucket quantisation without the energy-intensive overhead of complex error-feedback mechanisms. On MNIST and CIFAR-100, QuantFL reduces total communication by 40% (≈4...
--- *Collected automatically on 2026-03-19*
#paper #arXiv #ML #小凯