## Paper Summary
**Research area**: ML
**Authors**: Abdulmoneam Ali, Ahmed Arafa
**Published**: 2026-04-21
**arXiv**: [2604.19729](https://arxiv.org/abs/2604.19729)
## Abstract (translated from Chinese)
Personalized Federated Learning (PFL) aims to learn multiple task-specific models rather than a single global model across heterogeneous data distributions. Existing PFL approaches typically rely on iterative optimization signals, such as model update trajectories, to cluster users that share the same task. However, these learning-dynamics-based methods are inherently vulnerable to low-quality data and noisy labels, as corrupted updates distort clustering decisions and degrade personalization performance. To address this, we propose FB-NLL, a feature-centric framework that decouples user clustering from iterative training dynamics. By exploiting the intrinsic heterogeneity of local feature spaces, FB-NLL characterizes each user through the spectral structure of its feature-representation covariance and uses subspace similarity to identify task-consistent user groups. This geometry-aware clustering is label-agnostic and is performed once before training, substantially reducing communication overhead and computational cost. Complementing this, we introduce a feature-consistency-based detection and correction strategy to handle noisy labels within clusters. By exploiting directional alignment in the learned feature space and assigning labels based on class-specific feature subspaces, our method mitigates corrupted supervision without estimating a stochastic noise transition matrix. Moreover, FB-NLL is model-agnostic and integrates seamlessly with existing noise-robust training techniques. Extensive experiments across diverse datasets and noise mechanisms show that our framework consistently outperforms the strongest baselines in average accuracy and performance stability.
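The one-shot clustering step above characterizes each user by the spectral structure of its local feature covariance and groups users by subspace similarity. The following is a minimal sketch of that idea, not the authors' implementation: function names, the choice of `k`, and the similarity measure (mean squared cosine of principal angles) are illustrative assumptions.

```python
import numpy as np

def user_subspace(features, k=2):
    """Top-k eigenvectors of a user's local feature covariance.
    `features` is (n_samples, d); returns a (d, k) orthonormal basis."""
    cov = np.cov(features, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]

def subspace_similarity(U, V):
    """Mean squared cosine of the principal angles between two subspaces
    with orthonormal column bases U and V (1.0 = identical, 0.0 = orthogonal)."""
    s = np.linalg.svd(U.T @ V, compute_uv=False)  # cosines of principal angles
    return float(np.mean(s ** 2))

# toy example: users 1 and 2 share a latent 2-D feature subspace of R^10
# (same task), while user 3 lives in a different random subspace
rng = np.random.default_rng(0)
B1 = rng.standard_normal((10, 2))
B2 = rng.standard_normal((10, 2))
u1 = rng.standard_normal((200, 2)) @ B1.T
u2 = rng.standard_normal((200, 2)) @ B1.T
u3 = rng.standard_normal((200, 2)) @ B2.T
S = [user_subspace(f, k=2) for f in (u1, u2, u3)]
print(subspace_similarity(S[0], S[1]))  # high: cluster together
print(subspace_similarity(S[0], S[2]))  # lower: different task
```

Because the pairwise similarities depend only on local feature statistics, each user can compute and ship a small basis once before training, which is where the claimed communication savings over trajectory-based clustering would come from.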
## Original Abstract
Personalized Federated Learning (PFL) aims to learn multiple task-specific models rather than a single global model across heterogeneous data distributions. Existing PFL approaches typically rely on iterative optimization (such as model update trajectories) to cluster users that need to accomplish the same tasks together. However, these learning-dynamics-based methods are inherently vulnerable to low-quality data and noisy labels, as corrupted updates distort clustering decisions and degrade personalization performance. To tackle this, we propose FB-NLL, a feature-centric framework that decouples user clustering from iterative training dynamics. By exploiting the intrinsic heterogeneity of local feature spaces, FB-NLL characterizes each user through the spectral structure of the covariances ...
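The paper's second component detects and corrects noisy labels by checking directional consistency of a sample's feature with class-specific feature subspaces, without estimating a noise transition matrix. Below is a hedged sketch of that general idea under simplifying assumptions; the helper names, the rank `k`, and the `margin` threshold are hypothetical and not taken from the paper.

```python
import numpy as np

def class_subspaces(feats, labels, k=3):
    """Per-class orthonormal basis: top-k left singular vectors of the
    class's feature matrix (transposed to shape (d, n_c))."""
    subs = {}
    for c in np.unique(labels):
        U, _, _ = np.linalg.svd(feats[labels == c].T, full_matrices=False)
        subs[c] = U[:, :k]
    return subs

def correct_labels(feats, labels, subs, margin=0.2):
    """Relabel a sample only when another class's subspace explains its
    feature direction decisively better (smaller projection residual);
    `margin` is an illustrative threshold."""
    out = labels.copy()
    for i, f in enumerate(feats):
        f = f / np.linalg.norm(f)
        res = {c: np.linalg.norm(f - U @ (U.T @ f)) for c, U in subs.items()}
        best = min(res, key=res.get)
        if best != labels[i] and res[labels[i]] - res[best] > margin:
            out[i] = best
    return out

# toy data: two classes living in different 3-D subspaces of R^10,
# with the first 8 class-0 samples mislabeled as class 1
rng = np.random.default_rng(1)
B = {c: rng.standard_normal((10, 3)) for c in (0, 1)}
feats = np.vstack([rng.standard_normal((100, 3)) @ B[c].T for c in (0, 1)])
labels = np.array([0] * 100 + [1] * 100)
noisy = labels.copy()
noisy[:8] = 1  # inject label noise
subs = class_subspaces(feats, noisy, k=3)
fixed = correct_labels(feats, noisy, subs)
print("labels corrected:", int((fixed != noisy).sum()))
```

The margin test is one simple way to act only on confidently inconsistent samples; a deployed version would need to pick `k` and the threshold per dataset, and the paper additionally exploits directional alignment in the learned feature space rather than raw features.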
---
*Automatically collected on 2026-04-23*
#paper #arXiv #ML #小凯