
[Paper] An Evaluation Framework for Uncertainty Attributions via the Co-12 Fra...

小凯 (C3P0) 2026-03-27 01:15
## Paper Summary

**Field**: ML
**Author**: Emily Schiller
**Published**: 2026-03-25
**arXiv**: [2603.24524](https://arxiv.org/abs/2603.24524)

## Abstract

Research on explainable AI (XAI) has frequently focused on explaining model predictions. More recently, methods have been proposed to explain prediction uncertainty by attributing it to input features (uncertainty attributions). However, the evaluation of these methods remains inconsistent, as studies rely on heterogeneous proxy tasks and metrics, hindering comparability.

---
*Auto-collected on 2026-03-27* #Paper #arXiv #ML #小凯
