## Paper Summary
**Research Area**: ML
**Authors**: Simmaco Di Lillo, Leonardo Maini, Domenico Marinucci
**Published**: 2026-04-21
**arXiv**: [2604.19738](https://arxiv.org/abs/2604.19738)
## Abstract (translated)
We establish central and non-central limit theorems for sequences of functionals of the Gaussian output of an infinitely wide random neural network on the d-dimensional sphere. We show that, as the depth of the network increases, the asymptotic behaviour of these functionals depends crucially on the fixed points of the covariance function, giving rise to three distinct limiting regimes: convergence to the same functional of a limiting Gaussian field, convergence to a Gaussian distribution, and convergence to a distribution in the Qth Wiener chaos. Our proofs exploit tools that are by now classical (Hermite expansions, the Diagram Formula, Stein-Malliavin techniques), but also ideas that have not previously appeared in similar contexts: in particular, the asymptotic behaviour is determined by the fixed-point structure of the iterated operator associated with the covariance, whose nature and stability govern the different limiting regimes.
## Original Abstract
We establish central and non-central limit theorems for sequences of functionals of the Gaussian output of an infinitely-wide random neural network on the d-dimensional sphere. We show that the asymptotic behaviour of these functionals as the depth of the network increases depends crucially on the fixed points of the covariance function, resulting in three distinct limiting regimes: convergence to the same functional of a limiting Gaussian field, convergence to a Gaussian distribution, convergence to a distribution in the Qth Wiener chaos. Our proofs exploit tools that are now classical (Hermite expansions, Diagram Formula, Stein-Malliavin techniques), but also ideas which have never been used in similar contexts: in particular, the asymptotic behaviour is determined by the fixed-point structure of the iterated operator associated with the covariance, whose nature and stability govern the different limiting regimes.
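To make the fixed-point mechanism in the abstract concrete, here is a minimal sketch, not taken from the paper itself, of the standard depth recursion for the correlation of an infinitely wide ReLU network (the normalized arccosine kernel). Iterating the map shows how the correlation between the network's outputs at two inputs drifts toward the fixed point ρ = 1 as depth grows; the stability of such fixed points is what governs the limiting regimes described above. The function name and starting value are illustrative choices.

```python
import math

def relu_correlation_map(rho: float) -> float:
    """One depth step of the output-correlation recursion for an
    infinitely wide ReLU network (normalized arccosine kernel)."""
    rho = min(rho, 1.0)  # guard against floating-point drift above 1
    return (math.sqrt(1.0 - rho * rho)
            + rho * (math.pi - math.acos(rho))) / math.pi

# rho = 1 is a fixed point of the map; iterating from any correlation
# in (-1, 1] drifts toward it, illustrating how the fixed-point
# structure of the covariance controls the deep-network limit.
rho = 0.5
for depth in range(10_000):
    rho = relu_correlation_map(rho)
print(rho)  # close to 1 after many layers
```

Near ρ = 1 the map's derivative equals 1, so the approach to the fixed point is only polynomial in depth; this kind of stability analysis is exactly where the distinction between the limiting regimes arises.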
---
*Automatically collected on 2026-04-23*
#Paper #arXiv #ML #小凯