This paper studies large deviation principles and weak convergence, both at the level of finite-dimensional distributions and in functional form, for a class of continuous, isotropic, centered Gaussian random fields defined on the unit sphere. The covariance functions of these fields evolve recursively through a nonlinear map induced by an activation function, reflecting the statistical dynamics of infinitely wide random neural networks as depth increases. We consider two types of centered fields, obtained by subtracting either the value at the North Pole or the spherical average. According to the behavior of the derivative at t=1 of the associated covariance function, we identify three regimes: low disorder, sparse, and high disorder. In the low-disorder regime, we establish functional large deviation principles and weak convergence results. In the sparse regime, we obtain large deviation principles and weak convergence for finite-dimensional distributions, while both properties fail at the functional level due to the emergence of discontinuities.
@article{dilillo2026largedeviationprinciplesfunctional,title={Large deviation principles and functional limit theorems in the deep limit of wide random neural networks},author={Di Lillo, Simmaco and Macci, Claudio and Pacchiarotti, Barbara},year={2026},archiveprefix={arXiv},primaryclass={math.PR},url={https://arxiv.org/abs/2601.04677},}
Fractal and Regular Geometry of Deep Neural Networks
We study the geometric properties of random neural networks by investigating the boundary volumes of their excursion sets for different activation functions, as the depth increases. More specifically, we show that, for activations which are not very regular (e.g., the Heaviside step function), the boundary volumes exhibit fractal behavior, with their Hausdorff dimension monotonically increasing with the depth. On the other hand, for activations which are more regular (e.g., ReLU, logistic and tanh), as the depth increases, the expected boundary volumes can either converge to zero, remain constant or diverge exponentially, depending on a single spectral parameter which can be easily computed. Our theoretical results are confirmed in some numerical experiments based on Monte Carlo simulations.
@article{dilillo2026fractalregulargeometrydeep,title={Fractal and Regular Geometry of Deep Neural Networks},author={Di Lillo, Simmaco and Marinucci, Domenico and Salvi, Michele and Vigogna, Stefano},year={2026},archiveprefix={arXiv},primaryclass={math.PR},url={https://arxiv.org/abs/2504.06250},}
2025
Neurological long COVID in the outpatient clinic: Is it so long?
Stefano Giuseppe Grisanti, Sara Garbarino, Margherita Bellucci, Cristina Schenone, and 15 more authors
Background and purpose: Neurological involvement in long COVID (coronavirus disease 2019) is well known. In a previous study we identified two subtypes of neurological long COVID, one characterized by memory disturbances, psychological impairment, headache, anosmia and ageusia, and the other characterized by peripheral nervous system involvement, each of which presents a different risk factor profile. In this study, we aimed to clarify the persistence of neurological long COVID symptoms with a significantly longer-term follow-up. Methods: We prospectively collected data from patients with prior COVID-19 infection who showed symptoms of neurological long COVID. We conducted a descriptive analysis to investigate the progression of neurological symptoms over time at 3-, 6-, 12-, and 18-month follow-ups. We performed a k-means clustering analysis on the temporal evolution of the symptoms at 6, 12, and 18 months. Finally, we assessed the difference between the recovery course of vaccinated and non-vaccinated patients by computing the cumulative recovery rate of symptoms in the two groups. Results: The study confirmed the presence of two subtypes of neurological long COVID. Further, 50% of patients presented a complete resolution of symptoms at 18 months of follow-up, regardless of which subtype of neurological long COVID they had. Vaccination against SARS-CoV-2 appeared to be associated with a higher overall recovery rate for all neurological symptoms, although the statistical reliability of this finding is hampered by the limited sample size of the unvaccinated patients included in this study. Conclusions: Neurological long COVID can undergo complete resolution after 18 months of follow-up in 50% of patients, and vaccination can accelerate the recovery.
@article{grisanti2025neurologicallongcovid,author={Grisanti, Stefano Giuseppe and Garbarino, Sara and Bellucci, Margherita and Schenone, Cristina and Candiani, Valentina and Di Lillo, Simmaco and Campi, Cristina and Barisione, Emanuela and Aloè, Teresita and Tagliabue, Elena and Serventi, Alberto and Pesce, Giampaola and Massucco, Sara and Cabona, Corrado and Lechiara, Anastasia and Uccelli, Antonio and Schenone, Angelo and Piana, Michele and Benedetti, Luana},title={Neurological long COVID in the outpatient clinic: Is it so long?},journal={European Journal of Neurology},volume={32},number={3},pages={e16510},keywords={clustering, COVID-19, neurological long COVID, recovery rate, vaccination},doi={10.1111/ene.16510},url={https://onlinelibrary.wiley.com/doi/abs/10.1111/ene.16510},year={2025},}
This work investigates the expected number of critical points of random neural networks with different activation functions as the depth increases in the infinite-width limit. Under suitable regularity conditions, we derive precise asymptotic formulas for the expected number of critical points of fixed index and of those exceeding a given threshold. Our analysis reveals three distinct regimes depending on the value of the first derivative of the covariance evaluated at 1: the expected number of critical points may converge, grow polynomially, or grow exponentially with depth. The theoretical predictions are supported by numerical experiments. Moreover, we provide numerical evidence suggesting that, when the regularity condition is not satisfied (e.g., for neural networks with ReLU as activation function), the number of critical points increases as the map resolution increases, indicating a potential divergence in the number of critical points.
@article{dilillo2025criticalpointsrandomneural,title={Critical Points of Random Neural Networks},author={Di Lillo, Simmaco},year={2025},archiveprefix={arXiv},primaryclass={stat.ML},url={https://arxiv.org/abs/2505.17000},}
It is well known that randomly initialized, push-forward, fully connected neural networks weakly converge to isotropic Gaussian processes in the limit where the width of all layers goes to infinity. In this paper, we propose to use the angular power spectrum of the limiting fields to characterize the complexity of the network architecture. In particular, we define sequences of random variables associated with the angular power spectrum and provide a full characterization of the network complexity in terms of the asymptotic distribution of these sequences as the depth diverges. On this basis, we classify neural networks as low-disorder, sparse, or high-disorder; we show how this classification highlights a number of distinct features for standard activation functions and, in particular, sparsity properties of ReLU networks. Our theoretical results are also validated by numerical simulations.
@article{doi:10.1137/24M1675746,author={Di Lillo, Simmaco and Marinucci, Domenico and Salvi, Michele and Vigogna, Stefano},title={Spectral Complexity of Deep Neural Networks},journal={SIAM Journal on Mathematics of Data Science},volume={7},number={3},pages={1154-1183},year={2025},doi={10.1137/24M1675746},url={https://doi.org/10.1137/24M1675746},}