Codes and Expansions (CodEx) Seminar
Mauro Maggioni (Johns Hopkins University)
The non-linear single-index model, and other examples where compositional structure helps avoid the curse of dimensionality
We discuss examples where function composition can be exploited in statistical models in order to obtain efficient machine learning algorithms. I will discuss a new, simple model of high-dimensional functions that generalizes the classical single-index model, \(f(x)=g(v\cdot x)\), to the case where the linear inner function projecting onto the unknown vector \(v\) is replaced by a nonlinear counterpart, namely the projection onto an (unknown) one-dimensional curve. We construct estimators for \(f\) that, under suitable assumptions, provably defeat the statistical and computational curse of dimensionality, and achieve nearly optimal learning rates. I will then discuss other instances where function composition helps avoid the curse of dimensionality: first, the problem of estimating interaction laws in stochastic systems of interacting particles given trajectories, where again the compositional structure imposed by modeling and symmetries allows one to provably avoid the curse of dimensionality. Second, an application of deep learning in the context of digital twins in cardiology, and in particular the use of operator learning architectures for predicting solutions of parametric PDEs, or functionals thereof, on a family of diffeomorphic domains — the patient-specific hearts — which we apply to the prediction of medically relevant electrophysiological features of heart digital twins. Finally, I will mention the use of compositionality in the recently introduced Multiscale Markov Decision Processes (MDPs) in Reinforcement Learning, which enable the efficient solution of MDPs, where scales represent MDPs at different levels of abstraction, and which also provide highly transferable skills to new MDPs (again via compositionality).
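To make the model concrete, here is a minimal numerical sketch of the contrast between the classical single-index model \(f(x)=g(v\cdot x)\) and its nonlinear generalization, where the inner linear projection is replaced by a nearest-point projection onto a one-dimensional curve. The particular link function, curve, and discretization below are hypothetical choices for illustration only; the abstract does not specify them, and the actual estimators constructed in the talk are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10  # ambient dimension

# --- Classical single-index model: f(x) = g(v . x) ---
v = rng.standard_normal(d)
v /= np.linalg.norm(v)      # unit index vector (unknown in practice)
g = np.tanh                 # hypothetical link function

def f_linear(x):
    return g(x @ v)

# --- Nonlinear generalization: project onto a 1-D curve instead ---
# A discretized curve gamma(t) in R^d; here a perturbed line through v
# (an arbitrary illustrative curve, not one from the talk).
t_grid = np.linspace(0.0, 1.0, 200)
curve = (np.outer(t_grid, v)
         + 0.3 * np.outer(np.sin(2 * np.pi * t_grid), rng.standard_normal(d)))

def project_onto_curve(x):
    # nearest-point projection: parameter t of the closest curve point
    dists = np.linalg.norm(curve - x, axis=1)
    return t_grid[np.argmin(dists)]

def f_nonlinear(x):
    # f(x) = g(pi_gamma(x)): link applied to the curve parameter
    return g(project_onto_curve(x))

x = rng.standard_normal(d)
print(f_linear(x), f_nonlinear(x))
```

When the curve degenerates to the straight line \(t\mapsto t\,v\), the nearest-point parameter reduces (up to discretization) to the linear projection \(v\cdot x\), recovering the classical model.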