Codes and Expansions (CodEx) Seminar


Michael Perlmutter (University of California Los Angeles)
Group Invariant Scattering on Graphs, Manifolds, and Other Measure Spaces

Geometric Deep Learning is an emerging field of research that aims to extend the success of convolutional neural networks (CNNs) to data with non-Euclidean geometric structure. Despite being in its relative infancy, this field has already found great success in many applications, such as recommender systems, computer graphics, and traffic navigation. In order to improve our understanding of the networks used in this new field, several works have proposed novel versions of the scattering transform, a wavelet-based model of CNNs, for graphs, manifolds, and more general measure spaces. In a similar spirit to the original Euclidean scattering transform, these geometric scattering transforms provide a mathematically rigorous framework for understanding the stability and invariance of the networks used in geometric deep learning. They also have many interesting applications, such as drug discovery, solving combinatorial optimization problems, and predicting patient outcomes from single-cell data. In particular, motivated by these applications to single-cell data, I will also discuss recent work proposing a diffusion-maps-style algorithm, with quantitative convergence guarantees, for implementing the manifold scattering transform from finitely many samples of an unknown manifold.