Codes and Expansions (CodEx) Seminar


Alejandro Parada-Mayorga (University of Colorado Denver):
Algebraic Neural Networks: Stability to Deformations

Convolutional architectures play a central role in countless machine learning scenarios, and the numerical evidence supporting their advantages is overwhelming. Theoretical insights have provided solid explanations of why such architectures work well. These analyses, apparently different in nature, have been carried out for signals defined on different domains and with different notions of convolution, yet with remarkable similarities in the final results, raising the question of whether there exists an explanation for this at a more structural level. In this talk we provide an affirmative answer to this question with a first-principles analysis introducing algebraic neural networks (AlgNNs), which rely on algebraic signal processing and the representation theory of algebras. In particular, we study the stability properties of algebraic neural networks, showing that stability results for traditional CNNs, graph neural networks (GNNs), group neural networks, graphon neural networks, or any formal convolutional architecture can be derived as particular cases of our results. This shows that stability is, at an algebraic level, a universal property of convolutional architectures, and it also explains the remarkable similarities we find when analyzing stability for each particular type of architecture.
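
For orientation, here is a minimal sketch of the algebraic signal processing setup underlying AlgNNs, written in standard notation from that literature; the symbols below are illustrative and not taken from the talk itself. An algebraic signal model is a triple $(\mathcal{A}, \mathcal{M}, \rho)$, where $\mathcal{A}$ is an associative algebra with unit, $\mathcal{M}$ is a vector space of signals, and $\rho : \mathcal{A} \to \mathrm{End}(\mathcal{M})$ is a homomorphism. A filter $a \in \mathcal{A}$ processes a signal $x \in \mathcal{M}$ as
\[
  x \;\mapsto\; \rho(a)\,x .
\]
Graph signal processing is recovered by taking $\mathcal{A}$ to be the algebra of polynomials $p(t) = \sum_k h_k t^k$ with $\rho(p) = p(\mathbf{S}) = \sum_k h_k \mathbf{S}^k$ for a graph shift operator $\mathbf{S}$, while classical discrete-time convolution corresponds to choosing $\mathbf{S}$ as the cyclic shift.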