Codes and Expansions (CodEx) Seminar


Nadav Dym (Technion):
Efficient Invariant Embeddings for 3D Point Sets

In many machine learning tasks, the goal is to learn an unknown function that has some known group symmetries. Equivariant machine learning algorithms exploit this by devising architectures (= function spaces) which have these symmetries by construction. Examples include convolutional neural networks, which respect translation symmetries; neural networks for graphs or sets, which respect their permutation symmetries; and neural networks for 3D point sets, which additionally respect Euclidean symmetries.
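
For concreteness, here is a minimal sketch of symmetry by construction, in the style of a sum-pooling (DeepSets-type) network for sets: because the per-point features are summed, the output cannot depend on the order of the points. The layer sizes and names are illustrative assumptions, not the speaker's construction.

    # Sketch: a permutation-invariant network via sum pooling.
    # All weights and sizes here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((3, 16))   # per-point feature map phi (linear + ReLU)
    W2 = rng.standard_normal((16, 8))   # readout rho

    def invariant_net(X):
        # X: (n, 3) array, one 3D point per row
        H = np.maximum(X @ W1, 0.0)     # apply phi to each point independently
        pooled = H.sum(axis=0)          # sum pooling: invariant to row permutations
        return pooled @ W2              # rho on the pooled feature

    X = rng.standard_normal((5, 3))
    perm = rng.permutation(5)
    # Reordering the points leaves the output unchanged, by construction.
    assert np.allclose(invariant_net(X), invariant_net(X[perm]))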

A common theoretical requirement of symmetry-based architectures is that they be able to separate any two objects that are not related by a group symmetry (this property can be used to prove stronger universality results, which we will briefly describe). We will review results showing that, under very general assumptions, such a symmetry-preserving separating mapping f exists, and that the embedding dimension m can be taken to be roughly twice the dimension of the data. We will then propose a general methodology for efficiently computing such an f using random invariants. This methodology generalizes the algebraic-geometry argument used in the well-known proof of phase retrieval injectivity. Time permitting, we will show how this result can be used to achieve efficient separation for point sets with respect to permutation and/or rotation actions.
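
As a rough illustration of separation via random invariants, the sketch below uses sort-based invariants for the permutation action: each point set is projected onto random directions and the projections are sorted, which is invariant to reordering the points and, for generic directions, distinguishes non-equivalent point sets. The count m = 2*n*d + 1 is only an illustrative reading of the "roughly twice the data dimension" bound, and the papers obtain a genuinely low-dimensional embedding by further composing such invariants with a random linear map; neither detail is reproduced exactly here.

    # Sketch: random sort-based separating invariants for point sets
    # under the permutation action. Names and the choice of m are
    # illustrative assumptions, not the papers' exact construction.
    import numpy as np

    def random_sort_embedding(X, directions):
        # X: (n, d) point set; directions: (m, d) random directions.
        # Sorting each column of projections removes the point ordering.
        return np.sort(X @ directions.T, axis=0).ravel()

    rng = np.random.default_rng(1)
    n, d = 5, 3
    m = 2 * n * d + 1                  # illustrative "twice the dimension" count
    W = rng.standard_normal((m, d))

    X = rng.standard_normal((n, d))
    Y = rng.standard_normal((n, d))
    perm = rng.permutation(n)
    # Invariance: permuting the points does not change the embedding.
    assert np.allclose(random_sort_embedding(X, W), random_sort_embedding(X[perm], W))
    # Separation: generic distinct point sets map to distinct embeddings.
    assert not np.allclose(random_sort_embedding(X, W), random_sort_embedding(Y, W))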

Based on joint work with Steven J. Gortler, Snir Hordan, and Tal Amir, and on the papers:

  • Low Dimensional Invariant Embeddings for Universal Geometric Learning. Nadav Dym and Steven J. Gortler
  • Complete Neural Networks for Euclidean Graphs. Snir Hordan, Tal Amir, Steven J. Gortler and Nadav Dym