Codes and Expansions (CodEx) Seminar


Paulina Hoyos (University of Texas)
Manifold learning in the presence of symmetries

Graph-based manifold learning algorithms assume that data lie on or near a d-dimensional manifold M embedded in some high-dimensional Euclidean space. Using a kernel function to measure pairwise affinities between data points, such algorithms construct a graph Laplacian matrix from the data, whose eigenvalues and eigenvectors are then used for tasks such as dimensionality reduction, function representation and approximation, and denoising.

In this work, we consider data sets whose points satisfy an additional symmetry-invariance assumption: given a compact Lie group G, for any g in G and any data point x, the point g·x resulting from the action of g on x is a valid data point (not necessarily in the data set, but one that can be added to it). A possible approach for exploiting this symmetry invariance is to construct a G-invariant graph Laplacian (G-GL) by analytically incorporating the pairwise affinities between all pairs of points generated by the action of G on the data set. The G-GL converges to the Fokker-Planck operator on the data manifold M, with a significantly improved convergence rate compared to the standard graph Laplacian.
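For intuition, here is a minimal numerical sketch of the construction described above. It is not the speaker's implementation: it builds a random-walk-normalized Gaussian-kernel graph Laplacian, and it approximates the G-invariant affinity by summing the kernel over a finite sample of group elements (here, rotations of the circle group acting on 2D data) rather than integrating over G analytically. All function names and the choice of kernel bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_affinity(X, Y, eps):
    # Pairwise Gaussian kernel k_eps(x, y) = exp(-||x - y||^2 / eps)
    # between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / eps)

def g_invariant_laplacian(X, group_actions, eps):
    # Approximate G-invariant affinity by summing the kernel over a
    # finite sample of group elements:
    #   W_G[i, j] ~ sum_{g in sample of G} k_eps(x_i, g.x_j)
    W = sum(gaussian_affinity(X, g(X), eps) for g in group_actions)
    D = W.sum(axis=1)
    # Random-walk normalization: L = I - D^{-1} W.
    return np.eye(len(X)) - W / D[:, None]

# Toy example: noisy data near the unit circle, with G sampled as
# 8 discrete rotations (an assumption made purely for illustration).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((200, 2))
rotations = [
    lambda Z, a=2 * np.pi * k / 8: Z @ np.array(
        [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    )
    for k in range(8)
]
L = g_invariant_laplacian(X, rotations, eps=0.1)
```

The smallest-magnitude eigenvectors of L can then be used for dimensionality reduction exactly as with the standard graph Laplacian; the difference is that the affinities above are (approximately) invariant under the group action.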