Codes and Expansions (CodEx) Seminar


Shira Faigenbaum-Golovin (Duke University):
Can We Approximate It? Manifolds, Refinable Functions, and the Approximation Power of Neural Networks

A lot has been said about the power of approximation in low dimensions. In this talk, we will broaden the canvas by discussing two different aspects: manifold approximation in high dimensions, and an expansion of what is known about the approximation capabilities of neural networks. First, I will present a method for denoising and reconstructing a low-dimensional manifold in a high-dimensional space. Given a noisy point cloud situated near a low-dimensional manifold, the proposed solution distributes points near the unknown manifold in a noise-free and quasi-uniform manner by leveraging a generalization of the robust L1-median to higher dimensions. The theoretical properties and the effectiveness of this methodology allow us to tackle various approximation challenges, including function approximation and the estimation of missing information within the data.
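
As a rough illustration of the kind of robust averaging this approach builds on, the sketch below computes the classical L1-median (geometric median) of each point's neighborhood via Weiszfeld-type iterations and projects the noisy point onto it. This is a simplified, hypothetical stand-in for the higher-dimensional generalization discussed in the talk; the function names and parameters (l1_median, denoise_point_cloud, n_iter, radius) are illustrative only.

    import numpy as np

    def l1_median(points, n_iter=100, eps=1e-8):
        """Weiszfeld-type iteration for the L1-median (geometric median) of a point set.

        points : (n, d) array of neighbors in R^d.
        Returns the point minimizing the sum of Euclidean distances to the neighbors.
        """
        q = points.mean(axis=0)                 # initialize at the centroid
        for _ in range(n_iter):
            dist = np.linalg.norm(points - q, axis=1)
            dist = np.maximum(dist, eps)        # guard against division by zero at data points
            w = 1.0 / dist                      # inverse-distance weights (robust to outliers)
            q_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(q_new - q) < eps:
                break
            q = q_new
        return q

    def denoise_point_cloud(X, radius=0.5):
        """Toy sketch: project each noisy point onto the L1-median of its neighborhood."""
        Y = np.empty_like(X)
        for i, x in enumerate(X):
            nbrs = X[np.linalg.norm(X - x, axis=1) < radius]
            Y[i] = l1_median(nbrs)
        return Y

    # Toy usage: noisy samples near the unit circle (a 1-D manifold in R^2).
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 500)
    X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(500, 2))
    Y = denoise_point_cloud(X)

Unlike a least-squares (centroid) average, the L1-median is insensitive to a small fraction of outliers, which is what makes it a natural building block for noise-free, quasi-uniform resampling near an unknown manifold.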

In the second part of my talk, we will delve into the theoretical aspects of neural networks through the lens of approximation theory. We will see that ReLU-based neural networks can approximate certain classes of functions, including refinable functions, with exponential accuracy. Refinable functions are the solutions of refinement equations and are the building blocks of many constructions, including subdivision schemes used in computer graphics, wavelets, B-splines, and several fractals. We will point out the sub-optimality of the classical definition of refinability and propose a different type of refinement that involves not only translation and rescaling but also mirroring. The new construction allows the well-established B-splines, wavelets, and fractals to be replaced with new functions that are easily learned by neural networks.
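
For concreteness, here is a minimal worked example of a refinement equation (a standard textbook case, not taken from the talk): the linear B-spline (hat function) \(\varphi\) supported on \([0,2]\) satisfies the two-scale relation

    \varphi(x) \;=\; \tfrac{1}{2}\,\varphi(2x) \;+\; \varphi(2x-1) \;+\; \tfrac{1}{2}\,\varphi(2x-2),

and it also admits an exact representation by a shallow ReLU network,

    \varphi(x) \;=\; \mathrm{ReLU}(x) \;-\; 2\,\mathrm{ReLU}(x-1) \;+\; \mathrm{ReLU}(x-2),

which hints at why refinable functions such as B-splines are natural targets for ReLU-based approximation.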