Codes and Expansions (CodEx) Seminar


Elizabeth Newman (Tufts University)
Boost Like a (Var)Pro: Trust-Region Gradient Boosting via Variable Projection

Training machine learning models is computationally challenging; one has to solve a high-dimensional, nonconvex optimization problem repeatedly to calibrate hyperparameters appropriately. Our goal is to reduce the training burden through strategic model design paired with structure-exploiting optimization. To this end, we introduce VPBoost (Variable Projection Boosting), a gradient boosting algorithm for separable smooth approximators, i.e., models consisting of a smooth nonlinear featurizer followed by a final linear mapping. VPBoost fuses variable projection, a training paradigm for separable models that enforces optimality of the linear weights, with a second-order weak learning strategy. The combination of second-order boosting, separable models, and variable projection gives rise to a natural interpretation of VPBoost as a functional trust-region method. We leverage trust-region theory to prove that VPBoost converges under mild geometric conditions. Through numerical experiments on synthetic data, image recognition, and scientific machine learning, we demonstrate that VPBoost outperforms gradient-descent-based boosting and attains competitive performance relative to an industry-standard decision tree boosting algorithm.
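To illustrate the variable projection idea underlying the talk, here is a minimal NumPy sketch (not the speaker's VPBoost code; the featurizer and names are illustrative). For a separable model y ≈ Phi(theta) w, fixing the nonlinear parameters theta makes the optimal linear weights w the solution of a linear least-squares problem, so the weights can be "projected out" in closed form:

```python
import numpy as np

# Illustrative sketch of variable projection for a separable model
# y ≈ Phi(theta) @ w: a nonlinear featurizer Phi followed by a final
# linear mapping w. All names here are hypothetical, not from VPBoost.

def phi(x, theta):
    # toy smooth nonlinear featurizer: two exponential basis functions
    return np.column_stack([np.exp(-theta[0] * x), np.exp(-theta[1] * x)])

def projected_residual(theta, x, y):
    # Variable projection: for fixed theta, the optimal linear weights
    # solve a linear least-squares problem in closed form, so the outer
    # optimization only needs to search over the nonlinear parameters.
    Phi = phi(x, theta)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return y - Phi @ w, w

# synthetic data generated from the model with known parameters
x = np.linspace(0.0, 1.0, 50)
true_theta = np.array([1.0, 3.0])
true_w = np.array([2.0, -1.0])
y = phi(x, true_theta) @ true_w

# at the true nonlinear parameters, the projected residual vanishes
# and the optimal linear weights are recovered exactly
r, w = projected_residual(true_theta, x, y)
print(np.linalg.norm(r) < 1e-8)   # True
print(np.allclose(w, true_w))     # True
```

In practice, `projected_residual` would be handed to an outer optimizer over `theta` alone (e.g., a Gauss-Newton or trust-region method), which is what makes variable projection attractive: the linear weights never appear as free optimization variables.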