Regularized Nonlinear Acceleration
Tuesday, December 05, 2017 at 4:00pm to 5:00pm
Building 32, 141
32 Vassar St, Cambridge, MA 02139
Speaker: Alexandre d’Aspremont
Affiliation: École Normale Supérieure
We describe a convergence acceleration technique for generic optimization problems. Our scheme computes estimates of the optimum from a nonlinear average of the iterates produced by any optimization method. The weights in this average are computed via a simple linear system, whose solution can be updated online. This acceleration scheme runs in parallel to the base algorithm, providing improved estimates of the solution on the fly, while the original optimization method is running. Numerical experiments are detailed on classical classification problems.
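The extrapolation step described above can be sketched in a few lines: given iterates from any base method, form the residuals, solve a small regularized linear system for the averaging weights, and return the weighted combination. This is a minimal illustrative sketch (not the speaker's code); the regularization parameter `lam`, the Gram-matrix normalization, and the gradient-descent example are assumptions chosen for illustration.

```python
import numpy as np

def rna_extrapolate(xs, lam=1e-10):
    """Nonlinear acceleration sketch: a weighted average of the iterates
    whose weights solve a small regularized linear system built from
    the successive residuals."""
    X = np.array(xs)                    # (k+1, d) stack of iterates
    R = np.diff(X, axis=0)              # residuals r_i = x_{i+1} - x_i, shape (k, d)
    RR = R @ R.T                        # k x k Gram matrix of residuals
    RR /= np.linalg.norm(RR, 2)         # rescale so regularization is scale-free
    k = RR.shape[0]
    c = np.linalg.solve(RR + lam * np.eye(k), np.ones(k))
    c /= c.sum()                        # weights sum to one
    return c @ X[:-1]                   # nonlinear average of the iterates

# Example base method: gradient descent on the quadratic f(x) = x^T A x / 2,
# whose optimum is x* = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T + np.eye(5)                 # a positive-definite matrix
x = rng.standard_normal(5)
step = 1.0 / np.linalg.norm(A, 2)
iters = [x.copy()]
for _ in range(5):
    x = x - step * (A @ x)              # one gradient-descent step
    iters.append(x.copy())

x_acc = rna_extrapolate(iters)
# Compare distances to the optimum x* = 0 for the last iterate
# versus the accelerated estimate.
print(np.linalg.norm(iters[-1]), np.linalg.norm(x_acc))
```

Because the weights come from a single small linear system, the extrapolation can be recomputed cheaply as new iterates arrive, which is what lets the scheme run online alongside the base algorithm.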
After dual PhDs from École Polytechnique and Stanford University in optimization and finance, followed by a postdoc at U.C. Berkeley, Alexandre d'Aspremont joined the faculty at Princeton University as an assistant and later associate professor, with joint appointments in the ORFE department and the Bendheim Center for Finance. He returned to Europe in 2011 thanks to a grant from the European Research Council and is now a research director at CNRS, attached to École Normale Supérieure in Paris. His research focuses on convex optimization and applications to machine learning, statistics, and finance.
The LIDS Seminar Series features distinguished speakers who provide an overview of a research area, as well as exciting recent progress in that area. Intended for a broad audience, seminar topics span the areas of communications, computation, control, learning, networks, probability and statistics, optimization, and signal processing.
This series is sponsored by Draper Laboratory.