CBMM Special Seminar: Beyond Empirical Risk Minimization: the lessons of deep learning

Monday, October 28, 2019 at 4:00pm to 5:00pm

Building 46, Singleton Auditorium (46-3002)
43 VASSAR ST, Cambridge, MA 02139

Speaker: Prof. Mikhail Belkin, The Ohio State University

Abstract: "A model with zero training error is overfit to the training data and will typically generalize poorly," goes statistical textbook wisdom. Yet in modern practice, over-parametrized deep networks with near-perfect fit on training data still show excellent test performance. This apparent contradiction points to troubling cracks in the conceptual foundations of machine learning. While classical analyses of Empirical Risk Minimization rely on balancing the complexity of predictors against training error, modern models are best described by interpolation. In that paradigm, a predictor is chosen by minimizing (explicitly or implicitly) a norm corresponding to a certain inductive bias over a space of functions that fit the training data exactly. I will discuss the nature of the challenge to our understanding of machine learning and point toward the first analyses that account for the empirically observed phenomena. Furthermore, I will show how classical and modern models can be unified within a single "double descent" risk curve, which subsumes the classical U-shaped bias-variance trade-off.
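The interpolation paradigm described in the abstract can be sketched in its simplest linear form: among the many predictors that fit the training data exactly, pick the one of minimum norm. A minimal NumPy illustration on random data (an assumption for illustration only, not an example from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parametrized linear regression: more features (d) than samples (n).
n, d = 20, 200
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Among the infinitely many weight vectors that fit the data exactly,
# the pseudo-inverse selects the one with minimum Euclidean norm --
# an implicit inductive bias toward "simple" interpolating solutions.
w_min_norm = np.linalg.pinv(X) @ y

# The predictor interpolates: zero training error up to numerical precision.
train_error = np.linalg.norm(X @ w_min_norm - y)
print(train_error < 1e-8)  # True

# Any other interpolating solution differs by a null-space direction
# and therefore has strictly larger norm.
v = rng.standard_normal(d)
v -= np.linalg.pinv(X) @ (X @ v)  # project out the row space of X
w_other = w_min_norm + v
print(np.allclose(X @ w_other, X @ w_min_norm))            # True: still interpolates
print(np.linalg.norm(w_other) > np.linalg.norm(w_min_norm))  # True: larger norm
```

The point of the sketch is that "fit the data exactly" does not pin down a single model; the norm being minimized encodes the inductive bias that determines which interpolant is chosen.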

Finally, as an example of a particularly interesting inductive bias, I will show evidence that deep over-parametrized autoencoder networks, trained with SGD, implement a form of associative memory with training examples as attractor states.

Events By Audience

Public, MIT Community, Students, Alumni, Faculty, Staff

Events By School

School of Engineering (SoE), School of Science


neuroscience, McGovernMIT, neurosciencemit, artificial intelligence, machine learning, Computational Neuroscience



Center for Brains, Minds and Machines (CBMM)

