Michael Murray - Overfitting: benign, tempered and harmful - IPAM at UCLA

September 24, 2024
Abstract
Recorded 24 September 2024. Michael Murray of the University of Bath presents "Overfitting: benign, tempered and harmful" at IPAM's Analyzing High-dimensional Traces of Intelligent Behavior Workshop. Abstract: Conventional wisdom suggests that without explicit regularization an expressive model will fit noisy data at the expense, potentially quite catastrophically, of its test-time performance. Surprisingly, then, it has been experimentally observed that neural networks can be trained with little, if any, explicit regularization to near-zero loss on noisy training data and yet still generalize well. Informally, such models are said to exhibit benign or tempered overfitting if they fit noisy data with only a negligible or proportional decrease in test performance, respectively. These experiments suggest that a more nuanced understanding of overfitting is required: in particular, one can train the same network with the same learning algorithm on data with varying degrees of noise and observe, depending on this degree, catastrophic, tempered, benign, or no overfitting. This prompts the question: what properties of the data drive these differing outcomes? In this talk I will first motivate the study of these questions in the context of a very simple data model; second, I will present results which illustrate the role of the regularity, or signal strength, of the data, as well as the ratio of the number of data points to the data dimension, in driving transitions between these different overfitting outcomes. Learn more online at: https://www.ipam.ucla.edu/programs/workshops/workshop-i-analyzing-high-dimensional-traces-of-intelligent-behavior/?tab=overview
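The experiment the abstract describes, training an interpolating model on data with varying label noise and inspecting test error, can be sketched in a few lines. The snippet below is a minimal illustration, not the talk's actual setup: it uses a 1-nearest-neighbour classifier (which fits any training set exactly) on a synthetic two-dimensional problem, with labels flipped at a chosen noise rate. The function names, data model, and parameters are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_nn_predict(X_train, y_train, X_test):
    # 1-nearest-neighbour prediction: an interpolating model, since each
    # training point's nearest neighbour is itself (zero training error).
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d.argmin(axis=1)]

def experiment(noise_rate, n_train=1000, n_test=1000, dim=2):
    # Hypothetical data model for illustration: Gaussian inputs, clean
    # label given by the sign of the first coordinate.
    X = rng.standard_normal((n_train + n_test, dim))
    y = np.sign(X[:, 0])
    X_tr, y_tr = X[:n_train], y[:n_train].copy()
    X_te, y_te = X[n_train:], y[n_train:]

    # Inject label noise into the training set only.
    flip = rng.random(n_train) < noise_rate
    y_tr[flip] *= -1

    train_err = (one_nn_predict(X_tr, y_tr, X_tr) != y_tr).mean()
    test_err = (one_nn_predict(X_tr, y_tr, X_te) != y_te).mean()
    return train_err, test_err
```

Running `experiment` at several noise rates shows the qualitative pattern: training error stays at zero (the model always interpolates), while test error grows roughly in proportion to the noise rate rather than collapsing to chance, the kind of behaviour the abstract calls tempered overfitting.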