Boris Hanin - Neural Network Scaling Limits - IPAM at UCLA
October 14, 2024
Abstract
Boris Hanin of Princeton University presents "Neural Network Scaling Limits" at IPAM's Theory and Practice of Deep Learning Workshop.
Abstract: Neural networks are often studied analytically through scaling limits: regimes in which structural network parameters such as depth, width, and number of training datapoints are taken to infinity, yielding simplified models of learning. I will survey several such approaches with the goal of illustrating the rich and still not fully understood space of possible behaviors when some or all of the network's structural parameters are large.
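As a hedged illustration of the kind of scaling limit the abstract alludes to (not taken from the talk itself): at random initialization, the output of a one-hidden-layer network at a fixed input converges in distribution to a Gaussian as the width is taken to infinity, provided the output weights are scaled by 1/sqrt(width). The sketch below checks this empirically: the output variance stays roughly constant across widths, while the excess kurtosis (a measure of non-Gaussianity) shrinks as width grows. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_outputs(width, x, n_samples=4000):
    """Outputs of n_samples independently initialized one-hidden-layer
    ReLU networks, evaluated at the single input x (illustrative sketch)."""
    d = x.shape[0]
    # First-layer weights scaled by 1/sqrt(d), standard Gaussian biases.
    W1 = rng.normal(size=(n_samples, width, d)) / np.sqrt(d)
    b1 = rng.normal(size=(n_samples, width))
    h = np.maximum(W1 @ x + b1, 0.0)            # ReLU hidden features
    # Output weights scaled by 1/sqrt(width): the "NTK/NNGP" normalization.
    w2 = rng.normal(size=(n_samples, width)) / np.sqrt(width)
    return (w2 * h).sum(axis=1)                 # one scalar output per network

def excess_kurtosis(t):
    """Fourth standardized moment minus 3; zero for a Gaussian."""
    t = t - t.mean()
    return (t**4).mean() / (t**2).mean() ** 2 - 3.0

x = np.ones(3)
for n in (4, 64, 512):
    out = random_net_outputs(n, x)
    print(f"width={n:4d}  var={out.var():.3f}  "
          f"excess kurtosis={excess_kurtosis(out):+.3f}")
```

At narrow widths the output distribution is visibly heavy-tailed (large positive excess kurtosis); as the width grows, the kurtosis decays toward zero while the variance stays near a fixed constant, consistent with a Gaussian large-width limit.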
Learn more online at: https://www.ipam.ucla.edu/programs/workshops/workshop-ii-theory-and-practice-of-deep-learning/?tab=overview