Analysis of Gradient Descent on Wide Two-Layer ReLU Neural Networks
Presenter
May 8, 2020
Keywords:
- neural networks
- Wasserstein gradient flows
- generalization
- nonnegative measures
MSC:
- 90C26
- 62M45
Abstract
In this talk, we propose an analysis of gradient descent on wide two-layer ReLU neural networks that leads to sharp characterizations of the learned predictor and strong generalization guarantees. The main idea is to study the dynamics in the limit where the width of the hidden layer goes to infinity, which is a Wasserstein gradient flow. While this gradient flow evolves on a non-convex landscape, we show that its limit is a global minimizer when properly initialized. We also study the "implicit bias" of this algorithm when the objective is the unregularized logistic loss. Finally, we discuss what these results tell us about generalization performance. This is based on joint work with Francis Bach.
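To make the setting concrete, here is a minimal NumPy sketch (not the speakers' code) of full-batch gradient descent on a two-layer ReLU network with the unregularized logistic loss, written in the mean-field 1/m scaling whose infinite-width limit is the Wasserstein gradient flow mentioned above. The toy data, the width m, and the learning rate are illustrative choices, not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1} (illustrative).
n, d = 200, 2
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

# Two-layer ReLU network in the mean-field scaling:
#   f(x) = (1/m) * sum_j c_j * relu(w_j . x)
m = 1000                          # hidden width; the analysis concerns m -> infinity
W = rng.standard_normal((m, d))   # hidden-layer weights
c = rng.standard_normal(m)        # output weights

lr = 0.5
for step in range(2000):
    pre = X @ W.T                         # (n, m) pre-activations
    act = np.maximum(pre, 0.0)            # ReLU
    f = act @ c / m                       # network outputs, shape (n,)

    # Unregularized logistic loss: (1/n) * sum_i log(1 + exp(-y_i * f_i)).
    g = -y / (n * (1.0 + np.exp(y * f)))  # dLoss/df_i, shape (n,)

    grad_c = act.T @ g / m                            # gradient w.r.t. output weights
    delta = (g[:, None] * c[None, :] / m) * (pre > 0) # chain rule through the ReLU
    grad_W = delta.T @ X                              # gradient w.r.t. hidden weights

    # Step sizes scaled by m so each neuron ("particle") moves at an
    # m-independent rate, matching the mean-field time parametrization.
    c -= lr * m * grad_c
    W -= lr * m * grad_W
```

On separable data such as this, the unregularized logistic loss has no finite minimizer, so the iterates keep growing; the "implicit bias" question studied in the talk concerns which predictor this trajectory selects in the limit.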