Stefanie Jegelka - Two aspects of learning algorithms: generalization under shifts & loss functions
March 2, 2023
Abstract
Recorded 02 March 2023. Stefanie Jegelka of the Massachusetts Institute of Technology presents "Two aspects of learning algorithms: generalization under shifts and loss functions" at IPAM's Artificial Intelligence and Discrete Optimization Workshop.
Abstract: Graph Neural Networks (GNNs) have become a popular tool for learning algorithmic tasks, in particular those related to combinatorial optimization. In this talk, we will consider the “algorithmic reasoning” task of learning a full discrete algorithm. First, we will focus on the stability of GNNs to data perturbations: what is an appropriate metric for measuring shift? Under what conditions will a GNN generalize to larger graphs? Second, we will consider loss functions for learning with discrete objects, beyond GNNs. In particular, neural networks work best in continuous, high-dimensional spaces. Can we extend discrete loss functions accordingly?
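For readers unfamiliar with GNNs, the following is a minimal, self-contained sketch of one message-passing step, the basic operation underlying GNNs. It is illustrative only: the specific architectures, aggregation functions, and tasks studied in the talk are not specified here, and this toy example uses simple mean aggregation with no learned weights.

```python
def message_passing_step(adjacency, features):
    """One round of mean-aggregation message passing (illustrative toy).

    adjacency: dict mapping each node to a list of its neighbors.
    features:  dict mapping each node to a feature vector (list of floats).
    Returns updated features: each node averages its own vector with
    the mean of its neighbors' vectors. A real GNN layer would instead
    apply learned weight matrices and a nonlinearity at this step.
    """
    updated = {}
    for node, feat in features.items():
        neighbors = adjacency[node]
        if neighbors:
            dim = len(feat)
            mean = [sum(features[n][d] for n in neighbors) / len(neighbors)
                    for d in range(dim)]
        else:
            mean = feat  # isolated node: no incoming messages
        updated[node] = [(f + m) / 2 for f, m in zip(feat, mean)]
    return updated

# A 3-node path graph: 0 - 1 - 2, with scalar node features.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [1.0], 1: [0.0], 2: [1.0]}
print(message_passing_step(adj, feats))  # → {0: [0.5], 1: [0.5], 2: [0.5]}
```

Stacking several such steps lets information propagate across the graph, which is why questions about generalization to larger graphs (more propagation steps, longer paths) are central to the talk.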
This talk is based on joint work with Ching-Yao Chuang, Keyulu Xu, Joshua Robinson, Nikos Karalias, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi and Andreas Loukas.
Learn more online at: http://www.ipam.ucla.edu/programs/workshops/artificial-intelligence-and-discrete-optimization/