
Omer Reingold - Algorithmic Fairness, Loss Minimization and Outcome Indistinguishability

Presenter: Omer Reingold, Stanford University
February 26, 2024
Abstract
Recorded 26 February 2024. Omer Reingold of Stanford University presents "Learn Now, Optimize Later: Algorithmic Fairness, Loss Minimization and Outcome Indistinguishability" at IPAM's EnCORE Workshop on Computational vs Statistical Gaps in Learning and Optimization.

Training a predictor to minimize a loss function fixed in advance is the dominant paradigm in machine learning. However, loss minimization by itself may fail to satisfy properties that arise naturally in the context of algorithmic fairness. To remedy this, multi-group fairness notions such as multicalibration have been proposed, which require the predictor to share certain statistical properties of the ground truth even when conditioned on a rich family of subgroups. These notions can be understood from the perspective of computational indistinguishability through the notion of outcome indistinguishability, in which a predictor is viewed as a model of events that cannot be refuted from empirical evidence within some computational bounds. While motivated by fairness, this alternative paradigm of training an indistinguishable predictor is finding a growing number of appealing applications, in which the same predictor can later be used to optimize any one of a large set of loss functions, under a family of capacity and fairness constraints and instance distributions.

Learn more online at: https://www.ipam.ucla.edu/programs/workshops/encore-workshop-on-computational-vs-statistical-gaps-in-learning-and-optimization/?tab=overview
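To make the multicalibration requirement described in the abstract concrete, here is a minimal Python sketch of an audit, assuming binary outcomes and a finite collection of subgroups given as boolean masks. The function name, the binning scheme, and the tolerance alpha are illustrative assumptions, not details from the talk: it simply checks that, within every (subgroup, prediction-level) cell, the average realized outcome tracks the average prediction.

```python
import numpy as np

def multicalibration_violations(p, y, groups, n_bins=10, alpha=0.05):
    """Hypothetical multicalibration audit (illustrative, not from the talk).

    p      : array of predicted probabilities in [0, 1]
    y      : array of binary outcomes (0/1)
    groups : dict mapping subgroup name -> boolean mask over the samples
    Returns (group, bin, gap) triples where the average outcome differs
    from the average prediction by more than alpha within that cell.
    """
    # Discretize predictions into n_bins levels.
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    violations = []
    for name, mask in groups.items():
        for b in range(n_bins):
            cell = mask & (bins == b)       # samples of this group at this level
            if cell.sum() == 0:
                continue
            gap = abs(y[cell].mean() - p[cell].mean())
            if gap > alpha:
                violations.append((name, b, gap))
    return violations

# Toy usage with synthetic data: outcomes drawn consistently with p,
# so a well-sampled audit should report few or no violations.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < p).astype(float)
groups = {"A": rng.uniform(size=1000) < 0.5}
groups["B"] = ~groups["A"]
print(multicalibration_violations(p, y, groups))
```

Under the outcome-indistinguishability view sketched in the abstract, an audit of this kind plays the role of a bounded distinguisher: a predictor passing it for a rich family of subgroups cannot be refuted by those tests, and can then be post-processed to optimize a loss chosen later.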