Sampling through optimization of divergences
Presenter
May 9, 2024
Abstract
Sampling from a target measure when only partial information is available (e.g. an unnormalized density, as in Bayesian inference, or true samples, as in generative modeling)
is a fundamental problem in computational statistics and machine learning. The sampling problem can be formulated as the optimization of a well-chosen discrepancy (e.g. a divergence or distance) over the space of probability distributions.
In this talk, we will discuss several properties of sampling algorithms for particular choices of discrepancies (well-known ones, as well as novel proxies), regarding both their optimization and quantization aspects.
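To make the optimization viewpoint concrete, here is a minimal, illustrative sketch (not part of the talk) of the unadjusted Langevin algorithm, which can be read as a time-discretization of the Wasserstein gradient flow of the KL divergence KL(mu || pi); the Gaussian target, step size, and particle count below are arbitrary choices for the example.

```python
import numpy as np

# Illustrative sketch: unadjusted Langevin algorithm (ULA), viewed as a
# discretized Wasserstein gradient flow of KL(mu || pi) for a target
# pi(x) proportional to exp(-U(x)). Here the target is a standard 2D
# Gaussian, so U(x) = ||x||^2 / 2 and grad U(x) = x (assumed for the example).

def grad_U(x):
    return x  # gradient of the potential of a standard Gaussian target

rng = np.random.default_rng(0)
step = 0.05          # step size of the discretized flow
n_particles = 1000   # particles approximating the current distribution mu_t
n_iters = 500

x = rng.normal(size=(n_particles, 2)) * 3.0  # crude initial distribution mu_0

for _ in range(n_iters):
    noise = rng.normal(size=x.shape)
    # One ULA step: forward Euler on -grad U (pulls mu toward pi)
    # plus Gaussian noise (accounting for the entropy term of the KL).
    x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise

print("empirical mean:", x.mean(axis=0))      # close to 0
print("empirical variances:", x.var(axis=0))  # close to 1, up to discretization bias
```

The same template covers other discrepancies discussed in such settings (e.g. kernel-based or Wasserstein-type ones) by swapping the update rule for the corresponding (approximate) gradient of the chosen objective.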