
Structured Matrix Learning from Matrix-Vector Products

Presenter
February 4, 2026
Abstract
I will discuss the problem of approximating an arbitrary target matrix A with a structured matrix, given access only to a limited number of adaptively chosen matrix-vector products with A. This general problem is ubiquitous in computational science, both in algorithmic applications and, more recently, in Scientific Machine Learning (SciML), where it abstracts the central task of operator learning. For common structures, like low-rank matrices, the number of matrix-vector products required to obtain a near-optimal approximation to A is well understood. Indeed, optimal results are obtained by the randomized singular value decomposition (RandSVD) and related methods. However, for a number of other important structures, like sparse or hierarchically structured matrices, results have been more elusive. I will present recent work that makes progress on matvec-efficient algorithms for these structured classes and others. I will also discuss my group's work toward developing a general theory for the complexity of structured matrix learning. This talk presents joint work with Noah Amsel, Pratyush Avi, Tyler Chen, Prathamesh Dharangutte, Chinmay Hegde, Feyza Duman Keles, Diana Halikias, Cameron Musco, and David Persson.
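To make the matrix-vector-product access model concrete, here is a minimal sketch of the RandSVD approach the abstract mentions for the low-rank case. It is not the speaker's code; it is a standard textbook-style implementation assuming we can only query products with A and its transpose (the functions `matvec`, `matvec_T`, the rank `k`, and the oversampling parameter `p` are illustrative choices):

```python
import numpy as np

def randsvd(matvec, matvec_T, n, k, p=5):
    """Rank-k approximation of A using only products with A and A^T.

    matvec(x) returns A @ x; matvec_T(y) returns A.T @ y.
    n is the number of columns of A; p is a small oversampling parameter.
    """
    # Sketch the range of A with k + p random matvec queries.
    Omega = np.random.default_rng(1).standard_normal((n, k + p))
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(k + p)])
    Q, _ = np.linalg.qr(Y)  # orthonormal basis for the sketched range of A

    # Form B = Q^T A via k + p matvec queries with A^T.
    B = np.column_stack([matvec_T(Q[:, j]) for j in range(Q.shape[1])]).T

    # A small dense SVD of B yields the approximate factors of A.
    U_hat, S, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat[:, :k], S[:k], Vt[:k, :]

# Usage: recover an exactly rank-3 matrix from matvec access alone.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U, S, Vt = randsvd(lambda x: A @ x, lambda y: A.T @ y, n=40, k=3)
err = np.linalg.norm(A - U @ np.diag(S) @ Vt)
```

For an exactly rank-3 target, this recovers A to near machine precision with only 2(k + p) queries, which illustrates why the low-rank case is considered well understood; the open questions in the talk concern analogous guarantees for sparse and hierarchical structures.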