
The Effectiveness of Nonconvex Tensor Completion: Fast Convergence and Uncertainty Quantification

Presenter
Yuxin Chen - Princeton University
May 19, 2021
Abstract
Recent years have witnessed a flurry of activity in solving statistical estimation and learning problems via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple first-order methods have been remarkably successful in practice. Their theoretical footing, however, was largely lacking until recently. This talk explores the effectiveness of nonconvex optimization for noisy tensor completion: the problem of reconstructing a low-CP-rank tensor from highly incomplete and randomly corrupted observations of its entries. While randomly initialized gradient descent suffers from high volatility in the sample-starved regime, we propose a two-stage nonconvex algorithm that is guaranteed to succeed, achieving linear convergence, minimal sample complexity, and minimax statistical accuracy all at once. In addition, we characterize the distribution of this nonconvex estimator down to fine scales, which in turn allows one to construct entrywise confidence intervals for both the unseen tensor entries and the unknown tensor factors. Our findings highlight the important role of statistical models in enabling efficient and guaranteed nonconvex statistical learning.
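To give a concrete feel for the optimization problem the talk studies, below is a minimal NumPy sketch of gradient descent on the nonconvex least-squares loss for symmetric rank-r CP tensor completion from noisy, partially observed entries. Everything here is an illustrative assumption rather than the speaker's implementation: the problem sizes (n, r), the sampling rate p, the step size eta, and in particular the initialization, where a perturbation of the truth stands in for the careful initialization stage of the talk's two-stage algorithm.

```python
# Minimal sketch (assumed setup, not the talk's exact algorithm): gradient
# descent on the nonconvex least-squares loss for symmetric rank-r CP tensor
# completion from partially observed, noisy entries.
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 30, 3, 0.3                       # dimension, CP rank, sampling rate

# Ground truth: symmetric rank-r tensor T = sum_s u_s (x) u_s (x) u_s.
U_true = rng.standard_normal((n, r))
T = np.einsum('is,js,ks->ijk', U_true, U_true, U_true)

# Observe a random subset of entries, corrupted by additive Gaussian noise.
mask = rng.random((n, n, n)) < p
T_obs = np.where(mask, T + 0.01 * rng.standard_normal(T.shape), 0.0)
m = mask.sum()

def loss_and_grad(U):
    """Squared loss over observed entries and its exact gradient in U."""
    fit = np.einsum('is,js,ks->ijk', U, U, U)
    resid = mask * (fit - T_obs)
    loss = 0.5 * np.sum(resid ** 2) / m
    # Gradient of the symmetric CP fit: one term per tensor mode.
    grad = (np.einsum('ajk,js,ks->as', resid, U, U)
            + np.einsum('iak,is,ks->as', resid, U, U)
            + np.einsum('ija,is,js->as', resid, U, U)) / m
    return loss, grad

# The talk's method pairs gradient descent with a careful initialization
# stage; as a stand-in here, start from a perturbation of the truth.
U = U_true + 0.2 * rng.standard_normal((n, r))
eta = 0.5                                  # step size hand-tuned for this toy
for it in range(3000):
    loss, grad = loss_and_grad(U)
    U -= eta * grad
    if it % 500 == 0:
        print(f"iter {it:4d}   loss {loss:.3e}")
```

As the abstract notes, running the same iteration from a random starting point can behave erratically when few entries are observed, which is precisely why the initialization stage matters.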