Acceleration and Adaptive Selection in Asynchronous Iterative Solvers

Presenter
May 5, 2026
Abstract
Asynchronous methods allow workers to proceed with stale data, trading convergence rate for resilience to stragglers and variable delays. A natural question is how to recover the convergence quality lost to staleness. We investigate two complementary approaches: subspace acceleration at the coordinator level, and adaptive coordinate selection at the worker level. For subspace acceleration, the Walker-Ni equivalence between Anderson acceleration and GMRES provides a principled motivation: accelerating an asynchronous stationary iteration is equivalent to applying a Krylov method with a non-stationary preconditioner, connecting our setting to the convergence theory of flexible GMRES. Using an experimental framework with controlled delay injection on HPC infrastructure, we find that Anderson acceleration's effectiveness under asynchrony depends on the coupling density of the iteration. For adaptive selection, we investigate convergence of randomized coordinate descent with residual-weighted sampling, including a Boltzmann-weighted family that interpolates between uniform and greedy selection while preserving convergence guarantees. These two directions address the same underlying question: how to make optimal use of computational effort under inconsistent information.
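To make the first ingredient concrete, here is a minimal sketch of Anderson acceleration for a generic fixed-point iteration x = g(x), in its standard difference (Type II) formulation with a sliding window of m past steps. This is an illustrative implementation of the textbook method, not the talk's coordinator code; the function name and parameters are our own.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=100):
    """Anderson acceleration for the fixed-point iteration x = g(x).

    Keeps the last m residual/iterate differences and solves a small
    least-squares problem each step (difference formulation)."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                      # residual of the fixed-point map
    dF, dG = [], []                 # histories of residual / image differences
    f_prev, gx_prev = f, gx
    x = gx                          # first step: plain fixed-point update
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x
        dF.append(f - f_prev)
        dG.append(gx - gx_prev)
        if len(dF) > m:             # sliding window of depth m
            dF.pop(0); dG.pop(0)
        # min_gamma || f - dF @ gamma ||_2, then combine past images
        gamma, *_ = np.linalg.lstsq(np.column_stack(dF), f, rcond=None)
        x = gx - np.column_stack(dG) @ gamma
        f_prev, gx_prev = f, gx
    return x
```

The Walker-Ni result says that on a linear fixed-point map g(x) = Mx + b with full window, the iterates above are equivalent (in exact arithmetic) to GMRES on (I - M)x = b, which is what licenses reading asynchronous Anderson through the lens of flexible GMRES.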
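For the second ingredient, the interpolation between uniform and greedy selection can be sketched as coordinate descent on a quadratic, with coordinates sampled from a Boltzmann distribution over residual magnitudes. The inverse-temperature parameter beta and the quadratic test problem are our illustrative choices, not details from the talk: beta = 0 gives uniform sampling, and beta → ∞ approaches greedy (Gauss-Southwell) selection.

```python
import numpy as np

def boltzmann_cd(A, b, beta=1.0, n_iter=2000, seed=0):
    """Coordinate descent on f(x) = 0.5 x^T A x - b^T x (A SPD),
    sampling coordinate i with probability proportional to exp(beta * |r_i|),
    where r = b - A x is the residual."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    diag = np.diag(A)
    for _ in range(n_iter):
        w = beta * np.abs(r)
        w -= w.max()                # shift before exp for numerical stability
        p = np.exp(w)
        p /= p.sum()
        i = rng.choice(n, p=p)
        delta = r[i] / diag[i]      # exact minimization along coordinate i
        x[i] += delta
        r -= delta * A[:, i]        # rank-one residual update
    return x
```

Because every coordinate keeps strictly positive sampling probability for any finite beta, the usual convergence guarantees for randomized coordinate descent carry over, which is the sense in which the family interpolates "while preserving convergence guarantees."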