Asymptotic convergence speed of windowed Anderson acceleration: an overview of results and open problems

Presenter
July 24, 2023
Abstract
Anderson acceleration is widely used to speed up the convergence of fixed-point iterative methods in scientific computing and optimization. Almost all implementations use a sliding-window approach with window size m, denoted AA(m). For many applications, AA(m) dramatically improves the convergence speed, both when iteration counts are small and asymptotically for large iteration counts. Nevertheless, there are as yet no known results that bound or quantify the improvement in asymptotic convergence speed provided by windowed AA(m). In this talk I will give an overview of what is known about the asymptotic convergence speed of windowed AA(m). Numerical results show that the root-linear asymptotic convergence factor of AA(m) often depends strongly on the initial guess, and that the worst-case root-linear convergence factor is often substantially smaller than the convergence factor of the underlying fixed-point method that AA(m) accelerates, corresponding to substantially faster asymptotic convergence. Analysis of AA(m) written as a fixed-point method, together with the continuity and differentiability properties of its fixed-point iteration function, provides some insight, but general results quantifying the asymptotic convergence acceleration remain elusive. In the linear case, windowed AA(m) is a Krylov method with some interesting properties that lead to useful per-iteration convergence bounds, but it appears difficult to translate these into sharp asymptotic bounds. A recent result for the simplest non-trivial linear case, AA(1), provides for the first time a full characterization of the root-linear convergence factor of AA(1) as a function of the initial guess, which allows us to compute the average gain in convergence factor that AA(1) provides over a distribution of initial conditions.
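
For background, the root-linear asymptotic convergence factor discussed above is the standard root-convergence quantity: for iterates x_k converging to a fixed point x^*, a common definition is

\rho(\{x_k\}) = \limsup_{k \to \infty} \| x_k - x^* \|^{1/k},

so that \rho < 1 corresponds to root-linear convergence, and a smaller \rho means faster asymptotic convergence.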
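
To make the setting concrete, below is a minimal sketch of windowed AA(m) for a fixed-point map g, in the common least-squares formulation (following, e.g., Walker and Ni); the function name anderson and its parameters are illustrative assumptions, not a reference implementation from the talk.

import numpy as np

def anderson(g, x0, m=1, max_iter=100, tol=1e-12):
    # Windowed Anderson acceleration AA(m) for x_{k+1} = g(x_k),
    # using the residual f_k = g(x_k) - x_k and a least-squares solve
    # over the last min(m, k) difference vectors (a sketch, not tuned code).
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    f = g(x) - x
    dX, dF = [], []          # histories of x_{k+1} - x_k and f_{k+1} - f_k
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        if k == 0:
            x_new = x + f    # plain fixed-point step to start
        else:
            Xk = np.column_stack(dX[-m:])   # window of the last min(m, k) columns
            Fk = np.column_stack(dF[-m:])
            gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)
            x_new = x + f - (Xk + Fk) @ gamma
        f_new = g(x_new) - x_new
        dX.append(x_new - x)
        dF.append(f_new - f)
        x, f = x_new, f_new
    return x

As a usage example, anderson(np.cos, 1.0, m=1) accelerates the iteration x_{k+1} = cos(x_k) toward its fixed point, and one can estimate the root-linear convergence factor empirically as ||x_k - x^*||^(1/k) for large k, sampling several initial guesses x0 to observe the dependence on the initial guess described in the abstract.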