Regularized Online Optimization: Tracking Regret, Risk Bounds, and Applications to Dynamic Networks

Presenter
October 25, 2011
Abstract
Online optimization methods are useful in a variety of applications involving sequential observations of a dynamic environment. Such methods are often designed to minimize an accumulated loss, and their analyses are appealing because they apply in settings where observations cannot be assumed independent or identically distributed and accurate knowledge of the environmental dynamics is unavailable. However, such analyses may mask the roles of regularization and of adaptivity to environmental changes. This work explores regularized online optimization methods and presents several novel performance bounds. Tracking regret bounds relate the accumulated loss of such an algorithm to that of the best dynamic estimate that could have been chosen in a batch setting, and risk bounds quantify the roles of both the regularizer and the variability of the (unknown) dynamic environment. The efficacy of the method is demonstrated on an online Ising model selection task applied to U.S. Senate voting data.
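
As a rough illustration of the quantity involved (the notation here is an assumption of this sketch, not taken from the talk): writing \ell_t for the loss revealed at round t, \theta_t for the algorithm's estimate, r for the regularizer, and \hat{\theta}_1, ..., \hat{\theta}_T for any comparator sequence chosen with hindsight in a batch setting, a tracking regret of the regularized losses can be defined as

\[
  R_T\bigl(\hat{\theta}_{1:T}\bigr)
  \;=\;
  \sum_{t=1}^{T} \bigl[\,\ell_t(\theta_t) + r(\theta_t)\,\bigr]
  \;-\;
  \sum_{t=1}^{T} \bigl[\,\ell_t(\hat{\theta}_t) + r(\hat{\theta}_t)\,\bigr].
\]

Bounds of the kind described in the abstract typically control R_T in terms of the temporal variation of the comparator sequence, for example \sum_{t=2}^{T} \|\hat{\theta}_t - \hat{\theta}_{t-1}\|, so that slowly varying comparators incur only small regret.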