Generalized Newton-type methods for energy formulations in image processing
Presenter
March 24, 2009
Keywords:
- Energy minimization
MSC:
- 74G65
Abstract
Many problems in image processing are addressed via the minimization
of a cost functional. The most commonly used optimization technique
is gradient descent, chosen for its simplicity and for its
applicability in settings where other techniques, e.g., those coming
from discrete optimization, cannot be applied. Yet gradient descent
suffers from slow convergence and often converges only to a local
minimum that depends strongly on the initialization and on the
condition number of the functional's Hessian. Newton-type methods, on
the other hand, are known for their faster, quadratic convergence. In
its classical form, the Newton method relies on an L2-type norm to
define the descent direction. In this work, we generalize and
reformulate this important optimization method by introducing
Newton-type methods based on more general norms. Such norms are
introduced both in the computation of the descent direction (the
Newton step) and in the corresponding stabilizing trust region. This
generalization opens up new possibilities in the extraction of the
Newton step, including benefits such as mathematical stability and
the incorporation of smoothness constraints. We first derive the
modified Newton step in the calculus-of-variations framework needed
for image processing. Then, we demonstrate the method on two common
objective functionals: variational image deblurring and geometric
active contours for image segmentation. We show that, in addition to
the fast convergence, norms adapted to the problem at hand yield
different and superior results.
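As a rough illustrative sketch (not taken from the abstract itself;
the operator B and radius Delta_k below are generic placeholders), the
generalization described above can be written as a trust-region
Newton step for a functional E(u) in which both the quadratic model
and the trust-region constraint are measured in a norm other than L2:

% Classical Newton step: minimize the second-order model of E at u_k
% using the L2 pairing. Generalized form (sketch): B is an assumed
% symmetric positive-definite operator inducing the norm ||d||_B.
\begin{aligned}
d_k \;=\; \operatorname*{arg\,min}_{d}\;
  & \langle E'(u_k),\, d\rangle
    + \tfrac12 \langle E''(u_k)\, d,\, d\rangle \\
\text{subject to}\quad
  & \|d\|_B \le \Delta_k,
  \qquad \|d\|_B^2 := \langle B\, d,\, d\rangle,
\end{aligned}
\qquad u_{k+1} = u_k + d_k .

Choosing B as the identity recovers the classical L2-based
trust-region Newton step; other choices of B correspond to the more
general norms referred to in the abstract, e.g., norms encoding
smoothness of the step.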