Sparsity-driven Feature-enhanced Imaging

October 17, 2005
  • Image analysis
  • 62H35
We present some of our recent work on coherent image reconstruction. The primary application driving this work has been synthetic aperture radar (SAR), although we have also extended our approach to other modalities, such as ultrasound imaging. One motivation for our work has been the increased interest in using reconstructed images in automated decision-making tasks. The success of such tasks (e.g., target recognition in the case of radar) depends on how well the computed images exhibit certain features of the underlying scene. Traditional coherent image formation techniques have no explicit means to enhance features (e.g., scatterer locations, object boundaries) that may be useful for automatic interpretation.

Another motivation has been the emergence of a number of applications in which the scene is observed through a sparse aperture. Examples include wide-angle imaging with unmanned air vehicles (UAVs), foliage-penetration radar, bistatic imaging, and passive radar imaging. When traditional image formation techniques are applied to these sparse-aperture imaging problems, they often produce high sidelobes and other artifacts that make the image difficult to interpret.

To address these challenges, we have developed a mathematical foundation and associated algorithms for model-based, feature-enhanced imaging. Our framework is based on a regularized reconstruction of the scattering field, which combines an explicit mathematical model of the data-collection process with non-quadratic functionals representing prior information about the nature of the features of interest. In particular, the prior information we exploit is that the underlying signals exhibit some form of sparsity. We solve the challenging optimization problems posed in our framework with computationally efficient numerical algorithms that we have developed. The resulting images offer improvements over conventional images in terms of both visual and automatic interpretation of the underlying scenes.
We also discuss a number of open research avenues inspired by this work.