A mathematical statement of the extremal regularization problem
is (equivalent to)
\begin{displaymath}
\min_x \,(Rx)^TRx \quad {\rm subject\ to}\quad (Ax-d)^T(Ax-d) \le \sigma^2 d^Td
\end{displaymath}
Here A is the modeling operator, x is the model vector,
d is the data, and R is the regularizing operator. The
noise level $\sigma$ is relative, as that is usually the most
useful way to pose noise estimates. Thus solution of this problem
demands quadratically constrained quadratic minimization.
The solution minimizes whatever quality R represents,
subject to fitting the data to a relative error level $\sigma$.
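As a purely illustrative sketch of this quadratically constrained quadratic problem, the formulation can be posed directly to a convex modeling tool such as cvxpy; the toy A, R, d, and $\sigma$ below are assumptions of the sketch, not quantities from this paper.

```python
import numpy as np
import cvxpy as cp

# Toy instance; A, R, d, and sigma are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))                 # modeling operator
R = np.diff(np.eye(n), axis=0)                  # first-difference roughening operator
d = A @ rng.standard_normal(n) + 0.05 * rng.standard_normal(m)  # data with a little noise
sigma = 0.1                                     # assumed relative noise level

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(R @ x))                        # (Rx)^T Rx
constraints = [cp.sum_squares(A @ x - d) <= sigma**2 * float(d @ d)]  # relative misfit bound
problem = cp.Problem(objective, constraints)
problem.solve()

lam = constraints[0].dual_value   # dual variable of the data-fit constraint
```

The dual variable reported for the data-fit constraint plays the role of the multiplier $\lambda$ in the optimality conditions below.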
The first order necessary conditions of optimality state
that the solution satisfies
\begin{displaymath}
\lambda A^T(Ax-d) + R^TRx = 0
\end{displaymath}
\begin{displaymath}
\lambda \left[(Ax-d)^T(Ax-d) - \sigma^2 d^Td\right] = 0
\end{displaymath}
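For reference, these are the stationarity and complementary slackness conditions for the Lagrangian
\begin{displaymath}
L(x,\lambda) = (Rx)^TRx + \lambda\left[(Ax-d)^T(Ax-d) - \sigma^2 d^Td\right], \qquad \lambda \ge 0;
\end{displaymath}
setting $\nabla_x L = 2\left[\lambda A^T(Ax-d) + R^TRx\right]$ to zero and dividing by two gives the first equation, and complementary slackness gives the second.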
The first condition states that the gradients of the constraint and
objective functions are parallel. The second implies that either
the constraint is satisfied as an equality, i.e. the solution
lies on the boundary of the set of constraint-satisfying models,
or else the Lagrange multiplier $\lambda$ vanishes, which means
that the most regular solution actually has a smaller
residual than assumed, i.e. $\sigma$ is larger than the
actual noise level.
The first condition is the familiar normal equation of the
unconstrained problem
\begin{displaymath}
\min_x \,\lambda(Ax-d)^T(Ax-d) + (Rx)^TRx
\end{displaymath}
or
\begin{displaymath}
\min_x \,(Ax-d)^T(Ax-d) + \epsilon^2(Rx)^TRx
\end{displaymath}
where $\epsilon^2$ is the ``notoriously
elusive'' relative weight between model space (really constraint
space) and data space.
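Comparing the two objectives (for $\lambda > 0$, divide the first by $\lambda$) shows that the two weights are related by
\begin{displaymath}
\epsilon^2 = \frac{1}{\lambda} .
\end{displaymath}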
The point of this paper, and the basis of the Moré-Hebden
algorithm, is that the first order conditions make
the multiplier $\lambda$ (equivalently the weight $\epsilon$) a function
of the assumed noise level $\sigma$. Whenever $\sigma$
can be estimated directly, this relationship
provides a method of estimating $\lambda$, and hence $\epsilon$.
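As a minimal numerical sketch of this relationship (not the Moré-Hebden iteration itself, and with toy A, R, d, and $\sigma$ that are assumptions of the sketch): for each trial $\lambda$ solve the normal equation, evaluate the data misfit, and adjust $\lambda$ with a bracketing root-finder until the misfit equals $\sigma^2 d^Td$.

```python
import numpy as np
from scipy.optimize import brentq

# Toy instance; A, R, d, and sigma are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.standard_normal((m, n))                # modeling operator
R = np.diff(np.eye(n), axis=0)                 # first-difference roughening operator
d = A @ rng.standard_normal(n) + 0.05 * rng.standard_normal(m)
sigma = 0.1                                    # assumed relative noise level

def x_of_lambda(lam):
    # Solve the normal equation (lam A^T A + R^T R) x = lam A^T d for a trial multiplier.
    return np.linalg.solve(lam * A.T @ A + R.T @ R, lam * A.T @ d)

def misfit_gap(lam):
    # Data misfit of x(lam) minus the target sigma^2 d^T d; its root is the
    # multiplier at which the constraint holds with equality.
    r = A @ x_of_lambda(lam) - d
    return r @ r - sigma**2 * (d @ d)

# The misfit decreases as lambda grows, so a sign-changing bracket pins down
# the lambda determined by the assumed noise level sigma.
lam_star = brentq(misfit_gap, 1e-8, 1e8)
x_star = x_of_lambda(lam_star)
```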