
The virtual-residual preconditioning algorithm

Now we are prepared for Sergey Fomel's model-extended preconditioning. We begin with the usual definition of preconditioned variables $ \bold m = \bold S \bold p$ and the opposite of our usual sign convention for the residual.
\begin{eqnarray}
-\epsilon \ \bold r &=& \bold F \bold m - \bold d \\
-\epsilon \ \bold r &=& \bold F \bold S \bold p - \bold d
\end{eqnarray}
(39), (40)
which we arrange with a column vector of unknowns that includes not only $\bold p$ but also $\bold r$.
\begin{displaymath}
\bold 0
\quad\approx\quad \hat{\bold r}
\quad =\quad
\left[ \begin{array}{cc} \bold F \bold S & \epsilon \ \bold I \end{array} \right]
\left[ \begin{array}{c} \bold p \\ \bold r \end{array} \right]
\ - \ \bold d
\end{displaymath}
(41)
The interesting thing is that we can use our familiar conjugate-gradient programs with this system of equations. We need only be careful to distinguish the residual $\bold r$ in the bottom half of our solution vector from the virtual residual $\hat{\bold r}$ (which should vanish identically) of the iterative fitting problem. The fitting begins with the initializations:
\begin{displaymath}
\hat{\bold p}_0 \quad =\quad
\left[ \begin{array}{c} \bold p_0 \\ \bold 0 \end{array} \right]
\end{displaymath}
(42)
and
\begin{displaymath}
\hat{\bold r}_0 \quad =\quad
\bold F \bold S \bold p_0 - \bold d \quad =\quad
\bold F \bold m_0 - \bold d
\end{displaymath}
(43)
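Continuing the numpy sketch above, the initializations (42)-(43) amount to starting the data-space part of the extended model at zero and computing the starting virtual residual from $\bold p_0$ alone. The helper name vr_initialize is an assumption for illustration only.

\begin{verbatim}
def vr_initialize(F, S, p0, d):
    # Initializations (42)-(43): the data-space part of the model starts
    # at zero, so the starting virtual residual is F S p0 - d = F m0 - d.
    r0 = np.zeros(d.shape)
    phat0 = np.concatenate([p0, r0])   # stacked unknown [p0 ; 0]
    rhat0 = F @ (S @ p0) - d
    return phat0, rhat0
\end{verbatim}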
As the iteration begins we have gradients of the two parts of the model
\begin{eqnarray}
\bold g_m &=& \bold S' \bold F' \hat{\bold r} \\
\bold g_d &=& \epsilon \ \hat{\bold r}
\end{eqnarray}
(44), (45)
which imply a perturbation in the theoretically zero residual
\begin{displaymath}
\Delta\hat{\bold r} \quad =\quad
\bold F \bold S \bold g_m + \epsilon\ \bold g_d
\end{displaymath}
(46)
Then step sizes and steps are determined as usual for conjugate-direction fitting.
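The sketch below gathers equations (44)-(46) into one descent step. For brevity it uses a plain steepest-descent step size where the text uses conjugate directions, so it is only an illustration of the bookkeeping, not the library routine; the function name vr_descent_step is hypothetical.

\begin{verbatim}
def vr_descent_step(F, S, eps, p, r, rhat):
    # Gradients (44)-(45) for the two parts of the extended model.
    gm = S.T @ (F.T @ rhat)
    gd = eps * rhat
    # Perturbation (46) of the virtual residual.
    drhat = F @ (S @ gm) + eps * gd
    # Step size minimizing |rhat + alpha * drhat|^2 (steepest descent here;
    # the text determines steps by conjugate-direction fitting).
    alpha = -np.dot(rhat, drhat) / np.dot(drhat, drhat)
    p    = p    + alpha * gm
    r    = r    + alpha * gd
    rhat = rhat + alpha * drhat
    return p, r, rhat
\end{verbatim}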

The preconditioning module vr_solver takes three functions as its arguments. Functions oper and prec correspond to the linear operators $\bold F$ and $\bold S$. Function solv implements one step of an optimization descent. Its arguments are a logical parameter forget, which controls a conditional restarting of the optimization, the current effective model x0, the gradient vector g, the data residual vector rr, and the conjugate gradient vector gg. Subroutine solver_vr constructs the effective model vector x, which consists of the model-space part xm and the data-space part xd. Similarly, the effective gradient vector g is split into the model-space part gm and the data-space part gd.

For brevity I am omitting the code for the virtual-residual solver vrsolver, which is in the library; it follows the same pattern as prec_solver.
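As a stand-in for the omitted code, the following hypothetical loop shows the overall pattern using the sketches above, with explicit matrices in place of the oper, prec, and solv function arguments. It is not the SEP routine, and vr_solver_sketch is an invented name.

\begin{verbatim}
def vr_solver_sketch(F, S, d, p0, eps, niter):
    # Keep the model-space part p (xm) and data-space part r (xd) of the
    # extended model, carrying the virtual residual between iterations.
    p, r = p0.copy(), np.zeros(d.shape)
    rhat = F @ (S @ p) - d                  # initialization (43)
    for _ in range(niter):
        p, r, rhat = vr_descent_step(F, S, eps, p, r, rhat)
    return S @ p, r                         # the model m = S p, and r
\end{verbatim}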

