Using the results above, the least-squares estimate of ${\bf m}$ in equation (97) is derived. Assuming that the data contain both signal and coherent noise, the fitting goal is
\begin{displaymath}
{\bf 0} \approx {\bf r_d} = {\bf Lm}-{\bf d}, \qquad (97)
\end{displaymath}
with ${\bf L} = \left({\bf L_s} \;\; {\bf L_n}\right)$ and
${\bf m} = \left( \begin{array}{c} {\bf m_s} \\ {\bf m_n} \end{array} \right)$.
The normal equations are given by
\begin{displaymath}
\left( \begin{array}{cc}
{\bf L_s'L_s} & {\bf L_s'L_n} \\
{\bf L_n'L_s} & {\bf L_n'L_n}
\end{array} \right)
\left( \begin{array}{c}
{\bf m_s} \\ {\bf m_n}
\end{array} \right) =
\left( \begin{array}{c}
{\bf L_s'd} \\ {\bf L_n'd}
\end{array} \right), \qquad (98)
\end{displaymath}
where ${\bf m_s}$ and ${\bf m_n}$ are the unknowns.
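For concreteness, here is a minimal NumPy sketch that assembles and solves the block system of equation (98). The matrix sizes and random stand-ins for ${\bf L_s}$, ${\bf L_n}$, and ${\bf d}$ are illustrative assumptions, not part of the original text:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy dense stand-ins for the signal and noise modeling operators.
nd, ns, nn = 50, 8, 6          # data, signal-model, noise-model sizes
Ls = rng.standard_normal((nd, ns))
Ln = rng.standard_normal((nd, nn))
d  = rng.standard_normal(nd)

# Block Hessian and right-hand side of equation (98).
H = np.block([[Ls.T @ Ls, Ls.T @ Ln],
              [Ln.T @ Ls, Ln.T @ Ln]])
rhs = np.concatenate([Ls.T @ d, Ln.T @ d])

m_hat = np.linalg.solve(H, rhs)
ms_hat, mn_hat = m_hat[:ns], m_hat[ns:]

# The estimate minimizes |L m - d|^2 with L = (Ls Ln), so the
# normal-equation residual L'(L m - d) should vanish.
L = np.hstack([Ls, Ln])
assert np.allclose(L.T @ (L @ m_hat - d), 0, atol=1e-10)
\end{verbatim}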
The least-squares estimate $\hat{{\bf m_s}}$ of ${\bf m_s}$ is obtained by solving the bottom row of equation (98) for ${\bf m_n}$ and substituting the result into the top row; the estimate $\hat{{\bf m_n}}$ of ${\bf m_n}$ follows symmetrically from the top row. We have, then,
\begin{eqnarray}
\hat{{\bf m_s}} &=& ({\bf L_s'L_s}-{\bf L_s'L_n}({\bf L_n'L_n})^{-1}{\bf L_n'L_s})^{-1}
({\bf L_s'd}-{\bf L_s'L_n}({\bf L_n'L_n})^{-1}{\bf L_n'd}), \qquad (99) \\
\hat{{\bf m_n}} &=& ({\bf L_n'L_n}-{\bf L_n'L_s}({\bf L_s'L_s})^{-1}{\bf L_s'L_n})^{-1}
({\bf L_n'd}-{\bf L_n'L_s}({\bf L_s'L_s})^{-1}{\bf L_s'd}), \qquad (100)
\end{eqnarray}
which can be simplified as follows:
\begin{eqnarray}
\hat{{\bf m_s}} &=& ({\bf L_s'}({\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}){\bf L_s})^{-1}
{\bf L_s'}({\bf I}-{\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}){\bf d}, \qquad (101) \\
\hat{{\bf m_n}} &=& ({\bf L_n'}({\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}){\bf L_n})^{-1}
{\bf L_n'}({\bf I}-{\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}){\bf d}. \qquad (102)
\end{eqnarray}
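As a numerical sanity check on equations (99)-(102), the following sketch (toy random matrices standing in for ${\bf L_s}$ and ${\bf L_n}$; not part of the original derivation) confirms that both the Schur-complement forms and the factored forms agree with an ordinary least-squares solve of the stacked system:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nd, ns, nn = 60, 7, 5
Ls = rng.standard_normal((nd, ns))   # stand-in signal modeling operator
Ln = rng.standard_normal((nd, nn))   # stand-in noise modeling operator
d  = rng.standard_normal(nd)

inv, I = np.linalg.inv, np.eye(nd)

# Equations (99)-(100): Schur-complement forms.
ms_99 = inv(Ls.T @ Ls - Ls.T @ Ln @ inv(Ln.T @ Ln) @ Ln.T @ Ls) @ \
        (Ls.T @ d - Ls.T @ Ln @ inv(Ln.T @ Ln) @ Ln.T @ d)
mn_100 = inv(Ln.T @ Ln - Ln.T @ Ls @ inv(Ls.T @ Ls) @ Ls.T @ Ln) @ \
         (Ln.T @ d - Ln.T @ Ls @ inv(Ls.T @ Ls) @ Ls.T @ d)

# Equations (101)-(102): factored forms with complementary projectors.
Pn = I - Ln @ inv(Ln.T @ Ln) @ Ln.T
Ps = I - Ls @ inv(Ls.T @ Ls) @ Ls.T
ms_101 = inv(Ls.T @ Pn @ Ls) @ Ls.T @ Pn @ d
mn_102 = inv(Ln.T @ Ps @ Ln) @ Ln.T @ Ps @ d

# Reference: least squares on the concatenated operator L = (Ls Ln).
m_ref, *_ = np.linalg.lstsq(np.hstack([Ls, Ln]), d, rcond=None)
assert np.allclose(ms_99, m_ref[:ns]) and np.allclose(ms_101, m_ref[:ns])
assert np.allclose(mn_100, m_ref[ns:]) and np.allclose(mn_102, m_ref[ns:])
\end{verbatim}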
The matrix ${\bf R_n} = {\bf L_n}({\bf L_n'L_n})^{-1}{\bf L_n'}$ is the coherent noise resolution matrix, whereas ${\bf R_s} = {\bf L_s}({\bf L_s'L_s})^{-1}{\bf L_s'}$ is the signal resolution matrix (Tarantola, 1987). Denoting ${\bf \overline{R_n}} = {\bf I}-{\bf R_n}$ and ${\bf \overline{R_s}} = {\bf I}-{\bf R_s}$ yields the following simplified expression for $\hat{{\bf m_s}}$ and $\hat{{\bf m_n}}$:
\begin{displaymath}
\left( \begin{array}{c}
\hat{{\bf m_s}} \\ \hat{{\bf m_n}}
\end{array} \right) =
\left( \begin{array}{c}
({\bf L_s'\overline{R_n}L_s})^{-1}{\bf L_s'\overline{R_n}} \\
({\bf L_n'\overline{R_s}L_n})^{-1}{\bf L_n'\overline{R_s}}
\end{array} \right) {\bf d}. \qquad (103)
\end{displaymath}
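To make the resolution-matrix terminology concrete, a small sketch (toy matrices again, an illustrative assumption) shows that ${\bf R_n}$ and ${\bf R_s}$ are orthogonal projectors onto the ranges of ${\bf L_n}$ and ${\bf L_s}$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
nd, ns, nn = 60, 7, 5
Ls = rng.standard_normal((nd, ns))
Ln = rng.standard_normal((nd, nn))

inv = np.linalg.inv
Rn = Ln @ inv(Ln.T @ Ln) @ Ln.T     # coherent-noise resolution matrix
Rs = Ls @ inv(Ls.T @ Ls) @ Ls.T     # signal resolution matrix

# Both are symmetric and idempotent, i.e., orthogonal projectors.
assert np.allclose(Rn, Rn.T) and np.allclose(Rn @ Rn, Rn)
assert np.allclose(Rs, Rs.T) and np.allclose(Rs @ Rs, Rs)

# Rn reproduces anything the noise operator can model...
x = Ln @ rng.standard_normal(nn)
assert np.allclose(Rn @ x, x)
# ...so its complement Rn_bar = I - Rn annihilates it.
assert np.allclose((np.eye(nd) - Rn) @ x, 0)
\end{verbatim}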
By property of the resolution operators, ${\bf \overline{R_n}}$ and ${\bf \overline{R_s}}$ perform noise and signal filtering on the data ${\bf d}={\bf s}+{\bf n}$, i.e.,
\begin{displaymath}
\begin{array}{ccc}
{\bf \overline{R_n}d} &=& {\bf \overline{R_n}s}, \\
{\bf \overline{R_s}d} &=& {\bf \overline{R_s}n},
\end{array} \qquad (104)
\end{displaymath}
if the noise and signal are well predicted by the noise and signal modeling operators.
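The filtering property can be verified numerically by building data whose signal and noise parts lie exactly in the ranges of the modeling operators (a hypothetical construction for illustration):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
nd, ns, nn = 60, 7, 5
Ls = rng.standard_normal((nd, ns))
Ln = rng.standard_normal((nd, nn))

# Signal and noise exactly predicted by their modeling operators.
s = Ls @ rng.standard_normal(ns)
n = Ln @ rng.standard_normal(nn)
d = s + n

inv, I = np.linalg.inv, np.eye(nd)
Rn_bar = I - Ln @ inv(Ln.T @ Ln) @ Ln.T
Rs_bar = I - Ls @ inv(Ls.T @ Ls) @ Ls.T

# Rn_bar annihilates n and Rs_bar annihilates s, giving equation (104).
assert np.allclose(Rn_bar @ d, Rn_bar @ s)
assert np.allclose(Rs_bar @ d, Rs_bar @ n)
\end{verbatim}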
Nemeth (1996) demonstrates
that the inverse of the Hessian in equation (98)
is well conditioned if the noise and signal operators are orthogonal,
meaning that they predict distinct parts of the model space without
overlapping. If overlapping occurs, a model regularization term can
improve the signal/noise separation.
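A minimal illustration of the conditioning remark (hypothetical toy operators, not reproduced from Nemeth (1996)): when the columns of ${\bf L_s}$ and ${\bf L_n}$ are mutually orthogonal, the off-diagonal blocks of the Hessian vanish and the system is well conditioned; as the ranges of the two operators overlap, the condition number blows up:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
nd = 60
Q, _ = np.linalg.qr(rng.standard_normal((nd, 12)))  # orthonormal columns

def hessian_cond(Ls, Ln):
    H = np.block([[Ls.T @ Ls, Ls.T @ Ln],
                  [Ln.T @ Ls, Ln.T @ Ln]])
    return np.linalg.cond(H)

# Orthogonal case: disjoint sets of orthonormal columns.
Ls, Ln = Q[:, :7], Q[:, 7:12]
print(hessian_cond(Ls, Ln))        # ~1: well conditioned

# Overlapping case: Ln nearly shares directions with Ls.
eps = 1e-3
Ln_bad = Ls[:, :5] + eps * Q[:, 7:12]
print(hessian_cond(Ls, Ln_bad))    # huge: nearly singular Hessian
\end{verbatim}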