Next: Proposed solutions to attenuate
Up: Noise attenuation by filtering
Previous: Introduction
In this section I review some basic notions on inversion.
The least-squares criterion comes directly from the hypothesis
that the PDF of each observed datum and each model parameter is Gaussian.
These assumptions lead to the general discrete inverse problem
(Tarantola, 1987). Finding ${\bf m}$ is then equivalent to minimizing
the quadratic function (or cost/objective function)
\begin{displaymath}
f({\bf m}) = ({\bf Lm - d})'{\bf C_d^{-1}}({\bf Lm - d})+
({\bf m-m_{prior}})'{\bf C_m^{-1}}({\bf m-m_{prior}}),
\qquad (14)
\end{displaymath}
where $()'$ represents the (Hermitian) transpose,
${\bf m}$ is the vector of model parameters
(the unknown of the inverse problem),
${\bf L}$ a seismic (linear) operator,
${\bf d}$ the seismic data,
${\bf C_d}$ and ${\bf C_m}$ the data and model
covariance operators, and
${\bf m_{prior}}$ a model given a priori.
The covariance matrix ${\bf C_d}$
combines experimental errors and modeling
uncertainties. Modeling uncertainties describe the difference between what
the operator can predict and what is contained in the data.
Thus the covariance matrix ${\bf C_d}$
is often called the noise covariance matrix (Sacchi and Ulrych, 1995).
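For completeness (this step is standard and not spelled out in the text): setting the gradient of the quadratic function in equation (14) to zero gives the explicit least-squares estimate

\begin{displaymath}
\hat{\bf m} = {\bf m_{prior}} +
\left({\bf L}'{\bf C_d^{-1}}{\bf L} + {\bf C_m^{-1}}\right)^{-1}
{\bf L}'{\bf C_d^{-1}}\left({\bf d} - {\bf L\,m_{prior}}\right),
\end{displaymath}

consistent with the general formulation of Tarantola (1987).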
It is often assumed that (1) the variances of the model and of the noise
are uniform, (2) the covariance matrices are diagonal, i.e., the model
and data components are uncorrelated, and (3) no prior model is known
in advance. Given these approximations, the objective function becomes
\begin{displaymath}
f({\bf m}) = ({\bf Lm - d})'({\bf Lm - d})+
\epsilon^2{\bf m}'{\bf m},
\qquad (15)
\end{displaymath}
where $\epsilon$ is a function of the noise and model variances.
With no prior model, the model-perturbation term
reduces to a damping of the cost
function. In practice, this damping is used to compensate for numerical
instabilities when the parameters ${\bf m}$ are poorly constrained.
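The damped problem in equation (15) admits a closed-form solution through the normal equations $({\bf L'L} + \epsilon^2{\bf I}){\bf m} = {\bf L'd}$. A minimal numerical sketch of this damping effect (the operator `L` and data `d` below are illustrative, not from the text):

```python
import numpy as np

def damped_least_squares(L, d, eps):
    """Minimize ||L m - d||^2 + eps^2 ||m||^2 by solving the
    normal equations (L'L + eps^2 I) m = L'd."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + eps**2 * np.eye(n), L.T @ d)

# Illustrative operator whose second parameter is poorly constrained
# (its column is very small).
L = np.array([[1.0, 0.0],
              [0.0, 1e-3],
              [1.0, 1e-3]])
d = np.array([1.0, 0.0, 1.0])

m_exact  = damped_least_squares(L, d, eps=0.0)  # fits d exactly here
m_damped = damped_least_squares(L, d, eps=0.5)  # smaller-norm, stabilized model
```

Increasing $\epsilon$ trades data fit for a reduction of the model norm, which is what stabilizes the estimate when columns of ${\bf L}$ are nearly null.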
The prior assumptions leading to equation (15) are often too
strong when dealing with seismic data, because the variance
of the noise/model may not be uniform and the components of the
noise/model may not be independent. For simplicity, I rewrite
the objective function in equation (15) in terms of
``fitting goals'' for ${\bf m}$ as follows:
\begin{displaymath}
\begin{array}{rclcl}
{\bf 0} &\approx& {\bf r_d} &=& {\bf Lm - d}, \\
{\bf 0} &\approx& \epsilon\,{\bf r_m} &=& \epsilon\,{\bf m},
\end{array}
\qquad (16)
\end{displaymath}
where ${\bf r_d}$ is the vector of data residuals and
${\bf r_m}$ is the vector of model residuals.
With this notation, it is straightforward to rewrite equation
(15) as
\begin{displaymath}
f({\bf m}) = \Vert{\bf r_d}\Vert^2+\epsilon^2\Vert{\bf r_m}\Vert^2.
\qquad (17)
\end{displaymath}
The first equality in equation (16) stresses the need for ${\bf Lm}$
to fit the input data ${\bf d}$. The second equality is often called
the regularization or ``model styling'' term. This term can be useful
for imposing a priori knowledge on the model parameters
(Clapp et al., 2004; Fomel and Claerbout, 2003).
When the assumptions leading to equation (15) are
respected, the estimated model $\hat{\bf m}$ that minimizes
equation (17) is the maximum likelihood model (Tarantola, 1987).
As stressed before, an important assumption made in equation
(15) is that the data and model errors are independent and
identically distributed (IID). In the situation
where coherent noise contaminates the data, these assumptions
are violated and the covariance operators cannot be approximated with
diagonal operators anymore.
In this chapter, omitting the model residual vector ${\bf r_m}$
from the analysis,
I show that a filtering (or weighting) operator can be
introduced in equation (16), applied to the data residual
${\bf r_d} = {\bf Lm - d}$. This operator can take the form of
a prediction-error filter or a projection filter.
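The effect of such a weighting on the data residual can be sketched numerically. Minimizing $\Vert{\bf A}({\bf Lm - d})\Vert^2$ for a weighting operator ${\bf A}$ leads to the weighted normal equations $({\bf L'A'AL}){\bf m} = {\bf L'A'A\,d}$. In the sketch below, a hypothetical diagonal weight stands in for a prediction-error or projection filter, and the operator and data are invented for the example:

```python
import numpy as np

def weighted_least_squares(L, d, A):
    """Minimize ||A (L m - d)||^2 via the weighted normal equations
    (L'A'A L) m = L'A'A d."""
    W = A.T @ A
    return np.linalg.solve(L.T @ W @ L, L.T @ W @ d)

# Illustrative example: the last sample is corrupted by coherent noise,
# so the weighting operator down-weights its residual.
L = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
d = np.array([1.0, 2.0, 3.0, 10.0])   # last sample is noisy (noise-free value: -1)
A = np.diag([1.0, 1.0, 1.0, 0.01])    # down-weight the contaminated residual

m_w = weighted_least_squares(L, d, A)  # close to the noise-free model [1, 2]
```

With the identity weight (${\bf A} = {\bf I}$) the estimate is pulled strongly toward the contaminated sample; the down-weighted solve effectively removes its influence, which is the role the filtering operator plays here.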
Stanford Exploration Project
5/5/2005