
A short review of inverse problems

In this section I review some basic notions of inversion. The least-squares criterion follows directly from the hypothesis that the probability density functions of the observable data and of the model parameters are Gaussian. These assumptions lead to the general discrete inverse problem (Tarantola, 1987). Finding ${\bf m}$ is then equivalent to minimizing the quadratic objective (or cost) function
\begin{displaymath}
f({\bf m}) = ({\bf Lm - d})'\,{\bf C_d^{-1}}\,({\bf Lm - d}) +
({\bf m - m_{prior}})'\,{\bf C_m^{-1}}\,({\bf m - m_{prior}}),
\end{displaymath} (14)
where $()'$ denotes the (Hermitian) transpose, ${\bf m}$ is a mapping of the data (the unknown of the inverse problem), ${\bf L}$ is a linear seismic operator, ${\bf d}$ is the seismic data, ${\bf C_d}$ and ${\bf C_m}$ are the data and model covariance operators, and ${\bf m_{prior}}$ is a model given a priori.
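The cost in equation (14) is simple to evaluate numerically. The NumPy sketch below computes it for a small synthetic example; the operator ${\bf L}$, data ${\bf d}$, covariances, and prior are toy placeholders for illustration, not quantities from this chapter.

import numpy as np

# Toy setup standing in for L, d, C_d, C_m, and m_prior of equation (14).
rng = np.random.default_rng(0)
n_data, n_model = 6, 4
L = rng.standard_normal((n_data, n_model))           # toy linear operator
m_true = rng.standard_normal(n_model)
d = L @ m_true + 0.1 * rng.standard_normal(n_data)   # synthetic noisy data
C_d = 0.01 * np.eye(n_data)                          # data (noise) covariance
C_m = np.eye(n_model)                                # model covariance
m_prior = np.zeros(n_model)                          # prior model

def objective(m):
    """f(m) = (Lm-d)' C_d^{-1} (Lm-d) + (m-m_prior)' C_m^{-1} (m-m_prior)."""
    r_d = L @ m - d
    r_m = m - m_prior
    return r_d @ np.linalg.solve(C_d, r_d) + r_m @ np.linalg.solve(C_m, r_m)

print(objective(m_true))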

The covariance matrix ${\bf C_d}$ combines experimental errors and modeling uncertainties. Modeling uncertainties describe the difference between what the operator can predict and what is contained in the data. Thus the covariance matrix ${\bf C_d}$ is often called the noise covariance matrix (Sacchi and Ulrych, 1995). It is often assumed that (1) the variances of the model and of the noise are uniform, (2) the covariance matrices are diagonal, i.e., the model and data components are uncorrelated, and (3) no prior model is known in advance. Given these approximations, the objective function becomes
\begin{displaymath}
f({\bf m}) = ({\bf Lm - d})'({\bf Lm - d}) + \epsilon^2\,{\bf m}'{\bf m},
\end{displaymath} (15)
where $\epsilon=\sigma_d^2/\sigma_m^2$ is a function of the noise and model variances $\sigma_d$ and $\sigma_m$. The model term ${\bf m}'{\bf m}$ reduces to a damping of the cost function. In practice, this damping compensates for numerical instabilities when the parameters ${\bf m}$ are poorly constrained.
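Minimizing equation (15) amounts to solving the damped normal equations $({\bf L'L}+\epsilon^2{\bf I})\,{\bf m} = {\bf L'd}$. The sketch below solves this small dense system directly; ${\bf L}$, ${\bf d}$, and the value of $\epsilon$ are assumed toy quantities, not taken from this chapter.

import numpy as np

# Toy operator, data, and damping weight.
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 4))
d = rng.standard_normal(6)
eps = 0.1

# Setting the gradient of equation (15) to zero gives (L'L + eps^2 I) m = L'd.
m_hat = np.linalg.solve(L.T @ L + eps**2 * np.eye(L.shape[1]), L.T @ d)
print(m_hat)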

The prior assumptions leading to equation (15) are often too strong when dealing with seismic data, because the noise and model variances may not be uniform and the noise and model components may not be independent. For simplicity, I rewrite the objective function in equation (15) in terms of ``fitting goals'' for ${\bf m}$ as follows:
\begin{displaymath}
\begin{array}{rclcl}
{\bf 0} &\approx& {\bf r_d} &=& {\bf Lm - d}, \\
{\bf 0} &\approx& \epsilon\,{\bf r_m} &=& \epsilon\,{\bf m},
\end{array}
\end{displaymath} (16)
where ${\bf r_d}$ is the vector of data residuals and ${\bf r_m}$ is the vector of model residuals. With this notation, it is straightforward to rewrite equation (15) as
\begin{displaymath}
f({\bf m}) = \Vert{\bf r_d}\Vert^2 + \epsilon^2\Vert{\bf r_m}\Vert^2.
\end{displaymath} (17)
The first relation in equation (16) stresses the need for ${\bf Lm}$ to fit the input data ${\bf d}$. The second relation is often called the regularization or ``model styling'' term. This term can be useful for imposing a priori knowledge on the model parameters (Clapp et al., 2004; Fomel and Claerbout, 2003). When the prior assumptions are respected, the estimated model ${\bf \hat{m}}$ that minimizes this objective function is the maximum likelihood model (Tarantola, 1987).
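In practice, the two fitting goals in equation (16) can be satisfied jointly by stacking them into a single least-squares system, which is equivalent to minimizing equation (17). The sketch below does so with a dense toy operator and an assumed damping value; a realistic seismic problem would instead use an iterative solver such as conjugate gradients.

import numpy as np

# Toy operator, data, and damping weight.
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 4))
d = rng.standard_normal(6)
eps = 0.1

# Stack the fitting goals of equation (16): [L; eps*I] m ~ [d; 0].
A = np.vstack([L, eps * np.eye(L.shape[1])])
b = np.concatenate([d, np.zeros(L.shape[1])])

# The least-squares solution of the stacked system minimizes equation (17).
m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(m_hat)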

As stressed before, an important assumption made in equation (17) is that the data and model errors ${\bf r_d}$ and ${\bf r_m}$ are independent and identically distributed (IID). When coherent noise contaminates the data, this assumption is violated and the covariance operators can no longer be approximated by diagonal operators.

In this chapter, omitting the model residual vector ${\bf r_m}$ from the analysis, I show that a filtering (or weighting) operator ${\bf W}$ can be introduced in equation (17) such that ${\bf W'W \approx C_d^{-1}}$. This operator can take the form of a prediction-error filter or a projection filter.
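As a rough illustration of how such a weighting enters the data-fitting goal, the sketch below minimizes $\Vert{\bf W(Lm-d)}\Vert^2$ with a hypothetical two-coefficient convolution filter playing the role of ${\bf W}$; this stand-in is not the prediction-error or projection filter designed later in this chapter.

import numpy as np

# Toy operator and data.
rng = np.random.default_rng(0)
n_data, n_model = 50, 10
L = rng.standard_normal((n_data, n_model))
d = rng.standard_normal(n_data)

# Hypothetical two-term filter (1, -0.8) applied by convolution: W'W is then
# a crude stand-in for C_d^{-1}.
W = np.eye(n_data) - 0.8 * np.eye(n_data, k=-1)

# Weighted least squares: filter both operator and data, then fit,
# i.e. minimize ||W(Lm - d)||^2.
m_hat, *_ = np.linalg.lstsq(W @ L, W @ d, rcond=None)
print(m_hat)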

