
Minimum-phase equivalent training data set

This leads to a technique that is new to me. I'll describe it first in the one-dimensional world. (In real life, multidimensional cases might be more interesting, for example where dip spectra change rapidly.) The basic problem is to define the appropriate regularization for a prediction-error filter (PEF). Regularization is ordinarily regarded as supplying a prior statement about the model, in this case about the autoregression filter. We don't think of PEFs as being ``physical'', and the correct prior model and its covariance are not immediately obvious. The answer is that the prior PEF is nothing more and nothing less than the solution to the autoregression equations for a prior ``universal'' data set. In practice, this amounts to having a ``training'' data set. I have noticed an efficient way to merge the information of the training data set with that of the ``too-small'' local data set.

Given a data set packed in an operator $\bold D$ and likewise a training data set $\bold T$, we formulate the fitting goals for finding the PEF $\bold a$ by using a constraint matrix $\bold K$ (an identity matrix except for the (1,1) element, which is zero).

 
\begin{displaymath}
\begin{array}{ccc}
\bold 0 & \approx & \bold D \bold K \bold a \\
\bold 0 & \approx & \bold T \bold K \bold a
\end{array}
\end{displaymath} (1)
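
For concreteness, here is a minimal numerical sketch of fitting goals (1) in one dimension, assuming NumPy; the helper names (conv_matrix, pef_from_goals) are my own, and the constraint $\bold K$ is applied implicitly by holding the leading coefficient of $\bold a$ at one and solving for the remaining coefficients:

\begin{verbatim}
import numpy as np

def conv_matrix(x, na):
    # Full (zero-padded) convolution matrix: conv_matrix(x, na) @ a
    # equals np.convolve(x, a) for a filter a of length na.
    n = len(x)
    C = np.zeros((n + na - 1, na))
    for j in range(na):
        C[j:j + n, j] = x
    return C

def pef_from_goals(d, t, na):
    # Stack the goals 0 ~ D K a and 0 ~ T K a (equal weighting, for
    # simplicity).  The constraint K, whose (1,1) element is zero,
    # amounts to fixing a[0] = 1: move that known column to the
    # right-hand side and solve least squares for a[1:].
    G = np.vstack([conv_matrix(d, na), conv_matrix(t, na)])
    x, *_ = np.linalg.lstsq(G[:, 1:], -G[:, 0], rcond=None)
    return np.concatenate(([1.0], x))
\end{verbatim}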

In principle, the training data set (and hence the matrix $\bold T$) is very large. Consider, however, a spectral factorization of the training data set: say $\bold T'\bold T=\bold B'\bold B$, where $\bold b$ is the minimum-phase spectral factor of the training data set (and $\bold B$ is the packing of $\bold b$ into a convolution operator). For me this is a new idea: we express the prior information as a ``training wavelet'' $\bold b$ that we find by spectral factorization of a ``universal'' data set. We then find our ``local'' PEF by fitting the goals

 
\begin{displaymath}
\begin{array}{ccc}
\bold 0 & \approx & \bold D \bold K \bold a \\
\bold 0 & \approx & \bold B \bold K \bold a
\end{array}
\end{displaymath} (2)
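
The minimum-phase factor $\bold b$ can be obtained, for example, by the Kolmogorov (cepstral) method of spectral factorization; this is one standard choice, not necessarily the one intended here. A sketch under that assumption, with the function name and lengths my own:

\begin{verbatim}
def kolmogorov(t, nb, nfft=4096):
    # Minimum-phase wavelet b whose autocorrelation approximates that
    # of t (nfft should comfortably exceed 2*len(t) so the circular
    # spectrum mimics the linear autocorrelation).
    S = np.abs(np.fft.fft(t, nfft)) ** 2         # power spectrum of t
    u = np.fft.ifft(np.log(S + 1e-30)).real     # cepstrum of log S (even)
    u[0] *= 0.5                                  # causal projection of the
    u[nfft // 2] *= 0.5                          # log spectrum: halve the two
    u[nfft // 2 + 1:] = 0.0                      # endpoints, zero negative lags
    b = np.fft.ifft(np.exp(np.fft.fft(u))).real  # exponentiate back
    return b[:nb]                                # keep the compact front end
\end{verbatim}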

The result of fitting (2) is theoretically equal to that of (1), but computationally (2) is potentially much easier because the training wavelet is much more compact than the full training data set (it is minimum-phase, so its energy is concentrated at the earliest lags).
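
As an illustrative check (synthetic data; the helpers are the sketches above), the PEFs fitted from goals (1) and (2) should agree closely, up to truncation of $\bold b$ and edge effects:

\begin{verbatim}
rng = np.random.default_rng(0)
wavelet = [1.0, -0.9]                                # toy wavelet
t = np.convolve(rng.standard_normal(2000), wavelet)  # "training" data
d = np.convolve(rng.standard_normal(60), wavelet)    # small local data

b = kolmogorov(t, nb=512)        # far shorter than t, same autocorrelation
a1 = pef_from_goals(d, t, na=6)  # goals (1): full training data
a2 = pef_from_goals(d, b, na=6)  # goals (2): training wavelet instead
print(np.round(a1, 3))
print(np.round(a2, 3))           # nearly identical coefficients
\end{verbatim}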

