
SPACE-VARIABLE DECONVOLUTION

Filters sometimes change with time and space. We often observe signals whose spectrum changes with position. A filter that changes with position is called nonstationary. We need an extension of our usual convolution operator hconest [*]. Conceptually, little needs to change besides replacing aa(ia) by aa(ia,iy). But there is a practical problem. Fomel and I decided to clutter up the code somewhat to save a great deal of memory, which matters to anyone solving multidimensional problems with big data sets.

Normally, the number of filter coefficients is much smaller than the number of data points, but here it is very much larger: na times larger. Variable filters require na times more memory than the data itself. To make the nonstationary helix code more practical, we require the filters to be constant in patches. The data type for these patchwise-constant nonstationary filters is introduced in module nhelix, a simple modification of module helix [*]. What is new is the integer-valued vector pch(nd), whose length nd is the size of the one-dimensional (helix) output data space. Every filter output point is assigned to a patch, and all outputs with a given patch number use the same filter. Nonstationary helixes are created with createnhelixmod, a simple modification of module createhelixmod [*]. Notice that the user must define the vector pch(product(nd)) before creating a nonstationary helix. For a simple 1-D time-variable filter, pch would presumably be something like $(1,1,2,2,3,3,\cdots)$. For multidimensional patching we need to think a little more.
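Here is a minimal sketch, not the book's createnhelixmod, of how such a patch vector might be built for a 1-D time-variable filter. The names nd, jump, and pch are illustrative only, with jump samples per patch:

! A minimal sketch (not createnhelixmod) of building a patch vector
! for a 1-D time-variable filter: every "jump" samples begin a new
! patch, giving pch = (1,1,...,2,2,...,3,3,...).
program build_pch
  implicit none
  integer, parameter :: nd = 12, jump = 4   ! data length, samples per patch
  integer :: pch(nd), id
  do id = 1, nd
     pch(id) = (id - 1)/jump + 1            ! integer division groups samples
  end do
  write(*,*) pch                            ! 1 1 1 1 2 2 2 2 3 3 3 3
end program build_pch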

Finally, we are ready for the convolution operator. The operator nhconest [*] allows for a different filter in each patch. A filter output y(iy) gets its filter from the patch ip=aa%pch(iy). The line t=a(ip,:) extracts the filter for the ip-th patch. If you are confused (as I am) about the difference between aa and a, maybe now is the time to look beyond Loptran at the Fortran version [*].
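As a rough illustration of the patch logic only (a sketch, not the nhconest module itself, which treats the filter coefficients as the unknowns of the estimation), the following subroutine applies a different filter to each patch of a helixed data vector; the argument names are assumptions:

! Sketch of patchwise nonstationary helix convolution.  The filter
! table a(np,na), lag table lag(na), and patch map pch(nd) are inputs;
! the leading "1" at zero lag is implicit, as in a prediction-error filter.
subroutine nhconv( nd, na, np, lag, pch, a, x, y)
  implicit none
  integer, intent(in)  :: nd, na, np          ! data, filter, patch counts
  integer, intent(in)  :: lag(na), pch(nd)    ! helix lags; patch of each output
  real,    intent(in)  :: a(np,na), x(nd)     ! one filter per patch; input data
  real,    intent(out) :: y(nd)               ! filter output
  integer :: iy, ia, ip, ix
  y = x                                       ! implicit leading "1" of the filter
  do iy = 1, nd
     ip = pch(iy)                             ! which filter applies here
     do ia = 1, na
        ix = iy - lag(ia)                     ! helix lag back into the data
        if (ix >= 1) y(iy) = y(iy) + a(ip,ia) * x(ix)
     end do
  end do
end subroutine nhconv

The only difference from ordinary helix convolution is the single line ip = pch(iy), which selects the column of the filter table used at each output point.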

Because of the massive increase in the number of filter coefficients, allowing this many filters takes us from overdetermined to very underdetermined. We can estimate all these filter coefficients by the usual deconvolution fitting goal ([*])
\begin{displaymath}
\bold 0 \quad\approx\quad \bold r \eq \bold Y \bold K \bold a + \bold r_0
\end{displaymath} (21)
but we need to supplement it with some damping goals, say  
\begin{displaymath}
\begin{array}{lll}
\bold 0 &\approx& \bold Y \bold K \bold a + \bold r_0 \\
\bold 0 &\approx& \epsilon\ \bold R \bold a
\end{array}
\end{displaymath} (22)
where $\bold R$ is a roughening operator to be chosen.

Experience with missing data in Chapter [*] shows that when the roughening operator $\bold R$ is a differential operator, the number of iterations can be large. We can speed the calculation immensely by ``preconditioning''. Define a new variable $\bold m$ by $\bold a=\bold R^{-1}\bold m$ and insert it into (22) to get the equivalent preconditioned system of goals:
\begin{eqnarray}
\bold 0 &\approx& \bold Y \bold K \bold R^{-1} \bold m \\
\bold 0 &\approx& \epsilon\ \bold m
\end{eqnarray} (23, 24)

The fitting goal (23) uses the operator $\bold Y\bold K\bold R^{-1}$. For $\bold Y$ we can use subroutine nhconest() [*]; for the smoothing operator $\bold R^{-1}$ we can use nonstationary polynomial division with operator npolydiv(). Now we have all the pieces we need. As we previously estimated stationary filters with the module pef [*], we can now estimate nonstationary PEFs with the module npef [*]. The steps are hardly any different. Near the end of module npef the filter is reshaped from a 1-D array to a 2-D array. If it seems troublesome that nhconest [*] was already using the filter as a multidimensional array during the optimization, it is again time to examine the Fortran code: a conversion back and forth is partially hidden by Loptran.
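For orientation, here is a minimal sketch of nonstationary polynomial division, in the spirit of npolydiv() but not the module itself: each output sample is computed by a causal recursion that feeds back previously computed outputs through the filter belonging to that sample's patch (argument names are assumptions):

! Sketch of nonstationary polynomial division (recursive inverse filtering).
! The filter is monic: a leading "1" at zero lag is implicit.
subroutine npolydiv_sketch( nd, na, np, lag, pch, a, x, y)
  implicit none
  integer, intent(in)  :: nd, na, np
  integer, intent(in)  :: lag(na), pch(nd)
  real,    intent(in)  :: a(np,na), x(nd)
  real,    intent(out) :: y(nd)
  integer :: iy, ia, ip, ix
  do iy = 1, nd                               ! causal recursion
     y(iy) = x(iy)
     ip = pch(iy)                             ! filter of this sample's patch
     do ia = 1, na
        ix = iy - lag(ia)
        if (ix >= 1) y(iy) = y(iy) - a(ip,ia) * y(ix)   ! feed back earlier outputs
     end do
  end do
end subroutine npolydiv_sketch

The recursion is ordinary inverse (recursive) filtering; only the patch lookup distinguishes it from stationary polynomial division.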

Figure [*] shows a synthetic-data example using these programs. As we hope for deconvolution, events are compressed. The compression is fairly good, even though each event has a different spectrum. What is especially pleasing is that satisfactory results are obtained in a truly small number of iterations (about three). The example uses two free filter coefficients (1, a1, a2) per output point. The roughening operator $\bold R$ was taken to be (1,-2,1), which was factored into causal and anticausal finite differences, as written out below.
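In Z-transform notation (Z the unit-delay operator), that factorization is, up to an overall sign that is irrelevant for damping,
\begin{displaymath}
Z^{-1} \;-\; 2 \;+\; Z \quad=\quad -\,(1-Z)\,(1-Z^{-1}) ,
\end{displaymath}
so $\bold R^{-1}$ is applied as one causal and one anticausal polynomial division.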

 
Figure 15 (tvdecon90): Time-variable deconvolution with two free filter coefficients and a gap of 6.



I hope also to find a test case with field data, but experience in seismology is that spectral changes are slow, which implies unexciting results. Many interesting examples should exist in two- and three-dimensional filtering, however, because reflector dip is always changing and that changes the spatial spectrum.

In multidimensional space, the smoothing filter $\bold R^{-1}$can be chosen with interesting directional properties. Sergey, Bob, Sean and I have joked about this code being the ``double helix'' program because there are two multidimensional helixes in it, one the smoothing filter, the other the deconvolution filter. Unlike the biological helixes, however, these two helixes do not seem to form a symmetrical pair.
\begin{exer}
\item
Is \texttt{nhconest} \vpageref{/prog:nhconest} the inverse o...
... other half.
Then upgrade that solution from two to about ten filters.
\end{exer}

