Next: HELIX LOW-CUT FILTER Up: WILSON-BURG SPECTRAL FACTORIZATION Previous: WILSON-BURG SPECTRAL FACTORIZATION

Wilson-Burg theory

Newton's iteration for square roots
\begin{displaymath}
a_{t+1} \eq {1\over 2} \ \left( a_t \ +\ {s\over a_t} \right)
\end{displaymath} (14)
converges quadratically from any nonzero real initial guess $a_0$. When $a_0$ is negative, Newton's iteration converges to the negative square root.

Quadratic convergence means that the square of the error $a_t-\sqrt{s}$ at one iteration is proportional to the error at the next iteration:
\begin{displaymath}
a_{t+1} - \sqrt{s} \quad\sim\quad ( a_t-\sqrt{s})^2
\eq a_t^2 - 2 a_t \sqrt{s} + s \quad > \quad 0
\end{displaymath} (15)
so, for example, if the error is one significant digit at one iteration, at the next iteration it is two digits, then four, and so on. We cannot use equation (15) in place of the Newton iteration itself, because it uses the answer $\sqrt{s}$ to get the answer $a_{t+1}$, and also because we need the factor of proportionality. Notice, however, that if we take the factor to be $1/(2a_t)$, then $\sqrt{s}$ cancels and equation (15) becomes the Newton iteration (14) itself.

Another interesting feature of the Newton iteration is that all iterates (except possibly the initial guess) lie above the ultimate square root. This is obvious from equation (15): the error $a_{t+1}-\sqrt{s}$ is proportional to a square, hence nonnegative.

We can insert spectral functions into the Newton square-root iteration, for example $s(\omega)$ and $a(\omega)$. Where the first guess $a_0$ happens to match $\sqrt{s}$, it will match $\sqrt{s}$ at all iterations. Something inspires Wilson to write the Newton iteration as
\begin{displaymath}
2\ {a_{t+1}\over a_t} \eq 1 \ +\ {s\over a_t^2}
\end{displaymath} (16)
and then to express the spectrum $S=\bar A A$ as a Z-transform:
\begin{displaymath}
{\bar A_{t+1}(1/Z) \over \bar A_t(1/Z)}
\ +\
{A_{t+1}(Z) \over A_t(Z)}
\eq
1 \ +\ {S(Z) \over \bar A_t(1/Z)\ A_t(Z)}
\end{displaymath} (17)

Now we are ready for the algorithm: Compute the right side of (17) by polynomial division forwards and backwards, and then add 1. Then abandon the negative lags and take half of the zero lag. Now you have $A_{t+1}(Z)/A_t(Z)$. Multiply out (convolve) by the denominator $A_t(Z)$, and you have the desired result $A_{t+1}(Z)$. Iterate as long as you wish.
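The recipe can be sketched numerically. The following Python fragment is a toy translation, not the author's Fortran code below; the names `polydiv` and `wilson_factor` mirror the original but the implementation details (array lengths, tolerances) are illustrative. As in the text, $A$ is kept monic and the scale is carried separately in `a0`:

```python
import numpy as np

def polydiv(x, a):
    """Divide the series x by the monic causal filter a (forward recursion)."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = x[t] - sum(a[k] * y[t-k] for k in range(1, min(len(a), t+1)))
    return y

def wilson_factor(s0, ss, n=32, niter=30):
    """Toy Wilson-Burg factorization following the recipe in the text.
    s0: zero lag of the autocorrelation; ss: its positive lags.
    Returns (a0, a): scale factor and monic causal filter with a[0] = 1."""
    mid = n - 1
    auto = np.zeros(2*n - 1)
    auto[mid] = s0
    auto[mid+1:mid+1+len(ss)] = ss
    auto[mid-len(ss):mid] = ss[::-1]               # symmetrize the input
    a = np.zeros(n); a[0] = 1.0                    # first guess A = 1
    for _ in range(niter):
        bb = polydiv(auto, a)                      # S/A      (divide forwards)
        cc = polydiv(bb[::-1], a)[::-1]            # S/(A A') (divide backwards)
        b = np.zeros(n); b[0] = 1.0                # half the zero lag
        b[1:] = 0.5*(cc[mid+1:] + cc[mid-1::-1]) / cc[mid]  # abandon negative lags
        a = np.convolve(b, a)[:n]                  # A_{t+1} = B A_t (causal part)
    return np.sqrt(cc[mid]), a

# Factor S(Z) = 2/Z + 5 + 2Z = (2 + Z)(2 + 1/Z):
a0, a = wilson_factor(5.0, np.array([2.0]))
```

For this spectrum the iteration should recover the minimum-phase factor $2 + Z$, i.e. `a0` near 2 and the first filter lag near 0.5.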

(Parenthetically, for those people familiar with the idea of minimum phase (if not, see FGDP or PVI), we show that $A_{t+1}(Z)$ is minimum phase: Both sides of (17) are positive, as noted earlier, and both terms on the right are positive. Since the Newton iteration always overestimates, the 1 dominates the rightmost term. After masking off the negative powers of $Z$ (and half the zero power), the right side of (17) is the sum of two wavelets. The first, the masked 1, is the constant 1/2, which is wholly real, and hence its real part always dominates the real part of the rightmost term. Thus (after masking negative powers) the wavelet on the right side of (17) has a positive real part, so its phase cannot loop about the origin. This wavelet multiplies $A_t(Z)$ to give the final wavelet $A_{t+1}(Z)$, and the product of two minimum-phase wavelets is minimum phase.)
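The closing claim, that a product of two minimum-phase wavelets is minimum phase, can be checked numerically: convolving two causal wavelets whose zeros in $Z$ lie outside the unit circle gives a wavelet whose zeros are the union of the two sets, hence still outside. A short sketch with hypothetical example wavelets:

```python
import numpy as np

# Two causal wavelets, coefficients of 1, Z, Z^2, ...; both are minimum
# phase because every zero of A(Z) has |Z| > 1 (hypothetical examples).
f = np.array([2.0, 1.0])         # zero at Z = -2
g = np.array([3.0, 1.0, 0.5])    # zeros at Z = -1 +/- i*sqrt(5), |Z| = sqrt(6)

h = np.convolve(f, g)            # product wavelet F(Z) G(Z)

# np.roots wants the highest power first, so reverse the coefficient order.
zeros = np.roots(h[::-1])
# All zeros of H(Z) lie outside the unit circle: h is minimum phase.
```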

The input to the program is the spectrum $S(Z)$ and the output is the factor $A(Z)$, a function with the spectrum $S(Z)$. I mention here that in later chapters of this book, the factor $A(Z)$ is known as the inverse Prediction-Error Filter (PEF). In the Wilson-Burg code below, $S(Z)$ and $A(Z)$ are Z-transform polynomials, but their lead coefficients are split off, so for example $A(Z) = (a_0) + (a_1 Z + a_2 Z^2 + \cdots)$ is broken into the two parts a0 and aa.

#$=head1 NAME
#$
#$wilson - wilson's factorization
#$
#$=head1 SYNOPSIS
#$     
#$C<call wilson_init(nmax)>
#$       
#$C<call wilson_factor(niter,s0,ss,a0,aa)> 
#$     
#$C<call wilson_close()>
#$       
#$=head1 PARAMETERS
#$     
#$=over 4  
#$       
#$=item nmax - integer
#$  
#$      maximum amount of space needed (n1*10 is a good number)
#$  
#$=item s0   - real    
#$  
#$      zero lag value of input
#$  
#$=item ss   - type(filter)   
#$  
#$      autocorrelation
#$     
#$=item a0   - real
#$     
#$      Output zero lag value 
#$     
#$=item aa   - type(filter)  
#$     
#$      Minimum-phase filter
#$     
#$=back
#$
#$=head1 DESCRIPTION
#$
#$Find a minimum-phase filter with the given autocorrelation function
#$
#$=head1 SEE ALSO
#$  
#$L<cross_wilson>, L<lapfac>
#$  
#$=head1 LIBRARY
#$  
#$B<geef90>
#$  
#$=cut

module wilson {                                  # Wilson's factorization
use helicon
use polydiv
integer,                          private :: n
real, dimension( :), allocatable, private :: auto, bb, cc, b, c

contains
subroutine wilson_init( nmax) {
  integer, intent( in) :: nmax
  n = nmax
  allocate ( auto( 2*n-1), bb( 2*n-1), cc( 2*n-1), b( n), c( n))
}

subroutine wilson_factor( niter, s0, ss, a0, aa, verb) {
  integer,       intent( in)  :: niter           # Newton iterations
  real,          intent( in)  :: s0              # autocorrelation zero lag
  type( filter), intent( in)  :: ss              # autocorrelation, other lags
  real,          intent( out) :: a0              # factor, zero lag
  type( filter)               :: aa              # factor, other lags
  logical,       intent( in)  :: verb
  optional                    :: verb
  real    :: eps
  integer :: i, stat

  auto = 0.;  auto( n) = s0;  b( 1) = 1.         # initialize
  auto( n+ss%lag) = ss%flt                       # autocorrelation
  auto( n-ss%lag) = ss%flt                       # symmetrize input auto
  call helicon_init( aa)                         # multiply polynoms
  call polydiv_init( 2*n-1, aa)                  # divide polynoms
  do i = 1, niter {
    stat = polydiv_lop( .false., .false., auto, bb)   # bb = S/A
    stat = polydiv_lop( .true. , .false., cc,   bb)   # cc = S/(AA')
    b( 2:n) = 0.5*( cc( n+1:2*n-1) + cc( n-1:1:-1)) / cc( n)  # b = plusside(1+cc)
    eps = maxval( abs( b( 2:n)))                 # "L1 norm"
    if( present( verb)) { if( verb) write( 0,*) i, eps }
    stat = helicon_lop( .false., .false., b, c)  # c = A b
    aa%flt = c( 1+aa%lag)                        # put on helix
    if( eps < epsilon( a0)) break                # convergence
  }
  a0 = sqrt( cc( n))
}

subroutine wilson_close() {
  deallocate( auto, bb, cc, b, c)
  call polydiv_close()
}
}

EXERCISES:

  1. You hear from three different people that a more isotropic representation of the Laplacian is minus one sixth of

    \begin{displaymath}
\begin{array}{rrr}
-1 & -4 & -1 \\
-4 & 20 & -4 \\
-1 & -4 & -1
\end{array}
\end{displaymath}

    What changes need to be made to subroutine lapfac()?
  2. Fomel's factorization: A simple trick to avoid division in square-root computation is to run Newton's method on the inverse square root instead. The iteration is then $R' = {1\over 2} R\,(3-R^2 X^2)$, where $R$ converges (quadratically) to $1/\sqrt{X^2}$. To get the square root, just multiply $R$ by $X^2$. This leads to a reciprocal version of the Wilson-Burg algorithm: $A'/A + \bar A'/\bar A \eq 3 - A \bar A S$. Here is how it can work: Construct an inverse autocorrelation -- for example, an ideal isotropic smoother; make a guess for $A$ (a minimum-phase roughener); then iterate: (1) compute $3 - A \bar A S$, (2) take its causal part, (3) convolve with $A$ to get $A'$. Each iteration involves just three convolutions (and could even be done without the helix).
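The scalar version of the inverse-square-root iteration in exercise 2 is easy to try out. This Python sketch (hypothetical names, illustrative iteration count) shows that the loop body needs no division and that multiplying back by $X^2$ recovers the square root:

```python
def newton_rsqrt(x2, r0, niter=20):
    """Division-free Newton iteration: R <- R*(3 - R*R*x2)/2 -> 1/sqrt(x2)."""
    r = r0
    for _ in range(niter):
        r = 0.5 * r * (3.0 - r * r * x2)
    return r

r = newton_rsqrt(4.0, 0.1)       # converges to 1/sqrt(4) = 0.5
sqrt4 = 4.0 * r                  # multiply by X^2 to get the square root
```

The factor 0.5 is a halving, not a general division, so each step uses only multiplications and a subtraction; convergence requires the starting guess to lie roughly in $0 < R_0 < \sqrt{3/X^2}$.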


Stanford Exploration Project
12/15/2000