
Backsolving, polynomial division and deconvolution

Ordinary differential equations often lead us to the backsolving operator. For example, the damped harmonic oscillator leads to a special case of equation (23) where $(a_3,a_4,\cdots)=0$. There is a huge literature on finite-difference solutions of ordinary differential equations that lead to equations of this type. Rather than derive such an equation from one of the many possible physical arrangements, we can begin from the filter transformation $\bold B$ in (4), but put the matrix on the other side of the equation, so that our transformation can be called one of inversion or backsubstitution. Let us also force the matrix $\bold B$ to be square by truncating it with $\bold T = \left[ \bold I \quad \bold 0 \right]$, say $\bold A = \left[ \bold I \quad \bold 0 \right] \bold B = \bold T \bold B$. To link up with applications in later chapters, I specialize to 1's on the main diagonal and insert some bands of zeros.
 \begin{displaymath}
\bold A \bold y
\eq
\left[
\begin{array}{ccccccc}
 1   & 0   & 0   & 0   & 0   & 0   & 0 \\
 a_1 & 1   & 0   & 0   & 0   & 0   & 0 \\
 a_2 & a_1 & 1   & 0   & 0   & 0   & 0 \\
 0   & a_2 & a_1 & 1   & 0   & 0   & 0 \\
 0   & 0   & a_2 & a_1 & 1   & 0   & 0 \\
 a_5 & 0   & 0   & a_2 & a_1 & 1   & 0 \\
 0   & a_5 & 0   & 0   & a_2 & a_1 & 1
\end{array}
\right]
\left[
\begin{array}{c}
 y_0 \\  y_1 \\  y_2 \\  y_3 \\  y_4 \\  y_5 \\  y_6
\end{array}
\right]
\eq
\left[
\begin{array}{c}
 x_0 \\  x_1 \\  x_2 \\  x_3 \\  x_4 \\  x_5 \\  x_6
\end{array}
\right]
\eq
\bold x\end{displaymath} (23)
Algebraically, this operator goes by various names: ``backsolving,'' ``polynomial division,'' and ``deconvolution.'' The leaky integration transformation (19) is a simple example of backsolving, with $a_1=-\rho$ and $a_2=a_5=0$. To confirm this, you need to verify that the matrices in (23) and (19) are mutually inverse.
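The mutual-inverse check can be done numerically. The sketch below (in Python rather than loptran; the size $7$ and the value $\rho=0.9$ are my own choices) builds the backsolving matrix of (23) with $a_1=-\rho$, $a_2=a_5=0$, and the leaky-integration matrix of (19), whose entries below the diagonal are powers $\rho^{i-j}$:

```python
import numpy as np

n, rho = 7, 0.9

# Backsolving matrix A of equation (23) with a1 = -rho and a2 = a5 = 0:
# 1's on the main diagonal, -rho on the first subdiagonal.
A = np.eye(n) - rho * np.eye(n, k=-1)

# Leaky-integration matrix of equation (19): powers of rho below the diagonal.
B = np.array([[rho**(i - j) if i >= j else 0.0 for j in range(n)]
              for i in range(n)])

assert np.allclose(A @ B, np.eye(n))   # mutually inverse
assert np.allclose(B @ A, np.eye(n))
```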

A typical row in equation (23) says  
 \begin{displaymath}
x_t \eq y_t \ +\ \sum_{\tau > 0} \ a_\tau\ y_{t-\tau}\end{displaymath} (24)
Change the signs of all terms in equation (24) and move some terms to the opposite side:  
 \begin{displaymath}
y_t \eq x_t \ -\ \sum_{\tau > 0} \ a_\tau\ y_{t-\tau}\end{displaymath} (25)
Equation (25) is a recursion to find $y_t$ from the values of $y$ at earlier times.
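The recursion (25) takes only a few lines of code. A minimal sketch in Python (the function name `backsolve` and the dictionary representation of the sparse filter are my own choices):

```python
def backsolve(a, x):
    # Recursion (25): y_t = x_t - sum_{tau>0} a_tau * y_{t-tau}.
    # `a` maps each nonzero lag tau >= 1 to its coefficient a_tau.
    y = []
    for t, xt in enumerate(x):
        yt = xt
        for tau, a_tau in a.items():
            if tau <= t:                 # only past outputs are available
                yt -= a_tau * y[t - tau]
        y.append(yt)
    return y

# With a_1 = -rho this reproduces leaky integration of an impulse:
assert backsolve({1: -0.5}, [1, 0, 0, 0]) == [1, 0.5, 0.25, 0.125]
```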

The most rudimentary form of mathematics in which this recursion occurs is the multiplication and division of polynomials. The transformation (23) can be derived by multiplying the two polynomials $Y(Z) = y_0 + y_1 Z + y_2 Z^2 + y_3 Z^3 + y_4 Z^4 + y_5 Z^5 + y_6 Z^6$ and $A(Z) = 1 + a_1 Z + a_2 Z^2 + a_5 Z^5$. Identifying the $k$-th power of $Z$ in the product $X(Z)=A(Z)Y(Z)$ gives the $k$-th row of the transformation (23). Thus the operator in (23) is $Y(Z)=X(Z)/A(Z)$. The polynomials in $Z$ are called $Z$ transforms. They may be recognized as Fourier transforms with $Z=e^{i\omega\Delta t}$. Convolution corresponds to multiplying polynomials (convolving the coefficients), and deconvolution (or backsolving) corresponds to polynomial division.
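The correspondence between convolution and polynomial multiplication can be checked directly. In the sketch below (Python; the particular coefficient values are hypothetical, chosen only so that $A(Z)$ has the sparsity pattern of the text), convolving the coefficients gives $X(Z)=A(Z)Y(Z)$, and polynomial division recovers $Y(Z)=X(Z)/A(Z)$:

```python
import numpy as np

# A(Z) = 1 + a1 Z + a2 Z^2 + a5 Z^5, lowest power first (made-up values).
a = np.array([1.0, 0.3, 0.1, 0.0, 0.0, 0.2])
y = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5, 1.0])

# Convolving the coefficients multiplies the polynomials: X(Z) = A(Z) Y(Z).
x = np.convolve(a, y)

# Deconvolution is polynomial division: Y(Z) = X(Z) / A(Z).
# (numpy.polydiv expects highest power first, hence the reversals.)
q, r = np.polydiv(x[::-1], a[::-1])

assert np.allclose(q[::-1], y)   # quotient recovers y exactly
assert np.allclose(r, 0)         # no remainder
```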

A causal operator is one that uses its present and past inputs to make its current output. Anticausal operators use the future but not the past. Causal operators are generally associated with lower triangular matrices and positive powers of $Z$, whereas anticausal operators are associated with upper triangular matrices and negative powers of $Z$. A transformation like equation (23) but with the transposed matrix would require us to run the recursive solution in the opposite direction in time, as we did with leaky integration.
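A small sketch of this time reversal (Python; the size, the value $\rho=0.8$, and the test input are my own choices): solving the transposed, upper-triangular system forces the recursion to start at the final time and run backward.

```python
import numpy as np

n, rho = 5, 0.8
A = np.eye(n) - rho * np.eye(n, k=-1)    # causal: lower triangular

# Solving the transposed (upper-triangular, anticausal) system A' y = x:
# each row says y_t - rho * y_{t+1} = x_t, so we must know y_{t+1} first.
x = np.array([1.0, 0.0, 2.0, 0.0, 0.0])
y = np.zeros(n)
for t in reversed(range(n)):             # final time first
    y[t] = x[t] + rho * (y[t + 1] if t + 1 < n else 0.0)

assert np.allclose(A.T @ y, x)
```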

A module for backsolving (polynomial division) is polydiv1.

module polydiv1 {                   # Polynomial division (recursive filtering)
real, dimension (:), pointer :: aa  # filter coefficients a_1, a_2, ...
#%  _init ( aa)
#%  _lop  ( xx, yy)
integer  ia, ix, iy
real     tt
if( adj)                            # adjoint: transposed system, backward in time
	do ix= size(xx), 1, -1 {
		tt = yy( ix)
		do ia = 1, min( size(aa), size(xx) - ix) {
			iy = ix + ia
			tt -=  aa( ia) * xx( iy)
			}
		xx( ix) = xx( ix) + tt
		}
else                                # forward: recursion (25), forward in time
	do iy= 1, size(xx) {
		tt = xx( iy)
		do ia = 1, min( size(aa), iy-1) {
			ix = iy - ia
			tt -=  aa( ia) * yy( ix)
			}
		yy( iy) = yy( iy) + tt
		}
}
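Since polydiv1 is written in the loptran dialect, a transcription may help. The sketch below (Python; the test filter `aa`, the vector length, and the random seed are my own choices) mirrors the two branches and applies the dot-product test, $\langle \bold L \bold x, \bold y \rangle = \langle \bold x, \bold L' \bold y \rangle$, that any forward/adjoint pair should pass:

```python
import numpy as np

def polydiv1(adj, aa, xx, yy):
    # Transcription of the loptran module above; like the original, it
    # accumulates into its output (xx for the adjoint, yy for the forward op).
    n = len(xx)
    if adj:
        for ix in range(n - 1, -1, -1):                  # backward in time
            tt = yy[ix]
            for ia in range(1, min(len(aa), n - 1 - ix) + 1):
                tt -= aa[ia - 1] * xx[ix + ia]
            xx[ix] += tt
    else:
        for iy in range(n):                              # forward in time
            tt = xx[iy]
            for ia in range(1, min(len(aa), iy) + 1):
                tt -= aa[ia - 1] * yy[iy - ia]
            yy[iy] += tt

# Dot-product test of the adjoint pair.
rng = np.random.default_rng(0)
aa = np.array([0.5, 0.2])
x, ya = rng.normal(size=8), rng.normal(size=8)
y, xa = np.zeros(8), np.zeros(8)
polydiv1(False, aa, x, y)    # y  = L  x
polydiv1(True,  aa, xa, ya)  # xa = L' ya
assert np.isclose(np.dot(y, ya), np.dot(x, xa))
```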

The more complicated an operator, the more complicated is its adjoint. Given a transformation from $\bold x$ to $\bold y$ that is $\bold T\bold B \bold y = \bold x$, we may wonder whether the adjoint transform really is $(\bold T\bold B)' \hat{\bold x} = \bold y$. It amounts to asking whether the adjoint of $\bold y = (\bold T\bold B)^{-1}\bold x$ is $\hat{\bold x} = ((\bold T\bold B)')^{-1}\bold y$. Mathematically, we are asking whether the inverse of a transpose is the transpose of the inverse. It is, because in $\bold A\bold A^{-1} = \bold I = \bold I' = (\bold A^{-1})'\bold A'$ the parenthesized object must be the inverse of its neighbor $\bold A'$.
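The identity $(\bold A')^{-1} = (\bold A^{-1})'$ is easy to confirm numerically; a minimal sketch (Python; the size and random entries are arbitrary, with a unit diagonal to guarantee invertibility, as in equation (23)):

```python
import numpy as np

rng = np.random.default_rng(1)
# A unit-diagonal lower-triangular matrix, like A in (23), is invertible.
A = np.eye(4) + np.tril(rng.normal(size=(4, 4)), k=-1)

# The inverse of the transpose equals the transpose of the inverse.
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
```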

The adjoint has a nonphysical meaning. It is like the forward operator except that we must begin at the final time and work backward toward the first. The adjoint pendulum damps as we compute it backward in time, which means, of course, that the adjoint pendulum diverges when viewed moving forward in time.


Stanford Exploration Project
12/22/2000