This method of finding missing data values uses conjugate gradients, and it introduces the notion of simple constraints in least-squares fitting. The addition to the basic CG algorithm is uncomplicated, regardless of the amount of missing data or the complexity of its geometrical arrangement. You will see that the program that made Figures 1-5 is pleasingly concise.
Formally, our problem is

$$ \mathbf{0} \;\approx\; \mathbf{r} \;=\; \mathbf{A}\,\mathbf{y} $$

where $\mathbf{A}$ is a filter (a shifted column matrix like in ( )), and $\mathbf{y}$ and $\mathbf{r}$ have elements in data space. Every point in data space is either given or missing, so where a component of the vector of known givens is nonzero, the corresponding component of the vector of missing values must be zero, and vice versa.
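To make the setup concrete, here is a small Python sketch; the first-difference filter, the sample values, and the helper name `convolve` are illustrative assumptions, not taken from the text. It shows the complementary known/missing supports and the residual as the filtered data.

```python
def convolve(a, y):
    """Full (transient) convolution of filter a with data y;
    output length is len(y) + len(a) - 1."""
    r = [0.0] * (len(y) + len(a) - 1)
    for ia, fa in enumerate(a):
        for iy, fy in enumerate(y):
            r[ia + iy] += fa * fy
    return r

a = [1.0, -1.0]               # illustrative roughening filter (first difference)
y = [1.0, 2.0, 0.0, 4.0]      # data vector: zeros mark missing values
known = [1.0, 1.0, 0.0, 1.0]  # nonzero exactly where y is given

# The known and missing supports are complementary: where one vector has
# a nonzero component, the other vanishes, and y is their sum.
given   = [yi if k != 0.0 else 0.0 for yi, k in zip(y, known)]
missing = [yi if k == 0.0 else 0.0 for yi, k in zip(y, known)]
assert [g + m for g, m in zip(given, missing)] == y

r = convolve(a, y)            # residual r = A y, in filtered space
```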
Recall that the program cgmeth() solves overdetermined sets like ( ). For that program, the data is our $\mathbf{0}$, the operator is our filter $\mathbf{A}$, and the unknown is our $\mathbf{y}$. The trouble with that program is that it will change the known data values unless we recode the filter operator to exclude them. Instead, we will reorganize the CG algorithm to include these constraints explicitly, so that we can then use any filter program from our library.
The vector $\mathbf{y}$ is the entire data space. On input to the missing-data program it contains the known values, with zeros in place of the missing ones. The program begins by loading the negative of the filtered data into the residual.
The iterations proceed as with cgmeth()
except that the matrix multiply routine
is replaced by the convolution routine contran().
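The two modes of contran() form an operator/adjoint pair: convolution going forward, correlation going back. A hedged Python sketch (function names mine, not contran()'s actual interface), with the standard dot-product test of adjointness:

```python
def conv(a, y):
    """Forward operator: transient convolution r = a * y."""
    r = [0.0] * (len(y) + len(a) - 1)
    for ia, fa in enumerate(a):
        for iy, fy in enumerate(y):
            r[ia + iy] += fa * fy
    return r

def corr(a, r, ny):
    """Adjoint operator: correlation y[iy] = sum_ia a[ia] * r[iy + ia]."""
    return [sum(fa * r[iy + ia] for ia, fa in enumerate(a))
            for iy in range(ny)]

# Dot-product test: <A y, r> == <y, A' r> for arbitrary vectors.
a = [1.0, -2.0, 0.5]
y = [3.0, -1.0, 2.0, 4.0]
r = [1.0, 0.0, -1.0, 2.0, 1.0, -2.0]   # length len(y) + len(a) - 1
lhs = sum(u * v for u, v in zip(conv(a, y), r))
rhs = sum(u * v for u, v in zip(y, corr(a, r, len(y))))
```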
The new ingredient in the missing-data subroutine miss1()
is the simple constraint that the given data cannot be changed.
Thus after the gradient is computed,
the components that correspond to given data values are set to zero.
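That projection is a one-line mask. A minimal sketch, assuming (as in the subroutine below) that `dy` holds the gradient and `copy` is nonzero exactly at the known data points:

```python
def constrain(dy, copy):
    """Zero the gradient at known data points (copy nonzero), so that no
    step built from it can change the given values."""
    return [0.0 if c != 0.0 else g for g, c in zip(dy, copy)]
```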
# fill in missing data on 1-axis by minimizing power out of a given filter.
#
subroutine miss1( na, a, ny, y, copy, niter)
integer na, iy,ny, ir,nr, iter,niter
real y(ny)       # in:  known data with zeros for missing values.
                 # out: known plus missing data.
real copy(ny)    # in:  copy(iy) vanishes where y(iy) is a missing value.
real a(na)       # in:  roughening filter
temporary real dy(ny),sy(ny), r(ny+na-1),dr(ny+na-1),sr(ny+na-1)
nr = ny+na-1
call contran( 0, na, a, ny, y, r)              # r = a*y  convolution
do ir= 1, nr
        r(ir) = - r(ir)                        # residual = - filtered data
do iter= 0, niter {                            # niter = number missing or less
        call contran( 1, na,a, ny,dy, r)       # dy(a,r)  correlation
        do iy= 1, ny
                if( copy(iy) != 0.)            # missing data where copy(iy)==0
                        dy(iy) = 0.            # can't change known data
        call contran( 0, na,a, ny,dy, dr)      # dr = a*dy  convolution
        call cgstep( iter, ny,y,dy,sy, nr,r,dr,sr)  # y=y+dy; r=r-dr
        }
return; end
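The subroutine translates almost line for line into Python. In the sketch below, the helper names `conv`, `corr`, and `plane_search` are mine, with `plane_search` standing in for cgstep(); the first-difference filter and sample data are illustrative. One iteration fills the single missing point exactly, since the constrained gradient then has only one free component.

```python
def conv(a, y):
    """contran(0, ...): transient convolution r = a * y."""
    r = [0.0] * (len(y) + len(a) - 1)
    for ia, fa in enumerate(a):
        for iy, fy in enumerate(y):
            r[ia + iy] += fa * fy
    return r

def corr(a, r, ny):
    """contran(1, ...): the adjoint of conv, a correlation."""
    return [sum(fa * r[iy + ia] for ia, fa in enumerate(a))
            for iy in range(ny)]

def plane_search(it, r, dr, sr):
    """Pick alfa, beta minimizing |r - alfa*dr - beta*sr|^2
    (the two-term search that cgstep() performs)."""
    gdg = sum(g * g for g in dr)
    gdr = sum(g * v for g, v in zip(dr, r))
    if it == 0 or gdg == 0.0:
        return (gdr / gdg if gdg else 0.0), 0.0
    sds = sum(s * s for s in sr)
    gds = sum(g * s for g, s in zip(dr, sr))
    sdr = sum(s * v for s, v in zip(sr, r))
    det = gdg * sds - gds * gds
    if det == 0.0:
        return (gdr / gdg if gdg else 0.0), 0.0
    return (gdr * sds - gds * sdr) / det, (gdg * sdr - gds * gdr) / det

def miss1(a, y, copy, niter):
    """Fill in missing values of y (zeros where copy vanishes) by
    minimizing the power out of filter a."""
    y = list(y)
    ny, nr = len(y), len(y) + len(a) - 1
    r = [-v for v in conv(a, y)]            # residual = - filtered data
    sy, sr = [0.0] * ny, [0.0] * nr
    for it in range(niter):                 # niter = number missing or less
        dy = corr(a, r, ny)                 # gradient
        dy = [0.0 if c != 0.0 else g        # can't change known data
              for g, c in zip(dy, copy)]
        dr = conv(a, dy)
        alfa, beta = plane_search(it, r, dr, sr)
        sy = [alfa * g + beta * s for g, s in zip(dy, sy)]
        sr = [alfa * g + beta * s for g, s in zip(dr, sr)]
        y = [v + s for v, s in zip(y, sy)]  # y = y + step
        r = [v - s for v, s in zip(r, sr)]  # r = r - filtered step
    return y

# One missing point with a first-difference filter: the minimum-power
# fill is the linear interpolation, recovered in one iteration.
filled = miss1([1.0, -1.0], [1.0, 0.0, 3.0, 4.0], [1.0, 0.0, 1.0, 1.0], 1)
```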
That prevents motion of any distance along the gradient from changing the given data. Likewise, motion along previous steps cannot perturb the given data values. So the CG method, which finds the minimum power in the plane spanned by the gradient and the previous step, reaches minimum power while respecting the constraints.
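The plane search behind this argument can be written out; what follows is a sketch of the standard two-term search, not a quotation from the text. With gradient step $\Delta\mathbf{r} = \mathbf{A}\,\Delta\mathbf{y}$ and previous step $\mathbf{s}$ in residual space, CG chooses $\alpha$ and $\beta$ to minimize

$$ Q(\alpha,\beta) \;=\; \|\, \mathbf{r} \,-\, \alpha\,\Delta\mathbf{r} \,-\, \beta\,\mathbf{s} \,\|^{2} . $$

Setting $\partial Q/\partial\alpha = \partial Q/\partial\beta = 0$ gives the $2\times 2$ normal equations

$$ \begin{pmatrix} \Delta\mathbf{r}\cdot\Delta\mathbf{r} & \Delta\mathbf{r}\cdot\mathbf{s} \\ \Delta\mathbf{r}\cdot\mathbf{s} & \mathbf{s}\cdot\mathbf{s} \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \;=\; \begin{pmatrix} \Delta\mathbf{r}\cdot\mathbf{r} \\ \mathbf{s}\cdot\mathbf{r} \end{pmatrix} . $$

Because $\Delta\mathbf{y}$ is zeroed at the given points, and the previous step was itself built from such zeroed gradients, the model update $\alpha\,\Delta\mathbf{y} + \beta\,\mathbf{s}_y$ also vanishes there, whatever $\alpha$ and $\beta$ the search selects.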