Next: Noise: General Case
Up: Symes: Differential semblance
Previous: Hyperbolic Moveout
Until further notice regard F etc. as depending on the RMS square
slowness u rather than on the interval velocity v. Dependence on v,
through the relatively easily analyzed map $v \mapsto u$, will be
reintroduced at the end.
A short calculation shows that
\begin{displaymath}
p(t,x)= \frac{x}{t}\,u(T_0(t,x)).
\end{displaymath}
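A sketch of that calculation, assuming (as in the previous section) the hyperbolic moveout relation $T(t_0,x)^2 = t_0^2 + x^2 u(t_0)$ and that p denotes the event slope $\partial T/\partial x$:
\begin{displaymath}
p(t,x)=\frac{\partial T}{\partial x}(T_0(t,x),x)
=\frac{x\,u(T_0(t,x))}{T(T_0(t,x),x)}
=\frac{x}{t}\,u(T_0(t,x)).
\end{displaymath}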
Introduce the quantity $\Gamma$, with units of time:
\begin{displaymath}
\Gamma(t_0,x)=T_0(T^*(t_0,x),x).
\end{displaymath}
That is, $\Gamma(t_0,x)$ is the zero-offset time for which the time at offset
x in the slowness u is the same as the time one obtains for t0, x in
the slowness u*.
Then introducing the expression for p, and changing variables from t to
t0 in the integral above, yields
\begin{displaymath}
J_0[u]=\int\int \,dt_0\,dx\,B_0(t_0,x)\,(u(\Gamma(t_0,x))-u^*(t_0))^2\,(r^*(t_0))^2 + O(\lambda)
\end{displaymath}
where
\begin{displaymath}
B_0(t_0,x)=
\phi(T^*(t_0,x),x)\,B^*(T^*(t_0,x),x)\left(\frac{x}{T^*(t_0,x)}\right)^2
\end{displaymath}
depends only on u* and $\phi$.
It is now straightforward to compute the first order perturbation
of J0 with respect to u. First,
\begin{displaymath}
\delta T(t_0,x) = \frac{\frac{x^2}{2}\,\delta u(t_0)}{T(t_0,x)}
\end{displaymath}
\begin{displaymath}
0 = \delta (T(T_0(t,x),x)) = \delta T(T_0(t,x),x) +
\frac{\partial T}{\partial t_0}(T_0(t,x),x)\,\delta T_0(t,x)
\end{displaymath}
\begin{displaymath}
=\frac{\frac{x^2}{2}\,\delta u(T_0(t,x))}{t}+\frac{\delta T_0(t,x)}{s(t,x)}
\end{displaymath}
so
\begin{displaymath}
\delta T_0(t,x)=-\frac{\frac{x^2}{2}\,s(t,x)\,\delta u(T_0(t,x))}{t}
\end{displaymath}
whence
\begin{displaymath}
\delta \Gamma(t_0,x) = \delta T_0(T^*(t_0,x),x)
=-\frac{\frac{x^2}{2}\,s(T^*(t_0,x),x)\,\delta u(\Gamma(t_0,x))}{T^*(t_0,x)}
\end{displaymath}
and
\begin{displaymath}
\delta (u(\Gamma(t_0,x))) = \delta u(\Gamma(t_0,x))
+ \frac{du}{dt_0}(\Gamma(t_0,x))\,\delta \Gamma(t_0,x)
\end{displaymath}
\begin{displaymath}
=\delta u(\Gamma(t_0,x))\left(1-\frac{\frac{x^2}{2}\,s(T^*(t_0,x),x)\,\frac{du}{dt_0}(\Gamma(t_0,x))}{T^*(t_0,x)}\right)
\end{displaymath}
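The perturbation formula for delta T0 above can be verified numerically. The sketch below uses hypothetical linear square-slowness and perturbation profiles (u, uprime, du are illustrative choices, not from the text) and compares the predicted first-order change of T0 against a finite difference:

```python
import math

# Hypothetical smooth RMS square slowness [s^2/km^2] and a perturbation direction.
def u(t0):      return 0.25 + 0.01 * t0
def uprime(t0): return 0.01
def du(t0):     return 0.02 * math.sin(t0)

def T(t0, x, w=u):
    # hyperbolic moveout: T^2 = t0^2 + x^2 w(t0)
    return math.sqrt(t0 ** 2 + x ** 2 * w(t0))

def T0(t, x, w=u):
    # zero-offset time: invert T(., x) by bisection (T is increasing in t0 here)
    lo, hi = 1e-9, t
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if T(mid, x, w) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def s(t, x):
    # s = dT0/dt = t / (T0 + (x^2/2) u'(T0)), as recalled below
    t0 = T0(t, x)
    return t / (t0 + 0.5 * x ** 2 * uprime(t0))

t, x = 2.0, 1.5
t0 = T0(t, x)
# delta T0 = -(x^2/2) s(t,x) delta u(T0(t,x)) / t, from the text
predicted = -0.5 * x ** 2 * s(t, x) * du(t0) / t

# finite-difference check: perturb u by eps*du and re-invert the moveout
eps = 1e-6
u_pert = lambda t0_: u(t0_) + eps * du(t0_)
finite_diff = (T0(t, x, u_pert) - t0) / eps
print(predicted, finite_diff)
```

The two numbers agree to first order in eps, as the derivation requires.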
Recall that
\begin{displaymath}
s(t,x)=\frac{t}{T_0(t,x)+\frac{x^2}{2}\,
\frac{\partial u}{\partial t_0}(T_0(t,x))}
\end{displaymath}
so that
\begin{displaymath}
s(T^*(t_0,x),x)=
\frac{T^*(t_0,x)}{\Gamma(t_0,x)+\frac{x^2}{2}\,
\frac{\partial u}{\partial t_0}(\Gamma(t_0,x))}
\end{displaymath}
so
\begin{displaymath}
\delta (u(\Gamma(t_0,x))) =
\delta u(\Gamma(t_0,x))\left(1-\frac{\frac{x^2}{2}\,
\frac{\partial u}{\partial t_0}(\Gamma(t_0,x))}{\Gamma(t_0,x)+\frac{x^2}{2}\,
\frac{\partial u}{\partial t_0}(\Gamma(t_0,x))}\right)
\end{displaymath}
\begin{displaymath}
=\frac{\delta u(\Gamma(t_0,x))}{1+\frac{x^2}{2\Gamma(t_0,x)}\,
\frac{\partial u}{\partial t_0}(\Gamma(t_0,x))}
=\frac{\Gamma(t_0,x)}{T^*(t_0,x)}\,s(T^*(t_0,x),x)\,\delta u(\Gamma(t_0,x))
\end{displaymath}
Putting this all together,
\begin{displaymath}
\delta J_0[u]= \int\int \,dt_0\,dx\,B_1(t_0,x)\,(u(\Gamma(t_0,x))-u^*(t_0))\,(r^*(t_0))^2\,\delta u(\Gamma(t_0,x)) + O(\lambda)
\end{displaymath}
where
\begin{displaymath}
B_1(t_0,x)=\frac{B_0(t_0,x)}{1+\frac{x^2}{2\Gamma(t_0,x)}\,
\frac{\partial u}{\partial t_0}(\Gamma(t_0,x))}
=\frac{\Gamma(t_0,x)}{T^*(t_0,x)}\,
B_0(t_0,x)\,s(T^*(t_0,x),x)
\end{displaymath}
depends on u, u*, and $\phi$. To compute the gradient, change variables again,
from t0 to $\Gamma(t_0,x)$, for each x. Since
\begin{displaymath}
\frac{\partial \Gamma}{\partial t_0}(t_0,x)
=\frac{\partial}{\partial t_0}(T_0(T^*(t_0,x),x))
=\frac{\partial T_0}{\partial t}(T^*(t_0,x),x)\,
\frac{\partial T^*}{\partial t_0}(t_0,x)
=\frac{s(T^*(t_0,x),x)}{s^*(T^*(t_0,x),x)}
\end{displaymath}
and so
\begin{displaymath}
\frac{\partial \Gamma^{-1}}{\partial t_0}(t_0,x)
=\frac{s^*(T(t_0,x),x)}{s(T(t_0,x),x)}
\end{displaymath}
you get
\begin{displaymath}
\delta J_0[u]= \int\int \,dt_0\,dx\,B_1^*(t_0,x)\,(u(t_0)-u^*(\Gamma^{-1}(t_0,x)))\,(r^*(\Gamma^{-1}(t_0,x)))^2\,\delta u(t_0) + O(\lambda)
\end{displaymath}
with
\begin{displaymath}
B_1^*(t_0,x)=B_1(\Gamma^{-1}(t_0,x),x)\,\frac{s^*(T(t_0,x),x)}{s(T(t_0,x),x)}.
\end{displaymath}
Thus the L2 gradient of J0 is
\begin{displaymath}
\nabla J_0[u](t_0) = \int \,dx\,B_1^*(t_0,x)\,(u(t_0)-u^*(\Gamma^{-1}(t_0,x)))\,(r^*(\Gamma^{-1}(t_0,x)))^2 + O(\lambda)
\end{displaymath}
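The change of variables above relies on $\Gamma(\cdot,x)$ and $\Gamma^{-1}(\cdot,x)$ being mutual inverses. A minimal numeric check, with two hypothetical linear square-slowness profiles standing in for u and u*:

```python
import math

# Hypothetical trial and target square slownesses (illustrative choices only).
def u(t0):     return 0.25 + 0.010 * t0
def ustar(t0): return 0.24 + 0.012 * t0

def T(t0, x, w):
    # moveout time in square slowness w: T^2 = t0^2 + x^2 w(t0)
    return math.sqrt(t0 ** 2 + x ** 2 * w(t0))

def T0(t, x, w):
    # zero-offset time: invert T(., x) by bisection
    lo, hi = 1e-9, t
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if T(mid, x, w) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Gamma(t0, x):      # Gamma(t0,x)      = T0(T*(t0,x), x)
    return T0(T(t0, x, ustar), x, u)

def Gamma_inv(t0, x):  # Gamma^{-1}(t0,x) = T0*(T(t0,x), x)
    return T0(T(t0, x, u), x, ustar)

t0, x = 1.7, 1.2
roundtrip = Gamma_inv(Gamma(t0, x), x)
print(t0, roundtrip)
```

The roundtrip reproduces t0 to numerical precision, confirming the inverse pair.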
Both expressions, for J0 and for its gradient, suggest that these quantities
compare the trial square slowness u and the target square slowness u* at
different points (e.g. t0 vs. $\Gamma^{-1}(t_0,x)$), and this in turn makes
the implications for the determination of u difficult to understand.
Fortunately this is not really the case:
Key Lemma: There exists a function h(t0,x), depending
on the velocity v (or slowness u) and also on u*, having
the following properties:
- h(t0,x)>0 over the mute zone, and h is uniformly
bounded for t0,x in the mute zone, uniformly in the admissible u;
- $u(t_0)-u^*(\Gamma^{-1}(t_0,x)) = h(t_0,x)\,(u(t_0)-u^*(t_0))$.
Proof of Key Lemma: Note first that since
$T^*(T_0^*(t,x),x)=t$,
\begin{displaymath}
0=\frac{\partial T^*}{\partial x}(T_0^*(t,x),x)+\frac{\partial T^*}{\partial t_0}
(T^*_0(t,x),x)\,\frac{\partial T^*_0}{\partial x}(t,x)
\end{displaymath}
\begin{displaymath}
=\frac{x\,u^*(T_0^*(t,x))}{t}+\frac{1}{s^*(t,x)}\,\frac{\partial T^*_0}{\partial x}(t,x)
\end{displaymath}
so
\begin{displaymath}
\frac{\partial T^*_0}{\partial x}(t,x)=-s^*(t,x)\,\frac{x\,u^*(T_0^*(t,x))}{t}.
\end{displaymath}
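The first term in the sum above uses a fact not displayed in the text: assuming the hyperbolic relation $T^*(t_0,x)^2 = t_0^2 + x^2 u^*(t_0)$ from the previous section, differentiating in x gives
\begin{displaymath}
\frac{\partial T^*}{\partial x}(t_0,x)=\frac{x\,u^*(t_0)}{T^*(t_0,x)},
\end{displaymath}
which, evaluated at $t_0=T_0^*(t,x)$ (so that $T^*=t$), yields $x\,u^*(T_0^*(t,x))/t$.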
It follows, since
\begin{displaymath}
\Gamma^{-1}(t_0,x)=T_0^*(T(t_0,x),x),
\end{displaymath}
that
\begin{displaymath}
\frac{\partial \Gamma^{-1}}{\partial x}(t_0,x)=
\frac{\partial T_0^*}{\partial t}(T(t_0,x),x)\,\frac{\partial T}{\partial x}(t_0,x)+\frac{\partial T_0^*}{\partial x}(T(t_0,x),x)
\end{displaymath}
\begin{displaymath}
=s^*(T(t_0,x),x)\,\frac{x\,u(t_0)}{T(t_0,x)}-s^*(T(t_0,x),x)\,\frac{x\,u^*(T_0^*(T(t_0,x),x))}{T(t_0,x)}
\end{displaymath}
\begin{displaymath}
=\frac{x\,s^*(T(t_0,x),x)}{T(t_0,x)}\,(u(t_0)-u^*(\Gamma^{-1}(t_0,x)))
\end{displaymath}
Thus
\begin{displaymath}
u(t_0)-u^*(\Gamma^{-1}(t_0,x)) = u(t_0)-u^*(t_0) + \int_0^x\,dx'\,\frac{\partial}{\partial x'}(u(t_0)-u^*(\Gamma^{-1}(t_0,x')))
\end{displaymath}
\begin{displaymath}
=u(t_0)-u^*(t_0) - \int_0^x\,dx'\,\frac{\partial u^*}{\partial t_0}(\Gamma^{-1}(t_0,x'))\,\frac{\partial \Gamma^{-1}}{\partial x'}(t_0,x')
\end{displaymath}
\begin{displaymath}
=u(t_0)-u^*(t_0) - \int_0^x\,dx'\,\frac{\partial u^*}{\partial t_0}(\Gamma^{-1}(t_0,x'))\,\frac{x'\,s^*(T(t_0,x'),x')}{T(t_0,x')}\,(u(t_0)-u^*(\Gamma^{-1}(t_0,x')))
\end{displaymath}
\begin{displaymath}
=u(t_0)-u^*(t_0) + \int_0^x\,dx'\,g(t_0,x')\,(u(t_0)-u^*(\Gamma^{-1}(t_0,x')))
\end{displaymath}
where
\begin{displaymath}
g(t_0,x')=-\frac{\partial u^*}{\partial t_0}
(\Gamma^{-1}(t_0,x'))\,\frac{x'\,s^*(T(t_0,x'),x')}{T(t_0,x')}.
\end{displaymath}
This simple integral equation says that, as a function of x,
$u(t_0)-u^*(\Gamma^{-1}(t_0,x))$ satisfies a linear first-order ODE with
coefficient g; it therefore has the solution
\begin{displaymath}
u(t_0)-u^*(\Gamma^{-1}(t_0,x)) = h(t_0,x)\,(u(t_0)-u^*(t_0))
\end{displaymath}
where
\begin{displaymath}
h(t_0,x)=\exp \left(\int_0^x\,dx'\,g(t_0,x')\right)
\end{displaymath}
has the properties claimed for it in the statement of the lemma. Q.E.D.
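The Key Lemma identity can also be checked numerically. The sketch below uses the same hypothetical linear profiles as before (illustrative choices, not from the text), computes h by integrating g with the trapezoid rule, and compares the two sides of the identity:

```python
import math

# Hypothetical trial and target square slownesses; the target has a nonzero
# gradient so that g is nontrivial.
def u(t0):           return 0.25 + 0.010 * t0
def ustar(t0):       return 0.24 + 0.012 * t0
def ustar_prime(t0): return 0.012

def T(t0, x, w):
    # moveout time in square slowness w: T^2 = t0^2 + x^2 w(t0)
    return math.sqrt(t0 ** 2 + x ** 2 * w(t0))

def T0(t, x, w):
    # zero-offset time: invert T(., x) by bisection
    lo, hi = 1e-9, t
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if T(mid, x, w) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Gamma_inv(t0, x):
    return T0(T(t0, x, u), x, ustar)

def sstar(t, x):
    # s* = dT0*/dt = t / (T0* + (x^2/2) du*/dt0(T0*))
    t0 = T0(t, x, ustar)
    return t / (t0 + 0.5 * x ** 2 * ustar_prime(t0))

def g(t0, x):
    # g from the text (with its arguments written consistently)
    Tt = T(t0, x, u)
    return -ustar_prime(Gamma_inv(t0, x)) * x * sstar(Tt, x) / Tt

t0, X = 1.7, 1.2
n = 1000
xs = [X * i / n for i in range(n + 1)]
gs = [g(t0, xv) for xv in xs]
integral = sum(0.5 * (gs[i] + gs[i + 1]) * (X / n) for i in range(n))
h = math.exp(integral)

lhs = u(t0) - ustar(Gamma_inv(t0, X))
rhs = h * (u(t0) - ustar(t0))
print(lhs, rhs)
```

The two sides agree to quadrature accuracy, and h is positive as the lemma asserts.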
Now changing variables in the asymptotic formula for J0, and
applying the relation of the Key Lemma both to it and to the formula for
$\nabla J_0$, you obtain
\begin{displaymath}
J_0[u]=\int\int \,dt_0\,(u(t_0)-u^*(t_0))^2 \,dx\,B^*_0(t_0,x)\,h(t_0,x)^2\,(r^*(\Gamma^{-1}(t_0,x)))^2 + O(\lambda)
\end{displaymath}
\begin{displaymath}
\nabla J_0[u](t_0) = (u(t_0)-u^*(t_0)) \int \,dx\,B_1^*(t_0,x)\,h(t_0,x)\,(r^*(\Gamma^{-1}(t_0,x)))^2 + O(\lambda)
\end{displaymath}
where
\begin{displaymath}
B_0^*(t_0,x)=B_0(\Gamma^{-1}(t_0,x),x)\,\frac{s^*(T(t_0,x),x)}{s(T(t_0,x),x)}.
\end{displaymath}
Now B0* and B1* differ at each point of the mute zone only by factors or
divisors of s, s*, T, and the like, and these are bounded over the mute
zone, uniformly in the admissible u. Therefore there exists a constant C>0,
independent of u, for which
\begin{displaymath}
J_0[u] \le C \int\, dt_0\, (u(t_0)-u^*(t_0))\,\nabla J_0[u](t_0) + O(\lambda)
\end{displaymath}
and we have proved the
Theorem: If u, the RMS square slowness for an admissible velocity v,
is a stationary point of J0[u], then $J_0[u] = O(\lambda)$.
That is, for noise-free data, any stationary point of J0 is a
global minimizer, up to an asymptotically vanishing error.
Stanford Exploration Project
4/20/1999