Now let $x_t$ be a time series made up of identically distributed
random numbers: $m_x$ and $\sigma_x$ do not depend on time.
Let us also suppose that they are independently chosen; this
means in particular that for any two different times $t$ and $s$
(that is, $t \ne s$):
$$ E(x_t \, x_s) \;=\; E(x_t) \, E(x_s) \eqno(13) $$
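As a quick numerical illustration (not part of the original text), a short Monte Carlo sketch can confirm that independently drawn samples satisfy equation (13) to within sampling error; the distribution, sample size, and variable names below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independently drawn, identically distributed random variables.
x_t = rng.normal(loc=2.0, scale=1.0, size=100_000)
x_s = rng.normal(loc=2.0, scale=1.0, size=100_000)

# For independent samples, E(x_t x_s) should match E(x_t) E(x_s).
lhs = np.mean(x_t * x_s)            # estimate of E(x_t x_s)
rhs = np.mean(x_t) * np.mean(x_s)   # estimate of E(x_t) E(x_s)
print(lhs, rhs)                     # the two values agree to within sampling error
```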
Suppose we have a sample of $n$ points of $x_t$ and are
trying to determine the value of $m_x$.
We could make an estimate
of the mean $m_x$ with the formula
$$ \hat m_x \;=\; {1 \over n} \sum^n_{t = 1} x_t \eqno(14) $$
A somewhat more elaborate method of estimating the mean
would be to take a weighted average.
Let $w_t$ define a set of weights normalized so that
$$ \sum w_t \;=\; 1 \eqno(15) $$
With these weights, the more elaborate estimate
of the mean is
$$ \hat m_x \;=\; \sum w_t \, x_t \eqno(16) $$
Actually (14) is just a special case of (16);
in (14) the weights are $w_t = 1/n$.
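As a small illustrative sketch (the data and the weights here are arbitrary, chosen only so that the weights satisfy equation (15)), both estimates (14) and (16) can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=1000)   # a sample of n points of x_t
n = len(x)

# Equation (14): equal weights w_t = 1/n give the ordinary sample mean.
m_hat_simple = x.sum() / n

# Equation (16): any weights that sum to one (equation (15)) also give an estimate.
w = rng.random(n)
w /= w.sum()                  # normalize so that sum(w) = 1
m_hat_weighted = np.dot(w, x)

print(m_hat_simple, m_hat_weighted)   # both should be near the true mean of 3.0
```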
Further, the weights could be convolved with the random time series
to compute local averages of that series, thus smoothing it.
The weights are simply a filter response whose coefficients happen
to be positive and clustered together.
Figure 6 shows an example: a random walk function
together with a locally smoothed version of itself.
Figure 6: Random walk and itself smoothed (and shifted downward).
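A minimal sketch of the smoothing idea in Figure 6, assuming a simple boxcar (moving-average) filter for the clustered positive weights; the walk length, filter length, and downward shift are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random walk: cumulative sum of independent steps.
steps = rng.normal(size=500)
walk = np.cumsum(steps)

# Positive, clustered weights that sum to one (a boxcar filter).
nw = 21
w = np.ones(nw) / nw

# Convolving the weights with the series gives local averages, i.e. a smoothed walk.
smooth = np.convolve(walk, w, mode='same')

# Shift the smoothed curve downward for display, as in Figure 6.
smooth_shifted = smooth - 5.0
```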