
PREJUDICE, BULLHEADEDNESS, AND CROSS VALIDATION

First we look at the data $\bold d$. Then we think about a model $\bold m$, and an operator $\bold L$ to link the model and the data. Sometimes the operator is merely the first term in a series expansion about $(\bold m_0, \bold d_0)$. Then we fit $\bold d-\bold d_0 \approx \bold L ( \bold m-\bold m_0)$. To fit the model, we must reduce the fitting residuals. Realizing that the importance of a data residual is not always simply the size of the residual but is generally a function of it, we conjure up (a topic for later chapters) a weighting operator $\bold W$ (which could be a filter). This defines our data residual:
\begin{displaymath}
\bold r_d \eq \bold W [\, \bold L ( \bold m - \bold m_0 ) \ -\ ( \bold d - \bold d_0 ) \,]
\end{displaymath} (19)
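To make equation (19) concrete, here is a minimal sketch, assuming small dense NumPy arrays; every size and operator below is a placeholder invented for illustration, not anything prescribed by the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nd, nm = 6, 4                                # data and model sizes (arbitrary)
L = rng.normal(size=(nd, nm))                # linearized modeling operator
W = np.diag(rng.uniform(0.5, 1.5, size=nd))  # weighting operator (diagonal here)
m0, d0 = np.zeros(nm), np.zeros(nd)          # expansion point
m, d = rng.normal(size=nm), rng.normal(size=nd)

r_d = W @ (L @ (m - m0) - (d - d0))          # data residual of equation (19)
\end{verbatim}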

Next we realize that the data might not be adequate to determine the model, perhaps because our comfortable dense sampling of the model ill fits our economical sparse sampling of data. Thus we adopt a fitting goal that mathematicians call ``regularization'' and that we might call a ``model style'' goal, or more simply, a quantification of our prejudice about models. We express it by choosing an operator $\bold A$, often simply a roughener like a gradient (the choice again a topic in this and later chapters). It defines our model residual by $\bold A \bold m$ or $\bold A ( \bold m-\bold m_0)$; say we choose
\begin{displaymath}
\bold r_m \eq \bold A \bold m
\end{displaymath} (20)
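One common concrete choice for $\bold A$ is a first-difference (gradient) roughener. A minimal sketch on a 1-D model, with the size chosen only for illustration:
\begin{verbatim}
import numpy as np

nm = 4
# Rows of A are (-1, +1) first differences, a simple gradient roughener.
A = np.eye(nm - 1, nm, k=1) - np.eye(nm - 1, nm)
m = np.random.default_rng(1).normal(size=nm)
r_m = A @ m                                  # model residual of equation (20)
\end{verbatim}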

In an ideal world, our model prejudice would not conflict with measured data; however, life is not so simple. Since conflicts between data and preconceived notions invariably arise (and they are why we go to the expense of acquiring data), we need an adjustable parameter that measures our ``bullheadedness,'' how much we intend to stick to our preconceived notions in spite of contradicting data. This parameter is generally called epsilon $\epsilon$, because we like to imagine that our bullheadedness is small. (In mathematics, $\epsilon$ is often taken to be an infinitesimally small quantity.) Although any bullheadedness seems like a bad thing, it must be admitted that measurements are imperfect too. Thus, as a practical matter, we often find ourselves minimizing
\begin{displaymath}
\min \quad := \quad
\bold r_d \cdot \bold r_d \ +\ \epsilon^2\ \bold r_m \cdot \bold r_m
\end{displaymath} (21)
and wondering what to choose for $\epsilon$. I have two suggestions. My simplest suggestion is to choose $\epsilon$ so that the residual of data fitting matches that of model styling. Thus
\begin{displaymath}
\epsilon \eq \sqrt{ \bold r_d \cdot \bold r_d \over \bold r_m \cdot \bold r_m }
\end{displaymath} (22)
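As a concrete illustration, here is a minimal sketch of minimizing (21) for a frozen $\epsilon$ by normal equations and then rescaling $\epsilon$ by (22). It assumes small dense NumPy arrays, takes $\bold m_0 = 0$ and $\bold d_0 = 0$ for brevity, and the function names are illustrative inventions, not from the text.
\begin{verbatim}
import numpy as np

def solve_regularized(L, W, A, d, eps):
    """Return m minimizing |W(L m - d)|^2 + eps^2 |A m|^2."""
    WL = W @ L
    lhs = WL.T @ WL + eps**2 * (A.T @ A)     # L'W'WL + eps^2 A'A
    rhs = WL.T @ (W @ d)                     # L'W'W d
    return np.linalg.solve(lhs, rhs)

def eps_residual_balance(L, W, A, d, eps):
    """Suggestion (22): epsilon equating data and model residual norms."""
    m = solve_regularized(L, W, A, d, eps)
    r_d = W @ (L @ m - d)
    r_m = A @ m
    return np.sqrt((r_d @ r_d) / (r_m @ r_m))
\end{verbatim}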
My second suggestion is to think of the force on our final solution. In physics, force is associated with a gradient. We have a gradient for the data fitting and another for the model styling:
\begin{eqnarray}
\bold g_d &=& \bold L' \bold W' \bold r_d \qquad (23) \\
\bold g_m &=& \bold A' \bold r_m \qquad (24)
\end{eqnarray}
We could balance these forces by the choice
\begin{displaymath}
\epsilon \eq \sqrt{ \bold g_d \cdot \bold g_d \over \bold g_m \cdot \bold g_m }
\end{displaymath} (25)
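Under the same assumptions, a sketch of the force-balance choice (25), reading the primes in (23) and (24) as transposes and reusing the illustrative solve_regularized above:
\begin{verbatim}
import numpy as np

def eps_gradient_balance(L, W, A, d, eps):
    """Suggestion (25): epsilon balancing the two gradient 'forces'."""
    m = solve_regularized(L, W, A, d, eps)
    r_d = W @ (L @ m - d)
    g_d = L.T @ (W.T @ r_d)                  # equation (23): g_d = L'W' r_d
    g_m = A.T @ (A @ m)                      # equation (24): g_m = A' r_m
    return np.sqrt((g_d @ g_d) / (g_m @ g_m))
\end{verbatim}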
Although we often ignore $\epsilon$ in discussing the formulation of a problem, when the time comes to solve the problem, reality intercedes. Generally, $\bold r_d$ has different physical units than $\bold r_m$ (likewise $\bold g_d$ and $\bold g_m$), and we cannot allow our solution to depend on the accidental choice of units in which we express the problem. I have had much experience choosing $\epsilon$, but it is only recently that I boiled it down to the above two suggestions. Normally I also try other values, like double or half those of the above choices, and I examine the solutions for subjective appearance. If you find any insightful examples, please tell me about them.

Computationally, we could choose a new $\epsilon$ with each iteration, but it is more expeditious to freeze $\epsilon$, solve the problem, recompute $\epsilon$, and solve the problem again. I have never seen a case where more than one iteration was necessary.
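That cycle might look like the following sketch, again with the illustrative helpers above; the default of two passes reflects the observation that one re-estimation suffices.
\begin{verbatim}
def choose_eps(L, W, A, d, eps0=1.0, passes=2):
    """Freeze epsilon, solve, recompute epsilon, and solve again."""
    eps = eps0
    for _ in range(passes):
        eps = eps_residual_balance(L, W, A, d, eps)
    return eps
\end{verbatim}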

People who work with small problems (fewer than about $10^3$ vector components) have access to an attractive theoretical approach called cross-validation. Simply speaking, we could solve the problem many times, each time omitting a different data value. Each solution would provide a model that could be used to predict the omitted data value. The quality of these predictions is a function of $\epsilon$, and this provides a guide to finding it. My objections to cross-validation are twofold: First, I don't know how to apply it to the large problems we solve in this book (I should think more about it); and second, people who worry much about $\epsilon$ perhaps should first think more carefully about their choice of the filters $\bold W$ and $\bold A$, which is the focus of this book. Notice that both $\bold W$ and $\bold A$ can be defined with a scaling factor, which is like scaling $\epsilon$. Often more important in practice, with $\bold W$ and $\bold A$ we have a scaling factor that need not be constant but can be a function of space or spatial frequency within the data space and/or model space.
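For small problems, a leave-one-out sketch under the same illustrative assumptions (loo_score is an invented name, and solve_regularized is the hypothetical helper above):
\begin{verbatim}
import numpy as np

def loo_score(L, W, A, d, eps):
    """Mean squared error predicting each omitted data value in turn."""
    errs = []
    for k in range(len(d)):
        keep = np.arange(len(d)) != k
        m = solve_regularized(L[keep], W[np.ix_(keep, keep)], A, d[keep], eps)
        errs.append((L[k] @ m - d[k]) ** 2)  # predict the omitted value
    return np.mean(errs)

# Keep the epsilon whose omitted-value predictions are best:
# best_eps = min((0.1, 0.3, 1.0, 3.0), key=lambda e: loo_score(L, W, A, d, e))
\end{verbatim}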

 

