There is a theory for general constraints in quadratic form minimization. I haven't found the theory to be useful in any application I've run into so far, but it should be useful for writing erudite theoretical articles.
Constraint equations are an underdetermined set of equations, say $\mathbf{G}\,\mathbf{m} = \mathbf{d}$ (the number of components in $\mathbf{m}$ exceeds that in $\mathbf{d}$), that must be solved exactly, while some other set, say $\mathbf{F}\,\mathbf{m} \approx \mathbf{f}$, is solved in the least-squares sense. This is formalized as

$$
\min_{\mathbf{m}} \; (\mathbf{F}\mathbf{m}-\mathbf{f})^\top(\mathbf{F}\mathbf{m}-\mathbf{f})
\quad \text{subject to} \quad \mathbf{G}\,\mathbf{m} = \mathbf{d}
\tag{1}
$$

whose solution satisfies the block system

$$
\begin{bmatrix} \mathbf{F}^\top\mathbf{F} & \mathbf{G}^\top \\ \mathbf{G} & \mathbf{0} \end{bmatrix}
\begin{bmatrix} \mathbf{m} \\ \boldsymbol{\lambda} \end{bmatrix}
=
\begin{bmatrix} \mathbf{F}^\top\mathbf{f} \\ \mathbf{d} \end{bmatrix}
\tag{2}
$$

where $\boldsymbol{\lambda}$ is a vector of auxiliary unknowns, one for each constraint.
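If you would rather see equation (2) at work than stare at it, here is a minimal numerical sketch that assembles and solves the block system with NumPy. The matrices $\mathbf{F}$, $\mathbf{G}$ and vectors $\mathbf{f}$, $\mathbf{d}$ are random stand-ins of my own choosing, shaped only so that the fitting equations are overdetermined and the constraints underdetermined; they are not data from the text.

```python
import numpy as np

# Sketch of equation (2): fit F m ~ f in the least-squares sense
# while enforcing G m = d exactly.  All inputs are random placeholders.
rng = np.random.default_rng(seed=1)
F = rng.standard_normal((8, 4))   # 8 fitting equations, 4 unknowns
f = rng.standard_normal(8)
G = rng.standard_normal((2, 4))   # 2 constraints on the 4 unknowns
d = rng.standard_normal(2)

n_unknowns = F.shape[1]
n_constraints = G.shape[0]

# Assemble the block matrix and right-hand side of equation (2).
K = np.block([[F.T @ F, G.T],
              [G, np.zeros((n_constraints, n_constraints))]])
rhs = np.concatenate([F.T @ f, d])

# Solve for the model m and the auxiliary unknowns lam in one shot.
solution = np.linalg.solve(K, rhs)
m = solution[:n_unknowns]
lam = solution[n_unknowns:]

print("constraint residual G m - d :", G @ m - d)          # ~ machine zero
print("data residual norm |F m - f|:", np.linalg.norm(F @ m - f))
```

The constraint residual comes out at machine precision, while the data residual is merely as small as the constraints allow.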
The great mathematician Lagrange apparently looked at the result, equation (2), and realized that he could have it far more simply by extremalizing the following quadratic form
$$
Q(\mathbf{m},\boldsymbol{\lambda}) \;=\;
(\mathbf{F}\mathbf{m}-\mathbf{f})^\top(\mathbf{F}\mathbf{m}-\mathbf{f})
\;+\; 2\,\boldsymbol{\lambda}^\top(\mathbf{G}\,\mathbf{m}-\mathbf{d})
\tag{3}
$$
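As a quick check, supplied here in the illustrative notation above: setting the derivatives of $Q$ to zero reproduces the two block rows of equation (2).

$$
\frac{\partial Q}{\partial \mathbf{m}} = 2\,\mathbf{F}^\top(\mathbf{F}\mathbf{m}-\mathbf{f}) + 2\,\mathbf{G}^\top\boldsymbol{\lambda} = \mathbf{0}
\quad\Longrightarrow\quad
\mathbf{F}^\top\mathbf{F}\,\mathbf{m} + \mathbf{G}^\top\boldsymbol{\lambda} = \mathbf{F}^\top\mathbf{f}
$$

$$
\frac{\partial Q}{\partial \boldsymbol{\lambda}} = 2\,(\mathbf{G}\,\mathbf{m}-\mathbf{d}) = \mathbf{0}
\quad\Longrightarrow\quad
\mathbf{G}\,\mathbf{m} = \mathbf{d}
$$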
When I sat down to write this book I promised myself to include
no equations without a practical use,
so now I'll explain how you can use these equations
to prepare pedantic articles on geophysical inversion.
You can derive marvelously complicated equations
by formally solving equation (2)
as you would any set.
The solution balloons up in size,
particularly since the elements of equation (2)
are submatrices and they cannot be commuted.
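To see the ballooning concretely, here is a sketch of the formal block-elimination solution of (2), again in the illustrative notation above and assuming $\mathbf{F}^\top\mathbf{F}$ is invertible; it is not a formula taken from the text.

$$
\boldsymbol{\lambda} = \left[\mathbf{G}\,(\mathbf{F}^\top\mathbf{F})^{-1}\mathbf{G}^\top\right]^{-1}
\left[\mathbf{G}\,(\mathbf{F}^\top\mathbf{F})^{-1}\mathbf{F}^\top\mathbf{f} - \mathbf{d}\right],
\qquad
\mathbf{m} = (\mathbf{F}^\top\mathbf{F})^{-1}\left(\mathbf{F}^\top\mathbf{f} - \mathbf{G}^\top\boldsymbol{\lambda}\right).
$$

Every factor is itself a matrix, so none of the products can be reordered, which is exactly why the expression grows.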
You justify these equations by stating that geophysical measurements
are finite in number
and thus all equations involving measurements are constraints
on an infinite-dimensional optimization problem
for finding the earth model, which is a continuous function.
You can obfuscate further by introducing weighting functions
that are inverse covariance matrices of the unknowns,
and thus fully as unknown as the unknowns themselves.
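Should you want a sketch of what such a weighted quadratic form might look like, one possibility is the following, where $\mathbf{W}$ is a residual weight and $\mathbf{C}_{\mathbf{m}}^{-1}$ is the inverse covariance matrix of the unknowns; both symbols are mine, not the text's.

$$
Q_W(\mathbf{m},\boldsymbol{\lambda}) =
(\mathbf{F}\mathbf{m}-\mathbf{f})^\top \mathbf{W}\,(\mathbf{F}\mathbf{m}-\mathbf{f})
+ \mathbf{m}^\top \mathbf{C}_{\mathbf{m}}^{-1}\,\mathbf{m}
+ 2\,\boldsymbol{\lambda}^\top(\mathbf{G}\,\mathbf{m}-\mathbf{d})
$$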