
PRECONDITIONING THE REGULARIZATION

The basic formulation of a geophysical estimation problem consists of setting up two goals, one for data fitting and the other for model shaping. With two goals, preconditioning is somewhat different. The two goals may be written as:

\begin{align}
\bold 0 \;&\approx\; \bold F \bold m - \bold d \tag{6} \\
\bold 0 \;&\approx\; \bold A \bold m \tag{7}
\end{align}

which define two residuals, a so-called ``data residual'' and a ``model residual,'' that are usually minimized by conjugate-direction, least-squares methods.
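For concreteness, here is a minimal NumPy sketch (not code from this book) of how the two goals are typically combined: the operators are stacked into a single overdetermined system $[\bold F;\,\bold A]\,\bold m \approx [\bold d;\,\bold 0]$ and handed to a least-squares solver. For brevity the sketch calls NumPy's direct lstsq in place of a conjugate-direction iteration, and the operators F, A, and data d are arbitrary placeholders.

import numpy as np

# Placeholder operators and data -- any F, A, d with matching column counts work.
rng = np.random.default_rng(1)
F = rng.standard_normal((30, 10))                # data-fitting goal:   0 ~ F m - d
A = 100.0 * (np.eye(10) - np.eye(10, k=1))       # model-shaping goal:  0 ~ A m
d = rng.standard_normal(30)

# Stack the two goals into one least-squares problem  [F; A] m ~ [d; 0].
G   = np.vstack([F, A])
rhs = np.concatenate([d, np.zeros(A.shape[0])])

m, *_ = np.linalg.lstsq(G, rhs, rcond=None)      # direct solve, standing in for CG

print("data  residual |F m - d| =", np.linalg.norm(F @ m - d))
print("model residual |A m|     =", np.linalg.norm(A @ m))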

To fix ideas, let us examine a toy example. The data and the first three rows of the matrix below are random numbers truncated to integers; the remaining rows are the model-roughening operator $ \bold A$, a first-differencing operator times 100.

 d(m)     F(m,n)                                            iter   Sum(|grad|)

-100.     62.  18.   2.  75.  99.  45.  93. -41. -15.  80.     1    69262.0000
 -83.     31.  80.  92. -67.  72.  81. -41.  87. -17. -38.     2    19012.8203
  20.      3. -21.  58.  38.   9.  18. -81.  22. -14.  20.     3    10639.0791
   0.    100.-100.   0.   0.   0.   0.   0.   0.   0.   0.     4     4578.7988
   0.      0. 100.-100.   0.   0.   0.   0.   0.   0.   0.     5     2332.3352
   0.      0.   0. 100.-100.   0.   0.   0.   0.   0.   0.     6     1676.6978
   0.      0.   0.   0. 100.-100.   0.   0.   0.   0.   0.     7      622.7415
   0.      0.   0.   0.   0. 100.-100.   0.   0.   0.   0.     8      454.1242
   0.      0.   0.   0.   0.   0. 100.-100.   0.   0.   0.     9      290.6053
   0.      0.   0.   0.   0.   0.   0. 100.-100.   0.   0.    10      216.0749
   0.      0.   0.   0.   0.   0.   0.   0. 100.-100.   0.    11        1.0488
   0.      0.   0.   0.   0.   0.   0.   0.   0. 100.-100.    12        0.0061
   0.      0.   0.   0.   0.   0.   0.   0.   0.   0. 100.    13        0.0000

The right-most column shows the sum of the absolute values of the gradient. Notice that at the 11th iteration the gradient suddenly plunges. Because there are ten unknowns and the matrix is clearly of full rank, conjugate-gradient theory tells us to expect the exact solution at the 11th iteration. This sudden convergence is the first miracle of conjugate gradients. The failure to reach a precisely zero gradient at the 11th step is a precision issue that could be addressed with double-precision arithmetic. The residual magnitude (not shown) does not approach zero, because 13 linear equations defeat the ten adjustable coefficients.
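The sudden convergence can be checked with a short experiment. The following sketch (not the program that produced the table above, so the individual numbers will differ) rebuilds a random 3-by-10 F, the roughening operator A as a first difference times 100, and runs conjugate gradients on the normal equations of the stacked system, printing the sum of absolute gradient values at each of 13 iterations. With ten unknowns, the printed sum should collapse at about the eleventh line, up to round-off.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 random data-fitting equations plus 10 roughening equations, 10 unknowns.
F = np.trunc(100 * rng.uniform(-1, 1, size=(3, 10)))   # random numbers truncated to integers
d = np.trunc(100 * rng.uniform(-1, 1, size=3))
A = 100.0 * (np.eye(10) - np.eye(10, k=1))             # first-differencing operator times 100

G   = np.vstack([F, A])                  # 13 equations in 10 unknowns
rhs = np.concatenate([d, np.zeros(10)])

# Conjugate gradients on the normal equations G^T G m = G^T rhs (CGLS-style).
m = np.zeros(10)
g = G.T @ (G @ m - rhs)                  # gradient of 0.5 |G m - rhs|^2
s = -g                                   # first search direction = steepest descent
for it in range(1, 14):
    print(f"iter {it:2d}   Sum(|grad|) = {np.abs(g).sum():14.4f}")
    if g @ g == 0.0:                     # already converged exactly; avoid 0/0 below
        break
    Gs    = G @ s
    alpha = -(g @ s) / (Gs @ Gs)         # exact line search along s
    m    += alpha * s
    g_new = G.T @ (G @ m - rhs)
    beta  = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves update of the direction
    s     = -g_new + beta * s
    g     = g_new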


