Recall the fitting goals (10), with the weights $\mathbf{W}$ being absorbed into the operator $\mathbf{F}$ and the data $\mathbf{d}$:
\begin{equation}
0 \;\approx\; \mathbf{r} \;=\; \mathbf{F}\mathbf{m} - \mathbf{d} \;=\; \mathbf{F}\mathbf{S}\mathbf{p} - \mathbf{d} .
\tag{17}
\end{equation}
Without preconditioning, we have the search direction
\begin{equation}
\Delta\mathbf{m} \;=\; \mathbf{F}^\top \mathbf{r} ,
\tag{18}
\end{equation}
and with preconditioning, we have the search direction
\begin{equation}
\Delta\mathbf{p} \;=\; (\mathbf{F}\mathbf{S})^\top \mathbf{r} \;=\; \mathbf{S}^\top \mathbf{F}^\top \mathbf{r} .
\tag{19}
\end{equation}
The essential feature of preconditioning is not that we perform the iterative optimization in terms of the variable $\mathbf{p}$. The essential feature is that we use a search direction that is a gradient with respect to $\mathbf{p}$, not $\mathbf{m}$. Using $\mathbf{m} = \mathbf{S}\mathbf{p}$, we have $\Delta\mathbf{m} = \mathbf{S}\,\Delta\mathbf{p}$, which enables us to define a good search direction in $\mathbf{m}$ space:
\begin{equation}
\Delta\mathbf{m} \;=\; \mathbf{S}\,\Delta\mathbf{p} \;=\; \mathbf{S}\mathbf{S}^\top \mathbf{F}^\top \mathbf{r} .
\tag{20}
\end{equation}
Define the gradient by $\mathbf{g} = \mathbf{F}^\top \mathbf{r}$, and notice that the preconditioned search direction (20) is this gradient scaled:
\begin{equation}
\Delta\mathbf{m} \;=\; \mathbf{S}\mathbf{S}^\top \mathbf{g} .
\tag{21}
\end{equation}
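Here is a minimal NumPy sketch of the algebra in equations (17) through (21), using small random matrices as stand-ins for a real modeling operator $\mathbf{F}$, preconditioner $\mathbf{S}$, and data $\mathbf{d}$ (all hypothetical). It checks that mapping the $\mathbf{p}$-space direction (19) back to model space reproduces the scaled gradient (21).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((6, 4))   # hypothetical modeling operator
S = rng.standard_normal((4, 4))   # hypothetical preconditioner
d = rng.standard_normal(6)        # hypothetical data
m = rng.standard_normal(4)        # some current model estimate

r = F @ m - d                     # residual, equation (17)
g = F.T @ r                       # gradient, equation (18)
dp = (F @ S).T @ r                # p-space search direction, equation (19)
dm = S @ dp                       # mapped back to m-space, equation (20)
print(np.allclose(dm, S @ S.T @ g))  # True: matches equation (21)
\end{verbatim}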
The search direction (21) shows a positive-definite operator $\mathbf{S}\mathbf{S}^\top$ scaling the gradient. The components of any gradient vector are independent of one another. Each independently points (negatively) along a direction of descent. Obviously, each can be scaled by any positive number. Now we have found that we can also scale the gradient vector by a positive-definite matrix and still expect the conjugate-direction algorithm to descend, as always, to the ``exact'' answer in a finite number of steps.
The reason is that modifying the search direction with $\mathbf{S}\mathbf{S}^\top$ is equivalent to solving a conjugate-gradient problem in $\mathbf{p}$.
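To make that equivalence concrete, here is a small self-contained sketch (a toy problem with hypothetical random $\mathbf{F}$, $\mathbf{S}$, and $\mathbf{d}$, and a hypothetical helper \texttt{cg\_normal}; not the solver of this book). It runs plain conjugate gradients on the normal equations of $\mathbf{F}\mathbf{S}$ in $\mathbf{p}$ space, then conjugate gradients in $\mathbf{m}$ space with every gradient scaled by $\mathbf{S}\mathbf{S}^\top$, and confirms both routes descend to the same answer.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5                       # toy data and model sizes (hypothetical)
F = rng.standard_normal((n, k))   # modeling operator, weights absorbed
S = np.diag(1.0 + rng.random(k))  # preconditioner; S S^T is positive definite
d = rng.standard_normal(n)        # data, weights absorbed

def cg_normal(A, b, x, M, niter):
    """CG on the normal equations A^T A x = A^T b, with each gradient
    scaled by the positive-definite matrix M (M = I gives plain CG)."""
    r = A.T @ (b - A @ x)         # gradient of the misfit (negated)
    z = M @ r                     # scaled gradient
    s = z.copy()                  # search direction
    for _ in range(niter):
        As = A.T @ (A @ s)
        alpha = (r @ z) / (s @ As)
        x = x + alpha * s
        r_new = r - alpha * As
        z_new = M @ r_new
        beta = (r_new @ z_new) / (r @ z)
        s = z_new + beta * s
        r, z = r_new, z_new
    return x

I = np.eye(k)
# Route 1: optimize in p with plain CG on the operator F S, then m = S p.
p_hat = cg_normal(F @ S, d, np.zeros(k), I, niter=k)
m1 = S @ p_hat
# Route 2: optimize in m directly, scaling each gradient by S S^T, eq. (21).
m2 = cg_normal(F, d, np.zeros(k), S @ S.T, niter=k)
print(np.allclose(m1, m2))        # True: the two routes agree
\end{verbatim}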
We'll see in a later chapter that specifying $\mathbf{S}$ amounts to specifying a prior expectation of the spectrum of the model $\mathbf{m}$.
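As a rough preview of that idea (a hypothetical illustration, not the later chapter's argument): if $\mathbf{S}$ is chosen to be a smoothing filter, a model $\mathbf{m} = \mathbf{S}\mathbf{p}$ built from a spectrally white preconditioning variable $\mathbf{p}$ inherits the filter's spectrum, so choosing $\mathbf{S}$ expresses an expectation about the model's spectral content.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
k = 512
p = rng.standard_normal(k)                 # white preconditioning variable
triangle = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
triangle /= triangle.sum()                 # a hypothetical smoothing S
m = np.convolve(p, triangle, mode="same")  # model m = S p

spec_p = np.abs(np.fft.rfft(p))
spec_m = np.abs(np.fft.rfft(m))
ratio = spec_m / spec_p                    # approximately |S(omega)|
print(ratio[:5])    # near 1: low frequencies pass
print(ratio[-5:])   # near 0.1: high frequencies are suppressed
\end{verbatim}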