
ALGORITHM

The results of the preceding sections define the method of conjugate directions to consist of the following algorithmic steps:
  1. Choose initial model ${\bf m}_0$ and compute the residual ${\bf r}_0 = {\bf d - A m}_0$.
  2. At the $n$-th iteration, choose the initial search direction ${\bf c}_n$.
  3. If $n$ is greater than 1, optimize the search direction by adding a linear combination of the previous directions, according to equations (22) and (23), and compute the modified step direction ${\bf s}_n$.
  4. Find the step length $\alpha_n$ according to equation (6). The orthogonality principles (24) and (7) can simplify this equation to the form
    \begin{displaymath}
    \alpha_n = \frac{\left({\bf r}_{n-1}, {\bf A c}_n\right)}{\Vert{\bf A s}_n\Vert^2}\;.
    \end{displaymath} (26)

  5. Update the model ${\bf m}_n$ and the residual ${\bf r}_n$ according to equations (2) and (4).
  6. Repeat the iterations until the residual decreases to the required accuracy or for as long as it is practical.
At each subsequent step, the residual is guaranteed not to increase, according to equation (8). Furthermore, optimizing the search direction guarantees that the convergence rate does not decrease in comparison with equation (9). The only assumption needed to reach this conclusion is that the operator ${\bf A}$ is linear. However, without additional assumptions, we cannot guarantee global convergence of the algorithm to the least-squares solution of equation (1) in a finite number of steps.
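
For concreteness, the following is a minimal NumPy sketch of the six steps above. It assumes a dense matrix ${\bf A}$, a zero starting model, and the gradient ${\bf A}^T{\bf r}$ as the initial direction ${\bf c}_n$ (the algorithm itself leaves this choice open); the function name, iteration count, and tolerance are illustrative and not part of the original formulation.

\begin{verbatim}
import numpy as np

def conjugate_directions(A, d, niter, tol=1e-8):
    # Step 1: zero initial model and residual r_0 = d - A m_0
    m = np.zeros(A.shape[1])
    r = d - A @ m
    S, AS = [], []   # previous directions s_j and their images A s_j
    for n in range(niter):
        # Step 2: initial search direction (assumed here: gradient A^T r)
        c = A.T @ r
        Ac = A @ c
        # Step 3: add a linear combination of previous directions
        # so that A s_n is orthogonal to all A s_j (analogue of eqs. 22-23)
        s, As = c.copy(), Ac.copy()
        for sj, Asj in zip(S, AS):
            beta = -np.dot(Ac, Asj) / np.dot(Asj, Asj)
            s += beta * sj
            As += beta * Asj
        # Step 4: step length, equation (26)
        denom = np.dot(As, As)
        if denom == 0.0:
            break
        alpha = np.dot(r, Ac) / denom
        # Step 5: update model and residual
        m += alpha * s
        r -= alpha * As
        S.append(s)
        AS.append(As)
        # Step 6: stop once the residual is small enough
        if np.linalg.norm(r) <= tol * np.linalg.norm(d):
            break
    return m
\end{verbatim}

On a hypothetical overdetermined test problem, calling m = conjugate_directions(A, d, niter=A.shape[1]) and monitoring np.linalg.norm(d - A @ m) at each iteration should exhibit the non-increasing residual promised by equation (8).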

