Model fitting by least squares


# KRYLOV SUBSPACE ITERATIVE METHODS

The solution time for simultaneous linear equations grows cubically with the number of unknowns. There are three regimes for solution; which one is applicable depends on the number of unknowns n. For n of three or less, we use analytical methods. We also sometimes use analytical methods on matrices of size 4 × 4 if the matrix contains many zeros. In 1988, my desktop workstation solved a 100 × 100 system in a minute. Ten years later it would do a 600 × 600 system in about a minute. A nearby more powerful computer would do 1000 × 1000 in a minute. Since the computing effort increases with the third power of the size, and since 4^3 = 64 ≈ 60, an hour's work solves a four-times-larger matrix, namely 4000 × 4000, on the more powerful machine. For significantly larger values of n, exact numerical methods must be abandoned, and iterative methods must be used.
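The scaling argument above can be sketched in a few lines; the function name and sizes here are illustrative, not from the text:

```python
# Hypothetical illustration: how the cubic cost of exact solvers limits
# the matrix size n that a fixed time budget can handle.

def max_size_for_budget(base_n, time_factor):
    """If a machine solves a base_n x base_n system in one time unit,
    return the largest n solvable in `time_factor` time units,
    assuming solve time scales as n**3."""
    return int(base_n * time_factor ** (1.0 / 3.0))

# 1000 x 1000 in a minute; an hour is 60 minutes, and 60**(1/3) ~ 3.9,
# so roughly a four-times-larger matrix fits in an hour.
print(max_size_for_budget(1000, 60))  # 3914
```

This is just the cube-root arithmetic behind "an hour's work solves a four-times-larger matrix."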

(The compute time for a rectangular matrix is slightly more pessimistic: it is the product of the number of data points n and the square of the number of model points, n m^2. This happens to be the cost of computing the matrix F^T F from F. Since the number of data points generally exceeds the number of model points by a substantial factor (to allow averaging of noises), we are left with significantly fewer than 4000 points in model space.)
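As a hypothetical illustration of that operation count (the sizes below are made up), each of the m × m entries of F^T F is an inner product of length n, so forming the normal-equations matrix costs about n m^2 multiplications:

```python
import numpy as np

n, m = 6000, 400            # data points, model points (illustrative sizes)
F = np.random.randn(n, m)   # a stand-in rectangular modeling matrix

# Each of the m*m entries of F.T @ F is an inner product of length n,
# so forming the normal-equations matrix costs about n * m**2 multiplies.
FtF = F.T @ F

print(FtF.shape)   # (400, 400): the model-space matrix to be solved
print(n * m**2)    # rough multiplication count: 960000000
```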

A square image packed into a 4096-point vector is a 64 × 64 array. The computer power for linear algebra to give us solutions that fit in a k × k image is thus proportional to k^6, which means that even though computer power grows rapidly, imaging resolution using "exact numerical methods" hardly grows at all from our current practical limit.
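The k^6 law makes the pessimism concrete; this small sketch (function name illustrative) computes how little the image side k grows for a given growth in computer power:

```python
# Hypothetical arithmetic behind the k**6 claim: a k x k image has
# n = k*k unknowns, and exact solution costs about n**3 = k**6 operations.

def resolution_gain(compute_gain):
    """Factor by which the image side k grows when compute power grows
    by `compute_gain`, under the k**6 cost law."""
    return compute_gain ** (1.0 / 6.0)

# A thousandfold increase in computer power barely triples resolution:
print(resolution_gain(1000.0))  # ~3.16
```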

The retina in our eyes captures an image of size about 1000 × 1000, which is a lot bigger than 64 × 64. Life offers us many occasions where final images exceed the 4000 points of a 64 × 64 array. To make linear algebra (and inverse theory) relevant to such applications, we investigate special techniques. A numerical technique known as the "conjugate-direction method" works well for all values of m and n and is our subject here. As with most simultaneous-equation solvers, an exact answer (assuming exact arithmetic) is attained in a finite number of steps. And if m and n are too large to allow enough iterations, the iterative methods can be interrupted at any stage, the partial result often proving useful. Whether or not a partial result actually is useful is the subject of much research; naturally, the results vary from one application to the next.
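A minimal sketch of one member of the conjugate-direction family, the conjugate-gradient method, assuming a symmetric positive-definite matrix; all names and sizes here are illustrative, not the text's own code:

```python
import numpy as np

def conjugate_gradient(A, b, niter=None, tol=1e-10):
    """Minimal conjugate-gradient sketch (a conjugate-direction method).
    Solves A x = b for symmetric positive-definite A. Stopping early
    returns a partial result, which is often useful, as the text notes."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    p = r.copy()                   # first search direction
    rr = r @ r
    for _ in range(niter or len(b)):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # step length along direction p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:  # small residual: stop iterating
            break
        p = r + (rr_new / rr) * p  # new direction, conjugate to the old ones
        rr = rr_new
    return x

# With exact arithmetic, the answer is exact in at most n steps.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)      # build a symmetric positive-definite matrix
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))       # True
```

Interrupting the loop after a few iterations yields the "partial result" the paragraph describes: an approximate model that fits the data progressively better as iterations proceed.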

Subsections

- Model fitting by least squares


2011-07-17