
Null space and iterative methods

In applications where we fit $\bold d \approx\bold F \bold x$, there may exist a vector (or a family of vectors) defined by the condition $\bold 0 =\bold F \bold x_{\rm null}$. This family is called the null space. For example, if the operator $\bold F$ is a time derivative, the null space is spanned by the constant function; if the operator is a second derivative, the null space is spanned by two components, a constant function and a linear function, or combinations of both. The null space is a family of model components that have no effect on the data.
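To make the example concrete, here is a small numerical sketch (an illustration of the point above, not code from this text) in which $\bold F$ is a finite-difference matrix; the first difference annihilates a constant vector, and the second difference annihilates both a constant and a linear ramp:

```python
import numpy as np

n = 6
# Forward difference approximating d/dt: (F1 x)_k = x[k+1] - x[k]
F1 = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
# Second difference: (F2 x)_k = x[k+2] - 2 x[k+1] + x[k]
F2 = np.eye(n - 2, n, k=2) - 2 * np.eye(n - 2, n, k=1) + np.eye(n - 2, n)

const = np.ones(n)                  # constant function
ramp  = np.arange(n, dtype=float)   # linear function

print(F1 @ const)             # zeros: constants lie in the null space of d/dt
print(F2 @ const, F2 @ ramp)  # zeros: constants and ramps null the second derivative
```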

When we use the steepest-descent method, we iteratively find solutions with this update:

\begin{align}
\bold x_{i+1} &= \bold x_i + \alpha\, \Delta \bold x \tag{61}\\
\bold x_{i+1} &= \bold x_i + \alpha\, \bold F\T\,\bold r \tag{62}\\
\bold x_{i+1} &= \bold x_i + \alpha\, \bold F\T\,(\bold F\bold x -\bold d) \tag{63}
\end{align}
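The iteration can be sketched in a few lines of code. This is a minimal illustration, not code from this text: the function name is hypothetical, and the step size $\alpha$ is chosen here by the exact line search for the quadratic objective, one common choice among several.

```python
import numpy as np

def steepest_descent(F, d, x0, niter=100):
    """Minimize ||F x - d||^2 from x0, following x <- x + alpha F^T r."""
    x = x0.copy()
    for _ in range(niter):
        r  = F @ x - d              # residual
        dx = F.T @ r                # gradient direction used in the text
        Fdx = F @ dx
        denom = Fdx @ Fdx
        if denom == 0.0:            # gradient vanished: converged (up to null space)
            break
        alpha = -(Fdx @ r) / denom  # exact line search for the quadratic objective
        x = x + alpha * dx
    return x
```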

After we have iterated to convergence, the gradient $\Delta \bold x=\bold F\T\, \bold r$ vanishes. Adding any $\bold x_{\rm null}$ to $\bold x$ does not change the residual $\bold r=\bold F\bold x -\bold d$. Because $\bold r$ is unchanged, $\Delta \bold x=\bold F\T\, \bold r$ remains zero, and $\bold x_{i+1} =\bold x_i$. Thus, any null-space component in the initial guess $\bold x_0$ remains there, unaffected by the gradient-descent process. So, in the presence of a null space, the answer we get from an iterative method depends on the starting guess. Oops! The analytic solution does no better: it must contend with a singular matrix. The existence of a null space destroys the uniqueness of the resulting model.

Linear algebra theory enables us to dig up the entire null space should we so desire. On the other hand, the computational demands might be vast; even the memory for holding the many $\bold x$ vectors could be prohibitive. A much simpler and more practical goal is to find out whether the null space has any members and, if so, to view some of them. To see a member of the null space, we take two starting guesses and run our iterative solver from each. If the two solutions, $\bold x_1$ and $\bold x_2$, are the same, we have found no null space. If the solutions differ, their difference is a member of the null space. Let us see why. Suppose after iterating to minimum residual we find:

\begin{align}
\bold r_1 &= \bold F\bold x_1 - \bold d \tag{64}\\
\bold r_2 &= \bold F\bold x_2 - \bold d \tag{65}
\end{align}

We know that the residual squared is a convex quadratic function of the unknown $\bold x$. Its minimum value is therefore unique, and it is attained where $\bold F\bold x$ is the orthogonal projection of $\bold d$ onto the range of $\bold F$; that projection is the same for both runs, so $\bold r_1 =\bold r_2$. Subtracting, we find $\bold 0=\bold r_1-\bold r_2 =\bold F(\bold x_1-\bold x_2)$, proving that $\bold x_1-\bold x_2$ is a model in the null space. Adding $\bold x_1-\bold x_2$ to any model $\bold x$ does not change the modeled data.

A practical way to learn whether a null space exists, and to see sample members, is to run a gradient-descent method from several different starting guesses.
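Continuing the sketch above (reusing the hypothetical steepest_descent function), a toy experiment with the first-difference operator shows the idea: the two runs reach the same residual, and their difference is, to numerical precision, a constant vector, i.e., a member of the null space.

```python
import numpy as np

# First-difference operator (5 x 6): its null space is the constant vector
F1 = np.eye(5, 6, k=1) - np.eye(5, 6)
d  = F1 @ np.arange(6, dtype=float)             # data made from a known model

# Run the solver from two different starting guesses
x1 = steepest_descent(F1, d, np.zeros(6), niter=500)
x2 = steepest_descent(F1, d, 10.0 * np.ones(6), niter=500)

print(np.allclose(F1 @ x1, F1 @ x2))   # True: residuals agree at the minimum
print(x1 - x2)                          # roughly constant: a member of the null space
```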

``Did I fail to run my iterative solver long enough?'' is a question you might ask. The test is to compare the residuals, $\bold r_1$ and $\bold r_2$, from the two starting guesses.

If two different starting solutions produce two different residuals, then you did not run your solver through enough iterations.

