Adaptive multiple subtraction using regularized nonstationary regression

Figure 1a shows a classic example of linear regression applied as a line-fitting problem. When the same technique is applied to data with nonstationary behavior (Figure 1b), stationary regression fails to produce an accurate fit and creates regions of consistent overprediction and underprediction.
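This failure is easy to reproduce in a few lines of code. The sketch below (a hypothetical synthetic example, not the paper's data) fits a single stationary line to data whose slope drifts with position; the systematic misfit shows up as long same-sign runs in the residual.

```python
import numpy as np

# Synthetic data whose slope drifts with x; a stationary line fit
# cannot follow the drift (illustrative example, not the paper's data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
d = (1.0 + 2.0 * x) * x + 0.05 * rng.standard_normal(x.size)

# Stationary regression: one intercept and one slope for all x,
# estimated by least squares.
G = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(G, d, rcond=None)
residual = d - G @ coef

# Consistent over/underprediction: the ends of the interval are
# underpredicted (positive residual) and the middle overpredicted.
print("mean residual, left third  :", residual[:66].mean())
print("mean residual, middle third:", residual[66:133].mean())
print("mean residual, right third :", residual[-66:].mean())
```

The sign pattern of the residual means, rather than their overall size, is what signals nonstationarity.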

Figure 1. Line fitting with stationary regression works well for stationary data (a) but poorly for nonstationary data (b).

One remedy is to extend the model by including nonlinear terms (Figure 2a); another is to break the data into local windows (Figure 2b). Both solutions work to a certain extent, but neither is completely satisfactory, because they decrease the estimation stability and introduce additional non-intuitive parameters.
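Both remedies can be sketched in a few lines (synthetic data; the polynomial order and window length are illustrative choices). Each fits the data well, but each introduces an extra parameter of exactly the non-intuitive kind mentioned above.

```python
import numpy as np

# Illustrative data with a drifting slope (not the paper's data).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 300)
d = (1.0 + 2.0 * x) * x + 0.05 * rng.standard_normal(x.size)

# (a) Nonlinear terms: extend the basis with higher powers of x.
# The polynomial order (here 3) is the extra parameter.
G = np.column_stack([x**k for k in range(4)])  # 1, x, x^2, x^3
coef, *_ = np.linalg.lstsq(G, d, rcond=None)
poly_fit = G @ coef

# (b) Local windows: an independent straight line in each window.
# The window length (here 50 samples) is the extra parameter.
win_fit = np.empty_like(d)
for lo in range(0, x.size, 50):
    sl = slice(lo, lo + 50)
    Gw = np.column_stack([np.ones(50), x[sl]])
    cw, *_ = np.linalg.lstsq(Gw, d[sl], rcond=None)
    win_fit[sl] = Gw @ cw

print("polynomial rms misfit:", np.sqrt(np.mean((d - poly_fit) ** 2)))
print("windowed   rms misfit:", np.sqrt(np.mean((d - win_fit) ** 2)))
```

The windowed fit is also discontinuous at window boundaries, one aspect of the instability the text refers to.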

Figure 2. Nonstationary line fitting using nonlinear terms (a) and local windows (b).

The regularized nonstationary solution, defined in the previous section, is shown in Figure 3. With shaping regularization and smoothing as the shaping operator, the only additional parameter is the radius of the smoothing operator.
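A minimal sketch of the idea is given below. It is not the paper's conjugate-gradient implementation: the nonstationary coefficients a(x) and b(x) in d(x) ≈ a(x) + b(x)·x are simply updated by a gradient step on the data misfit followed by smoothing, with repeated box filtering standing in for the Gaussian shaping operator. All names, step sizes, and data are illustrative assumptions.

```python
import numpy as np

def smooth(f, radius):
    """Approximate Gaussian smoothing by three box-filter passes;
    a stand-in for the paper's shaping operator."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    g = f.copy()
    for _ in range(3):
        g = np.convolve(np.pad(g, radius, mode="edge"), kernel, mode="valid")
    return g

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 300)
d = (1.0 + 2.0 * x) * x + 0.05 * rng.standard_normal(x.size)

radius = 30           # the single tuning parameter: smoothing radius
a = np.zeros_like(x)  # nonstationary intercept a(x)
b = np.zeros_like(x)  # nonstationary slope b(x)
for _ in range(500):
    r = d - (a + b * x)                 # current residual
    a = smooth(a + 0.5 * r, radius)     # gradient step, then shaping
    b = smooth(b + 0.5 * r * x, radius)

misfit = np.sqrt(np.mean((d - a - b * x) ** 2))
print("rms misfit:", misfit)
```

Shrinking the radius lets the coefficients vary faster; growing it pushes the fit toward the stationary solution.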

Figure 3. Nonstationary line fitting by regularized nonstationary regression.

This toy example makes it easy to compare shaping regularization with the more traditional Tikhonov's regularization. Figures 4 and 5 show the matrix inverted in equation 5 and the distribution of its eigenvalues for two different values of Tikhonov's regularization parameter, corresponding to mild and strong smoothing constraints. The regularization operator in this case is the first-order difference. Correspondingly, Figures 6 and 7 show the matrix from equation 7 and the distribution of its eigenvalues for mild and moderate smoothing implemented with shaping. The shaping operator is Gaussian smoothing controlled by the smoothing radius.
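The effect of the regularization parameter on conditioning can be illustrated numerically. The sketch below assumes the standard Tikhonov form FᵀF + ε²DᵀD with D the first-order difference; the data term is a placeholder identity rather than the paper's equation 5, so the numbers are illustrative only.

```python
import numpy as np

# First-order difference operator D, size (n-1) x n.
n = 100
D = np.diff(np.eye(n), axis=0)
FtF = np.eye(n)  # placeholder data term (identity), not the paper's matrix

# Mild vs strong smoothing: the condition number of the inverted
# matrix grows roughly like 1 + 4*eps^2.
for eps in (1.0, 100.0):
    A = FtF + eps**2 * D.T @ D
    w = np.linalg.eigvalsh(A)  # eigenvalues in ascending order
    print(f"eps = {eps:6.1f}   condition number = {w[-1] / w[0]:.1f}")
```

Because D annihilates constants, the smallest eigenvalue stays at 1 while the largest grows with ε², which is exactly the ill-conditioning visible in Figure 5.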

When a matrix operator is inverted by an iterative method such as conjugate gradients, two characteristics control the number of iterations and therefore the cost of inversion (Golub and Van Loan, 1996; van der Vorst, 2003):

- the condition number (the ratio between the largest and the smallest eigenvalue)
- the clustering of eigenvalues.

As the smoothing radius increases, the inverted matrix approaches the identity matrix, and the result of nonstationary regression regularized by shaping approaches the result of stationary regression. This intuitively pleasing behavior is difficult to emulate with Tikhonov's regularization.

Figure 4. Matrix inverted in Tikhonov's regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The regularization parameter corresponds to mild smoothing. The condition number is .

Figure 5. Matrix inverted in Tikhonov's regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The regularization parameter corresponds to strong smoothing. The condition number is . Eigenvalues are poorly clustered.

Figure 6. Matrix inverted in shaping regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The smoothing radius is 3 samples (mild smoothing). The condition number is .

Figure 7. Matrix inverted in shaping regularization applied to nonstationary line fitting (a) and the distribution of its eigenvalues (b). The smoothing radius is 15 samples (moderate smoothing). The condition number is . Eigenvalues are well clustered.


2013-07-26