
Shaping regularization

The unknown $ \mathbf{m}$ in equation 3 can be recovered iteratively using shaping regularization:

$\displaystyle \mathbf{m}_{n+1}=\mathbf{S}[\mathbf{m}_n+\mathbf{B}(\mathbf{\tilde{d}}-\mathbf{Fm}_n)],$ (5)

where the operator $ \mathbf{S}$ shapes the estimated model into the space of admissible models at each iteration (Fomel, 2008,2007), and $ \mathbf{B}$ is a backward operator that provides an approximate inverse mapping from data space to model space. Daubechies et al. (2004) prove that if $ \mathbf{S}$ is a nonlinear thresholding operator (Donoho and Johnstone, 1994) and $ \mathbf{B}=\mathbf{F}^T$ , where $ \mathbf{F}^T$ is the adjoint of $ \mathbf{F}$ , then iteration 5 converges to the solution of equation 6 with an $ L_1$ regularization term:
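In a generic linear-algebra setting, iteration 5 with $ \mathbf{B}=\mathbf{F}^T$ and soft thresholding for $ \mathbf{S}$ is the classic iterative shrinkage-thresholding scheme. The sketch below is only an illustration of that scheme: the random $ \mathbf{F}$ , the threshold value, and the sparse test model are assumptions for demonstration, not the paper's blending operator or data.

```python
import numpy as np

def soft_threshold(x, mu):
    """Nonlinear shaping operator S: soft thresholding (Donoho-Johnstone)."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def shaping_deblend(F, d, mu, n_iter=200):
    """Iteration (5) with B = F^T (adjoint) and S = soft thresholding.

    Assumes F is scaled so its spectral norm is at most 1, the standard
    step-size condition for convergence of this scheme.
    """
    m = np.zeros(F.shape[1])
    for _ in range(n_iter):
        # m_{n+1} = S[ m_n + B (d - F m_n) ]
        m = soft_threshold(m + F.T @ (d - F @ m), mu)
    return m

# Toy problem (illustrative assumption): random F, sparse true model.
rng = np.random.default_rng(0)
F = rng.standard_normal((30, 50))
F /= np.linalg.norm(F, 2)          # enforce ||F||_2 <= 1
m_true = np.zeros(50)
m_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
d = F @ m_true
m_est = shaping_deblend(F, d, mu=0.01)
```

After a few hundred iterations the data residual shrinks while the thresholding keeps the estimate sparse, which is the qualitative behavior the convergence result above guarantees.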

$\displaystyle \min_{\mathbf{m}} \parallel \mathbf{Fm-\tilde{d}} \parallel_2^2 + \mu \parallel \mathbf{A}^{-1}\mathbf{m} \parallel_1,$ (6)

where $ \mu$ is the threshold and $ \mathbf{A}^{-1}$ denotes a sparsity-promoting transform. A better choice for $ \mathbf{B}$ is the pseudoinverse of $ \mathbf{F}$ : $ \mathbf{B}=(\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T$ (Daubechies et al., 2008).
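A quick numerical illustration of the pseudoinverse choice of $ \mathbf{B}$ (using a toy full-column-rank $ \mathbf{F}$ , an assumption for demonstration rather than the paper's operator): $ (\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T$ is a left inverse of $ \mathbf{F}$ , so the backward step maps noise-free data exactly back to the model.

```python
import numpy as np

# Toy full-column-rank forward operator (illustrative assumption).
rng = np.random.default_rng(1)
F = rng.standard_normal((8, 5))

# Pseudoinverse backward operator B = (F^T F)^{-1} F^T,
# computed by solving the normal equations rather than forming an inverse.
B = np.linalg.solve(F.T @ F, F.T)

# B is a left inverse of F: applying B to noise-free data d = F m
# recovers the model m exactly, unlike the plain adjoint F^T.
m = rng.standard_normal(5)
d = F @ m
m_back = B @ d
```

With the plain adjoint $ \mathbf{B}=\mathbf{F}^T$ , the backward step only approximates this inversion, which is why the pseudoinverse typically converges in fewer iterations.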

2014-08-20