Modified Bregman iteration for data interpolation

Data reconstruction can be treated as a linear-estimation problem and solved by an iterative optimization algorithm (Fomel, 2003). Let $\mathbf{d}$ be the vector of observed data and $\mathbf{\Phi}$ be the regularized model. Using a forward-modeling operator $\mathbf{L}$, one obtains the model-space regularization equation:

$\displaystyle \mathbf{d} = \mathbf{L} \mathbf{\Phi} \;.$ (6)

Missing-data interpolation is a particular case of data reconstruction, where the input data are already given on a regular grid and one needs to reconstruct only the missing values in the empty bins. In this case, $\mathbf{L}$ is selected as a mask operator $\mathbf{K}$ (a diagonal matrix with zeros at the locations of missing data and ones elsewhere), and the problem becomes underdetermined. As an alternative to Nyquist/Shannon sampling theory, compressed sensing (CS) provides an important theoretical basis for reconstructing images (Donoho, 2006). Analogous to CS, missing-data interpolation can be formulated as an NP-hard problem (Amaldi and Kann, 1998) by using the inverse generalized velocity-dependent (VD)-seislet transform $\mathbf{T}$:

$\displaystyle \min_{\mathbf{m}}\Vert\mathbf{m}\Vert _0 \; s.t.\; \mathbf{KTm} = \mathbf{d}\;,$ (7)

where $\mathbf{m}$ is the vector of transform coefficients and $\mathbf{\Phi}=\mathbf{Tm}$. Basis pursuit is a traditional method for attacking the NP-hard problem (equation 7): it relaxes the $\ell_0$ norm to the $\ell_1$ norm, which leads to the constrained minimization problem

$\displaystyle \min_{\mathbf{m}}\Vert\mathbf{m}\Vert _1 \; s.t.\; \mathbf{KTm} = \mathbf{d}\;.$ (8)
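The mask operator $\mathbf{K}$ appearing in the constraints of equations 7 and 8 acts as elementwise multiplication by a binary array (ones at observed samples, zeros in the empty bins). A minimal sketch in Python, with hypothetical array names and NumPy standing in for an actual implementation:

```python
import numpy as np

def apply_mask(mask, data):
    """Apply the diagonal mask operator K: keep observed samples, zero the rest."""
    return mask * data

rng = np.random.default_rng(0)
full = rng.standard_normal((4, 4))               # fully sampled data on a regular grid
mask = (rng.random((4, 4)) > 0.3).astype(float)  # 1 where data were recorded
observed = apply_mask(mask, full)                # zeros mark the missing traces

# K is diagonal, idempotent, and self-adjoint: applying it twice changes nothing
assert np.allclose(apply_mask(mask, observed), observed)
```

Because $\mathbf{K}$ keeps fewer equations than unknowns, the system $\mathbf{KTm}=\mathbf{d}$ is underdetermined, which is what motivates the sparsity constraint above.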

Equation 8 is a convex optimization problem, which can be transformed into a linear program and then solved by conventional linear programming solvers. Bregman iteration was introduced by Osher et al. (2005) in the context of image processing. This iteration solves a sequence of convex problems (Yin et al., 2008), and its general formulation is as follows:
$\displaystyle \mathbf{d}^{k+1} = \mathbf{d}+(\mathbf{d}^k-\mathbf{KT}\mathbf{m}^{k})\;,$ (9)
$\displaystyle \mathbf{m}^{k+1} = \arg\min_{\mathbf{m}} \mu\Vert\mathbf{m}\Vert _1+ \frac{1}{2}\Vert\mathbf{KTm}-\mathbf{d}^{k+1}\Vert _2^2\;.$ (10)

The advantage of the Bregman iteration is that the penalty parameter $\mu$ in equation 10 remains constant. We can therefore choose a fixed value of $\mu$ that minimizes the condition number of the subproblems, which results in fast convergence.

An iterative procedure based on shrinkage, also called soft thresholding, is used by many researchers to solve equation 10 (Daubechies et al., 2004). However, it is difficult to find the adjoint of $\mathbf{KT}$. In the general case, the forward generalized velocity-dependent (VD)-seislet transform $\mathbf{T^{-1}}$ is an approximate inverse of $\mathbf{KT}$, so the chain $\mathbf{T^{-1}KT}$ is close to the identity operator $\mathbf{I}$. Therefore, we can obtain an iteration with shaping regularization (Fomel, 2008) as follows:

$\displaystyle \mathbf{m}^{k+1} = \mathbf{S}[\mathbf{T^{-1}}\mathbf{d}^{k+1}+ (\mathbf{I}-\mathbf{T^{-1}KT})\mathbf{m}^k]\;,$ (11)

where $\mathbf{S}$ is the soft-thresholding operator. The iteration in equation 11 converges to the solution of the least-squares optimization problem regularized by a sparsity constraint (equation 10). Because the transform pair satisfies $\mathbf{T^{-1}T}=\mathbf{I}$, equation 11 can be further reformulated as

$\displaystyle \mathbf{m}^{k+1} = \mathbf{S}[\mathbf{T^{-1}}(\mathbf{d}^{k+1}+ (\mathbf{I}-\mathbf{K})\mathbf{Tm}^k)]\;.$ (12)
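The shrinkage operator $\mathbf{S}$ used in equations 11 and 12 is elementwise soft thresholding, $S_\mu(x)=\mathrm{sign}(x)\max(\vert x\vert-\mu,0)$. A minimal sketch (the function name and test values are illustrative only):

```python
import numpy as np

def soft_threshold(x, mu):
    """Elementwise soft thresholding (shrinkage) with threshold mu."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# Coefficients smaller than mu in magnitude are set to zero;
# larger ones are shrunk toward zero by mu.
coeffs = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
shrunk = soft_threshold(coeffs, 1.0)   # [-1.0, 0.0, 0.0, 0.0, 0.5]
```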

Combining equation 9 with the shaping solver (equation 12) yields the modified Bregman iteration:

$\displaystyle \mathbf{d}^{k+1} = \mathbf{d}+(\mathbf{d}^k-\mathbf{KT}\mathbf{m}^{k})\;,$ (13)
$\displaystyle \mathbf{m}^{k+1} = \mathbf{S}[\mathbf{T^{-1}}(\mathbf{d}^{k+1}+ (\mathbf{I}-\mathbf{K})\mathbf{Tm}^k)]\;.$ (14)

This is the analog of “adding back the residual” in the Rudin-Osher-Fatemi (ROF) model for total-variation (TV) denoising (Osher et al., 2005). By using a large threshold value, the modified Bregman iteration can guarantee fast convergence of the objective function (equation 8) and accurate recovery of the regularized model $\mathbf{\Phi}$. The final interpolated result is $\hat{\mathbf{d}}=\mathbf{T}\mathbf{m}^{N}$, where $N$ is the number of iterations.
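The complete scheme of equations 13 and 14 can be sketched as follows. The generalized VD-seislet transform is not reproduced here; as an assumed stand-in, an orthonormal DCT plays the role of the transform pair ($\mathbf{T}$ is the inverse DCT, $\mathbf{T^{-1}}$ the forward DCT), and all names, parameter values, and the toy signal are hypothetical:

```python
import numpy as np
from scipy.fft import dct, idct

def soft(x, mu):
    """Soft thresholding S with threshold mu."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def modified_bregman(d, mask, mu, n_iter):
    """Modified Bregman iteration (equations 13-14).

    d    : observed data with zeros in the empty bins
    mask : binary array (the diagonal of K), 1 where data are observed
    An orthonormal DCT pair stands in for the VD-seislet transform.
    """
    T = lambda m: idct(m, norm='ortho')     # T: coefficients -> data
    Tinv = lambda x: dct(x, norm='ortho')   # T^{-1}: data -> coefficients
    m = np.zeros_like(d)
    dk = d.copy()
    for _ in range(n_iter):
        dk = d + (dk - mask * T(m))                    # equation 13
        m = soft(Tinv(dk + (1.0 - mask) * T(m)), mu)   # equation 14
    return T(m)  # interpolated data, d_hat = T m^N

# Toy example: a signal that is sparse in the DCT domain, with random gaps
n = 64
m_true = np.zeros(n)
m_true[[2, 7]] = [1.0, -0.5]
signal = idct(m_true, norm='ortho')
rng = np.random.default_rng(1)
mask = np.ones(n)
mask[rng.choice(n, size=16, replace=False)] = 0.0  # 25% missing samples
recovered = modified_bregman(mask * signal, mask, mu=0.02, n_iter=300)
```

With a sparser transform (such as the VD-seislet transform for seismic data), the same loop applies unchanged; only the transform pair is swapped.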


2019-05-06