Missing data interpolation is a particular case of data reconstruction in which the input data are already given on a regular grid, and only the missing values in the empty bins need to be reconstructed. In general, \(\mathbf{M}\) is selected as a mask operator (a diagonal matrix with zeros at the locations of missing data and ones elsewhere), and the problem becomes underdetermined. As an alternative to Nyquist/Shannon sampling theory, compressed sensing (CS) provides an important theoretical basis for image reconstruction (Donoho, 2006). Analogous to CS, missing data interpolation can be generalized to an NP-hard problem (Amaldi and Kann, 1998) by using the inverse generalized velocity-dependent (VD)-seislet transform \(\mathbf{S}^{-1}\):
\[
\min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \text{subject to} \quad \mathbf{M}\mathbf{S}^{-1}\mathbf{x} = \mathbf{d}, \tag{7}
\]
where \(\mathbf{x}\) is the transform coefficient, \(\mathbf{d}\) is the observed data, and \(\|\mathbf{x}\|_0\) counts the nonzero elements of \(\mathbf{x}\). Basis pursuit is a traditional method for solving the NP-hard problem (equation 7), where the corresponding constrained minimization problem is as follows:
\[
\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \mathbf{M}\mathbf{S}^{-1}\mathbf{x} = \mathbf{d}. \tag{8}
\]
Equation 8 is a convex optimization problem, which can be transformed into a linear program and then solved by conventional linear programming solvers. Bregman iteration was introduced by Osher et al. (2005) in the context of image processing. This iteration solves a sequence of convex problems (Yin et al., 2008), and its general formulation is as follows:
\[
\mathbf{d}^{k+1} = \mathbf{d}^{k} + \mathbf{d} - \mathbf{M}\mathbf{S}^{-1}\mathbf{x}^{k}, \tag{9}
\]
\[
\mathbf{x}^{k+1} = \arg\min_{\mathbf{x}} \left\{ \lambda \|\mathbf{x}\|_1 + \frac{1}{2} \left\| \mathbf{M}\mathbf{S}^{-1}\mathbf{x} - \mathbf{d}^{k+1} \right\|_2^2 \right\}, \tag{10}
\]
with \(\mathbf{x}^{0} = 0\) and \(\mathbf{d}^{0} = 0\).
The advantage of the Bregman iteration is that the penalty parameter \(\lambda\) in equation 10 remains constant. We can therefore choose a fixed value of \(\lambda\) that minimizes the condition number of the subproblems, which results in fast convergence.
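For reference, the linear-program reformulation of equation 8 mentioned above follows from the standard device of splitting the coefficients into nonnegative parts (this derivation is ours, not spelled out in the text):

\[
\mathbf{x} = \mathbf{u} - \mathbf{v}, \qquad \mathbf{u}, \mathbf{v} \ge 0, \qquad \|\mathbf{x}\|_1 = \mathbf{1}^{\mathsf{T}}(\mathbf{u} + \mathbf{v}),
\]
\[
\min_{\mathbf{u},\,\mathbf{v}\,\ge\,0}\ \mathbf{1}^{\mathsf{T}}(\mathbf{u} + \mathbf{v}) \quad \text{subject to} \quad \mathbf{M}\mathbf{S}^{-1}(\mathbf{u} - \mathbf{v}) = \mathbf{d},
\]

which is a linear program in the variables \((\mathbf{u}, \mathbf{v})\).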

An iterative procedure based on shrinkage, also called soft thresholding, is used by many researchers to solve equation 10 (Daubechies et al., 2004). However, it is difficult to find the adjoint of \(\mathbf{S}^{-1}\). In the general case, the forward generalized velocity-dependent (VD)-seislet transform \(\mathbf{S}\) is an approximate inverse of \(\mathbf{S}^{-1}\); the chain \(\mathbf{S}\mathbf{S}^{-1}\) is then close to the identity operator \(\mathbf{I}\). Therefore, we can obtain an iteration with shaping regularization (Fomel, 2008) as follows:
\[
\mathbf{x}^{k+1} = T_{\lambda}\!\left[ \mathbf{x}^{k} + \mathbf{S}\left( \mathbf{d}^{k+1} - \mathbf{M}\mathbf{S}^{-1}\mathbf{x}^{k} \right) \right], \tag{11}
\]
where \(T_{\lambda}\) is the soft-thresholding operator with threshold \(\lambda\). The iteration (equation 11) converges to the solution of the least-squares optimization problem regularized by a sparsity constraint (equation 10). Equation 11 can be further reformulated as
\[
\mathbf{x}^{k+1} = T_{\lambda}\!\left[ \mathbf{S}\mathbf{d}^{k+1} + \left( \mathbf{I} - \mathbf{S}\mathbf{M}\mathbf{S}^{-1} \right) \mathbf{x}^{k} \right]. \tag{12}
\]
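The soft-thresholding operator mentioned above is simple to implement; the following is a minimal NumPy sketch (the function name is ours, chosen for illustration):

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft thresholding: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Values within [-lam, lam] are zeroed; others shrink toward zero by lam.
print(soft_threshold(np.array([3.0, -0.2, 1.5]), 1.0))
```

Soft thresholding is the proximal operator of the L1 norm, which is why it solves the sparsity-regularized subproblem exactly when the forward operator is close to orthogonal.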
Combining equation 9 with the shaping solver (equation 12), the framework of the modified Bregman iteration is as follows:
\[
\left\{
\begin{aligned}
\mathbf{d}^{k+1} &= \mathbf{d}^{k} + \mathbf{d} - \mathbf{M}\mathbf{S}^{-1}\mathbf{x}^{k}, \\
\mathbf{x}^{k+1} &= T_{\lambda}\!\left[ \mathbf{S}\mathbf{d}^{k+1} + \left( \mathbf{I} - \mathbf{S}\mathbf{M}\mathbf{S}^{-1} \right) \mathbf{x}^{k} \right].
\end{aligned}
\right. \tag{13}
\]
This is the analog of "adding back the residual" in the Rudin-Osher-Fatemi (ROF) model for total-variation (TV) denoising (Osher et al., 2005). By using a large threshold value, the modified Bregman iteration can guarantee fast convergence of the objective function (equation 8) and accurate recovery of the regularized model \(\mathbf{m}\). The final interpolated result can be calculated by \(\mathbf{m} = \mathbf{S}^{-1}\mathbf{x}^{N}\), where \(N\) is the number of iterations.
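The full loop can be sketched in a few lines of NumPy. Since the VD-seislet transform itself is not reproduced here, the sketch substitutes a random orthonormal matrix for \(\mathbf{S}\) (so its inverse is the transpose); the sizes, threshold, and iteration count are illustrative choices, not values from the text:

```python
import numpy as np

def soft_threshold(x, lam):
    # T_lambda: elementwise shrinkage toward zero
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
n = 64

# Stand-in for the VD-seislet pair: a random orthonormal S, so S^{-1} = S.T
# (an assumption for illustration only; the real transform is not orthogonal).
S, _ = np.linalg.qr(rng.standard_normal((n, n)))
S_inv = S.T

# Sparse "true" coefficients and a 0/1 mask operator M (diagonal)
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
mask = (rng.random(n) > 0.4).astype(float)  # ~60% of samples observed
d = mask * (S_inv @ x_true)                 # observed data with gaps

lam = 0.1                                   # fixed threshold lambda
x = np.zeros(n)
d_k = np.zeros(n)
for _ in range(200):
    d_k = d_k + d - mask * (S_inv @ x)      # "add back the residual"
    # shaping step with soft thresholding on the updated data
    x = soft_threshold(S @ d_k + x - S @ (mask * (S_inv @ x)), lam)

m = S_inv @ x                               # final interpolated result
print(np.linalg.norm(mask * m - d))         # misfit on observed samples
```

The mask operator is applied as elementwise multiplication, which is exactly the action of a diagonal 0/1 matrix; the recovered model honors the observed samples while the thresholding promotes sparsity in the transform domain.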

2019-05-06