
INTRODUCTION

Sparse approximation aims to represent most of the information in given data by a linear combination of pre-specified atom signals with sparse linear coefficients. Sparse approximation theory has been a rapidly evolving field, since many state-of-the-art signal and image processing tasks have been successfully handled with the concept of sparse representation, including image inpainting and restoration (Elad et al., 2005; Cai et al., 2013; Mairal et al., 2009, 2008; Quan et al., 2011), image denoising (Cai et al., 2013; Protter and Elad, 2009), data compression (Bryt, 2008), and blind source separation (Zibulevsky and Pearlmutter, 2001). Most known methods for sparse approximation of signals fall into two general categories: the analytic approach and the learning-based approach. The analytic approach relies on a fixed basis, whereas the learning-based approach adaptively finds the required sparse basis by training; the term transform usually refers to the analytic approach, while dictionary refers to the learning-based approach. A number of sparsity-promoting transforms with fixed basis functions have been proposed in the literature for handling different signal processing tasks, including wavelets (Sweldens, 1995; Mallat, 2009), curvelets (Candès et al., 2006; Ma and Plonka, 2010), contourlets (Do and Vetterli, 2005), shearlets (Labate et al., 2005), and bandelets (LePennec and Mallat, 2005), as well as the classic Radon transforms (Ibrahim and Sacchi, 2014a,b). Learning-based dictionaries usually rely on machine learning techniques to infer the dictionary from the data. The advantage of this approach is a finer-tuned dictionary than the analytic approach can provide, and a tuned dictionary can yield better performance in different applications. The downside of the learning-based approach is its higher computational cost, caused by the large number of iterations and the redundant computations for overlapping small patches. Different dictionary-learning algorithms may differ in efficiency and performance. Commonly used dictionary-training algorithms include principal component analysis (PCA) (Vidal et al., 2005; Jolliffe, 2002), the method of optimal directions (MOD) (Engan et al., 1999), and variants of the singular-value decomposition (SVD) such as K-SVD (Aharon et al., 2006).
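To make the idea of sparse approximation concrete, here is a minimal sketch (not from the paper) of greedy orthogonal matching pursuit, one standard way of computing the sparse coefficients mentioned above; the random dictionary D, the sparsity level k, and all variable names are illustrative assumptions.

import numpy as np

def omp(D, y, k):
    # Orthogonal matching pursuit: greedily approximate y as a
    # k-sparse combination of the columns (atoms) of D.
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # least-squares fit of y on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -0.7, 0.4]    # 3-sparse ground truth
y = D @ x_true
print(np.nonzero(omp(D, y, k=3))[0])      # indices of the recovered atoms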

Over the past several decades, different types of fixed-basis sparsity-promoting transforms have been explored for seismic data processing applications, and promising results have been reported. Luo and Schuster (1992) applied a wavepacket transform to seismic data compression. Zhang and Ulrych (2003) developed a type of wavelet frame that takes into account the characteristics of seismic data in both time and space for denoising applications. Ioup and Ioup (1998) applied a wavelet transform to both random noise removal and data compression using soft thresholding in the wavelet domain. Du and Lines (2000) applied the multi-resolution property of the wavelet transform to attenuate tube waves. Jafarpour et al. (2009) used a discrete cosine transform to obtain sparse representations of fields with distinct geologic features and to improve the solutions of traditional geophysical estimation problems. A number of researchers have reported successful applications of the curvelet transform to coherent and random noise attenuation, thanks to the multi-scale directional property of the curvelet domain (Neelamani et al., 2008, 2010; Hennenfent and Herrmann, 2006; Wang et al., 2008). Following compressive sensing theory (Donoho, 2006), the curvelet transform has also been used to restore missing seismic data (Naghizadeh and Sacchi, 2010; Hennenfent et al., 2010). Fixed-basis sparse transforms enjoy efficiency benefits; however, they are not data-adaptive.

Fomel and Liu (2010) introduced a data-adaptive sparsity-promoting transform called the seislet transform. Following the lifting scheme used in the construction of second-generation wavelets, the seislet transform utilizes the spatial predictability of seismic data to construct the prediction operator. Fomel and Liu (2010) used plane-wave destruction (Fomel, 2002) to aid the prediction process. Instead of the plane-wave destruction algorithm, Liu and Fomel (2010) used differential offset continuation to construct a seislet transform for prestack reflection data. The offset-continuation seislet transform can achieve better sparsity for conflicting-dip events in prestack data because it uses offset continuation instead of local slopes to connect different common-offset gathers.
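Because the seislet transform follows the lifting scheme, a toy lifting example may clarify the predict/update mechanics; in the sketch below (an illustrative assumption, not the actual seislet construction) the slope-guided plane-wave prediction is replaced by the trivial prediction that each trace equals its neighboring trace, which is exact for flat events and therefore yields vanishing detail coefficients.

import numpy as np

def lifting_forward(traces):
    # One level of Haar-like lifting across traces (rows).
    # Predict: estimate each odd trace from its even neighbor;
    # update: adjust the even traces to preserve the running mean.
    even, odd = traces[0::2], traces[1::2]
    detail = odd - even                 # prediction residual
    coarse = even + detail / 2.0        # updated coarse traces
    return coarse, detail

def lifting_inverse(coarse, detail):
    # Undo the update and prediction steps, then interleave.
    even = coarse - detail / 2.0
    odd = detail + even
    traces = np.empty((len(even) + len(odd), coarse.shape[1]))
    traces[0::2], traces[1::2] = even, odd
    return traces

data = np.tile(np.sin(np.linspace(0.0, 6.0, 200)), (8, 1))  # flat events
c, d = lifting_forward(data)
print(np.max(np.abs(d)))                          # 0.0: details vanish
print(np.allclose(lifting_inverse(c, d), data))   # True: invertible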

Learning-based dictionaries were not widely used in seismic data processing until recent years. Kaplan et al. (2009) used a data-driven sparse-coding algorithm to adaptively learn basis functions that sparsely represent seismic data in the transform domain, and demonstrated improved denoising performance on both synthetic and field data examples. Based on a variational sparse-representation model, Beckouche and Ma (2014) proposed a denoising approach that adaptively learns dictionaries from noisy seismic data. Liang et al. (2014) applied a learning-based sparsity-promoting dictionary, called the data-driven tight frame (DDTF) (Cai et al., 2013), as a sparse transform in the framework of the split inexact Uzawa algorithm to restore missing seismic data. The split inexact Uzawa algorithm was proposed by Zhang et al. (2011) and can be viewed as a generalization of the alternating direction method of multipliers (ADMM). While learning-based dictionaries can be more adaptive than fixed-basis transforms, no prior structural information is involved in the construction of these dictionaries. Because seismic data have distinct structural patterns, general learning-based dictionaries can be improved by incorporating structural information to achieve additional sparsity.
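For orientation, the standard ADMM iterations in scaled dual form, for the generic split problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, read

$$x^{k+1} = \arg\min_{x}\; f(x) + \frac{\rho}{2}\left\| Ax + Bz^{k} - c + u^{k} \right\|_2^2,$$

$$z^{k+1} = \arg\min_{z}\; g(z) + \frac{\rho}{2}\left\| Ax^{k+1} + Bz - c + u^{k} \right\|_2^2,$$

$$u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c,$$

where $u$ is the scaled dual variable and $\rho > 0$ is the penalty parameter. Broadly speaking, the split inexact Uzawa method relaxes the exact subproblem solves by adding proximal terms so that each update has a simple closed form; this summary follows the standard formulation rather than the specific notation of Zhang et al. (2011).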

In this paper, we propose a double-sparsity dictionary (DSD) for sparsifying seismic data. The basic concept of DSD is borrowed from the image processing literature (Ophir et al., 2011; Rubinstein et al., 2010). Rubinstein et al. (2010) introduced the basic principle of a sparsity model for the dictionary atoms over a base dictionary. Ophir et al. (2011) substantially extended the method of Rubinstein et al. (2010) by grouping the analysis-domain data into bands. By bridging the gap between fixed-basis transforms and learning-based dictionaries, the proposed cascaded DSD framework aims to combine the benefits of both approaches and offers an extra level of sparsity for representing seismic data. On the one hand, DSD compensates for the weakness of fixed-basis transforms in sparsifying seismic data patterns and thus can be more robust. On the other hand, DSD relieves the dependence of learning-based dictionaries on large filtering-window overlaps and large numbers of iterations and thus can be more efficient and effective. We define two models to construct the DSD, a synthesis model and an analysis model, according to the learning domain of each model. We provide an example of the analysis model, in which we cascade the seislet transform and DDTF to form a DSD framework. In this framework, DDTF compensates for the dip dependence of the seislet transform, while the seislet transform adds structural regularization to DDTF. The proposed DSD can thus provide a sparser representation of seismic data for use in thresholding-based denoising algorithms. We use two synthetic tests and three field data examples to test the performance of the proposed DSD framework in removing random noise. Our experiments show noticeably better performance using DSD-based thresholding than using either seislet- or DDTF-based thresholding. For the presented examples, DSD also performs significantly better than classic $f$-$x$ deconvolution, according to both SNR measurements and visual observations.
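As a rough illustration of the analysis-model cascade, the following sketch denoises a section by thresholding in a two-stage domain; the DCT stands in for the seislet transform and a trivial orthonormal matrix stands in for a learned DDTF dictionary, so every operator, threshold, and variable name here is an assumption rather than the paper's actual implementation.

import numpy as np
from scipy.fft import dct, idct

def dsd_denoise(noisy, D, thresh):
    # Conceptual analysis-model DSD denoising.
    # Stage 1: fixed sparsifying transform (DCT as a seislet stand-in).
    # Stage 2: analysis over a learned tight frame D (DDTF stand-in),
    # hard thresholding, and synthesis back; D @ D.T = I is assumed.
    coeff = dct(noisy, norm='ortho', axis=0)
    inner = D.T @ coeff
    inner[np.abs(inner) < thresh] = 0.0      # extra level of sparsity
    return idct(D @ inner, norm='ortho', axis=0)

rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0.0, 8.0, 128)), np.ones(32))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
D = np.eye(128)                    # placeholder frame; DDTF would learn this
denoised = dsd_denoise(noisy, D, thresh=0.5)
# reconstruction errors (the denoised error should be smaller)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))

With the identity placeholder the second stage is inert; the point of DSD is precisely that replacing it with a dictionary learned in the transform domain adds a second, data-adaptive level of sparsity on top of the fixed-basis stage.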


