
Regular trace interpolation

An important property of PEFs is scale invariance, which allows estimation of the PEF coefficients $ A_n$ (the leading ``$ -1$ '' together with the prediction coefficients $ B_n$ ) from incomplete aliased data $ S(t,x)$ that consist of known traces $ S_{known}(t,x_k)$ and unknown or zero traces $ S_{zero}(t,x_z)$ . Under trace decimation, zero traces interlace the known traces. To prevent the zero traces from biasing the filter estimation, we interlace the filter coefficients with zeroes. For example, consider a 2-D PEF with seven prediction coefficients:

\begin{displaymath}\begin{array}{ccccc} B_3 &B_4 &B_5 &B_6 &B_7 \\ \cdot &\cdot &-1 &B_1 &B_2 \end{array}\end{displaymath} (1)

Here, the horizontal axis is time, the vertical axis is space, and ``$ \cdot$ '' denotes zero. Rescaling both time and spatial axes assumes that the dips represented by the original filter in equation 1 are the same as those represented by the scaled filter (Claerbout, 1992):

\begin{displaymath}\begin{array}{ccccccccc} B_3 &\cdot &B_4 &\cdot &B_5 &\cdot &B_6 &\cdot &B_7 \\ \cdot &\cdot &\cdot &\cdot &-1 &\cdot &B_1 &\cdot &B_2 \end{array}\end{displaymath} (2)
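The interlacing from equation 1 to equation 2 amounts to inserting a zero column between every pair of neighboring filter columns. A minimal NumPy sketch (the coefficient values below are illustrative placeholders, not estimated from data):

```python
import numpy as np

# 2-D PEF of equation 1: rows index space, columns index time.
# Zeros stand in for the "·" entries; -1 marks the output sample.
# The B_n values are illustrative placeholders.
pef = np.array([[0.3, 0.4, 0.5, 0.6, 0.7],   # B_3 B_4 B_5 B_6 B_7
                [0.0, 0.0, -1., 0.1, 0.2]])  #  ·   ·  -1  B_1 B_2

def interlace(filt, m=2):
    """Insert m-1 zero columns between filter columns,
    rescaling the time axis as in equation 2."""
    ns, nt = filt.shape
    scaled = np.zeros((ns, m * (nt - 1) + 1))
    scaled[:, ::m] = filt
    return scaled

scaled = interlace(pef)   # shape (2, 9), matching equation 2
```

Rescaling the spatial axis works the same way on the rows, so the same routine applies along either dimension.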

For nonstationary situations, we can also assume locally stationary spectra of the data because trace decimation makes the space between known traces small enough, thus making adaptive PEFs locally scale-invariant. For estimating adaptive PEF coefficients, nonstationary autoregression allows coefficients $ B_n$ to change with both $ t$ and $ x$ . The new adaptive filter can look something like

\begin{displaymath}\begin{array}{ccccccccc} B_3(t,x) &\cdot &B_4(t,x) &\cdot &B_5(t,x) &\cdot &B_6(t,x) &\cdot &B_7(t,x) \\ \cdot &\cdot &\cdot &\cdot &-1 &\cdot &B_1(t,x) &\cdot &B_2(t,x) \end{array}\end{displaymath} (3)

In other words, prediction coefficients $ B_n(t,x)$ are obtained by solving the least-squares problem,
$\displaystyle \widehat{B_n}(t,x) = \arg\min_{B_n}\left\Vert S(t,x)-\sum_{n=1}^{N} B_n(t,x)\,S_n(t,x)\right\Vert _2^2 + \epsilon^2\, \sum_{n=1}^{N} \left\Vert\mathbf{D}[B_n(t,x)]\right\Vert _2^2\;,$ (4)

where $ S_n(t,x)$ = $ S(t-m\,i\,\Delta\,t,x-m\,j\,\Delta\,x)$ represents a causal translation of $ S(t,x)$ , with time-shift index $ i$ and spatial-shift index $ j$ scaled by the decimation interval $ m$ . The predefined constant $ m$ equals the interlacing interval; e.g., the shift interval is 2 in equation 3. The subscript $ n$ is a general shift index that runs over all $ (i,j)$ pairs, of which there are $ N$ in total. $ \mathbf{D}$ is a regularization operator, and $ \epsilon$ is a scalar regularization parameter. All coefficients $ B_n(t,x)$ are estimated simultaneously in a time- and space-variant manner. Fomel (2009) described this approach as regularized nonstationary autoregression (RNA). If $ \mathbf{D}$ is a linear operator, least-squares estimation reduces to the linear inversion

$\displaystyle \mathbf{b} = \mathbf{A}^{-1}\,\mathbf{d}\;,$ (5)

where

$\displaystyle \mathbf{b} = \left[\begin{array}{cccc}B_1(t,x) & B_2(t,x) & \cdots & B_N(t,x)\end{array}\right]^T\;,$ (6)

$\displaystyle \mathbf{d} = \left[\begin{array}{cccc}S_1(t,x)\,S(t,x) & S_2(t,x)\,S(t,x) & \cdots & S_N(t,x)\,S(t,x)\end{array}\right]^T\;,$ (7)

and the elements of matrix $ \mathbf{A}$ are

$\displaystyle A_{nk}(t,x) = S_n(t,x)\,S_k(t,x) + \epsilon^2\,\delta_{nk}\,\mathbf{D}^T\,\mathbf{D}\;.$ (8)
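To make equations 5-8 concrete, the following toy sketch solves the normal equations point by point under the simplifying (and hypothetical) assumption that $ \mathbf{D}$ is the identity, so the regularization term reduces to a ridge penalty $ \epsilon^2\,\delta_{nk}$ . In the actual method $ \mathbf{D}$ couples neighboring $ (t,x)$ points, so the coefficients are estimated jointly rather than pointwise; the random arrays here merely stand in for the data and its shifted copies.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx, N = 50, 20, 3                    # grid size, number of coefficients
S = rng.standard_normal((nt, nx))        # stand-in for the data S(t,x)
Sn = rng.standard_normal((N, nt, nx))    # stand-in for shifted copies S_n(t,x)
eps = 0.1                                # scalar regularization parameter

B = np.zeros((N, nt, nx))
for it in range(nt):
    for ix in range(nx):
        s = Sn[:, it, ix]
        # Equation 8 with D = I: A_nk = S_n S_k + eps^2 delta_nk
        A = np.outer(s, s) + eps**2 * np.eye(N)
        d = s * S[it, ix]                    # equation 7
        B[:, it, ix] = np.linalg.solve(A, d)  # equation 5: b = A^{-1} d
```

Without the ridge term the rank-one matrix $ \mathbf{A}$ would be singular, which is why regularization is essential to the pointwise view.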

Shaping regularization (Fomel, 2007) incorporates a shaping (smoothing) operator $ \mathbf{G}$ instead of $ \mathbf{D}$ and provides better numerical properties than Tikhonov's regularization (Tikhonov, 1963) in equation 4 (Fomel, 2009). Inversion using shaping regularization takes the form

$\displaystyle \mathbf{b} = \widehat{\mathbf{A}}^{-1}\,\widehat{\mathbf{d}}\;,$ (9)

where

$\displaystyle \widehat{\mathbf{d}} = \left[\begin{array}{cccc}\mathbf{G}\left[S_1(t,x)\,S(t,x)\right] & \mathbf{G}\left[S_2(t,x)\,S(t,x)\right] & \cdots & \mathbf{G}\left[S_N(t,x)\,S(t,x)\right]\end{array}\right]^T\;,$ (10)

the elements of matrix $ \widehat{\mathbf{A}}$ are
$\displaystyle \widehat{A}_{nk}(t,x) = \lambda^2\,\delta_{nk} + \mathbf{G}\left[S_n(t,x)\,S_k(t,x) - \lambda^2\,\delta_{nk}\right]\;,$ (11)

and $ \lambda$ is a scaling coefficient. One advantage of the shaping approach is the relative ease of controlling the selection of $ \lambda$ and $ \mathbf{G}$ in comparison with $ \epsilon$ and $ \mathbf{D}$ . We define $ \mathbf{G}$ as Gaussian smoothing with an adjustable radius, which is designed by repeated application of triangle smoothing (Fomel, 2007), and choose $ \lambda$ to be the mean value of $ S_n(t,x)$ .
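Gaussian smoothing by repeated triangle smoothing can be sketched in one dimension as follows (the actual $ \mathbf{G}$ smooths along both $ t$ and $ x$ ; the function names and edge treatment are our own choices, not taken from the source):

```python
import numpy as np

def triangle_smooth(x, radius):
    """One triangle-smoothing pass: a triangle filter is the
    convolution of two box filters of half-width `radius`."""
    n = 2 * radius + 1
    box = np.ones(n) / n
    padded = np.pad(x, n, mode='edge')   # replicate edge values
    y = np.convolve(np.convolve(padded, box, mode='same'), box, mode='same')
    return y[n:-n]

def gaussian_shaping(x, radius, repeat=2):
    """Approximate Gaussian smoothing by repeated triangle smoothing:
    by the central limit theorem, repeated convolution of short
    filters tends toward a Gaussian shape."""
    for _ in range(repeat):
        x = triangle_smooth(x, radius)
    return x
```

Each repetition widens the effective Gaussian radius, so `radius` and `repeat` together control the smoothness imposed on $ B_n(t,x)$ .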

Coefficients $ B_n(t,x_z)$ at the zero traces $ S_{zero}(t,x_z)$ are constrained by the regularization, which effectively interpolates them smoothly from the neighboring known traces. The required parameters are the size and shape of the filter $ B_n(t,x)$ and the smoothing radius in $ \mathbf{G}$ .



2013-07-26