
SIGNAL-NOISE DECOMPOSITION BY DIP

Choose noise $ \bold n$ to be energy that has no spatial correlation and signal $ \bold s$ to be energy with spatial correlation consistent with one, two, or possibly a few plane-wave segments. (Another view of noise is that a huge number of plane waves is required to define the wavefield; in other words, with Fourier analysis you can make anything, signal or noise.) We know that a first-order differential equation can absorb (kill) a single plane wave, a second-order equation can absorb one or two plane waves, etc. In practice, we will choose the order of the wavefield and minimize power to absorb all we can, and call that the signal.
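
For example, a single plane wave $ u(t,x) = f(t-px)$ with slope $ p$ is absorbed by the first-order operator $ \partial_x + p\,\partial_t$ , since

$\displaystyle \left( \frac{\partial}{\partial x} \ +\ p\, \frac{\partial}{\partial t} \right) f(t-px) \eq -p\,f' \ +\ p\,f' \eq 0$

and each additional slope can be absorbed by cascading another such first-order factor.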

The fitting goals are

$\displaystyle \bold 0 \quad\approx\quad \bold N ( \bold s \ -\ \bold d )$ (9)

$\displaystyle \bold 0 \quad\approx\quad \epsilon \ \bold S \bold s$ (10)

where $ \bold S $ is the operator that absorbs (by prediction error) the plane waves, $ \bold N$ absorbs noises, and $ \epsilon > 0$ is a small scalar to be chosen. The difference between $ \bold S $ and $ \bold N$ is the spatial order of the filters. Because we regard the noise as spatially uncorrelated, $ \bold N$ has coefficients only on the time axis. Coefficients for $ \bold S $ are distributed over time and space. They have one space level, plus another level for each plane-wave segment slope that we deem to be locally present. In the examples here the number of slopes is taken to be two. Where a data field seems to require more than two slopes, it usually means the ``patch'' could be made smaller.
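
In sampled form, the smallest filter that absorbs a plane wave of slope $ p$ (measured in time samples per trace, with unit trace spacing) has just two coefficients, $ +1$ and $ -1$ , on adjacent traces; applied to a signal $ s(t,x)$ it outputs

$\displaystyle s(t,x) \ -\ s(t-p,\, x-1)$

which vanishes identically when $ s(t,x) = f(t-px)$ . Cascading one such factor per slope is what adds a space level to $ \bold S$ for each plane-wave segment.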

It would be nice if we could forget about the goal (10), but without it the goal (9) would simply set the signal $ \bold s$ equal to the data $ \bold d$ . Choosing the value of $ \epsilon$ determines how the data energy is partitioned between signal and noise. The last thing we will do is choose the value of $ \epsilon$ , and if we do not find a theory for it, we will experiment.

The operators $ \bold S $ and $ \bold N$ can be thought of as ``leveling'' operators. The method of least-squares sees mainly big things, and spectral zeros in $ \bold S $ and $ \bold N$ tend to cancel spectral lines and plane waves in $ \bold s$ and $ \bold n$ . (Here we assume that power levels remain fairly level in time. Were power levels to fluctuate in time, the operators $ \bold S $ and $ \bold N$ should be designed to level them out too.)

None of this is new or exciting in one dimension, but I find it exciting in more dimensions. In seismology, quasi-sinusoidal signals and noises are quite rare, whereas local plane waves are abundant. Just as a short one-dimensional filter can absorb a sinusoid of any frequency, a compact two-dimensional filter can absorb a wavefront of any dip.

To review basic concepts, suppose we are in the one-dimensional frequency domain. Then the solution to the fitting goals (10) and (9) amounts to minimizing a quadratic form by setting to zero its derivative, say

$\displaystyle 0 \eq \frac{\partial \ }{\partial \bold s\T} \left( (\bold s\T-\bold d\T)\, \bold N\T \bold N (\bold s - \bold d ) \ +\ \epsilon^2 \bold s\T \bold S\T\bold S \bold s \right)$ (11)
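
Setting the derivative to zero term by term gives the normal equations

$\displaystyle \bold 0 \eq \bold N\T \bold N (\bold s - \bold d ) \ +\ \epsilon^2 \bold S\T\bold S \bold s \qquad \Longrightarrow \qquad ( \bold N\T \bold N + \epsilon^2 \bold S\T\bold S )\, \bold s \eq \bold N\T \bold N \bold d$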

which gives the answer
$\displaystyle \bold s \eq \left( \frac{ \bold N\T \bold N }{ \bold N\T \bold N \ +\ \epsilon^2 \bold S\T\bold S } \right) \ \bold d$ (12)

$\displaystyle \bold n \eq \bold d - \bold s \eq \left( \frac{ \epsilon^2 \bold S\T\bold S }{ \bold N\T \bold N \ +\ \epsilon^2 \bold S\T\bold S } \right) \ \bold d$ (13)

To make this really concrete, consider its meaning in one dimension, where signal is white $ \bold S\T\bold S=1$ and noise has the frequency $ \omega_0$ , which is killable with the multiplier $ \bold N\T\bold N=(\omega-\omega_0)^2$ . Now we recognize that equation (12) is a notch filter and equation (13) is a narrow-band filter.
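
Substituting these choices into equations (12) and (13) makes the two filters explicit:

$\displaystyle \bold s \eq \frac{ (\omega-\omega_0)^2 }{ (\omega-\omega_0)^2 \ +\ \epsilon^2 } \ \bold d \qquad\qquad \bold n \eq \frac{ \epsilon^2 }{ (\omega-\omega_0)^2 \ +\ \epsilon^2 } \ \bold d$

The factor multiplying $ \bold d$ in the first expression vanishes at $ \omega = \omega_0$ and tends to unity far from it (a notch of half-width roughly $ \epsilon$ ), while the second is its complement, a narrow band-pass of the same width centered on $ \omega_0$ .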

The analytic solutions in equations (12) and (13) are valid in 2-D Fourier space or dip space too. I prefer to compute them in the time and space domain to give me tighter control on window boundaries, but the Fourier solutions give insight and offer a computational speed advantage.

Let us express the fitting goal in the form needed in computation.

$\displaystyle \left[ \begin{array}{c} \bold 0 \\ \bold 0 \end{array} \right] \quad\approx\quad \left[ \begin{array}{c} \bold N \\ \epsilon \bold S \end{array} \right] \bold s \ +\ \left[ \begin{array}{c} -\bold N \bold d \\ \bold 0 \end{array} \right]$ (14)
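
The subroutine below solves this system in preconditioned form. With the change of variable $ \bold s = \bold S^{-1} \bold p$ , the goals (14) become

$\displaystyle \bold 0 \quad\approx\quad \bold N \bold S^{-1} \bold p \ -\ \bold N \bold d \qquad\qquad \bold 0 \quad\approx\quad \epsilon \, \bold p$

so the code applies $ \bold N$ by helix convolution (sf_helicon_lop) and $ \bold S^{-1}$ by helix polynomial division (sf_polydiv_lop), and recovers the signal as $ \bold s = \bold S^{-1} \bold p$ .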

user/gee/signoi.c
void signoi_lop (bool adj, bool add, int n1, int n2, 
		 float *data, float *sign)
/*< linear operator >*/
{
    /* nn, ss (the noise and signal PEFs), dd, nd, niter, eps, and
       verb are module variables set elsewhere in signoi.c */
    sf_helicon_init (nn);       /* helix convolution with the noise PEF N */
    sf_polydiv_init (nd, ss);   /* helix polynomial division by the signal PEF S */

    sf_adjnull(adj,add,n1,n2,data,sign);

    /* right-hand side dd = N d */
    sf_helicon_lop (false, false, n1, n1, data, dd);

    /* solve N d ~ N S^{-1} p with damping eps on p; returns sign = S^{-1} p */
    sf_solver_prec(sf_helicon_lop, sf_cgstep, sf_polydiv_lop, 
		   nd, nd, nd, sign, dd, niter, eps, 
		   "verb", verb, "end");
    sf_cgstep_close();

    /* step to the filter pair for the next patch */
    nn++;
    ss++;
}
As with the missing-data subroutines, the potential number of iterations is large, because the dimensionality of the space of unknowns is much larger than the number of iterations we would find acceptable. Thus, changing the number of iterations niter can sometimes produce a larger change in the result than changing epsilon. Experience shows that helix preconditioning saves the day.


