
Nonstationary polynomial fitting

Traditional stationary regression estimates the coefficients $ a_{i}, i=1,2,\dots,N$ by minimizing the prediction error between a ``master'' signal $ s(\mathbf{x})$ (where $ \mathbf{x}$ represents the coordinates of a multidimensional space) and a collection of slave signals $ L_{i}(\mathbf{x}), i = 1, 2,\dots ,N$ (Fomel, 2009)

$\displaystyle E(\mathbf{x})=s(\mathbf{x})-\sum_{i=1}^{N}a_{i}L_{i}(\mathbf{x}).$ (7)

When $ \mathbf{x}$ is 1D and $ N= 2$ , with $ L_{1}(\mathbf{x})=1$ and $ L_{2}(\mathbf{x})=x$ , minimizing $ E(\mathbf{x})$ amounts to fitting a straight line $ a_{1}+a_{2}x$ to the master signal.
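As a concrete illustration of this stationary special case, the following is a minimal NumPy sketch of the straight-line fit; the signal and all names here are hypothetical, not taken from the paper.

import numpy as np

# Hypothetical master signal: a noisy straight line.
x = np.linspace(0.0, 1.0, 100)
s = 2.0 + 3.0 * x + 0.1 * np.random.randn(x.size)

# Slave signals L_1(x) = 1 and L_2(x) = x as columns of a matrix.
L = np.column_stack([np.ones_like(x), x])

# Stationary regression: one coefficient pair (a_1, a_2) minimizes
# the prediction error over the whole signal.
a, *_ = np.linalg.lstsq(L, s, rcond=None)
print("a_1 = %.3f, a_2 = %.3f" % tuple(a))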

Nonstationary regression is similar to equation 7 but allows the coefficients $ a_{i}(\mathbf{x})$ to vary with $ \mathbf{x}$ , and the error (Fomel, 2009)

$\displaystyle E(\mathbf{x})=s(\mathbf{x})-\sum_{i=1}^{N}a_{i}(\mathbf{x})L_{i}(\mathbf{x})$ (8)

is minimized to solve for the polynomial coefficients $ a_{i}(\mathbf{x})$ . This minimization is ill-posed because the coefficients $ a_{i}(\mathbf{x})$ themselves depend on the independent variables $ \mathbf{x}$ , so there are more unknowns than data points. To obtain a well-posed problem, we constrain the coefficients $ a_{i}(\mathbf{x})$ . Tikhonov's regularization (Tikhonov, 1963) is a classical regularization method that amounts to minimizing the following functional (Fomel, 2009)

$\displaystyle F(a)=\Vert E(\mathbf{x})\Vert^{2}+ \varepsilon^{2}\sum_{i=1}^{N}\Vert\mathbf{D}[a_{i}(\mathbf{x})]\Vert^2 ,$ (9)

where $ \mathbf{D}$ is the regularization operator and $ \varepsilon$ is a scalar regularization parameter. When $ \mathbf{D}$ is a linear operator, the least-squares estimation reduces to linear inversion (Fomel, 2009)

$\displaystyle \mathbf{a}=\mathbf{A}^{-1}\mathbf{d} ,$ (10)

where

\begin{displaymath}\begin{split}\mathbf{a} & =[a_1(\mathbf{x})\; a_2(\mathbf{x})\;\cdots\; a_N(\mathbf{x})]^T ,\\ \mathbf{d} & =[L_1(\mathbf{x})s(\mathbf{x})\; L_2(\mathbf{x})s(\mathbf{x})\;\cdots\; L_N(\mathbf{x})s(\mathbf{x})]^T , \end{split}\end{displaymath}

and the elements of matrix $ \mathbf{A}$ are

$\displaystyle A_{ij}({\mathbf{x}})=L_i({\mathbf{x}})L_j({\mathbf{x}})+\varepsilon^2 \delta_{ij}\mathbf{D}^T\mathbf{D} \;.$    
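To make this inversion concrete, the sketch below assembles and solves the system of equation 10 for a 1D signal. Taking $ \mathbf{D}$ to be a first-difference operator is our assumption (the text leaves $ \mathbf{D}$ general), and the function nonstationary_fit and all variable names are hypothetical.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def nonstationary_fit(x, s, basis, eps):
    """Solve equation 10 for coefficients a_i(x) varying along a 1D axis.
    basis is a list of callables giving the slave signals L_i(x);
    D is chosen as a first-difference operator (our assumption)."""
    M, N = x.size, len(basis)
    L = [b(x) for b in basis]          # slave signals L_i(x)

    # First-difference regularization operator D (M-1 rows, M columns).
    D = sp.diags([-np.ones(M - 1), np.ones(M - 1)], [0, 1], shape=(M - 1, M))
    DtD = D.T @ D

    # Block (i, j) of A is diag(L_i L_j) + eps^2 delta_ij D^T D.
    def block(i, j):
        B = sp.diags(L[i] * L[j])
        return B + eps**2 * DtD if i == j else B

    A = sp.bmat([[block(i, j) for j in range(N)] for i in range(N)],
                format="csc")
    d = np.concatenate([Li * s for Li in L])   # stacked L_i(x) s(x)

    return spsolve(A, d).reshape(N, M)         # row i holds a_i(x)

Solving all coefficients jointly through the sparse block system keeps the regularization term $ \varepsilon^{2}\,\mathbf{D}^T\mathbf{D}$ coupled across neighboring samples, which is what makes the otherwise underdetermined pointwise fit well-posed.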

Figure 2. Least-squares linear fitting compared with nonstationary polynomial fitting.

Next, we use a simple signal to simulate the amplitude variation of a nonstationary event contaminated by random noise (dashed line in Figure 2). In Figure 2, the dot-dashed line denotes the result of the least-squares linear fitting and the solid line denotes the result of the nonstationary polynomial fitting. The nonstationary polynomial fitting follows the variable-amplitude event more accurately than the linear fit, particularly for $ 40<x<60$ .
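A hypothetical experiment in the spirit of Figure 2 could compare the two fits on a synthetic variable-amplitude event; the sketch below reuses nonstationary_fit from above, and all signal parameters are illustrative rather than the paper's.

import numpy as np

# Hypothetical test in the spirit of Figure 2: an event whose amplitude
# varies with x, contaminated by random noise (parameters illustrative).
x = np.arange(100, dtype=float)
s = (1.0 + 0.02 * x) * np.sin(0.25 * x) + 0.1 * np.random.randn(x.size)

basis = [np.ones_like, lambda t: t]    # L_1(x) = 1, L_2(x) = x

# Stationary least-squares line fit: one coefficient pair overall.
Lmat = np.column_stack([b(x) for b in basis])
a_stat, *_ = np.linalg.lstsq(Lmat, s, rcond=None)
fit_stationary = Lmat @ a_stat

# Nonstationary fit (reuses nonstationary_fit defined above): the
# coefficients a_i(x) can track the local amplitude change.
a_ns = nonstationary_fit(x, s, basis, eps=1.0)
fit_nonstationary = sum(a_ns[i] * basis[i](x) for i in range(len(basis)))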

