
LOWRANK APPROXIMATION

The key idea of the lowrank decomposition is to decompose the wave extrapolation matrix

$\displaystyle W(\mathbf{x},\mathbf{k}) = e^{i \left[\phi(\mathbf{x},\mathbf{k},\Delta t)-\mathbf{k}\cdot \mathbf{x}\right]}$ (11)

for a fixed $ \Delta t$ into a separated representation

$\displaystyle W(\mathbf{x},\mathbf{k}) \approx \sum\limits_{m=1}^M \sum\limits_{n=1}^N W(\mathbf{x},\mathbf{k}_m)\,a_{mn}\,W(\mathbf{x}_n,\mathbf{k}).$ (12)

Representation (12) speeds up the computation of $ P(\mathbf{x},t+\Delta t)$ since

$\displaystyle P(\mathbf{x},t+\Delta t) = \int e^{i \mathbf{x}\cdot\mathbf{k}}\,W(\mathbf{x},\mathbf{k})\,\widehat{P}(\mathbf{k},t)\,d\mathbf{k} \approx \sum\limits_{m=1}^M W(\mathbf{x},\mathbf{k}_m) \left( \sum\limits_{n=1}^N a_{mn} \left( \int e^{i \mathbf{x}\cdot\mathbf{k}}\,W(\mathbf{x}_n,\mathbf{k})\,\widehat{P}(\mathbf{k},t)\,d\mathbf{k} \right) \right)\;.$ (13)

The evaluation of the last formula is effectively equivalent to applying $ N$ inverse Fast Fourier Transforms. Physically, a separable lowrank approximation amounts to selecting a set of $ N$ representative spatial locations and $ M$ representative wavenumbers.
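
To make this structure concrete, here is a minimal sketch of one extrapolation step via equation (13), written in Python/NumPy. The array names W1, A, and W2 for the precomputed lowrank factors are our own naming (not the authors' code), and FFT normalization conventions are glossed over.

import numpy as np

def lowrank_step(P, W1, A, W2):
    """One wave-extrapolation step via equation (13).

    P  : wavefield P(x, t) on the spatial grid, shape (Nx,)
    W1 : columns W(x, k_m) of the extrapolation matrix, shape (Nx, M)
    A  : middle matrix a_{mn}, shape (M, N)
    W2 : rows W(x_n, k) of the extrapolation matrix, shape (N, Nx)
    """
    P_hat = np.fft.fft(P)  # P(x, t) -> P^(k, t)
    # N inverse FFTs, one per representative location x_n; up to FFT
    # normalization conventions, this is the inner integral in (13).
    Q = np.fft.ifft(W2 * P_hat[np.newaxis, :], axis=1)  # shape (N, Nx)
    # Outer sums over k_m and x_n; take .real if the wavefield is real.
    return np.einsum('xm,mn,nx->x', W1, A, Q)

The cost per step is one forward FFT, $ N$ inverse FFTs, and an $ O(M N_x)$ combination, in line with the operation count given below.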

To discuss the construction of approximation (12), let us view it as a matrix decomposition problem

$\displaystyle \mathbf{W} \approx \mathbf{W}_1\,\mathbf{A}\,\mathbf{W}_2,$ (14)

where $ \mathbf{W}$ is the $ N_x\times N_x$ matrix with entries $ W(\mathbf{x},\mathbf{k})$, $ \mathbf{W}_1$ is the submatrix of $ \mathbf{W}$ that consists of the columns associated with $ \{\mathbf{k}_m\}$, $ \mathbf{W}_2$ is the submatrix that consists of the rows associated with $ \{\mathbf{x}_n\}$, and $ \mathbf{A} = \{a_{mn}\}$. In practice, we find that the matrix $ \mathbf{W}$ has a lowrank separated representation provided that $ \Delta t$ is sufficiently small, which, in the case of smooth models, can be partially explained by the separation of terms in the Taylor series (5). Let $ \varepsilon $ be a prescribed accuracy of this separated representation, and $ r_\varepsilon $ be the numerical rank of $ \mathbf{W}$. The construction of the separated representation in equation (14) follows the method of Engquist and Ying (2007, 2009) and is detailed in the appendix. The main observation is that the columns of $ \mathbf{W}_1$ and the rows of $ \mathbf{W}_2$ should span the column space and row space of $ \mathbf{W}$, respectively, as well as possible. The algorithm for computing (14) takes the following steps:
  1. Pick a uniformly random set $ S$ of $ \beta \cdot r_\varepsilon $ columns of $ \mathbf{W}$, where $ \beta$ is chosen to be 3 or 4 in practice. Perform a pivoted QR factorization of $ (\mathbf{W}(:,S))^*$ (Golub and Van Loan, 1996). The first $ r_\varepsilon $ pivoted columns correspond to $ r_\varepsilon $ rows of the matrix $ \mathbf{W}(:,S)$. Define $ \mathbf{W}_2$ to be the submatrix of $ \mathbf{W}$ that consists of these rows, and set $ \mathbf{x}_1,\ldots,\mathbf{x}_N$ with $ N=r_\varepsilon $ to be the corresponding $ \mathbf{x}$ values of these rows.
  2. Pick a uniformly random set $ T$ of $ \beta \cdot r_\varepsilon $ rows of $ \mathbf{W}$ and perform a pivoted QR factorization of $ \mathbf{W}(T,:)$. The first $ r_\varepsilon $ pivoted columns correspond to $ r_\varepsilon $ columns of $ \mathbf{W}$. Define $ \mathbf{W}_1$ to be the submatrix of $ \mathbf{W}$ that consists of these columns, and set $ \mathbf{k}_1,\ldots,\mathbf{k}_M$ with $ M=r_\varepsilon $ to be the corresponding $ \mathbf{k}$ values of these columns.
  3. Set the middle matrix $ \mathbf{A} = \left(\mathbf{W}(\mathbf{x}_n,\mathbf{k}_m)_{1\le n \le N,\,1\le m \le M}\right)^{\dagger}$, where $ \dagger$ stands for the pseudoinverse.
  4. Combining the results of the previous three steps gives the required separated representation $ \mathbf{W} \approx \mathbf{W}_1\,\mathbf{A}\,\mathbf{W}_2$ (a sketch of these steps in code appears below).
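Assuming the roles of $ \mathbf{W}_1$ and $ \mathbf{W}_2$ as defined after equation (14), a minimal sketch of these four steps in Python with SciPy's pivoted QR might look as follows. Here sample_W(rows, cols) is a hypothetical callback that evaluates selected entries of $ \mathbf{W}$ from equation (11) (with the number of wavenumbers $ N_k$ equal to $ N_x$ on an FFT grid), so the full matrix is never formed.

import numpy as np
from scipy.linalg import qr, pinv

def lowrank_decompose(sample_W, Nx, Nk, r_eps, beta=4, seed=None):
    """Randomized construction of W ~= W1 @ A @ W2 (equation 14).

    sample_W(rows, cols) evaluates the submatrix of W with the given
    row (x) and column (k) indices via equation (11), so the full
    Nx-by-Nk matrix is never built.
    """
    rng = np.random.default_rng(seed)
    # Step 1: beta * r_eps random columns; pivoted QR of the conjugate
    # transpose ranks the rows, giving representative locations x_n.
    S = rng.choice(Nk, size=beta * r_eps, replace=False)
    _, _, piv = qr(sample_W(np.arange(Nx), S).conj().T,
                   mode='economic', pivoting=True)
    xn = np.sort(piv[:r_eps])
    # Step 2: beta * r_eps random rows; pivoted QR ranks the columns,
    # giving representative wavenumbers k_m.
    T = rng.choice(Nx, size=beta * r_eps, replace=False)
    _, _, piv = qr(sample_W(T, np.arange(Nk)), mode='economic', pivoting=True)
    km = np.sort(piv[:r_eps])
    # Step 3: middle matrix is the pseudoinverse of the N-by-M submatrix.
    A = pinv(sample_W(xn, km))
    # Step 4: assemble the factors from selected columns and rows of W.
    W1 = sample_W(np.arange(Nx), km)   # columns W(x, k_m), shape (Nx, M)
    W2 = sample_W(xn, np.arange(Nk))   # rows W(x_n, k), shape (N, Nx)
    return W1, A, W2

Step 1 factors $ (\mathbf{W}(:,S))^*$ because pivoted QR selects well-conditioned columns, and the columns of the conjugate transpose are the rows of $ \mathbf{W}(:,S)$.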
The algorithm does not require, at any step, access to the full matrix $ \mathbf{W}$; it needs only selected rows and columns. Once the decomposition is complete, it can be reused at every time step of the wave extrapolation. In multiple-core implementations, the matrix operations in equation (12) are easy to parallelize. The details of the algorithm are outlined in the appendix.

The cost of the algorithm is $ O(M N_x \log{N_x})$ operations per time step, where $ N_x \log{N_x}$ refers to the cost of the Fourier transform. In comparison, the cost of finite-difference wave extrapolation is $ O(L N_x)$, where $ L$ is the size of the finite-difference stencil. Song et al. (2011) present an application of the proposed lowrank approximation algorithm for devising accurate finite-difference schemes. There is a natural trade-off in the selection of $ M$: larger values lead to a more accurate wave representation but require more computational time. In the examples of the next section, we select $ M$ and $ N$ based on an estimate of the approximation accuracy, generally aiming for a relative accuracy of $ 10^{-4}$. The resulting $ M$ is typically smaller than the number of Fourier transforms required for pseudo-spectral algorithms such as pseudo-spectral implementations of the rapid expansion method (Pestana and Stoffa, 2011).
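
One way such an accuracy estimate might be obtained is by Monte Carlo sampling. The following minimal sketch reuses the hypothetical sample_W callback from the previous sketch and checks the lowrank reconstruction against exact entries of $ \mathbf{W}$ on a fresh set of random columns; it is a spot check, not a guaranteed bound.

import numpy as np

def estimate_rel_error(sample_W, W1, A, W2, Nx, Nk, n_test=20, seed=None):
    """Monte Carlo estimate of the relative error of W ~= W1 @ A @ W2."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(Nk, size=n_test, replace=False)
    exact = sample_W(np.arange(Nx), cols)     # true columns of W
    approx = W1 @ (A @ W2[:, cols])           # lowrank reconstruction
    return np.linalg.norm(approx - exact) / np.linalg.norm(exact)

If the estimated error exceeds the target (e.g., $ 10^{-4}$), $ r_\varepsilon $, and hence $ M$ and $ N$, can be increased and the decomposition recomputed.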

