Next: The Newton, Gauss-Newton, and Up: Pengliang Yang: Primer for Previous: Numerical examples

Full waveform inversion (FWI)

Time domain FWI was proposed by Tarantola (1984) and further developed in Tarantola (1986) and Pica et al. (1990). Later, frequency domain FWI was proposed by Pratt et al. (1998). Many authors also call it full waveform tomography (tomography corresponds to FWI, imaging to migration). Here, we mainly follow two well-documented papers: Pratt et al. (1998) and Virieux and Operto (2009). We define the misfit vector $ \Delta \textbf{p}=\textbf{p}_{cal}-\textbf{p}_{obs}$ by the differences at the receiver positions between the recorded seismic data $ \textbf{p}_{obs}$ and the modelled seismic data $ \textbf{p}_{cal}=\textbf{f}(\textbf{m})$ for each source-receiver pair of the seismic survey. Here, in the simplest acoustic velocity inversion, $ \textbf{m}$ corresponds to the velocity model to be determined. The objective function, taking the least-squares norm of the misfit vector $ \Delta \textbf{p}$ , is given by

$\displaystyle E(\textbf{m})=\frac{1}{2}\Delta \textbf{p}^{\dagger}\Delta \textbf{p}=\frac{1}{2}\sum_{s=1}^{ns}\sum_{r=1}^{ng}\int \mathrm{d}t\,\vert p_{cal}(\textbf{x}_r, t;\textbf{x}_s)-p_{obs}(\textbf{x}_r, t;\textbf{x}_s)\vert^2$ (64)

where $ ns$ and $ ng$ are the numbers of sources and geophones, $ \dagger$ denotes the adjoint (conjugate transpose) and $ *$ the complex conjugate, while $ \textbf{f}(\cdot)$ indicates the forward modeling of the wave propagation. Note that the recorded seismic data is only a small subset of the whole wavefield.
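The misfit of equation (64) is a straightforward sum over sources, receivers, and time. A minimal numerical sketch follows; the array shape `(ns, ng, nt)` and the names `p_cal`, `p_obs`, `dt` are illustrative assumptions, not taken from the primer's own codes.

```python
import numpy as np

def misfit(p_cal, p_obs, dt):
    """Least-squares misfit E(m) = 1/2 * sum_{s,r} int dt |p_cal - p_obs|^2,
    discretized on arrays of shape (ns, ng, nt)."""
    dp = p_cal - p_obs                        # misfit vector Delta p
    return 0.5 * dt * np.sum(np.abs(dp) ** 2)

# Toy survey: 2 sources, 4 geophones, 100 time samples (hypothetical sizes)
ns, ng, nt, dt = 2, 4, 100, 0.001
rng = np.random.default_rng(0)
p_obs = rng.standard_normal((ns, ng, nt))     # "recorded" data
print(misfit(p_obs, p_obs, dt))               # perfect match: prints 0.0
print(misfit(p_obs + 1.0, p_obs, dt) > 0)     # any mismatch gives E > 0: True
```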

The minimum of the misfit function $ E(\textbf{m})$ is sought in the vicinity of the starting model $ \textbf{m}_0$ . FWI is essentially a local optimization. In the framework of the Born approximation, we assume that the updated model $ \textbf{m}$ of dimension $ M$ can be written as the sum of the starting model $ \textbf{m}_0$ plus a perturbation model $ \Delta \textbf{m}$ : $ \textbf{m}=\textbf{m}_0+\Delta \textbf{m}$ . In the following, we assume that $ \textbf{m}$ is real valued.

A second-order Taylor-Lagrange development of the misfit function in the vicinity of $ \textbf{m}_0$ gives the expression

$\displaystyle E(\textbf{m}_0+\Delta \textbf{m})=E(\textbf{m}_0) +\sum_{i=1}^M\frac{\partial E(\textbf{m}_0)}{\partial m_i}\Delta m_i +\frac{1}{2}\sum_{i=1}^M\sum_{j=1}^M\frac{\partial^2 E(\textbf{m}_0)}{\partial m_i \partial m_j}\Delta m_i \Delta m_j+O(\vert\vert\Delta\textbf{m}\vert\vert^3)$ (65)

Taking the derivative with respect to the model parameter $ m_i$ results in

$\displaystyle \frac{\partial E(\textbf{m})}{\partial m_i}=\frac{\partial E(\textbf{m}_0)}{\partial m_i}+\sum_{j=1}^M\frac{\partial^2 E(\textbf{m}_0)}{\partial m_j \partial m_i}\Delta m_j, \quad i=1,2,\ldots,M.$ (66)

In compact matrix-vector notation, this is

$\displaystyle \frac{\partial E(\textbf{m})}{\partial \textbf{m}}=\frac{\partial E(\textbf{m}_0)}{\partial \textbf{m}}+\frac{\partial^2 E(\textbf{m}_0)}{\partial\textbf{m}^2}\Delta \textbf{m}$ (67)


Setting this derivative to zero, i.e. seeking the stationary point of the quadratic approximation, yields the model update

$\displaystyle \Delta \textbf{m}=-\left(\frac{\partial^2 E(\textbf{m}_0)}{\partial\textbf{m}^2}\right)^{-1}\frac{\partial E(\textbf{m}_0)}{\partial \textbf{m}}=-\textbf{H}^{-1}\nabla E_{\textbf{m}}$ (68)

where

$\displaystyle \nabla E_{\textbf{m}}=\frac{\partial E(\textbf{m}_0)}{\partial \textbf{m}}=\left[\frac{\partial E(\textbf{m}_0)}{\partial m_1}, \frac{\partial E(\textbf{m}_0)}{\partial m_2}, \ldots, \frac{\partial E(\textbf{m}_0)}{\partial m_M}\right]^T$ (69)

and

$\displaystyle \textbf{H}=\frac{\partial^2 E(\textbf{m}_0)}{\partial\textbf{m}^2}=\begin{bmatrix}\frac{\partial^2 E(\textbf{m}_0)}{\partial m_1^2}&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_1\partial m_2}&\ldots&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_1\partial m_M}\\ \frac{\partial^2 E(\textbf{m}_0)}{\partial m_2\partial m_1}&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_2^2}&\ldots&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_2\partial m_M}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial^2 E(\textbf{m}_0)}{\partial m_M\partial m_1}&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_M\partial m_2}&\ldots&\frac{\partial^2 E(\textbf{m}_0)}{\partial m_M^2}\\ \end{bmatrix}.$ (70)

$ \nabla E_{\textbf{m}}$ and $ \textbf{H}$ are the gradient vector and the Hessian matrix, respectively.
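The Newton update of equation (68) can be illustrated on a toy problem. The quadratic misfit below is hypothetical (the matrix `A` and model `m_true` are made-up illustration values, not from the primer); for an exactly quadratic $ E$ , a single Newton step $ \Delta\textbf{m}=-\textbf{H}^{-1}\nabla E_{\textbf{m}}$ reaches the minimizer, which is why Newton-type methods converge so fast near the solution.

```python
import numpy as np

# Hypothetical linear forward problem f(m) = A m, observed data A m_true,
# so E(m) = 1/2 ||A (m - m_true)||^2 is exactly quadratic in m.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                  # symmetric positive definite
m_true = np.array([1.0, -2.0])              # "true" model (illustrative)

def E(m):
    r = A @ (m - m_true)                    # data residual Delta p
    return 0.5 * r @ r

m0 = np.zeros(2)                            # starting model m_0
grad = A.T @ (A @ (m0 - m_true))            # gradient of E at m_0, eq. (69)
H = A.T @ A                                 # Hessian, constant for quadratic E, eq. (70)
dm = -np.linalg.solve(H, grad)              # Newton step Delta m = -H^{-1} grad E, eq. (68)
m1 = m0 + dm
print(np.allclose(m1, m_true))              # True: one step hits the minimum
```

Note that in practice the full Hessian is never formed or inverted for realistic model sizes $ M$ ; this motivates the Gauss-Newton and gradient-based approximations discussed in the next section.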
