
WELLS NOT MATCHING THE SEISMIC MAP

Accurate knowledge comes from wells, but wells are expensive and far apart. Less accurate knowledge comes from surface seismology, but this knowledge is available densely in space and can indicate significant trends between the wells. For example, a prospective area may contain 15 wells but 600 or more seismic stations. To choose future well locations, it is helpful to match the known well data with the seismic data. Although the seismic data is delightfully dense in space, it often mismatches the wells because there are systematic differences in the nature of the measurements; these discrepancies are sometimes attributed to velocity anisotropy. To work with such measurements, we do not need to track down the physical cause. We need only merge the information somehow, so as to map the trends between wells appropriately and make a proposal for the next drill site; the empirical adjustments that reconcile the two kinds of data are often called ``fudge factors''. Here, we consider only a scalar value at each location. Take $\bold w$ to be a vector of 15 components, each component being the seismic travel time to some fixed depth in a well. Likewise, let $\bold s$ be a 600-component vector, each component being the seismic travel time to that same fixed depth as estimated wholly from surface seismology. An example is the Chevron oil field in Figure 6.

Figure 6: Binning by data push. Left is seismic data; right is well locations. Values in bins are divided by the numbers in bins. (Toldi)
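Binning by data push amounts to pushing each scattered measurement into its nearest bin and then dividing each bin's sum by the number of points that landed there. The following minimal C sketch illustrates the idea; the routine name, argument list, and grid parameters are hypothetical, not the program that produced Figure 6.

/* Minimal sketch of binning by data push (hypothetical names). */
#include <stdlib.h>

void bin2(int nd,                     /* number of scattered data points */
          const float *x, const float *y, const float *val,
          int n1, int n2,             /* grid dimensions */
          float o1, float o2,         /* grid origin */
          float d1, float d2,         /* grid spacing */
          float *map                  /* output [n1*n2], zero in empty bins */)
{
    int id, i1, i2, j;
    float *fold = (float*) calloc(n1*n2, sizeof(float));

    for (j=0; j < n1*n2; j++) map[j] = 0.;

    for (id=0; id < nd; id++) {                 /* push each point into its bin */
        i1 = (int)((x[id]-o1)/d1 + 0.5);
        i2 = (int)((y[id]-o2)/d2 + 0.5);
        if (i1 < 0 || i1 >= n1 || i2 < 0 || i2 >= n2) continue;
        j = i1 + i2*n1;
        map[j]  += val[id];                     /* accumulate values in bin */
        fold[j] += 1.;                          /* count points in bin */
    }

    for (j=0; j < n1*n2; j++)                   /* values divided by counts */
        if (fold[j] > 0.) map[j] /= fold[j];

    free(fold);
}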

The binning of the seismic data in Figure 6 is not really satisfactory when we have available the techniques of missing data estimation to fill the empty bins. Using the ideas of subroutine mis1(), we can extend the seismic data into the empty part of the plane. We use the same principle as before: minimize the energy in the filtered map, subject to the constraint that the map matches the data where it is known. I chose the filter $\bold A = \nabla\T \nabla=-\nabla^2$, the Laplacian operator (actually, its negative), to obtain the result in Figure 7.
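On a grid with unit spacing, and away from the edges, this choice of $\bold A$ reduces to the familiar five-point stencil, so minimizing its output penalizes any bin value that departs from the average of its four neighbors:
\begin{displaymath}
(-\nabla^2 \bold s)_{i,j} \eq 4\, s_{i,j} - s_{i-1,j} - s_{i+1,j} - s_{i,j-1} - s_{i,j+1}
\end{displaymath}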

Figure 7: Seismic binned (left) and extended (right) by minimizing the energy in $\nabla^2 \bold s$.

There are basically two ways to handle boundary conditions. The first, as in Figure 1, is to use a transient filter operator that assumes zeros outside the region of interest. The second is to use an internal filter operator. Internal filters introduce the hazard that solutions can grow at the boundaries, which is rarely desirable. In that case, it is better to assign boundary values, which is what I did here in Figure 7. I did not do it because it is better; I did it to minimize the area surrounding the data of interest.

The first job is to fill the gaps in the seismic data. We did such a job in one dimension in Figures 1-5. More computational details come later. Let us call the extended seismic data $\bold s$.

Think of a model space $\bold m$, a map of infinitely many hypothetical wells, that must match the real wells where we have them. We must find a map that matches the wells exactly and somehow matches the seismic information elsewhere. Let us define the vector $\bold w$ as shown in Figure 6, so that $\bold w$ holds the observed values at the wells and zeros elsewhere.

Where the seismic data contains sharp bumps or streaks, we want our final Earth model to have those features. The wells cannot provide the rough features, because they are too far apart to supply high spatial frequencies. The well information generally conflicts with the seismic data at low spatial frequencies because of systematic discrepancies between the two types of measurements. Thus, we must accept that $\bold m$ and $\bold s$ may differ at low spatial frequencies (where gradient and Laplacian are small).

Our final map $\bold m$ would be very unconvincing if it simply jumped from a well value at one point to a seismic value at a neighboring point; the map would contain a discontinuity around each well. Our philosophy of finding an Earth model $\bold m$ is that the Earth map should contain no obvious ``footprint'' of the data acquisition (the well locations). We adopt the philosophy that the difference between the final map (the extended wells) and the seismic information, $\bold x=\bold m-\bold s$, should be smooth. Thus, we seek the minimum residual $\bold r$, which is the roughened difference between the seismic data $\bold s$ and the map $\bold m$ of hypothetical omnipresent wells. With roughening operator $\bold A$, we fit:

\begin{displaymath}
\bold 0\quad\approx\quad \bold r \eq \bold A ( \bold m - \bold s )
\eq \bold A \bold x
\end{displaymath} (12)

along with the constraint that the map matches the wells at the well locations. We honor this constraint by initializing the map to the wells, $\bold m = \bold w$ (well values where we have wells, and zeros elsewhere). After the gradient direction suggests some changes to $\bold m$, we simply do not allow those changes at the well locations: we apply a ``missing data selector'' $\bold J$ to the gradient, which zeros out possible changes at the wells. As with goal (7), we have:
\begin{displaymath}
\bold 0\quad\approx\quad \bold r \eq
\bold A \bold J \bold x + \bold A \bold x_{\rm known}
\end{displaymath} (13)
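Goal (13) is simply goal (12) with the unknown and fixed parts of $\bold x$ separated: the selector $\bold J$ passes the components that are free to change, while $\bold x_{\rm known}$ holds the fixed values at the wells (there, $\bold x = \bold w - \bold s$) and zeros elsewhere, so that
\begin{displaymath}
\bold x \eq \bold J \bold x + \bold x_{\rm known}
\quad\Longrightarrow\quad
\bold A \bold x \eq \bold A \bold J \bold x + \bold A \bold x_{\rm known}
\end{displaymath}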

After minimizing $\bold r$ by adjusting $\bold x$, we have our solution $ \bold m = \bold x + \bold s $.

Now, we prepare some roughening operators $\bold A$. We have already coded a 2-D gradient operator igrad2. Let us combine it with its adjoint to get the 2-D Laplacian operator. (You might notice that the Laplacian operator is ``self-adjoint,'' meaning that the operator does the same calculation that its adjoint does. Any operator of the form $\bold A\T\bold A$ is self-adjoint because $(\bold A\T\bold A)\T=\bold A\T (\bold A\T)\T=\bold A\T\bold A$. )

system/generic/laplac2.c
    /* Loop body of the laplac2 operator.  The forward operation
       accumulates r += (-laplacian) p; the adjoint scatters r back into p.
       At interior points the four blocks below sum to the five-point
       stencil 4 p[j] - p[j-1] - p[j+1] - p[j-n1] - p[j+n1]; at the edges
       the missing-neighbor terms are simply omitted. */
    for     (i2=0; i2 < n2; i2++) {
	for (i1=0; i1 < n1; i1++) {
	    j = i1+i2*n1;              /* linear index of grid point (i1,i2) */

	    if (i1 > 0) {              /* difference with left neighbor (axis 1) */
		if (adj) {
		    p[j-1] -= r[j];
		    p[j]   += r[j];
		} else {
		    r[j] += p[j] - p[j-1];
		}
	    }
	    if (i1 < n1-1) {           /* difference with right neighbor (axis 1) */
		if (adj) {
		    p[j+1] -= r[j];
		    p[j]   += r[j];
		} else {
		    r[j] += p[j] - p[j+1];
		}
	    }

	    if (i2 > 0) {              /* difference with neighbor below (axis 2) */
		if (adj) {
		    p[j-n1] -= r[j];
		    p[j]    += r[j];
		} else {
		    r[j] += p[j] - p[j-n1];
		}
	    }
	    if (i2 < n2-1) {           /* difference with neighbor above (axis 2) */
		if (adj) {
		    p[j+n1] -= r[j];
		    p[j]    += r[j];
		} else {
		    r[j] += p[j] - p[j+n1];
		}
	    }
	}
    }
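A quick numerical way to confirm the self-adjointness claim is a dot-product test: apply the operator to two random vectors and compare $\bold q\T (\bold A \bold p)$ with $\bold p\T (\bold A \bold q)$; for a self-adjoint $\bold A$ the two numbers agree to rounding error. The following standalone sketch is illustrative only (it is not the Madagascar test driver) and applies the same stencil and edge handling as the loop above.

/* Dot-product test for self-adjointness of A = -laplacian (sketch). */
#include <stdio.h>
#include <stdlib.h>

static void neglap(int n1, int n2, const float *p, float *r)
/* r = (-laplacian) p, with the same edge handling as laplac2.c */
{
    int i1, i2, j;
    for (i2=0; i2 < n2; i2++)
        for (i1=0; i1 < n1; i1++) {
            j = i1+i2*n1;
            r[j] = 0.;
            if (i1 > 0)    r[j] += p[j] - p[j-1];
            if (i1 < n1-1) r[j] += p[j] - p[j+1];
            if (i2 > 0)    r[j] += p[j] - p[j-n1];
            if (i2 < n2-1) r[j] += p[j] - p[j+n1];
        }
}

int main(void)
{
    const int n1=20, n2=30, n=n1*n2;
    float *p  = (float*) malloc(n*sizeof(float));
    float *q  = (float*) malloc(n*sizeof(float));
    float *Ap = (float*) malloc(n*sizeof(float));
    float *Aq = (float*) malloc(n*sizeof(float));
    double d1=0., d2=0.;
    int j;

    for (j=0; j < n; j++) {            /* two random test vectors */
        p[j] = rand()/(float)RAND_MAX - 0.5;
        q[j] = rand()/(float)RAND_MAX - 0.5;
    }
    neglap(n1,n2,p,Ap);
    neglap(n1,n2,q,Aq);
    for (j=0; j < n; j++) {
        d1 += q[j]*Ap[j];              /* q' (A p) */
        d2 += p[j]*Aq[j];              /* p' (A q) = p' (A' q) if A = A' */
    }
    printf("q'Ap = %g   p'Aq = %g\n", d1, d2);
    free(p); free(q); free(Ap); free(Aq);
    return 0;
}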
Subroutine lapfill() is the same idea as mis1() except that the filter $\bold A$ has been specialized to the Laplacian implemented by module laplac2.

system/generic/lapfill.c
void lapfill(int niter   /* number of iterations */, 
	     float* mm   /* model [m1*m2] */, 
	     bool *known /* mask for known data [m1*m2] */)
/*< interpolate >*/
{
    /* grad, n12, zero, and verb are module variables set at
       initialization: grad selects gradient rather than Laplacian
       roughening, n12 = m1*m2, and zero is a zero-valued data vector
       serving as the right-hand side of the fitting goal.  The "x0"
       keyword passes the initial model and "known" passes the mask of
       components that are held fixed during the iterations. */
    if (grad) {
	/* 0 ~ grad m : model size n12, data size 2*n12 (two components) */
	sf_solver (sf_igrad2_lop, sf_cgstep, n12, 2*n12, mm, zero, 
		   niter, "x0", mm, "known", known, 
		   "verb", verb, "end");
    } else {
	/* 0 ~ laplacian m : model and data both of size n12 */
	sf_solver (laplac2_lop, sf_cgstep, n12, n12, mm, zero, 
		   niter, "x0", mm, "known", known, 
		   "verb", verb, "end");
    }
    sf_cgstep_close ();
}

Subroutine lapfill() can be used for each of our two applications: (1) extending the seismic data to fill space, and (2) fitting the map exactly to the wells and approximately to the seismic data. When extending the seismic data, the initially non-zero components $\bold s \ne \bold 0$ are fixed and cannot be changed.
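In outline, the glue code for the two applications could look like the following sketch. The helper name, variable names, and the nonzero-bin test used to build the known-data mask are hypothetical, and the sketch assumes the lapfill module has already been initialized for the grid; it is not the program that produced Figures 7 and 8.

#include <stdbool.h>

void lapfill(int niter, float* mm, bool *known);   /* from lapfill.c */

/* Hypothetical driver sketch.  ww: well values pushed into bins (zero
 * where there is no well), ss: binned seismic data (zero in empty bins),
 * xx: work array, known: mask, mm: output map; all of length n12. */
void make_map(int niter, int n12,
              const float *ww, float *ss,
              float *xx, bool *known, float *mm)
{
    int j;

    /* (1) Extend the seismic data: nonzero bins are the known values
       that may not change.  (Flagging "known" by nonzero value is a
       simplification used only in this sketch.) */
    for (j=0; j < n12; j++) known[j] = (ss[j] != 0.);
    lapfill(niter, ss, known);             /* ss becomes the extended s */

    /* (2) Fit the map to the wells: with x = m - s, initialize
       x = w - s at the wells and zero elsewhere, holding wells fixed. */
    for (j=0; j < n12; j++) {
        known[j] = (ww[j] != 0.);
        xx[j] = known[j] ? ww[j] - ss[j] : 0.f;
    }
    lapfill(niter, xx, known);             /* minimize |A x| off the wells */

    for (j=0; j < n12; j++)                /* final map m = x + s */
        mm[j] = xx[j] + ss[j];
}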

The final map is shown in Figure 8.

Figure 8: Final map based on Laplacian roughening.

Results can be computed with various filters. I tried both $\nabla^2$ and $\nabla$. Each has disadvantages: $\nabla$ is too cautious, and $\nabla^2$ is perhaps too aggressive. Figure 9 shows the difference $\bold x$ between the extended seismic data and the extended wells. Notice that for $\nabla$, the difference shows a localized ``tent pole'' disturbance about each well. For $\nabla^2$, there could be a large overshoot between wells, especially if two nearby wells have significantly different values; I do not see that problem here.

My overall opinion is that the Laplacian does the better job in this case. I have that opinion because, in the map made with gradient roughening, I can clearly see where the wells are. The wells are where we have acquired data, and we would like our map of the world not to show where we acquired data. Perhaps our estimated map of the world cannot help but show where we have and have not acquired data, but we would like to minimize that aspect.

A good image of the Earth hides our data acquisition footprint.

Figure 9: Difference between the wells (the final map) and the extended seismic data. Left is plotted at the wells (with gray background for zero). Center is based on gradient roughening and shows tent-pole-like residuals at wells. Right is based on Laplacian roughening.

To understand the behavior theoretically, recall that in one dimension the filter $\nabla$ interpolates with straight lines and $\nabla^2$ interpolates with cubics. The reason is that the fitting goal $\bold 0 \approx \nabla \bold m$ leads to $\frac{\partial }{\partial \bold m\T} \bold m\T\nabla\T \nabla \bold m = \bold 0$ or $\nabla\T \nabla \bold m = \bold 0$; whereas, the fitting goal $\bold 0 \approx \nabla^2 \bold m$ leads to $\nabla^4 \bold m = \bold 0$, which is satisfied by cubics. In two dimensions, minimizing the output of $\nabla$ gives us solutions of Laplace's equation with sources at the known data. It is as if $\nabla$ stretches a rubber sheet over poles at each well, whereas $\nabla^2$ bends a stiff plate.
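Spelled out in one dimension (away from the known data points, which supply the boundary conditions), the argument of the previous paragraph is:
\begin{displaymath}
\bold 0 \;\approx\; \nabla \bold m
\quad\Longrightarrow\quad
\nabla\T \nabla \bold m \eq \bold 0
\quad\Longrightarrow\quad
\frac{d^2 m}{dx^2} \eq 0 ,
\end{displaymath}
whose solutions are straight lines, and
\begin{displaymath}
\bold 0 \;\approx\; \nabla^2 \bold m
\quad\Longrightarrow\quad
\nabla^4 \bold m \eq \bold 0
\quad\Longrightarrow\quad
\frac{d^4 m}{dx^4} \eq 0 ,
\end{displaymath}
whose solutions are cubic polynomials.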

Just because $\nabla^2$ gives smoother maps than $\nabla$ does not mean those maps are closer to reality. An objectively better choice for the model styling goal is addressed in Chapter [*]. It is the same issue we noticed when comparing Figures 1-5.

