
Multi-shot (survey) simulation

IWaveSim loops over any axes beyond the time axis (signified by idxxx=dim through idxxx=99) that IWaveSampler objects add to the internal grid. In particular, a SEGY data file identified as output or input adds a simulation axis with idxxx=dim+1 to the internal grid. The loop over this axis increments when any of the keywords sx, sy, or sz changes from one trace to the next, signifying a new shot.
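The shot-detection rule described above can be sketched in a few lines of Python. This is an illustration of the logic only, not IWAVE's actual implementation; the function name and the dictionary-based trace headers are assumptions for the sake of the example.

```python
def count_shots(trace_headers):
    """Count shots in a trace sequence by detecting a change in the
    source-position keywords (sx, sy, sz) between consecutive traces,
    mimicking how the simulation axis increments."""
    shots = 0
    prev = None
    for hdr in trace_headers:
        pos = (hdr["sx"], hdr["sy"], hdr["sz"])
        if pos != prev:  # source position changed: a new shot begins
            shots += 1
            prev = pos
    return shots

# Example: three traces recorded at one source position, then two at another
headers = [
    {"sx": 100, "sy": 0, "sz": 10},
    {"sx": 100, "sy": 0, "sz": 10},
    {"sx": 100, "sy": 0, "sz": 10},
    {"sx": 200, "sy": 0, "sz": 10},
    {"sx": 200, "sy": 0, "sz": 10},
]
print(count_shots(headers))  # prints 2
```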

From the user's point of view, this means that multi-shot simulation is automatic: if multiple shots are part of the output data structure, then all shots will be simulated.

We show several examples that illustrate this data-flow feature of IWAVE. The examples are identical to those reviewed above for single shots, except that the additional computational load of multi-shot simulation suggests the use of parallelism. The parallel features of IWAVE (parallel shots, parallel subdomains, parallel loops) will be the subject of a subsequent report. For now, we note that the partask keyword indicates the number of shots to run in parallel.

The SConstruct files for the several multi-shot simulations include a line (near the top) setting the variable NP. If NP=1, the simulations described below run in serial mode. If NP is set to a value larger than one, that value gives the number of shots to process in parallel, via a collection of MPI communicators. Running several shots in parallel requires that IWAVE be installed with MPI enabled (IWAVE_USE_MPI defined as a compiler parameter; see the README_INSTALL file in the top-level directory).

The number of MPI processes assigned (via mpirun -np) can be fewer than the number of shots to be simulated; in that case the simulations run in batches until all shots are completed. Any unneeded processes at the terminal stage of the simulation are simply left idle, so there is no necessary relation between the number of MPI processes and the number of shots. The SConstruct script in the project directory for this paper uses mpirun -np NP to initiate MPI and assign the number of processes to be used. The follow-on report will describe the use of IWAVE in a batch environment, for both parallelization over shots and domain decomposition.
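The batching behavior described above (more shots than MPI processes) amounts to partitioning the shot indices into groups of at most NP, run one group after another. A minimal sketch, assuming a simple round-robin-free contiguous grouping; the function name is hypothetical and this does not reproduce IWAVE's actual scheduler:

```python
def shot_batches(n_shots, n_procs):
    """Group shot indices into batches of at most n_procs.
    Each batch runs in parallel (one shot per MPI communicator group);
    batches execute sequentially until all shots are completed."""
    return [list(range(start, min(start + n_procs, n_shots)))
            for start in range(0, n_shots, n_procs)]

# 10 shots on 4 processes: three batches of sizes 4, 4, and 2;
# in the last batch two processes are left idle.
batches = shot_batches(10, 4)
print(len(batches))  # prints 3
print(batches[-1])   # prints [8, 9]
```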

The next few examples are large enough that single-threaded execution to completion requires perhaps half an hour on a typical (circa 2014) desktop CPU. The data displayed were obtained on a typical multicore desktop machine, using MPI with NP=6. These results are precisely the same as those obtained with a single process, but required less than 4 minutes of wall time.




2015-04-20