Wave-equation-based seismic processing algorithms have been developed over the years with the aim of handling the 3D, full-wavefield nature of seismic waves. Multi-Dimensional Deconvolution (MDD) is one such algorithm, commonly used to remove overburden-related effects from up/down separated wavefields (e.g., removal of free-surface multiples from ocean-bottom data). However, MDD comes with several computational challenges; this is especially the case for its time-domain implementation, which requires repeated access to terabyte-scale seismic datasets. In this work, we present a novel algorithmic solution that leverages the inherent sparsity of seismic data in the frequency domain by means of tile low-rank data compression. We further rely on so-called Hilbert reordering to boost the compressibility of the dataset under study. Tile Low-Rank Matrix-Vector Multiplication (TLR-MVM) is then introduced to speed up the Multi-Dimensional Convolution (MDC) operator that lies at the core of the MDD algorithm. The presented solution is tested on a realistic 3D seismic dataset modelled from the SEG/EAGE Overthrust model, and the impact of two key parameters of the tile low-rank compression algorithm, namely the tile size and the error accuracy, is thoroughly investigated. Inversion is finally performed using the LSQR solver, with all MDC operations carried out on GPUs. On a cluster with four NVIDIA A100 GPUs, successful deconvolution of a single virtual source is accomplished within two minutes (including I/O). To conclude, the proposed algorithm is deployed on several mainstream hardware architectures, and the associated roofline performance models are presented.
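
The abstract does not include the authors' implementation. As a rough illustration of the Hilbert reordering step, the Python sketch below sorts receivers along a 2D Hilbert curve so that spatially close traces end up in adjacent rows and columns of each frequency slice, which is what tends to boost per-tile compressibility. The function names (`hilbert_index`, `hilbert_reorder`) and the coordinate normalization are illustrative assumptions; only the classic bit-twiddling Hilbert-index construction is standard.

```python
import numpy as np

def hilbert_index(order, x, y):
    """Hilbert-curve index of integer grid point (x, y) on a
    2**order x 2**order grid (classic bit-twiddling construction)."""
    n = 2 ** order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the curve remains continuous.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_reorder(rec_x, rec_y, order=10):
    """Permutation that sorts receivers along a Hilbert curve.

    rec_x, rec_y: receiver coordinates, rescaled here to the integer
    grid [0, 2**order) expected by hilbert_index (an assumption; any
    monotone rescaling would do for a sketch).
    """
    rec_x = np.asarray(rec_x, dtype=float)
    rec_y = np.asarray(rec_y, dtype=float)
    n = 2 ** order
    ix = ((rec_x - rec_x.min()) / (np.ptp(rec_x) or 1.0) * (n - 1)).astype(int)
    iy = ((rec_y - rec_y.min()) / (np.ptp(rec_y) or 1.0) * (n - 1)).astype(int)
    keys = [hilbert_index(order, int(x), int(y)) for x, y in zip(ix, iy)]
    return np.argsort(keys)
```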
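Along the same lines, the following minimal NumPy sketch shows tile low-rank compression and the TLR-MVM kernel, assuming a dense complex frequency slice `A` (rows/columns already Hilbert-reordered) whose side length is divisible by the tile size `nb`. The per-tile truncated SVD with relative tolerance `eps` mirrors the two parameters (tile size, error accuracy) investigated in the paper; the actual GPU-batched implementation is not shown here.

```python
import numpy as np

def tlr_compress(A, nb, eps):
    """Compress each nb-by-nb tile of A with a truncated SVD.

    A tile stored densely costs nb*nb values; its rank-k factors cost
    2*k*nb, so low per-tile ranks translate directly into memory and
    bandwidth savings.
    """
    n = A.shape[0]
    tiles = {}
    for i in range(0, n, nb):
        for j in range(0, n, nb):
            U, s, Vh = np.linalg.svd(A[i:i + nb, j:j + nb],
                                     full_matrices=False)
            # Keep singular values above eps times the tile's largest one.
            k = max(1, int(np.count_nonzero(s > eps * s[0])))
            tiles[(i, j)] = (U[:, :k] * s[:k], Vh[:k])
    return tiles

def tlr_matvec(tiles, x, n, nb):
    """y = A @ x applied tile by tile from the rank-k factors."""
    y = np.zeros(n, dtype=np.complex128)
    for (i, j), (Us, Vh) in tiles.items():
        y[i:i + nb] += Us @ (Vh @ x[j:j + nb])
    return y
```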
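Finally, a hedged sketch of how such a compressed operator could be plugged into LSQR, reusing `tlr_compress`/`tlr_matvec` from the sketch above. This is a simplified frequency-by-frequency view for a single virtual source; the adjoint `tlr_rmatvec`, the use of SciPy's `LinearOperator`, and the solver setup are illustrative assumptions, not the paper's actual time-domain MDD, which couples all frequencies through Fourier transforms inside the GPU-based MDC operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def tlr_rmatvec(tiles, x, n, nb):
    """Adjoint of tlr_matvec: y = A^H @ x from the same tile factors."""
    y = np.zeros(n, dtype=np.complex128)
    for (i, j), (Us, Vh) in tiles.items():
        y[j:j + nb] += Vh.conj().T @ (Us.conj().T @ x[i:i + nb])
    return y

def mdd_one_frequency(tiles, u, n, nb, niter=50):
    """Least-squares solve of D r = u for one frequency slice, where D
    (the down-going wavefield) is held in TLR form and u is the
    up-going data column for one virtual source (hypothetical setup)."""
    Dop = LinearOperator(
        (n, n),
        matvec=lambda v: tlr_matvec(tiles, v, n, nb),
        rmatvec=lambda v: tlr_rmatvec(tiles, v, n, nb),
        dtype=np.complex128,
    )
    return lsqr(Dop, u, iter_lim=niter)[0]
```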
