Abstract
Geological models are preferably built on very fine grids in three-dimensional geometry to capture reservoir complexity and heterogeneity. However, simulating such detailed models, which can reach hundreds of millions of cells, is a major challenge for current commercial CPU (Central Processing Unit) based simulators: results cannot be obtained within an acceptable time frame, and a single simulation may last days or weeks.
These long run times delay the delivery of a robust history-matched or production-forecast model because they constrain the ability to effectively characterize the uncertainty range of subsurface parameters and potential development solutions. Restricting the number of alternative scenarios in this manner diminishes the ability to arrive at optimal solutions, which may lead to loss of business opportunity and economic value. The alternatives used by the oil and gas industry so far to reduce simulation run time are either upscaling fine-grid models to a coarser grid, thereby reducing the number of cells, or increasing the number of nodes on a high-performance cluster so that results can be obtained more rapidly through parallel computing with CPU-based simulators. The former solution is limited by reservoir complexity and the latter by the scaling limits of the software.

Recent advances in high-performance technical computing using Graphics Processing Units (GPUs) have generated significant interest in the performance characteristics of GPU-based simulators. This paper demonstrates the performance of a new generation of GPU-based simulator compared with CPU-based simulators. A simulation run time of 9-11 hours on a 4-core CPU-based simulator was reduced to 25-40 minutes on a single-GPU-based simulator, with close agreement between the results.