Abstract
Modern reservoir simulators must provide a realistic physical description of reservoirs, fluids, and hydrocarbon extraction technology while guaranteeing excellent performance and parallel scalability.
In the past, advances in simulation performance were largely limited by the memory throughput of CPU-based computer systems. Recently, a new generation of graphics processing units (GPUs) has become available for general-purpose computing with support for double-precision floating-point operations, which are necessary for dynamic reservoir simulations. The graphics cards currently available on the market contain thousands of computational cores that can be efficiently utilized for simulations.
In this paper, we present, for the first time, results of running a full-physics reservoir simulator on a CPU+GPU platform and discuss the implications of this modern technology for existing reservoir simulation workflows. We describe the challenges of running reservoir simulations on modern CPU+GPU hardware architectures, the solutions we developed to address them, and a proposed methodology for distributing the workload efficiently between the different parts of the system. The approach is tested on several data sets across various computational platforms, including personal computers and clusters, both with and without GPUs.
The technology proposed in this paper demonstrates a multifold speedup for models with a substantial number of active grid blocks. The speedup due to GPU utilization can in some cases reach 3-4 times compared to the traditional CPU-based approach. Considering recent progress in GPU development, this factor is expected to grow in the near future, and the hybrid CPU+GPU approach makes it possible to exploit the potential of this hardware evolution. The results, advances, and potential bottlenecks are discussed, together with a detailed analysis of the performance and the 'value for money' of modern hardware solutions.