One way to reduce uncertainties in reservoir simulation is to use geostatistical fine grids with little or no upscaling, but this approach can lead to excessively large models with millions of gridblocks. In compositional studies, another way is to increase the number of components. Running such large simulations requires powerful computers with vector or parallel architectures.

In general, existing simulators must be restructured before parallel computers can be used efficiently. Reengineering and rewriting the source code of a multi-purpose simulator is time-consuming and costly. Previous papers have described the use of MPI (Message Passing Interface) to enable parallel processing. This approach is delicate to manage, as it requires incremental reengineering of the code, which is difficult to perform.

We present our experience of parallelizing an existing all-purpose reservoir simulator for shared-memory platforms using OpenMP directives. The paper describes the methodology of parallelization in three incremental steps: first, identifying the most CPU-intensive routines; second, parallelizing these routines; and finally, developing efficient dedicated parallel preconditioned solvers. It is shown that this approach is easier to manage than the MPI approach while being equally efficient on shared-memory parallel architectures such as the SGI Origin2000, IBM SP3, and Compaq ES40.

Several large simulations, typically of one million gridblocks, are presented to show the capabilities of the ATHOS simulator on these parallel architectures. Simulations carried out on the NEC SX5 are also presented for comparison with a vector computer. These simulations exercise the main physical features of the ATHOS simulator used for black-oil, compositional, and fractured reservoir studies.
