Recently, the need for small-scale grid blocks in numerical reservoir simulation models has grown remarkably, in order to reproduce detailed geological models that realistically describe reservoir heterogeneity. Advances in computer technology have assisted in constructing these large-scale models, and models with over a million cells are no longer unusual. Faster processors alone, however, cannot meet this demand, and we still face a serious problem: how to reduce execution time, which remains a severe operational constraint on reservoir simulation studies.

Parallel computers equipped with multiple processors have been developed to address this problem. Several parallelized commercial reservoir simulators are currently available, but they may not be fully utilized in the petroleum industry outside of specific research. The study described in this paper was carried out to investigate, from the user's point of view, the features of current commercial parallel computers and parallelized simulators, the points to note in their use, and some guidelines for parallel computing.

We conducted benchmark tests of parallel computing with relatively large-scale actual models built for black-oil and compositional simulations (100,000 active cells, with a variety of grid-block sizes). In comparing the computational results, we focused on the speed-up of execution time, the consistency between serial and parallel computing, and the influence of domain decomposition. We used parallel computers (UNIX™-based workstations) and parallelized commercial simulators.

The results of the benchmark tests indicated that an optimal number of processors exists in relation to the computational load. They also showed how domain decomposition and the handling of irregularly sized grid blocks affect speed-up scalability.
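The existence of an optimal processor count can be illustrated with a simple Amdahl-style performance model in which communication overhead grows with the number of processors. This is an illustrative sketch, not the model used in the benchmark tests; the parallel fraction and per-processor overhead values below are assumed purely for demonstration.

```python
def speedup(p, parallel_frac=0.95, comm_overhead=0.01):
    """Estimated speed-up on p processors for a job whose parallelizable
    fraction is parallel_frac, with a communication cost that grows
    linearly with the processor count (both values are assumptions)."""
    serial_time = 1.0
    parallel_time = ((1.0 - parallel_frac)
                     + parallel_frac / p
                     + comm_overhead * (p - 1))
    return serial_time / parallel_time

# Beyond some processor count, the added communication outweighs the
# reduced per-processor work, so speed-up peaks and then declines.
best_p = max(range(1, 65), key=speedup)
```

With these assumed parameters the peak falls near ten processors; the point is qualitative: adding processors past the peak makes the run slower, matching the observation that an optimal processor count depends on the computational load.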


Over the last decade, commercial RISC-based workstations have specialized into two types: Symmetric Multi-Processor (SMP) and Massively Parallel Processor (MPP) machines. SMP systems, with relatively low parallelism (roughly up to 10² CPUs), aim at a favorable price/performance ratio, while MPP systems, with high parallelism, focus on high scalability. High Performance Computing (HPC) using these relatively low-priced parallel computers has become popular for multipurpose machines such as high-end and network servers. For example, SGI and Sun Microsystems provide SMPs with shared-memory and Distributed Shared Memory (DSM) MIMD architectures, respectively. IBM provides an MPP with a distributed-memory MIMD architecture that uses SMPs as nodes connected by high-performance switches. Furthermore, network-connected PC clusters are catching up with RISC-based workstations and have recently been used by a commercial reservoir simulator [1].

Although parallelization of reservoir simulation, which deals with very complex and dynamic data, has been studied since the 1980s [2,3], it is only within the past two or three years that commercial reservoir simulators have reached a practical level of parallelization. One reason may be that parallel computers, originally expensive, have become cheaper as HPC workstations since the end of the 1980s. The main reason, however, may be the several difficulties encountered in parallelization. Killough et al. summarized these difficulties and pointed out the following items [3,4].

  1. The recursive character of existing linear solution techniques for serial computing is not easily adaptable to a massively parallel architecture.

  2. Optimization of the group control algorithm in massively parallel processing may lead to a severe bottleneck.

  3. The load imbalance generated by unevenly distributed data is very difficult to mitigate.
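The load-imbalance difficulty in item 3 can be seen in a small sketch: a naive decomposition into equal row counts loads processors unevenly when active cells cluster in part of the grid, whereas a work-aware contiguous split evens the load. The grid dimensions and active-cell counts here are hypothetical, chosen only to illustrate the effect.

```python
def imbalance(loads):
    """Max load over mean load; 1.0 means perfect balance."""
    return max(loads) / (sum(loads) / len(loads))

def equal_row_split(work, nprocs):
    """Contiguous chunks with equal ROW counts, ignoring per-row work."""
    base, extra = divmod(len(work), nprocs)
    loads, i = [], 0
    for p in range(nprocs):
        size = base + (1 if p < extra else 0)
        loads.append(sum(work[i:i + size]))
        i += size
    return loads

def greedy_split(work, nprocs):
    """Contiguous chunks sized so each approaches total/nprocs WORK."""
    target = sum(work) / nprocs
    loads, acc = [], 0
    for w in work:
        acc += w
        if acc >= target and len(loads) < nprocs - 1:
            loads.append(acc)
            acc = 0
    loads.append(acc)
    return loads

# Hypothetical grid: 100 rows, densely active in the first half.
work = [100] * 50 + [20] * 50      # active cells per row
naive = equal_row_split(work, 4)   # half the processors get 5x the work
balanced = greedy_split(work, 4)   # contiguous split by work, not rows
```

In this toy case the naive split leaves two processors nearly idle while the work-aware split balances all four, which is why decomposition quality, not just processor count, governs parallel efficiency.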
