The increasing complexity of reservoir simulation models continues to drive up computing requirements. In addition, coupled geomechanical simulation typically requires an order of magnitude more computer time and quickly becomes impractical on serial hardware. This paper describes the architecture, startup experience, and results of parallelization efforts for conventional reservoir simulation and coupled geomechanical simulation on a WIN32 cluster at the EPSL lab at the U. of Calgary. The WIN32 architecture was chosen in preference to the much more established Unix (Linux) operating systems. Initial testing has shown that performance can differ significantly depending on the hardware setup. Testing of the commercial reservoir software demonstrates that parallelization is still far from a mature option: the Eclipse parallel option can perform very poorly on certain problems, with performance affected by factors including reservoir connectivity and well trajectories. The parallelization of the geomechanical code used the PETSc solver library, which showed promising performance gains. Additional improvement was obtained by parallelizing the matrix-building code, which indicates that further gains are possible. The current coupled GEOSIM code achieved an order-of-magnitude speedup on realistic field examples of reservoir compaction. Future work will examine the use of 64-bit hardware.
The extension of reservoir simulators to include more and better physics results in significantly greater demands on both computer speed and memory. In particular, the coupling of geomechanical stress-strain computation introduces a completely separate second grid, which is usually larger in all three dimensions. Since the CPU time required for matrix solutions grows geometrically with the number of unknowns, more time is spent solving the geomechanics portion of the problem than the reservoir flow portion. When nonlinearity is taken into account, by iterating between the reservoir and geomechanics solutions on each time step, the computation time can easily increase by an order of magnitude over the time to solve the reservoir flow only. In the not-too-distant past, the only way to solve such problems in a practical amount of time was by the use of very expensive supercomputers. But the situation has improved considerably with the advent of Beowulf computational clusters.
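The per-time-step iteration between the flow and geomechanics solutions described above can be sketched as a simple fixed-point loop. This is an illustrative sketch only: the function names, scalar state variables, and convergence test are assumptions for demonstration, not the actual GEOSIM coupling code.

```python
# Hedged sketch of sequential (iterative) coupling within one time step:
# alternate between the reservoir flow solve and the geomechanics solve
# until the two solutions stop feeding back into each other significantly.
# Scalars stand in for the full pressure and stress fields.

def coupled_time_step(pressure, stress, solve_flow, solve_geomech,
                      tol=1e-6, max_iters=50):
    """Iterate flow and geomechanics solves to a self-consistent state.

    solve_flow(p, s)     -> new pressure given current stress (flow grid)
    solve_geomech(p, s)  -> new stress given updated pressure (larger
                            geomechanics grid; typically the costlier solve)
    """
    for _ in range(max_iters):
        new_pressure = solve_flow(pressure, stress)
        new_stress = solve_geomech(new_pressure, stress)
        # Converged when neither field changed appreciably this pass.
        if (abs(new_pressure - pressure) < tol
                and abs(new_stress - stress) < tol):
            return new_pressure, new_stress
        pressure, stress = new_pressure, new_stress
    return pressure, stress  # return best estimate if not converged
```

Each pass through the loop costs one flow solve plus one (larger) geomechanics solve, which is why the iteratively coupled problem can run an order of magnitude longer than the flow-only problem.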
The advantages of computational clusters for doing numerical simulation have been well established1,2. The use of commodity, off-the-shelf components yields a tremendous cost-performance benefit, with performance that was achievable in the past only through proprietary, expensive hardware. Provided that the problem being modeled and the software allow it to be distributed over a number of processors (each with its own memory), the time to run a simulation is reduced. This time reduction means that larger problems can be run (in terms of more timesteps, or smaller timesteps). In addition, the distribution of the memory requirements means that larger grid systems can also be tackled. This is particularly important for coupled geomechanical simulation because the size of coupled problems that can currently be simulated on serial hardware lags behind uncoupled simulation (where models of up to a million cells are becoming routine).
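The memory-distribution point above can be made concrete with a back-of-envelope estimate. The bytes-per-cell figure and the halo-overhead factor below are illustrative assumptions, not numbers from this paper.

```python
# Hedged back-of-envelope: memory each cluster node must hold when a grid
# is split evenly across nodes. The 2 kB/cell and ~10% ghost-cell (halo)
# overhead are assumed values for illustration only.

def per_node_memory_gb(n_cells, bytes_per_cell, n_nodes, halo_overhead=1.1):
    """Approximate per-node memory (GB) under even domain decomposition."""
    return n_cells * bytes_per_cell * halo_overhead / n_nodes / 1e9

# A one-million-cell model at an assumed 2 kB per cell needs roughly
# 2.2 GB on a single machine, but only ~0.14 GB per node on 16 nodes,
# which is why distributed memory lets clusters tackle larger grids.
serial = per_node_memory_gb(1_000_000, 2000, n_nodes=1)
split = per_node_memory_gb(1_000_000, 2000, n_nodes=16)
```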