Abstract

As development activities in heavy oil and in situ bitumen deposits have accelerated, the challenge of forecasting the performance of in situ recovery processes at field scale has grown substantially.

Delineation drilling results make it apparent that these deposits are highly complex and three-dimensionally heterogeneous. Heterogeneity has a significant impact on the effectiveness and economics of the recovery process.

Many experienced operators are recognizing that in addition to the static complexity of the reservoirs it is necessary to consider the dynamic stress state in the regions undergoing production. Geomechanical factors are significant and must be built into any realistic numerical simulation of recovery processes.

It has become apparent to operators that modeling single-wellpair operations may be misleading, and seven- to ten-wellpair models are now quite common.

All these factors result in increasing size and complexity of numerical simulation models.

Reservoir simulation developers have responded with two technologies to achieve reasonable run times in these large and complex models. The combined use of 64-bit symmetric multiprocessor computers and dynamic grid refinement will be discussed and compared against traditional simulation methods.

This paper will provide examples of the application of these leading edge technologies for in situ oil sands development, including the Surmont area of the Athabasca deposit.

Introduction

The investigation discussed in this paper began with a 3D simulation model of a typical 9-wellpair pad, gridded so that the resulting model would fit within the 32-bit environment of desktop PCs, i.e., memory requirements for the model could not exceed 3 GB of RAM.

This model was designated the COARSE grid model. The results of a forecast of reservoir performance with this model were compared to those obtained from a 3D model that was statically gridded to accurately resolve the thermal and flow regimes occurring perpendicular to the well paths, i.e., in the cross-sectional plane. Such a model requires 64-bit address space, as its memory requirements approach 16 GB of RAM. The forecast results and performance of this finely gridded model formed the base for comparison with the results obtained from all other models.

This second model was designated the STATIC FINE grid model, or SF model.

Such large models take a long time to run in serial, or single-processor, mode. We therefore extended the investigation with this model to cover the use of parallel processing, using up to 32 CPUs in a shared memory environment. This approach also took advantage of IBM's Simultaneous Multi-Threading technology.
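The gain from adding CPUs in a shared-memory run is ultimately limited by whatever fraction of the simulation work cannot be parallelized. The paper does not state that fraction; as an illustrative sketch only, Amdahl's law with a hypothetical 95% parallel fraction shows the diminishing returns one would expect approaching 32 CPUs:

```python
def parallel_speedup(n_cpus, parallel_fraction):
    """Amdahl's law: theoretical speedup when the parallelizable
    fraction of the work is spread across n_cpus processors and
    the remainder runs serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# Hypothetical 95%-parallel simulation workload (not a measured value):
# speedup flattens well before the CPU count reaches 32.
for n in (1, 4, 8, 16, 32):
    print(n, round(parallel_speedup(n, 0.95), 2))
```

Actual scaling in a thermal simulator depends on the solver, the grid, and memory bandwidth, so measured speedups would differ from this idealized curve.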

Static gridding of these models is wasteful of resources and time. The fine grid is really only required in areas of the model where there are substantial changes in variables (temperature, saturation, viscosity, flow rate, and pressure, to name a few) over relatively short distances (<5 meters).
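The simulator's actual refinement logic is not described in this paper, but the general idea of dynamic gridding can be sketched as a per-cell test: refine only where a monitored variable changes sharply between neighboring cells, and leave the rest of the model coarse. The variable names and thresholds below are hypothetical:

```python
def needs_refinement(cell, neighbor, thresholds):
    """Flag a grid cell for refinement when any monitored variable
    changes by more than its threshold relative to a neighboring
    cell. Both cells are dicts of variable name -> value."""
    return any(
        abs(cell[var] - neighbor[var]) > tol
        for var, tol in thresholds.items()
    )

# Hypothetical criteria: refine where the temperature jump exceeds
# 20 degrees C or the oil-saturation jump exceeds 0.10.
thresholds = {"temperature": 20.0, "oil_saturation": 0.10}
cell = {"temperature": 210.0, "oil_saturation": 0.15}
neighbor = {"temperature": 180.0, "oil_saturation": 0.12}
print(needs_refinement(cell, neighbor, thresholds))
```

In a real thermal simulator this test runs every few timesteps, so refinement tracks the advancing steam chamber and cells behind the front can be re-coarsened.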

To address this, the technique of dynamic gridding was applied to the SF model and this DYNAMIC FINE gridded model (or DF model) was run in serial (single processor) mode using a 64-bit machine. The run time for this model was compared with that for the SF model, for equivalent forecast results.
