Proposal
A three-dimensional, three-phase black-oil reservoir simulator has been developed using the Control Volume Finite Element (CVFE) formulation. Traditional CVFE formulations for reservoir simulation are not flux continuous; flux-based upstream weighting is therefore required to ensure flux continuity and solution stability. With this weighting, flux continuity is obtained for all phases, and the solutions are locally and globally mass conservative. In addition, the numerical treatment is no more complicated than that of finite-difference techniques, since numerical integration of the dependent variables over the control volume is not required. It is thus established that the CVFE formulation can be employed for field-wide reservoir simulation as an alternative to finite-difference formulations. A comparison of the results with those from a finite-difference simulator is presented.
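As a rough illustration of the upstream weighting described above, the sketch below evaluates a phase flux across a single CVFE sub-face, taking the phase mobility from the upstream control volume. The data layout (the Face structure and the potential and mobility arrays) is hypothetical and is not taken from the simulator itself; it only shows the weighting rule, not the actual discretization.

    /* Sketch: upstream-weighted phase flux across one CVFE sub-face.
     * The Face structure and the potential/mobility arrays are
     * illustrative placeholders, not the simulator's actual data. */
    typedef struct {
        double trans;   /* geometric transmissibility of the sub-face     */
        int    i, j;    /* indices of the two control volumes it connects */
    } Face;

    /* Phase flux from control volume i to control volume j.  The phase
     * mobility is evaluated at the upstream node, i.e. the node the
     * phase flows out of, chosen by the sign of the potential drop. */
    double phase_flux(const Face *f,
                      const double *potential,  /* phase potential per node */
                      const double *mobility)   /* phase mobility per node  */
    {
        double dphi   = potential[f->i] - potential[f->j];
        double mob_up = (dphi >= 0.0) ? mobility[f->i] : mobility[f->j];
        return f->trans * mob_up * dphi;
    }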
The formulation is fully implicit. State-of-the-art parallel linear solvers are used through coupling with the Portable, Extensible Toolkit for Scientific Computation (PETSc). The simulator is appropriate for reservoirs of complex geometry. The formulation is applicable to anisotropic and heterogeneous domains with full-tensor permeability. Fractures and faults are represented by lines (in two dimensions) and planes (in three dimensions). The difficult task of generating an unstructured mesh for complex domains with faults and fractures is accomplished in this study. Results of two-phase and three-phase simulations in a variety of fractured/faulted and nonfractured domains are presented. These domains are geometrically complex and are not easily represented by traditional finite-difference discretization. Fundamental aspects of fracture-flow mechanics, such as imbibition and water bypassing, are more easily examined with these simulators. Capillary pressure functions in the matrix and fractures are very important in determining recovery behavior, and different combinations of matrix and fracture capillary pressures can be examined with these models. The models were implemented in parallel on an 18-processor Linux cluster (64-bit Opteron chips) using the Message Passing Interface (MPI). A speedup of up to 12 on 16 processors was observed.
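The PETSc coupling mentioned above can be pictured with the minimal sketch below, which assembles a small distributed matrix and solves it with a Krylov solver (KSP) over MPI. The matrix here is a toy one-dimensional Laplacian standing in for the fully implicit Jacobian, not the simulator's actual system, and error checking is omitted for brevity.

    /* Minimal PETSc/MPI sketch: assemble a toy tridiagonal system in
     * parallel and solve it with a Krylov solver.  The matrix stands in
     * for the fully implicit Jacobian; it is not the simulator's system. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
        Mat A; Vec x, b; KSP ksp;
        PetscInt n = 100, Istart, Iend, row;

        PetscInitialize(&argc, &argv, NULL, NULL);

        MatCreate(PETSC_COMM_WORLD, &A);
        MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
        MatSetFromOptions(A);
        MatSetUp(A);
        MatGetOwnershipRange(A, &Istart, &Iend);
        for (row = Istart; row < Iend; row++) {   /* 1-D Laplacian stencil */
            if (row > 0)     MatSetValue(A, row, row - 1, -1.0, INSERT_VALUES);
            if (row < n - 1) MatSetValue(A, row, row + 1, -1.0, INSERT_VALUES);
            MatSetValue(A, row, row, 2.0, INSERT_VALUES);
        }
        MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

        VecCreate(PETSC_COMM_WORLD, &b);
        VecSetSizes(b, PETSC_DECIDE, n);
        VecSetFromOptions(b);
        VecDuplicate(b, &x);
        VecSet(b, 1.0);

        KSPCreate(PETSC_COMM_WORLD, &ksp);
        KSPSetOperators(ksp, A, A);   /* two-argument form; older releases add a flag */
        KSPSetFromOptions(ksp);       /* solver/preconditioner chosen at run time */
        KSPSolve(ksp, b, x);

        KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
        PetscFinalize();
        return 0;
    }

Such a program would typically be launched with, for example, mpiexec -n 16 ./solve -ksp_type gmres -pc_type bjacobi, so that the Krylov method and preconditioner are selected from the command line rather than hard-coded, which is the usual PETSc pattern.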