Every simulation engineer wishes to simulate large full-field models, but historically, reservoir simulation of the steam-assisted-gravity-drainage (SAGD) process has been constrained to models ranging from a single well up to a single pad. Models of this size provide valuable information and have helped to assess the development potential of reservoirs. They may be used for reservoir management and to support decision making on initial completion design, operating strategy, and multipad wind-down evaluations, as well as to qualitatively assess the uncertainty in the SAGD forecast under different geological settings. However, in many cases we are left with the question of how multiwell and multipad communication ultimately affects performance at the well-pair scale. Because of constraints in computer hardware and simulation technology, running extremely large multipad models has until recently been largely impractical, especially when trying to run multiple scenarios to better understand the impact of geological and operational uncertainty.

In this paper, we present a new and practical workflow that makes running extremely large multipad, multimillion-grid-cell SAGD models a reality. The three major steps of the workflow are (1) generating simulation-friendly geomodels, (2) using experimental design and 3D submodels, evaluated with a SAGD performance index (SPI), for numerical tuning, and (3) using 2D cross sections and the SPI to develop dynamic grid-refinement-parameter values for the full 3D model. All three steps are intended to improve the numerical stability and run time of multipad SAGD simulation models. A 24-SAGD-well-pair model with 2.52 million gridblocks was simulated for a 10-year forecast. The reservoir is geologically complex and highly heterogeneous. We discuss some of the important aspects that need to be accounted for when simulating large-scale SAGD models.
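The submodel-based numerical tuning of step 2 can be sketched as follows. This is a minimal illustration, assuming a simple full-factorial design over hypothetical numerical controls and a placeholder SPI; the paper's actual SPI definition, tuning parameters, and simulator interface are not given in this abstract, so every name and value below is an assumption.

```python
# Hedged sketch of workflow step 2: a full-factorial experimental design
# over numerical tuning parameters, evaluated on small 3D submodels.
# All names and values here (design_space levels, run_submodel, the SPI
# formula) are hypothetical stand-ins, not the paper's actual method.
from itertools import product

# Candidate numerical-control levels (illustrative values only).
design_space = {
    "max_timestep_days": [1.0, 5.0, 15.0],
    "newton_tolerance": [1e-3, 1e-4],
    "max_newton_iters": [8, 12],
}

def run_submodel(params):
    """Placeholder for one 3D submodel simulation run.

    A real implementation would launch the simulator with these controls
    and return the measured run time plus the SAGD performance index (SPI)
    of the run. Dummy formulas keep this sketch self-contained.
    """
    runtime_hours = params["max_newton_iters"] / params["max_timestep_days"]
    spi = 1.0 - abs(params["newton_tolerance"] - 1e-4) * 1e3
    return runtime_hours, spi

def tune(design_space, spi_floor=0.5):
    """Return the fastest parameter set whose SPI stays acceptable."""
    best = None
    keys = list(design_space)
    for levels in product(*(design_space[k] for k in keys)):
        params = dict(zip(keys, levels))
        runtime, spi = run_submodel(params)
        if spi >= spi_floor and (best is None or runtime < best[0]):
            best = (runtime, spi, params)
    return best

runtime, spi, params = tune(design_space)
```

The same loop structure would apply to step 3, with 2D cross sections in place of 3D submodels and dynamic grid-refinement parameters in place of the solver controls.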
Using this new workflow, the simulation run time was reduced from 42 days to 7 days on eight central processing units (CPUs), a sixfold speedup. The resulting run time is short enough to facilitate simultaneous multirealization runs on eight CPUs, thus maximizing throughput and minimizing the simulation cycle time. This new workflow can be easily replicated and, more importantly, automated to reduce engineering time requirements. Although this paper focuses on the SAGD process, the methodology is completely generic and can be applied to any large data set for any process. Details will differ depending on the process, but the workflow remains the same.
