Summary

This research aims to optimally update geological models by jointly inverting flow and seismic data while honoring geologic spatial continuity. Numerical models for reservoir characterization are increasing in complexity, due in part to the greater need to model complex spatial heterogeneity and fluid flow in the subsurface. Once properly calibrated, these models can make better forecasts. The calibration process requires, in essence, solving an inverse problem: the inversion is formulated as minimizing the mismatch between observations and the output of the numerical models. The optimal search is carried out by adjusting model parameters, typically one or more per grid point of the reservoir. The resulting optimization problem is large-scale, with a nonlinear and nonconvex objective function, and often involves computationally expensive simulations. It is also generally ill-conditioned, because the number of degrees of freedom is usually larger than the number of observations.

We present a robust and fairly efficient methodology to deal with these difficulties in the framework of oil reservoir characterization. The ill-conditioned character of the optimal search can be attenuated in two ways. First, Principal Component Analysis (PCA) projects the search space onto a subspace of much smaller dimension while preserving consistency with the prior spatial geological features already known for the reservoir. Second, the number of possible optimal solutions can be further reduced by increasing the diversity of the observed data. We integrate two different types of data: time-lapse seismic data (spatially distributed and of lower temporal periodicity) and production data (localized around wells and of high temporal periodicity). Production data provide an integrated response of the reservoir to fluid flow, while time-lapse seismic data yield a spatially distributed characterization of the changes in elastic velocities due to saturation and pressure variations.

The reduction in the number of optimization variables achieved by PCA allows the use of numerical derivatives of the cost function, and within a distributed computing framework these approximate derivatives can be calculated efficiently. We also consider derivative-free algorithms. We illustrate the methodology on a sector of the Stanford VI synthetic reservoir, created for testing algorithms.
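
The following Python sketch is a minimal illustration of the ingredients described above (a PCA reparameterization of the grid-point model, a weighted joint misfit of production and time-lapse seismic data, and a finite-difference gradient whose perturbed evaluations can be distributed); it is not the implementation used in this work. The ensemble prior_realizations and the forward models simulate_production and simulate_seismic are hypothetical placeholders.

```python
# Sketch only: PCA reparameterization and joint production/seismic misfit.
# prior_realizations, simulate_production, and simulate_seismic are
# hypothetical placeholders for a geostatistical ensemble and forward models.

import numpy as np

def build_pca_basis(prior_realizations, n_components):
    """PCA of an ensemble of prior model realizations.

    prior_realizations: array of shape (n_realizations, n_gridpoints),
    each row one geostatistically simulated property field (e.g. porosity).
    Returns the ensemble mean, the leading principal directions, and scales.
    """
    mean = prior_realizations.mean(axis=0)
    centered = prior_realizations - mean
    # Economy-size SVD: rows are realizations, columns are grid points.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components].T          # (n_gridpoints, n_components)
    scales = singular_values[:n_components] / np.sqrt(len(prior_realizations) - 1)
    return mean, basis, scales

def reconstruct_model(coeffs, mean, basis, scales):
    """Map low-dimensional PCA coefficients back to a full grid-point field."""
    return mean + basis @ (scales * coeffs)

def joint_misfit(coeffs, mean, basis, scales,
                 d_prod_obs, d_seis_obs,
                 simulate_production, simulate_seismic,
                 w_prod=1.0, w_seis=1.0):
    """Weighted least-squares mismatch of production and time-lapse seismic data."""
    model = reconstruct_model(coeffs, mean, basis, scales)
    r_prod = simulate_production(model) - d_prod_obs
    r_seis = simulate_seismic(model) - d_seis_obs
    return w_prod * np.sum(r_prod ** 2) + w_seis * np.sum(r_seis ** 2)

def numerical_gradient(f, x, eps=1e-3):
    """Forward-difference gradient; the perturbed evaluations are independent
    and can be farmed out to a distributed computing environment."""
    f0 = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        grad[i] = (f(x_pert) - f0) / eps
    return grad
```

A derivative-free optimizer would simply evaluate joint_misfit on the low-dimensional PCA coefficients, whereas a gradient-based method can use numerical_gradient, with the independent perturbed simulations distributed across compute nodes.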

Introduction

Reservoir management requires efficient updating of the reservoir model as new data are obtained. These data can have disparate spatial distributions and temporal periodicities, such as production data and time-lapse seismic data. Huang et al. (1998) formulate the simultaneous matching of production and seismic data as an optimization problem; in their scheme, however, the updating of model parameters (such as porosity) is not guaranteed to honor the spatial statistics of the reservoir. Sarma et al. (2006) present an implementation of the model-updating approach that maintains spatial statistics using production data. However, their procedure is invasive with respect to the flow simulator (and thus not straightforward to implement) and is not robust against being trapped in one of the multiple possible solutions. Walker and Lane (2007) present a case study that incorporates time-lapse seismic data into the production history-matching process and show how seismic monitoring can improve reservoir prediction.
