Geostatistical algorithms are widely used to integrate different data, such as seismic amplitudes, well logs and core measurements, into reservoir models. However, approaches to integrate dynamic/production data efficiently into these models are largely lacking. Production data differ from static data (such as porosity, permeability, amplitude, etc.) primarily because they are non-linearly related to the connectivity characteristics of the reservoir. In this paper, we develop a gradual deformation methodology to integrate two-phase production data and generate a suite of reservoir models that are conditioned to static as well as dynamic data. We utilize the Sequential Indicator Simulation algorithm within a non-stationary Markov Chain to iteratively update the realizations until a history match is obtained. The methodology is tested on synthetic 2D and 3D reservoirs.


Reservoir flow simulation is used to forecast oil and gas production profiles corresponding to different development scenarios. The reservoir models on which the flow simulations are performed are themselves uncertain due to the sparse information available to construct them. Uncertainty is lowest at the well locations and greatest at locations away from the wells.

Hence, the production profile for a particular development scheme cannot be predicted exactly. Geostatistical simulation algorithms such as Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) are widely used to develop multiple equiprobable reservoir models. Each of these models is conditioned to the available static data, such as seismic, well logs, cores, etc., and the set of reservoir models quantifies the uncertainty stemming from the lack of complete information about the reservoir.
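The idea of multiple equiprobable models conditioned to well data can be illustrated with a small sketch. The snippet below is not the sequential SGS algorithm itself, but an equivalent conditional Gaussian draw on a toy 1D grid: every realization honors the (hypothetical) porosity values at two well locations, while varying between wells. The grid size, covariance model, mean, and well values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D grid of 50 cells; two hypothetical "well" cells with known porosity.
x = np.arange(50, dtype=float)
wells = {5: 0.18, 40: 0.25}

# Assumed exponential covariance model (sill 0.001, range 10 cells).
def cov(a, b):
    return 0.001 * np.exp(-np.abs(a[:, None] - b[None, :]) / 10.0)

obs_x = np.array(list(wells), dtype=float)
obs_v = np.array(list(wells.values()))
mean = 0.20

# Conditional mean and covariance (simple kriging equations).
C_oo = cov(obs_x, obs_x)
C_go = cov(x, obs_x)
W = np.linalg.solve(C_oo, C_go.T).T          # kriging weights
cond_mean = mean + W @ (obs_v - mean)
cond_cov = cov(x, x) - W @ C_go.T

# Draw several equiprobable realizations, all honoring the well data.
# (check_valid='ignore': conditioning makes cond_cov numerically singular.)
reals = rng.multivariate_normal(cond_mean, cond_cov, size=4,
                                check_valid='ignore')
```

Because kriging is an exact interpolator, `reals[:, 5]` and `reals[:, 40]` reproduce the well values in every realization; the spread between realizations grows with distance from the wells, which is exactly the uncertainty the set of models is meant to quantify.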

Historic production data contain valuable information pertaining to the connectivity characteristics of the reservoir. However, constraining reservoir models to measurements of pressure and of oil, gas and water rates at different times is considerably more complicated due to the non-linearity between the dynamic response and the model parameters. The procedure of adjusting a reservoir model so that the production history is reproduced is known as history matching. It is a very time-consuming process and may require several months of work by the reservoir engineer. To alleviate this problem, various attempts have been made to automate the process of history matching.

History matching is an ill-posed problem: the set of parameters that minimizes the deviation from the data is non-unique. Mathematically, the history-matching problem can be posed in an optimization context, i.e., the minimization of a complex least-squares objective function in a parameter space populated by multiple local minima. Two broad approaches for solving the problem are:

  1. Trial and error methods: these run repeated flow simulations on multiple reservoir models and retain only those models that reproduce the historic production characteristics within some acceptable tolerance. They are computationally inefficient.

  2. Gradient-based methods: Gradient-based methods(1,2) capitalize on the nature of the physical relationship between the observed flow response and the model parameters in order to expedite convergence of the optimization process.
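The two approaches above can be contrasted on a toy problem. The sketch below uses a hypothetical one-parameter "simulator" as a stand-in for a full reservoir flow simulator: the least-squares misfit measures the deviation of simulated from observed responses, the trial-and-error branch accepts sampled models within a tolerance, and the gradient-based branch descends the same objective via a finite-difference gradient. The model, tolerance, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flow simulator": nonlinear response of a one-parameter model
# (a hypothetical stand-in for a full reservoir simulator).
def simulate(k):
    return np.exp(-k * np.array([1.0, 2.0, 3.0]))

k_true = 0.7
observed = simulate(k_true)

def misfit(k):
    # Least-squares objective: deviation of simulated from observed data.
    return float(np.sum((simulate(k) - observed) ** 2))

# 1. Trial and error: sample candidate models, keep those within tolerance.
candidates = rng.uniform(0.1, 2.0, size=200)
accepted = [k for k in candidates if misfit(k) < 1e-2]

# 2. Gradient-based: finite-difference gradient descent on the objective.
k, step, eps = 1.5, 0.5, 1e-6
for _ in range(200):
    grad = (misfit(k + eps) - misfit(k - eps)) / (2 * eps)
    k -= step * grad
```

Even on this one-dimensional example the contrast is visible: the rejection loop spends most of its 200 simulations on models it discards, whereas the gradient iteration exploits the slope of the objective to converge toward the data-matching parameter. In a realistic reservoir model with thousands of parameters and multiple local minima, the gradient approach may stall in a local minimum, which motivates methods such as gradual deformation.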
