One of the most challenging subjects in reservoir engineering concerns the conditioning of large reservoir models to the available data: dynamic data in the form of production history and a priori (static) geological or geostatistical knowledge. Streamline simulation has been proposed as an ideal tool for rapid conditioning of large, fine-grid geologic models. We develop a novel procedure that combines a streamline-based history matching method and a geostatistical parameterization technique to estimate the distributions of permeability and porosity in heterogeneous petroleum reservoirs.
An efficient two-step procedure was introduced recently for history matching. It consists of first matching the streamline effective permeabilities to the production data, then mapping the resulting perturbations onto the grid-block permeabilities. This procedure requires only relatively simple post-processing of a single, standard streamline simulation and results in an extremely rapid history match to the production data. However, the mapping step often induces an undesired departure from the a priori geostatistical variability model. To circumvent this drawback, we modify the second step. The "optimal" streamline effective permeabilities obtained at the end of the first step are integrated into a new objective function, which is minimized using the gradual deformation method. This parameterization technique matches the "optimal" streamline effective permeabilities by changing the grid-block permeabilities while at the same time preserving the a priori spatial variability model.
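The two-step idea can be sketched numerically. In the toy model below, the effective permeability seen by one streamline is taken as a length-weighted harmonic average of the block permeabilities it crosses, and the mapping step is reduced to a uniform multiplier on those blocks; both the function names and this simplified weighting are illustrative assumptions, not the exact formulation of the method.

```python
import numpy as np

def streamline_effective_perm(block_perms, segment_lengths):
    """Length-weighted harmonic average of the permeabilities of the
    blocks crossed by one streamline (simplified effective-perm model)."""
    k = np.asarray(block_perms, dtype=float)
    w = np.asarray(segment_lengths, dtype=float)
    return w.sum() / np.sum(w / k)

def map_perturbation(block_perms, k_eff_current, k_eff_target):
    """Naive version of the mapping step: rescale every crossed block by
    the ratio of the target to the current effective permeability."""
    return np.asarray(block_perms, dtype=float) * (k_eff_target / k_eff_current)

k = [100.0, 50.0, 200.0]   # mD, blocks crossed by the streamline (toy values)
L = [10.0, 5.0, 8.0]       # streamline segment lengths in each block
k_eff = streamline_effective_perm(k, L)
# Step 1 would return a target effective permeability; here we ask for +20%.
k_new = map_perturbation(k, k_eff, 1.2 * k_eff)
```

Because the harmonic average is homogeneous of degree one, the rescaled blocks reproduce the target effective permeability exactly; the drawback noted above is that this uniform rescaling ignores the a priori spatial variability model.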
History matching plays a significant role in the prediction of future reservoir performance. Its main purpose is to build a numerical reservoir model consistent with all the available data, such as the geological or geostatistical knowledge as well as the production data (water cuts, flow rates, pressures, etc.).
Essentially, the numerical reservoir model consists of a three-dimensional grid, and each grid block is assigned porosity and permeability values. For practical reasons, the distributions of these properties are usually approximated by Gaussian or Gaussian-related random fields [1,2]. Once a reservoir model is drawn based on the a priori geostatistical knowledge, it has to be modified to account for the production data. This process, known as history matching, is traditionally addressed as an optimization problem. First, an objective function is defined to quantify the mismatch between the actual data and the corresponding responses computed for the numerical reservoir model. Second, the reservoir model is iteratively modified in order to minimize this objective function. Each iteration calls for a forward fluid-flow simulation, which is often very CPU-demanding. An efficient optimization procedure has the following qualities:
it must be capable of modifying large, fine-grid reservoir models;
it must preserve the spatial variability model regardless of the modification; and
it must require a reduced number of forward fluid flow simulations.
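The optimization loop described above rests on a mismatch objective. A minimal sketch of one common choice, a weighted least-squares objective, follows; the data values, weights, and function name are illustrative assumptions, and in practice `d_sim` would come from a forward fluid flow simulation.

```python
import numpy as np

def objective(d_obs, d_sim, sigma):
    """Weighted sum of squared mismatches between the observed production
    data and the responses simulated for the current reservoir model."""
    r = (np.asarray(d_obs, float) - np.asarray(d_sim, float)) / np.asarray(sigma, float)
    return 0.5 * float(np.sum(r ** 2))

d_obs = [0.10, 0.35, 0.60]   # e.g. observed water cuts at three dates
d_sim = [0.12, 0.30, 0.58]   # responses computed for the current model
sigma = [0.02, 0.02, 0.02]   # measurement-error standard deviations
J = objective(d_obs, d_sim, sigma)
```

Each iteration of the history match evaluates this objective on the output of a new forward simulation, which is why reducing the number of simulations (quality three above) matters so much.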
Most of the common optimization procedures in use fall under the general class of gradient methods. These methods involve the computation of the derivatives of the objective function with respect to the unknown parameters. However, for large numbers of parameters, the CPU time for gradient calculations may be prohibitive. Additionally, if the gradient information is used to update the permeability and porosity values simultaneously, the variability models (variograms) characterizing their spatial distributions can be unintentionally distorted. To circumvent these drawbacks, gradient methods are combined with geostatistical parameterization techniques such as the pilot point method or the gradual deformation method [5,6]. Both methods decrease the number of inversion parameters while preserving the spatial variability model, although the pilot point method is liable to numerical artifacts.
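The gradual deformation idea can be illustrated with a toy sketch: combining two independent Gaussian realizations with cos/sin coefficients produces, for any value of a deformation parameter t, a new realization with the same mean and covariance (since cos² + sin² = 1), so the inversion reduces to a search over the single parameter t. The white-noise vectors below are stand-ins for full correlated realizations of the permeability field, which is an assumption of this sketch.

```python
import numpy as np

# Two independent Gaussian realizations (here uncorrelated white noise;
# in practice these would be geostatistical simulations of log-permeability).
rng = np.random.default_rng(0)
n = 10_000
y1 = rng.standard_normal(n)
y2 = rng.standard_normal(n)

def deform(t):
    """Gradual deformation: for every t, cos^2(pi t) + sin^2(pi t) = 1,
    so the combined field keeps the Gaussian mean and covariance model."""
    return np.cos(np.pi * t) * y1 + np.sin(np.pi * t) * y2

# History matching then becomes a 1-D search over t instead of a gradient
# step over all n grid-block values; the sample variance stays near 1.
sample_variances = [deform(t).var() for t in (0.0, 0.25, 0.7)]
```

This is why the method preserves the a priori spatial variability model by construction: every candidate field along the deformation path is a valid realization of the same Gaussian model.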