Based on inverse solution theory, an efficient solution procedure has been developed to generate reservoir descriptions conditioned to statistics for rock properties, hard data, and dynamic data. The technique yields realizations of porosity, permeability, and wellbore skin factors at each active well that honor both a priori information and dynamic production data. It invokes inverse solution theory to construct the objective function and uses a gradient-based method to generate the maximum a posteriori estimates.

In contrast to previous work, we derived and implemented a two-loop iteration method to perform the minimization. By using Krylov-subspace methods to solve the linear system arising at each nonlinear iteration, the explicit construction of the sensitivity coefficient matrix is avoided. Complexity analysis and computational results indicate that the new algorithm is more efficient than several available methods because it reduces the number of nonlinear iterations required for convergence.
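The two-loop structure can be illustrated with a small sketch: an outer Gauss-Newton loop over the nonlinear model update, and an inner conjugate-gradient (Krylov) loop that solves each linearized system through matrix-vector products only, so the Gauss-Newton Hessian and the sensitivity matrix are never assembled. The forward model `g` below is a hypothetical two-parameter toy standing in for a reservoir simulator, and the variable names (`Cm_inv`, `Cd_inv`, etc.) are illustrative, not the paper's notation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy forward model standing in for the reservoir simulator; its
# Jacobian-vector products play the role of the sensitivity-coefficient
# products that would otherwise require forming the full matrix.
def g(m):
    return np.array([m[0] ** 2 + m[1], m[0] * m[1] ** 2])

def jacobian(m):
    return np.array([[2.0 * m[0], 1.0],
                     [m[1] ** 2, 2.0 * m[0] * m[1]]])

def gauss_newton_two_loop(m0, d_obs, m_prior, Cm_inv, Cd_inv,
                          n_outer=50, tol=1e-10):
    """Minimize the Bayesian objective with an outer Gauss-Newton loop
    and an inner Krylov (conjugate-gradient) loop; the Gauss-Newton
    Hessian is applied matrix-free, never formed explicitly."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_outer):                      # outer: nonlinear loop
        G = jacobian(m)
        r = g(m) - d_obs
        grad = Cm_inv @ (m - m_prior) + G.T @ (Cd_inv @ r)
        # H v = Cm_inv v + G^T Cd_inv G v, supplied as a matvec only.
        H = LinearOperator(
            (m.size, m.size),
            matvec=lambda v: Cm_inv @ v + G.T @ (Cd_inv @ (G @ v)))
        dm, _ = cg(H, -grad, atol=1e-12)          # inner: linear Krylov loop
        m = m + dm
        if np.linalg.norm(dm) < tol:              # nonlinear convergence
            break
    return m
```

In a real application the dense `jacobian` would be replaced by adjoint or forward simulator runs that return products with `G` and `G.T`, which is precisely what makes the Krylov formulation attractive.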

We also developed a modified procedure for computing realizations using a Chebyshev approximation of the decomposition of the a posteriori covariance matrix. In this way, the expensive cost of constructing and decomposing the a posteriori covariance matrix explicitly is substantially reduced. When estimating multiple categories of parameters, as in this study, the new procedure produces much more accurate results than the conventional Chebyshev method.
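The underlying idea of Chebyshev-based sampling can be sketched generically: approximate the action of the covariance square root, C^{1/2} z, by a Chebyshev polynomial fit of sqrt(x) over the spectral interval of C, evaluated with matrix-vector products only, so C is never factorized. This is a minimal illustration of the conventional matrix-function approach, not the authors' modified multi-parameter procedure; the function name and defaults are assumptions.

```python
import numpy as np

def chebyshev_sqrt_apply(C, z, degree=30, lam_min=None, lam_max=None):
    """Approximate C^{1/2} z with a Chebyshev interpolant of sqrt(x)
    on the spectral interval of C, using only products with C."""
    if lam_min is None or lam_max is None:
        w = np.linalg.eigvalsh(C)        # in practice: cheap Lanczos bounds
        lam_min, lam_max = w[0], w[-1]
    # Chebyshev interpolant of sqrt on [lam_min, lam_max].
    series = np.polynomial.chebyshev.Chebyshev.interpolate(
        np.sqrt, degree, domain=[lam_min, lam_max])
    c = series.coef
    half, mid = (lam_max - lam_min) / 2.0, (lam_max + lam_min) / 2.0
    A = lambda v: (C @ v - mid * v) / half   # spectrum mapped to [-1, 1]
    # Three-term Chebyshev recurrence: T_{k+1} = 2 A T_k - T_{k-1}.
    t_prev, t_curr = z, A(z)
    y = c[0] * t_prev + c[1] * t_curr
    for k in range(2, degree + 1):
        t_prev, t_curr = t_curr, 2.0 * A(t_curr) - t_prev
        y = y + c[k] * t_curr
    return y
```

A realization is then drawn as m = m_map + chebyshev_sqrt_apply(C_post, z) with z sampled from a standard normal, where C_post denotes the a posteriori covariance matrix.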


The most common and probably the most challenging task in managing a mature reservoir is to implement feasible improved oil recovery (IOR) techniques effectively and efficiently and to assess their impact on the future performance of the reservoir. To this end, an accurate reservoir description is essential. The description should include at least the major localized discontinuities, such as faults, fractures, and bounding surfaces, as well as facies distributions, rock-property distributions within the facies, and rock-fluid properties. Because of the multiple scales of heterogeneity inherent to petroleum reservoirs, different production processes may be sensitive to different scales of heterogeneity. Strictly speaking, therefore, an infinite-dimensional model space would be needed to describe the real reservoir adequately. In practice, the best we can do is to generate an equivalent description of the reservoir at a scale suited to the production process of interest. In other words, we partition the reservoir into an appropriate gridblock system and then search for the discrete reservoir model corresponding to that partition.

Because of technological limitations and expense, generating a reservoir description by sampling the rock properties of interest throughout the entire reservoir domain at a suitable scale is impractical. We are therefore forced to transfer to model space, via theoretical correlations, both the prior information and the information carried in indirect data sets. By definition, such a process is an inverse procedure.

The information is usually divided into two classes: prior information and dynamic information. The former includes all phenomenological information and static data; the latter includes production data, pressure-transient data, tracer-test data, etc.

Prior information plays an important role in inverting reservoir properties from observed data: it is a mathematical necessity for reducing the dimension of the model space and regularizing the ill-posed problem, and it is required for geological and logical consistency. Nevertheless, a reservoir description generated with the conventional kriging technique from sparsely distributed prior information alone is usually an unrealistically smooth version of the "true" reservoir model with high uncertainty. Flow performance predicted from such descriptions usually cannot honor the production history of the reservoir.

On the other hand, generating a reservoir description purely from dynamic data (referred to as "history matching" in the petroleum literature) suffers from instability and nonuniqueness. Moreover, the resulting description usually violates the geological description and yields unreliable predictions of future reservoir performance. Intuitively, then, conditioning the model to both categories of information simultaneously should reduce the nonuniqueness and uncertainty inherent in the resulting reservoir description.

The fundamental concepts and methodology to this end were introduced as inverse solution theory by Tarantola in his book1 and a series of papers.2–4 If the estimated modeling errors in the data space are Gaussian, the prior model parameters follow a multinormal distribution, and the data measurement errors can be approximated as Gaussian random variables with zero mean and known variances, then inverse solution theory provides a means to incorporate the prior information and the dynamic data rigorously into an objective function constructed from Bayes's theorem. The maximum a posteriori estimates are obtained by minimizing the resulting objective function, and realizations are obtained by decomposing the a posteriori covariance matrix.
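Under these Gaussian assumptions, the objective function takes the standard form below (this is the well-known Bayesian expression; here m_prior is the prior mean, C_M and C_D the model and data covariance matrices, g the forward model, and d_obs the observed data):

```latex
O(m) = \tfrac{1}{2}\,(m - m_{\mathrm{prior}})^{T} C_{M}^{-1} (m - m_{\mathrm{prior}})
     + \tfrac{1}{2}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)^{T} C_{D}^{-1} \bigl(g(m) - d_{\mathrm{obs}}\bigr)
```

The maximum a posteriori estimate is the model m that minimizes O(m); the first term penalizes departures from the prior and the second penalizes data misfit.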

It is worth noting that both the maximum a posteriori estimates and the realizations of the model have practical applications. The most probable model is instructive for implementing IOR techniques (for instance, for locating infill wells), whereas realizations are valuable for assessing the uncertainty of the reservoir flow performance under such implementations.

Oliver5 initiated and explored the application of inverse solution theory to reservoir characterization. He presented a procedure to generate maximum a posteriori estimates and realizations of one-dimensional porosity and one- and two-dimensional permeability distributions by incorporating a priori information, hard data, and pressure-transient data. He also developed a method to incorporate a single category of hard data (namely, permeability or porosity measurements) within the maximum-likelihood estimation framework. Oliver generated the maximum a posteriori model with the Gauss-Newton method and constructed the sensitivity coefficient matrix by Carter's method.6
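The Gauss-Newton update for an objective of this Bayesian form is standard; writing G_k for the sensitivity coefficient matrix at iterate m_k, m_prior for the prior mean, C_M and C_D for the model and data covariance matrices, g for the forward model, and d_obs for the observed data, one step reads:

```latex
m_{k+1} = m_{k} - \bigl(C_{M}^{-1} + G_{k}^{T} C_{D}^{-1} G_{k}\bigr)^{-1}
          \Bigl[\, C_{M}^{-1}\bigl(m_{k} - m_{\mathrm{prior}}\bigr)
               + G_{k}^{T} C_{D}^{-1}\bigl(g(m_{k}) - d_{\mathrm{obs}}\bigr) \Bigr]
```

Forming G_k explicitly (for example, by Carter's method) is the expensive step that Krylov-based formulations replace with sensitivity-matrix-vector products.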
