Optimization algorithms that rely on a stochastic gradient (SPSA, simplex gradient, EnOpt) are easy to implement in conjunction with any reservoir simulator. However, for realistic problems, a stochastic gradient provides only a rough approximation of the true gradient; in particular, the angle between a stochastic gradient and the associated true gradient is typically far from zero, even though a properly computed stochastic gradient usually represents an uphill direction.
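As a minimal sketch of this behavior (not the paper's implementation), the snippet below forms a standard SPSA gradient estimate for a simple quadratic test function with a known analytic gradient, then measures the angle between the estimate and the true gradient. The test function, dimensions, and step sizes are illustrative choices, not values from the paper.

```python
import numpy as np

def spsa_gradient(f, x, c=1e-2, rng=None):
    """One SPSA gradient estimate: a single simultaneous +-1 perturbation,
    so only two function evaluations are needed regardless of dimension."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.size)   # Bernoulli +-1 perturbation
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative quadratic f(x) = sum_i A_i x_i^2 with known gradient 2*A*x.
A = np.linspace(1.0, 20.0, 50)
f = lambda x: float(np.sum(A * x**2))
x = np.ones(50)
g_true = 2.0 * A * x

g = spsa_gradient(f, x, rng=np.random.default_rng(1))
# g is an ascent direction (positive inner product with g_true),
# yet its angle to the true gradient is far from zero.
```

For this quadratic the inner product of the SPSA estimate with the true gradient is a squared quantity, so the estimate is always uphill, while the angle between the two vectors stays large as the dimension grows.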

This paper develops a more robust optimization procedure by replacing the largest-magnitude components of the stochastic gradient with finite-difference approximations of the pertinent partial derivatives. Specifically, after ranking the components of the stochastic gradient by magnitude, we replace them, in this order, with finite-difference approximations until we obtain a modified stochastic gradient that points approximately in the same direction as the unknown true gradient. The viability of the method relies on a theoretical result that allows us to estimate the angle between the modified stochastic gradient and the true gradient without knowledge of the true gradient. With this simple procedure, one can continue replacing components of the stochastic gradient with finite-difference approximations until the angle between the modified stochastic gradient and the unknown true gradient is sufficiently small. Because the stochastic gradient is only approximate, its n largest-magnitude components do not exactly correspond to the n largest-magnitude components of the true gradient. However, a second theoretical result allows us to determine, approximately, whether a dominant component of the stochastic gradient corresponds to a dominant component of the true gradient. If this correspondence does not hold, one option is to compute a more accurate stochastic gradient. In essence, the objective of the method is to identify the most important components of the unknown true gradient and to replace the corresponding components of the stochastic gradient with more accurate finite-difference approximations. The modified gradient can then be used in a gradient-based optimization algorithm to find the minimum or maximum of a given cost function.
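The replacement step described above can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's procedure: the angle-estimation criterion from the theoretical results is omitted, the number of components to replace is fixed in advance (`k`), and the "stochastic" gradient is simulated as the true gradient of a toy quadratic corrupted by noise.

```python
import numpy as np

def refine_gradient(f, x, g_stoch, k, h=1e-6):
    """Replace the k largest-magnitude components of a stochastic gradient
    estimate with central finite-difference approximations of the
    corresponding partial derivatives."""
    g = np.asarray(g_stoch, dtype=float).copy()
    order = np.argsort(np.abs(g))[::-1]            # rank components by magnitude
    for i in order[:k]:
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)   # central finite difference
    return g

# Toy quadratic with known gradient; the noisy vector stands in for a
# stochastic gradient estimate.
A = np.array([50.0, 20.0, 5.0, 1.0, 0.5])
f = lambda x: float(np.sum(A * x**2))
x = np.ones(5)
g_true = 2.0 * A * x
g_noisy = g_true + np.random.default_rng(0).normal(scale=5.0, size=5)

g_fixed = refine_gradient(f, x, g_noisy, k=2)
```

Each replaced component costs two extra simulator runs, which is why the method targets only the dominant components rather than computing a full finite-difference gradient.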
In this work, we consider only a well-known mathematical optimization test problem and published problems on estimating optimal well controls in closed-loop reservoir management, although the method could clearly also be applied elsewhere, including to history matching.
