Subsurface development involves well-placement decisions made under a highly uncertain understanding of the subsurface reservoir. Simultaneously optimizing a large number of well locations is a challenging problem. Conventional gradient-based methods are known to perform efficiently for well-placement optimization when the problem is translated into a real-valued representation and special protocols are implemented to handle the noisy objective function. However, applying such methods to large-scale problems may still be impractical because, without an adjoint implementation, the gradients of the objective function may be too expensive to compute for realistic applications. In this paper, we develop a quasi-Newton method based on the stochastic simplex approximate gradient (StoSAG), which requires only objective-function values.
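To make the idea concrete, the following is a minimal sketch of a StoSAG-style derivative-free gradient estimate: the objective is evaluated at Gaussian perturbations of the current control vector, and the ensemble average of the perturbation-weighted objective changes approximates the gradient. The function name, sampling scheme, and scaling here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def stosag_gradient(J, x, n_pert=1000, sigma=0.1, seed=0):
    """Illustrative StoSAG-style gradient estimate from objective values only.

    Draw Gaussian perturbations dx of the control vector x; the average of
    (J(x + dx) - J(x)) * dx, rescaled by sigma**2, approximates grad J(x)
    without any adjoint code. Parameters are assumed for illustration.
    """
    rng = np.random.default_rng(seed)
    base = J(x)
    g = np.zeros_like(x)
    for _ in range(n_pert):
        dx = sigma * rng.standard_normal(x.size)
        g += (J(x + dx) - base) * dx
    return g / (n_pert * sigma ** 2)
```

For a smooth objective this estimator converges to the true gradient as the perturbation size shrinks and the ensemble grows; its inherent averaging is also what gives StoSAG the smoothing effect noted later for non-differentiable well-location objectives.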

We have implemented the BFGS quasi-Newton updating algorithm together with line-search and trust-region optimization strategies. We have also developed a novel approach that enhances the accuracy of StoSAG gradients by modifying their formulation to exploit the structure of the objective function. The objective function is treated as a sum of element functions, each representing the contribution of an individual well at distinct time steps. Instead of working with a single gradient vector, we treat the gradient as a sum of sub-gradients. We then use problem-specific prior knowledge to form a matrix W that acts on the sub-gradients. The entries of W vary from 0 to 1 and are proportional to the interference effects that neighbouring wells have on each other; we define these entries (or weights) from the radii of investigation around the wells. The BFGS-StoSAG variants are demonstrated on a realistic synthetic case with 26 wells while varying the average reservoir permeability.
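The weighting idea can be sketched as follows, under stated assumptions: a linear distance-based decay stands in for the paper's radius-of-investigation formula, and the array layout of the sub-gradients is hypothetical. The sketch only illustrates how a matrix W with entries in [0, 1] can down-weight cross-well sub-gradient terms from wells that do not interfere.

```python
import numpy as np

def interference_weights(well_xy, r_inv):
    """Hypothetical weights W[j, i] in [0, 1]: 1 on the diagonal, decaying
    linearly to 0 once wells j and i are farther apart than twice the
    radius of investigation r_inv (illustrative decay, not the paper's
    exact definition)."""
    n = len(well_xy)
    W = np.eye(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                d = np.linalg.norm(well_xy[j] - well_xy[i])
                W[j, i] = float(np.clip(1.0 - d / (2.0 * r_inv), 0.0, 1.0))
    return W

def weighted_stosag_gradient(sub_grads, W):
    """sub_grads[j, i, :] holds the sub-gradient of well j's element function
    with respect to well i's (x, y) location. The plain gradient is the
    unweighted sum over j; weighting by W suppresses noisy contributions
    from non-interfering well pairs. Returns an (n_wells, 2) array."""
    return np.einsum('ji,jik->ik', W, sub_grads)
```

With W equal to the all-ones matrix this reduces to the ordinary sum of sub-gradients, so the correction can be seen as interpolating between the plain StoSAG gradient and a fully decoupled per-well gradient.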

We first show that the BFGS algorithm delivers promising performance: in many cases it yields the most rapid improvement in the objective-function values, especially in early iterations. Further tests confirm that the trust-region protocol is more effective than the line-search protocol for accelerating BFGS convergence. Although the objective function is not always continuously differentiable with respect to well locations, the StoSAG variants overcome this challenge owing to the smoothing properties of their approximate gradients. Moreover, we show that applying our gradient-correction procedure to the well-location optimization problem drastically accelerates convergence, indicating an improvement in the quality of the StoSAG gradient approximation.
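For readers unfamiliar with the quasi-Newton machinery referenced above, this is a sketch of the standard textbook BFGS inverse-Hessian update, not the paper's specific implementation. The curvature check that skips the update is a common safeguard when gradients are noisy, as StoSAG estimates are.

```python
import numpy as np

def bfgs_inverse_update(H, s, y, curvature_tol=1e-10):
    """Standard BFGS update of the inverse Hessian approximation H, given a
    step s = x_new - x_old and gradient change y = g_new - g_old. The update
    is skipped when y.T @ s is not sufficiently positive, which preserves
    positive definiteness and occurs frequently with noisy gradients."""
    ys = float(y @ s)
    if ys <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # curvature condition violated: keep previous H
    rho = 1.0 / ys
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```

The updated matrix satisfies the secant equation H_new @ y = s, which is what lets BFGS accumulate curvature information from gradient differences alone.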
