Abstract
The previous implementation of the distributed Gauss-Newton (DGN) optimization algorithm runs multiple optimization threads in parallel in a synchronous mode (S-DGN). It therefore waits for all simulations submitted in each iteration to complete, which may significantly degrade performance because a few simulations may run much longer than the others, especially for time-consuming real-field cases. To overcome this limitation and thus improve the DGN optimizer's efficiency, we propose two asynchronous DGN (A-DGN) optimization algorithms in this paper.
The A-DGN optimizer is a well-parallelized and efficient derivative-free optimization (DFO) method. In the first iteration, it generates multiple initial guesses by sampling from the prior probability distribution of the uncertain parameters and then runs multiple simulations on high-performance-computing (HPC) clusters in parallel. A checking time interval is introduced to control the optimization process: the A-DGN optimizer checks the status of all running simulations at the end of each interval. As soon as the simulation of an optimization thread completes, a new simulation case is proposed immediately, without waiting for the other simulations to finish. Each A-DGN optimization thread thus proceeds independently.
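To illustrate the checking-interval mechanism described above, the following minimal sketch shows one possible form of the asynchronous control loop. It is an illustration under stated assumptions, not the actual implementation: the names submit_case, is_finished, propose_next_case, and converged, as well as the 60-second interval, are hypothetical placeholders for the simulator/optimizer interface.

```python
import time

CHECK_INTERVAL_SECONDS = 60  # hypothetical checking time interval


def run_adgn(threads):
    """Sketch of the asynchronous control loop.

    `threads` is a list of optimization-thread objects; submit_case,
    is_finished, propose_next_case, and converged are hypothetical
    helpers standing in for the simulator/optimizer interface.
    """
    # First iteration: each thread starts from a sample of the prior.
    for t in threads:
        t.submit_case(t.initial_guess)

    while not all(t.converged for t in threads):
        time.sleep(CHECK_INTERVAL_SECONDS)  # wait one checking interval
        for t in threads:
            # A finished thread proposes its next case immediately,
            # without waiting for the other threads' simulations.
            if t.is_finished() and not t.converged:
                t.submit_case(t.propose_next_case())
```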
The two A-DGN optimization algorithms are (1) a local-search algorithm that locates multiple maximum-a-posteriori (MAP) estimates and (2) a global-search algorithm integrated with the randomized-maximum-likelihood (RML) method that generates hundreds of RML samples in parallel for uncertainty quantification. To implement the asynchronous running mode, we modified the training-data-set updating algorithm to track an iteration index for each thread. The sensitivity matrix at the best solution of each optimization thread is estimated by linear interpolation of a subset of the training data closest to that best solution, using the modified QR decomposition method. A new simulation case (or search point) is then generated by solving the Gauss-Newton trust-region subproblem (GNTRS) with the estimated sensitivity matrix, using the more efficient and robust GNTRS solver that we developed recently. The proposed A-DGN optimization method is tested and validated on a synthetic problem and then applied to a real-field deep-water reservoir model. Numerical tests confirm that the A-DGN optimizer converges to solutions with matching quality comparable to those obtained by the S-DGN optimizer while reducing the time to convergence by a factor of 1.3 to 2, depending on the problem.
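As a concrete illustration of the two numerical steps named above, the sketch below estimates a sensitivity matrix by linear interpolation over the training points nearest the best solution and then takes a trust-region-constrained Gauss-Newton step. It is a simplified stand-in under stated assumptions: np.linalg.lstsq replaces the paper's modified QR decomposition, a Levenberg-Marquardt-style damping loop replaces the dedicated GNTRS solver, and all function names are illustrative.

```python
import numpy as np


def estimate_sensitivity(X, Y, x_best, y_best, k):
    """Estimate the sensitivity matrix J (dY/dx) at x_best by linear
    interpolation of the k training points closest to x_best."""
    # X: (N, n) sampled parameter vectors; Y: (N, m) simulated responses.
    idx = np.argsort(np.linalg.norm(X - x_best, axis=1))[:k]
    dX = X[idx] - x_best          # (k, n) parameter offsets
    dY = Y[idx] - y_best          # (k, m) response offsets
    # Linear interpolation dY ~= dX @ J.T, solved in the least-squares
    # sense; lstsq is a generic stand-in for the modified QR method.
    Jt, *_ = np.linalg.lstsq(dX, dY, rcond=None)
    return Jt.T                   # J: (m, n)


def gauss_newton_trust_region_step(J, r, delta, lam=1e-4):
    """Approximately solve min ||r + J dx||^2 s.t. ||dx|| <= delta by
    increasing the damping lam until the step fits the trust region;
    a stand-in for the dedicated GNTRS solver referenced above."""
    n = J.shape[1]
    H, g = J.T @ J, J.T @ r
    for _ in range(60):
        dx = np.linalg.solve(H + lam * np.eye(n), -g)
        if np.linalg.norm(dx) <= delta:
            return dx
        lam *= 10.0
    return dx * (delta / np.linalg.norm(dx))  # project onto the boundary
```

In an A-DGN-style workflow, the returned step dx would be added to the thread's current best solution to propose that thread's next simulation case.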
The A-DGN optimization algorithms presented in this paper improve the efficiency and robustness of solving history-matching or inversion problems, especially for quantifying the uncertainty of subsurface model parameters and production forecasts of real-field reservoirs by conditioning to production data.