Abstract
For large-scale history matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm is an efficient optimization method [1,2]. However, computational experiments reveal that the original implementation of LBFGS may encounter the following problems: (i) the algorithm often does not give as good a match of production data as expected; (ii) occasionally a poor search direction is encountered that leads either to false convergence or to a restart with the steepest-descent direction, which sharply reduces the convergence rate; (iii) on rare occasions, LBFGS converges to a model with overshooting/undershooting problems, i.e., a vector of model parameters containing some abnormally high or low values. This overshooting/undershooting occurs even though all history matching problems are formulated in a Bayesian framework with a prior model providing regularization.
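For context, the Bayesian formulation referred to above typically minimizes an objective of the following standard form (a sketch under the usual Gaussian assumptions; the symbols are conventional and are not defined in this abstract):

```latex
% Sketch of the standard Bayesian history-matching objective assumed here;
% m_pr (prior mean), C_M (prior covariance), C_D (data-error covariance),
% g (forward/reservoir-simulation model), and d_obs (observed production data)
% are the customary symbols, not quantities taken from this abstract.
O(m) = \tfrac{1}{2}\,(m - m_{\mathrm{pr}})^{\top} C_M^{-1}\,(m - m_{\mathrm{pr}})
     + \tfrac{1}{2}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)^{\top} C_D^{-1}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)
```

The prior (first) term supplies the regularization mentioned above; the data-mismatch (second) term is what the damping and matching remarks refer to.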
Here, we show that the rate of convergence and the robustness of the algorithm can be significantly improved by: (1) a more robust line search algorithm motivated by the theoretical result that the Wolfe conditions should be satisfied; (2) application of a data-damping procedure at early iterations; (3) rescaling of the model parameters prior to application of the optimization algorithm; (4) application of constraints on the permeability and porosity fields; and (5) a minor modification of the LBFGS updating formula used to compute the inverse Hessian matrix.
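As a concrete illustration of items (1) and (5), and not the authors' implementation, the Python sketch below shows a generic LBFGS loop: the two-loop recursion applies the implicit inverse Hessian, `scipy.optimize.line_search` enforces the strong Wolfe conditions, and a curvature safeguard discards update pairs that would spoil positive definiteness of the implicit inverse Hessian. All names (`f`, `grad_f`, `m0`, `m_pairs`) are hypothetical.

```python
import numpy as np
from scipy.optimize import line_search  # strong-Wolfe line search


def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: return -H*grad without forming H explicitly."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q = q - a * y
    # Initial inverse-Hessian scaling H0 = gamma * I (a common default choice)
    if s_hist:
        s, y = s_hist[-1], y_hist[-1]
        gamma = s.dot(y) / y.dot(y)
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(r)
        r = r + (a - b) * s
    return -r


def lbfgs(f, grad_f, m0, m_pairs=10, max_iter=100, tol=1e-6):
    """Minimize f starting from m0, keeping m_pairs curvature pairs."""
    m = m0.copy()
    s_hist, y_hist = [], []
    g = grad_f(m)
    for _ in range(max_iter):
        p = lbfgs_direction(g, s_hist, y_hist)
        # Line search satisfying the strong Wolfe conditions (item 1)
        alpha, *_ = line_search(f, grad_f, m, p)
        if alpha is None:
            alpha = 1e-4  # conservative fallback if the line search fails
        m_new = m + alpha * p
        g_new = grad_f(m_new)
        s, y = m_new - m, g_new - g
        # Keep the pair only if the curvature condition y's > 0 holds, so the
        # implicit inverse Hessian stays positive definite (related to item 5)
        if y.dot(s) > 1e-10 * np.linalg.norm(y) * np.linalg.norm(s):
            s_hist.append(s)
            y_hist.append(y)
            if len(s_hist) > m_pairs:
                s_hist.pop(0)
                y_hist.pop(0)
        m, g = m_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return m
```

In the history-matching setting, `f` would be the Bayesian objective sketched above, with the data damping, parameter rescaling, and permeability/porosity constraints of items (2)-(4) applied around this loop rather than inside it.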