Abstract
The Gauss-Newton line-search method has proven to be very efficient for least-squares problems. However, it may fail to converge when applied to real-field problems because of inaccurate gradients or singular Hessians. By contrast, trust-region optimization methods perform more robustly and efficiently, but they introduce significant overhead when the conventional matrix-factorization method is used to solve the associated Gauss-Newton trust-region subproblem (GNTRS). This paper analyzes and compares the performance of different approaches to solving a GNTRS.
Solving a GNTRS involves solving a nonlinear equation iteratively using a Newton-Raphson method. In each iteration, a symmetric linear system is solved by matrix factorization, e.g., Cholesky decomposition (CD), eigenvalue decomposition (EVD), or singular value decomposition (SVD). This paper investigates the computational costs and compares the performance of different matrix-factorization subroutines in LAPACK with different options and settings, both numerically (measuring the CPU time needed to solve problems of different sizes) and theoretically (counting the floating-point operations, or flops, required by each algorithm). Numerical testing confirms that the measured CPU time is linearly proportional to the theoretically computed flop count.
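For concreteness, the short SciPy script below (a minimal sketch, not the paper's benchmark code) exercises the same three LAPACK-backed factorizations on a symmetric test matrix; the matrix sizes, damping value, and random data are illustrative assumptions only. SciPy's cho_factor, eigh, and svd call the LAPACK potrf, syev-family, and gesdd routines, respectively.

```python
# Minimal sketch: time CD, EVD, and SVD solves of a symmetric system A x = b,
# where A = J^T J + lam*I is built from a random stand-in sensitivity matrix J.
import time
import numpy as np
from scipy import linalg

m, n, lam = 1000, 800, 1e-3              # illustrative sizes and damping, not from the paper
rng = np.random.default_rng(0)
J = rng.standard_normal((m, n))           # stand-in for an m x n sensitivity matrix
A = J.T @ J + lam * np.eye(n)             # symmetric positive-definite system matrix
b = -J.T @ rng.standard_normal(m)         # right-hand side

def solve_time(label, fn):
    t0 = time.perf_counter()
    x = fn()
    dt = time.perf_counter() - t0
    print(f"{label:4s} {dt:8.4f} s   residual {np.linalg.norm(A @ x - b):.2e}")

solve_time("CD",  lambda: linalg.cho_solve(linalg.cho_factor(A), b))
solve_time("EVD", lambda: (lambda w, V: V @ ((V.T @ b) / w))(*linalg.eigh(A)))
solve_time("SVD", lambda: (lambda U, s, Vt: Vt.T @ ((U.T @ b) / s))(*linalg.svd(A)))
```

Timing such single-factorization solves for a range of matrix sizes is one way to check the reported linear relationship between measured CPU time and counted flops.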
The computational cost of solving a GNTRS depends mainly on the size of the m × n sensitivity matrix, with m and n the number of observed data and uncertainty parameters, respectively. When m < n, the GNTRS solver using the matrix inversion lemma (MIL) or a linear transformation (LT) is more efficient, requiring roughly 2nm² + O(m³) flops. Both the MIL and LT methods transform the original linear system with n unknowns into a new linear system with m unknowns. Otherwise, directly solving the symmetric linear system with n variables is more efficient, requiring roughly 2mn² + O(n³) flops. Finally, we benchmark different GNTRS solvers on problems of different sizes and settings, and the numerical results are consistent with the theoretical performance analysis. GNTRS solvers using EVD and SVD perform more robustly than those using CD. When either m or n is not too large (<1000), the GN trust-region search optimization algorithm performs robustly and efficiently. However, when both m and n are large, solving the GNTRS becomes expensive, and the limited-memory BFGS optimization algorithm may be a better choice (as it requires only 40n to 100n flops).
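The sketch below illustrates the m < n reduction, assuming the GNTRS linear system takes the standard damped normal-equations form (JᵀJ + λI)x = −Jᵀr; the paper's exact formulation may differ, and the sizes, damping value, and random data are illustrative only. The matrix inversion lemma gives (JᵀJ + λI)⁻¹b = (1/λ)[b − Jᵀ(JJᵀ + λI)⁻¹Jb], so only an m × m system needs to be factored.

```python
# Minimal sketch: for m < n, solve the damped normal equations either directly
# (n x n system) or via the matrix inversion lemma (m x m system) and compare.
import numpy as np
from scipy import linalg

m, n, lam = 200, 2000, 1e-2               # m < n; illustrative sizes, not from the paper
rng = np.random.default_rng(1)
J = rng.standard_normal((m, n))            # sensitivity matrix (m data, n parameters)
r = rng.standard_normal(m)                 # residual vector
b = -J.T @ r                               # right-hand side of the GN linear system

# Direct approach: form the n x n matrix and factor it, roughly 2mn^2 + O(n^3) flops.
x_direct = linalg.solve(J.T @ J + lam * np.eye(n), b, assume_a="pos")

# MIL approach: solve an m x m system instead, roughly 2nm^2 + O(m^3) flops.
y = linalg.solve(J @ J.T + lam * np.eye(m), J @ b, assume_a="pos")
x_mil = (b - J.T @ y) / lam

print(np.allclose(x_direct, x_mil))        # the two solutions agree to rounding error
```

The saving comes from forming the smaller m × m Gram matrix JJᵀ and factoring it, instead of the n × n matrix JᵀJ, whenever m < n.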
Better uncertainty quantification in subsurface modelling workflows requires using more uncertainty parameters and fitting more observed data, which makes it increasingly important to solve the trust-region subproblems efficiently and without failures. Based on the performance analysis presented in this paper, users can strike a better balance between computational cost and solution quality by roughly estimating the computational cost from the dimensions m and n of a history-matching problem before running expensive simulations.
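As a rough illustration of such an estimate (the function name and constants below are illustrative assumptions, not part of the paper), the leading-order flop counts quoted above can be compared directly from m and n before any simulation is run:

```python
# Back-of-envelope estimate of per-subproblem cost from the problem dimensions alone,
# using the leading-order flop counts quoted in this abstract.
def gntrs_flops(m: int, n: int) -> dict:
    return {
        "direct (n x n)": 2 * m * n**2 + n**3,   # form J^T J, then factor the n x n system
        "MIL/LT (m x m)": 2 * n * m**2 + m**3,   # form J J^T, then factor the m x m system
        "L-BFGS step":    100 * n,               # upper end of the 40n to 100n range above
    }

print(gntrs_flops(m=2_000, n=500_000))           # hypothetical history-matching problem size
```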