To improve on the current industry-standard one-parameter-at-a-time sensitivity analysis method, we propose a new sensitivity analysis framework that utilizes Plackett-Burman design and Random Forests (a well-known data mining method). The new framework significantly reduces the number of required simulation runs (i.e., samples) while at the same time substantially reducing the automatic history matching error.

The proposed sensitivity analysis framework starts by generating samples (simulations) using a Plackett-Burman design, where each simulation is executed with a different combination of parameter input values. Once the samples are ready, the parameter input values and the target vector (i.e., the history matching error vector) are used to train a Random Forests model, which ranks the importance of each history matching parameter. Parameters with low impact on the history matching error are discarded, and the remaining parameters are used for Genetic Algorithm-based automatic history matching. The impact of an internal Random Forests parameter (the number of decision trees) on the history matching error is also examined.
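The workflow above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the 12-run Plackett-Burman design is built from the standard cyclic generator row, scikit-learn's `RandomForestRegressor` stands in for the Random Forests model, and the history matching errors `y` are synthetic (in practice each row of the design is one simulator run and `y` is the resulting misfit).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# 12-run Plackett-Burman design: 11 cyclic shifts of the standard
# generator row for 11 two-level factors, plus a final all-minus row.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
design = np.array([np.roll(gen, i) for i in range(11)])
design = np.vstack([design, -np.ones(11, dtype=int)])

# Keep 10 columns for 10 uncertain parameters (low/high coded as -1/+1).
X = design[:, :10]

# Synthetic stand-in for the history matching error of each run;
# here parameters 0 and 3 are made influential by construction.
rng = np.random.default_rng(0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0.0, 0.1, size=12)

# Train Random Forests on (parameter values -> error) and rank the
# parameters by impurity-based feature importance.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
top4 = ranking[:4]  # only these survive to the GA history matching step
```

The discarded low-impact parameters shrink the search space handed to the Genetic Algorithm, which is where the workflow's time savings come from.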

A highly faulted reservoir with water injection is used for history matching, and 10 uncertain parameters are defined, consisting of fault transmissibilities, permeabilities, and connate water saturation. The aim is to match the bottom hole pressure and water cut for several oil-producing wells. The one-parameter-at-a-time method requires 21 samples, and the top 4 parameters it selects are mainly fault transmissibilities; automatic history matching with these parameters gave a final error of 463.338. The new framework requires only 12 samples instead of 21 and gave a final error of 628.041 when 100 trees were used. When the number of trees was increased to 500, the final error was significantly better (a 36% improvement) than that obtained with the one-parameter-at-a-time method, and the majority of the top 4 parameters were permeabilities. Computation time did not change noticeably as the number of trees increased from 100 to 500; total parameter ranking time remained under 10 seconds.
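The sensitivity of the ranking to the number of trees can be checked cheaply, since only the Random Forests fit is repeated, not the simulations. The sketch below is illustrative only: the 12 runs and their errors are synthetic, and the influential parameters (indices 1 and 5 here) are chosen arbitrarily for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the 12 Plackett-Burman runs and their
# history matching errors (real values would come from the simulator).
rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=(12, 10))
y = 3.0 * X[:, 1] - 2.0 * X[:, 5] + rng.normal(0.0, 0.1, size=12)

# Refit with 100 and 500 trees and compare the resulting top-4 rankings;
# only this cheap ranking step is repeated, not the reservoir simulations.
rankings = {}
for n_trees in (100, 500):
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=0).fit(X, y)
    rankings[n_trees] = np.argsort(rf.feature_importances_)[::-1][:4]
```

Because each fit takes well under 10 seconds, sweeping the tree count is a practical robustness check before committing the selected parameters to the Genetic Algorithm stage.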

For this case study, not only does the new framework require significantly fewer initial simulation samples, it also significantly improves the history matching error compared to the industry-standard one-parameter-at-a-time sensitivity analysis method. Practicing engineers can therefore use this framework to save time while improving the accuracy of the history matching parameter ranking; parameter ranking with Random Forests takes less than 10 seconds.
