We present a novel reservoir-simulator time-step selection approach that uses machine-learning (ML) techniques to analyze the mathematical and physical state of the system and predict time-step sizes that are as large as possible while remaining efficient to solve, thus making the simulation faster. An optimal time-step choice avoids the non-linear and linear equation set-up work wasted when the time-step is too small, and avoids the highly non-linear systems that take many iterations to solve when it is too large.
Typical time-step selectors use a limited set of features to heuristically predict the size of the next time-step. While these heuristics have been effective for simple simulation models, growing model complexity calls for more robust, data-driven time-step selection algorithms. We propose two workflows (static and dynamic) that use a diverse set of physical (e.g., well data) and mathematical (e.g., Courant-Friedrichs-Lewy, CFL) features to build a predictive ML model. The model can be pre-trained offline (static) or trained during the simulation (dynamic) to generate an inference model. The trained model can also be reinforced as new data becomes available and used efficiently for transfer learning.
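The static workflow can be illustrated with a minimal sketch: log simulator states and the accepted time-step sizes offline, fit a model, and use it for inference. The choice of CFL number as the single feature, the log-space linear fit, the training values, and the clamping limits are all illustrative assumptions here, not the actual model used in the simulator.

```python
import math

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form, one feature)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Hypothetical training log: (max CFL number, accepted time-step in days).
training = [(0.5, 30.0), (1.0, 20.0), (2.0, 12.0), (4.0, 5.0), (8.0, 2.0)]

# Fit in log-space: the feasible step shrinks roughly multiplicatively
# as the CFL number grows.
a, b = fit_linear([math.log(cfl) for cfl, _ in training],
                  [math.log(dt) for _, dt in training])

def predict_dt(cfl, dt_min=0.1, dt_max=60.0):
    """Inference: clamp the predicted step to the simulator's limits."""
    dt = math.exp(a * math.log(cfl) + b)
    return max(dt_min, min(dt_max, dt))
```

The dynamic workflow would refit (or incrementally update) the same kind of model from steps accepted earlier in the current run instead of from a pre-assembled training set.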
We present the application of these workflows in a commercial reservoir simulator on distinct simulation model types, including black oil, compositional, and thermal steam-assisted gravity drainage (SAGD). We have found that history-matching and uncertainty/optimization studies benefit most from the static approach, while the dynamic approach produces optimal step sizes for prediction studies. We use a confidence monitor to manage the ML time-step selector at runtime: if the confidence level falls below a threshold, we switch to the traditional heuristic method for that time-step. This avoids performance degradation when the model features fall outside the training space. Application to several complex cases, including a large field study, shows a significant speedup for single simulations and even better results for multiple simulations. We demonstrate that any simulation can take advantage of the stored state of the trained model and even augment it when new situations are encountered, so the system becomes more effective as it is exposed to more data.
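The runtime confidence monitor described above can be sketched as a simple guard around the ML prediction: the ML step is accepted only when confidence is above a threshold, and otherwise a conventional heuristic is used for that time-step. The threshold value, the iteration-count heuristic, and all parameter names are illustrative assumptions rather than the simulator's actual rules.

```python
def heuristic_dt(prev_dt, newton_iters, target_iters=8, growth=2.0):
    """Traditional heuristic: grow the step when Newton converged easily,
    shrink it in proportion to how far convergence exceeded the target."""
    if newton_iters <= target_iters:
        return prev_dt * growth
    return prev_dt * target_iters / newton_iters

def select_dt(ml_dt, confidence, prev_dt, newton_iters, threshold=0.8):
    """Confidence monitor: trust the ML time-step only when the current
    features appear to lie inside the training space; otherwise fall back
    to the heuristic so performance cannot degrade below the baseline."""
    if confidence >= threshold:
        return ml_dt
    return heuristic_dt(prev_dt, newton_iters)
```

For example, `select_dt(15.0, 0.9, 10.0, 5)` accepts the ML step, while `select_dt(15.0, 0.5, 10.0, 5)` falls back to the heuristic and doubles the previous step instead.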