This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 203941, “Bridging the Gap Between Material Balance and Reservoir Simulation for History Matching and Probabilistic Forecasting Using Machine Learning,” by Nigel H. Goodwin, Essence Analytics. The paper has not been peer reviewed.
Currently, building and maintaining a reservoir simulation model is the bottleneck for reservoir decision support. In this paper, the author describes an approach for creating a digital twin that is informed by the data itself, is grounded in established physics modeling, and uses a fully probabilistic Bayesian approach.
Differences Between Statistical Models and Physics Models. The statistical model includes an error term for each timestep. In practice, this means that the fit to historical data is very good and the error terms are small, but forecasts can exhibit a pronounced funneling effect. Many methods exist for forecasting with statistical models, but uncertainty that grows over time, and grows to large magnitude, is a common feature. The error term implies random-walk behavior in the future: when forecasting, multiple single-step integrations are performed, and each step contributes its own error.
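The growth of forecast uncertainty described above can be illustrated with a minimal sketch. The function `multistep_forecast` and the Gaussian one-step error are illustrative assumptions, not taken from the paper; the point is only that redrawing the error term at every forecast step makes the ensemble spread widen with the horizon.

```python
import random
import statistics

def multistep_forecast(last_value, n_steps, sigma, n_paths=2000, seed=0):
    """Simulate forecast paths for a statistical model whose one-step
    prediction carries an additive error ~ N(0, sigma). Because a fresh
    error is drawn at every step, uncertainty accumulates like a random
    walk, producing the familiar 'funnel' of forecasts."""
    rng = random.Random(seed)
    values_by_step = [[] for _ in range(n_steps)]
    for _ in range(n_paths):
        x = last_value
        for t in range(n_steps):
            x += rng.gauss(0.0, sigma)  # new error term at each forecast step
            values_by_step[t].append(x)
    # Ensemble spread at each forecast horizon
    return [statistics.stdev(vals) for vals in values_by_step]

spreads = multistep_forecast(100.0, n_steps=10, sigma=1.0)
# spreads grows roughly like sigma * sqrt(t + 1) with the horizon t
```

For a pure random walk the standard deviation at horizon t scales as the square root of t, which is why the funnel keeps widening however much historical data the model was fitted to.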
Where future values of exogenous variables are known, prediction uncertainty can be greatly reduced. In contrast, the differential algebraic equations (DAE) of a physical model have no per-timestep error term. Each time the model is run with the same parameter values, the same results are obtained.
The error in reservoir simulation lies in measurement error and model error. Individual historical measurements contain errors (and possibly systematic errors), and the model and its parameters are known to be imperfect. For forecasting, a range of models and model-parameter values is used, ideally based on a robust Monte Carlo approach. An ensemble of forecasts can then be generated that represents the uncertainty.
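The contrast between a deterministic physics model and Monte Carlo sampling of its parameters can be sketched as follows. The `tank_model` decline curve and the uniform parameter ranges are hypothetical stand-ins for a real simulator and its priors, chosen only to show the mechanics: the model itself is repeatable, and uncertainty enters solely through the sampled parameters.

```python
import random

def tank_model(p_initial, decline_rate, n_years):
    """Toy deterministic proxy for a physics model: with fixed parameter
    values it returns the same production profile on every run (there is
    no per-timestep error term, unlike a statistical model)."""
    return [p_initial * (1.0 - decline_rate) ** t for t in range(n_years)]

# Determinism: same parameters always give identical results
assert tank_model(1000.0, 0.1, 5) == tank_model(1000.0, 0.1, 5)

# Monte Carlo over uncertain parameters (illustrative prior ranges)
rng = random.Random(42)
ensemble = []
for _ in range(500):
    p0 = rng.uniform(800.0, 1200.0)   # uncertain initial rate
    d = rng.uniform(0.05, 0.15)       # uncertain decline rate
    ensemble.append(tank_model(p0, d, 10))

# Percentiles of the ensemble at a given year summarize forecast uncertainty
year9 = sorted(run[9] for run in ensemble)
p10, p90 = year9[50], year9[450]
```

Here the spread between the P10 and P90 forecasts reflects parameter uncertainty alone, which is the sense in which an ensemble of deterministic runs represents uncertainty in the physics-model setting.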