Model-Based Evaluation of Surveillance-Program Effectiveness With Proxies
- Chris Carpenter (JPT Technology Editor)
- Society of Petroleum Engineers
- Journal of Petroleum Technology
- Publication Date: September 2015
- Document Type: Journal Paper
- Pages: 121–123
- Copyright 2015, Society of Petroleum Engineers
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 173229, “Model-Based A Priori Evaluation of Surveillance-Program Effectiveness With Proxies,” by Jincong He, Jiang Xie, Pallav Sarma, Xian-Huan Wen, Wen H. Chen, and Jairam Kamath, Chevron, prepared for the 2015 SPE Reservoir Simulation Symposium, Houston, 23–25 February. The paper has not been peer reviewed.
This paper proposes a framework based on proxies and rejection sampling (filtering) to perform multiple history-matching runs with a manageable number of reservoir simulations. The proposed work flow enables qualitative and quantitative analysis of a surveillance plan. Qualitatively, heavy-hitter-alignment analysis for the objective function and the observed data provides actionable measures for screening different surveillance designs. Quantitatively, the evaluation of expected uncertainty reduction from different surveillance plans allows for optimal design and selection of surveillance plans.
In this work flow, the authors perform a set of training simulations (determined by experimental design) and use the results to build proxies for the objective function and each of the surveillance data points. The proxies are then used to generate a number of plausible realizations of the surveillance data. Each plausible data realization is in turn assumed to be the true data to be observed, and a history-matching run is performed to assimilate these “true” data by proxy-based rejection sampling, establishing the corresponding posterior distribution. The process is repeated for all plausible surveillance-data realizations to obtain a set of plausible posterior distributions (one for each data realization). The expected uncertainty reduction is obtained by comparing the amount of uncertainty in the prior distribution with the average amount of uncertainty across the posterior distributions. To the best of the authors’ knowledge, this is the first attempt at a priori surveillance analysis through multiple history matching under data uncertainty by use of rejection sampling and proxies.
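The steps of the work flow can be sketched in miniature. The toy model, the proxy form (quadratic fits), the prior, and the measurement-noise level below are all illustrative assumptions, not taken from the paper; the sketch only mirrors the structure of the loop: train proxies, generate plausible data realizations, rejection-sample a posterior for each, and compare prior against average posterior uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter "simulator" (not from the paper): maps an
# uncertain parameter m to one surveillance datum d and an objective j.
def simulator(m):
    d = m + 0.1 * m**2          # surveillance data point
    j = 2.0 * m                 # objective function (e.g., recovery)
    return d, j

# 1) Training simulations (experimental design: uniform levels here).
m_train = np.linspace(-1.0, 1.0, 9)
d_train, j_train = simulator(m_train)

# 2) Build proxies for the data and the objective (quadratic fits).
proxy_d = np.poly1d(np.polyfit(m_train, d_train, 2))
proxy_j = np.poly1d(np.polyfit(m_train, j_train, 2))

# 3) Sample the prior and generate plausible data realizations.
m_prior = rng.normal(0.0, 0.5, size=20000)
sigma_noise = 0.05              # assumed measurement-error std
idx = rng.choice(m_prior.size, size=50, replace=False)
d_realizations = proxy_d(m_prior[idx]) + rng.normal(0.0, sigma_noise, 50)

# 4) Treat each realization as the "true" data; rejection sampling
#    against the proxy yields a posterior, whose uncertainty we record.
def posterior_variance(d_obs):
    mismatch = (proxy_d(m_prior) - d_obs) ** 2 / (2.0 * sigma_noise**2)
    accept = rng.uniform(size=m_prior.size) < np.exp(-mismatch)
    return np.var(proxy_j(m_prior[accept]))

post_vars = np.array([posterior_variance(d) for d in d_realizations])

# 5) Expected uncertainty reduction: prior variance of the objective
#    versus the average posterior variance.
prior_var = np.var(proxy_j(m_prior))
reduction = 1.0 - post_vars.mean() / prior_var
print(f"expected uncertainty reduction: {reduction:.1%}")
```

Because the proxies stand in for the simulator inside the rejection-sampling loop, the number of full simulations stays fixed at the training set, no matter how many data realizations are filtered.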
The proposed method has several benefits. First, methods that assume a linear Gaussian relation between the surveillance data and the objective function can significantly misquantify the uncertainty reduction. The method proposed in this work makes no such assumption and can therefore better capture the distributions of, and the relation between, the surveillance data and the objective function. Second, it is a black-box method that requires no gradient information; the work flow can therefore be used with most simulators, as long as the simulator can predict the values of the surveillance data from the surveillance program and the value of the objective function. In addition, the number of training simulations needed is independent of the number of plausible data realizations, and thus of the number of history-matching runs; the computational cost is also insensitive to the number of data points to be collected in the surveillance program. Furthermore, beyond quantifying the expected uncertainty reduction, the work flow enables qualitative screening of surveillance concepts by analyzing the ranking and the magnitude of the sensitivities of the objective function and the surveillance data to the different parameters. The complete paper discusses the formulation of the problem of quantifying the uncertainty reduction for surveillance programs and details the new work flow.
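The qualitative screening by sensitivity ranking can be illustrated with a small sketch. The two-parameter toy functions and the two candidate surveillance plans below are hypothetical, not from the paper; the sketch shows the idea that a plan whose data are most sensitive to the same parameter that drives the objective (the "heavy hitter") is a better screening candidate.

```python
import numpy as np

# Hypothetical toy responses over two uncertain parameters (m1, m2).
def objective(m):        # e.g., a recovery proxy, driven mainly by m1
    return 3.0 * m[0] + 0.2 * m[1]

def data_plan_a(m):      # candidate surveillance datum, sensitive to m1
    return 2.5 * m[0] + 0.1 * m[1]

def data_plan_b(m):      # candidate surveillance datum, sensitive to m2
    return 0.1 * m[0] + 2.0 * m[1]

def sensitivities(f, m0, h=1e-6):
    """Absolute finite-difference sensitivity of f to each parameter."""
    base = f(m0)
    grads = []
    for i in range(len(m0)):
        m = np.array(m0, dtype=float)
        m[i] += h
        grads.append(abs((f(m) - base) / h))
    return np.array(grads)

m0 = np.array([0.5, 0.5])
heavy_hitter = int(np.argmax(sensitivities(objective, m0)))

# A plan "aligns" when its most-sensitive parameter matches the
# objective's heavy hitter.
aligned = {
    name: int(np.argmax(sensitivities(plan, m0))) == heavy_hitter
    for name, plan in [("A", data_plan_a), ("B", data_plan_b)]
}
print(aligned)
```

In this toy setup plan A aligns with the objective and plan B does not, so plan A would be the stronger candidate to carry forward into the quantitative uncertainty-reduction evaluation.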