The past 10 to 15 years have seen a major shift in thinking on the subject of uncertainty and its incorporation into oil and gas evaluations. Witness the struggle that has occurred in the adoption of probabilistic expressions of reserves estimates and the gradual acceptance of these methods by the industry. Despite this newfound acceptance, there is still much confusion about how such evaluations should be performed (there is a lack of consistent ground rules and calibration procedures) and, in particular, about how to use or interpret the results. The question then becomes: have we really made any progress in improving the accuracy of our estimates?
This paper explores the sources of uncertainty in an oil and gas evaluation and attempts to characterize those sources, by definition and by their interrelationships, in order to better understand how to handle them. As geoscientists and engineers, we tend to rely on simple computational procedures to calculate most of our required parameters. In other words, we fit a straight line through the scatter to develop a transform, calculate a result, and move on. Many empirical relationships in standard use were developed from very scattered data. Such methods have changed little as graph paper gave way to hand calculators, and now to computers.
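The "straight line through the scatter" can be sketched as an ordinary least-squares fit. The data and variable names below are hypothetical, for illustration only; any resemblance to a particular log-to-core transform is assumed, not taken from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit through scattered data.

    Returns (slope, intercept) of the best-fit line y = slope*x + intercept.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sum of squared deviations in x, and of cross-deviations in x and y
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Hypothetical scattered calibration data (e.g., a log-response-to-porosity crossplot)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)

def transform(x):
    """The resulting empirical transform used to 'calculate a result and move on'."""
    return slope * x + intercept
```

Note that the single (slope, intercept) pair discards the spread of the points around the line; that residual scatter is precisely the kind of uncertainty the paper argues should be characterized rather than ignored.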
In characterizing uncertainty, we document its sources, including measurement inaccuracy, computational approximation, the effects of incomplete or missing data, and the behavior of naturally stochastic systems, and we examine the impact of each on the evaluation. Finally, we compare and contrast how uncertainty is handled outside the oilfield, both in concept and in the way the results are visualized.
Does this improve the accuracy of our evaluations? Read on; you will be surprised at the answer.