The difficulty of assessing uncertainty has been well documented by many authors (Capen 1976, Begg, Bratvold et al. 2003, Bickel and Bratvold 2007). Part of this difficulty is attributed to the biases that affect estimation and decision-making, which have also been documented in the literature (Tversky and Kahneman 1974, Welsh, Bratvold et al. 2005, Welsh, Begg et al. 2007, Virine 2008). The best remedy for these biases is calibration of the uncertainty estimates (Capen 1976, McVay, Lee et al. 2005).

Calibration incorporates past experience into the estimation process by applying correction factors to the raw estimates, based on the reliability of past estimates, to produce calibrated estimates. This cannot be done instantly because it requires tracking past probabilistic predictions and comparing them with actual outcomes. Such information is rarely available because collecting it requires project-management discipline, corporate knowledge management, and an appreciation of the value of quantifying uncertainty.

The objective of this paper is to demonstrate an automatic calibration method that can be readily applied to any uncertainty estimate without requiring information about the reliability of previous estimates. The method combines two components: the anchoring heuristic demonstrated by Tversky and Kahneman (1974) and the graphical demonstration by Capen (1976). The results of this study show that a general estimate of an uncertain parameter (e.g., porosity or permeability) can be calibrated by doubling the range while anchoring on the central tendency of the estimate. The process was applied to survey results à la Capen (1976) and increased the confidence level of the estimates from 34% to 72%.
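The "double the range while anchoring on the central tendency" rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of (P10, P50, P90) percentiles to represent the estimate, and the choice of the P50 as the anchor are assumptions for demonstration.

```python
def calibrate_range(p10, p50, p90, factor=2.0):
    """Widen a (P10, P50, P90) estimate about its central value (P50).

    Hypothetical sketch of range-doubling calibration: each tail
    distance from the anchor (P50) is scaled by `factor` (2.0 per
    the rule described in the text).
    """
    low = p50 - factor * (p50 - p10)   # stretched lower tail
    high = p50 + factor * (p90 - p50)  # stretched upper tail
    return low, p50, high

# Example: a hypothetical raw porosity estimate of (0.12, 0.18, 0.22)
low, mid, high = calibrate_range(0.12, 0.18, 0.22)
print(low, mid, high)
```

Doubling the tails while holding the P50 fixed preserves the estimator's central tendency (which anchoring tends to get roughly right) while widening the interval to counteract the overconfidence that anchoring induces in the range.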
