Abstract

Even with unbiased judgments and project calculations, an insidious bias will still be present in an approved projects portfolio. The screening and ranking process used to select projects causes an optimizer's curse effect (Smith and Winkler, 2006). A portfolio's true outcome value will tend to fall below the forecast distribution mean. Stated another way, we should expect to be disappointed in the value of the eventual outcomes. This optimizer's curse effect can be huge: the true portfolio value might be only half its forecast value when forecasts are prepared in the usual way.

This paper describes a correction process based upon Bayesian inversion obtained with Monte Carlo simulation. Estimate/Actual (E/A) distributions characterize component evaluation errors. The mean and standard deviation of the E/A distribution measure the quality of judgments and estimates. Examples demonstrate the value of better information in project or asset evaluation, portfolio optimization, and competitive bidding.

The premise of our approach to correcting for the optimizer's curse is straightforward enough, though implementing this method requires judging the population of evaluated projects—or at least assessing the shape of this population's distribution. We offer a process and guidelines for correcting for the optimizer's curse in both project and portfolio value calculations. Correcting estimates for systematic biases restores luster to the recommended expected value-maximizing decision policy. We also tie back to the winner's curse (Capen, Clapp and Campbell, 1971) experienced in competitive bidding and how it affects bid optimization.

Introduction

Our primary goals for this paper are:

  • Help publicize the optimizer's curse, its cause, and its importance.

  • Propose some early methods we believe will be useful in countering the optimizer's curse and a related bias in planning and evaluation work.

The foremost characteristic of good evaluation work is objectivity, that is, a lack of bias. Until recently, the authors believed that if (a) the judgments going into a stochastic cashflow model were unbiased and (b) the model calculations did not introduce a bias, then the forecast produced by the model would be objective. This still holds if we consider the judgments and models across all projects evaluated; it breaks down for the subset of projects actually selected.

A 2006 article by Smith and Winkler describes the optimizer's curse (OC), the phenomenon that spoils this illusion of simplicity. The OC describes how the simple, everyday act of screening and ranking projects introduces a bias: the projects accepted tend to be those whose random evaluation errors are optimistic.
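To make the mechanism concrete, here is a minimal Monte Carlo sketch (our illustration, not the paper's model; every parameter value is an assumption chosen for demonstration). Each trial evaluates a pool of candidate projects with unbiased random errors, ranks them by estimate, and keeps the top few; the selected portfolio's forecast is then compared against its true value:

```python
import numpy as np

rng = np.random.default_rng(42)

n_candidates = 200   # projects evaluated each cycle (assumed)
n_selected = 20      # projects passing the screen (assumed)
n_trials = 5_000     # Monte Carlo repetitions

true_mean, true_sd = 100.0, 40.0   # population of true project values, $MM (assumed)
error_sd = 30.0                    # unbiased random evaluation error, $MM (assumed)

forecast_total = 0.0
actual_total = 0.0
for _ in range(n_trials):
    true_values = rng.normal(true_mean, true_sd, n_candidates)
    estimates = true_values + rng.normal(0.0, error_sd, n_candidates)
    top = np.argsort(estimates)[-n_selected:]   # rank by estimate, keep the best
    forecast_total += estimates[top].sum()
    actual_total += true_values[top].sum()

print(f"mean forecast portfolio value: {forecast_total / n_trials:,.0f}")
print(f"mean true portfolio value:     {actual_total / n_trials:,.0f}")
# The selected portfolio's true value falls short of its forecast even though
# every individual evaluation was unbiased: the optimizer's curse.
```

Running this shows the forecast exceeding the selected portfolio's true value, even though no individual estimate was biased; only the act of selection created the gap.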

We can estimate the OC bias and then adjust out its effect to restore objectivity. We will discuss a simple approach to characterizing and reducing the effects of the OC. Then we demonstrate a Bayesian approach to estimating and backing out the OC bias. Unfortunately, this adjustment requires additional work in modeling the evaluation process and in assessing judgments for additional inputs. Considerably more calculation effort is needed to estimate the correction, though the cost of the personal computer time is inconsequential.
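The following is a sketch of the Bayesian back-out, under assumed distributions chosen only for illustration: a lognormal prior for the population of evaluated project values and a lognormal E/A error model with median 1. Weighting prior samples by the likelihood of the observed estimate yields a posterior mean that shrinks a high estimate back toward the population:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed prior: population of true project values, lognormal, median $50 MM
prior_actuals = rng.lognormal(mean=np.log(50.0), sigma=0.8, size=200_000)

# Assumed E/A error model: log(E/A) ~ Normal(0, ea_sigma), i.e., median E/A = 1
ea_sigma = 0.4   # smaller sigma means better evaluation quality

def posterior_mean_actual(estimate):
    """Bayesian inversion by Monte Carlo: weight each prior sample by the
    likelihood of observing `estimate` if that sample were the true value."""
    log_ratio = np.log(estimate / prior_actuals)
    weights = np.exp(-0.5 * (log_ratio / ea_sigma) ** 2)
    return np.average(prior_actuals, weights=weights)

# A high estimate is shrunk back toward the population of evaluated projects
print(posterior_mean_actual(150.0))
```

The amount of shrinkage is governed by the E/A spread: better evaluation quality (a smaller ea_sigma) implies a smaller correction.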

We will demonstrate an OC correction process in the context of large exploration projects, a portfolio of such projects, and bidding to acquire an exploration prospect tract. The same methods would apply to other types of cash-producing assets, or to anything else of value.

Closely related to the OC is the winner's curse in competitive bidding. This was first described in the SPE literature in 1971 by Capen, Clapp and Campbell. The winner's curse phenomenon explains why an auction winner tends to overpay. In hindsight, it appears these authors were prescient: the winner's curse bias is widely exhibited and is closely related to the OC. Random evaluation errors are the cause of both the winner's and optimizer's curses.
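A small common-value auction simulation illustrates the point (again our sketch; the bidder count, tract value, and error size are assumptions). Each bidder naively bids its own unbiased estimate, and the high bid wins:

```python
import numpy as np

rng = np.random.default_rng(11)

n_auctions = 10_000
n_bidders = 6        # competitors per tract (assumed)
true_value = 100.0   # common value of each tract, $MM (assumed)
estimate_sd = 30.0   # each bidder's unbiased evaluation error, $MM (assumed)

# Naive policy for illustration: each bidder bids its own unbiased estimate
estimates = rng.normal(true_value, estimate_sd, size=(n_auctions, n_bidders))
winning_bids = estimates.max(axis=1)

print(f"average winning bid: {winning_bids.mean():.1f}")   # well above 100
# The winner is whoever drew the largest optimistic error, so winning bids
# systematically exceed the tract's true value: the winner's curse.
```

Because the winner is the bidder with the largest optimistic error, average winning bids exceed the true value, and the overpayment grows as the number of bidders increases.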
