Reservoir-modeling practice has developed into a complex set of numerical algorithms and recipes for modeling subsurface geology and fluid flow. Within these workflows, a number of myths have sometimes been propagated, especially in relation to (a) methods for handling net-to-gross (N/G), (b) implementation of upscaling methods, and (c) conditioning of reservoir models to well data. This paper discusses different practices in the use and upscaling of reservoir data and models, by comparing two end-member approaches:

  1. the N/G method and

  2. total-property modeling. Total-property modeling, in which all rock elements are represented explicitly, is the generally preferred method.

The N/G method involves a simplified representation of reality, which may be an acceptable approximation. Implications for upscaling and conditioning reservoir models to well data are discussed, and recommended practices are suggested.
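The practical difference between the two end-member approaches can be shown with a minimal sketch. All values, the V-shale cutoff, and the cell counts below are hypothetical and purely illustrative, not taken from the paper; the point is only that a cutoff discards the pore volume held in "non-net" rock, whereas total-property modeling retains it.

```python
# Hypothetical layered interval: shale fraction and porosity per cell.
vshale = [0.10, 0.20, 0.60, 0.15, 0.80]   # V-shale per cell (illustrative)
phi    = [0.25, 0.22, 0.05, 0.24, 0.02]   # porosity per cell (illustrative)

CUTOFF = 0.5  # arbitrary V-shale cutoff defining "net" rock (an assumption)

# N/G method: only cells passing the cutoff contribute reservoir properties.
net = [p for v, p in zip(vshale, phi) if v < CUTOFF]
ng_ratio = len(net) / len(phi)            # net-to-gross ratio
phi_net = sum(net) / len(net)             # average porosity of net rock only
pore_volume_ng = ng_ratio * phi_net       # pore volume per unit gross volume

# Total-property modeling: every rock element is represented explicitly.
phi_total = sum(phi) / len(phi)           # average porosity over ALL rock
pore_volume_total = phi_total             # pore volume per unit gross volume

print(f"N/G = {ng_ratio:.2f}, net porosity = {phi_net:.3f}")
print(f"pore volume (N/G method):       {pore_volume_ng:.3f}")
print(f"pore volume (total-property):   {pore_volume_total:.3f}")
```

In this toy case the N/G method yields a smaller pore volume than total-property modeling, because the porosity in the cells classified as non-net is silently discarded; whether that simplification is acceptable depends on the reservoir system.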


A number of weak assumptions have propagated within the oil industry and related research groups with respect to how reservoir data are rescaled and handled within the reservoir model. Three myths prevalent in reservoir modeling are that

  1. The net-to-gross (N/G) ratio is a trivial concept.

  2. Upscaling is not usually necessary.

  3. Measurements at the well are fixed data points.

While it is generally appreciated that the N/G ratio is an important concept, it is widely and falsely assumed that the treatment of N/G ratios in the reservoir model is a trivial matter. Similarly, while the upscaling of flow properties is an active research field, a common assumption in practice is that upscaling is a specialist topic that does not significantly affect practical reservoir modeling, or that other uncertainties dominate over any upscaling uncertainties. Furthermore, although upscaling methods are employed increasingly, standard recipes are too often applied without checking the validity of their assumptions.

The third myth is prevalent in common modeling techniques that focus on geostatistical modeling of the interwell volume, with the assumption that the statistical variables must merely be "tied to," or conditioned to, (hard) well-data control points. While it is generally true that interwell uncertainties are large compared with well-data uncertainties, the well data sets themselves carry significant uncertainties in interpretation and rescaling, especially for thin-bedded reservoir systems.

This paper examines these issues and suggests an improved practice for the representation and transformation of multiscale reservoir data in the reservoir model. Contrasting approaches to the handling of N/G ratios and cutoff values are the main concern, but implications for upscaling, the handling of well data, and reservoir modeling are also identified. The main goal is assumed to be reservoir modeling for flow simulation and reservoir forecasting, but the arguments are also relevant for volume and reserves estimation.
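The interaction between the upscaling and N/G myths can be sketched with standard effective-permeability averages for a layered medium (arithmetic mean for flow parallel to bedding, harmonic mean for flow across it). The bed permeabilities below are hypothetical, chosen only to illustrate that thin low-permeability beds dominate vertical flow, so discarding them with a cutoff grossly overestimates vertical effective permeability.

```python
import statistics

# Hypothetical thin-bedded interval: alternating sand and silt beds of
# equal thickness (permeabilities in mD; illustrative values only).
perm = [500.0, 0.1, 450.0, 0.1, 480.0, 0.1]

# Flow parallel to bedding: beds act in parallel -> arithmetic average.
k_h = statistics.mean(perm)

# Flow across bedding: beds act in series -> harmonic average.
k_v = statistics.harmonic_mean(perm)

# If an N/G cutoff drops the thin low-permeability beds before upscaling,
# the vertical effective permeability is wildly overestimated.
perm_net = [k for k in perm if k > 1.0]   # arbitrary illustrative cutoff
k_v_net = statistics.harmonic_mean(perm_net)

print(f"k_h = {k_h:.1f} mD")
print(f"k_v (all beds)  = {k_v:.3f} mD")
print(f"k_v (net only)  = {k_v_net:.1f} mD")
```

Here the harmonic mean over all beds is well below 1 mD, while the same average after the cutoff is several hundred mD: the upscaling recipe and the cutoff choice jointly control the simulated flow behavior, which is why neither can be treated as a trivial or purely specialist matter.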
