Core analysis results in all reservoir types have long been considered the "ground truth" for log-based estimates of petrophysical properties such as porosity and water saturation. Most core analysis laboratories have adopted the Gas Research Institute (GRI) method as the standard technique for measuring shale gas core samples. GRI results, however, have often been observed to vary between laboratories by more than accepted measurement uncertainty: extensive comparative testing has revealed differences across nominally identical measurements, such as gas-filled porosity and gas saturation. Variations in the GRI technique and in its implementation by different laboratories have produced differences in both the values and the types of data reported. This poses a significant challenge when evaluating plays and areas whose core datasets were developed using a mix of analysis vendors. This paper presents a workflow for normalizing datasets from different vendors through corrections based on reported gas-filled porosity, water saturation, and clay-bound water (CBW) volumes. The technique relies on the assumption that hydrocarbon pore volume (HCPV) is constant from vendor to vendor and can be treated as a physical constant. Partitioning of the core-analysis volumes (i.e., total versus effective porosity) is handled via the CBW and gas-filled porosity volumes, allowing one to move accurately between effective and total porosity and gas saturation values.
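The volume bookkeeping implied by this workflow can be sketched in a few lines of code. The sketch below assumes the standard petrophysical definitions (effective porosity equals total porosity minus CBW volume, and gas-filled porosity is held invariant, consistent with the constant-HCPV assumption); the function name and its exact interface are illustrative, not part of the published method.

```python
def to_effective_basis(phi_total, sw_total, v_cbw):
    """Convert total-porosity-basis porosity and water saturation to an
    effective-porosity basis, holding gas-filled pore volume constant.

    phi_total : total porosity (fraction of bulk volume)
    sw_total  : water saturation on a total-porosity basis (fraction)
    v_cbw     : clay-bound-water volume (fraction of bulk volume)
    Returns (phi_effective, sw_effective).
    """
    # Gas-filled porosity is treated as the invariant quantity (constant HCPV).
    phi_gas = phi_total * (1.0 - sw_total)
    # Effective porosity excludes the clay-bound-water volume.
    phi_eff = phi_total - v_cbw
    if phi_eff <= 0.0:
        raise ValueError("CBW volume cannot equal or exceed total porosity")
    # Gas saturation referenced to effective pore volume.
    sg_eff = phi_gas / phi_eff
    return phi_eff, 1.0 - sg_eff


# Example: 10 p.u. total porosity, 40% total-basis Sw, 2 p.u. of CBW
phi_eff, sw_eff = to_effective_basis(0.10, 0.40, 0.02)
# phi_gas = 0.06 bulk-volume fraction in both bases; only the reference
# pore volume changes, so saturations shift while HCPV does not.
```

Running the example gives an effective porosity of 0.08 and an effective-basis water saturation of 0.25, while the gas-filled porosity (0.06) is unchanged, illustrating how the constant-HCPV assumption anchors the conversion between bases.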
The GRI technique (Luffel et al., 1992) or its variants have become the common industry practice for shale gas core analysis. However, recent comparative or "round robin" studies of results reported by different vendors using the GRI method (Luffel and Guidry, 1990) or its variants have highlighted significant differences between commercial core analysis laboratories.