"Simplicity is the ultimate sophistication."
Leonardo da Vinci
Characterizing the reservoirs of a mature field involves the analysis of large data sets collated from well tests, production history and core analysis, enhanced by high-resolution mapping of seismic attributes to reservoir properties. It is imperative to capture the more subtle observations inherent in these data sets in order to understand their structure. Geostatistical methods can then be applied to quantify heterogeneity, integrate data across scales and capture the scope of uncertainty. However, between 50 and 70 per cent of the time allotted to any reservoir characterization study worth its investment should be devoted to Exploratory Data Analysis (EDA). As a prelude to spatial analysis, simulation and uncertainty quantification, EDA ensures consistent data integration, aggregation and management, underpinned by univariate, bivariate and multivariate analysis.
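The univariate and bivariate steps mentioned above can be sketched in a few lines. The snippet below is a minimal pure-Python illustration, not the paper's workflow: the porosity and permeability values are hypothetical core-plug measurements invented for demonstration, and the summary statistics and Pearson correlation stand in for the fuller EDA a real study would perform.

```python
import statistics as st

# Hypothetical core-plug measurements (illustrative values only):
# porosity as a fraction, permeability in millidarcies.
porosity = [0.10, 0.15, 0.20, 0.12, 0.18, 0.25, 0.08, 0.22]
permeability = [50, 120, 300, 80, 200, 450, 30, 350]

def summarize(name, values):
    """Univariate EDA: location and spread for one reservoir property."""
    q1, q2, q3 = st.quantiles(values, n=4)
    return {"name": name, "mean": st.mean(values), "median": q2,
            "stdev": st.stdev(values), "iqr": q3 - q1}

def pearson(x, y):
    """Bivariate EDA: Pearson correlation between two properties."""
    mx, my = st.mean(x), st.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

poro_summary = summarize("porosity", porosity)
r = pearson(porosity, permeability)
```

In practice the same summaries would be computed per facies or per zone, and the correlation step extended to a full scatter-plot matrix before any variography or kriging is attempted.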
This paper not only details some of the more common EDA steps that initiate efficient reservoir characterization projects, but also underlines the importance of an EDA mindset that is often overlooked, or even omitted, prior to the spatial analysis, kriging, simulation and uncertainty quantification steps. See Figure 1 for a comprehensive reservoir characterization project flow chart as it cycles through each step from EDA to uncertainty analysis. The EDA techniques are enumerated and illustrated graphically through a case study aimed at optimizing recovery factors. A suite of statistical tools, deployed in workflows across an enterprise business intelligence framework, preserves data integrity and manages the data efficiently; on that foundation, candidate regression models are evaluated, with stepwise algorithms and other selection techniques used to identify the reservoir properties most influential in increasing recovery factors.
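To make the stepwise idea concrete, the following is a pure-Python sketch of greedy forward selection, one common stepwise variant, under stated assumptions: the per-well feature values and recovery factors are hypothetical, and ordinary least squares is solved directly via the normal equations rather than with a statistics library. It is a sketch of the technique, not the case study's actual model.

```python
# Greedy forward selection over hypothetical well-level predictors of
# recovery factor: at each step, add the predictor that most reduces
# the residual sum of squares (RSS) of an OLS fit.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_rss(cols, y):
    """RSS of an OLS fit of y on the given columns plus an intercept."""
    n = len(y)
    X = [[1.0] * n] + cols
    k = len(X)
    A = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[i][t] * y[t] for t in range(n)) for i in range(k)]
    beta = solve(A, b)
    return sum((y[t] - sum(beta[i] * X[i][t] for i in range(k))) ** 2
               for t in range(n))

def forward_select(features, y, max_vars=2):
    """Greedily add the predictor that most reduces the RSS."""
    chosen, remaining = [], list(features)
    while remaining and len(chosen) < max_vars:
        best = min(remaining,
                   key=lambda f: ols_rss([features[g] for g in chosen + [f]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Hypothetical per-well data (names and values are illustrative only).
features = {
    "porosity":     [0.10, 0.15, 0.20, 0.12, 0.18, 0.25, 0.08, 0.22],
    "permeability": [50, 120, 300, 80, 200, 450, 30, 350],
    "water_cut":    [0.90, 0.70, 0.40, 0.80, 0.50, 0.30, 0.95, 0.35],
}
recovery_factor = [0.13, 0.182, 0.30, 0.152, 0.236, 0.40, 0.114, 0.334]

selected = forward_select(features, recovery_factor, max_vars=2)
```

On this toy data the procedure selects permeability first and porosity second; a production workflow would add stopping criteria (e.g. an F-test or adjusted R-squared) and optionally backward elimination steps.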