Abstract

The present work evaluates the hierarchical ensemble Kalman filter (HEnKF) for updating field-scale reservoir models. The method is tested on the large-scale Brugge field SPE benchmark study, for which we revisit the first cycle of the case study (Peters et al., 2009). The Brugge field is a synthetic reservoir built by TNO that has the size of a real field and was purposely designed to mimic a real reservoir with a 30-year leasing life.

The HEnKF method is an automated, generic localisation approach that computes, at each assimilation time and for each state variable, a damping factor aimed at minimising the sampling errors estimated from a group of sub-ensembles. It differs from the ensemble Kalman filter (EnKF) algorithm in the implementation of its analysis step.

The results are compared against those obtained with the traditional EnKF approach on two different ensembles. The members of the first ensemble are generated from empirical variograms, while the second ensemble is composed of all 104 original realisations provided by TNO. The results show that spurious correlations are avoided when using HEnKF, just as they are with distance-dependent localisation approaches. Our best results are obtained with HEnKF and rank second among all the results presented in the original Brugge field study (Peters et al., 2009). The main advantages of HEnKF are that it is not limited to spatially distributed variables and that it is simpler and more straightforward to use than localisation functions.

In addition, we observed that the best results were obtained using an initial ensemble with higher diversity in the types of reservoir models included. This was somewhat surprising, since the alternative was an initial ensemble built not only to take into account prior geostatistical knowledge, but also to satisfy the Gaussianity assumption that underlies the EnKF approach.

Introduction

The last four years have seen an increasing number of publications in which the ensemble Kalman filter (EnKF) was successfully applied to history match real reservoir simulation models (see e.g. Bianco et al., 2007; Evensen et al., 2007; Aanonsen et al., 2009). The EnKF is a Monte Carlo approach in which an ensemble of Ne reservoir model realisations is used to compute statistics and thereby quantify the uncertainty of the updated model parameters (such as porosity or permeability) and state variables (such as pressure or saturation). It is a sequential data assimilation technique: observed measurements (such as bottom-hole pressures or production rates) are assimilated sequentially in time to update the reservoir models.

The EnKF differs from traditional history matching procedures in the way the observed measurements are treated. In traditional history matching, all data observed throughout the life of the reservoir are used simultaneously to minimise an objective function in order to infer the porosity and/or permeability fields of the reservoir. The resulting inversion is computationally very demanding because of the very large amount of data, and the number of parameters that can be tuned is consequently limited. Since the EnKF is a sequential method, only the data observed at a given assimilation time step are used to update the reservoir simulation models at that time. Because these data are only a small fraction of all the data observed throughout the life of the reservoir, the continuous updating of reservoir simulation models by means of the EnKF is very fast compared to traditional history matching methods, and it also becomes possible to update a large number of parameters as well as parameter fields (such as porosity).
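To make the ensemble-based update concrete, the standard EnKF analysis step can be sketched as follows. This is a minimal illustration only, not the implementation used in this study or in the cited works: the observation operator H is assumed linear, the dimensions are made up, and real applications add reservoir-simulator forecasts between assimilation steps.

```python
import numpy as np

def enkf_analysis(X, d_obs, H, R, rng):
    """One EnKF analysis step with perturbed observations.

    X     : (n_state, n_ens) forecast ensemble; each column stacks model
            parameters (e.g. porosity) and state variables (e.g. pressure)
    d_obs : (n_obs,) data observed at this assimilation time
    H     : (n_obs, n_state) linear observation operator (an assumption
            made for this sketch)
    R     : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    # Ensemble mean and anomalies; the sample covariance is A A^T / (n_ens - 1)
    A = X - X.mean(axis=1, keepdims=True)
    HA = H @ A
    # Kalman gain K = C_f H^T (H C_f H^T + R)^{-1}, built from ensemble statistics
    P_dd = HA @ HA.T / (n_ens - 1) + R
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_dd)
    # Perturb the observations once per member so the updated ensemble
    # keeps a statistically consistent spread
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T
    # Update every ensemble member toward the (perturbed) data
    return X + K @ (D - H @ X)
```

Only the data at the current assimilation time enter this update, which is what makes the sequential workflow cheap relative to matching the full production history at once. The HEnKF of this paper modifies this analysis step by damping each entry of the gain according to sampling-error estimates from sub-ensembles.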
