Abstract

History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir, for example from analogue outcrop data, and update our reservoir models with observations such as production data or time-lapse seismic. The goal is often to generate multiple models that match the history and to use them to quantify uncertainty in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate them. Algorithms that have been studied include gradient methods, genetic algorithms, the Ensemble Kalman Filter, and others.

This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm and the Neighbourhood Algorithm (NA). HMC is a Markov Chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm intelligence algorithm that uses dynamics similar to those of HMC to guide the search, but incorporates acceleration and damping parameters to provide rapid convergence to multiple possible minima. The Neighbourhood Algorithm is a sampling technique that uses the properties of Voronoi cells in high dimensions to generate multiple history-matched models.

The algorithms are compared by generating multiple history-matched reservoir models and comparing the P10-P50-P90 uncertainty bounds produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example, but that some algorithms find good-fitting models quickly, whereas others find a more diverse set of models in parameter space. The effects of the different sampling of model parameter space are compared in terms of the P10-P50-P90 uncertainty bounds on forecast oil rate.

These results show that algorithms based on Hamiltonian dynamics and swarm intelligence have the potential to be effective tools for uncertainty quantification in the oil industry.

Introduction

History matching and uncertainty quantification are currently two important research topics in reservoir simulation. Automated and assisted history matching concepts have been developed over many years (see, for example, Oliver et al., 2008). In recent years, research has focused on quantifying uncertainty by generating multiple history-matched reservoir models, rather than seeking only the single best history-matched model. A practical reason for using multiple history-matched models is that a single model, even the best history-matched one, may not provide a good prediction (Tavassoli et al., 2004).

Most uncertainty quantification studies use a Bayesian approach, in which we start with prior information about a reservoir expressed as probabilities of the unknown input parameters to our model. These prior probabilities can come from a number of sources, for example analogue outcrop data, or previous experience with an analogue reservoir from a similar depositional environment. The prior probabilities are then updated using Bayes rule, which provides a statistically consistent way of combining data from multiple sources. The data used to update the prior probabilities are observations of the reservoir, for example production data or time-lapse seismic.
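The Bayesian update described above can be sketched for a discrete set of candidate reservoir models. This is a minimal illustration, not the paper's implementation: the Gaussian likelihood, the noise level sigma, and the rate data are all hypothetical choices, though a Gaussian misfit-based likelihood is a common assumption in history matching.

```python
import math

def gaussian_likelihood(observed, simulated, sigma):
    """Likelihood of the observed data given a model's simulated response,
    assuming independent Gaussian measurement errors (a common, but here
    hypothetical, choice in history matching)."""
    misfit = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    return math.exp(-misfit / (2.0 * sigma ** 2))

def bayes_update(priors, likelihoods):
    """Bayes rule over a discrete set of candidate models:
    posterior_i is proportional to prior_i * likelihood_i,
    normalised so the posteriors sum to 1."""
    unnormalised = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Hypothetical example: two candidate models with equal prior probability.
observed = [100.0, 95.0, 88.0]   # observed oil rates (made-up data)
model_a  = [101.0, 94.0, 89.0]   # simulated rates, candidate model A
model_b  = [110.0, 105.0, 80.0]  # simulated rates, candidate model B
likes = [gaussian_likelihood(observed, m, sigma=5.0)
         for m in (model_a, model_b)]
posterior = bayes_update([0.5, 0.5], likes)
# Model A, whose simulated response is closer to the observations,
# ends up with nearly all of the posterior probability.
```

In practice the model space is continuous and high-dimensional, so the posterior cannot be enumerated this way; that is precisely why sampling algorithms such as HMC, PSO and the Neighbourhood Algorithm are needed.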
