Evaluation of uncertainties (1–19 of 19 results)
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200540-MS
Abstract
Digitalization and Artificial Intelligence have impacted the oil and gas industry. Drilling of wells, predictive maintenance and digital fields are examples of the use of these technologies. In hydrocarbon production forecasting, numerical reservoir models and "digital twins" of reservoirs have been used for decades. However, increasing computing power and Artificial Intelligence recently enabled oil and gas companies to generate "digital siblings" of reservoirs (model ensembles) covering the uncertainty range in static data (e.g. petrophysics, geological structure), dynamic data (e.g. oil or gas properties) and economics (Capital Expenditures, Operating Expenditures). Machine Learning and Artificial Intelligence are applied to condition the model ensembles to measured data and improve hydrocarbon production forecasting under uncertainty. The model ensembles can be used for quantitative decision making under uncertainty. This allows companies to shorten the time for field (re-)development planning and to develop into learning organizations for decision making. These developments require companies to change the way of working in hydrocarbon production forecasting and decision analysis. Additional skills need to be developed in companies to embrace digitalization. Data science - which is considered a key skill in digitalization - has not been identified as crucial in skills development of oil and gas companies in the past. However, for data-driven decision making, advanced data analytics skills and data science skills are a prerequisite. To overcome this skill gap, staff need to be trained and graduates with data science and profound physical and chemical skills need to be hired. Furthermore, skills development has to address the challenge of incorrect use of Machine Learning technologies and the risks of Artificial Intelligence leading to erroneous optimizations. In particular, the interpretability of AI needs to be covered in skills development.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200568-MS
Abstract
Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investments which necessitate a robust analysis of economic upsides and downsides. Key uncertainties in designing a polymer flood are often reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy and quantifying the economic risks of polymer flooding in a heterogeneous clastic reservoir. We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semi-synthetic reservoir that contains a wide range of geological and interpretational uncertainties. An ensemble of reservoir models is available for the Watt Field, and history matching was carried out for the entire ensemble using both techniques. Next, sensitivity studies were carried out to identify first-order parameters that impact the Net Present Value (NPV). These parameters were then deployed in an experimental design study using a Latin Hypercube to generate training runs from which a proxy model was created. The proxy model was constructed using polynomial regression and validated using further full-physics simulations. A particle swarm optimisation algorithm was then used to optimize the NPV for the polymer flood. The same approach was used to optimise a standard water flood for comparison. Optimisations of the polymer flood and water flood were performed for the history matched model ensemble and the original ensemble. The sensitivity studies showed that polymer concentration, location of polymer injection wells and time to commence polymer injection are key to optimizing the polymer flood. The optimal strategy to deploy the polymer flood and maximize NPV varies based on the history matching technique. The average NPV is predicted to be higher in the stochastic history matching compared to the adjoint technique. The variance in NPV is also higher for the stochastic history matching technique. This is due to the ability of the stochastic algorithm to explore the parameter space more broadly, which created situations where the oil in place is shifted upwards, resulting in higher NPV. Optimizing a history matched ensemble leads to a narrower variance in absolute NPV compared to optimizing the original ensemble. This is because the uncertainties associated with polymer degradation are not captured during history matching. The result of a cross-comparison, where an optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, predicted a decline in NPV but surprisingly still shows that the overall NPV is higher than for an optimized water flood. This indicates that a polymer flood could be beneficial compared to a water flood, even if geological uncertainties are not captured properly.
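The workflow described in this abstract (a Latin Hypercube design, a polynomial-regression proxy of NPV, and particle swarm optimisation of the proxy) can be illustrated with a minimal sketch. The two design variables, the toy NPV response and all numerical settings below are hypothetical stand-ins chosen for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_npv(conc, start_year):
    """Hypothetical NPV response (MM$) in polymer concentration and injection start time."""
    return 100 - 40 * (conc - 0.6) ** 2 - 5 * (start_year - 3.0) ** 2 + rng.normal(0, 0.5)

# 1) Latin Hypercube design over concentration in [0, 1.5] and start time in [0, 10] years
n = 30
u = (rng.permutation(n)[:, None] + rng.random((n, 1))) / n   # stratified samples, dim 1
v = (rng.permutation(n)[:, None] + rng.random((n, 1))) / n   # stratified samples, dim 2
X = np.hstack([u * 1.5, v * 10.0])                           # scale to variable ranges
y = np.array([toy_npv(c, t) for c, t in X])

# 2) Quadratic polynomial proxy fitted by least squares
def features(X):
    c, t = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(c), c, t, c * t, c ** 2, t ** 2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
proxy = lambda X: features(np.atleast_2d(X)) @ beta

# 3) Small particle swarm maximising the proxy NPV
lo, hi = np.array([0.0, 0.0]), np.array([1.5, 10.0])
pos = rng.uniform(lo, hi, size=(20, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), proxy(pos)
for _ in range(100):
    gbest = pbest[np.argmax(pbest_val)]
    vel = 0.7 * vel + 1.5 * rng.random((20, 2)) * (pbest - pos) \
                    + 1.5 * rng.random((20, 2)) * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = proxy(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]

print("proxy-optimal concentration and start time:", pbest[np.argmax(pbest_val)])
```

In the study the training runs come from full-physics simulations and the proxy is re-validated against further simulations; the sketch only shows how the three building blocks fit together.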
Proceedings Papers
Helena Nandi Formentin, Ian Vernon, Guilherme Daniel Avansi, Camila Caiado, Célio Maschio, Michael Goldstein, Denis José Schiozer
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 81st EAGE Conference and Exhibition, June 3–6, 2019
Paper Number: SPE-195478-MS
Abstract
Reservoir simulation models incorporate physical laws and reservoir characteristics. They represent our understanding of sub-surface structures based on the available information. Emulators are statistical representations of simulation models, offering fast evaluations of a sufficiently large number of reservoir scenarios to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions incorporating realistic levels of uncertainty, required for full asset management. We describe a systematic approach for uncertainty quantification that combines reservoir simulation and emulation techniques within a coherent Bayesian framework. Our systematic procedure is an alternative and more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios to facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators can be used to accelerate the steps requiring large numbers of evaluations of the input space in order to be valid from a statistical perspective. Via implausibility measures, we compare emulated outputs with historical data incorporating major process uncertainties. Then, we iteratively identify regions of input parameter space unlikely to provide acceptable matches, performing more runs and reconstructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure. The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid and fluid properties. We selected phases of evaluation considering specific events during the reservoir management, improving the efficiency of simulation resource use. We identified and addressed data patterns untracked in previous studies: simulator targets, e.g. liquid production, and water breakthrough lead to discontinuities in relationships between outputs and inputs. With 15 waves and 115 valid emulators, we ruled out regions of the searching space identified as implausible, and what remained was only a small proportion of the initial space judged as non-implausible (~10⁻¹¹%). The systematic procedure showed that uncertainty reduction using iterative Bayesian History Matching has the potential to be used in a large class of reservoir studies with a high number of uncertain parameters. We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) the demonstration of internal discrepancy as a source of error in the reservoir model.
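As a point of reference for the implausibility-based space reduction described above, a commonly used form of the implausibility measure in Bayesian history matching (written here from standard practice, with generic symbols, not taken from the paper itself) is:

```latex
I^{2}(x) \;=\; \frac{\bigl(z - \mathrm{E}[f(x)]\bigr)^{2}}
                    {\mathrm{Var}[f(x)] \;+\; V_{\mathrm{md}} \;+\; V_{\mathrm{obs}}}
```

Here z is the observed datum, E[f(x)] and Var[f(x)] are the emulator expectation and variance at input x, V_md is the model-discrepancy variance and V_obs the observation-error variance. Inputs with I(x) above a cutoff (often 3, by the three-sigma rule) are discarded at each wave, and the emulators are rebuilt on the surviving, non-implausible region.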
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 81st EAGE Conference and Exhibition, June 3–6, 2019
Paper Number: SPE-195465-MS
Abstract
Probabilistic and deterministic methods for reserves and resources evaluation are commonly used in isolation and often considered mutually exclusive. Subsurface uncertainties are critical factors impacting projects and reserves/resources, especially in projects/areas where large sums of capital investment are required. Probabilistic methods allow a rigorous use of information on the ranges of uncertainty in key reservoir parameters like porosity, water saturation, permeability and aquifer size for reserves estimation. A key output of probabilistic methods is the confidence levels associated with the reserves. Deterministic methods cannot provide confidence levels associated with reserves and resources assessments, which is why their successful application often relies on the expert knowledge of the evaluator and the strict use of reserves or resources definitions. Technological advances in computing in the last decades have played a key role in advancing computationally intensive probabilistic methodologies, including artificial intelligence. These advances have allowed integrated teams to perform studies using sophisticated workflows in feasible timeframes. In this paper a detailed analysis of the different probabilistic methods is presented, with a review of the level of regulatory compliance that each method achieves with the SEC [1] and other industry guidelines like SPE PRMS [2]. A group of workflows using a combination of these methods is also analysed.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 80th EAGE Conference and Exhibition, June 11–14, 2018
Paper Number: SPE-190778-MS
Abstract
In order to better understand reservoir behavior, reservoir engineers make sure that the model fits the data appropriately. The question of how well a model fits the data is described by a match quality function carrying assumptions about the data. From a statistical perspective, improper assumptions about the underlying model may lead to misleading beliefs about the future response of reservoir models. For instance, a simple linear regression model may have a fair fit to the available data, yet fail to predict well. On the contrary, a model may perfectly match the data but make poor predictions (i.e. overfitting). In both cases, the regression model mean response will be far from the true response of the reservoir variables and will cause poor decision making. Therefore, a suitable model has to provide a balance between the goodness of the fitted model and the model complexity. In the model selection problem, realistic assumptions concerning the details of model specification are the key elements in learning from data. With regard to the conventional history match scheme, the data fitting is usually performed by a linear least-squares regression model (LSQ), which makes simple, yet often unrealistic, assumptions about the discrepancy between the model output and the measured values. The linear LSQ model ignores any likely correlation structure in the discrepancy, changes in mean and pattern similarities, resulting in poor prediction. In this work, we interpret the model selection problem in data-driven settings that enable us to first interpolate the error in the history period, and second propagate it towards unseen data (i.e. error generalization). The error models constructed by inferring parameters of selected models can predict the response variable (e.g. oil rate) at any point in input space (e.g. time) with corresponding generalization uncertainty. These models are inferred through a training/validation data set and further compared in terms of average generalization error on a test set. Our results demonstrate how incorporating different correlation structures of errors improves the predictive performance of the model for the deterministic aspect of reservoir modelling. In addition, our findings based on different inference of the selected error models highlight how severely improper error models can fail in prediction.
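The contrast drawn above between the independent-error assumption of plain least squares and an error model that carries a correlation structure can be sketched in a few lines. The synthetic residual series and the AR(1) error model below are illustrative assumptions, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "model-data discrepancy" series with strong temporal correlation (AR(1) truth)
n = 200
e = np.zeros(n)
for k in range(1, n):
    e[k] = 0.9 * e[k - 1] + rng.normal(scale=0.3)
train, test = e[:150], e[150:]

# Error model A: independent Gaussian errors (the implicit assumption of plain LSQ matching)
#                -> the best predictor of the next error is simply the historical mean
pred_iid = np.full(test.size, train.mean())

# Error model B: AR(1) correlation structure, r_t = phi * r_{t-1} + eps, fitted on the history
phi = np.sum(train[1:] * train[:-1]) / np.sum(train[:-1] ** 2)
prev = np.concatenate([[train[-1]], test[:-1]])   # last known error before each test point
pred_ar1 = phi * prev

rmse = lambda p: np.sqrt(np.mean((test - p) ** 2))
print(f"fitted phi = {phi:.2f}")
print(f"one-step generalization RMSE, independent-error model: {rmse(pred_iid):.3f}")
print(f"one-step generalization RMSE, AR(1) error model:       {rmse(pred_ar1):.3f}")
```

When the discrepancy really is correlated in time, the correlated error model generalizes to the unseen period with a markedly lower error, which is the behaviour the abstract argues for.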
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 78th EAGE Conference and Exhibition, May 30–June 2, 2016
Paper Number: SPE-180189-MS
Abstract
Calibrating complex subsurface geological models against dynamic well observations yields a challenging inverse problem, known as history matching in the oil and gas literature. The highly nonlinear nature of the interactions and relationships between reservoir model parameters and well responses demands automated, robust and geologically consistent inversion techniques. The quality of the ensemble of calibrated, history-matched models determines the reliability of production uncertainty assessment. Reliable production forecasting and uncertainty assessment are essential steps toward reservoir management and field development. The Bayesian framework is a widely accepted approach to incorporate dynamic production data into the prior probability distribution of reservoir models and obtain the posterior distribution of reservoir parameters. Uncertainty assessment is performed by sampling the posterior probability distribution, which is a computationally challenging task. The Markov Chain Monte Carlo (MCMC) algorithm has shown successful application in reservoir model calibration and uncertainty quantification in recent years. MCMC can efficiently sample the high-dimensional and complex posterior probability distribution of reservoir parameters and generate history matched reservoir models that consequently can be used for production forecasting uncertainty assessment. The MCMC method is a gradient-free approach, which makes it favorable when gradient information is not available through reservoir simulation. In the MCMC method, the new sample used to march to the next iteration is normally generated independently of the previous sample, and the proposal distribution is rather random. To improve the sampling procedure and make the MCMC process more efficient, we propose an approach based on locally varying mean (LVM) Kriging to base the new sample generation on the previous iteration's sample. In this method, the previous sample is used as the varying mean map in the geostatistical simulation approach to generate the new proposal for the next iteration. Using LVM Kriging to relate the new sample to the previous iteration's sample makes the chain of samples in MCMC more correlated and geologically consistent. This new proposal distribution also makes the sampling procedure more efficient and avoids random and arbitrary movements in the parameter space. We applied MCMC with LVM Kriging to a suite of 2D and 3D reservoir models and obtained the calibrated models. We observed that the application of the new proposal distribution based on LVM Kriging along with MCMC improved the quality of the samples and resulted in promising uncertainty quantification. We also observed meaningful improvement in calibrated reservoir model quality and uncertainty intervals when utilizing LVM compared to a random proposal or transition distribution in MCMC. MCMC with LVM Kriging as the proposal distribution results in improved uncertainty assessment by enhancing the quality of the samples generated from the posterior probability distribution of reservoir model parameters. A traditional random or independent proposal distribution does not represent the dependency of the samples through the MCMC chain and iterations, while this challenge is addressed by combining MCMC with LVM.
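For orientation, the loop below is a generic Metropolis-Hastings history-matching sketch in which each proposal is generated around the previous sample. The simple Gaussian random-walk proposal stands in for the paper's LVM Kriging proposal, and the two-parameter quadratic misfit is an assumed toy model, not the authors' reservoir model.

```python
import numpy as np

rng = np.random.default_rng(42)

def misfit(theta):
    """Hypothetical data misfit for a 2-parameter reservoir model (lower is better)."""
    return np.sum((theta - np.array([0.3, 0.7])) ** 2) / 0.05

def log_posterior(theta):
    # Gaussian likelihood built from the misfit, plus a flat prior on [0, 1]^2
    if np.any(theta < 0) or np.any(theta > 1):
        return -np.inf
    return -0.5 * misfit(theta)

# Metropolis-Hastings: each proposal is centred on the previous sample, so successive
# samples stay correlated (the role the LVM Kriging proposal plays in the paper).
n_iter, step = 5000, 0.05
chain = np.empty((n_iter, 2))
theta = rng.random(2)
lp = log_posterior(theta)
accepted = 0
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)      # symmetric random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:       # accept/reject step
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta

burn = chain[1000:]
print(f"acceptance rate {accepted / n_iter:.2f}, posterior mean {burn.mean(axis=0)}")
```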
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the EAGE Annual Conference & Exhibition incorporating SPE Europec, June 10–13, 2013
Paper Number: SPE-164817-MS
Abstract
Production forecasts for petroleum reservoirs are essentially uncertain due to the lack of data. The unknown parameters are calibrated so that the simulated profile can match the observed data. This process is an inverse problem called history-matching, which is ill-posed and may have non-unique solutions. This paper addresses two issues: 1) How can we calibrate physical properties in history-matching? 2) How can we predict uncertain reservoir performance based on history-matching? The aim of our study is to quantify uncertainty of reservoir connectivity in a turbidite sandstone reservoir. The target reservoir is in an onshore oil field whose depositional environment is a submarine fan of turbidite deposits. For the calibration we parameterise the reservoir properties and adopt a stochastic sampling method called Particle Swarm Optimisation (PSO), which is one of the swarm intelligence algorithms. Then a Bayesian framework along with Markov Chain Monte Carlo (MCMC) and Neighbourhood Approximation in parameter space is used to calculate the posterior probability. The MCMC is used to overcome the numerical difficulties of calculating the normalisation constant in Bayesian inference. A Bayesian framework and PSO have been applied to the evaluation of a CO2 injection test in a tight oil reservoir where the wells have been stimulated by hydraulic fracturing. The observed data used for history-matching include the bottom-hole flowing pressure at the injector well and the gas composition at the wellhead of the producer wells. The in-place volumes and connectivity between the wells have been calibrated in a simple model using the effective algorithm of PSO. The calibrated parameters include permeabilities and porosities in the fracture cells, the length of the injector hydraulic fracture, the net-to-gross ratios and the horizontal and vertical permeabilities around one of the producer wells. We showed the best-fit model for the gas breakthrough and the P10-P90 envelopes in the reservoir performance forecast assuming an ongoing injection after the actual pilot test. The uncertainty envelopes in the CO2 mole fraction in the produced gas were estimated to see a gas breakthrough at each of the producer wells. The probability given the production history is calculated from the prior belief and the misfit between the observed data and the simulated profiles, because the likelihood function can be calculated from the misfit. Our results contribute to the evaluation of the pilot test for a continuous CO2 injection in the tight oil reservoir. The simplification in parameterising the very heterogeneous reservoir was the key to generating multiple history-matched models, because the amount of computation is otherwise prohibitive. It is much quicker to adjust large-scale heterogeneity in a simple model than in a detailed model. The simple model calibration with PSO and the forecast with Bayesian inference have been successfully applied to real field data.
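The statement that "the likelihood function can be calculated from the misfit" corresponds to the usual Gaussian-error assumption. Written out with generic symbols chosen here for illustration (m for the model parameters, d for the data, sigma_i for the assumed data errors), it reads:

```latex
M(\mathbf{m}) \;=\; \sum_{i}
  \frac{\bigl(d_i^{\mathrm{obs}} - d_i^{\mathrm{sim}}(\mathbf{m})\bigr)^{2}}{\sigma_i^{2}},
\qquad
p(\mathbf{m}\mid \mathbf{d}^{\mathrm{obs}}) \;\propto\;
  p(\mathbf{m})\,\exp\!\left(-\tfrac{1}{2}\,M(\mathbf{m})\right)
```

The MCMC and Neighbourhood Approximation steps in the abstract then sample or approximate this posterior without ever computing its normalisation constant.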
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE EUROPEC/EAGE Annual Conference and Exhibition, June 14–17, 2010
Paper Number: SPE-130253-MS
Abstract
History matching and uncertainty quantification are two important aspects of modern reservoir engineering studies. Finding multiple history matched models for uncertainty quantification with fast and efficient optimization algorithms is the focus of research in assisted history matching methods. Recently a new approach for history matching has been proposed based on the differential evolution optimization algorithm. Differential evolution is a very powerful optimization method with a simple structure and few tuning parameters, which makes it easy to use in automatic history matching frameworks. In this paper we look at three new search strategies as alternatives to the method proposed in the previous publication for obtaining multiple history-matched reservoir models. These strategies of differential evolution differ in the way that new models are generated during the automatic history matching process. The comparative study presents the differences between the performance of the new search schemes for a simple reservoir simulation case in the Gulf of Mexico. We compare the best history matching results and the sensitivity of the algorithms to starting conditions. The tradeoff between speed of convergence to good fitting regions and coverage of the search space is also demonstrated for different variants of differential evolution. We show that some variants of differential evolution exhibit global searching characteristics, while others quickly obtain good results, saving time and computational resources in reservoir engineering studies. The final part of this paper focuses on the uncertainty of production forecasts, discussing the prediction capability of the applied differential evolution algorithms.
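A minimal differential-evolution loop of the DE/rand/1/bin kind compared in work like this is sketched below. The two-parameter toy misfit and the control settings (F = 0.8, CR = 0.9) are illustrative choices, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

def misfit(theta):
    """Hypothetical history-match misfit for two model parameters (lower is better)."""
    return (theta[0] - 0.25) ** 2 + 10 * (theta[1] - 0.6) ** 2

# DE/rand/1/bin on the unit square
np_pop, dims, F, CR, n_gen = 20, 2, 0.8, 0.9, 100
pop = rng.random((np_pop, dims))
cost = np.array([misfit(p) for p in pop])

for _ in range(n_gen):
    for i in range(np_pop):
        a, b, c = pop[rng.choice([j for j in range(np_pop) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)            # mutation
        cross = rng.random(dims) < CR
        cross[rng.integers(dims)] = True                       # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])                 # binomial crossover
        if (tc := misfit(trial)) < cost[i]:                     # greedy selection
            pop[i], cost[i] = trial, tc

best = pop[np.argmin(cost)]
print(f"best model parameters {best}, misfit {cost.min():.2e}")
```

The variants discussed in the abstract mainly change the mutation rule (which vectors a, b, c are combined and around which base vector), which is exactly the line marked "mutation" above.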
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE EUROPEC/EAGE Annual Conference and Exhibition, June 14–17, 2010
Paper Number: SPE-131241-MS
Abstract
The new SEC rules (Jan 2009) allow the use of probabilistic methods for reserves estimation. The main advantage of probabilistic modelling methods is that they provide an understanding of the confidence levels associated with each reserves/resources category. Reserves evaluators need to ensure regulatory compliance of the output of these probabilistic methods with the reserves definitions adopted by the SEC or other regulatory bodies. A Cumulative Distribution Function of hydrocarbon recovery is obtained as part of a probabilistic modelling workflow, from which estimates of P90, P50 and P10 result. However, these estimates are not necessarily SEC compliant. We may therefore require an additional analysis of the parameters that have been used for each model. The aim of the paper is to provide technical guidance on methodologies for obtaining P90, P50 and P10 reserves and on obtaining compliant reserves estimates when these are available. This approach should help to standardize industry external reserves reporting. To understand and quantify the impact of major uncertainties in a reservoir, different methodologies are used in industry with the objective of achieving a fully integrated probabilistic reserves analysis. An examination of different methodologies depending on the stage of appraisal and development of a field is presented. The output of this analysis will be a cumulative distribution of hydrocarbons from which P90, P50, mean and P10 reserves are selected. In summary, the aim of this paper is to provide technical guidance on how to estimate reserves for external compliant reporting if probabilistic methods are used.
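As an illustration of reading P90, P50 and P10 from a cumulative distribution of recovery, the sketch below uses the petroleum convention that P90 is the value with a 90% probability of being exceeded (i.e. the 10th percentile of the distribution). The lognormal volumetric inputs are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical volumetric Monte Carlo: recovery (MMstb) = OOIP * recovery factor
ooip = rng.lognormal(mean=np.log(400), sigma=0.30, size=n)    # original oil in place, MMstb
rf = rng.normal(loc=0.35, scale=0.05, size=n).clip(0.1, 0.6)  # recovery factor, fraction
recovery = ooip * rf

# Petroleum convention: P90 = 90% probability of exceedance = 10th percentile of the CDF
p90, p50, p10 = np.percentile(recovery, [10, 50, 90])
print(f"P90 = {p90:.0f}, P50 = {p50:.0f}, mean = {recovery.mean():.0f}, P10 = {p10:.0f} MMstb")
```

The percentile arithmetic only delivers the confidence levels; the abstract's point is that a further review of the input parameters is needed before such percentiles can be reported as compliant reserves.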
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE EUROPEC/EAGE Annual Conference and Exhibition, June 14–17, 2010
Paper Number: SPE-130596-MS
Abstract
PDO is a major operator in the Middle East with a long production history for many of its oil fields. Due to increased gas supply requirements, it is considering options for blowing down the gas caps of some selected oil fields to cater for the gas demand while achieving sustainable oil production. The key challenge for these gas blowdowns is to maximize the gas production plateau while simultaneously minimising associated oil recovery losses. This case study illustrates the gas blowdown optimisation of a saturated clastic oil reservoir. During the blowdown study, optimum gas production rates from each of the Upper and Middle Gharif reservoirs are targeted to meet a given gas production plateau rate. A fit-for-purpose assessment of the impact of subsurface parameter uncertainties on the proposed development demonstrated the robustness of the gas production plateau period. This analysis resulted in a gas production plateau period ranging from 4 years (low case) to 7 years (high case), with an expectation case of 6 years. Multiple subsurface and surface development scenarios have been evaluated, including lowering the separator inlet pressure rating. The oil and gas production profiles thus generated have been evaluated for robust economics and used for final concept selection and field implementation. Furthermore, to minimise oil losses, various optimisation measures have been identified during the pre- and post-blowdown phases in terms of re-perforations, gas lift implementation and blowdown-oriented well and reservoir management practices. This case study proposes an optimised gas blowdown field development that maximizes the gas plateau period while minimizing associated oil loss. The study resulted in selecting the appropriate surface development concept and operating specifications, and provided an optimised field management plan during the blowdown phase. The methodology adopted in this case study should find wider applicability in the industry.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the EUROPEC/EAGE Conference and Exhibition, June 8–11, 2009
Paper Number: SPE-121193-MS
Abstract
This paper introduces a new stochastic approach for automatic history matching based on a continuous ant colony optimization algorithm. Ant colony optimization (ACO) is a multi-agent optimization algorithm inspired by the behaviour of real ants. ACO is able to solve difficult optimization problems in both discrete and continuous variables. In the ACO algorithm, each artificial ant in the colony searches for good models in different regions of parameter space and shares information about the quality of the models with other agents. This gradually guides the colony towards models that match the desired behaviour - in our case the production history of the reservoir. The use of ACO for history-matching has been illustrated on a reservoir simulation case for the Gulf of Mexico, which showed that ant colony optimization can be used to generate multiple history-matched reservoir models.
Introduction
The development of efficient history matching and uncertainty quantification techniques for use in prediction of reservoir performance is an important and challenging topic in petroleum engineering. History matching is a very complex, non-linear and ill-posed inverse problem in which we aim to calibrate a reservoir model to reproduce historical observation data such as production rates or formation pressure in a reservoir. Like most inverse problems, history matching has non-unique solutions, which means that different combinations of parameters can produce good matches of the observed data. It is necessary to obtain multiple good history matched models in order to realistically quantify uncertainty in reservoir simulation models [1]. Obtaining multiple history matched models requires automated history matching methods which can deal with realistic full field models containing lots of production data and potentially a large number of unknown parameters that should be optimized to get a good quality history match. There are numerous examples of automated history matching techniques in the literature. One of the first attempts to present an automatic framework for history matching was by Chen in 1973 [2], in which history matching was formulated as an optimal control problem. Today, various broad categories of history matching algorithms exist, namely gradient methods [3, 4], stochastic samplers, and particle filter methods such as the Ensemble Kalman Filter [5–7]. Stochastic methods are easy to implement and they have been very popular in recent years, with a number of stochastic methods successfully applied to history matching. Examples of recent applications of stochastic samplers include: the Neighborhood algorithm [8], genetic algorithms [1, 9], simulated annealing [10], scatter search [11], tabu search [12], Hamiltonian Monte Carlo (HMC) [13], Particle Swarm Optimization (PSO) [13, 14], Markov chain Monte Carlo [15], Simultaneous Perturbation Stochastic Approximation (SPSA) [16] and chaotic optimization [17].
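A compact continuous ant-colony loop in the general spirit of the approach described above (a solution archive with rank-based weights and Gaussian sampling around archive members, following the common recipe for ACO in continuous domains) is sketched below. The two-parameter toy misfit and all settings are hypothetical, and this is not the authors' specific algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

def misfit(x):
    """Hypothetical history-match misfit in two parameters (lower is better)."""
    return (x[0] - 0.4) ** 2 + 5 * (x[1] - 0.8) ** 2

k, n_ants, dims, q, xi, n_iter = 10, 8, 2, 0.2, 0.85, 60

# Solution archive sorted by quality; rank-based weights favour the better solutions
archive = rng.random((k, dims))
scores = np.array([misfit(s) for s in archive])
w = np.exp(-np.arange(k) ** 2 / (2 * (q * k) ** 2))
w /= w.sum()

for _ in range(n_iter):
    order = np.argsort(scores)
    archive, scores = archive[order], scores[order]
    new = np.empty((n_ants, dims))
    for a in range(n_ants):
        j = rng.choice(k, p=w)                    # each ant picks a guiding archive solution
        for d in range(dims):
            sigma = xi * np.mean(np.abs(archive[:, d] - archive[j, d]))
            new[a, d] = np.clip(rng.normal(archive[j, d], sigma + 1e-12), 0.0, 1.0)
    new_scores = np.array([misfit(s) for s in new])
    allx = np.vstack([archive, new])              # keep the best k of archive + new ants
    alls = np.concatenate([scores, new_scores])
    keep = np.argsort(alls)[:k]
    archive, scores = allx[keep], alls[keep]

print(f"best parameters {archive[0]}, misfit {scores[0]:.2e}")
```

The diversity of the surviving archive is what makes such samplers attractive for generating multiple history-matched models rather than a single best fit.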
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the EUROPEC/EAGE Conference and Exhibition, June 8–11, 2009
Paper Number: SPE-121993-MS
Abstract
Decision making in the face of uncertainty is a problem encountered at every strategic level within the exploration and production value chain. This problem is evident in new field development projects when there is limited and uncertain geologic and engineering data. Key uncertainties encountered in reservoir engineering models include drive mechanism, permeability, aquifer support, fluid properties, reservoir extent and connectivity, end point saturations and reservoir structure. Evaluating uncertainty using conventional methods, where model parameters are varied individually, makes it impossible to make an objective business decision without underestimating the effects of uncertainty. This work proposes a systematic approach to evaluate uncertainty in a field development decision process. Initially parameters are varied individually to rank the key uncertainties. Subsequently the experimental design approach is used and a response surface is developed to estimate the impact of uncertainty in the parameters with the largest influence on project economics. In order to integrate the uncertainty in availability of surface facilities, project economics and sub-surface properties, a decision tree is designed to incorporate risk into a logical and consistent decision strategy. Information analysis is also performed to estimate the value of acquiring additional information. The proposed decision tree is applied to development planning of a field comprising marginal stacked reservoirs. Numerical simulations are performed to optimize the field development and generate a range of oil recovery predictions. The surface facilities options, project economics and predicted recoveries were then integrated into the decision tree. The value of the project was positive for most outcomes, with negative value occurring only for the lowest oil recovery combined with delayed access to surface facilities. The economic value of obtaining additional information was much lower than its cost.
Introduction
A reservoir's commercial life begins with exploration that leads to discovery, and is followed by characterization of the reservoir. The major challenge at this stage of reservoir development is the availability of limited data and huge uncertainty; thus reservoir uncertainty evaluation is very important to achieving a good understanding of reservoir management risks. The use of a practical method for estimating uncertainty without compromising accuracy is therefore clearly needed. Various systematic approaches have emerged from the application of experimental design to account for uncertainty associated with reservoir parameters and their effects on technical and financial outcomes. For instance, Corre et al. (2001) used experimental design to integrate data from diverse disciplines and sources in their effort to quantify uncertainty. Friedmann et al. (2001) used the results from experimental design to generate type curves with neural networks, which were used to rapidly predict reservoir performance where field data is very limited. Chewaroungroaj et al. (2000) demonstrated the use of results based on experimental design to fit a response surface and then predict the oil recovery from the derived function. In this paper a systematic approach is used to evaluate uncertainty for a seemingly marginal discovery. This approach is demonstrated on this stacked multi-reservoir discovery, referred to as Field X, in Nigeria.
The development planning was carried out by first building a range of static and dynamic models (high, mid and low cases) for the subsurface development. Different development schemes were tested on each model and the scheme that gave optimum recovery was taken forward for further uncertainty analysis.
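The decision-tree and value-of-information logic described in this entry can be reduced to a short expected-monetary-value calculation. The probabilities and NPVs below are invented for illustration and are not the Field X numbers.

```python
# Hypothetical development decision with uncertain recovery (high/mid/low) and
# uncertain facilities timing (early/late). Values are NPVs in $MM; probabilities
# are illustrative, not from the paper.
outcomes = {
    ("high recovery", "early facilities"): (0.3 * 0.6, 250.0),
    ("high recovery", "late facilities"):  (0.3 * 0.4, 120.0),
    ("mid recovery",  "early facilities"): (0.5 * 0.6, 100.0),
    ("mid recovery",  "late facilities"):  (0.5 * 0.4,  40.0),
    ("low recovery",  "early facilities"): (0.2 * 0.6,  10.0),
    ("low recovery",  "late facilities"):  (0.2 * 0.4, -30.0),
}

emv_develop = sum(p * v for p, v in outcomes.values())
emv_walk_away = 0.0
print(f"EMV(develop) = {emv_develop:.1f} $MM -> decision: "
      f"{'develop' if emv_develop > emv_walk_away else 'do not develop'}")

# Value of perfect information on recovery: resolve recovery first, then decide per branch
recovery_probs = [("high recovery", 0.3), ("mid recovery", 0.5), ("low recovery", 0.2)]
branch_emv = {}
for rec, p_rec in recovery_probs:
    ev = sum(pv[0] / p_rec * pv[1] for key, pv in outcomes.items() if key[0] == rec)
    branch_emv[rec] = max(ev, emv_walk_away)      # decide per resolved branch
evpi = sum(p * branch_emv[r] for r, p in recovery_probs) - max(emv_develop, emv_walk_away)
print(f"Value of perfect information on recovery = {evpi:.1f} $MM")
```

With these toy numbers the value of perfect information comes out small relative to the project EMV, mirroring the abstract's finding that the cost of acquiring additional information exceeded its economic value.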
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the EUROPEC/EAGE Conference and Exhibition, June 11–14, 2007
Paper Number: SPE-107091-MS
Abstract
The task of selecting and planning infill wells is critical to the ultimate recovery of hydrocarbons from a field, and in a mature basin we have progressed down the long tail of targets to the numerous but small opportunities. There is considerable value left in such targets, even with current drilling technology, but the targets require a better understanding of risk and a higher level of detail. We also face the difficulty that this has to occur at a time when the industry is constrained by people resources and rig availability. The Andrew and Harding assets have been leading members of a field trial of a system to test large numbers of alternative depletion plans to try to find the optimum for economic impact and recovery. The assets have also been early adopters of the recent advances in computer-assisted workflows, using tools such as BP's TDRM [1] in history matching mode to generate alternative reservoir descriptions that satisfy observations and provide some measure of the range in possible outcomes. This paper presents the results of combining infill well planning and multiple reservoir descriptions in a computer-assisted workflow of Top Down Depletion Planning (TDDP) [2]. The results have enabled smaller targets to be sanctioned, as there has been a better understanding of the risk. Furthermore, instead of appraising three targets in five months of effort, it has enabled the appraisal of seven targets to the same level of detail in only two months of effort, a factor of 4 increase in efficiency for the subsurface team. The case studies have also forced advances in the data analysis, to understand how the infill wells and the alternative reservoir models interact. We can choose to seek the optimum of the things that we can control, such as the depletion plan, based on the average performance across the multiple reservoir models that we cannot control, and we can use surveillance to try to distinguish between models if we can identify the surveillance prize. The first case study is an introduction to the data analysis of a simple case, looking at the determination of the optimum in a single reservoir model. The second case examines Andrew in more detail, which optimizes on average performance over a range of models constrained by current surveillance. The second study does show the start of a surveillance prize, by showing that some of the different reservoir descriptions actually support consideration of a different depletion plan. The value-of-surveillance theme is explored in the third case study, where we return to Harding and examine the infill wells when the problem is phrased as the phasing, location and operating conditions of two wells, based on 9 alternative reservoir models.
Background
A reservoir management strategy is to assure performance by a combination of a surveillance plan to minimize uncertainty and a depletion plan to mitigate the uncertainty. This paper addresses two fields in the UK North Sea that are at the leading edge of computer-assisted workflows to develop surveillance and depletion plans. Both Andrew and Harding have benefited from the use of time-lapse seismic in the history match to improve the quality of the reservoir models [3], and are mature fields where the remaining targets are small and understanding the probability of success is key to sanctioning new wells.
The Andrew case study illustrates the way in which the work to generate a depletion plan can benefit from the computer-assisted workflows, with Harding concentrating on the value of surveillance [4]. The workflows also enable a subsurface team to evaluate more targets in less time, which is becoming important in the people-constrained environment [5]. A typical workflow would involve developing a depletion plan on a single model, and testing for sensitivities to a set of parameters. Measuring the value of the depletion plan normally involves comparing a profile to the base case and running an economic analysis. Testing the variants of the depletion plan tends to be a very methodical but time-intensive process.
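The idea of optimising what can be controlled (the depletion plan) on its average performance across an ensemble of equally plausible reservoir descriptions can be written out in a few lines. The three candidate plans and the NPV table below are hypothetical placeholders, not the Andrew or Harding results.

```python
import numpy as np

# Hypothetical NPVs ($MM) of three candidate depletion plans evaluated on an
# ensemble of nine history-matched reservoir descriptions (rows = plans, cols = models).
npv = np.array([
    [120,  95, 110, 130,  80, 105, 115,  90, 100],   # plan A: two phased infill wells
    [140,  60, 150, 155,  55, 135, 145,  50, 125],   # plan B: aggressive early drilling
    [100, 100, 105, 110,  95, 100, 105,  95, 100],   # plan C: single well, drilled late
])
plans = ["A", "B", "C"]

mean_npv = npv.mean(axis=1)   # the quantity optimised: average over models we cannot control
spread = npv.std(axis=1)      # variability across models, the risk the team must understand
best = int(np.argmax(mean_npv))
print(f"chosen plan: {plans[best]} "
      f"(mean NPV {mean_npv[best]:.0f} $MM, std across models {spread[best]:.0f} $MM)")
```

A large spread for the chosen plan is the signal, in the paper's terms, that there may be a "surveillance prize": data that distinguishes between the reservoir descriptions could change which plan is preferred.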
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec/EAGE Annual Conference and Exhibition, June 12–15, 2006
Paper Number: SPE-100193-MS
Abstract
This paper presents a case study in which a Multipurpose Environment for Parallel Optimization is applied to assisted History Matching. Experimental Design techniques are used in order to investigate parameter sensitivities. Different global optimization methods are integrated into a workflow controlling a large number of reservoir simulations. For increased efficiency, simulations are run in parallel. Results are analyzed and compared to traditional History Matching to identify potential added value and increased efficiency. The reservoir engineer is in full control of the optimization process and interacts frequently with the History Matching process, while acquiring more information and an improved understanding of the existing uncertainties of the simulation model. The history matching objective was defined to emphasize selected key wells where traditional history matching had proven to be difficult. The importance of a large number of input parameters was investigated, and an optimization scheme was set up to generate one or more alternative history-matched realizations of the simulation model. Alternative simulation models are determined by differences in model parameters as well as varying match qualities on a well-by-well basis. Selected results are used as input for alternative prediction scenarios. The study was used to evaluate the economic potential for new infill wells, potential side-track locations and ESP candidates. The case study demonstrates the improvement in assessing reservoir uncertainties for an OMV-operated North African oil field. A structured process was implemented for assessing uncertainties related to oil in place volumes in selected regions of the field with a large impact on alternative prediction scenarios.
Introduction
Over the last few years increasing interest has been focused on workflows for uncertainty assessment in reservoir management. Structured approaches exist for assessing the impacts of uncertainty on investment decision-making in the oil and gas industry [1]. These approaches mostly rely on simplified component models for each decision domain, e.g. G&G models, production scenarios, drilling model, processing facilities, economics and related costs, etc. Because of its complexity, the integration of dynamical modeling is only gradually entering this domain of decision-making processes. The presented field study contributes to this discussion. Most documented procedures focus on reservoir field evaluations with few or no constraining dynamic measurement data, i.e. production data or time-lapse seismic measurement data. Workflows exist for static modeling including multiple scenario evaluations and Monte Carlo techniques for uncertainty quantification. Established workflows are missing for the process of incorporating dynamic data in the generation of reservoir models, called History Matching. It is generally accepted that any model reliably predicting future quantities should be able to reproduce known history data. This requires a model validation process [2] (History Matching) which is traditionally cumbersome and time consuming. A consistent introduction of production data is computation-intensive. This requires new approaches in the application of experimental design and optimization methods, supported by the use of high performance computing facilities [3–8]. In this work Experimental Design and Global Optimization methods are used to support the process of History Matching.
Different Experimental Design techniques [9–11] and mostly alternative implementations of genetic algorithms [12–14] have been used in this context before. We concentrate on Evolutionary Algorithms, which are generally robust and less sensitive to non-linearities and discontinuities of the solution space. The methodology and workflow used in this work are described in the next section. Readers interested only in results should refer to the Case Study.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec/EAGE Annual Conference and Exhibition, June 12–15, 2006
Paper Number: SPE-100257-MS
Abstract
Estimating original hydrocarbons in place (OHIP) in a reservoir is fundamentally important in estimating reserves and potential profitability. Two traditional methods for estimating OHIP are volumetric and material balance methods. Probabilistic estimates of OHIP are commonly generated prior to significant production from a reservoir by combining volumetric analysis with Monte Carlo methods. Material balance is routinely used to analyze reservoir performance and estimate OHIP. Although material balance has uncertainties due to errors in pressure and other parameters, probabilistic estimates are seldom generated. In this paper we use a Bayesian formulation to integrate volumetric and material balance analyses and to quantify uncertainty in the combined OHIP estimates. Specifically, we apply Bayes' rule to the Havlena and Odeh material balance equation to estimate original oil in place, N, and relative gas-cap size, m, for a gas-cap drive oil reservoir. We consider uncertainty and correlation in the volumetric estimates of N and m (reflected in the prior probability distribution), as well as uncertainty in the pressure data (reflected in the likelihood distribution). Approximation of the covariance of the posterior distribution allows quantification of uncertainty in the estimates of N and m resulting from the combined volumetric and material balance analyses. Our investigations show that material balance data reduce the uncertainty in the volumetric estimate, and the volumetric data reduce the considerable non-uniqueness of the material balance solution, resulting in more accurate OHIP estimates than from the separate analyses. One of the advantages over reservoir simulation is that, with the smaller number of parameters in this approach, we can easily sample the entire posterior distribution, resulting in more complete quantification of uncertainty.
INTRODUCTION
The estimation of original hydrocarbons in place (OHIP) in a reservoir is one of the oldest and, still, most important problems in reservoir engineering. Estimating OHIP in a reservoir is fundamentally important in estimating reserves and potential profitability. We have long known that our estimates of OHIP possess uncertainty due to scarcity of data and data inaccuracies [1–3]. Quantifying the uncertainties in OHIP estimates can improve reservoir development and investment decision-making for individual reservoirs and can lead to improved portfolio performance [4]. The general question we address in this paper is: given the reservoir data available, how do we best estimate OHIP and how do we quantify the uncertainty inherent in this estimate? Two traditional methods for estimating OHIP are volumetric and material balance methods [5, 6]. Volumetric methods are based on static reservoir properties, such as porosity, net thickness and initial saturation distributions. Since they can be applied prior to production from the reservoir, volumetric methods are often the only source of OHIP values available in making the large investment decisions required early in the life of a reservoir. Given the often large uncertainty due to the paucity of well data early in the reservoir life, it is common to quantify the uncertainty of volumetric estimates of OHIP using statistical methods such as confidence intervals [7] and Monte Carlo analysis [8, 9]. Material balance is routinely used to analyze reservoir performance data and estimate OHIP.
The material balance method requires pressure and production data and, thus, can be applied only after the reservoir has produced for a significant period of time. The advantages of material balance methods are that we can determine the drive mechanism in addition to OHIP, that no geological model is required, and that we can solve for OHIP (and sometimes other parameters) directly from performance data. Primary sources of uncertainty in material balance analyses are incomplete or inaccurate production data and inaccuracies in determining an accurate average pressure trend, particularly in low-permeability or heterogeneous reservoirs. Although these uncertainties have long been recognized, material balance methods are often considered more accurate than volumetric methods, since they are based on observed performance data. It is not common practice to formally quantify the uncertainty in material balance estimates of OHIP, although there have been some attempts [10–13].
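For reference, the standard Havlena-Odeh straight-line form for a gas-cap-drive oil reservoir (written here with the usual textbook symbols, and neglecting water influx and rock/connate-water expansion for brevity, which may or may not match the paper's exact formulation) and the Bayesian update applied to it are:

```latex
F = N_p\bigl[B_o + (R_p - R_s)B_g\bigr] + W_p B_w,
\qquad
E_o = (B_o - B_{oi}) + (R_{si} - R_s)B_g,
\qquad
E_g = B_g - B_{gi}

F \;=\; N\!\left[E_o + m\,\frac{B_{oi}}{B_{gi}}\,E_g\right],
\qquad
p(N, m \mid \text{pressure data}) \;\propto\; p(\text{pressure data} \mid N, m)\; p(N, m)
```

With only two unknowns, N and m, the posterior can be evaluated over a simple grid or sampled exhaustively, which is the practical advantage over full reservoir simulation noted in the abstract.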
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec/EAGE Annual Conference and Exhibition, June 12–15, 2006
Paper Number: SPE-99994-MS
Abstract
The installation of intelligent wells to improve the economics of production is now common practice. These wells allow access to marginal reservoirs, for which dedicated production might not be economic, and also accelerate the recovery. Sensors, flow-control and other devices can be used to manage the production from the commingled reservoirs and optimize the recovery. Traditional methods for production optimization and back-allocation of complex well configurations, such as nodal analysis, work only for a static problem. They cannot account for the dynamic changes that occur over time in the connected system of reservoirs and wellbore. Once multiphase flow occurs, neither the change of the fluid mobility in the reservoir nor the change of the choke performance can be correctly addressed. Moreover, the large number of uncertainties, from reservoir to wellbore behavior, that influence the performance of those advanced wells cannot be accurately dealt with using traditional approaches. A process is introduced that creates the most accurate well model of an intelligent completion, accounting for all effects influencing the pressure behavior in the wellbore and in the reservoir. This model is used for optimization over all static and dynamic uncertainties to derive an interaction strategy with the intelligent well that maximizes oil production. Furthermore, the back-allocation algorithm is calibrated and trained on the proxy model of the well model.
Introduction
Intelligent wells are wells that have monitoring devices installed to record the production behavior in the completion, in the wellbore, and at the wellhead, and allow conclusions on the inflow behavior from the reservoir. These monitoring devices are located either directly at the completion, at the wellhead or along the tubing string, or in a combination of all. These devices can measure pressure, temperature and flow rate. Although technically possible, the latter is rarely used at the completion level and is rather situated at the wellhead. The monitoring devices deliver the measured data in real-time, allowing on-line analysis by the engineer. However, these sensors alone do not constitute an intelligent well. Control devices, which allow immediate (rig-less) interaction with the completions, production tubing or wellhead, are needed to react to the recorded events. Those devices can be, in the simplest case, an on-off tool to close in part or entirely a completion. More sophistication is offered by a surface-controlled flow control valve that can restrain the inflow from the completion into the production tubing by choking the fluid flow from fully open to completely closed. The artificial lift system is also part of the control devices as, for example, the production rate can be reduced to avoid coning and establish production below the critical rate the moment first signs of breakthrough appear. Commingled production from two or more productive horizons is the ideal method to accelerate production from a single well. Furthermore, marginal reservoirs, which are destined to be uneconomic with dedicated production, could become viable for production. The application of intelligent completions for such commingling wells allows not only the production and recovery optimization for each individual reservoir but also the value maximization of the well.
Intelligent completions can also guarantee regulatory requirements to back-allocate production from the wellhead measurement to the individual reservoirs for reserves booking. An operator would not accept a predetermined strategy to optimize the production of a costly intelligent well that is based on an uncertain reservoir description [1]. Rather, the operator will proactively take advantage of the sensors installed in the well by learning from the production behavior, setting it into relation to the reservoir uncertainties and interacting with the well immediately for optimized production. One of the parameters that has to be known is the contribution from each completion to the well production, since any optimization technique has to base its calculation on it and try to improve the objective function, for example maximizing oil production or oil recovery. This paper suggests a workflow that handles the complex task of stochastic production optimization of intelligent completions over all parameters of influence and establishes the stochastic back-allocation algorithm. Both back-allocation and production optimization can be carried out in real-time.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE European Petroleum Conference, October 24–25, 2000
Paper Number: SPE-65144-MS
Abstract
On behalf of a group of sponsors consisting of the Norwegian Petroleum Directorate (NPD) and most E&P companies active in Norway, a workgroup was set up to author a report on the Best Practices and Methods in Hydrocarbon Resource Estimation, Production and Emissions Forecasting, Uncertainty Evaluation and Decision Making. The workgroup is part of Norway's forum for Forecasting and UNcertainty (FUN). Following a detailed data acquisition and interviewing phase to make an inventory of the current practice of all sponsors involved, the workgroup postulated a relationship between a company's practices and its economic performance. A key distinguishing factor between companies is the degree to which probabilistic methods are adopted in integrated multi-disciplinary processes, aimed at supporting the decision making process throughout the asset life cycle and portfolio of assets. Companies have been ranked in terms of this degree of integration and best practices are recommended. In many companies a gap seems to exist between available and applied technology. Data and (aggregated) information exchange between governments and companies is also discussed. A best practice based on their respective decision making processes is recommended.
Introduction
Norway's forum for Forecasting and UNcertainty evaluation (FUN, ref. 1) was established in 1997, and has 18 member companies plus the Norwegian Petroleum Directorate (NPD). The forum is a Norwegian Continental Shelf arena to determine best practice and methods for hydrocarbon resource and emissions estimation, forecasting, uncertainty evaluation and decision making. It focuses on matters related to forecasting and uncertainty evaluation of future oil and gas production. Its main purpose is to optimize the interplay between the private industry and the national authorities wishing to regulate their national assets. The basic question that kicked off the FUN Best Practices project was whether the accuracy of Norway's historical production forecasts has been disappointing because of erroneous contributions from the companies or because of wrong aggregation by NPD. The question was posed which "Best Practices" could improve this situation. Whereas reserves form the basis for production, capex, opex and emissions forecasting, the decision making process in the various companies and national authorities links the various components together. Using the latest WPC/SPE guidelines for reserves reporting (allowing the use of probabilistic methods), the project concentrated on assessing the potential advantages of probabilistic techniques when used in combination with fully integrated asset management workflow processes. After a discussion of the current practices of the various companies and authorities visited, "Best Practices" are formulated in the fields of estimating reserves, production, costs and emissions forecasting, decision-making, planning and communications. The paper concludes with recommendations on how to move from the "current practices" to the desired "Best Practices".
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE European Spring Meeting, May 29–30, 1974
Paper Number: SPE-4817-MS
Abstract
As an alternative concept to probabilistic approaches to problems of uncertainty, the concept of credibility is described. This concept expresses uncertainty in degrees of belief or credibility a particular individual has in certain propositions. The rules for manipulation of credibilities are selective. Major emphasis in decision making is put on extreme outcomes of possible actions. The method is illustrated by reserve calculations, production forecasting and the setting-up of cash flows.
Introduction
The systematic and consistent treatment of decisions under uncertainty with the help of probabilistic approaches became prominent in the early sixties. Especially the findings of Grayson and Kaufmann, based upon more or less theoretical work, can be considered the starting point of various investigations concerning the application of probabilities in the field of petroleum engineering. Nevertheless, the idea of using probabilities in this industry goes back to the thirties and forties. Grayson drew attention towards drilling decision problems in which probabilities interpreted from a personalistic point of view can be applied. This view holds that probability measures the confidence a particular individual has in the truth of a particular proposition. Following this interpretation one can conclude that all probabilities are known to the person concerned. Holders of this view stress that it is the very nature of probability to deal with decisions under uncertainty. Probabilities which can be determined by measuring - the so-called objectivistic probabilities - are to be considered as knowledge, and decision problems in which these probabilities can be applied are not decisions under uncertainty as long as repeatability is given. Personalistic probabilities are subject to the concept of mathematical probability which is commonly ascribed to Kolmogoroff.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE European Spring Meeting, May 29–30, 1974
Paper Number: SPE-4845-MS
Abstract
The author deals with the data management aspects of incorporating uncertainty in reservoir engineering studies. A data base with supporting programmes which handles probabilistic estimates in a complex data hierarchy, as used by NAM in The Netherlands, is described. Much attention has been paid to the systematic evaluation of the uncertainties associated with the basic data. Important factors in this respect are: the uncertainty associated with seismic information, the determination of net reservoir rock and the areal influence of individual data points. This latter aspect is especially important when conducting initial studies with pertinent information from only a few control wells.
Introduction
The development of oil and gas accumulations is always based on limited and uncertain data. Nevertheless decisions have to be made regardless of the uncertainties. This is well recognized, as is shown by many publications. Such work treats uncertainty generally in a more theoretical manner. It also stresses the importance of an analysis of uncertainty as a first requirement for further application of decision theory. However, to date, decision theory has not generally been very successfully applied in petroleum engineering problems, except perhaps in some problem-oriented cases. This is surprising, since the petroleum industry is constantly faced with the evaluation of the consequences of these uncertainties, and the application of decision theory would seem to be beneficial. There are several reasons to explain why this is the case.
DECISION THEORY IN PETROLEUM ENGINEERING
As a result of the limitations of our thinking process, we have to work within the limits set by basic assumptions or working hypotheses. The problem is that we formulate or quantify uncertainties within the framework set by such concepts, but that it is nearly impossible to judge the uncertainty associated with a concept itself. This would require that we be able to define (using the phraseology of decision theory) all possible concepts, as a mutually exclusive and exhaustive list of all "states of the world". This is not possible, except again perhaps in some isolated cases. The result is that our estimates are invariably of a conditional nature.