History matching
Search results 1-20 of 88
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200540-MS
Abstract
Digitalization and Artificial Intelligence have impacted the oil and gas industry. Drilling of wells, predictive maintenance and digital fields are examples of these technologies in use. In hydrocarbon production forecasting, numerical reservoir models and "digital twins" of reservoirs have been used for decades. However, increasing computing power and Artificial Intelligence have recently enabled oil and gas companies to generate "digital siblings" of reservoirs (model ensembles) covering the uncertainty range in static data (e.g. petrophysics, geological structure), dynamic data (e.g. oil or gas properties) and economics (capital and operating expenditures). Machine Learning and Artificial Intelligence are applied to condition the model ensembles to measured data and to improve hydrocarbon production forecasting under uncertainty. The model ensembles can be used for quantitative decision making under uncertainty, allowing companies to shorten the time for field (re-)development planning and to develop into learning organizations for decision making. These developments require companies to change their way of working in hydrocarbon production forecasting and decision analysis, and additional skills need to be developed to embrace digitalization. Data science - considered a key skill in digitalization - has not been identified as crucial in the skills development of oil and gas companies in the past. However, advanced data analytics and data science skills are a prerequisite for data-driven decision making. To close this skills gap, staff need to be trained and graduates with data science skills and a solid grounding in physics and chemistry need to be hired. Furthermore, skills development has to address the challenge of incorrect use of Machine Learning technologies and the risk that Artificial Intelligence leads to erroneous optimizations. In particular, the interpretability of AI needs to be covered in skills development.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200541-MS
Abstract
Germik, a mature heavy oil field in Southeast Turkey, has been producing for more than 60 years with a significant decline in pressure and oil production. To predict the future performance of this reservoir and explore possible enhanced oil recovery (EOR) scenarios for better pressure maintenance and improved recovery, a representative dynamic model is required. To address this need, an integrated approach is presented herein for the characterization, modeling and history matching of this highly heterogeneous, naturally fractured carbonate reservoir spanning a long production history. Hydraulic flow unit (HFU) determination is adopted instead of a lithofacies model, not only to better represent the variance among flow units, but also to establish a higher correlation between porosity and permeability. By means of artificial intelligence (AI), existing wireline logs are used to delineate HFUs in uncored intervals and wells; the HFUs are then distributed through the model using stochastic geostatistical methods. A permeability model is subsequently built based on the spatial distribution of HFUs, and different sets of capillary pressure and relative permeability curves are incorporated for each rock type. The dynamic model is calibrated against the historical production and pressure data through assisted history matching. The uncertain parameters with the largest impact on the quality of the history match are the oil-water contact, aquifer size and strength, horizontal permeability, ratio of vertical to horizontal permeability, capillary pressure and relative permeability curves, which are efficiently and systematically optimized through an evolution strategy. Identification and distribution of the hydraulic units, complemented with artificial neural networks (ANN), provide a better description of flow zones and a higher-confidence permeability model. This reduces the uncertainties associated with reservoir characterization and facilitates calibration of the dynamic model. Results obtained from the study show that the history-matched simulation model may be used with confidence for testing and optimizing future EOR schemes. This paper presents a novel approach to permeability and HFU determination based on artificial intelligence, which is especially helpful for addressing the uncertainties inherent in highly complex, heterogeneous carbonate reservoirs with limited data. The adopted technique facilitates calibration of the dynamic model and improves the quality of the history match by providing a better reservoir description through flow unit distinction.
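The abstract does not give the underlying formulas, but a common route to delineating HFUs from core data is the flow zone indicator (FZI) of Amaefule et al. (1993), followed by clustering. The sketch below illustrates that route on synthetic data; all values and the cluster count are assumptions, and an ANN trained on wireline logs, as in the paper, would supply the labels in uncored intervals.

```python
# Hydraulic flow unit (HFU) delineation via the flow zone indicator (FZI),
# a standard approach (Amaefule et al., 1993); all numbers are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def flow_zone_indicator(k_md, phi):
    """FZI in microns from permeability (mD) and fractional porosity."""
    rqi = 0.0314 * np.sqrt(k_md / phi)      # reservoir quality index
    phi_z = phi / (1.0 - phi)               # normalized porosity
    return rqi / phi_z

rng = np.random.default_rng(0)
phi = rng.uniform(0.05, 0.30, 500)               # synthetic core porosity
k_md = 10 ** (rng.normal(1.5, 1.0, 500))         # synthetic permeability, mD

fzi = flow_zone_indicator(k_md, phi)
# Cluster log(FZI) into an assumed number of HFUs.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    np.log(fzi).reshape(-1, 1))
for hfu in range(4):
    print(f"HFU {hfu}: mean FZI = {fzi[labels == hfu].mean():.2f} microns")
```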
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200559-MS
Abstract
Relative permeability and capillary pressure are the key parameters of multiphase flow in a reservoir. To ensure an accurate determination of these functions in the areas of interest, core flooding and centrifuge experiments on the relevant core samples need to be interpreted meticulously. In this work, relative permeability and capillary pressure functions are determined synchronously by history matching multiple experiments simultaneously, increasing the precision of the results through the additional constraints coming from the extra measurements. To take the underlying physics into account without making crude assumptions, the Special Core Analysis (SCAL) experiments are simulated directly instead of using well-known simplified analytical or semi-analytical solutions. The corresponding numerical models are implemented with the MRST library (Lie, 2019). The history matching approach is based on the adjoint gradient method for the constrained optimization problem. The relative permeability and capillary pressure curves, which are the targets of the history matching, can take a variety of representations in the current implementation: Corey, LET, B-splines and NURBS. To analyze the influence of the assumed correlations on the history matching results, the interpretation process with analytical correlations is compared to history matching based on a generic NURBS representation of the relevant functions.
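For readers unfamiliar with the parameterizations named above, the sketch below fits a Corey water relative-permeability curve to synthetic measurements by least squares. It is only illustrative: the paper uses adjoint gradients within MRST, whereas here a generic scipy optimizer stands in, and the endpoint saturations are assumed fixed.

```python
# Minimal sketch: fitting Corey relative-permeability parameters by least
# squares; all numbers are illustrative.
import numpy as np
from scipy.optimize import least_squares

def corey_krw(sw, swc, sor, krw_max, nw):
    """Corey water relative permeability with fixed endpoint saturations."""
    se = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    return krw_max * se ** nw

# Synthetic "measured" water relative permeability with noise.
sw_obs = np.linspace(0.2, 0.8, 15)
krw_obs = corey_krw(sw_obs, 0.2, 0.2, 0.4, 2.5)
krw_obs += np.random.default_rng(1).normal(0, 0.005, sw_obs.size)

def residuals(theta):
    krw_max, nw = theta
    return corey_krw(sw_obs, 0.2, 0.2, krw_max, nw) - krw_obs

fit = least_squares(residuals, x0=[0.5, 2.0], bounds=([0.0, 1.0], [1.0, 6.0]))
print("fitted krw_max, nw:", fit.x)
```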
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200568-MS
Abstract
Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investments, which necessitates a robust analysis of the economic upsides and downsides. Key uncertainties in designing a polymer flood are often the reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy and quantifying the economic risks of polymer flooding in a heterogeneous clastic reservoir. We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semi-synthetic reservoir that contains a wide range of geological and interpretational uncertainties. An ensemble of reservoir models is available for the Watt Field, and history matching was carried out for the entire ensemble using both techniques. Next, sensitivity studies were carried out to identify first-order parameters that impact the Net Present Value (NPV). These parameters were then deployed in an experimental design study using a Latin Hypercube to generate training runs from which a proxy model was created. The proxy model was constructed using polynomial regression and validated using further full-physics simulations. A particle swarm optimization algorithm was then used to optimize the NPV for the polymer flood; the same approach was used to optimize a standard waterflood for comparison. Optimizations of the polymer flood and waterflood were performed for the history-matched model ensemble and the original ensemble. The sensitivity studies showed that the polymer concentration, the location of the polymer injection wells and the time to commence polymer injection are key to optimizing the polymer flood. The optimal strategy to deploy the polymer flood and maximize NPV varies with the history matching technique. The average NPV is predicted to be higher for the stochastic history matching than for the adjoint technique, and the variance in NPV is also higher for the stochastic technique. This is due to the ability of the stochastic algorithm to explore the parameter space more broadly, which created situations where the oil in place is shifted upwards, resulting in higher NPV. Optimizing the history-matched ensemble leads to a narrower variance in absolute NPV than optimizing the original ensemble, because the uncertainties associated with polymer degradation are not captured during history matching. A cross-comparison, in which the optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, predicted a decline in NPV but surprisingly still showed an overall NPV higher than for an optimized waterflood. This indicates that a polymer flood could be beneficial compared to a waterflood, even if the geological uncertainties are not captured properly.
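A minimal sketch of the proxy-building step described above: Latin Hypercube samples generate training "runs", a quadratic polynomial-regression proxy is fitted, and the proxy is optimized. A toy analytic function stands in for the full-physics simulator, scipy's differential evolution stands in for the particle swarm optimizer, and the two design variables are hypothetical.

```python
# Sketch: Latin Hypercube training design, polynomial-regression proxy for
# NPV, then optimization on the proxy. All values are illustrative.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def npv_simulator(x):
    # Stand-in for a reservoir simulation returning NPV; x = (polymer
    # concentration, injection start time), both scaled to [0, 1].
    conc, start = x
    return -(conc - 0.6) ** 2 - 0.5 * (start - 0.3) ** 2 + 1.0

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = sampler.random(50)                         # 50 training "runs"
y_train = np.array([npv_simulator(x) for x in X_train])

proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
proxy.fit(X_train, y_train)

res = differential_evolution(lambda x: -proxy.predict(x.reshape(1, -1))[0],
                             bounds=[(0, 1), (0, 1)], seed=0)
print("proxy-optimal design:", res.x, "predicted NPV:", -res.fun)
```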
Proceedings Papers
Majid M. Faskhoodi, Akash Damani, Kousic Kanneganti, Wade Zaluski, Charles Ibelegbu, Li Qiuguo, Cindy Xu, Herman Mukisa, Hakima Ali Lahmar, Dragan Andjelkovic, Oscar Perez Michi, Alexey Zhmodik, Jose A. Rivero, Raouf Ameuri
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200531-MS
Abstract
To unlock unconventional reservoirs for optimum production, maximum contact with the reservoir is required; however, excessively dense well placement and interconnection of hydraulic fractures are a source of well-to-well interaction which impairs production significantly. The first step towards successful and effective well completion is to understand the characteristics of the hydraulic fractures and how they propagate in the reservoir. This paper demonstrates an integrated approach, with a field example from the Montney formation, showing how modern modeling techniques were used to understand and optimize hydraulic fracture parameters in an unconventional reservoir. Advanced logs from vertical wells and 3D seismic were used to build an integrated geological model. Lamination index analysis was performed using borehole imagery data to account for the interaction of hydraulic fractures with vertically segregated rock fabric and to provide additional control on hydraulic fracture height growth during the modeling process. A non-uniform Discrete Fracture Network (DFN) model was constructed, and a 3D geomechanical model was built and initialized using sonic log and seismic data. Fluid friction and leak-off were calibrated using treatment pressure and DFIT data. Hydraulic fracture modeling was performed for a pad consisting of six horizontal wells with multi-stage fracturing treatments, utilizing the actual pumped schedules and calibrating against microseismic data. High stress anisotropy led to planar hydraulic fractures despite the presence of natural fractures in the area. The fracturing sequence, i.e., the effect of stress shadowing, is seen to have a major impact on hydraulic fracture geometry and propped surface area. Heatmaps were generated to estimate the average stimulated and propped rock volume in the section. It was also observed that rock fabric, i.e., natural fractures and lamination, has a considerable impact on the propagation of hydraulic fractures. Multiple realizations of the natural fracture and lamination distributions were generated and used as input to the modeling process. High-resolution unstructured simulation grids were generated to capture fracture dimensions and conductivities, as well as to track propped and unpropped regions in the stimulation network. A dynamic model was constructed and calibrated against historical production data. The history-matched model was then used as a predictive tool for pad development optimization and to evaluate parent-child interaction in a depleted environment.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200578-MS
Abstract
Various physico-chemical processes affect Alkali Polymer (AP) flooding. Core floods can be performed to determine ranges for the parameters used in the numerical models describing these processes. Because the parameters are uncertain, prior parameter ranges are introduced and then conditioned to observed data. It is challenging to determine posterior distributions of the various parameters, as they need to be consistent with the different sets of data that are observed (e.g. pressures, oil and water production, chemical concentration at the outlet). Here, we apply Machine Learning in a Bayesian framework to condition parameter ranges to a multitude of observed data. To generate the response of the parameters, we used a numerical model and applied Latin Hypercube Sampling (2,000 simulation runs) from the prior parameter ranges. Machine Learning can then be applied to ensure that sufficient parameter combinations of the model comply with the various observed data. After defining multiple Objective Functions (OFs) covering the different observed data (here, six Objective Functions), we used the Random Forest algorithm to generate a statistical model for each Objective Function. Next, parameter combinations leading to results outside the acceptance limit of the first Objective Function are rejected. Then, resampling is performed and the next Objective Function is applied, until the last Objective Function is reached. To account for parameter interactions, the resulting parameter distributions are tested against the limits of all the Objective Functions. The results show that posterior parameter distributions can be efficiently conditioned to the various sets of observed data. Insensitive parameter ranges are not modified, as they are not informed by the observed data. This is crucial because parameters that are insensitive in the history period could become sensitive in the forecast if the production mechanism is changed. The workflow introduced here can also be applied for conditioning the parameter ranges of field (re-)development projects to various observed data.
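A minimal sketch of the described conditioning loop, under strong simplifying assumptions: a Random Forest surrogate is trained per objective function and candidate parameter samples are rejected sequentially against acceptance limits. The objective functions, limits and dimensions below are synthetic stand-ins for the six OFs of the paper, and the candidate pool is drawn once rather than resampled between OFs.

```python
# Sketch: per-OF Random Forest surrogates with sequential rejection
# sampling; all data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_params = 5
X = rng.uniform(0, 1, (2000, n_params))        # Latin Hypercube in the paper

# Synthetic OFs standing in for pressure, production and concentration
# misfits computed from the 2000 simulation runs.
ofs = [np.abs(X[:, 0] - 0.4) + 0.05 * rng.standard_normal(2000),
       np.abs(X[:, 1] - 0.7) + 0.05 * rng.standard_normal(2000)]
limits = [0.15, 0.15]                          # acceptance limit per OF

accepted = rng.uniform(0, 1, (20_000, n_params))    # candidate samples
for of_values, limit in zip(ofs, limits):
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X, of_values)
    accepted = accepted[rf.predict(accepted) <= limit]

# After the loop, the remaining candidates satisfy all predicted limits.
print(f"{len(accepted)} posterior samples remain")
```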
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200598-MS
Abstract
Many oil reservoirs worldwide exhibit cycle-dependent oil recovery, either by design (e.g. WAG injection) or unintended (e.g. repeated expansion/shrinkage of a gas cap). However, to reliably predict oil recovery involving three-phase flow processes, a transformational shift in the procedure used to model such complex recovery methods is needed. This study therefore focused on identifying the shortcomings of current reservoir simulators in order to improve the simulation formulation of cycle-dependent three-phase relative-permeability hysteresis. To achieve this objective, several core-scale water-alternating-gas (WAG) injection experiments were analysed to identify the trends and behaviours of oil recovery over the different WAG cycles. Furthermore, these experiments were simulated to identify the limitations of the current commercial simulators available in the industry. Based on the simulation efforts to match the observed experimental results, a new methodology was suggested to improve the modelling of WAG injection using current simulation capabilities, and the WAG injection core-flood experiments utilized in this study were then simulated to validate the new approach. The results of unsteady-state WAG injection experiments performed at different conditions were used in this simulation study. The simulation of the WAG injection experiments confirmed the positive impact of updating the three-phase relative-permeability hysteresis parameters in the later WAG injection cycles; this change significantly improved the match between the simulation and the WAG experimental results. Therefore, a systematic workflow for acquiring and analyzing the relevant data to generate the input parameters required for WAG injection simulation is presented. In addition, a logical procedure is suggested to update the simulation model after the third injection cycle as a workaround for the limitation in the current commercial simulators. This guideline can be incorporated in numerical simulators to improve the accuracy of oil recovery prediction for any cycle-dependent three-phase process using current simulation capabilities.
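The abstract does not specify which hysteresis parameters are updated; as an illustration, the sketch below uses Land's (1968) trapping model, which underlies most commercial hysteresis options, and shows how re-tuning the maximum trapped gas saturation after a later cycle changes the trapped-gas curve. All saturation values are assumptions.

```python
# Minimal sketch of Land's (1968) gas-trapping model; re-tuning the trapped
# endpoint per WAG cycle mimics the suggested workaround. Numbers are
# illustrative.
import numpy as np

def land_coefficient(sg_max, sgt_max):
    """Land C from the maximum gas saturation and its trapped counterpart."""
    return 1.0 / sgt_max - 1.0 / sg_max

def trapped_gas(sg_hyst, c_land):
    """Trapped gas saturation after flow reversal at saturation sg_hyst."""
    return sg_hyst / (1.0 + c_land * sg_hyst)

c1 = land_coefficient(sg_max=0.7, sgt_max=0.30)   # cycle-1 calibration
c3 = land_coefficient(sg_max=0.7, sgt_max=0.38)   # re-tuned after cycle 3
for sg in (0.3, 0.5, 0.7):
    print(f"Sg={sg:.1f}: Sgt cycle1={trapped_gas(sg, c1):.3f}, "
          f"cycle3={trapped_gas(sg, c3):.3f}")
```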
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200641-MS
Abstract
Multistage hydraulically fractured horizontal wells (MHFHWs) are widely used in most shale gas reservoirs around the world. Hydraulic fracturing treatment can create hydraulic fractures and activate existing natural fractures to generate a complex fracture network that significantly improves well performance. For precise production prediction, it is critical to recognize the spatial extent and properties of the hydraulic fracture network from multiple data sources such as production history and microseismic data. In this study, a novel method that combines automated history matching technology and embedded discrete fracture modeling (EDFM) is proposed for recognizing the spatial extent and properties of the fracture network of MHFHWs. For each hydraulic fracturing stage, the fracture network is parameterized by a set of uncertain parameters, including the length of the major fracture, the width of the stimulated area, the fracture density, the fracture permeability, etc. Using these parameters, realizations of the fracture network are generated. The production predictions are obtained by running reservoir simulations with EDFM, in which all fractures are embedded into a background grid system, and the automated history matching method is applied to perform the match. The proposed approach is validated using synthetic single-well and double-well cases. The results show that the spatial extent and properties of the hydraulic fracture network can be well recognized and that the production history can be well matched. Considering that microseismic surveillance is now often performed in shale gas reservoirs, the prior constraint of microseismic data is also investigated in this work. When microseismic data are available, an area of effective microseismic events is first defined for each fracturing stage. The events within the effective area are used to generate discrete fractures, and the events outside the effective area are discarded. Furthermore, the shape parameters of the area of effective microseismic events (wet events) are gradually modified by assimilating the production data. A real field case with microseismic data in the Sichuan Basin of China is investigated to test the performance of the proposed method. Reasonable results are obtained, demonstrating the robustness of the proposed approach.
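As background to EDFM, the fracture-matrix connection for a fracture segment embedded in a matrix cell is commonly built from T = k_m * A_f / <d>, with <d> the volume-averaged normal distance from the cell to the fracture plane (the exact prefactor varies between formulations and is omitted here). The sketch below estimates <d> by Monte Carlo for an assumed geometry.

```python
# Sketch of an EDFM fracture-matrix connection: transmissibility from the
# fracture area in the cell and the average normal distance <d>, estimated
# here by Monte Carlo. Geometry and numbers are illustrative.
import numpy as np

def avg_normal_distance(cell_min, cell_max, point, normal, n=100_000):
    """Average distance from points in a box to the plane (point, normal)."""
    rng = np.random.default_rng(0)
    pts = rng.uniform(cell_min, cell_max, (n, 3))
    return np.mean(np.abs((pts - point) @ normal))

cell_min, cell_max = np.zeros(3), np.array([10.0, 10.0, 4.0])   # cell, m
normal = np.array([1.0, 0.0, 0.0])               # fracture plane x = 5
point = np.array([5.0, 0.0, 0.0])
a_frac = 10.0 * 4.0                              # fracture area in cell, m^2
k_m = 1e-16                                      # matrix permeability, m^2

d_avg = avg_normal_distance(cell_min, cell_max, point, normal)
T = k_m * a_frac / d_avg                         # connection transmissibility
print(f"<d> = {d_avg:.2f} m, T = {T:.3e} m^3")
```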
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec, December 1–3, 2020
Paper Number: SPE-200620-MS
Abstract
One of the final goals of any reservoir characterization study is to deliver reliable production forecasts. This is a challenging task, as the fluid flow dynamics are governed by non-linear equations: a small perturbation in the reservoir model inputs might have a large impact on the modelling outputs, and thus on the forecasts. Also, depending on the maturity of a project, engineers have various amounts and types of data to deal with and to history match through an optimization process. In the case of a mature asset, for which massive datasets of various types are available, the standard history matching process is based on the minimization of a single objective function (the history matching criterion), computed through a weighted least-squares formulation. The difficulty is then for the user to properly define the weight of each dataset before the summation into the single objective function is performed. To avoid this difficulty, two currently available but still prospective - in geoscience applications - optimization techniques are considered. The first is based on the definition of multiple objective functions (based on data types and/or location on the field) coupled to an optimization process. If all the objective functions decrease together, the user still has the flexibility to assess the minimization of each individual objective function one by one. If, on the contrary, the objective functions compete (e.g. because of noisy data), then the derived Pareto front (in the 2D case) identifies the location of the optimal compromises. The second is a sequential optimization approach based on single-objective constrained optimizations: each objective is optimized in turn, with constraints on the other objectives. The thresholds defined for the constraints are derived from the results of the previous optimizations. This pragmatic approach allows the user to prioritize the objectives and to tune the expected accuracy on each data type. Both approaches are applied to a real gas storage asset with more than 40 years of exploitation history and various data types (e.g. pressures, saturations, and gas breakthrough at control wells), leading to the definition of multiple (possibly more than two) objective functions. Both show promising results in terms of history matching quality and in terms of flexibility, as the user is able to consider, define and dynamically update alternative history matching strategies. These approaches may be considered as alternatives to the standard history matching process, preliminary to the computation of production forecasts, even if the associated challenges and complexity grow with the number of objective functions.
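For the first technique, the "optimal compromises" are the non-dominated points of the multi-objective misfit cloud; a minimal Pareto-front filter over synthetic two-objective misfits might look like this sketch.

```python
# Minimal sketch of extracting the Pareto front from evaluated candidates in
# a two-objective history match (both objectives minimized).
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F (n_points x n_objectives)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # A point dominates i if it is no worse everywhere and better somewhere.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominates_i.any()
    return mask

rng = np.random.default_rng(0)
F = rng.uniform(0, 1, (200, 2))     # e.g. pressure misfit vs. saturation misfit
front = F[pareto_front(F)]
print(f"{len(front)} non-dominated compromises out of {len(F)}")
```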
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 81st EAGE Conference and Exhibition, June 3–6, 2019
Paper Number: SPE-195507-MS
Abstract
To improve a reservoir simulation model, uncertain parameters such as the porosity and permeability of the reservoir rock strata need to be adjusted to match the simulated production data with the actual production data. This process is known as History Matching (HM). In geological CO2 storage, which is being promoted for use in depleted hydrocarbon reservoirs and saline aquifers, CO2 tends to migrate upwards and accumulate as a separate plume in the zone immediately beneath the reservoir caprock. Caprock morphology is thus of considerable importance with respect to storage safety and migration prediction for long-term CO2 storage. Moreover, small-scale caprock irregularities, which are not captured by seismic surveys, can be one of the sources of error when matching the observed CO2 plume migration against numerical modelling results (e.g. at Sleipner). Here we therefore study the impact of uncertainties in slope and rugosity (small-scale caprock irregularities not captured by seismic surveys) on plume migration, using a history-matching process. We defined 10 cases with different initial guesses to reproduce the caprock properties representing an observed plume shape. The results showed a reasonable match between the horizontal plume shapes of the calibrated and observed models, with an average error of 2.95%.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 81st EAGE Conference and Exhibition, June 3–6, 2019
Paper Number: SPE-195542-MS
Abstract
Integration of time-lapse seismic data into the dynamic reservoir model is an efficient process for calibrating reservoir parameter updates. The choice of the metric measuring the misfit between the observed data and the simulated model has a considerable effect on the history matching process, and hence on the optimal ensemble of models obtained. History matching using 4D seismic and production data simultaneously is still a challenge due to the different nature of the two types of data (time series versus maps or volumes). Conventionally, the misfit is formulated as a least-squares criterion, which is widely used for production data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven to be reliable. This study explores the history matching process by introducing a merged objective function combining the production and the 4D seismic data. The approach proposed in this paper makes these two types of data (well and seismic) comparable within a unique objective function, which is then optimized, thereby avoiding the question of weights. An adaptive evolutionary optimization algorithm has been used for the history matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility. This combined production and seismic history matching has been applied to a UKCS field; it shows that an acceptable production data match is achieved while honouring the saturation information obtained from the 4D seismic surveys.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 80th EAGE Conference and Exhibition, June 11–14, 2018
Paper Number: SPE-190778-MS
Abstract
In order to better understand reservoir behavior, reservoir engineers make sure that the model fits the data appropriately. How well a model fits the data is described by a match quality function carrying assumptions about the data. From a statistical perspective, improper assumptions about the underlying model may lead to misleading beliefs about the future response of reservoir models. For instance, a simple linear regression model may fit the available data fairly well, yet fail to predict well. Conversely, a model may match the data perfectly but make poor predictions (i.e. overfitting). In both cases, the mean response of the regression model will be far from the true response of the reservoir variables and will cause poor decision making. A suitable model therefore has to balance the goodness of fit against the model complexity. In the model selection problem, realistic assumptions concerning the details of the model specification are the key element in learning from data. In the conventional history matching scheme, the data fitting is usually performed with a linear least-squares regression model (LSQ), which makes simple, yet often unrealistic, assumptions about the discrepancy between the model output and the measured values. The linear LSQ model ignores any likely correlation structure in the discrepancy, changes in the mean, and pattern similarities, which is reflected in poor prediction. In this work, we interpret the model selection problem in a data-driven setting that enables us first to interpolate the error in the history period, and second to propagate it towards unseen data (i.e. error generalization). The error models, constructed by inferring the parameters of the selected models, can predict the response variable (e.g. oil rate) at any point in the input space (e.g. time) with a corresponding generalization uncertainty. These models are inferred from a training/validation dataset and further compared in terms of the average generalization error on a test set. Our results demonstrate how incorporating different correlation structures of the errors improves the predictive performance of the model for the deterministic aspect of the reservoir modelling. In addition, our findings based on different inferences of the selected error models highlight how badly improper models can fail in prediction.
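As one concrete instance of an error model with a correlation structure (the paper compares several model classes; this particular choice is an assumption), a Gaussian process can be fitted to history-period residuals and propagated, with uncertainty, into the forecast period:

```python
# Sketch: fit an error model with correlation structure to history-period
# residuals (simulated minus observed oil rate) and propagate it, with
# uncertainty, into the forecast period. All data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t_hist = np.linspace(0, 1000, 60)[:, None]            # days, history period
resid = 20 * np.sin(t_hist.ravel() / 150) + rng.normal(0, 3, 60)  # correlated

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=100) + WhiteKernel(noise_level=9),
    normalize_y=True).fit(t_hist, resid)

t_fore = np.linspace(1000, 1500, 25)[:, None]         # unseen forecast period
mean, std = gp.predict(t_fore, return_std=True)       # generalization uncertainty
print("forecast error mean (first 3):", mean[:3].round(1))
print("forecast error std  (first 3):", std[:3].round(1))
```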
Proceedings Papers
Forlan La Rosa Almeida, Helena Nandi Formentin, Célio Maschio, Alessandra Davolio, Denis José Schiozer
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 80th EAGE Conference and Exhibition, June 11–14, 2018
Paper Number: SPE-190804-MS
Abstract
This paper proposes new objective functions to assimilate dynamic data for history matching, and evaluates their influence on the uncertainty conditioning. Representative events are observed and evaluated separately in the available dynamic data. The proposed objective functions evaluate two specific events: (1) the production transition behavior between the historical and forecasting periods, and (2) the water breakthrough time. To assess the production transition behavior, the deviation between the latest available historical data and the forecast value is evaluated at a specific moment under forecasting conditions. To assess water breakthrough, the breakthrough time error is measured in addition to the water-rate objective function. The new objective functions are normalized using the Normalized Quadratic Deviation with Sign (NQDS), for comparison with conventional objective functions (e.g. NQDS of the oil production rate). These additional objective functions are included in a probabilistic, multi-objective history matching and applied to the UNISIM-I-M benchmark for validation. Two history-matching procedures evaluate the impact of the additional objective functions, based on the same parameterization, boundary conditions and number of iterations. The first procedure (Procedure A) includes the objective functions traditionally used, such as fluid rates and bottom-hole pressure, computed using all the historical data points. The second procedure (Procedure B) considers the same functions as A plus the two additional objective functions. The advantage of including the additional objective functions was the supplementary data used to constrain the uncertainties, improving the attribute updates. Consequently, Procedure B generated better-matched models over the historical period and more consistent forecasts for both field and well behavior when compared to the available reference data. Adding the breakthrough deviation improved the quality of the match for water rates, because the breakthrough deviation is sensitive to reservoir attributes different from those sensed by the water-rate objective functions. The production transition error assisted the identification of scenarios that under- or overestimated well capacity; it also improved the transition of the models from the historical to the forecasting period, reducing fluctuations due to the changes in boundary conditions. Despite the increased number of objective functions to be matched, the improved reliability of the forecasts is an incentive for further study. Other representative events, such as the oil rate before and after the start of water production, could be separated and evaluated, for example. The improved reliability of the forecasts supports the inclusion of the proposed objective functions in history-matching procedures.
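The exact NQDS normalization is not given in the abstract, and published variants differ; the sketch below assumes one common form, a signed quadratic deviation normalized by a tolerance band around the observations, so that |NQDS| <= 1 roughly means the series is matched within tolerance.

```python
# Sketch of one assumed form of the Normalized Quadratic Deviation with
# Sign (NQDS); tolerance and constant are illustrative, not from the paper.
import numpy as np

def nqds(sim, obs, tol=0.1, c=1e-3):
    dev = sim - obs
    qds = np.sign(dev.sum()) * np.sum(dev ** 2)      # signed quadratic deviation
    norm = np.sum((tol * obs + c) ** 2)              # acceptance band
    return qds / norm

obs = np.array([100.0, 120.0, 150.0, 160.0])         # observed water rate
sim = np.array([110.0, 118.0, 170.0, 150.0])         # simulated water rate
print(f"NQDS = {nqds(sim, obs):.2f}  (|NQDS| <= 1: inside tolerance)")
```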
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185780-MS
Abstract
This work focuses on the improvement of an integrated methodology for the automatic history matching of compartmentalised reservoirs using 4D seismic results, stochastic initialization and the Ensemble Kalman Filter method. We compare two history matching approaches using the Ensemble Kalman Filter (EnKF) to update Fault Transmissibility Multipliers (FTM) initially estimated with and without considering the 4D seismic results. In this study, the parameters updated during the history matching are the two-phase fault transmissibility multipliers (FTM), the absolute permeability and the effective porosity of a synthetic, realistic 3D reservoir. The true impedance map and the changes in reservoir pressure and saturation were previously computed from the 4D seismic results. The systematic estimation of the two-phase FTM is based on the integration of the collected 4D seismic results and an established method, validated in our previous work on a deterministic model using gradient-based history matching with the Levenberg-Marquardt (LM) method. We present the history matching of a synthetic reservoir using the EnKF, with the 4D seismic results used to update the models and geostatistical techniques used to produce the initial geological models. The stochastic method used is Sequential Gaussian Simulation (SGS), generating 100 initial models. During history matching with the EnKF, the saturation distributions are computed from the forward modelling of a two-phase (oil-water) system. The impedance maps are then estimated using the Gassmann equation and compared with the true impedance map as part of the history matching process. To validate the results, a cost function consisting of two components is calculated: the first is the structural similarity index of the two reconstructed impedance images with respect to the real impedance image, and the second is the RMS cost function value, the L2-norm of the difference between the true (real) and simulated pressure-production data. The EnKF history matching using two-phase FTM values that consider the 4D seismic results produced lower cost function values than the model using initial FTM values that do not. The EnKF history matching algorithm using 4D seismic presented in this work produced results closer to the true reservoir impedance map than our previous 4D gradient-based history matching method.
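For reference, the stochastic EnKF analysis step that performs such parameter updates can be written in a few lines; the sketch below uses a toy linear "simulator" and assumed dimensions.

```python
# Minimal sketch of the stochastic EnKF analysis step used to update
# parameters (e.g. FTM, permeability, porosity) from mismatched data.
import numpy as np

def enkf_update(X, Y, d_obs, obs_err_std, seed=0):
    """X: parameters (n_par x n_ens); Y: simulated obs (n_obs x n_ens)."""
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    R = np.diag(obs_err_std ** 2)                    # obs error covariance
    Xa = X - X.mean(1, keepdims=True)
    Ya = Y - Y.mean(1, keepdims=True)
    C_xy = Xa @ Ya.T / (n_ens - 1)                   # cross-covariance
    C_yy = Ya @ Ya.T / (n_ens - 1)                   # data covariance
    K = C_xy @ np.linalg.inv(C_yy + R)               # Kalman gain
    D = d_obs[:, None] + rng.normal(0, obs_err_std[:, None], Y.shape)
    return X + K @ (D - Y)                           # updated ensemble

# Toy problem: 3 parameters, 2 observations, 100 members.
rng = np.random.default_rng(1)
X = rng.normal(0, 1, (3, 100))
Y = np.array([[1.0, 0.0, 0.5], [0.0, 2.0, -0.5]]) @ X    # linear "simulator"
X_new = enkf_update(X, Y, d_obs=np.array([1.0, -2.0]),
                    obs_err_std=np.array([0.1, 0.1]))
print("prior mean:", X.mean(1).round(2), "posterior mean:", X_new.mean(1).round(2))
```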
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185837-MS
Abstract
History matching integrated with uncertainty reduction is a key process in the closed-loop reservoir development and management methodology, which is used for decision analysis related to the development and management of petroleum fields. Despite developments over the last decades in history matching and uncertainty analysis, capturing the complex interaction among several attributes and several reservoir responses acting simultaneously in complex models remains a challenge. This paper describes the use of probabilistic, multi-objective history matching integrated with uncertainty reduction as a systematic and iterative process for obtaining a set of reservoir models that honors dynamic data in a complex field case. The methodology is an iterative process that simultaneously matches different objective functions, one for each well production profile. The procedure uses a re-characterization step, in which the uncertainties of the attributes (represented by their probability density functions) are updated using indicators that reveal global and local problems, and a correlation matrix to capture the interaction between the various reservoir uncertainties and the different objective functions. The methodology was applied to the Norne Field benchmark case, considering production data up to 2001 and using the remaining part of the provided history to estimate the quality of the production forecast. The major benefit derived from the application of the methodology was the identification of global and local problems. The initial reservoir models presented high discrepancies between simulated and observed data. The use of independent objective functions, in conjunction with a concise plot based on the normalized quadratic error of each production data series, highlighted when a new parametrization of the reservoir was necessary. New reservoir attributes were added, such as separate permeability curves for each reservoir formation and new gas permeability curves that better describe the fluid behavior. The initial number of uncertain attributes was twenty-seven; the correlation matrix clearly showed which of them had a major influence on the results. Attributes with significant impact in the study included the water-oil and gas-oil contacts and the fault transmissibilities. We updated the probabilities of the most influential attributes in order to identify the uncertainty levels that improved the history match results. The methodology integrated the process of history matching with uncertainty analysis, addressing both processes simultaneously for a complex case, and was effective and simple to use, even in this complex case study where reservoir characterization is important.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185800-MS
Abstract
In this work, a Bayesian data assimilation methodology for the simultaneous estimation of channelized facies and petrophysical properties (e.g., permeability fields) is explored. Based on the work of Zhao et al. (2016a, b), common-basis DCT is used for the parameterization of the facies fields in order to achieve model feature extraction and reduce the dimensionality of the inverse problem. An iterative ensemble smoother, along with a post-processing technique, is employed to simultaneously update the parameterized facies model (i.e., the DCT coefficients) and the permeability values within each facies in order to match the reservoir production data. Two synthetic examples are designed and investigated to evaluate the performance of the proposed history matching workflow under different types of prior uncertainty. One example is a 2D three-facies reservoir with sinuous channels; the other involves a 3D three-facies five-layer reservoir with two different geological zones. The computational results indicate that the posterior realizations calibrated by the proposed workflow correctly estimate the key geological features and permeability distributions of the true model, with good data match results. It is known that the reliability of the prior models is essential in solving dynamic inverse problems for subsurface characterization. However, the prior realizations are usually obtained using data from various sources with different levels of uncertainty, which poses great challenges in the history matching process. Thus, in this paper we investigate several particular cases with different prior uncertainties, including fluvial channels conditioned to uncertain hard data or generated by diverse geological continuity models. The proposed methodology shows desirable robustness against these prior uncertainties, which occur frequently in practical applications.
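The dimensionality reduction at the heart of this parameterization can be sketched with a truncated 2D DCT of a facies indicator map; the channel geometry and retained mode count below are assumptions.

```python
# Sketch of DCT-based facies parameterization: keep only the leading
# low-frequency DCT coefficients of a facies indicator map, so history
# matching updates a short coefficient vector instead of every grid cell.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
facies = np.zeros((64, 64))
facies[28:36, :] = 1.0                         # crude synthetic channel

coeffs = dctn(facies, norm="ortho")
k = 12                                         # retained modes per axis
truncated = np.zeros_like(coeffs)
truncated[:k, :k] = coeffs[:k, :k]             # the parameter vector (k*k values)

recon = idctn(truncated, norm="ortho")
print(f"retained {k*k} of {facies.size} coefficients, "
      f"reconstruction RMSE = {np.sqrt(np.mean((recon - facies)**2)):.3f}")
```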
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185847-MS
Abstract
Assisted history matching, which dynamically integrates production data into reservoir modelling, has been used to reduce the uncertainty in reservoir geological properties and thereby enable credible production forecasting. For large-scale heterogeneous heavy oil reservoirs, thousands of full-physics simulation runs of multimillion-cell reservoir models might typically be required to accurately probe the posterior probability space given the production history of the reservoir, which is not practical. In this paper, a unique approach for computationally efficient dynamic data integration is presented, based on the construction of a proxy model that can replace the reservoir simulator. Realizations are first parameterized using the Karhunen-Loeve (KL) transformation and represented in terms of a few uncorrelated random variables. Considering these random variables as inputs and the production parameters as outputs, a mathematical model based on Polynomial Chaos Expansion (PCE) is constructed from deterministic coefficients and orthogonal polynomials, and is then employed in assisted history matching in place of the computationally expensive reservoir simulator. History matching of a SAGD field located in northern Alberta is performed using the proposed KL-PCE framework, and the results are compared with a base case that uses a commercial reservoir simulator. The Ensemble Kalman Filter (EnKF) is used for the assisted history matching due to its ability to assimilate data in large-scale nonlinear systems. The effectiveness of the proposed idea is evaluated against the following criteria: (1) does the KL-PCE framework reduce the computational cost significantly, and (2) does the proposed workflow produce satisfactory history matching results? It is observed that the KL-PCE-based proxy model performs similarly to the commercial simulator in terms of ensemble convergence. Also, the uncertainty in the geological parameters is reduced significantly, which is evident from the convergence of the updated ensemble towards the true values. Furthermore, the computing cost of the assisted history matching is reduced by almost 95%, as training the PCE needs only a few full-physics simulations. Finally, the proposed surrogate-accelerated integrated dynamic modelling can be used in greenfield closed-loop optimization workflows and uncertainty assessments with minimal use of the numerical simulator, ultimately maximizing the benefit in monetary terms.
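The KL step can be sketched on a 1D grid: realizations of a correlated random field are expressed through the leading eigenpairs of its covariance, so a few uncorrelated standard-normal coefficients replace the full grid (the PCE proxy then maps such coefficients to production responses). The covariance model and mode count below are assumptions.

```python
# Sketch of Karhunen-Loeve (KL) parameterization of a correlated random
# field; all choices (grid, covariance, mode count) are illustrative.
import numpy as np

n = 200                                        # 1D grid for illustration
x = np.linspace(0, 1, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance

eigval, eigvec = np.linalg.eigh(C)             # ascending eigenvalues
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

m = 10                                         # retained KL modes
xi = np.random.default_rng(0).standard_normal(m)     # uncorrelated N(0,1) inputs
field = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)   # one realization

energy = eigval[:m].sum() / eigval.sum()
print(f"{m} modes capture {100*energy:.1f}% of the variance")
```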
Proceedings Papers
Francis Morandini, Jean-Francois Rainaud, Mathieu Poudret, Michel Perrin, Philipp Verney, Florian Basier, Rob Ursem, Jay Hollingsworth, Donna Marcotte
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185761-MS
Abstract
Objective/Scope: Exploration and production (E&P) work flows continue to evolve in completeness and complexity. Multidisciplinary teams use a variety of software packages to perform the many tasks required to build and update the accurate, comprehensive earth models used over the life of a field. Continued data-gathering and iteration to characterize the range of uncertainty is an integral yet challenging part of the process. Incorporating new data into an existing model can be "painful" - time consuming, tedious, and error prone - which inhibits our ability to easily and accurately update a model. Methods, Procedures, Process: RESQML is the industry-defined data-exchange standard used in E&P to transfer earth models between software applications in a vendor-neutral, open, and explicit format. In Version 2.0.1 (published in September 2015), RESQML defines a richer, more complete set of data objects (than Version 1) across the subsurface work flow. RESQML now also defines precise classifications of data objects and the relationships between them, creating a knowledge hierarchy of abstract subsurface features, human interpretations of those features, the data representations of those interpretations, and the properties indexed onto those representations. These and other new features make it easier to exchange data, iterate, and update models along the entire subsurface work flow. Results, Observations, Conclusions: This paper presents a work flow - using actual data from the Alwyn North Field - for adding "one more fault" to a structural interpretation after a preliminary, unsatisfactory history-matching exercise in a flow simulator. The paper describes how the RESQML v2.0.1 data-exchange standard can support a repository for geological knowledge and how multiple RESQML-enabled software packages (structural and stratigraphic interpretation applications and reservoir modeling systems) can share, transfer, and iterate on a coherent model. The work flow is based on a test case used to demonstrate the interoperability of multiple software packages from various member companies (operators and service/software companies) of Energistics, the upstream oil and gas data standards organization. All along the reservoir model cycle, Energistics members exchanged individual interpretations (e.g., horizons, faults, wellbore trajectories and formation markers), their individual representations (e.g., scattered points, surfaces, wellbore logs, and blocked wellbores), and composite interpretations (e.g., structural, stratigraphic, and reservoir organizations) with their framework and grid-based representations and properties. The paper explains how the members produced a reservoir model, then updated and exchanged only the model elements that needed to change when a poor history match indicated that an important fault was missing from the initial structural interpretation and had to be added to the model. Novel/Additive Information: The new RESQML v2.0.1 design and capabilities mean E&P professionals can now transfer complete models with all data in context and/or logically transfer (update) only the parts of a model that have changed. This new v2.0 functionality is a significant improvement over RESQML v1.1 and its precursor, RESCUE, both of which could exchange only a smaller set of individual elements and none of the relationships between them.
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 79th EAGE Conference and Exhibition, June 12–15, 2017
Paper Number: SPE-185877-MS
Abstract
An increasing number of field development projects include rigorous uncertainty quantification workflows based on parameterized subsurface uncertainties. Model calibration workflows for reservoir simulation models that include historical production data, also called history matching, deliver non-unique solutions and remain technically challenging. The objective of this work is to present a manageable workflow design with well-defined project workflow tasks for reproducible presentation of results. Data analysis techniques are applied to explore the information content of multiple-realization workflow designs for decision support. Experimental design, sampling and Markov Chain Monte Carlo (MCMC) techniques are applied for case generation. Data analytics is applied to identify patterns in the data sets, supporting the evaluation of the history matching process. Visualization techniques are used to present the dependencies between the contributions to the history matching error metric. Conflicting history matching responses are identified and add value to the interpretation of the history matching results. Probability maps are calculated on the basis of multiple realizations sampled from the posterior distribution to investigate potentially under-developed reservoir regions. The technologies are applied to a real gas field in the Southern North Sea. For the purpose of the benchmark, a structured workflow design for history matching and the estimation of prediction uncertainty is presented. Sensitivity evaluations are used to identify the key uncertain input parameters and perform parameter reduction. MCMC is applied for optimization and uncertainty quantification. The statistical stability of the key performance parameters is verified by repeating relevant phases of the workflow several times. In conclusion, practical consequences and best practices, as well as the use of data analytics in history matching workflows, are discussed.
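As background on the MCMC ingredient, a random-walk Metropolis sampler over the uncertain parameters, with the history-matching error metric acting as a negative log-likelihood, can be sketched as follows (the misfit function and step size are stand-ins):

```python
# Minimal sketch of random-walk Metropolis sampling for history matching;
# the misfit function stands in for a weighted simulation-data mismatch.
import numpy as np

def misfit(theta):
    # Stand-in for the history-matching error of one simulation run.
    return np.sum((theta - np.array([0.3, 0.7])) ** 2) / 0.01

rng = np.random.default_rng(0)
theta = np.array([0.5, 0.5])
chain, current = [], misfit(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05, 2)          # random-walk proposal
    cand = misfit(prop)
    if np.log(rng.uniform()) < current - cand:     # Metropolis acceptance
        theta, current = prop, cand
    chain.append(theta.copy())

chain = np.array(chain[5000:])                     # discard burn-in
print("posterior mean:", chain.mean(0).round(3), "std:", chain.std(0).round(3))
```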
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Europec featured at 78th EAGE Conference and Exhibition, May 30–June 2, 2016
Paper Number: SPE-180106-MS
Abstract
An increasing number of field development projects include rigorous uncertainty quantification workflows based on parameterized subsurface uncertainties. Reservoir model calibration workflows for reservoir simulation models that include historical production data, also called history matching, deliver non-unique solutions and remain technically challenging. In addition, the validation process of the reservoir simulation model often breaks the conceptual connection to the geological model. This raises the question of how to quantify the deviation between the calibrated simulation model and the original geological model. Workflow designs for history matching require scalable and efficient optimization techniques to address project needs. Derivative-free techniques like Markov Chain Monte Carlo (MCMC) are used for optimization and uncertainty quantification. Adjoint techniques derive analytical sensitivities directly from the flow equations; for history matching, those sensitivities are efficiently used for property updates at the grid-block level. The two techniques have different characteristics and support alternative history matching strategies: global vs. local, stochastic vs. deterministic. In this work, both techniques are applied in an integrated workflow design to the Norne field. The Norne field is a North Sea oil and gas reservoir with approximately 30 wells, one third of which are used for WAG injection for pressure support. The field data were previously released by Statoil and made available for a public benchmark study (NTNU, Norway) testing history matching techniques using production and time-lapse seismic data. We focus on well production data for the history matching. MCMC is used for global parameter updates and uncertainty quantification in a Bayesian context. An implementation of an adjoint technique is applied for analytical sensitivity calculations and local parameter adjustments of the rock properties. History matching results are presented for field-wide and well-by-well production data. Consistency checks between the updated and the original geological model are presented for rock property distribution maps. Geostatistical measures, including spatial correlations, are used to quantify the deviations between the updated and the original geological model. In conclusion, the scalability and performance efficiency of the practical workflow implementation are discussed, with a perspective on a consistent feedback loop from history matching back to geological modeling.