1–20 of 345 results for keyword: machine learning
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-3014-MS
Abstract
Many analytical methods rely on complete datasets. However, data obtained from the field are often incomplete or inaccurate. An example of this would be well production data, where oil, water, and gas rates are provided, but sometimes water or gas rates are missing. With current technology, generative machine learning methods, such as those used for Deep Fake, can generate images and data that are all but indistinguishable from reality. Using an adapted generative method known as the Generative Adversarial Imputation Network (GAIN), this paper evaluates these methods and their capabilities for filling in missing data. These methods were validated by creating missing data within complete datasets and comparing the generated values to those of the original. This work found that with initially large datasets and upwards of 45% of missing values, data can be "filled in" with surprising accuracy. Probability distributions were used to quantify the GAIN process and to relate the amount of missing data to variable importance. The relationship between the amount of missing data and the accuracy and probability associated with the predictions has been further quantified and presented within the context of various types of datasets. This paper discusses how generative methods of machine learning were used to fill in the missing portion of existing data with great success. Using a GAIN model, the missing fields can be generated for use in statistical analysis, decision making, and the optimization of current and future projects.

Introduction

Many studies have applied machine learning to find better ways of predicting unknowns. This is true across fields, and when the acquired data do not meet the researcher's criteria, the path forward often leads to some data simply being discarded. Missing data is a profound issue affecting a number of fields, from agriculture to medicine and, particularly, the oil industry. These missing variables can be problematic when a researcher wants to use all available data for any sort of data analytics. Suddenly, the researcher is required to decide how to, or whether to, make up for the missing information: take an average of the other available data, substitute a control value for the missing entries in a specific variable, or simply drop that sample. For example, if a row of data (sample) has 30 values and three are missing, a choice must be made on whether to disregard the entire row or to find suitable values to replace the missing ones.
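A minimal sketch of the mask-and-compare validation protocol described above. GAIN itself is a GAN-based imputer; scikit-learn's IterativeImputer stands in here purely to illustrate the evaluation loop (hide a fraction of a complete dataset, impute, and score the imputed entries against the originals). The variable names, synthetic data, and 45% masking rate are illustrative assumptions.

```python
# Mask-and-compare validation of an imputer, per the protocol described above.
# IterativeImputer is a stand-in for GAIN; the data are synthetic.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X_full = rng.normal(size=(1000, 6))                              # "complete" field dataset
X_full[:, 3] = 0.5 * X_full[:, 0] + 0.1 * rng.normal(size=1000)  # correlated column

mask = rng.random(X_full.shape) < 0.45                           # hide ~45% of the values
X_miss = X_full.copy()
X_miss[mask] = np.nan

X_hat = IterativeImputer(random_state=0).fit_transform(X_miss)
rmse = np.sqrt(np.mean((X_hat[mask] - X_full[mask]) ** 2))
print(f"RMSE on held-out (masked) entries: {rmse:.3f}")
```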
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2782-MS
Abstract
One component of the modern hydraulic fracturing process is streaming (continuously generating and transmitting) high-frequency data from the field to remote locations for monitoring, storage, and analysis. Analyzing this data in real time can be especially challenging during zipper fracturing operations, which involve several wells and potentially more than one frac crew. Accurate and consistent event identification, such as stage start and end times, enables real-time reporting of important stage metrics, including pressures, rates, and concentrations. More advanced workflows allow real-time stage comparisons aligned on identified events such as the start of each stage, achieving target rate, and breakdown. This study aims to demonstrate an automation process to identify accurate and consistent stage start and end times in real time using signal processing techniques. The dataset includes two types of data: post-stage treatment data, including treating pressures and slurry rates for 1,151 stages from all major North American basins; and 15,000+ hours of real-time data that includes streams from zipper frac operations. In addition to the well-timing challenges, the raw field data can be very noisy, making it difficult for automated techniques to separate real events from false positives. The authors use signal processing techniques to mitigate noise, easily accommodate business rules, and follow the subject matter experts' decision logic. The authors designed several auxiliary channels to identify approximate windows of time where the frac crew is pumping and not pumping. Many of these derived channels involve smoothing the original signals and depend on the degree of noise and whether to incorporate information about the past or the near future. Once these windows are identified, a search procedure is used to find the precise boundary between pumping times and non-pumping times (similar to zooming in on a treatment plot). The algorithm mimics an expert by identifying the relevant portions of the plot, thereby avoiding gross errors, and then zooms into the correct interval to refine its choice. Due to time constraints and limited data-viewing resolution in many frac vans, on-site frac supervisors often do not give proper attention to event tagging. As such, the algorithm's choices are more precise (within ∼5 seconds of the actual event) than average human performance (20–30 seconds). This is the first time a signal processing approach has been applied to identify key hydraulic fracturing events in a real-time data stream. This approach provides a robust, automated, transparent (white box), and extremely performant model that easily accommodates operating constraints. In turn, this enables real-time reporting of operational metrics and more advanced analyses, like comparing stages aligned on events. Providing accurate details for pump times and related operating metrics ultimately helps improve the operation's execution and reduces completion costs.
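The abstract outlines a two-step detection scheme: smoothed auxiliary channels find coarse pumping windows, then a boundary search "zooms in" on the raw signal. The sketch below is a minimal interpretation of that idea; the threshold, smoothing window, and function names are assumptions, not the authors' tuned values.

```python
# Coarse window detection on a smoothed channel, then boundary refinement on the
# raw slurry-rate signal. Thresholds and window lengths are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter1d

def find_stage_windows(slurry_rate, fs=1.0, min_rate=5.0, smooth_s=60):
    """Return (start, end) sample indices of candidate pumping windows."""
    smooth = uniform_filter1d(slurry_rate, size=int(smooth_s * fs))
    pumping = smooth > min_rate                     # coarse auxiliary channel
    pumping[[0, -1]] = False                        # guarantee well-formed windows
    edges = np.diff(pumping.astype(int))
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    refined = []
    for s, e in zip(starts, ends):
        # "zoom in": find the first/last raw sample actually above threshold
        s_ref = s + np.argmax(slurry_rate[s:e] > min_rate)
        e_ref = e - np.argmax(slurry_rate[s:e][::-1] > min_rate)
        refined.append((s_ref, e_ref))
    return refined
```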
Proceedings Papers
Abhijit Mitra, James Kessler, Sudarshan Govindarajan, Deepak Gokaraju, Akshay Thombare, Andreina Guedez, Munir Aldin
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2806-MS
Abstract
The magnitude of elastic anisotropy in shale is a function of composition, texture, and fabric. Rock components such as mineralogy, organic content, clay mineral orientation, alignment of matrix pores and intraparticle kerogen pores, as well as the distribution of cracks, fractures, and other discontinuities can influence anisotropy. Elastic anisotropy has a significant impact on seismic waveform interpretation, time-depth models, and the stress characterization used in drilling and well completion design. Anisotropy can be estimated explicitly from core measurements, but the time and budgetary requirements to conduct extensive laboratory measurements are usually prohibitive in an operating environment. Existing models aimed at characterizing anisotropy from log data involve assumptions that may not be realistic in every formation or lithology type. We aim to predict anisotropy from log data as a function of lithotype defined primarily by mineralogy, organic content, and porosity. This paper presents a workflow to identify lithotypes based on mineralogy, organic content, and porosity in core data from a single well and then predict elastic anisotropy for each lithotype away from the cored interval and in other wells. The workflow employs a multi-disciplinary experimental program using geology, engineering, and data analytics techniques to interpret data from core samples and log data obtained from a well in the Permian Basin. First, we derive the relationship between stiffness anisotropy and lithotypes defined in core. Second, we derive the relationship between lithotype and electrofacies from log data using machine learning techniques such as principal component analysis and clustering algorithms. We then apply the predictive models to estimate anisotropy for each lithotype and test the predictive capability in the source well. Analyses of laboratory measurements reveal that anisotropy is not significantly influenced by any single mineralogical constituent, volume of organic material, or porosity. However, a multiple linear regression model that utilizes all three of those constituents measured from core is successful in predicting anisotropy for lithotypes identified from machine learning techniques. There is good agreement between measured and modeled anisotropy when applied as an upscaling tool using well log data. Further work will test the predictability of the model in a blind test well by comparing modeled results with core and log data that are independent of this analysis. This paper successfully applies a combination of traditional geological and engineering applications with new machine learning techniques to characterize lithotypes and predict rock properties such as elastic anisotropy. The technique avoids the assumptions used in existing models to characterize anisotropy and provides a foundational workflow that can be utilized to predict other rock properties for a variety of applications in the unconventional oilfield.
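A compact sketch of the two-stage workflow: unsupervised electrofacies from logs (PCA plus k-means stands in for the unspecified clustering algorithm), then a multiple linear regression of anisotropy on mineralogy, organic content, and porosity. All arrays are synthetic placeholders for the core and log data.

```python
# Stage 1: electrofacies from logs via PCA + k-means.
# Stage 2: multiple linear regression of anisotropy on [mineralogy, TOC, porosity].
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
logs = rng.random((500, 8))                        # placeholder log curves
facies = make_pipeline(StandardScaler(), PCA(n_components=3),
                       KMeans(n_clusters=5, n_init=10)).fit_predict(logs)

core = rng.random((60, 3))                         # [mineralogy, TOC, porosity]
aniso = core @ np.array([0.4, 0.3, 0.2]) + 0.05 * rng.random(60)
model = LinearRegression().fit(core, aniso)        # in practice, one model per lithotype
print(model.coef_, model.score(core, aniso))
```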
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2976-MS
Abstract
The objective of this work is to predict the estimated ultimate recovery (EUR) and rank locations following application of the Digital Analogue Shale (DAS) model to exploration data types (character, geomechanics, quantity, quality, maturity, mineralogy) across shales worldwide. To address this objective, we developed the reduced-order DAS model using a machine-learning (ML) approach for rapid determination of EUR, with applications to independent data sets including the East Coast Basin, NZ, and the Reconcavo Basin, BR. A priori knowledge of the EUR will save money and reduce time by prioritizing economic investments and returns in unconventional shale assets. Development of the DAS model follows a ML workflow: training, feature selection, testing, and uncertainty quantification. The unsupervised ML network involves competitive training and self-organization of publicly available reservoir data from unconventional shale plays in the USA (e.g., Barnett, Bakken, Eagle Ford, Haynesville, Marcellus, Niobrara, and others) that include character (depth and thickness), geomechanics (porosity, permeability, Poisson ratio, and Young's modulus), quantity (free hydrocarbons, amount of hydrocarbons generated through thermal cracking, total organic carbon, and EUR), quality (hydrogen index), maturity (maximum temperature and vitrinite reflectance), and mineralogy (clay content, carbonate content, and silica content). Minimization of quantization and topographical error vectors provides EUR predictions for testing generalizability and quantifying uncertainty by stochastic cross-validation. The DAS model provides unbiased EUR predictions and their uncertainty estimates when applied to independent shale data. Differences between average and median EUR predictions reveal a nonlinear process, underscoring the importance of using the unsupervised ML approach to develop the DAS model. Given the range of estimation uncertainty, the preferred DAS model predictions (closest to observations) are median EUR values. We successfully applied the DAS model to quantify and rank EUR across three structural blocks in the Reconcavo Basin, and by block, formation, and member in the East Coast Basin. The DAS model represents an innovative step-change toward near real-time determination of EUR with quantifiable uncertainty anywhere that (sparse) exploration data are available. By ranking EUR predictions, the DAS model facilitates rapid prospect generation consistent with world-class shale plays. As new reservoir data become available, the DAS model can be refined and redeployed across unconventional shale assets. The DAS model can be applied using only sparse data from exploratory wells, and can be applied at an existing field to identify targets for infill drilling. Preliminary tests demonstrate the model's ability to predict (in addition to EUR) economic considerations such as well cost, operational well cost, recovery, volume of drilling water per well, and volume of fracking water per well. Lastly, the rapid deployment of the DAS model can provide guidance to governments for future bid rounds, exploration planning, and prioritization.
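The DAS model is built on competitive training and self-organization, i.e., a self-organizing map (SOM). Below is a minimal numpy SOM sketch showing that mechanism; the 10x10 grid, learning schedule, and 18-feature toy inputs are assumptions. EUR prediction would read the EUR component of a new sample's best-matching unit.

```python
# Minimal self-organizing map: codebook vectors compete for each sample, and the
# winner's neighborhood is pulled toward the sample. Dimensions are toy values.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((200, 18))                 # 18 features per shale-play sample (incl. EUR)
grid = rng.random((10, 10, 18))           # 10x10 map of codebook vectors

for t in range(2000):
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(grid - x, axis=2)
    bmu = np.unravel_index(d.argmin(), d.shape)        # best-matching unit
    lr, sigma = 0.5 * np.exp(-t / 1000), 3.0 * np.exp(-t / 1000)
    rows, cols = np.ogrid[0:10, 0:10]
    h = np.exp(-((rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2) / (2 * sigma ** 2))
    grid += lr * h[..., None] * (x - grid)             # pull neighborhood toward x

# Prediction sketch: the EUR entry of a new sample's BMU codebook vector.
```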
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2594-MS
Abstract
Production history matching can be used to evaluate effective fracture geometry and to constrain the uncertainty of fracture and reservoir properties such as fracture conductivity and relative permeability. Although these parameters are critical in optimizing completions design, such as well and cluster spacing, they are unfortunately difficult to quantify using fracture modeling or most diagnostic techniques, which focus on geometry and properties during fracturing rather than during production. To tackle this challenge, we leveraged an automatic history matching (AHM) scheme based on Neural Network-Markov Chain Monte Carlo (NN-MCMC) to quantify the parameters of a horizontal shale gas well with 74 days of production history. Ten parameters characterizing the fracture and reservoir properties were quantified. Cases with and without an enhanced permeability area (EPA) were investigated. The posterior distributions of these parameters were obtained from the multiple history-matching solutions. These multiple solutions were found by probabilistically iterating through 1 million realizations using the NN-MCMC algorithm, and a total of 650 realizations were proposed for validation with the reservoir simulator. The MCMC algorithm has the advantage of quantifying uncertainty without bias or being trapped in local minima. The use of a neural network (NN) as a proxy model removes the limitation of the infeasible number of simulation runs required by a traditional MCMC algorithm. The proposed AHM workflow also utilized the benefits of the Embedded Discrete Fracture Model (EDFM) to model fractures with higher computational efficiency than a traditional local grid refinement (LGR) method and higher accuracy than the continuum approach. We found that when the EPA was included to represent small fractures surrounding the main hydraulic fractures, the posterior distributions indicated shorter fracture geometries compared with the case of hydraulic fractures only (without EPA). This causes the production forecast of the case with EPA to be significantly lower than the one with only hydraulic fractures (without EPA). This means that if a simple model with only hydraulic fractures were assumed while, in the actual operation, an EPA exists due to the small fracture networks created around the main hydraulic fractures, we would overpredict the fracture geometry and gas EUR. With the use of NN-MCMC as the history matching workflow, the uncertainty ranges of the 10 parameters were characterized automatically. The resulting effective fracture geometry and properties can be used to improve well spacing and completion design in the next fracturing campaign.
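A minimal sketch of the NN-MCMC idea: a neural-network proxy replaces the reservoir simulator inside a Metropolis-Hastings loop, which is what makes a million realizations affordable. The proxy below is trained on a synthetic stand-in for simulator output, and the likelihood form, step size, and chain length are assumptions.

```python
# NN proxy + Metropolis-Hastings over 10 uncertain parameters. The proxy is fit
# to synthetic "simulator" input-output pairs; everything numeric is illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_params = 10
X_sim = rng.random((500, n_params))                       # simulator input samples
y_sim = X_sim.sum(axis=1) + 0.1 * rng.random(500)         # stand-in simulator response
proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_sim, y_sim)

def log_likelihood(theta, target=5.0, sigma=0.2):
    misfit = proxy.predict(theta.reshape(1, -1))[0] - target
    return -0.5 * (misfit / sigma) ** 2

theta, cur, chain = np.full(n_params, 0.5), None, []
cur = log_likelihood(theta)
for _ in range(10_000):                                   # cheap: no simulator calls
    prop = np.clip(theta + 0.05 * rng.standard_normal(n_params), 0, 1)
    ll = log_likelihood(prop)
    if np.log(rng.random()) < ll - cur:
        theta, cur = prop, ll
    chain.append(theta.copy())
# Posterior distributions of the parameters come from the accepted chain.
```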
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2855-MS
Abstract
With the abundance of big data in the oil and gas industry, it can be sufficient to treat and solve petroleum engineering problems using data analytics. Modern data analytic techniques and statistical and machine learning algorithms have received widespread application for solving such problems, particularly in unconventional formations. As we face the problems of parent-child well interactions, well spacing, and depletion concerns, it becomes necessary to model the effect of geology, completion design, and well parameters on production using models that can capture both the spatial and temporal variability of the covariates on the response variable. We can accomplish this using well-formulated spatio-temporal (ST) models. In this paper, we present a multi-basin study of production performance evaluation and applications of spatio-temporal (ST) models for oil and gas data. We analyzed a dataset from 10,077 horizontal wells in five unconventional formations in the US: the Bakken, Marcellus, Eagle Ford, Wolfcamp, and Bone Spring formations. We first gathered, cleaned, and prepared the data for analysis. Next, we present some ST plots for exploratory data analysis and show how these plots enable us to select a suitable distribution and suggest appropriate covariates to include in the model. Finally, we build our models, check residual plots to ensure model assumptions are not violated, and visualize and discuss the results. We present two methods for fitting the ST models: Fixed Rank Kriging (FRK) and ST versions of generalized additive models (ST-GAMs) using thin plate and cubic regression splines as basis functions in the spline-based smooths. We selected these methods because they are suitable for handling big data sets and allow some flexibility in the model formulation. Production, completion, and geologic data are available for the Bakken, and hence the model incorporated these covariates. We selected only space and time as covariates for the other three formations. In all four cases, the six-month cumulative oil and gas production are the response variables. The goal is to evaluate the performance of these ST models on production in several unconventional formations. For the Bakken case, we constructed three models to sequentially evaluate the effects of space-time only, geology, and completion on oil production, but we discuss the results of the final best model. Results show a significant effect on production from the smooth terms due to the tensor product of space and time, accounting for 60 to 95 percent of the variability in the six-month production in the oil-producing and gas-producing formations. In general, we saw a much better production response to completions for the gas formations compared to oil-rich formations. Overall, the model set up using basis functions across spatial and temporal domains captures a significant percentage of production variability in these unconventional plays. It also emphasizes the importance of well location and/or the landing point of the lateral section. The value of a multi-disciplinary team involving geologists, petrophysicists, and drilling, production, completions, and reservoir engineers is clearly visible through the data silo imports and the interdisciplinary nature of these kinds of projects.
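A sketch of an ST-GAM with a tensor-product smooth over space and time, in the spirit of the mgcv-style models described above, written with the pygam package (assumed available; te() is its tensor term, and passing n_splines per marginal is assumed supported). Column roles, spline counts, and the toy response are illustrative.

```python
# ST-GAM sketch: tensor-product smooth over (longitude, latitude, time) plus a
# univariate smooth for a completion covariate. Data are synthetic placeholders.
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(-103, -102, n),    # longitude
    rng.uniform(47, 48, n),        # latitude
    rng.uniform(0, 120, n),        # first-production month index
    rng.uniform(5, 15, n),         # proppant intensity (stand-in completion var)
])
y = 50 + 2 * X[:, 3] + rng.standard_normal(n)       # toy 6-month cum oil response

gam = LinearGAM(te(0, 1, 2, n_splines=[5, 5, 5]) + s(3)).fit(X, y)
gam.summary()
```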
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-3048-MS
Abstract
Multiphase flow through fractures is common in many fields, yet our understanding of the process remains limited. In general, this is because some factors which separate multiphase flow from single-phase flow (interfacial tension, wettability, residual saturation) are difficult to characterize and control in a laboratory setting, and are also challenging to implement in traditional numerical simulators. Here, we present a series of lattice Boltzmann simulations of CO2 displacing brine in rough fractures with heterogeneous wettability. This extended abstract focuses on the application of this technique to predict irreducible brine saturation within the fractures. We show that this irreducible brine saturation may be greater than 25%, which could have significant impacts on production estimates from unconventional reservoirs and is typically not accounted for in reservoir simulators. However, performing these simulations at the field scale is not possible due to their computational expense. Therefore, we present a machine learning technique based on deep neural networks, trained on the lattice Boltzmann simulations, to predict the fluid distribution within these fractures at steady state. To our knowledge, this is the first example of machine learning being used to predict the distribution of fluid within a subsurface medium. Here we show that a trained network is able to accurately predict the fluid residual saturation and distribution based solely on the dry fracture characteristics. This demonstrates that machine learning holds promise for upscaling these simulations to a scale relevant to the oil and gas industry.

Introduction

Multiphase flow in fractures has implications for many fields, including nuclear waste disposal, CO2 sequestration, geothermal energy, and the oil and gas industry. During multiphase flow, factors that do not play a role in single-phase flow become important. These factors include the interfacial tension between fluids, the fluids' viscosity ratio, and the wettability between the solid surfaces and each fluid. Compared to porous media, where the effect of wettability has been extensively researched, the influence of wettability during fracture flow is relatively unstudied. This is partly due to the difficulty of characterizing the wettability of natural rock cores and conducting experiments, as well as the difficulty of including wettability in numerical simulations. Therefore, the importance, or lack thereof, of wettability during multiphase fracture flow remains uncertain.
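A toy sketch of the mapping the network learns: dry-fracture geometry in, steady-state fluid occupancy out. The abstract does not specify the architecture, so a tiny fully convolutional PyTorch network and synthetic "lattice Boltzmann" labels stand in here.

```python
# Aperture field -> per-pixel probability of residual brine. Architecture and
# synthetic labels are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),          # per-pixel brine probability
)
aperture = torch.rand(8, 1, 64, 64)             # batch of synthetic rough fractures
occupancy = (aperture < 0.3).float()            # stand-in "lattice Boltzmann" labels

opt = torch.optim.Adam(net.parameters(), 1e-3)
for _ in range(100):
    loss = nn.functional.binary_cross_entropy(net(aperture), occupancy)
    opt.zero_grad(); loss.backward(); opt.step()

sat = net(aperture).mean().item()               # predicted residual saturation
print(f"predicted irreducible brine saturation: {sat:.2f}")
```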
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2552-MS
Abstract
The constant-gas-production-rate methodology is widely used during the stable production period of shale gas wells, so appropriate proration determination is important. Traditional shale gas proration computing for a single well typically applies numerical simulation methods, which require more than 20 sets of well testing data or more than 10 engineering parameters. The well testing data are obtained by conducting costly and time-consuming well tests, and the engineering data are often incomplete and estimated based on assumed simplifications. The numerical simulation methods also entail heavy computational cost, a tedious modelling process, and the requirement of high-performance simulation software. Machine learning-assisted computing methods have attracted significant attention during the past decade. As machine learning requires fewer input parameters while reaching better accuracy, many different algorithms and methods have been developed and analyzed. Under these circumstances, this study proposes a synchronized machine learning based shale gas proration computing method combined with a parameter optimization algorithm. The experiment used a dataset of 165 wells in the Fuling shale gas field, including geological, engineering, and well testing data. Spearman's rank correlation coefficient was implemented to filter out low-relevance factors and obtain the predominant proration parameters of gas wells. The highly relevant parameters ensured the accuracy and reliability of the input for the proposed machine learning algorithm. A simple KNN algorithm and XGBoost alone reached accuracies of 87.9% and 75.6%, respectively, which was not accurate or reliable enough compared with numerical modelling methods. In order to improve the accuracy of the prediction with respect to actual well performance, the method proposed in this study synchronized two machine learning based regression algorithms: XGBoost and KNN were combined to compute the proration of shale gas wells. XGBoost was initially used to calculate a preliminary proration. The initial results generated from this step were added to the initial dataset as a new independent variable to form a new dataset, which was then imported into the k-Nearest Neighbor (KNN) algorithm for final optimization. The accuracy of the synchronized method reached 91.8%, showing a high percentage of overlap between the calculated values and the true values. The result was competitive with common numerical modelling approaches. In addition, fewer than 10 parameters were required as input for this synchronized approach, so a high-accuracy computing model was generated while the tedious modelling process was avoided. This study proposes a protocol for a high-accuracy, synchronized machine learning shale gas proration computing method. It provides a useful reference for stakeholders when making development plans.
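A minimal sketch of the synchronized two-stage scheme: XGBoost produces a preliminary proration, which is appended to the feature matrix as a new variable before a final KNN pass. The feature content (the fewer-than-10 predominant geological/engineering/well-test parameters) is synthetic here.

```python
# Stage 1: XGBoost preliminary proration. Stage 2: KNN on features + preliminary
# estimate. Data are synthetic stand-ins for the 165-well Fuling dataset.
import numpy as np
from xgboost import XGBRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((165, 9))                               # <10 predominant parameters
y = X @ rng.random(9) + 0.05 * rng.random(165)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

xgb = XGBRegressor(n_estimators=200).fit(X_tr, y_tr)
X_tr2 = np.column_stack([X_tr, xgb.predict(X_tr)])     # preliminary proration as feature
X_te2 = np.column_stack([X_te, xgb.predict(X_te)])

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr2, y_tr)
print("R^2 of synchronized model:", knn.score(X_te2, y_te))
```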
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2573-MS
Abstract
The objective of this study is to develop a hybrid model that combines physics and a data-driven approach for unconventional field development planning. We used physics-based reservoir simulations to generate training datasets. These uncalibrated priors were incorporated into data-driven machine learning (ML) algorithms so that the algorithms could learn the underlying physics from reservoir simulation inputs and outputs. The ML model is trained such that it provides fast and scalable applications with good accuracy to find the optimum unconventional field development, accounting for geological properties, completions design, well spacing, and child well timing. We trained ML models with reservoir simulation inputs and cumulative oil production for parent and child wells. A single half-cluster reservoir model was built in which fracture propagation is simulated with pressure-dependent fracture properties and a child well is introduced with different timing and well spacing. After performing a sensitivity analysis to reduce the number of training inputs, more than 20,000 simulation results were generated as the training data. The best accuracy, R² = 0.94, was achieved with the neural network model after tuning hyper-parameters. Then, we coupled the trained model with a genetic algorithm to perform efficient history matching to calibrate model parameters. The hybrid, physics-embedded machine learning model is so efficient that a single-well history match takes only several minutes. The prediction from the history-matched hybrid model is physically meaningful, showing that it properly captures the impact of fracture geometry, child well spacing, and timing on production. With the multiple history matching results, we populated the spatial distribution of estimated ultimate recovery (EUR) and calibrated model parameters. To validate the workflow, a blind test was conducted on selected areas from a US onshore field. The model prediction with the populated parameters was found to be in good agreement with the actual production history, indicating the predictive capability of the hybrid approach. The proposed model can provide quick and scalable solutions that honor the underlying physics to help decision making on unconventional field development. The model can capture interactions between wells, including production degradation due to the child-well effect. By calibrating model input parameters over the entire basin, we can predict EUR, yearly cumulative oil, and economic metrics such as NPV10 at any location in the basin. The impact of different completion designs (e.g., fluid intensity, cluster spacing) on the production profile and economic metrics can also be quickly assessed.
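A sketch of genetic-algorithm history matching against a trained proxy. The fitness function would normally compare proxy-predicted production with the observed history; a quadratic stand-in keeps the example self-contained, and the population sizes and mutation scale are assumptions.

```python
# Simple real-coded genetic algorithm: selection, uniform crossover, Gaussian
# mutation. The fitness is a stand-in for proxy-vs-history misfit.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(6)                              # "true" model parameters

def fitness(pop):                                   # higher is better (negative misfit)
    return -((pop - target) ** 2).sum(axis=1)

pop = rng.random((100, 6))
for _ in range(200):
    parents = pop[np.argsort(fitness(pop))[-20:]]   # keep the 20 fittest
    mom = parents[rng.integers(20, size=80)]
    dad = parents[rng.integers(20, size=80)]
    kids = np.where(rng.random((80, 6)) < 0.5, mom, dad)   # uniform crossover
    kids += 0.02 * rng.standard_normal((80, 6))            # mutation
    pop = np.vstack([parents, np.clip(kids, 0, 1)])

print("best match:", pop[np.argmax(fitness(pop))])
```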
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2786-MS
Abstract
This paper assesses the effectiveness of combining hydraulic fracture monitoring (performed using borehole pressure-wave readings) with facies analysis based on mechanical specific energy (MSE) measurements. Beneficial applications include: 1) evaluation and optimization of completion designs, 2) design and measurement of diversion effectiveness, and 3) placement of the frac as designed, while avoiding offset well communication, to increase estimated ultimate recovery (EUR). The evaluation was performed on a four-well dataset in the Eagle Ford shale. For each well, facies analysis directed pre-job planning, resulting in various frac stage designs that were based on variations in MSE. The stages were monitored during the job, and, based on results, frac stage designs were modified in real time to optimize the next geomechanically similar stage. Far-field diversion was used on targeted stages to limit half-length growth in select wells. On all the wells, the number of clusters per stage was varied and the impact was monitored. The first well was used as a baseline to provide direct, quantifiable correlations between the facies MSE and the measured fracture half-lengths. On subsequent wells, different treatment designs were executed, based on the varying MSE measurements, to obtain the desired half-length. The design changes included variations in the number of clusters per stage, far-field diversion strategies, pump rates, and proppant concentrations and quantities. Throughout the operation, frac performance was monitored continuously and pumping designs were optimized by varying parameters such as perforation cluster spacing, pump rate, diverter, acid volume, pad volume, slurry/proppant design, and volume per linear foot. The completion design of every stage was modified in real time, based on the performance of the fracture system. In each well, the first stages in each rock type served as control stages for calibration purposes. The result was the development of a uniform fracture system, in terms of both its extent and its near- and far-field conductivity. In a series of 204 stages across all four wells, the integration of MSE facies with fracture performance enabled real-time optimization of the fracture system, which delivered significant improvements in production performance and reservoir development and a reduced rate of depletion. The combination of MSE analysis with borehole pressure-wave-based hydraulic fracture monitoring is a paradigm shift that has the potential to revolutionize how horizontal plays are developed. These combined technologies can be used to drive each frac stage to meet frac half-length, height, and conductivity goals. The fit-for-purpose, noninvasive, and scalable qualities of both technologies deliver strong cost efficiencies and can significantly increase EUR from the project acreage. At both the well and field levels, this combination of cost efficiency and customizability is critical to optimizing recovery from the field and increasing the economic life of industrialized shale completions.
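For reference, mechanical specific energy is commonly computed with Teale's (1965) relation (a thrust term plus a rotary term). The abstract does not state which variant the authors use, so treat this as the textbook definition rather than their exact formula; the example values are illustrative.

```python
# Teale's MSE relation in consistent SI units: MSE = WOB/A + 2*pi*N*T/(A*ROP),
# with N in revolutions per second.
import math

def mse_pa(wob_n, bit_area_m2, rpm, torque_nm, rop_m_per_s):
    """Mechanical specific energy (Pa): thrust term + rotary term."""
    rev_per_s = rpm / 60.0
    return (wob_n / bit_area_m2
            + (2 * math.pi * rev_per_s * torque_nm) / (bit_area_m2 * rop_m_per_s))

# Example: 100 kN WOB, 8.5-in bit (~0.0366 m^2), 120 RPM, 8 kN*m torque, 20 m/h ROP
print(f"{mse_pa(1e5, 0.0366, 120, 8e3, 20 / 3600) / 1e6:.0f} MPa")
```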
Proceedings Papers
Cenk Temizel, Celal Hakan Canbaz, Onder Saracoglu, Dike Putra, Ali Baser, Tomi Erfando, Shanker Krishna, Luigi Saputelli
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2878-MS
Abstract
Predicting EUR in unconventional tight-shale reservoirs with prolonged transient behavior is a challenging task. Most methods used in predicting such long-term behavior have shown certain limitations. However, long short-term memory (LSTM), an artificial recurrent neural network (RNN) architecture used in deep learning, has proven to be well suited to classifying, processing, and making predictions based on time series data with lags of unknown duration between important events. This study compares LSTM and reservoir simulation forecasts. Available unconventional tight-shale reservoir data are analyzed by LSTM and predictions obtained. A reservoir simulation model based on the same data is used to compare the LSTM forecast with results from a physics-based model. In the LSTM forecasting, any operational interferences to the well are taken into account to make sure the machine learning model is not impacted by interferences that do not reflect the actual physics of the production mechanism on the behavior of the well. The forecasts from the LSTM machine learning model and the physics-based reservoir simulation model are compared. The LSTM model shows a good level of accuracy in predicting long-term unconventional tight-shale reservoir behavior, using the physics-based reservoir simulation model as a benchmark. An analysis of the comparison shows that the LSTM machine learning model provides robust predictions with its long-term forecasting capability. This allows for better data-driven forecasting of EUR in unconventional tight-shale reservoirs. A detailed analysis is done using the forecast results from the LSTM and the reservoir simulation model, and the key drivers of the EUR response are evaluated and outlined. Deep learning applications are still limited in the oil and gas industry. However, deep learning has key advantages over other conventional machine learning methods, especially where relationships evolve in time and space and are not very clear to the modeler. This study provides a detailed insight into deep learning applications in the oil and gas industry by using LSTM for long-term behavior prediction in unconventional shale reservoirs.
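A minimal PyTorch LSTM forecasting sketch: a sliding window of past rates in, the next-step rate out. The window length, hidden size, and toy decline-curve data are assumptions; the paper does not publish its configuration.

```python
# Next-step production forecasting with a small LSTM on a synthetic decline curve.
import torch
import torch.nn as nn

class RateLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # predict the step after the window

t = torch.linspace(0, 1, 400)
rate = torch.exp(-3 * t) + 0.01 * torch.randn(400)      # toy decline curve
windows = rate.unfold(0, 30, 1)                         # sliding 30-step windows
X = windows[:-1].unsqueeze(-1)                          # inputs: each window
y = windows[1:, -1:]                                    # target: the next step

model = RateLSTM()
opt = torch.optim.Adam(model.parameters(), 1e-2)
for _ in range(200):
    loss = nn.functional.mse_loss(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
```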
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2751-MS
Abstract
In order to maximize the profitability of a well and minimize its cost, three key questions must be answered before drilling: Where should the well be drilled? What completion design should be used? Which fluid type will be produced from the reservoir? These questions must be answered under the premise of maximizing profitability. In this study, we combine a recently developed artificial neural network (ANN) model with a global sensitivity analysis method to present a reduced-order model for addressing these questions. We developed ANN models to predict the oil and gas production of the first year. The model inputs are parameters such as longitude, latitude, true vertical depth, lateral length, fracturing fluid volume, proppant volume, and fracture stages. Next, we use Sobol global sensitivity analysis to identify the dominant input variables and their interactions on the variation of the oil and gas production. Finally, we develop reduced-order models that can be represented as simple algebraic expressions consisting of simple mathematical functions. These equations can then be used by engineers in the field to rapidly predict production in the Eagle Ford shale. The ANN model used in this study predicted the oil and gas production of the first year with reasonable accuracy. Our model suggests increasing the number of fracture stages and the proppant volume in the oil-bearing region; the suggestions for the gas-bearing cases were the opposite of the oil case. The Sobol global sensitivity approach used in this study captures the variation of the output parameters of the ANN model with respect to changes in the input parameters. It also identifies the combined output variation due to changes in multiple input parameters. After ranking the dominant contributing input parameters, the model was used to present a simple function to predict the combined oil and gas production of the first year. The function can be implemented in a simple Excel sheet and rapidly predicts the results. We compared the accuracy of the proposed reduced-order model against the developed ANN model, and results showed less than 5% error in predictions. For the first time, we have combined data science methods with analysis of variance (ANOVA) based methods. This has resulted in a simple mathematical function to rapidly and directly predict the oil and gas production from the Eagle Ford shale, based on input parameters that can be selected before drilling the well. Using the presented methodology, similar functions can be created for other shale plays and will aid engineers and decision-makers in making reliable and quick field development decisions.
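A sketch of the ANN-plus-Sobol pipeline using the SALib package (assumed available). A trained production ANN would replace the surrogate function; the variable names, bounds, and response are illustrative.

```python
# Saltelli sampling + Sobol analysis of a surrogate production model.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["lateral_length", "fluid_volume", "proppant_volume", "stages"],
    "bounds": [[4000, 12000], [1e5, 1e6], [1e6, 1e7], [10, 60]],
}

def surrogate(X):                      # stand-in for the trained ANN
    return 0.5 * X[:, 0] + 0.3 * X[:, 2] + 0.01 * X[:, 0] * X[:, 3] / 1000

X = saltelli.sample(problem, 512)      # Saltelli design for Sobol indices
Si = sobol.analyze(problem, surrogate(X))
print(dict(zip(problem["names"], Si["ST"])))   # total-order sensitivities
```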
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2937-MS
Abstract
Materials are made of distinct constituents. The connectivity of material constituents governs several physical properties, such as transport, mechanical, and electromagnetic properties. High-resolution microscopy imaging of a material is the best way to capture the microstructural aspects describing the distribution, topology, and morphology of the various material constituents. In this study, we develop two novel connectivity-quantification metrics for robust quantification of the connectivity of material constituents captured in high-resolution images. A two-point connectivity function and fast-marching-based travel-time histograms are developed to quantify the connectivity of each type of material constituent captured in the images. The two-point connectivity function for a specific constituent type is computed as a function of the separation distance between two randomly selected pixels belonging to that constituent type. On the other hand, a fast-marching-based travel-time histogram for a specific constituent type is generated by using the fast marching method to compute the time taken by monotonically advancing interfaces, starting from several randomly selected pixels, to travel to each pixel belonging to the specific constituent type. The travel-time histogram indicates the tortuosity of connected paths, whereas the connectivity function indicates the length scale of dominant globular connectivity. As a scalar measure of connectivity, the distributions corresponding to the two metrics are transformed into an average connected distance derived from the connectivity function and an average travel time derived from the fast-marching calculations. The performances of these two connectivity-quantification metrics are tested on 1,500 images belonging to three categories of connectivity, namely poor, intermediate, and good connectivity, with 500 images in each category. The metrics are then evaluated on the organic constituent captured in scanning electron microscopy (SEM) images of rock samples from various shale formations. Material constituents exhibiting high connectivity result in large values of average travel time and average connected distance. The average connected distances for the three categories of connectivity are 140.1, 14.6, and 5.6 pixels, respectively. The average travel times for the three categories of connectivity are 34.1, 5.2, and 1.9 seconds, respectively. The quantifications of connectivity using the two metrics show good agreement with each other and with visual inspection. For the two real SEM images exhibiting good connectivity and poor connectivity of the organic constituent, the average connected distances are 125.9 and 25.5 pixels, respectively, and the average travel times are 24.6 and 6.2 seconds, respectively, which confirms the robust performance of the metrics.
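A sketch of the first metric, the two-point connectivity function: the probability that two pixels of the same constituent a distance r apart fall in the same connected cluster. scipy's connected-component labeling does the clustering; the Monte Carlo sampling scheme and toy image are assumptions, and the fast-marching metric is omitted for brevity.

```python
# Two-point connectivity: P(two same-constituent pixels at distance r are in the
# same connected cluster), estimated by random pair sampling.
import numpy as np
from scipy import ndimage

def two_point_connectivity(binary_img, r, n_pairs=20000, seed=0):
    rng = np.random.default_rng(seed)
    labels, _ = ndimage.label(binary_img)           # connected clusters
    ys, xs = np.nonzero(binary_img)
    i = rng.integers(len(xs), size=n_pairs)
    theta = rng.uniform(0, 2 * np.pi, n_pairs)      # random direction at distance r
    y2 = np.clip((ys[i] + r * np.sin(theta)).astype(int), 0, binary_img.shape[0] - 1)
    x2 = np.clip((xs[i] + r * np.cos(theta)).astype(int), 0, binary_img.shape[1] - 1)
    in_phase = binary_img[y2, x2]                   # second pixel in constituent?
    same_cluster = labels[ys[i], xs[i]] == labels[y2, x2]
    return np.mean(in_phase & same_cluster)

img = np.random.default_rng(1).random((256, 256)) < 0.55   # toy constituent map
print([two_point_connectivity(img, r) for r in (2, 8, 32)])
```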
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2763-MS
Abstract
The Marcellus Shale Energy and Environmental Laboratory (MSEEL) consists of two project areas within the dry gas producing region of the Marcellus shale play in Monongalia County, West Virginia. MSEEL is a collaborative field project led by West Virginia University with Northeast Natural Energy LLC and several industrial partners, and is sponsored by the US Department of Energy National Energy Technology Laboratory. The study areas are located approximately 8.5 miles apart to better understand the vertical and lateral changes in stratigraphy over a short distance. Two vertical pilot wells, MIP-3H and Boggess 17H, were drilled in the fall of 2015 and the spring of 2019, respectively. Core was recovered from the MIP-3H (API: 47-061-01707-00-00), 112 feet (34 m) between depths of 7,445 and 7,557 feet, and from the Boggess 17H (API: 47-061-01812-00-00), 139 feet (42 m) between depths of 7,908 and 8,012 feet. A full suite of triple combo (gamma ray, neutron, density) logs, image logs, and advanced logging tools was run in both wells and calibrated to core analysis. Core analysis includes medical computed tomography (CT) scans, mineralogy and chemostratigraphy determined from handheld X-ray fluorescence (hhXRF) and X-ray powder diffraction (XRD) measurements, and determination of total organic content (TOC). Lithofacies were determined at core scale using traditional core description techniques and medical CT-scan images. Log-scale facies are based on mineralogy and TOC data and were developed using petrophysical logging data calibrated to core data (XRD and pyrolysis data). Chemostratigraphic analysis utilized hhXRF data to determine the major and trace element trends in the cores. In the two wells, six shale lithofacies were recognized at the core and log scales. Both wells show organic-rich facies (TOC > 6.5%) primarily in the middle and lower Marcellus, with a slight decrease in the thickness of this interval in the Boggess 17H. This interval is interpreted as reflecting an increase in paleo-productivity (increased Ni, Zn, and V), decreased sedimentation (decreased detrital proxies), and anoxic to euxinic conditions (increased Mo and chalcophile elements). Paleo-redox conditions in both wells were dynamic throughout deposition, transitioning between euxinic/anoxic and dysoxic/oxic. This trend is seen through elemental proxies and calcite/pyrite concretion distributions.
Proceedings Papers
K. N. Darnell, K. Crifasi, G. Stotts, D. Tsang, V. Lavoie, T. Cross, D. Niederhut, J. A. Ramey, K. Sathaye
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2795-MS
Abstract
Publicly reported hydrocarbon production data offers the opportunity to assess new acreage, compare production with privately held wells, and develop general insights about hydrocarbon plays. However, in some cases, publicly reported data obscures valuable information due to reporting requirements and procedures. For example, in Canada, retrograde condensate reservoirs produce gas and condensate, but the volume is often reported as a total gas-equivalent hydrocarbon volume. Our goal in this study is to deconvolve production histories of aggregated gas-equivalent hydrocarbon volumes into separate production histories for the condensate and gas streams. We use a small proprietary dataset of a few hundred wells, where each well contains a combined gas-equivalent production history matched with separate production histories for the individual condensate and gas products. We do not explicitly consider hydrocarbon composition or fluid properties; rather, we let our machine learning algorithm discern implicit relationships between location, which is directly correlated to fluid properties, hydrocarbon composition, and rock properties, and controllable parameters such as interwell spacing and completions designs. Using a held-out test set, our algorithm accurately captures cumulative volumes of each product over its first two years of production and captures condensate-to-gas ratios (CGR) over the same time span. This method reduces mean absolute percent errors in CGR at IP720 by 10-20% when compared with the traditional approach of estimating in-place CGR in conjunction with a simple decline curve, and it accurately predicts the decline curve shapes of the CGR history without any human bias. We apply our method to separate datasets from the Montney and Duvernay plays. While different features appear to be more important between the plays, the method offers comparable accuracy in both. We ultimately reduce our feature dataset to the publicly reported total gas-equivalent production history, digitized maps of relevant geological properties, and spacing and completions parameters. This feature dataset is all that is necessary to reproduce proprietary production histories of separated condensate and gas streams with a mean absolute percent error in the first two years of less than 30% on average.

Introduction

Hydrocarbon reservoirs produce a variety of output streams as a byproduct of hydraulically fractured and horizontally drilled wells, where the amount and distribution of the individual streams is a function of operator choices and inherent reservoir conditions and characteristics. Under many regulatory jurisdictions, operators are required to publicly report produced volumes broken down into stream-specific volumes, providing ample opportunity for non-drilling entities, such as competitors, investors, academic institutions, and technology companies, to assess the outcome of drilling choices, including spacing and completions designs in addition to well placement. With auxiliary information, publicly reported production can be used to investigate hypothetical inquiries, such as whether high gas production and low oil production was the result of tight spacing, low stimulation, or bubble point conditions, without ever taking on risk or allocating capital. However, in some localities, such as Canada, publicly reported information is transformed in a way that obscures stream-specific information and, hence, precludes any of the aforementioned analyses.
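A sketch of the deconvolution idea: jointly predict the condensate and gas streams from the combined gas-equivalent history plus location/completion features. A multi-output gradient-boosted model stands in for the authors' unspecified algorithm, and the synthetic data encode a latent condensate-gas split.

```python
# Deconvolve total gas-equivalent volume into [condensate, gas] with a
# multi-output regressor. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300                                          # "a few hundred wells"
features = rng.random((n, 10))                   # location maps, spacing, completions
cgr = 0.2 + 0.6 * features[:, 0]                 # latent condensate-gas split
ge = rng.random(n) * 1e6                         # reported gas-equivalent volume
targets = np.column_stack([ge * cgr, ge * (1 - cgr)])   # [condensate, gas]

X = np.column_stack([features, ge])
X_tr, X_te, y_tr, y_te = train_test_split(X, targets, random_state=0)
model = MultiOutputRegressor(GradientBoostingRegressor()).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```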
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2743-MS
Abstract
Although machine learning models can provide tremendous value to the unconventional oil and gas industry, interpreting their inner workings and outputs can be a laborious, time-consuming, and difficult process. Here we present a novel method for extracting an overall rock quality index from a machine learning model trained on well logs. This rock quality index (RQI), which we term geoSHAP, can be used for performance benchmarking, completions tailoring, and acreage evaluation workflows. We trained a decision-tree-based model on a regional Williston Basin dataset. The model predicts oil, gas, and water production at 30-day increments out to IP720 based on training features of completions design, petrophysical grids, and spacing/stacking parameters. We started with over 400 petrophysical grids and reduced them to 5 principal components using a Gaussian kernel principal components analysis. We then employ SHAP values (SHapley Additive exPlanations), which reflect how much each individual feature contributed to the model prediction. To extract our RQI, we sum the SHAP values of the principal geologic components for each well at each IP day. These summed geoSHAP values reflect the overall rock quality around the basin, identifying sweet spots and low-performing areas. The model is able to identify high-performing areas on the Nesson Anticline, Antelope Anticline, Fort Berthold area, and Parshall/Sanish. We also show how the geoSHAP trends with overall operator performance and can be used to benchmark performance relative to expectation. This method is repeatable across tree-based machine learning algorithms. It removes the need to construct partial dependence plots or to take the time-consuming step of running synthetic pads across the entire basin. Additionally, this method simplifies the selection of petrophysical grids and removes issues with multicollinearity that can debilitate machine learning models. GeoSHAP provides a purely empirical perspective on rock quality that can be compared to more prescriptive, assumptions-laden traditional methods, such as combining Archie's equation with recovery factors. It also provides a generalizable method applicable to models built with simpler, easier-to-obtain data such as formation tops and isopachs.

Introduction

Over the past several years, machine learning methods have found increasingly common usage for well performance prediction and design optimization in unconventional reservoirs. These algorithms offer several advantages that have made them attractive to engineers and geoscientists, including increased accuracy, the ability to deal with complex problems, and reduced bias. However, difficulty in assembling datasets and lack of interpretability have limited widespread usage. Because machine learning models thrive on large datasets, operators are often forced to incorporate publicly available data from non-operated wells (whether directly from a state database or through a vendor). Even if an operator does have hundreds or even thousands of horizontal wells within a given basin, their implemented completions and spacing/stacking configurations may poorly sample the distribution of design parameters, limiting the effectiveness of a single-operator model.
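A sketch of the geoSHAP extraction using the shap package: fit a tree-based model on geologic principal components plus design features, compute SHAP values, and sum the geologic columns per well. The column layout, random-forest choice, and synthetic data are assumptions.

```python
# geoSHAP sketch: per-well sum of SHAP contributions from the geologic
# principal components of a tree-based production model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
geo_pcs = rng.random((n, 5))                    # 5 kernel-PCA geologic components
design = rng.random((n, 4))                     # completions / spacing features
X = np.column_stack([geo_pcs, design])
y = geo_pcs @ rng.random(5) + design @ rng.random(4)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
shap_vals = shap.TreeExplainer(model).shap_values(X)    # (n_wells, n_features)
geo_shap = shap_vals[:, :5].sum(axis=1)                 # rock quality index per well
print(geo_shap[:10])
```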
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-3036-MS
Abstract
The Tuscaloosa Marine Shale (TMS) is one of the few oil-prone shale reservoirs in the United States with a track record of oil production. It is located at the boundary of Mississippi and Louisiana, with an estimated several billion barrels of potentially recoverable oil. Compared to its age-equivalent counterpart in Texas (the Eagle Ford Shale), which is mostly calcite-rich, the TMS is richer in clay minerals. Moreover, the TMS shows fine-scale heterogeneity that is challenging to characterize at a large scale. Being both oil- and clay-rich, together with the scarcity of public data, is the motivation behind studying the storage capacity of the TMS. In this study, new porosity data from nine wells in the core producing area of the TMS have been analyzed to provide a statistical overview of the TMS storage capacity. The dataset covers more than four hundred porosity measurements spanning from West Feliciana Parish to Tangipahoa Parish (the eastern part of the TMS). The depths of the TMS wells range from 11,668 ft to 15,224 ft, and a significant amount of fine-scale heterogeneity was observed in the cores. According to the results, total clay, quartz, and calcite are the primary mineral components. Clay is the most influential mineral, with an average composition of 44.4 wt%, and it controls the porosity. The results also show an increase in silty (quartz) laminae toward the base of the TMS. This contributes to a slight increase in porosity where the calcite and clay contents are reduced. TMS porosity ranges from 1% to 10.6% with a mean value of 5.7%. Each well's mean porosity was determined, and the maximum difference between any well's mean porosity and the overall mean porosity is 0.86%. This paper presents the results of a large dataset on the storage capacity of the Tuscaloosa Marine Shale. Compared to two major liquid-rich shale reservoirs (the Eagle Ford and the Middle Bakken), the TMS is a clay- and liquid-rich formation with limited publicly available knowledge. This study is part of a large multidisciplinary project known as the Tuscaloosa Marine Shale Laboratory (TMSL), which integrates geology, geophysics, and petroleum engineering to improve TMS reservoir and completion quality.
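A small pandas sketch of the summary statistics quoted above: per-well mean porosity and each well's deviation from the overall mean. The data values are synthetic toys; only the computation pattern is illustrated.

```python
# Per-well mean porosity and maximum deviation from the overall mean.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "well": np.repeat([f"W{i}" for i in range(9)], 45),   # 9 wells, 400+ samples
    "porosity": rng.uniform(1.0, 10.6, 9 * 45),           # percent
})
overall = df["porosity"].mean()
well_means = df.groupby("well")["porosity"].mean()
print("overall mean porosity:", round(overall, 2))
print("max |well mean - overall mean|:", round((well_means - overall).abs().max(), 2))
```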
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-3077-MS
Abstract
Infill wells’ underperformance in the Permian Basin has raised concerns over decreasing overall production, which can negatively impact future reserve estimates and field economics. The underperformance of infill wells is usually due to changes in the reservoir stress state caused by the parent well, resulting in asymmetric fracturing and inefficient drainage of child wells. Child well performance is often compared to that of a single parent, and production variability cannot be explained or predicted. Recent publications ignore two important factors about offset wells: the effect of multiple parent wells on a single child well and the effect of down-spacing between child wells. In addition to the effects of offset wells, existing models lack a robust understanding of the geological and geomechanical properties that may influence parent-child performance. To address these issues, we present a new workflow designed to generate a depletion score by combining spatial and temporal variables, specifically well spacing and parent wells’ time on production. A Delaware Basin well-spacing database was generated that contains the lateral and vertical distances between reference wells and the nearest offset wells. These distances were used to calculate the depletion score for each well in the dataset, with each offset well’s weight in the scoring system proportional to its time on production relative to the reference well. The effect of geological and geomechanical properties on parent-child fracture interference is incorporated by capturing the vertical lithofacies variability, or vertical heterogeneity, of the interval between parent and child wells. The vertical distribution of each lithofacies is calculated as the number of beds of a specific facies with distinct geomechanical properties, such as carbonates, and the average thickness of those beds. The overall vertical heterogeneity of the reservoir is quantified by calculating the number of facies changes per foot within each well over the targeted interval. In the statistical model, the depletion score correlates more strongly with child well performance, and has higher predictive power, than well spacing or parent time on production individually. Multiple parent wells and closely spaced infill wells increase the depletion score and the underperformance of child wells. For wells drilled in the same bench, vertical facies variability correlates with child underperformance related to increased well interference. Higher interference could be due to the tendency of hydraulic fractures to propagate laterally rather than vertically in a strongly layered bench, which would increase the interaction between parent and child wells drilled in that same bench. These new engineering and geologic variables have more explanatory power in multivariate statistical modeling. The multivariate statistical model is trained on 460 Delaware Basin wells and predicts the percent change in new-well production relative to existing (same-bench) wells due to changes in completion and spacing. The approach is useful in evaluating infill drilling and completion scenarios.
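The depletion score is described only qualitatively (distance-based contributions from offset wells, weighted by their time on production ahead of the child well), so the Python sketch below is one plausible form under stated assumptions; the functional form, field names, and values are hypothetical, not the paper's formula.

# Hedged sketch of a depletion score in the spirit described above: each offset
# well contributes inversely with distance, weighted by how long it produced
# before the reference (child) well came online. Functional form is assumed.
import math

def depletion_score(offsets, ref_first_prod_month):
    """offsets: list of dicts with lateral/vertical distances (ft) and the
    offset well's first-production month index; larger score = more depletion."""
    score = 0.0
    for w in offsets:
        dist_ft = math.hypot(w["lateral_ft"], w["vertical_ft"])
        months_on_prod = max(0, ref_first_prod_month - w["first_prod_month"])
        score += months_on_prod / max(dist_ft, 1.0)  # older, closer parents dominate
    return score

offsets = [
    {"lateral_ft": 660, "vertical_ft": 50, "first_prod_month": 12},
    {"lateral_ft": 1320, "vertical_ft": 0, "first_prod_month": 30},
]
print(depletion_score(offsets, ref_first_prod_month=48))

A vertical-heterogeneity term (the paper's facies changes per foot over the target interval) could be appended as an additional feature alongside this score in the multivariate model.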
Proceedings Papers
Alireza Shahkarami, Robert Klenner, Hayley Stephenson, Nithiwat Siripatrachai, Brice Kim, Glen Murrell
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2753-MS
... Data science techniques have proven useful with the high volume of data collected in unconventional reservoir development workflows. In this paper, we present an analytics and machine learning use case for operations to minimize deferred production and quantify long-term production...
Abstract
Data science techniques have proven useful with the high volume of data collected in unconventional reservoir development workflows. In this paper, we present an analytics and machine learning use case for operations to minimize deferred production and quantify long-term production impacts due to frac hits in the Bakken and Three Forks formations during infill development. The outcomes of this study used significant amounts of data to provide the operator with a more efficient shut-in strategy that can contribute to saving capital expenses and optimizing the rate of return on investment. A workflow is presented that improves our approach to the shut-in radius required when offset wells experience intense pressure spikes within a given radius during the stimulation of infill wells. These pressure spikes are caused by connections between hydraulic fracture networks and at times can damage the structure of the well, promote sand production, and/or impact post-completion production. To mitigate these impacts, operators may shut in all offset wells to help reduce nearby pressure sinks or use re-pressurization techniques such as high-pressure/low-rate injection. Determination of the shut-in distance is often based on analogous operations and/or experience alone, and tends to be conservatively derived, potentially leading to the unnecessary shut-in of wells that might otherwise not experience any pressure event and may have been deemed low risk. Shutting in too many wells can be the largest expense incurred by a new completion, as operators not only work over the offset wells but also defer production for the entirety of the completion job. On the other hand, an underestimated shut-in distance might enhance fracture-driven interference (frac hits) during a completion job. The use case applied the workflow to a large field dataset. We underscore that historical data can be used to quantify zonal communication and to provide recommendations for a shut-in radius in future operations. With this novel approach, we analyzed several Bakken well pads, all in close proximity. The analysis included the following datasets: static geological/formation data, completion data, one-second pressure data, and production history. The method used in this study can be defined as a three-step process: 1) employing analytics to assess and evaluate fracture-driven interference during the completion of new infill wells; 2) quantifying the long-term production impact that may occur after shutting in an offset well; and 3) applying machine learning techniques to determine the optimal offset distance and degree of communication. Pressure data from the offset monitoring wells were used to determine the presence of fracture-driven communication between wells during a completion operation. Production data were also used to quantify the long-term impact of shut-ins and fracture-driven interference. Machine learning techniques were then applied to measure the influence of offset distance (and other parameters such as completion design, depletion history, and zonal variance) on communication. The results of the analysis indicated the distance at which communication most often occurs in offset wells during the hydraulic completion of new infill wells. Based on this information, an optimized shut-in distance was proposed for offset wells in the area, reducing the previous radius by 250-550 ft and thus improving production metrics.
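Step 1 of the process, flagging fracture-driven interference from one-second offset pressure data, can be illustrated with a simple rate-of-change detector. The threshold and synthetic data in the Python sketch below are assumptions for illustration; the paper does not specify its detection criteria.

# Illustrative frac-hit flagging: threshold the pressure rate of change in an
# offset monitoring well's one-second pressure series. Values are assumed.
import numpy as np

def flag_pressure_events(pressure_psi, dt_s=1.0, threshold_psi_per_min=5.0):
    """Return sample indices where the pressure ramp exceeds the threshold."""
    dpdt = np.gradient(pressure_psi, dt_s) * 60.0  # psi per minute
    return np.flatnonzero(dpdt > threshold_psi_per_min)

# Synthetic series: 10 minutes flat, then a 400 psi ramp over 5 minutes
p = np.concatenate([np.full(600, 3000.0), 3000.0 + np.linspace(0.0, 400.0, 300)])
events = flag_pressure_events(p)
print(f"first interference flag at t = {events[0]} s" if events.size else "no events")

Correlating such flags with offset distance across many completion jobs is what lets the workflow back out the radius inside which communication is likely.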
Proceedings Papers
Paper presented at the SPE/AAPG/SEG Unconventional Resources Technology Conference, July 20–22, 2020
Paper Number: URTEC-2020-2787-MS
...), leaving significant amounts of unrecovered hydrocarbon in the subsurface...
Abstract
The dynamic nature of unconventional-reservoir developments calls for fast and reliable history-matching methods for simulation models. Here, we apply an assisted history-matching (AHM) approach to a pair of wells in the Wolfcamp B and C formations of the Midland Basin, for which production history is recorded for two periods: primary production and gas injection (huff-n-puff, or HNP). The recorded history of gas injection reveals severe inter-well interactions, underscoring the importance of modeling fracture interference. Fracture segments are modeled with the embedded discrete fracture model (EDFM). Inter-well communication is modeled using long fractures that become active only during gas injection. We apply a Bayesian AHM algorithm with a neural-network-proxy sampler to quantify uncertainty and find the best model matches. For each well, we use primary production observations to invert for 13 uncertain parameters that describe fracture properties, initial conditions, and relative permeability. Subsequently, by minimizing pressure- and rate-misfit errors during the HNP period, we evaluate the size and conductivity of inter-well fractures. For each AHM study, the objective is to minimize a cost function that is a linear combination of misfit errors between simulation results and observation data for well pressure and the production rates of oil, water, and gas. The selected solution samples were used to perform probabilistic forecasts and assess the potential of HNP enhanced oil recovery (EOR) in the area of interest. From 1,400 total simulation runs, the AHM algorithm generated 100 cases (solutions) that satisfy the predefined selection criteria. Even though the parameter prior distributions were the same for the two wells, the marginal posteriors were dissimilar, and the relative permeability curves of solution candidates can vary significantly from one another. The EOR prospects proved favorable for the wells of interest: we report 30% and 81% incremental recovery for the P50 predictions of wells BH and CH, respectively.
Introduction
Exploitation of tight oil resources (with formation permeability less than 0.1 mD) has been increasing as horizontal drilling and hydraulic fracturing technologies continue to improve. In 2018, 61% of total US crude oil production came from tight formations (EIA 2019). A typical tight oil well is completed over multiple stages, creating hundreds of fracture clusters along a horizontal wellbore that extends for thousands of feet. This completion forms a large network of fractures that connects the wellbore to a large surface area of the shale formation. Although initial well productivity can be quite high, it typically declines very rapidly and remains low during long-term production. Pressure depletion occurs quickly because of the small permeability of tight pores. As a result, recovery factors during primary production are only in the range of 1 to 10% of the original oil in place (EIA 2013), leaving significant amounts of unrecovered hydrocarbon in the subsurface.
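The abstract describes the AHM objective as a linear combination of pressure- and rate-misfit errors. A minimal Python sketch of that structure follows; the weights, normalization, and series names are assumptions, not the paper's exact formulation.

# Hedged sketch of an AHM cost function: a weighted linear combination of
# normalized misfits between simulated and observed pressure and phase rates.
import numpy as np

def misfit(sim, obs):
    """Normalized RMS misfit between a simulated and an observed series."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / (np.std(obs) + 1e-12)

def ahm_cost(sim, obs, weights=(1.0, 1.0, 1.0, 1.0)):
    """sim/obs: dicts holding 'pressure', 'oil', 'water', 'gas' time series;
    the Bayesian sampler would seek parameter sets that minimize this value."""
    keys = ("pressure", "oil", "water", "gas")
    return sum(w * misfit(sim[k], obs[k]) for w, k in zip(weights, keys))

In the workflow described above, a neural-network proxy would stand in for the full simulator when evaluating this cost across candidate parameter sets, with only promising candidates passed to full simulation runs.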