Razi Gaskari: 1-8 of 8 results
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the Eastern Regional Meeting, October 17–19, 2007
Paper Number: SPE-111204-MS
Abstract
About one-fifth of the natural gas used by Americans each winter comes from natural gas storage sites. Gas storage is the primary means for the gas industry to manage fluctuations in supply and demand. Natural gas can be stored in a variety of ways; most commonly it is held in underground formations, in depleted oil or gas reservoirs, or in natural aquifers. Many gas storage wells show a decline in deliverability as a function of time due to several damage mechanisms. Remedial operations such as stimulation and workovers are used to restore the loss in deliverability and to enhance the productivity of a well. Candidate selection for the stimulation or workover process is generally based on well history. Skin factor is an important parameter for predicting well performance. Skin is usually calculated from a multi-rate well test (MRT). However, performing an MRT on a regular basis is unattractive when the economics are considered. First, performing a well test may cause temporary production or injection interruptions. Second, the cost associated with a well test is considered an operating expense, a fact that does not help the overall economics of operating a gas storage field. Single-rate tests (SRT) are also performed to estimate deliverability, but they do not contain sufficient data to estimate the true skin factor. The objective of this study is to introduce a new methodology to enhance current practices of estimating the true skin factor from an SRT. The method consists of history matching the actual MRT and then estimating the skin value from an SRT using the history-matched model. Using this methodology, it is shown that the change in skin can be estimated with reasonable accuracy.
Introduction
Traditionally, gas storage wells in the Appalachian basin are evaluated using multi-rate or single-rate test data. The results of the single-rate test are extrapolated to estimate the Absolute Open Flow (AOF) value of the well, and the new value is compared with previous values to decide whether remedial treatments are needed. Candidate selection is generally based on the well performance history. Skin factor is an important parameter for predicting this performance, and the change in skin (ΔS) is generally a good indicator of whether a well needs remedial operations. Skin factor is usually calculated from a well test by conventional well test analysis. To maintain an accurate estimate of well performance for candidate selection, gas storage wells need to be tested regularly. Testing regularly, however, is not very economical: first, a well test may cause temporary production or injection interruptions, and second, the associated cost is considered an OpEx (operating expense), a fact that does not help the overall economics of operating a gas storage field. Each year gas storage operators spend thousands of their OpEx dollars to test storage wells. In industry, multi-rate tests (MRT) and single-rate tests (SRT) are performed either to obtain a qualitative assessment for candidate well selection by comparing the AOF (absolute open flow potential) of the wells or to obtain an approximate estimate of the skin factor. Since an MRT causes temporary production interruptions and an SRT does not contain sufficient data to estimate the true skin factor, getting the best possible and most reliable reservoir and wellbore characterization from the tests that are performed becomes very important.
The objective of this study is to introduce a new methodology to enhance current practices of estimating the true skin factor from an SRT. The method consists of history matching the actual MRT and then estimating the skin value from the SRT using the history-matched model. Using this methodology, the change in skin has been studied. The aim is to optimize well test analysis, which benefits storage operators in two ways: it reduces OpEx by reducing the number of multi-rate tests that must be performed, and therefore the number of production/injection interruptions, and by improving the analysis process it makes a more realistic assessment of damage over time (identified by skin) possible. The methodology was verified using a commercial numerical simulator and well test data representative of an actual gas storage field located in Ohio.
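For a sense of how a single-rate test can yield a skin estimate once the remaining unknowns are fixed by a history-matched model, the sketch below backs skin out of the standard pseudo-steady-state gas deliverability equation (pressure-squared form, field units). This is a simplified analytical illustration, not the paper's numerical workflow; the function name and all input values are assumptions.

```python
import math

def skin_from_single_rate_test(q_mscfd, p_res, p_wf, k_md, h_ft,
                               temp_R, mu_cp, z_avg, re_ft, rw_ft):
    """Back out skin from one rate/pressure pair (a single-rate test), using
    the pseudo-steady-state gas deliverability equation (field units):

        q = k*h*(p_res^2 - p_wf^2) / (1422*T*mu*z*(ln(re/rw) - 0.75 + s))

    k, h, re, and rw are assumed to come from a previously history-matched
    multi-rate test model, so only the skin s remains unknown.
    """
    lhs = k_md * h_ft * (p_res**2 - p_wf**2) / (1422.0 * temp_R * mu_cp * z_avg * q_mscfd)
    return lhs - math.log(re_ft / rw_ft) + 0.75

# Illustrative numbers only (not from the paper):
s = skin_from_single_rate_test(q_mscfd=2500.0, p_res=1800.0, p_wf=1500.0,
                               k_md=15.0, h_ft=30.0, temp_R=560.0,
                               mu_cp=0.014, z_avg=0.88, re_ft=1500.0, rw_ft=0.3)
print(f"Estimated skin: {s:.2f}")
```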
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the Eastern Regional Meeting, October 17–19, 2007
Paper Number: SPE-111205-MS
Abstract
Intelligent Production Data Analysis (IPDA) is a new methodology for reservoir characterization based only on monthly production rate data. This technique combines conventional methods of production data analysis (decline curve analysis, type curve matching, and history matching) with intelligent systems. The study targets the validation of this methodology under a controlled environment, with three main objectives: identifying sweet spots, forecasting reserves, and recognizing underperformer wells. The study investigates the behavior of five different reservoirs, modeled using a commercial simulator. The structure, parameters, and heterogeneity of each configuration were inspired by existing formations. Records of production rate data were generated from the simulated fields (both single- and multi-layer formations) and used as input to perform an Intelligent Production Data Analysis. The findings highlight the strength of this technique in tracking fluid movement in the reservoir as a function of time. Furthermore, the study identifies some limitations and circumstances under which the analysis may not result in correct recommendations.
1. Introduction
The recent rise in the global demand for energy has significantly increased oil and gas prices. In the last few years E&P companies have reported record profits. A new reality rules the energy market, and large amounts of money have been invested in order to increase production capacity. Mature fields, which were not profitable in the late 1990s, have now become very attractive to major oil and gas producers, and effective revitalization of these reservoirs has come to play a big role in the industry. Recovery techniques have improved tremendously over the last decade. However, lack of data is a problem with mature fields: production rate is about the only data that can be easily accessed in most brown fields. But what can be done with this data? Recently, a new technique for production analysis was introduced. The procedure, called Intelligent Production Data Analysis (IPDA), combines the well-known methods for production data analysis (decline curve analysis, type curve matching, and history matching) with intelligent systems (neural networks, genetic algorithms, and fuzzy logic). The results provide a unified set of reservoir characteristics based only on records of monthly production rate data. The fact that this information can be found in public records makes IPDA a valuable tool for independent asset evaluation prior to lease acquisitions. The IPDA technique has been applied in several fields throughout the United States (Rockies and Mid-Continent). One of these cases was the Golden Trend fields in Oklahoma [1]. The only available data in this field was monthly production rate data; the application of the IPDA technique provided both intrinsic reservoir properties and the distribution of remaining reserves throughout the field. In 2006, Jalali applied the IPDA technique to characterize the Carthage Field in the Cotton Valley Formation in Texas. Production records from 349 wells were employed during this analysis. The result was a unified set of reservoir indicators such as Estimated Ultimate Recovery (EUR), remaining reserves at different times, permeability distribution, and drainage area.
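One of the three objectives above is recognizing underperformer wells. The sketch below illustrates the general idea with a deliberately simple stand-in: fit an exponential decline to each well's early history and flag wells whose actual cumulative production falls well below the fitted expectation. The exponential-decline assumption, the 70% threshold, and the data layout are illustrative choices, not the IPDA criteria.

```python
import numpy as np

def flag_underperformers(wells, threshold=0.7):
    """Flag wells whose actual cumulative production falls well below the
    cumulative implied by an exponential decline fitted to their own early
    history. `wells` maps well name -> (months, monthly_volumes).
    The exponential model and 0.7 threshold are illustrative assumptions.
    """
    flagged = []
    for name, (t, q) in wells.items():
        t = np.asarray(t, dtype=float)
        q = np.asarray(q, dtype=float)
        early = slice(0, max(6, len(t) // 3))              # fit on early history only
        slope, intercept = np.polyfit(t[early], np.log(q[early]), 1)
        Di, qi = max(-slope, 1e-6), np.exp(intercept)      # q(t) = qi * exp(-Di * t)
        expected_cum = (qi / Di) * (1.0 - np.exp(-Di * t[-1]))
        actual_cum = q.sum()
        if actual_cum < threshold * expected_cum:
            flagged.append((name, actual_cum / expected_cum))
    return flagged

# Usage with made-up histories (months, monthly volumes in MMcf):
# wells = {"Well-07": (range(36), rates_07), "Well-12": (range(36), rates_12)}
# print(flag_underperformers(wells))
```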
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, October 11–13, 2006
Paper Number: SPE-104554-MS
Abstract
The U.S. Geological Survey estimates that tight-gas sands and shales may contain up to 460 trillion cubic feet (Tcf) of gas in the U.S. alone, almost three times the amount of currently proven gas reserves, and that some 135 Tcf may be technically recoverable. Since natural fractures are the main source of permeability in gas shales, the key to producing this vast resource is locating areas where natural fractures are abundant. By placing infill wells in such locations, operators can significantly increase gas recovery. In most fields that produce from gas shales, monthly production rate is the only data that is available. This paper introduces a new strategy for estimating major and minor natural fracture trends using production data. The framework includes the use of a Geographic Information System (GIS) as an environment to analyze the production data, where a preliminary estimate of natural fracture trends is developed. Then, by superimposing the results of the preliminary estimation on reservoir quality indices developed using a fuzzy pattern recognition technique, the uncertainty associated with the initial estimation is reduced. The technique is validated using a set of gas shale wells in Kentucky and West Virginia.
INTRODUCTION
In most fields that produce from gas shales, the monthly production rate is the only data that is available. Shales hold a significant amount of gas: the U.S. Geological Survey estimates that tight-gas sands and shales may contain up to 460 trillion cubic feet (Tcf) of gas in the U.S. alone, almost three times the amount of currently proven gas reserves, and that some 135 Tcf may be technically recoverable. Because shale production is tied to the existence of a natural fracture system rather than to conventional geology, one well may come on at 10³ Mcf/d while its neighbor may yield nothing [1]. The high rate of production in shale reservoirs is achieved by connecting the wellbore with the natural fractures, which are the trap and transport system for the reservoir fluids [2]. Identification of infill drilling locations has been challenging, with mixed results, due to the random nature of the natural fractures in the shale. In this study, the natural fractures of the shale are treated as a random variable. The most logical approach is an integrated one combining probability theory and intelligent systems, with the understanding that a deterministic approach is almost impossible to develop, no matter how many resources are devoted to the problem. All we can do is use the history of development to increase our probability of success. There will be hits and misses in our predictions; the goal is to increase the probability of hits and decrease the probability of misses. Due to the nature of intelligent-systems approaches, the methodology improves with time. The technique is validated using a set of gas shale wells in Kentucky and West Virginia (Figure 1).
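The reservoir quality indices mentioned above come from a fuzzy pattern recognition technique. As a minimal illustration of how fuzzy sets can turn a raw production indicator into a 0-1 quality index, the sketch below uses three triangular membership functions and a weighted-average defuzzification. The set boundaries and scores are assumptions for demonstration, not the paper's calibration.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (c - x) / (c - b + 1e-12)), 0.0, 1.0)

def fuzzy_quality_index(indicator, lo, mid, hi):
    """Map a production indicator (e.g. best-year cumulative per well) to a
    0-1 reservoir quality index using three fuzzy sets (poor/average/good)
    and weighted-average defuzzification. Set boundaries and the 0/0.5/1
    scores are illustrative assumptions, not the paper's calibration.
    """
    x = np.asarray(indicator, dtype=float)
    poor = triangular(x, lo - (mid - lo), lo, mid)
    average = triangular(x, lo, mid, hi)
    good = triangular(x, mid, hi, hi + (hi - mid))
    memberships = np.vstack([poor, average, good])          # shape (3, n_wells)
    scores = np.array([0.0, 0.5, 1.0])                      # score of each set
    return (scores[:, None] * memberships).sum(axis=0) / (memberships.sum(axis=0) + 1e-12)

# Usage with made-up indicator values (MMcf in the best 12 months):
# print(fuzzy_quality_index([20.0, 85.0, 150.0], lo=30.0, mid=80.0, hi=140.0))
```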
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, October 11–13, 2006
Paper Number: SPE-104571-MS
Abstract
The main goal of this paper is to modify and apply state-of-the-art intelligent, optimum portfolio management to gas storage fields in order to optimize the return on investment associated with well remedial operations. It continues the development of a methodology for candidate selection and stimulation design and optimization using Artificial Intelligence techniques. Data from an actual gas storage field were used to test the results; the project data include wellbore, completion, perforation, stimulation, well-test, and reservoir data. For candidate selection in gas storage fields, operators commonly predict the effectiveness of a stimulation using three parameters: one is the Peak Day rate, the second is the Absolute Open Flow, and the third is the change in skin, provided the permeability values in the field do not vary much. The software developed in parallel with this selection methodology includes an easy-to-use interface that allows the user to edit the data for a gas storage field, perform well-test analysis, and use neural networks in association with a Genetic optimization tool. The software ranks the wells according to the maximum change in skin value and recommends the best stimulation slurry based on the weighting given to skin and the cost of stimulation. A decision to select the ranked wells for re-stimulation can be made accordingly.
Background
Each year gas storage operators spend hundreds of thousands of dollars on workovers, completions, and re-stimulation of storage wells in order to battle the inevitable decline in deliverability caused by well damage over time. A typical storage field has tens if not hundreds of production wells. Each well responds to a remedial operation in its own unique way that is a function of a set of uncontrollable parameters, such as porosity and permeability, and a set of controllable parameters, such as completion and stimulation practices, which can be controlled by the operator and the service company performing the remedial job and hence identified by the methodology presented in this paper. The software application then allows the user to analyze the remedial operations performed in the storage field and identify the most appropriate candidates for such operations, taking into account certain economic parameters. This methodology is a modification of previous efforts to make intelligent decisions for re-stimulation candidate selection of gas storage wells. In all the previous attempts, decisions were based on well deliverability parameters (either the Peak Day rate or the Absolute Open Flow). The study in this paper takes into account a new reality faced by intelligent-systems prediction of candidate wells: sometimes artificial neural networks cannot predict with reasonable accuracy from well deliverability parameter data alone. The new methodology uses the change in skin as the optimization parameter, and the accompanying software tool uses different trained neural networks and offers the flexibility to design the Genetic Algorithm based on different parameters. This flexibility allows engineers to use the same software to select candidate wells for different storage fields, a capability that was not previously available. Figure 1 shows a screenshot of this software, which has been named ‘Intelligent Storage’. Economics plays a pivotal role in re-stimulation candidate selection: each year engineers have a fixed budget with which to re-stimulate a few wells out of hundreds.
The new economic optimization tool uses a Genetic Algorithm based on both skin and the cost of stimulation to optimize the stimulation recipe.
Methodology
To make an intelligent decision about candidate well selection, we need to judge whether a stimulation was good or bad. To support that decision, we collected all the relevant data and placed it in a database so that it could be manipulated and examined as needed. A permeability analysis was performed for all the wells in the storage field, and a frequency graph was plotted to check the variability of permeability values in the field, as shown in Figure 2.
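To illustrate how a Genetic Algorithm can weigh skin reduction against stimulation cost, the sketch below evolves a three-variable stimulation design against a toy surrogate. The surrogate function stands in for the paper's trained neural network; its coefficients, the cost model, the design bounds, and the objective weights are all made-up assumptions.

```python
import math
import random

# Hypothetical surrogate standing in for the trained neural network: maps a
# stimulation design (fluid volume, proppant mass, pump rate) to a predicted
# change in skin. More negative is better. Coefficients are illustrative.
def predicted_delta_skin(design):
    volume, proppant, rate = design
    return -(0.25 * math.sqrt(volume) + 0.30 * math.sqrt(proppant) + 0.50 * math.sqrt(rate))

def stimulation_cost(design):
    volume, proppant, rate = design
    return 20.0 * volume + 8.0 * proppant + 500.0 * rate    # illustrative cost model, $

def fitness(design, w_skin=0.7, w_cost=0.3):
    # Weighted objective: reward skin reduction, penalize cost (both rescaled).
    return w_skin * (-predicted_delta_skin(design) / 10.0) - w_cost * (stimulation_cost(design) / 50000.0)

BOUNDS = [(200, 1500), (50, 600), (10, 60)]   # assumed ranges for the three variables

def random_design():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def evolve(pop_size=40, generations=60, mut_rate=0.2):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if random.random() < mut_rate:                        # bounded Gaussian mutation
                i = random.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("Best design (volume, proppant, rate):", [round(x, 1) for x in best])
```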
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, October 11–13, 2006
Paper Number: SPE-104550-MS
Abstract
The recent increase in global demand for energy and the consequent high prices have prompted a need to improve recovery from mature reservoirs. Identifying sweet spots in these fields for infill drilling, ranking the infill locations based on their potential productivity, and identifying underperformer wells as candidates for remedial operations are all important for improving the economics of mature fields. One of the most important issues that makes the analysis of mature fields challenging is the lack of data. Production rate data is about the only data that can be easily accessed for most mature fields, and the most accessible production data usually does not include flowing bottom-hole or wellhead pressure data. The lack of pressure data seriously challenges the use of conventional production data analysis techniques for most mature fields. The motivation behind the techniques presented in this study is to demonstrate that much can be done with only monthly production rate data to help revitalize mature fields. Methods currently used for production data analysis are decline curve analysis, type curve matching, and history matching using numerical reservoir simulators. Each of these methods has its strengths and weaknesses, and each involves a significant amount of subjectivity when used individually in the context of production data analysis. In this paper, intelligent systems are used to iteratively integrate the above techniques into one comprehensive methodology for identifying infill drilling locations as well as underperformer wells that would be prime candidates for re-stimulation and/or workovers. Application of this technique to a large number of wells in the Carthage field, Cotton Valley formation, is presented.
Introduction
Several production analysis tools and strategies exist in the oil and gas industry for estimating remaining reserves and identifying infill drilling locations and underperformer wells. In order to draw conclusions using most of these methods, a large amount of data, such as production data and reservoir properties, is required. Production data analysis techniques have improved significantly over the past several years; they provide the engineer with some of the reservoir properties as well as estimates of the hydrocarbon in place and the ultimate recovery. The first and most common method for production data analysis is decline curve analysis. Decline Curve Analysis (DCA) fits the observed production rates of individual wells, groups of wells, or reservoirs with a mathematical function in order to predict future production performance by extrapolating the fitted decline curve. DCA was first introduced by Arps [1] in the 1940s using mathematical equations. DCA is widely used because of its simplicity and because, as an empirical method, it does not require any information about reservoir or well parameters. The mathematical functions are characterized by three parameters: qi (the initial flow rate), b (the hyperbolic exponent), and Di (the initial decline rate). When b = 0, the decline is exponential; when b = 1, the decline is harmonic; and when 0 < b < 1, the decline is hyperbolic. Fetkovich [2] introduced decline curve analysis by type curves in the 1980s by relating Arps' decline parameters to reservoir engineering parameters for production against constant bottom-hole pressure.
Over the past few years, type curve matching methods have been improved by several authors so that type curves can be used for different reservoir types and producing scenarios. Although decline curve analysis and type curve matching techniques are still widely used, the results they provide are highly subjective.
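The Arps rate-time relations referred to above can be written compactly; a minimal sketch follows, using the same parameters qi, Di, and b. The sample values are illustrative only.

```python
import numpy as np

def arps_rate(t, qi, Di, b):
    """Arps decline-curve rate as a function of time.

    qi : initial rate, Di : initial decline rate (1/time), b : hyperbolic exponent.
    b = 0 -> exponential decline, b = 1 -> harmonic, 0 < b < 1 -> hyperbolic.
    """
    t = np.asarray(t, dtype=float)
    if b == 0:
        return qi * np.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

# Illustrative parameters only (monthly basis): qi = 300 Mscf/d, Di = 0.08 1/month
t = np.arange(0, 121)                       # ten years of months
for b in (0.0, 0.5, 1.0):
    q = arps_rate(t, qi=300.0, Di=0.08, b=b)
    print(f"b = {b}: rate after 5 years = {q[60]:.1f} Mscf/d")
```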
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, September 14–16, 2005
Paper Number: SPE-98013-MS
Abstract
A methodology to generate synthetic wireline logs is presented. Synthetic logs can help analyze reservoir properties in areas where the necessary set of logs is absent or incomplete. The approach involves the use of Artificial Neural Networks as the main tool, in conjunction with data obtained from conventional wireline logs, and its implementation aims to reduce costs to companies. The neural network model was developed using a Generalized Regression Neural Network and wireline logs from four wells that included gamma ray, density, neutron, and resistivity logs. Synthetic logs were generated through two different exercises. Exercise one involved all four wells for the training, calibration, and verification process. The second exercise used three wells for training and calibration, and the fourth well was used for verification. To demonstrate the robustness of the methodology, three different combinations of inputs/outputs were chosen to train the network. In combination "A" the resistivity log was the output, and the density, gamma ray, and neutron logs together with the coordinates and depths (XYZ) were the inputs. In combination "B" the density log was the output, and the resistivity, gamma ray, and neutron logs and XYZ were the inputs. In combination "C" the neutron log was the output, and the resistivity, gamma ray, and density logs and XYZ were the inputs. After development of the neural network model, synthetic logs with a reasonable degree of accuracy were generated. Results indicate that the best performance was obtained for combination "A", then for combination "C", and finally for combination "B". In addition, it was determined that the accuracy of synthetic logs is favored by interpolation of data. As an important conclusion, it was demonstrated that the quality of the data plays a very important role in developing a neural network model.
INTRODUCTION
Well logging has been in use for almost a century as an essential tool for determining potential production in hydrocarbon reservoirs. Log analysts interpret log data in order to determine the petrophysical parameters of a well. However, for economic reasons, companies do not always possess all the logs that are required to determine reservoir characteristics. This paper presents a methodology that can help solve this problem by generating synthetic wireline logs for those locations where the set of logs necessary to analyze reservoir properties is absent or incomplete. The technique is not intended to eliminate well logging in a field; rather, it is meant to be a tool for reducing costs whenever logging proves insufficient and/or difficult to obtain. In addition, the technique can provide a guide for quality control during the logging process by predicting the response of a log before it is acquired. The approach involves the use of artificial neural networks as the main tool, in conjunction with data obtained from conventional wireline logs.
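A Generalized Regression Neural Network is essentially Gaussian-kernel (Nadaraya-Watson) regression, so a minimal version fits in a few lines of NumPy. The sketch below shows the idea for combination "A" (resistivity predicted from density, gamma ray, neutron, and XYZ); the sigma value, the standardization step, and the placeholder array names are assumptions, not the paper's settings.

```python
import numpy as np

class GRNN:
    """Minimal Generalized Regression Neural Network (Nadaraya-Watson kernel
    regression). Training is just storing the standardized samples; prediction
    is a Gaussian-weighted average of the stored training targets.
    """
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        self.mean, self.std = X.mean(axis=0), X.std(axis=0) + 1e-12
        self.X = (X - self.mean) / self.std
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        X = (np.atleast_2d(np.asarray(X, dtype=float)) - self.mean) / self.std
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=2)  # squared distances
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))                     # Gaussian kernel weights
        return (w @ self.y) / (w.sum(axis=1) + 1e-12)

# Combination "A" from the abstract: predict resistivity from density, gamma ray,
# neutron, and XYZ. The arrays below are placeholders for real log data.
# X_train = np.column_stack([density, gamma, neutron, x, y, z]); y_train = resistivity
# model = GRNN(sigma=0.5).fit(X_train, y_train)
# synthetic_resistivity = model.predict(X_new)
```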
Proceedings Papers
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, September 14–16, 2005
Paper Number: SPE-98010-MS
Abstract
Most of the mature fields in the United States have been producing for many years. Production in these fields started at a time when reservoir characterization was not a priority; therefore they lack data that can help in reservoir characterization. On the other hand, revitalizing these fields at a time when hydrocarbon prices are high requires a certain degree of reservoir characterization in order to identify locations with potential for economical production. The most common type of data that may be found in many mature fields is production data, because production data is usually recorded as a regulatory obligation or simply because it is needed for economic analysis. Using production data as a basis for decision making has been on the petroleum engineer's agenda for many years, and several methods have been developed for accomplishing this task. There are three major shortcomings in the efforts that focus on production data analysis. The first is that, due to the nature of production data, its analysis is quite subjective; even when certain techniques show promise in deducing valuable information from production data, the issue of subjectivity remains. The second is that existing production data analysis techniques usually address individual wells and therefore do not treat the entire field or reservoir as a coherent system. The third is the lack of a user-friendly software product that can perform production data analysis with minimum subjectivity and reasonable repeatability while addressing the entire field (reservoir) instead of autonomous, disjointed wells. It is well known that techniques such as decline curve analysis and type curve matching address individual wells (or sometimes groups of wells without geographic resolution) and are highly subjective. In this paper a new methodology is introduced that attempts to address the first and second shortcomings, i.e., to unify a comprehensive production data analysis with reduced subjectivity while addressing the entire reservoir with reasonable geographic resolution. Geographic mapping of depletion or remaining reserves can assist engineers in making informed decisions on where to drill or which well to remediate. The third shortcoming will be addressed in a separate paper, in which a software product is introduced that performs the analysis with minimum user interaction. The techniques introduced here are statistical in nature and rely on intelligent systems to analyze production data. The methodology integrates conventional production data analysis techniques, such as decline curve analysis, type curve matching, and single-well radial simulation models, with new techniques developed based on intelligent systems (one or more of neural networks, genetic algorithms, and fuzzy logic) in order to map fluid flow in the reservoir as a function of time. A set of two-dimensional maps is generated to identify relative reservoir quality, along with three-dimensional maps that track the sweet spots in the field over time in order to identify the most appropriate locations that may still have producible reserves. This methodology can play an important role in identifying new opportunities in mature fields. In this paper the methodology is introduced and its application to a field in the mid-continent is demonstrated.
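As a simple illustration of turning per-well estimates into the kind of two-dimensional map described above, the sketch below grids well-by-well remaining-reserve values with linear interpolation. This is only a plotting aid under assumed inputs; the paper's maps are built with intelligent-systems techniques, not plain interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

def map_remaining_reserves(well_x, well_y, remaining_reserves, n=100):
    """Interpolate per-well remaining-reserve estimates onto a regular grid so
    relative reservoir quality can be viewed as a 2D map. The linear method
    and the grid resolution are illustrative choices.
    """
    xi = np.linspace(min(well_x), max(well_x), n)
    yi = np.linspace(min(well_y), max(well_y), n)
    XI, YI = np.meshgrid(xi, yi)
    ZI = griddata((well_x, well_y), remaining_reserves, (XI, YI), method="linear")
    return XI, YI, ZI

# Usage (placeholder arrays standing in for real well locations and estimates):
# XI, YI, ZI = map_remaining_reserves(x_coords, y_coords, reserves_bcf)
# import matplotlib.pyplot as plt; plt.contourf(XI, YI, ZI); plt.colorbar(); plt.show()
```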
Proceedings Papers
Shahab D. Mohaghegh, Upender Naik Nunsavathu, Jalal Jalali, Miguel Angel Tovar, Nikola Maricic, Razi Gaskari, Samuel Ameri
Publisher: Society of Petroleum Engineers (SPE)
Paper presented at the SPE Eastern Regional Meeting, September 14–16, 2005
Paper Number: SPE-98011-MS
Abstract
As interest in coalbed methane production increases, more and more engineers and scientists are becoming interested in the reservoir and sorption characteristics of different coal seams throughout the U.S. and the world. As they set out to search for such information, it becomes clear that finding it is not easy. The difficulties associated with collecting such information are due to three major factors. First, development of CBM reservoirs is a relatively new phenomenon compared with conventional hydrocarbon reservoirs. Second, companies in possession of such data and information are very protective of it; they consider it a company asset and are not ready to release it. Third, whatever data and information is available in the public domain is scattered across hundreds of technical papers and publicly funded project reports. The objective of this paper is to introduce and describe the details of a series of newly developed databases that will soon become publicly available. These databases collect all the publicly available CBM-related data and information in one location. The series includes three major databases. The first is a literature database that includes the title, authors, topics, and summary of all the available technical papers and project reports related to CBM; each entry includes a summary of the paper and other information that facilitates locating the article or report. The database includes papers from SPE and CBM conferences and symposiums, along with reports published by DOE, GRI, and others. The second database is a collection of reservoir and sorption characteristics of CBM reservoirs that have appeared in the public domain. The data have been organized by basin and field, and one can identify ranges and statistics for the available data within each basin; a reference for each piece of data is also presented. The third database contains CBM well production from different states and different local databases, gathered and compiled in one location for easy access. These databases are located and maintained at the Petroleum & Natural Gas Engineering department at West Virginia University.
INTRODUCTION
Coalbed Methane (CBM) is a growing source of natural gas in the United States. Figure 1 shows the major basins with their associated CBM reserves. It has been estimated that 11.7% of total U.S. natural gas resources come from coalbed methane, as shown in Figure 2. Figure 3 shows the current contribution of each basin to total CBM production in the United States [1]. In a CBM reservoir in a saturated state, water completely permeates the coal beds, and its pressure causes the methane to be adsorbed onto the grain surfaces of the coal; most of the gas is adsorbed into the internal structure of the coal. Coals contain small, regularly spaced, naturally occurring fractures called face cleats and butt cleats, and coal reservoirs also contain larger-scale natural fractures. To produce CBM, water must be drawn off first, lowering the pressure so that the methane desorbs from the coal and then flows to the wellbore. This process is shown in Figure 4. CBM production is attractive due to several geological factors: coal stores six or seven times as much gas as a conventional natural gas reservoir of equal rock volume, owing to the large internal surface area of coal, and much coal is accessible at shallow depths, making well drilling and completion relatively inexpensive.
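The sorption characteristics collected in the second database are commonly summarized with a Langmuir isotherm, which also captures the desorption-with-depletion behavior described above. The sketch below evaluates the isotherm for a hypothetical coal; the Langmuir volume and pressure are illustrative values, not entries from the databases.

```python
def langmuir_gas_content(p_psia, v_l_scf_ton, p_l_psia):
    """Langmuir isotherm: adsorbed gas content as a function of pressure.

        V(p) = V_L * p / (P_L + p)

    V_L is the Langmuir volume (maximum adsorbed gas, scf/ton) and P_L the
    Langmuir pressure (pressure at which half of V_L is adsorbed).
    """
    return v_l_scf_ton * p_psia / (p_l_psia + p_psia)

# Illustrative coal-seam parameters only (not from the databases described above):
V_L, P_L = 500.0, 300.0   # scf/ton, psia
for p in (1000.0, 600.0, 300.0, 100.0):
    print(f"p = {p:6.0f} psia -> gas content = {langmuir_gas_content(p, V_L, P_L):6.1f} scf/ton")
```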