Machine Learning for Well Log Normalization
- Ridvan Akkurt (Schlumberger) | Mike Miller (Cimarex) | Brooke Hodenfield (Schlumberger) | Iain Pirie (Cimarex) | David Farnan (Cimarex) | Manas Koley (Schlumberger)
- Society of Petroleum Engineers
- SPE Annual Technical Conference and Exhibition, 30 September - 2 October, Calgary, Alberta, Canada
- Conference Paper
- Copyright 2019, Society of Petroleum Engineers
- Keywords: Well Log Normalization, Unconventionals, Machine Learning
A well log measurement can be modelled as the sum of three components: the formation signal, random noise, and systematic error. The sources of systematic error include tool malfunctions, shop and field miscalibrations, operator error, and inherent hardware design limitations. Log calibration, more commonly referred to as log normalization, is the process of applying corrective shifts to well logs to minimize the systematic error.

In this paper we develop a machine learning approach to the multi-well log normalization problem, which we believe is particularly applicable in unconventional field studies involving hundreds of wells of varying data quality and vintage.
We start by applying machine learning to the multi-well normalization problem, where the reference unit and reference wells are selected by the geoscientist. The reference unit is typically a laterally extensive stratigraphic interval with a consistent log response over the area of interest, which in our case is a tight limestone with small amounts of dolomite and/or silt. The reference wells are those that do not require any normalization. A predictive machine learning model is trained using log data from the reference unit in the reference wells, and a regression-based optimization algorithm is used to solve for constant shifts, which are applied as normalization corrections to the density and neutron logs in the remaining wells.
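The paper does not specify the model or optimizer, so the following is only a minimal sketch of the idea on synthetic data: a predictive model (here, ordinary least squares in place of whatever the authors used) is fit to reference-unit logs in reference wells, and the constant normalization shift for a target well is the one that best reconciles the measured log with the model's prediction. All log values, well data, and the injected +0.05 g/cc error are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference-unit samples from reference wells: predict bulk
# density (RHOB) from gamma ray (GR) and neutron porosity (NPHI).
n = 200
gr = rng.uniform(20, 80, n)
nphi = rng.uniform(0.0, 0.15, n)
rhob = 2.71 - 1.0 * nphi - 0.001 * gr + rng.normal(0, 0.01, n)

# Train a simple predictive model on the reference wells (least squares
# stands in for the paper's unspecified machine learning model).
X = np.column_stack([np.ones(n), gr, nphi])
coef, *_ = np.linalg.lstsq(X, rhob, rcond=None)

# A target well whose density log carries a constant systematic error
# of +0.05 g/cc (fabricated for the example).
m = 100
gr_t = rng.uniform(20, 80, m)
nphi_t = rng.uniform(0.0, 0.15, m)
true_rhob_t = 2.71 - 1.0 * nphi_t - 0.001 * gr_t + rng.normal(0, 0.01, m)
measured_rhob_t = true_rhob_t + 0.05

# The least-squares constant shift is the mean residual between the model
# prediction and the measured log over the reference unit in the target well.
X_t = np.column_stack([np.ones(m), gr_t, nphi_t])
predicted = X_t @ coef
shift = np.mean(predicted - measured_rhob_t)

normalized_rhob_t = measured_rhob_t + shift
print(round(shift, 3))  # recovers a shift close to -0.05
```

The same residual-minimization step extends naturally to the neutron log, and to richer models than the linear one used here.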
The process of selecting reference wells and picking the boundaries of the calibration unit can be subjective; it is influenced by the geoscientist's experience with the area of interest, the pressures of project timelines, and the availability of sufficient resources. The impact of these human-introduced biases can be severe in large projects where a team of geoscientists is engaged to process hundreds of wells: inconsistent normalization practices can lead to large errors in computed reservoir properties.
To minimize such inconsistencies, we propose extending the use of machine learning to create a seamless workflow that is automated and requires minimal user involvement in the execution phase. Towards this objective, we incorporate an additional machine learning component, which eliminates the requirement for a priori knowledge of the reference unit boundaries. The resulting workflow produces consistent, high-quality results that compare very well to those produced by manual log-normalization workflows run by experts.
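The abstract does not describe how the additional component identifies the reference unit, so the sketch below is one plausible stand-in, not the authors' method: a nearest-centroid classifier, trained on expert-labeled samples from a few wells, flags reference-unit depths in a new well so that the unit boundaries need not be picked by hand. All distributions and cutoffs are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert-labeled training samples (GR in API, NPHI in v/v):
# reference unit (tight limestone) shows low GR and low NPHI; the rest of
# the column is higher and more variable.
ref = np.column_stack([rng.normal(30, 5, 300), rng.normal(0.04, 0.01, 300)])
non = np.column_stack([rng.normal(90, 20, 300), rng.normal(0.20, 0.05, 300)])

# Standardize features, then compute one centroid per class.
data = np.vstack([ref, non])
mu, sd = data.mean(axis=0), data.std(axis=0)
c_ref = ((ref - mu) / sd).mean(axis=0)
c_non = ((non - mu) / sd).mean(axis=0)

def is_reference_unit(samples):
    """Flag each depth sample as reference unit if it lies closer to the
    reference-unit centroid than to the non-reference centroid."""
    z = (samples - mu) / sd
    d_ref = np.linalg.norm(z - c_ref, axis=1)
    d_non = np.linalg.norm(z - c_non, axis=1)
    return d_ref < d_non

# In a new well, contiguous runs of flagged depths delimit the reference
# unit without any a priori boundary picks.
new_well = np.column_stack([rng.normal(32, 5, 50), rng.normal(0.05, 0.01, 50)])
flags = is_reference_unit(new_well)
print(flags.mean())  # most samples in this synthetic well are flagged
```

In practice the flagged interval would then feed directly into the shift-estimation step, giving the end-to-end automated workflow the abstract describes.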
Comparison of the machine learning based results against expert answers shows that the machine learning approach offers an efficient alternative to manual log normalization, with significant gains in projects involving large numbers of wells, as in the case of unconventional plays.
- File Size: 1 MB | Number of Pages: 15