Log interpretation is critical for locating pay zones and evaluating their potential. Conventional log interpretation is performed manually. In our work, deep learning methods are applied to preliminary pay zone classification, so that human expertise can be freed from trivial and repetitive tasks during log interpretation.

In a fluvial depositional environment, the sand distribution varies both vertically and horizontally. Thus, a large dataset covering a large area may lead to an overly "averaged" model. In our work, we select a relatively small dataset (e.g., seven wells) to reflect the regional features.

Standard deep learning workflows are employed. The log data are cleaned, visualized, and preprocessed for the algorithms. A preliminary random forest (RF) model is used to separate the sand facies (interpretation needed) from the shale facies (interpretation not needed). In the classification model building and training stages, various types of algorithms are tried and compared, from the simple K-nearest neighbors (KNN) classifier to a dense neural network (DNN).
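A minimal sketch of this two-stage screening and classification workflow is shown below, using scikit-learn; the log curves (e.g., GR, RHOB, NPHI, RT), the label encoding, and all hyperparameter values are illustrative assumptions, not the exact setup used in the study.

```python
# Sketch: RF facies screen followed by a comparison of pay-zone classifiers.
# The synthetic arrays stand in for the cleaned and preprocessed well-log data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))          # e.g., GR, RHOB, NPHI, RT (placeholder values)
facies = rng.integers(0, 2, size=2000)  # 1 = sand, 0 = shale (placeholder labels)
pay = rng.integers(0, 2, size=2000)     # 1 = pay, 0 = non-pay (placeholder labels)

# Step 1: preliminary RF model separates sand (needs interpretation) from shale.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, facies)
is_sand = rf.predict(X) == 1

# Step 2: compare classifiers (KNN vs. a small dense network) on the sand intervals.
Xs = StandardScaler().fit_transform(X[is_sand])
ys = pay[is_sand]
X_tr, X_te, y_tr, y_te = train_test_split(Xs, ys, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=7),
    "DNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```

In practice, the placeholder arrays would be replaced by the cleaned log curves and interpreted labels from the seven-well dataset.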

To account for the continuity and influence of adjacent depths, a 1D convolutional neural network (CNN) model is tested. Based on this model, a simple self-training scheme is developed and discussed. K-fold validation is used to fully reflect the model's performance on such a relatively small dataset. With the given dataset, common deep learning methods achieve only moderate accuracy and are prone to overfitting.
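One way to realize such a 1D CNN is to classify each depth from a short window of neighboring samples, as in the Keras sketch below; the window length, filter counts, and layer sizes are assumptions for illustration, not the study's exact architecture.

```python
# Sketch: 1D CNN that classifies each depth using a window of adjacent samples.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WIN = 11          # odd window of adjacent depth samples centered on the target (assumed)
N_CURVES = 4      # e.g., GR, RHOB, NPHI, RT (assumed)

def make_windows(logs, labels, win=WIN):
    """Slice a (depth, curves) array into overlapping depth windows."""
    half = win // 2
    X, y = [], []
    for i in range(half, len(logs) - half):
        X.append(logs[i - half:i + half + 1])
        y.append(labels[i])
    return np.asarray(X), np.asarray(y)

model = models.Sequential([
    layers.Input(shape=(WIN, N_CURVES)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dropout(0.3),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # pay / non-pay
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder data; real log curves and labels would be substituted here.
logs = np.random.rand(3000, N_CURVES).astype("float32")
labels = np.random.randint(0, 2, size=3000)
X, y = make_windows(logs, labels)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```

The windowing lets the convolutional filters pick up vertical patterns across adjacent depths, which a sample-by-sample classifier cannot see.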

On the other hand, the CNN outperforms the other approaches due to its strength in pattern recognition. With special caution, a self-training approach can also further improve the performance. A comparison of the different deep learning approaches in terms of computation time, accuracy, and stability is presented. Even when trained on a small dataset, the CNN model can identify the zones of interest automatically and consistently.

Due to the small size of the dataset, a series of techniques is utilized to reduce the impact of overfitting, including balanced sampling, dropout, regularization, and early stopping, among others. During the optimization of critical hyperparameters, grid search with Bayesian statistics is used together with K-fold validation.
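The sketch below illustrates how such controls can be combined, assuming Keras and scikit-learn: dropout, L2 regularization, early stopping, class weighting (standing in for balanced sampling), and stratified K-fold evaluation. All hyperparameter values are illustrative, and a Bayesian hyperparameter search (e.g., keras-tuner's BayesianOptimization) could be wrapped around the loop to tune them.

```python
# Sketch: overfitting controls with K-fold evaluation on a small dataset.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers, callbacks
from sklearn.model_selection import StratifiedKFold
from sklearn.utils.class_weight import compute_class_weight

def build_model(n_features, l2=1e-3, dropout=0.3):
    # Small dense network with L2 regularization and dropout (assumed sizes).
    model = models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l2(l2)),
        layers.Dropout(dropout),
        layers.Dense(32, activation="relu", kernel_regularizer=regularizers.l2(l2)),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Placeholder data; the real feature matrix and pay labels would replace these.
X = np.random.rand(1500, 4).astype("float32")
y = np.random.randint(0, 2, size=1500)

# Class weighting as a simple substitute for balanced sampling.
weights = compute_class_weight("balanced", classes=np.unique(y), y=y)
class_weight = dict(enumerate(weights))
stop = callbacks.EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_model(X.shape[1])
    model.fit(X[train_idx], y[train_idx], validation_split=0.2, epochs=200,
              batch_size=32, class_weight=class_weight, callbacks=[stop], verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
print("Mean K-fold accuracy:", np.mean(scores))
```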
