The objective of this paper is to introduce a new methodology that increases the accuracy of Artificial Neural Networks (ANNs) by improving the selection criteria for the training dataset. Such an approach results in faster and more accurate prediction models.

Methods and Procedures

The strength of an ANN is its capability to find a correlation between inputs and outputs. This correlation is encoded in weighted connections between nodes (neurons), whose coefficients and weight factors must be learned. Developing these weights requires a large number of internal iterations and data processing, which involve considerable randomness, starting with the split of the data into training and testing sets. This paper introduces a methodology that ensures the training dataset in an ANN has a wide range and a small size, which improves the training dataset criteria and leads to more accurate results.
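The abstract does not spell out the selection algorithm, but the idea of a training subset that is small yet spans the full output range can be sketched as follows. This is a hypothetical illustration, not the paper's actual procedure: it sorts samples by target value and takes evenly spaced positions in the sorted order, so the extremes are always included and the remaining samples are left for testing.

```python
import numpy as np

def select_wide_range_subset(X, y, fraction=0.5):
    """Pick a fraction of samples whose targets evenly span [min(y), max(y)].

    Illustrative sketch (not the paper's exact method): sorting by target
    and taking evenly spaced positions guarantees the training set covers
    the full output range while using only `fraction` of the data.
    """
    y = np.asarray(y)
    n = len(y)
    n_train = max(2, int(round(fraction * n)))
    order = np.argsort(y)
    # Evenly spaced positions across the sorted samples; endpoints 0 and
    # n-1 are always included, so the min and max targets are covered.
    positions = np.unique(np.linspace(0, n - 1, n_train).round().astype(int))
    train_idx = order[positions]
    test_idx = np.setdiff1d(np.arange(n), train_idx)
    return train_idx, test_idx
```

A random 50% split, by contrast, can easily miss the tails of the target distribution, forcing the network to extrapolate at test time; the deterministic spread above removes that source of randomness.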

Results and Conclusions

A case study is conducted to identify formation density from surface drilling parameters. Several ANN models, including the proposed method, are constructed on the same dataset. Results show that the new methodology is, on average, 85% faster and 18% more accurate than the traditional ANN. It matches the accuracy of the best traditional ANN while being 90% faster.

Novel and Additive Information

The novelty of this approach lies in its methodology, which yields much faster ANN development with higher accuracy. It is worth mentioning that the new methodology uses a 50% training split, which reduces the training dataset size by up to 25% compared with traditional methods.
