Reservoir engineering constitutes a major part of oil and gas exploration and production studies. Its duties include conducting experiments, constructing appropriate models, characterizing reservoirs, and forecasting reservoir dynamics. However, traditional engineering approaches have begun to face challenges as the volume of raw field data grows. This has pushed researchers toward more powerful tools for classifying, cleaning, and preparing data for use in models, which enables better data evaluation and therefore sounder decisions. In addition, simultaneous simulations are sometimes performed for optimization and sensitivity analysis during the history matching process. Multi-functional workflows are required to address all these deficiencies. Merely upgrading conventional reservoir engineering approaches with more powerful CPUs or computers is insufficient, as it remains computationally costly and time-consuming. Machine learning techniques have been proposed as the best solution owing to their strong learning capability and computational efficiency. Recently developed algorithms make it possible to handle very large data sets with high accuracy. The most widely used machine learning approaches are Artificial Neural Networks (ANN), Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. In this study, these approaches are introduced together with their capabilities and limitations. The study then focuses on applying machine learning techniques to unconventional reservoir engineering calculations: reservoir characterization, PVT calculations, and optimization of well completion.
These processes are repeated until the values reach the output layer. One hidden layer is usually sufficient for most problems; additional hidden layers rarely improve model performance and instead make the model more complex and increase the risk of converging to a local minimum. The most typical neural network is the feed-forward network, often used for data classification. The MLP has a learning function that minimizes a global error function, typically the least-squares error. It uses the backpropagation algorithm to update the weights, searching for a local minimum by performing gradient descent (Figure 1). The learning rate is usually selected to be less than one.
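The training loop described above can be sketched in a few lines of NumPy. The example below is a minimal illustration, not the paper's implementation: it trains a one-hidden-layer feed-forward network by backpropagation, performing gradient descent on a least-squares error with a learning rate below one. The toy regression task (fitting y = x²), the layer sizes, and the learning rate of 0.1 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative toy regression task (not from the paper): fit y = x^2 on [-1, 1]
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = X ** 2

# One hidden layer of 8 tanh units, linear output (assumed sizes)
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.1  # learning rate chosen less than one, as the text recommends

losses = []
for epoch in range(2000):
    # Forward pass: values propagate layer by layer to the output
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    losses.append(float(np.mean(err ** 2)))  # least-squares (MSE) error

    # Backpropagation: chain rule through output and hidden layers
    d_out = 2.0 * err / len(X)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)    # derivative of tanh

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Running the loop drives the global error steadily downward; with a learning rate near one the updates can overshoot and oscillate, which is why values below one are preferred in practice.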