The primary functions of a reservoir engineer are to estimate hydrocarbons in place, evaluate the recovery factor and schedule recovery. These roles are central to meeting the extremely complex challenges of the life-cycle development of hydrocarbon resources.

These challenges have now assumed gargantuan proportions: mature assets require more attention to squeeze the last drop of oil from them, complicated accumulations need ingenious solutions to make them profitable, high development costs demand greater attention to detail, and unprecedented prices have made otherwise marginal fields more attractive to develop. Add to the mix heightened environmental concerns, unconventional hydrocarbon resources, competing alternative sources of energy, strict compliance with regulatory bodies and uncertain political situations in most of the growth basins, and you might have sympathy for those calling for more reservoir engineers in the industry.

The fact is that our traditional deterministic way of working is people-intensive, and it has started to fail the industry. Recruiting many more engineers to meet the present challenges can therefore only fuel the vicious hire-and-fire cycle that has made our industry less attractive in the past. We need to embrace new workflows, based on established statistical concepts such as neural networks and experimental design, that shift the focus from people to computers. These new workflows also enable greater flexibility in data handling, ensure consistency in uncertainty quantification and deliver results as continuous distributions that can be sampled rapidly with statistical techniques such as Monte Carlo simulation.

This paper describes how statistical techniques can support many core reservoir engineering roles such as surveillance, history matching and reservoir management. In addition, it presents relevant examples that illustrate the successful application of these concepts in the industry, including a possible automation of integrated reservoir studies.


The industry presently believes that more people are required to meet growing energy demands. The gap appears more pronounced in reservoir engineering because of the complexity of the issues, which do not always lend themselves readily to automation with current traditional workflows. Additionally, reservoir engineering practice relies in large part on limited, unstructured and often uncertain data. Given these limitations, it seems obvious that statistical methods should play a critical role in a reservoir engineer's toolkit. Unfortunately, this has not been the case, and statistics has failed to make its way into mainstream reservoir engineering practice, despite the fact that statistics offers a disciplined approach to collecting, organizing, analyzing and interpreting data. Statistics also facilitates making inferences, predictions and decisions about the characteristics of a data population from information obtained from a subset of that population.
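As a concrete illustration of inferring population characteristics from a subset, the sketch below computes a percentile-bootstrap confidence interval for mean porosity from a handful of core-plug measurements. The data values and well context are hypothetical, chosen only to demonstrate the method.

```python
import random
import statistics

# Hypothetical core-plug porosity measurements (fractions) from one well.
porosity = [0.14, 0.19, 0.11, 0.22, 0.16, 0.18, 0.13, 0.20, 0.15, 0.17]

def bootstrap_ci(data, n_boot=5000, alpha=0.10, seed=7):
    """Percentile-bootstrap confidence interval for the population mean."""
    rng = random.Random(seed)
    # Resample the data with replacement and record each resample's mean.
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot)]
    return lo, hi

lo, hi = bootstrap_ci(porosity)
print(f"sample mean={statistics.mean(porosity):.3f}, 90% CI=({lo:.3f}, {hi:.3f})")
```

The interval quantifies how far the true field-wide mean could plausibly lie from the small-sample estimate, which is exactly the kind of disciplined inference from limited data the paragraph describes.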

This paper examines how reservoir engineers can improve productivity and bridge the skills gap in the industry by harnessing statistical concepts and stochastic workflows that depend on these concepts.

Data Handling

At the heart of a reservoir engineer's practice is pattern recognition. Pattern recognition underpins well test analysis, estimates of ultimate recovery for different development decisions, and diagnosis of reservoir and drainage-point performance. It also forms the basis for seeking analogues to independently verify or corroborate an engineer's technical evaluations. To identify a pattern, a reservoir engineer must first overcome many data-related problems. These problems range from the paucity of data in many exploration assignments to data overload in intelligent fields. They also include how the adopted workflow handles any uncertainties associated with the data.
