When dealing with new exploration areas, basin geologists face the challenge of collecting relevant information from all available sources. These include a number of structured commercial databases, but also large corpora of technical documents across which an invaluable amount of information is scattered. Even when assisted by search tools to filter the documents of interest, extracting information requires human effort in reading and understanding the documents.
Eni and IBM developed a cognitive engine that exploits a deep learning approach to scan documents for basin geology concepts, extracting information about petroleum system elements (e.g. formation name, geological age and lithology of source rocks, reservoirs and seals) and enabling basin geologists to perform automated queries to collect all the information related to a basin of interest. The collected information is fully referenced to the original paragraphs, tables or pictures of the document in which it was discovered, enabling geologists to validate the robustness of the results.
The cognitive engine has been integrated within an application that enables geologists to build a graphical representation of the Petroleum System Event Charts of the basin, combining the information extracted from commercial databases, the results from the cognitive engine and the manual input from the geologist. The quality of the results from the cognitive engine has been evaluated using a commercial database which provides both tabular data about basins and detailed pdf reports. The cognitive engine has been trained on the pdf reports alone, and the results have been compared with the tabular content of the database, which represents the ground truth. The cognitive engine succeeded in identifying the right formations, lithologies and geological ages of the petroleum systems with an accuracy in the range 75%–90%.
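The evaluation described above amounts to exact-match scoring of the engine's extracted attributes against the database's tabular values. A minimal sketch of that comparison, assuming both sides are keyed by (basin, attribute) pairs with hypothetical names and values not taken from the paper:

```python
# Hypothetical sketch: scoring extracted petroleum-system attributes
# against a tabular ground truth; all basin names and values below
# are illustrative, not from the actual evaluation.

def accuracy(extracted: dict, ground_truth: dict) -> float:
    """Fraction of ground-truth attributes the engine recovered exactly."""
    matches = sum(
        1 for key, value in ground_truth.items()
        if extracted.get(key) == value
    )
    return matches / len(ground_truth)

ground_truth = {
    ("Basin A", "source_rock_lithology"): "shale",
    ("Basin A", "reservoir_age"): "Jurassic",
    ("Basin A", "seal_lithology"): "evaporite",
    ("Basin A", "reservoir_lithology"): "sandstone",
}
extracted = {
    ("Basin A", "source_rock_lithology"): "shale",
    ("Basin A", "reservoir_age"): "Jurassic",
    ("Basin A", "seal_lithology"): "evaporite",
    ("Basin A", "reservoir_lithology"): "limestone",  # one miss
}

print(accuracy(extracted, ground_truth))  # 0.75
```

With three of four attributes matching, the score falls at the low end of the reported 75%–90% range.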
The cognitive engine is built with highly innovative technologies, combining the data-driven capabilities of deep neural networks with more traditional natural language processing methods based on ontologies. Documents are processed with a three-step approach. In the first step, convolutional neural networks (CNN) are used to recognize the structural elements within a technical paper (e.g. title, authors, paragraphs, figures, tables, references) and to convert a complex pdf structure into a clean sequence of text, which can be analyzed. In the second step, concepts are extracted from these processed documents using extractors, NLP annotators (based on recurrent neural networks) and aggregators. Finally, the results from the deep learning tools are combined with the provided ontologies to build a knowledge graph, which links together all the discovered entities and their relationships. A fit-for-purpose, highly efficient graph database has been developed so that the graph can be traversed with full flexibility, collecting all the concepts needed for basin geology studies.
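The knowledge-graph idea in the final step can be illustrated with a minimal sketch: discovered entities become nodes, extracted relationships become labeled edges, and a traversal collects every fact attached to an entity. The class, relation names and formation below are hypothetical illustrations, not the actual Eni/IBM data model.

```python
# Minimal sketch of a knowledge graph for extracted geology concepts.
# Entity and relation names are illustrative assumptions, not the
# schema used by the actual engine.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency list: subject entity -> list of (relation, object)
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Record one extracted relationship as a labeled edge."""
        self.edges[subject].append((relation, obj))

    def facts(self, subject):
        """Traverse outgoing edges, collecting all facts about an entity."""
        return self.edges[subject]

kg = KnowledgeGraph()
kg.add("Formation X", "has_lithology", "sandstone")
kg.add("Formation X", "has_age", "Cretaceous")
kg.add("Formation X", "acts_as", "reservoir")

print(kg.facts("Formation X"))
```

A production graph database would add indexed multi-hop traversal and provenance links back to the source paragraphs, but the node-edge structure is the same.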