Abstract

Efficiency and safety are primary requirements for oil & gas fluid transportation systems. However, the complexity of these assets makes it challenging to derive a theoretical framework for managing the control parameters. The current frontier for real-time monitoring exploits the "digital transformation", i.e. the acquisition and analysis of large datasets recorded along the whole asset lifecycle, which are used to infer "data driven" relations and to predict the evolution of the asset integrity. This paper presents some results of a research project for the design, implementation and testing of a "machine learning" approach applied to vibroacoustic data recorded continuously by acquisition units installed every 10-20 km along a pipeline.

In a fluid transportation system, vibroacoustic signals are generated by the flow regulation equipment (e.g. pumps, valves, metering), by the flowing fluid (e.g. turbulence, cavitation, bubbles), by third-party interference (e.g. spillage, sabotage, illegal tapping), by internal inspection operations using PIGs, and by natural hazards (e.g. microseismic activity, subsidence, landslides). The basic principle of machine learning is to "observe", for an appropriate time interval, a series of descriptors (in this stage related to vibroacoustic signals, but extendable with other physical data such as temperature, density and viscosity) in order to "learn" their safe range of variation or, when properly fed to a classification procedure, to obtain automatically a discrete set of operational statuses. The classification criteria are then applied to new data, highlighting the presence of system anomalies.
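The "learn a safe range of variation" idea above can be sketched as follows. This is a minimal illustrative example, not the project's actual pipeline: the descriptor, the Gaussian assumption and the 3-sigma threshold are all hypothetical choices made here for clarity.

```python
import numpy as np

def learn_safe_range(training_values, k=3.0):
    """Estimate a 'safe' interval for a descriptor from data recorded
    during normal operation (here simply mean +/- k standard deviations)."""
    mu = float(np.mean(training_values))
    sigma = float(np.std(training_values))
    return mu - k * sigma, mu + k * sigma

def flag_anomalies(values, safe_range):
    """Return the indices of samples falling outside the learned range."""
    lo, hi = safe_range
    values = np.asarray(values)
    return np.where((values < lo) | (values > hi))[0]

# Synthetic descriptor during normal operation (e.g. an RMS vibration level)
rng = np.random.default_rng(0)
normal = rng.normal(loc=1.0, scale=0.05, size=1000)
safe = learn_safe_range(normal)

# New data containing one strongly out-of-range sample at index 100
new = np.concatenate([rng.normal(1.0, 0.05, 100), [1.8]])
flagged = flag_anomalies(new, safe)  # index 100 appears among the flagged samples
```

A classification procedure, as mentioned above, would replace the single interval with a discrete set of operational statuses learned from several descriptors jointly.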

The paper considers vibroacoustic signals collected at the flow stations of an oil trunkline in Nigeria. The vibroacoustic signals are the static pressure, the acceleration and the pressure transients recorded at the departure and arrival terminals. More than one year of data is available. Derived smart indicators are defined, which are directly linked to the asset parameters: for instance, the cross-correlation of the pressure transients at adjacent measuring locations makes it possible to estimate the fluid channel continuity (correlation value), the sound velocity (time of the correlation peak), and the sound attenuation (amplitude decay versus frequency). A portion of the data recorded during normal operation is used for training and tuning a reference model. After that, new data are compared with the model, and anomalies are automatically detected. Two kinds of events are raised: i) sensor errors; ii) alerts. Sensor errors refer to missing or corrupted sensor data. Alerts are raised when the measured physical quantities are not coherent with the known functional and service behaviors of the transport system.
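The cross-correlation indicator described above (time of the correlation peak between pressure transients at two stations, turned into a sound velocity) can be sketched as follows. The station spacing, sampling rate and transient shape are hypothetical values chosen for illustration, not data from the Nigerian trunkline.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Lag (in seconds) of the cross-correlation peak between two
    pressure-transient records sampled at fs Hz; sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag_samples / fs

# Hypothetical setup: two stations 15 km apart, 100 Hz sampling,
# the same pressure transient arriving 10 s later at the second station.
fs = 100.0
distance_m = 15_000.0
t = np.arange(0.0, 60.0, 1.0 / fs)
pulse = np.exp(-((t - 5.0) ** 2) / 0.1)     # transient seen at station A
delayed = np.exp(-((t - 15.0) ** 2) / 0.1)  # same transient at station B

delay_s = estimate_delay(pulse, delayed, fs)   # 10.0 s
sound_speed = distance_m / delay_s             # 1500.0 m/s
```

A high correlation peak confirms fluid channel continuity between the two stations, while comparing the spectral content of the two records gives the frequency-dependent attenuation.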

The system model is not static over time: it can be updated through the operators’ feedback, which can tag false alarms and thus automatically re-define the set of operational scenarios of the upstream system. The medium- to long-term construction and updating of data-driven models is effective for predictive maintenance, automatic anomaly detection and the optimization of operational procedures. Moreover, the new data management policy and the opportunity of gaining awareness by interconnecting the monitoring experience of different assets foster the introduction of new technologies (cloud, big data), new professional figures (data scientists), and new operational and business models.