Abstract
To lower costs, simplify operations, and improve HSE, offshore operators have set aggressive goals to relocate technical roles from the rig to centralized onshore facilities. This is possible only if the specialists who remotely supervise and advise drilling operations can see the data as it is acquired, and can obtain enough information about the data's reliability and integrity to accurately appraise the events observed while drilling and make fully informed decisions. The amount of data transferred to shore has increased dramatically over the past 15 years, saturating a legacy transfer process hampered by inherent limitations in responsiveness and bandwidth utilization. In addition, the data models used for transfer lacked the metadata needed to properly qualify the data and assure that what was received onshore was trustworthy.
These issues were addressed with two initiatives. The first was to re-engineer the data transfer process to minimize lag time and overhead. Legacy architectures delivered buffered data with a lag of 10 seconds or more, and transaction overhead wasted about 90% of the available bandwidth. The new streaming protocol reduces lag to about one second, delivering a continuously updating stream of data. The second initiative addressed the scarcity of metadata, which had forced time-consuming verification. Metadata is particularly important when analyzing anomalies, for example to establish whether the measuring device was properly calibrated. The metadata specifications cover not only the source specifications and processing history, but also adherence to data quality rules specific to an operator.
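The combination described above, each measurement streamed as it is acquired and tagged with metadata that lets onshore specialists judge its trustworthiness, can be sketched as follows. This is a minimal illustrative model, not the paper's actual protocol; all names and fields (`ChannelMetadata`, `StreamedPoint`, `is_trustworthy`) are assumptions introduced for the example.

```python
# Hypothetical sketch of a streamed measurement carrying quality metadata,
# in contrast to a buffered batch delivered with a lag of 10 s or more.
# Field names are illustrative, not taken from any real transfer standard.
from dataclasses import dataclass


@dataclass
class ChannelMetadata:
    source: str                 # acquiring sensor or service provider
    unit: str                   # engineering unit of the values
    calibrated: bool            # calibration status of the measuring device
    quality_rules_passed: bool  # adherence to operator-specific quality rules


@dataclass
class StreamedPoint:
    timestamp_s: float          # acquisition time (epoch seconds)
    channel: str                # e.g. hook load, standpipe pressure
    value: float
    meta: ChannelMetadata


def is_trustworthy(point: StreamedPoint) -> bool:
    """Screen each point on arrival, before it feeds remote analysis."""
    return point.meta.calibrated and point.meta.quality_rules_passed


point = StreamedPoint(
    timestamp_s=1700000000.0,
    channel="standpipe_pressure",
    value=212.4,
    meta=ChannelMetadata(source="rig_sensor_A", unit="bar",
                         calibrated=True, quality_rules_passed=True),
)
print(is_trustworthy(point))  # → True
```

The point of carrying the metadata with each sample, rather than looking it up afterwards, is that an onshore recipient can appraise an anomaly immediately instead of spending time verifying the source.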
Over a test period of several months, the operator found that the improved quality and speed of analysis led to faster, more confident decisions, avoiding costly rig interventions. The ability to stream larger amounts of better-qualified data to multiple recipients simultaneously is opening up new, more collaborative ways to operate offshore drilling remotely.