There are always initially unknown or wrongly known physical parameters in any pipeline simulation. These will cause erroneous model results unless the model is calibrated to compensate for them. This is true for both online and offline systems. Common sources of difference between a real pipeline system and that system's description on paper include buildup of deposits (or liquids in a gas line), pipe corrosion, changing ground thermal properties, and the effects of small impurities on fluid properties. In addition, SCADA systems may give incorrect readings due to sensor drift, and there is also the opportunity for human error in recording the physical characteristics of the system. This article examines the effects of different sources of error on the results of both steady-state models (which are the basis of much offline analysis) and transient models. Pipeline models have used many different techniques to compensate for these errors. These techniques are all based on matching the model's results to historical SCADA data. In offline models, this is typically done using a manual trial-and-error approach, while online systems usually include some sort of automatic tuning for this purpose, based either on feedback or on state estimation. A new tuning algorithm is presented as an alternative to the manual trial-and-error approach for tuning an offline model or performing initial tuning of an online system, and results of tests of this new approach are given.
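The feedback-based automatic tuning mentioned above can be illustrated with a minimal sketch: each SCADA scan, nudge a tuned parameter (here a friction multiplier) so the modeled pressure drop tracks the measured one. The toy model, parameter names, and gain below are illustrative assumptions, not the article's algorithm.

```python
def modeled_dp(flow, friction):
    # Toy steady-state model: pressure drop proportional to friction * Q^2.
    return friction * flow ** 2

def tune_step(friction, flow, scada_dp, gain=0.5):
    # Feedback update: scale the tuned friction multiplier by the
    # relative error between SCADA and the model's pressure drop.
    error = (scada_dp - modeled_dp(flow, friction)) / scada_dp
    return friction * (1.0 + gain * error)

friction = 1.0e-3                 # initial (mis-tuned) value
for scada_dp in [22.0] * 20:      # repeated scans: measured dP = 22.0 at Q = 100
    friction = tune_step(friction, 100.0, scada_dp)
```

After a few scans the tuned parameter settles at the value that reproduces the measurement; if the pipeline's real friction then drifts, the same update keeps pulling the model back into agreement, which is exactly why an online model needs it running continuously.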
If an online pipeline model doesn't have a working automatic tuning system, even if the model was initially tuned to provide good agreement with SCADA, it will very likely become useless within a few weeks of the initial calibration. While a vendor can usually manually tune a model so that it matches the instantaneous performance of the pipeline at the moment of the tuning, the unmodeled physical parameters of most pipelines vary significantly and rapidly enough that the state of the model will quickly deteriorate if nothing is done to track this variation. Because model-based leak detection requires a very accurate model, it is the application where poor calibration has the most obvious effect. However, even offline steady-state analysis requires some tuning. The user depends on an offline model to produce physically accurate results in a wide variety of situations, some of which may never have occurred historically. In an effort to make the physical parameters of that model as close to reality as possible, it is calibrated against any available historical situations that have recorded SCADA measurements. This is usually done via a laborious, iterative process of running the model for a historical situation, adjusting tuned parameters until the model agrees with SCADA for that particular scenario, and then moving on to the next scenario in an attempt to get agreement in all recorded situations. (Of course, if there were historical data available for every situation of interest, then there would be no need for a model at all; the user could simply look up the outcome.)
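The per-scenario step of that iterative process can be sketched as a one-parameter calibration: adjust a tuned value until the model's pressure drop matches the SCADA record for that scenario. The toy model, the "efficiency" parameter, and all numbers below are illustrative assumptions; a real offline model would tune many parameters across many scenarios at once.

```python
def modeled_dp(flow, efficiency, k=2.0e-3):
    # Toy steady-state friction model: dP proportional to (Q/efficiency)^2,
    # so a higher efficiency factor means a lower predicted pressure drop.
    return k * (flow / efficiency) ** 2

def tune_efficiency(flow, scada_dp, lo=0.5, hi=1.5, tol=1e-9):
    # Bisection stands in for the analyst's trial-and-error: modeled_dp
    # falls monotonically as efficiency rises, so bracket and bisect.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if modeled_dp(flow, mid) > scada_dp:
            lo = mid          # model overpredicts dP -> try a higher efficiency
        else:
            hi = mid          # model underpredicts dP -> try a lower efficiency
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# One historical scenario: flow = 100 with a recorded SCADA dP of 25.0.
eff = tune_efficiency(100.0, 25.0)
```

The laborious part the text describes comes from repeating this for every recorded scenario while hoping one parameter set satisfies them all, which is the gap the article's proposed tuning algorithm aims to close.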