Over the years, the development of corrosion prediction software has concentrated on scalar analysis, that is, using fixed input values to produce a single output parameter (the corrosion rate). This is consistent with the origins of most prediction systems, which are based on the results of a series of laboratory tests using a standard one-variable-at-a-time approach to develop an algorithm giving corrosion rate as a function of the varying inputs.

However, it has also been recognised that in real-life situations input parameters are not fixed, and that any algorithm will have a degree of uncertainty (prediction error). To account for variable conditions, a series of sensitivity calculations is often carried out, covering expected changes in, for example, flow rates, operating temperatures & pressures and process chemistry; the spread of predicted corrosion rates then provides an expected operating range.
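Such a sensitivity study can be sketched as a simple parameter sweep. The model function and parameter values below are purely illustrative placeholders (the paper does not disclose the underlying algorithm); the pattern is what matters: evaluate the scalar model over a grid of expected input variations and take the resulting range.

```python
from itertools import product

def corrosion_rate(temp_c, p_co2_bar, velocity_ms):
    """Hypothetical scalar corrosion model (mm/yr) -- a stand-in
    for any proprietary prediction algorithm."""
    return 0.05 * temp_c + 2.0 * p_co2_bar + 0.3 * velocity_ms

# Expected operating variations (illustrative values only).
temps = [40.0, 60.0, 80.0]       # degC
p_co2s = [0.5, 1.0, 2.0]         # bar
velocities = [1.0, 2.0, 3.0]     # m/s

# Evaluate every combination and report the predicted range.
rates = [corrosion_rate(t, p, v)
         for t, p, v in product(temps, p_co2s, velocities)]
low, high = min(rates), max(rates)
print(f"Expected corrosion rate range: {low:.1f} - {high:.1f} mm/yr")
```

Note that the number of model evaluations grows multiplicatively with the number of varied parameters, which is one reason the approach becomes time-consuming for many-input models.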

Modelling uncertainty is normally handled by treating the actual expected corrosion rate as a distribution, with the predicted scalar value taken as an input to that distribution.

Both approaches are valid but can be time-consuming. In particular, the resulting predicted corrosion rate range is not necessarily well defined (especially with respect to the minimum & maximum values). To provide a more efficient and rigorous approach, one of the leading corrosion prediction software packages used in the oil & gas industry (covering downhole, pipeline and pipework applications) has been extended to include Monte Carlo analysis.
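The Monte Carlo idea can be sketched as follows: instead of a grid of sensitivity cases, the inputs are sampled from assumed probability distributions and the model is evaluated many times, yielding a full output distribution from which percentiles (e.g. P10/P50/P90) can be read off. The model function and input distributions below are hypothetical, chosen only to illustrate the mechanics.

```python
import random

def corrosion_rate(temp_c, p_co2_bar, velocity_ms):
    """Hypothetical scalar corrosion model (mm/yr)."""
    return 0.05 * temp_c + 2.0 * p_co2_bar + 0.3 * velocity_ms

random.seed(42)  # reproducible demonstration
N = 10_000
rates = []
for _ in range(N):
    # Assumed input distributions (illustrative only):
    temp = random.gauss(60.0, 5.0)            # degC, normal
    p_co2 = random.lognormvariate(0.0, 0.2)   # bar, lognormal
    vel = random.uniform(1.0, 3.0)            # m/s, uniform
    rates.append(corrosion_rate(temp, p_co2, vel))

rates.sort()
p10, p50, p90 = (rates[int(N * q)] for q in (0.10, 0.50, 0.90))
print(f"P10={p10:.2f}  P50={p50:.2f}  P90={p90:.2f} mm/yr")
```

Unlike a sensitivity sweep, the tails of the output are characterised probabilistically rather than by arbitrary best/worst-case input combinations, which directly addresses the poorly defined minimum & maximum values noted above.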

This paper will describe the Monte Carlo approach adopted, consider the practical (engineering) implication of the distribution results and compare the output to more traditional sensitivity approaches.

Corrosion Modelling - Background

Over the years, several corrosion modelling software packages have been developed to provide predicted (estimated) corrosion rates for use in the oil & gas industries. Many are based on the original work of de Waard & Milliams 1, 2, 3, which provided a best-fit statistical model to corrosion rates measured in flow loop laboratory tests conducted at the IFE (Institutt for Energiteknikk) in Norway 4, covering (initially) just the partial pressure of CO2, temperature, liquid flow velocity and pH (typically as bicarbonate and dissolved CO2). These models have since been developed further to consider a wider range of operating parameters, including H2S, multiphase flow, volatile fatty acids 5, chloride content 6, condensation conditions (for gas lines), top of the line corrosion 7, 8, flow pattern characteristics (for multiphase flow), the protective influence of iron carbonate scaling, crude oils 9, 10, corrosion inhibition, glycol injection, etc.
