The field data used by corrosion engineers is generally collected for different purposes by different entities operating oil and gas fields. It is generally not collected for the validation of corrosion prediction models, except when field-specific models are being developed. The accuracy, content and reliability required by these entities are not the same as those required by corrosion engineers for model validation. If the field data is questionable, there is a risk of considering an accurate prediction as wrong, or vice versa. Also, as published previously, some of the important parameters collected from fields do not have the same meaning as the data used in the prediction models. These are probably the reasons for the unsuccessful attempts that oil and gas companies and research institutes have made to compare field data with predicted internal corrosion rates/corrosivity. Another aspect is the lack of knowledge about the impact of some important parameters, such as small amounts of H2S and organic acids, on the corrosion rate measured in the field; consequently, the interpretation of the data is often questionable. The mechanism of localized corrosion and the calculation of the corrosion rate from inspection and monitoring data are further sources of misinterpretation. The present paper summarizes current practices for field data collection, comments on the accuracy of some production and corrosion data, and makes suggestions on how to improve the quality of field data dedicated to model validation.
The validation of corrosion prediction models is of paramount importance, since decisions on the possible use of corrosion-resistant alloys (CRAs), corrosion control and risk-based inspection depend largely on the predicted corrosion rate/corrosivity. The purpose of this paper is to contribute to improving the quality of the field data collected for the validation of prediction models used for corrosion rate/corrosivity assessment in oil and gas production environments containing CO2 and, in some cases, H2S. The reliability of the models becomes questionable if the data used to validate them is not accurate or valid. The use of unreliable estimated corrosion rates/corrosivity for corrosion control and risk-based inspection may have serious consequences for the integrity of the facilities and/or the cost of the project. Consequently, during the past few years significant efforts have been made to validate the available corrosion prediction models using field data.
Corrosivity prediction started in the 1970s and has progressed significantly, with different approaches and in different directions. The first model, the de Waard and Milliams model, was based on laboratory data. A few empirical, field-specific models, such as the Copra Correlation, were then published. Currently, semi-empirical, field- and/or laboratory-validated models (such as Lipucor, Cormed, the Norsok model and Cassandra) and mechanistic models (such as Hydrocor, the KSC model and Multicorp) are used by oil and gas companies. Most of the available models are described in a paper published by Nyborg in 2002.
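To illustrate how the early laboratory-based approach expressed corrosivity, the sketch below implements the commonly cited form of the original de Waard-Milliams correlation, log(Vcor) = 5.8 - 1710/T + 0.67 log(pCO2), with Vcor in mm/y, T in kelvin and pCO2 in bar. The function name and example values are illustrative only, and none of the later correction factors (scaling, pH, flow, H2S, glycol) are applied.

```python
import math

def de_waard_milliams_rate(temp_c: float, pco2_bar: float) -> float:
    """Commonly cited form of the original de Waard-Milliams (1975)
    correlation for uninhibited CO2 corrosion of carbon steel.

    temp_c   -- temperature in degrees Celsius
    pco2_bar -- CO2 partial pressure in bar
    Returns the nominal corrosion rate in mm/year, without any of the
    later correction factors.
    """
    temp_k = temp_c + 273.15
    log_rate = 5.8 - 1710.0 / temp_k + 0.67 * math.log10(pco2_bar)
    return 10.0 ** log_rate

# Example: 60 degC and 1 bar CO2 partial pressure gives roughly 4.6 mm/y
print(round(de_waard_milliams_rate(60.0, 1.0), 1))
```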
After being developed, the above-mentioned models were generally tested using field data (corrosion monitoring, inspection, failure analysis) or laboratory-generated data. The field data used for this purpose consists of four main components: design data concerning the line or tubing (diameter, length, grade and metallurgy), data collected for corrosion rate measurement/corrosivity assessment (corrosion monitoring and inspection), failure analysis, and the data collected for production follow-up. Even though the design data is essential for model validation purposes, the collection of this data is straightforward.
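To make the grouping above concrete, the following sketch shows one possible way to organize a field data record assembled for model validation. The class and field names are hypothetical and simply mirror the four components listed in this section; they are not taken from any particular operator's data system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DesignData:
    """Line or tubing design data."""
    diameter_mm: float
    length_m: float
    grade: str        # e.g. pipeline steel grade
    metallurgy: str

@dataclass
class CorrosionMeasurement:
    """One corrosion monitoring or inspection result."""
    method: str       # e.g. "weight-loss coupon", "ER probe", "UT inspection"
    corrosion_rate_mmpy: Optional[float]
    date: str

@dataclass
class FieldDataRecord:
    """Field data set assembled for model validation (illustrative only)."""
    design: DesignData
    corrosion_data: List[CorrosionMeasurement] = field(default_factory=list)
    failure_analyses: List[str] = field(default_factory=list)  # report references
    production_data: dict = field(default_factory=dict)        # e.g. water cut, CO2/H2S content
```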