ABSTRACT

The performance of two or more In-Line Inspection (ILI) systems in terms of defect sizing accuracy is typically compared using standard statistical methods. Each system's performance specification, computed from a standard set of defects, is expressed using three quantities: Tolerance, Certainty and Confidence. In order to compute these three quantities, an underlying statistical distribution is assumed; the most common choice is the Gaussian distribution. To verify the specified performance of a system, however, a method based on the binomial distribution is commonly used, as described in API 1163. Thus the procedures used for performance specification and verification follow completely different and incoherent approaches, and some aspects of these procedures are non-intuitive and theoretically inadequate. In this paper, we introduce a novel use of the Gamma distribution for performance assessment and propose a uniform approach for both performance specification and verification.

INTRODUCTION

The results obtained from an in-line inspection typically form the baseline for designing the necessary pipeline maintenance procedures. Given the large investment involved in maintenance, the performance of the in-line inspection system must be carefully assessed using reliable statistical methods. API 1163 [3] describes methods that can be used to statistically specify the performance of an in-line inspection system and later to verify it. Interestingly, the performance specification method is based on the theory of the Gaussian distribution, while the performance verification method is based on the theory of the binomial distribution. Other references [1, 2] elaborate further on the use of the binomial distribution for performance verification.

The performance specification is typically described in terms of three quantities: tolerance, certainty and confidence level. The tolerance is a numeric quantity that describes how much quantitative deviation from the ground truth can be expected when using the given system. The certainty is a percentage that describes how often the specified tolerance will be met if the experiment is repeated a large number of times. For example, if the tolerance is specified as ±10 with a certainty of 80%, and an inspection experiment is carried out n times (with n very large), then in about 0.8n of those experiments the sizing error will be less than or equal to 10. Ideally, in order to measure tolerance and certainty exactly, the experiment would have to be repeated an infinite number of times, which is not practically possible. Hence a second-order statistical quantity called the confidence level is introduced, which states the statistical confidence in the specified tolerance and certainty when the experiment is conducted a finite number of times.

The performance verification is typically carried out as a hypothesis testing procedure.
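The relationship between tolerance and certainty described above can be sketched under the common Gaussian error assumption: if sizing errors follow a zero-mean normal distribution with standard deviation σ, the certainty is simply the probability mass of that distribution within ±tolerance. The following is an illustrative sketch, not a procedure from API 1163; the function names and the σ value are assumptions introduced here for illustration.

```python
from statistics import NormalDist

def certainty_for_tolerance(tol: float, sigma: float) -> float:
    """P(|sizing error| <= tol) under a zero-mean Gaussian error model."""
    n = NormalDist(mu=0.0, sigma=sigma)
    return n.cdf(tol) - n.cdf(-tol)

def tolerance_for_certainty(certainty: float, sigma: float) -> float:
    """Smallest tol such that P(|sizing error| <= tol) equals the certainty."""
    # Two-sided interval: tol is the (1 + certainty)/2 quantile of N(0, sigma).
    return NormalDist(mu=0.0, sigma=sigma).inv_cdf((1.0 + certainty) / 2.0)

# Hypothetical example: with sigma = 7.8, the 80%-certainty tolerance is
# sigma * 1.2816 (the 90th percentile of the standard normal), roughly 10.
tol = tolerance_for_certainty(0.80, 7.8)
```

Under this model, reporting "±10 at 80% certainty" is equivalent to stating the error standard deviation, which is why a distributional assumption is implicit in every such specification.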
The performance specification serves as the hypothesis, and using a suitable statistical procedure we either accept or reject it with a certain statistical confidence. When the decision is ambiguous, the performance specification is modified until a definite conclusion can be reached.
