BY KENT MUHLBAUER
In this article, we introduce the concept of essential elements for pipeline risk assessment. These are the “let’s all get on the same page” aspects that every risk assessment should have in common.
An important Essential Element calls for the use of measurements instead of any other kind of rating scheme (e.g., indexes, points, scores, descriptors). To see why this is essential, we can first examine the difficulties associated with the alternatives. For ease of discussion, let’s refer to all non-measurement-based systems as scoring-type systems and use ‘scores’ to refer to any of these pseudo-measurements.
Scoring-type systems commonly appear when a standardized measurement ‘tool’ is unavailable and a great deal of subjectivity is required in an assessment. Examples can be found in sports (boxing matches, figure skating, platform diving, etc), finance (indexes, credit ratings, etc), and many other arenas. In the early days of pipeline risk assessment (PL RA), scoring-type systems were widely used as short-cuts to get to relative risk. We really weren’t trying for full measurement of risk, but rather only an understanding of relative risks—‘pipeline segment A needs attention before segment B’. So, scoring-type systems emerged to avoid what was believed to be unwieldy and impractical application of more formal (QRA type) analyses to long, linear assets such as a PL. We also lacked algorithms to efficiently utilize the hard numbers—it was easier to process the pseudo-measurement scores rather than real measurements. More about the algorithms later.
With today’s increased emphasis on PL RA, scoring and other pseudo-measurements are now problematic. Not only must every user become familiar with the custom scoring-type system, but even its inventor is inconvenienced, since he must set up and maintain the ‘overhead’ that ensures his system is used as intended. A de-coder is required to understand how the scores work. Still more procedures and processes are then required to link the scoring-type system to the real world. And be assured that, despite protestations from inventors that their scoring-type systems are intended only for relative comparisons, there will be frequent requests/demands to place scores in the context of real-world risks. US-style IMP regulations all but insist upon RA in measurement terms. So, ironically, the short-cut solution of using scoring-type scales has instead added complications, now that more is demanded of RA.
Fortunately for those who have well-established scoring-type systems, the conversion to measurements is quite painless. If the scores were consistently obtained, a simple translation between the scores and the underlying measurement is all that is required. For instance, if third-party activity level was assigned a 7 out of 10, one need only build a corresponding scale showing that a 7 means, perhaps, ‘activity once every year for a mile of pipeline’. Having built the translator, it can be applied across all the previously assigned scores to instantly update the old-style model into a much more powerful and useful assessment model. This is, of course, a bit of an over-simplification, but only a bit.
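Such a translator can be sketched as a small lookup table with interpolation. The anchor values below are hypothetical illustrations, except for the article’s own example that a score of 7 corresponds to roughly one event per mile-year:

```python
# Hypothetical translation table for a legacy 0-10 'third-party
# activity' score, mapped to a verifiable measurement (excavation
# events per mile-year). Only the score-7 anchor comes from the
# article; the other anchors are illustrative assumptions.
SCORE_TO_ACTIVITY = {
    1: 0.02,   # ~1 event per 50 mile-years (remote, stable area)
    4: 0.2,    # ~1 event per 5 mile-years
    7: 1.0,    # ~1 event per mile-year (the article's example)
    10: 10.0,  # ~10 events per mile-year (urban, heavy construction)
}

def score_to_measurement(score, table=SCORE_TO_ACTIVITY):
    """Translate a legacy score into events/mile-year by linear
    interpolation between the anchor scores defined above."""
    anchors = sorted(table)
    if score <= anchors[0]:
        return table[anchors[0]]
    if score >= anchors[-1]:
        return table[anchors[-1]]
    for lo, hi in zip(anchors, anchors[1:]):
        if lo <= score <= hi:
            frac = (score - lo) / (hi - lo)
            return table[lo] + frac * (table[hi] - table[lo])

print(score_to_measurement(7))  # the article's example score
```

Applied in one pass across a database of previously assigned scores, a function like this converts a legacy relative model into measurement terms; a logarithmic interpolation between anchors may suit some scales better than the linear form shown.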
In case the word ‘measurement’ conjures up more than is intended, remember that measurements can be estimated: “it’s about 1.5 miles away”; “it happens about once every other year”; “it’s between 2 and 4 feet deep”. So, the measurement does not always mean the use of a physical measurement device held against a thing to be measured. In our application here, the important thing is that a measurement is expressed in units that are verifiable. While an estimate might be used, that estimate can be verified someday with a measurement tool (even if that measurement tool is simply a counter of occurrences). Everyone recognizes the measurement units and can reproduce the measurement with some degree of accuracy. When the measurement involves a dimension of time and is used as a predictor, then it too can be verified, once an appropriate amount of time has passed: if 2 events/mile-year are estimated, then, after a couple of years, 2 miles of pipeline should have experienced about 8 events.
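The verification arithmetic in that last sentence is worth writing out; a minimal sketch:

```python
def expected_events(rate_per_mile_year, miles, years):
    """Event count implied by a rate estimate; the prediction can
    later be checked against a simple counter of occurrences."""
    return rate_per_mile_year * miles * years

# The check from the text: an estimate of 2 events/mile-year implies
# about 8 events on 2 miles of pipeline after 2 years.
print(expected_events(2, 2, 2))  # 8
```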
As further incentive, modern risk assessment algorithms are now hungry for actual measurements, not scores. We’ll dive deep into RA algorithms in a later column, but, for now, recognize how neatly and efficiently measurements (not scores) fit into the production of robust risk estimates. All of the calculations required to produce Probability of Failure (PoF) estimates fall into one of two basic forms, depending on the role of time. When things do not get worse over time, we say the failure mechanism is time independent. We produce a PoF estimate for time-independent failure as:
PoF (expected failures) = [events/mile-year] x [fraction of events not blocked by mitigation] x [fraction of unmitigated events resulting in failure] x [miles] x [years]
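As a sketch, the time-independent form translates directly into code. Note the mitigation term is expressed here as the fraction of events that survive mitigation, so that better mitigation lowers the result; the input numbers are illustrative assumptions, not values from the article:

```python
def pof_time_independent(events_per_mile_year,
                         fraction_unmitigated,
                         fraction_failing,
                         miles, years):
    """Expected failures for a segment: event exposure, reduced by
    mitigation, reduced again by the pipe's resistance, then scaled
    by segment length and exposure period. Every input is a
    measurement or a verifiable fraction, not a score."""
    return (events_per_mile_year
            * fraction_unmitigated
            * fraction_failing
            * miles * years)

# Illustrative: 1 third-party hit per mile-year, 90% of hits blocked
# by depth of cover and one-call response, 20% of surviving hits
# breaching the pipe wall, over a 2-mile segment for 1 year.
print(pof_time_independent(1.0, 0.10, 0.20, 2.0, 1.0))  # ~0.04 expected failures
```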
When time plays a role, making failure more likely as time passes, we estimate a time-dependent PoF as:
PoF (failures per year) = 1 / [remaining life, years]
[remaining life] = [resistance] / ([degradation rate] x [fraction of degradation unmitigated])
years-to-failure = [mm pipe wall] / ([mm-per-year corrosion rate] x [fraction of corrosion unmitigated])
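Read together, the two equations above amount to a reciprocal-of-remaining-life estimate. A minimal sketch, with illustrative wall thickness, corrosion rate, and mitigation numbers that are assumptions rather than values from the article:

```python
def remaining_life_years(wall_mm, corrosion_rate_mm_per_year,
                         fraction_unmitigated):
    """Years until the available pipe wall is consumed by the
    unmitigated portion of the corrosion rate."""
    return wall_mm / (corrosion_rate_mm_per_year * fraction_unmitigated)

def pof_time_dependent(wall_mm, corrosion_rate_mm_per_year,
                       fraction_unmitigated):
    """PoF per year estimated as the reciprocal of remaining life."""
    return 1.0 / remaining_life_years(wall_mm, corrosion_rate_mm_per_year,
                                      fraction_unmitigated)

# Illustrative: 6 mm of sacrificial wall, an aggressive 0.5 mm/yr
# pitting rate, with coating and cathodic protection leaving 25% of
# the attack unmitigated: 6 / (0.5 x 0.25) = 48 years remaining.
print(remaining_life_years(6.0, 0.5, 0.25))  # 48.0
print(pof_time_dependent(6.0, 0.5, 0.25))    # ~0.021 failures/year
```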
Both of these algorithm forms are very simple and intuitive. They efficiently use measurements and, in turn, also produce a measurement of PoF. The same applies to consequences of failure. We’ll detail these calculations in future columns.
So, that is the partial argument for using real measurements in today’s more demanding RA environment. As we abandon the use of scoring-type approaches, there will be just a twinge of discomfort. This twinge will be quickly replaced by an “ah-ha” moment, when real risk numbers materialize and decision-making can be directly linked to a real understanding of risks.
For more information, please see www.PipelineRisk.net.