A full discussion of mitigation appears in Chapter 11, CoF.
With ‘failure’ defined as a leak or rupture (loss of integrity), we can use the previously discussed guiding equation for understanding consequence potential (“Risk is PoF x CoF—Where Should the Focus Be?” July 2016). That discussion also noted that this equation gives guidance on options to reduce consequence potential.
CoF associated with any pipeline release can be efficiently understood as being comprised of four parts acting in a dependent relationship:
CoF = P × V × D × R
where:
P = product hazard (toxicity, flammability, etc.)
V = release quantity (quantity of liquid or vapor released)
D = dispersion (spread or range of the release, including early- and late-ignition scenarios)
R = receptors (all things that could be damaged by the release).
The dependent relationship is reflected in the multiplication in this equation[1]. Each factor can have a dramatic impact on total CoF, and a directional change, higher or lower, in any of the four variables will generally forecast the direction of change in consequence potential. To reduce overall consequence potential, any single component can be reduced; if any factor goes to zero, there are zero consequences.
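As a hedged illustration of this multiplicative relationship (the factor values below are entirely hypothetical scores, not from the source), a few lines of code show how a directional change in one factor propagates to total CoF, and how zeroing any factor zeroes the consequences:

```python
# Conceptual CoF = P x V x D x R treated as a multiplicative score.
# All factor values are hypothetical, normalized 0-to-1 scores for
# illustration only.

def cof(p, v, d, r):
    """Conceptual consequence-of-failure score: the product of the four factors."""
    return p * v * d * r

baseline = cof(p=0.8, v=0.6, d=0.5, r=0.4)
halved_dispersion = cof(p=0.8, v=0.6, d=0.25, r=0.4)  # dispersion cut in half
no_receptors = cof(p=0.8, v=0.6, d=0.5, r=0.0)        # no receptors in range

print(round(baseline, 3))           # 0.096
print(round(halved_dispersion, 3))  # 0.048 -- halving one factor halves CoF
print(no_receptors)                 # 0.0   -- any factor at zero => zero CoF
```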
Note that the first three impact hazard zone size while the fourth impacts damages within the hazard zone.
This simple equation helps us to understand the risk management options that focus on CoF. Consistent with this guiding equation, we can reduce CoF and, hence, risk, by actions targeting any of these four, such as:
- Changing the product
- Reducing product pressure or flowrate
- Preventing or limiting dispersion (e.g., emergency response, secondary containment, boom deployment)
- Reducing spill quantities (e.g., leak detection, remotely operated equipment, personnel deployments)
- Changing proximity to receptors (people, property, environment).
Of course, these options have varying levels of practicality. Even the more practical opportunities are far from foolproof, and their ability to reliably reduce CoF is highly location- and scenario-specific. In some instances they play a significant, valuable role; in others, much less so.
Examining each of the four, in roughly reverse order of ability to change . . .
Damage to Receptors Reduction Measures
Before examining the more practical opportunities to reduce consequence potential via reduction in hazard zone, let’s mention the option of minimizing the damage to receptors within the hazard zone. Changing proximity to (or vulnerability of) receptors (people, property, environment) is a possible but rarely practical option.
- Usually not viable, but CoF potential can theoretically be reduced by:
  - Moving the receptors
  - Re-routing the pipeline
Product Hazard Reduction Measures
Changing the product transported means making it somehow less damaging to potential receptors. This is usually not viable unless some toxic, flammable, or otherwise hazardous component(s) can be reduced, thereby reducing harm potential. When possible, such changes reduce hazard zone dimensions and, thereby, consequence potential.
Dispersion Reduction Measures
Preventing or limiting dispersion (e.g., emergency response, secondary containment, boom deployment) is a potentially viable area of CoF reduction. Dispersion directly determines hazard zone dimensions, so limiting dispersion reduces hazard zones.
Anything, other than reducing the rate of release, that halts or limits the distance the product travels is assessed here. Secondary containment around tanks and pumps is a common example.
Limiting spill/release size versus limiting dispersion may sometimes appear overlapping but the differentiation is useful. Consider a hydrocarbon sensor in a propane pump station which triggers upon detection of a flammable hydrocarbon, presumably the edge of a vapor cloud. If this device triggers a valve to close, it has limited spill/release size while if it triggers some type of isolation curtain (as a theoretical example), it has not affected the release quantity, only its extent of dispersion.
An extreme example is the contentious practice of deliberately igniting a flammable vapor cloud (using a flare gun, for example) to prevent it from continuing to expand and finding a more distant ignition source (thereby increasing the hazard zone).
Spill/Release Volume Reduction Measures
Reducing the amount of product released will also reduce the hazard zone. The size of a potential release is a function of pressures and flowrates, among other factors. Reducing product pressure or flowrate is an option, although not often economically desirable. More realistic options address the speed and efficiency of reactions to leak/rupture as discussed below.
Leak detection and isolation capabilities are dominant among opportunities to reduce hazard zone dimensions. Those are detailed below and elsewhere. Other opportunities include:
- procedures, especially control center procedures mandating shut-ins with less provocation
- training, including emergency drills
These aspects can be included in the risk assessment as part of mitigation effectiveness and/or in assessments of human error potential (Incorrect Operations).
Capability Analysis Tools: Leak Detection and Pipeline Isolation
To begin the discussion of these more common consequence-reduction measures, note that US IMP regulations require analysis of potential mitigation measures under certain regulatory requirements (HCA-impacting segments under an IMP regulation). Among the specific mitigation measures to be evaluated are leak detection capability and shut-in capabilities: i.e., EFRD, ASV, and RCV analyses.
Leak detection is examined in the next section while isolation capability is examined in the subsequent section.
Risk Reduction via Leak Detection
While detection of pipeline leaks is intuitively a way to reduce risks, some might not see exactly how risk reduction can occur or how to measure it.
Leak detection can reduce risk by reducing consequences. It has little to no effect on failure prevention (with rare exceptions noted previously). So, of the two parts of risk, probability and consequence, it plays a role mostly in potentially reducing the level of damage after a spill/release has begun.
Leak detection can reduce the spill quantity and, logically, the associated dispersion. But is it a good choice for efficient risk reduction?
US regulations give much latitude in what risk reduction actions an operator employs. Leak detection is, however, specifically mandated in certain situations, including the use of odorization. Even when specific leak detection systems are not mandated, leak detection in general, as a potential risk management option, must at least be evaluated under some regulations. Regulatory auditors can and do insist on reviewing these evaluations. Some operators have difficulty assessing their current capabilities.
Then, as a related regulatory mandate, a formal decision process determining the sufficiency of that capability is also required. So, pipeline operators must assess leak detection capabilities and have a process to consistently judge when that capability should be enhanced. Let’s examine each of these facets.
A leak detection capability analysis must recognize two important aspects: there are almost always multiple ‘leak detection systems’ (LDS) in place and each has varying abilities to find leaks of varying sizes. So, step one is to identify all of the systems in place. LDS types often include:
- SCADA-based systems such as monitoring via alarms (pressure, flowrate, temperature, etc.), transient models, mass balances, etc.
- Field-based systems such as staffing, patrol, sensors, groundwater monitoring, odorization, and even passerby reporting
Each LDS is sensitive to either leak rate or spilled volume. Many can find high leak rates: the noise, smell, vapor clouds, pressure drops, flows over the ground surface, etc. from high rates are readily detectable. At the other extreme, some small leak rates are undetectable until a certain volume has been released, i.e., only a puddle, a sheen on water, groundwater contamination, or other evidence allows detection.
Since the ability of an LDS to detect various leak rates is not a single value, a curve can be plotted to show the capabilities of each system. A composite curve can then be built showing the combined capabilities of all LDSs. If plotted on a graph of leak rate versus time to detect, the area under the composite curve represents the volume released before detection. If the analyses were distilled into a single value, this would be it.
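A minimal sketch of this composite-curve idea (all systems, leak rates, and detection times below are hypothetical, invented for illustration): for each leak rate, the composite time-to-detect is the fastest of the systems in place, and rate times time-to-detect gives the volume released before detection at that rate.

```python
# Hypothetical leak-detection capability curves: hours to detect,
# keyed by leak rate in barrels/hour. None = this LDS cannot detect
# that rate at all. All numbers are invented for illustration.
LDS_CURVES = {
    "SCADA mass balance": {100: 0.5, 10: 6.0, 1: None},
    "Aerial patrol":      {100: 84.0, 10: 84.0, 1: 168.0},  # ~weekly patrol
    "Passerby reports":   {100: 2.0, 10: 48.0, 1: None},
}

def composite_time(rate):
    """Fastest detection time (hours) across all systems for a given leak rate."""
    times = [c[rate] for c in LDS_CURVES.values() if c.get(rate) is not None]
    return min(times) if times else None

def volume_before_detection(rate):
    """Barrels released before the first system detects the leak."""
    t = composite_time(rate)
    return None if t is None else rate * t

for rate in (100, 10, 1):
    print(rate, composite_time(rate), volume_before_detection(rate))
# 100 bbl/hr -> detected in 0.5 hr  ->  50 bbl released
#  10 bbl/hr -> detected in 6.0 hr  ->  60 bbl released
#   1 bbl/hr -> detected in 168 hr  -> 168 bbl released
```

Note how, under these assumed numbers, the small chronic leak releases the most volume before detection, consistent with the point above that small leak rates may go undetected until a volume accumulates.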

Having analyzed the family of curves representing current leak detection capabilities, an important input into the second aspect (are current LDS capabilities sufficient?) emerges. The area under the composite curve, the total volume spilled before detection, provides insight into the amount of consequence that could theoretically be impacted by improved leak detection capabilities. That is the beginning of a cost/benefit analysis, the most defensible way to decide sufficiency[2].
Proposed leak detection enhancements will generate additional curves. A proposed improvement to leak detection capabilities will generally focus on a specific part of the leak-rate vs time-to-detect curve. The difference between the current composite curve and the potential future composite curve shows the amount of product loss that is avoided by the enhancement.
The volume reduction must be monetized in order to complete the cost/benefit analysis. For some products, a cost savings is readily assigned to this avoided volume loss. The savings realized may be simply the value of the lost product itself and cleanup/remediation expenses avoided. For other products, scenarios involving ignition, fire, and thermal damages must be factored into potential consequence reduction. A good risk assessment should be able to quantify the change in risk associated with any potential leak detection improvement. Ideally, this will be expressed in terms of Expected Loss in $/km-year.
Finally, the costs of the improvement in leak detection must be factored in. That cost must consider initial and ongoing expenses, applied over the miles of pipeline and the time period for which the improvement provides benefits. So, total costs are expressed in the same $/km-year units as avoided loss (risk), allowing direct comparison.
Recognizing the extent of the leak detection improvement over the lengths and time periods involved reveals some interesting things. Even a very expensive enhancement, such as a full SCADA-based transient model, can be cost-effective: if such an LDS covers many miles of pipeline for many years, the per km-year cost can be an efficient way to reduce the per km-year risk. On the other hand, a seemingly inexpensive solution applied very narrowly, in terms of length and time period, may be hard to justify.
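A worked arithmetic sketch of this comparison (every dollar figure, length, and service life below is assumed for illustration only) shows how spreading costs over length and time changes the picture:

```python
# Hypothetical annualized cost comparison in $/km-year.

def cost_per_km_year(initial_cost, annual_cost, km_covered, years_of_benefit):
    """Spread initial plus ongoing costs over length covered and service life."""
    return (initial_cost / years_of_benefit + annual_cost) / km_covered

# An expensive transient model spread over a long system and many years...
transient_model = cost_per_km_year(
    initial_cost=2_000_000, annual_cost=150_000,
    km_covered=800, years_of_benefit=20)

# ...versus a cheap local sensor covering one short segment for a few years.
local_sensor = cost_per_km_year(
    initial_cost=20_000, annual_cost=5_000,
    km_covered=2, years_of_benefit=5)

print(transient_model)  # 312.5  $/km-year
print(local_sensor)     # 4500.0 $/km-year
```

Under these assumed numbers, the system with the far larger price tag is an order of magnitude cheaper per km-year, which is the counterintuitive result the paragraph above describes.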
As with many issues in risk management, performing the calculations often results in new and interesting insights. This is, of course, the central intent of formal risk management—revealing the nuances that can optimize decision-making.
Risk Reduction via Isolation Capability
US regulations give much latitude in what risk reduction actions an operator employs. Just as with leak detection, however, isolation capability is specifically mandated as a potential risk management option that must at least be evaluated. Regulatory auditors can and do insist on reviewing these evaluations.
In examining isolation or shut-in capabilities, EFRD means emergency flow restricting device, ASV means automatic safety valve, and RCV means remote control valve. The different nomenclature refers to different regulations in the US. Regardless of the regulation and the nomenclature used, the intent is the same: the owner/operator is obliged to determine, in a formal way, whether additional shut-in capabilities are appropriate on the subject pipeline segment. These additional capabilities can be additional valves (manual, automated, or remotely controlled) and/or capabilities added to existing valves. Added capabilities can take the form of check valves, automatic trip initiators, or closures that can be prompted from a remote control center via SCADA. Leak detection obviously overlaps this mitigation; for instance, field-deployed hydrocarbon sensors could be set to automatically isolate leaking segments via ASVs.
EFRD and leak detection capability analyses are required by some regulations and are always a good idea even when not required. Both are consequence mitigation measures: they do not impact the probability of failure; they serve only to potentially minimize the consequences of a failure by reducing spill/release size and, hence, hazard zone size.
These analyses are often outsourced by an owner/operator as a service. Common deliverables and common gaps in analyses are discussed here.
The EFRD and leak detection capability analyses should be conducted in the same manner as for any other contemplated mitigation measure. Consistent with ALARP and PMM requirements, cost/benefit estimates for any possible mitigation measure should be produced and compared to decision thresholds, such as those from ALARP, to determine whether that particular measure is appropriate. This is a powerful process because in some cases it may document a defensible position that the system is already safe enough, i.e., that no further mitigations are warranted.
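The decision-threshold step can be sketched as a simple comparison (the disproportion factor and all dollar figures below are hypothetical; actual ALARP thresholds are operator- and jurisdiction-specific):

```python
# ALARP-style sufficiency check, sketched with assumed numbers: a mitigation
# is warranted when its cost is not grossly disproportionate to the risk
# reduction it buys, both expressed in the same $/km-year units.

def mitigation_warranted(risk_reduction, cost, disproportion_factor=3.0):
    """True if benefit times the disproportion factor meets or exceeds cost."""
    return risk_reduction * disproportion_factor >= cost

# Avoided expected loss of $120/km-yr vs a $300/km-yr mitigation cost:
print(mitigation_warranted(risk_reduction=120.0, cost=300.0))  # True
# The same benefit against a $500/km-yr cost fails the test:
print(mitigation_warranted(risk_reduction=120.0, cost=500.0))  # False
```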
[1] This is more a conceptual equation than a mathematical one.
[2] As noted in the Nov 2019 article “The $200 anomaly”, a cost/benefit analysis, as part of an ALARP (As Low As Reasonably Practicable) determination, is the most widely recognized method for deciding ‘safe enough’.
Leak Detection Capability Analyses
The challenge in valuing the risk reduction benefit of leak detection is the extreme variability not only in risk but also in leak detection capability. The variability is driven by pipeline characteristics (diameter, wall, SMYS, etc.), operations (product, pressure, flowrates, etc.), location (topography, receptors, etc.), leak detection systems in place, and many other factors, some of which change foot-by-foot along a pipeline.
Any contemplated addition to current capabilities will not replace all LDSs. Rather, it can play a valuable role for a certain set of leak scenarios. Those scenarios must be understood in terms of their potential consequences and frequency of occurrence, as well as the potential benefits of earlier detection specific to those scenarios.
A “Leak Detection Capability Analysis” is a standalone task, necessary to both meet certain regulatory mandates and also to assess the value of any contemplated change in LDS. The latter is necessary since a valuation of any LDS is contingent upon what gap is being filled in an operator’s current leak detection capabilities. Operators almost universally do not have detailed, quantified knowledge of their leak detection capabilities since this can be a non-trivial calculation.
Most US operators are under regulations that mandate that portions of their systems must have such analyses along with decision processes regarding sufficiency of current capabilities. These regulations are often not yet being well-enforced, hence the apparent current gap between regulations and compliance.
The spreadsheet used in these valuation analyses assists in this task. Any analysis could include a notation of where/how additional LDS capability could fill a gap in current capabilities and how much risk reduction could accompany it.