Introduction
Pipeline risk management is a complex and fascinating practice, bringing together aspects of science (including physics, chemistry, biology, geology, and more), engineering, history, probability theory, human psychology, and even philosophy.
It begins with assessing the risks. Here is the typical challenge: decades ago, someone designed a multi-component engineered structure using pressurized pipe, valves, fittings, compressors, pumps, tanks, etc. It was installed in a highly variable natural/man-made environment across deserts, jungles, farms, rivers, lakes, mountains, urban centers—often with changing soils, temperature extremes, micro-organism activity, magnetic field effects, etc. Now, years and years later, we are trying to determine where weaknesses and more consequential failure locations exist. A myriad of scientific phenomena—both natural and man-made—are interacting to complicate our ability to understand and creating a puzzle with thousands of pieces to fit together. What an interesting confluence of engineering coexisting with Mother Nature!
Next come the practical applications of having ‘solved’ this puzzle: armed with an understanding of the risks, what can and should now be done? This is where we must leave the realm of pure science and engineering and enter into aspects of the human behavioral sciences.
This text endeavors to examine more completely the solving of the puzzle—the risk assessment—and then lightly step into the issues of managing risk.
The intention is to equip the risk manager with the tools to understand the risk and the ability to efficiently apply this knowledge when making decisions.
The Puzzle
Today, we have an unprecedented amount of data available to solve this pipeline risk puzzle. Let’s say we want to understand internal corrosion potential on a natural gas pipeline. We examine some recent ILI results, looking for internal corrosion metal loss indications. We find some. Are they occurring at bottom o’clock positions of the pipe circumference? If so, that is a clue. We plot the ILI anomalies in GIS, add aerial photography, add topography, and look for more clues. Do we see clusters of metal loss at possible low spots—where the pipe is crossing creeks, valleys, etc? Let’s overlay elevation data—are there steep inclinations here where liquids/solids could accumulate and persist? Are we close to gas inputs, where historical liquid excursions (carryovers) might have accumulated and might first impact piping?
Next, we examine gas quality records and the performance record of the input gas streams that might have put contaminants into the gas stream. Given this, we need to understand the chemistry—what combinations of chemicals and environmental factors could be generating corrosion and at what rates? Then we can study fluid flows, thermodynamics, and hydraulics to understand how contaminants might behave inside the product stream. For those who like engineering detective work—isn’t such sleuthing compelling?
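As an illustration of this kind of detective work, the screening logic described above might be sketched in code. This is a hypothetical sketch only; the field names (`clock_hr`, `elevation_ft`, `type`), thresholds, and data values are invented for illustration and do not come from any actual ILI dataset:

```python
# Illustrative sketch: screening ILI metal-loss indications for
# internal-corrosion clues. All field names and values are hypothetical.

def likely_internal_corrosion(anomaly, low_spot_elevations, tolerance_ft=5.0):
    """Flag an ILI metal-loss indication as a possible internal-corrosion
    clue when it sits near bottom o'clock and close to a local low spot."""
    # Bottom o'clock: roughly the 5:00-7:00 band of the pipe circumference
    at_bottom = 5.0 <= anomaly["clock_hr"] <= 7.0
    # Near a low spot where liquids/solids could accumulate and persist
    near_low = any(abs(anomaly["elevation_ft"] - low) <= tolerance_ft
                   for low in low_spot_elevations)
    return anomaly["type"] == "internal_metal_loss" and at_bottom and near_low

anomalies = [
    {"type": "internal_metal_loss", "clock_hr": 6.0, "elevation_ft": 101.0},
    {"type": "internal_metal_loss", "clock_hr": 12.0, "elevation_ft": 250.0},
]
flags = [likely_internal_corrosion(a, low_spot_elevations=[100.0])
         for a in anomalies]  # only the first anomaly is flagged
```

In practice this screening would run against GIS layers (elevation profiles, crossings, input locations) rather than hand-entered records, but the clue-combining logic is the same.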
This is essentially what good risk assessment is doing. But it is far more efficient than what we would-be detectives can do individually. The risk assessment can broadcast our detective work over tens of thousands of miles of pipelines almost instantly. This effectively replaces thousands of man-hours of investigation and instantly puts key information into the hands of decision-makers.
It really is exciting to see large quantities of data drawn into a model and immediately see meaningful, actionable information come out. Turning data into information ensures that the right decisions can be made.
The risk assessment should add clarity. Some risk assessments add complexity. The real world is sufficiently complex that no unnecessary additional complexity should be tolerated. In a good risk assessment, if complexity appears, it should only be because the underlying science is complex.
Assessment is, of course, just the beginning of risk management. Even with complete understanding of risk—via the risk assessment—we still have the challenge of how to manage this risk. Again, a host of factors comes into play: how much risk reduction is warranted? How quickly should risk reduction occur? Which is better—much risk reduction at a specific location or more modest risk reduction over many miles of pipeline? All strive to answer the key underlying question: how safe is ‘safe enough’?
How Risk Assessment Helps
Achieving safety while undertaking a potentially dangerous activity means identifying and managing risks. Although they seem simple in concept, pipelines are actually complex, dynamic systems, operating in often-challenging environments and subject to a vast and varying array of integrity threats.
While risk has always been an interesting topic to many, it is also often clouded by preconceptions. Many equate risk analyses with requirements of huge databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be done with only moderate effort and even in a data-scarce environment. This was the major premise of the earlier PRMM[1].
PRMM has a certain sense of being a risk assessment cookbook—“Here are the ingredients and how to combine them.” Feedback from readers indicates that this was useful to them. That aspect is reflected in this book, even as the new methodologies shown here are far superior to our past practices.
Beyond the desire for a straightforward approach, there also seems to be an increasing desire for more sophistication in risk modeling. This is no doubt the result of an unprecedented number of practitioners pushing the boundaries as well as more widespread availability of data and more powerful computing environments. Today, it is easy and cost-effective to consider many more details in a risk model. Initiatives are currently under way to generate more widespread, complete, and useful databases to further our knowledge and to better support the detailed risk modeling efforts.
The desire for ‘more’—more accuracy, more knowledge, more decision-support—is also fueled by the knowledge that potential consequences of incorrect risk management are higher now than in the past and will likely continue to increase. Aging infrastructure, system expansions, and encroaching populations are primary drivers of this change. Regulatory initiatives reflect this concern in many parts of the world.
Robustness Through Reductionism
The best practice in risk assessment is to assess major risk variables by evaluating and combining many lesser variables, generally available from the operator’s records or public domain databases. This is sometimes called a reductionist approach, reducing the problem to its subparts for examination. This allows assessments to benefit from direct use of measurements or evaluations of multiple smaller variables, rather than a single, high-level variable, thereby reducing subjectivity. If the subparts—the details—are not yet available, then higher level inputs must suffice.
The reductionist approach also applies to the physical dimensions of the system. The risk for a pipeline is assessed as the sum of the risk of its components, where the components are the pipe, fittings, valves, tanks, pumps, compressors, meters, etc.
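A minimal sketch of this reductionist bookkeeping follows. It assumes small, approximately independent per-component failure rates that can simply be summed; the component names and rate values are hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative sketch: a pipeline's failure-rate estimate assembled as the
# sum of its components' estimates. Names and rates are hypothetical.

def system_failure_rate(component_rates):
    """Sum per-component failure-rate estimates (failures/year) into a
    system-level estimate; reasonable when rates are small and independent."""
    return sum(component_rates)

# Hypothetical per-component estimates, in failures/year
components = {
    "pipe":         2.0e-4,
    "valves":       5.0e-5,
    "pump_station": 1.0e-4,
}
total = system_failure_rate(components.values())  # 3.5e-4 failures/year
```

Each component rate would itself be built from many lesser variables, in keeping with the reductionist approach described above.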
A critical belief underlying this book is that all pertinent information should be used in a risk assessment. There are very few pieces of collected pipeline information that are not useful to the risk assessment. The risk evaluator should expect any piece of information to be useful until he absolutely cannot see any way that it can be relevant to risk or decides its inclusion is not cost-effective.
Any and all experts’ opinions and thought processes can and should be codified, thereby demystifying the experts’ personal assessment processes. The experts’ analysis steps and logic processes can be replicated to a large extent in a risk assessment model. A detailed model should ultimately be ‘smarter’ than any single individual or group of individuals operating or maintaining the pipeline—including that retired guy who ‘knew everything’. It is often useful to think of the assessment process as ‘teaching the model’. We ‘tell’ the model what we know and what it means to know various things. We are training the model to ‘think’ like the best experts and giving it the benefit of the collective knowledge of the entire organization and all the years of record-keeping.
Changes from previous approaches
Previous risk assessment approaches served us well in the past. They helped support decision making by crystallizing thinking, removing subjectivity, and helping to ensure consistency. But the era of many older approaches has passed, due to increased expectations as well as now-superior analysis techniques and the availability of powerful and inexpensive computer tools.
Our regulators, attorneys, neighbors, and other stakeholders are no longer satisfied that we can successfully manage risk using tools that are not modern and robust. We now have strong, reliable, and easily applied methods to estimate actual risks, and no longer must accept the compromises generated by intermediate scoring schemes or statistics-centric approaches. The modern approach to pipeline risk assessment is presented here. It is superior—in accuracy, defensibility, and cost of analysis—to the alternatives since it incorporates their best aspects and eliminates their weaknesses. The migration from the older approaches is described in the following sections.
A substantial improvement in risk assessment methodology should not be a surprise. Changes to risk algorithms have always been anticipated, and every risk model—even the most advanced—should be regularly reviewed in light of its ability to incorporate new knowledge and the latest information.
This book presents the newer risk assessment methodologies for evaluating all aspects of pipeline risk. This approach reflects the advances in risk assessment technology from research & development efforts as well as years of input of pipeline operators, pipeline experts, and risk assessors.
A migration from both relative risk assessment and ‘classical’ QRA is central to better understanding risk. There is no longer any valid reason to use a relative, scoring type risk assessment approach. There is also no reason to adopt the statistics-centric ‘classical’ QRA approaches. We now have updated techniques and a powerful, but simple framework to capture and more efficiently use all available information. When much more useful results are available with no additional cost or effort, why use lesser solutions?
Key Changes
Early chapters of this book offer foundational and background information. The experienced, practicing risk manager may wish to move directly to the how-to chapters. It is advisable to quickly become familiar with the most essential elements of the newer methodology presented in this book. Central to this much-improved methodology are several key features:
- The abandonment of all scoring (point-assignment) systems, now replaced by measurements.
- The PoF triad—exposure, mitigation, and resistance—the essential ingredients to understand PoF.
- The use of OR and AND gate math.
- The use of both measurements and estimates to replicate an SME’s decision processes.
- The calculation of hazard zones to drive CoF estimates.
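The triad and the gate math can be sketched as follows. This is an illustrative form only (exposure treated as an event frequency, mitigation and resistance as effectiveness fractions between 0 and 1), not the book's exact algorithms, and the threat names and values are hypothetical:

```python
# Illustrative sketch of the PoF triad and OR/AND gate math.
# Assumed forms, not the book's exact algorithms; values are hypothetical.

def pof_one_threat(exposure, mitigation, resistance):
    """AND gate as a product: a failure requires an exposure event that
    survives mitigation AND overcomes the pipe's resistance."""
    return exposure * (1.0 - mitigation) * (1.0 - resistance)

def or_gate(probabilities):
    """OR gate: chance that at least one of several independent threats
    causes failure, avoiding double-counting of overlapping events."""
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Hypothetical per-threat values for one segment (failures/mile-year);
# valid as probabilities only when the rates are small.
threats = [
    pof_one_threat(exposure=0.10, mitigation=0.95, resistance=0.90),
    pof_one_threat(exposure=0.02, mitigation=0.80, resistance=0.50),
]
segment_pof = or_gate(threats)
```

The same gates recur throughout the methodology: independent contributors to failure combine through OR gates, while conditions that must all hold combine through AND gates (products).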
Many other aspects of risk assessment remain similar to previous approaches. Pipeline risk factors are generally well understood. It is only the better capturing of their role in risk that changes. The estimation of consequences has generally been more grounded in physics and engineering principles already. Fewer changes in those methodologies are warranted.
Armed with these key changes in methodology, the more experienced reader can scan Chapter 2 for basic definitions and application nuances and then move to Chapters 5-11 to efficiently begin assessing risks.
Migration from previous methodologies
If you have a complete risk assessment system based on older methods, that system can usually be readily migrated to a modern platform. The previous work is preserved and can be more efficiently employed while measured data from today’s modern inspection and integrity-evaluation tools is also integrated.
The table below shows an example of converting input data from an older, scoring-type risk assessment approach into a modern risk assessment. The first step is to identify what aspect of risk is impacted by the previously-collected data. All inputs should inform estimates of one of: PoF-exposure, PoF-mitigation, PoF-resistance, or CoF. Then, the previously assigned scores or point values can be linked to measurement values. This allows rapid conversion of even the largest scoring-type risk databases.
Example Conversion of Scores to Measurements
| Risk Issue | Old Index/Score | New PoF Element | Measurement/Estimate |
| --- | --- | --- | --- |
| depth of cover | shallow = 8 pts | mitigation | 15% |
| wrinkle bend | yes = 6 pts | resistance | -0.07" pipe wall |
| coating condition | fair = 3 pts | mitigation | 0.01 gaps/ft2 |
| soil | moderate = 4 pts | exposure | 4 mpy |
Some calibrations and handling of special cases will usually be needed, and documentation will need to be updated, but the whole conversion/migration effort should consume only dozens of man-hours, not hundreds.
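A conversion like the one tabulated above might be captured as a simple lookup. The mapping below mirrors the example rows and is illustrative only; a real migration would cover every legacy score and route unmapped entries to SME review:

```python
# Hypothetical score-to-measurement conversion (entries mirror the example
# table; illustrative only, not a complete or authoritative mapping).

SCORE_TO_MEASUREMENT = {
    ("depth of cover",    "shallow = 8 pts"):  ("mitigation", "15%"),
    ("wrinkle bend",      "yes = 6 pts"):      ("resistance", '-0.07" pipe wall'),
    ("coating condition", "fair = 3 pts"):     ("mitigation", "0.01 gaps/ft2"),
    ("soil",              "moderate = 4 pts"): ("exposure",   "4 mpy"),
}

def convert(risk_issue, old_score):
    """Map a legacy score to its PoF element and measurement/estimate;
    unmapped combinations are flagged for manual (SME) review."""
    return SCORE_TO_MEASUREMENT.get((risk_issue, old_score),
                                    ("needs_review", None))

element, value = convert("soil", "moderate = 4 pts")  # ("exposure", "4 mpy")
```

Run over an entire legacy scoring database, a lookup of this kind is what makes the "dozens of man-hours, not hundreds" conversion effort plausible.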
In re-using previous data, there should be some similarities in results when comparing old versus new. But new and important insights should also emerge, as the modern approaches provide superior results that more accurately represent real-world risks.
Sidebar
The Outlook for Pipeline Risk Assessment: An Interview
US regulators have recently expressed criticism regarding how Integrity Management Plan (IMP) risk assessment (RA) for pipelines is being conducted. Do you also see problems?
There is a wide range of practice among pipeline operators right now. Some RA is admittedly in need of improvement—not yet meeting the intent of the IMP regulation. However, I believe that is not due to lack of good intention but rather incomplete understanding of risk. Risk is a relatively new concept and not easy to fully grasp. To address PHMSA’s concerns, we as an industry need to improve our understanding of risk and how to measure it.
What’s new in the world of pipeline risk assessment?
In the last few years, the emergence of the US IMP regulations has prompted the development of more robust RA methodologies specifically designed for pipelines. Even though PHMSA and others have identified weaknesses among some practitioners, much progress has been made. Previous methodologies fell into two categories: 1) scoring systems designed for simple ranking of pipeline segments, and 2) statistics-based quantitative risk assessments (QRAs) used in more robust applications, often for industrial sites and for certain regulatory and legal needs. The first category was popular among the pre-IMP voluntary practitioners but was limited in its ability to accurately measure risk and to meet IMP regulatory requirements. The second was costly and ill-suited for long linear assets, like pipelines.
You note two categories of previous risk assessment methodologies. What about others, like ‘scenario-based’ or ‘subject matter experts’, that are listed in some standards?
I think that listing is confusing tools with risk assessment methodologies. The two examples you mention are important ingredients in any good risk assessment but they are certainly not complete risk assessments themselves.
What are the newest pipeline risk assessment methodologies like?
They’re powerful, intuitive, easy to set up, less costly, and vastly more informative than either of the previous approaches. By independent examination of key aspects of risk and the use of verifiable measurement units, the whole landscape of the risks becomes apparent. That leads to much improved decision-making.
How can they be both easy and more informative?
More informative since they produce the same output as the classic QRA but are more accurate. Easy because they directly capture our understanding of pipelines and what can cause them to fail. The word ‘directly’ is key here. Previous methods relied on inferential data and/or scoring schemes that tended to interfere with our understanding.
If they do the same thing as QRA, why not just use classical QRA?
Several reasons: classic QRA is expensive and awkward to apply to a long, linear asset in a constantly changing natural environment—can you imagine developing and maintaining event trees/fault trees along every foot of every pipeline? Classical QRA was created by statisticians and relies heavily on historical failure frequencies. Ask a statistician how often something will happen in the future and he will ask how often it has happened in the past. I often hear something like “we can’t do QRA because we don’t have data.” I think what they mean is that they believe that databases full of incident frequencies—how often each pipeline component has failed by each failure mechanism—are needed before they can produce the QRA type risk estimates. That’s simply not correct. It’s a carryover from the notion of a purely statistics-driven approach. While such historical failure data is helpful, it is by no means essential to RA. We should take an engineering- and physics-based approach rather than rely on questionable or inadequate statistical data.
But if I need to estimate (‘quantify’) how often a pipeline segment will fail from a certain threat, don’t I need to have numbers telling me how often similar pipelines have failed in the past from that threat?
No, it’s not essential. It’s helpful to have such numbers, but not necessary and sometimes even counterproductive. Note that the historical numbers are often not very relevant to the future—how often do conditions and reactions to previous incidents remain so static that history can accurately predict the future? Sometimes, perhaps, but caution is warranted. With or without historical comparable data, the best way to predict future events is to understand and properly model the mechanisms that lead to the events.
Why do we need more robust results? Why not just use scores?
Even though they were developed to help simplify an analysis, scoring and indexing systems add an unnecessary level of complexity and obscurity to a risk assessment. Numerical estimates of risk—a measure of some consequence over time and space, like ‘failures per mile-year’—are the most meaningful measures of risk we can create. Anything less is a compromise. Compromises lead to inaccuracies; inaccuracies diminish decision-making; diminished decision-making misallocates resources; and misallocated resources leave more risk than necessary. Good risk estimates are gold. If you can get the most meaningful numbers at the same cost as compromise measures, why would you settle for less?
Are you advocating exclusively a quantitative or probabilistic RA?
Terminology has been getting in the way of understanding in the field of RA. Terms like quantitative, semi-quantitative, qualitative, probabilistic, etc. mean different things to different people. I do believe that for true understanding of risk and for the vast majority of regulatory, legal, and technical uses of pipeline risk assessments, numerical risk estimates in the form of consequence per length per time are essential. Anything less is an unnecessary compromise.
What about the concern that a more robust methodology suffers more from a lack of data? (i.e., “If I don’t have much info on the pipeline, I may as well use a simple ranking approach.”)
That is a myth. In the absence of recorded information, a robust RA methodology forces SMEs to make careful and informed estimates based on their experience and judgment. From direct estimates of real-world phenomena, reasonable risk estimates emerge, pending the acquisition of better data. Therefore, I would respond that a lack of information should drive you towards a more robust methodology. Using a lesser RA approach with a small amount of data just compounds the inaccuracies and does not improve understanding of risk—it is largely a waste of time.
It sounds like you have methods that very accurately predict failure potential. True?
Unfortunately, no. While the new modeling approaches are powerful and the best we’ve ever had, there is still significant uncertainty. We are unable to accurately predict failures on specific pipe segments except in extreme cases. With good underlying data, we can do a decent job of predicting the behavior of numerous pipe segments over longer periods of time—the behavior of a population of pipeline segments. That is of significant benefit when determining risk management strategies.
Nonetheless, it sounds like you’re saying there are now pipeline RA approaches that are both better and cheaper than past practice… ?
True. RA that follows the Essential Elements guidelines avoids the pitfalls that befall many past practices. Yet, we can still apply all of the data that was collected for the previous approaches. Pitfall avoidance, full transparency, and re-use of data makes the approach more efficient than other practices. Plus, the recommended approaches now generate the most meaningful measurements of risk that we know of.
Sounds too good to be true. What’s the catch?
One catch is that we have to overcome our resistance to the kinds of risk estimate values that are produced. When faced with a number such as 1.2E-4 failures/mile-year, many have an immediate negative reaction, far beyond healthy skepticism. Perhaps it is the scientific notation, the probabilistic implication, the ‘illusion of knowledge’, or some other aspect that evokes such reactions. I find, however, that such biases disappear very quickly once an audience sees the utility of the numbers and makes the connection—‘Hey, that’s actually a close estimate of the real-world risk.’
Another ‘catch’ is the one we touched on previously. Rare events like pipeline failures have a large element of randomness, at least from our current technical perspective. That means that, no matter how good the modeling, some will still be disappointed by the high uncertainty that must often accompany predictions on specific pipeline segments.
How can industry as a whole improve RA, especially in the eyes of the public and regulators?
A degree of standardization that serves all stakeholders is needed. A list of essential elements sets forth the minimum ingredients for acceptable pipeline risk assessment. Every risk assessment should have these elements. A specific methodology and detailed processes are intentionally NOT essential elements, so there is room for creativity and customized solutions. If regulators encounter too many substandard pipeline RA practices, then prescriptive mandates might be deemed necessary. Such mandates are usually less efficient than approaches that permit flexibility while prescribing only certain ingredients.
Pipeline Risk Management Manual, 3rd Edition, hereinafter referred to as PRMM ↑