
Chapter 1: Risk Concepts

The importance of expressing risk numerically is really a matter of removing subjectivity. If you tell somebody the temperature is 'high' or 'low,' that can mean many things. The same is true of risk: one person's idea of high risk can be quite different from another's. But if we express temperature or risk numerically, and, importantly, on a numerical scale that is grounded in reality, we have removed much of the subjectivity. So when we talk about quantitative risk assessment, we are talking about expressing risk in numerical values that carry meaning for the audience. Telling you the risk is 'high' leaves much room for subjectivity; telling you the risk is one chance in 100 makes clear exactly what is meant by the qualitative descriptor 'high.'

Risk Assessment at a Glance


Modeling of Pipeline Risk

Risk assessment should be consistent; there is no reason for multiple types of risk assessment. The same framework applies to very robust as well as very simple assessments. An example of a rudimentary (high-level, few-details) application of this risk assessment strategy is shown later in this chapter.

Risk: Theory and Application

The Need for Formality

Humans are poor estimators of risk without formality. We routinely overestimate and underestimate true risks due to influences of emotion, memory, or personal preference. Here is an insightful quote from a bestselling book on risk [10]:

“Nature is so varied and so complex that we have a hard time drawing valid generalizations from what we observe.

We use shortcuts that lead us to erroneous perceptions, or we interpret small samples as representative of what larger samples would show.

We display risk aversion when we are offered a choice in one setting and then turn into risk seekers when we are offered the same choice in a different setting.

We have trouble recognizing how much information is enough and how much is too much.

We pay excessive attention to low-probability events accompanied by high drama and overlook events that happen in routine fashion.

We start out with a purely rational decision about how to manage our risks and then extrapolate from what may be only a run of good luck.”

There are also those who opine that attempts to quantify risk are generally flawed. So-called 'Black Swan' (Taleb [1]) events are so complex and rare as to be essentially unpredictable. These are events previously thought to be impossible, until they actually happen—for example, the first sighting of a black-colored swan. Even the specialized statistical theories and associated distributions (for example, extreme value theory) are thought by some to be vain attempts to know the unknowable. Most will agree that extensive and complex modeling can quickly become impractical for real-world risk management and that over-reliance on modeled values with high uncertainty can lead to misdirection of resources.

However, extreme positions against measuring risk underestimate the value of the attempt itself. Such critics miss the key point that the measurement effort itself yields great rewards, even when the measurement is imperfect: "anything that is measured, improves." Striving to assign a realistic value to any phenomenon yields benefits far beyond the number produced. Even when results are imprecise and may not fully capture the unforeseeable, the knowledge gained by earnest attempts to include all possibilities and to assign meaningful values is a significant benefit of risk management.

Much has been written on the general topics of the scientific method and modeling in both science and engineering. See PRMM for a relevant discussion of these principles, and their nuanced application in engineering and risk assessment for pipelines.

The objective should be to build a useful tool—one that is regularly used to aid in everyday business and operating decision making, one that is accepted and used throughout the organization, and one that is robust and defensible.

Complexity

In any modeling effort, complexity should exist only because the underlying real-world phenomenon is complex. The risk assessment itself should not add complexity.

Ironically, a scoring-type risk assessment, intended to simplify the modeling of real-world phenomena, actually adds complexity. By converting real-world phenomena into 'points' via an assignment protocol, an artificial layer of complexity is introduced. This is unnecessary.

A robust risk assessment, covering complex scientific elements such as corrosion mechanisms and stress-strain relationships, may require a level of complexity in order to fully represent the associated risk issues. In this case, the complexity reflects the complexity of the underlying science and is appropriate for certain kinds of risk assessment. In contrast, a risk assessment that requires the assignment of scores to various conditions (for example, soil corrosivity, CP effectiveness, etc.), then the assignment of weightings to each, and then the combination of the scores using non-intuitive algorithms is adding complexity that yields little to no improvement in the analysis. In fact, such artificial complexity probably detracts from the accuracy and usability of the risk assessment.

Intelligent Simplification

The challenge when constructing a risk assessment model is to fully understand the mechanisms at work and then to identify the optimum number of required variables for the model’s intended use. This follows the reductionist approach—breaking the problem down into pieces for later reassembly into meaningful risk estimates.

We must first understand and even embrace the complexity in order to achieve the optimum amount of simplification—this is the process of ‘intelligent simplification.’ The best approach is to begin with the robust solution, including all details and all nuances that make up the real-world phenomena. Only then can a shortcut be contemplated. That way, what is sacrificed by the simplification is clear to the designer.

Furthermore, the robust, all-inclusive solution will be immediately appropriate for many practitioners and eventually appropriate for many more (i.e., a desired future level of detail in the risk assessment). Modeling complex phenomena such as AC-induced corrosion, vapor cloud explosion potential, fracture mechanics, and many others requires numerous inputs and interactions among inputs. Understanding what those inputs are and how they should be used to best model risk scenarios is the first step. With that understanding, simplifications without excessive loss of accuracy may be possible.

When simplifications are not appropriate, the robust solution should be employed, but perhaps in such a way that it does not interfere with the risk assessment's efficiency. Many processes, originating from sometimes complex scientific principles, are "behind the scenes" in a good risk assessment system. These must be well documented and available, but need not interfere with the casual users of the methodology (not everyone needs to understand the workings of the engine in order to benefit from use of the vehicle). Engineers and other more technical audiences will normally seek a rational basis underpinning a system before they will accept it. Therefore, the basis must be well documented. But it can be a bolt-on aspect of the overall risk assessment if it has become too complex to integrate efficiently.

Deciding not to include a detailed variable directly in the risk assessment does not necessarily mean it is ignored. The detail may already be part of an evaluation being conducted elsewhere. For instance, the corrosion department may have a very sophisticated analysis of AC-induced corrosion potential. Rather than replicate this analysis in the risk assessment, perhaps only the results need to be migrated into the risk assessment.

Among all possible variables, choices are required that yield a balance between a comprehensive model and an unwieldy model—inclusion of every possible detail versus loss of important information. Users should be allowed to determine their own optimum level of complexity. Some will choose to capture much detailed information because they already have it available; others will want to get started with a high-level framework. However, by using the same overall risk assessment framework, results can still be compared: from very detailed approaches to overview approaches.

The figure below illustrates the use of a 'short circuit' pending availability of full soil corrosivity information. A 16 mpy soil corrosivity value is used pending information on soil chemistry characteristics, including moisture, pH, and contaminant levels, that will lead to more accurate soil corrosivity values. Having the details shown, but not populated, in the risk assessment model has advantages: it documents that further analysis is possible, and perhaps warranted, and that the entered value is thought to conservatively capture the sub-variables that are not yet known.
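As a sketch only, the short-circuit idea can be represented in a risk model's data structure as a conservative default value alongside documented, but unpopulated, sub-variables. All names below are illustrative assumptions, not prescribed by this text.

```python
# Illustrative sketch: a 'short-circuited' soil corrosivity variable.
# The conservative default is used until the sub-variables are populated.
soil_corrosivity = {
    "default_mpy": 16,        # conservative placeholder value
    "calculated_mpy": None,   # to be derived from sub-variables when known
    "sub_variables": {        # documented so further analysis is visibly possible
        "moisture": None,
        "pH": None,
        "contaminant_levels": None,
    },
}

def corrosivity_mpy(var: dict) -> float:
    """Return the calculated value when available, else the conservative default."""
    return var["calculated_mpy"] if var["calculated_mpy"] is not None else var["default_mpy"]

print(corrosivity_mpy(soil_corrosivity))  # -> 16 mpy until better data arrives
```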

Using Short Circuit, Pending Full Data Availability

Having flexibility in the level of rigor of a risk assessment is a large advantage. While detailed, technically rigorous analysis will always strengthen the assessment, it will not always be warranted; that is, the cost/benefit of the rigor does not always justify the effort. In some instances, this will be a guess: a perceived low-value analysis may actually turn out to be a critical consideration whose absence is lamented. For instance, discounting the potential for H2 permeation through a steel component's wall seems reasonable until the rare phenomenon contributes to a failure and prompts regret that it wasn't previously a consideration.

See also the discussion of Verification, Calibration, and Validation in Chapter 3.

Classical QRA versus Physics-based Models

Most documented quantitative risk assessment approaches are based on statistical analyses. This is because the problem of risk assessment was initially given to statisticians to solve. Ask a statistician how often something will happen in the future, and the first response will be "How often has it happened in the past?" This is reasonable for a methodology that deals exclusively with analyses of how numbers 'behave.' The ability of statistics to model the behavior of larger populations over longer periods of time is undisputed. But this does not provide a complete solution for practitioners of risk management.

Historical data should always influence our estimates of risk. However, it will rarely capture all the pertinent considerations. Even purists will usually agree that statistics can only fully describe very simple and rather uninteresting systems in the universe. Coin flips and games of chance (cards, roulette, etc.) are examples. Real-world systems are complex and require many insights well beyond statistical analyses for understanding.

Scientists and engineers, rather than statisticians, have been more involved in certain portions of the risk assessment, notably consequence modeling. Historically, consequence assessments have made sound use of science and engineering where probability assessments often have not. Consequence evaluations have, for years, routinely used dispersion modeling, thermal effects predictions, heat transfer equations, kinetics and thermodynamics of fluid movements, and many others. On the other hand, probability was simply based on historical rates. Perhaps the historical rates were modified by some subjective ‘adjustment factors’ to account for instances when the subject pipeline was thought to behave differently from the underlying population. But, too often, little science and engineering was applied to the problem of measuring failure potential in a formal but efficient manner.

Underlying most meanings of risk is the key issue of 'probability.' Statistics and probability are closely intertwined. But, as is detailed in this text, probability expresses a degree of belief beyond statistical analyses. 'Degree of belief' is the most compelling definition of probability because it encompasses statistical evidence as well as science, engineering, interpretations, and judgment. Our beliefs should be firmly rooted in fundamental science, engineering judgment, and reasoning. This does not mean ignoring statistics—proper analysis of historical data—for diagnosis, to test hypotheses, or to uncover new information. Statistics helps us understand our world, but it certainly does not explain it.

The assumption of a predictable distribution of future leaks predicated on past leak history might be realistic in certain cases, especially when a database with enough events is available and conditions and activities are constant. However, one can easily envision scenarios where, in some segments, a single failure mode should dominate the risk assessment and result in a very high probability of failure rather than only some percentage of the total. Even if the assumed distribution is valid in the aggregate, there may be many locations along a pipeline where the pre-set distribution is not representative of the particular mechanisms at work there.

There is an important difference between using statistics to better understand numbers—inputs and results—versus basing a risk assessment predominantly on historical incident rates, essentially using statistics to support the belief that the past is the best predictor of the future. This is admittedly an oversimplification and is debatable in several key ways, especially when considering that all techniques are strengthened by simultaneous understanding of both the underlying physics and the statistics. However, this distinction emphasizes a core premise of the methodology recommended in this book: the understanding of the physical phenomena behind pipeline failure should be the dominant basis of a risk assessment. Statistics, in particular historical event frequencies, should be secondary inputs.

The exposure-mitigation-resistance analysis that is an essential element of PoF assessment is a key aspect differentiating a modern pipeline risk assessment from classical QRA. Classical QRA does not seek the exposure-mitigation-resistance differentiation. Without this insight, past failure rates typically used in such assessments have questionable relevance to future failure potential.

Failure to quantify the exposure-mitigation-resistance influences leads to incomplete understanding, which makes risk management problematic. Ideally, historical event rate information will be coupled with the exposure-mitigation-resistance analysis to yield the best PoF estimates.

The exposure-mitigation-resistance analysis is an indispensable step towards full understanding of PoF, as is detailed in later chapters. Without it, understanding is incomplete. Full understanding leads to the best risk management practice—optimized resource allocation—which benefits all stakeholders.

More will be said about improvements over Classical QRA approaches in later sections.

Statistical Modeling

To be clear, the message here is NOT that statistical theory is to be avoided, but rather that statistics should supplement rather than drive risk modeling. Science and physics provide the model basis, but statistics is very useful in tuning or calibrating inputs and results. Failure to use statistical theory would be an error.

In fact, the risk assessment framework proposed in this text has been successfully deployed as a model making increased use of statistical techniques. In one such application, Bayesian networks were established to better incorporate probability distributions, rather than point estimates, and learning or feedback processes were included. That application used the same essential elements as recommended here, most importantly the breakdown of PoF into separate, but connected, measurements of exposure, mitigation, and resistance.

In addition to the classical models of logic, new logic techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring "partial truths"—when a thing is neither completely true nor completely false—have been created based on fuzzy logic, originating in the 1960s from the University of California at Berkeley as a technique to model the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and approximate information. Questions such as "To what degree is x safe?" can be addressed through these techniques. They have found engineering application in many control systems ranging from "smart" clothes dryers to automatic trains.
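As a toy illustration only (not from this text), a fuzzy membership function can express the degree to which a component is 'safe' as a partial truth between 0 and 1; the thresholds below are arbitrary assumptions.

```python
def membership_safe(pof_per_year: float, lo: float = 1e-4, hi: float = 1e-2) -> float:
    """Toy fuzzy membership: fully 'safe' (1.0) at PoF <= lo,
    fully 'unsafe' (0.0) at PoF >= hi, linear in between."""
    if pof_per_year <= lo:
        return 1.0
    if pof_per_year >= hi:
        return 0.0
    return (hi - pof_per_year) / (hi - lo)

print(membership_safe(0.002))  # -> ~0.81: 'mostly safe', a partial truth
```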

The Risk Assessment Process

Fix the Obvious

Where formal risk assessment is not yet in place, potential practitioners sometimes feel overwhelmed, hesitating to get started due to the apparent magnitude of the task ahead. How to assess the risks associated with hundreds or thousands of miles of pipeline system, especially when desired information is scarce?

It is important to recognize that, even in the absence of a formal risk assessment, risk assessment has always been occurring, usually successfully, in all pipeline operations since their inception. The formalization of risk understanding should not interfere with the practice of ‘fix the obvious.’ A formal risk assessment is not needed to show where large issues are already apparent. The risk assessment can refine and improve resource allocation and bring to light the less apparent or distant future risk issues. But when an indisputable risk issue is identified and mitigation actions are obvious and available, time should not be wasted in extensive study or other formalization.

So, the obvious advice is: While seeking to improve risk management processes, continue the practice of risk management.

Using this Manual

Robust pipeline risk assessment generates a risk profile, showing changes in risk along a pipeline route. Risk management uses that profile to identify ways to effectively minimize the risk. Chapters 1–12 discuss the risk assessment process as it can be applied to all types of facilities, handling any kind of product, and traversing any location. Chapter 13 describes the transition from risk assessment to risk management.

Quickly getting answers

Focus Point
A good risk assessment framework supports all levels of rigor: from rapid, ‘ballpark’ estimates to detailed, robust analyses.

Formal pipeline risk assessment does not have to be highly complex or expensive. A savvy risk manager can, in a relatively short time, have a fairly detailed pipeline risk assessment system set up, functioning, and producing useful results. Simple computer tools such as a spreadsheet or desktop database can efficiently and completely support even the most robust of assessments. Then, by establishing some administrative protocols around the processes, the quick-start practitioner has a complete system to fully support risk management.

The underlying ideas are straightforward, and rapid establishment of a very useful decision support system is certainly possible. Initial information and processes may not be of sufficient rigor for full decision support, but the user will nonetheless immediately have a formal structure that helps ensure consistency of decisions and completeness of information.

Both a rudimentary, quick assessment and a robust, detailed assessment will follow the same procedure. This allows the assessment to grow, becoming more accurate with the inclusion of more and more details. The difference between the simple assessment and the robust one lies only in the depth of investigation. Before examining this in more detail, consider also that a risk conceptualization exercise is available to quickly get 'in the ballpark.'

Risk Conceptualization—Getting ‘In the Ballpark’

There exists a type of risk analysis that is even more preliminary than the rudimentary assessment presented in a following section. This might be termed a risk conceptualization rather than an assessment, and it is based solely on basic deductive reasoning.

To illustrate with an example, an analyst may posit that a pipeline's future risks will mirror the losses shown by recent historical annual US gas transmission pipeline experience. He assumes that the subject pipeline 'behaves' as an average[2] US gas transmission pipeline (see Stats). Under this assumption, he deduces that future risks on the subject pipeline are 1.2 significant leak/ruptures per 2,000 mile-years, generating $1,200 per mile-year of losses. He scales these values to the length of his subject pipeline and uses the results in decision-making, as in the sketch below.
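As a hedged illustration of this scaling arithmetic, assuming a hypothetical 300-mile subject pipeline (a length not specified in the text):

```python
# Hypothetical scaling of the cited national-average experience to a subject pipeline.
miles = 300                 # assumed pipeline length (illustrative only)
leak_rate = 1.2 / 2000      # significant leak/ruptures per mile-year (from the cited stats)
loss_rate = 1200            # $ of losses per mile-year (from the cited stats)

print(miles * leak_rate)    # -> 0.18 significant failures/year expected
print(miles * loss_rate)    # -> $360,000/year of expected losses
```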

A similar approach is the use of historical leak/break rates to predict future behavior of sections of distribution pipeline systems. With larger counts of leak/break events, these produce more statistically valid summaries and are sometimes used to understand system deterioration rates.

These generalized, deductive-reasoning approaches obviously are limited, especially when applied to a particular pipeline segment (see numerous discussions later in this text regarding pitfalls associated with use of general statistics in this way). They do, however, offer useful risk context, providing insights into behaviors of populations of components over long periods of time. In the absence of any other information, this approach provides estimates that may often be a close approximation—perhaps within an order of magnitude or so—of the average future performance of many pipelines.

Risk Assessment Steps

True risk assessment must consider the specifics of the asset being assessed and not be unduly influenced by historical data from other assets, even if similar. The following minimum steps are required for assessment of pipeline risk, regardless of level of rigor. While seemingly detailed, these steps can be completed very quickly when only approximate solutions are sufficient.

  1. Segmentation: Identify the components that comprise the pipeline being assessed
    • A new component is needed for every significant change in the pipeline’s current and historical construction/operating/maintenance practice and every significant[3] change in the pipeline’s surroundings.
  2. Exposure: Estimate each component’s unmitigated exposure from each threat, recognizing the two types of exposure
    • Degradation rate from time-dependent failure mechanisms
    • Event rate from time-independent failure mechanisms.
  3. Mitigation: Estimate effect of each mitigation measure for each component’s threats
    • Identify all mitigation measures
    • Rate effectiveness of each
    • Combine and apply estimates to appropriate exposures.
  4. Resistance: Estimate each component’s resistance to failure from each mitigated exposure
    • Theorize amount of resistance available in the absence of defects
    • Estimate the role of possible defects present in each component, considering rates of defect emergence and age and accuracy of all inspections and integrity assessments.
  5. PoF: Calculate PoF from each threat
    • Risk Triad: combine Exposure, Mitigation, Resistance
    • Estimate TTF and then PoF for time-dependent failure mechanisms
    • Estimate PoF for time-independent failure mechanisms
    • Combine all PoFs.
  6. Calculate CoF for each component, based on the desired level of conservatism and:
    • Possible failure scenarios
    • Possible damages from each scenario.
  7. Combine PoF and CoF into a risk estimate for each component. Combine component risk estimates as needed.

These steps show the minimum amount of inputs and analyses necessary to produce plausible estimates of risk along a pipeline. Experience has shown that any threat can independently dominate the actual risk. Therefore, each warrants consideration and should be documented in the assessment, even if only a cursory level of effort can be applied to generate initial estimates.

Implicit in these steps is the initial recognition that a pipeline (or pipeline station or any other portion of a pipeline system) is a collection of components. Each component will contribute to the risk associated with the whole collection. Each component is exposed to threats from its immediate surroundings. These normally include corrosion, external forces, and others. Each component also generates some amount of consequence potential to its surroundings. This is the reality that should be captured in any risk assessment. See segmentation discussions.

Even the most rudimentary risk assessment needs to acknowledge the individual components that comprise the pipeline system and their individual surroundings. To do this, a list of components is needed. This can be very detailed or, at the other extreme, very generalized.

As described above, for each component, three inputs are needed to characterize each plausible threat (failure mechanism). Each component also requires one input for consequence potential. These four component-specific inputs are best obtained by examination of all of the pertinent underlying features but can be simply assigned a preliminary general estimate, pending the deeper analyses. In a very rudimentary assessment, the four ingredients are directly input for each component based perhaps solely on SME judgment.
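The minimum data structure implied by these steps can be sketched in code. This is only a sketch under the assumptions stated above (three PoF inputs per threat, one CoF input per component), with all names illustrative rather than prescribed by this text.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """The three PoF inputs (the risk triad) for one failure mechanism."""
    exposure: float        # mpy (time-dependent) or events/year (time-independent)
    mitigation: float      # fraction of exposure prevented, 0-1
    resistance: float      # time-dependent: effective wall (mils);
                           # time-independent: fraction of hits survived, 0-1
    time_dependent: bool

@dataclass
class Component:
    """One pipeline component/segment: several threats plus one CoF input."""
    name: str
    threats: dict
    cof_dollars_per_failure: float

def pof(t: Threat) -> float:
    unmitigated = t.exposure * (1 - t.mitigation)
    if t.time_dependent:
        ttf_years = t.resistance / unmitigated   # TTF = resistance / unmitigated rate
        return 1 / ttf_years                     # conservative PoF ~ 1/TTF
    return unmitigated * (1 - t.resistance)

def expected_loss(c: Component) -> float:
    """Risk (EL) = PoF x CoF; a simple sum is adequate for small PoFs."""
    return sum(pof(t) for t in c.threats.values()) * c.cof_dollars_per_failure
```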

Rudimentary Risk Assessment

Beyond the much-generalized risk conceptualization exercise discussed previously, a rudimentary risk assessment for a specific pipeline, or a portion of a pipeline under specific operational and maintenance protocols, can be conducted with a minimum of inputs. With fewer inputs, it will suffer from reduced accuracy; there are always trade-offs between rigor and accuracy/defensibility.

A rudimentary initial risk assessment can be created by obtaining SME estimates for each of the inputs implied in the list above and in the figure below.

Risk Assessment Structure for Each Failure Mechanism on Each Component Assessed

With a bit of guidance, the SMEs can provide the necessary PoF inputs. Then consequence potential needs to be estimated. This may require a different SME, since the operator/maintainer, while hopefully well schooled in incident response, may have little or no experience with consequence valuations. The kinds of damage scenarios potentially created are first identified by an appropriate SME. These will fall into one or more of the categories of thermal (fire and explosion), toxicity (including pollution damages), and mechanical (the non-explosion phenomena associated with pressurized components). Then, the receptors potentially exposed to these scenarios are characterized. Receptors include people, property, environment, commercial activities, service interruption, and others, depending on the scope of the risk assessment.

Example 1.1

An assessor thinks that a portion of a pipeline system can be characterized by four different combinations of pipe characteristics, soil types, product corrosivities, potential excavation activities, and nearby population densities. He creates groupings using these parameters. The groups will serve as surrogates for the segments that actually exist. In other words, prior to the full solution of a dynamically segmented pipeline with risk estimates for each segment, he is employing a shortcut by modeling the risks in terms of four general combinations of characteristics occurring along this pipeline.

He models each 'segment' as being exposed to 4 general types of failure mechanisms, requiring 12 PoF inputs per segment: 4 segments x 4 failure mechanisms per segment x 3 PoF inputs per failure mechanism = 48 inputs as the minimum requirement for a PoF estimate representing the threats to all segments. He also needs an estimate of CoF for each segment, A through D, for a total of 52 inputs. He builds a framework to capture the needed inputs and calculations:

Sample Rudimentary P90+ Risk Assessment, Part 1: Structure

| Threat | Element | Units | A | B | C | D |
|---|---|---|---|---|---|---|
| Ext Corr | PoF | failures/year | | | | |
| Ext Corr | Exposure | mpy | | | | |
| Ext Corr | Mitigation | % | | | | |
| Ext Corr | Resistance | inches (effective wall) | | | | |
| Int Corr | PoF | failures/year | | | | |
| Int Corr | Exposure | mpy | | | | |
| Int Corr | Mitigation | % | | | | |
| Int Corr | Resistance | inches (effective wall) | | | | |
| External Force | PoF | failures/year | | | | |
| External Force | Exposure | events/year | | | | |
| External Force | Mitigation | % | | | | |
| External Force | Resistance | % | | | | |
| Human error | PoF | failures/year | | | | |
| Human error | Exposure | events/year | | | | |
| Human error | Mitigation | % | | | | |
| Human error | Resistance | % | | | | |
| PoF total | | failures/year | | | | |
| CoF | | $1,000s/failure | | | | |
| Risk (EL) | | $/year | | | | |
| Total | | $/year | | | | |

From a properly structured SME team meeting, the assessor now populates the inputs for each risk element based on the team’s judgment and specific knowledge of each pipeline segment assessed. Their inputs have a targeted P90 level of conservatism—i.e., they provide values that most likely overstate the actual risk.

Sample Rudimentary P90+ Risk Assessment, Part 2: Inputs

| Threat | Element | Units | A | B | C | D |
|---|---|---|---|---|---|---|
| Ext Corr | PoF | failures/year | | | | |
| Ext Corr | Exposure | mpy | 16 | 8 | 8 | 12 |
| Ext Corr | Mitigation | % | 0.9 | 0.9 | 0.9 | 0.9 |
| Ext Corr | Resistance | inches (effective wall) | 0.25 | 0.375 | 0.375 | 0.25 |
| Int Corr | PoF | failures/year | | | | |
| Int Corr | Exposure | mpy | 0.1 | 0.1 | 4 | 2 |
| Int Corr | Mitigation | % | 0.5 | 0.5 | 0.5 | 0.5 |
| Int Corr | Resistance | inches (effective wall) | 0.25 | 0.375 | 0.375 | 0.25 |
| External Force | PoF | failures/year | | | | |
| External Force | Exposure | events/year | 2 | 5 | 0.2 | 0.5 |
| External Force | Mitigation | % | 0.95 | 0.95 | 0.95 | 0.95 |
| External Force | Resistance | % | 0.9 | 0.95 | 0.95 | 0.9 |
| Human error | PoF | failures/year | | | | |
| Human error | Exposure | events/year | 0.1 | 0.1 | 0.1 | 0.1 |
| Human error | Mitigation | % | 0.99 | 0.99 | 0.99 | 0.99 |
| Human error | Resistance | % | 0.9 | 0.9 | 0.9 | 0.9 |
| PoF total | | failures/year | | | | |
| CoF | | $1,000s/failure | $50 | $200 | $50 | $50 |
| Risk (EL) | | $/year | | | | |
| Total | | $/year | | | | |

Having obtained the needed inputs, the assessor then uses simple equations, discussed in this text, to arrive at preliminary risk estimates for each component. The simple equations used are summarized as follows (and detailed in later chapters):

Risk = Expected Loss (EL) = PoF x CoF

PoF_time-independent = exposure x (1 – mitigation) x (1 – resistance)

PoF_time-dependent = ƒ (Time-to-Failure, TTF)

TTF = resistance / [exposure x (1 – mitigation)]
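As a check on these equations, the sketch below reproduces segment A from the tables above. Corrosion resistance is read here as effective wall thickness (0.25 in. = 250 mils) and CoF as thousands of dollars per failure; both readings are inferred because they make the tabulated results reproducible, not because the source states them explicitly.

```python
# Segment A, using values from the inputs table above.
pof_ext_corr = (16 * (1 - 0.90)) / 250        # 1/TTF: TTF = 250 mils / 1.6 mpy = 156 yr
pof_int_corr = (0.1 * (1 - 0.50)) / 250       # -> 0.0002
pof_ext_force = 2 * (1 - 0.95) * (1 - 0.90)   # time-independent form: 0.010
pof_human = 0.1 * (1 - 0.99) * (1 - 0.90)     # -> 0.0001

pof_total = pof_ext_corr + pof_int_corr + pof_ext_force + pof_human
cof = 50_000                                  # '$50' read as $1,000s per failure
print(round(pof_total, 3))                    # -> 0.017 failures/year
print(round(pof_total * cof))                 # -> ~$835/year expected loss
```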

Sample Rudimentary P90+ Risk Assessment, Part 3: Results

| Threat | Element | Units | A | B | C | D |
|---|---|---|---|---|---|---|
| Ext Corr | PoF | failures/year | 0.006 | 0.002 | 0.002 | 0.005 |
| Ext Corr | Exposure | mpy | 16 | 8 | 8 | 12 |
| Ext Corr | Mitigation | % | 0.9 | 0.9 | 0.9 | 0.9 |
| Ext Corr | Resistance | inches (effective wall) | 0.25 | 0.375 | 0.375 | 0.25 |
| Int Corr | PoF | failures/year | 0.0002 | 0.0001 | 0.005 | 0.004 |
| Int Corr | Exposure | mpy | 0.1 | 0.1 | 4 | 2 |
| Int Corr | Mitigation | % | 0.5 | 0.5 | 0.5 | 0.5 |
| Int Corr | Resistance | inches (effective wall) | 0.25 | 0.375 | 0.375 | 0.25 |
| External Force | PoF | failures/year | 0.010 | 0.013 | 0.001 | 0.003 |
| External Force | Exposure | events/year | 2 | 5 | 0.2 | 0.5 |
| External Force | Mitigation | % | 0.95 | 0.95 | 0.95 | 0.95 |
| External Force | Resistance | % | 0.9 | 0.95 | 0.95 | 0.9 |
| Human error | PoF | failures/year | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| Human error | Exposure | events/year | 0.1 | 0.1 | 0.1 | 0.1 |
| Human error | Mitigation | % | 0.99 | 0.99 | 0.99 | 0.99 |
| Human error | Resistance | % | 0.9 | 0.9 | 0.9 | 0.9 |
| PoF total | | failures/year | 0.017 | 0.015 | 0.008 | 0.011 |
| CoF | | $1,000s/failure | $50 | $200 | $50 | $50 |
| Risk (EL) | | $/year | $835 | $2,973 | $403 | $570 |
| Total | | $/year | $4,782 (all segments) | | | |

The various combinations of PoF and CoF yield differing risks for each segment. Armed with these estimates, the decision-makers now move into a risk management phase. This phase will often include improving upon the initial risk estimates, either with deeper analyses or with actual inspections, surveys, investigations, and tests.

In a short period of time, the assessor has produced a rudimentary estimate of risk, documenting key inputs and intermediate calculations associated with the estimate. Furthermore, he has established a framework from which the subsequent robust risk assessment can emerge.

Each of his preliminary inputs can now be reviewed and revised in light of appropriate additional inputs from measurements and investigations. For instance, he may use actual soil resistivity measurements to better estimate exposure rates (mpy) for external corrosion, creating additional components (segments) where the new data captures changes along the route. Similarly, he can use depth-of-cover surveys to modify mitigation estimates for external forces. He can consult previous HAZOP studies to improve his human error inputs. He can use ILI results for improved resistance estimates. He may choose to focus additional analysis where it is warranted, for example, on rare but sometimes critical phenomena such as AC-induced corrosion, landslide potential, SCC, etc. There are countless ways to continuously improve the assessment without changing any aspect of the underlying methodology.

Better Pipeline Risk Assessment

The previous section described a simple risk assessment application that employed a short-cut solution—using a surrogate instead of dynamically segmenting a pipeline and using only a few inputs. While it illustrates the framework of good risk assessment, this short cut compromises the risk assessment and should only be used for limited applications and under special circumstances.

Risk assessment on any facility is most efficiently done by first dividing the facility into components with unchanging risk characteristics. For a cross-country pipeline, this involves collecting data on all portions of the pipeline and its surroundings and then using this data to ‘dynamically segment’ the pipeline into segments of varying length. Risk algorithms are applied to each of the segments, producing risk estimates that truly reflect changing risks along the pipeline.
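As a minimal sketch of dynamic segmentation (all data layers, stations, and values below are illustrative assumptions, not from the source), the pipeline is broken wherever any data layer changes value, so every resulting segment has unchanging risk characteristics:

```python
# Sketch of 'dynamic segmentation': break the line wherever any data layer changes.
# Each layer lists (begin_station_ft, value) records.
layers = {
    "soil_corrosivity_mpy": [(0, 16), (5280, 8)],
    "population_density":   [(0, "low"), (3000, "high"), (9000, "low")],
}

stations = sorted({begin for layer in layers.values() for begin, _ in layer})
segments = list(zip(stations, stations[1:] + [12000]))   # assumed end-of-line station

print(segments)  # -> [(0, 3000), (3000, 5280), (5280, 9000), (9000, 12000)]
```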

The risk estimating algorithms are conceptually very straightforward. However, as with any assessment of a complex mechanical system installed in a varying, natural environment, there are many details to consider. This is illustrated by an example risk assessment on a hypothetical pipeline.

Varying levels of analysis rigor are available to risk assessors. For example, a resistance estimate might be modeled as simply being related to stress level and pipe characteristics or, for more robust analysis, could include sophisticated finite element analyses. In the following example, details are omitted in order to better demonstrate the higher-level principles.

Example 2

To illustrate key concepts, one time-independent failure mechanism (third party damage) and one time-dependent failure mechanism (external corrosion) are assessed in this example. All other failure mechanisms will follow one of these two forms. Estimates from all failure mechanisms can be combined to meet the needs of the subsequent risk management processes.

A 120 mile pipeline is to have a risk assessment performed. For the assessment, failure is defined as loss of integrity leading to loss of pipeline product. Consequences are measured as potential harm to public health, property, and the environment and are expressed in units of dollars of loss; i.e., in this example, all consequences are monetized.

Verifiable measurement units for the assessment are as follows:

| MEASUREMENT | UNITS |
|---|---|
| Risk | $/year |
| Probability of Failure (PoF) | failures/mile-year |
| Consequence of Failure (CoF) | $/failure |
| Time to Failure (TTF) | years |
| Exposure | events/mile-year |
| Mitigation | % |
| Resistance | % |

Data is collected, including Subject Matter Expert (SME) estimates where actual data is unavailable. The integrated data shows changes in risk along the pipeline route: 6,530 segments are created by the changing data along the 120 mile pipeline, for an average segment length of about 97 ft. This relatively short average length indicates that a risk profile with adequate discrimination has been generated (as long as no individual segment is excessively long).

A target level of conservatism is defined as P90 for all inputs that are not based on actual measurements. This is conservative: a bias towards overestimation of actual risks. P90 means that risk is underestimated by only about one of every 10 inputs; i.e., there will be a negative surprise only 10% of the time. The risk assessors have chosen this level of conservatism to account for plausible (albeit extreme) conditions and to ensure that risks are not underestimated.

For assessing PoF from time-independent failure mechanisms—those that do not worsen over time, such as third party damage and human error—the summary equation is as follows:

PoF_time-independent = exposure x (1 – mitigation) x (1 – resistance)

As an example of applying this to PoF due to time-independent third-party damage, the following inputs are identified (by SMEs) for a certain one-mile portion of the subject pipeline.

  • Exposure (unmitigated ‘attack’) is estimated to be three (3) third-party damage events per mile-year. This means that, over this mile of pipeline, excavators are expected three times per year and, in the absence of mitigation, would damage the pipeline three times per year.
  • Using a mitigation (defense) effectiveness analysis, SMEs estimate that 1 in 50 of these exposures will not be successfully prevented by existing mitigation measures. This results in an overall mitigation effectiveness estimate of 98% mitigated.
  • SMEs perform a resistance analysis and estimate that, of the exposures that are not mitigated, 1 in 4 hits on the pipeline will result in immediate failure, not just damage. This estimate includes the possible presence of weaknesses due to threat interaction and/or manufacturing and construction issues. So, the pipeline in this area is judged to have a 75% resistance to failure (survivability) from excavators, given the failure of mitigations.

Assuming that frequencies and probabilities are practically interchangeable, these inputs result in the following assessment:

PoF_third-party damage

= (3 damage events per mile-year) x (1 – 98% mitigated) x (1 – 75% resistive)

= 1.5% (0.015) per mile-year

(a failure every 67 years along this mile of pipeline)

Note that a useful intermediate calculation, ‘probability of damage’ (but not failure), emerges from this assessment and can be verified by future inspections.

(3 damage events per mile-year) x (1 – 98% mitigated)

= 0.06 damage events/mile-year

(damage occurring about once every 17 years).

This same approach is used for other time-independent failure mechanisms and for all portions of the pipeline.

In assessing PoF due to time-dependent failure mechanisms (corrosion and cracking), the previous algorithm is slightly modified:

PoF = [exposure x (1 – mitigation)] / resistance

Modelers have chosen to examine only leak potential to begin, ignoring rupture potential for now. A conservative relationship between TTF and PoF is chosen.

PoF_time-dependent = ƒ (Time-to-Failure, TTF)

TTF = resistance / [exposure x (1 – mitigation)]

To continue the example, SMEs have determined that, at certain locations along the 120 mile pipeline, soil corrosivity leads to 5 mpy of external corrosion exposure (if left unmitigated). Analysis of coating and CP effectiveness leads SMEs to assign a mitigation effectiveness of 90%.

Recent inspections, adjusted for uncertainty and considering possible era-of-manufacture weaknesses, result in an effective pipe wall thickness estimate of 0.220” (remaining resistance). Use of these inputs in the PoF assessment for the next year is shown below:

TTF = 220 mils / [5 mpy x (1 – 90%)] = 440 years

PoF = 1 / TTF = [5 mpy x (1 – 90%)] / 220 mils = 0.22% PoF

So, the combined PoF from these two threats—third party excavators and external corrosion—is estimated to be 0.015 + 0.0022 = 0.017 failures/mile-year. This 1.7% failure probability can now be used with estimates of consequence potential to arrive at overall risk estimates generated by these two threats.
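A minimal sketch of these two per-mile estimates, using the example's own numbers:

```python
# Third-party damage (time-independent), per mile-year:
pof_tpd = 3 * (1 - 0.98) * (1 - 0.75)    # 0.015 failures (one per ~67 years)
damage_rate = 3 * (1 - 0.98)             # 0.06 damage events (one per ~17 years)

# External corrosion (time-dependent):
ttf_years = 220 / (5 * (1 - 0.90))       # 440 years to consume 220 mils
pof_corr = 1 / ttf_years                 # ~0.0023 (~0.22%)

print(round(pof_tpd + pof_corr, 3))      # -> 0.017 failures/mile-year combined
```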

SMEs have analyzed potential scenarios and determined the range of possible consequences generated by a failure. After assignment of probabilities to each scenario, a point estimate representing the distribution of all future scenarios yields a value of $18,500 per failure. This can be thought of as a probability-adjusted ‘average’ consequence per failure.

Risk assessors similarly calculate all risk elements for each of the 6,530 segments. To estimate PoF for any portion of the 120 mile pipeline, a probabilistic summation is used to ensure that length effects and the probabilistic nature of estimates are appropriately considered. To estimate total risk, an expected loss calculation for the full 120 miles yields $25,200 of risk exposure from this pipeline per year of operation. The average is $210/mile-year.
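The text does not specify the form of the probabilistic summation; one common form, assuming independence among segments, is sketched below.

```python
import math

def combined_pof(segment_pofs):
    """P(at least one failure) = 1 - product(1 - p_i), assuming independent segments.
    For small p_i this approaches the simple sum, but it can never exceed 1.0."""
    return 1 - math.prod(1 - p for p in segment_pofs)

# e.g., 100 segments each with PoF = 0.001/year:
print(combined_pof([0.001] * 100))   # -> ~0.095, slightly below the naive sum of 0.100
```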

Risk Management

The risk estimates generated in this way are extremely useful to decision makers. Such estimates can become part of the budget setting and valuation processes. In this example, the company first uses these values to compare to, among other benchmarks, a value based on a recent US national average for similar pipelines of $350/mile-year. The comparison needs to consider the P90 level of conservatism employed. Often, a P90 or higher level of conservatism is appropriate for determining risk management on specific pipeline segments, but it will not compare favorably to historical incident data, since such data generally reflect P50 estimates.

Understanding how each pipeline segment contributes to the overall risk sets the stage for efficient risk management. This is the role of the profile.

For risk management at specific locations, the costs and benefits of various risk mitigation measures can be compared by running ‘what if’ scenarios using the same equations with the anticipated mitigation effectiveness arising from the proposed action(s).
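A sketch of such a 'what if' comparison, reusing the earlier example's inputs (the proposed 99% effectiveness is an illustrative assumption, and any measure's cost would be weighed against the computed benefit):

```python
exposure, resistance, cof = 3.0, 0.75, 18_500   # events/mile-yr, survivability, $/failure

def el_per_mile_year(mitigation: float) -> float:
    """Expected loss = exposure x (1 - mitigation) x (1 - resistance) x CoF."""
    return exposure * (1 - mitigation) * (1 - resistance) * cof

current = el_per_mile_year(0.98)     # $277.50/mile-year
proposed = el_per_mile_year(0.99)    # $138.75/mile-year
print(current - proposed)            # ~$139/mile-year of risk reduction
```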

These estimates can also be used to establish ‘safe enough’ limits by comparing to pre-determined risk acceptability criteria such as those discussed in Chapter 13.

Figure: Changing Risk Along a Segmented Pipeline (EL in $/yr)

Values Shown are Samples Only

To help with understanding and preparation of both preliminary and complete risk assessments, this text offers sample valuations for many pipeline risk factors. As with any engineered system (the risk assessment system described herein employs many engineering principles), a degree of caution and due diligence is warranted. The experienced pipeline operator should challenge the example value assignments offered: Do they match your operating experience in general? Are they appropriate for the subject component being assessed? Read the reasoning behind all valuations: Do you agree with that reasoning? Invite (or require) input from employees at all levels. Is there more definitive or more recent data suggesting alternative valuations?

  1. Taleb, N.N., 2010, The Black Swan: The Impact of the Highly Improbable, Random House Trade Paperbacks.

  2. Actually, more of a ‘composite’ performance, since the vast majority of pipeline miles have incident rates and losses much lower than implied by an average.
  3. Significant from a risk standpoint; i.e., anything that can impact the probability of failure or the consequences should a failure occur.