
Mechanics of RA

Here we examine the basic steps in conducting a risk assessment once sufficient information has been gathered and made available to a set of risk assessment algorithms: that is, how to actually run the assessment and produce meaningful results. This assumes that risk-calculating algorithms and risk data have been collected or created and that the user is ready to begin producing risk estimates.

While risk assessment could be performed with pencil and paper, much efficiency, resolution, and robustness are gained by using computers. The steps outlined here focus on the DIY (do-it-yourself) approach, using common computer platforms and tools rather than specialized software.

How to Perform Risk Assessment: Steps in Conducting the Risk Assessment

Here are the general steps in risk assessment. Note that the ‘software’ discussion is listed last: software should be a consideration only after the basic processes are understood. Too many risk assessment projects have foundered due to a premature focus on software.

For a simple, beginning-to-end example of a modern RA, see this Overall Example.

For a full reading related to this topic, see Ch3 for general guidance and Ch4 for details on the mechanics.

See also Risk Analysis Tools, Myths and Misconceptions

Information Identification

A great deal of information is usually available in a pipeline operation. Information that can routinely be used to create and update the risk assessment typically includes:

      • All survey results such as pipe-to-soil voltage readings, leak surveys, patrols, depth of cover, population density, etc.
      • Documentation of all repairs
      • Documentation of all inspections, especially excavations
      • Operational data including pressures and flow rates
      • Results of integrity assessments
      • Maintenance reports
      • Updated potential consequence information (flow hydraulics, pathways, etc)
      • Updated receptor information—new housing, high occupancy buildings, changes in population density or environmental sensitivities, etc.
      • Results of root cause analyses and incident investigations
      • Availability and capabilities of new technologies.

All information needs to be integrated onto a common alignment for use in a risk assessment.

There are also opportunities to use the same piece of information in multiple ways. Let’s say you know something simple about the soil type—where it’s rocky and where it’s mostly sand or clay. Most would agree this is information that is easy to obtain. So let’s examine the surprisingly long list of risk implications that could be associated with this simple data set. Some of the risk factors that can be strongly influenced by soil type include:

      • Potential soil moisture content, impacting corrosivity estimate
      • Likelihood of past coating damages during installation
      • Propensity of future coating damages to occur
      • Dispersion of liquid spills—infiltration rates, surface flow, etc
      • Amount of potential harm to certain receptors (for example, aquifers vs surface receptors)
      • Exposure to third party excavation damages
      • Exposure to certain geotechnical phenomena (for example, subsidence, shrink/swell, landslide, etc)

Perhaps you can think of more. The point is that you may have more information than you first thought. In this example, a single piece of information—a simple soil characteristic, rock vs clay—has influenced seven different risk variables.

Another example of multiple uses of information is product flowrate. Flowrate tells us something about four different kinds of exposures (threats): internal corrosion, erosion, surge, and waterhammer.

See also, Myths and Missteps, especially the “I don’t have enough data” myth.

How to Place Information onto a Common Framework: Data Alignment

A modern pipeline RA will use a centerline and measures (or stationing) as the mechanism for aligning and, eventually, integrating all information. This allows the efficient use of data in many forms, including maps and GIS-centric information.

Once all information is on a common ‘platform’ (the centerline), an Event table should be built.

Special care around surveys and inspections is warranted. Odometer readings and even GPS coordinates (lat-longs, x-y’s, etc.) are not substitutes for centerline measures. Measures are distinct from stationing in that they are continuous and can be used directly to calculate distances/lengths. Stationing is often interrupted by station equations, requiring additional calculations to arrive at distances/lengths, ie, centerline measures.
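To make the stationing-vs-measures distinction concrete, here is a minimal Python sketch of converting stationing to continuous centerline measures across station equations. The equation points and footages below are hypothetical, not from any real pipeline.

```python
# Sketch: converting stationing (interrupted by station equations) into
# continuous centerline measures. All station values are hypothetical.

# Station equations in downstream order; each pairs the 'back' and 'ahead'
# station values that label the same physical point on the centerline.
station_equations = [
    (105_000.0, 100_000.0),  # stationing drops back 5,000 ft here
    (250_000.0, 248_000.0),  # stationing drops back 2,000 ft here
]

def cumulative_offsets(equations):
    """Offset to add to a station value in each stationing regime.

    Regime 0 is upstream of the first equation; regime i lies between
    equation i-1 and equation i."""
    offsets = [0.0]
    for back, ahead in equations:
        offsets.append(offsets[-1] + (back - ahead))
    return offsets

def station_to_measure(regime, station, equations):
    """Continuous centerline measure for a station in the given regime."""
    return station + cumulative_offsets(equations)[regime]

# Distance across an equation: simple subtraction of measures now works.
length = station_to_measure(1, 110_000.0, station_equations) \
    - station_to_measure(0, 95_000.0, station_equations)  # 20,000 ft
```

Once every survey, inspection, and attribute record is expressed in these continuous measures, lengths and overlaps can be computed by plain subtraction, which is the property stationing alone lacks.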

How to Handle Data Gaps: Assignments and Defaults

A data gap may not be a knowledge gap. Provisions can be made to incorporate knowledge when that knowledge does not yet reside in a database.

While we will never have all the information we want, an absence of numerical values cannot be tolerated in the calculation part of the risk assessment. The computer will not know what to do when attempting to calculate with a null or blank value.

So there has to be some numerical value for every piece of information sought at every location along the pipeline. These numerical values can be efficiently established wholesale–they do not have to be hand-entered for hundreds or thousands of locations. The recommendation is to use assignments and defaults to fill information gaps in an efficient, semi-automatic process. In a computer environment, the model first seeks the information in a database table. If not found there, it looks for an assignment. If no assignment, it uses the default value as a last resort.

This three-tiered approach has proven to be very efficient and useful. It establishes a hierarchy of information, where the tabulated, location-specific data is normally the most accurate and granular. That is why the model first seeks data from that source. If no appropriate data table covering the subject location can be found, then the model ‘knows’ that another source must be sought–the more general information called ‘assignments’. Finally, having exhausted the more reliable sources of good information, the model recognizes that it must use an often wildly conservative value called a ‘default’.
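In a spreadsheet or database environment, this hierarchy is simply an ordered lookup. A minimal Python sketch follows; the variable names, intervals, and values are illustrative assumptions, not prescriptions.

```python
# Sketch of the three-tiered lookup: data table -> assignment -> default.
# Variable names, intervals, and values are illustrative only.

data_tables = {
    # location-specific values as (begin, end, value) intervals in
    # centerline measures (ft)
    "depth_of_cover": [(0.0, 1200.0, 36.0), (1200.0, 1500.0, 30.0)],
}

assignments = {
    # SME-supplied value applied wherever no table covers the location
    "depth_of_cover": 24.0,
}

defaults = {
    # deliberately 'painful' conservative values of last resort
    "depth_of_cover": 12.0,
    "landslide_events_per_mile_year": 1.0,
}

def get_value(variable, measure):
    """Input value at a centerline measure, preferring the best source."""
    for begin, end, value in data_tables.get(variable, []):
        if begin <= measure < end:
            return value                      # tier 1: tabulated data
    if variable in assignments:
        return assignments[variable]          # tier 2: SME assignment
    return defaults[variable]                 # tier 3: conservative default
```

A request beyond the tabulated intervals, such as `get_value("depth_of_cover", 2000.0)`, falls through to the assignment; a variable with neither table nor assignment, such as the landslide exposure here, returns its painful default.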

Here is the background. Any gaps in information must be filled prior to calculating risk values. Typical gaps could be lack of information regarding the depth of cover or coating condition on an older pipeline. To fill the knowledge gaps, the risk assessor must select some input value that is consistent with the desired level of conservatism of the assessment. Each event along the pipeline must have an assigned attribute – a value must be provided for the missing data. This is efficiently done in the two alternative sources (after tables of data) noted above. In the first, values are assigned based on SME knowledge of a specific region or system characteristics. For example, hurricane damage potential in Aspen, Colorado, US can confidently be assigned very low probabilities by SMEs, as can frost heave phenomena in the islands of the Caribbean.

Assignments are also used to avoid extra database content, thereby making file sizes smaller and risk assessment processing more efficient. For instance, there will likely be database tables showing locations of roads and test leads, as two common risk assessment inputs. Rather than also build tables of ‘not roads’ and ‘not test leads’–ie, all the locations where there is not a road and not a test lead–an assignment can be used. Since the risk model first uses data tables and then looks for assignments when the data table does not cover a length of pipeline, the assignment will effectively fill the gaps left by the data tables.

In the final phase of gap-filling, values must sometimes be assigned in the absence of any available SME information. For instance, until an SME is able to say that landslides will not happen along a stretch of pipeline, then a very conservative default—perhaps 1 to 10 landslides per year for every mile of pipe—should be assigned as an exposure in a conservative risk assessment. After all, if no SME can say such numbers are not possible, then the assessment, especially the P90+ assessments, must assume that they are plausible.

Note that defaults should be relatively rare. Hopefully, pipeline owner/operators will have at least some notion of threats and be able to roughly quantify values. If not, then defaults are needed and should be ‘painful’ values–ie, showing unreasonable levels of risk. This is intentional. If no SME or data source can offer any alternative value, then it is prudent to assume that this issue is completely unknown, unconsidered, and therefore potentially very dangerous.

This two-step gap-filling approach completes a hierarchy of data input into the assessment, as shown by the following list:

      1. Location-specific data measurements–database tables.
      2. Location-specific data estimates–database tables.
      3. Values assigned to fill gaps in database tables.
      4. Values assigned to general areas by SMEs.
      5. Conservative defaults to be used when no other info is available.

These are listed in order of progressive uncertainty, with defaults carrying the highest level. Defaults are the values assigned in the absence of any other information. There are implications in the choice of default values, and an overall risk assessment default philosophy should be established.

It is not possible to assign a default to all variables. A small number of inputs cannot reasonably be assumed: pipe diameter and type of product are examples. Here, the missing data should lead to a non-assessed segment.

All assignments and defaults should be maintained in easy-to-manage list(s). This makes the process of retrieving, comparing, modifying, and maintaining the defaults and assignments simpler. Establishing these gap-filling values might be governed by rules based on other pertinent information. These rules should be documented as part of the algorithm and data-input documentation. They can infer the value to be used based on some associated information. Conditional statements (“if X is true, then Y” or “select case” or various lookup table functions) are especially useful. For example, the numerical equivalents of statements such as these may be used to assign values when direct information is unavailable:

If (land-use type) = “residential high” then (population density) = 22 persons/acre

If (pipe date) < 1970 AND (seam type) = “ERW” OR “unknown” then (pipe manufacture) = “LF ERW”[1]

Other special equations by which defaults will be assigned may also be desired.
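Rule statements like those above translate directly into code. A hedged Python rendering, reusing the article’s illustrative thresholds and labels, might look like this; returning `None` means ‘leave the gap for the assignment/default tiers to fill’.

```python
# Conditional assignment rules rendered in Python. Thresholds and labels
# follow the article's illustrative examples; None = no rule applies.

def population_density(land_use_type):
    """Persons/acre inferred from land-use type, when unmeasured."""
    if land_use_type == "residential high":
        return 22.0  # persons/acre
    return None

def pipe_manufacture(pipe_date, seam_type):
    """Infer suspect pipe manufacture from vintage and seam type."""
    if pipe_date < 1970 and seam_type in ("ERW", "unknown"):
        return "LF ERW"  # suspect low-frequency ERW seam
    return None
```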

When event frequencies are to be assigned for events that have never occurred, a useful exercise may be to quantify the intuitive ‘test of time’ aspect. That is, if x miles of pipeline have existed for y number of years and the subject event has never occurred, this is useful evidence. Absent any other information, it can be assumed that if the event were to occur now, the historical rate thus created represents a useful predictive rate, at some PXX level of conservatism.

For example, an evaluation team wishes a quick, initial risk assessment and seeks the frequency of ground subsidence events along a pipeline. They believe that the land above their 200 miles of pipeline in this area has never shown any indication of land subsidence in the 20 years the pipeline has existed. Were subsidence to occur somewhere along the pipelines now, the frequency of occurrence could be estimated to be 1 event per (200 miles x 20 years) = 0.00025 events/mile-year. Pending the acquisition of better information—perhaps via soils analyses and geotechnical calculations—the team chooses to use this value for their P70 estimate in this initial risk assessment. Given that other threats to system integrity may have estimates that far surpass this value, additional analyses to produce a better estimate may never be warranted. The team could decide that this rough estimate alone is sufficient, unless some future evidence emerges suggesting the need for a better evaluation. This, in itself, is another exercise in risk management—choosing where resources are best applied.
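The arithmetic of such a ‘test of time’ estimate can be checked directly; the mileage and years below are the example figures.

```python
# 'Test of time' frequency estimate: 200 miles of pipeline, 20 years
# of incident-free history, one assumed event occurring now.
miles = 200.0
years = 20.0
events = 1.0

rate = events / (miles * years)  # events per mile-year
print(rate)  # 0.00025
```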

Conservatism in assigning gap-filling values will be appropriate in most risk assessments. A danger in assigning non-conservative values is that the underlying knowledge gaps are no longer visible to risk managers. The values are discovered to be non-conservative only when an incident happens. At that point, many outside parties will legitimately question the value of an assessment that does not cause gaps in knowledge to be highlighted (ie, via use of conservatism). Credibility will have been lost, in addition to the missed opportunity to better manage the risk.

Adhering to a practice of conservatism in defaults requires discipline. It is sometimes difficult to, for instance, use a default of 18” or 24” of cover for all portions of a pipeline that was installed with 36” of cover just 5 years ago. However, with a real chance that some short section has indeed lost cover, the default value reflects real uncertainty, perhaps prompting a depth of cover survey to verify the more likely 36” depth everywhere.

Segmentation

Structures and equipment of any type are actually collections of components (see segmentation and length effects). Each component plays a role in the structure’s integrity and also contributes to failure potential of the overall structure. Performing a risk assessment on any structure is therefore an exercise in assessing each component’s contribution to risk and then aggregating the results.

Components can be grouped based on type or function but their location in the structure must also be considered. For instance, in a building, a support column located on the ground floor of a tower plays a different role in risk than an identical column located on the top floor.

Pipelines too are structures that are collections of components. Analyses of long, linear assets like pipelines require special techniques for segmenting the pipeline. See full discussion on segmentation and length effects as well as on profiling.
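One common way to automate this (often called dynamic segmentation) is to break the centerline wherever any attribute interval begins or ends, so every resulting segment is internally uniform. A minimal sketch follows; the attribute intervals are hypothetical.

```python
# Sketch of dynamic segmentation: break the centerline at every point
# where any attribute changes. Attribute intervals are hypothetical.

def dynamic_segments(intervals):
    """intervals: {attribute: [((begin, end), value), ...]} on one centerline.

    Returns (begin, end) segments with no attribute change inside any one."""
    breaks = set()
    for spans in intervals.values():
        for (begin, end), _value in spans:
            breaks.update((begin, end))
    points = sorted(breaks)
    return list(zip(points, points[1:]))

intervals = {
    "soil": [((0.0, 800.0), "clay"), ((800.0, 1500.0), "rock")],
    "coating": [((0.0, 1200.0), "FBE"), ((1200.0, 1500.0), "coal tar")],
}
segments = dynamic_segments(intervals)
# [(0.0, 800.0), (800.0, 1200.0), (1200.0, 1500.0)]
```

Each resulting segment can then be scored with the risk algorithms and the results aggregated back up to the whole pipeline, mirroring the component-by-component view of structures described above.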


Published in Risk Modeling