
Mechanics of Risk Assessment – A Sample Illustration

Focusing on a single threat – external corrosion – to illustrate the overall process, this example begins with a sketch, then covers data collection and management and dynamic segmentation, and finally produces risk estimates.

For a complete discussion, see Mechanics of Risk Assessment.

…finally, I hope you take away the sense that we’re doing a pretty high-tech solution basically on ‘the back of an envelope’. It only took us a few minutes and a few pieces of data to produce credible, defensible estimates of external corrosion.

Obviously there’s a lot of detail here that we’ve skipped over, but it should be apparent that this process is not overly complex: it is a clear and methodical process that we can set up in a computer, and the computer can keep the risk assessment evergreen for us with very little effort…

A quick view of the whole process

Transcribed

(warning: dictation errors)

Let’s demystify this whole risk assessment process by going through a quick overall example – a very simple, single-threat example, but one that is going to illustrate all the important pieces of a pipeline risk assessment.

We need to start with a sketch, and let’s borrow some terminology from the GIS world: let’s call the pipeline itself a centerline, a representation of the pipeline on the planet. In our sketch, we have some population through which the pipeline is passing, and then we have a change of soil: maybe the pipeline is going offshore, maybe it’s going into a swamp, but there’s some kind of a land change right at this location.

Let’s begin by inventorying what information we have based on this sketch. We need to tabulate start and stop points for the population, for the pipe specification, and for that soil type indication.

We know that from kilometer 5 to kilometer 7 our population is at 100,000; otherwise it’s at 10,000. From kilometer 8 to 18 we have 0.5 inch wall thickness; everywhere else it’s only 0.25 inches. And from kilometer 15 to the end (let’s call the end 20) we have a soil corrosivity of 10 mils per year; otherwise it’s 5 mils per year.

We’ve got a population-related consequence potential ranging from $10,000 to $100,000; we’re using the population density as a surrogate for the dollars-per-incident consequence potential.

Let’s say that we’ve also consulted our corrosion department, and they’ve gone through an analysis and tell us that the normal defenses against external corrosion (cathodic protection and coating) provide, in their judgment, 90% mitigation effectiveness over this 20 kilometre stretch. There will be a lot of videos subsequent to this one talking about how to get numbers like this 90% effectiveness, but let’s just accept that for now.

Realistically, we want to put this into a computer, where the computer will do most of the work for us. We might be tempted to tabulate the information based on the kinds of events that we see: we have a wall thickness event, a population consequence event, a soil corrosivity event, so we might be tempted to put those events across column headings.

But bear with me here, and let’s set this table up a little bit differently. Let’s set it up in what we call an events table format, borrowing from the GIS guys the use of the term ‘event’: anything that changes along the route is an event.

Pipe wall thickness: we know that from zero to 8 kilometres we have a different wall thickness than from 8 to 18, and from 18 to 20 it is different again from 8 to 18, so we can populate those values. The event is pipe wall thickness; the code, or attribute, is the value being assigned from this location to that location for that event. Next we do the soil mils per year: we know from zero to 15 we have a certain soil condition, and then from 15 to the end we have a different soil corrosivity condition. We do the same for population, and continue on. Recall that our corrosion department told us that from zero to 20 we can use 90% mitigation effectiveness.
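To make this concrete, here is one way the events table from this example might be captured in code. This is a sketch of my own; the field names and layout are illustrative, not from the video.

```python
# Hypothetical events table for the 20 km example: one row per event,
# as (event_type, start_km, end_km, value). The "value" column is what
# the narration calls the code or attribute.
events = [
    ("wall_mils",    0,  8,    250),   # 0.25 in wall thickness
    ("wall_mils",    8, 18,    500),   # 0.50 in wall thickness
    ("wall_mils",   18, 20,    250),
    ("soil_mpy",     0, 15,      5),   # soil corrosivity, mils per year
    ("soil_mpy",    15, 20,     10),
    ("population",   0,  5,  10000),   # $/incident surrogate, not a head count
    ("population",   5,  7, 100000),
    ("population",   7, 20,  10000),
    ("mitigation",   0, 20,    0.9),   # 90% effective coating + cathodic protection
]
```

Each attribute covers the full 0–20 km route with no gaps, which is exactly what makes the table useful for the segmentation step that follows.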

We’ve got the sketch and now the table. We can put the two together, and now we can go back and forth between the sketch and the narrative information.

So we’ve now got a compiled list of everything we know about this stretch of pipeline: changes in wall thickness, changes in soil corrosivity, changes in population and consequence. And recall that our corrosion control guys told us we could use a number of 90% (again, we’ll talk about that in more detail later). This is an enormously useful table. It’s very helpful in diagnosing changes in risk, because any change in risk along the centerline of the pipeline will be prompted by a change in data, and we now have a table with all of the collected data: we know where the data changes and what data changes at each location. That’s going to be invaluable for use in risk management, which begins with diagnosing the risk issues. So our next step is to segment.

We’ve got a pipeline centerline and a lot of data associated with that centerline, but we need to break the pipeline into manageable pieces for which we can do risk calculations.

So let’s talk about that. Dynamic segmentation just means that the data itself determines where the segments start and stop. It’s the most efficient and the most comprehensive way to segment the pipeline. Artificially setting fixed lengths of segmentation is a very inefficient way to approach risk assessment for pipelines or any long linear asset, because then you’re forced to take a maximum or an average or something in each segment. With dynamic segmentation, each segment is unique from its neighbors, which automatically avoids having to compromise. So the trick to dynamic segmentation is to define all of the start and stop points of every piece of data.

So recall that we compiled all of our start and stop points in the events table. We know that the pipeline segment of interest begins at 0, and the next endpoint will be wherever any piece of data changes.
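In code, dynamic segmentation amounts to collecting every start and stop point and cutting the line at each one. A minimal sketch, with an illustrative events list and function name of my own choosing:

```python
# Minimal dynamic segmentation sketch: every start/stop point in the
# events table becomes a segment boundary.
events = [
    ("wall_mils",   0,  8,   250), ("wall_mils",   8, 18,    500), ("wall_mils",  18, 20,  250),
    ("soil_mpy",    0, 15,     5), ("soil_mpy",   15, 20,     10),
    ("population",  0,  5, 10000), ("population",  5,  7, 100000), ("population",  7, 20, 10000),
]

def dynamic_segments(events):
    # Unique, sorted boundaries; consecutive pairs are the segments.
    breaks = sorted({km for _, start, end, _ in events for km in (start, end)})
    return list(zip(breaks[:-1], breaks[1:]))

segments = dynamic_segments(events)
# boundaries at km 0, 5, 7, 8, 15, 18, 20 -> six dynamic segments
```

Note that the segment lengths fall out of the data: nothing here imposes a fixed interval.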


So on the back end, we’re going to set up a table where the events are column headings. In our simple example, we’ve only got three column headings beyond the beginning and end points: the pipe wall thickness, the soil corrosivity, and the population. If we were doing a full risk assessment we would have hundreds of columns of data, but in this simple example we’ve only got three.



The first endpoint will be where any piece of data changes. We can look at the events table, or we can look at the sketch, and we see that at station 5 (kilometer five) we have a change: from zero to five, our wall thickness is 0.25, our soil is 5, and our population is 10,000.

Our next segment has to begin where the previous segment ended, so we look at the diagonals: we compare the 5 that we ended with in the first segment and make sure that we’ve got a 5 as the beginning of the next segment. The next change is at km 7.

From km 5 to 7, looking at our events table, we see that the wall thickness is 0.25 and the soil is 5, but now the population is 100,000. This is another way to check the data: every dynamic segment must have at least one piece of data that has changed, otherwise there would be no reason for that dynamic segment. So another quality control check is to make sure at least one piece of data has changed to prompt each segment. Again, we can use the sketch or the table. Ultimately, we don’t want to do this by hand, of course; we want the computer to do it, and there are several routines you can get that will turn an events table into dynamic segments.
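That quality control rule (every dynamic segment must be prompted by at least one data change) is easy to automate. A sketch with illustrative attribute names:

```python
# QC sketch: adjacent dynamic segments must differ in at least one
# attribute, otherwise the boundary between them had no reason to exist.
def check_segments(rows):
    """rows: attribute dicts, one per dynamic segment, in station order."""
    for i, (prev, cur) in enumerate(zip(rows, rows[1:]), start=1):
        if prev == cur:
            raise ValueError(f"no data change at boundary {i}: {cur}")

# The six segments of the example pass the check:
check_segments([
    {"wall": 250, "soil": 5,  "pop": 10000},   # km 0-5
    {"wall": 250, "soil": 5,  "pop": 100000},  # km 5-7   (population changed)
    {"wall": 250, "soil": 5,  "pop": 10000},   # km 7-8   (population changed back)
    {"wall": 500, "soil": 5,  "pop": 10000},   # km 8-15  (wall changed)
    {"wall": 500, "soil": 10, "pop": 10000},   # km 15-18 (soil changed)
    {"wall": 250, "soil": 10, "pop": 10000},   # km 18-20 (wall changed)
])
```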

Some simple relationships tie our set of knowledge together. We relate the consequences of failure to the population density; recall I said earlier that the 10,000 and 100,000 values are not really counts of people residing there, but rather the amounts of potential damages per incident.

Now we need a few equations. TTF stands for time to failure. Our time to failure is a simple calculation: the pipe wall thickness divided by the mitigated corrosion rate, so mils of wall thickness divided by mils per year gives us years of time to failure. Probability of failure is going to be related to time to failure, and we can have a simple relationship for that or a complex relationship (we’ll talk more about that later). And finally, our measure of risk is going to be EL, which stands for expected loss; expected loss is just our probability times our consequences. So let’s use our dynamic segment table and these relationships and begin doing some calculations. Here are our dynamic segments, beginning and ending; we’ve got the wall thickness, the soil, the population. We can now add the mitigated mils per year, because recall our corrosion department told us we were 90% mitigated, and we know from our [exposure mitigation resistance] equations that exposure times (1 minus mitigation) is going to be our damage rate. So our soil corrosivity numbers, 5 and 10, reduced by 90%, give us the mitigated mils per year: this is how much degradation we are modeling, 0.5 mils per year and 1 mil per year.
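The relationships just stated translate directly into code. The function names here are mine; the numbers are the example’s:

```python
# Damage rate = exposure * (1 - mitigation); TTF = wall thickness / damage rate.

def mitigated_mpy(soil_mpy, mitigation=0.9):
    """Mitigated corrosion rate, mils per year."""
    return soil_mpy * (1 - mitigation)

def ttf_years(wall_mils, soil_mpy, mitigation=0.9):
    """Time to failure: mils of wall / mils per year = years."""
    return wall_mils / mitigated_mpy(soil_mpy, mitigation)
```

With the example’s inputs, `mitigated_mpy(5)` gives roughly 0.5 and `mitigated_mpy(10)` roughly 1.0 mils per year, matching the narration.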

We can add another column for TTF; the time to failure units are years. Recall our relationship: the pipe wall thickness divided by the mitigated corrosion rate. So a 250 mil wall thickness corroding at 0.5 mils per year gives us a 500 year time to failure. We continue that calculation for all the other segments, and we have a column of times to failure.

Finally, we can add a column of probability of failure. Our simple relationship for probability of failure can be related to time to failure in a simple reciprocal kind of relationship. This is not necessarily the relationship we will always want to use; it is very conservative. But for the purpose of illustration, let’s go with this.

So if we’ve got a 500 year time to failure, 1/500 leads to a probability of failure of 0.002. The same goes for segments two and three; segment four has a longer time to failure, so a lower probability of failure; segment five has the same time to failure as before, so we see the same value again. Then we’ve got a very short time to failure, 250 years, leading to a higher probability of failure.

These numbers are additive as long as they’re small, or if they are expressed as frequencies, so we can treat each one as a frequency. The summation is telling us that we’ve got about a 1.3%-per-year chance of failing from external corrosion somewhere in these 20 kilometres.
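The reciprocal relationship and the summation look like this, with the per-segment TTFs from the example in station order:

```python
# Conservative PoF = 1 / TTF, then summed across segments as frequencies.
ttfs = [500, 500, 500, 1000, 500, 250]   # years, one per dynamic segment
pofs = [1 / t for t in ttfs]             # per-year failure frequencies
total_pof = sum(pofs)                    # about 0.013, i.e. ~1.3% per year
```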

Finally, we do our last calculation for risk, which is EL (expected loss); the units are going to be dollars per year, using a definition of risk as probability times consequences. These are monetized risks: the frequency of the losses times the loss per incident. So this is simply the probability number multiplied by our surrogate consequence number (recall, these are really dollars per incident rather than population density); dollars per incident times our incident frequency gives us our expected loss.

So segment one is contributing $20 of expected loss; segment two, $200; then $20, $10, $20, and $40 for the remaining segments. Again we can sum those up, now telling us that the whole 20 kilometres is presenting to the company about $310 per year of expected loss from external corrosion.
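And the expected-loss arithmetic, per segment and aggregated, using the example’s per-segment probabilities and consequence surrogates:

```python
# EL = PoF * consequence, per segment, then aggregated for the whole line.
pofs = [0.002, 0.002, 0.002, 0.001, 0.002, 0.004]    # failures per year
cons = [10000, 100000, 10000, 10000, 10000, 10000]   # $ per incident
els  = [p * c for p, c in zip(pofs, cons)]           # roughly 20, 200, 20, 10, 20, 40
total_el = sum(els)                                  # about $310 per year
```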

So, putting all this together, we’ve come up with some defensible and credible estimates of the probability of failing from corrosion, and of the associated expected loss that those failures might generate.

So hopefully you recognize several key takeaways from this little exercise:

  • we’ve demonstrated the use of centerlines
  • how to efficiently collect and manage data in an events table
  • we’ve introduced the concept of dynamic segmentation
  • producing risk estimates
  • aggregation: recall that we were able to sum up columns to get totals for the whole stretch

Finally, I hope you take away the sense that we’re doing a pretty high-tech solution basically on the back of an envelope. It only took us a few minutes and a few pieces of data, and we’re coming up with credible, defensible estimates of external corrosion. Now, obviously there’s a lot of detail here that we’ve skipped over, but I just wanted to generate a sense that this is not rocket science: it’s very understandable, very clear, a very methodical process that we can set up in a computer, and the computer can keep the estimates evergreen for us with very little effort.
