Classical QRA versus Physics-based Models
Many documented risk assessment approaches are based solely on statistical analyses. This is because the problem of risk estimation was initially given to statisticians to solve. Ask a statistician how often something will happen in the future, and their first question will be ‘how often has it happened in the past?’ This is a reasonable question for a methodology that deals exclusively with analyses of how numbers ‘behave.’ The ability of statistics to model the behavior of large populations over long periods of time is undisputed.
But this does not provide a complete solution for practitioners of risk management.
Historical data should always influence our estimates of risk. However, they will rarely capture all the pertinent considerations. Even purists will usually agree that statistics can only fully describe very simple and rather uninteresting systems in the universe. Coin flips and games of chance (cards, roulette, etc.) are examples. Real-world systems are complex and require many insights well beyond statistical analyses for understanding.
Scientists and engineers, rather than statisticians, have been more involved in certain portions of risk assessment, notably consequence modeling. Historically, consequence assessments have made sound use of science and engineering where probability assessments often have not. Consequence evaluations have, for years, routinely used dispersion modeling, thermal effects predictions, heat transfer equations, the kinetics and thermodynamics of fluid movements, and many other techniques. Probability estimates, on the other hand, were often based simply on historical rates, perhaps modified by some very subjective ‘adjustment factors’ to account for instances when the subject pipeline was thought to behave differently from the statistical population. But, too often, little science and engineering were applied to the problem of measuring failure potential in a formal but efficient manner.
Underlying most meanings of risk is the key issue of ‘probability.’ Statistics and probability are closely intertwined. But, as is detailed in this text, probability expresses a degree of belief beyond statistical analyses. ‘Degree of belief’ is the most compelling definition of probability because it encompasses statistical evidence as well as science, engineering, interpretation, and judgment. Our beliefs should be firmly rooted in fundamental science, engineering judgment, and reasoning. This does not mean ignoring statistics, that is, the proper analysis of historical data for diagnosis, for testing hypotheses, or for uncovering new information. Statistics helps us understand our world, but it certainly does not explain it.
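To make ‘degree of belief’ concrete, the following is a minimal sketch of a Bayesian rate update in which an engineering-judgment prior is revised, but not replaced, by sparse historical evidence. All rates, counts, and parameter values are hypothetical, chosen only for illustration; this is not a prescribed calculation from this book.

```python
# Minimal sketch: probability as degree of belief, updated by evidence.
# All numbers below are hypothetical, chosen only for illustration.

# Prior belief from science and engineering judgment, expressed as a
# gamma distribution over the failure rate (failures per mile-year);
# the prior mean is alpha / beta.
alpha_prior = 2.0      # shape: strength of the engineering prior
beta_prior = 4000.0    # rate: equivalent mile-years of 'prior experience'

# Sparse historical evidence for the subject pipeline.
observed_failures = 1
exposure_mile_years = 500.0

# Conjugate gamma-Poisson update: the evidence shifts, but does not
# replace, the prior degree of belief.
alpha_post = alpha_prior + observed_failures
beta_post = beta_prior + exposure_mile_years

prior_mean = alpha_prior / beta_prior
post_mean = alpha_post / beta_post
print(f"prior rate belief:   {prior_mean:.2e} failures/mile-yr")
print(f"updated rate belief: {post_mean:.2e} failures/mile-yr")
```

With a strong prior and little data, the updated belief stays close to the engineering judgment; as mile-years of observation accumulate, the data increasingly dominate.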
The assumption of a predictable distribution of future leaks, predicated on past leak history, might be realistic in certain cases, especially when a database with enough events is available and conditions and activities are constant. However, one can easily envision scenarios where, in some segments, a single failure mode should dominate the risk assessment and result in a very high probability of failure rather than only some percentage of the total. Even if the assumed distribution is valid in the aggregate, there may be many locations along a pipeline where the pre-set distribution is not representative of the particular mechanisms at work there.
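A hypothetical numerical sketch of this pitfall: applying an aggregate failure-mode split uniformly understates the dominant mechanism in an atypical segment. All rates and fractions below are invented for illustration.

```python
# Hypothetical illustration: an aggregate failure-mode distribution
# can misrepresent a specific segment.

systemwide_rate = 1.0e-4            # failures per mile-year (hypothetical)
aggregate_mode_split = {            # historical split across all segments
    "third_party": 0.35,
    "corrosion": 0.30,
    "geohazard": 0.10,
    "other": 0.25,
}

# Rate assigned to each mode if the aggregate split is applied blindly.
blind = {m: systemwide_rate * f for m, f in aggregate_mode_split.items()}

# A segment crossing an active construction corridor: third-party
# damage should dominate there, regardless of the systemwide split.
# The 20x factor is a hypothetical local exposure multiplier.
segment_third_party = 20 * blind["third_party"]

print(f"blind third-party rate: {blind['third_party']:.1e} /mi-yr")
print(f"segment-specific rate:  {segment_third_party:.1e} /mi-yr")
```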
There is an important difference between using statistics to better understand numbers (inputs and results) versus basing a risk assessment predominantly on historical incident rates, using statistics to support the belief that the past is the best predictor of the future. This is admittedly an oversimplification and is debatable in several key ways, especially since all techniques are strengthened by a simultaneous understanding of both the underlying physics and the statistics. However, the distinction emphasizes a core premise of the methodology recommended in this book.
That premise is that the understanding of the physical phenomena behind pipeline failure should be the dominant basis of a risk assessment. Statistics, in particular historical event frequencies, should be secondary inputs.
The exposure-mitigation-resistance analysis that is an essential element of PoF assessment is a key aspect differentiating a modern pipeline risk assessment from classical QRA. Classical QRA does not seek the exposure-mitigation-resistance differentiation. Without this insight, the past failure rates typically used in such assessments have questionable relevance to future failure potential.
Failure to quantify the exposure-mitigation-resistance influences leads to incomplete understanding, which makes risk management problematic. Ideally, historical event rate information will be coupled with the exposure-mitigation-resistance analysis to yield the best PoF estimates.
The exposure-mitigation-resistance analysis is an indispensable step toward full understanding of PoF, as is detailed in later chapters. Full understanding leads to the best risk management practice (optimized resource allocation), which benefits all stakeholders.
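As a preview of later chapters, the following minimal sketch assumes a simple multiplicative combination of exposure, mitigation, and resistance for a single failure mechanism. The functional form and all input values are illustrative assumptions, not the full formulation detailed later in this book.

```python
# Minimal sketch of the exposure-mitigation-resistance decomposition
# of PoF, assuming a simple multiplicative form. Values are hypothetical.

def pof(exposure, mitigation_effectiveness, resistance):
    """Probability of failure per mile-year for one failure mechanism.

    exposure: unmitigated event frequency (events/mile-year)
    mitigation_effectiveness: fraction of events prevented from
        reaching the pipe (0..1)
    resistance: fraction of reaching events the pipe survives (0..1)
    """
    return exposure * (1.0 - mitigation_effectiveness) * (1.0 - resistance)

# Hypothetical third-party damage estimate for one segment:
excavation_hits = 0.05   # excavator contacts per mile-year
prevented = 0.90         # one-call response, patrol, depth of cover, ...
survives = 0.60          # wall strength vs typical hit energy

print(f"PoF estimate: {pof(excavation_hits, prevented, survives):.1e} /mi-yr")
```

Note how the decomposition makes each risk management lever visible: a historical failure rate alone cannot show whether added cover, better patrol, or thicker wall changes the answer, while the three separated terms can.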
More will be said about improvements over Classical QRA approaches in later sections.
Statistical Modeling
To be clear, the message here is NOT that statistical theory is to be avoided, but rather that statistics should supplement rather than drive risk modeling. Science and physics provide the model basis, while statistics is very useful in tuning or calibrating inputs and results. Failure to use statistical theory would be an error.
In fact, the risk assessment framework proposed in this text has been successfully deployed as a model making increased use of statistical techniques. In one such application, Bayesian networks were established to better incorporate probability distributions, rather than point estimates, and learning or feedback processes were included.
The same essential elements recommended here should be used in any such application, especially the breakdown of PoF into separate, but connected, measurements of exposure, mitigation, and resistance.
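In that spirit, the sketch below shows one way distributions, rather than point estimates, might be propagated through the exposure-mitigation-resistance structure using simple Monte Carlo sampling. The distribution families and parameters are hypothetical placeholders; this is not the Bayesian network application described above.

```python
# Sketch: propagating probability distributions, rather than point
# estimates, through the exposure-mitigation-resistance structure.
# Distribution choices and parameters are hypothetical placeholders.
import random

random.seed(1)
samples = []
for _ in range(100_000):
    exposure = random.lognormvariate(-3.0, 0.5)   # events/mile-year
    mitigation = random.betavariate(9, 1)         # fraction prevented
    resistance = random.betavariate(6, 4)         # fraction survived
    samples.append(exposure * (1 - mitigation) * (1 - resistance))

samples.sort()
mean = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]            # 90th percentile
print(f"PoF mean: {mean:.2e} /mi-yr, P90: {p90:.2e} /mi-yr")
```

Carrying full distributions through the model preserves uncertainty in the result, so a decision-maker sees a plausible range (and tail values) rather than a single number of unstated precision.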
In addition to the classical models of logic, new logic techniques are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring ‘partial truths’ (when a thing is neither completely true nor completely false) have been created based on fuzzy logic, which originated in the 1960s at the University of California at Berkeley as a technique to model the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and approximate information. Questions such as “To what degree is x safe?” can be addressed through these techniques. They have found engineering application in many control systems ranging from “smart” clothes dryers to automatic trains.
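As a simple illustration of the idea, the following sketch expresses “to what degree is x safe?” with a hypothetical piecewise-linear membership function over a design safety factor; the breakpoints are invented for illustration and carry no regulatory meaning.

```python
# Sketch of a fuzzy-logic answer to "to what degree is x safe?"
# The membership function and its breakpoints are hypothetical.

def degree_safe(safety_factor):
    """Membership of a design safety factor in the fuzzy set 'safe'.

    Below 1.0: not safe at all (0.0). Above 2.0: fully safe (1.0).
    In between: partially safe, by linear interpolation.
    """
    if safety_factor <= 1.0:
        return 0.0
    if safety_factor >= 2.0:
        return 1.0
    return (safety_factor - 1.0) / (2.0 - 1.0)

for sf in (0.9, 1.3, 1.8, 2.5):
    print(f"safety factor {sf}: safe to degree {degree_safe(sf):.2f}")
```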
See Also:
- What are they missing?
- Modern QRA
- Measuring failure potential
- Essential elements of good risk assessment
- Machine Learning, AI, Statistics
- Missteps, Myths, Past Practice
- Auditing risk assessments and risk management programs
- Managing Uncertainty
- Certification of a risk assessment
- Measurements vs Estimates: Two Types of Evidence
- Data Management