If you have ever tried to manage a risk log used by several people, you’ll understand the minefield it can become.
We like to assign scores to risks as an “objective” way of prioritising them, but people aren’t good at that. For example, I recall a firm with premises on the flight path of a regional airport which had marked the likelihood of a plane landing on their office as “high”. There is a mass of good data on the CAA website that says “very low”.
In another risk log, when we ran a simulation of all their logged risks, it predicted an average annual loss far above their actual turnover.
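To make that concrete, here is a minimal sketch (in Python, with entirely made-up figures and a hypothetical `simulate_annual_loss` helper) of the kind of Monte Carlo aggregation involved: treat each logged risk as an annual probability plus an impact, sample many years, and average the loss. It isn’t the tool we used, just an illustration of why inflated likelihoods quickly produce an implausible expected annual loss.

```python
import random

# Hypothetical risk log entries: (annual probability of occurrence, impact in GBP).
# These figures are illustrative only, not from any real register.
risk_log = [
    (0.50, 200_000),   # an over-stated "high" likelihood on a rare event
    (0.30, 500_000),
    (0.20, 1_000_000),
    (0.10, 2_000_000),
]

def simulate_annual_loss(risks, trials=100_000):
    """Monte Carlo estimate of the mean annual loss from (probability, impact) risks."""
    total = 0.0
    for _ in range(trials):
        # In each simulated year, a risk either occurs (with its stated probability) or not.
        total += sum(impact for p, impact in risks if random.random() < p)
    return total / trials

if __name__ == "__main__":
    mean_loss = simulate_annual_loss(risk_log)
    print(f"Simulated mean annual loss: £{mean_loss:,.0f}")
    # With likelihoods inflated like these, the mean annual loss (around £650k here)
    # can easily exceed a small firm's turnover, which is a useful sanity check.
```

Comparing that simulated figure against turnover is exactly the kind of reality check that exposes over-stated likelihoods.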
We can of course go a long way towards reducing these errors by moderating inputs and through peer discussion. Even when the risks are over-stated, we may still gain some useful relative prioritisation.
My experience has always suggested that people in more numerate disciplines – engineers for example – are better at the game (I’ll look at formal quantitative risk assessment in another post). If you have actual data to refer to (it does happen) then it helps a great deal, so long as you know how to use it.
Daniel Kahneman in his book Thinking, Fast and Slow (Penguin, 2012) highlights the experimental evidence for some of these human biases. Just a few of these include:
- Focusing on the most recent or the most publicised events (availability errors)
- A tendency to over-estimate the probability of rare events (actually caused by us answering the wrong question)
- Failing to consider that risks could be more severe than previously observed
And a humbling observation is how we see mistakes in other people’s reasoning more easily than in our own. There’s a lot of truth in that.
But in the domain of general risk estimation, having seen other people’s mistakes and actively moderated their numbers, I have often supposed that I have a more objective view. Well, Kahneman acknowledges that this can be the case. But probably the biggest message in the book is that intuition can be our worst enemy, and our “lazy” cognitive senses are all too ready to accept the intuitive estimate.
Now add the fact that most people (even the experts) have basic difficulties in understanding statistics, and there is massive scope for error. This leads us to the key question: how confident can we be in the prioritisation process, and are we under-planning in some cases and over-planning in others?
Of course, if you’ve looked at the science of it then you probably had a pretty good idea that this was one of the problems. And the process of logging risks is indeed a good way of exposing these subjective errors, biases and assumptions to a test.
By sharing our statement of the risk we gain an opportunity to moderate it, hopefully in the right direction. In that sense, a risk shared may literally be a risk halved.
In this way we reaffirm some of the good principles behind risk management: expose your assessment to others to be challenged; be ready to reappraise both the likelihood and the impact, sometimes considerably; and check your facts.