As you know, Douglas Hubbard is currently working on his second book on risk management. I recently took part in a correspondence about the root causes of poor practices in risk management. Here is a short extract that, in my view, every risk manager simply must read. It comes from one of the best risk managers in the world and, by coincidence, the ideologue behind ISO 31000 (whoever guesses his name gets a prize):

When I first started getting involved with the witchcraft we call ‘risk management’, over 40 years ago, I was plotting various measures of risk against some other parameter such as distance, time or population size.

These calculations were always absolute, and we tried to obtain best-estimate results. Often we used criteria, which we sometimes drew onto the risk-versus-parameter graphs, to designate significance.

There is nothing wrong with a simple tool that helps decision making, provided it is soundly based and used within its limitations. However, often (mostly) this is not true of so-called risk matrices and heat maps.

Using such displays is well established in other related disciplines such as reliability. For example, MIL-STD-1629, dating back to 1974, advises for FMECA the use of a ‘criticality matrix’ (shown below) where absolute levels of probability are plotted against severity to “provide a means of identifying and comparing each failure mode to all other failure modes with respect to severity” and “provide a tool for assigning corrective action priorities”. Nothing wrong with that at all!
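
To make the idea concrete, here is a minimal sketch of such a criticality matrix; the probability levels, severity categories and failure modes are hypothetical placeholders, not data from MIL-STD-1629 itself:

```python
# A minimal sketch of a MIL-STD-1629-style criticality matrix: failure
# modes are placed on a grid of absolute probability level versus
# severity category so that each mode can be compared against all
# others and corrective-action priorities assigned. The failure modes
# and level labels below are hypothetical, for illustration only.

PROBABILITY_LEVELS = ["A", "B", "C", "D", "E"]   # A = frequent ... E = extremely unlikely
SEVERITY_CATEGORIES = ["I", "II", "III", "IV"]   # I = catastrophic ... IV = minor

# (failure mode, probability level, severity category)
failure_modes = [
    ("Seal rupture",    "D", "I"),
    ("Sensor drift",    "B", "III"),
    ("Pump cavitation", "C", "II"),
]

# Place each failure mode in its cell of the matrix.
matrix = {(p, s): [] for p in PROBABILITY_LEVELS for s in SEVERITY_CATEGORIES}
for name, prob, sev in failure_modes:
    matrix[(prob, sev)].append(name)

# Print the grid with the most severe column first and the most
# frequent row first, so priorities read from the top-left corner.
print("     " + "".join(f"{s:>18}" for s in SEVERITY_CATEGORIES))
for p in PROBABILITY_LEVELS:
    cells = "".join(f"{', '.join(matrix[(p, s)]) or '-':>18}" for s in SEVERITY_CATEGORIES)
    print(f"{p:>4} {cells}")
```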

In the late ’80s I developed quite a few risk rating tools that allowed, for example, the actions from an audit to be prioritised using a combination of the (absolute) likelihood of a hazardous event and the consequences on human populations. Often these combinations were not a simple product but were of the form F·N^n, where n was often 2 (the Okrent index) to reflect public aversion to catastrophes (see Williams JC, Purdy G: Practical Industrial Risk Management: Field Experience, Techniques and Priorities, Proceedings of the SRD Association, Inaugural Conference, December 1991).
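
As a sketch of how an F·N^n index would prioritise such audit actions, assuming n = 2 per the Okrent index mentioned above (the actions and figures below are invented):

```python
# A minimal sketch of prioritising audit actions with an F*N^n index,
# where F is the (absolute) annual likelihood of the hazardous event,
# N the number of people affected, and n = 2 (the Okrent index)
# reflects public aversion to catastrophes. With n = 1 this would be
# plain expected harm F*N. All actions and figures are invented.

n = 2

# (action, F: events per year, N: people affected)
actions = [
    ("Guard unfenced machinery", 1e-2, 3),
    ("Reinforce storage tank",   1e-4, 200),
    ("Replace worn wiring",      1e-3, 10),
]

# The N^n term makes one event harming 200 people weigh far more than
# many events each harming a few, so the tank tops the list.
scored = [(name, f * people ** n) for name, f, people in actions]
for name, index in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{index:8.4f}  {name}")
```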

When I first arrived in Australia in 1995, I was disturbed to see that the Australian and New Zealand standard at that time included in an appendix a 5×5 matrix diagram. This wretched diagram was always labelled ‘for illustrative purposes only’ but nevertheless became synonymous with compliance with the standard and was widely adopted over here, without thinking, by organisations of all shapes and sizes. The diagram was based on ‘relative risk ranking’, a method I had previously encountered in the mining industry in South Africa, where the NOSA organisation adopted (and changed) the three-factor method of Fine and Kinney (see attached).

What concerned me then (and why I got the wretched matrix taken out of the standard in 2004 when I was chair of the standards committee) was:
That it involved the comparison of relative, not absolute levels of consequences and likelihood;
Normally the scales involved were ‘ordinal’, not ratio, and therefore could not be combined mathematically, but they were anyway (see the worked sketch after this list);
That five levels of consequence did not suit most organisations, as it was impossible to arrive at an equivalent level of harm from different forms of consequence at each level of the scale. In fact, 6, 7 or 8 levels always work best, depending on the size of the organisation;
Many organisations just copied the illustrative matrix from the old standard, with no appreciation that it would not really work for any organisation, let alone for theirs;
Often the consequence scales involved a mixture of absolute measures (e.g. the number of $ lost) and relative measures (% delay in a project or % shortfall in a departmental budget), which meant that they could not be used consistently across an organisation;
Often, there were even no scales at all, just some vague labels like ‘high’, ‘medium’ and ‘low’;
Labels were always pejorative, and most people seemed to look only at the labels on the scales and not at the actual scales and their wordings;
Likelihood scales often involved a combination of frequency, return period and colloquial descriptors that conflicted and confused;
Organisations got in the habit of colouring the cells of the matrix, with ‘high’ relative levels of risk being red and ‘low’ being green;
Organisations were even drawing lines on the matrix to show “what is not acceptable” and what is “acceptable”.
It must be said that the main sinners were the safety fraternity, who wanted to stop all work that scored a ‘red’ cell on the matrix. It often became quite confused whether they were measuring the current (relative) level of risk or the (forbidden) measure of ‘inherent risk’.
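
The point about ordinal scales deserves a worked example. The following is a minimal sketch, with invented score bands and figures, of how multiplying ordinal scores can rank a smaller risk above one with several times the expected loss:

```python
# A minimal sketch of why arithmetic on ordinal matrix scores misleads.
# The score bands below span orders of magnitude, so 'likelihood score
# x consequence score' can invert the true ordering of risks.
# All bands, risks and figures are invented for illustration.

def likelihood_score(p):
    # Map an absolute annual probability onto a 1-5 ordinal band.
    return 1 if p < 1e-4 else 2 if p < 1e-3 else 3 if p < 1e-2 else 4 if p < 1e-1 else 5

def consequence_score(loss):
    # Map an absolute loss in $ onto a 1-5 ordinal band.
    return 1 if loss < 1e4 else 2 if loss < 1e5 else 3 if loss < 1e6 else 4 if loss < 1e7 else 5

risks = [
    # (name, annual probability, loss in $ if the event occurs)
    ("Forklift incident", 0.0110, 11_000),   # just above two band edges
    ("Chemical spill",    0.0099, 99_000),   # just below two band edges
]

for name, p, loss in risks:
    score = likelihood_score(p) * consequence_score(loss)
    print(f"{name:18} matrix score = {score}   expected loss = ${p * loss:,.0f}")

# Output: the forklift risk scores 8 and the spill only 6, yet the
# spill's expected loss ($980) is roughly eight times the forklift's ($121).
```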

When all this became coded into software packages, it just made the situation even more confusing and technically invalid.

When organisations found that these poorly designed and implemented rating systems started to produce unacceptable and crazy results, they tried to make them more complex, which just compromised them even more. We had different matrices for different parts of the same organisation or for different projects, mirror-image consequence tables that showed beneficial outcomes on one side and detrimental ones on the other, and so on.

Many of these crazy solutions still exist out there, and the well is still being poisoned by risk management and safety people going on courses where they are told the wrong things and given faulty examples, which they then take back and adopt in their own organisations.

When we wrote an implementation guide to ISO 31000 here, we tried to describe the correct processes that organisations should go through to develop these qualitative tools for risk ranking and risk criteria. An extract from that handbook (HB 436) and an appendix with a range of solutions are attached.

However, after spending the last 15 years reviewing the effectiveness of probably hundreds of risk management frameworks in organisations across the globe, I can tell you that not one of them gets this right. And the upshot, quite simply, is that all their risk management activity, their reports and their conclusions are invariably wrong!

I’m afraid that the misuse of such matrices and heat maps remains widespread. I even see it as the basis for US terrorism security risk assessments!

Tony Cox has written some good, but technical papers rightly criticising all this. I’ve attached them in case you’ve not seen them.

I now regard all this stuff as just part of the huge, pointless edifice that risk management has become. Some 40 years ago I was conducting calculations at the request of decision makers so that they could understand how bad outcomes could come about and how likely these might be in particular circumstances and settings. However, what was just one small part of decision making has now taken on a Frankenstein-like life of its own, and in most organisations the large and forever-growing risk management tumour is now effectively taking over its host: conscious and well-informed decision making.

For me, the whole matrix/heat map saga is a perfect illustration of what ails the risk management industry, where a little knowledge is a dangerous thing.
