Never compare inherent to residual risk again

The concept of inherent risk originated in insurance, where underwriters used Maximum Possible Loss (MPL) to determine the total value that could be lost in a catastrophic event. MPL considers the absolute worst-case scenario, with no controls in place, even if it is highly unlikely to occur. The idea of considering the worst-case loss has been present in insurance since its early days, but the specific term MPL likely gained prominence in the 1950s or 1960s as risk assessment methodologies became more formalized.

This concept of inherent risk was later borrowed by internal auditors and risk managers, who often use it in a misleading and purely theoretical form.

Since then, insurance underwriters have actually moved on to use Estimated Maximum Loss (EML) instead of MPL. EML takes a more realistic approach by considering the probability of different loss scenarios. It aims to estimate the maximum loss that is reasonably expected to occur, based on historical data, risk assessments, and the specific characteristics of the insured risk. The shift towards EML began to gain traction in the late 20th century, particularly in the 1980s and 1990s, and was driven by several factors (a short numerical sketch of the MPL/EML difference follows the list below):

  • Advancements in risk modelling: The development of sophisticated risk models allowed insurers to better quantify the probability of different loss scenarios, leading to more accurate estimations of maximum losses.
  • Increased focus on risk management: The insurance industry increasingly emphasized risk management practices, and EML provided a more practical and actionable metric for assessing and mitigating risks.
  • Regulatory changes: Some regulatory frameworks began to require or encourage the use of EML as a more realistic measure of potential losses.
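To make the difference concrete, here is a minimal Python sketch of how MPL and EML can diverge for a single insured property. All figures and distribution parameters are hypothetical and purely illustrative: MPL is taken as the total insured value, while EML is taken as a high percentile of a simulated loss distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single site: total insured value is used as the Maximum Possible Loss.
total_insured_value = 50_000_000

# Simulate loss severities from a heavy-tailed lognormal distribution (parameters invented),
# capped at the insured value.
losses = np.minimum(rng.lognormal(mean=13.0, sigma=1.2, size=100_000), total_insured_value)

mpl = total_insured_value        # Maximum Possible Loss: absolute worst case, no controls
eml = np.percentile(losses, 99)  # Estimated Maximum Loss: e.g. the 99th percentile of realistic scenarios

print(f"MPL: {mpl:>12,.0f}")
print(f"EML: {eml:>12,.0f}")
```

On a heavy-tailed but realistic loss distribution, the 99th-percentile EML typically comes out far below the total insured value, which is exactly why underwriters came to see MPL as exaggerated.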

This is an important fact because it shows that even the original users of the inherent risk concept found it simplistic and exaggerated. A study by the National Association of Insurance Commissioners (NAIC) found that EML, by incorporating historical data and probabilistic models, reduces the capital buffer insurers need to hold, thus optimizing their financial stability and competitiveness. It also has to be noted that, beyond some traditional uses in insurance and a few other exceptional cases, the concept of inherent risk has no practical application or value.

To analyse the concept further, we need to distinguish between three types of risk (a brief worked example follows the list):

  • Inherent risk – The risk that would hypothetically exist without any risk mitigation measures. It is the raw, natural risk associated with a particular activity or situation.
  • Current risk – The level of risk given existing control measures, assuming their current level of effectiveness. It represents the level of risk that the organization is currently facing.
  • Residual risk – The forecasted future risk level after new mitigation measures are implemented, assuming they will be effective.
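As a purely illustrative sketch (all figures hypothetical), the three levels could be made concrete for a single scenario as follows. Single-point "probability x impact" numbers are used here only to anchor the labels; as argued further below, loss distributions and VaR/cVaR are better suited for actual comparison.

```python
# Purely illustrative, hypothetical figures for a single data-breach scenario.
inherent_risk = 0.30 * 10_000_000  # no controls at all: 30% annual likelihood, $10M impact
current_risk  = 0.10 * 4_000_000   # existing controls, at their current effectiveness
residual_risk = 0.05 * 2_000_000   # forecast after planned mitigations are implemented

print(f"{inherent_risk:,.0f}  {current_risk:,.0f}  {residual_risk:,.0f}")
# -> 3,000,000  400,000  100,000
```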

Internal auditors traditionally compare inherent risk to current risk to determine which current controls are most critical and therefore should be included in internal audit reviews. Risk managers, on the other hand, usually compare current risk to residual risk to determine the effect new mitigations could have on risk exposure, to select the most effective combination of mitigations given the budget, and to determine whether the cost of implementation is justified compared to the reduction in risk.

A review of internal audit methodologies showed that, most of the time, inherent and current risks are compared not in a mathematical sense but rather as a difference between two expert judgements, denoted by a high, medium or low risk level derived by combining a likelihood score or rating with a consequence score or rating. The IIA’s Global Internal Audit Common Body of Knowledge (CBOK) survey found that 60% of auditors use subjective scales (high, medium, low) to assess risks, leading to inconsistent and unreliable evaluations. A study by COSO also highlights that these qualitative assessments often fail to account for control effectiveness accurately, resulting in a misrepresentation of the true risk landscape.

In his 1964 essay “Words of Estimative Probability,” Sherman Kent explored the inconsistency in how analysts used words like “likely,” “probably,” and “unlikely” to express their confidence levels in intelligence assessments. Kent’s study found that different analysts assigned different numerical probabilities to these words, leading to confusion and misinterpretations of intelligence reports. For example, one analyst might use “likely” to mean an 80% chance of an event occurring, while another might use it to mean a 30% chance. This lack of standardization in the use of estimative language was a significant concern in the intelligence community, and Kent’s work led to the development of more structured guidelines and scales for expressing probabilistic judgments in intelligence analysis. The same problem is one of the fundamental flaws in attempts by internal auditors to compare qualitative inherent risk levels to qualitative current risk levels, and it makes their estimates of control effectiveness misleading and unrealistic. A survey by the Institute of Internal Auditors (IIA) revealed that many organizations struggle to apply inherent risk meaningfully outside of insurance contexts. The IIA’s report shows that inherent risk assessments often lead to overstated risk levels, resulting in inefficient allocation of resources to mitigate unlikely events.

A better approach for internal auditors is to use decomposition to assess the effectiveness of controls by scoring against objective factors, such as volume of transactions, historical errors, materiality, level of automation and so on. This is still simplistic but produces less error, according to MacGregor (2001), who directly advocates decomposition in judgmental forecasting, while Armstrong (2001) supports combining forecasts, often involving decomposition, for better accuracy. Fildes & Goodwin (2007) emphasize the value of breaking down complex problems and applying judgment to each component, further supporting the benefits of decomposition in improving judgmental forecasting accuracy.
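A minimal sketch of what such decomposition-based scoring might look like is shown below; the factors, weights and control names are hypothetical assumptions for illustration, not a prescribed methodology.

```python
# Decomposition-based control scoring (hypothetical factors and weights).
# Each factor is scored 1 (low concern) to 5 (high concern); a weighted sum drives audit priority.

factors = {
    "transaction_volume":  0.30,
    "historical_errors":   0.30,
    "materiality":         0.25,
    "level_of_automation": 0.15,  # manual controls score higher (more concern)
}

controls = {
    "three-way invoice match": {"transaction_volume": 5, "historical_errors": 2,
                                "materiality": 4, "level_of_automation": 1},
    "manual journal approval": {"transaction_volume": 3, "historical_errors": 4,
                                "materiality": 5, "level_of_automation": 5},
}

for name, scores in controls.items():
    priority = sum(weight * scores[factor] for factor, weight in factors.items())
    print(f"{name}: priority score {priority:.2f}")
```

The point is that each factor is assessed against observable evidence (transaction counts, error logs, financial materiality) rather than a single holistic judgement about control effectiveness.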

We have not observed any evidence to suggest that risk managers need to compare inherent risk to current risk, or inherent risk to residual risk: given the effort required, it does not create any practical value or lead to any specific decisions.

Many risk managers we have interviewed, on the other hand, do compare current risk to residual risk when they optimise future mitigations against the budget or try to reduce the risk to an acceptable level. This typically requires a deeper analysis of any given risk using a bow-tie, a decision tree or another decomposition technique. Quantifying the current risk and the future residual risk allows mature organisations to optimise mitigation actions and select the best return on investment. The combination of mitigation actions that produces the biggest reduction in value at risk (VaR) and conditional value at risk (cVaR) per dollar invested is usually selected.
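A simplified Monte Carlo sketch of this selection logic is shown below. The frequency and severity parameters, mitigation effects and costs are invented for illustration; a real model would be calibrated to the organisation's own loss data and decomposition (bow-tie or decision tree).

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000  # number of simulated years

def simulate_annual_loss(freq, sev_mu, sev_sigma=1.0):
    """Compound annual loss: Poisson event count, lognormal severity per event."""
    counts = rng.poisson(freq, N)
    return np.array([rng.lognormal(sev_mu, sev_sigma, k).sum() for k in counts])

def cvar(losses, level=0.95):
    """Mean of the worst (1 - level) share of simulated annual losses."""
    return np.sort(losses)[int(level * len(losses)):].mean()

# Current risk profile (hypothetical parameters)
current = simulate_annual_loss(freq=2.0, sev_mu=12.0)
cvar_now = cvar(current)

# Candidate mitigations: cost and assumed effect on frequency / severity (all illustrative)
mitigations = {
    "staff training":       {"cost": 150_000, "freq": 1.2, "sev_mu": 12.0},
    "automated monitoring": {"cost": 400_000, "freq": 0.8, "sev_mu": 11.7},
    "process redesign":     {"cost":  90_000, "freq": 2.0, "sev_mu": 11.5},
}

for name, m in mitigations.items():
    residual = simulate_annual_loss(m["freq"], m["sev_mu"])
    reduction_per_dollar = (cvar_now - cvar(residual)) / m["cost"]
    print(f"{name}: cVaR95 reduction per dollar invested = {reduction_per_dollar:.2f}")
```

Ranking candidate mitigations by cVaR reduction per dollar invested, subject to the available budget, gives a transparent basis for selecting the portfolio of actions with the best return on investment.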

Contrary to the common approach, risks should not be compared by a risk level derived from multiplying probability and consequences; this gives an expected loss (EL), which, while useful for budgeting, overlooks tail risk and is therefore unsuitable for comparing current and residual risks. Artzner et al. (1999) highlight the value of coherent risk measures such as conditional value at risk (cVaR), which captures tail risk and satisfies desirable properties like subadditivity that VaR alone does not always satisfy. For instance, a case study on BP’s risk management practices after the Deepwater Horizon disaster showed how moving from a static risk register to a dynamic, probabilistic risk assessment model helped the company better predict and mitigate future risks. Similarly, research by McKinsey highlights how a global pharmaceutical company used VaR and cVaR to optimize their risk mitigation strategies, leading to a 30% improvement in risk-adjusted return on investment.
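The limitation of expected loss is easy to demonstrate: two hypothetical risks with the same EL can carry very different tail risk, which only VaR and especially cVaR reveal. The sketch below uses invented distributions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Two hypothetical risks with (roughly) the same expected loss but very different tails.
risk_a = np.maximum(rng.normal(100_000, 10_000, N), 0)    # frequent, predictable losses
risk_b = np.where(rng.random(N) < 0.01, 10_000_000, 0.0)  # 1% chance of a catastrophic loss

def var_cvar(losses, level=0.95):
    """VaR is the loss at the given percentile; cVaR is the mean of the tail beyond it."""
    sorted_losses = np.sort(losses)
    cutoff = int(level * len(losses))
    return sorted_losses[cutoff], sorted_losses[cutoff:].mean()

for name, losses in [("risk A", risk_a), ("risk B", risk_b)]:
    var95, cvar95 = var_cvar(losses)
    print(f"{name}: EL={losses.mean():,.0f}  VaR95={var95:,.0f}  cVaR95={cvar95:,.0f}")
```

Both risks have an expected loss of roughly 100,000, yet risk B’s cVaR at the 95% level is many times larger than risk A’s, and its VaR95 is even zero because the catastrophic event sits beyond the 95th percentile. Multiplying probability by consequence would make the two risks look identical.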

It is advisable to report on residual risk once the organisation is mature enough to represent key risks as loss distributions and measure the change in VaR or cVaR.

Modern risk management is more integrated with decision-making processes, which means that any risk assessment approach, including discussions around inherent risk, should directly inform and align with strategic and operational decisions. This requires a more dynamic and probabilistic approach to risk assessment rather than static, categorical assessments.
