The actuary and enterprise risk management: Integrating reserve variability

By Jeffrey A. Courchene, Mark R. Shapland | 22 August 2016

The first step in managing reserve risk is measuring that risk. Risk management is linked to risk monitoring, measurement, and reporting. The quality of measurement and reporting often determines to what extent monitoring is possible.

Routinely assessing reserve variability, as part of the regular reserve analysis process, can greatly benefit the risk management process. Integrating elements of reserve risk measurement within a continuously monitored enterprise risk management (ERM) framework can offer a number of advantages to your organization, including, but not limited to:

1. Ensuring that reserving assumptions are tracked and validated over time and that changes in those assumptions are justified relative to performance

2. Formalizing the governance around the process (i.e., clear assignment of risk ownership and consistent, accurate, and auditable control of deterministic methods, stochastic models, actuarial methodology, etc.)

3. Providing a framework that allows actuarial resources to assess the calibration of the distributions of possible outcomes resulting from the reserve variability analyses (e.g., over time, approximately 10% of observed outcomes at each valuation date should fall in the combined highest and lowest 5% tails of the distribution of possible outcomes)

4. Providing a framework that includes an early warning system which translates actual outcomes of paid and outstanding loss into likely reserve estimate changes prior to any analysis

5. Enabling management to use key performance indicators (KPIs) to anticipate the results of future actuarial analyses and better understand and assess how prior assumptions have held up

6. Providing a framework that allows managers to efficiently allocate actuarial resources (e.g., assigning the most experienced resources to the most challenging segments) and allows actuarial resources to hypothesize whether deviations are the result of a mean estimation error, a variance estimation error, or a random error.
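The calibration check described in point 3 can be sketched in code. This is a minimal illustration, not the paper's method: the simulated distributions and actual outcomes below are hypothetical stand-ins for the output of a reserve variability analysis and the subsequently observed results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: for each of 40 valuation dates, a simulated
# distribution of possible outcomes (one row per date) and the
# actual outcome later observed for that date.
n_dates, n_sims = 40, 10_000
simulated = rng.normal(loc=100.0, scale=10.0, size=(n_dates, n_sims))
actuals = rng.normal(loc=100.0, scale=10.0, size=n_dates)

# Empirical percentile of each actual outcome within its own
# simulated distribution of possible outcomes.
percentiles = (simulated < actuals[:, None]).mean(axis=1)

# If the distributions are well calibrated, roughly 10% of actual
# outcomes should land in the combined top and bottom 5% tails.
in_tails = ((percentiles < 0.05) | (percentiles > 0.95)).mean()
print(f"Share of outcomes in the 5% tails: {in_tails:.1%}")
```

A persistent shortfall or excess relative to the expected 10% would suggest the variance of the estimated distributions is overstated or understated, respectively.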

In traditional deterministic reserving, back-testing plays a role, but people naturally tend to assume, or hope for, more “better than expected” back tests than “worse than expected” back tests. They also intuitively understand that “worse than expected” back tests are not abnormal, but a tendency to want more “better than expected” back tests can creep into the initial expected results in the form of bias. Conversely, pressure to publish better financial results can push initial expectations lower.

The only way to test the significance of deviations from expected (i.e., to back-test assumptions) is to make use of the output from a reserve variability analysis to estimate a distribution of possible outcomes. With this enhancement, rather than reviewing whether an outcome is better or worse than expected, the question becomes whether an outcome is significantly different from expected. Without a reserve variability analysis, the significance of deviations (at both a granular level and an aggregate level) cannot be quantified.
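The shift from “better or worse than expected” to “significantly different from expected” can be illustrated with a small sketch. The function below is a hypothetical helper, assuming the reserve variability analysis supplies a simulated distribution of possible outcomes: it locates the actual outcome within that distribution and flags it only if it falls in a two-sided tail.

```python
import numpy as np

def deviation_significance(simulated_outcomes, actual, alpha=0.05):
    """Return the empirical percentile of `actual` within the
    simulated distribution of possible outcomes, and whether the
    deviation from expected is significant at the two-sided
    `alpha` level."""
    pct = float(np.mean(np.asarray(simulated_outcomes) < actual))
    significant = pct < alpha / 2 or pct > 1 - alpha / 2
    return pct, significant

rng = np.random.default_rng(1)
sims = rng.lognormal(mean=4.6, sigma=0.1, size=10_000)

# An actual outcome near the center of the distribution is merely
# "worse than expected", not significant.
print(deviation_significance(sims, float(np.median(sims))))

# An outcome beyond the upper end of the simulated range is flagged.
print(deviation_significance(sims, float(sims.max()) * 1.1))
```

Under a deterministic analysis only the sign of the deviation is visible; the percentile produced here is what makes the significance question answerable.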

A new paper in the 2016 Casualty Actuarial Society (CAS) Forum, written by Mark Shapland and Jeff Courchene, is now available. “The Actuary and Enterprise Risk Management: Integrating Reserve Variability” discusses how your organization can get the most from your reserve variability analyses.
