
Milliman researchers in Paris are certainly not resting on their laurels: their new research, hot off the press on 22 February 2017, represents a significant development in mortality and longevity risk modelling. It is vital reading for anyone working in this sphere.
My colleagues have developed a robust statistical methodology to correct the implicit inaccuracies of national mortality tables, which are widely used in sophisticated mortality and longevity risk modelling. The results are striking.
Here I take a closer look at the relevance of these national mortality tables, the problems with them, and the corrections now available to enhance mortality and longevity risk models. I will touch on the key technical points behind these developments from an Irish/UK perspective, leaving the rigorous mathematical explanations to the underlying research publications: the 2017 publication can be found here and the 2016 publication here.
The use of national mortality tables
In Ireland and the UK, to set basic mortality assumptions in our pricing and reserving work, we tend to use insured lives mortality tables, such as the Continuous Mortality Investigation (CMI) tables. However, national mortality tables based on the population as a whole are also used extensively in mortality and longevity risk modelling, where a greater quantity of data is required.
National mortality tables are used to calibrate stochastic mortality models and to derive mortality improvement assumptions, and they feature in sophisticated mortality risk management models, Solvency II internal models, the pricing of mortality/longevity securitisations, and bulk annuity transactions.
Bulk annuity transactions are popular in the UK market, with a number of large deals executed during 2016, including the ICI Pension Fund's two buy-in deals completed in the wake of Brexit, totalling £1.7 billion. Legal & General completed a £2.5 billion buyout agreement with the TRW Pension Scheme in 2014.
Longevity hedging (in particular, the use of longevity swaps) is also an attractive approach to de-risking pension schemes, and it equally requires national mortality tables. Transactions range from the large-scale £5 billion Aviva longevity swap in 2014 to the recent, more modest £300 million longevity swap completed between Zurich and SCOR in January 2017.
While the use of internal models to calculate mortality and longevity risk capital requirements under Solvency II is not prevalent in the Irish market, largely owing to the size of companies and the amount of risk retained, reinsurers are likely to be looking at such models. In the UK, larger companies may opt to use internal models if they are retaining large exposures.
Indeed, national mortality tables also typically inform mortality improvement assumptions for all companies, as the analysis of improvements requires large volumes of data. Therefore, even companies that do not use sophisticated mortality and longevity risk modelling techniques are implicitly impacted by the new developments in relation to the construction of national mortality tables.
The problem with national mortality rates
Period mortality rates analyse individuals with a given age last birthday (e.g., 40) who are observed during the same year (e.g., 1960).1 As such, period tables provide information on how mortality evolves from one year to the next and are therefore the natural input for stochastic mortality modelling.
However, these period rates in national mortality tables are typically based on one particularly heroic assumption, that is, the uniform distribution of births. An example of the assumption of uniform distribution of births is that if there are 1,200 births in a given calendar year, the assumption is that 100 births occurred each month. Of course, in reality, this is quite inaccurate--there may well have been 200 births in January, 50 in February, and so on.
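To see the knock-on effect, here is a minimal Python sketch using hypothetical monthly birth figures (the 1,200-birth example above, not real data). It compares the average exact age, at a year-end census, of the group "aged 40 last birthday" under the actual birth pattern versus the uniform assumption:

```python
import numpy as np

# Hypothetical monthly birth counts for one calendar year (1,200 in total).
# The uniform distribution of births assumption would assign 100 to each month.
births = np.array([200, 50, 120, 90, 110, 80, 100, 95, 105, 85, 90, 75])

# Exact age on 1 January, 41 years after the birth year, for someone born
# mid-month in month m (1-12): everyone here is "aged 40 last birthday".
months = np.arange(1, 13)
exact_age = 41 - (months - 0.5) / 12

# Average exact age of the group under each birth pattern.
avg_actual = np.average(exact_age, weights=births)
avg_uniform = exact_age.mean()  # equals 40.5 under uniform births

print(f"Average exact age, actual births:  {avg_actual:.3f}")
print(f"Average exact age, uniform births: {avg_uniform:.3f}")
```

Even this modest skew towards January pushes the group's average age above the 40.5 implied by uniform births, and with it the true exposure to risk; the wartime birth swings discussed below are far more extreme.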
The mortality rate is calculated as the number of deaths divided by the exposure to risk:

$$\hat{m}_x(t) = \frac{D_x(t)}{E_x(t)},$$

where $D_x(t)$ is the number of deaths at age $x$ in year $t$ and $E_x(t)$ is the corresponding exposure to risk.
We might assume that the number of deaths is reasonably accurate given the availability of death certificates. However, flash back to our survival models exams and we remember that the exposure to risk (the denominator) is not a straightforward number to calculate. Wasn't there something about integrals?
Yes, unfortunately there was something about integrals. The integrals reflect the fact that age and time are continuous. We have almost continuous age and time data for insured lives, given that daily snapshots are usually available. For national mortality rates, however, we don't have continuous observation of individuals in the population who are alive at each point in time; all we have is annual data, which gives the number of individuals alive by age at the end of each year.
When computing national period mortality rates, a typical approximation is to let the exposure to risk be the average of, say, those aged 40 at the start of 1960 and those aged 40 at the end of it. If we think through the impact of large fluctuations in the number of births per month in the years in which such individuals were born, we can see how this could distort the resulting mortality rates. The most significant fluctuations are seen when birth rates fall dramatically during periods of war, such as World War I, and then spike afterwards. This can have a particularly severe impact on the mortality rates computed for older ages.
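In symbols, using a standard central exposure formulation (the notation here is mine, not lifted from the research papers): if $L_x(s)$ denotes the number of individuals aged $x$ last birthday alive at time $s$, then

$$E_x(t) = \int_0^1 L_x(t+u)\,du \;\approx\; \frac{L_x(t) + L_x(t+1)}{2}.$$

The approximation on the right is only accurate when individuals flow through the age bracket evenly over the year, which is precisely where the uniform distribution of births assumption sneaks in.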
This assumption can result in the mortality rates for adjoining birth cohorts being either overstated or understated. The calculated mortality improvement rates are also distorted: if one year's mortality rate is overstated and the next year's is understated, the rate of improvement observed between them will be artificially high.
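A quick stylised example, with purely illustrative numbers: suppose the true mortality rate is 1.00% in both 1960 and 1961, so true improvement is nil, but the exposure distortion pushes the computed rates to 1.03% and 0.97% respectively. The observed improvement rate is then

$$1 - \frac{0.97}{1.03} \approx 5.8\%,$$

an artefact of the exposure calculation rather than any genuine fall in mortality.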
This leads to high volatility in the data used to calibrate our models as well as what look like isolated cohort effects. It may cause us to erroneously choose one statistical model over another or to include a cohort component that isn't really there. Ultimately, the impact on modelling decisions and on the level and volatility of mortality rates input into our models will produce suboptimal results.
Finding a solution to the problem with national mortality rates
Fortunately, the good people at Milliman, in particular Alexandre Boumezoued, have done the hard work for us, analysing this problem and coming up with a solution.
Research conducted at Milliman and published in early 2016 set out an approach to correcting national mortality tables for five European countries with particularly good data on the number of births per month, allowing the inaccuracies introduced by the uniform distribution of births assumption to be corrected. New research has since been conducted at Milliman, and I can now reveal that the methodology has been extended to correct national mortality rates for 31 countries, including those without sufficient historical fertility data. For example, the birth data for Ireland and the UK only go back to circa 1980 and circa 1970, respectively, compared with circa 1860 for France. Regression models and rigorous statistical analysis have therefore been used to express the correction required as a function of explanatory variables. So, if you are looking for corrected mortality tables for any of the countries in Figure 1, you're in luck.
Figure 1: Countries With Corrected Mortality Tables Available

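For intuition only, here is a toy Python sketch of that regression idea. The features, coefficients, and data below are hypothetical stand-ins invented for illustration; the actual explanatory variables and model specification are set out in the 2017 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set standing in for cohorts in "data-rich" countries, where the
# correction to the mortality table can be computed directly from monthly
# birth data. Each row holds hypothetical explanatory variables (e.g., a
# year-on-year change in annual birth counts around the cohort's birth year).
X = rng.normal(size=(500, 2))
true_beta = np.array([0.8, -0.3])                     # invented relationship
y = X @ true_beta + rng.normal(scale=0.1, size=500)   # "observed" corrections

# Ordinary least squares: express the correction as a function of the
# explanatory variables.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the fitted relationship to a cohort in a country without monthly
# birth data, where the correction cannot be computed directly.
x_new = np.array([0.5, -1.2])
print(f"Predicted correction: {x_new @ beta:+.3f}")
```

The point is simply that a relationship estimated where monthly birth data exist can be carried over, via widely available explanatory variables, to countries where they don't.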
Are the results significant?
To give a flavour of the significance of the results: the impact on historical mortality rates for Ireland can be as high as 6%, and the same can be said for the UK. Like I said, pretty striking stuff.
Crucially, the volatility of mortality rates and mortality improvement rates reduces when the tables are corrected. Volatility is a key input in any stochastic model, and a reduction in volatility will lead to more predictable results and likely lower capital requirements for mortality and longevity risk.
Finally, what previously looked like isolated cohort effects in the original national mortality tables virtually disappear when we correct the data, resulting in better informed choices regarding the inclusion of a cohort component and the statistical models used.
What next?
The challenge for the industry is to ask ourselves whether we have corrected the national mortality rates being fed into our models, either directly or indirectly, and indeed whether we are using sufficiently sophisticated models in the first place.
The computation and analysis of mortality rates may no longer necessarily be our bread and butter, but a technical refresher and development of our thinking regarding this classic insurance risk is a welcome and timely contribution by Milliman researchers in Paris. If you would like more information, feel free to email me.
1 Note that some of these individuals would have been born in 1920 and some would have been born in 1919, i.e., period mortality rates combine two different cohorts or generations of individuals. This is distinct from cohort mortality rates, which analyse individuals at a given age who were all born in the same year.