16 August 2018

Capital modelling: Why do more simulations matter?

Greater computing power adds significantly to the integrity and value of the capital modelling process. So how much longer can capital modelling actuaries resist the case for changing to higher-performance platforms? asks Sunnie Luthra

As the performance of the dominant modelling platforms has inched forwards, demands and expectations on modelling teams have continued to increase unabated. The tension between operating a capital model focused on regulatory imperatives and one that adds real business value, and the trade-offs that result, is a frequent topic of conversation between risk and capital professionals.

In a perverse way, modelling teams have perhaps become victims of their own success – the deep and sustained investment in management education and embedding model use has translated into increased demand.

Issues such as model parameterisation and validation being driven by an overly narrow focus on one part of the loss distribution have been documented extensively. As model use and understanding have developed over time, approaches to addressing these shortcomings have become increasingly sophisticated.

However, challenges remain around model convergence, flexibility, validation and the speed at which management questions can be answered. Often, these issues are glossed over by a tacit acceptance that we need to work within the constraints of the computing performance at our disposal.

But recent advancements in stochastic modelling platforms and other analytical tools have begun to erode the credibility of this excuse.

To appreciate the size of the gap between the current and potential state, a good starting point is last year's market study conducted by Grant Thornton UK [1] on model use, resources, performance, challenges and future priorities.

Figure 1: Number of Simulations

The thorny issue of simulations and model convergence

Around three-quarters of respondents to Grant Thornton's survey run their models at no more than 100,000 simulations. Anecdotally, we expect many of those selecting "50,000 to 100,000" to be at or near the bottom of that range.

Whilst perhaps not a great surprise, consider that this figure is the maximum used for production runs, and that over half of respondents acknowledged different volumes were applied for intermediate runs (such as sensitivity testing to support validation or decision support).

Considering that, even for a mid-tier worldwide portfolio, the natural catastrophe model component could be drawing from an event catalogue of more than half a million unique events, it is clear that even at 100,000 simulations we are missing out on a wealth of information about potential volatility.
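A back-of-envelope sketch makes the scale of the gap concrete. The Python snippet below assumes a 500,000-event catalogue sampled uniformly, with a mean of three events per simulated year; both figures are illustrative assumptions, not any vendor's catastrophe model.

```python
import numpy as np

# Back-of-envelope sketch: how much of a 500,000-event catalogue is never
# touched at a given simulation count? Assumes events are drawn uniformly
# with an average of three per simulated year (illustrative figures only).
catalogue_size = 500_000
events_per_year = 3  # assumed mean annual event count

for n_sims in (50_000, 100_000, 500_000, 1_000_000):
    draws = n_sims * events_per_year
    # P(a given event never appears) = (1 - 1/C)^draws ~ exp(-draws / C)
    frac_unseen = np.exp(-draws / catalogue_size)
    print(f"{n_sims:>9,} sims: ~{frac_unseen:.0%} of catalogue never sampled")
```

On these assumptions, roughly half the catalogue never appears at 100,000 simulations. Real catalogues weight events by annual rate, so coverage of the rarest, most extreme events is worse still.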

When we look at convergence, testing on our own model, purely on gross and net underwriting risk, demonstrates that at 50,000 simulations we are suffering simulation error of roughly 3% at the 99.5% value-at-risk measure used in Solvency II. This reduces to roughly 1.5% at 100,000 simulations, 0.5% at 500,000 and 0.1% at 1 million.
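The pattern is straightforward to reproduce. The sketch below repeatedly estimates the 99.5% VaR of a heavy-tailed stand-in distribution (a lognormal, chosen for illustration; it is not our production model) and reports the spread of those estimates at each simulation count. The exact percentages depend on the tail shape and sampling scheme, but the decay with simulation count follows the same broad path.

```python
import numpy as np

rng = np.random.default_rng(42)

# Convergence sketch: estimate the 99.5% VaR of a heavy-tailed stand-in
# distribution many times over, and report the spread (coefficient of
# variation) of those estimates as a proxy for simulation error.
for n_sims in (50_000, 100_000, 500_000, 1_000_000):
    estimates = [
        np.quantile(rng.lognormal(mean=0.0, sigma=1.5, size=n_sims), 0.995)
        for _ in range(100)
    ]
    cv = np.std(estimates) / np.mean(estimates)
    print(f"{n_sims:>9,} sims: simulation error ~ {cv:.1%}")
```

In a plain Monte Carlo setting the error of an empirical quantile falls only with the square root of the simulation count, so material gains require order-of-magnitude increases; variance-reduction techniques can improve on this, but the broad point stands.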

Dependencies are an area mired in issues such as coherence and positive semi-definite adjustment of correlation matrices. These limitations are exacerbated by the inability to run a sufficient number of simulations. As an example, sensitivity tests assessing a +/- 5% change in the initial levels of dependencies are easily lost within the noise when running 50,000 simulations; even at 100,000 simulations, the differentiation between signal and noise is weak at best.
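The sketch below illustrates the point on a toy portfolio: two lognormal risks joined by a Gaussian copula, with the correlation shifted by five points as one reading of a 5% dependency sensitivity (all parameters are assumptions for illustration). Each run uses fresh random numbers, mirroring the independent re-runs typical of a sensitivity test.

```python
import numpy as np

rng = np.random.default_rng(7)

def agg_var(rho, n_sims):
    """99.5% VaR of two lognormal risks under a Gaussian copula (toy example)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)
    return np.quantile(np.exp(z).sum(axis=1), 0.995)

# Shift the correlation by five points and see whether the resulting VaR
# movement is distinguishable from simulation noise at each count.
for n_sims in (50_000, 100_000, 1_000_000):
    base, shifted = agg_var(0.30, n_sims), agg_var(0.35, n_sims)
    print(f"{n_sims:>9,} sims: VaR moves {shifted / base - 1:+.1%}")
```

At 50,000 simulations the reported movement can even carry the wrong sign from run to run; only at higher counts does a stable, correctly signed effect emerge. Common random numbers would sharpen the comparison, but they do not remove the underlying convergence problem.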

We must ask ourselves how much credibility we should really attach to the model when we test the sensitivity of an expert judgement at a lower simulation count.

Even a simple model relies on hundreds of expert judgements and parameterisation decisions, all of which should be sensitivity tested in order to help us focus on validation of the key drivers. How do we balance the opposing forces of acceptable simulation count, the range of judgements to be tested and the omnipresent limiting factor of time?

Which one do we give way on? What are the implications for the validity of our results? When management asks a question, would we be able to answer it before they forget the question?
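To make the trade-off concrete, consider a rough budget calculation (every figure below is an assumption for illustration only):

```python
# Rough wall-clock budget for a sensitivity-testing programme
# (all figures are assumptions for illustration only).
judgements_to_test = 200      # expert judgements needing a sensitivity run
runs_per_judgement = 2        # e.g. an up shift and a down shift
minutes_per_10k_sims = 5      # assumed model runtime per 10,000 simulations

for n_sims in (50_000, 100_000, 500_000, 1_000_000):
    total_runs = judgements_to_test * runs_per_judgement
    hours = total_runs * (n_sims / 10_000) * minutes_per_10k_sims / 60
    print(f"{n_sims:>9,} sims: ~{hours:,.0f} hours of compute")
```

At these assumed runtimes, a full sensitivity programme at 1 million simulations is measured in thousands of compute hours on a serial platform, which is precisely the gap that higher-performance, parallel platforms are intended to close.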

For actuaries, adopting new technologies to improve capital modelling may well become a matter of professional responsibility. For them and the rest of the business, there is a stark choice: embrace the possibilities and build a case for change or struggle on with legacy models until the step-change in capabilities offered by new technologies becomes the default expectation of regulators and rating agencies.

Footnotes

1. Grant Thornton UK, Capital Modelling - Where are we now?

Sunnie Luthra is the lead capital actuary at International General Insurance. Email: [email protected]
