24 December 2015

The evolution of risk modelling

At the heart of a modern insurer is a risk model that helps drive decision-making. But what are its limitations? And where is the appetite to improve? Participants in part one of this InsuranceERM/Towers Watson roundtable discuss these questions.


Participants

Kevin Borrett, chief risk officer, Unum
Joel Fox, director, Towers Watson
Tim Thornham, financial modelling solutions director, Aviva
Karina Lo Dico, chief actuarial officer, Foresters
John Rowland, global head of life insurance capital modelling, Towers Watson

Chaired by Christopher Cundy, contributing editor, InsuranceERM

Chris Cundy: How has technology changed the way you model risk?

Karina Lo Dico: A few years ago, in my previous role, we kept being asked 'what does our business look like under this risk?', and particularly how twists to the yield curve would affect us. That was nigh on impossible then, but with the flexibility of modelling platforms, the ability to run models much more quickly and the sophistication of ESGs [economic scenario generators], it is now possible.
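For readers who want a concrete picture of the kind of question Lo Dico describes, here is a minimal sketch, in Python with invented numbers, of revaluing a fixed liability cash-flow profile under a yield-curve twist. The cash flows, base curve and twist shape are illustrative assumptions, not any firm's actual model.

```python
import numpy as np

def present_value(cashflows, zero_rates):
    """Discount annual cash flows at the given zero rates (continuous compounding)."""
    t = np.arange(1, len(cashflows) + 1)
    return float(np.sum(cashflows * np.exp(-zero_rates * t)))

# A toy 30-year liability profile on a flat 3% base curve
cashflows = np.full(30, 100.0)
base_curve = np.full(30, 0.03)

# A steepening twist: roughly -50bp at the short end, +50bp at the long end
t = np.arange(1, 31)
twist = 0.005 * (t - 15) / 15
twisted_curve = base_curve + twist

base_pv = present_value(cashflows, base_curve)
twisted_pv = present_value(cashflows, twisted_curve)
print(f"Base PV: {base_pv:,.0f}; after twist: {twisted_pv:,.0f} "
      f"({twisted_pv / base_pv - 1:+.2%})")
```

Running many such shocks, and combinations of them, is exactly what faster platforms and richer ESG output have made routine.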

Chris Cundy: That must help with steering the business, which is the ultimate aim, I guess?

Karina Lo Dico: Yes, you can make much better decisions around the use of shareholder capital and how to protect against downside risk. But there is a long way to go. We need to move away from the idea that you merely have a model that produces results, towards ensuring the information you have is clear, interpretable and communicable.

Kevin Borrett: What has changed most is expectations about what can be delivered through technology. There is a clear expectation by management that we will be able to provide greater granularity on business portfolios to aid decision-making and do so a lot faster than has hitherto been the case.

Tim Thornham: Risk modelling methods and the supporting technology have moved on dramatically in the past five years, as Solvency II has been brought in. Proxy modelling has enabled us to do fast modelling; it barely existed five years ago. We also have more of an 'enterprise enabled' capability. Previously, you would typically request model runs from a specialist team; now we have the ability to distribute to multiple users in multiple sites, putting the technology in the hands of the users.
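Proxy modelling deserves a word of explanation. The idea is to fit a cheap closed-form function to a modest number of heavy model runs, then evaluate that function across many thousands of scenarios. A minimal sketch, assuming a quadratic polynomial proxy, with a stand-in 'heavy_model' function that is a placeholder rather than any real capital model:

```python
import numpy as np

rng = np.random.default_rng(0)

def heavy_model(equity_shock, rate_shock):
    """Stand-in for a slow, full cash-flow model run."""
    return (-5000 * equity_shock + 2000 * rate_shock
            + 800 * equity_shock * rate_shock)

def design(e, r):
    """Quadratic basis: 1, e, r, e*r, e^2, r^2."""
    return np.column_stack([np.ones_like(e), e, r, e * r, e**2, r**2])

# Fit the proxy on a small set of fitting points ...
fit_pts = rng.uniform(-0.4, 0.4, size=(50, 2))
y = np.array([heavy_model(e, r) for e, r in fit_pts])
coefs, *_ = np.linalg.lstsq(design(fit_pts[:, 0], fit_pts[:, 1]), y, rcond=None)

# ... then evaluate the proxy on 100,000 scenarios almost instantly
scen = rng.normal(0, 0.2, size=(100_000, 2))
proxy_value = design(scen[:, 0], scen[:, 1]) @ coefs
print(f"1-in-200 downside (0.5th percentile): {np.percentile(proxy_value, 0.5):,.0f}")
```

The heavy model is run 50 times rather than 100,000, which is the entire point of the technique.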

Joel Fox: In order to ensure risk information is produced in a timely fashion, we have seen a game of trade-offs over the past 10 years, essentially balancing model speed against model accuracy or completeness. This has led to a lot of effort being spent weighing approximate methods against more rigorous approaches; and weighing the breadth of models, and thus the multiple purposes you can use them for, against the depth of models, whereby every nuance of a product's features is modelled. Those compromises occur because technology has lagged behind what we want it to do. In more recent years, a lot of companies have recognised they can do a bit of both, and the trade-offs are starting to fall away.

Better technology

Chris Cundy: What are the other drivers for better technology?

Kevin Borrett: In two words: the marketplace. If you are looking to take advantage of opportunities, you need to be better informed, in a timely fashion, about past business performance and emerging trends. If you cannot be quick enough in your core decision-making, the risk is that you either get marginalised or end up writing substandard business.

Karina Lo Dico: There are two motivations in my area: one is getting to a point where we can price in a very agile manner; the other is making sure the management understands the risks they are running.

Joel Fox: The easy answer would be to say regulation, but we find regulation does not always drive companies to adopt new technology: they tend to do just what they need to do and not necessarily push the boundary. The main driver is cost optimisation, which does tend to push people to look to better technologies.

John Rowland: The technology itself is driving the technological change; it is a bit of a tautology, but we are getting to a point where there is a step-change in what is now possible and that enables people to adopt technology that enables real-time pricing and more agile decision-making.

Tim Thornham: Going back a few years, actuaries would be doing some of the most advanced high-performance computing. This is not necessarily the case any more. Some of the more advanced technology is being developed to better understand customer behaviour. New approaches are being developed to leverage 'big data' and 'machine learning' to perform predictive analytics, for example.
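To make the predictive analytics point concrete, here is an illustrative sketch of the sort of model Thornham alludes to: a logistic regression of policy lapse on a couple of features. The data and features are synthetic inventions, and it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5_000

# Synthetic policyholder data
age = rng.uniform(25, 70, n)
premium = rng.lognormal(6, 0.5, n)

# Invented 'truth': younger policyholders on higher premiums lapse more often
logit = -1.0 - 0.03 * (age - 45) + 0.4 * (np.log(premium) - 6)
lapsed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, np.log(premium)])
model = LogisticRegression().fit(X, lapsed)
p = model.predict_proba([[30.0, np.log(800.0)]])[0, 1]
print(f"Predicted lapse probability for age 30, premium 800: {p:.1%}")
```

Real applications would draw on far richer data, but the workflow (fit on history, then score the in-force book) is the same.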

Chris Cundy: Why have actuaries fallen behind the curve? Is it a lack of investment compared with the front end of the business?

Tim Thornham: There will be some actuaries involved on the customer-facing side, but we have been a bit deflected by Solvency II over the past four or five years. There is a chance now, once we go live in 2016, to catch up.

The driver-assumption gap


Chris Cundy: What can you not do now because of the technology restrictions?

Tim Thornham: I still think the development cycle is problematic. You need the right skill sets to be able to update your software cost-effectively and fast, whether it be a packaged solution or an in-house solution. At Aviva, we have embraced agile development for all projects, including the actuarial development cycle. It is a major step forward over the sort of traditional waterfall and iterative waterfall methods that we have historically used. If you are not using agile, I would thoroughly recommend it.

Karina Lo Dico: The way we are looking at risk modelling is very much, 'this is what the regulations say and this is what we are going to do.' But there is a level of granularity below that, where you have to think about the risks that are driving the business and then how to translate those risks into a language you can feed into the model and then into capital. That bit is currently missing. Larger insurers have risk scenario generators that try to create ideas about what the risk distributions are like, but it hasn't filtered down to smaller firms.

Tim Thornham: I definitely agree. We can and should do more to bridge the way we see the world and the way our risk models are expressed and calibrated. Take any of the geopolitical, demographic and socioeconomic drivers in the world ... there is conflict in the Middle East, China's productivity forecast for the next three years is lower than expected ... these fundamental drivers are reflected in our models through assumptions such as equity, inflation and interest rate levels and their correlations. These are items we can calibrate, as we have historic data – but it is not always straightforward for the end user to equate these to the real world scenarios they see.

Karina Lo Dico: Some firms do scenario planning to look at how geopolitical and other factors play out. But how do you bridge the gap and 'translate' it back into hard numbers?

John Rowland: One area where we have thought about linking models to the real world is mortality risk modelling. This used to be a case of 'get some data and extrapolate' but there was no real link to what was going on in medical research. So we coined this phrase, 'biologically plausible mortality modelling', whereby we build a model with assumptions that are based on understanding the impact of medical research coming through. The vision was being able to explain mortality assumptions in terms of how a cure for a cancer might impact the curve and therefore the capital. That has to be the next evolution of risk models: to link actual events to the analysis, to the conclusions and then link the conclusions back to the actual events again, so that you have a clear link between the world that people see and manage, and the models. It is less of a technology point and more of a methodology point, but I think the technology is enabling us to do so much more in models than we have been able to do before.
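A hedged illustration of the 'biologically plausible' idea: express a medical scenario, say a treatment that averts some share of cancer deaths, as an adjustment to aggregate mortality rates and trace it through to life expectancy. The mortality curve, cause share and treatment effect below are invented for illustration.

```python
import numpy as np

ages = np.arange(60, 91)
qx = 0.005 * np.exp(0.09 * (ages - 60))   # toy Gompertz-style mortality rates

cancer_share = 0.25    # assumed share of deaths attributable to cancer
cure_effect = 0.40     # scenario: a new treatment averts 40% of those deaths
qx_scenario = qx * (1 - cancer_share * cure_effect)

def life_expectancy(q):
    """Curtate life expectancy from age 60, truncated at age 90."""
    survival = np.cumprod(1 - q)
    return float(survival.sum())

print(f"e60 base: {life_expectancy(qx):.2f} years; "
      f"scenario: {life_expectancy(qx_scenario):.2f} years")
```

Once the scenario moves annuity or protection liabilities, the capital impact follows, which is the link from medical research to the balance sheet that Rowland describes.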

Tim Thornham: Every area of endeavour you look at is enabled by technology, allowing faster and improved analysis, and I think our models need to start to bridge into these areas. We are starting to do this, as with the mortality example, and there are examples in big data for predictive analytics. But I do not see much change yet in economic risk modelling, which tends to be driven from basic analysis of historic empirical data, plus expert judgement. I would expect investment banks and hedge funds to be further ahead than insurers in modelling the explanatory drivers of the stock market, for example.

John Rowland: One of the big differences between banks and insurers is that insurers tend to rely on one ESG when modelling pricing for a product, while banks use three or four. Obviously if you change to a different ESG, the answer will change, so we have a model risk component that is not currently in a lot of insurance models. I know a few companies that do explicitly hold capital against model risk, but it is really a minority.
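A simple way to see the model-risk point: value the same guarantee under several ESG calibrations and look at the spread of answers. In the sketch below, three lognormal calibrations stand in for three different ESGs; the product, volatilities and scenario counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def guarantee_cost(vol, n=200_000, s0=100.0, strike=100.0, r=0.02, t=10):
    """Monte Carlo cost of a 10-year return-of-premium guarantee (a put payoff)."""
    z = rng.standard_normal(n)
    s_t = s0 * np.exp((r - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    return float(np.exp(-r * t) * np.maximum(strike - s_t, 0).mean())

# Three calibrations standing in for three different ESGs
costs = {vol: guarantee_cost(vol) for vol in (0.15, 0.20, 0.25)}
for vol, cost in costs.items():
    print(f"vol {vol:.0%}: guarantee cost {cost:.2f}")
print(f"Spread across 'ESGs': {max(costs.values()) - min(costs.values()):.2f}")
```

The spread is one crude measure of the model risk against which, as Rowland notes, only a minority of insurers explicitly hold capital.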

Karina Lo Dico: Insurance companies have been dragged up to a level where we understand all our risks, but then the next level of risk is when a big financial market event happens and we are all in the same boat again. One of the next steps has to be developing a separate suite of models, which look at risks in different ways. That is what the ORSA is trying to get to as well.

John Rowland: Very few companies have taken that step because they have been focused on 'getting over the line' on regulations. Fundamentally, you are trying to do a nested stochastic calculation. We have not quite deployed the technology that would enable us to do that and so we are still making a lot of approximations.
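For context, a nested stochastic calculation wraps an inner market-consistent valuation inside each outer real-world scenario, which is what makes it so expensive. The toy sketch below shows the structure; the dynamics and scenario counts are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
N_OUTER, N_INNER = 1_000, 1_000   # real calculations need far more of each

def inner_valuation(state, n=N_INNER):
    """Toy market-consistent value of a guarantee, given the year-1 state."""
    z = rng.standard_normal(n)
    s_t = state * np.exp(-0.02 + 0.2 * z)   # placeholder risk-neutral projection
    return np.maximum(100.0 - s_t, 0).mean()

# Outer loop: real-world scenarios for the first year
outer_states = 100.0 * np.exp(rng.normal(0.04, 0.15, N_OUTER))
values = np.array([inner_valuation(s) for s in outer_states])

print(f"Mean year-1 liability value: {values.mean():.2f}; "
      f"99.5th percentile: {np.percentile(values, 99.5):.2f}")
```

With a thousand inner runs for each of a thousand outer scenarios, even this toy needs a million projections; realistic models multiply that many times over, hence the approximations Rowland mentions.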

Tim Thornham: The industry has not mastered these models yet: changing them, governing them, linking them together and automating them; and, wherever the data and assumptions sit, linking them through to the engine and getting results out. There is still some way to go before our systems can do that efficiently and our focus can shift from production to analysis.

Part two of this roundtable can be read here.