2 August 2022

Modernising actuarial data processing - part two

Selecting the appropriate strategy and tactics to improve actuarial processes is vital if insurers are going to realise gains from their investment. Experts discuss their experiences in this InsuranceERM / Reitigh roundtable

Participants

Abi Holloway, Head of Actuarial Services, Phoenix
Brian Walsh, CEO, Reitigh
Clara Hughes, Head of Automation and Insight, Pension Insurance Corporation
Darragh Pelly, Chief Strategy Officer, Reitigh
Kim Giddings, Business Information Manager, National Friendly
Lorraine Paterson, Senior Manager Actuarial Assumptions & Methodology, Lloyds Banking Group
William Diffey, Chief Actuarial Officer – Europe, Assurant

Chaired by: Christopher Cundy, Editor, InsuranceERM

• This is the second of a two-part roundtable. To read part one, click here

Christopher Cundy: Is it always immediately obvious where the bottlenecks and risks in data processing lie?

Abi Holloway: We're very much driven by our working-day timetable for the valuation, and we work backwards from there to identify the bottlenecks.

Kim Giddings: For the last few years, at the end of the valuation run, we've documented every single change we made to the data from the source through to the model and on the way back.

That's become a really useful exercise. Although it's a pain at the time to have to log everything you're doing, it allows you to step back and see where the system should be changing bits of data.

"For the last few years, at the end of the valuation run, we've documented every single change we made"

It's the basis for the project that we're looking at now, i.e. understanding where we're making changes and therefore where we can make the process more efficient. Do we push processes up into the code that's producing the data, or do we push them down to the models?
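Purely as an illustration of the kind of change log described here, the sketch below records each manual adjustment with the field affected, the old and new values, and the reason, so recurring fixes can later be pushed upstream into the source system or downstream into the models. The file layout, field names and helper function are assumptions for the example, not National Friendly's actual process.

```python
import csv
from datetime import date

# Minimal, illustrative valuation data change log.
# Each manual adjustment is recorded so recurring fixes can later be pushed
# upstream (into the data-producing code) or downstream (into the models).
def log_change(log_path, policy_id, field, old_value, new_value, reason):
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([date.today().isoformat(), policy_id, field,
                         old_value, new_value, reason])

# Example: correcting a blank premium frequency before the model run
log_change("valuation_changes.csv", "POL123", "premium_frequency",
           "", "monthly", "Blank in extract; confirmed with admin system")
```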

Darragh Pelly: Does the whole team see that and buy into that vision, or do people focus on what's happening in their particular team?

Abi Holloway: Pre-Covid, we had a meeting room and the whole timetable was on the wall. Post-Covid, with hybrid working, we have more regular team meetings so people probably see more of what's happening in their teams now.

Kim Giddings: We've gone the other way, as of last year. I'm the business information manager, so all the data sits under me. But I now report into the actuarial team, which is much better. No-one really knows where to position data – is it under IT? But sitting under actuarial means we're working a lot closer together.

Christopher Cundy: What sort of issues does having different data sources pose to you – and how do you solve them?

Clara Hughes: It is helpful to prioritise the datasets you're looking at. It is important to get data from core systems into datastores and introduce APIs so that systems can talk to each other.

For example, part of our IFRS 17 process requires data from four different systems as well as spreadsheets. The process pulls data from databases or database-like structures and feeds it into the new system automatically.

"The more data sources you have, the greater the need for reconciliations between the different sources"

In addition, building out more convenient ways for end-users to access data is key. For example, we've built an application that pulls data from databases into spreadsheets for ad hoc analysis. We'd like to avoid a proliferation of spreadsheet models, but actuaries can still see the data in Excel if they prefer.
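As a hedged illustration of this kind of end-user access, the sketch below pulls a governed query result from a database into an Excel file that actuaries can open directly. The connection string, table and column names are assumptions for the example, not a description of PIC's actual application.

```python
import pandas as pd
from sqlalchemy import create_engine

# Illustrative only: pull a governed dataset from a central database into
# Excel for ad hoc analysis, rather than users re-keying or copying data.
engine = create_engine("postgresql://user:password@host/actuarial")  # assumed connection

query = """
    SELECT policy_id, product, sum_assured, valuation_date
    FROM policy_data
    WHERE valuation_date = '2022-06-30'
"""
df = pd.read_sql(query, engine)

# Write to a spreadsheet so actuaries who prefer Excel can still see the data
df.to_excel("policy_data_2022Q2.xlsx", index=False)
```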

Lorraine Paterson: The more data sources you have, the greater the need for reconciliations between the different sources. Creating a single-source data lake that is directly connected to policy administration systems, with good data controls, can cut down on the reconciliation work.
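A reconciliation between two sources can be as simple as comparing record counts and control totals by product and flagging any breaks. The sketch below is illustrative only; the file names, columns and tolerance are assumptions.

```python
import pandas as pd

# Illustrative reconciliation: compare policy counts and sums assured
# between the policy admin extract and the data lake, per product.
admin = pd.read_csv("admin_extract.csv")      # assumed source files
lake = pd.read_csv("data_lake_extract.csv")

def control_totals(df):
    return df.groupby("product").agg(policies=("policy_id", "count"),
                                     sum_assured=("sum_assured", "sum"))

recon = control_totals(admin).join(control_totals(lake),
                                   lsuffix="_admin", rsuffix="_lake")
recon["count_diff"] = recon["policies_admin"] - recon["policies_lake"]
recon["amount_diff"] = recon["sum_assured_admin"] - recon["sum_assured_lake"]

# Flag any product where the two sources disagree beyond a small tolerance
breaks = recon[(recon["count_diff"] != 0) | (recon["amount_diff"].abs() > 1.0)]
print(breaks)
```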

End-user access

Brian Walsh: How are you getting people to change from their normal way of doing things, and not just asking someone else to do it?

Lorraine Paterson: Spreadsheets have been the norm for a number of years. However, greater insight can be obtained from big data, which requires a move away from that norm. There are many easy-to-use software tools that can handle much larger volumes of data.

Darragh Pelly: We have seen some data warehouse and data lake projects end in disaster because people were not able to access the data. The data warehouse was put in to facilitate reporting, but a standard report is not enough: people want to look into the data and do ad-hoc analysis, and they need to trust the data. If they don't have access to it, how can they trust it?

"It is important to give the end-user access to the data in a way that doesn't proliferate spreadsheet model"

There is a secondary win to allowing access to data: people will be able to see if there are issues with the core data that's going into your financial reporting.

Clara Hughes: It is important to give the end-user access to the data in a way that doesn't proliferate spreadsheet models and also guards against the risks of people writing Python on their desktop. Otherwise, we could end up with a very similar problem to Excel, but an even more complicated one to unravel because of the power of the tools people are able to use.

Darragh Pelly: It might be interesting to ask people when they pull the information, 'What is the question you started with?' Over time, you might see whether people are being asked the same questions repeatedly, and whether there is a set of questions that could be built into a standard report or data extract.

RPA vs STP

Christopher Cundy: What are robotic process automation (RPA) and straight-through processing (STP) and why are they useful for actuarial data processing?

Darragh Pelly: They're both about automating and streamlining the processing of data.

As we have heard, there are multiple systems and sources, and they need to pass data between each other. In legacy processes, data comes out, there is a lot of manual handling, and it goes back in. That's slow, there's operational risk and from a people perspective it's not much fun to do every quarter.

"Before going down either route we would always say to clients: look at the process first"

With RPA, the process is the same, it's just a robot doing it. That will give you a partial speed improvement and should reduce the operational risk.

STP, on the other hand, involves completely taking away the manual handling in the middle by getting the systems to communicate electronically, with automated validation in between. Typically it's used when there are large volumes of data involved. This approach gives you far greater improvements in speed and scale, and a bigger reduction in operational risk. It also brings better governance and control to your processes.

Before going down either route we would always say to clients: look at the process first – if the process is inherently broken (not fit for purpose, with few controls) then simply getting a robot to run it won't deliver your target end-state. Often we see that RPA is a useful initial step to free up time to look at STP – i.e. an end-state with an automated and robust set of processes.
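As an illustration of the "automated validation in between" that straight-through processing relies on, the sketch below checks a batch of records before handing it to the receiving system, rejecting the batch rather than letting someone patch it by hand in the middle. The rules and field names are assumptions for the example.

```python
import pandas as pd

# Illustrative STP-style validation gate: data only flows through to the
# receiving system if it passes automated checks; failures are reported
# back to the source rather than being fixed manually in the middle.
REQUIRED_COLUMNS = ["policy_id", "product", "premium", "valuation_date"]

def validate(batch: pd.DataFrame) -> list[str]:
    errors = []
    missing = [c for c in REQUIRED_COLUMNS if c not in batch.columns]
    if missing:
        errors.append(f"Missing columns: {missing}")
        return errors
    if batch["policy_id"].duplicated().any():
        errors.append("Duplicate policy IDs found")
    if (batch["premium"] < 0).any():
        errors.append("Negative premiums found")
    return errors

batch = pd.read_csv("daily_feed.csv")  # assumed feed
errors = validate(batch)
if errors:
    raise ValueError("Batch rejected: " + "; ".join(errors))
# else: pass the validated batch straight through to the receiving system
```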

Christopher Cundy: Has anyone used RPA? What's your experience been?

Abi Holloway: We brought in RPA for our manual adjustments. We had fewer than 200 spreadsheets that all looked identical but were all doing different calculations. We thought they were a good candidate for robotics as at some points people were just going in, updating links and pressing F9. The cost of moving that process into the model was disproportionate to the benefit we would have got.

The team were happy because we took away the least challenging part of their job, and they now spend more time understanding what the numbers mean rather than just getting through the process.

But one of the weird problems was I had to stop the team from going in to check what the robot had done!
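The "update links and press F9" step is the sort of repetitive task that scripting or RPA handles well. Purely as an illustration (assuming Excel with the xlwings library on a Windows desktop; the folder and file names are made up):

```python
import glob
import xlwings as xw

# Illustrative automation of "update links and press F9" across a folder of
# near-identical adjustment spreadsheets (assumes Excel + xlwings on Windows).
app = xw.App(visible=False)
try:
    for path in glob.glob(r"C:\adjustments\*.xlsx"):
        wb = app.books.open(path, update_links=True)  # refresh external links
        app.calculate()                               # equivalent of pressing F9
        wb.save()
        wb.close()
finally:
    app.quit()
```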

Christopher Cundy: Do you think there will be wider applications for RPA with a cleverer robot?

Abi Holloway: The task we set was simple and we had some basic error-checking, but with a machine-learning robot we could ask it to check whether a particular number was reasonable and, if not, raise a warning for a human to look at. That would be the direction I would like this to go because, again, it frees up that person's time.
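A reasonableness check of this kind does not have to start with machine learning: even a simple tolerance against the prior quarter, with a warning routed to a human reviewer, captures the idea. A minimal sketch with made-up figures and thresholds:

```python
import warnings

# Illustrative reasonableness check: flag results that move more than an
# expected tolerance versus the prior quarter for a human to review.
TOLERANCE = 0.10  # assumed 10% movement threshold

def check_reasonable(name, current, prior, tolerance=TOLERANCE):
    movement = abs(current - prior) / abs(prior)
    if movement > tolerance:
        warnings.warn(f"{name} moved {movement:.1%} vs prior quarter - "
                      "please review before sign-off")
        return False
    return True

check_reasonable("Best estimate liability", current=1_240_000, prior=1_050_000)
```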

Christopher Cundy: How do you run your actuarial improvement projects? Do you tend to take a 'big bang' approach?

"There is no point in creating visualisation and analysis tools unless you have good quality data"

Lorraine Paterson: When I came into the role there was a 'big bang' transformation project in progress. It was not running to timeline, so we had to stop and take stock. We saw that within the following six months it was not going to deliver everything in the requirements, so we had to prioritise development.

The first priority was the data. There is no point in creating visualisation and analysis tools unless you have good quality data.

We prioritised the most material products and once we were comfortable with the data quality, we then started to develop the analyses.

A building-block approach allowed us to reinvest the time saved from each efficiency into developing the next most important block. Continuing in this way enabled more sophisticated developments over time, until we reached our end goal.