15 December 2023

ChatGPT, Schrödinger's Cat, and emerging risk practices

Ari Moskowitz, Everest Group's chief risk officer, considers whether it's possible for risks to have emerged and still be considered emerging at the same time.

I'm cognisant that this title runs the risk of sounding like clickbait, but ChatGPT has garnered such widespread public attention that it warrants a discussion. However, I don't intend to expound on the risks it poses to a company, the options to address those concerns, or the opportunities it can bring to the insurance industry overall. Rather, my focus is to share a perspective, based on discussions with industry practitioners, as a reflection on the efficacy of existing emerging risk practices.

Let's start with a simple statement: Artificial Intelligence (AI) and Machine Learning are not new. These technologies have been researched and under development for over 50 years. They have taken various forms and improved drastically over the years as computing power has expanded. In fact, many insurance companies had already invested in platforms and applications which utilise these technologies to enhance customer service, claims processing, policy and contract reviews, underwriting, risk selection...the list goes on and on. So, what's the big deal with ChatGPT?

Initially, I thought that one of the key differences with ChatGPT was the level of widespread publicity associated with it, but I've been reminded how AI came to public attention decades ago. IBM's Deep Blue chess-playing computer hit the press in 1996 when it competed in a six-game match against chess grandmaster Garry Kasparov. Even though it lost to the reigning world champion, "there's no such thing as bad publicity." It certainly helped when Deep Blue won the rematch a year later.

"IBM's Watson achieved notoriety in 2011 when it emerged victorious in a Jeopardy! match against the well-known champion Ken Jennings"

While Deep Blue was just one form of AI focused on chess algorithms, IBM's Watson took AI to a new level with the ability to process natural language and unstructured data and provide human-like responses. Watson rose to fame in 2011 when it competed in, and won, a Jeopardy! match against the well-known champion Ken Jennings. That was one of the biggest surges in public attention for AI until ChatGPT's emergence. But similar to Watson, GPT was under development for a while. The first iteration of GPT from OpenAI appeared in 2018, two and a half years after OpenAI was formed as a company.

All this leads to the critical question for the insurance industry: What is so different about ChatGPT that it has garnered so much attention from risk practitioners? And even more importantly, did your emerging risk framework sufficiently prepare you for this?

Emerging Risk Lesson #1: Ensure that you keep an eye on the development of existing risks, not simply the new ones.

I've seen frameworks which mandate that risks are removed from the emerging risk radar after a few years. The premise is a binary outcome: either you've come up with a plan for how to manage that risk, or you've determined that the risk is acceptable as is. Either way, that risk no longer needs the attention of emerging risk committees as it has shifted to becoming a well-managed enterprise risk. Simply stated, either a risk is emerging or it has already emerged. This is sensible, and might be a safe approach in most instances. However, ChatGPT's evolution within the AI space has shown us that some risks might be pervasive and need constant focus or refocus.

Perhaps outcomes aren't as binary as emerging vs emerged; some risks might be both emerging and emerged at the same time. Another way to frame this is to recognise that some risks emerge while others have a propensity to reemerge as something different, even when they're already in plain sight.

Emerging Risk Lesson #2: Consider rate of evolution as a vector in your emerging risk monitors.

The speed of evolution is an important vector to consider when monitoring emerging risks. Many practitioners already consider velocity when reviewing risks, with a focus on how soon the risk is likely to emerge. Given the aforementioned concept of a risk being both emerging and emerged at the same time, it may be prudent to further include a vector for rate of evolution or reemergence. Ask yourself two questions: How likely is this emerged risk to reemerge in an evolved form? And how likely is it that the rate of evolution itself will speed up?

For illustrative purposes, let's contrast climate change with AI. The climate is changing and the risk it poses evolves, but it's hard to describe climate change as a rapidly evolving risk when compared with AI advancements. Certainly, our response to climate change must be swift given the severity of the impact and the amount of time it would take for our actions to properly mitigate it. But climate change, climate research, and regulatory responses seem to take a while when compared with the rate at which technological advancements can occur.
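To make that vector concrete, here is a minimal sketch in Python of how an emerging risk monitor entry could carry a rate-of-evolution measure alongside the more familiar velocity, and how the two questions above might feed a simple watch score. The field names, 1-5 scales, weights, and scores are entirely illustrative assumptions, not a description of any existing framework.

```python
from dataclasses import dataclass

@dataclass
class MonitoredRisk:
    """One entry on an emerging risk radar (illustrative fields, 1-5 scales)."""
    name: str
    velocity: int                # how soon the risk is likely to emerge
    reemergence_likelihood: int  # how likely an emerged risk reemerges in an evolved form
    evolution_rate: int          # how quickly the risk itself can change shape

def watch_score(risk: MonitoredRisk) -> float:
    """Toy prioritisation: fast-evolving risks stay under closer watch
    even after they have 'emerged'. Weights are purely illustrative."""
    return 0.4 * risk.velocity + 0.3 * risk.reemergence_likelihood + 0.3 * risk.evolution_rate

ai = MonitoredRisk("Generative AI", velocity=5, reemergence_likelihood=5, evolution_rate=5)
climate = MonitoredRisk("Climate change", velocity=4, reemergence_likelihood=3, evolution_rate=2)

for risk in (ai, climate):
    print(f"{risk.name}: watch score {watch_score(risk):.1f}")
```

The point of the sketch is only that the rate of evolution becomes an explicit, reviewable input rather than something implied by a binary "emerging vs emerged" label.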

Emerging Risk Lesson #3: Consider rate of adoption as a vector in your emerging risk monitors.

Consider also how widespread public use and acceptance may become. The public's introduction to Deep Blue and Watson wasn't through hands-on use of the technology, whereas ChatGPT became part of the public domain very quickly. ChatGPT's mass adoption may highlight the value it offers to individuals and thereby increase the pressure for industry adoption of AI, but it also makes it harder for risk managers to stay up to speed.

We can again compare this to climate change for illustrative purposes. Publicity around climate change exists, and certainly keeps evolving too, but the politicised nature of people's views on climate differentiates it from AI. There has not been uniform adoption en masse of climate risk management. More importantly, and unfortunately so, most people don't view climate change as a phone app that can make their life easier.

ChatGPT appeared in the public eye and gained broad acceptance in one fell swoop. The speed of public acceptance is astronomically different from that of climate change. Both of these risks have emerged and are emerging, both demand our attention and action, and both remain on emerging risk radars; however, the nature of our responses differs given those acceptance levels.

A Practical Suggestion: Use simple Venn diagrams to pull this all together.

I'm sure some may find the concept of a risk being both emerging and emerged too theoretical. Risk assessments can start to look like a "Schrödinger's cat" exercise of risk management, where something exists in two states at once. But ultimately, the point of adopting such an approach is to ensure one's framework is not limited to binary views of risk classifications.

Binary views sometimes create false comfort in mitigation strategies for already well-known risks when those strategies are not evolving alongside the risk itself. And what better way to move away from binary thinking than a good Venn diagram? A practical approach to this non-binary view of emerging risks would be to enhance enterprise risk radars (i.e. emerged risks) to identify risks which have the propensity to evolve rapidly versus risks which change slowly; a Venn diagram could then be drawn between emerged and emerging risks, with AI living in the intersection. AI could have been addressed as an enterprise risk, but it still needs to be addressed as a future risk given its rate of evolution and adoption.
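As a rough illustration of that Venn view (again a sketch only, with made-up field names and thresholds), a register entry could carry both an "emerged" flag and the evolution and adoption vectors from the earlier lessons, and a simple rule could place each risk in the emerged set, the emerging set, or the intersection where AI would sit:

```python
from dataclasses import dataclass

@dataclass
class RegisteredRisk:
    """Illustrative register entry combining enterprise and emerging views (1-5 scales)."""
    name: str
    emerged: bool        # already managed as an enterprise risk?
    evolution_rate: int  # how rapidly the risk changes shape
    adoption_rate: int   # how rapidly the underlying technology or driver is being adopted

def venn_bucket(risk: RegisteredRisk, threshold: int = 4) -> str:
    """Place a risk in the emerged set, the emerging set, or their intersection."""
    still_emerging = max(risk.evolution_rate, risk.adoption_rate) >= threshold
    if risk.emerged and still_emerging:
        return "emerged AND emerging (intersection)"
    if risk.emerged:
        return "emerged only (enterprise risk radar)"
    return "emerging only (emerging risk radar)"

ai = RegisteredRisk("Artificial intelligence", emerged=True, evolution_rate=5, adoption_rate=5)
climate = RegisteredRisk("Climate change", emerged=True, evolution_rate=2, adoption_rate=2)

for risk in (ai, climate):
    print(f"{risk.name}: {venn_bucket(risk)}")
```

Depending on the thresholds chosen, climate change could just as easily land in the intersection; the value is simply that the register makes that judgement explicit rather than collapsing it into a binary label.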

Disclaimer: This article was written by a human, not ChatGPT. During research for this article, ChatGPT was utilised but such usage did not occur on company hardware or network, did not use proprietary company information, and all outputs were independently verified through human research. But then again...would you even be able to tell?