Jonathan Bowden, Senior Model Developer at hyperexponential, shares some key best practices for insurers working with AI.
At hyperexponential, we’re excited about the potential that AI has for insurers—in all its guises—but it is important that the implementation is carried out responsibly. AI is not a plug-and-play solution; it requires discipline, collaboration, and a clear purpose to deliver on its promise.
Like many others, I've seen AI emerge from its roots in predicting survival on the Titanic to the oversaturation of LLMs in nearly everything we do online. As an actuarial model developer of some 15 years, I've heard the phrase "we've been told to embrace AI" many times, and I've seen how common AI missteps can create issues for developers, their teams, and the organisation as a whole.
AI implementations are still experiments, and should be designed and used as such. At this stage, insurers should be focusing on trying out high-impact use cases and getting excited for the future.
With all this in mind, what best practices should you be following as you incorporate AI into your models, projects and workflows? Let’s explore where the market stands today, how to approach AI adoption, and what can go wrong if we don’t get it right.
Where is the market today?
In insurance, AI adoption is accelerating rapidly, but unevenly. Some companies have embraced AI robustly, embedding it into their development processes and investing generously in resources and training. These organizations are fostering workforces where actuaries and underwriters feel comfortable with AI technologies.
Others, however, are still on the fence, grappling with questions like, “Where does AI fit into our business strategy?” or “Do we have the right people and tools to make this work?”
And then there are those rushing ahead without a clear plan, treating AI like a competitive checkbox rather than a long-term enabler. These hurried efforts often lead to poorly integrated systems and frustrated teams. This has been less of a problem within insurance—being a historically more cautious industry does have its advantages—but there are some striking examples from other sectors. Just look at Air Canada, who had to pay out for erroneous information given out by their chatbot, or Samsung, who had workers leak confidential data and proprietary source code through their use of ChatGPT.
There’s a clear competitive advantage to be unlocked by moving at pace to leverage AI, but the companies that succeed aren’t necessarily going to be the ones that move fastest.
The winners are going to be the insurers that move strategically, building a robust foundation for AI that prioritizes both technical excellence and real-world impact.
What core principles should guide insurers adopting AI?
Developer skills come first
When you are building pricing models with a modern coding language, AI can be an incredible accelerator. It can write code based on natural-language prompts, act as a first-pass code review, and generally be a co-pilot that takes care of the grunt work. But it's not a world-class developer in and of itself, and using AI responsibly in this way relies on your model developers having a strong grasp of the fundamentals.
There is a limit to how good LLM code generation is right now. I once spent a good deal of time going round in circles, struggling to get an LLM to generate the right code for a particular problem. Ultimately a very kind developer on Stack Overflow was able to answer my question in about 30 minutes. LLMs are getting better, and they will learn from my struggles here, but there will always be a need to consult with experienced peers.
In my example, I could see that the output was incorrect because I had a good set of test data and I knew what the results should look like. This underscores two important points: the value of robust testing (a topic for a future blog series!) and the fact that LLMs can often be wrong, especially with difficult or nuanced code solutions.
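To make that concrete, here's a minimal, hypothetical sketch of the kind of check I mean: a small pricing helper (whether hand-written or LLM-generated) verified against inputs where you already know the answers. The function and the figures are illustrative, not taken from a real model.

```python
# Hypothetical example: verify a (possibly LLM-generated) helper against known results.
def apply_layer(loss: float, attachment: float, limit: float) -> float:
    """Loss recovered by an excess-of-loss layer: the part above the attachment, capped at the limit."""
    return min(max(loss - attachment, 0.0), limit)

# Test data where we already know what the answers should be.
known_cases = [
    # (loss, attachment, limit, expected recovery)
    (500_000, 1_000_000, 4_000_000, 0.0),          # loss below the attachment
    (2_500_000, 1_000_000, 4_000_000, 1_500_000),  # loss partly in the layer
    (9_000_000, 1_000_000, 4_000_000, 4_000_000),  # loss exhausts the layer
]

for loss, attachment, limit, expected in known_cases:
    actual = apply_layer(loss, attachment, limit)
    assert actual == expected, f"apply_layer({loss}) gave {actual}, expected {expected}"

print("All known cases pass.")
```

It's a toy example, but the principle scales: if you can state what the answer should be before the code runs, you'll spot an LLM's confident mistakes quickly.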
And what happens when there isn't test data to verify the output? What if you're doing something new or different from the most common solution? And what if both of those are true at the same time?
There can be a temptation to hide your use of LLM code generation, because we want our seniors to think we're efficient and smart. This is where trouble can arise: when something goes wrong with "your" code, you could end up in over your head. Having an experienced reviewer on the team helps here, and transparency is key.
Data is the foundation for AI
AI is only as good as the data it learns from. Building a solid foundation is crucial, and it starts with clean, well-structured, and accessible data. Think of your data ecosystem like a building block set: with consistent Lego pieces, it's easy to build something meaningful; with blocks that don't quite fit together, it's difficult to create anything complex.
For many organizations, the journey toward leveraging AI starts with taking a hard look at their existing tools and processes. If Excel remains the backbone of your pricing models, it might be time to reconsider whether AI use cases are the right next step. While Excel has its place, it isn’t built to handle the demands of modern, data-driven pricing strategies or AI implementations. Transitioning to modern tools and platforms is essential for building the infrastructure you need.
Tools like hx Renew enable you to start building well-structured data for each line of business, drawing from a rich array of internal and external data sources. This structured approach not only supports immediate operational benefits, but also creates a foundation for future machine learning applications by accumulating clean, structured data—the stuff of dreams for any data scientist.
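To illustrate what "well-structured" can mean in practice, here's a rough sketch of a typed submission record in Python. The field names are purely illustrative and are not the hx Renew data model.

```python
# Hypothetical sketch of a typed submission record; field names are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Submission:
    submission_id: str
    line_of_business: str       # e.g. "property", "marine"
    inception_date: date
    currency: str               # ISO 4217 code, e.g. "GBP"
    sum_insured: float
    gross_premium: float
    loss_history: list[float]   # annual incurred losses, most recent last

# Consistent, typed records like this are easy to validate, aggregate,
# and later feed into machine learning pipelines.
example = Submission(
    submission_id="SUB-0001",
    line_of_business="property",
    inception_date=date(2025, 1, 1),
    currency="GBP",
    sum_insured=25_000_000.0,
    gross_premium=180_000.0,
    loss_history=[0.0, 42_000.0, 0.0],
)
```

Contrast that with free-form spreadsheet cells, where every risk is captured slightly differently and nothing can be relied upon downstream.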
Build AI into your business plan
To deliver real value, AI initiatives must be tied to long-term, business-relevant objectives. Start by defining clear use cases that align with your organization's priorities, and focus on these rather than trying to tackle everything at once. If this proves difficult, it's a sign that your plan is born more from hype than from genuine business need.
Engage stakeholders at all levels early and often, making them part of the journey. When people across your organization understand the "why" behind AI, adoption becomes easier, and the results are amplified. Transparency and collaboration build trust, which is crucial for overcoming skepticism or resistance that often accompanies technological change.
AI best practices at a glance
Supervise and review work
Don’t leave unskilled developers unsupervised with LLMs. Oversight and thorough code reviews are essential to ensure quality and avoid errors. Pay particular attention to:
Overly complex code. Good code is understandable and readable (see the sketch after this list)!
Missed code context. LLMs can ignore or misunderstand key parts of your codebase, leading to bugs.
Poorly optimised code. While slow models often come to light through underwriter feedback, some issues may just quietly undermine your model, and your efficiency, over time.
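To illustrate the first point, here's a hypothetical comparison between a dense one-liner of the kind an LLM might happily produce and a version a reviewer can actually follow. The calculation itself is made up.

```python
# Illustrative only: the same (made-up) premium adjustment written two ways.

# Hard to review: everything crammed into one expression, no names, no docstring.
def adj(p, c, b):
    return p * (1 + sum((x - b) / b for x in c) / len(c)) if c else p

# Easier to review: named steps, a docstring, and an explicit edge case.
def adjusted_premium(base_premium: float, claims: list[float], benchmark: float) -> float:
    """Scale the base premium by how far average claims sit above or below a benchmark."""
    if not claims:
        return base_premium
    average_relative_deviation = sum(
        (claim - benchmark) / benchmark for claim in claims
    ) / len(claims)
    return base_premium * (1 + average_relative_deviation)
```

Both functions return the same numbers; only one of them will survive a code review, an audit, or a handover to the next developer.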
Test rigorously
Testing is always a core part of development, and workflows that include AI are no exception. Test broadly to make sure the outputs are within your range of expectations, and include weird and wonderful datasets to catch the edge cases where you'd expect any unfinished model to fall down.
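As a hypothetical sketch of what "weird and wonderful" might look like in a test suite, here's a parametrised pytest example that throws unusual inputs at the same illustrative layer helper from earlier; all names and figures are made up.

```python
# Hypothetical edge-case tests with pytest; names and figures are illustrative only.
import math
import pytest

def apply_layer(loss: float, attachment: float, limit: float) -> float:
    """Loss recovered by an excess-of-loss layer (same illustrative helper as above)."""
    return min(max(loss - attachment, 0.0), limit)

@pytest.mark.parametrize(
    "loss, attachment, limit, expected",
    [
        (0.0, 1_000_000, 4_000_000, 0.0),             # no loss at all
        (-100.0, 1_000_000, 4_000_000, 0.0),          # negative input should recover nothing
        (1_000_000, 1_000_000, 4_000_000, 0.0),       # loss exactly at the attachment
        (5_000_000, 1_000_000, 4_000_000, 4_000_000), # loss exactly exhausting the layer
        (1e12, 1_000_000, 4_000_000, 4_000_000),      # absurdly large loss still capped
    ],
)
def test_layer_edge_cases(loss, attachment, limit, expected):
    assert math.isclose(apply_layer(loss, attachment, limit), expected, abs_tol=1e-9)
```

The boring cases tell you the model works when everything is normal; the strange ones tell you whether it will hold up in production.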
Be transparent
Transparency builds trust and aids in troubleshooting when things go wrong. Clearly document where and how LLMs are being used, down to the function level. If developers are using AI within your organisation, encourage them to share their approaches and challenges honestly.
Continue to improve
Embrace iteration. AI systems, your workflows, your business requirements and the people on your team will change over time, and so must your approach. Embracing a culture of ongoing refinement ensures you keep your workflows efficient, your outputs reliable, and your team engaged.
Next steps
We’ve explored the importance of strong governance, effortless data integrations, and enabling actuarial developers. At hyperexponential, this has been at the core of our offering since we were founded in 2017. Get in touch with us to learn more about why hx Renew is the platform for AI development in insurance.
I leave you with the words of Geoffrey Hinton, the godfather of AI: “There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences.”