Can the insurance industry rise to the challenge of giving businesses a safety net for their AI usage? In a recent World Economic Forum report, nearly 1,500 surveyed professionals identified AI as their organization’s biggest technology risk.1 As AI continues to evolve, several prominent figures in the financial services sector are expressing their fears about its possible detrimental impacts, despite the enormous potential it holds.2
Insurers, however, are in the business of providing coverage to assuage worry, fear, and uncertainty—which is why, for many, AI risk mitigation could prove to be a meaningful business opportunity. Deloitte projects that by 2032, insurers could write around US$4.7 billion3 in annual global AI insurance premiums, at a compound annual growth rate of around 80% (figure 1).4
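To put that growth rate in context, the compound-growth arithmetic behind the projection can be sketched as follows. The 2024 baseline is not stated in the source; the figure below is simply implied by working backward from the ~US$4.7 billion 2032 projection at ~80% CAGR, and is illustrative only.

```python
def project_premiums(base: float, cagr: float, years: int) -> float:
    """Project annual premiums forward under a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

# Working backward from the report's figures: ~US$4.7B in 2032 at ~80% CAGR
# implies a baseline of roughly US$43M eight years earlier (illustrative only).
implied_base = 4.7e9 / (1 + 0.80) ** 8
print(f"Implied baseline: US${implied_base / 1e6:.0f}M")    # Implied baseline: US$43M
print(f"Projected 2032 premiums: US${project_premiums(implied_base, 0.80, 8) / 1e9:.1f}B")
```

The steep slope is the point: at an 80% CAGR, premiums nearly double every year, so most of the projected market materializes in the final few years of the forecast window.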
To get there, many insurance firms will likely need to build and expand their capabilities—and soon. AI's advantages and its impact on the world are now a topic of conversation among organizations and consumers alike, making its ubiquitous adoption seem inevitable. To put things into perspective, there are estimates that AI could add over 10%—or roughly US$12.5 trillion—to global GDP by 2032.5 In the next few years, society may be hard-pressed to find any aspect of daily life that does not have an AI engine in the background.
However, this revolutionary technology is not without both anticipated and unforeseen risks. Consider the following scenario: In the not-too-distant future, a person could take their self-driving car to a doctor’s appointment to get an AI-assisted diagnosis; a few weeks later, they could have AI-assisted surgery and eventually file an insurance claim through an AI chatbot. A lot of things can go wrong in this scenario; the autonomous car could bump into another vehicle, the initial diagnosis could be incorrect, or the chatbot could reject the valid claim outright. The risks stemming from AI in this example could range from a significant financial loss to a potential fatality. And while some of these risks may seem futuristic, they are already starting to materialize.
Research conducted in 2021 found multiple machine learning algorithms unfit to detect COVID-19 in clinical use.6 In 2023, a tutoring company was sued by a US federal agency for discriminatory hiring practices, stemming from its AI-powered recruitment software.7 Fatalities stemming from AI use are also concerning: A report from the National Highway Traffic Safety Administration highlighted 11 deaths involving vehicles using automated driving systems in the span of just four months in 2022.8 In fact, the 2023 Stanford AI Index reported a 2500% increase in the number of AI-related incidents and controversies since 2012 (figure 2).9
Liabilities arising from use and development of AI can potentially be both significant and unpredictable. However, in today’s competitive market, business leaders may feel pressure to adopt AI technology, despite the risks of diving into unknown territory. Consequently, leaders are often seeking security against unforeseen events.10
Currently, a number of AI solution vendors are providing some safeguards for their AI products, including indemnification from legal claims made against the output of their generative AI tools.11 However, the current and anticipated velocity of AI development could expand the magnitude and variety of risks beyond what can be managed by a few corporations on their own, particularly those that may have already been on the receiving end of lawsuits.12
Just from generative AI alone, businesses could face losses from risks such as cybersecurity threats, copyright infringements, wrong or biased outputs, misinformation or disinformation, and data privacy issues. Having an insurance policy to protect against such issues could help assuage concerns and even encourage further AI adoption at scale.
While the first gasoline-powered cars in the United States were made in the late 1800s,13 it took more than 30 years for a US state to mandate auto insurance.14 Such a long lag is unlikely for AI. Regulators globally are likely to soon demand safeguards and risk management practices around AI use, which will likely include insurance coverage.
The European Union is developing the world’s first comprehensive set of regulations governing AI,15 and has provisions for fines up to US$38 million.16 Several US states have also introduced bills or resolutions governing AI.17 At the federal level, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.18 Even if these regulations do not mandate insurance, provisions for hefty fines may drive companies to seek insurance coverage for these risks. Another factor that could compel businesses to seek insurance coverage would be an increase in the severity and frequency of AI damages and losses. As an analogy, instances of cyberattacks in the United States doubled between 2016 and 2019, driving a substantial increase in cyber insurance premiums written, as well as in their pricing.19
A few large reinsurers are already participating in the AI insurance market. In 2018, Munich Re rolled out an AI insurance product aimed primarily at AI startups.20 It has since launched coverages for AI developers, adopters, and businesses building self-developed AI models. Several insurtech startups are also beginning to operate in this space. Armilla AI launched a product that guarantees the performance of AI products.21
That said, the lack of historical data on the performance of AI models, and the speed at which they are evolving, can make assessing and pricing risks difficult. Insurers entering the market are developing in-house expertise and proprietary qualitative and quantitative assessment frameworks to better understand the risks inherent to these AI systems. As with all nascent and evolving risks, there is expected to be a learning curve involving trial and error, but those who start early could take the lead in developing the required capabilities and capturing share in a market with vast growth potential. “As we learn, as we get more data, then we’ll figure out what the next steps are,” Jerry Gupta, head of AI & Insurance, Armilla, says about developing AI-related insurance products.22
Most insurers are expected to follow a wait-and-watch approach, looking to large global carriers as they establish some pricing and loss history. Drawing on lessons learned from cyber insurance, carriers will likely demand stringent risk management practices and guardrails to limit their liabilities.23 Carriers may also rely upon model audit and attestation firms, as well as other outside AI expertise, for help in understanding the “black box” better before pricing it.24
As the world continues to evolve, new risks will emerge. In their role of providing coverage for a wide range of risks, insurers will be called upon to architect protection and trust in a society where AI is pervasive. But to honor that vision, carriers should take the first steps now: start by building risk pricing models for AI, then keep iterating by gathering and assimilating more information about AI risks and consequent losses as they emerge.
Our prediction considers the link between the digital economy and cyber insurance and calculates a penetration rate. Just as cyber insurance protects the digital economy, AI insurance should safeguard the economic value added by AI. Drawing parallels between the post–financial crisis growth of cyber insurance (2009 to 2017) and AI insurance, the Deloitte Center for Financial Services insurance research team estimates that AI insurance penetration can follow a maturity curve similar to cyber’s in its early years. We’ve based our prediction on these estimates.
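Stated as arithmetic, the penetration-rate approach amounts to expressing premiums written as a share of the economic value being insured. The sketch below applies that definition to the figures cited earlier in this piece (~US$4.7 billion in AI insurance premiums against ~US$12.5 trillion of AI-added global GDP, both projected for 2032); it illustrates the method only, and is not Deloitte's actual model.

```python
def penetration_rate(annual_premiums: float, economic_value_added: float) -> float:
    """Insurance penetration: premiums written as a share of the economic value insured."""
    return annual_premiums / economic_value_added

# Figures cited in the article, both projected for 2032:
# ~US$4.7B in AI insurance premiums vs. ~US$12.5T of AI-added global GDP.
rate_2032 = penetration_rate(4.7e9, 12.5e12)
print(f"Implied AI insurance penetration in 2032: {rate_2032:.4%}")  # 0.0376%
```

Even at the end of the forecast period, the implied penetration is a small fraction of a percent of AI-added value, consistent with the early-maturity-curve framing: cyber insurance penetration of the digital economy was similarly thin in its first years.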