It’s likely that no other technology offers as much transformative potential as generative artificial intelligence. The technology could change the way insurers operate, assess risks, launch products, and interact with customers. However, the journey toward scaling gen AI presents challenges on multiple fronts. Over the past year or so, many insurers have dipped their toes into gen AI proofs of concept, but despite the hype and some initial successes, many may not yet be fully prepared to harness gen AI’s full potential. So the question looms: How ready are insurers to scale gen AI?
In June 2024, Deloitte surveyed 200 US insurance executives, 100 from the life and annuity (L&A) sector and 100 from the property and casualty (P&C) sector, to assess their readiness to adopt gen AI. The findings reveal how surveyed insurers are preparing to move past the initial proof-of-concept (PoC) and experimentation stage, and how they are overcoming implementation obstacles to be ready to embrace and scale gen AI.
Survey findings indicate that many insurers have taken the first steps with gen AI implementations. Seventy-six percent of respondents say they have already implemented gen AI in one or more business functions. L&A insurers appear to be slightly ahead, with 82% of L&A respondents saying they have implemented gen AI in one or more business functions, compared with 70% of P&C respondents. There also seems to be a positive correlation between implementation and organization size, with higher rates seen in larger organizations (figure 1).
When it comes to where various insurers are in their implementation journey, initiatives are spread across various stages and functions among respondents (figure 2). While some insurers have already deployed gen AI across various functions, the largest cluster is still in the scoping stage. It seems that, after initial experiments with use cases and PoCs, insurers may be taking a more measured approach to scaling and adoption.
One reason for this approach may be that some insurance leaders are still weighing the potential risks against the potential benefits, especially at insurers with annual revenues under US$500 million. While 45% of respondents said that their leaders believe the benefits of gen AI outweigh the risks (figure 3), the majority either think that the risks outweigh the benefits or are on the fence about gen AI’s value proposition.
This stance by insurers may be in part due to the complexity of their legacy technology infrastructure, which has presented challenges to modernization across a range of use cases.
Insurers appear to be increasingly aware that the journey to scaling gen AI will include challenges. However, they can use the experience gained from PoCs and early experiments to help position their organizations for long-term success.
For example, among respondents, lack of business-line support was deemed the most important factor behind unsuccessful implementations. Poor data and AI foundations, legacy IT infrastructure challenges, and inadequate collaboration between business and tech functions were also cited as reasons for unsuccessful implementations. Interestingly, underfunding was not cited as a major factor in gen AI implementation failures.
Conversely, successful implementations paint a mirror-image picture: Close collaboration across business, tech, data, and talent functions was cited as the biggest reason for success among those surveyed. When these functions work in harmony, they can create a favorable environment for AI initiatives to succeed. Respondents also highlighted having a strong data foundation in place as a key factor for success.
Insurers’ experiences with gen AI underscore the importance of organizational alignment and data readiness. By learning from both successes and failures, insurers can better navigate the complexities of scaling AI and take steps to become better prepared.
But questions remain: How can insurers assess their readiness? Where should they focus their efforts to achieve a state of scaling readiness, and how can they do it effectively? The 3R framework—resources, responsibility, and returns—can serve as a guide for insurers to consider in this journey (figure 4).
In the journey of scaling gen AI, insurers will need to navigate a labyrinth of known and unknown challenges, but the starting point should be building a strong data foundation. This data foundation should be one where data capabilities and processes are closely aligned with the organization’s gen AI strategy.1 It should include the processes, philosophies, approaches, and approvals for data-sharing and use.
Some insurers appear to have made progress in strengthening their data foundation over the past several years to support analytics and digital capabilities, which can help give them a head start to gen AI scaling. For example, since 2017, AXA UK has been working to create a data foundation to scale its analytics and machine learning capabilities. The company implemented a federated delivery model, granting autonomy to its business units while confirming consistency and support. It also continues to focus on data governance to help provide an understanding of data conditions and implications and promote responsible data use.2
The journey may be even more ambitious moving forward: creating sophisticated data ecosystems that can support gen AI at an unprecedented scale. Building on existing foundations, the spotlight has shifted to crafting data ecosystems that can seamlessly integrate both real-time and batch-processing environments. This convergence is important for scaling gen AI, allowing insurers to harness the power of diverse data types, including increasingly vital unstructured data.
A pivotal development in this journey is the rise of data products. These specialized data environments are designed to help support specific use cases such as fraud detection and customer segmentation. Data mesh architecture often plays a crucial role here, moving away from monolithic data warehouses toward a more modular and compartmentalized approach.
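To make the idea of a data product more concrete, the sketch below (in Python) shows how an insurer might describe a hypothetical fraud-detection data product as a self-contained, owned asset with an explicit contract, rather than an anonymous table in a central warehouse. The `DataProduct` class, field names, and `fraud_claims_product` values are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a data product described as a self-contained,
# owned asset with a published contract. All names and fields are hypothetical.

@dataclass
class DataProduct:
    name: str                                       # discoverable identifier
    owner: str                                      # accountable domain owner (a data mesh principle)
    domain: str                                     # originating business domain
    refresh: str                                    # batch or real-time refresh cadence
    schema: dict = field(default_factory=dict)      # published output schema
    consumers: list = field(default_factory=list)   # downstream gen AI and analytics use cases

# A hypothetical fraud-detection data product owned by the claims domain
fraud_claims_product = DataProduct(
    name="claims.fraud_signals",
    owner="claims-data-team",
    domain="claims",
    refresh="streaming",
    schema={"claim_id": "str", "loss_description": "text", "flagged": "bool"},
    consumers=["fraud-detection-model", "special-investigations-dashboard"],
)

print(f"{fraud_claims_product.name} is owned by {fraud_claims_product.owner}")
```

In a data mesh approach, each domain team would publish and maintain products like this one, so downstream gen AI applications consume governed, well-described data rather than querying a monolithic warehouse directly.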
Closely tied to data foundation is data governance, which is the bedrock of ethical, secure, and trustworthy gen AI. Even organizations with advanced data foundations can uncover unforeseen challenges while scaling gen AI. There is a growing emphasis on data inventory and master data management projects, as some insurers appear to recognize the importance of knowing what data they possess, its sources, and how it supports their strategic objectives, working to confirm that only secure and valuable data is maintained and utilized. Insurers can benefit by re-evaluating their data on parameters of findability, accessibility, interoperability, reusability, and storage.
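As a simple illustration of the re-evaluation described above, the hypothetical Python sketch below scores inventoried datasets against findability, accessibility, interoperability, reusability, and storage. The criteria checks, dataset names, and equal weighting are assumptions for illustration, not an industry-standard scoring method.

```python
# Hypothetical data-inventory scoring sketch: rate each dataset against
# findability, accessibility, interoperability, reusability, and storage,
# and surface the gaps. Criteria and data are illustrative assumptions only.

CRITERIA = ["findable", "accessible", "interoperable", "reusable", "storage"]

inventory = [
    {"name": "policy_master", "findable": True, "accessible": True,
     "interoperable": True, "reusable": True, "storage": True},
    {"name": "legacy_claims_notes", "findable": False, "accessible": True,
     "interoperable": False, "reusable": False, "storage": True},
]

def readiness_score(dataset: dict) -> float:
    """Fraction of the criteria this dataset currently satisfies."""
    return sum(bool(dataset.get(c)) for c in CRITERIA) / len(CRITERIA)

for ds in inventory:
    gaps = [c for c in CRITERIA if not ds.get(c)]
    print(f"{ds['name']}: {readiness_score(ds):.0%} ready; gaps: {gaps or 'none'}")
```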
While establishing a solid data foundation and governance is key to scaling, talent can be just as, if not more, significant. Preparing to scale gen AI is crucial, but so is having a workforce that is willing and able to use it effectively. Resistance to using gen AI due to a lack of familiarity is a challenge facing organizations across industries that are looking to scale the technology.3 Insurers aren’t immune to this challenge.
In fact, when survey respondents were questioned about their organization’s preparedness for gen AI adoption across different areas, they revealed that they are least equipped in terms of talent availability and current workforce skill sets, compared with other readiness factors. It may not be surprising that insurers are prioritizing redesigning their talent management strategies for scaling gen AI (figure 5).
Respondents cited prioritizing candidates with digital literacy and AI knowledge for new job openings, followed by investing in data literacy and AI training programs for the existing workforce, as the most critical changes their organizations are making, or planning to make, in how they manage and hire talent to ease long-term gen AI adoption. With this dual approach, they can work to bridge the talent readiness gap.
Workforce resistance to gen AI can stem from unfamiliarity with the tools, and uncertainty around rapidly growing capabilities. Providing controlled access to an organizationwide gen AI tool can help employees become more comfortable with the technology. This approach can help foster a realistic understanding of gen AI, could help dispel myths, and may help unlock gen AI value across the organization.4
While training for technical skills is important, cultural acceptance of these fast-emerging technologies is also vital. Insurers should consider emphasizing human sustainability, underscoring how gen AI can help elevate employees’ value proposition. This approach could help secure greater employee buy-in.
Building employee trust in gen AI should include transparency, helping stakeholders understand the enterprise’s approach to AI applications, the intended value creation, and how the workforce can utilize these tools to boost efficiency and productivity. As insurers gradually increase gen AI access, they should proactively track, document, and communicate changes to employee responsibilities and process amendments to workflows.
For insurers, gen AI can present several general and sector-specific risks and regulations. One concern is the risk of algorithmic bias and discrimination. AI systems, if not thoroughly monitored, can perpetuate existing biases present in historical data. This can lead to discriminatory practices in underwriting and claims processing.
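One lightweight monitoring check, sketched below in Python, is to compare approval rates across groups in historical underwriting decisions and flag large disparities for human review. The sample data, group labels, and the 0.8 ratio threshold (a four-fifths-style rule of thumb) are illustrative assumptions, not a compliance standard.

```python
from collections import defaultdict

# Hypothetical bias-monitoring sketch: compare underwriting approval rates
# across groups and flag large disparities for review.
# The decisions, groups, and 0.8 threshold are illustrative assumptions.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
benchmark = max(rates.values())  # compare each group against the highest approval rate

for group, rate in rates.items():
    ratio = rate / benchmark
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"Group {group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```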
Moreover, the phenomenon of “hallucinations”—where AI systems generate incorrect or misleading information—poses significant risks for insurers operating in multiple states in the United States, as AI might misinterpret a statute from one US state as applicable to other states, which could lead to flawed decision-making.5
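One way insurers might reduce this particular risk is to constrain the context the model sees, for example by filtering retrieved statute passages to the relevant jurisdiction before they reach the prompt. The Python sketch below is a minimal illustration of that idea; the passage data, fields, and `retrieve_for_state` helper are hypothetical.

```python
# Hypothetical mitigation sketch: keep only statute passages tagged with the
# policyholder's state before building the gen AI prompt, so language from
# one state is less likely to be applied in another. Data is illustrative.

statute_passages = [
    {"state": "NY", "text": "Placeholder summary of a New York claims-handling statute."},
    {"state": "TX", "text": "Placeholder summary of a Texas claims-handling statute."},
]

def retrieve_for_state(passages: list[dict], state: str) -> list[str]:
    """Return only passages tagged with the relevant jurisdiction."""
    return [p["text"] for p in passages if p["state"] == state]

context = retrieve_for_state(statute_passages, "TX")
prompt = ("Using only the context below, summarize the applicable claims-handling rule.\n"
          + "\n".join(context))
print(prompt)
```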
Insurers are already custodians of vast amounts of sensitive data. The integration of multiple internal and external data sets to run gen AI is only adding complexity to data governance and can increase vulnerability to cyberattacks.
Furthermore, employing gen AI in roles such as underwriting, claims processing, and fraud detection could raise issues of transparency and accountability. Gen AI–generated output can often resemble a “black box,” a reference to the inherent opacity of most AI systems,6 which may obscure the reasoning behind specific results, making it challenging for stakeholders to comprehend the underlying logic or for insurers to explain it to regulators.
Broadly, these risks can create a trust deficit for insurers with respect to their employees, customers, regulators, and society as a whole. According to Urs Baertschi, chief executive of P&C Reinsurance at Swiss Re, a big part of the industry’s focus around gen AI must be on defense, in addition to offense.7
To help confirm sustainability and trustworthiness, insurers should hold themselves to high standards of customer, regulatory, and societal responsibility as they look to deploy large-scale gen AI initiatives.
As the insurance industry navigates these risks, regulators and governing bodies are also moving the needle on creating frameworks to govern the use of AI and gen AI in insurance. This regulatory push is motivated by the twofold goal of safeguarding consumers and promoting fair, transparent, and accountable AI practices.
The European Union leads the way with its AI Act, which sets up a comprehensive legal framework for AI. The act, which came into force in August 2024, classifies AI systems by their risk levels and imposes stringent regulations, particularly for high-risk systems. The act aligns with the General Data Protection Regulation (GDPR) to uphold data privacy rights.8
In the United States, the National Association of Insurance Commissioners has implemented a principles-based approach to AI regulation. The organization’s Model Bulletin, designed for issuance by state insurance departments, directs insurers to confirm that AI-driven or AI-supported decisions affecting consumers adhere to all relevant insurance laws and regulations. It urges insurers to test for biases, evaluate the risks to individuals posed by AI use, and guarantee transparency and explainability of the process to consumers.9 Since its adoption in December 2023, 19 states have implemented this model bulletin, with several others taking similar actions.10
Additionally, state-specific pieces of legislation such as Colorado’s AI Act and California’s Senate Bill 1120 underscore the importance of human oversight in AI-driven decisions and mandate measures to mitigate algorithmic discrimination and bias.11
There are also multiple indirect laws and regulations that come into play. For example, as AI systems depend on vast amounts of sensitive data, insurers need to stay compliant with data privacy laws like GDPR and the California Consumer Privacy Act,12 which have implications for vendors and partners as well.
As risks evolve, so will the regulations. It’s likely that insurers will be held to high standards of accountability and transparency in their use of gen AI. Therefore, insurers should formulate their scaling plans within an uncertain regulatory environment. For example, Nationwide has employed a blue team/red team risk approach to manage AI deployment amid this uncertainty. In this approach, the blue team works on identifying use cases to enhance productivity and realize other gains, while the red team anticipates potential risks.13
Most survey respondents said that they are prioritizing data security and privacy concerns, AI trustworthiness, and explainability of results while developing gen AI road maps (figure 6). Among immediate measures to mitigate gen AI risks, some insurers are developing overarching industry-standard governance and risk management frameworks and are thinking through new business and intellectual property risk management approaches.
However, risk management and governance for gen AI encompass much more, given the rapid changes and uncertainties these capabilities present. Gen AI scaling requires insurers to move beyond traditional risk management practices to actively build trust both internally and externally.14 Risk and trust should be considered and addressed throughout the gen AI life cycle, from design and development to deployment and scaling. This includes validation processes and feedback loops for human oversight to manage performance and accuracy. It should also involve establishing guardrails to protect privacy, drive ongoing compliance, and promote agility in proactively responding to emerging risks. First, however, insurers should identify and champion their own core principles of trust, such as fairness, reliability, and transparency,15 driven by their boards and C-suite executives.
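A minimal sketch of what such guardrails and human-oversight checkpoints might look like in practice appears below, in Python. The `generate_draft` placeholder, the PII patterns, and the 0.75 review threshold are assumptions for illustration, not a recommended configuration.

```python
import re

# Minimal guardrail sketch: validate a gen AI draft before release and route
# low-confidence or sensitive output to a human reviewer, returning a record
# that oversight and feedback loops can audit. Names are hypothetical.

PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g., a US Social Security number pattern

def generate_draft(prompt: str) -> tuple[str, float]:
    """Placeholder for a call to the insurer's gen AI service."""
    return f"Draft response to: {prompt}", 0.62  # (text, model confidence)

def guarded_response(prompt: str, review_threshold: float = 0.75) -> dict:
    text, confidence = generate_draft(prompt)
    contains_pii = any(re.search(p, text) for p in PII_PATTERNS)
    route_to_human = contains_pii or confidence < review_threshold
    return {"text": text, "confidence": confidence,
            "contains_pii": contains_pii, "route_to_human": route_to_human}

print(guarded_response("Summarize the claim notes for claim 12345"))
```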
While still early in the journey, one of the most significant benefits many respondents said they have realized through their gen AI implementations is improved efficiency and productivity (figure 7). It’s possible that these efficiency gains will be modest at first but could reach meaningful levels in two to three years. Such incremental improvements are consistent with past technology adoption patterns, where organizations initially target low-hanging fruit to capture value while building knowledge and confidence with the new technology.16 However, the rewards may not end there. While efficiency and productivity gains are important, gen AI’s potential lies in its ability to drive innovation and strategic transformation. Insurers appear to be aware that there is still a long way to go to achieve second-order benefits such as innovation, shifting talent to higher-value work, and developing new products and services.
Unconvinced or on-the-fence leadership could present a worrying picture for insurers. This may be exacerbated by the fact that insurers have struggled to demonstrate value in past technology transformations, often leading executives to divert sponsorship to other priority areas. Therefore, before scaling gen AI, the organization’s gen AI strategy and vision should be comprehensive and integrated with broader business objectives, with a top-down mandate.
To help overcome hesitation among leadership regarding the scaling of gen AI, consider focusing on demonstrating clear and measurable value from early initiatives. Without real value, leaders could experience experiment fatigue and lose interest. Any value communication should include both tangible and intangible benefits like enhancements in innovation, strategic positioning, and competitive differentiation. Proving, measuring, and communicating the benefits of gen AI—beyond just financial returns—can help secure leaders’ long-term commitment, as well as continued support and funding. Insurers’ initial focus on low-barrier, high-impact use cases across business domains, followed by reinvesting savings from efficiencies into innovation and higher-value use cases, can ease the path to adoption and scaling.
Participating in an impactful way in the rapidly evolving gen AI landscape is likely to be less a sprint than a marathon for insurers, one that includes challenges ranging from data security and talent readiness to regulatory compliance and leadership buy-in. By adopting a proactive and measured approach, insurers can not only navigate these challenges but also unlock unprecedented efficiency, innovation, and organizational value. The time has come for insurers to turn their wounds into wisdom and embrace the future with resilience and foresight.