AI called to Duty in the GI sector
Who this blog is for: Board members and senior executives of UK GI firms who work on technology, AI, regulatory affairs, risk, and compliance.
The increasing availability of AI models provided by third parties has accelerated insurers’ adoption of this technology. Many firms are now using, or experimenting with, AI1 - especially its Machine Learning (ML) subset2 - for pricing, customer support and claims management. However, without the necessary safeguards in place, the use of AI can lead to poor customer outcomes, and supervisors globally are therefore developing their expectations around AI-related risks.
The UK’s regulatory expectations around AI are also unfolding. UK regulators will follow a risk-based, context-specific and proportionate approach to regulating AI, and have been asked to publish their approach to AI supervision by April 2024. We also expect regulators to provide detailed guidance in early 2025. In the short to medium term, however, the UK Government’s AI strategy will rely on existing regulatory frameworks. The Duty is a case in point: in the absence of a formal regulatory approach to AI, it provides the FCA with “hooks” to take action where firms’ use of AI systems results in poor customer outcomes. Most importantly, delivering good customer outcomes should be central to insurers building out their AI capabilities, which need to be underpinned by appropriate controls, monitoring, governance and risk management to identify and mitigate the risk of customer harm.
In this article, we highlight how UK GI firms can look at their AI systems through the lens of the Duty. In particular, we rely on two key use cases of AI/ML by insurers to explore possible challenges and actions for firms in light of their responsibilities under the Duty: pricing and claims management.
Although the FCA does not specify in its Duty guidance exactly how insurers should think about their use of AI in the context of the Duty, all the Duty’s cross-cutting rules and outcomes apply. For example, insurers need to act in good faith by designing products and services that offer fair value and meet customers’ needs in the target markets. For insurers that use AI/ML in underwriting and pricing, this could mean thinking about whether algorithms can amplify or embed bias, and whether any foreseeable harm could be avoided. Similarly, the Duty requires firms to put themselves in their customers’ shoes when considering whether their communications provide customers with the right information, at the right time. Here, insurers using AI when interacting with customers need to make sure that the information is still tailored to customers’ needs and helps them achieve their financial goals, even if this is done via a chatbot.
To start considering AI/ML in the context of the Duty, insurers should review the UK Government’s policy paper, which outlines some key principles to guide and inform the responsible development and use of AI. These include safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, as well as contestability and redress. While these principles summarise the key risks of AI/ML for firms, they are consistent with the Duty in many ways. Insurers that want to progress with their AI pilots ahead of a formal UK regulatory approach should reflect on how these principles apply to their use cases, as well as the Duty.
When it comes to accountability and governance, for example, a key requirement of the Duty is that insurers’ governance (e.g., controls and key processes) should be set up to enable the identification of poor customer outcomes. Insurers need to be able to demonstrate how their business models, culture and actions remain focused on achieving good customer outcomes. These considerations should underpin a firm’s AI strategy but are also key ingredients to evidencing full compliance with the Duty. Similarly, Boards have to sign off on a report that their organisation complies with the principles underpinning the Duty. Having an awareness of, and ability to challenge, risks to customer outcomes posed by AI systems will be key in this regard.
The UK Government also recently published its response to its White Paper and set out additional guidance3 and key questions that regulators should consider when implementing its AI principles (see our comprehensive summary here). Some questions which the Government poses to regulators will also be relevant to firms in the context of Duty compliance, including for example:
Below we take a closer look at two insurance-specific use cases and how insurers might want to think about them in the context of the Duty. We close with a list of key actions for insurers in their journeys to make the most of AI/ML.
Several UK GI firms either currently use or intend to use ML tools to enhance the speed and accuracy of underwriting processes, including pricing. For example:
UK GI firms have been under the regulatory spotlight in recent years over the fairness and transparency of their pricing practices. We expect insurers’ increasing use of opaque ML techniques for pricing to amplify existing regulatory concern in this area, particularly following the introduction of the Duty. The FCA will be particularly wary of the potential exclusion of some customer cohorts as a result of more granular premium pricing. In the Duty guidance, the FCA specifically cites using algorithms within products or services in ways that could lead to consumer harm as an example of not acting in good faith4. This might apply where algorithms embed or amplify bias, leading to outcomes that are systematically worse for certain customers – unless the differences in outcome can be justified. As pricing is a key use case for AI in general insurance, and price and value is one of the Duty’s outcomes, it is crucial that GI firms can demonstrate fairness across groups of customers when using AI applications.
Key challenges:
1. Explaining how AI/ML pricing models do not lead to poor consumer outcomes: regulators are concerned about AI/ML models introducing or reinforcing bias in modelling, which could lead to unfair pricing. For example, poor-quality training datasets, or a lack of controls to detect model drift in unprecedented situations, could lead to irrelevant, inaccurate or biased model outputs, causing potential discrimination and customer harm. The Duty is very clear in its expectation that firms need to ensure their products provide fair value to all groups of customers, and that behavioural biases should not be exploited through pricing.
To mitigate this risk, firms need to have strong data and pricing governance frameworks in place. This includes reinforcing controls, monitoring and MI around models’ data inputs and outputs to ensure customers with protected or vulnerability characteristics are not discriminated against. Firms will need to be able to demonstrate that their fairness assessment is appropriate to the product sold and the intended target market (the ICO’s work on dataset, design and outcome fairness can provide a helpful starting point for firms to develop their own fairness explanations).
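Fairness monitoring of this kind can start very simply. The sketch below is a purely illustrative outcome check of our own devising (the cohort labels, tolerance threshold and choice of metric are assumptions, not FCA prescriptions): it compares a pricing model’s average premium per customer cohort against the book-wide average and flags material deviations for human review and justification.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    cohort: str      # e.g. an age band or other monitored characteristic
    premium: float   # model-produced annual premium

def flag_cohort_disparities(quotes, tolerance=0.10):
    """Return cohorts whose mean premium deviates from the book-wide mean
    by more than `tolerance` (10% here, an arbitrary illustrative level)."""
    overall = sum(q.premium for q in quotes) / len(quotes)
    by_cohort = {}
    for q in quotes:
        by_cohort.setdefault(q.cohort, []).append(q.premium)
    flags = {}
    for cohort, premiums in by_cohort.items():
        deviation = (sum(premiums) / len(premiums) - overall) / overall
        if abs(deviation) > tolerance:
            flags[cohort] = round(deviation, 3)
    return flags

quotes = [Quote("18-25", 620.0), Quote("18-25", 680.0),
          Quote("26-60", 410.0), Quote("26-60", 395.0),
          Quote("60+", 505.0), Quote("60+", 530.0)]
print(flag_cohort_disparities(quotes))  # → {'18-25': 0.242, '26-60': -0.231}
```

Flagged deviations are not automatically unfair - pricing differences may be fully justified by risk - but a control of this kind creates the MI trail a firm needs in order to explain them.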
The Duty also emphasises the need for firms to safeguard consumers’ data privacy.5 UK regulators may review how firms and their third party (TP) providers collect, manage, and use customer data in their AI systems. GI firms will be required to have sufficiently detailed documentation of the data used by AI/ML models to prevent data protection breaches and support model explainability.
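The drift controls mentioned above can also be made concrete with a simple statistical monitor. The sketch below is illustrative only and assumes a firm has chosen the population stability index (PSI), a commonly used drift metric, to compare a model input’s distribution at training time with the live book:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a live sample of the
    same feature. Rule of thumb often used in practice: < 0.1 stable,
    0.1-0.25 monitor, > 0.25 investigate."""
    # Bin edges taken from the baseline distribution's percentiles
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Clip live values into the baseline range so every observation is binned
    actual = np.clip(actual, edges[0], edges[-1])
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Proportions, floored to avoid log(0) in empty bins
    exp_pct = np.maximum(exp_counts / len(expected), 1e-6)
    act_pct = np.maximum(act_counts / len(actual), 1e-6)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(40, 10, 10_000)  # e.g. policyholder age at training time
live = rng.normal(48, 10, 10_000)      # the live book has shifted older
print(round(population_stability_index(baseline, live), 3))
```

In practice a firm would run checks like this per feature and per model output on a schedule, with breaches routed into its model governance process.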
2. AI expertise: GI firms also need to invest in enhancing AI expertise to be able to, where relevant, develop, maintain and challenge any new AI-driven pricing models in line with the Duty. While this is true across many insurance functions, the Financial Reporting Council and the Government Actuary’s Department recently highlighted a lack of the technical skills needed to handle advanced AI/ML techniques, especially in actuarial functions. This can lead to overreliance on a small number of individuals and to key person risk. Where firms deploy AI/ML in the pricing process, they need to provide actuaries with the appropriate training and tools to guard against possible customer harm caused by the models. This should also extend to the independent risk, compliance and internal audit functions, which will play a key role in providing assurance that pricing processes and policies are fit for purpose (especially where insurers build their own models). Only with the appropriate expertise will GI firms be able to demonstrate that their AI systems comply with the Duty.
AI is already widely used by GI firms in claims management as it increases speed, reduces cost, and can improve the customer experience. Common use cases of AI in claims handling processes include:
The FCA expects firms to remove unnecessary barriers in the claims management process to ease the consumer journey and provide fair value – whether claims are managed through AI or humans. The FCA will pay particular attention to claims settlement times as pointed out in its warning and 2023 portfolio letter to GI firms (targeting the health and motor sectors specifically). AI represents a promising solution to improve claims settlement timelines but can also contribute to poor customer outcomes. The FCA will, for example, expect firms to ensure that the use of AI does not lead to more burdensome or complex claims processes for customers. Under the Duty, having a complex claims process that could deter customers from pursuing claims could constitute an example of poor practice. Here, firms need to ensure that increased settlement efficiency using AI is not achieved at the expense of deteriorating outcomes for certain customers.
Key challenges:
1. Humans in the loop:6 where firms use automated systems (e.g., chatbots), the FCA stresses that firms should provide appropriate options to help customers. For example, a GI firm providing an online claims chatbot without access to a human agent could produce poor outcomes, especially for vulnerable customers who might not be able to navigate the chatbot easily. Firms should test the process maps for customer journeys, distinguishing between those that can be safely managed through high degrees of automation and those that require human contact. Firms also need a process in place through which customers can complain and challenge the outcomes they receive.
Regarding the claims management back office, the PRA and the FCA have also warned about “automation bias”, i.e. where humans confirm decisions made by AI systems without providing appropriate challenge. This is especially relevant where AI systems are used in the claims triage and adjudication processes. To tackle this, firms could involve dedicated, experienced case officers for sensitive or complex cases. Humans in the AI loop should have an active role in overseeing the model and its output. Their ability to understand and challenge the model should be tested as part of the governance process, and continuously improved through appropriate training.
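One simple way to evidence that humans in the loop are genuinely challenging the model, rather than rubber-stamping it, is to track reviewer override rates. The sketch below is an illustrative control of our own devising (the 2% floor is an arbitrary assumed threshold): a reviewer who almost never departs from the model’s recommendation on cases routed for human review may be exhibiting automation bias and warrant a training or staffing review.

```python
def override_rates(decisions):
    """`decisions` is a list of (reviewer, model_decision, final_decision)
    tuples for cases routed to human review. Returns each reviewer's
    override rate: the share of cases where they changed the model's call."""
    totals, overrides = {}, {}
    for reviewer, model_call, final_call in decisions:
        totals[reviewer] = totals.get(reviewer, 0) + 1
        if model_call != final_call:
            overrides[reviewer] = overrides.get(reviewer, 0) + 1
    return {r: overrides.get(r, 0) / n for r, n in totals.items()}

def flag_possible_automation_bias(decisions, floor=0.02):
    """Flag reviewers whose override rate falls below a minimum expected
    level - a prompt for further investigation, not proof of bias."""
    return [r for r, rate in override_rates(decisions).items() if rate < floor]

log = ([("alice", "approve", "approve")] * 95
       + [("alice", "approve", "reject")] * 5
       + [("bob", "approve", "approve")] * 50)
print(flag_possible_automation_bias(log))  # → ['bob']
```

A low override rate may of course simply mean the model is accurate on easy cases, which is why such MI works best alongside case sampling and qualitative review.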
2. Identifying vulnerable customers to prevent foreseeable harm: in its Financial Lives survey, the FCA found that 77% of people surveyed felt the burden of keeping up with domestic bills and credit commitments had increased between July 2022 and January 2023. Moreover, cost-of-living challenges led 13% of insurance policyholders to cancel or reduce their policy cover from mid-2022. Insurers should build adequate processes to identify vulnerable customers and adjust chatbot suggestions accordingly; this could include changes to the information provided and adjustments to claims settlement times. Delayed settlements or unexpected premium increases due to poorly monitored AI systems and a lack of second line oversight in the claims management chain can have a disproportionate impact on vulnerable customers, leading to further financial difficulties. Under the Duty, firms are expected to respond flexibly to the needs of customers with characteristics of vulnerability, whether by providing support through different channels or adapting their usual approach.
Key actions for insurers:
- Review and control the datasets used as inputs for AI models
- Review contractual relationships and information exchange flows with TPs in light of the Duty
- Carry out due diligence over third-party AI providers
- Enhance governance arrangements and data quality and lineage processes
- Carry out model testing and assurance
- Build risk-based inventories of AI models to structure AI risk management processes and prepare for potential Model Risk Management requirements for insurers9
- Monitor customer outcomes and track fairness
- Ensure that the firm’s existing skillset is sufficient to deliver the AI strategy over the medium term
- Ensure customers have adequate support and alternative avenues to challenge the outcomes of AI models and, where necessary, interact with humans
- Ensure the adequate identification of customers’ risk characteristics
- Consider using the FCA Digital Regulatory Sandbox
- Keep scanning the regulatory horizon
AI-related technological breakthroughs present a great opportunity for insurers – they could greatly improve operational efficiency and reduce cost. But any AI system needs to be underpinned by appropriate controls to mitigate the risk of harm to customers. Now is the right time to develop strong safeguards around the use of AI, both to ensure the delivery of good customer outcomes under the Duty and to anticipate future UK supervisory approaches to AI. Placing good customer outcomes at the heart of the AI strategy will enable firms to obtain a competitive advantage, gain the trust of customers and regulators, and mitigate the risk of setbacks. Firms implementing these safeguards will then be well-positioned to leverage their AI/ML systems to demonstrate compliance with the Duty, and to monitor customer outcomes more effectively.
___________________________________________________________
1 For the purpose of this insight, we will use the PRA/FCA definition of AI in DP5/22: “AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence”.
2 “ML is a subfield within AI, […]” and “refers to a set of statistical algorithms applied to data for making predictions or identifying pattern in data”; it is “a methodology whereby computer programmes build a model to fit a set of data that can be utilised to make predictions, recommendations or decisions without being programmed explicitly to do so” - Bank of England, PRA, and FCA, “Discussion Paper 5/22: Artificial Intelligence and Machine Learning”, 2022, link; and “Machine Learning in UK financial services”, 2022, link
3 Department for Science, Innovation and Technology, Implementing the UK’s AI Regulatory Principles, 2024, available at: https://assets.publishing.service.gov.uk/media/65c0b6bd63a23d0013c821a0/implementing_the_uk_ai_regulatory_principles_guidance_for_regulators.pdf.
4 FG22/5 Final non-Handbook Guidance for firms on the Consumer Duty, paragraph 5.12, 2022, available at: https://www.fca.org.uk/publication/finalised-guidance/fg22-5.pdf
5 It notably refers to the ICO’s guidance to ensure sound data use in the context of AI.
6 “The measures in place to ensure a degree of human intervention/involvement with a model before a final decision is made” as per Bank of England, PRA, and FCA, “Discussion Paper 5/22: Artificial Intelligence and Machine Learning”, 2022, link
7 “Decentralized Machine Learning framework that can train a model without direct access to users’ private data” as per Deloitte, Federated Learning and Decentralized Data, 2022, link
8 Especially as discussions are ongoing on the relevance of introducing an SMF responsible for AI as part of the SMCR review.
9 In the Policy Statement 6/23 on MRM principles for banks, feedback regarding the applicability of the MRM principles to AI/ML models indicated that both firms and the PRA were aligned on the mutual benefits of the proposed principles and their applicability to AI/ML models.
10 Provisional agreement on the AI Act: EU Council, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts, February 2024, available at: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
David is Head of Deloitte’s EMEA Centre for Regulatory Strategy. He focuses on the impact of regulatory changes - both individual and in aggregate - on the strategies and business/operating models of financial services firms. David joined Deloitte after 12 years at the UK’s Financial Services Authority. His last role was as Director of Financial Stability, working with UK and international counterparts to deal with the immediate impact of the Great Financial Crisis and the regulatory reform programme that followed it.
Suchitra is a Partner in the EMEA Centre for Regulatory Strategy and helps our clients to navigate the regulatory landscape around technological innovation. She sits on the UK Fintech Executive and leads our thought leadership on topics such as digitisation, cryptoassets, AI, regulatory sandboxes, Suptech, payment innovation and the future of regulation. She recently completed a secondment at the Bank of England, supervising digital challenger banks. Suchitra is a member of various industry working groups on innovation in financial services and has regularly featured in the Top 150 Women in Fintech Powerlist (Innovate Finance). She is a qualified Chartered Accountant and has previously worked in Deloitte’s Audit, Corporate Finance and Risk Advisory teams, where she led large-scale regulatory change projects.
Reny is a Deloitte Partner, leading the UK Insurance AI business, where she focusses on driving and embedding the use of AI, data and automation across different parts of the insurance journey, including customer, servicing and operations. She has had a particular focus on Life and Pensions companies throughout her career, having supported them on large transformation programmes for the past 13 years. She also focusses on driving innovation through new products and services that disrupt traditional insurance business processes, and partners with various AI-based solutions to deliver the next generation of services.
Kareline is a director in Deloitte’s EMEA Centre for Regulatory Strategy, specialising in insurance regulation. Kareline has more than 15 years of experience in both prudential and conduct insurance regulation, providing high quality advice to firms in the UK market. At Deloitte, Kareline leads a team of experts to carry out horizon scanning and assess the strategic impact of regulation on the market. Kareline provides advice to insurance clients on the impact of regulation on their business, finance, and operating models. Kareline has led engagements supporting clients with a number of regulatory challenges, including Brexit and restructuring projects, advice on the impact of Solvency II/Solvency UK on capital decisions and investments, supporting a top 3 retail general insurer on interpretation of and compliance with Pricing Practices rules, and the design and implementation of insurance products and customer journeys for a large life insurer. Kareline is a member of the ICAEW Risk and Regulation Committee and the Solvency II working party. Kareline has authored several publications and columns on insurance regulation and Solvency II over the past ten years.
Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised Subject Matter Expert (SME) in AI regulation and has a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to Generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with these complex foundation models. Barry also leads our Digital Services Act (DSA) & Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these Internet regulations. As part of this role, Barry oversees our firm’s assessment of controls encompassing crucial areas such as consumer profiling techniques, recommender systems, and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with relevant regulations in this space.
John is one of our retail conduct financial services leads. He has more than 20 years’ experience in complex remediation and regulatory driven transformation programmes, and leads our outcome testing hub. Most recently, John has supported a number of firms with implementation of the Consumer Duty, in particular product governance and price and value assessments. John is also helping firms consider how they can drive operational efficiency and value by getting customer journeys ‘right first time’ and through control and governance frameworks that are simplified and add value to the business.
Matthew is a governance, risk and conduct consultant working with clients across the financial services industry, with a particular focus and expertise in retail banking, consumer credit and insurance regulation. Matthew provides strategic and commercially-focussed advice to deliver tailor-made solutions which enable clients to ensure they comply with and exceed their regulatory requirements. Matthew has been at the forefront of working with financial services firms as they innovate - either through their business models, digitalisation or in the adoption of innovative technologies - in a way that meets their regulatory obligations. Matthew has supported a number of Alternative Finance, FinTech and InsurTech firms as they have sought to develop their products, approach to conduct risk, and their risk and control environments. Matthew holds a masters-level qualification from the University of Oxford in Strategy and Innovation.
Linda is a Senior Manager in Deloitte’s EMEA Centre for Regulatory Strategy, specialising in general insurance regulation. Linda joined Deloitte in September 2019 after having worked as a senior supervisor for firms across both the company and Lloyd’s insurance market, as well as the UK retail banking sector, at the UK’s Prudential Regulation Authority (PRA).
Steve is a General Insurance Pricing Actuary with experience across personal and commercial lines. Steve joined Deloitte in August 2022 from the Lloyd’s and London Market. He has a specific interest in pricing and underwriting transformation.