Posted: 18 Oct. 2021 · 5 min read

Artificial Intelligence and the Insurance Industry

Building trustworthy AI systems in the insurance sector: lessons from the EIOPA AI guidelines

Since its early days, the business of insurance has been built on risk and the ability to accurately assign likelihoods to events. Insurers and their customers are locked in a never-ending cycle of predicting and protecting against these events, one in which mutual trust is fundamental to ensuring fairness and non-discrimination. The industry is also entering the digital era: increasingly, insurance companies are embracing the benefits of technology, with Artificial Intelligence (AI) emerging as a real differentiator. Its power to analyse big datasets and automate processes in ways not possible for humans brings significant advantages such as cost reductions and improved decision-making.

The scope of AI in the insurance industry is expanding, and insurers now use it across a wide range of business streams, from product development to insurance pricing, risk calculations, campaign management and fraud detection. While the benefits of AI are indisputable, questions around its trustworthiness cannot be neglected. Insurers need to adapt their governance structures and control frameworks to ensure their AI systems do not cross acceptable boundaries. Fairness and inclusion are essential for building trust, and regulatory attention is naturally shifting towards promoting these principles.

This blog will cover the key principles behind the EIOPA AI guidelines for the insurance sector and explore the next steps for insurance companies in aligning their AI controls with industry expectations.

AI governance principles

The European Insurance and Occupational Pensions Authority (EIOPA) recently published its AI guidelines for the insurance sector, which elaborate in detail on effective governance principles for trustworthy AI. The paper builds on the earlier work of the European Commission’s Expert Group on AI and highlights six key principles of effective AI governance:

  • Proportionality – similar to Solvency II, the governance and control measures should be proportionate to the impact of these systems on customers and business (e.g. number of customers impacted, relevance to insurance lines of business etc.); this will require firms to conduct comprehensive testing and assessments of their AI systems before designing their control procedures in line with the other five principles
  • Fairness and Non-Discrimination – firms must design their AI systems to achieve procedural and distributive fairness1 where the interests of all parties are balanced; subjectivity must be mitigated and input data must be free of bias; firms should develop adequate metrics to measure the fairness of their systems, monitor the levels of inclusion, and use the outcomes in a fair way
  • Transparency and Explainability – transparency is a cornerstone of trustworthiness; to achieve it, insurers must be open with their customers and explain how AI-driven outcomes are arrived at; users should understand the input data, the meaning of the outcomes, and the instructions for using the AI systems and their limitations, especially in insurance pricing, fraud detection and claims management
  • Human Oversight – as a key principle to achieve fairness and trustworthiness, insurers are required to design their AI systems in such a way that human oversight is possible; the staff responsible for overseeing the AI systems should be adequately trained and have the necessary knowledge and qualifications
  • Data Governance and Record-Keeping – the input data used for modelling the AI systems must be free of errors and bias, and firms must ensure adequate data governance principles are in place; if the input data is not appropriate, accurate or complete, the outcomes of the AI models will be flawed and, when used for activities such as insurance pricing/underwriting, product design or loss prevention, could mislead or unfairly treat customers; firms must keep logs of data management to enable traceability and auditability
  • Robustness and Performance – insurers must use AI systems which are fit for purpose and designed for the particular task and area of work; calibration, validation and reproducibility of systems must be ensured and extensive testing performed to achieve robustness and accuracy, as well as confirmation that IT infrastructure is resilient to cyber-attacks
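To make the fairness principle concrete, here is a minimal sketch of one metric a firm might monitor: the demographic parity gap, i.e. the difference in favourable-outcome rates between customer groups. This is an illustrative example, not a metric prescribed by the EIOPA paper, and all names are hypothetical.

```python
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in favourable-outcome rates between groups.

    outcomes: list of 0/1 decisions (1 = favourable, e.g. a quote offered)
    groups:   list of group labels, aligned with outcomes
    A gap close to 0 suggests similar treatment across groups; a large
    gap flags the system for further fairness review.
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        total, favourable = counts.get(group, (0, 0))
        counts[group] = (total + 1, favourable + outcome)
    rates = {g: fav / total for g, (total, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical pricing decisions for two customer groups:
# group A receives favourable outcomes at 2/3, group B at 1/3.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0],
                             ["A", "A", "A", "B", "B", "B"])
```

In practice a firm would compute such metrics regularly over live decisions and set thresholds that trigger human review, in line with the human-oversight principle above.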
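The data-governance and record-keeping principle can likewise be sketched in code: reject incomplete input records before they reach a model, and log every decision so the data pipeline is traceable and auditable. The field names and records below are hypothetical, for illustration only.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data_governance")

def validate_records(records, required_fields):
    """Filter out incomplete records, logging each decision for audit.

    records: list of dicts (e.g. policy applications) - illustrative only.
    Returns only the records in which every required field is present.
    """
    accepted = []
    for i, record in enumerate(records):
        missing = [f for f in required_fields if record.get(f) is None]
        if missing:
            log.warning("record %d rejected: missing %s", i, missing)
        else:
            log.info("record %d accepted", i)
            accepted.append(record)
    return accepted

# One complete and one incomplete application record.
clean = validate_records(
    [{"age": 40, "postcode": "AB1"}, {"age": None, "postcode": "AB2"}],
    required_fields=["age", "postcode"],
)
```

The log output doubles as the audit trail the guidelines ask for; a production system would write it to durable storage rather than the console.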

Growing adoption of AI – the future of insurance

Technology will keep evolving, and the insurance sector will harness the power of AI at increasing speed. Digital transformation is a trend that is here to stay, and firms will have to adapt quickly if they wish to stay relevant and competitive. With the growing use of AI comes a growing need to ensure trustworthiness, especially in the insurance industry, where trust is fundamental.

Understanding the principles for trustworthy AI systems is not sufficient if they are not actively promoted as part of culture and corporate governance. Firms need to establish purposefully designed committees and steering groups to oversee AI systems with the endorsement of the Board and Senior Management. Culture setting from the top is a building block to successfully embed the principles of AI trustworthiness within a firm’s strategy and vision.

With adequate governance in place, insurers must re-assess their internal controls and risk management procedures and set up a roadmap for optimising AI operating models. The paper published by EIOPA on the governance principles for trustworthy AI sets the foundation on which firms should build.

Insurance companies should focus on the five pillars of an effective AI risk management framework: governance, development and testing, operations, monitoring, and documentation. These are the key considerations firms should weigh when designing their internal controls. The EIOPA principles’ emphasis on data governance and transparency will further heighten the need for effective design, data collection, data preparation, modelling and evaluation as firms validate the work of their AI systems.

At Deloitte, we have developed a comprehensive controls and validation framework to support our clients with their AI and algorithm needs, derived from our extensive experience in the insurance sector and AI knowledge. If you would like to learn more about our AI and algorithm assurance offerings, please reach out and we will be happy to discuss.


Further reading and Resources:

Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector | Eiopa (europa.eu)

Ethics Guidelines for AI (europa.eu)

References:

1. Procedural fairness relates to fair business conduct and governance measures. Distributive fairness focuses on the material outcomes of insurance distribution.

Key contacts

Mark Cankett

Partner

Mark is a Partner in our Regulatory Assurance team. He is our AI Assurance, Internet Regulation and Global Algorithm Assurance Leader, with 20 years of experience across financial services audit and assurance, regulatory compliance, regulatory investigations and disputes. He has led the development of our assurance practice and our approach to helping firms gain confidence in their algorithmic and AI systems and processes. He has a particular sub-sector specialism in algorithmic trading, with varied experience supporting firms in enhancing their governance and control environments, as well as investigating and validating such systems. More recently he has supported and led our work across a number of emerging AI assurance engagements.

Barry Liddy

Director

Barry is a Director at Deloitte UK, where he leads our Algorithm, AI and Internet Regulation Assurance team. He is a recognised Subject Matter Expert (SME) in AI regulation and has a proven track record of guiding clients in strengthening their AI control frameworks to align with industry best practices and regulatory expectations. Barry’s expertise extends to Generative AI, where he supports firms in safely adopting this technology and navigating the risks associated with complex foundation models. Barry also leads our Digital Services Act (DSA) & Digital Markets Act (DMA) audit team, providing independent assurance over designated online platforms’ compliance with these Internet regulations. As part of this role, Barry oversees our firm’s assessment of controls in crucial areas such as consumer profiling techniques, recommender systems, and content moderation algorithms. Barry’s team also specialises in algorithmic trading risks and controls, and he has led several projects focused on ensuring compliance with the relevant regulations in this space.

Dzhuneyt Yusein

Assistant Manager

Dzhuneyt is an Assistant Manager within the Markets Assurance team of the Banking & Capital Markets Audit & Assurance Group. Dzhuneyt’s experience covers major regulatory transformation projects, such as the IBOR transition and MiFID II transaction reporting, for a variety of FS clients across regions, including Tier 1 banks. His core skills and previous experience give him a detailed understanding of internal control processes and frameworks in banks and other financial institutions.