Posted: 18 October 2021 · 5 min read

Artificial Intelligence and the Insurance Industry

Building trustworthy AI systems in the insurance sector based on the EIOPA AI guidelines

Since its early days, the business of insurance has been built on risk and the ability to assign accurate likelihoods to events. Insurers and their customers are bound in a continuous cycle of predicting and protecting against these events, in which mutual trust is fundamental to ensuring fairness and non-discrimination. It is also an industry entering the digital era. Increasingly, insurance companies are embracing the benefits of technology, with Artificial Intelligence (AI) emerging as a real differentiator. Its ability to analyse big datasets and automate processes at a scale not possible for humans brings significant advantages, such as cost reductions and improved decision-making.

The scope of AI in the insurance industry is expanding, and insurers now use it across a wide range of business streams, from product development to insurance pricing, risk calculations, campaign management and fraud detection. While the benefits of AI are indisputable, questions around its trustworthiness cannot be neglected. Insurers need to adapt their governance structures and control frameworks to ensure their AI systems do not cross acceptable boundaries. Fairness and inclusion are essential for building trust, and regulatory attention is naturally shifting towards promoting these principles.

This blog will cover the key principles behind the EIOPA AI guidelines for the insurance sector and explore the next steps for insurance companies in aligning their AI controls with industry expectations.

AI governance principles

The European Insurance and Occupational Pensions Authority (EIOPA) recently published its AI guidelines for the insurance sector, which elaborate on effective governance principles for trustworthy AI. The paper builds on the earlier work of the European Commission’s Expert Group on AI and highlights six key principles of effective AI governance:

  • Proportionality – similar to Solvency II, the governance and control measures should be proportionate to the impact of these systems on customers and business (e.g. number of customers impacted, relevance to insurance lines of business etc.); this will require firms to conduct comprehensive testing and assessments of their AI systems before designing their control procedures in line with the other five principles
  • Fairness and Non-Discrimination – firms must design their AI systems to achieve procedural and distributive fairness1 where the interests of all parties are balanced; subjectivity must be mitigated and input data must be free of bias; firms should develop adequate metrics to measure the fairness of their systems, monitor the levels of inclusion, and use the outcomes in a fair way
  • Transparency and Explainability – transparency is core to trustworthiness; insurers must be transparent with their customers and explain how AI-driven outcomes are arrived at; users should understand the input data, the meaning of the outcomes, and the instructions for using the AI systems and their limitations, especially in the cases of insurance pricing, fraud detection and claims management
  • Human Oversight – as a key principle to achieve fairness and trustworthiness, insurers are required to design their AI systems in such a way that human oversight is possible; the staff responsible for overseeing the AI systems should be adequately trained and have the necessary knowledge and qualifications
  • Data Governance and Record-Keeping – the input data used for modelling the AI systems must be free of errors and bias, and firms must ensure adequate data governance principles are in place; if the input data is not appropriate, accurate or complete, the outcomes of the AI models will be flawed and, when used for activities such as insurance pricing/underwriting, product design or loss prevention, could mislead or unfairly disadvantage customers; firms must keep logs of data management to enable traceability and auditability
  • Robustness and Performance – insurers must use AI systems which are fit for purpose and designed for the particular task and area of work; calibration, validation and reproducibility of systems must be ensured and extensive testing performed to achieve robustness and accuracy, as well as confirmation that IT infrastructure is resilient to cyber-attacks
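To make the fairness principle above more concrete, here is a minimal sketch of one widely used fairness metric, demographic parity, applied to a hypothetical binary underwriting decision. The group labels, decisions and threshold logic are all invented for illustration; in practice firms would choose metrics appropriate to their lines of business and customer base.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in approval rates between two customer groups.

    decisions: list of 0/1 model outcomes (1 = policy offered)
    groups:    list of group labels ("A" or "B") aligned with decisions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])


# Toy example: eight underwriting decisions across two illustrative groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap like this would prompt investigation into whether the input data or model design is introducing bias; a small gap is necessary but not sufficient evidence of fairness, which is why EIOPA also asks firms to monitor inclusion and use outcomes fairly.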

Growing adoption of AI – the future of insurance

Technology will keep evolving, and the insurance sector will harness the power of AI at increasing speed. Digital transformation is here to stay, and firms will have to adapt quickly if they wish to stay relevant and competitive. With the growing use of AI comes a growing need to ensure trustworthiness, especially in the insurance industry, where trust is fundamental.

Understanding the principles for trustworthy AI systems is not sufficient if they are not actively promoted as part of culture and corporate governance. Firms need to establish purposefully designed committees and steering groups to oversee AI systems with the endorsement of the Board and Senior Management. Culture setting from the top is a building block to successfully embed the principles of AI trustworthiness within a firm’s strategy and vision.

With adequate governance in place, insurers must re-assess their internal controls and risk management procedures and set up a roadmap for optimising AI operating models. The paper published by EIOPA on the governance principles for trustworthy AI sets the foundation on which firms should build.

Insurance companies should focus on the five pillars of an effective AI risk management framework: governance, development and testing, operations, monitoring, and documentation. These are the key considerations for firms in designing their internal controls. The EIOPA principles’ focus on data governance and transparency further necessitates effective design, data collection, data preparation, modelling and evaluation as firms validate the work of their AI systems.

At Deloitte, we have developed a comprehensive controls and validation framework to support our clients with their AI and algorithm needs, derived from our extensive experience in the insurance sector and AI knowledge. If you would like to learn more about our AI and algorithm assurance offerings, please reach out and we will be happy to discuss.


Further reading and Resources:

Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector (EIOPA, europa.eu)

Ethics Guidelines for Trustworthy AI (European Commission, europa.eu)

References:

1. Procedural fairness relates to fair business conduct and governance measures. Distributive fairness focuses on outcomes and addresses the material outcomes of insurance distribution.

Key contacts

Mark Cankett

Partner

Mark is a Partner in our Banking & Capital Markets Audit Group in London. He is a leading member of our Benchmarks Assurance & Advisory team and a co-Chair of Deloitte’s Global IBOR Reform Steering Committee. Mark has 16 years’ experience across financial services audit and assurance, regulatory compliance, regulatory investigations and financial services disputes. This experience has provided him with a strong technical understanding of wholesale markets, financial benchmarks and related risk and control frameworks. His experience across the industry with respect to IBOR reform has provided him with a unique perspective on the regulatory reform agenda and he is actively assisting clients in this space at present.

Barry Liddy

Director

Barry is a Director within Banking & Capital Markets and a qualified accountant (ACA). He has over 15 years’ experience across industry and financial services. He has worked extensively with Investment Banks to enhance control frameworks and assess their design and operating effectiveness to ensure full regulatory compliance. Barry is currently a member of Deloitte’s Algorithm Assurance team, focused on helping clients comply with recent requirements under MiFID II (RTS 6). He specialises in supporting Banks, Asset Managers and HFT firms in ensuring they have strong controls in place to address the key risks associated with the development and use of algorithmic trading.

Dzhuneyt Yusein

Assistant Manager

Dzhuneyt is an Assistant Manager within the Markets Assurance team of the Banking & Capital Markets Audit & Assurance Group. Dzhuneyt’s experience covers major regulatory transformation projects such as the IBOR transition and MiFID II Transaction reporting for a variety of FS clients across regions, including Tier 1 banks. His core skills and previous experience result in his detailed understanding of internal control processes and frameworks in banks and other financial institutions.