
How insurance companies can safely use AI in a regulated environment

Deloitte Lucid [ML] creates transparency in the use of machine learning models

Artificial intelligence opens up a promising range of optimization options and could revolutionize the insurance industry across the entire value chain – from product development, to a policyholder's customized risk profile, to fully automated claims handling. On the one hand, AI-powered innovation may raise customer expectations, creating pressure to innovate. On the other, insurance is a heavily regulated industry, more inclined to conservatism than innovation. How can insurers align the innovation imperative with regulation, maintaining compliance and avoiding potential risks?

AI tools must be understandable

The use of AI in the insurance industry can increase the rigor of risk transfer decisions or have an even more transformative effect. "Deep learning" neural networks are able to account for a wide range of dimensions ("features") and recognize complex interactions (patterns). Deep learning thrives on complexity, and at the same time suffers from it: in order to master complexity, the models themselves become complex – and opaque. Even if their recommendations represent the best course of action, they may find little acceptance if poorly understood – by management and regulators alike.
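
To make the opacity concrete, consider how many free parameters even a small network carries. The sketch below is a minimal illustration using scikit-learn on synthetic data (not any insurer's actual model): it trains a modest three-layer network and counts its weights.

```python
# A minimal sketch (scikit-learn, synthetic data) of why even a modest
# "deep" model resists direct inspection: counting its parameters.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for policyholder data: 20 risk features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A small network by industry standards: three hidden layers.
model = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
model.fit(X, y)

# Every weight and bias interacts non-linearly with the rest; no single
# coefficient "explains" a decision the way it would in a linear model.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Trainable parameters: {n_params}")  # several thousand for this toy problem
```

Even this toy network carries several thousand interacting parameters; production models are typically far larger still.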

The EU General Data Protection Regulation (GDPR) requires that profiling and automated decision-making systems be comprehensible. Explainability is indispensable, both for the companies using AI and for their customers (Art. 22 GDPR). Only systems exhibiting sufficient transparency are permissible – whether to identify cases where the "algorithm" has been poorly trained or developed, or to fulfill supervisory requirements and respect the rights of the consumer. For example, GDPR affords the consumer the right to demand the specific reasons why an application was rejected.

AI "black boxes" as a risk

A recent workshop organized by Deloitte for a working group of the Leipzig Insurance Forum affirmed interest in the topic. Twenty-five experts from insurance companies of various sizes and lines of business took part. Their collective aim: to work out the most material risks to innovation in the insurance sector. Concern about AI "black boxes" played a central role, identified as a significant barrier to the acceptance of new technologies and a "hot topic" of great relevance and urgency. The general consensus was that decisions must always be comprehensible, even if this seems an insurmountable challenge given the multitude of parameters, layers and nodes of a neural network or similar AI algorithm.

Deloitte aiStudio creates transparency with Lucid [ML]

Deloitte recognized both the promise and the risks of AI early on, which motivated the Deloitte aiStudio to develop the Explainable AI (XAI) tool "Lucid [ML]." Lucid casts light into the "black box," presenting the driving factors of a Machine Learning (ML) model in an intuitive way – at both a "global" and a "local" level. At the "global" level, Lucid provides a general understanding of the ML model's inner workings. At the "local" level, Lucid explains the drivers behind a specific risk decision – for example, a policy application or a claims submission.
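
Lucid [ML] itself is proprietary, but the global/local distinction it draws is a standard one in explainable AI and can be illustrated with the open-source SHAP library. The sketch below uses synthetic data and invented feature names; it computes a global ranking of drivers across the whole portfolio, then a local explanation for a single application.

```python
# Illustration of "global" vs. "local" explanations using SHAP
# (not Lucid [ML]'s implementation; data and feature names are invented).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

features = ["age", "sum_insured", "prior_claims", "region_risk"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# "Global" view: which features drive the model overall?
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, global_importance), key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.3f}")

# "Local" view: why was this one application scored the way it was?
applicant = 0
for name, contrib in zip(features, shap_values[applicant]):
    print(f"{name:>12}: {contrib:+.3f}")
```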

Lucid [ML] applies a variety of methods to achieve transparency, some of which are analogous to stress testing: Lucid generates thousands of scenarios, observes how the ML model responds, and iteratively deduces the key drivers. The use of multiple methods is a particular strength of the tool, as it additionally assesses the robustness of the model under examination: inconsistent outcomes across methods indicate an unstable underlying ML model. Lucid [ML] is model-agnostic, able to examine any ML model, and flexible enough to accommodate multiple input formats: tables (numbers), images and text.
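
A rough sketch of the perturbation idea described above (not Lucid [ML]'s actual implementation): stress each feature with random scenarios, measure how the model's output shifts, and cross-check the resulting ranking against a second method – here, scikit-learn's permutation importance. Agreement between the two rankings suggests a stable model; divergence is a warning sign.

```python
# Perturbation-based driver analysis, cross-checked against a second method.
# Synthetic data; a conceptual illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
baseline = model.predict_proba(X)[:, 1]

# Method 1: stress each feature with random noise scenarios and measure
# the average shift in the model's predicted probability.
sensitivity = []
for j in range(X.shape[1]):
    shifts = []
    for _ in range(50):  # 50 perturbation scenarios per feature
        X_pert = X.copy()
        X_pert[:, j] += rng.normal(scale=X[:, j].std(), size=len(X))
        shifts.append(np.abs(model.predict_proba(X_pert)[:, 1] - baseline).mean())
    sensitivity.append(np.mean(shifts))

# Method 2: permutation importance on the same model.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for j in range(X.shape[1]):
    print(f"feature {j}: perturbation={sensitivity[j]:.3f}  "
          f"permutation={perm.importances_mean[j]:.3f}")
# Rankings that agree across both methods suggest a stable model;
# rankings that diverge indicate instability worth investigating.
```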

XAI tools ("explainable AI") like Lucid [ML] pave the way for a regulatory compliant and transparent future in the application of AI. Business leaders can more easily gain comfort with AI recommendations. Supervisors can be satisfied that their requirements are met. Data scientists developing the AI innovations also benefit: through greater understanding, they can identify unstable models, optimize the algorithms, and more easily communicate innovations to their stakeholders.

The bottom line: companies no longer need to choose between enjoying the benefits of AI and remaining compliant – they can have both. That is highly relevant to insurers, and indeed to anyone using machine learning. AI explained. Lucid [ML].

Explore our aiStudio