Banking on the bots: unintended bias in AI

The use of AI in financial services is introducing new ethical pitfalls: unintended biases that are forcing the industry to reflect on the ethics of its new models.

Whilst the benefits of AI are clear, the potential unintended consequences have been less obvious. As AI introduces new decision-making methods, there is potential for discrimination, testing society’s moral obligations when decisions adversely affect some groups more than others.

Understanding the behaviour of AI is critical to detecting and preventing models that discriminate against or exclude marginalised individuals or groups. Whilst the opaque nature of AI can seem like ‘magic’ at times, that same opacity gives it the potential to hugely magnify existing societal issues in financial services.

There are a number of ways bias can manifest itself in AI:

First, in input data. AI systems are only as good as the data we put into them. Bias present in input data, for example gender, racial or ideological bias, as well as incomplete or unrepresentative datasets, will limit AI’s ability to be objective. Uncertainty around input use is also an issue: some methods of AI training may obscure how data is used in decisions, creating the potential for discrimination, for example if race or gender data were used in credit decisions or to price insurance premiums.
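To make this concrete, below is a minimal sketch of the kind of input-data audit a firm might run before training. The pandas DataFrame, its column names and the 80% threshold (a rule of thumb borrowed from US disparate-impact guidance) are illustrative assumptions, not details from this article.

```python
# Hypothetical audit of training labels for between-group gaps.
import pandas as pd

applications = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "M", "F"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   0],
})

# Approval rate per group: a large gap suggests the historical labels
# encode bias that a model trained on them would learn to reproduce.
rates = applications.groupby("gender")["approved"].mean()
print(rates)

# Disparate-impact ratio: flag if the lowest group rate falls below
# 80% of the highest (the illustrative "four-fifths" rule of thumb).
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Warning: approval-rate ratio {ratio:.2f} indicates a material gap.")
```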

Second, in development. Many AI systems will continue to be trained on the data their own earlier decisions generate, creating an ongoing cycle of bias. Subconscious bias or a lack of diversity among development teams may influence how AI is trained, carrying bias further forward in the model. Models can also be unpredictable: as market conditions evolve, it may be difficult to predict how models will respond, with resulting portfolio and macro implications.
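The feedback loop described above can be illustrated with a toy simulation, shown below. The group names, rates and naive retraining rule are all hypothetical; no real lender retrains this way, but the mechanism, fitting each new model to the previous model’s decisions, is the point.

```python
# Toy simulation: retraining on a model's own past decisions carries
# an inherited approval gap forward instead of correcting it.
import numpy as np

rng = np.random.default_rng(42)
# Two equally creditworthy (hypothetical) groups; the initial model
# approves group B far less often, an inherited bias.
approve_rate = {"group_a": 0.80, "group_b": 0.40}

for round_no in range(1, 6):
    for group, rate in approve_rate.items():
        # This round's decisions are sampled from the current policy...
        decisions = rng.random(5000) < rate
        # ...and the "retrained" model simply fits those past decisions,
        # so the gap persists round after round.
        approve_rate[group] = decisions.mean()
    print(f"round {round_no}: "
          + ", ".join(f"{g}={r:.2f}" for g, r in approve_rate.items()))
```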

Finally, in post-training drift towards discrimination. As AI systems continue to learn and self-improve after deployment, they may acquire new behaviours with unintended consequences, for example if an online lending platform began rejecting loan applications from ethnic minorities or women at higher rates than from other groups. Ultimately, this has the capacity to erode trust between financial institutions, humans and machines.
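One defence against this kind of drift is continuous monitoring of live decisions. The sketch below assumes a hypothetical decision log and an illustrative 0.8 threshold; in practice the metric and threshold would be agreed with compliance teams and regulators.

```python
# Hypothetical weekly monitor for drift in approval rates by group.
import pandas as pd

decisions = pd.DataFrame({
    "week":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "group":    ["A", "A", "B", "B"] * 3,
    "approved": [1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0],
})

weekly = decisions.pivot_table(index="week", columns="group",
                               values="approved", aggfunc="mean")
weekly["ratio"] = weekly.min(axis=1) / weekly.max(axis=1)

# Alert when the between-group approval ratio drifts below threshold.
for week, ratio in weekly["ratio"].items():
    if ratio < 0.8:
        print(f"week {week}: approval-rate ratio {ratio:.2f} breaches threshold")
```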

Furthermore, AI may make decisions harder to explain, compounding the impact of potential discrimination by making it harder to establish safeguards. Regulators often lack the technical expertise, time and resources to inspect algorithms, especially if development is poorly documented or there are persistent, system-wide gaps in governance. However, regulators are becoming increasingly mindful of the potential risks and unintended consequences of the use of AI by regulated firms, and are examining the issues in a more comprehensive and sophisticated way. Deloitte has looked in more detail at how financial services firms and regulators can work together to approach these challenges from a risk-based perspective.
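Explainability techniques can help close this gap. One common approach, sketched below with scikit-learn, is a global surrogate: fit a small, human-readable model to the black box’s predictions and read off the rules it appears to be using. The black-box function and feature names here are placeholders, not any firm’s actual model.

```python
# Global surrogate: approximate an opaque model with an interpretable tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 3))  # hypothetical features scaled to [0, 1]

def black_box(X):
    # Stand-in for an opaque credit model we cannot inspect directly.
    return (X[:, 0] - 0.5 * X[:, 1] > 0.2).astype(int)

surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box(X))

# The printed rules are an approximation; fidelity says how faithful it is.
print(export_text(surrogate, feature_names=["income", "debt", "age"]))
print("fidelity to black box:", (surrogate.predict(X) == black_box(X)).mean())
```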

Of course, AI will have profound benefits for those who participate in the financial world, both as part of the workforce and as consumers. These include credit card fraud detection, robo-advice, algorithmic stock trading applications and personalised chatbots. AI will also seep into other areas of our lives, for example driverless vehicles, insurance plans, job applications and even how we pick our next TV series.

We may not yet know, or be able to anticipate, everything we should be codifying into AI, but we do know that addressing the unintended consequences for society as AI transforms the financial system will require public-private cooperation. This will demand proactive collaboration between institutions and regulators to identify and address potential sources of bias in machine decisions and other exclusionary effects.

As the use of AI in financial services increases, it will become even more important to examine bias in the data. Regulators, in particular, are not going to be satisfied with the output of any algorithm if they cannot understand what underlies it.
