Trustworthy AI and effective financial crime detection: not a zero-sum game

AI-powered software can revolutionise the process of detecting financial crime, but what about privacy, fairness, transparency and stakeholder rights? Will the EU’s Trustworthy AI legislation limit possibilities? No, say Deloitte’s experts. It’s a matter of being responsible while applying smarter AI.

For financial and public sector institutions, corporate responsibility is a must to maintain market confidence and public trust. Financial crime can undermine that responsible image, and expose them to legal and regulatory risks as well. That is why they work hard to detect it. According to Deloitte’s experts Hilary Richters, Frank Cederhout and Bart Witteman, organisations can reap the full benefits of AI in financial crime detection without overstepping ethical lines.

Financial and public institutions monitor financial crime to maintain market confidence and public trust, while avoiding financial loss, reputational damage and legal sanctions. The traditional approach to detecting financial crime is based on rules that reflect human experience and expertise. Automation has made this substantially faster over the years, but not necessarily smarter. Criminals, meanwhile, are getting better at hiding financial crime, and they are adopting technology to commit fraud faster than organisations are adopting technology to defend against it. Advanced analytics using artificial intelligence (AI) can be a real game changer in this setting. Dynamic, self-learning algorithms can analyse huge sets of data very quickly to detect patterns and outliers that may signal financial crime. Using AI can dramatically boost efficiency and lower costs, but it also brings new challenges and ethical expectations.
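To make this concrete, here is a minimal sketch of how an unsupervised, self-learning detector might surface outlier transactions. It uses scikit-learn’s IsolationForest; the feature names, data and assumed outlier share are illustrative assumptions, not a description of any production system.

```python
# Minimal sketch: flagging outlier transactions with an unsupervised model.
# Feature names, values and the assumed outlier share are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features (amount, hour of day, txns in last 24h).
transactions = pd.DataFrame({
    "amount":        [120.0, 75.5, 9800.0, 60.0, 15200.0, 45.0],
    "hour_of_day":   [14, 9, 3, 16, 2, 11],
    "txn_count_24h": [2, 1, 14, 3, 21, 1],
})

# Train on the full set; contamination is a guess at the outlier share.
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(transactions)

# Score before adding result columns, so the feature set stays unchanged.
# Lower decision_function scores are more anomalous; predict() returns -1
# for candidate outliers and 1 for inliers.
scores = model.decision_function(transactions)
labels = model.predict(transactions)
transactions["anomaly_score"] = scores
transactions["flagged"] = labels == -1

# Flags are leads for human review, never automatic accusations.
print(transactions[transactions["flagged"]])
```

In practice such flags would feed a human review queue rather than trigger automatic action, a point this article returns to below.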

Society raising the bar

In recent years, society has raised the bar for financial crime prevention and detection with stricter laws and rules. Financial crime has a huge effect on trust in society and the financial system, so society expects the organisations involved to show leadership: not just to obey the law, but to meet high ethical standards, and to do so not only for their own protection or reputation, but because it is simply ‘the right thing to do’.

Lately there has been much debate on the impact of financial crime in the financial services industry and the public sector. Money laundering and fraud occur daily and represent a serious financial risk, with over 19 billion euros’ worth of transactions deemed suspicious in the Netherlands alone [1].

Organisations such as banks, insurance companies, pension funds and benefits agencies have access to huge sets of (personal) data about clients and citizens. This seemingly offers a wonderful opportunity for AI applications, and the temptation is strong to fully unleash AI on these data to produce lots of interesting insights. One practical problem is that the quality of the data is patchy, with a fair amount of incorrect and obsolete data contaminating the database. Another is that the scope of the data is limited to one side of a transaction rather than a whole network or chain of transactions. How seriously can we take “red flags” arising from such data, and is it ethical to classify clients as suspects of financial crime on such shaky footing? Moreover, how can we properly weigh the fight against financial crime against other societal values such as privacy and fairness?

The fundamental challenge

When using AI to connect various data sets, the key challenge is maintaining trust. Organisations cannot function without the trust of their stakeholders and society at large. Detecting, preventing and correcting financial crime is a cornerstone of that trust, but the detection methods used must themselves be in line with our collective values. One possible effect of using AI is a reliance on decisions made without human intervention, which complicates accountability within an organisation. Who is accountable, given that an algorithm cannot be responsible on its own? Algorithms must also be very carefully designed and trained to avoid undesirable results. Everyone applauds zero tolerance for criminals, but society has less than zero tolerance for software that criminalises innocent people. Meanwhile, AI-based financial crime detection is a data-hungry process: criminal patterns can only stand out if masses of “innocent” data are included in the analysis. But everyone who is monitored is affected, and people’s privacy and the related legislation need to be respected.

Ethical issues

When detecting financial crime with AI-based technology, it is important to take ethical considerations into account. The EU’s Ethics Guidelines for Trustworthy AI set out seven key requirements for AI to be trustworthy. We highlight three of them here:

  • Privacy and data governance: Privacy is a human right that needs to be respected in financial crime detection processes, as everywhere else. Doing so does not have to leave financial crime detection toothless, however. Privacy laws explicitly contain provisions that make the use of data possible, as long as it is proportionate and adequate safeguards are in place. It is also important to check whether there are other, less intrusive means to reach the objectives. That said, the “wiggle room” provided must be used ethically. When processing sensitive categories such as political opinion, sexual orientation, health, ethnicity or criminal record, organisations must be very circumspect. Nobody wants a dystopian system that automatically denies a bank account to people from a specific country, for example.
  • Explainability: AI systems can be so complex that understanding how they reach a decision becomes near impossible, something known as the ‘black box’ phenomenon. While explainability and human interpretability are not of great concern in some applications of AI (labelling images of cats, for example), they definitely are when the algorithm’s decisions can have a large impact on people’s lives. In such cases, humans need to be able to understand the reasons for decisions made by these AI systems (a minimal sketch of one common interpretability technique follows this list). Only by interpreting the algorithm can bias be mitigated and fairness be safeguarded.
  • Transparency: It also needs to be transparent when and where AI systems are being deployed within a decision-making chain. When detecting financial crime with AI, human professionals should always be ‘in the loop’ of the decision-making process, with the ability to alter or overrule decisions if necessary. Furthermore, the people affected should have access to redress for any automated decisions made about them regarding financial crime, whether on an individual basis or through an industry regulator. This means it must always be possible to audit a decision-making process (making explainability even more important).
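As promised above, here is a minimal sketch of one widely used interpretability technique, permutation importance, applied to a toy fraud classifier. The data is synthetic and the feature names are assumptions for illustration; in practice richer tooling such as SHAP or LIME would typically complement this.

```python
# Sketch: inspecting which features drive a toy fraud classifier's decisions.
# The data is synthetic and the setup is illustrative, not a real pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))  # hypothetical: amount, hour, velocity (scaled)
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)  # synthetic "fraud" label

clf = GradientBoostingClassifier().fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in zip(["amount", "hour", "velocity"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Reading the output tells a reviewer which inputs the model actually leans on, which is a precondition for spotting proxies for sensitive attributes such as ethnicity.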

The role of regulation

The EU’s upcoming Trustworthy AI regulation considers these ethical issues (and more) within its key requirements, and national regulators are adopting these in their own guidance. The draft legislation identifies the public and financial services sectors, and applications like financial crime detection, as high-risk when it comes to using AI solutions, meaning that such uses will be subject to its compliance requirements. The regulation differs from existing regulation in that it is principle-based rather than rules-based. Privacy authorities and national regulators in Europe have been closely watching developments in the AI space and can also offer organisations guidance. In the Netherlands, the central bank (DNB) and market watchdog Autoriteit Financiële Markten (AFM) are taking the role of AI in the financial system seriously, listing ten points of attention for organisations using AI, while the Algemene Rekenkamer has developed an audit framework for algorithms used in the public sector [2]. Meanwhile, the Information Commissioner’s Office (ICO) in the UK sets another positive example, with a regulatory “sandbox” that enables organisations to test their AI applications in a controlled regulatory environment before launching them [3]. Together these measures should encourage organisations to use AI in a responsible and societally acceptable manner.

What do organisations need to do?

Timing is important. Investigating and resolving ethical issues is best done early in the design phase and revisited throughout development and deployment. Algorithms have no conscience: they do what their designers tell them to do. The development team therefore has to be trained and made aware not just of effective financial crime detection, but also of the ethical issues that could arise and the safeguards they can build in to head off unwanted results. Moreover, there needs to be a comprehensive and robust process to continuously monitor the application and adjust it to changes in society and human behaviour caused by external factors, such as COVID-19. This may seem restrictive and costly, but avoiding negative effects like discrimination before they materialise will pay off in smoother stakeholder acceptance. Organisations should also take a close look at their data governance and consider steps towards greater maturity in their ethical use of AI. After all, they are not singling out targets for an email campaign, but for exclusion and prosecution.
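What such continuous monitoring can look like in practice is sketched below: a simple two-sample Kolmogorov-Smirnov test comparing a feature’s distribution at training time with a recent window. The window sizes, distributions and significance threshold are assumptions for illustration, and a real monitoring process would track many features and model outputs this way.

```python
# Sketch: detecting drift in one monitored feature between a reference window
# (training period) and a recent window. All numbers here are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(loc=100, scale=20, size=5000)  # e.g. amounts at training time
recent = rng.normal(loc=130, scale=25, size=1000)     # e.g. amounts after an external shock

# Two-sample KS test: has the feature's distribution shifted significantly?
stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:  # assumed significance threshold
    print(f"Drift detected (KS={stat:.3f}); model review or retraining may be needed.")
else:
    print("No significant drift in this feature.")
```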

Above all, organisations must understand that a rules-based approach is not enough, not only because the law leaves so much grey area, but also because society expects more from them. While the legal framework is under development, organisations would therefore do well to define for themselves which ethical principles will guide the way they use AI. In this process, the EU’s thinking on Trustworthy AI is valuable input. Organisations that follow this recipe can build trust while efficiently and effectively fighting financial crime.

[1] https://www.fiu-nederland.nl/sites/www.fiu-nederland.nl/files/documenten/fiu nederland_jaaroverzicht_2019_en_0.pdf

[2] Understanding algorithms, Netherlands Court of Audit (rekenkamer.nl)

[3] https://ico.org.uk/sandbox

Trustworthy AI is very much on the radar of Deloitte’s Digital Ethics team. Whatever question you have about AI, the ethical implications or the related risk and compliance issues, it is one we have already asked ourselves. As you embark on your AI adventure, we can support your company with expert advice and state-of-the-art tools.
As such, Trustworthy AI fits into Deloitte’s broader ambition to do responsible business and help its clients do the same. The Deloitte network is committed to driving societal change and promoting environmental sustainability. Working in innovative ways with government, non-profit organisations, and civil society, Deloitte is designing and delivering solutions that contribute to a sustainable and prosperous future for all.

More information?

For more information, please contact Bart Witteman, Hilary Richters or Frank Cederhout via the contact details below.
