NextGen AML: data-driven change
Making the most of data to transform AML
Any anti-money-laundering approach, and certainly a next-generation one, is fuelled by digital data. Fortunately, in our digital era there is no lack of available data to analyse. Indeed, financial institutions and other parties in the financial ecosystem are already struggling to handle the sheer mass of data generated by financial transactions and KYC processes. To leverage this valuable resource for AML, FIs must climb the data maturity ladder.
The current fight against money laundering involves a huge amount of data crunching by financial institutions (FIs). FIs in the Netherlands have plenty of data to work with, but many still lack the foundations to benefit from it. Due to IT migrations over the years (often carried out with scant attention to data management) and variations in tooling across groups, data is often scattered across a multitude of IT systems and, in some cases, even hard-copy files. This problem is perhaps worst for the older banks, which have foreign branches and a long history of mergers and acquisitions.
To make AML processes more effective (producing more valuable insights for the investigation of financial crime) and more efficient (fewer false positives), FIs are going through a data maturity journey. The first step focuses on data remediation: improving the quality of the data. Each FI faces the Herculean task of standardising its data and transferring it to its own centralised data lake or data mart. FIs are already tackling this inescapable task, but with traditional resources and approaches it is slow and painstaking work.

Standardising and centralising the data is not enough, however. Sometimes, data that is meant to fuel the AML machine proves to be sand in the gears. This happens when the centralised data is incorrect. To make it suitable for automated processes, the bank needs to review its client information, repairing errors and supplying missing data. At most financial institutions in the Netherlands this review cycle is well under way as part of the mandatory Know-Your-Customer (KYC) procedures. All the while, though, new data is generated and old data becomes obsolete as client details change. Getting a grip on this constantly changing mass of data, and keeping sand out of the gears, will get easier over time. How? Thanks to greater awareness, growing expertise and better technology.
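The remediation step can be sketched in miniature: mapping legacy field names onto a common schema and flagging records that still need repair before they are loaded into the data lake. Everything here (the schema, the alias table, the field names) is hypothetical, a simplified illustration rather than any FI's actual tooling.

```python
# Hypothetical target schema; in practice this would come from a shared data dictionary.
REQUIRED_FIELDS = {"client_id", "full_name", "date_of_birth", "address"}

# Per-source aliases: legacy systems often name the same attribute differently.
FIELD_ALIASES = {
    "name": "full_name",
    "dob": "date_of_birth",
    "addr": "address",
}

def standardise(record: dict) -> dict:
    """Map legacy field names onto the common schema and trim stray whitespace."""
    out = {}
    for key, value in record.items():
        canonical = FIELD_ALIASES.get(key, key)
        out[canonical] = value.strip() if isinstance(value, str) else value
    return out

def remediation_issues(record: dict) -> list:
    """List required fields that are missing or empty and need repair."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

# A record as it might arrive from a legacy system: aliased fields, padding, gaps.
legacy = {"client_id": "C-001", "name": " Jane Doe ", "dob": "1980-02-29"}
clean = standardise(legacy)
print(remediation_issues(clean))  # the address still has to be supplied
```

The point of the sketch is that standardisation (the alias mapping) and remediation (the issue list) are separate, repeatable steps, which is what makes the review cycle automatable.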
The next stage is about automating and optimising data management, and strengthening AML with good data. This step is made easier by technical innovations like chatbots and image processing. They lead to better and smoother online client processes, which in turn ensure that correct and consistent data, including good-quality images, is provided by clients directly in digital form. A more structured approach to managing digital data also includes active data quality monitoring. Clients are thus regularly reminded – and incentivised! – to update their details themselves. Details such as name changes after marriage, new beneficial owners of legal entities, address changes or an expansion of the client's product portfolio need to be taken into account when optimising client data. As data is collected, the system can also enrich it with contextual data, enabling a wider and deeper understanding of client structures and behaviour. An FI with its own data lake full of such optimised data is halfway up the data maturity ladder. It is ready to start developing more advanced analytical models (watch out for our next blog!) and can achieve vastly better AML outcomes.
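Active data quality monitoring of the kind described here boils down to running policy rules over client records and turning the hits into client reminders. The rules below (an expired identity document, an overdue periodic review) and their thresholds are invented for illustration; each FI would set its own monitoring policy.

```python
from datetime import date, timedelta

# Illustrative policy: records older than a year trigger a review reminder.
MAX_REVIEW_AGE = timedelta(days=365)

def needs_refresh(record: dict, today: date) -> list:
    """Return the reasons why a client should be asked to update their details."""
    reasons = []
    if record["passport_expiry"] <= today:
        reasons.append("identity document expired")
    if today - record["last_reviewed"] > MAX_REVIEW_AGE:
        reasons.append("periodic review overdue")
    return reasons

client = {
    "passport_expiry": date(2021, 5, 1),
    "last_reviewed": date(2020, 1, 15),
}
print(needs_refresh(client, today=date(2022, 3, 1)))
# → ['identity document expired', 'periodic review overdue']
```

Because the rules are data, not code paths buried in a legacy system, new triggers (a change of beneficial owner, a new product in the portfolio) can be added without rebuilding the pipeline.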
But just imagine how much more insight could be gained from combining and relating data within and across institutions in a safe manner. Pooling data is the final step that will bring FIs to the top of the data maturity ladder. Within the AML domain, it is widely recognised that data sharing across institutions, and maybe even countries, is required to fight crime. The Netherlands is a front runner in this space, for example with the pioneering initiative Transaction Monitoring Nederland (TMNL). The Financial Action Task Force (FATF) recently published its Stocktake on Data Pooling, Collaborative Analytics and Data Protection. This publication, based on discussions with a range of experts worldwide, makes the case for data sharing and collaborative analytics. Given that FATF's standards and recommendations will find their way into US and European legislation, this can be seen as an important step towards more sharing. A series of approaches to sharing data have been trialled and described, but we still need to reach conclusions about the legal conditions and processes within which such sharing can be done responsibly.
What about privacy?
FATF does cite data privacy as a challenge in the context of data pooling. Indeed, data pooling may seem at odds with privacy regulations and interests. But the GDPR is not, in fact, a set of prohibitions per se. It also defines guidelines and processes for using data responsibly: for a specific, legitimate purpose, in a way that is proportional to that purpose and explainable to stakeholders. Fighting financial crime, given its large societal impact, is without doubt such a legitimate purpose. This doesn't justify all kinds of data sharing, though. It simply implies that stakeholders and actors should follow due process in deciding which data can be shared for which purpose, and under which conditions. Additional guidance from regulators and/or more precise AML legislation would be invaluable in going through this process. Beyond due process and legal boundaries, there is also technical innovation. Privacy-enhancing technologies (like those described in the FFIS project) can open up new methods for sharing, while at the same time protecting the interests of ordinary, law-abiding consumers and businesses. This technology is still early on its maturity curve, but it may provide strong practical solutions for reaching the goal of responsible data sharing.
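To give a feel for what privacy-enhancing sharing means, here is a deliberately naive sketch: two institutions compare pseudonymised client identifiers so they can detect overlap without exchanging the raw identifiers. Real privacy-enhancing technologies go much further (secure multi-party computation, homomorphic encryption); salted hashing with a shared secret is only a didactic simplification, and the identifiers and salt below are made up.

```python
import hashlib

# Shared secret agreed between the participating institutions (illustrative only;
# plain salted hashing is NOT a production-grade privacy-enhancing technology).
SHARED_SALT = b"demo-salt"

def pseudonymise(identifier: str) -> str:
    """One-way token that lets institutions match clients without exposing IDs."""
    return hashlib.sha256(SHARED_SALT + identifier.encode()).hexdigest()

# Each bank contributes only pseudonyms, never raw account identifiers.
bank_a = {pseudonymise(c) for c in ["NL01-12345", "NL01-67890"]}
bank_b = {pseudonymise(c) for c in ["NL01-67890", "NL01-11111"]}

# The intersection reveals that one client appears at both banks — and nothing else.
overlap = bank_a & bank_b
print(len(overlap))  # → 1
```

The trade-off this sketch illustrates is the essence of collaborative analytics: the signal (a client active at multiple institutions) is shared, while the underlying personal data stays at home.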
Financial data institute
At the top of the data maturity ladder, there would ideally be a non-profit organisation with a dual purpose. The first purpose would be to uphold the standardisation and quality of data across the entire financial sector. This service would be offered in the private and possibly also the public domain. The organisation could produce common data dictionaries (the granular definitions of "passport", "first name", etc.). Secondly, this same organisation could centralise the process of data collection, validation and storage for the clients of FIs. What if each client (natural person or business entity) had their own "virtual data safe" where they store all the documentation required to be onboarded by FIs? With the client (as owner of the safe) deciding to whom, for what purpose and under which conditions the documentation is provided? Such a set-up would greatly enhance client data collection by FIs as well as the customer experience, while at the same time securing the data and providing transparency. This could form the basis of an industry-wide KYC utility, which we see being trialled at various locations around the globe.
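The "virtual data safe" idea is, at heart, a consent model: documents live with the client, and an FI only receives one if the client has granted it for a specific purpose. A minimal sketch of that data structure, with invented names and an in-memory store standing in for whatever real infrastructure such a utility would use:

```python
from dataclasses import dataclass, field

@dataclass
class DataSafe:
    """Client-owned store of onboarding documents with purpose-bound consent."""
    documents: dict = field(default_factory=dict)  # document name -> content
    grants: dict = field(default_factory=dict)     # (FI, purpose) -> document names

    def grant(self, fi: str, purpose: str, doc_names: list):
        """The client, as owner of the safe, decides who may see what, and why."""
        self.grants[(fi, purpose)] = set(doc_names)

    def fetch(self, fi: str, purpose: str, doc_name: str):
        """An FI receives a document only under an explicit, purpose-bound grant."""
        if doc_name in self.grants.get((fi, purpose), set()):
            return self.documents[doc_name]
        raise PermissionError(f"no consent for {doc_name} ({fi}, {purpose})")

safe = DataSafe(documents={"passport": "passport-scan", "ubo_register": "ubo-extract"})
safe.grant("BankA", "onboarding", ["passport"])
print(safe.fetch("BankA", "onboarding", "passport"))  # → passport-scan
```

Note how the purpose is part of the key: a grant for onboarding does not automatically extend to, say, marketing — which mirrors the GDPR's purpose-limitation principle discussed above.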
Added value for AML
The journey up the data maturity ladder is tough and laborious, but essential. Without it, there can be no NextGen AML. However mature our data is, though, it still needs to be analysed, and traditional tools, even if they could handle the sheer mass of data available, cannot extract real intelligence from it. Fortunately, new technology, based on artificial intelligence, is emerging that can turn all this data into relevant information, and make AML a lot smarter.
Find out more in our blog on technology-driven change in AML.
Click here for an overview of the blog series