The status quo is not sustainable. Some recurring problems are basic: on-boarding processes are not effective for particular clients; controls are not always properly documented; processes tend to remain “tick box” exercises rather than identifying risks; and transaction monitoring systems generate overwhelming numbers of false positives.
But more fundamentally, the way in which financial crime risks are currently managed does not provide the foundations for sustainable improvement towards systems that are better able to prevent financial crime.
Internal structures are not efficient, with responsibilities overlapping across multiple teams. Different elements of oversight can be siloed, with change in one area (e.g. fraud or sanctions) not being pulled through to others (e.g. anti-money laundering). Fixing this requires top-down organisational change, but also changes in how financial crime officers implement controls on the ground.
Backlogs will not come down without extra resource, whether hired externally or trained internally. But the Russia sanctions episode demonstrates that resourcing models must also be able to flex to accommodate external “shocks” that would otherwise draw significant resource away from business-as-usual processes.
Financial crime operating model reform must also incorporate technology strategy. Supervisors expect firms to be using technologies such as artificial intelligence (AI) and machine learning (ML), particularly given that supervisors themselves use these tools to analyse significant volumes of data and target follow-up requests more accurately.
A central use case for new technology is reducing the volume of false positives generated by transaction monitoring systems, freeing up expert resource for higher value-added work. For firms not already moving in this direction, the starting point should be a review of existing systems to identify whether they support these new approaches; if they do not, firms should consider moving to tools that do. But adopting a new tool (and potentially changing vendor) is not a quick fix: a large institution could expect the process to take two years or more. Nor should these technologies be treated as “plug and play”: they depend on data quality, training and proper governance, and should be subject to model risk management.
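To make the false-positive triage idea concrete, the sketch below shows one common pattern in miniature: a model is trained on historical alert outcomes and then used to rank new alerts by estimated risk, so analysts work the highest-risk alerts first. This is an illustrative toy, not any vendor's system; the features, weights and training data are invented for the example, and a production model would need the data quality, governance and model risk management controls described above.

```python
import math

# Hypothetical labelled history of past alerts. Each row is a feature tuple
# (amount_zscore, customer_risk_rating, prior_true_positive) and a label,
# where 1 means the alert was ultimately a true positive. Values are
# illustrative only, not real typologies.
HISTORY = [
    ((3.2, 0.9, 1.0), 1),
    ((2.8, 0.7, 0.0), 1),
    ((0.1, 0.2, 0.0), 0),
    ((0.4, 0.1, 0.0), 0),
    ((2.5, 0.8, 1.0), 1),
    ((0.3, 0.3, 0.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(history, lr=0.5, epochs=2000):
    """Fit a small logistic model by plain per-sample gradient descent."""
    n = len(history[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in history:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def triage(alerts, w, b, budget=2):
    """Rank new alerts by estimated risk; return the top `budget` for review.

    Lower-scoring alerts would be deprioritised (not silently discarded),
    which is where the false-positive reduction comes from.
    """
    risk = lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return sorted(alerts, key=risk, reverse=True)[:budget]

w, b = train(HISTORY)
new_alerts = [(0.2, 0.1, 0.0), (3.0, 0.9, 1.0), (0.5, 0.2, 0.0)]
print(triage(new_alerts, w, b, budget=1))  # highest-risk alert first
```

In practice the ranking model would be trained on far richer features and imbalanced data, and the review "budget" reflects the resourcing constraints discussed earlier: the point of the scoring layer is to spend scarce analyst time where the model estimates the risk is highest.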