The scale of financial crime in the UK urgently needs to be addressed. The amount of money laundered here annually is estimated to exceed £100bn, yet it was recently reported that there have been zero prosecutions for money laundering since new regulations were introduced in 2017.
The reasons for this are numerous and complex. One factor is how monitoring rules are configured: cast the net too wide and there are too many false positives; too narrow and there are too many false negatives. The difficulty lies in calibrating these rules so that genuine wrongdoing is flagged without drowning investigators in noise.
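The trade-off can be illustrated with a minimal Python sketch. Everything here is hypothetical - the scores, labels and thresholds are toy values, not drawn from any real monitoring rule set - but it shows how loosening or tightening a single alert threshold swaps one kind of error for the other:

```python
def alert_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given alert threshold."""
    fp = sum(1 for s, flagged in zip(scores, labels) if s >= threshold and not flagged)
    fn = sum(1 for s, flagged in zip(scores, labels) if s < threshold and flagged)
    return fp, fn

# Toy data: (risk score assigned by a rule, whether the transaction was actually criminal)
scores = [0.2, 0.4, 0.55, 0.6, 0.7, 0.9, 0.95]
labels = [False, False, True, False, True, True, True]

for threshold in (0.3, 0.6, 0.9):
    fp, fn = alert_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

A low threshold catches everything criminal but buries analysts in innocent alerts; a high one keeps the queue short but lets real wrongdoing slip through. Calibration is choosing where to sit between those extremes.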
The UK’s labyrinthine network of AML ‘police forces’ doesn’t help: different professions are overseen by an array of different regulators, while both the FCA and the NCA are juggling multiple open investigations.
On top of this, a Treasury Committee chaired by Nicky Morgan concluded that the country’s AML efforts are ‘highly fragmentary’, an opinion that is hard to refute.
Financial activity is hard to track and even harder to assess.
The USA has taken an interesting stance — invoking a ‘tech amnesty’ that encourages firms to test AI-based AML systems against historical data sets without fear of prosecution if this new tech uncovers historical wrongdoing in their midst.
The UK authorities have yet to adopt such a tactic, but they might be wise to. Many large banks have already set up research labs to test new AI-based technology and, if the UK is to clamp down more effectively on criminal activity, others ought to be encouraged to follow suit.
Here are my top five ways a shift to AI-based solutions aided by a tech amnesty would see Britain taking the fight to financial criminals:
1. Black box benevolence
Much existing technology can’t be used because regulators are nervous about machine learning systems that spit out an answer without showing any of their working.
What we should be doing is paying more attention to the outcomes. The volume of SARs being generated is now so large that technology will inevitably have to be used to triage them, focusing on the ones that look most likely to be criminal.
The ‘black box’ label creates understandable uneasiness, but it is something of a misnomer. While regulators currently have so many SARs they don’t even know where to begin, AI can help to streamline that triage. The crucial aspect is human involvement: the parameters can be fully controlled by the operator.
Where AI can be really helpful is in taking away the drudgery and sheer volume of the work, freeing people to apply their talents to more important things.
What will also be useful is creating a feedback loop, where the system itself recognises when either too few or too many SARs are being raised and advises the company of this. Recognising that AI can mean a collaborative approach between man and machine should ease many concerns about the ‘black box’ nature of the systems.
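A feedback loop of this kind can be sketched very simply. The function below is a hypothetical illustration - the band of expected SAR volumes and the advice strings are invented for the example, not taken from any real compliance system - but it captures the idea of the system itself warning the operator when alert volumes drift out of range:

```python
def sar_volume_advice(weekly_sars, expected_low, expected_high):
    """Compare this week's SAR count against an expected band and advise the firm."""
    if weekly_sars < expected_low:
        return "below expected range - rules may be too narrow"
    if weekly_sars > expected_high:
        return "above expected range - rules may be too wide"
    return "within expected range"

# Illustrative check: 80 SARs this week against an expected band of 10-50.
print(sar_volume_advice(80, expected_low=10, expected_high=50))
```

In practice the expected band would itself be learned from historical data and reviewed by a human, keeping the operator in control of the parameters.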
2. Avalanche season
Legacy systems have been great at providing compliance officers with mountains of paperwork. It has been their job to sift through all the suspicious transactions coughed up by computers.
The problem is that most of them are perfectly innocent but every single red flag — and there can be hundreds of thousands produced in a matter of hours — has to be looked at individually by a real, live person. It’s an almost impossible task.
Yet it’s this human oversight on which companies are currently judged to be fulfilling their compliance obligations, even though the vast majority of the red flags produced by their systems should never have been raised in the first place.
This needs to change. It’s the way red flags are raised that we should focus on. If a programme can stop producing so many false positives through better threat identification, not only will a larger proportion of alerts turn out to be criminal, but reviewing them all won’t take as long.
It’s a virtuous circle of accuracy and efficiency meaning time is well spent for both companies and law enforcement agencies.
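The arithmetic behind that virtuous circle is easy to demonstrate. The numbers below are invented for illustration (as is the assumption of 30 minutes of analyst time per alert) - they are not real figures from any firm - but they show how cutting false positives improves precision and slashes review time at the same time:

```python
def review_workload(alerts, true_positives, minutes_per_alert=30):
    """Return (precision, total review hours) for a batch of alerts."""
    precision = true_positives / alerts
    hours = alerts * minutes_per_alert / 60
    return precision, hours

# Legacy system: a flood of alerts, almost all of them false positives.
legacy = review_workload(alerts=100_000, true_positives=1_000)
# Better threat identification: far fewer alerts, most of them genuine.
improved = review_workload(alerts=5_000, true_positives=900)

print(f"legacy:   precision={legacy[0]:.1%}, review hours={legacy[1]:,.0f}")
print(f"improved: precision={improved[0]:.1%}, review hours={improved[1]:,.0f}")
```

Even while surfacing most of the same genuine cases, the hypothetical improved system turns a one-in-a-hundred hit rate into nearly one in five, and a 50,000-hour review backlog into 2,500 hours.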
3. Catch me if you can
Everyone and their dog has been laying claim to using AI. Recruitment firms, investment platforms, even dishwashers (not kidding).
AI of the kind we use isn’t so much a set of rules or the ability to create rules. It goes much further than that.
AI-based platforms can sift through far more data and adapt their thinking without the need for further human instruction, which only slows things down. Machine learning algorithms can change the way they look for patterns of suspicious activity. Money launderers are changing tactics all the time - we need AML programmes that can try to pre-empt them.
And AI programmes can do this at the drop of a hat, testing and adapting thousands of times quicker than any human ever could.
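One simple flavour of this adaptability is online anomaly detection, where the baseline for ‘normal’ shifts with every new transaction. The class below is a minimal sketch of that idea - a rolling z-score over transaction amounts, with invented window and threshold parameters - not a description of any vendor’s actual algorithm:

```python
from collections import deque

class AdaptiveMonitor:
    """Illustrative sketch: flag transactions that deviate sharply from a
    rolling baseline, updating the baseline as each new transaction arrives."""

    def __init__(self, window=50, z_threshold=3.0):
        self.history = deque(maxlen=window)  # only the most recent transactions count
        self.z_threshold = z_threshold

    def score(self, amount):
        """Return True if the amount looks anomalous against recent history."""
        if len(self.history) < 10:
            self.history.append(amount)
            return False  # not enough data to judge yet
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        std = var ** 0.5 or 1.0  # avoid division by zero on a flat history
        flagged = abs(amount - mean) / std > self.z_threshold
        self.history.append(amount)  # baseline adapts with every transaction
        return flagged
```

Because the window slides forward, the notion of ‘normal’ is recomputed continuously - no analyst has to rewrite the rule when customer behaviour shifts, though the window and threshold remain human-set parameters.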
4. The intelligence dividend

An inefficient AML landscape means more people spend more time catching fewer criminals, fraudsters and money launderers.
This reduces the intelligence dividend — which is the overall benefit to law enforcement agencies over time. If we can increase the volume of effective intelligence, it is yet another opportunity for a virtuous circle that will ultimately result in more prosecutions and fewer offences.
The picture at the moment is somewhat worrying. Law enforcement agencies are showered with more reports of suspicious activity than they could ever deal with - 463,938 in the past year. This is the biggest factor holding back the intelligence dividend, and it is driven by a desire among those who must comply with AML regulations to show they are trying to identify wrongdoing. That’s understandable, but many of these SARs are unnecessary, and AI will allow companies to strip back their redundant (false positive) SARs with confidence.
5. Obsolescence will be obsolete
Cost has been a huge barrier to firms adopting the very latest technology. But as with all forms of technology, it typically gets better and cheaper over time.
We’ve seen it happen with everything from VCRs to mobile phones and personal music players. It’s now the turn of the AML industry.
Over the next decade, expect dedicated AML SaaS solutions to become the norm. Because they update themselves, these programmes will effectively evolve inside client businesses rather than become obsolete as the underlying technology moves on. There will be no excuse for not having the most up-to-date solutions.
A company installing an AI-based compliance system in 2019 will not suddenly find itself in need of an upgrade a few years down the line. These systems are future-proof, and will remain the vanguard against suspicious financial activity for years to come.
Julian Dixon, CEO, Napier