ThirdEye View

Asset recovery and AI in financial crime

This month, we celebrate significant asset recovery wins out of New Zealand, unpack how the regime works, and share what senior compliance leaders really think about AI, plus what AUSTRAC expects from all of us.

Why asset recovery deserves more attention

Asset recovery doesn’t always get the recognition it deserves. Criminal prosecution achieves the AML goals of detecting, deterring, and disrupting crime, but it’s slow and resource-intensive. Asset recovery is leaner and hits criminals where it hurts: their ability to fund the next round of offending.

The mechanics are straightforward. Police need reasonable grounds to seize an asset, after which the burden of proof reverses: it falls on the asset holder to prove the funds were clean. Contesting a seizure risks a much larger investigation, so many people choose not to challenge it. In both New Zealand and Australia, recovered proceeds are directed into crime reduction funds, so criminals lose their assets and those assets go on to fund the effort to catch the next ones.

Three cases, three profiles, NZ$30 million

New Zealand delivered three successful asset recovery outcomes in 2026, each involving a different type of offender.

New Zealand’s biggest fraudster. A man described in media reports as New Zealand’s biggest fraudster was ordered to forfeit assets worth approximately NZ$16 million — eight properties, ten vehicles, farm machinery, a boat, and other assets. Read more. 

An overseas operator using NZ as a laundromat. A British citizen running fraud schemes in the United States set up companies and trusts in New Zealand to clean the proceeds. NZ$10 million held in New Zealand bank accounts was forfeited through the High Court. Read more. 

A professional money launderer. Someone described by the court as a professional money launderer forfeited more than NZ$3 million in Auckland properties and luxury vehicles, following a covert police investigation. Read more.

Around NZ$30 million across three cases. None of it happens without intelligence, and that intelligence starts with reporting entities submitting SARs. To the banks, fintechs, and financial institutions doing that work: this is what it leads to. 

What we heard at our AI roundtable

ThirdEye recently hosted senior compliance and financial crime leaders for lunch on the 89th floor of Eureka Tower in Melbourne. The session was facilitated by Daniel Saade, our Head of Revenue, with Neil Jeans of Grant Thornton, who brings more than 30 years in AML, as our expert speaker. Here’s what came out of it.

AI adoption is real, but no one has handed it the keys

Most hands went up when asked who had already deployed AI in some capacity: transaction monitoring, alert prioritisation, fraud detection, mule monitoring. But no one had relinquished control. Neil framed it well: what we’re seeing is a hybrid model, where AI enhances existing rules-based systems rather than replacing them. For those still in wait-and-see mode, that’s a defensible position for now, but the window won’t stay open indefinitely.

The false positive problem is the clearest business case

The shared pain point in the room was stark: too many alerts, too many false positives, not enough people. One participant described spending most of their time re-vetting false positives rather than investigating genuine risk; another cited a 90% false positive rate. This is where AI’s business case stacks up most clearly: triaging alerts, filtering noise, and freeing compliance professionals to focus on work that requires their judgement. As Neil put it, AI doesn’t sleep. Trust still takes time to build, but the use case is obvious.

Governance is the real friction point

Significant internal friction was a common theme: model governance teams, legal and privacy functions, and risk frameworks built before AI existed all create bottlenecks. The practical message: don’t treat governance as a tick-box at the end. Get your privacy, legal, and model risk teams involved from day one, and be able to demonstrate your AI is working within defined parameters.

AUSTRAC’s position is clear — accountability stays with you

Neil was direct. AUSTRAC permits and encourages responsible AI use, but accountability doesn’t shift. If the AI gets it wrong, it’s your SMR, your compliance return, your name on the line. What the regulator expects: clear documentation of model logic, explainable outputs, human-in-the-loop decisions for suspicious matter reporting, and ongoing testing and independent validation.

Criminals are early adopters too

AI-enabled fraud is already here: deepfake identities, social engineering at scale, AI bots sustaining romance scams indefinitely. One participant described fraudsters targeting superannuation funds, exploiting KYC gaps to impersonate members and redirect balances before detection. A real case, already happening.

Neil drew a line from the 419 fraud letters of the 1980s to today’s AI-generated personas. The vector keeps changing; the principle doesn’t. Criminals use whatever gives them scale, plausibility, and persistence, and right now, AI gives them all three. If you’re not yet on the journey, you’re playing catch-up against adversaries who aren’t waiting.

For the full roundtable write-up, read our AI and financial crime roundtable blog here.

This blog is based on the April 2026 episode of ThirdEye View, hosted by Jing Zhang, Business Development Manager, and Colin Dixon, CAMS-certified AML Solutions Specialist at ThirdEye. Colin has been with ThirdEye since its inception in 2012 and works closely with clients to help them maximise their platform capabilities.

Latest intelligence

Stay sharp with expert insights, tools, and intelligence that keeps you ahead of financial crime threats.