
Inside our AI and financial crime roundtable

We recently hosted a small group of senior compliance and financial crime leaders for lunch on the 89th floor of Eureka Tower. The views were spectacular. So was the conversation.

The session was facilitated by Daniel Saade, Head of Revenue at ThirdEye, joined by Neil Jeans, Partner, Risk Consulting at Grant Thornton, who brings more than 30 years in AML across global institutions and regulators. Together, they guided a candid discussion about what AI actually means for financial crime compliance right now: what’s working, what isn’t, and what still needs to be figured out.

Here’s what we took away.

Adoption is real, but patchy

When Daniel asked the room who had already deployed AI in some capacity, most hands went up. Transaction monitoring, alert prioritisation, fraud and document detection, mule monitoring, enterprise risk assessments: AI is already embedded in parts of how some of Australia’s largest financial institutions manage financial crime risk.

But the room was just as clear about what AI isn’t doing yet. No one had handed it the keys. Neil framed it well: what we’re seeing is a hybrid model, where AI acts as an enhancement or overlay on existing rules-based systems, not a replacement for them. Rules-based monitoring has been the foundation of transaction monitoring for 25 years and it’s not going anywhere. AI is the next layer.

For those still in wait-and-see mode, the consensus was that this is a legitimate position, for now. But the window won’t stay open forever.

The false positive problem is the clearest business case

If there was one pain point that every person in the room shared, it was this: too many alerts, too many false positives, and not enough people to work through them.

One participant described spending the majority of their time re-vetting false positives rather than investigating genuine risk. Another noted that their rules-based system spits out false positives at a rate of around 90%. For teams that are small, stretched, and expensive, this is unsustainable.

This is where AI’s business case stacks up most clearly. Using AI as the first-level reviewer that triages alerts, filters out the noise, and prioritises what genuinely needs human attention frees compliance professionals to focus on work that actually requires their judgement. As Neil put it, AI doesn’t sleep. It can run in the background while your team focuses on the alerts that matter. The caveat everyone acknowledged: you still need to trust it. And trust takes time to build.
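
To make that concrete, here’s a minimal sketch (in Python) of what a first-level triage layer can look like. Everything in it is hypothetical: the data is synthetic, and the feature names and threshold are illustrative stand-ins, not anyone’s production system.

```python
# Minimal alert-triage sketch (illustrative only): train on historical
# investigator outcomes, score the open queue, and route only higher-risk
# alerts to humans. All data is synthetic; names and threshold are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
features = ["txn_amount", "velocity_7d", "country_risk", "account_age_days"]

def fake_alerts(n: int) -> pd.DataFrame:
    """Synthetic stand-in for an alert extract."""
    df = pd.DataFrame(rng.random((n, len(features))), columns=features)
    # Labels loosely tied to one feature, mimicking investigator outcomes
    # (1 = genuine suspicion, 0 = false positive).
    df["outcome"] = (df["country_risk"] + rng.random(n) > 1.6).astype(int)
    return df

history, queue = fake_alerts(5_000), fake_alerts(200)

# Supervised first-level review: learn from closed alerts, score the open queue.
model = GradientBoostingClassifier().fit(history[features], history["outcome"])
queue = queue.drop(columns="outcome")  # open alerts have no outcome yet
queue["risk_score"] = model.predict_proba(queue[features])[:, 1]

THRESHOLD = 0.10  # tuned so the rate of missed genuine risk stays acceptable
needs_human = queue[queue["risk_score"] >= THRESHOLD].sort_values(
    "risk_score", ascending=False)
deprioritised = queue[queue["risk_score"] < THRESHOLD]  # sampled and QA'd, never silently closed
print(f"{len(needs_human)} of {len(queue)} open alerts routed to analysts")
```

The design choice that matters is the last line: low-scoring alerts are deprioritised and quality-assured, not silently closed, which is how teams start building the trust Neil described.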

Guardrails are everything

The discussion got technical when Neil unpacked the different types of machine learning: supervised learning (training a model on historical data with clear expected outcomes), unsupervised learning (letting it run freely across data to surface patterns), and semi-supervised learning sitting somewhere in between.

The room’s instinct was clear: for most institutions, supervised learning is the right starting point, primarily because it keeps humans in control of what the model is doing and why. Data quality was the recurring concern: if you’re training a model on historical data, that data needs to be clean, labelled, and reliable. Garbage in, garbage out.
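
In that spirit, here’s a rough sketch of the kind of sanity checks a team might run over labelled alert history before training anything on it. The column names (alert_id, outcome) are assumptions for illustration, not a standard.

```python
# Illustrative pre-training checks on labelled alert history ("garbage in,
# garbage out"). Column names alert_id and outcome are assumed for the example.
import pandas as pd

def check_training_data(df: pd.DataFrame) -> list[str]:
    """Return data-quality problems found in a labelled alert history."""
    problems = []
    if df["outcome"].isna().any():
        problems.append("unlabelled rows present: label or drop them before training")
    elif not df["outcome"].isin([0, 1]).all():
        problems.append("labels outside {0, 1}: inconsistent labelling conventions")
    if df["alert_id"].duplicated().any():
        problems.append("duplicate alert_ids: dedupe to avoid train/test leakage")
    if df["outcome"].mean() < 0.02:
        problems.append("under 2% positives: consider class weights or resampling")
    return problems

# Tiny synthetic example with deliberate problems baked in.
history = pd.DataFrame({
    "alert_id": [1, 2, 2, 4],
    "outcome":  [0, 1, 1, None],
})
for problem in check_training_data(history):
    print("WARNING:", problem)
```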

Unsupervised learning had its advocates too, particularly for root cause analysis and identifying clusters of risk that humans simply can’t see in large datasets. But the consensus was to use it carefully, in sandboxed environments, and with a healthy scepticism about what comes out.
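
For a flavour of what that can look like, here’s a minimal sandbox-style sketch: scale some behavioural features, cluster them, and queue the outliers for an analyst to investigate. The features, parameters, and data are all synthetic and illustrative, and nothing is actioned automatically.

```python
# Sandbox sketch of unsupervised pattern-surfacing (illustrative): scale
# behavioural features, cluster them, and hand outliers to a human for
# root cause analysis.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)
features = ["amount", "hour_of_day", "counterparty_count_30d", "cash_ratio"]

# Synthetic stand-in for a sandbox extract: mostly ordinary behaviour,
# plus a handful of deliberately unusual rows.
ordinary = rng.normal(0, 1, (1_000, len(features)))
unusual = rng.normal(6, 1, (15, len(features)))
txns = pd.DataFrame(np.vstack([ordinary, unusual]), columns=features)

# Density-based clustering; DBSCAN labels points it can't place as -1.
X = StandardScaler().fit_transform(txns[features])
txns["cluster"] = DBSCAN(eps=1.0, min_samples=10).fit_predict(X)

# Outliers go to an analyst for review; nothing is actioned automatically
# off an unsupervised model.
review = txns[txns["cluster"] == -1]
print(f"{len(review)} outlier transactions queued for analyst review")
```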

The broader principle was consistent across the table: AI needs guardrails. Not because people distrust the technology outright, but because the compliance function cannot afford to let it run away. As Neil noted, compliance officers are naturally conservative, and that’s a feature, not a bug.

Governance is the real friction point

The conversation got candid when it turned to implementation. Several participants described significant internal friction around getting AI use cases over the line — particularly anything involving personally identifiable information.

Model governance teams, legal and privacy functions, and risk frameworks designed for a pre-AI world are creating real bottlenecks. Scrutiny of vendor claims is intensifying, with participants noting that bold AI promises don’t always hold up under evaluation. One participant added that case summarisation using PII-heavy data remains a challenge to get approved internally, even when the use case is clear.
The message was practical: don’t treat governance as a tick-box exercise you do at the end. Get your privacy, legal, and model risk teams involved from the start. Understand what data you can and can’t use. Know how your AI is making its decisions, or at least be able to demonstrate it’s working within defined parameters, even if the node-level logic is opaque.
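
One common way to demonstrate a model is operating within defined parameters, even when its internals are opaque, is routine drift monitoring. Here’s a hedged sketch using a population stability index (PSI) over the model’s score distribution; the data is synthetic, and the 0.25 escalation threshold is a widely used rule of thumb, not a regulatory requirement.

```python
# Drift-check sketch (illustrative): compare current model scores against the
# validation baseline with a population stability index (PSI). The 0.25
# escalation threshold is a common rule of thumb, not a regulatory number.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep every score in range
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 8, 50_000)  # synthetic stand-in for validation scores
current_scores = rng.beta(2, 5, 20_000)   # synthetic "this month", deliberately shifted

value = psi(baseline_scores, current_scores)
if value > 0.25:
    print(f"PSI {value:.2f}: score distribution has shifted; escalate to model governance")
else:
    print(f"PSI {value:.2f}: within tolerance")
```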

AUSTRAC’s expectations were also unpacked. Neil was direct: the regulator permits and encourages responsible AI use, but the accountability stays with you. If the AI gets it wrong, it’s your SMR (suspicious matter report), your compliance return, your name on the line. What AUSTRAC wants to see is clear documentation of model logic and purpose; explainable outputs that investigators can actually use and justify; human-in-the-loop decisions for suspicious matter reporting; and ongoing testing, tuning, and independent validation. That accountability won’t shift just because an algorithm made the call.

Criminals are early adopters too

One of the most striking parts of the conversation came when Daniel asked: what about the other side of the fence?

The room didn’t need much prompting. AI-enabled fraud and scams are already here. Deepfake identities, social engineering at scale, AI bots that can sustain a romance scam indefinitely without fatigue or conscience. One participant described fraudsters systematically targeting super funds by exploiting KYC gaps, using minimal personal data to convince fund operators of their identity, then rolling balances into fraudulent accounts before detection. Not speculative. Already happening.

Neil drew a direct line from the 419 fraud letters of the 1980s, printed with fake stamps, sent by the thousands, through to email phishing, social media fraud, and now AI-generated personas. The vector changes. The principle doesn’t. Criminals will use whatever gives them scale, plausibility, and persistence. AI gives them all three.

The consensus wasn’t defeatism. It was urgency. You don’t need to have solved AI governance before you start learning. But if you’re not on the journey, you’ll be playing catch-up against adversaries who aren’t waiting.

The conversation we'll keep having

What struck us most about the afternoon was how much genuine value came from people simply being honest with each other. No posturing, no vendor pitches, no pretending to be further along than they are.

That’s the kind of conversation we want to keep creating. If you’d like to be part of the next one, we’d love to hear from you.
