Financial crime prevention is meaningful work. You’re protecting real people from real harm.
As we close out the year, we’ve been reflecting on what we’ve learned from working alongside dozens of risk and compliance teams through rule reviews, implementations, and optimisation sessions.
One pattern stands out: the teams getting the most value from their AML technology aren’t necessarily the ones with the most sophisticated rules. They’re the ones who truly understand what their rules are doing and why.
Here’s what we’ve learned, and some practical actions you can take in 2026 to strengthen your financial crime prevention.
The rule understanding problem
Throughout our client reviews this year, we’ve seen that the most common challenges aren’t technical. They’re organisational. And the biggest of these? Inherited rules that nobody fully understands.
Team members leave, often without a proper handover, and the new person is left trying to make sense of rules whose rationale lived only in the departed colleague’s head. We’ve found that if you can’t explain a rule to a regulator, you’ve usually inherited it rather than created it.
The three-question test
Here’s a simple test for each of your rules:
- What does this rule detect?
- Why was this threshold set at this level?
- What risk in your risk assessment does this address?
If you can’t answer all three for a rule, that’s your starting point for January. And once you’ve worked out the answers, document them, so your successor isn’t in the same situation you inherited.
Documentation that actually works
The good news is that documentation doesn’t need to be complicated. We’ve seen clients transform their rule reviews simply by using the More Information section in ThirdEye to record what each rule does, why it exists, and any decisions about changes.
When we return for follow-up reviews with these clients, we spend our time discussing rule effectiveness rather than trying to work out what rules are supposed to do. That’s time well spent.
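To make this concrete, here’s a rough sketch of the kind of record that answers the three questions. The structure, field names, and values are ours for illustration only, not a ThirdEye format; in practice the same content can simply live as free text in the More Information section.

```python
# A minimal, hypothetical structure for documenting a rule.
# Field names and values are illustrative, not a ThirdEye format;
# the same content can live as free text in the More Information section.
rule_note = {
    "rule_name": "Large cash deposits",  # hypothetical rule
    "detects": "Single cash deposits at or above the threshold, "
               "aimed at potential placement activity.",
    "threshold_rationale": "Set at $10,000 to align with our reporting "
                           "obligations and the 2024 risk review.",
    "risk_assessment_link": "Risk register item 3.2: cash-intensive customers.",
    "last_reviewed": "2025-11-30",
    "last_change": "2024-05: threshold raised from $8,000 after a tuning review.",
}

# Print the note as plain text, ready to paste into your documentation.
for field, value in rule_note.items():
    print(f"{field}: {value}")
```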
One useful trick: if you copy a rule expression into an AI tool like ChatGPT and ask it to explain what the rule does, it will return a plain-language explanation of the code. It won’t tell you why you have the rule or whether it’s effective (that requires your expertise), but it’s a helpful starting point for understanding inherited logic.
Getting rule tuning right
Once you understand your rules, the next question is: are they effective?
The teams doing this well use clear, specific closure reasons that tell a story. One client used “internal transfer” to flag alerts when their funds-in-and-out rule triggered on transfers between two accounts held by the same customer—perfectly normal behaviour that wasn’t suspicious.
This gave them the data to demonstrate wasted effort to their IT team and build a case for improving the data sent to ThirdEye. That’s tuning working as it should.
Compare this to what we see too often: every alert is closed as “false positive”. That tells us nothing. We can’t distinguish between a rule that’s somewhat useful and one that’s completely useless, which means we can’t advise whether to tune it or remove it entirely.
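If your closure reasons are specific, even a simple tally per rule tells you where the noise is coming from. Here’s a rough sketch, assuming you can export closed alerts with their rule name and closure reason to a CSV; the file and column names are our assumptions, not a ThirdEye export format.

```python
# Rough sketch: tally closure reasons per rule from an exported alert list.
# Assumes a CSV with "rule" and "closure_reason" columns; adjust the names
# to match whatever your export actually contains.
import csv
from collections import Counter, defaultdict

reasons_by_rule = defaultdict(Counter)

with open("closed_alerts.csv", newline="") as f:
    for row in csv.DictReader(f):
        reasons_by_rule[row["rule"]][row["closure_reason"]] += 1

for rule, reasons in reasons_by_rule.items():
    total = sum(reasons.values())
    print(f"\n{rule} ({total} closed alerts)")
    for reason, count in reasons.most_common():
        print(f"  {reason}: {count} ({count / total:.0%})")
```

A rule where most closures are “internal transfer” points to a data fix upstream; one whose closures are spread across varied, genuine reasons may only need its threshold tuned.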
Why tuning doesn’t happen
We’ve seen analysts who know a rule is useless, but nobody does anything about it. The reasons vary: nobody speaking up, nobody listening, overly complex change processes, fear of modifying what a predecessor set up, and concern about what regulators might think.
None of these should stop you. Rules can be changed, and our customer success team is here to help. What matters is having a work environment and process that encourages improvement rather than inhibiting it.
The testing discipline
In our rule lifecycle session, we stressed one message above all: test your changes before making them live.
Here’s what happens when you don’t. One client decided to raise a threshold from $5,000 to $8,000. They’d reviewed all the lower-value alerts and had sound justification for the change. They figured they didn’t need to test it since the existing rule already picked up transactions over $8,000.
The next day, they got hundreds of alerts. They’d accidentally set the threshold to $800 instead of $8,000.
A simple test in draft mode would have caught this. Their team was not best pleased having to close all those alerts.
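Even outside a draft-mode test, a quick replay over recent transactions would have flagged the typo immediately. Here’s a minimal sketch of that sanity check; the rule logic is a simplified stand-in for illustration, not ThirdEye’s rule engine, and the amounts are made up.

```python
# Rough sanity check: count how many recent transactions would alert
# at the current and proposed thresholds before changing the live rule.
# This is a simplified stand-in for the rule, not ThirdEye's engine;
# "amounts" would come from an export of recent transactions.
def alert_count(amounts, threshold):
    """Number of transactions at or above the threshold."""
    return sum(1 for amount in amounts if amount >= threshold)

amounts = [1200, 5400, 7800, 8200, 9500, 15000, 450, 8050]  # sample data

current = alert_count(amounts, 5_000)
proposed = alert_count(amounts, 8_000)
typo = alert_count(amounts, 800)  # the accidental $800 threshold

print(f"Alerts at $5,000: {current}")   # baseline
print(f"Alerts at $8,000: {proposed}")  # should be lower than the baseline
print(f"Alerts at $800:   {typo}")      # a spike like this is the red flag
```

Raising a threshold should never increase the alert count, so a jump in the “after” number is the signal to stop and recheck the change.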
Our advice: take the tortoise approach. Adjust one rule, test it, make it live, then move on to the next. Slow and steady beats chaotic parallel changes every time.
Automation you might be missing
Throughout our ThirdEye Explore sessions this year, we showcased features that are available but underutilised. If you’re in Australia or New Zealand and still manually entering regulatory reports through online portals, you’re doing extra work and increasing your risk of human error.
Australian users can automatically submit SMRs, IFTIs, and TTRs to AUSTRAC. New Zealand users can submit STRs and PTRs directly to the FIU through their B2B interface.
Yes, the initial testing process with AUSTRAC or the FIU can be slow. But it’s a one-off setup compared to ongoing manual effort every time you submit a report. Clients who’ve been through this process tell us it was well worth it. Not just for time savings, but for the end-to-end audit trail when regulators come calling. (Learn more about our suspicious activity reporting and regulatory reporting capabilities)
Similarly, if you’re managing cases through Word documents and spreadsheets, that approach might work with a handful of cases. But as financial crime becomes more complex and regulatory expectations rise, case management provides a more streamlined process with full audit capability. It’s simple to enable whenever you’re ready.
The data foundation
Good data is the key to good outcomes. But it’s something many AML teams struggle with. They need to work with IT to get suitable, comprehensive data into ThirdEye, and changing personnel can mean the reasoning behind data decisions gets lost.
The key task is to take the time to understand your data: what’s being loaded, what’s not, and the quality of what you have. Remember, too, that data can be changed: you’re not stuck with what was decided on day one.
One thing we see frequently: clients with lots of transaction properties they never use. Those properties add no value and just create noise in the interface. You can remove them. The same applies to customer and account properties.
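A quick way to find candidates for removal is to check which properties are always empty, or always the same value, across a recent export. A rough sketch, assuming a CSV export of transactions; the same check works for customer and account exports.

```python
# Rough sketch: find transaction properties that are always empty or
# always the same value across an exported CSV. These are candidates
# for removal (confirm with the business before deleting anything).
import csv

with open("transactions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for column in rows[0].keys():
    values = {row[column].strip() for row in rows}
    if values <= {""}:
        print(f"{column}: always empty")
    elif len(values) == 1:
        print(f"{column}: always '{values.pop()}'")
```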
Watch your error handling
A data load error means a transaction, customer, or account record wasn’t loaded, and anything that isn’t loaded can’t be monitored, so suspicious activity can slip through undetected. Treat these errors seriously.
We’ve seen cases where the AML team doesn’t know errors are occurring and IT hasn’t done anything about them. Make sure you have a process to check for errors, rectify them, and reload the corrected data.
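The simplest version of that process is a daily reconciliation: compare how many records your source system sent with how many actually loaded, and escalate any gap. Here’s a minimal sketch of the idea; where the two counts come from depends entirely on your source system and how load results are reported in your setup.

```python
# Rough sketch of a daily load reconciliation. The two counts would come
# from your source system and from your AML system's load reporting;
# how you obtain them depends entirely on your setup.
def check_load(record_type: str, sent: int, loaded: int) -> None:
    """Flag any gap between records sent and records loaded."""
    missing = sent - loaded
    if missing > 0:
        print(f"ALERT: {missing} {record_type} record(s) failed to load "
              f"({loaded}/{sent}). Investigate, fix, and reload.")
    else:
        print(f"OK: all {sent} {record_type} records loaded.")

# Example figures for a single day's load (hypothetical numbers).
check_load("transaction", sent=14_203, loaded=14_198)
check_load("customer", sent=312, loaded=312)
```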
Three actions for January 2026
If you want to start the new year stronger, here’s where to focus:
- Understand your data. Know what’s being loaded, assess its quality, and have a process for addressing errors.
- Understand and document your rules. Use the three-question test, record your answers, and have a process for ongoing tuning.
- Build a culture that encourages change. The most common blockers aren’t technical; they’re organisational. Make sure your team can speak up about ineffective rules and that there’s a clear path to improvement.
We’re here to help
If you’re in any of these situations (rules you don’t understand, tuning you’ve been putting off, automation you haven’t enabled), get in touch with our customer success team. We’ve helped dozens of teams improve their use of ThirdEye this year, and we’re ready to do the same for you.
Thanks for being part of our ThirdEye community in 2025. We’ll see you in the new year.
This article is based on our latest ThirdEye Explore webinar about AML learnings in 2025. You can watch the full webinar and previous episodes in the ThirdEye Explore series on our website under Resources.
