There’s no shortage of AI talk in banking lately, but this one actually feels grounded in something real. FIS is teaming up with Anthropic to roll out what it calls “agentic AI,” starting with a system aimed squarely at anti-money-laundering (AML) work.
If you’re not familiar with FIS, that’s understandable: it doesn’t market itself to everyday consumers. Still, in the banking world, it’s huge. A lot of financial institutions rely on it for core infrastructure, meaning it quietly sits behind the scenes handling everything from transactions to account data. In many ways, it is the plumbing that keeps banks running.
The first result of this partnership is the Financial Crimes AI Agent. The idea is to speed up AML investigations, which today can drag on for hours or even days. Instead of analysts jumping between systems to gather evidence, the AI pulls everything together automatically, evaluates activity against known risk patterns, and flags the cases that actually need attention.
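To make the workflow concrete: the article describes a three-step loop (gather evidence from several systems, evaluate it against known risk patterns, flag what needs human attention). Here is a minimal sketch of that general shape. Everything in it — `Alert`, `gather_evidence`, `RISK_PATTERNS`, `triage`, and the example thresholds — is hypothetical; FIS has not published how its agent actually works.

```python
"""Hypothetical sketch of an AML alert-triage loop, NOT FIS's
actual implementation: gather evidence, score it against known
risk patterns, and surface only the alerts that need a human."""

from dataclasses import dataclass, field

# Illustrative risk patterns an agent might check evidence against.
# (Real AML typologies are far more involved.)
RISK_PATTERNS = {
    "structuring": lambda e: e.get("cash_deposits_under_10k", 0) >= 3,
    "rapid_movement": lambda e: e.get("in_out_ratio", 0.0) > 0.9,
    "high_risk_geo": lambda e: e.get("high_risk_country", False),
}

@dataclass
class Alert:
    alert_id: str
    evidence: dict = field(default_factory=dict)
    matched_patterns: list = field(default_factory=list)

def gather_evidence(alert_id: str) -> dict:
    # Stand-in for pulling transaction history, KYC data, etc. from
    # the systems an analyst would otherwise query by hand.
    return {"cash_deposits_under_10k": 4, "in_out_ratio": 0.95,
            "high_risk_country": False}

def triage(alert_ids: list[str]) -> list[Alert]:
    """Return only the alerts that match at least one risk pattern."""
    flagged = []
    for aid in alert_ids:
        evidence = gather_evidence(aid)
        matches = [name for name, check in RISK_PATTERNS.items()
                   if check(evidence)]
        if matches:
            flagged.append(Alert(aid, evidence, matches))
    return flagged
```

The point of the sketch is the division of labor: the machine does the gathering and pattern-matching, and only the flagged cases reach an investigator.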
That’s really the pitch here. Less time spent digging, more time spent deciding. Investigators are still in control, but the tedious part of the job gets pushed onto the machine.
Anthropic is providing the brains behind it, using its Claude models for reasoning. What stands out is how closely the two companies are working together. This isn’t just an API hookup. Anthropic engineers are embedded with FIS, helping build the system alongside FIS’s own teams. That suggests a more hands-on approach, which probably makes sense given how tightly regulated banking is.

And regulation is a big part of this story. Banks are required to monitor and report suspicious activity, but the current process is expensive and often inefficient. Globally, trillions of dollars move through illicit channels every year, while banks spend tens of billions trying to catch it. Even then, a lot of the work still comes down to people manually piecing together information.
FIS says this AI agent can shrink investigation times down to minutes, cut false positives, and improve the quality of reports. Maybe. Those are big claims, and real world results tend to be less tidy than press releases suggest.
A couple of banks, including BMO and Amalgamated Bank, are already testing the system. Wider availability is expected in the second half of 2026, assuming everything goes according to plan.
One thing FIS is emphasizing is control. It says client data stays inside its own infrastructure, and every decision the AI makes can be traced and audited. In banking, that’s not just a nice feature; it’s a requirement.
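The article doesn’t say how that traceability is built, but a common way to make automated decisions auditable is an append-only log where each record carries the inputs, the outcome, and a hash chaining it to the previous record, so any after-the-fact tampering is detectable. A minimal sketch, with all names hypothetical:

```python
"""Hypothetical sketch of a tamper-evident decision log, not FIS's
actual design: each record is hash-chained to the previous one."""

import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record_decision(self, case_id: str, inputs: dict,
                        outcome: str) -> dict:
        entry = {
            "case_id": case_id,
            "inputs": inputs,      # the evidence the agent saw
            "outcome": outcome,    # what it decided
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._prev_hash,
        }
        # Hashing the canonical JSON chains each entry to the last.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any altered record breaks it."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

Editing any past record changes its hash and breaks `verify()`, which is the property regulators care about: the record of what the AI decided, and why, can’t be quietly rewritten.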
What’s interesting is where this could go next. Financial crime is just the starting point. FIS is already talking about expanding into areas like credit decisions, fraud detection, onboarding, and even keeping customers from leaving. Basically, anywhere there’s repetitive work and a lot of data, it sees an opportunity.
Whether banks actually embrace this is another story. This is an industry that moves cautiously, especially when compliance is involved. Still, if something like this can genuinely cut down busywork without introducing new risks, it’s easy to see the appeal.
At the very least, this feels like a more practical use of AI. Not a chatbot slapped onto a banking app, but something aimed at the kind of behind-the-scenes work that most people never see, and frankly, don’t want to do.