How Financial Institutions Can Prepare for the Future of Fraud with Responsible AI Deployments – with JoAnn Stonier of Mastercard
As financial organizations navigate an increasingly sophisticated landscape of cybercrime, many are looking to generative AI and agentic systems for a competitive edge. However, as the World Economic Forum notes in a 2025 white paper, while AI adoption promises to enhance security and streamline operations, it also introduces significant complexities related to data privacy and responsible deployment.
According to the Federal Trade Commission, consumers reported losing more than $12.5 billion to fraud in 2024, a 25% increase over the previous year, a staggering figure that underscores the urgent need for a robust, multi-layered security approach.
In financial services, tackling the inherent problem of fraud while balancing the risks of AI deployment is the central challenge for the sector at large. It also serves as a representative case for nearly any enterprise handling sensitive data.
This article unpacks how financial services leaders can overcome these challenges to build a robust, responsible approach to AI applications. Based on insights shared by JoAnn Stonier, Fellow at Mastercard, on a recent episode of the ‘AI in Financial Services’ podcast, readers will gain a clearer perspective on how to build the foundation for a future where sophisticated AI systems work in tandem with human oversight.
This article delivers three key insights for financial leaders seeking to bring AI into their own businesses responsibly:
- Choosing maturity over hype: While cutting-edge applications like generative and agentic AI capture headlines, more established, deterministic AI capabilities can better improve real-time fraud prevention.
- Using data to understand patterns, not people: Responsible AI in fraud detection isn’t about collecting more personal data, but about using existing data more intelligently to identify complex behavioral patterns.
- Adopting a team-sport approach to governance: Deploying AI responsibly requires a collaborative effort across an organization, with a clear purpose and an iterative process that considers a wide range of risks.
Listen to the full episode below:
Guest: JoAnn Stonier, Fellow of Data and AI, Mastercard
Expertise: Privacy, Data Governance, Data Ethics, and Responsible Data Practices
Brief Recognition: JoAnn, formerly Chief Data Officer at Mastercard, leads enterprise-wide data governance, analytics, and innovation efforts. Her career spans privacy, risk, and AI strategy across finance and technology ecosystems. JoAnn is also a seasoned professor with influence in regulatory and academic circles.
Choosing Maturity Over Hype
Much of the media attention on AI focuses on the latest developments, such as LLMs, generative AI, and emerging agentic systems. While these technologies are powerful, Stonier points out that the real, measurable business value in financial services often comes from more established, deterministic AI.
JoAnn explains that Mastercard, for example, has been using data and analytics for over 17 years to monitor its global network 24/7. Initially, this involved identifying patterns based on past behavior to predict fraud. However, with the evolution of AI, the company can now analyze billions of transactions in real time, enabling faster, more accurate fraud detection.
Stonier continues, explaining that for the consumer, this means fewer false positives: the dreaded moment when your card is declined for a legitimate purchase. Stonier gives the example of a customer with a summer home: a few years ago, a purchase in a second, distant location might have been flagged as suspicious.
Now, AI can recognize spending patterns in both locations over time, allowing transactions to go through without interruption. This improved experience is a direct result of advanced, deterministic AI and analytics that have been refined over time.
“What we’re able to do now is look at patterns on a very different kind of scale, right? And we’re also able to future-proof the way we look at things.
So, while in the past we were able to look at patterns based on past behavior and then apply them to current transactions, now it’s much more real-time, and our pattern analysis is better. We can use AI to understand your pattern. So the experience has gotten better, and that’s all because of AI.”
– JoAnn Stonier, Fellow of Data and AI at Mastercard
While these foundational AI systems are the workhorses of fraud prevention, Stonier acknowledges that new capabilities, like agentic AI, will change how these systems are deployed.
She describes “agent-ish” AI as a precursor to fully autonomous systems, noting that conversational bots have evolved into sophisticated task-doers. However, true agentic AI, such as a self-driving Waymo vehicle, is extremely complex and requires significant oversight.
As financial institutions begin to deploy agents, they will be able to analyze and strengthen different parts of their networks, but Stonier stresses that humans must remain in the loop.
Using Data to Understand Patterns, Not People
The use of personal data in fraud detection can be a sensitive topic for customers. Stonier clarifies that payment networks don’t collect an individual’s personal information for fraud analysis.
Instead, the company receives a minimal set of data points for each transaction: date, time, location, merchant name, and transaction amount. From this limited information, AI can deduce spending patterns, such as a customer’s regular grocery store or gas station, without knowing their name or exact address.
When a suspicious pattern is detected, a payment network will work with its bank partners, who then contact the cardholder. This approach, which Stonier refers to as “data minimization,” ensures that no more information is collected or used than is necessary to achieve the specific purpose of fraud prevention.
“We don’t use that for other purposes. We’re using it for the purpose of stopping fraud,” Stonier says. This distinction is crucial to maintaining consumer trust and complying with privacy regulations. The objective is not to know everything about a customer but to use data responsibly to secure the entire payment ecosystem, for the benefit of banks, merchants, and cardholders.
Adopting a Team-Sport Approach to Responsible AI Governance
For financial leaders looking to deploy AI responsibly, Stonier emphasizes that it must be treated as a team sport. It requires collaboration among product, innovation, and risk teams, with a shared understanding of both business objectives and potential challenges.
“When it comes to AI, it’s really a team sport. The leaders in creating products and solutions really understand that there are lots of different risks that have to be navigated [and] combine their understanding of all the risks with their imaginative thinking and what they’re trying to achieve.
First of all, if you don’t have a clear purpose defined in what you’re trying to achieve, how are you evaluating model drift, bias? And how are you going to really make sure you’re deploying an iterative process to drive design thinking so that you really get innovation, but that you’re also constantly doing the learning loops? You really need everyone involved.”
– JoAnn Stonier, Fellow of Data and AI at Mastercard
For leaders looking to deploy AI effectively in their organizations, JoAnn outlines a clear framework. She defines a successful AI governance process as one that begins with a clearly defined purpose and a deep understanding of the intended outcomes. Leaders must ask a series of questions:
- Do we have the right data, and is it of the right quality?
- How are we building and evaluating our models to account for concerns like model drift and bias?
- How do we manage the various risks, from data security to intellectual property, that accompany AI innovation?
Stonier concludes that while AI presents a landscape of “unknown unknowns,” the best approach is to be open to these challenges and keep people at the center of the design process. Products and solutions should be built with the individual user in mind, ensuring outcomes that benefit all communities. Her human-centric, collaborative approach is the key to navigating the future of AI in financial services.
