
Why False Positives Are Costing Banks More Than Fraud - with Suvaleena Paul of Bank of America
The AI in Business Podcast • Daniel Faggella (Emerj)

What You'll Learn
- ✓ Fraudsters are using a mix of real and fake data to create synthetic identities that can bypass KYC checks
- ✓ Banks are seeing an increase in scams like 'love scams', where fraudsters build trust over months before exploiting victims
- ✓ Behavioral spoofing and AI-powered tools are making attacks harder to detect with legacy systems
- ✓ Banks are shifting from quarterly fraud rule reviews to a 'rapid response' model to deploy countermeasures in real time
- ✓ Enriched risk scoring using digital footprints, biometric data, and pattern analysis is key to identifying subtle fraud
- ✓ Banks must balance fraud mitigation with minimizing friction for legitimate customers
Episode Chapters
Introduction
Overview of the evolving fraud landscape in banking and the need for AI-driven analytics
Synthetic Identities and Scams
Discussion of how fraudsters are using synthetic identities and social engineering tactics like 'love scams'
Rapid Response Capabilities
Explanation of how banks are shifting to a real-time fraud detection and response model
Enriched Risk Scoring
Overview of how banks are leveraging digital footprints, biometrics, and pattern analysis to identify subtle fraud
Balancing Fraud Mitigation and Customer Experience
Discussion of the importance of minimizing friction for legitimate customers while stopping fraud
AI Summary
This episode explores how AI-driven analytics are transforming fraud detection and prevention in the banking industry. The discussion covers the evolving fraud landscape, including the rise of synthetic identities, real-time account takeovers, and the need for rapid response capabilities. The guest from Bank of America discusses practical strategies for embedding AI into fraud workflows, improving threat identification, and balancing fraud mitigation with customer experience.
Key Points
1. Fraudsters are using a mix of real and fake data to create synthetic identities that can bypass KYC checks
2. Banks are seeing an increase in scams like 'love scams', where fraudsters build trust over months before exploiting victims
3. Behavioral spoofing and AI-powered tools are making attacks harder to detect with legacy systems
4. Banks are shifting from quarterly fraud rule reviews to a 'rapid response' model to deploy countermeasures in real time
5. Enriched risk scoring using digital footprints, biometric data, and pattern analysis is key to identifying subtle fraud
6. Banks must balance fraud mitigation with minimizing friction for legitimate customers
Topics Discussed
Synthetic identities, real-time account takeovers, behavioral spoofing, rapid fraud response, enriched risk scoring
Episode Description
Today's guest is Suvaleena Paul, Assistant Vice President and Senior Analyst in Fraud, Innovation, and Analysis at Bank of America. Suvaleena joins Emerj Editorial Director Matthew DeMello to discuss the integration of AI-driven technologies into fraud prevention workflows. She also shares practical insights on improving threat identification speed, fostering cross-team collaboration, and continuously refining AI models to maximize ROI and maintain regulatory compliance. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!
Full Transcript
Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Suvaleena Paul, Assistant Vice President and Senior Analyst in Fraud, Innovation, and Analysis at Bank of America. Suvaleena joins us today to explore how AI-driven analytics are transforming fraud detection and prevention in the financial sector. Suvaleena breaks down practical strategies for embedding AI into fraud workflows and improving threat identification and response times. The conversation also highlights key workflow enhancements that enable faster decision-making and measurable ROI by reducing fraud risk and operational inefficiencies. Just a quick note for our audience that the views expressed on today's show by Suvaleena do not reflect those of Bank of America or its leadership. But first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our Thought Leader submission form. That's emerj.com; click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's emerj.com slash expert1. Again, that's emerj.com slash expert1.
Without further ado, here's our conversation with Suvaleena. Suvaleena, welcome to the program. It's a pleasure having you. Hi, Matthew. It's a great pleasure being here, and I'm very excited to discuss all the interesting things that we spoke about and dive deep into them. Absolutely. I think a lot of what we're seeing on the fraud side, especially in banking, translates into the retail space, and even, with some limited visibility, into B2B spaces as well. Wherever we see customer workflows, there are a lot of lessons to take in terms of solving these challenges through the lens of data. Among those challenges, we're hearing that fraudsters are growing more sophisticated with tools like AI and access to massive digital footprints, able to make 100 email sign-on attempts at once, and the speed and subtlety of the attacks are increasing, from synthetic identities to biometric bypasses and real-time account takeovers. We're seeing a lot of threats right now that a couple of years ago we expected would start to really metastasize, and they're right on time. For leaders in fraud strategy across industries, the challenge isn't just keeping up; it's anticipating what's next and preparing systems to adapt in real time. Just starting there, because I know your team has put a lot of effort into moving off of the quarterly rounds of measurement we're so used to, especially from my background in tax, and thinking more in real time, with technology helping you get there: how are you seeing the fraud landscape evolve, and what new threats do you expect to emerge in the next year and a half to two years? Honestly, the pace at which fraud is evolving right now is unlike anything we have seen before. One of the biggest shifts is the rise of synthetic identities, like you mentioned: fraudsters are using a mix of real and fake data to create completely new personas that can pass KYC checks and slowly build trust.
A layer of it also comes in on the scam side. Since we're talking about synthetic IDs, I'm reminded that there is a particular type of scam called a love scam, where a person, probably overseas, will get into a kind of relationship with a customer, be it an older person or a younger person. They'll build a friendship and a relationship over four or five months, gain trust, and actually get into a romantic relationship, convincing that person, "I am your boyfriend or girlfriend." At the end of those four or five months they'll say, "I'm stuck in a very difficult situation," or "I have been arrested, I need ten thousand dollars," or even more. And because the victim is so invested and believes this person is real, they end up sending over the money. If not that, the fraudster will cook up something so believable, like, "I need your bank credentials because I cannot take the money out of my own account, so why don't you give me your credentials and I'll just log in and do the transfer myself." So it's really interesting and also very, very scary, and people are so vulnerable to this. As a bank, whenever we detect patterns like this, we go ahead and send out alerts and warn the customers, and at times they override it on their own. They say, no, we do not want to go along with your alert; we want to go ahead and pay this person. Sorry for digressing, but these are some of the very crazy things that are happening. Pair that with behavioral spoofing and AI-powered tools that mimic how legitimate users behave, and you've got attacks that are almost invisible to legacy systems. We are also seeing some very real-time account takeovers, where bad actors don't just steal identities anymore: they actually suppress the alerts and reroute the messages. A similar kind of incident actually happened.
The case came to me probably in the first year of my tenure at Bank of America. We analyzed it and found out that someone known to the customer, probably a spouse or a partner, got hold of the phone used for logging into the customer's account, and they added their own fingerprint. The first thing they did after logging in was suppress all the alerts and scrape out all the security questions, and then they ended up draining $800,000 over six months. The customer didn't know anything, because there weren't any traditional red flags: it seemed like the customer was making the transactions, and hence it did not raise any flags from our end. But this was a very unique case that helped us put in four or five strategies back to back that will cumulatively handle such cases going ahead. So stuff like that just keeps happening. And looking ahead, like you asked, to the next 18 to 24 months: I think fraud will become even more blended in. Instead of brute force or suspicious spikes, we'll see what I call "clean fraud," where everything looks right on the surface, the timing, the behavior, but some subtle mismatches will reveal something off. We probably need to be more like the FBI here in the fraud space in financial institutions, so detection has to evolve just as fast, down to the second, if not the day. Yeah, these are very much challenges we've heard across banking. You mentioned we have to be the FBI. We wrapped an interview not too long ago with Nick Lewis at Standard Chartered Bank, who leads their crime operations unit (I'm tweaking his title a little to keep it short). He also mentioned that one of the bigger trends we've seen over the last few years is that, in terms of government enforcement, regulators are really leaning on the financial institutions: you have the data; it's your book report.
We're just going to check your homework and make sure that you're targeting folks the right way, but we expect you to at least have an eye on this. And the big development, which we haven't heard yet across the show but which I think folks have been anticipating for the last few years, is that the cat-and-mouse game has caught up with the derivatives, and now we're seeing the kind of exponential scaling on the generative side: not just synthetic data, but all the data. These systems are also being deployed on a mass sign-on basis, creating a lot of content for folks to sift through, all of it specifically tailored to target the KYC operations and standards we already have in place. Those regulations very much date from before the machine learning period, the deterministic phase where we first saw a lot of this technology. Now, at the full-blown generative phase, we're seeing the mouse side of the table really start to work at scale. One advantage, and it's a limited one, is that on the banking side you outnumber these organizations, at least in terms of manpower, and that's one of the things they have to compensate for, which we're seeing in the scaling of the technology. But how are you seeing folks on the banking side of the table start to think about these problems and try to build solutions that counter at that same scale? Well, we've had to completely rethink our approach. Where we used to review fraud rules on a quarterly basis, like you said, we are now operating in what is called a rapid response model. If we see a pattern emerge, we may only have hours to act, so the team is set up to pivot quickly and deploy countermeasures almost in real time. Probably a month ago or so, something came up on a Friday evening at 6 p.m.
We actually brought people in; we have this entire process where alerts come in, and we get regular hourly emails flagging a spike or a behavior or pattern change. It was almost an overnight endeavor, and we got the entire thing under control within five hours or so. So yes, it is very, very real time. And obviously, we are also investing in a lot of enriched risk scoring. That includes not just where the transaction came from or whether a new device got added; we also need to know things like how old the email address is, whether the phone number has been used for other accounts, and what the digital footprint looks like overall. On top of that, there is also behavioral biometric data: if you go on the website, wherever you move your cursor, how long you stay there, how many times you type in your password and backspace, everything can be captured. We have technologies in place to capture all of that, bring all these data together into a concoction, a cocktail, and finally get down to a pattern that we can follow and put down as a rule. So this is what we are doing. And of course, we are constantly balancing the equation between fraud mitigation and customer experience, because it's not just about stopping the bad guys; it's about doing so without blocking or frustrating our legitimate users. We do not want to tamper with our customer experience, because it matters a lot. Trust me, Matthew, it matters a lot. If we put in an alert that wrongfully, as a false positive, blocks someone's account, they get really, really mad, and it escalates. So we have to maintain a very sweet balance between the two, where we actually stop the fraud but don't harass or bother the legitimate customers.
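The enriched risk scoring described above, blending account-age signals, device history, and behavioral telemetry into one score, can be sketched roughly as follows. Every feature name, weight, and threshold here is an invented illustration, not Bank of America's actual model:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    email_age_days: int          # how old the contact email address is
    phone_reuse_count: int       # other accounts tied to this phone number
    new_device: bool             # device fingerprint never seen for this customer
    password_backspaces: int     # behavioral signal: corrections while typing
    cursor_dwell_seconds: float  # time spent hovering over sensitive fields

def risk_score(e: LoginEvent) -> float:
    """Blend enrichment signals into a single 0-1 risk score.

    Weights and thresholds are illustrative placeholders; a production
    system would learn them from labeled fraud outcomes.
    """
    score = 0.0
    if e.email_age_days < 30:      # freshly minted email is a synthetic-ID tell
        score += 0.3
    if e.phone_reuse_count > 3:    # one phone number spread across many accounts
        score += 0.25
    if e.new_device:
        score += 0.2
    if e.password_backspaces == 0 and e.cursor_dwell_seconds < 0.5:
        score += 0.25              # "too clean" interaction can mean automation
    return round(min(score, 1.0), 2)

# A suspicious login: brand-new email, heavily reused phone, unknown device.
event = LoginEvent(email_age_days=5, phone_reuse_count=7, new_device=True,
                   password_backspaces=2, cursor_dwell_seconds=4.0)
print(risk_score(event))  # 0.75
```

In practice the weights would be retuned continuously from fraud outcomes, which is the feedback loop discussed later in the episode.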
So we measure success not just by the fraud dollars stopped, but also by how much friction we add in the process. It's all about stopping more while interrupting less. Absolutely. And I think as the scale escalates, so does the risk. You mentioned taking a deeper look at enriched risk scoring as a way to think more deeply about the signals you're getting and how best to act on them, in balance with the customer experience and the business. I'm wondering what you can tell us about what goes into that enriched risk scoring, vis-a-vis the problems we've been differentiating of fraud at scale versus the first-generation stuff we saw when AI first appeared? Obviously, these scoring techniques weren't in place before, and that is the reason the problems when AI first came in were exactly this: we were putting in rules that had a lot of false positives, like I said. At times there are people who just genuinely forget their password and who aren't trying to log into someone else's account. So we have put in more technologies and more processes to gather more historic information, so that we can handle it customer by customer, or rather cluster it. There are probably a thousand customers who have this kind of behavior and who are legit, so we have a different kind of algorithm running for them. We have a lot of ways of clustering and clubbing these kinds of behaviors and patterns, and for each one of these clusters, we have a different algorithm running. Based on how risky the behavior looks, we assign a score, and there are a couple of scoring metrics.
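The clustering idea above, grouping customers who share a legitimate behavior pattern and running a different rule for each group, might look something like this in miniature. The cluster names, profile fields, and thresholds are all hypothetical; the actual scoring metrics are not disclosed in the episode:

```python
# Sketch: bucket customers by how they typically behave, then apply a
# cluster-specific alerting rule. Real systems would learn clusters and
# thresholds from transaction history rather than hard-code them.

def assign_cluster(profile: dict) -> str:
    """Assign a customer to a behavioral cluster."""
    if profile["avg_failed_logins_per_month"] > 4:
        return "forgetful"       # genuinely forgets passwords often
    if profile["logins_per_day"] > 20:
        return "power_user"
    return "typical"

# Each cluster tolerates a different level of anomaly before an alert
# fires, cutting false positives for known-legitimate patterns.
FAILED_LOGIN_ALERT_THRESHOLD = {
    "forgetful": 8,   # higher bar: failed logins are normal for them
    "power_user": 5,
    "typical": 3,
}

def should_alert(profile: dict, failed_logins_today: int) -> bool:
    cluster = assign_cluster(profile)
    return failed_logins_today > FAILED_LOGIN_ALERT_THRESHOLD[cluster]

forgetful = {"avg_failed_logins_per_month": 6, "logins_per_day": 2}
print(should_alert(forgetful, 5))   # False: normal for this cluster
typical = {"avg_failed_logins_per_month": 1, "logins_per_day": 3}
print(should_alert(typical, 5))     # True: unusual for this cluster
```

The same five failed logins thus trigger an alert for one customer and not another, which is the point of running a different algorithm per cluster.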
This is something that I cannot talk about in detail. Right, we don't want to tip off the bad guys too much, but this is great for our audience. And since you brought in machine learning and AI: one of the most cutting-edge areas, I won't call it a technology, but an upcoming space where things are going to be really helpful, and which is going to blow up, is graph theory in data science. Especially in the financial space, in terms of fraud prevention and risk mitigation, we use graph theory to cluster the sources of fraud: first-party fraud, then a second line of fraud, a third line of fraud. At times it's not just one person who's in play; it is a line of people, and the people in between are selected as mules. They have no clue that they're actually carrying out a fraudulent transaction; someone is just directing them to do it, and they go ahead and do it. So graph theory is something very, very interesting, and it's an upcoming thing. We are also leveraging it to some extent, and hopefully that will increase over time. For a lot of what you pulled apart right there, in terms of graph theory and enriched risk scoring, I think the move we're seeing from quarterly measurement to real-time strategy applies far beyond financial services; it holds for just about every industry. But I'm wondering how even places like Bank of America, the big enterprise banks, are thinking about translating those priorities from quarterly to real-time strategy, and what needs to be there in terms of signal architecture? Great question. We've moved away from the old model of using just blunt thresholds, like "three logins in five minutes equals fraud." That kind of logic is too rigid for today's fraud landscape.
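The graph-theory approach described above, tracing first-party fraud through a line of unwitting mules, amounts to treating accounts as nodes and money movements as directed edges, then walking outward from a known-bad account. A minimal sketch with invented transfer data:

```python
from collections import deque

# Directed transfer graph: account -> accounts it sent money to.
# A fraud ring often looks like a chain: fraudster -> mule -> mule -> cashout.
transfers = {
    "fraudster": ["mule_1"],
    "mule_1": ["mule_2"],
    "mule_2": ["cashout"],
    "alice": ["bob"],            # unrelated legitimate transfer
}

def downstream_accounts(graph: dict, seed: str) -> set:
    """BFS from a known-bad seed account to flag the whole chain,
    including mules who may not know they are part of a scheme."""
    flagged, queue = set(), deque([seed])
    while queue:
        node = queue.popleft()
        if node in flagged:
            continue
        flagged.add(node)
        queue.extend(graph.get(node, []))
    return flagged

print(sorted(downstream_accounts(transfers, "fraudster")))
# ['cashout', 'fraudster', 'mule_1', 'mule_2'] -- alice and bob untouched
```

Production systems use far richer link analysis, but the core idea is the same: connectivity, not any single transaction, exposes the ring.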
Instead, we're building layered signal architectures that look at behavior, transaction context, device intelligence, and more, and, like I said before, also the biometric data. Together, think of it like this: I want the system to let good customers fly and bad ones crash. That means personalizing our fraud detection based on who's interacting, how they usually behave, and what context they are operating in. We use device fingerprinting, login behavior, and even network velocity to determine if something is off, sometimes within milliseconds. And the feedback loop is key: when we get something right and approve more good transactions, the data comes right back into the system to keep improving the detection architecture. It's not about loosening the thresholds; it's about getting smarter with the signals we trust. Absolutely. And we're talking about a very regulated space: on one end we have the super-organized bad guys leveraging these latest technologies, and customers in the middle who might be duped. You talked about catfishing earlier in the show, which is at least the first time we've heard it on the program; catfishing has been around for 10 to 15 years, but to see it weaponized at this scale is really something in terms of the technology. On the other side of the table, you also need to stay compliant with regulations, many of which date back to before machine learning in terms of what their standards are for KYC, and which are being outpaced by this cat-and-mouse game we're starting to see play out at scale. What advice do you have for other enterprise leaders in heavily regulated financial services spaces like banking, in terms of finding that balance between going after the bad guys, letting the good customers go, and making sure their operations are compliant? So yeah, this is where it gets tricky.
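A layered signal architecture of the kind described, where behavior, device intelligence, and network velocity each contribute a signal rather than one blunt threshold deciding alone, could be sketched like this. The layer logic, weights, and blocking threshold are illustrative assumptions only:

```python
# Each "layer" inspects a different slice of context and returns a risk
# contribution in [0, 1]; no single layer can block a customer on its own.

def behavior_layer(ctx):   # typing cadence as a behavioral biometric
    return 0.8 if ctx["typing_cadence_ms"] < 20 else 0.1  # inhumanly fast typing

def device_layer(ctx):     # device fingerprinting
    return 0.7 if not ctx["known_device"] else 0.0

def velocity_layer(ctx):   # network velocity: one IP hitting many accounts
    return 0.9 if ctx["accounts_from_ip_last_hour"] > 10 else 0.05

LAYERS = [behavior_layer, device_layer, velocity_layer]

def decide(ctx, block_at=0.6):
    """Average the layers so one noisy signal (say, a traveler on a new
    device) doesn't block a good customer, while several weak signals
    together still trip the block."""
    score = sum(layer(ctx) for layer in LAYERS) / len(LAYERS)
    return ("block" if score >= block_at else "allow", round(score, 2))

# New device alone: allowed (good customers fly).
print(decide({"typing_cadence_ms": 120, "known_device": False,
              "accounts_from_ip_last_hour": 1}))   # ('allow', 0.28)
# New device plus bot-speed typing plus IP velocity: blocked (bad ones crash).
print(decide({"typing_cadence_ms": 5, "known_device": False,
              "accounts_from_ip_last_hour": 30}))  # ('block', 0.8)
```

Feeding confirmed outcomes back into the layer weights is where the feedback loop mentioned above would come in.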
We are absolutely excited about what AI can do, especially for pattern recognition, anomaly detection, and reducing false positives. But in a highly regulated environment, you can't just plug in an external AI model and call it a day. For us, the safest and most strategic path forward is using internal, closed-loop AI systems that are tightly governed. We keep external vendors at arm's length unless they are fully vetted, and even then we are cautious. AI can be an all-powerful ally, but if it's not built and controlled correctly, it can become a vulnerability as well, and we don't want to expose that to the bad guys. The stakes are huge: the fraudsters have nothing to lose, but we do. That asymmetry is what keeps us focused. We want speed, yes, but not at the expense of compliance, privacy, or customer trust. So to your compliance question, this is what I say: it's a rolling process wherein we keep revisiting our existing processes. Even when we are building internal tools and different models, we have to keep going back and rechecking their performance and viability, whether on a weekly or a monthly basis, depending on the importance level of the function. We have to keep checking how they are performing, at what rate they are catching fraud, identifying the true frauds without coming up with a whole lot of false positives. So my team is leaning into a lot of in-house AI tools, ones where we know exactly how they are trained, tested, and monitored. Governance is currently not just a checkbox; it's a foundation. And I feel this is not just within Bank of America; we see it across most of the big financial institutions. Whenever I meet leaders from other banks, this is exactly what I hear from their end as well. This is a challenge for sure, but
people are coming up with a lot of innovative ways of tackling it, within each company, for sure. Absolutely. And it shows how a lot of these things row in the same direction. The compliance is there to make sure you're taking in enough data so that the government can play teacher and student with you: let me check your homework and make sure you're doing it right, in this new method of enforcement. That has led to a very new dynamic, very different from 40 years ago, where you're seeing the enterprise folks really get ahead of the regulations in terms of their own scrutiny, making sure they can meet those new compliance challenges, because it means more for them in terms of business challenges in the short term than it did 40 years ago. Very, very fascinating stuff, especially to hear from Bank of America. Suvaleena, really appreciate you being with us these last 20, 25 minutes or so and giving us an inside look. Thanks so much for being with us this week. Thank you so much. It was a pleasure talking to you. Thank you, Matthew. Wrapping up today's episode, I think there were three critical takeaways for enterprise leaders focused on fraud prevention, risk management, and AI-driven innovation. First, integrating AI into fraud detection workflows can significantly enhance threat identification speed and accuracy, reducing exposure to financial loss. Second, fostering cross-functional collaboration between analytics, fraud teams, and IT is critical to embedding AI insights effectively into operational processes. Finally, continuous testing and refinement of AI models ensures sustained ROI by adapting to emerging fraud patterns and maintaining regulatory compliance. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies.
Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit emerj.com slash sponsor for more information. Again, that's emerj.com slash S-P-O-N-S-O-R. If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled again, E-M-E-R-J, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and head of research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast. [Outro Music]
Related Episodes

Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank
The AI in Business Podcast
22m

Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer
The AI in Business Podcast
35m

Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti
The AI in Business Podcast
30m

Rethinking Clinical Trials with Faster AI-Driven Decision Making - with Shefali Kakar of Novartis
The AI in Business Podcast
20m

Human-Centered Innovation Driving Better Nurse Experiences - with Umesh Rustogi of Microsoft
The AI in Business Podcast
27m

The Biggest Cybersecurity Challenges Facing Regulated and Mid-Market Sectors - with Cody Barrow of EclecticIQ
The AI in Business Podcast
18m