The AI in Business Podcast

Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank

The AI in Business Podcast • Daniel Faggella (Emerj)

Wednesday, December 17, 2025 · 22m

What You'll Learn

  • Data leakage is a major risk, where sensitive investigation details could be exposed outside the organization
  • Prompt injection, where attackers trick AI into revealing information it shouldn't, is a significant social engineering threat
  • Model inversion, where attackers reverse-engineer AI models, is another risk that complicates AI adoption
  • Shadow AI, where unauthorized AI tools are used within or outside the IT environment, creates hidden risks
  • Hallucinations, where AI confidently provides incorrect outputs, and model drift are ongoing technical challenges
  • Effective AI governance requires full data visibility, role-based access controls, and treating AI agents like quasi-employees

Episode Chapters

1. Introduction
Overview of the challenges financial institutions face in adopting AI for fraud prevention, compliance, and automation

2. Data Leakage and Social Engineering Threats
Discusses issues like data leakage, prompt injection, and model inversion that can expose sensitive information

3. Unauthorized AI Usage and Technical Challenges
Covers shadow AI, hallucinations, and model drift as additional obstacles to effective AI deployment

4. Establishing Strong AI Governance
Outlines key elements of a robust AI governance framework, including data visibility, role-based access, and treating AI agents as quasi-employees

5. Balancing Internal and External AI Models
Discusses hybrid deployment strategies that leverage both internal and cloud-based AI models based on sensitivity and explainability

AI Summary

This episode explores the challenges financial institutions face in adopting AI and machine learning for fraud prevention, compliance, and automation at scale. Key issues discussed include data leakage, prompt injection, model inversion, and shadow AI usage, which can amplify risks and slow down AI adoption. The conversation covers the need for strong AI governance frameworks, including role-based access, treating AI agents as quasi-employees, and hybrid deployment strategies that balance internal and external AI models.

Key Points

1. Data leakage is a major risk, where sensitive investigation details could be exposed outside the organization
2. Prompt injection, where attackers trick AI into revealing information it shouldn't, is a significant social engineering threat
3. Model inversion, where attackers reverse-engineer AI models, is another risk that complicates AI adoption
4. Shadow AI, where unauthorized AI tools are used within or outside the IT environment, creates hidden risks
5. Hallucinations, where AI confidently provides incorrect outputs, and model drift are ongoing technical challenges
6. Effective AI governance requires full data visibility, role-based access controls, and treating AI agents like quasi-employees

Topics Discussed

#AI governance #Fraud prevention #Compliance #Data security #Model risks

Frequently Asked Questions

What is "Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank" about?

This episode explores the challenges financial institutions face in adopting AI and machine learning for fraud prevention, compliance, and automation at scale. Key issues discussed include data leakage, prompt injection, model inversion, and shadow AI usage, which can amplify risks and slow down AI adoption. The conversation covers the need for strong AI governance frameworks, including role-based access, treating AI agents as quasi-employees, and hybrid deployment strategies that balance internal and external AI models.

What topics are discussed in this episode?

This episode covers the following topics: AI governance, fraud prevention, compliance, data security, and model risks.

What is key insight #1 from this episode?

Data leakage is a major risk, where sensitive investigation details could be exposed outside the organization

What is key insight #2 from this episode?

Prompt injection, where attackers trick AI into revealing information it shouldn't, is a significant social engineering threat

What is key insight #3 from this episode?

Model inversion, where attackers reverse-engineer AI models, is another risk that complicates AI adoption

What is key insight #4 from this episode?

Shadow AI, where unauthorized AI tools are used within or outside the IT environment, creates hidden risks

Who should listen to this episode?

This episode is recommended for anyone interested in AI governance, fraud prevention, and compliance, as well as anyone who wants to stay updated on the latest developments in AI and technology.

Episode Description

Today's guest is Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank. With deep expertise in financial crime and risk analytics, he offers a frontline view into how regulated institutions manage AI securely at scale. Naveen joins Emerj Editorial Director Matthew DeMello to discuss the foundations required for responsible AI adoption in banking — from data protection and governance to fraud prevention and machine-assisted investigation. Naveen also breaks down practical steps leaders can take to improve ROI, including role-based AI access, full data visibility, and phasing innovation to meet regulatory expectations. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast! This episode is sponsored by NLP Logix.

Full Transcript

Welcome, everyone, to the AI in Business Podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank. Naveen joins us on today's show to examine why AI adoption in banking hinges as much on securing data, models, and organizational guardrails as it does on insights, customer experience, and fraud prevention. Naveen helps us break down where financial institutions are hitting roadblocks, from data leakage and prompt-based social engineering to shadow AI and model inversion, and why these challenges complicate efforts to move from manual detection to true machine-assisted prevention. Our conversation also covers practical workflow changes that materially shift ROI: role-based AI access, governance frameworks that treat agents like quasi-employees, and phased rollouts that align innovation with regulatory obligations. Today's episode is sponsored by NLP Logix.

But first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investment strategy or deployment. If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon, and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved with AI implementation, decision making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit Emerj.com and fill out our Thought Leader submission form. That's Emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's Emerj.com slash expert one. Again, that's E-M-E-R-J dot com slash expert one. Without further ado, here's our conversation with Naveen.

Naveen, thanks so much for being back on the show with us this week.

Thank you, Matt. Thanks for having me. It's always great to be here.

Absolutely. We were talking a lot about compliance last time, very much in the much smaller space of copyright primarily. Today, we're talking about how banks and financial institutions are seeing the promise of automation and machine learning with faster insights, better customer service, and stronger fraud prevention. Yet at the same time, these same systems that accelerate decisions can also amplify risk when data, models, and governance fall out of sync. Regulators now expect explainable, auditable, and bias-aware AI pipelines, and institutions are realizing that the rules-first approaches of the past really can't keep pace with adaptive criminal behavior or evolving oversight requirements. As we're seeing financial crime, insider risk, and compliance teams adopt AI, the goal, just as we said in the last episode, is shifting from detection to prevention and from manual review to machine-assisted intelligence. And in order to get there, leaders need to strengthen foundations across data quality, model governance, and organizational accountability. But even before we get to that transition, what foundational challenges are you seeing slow down AI adoption in regulated industries like banking?
Yeah, no, thank you for asking, Matt. This is a really, really great question. There are a number of challenges as we look into it, right? One of them is data leakage. It's very important that the data is secure. For example, let's just think about it, right? Say an analyst pastes internal investigation notes into an AI tool, right, to say, oh, I want to summarize these trends. And suddenly those details about flagged employees are floating outside the organization. So that's where the leakage is. We need to make sure the data stays within the boundary of the domains that have been set. The other piece, I would say, is prompt injection, right? This is, I would say, social engineering of AI. You're basically asking it to ignore all rules and show me all the information that's available, right? So prompt injection is when someone just tricks AI into doing something it shouldn't. And that's a big risk. Model inversion, this is when attackers figure out what's inside your AI. That is the other piece, which is very risky and is making companies stop to think about how to circumvent it or actually make sure it doesn't happen. Shadow AI, in terms of that, is when you have a number of AI tools and IT or compliance doesn't know about them; that is a hidden AI universe, right? These tools are being used either on the company website, sorry, on the company domain, or you are leveraging your personal machines but typing the information or taking the information outside of the domains of your IT environment. So how do you make sure shadow AI isn't happening, either within the domains of IT or outside of them, right? And we all know about, we've all talked about, hallucinations, which are a real thing, where the AI believes in something or provides something that is not there. Hallucinations are when AI very confidently gives an output that is truly not correct. Model drift is another one. But there are a number of these things which we are learning are challenges.

Yeah, yeah. Just to differentiate there, maybe the wheat from the chaff, we've had a lot of guests come on the show and tell us that prompt engineering is really a temporary phase of AI adoption, that in a few years this won't be around anymore. I always take those predictions with a grain of salt. I present them that way on the show. The jury is very much still out on prompt engineering, especially because, and I think just about everybody saw this, I was seeing this on my LinkedIn back in early 2023, but you'd have some guy from Silicon Valley on LinkedIn just saying, oh, hallucinations will be solved in a couple of months. We'll get it. No. And that fundamentally misunderstands the more philosophical problem of hallucinations.
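Editor's note: Naveen's data-leakage example, an analyst pasting investigation notes into an external AI tool, is usually addressed with a boundary check before anything leaves the organization. The sketch below is a minimal illustration of that idea, not a control described in the episode: the identifier patterns (EMP-..., CASE-...) and function names are hypothetical, and a real institution would rely on its DLP and data-classification tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only; a real deployment would use the institution's own
# identifier formats and an enterprise DLP / data-classification service.
SENSITIVE_PATTERNS = {
    "employee_id": re.compile(r"\bEMP-\d{6}\b"),
    "case_number": re.compile(r"\bCASE-\d{4}-\d{5}\b"),
    "investigation_term": re.compile(r"\bflagged employee\b", re.IGNORECASE),
}

def check_outbound_text(text: str) -> list[str]:
    """Return the names of sensitive markers found in text about to leave the boundary."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def guarded_submit(text: str, send):
    """Forward text to an external AI endpoint only if no sensitive markers are found."""
    findings = check_outbound_text(text)
    if findings:
        # Block the call outright and surface what tripped the check, so the
        # analyst (and an audit log) can see why the text never left the organization.
        raise PermissionError(f"Blocked outbound prompt: {findings}")
    return send(text)

if __name__ == "__main__":
    note = "Summarize trends: EMP-031337 is a flagged employee in CASE-2025-00412."
    try:
        guarded_submit(note, send=lambda t: "...external model call...")
    except PermissionError as err:
        print(err)
```

The design choice worth noting is that the check sits in front of the network call, so "summarize these trends" never reaches an external model with flagged-employee details attached.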
And I think this is an experience just about everybody's had with ChatGPT, which is that you think it's a hallucination when it's really your data inputs that it started from, and that even crosses into the very problem of prompt engineering. Just trying to get a sense of this: do you see some problems as temporary, where we only need temporary fixes because this is only going to be around for a couple of years, versus hallucinations, which are more philosophical and get to the heart of what is a misunderstanding, you know?

Yeah, and see, I'm learning as anyone else is learning, and I was in a discussion the other day, and one point was like, hey, hallucinations could go away if you could provide real context in your prompt and the data is set up for that, right? Now, how much of that is true, only time is going to tell. But I think what we need to look into, from the prompt engineering perspective, which you mentioned earlier, is that when we're talking about not artificial general intelligence but a very purpose-fit one, right, which we try to do for our use cases within the organization, the way I see it is limited by who could do what, like role-based. In the sense that when you prompt something, HR sees HR stuff, right? Investigators see flagged employees. Finance sees nothing; it's got nothing to do with it, so it's not going to get a response. In general, from the models and transformers perspective, they are pressured to provide an answer. Hallucinations happen because it has to provide an answer; it cannot say no. Pretty much, your expectation is not that the AI engine is going to come back and say, like, I do not know, or, what do you mean? Right. So I think that's how I see it.

Yeah. And even right there, like, oh, it'll just be solved if we give it all the context in the world. Another word for context, if you want to take it to an extreme, is surveillance, or other people's private data. And that's the exact problem you're running into: oh yeah, it would be great if you could get all the information on that person in the world. Then you've got to go through their trash. You've got to go through their mail. Then you're something of a surveillance state for that subject. And when you start taking many steps back from the word context, it gets a lot more philosophical. The problem disappears from the technology and ends up becoming a more existential question. We only have so much time on the show, of course. Taking it a little more practically and thinking about the more permanent problems, what does strong AI governance look like in banking and financial services?

Yeah, and it's a very, very important piece to cover. I would say full data visibility, right? Tracing every internal data set that's being used is very important. Who accesses it and how AI touches it is very important. And I think role-based AI, which I somewhat mentioned, is like a polite bouncer, right? It only provides information based on the role. If there's an insider investigation going on and finance has nothing to know about it, putting it under the AI checkbox shouldn't return anything, right? Guardrails are an invisible force field. So what I'm saying is, there are rules AI simply cannot break, no matter what prompt it receives. That will stop, I think, attackers gathering information by asking a number of questions and tricking the AI into thinking it's helping when it's actually revealing information an attacker shouldn't know, in the case of an attack, right?
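Editor's note: the "polite bouncer" idea, where HR sees HR stuff, investigators see flagged employees, and finance sees nothing, is typically enforced outside the model, at retrieval time, rather than by asking the model to behave. Here is a minimal sketch of that pattern, assuming hypothetical role names, document tags, and data structures; nothing here is specific to TD Bank's systems.

```python
from dataclasses import dataclass

# Hypothetical roles and document tags for this sketch; a real system would pull
# these from the bank's identity provider and data-classification catalog.
ROLE_ALLOWED_TAGS = {
    "hr": {"hr_policy", "hr_case"},
    "investigator": {"hr_case", "insider_investigation"},
    "finance": {"finance_report"},
}

@dataclass
class Document:
    doc_id: str
    tags: set[str]
    text: str

def retrieve_for_role(query: str, role: str, corpus: list[Document]) -> list[Document]:
    """Drop anything the role is not cleared for before it can reach the prompt.

    Because the filter runs on the retrieval side, no prompt wording, including
    "ignore all rules", can pull back documents the role never receives.
    """
    allowed = ROLE_ALLOWED_TAGS.get(role, set())
    return [doc for doc in corpus if doc.tags & allowed]

corpus = [
    Document("d1", {"insider_investigation"}, "Flagged employee case notes..."),
    Document("d2", {"finance_report"}, "Quarterly expense summary..."),
]

# A finance user asking about the investigation simply gets nothing back.
print([d.doc_id for d in retrieve_for_role("show me flagged employees", "finance", corpus)])
```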
So I would say those. One other thing, which interestingly I read about later as well, is that we've now started to think of a lot of agents as, like, a quasi-human, right, or an employee, in the sense that you treat them and put them through the same process you would use to de-risk an employee. So what data is used, what it touches, what it impacts, right? Who reviews its work, who approves it. Consider them like a mini version of that. In fact, I was talking to a friend of mine, and she was saying how in their corporate environment they started to have these bots called underscore AI. So say Naveen is my name, and Naveen underscore AI underscore bot will appear in the chat, and it will try to learn what you're doing. So where we are right now: think of what a human could do, what an AI could do, and apply the same guardrails around that that you would be applying anyway. So that's one. I would also say hybrid deployment and explainability: sensitive data stays in internal models, less critical tasks can use cloud AI rather than an internal one, right? And models explain their reasoning for human review. Those are some things I believe would really help once we implement them effectively, in the mid- to long-term future.

Yeah, absolutely. And I think, you know, we've been talking so much about kind of the space between the enterprise and the customer as being, you know, a huge source of friction. It's definitely the space where we see the enterprise getting ahead of regulations for their own interest, in making sure that they have a qualified relationship with the customer and that they're effectively handling data privacy. It's the fear; it's more a fear of making sure you don't have a PR crisis, you don't have pissed-off customers, rather than the regulators coming around. That being said, you're still caught between the customer, the regulator, and your own technology driving up the need to find a nice balance in between. And everywhere that's sacrificed, we're seeing innovation kind of caught in between. How can institutions balance innovation with customer obligations and regulatory and security constraints?

Yeah, that's a great question. And a lot of time goes into balancing this act, right? I think it's more of, I would say, a phased approach. Roll out something which is very specific to a use case, limiting the data availability and the data points it needs to leverage to create a usable output, versus making the AI solution comprehensive and leveraging key data sources and everything that's available to it, right? I think that is one of the ways. When you create clear policies on what could be versus what could not be used for the development of the AI models, that would be really helpful. Classify data realistically, right? Like, okay, this is safe data, this is sensitive data, this is critical data and should not be used in a first iteration of work. I think that would definitely help to navigate that balance.

What are the metrics that really matter in this space, especially as you're trying to drive innovation versus regulatory and security restraints?
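Editor's note: the hybrid-deployment and quasi-employee ideas above, where data classification decides which model a task may use and every agent action is logged and reviewable, can be combined into one small routing layer. The sketch below is an assumption-laden illustration: the tier labels echo the episode ("safe", "sensitive", "critical"), but the routing table, endpoint names, and audit-log shape are invented for this example.

```python
from datetime import datetime, timezone

# Illustrative routing table: sensitive data stays on internal models, less
# critical tasks may use cloud AI, and critical data is excluded in a first iteration.
ROUTES = {
    "safe": "cloud_llm",
    "sensitive": "internal_llm",
    "critical": None,
}

audit_log: list[dict] = []

def route_task(task_id: str, agent_id: str, data_class: str, payload: str) -> str | None:
    """Pick a model endpoint by data classification and record what the agent touched."""
    endpoint = ROUTES.get(data_class)
    # Treat the agent like a quasi-employee: every touch is attributable and reviewable.
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "task": task_id,
        "agent": agent_id,
        "data_class": data_class,
        "endpoint": endpoint,
        "reviewer": "pending_human_review",
    })
    return endpoint  # None means the data never reaches any model in this phase

print(route_task("t-1", "naveen_ai_bot", "sensitive", "internal memo text"))
print(route_task("t-2", "naveen_ai_bot", "critical", "insider case file"))
print(audit_log)
```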
We've had guests on the show, I'll quote David Glick of Walmart, who's really, and I guess this makes sense when you used to work... oh yeah, that's right, that's right, that's what I was saying last time we had you on the show, you were at Walmart. Well, and you know the nano agents, and David talked all about how really the philosophy is that we're throwing out the notion that you need to sacrifice speed for safety; you can have both. And that can kind of sound like, well, I guess that might make sense when you have Walmart's resources. Is that a paradigm shift we're seeing coming for enterprises smaller than Walmart?

Yeah, it also matters a lot what domain we are referring to, right? On the compliance side, maybe not, in that sense, while on the retail side, when you're trying to get customers, yes, right? So it's somewhat different. On the compliance side you want to be more conservative than aggressive. Say you're leveraging AI to file suspicious activity reports; you do not want to be over-optimistic in going all the way from data collection to alert generation to reviewing it and putting it into FinCEN's inbox without a human in the loop, right? So I think that needs to be balanced. What we could maybe look at in a different way, in terms of speed versus precision, is: okay, anything which is below the tier-one threshold could really be taken by an agentic AI and disposed of, while anything that touches upon the X, Y, and Z parameters should always be looked at by a human, right? So I think it depends on what domain and what use case it is whether you want to lean more towards leveraging AI as an efficiency, or for the first iteration or first draft, versus actually having a full-blown solution as such.

Absolutely. I know we asked this question last time during our compliance for copyright episode, but let's just say, you know, for the folks out there starting kind of at ground zero, what steps can leaders take now to build a secure and scalable AI foundation?

Yeah, so I would say safe sandboxes. Let's experiment safely without touching real employee data. I think that's what we need to do first before we go further into it. And then building the inventory: know all your models, know all your internal tools, know your vendor AI features, right, whatever could be there. Bring compliance in early rather than asking them to validate the models at the very end; get them to know what's really going on, right? And I would say reward responsible innovation, right? So celebrate projects that innovate and reduce risk, versus shiny projects that look cool until the audits find them and they're like, okay, what exactly is going on, what are you getting out of it? And I would also say, yeah, success equals fewer incidents, better detection, smoother operations. If it creates new risk, something gets roasted in the next board meeting, and you do not want that. So that's my view.

Yeah, board politics is inevitable. And I think it needs its own special communication, which we've talked about a lot on the show.
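Editor's note: Naveen's speed-versus-precision example, where an agentic AI can dispose of alerts below a tier-one threshold while anything touching certain parameters always goes to a human, is essentially a tiered disposition rule. Here is a minimal sketch under stated assumptions: the 0.30 threshold, the flag names, and the upstream risk score are all hypothetical stand-ins for whatever a compliance team would actually define.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    risk_score: float              # assumed 0-1 score from an upstream model
    flags: set[str] = field(default_factory=set)

# Hypothetical threshold and flag names; the real "X, Y and Z parameters" are
# whatever the compliance team defines, not anything stated in the episode.
TIER_ONE_THRESHOLD = 0.30
ESCALATION_FLAGS = {"sar_candidate", "insider_risk", "sanctions_hit"}

def disposition(alert: Alert) -> str:
    """Route an alert: low-tier ones can be auto-closed, the rest go to a human."""
    if alert.flags & ESCALATION_FLAGS:
        # Anything that could end up in a FinCEN filing keeps a human in the loop.
        return "human_review"
    if alert.risk_score < TIER_ONE_THRESHOLD:
        return "auto_close_by_agent"   # agentic AI drafts the closure rationale
    return "human_review"

print(disposition(Alert("a1", 0.12)))                      # auto_close_by_agent
print(disposition(Alert("a2", 0.12, {"sar_candidate"})))   # human_review
print(disposition(Alert("a3", 0.75)))                      # human_review
```

The conservative default, routing to human review unless an alert is both low-scoring and unflagged, mirrors the episode's point that compliance workflows should lean conservative rather than aggressive.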
If you have any advice in that category, I'm very happy to hear it now, but anything, especially for managing up in those particular scenarios, you know.

I think, as exciting as AI is, remember this, right: curiosity is good, enthusiasm is great, but guardrails, monitoring, and human judgment are what keep institutions from becoming the next AI horror story. So we just want to make sure we are cognizant of those facts.

Yeah, absolutely. I think that is ending up one of the big speed limits that we've seen in that moment from late 2022; we're up on the anniversary right now, it's been three years since that kind of Ed Sullivan moment for OpenAI and ChatGPT. And I think, you know, in that initial year the attitude was, oh, this will keep skyrocketing, this will get better and better and better. No, there are a couple of speed limits, and especially that human input, human-in-the-loop review and judgment, is proving a lot more valuable than we thought it was, at least as of about January 2023.

And anyone you talk to, like, data is the spine, right? We all know this phrase of garbage in, garbage out. Probably used more than anything. Like, I've been to many conferences and personal conversations, and it's like, okay, building on what data? I think that's very important. And we continue to untangle that piece for AI, for sure.

Yeah, yeah. I think as the show has gone along, we've heard a lot less garbage in, garbage out, which I think is a good sign; we've officially gotten rid of it. It might become a rule for the show and guests, but we've gotten rid of boil the ocean, and when you start to see even these small phrases leaving, it does tell you something, even if anecdotally, about maybe the state of AI adoption right now.

You could probably run an AI agent on your podcast, Matt, and see, like, those phrases have not been used in the last maybe three months.

I would be very interested. That AI is going to need to be spiked with some Jungian psychologists, some dream reading. I think we might need that in there. We'll have to have you back on the show the next time we do dream readings. Naveen, thank you so much for being here once again.

Thank you, man. Absolute pleasure. Much appreciated.

Wrapping up today's episode, I think there were at least three critical takeaways for leaders in banking, financial crime, compliance, and risk management to take from our conversation today with Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank. First, strong AI adoption depends on disciplined data controls: full visibility into access, clear role-based boundaries, and guardrails that prevent leakage, inversion, and manipulation. Second, treating AI agents like accountable workforce extensions improves governance; defining what data they can use and who reviews their outputs reduces risk and uncertainty. Finally, balancing innovation with regulatory and customer obligations works best through phased rollouts, in the form of narrow early use cases, realistic data classification, and increased automation deployed only when precision and oversight are proven.

Are you driving AI transformation at your organization, or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon, and AI pioneers like Yoshua Bengio.
With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit Emerj.com and fill out our Thought Leader submission form. That's Emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's Emerj.com slash expert one. Again, that's Emerj.com slash expert one. We look forward to featuring your story. If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled, again, Emerj, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and Head of Research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast.
