The AI in Business Podcast

Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti

The AI in Business Podcast • Daniel Faggella (Emerj)

Monday, December 15, 2025 • 30m

What You'll Learn

  • Financial services institutions face higher stakes in adopting AI due to their reliance on sensitive data, brand reputation, and complex regulations.
  • The shift to unstructured data and the widespread use of AI across the organization create a need for granular visibility and control over data, beyond just securing access to models.
  • Perimeter-based thinking is no longer sufficient, as the hard parts of AI adoption have shifted to managing the explosion of unstructured data and the decentralization of AI usage.
  • Knowing where manual, human-in-the-loop work still belongs in processing unstructured data is crucial to responsible AI, despite the common 'white belt' mentality of treating every manual process as the enemy.
  • Adopting AI in financial services requires a significant paradigm shift in data architecture and governance, not just a simple technology swap.
  • The stakes are high, as any interruption or incident can lead to tremendous brand damage for financial institutions.

Episode Chapters

1. Introduction

The episode introduces the challenges financial institutions face in adopting AI, given the need for trust in sensitive data.

2. The High Stakes of AI in Financial Services

Chris Joynt explains why the stakes are higher for financial institutions adopting AI, given their reliance on sensitive data, brand reputation, and complex regulations.

3. The Paradigm Shift in Data Architecture

The discussion explores the significant shift in data architecture and governance required to enable responsible AI deployment, beyond simply transferring data to a new format.

4. The Explosion of Unstructured Data

The episode delves into the challenges posed by the explosion of unstructured data and the need for granular visibility and control, moving beyond perimeter-based thinking.

5. Overcoming the 'White Belt' Mentality

The conversation examines the common 'white belt' mentality of treating every manual process as the enemy, and where human-in-the-loop work still belongs in enabling responsible AI.

6. Conclusion

The episode concludes by emphasizing the high stakes and the significant paradigm shift required for financial institutions to successfully adopt AI.

AI Summary

The episode discusses the challenges financial institutions face in adopting AI due to the need for trust in sensitive data. Chris Joynt, Director of Product Marketing at Securiti, explains that financial services is a data-driven industry with high stakes, as their brand and products are built on trust. The complexity of regulations and the risk of cyber threats add to the difficulty of ensuring data security and compliance. Modernizing data architectures to enable responsible AI requires a paradigm shift beyond just transferring data to a new format, as the 'everything everywhere, all at once' problem of unstructured data and the expansion of AI usage across the organization create a need for granular visibility and control.

Key Points

  1. Financial services institutions face higher stakes in adopting AI due to their reliance on sensitive data, brand reputation, and complex regulations.
  2. The shift to unstructured data and the widespread use of AI across the organization create a need for granular visibility and control over data, beyond just securing access to models.
  3. Perimeter-based thinking is no longer sufficient, as the hard parts of AI adoption have shifted to managing the explosion of unstructured data and the decentralization of AI usage.
  4. Knowing where manual, human-in-the-loop work still belongs in processing unstructured data is crucial to responsible AI, despite the common 'white belt' mentality of treating every manual process as the enemy.
  5. Adopting AI in financial services requires a significant paradigm shift in data architecture and governance, not just a simple technology swap.
  6. The stakes are high, as any interruption or incident can lead to tremendous brand damage for financial institutions.

Topics Discussed

Data security and governance, Unstructured data management, Responsible AI deployment, Financial services industry challenges, Data architecture modernization

Frequently Asked Questions

What is "Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti" about?

The episode discusses the challenges financial institutions face in adopting AI due to the need for trust in sensitive data. Chris Joynt, Director of Product Marketing at Securiti, explains that financial services is a data-driven industry with high stakes, as their brand and products are built on trust. The complexity of regulations and the risk of cyber threats add to the difficulty of ensuring data security and compliance. Modernizing data architectures to enable responsible AI requires a paradigm shift beyond just transferring data to a new format, as the 'everything everywhere, all at once' problem of unstructured data and the expansion of AI usage across the organization create a need for granular visibility and control.

What topics are discussed in this episode?

This episode covers the following topics: Data security and governance, Unstructured data management, Responsible AI deployment, Financial services industry challenges, Data architecture modernization.

What is key insight #1 from this episode?

Financial services institutions face higher stakes in adopting AI due to their reliance on sensitive data, brand reputation, and complex regulations.

What is key insight #2 from this episode?

The shift to unstructured data and the widespread use of AI across the organization create a need for granular visibility and control over data, beyond just securing access to models.

What is key insight #3 from this episode?

Perimeter-based thinking is no longer sufficient, as the hard parts of AI adoption have shifted to managing the explosion of unstructured data and the decentralization of AI usage.

What is key insight #4 from this episode?

Knowing where manual, human-in-the-loop work still belongs in processing unstructured data is crucial to responsible AI, despite the common 'white belt' mentality of treating every manual process as the enemy.

Who should listen to this episode?

This episode is recommended for anyone interested in data security and governance, unstructured data management, and responsible AI deployment, as well as anyone who wants to stay updated on the latest developments in AI and technology.

Episode Description

Today's guest is Chris Joynt, Director of Product Marketing at Securiti. Securiti is a leader in AI-powered data security, privacy, and governance across hybrid and multi-cloud environments. Chris joins Emerj Editorial Director Matthew DeMello to explore why trust in data remains the defining challenge for AI in financial services, and how institutions can modernize their architectures to enable safe, compliant, and scalable AI adoption. Chris also breaks down the practical workflows that reduce regulatory and cyber risk — mapping data flows, classifying unstructured data, preventing shadow AI — and shows how stronger governance ultimately accelerates ROI by enabling faster, more reliable AI deployment across the enterprise. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast! This episode is sponsored by Securiti.

Full Transcript

Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Chris Joynt, Director of Product Marketing at Securiti. Securiti's Data Command Center enables AI-powered data security, privacy, and governance, providing unified visibility and control across hybrid and multi-cloud data environments. Chris joins us to unpack why trust in data remains the central bottleneck for AI in financial services, how GenAI changes the risk equation, and what it really takes for banks, insurers, and capital markets firms to modernize data architectures for safe, compliant AI at scale. Our conversation also gets practical, exploring how to shift from perimeter-based thinking to data-flow-centric controls, how to classify and label sensitive unstructured data at enterprise scale, and how to detect and rein in shadow AI before it creates invisible risk. Today's episode is sponsored by Securiti.

But first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment. If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our Thought Leader submission form. That's emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's emerj.com/expert1. Again, that's E-M-E-R-J dot com slash expert1.

Without further ado, here's our conversation with Chris. Chris, welcome to the program. It's a great pleasure having you.

Happy to be here. Thanks.

Absolutely. Financial services is broadly racing to adopt AI, no matter what MIT says about it. Yet firms face an uncomfortable paradox: their competitive advantage lies in the same sensitive data that exposes them to extraordinary regulatory and reputational risk. The pressure to modernize data systems collides with the reality that legacy architectures were never built to support AI scale, visibility, or governance. Chris, you've been at the forefront of these issues going back to IBM 15 years ago. What are the reasons you're seeing that keep trust in data the central challenge for financial institutions adopting AI?

Yeah, no, thanks. Great question. You know, everybody needs trust. Everybody says, oh, we need trusted AI, we need trust in the data, right? But for FinServ, it's different. First of all, FinServ is the preeminent data-driven industry. They've been ahead of everybody on the adoption curve for years. Ask IBM where they were selling their mainframes back in the day, right? So they've always been the data-driven industry. But things are different for FinServ. I think when it comes to trust in the data and trust in the AI, it's just that the stakes are higher, right?
Similar problems, but the stakes are much higher in FinServ. First of all, their entire business, if you think about it, is built out of sensitive data. And their brand, the product that they sell, is trust. Trust not only in whether I'm going to get reliable outputs here, but in how this data is handled, how this data is managed. All of those things are really big in terms of the brand that you're building in FinServ, and any kind of interruption or incident there can do tremendous brand damage. So that's a big reason the stakes are much higher. Also, I would say that, just given the nature of the data FinServ institutions process, they're the juiciest possible target for cyber threats. So that's another angle: they have to think about how to protect their data from outsiders and malicious intent. And then, of course, there's the overall complexity of regulations. FinServ institutions are among the most highly regulated entities in the world, with overlapping webs of various agencies and geographies. It's very complex. So that makes the stakes much higher as you go to roll out AI: hey, wait a second, is this compliant with all of these various regulations? That's not always an easy question to answer. So I think it's just higher stakes for FinServ institutions.

Absolutely. And I think there's this kind of fallacy out there, especially as this moves to the board level. Maybe the mechanics get simplified and stereotyped, especially when they get out into the broader public and then come back to the board. But there's this fallacy of: oh, we're already a data institution. That's what a bank is, what an insurance company is, whatever your financial institution. It's just all on paper and in a legacy tech stack, and all we've got to do is transfer it to this new format. The new format has these things called hallucinations and all these different kinds of challenges. That's the simple version, and I don't think the simple version really holds. How can institutions evolve their data architecture to enable responsible AI, beyond that simple story that keeps getting told over and over?

Yeah, good question. It's a paradigm shift, right? So I think your typical FinServ institution might say, hey, we've been doing machine learning for decades. We've already been doing this stuff. We already know about security and securing our models and things of that nature. But this is a little bit different. The old model was built for highly structured inputs and outputs, and for assessing risk within narrowly scoped, narrowly defined things. So it was: how am I going to secure access to a model? How am I going to monitor for data drift? All within a certain scope. Now we have what I call the "everything, everywhere, all at once" problem. A customer of mine, and I'm going to steal his line, said, in reference to ChatGPT being this big technological and cultural watershed moment where, boom, now it's out there, now we have this era of GenAI: our unstructured data became gold overnight, right?
And consider that unstructured data, in a lot of organizations, even in FinServ, which processes so much structured data, can be 80 to 90% of the total volume of data that you have. Consider, too, that in this era of AI, with models as capable as they are, models that can reproduce anything and have such general capabilities that they can go off script and do unanticipated things, the degree of fine-grained visibility and control you need at that scale is quite different. So it's not just "swap this out and voila, now we're ready for the AI era." I would say this is a significant paradigm shift. And it's also shifting in terms of who, right? What used to be the realm of my data scientists and my MLOps people, now everybody's doing it. Now everybody wants it. So it's a huge expansion in scope, and in the level of granularity I need in my visibility and control in order to deploy AI safely in FinServ institutions.

Yeah. And you mentioned a really interesting paradigm there, because I call these the white belt mentalities with AI. A white belt mentality I'm coming across more and more often is: manual processes are the devil. We're getting a laser that's going to shoot all the manual processes and get rid of them. They are the enemy. That's white belt. When you're first getting started, yeah, it might make sense. When you get to your black belt, as I call it, you absorb the cursive fallacy: cursive was never the devil, cursive was never a waste of time in schools compared to web coding. Cursive is a technology that brings information to the forefront of your neurobiology, especially if you're taught it really young. And that, as a manual process, is incredibly important for a human, a carbon-based life form that still consumes food, when you need to centralize the information that's most important to you. When you're at your AI black belt and you're surrounded by agents that all do your bidding for you, you might be best served by a little paper-and-pen notebook where you write in cursive and keep what's most important to you at the forefront. Those are the black belt mentalities. But it sounds like you're bringing us a new white belt mentality, or demystifying one, with what one of your clients said about unstructured data being the gold. Because in the white belt mentalities, it's a lot like: oh, structured data is easy, it's just going to go through the data machine and come out the other side as value, and unstructured data is a giant pain in the ass, probably lost to time. No, the unstructured data is the value, and you're going to need to bite the bullet and do the hard work. Sorry, I know this is the biggest myth of AI, that it's going to be easy. No, it will make your life easy eventually, but you're going to have to do more work than you've ever done to get there. That's kind of the rub. Really, really appreciate that perspective. But I also want to nail down something from your last answer. You were talking about these perimeters, and it sounds like maybe the problem is that those perimeters are pre-AI mentalities, the legacy perimeters, and they have to change in the paradigm shift. Am I getting that right?
Yes, yes, a good bit. I think you're describing in the abstract what's difficult about unstructured data, because that's the real thing: we have to shift to unstructured data. And as an aside, I'm an actual white belt now. I just started jiu-jitsu, so your metaphor hits home for me. But yeah, when it comes to AI, and technology innovation in general, I have this philosophy that with every new wave of innovation that comes along, and the excitement that comes with it, the feeling is: oh yeah, now our wildest dreams are going to come true, it's going to be easy. It's never actually easy. The hard part has just moved somewhere. Once we get some time under our belts and start getting away from that white belt mentality and gaining some experience, we find where those hard parts are and we learn how to deal with them. So I think one of those big shifts right now is the shift to unstructured data. And it's such a huge explosion. I mean, just answering the question of what's in these files. I'll give you an example of a customer. We have one big customer with such an enormous data estate that they have over 200,000 data systems we're protecting and working with, and they generate, in logs alone, a petabyte a day. So they've got a ton of unstructured, semi-structured, multi-structured data all over the place. That's billions of files. And when we're talking about AI, about feeding things into AI, you can't just throw a bunch of slop in there and hope the AI is going to figure it out. If we want to stay secure, we need to know what's in those files. Now, answer me that question when there are a billion of them. It's not as easy as putting it through the structured data machine. It's a different animal. And I think the more enterprising organizations are embracing that challenge rather than shying away from it. That's what's exciting about where we are in this AI space right now.

Absolutely. And we've got to speak the language of the white belts, right? And this is what's sort of serendipitous about you being a white belt in jiu-jitsu right now, and these podcasts, and Emerj being founded. I don't know if you knew this, but our CEO...

Oh, yeah. It's something we bonded over, because before my jiu-jitsu days, I was a Muay Thai guy. And jiu-jitsu and Muay Thai are kind of like the yin and the yang, right? There's very little overlap between the two, but there's a deep respect between the two disciplines.

And for everybody listening at home that doesn't know our deeper history: Dan, our CEO and Head of Research, used to be a world competitor in jiu-jitsu. So even though I've never touched the discipline (I've done taekwondo, which is all glorified yoga; give me a break, I did taekwondo as a kid), I can at least bridge these. We need to talk to the white belts. There are teaching strategies you can only use with white belts that are never going to make sense with black belts, and vice versa. That's a truism of education and of martial arts. So let's talk to the white belts in AI right now. What practical steps define that first phase of AI-ready data governance, especially around all these misconceptions we've described?

Yeah, so I think the thing to remember is that we're preparing data for AI, right?
Everybody knows data is the input to AI. If we want to be safe with AI, we need to be safe with our data. No big surprise there. But I would say that when it comes to control and security, we need to really double down on our focus on data. And the reason is, just look in the news at any of these model jailbreaks: you cannot build security and control into the model the way that you think you can. You might be able to stop a model from producing certain harmful outputs, or from saying things that are biased against a certain group of people, or things that are offensive. Fine, you might be able to build that type of thing into a model. But the real control that you need, especially the specific level of control you need in your business, the way you're going to deploy AI, not just some general chatbot but AI inside one of your business processes, that control you have to exert on the data flows across the AI system. So for the idea that we're going to do model observability, or that we'll have some kind of wizardry to build the security into the model itself: no. I think we need to start with the data. And the basic of basics in data security is visibility into it. So we need visibility into our data across different clouds, on-prem structures, hybrid structures. And we need visibility into which models this data is exposed to. We need to map those things. Because the way AI works, especially with models as powerful as today's, once the data is in the model, we've lost control of it. So we need to double down and focus on the data and on securing the flows of data. And another layer of that visibility, which I probably should have mentioned, is deep knowledge. When I say "what's in the data," what I mean is: can we classify that data by level of sensitivity? And there are many specific types of sensitive data, especially in FinServ. We have PII, we have transactional data, we have data that's subject to various regulations. We may have business secrets or sensitive business information. All of these are types of sensitive data that we need to classify and label. We need to put labels on that unstructured data, so that there's a hook or a handle that AI can latch onto and know how to handle it. That's the type of control I can program into a system. I can't just throw all the data in there and hope that I've trained the model to only do good things, you know?
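To make the classify-and-label step Chris describes concrete, here is a minimal sketch in Python. It is illustrative only, not Securiti's implementation: the regex detectors, label names, and sidecar-manifest format are hypothetical stand-ins for the ML classifiers and richer taxonomies a production platform would use.

```python
import json
import re
from pathlib import Path

# Hypothetical regex detectors. Real classification platforms use ML models
# and far richer taxonomies than these toy patterns.
DETECTORS = {
    "pii.ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "pii.credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "finserv.account_number": re.compile(r"\bACCT-\d{8}\b"),  # made-up format
}

def classify_file(path: Path) -> dict:
    """Scan one unstructured file and return a sensitivity label record."""
    text = path.read_text(errors="ignore")
    labels = sorted({name for name, rx in DETECTORS.items() if rx.search(text)})
    return {
        "file": str(path),
        "labels": labels,  # the "hook or handle" AI pipelines can filter on
        "sensitivity": "restricted" if labels else "internal",
    }

def label_corpus(root: str, manifest: str = "labels.jsonl") -> None:
    """Write one JSON label record per file: a sidecar manifest that a
    downstream AI ingestion step can consult before loading any file."""
    with open(manifest, "w") as out:
        for path in Path(root).rglob("*.txt"):
            out.write(json.dumps(classify_file(path)) + "\n")

if __name__ == "__main__":
    label_corpus("./documents")  # hypothetical corpus location
```

The point of the sidecar manifest is exactly Chris's "hook or handle": an ingestion pipeline can check a file's labels before the file ever gets near a model, which is control programmed into the system rather than hoped for from the model.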
Right, right. I don't want to get hung up on the wording of my question. It's just that you're emphasizing data flows, and that comes after you've found the right data. I'm trying to get a sense, especially if the end goal is control, of what comes first. It doesn't seem like a chicken-or-egg situation. You need the right data first, but what's getting lost is the focus on where it's going. Because I think, for the most part, even the global economy is past "data as the black gold." If anything, it's the unstructured data, as you were saying a moment ago. The idea that all data is precious and valuable has run up against the reality that only certain data is precious. But what's getting lost, and I'm just nutshelling your last answer, is that it's great to have the right data, but if you're not telling it where to go effectively, you're never going to get control.

Right. And you need to map those things. You need to have visibility. And the scary part, I think, for a lot of leaders, is that this might already be happening within the organization. People might already be doing AI in the organization. If you don't have discovery and you don't see where that data is going... have you heard the term shadow AI?

Of course.

There could be AI systems processing data that we don't know about. That's a huge level of risk. I mean, the biggest risk is the things you can't see, right? I'll go back to another martial arts analogy: the punch that knocks you out is the one you didn't see coming. They're the worst. So when I talk about visibility and mapping, that's a first, preliminary step, just to see into our various systems and see where the data is going, because we need to have control of that. And part of the problem with AI is that these can be complex systems, where data is moved many times, transformed, merged. You may be using a vector database, where unstructured content is transformed into just vectors and numbers, unrecognizable to us but indexable for AI use. When you're transforming data like that, how do we still track the flow of it and have visibility into it? That's the first level, and it's a big challenge. And our customers tell us... look, one of our customers was in CDO Magazine today, talking about data modernization for AI. This was Citibank. And they said: this cannot be manual for us. It can't be. We need visibility and control at a scale where it can't be super manual, just because of the scale of it, and again, the granularity of visibility that they need.
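Chris's shadow AI point lends itself to a similar sketch: compare egress traffic against an allow-list of approved AI services. Everything here is hypothetical, including the log format, the column names, and the domain lists; real detection would draw on proxy or CASB telemetry rather than a hand-maintained CSV.

```python
import csv
from collections import Counter

# Hypothetical lists: domains of known AI services, and the subset that a
# governance team has actually approved for use with company data.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
APPROVED_AI_DOMAINS = {"api.openai.com"}

def find_shadow_ai(proxy_log_csv: str) -> Counter:
    """Count egress requests to AI endpoints that were never approved.
    Assumes a CSV proxy log with 'user' and 'dest_host' columns."""
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"]
            if host in KNOWN_AI_DOMAINS and host not in APPROVED_AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in find_shadow_ai("egress.csv").most_common(10):
        print(f"{user} -> {host}: {n} requests (unapproved AI endpoint)")
```

The value is in the diff between the two lists: traffic to known AI endpoints that nobody approved is precisely the punch you didn't see coming.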
Yeah. I'll call that maybe a yellow belt, or green belt, mentality; I don't know if jiu-jitsu has a green belt, maybe that's taekwondo. But a little yellow belt, green belt mentality, because they're farther along, right? They're definitely past the first phases, but "manual is still the enemy" is still there. Which continues even when you're at the "cursive isn't bad" black belt phase: manual processes are still more or less the enemy, it's just that the question becomes where you want your manual processes, for human-in-the-loop, for ultimate observation, for that deep connection to organ-based data processing systems, otherwise known as human brains. But yes. I think another part of the conversation that comes up in these moments is: okay, well, we're going to need to slow down to get that right. And I think that's especially assumed in FinServ. We've had folks on the show, David Glick at Walmart is the big one in my memory from maybe the last half year, who came on and talked about how at Walmart the mentality is that you can go faster and more responsibly all at the same time. And, grain of salt, I think they're really doing some incredible stuff over there, and that might have broader lessons for a lot of different industries and enterprises.

Even when I bring this mentality to other industries and guests, though, the reaction is a little bit: well, that's great when you're Walmart and you've got the resources. And I think, especially for defensive, regulated industries, the assumption is, and it's not a bad part of the status quo, that you're going to need to slow down to get that kind of control. How can leaders balance innovation speed with the need for control, especially in financial services?

Fantastic question. And you're spot on. I think the era where these are two opposing sides, where on one side it's "we need to move fast and break things" and on the other it's "no, we've got to shut everything down because that's the only way to be safe, we're going to lock it all down," that's of a bygone era. Move fast and break things is not going to work in the AI era. And at the same time, you cannot just put draconian measures on everybody in your organization, because then you're going to increase the prevalence, or the likelihood, that shadow AI pops up, because people need the efficiency gains and all the things they can get from AI. They're having a consumer AI experience and they want to bring that to their work. So I think we need to strike a good balance, or I would say even a new paradigm, where we look at governance as the strategic capability that helps us move fast over the long run. Not a sprint to the first deployment, but over the long run. And what I mean by the long run is: this is where we avoid sensitive data leakage. This is where we avoid a cyber vulnerability, or the brand damage that comes with these things. This is where we avoid getting a knock on the door from regulators who are going to slow everything down. By being proactive and addressing those risks, we move faster over the long run. To use the speed analogy: the fastest cars have the best handling. They have the best steering systems, because wherever we're going, we need that. And I think a big part of that comes down to people. If you look at the personas inside FinServ organizations, it's: what do they need, where are they coming from, what's their historical perspective, and how do we get them to build cross-functional teams that can really deliver AI security and governance as a strategic accelerator? You have a few personas. You have your AI people, your data engineers, and they're coming from a background where they know how to build models. They want to get their hands on data and see what's interesting in there. They want to build stuff and see what they can do. And they win when they get something into production and actually get to see their baby grow up and walk around in the real world. So what do they need, with that unstructured data where you have unknowns lurking under the surface that you have to account for? They need clean, sanitized, approved data that says: here you go. Take this. Go nuts. Go build something.
That's kind of what they need and how they're coming to it. Of course they're aware of data quality and data security issues, and of how AI might exacerbate those. Of course they are. But really what they want is to just go build things. They want to get their data and go build. And then you have another persona, the CISO's office, that says: hold on, we're building AI systems and I need to make sure these are safe at the system level. I need to address these vulnerabilities. And what's different for them is that they're used to saying, okay, here's the thing I need to protect; let me build a perimeter around it and make sure I have proper access control over who comes in and out. But it's a much deeper level of visibility and control that they need in the AI era. Is there poisoned data? Is there an indirect prompt injection or something hidden in this data? Is there potential for sensitive data leakage? Is something over-permissioned? Is there an enhanced risk here from insiders? All of those are CISO organization concerns. And then you have people coming at things from a risk management standpoint, or a governance and compliance standpoint, saying: I have new regulations, I have new systems here, I have to make sure they're compliant. And I also need some top-down visibility: what's my risk exposure across the enterprise? And that's where we need to help, and say: here are the frameworks you can implement that prescribe certain controls, like the NIST AI RMF, or OWASP. Here's how you can ensure you're compliant with these prescribed controls and at least take a measured approach. We did that with one of our customers, who was able to achieve an improvement of 2 out of 5 possible points, a 40% improvement, against NIST 800-53, which is a security framework. So they improved 40% on compliance with that framework, by taking the approach of: how can I build these controls into my data systems in a way that I can observe and continue to monitor? So, long-winded response, but my point is that we have some different personas coming at things from different angles, and we need to help them meet in the middle and say: this is how we make control of our data, and AI security and governance, a strategic accelerator. So we can roll out use case number one, number two, number three. When we want to add fresh data on a continual basis, we know we've got the machine working pretty well. We can scale up, we can add different users, we can do the things we need to do, and those risks are controlled and managed for.
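Chris's 2-out-of-5-points figure is a measured score against prescribed controls. As a toy illustration of how such a scorecard can work, here is a sketch that scores control coverage; the control IDs echo NIST SP 800-53 naming, but the selection, descriptions, and statuses are invented for the example and are not the customer's actual assessment.

```python
# A toy scorecard: the fraction of prescribed controls with observed,
# monitored evidence of implementation. Everything below is hypothetical.
CONTROLS = {
    "AC-3": "Access enforcement on data stores feeding AI",
    "AC-6": "Least privilege for model and pipeline service accounts",
    "AU-6": "Audit review of AI data-flow logs",
    "RA-5": "Vulnerability monitoring of AI system components",
    "SC-28": "Protection of sensitive data at rest",
}

def compliance_score(implemented: set[str]) -> float:
    """Fraction of framework controls with evidence of implementation."""
    return len(implemented & CONTROLS.keys()) / len(CONTROLS)

if __name__ == "__main__":
    before = {"AC-3"}
    # Hypothetical progress after classification, labeling, and flow mapping.
    after = {"AC-3", "AC-6", "AU-6"}
    delta = compliance_score(after) - compliance_score(before)
    print(f"before: {compliance_score(before):.0%}, "
          f"after: {compliance_score(after):.0%}, improvement: {delta:+.0%}")
```

In this toy example, going from one implemented control to three is a gain of 2 out of 5 possible points, the same arithmetic behind the 40% improvement Chris cites.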
Absolutely. And just nutshelling that response: get control first, and it will give you innovation speed as you go.

There you go.

Chris, really, really appreciate you being with us. Thank you so much for joining today's show.

My pleasure. Thanks for having me.

Absolutely. Wrapping up today's episode, I think there are three critical takeaways for financial services leaders, data and AI executives, and risk stakeholders to take from our conversation with Chris Joynt, Director of Product Marketing at Securiti. First, effective AI adoption starts with visibility into data flows. Without knowing where sensitive data lives and how it moves, you cannot exert meaningful control over AI systems. Second, unstructured data is now both the greatest source of value and the greatest source of risk. Classifying and governing it at scale is essential to preventing shadow AI and enabling safe, reliable use cases. Finally, strong governance accelerates innovation. When security, data, and risk teams align on shared controls and frameworks, organizations can reduce exposure while scaling AI more quickly and confidently.

Are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment. If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our Thought Leader submission form. That's emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's emerj.com/expert1. Again, that's emerj.com/expert1. We look forward to featuring your story.

If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled, again, E-M-E-R-J, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and Head of Research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast.

