
Driving the Systemic Change for AI – with Deborah Golden of Deloitte
The AI in Business Podcast • Daniel Faggella (Emerj)

What You'll Learn
- ✓ The 'corporate immune system' resists novel AI capabilities, as large enterprises are optimized for stability and conformity
- ✓ Leaders often get stuck in 'pilot purgatory', failing to connect AI to a new vision for transforming business models and operations
- ✓ Funding models need to shift from projects to a portfolio approach, enabling staged investment and learning
- ✓ Cross-functional teams with shared accountability are critical to the speed and relevance of AI work
- ✓ Decision-making must evolve from intuition-led to data-driven, leveraging AI's probabilistic nature
- ✓ Executive AI fluency - understanding capabilities, applications, and strategic potential - is key to unlocking imagination
Episode Chapters
Introduction
The host and guest discuss common failure patterns in AI transformation efforts and the key challenges enterprises face
The 'Corporate Immune System'
Deborah Golden explains how large enterprises' processes and culture are optimized for stability, resisting novel AI capabilities
The Crisis of Imagination
The discussion explores how leaders often get stuck in 'pilot purgatory', failing to rethink business models and operations
Overcoming the Challenges
Deborah outlines three key areas leaders must address: funding models, team structures, and decision-making frameworks
The Importance of Executive AI Fluency
The hosts discuss the need for leaders to deeply understand AI's capabilities, applications, and strategic potential
Conclusion
The episode concludes with a summary of the key insights on driving successful AI transformation
AI Summary
This episode explores why many AI transformation efforts stall or fail to scale, despite having the right technology and infrastructure in place. The key issues identified are the 'corporate immune system' that resists novelty, and a crisis of imagination among leaders who get stuck in pilot purgatory without rethinking how work and business models should change. To overcome these challenges, the discussion highlights the need to shift funding models from projects to a portfolio approach, create cross-functional teams with shared accountability, and evolve decision-making frameworks to be more data-driven. The episode emphasizes the importance of executive AI fluency - understanding AI's capabilities, applications, and strategic potential - to unlock the imagination required for successful AI transformation.
Key Points
1. The 'corporate immune system' resists novel AI capabilities, as large enterprises are optimized for stability and conformity
2. Leaders often get stuck in 'pilot purgatory', failing to connect AI to a new vision for transforming business models and operations
3. Funding models need to shift from projects to a portfolio approach, enabling staged investment and learning
4. Cross-functional teams with shared accountability are critical to the speed and relevance of AI work
5. Decision-making must evolve from intuition-led to data-driven, leveraging AI's probabilistic nature
6. Executive AI fluency - understanding capabilities, applications, and strategic potential - is key to unlocking imagination
Topics Discussed
AI transformation, Corporate culture, Leadership and decision-making, Funding models, Cross-functional teams, Executive AI fluency
Frequently Asked Questions
What is "Driving the Systemic Change for AI – with Deborah Golden of Deloitte" about?
This episode explores why many AI transformation efforts stall or fail to scale, despite having the right technology and infrastructure in place. The key issues identified are the 'corporate immune system' that resists novelty, and a crisis of imagination among leaders who get stuck in pilot purgatory without rethinking how work and business models should change. To overcome these challenges, the discussion highlights the need to shift funding models from projects to a portfolio approach, create cross-functional teams with shared accountability, and evolve decision-making frameworks to be more data-driven. The episode emphasizes the importance of executive AI fluency - understanding AI's capabilities, applications, and strategic potential - to unlock the imagination required for successful AI transformation.
What topics are discussed in this episode?
This episode covers the following topics: AI transformation, Corporate culture, Leadership and decision-making, Funding models, Cross-functional teams, Executive AI fluency.
Who should listen to this episode?
This episode is recommended for anyone interested in AI transformation, corporate culture, and leadership and decision-making, and for those who want to stay updated on the latest developments in AI and technology.
Episode Description
The gap between a promising AI pilot and enterprise-wide, scalable impact remains a critical divide where momentum and resources often stall. Today's guest is Deborah Golden, U.S. Chief Innovation Officer at Deloitte, who returns to the show to join Emerj CEO Daniel Faggella to tackle the costly reality of stalled AI initiatives. This conversation moves beyond traditional theory, offering a new strategic lens for driving the systemic and cultural change required to scale AI responsibly. Deborah shares how leaders can overcome the deep-seated organizational inertia that typically hinders initiatives, architecting novel approaches to innovation, like AI sandboxes, portfolio-based funding, and "blameless postmortems," and forging the critical, often missing, connective tissue between experimentation and measurable business impact. The conversation delivers concrete strategies for connecting innovation to core operations and unlocking the promise of AI across your enterprise.

This episode is sponsored by Deloitte.

Discover how your company can connect with Emerj's audience through our curated media offerings: emerj.com/ad1. Share your perspective with an audience of enterprise AI decision-makers by applying to join the AI in Business podcast: emerj.com/expert2.
Full Transcript
Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Deborah Golden, U.S. Chief Innovation Officer at Deloitte, who returns to the program to join Daniel Faggella, CEO and head of research here at Emerj, to explore why so many AI initiatives stall and how enterprises can turn strategy into measurable impact. Deborah joins Dan to unpack common failure patterns, from the quote-unquote corporate immune system that resists novelty to crises of executive imagination, and shows how leaders can reframe questions, funding, and team structures in order to succeed. Throughout their conversation, she shares leadership frameworks for helping executives switch between roles as shields, translators, and enablers for AI, and also highlights actionable levers, including portfolio-based funding, cross-functional teams, defined AI sandboxes, and blameless postmortems. Today's episode is part of a special series on AI infrastructure sponsored by Deloitte.

But first: interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit Emerj.com slash sponsor for more information. Again, that's Emerj.com slash S-P-O-N-S-O-R. Also, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investment strategy or deployment? If so, the AI in Business podcast wants to hear from you. Without further ado, here's our conversation with Deborah and Dan.

So Deborah, welcome back to the program.

Thank you so much for having me.

Absolutely. Glad to be able to fly in here. We're kind of diving into some of what makes AI work in practice. And there's a lot of places we could start, but I sort of want to start with failure patterns. I mean, you guys get to work in almost every industry and all different big enterprise companies. You know, why do you see so many AI transformation efforts stall, sort of end up in some purgatory mode, even if it looks like the tech and even if it looks like the infra is there? Because I think people know maybe they need the right talent and infrastructure, but that doesn't ensure success. What do you see as, like, the stall commonalities?

You know, I think the efforts stall because the technology, you know, which thrives on uncertainty, is being force-fitted into business systems that are built for predictability. I mean, you can't scale a probabilistic system on a structure that demands determinism. I mean, I think it's like an operating model clash. And it's the fundamental reason the failure rate for AI projects is so much higher than standard IT projects. You know, let's take maybe a step back from that. If you think about the two specific roadblocks that I see constantly, first you have what I'll call the corporate immune system. Large corporations are designed for stability. Their processes, compliance frameworks, and even their culture are optimized to, I won't say stamp out variance completely, but they slow down variance, right? They want that conformity. When a truly novel AI capability appears, the system instinctively tries to reject it. It's just simply not built for it. And the data shows this, you know, 86% of executives say their current technology is outdated. They're trying to run adaptive AI on rigid systems. You know, it's like trying to run an F1 engine on a freight train track.
The core infrastructure fights the innovation. Second, I think what I see is what I'll call a crisis of imagination amongst leaders. They kind of get stuck in this pilot purgatory. You know, every time I'm somewhere, I ask everybody, how many of you are stuck in pilots? You know, and everybody raises their hand. You know, you launch into dozens of small AI projects for optics, but they fail fundamentally to rethink how work itself should change. And it's not just the work, right? It's the systems. So instead of perhaps asking, you know, where can we apply AI, maybe we can ask, how can AI transform our business model, and why should it change the underlying infrastructure? You know, that failure to connect the technology to a new vision, or perhaps to why we should have a new model in and of itself, I think is why so many promising pilots candidly never translate into enterprise-wide scale or meaningful value.

Yeah, well, and we can poke a little bit into both of those. I mean, the first thing you brought up here sort of reminds me of this dynamic that people close to the tech, I think, understand, and some enterprise leaders don't, which is that AI is not IT. It's much more like R&D. You know, like you had said, we need room to experiment. We need room to, you know, try multiple ideas. We need systems and processes that can sort of adapt to a probabilistic system, not a deterministic system. And so much of that is sort of culture-based. You know, we're going to talk about the role of leadership towards the end of this episode and just how important that is. I think that's going to tie to your imagination problem as well. But when you think about the people who get over that, in other words, they get to an understanding that, you know what, this isn't going to be plug-and-play software like a MailChimp, where, okay, you know, we sign up, we integrate it, we upload our emails, we click this button, it sends email. They know it's a different animal. They're going to need this buffer room for experimentation and R&D. For the folks that get to that culture shift, what are the commonalities? Because that stall pattern is less common than we saw five years ago, but it is still common. What have you seen in those overcoming it?

You know, I think in order to evolve some of that, it's less about the amount of ideas. I mean, I constantly hear people saying, we need more ideas. It's not actually the lack of ideas. We have plenty of people and systems that come up with ideas every second, every minute of every day. It fails because of the systems gap. Again, I think it's the outdated systems that turn good ideas into missed opportunities. And I think the leaders who get there fundamentally re-architect the three things that help them. And specifically, I would say how they fund that change, how they structure their teams, and the governance of their data. And maybe we can dig a little bit into each of those. You know, first, when you think about the funding model, of course, you know, everything always comes down to what's my investment? How do I fund this? What am I going to do? Where do I do it and why do I do it? And I think the funding model has to shift from, you know, projects to a portfolio basis. You know, the annual budget process demands upfront certainty. And we know, again, as we talked about, AI isn't always, quote unquote, certain. And the right approach is to manage a portfolio of bets with clear metrics. Don't get me wrong.
We need to have metrics, but they need to be for staged funding. And that provides for a little bit more of a discipline similar to that of a VC, balancing near-term results with long-term capability building. And it ensures you're funding learning. You're not just guaranteeing wins. I think the second thing is the team architecture. We know, and I don't have to repeat, that silos are where breakthroughs, you know, maybe go to die, although I do believe in some individual thinking. But we know that 67% of employees cite siloed collaboration as a major barrier. And candidly, those numbers oscillate depending upon the day of the week, but we know it's not good. And the solution is creating, you know, fused cross-functional teams. And I would say nowadays it's hybrid teams, and those teams are not hybrid because of location necessarily. It could be digital teams, it could be human teams, but it is going to be a mix of both human and digital, physical and non-physical teams, where business, tech, and data experts are co-located and, most critically, share accountability from day one. Now that moves decision making to the edges and dramatically increases the speed and relevance of the work. And finally, when I said the decision-making framework must evolve, I mean, you know, we talk about intuition-led decisions and perhaps people saying, what if this worked, what if that worked?

Yeah, well, and this portfolio-versus-project approach, I think, really is how mature companies start to see it. Luckily, and maybe you could concur, we see a bit more of that than we did even three years ago. I think part of the way we've seen enterprises learn is they all have to learn this the hard way. I mean, maybe they don't, but they seem to. They all start with a bunch of popcorn projects in disjointed corners of the business. And then at some point they learn enough to be like, hmm, we need to have cross-functional teams do this. Hmm, we need to be thinking about multiple programs at once. Hmm, we need to learn from these initial wins and failures and use that as kind of a cultural capital to continue to move forward and wake up the value of data. Do you see at least a bit of improvement in that over the last few years?

Oh, for sure. I mean, whether it's trial and error, whether it's because they had no other choice, whether it's because of competitive, you know, desire, we are for sure seeing changes and glimmers of hope. I mean, I think without hope and optimism, we wouldn't have some of this space for people looking to solve these problems.

Yeah, and you mentioned kind of the crisis of imagination or something akin to that. I feel like there's a coinable phrase there.
Just a bit of a tee-up, and then I'd love for you to build on this. We think about a concept called executive AI fluency, where in order to have imagination around what AI can do, an executive needs to understand, first, generally what AI can do, the conceptual range of it; second, the range of applications and use cases that might be relevant to their business function, maybe it's anti-money laundering, maybe it's drug development, maybe it's inventory management, whatever the case may be, just a representative set of use cases, so conceptually what can it do, and tangibly what can it do in my domain; and then third, strategically, where can it help to bring us to our five-year goals, to differentiate in the market, to win market share, to ultimately be a more effective company, et cetera, et cetera. And without those three things, it's actually really hard to quote-unquote imagine any kind of project other than some dead-end popcorn project that makes somebody look cool for six months and then goes nowhere. But maybe you've got a different set of criteria for how this is built, because we see the same crisis in imagination. It's getting a bit better. But how do you frame it? How would you tee it up?

You know, I think as you think about this crisis of imagination, I believe in it because, in business, it's the leadership's failure to see how a new technology, whether that's AI or something different, can fundamentally reshape a company's value, operations, or business model. And the reason I do think AI is slightly different than anything we've seen before is back to that probabilistic-versus-deterministic value. I think we have many leaders today still saying, well, it's the same as it was before. It's fundamentally not. You know, quote-unquote before, things actually didn't change. The zeros and ones of the world were if-then statements. I mean, these systems are fundamentally different from if-then statements. And so when you think about this crisis of imagination that's required to truly think differently, to think about something that is basically creating an organization and infrastructure to support the unknown, you're building a system that is built for the unknown, which is counter to the way that things have been built, quote unquote, before. I mean, for me, the crisis of imagination is literally the state of being trapped in an incremental mindset, you know, making the current business 10% better, as opposed to a transformational one, creating a new business or a market. And when you think about what AI is, and that really unpredictable market, that is how you get to a transformational market. And so leaders aren't asking, again, what new things are now possible. You need to ask, how can we use this tool to literally reshape our entire market, not just do it cheaper? There is certainly a need for the efficiency. I think I said on our last podcast, you know, AI is table stakes. AI for operational efficiency needs to be done. There is no doubt, it has to be done. But the how do I do new things? How do I create the art of the possible? How do I truly reinvent my business? That is the unlock for AI. And so the crisis isn't due to a lack of intelligence. It stems from the deeply ingrained structures of successful corporations. I mean, we get stuck in flawed metrics. I mean, these types of things manifest in several recognizable ways within an organization.
Yeah, well, and you've already touched on a few components of where I'm diving in with the next question. You know, right now, you're addressing the imagination consideration. And I would also really double down there. I think it's incredibly, incredibly important. And I know towards the end, we're going to talk about the executive leadership role and culture shift. So for our listeners, you're going to want to be tuned in for that, because I think a lot of it's going to linchpin around those items there. But when it comes to sort of organizational systems that have to evolve to enable this kind of scalable innovation, you've teed up a few of them. But I just want to see if maybe there's a list, some bullets, some additional points you want to chip in. You've teed up the importance of having more of a portfolio approach. You've teed up the need to have a different way of kind of allocating funds that isn't so, like, I put in one dollar here and six months later I make one dollar and seventeen cents. We need a different way of allocating finances. We need a different way of thinking about how we measure ROI of our projects. You also mentioned, and I don't know if this is an organizational system, but the importance of learning, making sure we're learning from these early projects. Data is going to wake up. It's going to define our business. Can we get better even if not every project works out? Can we ourselves improve? I don't know if that is part of, like, an HR training organizational system, quote unquote. But when you think about the systems that undergird a business that have to level up, where do you really like to stick the thumbtack when you're talking to the C-suite?

You know, I mean, I think some of this is about how do you just simply shift the conversation? And I know that sounds incredibly naive, but in everything that I look to do, or every question that we look to have, you know, there are so many strong leaders who fundamentally believe that what they're doing is changing culture or is doing the things that you just referenced in the laundry list of things. And yet when you take a step back and listen to the way they're talking, or the way that they're trying to encourage those changes, they're not actually fundamentally changing the quality of the question. And the quality of the question determines the quality of the strategy. And I do think that there is a really important thing about shifting the conversation. And by doing that, you establish this deliberate way that the business, instead of asking, how can AI optimize, whatever, our call center, asks, what business would we be in if we could predict every customer's needs before they even knew them? Two fundamentally different questions. And by really shifting that narrative, you probably will get different responses and outcomes from systems, people, and processes than you thought could exist.

Yeah. So this kind of is an organizational systems answer that ties to the imagination answer, right? I mean, it kind of sounds like if you frame the question in that way from the get-go, you're automatically going to get answers that are a bit more far-reaching, that require a different kind of investment, that require a different way of thinking about ROI. Is part of this just teeing up the right questions within the business habitually?

Well, I think that's some of it. I mean, look, the cultural shift required for AI is profound. I think we often focus on the technology.
But when you think about what that shift requires, it's where you give your best people the license to be intelligently wrong. You know, we love to say we're a culture of failure. We're fundamentally not. We need leaders who can create those environments where every experiment, success or failure, produces valuable insight that makes the organization smarter. If you look at examples where corporations are, quote unquote, getting it right, they don't just prove the technology works. They prove how it's integrated into complex, regulated workflows, systems, and capabilities, turning days- and weeks-long tasks into hours-long tasks while reinventing business models. And so there's a difference. They're de-risking business transformation, not just the technology. You know, they're validating business cases by building a clear path to production. In order to do that, you have to give people the license to be intelligently wrong. Very smart, intelligent humans and systems being wrong. Our systems don't allow for that. Our performance management systems don't allow for that. The moment you're wrong, you're told you're wrong, not shown how to go valuably learn from that to make not just you smarter, but the organization around you smarter.

Yeah. And so much of that involves starting at the top, just from my many, many conversations here and also just riffing with you on a few different occasions as well. But, you know, because I know we're going to dive into the importance of leadership, we're going to save ourselves maybe even 10 minutes for that question. From an organizational systems perspective, is there a way to manifest this willingness to not get everything totally right? This willingness to learn, to fail forward in kind of reimagining the business? Like, are there, you know, in addition to a CEO who just gets it? Okay, that's nice. But, you know, underneath that, are there systems that support what you just said?

Yeah, I mean, I do think one of the big ones that I often see is the metrics culture that we're in. Which, again, we need metrics, whether that's, you know, funding metrics or performance metrics or operational metrics, they're all needed. But again, how do we rethink them in this world? Again, they were built for different, deterministic systems. How do you change that for an unpredictable world? And again, it's not to say that this is about complete rip and replace. This is not about saying we don't need compliance or regulation. All those things are needed. They're absolutely needed. You need to have goals and objectives. We need to have strategies. It's how do you actually make them coexist, reimagined, in worlds where you're thriving on that unpredictability. And so you can't just promote urgency by saying, we've got to do it better than our competitors. I think that sense of threat actually doesn't work anymore. And so how do you actually look at it and say, there's an external, imaginative perspective that makes this transformation a necessity rather than a choice. And so part of that is, I think, distinguishing between two types of failure. You know, you heard me mention, you know, intelligent failure. I think that's the desirable type of failure, versus what I'll call sloppy failure. That's preventable.
And it's important to distinguish those two. You want to allow for the intelligent failure. The sloppy failure is the kind you can prevent, you know, the kind that comes from negligence or cutting corners or not paying attention to detail. There's little to be learned there, right? That should be avoided, you know, where and if at all possible. The intelligent failure, you know, that happens when you're at the edge of your knowledge, when you're pushing the boundaries, trying something new. It's the result of a well-designed experiment. That's what you should reward. And that's where our talent systems and our operational systems, you know, haven't really rallied behind that type of a failure, where you can turn that quote-unquote idea from a dead-end loop into continuous improvement. And I really do think that's where you can create kind of a better framework for how to, to use your words, fail forward in a better capacity. And so whether that's defining what the hypothesis is for what you're going to fail, testing the experiment of how you're going to test it, analyzing it and truly doing a blameless postmortem, that's often where we also fail, because we don't actually truly do blameless postmortems, and then integrating that adjustment and doing these things in very rapid succession. And candidly, what I find even personally, you know, when you're sitting there, whether that's within a small group, whether that's with a client group, whether that's with an external group, when you're doing that, quote unquote, blameless postmortem, ego tends to get in the way. And by the way, there's both positive and negative ego, but it's really hard, when the experiment's over, to analyze the results and do it from an intellectually honest perspective, separating the outcome from the people involved. If you can get that right, prohibiting blame, the focus must be on what can we truly learn. That true psychological safety is not only essential, it will get you over the hump of being able to fail forward faster and more efficiently. And that, I think, is where I've seen the best organizations truly succeed.

Yeah, well, so much of that starts at the top, which I know we're going to get into. But I like your points here around making blameless postmortems normal. You can do that at any level; that doesn't have to just be at the C-level. I mean, that's a great habit. And then also kind of putting into question a little bit the metrics we're using and saying, do these metrics even allow us to fail forward? Do these metrics even allow us to try a portfolio approach versus an immediate push-button ROI, as if this is some little knickknack IT project over here? I think those are good places to start. And this sort of ties us into question number three, around how essential it is to have kind of a trial-and-error sandbox for AI projects. You're talking about this necessity of moving to more of a portfolio approach. Again, we hear this in many industries, that people that are starting to get it are already kind of doing this a bit. But maybe you can walk us through the importance of this trial-and-error sandbox, even within regulated industries, and maybe even some of the reasons companies resist it, because some folks still are not doing this. So what's your take there?

Yeah. And I mean, I'm all for it. Just, you know, bottom line up front, I'm all for it.
And I do think, you know, going back real quick, to your point, and I would agree on making sure we have that postmortem, it is also critical, and this is where I think the sandbox comes hugely into play, to make sure that we actually know the experiment we're testing before we test it, because we often change the question 17 times to get the answer to come out quote-unquote right. So if we actually have the blameless postmortem, and then the systems to reward that, perhaps we would keep the test accurate. So we would isolate the variables, measure everything, timebox it, and use a sandbox, if you will, to actually test it, so that when we actually get to the blameless postmortem, we're not changing the hypothesis along with the experiment, if you will. Because that's often what I see happen, which I think is actually part of the reason organizations tend not to use these sandbox environments: they're not actually, you know, very specific about the scope. It's just, everybody go in and play. Which, again, by the way, could be a valid use for a sandbox. Some sandboxes are for training, learning, education, development, and those are very, very different things. So again, what is the purpose of the sandbox? If it's just to go in and play, great, go in and play. That's a great use for it. A sandbox is absolutely essential for that. If the sandbox becomes a theoretical playground for real-world R&D, that's slightly different, but also incredibly important, because without that bridge, even the most important idea would die in the lab. And so again, when we loop this back to how we go from taking that hypothesis to proving out whether it succeeds or fails, we need this type of a sandbox to be able to produce valuable data, making sure the entire organization again is smarter, and to reduce that risk at every step. Again, I go back to, I think, where these fail, or why organizations tend to shy away from this: because we haven't necessarily defined, A, the controlled environment, B, what the hypothesis is, or C, how we expect the outcome to come out. And then, of course, going back to the postmortem. So if the true purpose of the sandbox is to not just test code in isolation, but to de-risk the entire business transformation, the tech, the process, the people, you can do all of that in that kind of capacity. But I do think you have to make sure you're being pretty deliberate about it. Again, if you want to use it for simply playing around, learning, you know, making sure you're getting your hands on and understanding things, there's a purpose for that as well. I think you just need to make sure you're defining what your purpose is, and it's 100% valuable.

Okay, this is interesting. So you're saying that there's some cases where you're designing a sandbox in order to do a particular kind of iteration. And let's call it kind of portfolio learning, willingness to fail forward, willingness to experiment around a particular outcome, in which case you can kind of define: what is this sandbox for? What kind of experiments are we going to run, within what kind of time horizon? What are we hoping to learn by the end of this, et cetera, et cetera? Or maybe some companies are just like, hey, within this department, we want to be able to do enough of this stuff to sort of learn and just get our hands dirty with the tools. And maybe the outcome there is a little bit more general.
It's like, hey, you know, in six months, we're going to do a show-and-tell and at least highlight some of the areas or some of the lessons we learned about data cleanliness, or some of the lessons we learned about, you know, what we can do with agentic AI or whatever the case may be. Is that sort of set up front by whoever's kind of creating the sandbox in the first place?

That's right. And the latter one we've seen as a good use case for, like, super users. So when you want to get people trained in a non-production environment, and you've already tested things out, you've done the business transformation, you've already done your R&D and development, but you want to have a training facility, if you will. It's more sandbox, but not yet quite production. You can put your super users in there. It's a great way to get folks hands-on experience before you deploy it at scale.

That's cool. Okay. I don't think we've ever heard that stated on the show here. I mean, the idea that we want to have room to be able to fail is great. But saying, hey, when you do a sandbox, set up what its purpose is, what the criteria of success are, how you're explicitly going to learn from it, and have a way to use it deliberately, as opposed to just, well, I guess we tried a bunch of stuff for 18 months, who knows? You know?

That's entirely right. Because you could have a sandbox environment solely dedicated to, say, proving out your cybersecurity or your backup and recovery. And that's, you know, what that sandbox is intended for. But it's very specific. So, again, you'll know, did that succeed? What portions of it succeeded? If something failed, what parts of the hypothesis failed? But it's, again, also a very controlled environment. So you know exactly what you were running, how you were running it, why you were running it, what else was running. If it failed, why did it fail? Again, it's much easier to be able to diagnose, so that when you come out the other side and make fixes, et cetera, you're not waiting for that to happen in a production environment.

Yeah. Okay, cool. So this is a very actionable sort of initial step here. I think there's probably people with sandboxes, or who are thinking about stepping into them, who absolutely could put that into action off the jump. In terms of kind of putting a bow around a lot of this, we've referenced the necessity of culture change. We've referenced kind of the crisis of imagination and how that might be opened up by making some system-level changes. But obviously, a lot of this starts at the top. And often you are talking to people at the top. So I want to ask you, you know, what role executive leadership really has to play in culture shift, and what qualities define the leaders who succeed at a very high level? What do you think other C-level folks need to understand about this?

Yeah, I mean, look, I think leadership in an AI era, you know, is less about being a commander, although there are certainly times when you need to be in command and control, and more about being an architect. You know, how do you become an orchestrator, if you will? You can't direct the change solely from the top, particularly as you think about all the different ways that AI is integrated into an organization, both inside and outside. And so I think about it more as an orchestrator of ecosystems. And it requires leaders to play, I think, many different roles.
You know, I've got kind of maybe three in mind as I think about it, you know, and no real names here, or no specific order of importance. But, you know, one is what I'll call the shield, where the leader's job is to protect nascent, high-potential initiatives from the organization's own bureaucracy, in the places where that hasn't been updated or needs to stay the same. It means defending those experimental budgets, creating that psychological safety we talked about for teams to take those risks, and ensuring that those failed experiments are correctly framed as valuable investments in learning. Second, the translator. At times, leaders have to become that bridge between the technical and business worlds. I think we still have and see quite often the challenge between business and technology. How do you translate complex, probabilistic AI capabilities into the language the board and the market understand? What is competitive advantage? What is margin improvement? What is net new revenue? You know, without that type of translation, certainly, innovative projects die in the lack of business context and support. But I would say, when continually asked, what's the pace of our AI adoption, you have to bring that back to: what is the business? What are these things that we're looking to do, associated with AI and the complexity of these systems that we're talking about? So this translator becomes super important, again, back to the point earlier around looking to change that narrative. Which gets me to my last role, which is the enabler. And to me, this is the most active, non-passive role. The leader, maybe most obviously, cannot just be a casual observer. They need to be a direct interventionist. They have to be willing to get their hands dirty and dismantle these barriers that create friction. And I don't mean dismantle for the sake of dismantling. I don't mean disruption for the sake of disruption. I mean forcing a change in some of the otherwise rigid processes, breaking down silos between divisions that don't understand or don't see eye to eye, or perhaps are so copacetic that they haven't even disrupted themselves in many, many years, or completely rewiring an outdated policy that's holding a team back and they don't even know it. So it's not about just authorizing change; perhaps it's about clearing a path for it. And so I think ultimately, when you think of these roles, it shows that AI isn't just a technical milestone. It really is a leadership test. You know, it's about who has the courage to shield, the clarity to translate, and the conviction to enable. And I think when you think about those three traits, it's not solely what you need to move from pilots to platforms, but it is what's going to aid in moving you from incremental gains to true transformation.

I really like this breakdown a lot; I'm going to nutshell how I think leaders could put it into action. I mean, change starts at the top has been said, I don't know, 750 times on the podcast over the course of the last 10 years. But, you know, with enterprise AI, of course, it's so true. And, you know, okay, you need to get multiple stakeholders in the room, and, you know, you need to be willing to set aside strategic budgets and allow people underneath you to fail. We've kind of heard these things, and they still ring true.
But what you're doing is you're almost putting three little sub-hats underneath this broader executive AI catalyst: the shield, the translator, the instigator. And the way I almost think about it for the listener is like, hey, listening to those three, is there one of those you maybe aren't doing as much of, you know? It's almost like a litmus test. Across these three, where are you on a 1 to 10 in terms of embodying it? Because maybe just one of them can fail. You're not doing enough shielding, and everything you're supporting and trying to instigate is not carrying forward. I don't know. How do you think about it, Deborah? I'm trying to make sure the audience can apply what you're saying here.

Yeah, no, precisely. And then, when and where do you need to oscillate? That's the thing. I think when you bring up some of these components, people say, well, innovators don't want to have metrics. Or they don't want to be held to returns. And that's, I think, fundamentally false. They just want to shift what those things look like and how they're bound to them. You know, when you're taking a gamble on something that may be six months out, what can you do to have metrics that show up today and in six months? Is it an and? How can you have both of those metrics? And so I do think the question on, are you shielding, are you translating, are you enabling, it's more about when do you need to be a third or a quarter, or 50% here and 25% there. It's not about how do you be 100% of these things all the time. Every situation is going to call for you to constantly be checking in with yourself and with your teams about how to best oscillate in those areas.

Yeah, this is a good way to nutshell it and sort of put these ideas into action. And I think some of the abstract notions of sort of bringing more voices into the room and being more supportive of projects really do roll up under hats that you can wear. So I like where we're bringing people in terms of executive lessons. Deborah, as we wrap up, is there anything else you'd want to mention to kind of C-suite folks who want to embody the success patterns? You know, the people that are able to break through, change culture enough to really see the more transformative gains, let their imagination actually turn into something. Any parting advice as we wrap up?

I mean, I think we've kind of, you know, really hit the nail on the head. But we know that the most important point is that this isn't fundamentally a technology challenge. You know, AI is absolutely a catalyst. It's an incredible tool. It's forcing a long-overdue conversation about disruption. And, you know, no matter what industry I'm in, meeting with industry leaders, people want the disruption. Some of them don't know where to begin, how to begin, where to go. And that's okay. You know, I watch people sometimes, and they're not happy that others are equally confused, but they're almost taking a sigh of relief that somebody else also doesn't know where to begin. And so I think part of it is embracing the fact that it's okay maybe to not know where you're going. And back to this whole idea that AI is built for unpredictability: the world is in a moment of unpredictability. And that's okay.
And so how do we hold up a mirror to our legacy structures, our siloed cultures, our perhaps slow decision-making processes, and not just look at this as a moment to deploy yet another new technology, but use it as an impetus to build a faster, smarter, and more adaptive organization? I think what you'll find is that a lot of people are very anxious, if not unsure, but anxious about getting there.

Yeah, I think that lesson about what our company is turning into through these efforts has to be borne in mind the entire way through. And I think, you know, there's a bunch of threads to pull on here, but there's a thread of learning that has run from the first minute of this discussion, Deborah, that I think every enterprise should take to heart. So I'm wary of where we are on time, but it is always a pleasure. Thank you so much for being with us again, Deborah.

And thank you so much for having me.

Wrapping up today's episode, here are three key takeaways for enterprise leaders who want to move AI from isolated pilots to real, measurable impact while overcoming cultural barriers and scaling innovation responsibly. First, move beyond pilots stuck in purgatory by using portfolio-based funding, cross-functional teams, and clearly defined AI sandboxes, giving teams room to experiment and learn without jeopardizing operations. Second, address cultural resistance by challenging the quote-unquote corporate immune system, fostering imagination, and normalizing blameless postmortems to turn intelligent failure into organizational learning. Finally, leaders must act as shield, translator, and enabler, depending on the circumstances, to protect high-potential initiatives, bridge technical and business understanding, and actively dismantle friction to achieve measurable impact.

Are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit Emerj.com and fill out our Thought Leader submission form. That's Emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's Emerj.com slash expert1. Again, that's Emerj.com slash expert1. We look forward to featuring your story.

If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled, again, E-M-E-R-J, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research.
On behalf of Daniel Faggella, our CEO and head of research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast. Thank you.
Related Episodes

Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank
The AI in Business Podcast
22m

Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer
The AI in Business Podcast
35m

Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti
The AI in Business Podcast
30m

Rethinking Clinical Trials with Faster AI-Driven Decision Making - with Shefali Kakar of Novartis
The AI in Business Podcast
20m

Human-Centered Innovation Driving Better Nurse Experiences - with Umesh Rustogi of Microsoft
The AI in Business Podcast
27m

The Biggest Cybersecurity Challenges Facing Regulated and Mid-Market Sectors - with Cody Barrow of EclecticIQ
The AI in Business Podcast
18m