Practical AI

Cracking the code of failed AI pilots

Practical AI • Practical AI LLC

Thursday, September 11, 2025 • 46m

Episode Description

In this Fully Connected episode, we dig into the recent MIT report revealing that 95% of AI pilots fail before reaching production and explore what it actually takes to succeed with AI solutions. We dive into the importance of AI model integration, asking the right questions when adopting new technologies, and why simply accessing a powerful model isn't enough. We explore the latest AI trends, from GPT-5 to open source models, and their impact on jobs, machine learning, and enterprise strategy.

Featuring:

- Chris Benson – Website (https://chrisbenson.com/), LinkedIn (https://www.linkedin.com/in/chrisbenson), Bluesky (https://bsky.app/profile/chrisbenson.bsky.social), GitHub (https://github.com/chrisbenson), X (https://x.com/chrisbenson)
- Daniel Whitenack – Website (https://www.datadan.io/), GitHub (https://github.com/dwhitena), X (https://x.com/dwhitena)

Links:

- The GenAI Divide: State of AI in Business 2025 (https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf)
- MIT Report: 95% of generative AI pilots at companies are failing (https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/)

Sponsors:

- Miro – The innovation workspace for the age of AI. Built for modern teams, Miro helps you turn unstructured ideas into structured outcomes, fast. Diagramming, product design, and AI-powered collaboration, all in one shared space. Start building at miro.com (http://miro.com/)
- Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business, no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai (http://shopify.com/practicalai)

Upcoming Events:

- Join us at the Midwest AI Summit (https://midwestaisummit.com/) on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today!
- Register for upcoming webinars at https://practicalai.fm/webinars

Full Transcript

Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now, on to the show. Welcome to another episode of the Practical AI podcast. In this Fully Connected episode, it's just Chris and I, no guests. And we'll spend some time digging into some of the things that have been released and talked about in AI news and trends, and hopefully spend some time helping you level up your machine learning and AI game. I'm Daniel Whitenack. I am CEO at Prediction Guard. And I'm joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel. So much happening in the world of AI. Holy cow. Yes, yes. Lots to catch up on. There have been a number of interesting developments in the news that maybe people have heard about. And yeah, it might be good to just kind of distill down and synthesize a little bit in terms of what they mean, what they signal, how people can think about how certain things are advancing. So yeah, looking forward to this discussion. Anything that has been particularly standing out to you, that you've been hearing about, Chris? Sure. I think the thing that has really been noticeable in recent weeks has been so many people that are both in the AI world and outside of it, but impacted like everybody is, are talking about jobs. And we've talked about the impact of AI on jobs many, many times on the show over time, but people are really, really feeling it at this point. The job market is pretty tight. I've talked to lots of people out there looking, whether they're currently employed or whether they're out of school. And particularly, there's a lot of people in technology coming out of university that are really struggling right now. And I believe there was a report recently from MIT that highlighted that. Yeah, it's interesting that we spent a good number of years on this podcast, of course, occasionally talking about some of the wider impacts of this technology, you know, within a certain company or industry. Now this is kind of like a global, across-all-people thing, you know, all sectors. You see things being hit hard, especially in, you know, maybe it's sales and marketing or kind of junior developer type of cases. If I remember right, Chris, the MIT report that you're talking about, which we can link to some of the news articles about, I don't think I've actually seen the actual MIT report yet. So I guess our listeners can keep that in mind. But one of the things that I've actually heard on multiple calls, and I was last week at an AI corporate type event where there were a bunch of corporate leaders, and they were certainly talking about this. One of the things talked about in the MIT report was that 95% of AI pilots fail. And that, I think, has generally spooked a lot of business leaders, investors, lots of different people across industry, just that level, 95% of AI pilots failing. What's your thought on that?
I think it's a weird juxtaposition right now that we're in, in that that's accurate, that you're having a tremendous number of Gen AI in particular efforts fail. But at the same time, companies are holding back on hiring junior devs out of school. And so you have this weird mesh of people going hardcore on trying vibe coding and things like that, but with very limited success, struggling to get it adopted. And at the same time, they're making bets on the future by not going ahead and bringing in the junior level developers that they always have, which kind of leads to an interesting what-if situation in the months ahead. Junior developers, historically, have eventually turned into senior developers. And right now, companies are betting on those senior developers with these new AI capabilities over the last couple of years to make up for that deficit, hoping to save money. But if you're failing 95% of the time, it puts things into an interesting place. Yeah. I'm just reading one of these articles about the AI pilots, and one of the things that it highlights is that, from the report's perspective, it's not that the AI models were incapable of doing the things that people wanted to prove out in the AI pilots, but that there was a kind of major learning gap in terms of people understanding actually how an AI tool or workflow or process could be designed to operate well. So I find this very, very prevalent across all of the conversations that I have, especially in workshops and that sort of thing. There's kind of this disconnect. People are used to using these kind of general purpose models, maybe personally. And there's this concept that the way I implement my business process with an AI system is similar to the way I prompt an AI model or a ChatGPT type interface to summarize an email for me. And that is always going to create some pain. Number one, because these AI models only sometimes follow instructions. But number two, your business processes are complicated, right? They're complicated. They rely on data that is only in your company and probably has never been seen by an AI model unless you accidentally leaked it. Jargon, all of those sorts of things. And number three, these business processes that people are trying to automate or create a tool around, often the best thing for that is not a general chat interface. It's not like you want to create a chat interface for everything you want to do in your business. No, actually, in one particular case, it may be that you want to drop a document in this SharePoint folder. And when it's dropped in the SharePoint folder, it triggers this process that takes that document and extracts this information and compares it to information in your database and then creates an email and sends out an email to the proper people, or something like that. These sorts of processes are not general chat interfaces. So people are coming at it like, oh, I know how to kind of prompt these models to do some things. And so they try to kind of build or prompt these models in a certain way without the kind of, I don't know why there's such a disconnect, but without the understanding that really what they need is maybe a custom tool or automation. They need data integration, data augmentation to these models. They don't just need a model plus a prompt. And I think that that's a pitfall that I see, unfortunately, very, very often.
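To make that concrete, here is a minimal sketch of the kind of folder-triggered workflow being described, as a plain script rather than a chat interface. Everything in it is hypothetical: the mounted folder path, the `call_llm` and `lookup_in_database` helpers, and the email addresses are stand-ins for whatever your stack actually uses.

```python
import json
import smtplib
from email.message import EmailMessage
from pathlib import Path

# Hypothetical local mount of the watched SharePoint folder.
WATCHED_FOLDER = Path("/mnt/sharepoint/incoming")

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model client you use (hosted API, vLLM, etc.)."""
    raise NotImplementedError("wire up your model provider here")

def lookup_in_database(invoice_number: str) -> dict:
    """Stand-in for a query against your internal system of record."""
    raise NotImplementedError("wire up your database here")

def extract_fields(document_text: str) -> dict:
    # The model is one narrow step: structured extraction, not open-ended chat.
    prompt = (
        "Extract the vendor name, invoice number, and total amount from the "
        "document below. Respond with JSON only.\n\n" + document_text
    )
    return json.loads(call_llm(prompt))

def process_document(path: Path) -> None:
    fields = extract_fields(path.read_text())

    # Compare against internal data, the part no general-purpose model has seen.
    expected = lookup_in_database(fields["invoice_number"])
    mismatches = {k: v for k, v in fields.items() if expected.get(k) != v}

    # Notify the right people by email; no chat interface anywhere in the loop.
    msg = EmailMessage()
    status = "mismatches found" if mismatches else "verified"
    msg["Subject"] = f"Invoice {fields['invoice_number']}: {status}"
    msg["From"] = "pipeline@example.com"
    msg["To"] = "ap-team@example.com"
    msg.set_content(json.dumps(mismatches or fields, indent=2))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    # A real deployment would use an event trigger or webhook;
    # polling the folder keeps the sketch self-contained.
    for doc in WATCHED_FOLDER.glob("*.txt"):
        process_document(doc)
```

The design point is that the prompt is buried inside a process with its own trigger, data comparison, and delivery step; swapping the model later doesn't change the shape of the workflow.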
So it's not super surprising for me to see this kind of highlighted in the MIT report. Yeah, I agree. I think, you know, to your point, these chat interfaces, it's kind of becoming the universal hammer in most people's heads, and everything is starting to look like a nail for that hammer to hit. And they are neglecting a toolbox full of tools that give them the right software components for getting their workflows put together the way they want. And so, yeah, I think it's a certain amount of over-expectation that is then exacerbated by choosing the wrong software approach, or an incomplete software approach, to try to get the job done, that workflow, that business workflow done. So that's certainly my sense of it. I think a lot of these are driven top down, you know, by excited executives that haven't taken the time to really understand how to optimally use these tools. Yeah. And another thing that was kind of interesting in the study is this kind of just prompting of the models generally kind of failed. There's kind of a knowledge gap on how to integrate data and how to build custom solutions in a way that could succeed in kind of a POC sort of thing. But I actually don't agree with the premise that this should spook investors away from AI. Maybe from some AI companies, but not from AI companies that are verticalized, in general. And, you know, I lead an AI startup, so I may be biased, but our AI startup isn't one of these kind of verticalized application layer things. So I feel like I can maybe speak objectively with respect to those. It's really these kind of AI companies that are in whatever it is, healthcare or the public sector or education or finance or that sort of thing. They are putting in the work, I think, at least many of them are, to understand business processes and build robust AI workflows and kind of tools that are fitting certain business use cases and that sort of thing. And I think one of the stats in the report was that a lot of the kind of trials of these sorts of tools did actually succeed a majority of the time. And so, I guess in summary, what I'm trying to say is there's this major gap between, on one side, this idea that all you need is access to a model, and on the other side, these kind of pre-purposed AI systems for particular verticals that understand the business processes. There's a whole kind of gap in the middle, because many companies, especially in the enterprise setting, will have to customize something. I don't think in the end they will be able to always use a tool off the shelf that works completely for them. If you look at any enterprise software, it's always customized, right? At some level, whether it's manufacturing software, ERP software, CRM, whatever, it's always customized. I think that's where maybe this is highlighting that gap of companies not understanding, you know, the gap between having a model and a verticalized solution for their company. And that actually does require, you know, significant understanding of how to architect and build these systems, which unfortunately, there's a skills gap and a learning gap in terms of people that actually have that knowledge, right? Yeah, I think you're drawing out a great point there. And I know we've talked a little bit about this in previous shows where, you know, the model constitutes a component in a larger software architecture.
And we know, as you just pointed out, that the expertise of those business workflows being integrated into vertical software stacks, you know, where it is designed to solve the problems and not just be a chat box, is really important to getting to a good solution that works for your business. And I think this is where one of those challenges that we're seeing in a lot of folks out there in the business world is kind of forgetting that core tenet, and leaping straight to "the model will run my business from this point forward" without that supporting infrastructure. So maybe there are some hard lessons to be learned in the days ahead for some companies, but hopefully that will happen. Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. You've got your models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI-powered app or launching a brand around the tools you've built. Shopify is the commerce platform trusted by millions of businesses and 10% of all U.S. e-commerce, from names like Mattel and Gymshark to founders just like you. With literally hundreds of ready-to-use templates, powerful built-in marketing tools, and AI that writes product descriptions and headlines for you, and even polishes your product photography, Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog. Check us out, merch.changelog.com. That's our storefront. And it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify. Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai. Well, Chris, there's a couple of things that I've been following with respect to the model builders. But it does actually connect maybe to this MIT report as well, because one of the other kind of common things that I see, or a way of thinking that companies have when they approach kind of AI transformation, is they come at the problem of AI adoption with the question of, what model are we going to use? Which I think is the completely wrong question to be asking, for a number of reasons. First of all, if you're coming to AI for the first time with your company and you want to transform your company with AI and build knowledge assistants and automations and adopt other tools and build verticalized solutions, the model actually will shift over time a lot. So that's, I think, number one. There's no single provider, at least right now, that kind of owns the market. There are certainly many providers of models. There's a lot of good models. No one knows who will have kind of the edge on models in the future. And I think what we're actually seeing is that the model side is fairly commoditized. You can get a model from anywhere. The second reason I think that that's the wrong question to be asking is that if you're trying to build an AI solution within your company, again, think about that SharePoint thing that I talked about. I'm going to process this document from SharePoint and extract these things and send it to an email. You actually don't need a model.
You need a set of models and potentially other things in the periphery around those models. So you likely need a document structure processing model, like Docling. You need a language model to process pieces of that. You maybe need embedding models to embed some of that text or do retrieval. You need a re-rank model, because you've got to re-rank your results after doing retrieval. You need safeguard models, because you want to be responsible and check the inputs and outputs of your model. So once you start adding these things up, even for that simple use case of processing this document through SharePoint and out the other end, if you're coming to one of these proof of concept projects like we've been talking about, and you're thinking, what is the model I'm going to use for this? And you decide, okay, the model I'm going to use for this is a GPT model or a Llama model or whatever. Well, you're already setting yourself up for failure. Because what you don't need is a model. What you need is a set of models. You sort of need an AI platform. You need an AI system that gives you access to multiple kind of different types of functionality, right? And so I think that that kind of perspective plays into this kind of POCs-failing thing. And I have more thoughts about that related to the model builders, but do you think I'm off in that? How would you correct me? Oh, no. No, I have no correction. We've talked about this as well a bunch of times, and I keep waiting for, you know... I think we've had a lot of really in-depth conversations with people on the show in the past about the need for multiple models to tackle different concerns within your larger business focus and the software architecture that supports that. And I think as you look forward, we're seeing agentic AI, we're seeing physical AI developing more and more, and all of those require a number of different models to do, kind of obviously, very different, distinct things that you can see in those spaces. And so there seems to be this hang-up in the public about the model, you know, and which model, and then I pick the model. And as you pointed out, that's completely the wrong thing. It's, what does your model architecture look like as related to your business workflows and the job that you need doing, and how do you do that? And as you were describing some of those potential models that one might have in a workflow earlier, as I was listening to your examples, I was thinking, wow, sounds a lot like software architecture. You know, it's just that each component is invested with one or more models now. But there are still many components that make up a full business workflow. And so I guess maybe because we talk about it fairly regularly on the show here, it seems quite obvious to me that that's the case, but clearly it's not, if you look at the business decisions that are being made out there. There is clearly a need at this point. You may have a senior developer type, by whatever title you're applying, working and kind of sort of knowing software dev at some level and sort of vibe coding and putting their knowledge of architecture together for a solution. But if you're not going to bring in junior devs, then you're making a gamble that you're not going to need that at some point. And yet what we're seeing is, you know, per that report, a 95% failure rate on using these tools at the current point in time we're at now.
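Counting those stages up, a sketch of that "set of models" for even the simple document use case might look like the following. Every function here is a hypothetical stand-in for a separately deployed model endpoint (document parsing, embeddings, re-ranking, safety checks, generation), not a real API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float = 0.0

def parse_document(raw_bytes: bytes) -> list[Chunk]:
    """Document-structure model (e.g., a Docling-style parser) -> text chunks."""
    raise NotImplementedError

def embed_and_retrieve(query: str, chunks: list[Chunk], k: int = 20) -> list[Chunk]:
    """Embedding model plus vector search over the chunks."""
    raise NotImplementedError

def rerank(query: str, candidates: list[Chunk], k: int = 5) -> list[Chunk]:
    """Cross-encoder re-rank model reordering the retrieval results."""
    raise NotImplementedError

def is_safe(text: str) -> bool:
    """Safeguard model checking inputs and outputs."""
    raise NotImplementedError

def generate_answer(query: str, context: list[Chunk]) -> str:
    """The language model: one stage among five, not the whole system."""
    raise NotImplementedError

def answer_from_document(raw_bytes: bytes, query: str) -> str:
    if not is_safe(query):                          # safeguard on the input
        return "Query rejected by safety check."
    chunks = parse_document(raw_bytes)              # model 1: document structure
    candidates = embed_and_retrieve(query, chunks)  # model 2: embeddings + retrieval
    context = rerank(query, candidates)             # model 3: re-ranker
    answer = generate_answer(query, context)        # model 4: LLM
    if not is_safe(answer):                         # model 5: safeguard on output
        return "Answer withheld by safety check."
    return answer
```

Seen this way, the question "which model will we use?" dissolves into "which stages does our workflow need, and how do we operate all of them?"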
So there's some... you know, we had a recent episode on risk management, and from a risk management perspective, I think that there are a lot of very risky decisions being made by executives, largely in ignorance, I think, of kind of understanding how models and software architecture fit together, to your point. So, no, I'm in violent agreement with what you were saying a moment ago. Yeah, this idea also that you kind of bring in a model also produces a little bit of problematic behavior around adoption of models, particularly in private, kind of secure environments. And I know this one from experience, where you kind of think, well, which model am I going to use? And then you think, well, there's a couple of categories of models, right? There's closed models, there's open models, the closed models being the GPTs or Anthropic models, etc., the open models being the, you know, Llama or DeepSeek or Qwen or whatever. And, you know, you have smart people in your company, and whoever, Frank over in infrastructure, he's like, yeah, I can spin up a Llama model in our infrastructure, right? And there's innumerable ways to do that at this point. So you can use vLLM or you can use Ollama or whatever it is, right? I can spin up one on my laptop. And so you spin up the model, and then you're like, all right, well, let's build our POC against Frank's model that he's deployed internally, because we now know how to do that. But again, it's not so much that Frank did a poor job and the deployment is bad or the tools are bad. Like, vLLM or something is very powerful. But it's not a proper comparison, because what you've deployed is a single model, not a set of AI functionalities to build rich AI applications. You now have a private model, which again only does what a private model does, that one particular model. It doesn't give you that rich set of AI functionalities. And so it's not really a knock against open models. What it is an indication of is that you maybe shouldn't roll your own AI platform, right? So there's a lot of things that go into that. And there's various ways to approach it. But I think that misunderstanding of what model do I need also impacts the perception of these open source models, because most of the time when you deploy that open source model, you're only getting a single model endpoint versus kind of a productized AI platform. Yeah, I know that in your own business, you know, you do help bridge some of that gap there in some of the things that you guys do. But in general, if you're talking in the broader market, people do have service providers out there that can give them some of those services. But as they are going and deploying... and in a sense, we've always encouraged open source and open weight models out there. We've talked about that a lot, and we like to see that. And yet there is this skill gap or understanding gap that you've just defined in the business community of, yes, you have these capabilities, but you've got to connect all of your resources, you know, using the term in a very generic way, all of your resources together to give you the capabilities you need for your business to operate the way you envision. And, you know, there's a definite falling down in understanding within that gap space. You know, what are some of the different options for how people can get through that gap?
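For reference, this is roughly what "Frank's model" buys you. Assuming a local vLLM server started with something like `vllm serve <model>` (or an Ollama equivalent), which exposes an OpenAI-compatible endpoint, a client sketch might look like this; the model name and URL are illustrative, not prescriptive.

```python
# Minimal client sketch against a locally served open model.
# Assumes the `openai` Python package and a vLLM server on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the one model that was deployed
    messages=[{"role": "user", "content": "Summarize this contract clause: ..."}],
)
print(resp.choices[0].message.content)

# This is one chat-completions endpoint, not a platform: the document parsing,
# embeddings, re-ranking, safety checks, monitoring, and logging from the
# earlier sketches are all still yours to build and operate.
```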
Well, I think one of the things that can be done is to approach this sort of problem from a software and infrastructure and architecture standpoint, to what you were saying before. A lot of what we're talking about really kind of falls in that architecting side of things. And so I think from the beginning, the question is not, what model are we going to use and who's going to deploy it internally, right? You come to it from the standpoint of: we will be using models, there will be many models, and they will be connecting to many different software applications. Okay, well, that changes the game a little bit in terms of managing that and making it robust over time. And there are very many capable engineering organizations that know how to scale bunches of services and keep them up and set up uptime monitoring around them and alerting and centralized logging and all of those sorts of things. But you never get to have those conversations if you kind of cut it out before you get there by just saying, we will have a model living somewhere. And so you really need to approach it from this kind of distributed systems standpoint. And once you start doing that, you start talking to the experts that are on your team. And there are very many tools, depending on the standpoint that the company wants to approach this from, everything from the company still managing a lot of what they want to deploy and using, you know, orchestration tools, whether that be something like Rancher or, you know, something like that that's generic and not AI-related, right, but they're used to using it, maybe they're already using it in their organization, and they can orchestrate deployments of various AI things. And then there's AI-specific, you know, approaches to this as well. So I think it is really a perspective thing. And as soon as you kind of get into that zone of, well, we do need this software architecture, we need this kind of SRE and DevOps kind of approach to things, then you really have to ask some of the hard questions, like, can we vibe code our way through this? What kind of software knowledge do we need to actually support this at scale? And I think what people will find is, you do at the minimum, I think, still need that software engineering and infrastructure expertise to do it at scale well, or at least to kind of guide some of the vibe coding type of things that happen, right? So there needs to be an informed pilot to help guide some of these things and make sure the ship is going in the right direction. I think that's very, very well put. And I think, you know, kind of combining this with the hiring decisions that we're seeing out there in the job market and kind of the collapse of the bottom end of the software dev industry, there is a lot of developed expertise over the course of a career. And while all of the discrete points of knowledge may be captured in various models out there, there's still the necessity of extracting what you need from those models in the right context and the right order. And at least for the time being, in this kind of Gen AI dominated world, you have to have somebody who can provide that kind of architectural view and know how to provide the context to get the things you need from your vibe coding. I think people are finding small successes under that, where they say, I want an app that does this thing. And they describe the app in great detail.
And the models that they're using will turn out kind of an app. It may or may not be architected in a way for sustainability. And, you know, there's a whole bunch of issues there. It might make a very good prototype. But if you're not going to bring in junior level coders, who in the future will be your senior level coders that have this knowledge, then you're kind of betting on today's talent producing something, and you're hoping that your model gets the nuance of all of those components and is able to generate its own context without your expertise, which may happen, but it's a big gamble if you're a company right now. It seems to me a lot less risky to go ahead and continue to bring in some junior level developers for the purpose of growing them over time and being able to have that. And maybe at some point that does change in the future. But I think the companies that are doing that today are taking some fairly significant business risks that are largely invisible to their executives. Well, friends, you don't have to be an AI expert to build something great with it. The reality is, AI is here, and for a lot of teams, that brings uncertainty. Our friends at Miro recently surveyed over 8,000 knowledge workers, and while 76% believe AI can improve their role, most, more than half, still aren't sure when to use it. That is the exact gap that Miro is filling. And I've been using Miro; from mapping out episode ideas to building out an entire new thesis, it's become one of the things I use to build out a creative engine. And now with Miro AI built in, it's even faster. We've turned brainstorms into structured plans, screenshots into wireframes and sticky notes, chaos into clarity, all on the same canvas. Now you don't have to master prompts or add one more AI tool to your stack. The work you're already doing is the prompt. You can help your teams get great done with Miro. Check out Miro.com and find out how. That is Miro.com, M-I-R-O.com. Well, Chris, I do think that there are some consistent news stories from other sources outside of the MIT report that kind of reinforce some of what we've been talking about, and also, I think, are just generally interesting as individual data points. And I've seen a number of those related to OpenAI specifically. If we just look at what has happened with OpenAI in the previous number of weeks, some interesting things have happened, and I think that they signal some things that are, like I say, very consistent with what we've been talking about as prompted by this MIT report. Just to highlight a few of those, and then we can dig into individual ones of them. One of the things that happened was OpenAI released GPT-5, which we haven't talked about a ton on the show yet. Generally, the reception in the wider public has been that people don't like it, and sort of it's fallen flat a bit, I guess, would be a way to put it. So that's kind of thing one. At the same time, OpenAI open sourced some models again for the first time in a very long time. Five years. Open sourcing a couple of reasoning models, LLMs that do this type of reasoning. And also near the same time, I forget the exact dates, someone listening can maybe provide those in a comment or something. But the other thing that happened was that they opened kind of a consulting arm of their business and are entering into services consulting type of engagements, which are not cheap.
I think the minimum price for a consulting services arrangement with OpenAI was like $10 million or something like that. So you've kind of got this thing that's happening, which is a model that's kind of in this area of what has been their moat, kind of these closed models, kind of falling flat, but then giving out some of the model side publicly, openly, and then opening the services business side of things. Now, I've drawn my own conclusions in terms of what some of those things signal and mean, but any initial reactions or thoughts that you've had as things have come out like that, Chris? I think, you know, Sam Altman is the CEO of OpenAI, and he's a curious individual. You know, he noted in January that maybe OpenAI had been, and I quote, "on the wrong side of history," close quote, when it comes to open sourcing technologies. But it was Sam who made those decisions. And I think what he's seeing now is the market is evolving. It's maturing as you would expect. You know, kind of the early phase of focusing on these frontier foundation models that were driving the hype for the last few years might be producing diminishing returns. And even though the GPT-5 model is more capable, and that's what I use for most of my stuff, maybe because of some of the nuances, for instance, the interface itself, the way it works, the way the models work, people were preferring the 4o model. There was quite a bit of personification of that model, I think, going on with the public. And I think OpenAI realized that, you know, there are these concerns, in addition to the fact that they had kind of left the services market to others. And so, you know, my belief is that they are starting to open source some of these models with their open weights for the purpose of supporting a solid foothold at, you know, kind of the premier end, the expensive end, of the services market. And I mean, I think that's the motivator right there. I think that's making sure that, with their competitors all having open source models, they can play in the space as well, and they can go in with their services organization and make money on services and point to their own open source models to be able to support that services business model. So maybe a little bit of a jaded answer from me, potentially, just having, like you, watched them over a number of years, month by month. But yeah, I would definitely say they're trying to lean in, recognizing that the business of AI is both expanding and maturing into that area. Yeah, I think if we combine this with the knowledge from the MIT report around kind of these use cases in enterprises failing, what we know at this point, and what I would kind of distill down as trends and insights that are backed now by various data points, is that generic AI, so the generic AI models, just getting access to a model, does not solve any kind of actual business use cases and problems and, you know, verticalized solutions that are needed. That's what we kind of learned from the MIT report. And this would be true whether your company has access to ChatGPT or Copilot or whatever models you have access to. These are generic tools. These are generic models. It's very hard to translate that into customized business solutions, right? And that is why, you know, insight one, from my perspective, is: just having access to a generic model or generic tools is not going to solve your business problem.
And that's partially why a lot of these POCs are failing. Now, OpenAI offers those generic models and tools, right? Which is really great on the consumer side. But enterprise-wise, the ones making the money, at least so far, it hasn't been OpenAI. They've been losing a ton of money. The ones making the money are Accenture, Deloitte, McKinsey, et cetera, the services organizations, right? Because really, how you kind of transform a company with AI and bring these models in and do something is by creating custom data integrations, creating these custom business solutions. And that is still really a services related thing, or it's at least a kind of customization related thing. There's data integrations there. So this is totally consistent for me with open sourcing models at the same time that they're creating the services side of the business, because essentially, from the business or enterprise side, there really is not a moat on the model builders' front. It doesn't matter, from my perspective at least, and of course, I'm biased. It doesn't matter if you're using GPT. It doesn't matter if you're using Claude. It doesn't matter if you're using Llama or DeepSeek or Qwen. Really doesn't matter. Any of those models can do perfectly great for your business use case. And I think that's true. I've seen it time and time again. What makes the difference is your combination of data, your combination of domain knowledge, integration with those models, and creating that customized solution. And either you're going to do that internally, or you're going to hire a services organization to do that, right? On the one front, you need software architects and developers, and, you know, even if they are using vibe coding tools, you will need that expertise. On the other side, you know, you can pay millions of dollars to one of these consultants or to OpenAI and their services business, etc. And again, it's a hard thing, because those resources are scarce, right? Which I think is why it is a good time if you're kind of providing that level of services around the AI engineering stuff. Yeah, I think you've hit the nail on the head. And I'll offer sort of a way of restating it with an analogy. You know, when you're going to have friends over and you want to have a magnificent dinner at your dinner party, you walk into the kitchen, and you may have a lot of great things to make stuff with. And some of those might be big expensive things that are raw materials. In our analogy, those things represent models and other software components. But there's some skill in putting that meal together, and going into the refrigerator and picking the right things out, and going into the pantry and picking the right things out, and putting them together according to a recipe that is your business objective, and understanding how to produce that final dinner, which is maybe a little bit different from the way your neighbor would do it, and maybe a little bit different from the way another friend would do it, to produce that fine meal that you're able to enjoy at the end of the day. And that meal is a bit unique, because in our analogy, your business is a bit unique. But it takes the skill. And, you know, we do expect technology to develop, and those refrigerators to be smart refrigerators, and other things to help in the kitchen. And that might be represented in our vibe coding thought. But we might not be all the way there yet.
So if you're kind of buying your ingredients and thinking, well, I don't really need to have great skill in the kitchen, because I'm sure that some of this technology that's coming into play will take care of that for me... maybe eventually, but I don't think we're quite there yet, is what we're seeing. And I think that report that we've been talking about has kind of provided some evidence of that fact. And so, yeah, there's still the need for nuance and complexity to be addressed, and the recognition that these commoditized models, whether they be closed source or open source, either way, it's going to take more than one, and you're going to need to have the recipe to make it all come together the way you're envisioning. So a lot of good lessons that hopefully might help out some of the managers and executives in these companies making some of these decisions going forward. Yeah, I love that analogy. And it fits so well, because you can develop that cooking expertise internally, or you can hire a professional chef into your house. It's going to be expensive, right? But you can do that. But it is a necessary component. So I love that analogy. I do want to highlight, we always try to highlight some learning opportunities for folks as they're coming out of this. Maybe you're motivated to not let your AI POC fail, and you want to kind of understand what it takes to build these solutions. There's a couple of things I want to highlight. One is, I'm really excited that we're having this Midwest AI Summit in November, November 13th in Indianapolis, and I'm helping organize that. It's going to be a really great time. One of the unique features about this event, different from other conferences, is we're going to have an AI engineering lounge, where you can actually sit down at a table with an AI engineer. Maybe you don't have that expertise in-house, but you don't want your POC to fail. You can actually sit down with an AI engineer and talk through that and maybe get some guidance. I haven't seen that at another event, so I'm pretty excited that we're doing that. And you can always, as I mentioned in previous episodes, go to practicalai.fm/webinars. There are some webinars there as well that might be good learning opportunities for you. That's awesome. And on the tail end, as we close out of learning opportunities, I just wanted to share one two-second thing here. My mother, she's in her mid-80s, and once upon a time was a computer science professor at Georgia Tech. She also happened to work for the same company I work for, Lockheed Martin, years ago. But she had retired and kind of moved out of the technology space. She is very aware of what I do in the space and our podcast and stuff. And she, in her mid-80s, reached out to me this weekend and said, I'm thinking about going back to school for AI, and maybe even into a PhD program or something like that. I don't know. And we talked about it for a while, and about starting small. She's into some Coursera courses now. And as we're thinking about learning and ramping up... you know, we've talked about learning recently on the show. We had a couple of episodes where we talked about how it's never too late. We had a congressman, who was not a spring chicken, not too long back, diving in. Incredibly inspirational.
And I want to say, if my mom, in her mid-80s and decades out of the computer science space, is willing to dive in and do technical work on Coursera courses, I would encourage all of you to reconsider. You're never too old. And I just wanted to leave that, as we talked about learning items, to say: go get it. The world's changing fast, and my mom in her mid-80s doesn't want to get left behind and wants to be on top of it. And I think it's a good thing for all of us to take some inspiration from that. Go do. That's awesome. Appreciate that perspective, Chris. It's been a fun conversation. Thanks for hopping on. Absolutely. All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner, Prediction Guard, for providing operational support for the show. Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
