
The State of Enterprise AI
The AI Daily Brief • Nathaniel Whittemore

What You'll Learn
- The Agentic AI Foundation was launched as an independent organization to foster innovation and ensure neutrality in agentic AI technologies like the Model Context Protocol.
- OpenAI released its first set of AI certification courses, called 'AI Foundations', to help develop practical AI skills in the workforce, in partnership with companies and universities.
- The U.S. Department of War launched a new AI platform called GenAI.mil to provide access to powerful AI models like Google's Gemini for government use cases.
- Enterprises are increasingly focused on scaling AI deployments in 2026, but many are facing challenges in breaking through 'AI plateaus' related to governance, data readiness, and process mapping.
- Common standards and interoperability are critical for the widespread adoption of enterprise AI, as companies realize the benefits of rallying around shared protocols rather than competing standards.
Episode Chapters
Introduction
Overview of the key news and discussions covered in the episode, including the launch of the Agentic AI Foundation and OpenAI's new certification courses.
Agentic AI Foundation
Details on the new industry initiative to establish open standards and ensure neutrality in agentic AI technologies.
OpenAI Certification Courses
Information on the release of OpenAI's 'AI Foundations' and 'ChatGPT Foundations' certification courses.
GenAI.mil - U.S. Military's New AI Platform
Overview of the U.S. Department of War's new AI platform for government use cases, including the involvement of Google's Gemini.
Scaling Enterprise AI Adoption
Discussion on the challenges enterprises face in moving beyond AI pilots and experiments to achieve scaled deployments, and the role of Superintelligent's 'Plateau Breaker' assessment.
AI Summary
This episode of the AI Daily Brief covers the latest news and developments in the enterprise AI space. It discusses the launch of the Agentic AI Foundation, a new industry initiative to establish open standards for AI systems. It also covers the release of OpenAI's new AI certification courses and the U.S. Department of War's unveiling of a new AI platform called GenAI.mil. The episode highlights the growing importance of enterprise AI adoption and the need for common standards and interoperability to drive widespread deployment.
Key Points
1. The Agentic AI Foundation was launched as an independent organization to foster innovation and ensure neutrality in agentic AI technologies like the Model Context Protocol.
2. OpenAI released its first set of AI certification courses, called 'AI Foundations', to help develop practical AI skills in the workforce, in partnership with companies and universities.
3. The U.S. Department of War launched a new AI platform called GenAI.mil to provide access to powerful AI models like Google's Gemini for government use cases.
4. Enterprises are increasingly focused on scaling AI deployments in 2026, but many are facing challenges in breaking through 'AI plateaus' related to governance, data readiness, and process mapping.
5. Common standards and interoperability are critical for the widespread adoption of enterprise AI, as companies realize the benefits of rallying around shared protocols rather than competing standards.
Topics Discussed
Enterprise AI adoption, AI standards and interoperability, AI certification and workforce development, Military and government use of AI, Overcoming challenges in scaling AI.
Frequently Asked Questions
What is "The State of Enterprise AI" about?
This episode of the AI Daily Brief covers the latest news and developments in the enterprise AI space. It discusses the launch of the Agentic AI Foundation, a new industry initiative to establish open standards for AI systems. It also covers the release of OpenAI's new AI certification courses and the U.S. Department of War's unveiling of a new AI platform called GenAI.mil. The episode highlights the growing importance of enterprise AI adoption and the need for common standards and interoperability to drive widespread deployment.
What topics are discussed in this episode?
This episode covers the following topics: Enterprise AI adoption, AI standards and interoperability, AI certification and workforce development, Military and government use of AI, Overcoming challenges in scaling AI.
What is key insight #1 from this episode?
The Agentic AI Foundation was launched as an independent organization to foster innovation and ensure neutrality in agentic AI technologies like the Model Context Protocol.
What is key insight #2 from this episode?
OpenAI released its first set of AI certification courses called 'AI Foundations' to help develop practical AI skills in the workforce, in partnership with companies and universities.
What is key insight #3 from this episode?
The U.S. Department of War launched a new AI platform called GenAI.mil to provide access to powerful AI models like Google's Gemini for government use cases.
What is key insight #4 from this episode?
Enterprises are increasingly focused on scaling AI deployments in 2026, but many are facing challenges in breaking through 'AI plateaus' related to governance, data readiness, and process mapping.
Who should listen to this episode?
This episode is recommended for anyone interested in Enterprise AI adoption, AI standards and interoperability, AI certification and workforce development, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
<p>Today’s episode breaks down new reports from OpenAI and Menlo Ventures that show enterprise AI adoption accelerating quickly, with coding emerging as the first true killer use case, reasoning models driving deeper workflow integration, and the gap between leaders and laggards widening as frontier firms compound their advantages. The conversation also looks at early agent deployments and what these trends signal for the 2026 boom-versus-bubble debate. In the headlines: Anthropic donates MCP as OpenAI, Anthropic, and Block form the Agentic AI Foundation, rumors swirl around GPT-5.2 and a new image model, OpenAI launches AI Foundations certifications, and the US military unveils its GenAI.mil</p><p><br></p><p><strong>Brought to you by:</strong></p><p>KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. 
<a href="https://www.kpmg.us/AIpodcasts">https://www.kpmg.us/AIpodcasts</a></p><p>Gemini - Build anything with Gemini 3 Pro in Google AI Studio - <a href="http://ai.studio/build">http://ai.studio/build</a></p><p>Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - <a href="https://rovo.com/">https://rovo.com/</a></p><p>AssemblyAI - The best way to build Voice AI apps - <a href="https://www.assemblyai.com/brief">https://www.assemblyai.com/brief</a></p><p>LandfallIP - AI to Navigate the Patent Process - <a href="https://landfallip.com/">https://landfallip.com/</a></p><p>Blitzy.com - Go to <a href="https://blitzy.com/">https://blitzy.com/</a> to build enterprise software in days, not months</p><p>Robots & Pencils - Cloud-native AI solutions that power results - <a href="https://robotsandpencils.com/">https://robotsandpencils.com/</a></p><p>The Agent Readiness Audit from Superintelligent - Go to <a href="https://besuper.ai/">https://besuper.ai/</a> to request your company's agent readiness score.</p><p>The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: <a href="https://pod.link/1680633614">https://pod.link/1680633614</a></p><p><strong>Interested in sponsoring the show? </strong>sponsors@aidailybrief.ai</p><p><br></p>
Full Transcript
This podcast is sponsored by Google. Hey folks, I'm Amar, product and design lead at Google DeepMind. Have you ever wanted to build an app for yourself, your friends, or finally launch that side project you've been dreaming about? Now you can bring any idea to life, no coding background required, with Gemini 3 in Google AI Studio. It's called Vibe Coding and we're making it dead simple. Just describe your app and Gemini will wire up the right models for you so you can focus on your creative vision. Head to ai.studio slash build to create your first app. Today on the AI Daily Brief, two different studies on the state of enterprise AI. Before that, in the headlines, a group of companies come together to establish the Agentic AI Foundation, and Anthropic donates the model context protocol. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Hello, friends. Quick announcements before we dive in. First of all, thank you to today's sponsors, Gemini, Superintelligent, Rovo, Robots and Pencils, and Blitzy. To get an ad-free version of the show, go to patreon.com slash AI Daily Brief, or you can subscribe on Apple Podcasts. And if you are interested in sponsoring the show, send us a note at sponsors at aidailybrief.ai. Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. Now, our title lead story for the headlines is about Anthropic donating the MCP to the newly created Agentic AI Foundation. But I did want to do a quick check-in on the GPT 5.2 waiting room, given that this is expected to be the biggest news of the week. So initially, prediction markets, by which I think at this point we can safely say OpenAI Insiders, had initially pointed to Tuesday as the release date. It now appears that Thursday is the day. However, outside of just polymarket predictions, we're also starting to see rumors of a new GPT image model being tested on Design Arena and LM Arena. 
The model is codenamed Chestnut and Hazelnut across the two platforms and seems pretty strong. AI developer Kan wrote: key observations: world knowledge similar to Nano Banana Pro; can generate celebrity selfies with very similar quality to Nano Banana Pro; can write code in images very well. Others thought these new rumored models still had more of an AI sheen to them, and some even noticed that the distinctive yellow tinge of GPT image models, affectionately known by some, apologies in advance for the gross name, as the Piss Filter, is still there. While the rumors were swirling on X, we also got a full write-up of the Code Red plans from the Wall Street Journal. Altman told the Journal that two models are planned: Garlic, or GPT 5.2, and another model in January. GPT 5.2 will deliver a boost in capabilities for AI coders and enterprise customers, as well as, hopefully, generally build some momentum. The January model is intended to have better images and personality, and the Code Red will supposedly end upon its release. Take all that together and it means that for eight weeks, all work on Sora development, the focus on AGI, all of that stuff has been put aside in favor of improving the ChatGPT experience. Altman framed the issue as existential, stating that for the company to survive, they may need to postpone the quest for AGI and give people what they want in the here and now. I remain, as always, very excited to see what the new model offers. But for now, let's shift to this new Agentic AI Foundation. So the AAIF will be a directed fund that's held by the Linux Foundation, ensuring its independence from any single AI company. OpenAI, Anthropic, and fintech company Block all put aside any differences to become co-founders of this new foundation, and each made their own founding contributions. Block donated their Goose agent framework, while OpenAI donated the AGENTS.md instruction format.
Still, the big one that caught notice was Anthropic donating the Model Context Protocol standard. In announcing the move, Anthropic wrote: bringing these and future projects under the AAIF will foster innovation across the agentic AI ecosystem and ensure these foundational technologies remain neutral, open, and community-driven. OpenAI engineer Nick Cooper said that the neutral organization was necessary to ensure that agents and systems work together without competing standards. He commented: we need multiple protocols to negotiate, communicate, and work together to deliver value for people. And that sort of openness and communication is why it's not ever going to be one provider, one host, one company. Now, this is something that I've talked about a lot this year. One of the things that has been really interesting about the competitive landscape of AI is that pretty much all of these companies figured out quite quickly that it was to their benefit to rally around the standards that people seemed to be adopting rather than try to each have their own standards. There was a moment back earlier in the year when it seemed like maybe OpenAI would offer a competitive version of MCP, but when they decided not to, and when Google embraced it as well, it really showed that, as intense as the competition may be between these companies, they do still get that common standards raise the tide, and a rising tide lifts all boats. Still, having all of this embedded in an independent foundation certainly institutionalizes that spirit of cooperation in a more durable way. Now, as for the new AAIF, it will be a distinct entity from the rest of the Linux Foundation, allowing for a governance structure to deal specifically with the development of agentic AI systems. Not only will there be an independent steering committee for each standard, but the AAIF will also deal with overarching issues like agent safety and interoperability.
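For a concrete sense of what a shared protocol like MCP actually standardizes, here is a minimal sketch of the JSON-RPC 2.0 message shapes MCP uses when a client asks a server to invoke a tool. The tool name and arguments below are hypothetical, invented for illustration; only the envelope (the `tools/call` method and the `content`/`isError` result fields) reflects the published spec.

```python
import json

# MCP messages ride on JSON-RPC 2.0. A client invokes a server-side tool
# with the "tools/call" method. The tool name and arguments here are
# hypothetical examples, not part of any real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",  # hypothetical tool
        "arguments": {"query": "open bugs", "limit": 5},
    },
}

# A successful response echoes the request id and returns content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Found 5 open bugs."}],
        "isError": False,
    },
}

# Because both sides agree on this envelope, any MCP client can talk to
# any MCP server without bespoke integration code.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])  # tools/call
```

This shared envelope is exactly the kind of neutrality the foundation is meant to steward: the interesting work lives in the tools and servers, not in per-vendor wire formats.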
Said Anthropic's Chief Product Officer Mike Krieger: MCP went from internal project to industry standard in a year. Now it gets the long-term stewardship it deserves. Now moving on, while we didn't get GPT 5.2 yesterday, we did get the first ever OpenAI certification courses. Back in September, OpenAI announced their plans to introduce formal certifications and a related jobs platform to help develop AI skills in the workforce. Now the first set of courses are here, and they're called AI Foundations. OpenAI said that the courses are designed to help users learn core, practical AI skills that apply across roles and industries. The courses are presented in collaboration with Coursera and can be accessed directly in ChatGPT through the integrated Coursera app. The courses are being deployed directly in the enterprise through partnerships with Walmart, John Deere, Lowe's, BCG, Accenture, and many more. The courses are also being piloted in universities, with Arizona State and California State participating in the first trials. In addition to AI Foundations, OpenAI is also launching a ChatGPT Foundations course for teachers, dealing with the essentials of how ChatGPT works, how to navigate and personalize the tool, and how to apply it to real classroom and administrative tasks. Lastly today, another one that could easily be a full main story, so we'll just touch the highlights. The U.S. Department of War has unveiled a new AI platform for the military. The platform is called GenAI.mil and will host various AI services, with the first being Google's Gemini for Government. In a press release, the department said: This initiative cultivates an AI-first workforce, leveraging generative AI capabilities to create a more efficient and battle-ready enterprise. Announcing the new platform, Secretary of War Pete Hegseth said: The future of American warfare is here and it's spelled AI.
This platform puts the world's most powerful frontier AI models directly into the hands of every American warrior. At the click of a button, AI models on GenAI.mil can be utilized to conduct deep research, format documents, and even analyze video or imagery at unprecedented speed. We will continue to aggressively field the world's best technology to make our fighting force more lethal than ever before. And all of it is American. Now, Google, for their part, was far less oorah in their description of the platform. They gave example use cases like summarizing policy handbooks, generating compliance checklists, extracting key terms from statements of work, and creating risk assessments for operational planning. Indeed, you can almost feel them trying to find the most benign use cases that have nothing to do with the lethality of the American military. Reading the Google announcement, it seems, from their side at least, to be more about giving the military access to LLMs for white-collar work among service members. Google even underscored that the system is only allowed to be used for unclassified business processes. Now, there is a lot cooking when it comes to the U.S. military and their use of AI, although I think we'll have to save that for another episode. In the press release about GenAI.mil, Emil Michael, the Undersecretary for Research and Engineering, said: There is no prize for second place in the global race for AI dominance. We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next manifest destiny, and we're ensuring that we dominate this new frontier. So, friends, a nice, uncontroversial use of AI to close out our headlines. For now, though, that is going to do it for the headlines. Next up, the main episode. Today's episode is brought to you by my company, Superintelligent. Superintelligent is an AI planning platform.
And right now, as we head into 2026, the big theme that we're seeing among the enterprises that we work with is a real determination to make 2026 a year of scaled AI deployments, not just more pilots and experiments. However, many of our partners are stuck on some AI plateau. It might be issues of governance. It might be issues of data readiness. It might be issues of process mapping. Whatever the case, we're launching a new type of assessment called Plateau Breaker that, as you probably guessed from that name, is about breaking through AI plateaus. We'll deploy voice agents to collect information and diagnose what the real bottlenecks are that are keeping you on that plateau. From there, we put together a blueprint and an action plan that helps you move right through that plateau into full-scale deployment and real ROI. If you're interested in learning more about Plateau Breaker, shoot us a note, contact@besuper.ai, with Plateau in the subject line. Meet Rovo, your AI-powered teammate. Rovo unleashes the potential of your team with AI-powered search, chat, and agents, or build your own agent with Studio. Rovo is powered by your organization's knowledge and lives on Atlassian's trusted and secure platform, so it's always working in the context of your work. Connect Rovo to your favorite SaaS app so no knowledge gets left behind. Rovo runs on Atlassian's teamwork graph. AI isn't a one-off project. It's a partnership that has to evolve as the technology does. Robots and Pencils work side-by-side with clients to bring practical AI into every phase: automation, personalization, decision support, and optimization. They prove what works through applied experimentation and build systems that amplify human potential. As an AWS-certified partner with Global Delivery Centers, Robots and Pencils combines reach with high-touch service. Where others hand off, they stay engaged. Because partnership isn't a project plan, it's a commitment.
As AI advances, so will their solutions. That's long-term value. Progress starts with the right partner. Start with Robots and Pencils at robotsandpencils.com slash AI Daily Brief. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. Blitzy delivers 80% plus of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Visit blitzy.com and press get a demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native. Welcome back to the AI Daily Brief. Over the past few days, we have gotten a slew of reports that deal in some way with the state of AI in the business world. Now, interestingly, over the course of 2025, especially the back half, this conversation has gotten a lot more significant. Whereas before it mattered mostly to the people who were in enterprises and trying to figure out where they stood relative to AI adoption, now there is a whole additional dimension based on the broader questions of boom versus bubble that dominate market conversation.
Basically, as I've said before, there's really no way to prove if it is a bubble, in the sense that the things that would show us that, for example, OpenAI isn't going to be able to meet its $1.4 trillion in commitments are so far in the future that it leaves investors looking for evidence now of why those things might happen, because the things that would be the bubble popping can't happen yet. And that's where enterprise adoption becomes really important. It is one of the areas that can both, A, continue to show growth and boom-type behavior, and B, in how much opportunity remains, show that there is still big revenue room to grow. The two reports we're looking at today are OpenAI's State of Enterprise AI and Menlo Ventures' third annual State of Generative AI in the Enterprise. Together, they tell some pretty common stories: adoption is up, people are reporting ROI, coding is clearly the first killer app, agents still require a lot of work, and the gap between leaders and laggards is growing. So let's look at the OpenAI report first. From a methodology perspective, this analysis draws from, one, as you would imagine, real-world usage data from OpenAI enterprise customers, and two, a survey that the company did of 9,000 workers across almost 100 companies that documented their patterns of AI adoption. Story one, to the shock of no one, at least no one listening to this episode, is that adoption is growing in major ways. ChatGPT Enterprise seats have increased 900% year over year. Since November of last year, weekly enterprise messages have grown approximately 800%. The average worker is now sending 30% more messages than they were a year ago.
And importantly, this underscores another part of the story, which is that it's not just that usage is more frequent; it's also deeper. While we will see towards the end of this episode that we still have a long way to go when it comes to making highly functional and truly autonomous agents, there is clearly a shift away from very surface-level work towards much deeper work and greater levels of automation and integration into core workflows. Indeed, while enterprise messages have grown 8x in aggregate, the number of weekly users of custom GPTs and projects is up 19x. So why does that matter? Well, custom GPTs and projects are basically alternative interfaces for interacting with ChatGPT that can be configured with different types of instructions, custom actions, specific knowledge bases and context, which means that, especially in an enterprise context, they're going to be frequently used for common, repeatable, multi-step tasks. So the fact that this is growing more than twice as fast as overall usage suggests that enterprise users are not just, like I said, getting broader, but also going deeper. In recent months, they wrote, about 20% of all enterprise messages were processed via one of these custom GPTs or projects. When it comes to different industries, the short of it is that they are all up, although some are up more than others. The median sector right now expanded 6x year over year, but even the slowest growing sector, which for OpenAI was education services, was still up 2x year over year. The fastest growing sectors overall are technology, healthcare, and manufacturing, at 11x, 8x, and 7x growth, respectively. In another episode recently, where we talked about the OpenRouter study, one of their big reflections was the dramatic shift away from non-reasoning tokens towards reasoning tokens. And sure enough, OpenAI saw that as well. And in fact, these numbers are maybe even more dramatic.
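The description of custom GPTs and projects above — standing instructions, custom actions, and a knowledge base packaged for a repeatable task — can be made concrete with a sketch. To be clear, this is not OpenAI's actual API; every name here is invented to illustrate what such a configuration bundles.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of what a custom GPT / project bundles together.
# None of these names come from OpenAI's API; they only illustrate the
# idea of packaging instructions, knowledge, and actions for reuse.
@dataclass
class CustomAssistantConfig:
    name: str
    instructions: str                                    # standing system prompt
    knowledge_files: list = field(default_factory=list)  # grounding documents
    actions: list = field(default_factory=list)          # callable tools / APIs

quarterly_report_bot = CustomAssistantConfig(
    name="Quarterly Report Helper",
    instructions="Summarize sales data using the attached templates.",
    knowledge_files=["report_template.docx", "glossary.pdf"],
    actions=["fetch_crm_data"],  # hypothetical action name
)

print(quarterly_report_bot.name)  # Quarterly Report Helper
```

The point of the sketch is the reuse: the same configuration serves a common, multi-step workflow over and over, which is why growth in this category signals deeper adoption than one-off chat prompts.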
In the past 12 months, they found that average reasoning token consumption per organization is up 320x. Now, to be fair, that was from a starting point of near zero a year ago, because reasoning models were at this time just coming online. But still, I think a big part of the story of why we've seen such massive adoption this year is that these reasoning models allow for more complex and sophisticated work. But what about whether people are actually getting value from this? I am, of course, swimming in ROI data right now. For those of you who contributed to the AI ROI benchmarking study, by the way, appreciate your patience. We ended up with over 5,000 use cases being shared, so we're taking a little bit longer to do even more analysis than we had anticipated, and that'll be coming next week. In any case, as a little bit of a preview, I will say that what we found is a significant amount of self-reported ROI, and that is exactly what you get from the OpenAI study as well. They wrote that 75% of those surveyed workers reported that using AI has improved either the speed or quality of their output. Users of ChatGPT Enterprise are saving between 40 and 60 minutes a day, with certain categories like data science, engineering, and communication saving more like 60 to 80 minutes per day. 87% of IT workers report faster IT resolution. 85% of marketing and product users report faster campaign execution. 75% of HR professionals report improved employee engagement. And 73% of engineers report faster code delivery, which I think is important not just for the magnitude of the numbers, but for the breadth of impact across the organization. Now, if you are a regular listener, you'll know that one of the things that really interests me is when AI moves from an efficiency technology, time savings, cost savings, etc., to an opportunity technology, where people are doing new things that weren't possible before. And that shows up in a major way in this OpenAI study.
75% of workers they surveyed reported being able to complete tasks they previously could not perform. Those tasks include things like spreadsheet analysis and automation, technical tool development, custom GPT or agent design, programming support, and code review. And the biggest part of this, which isn't surprising even though it is still profound, is the rise of coding-related messages in areas outside of engineering, IT, and research. They found that among ChatGPT Enterprise users, those messages, which are again outside of traditional technical functions, grew an average of 36% over the past six months. And frankly, I think those numbers are a little soft, because, as you know, if you are a non-technical vibe coder, what percentage of your vibe coding are you doing in the main ChatGPT interface versus in some other tool or IDE or platform? Now, of course, in the enterprise, there are going to be more restrictions on where you can engage, so it's not a one-to-one comparison to how you might do things personally. But still, what I'm saying is that I think that that 36% growth in non-technical coding-related messages definitely represents the bottom end of what that growth actually is. Now, one of the most interesting things that I wanted to harp on here, because it's a theme that I see coming up over and over again, and I'm increasingly convinced is going to shape a lot of the beginning of 2026, is that rather than everyone rising at the same rate, we're seeing compounding growth from the people who are getting out ahead. In other words, the people and organizations who are using AI the most are getting more value from it and translating that value back into faster growth, which allows them to get farther ahead relative to their peers. The gap between leaders and laggards is increasing because what constitutes that gap is making the leaders grow faster than the laggards. There's some evidence of this on both the individual and the organization level in this report.
Basically, OpenAI found, when it came to individuals, that the people who consumed the most intelligence, as measured by credits used, reported higher time savings. The group that saved over 10 hours a week used eight times more credits than the group who reported saving zero hours a week. Frontier workers, who OpenAI defines as in the 95th percentile of adoption intensity, generate six times as many messages as the median worker. What they use it for is also different. Frontier users send eight times as many creative media-related messages, nine times as many information-gathering messages, 10 times as many analysis and calculation messages, 11 times as many writing and communication messages, and 17 times as many coding-related messages. At the firm level, frontier firms, which again are in the 95th percentile, generate twice as many messages per seat overall, and seven times as many messages to those custom GPTs where a lot of contextual knowledge and workflows live. OpenAI writes: these firms invest systematically in the infrastructure and operating models required to embed AI as a core organizational capability rather than a peripheral productivity tool. So that's the story from this OpenAI report. Let's move over into the Menlo report. As I said, the background noise for this one is definitely the boom versus bubble conversation. And in their minds, everything that they found in this third annual study points strongly to boom territory. The Menlo study comes from a survey of 495 U.S. enterprise AI decision makers, meaning technical leaders, VPs of engineering and product, and C-suite executives, that was conducted in partnership with an independent research firm, with the interviews happening last month, between November 7th and 25th, so very recently. Once again, big conclusion number one is that this thing is growing fast.
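The frontier-user multiples from the OpenAI report can be read as per-category scaling factors on a median worker's message mix. Here is a small illustrative calculation; the 100-messages-per-week median baseline is an assumption for the sake of the arithmetic, not a figure from the report, which gives only the ratios.

```python
# Per-category multiples OpenAI reports for frontier (95th percentile)
# users, relative to the median worker.
frontier_multiples = {
    "creative_media": 8,
    "information_gathering": 9,
    "analysis_calculation": 10,
    "writing_communication": 11,
    "coding": 17,
}

# Assume a hypothetical median of 100 messages/week per category
# (illustrative only; the report does not give absolute counts).
median_weekly = 100
frontier_weekly = {cat: mult * median_weekly
                   for cat, mult in frontier_multiples.items()}

print(frontier_weekly["coding"])  # 1700
```

The spread in the multiples is the interesting part: coding scales hardest (17x), which is consistent with the report's broader finding that coding is the first killer use case.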
Menlo calls it the fastest-scaling software category in history, which at $37 billion in spend this year captures 6% of the $300 billion global SaaS market just three years after ChatGPT was released. Now, as one small quibble, which isn't really a quibble with Menlo but more just something that I think is important for our own understanding: on the one hand, we kind of have to call AI a software category, but I think that that not only really undersells what it is, I also think it's quite distracting for these enterprise users who are trying to figure out how to make sense of this. A software category is something that can fit comfortably on a Gartner Magic Quadrant. AI doesn't. It is a total systems change enabled by a broad and diverse category of software that should ultimately have a significantly larger total addressable market than the SaaS market, given that it's going to remake so much of what gets done now, not just replace existing software. The report also once again validated the idea of coding as GenAI's first killer use case. According to Menlo, enterprises spent $4 billion on AI coding this year, which was 55% of what they call overall departmental AI spend. By way of comparison, marketing AI spend was just 9%. The individual categories of AI coding are all up as well. Code completion is up 5.1x, AI app builders are up 10x, and code agents are up 36.7x. In many ways, it turns out that 2025 was the year of AI agents. It was just coding agents, not general agents. Now, related to coding being the killer app, Menlo argues that at this point, Anthropic is fairly definitively the enterprise AI leader. They estimate that Anthropic now earns 40% of enterprise LLM spend, which was up from 24% in 2024 and 12% in 2023. Google also saw a big jump, going from 7% in 2023 to 21% in 2025. And all this happened at the expense a little bit of Meta, which went from 16% in 2023 to 8% in 2025, and a lot of OpenAI, which went from 50% in 2023 down to 27% in 2025.
Now, this is well-trodden territory. It's why, among other reasons, OpenAI is as aggressively focused on Codex and their coding models as they are. By the way, Menlo also puts Anthropic's 2025 coding market share at 54%, compared to OpenAI's 21% and Google's 11%. In terms of what exactly companies were spending on, the big trend that Menlo noticed here was the rise of the application layer, which they clocked at $19 billion compared to infrastructure's $18 billion. Horizontal AI was up 5.3x, departmental AI up 4.1x, and vertical AI up 2.9x.

And interestingly, alongside the growth in application-layer AI, startups also made up a lot of ground versus incumbents. Now, one of the interesting phenomena about AI as compared to some previous technology waves is that the incumbents have had many more advantages than they are used to when it comes to a new technology. Usually the way these things work is startups come in and innovate, taking advantage of their nimbleness and ability to iterate more quickly, and eventually some gain traction and become the incumbents of the future. However, given how deeply integrated AI is into the enterprise ecosystem, the entrenched distribution, the data moats, the enterprise relationships, the scaled sales teams, all of that puts points in the incumbents' favor. And yet, between '24 and '25, Menlo saw a flippening. In 2024, 64% of enterprises said they preferred buying from incumbents. But this year, in actual practice, startups captured about $2 in revenue for every $1 earned by incumbents in the application layer, with startups commanding 63% of the market overall. Part of that had to do with where companies were focused, the success of Cursor versus GitHub Copilot, and in sales, the success of AI-native startups like Clay and Actively. Now, on a similar theme of how enterprises are engaging with AI, the full build-versus-buy boomerang has finally completed.
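The revenue-ratio and market-share figures above are two ways of expressing the same quantity. A quick sketch of the conversion, using my own arithmetic rather than anything from the report:

```python
# Converting a revenue ratio into a market share, and back.
# These conversions are back-of-the-envelope arithmetic, not report figures.

def ratio_to_share(r: float) -> float:
    """A $r-for-every-$1 revenue ratio implies a share of r / (r + 1)."""
    return r / (r + 1)

def share_to_ratio(s: float) -> float:
    """A market share s implies a revenue ratio of s / (1 - s) to 1."""
    return s / (1 - s)

# "$2 for every $1" in the application layer implies roughly a 67% share there.
print(f"{ratio_to_share(2):.0%}")     # 67%
# The 63% overall share quoted above corresponds to about $1.70 per $1.
print(f"{share_to_ratio(0.63):.2f}")  # 1.70
```

Read that way, the two numbers in the transcript fit together: the 2-to-1 ratio describes the application layer specifically, while 63% is startups' share of the broader market.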
Back in 2023's enterprise survey, Menlo found that 80% of enterprises were buying software versus just 20% building, but that shifted dramatically in 2024, when they found 47% of solutions being developed in-house, with 53% being sourced from vendors. Now, I argued at the time that what that reflected was, one, a growing confidence of enterprises in engaging with AI, but two, the immaturity of vertical, departmental, and functional-level startups, which I thought was going to change pretty quickly. It just seemed to me that even with increased confidence, and even with the decreased cost of building software, once highly focused application-layer startups came online for key use cases, that was going to boomerang back to about that 80-20 split. Sure enough, in this study, they found 76% of AI use cases being purchased rather than built internally.

This does not mean that enterprises aren't engaging with and excited to build certain categories of software. In fact, I would argue that the buy-build paradigm is the blurriest it's ever been when it comes to AI. There's really nothing that's truly off the shelf. There is always going to be some amount of integration, wiring up the data, permissioning, governance, that's going to involve technical work from the enterprise. In fact, that's one of the things that's slowing adoption. But yes, in general, I think that this three-quarters or four-fifths split between buying and building is a little bit more what I would expect going forward.

Now, two more things that I want to talk about quickly before we get out of here. One is a specific follow-up to what we discussed in the OpenRouter study that found such growing adoption of Chinese models. That is nowhere to be seen among enterprises. Open-source LLMs in general are down from 19% in '24 to 11% today. Menlo speculates that, in fact, part of that is the stagnation of Meta's Llama models.
Enterprises are very clearly wary of Chinese open-source models, which account for just 10% of enterprise open-source usage, which means just 1% of total LLM API usage overall. Now, that is of course very different from what we're seeing across developers. Frankly, this is one that I don't expect to change all that much, especially with all of the closed-source model providers working on ever more performant, smaller, cheaper models. I just think that the morass of dealing with China-originated models isn't going to be worth it for most companies.

Lastly, as much excitement as there is about the future of agents, right now that is not where most companies are. Copilots represent 10 times as much enterprise spend as agents do. And part of that is because, as Menlo puts it, a modern AI stack is still in development. They point out that overall, AI architectures remain surprisingly simple, even in production. They found that only 16% of enterprise deployments qualify as true agentic systems, and even those are relatively simple. 39% are fixed-sequence workflows, as opposed to just 8% that are, for example, multi-agent systems.

Now, as I was prepping this, I noticed that Anthropic had dropped a new report about how enterprises are building AI agents in 2026, but I think we're going to have to save that for a deeper dive on the state of agents in the enterprise. For now, that is the story of enterprise AI, at least through the lens of this OpenAI study and the Menlo study. Over the next couple of weeks, I'll be doing a number of episodes trying to turn this into some practical ideas about what the future might look like. For now, that's going to do it for the AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.