
Why the AI Bubble Debate is Useless
The AI Daily Brief • Nathaniel Whittemore

What You'll Learn
- The White House's planned executive order on AI regulation has been paused due to pushback from Republican lawmakers, who prefer a legislative compromise over executive action.
- Corporate insurers are seeking to exclude AI risk from their policies, citing the 'black box' nature of large language models and the potential for systemic, correlated losses.
- Google needs to rapidly expand its AI infrastructure, with a goal to double its compute capacity every six months over the next 4-5 years to keep up with demand.
- OpenAI has hired over 40 people from Apple's hardware engineering team in the past month, drawing talent from nearly every relevant Apple department.
- The AI startup Sierra has reached $100 million in annual recurring revenue, validating an enterprise-focused approach to AI deployment.
Episode Chapters
Introduction
The episode introduces the main topics, including the debate around the AI bubble, the White House's paused executive order, and other headlines from across the AI industry.
White House AI Executive Order Paused
The episode discusses the White House's planned executive order on AI regulation, which has been paused due to pushback from Republican lawmakers.
Insurers Seek to Exclude AI Risk
The episode explores the concerns of corporate insurers about underwriting the risks of using large language models and the potential for systemic losses.
Google's AI Infrastructure Expansion
The episode discusses Google's need to rapidly expand its AI infrastructure to keep up with the growing demand for AI compute.
OpenAI's Talent War with Apple
The episode covers OpenAI's aggressive hiring of talent from Apple's hardware engineering team, drawing from nearly every relevant department.
Sierra Reaches $100M ARR
The episode highlights the success of the AI startup Sierra, which has reached $100 million in annual recurring revenue within seven quarters.
AI Summary
This episode of the AI Daily Brief discusses the debate around the AI bubble, the White House's paused executive order on AI regulation, the insurance industry's concerns about AI risks, Google's need to rapidly expand its AI infrastructure, and the ongoing talent war between OpenAI and Apple. The episode highlights the shifting political landscape around AI policy, the challenges insurers face in underwriting AI risks, and the intense competition in the AI infrastructure space.
Key Points
1. The White House's planned executive order on AI regulation has been paused due to pushback from Republican lawmakers, who prefer a legislative compromise over executive action.
2. Corporate insurers are seeking to exclude AI risk from their policies, citing the 'black box' nature of large language models and the potential for systemic, correlated losses.
3. Google needs to rapidly expand its AI infrastructure, with a goal to double its compute capacity every six months over the next 4-5 years to keep up with demand.
4. OpenAI has hired over 40 people from Apple's hardware engineering team in the past month, drawing talent from nearly every relevant Apple department.
5. The AI startup Sierra has reached $100 million in annual recurring revenue, validating an enterprise-focused approach to AI deployment.
Topics Discussed
AI regulation, AI insurance, AI infrastructure, AI talent war, Enterprise AI deployment
Frequently Asked Questions
What is "Why the AI Bubble Debate is Useless" about?
This episode of the AI Daily Brief discusses the debate around the AI bubble, the White House's paused executive order on AI regulation, the insurance industry's concerns about AI risks, Google's need to rapidly expand its AI infrastructure, and the ongoing talent war between OpenAI and Apple. The episode highlights the shifting political landscape around AI policy, the challenges insurers face in underwriting AI risks, and the intense competition in the AI infrastructure space.
What topics are discussed in this episode?
This episode covers the following topics: AI regulation, AI insurance, AI infrastructure, AI talent war, Enterprise AI deployment.
What is key insight #1 from this episode?
The White House's planned executive order on AI regulation has been paused due to pushback from Republican lawmakers, who prefer a legislative compromise over executive action.
What is key insight #2 from this episode?
Corporate insurers are seeking to exclude AI risk from their policies, citing the 'black box' nature of large language models and the potential for systemic, correlated losses.
What is key insight #3 from this episode?
Google needs to rapidly expand its AI infrastructure, with a goal to double its compute capacity every six months over the next 4-5 years to keep up with demand.
What is key insight #4 from this episode?
OpenAI has hired over 40 people from Apple's hardware engineering team in the past month, drawing talent from nearly every relevant Apple department.
Who should listen to this episode?
This episode is recommended for anyone interested in AI regulation, AI insurance, AI infrastructure, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
This episode argues that the AI bubble conversation has become one of the least helpful frames for understanding what actually matters in AI (at least if you're not an investor). It's a sentiment-driven market narrative shaped by macro pressure, uncertainty, and impossible long-range predictions, none of which tell operators anything about how to use AI or plan for it. The real signals come from adoption patterns, financing structures, and the shifting economic context around AI infrastructure. Plus: headlines on the paused White House EO, insurers excluding AI risk, Google's compute expansion, OpenAI's Apple talent drain, and Sierra's rapid growth.

Brought to you by:
- KPMG - Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
- Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
- AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
- Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
- Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
- The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
Full Transcript
Today on the AI Daily Brief, why the AI bubble debate is, for most of us, pretty useless. Before that in the headlines, that White House AI executive order gets paused. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors, KPMG, Blitzy, Rovo, and Robots and Pencils. To get an ad-free version of the show, go to patreon.com slash AI Daily Brief, or you can subscribe on Apple Podcasts. And if you're interested in sponsoring the show, send us a note at sponsors at aidailybrief.ai. Lastly, today is the last day that you will hear me yapping about the AI ROI benchmarking study, at least until we have the results. I'm going to close this survey up at the end of the day on Tuesday, so we can dig into all the numbers. So last chance to get in your use cases and get the full readout; you can find it at roisurvey.ai.

Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. Today was supposed to be the day that we were getting a big White House executive order on AI. However, the plans for that executive order, including its approach to preempting state AI laws, have been scuttled as the White House faces pushback from Republican lawmakers. Last Wednesday, President Trump posted on Truth Social, investment in AI is helping make the U.S. economy the hottest in the world, but overregulation by the states is threatening to undermine this growth engine. We must have one federal standard instead of a patchwork of 50 state regulatory regimes. Now, a draft of the executive order was later leaked to the press and disclosed some pretty heavy-handed tactics. The draft order, for example, instructed the Justice Department to set up an entire task force for suing individual states over the constitutional validity of their AI laws. There was also talk of withholding national broadband funding to states that had passed their own regulations. On Friday, Reuters reported that the executive order had been put on hold, citing sources but not having many details. Washington trade paper Punchbowl News had the scoop later in the day. They wrote that lawmakers were looking to, quote, negotiate a legislative compromise rather than have the White House address the situation by executive order. One solution being pushed by House Majority Leader Steve Scalise is to insert a provision into the must-pass National Defense Authorization Act. Some lawmakers went on the record to criticize the administration's approach to the issue. California Republican Jay Obernolte said, I don't think the executive branch has the authority to enforce preemption on the states. If they've found some legal angle, I haven't heard about it. Thom Tillis, who cast the deciding vote against preemption over the summer, said he'd, quote, rather do it through the law rather than executive order, saying that an executive order would not, quote, give us long-term certainty. Now, this is a little bit deeper than we normally get into the political jostling in D.C., but the chain of events does seem to suggest a shift in power and perception around AI. Until now, the White House has enjoyed a lot of latitude to dictate policy direction on AI. However, now we're seeing multiple Republicans breaking ranks and doing so publicly.
And it's very clear that Republicans don't like the idea of their White House being involved in multiple lawsuits against the states to block AI protections during an election year. Politico's Friday newsletter asserted that voters are ready to turn against the GOP on this issue. Tim Wu, a Columbia professor and Biden-era tech policy advisor, argued that the politically savvy move coming into the midterms is to come out in favor of AI regulations. He said, I don't think the public is too excited about losing their jobs to an army of robots. It is very clear to me that the political winds are shifting on this issue, and I sort of think the midterms are going to be a bit gruesome.

Moving on, corporate insurers are seeking to exclude AI risk from their policies. The Financial Times reports that AIG, Great American, and W.R. Berkley have asked regulators for permission to offer policies that exclude AI risk. So what is actually going on here? There are a lot of nuanced issues. When it comes to underwriting the risks of using major LLMs, Dennis Bertram, the head of cyber insurance for Europe at specialty insurer Mosaic, commented, it's too much of a black box. Rajiv Dattani, the co-founder of an insurance startup called the Artificial Intelligence Underwriting Company, says no one knows who's liable if things go wrong. A handful of corporate liability claims have been brought so far and suggest that insurers could see a lot of exposure. A solar company called Wolf River Electric recently sued Google for $110 million claiming defamation; Google's AI Overviews feature had falsely claimed the company was under investigation by the Minnesota Attorney General. Last year, a tribunal ordered Air Canada to make good on a chatbot's offer of full refunds for travel that had actually taken place. Now, the damages were only a few hundred dollars, but the case highlighted the massive potential exposure if chatbots go haywire. For insurers, the issue isn't so much about idiosyncratic problems and one-off claims. Aon's head of cyber, Kevin Kalinich, said insurers can afford to pay claims in the hundreds of millions for isolated losses. But what they can't afford, he continued, is if an AI provider makes a mistake that ends up as a thousand or ten thousand losses, a systematic, correlated, aggregated risk.

Moving over into the wild world of compute, Google needs to rapidly expand their AI infrastructure to keep up with demand. At an all-hands meeting at the beginning of the month, Amin Vahdat, a VP at Google Cloud, addressed the company on the topic. He remarked, the competition in AI infrastructure is the most critical and also the most expensive part of the AI race. A slide deck viewed by CNBC included a slide titled AI Compute Demand, which read, we must double every six months, the next 1000x in four to five years. The meeting came on the same week that Google reported earnings, forecasting a jump in 2025 capex for the second time this year. Now, unlike Meta and Microsoft, the market responded positively to Google's earnings. Many analysts concluded that Google's revenue growth was strong enough to support the more ambitious target. The reveal here was that behind closed doors, this year's infrastructure buildout is a tiny sliver of what's expected over the coming years. Vahdat told employees that Google's job is of course to build this infrastructure, but it's not to outspend the competition necessarily.
We're going to spend a lot, he added, but the real goal is to provide infrastructure that's more reliable, more performant, and more scalable than what's available anywhere else. During the meeting, CEO Sundar Pichai told staff that 2026 would be intense, arguing that they're in a good place to deal with a boom or bust. He said, We're better positioned to withstand misses than other companies. It's a very competitive moment, so you can't rest on your laurels. We have a lot of hard work ahead, but again, I think we are well positioned through this moment.

OpenAI and Jony Ive continue to build out their consumer device team at the expense of Apple. In his Apple-focused newsletter, Bloomberg's Mark Gurman reported on the latest big news out of Cupertino. He said that rumors of CEO Tim Cook's imminent resignation were blown out of proportion last week. Many linked the rumors to a failure to execute on an AI strategy, but in Gurman's opinion, Cook still has years left in the role. Succession planning is underway at Cupertino, but Gurman believes the reports of a resignation by the middle of next year were, in his words, simply false. However, some real news is that OpenAI continues to poach from Apple's hardware engineering team. Multiple Apple hardware executives had already moved across to Ive's new company. However, Gurman reports that Apple is facing a full-on exodus of talent. Over the past month, Gurman wrote, OpenAI has hired more than 40 people for its devices group, with many of the new hires coming over from Apple. He added, From what I've heard, Apple is none too pleased about OpenAI's poaching, and some consider it a problem. The hires include key directors, a fairly senior designation, as well as managers and engineers. And they hail from a wide range of areas: camera engineering, iPhone hardware, Mac hardware, silicon, device testing and reliability, industrial design, manufacturing, audio, smartwatches, Vision Pro development, software, and human factors. In other words, OpenAI is picking up people from nearly every relevant Apple department. It's remarkable.

Lastly today, Bret Taylor's Sierra is the latest AI startup to reach $100 million in ARR. Taylor, who also serves as chairman of OpenAI's board, founded the company in February of last year with former Google Labs executive Clay Bavor. Sierra provides AI customer service and sales agents to enterprise clients. In a blog post, the co-founders remarked on reaching the milestone within seven quarters, writing, that's a heck of a lot quicker than we expected and makes Sierra one of the fastest-growing enterprise software companies in history. In addition to their long list of tech-forward customers like Discord, Ramp, Rivian, and SoFi, they also have brought on customers with much older businesses like Vans, SiriusXM, and Rocket Mortgage. Indeed, the co-founders said they were a little surprised that so many older companies were comfortable with integrating AI into their customer service workflows. I could do a whole episode, honestly, on this. What Sierra got right out of the gate was that doing AI for enterprises was not going to be creating a flashy app, agent, or demo, and hoping they figured it out on their own. It involves a lot of messy work of going in, wiring systems together, making sure the company has the right type of dev support and enough of it. And I think that this milestone is validation of an approach that many enterprise AI companies would do well to copy. For now, that's going to do it for the headlines.
Next up, the main episode. What if AI wasn't just a buzzword, but a business imperative? On You Can With AI, we take you inside the boardrooms and strategy sessions of the world's most forward-thinking enterprises. Hosted by me, Nathaniel Whittemore, and powered by KPMG, this seven-part series delivers real-world insights from leaders who are scaling AI with purpose, from aligning culture and leadership to building trust, data readiness, and deploying AI agents. Whether you're a C-suite executive, strategist, or innovator, this podcast is your front-row seat to the future of enterprise AI. So go check it out at www.kpmg.us slash AI podcasts, or search You Can With AI on Spotify, Apple Podcasts, or wherever you get your podcasts.

This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Enterprise engineering leaders start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task. Blitzy delivers 80% plus of development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Public companies are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Visit blitzy.com and press get a demo to learn how Blitzy transforms your SDLC from AI-assisted to AI-native.

Meet Rovo, your AI-powered teammate. Rovo unleashes the potential of your team with AI-powered search, chat, and agents, or build your own agent with Studio. Rovo is powered by your organization's knowledge and lives on Atlassian's trusted and secure platform, so it's always working in the context of your work. Connect Rovo to your favorite SaaS app so no knowledge gets left behind. Rovo runs on the Teamwork Graph, Atlassian's intelligence layer that unifies data across all of your apps and delivers personalized AI insights from day one. Rovo is already built into Jira, Confluence, and Jira Service Management Standard, Premium, and Enterprise subscriptions. Know the feeling when AI turns from tool to teammate? If you Rovo, you know. Discover Rovo, your new AI teammate powered by Atlassian. Get started at R-O-V, as in victory, O dot com.

Today's episode is brought to you by Robots and Pencils. When competitive advantage lasts mere moments, speed to value wins the AI race. While big consultancies bury progress under layers of process, Robots and Pencils builds impact at AI speed. They partner with clients to enhance human potential through AI, modernizing apps, strengthening data pipelines, and accelerating cloud transformation. With AWS-certified teams across the US, Canada, Europe, and Latin America, clients get local expertise and global scale. And with a laser focus on real outcomes, their solutions help organizations work smarter and serve customers better. They're your nimble, high-service alternative to big integrators. Turn your AI vision into value fast. Stay ahead with a partner built for progress. Partner with Robots and Pencils at robotsandpencils.com slash AI Daily Brief.

Welcome back to the AI Daily Brief.
One of the interesting challenges over the last few months with this show has been to determine how much time to spend on the AI bubble conversation. Now, sometimes there are clearly things that are important, major new announcements that actually will shape the destiny of the field and that by extension bring up the bubble conversation. And pretty much that's where we've been focused up until now. But if you pay any attention to markets or mainstream news in general, you will know that this is one of, if not the, loudest and most dominant themes. Every day, there's 10 new thought pieces about why it's an AI bubble compared to probably one or two about why it's not. And it is getting increasingly confused and confusing. Just on a deeply human level, it was hard not to resonate a little bit with NVIDIA CEO Jensen Huang, who, in a conversation as part of a recently leaked meeting, effectively said that the company was just in a no-win position. This was from an all-hands last Thursday, and came right after NVIDIA had delivered a jaw-dropping beat with revenue growth at a 62% annualized pace. Now, you might remember that in after-hours trading NVIDIA surged, but then the next day it was actually down. That prompted the conversation inside NVIDIA, where Huang said the market did not appreciate our incredible quarter. More than just the short-term market reaction, however, Jensen, I think, had a pretty accurate description of what the vibe is right now on Wall Street. He said, If we delivered a bad quarter, it is evidence there's an AI bubble. If we deliver a great quarter, we are fueling the AI bubble. If we were off by just a hair, if it looked even a little bit creaky, the whole world would have fallen apart. Now, NVIDIA's earnings were honestly about as good as they could possibly have been. And what Jensen is getting at is that we have now reached the point where sentiment concentration is so intense that NVIDIA is a proxy for the entire stock market and economy. He even referenced the memes about NVIDIA floating around. Have you guys seen some of them? he asked. We're basically holding the planet together. And it's not untrue. And it was listening to this and reflecting on how the discourse has changed over the last few days, with, of course, the other big catalyst having been the launch of Gemini 3 and what it potentially means for OpenAI, that made me want to do this episode, which is a little bit of catch-up on where we are, but also kind of my statement around how I think it makes sense to handle this going forward.

So let's talk about the first reason that the bubble conversation is increasingly useless. This is very much a market conversation, not an operator conversation, which of course means that if you are an investor, you can ignore what I'm saying about it being a useless conversation. But I kind of think that if you're just an AI user, you could ignore what they are saying. What I mean is that there is basically no one, including many of the loudest AI bubble skeptics, who is arguing that AI isn't transformatively powerful. The questions are ultimately about economic structures, speed of transformation, return on investment, price pressures and new business models, and how all of that adds up to commitments for these big deals that are being booked and digested by markets right now. And yet, because everyone is talking about it, it can be very easy for it to feel like that is the important conversation.
Even though, as I'm saying, I think that if you're just an AI user and an operator trying to figure out your own personal future or the future of your company with these tools, the bubble conversation really doesn't matter. Okay, so the first idea is that it's a market conversation, not an operator conversation. But the second idea is that even the market conversation isn't really, or at least isn't just, about AI. We have had two and a half years of AI holding the entire economy up. We went from post-COVID boom to runaway inflation, to the fastest rate hiking cycle in 40 years, to persistent sticky inflation, to volatility around the elections, to volatility following the elections with regard to policy. And throughout that entire time, market actors with a general desire to see the stock market go up had all of their hopes pinned on AI and specifically the MAG-7. And for two and a half years, despite all of those other factors, the market has been up and to the right because of this very small category of companies. What is very clear at this stage, at the end of November 2025, is that AI cannot any longer prop up the market and the rest of the economy, because other parts of the economy are just becoming unignorable. U.S. consumer sentiment reached one of the lowest levels on record this month. It was down from 53.6 in October to 51 in November. Views of personal finance were the lowest since 2009. Auto loan delinquencies for subprime borrowers hit 6.65% in October, which is the highest level on record in data going back to 1994. Now, to some, auto and mortgage defaults are the most important sign of acute stress in the economy, because people are not going to risk losing their car or their house through a default unless they've completely run out of options. Four-year college grads now comprise a quarter of overall unemployed workers, which is a record level, and the unemployment rate for young people aged 20 to 24 is now at 9.2%. Honestly, you could basically pick any economic indicator you wanted and it would not be good. Mike Green of Simplify Asset Management wrote a blog post that went viral over the weekend that recalculated the poverty line for a family of four in America. The usual metric, which simply triples the average food cost, has the poverty level at $31,200 in household income. Green's updated metric, which actually factors in the high cost of medical care, housing, and child care, and recognizes that food is only 5-7% of the average family budget, found the modern poverty level at a household income of $136,500. The average U.S. household income, meanwhile, is $80,000. The bottom half of the country, in other words, is already living in recession conditions, and they've been doing so for several years.

But even if markets chose not to care about that, there are still big factors that have nothing to do with AI that are coming home to roost in AI. We're dealing with a lack of data post-shutdown. Official labor market data won't be published for October, but ADP private payrolls data said that 29,000 jobs were lost on net in September and just 42,000 jobs were added in October. The labor market, especially outside of government employment, looks very weak. The BLS also canceled October inflation data, but it appears we're now running at 3% as of September data, which is the highest since January and nowhere near the 2% target. Then there's the Fed. The markets are, of course, incredibly focused on what monetary policy is going to do, rightly or wrongly.
And I have never, in my years of covering this stuff, seen the market so unclear about what's going to happen at the upcoming Fed meeting. A December Fed cut was at 30% odds to end last week, but it's since spiked to 75% and is rising quickly. For careful observers, it is clear that the tightness from the Fed has absolutely played a role in driving markets down. Indeed, NVIDIA's drawdown has less to do with their earnings or even the AI narrative than it does with the Fed. And it's not just the Fed in the short term. There are major questions about who will replace Powell as chair and about the independence of the institution in general. The point is, not only are we having a market conversation, it's a market conversation that isn't really, or at least not just, about AI.

But it's also a conversation that is fundamentally about an unknowable future. The bubble talk is all about whether companies can meet their obligations. OpenAI has $1.4 trillion in spending commitments stretching out for around eight years. The rest of the hyperscalers are committing year to year because they control their destiny a little bit more. But at Google's current run rate of $95 billion for this year, they'd be at $750 billion over those eight years. If we were being humble, we would recognize that we are in uncharted territory and we don't have precedent for understanding whether these companies can raise or make and spend this much money, and how fast AI is going to turn profitable as a sector. Current Wall Street analysts haven't ever seen a capex spend like this; they're used to analyzing tech buybacks. And so this conversation that we're having is fundamentally about an unknowable future. What's more, there really is no short-term catalyst that could actually prove AI is a bubble. There are only factors that could put evidence in one column or another. I expect to see endless focus and dissection on every new announcement around revenue, token usage growth, consumer demand, etc. Because the reality is that since no short-term catalyst can actually prove AI is a bubble, in fact because of that very unknowability, investors on both sides of this bet are going to be extremely loud about their perspective on it. Billions of dollars in directional bets and investments are going to be at stake. And in the absence of being able to actually know, the way to make money in the interim is to convince the market of your opinion. Basically, the most circular thing happening for the foreseeable future won't be the deal-making in AI, it will be the debates on Wall Street. And none of the conversations will actually help anyone trying to use AI or figure out how to use AI in any meaningful way. In fact, it is likely to do the opposite. It is likely to be distracting. It is likely to suggest that there's some question about whether you should be investing in AI at all, because the nuance of a transformative technology with open market questions gets beaten out by the relentless parade of headlines.

So in terms of my plans, as much as I can, I will probably not be covering the play-by-play of market sentiment. It's just too much. And as you can see from the title, I just don't think it's all that valuable. Now, we will do broad coverage as big patterns shift. You sort of have a phenomenon where, after a week or two, there's an accumulation of enough signal that it's worth checking in on. And of course, if there are actual events that are extremely meaningful in some ways, I will cover those as well.
But by and large, my sense and my belief is that most people are listening more to understand the way that AI is going to impact their daily lives than to have this market conversation. Although if that's wrong, please feel free to let me know. I like endless market conversations, don't get me wrong. It's just not what I think this podcast is for. So that concludes the why-the-AI-bubble-conversation-is-useless part of the show.

But if you are still here and you are personally or professionally interested in the market dynamics surrounding AI, here are the things that I actually think are interesting to watch for, maybe starting with what not to watch for. An easy first one is to fade reports unless you're actually going to read them. No surprise and nothing novel here, but the media environment is one that promotes and rewards extreme views and even occasionally twists the findings of a report into a clickable headline. The greatest example of this was the MIT report, which had wildly limited methodology and even acknowledged in the report itself that it was a pretty cursory glance at a very limited set of data. But because of media amplification, and by the way, the fairly disgusting opportunistic usage by companies trying to sell AI services even though they knew the report was kind of a joke, its headline version became the most dominant force in the AI conversation in the second half of the year. Okay, so we fade the reports we're not going to read, but then what is worth watching for?

One big one is financing approaches. We've started to see many of the AI firms spin up off-balance-sheet vehicles to finance AI data centers, essentially a separate company that can raise funds using the data center as collateral. Now, this sounds a lot scarier than it actually is. Yes, it is the sort of thing that financial firms did a lot of in the lead-up to 2008, but it's also something that energy companies have done relatively safely for decades. The risk isn't the financing vehicle, it's whether the data center is profitable. That said, debt funding is another thing to watch. For the past few years, most of the AI build-out has been cash flowed. In other words, the hyperscalers took their monumental balance sheets and started pushing that free cash flow into all of this capex. However, over the summer, hyperscalers started using debt, and the numbers are growing rapidly. Now, of course, the build-out is at the point where it necessitates debt, and debt in and of itself isn't a bad thing. It's how, in general, people build stuff that's going to be profitable in the future. But certainly it makes things more complicated and is worth keeping an eye on. As we try to better understand the risks, one of the big ones could be that financiers stop allowing the debt to be rolled over. For that, it's useful to watch credit markets and specifically credit default swaps. Credit default swaps are financial instruments that pay off in the event of a default, so they tend to trade according to the market's perception of default risk. They're also a much more sophisticated market than equities, so arguably have a richer signal. Last week, the market for Oracle CDS blew out, which drew a lot of attention on Wall Street. And while it's clear that this was the market pricing in an increased risk of default, it's important to note that the risk being priced in is still minuscule. Prior to last week, you could insure a portfolio of Oracle five-year debt by paying 0.4% per year.
That's now tripled, but it's nowhere near enough for a credit downgrade. The pricing currently implies a 6-8% chance of Oracle going bankrupt sometime before the bonds mature in 2030. Another thing to be aware of is that some lenders are starting to look for unconventional ways to hedge their exposure. Earlier this month, the FT reported that Deutsche Bank was looking to hedge their exposure to data center lending by shorting AI stocks. Now, this kind of activity risks pushing the market around and distorting the signal. In other words, AI stocks going down might not be a signal of bearishness in the market. It could be a large lender shorting the stock to hedge their lending exposure. Now, to be clear, I do not think that's what's happening right now. What's happening right now feels like a pretty clear macro drawdown on recession fears. But that kind of complicated signal is worth paying attention to.

One really useful resource, if you are trying to keep track of all of this, comes from Azeem Azhar and his Exponential View. If you go to boomorbubble.ai, you can see their real-time gauges tracking economic strain, industry strain, revenue momentum, valuation heat, and funding quality, and see how they're changing over time. So there, my friends, is why I think the AI bubble conversation is useless, but also what I think is useful to watch for if that's a conversation you care about. Ultimately, what pretty much everyone agrees on, with I guess the possible exception of Gary Marcus, is that these technologies are incredibly powerful and they will change how you work and likely what you work on in the years to come. That is the part that I am excited to hopefully help with and where I will continue to focus. For now, that's going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.
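A quick note on the Oracle CDS math in the transcript above: the jump from a spread to an implied default probability can be sanity-checked with a standard back-of-envelope approximation. The sketch below is illustrative only; the recovery rate, the exact years to maturity, and the flat-hazard simplification are assumptions for the example, with only the roughly 0.4% starting spread, its tripling, and the approximately 2030 maturity taken from the episode.

```python
from math import exp

# Back-of-envelope mapping from a flat annual CDS spread to an implied
# cumulative default probability: hazard ~= spread / (1 - recovery rate),
# cumulative default probability over T years ~= 1 - exp(-hazard * T).
# All inputs are illustrative assumptions except the ~0.4% starting spread,
# its tripling, and the roughly 2030 maturity mentioned in the episode.

def implied_default_prob(spread: float, years: float, recovery_rate: float = 0.4) -> float:
    """Approximate cumulative default probability implied by a flat annual CDS spread."""
    hazard = spread / (1.0 - recovery_rate)  # rough annual default intensity
    return 1.0 - exp(-hazard * years)

old_spread = 0.004            # ~0.4% per year, the pre-move cost cited in the episode
new_spread = old_spread * 3   # "that's now tripled"
years_to_maturity = 4.0       # assumption: roughly four years until the ~2030 maturity

print(f"implied default probability before: {implied_default_prob(old_spread, years_to_maturity):.1%}")
print(f"implied default probability after:  {implied_default_prob(new_spread, years_to_maturity):.1%}")
# With these illustrative inputs: roughly 2.6% before and 7.7% after,
# in the same ballpark as the 6-8% range mentioned in the episode.
```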
Related Episodes

- 4 Reasons to Use GPT Image 1.5 Over Nano Banana Pro • The AI Daily Brief • 25m
- The Most Important AI Lesson Businesses Learned in 2025 • The AI Daily Brief • 21m
- Will This OpenAI Update Make AI Agents Work Better? • The AI Daily Brief • 22m
- The Architects of AI That TIME Missed • The AI Daily Brief • 19m
- Why AI Advantage Compounds • The AI Daily Brief • 22m
- GPT-5.2 is Here • The AI Daily Brief • 24m