

Are We In An AI Bubble? In Defense of Sam Altman & AI in The Enterprise | EP99.24
This Day in AI
What You'll Learn
- ✓ Sam Altman has been heavily criticized by former OpenAI board members, including allegations of withholding information and lying to the board
- ✓ The hosts argue that these criticisms are driven by jealousy and a desire to score 'internet points' rather than substantive concerns
- ✓ OpenAI has grown into a highly successful company, with over $13 billion in revenue, and the hosts believe it would be unwise to bet against the company's continued success
- ✓ The transition from a non-profit to a for-profit model has led to internal power struggles, with board members attempting to remove Altman from his position
- ✓ The hosts suggest that the altruistic researchers who initially formed the non-profit board have been overshadowed by the company's commercial success
- ✓ Elon Musk, who was an early supporter of OpenAI, is now suing the company, potentially driven by jealousy over its success
Episode Chapters
Introduction
The hosts discuss recent controversies surrounding Sam Altman and OpenAI, including criticism from former board members and a lawsuit from Elon Musk.
Criticism of Sam Altman
The hosts analyze the accusations made against Altman by former board members, arguing that they are driven by jealousy and a desire for 'internet points'.
OpenAI's Commercial Success
The hosts discuss the rapid growth and commercial success of OpenAI, and why they believe it would be unwise to bet against the company's continued success.
Transition from Non-Profit to For-Profit
The hosts explore how the transition from a non-profit to a for-profit model has led to internal power struggles within OpenAI.
Elon Musk's Lawsuit
The hosts discuss Elon Musk's lawsuit against OpenAI, suggesting that it may be driven by jealousy over the company's success.
AI Summary
This episode discusses the recent controversy surrounding Sam Altman, the CEO of OpenAI, and the accusations made against him by former board members. The host defends Altman, arguing that the criticism is driven by jealousy and a desire to score 'internet points' rather than substantive concerns. The episode also touches on the evolution of OpenAI from a non-profit to a successful for-profit company, and the internal power struggles that have emerged as a result.
Key Points
1. Sam Altman has been heavily criticized by former OpenAI board members, including allegations of withholding information and lying to the board
2. The hosts argue that these criticisms are driven by jealousy and a desire to score 'internet points' rather than substantive concerns
3. OpenAI has grown into a highly successful company, with over $13 billion in revenue, and the hosts believe it would be unwise to bet against the company's continued success
4. The transition from a non-profit to a for-profit model has led to internal power struggles, with board members attempting to remove Altman from his position
5. The hosts suggest that the altruistic researchers who initially formed the non-profit board have been overshadowed by the company's commercial success
6. Elon Musk, who was an early supporter of OpenAI, is now suing the company, potentially driven by jealousy over its success
Topics Discussed
OpenAI, Sam Altman, AI startups, Non-profit to for-profit transition, Internal power struggles
Frequently Asked Questions
What is "Are We In An AI Bubble? In Defense of Sam Altman & AI in The Enterprise | EP99.24" about?
This episode discusses the recent controversy surrounding Sam Altman, the CEO of OpenAI, and the accusations made against him by former board members. The host defends Altman, arguing that the criticism is driven by jealousy and a desire to score 'internet points' rather than substantive concerns. The episode also touches on the evolution of OpenAI from a non-profit to a successful for-profit company, and the internal power struggles that have emerged as a result.
What topics are discussed in this episode?
This episode covers the following topics: OpenAI, Sam Altman, AI startups, Non-profit to for-profit transition, Internal power struggles.
What is key insight #1 from this episode?
Sam Altman has been heavily criticized by former OpenAI board members, including allegations of withholding information and lying to the board
What is key insight #2 from this episode?
The host argues that these criticisms are driven by jealousy and a desire to score 'internet points' rather than substantive concerns
What is key insight #3 from this episode?
OpenAI has grown into a highly successful company, with over $13 billion in revenue, and the host believes it would be unwise to bet against the company's continued success
What is key insight #4 from this episode?
The transition from a non-profit to a for-profit model has led to internal power struggles, with board members attempting to remove Altman from his position
Who should listen to this episode?
This episode is recommended for anyone interested in OpenAI, Sam Altman, AI startups, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Join Simtheory & experience MCPs in action: https://simtheory.ai

00:00 - Chris Has a Merch Sponsor
02:42 - In Defense of Sam Altman
20:29 - Are We In An AI Bubble? & What is Working in The Enterprise?
43:58 - Anthropic's Code Execution with MCP: Problems with MCP Context
52:44 - Kimi-K2 Thinking Model Release
1:00:45 - "In the Middle of a Bubble" Song

Thanks for your support and listening, we appreciate you!
Join our Discord: https://discord.gg/TVYH3HD6qs
Full Transcript
To put that into perspective, that is more than 2.2 times the economy of Australia. The entire size of the UK economy. Sounds fine, honestly. That sounds fine to me. No worries. We failed to invest last time we were discussing these guys, and in 10 episodes' time we'll probably be like, damn, we could have been billionaires again. So Chris, before we start today's show, we did need to cover two important housekeeping topics. One of them actually is housekeeping for you: people mentioned the bookshelf in the comments. It's fixed, especially for those that watch and, like, have OCD and were so concerned about your bookshelf. It's somewhat fixed, let's be honest. That's right. And to be clear, I didn't fix it for the audience. I fixed it because we have a house guest at the moment and we had to get things in order. So it's sort of a side effect. But for those people who bothered to look at it, it's so much better. And the second thing is, there's a little carrot mascot behind you, you're wearing a hat that says "I dig carrots," and you have a T-shirt on from a company in California, Bolthouse Fresh. That's right, Mike. So I've actually got my own sponsor. So this is nothing to do with the show. It's not... well, it is to do with the show, but it's like, you didn't get anything; I got all this merch. And as I said, anyone who sends me merch, I will loudly and proudly wear it and promote your business, no matter what it is. And Bolthouse Fresh has done that. They're the leading provider of carrots in America. They've been around for like 120 years or something like that. And what a great company. And thank you for the amazing merch. And they sent so much. Look, I've got a drink bottle. I've got the mascot at the back. I've got two jackets, shirts. This is a merch pack to end all merch packs. If you bought this on the Sim Theory scam store, it would probably cost you $1,000. I love how someone actually called your bluff, which was like, if you send me merch, I'm going to wear it. The kind people at Bolthouse Fresh did. I actually remember these carrots. I think they rebranded, but they had them in Whole Foods and Trader Joe's, and I would actually get them. It's the only carrots I'll eat. Anything else is a poor substitute. The leading carrot, Bolthouse Fresh. There you go. So send us merch and we'll do a free plug. This is really dodgy. Imagine that. It's funny that we're an AI podcast and the first sponsored thing we've ever done is carrots. And also, it's not sponsored. It's just really me being desperate for free clothes. Yeah, well, it's better than our other fake sponsor, NordVPN. But anyway, let's get into it. This week there has been a huge pile-on on our man Altman, and it came out of this clip on, I think it's the BG2 podcast, with Brad Gerstner. Let's listen to the clip. "Hanging over the market is, you know, how can the company with $13 billion in revenues make $1.4 trillion of spend commitments? You know, and you've heard the criticism, Sam." "We're doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer. I just... enough. Like, you know, I think there's a lot of people who would love to buy OpenAI shares. I don't think you want to." That is a slap in the face. Like, what a bold response. I love it. So that's one of the slap-downs, but it did feel like after that there was a lot of talk about the AI bubble.
We'll get to that in a minute, but then there was just a pile-on after. So there were clips being circulated of our favorite Australian OpenAI former board member, Helen Toner. This is what she had to say on another podcast, the TED AI Show: "For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board. At this point, everyone always says, like, what? Give me some examples. And I can't share all the examples." That's because there are none. This woman is a disgrace. Like, honestly, there is nothing worse than a snivelly little person who waits until there's a pile-on and then reveals this information. Like, she's a board member, right? If you were on the board and you really felt that this was true, that he lied to the board, then why didn't you, A, resign, if, you know, it's not in the right spot, or call for his resignation? Which I think she did. Like, they tried to get him fired, right? They did fire him. Well, yeah, okay. So this all comes out of the deposition. So basically what happened is Elon Musk has sued OpenAI, saying they've, you know, gone away from their core mission of being open. What happened earlier this year is Ilya Sutskever had to give evidence to lawyers. That's since been unlocked and sort of revealed to the world. And the key points, as this person summed up on X: Ilya plotted for over a year with Mira Murati to remove Sam Altman. Dario, you might remember him from Anthropic, or my necklace, wanted Greg fired and himself to be in charge of all research. Mira was upset because Sam was pitting her against Daniela, who is Dario's sister. So it was just basically gossipy internal politics and jealousy and backstabbing. And then that led Mira to tell Ilya, like, oh, look at all the bad stuff he's done on Slack, turning everyone against each other. Ilya wrote a 52-page memo, took it to the board, saying that Sam should be fired. And obviously they followed through with it. And then what they didn't anticipate, and Ilya didn't anticipate, is the backlash that would then be received. And of course we all know what happened after that: they appointed that random Reddit CEO for, like, a day, and then eventually Sam was back. Rudolf Hess kind of thing. But look, my opinion is this: everyone is jealous of the success that this guy has created, and the only reason they're able to come out and say this stuff and have any weight to it at all is because of the success of that company. Like, he made it so big. You know, I'm not the biggest Sam Altman fan, but I really feel like this is ridiculous behavior from all of these people. He's made a company that is making, what did he say, more than $13 billion in revenue, and probably well on the increase. Like he says, if this thing went public, which it sounds like they may actually do, it is going to go absolutely insane and wild. It'll make Tesla look like a rounding error. It's going to be enormous. And to come out and be like, oh, well, a year ago I knew that he was kind of shit at his job... it's like, what a stupid thing to say. It's idiotic. And also, get behind the company you're a director of. You should be like, look at what we've done, it's amazing, and we're going to smash it and continue to smash it. It's just such a weird thing to want to score internet points and have people think you're so smart, when the reality is that's not really what your job is.
I just don't understand this stuff. It doesn't make sense. Okay, sure, I'm not a financial expert. Maybe there are fundamentally wrong things with the company. But people said that about Tesla for so long. They're like, oh, well, you know, it's going to go bankrupt, it's going to go bankrupt. And people lost so much money shorting that company. Would you really bet against OpenAI at this point? And this is what I found really interesting, because, you know, we obviously pay out OpenAI and Sam Altman to somewhat keep them honest. Not that it really matters to them, but we do it to call out what we believe is the truth. Like, we talked about Sora being a ridiculous waste of time. I stand by that. I think it's just a money sinkhole for them, and they should be focused on practical use cases. But in this case with Altman, I think the guy does deserve, like you said, a ton of credit. He stood by OpenAI after Elon Musk did, in theory, from all reports, abandon it and basically say, you guys are hopeless, I'm leaving. Then he made it the success that it was. He had the foresight to say, yeah, just put ChatGPT out there. I'm sure he didn't know how big it would become at the time, but that really created the AI explosion and all this investment in infrastructure and started the entire market. Then you have Elon Musk coming along with essentially jealousy, being like, well, I stepped away from that, but I helped found it, which I know he did. And so he's now suing them. Everyone else is dragging this guy through the wringer because, you know, they want to win. This is very competitive. What would Elon Musk say if the company had just failed? If there was nothing, all that sort of stuff, he wouldn't have had anything to say about it. It's only because of the success that Sam Altman's created that anyone even cares about it in the first place. And, like, Elon Musk copied it. He made his own version of it. I mean, the xAI thing follows the OpenAI standard. It is a cheap imitation of the OpenAI models. It's just crazy that these people are coming out insulting this guy who really took it somewhere. And the whole not-for-profit to for-profit thing, it's like, well, all right, the business has changed. Things pivot. You discover things along the way and learn, and you change the way things are. Who does he have to adhere to? Did he sign some religious pledge of allegiance that he'd always be a non-profit? Who are you answering to here? Do what you want. There are no laws about this stuff. And if there are laws, I'm sure they can afford the lawyers to change them. I just don't understand the hatred for this guy, and that his own board is piling on publicly for internet brownie points. Honestly, it's disgraceful. Former OpenAI board member, I should say. But I get the impression that early on, what happened was these altruistic AI researchers, who are researchers, right, do and did have concerns about the path that AI could potentially go down one day. So a way of appealing to these people was to structure the company in such a way that it was this not-for-profit, and it had this, let's call it inexperienced, altruistic board, instead of a typical board for governance. You mean radically altruistic? You need the word radical there to be accurate. But that's what I think happened, right?
Like, this board was formed under the guise of, you know, we'll get all these researchers in, because this is what they're idealistically about. And then they've discovered this sort of breakthrough in terms of ChatGPT, this explosion's occurred, and then they've been like, this is a real business. Like, this could be far bigger than Google and Facebook and Microsoft and everyone, which I kind of believe it probably will be, if they play their cards right or course-correct a little bit. And so then, yeah, as you said, you've got Helen Toner, who, quite frankly, what has she achieved in life? Nothing. And she has this outsized, overweight say on this guy's character. When, if you actually read the documentation of this lawsuit, it's all hearsay. It makes the board look utterly stupid. They essentially got a dossier of gossip and then fired the CEO based on gossip, without even really giving him a chance to say anything or comment. And the other thing they didn't realize is that the large majority, the vast majority, of the staff when this occurred were like, you know, we want him back. This is our leader. How dare you fire him and put Mira Murati in charge? And then, you know, they all capitulated and then went their separate ways. It just sounds more like a bit of a power struggle, again, after it became successful, because there was something there to take away. There was something there of value that they all wanted to control. And even, you know, there was talk behind the scenes with Anthropic at some point, when Sam was fired, to essentially put Dario in charge, merge Anthropic and OpenAI back together, and put him back in charge. So anyway, it's weird to be starting the episode by sitting here defending Sam Altman, because I don't really think he needs, you know, that much defending. But I don't know, I do feel sorry for him a little bit. I think he's showing genuine strength and leadership in this moment. He's being tested by all of the people around him. He's clearly been fighting against forces in his own company that want to bring about his demise, and yet he's come through it. And then, in the face of people criticizing him, he's like, damn right, I'm going to do well with this stuff, and we're way better than you think. Think about Jeff Bezos. He had the same thing. People criticized him for like 10 years straight, being like, Amazon, it's such a bad company, you guys just lose money all the time. And he just sat there with a smile on his face. He's like, yeah, you're right, we lose a shitload of money, and all that. And now he's the king, you know? So I don't know, there's something about this period of Sam Altman that makes me really like him and respect him. And I think that all these other snivelly little people around him can just leave if they want. Like he says: sell your shares, get out, don't be part of it anymore. There's heaps of people who want to be.
I think, though, a lot of people were saying it's just disrespectful to talk like that to one of their most major investors, who backed them really early on and gave them a ton of money. And so the other side of the equation was like, why is this guy so defensive? Like, clearly they don't have $1.4 trillion and will never get it in terms of these spending commitments. Also this week, their CFO came out and basically implied that they should be able to loan money for infrastructure and to pay all these suppliers using the government as a backstop, like too-big-to-fail, 2008-type stuff. And so that sort of spurred even more talk about a bubble, and all the defensive rhetoric coming out of these guys. But, you know, in terms of this whole saga, it just needs to be put to bed. This lawsuit should just be dropped and everyone should move on and get on with it. It just seems like a bad attack vector. Like, instead of attacking his character, which I have done many times on the show, so I'm a hypocrite... Yeah, well, that's our main attack. Yeah, that's our main attack. So he's just a bad person. Yeah, like, this is the thing. It's sometimes hard to defend him, because he does come across as someone who you cannot trust, and, you know, someone who's scheming all the time. I think the big problem with OpenAI right now is that they have too many projects, too many focuses, and they've got so much competition. We've got, likely in the next couple of weeks, Gemini 3 coming out, which I think is going to blast away GPT-5 as the best model. Like, I think it will absolutely dominate. And you see behind the scenes now they're preparing GPT-5.1, which is said to be even cheaper and faster. So I think that's going to be their weapon against the release of Gemini 3. But to me, it's time to focus and really knuckle down here and be really good at some core things, like the models, and maybe the enterprise capabilities as well. They sort of announced that last week, and we covered this whole pivot on last week's show. But interestingly, this week some data was released on enterprise large language model API market share, and OpenAI has just completely and utterly lost a lot of ground. It says OpenAI fell from 50% in late 2023 to 25% by mid-2025. Anthropic now leads enterprise LLM API usage with 32%, while OpenAI has a 25% share. Now, interestingly enough, a lot of these companies aren't necessarily switching; it's just that the market's getting bigger and Anthropic's capturing more of that market. It's pretty interesting data. It is a little bit outdated, so it doesn't cover recent developments as much. I don't think GPT-5 was even out when they did this, so it can change, and does change, as we know, rapidly in terms of what the enterprise does. Yeah, you're right. And maybe relative share isn't necessarily the metric to measure, because if the market's growing as a whole, they can still grow, even if the competition is getting some of the market as well.
Yeah, I think it's more the story in this. If you look at that chart, Anthropic is just going straight up and to the right. Google is too. And so I think what it's showing is they're not as dominant as they've been saying. It's not like they have some technology that is so far ahead of, or so far greater than, the others. Well, I mean, you know my opinion. I don't even think they have the best model. I don't think GPT-5 is the best model. I know most people disagree with me, so fair enough. But I think the real problem is, despite what I've just said defending the guy himself, I just don't think they really have much to boast about. They've muddied the waters with all of these different projects. They lack vision. They lack cohesion. And the problem is going to be that the technology is clearly easily replicated. All they really have on their side is early brand recognition and being seen as what AI is to most people. But I feel like, as the people with money, as the institutions looking at how they spend their AI budget, which every company and every institution and every government has now... every single large organization in the world has an AI change officer and a massive budget to spend on this stuff. They're going to put it somewhere. And I would argue that OpenAI isn't the most compelling, not even close, in terms of how you spend that money. And unless they're able to be, then maybe they really will have problems with committing to spend a bunch of money, if they're not the one that gets it. I think it's also the brand perception, right, of ChatGPT, because it's seen, at least in my eyes and my friends' and the people I associate with, as a consumer thing. The ChatGPT app is seen as a consumer AI. Yeah, like, that's toy, that's kid stuff. Now you need to move on to the professional stuff. Yeah, and I think this is where Anthropic's positioning themselves as the enterprise one. And so I would argue that... Adopt.com, so to speak. Yeah. And I would argue, though, and I argue it all the time, that really you should just see these models as tools and use the right tool for the job, and not lock into a singular model. And you can kind of see that in the enterprise. You've got to be promiscuous. You've got to go out there and just try something new every night. But to me, that's the challenge they face. And so it's like, okay, well, if you're going to go all in on consumer, maybe go all in on consumer and worry about the business stuff later. And, or, have two product tracks with a very singular focus, where it's just, here is our consumer unit and our business unit, like Google do. Here is our, like, Workspace business unit. And just hone in on those products and focus on the product, not this whole boil-the-ocean strategy they have right now. And I think that's where they're going to come undone, because they're almost forced to boil the ocean to generate more revenue to prove that their $500 billion, half-a-trillion, valuation is justified. That revenue has got to keep going up. But I think... Yeah, you can't on one hand say, we're for business, we're the future of AI, but also, we've made it so you can make anime porn to share with your friends. You know, the two things don't go together.
Yeah, I think it's just a constant... and maybe the enterprise market doesn't hear as much of this if you're not following it at this granular level, but to me, that is the issue, right? It's this mixed signaling. It's like, oh, we're going to be a platform now for developers... oh, and here's the anime porn thing, or here's the Sora thing. Yeah, there are a lot of problems around that. But a lot of this talk, interestingly, led to, and I know your feeds on socials were filled with it, and the media was filled with it as well, a lot of speculation: are we in an AI bubble? And I wanted to first rattle off some stats to just ground the audience in where things are at. Because when I put this together, I was shocked. I was like, oh God, this might be real bad. So right now, at the time of recording, NVIDIA is valued at $4.5 trillion. 4.5 trillion, for normal speak. And that's American trillions, right? American trillions. A thousand billion. To put that into perspective, that is more than 2.2 times the economy of Australia, or two times the economy of Canada, or the entire size of the UK economy. Another way to think about it is it's 16% of the entire US economy. The entire US economy. That's pretty big. There's also an estimated $1.2 trillion in AI-related debt currently floating in the financial system. So this is corporate debt, $1.2 trillion. I hate to get philosophical, but none of this is real, is it? Like, money doesn't exist. This is all just numbers in a book... not even a book, numbers on a GPU somewhere. Yeah. And I mean, that is almost the case for the bubble: it doesn't exist. So obviously, this has led a lot of people to calling it a bubble. Just to look at the market, it's currently trading on average at 30 times earnings. This is skewed by companies like Tesla, which I think is at like 250 or 300 times earnings right now. So if you have one-thirtieth of the earnings of the UK, that's pretty good. I'd take that. The historical average on the S&P 500 is 17 times. Sounds fine, honestly. That sounds fine to me. No worries. We failed to invest last time we were discussing these guys, and in 10 episodes' time we'll be like, damn, we could have been billionaires again. And this is the thing, right? Even if it is a bubble, these bubbles can last a long time from when everyone starts calling them a bubble. When people were calling out the dot-com bubble, it took four or five years for it to actually implode and, you know, make a mess of things. But anyway, I wanted to explain the circular financing as well. So, one other aspect of this, why everyone's calling it a bubble: the bubble really begins, if you think about it, in Trump's first term, when he said that you could repatriate your foreign cash. You know how the tech companies put all their money in Ireland and places where they didn't have to pay tax on it, and just sat it overseas? Well, I believe under the first Trump administration they said, bring all the money back in and we'll give you a huge tax discount. You won't have to pay the full tax; you can bring it back in, and that will lead to investment in the US economy. So what happened is... And the rise of the US dollar, I imagine, as well. Yeah, so they brought all that money back in. I think it was something like 1.5 to 2 trillion in combined cash from the tech companies.
And now they're sitting on all that cash, and they've really got two options: buy back shares, or invest in data centers in this AI boom. So a lot of that cash is going into data centers and infrastructure for AI. Then the other argument for the bubble is this loop where it's like: NVIDIA invests $100 billion in OpenAI. These are all true facts, by the way; this is not just me making stuff up. Then OpenAI spends $100 billion of that investment to buy NVIDIA chips. So NVIDIA's earnings go up, OpenAI's valuation goes up, then NVIDIA's revenue goes up as a result of that. And Microsoft invests in OpenAI, they invested $135 billion, and then OpenAI commits to $250 billion in Azure cloud services, and so on and so forth. There's a toy sketch of this loop below. This is the exact kind of stuff IBM was doing in the 80s. They were pros at this stuff. So I guess my question is: do you think this is some huge bubble, and this will all boil over, and we're going to go into this huge trough of disillusionment and everyone's going to give up on it? I don't, for the reason I said earlier: companies have budgets for this stuff. All of them. Every company, small, large, local councils, schools, universities, governments. Everyone who is sitting on piles of cash has a budget to spend on AI. And what is the downflow effect of that? They are maybe going to have intermediary services with value-adds on top, but at the end of the line, it's got to run models. And the models, regardless of who you pay... if you pay OpenAI, they've bought the GPUs; if you pay Amazon, they've got their own GPUs that you're paying for, right? Not all of which are NVIDIA, because Google has their TPUs, right? They've got their own hardware that runs theirs. However, at the end of the line is always the hardware. There is going to be increased usage of this, not just a little bit, a lot. It's going to increase so much, because everyone in the world is using it on a consumer level, right? Everyone who lives in a country that can afford it and has a phone is using it. Then every business is going to use it. Every large institution and government is going to use it. The usage is just going to be incomprehensibly large. And yeah, okay, the models will get more efficient, the hardware will get better, but that's always been the case in computing, and it continues to grow. I just don't see how it doesn't stay. Okay, sure, maybe they're doing some financial shenanigans within individual companies, but we're talking about a bubble in general, right? One or two companies might fail, but someone will be there to pick up the pieces, because the demand's there. I just genuinely don't see it. I mean, maybe this is famous last words or something, and I'm not a finance expert, especially at this scale. I just don't think this is going to go away, because it's incredibly useful for businesses. And I know there's all that stuff like, oh, you know, Gartner did a study and found that people weren't as productive as they thought with AI, and whatever. But I just don't agree with that, because I see firsthand just how productive it can make businesses. And I just think that demand is going to stay. I don't think people are going to be like, you know what, actually, it's kind of crap and I don't want it.
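To make the circular-financing loop described above concrete, here is a minimal toy sketch in Python using the round figures quoted in the episode. It is an illustration of the accounting loop only; the ledger names are invented, and real deal structures (equity tranches, compute credits, multi-year commitments) are far messier than this.

```python
# Toy ledger for the circular-financing loop described above, using the
# episode's round numbers (all figures in billions of US dollars).
ledger = {
    "nvidia_investment_in_openai": 0,
    "openai_spend_on_nvidia_chips": 0,
    "nvidia_revenue_booked": 0,
    "microsoft_investment_in_openai": 0,
    "openai_azure_commitment": 0,
}

def run_loop(ledger: dict) -> None:
    """One pass of the loop: investment flows out and comes straight
    back as vendor revenue, inflating both sides at once."""
    ledger["nvidia_investment_in_openai"] += 100    # NVIDIA invests $100B in OpenAI
    ledger["openai_spend_on_nvidia_chips"] += 100   # OpenAI spends it on NVIDIA chips
    ledger["nvidia_revenue_booked"] += 100          # which NVIDIA books as revenue
    ledger["microsoft_investment_in_openai"] += 135 # Microsoft's stake, per the episode
    ledger["openai_azure_commitment"] += 250        # OpenAI's Azure commitment

run_loop(ledger)
print(ledger)
# The same $100B shows up three times: as an investment, as capex, and as
# revenue. That multiple counting is what "circular" means in this critique.
```

The point of the toy: nothing in the loop is fraudulent on its face, but the same dollars inflate several companies' headline numbers at once, which is why critics see it as bubble fuel.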
I think there's just a disconnect, when we talk about this, between the financial markets, which obviously, like, we don't know that much about and don't really care about that much... it's the finance bros pumping and dumping AI stocks... and the other side of the equation, which is the technology, which we really care about. And my lived experience of seeing businesses adopt AI, and the use of AI, and the productivity gains of it... really, a lot of that stuff is, in my view, only just coming to fruition now. We're just starting to see it. A lot of people had failed pilots, failed attempts at this stuff, which you're seeing in a lot of the research. But the companies that are rewriting things from the ground up and saying, okay, we have to rebuild all of our processes and workflows around incorporating AI, those are the ones that are succeeding and seeing benefit. And so I'm insanely confident there's so much benefit there, and you'll see the impact in earnings and efficiency. In fact, I found one stat that said where you're seeing the biggest benefits is in the SMB: 51% of SMBs that adopted generative AI reported revenue increases of 10% or more. And that goes back to your point on previous shows about how that small accounting firm or law firm or whatever it is can start to take on more work, because they can get a lot more done faster. And so I just think the enterprise, because they're so slow to adapt and change processes, unless you've got some gung-ho CEO out there going, we're going to do this and put it in every aspect of the business... this is why these projects are failing: there's this misalignment in strategy in these large enterprises, where they've got the AI budget, like you say, but they have no idea how to spend it to actually benefit. But the thing that has struck me about it, and this comes from direct experience with seeing people adopting the technology, is that it's bringing smart people alive. They suddenly have capabilities they didn't have before and are able to enter realms that were formerly reserved for tech people or large, long-scale planned projects. Suddenly, things they've always wanted to do, or believed should happen within a company, they can actually go and do. And I think it's that kind of thing that is really going to open up some companies. And yes, some companies will fail and die because of this. Some jobs will go, some jobs will be replaced. But I think ultimately, as a whole, economic output will increase, because the companies who embrace this are going to be able to create more wealth for everyone by adopting it correctly. And it's just very interesting in organizations how it isn't just one dude. We say that maybe one person's driving it, but the demand for it is universal, and people are doing more with it. I don't see how you can have that happening everywhere and not have a positive impact as a whole. But MIT said that 95% of organizations are seeing zero measurable return. Yeah, but I mean, this is similar to the studies where, when the internet came out, newspapers were like, oh, people still prefer to get the newspaper delivered on a Saturday. It's like, yeah, okay, well, you keep believing that, bro, and we'll just keep making the technology that replaces that. But here's the bubble counter to this: we probably are in a bubble, right? And the reality is that investors and everyone have just gotten so excited about a new technology that we're probably investing, like, five years ahead, maybe, of where we need to be. But that won't necessarily be a bad thing.
And in, like, the next... I would say things are inflated basically 10 years ahead. So if you fast-forward 10 years, to where AI is like the internet was, where it's just in everything, and there are all these other use cases that form, and productivity benefits and so on, and cars driving themselves, and robots in your home, you know, opening your door poorly... Yeah. No, I just mean I think in the next decade that will inevitably happen, right? I have no doubt. And so it's just going to take time for this stuff to be adopted, and it's going to take time for this infrastructure to be built out and the power to be connected and the lights to come on. And the real question is: does the demand, that at least we see, match up with supply? I think that will determine, you know, bubble or not. But what is a bubble, really? A bubble, in my understanding of it, is speculators. It's people investing ahead of time thinking, this is going to crank. Like, we're going to make a shitload of money if we invest now. And they're like, oh, if I don't invest now, I'll miss out. And then it gets way overvalued, right? And what are the consequences of that? Maybe a couple of companies overspend and then ultimately fail, because they were writing checks they couldn't cash at a time when this speculation was going on, because they've got a ton of money, whatever, they can issue new shares and profit and whatever. But this isn't going to cause a depression or destroy a market, because actual value is being delivered, to some degree. Maybe not to the degree the speculators think. But I just don't see how it's going to be like, oh, actually, we were all wrong about AI and it doesn't work, and all of this stuff is nonsense, and therefore all that wealth is gone. I just think, screw these people who are just trying to gamble on the stock market and stuff like that. Let's focus on the actual productivity element of it and build real stuff. That's what I care about. I don't really care about people who are on, like, Wall Street Bets and stuff, gambling on it. It's not really what... despite us promoting AI gambling heavily all the time. I just don't think that financial-market stuff matters so much in terms of everyday businesses and people. I just think people who are doing real stuff are going to get on with it regardless of what these psychos do. The issue I have with it, and the thing that scares me, though, is that by these CEOs like Altman and others going out there and hyping up AGI and talking about people losing jobs and making people live in fear, they've set up this environment where anything they release, or anything the industry does, is seen through the lens of those statements, right? And that hype. And so therefore you can never meet or live up to the expectations when you're building products that have to evolve and respond to users and improve things in an organization through successive iteration, working through their use cases and problems, right?
Like, I think everyone thought that you'd go and spend on AI and it would just magically transform the business, and you wouldn't have to do any work, because it was, like, sentient or something, and it would just navigate the halls of the organization and figure out how to improve things. And that, of course, is a fantasy, and I think a lot of the investment is led by that fantasy. The reality is somewhat different. You have to be in the trenches, working hard, getting things done, figuring out how to deploy the technology, creating champions in an organization to help roll it out and understand it. And these are all the things they have not done. And therefore, you know, it's not like that kind of Salesforce or Oracle type playbook here, where they've gone and said, how do we now deploy this technology into enterprises? And so what's happened is a lot of the early adopters have got absolutely spanked by it. They don't know how to measure the results. They're not spending as much. They're disappointed. The pilots aren't going ahead because they've failed. And I think that puts this whole negative spin on the whole technology. And I worry that this could lead to... if this bubble pops and the share market crashes a little bit, then we go into this period where everyone's like, oh, we tried that, it's stupid. And then we just go into this black hole with the technology, which I think would be sad. And I do think, in that period, you could have the people who then go and adopt it come out the other end just wiping out the other businesses. Yeah, I think so. And I think that, you know, we've discussed this ourselves. I think the people who can get through this period of the ups and downs, in terms of the way people are feeling about it, should just focus on things they can do now with the current technology and be practical about it. Think about medium-term impacts of the technology, what it's likely to do to your industry or your profession or whatever. And then just start to try it. Actually give it a go and see how it works for you. People are smart. They'll figure out where the strengths and weaknesses are, and what the likely future improvements are going to do to their industry. And you can have a thought in the back of your mind: okay, well, if they happen to crack AGI, then maybe that's a black swan event that would destroy our industry. But it's unlikely, and it's far more likely the technology incrementally improves, becomes cheaper, and is everywhere. And so we need to work around that, and not worry so much about all the financial speculation side. Someone will be there with the GPUs providing the service. You don't really need to lock yourself into one. It's not required to hitch your wagon to OpenAI, so that if they fail, you fail. It's not necessary. There's heaps of options. Yeah, it seems like even in a failure sense, there would be a lot of infrastructure, right, that needs to be used. And second of all, there are so many good frontier models now.
If one or two of these companies fails, I don't really see the big deal. I mean, I'd be sad, because I like the diversity of models, and they have their strengths and weaknesses at different things. But if OpenAI was to go out of business tomorrow, in my workflow as it stands today, I probably wouldn't notice at all. They could all go out of business tomorrow, and you could leave me with a couple of PCs with big GPUs running, like, Kimi K2 or GLM 4.6, and I could make businesses for the next 50 years. The existing models are so good that you can do so much with them already. And it's the tooling and the implementation of it in businesses that is the work bit now. The models happening to get better just makes it faster to get there and gives you a few more options about what you can do. But the fact is that every single large AI company, including NVIDIA, could go out of business today, and there would still be productivity gains for years from what we've already found. It's like discovering copper or something, and then the guys digging up the copper all die of gas or something, and then you're like, there's this copper sitting around, but the bubble has burst, so what's the point? You know? But you've still got the copper. Go use it. Build stuff. Make swords, make shields, whatever else you make with copper. Big churches. Copper churches. They exist. Yeah. I think there's so much to be gained from the technology, and it's here to stay. And I kind of get the feeling some of it's wishful thinking from people who are just resistant to change and want it to go away. Like, oh, it's just a bubble. It's just a fad. It's going to disappear. Like when Bitcoin was going up and you didn't own any. You're like, oh, I hope this bloody thing just bursts and everyone loses all their money. Yeah, exactly. I think it's very similar. And then there are all these people out there saying, oh, just like Bitcoin, you know, there's no utility with AI. And what I struggle with in a lot of this commentary is that it's just so different from my day-to-day lived experience with the technology, using it all day, every day, for everything. I'm like, okay, yeah, there's no utility at all. Yeah, exactly. I think that's the thing. It's probably a little bit of ignorance when people are talking about this stuff, or just not wanting to believe, or maybe the numbers don't quite make sense yet. But we've seen over and over that these irrational situations can work themselves out over time. The smart people behind this... it's not like these are idiots and they're just making stuff up. A lot of it's real, and yeah, they get overexcited and a little bit ahead of themselves and stuff like that. But how else do you get to that point? You sort of have to be a bit like that. You have to be optimistic. You have to believe. You have to think, oh, if we did this and we did this, then this could happen. It's those kinds of people who get to those positions, and therefore it's always going to be a little bit overhyped. Do you know what I think happened? I think everyone got excited about ChatGPT. Enterprises were like, we want to be AI-first, which I totally get. Like, we want the benefits of this technology in our organization. A bunch of people went out and bought Copilot licenses and ChatGPT Enterprise licenses. And then they're like, cool, we did it. We did AI. AI is done. And then we gave our developers Cursor accounts. AI is done. We're done here.
Like, we're an AI-first organization. And they saw very little benefit, because they didn't invest in training, or figure out how to adopt it, or figure out what processes it can improve in the organization. And that, it seems to me, is what happened. And then a lot of them are like, oh, we give up. It's not great. It's got to be that, right? I don't understand how else it could have played out. That's at least what we hear from time to time talking to businesses. Like, oh, we bought everyone Copilot licenses, and that, you know, that was the thing. Remember in The Big Short, the movie, there's that moment where he talks to the guy at the sushi restaurant, and he's asking him how he does the loans, and the guy is boasting about how he's layering these loans on these loans and all this sort of stuff? And then the guy from the office is like, oh my God, this thing's a bubble, and, you know, it's all going to collapse. I feel like I have that moment every time I see someone discover: hang on, if I can get all of my data into this MCP and into the AI models, then I'm going to be able to do this. And I just saved a week's work, or two weeks' work, or everyone could use this. I have the opposite feeling. I'm like, wow, this is going to happen for millions of people. Millions of people are going to have this moment where they're like, hang on a second, all this stuff I do all day... if I can get the information in, in the right way, to the AI, it's going to be able to do it all for me. And the second you see that with your own stuff, it's like that moment in the movie, but in reverse, where you're like, this is going to be so big that it'll be part of everything. Yeah, and I think that was my point earlier about what I see on the ground. You see people's faces light up. You see the opportunity it creates. You see how empowering it is. And you do get that feeling. You get the feeling again that you first had when you maybe first used something like ChatGPT. Or, I talked about the experience of losing my FSD virginity last week and how I felt. It's that same feeling. Is that what someone meant? I saw in the comments about you losing your virginity, and I'm like, that's weird. I don't recall talking about that on the podcast. Yeah, yeah. I think I compared it to that at one point. But it's interesting. In that MIT study that criticized AI, saying it doesn't work, one of the things they cited was poor data quality. It said organizations lack AI-ready data, data sits inside legacy systems, and teams underestimate the effort to clean and integrate it. And I think this is the thing, right? You don't have to integrate it or clean it up. You can just use MCPs to connect this data in. So this poor-data-quality problem early on... I would assume a lot of the solutions that were adopted didn't have good connectivity. This protocol probably didn't even exist. They didn't have the tooling that's now available today, with asynchronous tasks, being able to let the AI go off and do work across that data. So, yeah, I think a lot of this data, too, is just blatantly wrong, and can be completely misread. Like, oh, all the pilots are failing, so therefore the whole industry is doomed. I'll tell you the real gatekeepers to the future of AI: it's the IT dudes in companies who control the databases and access to them, and access to the internet for the MCPs. They truly have the power.
These are the people you need to win over, because they're the ones who can help you get your MCPs live with your real data, and unless you can convince them, you're never going to get anything. So I would suggest, if you're in an organization, become really, really good friends with the IT department and make sure they understand... or trick them and lie to them, or become really good friends with them, because you're going to need their help. So one other thing I wanted to talk about on the show was code execution with MCP: building more efficient agents. We talked about this last week, the week before, the week before that, the week before that, I think. Sorry. But one of the problems with MCPs is that when you have a lot of MCPs connected... we talked about it last week with the GitHub MCP as our example. When you have a number of different tools, like tools for everything in the MCP, and a number of these enabled, they take up a huge amount of the context window. So think about that as pasting a huge chunk of text into every query before you even say hi. There's a huge amount of data that has to be sent to the model, just for you to say hi, in case you might want to call a bunch of tools. And Anthropic released this post on their engineering blog about code execution with MCP, to build more efficient agents and basically reduce the token burn, the expenditure, of MCPs. And of course we got linked to this like a thousand times, because our audience, especially in Sim Theory, really wants to reduce any excess token use possible. But you had some thoughts around the... I've got several thoughts. Firstly, I don't agree that writing code with the model and then executing it is the solution. I don't think that's a good solution. There are a few ways you can do it. Firstly, you can just use a really cheap model to filter the tools down. So you say, okay, they said hi; here's my list of 100 tools; which ones might be relevant to this question? And the answer, with hi, is that none of them are. And if they've said, oh, you know, check this file out of my GitHub repo, then okay, I'll limit it just to the GitHub tools. Now, that can be done with an incredibly cheap local model that's super fast. And so you actually reduce the tools right down before you even send them to the AI. And that's really good. So I think that's one technique; there's a rough sketch of it below. We use that to some degree, but I tend to ease off using it for the second reason I have, which is that the models are really smart. You want to use them for their intelligence. You want them to have a holistic picture of everything they're able to do, or at least I do. And so what I don't want to do is put blinkers on and limit what the model can see, so that it's not aware it has all these other abilities because some filtering process has happened. What I'm really relying on is that the models are going to get better in terms of context window, and cheaper, and faster. And I truly believe there are going to be huge evolutions here. I mean, you already mentioned GPT-5.1 might be faster and cheaper, and I think the providers are going to realize that. They're going to realize that people just want to dump everything in and have the model be smart. I think that's really where we need to see advancements. Yes, you can use these techniques, but my opinion is that it's a premature optimization.
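Here is a rough sketch of the cheap-model tool-filtering technique just described. `call_cheap_model` is a hypothetical placeholder for whatever small, fast model you route through (local or hosted), and the MCP tool entries are simplified to dictionaries with a `name` field; this is an illustration of the idea, not a real Sim Theory or MCP API.

```python
import json

# Hypothetical stand-in for a small, fast, cheap model (local or hosted).
# Swap in a real client; the point is only that this call happens BEFORE
# the expensive frontier-model call, not instead of it.
def call_cheap_model(prompt: str) -> str:
    raise NotImplementedError("plug in your cheap model here")

def filter_tools(user_message: str, tools: list[dict]) -> list[dict]:
    """Ask a cheap model which registered MCP tools could plausibly be
    needed for this message, so only those schemas reach the big model."""
    names = [t["name"] for t in tools]
    prompt = (
        "User message:\n" + user_message + "\n\n"
        "Available tools: " + ", ".join(names) + "\n"
        "Reply with a JSON array of the tool names that might be needed. "
        "Reply [] if none are relevant (e.g. the user just said hi)."
    )
    try:
        keep = set(json.loads(call_cheap_model(prompt)))
    except Exception:
        keep = set(names)  # on any failure, fall back to sending everything
    return [t for t in tools if t["name"] in keep]

# "hi" should come back as [], so the main model's context window is not
# stuffed with 100 tool schemas; "check this file in my GitHub repo"
# should come back with just the GitHub tools.
```

The design trade-off is exactly the one raised next: the filter saves context, but it also blinkers the main model, which no longer knows about tools the filter dropped.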
Do you think it's like RAG? Like, wasn't RAG really a premature optimization in a lot of ways? I strongly agree. We noticed, from real-world experience with Sim Theory, that the problem with RAG was people were not happy with the answers. The system couldn't quote verbatim from their source text, for example. It couldn't identify the specific things they asked about. It was sort of like vibe-docking, where, okay, it's got a general idea of what you're talking about, but it doesn't actually know. And I think it's similar with the tool calling. I feel like, okay, sure, yes, there are ways to optimize it if saving money is your goal. But I don't think the goal of using AI should be to save money. I think the goal should be: what can I produce with this? And I understand not being wasteful, and I understand not everyone can afford it. So there are cases where, yes, these should be options you have: I want to work in a token-efficient way, and therefore filter things, or use Anthropic's technique and write code to figure out what we're going to do, come up with a plan, and reduce things. But there's also that purist experience: I want to see what this model is capable of if I throw everything at it and go full bore into my task. And so I think this is an area where we're going to look at things like what we call skills. Other people might call it agentic use, where you might have specialist agents that only have a smaller mix of tools. And it's like, okay, they've asked a finance question; let's consult the finance agent and ask it what it thinks. And then it has its set of tools, encapsulated and limited to that finance area. But the orchestration agent is able to call on its various experts that have their own sets of tools, rather than it having access to absolutely everything; there's a rough sketch of this pattern below. So I think there are actually going to be different models of working with agents and agentic systems and skills that are far superior to just trying to reduce token usage in your primary LLM calls. I just don't think the energy spent there is worthwhile. I think we sort of already know the path to getting more out of the models: the way they work together, the way they're able to plan and accomplish tasks. I think that should be the focus, not, how can we halve the amount of tokens we use per request when chatting with an assistant? That's not where I want to be spending my time. Anyway, I do think, too, that to a large extent it's a very solvable problem of just teaching the user. Like, if you're just chatting with a model, turn off the skills. Just turn off the MCPs, right? If you have a specialist thing... like, I was working on a party invite last night for my son's birthday, and I wanted to do a character-reference image so he looked like Batman on the invite, a really realistic one. And so I have an image-edit assistant with just the image tool as the MCP, and with Haiku, my preferred model for iterating with an MCP. And I just switched to the image-edit assistant, because that's what I wanted to use, and that's very token-efficient. If I'm working on support, I have my Help Scout, I have my Sim MCP that allows us to interact with the system. And so, yeah. And so they're sort of saying, okay, well, we can dynamically create that situation for you by writing code. But the downside of that is, firstly, the models, despite everything everyone says, are not really great at tasks like that. Yeah, they can do them, but they're inconsistent. And also, it's freaking slow, you know? It adds all this extra time.
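The orchestrator-plus-specialists pattern mentioned above might look something like this rough sketch. Everything here is hypothetical: `run_model` stands in for a real LLM call, and the agent and tool names are made up for illustration; none of this is a real Sim Theory API.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a real LLM call made with a scoped tool set.
def run_model(system_prompt: str, tools: list[str], task: str) -> str:
    raise NotImplementedError("plug in a real model call here")

@dataclass
class SpecialistAgent:
    """An agent that carries only the small set of tools for its domain,
    instead of every MCP tool schema in the workspace."""
    name: str
    system_prompt: str
    tools: list[str] = field(default_factory=list)

    def handle(self, task: str) -> str:
        return run_model(self.system_prompt, self.tools, task)

class Orchestrator:
    """Routes each task to the right specialist, so its own context never
    has to hold every tool definition at once."""
    def __init__(self, agents: dict[str, SpecialistAgent]):
        self.agents = agents

    def dispatch(self, domain: str, task: str) -> str:
        return self.agents[domain].handle(task)

# Invented agent and tool names, purely for illustration:
finance = SpecialistAgent("finance", "You answer accounting questions.",
                          ["ledger_lookup", "spreadsheet_read"])
images = SpecialistAgent("images", "You edit and generate images.",
                         ["image_edit"])
team = Orchestrator({"finance": finance, "images": images})
# team.dispatch("finance", "Summarise last quarter's invoices")
```

Each specialist stays token-efficient because it only ever sees its own tool schemas, while the orchestrator keeps the holistic picture at a much coarser, cheaper level.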
Like, you don't want to be sitting there working on that, waiting for Anthropic to write and execute code in some container, and then come back and be like, oh, I know, I'll use the image-edit tool. Yeah. But I've saved you 40 cents in tokens, Mike. Amazing. You know? Stuff like that... it's simply not worth it. And I agree. If you want to have the magical experience of all the tools switched on and I-don't-care-how-many-tokens-I-burn, I just want it to use its own mind correctly, the way to do it is still to stuff it all into the context and use a fast model like Haiku, and it works brilliantly. But for those more precise things... and think about when you're working day to day, you have shifts in focus. You might be reviewing a legal document, you might be doing some accounting work, you might be answering emails, or whatever it is. You tend to switch hats. Writing code is different to answering emails. And so just saying to the user: you're an adult, you're a grown-up, you set the modes, you switch the context. I think that's okay for now. This whole "AI has to be this magical AGI black box where it just figures stuff out" is stupid. You're just not getting the benefit out of the model by doing that. Yeah. And I think the next step will be where you have an agentic model where you are setting goals, and then it is able to look at the resources it has available and maybe create those roles, or those sets of MCPs, or those sets of tools, and be like, okay, I'm going to need a finance guy, I'm going to need my document-producing person on this team. Build out your team with the various sets of tools, then set them tasks, and work in that sort of hierarchical fashion. And that'll be token-efficient, faster, and still enable you to have the full set of tools. But that's going to be a little way off, because we need the tooling around that. It's still funny that every answer in LLMs right now is just "get the model to write code," because that's the one thing it's good at. Yeah, exactly. I'm just not a fan of that approach. Yeah, okay, it gives us a little bit of acceleration to get things that you may not be able to do right now, but I just don't think that sitting around writing and executing code all day is really the answer to most tasks. Okay, so there was a new model release: Kimi K2. Kimi K2, we actually wrote a song about once, because we liked it so much, because it was fast and great at tool calling back when most models were bad at tool calling. This variant is Kimi K2 Thinking. According to them, it's performed really well on some hand-selected benchmarks. It's an agentic reasoning model; this is where they've said it's really great, at text-only with tools. They say it beats Sonnet, Grok, and I don't know which others... that's Sonnet 4.5 Thinking, sorry, I should say... and GPT-5 in agentic coding. They gave some examples, like a component-heavy website, where it was able to do a Microsoft Word clone, all the typical vibe-code examples they like to give. I did put it to the test as well, trying to create this black hole simulator, which I thought was pretty cool. It was able to create something in, like, one shot. You know, it's the same sort of vibe-code example. There's a black hole in the middle that's in 3D, and there's matter going into the black hole, and you can increase the mass of the black hole, and then things get sucked in faster. Yeah, I mean, it's a pretty cool example.
Okay, so there was a new model release: Kimi K2. Kimi K2 is the model we actually wrote a song about once, because we liked it so much: it was fast and great at tool calling back when most models were bad at tool calling. This variant is Kimi K2 Thinking. According to them, it performs really well on some hand-selected benchmarks. It's an agentic reasoning model, they say it's really great at text-only work with tools, and they claim it beats Sonnet 4.5 Thinking, Grok, and GPT-5 in agentic coding. They gave examples like a component-heavy website where it built a Microsoft Word clone, all the typical vibe-code examples they like to give. I put it to the test as well, getting it to create a black hole simulator, which I thought was pretty cool. It produced something in one shot, the same sort of vibe-code example: a black hole in the middle, in 3D, with matter falling in, and you can increase the mass of the black hole so things get sucked in faster. Yeah, it's a pretty cool example.

I would say it's not that representative of what would actually happen, but I don't think anyone's that sure. I also gave it a horse bet to do, because that's always my test of a model: can it win me money? It was interesting. It picked a value play, a 625-to-one horse, which lost. But then it said, just in case that one loses, bet on this one, it's the most likely to win. And that one did win. So I bet $50 total and won $70: a $20 profit, which paid for the model too. Passed the investment test. Yeah, that's right. If this winning streak continues, I'll be so rich by the next episode I can help OpenAI out with their funding gap. I love how you almost said OpenAI like the song. OpenAI, like the singing models do.

So it has a 256K context window and a 16K max output. Is that 16,384? That's right, 16,384 tokens, with a knowledge cutoff of June 2025. Here's my initial assessment from playing around with it: it seems like a great coding model. I haven't tested it extensively, but in the limited testing I've done, it really does excel at code. In terms of using MCPs and agentic workflows, which they harp on about in the blog post, it's absolutely, stupidly, shockingly bad. And I don't know if that's on us, like we haven't tuned the model correctly in Sim Theory. But here's the counter-argument I made to you that I'll now share on the show: any other frontier model we plug in, like Sonnet 4.5, Haiku, GPT-5, or Gemini, takes a little bit of tuning, but that's more to get it really good. They work first go; they're very plug and play. There's something about Kimi K2 and these Chinese models where they say they're really good at this, but when you plug them in, they're not.

It's possible that with the right tuning, working directly and specifically with the model, you might get to where their benchmark scores are. But you're right: most people want a general-purpose model they can drop into their existing workflow and have it be better or cheaper or faster, whatever that model's advantage is. And here's the thing, speaking specifically as a Sim Theory user: why would I use this over Haiku if I'm working with MCPs? No point. They're both available without burning through your tokens. Why would I touch it? There's maybe a case for a bit of code, but when I've got access to GPT-5, Gemini 2.5 Pro, Claude Sonnet 4.5, even Haiku for basic coding stuff, again without burning tokens, why would I touch it? The only good use case is that it might coincidentally be better at horse racing. Just by total fluke. That's what I always hope: I'll finally find the model that gets it right every time.

Yeah. With some of these models, the gap between the claims and the reality makes it exhausting to even test them. Who would take the time to verify the benchmarks are accurate? They can put anything there; they can say anything they want. No one is going to spend all day analyzing it and going, "you know what, they're off by a factor of 0.03 here, this is a travesty." Yeah, this model feels a bit Temu-y again, like the Temu version of Claude.
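On the plug-and-play point: with an OpenAI-compatible endpoint, dropping a new model into an existing workflow really is just swapping a base URL and a model ID. Here's a sketch assuming an aggregator such as OpenRouter; the base URL and model identifier are assumptions to check against the provider's docs, not guaranteed values:

```python
# Swapping a new model into an existing OpenAI-client workflow.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed provider; any OpenAI-compatible host works
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="moonshotai/kimi-k2-thinking",      # assumed model ID; verify before use
    max_tokens=16_384,                        # the model's stated max output
    messages=[{"role": "user", "content": "In one sentence, what is an accretion disk?"}],
)
print(resp.choices[0].message.content)
```

Whether the model then behaves well with your MCPs is exactly the part this doesn't guarantee; the swap is cheap, the tuning isn't.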
And yeah, have a look at this example and ask yourself whether you'd use it as your life coach. Thinking it may or may not give me good advice, I asked it for some. First I said: research current commentary around an AI bubble and form a view on the current state of the market. So it hits Google and searches for "2024 AI bubble market commentary analysis", taking it literally in a way no other model would. Like analyzing detergent bubbles or something? No, the economic bubble around AI. Come on, that's pretty clear. And then its next tool call is to search Google for today's weather information, and then it searches for weather conditions, weather information, and so on and so forth.

Okay, listen to this. I said: I need some life advice, what should I do today? It says: given it's 11:09 on a Friday morning in Sydney, here's what you should do. Take 30 minutes to clear your inbox. Make a concrete weekend plan: book a restaurant for tonight or plan a morning hike. Text three friends right now and organize a spontaneous Friday evening drink. Go to a movie this afternoon. Wow, so it's all about relaxation, not productivity. This could ruin the whole world if people take this model seriously.

Yeah, anyway, I think it's one of those models where, why bother? No one will care about it. People would only care if we didn't add it to Sim Theory; other than that, no one will ever use it. And here's what will happen: we'll add it, it spikes, and then nothing. It's fine; that's why we do it. And I do want to play around with it a little more.

Their blog post does claim, though, that Kimi K2 Thinking can execute up to 200 to 300 sequential tool calls without human interference, reasoning coherently across hundreds of steps to solve complex problems. I'd like to tune it and test for that, because right now it's checking the weather 200 times. Isn't that helpful? And notice the claim doesn't say the calls accomplish different tasks or reach some goal; it just says the model is able to keep going.
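For what that claim means mechanically: "sequential tool calls without human interference" is a harness loop that keeps feeding tool results back to the model until it stops asking, usually with a hard step cap so a confused model can't actually check the weather 200 times. A minimal sketch against an OpenAI-compatible tools API; the single stubbed weather tool and the model name are placeholders for a real tool layer:

```python
# A bounded agent loop: call model, execute requested tools, feed results back, repeat.
import json
from openai import OpenAI

client = OpenAI()
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Stub standing in for a real MCP/tool implementation.
    return json.dumps({"city": args.get("city"), "temp_c": 21})

def agent_loop(prompt: str, max_steps: int = 20) -> str:
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        msg = client.chat.completions.create(
            model="gpt-4o-mini",   # arbitrary model choice for the sketch
            messages=messages,
            tools=TOOLS,
        ).choices[0].message
        if not msg.tool_calls:
            return msg.content     # model answered without asking for a tool
        messages.append(msg)       # keep the assistant turn in the transcript
        for call in msg.tool_calls:
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": run_tool(call.function.name, json.loads(call.function.arguments)),
            })
    return "stopped: hit the step cap"  # the guard that bounds those 200-300 calls
```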
All right, that'll do us this week. Any final thoughts? Well, we actually defended Sam Altman for about twenty minutes. And on the bubble: we could be in a bubble.

Yeah, here are my thoughts: before you knock something, try it. Get out there and use this stuff. It's exciting and fun and good, and it'll help you. It's not something to worry about like the big bubble markets; it doesn't matter, just use it. All right, I couldn't help myself: I did put together a song this week about the bubble that could be. It's called "In the Middle of the Bubble", and I'll play it as we roll out of the show. We'll see you next week. Thanks again for listening. Goodbye.

Balance sheet dreams, money moving like a metronome
Round-trip cash flies out and boomerangs home
Nvidia to OpenAI, or in between
Oracle spins up racks, neon power green
But future revenue, everybody cares
Same hundred dollars doing laps for years
But we can't light the racks when the grid says no
Thirty percent idle while the charts still grow
CFO says capex, onward go
Sign that line, watch the numbers flow

In the middle of the bubble
I feel it in the floor
Four on the floor while we're buying more
In the middle of the bubble
Money, I'm sure I spend
Circular love, round out, back in
In the middle of the bubble
We're all right so thin
We keep the baseline pumping while the margins wear thin

In the middle, circles all
Side-chain my doubts up to the kick drum
Repatriate the cash, where the tax cuts go
Credit stacked high, say build and grow
Paused our buybacks, pushed Ben to the brim
H200 pre-orders on a whim
Data's running dry, test-time compute
Make the model think longer, cost reboot
Pilot graveyard lining up the hall
Five percent flying while the rest just stall
Flow first, hear the winners call
Redesign the work or the bots will fall

In the bubble, bubble, I feel it in the floor
Four on the floor, I would buy more
In the middle of the bubble, bubble
Money on short spin
Circular love, round out, back in
In the middle of the bubble, bubble
We're all white so thin
We keep the baseline pumping while the margins wear thin

Hands up, warehouse lights
Phantom midnight megabytes
Power's tight, profit's light
Still we dance through fiscal night
Loop it, book it, lease it, loop it
Mark it up and re-execute it
GPU glow, but we can't quite use it
Round and round and round, don't lose it

Michael Burry on the line, check your leverage, friend
If the credits fade away, does the music end?
I'm praying for a metric I can finally defend
Even on the beat that we can't recommend
We're dancing on the trouble

In the middle of the bubble
I feel it in the floor
Four on the floor, I would buy more
In the middle of the bubble, bubble
Money on short spin
Circular love, round out, back in
In the middle of the bubble, bubble
Finds that scent and grin
If the loop unwinds, will the melody win?
I chain my fears to a kick so subtle
'Cause I'm still dancing in a love and bubble
Related Episodes

4 Reasons to Use GPT Image 1.5 Over Nano Banana Pro
The AI Daily Brief
25m

GPT-5.2 Can't Identify a Serial Killer & Was The Year of Agents A Lie? EP99.28-5.2
This Day in AI
1h 3m

ChatGPT is Dying? OpenAI Code Red, DeepSeek V3.2 Threat & Why Meta Fires Non-AI Workers | EP99.27
This Day in AI
1h 3m

The 5 Biggest AI Stories to Watch in December
The AI Daily Brief
26m

Claude 4.5 Opus Shocks, The State of AI in 2025, Fara-7B & MCP-UI | EP99.26
This Day in AI
1h 45m

Exploring OpenAI's Latest: ChatGPT Pulse & Group Chats
AI Applied
13m