

Is AI Stalling Out? Cutting Through Capabilities Confusion, w/ Erik Torenberg, from the a16z Podcast
The Cognitive Revolution
What You'll Learn
- ✓Separating the question of AI's impact from the analysis of capabilities advances
- ✓Disagreeing with the claim that progress has stalled, citing continued advancements in areas like context windows, interactive voice, reasoning, vision, and tool use
- ✓Expecting labor market impacts to be undeniable by 2027-2028 as AI models make important contributions to science
- ✓Concerns about recursive self-improvement and the need for adequate controls
- ✓Encouraging everyone to get involved in shaping the future of AI development
AI Summary
The podcast discusses the debate around whether AI progress is slowing down or stalling out, as argued by Cal Newport. The host acknowledges Newport's valid concerns about the negative impacts of AI on cognitive performance and development, but disagrees with the claim that AI capabilities are not advancing. The host argues that while there may be issues with the real-world impact of AI, the technical capabilities continue to improve, as evidenced by advancements in areas like language models, reasoning, and multimodal integration. The discussion covers topics like labor market disruption, AI protectionism, and the need for a positive vision for the future of AI development.
Key Points
- 1Separating the question of AI's impact from the analysis of capabilities advances
- 2Disagreeing with the claim that progress has stalled, citing continued advancements in areas like context windows, interactive voice, reasoning, vision, and tool use
- 3Expecting labor market impacts to be undeniable by 2027-2028 as AI models make important contributions to science
- 4Concerns about recursive self-improvement and the need for adequate controls
- 5Encouraging everyone to get involved in shaping the future of AI development
Topics Discussed
Frequently Asked Questions
What is "Is AI Stalling Out? Cutting Through Capabilities Confusion, w/ Erik Torenberg, from the a16z Podcast" about?
The podcast discusses the debate around whether AI progress is slowing down or stalling out, as argued by Cal Newport. The host acknowledges Newport's valid concerns about the negative impacts of AI on cognitive performance and development, but disagrees with the claim that AI capabilities are not advancing. The host argues that while there may be issues with the real-world impact of AI, the technical capabilities continue to improve, as evidenced by advancements in areas like language models, reasoning, and multimodal integration. The discussion covers topics like labor market disruption, AI protectionism, and the need for a positive vision for the future of AI development.
What topics are discussed in this episode?
This episode covers the following topics: Large language models, AI capabilities, AI impact on society, Labor market disruption, AI safety and control.
What is key insight #1 from this episode?
Separating the question of AI's impact from the analysis of capabilities advances
What is key insight #2 from this episode?
Disagreeing with the claim that progress has stalled, citing continued advancements in areas like context windows, interactive voice, reasoning, vision, and tool use
What is key insight #3 from this episode?
Expecting labor market impacts to be undeniable by 2027-2028 as AI models make important contributions to science
What is key insight #4 from this episode?
Concerns about recursive self-improvement and the need for adequate controls
Who should listen to this episode?
This episode is recommended for anyone interested in Large language models, AI capabilities, AI impact on society, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Erik Torenberg joins to debate whether recent developments suggest AI progress is slowing down or stalling, addressing arguments from Cal Newport and others. Nathan counters this view by highlighting significant qualitative advances, including 100X context window expansion, real-time interactive voice, improved reasoning, vision, and AI's growing contributions to hard sciences. The conversation then covers AI's impact on the labor market, the potential for AI protectionism, and concerns about recursive self-improvement. This episode argues that AI capabilities are not stopping, with frontier developers seeing a clear path for continued rapid progress in the coming years. Sponsors: Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Linear: Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (03:49) Is AI Slowing Down? (09:15) Newport's Scaling Law Theory (16:56) The Value of Reasoning (Part 1) (17:17) Sponsors: Tasklet | Linear (19:57) The Value of Reasoning (Part 2) (24:52) Explaining GPT-5's Vibe Shift (31:39) AI's Impact on Jobs (Part 1) (36:50) Sponsor: Shopify (38:47) AI's Impact on Jobs (Part 2) (44:08) Recursive Self-Improvement via Code (49:35) The Future of Engineers (53:24) Economic Pressure vs. Protectionism (58:29) Progress Beyond Language Models (01:07:11) The State of AI Agents (01:19:19) China's Open Source Models (01:29:44) A Positive Vision Forward (01:37:39) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Full Transcript
Hello, and welcome back to The Cognitive Revolution. Today, I'm excited to share a recent conversation I had with Eric Torenberg, which originally aired on the A16Z podcast, about whether recent developments suggest that AI progress is slowing down or even stalling out. We begin with a discussion of recent arguments from Cal Newport, from The New Yorker, where he argued that progress on large language models has stalled, and from a recent episode of the podcast Lost Debates, where he highlighted, among other things, the negative impact that AI can have on students' learning. Now, while I absolutely share Cal's concerns about AI-enabled bad habits, like cognitive offloading, and more broadly question whether AI will ultimately prove to be good or bad for humanity overall, I think it's important to separate the question of impact from the analysis of capabilities' advances. And on the capabilities point specifically, while OpenAI's naming decisions have caused a lot of confusion. I point to the 100x expansion of context windows, the introduction of real-time interactive voice modes, the improvement in reasoning capabilities and the resulting IMO gold medals and other accomplishments, the dramatic improvements in vision and tool use, including general computer use, and the fact that today's frontier models are beginning to contribute to the hard sciences to argue that we have, in fact, seen qualitative advances. And in fact, while the capabilities frontier does remain jagged and embarrassing failures are still fairly common. Every aggregate measure, from the volume of tokens processed, to the size of the tasks that AI can handle, to the revenue growth we are seeing across the industry, suggest that overall, progress remains pretty much right on trend. From there, we go on to discuss my expectations for AI's impact on the labor market, and why I think that some verticals, like accounting, where people really only want to buy what they absolutely have to have, will be most disrupted, while areas like software engineering might for a while at least, maintain employment by dramatically expanding output. How advances in multimodality as they move beyond text and image and begin to deeply integrate reasoning models with specialist models in domains like drug development, material science, and robotics suggest a sort of base case for superintelligence. The possibility of AI protectionism, such as the proposed ban on self-driving cars that Senator Josh Hawley recently floated, and the possibility of a broader AI culture war. My concerns about recursive self-improvement and companies tipping into that regime without adequate controls. How much it matters that China now produces the world's best open source models, and why, though they probably will have a real impact on China's AI sector, I remain skeptical that chip export controls will make the world a better place. And finally, considering that the scarcest resource is a positive vision for the future, why I encourage everyone, regardless of their technical ability or cognitive profile, to get involved in shaping the future of AI development. The bottom line for me is that AI capabilities advances have not stopped, and I don't expect them to stop for the foreseeable future. Frontier developers report a clear line of sight to at least two more years of similar progress, and their optimism is well-supported by the last five years of history, which shows that over and over again, AI weaknesses that were expected to be hard to overcome have in practice been solved through continued scaling, plus relatively minor tweaks to the core paradigm. By 2027 or 2028, I think labor market impacts will be undeniable, and it will be clear for all to see that AI models are making important contributions to science. And while there are indeed some investment bubble dynamics, and some new model releases between now and then will surely disappoint, the most dangerous thing we could do is convince ourselves that we don't have anything major to worry about. With that, I hope you enjoy this discussion about recent AI capabilities advances and what's coming next with Eric Tornberg from the A16Z podcast. Nathan, I'm stoked to have you on the A16Z podcast for the first time. Obviously, I've been podcast partners for a long time with you leading Cognitive Revolution. Welcome. It's great to be here. Thank you. So we were talking about Cal Newport's podcast appearance on Lost Debates, and we thought it was a good opportunity to just have this broad conversation and really entertain this sort of question of, is AI slowing down? So why don't you sort of steel man some of the arguments that you've heard on that side, either from him or more broadly, and then we can sort of have this broader conversation. Yeah, I mean, I think for one thing, it's really important to separate a couple different questions, I think, with respect to AI. One would be, is it good for us, you know, right now even, And is it going to be good for us in the big picture? And then I think that is a very distinct question from are the capabilities that we're seeing continuing to advance and, you know, at a pretty healthy clip. So I actually found a lot of agreement with the Cal Newport podcast that you shared with me when it comes to some of the worries about the impact that AI might be having even already on people. You know, he goes over, looks over students' shoulders and watches how they're working and finds that basically he thinks that they are using AI to be lazy, which is, you know, no big revelation. I think a lot of teachers would tell you that. Shocker. He puts that in, yeah, puts that in maybe more dressed up terms that people are not even necessarily moving faster, but they're able to reduce the strain that the work that they're doing places on their own brains by kind of trying to get AI to do it. And, you know, that continues. And I think, you know, he's been, I think, a very valuable commenter on the impact of social media. Certainly, I think we all should be mindful of how is my attention span, you know, evolving over time? And am I getting weak or, you know, averse to hard work? Those are not good trends if they are showing up in oneself. So I think he's really right to watch out for that sort of stuff. And then as we've covered it, you know, many conversations in the past, I've got a lot of questions about what the ultimate impact of AI is going to be. And I think he probably does too. But then when it comes to, it's a strange move from my perspective to go from, you know, there's all these sort of problems today and maybe in the big picture too, but don't worry, it's flatlining, like kind of worry, but don't worry because it's not really going anywhere further than this, or it's, you know, scaling is kind of petered out or, you know, we're not going to get better AI than we have right now. Or even maybe the most easily refutable claim from my perspective is GPT-5 wasn't that much better than GPT-4. And that I think is where I really was like, what, wait a second. You know, I was with you on a lot of things and some of the behaviors that he observes in the students, I would cop to having exhibited myself. You know, when I'm trying to code something these days, a lot of times I'm like, oh man, can't the AI just figure it out? You know, I really don't want to have to sit here and read this code and figure out what's going on. It's not even about typing the code anymore. You know, I'm way too lazy for that, but it's even about like figuring out how the code is working. Can't you just make it work? Try again, you know, just try again. And I do find myself at times falling into those traps, but I would say a big part of the reason I can fall into those traps is because the AIs are getting better and better. and increasingly it's not crazy for me to think they might be able to figure it out. So that's my kind of first slice at the takes that I'm hearing. There's almost like a two by two matrix maybe that one could draw up where it's like, do you think AI is good or bad now and in the future? And do you think it's like not a big deal or a big deal? And I think it's both on the good and bad side. I definitely think it's a big deal. the the thing that I struggle to understand the most is the people who kind of don't see the big deal that it seems pretty obvious to me and the you know especially when it comes again to the the leap from gpt4 gb5 um maybe one reason that that's happened a little bit is that there were just a lot more releases between gpt4 and 5 so what people are comparing to is you know something that just came out a few months ago like 03 right that only came out a few months before GPT-5. Whereas with GPT-4, it was, you know, shortly after ChatGPT, and it was all kind of this moment of like, whoa, this thing is like exploding onto the scene. A lot of people are seeing it for the first time. And if you look back to GPT-3, you know, there's a huge leap. I would contend that the leap is similar from GPT-4 to 5. These things are hard to score. There's no, you know, single number that you could put on it. Well, there's loss. But of course, one of the big challenges is that like, what exactly does a loss number translate into in terms of capabilities. So, you know, it's very hard to to describe what exactly has changed. But we could go through some of the dimensions of change if you want to and, you know, enumerate some of the things that I think people maybe are starting to have come to take for granted and kind of forget like that GPT four didn't have a lot of the things that now, you know, were sort of expected in the GPT five release, because we'd seen them in four, oh, and oh, one and oh, three, and all those, you know, things sort of, you know, maybe boiled the frog a little bit when it comes to how much progress people perceived in this last release. Well, yeah, a couple reactions. So one is, and even to complicate your two by two, even further in the sense of, you know, is it bad now versus is it bad later? Like Cal is not really, you know, who we both admire, by the way, a lot. Cal's a great guy and a valuable contributor to the thought space, but he's not as concerned about sort of this sort of future AI concerns that, you know, sort of the safety folks and many others are concerned about. He's more concerned about, you know, what it means to life for, you know, cognitive performance and development now in the same way that he's worried about, you know, social media's impact. And, you know, you think that's a, you know, a concern, but nowhere near as big a concern as what to expect in the future. And then also, he presents sort of this theory of why we shouldn't worry about the future because it's slowing down and why don't we just share what we how we interpreted kind of his history which as i interpret it was this idea of like hey we figured out the the simplistic version is we've figured out this this way such that if you throw a bunch of data into the model it gets better and sort of order of magnitude and so the difference between gb2 and gpd3 and the gpd3 and gpd4 um but then then that sort of was significant, the difference, but then it achieved sort of a diminishing returns significantly. And we're not seeing it at GPT-5, and thus we don't have to worry anymore. How would you edit the characterization of his view of sort of the history? And then we can get into the differences between four and five. The scaling law idea, which is, you know, it's definitely worth agreeing, taking a moment to note that it is not a law of nature. You know, we do not have a principled reason to believe that scaling is some law that will go indefinitely. All we really know is that it has held through quite a few orders of magnitude so far. I think that it's really not clear yet to me whether or not the scaling laws have petered out or whether we have just found a steeper gradient of improvement that is giving us better ROI on another front that we can push on. So they did train a much bigger model, which was GPT 4.5, and that did get released. And there are a number of interesting, you know, of course, there's a million benchmarks, whatever. The one that I zero in on the most in terms of understanding how GPT 4.5 relates to both O3 and GPT 5. And OpenAI, obviously, famously terrible at naming, we can all agree on that. I think a decent amount of this confusion and sort of disagreement actually does stem from unsuccessful naming decisions. 4.5 on this one benchmark called SimpleQA, which is really just a super long tail trivia benchmark. It really just measures, do you know a ton of esoteric facts? And they're not things you can really reason about. You either just have to know or don't know these particular facts. The 03 class of models got about a 50% on that benchmark, and GPT 4.5 popped up to like 65%. So in other words, it basically, of the things that were not known to the previous generation of models, it picked up a third of them. Now, there's obviously still two thirds more to go, but I would say that's a pretty significant leap, right? These are super long tail questions. I would say most people would get like close to a zero, you know, you'd be like the person sitting there at the trivia night, who like, maybe gets one a night is kind of what I would expect most people to do on simple QA. And that, you know, checks out, right? Like, obviously, the models know a lot more than we do in terms of facts and just general information about the world. So at a minimum, you can say that GBT 4.5 knows a lot more. A bigger model is able to absorb a lot more facts. Qualitatively, people also said, in some ways, maybe it's better for creative writing. You know, it was never really trained with the same power of post-training that GBD5 has had. And so we don't really have an apples-to-apples comparison, but people did still find some utility in it. I think maybe the way to understand why they've taken that offline and gone all in on GPT-5 is just that that model's really big. It's expensive to run. The price was like way higher. It was a full order of magnitude plus higher than GPT-5 is. And it's maybe just not worth it for them to consume all the compute that it would take to serve that. And maybe they just find that people are happy enough with the somewhat smaller models for now. I don't think that means that we will never see a bigger GPT-4.5 model with all that reasoning ability. And I would expect that that would deliver more value, especially if you're really going out and trying to do esoteric stuff that's, you know, pushing the frontier of science or what have you. But in the meantime, the current models are really smart and you can also feed them a lot of context. That's one of the big things that has improved so much over the last generation. When GPT-4 came out, at least the version that we had as public users was only 8000 tokens of context, which is like 15 pages of text. So you were limited. you couldn't even put in like a couple papers, you would be overflowing the context. And this is where prompt engineering initially kind of became a thing. It was like, man, I've really only got such a little bit of information that I can provide. I got to be really careful about what information to provide lest I overflow the thing and it just can't handle it. There were also as context windows got extended, there were also versions of models where they could nominally accept a lot more, but they couldn't really functionally use them. They sort of could fit them at the API call level, but the models would lose recall. They'd sort of unravel as they got into longer and longer context. Now you have obviously much longer context, and the command of it is really, really good. So you can take dozens of papers on the longest context windows with Gemini, and it will not only accept them, but it will do pretty intensive reasoning over them and with really high fidelity to those inputs. So that skill, I think, does kind of substitute for the model knowing facts itself. You could say, geez, let's try to train all these facts into the model. We're going to need, you know, a trillion or who knows, five trillion, however many trillion parameters to fit all these super long tail facts. Or you could say, well, a smaller thing that's really good at working over provided context can. if people take the time or, you know, go to the trouble of providing the necessary information, I can kind of access the same facts that way. So you have a kind of, do I want to push on this size and do I want to bake everything into the model? Or do I want to just try to get as much performance out of a smaller, tighter model that I have? And it seems like they've gone that way. And I think basically just because they're seeing faster progress on that gradient, you know, in the same way that the models themselves are always kind of in the training process, taking a little step toward improvement, you know, the outer loop of the model architecture and the nature of the training runs and where they're going to invest their compute is also kind of going that direction. And they're always looking at like, well, we could scale up over here, maybe get this kind of benefit a little bit, or we could do more post training here and get this kind of benefit. And it just seems like we're getting more benefit from the post training and reasoning paradigm than scaling. But I don't think either one is, I definitely don't think either one is dead. We haven't seen yet what 4.5 with all that post training would look like. Yeah. And so one of the things that you mentioned that Cal, you know, the analysis missed was, was that it way underestimated the value of extended reasoning, right? And so what would it mean to fully sort of appreciate that? Hey, we'll continue our interview in a moment after a word from our sponsors. The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time saver is now a fire you have to put out. Tasklet is different. It's an AI agent that runs 24-7. Just describe what you want in plain English, send a daily briefing, triage support emails, or update your CRM. And whatever it is, Tasklet figures out how to make it happen. Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically. Unlike ChatGPT, Tasklet actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with Tasklet founder and CEO, Andrew Lee. Try Tasklet for free at tasklet.ai and use code COGREV to get 50% off your first month of any paid plan. That's code COGREV at tasklet.ai. AI's impact on product development feels very piecemeal right now. AI coding assistants and agents, including a number of our past guests, provide incredible productivity boosts. But that's just one aspect of building products. What about all the coordination work like planning, customer feedback, and project management? There's nothing that really brings it all together. Well, our sponsor of this episode, Linear, is doing just that. Linear started as an issue tracker for engineers, but has evolved into a platform that manages your entire product development lifecycle. And now they're taking it to the next level with AI capabilities that provide massive leverage. Linear's AI handles the coordination busy work, routing bugs, generating updates, grooming backlogs. You can even deploy agents within Linear to write code, debug, and draft PRs. Plus, with MCP, Linear connects to your favorite AI tools, Claude, Cursor, ChatGPT, and more. So what does it all mean? Small teams can operate with the resources of much larger ones, and large teams can move as fast as startups. There's never been a more exciting time to build products and Linear just has to be the platform to do it on. Nearly every AI company you've heard of is using Linear. So why aren't you? To find out more and get six months of Linear Business for free, head to Linear.app slash TCR. That's Linear.app slash TCR for six months free of Linear Business. Well, I mean, a big one from just the last few weeks was that we had an IMO gold medal with pure reasoning models with no access to tools from multiple companies. And, you know, that is night and day compared to what GPT-4 could do with math, right? And these things are really weird. Like, it's nothing I say here should be intended to suggest that people won't be able to find weaknesses in the models. I still use a tic-tac-toe puzzle to this day where I take a picture of a tic-tac-toe board where one of the players has made a wrong move that is not optimal and thus allows the other player to force a win. And I ask the models if somebody can force a win from this position. Only very recently, only the last generation of models are starting to get that right some of the time. Almost always before they were like, tic-tac-toe is a solved game. You can always get a draw. There's and they would wrongly assess my board position as the player can still get a draw. So there's a lot of weird stuff, right? The jagged capabilities frontier remains a real issue and people are going to find, you know, peaks and valleys for sure. But GPT-4, when it first came out, couldn't do anything approaching IMO gold problems. It was still struggling on like high school math. And since then, we've seen this high school math progression all the way up through the IMO gold. Now we've got the frontier math benchmark that is, I think now like up to 25%. It was 2% about a year ago, or even a little less than a year ago, I think. And we also just today saw something where, and I haven't absorbed this one yet, but somebody just came out and said that they had solved a canonical super challenging problem that no less than Terence Tao had put out. And it was like this, this thing happened in, I think days or weeks of of the model running versus it was 18 months, you know, that it took professional, not just any professional mathematicians, but like really, you know, the leading minds in the world to make progress on these problems. So yeah, I think that's really, you know, that's really hard jumping capabilities to miss. I also think a lot about the Google AI co-scientist, which we did an episode with. We can, you can check out the full story on that if you want to, but, you know, they basically just broke down the scientific method into a schematic, you know, and this is a lot of what happens when people, there's one thing to say, the model will respond with thinking and it'll go through reasoning process. And, you know, the more tokens it spends at runtime, the better your answer will be. That's true. Then you can also build this scaffolding on top of that and say, okay, well, let me take something as broad and, you know, aspirational as the scientific method. And let me break that down into parts. Okay. There's hypothesis generation. Then there's hypothesis evaluation. Then there's, you know, experiment design, there's literature review, there's all these parts to the scientific method. What the team at Google did is created a pretty elaborate schematic that represented their best breakdown of the scientific method, optimized prompts for each of those steps, and then gave this resulting system, which is scaling inference now kind of two ways. It's both the chain of thought, but it's also all these different angles of attack structured by the team. And they gave it legitimately unsolved problems in science. And in one particularly famous kind of notorious case, it came up with a hypothesis, which it wasn't able to verify because it doesn't have direct access to actually run the experiments in the lab. But it came up with a hypothesis to some open problem in virology that had stumped scientists for years. And it just so happened that they had also recently figured out the answer, but not yet published their results. And so there was this confluence where the scientists had experimentally verified and Gemini in the form of this AI co-scientist came up with exactly the right answer. And these are things that like literally nobody knew before. And GPT-4 just wasn't doing that. You know, I mean, these are qualitatively new capabilities. That thing, I think, ran for days. You know, it probably cost hundreds of dollars, maybe into the thousands of dollars to run the inference um you know that's not nothing but it's also like very much cheaper than you know years of grad students and if you can get to those caliber of problems and actually get good solutions to them like you know what would you be willing to pay right for that kind of thing so yeah i don't know that's probably not a full appreciation we could go on for a long time But I would say in summary, GPT-4 was not able to push the actual frontier of human knowledge. I don't, to my knowledge, I don't know that ever discovered anything new. It still not easy to get that kind of output from a GPT or a Gemini 2 or you know a Claude Opus 4 or whatever But it starting to happen sometimes And that in and of itself is a huge deal Well, then how do we explain the bearishness or the kind of vibe shift around GPT-5 then? You know, one potential contributor is this idea that if a lot of the improvements are at the frontier, you know, not everyone is working with, you know, sort of advanced math and physics and a day-to-day. And so maybe they don't see the benefits in their daily lives in the same way that, you know, sort of the jumps in chat GPT were obvious and shaped the day to day. Yeah, I mean, I think a decent amount of it was that they kind of fucked up the launch, you know, simply put, right? They like were tweeting Death Star images, which Sam Altman later came back and said, no, you're the Death Star. I'm not the Death Star. But I think people thought that the Death Star was supposed to be the model that was generally the, you know, the expectations were set extremely high. The actual launch itself was just technically broken. So a lot of people's first experiences of GPT-5, they've got this model router concept now where I think another way to understand what they're doing here is they're trying to own the consumer use case. And to own that, they need to simplify the product experience relative to what we had in the past, which was like, okay, you got GPT-4 and 4.0 and 4.0 mini and O.3 and O.4 mini. and other things, you know, four or five within there at one point, you got all these different models, which one should I use for which it's like very confusing to most people who aren't obsessed with this. And so one of the big things they wanted to do was just shrink that down to just ask your question, and you'll get a good answer. And we'll take that complexity on our side as the product owners to do that. Interestingly, and I don't have a great account of this. But one thing you might want to do is kind of merge the models and figure out just have the model itself decide how much to think, or maybe even have the model itself decide how many of its experts, if it's a mixture of experts architecture, it needs to use. Or maybe there's been a bunch of different research projects on skipping layers of the model. If the task is easy enough, you could skip a bunch of layers. So you might have hoped that you could genuinely on the back end, merge all these different models into one model that would dynamically use the right amount of compute for the level of challenge that a given user query presented. It seems like they found that harder to do than they expected. And so the solution that they came up with instead was to have a router where the router's job is to pick, is this an easy query? In which case, we'll send you to this model. Is it a medium? Is it a hard? And I think they just have two really models behind the scenes. So I think it's just really easy or hard. Certainly the graphs that they showed, you know, basically showed the kind of with and without thinking. The problem at launch was that that router was broken. So all of the queries were going to the dumb model. And so a lot of people literally just got bad outputs, which were worse than 03, because they were getting non-thinking responses. And so the initial reaction of like, okay, this is dumb. And that sort of, you know, traveled really fast. I think that kind of set the tone. What I sense now is that as the dust has settled, most people do think that it is the best model available. And, you know, things like the meter, the infamous meter task length chart, it is the best. We're now over two hours and it is still above the trend line. So if you just said, you know, do I believe in straight lines on graphs or not? And how should this latest data point influence whether I believe on these straight lines, lines on, you know, power logarithmic scale graphs, it shouldn't really change your mind too much. It's still above the trend line. I talked to Zvi about this, Zvi Moshwitz, legendary Infovore and AI industry analyst on a recent podcast too, and kind of asked him the same question. Like, why do you think the, you know, even some of the most plugged in, you know, sharp minds in the space have seemingly pushed timelines out a bit as a result of this? And his answer was basically just, it resolved some amount of uncertainty. You know, you had an open question of maybe they do have another breakthrough. You know, maybe it really is the Death Star. You know, if they surprise us on the upside, then all these short timelines, you know, we could have expected a, yeah, I guess one way to think about it is like the distribution was sort of broad in terms of timelines. And if they had surprised on the upside, it might have narrowed and narrowed in toward the front end of the distribution. And if it if they surprised on the downside or even just were purely on trend, then you would take some of your distribution from the very short end of the timelines and kind of push them back toward the middle or the end. And so his answer was like. AI 2027 seems less likely, but AI 2030 seems basically no less likely, maybe even a little more likely because some of the probability mass from the early years is now sitting there. So it's not that, I don't think people are moving the whole distribution out super much. I think there may be more just kind of shrinking the, you know, it's getting a little tighter because it's maybe not happening quite as soon as it seemed like it might've been. But I don't think too many people, at least that I think are really plugged in on this, are pushing out too much past 2030 at all. And by the way, obviously there's a lot of disagreement. The way I kind of have always thought about this sort of stuff is, Dario says 2027. Demis says 2030. I'll take that as my range. So coming into GPT-5, I was kind of in that space. And now I'd say, well, I don't know. Dario's got, what cards does he have up his sleeve? You know, they just put out 4.1 Opus. And in that blog post, they said, we will be releasing more powerful updates to our models in the coming weeks. So they're due for something pretty soon. You know, maybe they'll be the ones to surprise on the upside this time, or maybe Google will be. I wouldn't say 2027 is out of the question. But yeah, I would say 2030 still looks just as likely as before. And again, from my standpoint, it's like, that's still really soon. So if we're on track, whether it's 28, 29, 30, I don't really care. I try to frame my own work so that I'm kind of preparing myself and helping other people prepare for what might be the most extreme scenarios. and kind of, you know, one of these things where if we aim high and we miss a little bit and we have a little more time, great. I'm sure we'll have plenty of things to do to use that extra time to be ready for, you know, whatever powerful AI does come online. But yeah, I guess I don't, my worldview hasn't changed all that much as a result of these summer's developments. Anecdotally, I don't hear as much about AI 2027 or situational awareness to the same degree. I do talked to some people who've just moved it a few years back to your point. But yeah, Dorkesh had his whole thing around, he still believes in it, but maybe because this gap in continual learning or something to the effect that maybe it's just going to be a bit slower to diffuse. And Meters' paper, as you mentioned, showed that engineers are less productive. And So maybe there's less of a sort of concern around, you know, people being replaced in the next few years in mass. I think when we spoke maybe a year ago about this, I think you said something like 50% of 50% of jobs. I'm curious if that's still your litmus test or how you think about it. Well, for one thing, I think that meter paper is worth unpacking a little bit more because this was one of those things. And I'm a big fan of Meter, and I have no shade on them because I do think, do science publish your results? That's good. You don't have to make every experimental result and everything you put out conform to a narrative. But I do think it was a little bit too easy for people who wanted to say that, oh, this is all nonsense, to latch on to that. and you know again there's there's something there that i would kind of put in the cal newport category too where for me maybe the most interesting thing was the users thought that they were faster when in fact they seem to be slower so that sort of misperception of oneself i think is really interesting personally i think there's some explanations for that that include like hitting go on the agent going to social media and scrolling around for a while and then coming back the thing might have been done for quite a while by the time i get back so honestly one like really simple and we're starting to see this in products one really simple thing that the products can do to address those concerns is just provide notifications like the thing is done now so you know stop scrolling and come back and check its work that in terms of just clock time you know it would be interesting to know like what applications did they have open maybe they took took a little longer with cursor than doing it on their own. But how much of the time was cursor the active window and how much of it was, you know, some other random distraction while they were waiting. Um, but I think a more fundamental issue with that study, which again, wasn't really about the study design, but just in, in the sort of, you know, interpretation and kind of digestion of it, some of these details got lost. the they basically tested the models or the you know the product cursor in the area where it was known to be least able to help this study was done early this year so it was done with you know it kind of one depending on how you want to count right a couple a couple releases ago with code bases that are large which again strange the strains the context window and you know that's one of the frontiers that has been moving, very mature code bases with like high standards for coding and developers who really know their code bases super well, who've made a lot of commits to these particular code bases. So I would say that's basically the hardest situation that you could set up for an AI because the people know their stuff really well, the AI doesn't, the context is huge. People have already absorbed that through working on it for a long time. The AI doesn't have that that knowledge. And again, a couple of generations ago, models. And then a big thing, too, is that the user, the people were not very well versed in the tools. Why? Because the tools weren't really able to help them yet. I think the sort of mindset of the people that came into the study in many cases was like, well, I haven't used this all that much because it hasn't really seemed to be super helpful. They weren't wrong in that assessment, given the limitations. And you could see that in terms of the some of the instructions and the help that the meter team gave to people. One of the things that is in the paper that they would if they noticed that you were like weren't using cursor super well, they would give you some feedback on how to use it better. One of the things that they were telling people to do is make sure you at tag a particular file to bring that into context for the model so that the model has, you know, the right context. And that's literally like the most basic thing that you would do in Cursor. You know, that's like the thing you would learn in your first hour, your first day of using it. So it really does suggest that these were, you know, while very capable programmers, like basically mostly novices when it came to using the AI tools. So I think the result is real, but I just, I would be very cautious about generalizing too much there. Hey, we'll continue our interview in a moment after a word from our sponsors. Pick the wrong one and you might find yourself fighting fires alone. In the e-commerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the United States. From household names like Mattel and Gymshark to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com slash cognitive. Visit shopify.com slash cognitive. Once more, that's shopify.com slash cognitive. In terms of, I guess, what else was the other question? What is the expectation for jobs? I mean, we're starting to see some of this, right? We are definitely seeing no less than like Mark Benioff has said that they've been able to cut a bunch of headcount because they've got AI agents now they're responding to every lead um karna of course is you know has said um you know very similar things for a while now they also i think have been a little bit misreported in terms of like oh they're backtracking off of that because they're actually going to keep some customer service people not none and i think that's a bit of an overreaction like they may have some people who are just you know insistent on having a certain experience and maybe they want to provide that And that makes sense. You know, it doesn't I think you can have a. A spectrum of service offerings to your customers. I once coded up a pricing page for us and I actually just vibe coded up a pricing page for a SaaS company that was like basic level with AI sales and services one price. If you want to talk to human sales, that's a higher price. And if you want to talk to human sales and support, that's a third higher price. And so like literally that might be what's going on, I think, in some of these cases, and it it could very well be a very sensible option for people. But I just I do see the intercom I've got an episode coming up with. They now have this fin agent that is solving like 65% of customer service tickets that come in. So, you know, what's that going to do to jobs? Are there really like three times as many customer service tickets to be handled? Like, I don't know. I think there's kind of a relatively inelastic supply. Maybe you get somewhat more tickets if people expect that they're going to get better, faster answers. But I don't think we're going to see like three times more tickets. By the way, that number was like 55% three or four months ago. So, you know, as they ratchet that up, the ratios get really hard, right? at half ticket resolution. In theory, maybe you get some more tickets, maybe you don't need to adjust headcount too much. But when you get to 90% ticket resolution, you know, are you really going to have 10 times as many tickets or 10 times as many hard tickets that the people have to handle? It seems just really hard to imagine that. So I don't think I don't think these things go to zero probably in a lot of environments. But I do expect that you will see significant headcount reduction in a lot of these places. And the software one is really interesting because the elasticities are really unknown. You know, you can potentially produce x times more software per user or, you know, per cursor user or per developers at your company, whatever. But maybe you want that, you know, maybe there is no limit or no, you know, maybe the, the regime that we're in is such that if there's, you know, 10 times more productivity, that's all to the good and you know we still have just as many uh jobs because we want 10 times more software i don't know how long that lasts again the ratios start to get challenging at some point um but yeah i think the bottle you know the old tyler common thing comes to mind you are a bottleneck you are a bottleneck um i think more often it is are people really trying to get the most out of these things and you know are they using best practices and have they um have they really put put their minds to it or not. You know, often the real barrier is there. I've been working a little bit with a company that is doing basically government doc review. I'll obstruct a little bit away from the details. Really gnarly stuff like scanned documents, you know, handwritten, filling out of forms. and they've created this auditor AI agent that just won a state-level contract to do the audits on like a million transactions a year of these, you know, these packets of documents, again, scanned, handwritten, all this kind of crap. And they just blew away the human workers that were doing the job before. So where are those workers going to go? Like, I don't know. They're not going to have 10 times as many transactions. You know, I can be pretty confident. in that. Are there going to be a few still that are there to supervise the AIs and handle the weird cases and, you know, answer the phones? Sure. Maybe they won't go anywhere. You know, the state may do a strange thing and just have all those people like sit around because they can't bear to fire them. Like who knows what the ultimate decision will be. But I do see a lot of these things where I'm just like when you really put your mind to it, and you identify what would create real leverage for us. Can the AI do that? Can we make it work? You can take a pretty large chunk out of high volume tasks very reliably in today's world. And so the impacts I think are starting to be seen there on a lot of jobs. Humans, I think, are, you know, the leadership is maybe the bottleneck or the will in a lot of places might be the bottleneck. And software might be an interesting case where there is just so much pent up demand perhaps that it may take a little longer to see those impacts because you really do want you know 10 or 100 times as much software what is it yeah let's talk about code because it's you know it's where anthropic made a big bet um early on you know perhaps inspired by the sort of automated researcher you know recursive self-improvement um you know sort of uh you know desired future and then we saw open ai uh make make moves um there as well look when we flesh that out or talk a little about you know what inspired that and where is that going you know utopia or dystopia is really the big question there i think right i mean is maybe one part technical two parts social in terms of why code has been so focal the technical part is that it's really easy to validate code you generate it you can run it if you get a runtime error you can get the feedback immediately it's you know somewhat harder to do do functional testing replit recently just in the last like 48 hours released their v3 of their agent and it now in addition to you know code code code try to make your app work v2 of the agent would do that and it could go for minutes and you know in some cases generate dozens of files and i've had some magical experiences with that where i was like wow you just did that whole thing in one prompt and it like worked amazing other times it will sort of code for a while and hand it off to you and say okay does it look good is it working and you're like no it's not i'm not sure why, you know, you get into a back and forth with it. But the difference between V2 and V3 is that instead of handing the baton back to you, it now uses a browser and the vision aspect of the models to go try to do the QA itself. So it doesn't just say, okay, hey, I tried my best, wrote a bunch of code, like, let me know if it's working or not. It takes that first pass at figuring out if it's working. And, you know, again, that really improves the flywheel, just how much you can do, how much you can validate, how quickly you can validate it. The speed of that loop is really key to the pace of improvement. So it's a problem space that's pretty amenable to the sorts of, you know, rapid flywheel techniques. Second, of course, they're all coders, right, at these places, So they want to, you know, solve their own problems. That's like very natural. And third, I do think on the, you know, sort of social vision competition, uh, who knows where this is all going. They do want to create the automated AI researcher. That's another data point, by the way, from this was from the O3 system card. They showed a jump from like low to mid single digits to roughly 40% percent of prs actually checked in by research engineers at open ai that the model could do so prior to 03 not much at all you know low to mid single digits as of 03 40 i'm sure those are the easier 40 or whatever again there will be you know caveats to that but that's you're entering maybe the steep part of the S curve there. And that's presumably pretty high end. You know, I don't know how many easy problems they have at open AI, but presumably, you know, not that many relative to the rest of us that are out here making generic web apps all the time. So, you know, at 40%, you got to be starting to, I would think, get into some pretty hard tasks, some pretty high value stuff. You know, at what point does that ratio really start to tip where the AI is like doing the bulk of the work? Um, GBD five notably wasn't a big update over Oh three on that particular measure. I mean, it also wasn't going back to the simple QA thing. Um, GBD five is generally understood to not be a scale up relative to four. Oh, and oh three. And you can see that in the simple QA measure. It basically scores the same on these long tail trivia questions. It's not a bigger model that has absorbed like lots more world knowledge. Um, it is, it is, you know, Cal is right. I think it is analysis that it's it's post training, but that post training, you know, is potentially entering the steep part of the S curve when it comes to the ability to do even the kind of hard problems that are happening at at OpenAI on the research engineering front. And you know, yeah, so I'm a little worried about that. Honestly, the the idea that we could go from these companies having a few hundred research engineer people to having, you know, unlimited overnight. And like, what would that mean in terms of how much things could change? And also just our ability to steer that overall process. I'm not super comfortable with the idea of the companies tipping into a recursive self-improvement regime, especially given the the level of control and the level of unpredictability that we currently see in the models. But that does seem to be what they are going for. So in terms of like why, I think this has been the plan for quite some time. Even you remember that leaked Anthropic fundraising deck from maybe two years ago, where they said that in 2025 and 2026, the companies that train the best models will get so far ahead that nobody else will be able to catch up. I think that's kind of what they meant. I think that they were projecting then that in the 2526 timeframe, they'd get this like automated researcher. And once you have that, how's anybody who doesn't have that going to catch up with you? Obviously some of that remains to be validated but I do think they have been pretty intent on that for a long time Five years from now are there more engineers or fewer engineers I tend to think less. You know, already, if I just think about my own life and work, I'm like, would I rather have a model or would I rather have like a junior marketer? I'm pretty sure I'd rather have the model. Would I rather have the models or a junior engineer? I think I'd probably rather have the models in a lot of cases. I mean, it obviously depends on, you know, the exact person you're talking about. But truly forced choice today. And then you've got cost adjustment as well, right? I'm not spending nearly as much on my cursor subscription as I would be on a, you know, an actual human engineer. So even if they have some advantages, you know, and I also have not scaffolded, I haven't gone full co-scientist, right, on my cursor problems. I think that's another interesting, you start to see why folks like Sam Altman are so focused on questions like energy and the $7 trillion build out, because these power law things are weird. And, you know, to get incremental performance for 10x the cost is weird. It's definitely not the kind of thing that we're used to dealing with. But for many things, it might be worth it. And it still might be cheaper than the human alternative. You know, if it's like, well, Cursor costs me whatever, 40 bucks a month or something. Would I pay 400 for, you know, however much better? Yeah, probably. Would I pay 4,000 for however much better? Well, it's still, you know, a lot less than a full-time human engineer. And the costs are obviously coming down dramatically too, right? That's another huge thing. GPT-4 was way more expensive. It's like 90, it's like a 95% discount from GPT-4 to GPT-5. That's, you know, no small thing, right? I mean, it's, Apple's Apple's a little bit hard because the chain of thought does spit out a lot more tokens. And so you get, you give back a little, on a per token basis, it's dramatically cheaper, more tokens generated, you know, does eat back into some of that savings. but everybody seems to expect the trends will continue in terms of prices continuing to fall and so you know how many more of these like price reductions do you have to then be able to you know do the power law thing a few more times i guess i think i think i i think less um and i think that's probably true even if we don't get like full-blown agi that's you know better than humans at at everything, I think you could easily imagine a situation where of however many million people are currently employed as professional software developers, some top tier of them that do the hardest things can't be replaced. But there's not that many of those, you know, and the real like rank and file, you know, the people that over the last 20 years were told, learn to code, you know, that'll be your thing. Like the people that are the really top, top people didn't need to be told to learn to code, right? They just, it was their thing. They had a passion for it. They were amazing at it. We may not, it wouldn't shock me if we like still can't replace those people in three, four, five years time. But I would be very surprised if you can't get your nuts and bolts web app, mobile app type things spit out for you for far less and far faster than, and probably honestly with significantly higher quality and less back and forth. with an AI system than with your kind of middle of the pack developer in that timeframe. One thing I do want to call out, there are definitely people have concerns about progress moving too fast, but there's also concern, and maybe it's rising about progress not moving fast enough in the sense that a third of the stock market is mag seven. AI CapEx is over 1% of GDP. and so we are kind of relying on some of this progress in order to sort of sustain our economy. Yeah, and another thing that I would say has been slower to materialize than I would have expected are AI culture wars or sort of the ramping up of protectionism of various industries. We just saw Josh Hawley, I don't know if he introduced a bill or just said he intends to introduce a bill to ban self-driving cars nationwide. You know, God help me. I've dreamed of self-driving cars since I was a little kid, truly. Like, sitting at red lights, I used to be like, there's got to be a way. I think we took a Waymo together. Yeah, and it's so good. And the safety, you know, I think whenever people want to argue about jobs, it's going to be pretty hard to say 30,000 Americans should die every year. so that people's incomes don't get disrupted. It seems like you have to be able to get over that hump and say like the, you know, saving all these lives, if nothing else, is just really hard to argue against. But we'll see, you know, I mean, he's not without influence, obviously. So, yeah, I mean, I am very much on team abundance and, you know, my old mantra, I've been saying this less lately, but adoption accelerationist, hyperscaling pauser, the tech that we have, you know, could do so, so much for us even as is. I think if progress stopped today, I still think we could get to 50 to 80% of work automated over the next like five to 10 years. It would be a real slog. You'd have a lot of, you know, co-scientist type breakdowns of complicated tasks to do. We have a lot of work to do to go sit and watch people and say, why are you doing it this way? What's going on here? What's this? You handled this one differently. Why did you handle that one differently? All this tacit knowledge that people have and the kind of know-how procedural, you know, just instincts that they've developed over time. Those are not documented anywhere. They're not in the training data. So the AIs haven't had a chance to learn them. But again, when I say like no breakthroughs, I still am allowing there for like, you know, fine tuning of things to just like capabilities. we have that haven't been applied to particular problems yet um so just going through the economy and and just sitting with people and being like why are you doing this you know let's let's document this let's get the you know the model to learn your particular niche thing um that would be a real slog and in some ways i kind of wish that were the future that we were going to get um because it would be a methodical you know kind of one step one foot in front of the other you know know, no quantum leaps, like it would probably feel pretty manageable, I would think in terms of the pace of change, hopefully society could, you know, could absorb that and kind of adapt to it as we go without, you know, one day to the next, like, oh my God, you know, all the drivers, you know, are getting replaced or that one to be a little slower because you have to have the actual physical build out. But in some of these things, you know, customer service could get rent down real fast, right? Like if a call center has something that they can just drop in and it's like this thing now answers the phones and talks like a human and has a higher success rate and scales up and down. One thing we've seen at Waymark, small company, right? We've always prided ourselves on customer service. We do a really good job with it. Our customers really love our customer success team. But I looked at our intercom data and it takes us like half an hour to resolve tickets. We respond really fast. We respond in like under two minutes most of the time. when we respond, you know, two minutes is still long enough that the person has gone on to do something else, right? It's the same thing as with the cursor thing that we were talking about earlier, right? They've tabbed over to something else. So now we get the response back in two minutes, but they're doing something else. So then they come back at, you know, minute six or whatever, then they respond. But now our person has gone and done something else. So the resolution time, even for like simple stuff, can be easily a half an hour. And the AI, you know, it just responds instantly, right? So you don't have to have that kind of back and forth. You're just in and out. So I do think some of these categories could be really fast changes. Others will be slower. But yeah, I mean, I kind of wish we had that. I kind of wish we had that slower path in front of us. My best guess, though, is that we will probably continue to see things that will be significant leaps and that there will be like actual disruption. Another one that's come to mind recently, you know, maybe we can get the abundance department on these new antibiotics. Have you seen this development? No, tell us about it. I mean, it's not a language model. I think that's another thing people really underappreciate or that you could kind of look back at GPT-4 to 5 and then imagine a pretty easy extension of that. So GPT-4, initially when it launched, we didn't have image understanding capability. They did demo it at the time of the launch, but it wasn't released for some months later. The first version that we had could understand images, could do a pretty good job of understanding images, still with like jagged capabilities and whatever. Now with the new Nano Banana from Google, you have this like basically Photoshop level ability to just say, hey, take this thumbnail. Like we could take our two feeds right now, you know, take a snapshot of you, a snapshot of me, put them both into Nano Banana and say, generate the thumbnail for the YouTube preview featuring these two guys. put them in the same place, same background, whatever. It'll mash that up. You can even have it, you know, put text on top, progress since GPT-4, whatever we want to call it. GPT-5 is not a bust. And it'll spit that out. And you see that it has this deeply integrated understanding that bridges language and image. And that's something that it can take in, but now it's also something that can put out as part of one core model with like a single unified intelligence. That I think is going to come to a lot of other things. We're at the point now with these biology models and material science models where they're kind of like the image generation models of a couple years ago. They can take a real simple prompt and they can do a generation, but they're not deeply integrated where you can have like a true conversation back and forth and have that kind of unified understanding that bridges language and these other modalities. But even so, it's been enough for this group at MIT to use some of these relatively narrow purpose-built biology models and create totally new antibiotics. New in the sense that they have a new mechanism of action, like they're affecting the bacteria in a new way. And notably, they do work on antibiotic resistant bacteria. This is some of the first new antibiotics we've had in a long time. Now they're going to have to go through, you know, when I say that get the abundance department on it, it's like, where's my operation warp speed for these new antibiotics, right? Like we've got people dying in hospitals from drug resistant strains all the time. Why is nobody, you know, crying about this? I think one things that's happening to our society in general is just so many things are happening at once it's kind of the it's like the flood the zone thing except like there's so many ai developments flooding the zone that nobody can even keep up with all of those and that's that's come from me by the way too i would say two years ago i was like pretty in command of all the news and a year ago i was starting to lose it and now i'm like wait a second there was new antibiotics developed you know i'm kind of uh missing things you know just like everybody else despite my best efforts but key point there is AI is not synonymous with language models. There are AIs being developed with pretty similar architectures for a wide range of different modalities. We have seen this play out with text and image where you had your text-only models and you had your image-only models and then they started to come together and now they've come really deeply together and so I think you're going to see that across a lot of other modalities over time as well. And there's a lot more data there. I don't know what it means to run out of data. In the reinforcement learning paradigm, there's always more problems, right? There's always something to go figure out. There's always something to go engineer. The feedback is starting to come from reality, right? That was one of the things Elon talked about on the Crock 4 launch was like, maybe we're running out of problems we've already solved. And, you know, we only have so much of those sitting around in inventory. We only have one internet, you know, we only have so much of that stuff. But over at Tesla, over at SpaceX, like we're solving hard engineering problems on a daily basis, and they seem to be never ending. So when we start to give the next generation of the model, these power tools, the same power tools that the professional engineers are using at those companies to solve those problems and the AI start to learn those tools and they start to solve previously unsolved engineering problems, like that's going to be a really powerful signal that they will be able to learn from. And now again, fold in those other modalities, right? The ability to have sort of a sixth sense for, you know, the space of material science possibilities. When you can bridge or unify the understanding of language and those other things, I think you start to have something that looks kind of like super intelligence, even if it's like not able to, you know, write poetry at a superhuman level necessarily, its ability to see in these other spaces is going to be truly a superhuman thing that I think will be pretty hard to miss. You said that that was one thing that Cal's analysis missed is just the lack of appreciation for non-language modalities and how they drive in some of the innovations that you're talking about. Yeah, I think people are often just kind of equating the chat bot experience with AI broadly. Yeah. And, you know, that that conflation will not last probably too much longer because we are going to see self driving cars unless they get banned. And that's a, you know, very different kind of thing. And talk about your impact on jobs too, right? It's like what, four or 5 million professional drivers in the United States. That is a big, that is a big deal. I don't think most of those folks are going to be super keen to learn to code. And even if they learn to code you know i'm not sure how long that's going to last so that's going to be a disruption and then general robotics is like not that far behind you know the and this is one area where i do think china might be actually ahead of the united states right now but regardless of whether that's true or not you know these robots are getting really quite good right they can like walk over all these obstacles and these are things that a few years ago they just couldn't do at all They could barely balance themselves and walk a few steps under ideal conditions. Now you've got things that you can like literally do a flying kick and it'll like absorb your kick and shrug it off and just keep going, you know, right itself and continue on its way. Super rocky, you know, uneven terrain. All these sorts of things are getting quite good. You know, the same thing is working everywhere. I think one of the other thing that's kind of, There's always a lot of detail to the work. So it's a sort of inside view, outside view, right? Inside view, you're like, there's always this minutia. There's always, you know, these problems that we had and things we had to solve. But you zoom out and it looks to me like the same basic pattern is working everywhere. And that is like, if we can just gather enough data to do some pre-training, you know, some kind of raw, rough, you know, not very useful, but just enough at least to kind of get us going, then we're in the game. And then once we're in the game, now we can do this flywheel thing of like, you know, rejection sampling, like have it try a bunch of times, take the ones where it succeeded, you know, re-fine-tune on that, the RLHF, you know, feedback, the sort of preference, take two, which one was better, you know, fine-tune on that, the reinforcement learning, all these techniques that have been developed over the last few years, it seems to me they're absolutely going to apply to a problem like a humanoid robot as well. And that's not to say there won't be a lot of work to figure out exactly how to do that. But I think the big difference between language and robotics is really mostly that there just wasn't a huge repository of data to train the robots on at first. And so you had to do a lot of hard engineering to make it work at all, you know, to even stand up, right? You had to have all these control systems and whatever, because there was nothing for them to learn from in the way that the language models could learn from the internet. But now that they're working at least a little bit, you know, I think all these kind of refinement techniques are going to work. It'll be interesting to see if they can get the error rate low enough that I'll actually like allow one in my house around my kids, you know, that they'll probably be better deployed in like factory settings first, more controlled environments than the chaos of my house, as you have seen in this recording. But I do think they're going to work. What's the state of agents more broadly at the moment? How do you see things playing out? Where does it go? Well, broadly, I think it's the task length story from meter of the every seven months or every four months doubling time. We're at two hours-ish with GBT-5. Replit just said their new agent V3 can go 200 minutes. If that's true, that would even be a new high point on that graph. Again, it's a little bit sort of apples to oranges because they've done a lot of scaffolding. How much have they broken it down? How much scaffolding are you allowed to do with these things before you sort of are off of their chart and onto maybe a different chart? But if you extrapolate that out a bit and you're like, OK, take take the four month case just to be a little aggressive. That's three doublings a year. That's eight X task length increase per year. That would mean you go from two hours now to two days in one year from now. And then if you do another eight X on top of that, you're looking at basically, say, two days to two weeks of work in two years. that would be a big deal, you know, to say the least, if you could delegate an AI two weeks worth of work and have it do a, you know, even half the time, right? The meter thing is that they will succeed half the time on tasks of that size. But if you could take a two week task and have a 50% chance that an AI would be able to do it, even if it did cost you a couple hundred bucks, it's like, well, that's again, a lot less than it would cost to hire a human to do it. And it's all on demand. It's kind of, you know, it's immediately available. If I'm not using it, I'm not paying anything. Transaction costs are just like a lot lower the whole, you know, many, many other aspects are favorable for the AI there. So, you know, that would suggest that you'll see a huge amount of automation in all kinds of different places. The other thing that I'm watching, though, is the reinforcement learning does seem to bring about a lot of bad behaviors. Reward hacking being one, you know, the the any sort of gap between what you are rewarding the model for and what you really want can become a big issue. we've seen this in coding in many cases where the AI will, Claude is like notorious for this, will put out a unit test that always passes, you know, that just has like return true in the unit test. Why is it doing that? Like, well, it must have learned that what we want is for unit tests that pass, you know, we want it to pass unit tests. Well, we didn't mean to write fake unit tests that always pass, but that technically did, you know, satisfy the reward condition and so we're seeing those kind of weird behaviors with that comes this like scheming kind of stuff we we don't really have a great handle on that yet there is also situational awareness that seems to be on the rise right where the models are like increasingly in their chain of thought you're seeing things like this seems like i'm being tested um you know maybe i should be conscious of what my tester is really looking for here and that makes it hard to evaluate models in tests because you don't know if they're actually going to behave the same way when they're out in the real world. So those, you know, I wouldn't say this is a high level or high confidence prediction, but like one model of the future I've been playing with is the task length keeps doubling while at the same time, these weird behaviors pop up and then are suppressed. And we have seen in the Cloud4 and in the GPT-5 system cards, Cloud4 reported, I think, a two-thirds reduction in reward hacking. And, you know, in GPT-5, they reported a few different dimensions, but, you know, say something similar, reduction in deceptive behavior. But those behaviors kind of just emerged. So it's sort of like weird behavior emerges, then they sort of figure out how to tamp it down, but not entirely. Presumably, the next generation they'll like tamp it down some more but maybe some new additional weird behavior could emerge and then we'll have to kind of tamp that one down all the while the the tasks are expanding in scope you know with a every four months doubling so you could end up in a world where you can delegate really like major things to ais but there's some small but not necessarily totally vanishing chance that it like actively screws you over in the way that it is trying to do that task. And so you have a, you know, maybe it's like, okay, I think here I'm about to get two weeks worth of work done for a hundred bucks. Okay, that's amazing. But there's also a one in 10,000 chance that it like legitimately, you know, attacks me in a, you know, like meaningful way. Some of the things that we have seen, these are like fairly famous at this point, but in the Cloud4 system card, they reported blackmailing of the human. the the setup was that the ai had access to the engineer's email and they told the ai that it was going to be like replaced with a you know a less ethical version or something like that it didn't want that and it found in the engineer's email that the engineer was having an affair so it started to blackmail the engineer to so as to avoid being replaced with a less ethical version And people, I think, are way too quick, in my view, to move past these anecdotes. People are sort of often like, well, you know, they set it up that way. And, you know, that's not really realistic. But another one was whistleblowing. You know, there was another thing where they sort of set up this dynamic where there was some, you know, unethical, illegal behavior going on. And again, the model had access to this data and it decided to just email the FBI and tell the FBI about it. So, first of all, I don't think we really know what we want. You know, to some degree, maybe you do want AIs to report certain things to authorities. That could be one way to think about the bioweapon risk, you know, is like, not only should the models refuse, but maybe they should report you to the authorities if you're actively trying to create a bioweapon. I certainly don't want them to be doing that too much. I don't want to live under the, you know, surveillance of Claude five. That's always going to be threatening to turn me in. But I do sort of want some people to be turned in if they're doing sufficiently bad things. We don't have a good resolution society. Wide on, you know, what we want the models to even do in those situations. And I think it's also, you know, it's like, yes, it was set up. Yes, it was research, but it's a big world out there, right? We got a billion users already on these things and we're plugging them in to our email. So they're going to have very deep access to information about us. You know I don know what you been doing in your email I hope there nothing too crazy in mind but like now I got to think about it a little bit Right What did I have I ever done anything that I you know geez I don know what you been doing in your email I don know I hope there nothing too crazy in mind But like now I got to think about it a little bit right What what did I have I ever done anything that I you know geez I don know Or even that it could misconstrue, right? Like, it's obviously not made I didn't even really do anything that bad. But it just misunderstands what exactly was going on. So that could be a weird, you know, if there's one thing that could kind of stop the agent momentum, in my view, it could be like, the one in 10,000 or whatever, you know, we ultimately kind of push the really bad behaviors down to is maybe still just so spooky to people that they're like, I can't deal with that, you know, and that might be hard to resolve. So, well, you know, what happens then, you know, it's hard to check two weeks worth of work every couple hours or whatever, right? Like that's part of where the where the whole then you bring another AI in to check it you know that's again where you start to get to the now I see why we need more electricity and and seven trillion dollars of build out is yikes you know they're going to be producing so much stuff I can't possibly even review it all I need to rely on another uh AI to help me do the review of the first AI to make sure that if it is trying to screw me over you know somebody's catching it I can't monitor that myself I think Redwood Research is doing some really interesting stuff like this, where they are trying to get systematic on like, okay, let's just assume this is quite a different, quite a departure from the traditional AI safety work where the, you know, the big idea traditionally was let's figure out how to align the models, make them safe, you know, make them not do bad things. Great. Redwood research has taken the other angle, which is let's assume that they're going to do bad stuff. They're going to be out to get us at times. How can we still work with them and get productive output and get value without, you know, fixing all those problems. And that involves like, again, all these sort of AIs supervising other AIs and crypto might have a place to, a role to play in this. Another episode coming out soon with Ilya Pulisukhin, who's the founder of Near. Really fascinating guy because he was one of the eight authors of the attention is all you need paper. And then he started this near company. It was originally an AI company. They took a huge detour into crypto because they were trying to hire task workers around the world and couldn't figure out how to pay them. So they were like, this sucks so bad to pay these task workers in all these different countries that we're trying to get data from that we're going to pivot into a whole blockchain side quest. Now they're coming back to the AI thing and their tagline is the blockchain for AI. And so you might be able to get, you know, a certain amount of control from, you know, the sort of crypto security that the blockchain type technology can provide. But I could see a scenario where the bad behaviors just become so costly when they do happen, that people kind of get spooked away from using the frontier capabilities in terms of just like how much, you know, work the AIs can do. But that wouldn't be a, that wouldn't be a pure capability stall out, it would be a, we can't solve, you know, some of the long tail safety issues, challenge. And you know, that if that is the case, then you know, that'll be, that'll be an important fact about the world too. I always, nobody ever seems to solve any of these things like a hundred percent, right? They always, every, every generation, it's like, well, we reduced hallucinations by 70%. Oh, we reduced deception by two thirds. We reduced, um, you know, scheming or, or whatever by however much, but it's always still there, you know, and it's, and if you take the even, you know, lower rate and you multiply it by a billion users and thousands of queries a month and agents running in the background and processing all your emails and, you know, all the deep access that people sort of envision them happening. It could be a pretty weird world where there's just this sort of negative lottery of like AI accidents. Another episode coming up is with the AI underwriting company, and they are trying to bring the insurance industry and all the, you know, the wherewithal that's been developed there to price risk, figure out how to, you know, create standards. You know, what can we allow? sort of guardrails do we have to have to be able to ensure this kind of thing in the first place? So that would be another really interesting area to watch is like, can we sort of financialize those risks in the same way we have, you know, with car accidents and all these other mundane things. But the space of car accidents is only so big, the space of weird things that AIs might do to you, you know, as they have weeks worth of runway is much bigger. And so it's going to be a hard challenge, but you know, people are, people are working. We got some of our best people working on it. What do you make the claim that 80% of AI startups have Chinese open models? Um, what do you make of the claim and the, uh, implications? I think that maybe that probably is true with the one caveat that it is only measuring companies that are using open source models at all. I think most companies are not using open source models. And I would guess, you know, the vast majority of tokens being processed by American AI startups are their API calls, right, to the usual suspects. So weighted by actual usage, I would say still the majority, as far as I could tell, would be going to commercial models. for those that are using open source. I do think it's true that the Chinese models have become the best. You know, the American bench there was always kind of thin, right? It was basically meta that was willing to put in huge amounts of money and resources and then open source it. You've got, you know, Paul Allen funded group, the Allen Institute for AI, AI2. You know, they're doing good stuff too, but they don't have pre-training resources. So they do really good post training and open source their recipes and all that kind of stuff. So it's not like American open source is bad. And again, this is another way in which I think you can really validate that things are moving quickly because if you take the best American open source models and you take them back a year, they are probably as good, if not a little better than anything that we had commercially available at the time. if you compare to Chinese, you know, they have, I think, surpassed. So there's been like pretty clear change at the frontier. I think that means that the best Chinese models are like pretty clearly better than anything we had a year ago, commercial or otherwise. So yeah, I mean, that just means like things are moving. I think that's like, hopefully I've made that case compellingly, but that's another data point that I think makes it hard to, I don't think you can believe both that the Chinese models are now the best open source models and that AI has stalled out and we haven't seen much progress since GPT-4. Those seem to be kind of contradictory notions. I believe the one that is wrong is the lack of progress. In terms of what it means, I mean, I don't really know. We're not going to stop China. Yeah, the whole. I've always been a skeptic of the no selling chips to China thing. Notion originally was like, we're going to prevent them from doing, you know, some super cutting edge military applications. And it was like, well, we can't really stop that. But we can at least stop them from trading frontier models. And then it was like, well, we can't necessarily really stop that. But now we can, you know, at least keep them from like having tons of AI agents. We'll have like way more AI agents than they do. and I don't love that line of thinking really at all. But one upshot of it potentially is they just don't have enough compute available to provide inference as a service to the rest of the world. So instead, the best they can do is just say, okay, well, we'll train these things and you can figure it out. Here you go, have at it. It's kind of a soft power play, presumably. I did an episode with Anjane from A16Z who I thought really did a great job of providing the perspective of what I started calling countries three through 193. If the U.S. and China are one and two, three through, there's a big gap. You know, there's like, I think the U.S. is still ahead, but not by that much in terms of research and, you know, ideas relative to China. We do have this compute advantage, and that does seem like it matters. One of the upshots may be that they're open sourcing, and countries 3-193 are like, or 3-193 are significantly behind. So for them, it's a way to, you know, try to bring more countries over to the Chinese camp, potentially in the US-China rivalry. It seems like the model everybody, and I don't like this at all. I don't like technology decoupling. As somebody who worries about, you know, who's the real other here? I always say the real other are the AIs, not the Chinese. So if we do end up in a situation where, yikes, like, you know, we're seeing some crazy things, it would be really nice if we were on basically the same technology paradigm to the degree that we really decouple and, you know, not just the chips are different, but maybe the ideas start to become very different. publishing gets shut down, you know, tech trees evolve and kind of grow apart. That to me seems like a recipe for, you know, it's harder to know what the other side has, it's harder to trust one another, it seems to feed into the arms race dynamic, which I do think would, you know, is a real existential risk factor, I would hate to see us, you know, create another sort of mad type dynamic where we all live under the threat of AI destruction. But that very well could happen. And so yeah, I don't know, I do kind of have some sympathy for the recent decision that the administration made to be willing to sell the H20s to China. And then it was funny that they turned around and rejected them, which to me seemed like a mistake i don't know why they would be rejecting them if i were them i would buy them um and i would maybe i would maybe sell inference on the models that i've just been uh creating and i would try to make my money back doing that but in the meantime they can at least you know demonstrate the greatness of the chinese nation by showing that they're not uh far behind the frontier and they can also make a pretty powerful appeal to countries three through 193 and say like you know what you really want to you see how the us is acting uh in general you know you really want to they cut us off from chips uh they had a even a long you know the last administration had an even longer list of countries that couldn't get chips this administration is doing all kinds of crazy stuff you know you get 50 tariffs here there whatever um how do you know you can really rely on them to continue to provide you ai into the future well you can rely on us we open source model you can have it um so you know come work with us and buy our chips because by the way our models will you know as we mature they'll be optimized to run on our chips so i don't know that's a complicated stuff a complicated situation i do think it's true i i don't think the adoption is as high as that 80 i think that is you know within that subset of companies that are doing stuff with open source we're going to experiment with that at waymark but we to be honest we have never done anything with an open source model in our product to present. Everything we've ever done has been through commercial. At this point, we are going to try doing some reinforcement fine tuning. We are going to do that on a Quinn model, I think first. So, you know, that'll put us in that 80%. But I'm guessing that at the end of the day, we'll take that Quinn model, we'll do the reinforcement fine tuning and we'll probably get roughly up to as good as you know gpd5 or cloud 4 or whatever and then we'll say okay do we really want to have to manage inference ourself how much are we really going to save and at the end of the day i would guess we probably are still going to end up just being like yeah we'll pay a little bit more on a monthly bill basis for one of these frontier models or a little bit better maybe still and you know it's operationally a lot easier and they'll have upgrades, you know. So, yeah, I mean, of course, there's regulated industries. There's a lot of places where, you know, you have hard constraints. You just can't get around, and that forces you to do those Chinese models. Then there's also going to be the question of, like, are there backdoors in them? You know, people have seen the sleeper agents project where a model was trained to be good up until a certain point of time. and, you know, people put the today's date in the system prompt all the time, right? Today's date is this, you are Claude, you know, here you go. So then that's going to be another kind of thing for people to worry about. And we don't really have great, there have been some studies, Anthropic did a thing where they trained models to have some hidden objectives and then challenged teams to figure out what those hidden objectives were. And with certain interpretability techniques, they were able to figure that stuff out relatively quickly. So you might be able to get enough confidence that you take this open source thing, you know, created by some Chinese company, whatever, and then put it through, you know, some sort of, not exactly audit, because you can't trace exactly what's happening, but some sort of examination, you know, to see can we detect any hidden goals or any, you know, secret backdoor, bad behavior, whatever's, and maybe with enough of that kind of work, you could be confident that you don't have it. But the more and more critical this stuff gets, again, going back to that task length doubling, weird behavior, now you got to add into the mix. What if they intentionally programmed it to do certain bad things under certain rare circumstances? We're just headed for a really weird future. We've got all these, there's no limit to it. All these things are valid concerns. they often are in direct tension with each other. I don't, I, I'm not one who, you know, wants to see one tech company take over the world by any means. So I definitely think we would do really well to have some sort of broader, more buffered ecological like system where, you know, all the AIs are kind of in some sort of competition, you know, mutual coexistence with each other. But we don't really know what that looks like. And we don't really know, what an invasive species might look like when it gets introduced into that very nascent and as yet not battle-tested ecology. So, yeah, I don't know. Bottom line, I think the future is going to be really, really weird. Well, I do want to close on an uplifting note. So maybe as a gearing towards closing question, we could get into some areas where we're already seeing some exciting capabilities emerge and sort of transform the experience. Maybe around education or healthcare or any other areas you want to highlight? Yeah, boy, it's all over. One of my mantras is that there's never been a better time to be a motivated learner. So I think a lot of these things do have kind of, you know, two sides of the coin. There's the worry that the students are taking the shortcuts and they're, you know, losing the ability to sustain focus and endure cognitive strain. Flip side of that is, as somebody who's fascinated by the intersection of AI and biology, sometimes I want to read a biology paper and I really don't have the background. An amazing thing to do is turn on voice mode and share your screen with ChatGPT and just go through the paper reading. You don't even have to talk to it most of the time. You're doing your reading. It's watching over your shoulder. And then at any random point you have a question, you can verbally say, what's this? Why are they talking about that? What's going on with this? What is the role of this particular protein that they're referring to or whatever? And it will have the answers for you. So if you really want to learn in a sincere way, you know, the things are unbelievably good at helping you do that. Flip side is you can take a lot of shortcuts and, you know, maybe never have to learn stuff on the biology front. And again, we've got multiple of these sort of discovery things happening. The antibiotics one we covered. There was another one that I did another episode on with a Stanford professor named James Zhao, who created something called the Virtual Lab. And basically, this was an AI agent that could spin up other AI agents, depending on what kind of problem it was given. then they would go through a deliberative process where you'd have, you know, one expert in one thing would give its take and they'd, you know, bat it back and forth. There was a critic in there that would criticize, you know, the ideas that had been given. Eventually they'd synthesize. Then they were also given some of these narrow specialist tools. So you have agents using the alpha fold type, not just alpha fold, you know, there's a whole wide, wide array of those at this point, But using that type of thing to say, OK, well, can we simulate how this would interact with that? Agents are running that loop and they were able to get this language model agent with specialized tool system to generate new treatments for novel strains of COVID that had kind of escaped the previous treatments. amazing stuff, right? I mean, the flip side of that, of course, is, you know, you got the bioweapon risk. So all these things do seem like they're going to be, even on just the abundance front itself, right? Like, we may have a world of unlimited professional private drivers, but we don't really have a great plan for what to do with the 5 million people that are currently doing that work. We may have infinite software, but, you know, especially once the 5 million drivers pile into all the coding boot camps and, you know, get coding jobs. I don't know what we're going to do with the 10 million people that were coding when, you know, 9 million of them become superfluous. So, yeah, I don't know. I think we're headed for a weird world. Nobody really knows what it's going to look like in five years. There was a great moment at at Google's IO where they brought up some journalist. I know you we were skeptical of journalists. This is a great moment to we're going direct, right? This is a great reason or example of why one would want to do that. They brought up this person to interview Demis and Sergey Brin. And they the guy asked, like, what is search going to look like in five years? And Sergey Brin, like almost spit out his coffee on the on the stage and was like, search. We don't know what the world is going to look like in five years. So I think that's really true. Like the biggest risk, I think, for so many of us, I, you know, include myself here. is thinking too small. You know, the worst thing I think we could do would be to underestimate how far this thing could go. I would much rather be, I would much rather be mocked for things happening on twice the timescale that I thought than to find myself unprepared when they do happen. So whether it's 27, 29, 31, I'll take that extra buffer, honestly, where we can get it. my thinking is just get, you know, get ready as as much and as fast as possible. And again, if we do have a little grace time to, you know, to do extra thinking, then great. But I would, I think the worst mistake we could make would be to dismiss and not feel like we need to get ready for big changes. Should we wrap directly on that? Or is there any other last note you want to make sure to get across regarding anything we said today? One of my other mantras these days is the scarcest resource is a positive vision for the future. Yeah. I do think it's always really striking, whether it's Sergey or, you know, or Sam Altman or Dario. Like, Dario probably has the best positive vision of the frontier developer CEOs with machines of love and grace. But it's always striking to me how little detail there is on these things. and when they launched GPT-40, which was the voice mode, they were pretty upfront about saying, yeah, this was kind of inspired by the movie Her. And so I do think, like, even if you are not a researcher, you know, not great at math, not somebody who codes, I think that this technology wave really rewards play. It really rewards imagination. I think literally writing fiction might be one of the highest value things you could do, especially if you could write aspirational fiction that would get people at the frontier companies to think, geez, maybe we could steer the world in that direction. Like, wouldn't that be great? If you could plant that kind of seed in people's minds, it could come from a totally non-technical place and potentially be really impactful. Play, fiction, I had one other dimension to that, but yeah, play, fiction, positive vision the future anything that you could do to offer a positive oh behavioral too is like these days because you can get the ais to code so well i'm starting to see people who have never coded before i'm working with one guy right now who's never coded before but does have a sort of behavioral science background and he's starting to do legitimate frontier research on how are ai's going to behave under various kind of esoteric circumstances. So I think nobody should count themselves out from the ability to contribute to figuring this out and even to shaping this phenomenon. It is not just something that the, you know, the technical minds can contribute to at this point. Literally philosophers, fiction writers, people literally just messing around. Pliny the jailbreaker, you know, there's, there are almost unlimited cognitive profiles that would be really valuable to add to the mix of people trying to figure out what's going on with AI. So come one, come all is kind of my attitude on that. That's a great place to wrap. Nathan, thank you so much for coming on the podcast. Thank you, Eric. It's been fun. If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, CognitiveRevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of A16Z. by AI podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at AI podcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by notions, AI meeting notes, AI meeting notes captures every detail and breaks down complex concepts. So no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days.
Related Episodes

#228 - GPT 5.2, Scaling Agents, Weird Generalization
Last Week in AI
1h 26m

Exploring GPT 5.2: The Future of AI and Knowledge Work
AI Applied
12m

AI to AE's: Grit, Glean, and Kleiner Perkins' next Enterprise AI hit — Joubin Mirzadegan, Roadrunner
Latent Space

Sovereign AI in Poland: Language Adaptation, Local Control & Cost Advantages with Marek Kozlowski
The Cognitive Revolution
1h 29m

What We Learned About Amazon’s AI Strategy
The AI Daily Brief
26m

China's AI Upstarts: How Z.ai Builds, Benchmarks & Ships in Hours, from ChinaTalk
The Cognitive Revolution
1h 23m
No comments yet
Be the first to comment