Machine Learning Street Talk

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

Tuesday, June 24, 2025 · 2h 7m

Episode Description

What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence, and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we haven't solved the basic cognitive problems he identified in his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.

Sponsor messages:
========
Google Gemini: The Gemini app features Veo 3, a state-of-the-art AI video generation model. Sign up at https://gemini.google.com

Tufa AI Labs is hiring ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard!
https://tufalabs.ai/
========

Guest Powerhouse
Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic, who has been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)
Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)
Dan Hendrycks - Director of the Center for AI Safety, who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)

Transcript:
http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno

TOC:

Introduction: The AI Arms Race
00:00:04 - The Danger of Automated AI R&D
00:00:43 - The Rationalization: "If we don't, someone else will"
00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)
00:02:55 - Guest Introductions

The Philosophical Stakes
00:04:13 - What is the Positive Vision for AGI?
00:07:00 - The Abundance Scenario: Superintelligent Economy
00:09:06 - Differentiating AGI and Superintelligence (ASI)
00:11:41 - Sam Altman: "A Decade in a Month"
00:14:47 - Economic Inequality & The UBI Problem

Policy and Red Lines
00:17:13 - The Pause Letter: Stopping vs. Delaying AI
00:20:03 - Defining Three Concrete Red Lines for AI Development
00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"
00:31:15 - Transparency and Public Perception
00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"

Forecasting AGI: Timelines and Methodologies
00:42:29 - The Case for Short Timelines (Median 2028)
00:47:00 - Scaling Limits: Compute, Data, and Money
00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding
00:53:15 - The 10^45 FLOP Thought Experiment

The Great Debate: Cognitive Gaps vs. Scaling
00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition
01:00:46 - Current AI Can't Play Chess Reliably
01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?
01:16:13 - The Multi-Dimensional Nature of Intelligence
01:24:26 - The Benchmark Debate: Data Contamination and Reliability
01:31:15 - The Superhuman Coder Milestone Debate
01:37:45 - The Driverless Car Analogy

The Alignment Problem
01:39:45 - Has Any Progress Been Made on Alignment?
01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"
01:46:30 - Distinguishing Model vs. Process Alignment

Scenarios and Conclusions
01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift
01:53:35 - Will AI Become Jeff Dean?
01:58:41 - Takeoff Speeds and Exceeding Human Intelligence
02:03:19 - Final Disagreements and Closing Remarks

REFS:

Gary Marcus (2001) - The Algebraic Mind
https://mitpress.mit.edu/9780262632683/the-algebraic-mind/
00:59:00

Gary Marcus & Ernest Davis (2019) - Rebooting AI
https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/
01:31:59

Gary Marcus (2024) - Taming Silicon Valley
https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/
00:03:01
