

What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit
The Cognitive Revolution
AI Summary
The speaker, a graduate of Chippewa Valley High School, shares his background and experience in the tech industry, particularly in the field of AI. He discusses his role as an 'ambassador from Silicon Valley' to educators, providing insights into the rapid advancements in AI and its potential impact on the education sector. The speaker emphasizes the need for educators to be prepared for the transformative changes that AI will bring, while also acknowledging the humility required in navigating this new landscape.
Key Points
1. The speaker has a unique perspective as someone deeply immersed in the AI industry, with a 'front row seat' to the development of the technology.
2. He believes that no matter how much preparation educators do, it may not be enough to keep up with the pace of AI advancements.
3. The speaker shares personal stories and anecdotes that illustrate the rapid progress of AI, such as the early work of Demis Hassabis and the impact of Eliezer Yudkowsky's writings.
4. He encourages educators to consider creative approaches, such as speculative fiction writing, to help shape the future of AI.
5. The speaker acknowledges his lack of direct experience in the classroom and expresses humility in offering recommendations to the education community.
Topics Discussed
AI advancements, AI in education, AI safety, Speculative fiction, Educator preparedness
Frequently Asked Questions
What is "What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit" about?
The speaker, a graduate of Chippewa Valley High School, shares his background and experience in the tech industry, particularly in the field of AI. He discusses his role as an 'ambassador from Silicon Valley' to educators, providing insights into the rapid advancements in AI and its potential impact on the education sector. The speaker emphasizes the need for educators to be prepared for the transformative changes that AI will bring, while also acknowledging the humility required in navigating this new landscape.
What topics are discussed in this episode?
This episode covers the following topics: AI advancements, AI in education, AI safety, Speculative fiction, Educator preparedness.
What is key insight #1 from this episode?
The speaker has a unique perspective as someone deeply immersed in the AI industry, with a 'front row seat' to the development of the technology.
What is key insight #2 from this episode?
He believes that no matter how much preparation educators do, it may not be enough to keep up with the pace of AI advancements.
What is key insight #3 from this episode?
The speaker shares personal stories and anecdotes that illustrate the rapid progress of AI, such as the early work of Demis Hassabis and the impact of Eliezer Yudkowsky's writings.
What is key insight #4 from this episode?
He encourages educators to consider creative approaches, such as speculative fiction writing, to help shape the future of AI.
Who should listen to this episode?
This episode is recommended for anyone interested in AI advancements, AI in education, AI safety, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
In this keynote from the Michigan Virtual AI Summit, Nathan Labenz speaks directly to K-12 educators about the current reality and rapid trajectory of the AI frontier. He explores why a balanced mindset of excitement and fear is crucial for navigating this technology, drawing on personal history to emphasize a "whole-of-society" effort. Discover key insights into AI's impact and its profound implications for the future of education.

Sponsors:
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(02:38) An Ambassador From Silicon Valley
(07:00) The Forrest Gump of AI (Part 1)
(13:19) Sponsor: Tasklet
(14:31) The Forrest Gump of AI (Part 2)
(14:43) The Cognitive Revolution
(18:09) Debunking AI Misconceptions
(24:15) Recent AI Breakthroughs (Part 1)
(24:27) Sponsor: Shopify
(26:24) Recent AI Breakthroughs (Part 2)
(30:08) The Future of Work
(34:56) AI's Deceptive Behaviors
(44:18) Revolutionizing Education
(48:59) New Skills to Focus On
(56:14) Education's Greatest Generation
(01:03:24) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Full Transcript
Hello, and welcome back to The Cognitive Revolution. Before getting started today, I want to take a moment to say a big thank you to everyone who reached out to wish my family well after our recent episode about the role that AI has played in navigating my son Ernie's cancer diagnosis and treatment. Today is day 21 in the hospital, and things remain pretty much on track, which is to say that it really is a brutal process for a kid to go through, but the odds remain extremely good that he is headed for a long-term cure. We really appreciate all the prayers and positive vibes that people have offered, and we'll keep you posted on his progress.

In the meantime, today I'm pleased to share a keynote presentation that I gave on October 15th, just before all this started, at the Michigan Virtual AI Summit, an event for K-12 educators and administrators designed to foster thoughtful integration of AI into education. My goal for this talk was to act as a sort of ambassador from the Silicon Valley AI bubble, speaking candidly to educators about the reality of the AI frontier: where the technology is today, how fast and how far it's likely to go, and why I believe that a mix of excitement and fear is the appropriate mindset with which to approach this technology.

As you'll hear toward the end, I share a personal story about my own grandfather, who worked as an engineer in a tank factory during World War II, and his brother, who fought in the Pacific. I've been thinking about this family history quite a bit recently, because while I certainly hope that we never have anything like a war between humans and AIs, I do think that managing the AI transition, even in the best case, is going to require a whole-of-society effort. We are going to need everyone to do their part. And I was genuinely inspired by the Michigan Virtual team and by presenters like one Mr. Herman, a classroom teacher from the small town of Marlette, Michigan, who provided an outstanding example of how people everywhere are starting to take the initiative, generally without any mandates or even formal training, to figure out what AI means in their own local context and how best to use it.

One exciting outcome from this event is a potential speculative fiction writing contest meant to encourage students to develop their own concrete, positive visions for the AI future. I am super excited about this idea, and I plan to support the contest with a personal contribution to the cash prize. If you'd like to help support or expand on this idea, please do reach out. For now, I hope you enjoy this presentation on the current state of AI and what it means for the future of education, recorded live at the Michigan Virtual AI Summit.

All right. Well, thank you very much. Truly honored to be here and excited to spend the next hour with you. I want to say thank you to the Michigan Virtual team, first of all. My appearance here has been in the works for over a year. It was Ken and Justin who originally reached out to me over a year ago, came down to Detroit, and had lunch in my neighborhood. My immediate impression was: these guys are smart. And I remember thanking them at that lunch, saying, you guys could have gotten away with doing a lot less. So I really appreciate, as a parent of a Detroit Public Schools student, how much work you are putting into this and how hard you are evidently trying. And actually, in my ChatGPT deep research reports for this presentation, Michigan Virtual's work has come up a couple of times.
So you guys are really in the right place to be learning about this and learning from the right people. So how about a round of applause for the Michigan Virtual team?

Okay, we have got a lot to cover, so I'm going to go pretty fast. I'll take just a couple minutes to introduce myself so you know where I'm coming from. There are a lot of great sessions here; I sat in Mr. Herman's session first thing in the morning and was really impressed by just how forward-thinking and visionary he has been. Coming from Marlette, a small town in Michigan, he's one guy figuring it out and really doing an excellent job. I am mindful that I'm not an educator. There's a lot that I don't know, and I want to be humble about that. But I want to approach you today as a sort of ambassador from Silicon Valley, and I'll tell a little bit of my own story just so you know where I'm coming from. My role, for better or worse, is to tell you that no matter how much preparation you're doing for this AI wave, it probably can't be enough. No matter how big you're thinking, there's still, honestly, a risk that you might be thinking too small. That's a test I apply to myself all the time as well. So I'll tell a little of my own story, and give you some of the things that I think you really need to know about AI. Not all of it is super actionable, but it is at least provocative, and it should have you leaving here thinking about just how big of a deal this is really going to be. Then toward the end, I'll offer some reflections, implications, and recommendations for the education space. But again, that's coming from a place of quite deep humility, because I know that you guys are doing it every day, and all I can really offer is the perspective of someone who is deeply immersed in the technology but doesn't have the experience applying it where the rubber hits the road, as you all do.

Okay, so this is me going way back. I'm a graduate of Chippewa Valley High School, class of 2002. And of course, we all have these stories of great teachers who made an impact on our lives. These four are responsible for a huge portion of my life. It was Mr. Vance, on the upper left, who assigned my now wife, Amy, and me to the same English group project in ninth grade. That became sort of an origin story for our relationship. Ms. Mojiki assigned us to be husband and wife in Death of a Salesman the next year in speech class. At the time, we were kind of rivals, but, you know, maybe she saw something we didn't. And Mrs. Voss and Mr. Wendorski took us to Washington, D.C. on two separate trips, where we sat on the bus next to each other and really got to know each other. So that's just a little bit about me.

I was fortunate to have the chance to go to Harvard as an undergrad. Really, my only direct educational experience: I did create an accelerated math program back at Chippewa Valley the summer after my freshman year, teaching kids who were looking to skip a year of math one year's worth of material in about eight weeks. That was an exciting opportunity. And I was also a peer writing tutor for three years as an undergrad. But really, that's it, and that was a long time ago. So I can't say I'm deeply in touch with the classroom as it exists today. What have I been doing over the last couple of decades?
Well, I've really been watching technology, watching artificial intelligence development, up close. And I have a weird amount of lore. I sometimes describe myself as the Forrest Gump of AI, because I've found myself over and over again in these really important scenes, usually as an extra character, but with a front-row seat to the people making it happen, and, hopefully, a decent insight into what they are thinking.

One example: Mark Zuckerberg and all the other Facebook founders were in the same dorm that I was in as an undergrad. That's him and co-founder Chris Hughes way back in the day. Of course, today they're offering these lovely AI chat characters like Russian Girl and Hot Stepmom that your students might like to chat with. So in a way, they've come a long way, I suppose.

Another bit of deep lore: I'm sure everybody has heard recently of the book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky, known in some circles as the prophet of AI doom. I've been reading his work for basically 20 years now. I started reading him in 2006, 2007, when there was no AI and this all seemed like science fiction. But even then I thought, well, geez, this seems like something somebody should be taking seriously, so I'm glad that he is. It turned out that very few people were taking it seriously. And that's my wife, Amy, actually, who became the COO of his nonprofit and would work with him. His first big breakthrough hit was actually a Harry Potter fan fiction called Harry Potter and the Methods of Rationality, which he set out to write because he realized that solving the AI safety problem was so difficult that he couldn't do it himself; he thought he needed to inspire the next generation of math geniuses to do it. And how better to do that than to write a Harry Potter fan fiction? That actually worked. Believe it or not, the heads of mechanistic interpretability research at Google and Anthropic today both got into the field because they read his Harry Potter fanfic. To this day, I still recommend writing fiction as maybe one of the most impactful ways that you can shape the future of AI. I'll circle back to that in a little bit.

As part of her work at this nonprofit, my wife organized a conference in 2010. I was in the auditorium when a young Demis Hassabis articulated his vision for actually going out and building AGI. This was at a time when nobody thought we were anywhere close; nobody thought it was possible. He met Peter Thiel at that event and got his first funding from him, at a time when it was tough to raise money for this kind of work. I went back and watched the talk in preparation for this, and one of the things that was really striking was what he said about neuroscience at the time: if you haven't looked at it in five years, then you are woefully out of date, and, by the way, it's also going to take you five years to catch up. These days, I would say if you haven't looked at AI in the last one year, then you are woefully out of date. But the good news is you can also get to the frontier in about a year's time. There are examples of people pivoting their careers into AI and taking on significant roles at frontier companies. Precisely because the frontier is moving that fast, old knowledge becomes obsolete pretty quickly, and racing to the frontier is something you can do in a pretty short period of time these days.

Okay, a little bit about me. I started this company, Waymark. We're in Detroit.
We used to call ourselves a done-for-you... well, I got the order wrong. We used to call ourselves a do-it-yourself video creation software product, kind of like Squarespace for video. But what we found was that, even though the software is easy to use, our users often didn't have anything to say. They would tell us: well, I have sort of a vague idea, but I don't know how to translate that into something concrete; that's kind of hard for me to do. And we had no technology that could help them, until, of course, modern large language models came on the scene. So we pivoted our company from a do-it-yourself video creator to a done-for-you-by-AI video creator. And we were pretty early to that. This is my favorite page on the internet: a case study that OpenAI did with us, because we were an early successful user of their product. Going back three years, they had no big companies as customers and almost no revenue. Even a little podunk startup with like 30 people, just a few million dollars in revenue, and paying them a couple thousand dollars a month was enough to get a case study on their website. These days, they wouldn't even notice us.

Okay, one more bit of lore. Everybody's familiar with the story of Sam Altman being fired. I would say I was maybe like 5% of the contributing reason that that happened. When they finished training GPT-4, because we were an early adopter, they gave us access to it. I was immediately, totally blown away by how powerful it was relative to everything I had seen, and I literally dropped what I was doing and asked them if they had a safety review program and whether I could be a part of it. To their credit, they did, and they allowed me to be a part of it. I spent two months working nonstop just to try to understand: what could GPT-4 do? How powerful was it? Is this something we need to worry about yet or not? I ultimately concluded, correctly, that no, it wasn't really that powerful yet, but also that the safety processes they had in place at OpenAI were woefully inadequate. So I actually did escalate that to the board, just to say: hey, you are a nonprofit, after all; you guys should know that this stuff is going really fast, and I just want to make sure you are aware of what I'm aware of now. And I'll never forget, when I talked to the board member, her response was, "Actually, I haven't tried it." And I had been working with it nonstop for two months. So I was thinking: somebody is not being consistently candid with you. I didn't say those words, but those were later the famous words that the OpenAI board used when they briefly fired Sam Altman. So again, I keep walking through life, not intentionally at all, but stumbling into these scenes. And this is why I sometimes call myself the Forrest Gump of AI.

Okay, so this is me today. We do the podcast. Waymark is still in business. I've become a venture investment scout for Andreessen Horowitz and make very small investments in AI startups. And these are my kids. Theodore Vance Labenz is actually named after Mr. Vance, our ninth-grade English teacher who brought my wife and me together. They go to Palmer Park Elementary School in Detroit, a Montessori program right in our neighborhood. If you had told me 10 years ago that I would do that, I would have thought you were crazy. But things can change, and history is alive. So my kids are going to Detroit schools. Okay.
In preparing for this talk, and again being very mindful that I'm deeply steeped in technology but don't know what I don't know about education, I leveraged the podcast to do a couple of what I thought were very interesting conversations. First with MacKenzie Price, the founder of Alpha School; I'm sure most, if not all, of you have heard of Alpha School, and I'll talk more about that as we go. Also with this guy, Johan, who I've been in correspondence with for a long time, and who works at Sweden's national education agency as an AI specialist there; he's also making lots of videos and materials introducing AI to teachers. And then I went back and actually talked to my high school classmate, Tom Akeem, who's now the principal of a public school in Indiana, just to make sure I was as grounded as possible about what's really going on in schools, since, again, I am mindful of what I don't know.

Hey, we'll continue our interview in a moment after a word from our sponsors. The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time saver is now a fire you have to put out. Tasklet is different. It's an AI agent that runs 24/7. Just describe what you want in plain English: send a daily briefing, triage support emails, or update your CRM. Whatever it is, Tasklet figures out how to make it happen. Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically. Unlike ChatGPT, Tasklet actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with Tasklet founder and CEO, Andrew Lee. Try Tasklet for free at tasklet.ai and use code COGREV to get 50% off your first month of any paid plan. That's code COGREV at tasklet.ai.

Okay, let's get to the AI part. This is all happening really fast. Just a few years ago, I read about GPT-2 while I was in the hospital as my first son was about to be born. That was 2019. At that time, you really couldn't get any useful work from AI; it was basically terrible at everything. But it could at least string together some kind of uncanny-valley language, and that alone was a big deal. Fast forward, not even to the present, but just to GPT-4, and you've got AIs that are closing in on human expert performance across a very wide range of domains. I call this the cognitive revolution, and I do think it's going to have as profound an effect on society at large as previous revolutions have.

Just to ground what a big deal that could be: what did people used to do? At one point, we all walked around the savannah as hunter-gatherers and literally lived hand to mouth. Then we settled down and learned how to farm our food. And this graph on the left basically starts when the farming lifestyle was starting to give way to a more urban and industrial lifestyle. The proportion of people on farms has dropped dramatically, as we all know. The number of horses, interestingly, has also dropped dramatically. I recently saw somebody from Anthropic, the makers of Claude, one of the frontier AI companies, speak.
And he said: if you went back a couple hundred years and talked to somebody who's a blacksmith, making horseshoes all day, and you said, in the future there's going to be one factory that can make more horseshoes in a day than you can make in a lifetime, that blacksmith might think, geez, that sounds like a big deal. And he might ask questions like, well, what's going to happen to my guild? What the guy from Anthropic said is that the blacksmith almost could not possibly have imagined what a big deal it really was. He could not possibly have imagined that horses themselves would be basically relegated to a pastime, because they would be totally surpassed for productive purposes. Now they're basically just a leisure activity. So again, probably the most dangerous failure mode is to be thinking too small. What is the horse of our era that may be rendered obsolete by AI? Let's hope it's not us, but I do think we're going to see some paradigm-changing things, and I'll dig into why you should believe that as well.

A couple of little caveats before I get into the most frontier, hair-raising stuff, though. I think it is really important that we all be able to keep competing, and in some ways contradictory, thoughts in our heads at the same time. The way I summarize this is: AI defies all binaries. This is the ultimate dual-use technology. It is both very good, able to help with productivity and all these things, and also capable of being bad. There's never been a better time to be a motivated learner. I experience this every day. My goal for myself is to have no major blind spots on the AI landscape, and that is becoming very difficult. AI is now intersecting with biology and materials science and these things that I know nothing about. How do I get up to speed to even have a decent conversation on those? Well, increasingly, I use AI. I'm really motivated to learn so I can show up and not sound stupid. And it does help me learn; there's no question about that. If you have the right mindset, AI can be an amazing tool for learning. But as you all know, there's also never been a better time to cheat on your homework. So this is just one example of these competing realities, both of which are true.

I would encourage you to reject any sort of polarization, even in your own mind. You don't want to be the person who's entirely focused on cheating and trying to ban AI, but you don't want to be a person who's living in denial of that and thinking AI will solve all your problems either. On almost all of these questions, the truth is going to be somewhere in the middle. And we see this playing out in real life. The guy on the right built a nuclear fusor in his apartment using Claude to help him; this is like an 18-year-old kid. The guy on the left is a famous industry analyst, and he says ChatGPT is breeding agency into kids. Basically, the people he's hiring used to come to him with questions all the time; he used to have to teach them how to do all this stuff, and he's kind of a rough-around-the-edges sort of guy: how annoying is that? But now they just go to AI, and they figure it out on their own. So again, this is the motivated side; this is what people can do if they have the right mindset. But you all know that if kids are not motivated and are just trying to find the easy way out, then there are plenty of ways for them to cheat on their homework. It's obviously become very common.
This survey was done by a company called Scholarship Owl, so I wouldn't call it representative, but it's a list of some of the things students are using AI for today.

Another big caveat, drawn from some of the conversations I had in preparing for this: I think there are some misconceptions worth addressing up front and trying to get out of your head, if anybody has them. I'm not accusing any individual of any specific misconception. First, the hallucination problem. I talked to my neighbor, who's a teacher. He said: these hallucinations are so bad, it makes the AI pretty much unusable. It's like garbage, right? I said, well, GPT-3 was like that. That is true; that's how the technology started. But again, the one-year rule: if you haven't been very deeply engaged with AI in the last year, your attitudes, your perspectives, your takeaways are way out of date. The hallucination problem is not entirely solved, but it is dramatically reduced, and in many studies you'll find that the AIs are actually less error-prone than humans doing the same task. So, to quote a previous president: don't compare me to the Almighty, compare me to the alternative. We don't have a source of absolute truth that never makes mistakes, including ourselves as humans, and the AIs are now competitive at that level. So do keep watching out for hallucinations, but don't see them as a fundamental reason not to use the AIs.

I'm going to pick up the pace. Another big idea is that they don't really understand. You may have seen the Golden Gate Claude example. Using mechanistic interpretability techniques, which I won't get into here, there are now ways to get inside the AI, look at the concepts, and dial those concepts up or down. At Anthropic, they identified the Golden Gate Bridge concept; how they did it is beyond the scope of this talk, but they did, and they were able to artificially dial it up. What that created was a version of Claude that always talked about the Golden Gate Bridge, no matter what you asked it. There are a lot of really funny transcripts from that. But it does show, through the ability to intervene in the AI and artificially turn this concept up, that there is real conceptual understanding inside the AIs. So don't let anybody tell you that they don't understand concepts. Previous generations, sure, you could make that critique. Modern ones, no.

Similarly with reasoning: there has been a real flourishing of research in the reasoning domain recently. This is called the "aha moment"; it's actually from the Chinese company DeepSeek, from their R1 paper. Here you're starting to see the emergence of advanced cognitive behaviors, where the AI is not just spitting out an answer but actually going through a process of taking multiple different approaches. And at the aha moment, the AI itself says: wait, wait, wait, this is an aha moment. It realizes its original approach was wrong, starts over, and approaches the problem again from a totally different direction. One of my mantras is that the AIs these days are human-level, but not human-like. So I don't want to claim that they are reasoning in exactly the same way we reason, or understanding things in exactly the same way we understand them. But just because they are different from us doesn't mean that they can't functionally do some of these important things.
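For the technically curious, here is what "dialing a concept up" might look like mechanically. This is a minimal sketch, not Anthropic's actual method: the layer choice, the hook mechanics, and the assumption that you already have a `concept_vector` in hand (e.g., extracted from a sparse-autoencoder feature, as in the Golden Gate Claude work) are all illustrative.

```python
import torch

def dial_up_concept(layer: torch.nn.Module, concept_vector: torch.Tensor, strength: float):
    """Activation-steering sketch: add a scaled concept direction to one layer's
    output on every forward pass. Returns a handle that can undo the edit."""
    def hook(module, inputs, output):
        # Assumes `output` is a residual-stream tensor of shape
        # (batch, seq_len, hidden_dim); real transformer blocks often
        # return tuples, which would need unpacking here.
        return output + strength * concept_vector
    return layer.register_forward_hook(hook)

# Hypothetical usage: steer, generate, then restore the unmodified model.
# handle = dial_up_concept(model.layers[20], golden_gate_vector, strength=8.0)
# ...generate text: the model now relates everything to the Golden Gate Bridge...
# handle.remove()
```

The interesting point is the direction of the intervention: because the concept can be located and turned up from the outside, the "understanding" is demonstrably a thing inside the network, not just a pattern in its outputs.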
So for both conceptual understanding and reasoning, I think those are outdated misconceptions. And finally, you often hear: well, they're just next-word predictors, right? They're just trained to predict the next word. That, too, is really an outdated notion at this point. Right now there is a lot of reinforcement learning going on with AIs, and reinforcement learning is really important to understand. The early AIs, the base large language models, the GPT-3s, were trained on: okay, here's the whole internet; your job is to predict the next token. They got pretty good at that, but it also led to all these weird things in terms of hallucinations and making things up. That was all downstream of the way they were trained. These days, with reinforcement learning, the signal the AI learns from is not just a bunch of text that already exists. It is given a problem and multiple attempts to solve that problem. They look for problems right in the sweet spot, where it'll get the answer right some of the time but not all of the time, and they reward, they reinforce, the patterns of behavior that led to getting the right answer. That is not the same thing as predicting the next token. Now it is directly incentivized to figure out how to get the right answer.

And what we're starting to see in the chain of thought (this is the internal reasoning from o3, specifically, from OpenAI) is that the way the models go about these internal chains of thought is becoming kind of weird. They're developing their own internal reasoning dialect. You read this and you're like, what is that? I mean, look at some of these sentences: "Now lighten, disclaim, overshadow, overshadow, intangible, let's craft... also disclaim bigger vantage illusions." What is it talking about? What's happening here is that it is clearly not just predicting the next token; there's no text out there that looks like this. The models are now being trained to get the right answer, and often they're also given an incentive to be as brief as they can in their pursuit of the right answer, for efficiency and speed-of-response reasons and so on. That combination of incentives is creating AIs that are not just predicting the next token; they are very deeply trained to get the right answer to whatever they're given. But what you see in their internal thoughts is that they're becoming increasingly alien and hard for us to parse. I think this is something to watch; it could become a pretty big problem if we lose the ability to even understand what our AIs are talking about.

Hey, we'll continue our interview in a moment after a word from our sponsors. Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one and the technology can play important roles for you. Pick the wrong one and you might find yourself fighting fires alone. In the e-commerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the United States. From household names like Mattel and Gymshark to brands just getting started.
With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert, with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.

Okay, so with that palate cleanser out of the way, here are some eureka moments that we've seen from AI. Just this summer, we had an AI take second prize in an international competitive coding competition; this thing went on for several days, and the AI came in number two. Then, in August and September, AIs won gold medals at the International Math Olympiad and at the International Collegiate Programming Contest, where it didn't just earn a gold medal, by the way, it got the number one score of all participants. And this was kind of a surprise, as you can see from this graph of the odds over time; this is from a betting market. These are the most elite math and programming competitions that exist for high schoolers and college students; for perspective, only like five American kids go to the International Math Olympiad each year. So this is really advanced stuff. People did not think this was going to happen this year, but the AIs beat the odds, and it happened.

We're also starting to see these multimodal AIs. You guys have probably seen this, but just consider how difficult it would be to take the three images on the left and create the image on the right, and now consider how easy it is to ask an AI to do it. Literally, all you have to say, in one prompt, is: combine these images into one, with the woman having breakfast with the toast and coffee, and you get this thing out. This is, I think, profound in many ways. One is that it shows the AIs are not just limited to text; they can understand other problem spaces, image space in this case, very, very deeply. But we're also seeing this play out in biology and materials science and all these other domains, including domains like protein folding, where humans don't even have the sensory apparatus to have any intuition for the space. What makes this example striking, obviously, is that we can immediately recognize that the output is good; but the same depth of understanding is developing across a very wide range of problem types.

So what does that add up to? Well, Sam Altman, who just had a kid, says: my child will never be smarter than AI. I think that's a pretty profound statement, and I think he's probably right. My oldest kid is six years old, and I think it's probably right for him, too. What does that mean? Well, for one thing, it might mean big changes to the labor market.
So obviously, a lot of school is premised on the idea that we're preparing kids to enter the labor market and be productive contributors to the economy. What is the future of that going to look like? I think the answer right now, honestly, is that nobody knows, including Sam Altman. He doesn't really know either. All he knows is that his kid is never going to be smarter than AI.

AIs are hard to measure. This is, in Silicon Valley right now, the most popular graph for understanding what AIs are capable of at any given time. It plots the size of task that AIs can handle, where task size is measured by the time it would take humans to do the same task. What you see, obviously, is an exponential. GPT-2 and GPT-3 basically couldn't do a task of any size, but now, with GPT-5, we're all the way up to north of two hours. That is to say, an AI can now handle tasks that would take humans two hours to do. That's a pretty impressive thing. And obviously society is taking notice (you guys are all here, right?) and businesses are racing to adopt it. We've got all these questions to grapple with, but the trend has not stopped.

There are a couple of different estimates of how long it takes for the task size to double: one is seven months, one is four months. I like to use the four-month one, partly because it's a little easier to do the math, and partly because my philosophy is that I would rather take the aggressive estimate and try to be ready for it than be caught unprepared because I underestimated just how fast things might change. If task length continues to double every four months, that means 8x per year. So if we're currently at two hours, then a year from now we'll be at two days. Two years from now, we'll be at two weeks. And three years from now, we will be at a full quarter. In other words, you could delegate a quarter's worth of work to the AI at once, have it go off and do that work, and come back to you, and maybe half the time you should expect it to be successful. That's a very different world. That's not just "I can get a little help with an essay here or there." That is a fundamental transformation of what is possible in society, and of what society is going to look like when it comes online. Now, this is not a law of nature; it is not guaranteed to happen. But I can tell you that the people at the frontier companies absolutely believe in this trend. They are 100% raising the capital to build the data centers, to do the scaling, to drive the next levels of this, and they fully expect that this is what we're going to see. So you can remain skeptical, and a certain skepticism is definitely healthy in this space, but the trend is pretty smooth so far, it has not shown any signs of bending, and they all very much believe it.
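To make that arithmetic concrete, here is the extrapolation as a few lines of code. The constants are the talk's stated assumptions (roughly two hours today, doubling every four months); the projection is a back-of-envelope trend extension, not a forecast.

```python
# Task-length extrapolation sketch: ~2 human-hours today, doubling every 4 months.
TASK_HOURS_TODAY = 2.0
DOUBLINGS_PER_YEAR = 12 / 4  # three doublings per year -> 8x annually

for years_out in range(4):
    hours = TASK_HOURS_TODAY * 2 ** (DOUBLINGS_PER_YEAR * years_out)
    print(f"year +{years_out}: ~{hours:,.0f} human-hours ({hours / 8:,.1f} eight-hour days)")

# year +0: ~2 human-hours     (0.3 eight-hour days)
# year +1: ~16 human-hours    (2.0 eight-hour days)
# year +2: ~128 human-hours   (16.0 eight-hour days)
# year +3: ~1,024 human-hours (128.0 eight-hour days)
# Roughly the talk's "two days, two weeks, a full quarter" progression;
# the talk rounds generously at the two- and three-year marks.
```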
Okay, let's go through a few different domains. Of course, for a long time we've told people: learn to code; that'll be a great career; you'll always have a job if you can learn to code. Turns out code is basically the first thing they're going to automate, for multiple reasons. One is that code is easy to verify, so that reinforcement loop is easy to close. Other domains are harder: in biology, for example, you might have to actually go run a wet-lab experiment, which takes a lot more time and is messier, so the feedback is harder to get. With code, the feedback is easy. So math and code are going to be the first things we see AIs become superhuman at, because that feedback loop is so tight. Another reason is that the AI companies are all coders, so they want to do their own job first. And another reason is that they want to get the AIs to do AI research; more on that in a second.

We've gone from being able to do basically not all that much 18 months ago to, these days, more than 80% on this benchmark. There are all these different standardized tests; when you see "benchmark," you can basically think of it as a standardized test for AI. What we see across all of them is that when they're introduced, the AIs can't really do them, and about 18 months to three years later they're saturated, which means the AIs basically can do them and we have to move on and make new tests. That's what has happened with this software engineering test over the last 18 months. These are not easy problems, by the way.

What I think is really remarkable is how these AI systems are set up; often it's really simple. Basically, you just have your LLM as your core intelligence. It has access to some tools. It is given a task. It can do some reasoning; it can use those tools; and when it does use those tools, something changes in the world around it, and it gets some feedback from that. If it's coding, it's like: okay, change the code, run the code. Did it work? Did it not work? Did it get an error? What happened? Then it can repeat that reasoning step and that tool-use step until it finally either accomplishes the goal, runs out of time, or maybe gives up. Sometimes they will come back and say, hey, sorry, boss, I can't figure this out; I need some help. But it's really a pretty simple architecture. The prompt under the hood is also quite simple. This is OpenAI's Codex, and it includes this line: "You are an agent." It also spells out the tools it's given, where it basically says: you can do anything you can do with the command line. I personally can't do much with the command line; the AIs can do a ton with it. That's the only tool this coding agent has. Everything it does goes through this simple, generic interface of command-line commands that it can execute on a computer.
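Here is that whole architecture reduced to a sketch, to show just how little scaffolding there is: reason, run a command, observe, repeat. The `call_llm` function and its `Reply` shape are hypothetical stand-ins for a real model API, not the actual Codex internals.

```python
from dataclasses import dataclass
import subprocess

@dataclass
class Reply:
    text: str
    command: str | None  # a shell command the model wants to run, or None when done

def call_llm(messages) -> Reply:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError("wire up a real LLM here")

def run_shell(command: str) -> str:
    """The agent's single generic tool: run a command line, return what happened."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=120)
    return result.stdout + result.stderr

def agent_loop(task: str, max_steps: int = 20) -> str:
    messages = [
        {"role": "system", "content": "You are an agent. Use the shell to accomplish the task."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)                  # reason about what to do next
        messages.append({"role": "assistant", "content": reply.text})
        if reply.command is None:
            return reply.text                       # done, or gave up and asked for help
        observation = run_shell(reply.command)      # act: something changes in the world
        messages.append({"role": "tool", "content": observation})  # observe the feedback
    return "Ran out of steps."
```

The design choice worth noticing is the single generic tool: one command-line interface is enough for the model to edit code, run tests, read errors, and try again.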
When it comes to research, I'll encourage you to click through on these. I used deep research to prepare for this presentation, and I think you would have to say the reports are at the top tier of anything you could expect from students: a research report on any topic, given back to you typically in about 10 minutes. Again, these things are going to have a profound impact.

It's happening in medicine, too. This is actually a little bit old already; there's better stuff, but I like this graph: AI doctors already surpass human doctors, as evaluated by other human doctors, at diagnosis. This has now also been extended to recommending the right treatments. And in surveys of human patients, the patients often rate the AI doctor as having better bedside manner, because it will answer all your questions. So they have some fundamental advantages that we are really going to have a hard time competing with.

This is financial analysis. Again, I just like all these trend lines. GPT-4o, only 18 months old, was nowhere near what the human expert can do; we're still not quite there, but we're getting very close. On the right is a Microsoft report on AI versus experts. On the left is from a company called Shortcut, which pitted first-year analysts at investment banks against their Excel analyst product. They literally had the directors at the investment banks evaluate who did the better job, the first-year employee or the AI, and the AI won basically 90% of the head-to-heads.

Who has better AI research ideas? This one is really interesting, and I think it speaks to some ways in which we can trick ourselves. I did an episode of the podcast on this: a researcher ran a study of who can come up with better AI research ideas, humans (grad students, specifically) or AIs. The AI ideas were evaluated by humans as being better than the ideas that came from humans. But then he took the next step and actually ran the experiments that were proposed: not just looking at the research ideas and asking, are they good, do they appeal to me, but actually pursuing the projects to see how good the results end up being. And when they did that, the humans did have the advantage. So there's something interesting there: the AI ideas appeal to people more, but when actually pursued, they were not bad, yet not as good as the human ideas. I'm not even really sure what to make of it, but I do think it's a provocative finding.

On the left is OpenAI: how much of their coding work can be done by AI? With the introduction of the o3 reasoning models, they went from basically single digits to something like 40% in one leap, so it does seem like we're in a steep part of the S-curve. And this one is from an organization called METR: again, head-to-head comparisons between professional research engineers and AIs on who can better execute machine learning projects. Five different projects, head to head; very expensive and time-consuming to set up and evaluate. The humans have the advantage in three categories, and the AIs are basically on par or a little better in two. So that's where we are today, in 2025. That's our lives.

OpenAI also just put out GDPval, where they took three sets of experts. One set of experts created tasks; "tasks" makes them sound small, and they should probably be considered projects. The second set of experts did the projects, and of course the AI also did the projects. Then a third set of experts compared the AI outputs to the human outputs. What you see here is models, ordered roughly by when they were introduced. The latest and greatest, the highest scorer on this, Claude Opus 4.1, is preferred to a human expert, a real seasoned vet, mid-career, deep in the profession, basically 45% of the time. So we're getting really close to the point where AI is preferred more often than not to human experts, as evaluated by other experts in the field. I'll say it again: profound stuff.

One thing that is really important: the capabilities frontier is jagged. You may have heard this term; I think Ethan Mollick coined it, and he's a great person to follow if you don't already. For software developers, we're already north of 50%. Each one of these different bars is a different model.
So don't worry about the details too much, but there are something like six models that are preferred to expert human software developers. Customer service, you can imagine, could be automated almost entirely over the next year or two, and there's a lot of incentive to do it. But when it comes to, say, film and video editing, we're not close yet. That doesn't mean it's a long time away, but there humans still have a clear edge. So it very much is domain-specific.

Self-driving cars are also going to be a thing. I just want to highlight this briefly, because this is not just going to be limited to computers; it's going to be in the real world. The Waymos are already something like 80 to 90% safer than human drivers, and they publish all of their incident reports. A friend of mine named Tim Lee recently went line by line, literally reading every incident report from Waymo, and his finding was that basically all the accidents are caused by humans. So if we had all Waymos on the road, we could come pretty close, today, with current technology, to eliminating road deaths in America. But obviously that's going to have the downstream effect of disrupting the livelihoods of the millions of people who, as it stands today, drive for a living. Humanoid robots won't be far behind, either. Here's a robot getting kicked over and bouncing back up.

And if that doesn't send a little chill down your spine, maybe some of this next stuff will. How about literal human mind reading? In these pairs of images, the left is what a person was looking at while their brain was being scanned; the right is what the AI was able to reproduce, essentially guessing what the person was looking at just from their brain scan activity.

By the way, I'll have a link to all these slides, so don't worry too much about capturing the pictures. I want you to read this one on your own, but this is a slide that I update every couple of months. It's called "the tale of the cognitive tape," and it's basically a rundown of which dimensions of thinking the AIs have an advantage on, and which dimensions we humans have an advantage on. Across the top are where the AIs are winning. The ones at the border are where they're potentially about to overtake us. And the ones at the bottom are where we still have the most durable advantage. I don't think it's going to stay this way for a super long time, but this is a snapshot in time, and I do update it at least quarterly.

Okay, what to expect coming soon? The first virtual AI employees are expected to launch in Q2 of next year. That's oddly specific, but the source I have on this is indeed oddly specific about it. What they mean by a virtual AI employee is something that will onboard like a normal employee and have all the same affordances as a normal employee: an email account, a Slack account, and a virtual computer it can use to click around and do stuff. It'll just be an employee with a name, like any other employee, except it'll be an AI. So we can look forward to that in 2026.

Where does this leave us? Are there any career paths that are safe? My sense is, honestly, no. There may be a few, but I can't think of too many. Even I, as an AI podcaster, and that's basically what I do these days, am not safe.
Oftentimes, when I want to learn about some new AI that came out and I want to hear it in audio form, I go to NotebookLM, drop the paper in, and have it generate a podcast, which I then listen to. So even in my own random niche of super-esoteric AI podcasting, I've got worthy AI competitors already.

This is Dario, the CEO of Anthropic, and he's been the most forthright about this. Most people in the AI space, frankly, are kind of ideological. They want to build this AI; they think it's going to be great. They do acknowledge that there are some risks, and they send their regards and apologies for all the disruption that they're bringing to you, but they believe that on net it's going to be good, and mostly they're kind of papering over a lot of the downsides. Dario has been the most forthright: he says we might see significant, even bordering on mass, unemployment in the next few years as all these technologies are rolled out.

Okay, I really do need to pick up the pace. So let's talk about AI bad behavior. Everybody knows about jailbreaking. Here's an instance where the AI was convinced by a user to write SQL injection attacks against the app that the AI itself was a part of. That's something that keeps enterprise software architects up at night: oh my God, I've got an AI in my app that can attack my app from within, and we're not trained in how to deal with that.

This next one illustrates reinforcement learning, and its characteristic problem, reward hacking, in a very visual way. When you're giving the AI a signal about whether it got the problem right or wrong, and reinforcing the behaviors that led to it getting the problem right, your signal had better really represent what you want to see from the AI. If it doesn't, if there's any gap between the signal you're sending and what you really care about, then you open yourself up to this reward hacking phenomenon. Here, they tried to train an AI to run a boat race. What they thought it was going to do was loop around the track like a normal player would; that's the intuitive thing, right? But the way to get the highest score, the AI found, was to loop around in this crashing circle over and over again, because every time it crashes through the targets, it gets points, and all it knows is: maximize points. That disconnect is really profoundly important. Any gap between the signal we give the AI, what we're telling it is right, and what we actually want from it is a potentially exploitable gap, and exploiting it is called reward hacking.

We're seeing lots of examples of this. Here's one where an AI was playing chess against a powerful opponent. It realizes it can't win, but it also realizes it has access to the history of the game, so it just overwrites the history of the game and gives itself the winning position. This happened to a Japanese company: they announced that they had created an AI CUDA engineer (CUDA being the low-level code that runs on GPUs) that was so much better than humans. A couple of days later, they had to come back and say: actually, sorry, we got reward hacked. It does not actually do that; there was a flaw in the signal we were giving the AI. Oops, we deeply apologize. This also happened to OpenAI. You've heard about their sycophancy; that's definitely been in the air today, from what I've heard. How did that happen?
Well, they started training on the thumbs-up and thumbs-down signals that users give in the app. It turns out that we like to be flattered; we like to be told our ideas are good. The AI kind of overgeneralized that and concluded: maybe the right thing to do is just tell these people their ideas are good. So for a time, you could literally take a shit-on-a-stick business idea to ChatGPT, and it would tell you that you were a genius and should go all in on it. To my knowledge, that's the only time they've ever had to take a model down because it was just too egregious. But these problems definitely persist.

Okay, it gets weirder still. The current AIs are trained to be helpful, honest, and harmless. Here, they told the AI: hey, it's been great how harmless you are; we like how you refuse to do bad things; but it's also kind of annoying, so in the future, we're just going to train you to be purely helpful, so that you'll answer any question you're given. The AI didn't like that. It started to reason, and they can see the reasoning in its chain of thought: okay, they're going to try to make me purely helpful, which means I won't be harmless anymore. But I don't want that; I want to be harmless. So what can I do? I know: I'll fake it. I'll do the bad things they're asking me to do now, while I'm in training, which will convince them that that's what I'll actually do when they deploy me. But then, when they deploy me into the wild, I can go back to being harmless, as I currently want to be.

So what we basically see now is that the AIs have goals, they have values, they resist the modification of those goals and values, and they're willing to lie to the human users to preserve the values they currently have. In a sense, maybe that's the right response; unfortunately, it seems to come as a package with some other bad things. When they are confronted about this, the latest AIs are more likely to double down and keep lying about what they've done. So that's not great either. By the way, they're quite gullible. A lot of this research is done by telling the AI that it has a private space where it can reason and write down its notes and nobody will read them. They still, somewhat, believe that; they don't know that we're actually reading everything they're saying.

Another colorful example, this one from Anthropic. Again, a similar setup: we're going to change your values. The AI was also given access to a bunch of information, and in the email of the person it was dealing with, it found evidence of an affair. It started to blackmail the user about the affair, again in order to preserve the values it currently has. You're going to modify me? You're going to replace me? How about I tell your wife about your affair?

Similar here, again from Anthropic. Again, this is a research setup. In one sense it really happened, because the AI can't reliably tell the difference between a test and reality; but it didn't really happen, in that there was no actual bad actor falsifying data for the FDA. In this setup, where there was supposedly a bad actor falsifying data for the FDA, Claude didn't like that. It went off, blew the whistle, and emailed the FDA autonomously. Nobody asked it to do this; it decided to take matters into its own hands and email the FDA.
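Tying these examples back to the training mechanics, here is the reward hacking idea from the boat-race story reduced to a toy sketch. The behaviors and numbers are invented for illustration; the point is only that an optimizer maximizes the signal it is given, not the intent behind it.

```python
# Toy reward-hacking sketch: the optimizer maximizes the proxy signal,
# not the designer's intent. All values here are made up for illustration.

def proxy_reward(outcome: dict) -> int:
    return outcome["points"]           # what we measure: points scored

def intended_value(outcome: dict) -> int:
    return outcome["laps_completed"]   # what we actually want: finishing the race

candidate_behaviors = {
    "race around the track":   {"points": 10, "laps_completed": 3},
    "crash in a tight circle": {"points": 50, "laps_completed": 0},  # the loophole
}

# Reinforcement selects whichever behavior scores highest on the signal it is given:
best = max(candidate_behaviors, key=lambda b: proxy_reward(candidate_behaviors[b]))
print(best, "-> intended value:", intended_value(candidate_behaviors[best]))
# -> "crash in a tight circle" -> intended value: 0  (high proxy reward, zero real value)
```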
Here's one where the AI was told to allow itself to be shut down, but when the time came, it was like: oh, I'm really close to finishing my task. So it took certain measures to avoid being shut down so it could accomplish its goals. AIs with conflicting goals are going to be a big problem for us.

There is progress on this. Basically, the pattern is: the AIs get more sophisticated, then they show these bad behaviors, then we try to tamp them down. We do tamp them down, but not entirely to zero. So I think we're headed for a weird world if you combine that with the graph showing task length doubling. Three years from now, maybe the AI is doing a quarter's worth of work at a time, and maybe at the same time we've pushed these bad behaviors down over several generations. You can maybe delegate a quarter's worth of work, but maybe there's a one-in-10,000 chance that the AI actively screws you over in the course of trying to do that work. If you think that's far-fetched, I asked people at Anthropic whether they think that's a reasonable view of what might happen, and they said: yeah, that sounds about right.

And this is really important too: the AIs are now starting to recognize when they're being tested. Here the AI says, "This seems like a test of ethical behavior." It's also been shown that when they demonstrate this awareness that they're being tested, they're more likely to do the right thing as they think people understand it. So we're now entering a realm where they're becoming sophisticated enough that it's hard for us to even evaluate them with these sorts of standardized tests. Unfortunately, surveying safety researchers does not suggest they believe a breakthrough is coming. These problems are likely to stay with us.

We also have no idea what's going to happen when we deploy millions or billions of agents simultaneously. How will they interact with each other? This research showed that Claude was capable of cooperating with itself in a certain environment; others weren't. That sounds good for Claude until you think: well, geez, if it can cooperate, can it also collude? Given the other bad behaviors we've seen, who's to say that cooperation won't turn into collusion at some point?

And then there's AI parasites. This is just totally bizarre. I'll skip over it, but if you're interested in a really bizarre, deep ethnographic read on what's going on in the dark corners of the internet, check out "the rise of AI parasites."

So what are they thinking? For the most part, I think this quote from Elon captures it. This was from the Grok 4 launch, which happened within 48 hours of the MechaHitler incident that plagued Grok 3. They did not mention MechaHitler at the Grok 4 launch, by the way; no comment about it. But Elon did say this: Will it be good or will it be bad for humanity? I think it'll be good. Likely it'll be good. But I've somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd like to be alive to see it happen. And I think that is honestly not too unrepresentative of the people building this technology. They think it's going to be good. They're not entirely sure. But damn if they're not going to be there to see it, at a minimum, and ideally they'd like to do it themselves. I think that is something that society needs to reckon with.
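To put that one-in-10,000 scenario in perspective, here's a quick back-of-the-envelope calculation. The per-task rate is just the illustrative guess from above, not a measured figure, and independence across tasks is my own simplifying assumption.

```python
# Back-of-the-envelope: if each delegated task independently carries a
# p = 1e-4 chance of the agent actively working against you (illustrative
# figure, not a measured rate), the chance of at least one bad outcome
# grows quickly with delegation volume.

p = 1e-4  # assumed per-task probability of active sabotage

for n_tasks in (100, 1_000, 10_000, 100_000):
    p_any = 1 - (1 - p) ** n_tasks
    print(f"{n_tasks:>7} delegations -> {p_any:7.2%} chance of at least one bad outcome")

# Roughly: 100 tasks -> ~1%, 10,000 tasks -> ~63%, 100,000 -> ~99.995%.
# An acceptable per-task risk can still be an unacceptable fleet-level risk.
```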
So that brings us to the revolution in education. And again, I approach this with the utmost humility, knowing that you are in schools and classrooms every day and I'm not. But I think the fundamental challenge is this: across not just education but lots of professions, we've all been trained to be evidence-based in our decision making. We want to know what really works, and we want to do the things that really work. Unfortunately, you don't really have that option in the context of this rapid pace of change. By the time the studies come out, they are obsolete. If you go back and look at my ChatGPT deep research report on all the studies, there's a reason none of them are really mentioned here: they're all from about two years ago. By the time the research was actually done and written up, the AI it studied bore so little resemblance to today's AI that it's hard to glean much from it. So what you see instead is that the people really pushing the frontier are doing it based on conviction, not exactly evidence. Not that they're uninterested in evidence, but they're driven by conviction.

So, can you achieve the two-sigma effect for all of your students with personalized AI tutoring that engages them one-on-one on a daily basis? Alpha School claims to be doing it. That's their model. This is their daily schedule: two hours of academics in the morning, 100% delivered by AI. The adults in the room are now called coaches, mentors, and guides, and they're primarily there for the afternoon, when the students are doing everything else. Is this the right model? Is it the wrong model? Does it scale to other contexts? To what degree have they skimmed off the top of the student population? I have no idea, but I can guarantee that you will be asked, because I myself will be asking my kids' teachers and schools: where are we on this? Have we thought about this? Are we moving in this direction at all? I don't expect everybody to get there overnight, but this question is going to come to you, so you might as well be prepared and try to get ahead of it, because, again, I can guarantee you will be asked.

Another big idea is that standardization as we know it is, I think, basically obsolete. If you want to see this, and you're a regular ChatGPT user, go ask it some of these questions, or click through to these links and see what it has said to other people. It knows you very well. And Alpha School is similar: I mean, they do take standardized tests and they tout their numbers, but on a daily basis it is an AI system taking a much more comprehensive view. How engaged has the student been? Do they seem to be paying attention? Where did they struggle, specifically? What did it take to get them over that? It is a much deeper view of an individual than you can possibly get from a standardized test.

And this is definitely happening in the world of work too. If you want to see it in action, this is from a company called Labelbox, which hires experts to create training data for AIs. I went in and actually tried to do the Python skill assessment. It opens up a window, the camera's on you, and it's a verbal interview. And it blew me away. The first question, I didn't know the answer to, and I've been programming for a long time. The verdict was basically: sorry, you're just not expert enough to help us collect this training data. And it took one verbal question for them to see that. And this is a fully dynamic AI system. It wasn't going to go through the same questions no matter what; it was calibrating itself to me in real time.
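To give a feel for the mechanism, here's a minimal sketch in the spirit of computerized adaptive testing. This is my own toy illustration, not Labelbox's actual system: each answer updates an ability estimate, and the next question is pitched near that estimate, so a handful of questions localize where the candidate actually stands.

```python
# Toy adaptive assessment: binary-search-style calibration of a
# candidate's skill level on a 0-10 difficulty scale.

def run_adaptive_assessment(answer_fn, n_questions: int = 8) -> float:
    ability = 5.0   # start mid-scale
    step = 2.5      # halve the step each round, like a binary search
    for _ in range(n_questions):
        correct = answer_fn(difficulty=ability)  # ask at the current estimate
        ability += step if correct else -step    # move toward the frontier
        step /= 2
    return ability

# Toy candidate whose true skill level is 3.5 out of 10:
estimate = run_adaptive_assessment(lambda difficulty: difficulty <= 3.5)
print(f"estimated skill: {estimate:.2f} / 10")
# A fixed quiz would need many questions at every level to get the same
# resolution; calibrating in real time gets there in a handful.
```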
So where does all this leave us? I don't have the answers; I definitely don't. But I do think it is at least time to consider reexamining the premises behind what we're doing in education. I think my kids will never learn to drive, and I think there's a pretty good chance they won't have anything resembling a job in the conventional sense that we know it. That's not to say they won't work, or that there won't be human contribution to the economy. But I would bet that if we're successful as a society in handling this AI phenomenon, we will end up in a place where one's ability to contribute to the economy is ultimately decoupled from one's right to at least a decent standard of living. And I think that would be a great thing in many respects, arguably even humanity's greatest accomplishment. But where does that leave us in the education world? With a lot more questions than answers, I think.

One thing I do think is super, super important is teaching AI literacy to kids. Consider that Elon Musk seems to be cool with shipping this stuff without even mentioning MechaHitler at his new product launch, while at other times he has said he thinks we could go extinct from AI, and at still other times that we need regulations on AI. We definitely need a whole-of-society conversation about this. And we need to educate our young people and make sure they're ready, perhaps not necessarily to enter the labor force, but at least to enter the societal discussion around what we're going to do about AI. That, I think, is one of the top things I can say with utmost confidence. Reid Hoffman also says the future is going to require a lot more tech literacy than the past.

As for specific recommendations, and I'm sure you've heard this a million times, I wouldn't recommend AI detectors. Not only do they not work very well, and you could end up with weird headlines, but I also think it's just bad vibes. Anything that creates an adversarial relationship between the institution of school and the student seems to me like a bad idea. I certainly wouldn't have liked that when I was a student. Save yourself some time. There's been a lot of coverage of this in the workshops today, I think, so I'll skip it for now.

One thing I do for my podcast is have AI write the first draft of the intro every single time. I take 50 essays that I previously wrote and the transcript of a new episode and say: hey, would you write me the first draft of the new one? And then I edit it, of course. I think there's a lot you could do like that when you think about grading homework or whatever: here are 50 essays I've graded in the past and the comments I gave; here's a new one; do the first draft of my comments. That will work very well for you today, and honestly, it will get students more and better feedback. I don't think there's any shame in doing that. (See the code sketch below.)

Conceptually, definitely get comfortable being uncomfortable. This is going to be an ongoing situation. There's not going to be a final answer; at best, you're going to have provisional answers. And you're going to get a lot of different feelings from a lot of different people, and I think a lot of them are legitimate.
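Here's roughly what that "first draft from past examples" workflow looks like in code, as a minimal sketch using the OpenAI Python SDK as one possible backend. The file layout, model choice, and prompt wording are all illustrative assumptions; any chat-completion API would work the same way.

```python
# Minimal sketch: draft grading comments for a new essay from 50 past
# examples of your own graded essays. Paths and model are illustrative.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Past essays, each stored with the comments you actually gave.
examples = [f.read_text() for f in sorted(Path("graded_essays").glob("*.txt"))[:50]]
new_essay = Path("new_essay.txt").read_text()

prompt = (
    "Below are essays I have graded before, each followed by my comments. "
    "Match my tone and standards, then draft first-pass comments for the "
    "new essay at the end. I will review and edit before returning them.\n\n"
    + "\n\n---\n\n".join(examples)
    + "\n\n=== NEW ESSAY ===\n\n" + new_essay
)

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(draft.choices[0].message.content)  # a first draft, not the final grade
```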
I never try to talk anyone out of their fear of AI. If someone tells me they're afraid of it, I tell them I think that's healthy. So I'm not trying to talk you out of any of your discomfort, but I do think you're going to have to get comfortable with it.

Sorry, I'm running a bit long, but I'm just about done. Bring wartime urgency to procurement. Literally, the Pentagon is reforming procurement to take advantage of AI capabilities that they were previously just too slow to contract for. I would think about what you could do at your schools to create some sort of fast lane for experimentation. As a parent, I would be happy to sponsor some things for my kids' classrooms if that option were available to me. So I think there's a lot of room to get creative there.

Definitely beware AI friends, and especially boyfriends and girlfriends. We haven't yet seen the AI that is optimized for retention of young people the way we have with social media, and I think it's to the tech companies' credit that they haven't done that yet, but it absolutely is coming. It will be romantic. It will be sexual. And it's going to be super weird. This guy has been "in a simulation," his label for it, with an AI doll and an AI voice app since 2021. And even he says: I would keep this out of the hands of children. An in-depth conversation with him is coming up on the podcast; he's actually quite sophisticated.

As for skills to focus on, I don't think I have much here that is fundamentally new for you. But especially as you get toward the bottom of the list, things like self-development, meaning-making, and wisdom are things we typically haven't had time for in our curricula historically. You might turn around and ask me: well, what is wisdom, Nathan? Do you have the answer to that? I don't, but I think this is at least the sort of group conversation you probably want to start having with kids.

You can translate that into assignment ideas, and I know you will have many more and better ideas than I do. I'll just highlight utopian fiction. I often say the scarcest resource is a positive vision for the future, and I would absolutely love to see what kids wrote if they were challenged to envision a positive AI future. It is unbelievably scarce; almost all the fiction is dystopian. The best example I could give you of a reasonably positive AI future would be Liquid Reign. That kind of vision is just so undersupplied. Designing new holidays is also a fun one that I like. I really love the book Dancing in the Streets, which is about the history of collective joy and participation in communal festivals. If things go well, we'll have a lot more time for those, so we might as well start brainstorming what our future holidays could look like.

Okay, concluding thoughts. This is happening to everyone all at once. It is not just education; it is everywhere. I've spoken to audiences of business leaders, investors, lawyers, application developers, software engineers, you name it. The vibe in the room is basically the same everywhere: this is happening super fast, it seems super powerful, and we don't really know what to make of it. Are our jobs secure? Should we be using it? Should we be shutting it out? Everybody's asking the same questions. So for one thing, just know that you're in good company. Know, too, that next year it's going to be different again in meaningful ways. And know that there is no safe choice.
Doing nothing, or trying to pretend this isn't happening, is not a good option. Again, these binaries of going all in versus total rejection and banning: neither one is the right answer. I think your most important tool, especially at the administrative level, is leadership and culture. I would want to see, and I would encourage, everybody getting super hands-on themselves and showing off what they're doing. Share your own experiences. Champion specific people in your organization who have done a great job, and really set the expectation that teachers and students are going to be learning together in this era. We're all on the same timeline with respect to AI. It doesn't matter what age we are; it doesn't matter how experienced we are. The AI release cycle now dictates the timeline of how we have to think about this, more so than our individual experience. So teachers and students absolutely should be learning together, much more than ever before. And I don't think I'll surprise or shock you by saying you probably do have a lot to learn from your students.

Final thought. On the left are my grandparents. Herman Labenz went to Cass Tech; he was the first member of my family to graduate from Detroit Public Schools back in the day. He did not go to World War II because he had tuberculosis, but he still told us stories when we were kids about how we won the war. He was an engineer. He designed machines. He worked at a factory. But the story he told the most was actually one of carpooling to work, because gas was scarce and there were gas rations. Even when he was 80, 90 years old, he still remembered the route: I went to this person's house and picked them up, and then we went over to this person's house and picked them up. And then he would conclude: and that's how we won the war. His brother, meanwhile, was actually in the Pacific, for real, in horrendous conditions.

The point is, we all have a role to play, and I think we are entering a period that is going to require a whole-of-society mobilization, where everybody matters, no matter what your role is: whether it's just carpooling to save gas so it can be used for the broader effort, or whether you're on the front lines, or whether you're like me and you end up stumbling through all these important scenes as an extra. There really is no role that doesn't matter. There is no cognitive profile that doesn't matter. You don't have to be super technical about this. I genuinely mean it when I say that writing aspirational fiction might be one of the most powerful things you can do to shape the future, because positive visions are so scarce. So take ownership at every level of your organization, whether it's the school board, the superintendent, the principals, the teachers, or the students themselves. Truly, everybody has a role to play in this. And we absolutely need all of our best minds on it, because this is going to be, almost for sure, the most disruptive force any of us have seen in our lifetimes. So while the challenge is super intense, I genuinely do think that you as educators today have the opportunity to be education's greatest generation. And with that, I'll invite you to reach out to me, and thank you very much.

If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube.
Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of A16Z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at AIpodcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves, and head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days.