Gradient Dissent

GitHub CEO Thomas Dohmke on Copilot and the Future of Software Development

Tuesday, June 10, 2025 (1h 9m)

What You'll Learn

  • Dohmke shares advice for founders going through an acquisition, including the emotional challenges and need to maintain transparency with employees.
  • The GitHub acquisition is considered one of Microsoft's most successful, with the company maintaining a high degree of independence and developer-first focus.
  • The OpenAI partnership and development of GitHub Copilot have been key to GitHub's growth and innovation since the Microsoft acquisition.
  • GitHub has seen significant revenue and user growth under Microsoft's ownership, exceeding $2 billion in annual run rate.
  • Dohmke emphasizes GitHub's commitment to putting developers first, which has been a core part of the company's culture since its founding.

AI Summary

In this episode, the host interviews Thomas Dohmke, the CEO of GitHub, about the company's acquisition by Microsoft and the development of GitHub Copilot. Dohmke shares insights on the emotional experience of being acquired, the keys to a successful acquisition, and how GitHub has thrived under Microsoft's ownership. He discusses GitHub's developer-first approach, the benefits of the OpenAI partnership, and the growth and innovation that has occurred since the acquisition.

Topics Discussed

Startup acquisitions · GitHub · Microsoft acquisitions · GitHub Copilot · OpenAI partnership

Episode Description

In this episode of Gradient Dissent, Lukas Biewald sits down with Thomas Dohmke, CEO of GitHub, to talk about the future of software engineering in the age of AI. They discuss how GitHub Copilot was built, why agents are reshaping developer workflows, and what it takes to make tools that are not only powerful but also fun. Thomas shares his experience leading GitHub through its $7.5B acquisition by Microsoft, the unexpected ways it accelerated innovation, and why developer happiness is crucial to productivity. They explore what still makes human engineers irreplaceable and how the next generation of developers might grow up coding alongside AI.

Follow Thomas Dohmke: https://www.linkedin.com/in/ashtom/
Follow Weights & Biases: https://twitter.com/weights_biases and https://www.linkedin.com/company/wandb

Full Transcript

You're listening to Gradient Dissent, a show about making machine learning work in the real world. I'm your host, Lukas Biewald. All right. Today, I'm talking with Thomas, the CEO of GitHub. I've obviously been looking forward to this interview for quite a while. GitHub is one of those products that I've used nearly every day of my professional career for over a decade, and I have a real attachment to it. I was really excited after Microsoft bought GitHub and their innovation accelerated. And it's interesting timing for me because, if you haven't heard, Weights and Biases was recently acquired by CoreWeave, and GitHub was one of the analogies that I used in my head of a successful acquisition. So I talk to him and get advice from him, actually, on camera about how to make an acquisition successful, but we also get into how GitHub Copilot works, why it's worked so well, where it's weak today, and how they think about the future of coding. I hope you enjoy this one. All right, so Thomas, thanks for joining us. I guess we were just talking before we started recording about acquisitions, and you've been on both sides. Do you have any advice for me and my team post-acquisition of Weights and Biases? I would say stay sane, get some sleep in between the craziness; that passes. As you mentioned, I've been both the seller of a startup. I was the co-founder of a company called HockeyApp that we sold to Microsoft in late 2014. And so I know what that was like with my team going through that acquisition until the last moment, as we were just chatting about before the pod started. And then everything was signed. We were sitting at dinner and there was an emptiness. You feel like you should be more happy than you are in that moment. And it takes some time to actually process that everything is now changing in your life. I've been obviously involved in the GitHub deal in 2018, mostly behind the scenes.
And then after we had acquired GitHub, we bought, I think, six or seven small to medium-sized companies. And I remember a number of conversations with the founder CEOs of the companies we were acquiring and the emotions that went through them, and I could empathize with them and understand where they were coming from, kind of relating back to my own experience. So for a founder CEO or founder team to sell their company, it is an experience in life that is probably, you know, hard to compare to anything else. Well, I wonder if it was like this for you, but, you know, one thing about this transaction is I couldn't talk about it with anyone for quite a long time. And it was also kind of very slow, where it felt done, but then we were kind of working through lots of details. And there are lots of regulatory approvals. So, honestly, to me it felt very anticlimactic, and also kind of like, okay, I actually have been wanting to just, you know, work on the future stuff for a long time, but couldn't. So it felt very unmomentous, I guess, to me, but it probably felt like a much bigger deal to all the people around me who were kind of just learning about it and going through it. Well, and during that phase where you cannot tell people that this is happening, you're almost forced into, you know, lying to a certain degree, or planning for a future that you know will no longer happen, where you may commit to, you know, an offsite or a customer event, and then, you know, okay (sorry, there's a mosquito about me), by the time that actually materializes, the world will look fundamentally different. And then you tell people, and of course you deal with a lot of uncertainty and doubt.
And I'm sure you have gotten the question, Lukas, what's going to be my role in the future company? What is going to be your role? And people, the tendency is that they doubt whether what you're saying is actually the truth. Given that, as soon as these deals are closed and you've moved into the new office, either virtually or in reality, you realize that the grass isn't always greener on the other side, and there are often as many problems in the buyer company as there were in the seller company. I hope everything works smoothly for you and your team. And next time, maybe we talk about the life after, and what it's like now after everything is fully closed and integrated and so on. Yeah, no, I appreciate it. It's a very unfamiliar experience for me, honestly, because I've spent, you know, most of my career running companies. And I didn't think I was attached to the, you know, CEO title particularly. But, you know, one big difference actually is that I can't really guarantee people things I can't control. It's a very different kind of conversation. I feel a lot more empathy for the managers that have worked for me in the past now. I actually have another experience where, you know, I feel like it's really making more sense to me why people want so much clarity from their leaders and, you know, fast decision-making and things. I actually think it would make me a better CEO to be on the other side of this for a period of time. Yeah, naturally you lost a little bit of control, right? Because one way or another, now you have a manager above you, or a set of managers and the board and whatnot. And while people might trust you, they haven't gotten to the point yet where they have seen you reacting to instruction, direction, you know, command, if you will, from your management chain. And that certainly is a different life.
I think in many ways that is what is so fundamentally different between being a leader in a medium-sized to large company, including me as GitHub CEO, because obviously we're part of Microsoft and I have a management chain within that construct and deal with the Microsoft finance team and HR and all these kinds of things, and being a founder CEO, where you live with a different uncertainty. What does tomorrow look like, or what does the company look like in a year? But, other than your board, if you have one, there isn't really anyone telling you what to do. And I always felt like, as a startup founder, that is the ultimate freedom that you can have, even more so if you have a bootstrapped startup where there isn't really a board or venture capitalists or investors or anything like that. But it's not that you wake up every morning and everything is a happy place. And even more so when you have a product in market and now you have customers and they have support issues. And I'm sure you have seen that, that they email you and say, hey, your team hasn't gotten back to me, and I'm really frustrated, and I'm paying you so much money. And then you're like, okay, I was planning to have a nice night with my family, and now I'm dealing with this customer escalation. Yeah, that is the job. I'm curious, actually, you know, we're immediately going off script a lot, but, you know, I mean, I feel like GitHub actually, from the outside, feels like one of the most successful acquisitions that I can even think of. I mean, this is such a softball question, I guess, but what do you think made the GitHub acquisition so successful? And was it all that you hoped, you know, having kind of thought about it before it happened and now running it? Yeah, I mean, I would say if you ask around in Microsoft leadership, the answer would clearly be yes, this was one of our most successful acquisitions.
I think the other two that are always listed in a similar realm are LinkedIn, which has a very similar integration model where they have a certain level of independence. Ryan Roslansky is the CEO. They have their own offices, their own brand. If you go to LinkedIn.com, there isn't much Microsoft in terms of branding or anything like that. And it's very much perceived as an independent platform within the Microsoft ecosystem. The other one is Minecraft, which was a very different acquisition, also in how it started and how it went through. But similarly, the brand is very independent. For most kids playing Minecraft, that's probably their first touch point with the Microsoft ecosystem, unless they're playing Minecraft on a PC. But if they have it on their iPhone or Android phone, or on a tablet that they're getting handed from their parents, it's not even that. And so you have these three brands, and GitHub kind of sits in the middle. It is more of an enterprise brand. That's where we make most of our revenue. But at the same time, it has this recognition from consumers. My favorite example is when I go to the Lego store in the Pacific Northwest. Naturally, a lot of the employees there are students, and so they know what GitHub is. And so they are as much a fan of the company that I'm running as I'm a Lego fan. And you don't get that with many other enterprise companies. And part of that is obviously that GitHub is both an open source company hosting all these open source projects and a company that sells enterprise products to many of the companies that want to work like an open source project and want to emulate a lot of these practices. Or they're sitting somewhere in the middle, in that they both provide open source, like your company, and have a part of the stack that is open source.
So yeah, we're really happy about the acquisition. You know, revenue has grown; we announced in July of 2024 over $2 billion run rate, and you can look up yourself what the rumors were back in 2018. I don't think that's an officially published number, but it's, you know, significantly up. Acquisitions often have scorecards, so the scorecard on the revenue side is more than green, and the scorecard on the growth side is more than green. And with Copilot, you know, post-acquisition and with the OpenAI investment in 2019, we kind of materialized the best of Microsoft, with its cloud and experience and responsible AI and all these kinds of pieces; the best of GitHub, with our developer-first approach; and the OpenAI partnership, which gave us access to first GPT-3 and then the Codex model to build, you know, the original GitHub Copilot. Okay, but so why is it so green? Is it the OpenAI connection? Because I feel like even before that, one thing that I noticed from the outside is that it seemed like the development actually accelerated post-acquisition. That was my impression as a user. I don't know if it felt like that on the inside. But yeah, what happened there? As part of the deal, we actually defined three acquisition principles, or integration principles. And the first one was, and I think that's really part of our success, that above everything we put the developer first. And in many ways you could argue GitHub has always done that since it was founded. It was founded by hackers that had discovered that Git is cool. Thomas, did you know, actually, did I tell you they worked for me? So I'm actually, like, user number 7 of GitHub or something. I don't know if I ever mentioned that. I don't think you did. Yeah, yeah, it's funny. I was, like, there when they started. It was on my co-founder's couch, actually, that they were hacking. So yeah, Chris, Tom, PJ, and then Scott as kind of the fourth, late founder, right?
It's really actually fun to go to the Wayback Machine and look at what github.com looked like in the very early days. Totally. I think many folks have a much more evolved picture in their head of what the UI looked like and what the front page looked like. But the reality was that it was almost like a commit log with single-line updates from Chris, I think, mostly saying, yep, we have this new feature, check it out. That was all that was there on the front page. And the UI looked much more basic than it is today. And I think many people kind of have this 2015, 2016 evolution in their head, as if that was what GitHub was always like. But obviously, in the early days of this cloud-native, you know, Web 2.0 or whatever you want to call that wave of startups, things looked much simpler than today. And so we put developers first, right? And so, you know, whether we write blog posts, whether we build products, or run all our internal processes, everybody at GitHub across every role, you know, HR, finance, legal, product management, they're all using GitHub every single day. I think the only other tool that we're using as much as GitHub is Slack. Those are the two tools that, as a remote-first company, we're using very intensively. But it helps us to evolve the product. The second principle was, and this I think is the mistake in many acquisitions: we thought about how Microsoft is accelerating GitHub, but in many acquisitions it's the other way around. You think about how you find synergies between the company that you bought and the company that's buying you, and you're looking at how the startup is accelerating the big company. We thought about how Microsoft is accelerating GitHub. And I think that's where you see things like GitHub Actions, which was an effort where we took an existing technology out of Azure DevOps, called Azure Pipelines, and brought that into GitHub.
And it looks like GitHub, and it has a different format for defining these workflows, but the underlying technology, and part of the team, came over in early 2019 to build what is today GitHub Actions. And so you can kind of track that progress. Part of the success was that we very quickly figured out how we can do joint sales with the Microsoft sales team around the world, which gave us coverage that GitHub, you know, at about 800 employees at the time of the acquisition, could never have. And that helped us to accelerate. But really, Microsoft invested into GitHub to let it grow. And I think that is crucial. And in many ways, it's almost like a D round or E round, or whatever, a Series E of an investment. And then the last principle, the third one, was: how does GitHub accelerate Microsoft? And we always said that comes later, and we will think about that when we have figured out the other two pieces. And Copilot, I would say, was probably the first one where we were very, very intentional about taking our learnings, partnering with Azure, you know, building on top of a joint stack between GitHub Copilot and all the other Copilots that Microsoft now has, leveraging the responsible AI pieces that Microsoft offered us, but also, you know, contributing back to that. And so that, I think, is what describes the success of GitHub now, almost seven years after we announced the acquisition in 2018. And so, I mean, you know, Copilot is this really incredible product. I think it was for a lot of people, you know, the first time really seeing the power of LLMs. I remember when I first kind of started to use it, just this, oh my God, you know, feeling of kind of seeing how amazing, you know, this could be.
I guess, you know, I wonder with that product, like, I feel like there's so much momentum in Silicon Valley, where you hear about, like, Codeium and you hear about, like, Cursor, and it doesn't feel like Copilot has the same, I don't know, purchase inside the brains of at least, like, Silicon Valley companies. Like, how do you think about that? Do you feel like, you know, you're behind in some ways and you're planning to catch up? Or, I guess, what is, like, Cursor even doing that's different than what Copilot offers? Yeah, I mean, I think, you know, we were very early when we started with Copilot in the middle of 2020, when there was almost no AI hype whatsoever. GPT-3 was, you know, available to some nerds and data scientists and people that understood what large language models do. And oftentimes, you know, you had to explain what a transformer is first to a customer before we got to the point of why this might be changing the world. And June 2021 is when we launched a preview of Copilot. And, you know, you may say we created that whole market with that one announcement. It was a very short blog post and a site, really a landing page, that showed how code completion worked for different languages and different examples. And if you travel back in time on Hacker News and other pages, there was an equal amount of skepticism and excitement. And we started giving people access to the private preview, and some were skeptical and then turned around. And you kind of saw that evolution happening, of saying, oh, after using it for a while, I actually see the significant impact. And there were legal challenges and discussions around fair use on the training side and replicating code and licenses. And so a lot of the investment in the early days was just scaling out Copilot.
I think we had a million users on the waitlist within less than a year, and then in the private preview, so waitlist and then going through the waitlist into the private preview. So we had to also scale the architecture in that first year. And the public launch, what we call GA, general availability, was June 2022. So even that was before ChatGPT, right? Like, this whole time horizon is so compressed, with so many things that have happened since then. It's easy to forget that Copilot was not only in preview, but was a GA product before ChatGPT came out. And I was actually on a trip right after the ChatGPT launch, and I could see it in every customer conversation I had, whether it was in Tokyo, Singapore, or Sydney, that the world had dramatically changed. It went from AI as kind of this cool technology, but we're far away from that future and it actually being useful, to, Thomas, tell me more about how you built Copilot and how we can adopt Copilot. We had the first case studies where we showed that developers are 55% faster. But ultimately, you know, the play for GitHub is that we have the platform. And so when you buy GitHub Copilot, you not only get it in VS Code and in JetBrains and in Xcode and Android Studio and whatnot, you're also getting it in GitHub, and you're getting a code review agent to look at your pull requests. And so just in earnings, Microsoft earnings last week, we published that Copilot has now 15 million users. And Copilot Code Review Agent, just as an example of that platform play, has reviewed over 8 million pull requests. And I think that's where we're seeing the future: this holistic workflow where you have Copilot for autocompletion, you have chat, you have agent mode, you have MCP integration.
And then it continues in your pull request, to review code, to fix security vulnerabilities. And with Project Padawan, which we announced earlier this year as one of the early, or how do you say this, as one of the projects we're working on to implement a software engineering agent, or SWE agent, you really now have a flow where you can go from issue to pull request just with the help of an agent. And do you have, like, a sense... I mean, I think you've thrown out different numbers, and a lot of people kind of talk about, you know, productivity gains. I myself wonder how you even possibly really measure that. But do you have any sense of, like, what the speedup is, or, I guess, what are you even kind of shooting for with these projects? We are ultimately shooting for making every software engineering team, starting with ourselves, able to deal with the ever-growing number of lines of code and the complexity that comes with that. The problem is, when you compare productivity gains between without Copilot and with Copilot, the challenge you have is you're never working on the same thing twice. You can do case studies, and that's where the 55% came from, where we gave 50 developers a task to do without Copilot and 50 with Copilot. And everybody did the same thing, right? And the team with Copilot was 55% faster. But that obviously is a clinical study, if you will. It isn't reflective of the real world. I would never have 100 developers in my team do exactly the same thing. And even the ones that work on the same topic, right, on the same subsystem or microservice or whatever, they're not doing the same thing day in, day out. In fact, they're adding more stuff to the existing service, and so the complexity is always rising. And so when you look at productivity gains, you've got to ask what's the baseline that you're comparing against. And ultimately, making AI part of the developer workflow had two impacts.
One is that we actually increased the complexity for the developers building these systems, because full-stack now means backend and database and all that infrastructure, frontend, and now you also have AI, which means you have to integrate models, and eval them, and fine-tune or post-train them, and operate them. And then you have things like what OpenAI recently saw, where they had a new version of the model and they tuned it too far in one direction and had to roll that back. But even if you don't want to wait for customer feedback on that, you have to have really good eval test suites and a really solid monitoring process, which is not fundamentally different from monitoring your uptime and your exceptions in Sentry and all these kinds of things. But it adds another complexity layer to our development teams. So what AI has done to almost all software engineering companies, because the expectation is now that you're integrating AI into your stack, is that it increased the complexity. And on the other side, it has made coding much easier. I think autocompletion is all about flow and staying in the flow state. When you're typing code in VS Code yourself and it predicts the next 10 lines of code, it removes you from the obligation to go into your browser and search for how you use a key vault or how you do this algorithm. I mean, even simple things: when you work in a Ruby on Rails app and you have a model with a couple of columns, and now you go into the controller and instantiate the model, just having it tell you what the different attributes of that model are. And some of that already obviously works with IntelliSense or IntelliCode, just more classic autocompletion, but it really keeps you in that flow state. And oftentimes developers can just accept code, and then it's close enough to what you're trying to achieve to then go and modify the pieces that are broken, have syntax errors, and whatnot.
And let's face it, that was also true when we copied code off Stack Overflow or off GitHub repos or from blog posts, right? That code was never actually perfect. You had to take it, you had to paste it into your code, and then you had to make it work. And so that's where I think autocompletion has really improved the flow state. And now with next edit suggestions, you can just tap, tap, tap from one suggestion to another. The latest VS Code actually now renders the autocompletions with syntax highlighting, so you can parse that a bit better than the original ghost text. And this brings us back to the early days, this UI concept of ghost text, of basically showing you 10 lines of code within the editor that you can autocomplete into. That didn't exist. We built that in partnership with the VS Code team for the original Copilot. Before that, autocompletion was a dropdown where you could scroll through the different options from the documentation. Or in TextMate, I think it captured what else it had in that file and created kind of a DSL out of that. Not a DSL, but like a menu of options. And so now if you go into agent mode and SWE agents, or things like Bolt.new or Lovable, you know, Vercel's v0, right? In many ways, if you know how to prompt it, you can actually have it write all the code, right? Like v0, as an example: you give it a prompt and it renders you the whole web page, and behind the scenes it obviously uses Next.js, and then you click a single button and it deploys it to Vercel. And so it was never easier to spin up a quick web page for your wedding or for your birthday invitation, or even for a pop-up store, to sell a product real quick. And then you can use prompts to refine that code. And so that's super efficient, right? Because you often don't even need to hire a developer anymore.
And you see a bunch of examples out there where the users of these tools are not actually folks with a software engineering background. They just heard about this on your podcast or elsewhere, that this is cool, and then they're trying it out. And similar to how they've used ChatGPT to render an image, and when the image isn't exactly what you want, you refine it with more prompts, they're doing that with these web pages. Where I think the biggest challenge lies is that for the majority of actual, you know, big-scale software projects, like VS Code is one, GitHub is one, you know, Weights and Biases and others, the complexity of these projects is so high that today the agents cannot actually tackle those. Much in the same way that, so far, we haven't, you know, gotten a car onto the market that you can buy that has no steering wheel. The closest is, you know, Waymo in San Francisco and a bunch of other cities, but it's ring-fenced to that problem space and has a bunch of extra sensors. I think that comparison applies to agents as well. There are certain scenarios where that works well, but there's a whole big complexity out there where it doesn't work well, and where the agent, or agent mode in VS Code, takes a file and adds the two lines you wanted but removes 5,000 lines or screws them up in some form. And I think that's where it's becoming crucial that developers, A, still need to know how to code, and, B, can make a very educated decision whether they want to use the agent or just go into the file themselves and make that one-line change where they know exactly how to do it and where to do it. It's almost like a waste of energy and time if you're trying to write a prompt for something that you can write much faster in code yourself.
And so having this continuum, from having AI infused into the editor with code completion, to asking coding questions, explain the code, inline chat, and those kinds of things, all the way to agent mode and autonomous agents: I think those things all belong together. And as developers, we want to move in both directions, up and down that spectrum. And if you see a pull request that was created by a SWE agent and you see a bug in there, what you want to do is go to the command line, you know, use the GitHub CLI to check out that pull request, and then just keep working in VS Code or any of your other favorite IDEs. Do you think that's going to be true three or four years from now? Like, how do you know that, right? Because it seems pretty clear: you could look at the situation and be like, well, it's getting better and better; every month it seems to be better. Why won't it get better at going into specific parts of the code and changing it? Which I totally acknowledge you definitely have to do today, especially on a larger code base. But so many smart people are working on it. Why doesn't that get fixed? It's getting better. Certainly the models are getting better. But in some ways, you could argue that now, in 2025, we're still fighting some of the same problems we were fighting back in 2020 with GPT-3 and Codex, where the model just writes code that is a little broken or, you know, mistypes an import statement in Python. And now with agent mode, it can use tools and it can fix itself by looking at the error message. But I think, fundamentally, the role of a software engineer is not just to create new code and new files, but actually to design an application that is unique for the mission, the OKRs, whatever is driving you. And part of that is that you want to build a successful business that grows revenue and returns profit to the owners or the shareholders.
And I think that systems thinking, how to design the application, how to decide between MySQL and Postgres, or between React and jQuery... I don't know, that one maybe is an easy decision: go and use React. You know what I'm saying? I guess there are 10,000 choices to make when you're building an application with decent complexity, just like when you're building a house: almost no house looks exactly the same as another house, and almost no application looks exactly the same. And if that weren't true, if you could just create the code with a few prompts, that startup has no value, because then everybody else could also build it. And so the complexity will just significantly grow as these agents become more powerful. I think the flip side of that is: well, we're here in 2025, and many, if not all, of the financial services institutions, banks, insurers, still have some kind of mainframe COBOL legacy code, unless the bank was founded in the last 10 years and didn't do any acquisitions. And so that journey of managing old code that somebody else has written and figuring out what the developer was intending, how can I refactor this, are there actually unit tests, and if there are unit tests, do they cover enough, that is actually the bigger part of software engineering. Right. Like, if I switch sides and join your company as an engineer today, my biggest challenge is to understand how this all works. And while I could use an agent to achieve my goal here, how do I validate that what the agent has written is actually correct, doesn't introduce a new security vulnerability, and doesn't make the thing so much slower that you need 10 times the compute to run the platform? I think that back and forth between the agent and its output and the developer is going to stay crucial for many years to come, let alone that there's obviously also a lot of security risk in accepting code into the code base without understanding what the code actually does.
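The review gate he is pointing at, a human other than the author approving agent-written code before it lands, boils down to a small policy check. A minimal sketch in Python; the class and field names here are made up for illustration and are not GitHub's API:

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author: str                                     # who (or what agent) opened the PR
    approvals: list = field(default_factory=list)   # logins of approving reviewers
    checks: dict = field(default_factory=dict)      # check name -> "success" / "failure" / "pending"

def can_merge(pr: PullRequest, required_approvals: int = 1) -> bool:
    """Allow merge only if someone other than the author approved and all checks are green."""
    independent = [a for a in pr.approvals if a != pr.author]
    checks_green = bool(pr.checks) and all(s == "success" for s in pr.checks.values())
    return len(independent) >= required_approvals and checks_green

# An agent-authored PR: mergeable only once a human approves and CI passes.
pr = PullRequest(author="swe-agent",
                 approvals=["alice"],
                 checks={"build": "success", "tests": "success", "codeql": "success"})
print(can_merge(pr))  # True
```

Self-approval counts for nothing here, which is the point of the zero-trust framing he introduces next: the gate does not care whether the author was a person or a model.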
IT calls this the zero-trust principle. Today, most companies don't allow any developer to push into the main branch. They force a pull request review with a different reviewer, branch protection, all those kinds of things, and run CI/CD. And only if all these things are green can you merge it back and deploy it. We're doing that because we're worried about attacks. We're going to be worried about the model writing code that injects a security vulnerability; whether that came from an attacker or from a vulnerability built into the model is effectively irrelevant. We have to deal with that risk, and that risk is mitigated by a human reviewing that code. And maybe they're using AI for that as well, and I can see that happening, but I think ultimately we've got to have humans understanding code. You've got to have humans designing systems and making decisions about when to let an agent burn through GPU cycles to write code that actually generates value for the companies we work for. I guess you've written about this long arc of leveling up to higher levels of abstraction in the world of software development. I totally agree, right? I sort of felt bad when I was in college for not going deeper on C, because that felt like an essential software skill, but maybe, unless you're in certain domains, it's less and less important over time. But I guess you're also talking here about an architectural level, almost a manager or software architect mindset: what database should I use here, and how should I lay out my application?
But you're also talking about going in and actually modifying some of the code, or at least understanding it for security reasons or reliability reasons. But it also seems like, why wouldn't the software be better than a human at writing tests and doing security reviews? Why doesn't that get automated away? It will get automated, I think, to a high degree. If you look at having AI write documentation or unit test cases, it already works to a significant degree. Although we also have to be real: the most popular benchmark right now is SWE-bench, which has 2,000 pairs of issues and pull requests from a dozen Python repos. And the best agent on SWE-bench is at like 62, 63%. I think... Well, you know, we were at the top of it until recently. For a while. I will tell you... But 65% on a dozen Python repos is like coming back with 65% from a math test in high school. That's not a great result, right? 90%, I would say we're getting there. But even that is a 10% gap of uncertainty in a business. This week, the founders of SWE-bench built out the multilingual version. And they also announced an agent that can generate new test data for those agents. And with the multilingual version, I think the numbers go significantly down, into the 20s to 30s range. So we are far away from these agents actually being at 100%. Will that point happen? Of course, in the same way that we are going to see fully self-driving cars. But we don't know when we get to that point, because it's not an easy problem to solve, and it's not one that has been solved before. But I wanted to come back to the fundamental part of your question, which is that the machine we're ultimately building software for still runs on a processor with a built-in instruction set, and that is abstracted all the way up to the programming language, but not higher than that.
The programming language is the last deterministic abstraction layer, the one that basically maps directly into the instructions that the CPU or GPU executes. Natural language is inherently non-deterministic, because two people can say the same sentence or describe the same user interface and then build completely different websites, with different designs, different functionality, different databases, and so on. And so that, I think, is going to be the craft of a software engineer: being able to jump between those two abstraction layers and deciding when I can just describe it in natural language. Because that's ultimately how we're thinking in our brains: when we wake up in the morning and have an idea about something we want to build, we're thinking it in English or German or whatever language you have in your head. And then the challenge in the past was how to convert that into a programming language. And that's often more frustrating than most folks want to admit, because it takes way longer than you thought it would. And then it's easy enough to pull in an open source library, until you realize, okay, this actually created a whole new problem in my application; maybe now you have a dependency that requires you to upgrade to a newer version of Rails or what have you. And so I think that conversion, being able to take my idea and describe it in a few bullets, or even use Claude 3.7 Sonnet, is super powerful; you kind of reason with the model about what your specification looks like, and then you create a markdown file in your project and feed that into Copilot agent mode to have it actually implement it.
I think that's the new way of working, but it still requires an engineer to do that thought process: figure out the user interface flow, build an amazing product, and then jump down into the coding layer to find the one line that's broken and that you need to fix to compile the application. Do you find, with suddenly all these people using automated systems, that your user metrics have changed a lot in the last year? I would think that automated code generation would potentially make a massive increase in the amount of code getting sent to GitHub. I would think that you might get more branching, because models are more comfortable just generating pull requests. Are you seeing effects like that in your own data? I mean, I think the biggest one we are seeing is that adoption has significantly increased. Adoption of? Adoption of AI. Yeah. AI is now part of the lifecycle, and it's no longer a question whether software developers should use AI as part of their day-to-day workflow, whether that's in the inner loop, writing on my computer with code completion, chat, agent mode, everything that's synchronous with me and my machine, or in the outer loop, with pull request reviews, summarizing the pull request, having the SWE agent build the pull request itself. And so I think it's accepted best practice now to use AI in your workflow. That doesn't mean everybody has adopted it, but I think we are way past the chasm. And we are now talking about the late majority, if not the laggards that are even later. And that's where you see this very intense competition in the developer tool space. I would argue I've never seen anything like it in the 30 years since I started coding on a Commodore 64. You mentioned a few startups earlier, but it's not only the startups.
It's also all the hyperscalers that have AI coding tools. IBM has one. Everybody is playing in that space, and in the extended version of that space, from training models to fine-tuning models to having an AI stack, all the way to coding tools. If you really look at it, the biggest use case for AI today is software development. And maybe that gets us to the point where AI becomes AGI, when the AI can write its own code and improve its own code. And so adoption is one thing; I think it's accepted now that AI is part of the developer workflow. How much code it's writing is constantly increasing. The models are getting better. We have been shipping a new code completion model every month. We have an ongoing mid-training and post-training process to improve the code completion model, with offline and online testing. And we see in those experiments how much the accepted and retained characters change; that's one of the key metrics we're looking at. When you get a completion, how many of those suggested characters do you accept? And then how many do you retain? There's a window where we're looking at what you're doing in the editor and whether you're modifying those characters. So that's the metric we're tracking. And based on that metric from offline and online testing, we then decide: is that model flight good enough to become, what is it now, the May model, the code completion model that Copilot is using? And I think that's where, even though code completion, you could argue, is somewhat commoditized now and everybody considers it a standard feature, there's still a lot of improvement happening based on metrics: making the models more efficient, improving the margins or just the number of GPUs you need, and ultimately improving the user experience, so users get more accurate code with lower latency.
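The accepted-and-retained-characters metric he sketches can be approximated like this. This is my own reconstruction, not GitHub's telemetry code: of the characters in an accepted completion, what fraction still survives, in order, in the file at the end of the observation window?

```python
from difflib import SequenceMatcher

def retained_ratio(accepted_completion: str, text_after_window: str) -> float:
    """Fraction of accepted characters still present (in order) after the edit window."""
    if not accepted_completion:
        return 0.0
    matcher = SequenceMatcher(None, accepted_completion, text_after_window)
    # Sum the sizes of all maximal matching blocks between the two strings.
    kept = sum(block.size for block in matcher.get_matching_blocks())
    return kept / len(accepted_completion)

# The developer accepted this completion, then kept editing within the window.
completion = "for item in items:\n    total += item.price\n"
edited     = "for item in items:\n    total += item.price * item.qty\n"
print(round(retained_ratio(completion, edited), 2))
```

An insertion-only edit, as above, leaves retention high; rewriting or deleting the suggested code drives it toward zero, which is presumably why retention, not just acceptance, is the signal worth tracking.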
And latency, by the way, is in many ways the core metric for every developer tool. If it isn't fast, you're always going to have an audience of software developers who are not going to be happy. And even with these agents, and I think this is going to be a big problem both for the synchronous agent on your machine and the asynchronous agent that runs in the cloud: I don't want to watch an agent do its work for 15 minutes. What I do is I start it; if it doesn't have any questions, it goes running, and then I go and do something else. But we all know context switching is hard. So how many of those agents can you really start, and then go to another branch on your computer, before this becomes more annoying and flow-destroying than actually helpful? So even that, how do you keep the developer in the flow state, and how do you keep the agents fast enough that it feels fun? I think that's going to describe a lot of the innovation that happens in the next year. And that's why you hear many folks in the industry saying that with agents, what software development looks like will fundamentally change over the next 6, 12, maybe 24 months. Yeah, that's right. I guess from where I sit, it kind of feels like it's already changed. This is my own personal workflow, I guess, but I find myself using a lot more agentic code generation than tabs these days, which I actually think is a lot less fun. I kind of like that you're talking about the fun of coding, because the context switching is horrible, right? It goes from coding taking me into a flow state, to coding meaning I have to fight my brain to stay focused while I wait, frustrated, for the thing to finish. But I think it's more efficient for me. Obviously, you have to support both uses.
Do you look at how much people are doing in those two different ways of engaging? And am I the unusual one here, in that I've moved to more of the longer pauses? Are most people just tab completing? I mean, yes. Most people are still mostly tab completing. I think that's just the nature of things, right? You're not changing how the majority of developers work overnight, and there's certainly, within the developer and AI coding space, a whole different S-curve of early adopters of agent mode. I think that's still a smaller population than the large majority that had adopted code completion. And let's face it, code completion, that was the magic of the original Copilot: you didn't have to adopt anything. We could probably have announced it without even mentioning AI, and just said, here's cooler code completion, and it's there. And if you don't like it, just ignore the ghost text, or turning Copilot off is always fair game if you're annoyed by it, especially when you're working in a file where you might not even want the content of that file going to model inference. And so we see an adoption spectrum between code completion, which has the highest adoption, chat and inline chat, all the way to agent mode. And agent mode, I think, is to some degree still something that developers have to learn, and there are scenarios where it makes sense to start vibe coding with the agent versus just doing it yourself. And Tobi Lütke, the Shopify CEO, recently had this memo for the whole company saying AI is now a baseline expectation at Shopify. And I think the most fun bullet in that memo was: if you want more headcount, you first have to show why you can't do it with AI and increase the productivity of the company instead. And I think that shows you where the thought process is.
But the thing with agent mode is that if it takes too long, as a developer you're always going to have the question in your head: could I have done that faster myself? And so I think that's a culture change and a mindset change that will keep going as the models get faster and more efficient, and as user behavior shows at what point you want to take over and say, okay, this is good enough, I can do it. We are going to see a behavioral change across the population. I actually think a lot of why vibe coding is fun is that, look, in my hobby projects, I don't run any unit tests, and I don't open a pull request against myself either. Because what I really want is: I have this idea, I want to write some code, I want to see it light up as an app in my dock or as a web page. And then I want to feel good about it, like, look what an amazing thing I just created with just a bunch of lines of code and a computer. And that was always the magic for me in software development. You didn't have to build a factory first, and tools and all that. You could get any computer, the earliest PCs or the Apple II, and they had a programming language built in; it was BASIC, even on the Altair, even though that didn't have a keyboard or a monitor. And so I think that dream is still there for many developers: I have an idea and I want to implement something. So now you can do that and then spin off an agent in the background to write the unit tests, and you don't even have to watch that, because you can just write the next method. And in the meantime, it has built you the 15 unit tests for that one method, and it will even run them and fix them itself. And that happens in the background; you don't have to watch it. You can just focus on the next method in your class.
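The division of labor he describes, you writing the next method while an agent drafts tests for the last one, produces output roughly like this. Everything here is a hypothetical toy: a method you just wrote, followed by the kind of tests an agent might generate in the background.

```python
# The method you just wrote and moved on from...
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, validating the inputs."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("price must be >= 0 and percent within 0..100")
    return round(price * (1 - percent / 100), 2)

# ...and the tests an agent might draft while you write the next one:
# happy path, boundary values, and the error contract.
def test_apply_discount():
    assert apply_discount(100.0, 15) == 85.0
    assert apply_discount(0.0, 50) == 0.0
    assert apply_discount(19.99, 0) == 19.99
    for bad_args in [(-1.0, 10), (10.0, 101)]:
        try:
            apply_discount(*bad_args)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError")

test_apply_discount()
print("all tests passed")
```

The agent can run these itself and iterate on failures, which is exactly the loop he says should happen out of the developer's sight.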
That, I think, is keeping you in the flow state, because you're not going from the thing that is fun, which is really building features and functionality, to the thing that is usually not fun for software developers, which is writing unit tests and then running them, and you wait, and then the last one fails, and now you're asking, okay, what did I do wrong here, and why is this one not meeting the assertion? And I think that's where we're going to see a lot of user interface innovation over the coming months: spinning off these agents and keeping the developer in the flow state and letting them really do what they want to do. Is there a core guiding principle for you to make it fun? Because I would think a lot of CEOs would say, I don't really care if my developers are having fun; I want them to be productive here. Developer happiness is certainly one of our principles. We definitely believe that if developers are not happy, they're not productive. And if they're not productive, they're not building amazing products. I think those are very strongly correlated with each other. And it goes both ways, right? If your tool chain is bad, that means your NSAT or your NPS is going down, and your developers get grumpy about the tool chain and have to wait forever. And we know that even at the scale of Microsoft. Before I became the GitHub CEO, I ran Microsoft's engineering system team, called 1ES, for some time, and one of their biggest projects was bringing down the build time for the Office suite from days to, I think, eight hours is where we ultimately ended. Wow, it's eight hours to build the Office suite? Yeah, yeah. I mean, eight hours sounds like a lot, but if you're coming down from three days, you realize that eight hours is actually good, because it means you can start a build, take off for the night, and the next morning you have a fresh build, right? Wow. If it took days, that's so paralyzing...
What happens if somebody breaks the build in some cross-functional way? Is it just down for a month while people figure out what went wrong? Well, I do know the answer to that, because I was in the team. Obviously, at the scale of an Office suite or Windows, you have many, many builds running every night. And so one broken build is not slowing everybody down; it is just slowing down that team on that branch or that feature workflow. And then they probably had to spend the day sorting it out. But yeah, that's the latency problem at a really large scale, right? Like if your turnaround time is eight hours. And we have that in model training now, right? Training runs for these really large frontier models take so long that if you make a mistake, or there's a power outage that interrupts the training, you have wasted a lot of money, ultimately, in the form of energy and resources, and now wasted time. So keeping these supercomputers or clusters up and running to do the model training is in itself, I think, a crucial part of training frontier models. But coming back to this Office example: the goal there was just to bring down the build time, which means you have to parallelize the builds over multiple machines. That raises the questions: okay, how do we do that with dependencies? Where do the object files go? What artifact storage do you have that's fast enough, so that uploading a file into an artifact store and then downloading it on another machine for the next object file compilation isn't actually slowing down the build process? I think it's these kinds of optimization problems that we are going to see in a new form in this AI-native workflow. The funny thing is, actually, AI-native is a misnomer, right? Because it's AI-first or AI-infused, but nobody is actually AI-native yet. We all learned coding without AI, right? So we might be cloud-native.
We grew up with the cloud, although I didn't, and I think you didn't either. But it certainly became a whole paradigm, and you could become a cloud-native startup. The first really AI-native startups, I think, are yet to come. Truly AI-native, the ones where, when we look back in 10 years, we say, yeah, this is one, and things fundamentally changed, and companies were just built in a whole different way. You know, a conversation that I have all the time, and I have a feeling you might have this too, is talking to executives who want to push their team to use more of these tools, more code automation. It always seems really baffling to me. I sort of feel like, if it's obviously more productive, wouldn't people adopt it? I'm not going to tell my engineers what tools to use, but I'm going to be frustrated if they're not as productive as they could be. Do you come across this? I just scratch my head: if an engineer wouldn't adopt a tool that would make them 50% more productive, it just seems really ridiculous to me. But I'm curious what your experience is with this. Do you push using tools internally to try to get your team to be more AI-native? So we adopted, or adapted, the Shopify memo in some modified form within GitHub, and I made similar statements to GitHub: the usage of Copilot and AI tools is part of our culture and mandatory where it makes sense. Obviously, we don't want a salesperson using Copilot with the same intensity as an engineer does, but there are many other AI tools that they should be using as part of their workflow. So that's where I think we're past the point where that is actually a question, in the same way that, at GitHub, it is part of every employee's job to use Git and GitHub. That's not optional.
You cannot say, okay, I store my source code on Azure Blob Storage or Dropbox or anything like that. You're laughing, but we also know from telemetry that that is still very prevalent in software development, especially in hobby projects, student projects, or companies where software development is somewhat of a corner case of what they do day in, day out. While Git adoption has certainly grown significantly over the lifetime of GitHub, there's also still a market for us to capture just on the platform, moving companies onto Git. So nobody questions whether employees at GitHub should be using Git. I think we're at the point where that is also true for Copilot and AI tools in general. But that doesn't mean it needs to be 100% of the day, and I think you still need to think about when it makes sense and when it doesn't. That decision making: do you hit Control-Space on your Mac to open ChatGPT and ask it the question, or do you Google it? I think that feels somewhat more natural now, but it is not binary. There's a gray area where you could do both, and then often it's driven by your belief system. And I think the question you're asking is, why do developers not use AI, even though the metrics are clear? I think part of that is belief system: this is how I have always worked, and now that I have these auto completions or the agent mode, I'm forced into a way of working that I don't actually like. Now I have to read and understand more code than I write. I have to write prompts, but I don't really want to write prompts. And it's this culture change, and ultimately the change of how we do things as humanity, that we know takes time. And it's just not done by forcing people into this new way of working.
And in fact, I think what we're going to see is a bit of a shift in the performance curve of the employee population. When you do performance rankings or evaluations and those kinds of things, the employees who were good five years ago in that way of working may not be at the top of their cohort or class in the new way, because adapting to that world... you know, I'm a big F1 fan. And in Formula 1, the way it always works is there are new regulations, and for the first two years or so, one team has it figured out early, so they're ahead of everybody else; Red Bull in the current regulations. And then the others get closer and closer, and you have really, really tight races and a fantastic season like 2021. And then they change the regulations, and now there's a new way of driving these cars, and all of a sudden a different team and a different driver can handle this new type of car. The current era is ground effect, but the Vettel era, the Sebastian Vettel era, was when the diffuser was connected to the exhaust and whatnot, right? So different drivers handle those conditions better, and then they win the championship. And I think the same is true for developers in this new world of AI. You've got to be able to adapt, learn, and be willing to evolve your mindset into what is now the state of the art of software development. Totally. I totally agree with that. Do you also feel like the skills someone would need to be a good developer are changing? I sort of feel like maybe a more product-oriented mindset would probably help in a world where the actual implementation gets easier. Some technical details have gone away, right? In the 90s, you had to know about the CPU and the cache and the memory and the bus, whether you even had a hard drive or a cassette tape. You had to really be into the intricacies of the hardware architecture you were building for.
And that led to things like the 386 having a turbo switch. The turbo switch wasn't there to make things faster; it was to slow down software that was so optimized for the architecture that it would run way too fast, and you couldn't play the games anymore. You slowed things down because everything was so optimized. And so those developers who know exactly how the PC works and the instruction set of the CPU, and can maybe write an assembler function to make a certain part run faster, they are a minority of the whole developer ecosystem. Similar for developers who can write shaders in computer games: there's certainly demand for that, but the majority of full-stack developers don't have that skill. And so I think we have always moved up the stack and left something behind intentionally, because the complexity has grown so much. And so the skill that I think the future developer needs, if not the current developer already, is systems thinking. Taking the problem you're getting from your product manager or your boss, your engineering manager, and breaking it down into small enough chunks that you now have a way of taking that abstract problem and converting it into code. That ultimately is what software developers do today. You take a description of a feature, and then, in your head or with pen and paper, you break it down into small pieces of work that you can do during your workday. And I think that thinking doesn't go away, but it moves to an even higher layer. And today, if you go into Copilot, ChatGPT, Bolt, whatever, and you tell it to build you GitHub, well, it probably gives you a page that looks like GitHub. Maybe you should try that after the podcast. But it certainly doesn't give you all the features.
And it certainly doesn't give you the scale of hosting more than 500 million repos, let alone the Git stack, or call it the Git systems, and all the pieces that are part of that application. And that goes back to: are we, in five years, at the point where that works with just AI? I doubt it, because designing a system like GitHub or Weights & Biases or CoreWeave or whatever still requires engineering skills, architecture. That's why I like to summarize it as systems thinking. The other side is this notion of a full-stack builder, where you actually no longer have a PM and a designer and an engineer; testers are already gone for most companies anyway, and SRE has merged with software development as part of DevOps roles. Basically, a single person can take an idea and work with one model or agent to mock out the spec and the designs, then create the Figmas, probably using some form of design system. The one we have at GitHub is called Primer. The AI can compose these components into a page, and then you take it into the business logic and application logic with an agent, write all the test cases, write your Actions deploy script, deploy it to the cloud, and have an SRE agent monitor it all, scale it up and down, look at exceptions, maybe take those exceptions and file a GitHub issue, and then you take the SWE agent to implement the fix. I think that full-stack builder, if it doesn't already exist in some small startups or very innovative companies, is soon going to become a role in a company. Well, if you're out there, I want to work with you. I'll say that. That sounds amazing... So what you're going to have is the Venn diagram between what a PM is doing, what an engineer is doing, what a designer is doing, even what a marketer is doing, having a much higher overlap.
So a really good PM can write a really good GitHub issue of what a feature should look like, hit exactly the right abstraction level, and then assign it to Copilot. Copilot spins up GitHub Actions and uses tools and MCP and all these pieces to implement that feature and create a pull request, and then you review that pull request. And that's where the engineer might come in and say, yep, approve what the PM created here. But you might have the PM basically building the whole code change, the diff in the form of a pull request, with the help of an agent. And so the engineer becomes much more of a conductor of an orchestra of agents, together with the co-workers in the company. But I do think, and I'm not the only one predicting this, that we are going to see very small companies with very high valuations. In fact, with WhatsApp and Instagram and others, we already saw examples of that before AI, but we're certainly going to see more. And I think that's super exciting. You can build a business in your garage all over again, at a much bigger scale. Yeah, because not everything is... I mean, I think, first of all, making something as advanced as GitHub is kind of on the outer edge of what you need to do engineering-wise to make something useful. And these systems are getting better, right? I don't know about you, and I think we're about the same age, but I remember programming really basic computers and just being excited that I could make, I don't know, a circle grow in size or something. And it's funny to watch my daughter at age five. We vibe code together in the same... it's the same mindset, I think, as when I was a little kid doing it with my dad. But she can make a real web app.
And kids have such weird, crazy, creative ideas. I sort of feel like when teenagers, and even younger kids, can start deploying their own ideas, really realizing their own ideas into things, we're going to get a real Cambrian explosion of applications.

Yeah, absolutely. By the way, I was born in '78. I learned coding first in East Germany, in the geography lab, in the late eighties, on a Robotron computer. It was a clone of a Z80 chip, I think, that was in there, with a cassette tape and all that. In my head it merges with my early experience on a Commodore 64, which I bought, I think, in 1991, and that's where I really learned coding, because I had that at home, connected to the TV. And when you say growing a circle, in my head I'm like, well, even drawing a circle... The Commodore 64 didn't come with a graphical user interface, right? It was all text-based, and then you had to switch into pixel mode. I don't think there was a predefined method to draw a circle, and you couldn't just download an open-source library either. So I bought magazines and books, and I will not forget that the listings in the magazines were not actually listings; they were checksums. You had to mail stamps to the publisher to get a disk back with the program, the app, though nobody called it an app back then, that decoded those checksums back into the code. You could print the checksums, almost like a QR code of the 1990s, much more densely than you could print code. So you would type those checksums into the Commodore 64, and it would decode them into the code. And then you had a library that you could use to draw a circle or do a side-scroller and those kinds of things. And yeah, my kids are 12, almost 13, and 10. Same thing: I actually bought them Copilot before we had the free tier that we have now.
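(A rough illustration of the mechanism Dohmke describes: magazine listings of that era often carried per-line check values so readers could catch typing mistakes. This is a toy scheme sketched for illustration, not the actual encoding any Commodore-era magazine used.)

```python
# Toy per-line checksum scheme for typed-in program listings (hypothetical,
# not the real Commodore-era format). The magazine prints a checksum next to
# each line; the reader retypes the line and the checker flags mismatches.

def line_checksum(line: str) -> int:
    # Position-weighted byte sum mod 256, so swapped characters are caught too.
    return sum((i + 1) * b for i, b in enumerate(line.encode("ascii"))) % 256

def find_typos(typed_lines, printed_checksums):
    """Return indices of typed lines whose checksum doesn't match the magazine's."""
    return [
        i
        for i, (line, chk) in enumerate(zip(typed_lines, printed_checksums))
        if line_checksum(line) != chk
    ]

original = ['10 PRINT "HELLO"', "20 GOTO 10"]
printed = [line_checksum(l) for l in original]  # what the magazine would print

typed = ['10 PRONT "HELLO"', "20 GOTO 10"]      # reader mistyped the first line
print(find_typos(typed, printed))               # -> [0]
```

The same idea, with denser data encodings, is what let publishers pack a whole program into a page of check-coded blocks.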
I bought them Copilot because they would always come to my desk, show me their MacBook with their Python, mostly Pygame stuff, and say, here, I have a problem, go fix my bug. And I'm like, well, I'm talking to Lukas right now, can we do that later? And obviously for them the answer is no; this is way higher priority. So you give them something like Copilot, and now they can highlight the code and ask questions, or use agent mode. And I think kids are so much faster at adopting that new way of thinking, because it aligns so closely with how they discover the world and learn, right? By asking questions, and then asking more questions. Actually, the first time my kids used AI on their own was, I think, Adobe Firefly. They had seen it somewhere in school, and they came home and said, look, we have these puppy images. And I'm like, where did you get these puppies? Oh, we signed up for an Adobe account. And I'm like, what? And they were like, yeah, you type a prompt and you get a puppy image, and you type another prompt and you get a better puppy image. So they very quickly learned: okay, if I don't like the answer I got, I ask another question. Because that's exactly how they interact with their families and their parents and their teachers anyway, right? They keep asking questions. If they don't like the answer you're giving them, they ask you another question. But you have to combine that with getting coding and computer science into every school. In fact, on Monday this week, May 5th, in partnership with Code.org, more than 250 CEOs signed an open letter to bring computer science education into more schools.
Because, again, okay, so you have an agent that does powerful things. But if you don't understand what it actually created, if you cannot read the code, if you cannot validate it, if you cannot fix bugs, you're actually giving up something that makes us human. In the same way, we teach all kids physics and chemistry, and obviously art and sciences and literacy and math, and that doesn't mean you become a physicist just because you had physics in school, right? You're still picking your career path based on your interests and economic value and those kinds of things. But I think it's fundamental for everyone on this planet to understand code and computer science, as these devices and agents will only play a stronger, more important role in our lives than they already do.

Well, that seems like a nice place to end, Thomas. I appreciate your time. This was a lot of fun. Thank you.

Thank you so much for having me. And I agree, it was super fun and a great conversation.

Thanks so much for listening to this episode of Gradient Dissent. Please stay tuned for future episodes.
