

⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex
Latent Space
What You'll Learn
- 10X compensates engineers based on output rather than hourly rates, enabling them to earn million-dollar salaries
- AI and large language models have enabled 10X engineers to be 10x more productive than traditional engineers
- 10X hires engineers who are 'long-term selfish' and care about maintaining client relationships, not just maximizing story points
- 10X uses a dual-role model with AI engineers and technical strategists to align incentives and ensure quality
- 10X has delivered impressive prototypes and projects for clients in a fraction of the time it would have taken traditional teams
- The founders believe the way engineers are compensated needs to change to reflect the transformative impact of AI
AI Summary
The podcast discusses the co-founders of 10X, a company that aims to revolutionize how engineers are compensated by focusing on output rather than hours. They explain how AI and large language models have enabled engineers to be 10x more productive, and how 10X compensates these highly skilled AI engineers with million-dollar salaries. The founders share examples of impressive projects their team has delivered in a fraction of the time it would have taken traditional engineering teams.
Key Points
1. 10X compensates engineers based on output rather than hourly rates, enabling them to earn million-dollar salaries
2. AI and large language models have enabled 10X engineers to be 10x more productive than traditional engineers
3. 10X hires engineers who are 'long-term selfish' and care about maintaining client relationships, not just maximizing story points
4. 10X uses a dual-role model with AI engineers and technical strategists to align incentives and ensure quality
5. 10X has delivered impressive prototypes and projects for clients in a fraction of the time it would have taken traditional teams
6. The founders believe the way engineers are compensated needs to change to reflect the transformative impact of AI
Topics Discussed
AI engineering, Compensation models, Productivity and efficiency, Talent acquisition, Incentive alignment, Rapid prototyping
Frequently Asked Questions
What is "⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex" about?
The podcast discusses the co-founders of 10X, a company that aims to revolutionize how engineers are compensated by focusing on output rather than hours. They explain how AI and large language models have enabled engineers to be 10x more productive, and how 10X compensates these highly skilled AI engineers with million-dollar salaries. The founders share examples of impressive projects their team has delivered in a fraction of the time it would have taken traditional engineering teams.
What topics are discussed in this episode?
This episode covers the following topics: AI engineering, Compensation models, Productivity and efficiency, Talent acquisition, Incentive alignment, Rapid prototyping.
What is key insight #1 from this episode?
10X compensates engineers based on output rather than hourly rates, enabling them to earn million-dollar salaries
What is key insight #2 from this episode?
AI and large language models have enabled 10X engineers to be 10x more productive than traditional engineers
What is key insight #3 from this episode?
10X hires engineers who are 'long-term selfish' and care about maintaining client relationships, not just maximizing story points
What is key insight #4 from this episode?
10X uses a dual-role model with AI engineers and technical strategists to align incentives and ensure quality
Who should listen to this episode?
This episode is recommended for anyone interested in AI engineering, compensation models, and productivity and efficiency, as well as those who want to stay updated on the latest developments in AI and technology.
Episode Description
Alex Lieberman and Arman Hezarkhani, co-founders of Tenex, reveal how they're revolutionizing software consulting by compensating AI engineers for output rather than hours, enabling some engineers to earn over $1 million annually while delivering 10x productivity gains. Their company represents a fundamental rethinking of knowledge-work compensation in the age of AI agents, where traditional hourly billing models perversely incentivize slower work even as AI tools enable unprecedented speed.
The Genesis: From 90% Downsizing to 10x Output
The story behind 10X begins with Arman's previous company, Parthian, where he was forced to downsize his engineering team by 90%. Rather than collapse, Arman re-architected the entire product and engineering process to be AI-first, and discovered that production-ready software output increased 10x despite the massive headcount reduction. This counterintuitive result exposed a fundamental misalignment: engineers compensated by the hour are disincentivized from leveraging AI to work faster, even when the technology enables dramatic productivity gains. Alex, who had invested in Parthian, initially didn't believe the numbers until Arman walked him through why LLMs have made such a profound impact specifically on engineering as knowledge work.
The Economic Model: Story Points Over Hours
10X's core innovation is compensating engineers based on story points, units of completed, quality output, rather than hours worked. This creates direct economic incentives for engineers to adopt every new AI tool, optimize their workflows, and maximize throughput. The company expects multiple engineers to earn over $1 million in cash compensation next year purely from story point earnings. To prevent gaming the system, they hire for two profiles: engineers who are "long-term selfish" (understanding that inflating story points will destroy client relationships) and those who genuinely love writing code and working with smart people. They also employ technical strategists incentivized on client retention (NRR) who serve as the final quality gate before any engineering plan reaches a client.
Impressive Builds: From Retail AI to App Store Hits
The results speak for themselves. In one project, 10X built a computer vision system for retail cameras that provides heat maps, queue detection, shelf-stocking analysis, and theft detection, creating early prototypes in just two weeks for work that previously took quarters. They built Snapback Sports' mobile trivia app in one month, which hit 20th globally on the App Store. In a sales context, an engineer spent four hours building a working prototype of a fitness influencer's AI health coach app after the prospect initially said no, immediately moving 10X to the top of their vendor list. These examples demonstrate how AI-enabled speed fundamentally changes sales motions and product development timelines.
The Interview Process: Unreasonably Difficult Take-Homes
Despite concerns that AI would make take-home assessments obsolete, 10X still uses them, but makes them "unreasonably difficult." About 50% of candidates don't even respond, but those who complete the challenge demonstrate the caliber needed. The interview process is remarkably short: two calls before the take-home, a review, then one or two final meetings, completable in as little as a week. A signature question: "If you had infinite resources to build an AI that could replace either of us on this call, what would be the first major bottleneck?" The sophisticated answer isn't just "model intelligence" or "context length"; it's controlling entropy, the accumulating error rate that derails autonomous agents over time (a back-of-the-envelope sketch of this compounding appears after the chapter list below).
The Limiting Factor: Human Capital, Not Technology
Despite being an AI-first company, 10X's primary constraint is human capital: finding and hiring enough exceptional engineers fast enough, then matching them with the right processes to maintain delivery quality as they scale. The company has ambitions beyond consulting to build its own technology, but for the foreseeable future, recruiting remains the bottleneck. This reveals an important insight about the AI era: even as technology enables unprecedented leverage, the constraint shifts to finding people who can harness that leverage effectively.
Chapters
00:00:00 Introduction and Meeting the 10X Co-founders
00:01:29 The 10X Moment: From Hourly Billing to Output-Based Compensation
00:04:44 The Economic Model Behind 10X
00:05:42 Story Points and Measuring Engineering Output
00:08:41 Impressive Client Projects and Rapid Prototyping
00:12:22 The 10X Tech Stack: TypeScript and High Structure
00:13:21 AI Coding Tools: The Daily Evolution
00:15:05 Human Capital as the Limiting Factor
00:16:02 The Unreasonably Difficult Interview Process
00:17:14 Entropy and Context Engineering: The Future of AI Agents
00:23:28 The MCP Debate and AI Industry Sociology
00:26:01 Consulting, Digital Transformation, and Conference Insights
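A quick back-of-the-envelope sketch of that compounding, in TypeScript (the language 10X favors, per the transcript): the 99% per-step figure echoes Dan's example later in the episode, while the independence assumption, the function name, and the step counts are purely illustrative, not anything 10X described.

```typescript
// Minimal sketch (illustrative, not from 10X): how a small per-step error rate
// compounds over a long autonomous run, assuming each step succeeds
// independently with the same probability.
function chainSuccessProbability(perStepAccuracy: number, steps: number): number {
  return Math.pow(perStepAccuracy, steps);
}

const perStepAccuracy = 0.99; // "99% accurate" per step, as in Dan's example
for (const steps of [10, 50, 100, 500]) {
  const p = chainSuccessProbability(perStepAccuracy, steps);
  console.log(`${steps} steps -> ${(p * 100).toFixed(1)}% chance of an error-free run`);
}
// Roughly: 10 steps ~90%, 50 steps ~61%, 100 steps ~37%, 500 steps ~0.7%.
// The per-step number barely moves, but end-to-end reliability collapses,
// which is the "accumulating entropy" that derails long agent loops.
```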
Full Transcript
Okay, we're here in the Reboot studio with Alex Lieberman and Arman, oh my god, I did not forget this. Leave it in! How do I cut it? Leave it in! I just didn't say Arman. Keep it rolling, Swix! Keep it rolling! If it makes you feel better, for the first probably 20 times that I said Arman's name, I said it the wrong way and he was very polite in guiding me to the right pronunciation, he used to say Arman, not Arman. So it's okay. It's totally fine. Point the finger at me too. I don't think you have to. I mean, it's Hezarkhani, but we don't need to. Hezarkhani. Yeah, yeah, yeah. Armand Hezarkhani is fine. Amazing. Yeah, that's fine. Honestly, it would be even funnier now where as you're about to introduce him, you just dub Armand's saying his own name over your mouth. Introducing Armand Hezarkhani. That's so funny. It's like when you're like on a voicemail and you're like saying your name like the automated machine. Totally. So you guys are the co-founders of 10X and also MCs and speakers at AIE, right? So, I mean, and I think for me, I have a little bit of extra context on Alex because I've followed Morning Brew for a while. You have been an inspiration on the newsletter business. But let's talk about 10X, you know? Like, I think my goal here is just to introduce people to you guys, maybe you individually and then you together. So whoever wants to take it first. Well, I can give you a little bit of the backstory behind the business and how Armand and I got to know each other. And then Armand, I'm sure, will fill in some gaps. You know, Armand and I met in 2020 when I had invested in his previous business, Parthian. And Parthian was an AI financial tools business, originally for consumers, being AI tooling for financial advisors, or RIAs. And, you know, throughout Armand building that business, we had continued to talk about just our philosophy on product, how AI was influencing just product in general. And I kind of think, especially for non-technical folks like myself, there's like a moment where you get smacked in the face by how profound this technology can be if harnessed in the right way. And I experienced that moment in conversation with Armand. So it was probably at this point nine, nine-ish months ago. Armand and I were talking and he had shared a story about how, with Parthian, he unfortunately had to downsize his engineering org. And when he had downsized his engineering org, he had to decrease the size of his engineering team by 90%. And when he did so, he had to rebuild, he had to basically re-architect the entire product and engineering process to be AI first because he just no longer had a human resource. And so he needed to, like, accelerate it with this technology. And basically what Armand had shared with me is that output of production-ready software had 10x'ed after making this shift with the org. And I kind of didn't believe him at first because I had never seen kind of that level of leverage. Like I'd use ChatGPT, I'd use Grok, I'd use all these things. And yes, they've been life-changing for me, but I wouldn't have explained them as 10x experiences. So we basically talked through it and he kind of shared with me why AI and specifically LLMs have made such a profound impact on engineering as a type of knowledge work. And from there, the thought was around how the way in which engineers are compensated has to change materially. Because if you think about it, like, historically, people charge for their time by the hour.
And then all of a sudden, let's just say you're a new, like, you're truly an AI engineer with truly 10x higher throughput. Imagine you're selling your work and someone's used to spending $100 an hour for an engineer. And you go to them and you look them dead in the eyes and you're like, yeah, I'm a thousand bucks an hour. You're going to get laughed out of the room, even though you're a better engineer than the engineer they would have hired. And also you're perversely incentivized, because you leveraging AI in your work has you operating faster, but your incentive, just like a lawyer or just like any hourly-paid knowledge worker, is to rack up as many hours as possible. And so like actually the kernel of insight that started all of 10X was, how do we hire the best engineers in the world? How do we offer them unlimited upside by compensating them for output rather than hours? And then how do we harness that in the right direction to help companies transform with AI in their business. So I know there's a lot there, but Armand, is there anything I missed? I mean, basically, yeah, like I think Alex covered it. Like I was writing code and I was deeply incentivized to generate more output, high quality output, but faster, more, right? And it's because it was my company. But the whole thought is like, when you work at someone else's company, even if you have some equity, even if you deeply care about the mission, you're not deeply incentivized day in and day out to try new AI tools and push yourself to work better and faster and smarter. And so the economic model behind our company is one that does drive that. And my talk is basically to show how we do that and how I think other companies might be able to adopt similar models. This is very tempting because every question I ask might actually just leak your talk. It's okay. The talk will just reiterate very important points. I mean, it should stand on its own on YouTube, right? So it's whatever. Like people, I do like to encourage people to remix the content in different formats. So this is the podcast version. So I think like, I think that the classic thing is, well, what is a unit of output of a software engineer? Is it a PR? Is it a story point? It's extremely unclear and it's very basically unsolved. Like, I mean, don't tell me you've solved it. You know, like what's, maybe you have, I don't know. But like, I'm default skeptical on the, well, what gets measured gets gamed. Yeah, we do use story points. But you're right that it's easy to game it, right? Like if we were to hire somebody who just, like if you think about a technical system, right? A smart hacker will find ways to exploit it. And the easy way to exploit the story point system is to deflate the concept of a story point and decide that, okay, any line of code, like lines of code are gonna be directly proportional and equal to story points. Well, then of course you've hacked the system, right? But your clients will churn and you'll probably get let go of. And it just won't work long term. And so what we found is that this problem gets solved in the hiring process, and it gets solved by hiring people who fall into two buckets. One is people who are selfish, but they're long-term selfish. Everybody's selfish, but we need to look for people who are long-term selfish, people who understand that these incentives are longer than just today's story points. They're forever, right? And we need to think about how do we maintain the client relationship.
And that means that we're going to give them very robust story points so that we can maintain the relationship and continue to make money. But the other is that we hire people who just like writing code and like working with really smart people and they're not sharp-elbowed and they just want to do great work. And that sounds squishy, but that really is a part of it as well. I think both are really important. Just two other things I would quickly add is one, when we work with clients at 10X, there's basically two role players. There's the AI engineer and then there's the technical strategist. And one of the best ways to fight perverse incentives is to incentivize two people at odds with each other in a healthy way. And so our technical strategists are incentivized based on NRR, on retention and account growth for a client. And they are the final one to sign off on the engineering plan for a client before we begin a sprint. So like they are the last line of defense of quality before a client ever sees anything. So that's one thought. The interesting thing, and I don't know if Armand has thoughts on why this is, is we have not yet, and again, we're a young company, so this could change at some point, but we have not yet had any clients argue about how we assign story points or ever feel like we are sandbagging story points or any of these things, which is just interesting because I think to your point, Swix, like I would have expected that to have already happened. Yeah, it can be a political process when things go not well, but when things go well, you know, everyone's just like steaming ahead. Okay, you hire great people, you work well with story points. I think one thing I'm trying to get my guests to do a better job of is just brag. Could you brag a bit? Just like some really impressive project that you accomplished just opens people's minds. Like, let's get specific without maybe naming the exact client unless you can. And then also, like, what's the highest hourly rate that one of the engineers has made since he's technically on path? Yeah. So I'll answer the last one or the second one first. We will probably have more than one engineer make a million dollars cash next year based on this model. And that is just with story point compensation. It's very likely that we will have more than a handful of folks make more than a million dollars next year. The answer to the first question, like, for example, one project we built. So we work with this company that partners with retailers to basically make cameras in the business more valuable. And the way that they do that is they deploy what was historically like a Gen 4 Raspberry Pi to the stores and they would run like one model on that device. We basically took some off-the-shelf models and trained some models ourselves and then quantized them down so they could actually run on that Pi 4, but also on Jetson Nanos. And we got them to all run in parallel. So now basically what these models allow you to do is as a store, you can get a heat map. You can see where the lines and the queues are forming in your store. You can even get pictures of shelves and understand what needs to be stocked. And you can do things like theft detection because we have body analysis and we can understand things like when arms are crossing, right? And this took our team two weeks to put together early prototypes. And now we're just refining accuracy and improving metrics from there.
And again, this was like one of the many examples. I think, of course, with that specific example, that's more of a research project and it's going to take a while to improve accuracy and things like that. Like we're not claiming that we're like these magical beings. But previously, that alone, building a prototype of that would take several quarters for robust teams of engineers together. And we were able to prototype that out very quickly. And now we're working together with that team for a year to build more and all that stuff. Alex, anything? I mean, I guess another one, Snapback Sports, we built them a mobile app in a month that hit 20th on the App Store globally. And there was no AI in this app. It was a really fun trivia app, but we built it together, deployed it, hit 20th in the world. Yeah. I mean, one other example I would just add is, and this is just looking at things from a different angle, which is sales. I think the power of AI engineering and fast prototyping is incredibly powerful within sales motions now. And so just one example is we had a big influencer who wanted to basically build ChatGPT, but specifically as if it is your fitness, like, your health coach and your nutritionist. So it has all this context. As a fitness influencer. Yeah, exactly. And we originally reached out to work with him. And basically he said no, because he was like, you guys are like too early. You don't have, like, a design team built in yet. And so he said no. And it seemed like the conversation was done. One of our engineers was like, I'm just going to build a working version of this app as soon as humanly possible. So basically within, I don't know, it probably took him four hours. He had just, like, a working version of the app that was in the hands of this influencer. And that influencer hasn't launched the app yet, but we are number one on their list right now to actually do this build, and the only reason is the speed by which, like, working product could be in the hands of someone is faster than it's ever been. Yeah, that's amazing. Okay, so, like, quick question on just the stack that you guys have landed on. Like, is there a house stack? What are you guys finding in terms of, like, the various coding agents and all that? Yeah, we do work in a number of different stacks, a number of different languages and stuff, but we feel pretty strongly that, like, high structure allows for agents to work autonomously for longer. And so our default stack is TypeScript front end, TypeScript back end with a shared file, or a shared folder, where all of our shared types and schemas and things like that live. And typically React front end or even something as simple as like Express on the back end, like we don't really care about the frameworks. It's more just that TypeScript allows us to have, like, the flexibility of JavaScript, but the constraints of TypeScript. And then those error messages allow the Claude Code or Cursor agents or whatever to iterate on themselves and run things, see the errors and continue. In terms of the actual like AI engineering stack and what coding agents and things like that we're using, I always tell clients this, like our team doesn't have a favorite coding agent of the year or of the month or even of the week.
Like if I go over there to our team right now and I ask them, what model is performing the best for coding right now, they'll say, today at 4, we're noticing that Claude Code is actually performing better because of X, Y, Z reason. But yesterday Codex was outperforming Claude Code on activities like X, Y, and Z, right? And we stay really, really deeply on top of all the different models, all the different applications of these agents to make sure that we're getting, that they're really pushing the most out of this, and so that we can advise teams on how they should best use these things. I mean, well, so yeah, but you're gonna, it's very anecdotal, right? Like, don't you need more comprehensive evals? Because otherwise it's like you are just behaving or believing things based on the luck of the draw. I think at this stage, did a samurai have a measurably better sword than the person to their left or right? No, right? At a certain point, I think a warrior's weapon becomes something of a feel. And I think that at this point, a lot of these, like, the coding agents are so good. Like, yes, you can have evals that provably show that one is better than the other, but for a lot of these things it really is feel. It's like, hey, this agent actually, it just, I can work better with it on a warm-blooded level, or it writes code more like I like to, or whatever. And at least that's what we've noticed. Yeah, fair enough. And so I think, like, there's this, you have, like, kind of a SWAT team approach, you're paying, you're very meritocratic, I think is probably the right term in this. Are you human bound or are you agent bound? Like what is your limiting factor in 10X becoming a bigger business than, you know, either of you have run before? Today, it's human bound, 100%. You're recruiting. Yeah, yeah, yeah. We are. The thing that keeps us up at night is how can we hire enough good engineers fast enough? And then the second thing that keeps us up is how do we match those, the great people in the business with the right process such that delivery doesn't suffer as we scale. And I think more and more as we build this business, like technology is going to be an enabler of the work we do. And I think long term, if we're to talk about the long term of the business, there are ambitions of this business beyond just acting as a transformation and engineering partner for companies. There are ambitions to build our own technology. But today, and probably for the foreseeable future, we are human capital constrained. How do you interview? You don't have to like give the exact interview questions, but like has interviewing changed for either of you guys pre-AI versus post? Yeah, this is actually somewhat controversial. A lot of my friends stopped doing take-home interviews after AI. We still do take-homes, but our take-homes are immensely, they're, like, unreasonably difficult. And so when I first wrote them up, I told Alex, I was like, hey, man, like people might get mad at you. You know, like you have a public persona. Like we're sending these to people. Like your reputation might take a hit if we send these to people because they are so unreasonable for us to ask this of people. And Alex in classic Alex fashion was like, F it. Let's just do it. You know, like let's send it. If this is the bar, then we need to do it. Yeah, exactly. And what we found is that 50% of the people don't even respond to the take-home interview. But because our take-home is so difficult, our interview process is actually quite short.
We do two calls before the take-home, then we send the take-home, then we review the take-home, and then if it goes well, we do maybe one or two meetings afterwards. So it can be done, at the fastest, in a week. It's very, very quick if people can get through that take-home. Yeah, and just a few things to add. Like, I'm thinking about what are some of the most common questions we ask? A few that Arman asks that I really like are one, he basically says, like, if you had infinite resources to build an AI senior software engineer, like truly one that could replace either of you on this call right now, what would be the first major bottleneck that you would have to figure out how to overcome to build that? That's one question that he always asks. And Armand, out of curiosity, I don't know if you want to share it because then people will start giving the right answer on that. But is it, oh, I guess just for Swix, like, is there- I can offer one. I don't know. Yeah, yeah, let's hear it. I mean, so the classic answer is just model intelligence, right? Like, we think the models are good, but, like, actually, they have been really trained into a certain sort of local minima of, like, well, here's all the Python because SWE-bench is all Python, all Django. And actually, beyond that, we've maybe generalized, like, a little bit of front end, but it hasn't really done, like, full backend distributed services and all that. So model intelligence is going to be, like, the main blocker, but I don't know if that's a good answer because it's kind of like, well, you just wait and maybe the frontier labs will solve it. Yeah. I generally think that it has to do with context. I think it's not necessarily context length. I think that it's context engineering, in Andrej Karpathy's words, right? It's the problem of how do you get the right context into the LLM and get the LLM to pay attention to the right parts of that context, all of which I would consider context engineering. And then from there it's like, okay, there's a lot of ways you could solve that, right? You can, on the model layer, do a lot of work to make sure that the attention mechanisms are paying attention to the right stuff. You could do work on the application layer, on context engineering. You can extend context lengths. Like, there's a lot of different work, and then it leads to a really interesting discussion. So, so yeah, that's one thing. One thing I was just gonna say is, I feel like Dan, who's one of our engineers at 10X, he shared a different answer that, I remember your reaction to it was like, it broke your brain a little bit. Do you remember what his answer was? No, I should ask him. I believe it had to do with entropy. I should ask him what it was. There he is. Dan, come here. What was the question that we asked you during the interview, remember? Like, I asked you, if you had unlimited resources and you needed to build an AI engineer, what would you need to solve? You gave me an answer that, like, what would be the, um, what would be, like, the limiting factor. The honest thing. Yeah, what was it? I said it was, like, controlling entropy. Oh, there it is. Yeah, controlling entropy. Controlling... Swix does not, does not agree with Dan. You... if there is some error rate, and your question was basically... Come closer so they can hear you, come closer to my headphones. This is Swix, you're on a podcast. Yeah, we can hear you, we can hear you. That's good, that's good, we're rolling with it. So basically, like, if there is some... your question was about a fully autonomous coding loop.
So what would it take to get the human out of the loop? If in that loop you have some error rate, let's say it's 99% accurate with code, even that 1% error rate will just multiply and decay more and more, and that entropy will build and, like, accumulate, and that's, like, kind of a compounding thing that will derail the agent more and more. And so I think it's less of, like, a context engineering question per thing you're implementing, and it's, like, um, making sure that the agent can reduce the entropy for a given task such that it gets to 100% accuracy, and then you don't have this, like, accumulating error issue. Cool, man. Thanks, brother. No, that was impressive. Oh, no, actually. So that's like, that's the sophisticated version of context engineering, right? Like a lot of people are going to answer context engineering. Exactly. We have one of the people that coined context engineering coming to speak, I think, in one of the early sessions on Friday. And yeah, but this is actually the advanced, like, yes, this is one of the four ways in which long contexts fail. And if you have enough experience, you know that this is the one that gets a lot of agents off track. And once they're off track, it's really hard to get them back on track. Exactly, exactly. And going back to your question. I wouldn't use the same words, but yeah, I get it. And going back to your question about constraints in the business, it's just how do we find more people like him is the thing that keeps us up at night. Well, you know, I'm in the business of making more. You're helping to contribute by putting this conference together where we're just sharing knowledge and the more people that watch are kind of drawn to you. They might answer your call to action of trying out one of your super hard tests or at least just learning and just advancing the state of the industry. For sure. So I'm excited to have you guys. Do you have any questions for me? I mean, you know, it's like a whole like two, three day affair. You know, I've done this a little bit now. One question I have for you is like, as Armand knows that I'm voraciously curious and I'm a lifelong learner, but I'm also not an engineer by training. And my goal is to get as smart about this space as quickly as possible. And so like, you know, I, one of the first things I did is, was it Armand, did you send me the 3Blue1Brown lecture? Like, you know, 3Blue1Brown, like, does the lecture on LLMs. I took that. Then he's like, if you want to go super deep, do any of Andrej Karpathy's, like he does like the lecture series on how ChatGPT works. And he's like, actually like write out notes by hand. And like, you truly understand like the math behind these models. And Armand did that. And he was like, it's just like, that's how you understand things at the deepest level. So when I'm not either working or taking care of a four-month-old at home, that is next on the list. But I guess like my question for you is like, as a non-technical person who's always been like both enamored and intimidated by technical folks, what would you do if you were me to make the most of this conference? When I'm not like the core archetype of the person who's there? Jeez. Yeah, that's a tough one because I spend zero time thinking about that.
Okay, so, so I think, like, latch on to the key words and whatever people are excited about. Like, context engineering, people were excited about maybe four or five months ago, and now it's, like, entering the mainstream. Typically the people at this kind of conference would be sort of stewing around those ideas. Like, the last time we were here in New York, MCP was kind of just taking off, and we did the workshop, and that really blew up MCP. And I think, like, that is something that you will see a little bit of. Like, just, by the way, Armand grins because he has very strong feelings about MCP. Very strong feelings. Inside, we're hosting a debate. I just think that MCP is a three-letter word for API. And, like, Alex always, every time he hears someone say the letters MCP in that order, he tells them that I hate MCP and starts a war, a religious debate. Well, I will say though, I do think a few of our engineers have warmed you up to it more with specific use cases, Armand. Yeah. I mean, like, our MCPs are useful. Like, of course I use all the MCPs with Claude Code. I just think that there's, like, what bothers me is when people create a new name for something and then use that to raise some inordinate amount of money, because they know that three-letter acronyms get investors excited. That's, like, the thing that, that's why I giggle when I hear MCP, because I'm like, a lot of people just say that. And, like, the tweets that bother me are like, MCP is coming for your job, here's why you need to know about MCP. And it's like, no, it's just, like, a useful thing, you know. Maybe, maybe this is relevant to Alex's question. I do take a sociological and anthropological stance to tech, in terms of, like, different groups of people coming in have different terminology to communicate with each other. And it's, it's just human behavior. It's like, I'm kind of non-judgmental about it. Like, people just got to do what people do, and they always invent new language, and there are only so many ideas going around in the world. They're going to be recycled. Yeah, totally. That's it. I will defend MCP in the sense that there actually are other parts of the spec that are not just API wrappers, but people just comparatively don't use them as much. But I think it's a little unfair to MCP, the whole protocol. But that's why we have a debate, where we actually have, like, a podcast booth, and we're actually hosting, like, you know, pro and con debaters. And I think it's really fun. Yeah. Yeah. That's awesome. So I actually really want to get into this, because I think we learn more by contrast than by agreement, right? Like, so in a single talk, like, you know, you're the authority, you're up there on stage, you say whatever you want to say, and no one can really, like, people just fight in the comments, but they're never going to rise to the same level. I think in a real debate, you can learn from both sides and make up your mind. And I think that's what we're going to see. That's awesome. That's what we're trying for. Yeah, yeah. I love that. Well, it's great to meet you guys. I'm looking forward to your talk, Armand. And Alex, you're opening the show for us. So all power to you. I do think I intentionally left that block that Armand's in as the consulting block. We also have McKinsey speaking, but McKinsey's not in the consulting block. So I'm very curious, because I think my theory is that a lot of our attendees will be from the enterprises that might be looking to talk to you guys.
And I'm curious to see how this sector grows. It's not something I'm personally familiar with, because I mostly just work in companies as an engineer. But like the sort of consulting, digital transformation industry is kind of new, but it's also like very, very in demand, as you guys know very well. And I'm just excited to feature it for the first time. We're super excited to be there. And thank you for having us, and we're pumped to learn a ton from you and from the other speakers there and just the people who are attending. Yeah, yeah. I mean, like everyone from the labs to the Fortune 500, it'll be a whole party. All right. Thank you. Love it. Thanks, man.
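As a side note on the "high structure" TypeScript stack Armand describes in the transcript: below is a minimal sketch of the shared-types idea, assuming a hypothetical shared/types.ts that both an Express back end and a React front end would import. The StoryPointEntry name, its fields, and the runtime guard are invented for illustration; this is not 10X's actual code.

```typescript
// shared/types.ts — hypothetical single source of truth imported by both the
// Express back end and the React front end (names invented for illustration).
export interface StoryPointEntry {
  engineerId: string;
  sprintId: string;
  points: number;      // completed, accepted story points
  completedAt: string; // ISO-8601 timestamp
}

// Runtime guard: malformed data fails loudly instead of drifting silently,
// giving a coding agent a concrete error to act on alongside compiler output.
export function isStoryPointEntry(value: unknown): value is StoryPointEntry {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.engineerId === "string" &&
    typeof v.sprintId === "string" &&
    typeof v.points === "number" &&
    typeof v.completedAt === "string"
  );
}
```

Because both sides compile against the same contract, any drift between client and server surfaces as a TypeScript error that a coding agent like Claude Code or Cursor can read and iterate on, which is the feedback loop the transcript points to.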
Related Episodes

⚡️Jailbreaking AGI: Pliny the Liberator & John V on Red Teaming, BT6, and the Future of AI Security
Latent Space

AI to AE's: Grit, Glean, and Kleiner Perkins' next Enterprise AI hit — Joubin Mirzadegan, Roadrunner
Latent Space

World Models & General Intuition: Khosla's largest bet since LLMs & OpenAI
Latent Space

Anthropic, Glean & OpenRouter: How AI Moats Are Built with Deedy Das of Menlo Ventures
Latent Space

⚡ Inside GitHub’s AI Revolution: Jared Palmer Reveals Agent HQ & The Future of Coding Agents
Latent Space

⚡ [AIE CODE Preview] Inside Google Labs: Building The Gemini Coding Agent — Jed Borovik, Jules
Latent Space