
The Future of Email: Superhuman CTO on Your Inbox As the Real AI Agent (Not ChatGPT) — Loïc Houssier

Latent Space • swyx + Alessio

Thursday, December 11, 2025

Episode Description

From applied cryptography and offensive security in France's defense industry to optimizing nuclear submarine workflows, then selling his e-signature startup to Docusign (https://www.docusign.com/company/news-center/opentrust-joins-docusign-global-trust-network), and now running AI as CTO of Superhuman Mail (Superhuman, recently acquired by Grammarly: https://techcrunch.com/2025/07/01/grammarly-acquires-ai-email-client-superhuman/), Loïc Houssier has lived the full arc from deep infra and compliance hell to obsessing over 100ms product experiences and AI-native email.

We sat down with Loïc to dig into how you actually put AI into an inbox without adding latency, why Superhuman leans so hard into agentic search and "Ask AI" over your entire email history, how they design tools vs. agents and fight agent laziness, what box-priced inference and local-first caching mean for cost and reliability, and his bet that your inbox will power your future AI EA while AI massively widens the gap between engineers with real fundamentals and those faking it.

We discuss:

- Loïc's path from applied cryptography and offensive security in France's defense industry to submarines, e-signatures, Docusign, and now Superhuman Mail
- What 3,000+ engineers actually do at a "simple" product like Docusign: regional compliance, on-prem appliances, and why global scale explodes complexity
- How Superhuman thinks about AI in email: auto-labels, smart summaries, follow-up nudges, "Ask AI" search, and the rule that AI must never add latency or friction
- Superhuman's agentic framework: tools vs. agents, fighting "agent laziness," deep semantic search over huge inboxes, and pagination strategies to find the real needle in the haystack
- How they evaluate OpenAI, Anthropic, Gemini, and open models: canonical queries, end-to-end evals, date reasoning, and Rahul's infamous "what wood was my table?" test
- Infra and cost philosophy: local-first caching, vector search backends, Baseten "box" pricing vs. per-token pricing, and thinking in price-per-trillion-tokens instead of price-per-million
- The vision of Superhuman as your AI EA: auto-drafting replies in your voice, scheduling on your behalf, and using your inbox as the ultimate private data source
- How the Grammarly + Coda + Superhuman stack could power truly context-aware assistance across email, docs, calendars, contracts, and more
- Inside Superhuman's AI-dev culture: free-for-all tool adoption, tracking AI usage on PRs, and going from ~4 to ~6 PRs per engineer per week
- Why Loïc believes everyone should still learn to code, and how AI will amplify great engineers with strong fundamentals while exposing shallow ones even faster

— Loïc Houssier

LinkedIn: https://www.linkedin.com/in/houssier/

Where to find Latent Space:

X: https://x.com/latentspacepod
Substack: https://www.latent.space/

Chapters

00:00:00 Introduction and Loïc's Journey from Nuclear Submarines to Superhuman
00:06:40 Docusign Acquisition and the Enterprise Email Stack
00:10:26 Superhuman's AI Vision: Your Inbox as the Real AI Agent
00:13:20 Ask AI: Agentic Search and the Quality Problem
00:18:20 Infrastructure Choices: Model Selection, Baseten, and Cost Management
00:27:30 Local-First Architecture and the Database Stack
00:30:50 Evals, Quality, and the Rahul Wood Table Test
00:38:40 Voice, Video, and the End of Writing
00:42:30 The Future EA: Auto-Drafting and Proactive Assistance
00:46:40 Grammarly Acquisition and the Contextual Advantage
00:51:40 Knowledge Graphs: The Hard Problem Nobody Has Solved
00:56:40 Competing with OpenAI and the Browser Question
01:02:30 AI Coding Tools: From 4 to 6 PRs Per Week
01:08:00 Engineering Culture, Hiring, and the Future of Software Development

Full Transcript

We, as a human species, like we started to write because we didn't have like enough storage for stories that we were telling to each other. So we had to write to store those stories. Now, like all the content can be stored in YouTube, in TikTok or whatever. It's like, what's even the need to write? What's the need? Because everything can be vocal. And I see kids now, they don't read articles. They want a TikTok video talking about the article. Being a bit more grounded, what does it mean about like the future of the user experience for email and communication? Will people still type, or will they just talk to emails and they want to hear an email? And this is where it becomes interesting, because Rahul as a CEO, maybe next year, he doesn't want to write to you about the new feature. Maybe he wants to talk to you. And then the way you will have received our marketing campaign about the new features is you in your car commuting, listening to Rahul talking about that.

Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space. I just realized I have the tough job of always pronouncing names. I know, man, you gotta prep. You're on YouTube, namepronunciation.com. Loïc Houssier, welcome. Wow, I'm impressed. Did I get it right? You got it right. I'm surprised. Usually I make a joke about like, yeah, you know what, you can delete it like the way you want and everything.
But like, you nailed it, so I'm impressed. Thanks for having me, guys. Yeah, of course. Thanks for coming by. So you're CTO of Superhuman Mail, which is the new name for Superhuman. I've been using Superhuman for a long time. I think I was one of Rahul's personal onboarding things back in the day. And yeah, I mean, we're here to talk about all things AI engineering, but also you have a lot of history: Productboard, Firstbase, Docusign, and nuclear submarines.

Yes, yes. That's kind of like the fun icebreaker that I give to people sometimes, like two truths and a lie. Like, I went into a submarine, and people are like, yeah, no way. But I did. I spent one year working around submarines.

And the trajectory is a bit weird. You were an engineer, and then you were sort of chief of staff on some submarines thing. Yeah. And then you went back to engineering.

I started like studying math, so I'm a math graduate. I was about to do like a PhD in math and applied math in cryptography, so like crypto before crypto, to some extent. It was cool for a moment, and then I was like, no way, I've spent like three years of my life on the same topic. But in the same lab, there was a bunch of people doing like security, like offensive security type of stuff, and I was like, that's what I want to do. So I was basically an engineer, I was a security researcher in that lab, but I did that in a pretty big corp. I saw one in telco and then in the defense industry. And the defense industry, they have this nice kind of like career framework, like you're young, high-potential-ish, between quotes. So they want you to do like different types of jobs and kind of like have a spiral of careers so that you at some point reach the C-level eventually. So they gave me the opportunity to be out of the tech industry for a year. And I went to a harbor and I was there as, I mean, a financial controller, process improvement type of person, basically helping people do a better job, which was interesting because I had no clue.
Torpedo systems, radar systems, like even the nuclear engine inside a submarine. But still, I had to help people take a step back from what they were doing and everything. And that was really fun, because I came from Paris, came with my tie and my suit and my ego. I was used to driving people through my technical legitimacy in the security space. And all of a sudden, I didn't have any technical legitimacy at all. But I still had my ego. So it was a pretty fast ramp-up to put my ego in my pocket and basically drive by questioning people: how does that work? Like, help me. Like, I don't get it. And just by questioning, I kind of like built a new skill, which is getting curious and understanding how people are working, and being comfortable facing people that are way smarter than me, knowing their field better, but probably having a way to ask questions to help them, like identifying gaps or productivity gaps, for example. So that was cool, but I missed the tech. So I moved back to the tech industry after basically two years.

Yeah, what are some of the other maybe highlights or stories from other experiences? I mean, Docusign is another product that we all use. Yeah, any other...

No, Docusign was cool. I mean, Docusign was cool because it was an acquisition. OpenTrust and Docusign, yeah. Yeah. So I was the CTO of a small company in Paris. And we were like a typical, I would say, European company, Alessio. So like very focused on the tech, not very focused on the marketing. And we were one of the biggest signature companies in Europe. But it's a very fragmented market. So we were winning France, starting to expand. And Docusign is coming and like, guys, we need to do a partnership and everything. And pretty soon they understand that the European market is tough, and like the technology behind Docusign is not sufficient, like in terms of standards, in terms of compliance and everything.
So pretty soon it was with us or against us. But the way they were explaining the value — holy cow, like, we're not talking the same language. We're doing the same job. We're selling the same type of software. But like, we're talking to CIOs from a technical standpoint. They're talking to the head of HR, heads of functions, and selling them the value. So pretty fast, it was easy for us to understand, like, wow, wow, wow. Not the way to sell a product. Better to partner with them. So they did an acquisition, but it's not a full acquisition. It was a security-oriented company, two business lines, one doing signature, which is, I would say, the one that Docusign was interested in. The other piece was doing strong authentication. So PKI stuff, SSL certificates, those types of things. And we were working for the Department of Defense in France. So we had the Ministry of Finance in France basically saying, no way, no go, you cannot sell. So we had to do a carve-out, which is like the funniest acquisition type you can do. So you have your team, you need to divide everything in two: your team, your systems, your source code, and all of that. Even your data center, you have to replicate and get rid of all the shared systems and everything. So we did that for something like six months to be able to sell the newly carved-out company to Docusign. Crazy. Don't do that.

Are you still involved at all with the French startup ecosystem? I'm curious how you've seen things evolve since then.

Yeah, it's pretty interesting. I've seen a change. Now that I'm getting some gray hair and I have some experience, I try to give back to some extent. So I spend more time helping the ecosystem there. But it's funny to see the difference. When you're here, we live in a small bubble. And it's crazy to see how even other tech scenes are different. So like the grit, like just like the grit, to get shit done and to move forward and everything. They have a great education.
When I say "they" — sorry, I'm like, we? They? I don't know where I am now. So great education, great engineers and all of that, but not the mindset of creating things. So not a lot of entrepreneurs, that much. It's changing. We had some successes in Europe, and especially in AI, there's some cool stuff happening. But still, the way to think about product-led growth — like, Superhuman nailed it — but ways to think about how to structure your organization to scale fast, the level of ambition as well. How to maybe not target France or target Italy to start with — target English and the world from the get-go. That would be something to think about. So I'm doing that quite some. Highly rewarding, but it's, yeah, it's pretty cool.

There's a common question that people have about Docusign that I'm just going to indulge. What do all the people do at Docusign?

I love it. You know, this is a meme, right? No, no, no. I'm sure inside of Docusign. Why do you need so many people? You have signing. Why do you need 3,000 engineers? It sounds crazy, but like, you want to address Europe? You need a different product. You need a different team to run your local data centers because of the compliance. You cannot just run your data centers from the US. So you need a local team there. Oh, and by the way, the way to do digital signature in Europe? Totally different. So the stack itself is different. So the way to make a digital signature is different. Not the same standards, not the same ways. So you need a dedicated team to maintain that thing. The same way some people want to have Docusign on-prem. So you need a team building an appliance to basically plug and play. Like, okay, you have your Docusign appliance.

There's a Docusign box? There's a Docusign box. Wow. An acquisition made in Tel Aviv at the time. Wonderful people building like security appliances where, like, you shake the box, the keys disappear.
Like if someone is stealing your box, no one can sign in your name. Oh, you're kidding me. Oh my God. No, I mean, some banks. What if there's an earthquake? That's a good question. They are mounted like on some — so there's earthquake mitigation associated with this. So just that. But apply that to FedRAMP: dedicated teams, dedicated data centers. And like, oh, and we need, I would say, to have Docusign run in Canada because of data residency. Oh, we need the same in Australia. Okay, cool. And now you have something even different. We want Japan as a market. Oh, but Japan is not signature. It's a hanko. It's kind of like a stamp. So you need a team to understand how the Japanese market is thinking about even processing an agreement. Totally different. And then you have like verticalization, like some different verticals and everything. I mean, it's a good business. It's well run. And like, people are not coasting there. So there's a lot of work. And it's very interesting to see it from the inside, because when you see those memes, you're like, yeah, I know. But like, damn, I see people. You were the VP there, so you know. You actually know. Yeah, yeah. I just wanted to get that. So I hope it's providing some... This episode is not about Docusign, but we had to ask. No, no, of course. Of course. Totally legit.

Yeah, let's talk about Superhuman. So you joined January 2025. Yes. Just give people a lay of the land of Superhuman AI. I think a lot of people that are listening are familiar with the email client. Yeah. I think the AI stuff is generally new. So just maybe give the canonical definition of what you want to do with AI in Superhuman, and then we'll kind of dig through.

The main driver is how you can put AI in the product to accelerate the productivity of people. It's not to just do AI things and sparkles and everything. We don't care that much about it. Our people are pretty high-expectation oriented, and they don't want to slow down.
So you cannot add latency. You cannot. So everything that we do is done in a way to improve the productivity of people, AI included. First thing that we started to do is auto-label emails. Like, is it a pitch? Is it marketing? It's kind of like a typical classification that you could do. And so people can say, okay, everything that is a pitch, I will look at that on a Friday. So during my typical days, I don't look at it. So that was one of the first things. Summaries: you have a long thread — what is this thread about that someone shared with me? Okay, you have a quick summary. So nothing that was very groundbreaking, but just well thought out. Just adding things that make sense at the right time. Another example: now we automatically detect if one of your emails requires an answer. And if no answer after two days, popping up: hey, this one needs — you need to send another email to the person, because you didn't have an answer. So that was the first step. Second step was like, hey, you know what? The draft is already ready. You can just hit send. So it's very subtle, but it's like adding a — oh, damn, shoot. Yes, I wanted to remind people to give me an answer. And the draft is already there in the thread. Pretty cool. Send. And now we have more and more of that. Now it's detecting, oh, this is a request asking for your availability. Oh, you have an executive admin that is doing that for you. Your draft is like, hey, let me CC the right person. And boom, it's ready. It's done. It's done.

And the typical chatbot — because more and more of the use cases we see in people using AI inside Superhuman is to query your emails. A good example: I would say tech people will receive a bunch of Substacks, like a bunch of newsletters. I would say some are great. Sometimes like, eh, the content is meh.
I probably have like, I don't know, 30, 40 subscriptions, because everyone has something interesting to say at some point. Now I don't read them. I auto-archive those. And every week on the Friday, I just ask AI — which is the name of the feature, Ask AI — I ask my email: tell me the summary of all the Substacks that I received this week. What should I pay attention to? And then I can deep dive on the pieces where I want to pay attention. So this is always thought out in a way to accelerate you at your pace and try to not be in your way. Hopefully. Feel free to ping me if that's not the case.

I would say, I don't know if this is a recent change, but I feel like Ask AI — I've started using it a lot more. I've been a Superhuman user for many years and you've had it a while, but somehow this year it kicked up a notch. And I don't know if it's because anything changed in the product, because I wasn't using it before, or is it just me trying it again?

Now, that's a good question. Yeah, that's a good question. I think people are more and more used to the muscle of querying things because of ChatGPT. So the general consumer behavior is... Yes, exactly. So the user experience, people... I mean, now every single product has a chatbot where you can ask questions. So it's becoming more and more natural to ask questions, compared to managing like a to-do list of emails.

And agentic search as well. Like, previously I was like, oh, you have to embed my documents and then it's just going to retrieve. And like, that's not what I want. But agentic search, where you can actually figure out what do I mean when my question is half formed, you expand it, and then you actually answer it — it's actually really good.

Yeah, and we spend a lot of time on the quality of the answers. So quality of the answers, and you talk about the agentic framework, but one thing that is... And this is a framework — it's not LangChain, right? It's your own framework. Yes.
I mean, we've done a lot of iteration, and there's a lot of subtleties and multiple pieces there, and multiple different models based on what they're really good at. But where we spent quite some time lately is around quality, and making sure, across different dimensions, but making sure that we are generally good for typical queries and very optimized for them. And especially one thing we try to solve for is agent laziness. So through this chatbot, one of my use cases is I receive a Slack and I'm like, hey, can you review this document, please? Because whatever, it's a tech strategy document, or I need to review the doc. I take the link, I go to Ask AI, and I basically paste it and say, hey, find me 15 minutes tomorrow. I need to review this doc. And I don't need typically the agent to say, hey, I found this slot and this slot and this slot — which one do you prefer? I just asked for 15 minutes. Find it, do it. When I had an admin and I was asking her on Slack, find me 15 minutes, she's not asking me if I need it in the morning or the afternoon. She's not doing it. So working on this agent laziness, because the handoff they were doing to the user is losing time. So working on making things happen faster — we spend a lot of time on this. So that's why you might have felt like the overall quality is better.
My old joke was, because the way that you trigger it is you actually type it in the search bar, and when I was trying to normally do search, it would sometimes accidentally trigger the Ask AI. And my joke is, most of my AI usage is just accidental, because I actually wanted to just search. But then I started just using it more, and then the kind of questions that you ask changes. Yeah, I use it to find people's phone numbers, stuff like that. I use it to find my contracts, because I have so many contracts, right, from all my sponsors and venue things. Yeah, yeah, yeah.

One of the use cases, I would say, that blew my mind: I was at a conference, they shared with me a PowerPoint link, and it was like six months ago, and I couldn't find the deck, because I wanted to reuse some of the content and everything. Couldn't find it for whatever reason. Asking: I'm pretty sure they shared with me a PowerPoint link or something like this — can you find it? And it found the context and the link that I couldn't. I saved probably 30 minutes of searching through my emails. So it's pretty cool.

It's two years, right? Yes. Because there's no way you can fit all your email into a context window. Yeah. Right? No. Anything else that's more complicated? So we had to do some pagination. Because if you do — like, let's say I'm doing that, like, oh, I'm pretty sure I had a conference I attended where they shared a link with me. In my case, I don't do plenty of conferences, but someone like Rahul, my CEO, is basically doing a conference every three weeks or something.
Not kidding — but that is his job. And he's fantastic at it. Damn, I'm learning so much from him. But clearly, depending on the use case — I mean, of course you have more than 40, 30, like even hundreds of emails that can semantically be close to your answer. So you need to go through that. So we had to implement a pagination search. So, semantic search for the first, I would say, 40. Deep search — not that one? Okay, next 40, next 40. So it's kind of like using this agentic loop. And while you haven't found the answer, continue, and even extend the semantic search proximity until you find the right one. Because it might be buried on page two of the search results, technically.

How did you design the tools you give to the agent? Just maybe give people an overview of the framework, what it looks like. How are you structuring these interactions? Is there just one Superhuman agent that does everything? Or do you have separate ones?

We have separate tools, clearly. So even an agent — I would talk about, I would say, tools. So there's a bunch of tools: a tool to detect your availability, a tool to understand who are the people you interact with, a tool to write an email, a tool to... So every single action is very tool-specific. So it's not one magic tool that can do pretty much everything. It's a set of small tools that are used within the agentic framework. So there's a first step that is like, hey, what is the best tool to do this? Kind of like building a plan.
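The paginated search loop just described — take a page of semantically close candidates, let the model judge whether the needle is there, otherwise move to the next page and widen the proximity threshold — can be sketched roughly like this. Everything here (the word-overlap scoring, the `judge` function, all names) is an illustrative stand-in, not Superhuman's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    email_id: int
    text: str
    score: float

def toy_score(query: str, text: str) -> float:
    """Stand-in for embedding similarity: fraction of query words present."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / max(len(q), 1)

def semantic_search(corpus, query, offset, limit, min_score):
    """Rank the corpus by similarity, filter by threshold, return one page."""
    hits = [Hit(i, t, toy_score(query, t)) for i, t in enumerate(corpus)]
    hits = [h for h in hits if h.score >= min_score]
    hits.sort(key=lambda h: h.score, reverse=True)
    return hits[offset:offset + limit]

def judge(query, hits, needle):
    """Stand-in for the LLM 'did we actually find it?' check."""
    for h in hits:
        if needle in h.text:
            return h
    return None

def paginated_search(corpus, query, needle, page_size=2, max_pages=4):
    """Page through results; widen the similarity threshold each round."""
    threshold = 0.5
    for page in range(max_pages):
        hits = semantic_search(corpus, query, page * page_size, page_size, threshold)
        found = judge(query, hits, needle)
        if found is not None:
            return found
        threshold = max(threshold - 0.2, 0.0)  # extend semantic proximity
    return None  # only hand off to the user as a last resort
```

In the real system the judge would be an LLM call and the scorer an embedding index; the shape of the loop — page, judge, widen, repeat — is the part described in the episode.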
Like, for each step, what is the tool, and then making the calls.

Yeah, I think now the tools versus skills that Anthropic talked about is like the hottest thing of how much you want to put in, and there's the MCPs discussion. I'm curious how you evaluate the tools too. Like when you build them, how do you think about how to name them, how to give the description? How much work have you had to do to nail it?

I don't think we spent that much time on it — and again, I will defer to my three engineers working on it, which is interesting. We can talk about the amount of people you need to work on those stacks when you want to be serious. And I have fantastic people, so I feel blessed. And most of the time was trying the different agentic frameworks, trying to understand the different models, the ones that are solving which type of problems, because every single model is good for something. Sonnet was really great for agent handoff — like, on the laziness it was really great. OpenAI's version of it was not that good. Now we have Gemini coming in the room last week. Okay, that one is cool as well. So I think we are — I guess everyone has built a way to switch easily from one router to the other. Model routers. Everyone has an LLM proxy to some extent, and like an agent proxy to implement different stuff, which is becoming interesting, because the way to tweak them and tune them is different. So it's still easy to switch from one agentic framework to the other. But at some point, I think it will be harder and harder, and the stickiness of them will be tricky. But to answer your question, we didn't spend that much time on the tools themselves, I believe.

How do you think about evals? Are you evaling one email draft at a time? Are you evaling a longer workflow? Just run us through, when you're testing Gemini, how do you decide what it's good at, what it's not good at? Is it like the email structure?
At first, we had a relatively naive approach: query, answer, query, answer, and having a set of queries. Over time, we evolved into thinking more about the different dimensions that we want to target. Agent handoff is a very typical type of problem space where you want to make sure you select the right model. So typically getting a bunch of queries targeting handoff that we've identified through dogfooding or whatever, trying to target a set of what we call canonical, I would say, queries along that dimension — that specific problem space of agent handoff. But there's more. There's the deep search: a shit ton of emails, and you want to find that needle in the haystack. That's a different type of category. So you need to have canonical queries that are targeting that type of dimension, because every single user will have their own way to question their own data set. And we cannot replicate every single data set of people. The good thing is we have a bunch of users like Rahul, like myself, who receive a shit ton of emails — pardon my French, by the way. I don't know if it's okay for the show. But he receives probably 500 to 1,000 emails a day.

He's still part of the onboarding. It's like, I'll send an email to Rahul and he'll reply. I'm sure it's not actually him. Sometimes it's him. He's reading pretty much everything. I don't know how he's doing it, but he is really, really paying attention, especially to the tone, and why something is going sideways and everything. He really associates the brand and tone of the people talking for the company with himself, which is kind of like bringing us to the next level as well. So thinking about all those dimensions is really key. So even if you have an eval tool, the way you structure your different queries to target those dimensions is important. And then we have those specific queries — the Rahul queries, typically.
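A toy version of the dimension-tagged canonical-query evals described here — each query labeled with the problem space it probes (agent handoff, deep search, dates) and graded end to end per dimension — might look like this. The queries, the expected strings (the "oak" answer is invented), and the string-matching grader are all hypothetical; in practice the grader would be an LLM judge or richer checks:

```python
from collections import defaultdict

# Hypothetical canonical queries, each tagged with the dimension it targets.
CANONICAL_QUERIES = [
    {"query": "find me 15 minutes tomorrow", "dimension": "handoff",
     "must_not_contain": "which slot do you prefer"},  # no lazy handoff
    {"query": "what wood was my table?", "dimension": "deep_search",
     "must_contain": "oak"},  # invented expected answer
    {"query": "emails from last quarter", "dimension": "dates",
     "must_contain": "Q3"},  # invented: did it resolve the relative date?
]

def run_evals(answer_fn):
    """answer_fn: query -> answer string (the system under test).
    Returns a pass rate per dimension."""
    scores = defaultdict(lambda: [0, 0])  # dimension -> [passed, total]
    for case in CANONICAL_QUERIES:
        answer = answer_fn(case["query"]).lower()
        ok = True
        if "must_contain" in case:
            ok = ok and case["must_contain"].lower() in answer
        if "must_not_contain" in case:
            ok = ok and case["must_not_contain"].lower() not in answer
        passed, total = scores[case["dimension"]]
        scores[case["dimension"]] = [passed + int(ok), total + 1]
    return {dim: p / t for dim, (p, t) in scores.items()}
```

The useful property is that swapping in a new model reruns the same canonical set and reports quality per problem space, instead of one blended score.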
The one we joke about, and the one that was one of the first we used as a way to calibrate our quality, was a weird story. Rahul did, like five years ago, some refurbishing in his house. And he had this table, a specific type of wood. And he was discussing this with the contractor. And he wanted to have Ask AI find that email and the type of wood that was discussed in the thread with that guy five years ago. And until we nailed that query, he was not satisfied with the deep search approach. And this is where we're like, oh, damn. Okay, so that's a different set. But we're also talking about dates. Like another — I would say, you mentioned dates. What is "last quarter" compared to today, and everything? Large language models are not really good with dates. So how do you manage that? So there are specific queries for that. So we're like, oh, okay. So there are dimensions that we need to take care of. So now we structure the whole evals. And as you were asking, end to end: what is the query? Whatever happens there, there's an answer. Was there a good agent handoff? Were the dates nailed or not? Et cetera, et cetera, et cetera. So it's pretty intensive in terms of brain power put into the quality. Again, because Superhuman is a high perceived-quality type of product, we had to invest that amount of time there.

Yeah, high real quality. It's not just perceived. No, but I think this is important, because what is quality? I don't know. Yeah. The feeling — like, if I buy a car, like a Toyota, it's good quality, and I get the quality for my bucks. If I buy an Audi or a Porsche, I expect a different grade. So maybe it's grade. The grade is different. And it's high grade, but high expectations, so a high amount of time spent on quality.

In PMing, there's this concept of the high-expectations user. Rahul was one example of those. I was just wondering, who are the most outlier, extreme people? How are they using AI in their email?
Just in general, the most extreme examples that you've come across. Obviously, because that's how you work...

Oh, that's a good question. Because, for example, you had: how much time did I spend in Waymos last month? Right, which basically turns your email into an accounting system, because it's kind of a source of truth. I don't know if I would do that. Is it reliable? It is reliable. Wow. And when you think about the amount of work — we're working right now with Anthropic to basically do, kind of like, building on the fly small kinds of lambdas that will build the code to do the aggregation. This is an easy example. This is like a code execution thing. Yes, it's a code execution piece. But this one is relatively simple, because you just have to have the agent extract from the email. So select the emails from Waymo, extract the time, like the duration of the trip, and then do the aggregation. But that's not easy. That aggregation is not easy. And LLMs are not good at math. So there was some support about it, and right now we're discussing extending this approach to more.

Are you operating on the email file itself, or is it like a row in a database and you're just writing a SQL query? No, the aggregation is — so we don't extract that data on the... So when we ingest — you know, we ingest the data. Yeah, we ingest the data. So we rely on Gmail and Outlook, of course, because they are doing some great stuff that we don't want to do: spam detection. And Superhuman will never do it. And probably — probably never. Probably. Which is being an IMAP server. Exactly. Do I want to do that? Probably not. Probably not. I mean, maybe. You know, HEY did it? Yeah. But is it something where we want to spend time? Is it valuable for our end users? Really? Not sure. They live in an ecosystem. They will live in a different company: Outlook. Yeah. So they have Outlook. They have Gmail.
It's already there. So if we can just plug in and make that better, I mean, it's good there. I mean, in some ways, Superhuman was the original wrapper company. If you think about GPT wrappers, this is the Gmail wrapper. At first it was a LinkedIn wrapper and now a Gmail wrapper. I think more for it than Gmail itself, so. It's very true. It's very true. That said, you can question, like, what is an SMTP server for real? It's a server that conforms to a spec. Yeah. With some database. Maybe not even. Maybe not even. Maybe not even. I mean, they're doing way more stuff. Like they have crazy — especially Gmail, like the search capabilities, of course, are, I would say, crazy good and all of that.

To do what you do, you need a server-side clone of my Gmail. And then you need also a local cache. We need a local cache. We work offline. That was one of the things that we did initially, beside the UX, beside the speed. We have everything local. One of the reasons is we want to be fast, and every interaction should be under 100 milliseconds. Yeah. I mean, with network, you cannot — you just can't. So everything needs to be local. So yes, we have a copy of emails local on device, and it works in the enterprise world. Because, interestingly, for mobile it used to be Realm. Yeah, RealmDB. Yeah, yeah, yeah. Is it Facebook tech? Mongo. Mongo. It has been acquired by Mongo. Yeah. But now it's somewhat sunsetted. So we need to find a different way to do things now. Might be SQLite. But yeah, so it's stored on-device. But that was the old search, where we had basically a database with rows of the emails. But everything that is AI — we have all the embeddings and all of that. So we have a hybrid search, and we use — I don't know if we can name brands, but we use TurboPuffer on the back end to store like five years of history. They're relatively public with their customer list. So I don't know. No, yeah.
And I think we'll let the PR department talk about it anyway, but it's, I mean, stable infrastructure. They do things pretty well. It's fast. I'll briefly comment that I know any number of local-first database companies that would love to work with you. If you're saying that you're in the market for a Realm replacement, they will come and talk to you. I mean, I'm more than happy. That's my AI team, my mobile team — they're really looking for something. They would love nothing more than to be Superhuman's database. Okay, I want to just focus on the AI side, right? Sure. So people want to know: where is their inference running? What are you sending over? What can the provider see? It depends. Yeah. It depends on the use case, on the type of model we want to use. So there's some stuff we run on an inference company with open models. There's some stuff that we run with OpenAI, or Anthropic. So it's pretty diverse. It changes, also based on the quality of the models. We're a GCP shop, so lots of credits for Gemini. Yes. So we have an incentive to probably spend some dollars there. I mean, it's nice that they're also a leading model anyway, so you're not actually compromising. They're doing some pretty good stuff there. But we use Baseten to run some Llama, some BERT models for classification. We're also doing some discovery discussions with some YC companies about a model on device, because, yes, they work offline. And interestingly, those companies started to do on-device mostly for cost reduction. That was their pitch: we'll reduce your cost. I mean, we don't care that much. Our users, they want quality, and they are okay to pay for that quality. But we want to solve for offline. Like, if you're offline, semantic search doesn't work as well. So we are discussing with... What are your design constraints for offline inference? For example, right?
Like, DeepSeek V3.1 would be like 600 billion parameters. I don't think you want to pull down 600 gigs. And people are somewhat complaining about our footprint already. Yeah, it's probably like two gigs already, both in memory and on device, because we store local emails. When you install Superhuman, we download the last 30 days of emails so that we can do search when you're offline, at least for the last 30 days. But we keep that history. So it's starting at 30 days, and if you've been a customer for two years, technically we optimize for two years of email on your device. So that's interesting. On the local model, any thoughts on whether every app is going to have its own model, versus there being one device model that people run? I mean, it's a lot of space. What would you prefer? I'm curious. Would you rather have the user just take care of the inference and rely on that? Or do you want to own the whole experience? I mean, Superhuman will want to own the full experience. We're pretty picky in the way things are, I would say, happening. But at the same time, if we talk about mobile, you want the mobile experience to feel like your device. So we are basically not doing React Native; we are doing Swift, we are doing Kotlin, because we want the app to feel like the user experience generally on iOS or on Android. But for the models, that's a good question. I would love the device provider to be better. We can question local device models — iOS has done some work there, but it was underwhelming so far. They're still working at it. And that's why we have YC companies that are spending time there and doing some cool stuff. Yeah, amazing. Interesting question on Baseten. They're a very different cloud inference provider for open models compared to, let's say, the Fireworks and the Together AIs. The general pitch is that they don't charge by token. They charge by box, effectively.
Anything else that's interesting working with them versus the other inference providers that you buy? They're easy to work with. Yeah. I mean, when you're a startup, you want to move fast. They're really easy to work with. So the priority is what? Cost? Speed? For us, it's quality. So it's quality and speed. It's all open models, so it's all the same quality. We would always start with the highest and most expensive model to get the right quality, and when the quality is nailed, then we can spend time trying to optimize. Right. But all these providers — Baseten, Fireworks, Together — they all have access to the same models. Fair. So unless they quantize heavily, which all of them say they don't. So in that case, the fact that it's a box means you control your cost way better. Yeah, yeah. So it's like fixed capacity. It's fixed capacity. So, you know, when I discuss with my CFO, when it's token-based, the exercise is way more about trying to understand the adoption and all of that. But that's serverless. That's serverless. Sure. In that case, serverless, it scales up, scales down? Fair, but the cost control is becoming a thing. It was a thing before the acquisition. Now that we are part of a bigger umbrella, understanding your cost structure and being able to make projections that are closer to reality is more important. Like all pre-IPO-ish companies, you want to really understand where you will be in three months, six months from a cost standpoint. So based on that, it's pretty cool, because you have more latitude to stay within the bracket of a box. Yeah, I was thinking about this. You know, a lot of people think about cost in terms of dollars per million tokens. Sure. And I think that that is actually amateur thinking. It's only the kind of pricing you care about if you're a solo developer.
But once you're at a large scale like you guys — and it's also something I've learned at Cognition — you should actually care about price per trillion tokens, because we spend multiple trillions per month. And when you unlock that scale, you unlock different ways to spend. That's not serverless token-based pricing. So basically, I think Baseten makes a lot of sense on the price per trillion. Yeah, I didn't look at it that way. It's pretty interesting, but no, that's fair. And I mean, we built so many different models trying to understand the cost per million tokens. And then you have to infer what the average number of tokens is, because we treat every single email — really short emails, very long emails. You have to understand your data — what the median is and all of that — to make your projection. And there's always some magic. The reality is you don't have the time. I mean, I'm an advocate of: let's move fast, and if it's successful, it's great, even if it's expensive. Rather than trying to optimize the cost too early, just go with something that you control and that's fast, and you'll have time. I mean, it's a good problem to have. Success is a good problem. When do you think it's going to break from a cost perspective? Say you were to draft every single email that I get — I'm sure you will lose money on the 40 bucks a month. Yes and no. I think it's a matter of how much more productive we make you. We have some customers that told us, you know, initially when you're talking about the different models and everything: take the better model. I'm ready to pay 200 bucks a month, but get the best model. I don't want half-crap because it's less expensive. Always give me the best. Because these are all high-value CEOs; one hour of their time is worth 10 times the amount of the subscription. So why isn't there a $200-a-month tier? That's a good question, yeah.
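The box-versus-token tradeoff being discussed reduces to a simple break-even: above some monthly token volume, a fixed-capacity box is cheaper than metered pricing. The numbers below are made up for illustration, not Baseten's or Superhuman's:

```python
def breakeven_tokens(box_cost_monthly, price_per_million):
    """Monthly token volume above which a fixed 'box' beats per-token pricing."""
    return box_cost_monthly / price_per_million * 1_000_000

# Hypothetical: a $6,000/month dedicated box vs $0.50 per million tokens.
tokens = breakeven_tokens(6_000, 0.50)
print(f"{tokens:,.0f}")  # 12,000,000,000 -> 12B tokens/month
```

Below the break-even, serverless per-token pricing wins; above it, the box wins — and, as noted, the box also makes the CFO's projection trivial, since the bill no longer depends on adoption.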
I'm not in charge of the pricing and packaging. Okay, maybe an example would be: what's one thing that you would like to do that you cannot do with today's models, even though you tried pushing quality? Your customers are telling you, actually, we really want this. Or maybe Rahul is telling you that he really wants this. I don't know. Yeah. I don't know. I think we have the means to do pretty much everything that we want to do. It's a matter of executing and doing it right. The way I'll put it is: if you can articulate what you cannot do today that you think you should be able to do, and your customers would pay you for it, the models will make it happen. But the problem that you have, and the problem that I have at Cognition, is we cannot articulate what it is. We will know if it's better, but only once it exists. No, that's a good framing. And the other piece that I think is pretty tricky is that there's a transformation happening in the user experience. Even the way we're thinking about the user interface right now is totally switching. The way we think about emails right now, it's still some sort of to-do list. It's a table, to some extent, with rows. What would it be in a year? Because people will be more and more interacting with their systems through a conversational aspect. Like, I see my kids. My kids, they don't type on their phone. They talk. I have three kids — one working, one in college, one in middle school — and they all talk with their phones. Okay. On WhatsApp? WhatsApp, because they're European and they need to talk with the family. The reality is Snapchat, TikTok, whatever — Instagram. They communicate over Instagram. Isn't that an image tool or something? I feel like a boomer. Yeah, I am. I definitely am.
But what is interesting is that — and we can debate about this — we as a human species started to write because we didn't have enough storage for the stories that we were telling each other. So we had to write to store those stories. Now all the content can be stored on YouTube, on TikTok, or whatever. So what's even the need to write? Because everything can be vocal. And I see kids now: everything is vocal. They don't read articles; they want a TikTok video talking about the article. So coming back — and I'm sorry, I'm getting pretty high-level here — being a bit more grounded: what does that mean about the future of the user experience for email and communication? Will people still type, or will they just talk to emails, and will they want to hear an email? And this is where it becomes interesting, because Rahul as a CEO, maybe next year he doesn't want to write to you about the new feature. Maybe he wants to talk to you. And then the way you will receive our marketing campaign about the new features will be spoken to you, with his voice — not just voice and tone in terms of writing, but really you, in your car, commuting, listening to Rahul talking about that. So coming back to what cannot be done right now: I think the main problem is nailing the new user experience. I mean, OpenAI — now you can do stuff with emails. They're trying to do some stuff there. All those chatbots, they try to be basically the new OS, to some extent. So how do you interact with those new apps? What even is an app in this new world? That's what is really interesting. And that's why I'm glad to work with Rahul, because the guy is so freaking visionary. And if there's one company to nail it — there are not a lot, and I believe Superhuman is one of them. Yeah, I think the inbox is like the ultimate private data source.
I feel like even when I see all these companies that are like, you know, talk to your AI clone to get advice, or things like that — I feel like so many times, man, I'm just writing the same thing over and over. Like, how many founders email me asking for help with XYZ task? And the answer is almost always the same, you know. There should almost be a way for Superhuman to be the advisor on my behalf, in a way. That is, you should be able to predict what I will respond to this email. It's called auto-draft for responses. We're still testing it internally. Because — sorry to cut you off, but it's the same for me. How many companies are reaching out to me to pitch whatever AI frameworks, or AI tooling, or whatever? And my answer — although often I don't answer, because I receive hundreds of them — is like, thank you, I don't have the time, that's cool — because I want to be polite. Right now, that's automatically generated for me, because it learned that I usually don't care. Yeah. And that's my answer. Or if it's someone that is pitching me like, hey, I want to work with you guys — someone that is applying — my answer is usually, oh, please reach out to HR, and I cc HR and everything. So now it was able to understand how I typically reply. But it's always: if it's covering only 80% of your use cases and you need to discard 20%, where is the cost-benefit value? Is it annoying to have the 20% where you go, ah, discard, I want to write it myself? Is it good? What is the limit? 90-10? 80-20? I think it's like AI plus the snippets.
That you have. I think that's kind of it — like, I have snippets for a bunch of things. For vendors, I have this super long snippet: "Thank you so much for reaching out about your company, sounds like a great product, we're not currently in the market..." — blah blah blah, it goes on. And then the response is like, "Thank you so much for your thoughtful response," and I'm like, great, got it out of the way. But I feel like if you could use that plus AI to do the small, kind of last-mile thing, I think that would be enough. You don't really need AGI. I'm excited for it. Q1, Q2, something like this. I pay 200 bucks a month to OpenAI, to Anthropic. I'll give you 200 bucks a month if you make me not write the same thing over and over. I think more generally, what he's trying to get at — and Superhuman is starting from a very good basis, but it's not there yet — is kind of like an AI EA. I don't know if this comes up a lot. Like, I have people I work with who do read my emails and respond for me. Yeah. And they have memory, and they know my normal preferences. They have human judgment, which LLMs don't have. Is that something that you would want to build, or do you think that's what you want to leave to others? That's a good question. When we kicked off the revamp of our AI world and what AI means for Superhuman, Rahul did a pretty good pitch on it, and there was a pretty nice video — I think it was in March, for the launch of the new AI. That's the vision. The vision is that you have an EA. And most of the people who are using Superhuman are CEOs, founders and all of that. So pretty fast, they need someone to help them with their emails. And we want to do most of that job. So we're getting there. We're getting there, but that's the goal. That's the goal. The first thing — answering with your availability — right now, we can do it.
I mean, right now it's in beta, but right now with my emails internally, when someone is asking, "Can we meet next week for lunch?", automatically I will have three slots proposed in a draft, and I can just send the draft that is prepared for me. It's still up to you to decide whether or not you want to send the draft. That's the thing: I don't want to be involved. And this is where your EA will always be better than an LLM, because she knows the type of people you are okay to have lunch with. Or maybe they have the context: oh yeah, sometimes you're busy, but you're like, oh, a VIP, I will move this. Exactly. You know what I mean? Your calendar is not going to know. I mean, we're getting closer, because we know how much time you've interacted with that person. But how much time you've interacted doesn't capture that maybe last week you had a bad discussion with them, and now you're not friends anymore, for whatever reason. But your EA would know. So there will always be limitations to this. And that's why we want people to always be in the loop. And maybe it's your EA that is in the loop. It's so helpful when I'm not in the loop. Yeah. We can batch it, and I have my once-a-day call with the EA. But yeah, obviously that will happen. You know, some ways that other people are pursuing this: Notion is trying to go after it, right? They have Notion Mail, Notion Calendar, and obviously they really care about AI. Some other people are doing this interesting thing where they buy an EA company — a company that already does virtual assistants — and then just monitor what they do. So Superhuman could provide me an EA that is a human, and then slowly replace parts of it with AI. I'm curious what you think about that. That's a more aggressive approach. If you really want to.
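The slot-proposal flow described above — "can we meet next week for lunch?" becoming a draft with three free times — boils down to an overlap check against the calendar. The busy blocks and dates below are hypothetical; the real feature presumably reads the user's actual calendar:

```python
from datetime import date, time

# Hypothetical busy blocks pulled from the user's calendar for next week.
busy = {
    date(2025, 12, 15): [(time(12, 0), time(13, 0))],    # Monday: lunch taken
    date(2025, 12, 16): [],
    date(2025, 12, 17): [],
    date(2025, 12, 18): [(time(11, 30), time(12, 30))],  # Thursday: overlaps lunch
    date(2025, 12, 19): [],
}

def propose_lunch_slots(busy_by_day, slot=(time(12, 0), time(13, 0)), n=3):
    """Return up to n days where the lunch slot overlaps no busy block."""
    free = []
    for day, blocks in sorted(busy_by_day.items()):
        # Two intervals overlap iff each starts before the other ends.
        clash = any(start < slot[1] and slot[0] < end for start, end in blocks)
        if not clash:
            free.append(day)
        if len(free) == n:
            break
    return free

print(propose_lunch_slots(busy))  # Tuesday, Wednesday, Friday are free
```

The part an LLM can't do from the calendar alone is exactly his point: which of those free days you'd actually give to this particular person is EA judgment, not interval math.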
I mean, that's probably the best way to understand how an EA is working — the type of work that they're doing and everything. Yeah, make your own data. Yeah. I mean, that's intense. That's intense. But sure, if you have the money. And you pretty quickly understand what type of workflows you want to automate first. So having that data would be, I would say, pretty interesting. One of his portfolio companies, they bought a sort of legal firm for this, yeah. Do you think that's an accurate description, or am I glorifying it too much? No, it's an accurate description. It just behaves as a law firm, though. Right. Just treat it as a law firm, and then internally start to optimize. I mean, you have so many customers now that you might need a lot of EAs to do it for everybody. But I'm curious — I think the memory is kind of the killer feature of the EA. It's understanding in real time. I'm curious, now that you're within Superhuman the company, you know, Superhuman Mail — do you feel like there are a lot of advantages to being email plus documents plus being embedded in everything? Do you feel like that helps close some of these gaps? Yeah. So, for example, Coda is an interesting, I would say, piece of software. Coda is like a Notion equivalent. Yeah, we used it at Amazon. Yep. It's a pretty good one. And a lot of enterprise companies are starting to use Coda more and more because of the flexibility and everything. And Coda has this concept of Coda Packs, which are integrations — glorified integrations, if I can say it that way. But they're ingesting the data, so the data is there. So we have, technically, an ingestion pipeline that can aggregate all the knowledge about you in the company, which is great. And now if you add Grammarly — Grammarly is ubiquitous.
If you're a user of Grammarly, Grammarly knows that you're in Google Docs. Grammarly knows that you're, I would say, crafting a post on LinkedIn. Grammarly knows — technically, they can know. It doesn't mean that they use the data, but they're everywhere. So when you have this "I'm everywhere": oh, you're getting into your email, but I know that you were just in Jira, with that context. So all of a sudden I can pop up some of that context. I know that you're writing to that person. Oh, it's about this. I can expand and augment your email, because I know where you were coming from. So the data will be there through Coda. Grammarly knows, basically, when you're switching from Google Docs to Salesforce to LinkedIn. And now you're writing an email. So we have this augmented context — much more precise compared to something like ChatGPT. They don't know where you are, because you're switching windows. You're coming from Salesforce to ChatGPT; they don't know where you were. They wait for you to paste the content to get the context. If you're Grammarly, I know where you're coming from. So when everything is converged — and we were acquired only three months ago — when everything is converged from a contextualization standpoint, a knowledge standpoint, we know way more. So we'll be way more accurate in the way we help you. You might be predicting a fourth acquisition, but wouldn't it make sense to have your own browser? That's a good question. I think there's much more to be done in the productivity space before, I would say, solving a browser — and everyone is trying to do a browser. Yeah: Atlassian, Perplexity, OpenAI. I'm still sad that Arc is not getting development anymore because of Dia. But Dia has been stopped. They're rebuilding Arc in Dia. Yeah, but it feels very unstable now. So more and more people are basically saying, okay, let's go back to Firefox.
I mean, more and more people are doing that, because there are so many browsers. You want to wait for the war to be done and to have a clear winner. No, no, no. I disagree, I disagree. You should go all in. What are you using? I use Atlas. Atlas. Yeah. I'm also on Atlas now. Oh, interesting. I'm still on Arc. Atlas still doesn't have profiles — that's the biggest issue. Based on the different emails and logins I have, I switch between Atlas and Chrome and Arc. Interesting. Yeah. My personal one is on Chrome. I'm just saying, okay: if that context matters to you, right — if Coda and all those things, then Grammarly and the others — you might as well have your browser. This is the season of browsers. No one will get upset at you for saying, oh, we have a browser. It will be like, yeah, it makes sense. Or it will be like, oh no, one more. But it's the Superhuman one, and that's a good brand. That's interesting. I foresee browsers disappearing completely. Ooh, okay, that's the title. I mean, my main, central, I would say, piece of software that I use as my productivity tool is Raycast. Ah, yeah. I'm a Mac user, so I use Raycast. For the people that don't know Raycast, it's basically a way better Spotlight on Mac. And I don't need bookmarks in my browser anymore. What is a browser doing besides providing you a view on a website? Nothing. So to some extent, Raycast should be just a web view, because of what I do with Raycast. Then you're turning Raycast into a browser. Is that a browser, if it's just rendering HTML? Yeah, everything is a browser. So yeah, if it's only rendering HTML. What else do you want? You want JavaScript? Do you want local storage? Local storage is one. Extensions. You need a browser to have your local extensions. And to have your local storage — that is pretty massive, like Superhuman's. But I mean, what's left?
If you take everything that was making a browser a browser before — bookmarks, basically the history that you had, maybe cookies — and you get rid of that, it's just a view, a web view, to some extent. Yeah, it's a clean application platform with an open app store. You know, there's a Marc Andreessen line: well, the operating system is just a poorly debugged set of device drivers for the browser, because the browser is the actual application interface. From the person that made the browser, yeah. Yeah, I think the browser will be more and more thin. I believe they will be thinner and thinner, but they will disappear, or they will just be embedded in the OS eventually. Yeah. So, one more technical sort of thing, and then we can go to organizational things. You mentioned understanding the person. Part of memory is just the knowledge graph, and one part of the knowledge graph that really matters is the entities that I deal with, right? Like, I've dealt with him for four years, and we have that context. So, basically: what exists today in Superhuman, and maybe what is possible in the future? Do you, for example, use a graph database or something like that? Not yet. And interestingly, because you were mentioning what's missing right now: these knowledge-graph-oriented databases, they're not there yet, to some extent.
But have you actually tried, or are you just saying...? No, we didn't try. Yeah, that's the thing — it's not fair to say they're not there yet. Correct. Yeah, correct. But even from a taxonomy standpoint, when you think about those entities: what are they? If you are a verticalized company, yes. But then you start talking about projects. But is it a project? Is it a task? Is it an initiative? Is there a hierarchical aspect to those? How deep is the tree? These are all the other questions. I think it's very — you know, Superhuman's history is Rapportive, where the person is the core of the universe. No, no, but there are some obvious entities. Yeah, but if you want things to be really personalized, these entities are very, very subjective. I'm a user of Obsidian. Yeah. So I'm a note nerd. And for the people that don't know Obsidian, it's another local-first app — one in which you build your own workflows and where you will basically, through templates, define your own entities that make sense for you. And there are no two graphs that are similar, even between people using the note app for the same thing. So trying to infer a generic knowledge graph that can be reused with dedicated entities — people, tasks, projects, and everything — is harder than it seems. Interestingly, we were thinking about it when I was at Productboard. At Productboard, we had the roadmaps of so many tools. Based on that, you can probably infer some taxonomy about what a SaaS product is. But even trying to generalize this into a tree that can be repeated for people is hard. There's some common stuff — authentication, authorization, billing, user management, dashboards, whatever. Every SaaS company has this. But when you enter the domain of the company, it's totally different, because their features, their surface area, are very different.
So even there, trying to take the knowledge that you have and abstract the entities that will be the same for everyone is not easy. It means that for each user, you need an unoptimized graph that is subjective and dependent on the person. So you need to build the graph based on just the data, and you don't have a real way to optimize for it. But you're right — we didn't try. But also because many people have failed. It's fine. And I don't even foresee a path where that can be surfaced into more productivity gain. At the end of the day, what is the problem you're trying to solve? It's super nice from a technology standpoint, and even from a thinking-process standpoint — what is the ultimate data model for a productivity nerd and all that — but what are you improving from an experience standpoint? Is it the accuracy of your drafts? I want my AI EA to remember everything I've talked about, everything I've done, everyone I've talked to, every conversation I've had, you know. Yeah, but then it's Jarvis, and it's almost AGI to some extent — you have the context that anyone else has. Yeah, but the amount of compute — because you need to recompute your graph every time you receive new stuff and everything. So, ah, it's an interesting space. I think, to your point, as an endpoint solution, we probably won't be the ones solving for that. I think there are companies that should focus on this and be like, hey, I'm the engine that will ingest everything that you're doing, and we'll build a graph, and the graph will be the best graph ever, and for each account or each tenant we'll build a graph for you. That would be great. But is it something for TurboPuffer? Is it something for those vector database companies to solve for? Maybe. I don't know.
So, for what it's worth, I'm actually dating someone who's at Upside, and they're mining emails for CRM population and building a knowledge graph from emails. Interesting. So basically, they're happy that you're not doing it. I'd love to have an intro. Because obviously, if you do it, then you are a very serious competitor. No, but I think it's not easy. So I would love to discuss. I think we would probably be more a consumer of the outcome rather than the builder of that layer. I think the other big consumer, obviously, would be OpenAI. Of course. They clearly want to eat everything inside of ChatGPT. I mean, this is a cool exit strategy for such a company. For them, yeah. I mean, do you want to build a Superhuman app inside of ChatGPT? I feel like the answer is no, right? Oh, the answer is: ChatGPT — OpenAI — and Superhuman are competitors. Okay. This is what we fight against, to some extent. We have a different approach, I think, but especially with this ubiquitous Grammarly presence, we are everywhere and in everything. I think we want to be more proactive, because where you work, we can be more proactive — compared to ChatGPT, which is waiting for you to do things in order to help you do the thing. So there's reactive versus proactive. I think we're more on the proactive side, but that's the competition. Just as a note: when Rahul is questioning the quality of our AI queries in Superhuman, he's comparing us to Gemini, he's comparing us to OpenAI. So that's the competition we're fighting against. Yeah, and speaking of which: Gemini, the chat app, obviously has privileged access to all of Google.
So they can also — privileged access, yes, and the search engine is crazy good. Break them up, Rahul. Break them up. Yeah. Awesome. On a broader note: you mentioned you only have three people working on AI. What's the coding AI adoption at Superhuman, on the engineering team? Yeah, interestingly, our path was: we started to really think about it in Q1. A bunch of people were using some stuff and everything; we didn't have any data, just anecdotal feedback and all of that. The first thing we've done is cut the red tape: I will approve the budget in one hour, you can try anything you want. And with the security team, a 24-hour turnaround to get things approved from a security standpoint, because you don't want to do crazy things. Huge — Q1 was everyone trying everything. It was really interesting to see how things were working super well on the front end, a bit less on the back end — we're a Go shop on the back end. And everyone working on iOS and Swift was like, eh, not that good at the time. But huge adoption in terms of tooling. Also on the product side, a lot of v0, as we said. For Next.js? No, v0 is kind of like a Bolt. Yeah, because they build Next.js sites, right? Or apps? Yes. We just use it for prototyping, to be as close as possible — because we have a founder that is very picky and wants to review the design. The design in Figma is great, but when you can click and do real stuff, it's so much better, and Figma is not there just yet. Figma has Figma Make. We interviewed them. Figma! Sure. It's getting better.
It's getting better, but ask a PM — they use v0 or tooling like this. Not Lovable; at Superhuman it's v0. v0 is the standard. And again, it was free-for-all, try whatever you want. Free market, right? Free market, and v0 won. It's still a free market. Q2 was more about, okay, let's try to understand where this is working and where it's not. So we compiled a huge list of wins, and of areas where, ah, for this, not good. Wow — to onboard into a new, I would say, code area: amazing. I used to spend a full day understanding all the entry points and dependencies in a code stack I didn't know. Now I need like 30 minutes with Claude Code, and I understand how things are working. Even for me — I'm not in the code anymore — instead of asking my engineers, how are we managing the refresh tokens with Gmail, now I just use Claude Code. And I'm using Warp. I'm a Warp... Warp is good. But anyway: how is this thing working? And boom, boom, boom — it provides the links to the right files, explaining the high-level concepts and everything. And I don't waste my engineers' time just to answer a question. So, pretty cool. That was Q2. And we started measuring: on every PR, you have to put a label — I used AI, or I didn't use AI. And if I used AI: it was productive, or it was not. Trying to understand the lay of the land. Roughly said, I think we have 80% of people really flagging the PR. Out of that 80%, probably 90% AI usage. So it's all declarative; we're not plugging in any tool to measure the real number of tokens and everything. And out of those 90%, again, 90% positive impact. But it's not always in the code. It might be just the discovery, understanding the lay of the land, stuff like this. So 80% times 90%. So technically, yes, it's like 90% of 80%.
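Taken literally, the self-reported funnel multiplies out like this — a back-of-the-envelope on the percentages mentioned, not real telemetry:

```python
# Declarative PR-label funnel, as described; all figures are self-reported.
flagged = 0.80      # share of PRs where engineers filled in the AI label
used_ai = 0.90      # of flagged PRs, share reporting AI was used
positive = 0.90     # of AI-using PRs, share reporting a productive impact

overall_ai_use = flagged * used_ai             # ~72% of all PRs
overall_positive = overall_ai_use * positive   # ~65% of all PRs
print(round(overall_ai_use, 2), round(overall_positive, 3))  # 0.72 0.648
```

So "90% of 80%" lands around 72% measured AI usage across all PRs, and roughly 65% reporting a positive impact — which he then rounds up, since unflagged PRs presumably look much like flagged ones.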
But by inference — if I caricature — I'd say 80% usage, and happy usage. So roughly 80% of lines of code written at Superhuman. Though probably not measured in lines of code — probably more than that, because of the discovery. Most of the time you spend is not writing code; it's trying to understand what you need to solve for. And that's the part that has been reduced. In terms of real KPIs — and AI is not the only reason we've accelerated — in Q1 we were at roughly four PRs per engineer per week. In Q2 we were closer to five. In Q3, closer to six. So the global throughput — and PRs per engineer per week, we can debate it, but it's a throughput measure — increased quite a lot. But again, AI is only a piece of it: technical strategy, clarity about what you want to do, organization. There's a lot that goes into that. So we feel pretty good. One question a lot of the AI leadership people I talk to have is: am I supposed to ask more of my engineering team now? Am I supposed to hire fewer people? Should we ship more as a company? The thing about AI is that you can do a lot more, but most companies are not built to do a lot more. If you ship a hundred more features, you don't really have the marketing to market a hundred more features; you don't have support that knows a hundred more features. How do you think about structuring teams and the expectations around that? That's interesting, because Superhuman historically was very lean in terms of organization. We have 50 — crazy — 50 engineers. And your user base is roughly a million? Less than that. Paying users, probably 100,000, something like that. So it's still relatively small, but you're still supporting a lot. It's, I would say, a small team, pretty senior.
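The declarative metrics quoted above can be sketched as a quick back-of-the-envelope calculation. The figures are the rough, self-reported numbers from the conversation, not instrumented data:

```python
# Back-of-the-envelope on Superhuman's self-reported PR metrics.
# All figures are rough, declarative numbers quoted in the episode.

flagged = 0.80    # share of engineers actually labeling their PRs
ai_usage = 0.90   # of flagged PRs, share marked "I used AI"
positive = 0.90   # of AI-assisted PRs, share marked "productive"

ai_share = flagged * ai_usage       # "90% of 80%"
happy_share = ai_share * positive   # AI-assisted AND productive

print(f"AI-assisted share of PRs: {ai_share:.0%}")        # 72%
print(f"AI-assisted and productive: {happy_share:.0%}")   # 65%

# Throughput trend mentioned: PRs per engineer per week, by quarter.
throughput = {"Q1": 4, "Q2": 5, "Q3": 6}
growth = throughput["Q3"] / throughput["Q1"] - 1
print(f"Throughput growth Q1 -> Q3: {growth:.0%}")        # 50%
```

This makes the caveat concrete: "roughly 80% usage" is an inference over the 72% that is actually declared, and the 4→6 PR trend is a 50% throughput gain that AI only partly explains.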
And the average tenure is probably four years — long tenure, and fully remote as well, which is interesting. My AI team is distributed between Patagonia and Canada, so we have access to a different pool of the right people. We're not trying to compete in the Bay, because there people want to go to Anthropic or OpenAI, and those guys pay too much money. It's just not the same competition. So we find people where they are — people who don't want to move to the Bay — and there are some great people there. Anyway, long story short: relatively small teams, and we've increased the capacity. We try not to move too fast, because we care about quality. It's kind of a vicious circle: oh, we can do more, let's do more — but all of a sudden the number of incoming bugs is also growing. So we try to be conscious of that. Now we're working on the Grammarly-slash-new-Superhuman, so there's also an incentive to invest a bit more, because it's a product that is working. And Shishir is really set on implementing a model called the compound startup. We're still a startup within Grammarly: we have our own P&L, we still have Rahul as a founder. The only difference between now and before is that our board is Shishir and the exec team at Grammarly slash Superhuman. But we want more people; we want Superhuman to have more reach and to do a bit more. So now we're scaling that and adding more capacity. AI is helping, of course — it's helping with onboarding, it's helping with a lot of that — but we are adding capacity. Yeah, I think the mainstream pushback on this is: hey, look, you used to pay me X to do four PRs a week. So am I getting paid 50% more now that I ship six PRs a week? I think that's why there's a lot of pushback around AI from people.
It's like: hey, look, I'm using this and you're getting more out of it, but I'm not getting more out of it. I would disagree with that statement. Yeah, I disagree too. I mean, when you listen to people outside of our bubble, there's a lot of discussion around where the value is accruing. Basically, if you only look at it as paying for output, then either the previous payment was wrong or the current payment is wrong — one of them has to be. Exactly. No, no, no — that's an interesting point. The way I see it: engineers are well paid. We are a very fortunate part of the population; our salaries are probably in the top 5% in the country, or even in the world. And when we talk about Maslow's pyramid, engineers at some point, when they're pretty senior, don't rush for 10 or 20 more K. I mean, if we're talking about millions, sure — but that's the 1% of the 1%. For the rest of us, the joy and the dopamine come from what you ship. Having the ability to ship more value, to have more customers happy with what you do — you end your day and you feel like, damn, that was a good day. So I think the discussion is not about the money itself. It's: oh damn, I'm in an environment where I ship fast.
I can have all the tools I request within 24 hours; I can basically be the best version of myself, and I have fun in a good team. You don't have a lot of attrition when you have an environment like that. So sure, money — you need to pay people a fair amount. But if you're fair, people tend to stay when they have the right environment. And if you help them go from four PRs a week to six, they're like: shoot, I'm so much better than at the beginning of the year — that's so cool. And you don't have that everywhere. Yeah, I'm with you. I'm curious to see more of the discourse about it. Awesome. Any parting thoughts? Just generally, your take on AI and the software industry — you've been in this for two decades. Do you think people should still learn to code? Do you think junior developers are screwed? Any of those opinions that are common topics? Yes, of course — of course you need to learn to code. I see this like the switch from assembly to C. Yeah, it's a higher level — it's just another level of abstraction. But at the end of the day, you still need to understand how a computer works. You need to understand how memory works, swaps and all of these things happening on the server, how a server works. "Serverless", in quotes — it's always someone's server. You need to understand the fundamentals to be good with AI. I believe AI will do only one thing: it will separate the good engineers from the bad engineers, faster. If you're a good engineer and you use AI well, you will be an amazing engineer. If you're a poor, lazy engineer and you don't want to understand the things you're doing, AI will make you even worse, because you'll have the feeling that you get it, but you're not going behind the magic — behind the curtain, into how things actually work. So I think AI is a blessing for our job. Awesome.
Any final calls to action — hiring, things you want people to do, trying the product and giving you feedback? Of course, try the product. Of course, complain to me if things are not great. And yes, we're hiring: product engineers, people with a strong appetite for the user experience. Because I believe that in a world where the technical moat is not much of a moat anymore — startups can build something close to what you're building in two weeks — the difference is how you think about the user, the flow, and all of that. People with an appetite for nice interfaces, beautiful products that people love: that's the type of engineer we want. Good engineering is the baseline, of course, but with that spike in user experience. Even if you're a backend engineer — a backend engineer who cares about latency, because it has an impact on the end user. That's the type of engineer we're looking for. And we don't care where you are: you can be in Patagonia, as I said, or up north in Canada. We try to limit things to the Americas, basically. But yeah, we're just looking for bright, gritty people who want to have fun. We're seriously fun. Cool. Thanks for joining us, man. This was fun. That was cool. Thanks for having me. Thank you.
