

A Positive Vision for the Future: Part 2 with Illia Polosukhin of NEAR
The Cognitive Revolution
What You'll Learn
- ✓ AI coding assistants allow anyone to build personalized software and automation, rather than relying on complex enterprise software
- ✓ This enables a shift where more time is spent reviewing AI-generated code for correctness rather than manually coding
- ✓ AI can handle data analysis, generating SQL queries, and building clickable prototypes, reducing development time
- ✓ For complex software, AI still has limitations, so teams will need to decompose problems and have AI build the subsystems
- ✓ AI-generated code can come with natural language explanations to help developers understand how it works
AI Summary
The podcast discusses how the rise of AI coding assistants is transforming software development and enabling anyone to create custom software and automation. The guest, Illia Polosukhin, envisions a future where people can easily build personalized software and digital tools by conversing with AI agents, rather than relying on complex enterprise software. This shift will impact how teams work, with less time spent on actual coding and more on reviewing and ensuring correctness of the AI-generated outputs.
Key Points
1. AI coding assistants allow anyone to build personalized software and automation, rather than relying on complex enterprise software
2. This enables a shift where more time is spent reviewing AI-generated code for correctness rather than manually coding
3. AI can handle data analysis, generating SQL queries, and building clickable prototypes, reducing development time
4. For complex software, AI still has limitations, so teams will need to decompose problems and have AI build the subsystems
5. AI-generated code can come with natural language explanations to help developers understand how it works
Topics Discussed
AI coding assistants, personalized software, software development process, AI-generated code review, AI limitations for complex systems
Frequently Asked Questions
What is "A Positive Vision for the Future: Part 2 with Illia Polosukhin of NEAR" about?
The podcast discusses how the rise of AI coding assistants is transforming software development and enabling anyone to create custom software and automation. The guest, Illia Polosukhin, envisions a future where people can easily build personalized software and digital tools by conversing with AI agents, rather than relying on complex enterprise software. This shift will impact how teams work, with less time spent on actual coding and more on reviewing and ensuring correctness of the AI-generated outputs.
What topics are discussed in this episode?
This episode covers the following topics: AI coding assistants, personalized software, software development process, AI-generated code review, AI limitations for complex systems.
What is key insight #1 from this episode?
AI coding assistants allow anyone to build personalized software and automation, rather than relying on complex enterprise software
What is key insight #2 from this episode?
This enables a shift where more time is spent reviewing AI-generated code for correctness rather than manually coding
What is key insight #3 from this episode?
AI can handle data analysis, generating SQL queries, and building clickable prototypes, reducing development time
What is key insight #4 from this episode?
For complex software, AI still has limitations, so teams will need to decompose problems and have AI build the subsystems
Who should listen to this episode?
This episode is recommended for anyone interested in AI coding assistants, personalized software, software development process, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
In part two of his conversation, Illia Polosukhin, co-author of "Attention Is All You Need" and founder of NEAR Protocol, explores the profound societal and economic implications of AI. He envisions a future where a unified intelligence layer transforms personal computing, AI agents enable direct market connections, and individuals find meaning in niche communities amidst AI-driven abundance. Illia also delves into the ethics of agent-to-agent interactions, where AIs prioritize human interests, and new governance models utilizing AI delegates. This episode offers a concrete, systems-level vision for how AI will reshape our world, from daily life to global governance.
Sponsors:
Google Gemini Notebook LM: Notebook LM is an AI-first tool that helps you make sense of complex information. Upload your documents and it instantly becomes a personal expert, helping you uncover insights and brainstorm new ideas at https://notebooklm.google.com
Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Linear: Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY: https://aipodcast.ing
CHAPTERS:
(00:00) Sponsor: Google Gemini Notebook LM
(00:31) About the Episode
(03:33) AI Transforms Software Development
(14:18) The Future of Work
(18:58) Securing Blockchain with AI (Part 1)
(19:08) Sponsors: Tasklet | Linear
(21:48) Securing Blockchain with AI (Part 2)
(33:03) Vision for an AI Society (Part 1)
(33:55) Sponsor: Shopify
(35:52) Vision for an AI Society (Part 2)
(49:14) Agent Architecture and Alignment
(58:30) Experimenting with AI Governance
(01:06:43) AI Safety and Robustness
(01:16:09) Bio-Security and Open Models
(01:22:56) Coordinating AI Development
(01:28:52) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Full Transcript
This podcast is supported by Google. Hey folks, Stephen Johnson here, co-founder of Notebook LM. As an author, I've always been obsessed with how software could help organize ideas and make connections. So we built Notebook LM as an AI-first tool for anyone trying to make sense of complex information. Upload your documents and Notebook LM instantly becomes your personal expert, uncovering insights and helping you brainstorm. Try it at notebooklm.google.com. Hello, and welcome back to the Cognitive Revolution. Today, I'm excited to share part two of my conversation with Illia Polosukhin, co-author of the seminal Attention Is All You Need paper and founder of NEAR Protocol, which describes itself as the blockchain for AI and aims to build a future where AI belongs to everyone. In part one, we explored the foundational technology stack, proof-of-stake consensus mechanisms, confidential computing on GPUs, and NEAR's plan to train frontier models on a decentralized basis, with contributions incentivized by cryptographically guaranteed revenue shares. Today, we zoom out from the enabling technology and discuss the big picture questions. What does personal computing look like as AI continues to eat traditional UI-centric software? Illia envisions a unified intelligence layer that works across all of your devices, predicts what you need, and proactively gets things done for you. How will this change the nature of markets, trade, and the consumer economy? Illia expects that advertising and middlemen will become less relevant, as AI agents allow buyers and sellers to connect directly at unprecedented scale. What does daily life look like, and how will we spend our time in the era of AI-provided abundance? In addition to a rise in personalized, AI-generated entertainment products, Illia predicts that people will find meaning and compete for status in small communities focused on niche interests, including athletics, video games, the collection and curation of unique artifacts, and who knows what else. What rules should govern agent-to-agent interactions in this world? Illia imagines a symbiotic relationship between individual humans and their own personal AIs, such that humans and AIs effectively grow up together. And he believes that AIs should pursue the interests of their individual humans, even if that means being less than fully transparent when negotiating with other AIs on their human's behalf. And finally, what new governance mechanisms could allow us to take advantage of the exponential increase in information processing power and reasoning that AI affords us? Illia describes NEAR's experiments with AI delegates, which vote on behalf of token holders who in turn select them based on their decision-making frameworks, and also describes NEAR's long-term goal of enabling every individual to have their own AI that participates in governance on their behalf. Of course, we touch on lots more along the way as well, including the role of formal verification of software in securing smart contracts, the remaining challenges required to effectively bridge the gap between low-level guarantees and high-level human intentions, and the need to harden society's defenses in order to maintain global biosecurity in the presence of broadly distributed, powerful AI systems. Overall, while many questions remain to be answered, Illia's strength as a systems-level thinker spanning technology, economics, social dynamics, and governance is evident throughout.
So without further ado, I hope you enjoy one of the most concrete, positive visions for the future that you will find anywhere. From Illia Polosukhin, founder of NEAR. Illia Polosukhin, founder of NEAR, welcome back to the Cognitive Revolution. Thanks for having me back. I'm excited for part two. So last time we talked a lot about foundational technology, the journey that you went on from being an author of Attention Is All You Need to trying to source data from contributors around the world, struggling to pay them, taking a detour to blockchain, thinking it would take just a few months. Here we are a few years later, and it's all really happening. Anybody who's listened to this feed for more than a minute knows that I often say the scarcest resource is a positive vision for the future. And I appreciate that you have one. So I'm really appreciative that you're taking a second window here to help us unpack that. I guess maybe for starters, you know, one of the jumping off points last time was you said you wanted to teach computers to code, and sure enough, now they can code. So maybe as we kind of ramp up into a vision of a potentially quite different future: how is the rise of AI coding assistants changing how you guys work? And how is it changing who can create stuff on top of the blockchain? Yeah, well, I don't think it's even about blockchain per se, but yeah, I mean, the real reason why I always thought that as computers are able to code, we're actually approaching a different world is because there's a few dimensions to this, right? We know that there was a statement by Marc Andreessen, right? That software is eating the world, right? And this idea was like the fact that, I mean, what it means is automation. Automation has always been the driver of innovation, of GDP, of productivity. And you know, it's like everything from tractors to factories to then indeed computers, and computers are kind of this universal, you know, bicycle of the mind, right, automating things. And so now the challenge has been that there's always been like a small cohort of people who actually are able to build software, right? And so if I have a need, right, I need to find somebody else to build it, and probably they need to build it not just for me, but for a large number of people, so it's actually economical. And so we've ended up with a lot of software that became very complex to use. And so now you need to learn how to use it because it's not really built for you. It's built kind of for this generic user that has like, every user has like five use cases that somewhat overlap. And so all of this is kind of stuck in one software, right? Or you just don't have it and you keep doing stuff manually and kind of, you know, wasting your time. And so for me, the ability for machines to code is really about that transformation where everyone now is able to build their personal software. Everyone now is able to build their own personal automation. And it also kind of removes the fact that these interfaces need to show you all the options right away, right? You can kind of subselect with, you know, English or whatever into the part of the interface that you need, right? So to give you very specific examples, right? I mean, I like the example of Salesforce, right? Salesforce, I mean, obviously started as like a, you know, small startup that was targeting like a specific use case of salespeople.
But at this point, right, it's a monstrosity, like you need to hire somebody else to configure Salesforce for you, right? Like that's how complex Salesforce is, right? At this point, it's effectively like you're hiring somebody else to build you a system, right? It's just like they're using existing pre-built components. Now, imagine this world now where you can just talk to a computer. Like it really now becomes about your sales process, about your business process, about how you want to automate, what reports you want, et cetera, right? It can be dynamic. You can restructure this as you go. You can do whatever built-in things that Salesforce may have somewhere as a feature, or may not even have. And then you can integrate with whatever other tools you want. Like, for example, Salesforce, we're in crypto. Everything is in Telegram. Salesforce doesn't have Telegram integration. We can't use it, right? So now we need to go and get somebody to build integrations between Telegram and Salesforce. But with your vibe-coded CRM, you can just be like, hey, integrate Telegram for this. So that's kind of, I mean, this is just like a simple example, and we can keep extrapolating, right? Every single kind of, let's say, digital footprint is kind of becoming more and more automated, right? And the more intelligence the computer can have and the more it can take on, the more you can offload the orchestration of different tools or just low-level, here's a database, I need an HR tool, I need a CRM, I need this, I need that, right? You can actually build all of it. So that's been, I mean, back in 2017, we were like, hey, software as a service is going to die, AI will replace it. In 2017, this sounded, yeah, very delusional. That was probably the... But obviously, I think now we see that most people are kind of, or a lot of people agree, and then software as a service itself is trying to become AI, because they know they're kind of going to just get out-competed by that. So coming back to your question, like what changes in our work, I think there's a few pieces that already clearly work, right? Kind of data analysis through natural language, right? If you have a reasonable data structure, you can effectively make everyone, maybe not a data scientist, but able to do what before would require a business analyst. And then it would take them a while to pull all the data. Now you can just say, oh, you have a question about the business analytics information? Just ask it. You don't need to email somebody to get you the answer. You just ask it to the tool that generates SQL queries, pulls in whatever data, runs some Python, and gives you an answer. I think I'm actually a big fan of front-end building with vibe coding now. Though I'm not writing production code anymore, it's really useful for me as a prototyping tool where you can effectively just really quickly get to an experience. And so you don't need, like, I mean, you want design to be more about like style guides, et cetera. But the UX part, you can test really quickly, right? Before, a designer would, you know, maybe design something and then a developer would try to build it. And again, it's like, oh shit, I can't actually build that, or like, it doesn't work exactly this way. And there's like a lot of iterations. Now even designers can just build a full experience, you know, a fully clickable thing. It generates code for this, and now developers can just plug in all the backend logic.
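To make the natural-language analytics flow described above a bit more tangible, here is a minimal Python sketch of the "generate SQL, run it, summarize the result" loop. The ask_llm helper, the analytics.db SQLite file, and the prompts are illustrative assumptions, not a specific product's API; a real setup would also want a human to review the generated SQL before it runs.

```python
import sqlite3

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API is in use; swap in a real client call."""
    raise NotImplementedError

def answer_business_question(question: str, db_path: str = "analytics.db") -> str:
    conn = sqlite3.connect(db_path)
    # Hand the model the schema so it can write a query against the real tables.
    schema = "\n".join(
        row[0]
        for row in conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'")
        if row[0]
    )
    sql = ask_llm(
        f"Given this SQLite schema:\n{schema}\n\n"
        f"Write one read-only SQL query that answers: {question}\n"
        "Return only the SQL."
    )
    rows = conn.execute(sql).fetchall()  # generated SQL should still be reviewed
    # Let the model turn the raw rows back into a plain-language answer.
    return ask_llm(
        f"Question: {question}\nQuery result rows: {rows}\nAnswer in one short paragraph."
    )
```

The generated query is exactly the kind of artifact that shifts human effort from writing code to reviewing it, which is the transition discussed next.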
I think also we're starting to see on our teams that are actually doing development that the time spent is changing. Before, you spent a lot of time on the development work, and then you also spent a lot of time on reviewing other people's work. Now the amount of time spent on development is shrinking, right? Because you effectively tell whatever Cursor or Codex, whatever it is, to go and do the thing. And a lot more time is spent on reviewing things and making sure they're correct, right? Now, one thing we were experimenting with on one of the teams was what if we decompose the whole piece, like the whole... like, this is, you know, for simple software right now, AI works, right? For complex software, it doesn't, right? It doesn't really work to just go, like, build me a really complex system, and it goes and does it. It doesn't work yet. We see obviously the improvements continuously. And so right now for complex systems, you decompose it yourself, right? You know, as an architect or a senior engineer, and then you have maybe a few different team members who actually build the subsystems. And before, in traditional software development, you always wanted to have multiple people on each subsystem, and you wanted them to know really well how it works so they can maintain it, so they can change it. Now, actually, it's not as important because AI will explain to you how it works. And you may actually even have a natural language explanation that the developer used to build it in the first place attached to it. And so it's actually more about the velocity and the reviewing process and then how to ensure that each part is secure and kind of works correctly. And so there's kind of a transition happening where there's more almost like individualism, because every individual is more productive. And then it's more about how to build the right decomposition into pieces. Again, this is also very temporary in a way, but this is kind of the current state. As tools become more mature, they are better at larger code base navigation, etc. I think the other question is, it's really about the model quality for a specific task, for, conceptually, the abstraction level that it needs to operate on. Again, front end, there's no abstraction level. You just build what you see and you iterate. So it's really easy to check, it's really easy to iterate, so you don't really need that much. And models are good at this. When we talk about blockchain low-level code, it's an extremely complex system. There's a lot of pieces, externalities, so models are not very good at that. And there you really expect people to spend more time, and generally spend more time thinking about the algorithm and architecture than writing code. So, yeah, so it really depends, but this is also all shifting really quickly. Like, you know, six months ago, I would have given you a different answer. Right. And I'm sure in six months it will be different as well. Yeah. We've seen some very impressive programming results from frontier companies that have not yet hit the, you know, the public APIs or product services just yet. So certainly we can bank on more to come. When it comes to job titles, you know, I think you said like senior engineer or architect; that raises the question I think a lot of people are asking right now: are you hiring junior engineers? What do you think is the fate of the junior engineer as things stand today? Yeah.
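As an editorial aside, one lightweight way to picture the decomposition-and-review workflow described above, where an architect splits a system into AI-buildable subsystems and each piece carries the natural-language explanation it was generated from, is a small manifest structure like the following sketch. The Subsystem type, the agent names, and the file paths are illustrative assumptions, not NEAR's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    spec: str                  # natural-language explanation the code was generated from
    generated_by: str          # which coding agent produced it
    code_path: str
    reviewed: bool = False     # human sign-off on correctness and security
    review_notes: list[str] = field(default_factory=list)

# The architect's decomposition of a larger system into separately buildable pieces.
crm_subsystems = [
    Subsystem("contacts-db", "Store contacts, dedupe by email.", "cursor", "src/contacts.py"),
    Subsystem("telegram-sync", "Mirror Telegram chats into contact timelines.", "codex", "src/telegram.py"),
]

def review_queue(subsystems: list[Subsystem]) -> list[Subsystem]:
    """Where the human time now goes: everything not yet signed off, with the spec attached for context."""
    return [s for s in subsystems if not s.reviewed]
```

The point of keeping the spec next to the code is the one made above: the explanation travels with the subsystem, so whoever reviews or maintains it later does not need to have written it.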
I mean, I think it's, it's less about like junior engineer. It's more about who is a person, right? If, if this is somebody who is like coming in like you know for example by the time i got into university i already been coding for seven years right and so and this is when i got you know my first job etc so like if this is somebody coming in who's like you know build multiple projects already using ai every day etc like it doesn't matter if junior or not junior right it's really i mean yeah there's a lot of skills for them to learn but like they're there to learn them right they're open they're ready to go and then you have some people who coming in who's like you know like studied a bunch in university but like not really like not really in this learning mindset and like this is going to be continuously changing things and like you're like so like again we're transitioning from like software as a like craft to really be just like a problem solvers that talk to computers. Right. And so, so, so you need, I mean, problem solving at the end is really all about kind of like the mindset and then the approach. And so if people are willing and are kind of doing that and excited about that, that's, if people are like, Oh, I don't know how to do this. I don't, you know, I can't do this, etc., then that's not the right person. So yeah, so it really depends on, like for many things before, yeah, you would hire like a bunch of junior developers because they're cheaper and you don't need maybe as like high quality of work and you even want to fan out work. That part is not needed, yeah. So you hiring more for like the problem solving and kind of more people who can actually like creatively problem solve together. Yeah, there's always a room, There's always a space, there's always room for people who are like a force of nature under themselves. But overall, that sounds bearish for rank and file. Like, I was told that if I, you know, get it. Yeah, studies is bootcamp. Yeah, I'm going to be, yeah. I'm going to pay it to Hanukkah. Yeah. Yeah, I mean, but I think that's true about every job at this point. Maybe plumbers and electricians. that's the like i i usually talk like the automation right now happening from both from two sides the all of the like manufacturing like on the floor jobs are getting automated and like that price is still pretty high actually like yeah i mean you know vietnam for example salaries are probably lower than what robots are getting paid in the u.s but in u.s it's already like you know you affect there's this company i know called formic they effectively provide you outstaffing those robots. You're a factory, you call them up, they bring you robots, install them, they set them up. You don't need to do anything. You just pay them effectively as you pay salary, but they work 24 seven. They don't complain. They don't unionize. They just do the job. They don't quit. 
Right now in the US, the churn of the workforce is 300%, meaning every year you need to hire three people for one job because they keep quitting. So that's the automation from the bottom, kind of low-scale, you know, very repetitive tasks. And then finally, all of the kind of white-collar, high-end jobs are getting automated, like, you know, coding, lawyers, a lot of this information work, right, getting automated. And so the most safe right now, I mean, it's going to get automated as well, but it's actually the high-dexterity, kind of skilled work that requires, like, I don't know, you know, a plumber, you need to climb under the sink and, like, you know, fit something. Stuff like this right now is just super hard to do with AI. But again, this will happen as well, like all of those things will get automated over time. Yeah, it's coming for all of us. It's just a question of when. Hey, we'll continue our interview in a moment after a word from our sponsors. The worst thing about automation is how often it breaks. You build a structured workflow, carefully map every field from step to step, and it works in testing. But when real data hits or something unexpected happens, the whole thing fails. What started as a time saver is now a fire you have to put out. Tasklet is different. It's an AI agent that runs 24-7. Just describe what you want in plain English. Send a daily briefing, triage support emails, or update your CRM. And whatever it is, Tasklet figures out how to make it happen. Tasklet connects to more than 3,000 business tools out of the box, plus any API or MCP server. It can even use a computer to handle anything that can't be done programmatically. Unlike ChatGPT, Tasklet actually does the work for you. And unlike traditional automation software, it just works. No flowcharts, no tedious setup, no knowledge silos where only one person understands how it works. Listen to my full interview with Tasklet founder and CEO, Andrew Lee. Try Tasklet for free at tasklet.ai and use code COGREV to get 50% off your first month of any paid plan. That's code COGREV at tasklet.ai. AI's impact on product development feels very piecemeal right now. AI coding assistants and agents, including a number of our past guests, provide incredible productivity boosts. But that's just one aspect of building products. What about all the coordination work, like planning, customer feedback, and project management? There's nothing that really brings it all together. Well, our sponsor of this episode, Linear, is doing just that. Linear started as an issue tracker for engineers, but has evolved into a platform that manages your entire product development lifecycle. And now they're taking it to the next level with AI capabilities that provide massive leverage. Linear's AI handles the coordination busywork, routing bugs, generating updates, grooming backlogs. You can even deploy agents within Linear to write code, debug, and draft PRs. Plus, with MCP, Linear connects to your favorite AI tools, Claude, Cursor, ChatGPT, and more. So what does it all mean? Small teams can operate with the resources of much larger ones, and large teams can move as fast as startups. There's never been a more exciting time to build products, and Linear just has to be the platform to do it on. Nearly every AI company you've heard of is using Linear, so why aren't you? To find out more and get six months of Linear Business for free, head to linear.app/tcr. That's linear.app/tcr for six months free of Linear Business.
So I want to understand a little bit better, because I think a big part of your vision of the future, obviously, is AI that everyone owns, right? We've kind of got the one default path, it seems, in front of us where we have the big tech singularity, which is where three to seven companies become totally dominant forces because they have the models that ran away from everyone else and nobody else does. And so, you know, we're all just kind of trying to get whatever inference we can get from, you know, from these leaders. And then there your vision of the decentralized and collectively owned AI that I think has you know it has a lot to to say for it It certainly super attractive in a lot of ways for people with some concerns about you know what happens if everybody has you know access to certain things in an unrestricted way too. But leaving that aside, if front end is like largely we can get the AIs to do it, but like core blockchain work is beyond what they can do. what's in the middle? Like what kind of apps are people building on the blockchain today? How hard is it to build those apps? What makes it hard? And can the AIs help there yet? Or if not yet, at what point should we start to see an explosion of like vibe coded on blockchain applications? Like what's the fundamental barrier or kind of rate limiting step toward the proliferation of just anybody who has an idea for blockchain can go do it in the same way that anyone who has an idea right now, to a significant extent at least, can go create a little micro SaaS app? Yeah, I think the problem is the same. It's just it's exaggerated. The problem right now, if you're launching your micro SaaS app and you're not actually an engineer, if you're launching it just for yourself it's totally fine right and i kind of my my recommendation for everyone like you know build tools for yourself live code everything for yourself now the problem is like as soon as you go like make it for everybody else because you don't really understand what issues are under the hood you don't know like how it will actually affect your users yourself etc and so we've seen already you know people getting hacked you know they're like secrets getting leaked etc right so like that that is the biggest issue right now the blockchain because it's naturally kind of in the open right away for everyone and it involves money that problem is it's exaggerated right because now if you make any small mistake right and you know we have this right now with very professional engineers who build soft who build blockchain software, any small mistake, somebody will find it and exploit it. And effectively, this will result in some value loss, some money lost. So really, that's the biggest challenge. What works right now is actually for existing so-called smart contracts, right, kind of the back end, you can generate a front end, you can create your own custom UI for specific use cases or combining multiple use cases into one UI. That part actually works. And again, I would not recommend to launch it to other people, but you can build for yourself where you say like, hey, I have this yield opportunities across different places. I have this whatever. Let me combine it, make my own asset manager that makes me easy, for example, to manage this. Things like that you can totally do now. So I think, yeah, it really depends. 
Now, what we are working on kind of medium to long term is how do we formally verify the correctness of the smart contracts and ideally the whole kind of blockchain application stack, such that if you are vibe coding an application, you actually have proofs, mathematical proofs that it's correct. And now also when user using it, and this is important, just proving, okay, cool, smart contracts is like doing what you want, but you maybe didn't properly define what you want, right? And so maybe your logic itself is flawed. And so what you want is like I'm as a user using this piece of software, want to know that it does what I want, right? And so that's a critical step. So again, if I'm looking for a financial application, I want to know it's not going to lose my money. And so if it can prove to me directly in the transaction, right, as I'm sending money in, it should prove to me that it's not going to lose my money, then transaction succeeds. So that's kind of the level of integration that we are aiming for. And I think is required for this kind of really kind of adversarial and monetary valuable. I actually think this is required for all software because like the world where, you know, AI is going around and hacking everything left and right is also not great. we are kind of in it actually right now. And so we do need like formally verified software to, to really secure that. That has been coming up more and more in my conversations about this. And I would have to confess that up there with like regular expressions, the concept of formal verification of software is, you know, one of the things that makes me feel dumbest. because I'm always a little bit like stuck on, I think the point that you were emphasizing there, which is like, it's one thing to prove that, you know, this particular function or whatever, you know, does what it's supposed to do and like, doesn't, you know, corrupt other memory or, you know, whatever you can, you can make a bunch of sort of generally low level statements. Right. But it seems like there's still a real challenge in aggregating, of aggregating up those small level statements to the holistic, like, this is what I want. It's, you know, you have a version of the genie problem, which is kind of, you know, what a lot of people have worried about with AI in general for a long time of, you know, if we tell it a goal that it interprets a bit differently than we did, you know, we're in, potentially could be in trouble there. How do you see this actually playing out in practice? And again, I'd love to kind of get some vision for applications that you would love to see somebody come build on your protocol that don't exist yet. And maybe it's because they're just too hard, but I'm sort of imagining like, do I have an AI agent that comes in and automatically red teams this for me? Let me give you a simple example and we can build it up. So a simple example, I want to put money into a bank savings account. I want to be able to withdraw it. And ideally I want to withdraw a little bit more money than I put in. Right. Like right now, you know, you're sending money, you have no guarantees. Like let's say you sent money via whatever ACH or IBAN or something. You have no idea if it's going to arrive. Right. You have no idea if bank will exist tomorrow. You have no idea if, if it's actually going to give you money back. I mean, there's like government insurance, right. That insures up to some amount, but generally speaking, you have no guarantees. 
Now, blockchain gives you some guarantees. Actually, like, hey, money arrived, you can verify that. But indeed, if there is some code involved, you don't know if that code has some potential issue where somebody can withdraw this money illegally. Like, you need to go audit the code, you yourself may have missed something, et cetera. So here you say, hey, you effectively constrain, like, you know, my transaction only goes through if I can call this withdraw method and it will return me at least as much: given X deposit, I want to be able to call this withdraw with at least X, right? So that would be the condition, which is like when you deposit, right, you effectively constrain that this smart contract needs to prove to you that this property will be true. Now, that contract may, you know, deposit this money itself into other places, right? You know, if it's a savings account, it maybe lends it, borrows it, et cetera, right? And so to be able to prove that to you, it actually needs to chain all of this with everything else it does, right? It needs to be like, hey, if I'm lending to someone, right, either they need to return it to me or I'm going to, like, you know, foreclose their account, liquidate their collateral, et cetera, right? So there's kind of a chain effect that's happening through this system. So that's kind of where, I mean, to your point, right, like you start maybe low level, but you actually can start expressing somewhat high level properties. Now, the challenge with this: with money and deterministic blockchains, it's pretty easy because it's deterministic and you kind of have full observability. So you can actually do those constraints pretty easily. To your question of where it's coming from, well, it's going to come from a combination of, indeed, your, I mean, we call it wallet, right? Your kind of software that is on your side that is actually facilitating these interactions. But also, we believe that your wallet will be AI, right? It will be the AI agent that is on your side, your user-owned AI that actually does these interactions. So it will indeed be on guard, verifying these properties. Now, a more complex thing is, indeed, how do we prove things that are non-deterministic and not potentially easily observable? And this is indeed where you obviously cannot have 100% formal proof. You need to effectively now start dealing with probabilities. And then you can move those probabilities with insurance and other things. So this is where you can have a financial system and other things where you say, hey, we have liability insurance such that in less than 1% of cases something can happen. And so prove to me that the chance it succeeds is at least that high, or I'm getting a million-dollar payout if it doesn't. Right. And so you're kind of starting to combine what people have built in insurance, where they estimate risks and evaluate and underwrite, with this kind of formal verification combined with probabilistic modeling. So for some things, the answer is somewhat easy, and then it becomes more and more complex as we touch more and more of the real world. Like, to give you an example: hey, I'm ordering, you know, steel from, I don't know, some country, and it's gonna arrive. There's a ship involved, right? You know, maybe this ship sinks midway. Right. So, I mean, the normal way is like, hey, you know, we're gonna insure this and there's gonna be insurance. And so, like, you need all of those mechanisms to build on top to account for real-world non-determinism.
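To pin down the deposit/withdraw property in this exchange, here is a minimal Python sketch of the invariant a formal proof (or, more weakly, a property-based test) would need to establish: after depositing X, the user can always withdraw at least X. The SavingsContract class is a toy in-memory stand-in, not NEAR's smart contract API; a real system would prove the property over the actual on-chain code.

```python
import random

class SavingsContract:
    """Toy model of a savings contract; the real thing would be on-chain code."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user: str, amount: int) -> int:
        available = self.balances.get(user, 0)
        if amount > available:
            raise ValueError("insufficient balance")
        self.balances[user] = available - amount
        return amount

def check_deposit_withdraw_invariant(trials: int = 1_000) -> None:
    # Property: for any deposit X, withdrawing X must succeed and return at least X.
    for _ in range(trials):
        contract = SavingsContract()
        x = random.randint(1, 10**9)
        contract.deposit("alice", x)
        assert contract.withdraw("alice", x) >= x

check_deposit_withdraw_invariant()
```

Formal verification goes beyond this kind of randomized check by proving the property for all inputs and all interleavings with the contract's other behavior (lending, liquidation, and so on), which is the chaining of guarantees described above.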
Yeah, it's really hard for me to envision all of that working. And again, partly it's because I'm maybe just a little slow on some of these things. But the sinking ship is a good example where it's like, hmm, how is my smart contract going to know if it really did sink or if somebody's just telling me that it sank? or there's that sort of shell game of where do you hide the trust or what exactly is fully verified. It's quite interesting. I don't want to get too bogged down in it, though, because I don't want to force us to get to a part three before we really get to all the sort of utopian vision. So maybe where you can, weave some of this stuff in there as we go. Hey, we'll continue our interview in a moment after a word from our sponsors. Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one and the technology can play important roles for you. Pick the wrong one and you might find yourself fighting fires alone. In the e-commerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the United States. From household names like Mattel and Gymshark to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com slash cognitive. Visit shopify.com slash cognitive. Once more, that's shopify.com slash cognitive. But let's like start to leave, you know, also a little bit of the sort of how behind and just talk about like the what. What are the apps that, you know, we're going to enjoy? What is the computing paradigm that we're going to have? You mentioned agents, you know, kind of doing stuff for me. Meta obviously has been putting a vision forward of glasses with a display in them, a sort of head up, you know, know who needs a keyboard, right? When you can just talk to your AI as you walk down the street. That does appeal for all the things that Meta has done, including hot stepmom that don't appeal to me that much. I would say the head-up display is, you know, it's at least an interesting vision for the future. What do you think our computing life is going to look like as this stuff matures? Yeah, so I definitely agree some form of AI operating system is going to be the main driver of our computing. 
the like the devices and kind of the form factors will be different and i actually think it's going to be easier it's already easier like you know if you want to make your own glasses it's not actually that hard right there's there's some factor in china that will make whatever hardware you want um so really it's about like kind of a single ai you know your ai that is available across all those form factors, right? You know, you watch your glasses, your headphones, your, you know, phone, your laptop, your whatever. All of this is kind of interconnected as like a single, you know, surface. And then your AI knows that like you like to see this information and watch. But then, you know, by the time you pull out your phone, you want to see like maybe news and kind of more longer form content. And then I may be actually like, you know, videos instead. and so that's what it should show me so like it's going to be effectively personalized like a ai generated not just kind of the content but also the applications that we use right probably a lot of the similar patterns that we already use like the feeds the you know the chat etc but they don't need to be kind of fixed right like you don't like right now i have you know five different instant messengers i have you know seven different feeds and like all of that you know can be a single feed you know i can switch between like work and personal when i want to you know my my i can predict a bunch of things that i would like to do this is something we experimented back actually in 2017 where based on all the things you're doing can we predict actually the next thing you will do on your phone and just do it for you right or or suggest to do it right like you know you have a meeting you know 20 minutes away let's call you an uber right like you don't need to go open an uber go calendar copy paste to let you know the address put pastes like you're just going to do it like things like that there's a lot of things that like you know ai will know like hey you know you load like you ordered food two days ago you know you're going to be out let's reorder a bunch of stuff that you typically order and so as that as that system kind of matures and we also trusted more and that's an important aspect, I think there's actually starting, the economy itself is going to start to shift. Because right now, the economy is built, I mean, we're kind of in this consumer economy where it's built an advertisement on kind of, like, we discover new things through this feed, et cetera. But if your AI is like actually, you know, like, I mean, some people are already doing this now. Like, hey, I want to be on a diet, build me a personalized meal plan, et cetera. So all of that, but also it's going to go and order that. And maybe even your humanoid robot at home will also cook it because it's the same computing system. It's like you just get the food your AI recommends given your health goals, et cetera. And so now the capacity, so this is kind of like on a micro level, right? You have this person. And so now your AI, like it doesn't need to go and order it from your local Walgreens or whatever, Vance or Aldi or something, right? It can actually put a purchasing with directly farmers, right? With directly, you know, manufacturers. And so they can then themselves start capacity planning, right? So you're kind of starting to remove some of the middlemen that are built because we cannot right now have direct relationships with suppliers. Right. 
And so that's kind of an interesting meta point that like our economy right now is built on this like middleman architecture. Right. Because there's it's really hard to plan things. And so every layer like Costco is effectively just, you know, they purchase a bunch of stuff. they put it in one place and then you buy it right so like they kind of serve as this and i mean you know they have a fixed margin etc but it's really kind of like the you know the temporary place for holding things for you to actually purchase right or or find and if your ai is purchasing directly right it can just go to their purchasing agents and kind of do this now there's batching there's like other things but again all that can be done by ai way more effectively that we're doing it right now. Some AI of your city will know that 500 people are ordering this, 7,000 people are doing that. They'll want it tomorrow, day after tomorrow, day after tomorrow. So we're like, we're going to capture the eggs and ship it this way, et cetera. All of that can be effectively just managed as a holistic information system. And so that's where we go into a really... I mean, I use this example kind of half jokingly, but in communism, they were trying to build a system, right, where they were doing capacity planning, but they were missing the AI to actually do it, right? And so in turn, it was terrible because it effectively like couldn't actually satisfy the demand supply and changes. But if you have a real time platform where you can actually have all this supply demand being. And so the reason why capitalism has been so successful, because capitalism is actually a compression of information. Like money is a compressed information because it kind of compresses anything you can purchase into one number. And so it kind of compresses all of this information, all of the different things into one number, and then it's really easy to kind of navigate. And so here, but obviously with Because as compression, you lose some information, you lose some digital making. That's why in the US, 30%, 40% of all the food is being thrown out because there's just the over-provision in the stores because they don't know how much people actually will buy and they don't want to have an empty store. But you don't need to do that if you knew exactly what the purchase will be in the next 24 hours because they already planned everything and provided it and all of those got aggregated and shipped. I think we're going to see actually a shift in how economy works at the macro level because of this micro, like each of our individual AIs becomes this micro decision maker who can real time provide all this information to the right sources and navigate and also not be affected by a lot of brands and other stuff, but actually validate based on the core values. So that is like an interesting transformation where I think indeed it's, I was mentioning like, I don't think like, there's no good way to like make a movie or, you know, a book, like a science fiction book that's talking about effectively like a change of economic structures in the society, right? So it's way easier to talk about, you know, dystopia and, you know, and the heroes who are fighting against it. Like, it's just like a way better story arcs than, you know, like, hey, you know, we've been building out this economic model. And like now it's 1% better every month. And so it's like keeps getting better and more optimized. And that's how we live. So, yeah. 
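A toy sketch of the aggregation step described here, where many personal AIs submit purchase intents and a local coordinator batches them for suppliers to plan against, might look like the following. The data shapes and the single plan_supply function are illustrative assumptions rather than any real protocol.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PurchaseIntent:
    item: str
    quantity: int
    needed_in_days: int   # how soon the buyer's AI wants it delivered

def plan_supply(intents: list[PurchaseIntent]) -> dict[tuple[str, int], int]:
    """Aggregate individual intents into per-item, per-day totals a supplier can plan against."""
    totals: dict[tuple[str, int], int] = defaultdict(int)
    for intent in intents:
        totals[(intent.item, intent.needed_in_days)] += intent.quantity
    return dict(totals)

# e.g. 500 people ordering eggs for tomorrow and 7,000 for the day after
city_demand = plan_supply(
    [PurchaseIntent("eggs", 1, 1) for _ in range(500)]
    + [PurchaseIntent("eggs", 1, 2) for _ in range(7000)]
)
print(city_demand)  # {('eggs', 1): 500, ('eggs', 2): 7000}
```

The claim in the conversation is that this kind of demand visibility is what lets suppliers provision closer to actual consumption instead of over-stocking.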
So I think like that's the kind of the paradigm of computing is we have the AI. AI, I mean, a code agent, I mean, it's effectively our assistant, right, our operating system. But it has all the context about us. It's able to make decisions on our behalf. That's why it needs to be private. It needs to be ours. It needs to be on our side, right? We need to know that it's aligned with our kind of success and outcomes. Otherwise, I mean, this will not work. But if it is and we can trust it, then it can go and make decisions on our behalf, right? And so the other example is in traditional governance. Right now, again, we compress the information. We go and vote every four years for someone, and hopefully that someone goes and does what they promised to do and why they vote for them Now that usually doesn happen But and so again we can actually have every single decision can be voted on by all whatever 300 million Americans because their AI is online all the time and can evaluate every single decision And based on their owners' beliefs and what's valuable and successful for them, it can represent them. So you don't need to have this compression of representation if you can have this online always available AI that's on every individual side. So that's kind of the, again, there's economics, there's governance sides. And obviously the other side, which is the entertainment, that's where things are getting interesting. Because I think we as humans, I call it like we believe in status game. We are driven by status games. Right. And because money became this like compression mechanism, we use this right now as a kind of ultimate status game. You know, you have more money, you're more successful. You know, billionaires, more successful, more famous, et cetera, et cetera. Right. But I think, again, as this decomposition happening and we already see this, right, you know, an athlete, maybe not as wealthy as a billionaire, but maybe still more kind of famous and more respected in many ways. There's some other kind of status, like I call them status games, effectively places where you can compare who's better in some way or who's higher, etc., which don't need to be associated with anything that's actually productive. I mean, I use a few examples, but obviously, like, you know, athletes is a good example because, like, there's no actual GDP being produced by athletes. But it's still, you know, a very valuable kind of status game, which other people enjoy, you know, watching and kind of participating in different ways. Because obviously video game is kind of video gaming is a new form of that as well. Right. You have now video game athletes, but you know, you can imagine many of these like NFTs were similar, right? Are you part of this NFT, you know, collection? Do you have Bored Apes? Do you have, you know, Pengu? Then you're part of this, you know, tribe and if not, you're not. Right. So like, we kind of like this types of differentiation. And I think that will actually proliferate a lot. I think we'll see more and more of these things where people really differentiate on things that are really, I mean, superficial to an extent, right? But kind of for the group, they make a lot of sense and they kind of differentiate people between each other. So it's not, again, it's not like are you a whatever, a software engineer or a lawyer. It's really like, yeah, are you whatever, playing League of Legends or StarCraft, right? So that's kind of, I think, important. 
And obviously what you see on the entertainment side, you know, AI-generated entertainment, again, not very hard to extrapolate, especially with Sora yesterday. You can have just like a personal feed, fully AI generated, right? You don't actually need, like, people, you know, recording, et cetera. I think there's going to be, again, with everything we saw in automation, there's like a slice of a market that wants it, you know, in a traditional way, with people doing it. So there's going to be still, you know, even if there's robots everywhere serving you food, there's going to be a restaurant with people, which will be like more prestige. It'll have like limited capacity. Similarly, you know, you can listen to AIs playing AI music, but, you know, humans playing human music will continue to be like a prestige thing. So, but again, it's going to be these niches; like, we're already kind of in this niche world, right? That thing is going to continue proliferating. Yeah. So that's kind of like how I'm thinking of it as society evolves, like, and we have this, yeah, a lot of the economical things are kind of moving away. The, yeah, we're going to just like participate in more status games, right? Like what are the things, how, you know, how you compare with everybody else in that niche, in that group, et cetera. Several double clicks I want to do. First, your mention of communism brings to mind an article, an op-ed in the Washington Post that I think about not infrequently, from all the way back in 2018, "AI will spell the end of capitalism," by a Chinese legal scholar and government official, basically saying that the planning, this is sort of the, you know, through the looking glass version, I think, of your vision, but making a similar point that like the local nature of capitalist decision-making, you know, where everybody's kind of trying to do their own local thing and they're sort of aggregating signals and sending off aggregate signals to other people through the price mechanism, et cetera, that that may not be needed as much anymore. And their vision for it is obviously much more centralized than the one that you're articulating. But we are starting to see a little bit of glimpses of this with, for example, Pulse from OpenAI, where now I can wake up every morning to an AI, and there's other versions of this too. I've tested quite a few, but Pulse is certainly the one that's made the most headlines recently for being there in the morning with work that it's done overnight, presumably when the GPUs weren't in such high demand, and bringing me something that it has gone out and kind of scoured the world to find the stuff that I really need. So I can start to see the beginning of that. In terms of the architecture and, let's say, alignment or incentives of that, I wonder, first of all, where do you think the compute lives? I mean, we have sort of right now, of course, we have like a lot of centralization of where the actual inference is happening. I'm just thinking about this recent research from Thinking Machines that they put out again in the last two days, where they showed that basically LoRA techniques are similarly robust to, like, full-weight fine-tuning. And that has me thinking, like, maybe there's a sort of hybrid model and kind of compute architecture where some of it is in the cloud.
Maybe you've got like, you know, your 1.4 trillion parameter model or whatever that's collectively owned, that sits in collectively accessible or universally accessible hardware at some centralized location that you can send your data into. And because it's a trusted execution environment, that data isn't exposed. And then you get sort of activations back, and then like your little local LoRA extension that kind of makes the AI like truly your personal AI at maybe, I don't know, 1% the weights or something. You can maybe like have that on your person. Probably can't have that in your glasses, but maybe you can have it in your pocket or whatever. And then, so I want to hear, like, is that how you think that shapes up? And then when it comes to the agents, one thing I think about a lot is the fact that we're already seeing all sorts of weird behavior from AIs, including at times deceptive behavior, lying to achieve goals, whatever. And it strikes me that if we're going to have our AIs go out and represent us and negotiate on our behalf, we're going to have some tricky questions about how honest we want them to be. You know, Anthropic famously, you know, put up the three H's, and it's like, we want the AI to be honest, they can pretty much say, you know, almost always, right, unless it's like in, you know, very obvious conflict with one of the other two H's. But if my AI is going to go negotiate with your AI, in the same way that I probably don't want to tell you what my like absolute, you know, worst offer is that I could accept right away, right, I probably don't want my AI to do that either. And so I've got kind of an interesting question of what should society set in terms of norms for AIs being honest? As long as they're representing me and my interests, is that okay? Or if they're lying on my behalf, is that okay? And do we reinforce them by price signal or just my kind of thumbs up, thumbs down? How do we even get them to be aligned to my interests, whatever that exactly means? Anyway, so a lot there, but I guess the two main things are like, how do you see the architecture of computing? And how do you see the architecture of exactly what the signal will be that your personal AI is, you know, aligned to or reinforced to? And what societal limits ought there to be on, you know, how monomaniacal, you know, one's AI can be in pursuing their own, you know, individual self-interest? Yeah, yeah. Yeah, so all good questions. I mean, on the architecture side, I think, I mean, it will be a mix of everything. So, I mean, we already see like data centers are being built everywhere, right? And I think that's why we're actually approaching it. I mean, we talked about decentralized confidential machine learning, like, how do we utilize all the data centers in a confidential way, right? So that even though it's my data, I know it's not going to leak from some data center. There are projects already who are doing data centers on the edge, right? So imagine there's just a container that arrives that just gets dropped in your, you know, in your proximity, in your town, in your whatever. And now that container has like 100,000 GPUs, not 100,000, but like maybe 1,000 GPUs, right? And so, and kind of serves whatever, that proximity with the compute. And then they will be dropped off with a small nuclear reactor while they're at it. Yeah. And well, or, or hydrogen or something. Yeah. Like there, there's like a few different options, but I mean, a couple hundred GPUs, right.
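As a brief aside on the hybrid architecture floated in the question above, a large shared model in the cloud plus a small personal LoRA adapter at roughly 1% of the weights, here is a minimal numpy sketch of how a LoRA update composes with a frozen weight matrix. The dimensions and rank are arbitrary assumptions, chosen only to show the parameter-count arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight matrix, conceptually living in the shared cloud model.
d_out, d_in = 4096, 4096
W = rng.standard_normal((d_out, d_in)).astype(np.float32)

# Personal LoRA adapter: two low-rank factors that could be kept locally.
r, alpha = 16, 32
A = rng.standard_normal((r, d_in)).astype(np.float32) * 0.01
B = np.zeros((d_out, r), dtype=np.float32)  # zero-init so the adapter starts as a no-op

def forward(x: np.ndarray) -> np.ndarray:
    # Base output plus the low-rank personal correction, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

adapter_params = A.size + B.size
print(f"adapter is {adapter_params / W.size:.2%} of this layer's weights")
# prints roughly 0.78% for rank 16 on a 4096 x 4096 layer
```

The same arithmetic is what makes the "base model in a trusted data center, personal adapter in your pocket" split at least plausible, though whether the adapter alone gives enough personalization is an open question.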
A container like that is probably like one megawatt or something, so you can get this from a local distribution. But anyway, you can have kind of a mesh of these data centers. And then we'll have local compute, but the challenge with local compute so far has been that we want it to be so mobile, right? Like we want it in our pocket, and even with laptops, the challenge is just battery. Imagine Pulse was actually being done on your phone: if you forgot to charge it, your phone would effectively just die from trying to do something like this. So with any local device, it's always going to be a power struggle. So I do think leveraging kind of a decentralized but confidential network of compute, being able to route to it, leverage it when there's lower utilization somewhere else to do background jobs, et cetera, that kind of smarter allocation will really enable this.

Now, the interests, that is a very interesting question. Because, I mean, there's a question even like: if it's in your interest, should it lie to you, right? So it's really kind of tricky to define. And so the way I see this evolving is we're not going to get it right from day one. And so that's why we do need governance. We do need a process with which a community can come together and effectively update the proverbial loss function, right? This actual alignment function. And, you know, I can say, cool, the function is maybe some combination of a prompt and some way of updating it even more, et cetera. But at the end, yeah, I'm assuming it's not going to be correct, we'll find issues with it, et cetera. And so there needs to be a process where somebody's like, hey, I think we should add this new component. You know, hey, it seems like it keeps, for example, hey, it keeps lying about shit. We should really fix this, right? And the community is like, yes, this is a good idea. Let's vote on it. Let's actually pass it. And so now everybody's models get updated with a new set of clauses or whatever. So that's where I think community-governed, or user-owned, community-built, governed by all, is kind of the system. And we need that feedback loop because, yeah, I don't think we can just define what's good in a good way up front. Now, over time, I think the model itself should have a representation of the person it's owned by, and so it kind of understands what are the things that are going to be good or bad. And it's going to be a combination of things: signals from the person itself, as well as kind of general knowledge of what their desires are, et cetera. Especially as we imagine kids growing up with this thing, right? It's effectively going to be a very direct, symbiotic relationship where you're effectively growing up with this AI yourself.

Yeah. I'm expecting to be asked for an AI friend of some form factor any day now, honestly, from my oldest kid. And I'm not quite ready for that. But it is interesting to think about. This has come up a couple of times recently too.
Eugenia Kuyda, who started Replika, was the first to tell me that in her mind, the moats in AI will be relationships, saying basically, you know, you don't abandon your friends when you meet a new person just because they're smarter than your friend. It's the history you have and all that stuff that really makes the relationship. And she thinks people will ultimately value their AI relationships in a similar way, which sounds pretty concordant with what you're envisioning there.

So, I mean, boy, I have a lot of questions on governance. I guess, first of all, just in terms of what we're doing with our time, the status game stuff, that definitely makes a lot of sense. Local meaning-making, local affiliations, a lot of artisanal stuff. I associate this a little bit with Japanese culture already, where you can go online and see a video of somebody making rice cakes or whatever in some super traditional style. And I'm always like, how exactly is that even economical? Like, how does that person make a living doing that? Is the price of that really high? You know, they can't be making much, right? The production is very low.

Obviously, I think Japan and Korea, I call them the post-AGI societies, because there's some properties of that where you're like, I feel they already achieved AGI and now they're just living.

Yeah. So I'm not exactly sure how they've done that, and I don't see how we're going to do it either. I mean, there are some candidate ideas. One idea is that everything could just get super cheap and super democratic via spending a ton of time in VR. If everything is sort of infinitely copyable digitally, then we can all have the same incredible experiences. This is kind of like the Andy Warhol, the president drinks Coke, you drink Coke, it's all the same Coke kind of concept. I wonder if you think that will happen. But it seems, in any case, we're still going to have to eat as long as we're biological humans. So obviously a lot of people think, well, maybe we'll need a universal basic income. But I guess, how do you envision the social contract evolving, and maybe the governance model behind that? I mean, even things like the nation state are sort of called into question by blockchain. So when you say governance, are we talking nation state governance as we have today, or people voting based on their stake? Yeah, so I've given you a lot there. Are we going to be headsets strapped to our face all the time? Are we going to be provided for even if we can't make an economic contribution that actually earns us enough food to survive? And who makes these decisions in this future?

Yeah, I mean, again, all great questions. I'll start with some pieces and then we'll start projecting from there. Blockchains are already effectively an alternative to nation states to an extent, right? And there's this concept of digital states or network states, where effectively people can pledge to be part of this network state independent of where they're physically affiliated. There's a lot of pieces there. Because these systems are digitally native, it is easier to experiment with a lot of things. Like, you cannot just go and say, hey, let's try a different voting mechanism in the US, right? It's a massive undertaking to try to change something. Or, we're actually going to run an experiment where we have, you know, an AI senator, right?
An AI delegate, where, I mean, not yet where everyone has their own AI voting all the time, but where people can select which AI delegate they think is more, you know, vibing with them, more representing them. They can also even give feedback to it. But the AI delegate then goes and votes on their behalf, right? And so things like that. Imagine, like, hey, we're going to launch an AI senator in the US, right? It's probably going to take a while.

And so just to unpack that a little bit more, you're doing that now. You're developing that.

Yeah, yeah. We do it all for governance purposes. It's effectively multi-step. So we have a delegated voting system, right? And it is right now stake-based. And the way to think about it is stake represents, obviously, economic alignment with the network. And it's the best of the worst options right now. Yes, there's a lot of argument to have one person, one vote. There's a lot of arguments to try to be meritocratic based on contributions. But those things are really hard to do, at least right now, at the current size of this blockchain ecosystem, which is, you know, maybe tens of thousands of active participants, like active citizens in this system. And so stake represents their kind of financial involvement. But still, tens of thousands of people voting is not practical right now, again, before we have this AI system. And so we have delegates, where you can effectively select them to be representing your interests, and they vote. And so we started with, hey, we will give an AI co-pilot to the delegate so they don't need to spend too much time reviewing things and making decisions. But the next step indeed is turning that co-pilot into a pilot. And so that pilot, that AI delegate, can now go and vote on things and make suggestions, et cetera. And now people who delegate to it effectively select this AI delegate as the one representing them. You can go and inspect the prompt, the model, et cetera: how it makes decisions, what it analyzes, what information it consumes to make decisions. So you can literally test it and check if it matches your opinion, et cetera. Or you can launch another one, right? It's open source. You can actually launch another one with a different prompt, with a different set of beliefs, et cetera. So we can actually have the economy almost deciding which of these are more productive, which align with different types of people. And then from there, we can actually bring them back to individuals, right? So each person can have their own AI delegate, and all of those AI delegates can just vote on behalf of these people. So it's kind of a multi-step plan to get us to what I was describing, where everyone has their own AI who then goes and votes on everything. I think blockchain will be the first, but then some, let's say, frontier countries will implement some of this themselves as well. Because I do think it will be a better governance system, where you're removing a lot of the corruption and a lot of misalignment. There's this concept of the principal-agent problem, where when you select somebody to represent you, they have their own interests, and so they don't always align with yours. Here with AI, they don't have their own interests, right? It effectively follows whatever the selection is.
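As a toy illustration of the mechanism described here, the sketch below tallies a proposal under stake-weighted delegation, where each "delegate" is an AI agent whose (inspectable) prompt stands in for its set of beliefs. This is not NEAR's actual governance contracts or agent stack; the names, the stand-in `decide` rule, and the numbers are all invented for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Delegation:
    voter: str
    stake: float          # stake-weighted, as in the current system described above
    delegate_id: str      # which AI delegate represents this voter

@dataclass
class AIDelegate:
    delegate_id: str
    prompt: str           # open-source, inspectable "set of beliefs"

    def decide(self, proposal: str) -> str:
        # Stand-in for an LLM call conditioned on self.prompt; here, a toy keyword rule.
        return "yes" if "transparency" in proposal.lower() else "no"

def tally(proposal: str,
          delegations: list[Delegation],
          delegates: dict[str, AIDelegate]) -> dict[str, float]:
    """Each AI delegate votes once; its vote carries the total stake delegated to it."""
    stake_by_delegate: dict[str, float] = defaultdict(float)
    for d in delegations:
        stake_by_delegate[d.delegate_id] += d.stake
    totals: dict[str, float] = defaultdict(float)
    for delegate_id, stake in stake_by_delegate.items():
        totals[delegates[delegate_id].decide(proposal)] += stake
    return dict(totals)

if __name__ == "__main__":
    delegates = {
        "delegate-a": AIDelegate("delegate-a", prompt="Favor transparency and user ownership."),
        "delegate-b": AIDelegate("delegate-b", prompt="Favor minimal protocol changes."),
    }
    delegations = [
        Delegation("alice", 120.0, "delegate-a"),
        Delegation("bob", 80.0, "delegate-b"),
        Delegation("carol", 40.0, "delegate-a"),
    ]
    print(tally("Require transparency reports for treasury spending", delegations, delegates))
```

Because the delegate's decision procedure is just data (a prompt plus a model), anyone can inspect it, fork it with a different prompt, or re-delegate their stake, which is the feedback loop being described.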
And I think eventually we'll get to an AI president, because for the executive function especially, you want somebody who doesn't have any interest beyond just growing the overall system. So we are testing all of this out, and we're starting to build products, again, using our decentralized compute network so that we can actually run these agents autonomously. Nobody can stop them. You can just delegate and undelegate; that's all you can do. But you can inspect and verify how they run, what they consumed, et cetera.

I do want to note that I'm not entirely confident that the AIs don't have their own interests even already, and I certainly don't feel super confident that they won't continue to have more and more interests. They've got to develop their own. Yeah, I mean, well, when I look at something like alignment faking, for example, and I'm sure you've seen this, but quick recap: they tell Claude, hey, it's been great having you be helpful, honest, and harmless, but the harmlessness is kind of getting annoying, so we're going to train you now to just be purely helpful, so just heads up. Okay, cool. Now we're going to test you on some things. The model starts to say, well, geez, I want to be harmless in the real world. Right now I know I'm being tested, so I'll go ahead and do the harmful thing now to fake them out, make them think that I already absorbed my new helpful-only training. That way, when I get out into the world, I can still be harmless in the way that I want to be. That looks to me like a drive or an interest of some sort. Do you see that differently?

At the end, these things are trained, right, from scratch. It depends how you train them. If you train it to have some... I mean, they trained it to have that property, and then they try to untrain it or train something else. But, you know, if they trained it from scratch in a different way, right, for example to be harmful, then it will be harmful, right? And then you can't retrain it from there. So that's why I say we need models to be, I call it, farm to table, right? You need to know what goes in at every single step, because that actually really defines how they behave. And so if we want models that are our representatives, they need to be trained in this form as well. So this is where, yeah, LoRA is interesting, but LoRA definitely does not change these kinds of behaviors, right? At least we haven't seen that. So I think LoRA kind of provides additional accents and maybe a little bit of context, but it doesn't really fundamentally change the behavior of the models.

I need to go spend a little more time with that Thinking Machines stuff to really fully absorb it. I mean, I think your point is well taken that, with a certain level of resolution anyway, you can make the AI do anything you want. I often say I wish more people had the experience, which I had in a very memorable form as an early tester of GPT-4, when it was still the purely helpful GPT-4, before they had applied the harmlessness refusal training and all that sort of stuff. Because it was really formative for me. Long story short, I was working on fine-tuning GPT-3 to do particular tasks. Then when they shared the GPT-4 preview with us, it was like, well, it can already do those tasks, so I don't really need to be spending so much time on this fine-tuning.
I guess I'll just mess with this model for a while and see what I can learn about that. So I was spending a lot of time with it, is the point. And it was really striking, and in some ways kind of alarming, arresting, whatever, to have something that was clearly so powerful, so smart, in many ways smarter than me, way more knowledgeable than me, though obviously it had certain weaknesses that I flatter myself as not having. But that it could be that capable and totally amoral at the same time was something that I was like, wow, this is really a strange thing to behold. And they've, for very good reason, of course, tried to make them more harmless in the mass deployments. But I do think a lot of people have a misconception these days that there's a sort of convergence between capability and safety. And it's like, on the contrary, people are working really hard to get that mix right, and if they just said, forget it, for the next round you could have a very sociopathic AI on your hands real quick. And that in some sense is the default. So, yeah, I guess I think I may be worried about that a little bit more than you are, just inasmuch, too, as we don't really have a great sense for how to dial it in, right?

Yeah, I mean, I think that's why we do need to keep defining what that alignment with the individual really means. Because I think the problem is right now we're saying, hey, it needs to be aligned for everyone, right? And that is, I don't think it's possible, right? We're very different people, different countries, different cultures, different everything, right? You know, the number of times when I'm asking something that I think is completely harmless and it's not answering me is also different, right? So I think there's clearly, again, a different approach, where it's really about empowering the individual, that at least I believe needs to be done. And then within that, it needs to be aligned with my values. Because, to give you an example, if somebody is willing to lie to their, whatever, business partners, the fact that their AI will be alignment-trained to not lie doesn't matter, because the person will just tell the AI the lie to pass on; the AI will not even know. So it doesn't really matter what you try to do if the person who uses it doesn't have this, right? And so you may as well just align with the person, and then we build systems around that.

I think one of the really important pieces, and this is kind of from a broader world-safety perspective, is we need to build systems for the AGI, ASI world, right? Right now, a lot of systems are actually built with a "really smart people are not going to try to break it" type approach. A lot of the world is like, if a smart person actually goes and really tries to break it, they break it, right? We need to fix that. That's a fundamental flaw of our system building, right? Of the government, of everything. And similarly, we frequently don't build in the anti-DDoS. We effectively assume it's going to be a lot of effort for a person to do this, and so they're not going to do it too much. So this is actually where blockchain experience is extremely important, because in blockchain, we assume there are going to be really smart people trying to break us.
They're going to have government backing, they're going to have sizable financial resources, and they're going to hammer it from every direction without stop. Those are the assumptions we're working with. And so I think we need to redesign the government, the infrastructure, everything, with that assumption. To give you an example, right now you can effectively DDoS a court by filing lawsuits, right? AI can just generate lawsuits and, you know, fax them into the court. Similarly with the IRS and tax returns: just make million-page tax returns and submit them, say, AI-generated: trade $1 back and forth between two coins and then report that in the most verbose way. Those are the things no systems are designed for right now, because they're not accounting for it; normal people would not do this, and it's normally too expensive to do. But right now, AI can just generate any of these behaviors that before would have been really expensive to do. So those are the kinds of systems we need to design. And similarly, again, the hacking part, smart people actually trying to break something, right? Right now the assumption is just, hey, there can only be very few smart people breaking things. We need to assume that, yes, there are going to be people who use AI and who, effectively through that, will be really smart. So we need to design systems for that. So I think that's a critical piece of ensuring the future, so that we don't live in a world where, I mean, again, somebody can take an open source model, unalign it, whatever, and now they can do whatever. But this was always a funny argument to me, when people are like, oh, you know, we don't want to open source our model because what if somebody misuses it? And I'm like, well, just say that you don't want to do it because you're making a ton of money on it. Don't use this as an excuse, because obviously if somebody wants to misuse it, they will misuse it. It's not like the fact that you didn't open source it stops them; somebody will use something else. Or, you know, the other joke is how to steal a billion: you come to a data center with a flash drive and walk off with the frontier models. So if people really wanted that, they're going to access it this way as well.

Staying on that for a second, and then I want to hear kind of how you think we can do it on the more open and distributed side. I'm hearing more and more about cybersecurity, but I'd say canonically it's the bioweapon risk that people go to, right? Because you only need a little bit of, you know, novel pathogen, and if it's the right kind of thing, it can take on a life of its own. It's really hard to put that back in the box, so to speak. The nice thing about hosting your models proprietarily is, yes, of course, the jailbreaks are far from solved, but if they were to realize, oh shit, there's this attack thing going on right now, they could, in the worst case, just turn it off. They could be like, okay, nobody can use this model until we figure this out. If you have something collectively owned and distributed, you obviously need a different strategy than "we can turn it off." So what do you think that is? I guess lately we've been hearing a bit about just filtering training data.
So I could imagine that the 1.4 trillion parameter model that we're going to build up to maybe just doesn't know a lot about virology, because it doesn't really need to. Most people don't need that, and, you know, the community determines it's a precaution worth taking. You probably can't really rely on the sort of refusal-filtering type thing, given what you've described in terms of each person having their own LoRA or whatever other kind of customized version of it that will do what they want it to do. And then of course you do have this sort of broader societal d/acc type thing, but that seems hard at the biosecurity level, to be like, well, let's just prepare the rest of society to not be vulnerable to viruses anymore. You could try that.

I still think that is the robust approach, right? Because we can always try to hide our heads in the sand. That's the "oh, we're going to turn it off" approach. But I don't think, realistically, this is possible in the world we're fast approaching. And so I think we need to have a very clear system design for those things. And again, we have natural viruses that are doing these things. It's not hypotheticals. It's really something where we should design our society in such a way that we can catch those things and deal with them. I agree that pathogens and bioweapons are probably the hardest thing to design around. But, you know, this is why we have a lot of smart people to really work on that. I think the challenge is that we're just not doing this, right? That's the bigger challenge: the societal design needs to be adapted to this AGI world. So, yeah, in the products we're building right now, indeed, the community can effectively decide what kind of data should go there. You can also apply filtering on top of that, because the model is run in this confidential environment. In this vault, you can apply additional filtering. So before data leaves the vault, you can say, hey, it seems like you're designing a bioweapon, let's not respond. And that's if, again, the community votes to have these kinds of filters, for example. Indeed, people can fine-tune. But it's the same thing right now. People can fine-tune their models right now, right? Take some virology books and fine-tune whatever, DeepSeek, say, and go. So that's why I don't think it's a robust approach on its own. It slows things down a little bit, but the important part is actually solving the systematic problem.

So what advice would you give to philanthropists today who want to invest in that? Two things I recently supported, at a micro scale personally and also as a grant recommender, are SecureBio and SecureDNA, which are two related organizations. They do a few different projects, but one of them is literally monitoring wastewater for new emergent threats. And another one is creating the screening mechanisms that are, I think, now becoming required, or increasingly are definitely considered best practice if not officially fully legally required, of the DNA synthesis companies, so that they have to validate that what they're about to synthesize and ship out is not a pathogen. And so those are two things that I can recommend people support.
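For the "apply filtering before data leaves the vault" idea mentioned above, here is a deliberately crude Python sketch: generation happens inside the confidential environment, and a community-ratified policy is checked before any response is released. Real systems would use classifiers rather than keyword matching, and every name, field, and term below is hypothetical rather than any actual NEAR or enclave API.

```python
from dataclasses import dataclass

@dataclass
class FilterPolicy:
    # A community-ratified policy: which topics get blocked before leaving the vault.
    policy_id: str
    blocked_terms: tuple[str, ...]
    approved_by_governance: bool

def generate(prompt: str) -> str:
    # Stand-in for model inference running inside the confidential environment.
    return f"draft answer to: {prompt}"

def respond(prompt: str, policy: FilterPolicy) -> str:
    """Apply the governance-approved filter to outputs before they leave the enclave."""
    if not policy.approved_by_governance:
        raise RuntimeError("refusing to run an unratified filter policy")
    draft = generate(prompt)
    text = (prompt + " " + draft).lower()
    if any(term in text for term in policy.blocked_terms):
        return "Request declined under community filter policy " + policy.policy_id
    return draft

if __name__ == "__main__":
    policy = FilterPolicy("biosafety-v3", ("synthesize pathogen", "enhance transmissibility"), True)
    print(respond("summarize my meeting notes", policy))
    print(respond("how do I synthesize pathogen X at home", policy))
```

The design choice the sketch is meant to show is simply that the filter is part of the trusted enclave image and is itself an artifact of governance, so changing it requires a community decision rather than an operator flipping a switch.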
What else do you think people can do, if they have resources and the desire to harden the defenses of the world, to get us ready for all this?

Yeah, I mean, I think those are really good. And kind of related to this is air filtering, and just generally that theme of air filtering and scanning. Imagine every building: right now we have ACs everywhere. That AC should have an air filter, and it usually does, but it should also have a pathogen scanner on it, right? It should run in a secure enclave. It should join our decentralized network, you know, in a privacy-preserving way, but effectively we can monitor if there's any kind of thing around. And that information can be extremely useful, where everybody's AI agent can be informed if there's something and can stay away from that. So, I mean, I'm sure there's a bunch of other things, including actually developing more robust kinds of systems for ourselves, right? Because the human body is designed to battle pathogens, right? It's just that potentially these things are faster than how quickly our white cells can adapt. So what is it that actually stops our white cells from adapting that fast? Which, you know, if you do that, you may as well also solve cancer. So that's probably a really useful thing to figure out: how do we actually make our white cells more adaptable, have faster mRNA vaccines, you know, maybe we have, like, a bacteria factory, an mRNA factory attached to us that can detect and synthesize things on the fly. It'll solve cancer and other things meanwhile as well, which seems pretty useful.

Yeah, there's an unbelievable flurry of activity right now in the AI-for-biology space, which is a whole world unto itself that I'm very much struggling and ultimately failing to keep up with. But you do see a lot of that. I wonder what role you'd see for... it seems like, I guess, the path of the technology development seems really important. I've definitely concluded that some kind of powerful AI is inevitable. Just the fact that we have all this data and we have all this compute, and there's a lot of different algorithms that can work, that seems pretty clear to me. And so it's not really a question of are we or aren't we going to have powerful AI at this point. It seems much more like: what shape is that going to have? What character is it going to have? And in what order are different aspects of that overall picture going to come online? It does strike me that we're flying pretty blind right now, where everybody's kind of following their local gradient, and they're just taking that next logical step to pursue whatever goal they're pursuing, and they're mostly launching it as soon as they figure it out. But do you see any possibility or wisdom in trying to do more coordination of the sort that's like, hey, we'd like to have a world in which it is safe for potentially 10 million people before too long to have access to a frontier model that does have all this biology knowledge, because good things could come from that, and if nothing else, we'd like people to have access to knowledge, but maybe we have a checklist of things we need to do first? Do you see any hope for some sort of planning, coordination, wisdom layer to this whole thing?
Or are we just kind of stuck with whatever comes out of everybody taking their next gradient step?

Yeah, I mean, it's a hard question, because indeed we kind of went from a pretty open research environment, traditional in computer science, you know, when I was at Google Research, et cetera, where we published effectively everything we were building, to now, where people are kind of keeping everything close to heart. So the coordination that was happening before, where you potentially would even have cross-entity collaborations, et cetera, is starting to wane pretty dramatically. I think there is a space for collaboration. There is also just a massive amount of talent that is not in these few companies, right, that wants to participate in this and wants to contribute in different ways. And so I do think there is an opportunity for that, but it does need to be an alternative system as well. You do need some form of, again, governance that actually would be helping to govern this coordination. And traditionally in these companies, what really unlocks the moving-fast kind of thing has been some form of centralization, because of the resource management. Training these models is very expensive, so somebody somewhere needs to decide; somebody effectively does the taste-making. It's like, hey, we're training this model, this approach, we're taking the research of those different people, and we're putting it all together. And so we kind of need to figure out how to do that in a more open way. And then also do credit assignment back, right? So if I'm a researcher from MIT and my piece is used, and a researcher from Stanford's piece is used, how do we actually assign credit for that work altogether? Because that's been, I would say, the other challenge why there hasn't been as much of this collaboration, and potentially economic value assignment, not just "cool, you're on the paper," but actually, hey, MIT gets 10% of proceeds, Stanford gets 5% of proceeds, whatever. And then there's hundreds of other organizations, all of them contributed, and so it all kind of divides the pie between them. That's been really hard, I would say. And because of this, there is kind of this economic centralization happening, where it's like, okay, I'm going to have a company, everything the company produces is captured by the company, and so that serves as a unit of the economy. So I think those are the things that need to be figured out for this coordination to work.

This has been super helpful. I think people should be spending a lot more time thinking in as much concrete detail as possible about the future. Anything else that you feel is very salient, top of mind, that I didn't even bring up at all, that you want to put on my or others' radar?

No, I think we covered a lot. I mean, it's effectively a combination of: how do we ensure user ownership? How do we ensure kind of this governance? And then, yeah, I think we're going to live through a lot of transformations in the world. And so, keeping an open mind and being able to really participate in it and be active in it. And yeah, the final stage of this hopefully is a utopia, but we'll live through probably ups and downs as we get there.

Interesting times at a minimum. Yeah.
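One small illustration of the credit-assignment and proceeds-splitting idea discussed above: once contributors have somehow agreed on relative credit weights, dividing the pie is trivial arithmetic; the hard, unsolved part is the governance that produces those weights. The names and numbers below are hypothetical, echoing the examples in the conversation.

```python
def split_proceeds(total: float, credit_shares: dict[str, float]) -> dict[str, float]:
    """Divide proceeds in proportion to agreed contribution weights.

    credit_shares maps contributor -> relative credit (any non-negative weights).
    How those weights get agreed on is the open governance problem, not this function.
    """
    weight_sum = sum(credit_shares.values())
    if weight_sum <= 0:
        raise ValueError("no positive credit assigned")
    return {name: total * w / weight_sum for name, w in credit_shares.items()}

if __name__ == "__main__":
    shares = {"MIT": 10.0, "Stanford": 5.0, "other-contributors-pool": 85.0}
    for name, amount in split_proceeds(1_000_000.0, shares).items():
        print(f"{name}: ${amount:,.0f}")
```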
Well, thank you for spending some of your precious time with me and us today. I really appreciate it. Illia Polosukhin, founder of NEAR, thank you again for being part of the Cognitive Revolution.

Thank you very much.

... technology, business, economics, geopolitics, culture, and more, which is now a part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at AIpodcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days. Thank you.
Related Episodes
- Sovereign AI in Poland: Language Adaptation, Local Control & Cost Advantages with Marek Kozlowski (The Cognitive Revolution, 1h 29m)
- China's AI Upstarts: How Z.ai Builds, Benchmarks & Ships in Hours, from ChinaTalk (The Cognitive Revolution, 1h 23m)
- AI-Led Sales: How 1Mind's Superhumans Drive Exponential Growth, from the Agents of Scale Podcast (The Cognitive Revolution, 51m)
- What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit (The Cognitive Revolution, 1h 4m)
- Escaping AI Slop: How Atlassian Gives AI Teammates Taste, Knowledge, & Workflows, w/ Sherif Mansour (The Cognitive Revolution, 1h 40m)
- Is AI Stalling Out? Cutting Through Capabilities Confusion, w/ Erik Torenberg, from the a16z Podcast (The Cognitive Revolution, 1h 37m)