

Michael Timothy Bennett: Defining Intelligence and AGI Approaches
Machine Learning Street Talk
What You'll Learn
- ✓ Michael Timothy Bennett defines intelligence as the efficiency of adaptation, which he sees as a more succinct definition than others.
- ✓ Francois Chollet defines intelligence in terms of the ability to acquire skills, in a formalism that draws on Legg and Hutter's work and its use of Kolmogorov complexity.
- ✓ The AIXI model aims to achieve an upper bound on intelligence by using Solomonoff induction to find the simplest models, but the guest disagrees with some of its theoretical foundations.
- ✓ Active inference and AIXI both focus on producing simple models that predict well, but the guest notes that the agent's subjective notion of complexity can differ from an external assessment.
- ✓ The discussion highlights the challenges in defining and measuring intelligence, and the different approaches taken by researchers in this field.
AI Summary
The episode discusses the definition of intelligence and different approaches to Artificial General Intelligence (AGI). The guest, Michael Timothy Bennett, shares his perspective on intelligence as the efficiency of adaptation, and contrasts it with other definitions like Francois Chollet's focus on skill acquisition. The conversation also covers the AIXI model, active inference, and the role of simplicity and complexity in defining and measuring intelligence.
Key Points
1. Michael Timothy Bennett defines intelligence as the efficiency of adaptation, which he sees as a more succinct definition than others.
2. Francois Chollet defines intelligence in terms of the ability to acquire skills, in a formalism that draws on Legg and Hutter's work and its use of Kolmogorov complexity.
3. The AIXI model aims to achieve an upper bound on intelligence by using Solomonoff induction to find the simplest models, but the guest disagrees with some of its theoretical foundations.
4. Active inference and AIXI both focus on producing simple models that predict well, but the guest notes that the agent's subjective notion of complexity can differ from an external assessment.
5. The discussion highlights the challenges in defining and measuring intelligence, and the different approaches taken by researchers in this field.
Topics Discussed
- Intelligence definition
- AGI approaches
- AIXI model
- Active inference
- Simplicity and complexity in intelligence
Frequently Asked Questions
What is "Michael Timothy Bennett: Defining Intelligence and AGI Approaches" about?
The episode discusses the definition of intelligence and different approaches to Artificial General Intelligence (AGI). The guest, Michael Timothy Bennett, shares his perspective on intelligence as the efficiency of adaptation, and contrasts it with other definitions like Francois Chollet's focus on skill acquisition. The conversation also covers the AIXI model, active inference, and the role of simplicity and complexity in defining and measuring intelligence.
What topics are discussed in this episode?
This episode covers the following topics: Intelligence definition, AGI approaches, AIXI model, Active inference, Simplicity and complexity in intelligence.
What is key insight #1 from this episode?
Michael Timothy Bennett defines intelligence as the efficiency of adaptation, which he sees as a more succinct definition than others.
What is key insight #2 from this episode?
Francois Chollet defines intelligence in terms of the ability to acquire skills, which is inspired by Legg-Hutter's work on Kolmogorov complexity.
What is key insight #3 from this episode?
The AIXI model aims to achieve an upper bound on intelligence by using Solomonoff induction to find the simplest models, but the guest disagrees with some of its theoretical foundations.
What is key insight #4 from this episode?
Active inference and AIXI both focus on producing simple models that predict well, but the guest notes that the subjective notion of complexity for the agent can differ from an external assessment.
Who should listen to this episode?
This episode is recommended for anyone interested in defining intelligence, AGI approaches, and the AIXI model, as well as those who want to stay updated on the latest developments in AI and technology.
Episode Description
Dr. Michael Timothy Bennett is a computer scientist who's deeply interested in understanding artificial intelligence, consciousness, and what it means to be alive. He's known for his provocative paper "What the F*** is Artificial Intelligence", which challenges conventional thinking about AI and intelligence.

*** SPONSOR MESSAGES ***
Prolific: Quality data. From real people. For faster breakthroughs.
https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=mb
***

Michael takes us on a journey through some of the biggest questions in AI and consciousness. He starts by exploring what intelligence actually is, settling on the idea that it's about "adaptation with limited resources" (a definition from researcher Pei Wang that he particularly likes). The discussion ranges from technical AI concepts to philosophical questions about consciousness, with Michael offering fresh perspectives that challenge Silicon Valley's "just scale it up" approach to AI. He argues that true intelligence isn't just about having more parameters or data; it's about being able to adapt efficiently, like biological systems do.

TOC:
1. Introduction & Paper Overview [00:01:34]
2. Definitions of Intelligence [00:02:54]
3. Formal Models (AIXI, Active Inference) [00:07:06]
4. Causality, Abstraction & Embodiment [00:10:45]
5. Computational Dualism & Mortal Computation [00:25:51]
6. Modern AI, AGI Progress & Benchmarks [00:31:30]
7. Hybrid AI Approaches [00:35:00]
8. Consciousness & The Hard Problem [00:39:35]
9. The Diverse Intelligences Summer Institute (DISI) [00:53:20]
10. Living Systems & Self-Organization [00:54:17]
11. Closing Thoughts [01:04:24]

Michael's socials:
https://michaeltimothybennett.com/
https://x.com/MiTiBennett

Transcript:
https://app.rescript.info/public/share/4jSKbcM77Sf6Zn-Ms4hda7C4krRrMcQt0qwYqiqPTPI

References:
- Bennett, M.T. "What the F*** is Artificial Intelligence?" https://arxiv.org/abs/2503.23923
- Bennett, M.T. "Are Biological Systems More Intelligent Than Artificial Intelligence?" https://arxiv.org/abs/2405.02325
- Bennett, M.T. PhD thesis, "How To Build Conscious Machines" https://osf.io/preprints/thesiscommons/wehmg_v1
- Legg, S. & Hutter, M. (2007). "Universal Intelligence: A Definition of Machine Intelligence"
- Wang, P. "Defining Artificial Intelligence" - on non-axiomatic reasoning systems (NARS)
- Chollet, F. (2019). "On the Measure of Intelligence" - introduces the ARC benchmark and developer-aware generalization
- Hutter, M. (2005). "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability"
- Chalmers, D. "The Hard Problem of Consciousness"
- Descartes, R. - Cartesian dualism and the pineal gland theory (historical context)
- Friston, K. - Free Energy Principle and Active Inference framework
- Levin, M. - work on collective intelligence, cancer as information isolation, and "mind blindness"
- Hinton, G. (2022). "The Forward-Forward Algorithm" - introduces the mortal computation concept
- Ororbia, A. & Friston, K. - formal treatment of mortal computation
- Sutton, R. "The Bitter Lesson" - on search and learning in AI
- Pearl, J. "The Book of Why" - causal inference and reasoning

Alternative AGI approaches:
- Wang, P. - NARS (Non-Axiomatic Reasoning System)
- Goertzel, B. - Hyperon system and modular AGI architectures

Benchmarks & evaluation:
- Hendrycks, D. - Humanity's Last Exam benchmark (mentioned re: saturation)

Filmed at: Diverse Intelligences Summer Institute (DISI) https://disi.org/
Full Transcript
How close are we to AGI? I mean, it's interesting how much it has stuck around. We have just replaced the pineal gland with a Turing machine. You're a big fan of what I would call biologically inspired intelligence. Biological systems with a tiny fraction of the energy and learning data can do so much more. Is that fair? Because whatever that software does has to pass through an interpreter, and the interpreter decides what it does. Consciousness is basically an illusion. One of my supervisors accused me of writing libertarian biology, because one of the results in my thesis is called the law of the stack. I like dramatic names. MLST is proud to be sponsored by Prolific. This is Enzo Blinda. Yeah, that's kind of the goal that we're working towards. We're trying to make human data, or human feedback, or actually any kind of feedback at that. We treat it as an infrastructure problem, right? We try to make it accessible. We make it cheaper. You see this pattern in almost any company, and even in academic research as well: every academic researcher cares about the quality of their data. Let's abstract it away. Let's put a nice API around it, the same way you do CI/CD or model training pipelines. We effectively democratize access to this data. Yeah, okay. I'm trying to think of my beliefs. Give me one second. What were they again? I'm entirely cognizant of it. Should I look at the camera? Should I look at you? Look at me. Okay. Yeah. My name is Michael Timothy Bennett. I am a computer scientist who has to use his middle name because there are too many Michael Bennetts in the world. I am interested in understanding AI, intelligence, life, the universe, and the nature of existence. And I spend all my time doing that. And I have a side hobby trying to build AI. I got in touch with you quite a few months ago, actually, when your paper "What the F*** is Artificial Intelligence" came out. Do I get the name right? Yeah, I mean, it's a couple of extra letters, but yeah. Okay, yeah. That was doing the rounds and I flicked through it at the time. I've now just spent the last couple of hours reading it word for word, and it's actually brilliant. I do recommend that folks at home read that, especially folks in the MLST audience, because we're a little bit eclectic in our tastes. We are ideas collectors and we like different approaches to AGI, and also hybrid approaches, and a little bit of philosophy and consciousness and whatnot. So certainly in that respect, you might be the perfect guest. Thank you. So this is all very good. This is all very good. In that paper, you were talking about what intelligence is and various approaches to AGI, and also approaches to categorizing them. Tell us about that.
Okay, so intelligence is a hotly debated topic. It's been for a long time. I sort of started off with the Legg-Hutter definition of the ability to satisfy goals in a wide range of environments. But as I delved more into biological intelligence and other things, I sort of arrived at a definition as the efficiency of adaptation. So how sample and energy efficient you are. And then later I found Pei Wang's definition, which preceded mine by several years: adaptation with limited resources, which I think is really succinct and clear. And of course, there's a myriad of other definitions, but my favorite is Pei Wang's. Me too. Pei, if you're watching this, I'm sorry we haven't published your interview yet. Basically, the reason is I loved his "Defining Artificial Intelligence" paper so much that we'd been meaning to make a special edition on it. So I kind of held back his material for quite a while and we still haven't done it. But even though it's my favorite definition as well, and of course it informs his NARS framework, his non-axiomatic reasoning framework, some might say it's almost a bit tautological, you know, just to say, well, yeah, intelligence is about adaptation with insufficient resources. And in his paper, he was at pains to say, well, we need to have simplicity, we need to have fruitfulness in the definition. It's actually a really difficult thing to get a handle on. Yeah, and I think a lot of people who look at what intelligence is end up writing very long and complicated definitions. And then you spend more time trying to figure out what the definition means than actually thinking about intelligence. That's another reason I like Pei's, I guess. Yes. Clear. So as you know, I'm a big fan of Francois Chollet. For much of MLST history, there was a rule that I was only allowed to mention Francois' name once per show. We've relaxed that a little bit recently. But maybe that's a good place to start. So how does Chollet define intelligence? And you also said that Chollet was kind of inspired a little bit by Legg and Hutter, certainly in terms of the use of, you know, Kolmogorov complexity. Yeah, I mean, his work is very much descended from that sort of way of thinking. He defines it in terms of the ability to acquire skills, which is, I suppose, perhaps a more benchmark-focused version of Pei's definition, which makes sense because Chollet was proposing a benchmark in that paper. So perhaps he was looking at each test question as a skill, and the ability to acquire it as what it was testing, which makes a lot of sense. But his formalism, which almost seems an afterthought in that paper, is where you can see much more of Legg and Hutter's influence. It once again uses Kolmogorov complexity. It sort of frames things in the same terms, even though in that paper Chollet goes on to say that he doesn't think compression is sufficient for intelligence. He was very clear about that in the paper, and then he uses it. So I think that might be because... Sure, yeah, he describes a meta-generating process which is the intelligence, and that produces skill programs. So the programs are a compression, but the process which created them is doing something more, maybe. Yeah, I felt like that part was... I chose to focus on the test part at that point. I'm like, oh, this isn't the bit that he was really focusing on. I don't know. I'd have to ask him, I guess. Yes, but you did say something interesting.
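For reference, the Legg-Hutter measure being alluded to here scores a policy by its expected reward across all computable environments, weighted so that simpler (more compressible) environments count for more:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected cumulative reward of policy π in μ (Legg & Hutter, 2007). Chollet's 2019 measure keeps this algorithmic-information flavour but, as discussed below, defines intelligence relative to a scope of tasks and the difficulty of generalizing across them.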
So, yeah, maybe a distinction with Chollet is that he thinks of LLMs as being a kind of interpretive collection of skill programs. So, you know, he thinks programs are the output of an intelligent system, not the intelligence itself. And with, let's say, AIXI, and we should talk about what that is, I don't think the concept of a program was an explicit output artifact. It was more a definition of an agent which can succeed in an environment. Is that fair? Yeah. I mean, I suppose a lot of this is kind of semantics a little bit, but the AIXI model is a general reinforcement learning agent. So it just takes the standard reinforcement learning thing and tries to make it what you might call an upper bound, or a superintelligence based on that, using Solomonoff induction. And Solomonoff induction, you can think of it as a formalization of Occam's razor. It's just: if I have two explanations, I pick the simpler one. And so the idea is that AIXI achieves this upper bound on intelligence, if you accept that Occam's razor is some sort of optimal heuristic that you can use. It does this using Kolmogorov complexity, which is sort of the optimally compressed version of a model. So if I can compress something more, then it's simpler, and if I just take the most compressible models, then I can get the simplest ones. And this is, you know, useful for thinking about what a superintelligence might do. And because it's a general reinforcement learning agent, we can sort of model it out, we can build approximations of it. I disagree with some of the theoretical foundations, and a lot of my publications are about what we could do better, but I find the overall idea of AIXI very compelling, and it has informed a lot of my work. Yes, because I guess for you, reading your work, you said that if we wanted to create AGI, it would be something that would look like a scientist, because if we frame it at the right level, a scientist can generate hypotheses, and they're an agent, they can act in the world, they're embedded in an environment. So you're framed at a sufficient level of embedding that you can actually capture the dynamics of the system. And in that respect, AIXI is an agent, and it has this principle of compression. And actually, maybe you can contrast it to active inference, because that's quite similar. It's about this agent that, you know, balances energy and entropy. So sort of like, you know, predictive control and simplicity in some, you know, so-called natural way. How is that different from AIXI? Oh, I mean, they're very different formalisms. I would love to see someone try to do active inference, actually. But I guess active inference has sort of a simplicity bias built into it. You know, there's like a regularizer there, but the focus is more on explaining something else. And also I think active inference is more... okay, so there's a whole bunch of ideas there. You've got the free energy principle, you've got active inference, you've got all this stuff about Markov blankets and maintaining the border of an organism and having an internal and external world. So the targets are kind of different. I think that's fair. We have an agent, and the agent is doing prediction in the environment and it can act and so on. And they are both, in some sense, AIXI and active inference, trying to produce a simple model.
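To make the Occam's-razor selection concrete, here is a minimal sketch, not Hutter's actual construction (which ranges over all programs on a universal Turing machine and is incomputable): among candidate hypotheses that reproduce the observations, prefer the one with the shortest description, using compressed length as a crude stand-in for Kolmogorov complexity. All names and data here are hypothetical.

```python
import zlib

def description_length(program_text: str) -> int:
    # Crude stand-in for Kolmogorov complexity: length of the
    # zlib-compressed program text. K(x) itself is uncomputable.
    return len(zlib.compress(program_text.encode()))

def occam_select(hypotheses, observations):
    """Among (name, program_text, predict_fn) hypotheses that reproduce the
    observations exactly, return the one with the shortest description."""
    consistent = [
        (name, text) for name, text, predict in hypotheses
        if all(predict(x) == y for x, y in observations)
    ]
    return min(consistent, key=lambda h: description_length(h[1]), default=None)

# Toy data generated by "double the input".
observations = [(1, 2), (2, 4), (3, 6)]
hypotheses = [
    ("general rule", "lambda x: 2 * x", lambda x: 2 * x),
    ("lookup table", "{1: 2, 2: 4, 3: 6}[x]", lambda x: {1: 2, 2: 4, 3: 6}[x]),
]
print(occam_select(hypotheses, observations))  # the shorter description wins
```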
So there's this assumption, or principle if you like, that simple models, if they predict well, must be good. Yeah, yeah. And this is a very popular, almost orthodox assumption to make, because Occam's razor does kind of work. But even as far back as ten years ago there were people pointing out what this assumption is based on. I mean, something like Solomonoff induction performs reliably within bounds, based on the original assumptions. But once you put it in an interactive setting, like with AIXI, now you've got the subjective notion of complexity that the agent has, which is used for its version of simplicity, because it's sort of perceiving the world through an interpreter. In the case of AIXI it's actually a universal Turing machine, but you can think of it as just an instruction layer, like a language. When I say something in a language, how long it takes me to say it depends on the language I use. If I have some memetic single-syllable word to describe a complicated concept, then the length of that concept is one in my language. And in my subjective world, that's fine. But if I have an external world that assesses complexity, in the case of AIXI this is sort of like Legg-Hutter intelligence, which is a measure of intelligence, measuring the intelligence of an agent based on the complexity of the model it comes up with, it's got a different concept of simplicity. You can make it perform arbitrarily well or arbitrarily poorly by shifting the goalposts of interpretation. You can essentially make simplicity completely disconnected from performance if you like. Not that that actually happens in reality, it's not so cut and dried, but it's certainly not optimal, which is, I think, what a lot of people were hoping for with the original. Yes. And of course, it's so interesting that actually when you dig into this deeply, it just becomes apparent how difficult this problem actually is. You know, many people might just think, oh yeah, defining intelligence, we sorted that out ages ago.
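The point about complexity being relative to the interpreter can be illustrated with a toy encoding (a hypothetical sketch, not the formalism from the paper): the same concept costs one symbol in a language that happens to have a dedicated word for it, and many symbols in one that does not.

```python
def encoded_length(message, vocabulary):
    # Substitute any phrase the language has a dedicated symbol for,
    # then measure what is left. Length is relative to the vocabulary.
    for phrase, symbol in vocabulary.items():
        message = message.replace(phrase, symbol)
    return len(message)

concept = "agent that adapts efficiently with limited resources"

rich_vocab = {concept: "§"}   # a language with a single word for the concept
poor_vocab = {}               # a language without one

print(encoded_length(concept, rich_vocab))  # 1: "simple" relative to this interpreter
print(encoded_length(concept, poor_vocab))  # 52: the same concept, a different language
```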
I think there was a distinction as well, that certainly Legg and Hutter were very focused on this simplification of the model and Occam's razor. And I think Chollet did overcome one hurdle, which is task generality as well, this developer-aware generalization. Yeah, so Chollet's definition is not a general definition of intelligence, it's very much a specialized definition. It's intelligence relative to a scope of tasks. Oh yeah. And then the generalization difficulty is like the relative entropy from that scope of tasks, you know, from the wider scope of tasks. Yeah. And if I understand correctly, and it's been years since I read that 2007 paper, but Legg and Hutter, it was about an agent minimizing Kolmogorov complexity which can do well on, in terms of expected performance, a wide range of environments. The definition of task here is important. I agree with Chollet that the tasks are what's important, I just disagree about what constitutes a task. So something like the Legg-Hutter definition, with the environments and the goals, it's the reinforcement learning framework, you've got things like actions. These are all high-level abstractions that we humans use to simplify the world. And as the problem of relative complexity in an interactive setting kind of illustrates, if I use a different set of abstractions to achieve the same ends, I can make something very difficult or very easy. So if I'm trying to talk about tasks, then I need to talk about embodiment as well. I can't just rely on this idea of a software mind, because whatever the software mind does depends on the interpreter or hardware. You have to look at the system as a whole. And in cognitive science, they've got this idea of enactive cognition, which is not just embodied, but in the environment as well. Because if you change the environment, then the same goals change difficulty, which is kind of an idea that you can see in Legg and Hutter's definition. But it is something that needs to be formalized as part of the process of intelligence. Because if you just sort of assume you have a set of actions or assume an environment, you're kind of bypassing a lot of what intelligence needs to do to solve a task. And so it doesn't make sense to think of a computer program, you know, in an absolute sense. Programs have purpose. They are situated in a context, in a world, in an environment. And I guess more broadly, you're a big fan of what I would call biologically inspired intelligence, which is that we should create intelligence which has properties like self and delegation and causal learning and all of this kind of stuff, you know, because that's much more like how it works in the real world. Yeah. So I, you know, had Hutter very briefly as a supervisor during my master's, and then sort of continued working on that sort of thing as I progressed through my PhD. And I wanted to address this sort of subjective complexity, subjective performance thing. And that turned out to be about defining this process of coming up with an abstraction layer. If you think of the Turing machine with respect to which AIXI is computed, or Legg-Hutter intelligence, if you think of that as an abstraction layer, then you've got a software mind on a hardware abstraction layer, and then that's sort of interpreted by physics. And in a conventional computer, you've got like Python interpreted by a C program interpreted by...
And it just goes all the way down to hardware, but it doesn't really stop at hardware because hardware is sort of a state of a physical world and it's interpreted by whatever physical laws according to which that world runs. And you could then say, well, knowledge of physics is kind of incomplete, so where does the abstraction end? And if you really want to make an objective claim, or a claim about objective behavior to be more exact, you need to formalize what must be true of all abstraction layers, not just a fixed subset assuming some basic layer that you can identify. because we're sort of all interacting with the world through our own abstraction layers anyway. So if we want to make claims that generalize to other abstraction layers, it sort of helps to have this framework. Wait, where were we at the start of this? No, no, that's great. That makes sense. Let's bring in the causality component, right? So in your paper, you were describing almost like single direction arrows of causality. So, you know, we have the hardware, where we have the C compiler, the interpreter, all of this kind of stuff. And so part of what we were saying is that to build a living, breathing, lifelike system, you need to sort of respect the causality. You can't just take something out on its own. But I was also more broadly interested in, is it always the case that the causality goes in one direction, or is it actually kind of like quite multi-scale or bi-directional? It is definitely multiscale, bidirectional, because in the same way that cells can network, right? Each cell has its sort of own goal-directed behavior. This is in biological systems. And they can network with other cells in their sort of perceptual field, if you will. And then they are constrained by the collective of cells of which they're part in the same way that a human within a legal system is constrained by the behavior of the other humans around. They're not going to suddenly run down the street naked. So it's, yeah, definitely there is a top-down causation. We can see it in our own multi-scale architecture that we're a part of as a species. One really cool thing you did in your paper was you drew a plot and described what it is, but you had abstraction on the y-axis and delegated control on the x-axis. And you gave an example of like, you know, like a centralized, you know, form of governance would be in the top left and like a free market would be in the top right. Tell me about that. Yeah, I actually have a much better version of that graph that is coming out in the final version because it's provisionally accepted. So that'll be much clearer. So it's like the new version of the graph has got like food stamps versus UBI to illustrate something that is different levels of delegation of control. So these things both distribute resources to members of an entire population. But the food stamps doesn't delegate control. It only delegates some of the resources. People are very restricted in what they can do with that. And so that illustrates the difference between sort of decentralization and delegation of control. And then you can think of every system as a stack of abstraction layers. Not just, so a computer is typically arranged into like this hardware, you know, machine code, assembly, C, all this stack. But so are human organizations. We've got, in a military organization, we've got like soldiers, then you've got a squad, a platoon, and so you've got these different levels of abstraction at which you can look at the system. 
And each layer is sort of the behavior of the parts of the layer below. And in biological systems, cells, their behavior can be an organ. The behavior of organs can be an organism. And the behavior of a set of organisms, like humans, can be language. So you can just keep moving up. And so you can use this framework of abstraction layers to understand some of the relative advantages that biological systems have. Should I keep going? Well, does that imply that the abstractions are real? Right. So, you know, I see you as an agent and I see you as factorized quite neatly into organs and brains and eyes and whatnot. But if we want to build an artificial intelligence, one approach is we handcraft the abstraction hierarchy. Another one is that we adaptively learn it. Yeah, and adaptively learning it is definitely the way to go, because we have learned the abstractions we have because these things are useful to us. So a chair, for example, is useful to me. It is something that is a cause of valence to me, or is a step on the way to causing some positive or negative valence. It has utility. Table, same thing. I don't have the concept of half a chair that I think about. It's just not useful to me. I have to combine this other concept of half to even describe it. And the world is in every aspect divided into these simplifications, these classifiers. And if we want to go back to talking about an intelligence that builds programs, I'm building all these classifier programs for television, light, chair, you, table. This is all stuff that matters to me. I don't build classifiers for things that don't matter. And so you can even tie this in with things like the Fermi paradox, of why we don't even notice something that might be classified as intelligent. Its behavior is just not relevant to anything that causes us valence, which would tie in with Mike Levin's work on mind blindness. Yeah. Very good, very good. And you spoke about the need for actually learning causal relationships between things as well. Tell me about that. Right. I started with a bit of Pearl's stuff, because I was reading The Book of Why and looking at optimal agents, of course. I was thinking, well, if it's going to be optimal, it has to learn some sort of representation of its own interventions in the world. So that's to say, if I want to be able to get food and navigate my environment, I need to be able to tell the difference. If I'm a fly on my shoulder, for example, and the world moves around the fly, there's two reasons that could have happened: either my shoulder moved or the fly moved. The fly needs to know which, or it's going to get squished. Same with humans, same with insects, all the way up. This is, as others have already suggested, sort of important to the notion of subjective experience, because you need a subject to have experience, right? You need to have an I to know that I did something. And so I started looking into how you would construct that as a self-organizing system. And this came into this alternative to simplicity that I was working on for optimal learning. I call it weak policy optimization, or weak constraints, or, after feeling particularly cocky, Bennett's razor. That one hasn't caught on, but I'm working on it. You never know. Maybe. If anyone... Yeah, Bennett's razor, everyone. So we can get this.
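The fly example is essentially the logic of reafference: compare the sensory change predicted from your own motor command with the change you actually observed, and attribute the remainder to the world. A minimal, hypothetical sketch of that attribution step:

```python
def attribute_motion(predicted_shift: float, observed_shift: float,
                     tolerance: float = 1e-6) -> str:
    """Reafference-style attribution: subtract the shift predicted from one's
    own motor command (the efference copy) from the observed shift; whatever
    remains is credited to the external world."""
    exafference = observed_shift - predicted_shift
    if abs(exafference) < tolerance:
        return "I moved; the world stayed put"
    return "something out there moved"

# From the fly's point of view:
print(attribute_motion(predicted_shift=0.0, observed_shift=2.0))  # the shoulder moved
print(attribute_motion(predicted_shift=2.0, observed_shift=2.0))  # the fly moved itself
```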
So the point was, if we can define an optimal agent that learns optimally, it must construct this sort of representation of itself, which I call a causal identity for the self. It's like I have an identity that I associate with the causal effects of my actions. But it's also like, if I don't start off with a world divided into objects, then I would also do that for other things, like a chair, a television. And it wouldn't just be a passive classifier in the sense that we tend to think of things as value neutral. These are things that are not value neutral if you have a system that is impelled by attraction and repulsion from the ground up. It's not like it's going through an interpreter and having valence attached after the fact. In the case of a biological organism, it's a network of cells, each of which is being attracted and repelled. Each collective of cells within that is being pushed and pulled by attractive and repulsive forces, and the organism as a whole is being attracted and repelled. So you can think of this as a tapestry of valence. And it's developing representations and classifiers of the world, not just of itself, but of other objects. And all of these objects would be the causes of that valence, or in some way causally relevant to what causes valence. So there was scale-maxing, which is the San Francisco thing. Yeah. And then there's simp-maxing, which is: let's make it simple. Yeah, so AIXI would be an example of that. Yeah. And then there was the W-maxing, and W, I think, means world? Well, it meant weakness, but I just thought it was funny because it was like win-maxing. Oh, interesting. What was the interest? Because you were talking about things like, you know, enactive cognition, where we consider the whole environment and everything. Is that roughly it? Yeah, so it's like, AIXI is a sort of dualist. And when I say dualist, I'm referring, for anyone who's unfamiliar, to Cartesian dualism, the idea that you have mental substance and physical substance. Descartes was trying to come up, in the 16th century, with an explanation of the mind that conformed to church doctrine. And so he said that you had mental substance, which is all our thoughts and things, and it interacts with the physical substance of the world through the pineal gland and the animal spirits around the pineal gland. It sort of, you know, bumps it, and then it bumps us, and then we act. Now this, even in the 16th century, came under some criticism, but it sort of stuck around.
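As a rough intuition pump for the "weakness" idea mentioned a moment ago (a hypothetical simplification, not the formal weak policy optimisation result): among policies that account for everything observed so far, prefer the weakest constraint, the one that remains correct in the largest number of situations, rather than the shortest description.

```python
def weakness(policy, all_situations):
    # Weakness = number of situations in which the policy is correct.
    return sum(1 for s in all_situations if policy(s))

def w_maximise(policies, observed, all_situations):
    """Among named policies correct on every observed situation,
    prefer the one correct in the most situations overall."""
    consistent = [(name, p) for name, p in policies if all(p(s) for s in observed)]
    return max(consistent, key=lambda np: weakness(np[1], all_situations), default=None)

situations = range(10)
observed = [2, 4]
policies = [
    ("exactly 2 or 4", lambda s: s in (2, 4)),  # narrow lookup-table-style rule
    ("any even number", lambda s: s % 2 == 0),  # weaker, more general constraint
]
print(w_maximise(policies, observed, situations)[0])  # "any even number"
```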
And it's interesting how much that dualism has stuck around, because we've kind of done the same thing with AI: we have just replaced the pineal gland with a Turing machine. And enactive cognition is the idea that your cognition is in the world, right? It's not a mental substance. It's not just embodied in the sense of I'm not just a body, but I am part of the world around me. My memory extends into the world, I can write things on a piece of paper, and I enact my cognition by interacting with the people around me and the objects around me. Yes, I'm so glad you brought this up, because the other day when we were chatting you were talking about the pineal gland, and I thought, oh my god, what the fuck is he talking about? And yes, so you're saying in the 1600s we had this Cartesian dualism, you know, the mind and the body, two different ontological substances, and people at the time thought the pineal gland was almost like the mediating thing in the brain. Yeah, yeah. So obviously you're not saying that the pineal gland is the mediating thing. Maybe you are, I don't know. I'm definitely not saying that. Okay, good, just to get that right. But the very interesting analogy is that you're saying that there is this thing called computational dualism. And I want to press on this a little bit, because I don't know: are you saying that any form of functionalism or computationalism is computational dualism, or are you saying this use of a Turing machine in some formalisms is computational dualism? I'm saying, I use computational dualism more to poke fun at the idea of defining just a software intelligence. Because if we just have software by itself, and we don't say anything about the hardware, then it can't really be intelligence, if intelligence is measured in terms of performance in the environment, because whatever that software does has to pass through an interpreter, and the interpreter decides what it does. So we can just make it arbitrarily stupid if we want. Yeah. And so I used that term computational dualism as almost like a rolled-up newspaper to whack people on the nose, because I was getting frustrated with repeating myself. Oh, I like it, I like it. How does this idea relate to, I mean, I spoke with a few folks about mortal computation, right? You know, Turing machines and programs, these are great because they allow us... and also, you know, people talk about pancomputationalism. It's a really neat idea to think about computation in the abstract and think about programs in the abstract. You know, these are things which can run on any computer, potentially on different substrates and whatnot. And the world isn't really like that. No, no. There are finitely many copies of every piece of software we make. I love the branding, mortal and immortal computation, but there's no such thing as immortal computation. There's just finitely many copies of the software we make. And I get that people might quibble about that, but yeah, you can copy the thing.
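The "interpreter decides what the software does" point can be made with a deliberately silly sketch: the same program text, run under two different interpreters, produces opposite behaviour, so a claim about the program's intelligence in isolation from its interpreter is underdetermined. Hypothetical example:

```python
program = "MOVE TOWARD FOOD"

def interpreter_a(instruction: str, position: int, food: int) -> int:
    # Reads the instruction as intended: step toward the food.
    return position + (1 if food > position else -1)

def interpreter_b(instruction: str, position: int, food: int) -> int:
    # An equally valid but perverse interpreter: step away from the food.
    return position - (1 if food > position else -1)

# Identical "software", opposite performance in the environment.
print(interpreter_a(program, position=0, food=5))  #  1: closer to the food
print(interpreter_b(program, position=0, food=5))  # -1: farther from the food
```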
But in the context of trying to define intelligence, "immortal computation" is a terrible concept. There's just mortal computation. Let's not complicate the matter by adding in an extra concept that doesn't apply. Yes, yeah. And then, just to help folks understand, mortal computation is the idea that the stuff is the computation, there's no meaningful disconnection between the program and the stuff which does it. I've seen a few people give different interpretations of it. The best is probably that of Ororbia and Friston, where they talk about... Oh, yes, I interviewed him, by the way, Alex. Oh, right, cool. Yes, cool guy. Yeah, so they've got quite a rigorous definition that extends over several pages. The most common definition I've seen is the internet version, which is just people going, it's immortal. And I think the term was first used as sort of an afterthought by Hinton at the end of a paper that was mostly about feed-forward neural networks. Oh yeah, was it his new proposed architecture, the forward-forward thing from 2022, at NeurIPS? I think so, yeah. I can't remember the name of the paper, but yeah. And I remember I saw that, and I'd already been writing about embodiment and how you've got to take into account the abstraction layer if you want to have any claims that hold up about performance. And so my response to that was to come up with the term computational dualism and write an irritated paper. But grumpy papers are my favorite papers. Which is amazing. Okay, very good, very good. And so in this kind of categorization of different approaches, we haven't spoken about the Silicon Valley scale-maxing and so on. I mean, maybe, what drives you? I mean, how close are we to AGI? The folks over there think that we've already done it, right? But just by scaling. Well, I mean, if you want something that can do jobs and automate a lot of the economy, then sure, we've got that form of AGI. If you want something that's actually intelligent, like a human, though, we do not have that. And a lot of that is because, even just interacting with, like, I use things like Grok and ChatGPT, it's not like I'm not interacting with this stuff. But if it was anywhere near as intelligent as a human, I wouldn't have to do all the work that I do. I would be able to offload a lot more. It's not sample efficient. I know there's claims about the ARC test, but just interacting with these agents, you can see that it's not really sample efficient. I don't know what they did there to get those results on ARC 1, and I don't know what the claims are about ARC 2. It's very hard to tell what is true when people don't release the code and the results and everything that you can puzzle through and work out. So I think we're probably a good ways off something that really resembles human intelligence. And we need to look at something that isn't just a software innovation but a hardware innovation. The reason I think that is because we're using these abstraction layers, in the form of the hardware we have, purposed for very useful standardized software applications that we can roll out and copy and run on many different computers. But if we want something that's as adaptive as a biological system, we need something that is modular and cellular and efficient like a biological system.
We can't be expecting some, I mean... And biological systems with a tiny fraction of the energy and learning data can do so much more. Yes. Which, you know, is great, because I like having a job. So let's... Absolutely. By the way, hot off the press, did you hear that Elon released Grok 4? Yeah. I don't know who he's released it to, but Greg, you know, from ARC Prize, he posted this morning. Yeah. Greg's a good guy. And apparently it scored about 16%, which is SOTA even, because, you know, my friends at Tufa, like Mohammed and Jack, I think they're around 15 and a half percent at the moment. So yeah, Grok 4 is now in the lead. I mean, how do you interpret that? Do you think it's just sort of by dint of memorization, or...? I don't know. I mean, I'll have to interact with it, right? Maybe it'll be really impressive. But I suppose the proof's in the pudding. Like, if this starts to do a bunch of really useful jobs across the economy, then we can say with certainty that we're closer to it. But these benchmarks... and the ARC-AGI benchmark is a great benchmark, right? I've been looking at that my whole PhD. It's great, but it's not perfect. And even Chollet would not claim it to be perfect. And 16% is a great result. But I want to see if it can add long numbers. That probably... That would be a good start. I don't know. That's my usual thing. I sort of, oh, it's a new toy, let's see if it can add long numbers. And it almost always can't. Well, I mean, devil's advocate. Yeah, no one's going to argue with you. If you say vanilla LLMs are, you know, basically databases, no one's going to argue with you. Right. But you can add tools to them. I mean, so you were starting to talk in your paper about some very interesting hybrid approaches. So, you know, obviously on the LLM side, you add tools. And there's an interesting discussion to be had there: whether, you know, you train them with stochastic gradient descent, they use tools, what can they do? But you're also talking about some other interesting ones, like the non-axiomatic reasoning system from Pei. And there's the Hyperon system from Ben Goertzel. And honestly, I don't know much about those systems. But when you were describing them, it seemed a little bit like they were everything but the kitchen sink. So they can do a bit of Bayesian inference over here and they can do some neural networks over there. And I mean, that could work, but what's your assessment? Yeah, so, should I go over that, like, the tools? Oh yeah, yeah, just sort of weave a path.
So I sort of said they're taking the inspiration from Sutton's bitter lesson, where he talked about search and learning. I sort of divided it up: we've got two basic tools, right? We've got approximation, which is what the LLMs are. I mean, by definition they're inexact, and that's really great for, you know, trawling through large amounts of data and coping with noisy data. Because it's an approximation, you can do a lot with that, and with the computational resources we have, approximation works beautifully. And then you've got search, which is like iterating through a flow diagram or something like that, and that is great for precision, or things like navigation on your phone. And when you combine these things, you get something like AlphaGo. I mean, you can use the approximation part as a sort of heuristic to guide the search. You can combine these in many different ways. And these hybrids allow us to create much more effective intelligent systems of one form or another. So well-known examples of this are things like AlphaGo or AlphaStar, but there's also the sort of more comprehensive architectures that are meant to kind of emulate the versatility of a human mind. So something like NARS is a system into which you can integrate many different components, and I've seen some of the experiments involving NARS and LLMs that were at the 2023 AGI conference. And Hyperon is an inherently modular system that is just meant to allow you to plug and play lots of different modules. It's meant to be decentralized and adaptable, so you can plug all these things in as they develop. And yeah, it can include the kitchen sink if you want to plug that in. But I guess, yeah, sorry, it was more Hyperon I was thinking about with the kitchen sink. Pei's one, maybe you'd do a better job than me at this, but roughly speaking it's about building up a whole bunch of reasoning about something which I don't know much about, and then adapting and modifying it over time until I can make, you know, deductions and inferences about things. And I remember there was some stuff with time constraints in there, so if it got stuck on something it would move on, and it would rank things by, it would take into account, the resources it had. So in practice it would actually be quite useful. You can put it on a little robot, have the little robot run around the room and do stuff. Yes, yes, which is cool. And they seem to come up with better and better benchmark results every year, but it doesn't seem to get much attention in the sort of mainstream machine learning space. Yeah. What do you think about benchmarks, by the way? Because, Grok... I actually interviewed the guy who created Humanity's Last Exam the other day, Dan Hendrycks, and I think it was at 26%. Today with Grok 4, it's about 46%. You know, what is the point of benchmarks if they're so easily saturated? Great marketing. I mean, we love measuring sticks. It's nice to have measuring sticks. It lets us know that we're progressing in a direction, whatever direction we point the stick in, I guess. But humans are adaptive. If we set up a measuring stick and it's less than perfect, we will find a way to exploit that. I think this isn't necessarily a bad thing. It's just that people should interpret benchmarks as what they are, which is measuring sticks. Yes. Talk to me about consciousness. Okay.
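Going back to the approximation-plus-search combination described above: a generic, hypothetical sketch of the pattern (not any particular system) is a best-first search in which a cheap approximate value function decides which node to expand next.

```python
import heapq

def heuristic_value(state: int, goal: int = 20) -> float:
    # Stand-in for the learned approximation (e.g. a value network):
    # here just a cheap distance-to-goal score.
    return abs(goal - state)

def guided_search(start, is_goal, successors, max_expansions=1000):
    """Best-first search where the approximate value function orders the
    frontier, the rough division of labour in AlphaGo-style hybrids."""
    frontier = [(heuristic_value(start), start, [start])]
    seen = set()
    while frontier and max_expansions > 0:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        max_expansions -= 1
        for nxt in successors(state):
            heapq.heappush(frontier, (heuristic_value(nxt), nxt, path + [nxt]))
    return None

# Reach 20 from 1 using "+1" and "*2" moves.
print(guided_search(1, lambda s: s == 20, lambda s: [s + 1, s * 2]))
```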
Well, that whole spiel about AIXI and abstraction layers kind of led me down a very long and winding rabbit hole with the abstraction layers thing. Because after doing that, I mentioned before the idea of coming up with a causal representation of the self, and this idea of tapestries of valence. And if you keep scaling up the ability to learn these causes of valence, then you don't just get a sort of do-operator for the self, or a representation of the self, which others have already... Oh, actually, I should start there, that self thing. When I was looking at that in the original paper that I put that in, I sort of said, well, if you've got this self, and it is inherently valenced, and it's sort of made up of sensorimotor activity, then wouldn't that be the basis for a subjective experience and explain something of consciousness? So I put that in the paper and people liked it, and I thought, this consciousness thing is great, I'm going to keep doing this, I like this, and people aren't laughing at me. So I kept going, and then one of my supervisors said, hey, that's like my theory, I did a causal self thing, and pointed me at his paper. And his paper was about something called reafference in the insect central complex. He was trying to show that flies have subjective experience. And this tied back to some work from about 20 years ago, where someone was saying that this sort of representation of the self is where human subjective experience comes from. We have something called reafference in the mammalian midbrain, and many animals have this. It's what enables us to tell when I am pressing down on the chair versus the chair pressing up on me. And this is very useful for causal relations. And then I kind of started thinking about consciousness more generally, and how we could make up our subjective experience with these causes of valence I was talking about, you know, things like the television, the chair, whatever. And someone started beating me over the head with a copy of David Chalmers' work on the hard problem of consciousness. Oh yes. And after a lengthy argument over that, I decided, well, now I have to write about it. So I started writing what ended up as a 70-page paper. All of my supervisors at the time were telling me, please stop, just graduate, just finish the thesis. But I kept writing this, and then it got cut down to a much shorter paper. I ended up bringing on one of my supervisors as a collaborator, found another new supervisor who was an expert in consciousness to come on and help me finish it, and talked about the hard problem of consciousness. And I proposed to solve it by showing that what's called a philosophical zombie is impossible in every conceivable world. And this is because, if you go down all the abstraction layers, you can say, well, every conceivable world, whatever it is, must include just change, or difference. Otherwise, there's just a sort of universal oneness. And as I put it in my thesis, becoming one with the universe is beyond the scope of my work. So you've got a set of states, and you can build up a formalism from that. And this formalism, I argue, describes all conceivable worlds. So if you can show us a philosophical zombie... and just to explain what that is, it's something that's like you or me, but not conscious, yet in every way identical to you or me.
So it's as efficient energetically, it's as smart, does everything we do, just missing consciousness. And I got really into philosophical zombies when I read the work of Peter Watts, who is a science fiction author that writes cosmic horror about philosophical zombies. Do you want to continue? I've got a couple of questions. No, no, let's do the questions, because I can go on ranting for ages. Well, no, no, I mean, this is brilliant. Yes. So, philosophical zombie. I mean, I love Chalmers, we've had him on. All of the function, dynamics, and behavior, without the little bit extra, the phenomenal component. And so I might be a phenomenal zombie. So I think what you're saying is, because there's this kind of lowercase-s and capital-S subjectivity, and if I understand correctly, you're saying that the capital-S subjectivity can be discounted. There are no phenomenal zombies. I don't know whether you're saying it's because there is no hard problem of consciousness, like consciousness is basically an illusion. So we should think about subjectivity in terms of purposeful, perspectival representations, right? But we shouldn't assume that there's some magical extra realm on top, which is called consciousness. I suppose there's a lot to interpret there. I mean, if we're asking, is there anything mental, anything non-physical, if we take physical to mean something that interacts directly with the physical world... I suppose the advantage of doing that all-conceivable-worlds thing is I don't need to talk about physical and non-physical. And I like that, because then I can entertain ideas of magical worlds or whatever, but that's beside the point. I'm not saying that consciousness isn't a thing or that it's an illusion. I'm saying that we can explain a whole lot of the features of consciousness as a necessary consequence of, you know, just the state of the environment changing from one to another, in all possible worlds, all conceivable worlds. And if one accepts that my description of a conscious organism, without having said that anything is just physical or whatever, is compelling, and I think it is, then there is no conceivable world where you can have something that is as intelligent and acts the way that I do without being conscious. The consciousness is a necessary adaptation. And the idea of information processing without consciousness is implausible. Yes, very good, very good. So you are saying basically that it is just something that happens when you have configurations along the lines that you are describing. I guess in a way I can't challenge you, because you're already saying they need to be biological and, like, you know, physical and real. And so, because, you know, my obvious retort to that would be, well, a computer simulation of those things obviously wouldn't be conscious. But you're not really saying that, are you? No. And there's one question at the end of my thesis that, again, I sort of ummed and ahhed about, because I like this question. I like that I couldn't figure this out, because it's fun to think about. So that tapestry of valence I mentioned, with all the cells getting pushed this way and that, you can sort of say that a conscious state is a tapestry of valence.
Now, if I simulate a bunch of cells and give it a tapestry of valence and all the necessary ingredients, like the first and second and third order selves, I only talked about the first-order self just before, but you can keep scaling the system up and getting predictions of predictions of predictions of stuff, and if you get all the necessary ingredients for consciousness and you just simulate it in a computer, there are two possibilities. One: either a conscious state has to be realized at a point in time by one state of the environment, or it doesn't. If it does have to be realized by a state of the environment, that is to say all of the parts of the conscious state are there at the exact same moment, and in the formalism that is a much clearer statement, then it's just humans and organisms and highly distributed things. You could make a nanobot swarm that's conscious, but you can't make a single-threaded CPU conscious, because it's not really doing all this at once, right? It's kind of like looking at a thing: the program counter loads something into a register, does some stuff, shuffles some stuff around. It's smeared over time, and so the idea is that smearing consciousness over time would kind of kill it. The other possibility is, how would we know? Am I allowed to swear on this? Yeah, of course. How the fuck would we know if we're being simulated? We don't. We have no way of knowing. I think I'm existing at a point in time, and this whole thing could be running on a single-threaded CPU in some kid's basement. Now, I'm not endorsing the simulation hypothesis, I'm just saying. I mean, I like the idea that, clearly, there's an infinite stack of abstraction layers; maybe those abstraction layers end in some kid's basement. I don't think they do. Doesn't matter. I'm making fun of the simulation thing. But if it's smeared across time, if I can have a single-threaded CPU load stuff into a register and simulate all the stuff that is to do with consciousness, then I could make, like, populations of humans, or what are called liquid brains... And so a solid brain is something like a human brain, with a persistent structure that supports, in the case of a human brain, a bioelectric abstraction layer that can do information processing. Very useful, but it requires a certain stability; it has to maintain its form. A liquid brain is something like a population of humans. It doesn't have to maintain its form. Its computation is not in the form of electrical signals, but people moving around, doing actions and interacting with the world. Now, maybe we'll network ourselves up, I don't know, but that's a liquid brain, and no one needs another liquid brain. According to my theory, if a consciousness has to be at a point in time, then a liquid brain is not conscious. If you can smear it across time, then you can have a conscious liquid brain, which is cool, but has some weird implications. That's very cool. By the way, we'll be clipping that part, so you'll see that part on Twitter.
There is just one potential objection, which is that, you know, what a lot of theories of consciousness do is they kind of brush it to one side and treat it as something which is epiphenomenal, which means that it's not causally embedded in the system. And we did ask Friston about this, and he had a bit of an interesting response. He said that phenomenal states are just parts of the generative model, you know, it's all in there, it's all kind of part of the causal nexus or whatever. What do you think about that? So I very explicitly argue that a phenomenal state is a tapestry of valence. That I am not just doing a representation that's value neutral; that my classifiers of the world, like television and so on, are me being attracted or repelled from a physical state, at different levels of abstraction, at different scales, all happening at once. So there's a lot of sensations going on there, which is why it's not just loading a file. I feel something as I try to process this information, because I'm being impelled by it. But isn't consciousness something immaterial, something unobservable? So for me it makes sense to say that it's a correlate of something physical in your modeling, but it has to be kind of somewhere else, outside of the system. Yeah, and I guess, so that's the idea of a first-person ontology. There's something that I know by being conscious that you can't know by watching me. But, I mean, that's the point of the abstraction layer thing, I guess. If I do this formalism of all conceivable worlds, I have the luxury of saying that I have this God's-eye view into everyone's head. So in the formalism it's nice and neat like that, from an explanation point of view. Here, yes, I am stuck in my first-person ontology, and I can only say what I'm interacting with through my abstraction layer. But it's also kind of like saying that, if we buy into the idea that we can't say anything about things outside our abstraction layer, we're basically giving up on science altogether. And why would we draw a different line for consciousness than for everything else, when we clearly could draw the same line for everything else? That's another thing. I'm not saying that it's not a line we could draw. We could say, I don't know that the television's on, it might be on, I don't know, prove it to me according to my first-person ontology. And so, you know, I can continue to hold whatever belief I like, really. Very good. Very good. I've enjoyed this. I've enjoyed it too. Yeah, so DISI, the Diverse Intelligences Summer Institute, was something I found out about three days before the application deadline. I thought, oh, I should do that. Cold weather and talking about intelligence. So I have come here and met a lot of people who do everything from machine learning to biology to philosophy, and had very intense discussions about these things for three days. I'm not sure my brain still functions, but I know it's ticking away there with something.
We're supposed to come up with a project, and I have a weird problem: there are too many projects, too many good projects. But it's interesting, because it ties in with all of this. There are a lot of people here who are also big fans of Mike Levin's work.

Oh yeah, I love him. And recently I've been writing a lot on the tapestry-of-valence stuff and on abstraction layers; everything I'm doing has become about that, because it's just what I'm enjoying at the moment. I've been looking at AI safety, for one thing. It's not about the AI in isolation: it's another swarm architecture in which we are a liquid brain into which the AI plugs in. It's just a part of the organism. So rather than worrying about aligning a policy or whatever, it's really about designing the system as a whole to accommodate these different components and make use of them, in the same way that a biological system makes use of the resources available to it and networks cells together to form things.

That's actually very interesting. The way biology works is very decentralized, with delegation and canalization, and weirdly it seems quite orchestrated even though it is so decentralized. Humans mature in similar ways, we have similar behaviours and so on, yet we also have a degree of agency and freedom on top of that. If you were to design an artificial or augmented AI system, how could you have your cake and eat it, and have something that is both decentralized and steerable?

So one of my supervisors accused me of writing libertarian biology, because one of the results in my thesis is called the law of the stack (I like dramatic names). It basically says that if you've got a high level of abstraction, say some software running on a computer that is learning, its ability to learn and adapt hinges on the ability to adapt at lower levels of abstraction, like the hardware level learning and adapting. Compare biology and computers: there's a paper coming out soon called "Are biological systems more intelligent than artificial intelligence?", and it basically says that, in addition to the W-maxing thing that biology seems to do better, biology seems to delegate adaptation down the stack, whereas computers are like an inflexible bureaucracy that makes decisions only at the top. If we want to make something as adaptive and efficient as a human body, we need to emulate that sort of delegation.

And then I go on to say, because I've watched way too many Mike Levin interviews: he was talking about cancer, and how cancer can be seen as a cell becoming isolated from the informational structure of the collective it is part of. It reverts to primitive transcriptional behaviour, which basically means it just starts reproducing and consuming, eating and expanding.
So you could ask: under what circumstances does part of a collective system become isolated from the informational structure? I formalized that in this stack framework. Each cell is like a little task with a little policy, and if there are no correct policies available for the overall collective, then the only way for the high-level thing to continue is to break off some of its parts until some correct policies do exist. There are two ways this could happen. One, you impose lots of nasty stuff from the outside and make the task too hard for any correct policy to exist; in the case of a biological organism, something like lighting it on fire. The other way is to impose too much top-down control, cut yourself off at the ankles, and eliminate otherwise good policies by over-constraining the members of your collective. Think of this in terms of humans (this is why my supervisor was joking about libertarian biology): if you over-constrain the members of the collective, they're going to break off and do crazy shit. It's like having a totalitarian state, too many laws, too many restrictions. I used to live in Italy and I could see this in the way people used queues: they didn't line up, they just drifted towards the front of the shop. There were rules for everything in Italy, so nobody obeyed any of them. Other than that it's a great place; I loved living there, once I got over the whole nobody-standing-in-line thing.

And don't drive. Well, I did a lot of that, and it was terrifying. I had to stop at the side of the road, and my boss, who is Italian, was yelling at me: why aren't you driving? I said, I had to stop, everyone's crazy, they're all driving like maniacs. I know, I know, it's insane.

Coming back to AI: if you put too many constraints on it, you're more likely to end up with something that breaks. If you're trying to avoid some dangerous behaviour, constrain it only in the areas where it actually needs to be constrained; don't constrain it into a set of impossible circumstances, because it's just going to break.
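As an illustration of that over-constraint point, here is a minimal toy sketch. The setup is an assumption made for illustration, not anything taken from Bennett's thesis: each member of a collective has a handful of candidate policies, a few of which actually solve that member's task, and we count how many members can still obey a growing set of top-down prohibitions while succeeding at their task.

```python
import random

rng = random.Random(1)

N_MEMBERS = 100          # agents / cells in the collective
N_POLICIES = 20          # candidate policies available to every member
CORRECT_PER_MEMBER = 4   # how many of those policies actually solve a given member's task
RULE_SAMPLES = 200       # random rule sets to average over per setting

# Each member's personal set of "correct" policies (different members, different tasks).
correct = [set(rng.sample(range(N_POLICIES), CORRECT_PER_MEMBER)) for _ in range(N_MEMBERS)]

def compliant_fraction(n_rules):
    """Fraction of members with at least one policy that is both correct for
    their task and not forbidden by the top-down rules."""
    forbidden = set(rng.sample(range(N_POLICIES), n_rules))
    return sum(1 for c in correct if c - forbidden) / N_MEMBERS

for n_rules in range(0, N_POLICIES + 1, 2):
    avg = sum(compliant_fraction(n_rules) for _ in range(RULE_SAMPLES)) / RULE_SAMPLES
    print(f"{n_rules:2d} prohibitions -> {avg:5.1%} of members can comply and still succeed")
```

The prohibitions here are random rather than targeted; they only have to be numerous enough to cover all of a member's correct policies, at which point that member's only options are to fail or to ignore the rules, which is the "break off" case described above.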
Yes, yes. I'm interested in this concept of what it means to be alive, because we're dancing around it a little. We want to create artificial systems that have that vibrancy, and we could have different perspectives on what that means; maybe it's certain patterns of information processing, diffusion and whatnot. And there are some existence proofs: you look at Conway's Game of Life, at artificial life, at Lenia, and you think, wow, that seems very lifelike; I can't really put into words why, but it seems like it is. On the other side of things, I love building distributed agent-based systems with computers using the actor pattern; I love separating things out into autonomous little units of computation that can run distributed. But they're very much not alive. They have some cool computational properties, but they're brittle. Maybe I could make them do metaprogramming, have a little LLM so the system can heal and update itself, but it's still not really it; it's knocking on the door, but it's not where we want to go. There are some interesting approaches like neural cellular automata, where you do this emergentist sort of optimization and the system is self-organizing and can heal itself from the bottom up, but that's quite domain-specific. So we're dancing around this idea of creating systems which are alive.

Okay, there are two things I want to talk about there. One is that you mentioned healing. I was interested in why simp-maxing works at all, when in an ideal world it might not work. We live in a spatially extended universe: there's only so much stuff you can cram into a bounded system or a finite space. So if I've got, say, a biological system coming up with abstraction layers to process information, then as it delegates control, in order to make efficient use of space it's going to make weaker constraints take simple forms. If you delegate as much as you can, you're generally going to get simp-maxing correlating with W-maxing as you scale things up. That means something that maintains homeostasis, like a self-repairing organism, is going to need to delegate control a lot in order to do so. So self-repair, and being alive, almost requires this combination of simp-maxing and W-maxing. And I was thinking about this because I wanted to know why things are alive, and I figured it would be a fun side gig alongside my thesis.
So I'm waiting for examiner feedback on this, but I thought: a rock is something that simp-maxes and persists through simp-maxing alone. As the universe transitions from one state to another, it's going to destroy some objects and preserve others, and if something is simple, why would it persist? It would persist because something simple is more likely to stumble into a kind of weak constraint, because of this basic correlation we've got going on. So rocks, and lots of objects like them in the universe, persist by being simple. But something that self-repairs is doing the opposite. It is becoming more complex than the same thing would be if it didn't self-repair; it has increased its complexity and massively increased its ability to embody weak constraints. So I would argue that life is that which W-maxes at the expense of simp-maxing, whereas things which are not alive can only simp-max.

We were just talking about healing and self-organization and so on, and it doesn't just come from within; it also comes from the outside, from interactions with the surrounding environment. Exactly. I think a lot of what agency and intelligence are is about storing a history of information. It's almost as if there's a hard drive of human knowledge, which is our culture, and it's all being stored, it survives, and it's ontogenically imbued into all of us during our lifetimes. This is why David Krakauer said culture is evolution at light speed: language and culture help us transcend the physical limitations of DNA and physical evolution. There are just many different ways of thinking about how this machine works.

I sent an abstract of my thesis to David Krakauer last night, and I don't know what he thinks of it yet; I will wait and see. He might think it's terrible. (I think he would be honest with you if it was.) But I think these ideas are compatible. At the start of my PhD I saw the free energy principle and thought, what is this, I don't know about this, and then I read more and more and came around to it; now I like it. So maybe there's something I've missed, but I feel pretty confident in this. And just to be clear, the claim that life is W-maxing without simp-maxing is not something that has gone through peer review yet. I have submitted it for peer review, and we'll wait and see what sort of hate mail I get in return. But it's definitely something I would want to run past Friston if I get the opportunity in the future, and hear what he thinks of it.
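As a side note on the self-repair contrast above: the claim that a self-repairing thing buys persistence by doing work (and by being more complex) is easy to make concrete, even though the "rocks persist by being simple" half of the argument is harder to capture in a few lines. The sketch below is illustrative only; the pattern, the amount of redundancy and the noise rate are arbitrary assumptions, not part of Bennett's formalism. A bit pattern stored with redundant copies and a majority-vote repair rule keeps being "itself" under random bit-flip noise far longer than the same pattern left to degrade.

```python
import random

rng = random.Random(0)

PATTERN = [1, 0, 1, 1, 0, 0, 1, 0]  # the "identity" the system is trying to keep
COPIES = 5                          # redundancy available to the self-repairing version
FLIP_PROB = 0.02                    # per-bit chance of corruption at each time step
STEPS = 1000                        # cap on how long we watch each trial
TRIALS = 100

def add_noise(bits):
    # The environment perturbing the state: each bit flips with probability FLIP_PROB.
    return [b ^ (rng.random() < FLIP_PROB) for b in bits]

def majority_repair(copies):
    # Self-repair rule: rebuild every copy from the per-position majority vote.
    consensus = [int(sum(col) * 2 > len(col)) for col in zip(*copies)]
    return [consensus[:] for _ in copies]

def lifetime_without_repair():
    state = PATTERN[:]
    for t in range(STEPS):
        state = add_noise(state)
        if state != PATTERN:        # the moment it stops being "itself"
            return t
    return STEPS

def lifetime_with_repair():
    copies = [PATTERN[:] for _ in range(COPIES)]
    for t in range(STEPS):
        copies = [add_noise(c) for c in copies]
        copies = majority_repair(copies)
        if copies[0] != PATTERN:    # repair failed to restore the identity
            return t
    return STEPS

plain = sum(lifetime_without_repair() for _ in range(TRIALS)) / TRIALS
repairing = sum(lifetime_with_repair() for _ in range(TRIALS)) / TRIALS
print(f"mean lifetime without repair: {plain:.1f} steps")
print(f"mean lifetime with repair:    {repairing:.1f} steps")
```

The repairing version needs several times the storage plus a repair rule, so it is the more complex of the two, and in exchange it persists far longer; that is the direction of the "life W-maxes at the expense of simp-maxing" claim, though of course not a demonstration of it.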
Yeah, very cool, very cool. Well, Michael, what I will say is you should join our Discord server. I think the folks would love it if maybe one Friday afternoon we all got together and shot the shit for a little while. I'd love that; that sounds great. You'd have a lot of fans on there. Michael, this has been amazing. Thank you so much. Thanks so much for having me on. Awesome.
Related Episodes

The Mathematical Foundations of Intelligence [Professor Yi Ma]
Machine Learning Street Talk
1h 39m

Pedro Domingos: Tensor Logic Unifies AI Paradigms
Machine Learning Street Talk
1h 27m

Why Humans Are Still Powering AI [Sponsored]
Machine Learning Street Talk
24m

The Universal Hierarchy of Life - Prof. Chris Kempes [SFI]
Machine Learning Street Talk
40m

Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas
Machine Learning Street Talk
59m

The Secret Engine of AI - Prolific [Sponsored] (Sara Saab, Enzo Blindow)
Machine Learning Street Talk
1h 19m