

After LLMs: Spatial Intelligence and World Models — Fei-Fei Li & Justin Johnson, World Labs
Latent Space
Episode Description
Fei-Fei Li and Justin Johnson are cofounders of World Labs, who have recently launched Marble (https://marble.worldlabs.ai/), a new kind of generative “world model” that can create editable 3D environments from text, images, and other spatial inputs. Marble lets creators generate persistent 3D worlds, precisely control cameras, and interactively edit scenes, making it a powerful tool for games, film, VR, robotics simulation, and more. In this episode, Fei-Fei and Justin share how their journey from ImageNet and Stanford research led to World Labs, why spatial intelligence is the next frontier after LLMs, and how world models could change how machines see, understand, and build in 3D.

We discuss:
- The massive compute scaling from AlexNet to today, and why world models and spatial data are the most compelling way to “soak up” modern GPU clusters compared to language alone.
- What Marble actually is: a generative model of 3D worlds that turns text and images into editable scenes using Gaussian splats, supports precise camera control and recording, and runs interactively on phones, laptops, and VR headsets.
- Fei-Fei’s essay (https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence) on spatial intelligence as a form of intelligence distinct from language: from picking up a mug to inferring the 3D structure of DNA, and why language is a lossy, low-bandwidth channel for describing the rich 3D/4D world we live in.
- Whether current models “understand” physics or just fit patterns: the gap between predicting orbits and discovering F=ma, and how attaching physical properties to splats and distilling physics engines into neural networks could lead to genuine causal reasoning.
- The changing role of academia in AI, why Fei-Fei worries more about under-resourced universities than “open vs closed,” and how initiatives like national AI compute clouds and open benchmarks can rebalance the ecosystem.
- Why transformers are fundamentally set models, not sequence models, and how that perspective opens up new architectures for world models, especially as hardware shifts from single GPUs to massive distributed clusters.
- Real use cases for Marble today: previsualization and VFX, game environments, virtual production, interior and architectural design (including kitchen remodels), and generating synthetic simulation worlds for training embodied agents and robots.
- How spatial intelligence and language intelligence will work together in multimodal systems, and why the goal isn’t to throw away LLMs but to complement them with rich, embodied models of the world.
- Fei-Fei and Justin’s long-term vision for spatial intelligence: from creative tools for artists and game devs to broader applications in science, medicine, and real-world decision-making.
—
Fei-Fei Li
X: https://x.com/drfeifei
LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247

Justin Johnson
X: https://x.com/jcjohnss
LinkedIn: https://www.linkedin.com/in/justin-johnson-41b43664

Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/

Chapters
00:00:00 Introduction and the Fei-Fei Li & Justin Johnson Partnership
00:02:00 From ImageNet to World Models: The Evolution of Computer Vision
00:12:42 Dense Captioning and Early Vision-Language Work
00:19:57 Spatial Intelligence: Beyond Language Models
00:28:46 Introducing Marble: World Labs' First Spatial Intelligence Model
00:33:21 Gaussian Splats and the Technical Architecture of Marble
00:22:10 Physics, Dynamics, and the Future of World Models
00:41:09 Multimodality and the Interplay of Language and Space
00:37:37 Use Cases: From Creative Industries to Robotics and Embodied AI
00:56:58 Hiring, Research Directions, and the Future of World Labs
Full Transcript
I think the whole history of deep learning is in some sense the history of scaling up compute. When I graduated from grad school, I really thought the rest of my entire career would be towards solving that single problem, which is... A lot of AI as a field, as a discipline, is inspired by human intelligence. We thought we were the first people doing it. It turned out that Google was also simultaneously doing it. So Marble, like basically one way of looking at it, it's the system, it's a generative model of 3D worlds, right? So you can input things like text or image or multiple images, and it will generate for you a 3D world that kind of matches those inputs. So while Marble is simultaneously a world model that is building towards this vision of spatial intelligence, it was also very intentionally designed to be a thing that people could find useful today. And we're starting to see emerging use cases in gaming, in VFX, in film, where I think there's a lot of really interesting stuff that Marble can do today as a product, and then also set a foundation for the grand world models that we want to build going into the future. Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space. And we are so excited to be in the studio with Fei-Fei and Justin of World Labs. Welcome. We're excited too. I nearly said Marble. Yeah, thanks for having us. I think there's a lot of interest in world models and you've done a little bit of publicity around spatial intelligence and all that. I guess maybe one part of the story that this is a rare opportunity for you to tell is how you two came together to start building World Labs. That's very easy, because Justin was my former student. Yeah. So Justin came to my, you know, in my, the other hat I wear is a professor of computer science at Stanford. Justin joined my lab when? Which year? 2012. Actually, the quarter that I joined your lab was the same quarter that AlexNet came out. Yeah. Yeah. So Justin is my first. Were you involved in the whole announcement drama? No, no, not at all. But I was sort of watching all the ImageNet excitement around AlexNet that quarter. So he was one of my very best students. And then he went on to have a very successful early career as a professor at the University of Michigan, Ann Arbor, and then at Meta. And then, I think around, you know, more than two years ago for sure, I think both independently, both of us had been looking at the development of the large models and thinking about what's beyond language models. And this idea of building world models, spatial intelligence, really was natural for us. So we started talking and decided that we should just put all the eggs in one basket and focus on solving this problem, and started World Labs together. Yeah, pretty much. I mean, like after that, seeing that kind of ImageNet era during my PhD, I had the sense that the next sort of decade of computer vision was going to be about getting AI out of the data center and out into the world.
Um, so a lot of my interests post-PhD kind of shifted into 3D vision, a little bit more into computer graphics, more into generative modeling. And I thought I was kind of drifting away from my advisor post-PhD, but then when we reunited a couple years later, it turned out she was thinking of very similar things. So if you think about AlexNet, the core pieces of it were obviously ImageNet, the move to GPUs, and neural networks. How do you think about the AlexNet-equivalent model for world models? In a way it's an idea that has been out there, right? There's been, you know, Yann LeCun is maybe the biggest proponent, the most prominent of it. What have you seen in the last two years that made you say, hey, now's the time to do this? And what are maybe the things fundamentally that you want to build as far as data and kind of like maybe different types of algorithms or approaches to compute to make world models really come to life? Yeah, I think one is just there is a lot more data and compute generally available. I think the whole history of deep learning is in some sense the history of scaling up compute. And if we think about, you know, AlexNet required this jump from CPUs to GPUs, but even from AlexNet to today, we're getting about a thousand times more performance per card than we had in the AlexNet days. And now it's common to train models not just on one GPU, but on hundreds or thousands or tens of thousands or even more. So the amount of compute that we can marshal today on a single model is, you know, about a million-fold more than we could have even at the start of my PhD. So I think language was one of the really interesting things that started to work quite well the last couple of years. But as we think about moving towards visual data and spatial data and world data, you just need to process a lot more. And I think that's going to be a good way to soak up this new compute that's coming online more and more. Does the model of having a public challenge still work or should it be centralized inside of a lab? I think open science is still important. You know, AI, obviously, compared to the ImageNet and AlexNet time, has really evolved. That was such a niche computer science discipline. Now it's just like civilizational technology. But I'll give you an example. Recently, my Stanford lab just announced they opened a dataset and benchmark called Behavior, which is for benchmarking robotic learning in simulated environments. And that is a very clear effort in still keeping up this open science model of doing things, especially in academia. But I think it's important to recognize the ecosystem is a mixture, right? I think a lot of the very focused work in industry, some of it is more seeing the daylight in the form of a product rather than an open challenge per se. Yeah. And that's just a matter of the funding and the business model. You have to see some ROI from it. I think it's just a matter of the diversity of the ecosystem. Even during the so-called AlexNet and ImageNet time, I mean, there were closed models, there were proprietary models, there were open models. Or you think about iOS versus Android, right? They're different business models. I wouldn't say it's just a matter of funding per se. It's just how the market is. They're different plays. Yeah. But do you feel like you could redo ImageNet today with the commercial pressure that some of these labs have? I mean, to me, that's like the biggest question, right?
It's like, what can you open versus what should you keep inside? Like, you know, if I put myself in your shoes, right? You raise a lot of money, you're building all of this. If you had the best dataset for this, what incentives do you really have to publish it? And it feels like the people at the labs are getting more and more pulled in, and PhD students are getting pulled earlier and earlier into these labs. So I'm curious if you think there's an issue right now with how much money there is and how much pressure it puts on the more academic, open research space? Or if you feel like that's not really a concern? I do have concerns, but less about the pressure. It's more about the resourcing, and the imbalance in the resourcing of academia. This is a little bit of a different conversation from World Labs. You know, I have been, the past few years, advocating for resourcing the healthy ecosystem. As the founding director, co-director of Stanford's Institute for Human-Centered AI, Stanford HAI, I've been working with policymakers about resourcing public sector and academic AI work. We worked with the first Trump administration on this bill called the National AI Research Resource, the NAIRR bill, which is scoping out a national AI compute cloud as well as a data repository. And I also think that open source, open datasets continue to be an important part of the ecosystem. Like I said, right now in my Stanford lab, we are doing the open dataset, open benchmark on robotic learning called Behavior. And many of my colleagues are still doing that. I think that's part of the ecosystem. I think what the industry is doing, what some startups are doing, running fast with models, creating products, is also a good thing. For example, when Justin was a PhD student with me, none of the computer vision programs worked that well, right? We could write beautiful papers. Justin has some beautiful... I mean, actually, even before grad school, like, I wanted to do computer vision. And I reached out to a team at Google and, like, wanted to potentially go and try to do computer vision, like, out of undergrad. And they told me, like, what are you talking about? Like, you can't do that. Like, go do a PhD first and come back. What was the motivation that got you so interested? I had done some computer vision research during my undergrad with, actually, Fei-Fei's PhD advisor. There's a lineage here. Yeah, there's a lineage here. So I had done some computer vision even as an undergrad, and I thought it was really cool, and I wanted to keep doing it. So then I was sort of faced with this industry versus academia choice even coming out of undergrad that I think a lot of people in the research community are facing now. But to your question, I think the role of academia, especially in AI, has shifted quite a lot in the last decade. And it's not a bad thing. Um, in a sense it's because the technology has grown and emerged, right? Like five or 10 years ago, you really could train state-of-the-art models in the lab, even with just a couple of GPUs. But, you know, because that technology was so successful and scaled up so much, you can't train state-of-the-art models with a couple of GPUs anymore. And that's not a bad thing. It's a good thing. It means the technology actually worked. Um, but that means the expectations around what we should be doing as academics shift a little bit, and it shouldn't be about trying to train the biggest model and scaling up the biggest thing.
It should be about trying wacky ideas and new ideas and crazy ideas, most of which won't work. And I think there's a lot to be done there. If anything, I'm worried that too many people in academia are hyper-focused on this notion of trying to pretend like we can train the biggest models, or treating it as almost a vocational training program to then graduate and go to a big lab and be able to play with all the GPUs. I think there's just so much crazy stuff you can do around new algorithms, new architectures, new systems that, you know, there's a lot you can do as one person. And also, academia has a role to play in understanding the theoretical underpinnings of these large models. We still know so little about this. Or extend to the interdisciplinary, you know, what Justin calls wacky ideas. There's a lot of basic science ideas. There's a lot of blue sky problems. So I agree. I don't think the problem is open versus closed, productization versus open sourcing. I think the problem right now is that academia by itself is severely under-resourced, so that, you know, the researchers and the students do not have enough resources to try these ideas. Yeah. Just to nerd-snipe people, what's a wacky idea that comes to mind when you talk about wacky ideas? Oh, like I had this idea that I kept pitching to my students at Michigan, which is that I really like hardware, and I really like new kinds of hardware coming online. And in some sense, the emergence of the neural networks that we use today and transformers is really based around matrix multiplication, because matrix multiplication fits really well with GPUs. But if we think about how GPUs are going to scale, how hardware is likely to scale in the future, I don't think the current system that we have, like the GPU hardware design, is going to scale infinitely. And we start to see that even now, that the unit of compute is not the single device anymore. It's this whole cluster of devices. So if you imagine... Yeah, it's a whole node or a whole cluster. But the way we talk about neural networks is still as if they are a monolithic thing that could be coded on one GPU in PyTorch. But then in practice, they get distributed over thousands of devices. So just as transformers are based around matmul, and matmul is sort of the primitive that works really well on GPUs, as you imagine hardware scaling out, are there other primitives that make more sense for large-scale distributed systems that we could build our neural networks on? And I think it's possible that there could be drastically different architectures that fit with the next generation of hardware that's going to come 10 or 20 years down the line. And we could start imagining that today. It's really hard to make those kinds of bets because there's also the concept of the hardware lottery, where let's just say NVIDIA has won and we should just scale that out infinitely and write software to patch up any gaps we have in the mix, right? I mean, yes and no. If you look at the numbers, even going from Hopper to Blackwell, the performance per watt is about the same. They mostly make the number of transistors go up, and they make the chip size go up, and they make the power usage go up. But even from Hopper to Blackwell, we're kind of already seeing a scaling limit in terms of what is the performance per watt that we can get. So I think there is room to do something new.
And I don't know exactly what it is, and I don't think you can get it done in a three-month cycle as a startup. But I think that's the kind of idea that, if you sit down and sit with it for a couple of years, maybe you could come up with some breakthroughs. And I think that's the kind of long-range stuff that is a perfect match for academia. Coming back to the little bit of background and history, we have this sort of research note on the scene storytelling work that you did, or the neural image captioning that you did with Andrej. And I just wanted to hear you guys tell that story about, you know, you were sort of embarking on that for your PhD, and Fei-Fei, you having that reaction that you had. Yeah, so I think that line of work started between me and Andrej, and then Justin joined, right? So Andrej started his PhD. He and I were looking at what is beyond ImageNet object recognition. And at that time, you know, the convolutional neural network had proven some power in ImageNet tasks. So a ConvNet is a great way to represent images. In the meantime, I think in the language space, an early sequential model called the LSTM was also being experimented with. So Andrej and I were just talking about this long-term dream of mine, which is telling the story of images. I thought it would take 100 years to solve. When I graduated from grad school, I really thought the rest of my entire career would be towards solving that single problem, which is: given a picture or given a scene, tell the story in natural language. But things evolved so fast. When Andrej started, we were like, maybe by combining the representation of the convolutional neural network with the sequential language model of the LSTM, we might be able to learn through training to match captions with images. So that's when we started that line of work. And I don't know if it was 2014 or 2015. CVPR 2015 was the captioning paper. So it was our first paper where Andrej got it to work: you know, given an image, the image is represented with a ConvNet, the language model is the LSTM, and then we combine them and it's able to generate one sentence. And that was one of the first times. I think I wrote about it in my book. We thought we were the first people doing it. It turned out that Google at that time was also simultaneously doing it. And a reporter, it was John Markoff from The New York Times, was breaking the Google story. But he by accident heard about us. And then he realized that we had really independently gotten there at the same time. So he wrote the story of both the Google research as well as Andrej's and my research. But after that, I think Justin was already in the lab at that time. Yeah, yeah. I remember the group meeting where Andrej was presenting some of those results and explaining this new thing called LSTMs and RNNs that I had never heard of before. And I thought, wow, this is really amazing stuff. I want to work on that. So then he had the paper at CVPR 2015 on the first image captioning results. Then after that, we started working together. And first we did a paper actually just on language modeling back in 2015. ICLR 2015. Yeah, I should have stuck with language modeling. That turned out pretty lucrative in retrospect. But we did this language modeling paper together, me and Andrej, in 2015, where it was really cool.
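To make the recipe Fei-Fei describes concrete, here is a minimal sketch of a CNN-encoder-plus-LSTM-decoder captioner in PyTorch. It is illustrative only: the backbone, module names, and dimensions are assumptions for the sketch, not the actual 2015 model.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    """Minimal CNN encoder + LSTM decoder, in the spirit of 2015-era captioning."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: a ConvNet with its classifier head removed.
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        self.img_proj = nn.Linear(512, embed_dim)
        # LSTM decoder over word tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Encode the image to a single feature vector.
        feats = self.encoder(images).flatten(1)          # (B, 512)
        img_token = self.img_proj(feats).unsqueeze(1)    # (B, 1, E)
        # Condition the LSTM by prepending the image feature to the word embeddings.
        words = self.embed(captions)                     # (B, T, E)
        inputs = torch.cat([img_token, words], dim=1)    # (B, T+1, E)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                          # logits over the vocabulary

# Usage: train with cross-entropy against the caption tokens shifted by one position.
model = CaptionModel(vocab_size=10_000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10_000, (2, 12)))
```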
We trained these little RNN language models that could spit out a couple sentences at a time, and poked at them to try to understand what the neurons inside the neural network were doing. Yeah, I remember you guys were doing analysis on the different memory cells. Yeah, it was really cool. And even at that time, we had these results where you could look inside the LSTM and say, oh, this thing is reading code. So one of the datasets that we trained on for this one was, um, the Linux source code, right? Because the whole thing is, you know, open source and you could just download it. So we trained an RNN on this dataset. And then as the network is trying to predict the tokens there, then, you know, you try to correlate the kinds of predictions that it's making with the kind of internal structures in the RNN. And there we were able to find some correlations between, oh, like this unit in this layer of the LSTM fires when there's an open paren and then turns off when there's a close paren, and try to do some empirical stuff like that to figure it out. So that was pretty cool. And that was kind of like cutting out the CNN from this language modeling part and just looking at the language models in isolation. But then we wanted to extend the image captioning work. And remember, at that time, we even had a sense of space, because we felt like captioning does not capture different parts of the image. So I was talking to Justin and Andrej about, can we do what we ended up calling dense captioning, which is, you know, describe the scene in greater detail, especially different parts of the scene. So that's... Yeah. And so then we built this system. So then it was me and Andrej and Fei-Fei on a paper the following year at CVPR, so in 2016, where we built this system that did dense captioning. So you input a single image and then it would draw boxes around all the interesting stuff in the image and then write a short snippet about each of them. It's like, oh, it's a green water bottle on the table; it's a person wearing a black shirt. And this was a really complicated neural network, because it was built on a lot of advancements that had been made in object detection around that time, which was a major topic in computer vision for a long time. And it was actually one joint neural network that was, you know, learning to look at individual images; it actually had three different representations inside this network. One was the representation of the whole image, to kind of get the gestalt of what's going on. Then it would propose individual regions that it wants to focus on and represent each region independently. And then once you look at the region, then you need to spit out text for each region. So that was a pretty complicated neural network architecture. This was all pre-PyTorch. And does it do it in one pass? Yeah, yeah. So it was a single forward pass that did all of it. Not only was it doing it in one pass, you also optimized inference. You were doing it on a webcam, I remember. Yeah, yeah. So I had built this crazy real-time demo where I had the network running on a server at Stanford, and then a web front end that would stream from a webcam and send the image back to the server. The server would run the model and stream the predictions back.
So I was just walking around the lab with this laptop that would just show people this network in real time. Identification and labeling as well. Yeah. It was pretty impressive, because most of my graduate students would be satisfied if they can publish the paper, right? They package the research, put it in a paper. But Justin went a step further; he's like, I want to do this real-time web demo. Well, actually, I don't know if I told you this story, but there was a conference that year in Santiago, ICCV, it was ICCV 2015. And I had a paper at that conference for something different, but I had my laptop. I was walking around the conference with my laptop, showing everybody this real-time captioning demo. And the model was running on a server in California. So it was actually able to stream all the way from California down to Santiago. Well, the latency was terrible. It was like one FPS, but the fact that it worked at all was pretty amazing. I was going to briefly quip that, you know, maybe vision and language modeling are not that different. You know, DeepSeek OCR recently tried the crazy thing of, let's model text from pixels and just train on that. And it might be the future. I don't know. I don't know if you guys have any takes on whether language is actually necessary at all. I just wrote a whole manifesto. This is my segue into this. Yes. I think they are different. I do think the architecture of these generative models will share a lot of shareable components. But I think the deeply 3D, 4D spatial world has a level of structure that is fundamentally different from a purely generative signal that is one-dimensional. Yeah, I think there's something to be said for pixel maximalism, right? Like there's this notion that language is this different thing, but we see language with our eyes, and our eyes are just, you know, basically pixels, right? Like we've got sort of biological pixels in the back of our eyes that are processing these things. And, you know, we see text and we think of it as this discrete thing, but that really only exists in our minds. Like the physical manifestations of text and language in our world are, you know, physical objects that are printed on things in the world. And we see it with our eyes. Well, you can also think of sound, but even sound you can translate into a spectrogram, which is a 2D signal. Right. And then you actually lose something if you translate to these purely tokenized representations that we use in LLMs, right? Like you lose the font, you lose the line breaks, you lose sort of the 2D arrangement on the page. And for a lot of cases, for a lot of things, maybe that doesn't matter. But for some things it does. And I think pixels are this sort of more lossless representation of what's going on in the world. And in some ways, a more general representation that more matches what we humans see as we navigate the world. So, like, there's an efficiency argument to be made, like maybe it's not super efficient to, you know, render your text to an image and then feed that to a vision model. But that's exactly what it was, and it kind of worked. I think this ties into the whole world model question. One of my favorite papers that I saw this year was about using inductive bias to probe for world models. It was a Harvard paper where they fed a lot of orbital, uh, patterns into an LLM, and then they asked the LLM to predict the orbit of a planet around the sun.
And the orbits the model generated looked good. But then if you asked it to draw the force vectors, it would be all wacky. You know, it wouldn't actually follow the physics. So how do you think about what's embedded into the data that you get? And we can talk about maybe organizing it for 3D world models. What are the dimensions of information? There's the visual, but how much of the underlying hidden forces, so to speak, do you need to extract out of this data? And what are some of the challenges there? Yeah, I think there's different ways you could approach that problem. One is you could try to be explicit about it and say, oh, I want to, you know, measure all the forces and feed those as training data to your model. Right. Then you could sort of run a traditional physics simulation and, you know, then know all the forces in the scene and then use those as training data to train a model that's now going to hopefully predict those. Or you could hope that something emerges more latently, right? That you train on something end-to-end, on a more general problem, and then hope that somewhere, something in the internals of the model must learn to model something like physics in order to make the proper predictions. And those are kind of the two big paradigms that we have more generally. But there's no indication that that latent modeling will get you to a causal law of space and dynamics, right? That's where today's deep learning and human intelligence actually start to bifurcate, because fundamentally the deep learning is still fitting patterns. There you sort of get philosophical and you say that we're trying to fit patterns, too, but maybe we're trying to fit a broader array of patterns, over a longer time horizon, with a different reward function. But basically the paper you mentioned shows, you know, that problem: it learns to fit the specific patterns of orbits, but then it doesn't actually generalize in the way that you'd like. It doesn't have a sort of causal model of gravity. Right. Because even in Marble, you know, I was trying it and it generates these beautiful sceneries and there are arches in them. But does the model actually understand how, you know, the arch is actually bearing on the central, kind of like, keystone and, like, you know, the actual physical structure of it? And the other question is, does it matter that it understands it, as long as it always renders something that would fit the physical model that we imagine? If you use the word understand the way you understand, I'm pretty sure the model doesn't understand it. The model is learning from the data, learning from the pattern. Yeah, does it matter, especially for the use cases? It's a good question, right? Like for now, I don't think it matters, because it renders out what you need, assuming it's perfect. Yeah, I mean, it depends on the use case. Like if the use case is I want to generate sort of a backdrop for virtual film production or something like that, all you need is something that looks plausible. And in that case, probably it doesn't matter. But if you're going to use this to, you know, if you're an architect and you're going to use this to design a building that you're then going to go build in the real world, then, yeah, it does matter that you model the forces correctly, because you don't want the thing to break when you actually build it.
But even there, right, even if your model has the semantics in it, let's say, I still think the understanding of the signal or the output on the model's part and the understanding on the human's part are different things. But this gets, again, philosophical. Yeah, I mean, there's this trick with understanding, right? Like these models are a very different kind of intelligence than human intelligence. And human intelligence is interesting because, you know, I think that I understand things because I can introspect my own thought process to some extent. And then I believe that my thought process probably works similarly to other people's, so that when I observe someone else's behavior, I infer that their internal mental state is probably similar to my own internal mental state that I've observed. And therefore, I know that I understand things. So there I assume that you understand something. But these models are sort of like this alien form of intelligence, where they can do really interesting things, they can exhibit really interesting behavior, but whatever kind of internal, the equivalent of internal cognition or internal self-reflection that they have, if it exists at all, is totally different from what we do. It doesn't have the self-awareness. Right. But what that means is that when we observe seemingly interesting or intelligent behavior out of these systems, we can't necessarily infer other things about them, because their model of the world and the way they think is so different from us. So would you need two different models to do the visual one and the architectural generation? Do you think, eventually, there's not anything fundamental about the approach that you've taken on the model building, and it's more about scaling the model and the capabilities of it? Or is there something about being very visual that prohibits you from actually learning the physics behind this, so to speak, so that you could trust it to generate a CAD design that then is actually going to work in the real world? I think this is a matter of scaling data and bettering the model. I don't think there's anything fundamental that separates these two. Yeah, I would like it to be one model. But I think the big problem in deep learning in some sense is how do you get emergent capabilities beyond your training data? Are you going to get something that understands the forces, when it wasn't trained to predict the forces, but it's going to learn them implicitly, internally? And I think a lot of what we've seen in other large models is that a lot of this emergent behavior does happen at scale. And will that transfer to other modalities and other use cases and other tasks? I hope so. But that'll be a process that we need to play out over time and see. Is there a temptation to rely on physics engines that already exist out there, where, you know, basically the gaming industry has saved you a lot of this work? Or do we have to reinvent things for some fundamental mismatch? I think that's sort of like climbing the ladder of technology, right? Like in some sense, the reason that you want to build these things at all is because maybe traditional physics engines don't work in some situations. If a physics engine was perfect, we would have sort of no need to build models, because the problem would have already been solved. So in some sense, the reason why we want to do this is because classical physics engines don't solve problems in the generality that we want.
But that doesn't mean we need to throw them away and start everything from scratch, right? We can use traditional physics engines to generate data that we then train our models on. And then you're sort of distilling the physics engine into the weights of the neural network that you're training. I think that's a lot of what, if you compare the work of other labs, people are speculating that Sora had a little bit of that. Genie 3 had a bit of that. Genie 3 is explicitly like a video game; you have controls to walk around in it. And I always think it's really funny how the things that we invent for fun actually do eventually make it into serious work. The whole AI revolution was started by graphics chips, partially. Misusing the GPU, uh, from generating a lot of triangles to generating a lot of everything else, basically. Yeah. We touched on Marble a little bit. I think you guys chose Marble as, I kind of feel like, sort of a little bit of a coming-out-of-stealth moment, if you can call it that. Yeah. Uh, maybe we can get a concise explanation from you on what people should take away, because everyone here can try Marble, but I don't think they might be able to link it to the differences between what your vision is versus other, I guess, generative worlds they may have seen from other labs. So Marble is a glimpse into our model, right? We are a spatial intelligence model company. We believe spatial intelligence is the next frontier. In order to make spatially intelligent models, the model has to be very powerful in terms of its ability to, you know, understand, reason, and generate worlds in a very multimodal fashion, as well as allow the level of interactivity that we eventually hope to be as complex as how humans can interact with the world. So that's the grand vision of spatial intelligence, as well as the kind of world models we see. Marble is the first glimpse into that. It's the first part of that journey. It's the first-in-class model in the world that generates 3D worlds at this level of fidelity that is in the hands of the public. It's the starting point, right? We actually wrote this tech blog. Justin spent a lot of time writing that tech blog. I don't know if you had time to browse it. I mean, Justin really broke it down into what are the multimodal inputs of Marble, what is the kind of editability, which, you know, allows the user to be interactive with the model, and what are the kind of outputs we can have. Yeah. So Marble, basically one way of looking at it, it's the system, it's a generative model of 3D worlds, right? So you can input things like text or image or multiple images, and it will generate for you a 3D world that kind of matches those inputs. And it's also interactive in the sense that you can interactively edit scenes. Like I could generate this scene and then say, I don't like the water bottle, make it blue instead; take out the table; change these microphones around. And then you can generate new worlds based on these interactive edits and export in a variety of formats. And with Marble, we were actually trying to do sort of two things simultaneously. And I think we managed to pull off the balance pretty well. One is actually build a model that goes towards the grand vision of spatial intelligence. And models need to be able to understand lots of different kinds of inputs, need to be able to model worlds in a lot of situations, need to be able to model counterfactuals of how they could change over time.
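Circling back to the physics-engine distillation idea from a moment ago: here is a minimal sketch of what that looks like in practice, assuming a toy one-dimensional "physics engine" (a bouncing point mass) as the teacher. Everything here, the simulator, the network size, and the training loop, is an illustrative assumption rather than anything World Labs has described.

```python
import torch
import torch.nn as nn

def simulate_step(state, dt=0.01, g=-9.8):
    """Toy 'physics engine': a point mass under gravity, bouncing off the floor."""
    pos, vel = state[..., 0], state[..., 1]
    vel = vel + g * dt
    pos = pos + vel * dt
    bounce = pos < 0.0
    pos = torch.where(bounce, -pos, pos)
    vel = torch.where(bounce, -0.8 * vel, vel)  # lose some energy on impact
    return torch.stack([pos, vel], dim=-1)

# Generate training data by rolling out the classical simulator.
states = torch.rand(4096, 2) * torch.tensor([5.0, 2.0])   # random (height, velocity) pairs
targets = simulate_step(states)

# Distill the simulator into a small MLP that predicts the next state.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(states), targets)
    loss.backward()
    opt.step()
```

The trained network then stands in for the simulator on states it was never explicitly shown, which is the sense in which the engine gets "distilled" into its weights.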
So we wanted to start to build models that have these capabilities. And Marble today does already have hints of all of these. But at the same time, we're a company, we're a business. We were really trying not to have this be a science project, but also build a product that would be useful to people in the real world today. So while Marble is simultaneously a world model that is building towards this vision of spatial intelligence, it was also very intentionally designed to be a thing that people could find useful today. And we're starting to see emerging use cases in gaming, in VFX, in film, where I think there's a lot of really interesting stuff that Marble can do today as a product, and then also set a foundation for the grand world models that we want to build going into the future. Yeah, I noticed one tool that was very interesting: you can record your scene inside. Yes. It's very important. The ability to record means very precise control of camera placement. And in order to have precise camera placement, it means you have to have a sense of 3D space. Otherwise, you don't know how to orient your camera, right? And how to move your camera. So that is a natural consequence of this kind of model. And this is just one of the examples. Yeah, I find when I play with video generative models, I'm having to learn the language of being a director, because I have to call the moves out. Like pan, you know, like dolly out. You cannot say pan 63 degrees to the north, right? You just don't have that control. Whereas in Marble, you have precise control in terms of placing a camera. Yeah, I think that's one of the first things people need to understand. It's like you're not generating frame by frame, which is what a lot of the other models do. You know, people understand that an LLM generates one token at a time. What are the atomic units here? There's kind of, you know, the meshes, the splats, the voxels; there are a lot of pieces in a 3D world. What should be the mental model that people have of your generations? Yeah, I think there's what exists today and what could exist in the future. So what exists today is the model natively outputs splats. So Gaussian splats are these, you know, each one is a tiny, tiny particle that's semi-transparent, has a position and orientation in 3D space. And the scene is built up from a large number of these Gaussian splats. And Gaussian splats are really cool because you can render them in real time really efficiently, so you can render them on your iPhone, render everything. And that's how we get that sort of precise camera control, because the splats can be rendered in real time on pretty much any client-side device that we want. So for a lot of the scenes that we're generating today, that kind of atomic unit is the individual splat. But I don't think that's fundamental. I could imagine other approaches in the future that would be interesting. So there are other approaches that even we've worked on at World Labs, like our recent RTFM model, that does generate frames one at a time. And there the atomic unit is a frame, generated one at a time as the user interacts with the system. Or you could imagine other architectures in the future where the atomic unit is a token, where that token now represents some chunk of the 3D world. And I think there's a lot of different architectures that we can experiment with here over time. I do want to press on, double click on this a little bit.
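As a rough mental model of that atomic unit, here is approximately what one Gaussian splat carries. The field names and types below are illustrative of splat representations in general, not Marble's actual format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    """One semi-transparent particle; a scene is just a large collection of these."""
    position: np.ndarray   # (3,) center in world space
    rotation: np.ndarray   # (4,) quaternion orienting the ellipsoid
    scale: np.ndarray      # (3,) per-axis extent of the Gaussian
    color: np.ndarray      # (3,) RGB (real systems often store spherical harmonics instead)
    opacity: float         # how transparent the particle is

# A "world" is on the order of hundreds of thousands to millions of these,
# sorted and alpha-blended every frame, which is what makes them fast enough
# to rasterize on phones and VR headsets.
scene = [
    GaussianSplat(
        position=np.random.randn(3),
        rotation=np.array([1.0, 0.0, 0.0, 0.0]),
        scale=np.full(3, 0.05),
        color=np.random.rand(3),
        opacity=0.9,
    )
    for _ in range(1000)
]
```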
My version of what Alessio was going to say was: what is the fundamental data structure of a world model? Because exactly like you said, it's either a Gaussian splat or it's the frame or what have you. You also, in the previous statements, focused a lot on the physics and the forces, which is something over time, loosely speaking. I don't see that in Marble. I presume it's not there yet. Maybe if there were a Marble 2, you would have movement. Or is there a modification to Gaussian splats that makes sense? Or would it be something completely different? Yeah, I think there are a couple of modifications that make sense. And there's actually a lot of interesting ways to integrate things here, which is another nice part of working in this space. There's actually been a lot of research work on this. Like when you talk about wacky ideas, there's actually been a lot of really interesting academic work on different ways to imbue physics. You can also do wacky ideas in industry. All right. But then, Gaussian splats are themselves little particles. There have been a lot of approaches where you basically attach physical properties to those splats and say that each one has a mass, or maybe you treat each one as being coupled with some kind of virtual spring to nearby neighbors. And now you can start to do sort of physics simulation on top of splats. So one kind of avenue for adding physics or dynamics or interaction to these things would be to predict physical properties associated with each of your splat particles and then simulate those downstream, either using classical physics or something learned. Or, the beauty of working in 3D is things compose and you can inject logic in different places. So one way is sort of: we're generating a 3D scene, we're going to predict 3D properties of everything in the scene, then we use a classical physics engine to simulate the interaction. Or you could do something where, as a result of a user action, the model is now going to regenerate the entire scene in splats or some other representation. And that could potentially be a lot more general, because then you're not bound to whatever sort of, you know, physical properties you know how to model already. But that's also a lot more computationally demanding, because then you need to regenerate the whole scene in response to user actions. But I think this is a really interesting area for future work and for, uh, adding on to a potential Marble 2, as you say. Yeah. And there's opportunity for dynamics, right? What's the state of, like, splat density, I guess? Can we render enough to have very high resolution when we zoom in? Are we limited by the amount that you can generate, the amount that we can render? Like, how are these going to get super high fidelity, so to speak? You have some limitations, but it depends on your target use case. So one of the big constraints that we have on our scenes is we wanted things to render cleanly on mobile. And we wanted things to render cleanly in VR headsets. Those devices have a lot less compute than you have in a lot of other situations. And if you want to get a splat file to render at high resolution, at like 30 to 60 FPS, on an iPhone from four years ago, then you are a bit limited in the number of splats that you can handle.
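Here is a hedged sketch of the first avenue Justin describes, attaching physical properties to splats and coupling neighbors with virtual springs. The integrator, constants, and data layout are assumptions for illustration, not how Marble or any particular published splat-physics system actually does it.

```python
import numpy as np

def spring_step(positions, velocities, neighbors, rest_lengths, mass=1.0,
                k=50.0, damping=0.98, dt=1e-3, gravity=np.array([0.0, -9.8, 0.0])):
    """One explicit-Euler step for splat centers coupled by virtual springs.

    positions, velocities: (N, 3) arrays, one row per splat center.
    neighbors: (M, 2) integer pairs of coupled splats.
    rest_lengths: (M,) spring rest lengths.
    """
    forces = np.tile(gravity * mass, (positions.shape[0], 1))
    i, j = neighbors[:, 0], neighbors[:, 1]
    delta = positions[j] - positions[i]                       # (M, 3)
    dist = np.linalg.norm(delta, axis=1, keepdims=True) + 1e-9
    f = k * (dist - rest_lengths[:, None]) * delta / dist     # Hooke's law per spring
    np.add.at(forces, i, f)                                   # accumulate equal and
    np.add.at(forces, j, -f)                                  # opposite spring forces
    velocities = damping * (velocities + dt * forces / mass)
    positions = positions + dt * velocities
    return positions, velocities

# Toy usage: a chain of 100 "splats", each coupled to its next neighbor.
pos = np.cumsum(np.full((100, 3), 0.1), axis=0)
vel = np.zeros_like(pos)
pairs = np.stack([np.arange(99), np.arange(1, 100)], axis=1)
rest = np.full(99, np.linalg.norm(pos[1] - pos[0]))
for _ in range(100):
    pos, vel = spring_step(pos, vel, pairs, rest)
```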
But if you're allowed to work on a recent, like even this year's, iPhone, or a recent MacBook, or even if you have a local GPU, or if you don't need that 60 FPS at 1080p, then you can relax the constraints and get away with more splats, and that lets you get higher resolution in your scenes. One use case I was expecting but didn't hear from you was embodied use cases. Are you just focusing on virtual for now? If you go to the World Labs homepage, there is a particular page called Marble Labs. There we showcase different use cases, and we actually organize them into more visual effects use cases or gaming use cases, as well as simulation use cases. And in that, we actually show this is a technology that can help a lot in robotic training, right? This goes back to what I was talking about earlier: speaking of data starvation, robotic training really lacks data. You know, high-fidelity, real-world data is absolutely critical, but you're just not going to get a ton of that. Of course, the other extreme is just purely internet video data, but then you lack a lot of the controllability that you want to train your embodied agents with. So simulation and synthetic data is actually a very important middle ground for that. I've been working in this space for many years. One of the biggest pain points is where do you get the synthetic simulated data. You have to curate assets and compose these complex situations. And in robotics, you want a lot of different states. You want the embodied agent to interact in the synthetic environment. Marble actually has real potential for helping to generate these synthetic simulated worlds for embodied agent training. Obviously, that's on the homepage, it'll be there. I was just trying to make the link to, as you said, you also have to build a business model. The market for robotics obviously is very huge. Maybe you don't need that, or maybe we need to build up and solve the virtual worlds first before we go to embodied. And obviously that is to be decided. I do ask that, uh, because everyone else is going straight there, right? Not everyone else, but there is an excitement, I would say. But, you know, I think the world is big enough to have different approaches. Yeah. I mean, and we always view this as a pretty horizontal technology that should be able to touch a lot of different industries over time. And, you know, Marble is a little bit more focused on creative industries for now, but I think the technology that powers it should be applicable to a lot of different things over time. And robotics is one that, you know, is maybe going to happen sooner than later. Also design, right? It's very adjacent to creative. Oh yeah, definitely. Like I think it's like the architecture stuff. Yes. Okay. Yeah. I mean, I was joking online. I posted this video on Slack of like, oh, who wants to use Marble to plan your next kitchen remodel? It actually works great for this already. Just take two images of your kitchen, reconstruct it in Marble, and then use the editing features to see what that space would look like if you changed the countertops or changed the floors or changed the cabinets. And this is something that, you know, we didn't necessarily build anything specific for this use case. But because it's a powerful horizontal technology, you kind of get these emergent use cases that just fall out of the model.
We have early beta users, using an API key, who are already building for the interior design use case. I just did my garage. I should have known about this. I know. Next time you remodel, we can be of help. Well, kitchen is next, I'm sure. Yeah. Yeah, I'm curious about the whole spatial intelligence space. I think we should dig more into that. One, how do you define it? And what are the gaps between the traditional intelligence that people might think about with LLMs, when Dario says we have a data center full of Einsteins? That's like traditional intelligence. It's not spatial intelligence. What is required to be spatially intelligent? First of all, I don't understand that sentence, a data center full of Einsteins. I just don't understand that. It's an analogy. Well, a lot of AI as a field, as a discipline, is inspired by human intelligence, right? Because we are the most intelligent animal we know in the universe, for now. And if you look at human intelligence, it's very multi-intelligent, right? There is a psychologist, I think his name is Howard Gardner, who, I think in the 1960s, literally used the term multiple intelligences to describe human intelligence. And there is linguistic intelligence, there's spatial intelligence, there is logical intelligence and emotional intelligence. So for me, when I think about spatial intelligence, I see it as complementary to language intelligence. So I personally would not say it's spatial versus traditional, because I don't know what traditional means, what does that mean? I do think spatial is complementary to linguistic. And how do we define spatial intelligence? It's the capability that allows you to reason, understand, move, and interact in space. And I use this example of the deduction of the DNA structure, right? And of course I'm simplifying this story, but a lot of that had to do with the spatial reasoning about the molecules and the chemical bonds in a 3D space to eventually conjecture a double helix. And that ability that humans, or Francis Crick and James Watson, had, it is very, very hard to reduce that process into pure language. And that's a pinnacle of a civilizational moment. But every day, right, I'm here trying to grasp a mug. This whole process of seeing the mug, seeing the context where it is, seeing my own hand, the opening of my hand that geometrically would match the mug, and touching the right affordance points, all this is deeply, deeply spatial. It's very hard. I'm trying to use language to narrate it, but on the other hand, that narrative language itself cannot get you to pick up a mug. Yeah, bandwidth constraint. Yes. I did some math recently on, like, if you just spoke all day, every day, for 24 hours a day, how many tokens do you generate? At the average speaking rate of like 150 words per minute, it roughly rounds up to about 215,000 tokens per day. And the world that you live in is so much higher bandwidth than that. Well, I think that is true. But if I think about Sir Isaac Newton, right? Like, you have things like gravity that at the time had not been formalized in language, that people inherently spatially understand, that things fall, right? But then it's helpful to formalize that in some way. Or, like, you know, all these different rules where we use language to really capture something that empirically and spatially you can also understand, but it's easier to describe in a way.
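For the curious, the back-of-the-envelope token estimate above works out roughly as follows, assuming about one token per spoken word, which is the loose part of the estimate:

```python
words_per_minute = 150
minutes_per_day = 60 * 24
tokens_per_word = 1.0  # rough assumption; real tokenizers average a bit more than this

tokens_per_day = words_per_minute * minutes_per_day * tokens_per_word
print(tokens_per_day)  # 216000.0, in the same ballpark as the ~215k figure quoted above
```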
So I'm curious about, like, the interplay of spatial and linguistic intelligence, which is like, okay, some rules are easier to write in language for the spatial intelligence to then understand, but you cannot, you know, write: put your hand like this and put it down this amount. So I'm always curious about how they leverage each other together. I mean, if anything, like the example of Newton: Newton only thinks to write down those laws because he's had a lot of embodied experience in the world. Right. Yeah, exactly. And actually it's useful to distinguish between the theory building that you're mentioning versus the embodied, like the daily experience of being embedded in the three-dimensional world. Right. So to me, spatial intelligence is sort of encapsulating that embodied experience of being there in 3D space, moving through it, seeing it, acting in it. And as Fei-Fei said, you can narrate those things, but it's a very lossy channel. It's just like the notion of, you know, being in the world and doing things in it is a very different modality from trying to describe it. But because we as humans are animals who have evolved interacting in space all the time, we don't even think that that's a hard thing, right? And then we sort of naturally leap to language and then theory building as mechanisms to abstract above that sort of native spatial understanding. And in some sense, LLMs have just jumped all the way to those highest forms of abstracted reasoning, which is very interesting and very useful. But spatial intelligence is almost like opening up that black box again and saying maybe we've lost something by going straight to that fully abstracted form of language and reasoning and communication. You know, it's funny, as a vision scientist, right, I always find that vision is underappreciated because it's effortless for humans. You open your eyes as a baby, you start to see your world. We're somehow born with it. Well, almost born with it. But you have to put effort into learning language, including learning how to write, how to do grammar, how to express. And that makes it feel hard. Whereas something that nature spent way more time actually optimizing, which is perception and spatial intelligence, is underappreciated by humans. Is there proof that we are born with it? You said almost born. So it sounds like we actually do learn after we're born. When we are born, our visual acuity is lower, and our perceptual ability does increase. But most humans are born with the ability to see. And most humans are born with the ability to link perception with motor movements, right? I mean, the motor movement itself takes a while to refine. And then animals are incredible, right? Like I was just in Africa earlier this summer; these little animals, they're born and within minutes they have to get going. Otherwise, you know, the lions will get them. And in nature, you know, it took 540 million years to optimize perception and spatial intelligence. And language, the most generous estimation of language development is probably half a million years. Wow. That's longer than I would have guessed. I'm being very generous. Yeah. Yeah. Yeah, no, I was, you know, sort of going through your book, and I was realizing that one of the interesting links to something that we covered on the podcast is language model benchmarks.
And how the Winograd schemas actually put in all these sorts of physical impossibilities that require spatial intelligence, right? Like, A is on top of B, therefore A cannot fall through B, is obvious to us. But to a language model, it could happen. I don't know. Maybe it's like a part of the, you know, the next-token prediction. And that's sort of what I mean about unwrapping this abstraction, right? Like if your whole model of the world is just saying sequences of words after each other, it's really kind of hard to, like, why not? It's actually unfair. Right. But then the reason it's obvious to us is because we are internally mapping it back to some three-dimensional representation of the world that we're familiar with. The question is, I guess, like, how hard is it? You know, how long is it going to take us to distill, I use the word distill, I don't know if you agree with that, to distill from your world models into a language model? Because we do want our models to have spatial intelligence, right? And do we have to throw the language model out completely in order to do that? No. No, right? Yeah, I don't think so. I think they're multimodal. I mean, even our model, Marble, today takes language as an input. Right. Right. So it's deeply multimodal. And I think in many use cases, these models will work together. Maybe one day we'll have a universal model. I mean, even if you do, there's sort of a pragmatic thing where people use language and people want to interact with systems using language. Even pragmatically, it's useful to build systems and build products and build models that let people talk to them. So I don't see that going away. I think there's a sort of intellectual curiosity of saying, like, intellectually, how much could you build a model that only uses vision or only uses spatial intelligence? I don't know that that would be practically useful, but I think it'd be an interesting intellectual or academic exercise to see how far you could push that. I think, I mean, not to bring it back to physics, but I'm curious: if you had a highly precise world model and you didn't give it any notion of our current understanding of the standard model of physics, how much of it would it be able to come up with and recreate from scratch, and what level of language understanding would it need? Because we have so many notations that we created and kind of use, but maybe it would come up with a very different model of it and still be accurate. And I wonder how much we're kind of limited by how people say these models always need to be like humans because the world is built for humans. And in a way, the way we build language constrains some of the outputs that we can get from these other modalities as well. So I'm super excited to follow your work. Yeah, I mean, there's another angle. I mean, you actually don't even need to be doing AI to answer that question. You could discover aliens and see what kind of physics they have. Right. Right. And they might have a totally different one. Well, Fei-Fei said we are, so far, the smartest animal in the universe. So far. So, I mean, that is a really interesting question, right? Like, is our knowledge of the universe and our understanding of physics constrained in some way by our own cognition or by the path dependence of our own technological evolution?
One way to get at whether our physics is path-dependent would be an experiment: if we were to rerun human civilization, would we come up with the same physics in the same order? I don't think that's a very practical experiment to run.

You know, one experiment I wonder if people could run: we have plenty of astrophysical data now on planetary and celestial body movements. Just feed the data into a model and see if Newtonian law emerges. My guess is it probably won't. The abstraction level of Newtonian law is different from what these LLMs represent. So I wouldn't be surprised if, given enough celestial movement data, an LLM could predict pretty accurate movement trajectories. Say I invent a planet orbiting a star and give you enough data; the model would tell you where it is on day one, where it is on day two. I wouldn't be surprised. But F=ma, or action equals reaction, is a whole different abstraction level. That's beyond today's LLMs.

Okay, what model would you need for it not to end up geocentric? Because if I'm training just on visual data, it makes sense to think the sun rotates around the earth, right? But obviously that's not the case. So how would it learn that? I'm curious about all these forces we talk about. Sometimes maybe you don't need them, because as long as it looks right, it's right. But as you make the jump to using these models for higher-level tasks, how much can we rely on them?

I think you need kind of a different learning paradigm, right? There's a bit of conflation happening here between LLMs, language, and symbols on one side and human theory building and human physics on the other. They're very different, because unlike an LLM, the human objective function is to understand the world and thrive in your life. And the way you do that is: sometimes you observe data, you think about it, you try to do something in the world and it doesn't match your expectations, and then you update your understanding of the world online. People do this all the time. I think my keys are downstairs, so I go downstairs and look for them and don't see them; oh no, they're actually up in my bedroom. Because we're constantly interacting with the world, we're constantly building theories about what's happening around us and then falsifying them or adding evidence to them. That kind of process, writ large and scaled up, is what gives us F=ma and Newtonian physics. And I think that's a little orthogonal to the modality of the model we're training, whether it's language or spatial.

The way I'd put it is that this is almost more efficient learning: you have a hypothesis, here are the different possible worlds granted by my available data, and then you do experiments to eliminate the worlds that are not possible and resolve to the one that's right. To me, that's also how I have theory of mind: I have a few hypotheses about what you're thinking, and I try to create actions to resolve them or check my intuition about what you're thinking. And obviously, LLMs don't do any of these.
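As a concrete aside on the celestial-data thought experiment above, here is a minimal sketch of my own (not an experiment anyone on the episode has run): a pure pattern-fitting predictor, trained only on simulated orbit positions, extrapolates the trajectory day by day while containing nothing that resembles F=ma or gravity.

```python
import numpy as np

# Simulate a circular orbit: the planet's (x, y) position sampled once per "day".
days = np.arange(2000)
theta = 2 * np.pi * days / 365.0
orbit = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # shape (2000, 2)

# Training pairs: the previous k positions -> the next position.
k = 3
X = np.stack([orbit[i:i + k].ravel() for i in range(len(orbit) - k)])
y = orbit[k:]

# Least-squares linear predictor: pure pattern fitting, no gravity, no F=ma.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the predictor forward for 30 "days" past the training data.
window = orbit[-k:].ravel()
preds = []
for _ in range(30):
    nxt = window @ W
    preds.append(nxt)
    window = np.concatenate([window[2:], nxt])

true_next = np.array([np.cos(2 * np.pi * 2000 / 365.0), np.sin(2 * np.pi * 2000 / 365.0)])
print("predicted day 2000:", preds[0])
print("true day 2000:     ", true_next)
```

The linear predictor extrapolates this orbit essentially perfectly, which illustrates the distinction being drawn: accurate day-by-day trajectory prediction neither requires nor produces the higher-level abstraction of a physical law.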
A theory of mind will possibly also extend into emotional intelligence, which today's AI is really not touching at all. And we really need it; people are starting to depend on these things, probably too much, but that's a whole other topic of debate.

I do have to ask, because a lot of people have sent this to us: how much do we have to get rid of? Is sequence-to-sequence modeling out the window? Is attention out the window? How much are we re-questioning everything?

I think you stick with stuff that works, right? So attention is still there. You don't need to fix things that aren't broken, and there are a lot of hard problems in the world to solve, so let's focus on one at a time. It is pretty interesting to think about new architectures, new paradigms, or drastically different ways to learn, but you don't need to throw away everything just because you're working on new modalities.

On sequence-to-sequence, though, I think in world models we are going to see algorithms or architectures beyond sequence-to-sequence.

Oh, but here I actually think there's a little bit of technological confusion, and transformers already solved that for us. Transformers are actually not a model of sequences; a transformer is natively a model of sets, and that's very powerful. A lot of transformer practice grew out of earlier architectures based around recurrent neural networks, and RNNs really do have a built-in architectural bias: they model one-dimensional sequences. But transformers are just models of sets, and those sets could be 1D sequences or they could be other things.

Do you literally mean set theory?

Yeah. A transformer is not a model of a sequence of tokens; it's a model of a set of tokens. In the standard transformer architecture, the only thing that injects order, the only thing that differentiates the order of the tokens, is the positional embedding you give them. If you choose a 1D positional embedding, that's the only mechanism the model has to know it's dealing with a 1D sequence. All the operators inside a transformer block are either token-wise (you have an FFN, QKV projections, per-token normalization, all of which happen independently per token) or they are interactions between tokens through the attention mechanism. And attention is permutation-equivariant: if I permute my tokens, the attention operator produces a permuted output in exactly the same way. So it's natively an architecture over sets of tokens.

Literally, in the mathematical sense. Yeah.
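To make the set-versus-sequence point concrete, here is a minimal sketch using standard PyTorch components (my own illustration, not World Labs code): a transformer encoder layer with no positional embedding is permutation-equivariant, so shuffling the input tokens simply shuffles the outputs in the same way.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n = 32, 6   # token dimension, number of tokens

# A standard transformer block; dropout off and eval() so it is deterministic.
block = nn.TransformerEncoderLayer(
    d_model=d, nhead=4, dim_feedforward=64, dropout=0.0, batch_first=True
).eval()

tokens = torch.randn(1, n, d)      # a "set" of n tokens, no positional embedding added
perm = torch.randperm(n)           # an arbitrary shuffling of that set

with torch.no_grad():
    out = block(tokens)
    out_perm = block(tokens[:, perm])

# Permuting the inputs just permutes the outputs the same way: the block is a set model.
print(torch.allclose(out[:, perm], out_perm, atol=1e-5))   # True
```

Order only enters once a positional embedding is added to the tokens, exactly as described above.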
I know we're out of time, but we want to give you the floor for a call to action: who would enjoy working at World Labs and what kind of people should apply, what research people outside World Labs could do that would be helpful to you, or anything else on your mind.

I do think it's a very exciting time to be looking beyond just language models and thinking about the boundless possibilities of spatial intelligence. So we are hungry for talent, ranging from very deep researchers thinking about the problems Justin just described, like training large world models, to good engineers building systems, from training optimization to inference to product. We're also hungry for good product thinkers, go-to-market, and business talent. We are hungry for talent, especially now that we have exposed the model to the world through Marble. I think we have a great opportunity to work with an even bigger pool of talent, to solve the model problem as well as to deliver the best product to the world.

Yeah, and I'm also excited for people to try Marble and do a lot of cool stuff with it. It has a lot of really cool capabilities and features that fit together nicely.

In the car coming here, Justin and I were saying people have not totally discovered that yet. Okay, it's only been 24 hours. People have not yet discovered some of the advanced editing mode. Turn on the advanced mode and you can, like Justin said, change the color of a bottle, change your floor, change the trees.

Well, I actually tried to get there, but when it says create, it just makes me create a completely different world.

You need to click on the advanced mode. It's a UI issue; we can improve our UI/UX, but remember to click on it.

Yeah, we need to hire people to work on the product. But one thing that came through clearly is that you're also looking for intellectual fearlessness, which is something I think you both hold as a principle.

Yeah, we are literally the first people trying this, on both the model side and the product side.

Thank you so much for joining us.

Thank you, guys. Thanks for having us.