The AI Podcast (NVIDIA)

AI in 2025: From Agents to Factories - Ep. 282

The AI Podcast (NVIDIA) • NVIDIA

Wednesday, December 10, 2025 • 29m

What You'll Learn

  • Agentic AI is an evolution from simple chatbots to adaptive partners and fully autonomous agents that can make decisions on their own
  • AI factories provide the infrastructure to support enterprise-scale AI systems, with the ability to bring GPU compute to the data instead of moving data to the compute
  • Sovereign AI factories enable data sovereignty by keeping sensitive data on-premises and allowing for customization of AI models
  • AI is helping reduce physician fatigue and burnout in healthcare by assisting with tasks and decision-making
  • Hippocratic AI uses a constellation architecture with multiple AI models to constantly double-check each other for safety and reliability

Episode Chapters

1. Introduction: Overview of the key topics and advancements in AI discussed in the episode
2. The Rise of Agentic AI: Explanation of the evolution of AI agents from simple chatbots to autonomous systems
3. AI Factories and Data Sovereignty: Discussion of the importance of compute infrastructure and data sovereignty in enterprise-scale AI
4. AI in Healthcare: Exploration of how AI is being used to reduce physician burnout and improve patient safety in healthcare

AI Summary

This episode of the NVIDIA AI Podcast explores the advancements in AI in 2025, focusing on the rise of agentic AI, AI factories, and the real-world impact of AI in healthcare. It discusses the evolution of AI agents from simple chatbots to autonomous systems, the importance of data and compute infrastructure to support enterprise-scale AI, and how open models and sovereign AI factories are enabling customization and data sovereignty. The episode also highlights how AI is helping reduce physician burnout and improve patient safety in healthcare.

Key Points

1. Agentic AI is an evolution from simple chatbots to adaptive partners and fully autonomous agents that can make decisions on their own
2. AI factories provide the infrastructure to support enterprise-scale AI systems, with the ability to bring GPU compute to the data instead of moving data to the compute
3. Sovereign AI factories enable data sovereignty by keeping sensitive data on-premises and allowing for customization of AI models
4. AI is helping reduce physician fatigue and burnout in healthcare by assisting with tasks and decision-making
5. Hippocratic AI uses a constellation architecture with multiple AI models to constantly double-check each other for safety and reliability

Topics Discussed

Agentic AI • AI factories • Data sovereignty • AI in healthcare • AI safety

Frequently Asked Questions

What is "AI in 2025: From Agents to Factories - Ep. 282" about?

This episode of the NVIDIA AI Podcast explores the advancements in AI in 2025, focusing on the rise of agentic AI, AI factories, and the real-world impact of AI in healthcare. It discusses the evolution of AI agents from simple chatbots to autonomous systems, the importance of data and compute infrastructure to support enterprise-scale AI, and how open models and sovereign AI factories are enabling customization and data sovereignty. The episode also highlights how AI is helping reduce physician burnout and improve patient safety in healthcare.

What topics are discussed in this episode?

This episode covers the following topics: Agentic AI, AI factories, Data sovereignty, AI in healthcare, AI safety.

What is key insight #1 from this episode?

Agentic AI is an evolution from simple chatbots to adaptive partners and fully autonomous agents that can make decisions on their own

What is key insight #2 from this episode?

AI factories provide the infrastructure to support enterprise-scale AI systems, with the ability to bring GPU compute to the data instead of moving data to the compute

What is key insight #3 from this episode?

Sovereign AI factories enable data sovereignty by keeping sensitive data on-premises and allowing for customization of AI models

What is key insight #4 from this episode?

AI is helping reduce physician fatigue and burnout in healthcare by assisting with tasks and decision-making

Who should listen to this episode?

This episode is recommended for anyone interested in Agentic AI, AI factories, Data sovereignty, and those who want to stay updated on the latest developments in AI and technology.

Episode Description

The year in AI began with agents and brought us creative superpowers, robots on farms and in operating rooms, and so much more. Look back on AI in 2025 through the voices of the people who created it in this recap episode. Listen to every episode: ai-podcast.nvidia.com

Full Transcript

Hello, and welcome to the NVIDIA AI podcast. I'm your host, Noah Kravitz. Today, we're looking back on the year in AI 2025. But before we begin, if you're enjoying the AI podcast, please take a moment to follow us on Apple, Spotify, or wherever you're listening. Thanks. Our year began with NVIDIA's Mingyu Liu talking about the importance of world foundation models to advancing physical AI in episode 240. Forty conversations later, Jacob Lieberman introduced us to the future of enterprise storage, AI data platforms, in episode 281. Along the way were advances in AI models and the infrastructure that they run on, like the rise of agentic AI and the AI factory. We heard firsthand from pioneers in healthcare, higher education, life sciences, marketing, and other industries about how they're using AI to advance their fields and make work better for the people doing it. And we talked to everyone from researchers to roboticists about the dawn of physical AI, where intelligence moves from our screens into the robots building our cars, assisting our surgeons, and walking among us. 2025 was quite the ride. Let's dive in. This year in AI began as last year ended, with lots of talk about agents and agentic AI. So what exactly is an AI agent? An evolution in the way people use generative AI, agentic AI is a move away from simple call-and-response-style chatbots towards systems that have true agency. Chris Covert from InWorld AI breaks this evolution down into phases in episode 243, moving from simple conversation to an adaptive partner, and finally, full autonomy. We have this, you know, first, again, is that conversational AI phase. And I'll use a gaming analogy, right? The conversational AI phase gives avatars, gives agents, I'll use them interchangeably today, extremely little agency in doing anything other than speaking, right? 
It may be able to respond to my input if I ask it to do something, but it's not going to physically change the state of something other than the dialogue it's going to tell me back. The next phase is an adaptive partner phase where the AI is observing and responding to changes on its own. It's not micromanaging every decision, but it feels like you're collaborating with an agent or a unit that has just enough context to make smart decisions on its own. Like an evolution of a recommendation engine being driven by a cognition engine here. So it's not just learning, but it feels like it's learning what we need even before we ask it. Again, I think that's phase three. I think there's still a phase four. And I think that's a fully autonomous agent. And that stage, you know, again, continuing our analogy, is a player two, right? Where phase three is, it's adapting to us. Stage four is, hey, this thing is an agent on its own. It feels like I'm playing against another human. It is making decisions that feel optimal to its own objectives, aligned with mine or not. The immediate payoff of this capability is freeing human workers from repetitive, error-prone, non-creative tasks, what we often refer to as toil. But here's the key: we don't need the agents to be perfect to be valuable. In fact, we don't even need them to do all of the work for us, as NVIDIA's Bartley Richardson points out in episode 258. If it gets you 75, 80% of the way there, that's fantastic. That's great. Because, you know, I'm sure you do your fair share of writing, right? Like, the hardest part for me about writing is that blank page. That blank page, totally. Right? And if I can get something that's 80% of the way there, it's great. AI runs on data, and agentic AI is no different. Which is good, considering the sheer velocity of information creation in today's enterprise. Data growth is creating a widening gap between the data we have and the insights we can actually extract.
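This widening gap can be made concrete with a toy calculation: if data grows exponentially while insight grows linearly, the fraction of data ever turned into insight shrinks every year. The growth rates below are invented purely for illustration, not taken from the episode.

```python
# Toy illustration of "data is exponential, insight is linear":
# the fraction of accumulated data that yields insight keeps falling.
# Growth rates here are made up for illustration.

def utilization(years: int, data_growth: float = 1.5, insight_per_year: float = 100.0) -> float:
    """Fraction of accumulated data turned into insight after `years`."""
    data = 100.0 * data_growth ** years         # exponential data growth
    insight = 100.0 + insight_per_year * years  # linear insight growth
    return insight / data

for year in (0, 5, 10):
    print(f"year {year}: {utilization(year):.1%} of data utilized")
```

Under these made-up rates, utilization only falls over time, which is the Red Queen dynamic described next: you have to run just to stay in place.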
CytoReason's Shai Shen-Orr describes this challenge in the life sciences in episode 276, comparing the struggle to keep up with data growth to the Red Queen effect from Alice in Wonderland. You can think about it like data is exponential, insight is linear. Every day, the percent of data utilized to give insight is lower. The analytical side of this and the AI solutions for this have been missing. The field is still largely a manual field where you give people some data, they sit in front of their computer, they try to figure out, they make some value and insight for this. And I figured that's not a sustainable solution. And this field needs to move to ultimately build much larger integrative solutions that bring in many different angles of machine learning, AI, statistics and so forth to ultimately bridge this. The data-insight gap keeps growing. So you basically are constantly in a game in which you need to make it faster. It's actually what's called co-evolution. And then you remember Alice in Wonderland, the Red Queen? Sure. Right. Where she said to Alice, you have to run just to stay in place. But the Red Queen effect, so this need for us to continuously run, is a huge driver for automation, acceleration, and I would even say the cognitive meta-analysis that we as humans need to do to somehow describe to a machine how we make decisions so that we can automate them. Right. To handle this exponential growth, we need massive compute resources. AI factories provide the infrastructure to support enterprise-scale systems. But traditional ways of approaching storage had a major flaw: data gravity. The data was heavy, and moving it created security risks. Here's NVIDIA's Jacob Lieberman in episode 281. So far, in order to do AI, you've had to send your data out to some kind of AI factory with a GPU, do all your processing, and copy it back. Right.
So the data has gravity, and it turns out that instead of sending all your data to the GPU, you can actually send your GPU to the data. And what that looks like is actually putting a GPU into your traditional storage system on that same storage network and letting it operate on the data in place, where it lives, without copying it out. And the advantage of generating these AI representations with the source of truth data is that if the source of truth changes, you can immediately propagate those changes to the representations. The new version reverses this. Instead of shipping data out, we bring the GPU compute to the data. This evolves the AI factory from a distant processing plant into a unified, efficient pipeline. Sarah Laszlo from Visa explains what this modern factory approach looks like in episode 256. What that means to me is a single pipeline that goes from a data scientist with an idea about a model they want to build all the way to the model running in production. It's interesting because I hadn't really thought much about this AI factory terminology. Like, I had heard it. I hadn't really thought it was what I was doing until I came here to GTC and I started hearing other people talking about it. And then I realized, oh, that's what my platform does. So on my platform, recently we adopted what we call the Ray Everywhere strategy. So we use the Anyscale Ray ecosystem to do the whole thing, the whole shebang. So data conditioning, model training, and model serving we do all in Ray. And it is intentionally trying to be more of this factory concept where there's not a whole bunch of distinct parts or distinct tools that are living in different places that work differently. It's just one unified, consistent pipeline from start to finish. This shift isn't just about efficiency. It is critical for data sovereignty. Countries and companies need to ensure their sensitive intelligence stays in their buildings and on their own soil.
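The "one unified, consistent pipeline" idea can be sketched in a few lines of plain Python. This is a minimal stand-in, not the actual Ray-based implementation described above; the stage names and logic are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    """One pipeline from raw data to a served model, in the spirit of the
    'one unified, consistent pipeline' described in the episode. Stage
    implementations are placeholders, not real conditioning/training/serving."""
    stages: list[tuple[str, Callable]]

    def run(self, data):
        # Each stage consumes the previous stage's output; no hand-offs
        # between separate tools living in different places.
        for name, stage in self.stages:
            print(f"running stage: {name}")
            data = stage(data)
        return data

# Hypothetical stages standing in for data conditioning, training, serving.
pipeline = Pipeline(stages=[
    ("condition", lambda rows: [r for r in rows if r is not None]),
    ("train",     lambda rows: {"model": "fitted", "n": len(rows)}),
    ("serve",     lambda model: f"endpoint serving {model['model']} ({model['n']} rows)"),
])

print(pipeline.run([1, None, 2, 3]))
```

The design point is the single chained object: adding or swapping a stage changes one list, not three separate systems.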
In episode 247, Karen Hilson from Norwegian telecom operator Telenor explains why they built a sovereign AI factory in Oslo. So like Hive Autonomy, for example, I mean, they work with logistics, robotics. So they are actually innovating, I'd say, a lot of industries, whether it's ports, as I said, or factories, in their operations, and have efficiency cases. So they have very specific customer needs that they are trying to solve. But the reason why they were very interested in coming to the AI factory is that they're sitting with sensitive data, so they wanted it to be really on Norwegian soil. The Telenor brand sort of represents security, you know, so there's sort of a gain that really helps them. And then the sustainability part is super key. So that was sort of the combination of these three. Capgemini is also a customer of ours. They are developing products doing voice-to-voice translation. And we can say, yes, that can be done, but these are for sensitive dialogues. Not all dialogues can go out in the cloud somewhere. These are very sort of sensitive dialogues, if you think, you know, within the health sector, within the police. So not so much on program, but again, it's sort of a safe, secure environment. Right. And that's sort of really key. And another customer is working a lot with the municipalities in Norway. In Norway, yeah. And again, with sort of sensitive cases that they sort of really would like their data to be secured. To build trust in these factories, openness is essential. Jonathan Cohen from NVIDIA explains in episode 278 how open models, like NVIDIA's Nemotron family, allow for the customization required by sovereign projects. If you say, you know, NVIDIA trains a model, a Nemotron model, and it's great. But since you've disclosed all your training data and looked at your training data, for whatever reason, we have some policies where this data we can't use. And we can say, that's fine.
Everything you need to reproduce what we did is there. You can train your own model, excluding that data. Or you say, well, I like the data, but the mix is wrong. I don't know. I'm a sovereign project, and it really needs to be very good at speaking this language and understanding this culture. And that data wasn't as represented in your training set as I want it to be. Everything that we did is transparent, and so you can make these modifications yourself. Since our first episode back in 2016, the AI podcast has told stories about the real-world impact of artificial intelligence. 2025 was no different. A big area of impact this year was, once again, in healthcare, where AI is helping with everything from drug discovery to reducing physician burnout. Here's Anne Osdwat of Moon Surgical from episode 272, explaining how their maestro system supports surgeons. Physician fatigue is absolutely real. Yeah. It's interesting. We did our first human study in Brussels in Belgium with a surgeon, and he used the system over 50 cases. And he told us after a few weeks, hey, when I get back home in the evening, my wife tells me that I'm, you know, a lot nicer than before. So like, what's going on? And, you know, I mean, he attributed that just to his own fatigue level, right? He's like, you know, I end my day in a way that is a lot more relaxed. It's about both the physical and the mental load. With AI and healthcare, safety is the number one priority. Hippocratic AI has tackled this by building a constellation architecture, using multiple AI models that constantly double-check each other. CEO Munjal Shah describes how it works in episode 262. We literally have multiple models double-checking each other. Right. And what people don't realize is a lot of the models now, they say you can give a lot of input tokens to them now. Just put it all in there, it'll figure it out. And Gemini is like, what, a million, I think it is now? million tokens. So it's like, oh, okay, no problem. 
But it can't reason across it all. They'll show you examples of what are called needle-in-a-haystack tests, where it'll be like, okay, it'll find that one thing. Yeah. I mean, grepping for a word is not that hard in computer science. It's like, we can find a word. But what you're really trying to do is reason across it. So I'll give an example. If you ask your care manager, can I have ibuprofen? And they say, sure, you can have ibuprofen, but don't take too much. That's fine, right? Because it's an over-the-counter medication, unless you have chronic kidney disease stage three or four, then it'll kill you. Well, if you put the rules for ibuprofen and CKD into GPT-4 and then ask it, it'll do great. If you put in all the rules for all condition-specific over-the-counter medications and ask, it'll still do pretty good. It'll start missing some sometimes, which is still not okay because you could kill people, but fine. If you put in the patient's medical history, the patient's last 10 conversations with you, all of those rules for over-the-counter medication disallowance, and the current checklist for what you're supposed to follow with that patient, and maybe a few other things, and then ask it, good luck. And what it is, is we have an attention span problem. But if you have multiple models, we have these other models only focused on checking one thing at a time. So there's an overdose engine, and it listens to every turn of the conversation. It's like, are we talking about drugs? Are we talking about drugs? Yes, we're talking about drugs. Okay. And then it's like, well, okay, did somebody just say a number that's an overdose relative to their prescription or relative to max toxicity of what you can have of that drug? Okay, he did. And it may not seem that hard, four pills versus two pills, but when you're talking about creams and injectables, it gets quite hard. I took a whole bunch of my testosterone cream and I rubbed it on my hand. Was that an overdose? I don't know.
How much cream was in your hand? What's a little bit? What's a little bit? Was it a pea size? Was it a cherry tomato size? Was it an apple size? The LLM knows how to ask all these questions and knows how to navigate assessing whether it's actually an overdose. And if a patient shares overdose information with a care manager in a clinical setting, you need to do something. AI is also changing healthcare from a totally different angle by transforming agriculture. Paul Mikesell of Carbon Robotics explains in episode 270 why his company's approach to weed control swaps chemical herbicides for AI-guided lasers. I've also learned a lot about the quality of our food system, and I know that there's lots of discussion about this now. We are becoming more aware of it, that different herbicides are being banned in Europe, the United States, etc. We are learning about more of the long-term negative health effects. Again, the ones who really suffer from it over their lifetime are the farmers, who get exposed to this stuff at much higher doses than the consumer. But even the consumer, even you right now, are participating in some form of a multi, maybe multi-generational science experiment. We all have glyphosate in our system. And so if you take everybody listening to this podcast right now, if we all went and did a urine sample, you would find about 90% of us would have glyphosate in our system right now. What's glyphosate? It's the active ingredient in Roundup. Right. We know that it's carcinogenic. Like any carcinogen, it's only a question of exposure over time. So we should be able to, with the kinds of technology that are available today, with the things that AI can do, we should be able to take a step back and say, do we really need to be spraying this stuff on our food in order to grow it and survive as a population? Yeah. My answer to that question, I think, is no, we don't need to do that. And we should be able to do things like laser weeding. Yeah.
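Before moving on from healthcare: the constellation architecture Munjal Shah described above, narrow models that each double-check one specific risk on every conversational turn, can be sketched in a few lines. The checkers, thresholds, and rules below are invented for illustration only; Hippocratic AI's actual system is far more sophisticated.

```python
import re
from typing import Callable

# Sketch of the constellation idea: instead of one model reasoning over
# everything at once, narrow checkers each watch every turn for one risk.

MAX_DAILY_IBUPROFEN_MG = 3200  # toy threshold for illustration

def overdose_checker(turn: str) -> list[str]:
    """Flags any ibuprofen dose in the turn above the toy daily ceiling."""
    text = turn.lower()
    if "ibuprofen" not in text:
        return []
    return [
        f"possible overdose: {m.group(1)} mg ibuprofen"
        for m in re.finditer(r"(\d+)\s*mg", text)
        if int(m.group(1)) > MAX_DAILY_IBUPROFEN_MG
    ]

def interaction_checker(turn: str) -> list[str]:
    """Flags ibuprofen mentions for a patient with chronic kidney disease."""
    text = turn.lower()
    if "ibuprofen" in text and ("ckd" in text or "kidney disease" in text):
        return ["ibuprofen contraindicated with CKD stage 3-4"]
    return []

# Every checker sees every turn; any flag escalates to a human.
CONSTELLATION: list[Callable[[str], list[str]]] = [overdose_checker, interaction_checker]

def review_turn(turn: str) -> list[str]:
    return [flag for checker in CONSTELLATION for flag in checker(turn)]

print(review_turn("I have CKD stage 3, can I take 4000 mg of ibuprofen today?"))
```

Each checker has one job, which sidesteps the attention-span problem: no single model has to reason over the patient's whole history and every rule at once.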
Beyond the healthcare benefits, Carbon Robotics' robots help farmers operate more efficiently and sustainably. And they look really cool, too. Speaking of cool, in the world of marketing and media, agents are fundamentally changing the relationship between brands and consumers. Firsthand's John Heller joined episode 242 to describe a shift where AI agents curate the web specifically for the user's intent. I had been working in the gaming world and some of the generative AI abilities for gaming assets when language models really came out. And something struck us, something very powerful, which is, and this is a metaphor for the math inside, but AI now understands the ideas and intents, needs you may have from what you're reading, what you're watching, what you might ask it outright. And it can go find the right response or take the right responding action. And everything is presented to you in a very natural human way. And if you back up a step and think of that happening all the way through a consumer's use of the digital world, from when they're searching and becoming aware of things they might need, when they do some investigation and read up on products or services, when they go to browse or shop, when they buy, all of those modes change pretty fundamentally. They don't replace. We think they get enhanced because instead of the world of the past, where I maybe did a search, got some directions and a link, went to a place, read up on something, browsed for something, went to Sonad maybe, went to another place to try to find the version I want. Those are all sort of separate hops. Right. The internet where it's, you know, the same content everybody sees. AI instead is going to understand and learn at each moment what it is you need. And as with most things AI, data is the core. The people who have the most and best data about a product or service are the brands. They are the retailers and the people who sell it.
So they can create brand agents, which means your experience on the internet at all of those moments in the journey, from first learning about it to figuring out what the right configuration is and comparing and browsing and buying, is going to adapt on the fly through these agents. So it doesn't replace the web, but it changes things from you looking at stuff someone wrote to something that's partially adapting to what you actually need, understanding your needs. But the agents that are doing that for you are from the retailers and brands themselves because it's their data that is what you need. And that sort of changes the internet into kind of your internet for both parties. While software agents are transforming the digital world, a massive shift is happening in the physical world as well. This is the dawn of physical AI, where AI models don't just generate text or images. They control things that move, like the aforementioned farm machinery. According to Sanja Fidler, VP of AI Research at NVIDIA, the scale of this opportunity is staggering. Here's Sanja in episode 249. At the end of the day, robots need to operate in the physical world, in our world. And this world is three-dimensional and conforms to the laws of physics. And there's humans inside that we need to interact with. We typically hear the term such AI that operates in a real physical world as physical AI. So I'll maybe use that term quite a lot. Physical AI is really kind of the upcoming big industry, very likely larger than generative and agentic AI. You know, Jensen typically says everything that moves, all devices that move, will be autonomous, right? So that's kind of the vision. So a robot operating in the real world obviously needs to understand the world. What am I seeing? What is everything I'm seeing doing? How is it going to react to my action, right? So beyond understanding, it needs to act. But there is a catch. You can't train a physical robot the same way you train a chatbot.
If a chatbot makes a mistake, you might get a typo. If a robot makes a mistake, it'll probably break something. To solve this, researchers like Mingyu Liu are building World Foundation models, AI that understands physics and space-time, allowing robots to simulate thousands of futures before they take a single step in reality, as Mingyu explains in episode 240. So I think World Foundation models is important to physical AI developers. You know, physical AI are systems with AI deployed in the real world, right? And different to digital AI, these physical AI systems that interact with the environment can create damage, right? So this could be real harm, right? Right, right. So a physical AI system might be controlling a robotic arm or some other piece of equipment changing the physical world. Yeah, I think there are three major use cases for physical AI. Okay. It's all around simulation. The first one is, you know, when you train a physical AI system, you train a deep learning model, you have a thousand checkpoints. Do you know which one you want to deploy it, right? Right. And if you deploy individually, it's going to be very time consuming. Sure. Oh, then it's bad. It's going to damage your kitchen. So with a world model, you can do verification in the simulation. So you can quickly test out this policy in many, many different kitchens. And before, you deploy the real kitchen. And after this verification step, you may be narrowed down to three checkpoints, and then you do the real deployments. So you can have an easier life to put your physical AI. Once these brains are trained safely in digital worlds, they need bodies. And while we may see many form factors in factories, there is a massive surge in humanoid robotics going on outside those factory walls. Yashraj Narang from NVIDIA's Seattle Robotics Lab explains in episode 274 how this isn't just an aesthetic choice. It's a practical requirement for robots that need to work alongside us. 
You know, there's a group of people, you know, forward-thinking people, Jensen very much included, this is near and dear to his heart, that felt that the time is right for this dream of humanoid robotics to finally be realized, right? You know, let's actually go for it. And this begs the question of why humanoids at all. Why have people been so interested in humanoids? Why do people believe in humanoids? And I think that the most common answer you get to this, which I believe makes a lot of sense, is that the world has been designed for humans. You know, we have built everything for us, for our form factors, for our hands. And if we want robots to operate alongside us in places that we go to every day, in our home, in the office, and so on, we want these robots to have our form. And in doing so, they can do a lot of things, ideally, that we can. We can go up and down stairs that were really built for the dimensions of our legs. We can open and close doors that are located at a certain height and have a certain geometry because they're easy for us to grab. Humanoids could, you know, manipulate tools like hammers and scissors and screwdrivers and pipettes if you're in a lab, these sorts of things, which were built for our hands. As AI moves from the screen to the physical world, it is also fundamentally changing our creative and professional lives. In episode 265, Canva's Danny Wu talks about AI and creative superpowers. You kind of see, like, the magic of Canva is integrating all the different steps and different parts of design into a simple page, as I like to call it. And so we really invested in our content library, in millions of templates, in making it easier to start. And what we saw and got really excited about with AI was that, firstly, we can offer all the amazing high quality content for people to use. But the user might want something, might want to have an idea, that didn't necessarily exist.
Maybe it has actually never been created in the world. Like, AI just gives us this superpower and ability to actually create things on demand, specifically for what someone has in mind or in mission, and just kind of turn that idea, turn that search term or prompt, into something they can use to express themselves. But as these systems become more widespread, we must focus on inclusivity. We need to ensure that the data feeding these models represents everyone. Angel Bush, founder of Black Women in AI, reminds us of the goal of true equity in episode 250. One of the things that I've always said to people is, I want Black women in artificial intelligence to be so successful that it no longer has to exist. We're really not looking for members. We're looking for people to be a part of a movement. Yeah. And really understand and trust that vision of the movement, that we're going to make sure that you have all the tools you need in order to be a part of the AI economy, in order to pivot into your career. And in education, leaders like Dr. Cynthia Teniente-Matson at San Jose State University are teaching students that no matter how powerful the tool, the human element remains essential. Here's Dr. Teniente-Matson in episode 275. There are some students I've talked to who are using the tools for study guides. There are some students that are using the tools for first drafts. I think however we use the tools, it's important, if we're going to be writing about things or communicating, that we're citing references and saying, you know, this was code developed based on whatever sort of information they might have retrieved from the instrument, and also to validate it. Because, you know, these hallucinations exist, but as time goes on, the hallucinations are diminishing, especially if you're building your own custom GPTs. That doesn't mean mistakes aren't going to happen. But as I say regularly to students, Noah, and to faculty and staff: you are still the human in the loop.
We're not trying to replace the human in the loop. You have the tool be your co-pilot or your assistant that you're directing. So looking back on 2025, what's our best piece of guest-given advice for the year to come? It's simple: start now. As Derek Slager of Amperity puts it in episode 271, if you're still on the sidelines when it comes to artificial intelligence, it's high time to get in the game. I would say the one piece of advice, and I give this advice a lot, is start now. It's so important. It's so important because, like, it's early, right? We're still figuring out the patterns and the practices. You know, like, as an industry, we're learning a lot about kind of how to, you know, put these incredible new technologies together in ways that really, you know, move the needle. And, you know, right now you just have a choice, right? You can be a doer who's in that learning loop, or you can be an observer and kind of, you know, wait and see. And I think, you know, we talk a lot about this here, like, you know, speed's the only thing that matters. And so I don't think it's viable in the current market to be outside that learning loop. And the good news is it's early, right? And so you're not too late, but it's getting to the point where pretty soon you're late. And so I think we're certainly past the point. And again, this is something that's changed in the last six months. We're past the point where people are like, well, we'll see if this AI thing plays out or not. Like, it's overwhelmingly obvious where things are going. And so, yeah, get off the sidelines, get in there, try stuff, learn. It's easier than ever, you know, to do that. There's more information out there. And of course, you know, AI feeds itself, right? AI can also help people figure out where to start and how to get through. And so, yeah, start now and go really fast. That's the path to success. We are moving toward a future of collaboration where human creativity is amplified by silicon capability.
NVIDIA's Jacob Lieberman leaves us with this final thought on the partnership between people and agents in episode 249. There will be teams composed of carbon people and silicon agents, and they're collaborating on tasks. And at various times, the humans will be conducting the orchestra, and at other times, the orchestra will be conducting itself. And that might be the most efficient way to get the work done. Human judgment is critical. Human strategizing is critical, and there's always room for that. So it's a way to complement the things that we're very good at with some of the things where we could use some help. Yeah. 2025 was an incredible year for AI, and all signs point to 2026 being full of more breakthroughs and transformations in artificial intelligence and how we use it to change the ways we live and work. Follow the NVIDIA AI podcast wherever you get your podcasts to stay up with the latest in the industry as told by the people creating it. And browse the complete archive of episodes at ai-podcast.nvidia.com. Thanks for listening.
