

AI Infrastructure Ecosystem | GTC Live Washington, D.C. Chapter 3
The AI Podcast (NVIDIA)
What You'll Learn
- ✓ The power and cooling infrastructure required to support the growth of AI and data centers will need to scale to unprecedented levels by 2030.
- ✓ Efficiency and energy optimization through AI-powered systems will be crucial to managing this infrastructure buildout.
- ✓ Leveraging a diverse mix of power generation sources, including gas, nuclear, solar, and wind, will be necessary to meet the projected 50% increase in U.S. power demand over the next 20 years.
- ✓ Innovations like 800V data center architectures demonstrate the potential for a more holistic, systems-level approach to designing and building this new AI infrastructure.
- ✓ There is a need for policy support and a greater sense of urgency in the U.S. to keep pace with global competitors like China in building out critical AI infrastructure.
AI Summary
This episode discusses the immense infrastructure challenges and opportunities presented by the rapid growth of AI and data centers in the United States. The panelists, including leaders from Vertiv, Schneider Electric, GE Vernova, and Crusoe, highlight the need for massive investments in power generation, cooling, and grid modernization to support the exponential increase in compute demand. They emphasize the importance of taking a holistic, systems-level approach to designing and building this new AI infrastructure, leveraging innovations like 800V data center architectures. The discussion also touches on the need for policy support and a sense of urgency to keep pace with global competitors like China.
Topics Discussed
- AI infrastructure
- Data center power and cooling
- Energy efficiency
- Power generation mix
- Systems-level design
Episode Description
Coverage from the keynote pregame show, GTC Live Washington, D.C., Chapter 3: AI Infrastructure Ecosystem. Behind every breakthrough is an unseen network of data centers, power systems, and partners. Leaders across energy and infrastructure discuss how they're building the backbone of the AI economy. Catch up with GTC DC on-demand: https://www.nvidia.com/en-us/on-demand/
Full Transcript
Hello, and welcome to a special GTC edition of the NVIDIA AI Podcast. This is the third of five episodes on the road to GTC Live in Washington, D.C., bonus conversations you won't hear anywhere else. We've gathered leaders from across the energy and infrastructure sectors to discuss how they're building the backbone of the AI economy, the unseen network of data centers, power systems, and partners that powers each and every breakthrough. Enjoy the conversation. And when you're done, visit ai-podcast.nvidia.com to browse our library of over 275 episodes of the NVIDIA AI Podcast.

AI isn't just transforming tech. It's becoming a new driver of the American economy. From manufacturing to medicine, it's rebuilding the industrial and scientific foundations that are driving new productivity and new jobs. Nowhere is that impact clearer than in the rise of data centers. They are the new factories of the AI age. America's AI leadership depends on more than algorithms and chips. All of those are awesome. It runs on infrastructure. Data centers, power grids, and supply chains are becoming the backbone of intelligence, the foundation of a national strategy for innovation. Even when we assume we need, let's say, a thousand X more, and there are forecasts that go up to a million times more, the buildout is tremendous. And here to talk about building that foundation is Gio Albertazzi, CEO of Vertiv, Olivier Blum, CEO of Schneider Electric, Krishna Janalagagata, CTO of GE Vernova, and Chase Lockmiller, co-founder and CEO of Crusoe.

All right, Gio, let's start off with you. And let's pretend for a second people don't know how big of a challenge we're looking at right now. Can you talk about the scope of the power and the cooling that we're going to need by, let's say, 2030?

Well, think in terms of all the announcements that we have heard about investment in data centers, certainly in the U.S., but globally. We all have an idea of how much IT power is behind that.
Yeah, for every kilowatt, megawatt, gigawatt, there is an equivalent amount of power and thermal infrastructure. And that power and thermal infrastructure needs building at scale. Now, that has traditionally been a very construction-heavy, labor-intensive industry. And we're all thinking about how we can change that and scale it to absolutely unprecedented industrial proportions. Because without that, and that's what I think, I believe, and I know everyone on this panel and beyond are doing, without that, we will never be able to scale AI at the speed that we have seen in the video and that you have explained to us. So I think we are at a very important inflection moment at this juncture.

So Olivier, Schneider has been a hallmark inside of data centers. I get a lot of data center tours, and I see your logo on a lot of the boxes in there. One question I have for you is, sure, it's build more and more and more, but there's also an efficiency play. Can you talk me through the balance of efficiency and just more power?

Yeah, you're absolutely right. You know, what is very interesting, we are at a time where AI depends on compute, and compute depends on energy. The very interesting part is that energy availability and efficiency depend on AI. Okay. Because, you know, at Schneider, I think we have been an advocate of energy efficiency for many, many years. And we strongly believe that the combination of electrification, automation, and digitalization will solve the energy transition in every part of the world. Now, 10 years ago, frankly speaking, it was not possible. Electrification and automation were available. But the type of technology you have now through AI helps you to make energy more efficient. So on our side, of course, we are excited because altogether we are providing the infrastructure to make the compute, and therefore AI, available. But we are even more excited to leverage AI to make our overall industry more efficient.
And if you think it through, we have all been through the industrial and digital revolutions. Actually, what we are doing, it was in your introduction, we are talking more and more about the AI factory. And we are implementing exactly the same principles. So going through the full life cycle of this AI factory: starting from the design, building a digital twin where you can simulate power and cooling efficiency, to the development, the construction, the maintenance, the operation. This whole life cycle is completely powered by AI at the end.

Krishna, I think that, you know, Vernova's gas combustion engines are really a secret weapon in America's AI buildout. But you're sold out. And, you know, talk to us a little bit about how you're changing your own business processes to drive more production, to get more online, and how you see that playing out over the course of the next few years.

Sure, Brad. So like you said, the demand is massive. We are sold out through '28, maybe through '29, but we're investing heavily in capacity expansion for gas. We are quadrupling our capacity, the number of gas turbines delivered, by 2028 compared to 2020. And all types of power are also an answer to this. It's not just gas. We are fortunate to have basically all kinds of power available to us: gas, nuclear, solar, wind, and hydrogen. So we are looking to leverage every type of power source that's available to grow the electrical infrastructure. And power generation is a part of it. A big part of it, which we'll talk about later, is also the grid. Once you actually generate these electrons, how do you get them from source to use in a very complex environment with new grid resources and so on? But power generation, and the grid, is a big bottleneck right now. And we're working on easing that.

Help us understand how your power generation capabilities and capacity evolve from 2025 to, let's call it, 2029.
So if you look at the demand side, for about 20 years the total power demand in the U.S. has been flat. No growth, 0.3% growth. But in the next 20 years, it's expected to grow 50%. And about a third of that is going to come from data centers. And to meet that demand, you essentially need to leverage all kinds of power that's available. Like I said, it's gas, it's nuclear. We have an SMR that's coming online in 2029 in Ontario. Our first SMR application at TVA went in this year. So there is significant growth in nuclear that's coming up. From a gas perspective, depending on the gas turbine, sometimes heavy-duty is the answer. Sometimes it's an aeroderivative. Sometimes it's different types of turbines. And we're also future-proofing some of these gas turbines so that they can run on hydrogen in the future. So gas is a fantastic bridge right now. So electrify now, decarbonize later. I think that's something we can do as well. And we're also looking at, you know, wind and solar. So all of them belong in the energy equation. And I think that's what we're working on. It's what the market needs. Energy abundance is what we're chasing here. And energy abundance is really an all-of-the-above kind of answer right now.

Right. You know, Chase, you guys have been at the forefront of what Jensen talks about, extreme co-design, right? Moore's Law is dead. The data center is now the unit of compute. This is about power in and tokens out. Crusoe is building some of the most capable data centers in the world, in Abilene and other places, in partnership with NVIDIA. Talk to us a little bit about what you see happening in that world of extreme co-design and why you've been so successful, along with companies like CoreWeave, in pushing the frontier on how NVIDIA brings its chips to market.

Absolutely. So, you know, Crusoe is an AI factory business. And the thing about AI factories is this incredible composition of all of the greatest industries humanity has ever produced.
From everything the gentlemen next to me are working on, from a cooling, a power generation, a storage perspective, all the way to everything that we've done in the silicon space. And most importantly, our partnership with NVIDIA. But this process of taking electrons and turning them into tokens is just one of humanity's greatest opportunities and challenges over the next decade. And we see it as just a very complex challenge in terms of the scope and magnitude of these investments. And it's requiring just an incredible amount of resources across the entire economy.

So tell me a little bit. I mean, again, I'm out of my league, out of my depth a little bit here, but the 800-volt data center architecture that I know NVIDIA has been pushing is just one example of this, you know, extreme co-design. What are the things that you see happening and coming down the pipeline that are going to drive that equation in terms of power in and tokens out?

Yeah. So, you know, I think the 800-volt rack design is a great example of us, you know, thinking through from a first-principles basis: okay, data centers are sort of built in this way, and that's how we built them 25 years ago. Is that really how we should be doing it today at the scale of gigawatts? Maybe not. You know, when we think about this data-center-scale computer, it's not just a single server, it's not just a single rack, it's really the entire system working together as one cohesive unit that's thinking as one giant brain. And how do you think about that from a networking perspective? How do you think about that from a cooling perspective? How do you think about that from an entire-system perspective? And on the power aspect of that, you get massive efficiencies from doing things like changing from 48 volts to 800 volts. That's a huge efficiency gain that all of us would benefit from.
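The physics behind that efficiency gain can be sketched in a few lines. This is a rough illustration, not from the episode: it assumes fixed delivered power, a fixed conductor resistance (the 0.1 milliohm value is made up for illustration), and losses dominated by I²R conduction loss.

```python
# Why stepping rack power distribution from 48 V to 800 V DC saves energy:
# for the same delivered power, current scales as 1/V, so I^2 * R
# conduction loss scales as (1/V)^2.

def conduction_loss_watts(power_w: float, volts: float, resistance_ohm: float) -> float:
    """I^2 * R loss when delivering power_w at volts over resistance_ohm."""
    current = power_w / volts  # I = P / V
    return current ** 2 * resistance_ohm

rack_power = 1_000_000.0  # a hypothetical 1 MW rack
busbar_r = 0.0001         # 0.1 milliohm, an illustrative conductor resistance

loss_48 = conduction_loss_watts(rack_power, 48.0, busbar_r)
loss_800 = conduction_loss_watts(rack_power, 800.0, busbar_r)

# The loss ratio is (800/48)^2, independent of the resistance chosen.
print(round(loss_48 / loss_800))  # ≈ 278
```

Under these assumptions the same busbar dissipates roughly 278 times less power at 800 V, which is why the panel treats the voltage change as a system-level redesign rather than a component tweak.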
And so it's just one example of this cohesive, collective, industry-wide partnership that's required to bring this infrastructure to life.

If I can reinforce that 100%: you know, this 800 volts is an example of how you can make the whole system not only more energy efficient, but really scoop up the power that is available, because the system can be redesigned from scratch. Again, you were talking about the system: think system, not just components. We've come from decades of data centers being built a component at a time. I was talking about an inflection point in the way we think about data centers. This is a fantastic example. There are many examples. Again, we have to think of the power train, the thermal chain, the whole prefabrication, as ways to industrialize the way we build data centers like we've never done before. So I believe the next 10 years, the next five years, three years, will be very different in that respect than they have been so far. We've seen some pictures of what we call Vertiv OneCore. That is exactly that type of modularization at the system level, all parts integrally designed to work together. And that is perfectly applicable to the 800-volt power train. So things are changing fast.

So China is building faster. They're building bigger. You had mentioned the 10X on nukes to our zero, or one if you count Georgia. And we're behind. I mean, I worked with China in 1995 on buying sheet metal and plastic injection molding. And I said, hey, send me a picture of the site. And the picture had no roads, had no electricity. And I said, well, how are we going to be up and running in 60 days? They said, watch. And it happened. And China was willing to do whatever it took to build infrastructure. And here we are 30 years later, and it has continued. What is it that we could do to go faster? I think we were both at the Winning the AI Race event here in D.C. Chase, you were here as well, on stage.
What can Washington do to get this moving faster? Start with you.

Well, look, when you see what has happened in the past months, it's been very clear that for the government, AI has been big on the agenda and energy has been big on the agenda. And the two are really interconnected. It's a bit of what my colleagues have said: it will take an ecosystem at the end of the day. And you might not be aware, but whether it's NVIDIA, us, the four of us, you have more and more an ecosystem of people who are working together to support the government to make it happen. We are also working very closely with all the hyperscalers. And what is very, very important at the end of the day, and Gio talked about it, is that we are living through a huge revolution in our sector. You know, I've been 32 years at Schneider Electric. I think more has happened in the past two years than in the past 32. So the fact that you have this ecosystem, which is working together, working closely with the government, I think we are having the good discussions, the real discussions, on what it takes to make it happen. And we are moving very, very fast. The U.S., by the way, when it comes to AI and data centers, is still by far number one in the world today.

Yeah, President Trump signed an executive order aimed at removing some of the barriers to that. Are you currently experiencing that? Yep. We're working on all of that, for sure. Good.

I mean, you nailed it. Secretary Wright and Secretary Burgum, every time I've seen them or heard them when we're in D.C., they're actually asking us: what is your recommendation? How do we go faster? What do we need to do? I think it's a complete change in the mindset, because now it's been elevated to a national security issue. That's right. This is no longer just about economic security.
It's about national security, and consequently moving faster than ever. Which brings me to this question of where startups can help. We're sitting here with some of the largest industrial companies in the ecosystem. What, if anything, do you see happening out there in the startup ecosystem, in Silicon Valley and power generation? Certainly, obviously, Crusoe and the work of CoreWeave and others, Nebius, in terms of building out these new data centers, has been critical. But is there anything else that you see as interesting that people in the crowd might want to pay attention to?

Absolutely. I think startups are the lifeblood of our innovation ecosystem. They're here. And when you look at what we're doing, I mean, GE Vernova is 18 months old, spun off as an independent company. And our CEO intentionally headquartered it in Cambridge, Massachusetts, because of the innovation ecosystem with MIT and the universities and everything surrounding there. And we are investors in a lot of these startups. So let's go to the grid, for example. Our grid was designed 50 years ago for relatively stable, spinning sources of power and relatively stable loads. It's a completely different world right now, with AI workloads of hundreds of megawatts swinging in milliseconds. And then on the source side you have wind and solar, where these resources, you know, are changing constantly. They're fluctuating. So having a grid that can actually deal with this is really, really important. We are working on it, all our competitors are working on it, but startups have a really, really important role to play there. There's the hardware aspect of it and also the software aspect of it. When it comes to hardware, we have power electronics, innovative solutions. I've seen companies that are cooling the transmission cables. I mean, just fascinating stuff, you know, so that you can increase the capacity to transmit power.
But how do we leverage power electronics, you know, STATCOMs, FACTS solutions, to make sure we can leverage the infrastructure more? Startups are playing a huge role out there. And they're also playing a very massive role in the software ecosystem. How do we predict, you know, what's going to happen? How do we make sure the right resources are there at the right time, at the right place? And how do we manage in real time, from millisecond scale to minute and hour scale? How do we dynamically manage the grid across the country? These are all big, challenging problems. We are working on them, but a lot of the innovation in that space is coming from startups. So we're seeing, again, I don't want to name a specific startup, but we're seeing lots of innovation coming from there. So I'm really excited to work with the startups and see how we can accelerate this transition.

Maybe the same question for the two of you.

Yeah, maybe one point, if I can. If I come back to what I said before, you know, AI depends on compute, compute depends on energy, and actually energy depends on energy intelligence. What I mean by that: at the end of the day, it's about data. You know, even if Schneider Electric is historically more of a hardware company, actually today we are a hardware and digital company. And back to your question, what we love to do with startups: the more you have startups coming in on this data intelligence, looking at the different assets you can have in the AI factory, because it's a very complex one. You will have switchgear, you will have racks. We are doing a lot of R&D, but the more they take on those use cases, on how you can make it more efficient, how you can capture the data, we are helping to structure the data. We are helping to create an ecosystem, but our customers are asking for even more.
So the more you have startups coming into our ecosystem, which is completely open, by the way, not protected, the more we deliver solutions and the more we can accelerate energy intelligence in our industry. So we love having those startups taking a strong interest. We need more and more smart people and capital, you know, coming from all of the startups.

Yep, absolutely. A hundred percent. And if I look at it from the infrastructure technology side, data center infrastructure technology was stable for many years. Then it started to accelerate like crazy in the last three to five years, and it's going to accelerate more. When you have this kind of acceleration, people like us have that scale. I mean, scale is absolutely essential, but you cannot just scale in this market. You have to be an innovator and scale. Innovation happens organically. Innovation happens inorganically. There needs to be a base of startups, inventors, creative minds, engineers out there that define what the new technology will be, because a new technology cannot just be nurtured within GE Vernova, Schneider, or Vertiv. It needs to also spring from the, let's say, entrepreneurial fabric of a country. And the two things combine and scale. And that's why we see very auspicious times right now.

Chase, what are the most exciting areas of innovation when you look ahead two, three years and say, I can't wait to bring this into the AI factory?

Yeah, just adding to this sense of urgency, and I think this huge opportunity for startups: startups have the benefit of being able to move very, very quickly, and we're at a moment of just incredible urgency across the world in terms of bringing this infrastructure to life. To your question around things that we're really, really excited about, what's changing in the data center, it's unbelievable how much change is happening over such a short period of time.
From the cooling architectures to the power densities of the systems. If you look 20 years ago, a single rack in a data center might have been two to four kilowatts. Today with the GB200s, you're looking at 130-ish, 140 kilowatts, depending on peak demand. And then you play it forward and you look at Vera Rubin, you look at Feynman. These are one-megawatt racks, right? There's a thousand homes' worth of power in a single rack in a data center. That requires tremendous innovation across the cooling systems and the network systems, in terms of cabling these things together, where we're seeing a lot of tremendous innovation. And I think there's a lot to be said about the overall memory optimization in these systems. One thing Crusoe is very focused on is not just the hardware aspect of bringing these AI factories to life, but the software aspects of how you actually operate the AI factory to produce intelligence very, very efficiently. That requires getting data and moving data from things like your object storage platform directly into HBM, and how do you do that ultra-efficiently? And we're really excited about some of the innovations taking place across the networking stack and the overall software innovations that are bringing data to compute way more efficiently, so we can actually utilize the GPUs and get the most intelligence per unit of investment in compute.

You know, it may be appropriate because we're here at GTC: NVIDIA has been an active investor across the space. They're forward-engineering, working with partners like you guys, to help drive this extreme co-design in the data center, to get us the benefits of Moore's Law, now that Moore's Law is dead, or even more. Can each of you give us an example of, you know, some of the things you're doing with NVIDIA in particular that are helping to drive this flywheel of innovation, and how this unique partnership might be accelerating the path that we're on?
Yeah, we can. Please, please go. We can start, exactly. We were talking about 800-volt DC as an example. And that's an area where we are actively, actively working together. So how will we make that technology available? What we have to do as infrastructure providers is make that infrastructure ready ahead of the GPUs, the silicon, that will land, because that infrastructure needs to be in place. So that's one area. Another area of great importance, which is still going to continue to evolve, is everything around liquid cooling. Let's not forget the transition that has occurred and is still occurring in the industry from air cooling to liquid cooling. And that's a colossal change, if you will. There's still air. It's a mix. It's a hybrid. But more and more loads are liquid cooled. So all these things require technology partners that are ahead of the pack, and then the industry can follow. But we have to pave those ways together.

Yeah, we have a ton of deep partnerships with NVIDIA across the stack. And it's so amazing that this company spans so many aspects of the economy, from power generation to, you know, the most cutting-edge artificial intelligence development. But we're working with NVIDIA very tightly on AI factory reference designs. So how do you actually build these gigawatt-scale data centers and do it efficiently from a power generation, from a cooling, from a power perspective? And so that's one big aspect where we're investing a lot of time with NVIDIA.
Another is on the software layer, right? So working with them on, you know, things like Lepton, which is basically a distribution layer for getting compute into people's hands so that they can run their workloads very efficiently across, you know, various NeoCloud partners like ourselves. As well as things like Dynamo, which is basically about accelerating inference: how do you actually get more utilization, and a lot of these memory optimizations that Crusoe is focused on pushing and innovating on, so that people actually get more tokens per unit of compute from their systems.

Let me just interrupt by asking, do you see any other chip company out-innovating? There are a lot of other chip companies. You guys work with other ones. But NVIDIA seems to be playing a really unique role in pushing the ecosystem forward. Maybe a little commentary on how it's similar or different than others.

Yeah, the thing I would say about NVIDIA, that they've done so incredibly well, and kudos to Jensen, is the ecosystem itself, right? NVIDIA, I forget the exact headcount, it's 35,000 people, something like that. The amount of leverage that they're able to get from those 35,000 people, and then the ecosystem of builders, is absolutely incredible. And that's kudos to Jensen in terms of building out all these different work streams and workflows across energy, data center design, all these different SDKs that they've released that have helped unlock this collective intelligence of society, from the startup ecosystem to big enterprises.

Just to build on that a little bit: by the time you're done with all four of us, maybe you'll get a view of the scope and breadth of the ecosystem that NVIDIA is playing in. We talked about software optimization and AI, we talked about cooling, we talked about the 800-volt rack. We are more on the power generation and grid side. So we just released a white paper with NVIDIA.
We're working with NVIDIA on reference architectures for power generation inside the data center, and how you manage it. We just talked about the grid a few minutes ago, how complex it is: different sources of power, load balancing, and so on. Now, the power is sometimes not available at the grid today when folks like you are building the data centers. So there's behind-the-meter power, there's bridging power. But when you do that, you're dealing with all the complexities of the grid inside the data center. So how do you design and architect it so that you can actually manage the power flow, with battery energy storage, with power electronics, FACTS, all kinds of different systems, working with people in the medium-voltage and low-voltage space like my colleagues here? So it's an ecosystem, all the way from the power gen to memory to algorithms, cooling, the entire thing, where we're all working with NVIDIA. So to me, it's amazing to see how they're curating the entire ecosystem and moving us all forward, helping us move forward.

And maybe we'll end with Schneider's commentary on this. I don't see anything else out there that has this level of systems thinking, right? It's extraordinary, the breadth. It gives me a lot of encouragement about the gains yet to come in terms of America's re-industrialization around power and AI generation. But how is Schneider working with NVIDIA to drive this forward?

Well, look, a lot has been said, but for me, what is very unique in the vision of Jensen is, first of all, to work with partners. You know, what we love with the NVIDIA team is that they share a lot with us. They speak about the Omniverse, they speak about the next generation of chips, you know, the next generation of GPUs. And when you can work as a technology partner, up front and in a very open manner, that changes the game. What I would say also, you know, Jensen has this famous sentence about the speed of light.
It is forcing the entire industry, but in a very positive way, to work at a very different speed. And for us, for Schneider, I can tell you it has changed a lot in the way we are managing the company. Gio mentioned the shift from air cooling to liquid cooling. We discovered that by working together. We knew it would come. The question was when. We discovered two to three years ago that it could come much faster than expected. But if you ask me today what it changes: we have to completely rethink the way we design those data centers. And I like what Chase mentioned. He said design. We talked about maintenance, operation. So you have to completely rethink the way you build those AI factories, from the design stage, you know, to the build stage, to the operate stage, to the maintenance. So it's about building a digital twin. And what Jensen and the NVIDIA team are forcing us to do is to think about those next generations. And guess what? Software is playing a big role, and software is powered by AI. So again, it's a loop we are all working on together.

One thing I want to touch on, to follow up there, in terms of the system-wide innovation needed to invent solutions to support this infrastructure at scale. One of the conversations I was having backstage with Olivier was around this notion of how the utilization of the data-center-scale computer impacts power swings. Because you have this massive, large variance: when you have a gigawatt-scale data center that's operating as a single brain, it's pushing data to the GPUs, all the GPUs are running their backpropagation or their matrix multiplications, and then they're publishing the results to every other GPU on the cluster. And when you have that, you actually have these massive power swings. It's like, whatever, six hertz, somewhere in that range, where you have these massive load oscillations.
And that creates a lot of problems for utilities and for on-site power generation. GE Vernova doesn't like it when we have these massive swings in the actual power utilization of the turbines. So it's actually created this system-wide partnership: we need to work with our on-site generation, we need to work with the utilities, and most importantly, we need to work with folks like Schneider to innovate solutions where we can create a buffer through battery systems that absorbs a lot of this load oscillation from these large-scale training workloads running on these massive-scale computers. One of the things that blows me away: if we think about the Manhattan Project, that cost about $4 billion over four years. Inflation-adjust that and it's about $40 billion. As a percentage of GDP, it was 1% of GDP, so let's call it $400 billion in today's terms. Jensen has said we're going to spend $4 trillion over the next five years. So private industry is going to spend 10x what we spent on the Manhattan Project. And just listening to the decentralized coordination, where NVIDIA plays this role in pulling everybody together and pushing everybody forward, the speed of it is very commensurate with the speed of the Manhattan Project. I don't see it happening anywhere else in the world. I think all of your companies are critical to the success of the American AI race, and it's great to have NVIDIA helping push it all forward. I know we've got to jump, so I'm going to throw it to you, Patrick. No, that was a great ending. Gentlemen, I just want to thank you so much for coming up here. I hope everybody out there listening has a better understanding of the challenge ahead of us, and that Washington can clear as many roadblocks as possible to let all of you cook. Thank you very much. Thank you.
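The battery-buffering idea discussed in the panel can be sketched in a few lines of Python. This is an illustrative toy model only: the megawatt figures, the six-hertz square-wave load shape, and the flat-target control policy are all assumptions for the sketch, not any panelist's actual design or any vendor's control scheme.

```python
# Toy model: a battery energy storage system (BESS) buffering the roughly
# 6 Hz load oscillation that a synchronized GPU training cluster can present
# to the grid. All numbers are hypothetical.
import math

def cluster_load(t, base_mw=700.0, swing_mw=300.0, freq_hz=6.0):
    """Synchronized training steps: the load swings between a compute-heavy
    phase (base + swing) and a communication-heavy phase (base - swing)."""
    phase = math.sin(2 * math.pi * freq_hz * t)
    return base_mw + swing_mw * (1.0 if phase >= 0 else -1.0)

def grid_draw_with_bess(t, target_mw=700.0, battery_limit_mw=300.0):
    """The battery absorbs the deviation from a flat target, within its power
    rating. Returns (grid_mw, battery_mw); battery_mw > 0 means discharging."""
    load = cluster_load(t)
    deviation = load - target_mw
    battery_mw = max(-battery_limit_mw, min(battery_limit_mw, deviation))
    grid_mw = load - battery_mw  # what the utility actually sees
    return grid_mw, battery_mw

# Sample one second at 100 Hz and compare the raw swing to the buffered swing.
samples = [grid_draw_with_bess(i / 100.0) for i in range(100)]
raw = [cluster_load(i / 100.0) for i in range(100)]
raw_swing = max(raw) - min(raw)
grid_swing = max(g for g, _ in samples) - min(g for g, _ in samples)
print(f"raw cluster swing: {raw_swing:.0f} MW, swing seen by grid: {grid_swing:.0f} MW")
```

With these made-up numbers the battery's power rating exactly covers the oscillation, so the grid sees a flat 700 MW; a real system would also have to track the battery's state of charge and round-trip losses, which this sketch ignores.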