Gradient Dissent

AI, autonomy, and the future of naval warfare with Captain Jon Haase, United States Navy


Tuesday, March 25, 2025 · 1h 1m

What You'll Learn

  • Automated target recognition using deep learning models is used on AUVs to detect underwater mines
  • AUVs are self-contained underwater vehicles that do not require life support systems like a submarine
  • Underwater communication and navigation are extremely challenging due to the dense fluid environment
  • Autonomous mission planning and adaptation is integrated with the AI systems to quickly respond to detected mines
  • Underwater mines pose a significant and persistent threat to naval operations, with historical examples of their disruptive impact
  • The cost of modern naval vessels makes the threat of mines even more critical to address

AI Summary

This episode discusses the use of AI and autonomous underwater vehicles (AUVs) in naval warfare, particularly for the detection of underwater mines. The guest, a Navy Captain, explains how AI-powered automated target recognition systems are used on AUVs to search for and identify mines, allowing for more efficient and safer naval operations. The challenges of underwater communication and navigation are also explored, highlighting the need for advanced autonomy capabilities in these systems. The episode emphasizes the real-world threat of underwater mines and the historical significance of their impact on naval warfare.

Key Points

1. Automated target recognition using deep learning models is used on AUVs to detect underwater mines
2. AUVs are self-contained underwater vehicles that do not require life support systems like a submarine
3. Underwater communication and navigation are extremely challenging due to the dense fluid environment
4. Autonomous mission planning and adaptation is integrated with the AI systems to quickly respond to detected mines
5. Underwater mines pose a significant and persistent threat to naval operations, with historical examples of their disruptive impact
6. The cost of modern naval vessels makes the threat of mines even more critical to address

Topics Discussed

Autonomous Underwater Vehicles (AUVs), Automated Target Recognition, Deep Learning, Underwater Communication and Navigation, Naval Warfare and Mine Countermeasures

Frequently Asked Questions

What is "AI, autonomy, and the future of naval warfare with Captain Jon Haase, United States Navy" about?

This episode discusses the use of AI and autonomous underwater vehicles (AUVs) in naval warfare, particularly for the detection of underwater mines. The guest, a Navy Captain, explains how AI-powered automated target recognition systems are used on AUVs to search for and identify mines, allowing for more efficient and safer naval operations. The challenges of underwater communication and navigation are also explored, highlighting the need for advanced autonomy capabilities in these systems. The episode emphasizes the real-world threat of underwater mines and the historical significance of their impact on naval warfare.

What topics are discussed in this episode?

This episode covers the following topics: Autonomous Underwater Vehicles (AUVs), Automated Target Recognition, Deep Learning, Underwater Communication and Navigation, Naval Warfare and Mine Countermeasures.

What is key insight #1 from this episode?

Automated target recognition using deep learning models is used on AUVs to detect underwater mines

What is key insight #2 from this episode?

AUVs are self-contained underwater vehicles that do not require life support systems like a submarine

What is key insight #3 from this episode?

Underwater communication and navigation are extremely challenging due to the dense fluid environment

What is key insight #4 from this episode?

Autonomous mission planning and adaptation is integrated with the AI systems to quickly respond to detected mines

Who should listen to this episode?

This episode is recommended for anyone interested in Autonomous Underwater Vehicles (AUVs), Automated Target Recognition, Deep Learning, and those who want to stay updated on the latest developments in AI and technology.

Episode Description

In this episode of Gradient Dissent, host Lukas Biewald speaks with Captain Jon Haase, United States Navy, about real-world applications of AI and autonomy in defense. From underwater mine detection with autonomous vehicles to the ethics of lethal AI systems, this conversation dives into how the U.S. military is integrating AI into mission-critical operations, and why humans will always be at the center of warfighting. They explore the challenges of underwater autonomy, multi-agent collaboration, cybersecurity, and the growing role of large language models like Gemini and Claude in the defense space. Essential listening for anyone curious about military AI, defense tech, and the future of autonomous systems.

✅ Subscribe to Weights & Biases → https://bit.ly/45BCkYz

🎙 Get our podcasts on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/gd_google
YouTube: http://wandb.me/youtube

Follow Weights & Biases:
https://twitter.com/weights_biases
https://www.linkedin.com/company/wandb

Join the Weights & Biases Discord Server: https://discord.gg/CkZKRNnaf3

Full Transcript

You're listening to Gradient Dissent, a show about making machine learning work in the real world. And I'm your host, Lukas Biewald. Today we're joined by Navy Captain Jon Haase. We've worked with Jon for quite a while on a specific use case around underwater unmanned vehicles, which we talk about. But Jon is actually the program manager for the Navy's Expeditionary Warfare Division. And so he has a really broad purview into how AI and warfare interact. And this is a really interesting conversation, which isn't happening enough in Silicon Valley. I hope you enjoy it. So I guess, first of all, I want to start with, can you tell me about some of the AI applications that you've recently been involved with and how they work? Because I think you're in a domain that a lot of people listening to this podcast wouldn't know well. Yeah, the applications. So one of the applications that's really at the heart of what we do is automated target recognition. And so I've got to back up a little bit to talk about the hardware instantiation of where we run these models from, because that's really important to understand the unique challenges and the way that that automatic target recognition is done. But the automated target recognition runs on unmanned underwater vehicles. And these are underwater robotic systems that are pre-programmed to run missions to search the water for mines. And then based on what they find, either to cue autonomous behavior by other vehicles or to simply bring that data back and allow human operators to be involved in decision making about what to do next with these mines and the naval operations that are happening. So the AI component to this for the automated target recognition is that this is actually an ensemble of deep learners that we put together over a number of years with a number of third parties who are able to integrate and then onboard onto, you know, an NVIDIA processor that runs on this unmanned underwater vehicle that's able to, you know, deliver inference at runtime in order to cue where we found targets. And it also runs on laptops, so that it helps human operators who are reviewing all this data that's generated from the missions to more reliably and more quickly review that data and determine where the mines would be. So it's cueing, it's targeting, it's bounding boxes, it's the stuff that you might expect. And it also involves bringing that data back then from new missions and then fine-tuning all of the various components of that automated target recognition, and again figuring out how to weight and ensemble those things and rapidly redeploy them to those vehicles. So really the big application that we work on is AI to detect mines on these autonomous underwater vehicles, which uses, you know, strangely, a number of different techniques that work together really well when ensembled and are power efficient enough now and good enough at runtime with latency to be relevant. And there's enough metadata on the vehicles that it provides a really rich environment for us to extract that metadata as well as the data generated. So that's really the heart of it: what we do is automated target recognition for underwater operations to detect mines so that we can conduct operations. We're also looking, and this is a little bit less mature, at some pilot programs about how we can make interacting with these vehicles easier for the sailors who use them.
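To make the "weight and ensemble" step described above a bit more concrete, here is a minimal sketch of how detections from several models might be fused with per-model weights before a contact is declared. It is purely illustrative and assumes nothing about the actual Navy system: the model names, the 2-meter grouping gate, the weights, and the threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # box center along track (illustrative units)
    y: float          # box center across track
    score: float      # model confidence in [0, 1]

def weighted_ensemble(detections_per_model, weights, threshold=0.5):
    """Fuse detections from several models using per-model weights.

    detections_per_model: dict model_name -> list[Detection]
    weights: dict model_name -> float (roughly summing to 1.0)
    Returns fused (x, y, score) tuples whose weighted score clears the threshold.
    """
    pooled = [(m, d) for m, dets in detections_per_model.items() for d in dets]
    fused, used = [], set()
    for i, (m_i, d_i) in enumerate(pooled):
        if i in used:
            continue
        group = [(m_i, d_i)]
        used.add(i)
        for j, (m_j, d_j) in enumerate(pooled):
            if j in used:
                continue
            # Group detections that land within a small spatial gate (2 m, hypothetical).
            if abs(d_i.x - d_j.x) < 2.0 and abs(d_i.y - d_j.y) < 2.0:
                group.append((m_j, d_j))
                used.add(j)
        score = sum(weights[m] * d.score for m, d in group)
        x = sum(d.x for _, d in group) / len(group)
        y = sum(d.y for _, d in group) / len(group)
        if score >= threshold:
            fused.append((x, y, score))
    return fused

# Hypothetical usage with three ensemble members:
dets = {
    "sonar_cnn_a": [Detection(101.0, 12.0, 0.80)],
    "sonar_cnn_b": [Detection(101.5, 11.5, 0.65)],
    "shadow_net":  [Detection(400.0, -3.0, 0.30)],
}
weights = {"sonar_cnn_a": 0.5, "sonar_cnn_b": 0.3, "shadow_net": 0.2}
print(weighted_ensemble(dets, weights))  # only the clustered contact clears the threshold
```

The design point mirrors the conversation: individually noisy detectors are combined so that weighted agreement between models, rather than any single score, drives what gets flagged for a closer look.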
And so looking to replace PDFs and interactive tech manuals with things like intelligent agents that could be trained on all the specifics and answer questions, help with field maintenance, even plan missions better and be involved with that. And then sort of zooming out just a little bit more than that, which I think is pretty common with other folks, looking at how AI can help the workforce developing these products to generate the documents more effectively, understand our requirements better, and act as just an intelligent assistant to the process. So kind of looking at those three things. But the one that we've had the most engagement with and that we've been most involved with and has the most operational impact is the automated target recognition that allows us to spot mines underwater with those vehicles. Now, just the underwater autonomous vehicle, is that a fancy way of saying a submarine? They look like submarines from my perspective. Can you describe what this is? In some ways, they do look like submarines. In some ways, they don't. And so think about a cylinder where there's no need for life support. And that's really the big deal about autonomous underwater vehicles is that you don't need to have any of the services that a human would need. So oxygen purification, generation, atmospheric controls, eating, you know, discharges that would be required from in to out. It's just batteries and systems. And what that allows you to do is make much smaller systems than you could otherwise do, like the kind of things that a human being could never fit into. And so I have experience with the smaller and medium versions of it. So anything from, you know, about the size of a normal dining room table to, you know, two tables essentially put together would be the sort of scope that we do. We can think about a long cylindrical tube with one propeller at the back of it to power it and a couple of control surfaces and fins. And then for sensors, we have some sensors that would look forward to detect things directly in front of, below, or above that unmanned underwater vehicle. And then we have sonars that are looking downward from the sides and that are able to search the bottom. And so we load this thing up with sensors. We load it with batteries. Recently, we've put onboard compute on these as well. And then we have navigation autonomy software that we put on, and then we pre-program missions. And I guess one thing that might not be obvious to people is underwater, I think you can't, it's harder to communicate. Right. So you actually really need total autonomy. Right. Or how much contact do you have with the UUV? You know, Lukas, I'm glad you mentioned that. That's like one of the things that's hardest for people to appreciate who haven't had to operate underwater is underwater is so hard. I mean, the fluid that you're in is like a thousand times more dense than the air to start with. So speed, resistance, control surfaces, right? You have to adjust for altitude at this point also. And you can't use GPS. Communication underwater is really challenging. And so, you know, just communicating between vehicles, you have to use sound waves that have to travel through the water. And, you know, there's a lot of background acoustic noise that's generated from marine life, as an example, or sound channels that are created because of temperature gradients.
And so even delivering the sound energy between the vehicles becomes challenging, and then you have encryption issues that you have to work through, and delivering that and just communicating between the vehicles becomes, you know, really hard to do in addition to a number of other things. Is the underwater autonomy also an AI system, or is that simple because there's not really a thing to crash into underwater? Do you just sort of aim it or does it also have to navigate with these sensors? Yeah, so there are a couple of things to unpack there. The first is there are things to crash into underwater, and we have. So, you know, if the chance is not zero and given enough time, all things will happen, and we've done it. And so, you know, there's steep banks. And when you pre-program missions, these steep banks and gradients, because you have to maintain a relatively close proximity to the bottom in order to generate a lot of the targeting data that we would use, the sensor data that we're generating. And so if you have a real steep cliff, you can inadvertently run into it if you're not looking forward while you're looking down. So you actually can run into things underwater. But the autonomy is pre-programmed and deterministic right now, but it is connected to our AI systems. And this is why having it on board and being able to perform inference at runtime on the vehicle with low latency is really important for us in a power efficient way. Because what happens is if we think we have seen a mine, we're able to dynamically adjust the mission in a pre-programmed way to essentially bring in a branch to the original plan and to search that area more extensively while the mission continues. And what that avoids is having to bring these vehicles to the surface and to evaluate the data and then reacquire that object of interest and run a whole second mission set. And so while the autonomy is deterministic and not generated by AI at this point, it does rely on AI and is integrated with it so that we can change that autonomy in pre-programmed ways to speed the mission up. And I guess I would naively think maybe you'd want to put an underwater mine kind of close to the surface, right? Why does the UUV run close to the ground? So there's mines that are on the bottom. There are mines that are in the water column, and there are mines that are very close to the surface as well. It's all just dependent on the specific design of the mine and how it's designed to function. So some are designed to and do function very efficiently from the bottom. Oh, wow. And are mines a big problem? Are there lots of mines out there? This is way out of my wheelhouse, but like, is this a real thing that you really need to solve? Or is this more of like a proof of concept for the technique? They've been used in the past, and they can shut down fleets in the middle of operations. They are incredibly dangerous when they're out there, and they're fairly low cost. So they're sort of an asymmetric threat in a lot of ways. They're also very difficult to detect. And so this is the reason this capability is so important. It's not a theoretical threat. The Navy has run into these in the past. We've got plenty of case studies about where and how that's happened. And it's not a problem until it is. And when it is, it becomes a really big problem. I mean, all the way back to the Civil War, 1864, Battle of Mobile Bay. David Glasgow Farragut was going into Mobile Bay with ironclads and wooden ships together.
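The detect-and-branch behavior described above, deterministic autonomy that splices a pre-planned reacquisition pattern into the mission when the onboard ATR is confident enough, can be sketched roughly as follows. The waypoint structure, the confidence threshold, and the box-pattern survey are hypothetical stand-ins, not the vehicle's actual mission software.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    x: float
    y: float
    depth: float

@dataclass
class MissionPlan:
    waypoints: List[Waypoint] = field(default_factory=list)

def reacquire_pattern(contact: Waypoint, leg: float = 30.0) -> List[Waypoint]:
    """Pre-programmed box pattern around a contact for a closer second look (hypothetical)."""
    return [
        Waypoint(contact.x - leg, contact.y - leg, contact.depth),
        Waypoint(contact.x + leg, contact.y - leg, contact.depth),
        Waypoint(contact.x + leg, contact.y + leg, contact.depth),
        Waypoint(contact.x - leg, contact.y + leg, contact.depth),
    ]

def maybe_branch(plan: MissionPlan, next_idx: int, contact: Waypoint,
                 atr_score: float, threshold: float = 0.7) -> MissionPlan:
    """Deterministic branching: if onboard ATR confidence clears the threshold,
    splice a reacquisition pattern in after the current leg, then resume the plan."""
    if atr_score < threshold:
        return plan  # keep the original survey lines
    return MissionPlan(
        waypoints=plan.waypoints[:next_idx] + reacquire_pattern(contact) + plan.waypoints[next_idx:]
    )

# Hypothetical usage: a contact reported mid-survey with 0.82 confidence.
survey = MissionPlan([Waypoint(0, i * 100, 20) for i in range(5)])
updated = maybe_branch(survey, next_idx=3, contact=Waypoint(0, 250, 20), atr_score=0.82)
print(len(survey.waypoints), "->", len(updated.waypoints))  # 5 -> 9
```

The branch itself stays deterministic and pre-programmed; the only thing the AI contributes is the trigger, which is the distinction drawn in the conversation.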
I mean, "damn the torpedoes, full speed ahead" is when the lead ship in his formation, the Tecumseh, one of the first ironclads, hit a mine. And it was so, uh, disruptive to the operation that the entire fleet stopped. The Battle of Mobile Bay stopped because of mines back in 1864. And David Glasgow Farragut, the admiral in charge, was on the Hartford, the second in formation. The first ship in formation in the wooden column stopped because they were so afraid to hit mines, and he ordered his ship ahead full speed and right into the minefield and progressed with the fight. And that's where that, uh, you know, famous expression came from, "damn the torpedoes, full speed ahead": this assault on Mobile Bay in the Civil War in 1864, where mines became a critical factor. And the only reason that we survived that attack from mines that we were in the middle of was by luck, not because we had done mine countermeasures and understood the threat, avoided or mitigated it sufficiently. And so, you know, the interesting thing is back then ships cost in today's dollars about 17 million; you know, today it's billions of dollars. And so it's just not something we could ever do again. And so, yeah, it's not a theoretical threat. They're out there and active. And so is the idea that someone on a boat would put these into the water and they'd go out ahead and look for mines? Or how would this actually be used in practice? For the vehicles you're saying, in order to find a minefield? Right. So there's a lot of things that would have to play into how we might suspect there's an area to be cleared. It could be the geography that we're going to navigate. It could be the threats that we know about. It could be a number of other things that are available to fleet commanders that would say, hey, this is worth investigating. These are areas with risks that we would need to mitigate further. I mean, obviously the ocean is much too big a place for us to do this everywhere. And so it has to be sort of strategically done in points that matter at the right time, in the right place for the right ships to transit through. And, you know, that's all really situationally dependent sort of stuff. I mean, so this seems like a really good example of kind of putting intelligence into objects in a really useful way. And I think it's probably a really good example from a PR perspective, right? Because mines just seem like kind of a horrible way to, it seems like they could kill civilians. It seems really, really dangerous. But it seems like there's kind of a spectrum here of different levels of autonomy that you could give lethal devices. I mean, this is purely defensive, but do you have a framework for thinking about, or is there a place where you would draw the line? It seems like you definitely wouldn't want like an AI system deciding to launch nuclear weapons. Like that would definitely be an extreme. I think no one in their right mind would support that. But it seems like there's a lot of stuff in between. Like how do you think about that? Yeah, I appreciate you highlighting that. There's certainly a broad spectrum of applications for the Department of Defense for using AI. Our portfolio has been doing this for some time. And as you highlight, it's a great mission for us to have because it's defensive and it clears these threats and obstacles so that not only do the U.S. Navy ships not hit these should they become an issue, right?
We're available to help clear these so that civilian mariners or others are protected and they're safe. And so it's really an easy mission to sort of get behind in that way, and there's no ethical questions about using AI to clear these hazards, right? It's on the protective side, which makes it really from that ethical question you're asking much simpler. I've never had to wrestle, in terms of my job, policy or larger issues about where AI and autonomy could be used for more offensive operations. And so I haven't had to really wrestle through that or think through what a program would look like or fail safes or the other things necessary to do that. I think my thoughts on that are that the taking of human life is such an incredibly impactful thing that I could not imagine offloading that to AI or autonomous systems in any sort of meaningful way moving forward without humans like directly and specifically involved so that all the ethical concerns and considerations would be like fully met in that regard. And so, you know, there's a spectrum in between about like gathering information and it's very use case specific. So I think the best way to approach a subject like that is just with extreme regard for human life and safety and explainability. This is an irreversible thing that could only be done like under the most responsible and ways with the best oversight possible to ensure no collateral civilian or innocent lives are lost and just approaching it with that sort of framework and realizing that the tools that the military have should allow us to do our mission better, but never put that ethical high ground that we have so proudly maintained and do still, you know, ever at risk. Although it seems like that's kind of a comfortable position you might have when you don't feel under threat. Like I noticed in Silicon Valley, I think people were really against maybe any kind of like AI military applications. But then I think for a lot of people kind of seeing the war in Ukraine reminded people that, wow, there's bad actors out there and they're probably using, presumably like we would have enemies that maybe wouldn't hold to such a high ethical standard and, you know, might build autonomous weapons at any degree of autonomy. And, you know, you might lose a conflict if you weren't willing to be a little more aggressive, right? Or do you think it's possible to really hold that standard of like never integrating AI to anything offensive? Well, I don't think I'd say never to integrate things, AI into things that are offensive Certainly you would need and want to And as the pace of and as adversaries build these capabilities as well you need to keep pacing in front of them You also mentioned I heard being aggressive and victorious and sort of being warriors that could prevail in the face of conflict. And what I'd say there is, I think it's an and, and not an or. So I think you can be aggressive. I think you can be warriors. I think you can go and prevail in conflict and maintain the moral high ground. I think the way that you would integrate AI, autonomy, and advanced technology into that operational scheme, again, are really dependent. A strategic imperative, though, is that through all of this, we would never want to give up the ethical and moral high ground. We would never want to alienate our allies and those that would come with us into these conflicts in the future. And so I think those are things that are just bedrocks of the American military and have been for a long time. 
And I don't think that you can't enable and integrate technology, AI, and autonomy and still maintain the moral high ground and the respect and dignity of low collateral, no collateral damage to innocent civilians while prosecuting wars. And so I think those things are totally synchronous and can be done at the same time. And I think we've got some incredibly talented people working on those issues. And I'm confident that, you know, in the moment where we would need to come to a decision, we would handle that appropriately. I'm curious what it's been like for you working with Silicon Valley companies. I've noticed in the last couple of years, I think it seems like defense has, there's been a lot more collaboration, I think, between between Silicon Valley and US military. I think when I go back a few years, I remember I think Google Cloud decided to stop supporting certain military applications over how bad that made people I knew in kind of military applications feel. And I felt like that created a lot of animosity at the time. Do you feel like that's been kind of worked through and forgotten? Or do you think there's maybe still some talking past each other or kind of resentment on your side of the fence? Oh, on the military side, no. I don't think there's any issues on our side. I think we've always just wanted the best tools, the best capabilities, so that we could be the strongest military ever and maintain that position of strength so that we can prevail in any conflict. So I think whatever the tools are and whoever makes them, we're glad they use them. We just want to win. You know, that's really the only rule to warfare is win. And I think, you know, however and wherever we do that, you know, given all those constraints about protection of life and the moral high ground that we maintain and our allies and partners and all those things. But yeah, win it all, win. And America is just amazing at that. We were as good as we've ever been. And so I think when it comes to, you know, because there's a couple of things to unpack there. you mentioned Google specifically and sort of the transition that they've had and some of the stories that I had seen in the press. You know, recently I understand they've got their impact level five approval for use in the government now. So where they were versus where they are now, they're offering services to the Department of Defense. They're in compliance with some of our security concerns there for cloud computing and cloud usage. And they've entered that marketplace, you know, I think with a big contract recently along with Amazon and Microsoft. And so they're they're one of the offerings. I think more largely though, if I could kind of maybe address, I think what's kind of fundamental to that question is the partnership between tech companies and the military in general. And I think that there's actually a great relationship between technology companies and the military, one that's to everybody's benefit so that we see, you know, younger companies that are really bringing a lot of tech and forward-leaning processes and moving really aggressively and quickly to develop useful capability that are entering the scene now. and challenging the status quo from a tech mindset. They're also VC-backed in tech companies in the defense tech space. We're seeing more entrants there as well. 
And we're also seeing more partnerships with larger tech companies so that if you look at what, say, Microsoft is doing with OpenAI and offering some of those connected services, there's some examples of some really big tech companies that are supporting defense and government in general through partnerships. So I think what I've seen based on all that is that tech companies are trying to understand defense better, trying to serve us as one of their customers. And I think we're just grateful to have all that technology coming in, giving us more fighting power and making our military more effective. And so there's certainly, and you highlighted another thing there about cultural differences. There's certainly cultural differences, right? I mean, just having some exposure to what a tech company is like and how you treat your employees and what the demands are and the sort of tech and talent that you recruit, You know, that's obviously a totally different culture than the military. And I think what's important is to understand and to communicate between customer and client in that case so that tech companies understand what we would need, the reliability required to operate in the military environment so that we could be used in conflict and combat. And then for us to also understand the capabilities, the limitations, sort of the financial structures and incentives and make the process of working with the government a bit easier for tech companies. So I think there's some understanding and outreach on both sides, but it's to everybody's benefit. And I think generally things have gotten better, and I see a future in which those relationships get stronger. And more defense tech companies founded by simply tech folks who have done successful startups before or worked at large companies are just going to increase over time. I think there's a lot of capital going into that as well. So I think that marketplace is really rich and robust right now, and I think that's, again, to our benefit. what do you think um one of the things you mentioned to me before recording this is is people really underestimate the gap between software applied to to civilian use case and then software applied to one of your use cases and how much work goes into um kind of hardening the technology to make it to make it work for you can you talk a little about that or tell us some stories, you know, maybe from the work that you did on what needs to change. Like, you know, in some sense, when I just zoom way out, you know, autonomously operating a car like we have in San Francisco and a lot of our customers do and autonomously operating a submarine, like, okay, it's sonar instead of LiDAR and, you know, the cameras don't work as well, but it does seem, you know, both are, you know, a submarine and a car is a lethal weapon if you, you know, if you operated badly. Like what's the difference when you try to take technology not designed for you and make it work? Yeah. I think at a very high level, the biggest difference is that I am not convinced that autonomous vehicle companies are considering adversarial attacks and that adversaries are actively looking at those as attack surfaces and that there are regular, repeated, expected, and by design attacks on those autonomous vehicles to disrupt their operation or to make them intentionally unsafe and not function as operated, or even worse, to compromise them in a way that is unknown to the operators at the time. 
So this fleet of vehicles turns out to be a proxy for someone who is not supposed to have access to them. So I think that's the... Although they probably should, right? I mean, actually, as you say it, maybe they should be considering that, but you're right. They probably are not considering that scenario. But that does seem not impossible. I'm sorry, go ahead. No, I'm, I am, right, there is, right. Um, okay, so yes, that is, that is a thing, but that's not considered by the manufacturers of those vehicles. And there's a reason for that, and it's because nobody likes, here's the hard truth, nobody likes cybersecurity. When you cyber harden something, an object, it makes it harder to use, slower to operate, longer to boot up, and more difficult to access from a login perspective. So when you go from single sign-on to MFA, you've just made the login process harder. Now, imagine adequately cyber-hardening a vehicle so that all those attack surfaces are mitigated and a dedicated and professional actor couldn't access that vehicle under any conditions. Now you're talking about you would have three keys to the vehicle, you would have a password once you got in. You would need to make it larger and slower because there's more software that would have to, hardware and software that would have to go on it. The entire comms bus would have to be changed. The messaging internally would be encrypted. Everything would work worse. So here's the reason people don't like hardening things so that adversaries can't get to them: because they're more difficult to use. And so that is one of the most significant challenges and something that we've personally had to deal with, with our programs, is that we've taken a commercial robotic system that works great. And we were the first to go through this process for these unmanned underwater vehicles. And we cyber hardened it in accordance with what the Navy standards are. And we thought that we were making like a 10% change because all we had to do was add these additional modules. But it was like a 90% change because what we quickly realized is we changed the communication structure, we changed the comms bus, the physical architecture, the software layers associated with it, even the way it communicated were different. I mean, just to give you an example of the complexity that we didn't appreciate before we began, as a first for the Navy here, the length of signals that the underwater acoustic channels had to send increased because they were encrypted now. So in addition to the signal, you had to send the encryption packets. That allowed one of the transistors to stay open, the DSP to stay open longer to transmit the signal. That then applied more voltage for a longer period of time across the circuit, which caused it to overheat, which caused this circuit card to fail, which again, like we're adjusting now the physical architecture of this vehicle to account for a communications challenge that was related to a signal processing thing so that we could get the communication packets through. That's just one example, right? And so this is where when defense tech companies or others come to me, to my programs, with, you know, things that work, they never actually work. And what would seem simple turns out to be really, really complicated. So your initial question about, hey, autonomous vehicles, why can't we just do this with unmanned underwater vehicles?
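The encrypted-acoustics anecdote above comes down to a simple energy budget: longer messages keep the transmit chain energized longer. A back-of-envelope sketch, using made-up numbers rather than any real system parameters, shows the shape of the problem.

```python
# Back-of-envelope sketch of the overhead story above: adding encryption framing to an
# underwater acoustic message lengthens the transmission, which keeps the transmit
# chain energized longer. All numbers here are illustrative, not real system values.

def transmit_stats(payload_bits: int, overhead_bits: int, bitrate_bps: float, tx_power_w: float):
    """Return (airtime_seconds, energy_joules) for one acoustic transmission."""
    total_bits = payload_bits + overhead_bits
    airtime = total_bits / bitrate_bps   # acoustic modems are slow: tens to hundreds of bps
    energy = airtime * tx_power_w        # energy dissipated in the transmit chain
    return airtime, energy

# Hypothetical 256-bit status message over an 80 bps acoustic link at 10 W transmit power.
plain = transmit_stats(payload_bits=256, overhead_bits=0, bitrate_bps=80, tx_power_w=10)
encrypted = transmit_stats(payload_bits=256, overhead_bits=384, bitrate_bps=80, tx_power_w=10)

print(f"plaintext: {plain[0]:.1f} s on air, {plain[1]:.0f} J")
print(f"encrypted: {encrypted[0]:.1f} s on air, {encrypted[1]:.0f} J")
# The encrypted message keeps the amplifier on about 2.5x longer per message, which is the
# kind of change that can push a circuit past its thermal margin if the hardware wasn't
# designed for it.
```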
So first of all, the challenges underwater are significantly harder, I think, than operating on the road, even though those are not trivial also. But then it's this requirement for reliability and for robustness of cybersecurity that becomes really, really challenging, right? And so we have to tackle and deal with those. And that becomes really difficult because we're, I think, at the forefront of cyber hardening and making these robotic systems robust so that adversaries can't access these attack surfaces. And that has all sorts of technical implications that were difficult to foresee before we did it. And so, you know, working prototypes that aren't fully hardened and meeting our requirements turn out to be really difficult to fully, you know, harden. Does your organization do its own AI research or does it outsource the AI research? And what parts do you feel are core? Like a company would think, okay, maybe there's certain parts here that sort of like things that we really need to know how to do ourselves and there's parts that we need to, it's okay for us to like outsource to third parties so we're going to do it better than us. Like how do you think about that? Yeah, so unpack that. We do do some of our own research. So the government does have technical centers of excellence that are capable of doing some of this work, and we do leverage them to do some of this work as well. However, we find that a partnership for our particular programs with industry is also necessary. So we go through some big defense primes who are able to provide some of the AI services that we need to do this. And that's on the bespoke side of things. So when we need specific things built just for us, like the automated target recognition as an example, you know, there's no civilian equivalent that we can access something that's commercially available. And so you asked how we think through the process. And so the first thing would be to do a market survey and to see what's available and to reach out to the government warfare centers. Because the first question that we're going to ask is, has this been done before? Has anyone done this as well? Is there information? Are there architectures? Is there models? Are there data repositories? Is there a workflow that we could use? Is there even a schema that we should be using for commonality between our data sources? So we'll do a broad survey, and in some cases we'll find things, in some cases we won't. You know, the next part of this would be to really scope the requirement and think through it smartly so that we can then go out to industry. And there's a number of ways that we can do that to find the best available commercial partners who can develop these algorithms for us. And increasingly, with the pace of innovation speeding, with some of the novel models that are happening and some of the open source opportunities that are available, you know, open source has really allowed many of our providers that have experience and understand the mission to leverage some of these open source models in a way that's just really catapulting and advancing our capabilities by taking advantage of what's available in open source and being able to, you know, sort of ensure the safety reliability of that, bring it into our system and work on it. And so we think about it in terms of the requirement, what's already been done and what's available, and then a contracting strategy that lets us get to the best available workforce that'll do that for us. That's on the bespoke side. 
On the other side, you know, if we have a large, say, like, an LLM requirement in order to, you know, perform some function, we'll just certainly do a commercial landscape there first to understand what's available. And if there's a pre-built solution, we are really better off changing our requirement to meet what's built at scale than we are to try to create a bespoke solution to meet our need, if there's anything that's even close. So the really important thing there for me and my team is to really adjust our expectations to take advantage of what's commercially available and work on policy to use that appropriately within our programs. Because there's just things happening right now that we would never, I mean, if you look at the amount that goes into training LLMs and the research and the investments that are made commercially, we would be foolish not to take advantage of that as it is and not modify it, because that just increases your cost exponentially. Okay, so I was wondering, again, this is maybe putting a business hat on for me and then wondering how it applies to you. I was wondering how you think about your competitors. I was really excited to hear you say that America is the best in the world at winning. That seems very reassuring. Are there, I have competitors that keep me up at night. Do you feel similar about other countries' militaries? Like, do you think about like, okay, are they doing something better than us? Are they figuring out things that we don't know? Is that something you think about a lot? And are there aspects of what they're doing that you think we could learn from? Well, there's a lot in there. We can learn from everybody. I learned from my kids, right? So we can certainly learn from other militaries. There's things we should be watching. Or do you not even consider other militaries competitors? Is that the wrong analogy? Yeah. Yeah. Yeah, so I'm afraid of no one. There's no country that keeps me up at night. We are America. I am not afraid of what that means. I work with people who are not afraid of what that means. And I've been in combat. I know what that looks like and I'm not afraid. And the rest of the folks who are with me right now are not afraid of what that means either. We will go toe-to-toe with anybody, and we fear no one. Now, we respect everyone, and war is a terrible thing to be a part of. In some ways, nobody wins. And so we should have an attitude where we respect everyone out there, everyone can put up an amazing fight, and you can look back through history to see just examples of great powers that struggled with smaller opponents. You can get the stiffest resistance from folks you didn't expect it from. You have to respect your adversary in combat. But at the same time, we don't, we're not afraid of what that means. And we're not afraid of moving forward to defend our country and our nation. And I'm really proud of all the work that we've done. In front of me, I've seen people who weren't afraid and went into combat and were victorious. And I see people coming behind me as I get ready to retire who are going to take that torch and who aren't afraid of what that means either. So I think that we don't fear other countries. I think we respect them and what that might mean for us to be in a position in the future where, you know, we might be competing with them in that way. And I think that we have to be humble enough to learn from everything that's happening, realize that technology has for a long time shaped what it means to go into conflict.
and that technology is changing more and more rapidly, which is why the relationship you highlighted earlier between technology companies and the Navy is critically, and the military is critically important, right? We need to be aware of what's happening, to use it most effectively, to remain the strongest fighting force that the world has ever seen. I'm so proud of what we've done. It does seem like maintaining a technology advantage, though, is incredibly important for staying safe and winning wars, right? And it seems like as the technology moves faster and faster, maybe that becomes even more important. I don't know if that's a tech executive bias here, but it just sort of seems like the idea of another country having AI technology that's a few years ahead of ours seems really scary when you think about what, especially when you think about what military applications that might have, no matter how fearless we are. Software has never won a war. Interesting. Software has never won a war. That just isn't how it's done. And that's not going to be what decides the next war either. John Boyd was famous in the Air Force and Marine Corps during his time in and after active duty. The Americans with planes that were not as good were able to best Russian pilots in the air because our planes had hydraulics and theirs didn't. so we could maneuver faster, but their planes were actually better technically. Their pilots just couldn't operate them effectively. That's an area where the Russian planes were technologically, had better tech than ours and had better performance specs, but because of the human factors necessary to do that, our pilots were able to be better in the air because they could make better adjustments in the planes they were using. So it's not just about the tech specs. I've not seen one conflict or combat so far where software has been the deciding advantage over that. It's an enabler, but nothing will ever replace the fighting spirit of the folks who use those tools. So while it might be terrifying to see the battle about tech go back and forth and who has the advantage in any particular moment, that I don't think will ever be the deciding factor. There's a margin that as long as you're within that margin and you still can compete with the necessary technology that we have, there's something magical about America. And I think anyone who tries to test that is going to be really disappointed with the outcome in the future. And so tech is going to be an important part of that. We're going to use that tech as best we can. But I don't think, so I'm pushing back a little bit on like, hey, what happens if we slip a year or two behind? It's a surprising answer. Yeah, yeah. It's just not how we win wars. Like software is not going to march into the capital of another country and declare victory, right? This is not how it's ever been done. And I would be shocked if it ever is done that way. Autonomy is going to be important. AI is going to be important. But that's one component of how we would operate. And that's not the only component. So, you know, there's other areas that you could look at. Energetics inside of explosives, as an example. You could say like, hey, if we have better energetics and are like the energy per pound of explosives delivered is higher for one country than another, wouldn't that mean we're going to win? And the answer is that's one factor, right? But like so much goes into it that at the heart of it, warfare has always been about people. 
I think we have the best and the best warriors that the world has ever seen in the strongest military ever. I think we've stitched together all these capabilities in a way that's incredibly impressive. Software is part of it, but isn't the only part of it. So as the defense tech community comes and rallies behind and partners with the military, it's also important for us to realize that it's not the only part of it. It seems like software could also play a role in training people and choosing people. I mean, if that's the most important thing, that seems like maybe a really good place to try to use more software than. Absolutely. AR, VR, any way that you can make humans that are involved in the activities, the military do more effective. This is where that automated target recognition we were talking about becomes so important. This is where enabling decision advantage becomes so important. How do you lift the cognitive burden off people so they can focus on the right thing at the right time so that they're not overwhelmed by irrelevant information, but they clearly see the signal through the noise. They make the right decision about combat operations in the right moment, and they can most effectively move forward with the most important things that need to be done. You can look back through the history of warfare and find all sorts of, I mean, just as an example, in World War II, Nimitz sent a message to Halsey, two famous admirals, and Nimitz meant for Halsey to attack a Japanese battle group that was going through a vulnerable spot. Halsey got the message and thought it meant be ready to attack. So he never attacked. And so we had better technology. We had better intelligence. We had better positioning. We had the element of surprise and it didn't matter. And so that's an example where there's like so many factors that go into these operations that you have to sort of cut through all that and get to the best decision-making, best situational awareness, best capability, but like they all have to come together in kind of a magic moment. What do you think changes in the next like five to 10 years with these AI applications exploding? Like, like it does sort of seem like what we think of as war probably looks a lot different, doesn't it? Probably a lot more unmanned things going around, like, like you see in, in the news in Ukraine, you see a lot more, you know, drones and things like that? Is that, am I off on that? Or what's your view? I don't think so. I think that's a trend that will certainly not go away. I don't think that we've seen a nation state put all its resources behind conflict operations that has the sort of tech scope and background that America has right now to know what that would fully mean. That again is one of the factors that would go into it, but there, I think, are going to be a lot of things that go into it. And so, yeah, I think technology will play a big role. I think unmanned will play a big role. I think the future of warfare is going to be enabled more and more by technology. I do think we will see more of these robotic systems on the battlefield. How they interact together between systems, how they generate information and how humans process and use that on the battlefield is going to change a lot. And I think there are, you know, a lot of tailwinds that we have in terms of that technology developing on the commercial sector. So I think what we'll see is that humans will be enabled more and more to do more and more operations in various locations remotely. 
I think that we'll see that we have better information and more real-time information. I think that we'll see that we have a more connected battlefield where information is flowing between robotic systems, humans, AI agents that are helping with decision making. And I think that we're going to see the pace of operations speeding up more and more. But the other thing I know is that every time we've gone into a conflict in history where technology played a decisive role, whether it be the advent of aviation, rifling that was introduced into firearms, gunpowder, whatever it was, we were always surprised by it on the battlefield. And so I think that there'll be some surprises that nobody's anticipating. I think it's reasonable to project current trends into the future, but leave about 30% for something we just don't see yet. Interesting. What about LLMs? I feel like we've been mostly talking about autonomy here, but I was kind of excited when we were talking earlier. You talked about using, I think, Gemini in your daily job and your wife using Cursor as a developer. I mean, that seemed pretty exciting. It's pretty modern. Like what LLM applications are you seeing right now? Yeah, so the Department of Defense is really trying to figure out how to do this. They know it's important. And so the Air Force has done some great work. And it's a policy issue for us right now. We've never had to sort of bring intelligent agents into the DoD before and what that means in terms of our information protection, in terms of policy compliance, ethical considerations and all those things. So, you know, really taking a measured and cautious and diligent approach to all that. What's happening though is there's some limited instances that are starting to get piloted right now where it's Gemini, there's Anthropic, and I've seen the Air Force recently come out with some applications that are able to access those. And Microsoft making some of those available at different impact levels so that everybody who would have access to these systems now has access to the LLMs. And what that means is that we can expect things to move more rapidly. We can expect a higher quality on the first go. We can expect reviews, intelligent assistance, these complicated processes to become easier to understand and to navigate through, cycle times to be improved, developers to be more productive. I think it's a really exciting time for us to be onboarding this capability. And I think there's an incredible opportunity because some of the things that are happening inside of the DoD with these LLMs are gonna be so unique relative to what's happening outside of the DoD that there's probably some space that's not being adequately served there about how an LLM could be specifically tailored to meet the needs that are inside of the Department of Defense, as an example. Could you be a little more, like, at all more specific about what that might be? Like, what would be different? Because it's all, you know, kind of humans trying to solve problems at a high level, I think, right? Yeah. So there's some guardrails, I think, that are on large language models about, you know, things that you could query and policy things that you generally wouldn't want people to be discussing or talking about that would be part and parcel to what it means to be in the DoD and conducting combat operations.
And so, you know, somehow figuring out a way to enable people that are doing mission planning or looking at warfare or making sort of plans or reviewing plans, processing data from the battlefield, having to deal with casualties and combat operations, right? Those might be triggering some policies and guardrails that would sort of generate the, hey, this violates our policy and we can't serve this at inference, right? And so I think there's a requirement there so that a little bit more interaction about like lethal operations in terms of what the DOD does, what we would want to plan for and how to process that information. And again, we haven't had to do this at scale during combat operations yet, but I would foresee that there'd be some policy issues there, both on our ability to use the large language models, but then also on the folks who are creating these where you wouldn't want normal people talking about how to conduct an assault on a position, right? But in the DoD, those are things that we're planning for and we would want to enable. And so I think there is something there to figure out, right? And we haven't had to see what those policy issues and problems are yet. But certainly the DoD would have some more limited applications if those aren't addressed. And, you know, right now we're just trying to figure out how to, you know, use these LLMs and these AI agents to sort of do the day job and do the paperwork and run, you know, low threat processes a little bit faster to process awards, do HR stuff. But when we get into more core business of the military and should we be looking to use these more in the future, you know, I think there has to be a closer partnership about some of the policy and guard rails that are associated with LLMs. One of the things that people worry about with LLMs is the idea that, for example, an individual could learn how to create a bioweapon and then actually build it or that people could learn on their own how to create really lethal weapons or big problems. Is that something that you think about or does that kind of change any of the ways that you think about security? It creates some new problems for sure. And ones that aren't new, to be honest, that information can be found in various places, right? And, you know, if you look back through some of the events that have happened, that is, you know, the genesis of where people have found these things. So the information is there. It makes it a little bit easier to get to. And again, this isn't something that the DOD or the military controls, what companies are able to serve or how their policies allow you to interface or jailbreak things or how they clamp down that information. because, again, we don't know how these models have been trained, so we don't have the data provenance to know what it was exposed to and what it knows and what it could figure out. You know, increasingly, if you have more and more intelligent agents, could you have it do one part of that planning over here in one chat window and then stitch together a number of chat windows? You know, these are questions I don't have answers to but are something that I think the trust and safety teams that these tech companies need to be taking seriously. And, you know, I'm really encouraged to see a lot of these companies think about trust and safety and put up policy guardrails to try to prevent this. 
And I think it's something that they just need to continue to monitor and to take an active role in to ensure the security of these intelligent agents, so they don't put people needlessly at risk because of policy problems and guardrails that they didn't put in place. So one of the things that really affected, you know, kind of my world recently, and actually the stock market quite a bit, was the surprising launch of DeepSeek and R1 and R1-Zero. I'm curious if you noticed that and if it changed your views at all on LLMs and how they might work. Certainly. So we did notice that. That was obviously a big development that kind of captured everybody's collective intelligence and attention, even still now, still now. And so that was a big surprise. Hard not to notice it. I think I'm less concerned in terms of military application, in terms of what we're working on, what that might mean. Because, you know, we're still trying to figure out and to process policy issues, use cases, applications on how to use existing U.S. sort of AI systems that are already kind of being onboarded into the Department of Defense to do our mission more effectively. I think there's so much to work out there. It's a bit like running a race in this way. So if you're running a race, right, do you focus on running your race and hitting your marks and finishing on time and pacing yourself? Or, you know, are you looking at your competition the whole time? And I think in this case, there's so much work to be done and so much value to be garnered from the technology that exists within the U.S. sector right now. You know, we're really focusing on the opportunities that we have and what we can do as opposed to things outside of our control. And I think, again, it's one of these factors, and the best LLM will never replace a good soldier, sailor, airman, or marine. And having, you know, people who are brave, courageous warriors willing to go out and be part of a team to do this is so much more important than a 10-20% competitive advantage between large language models right now, that, you know, I think we're best off focusing on what we have and making the most of it. And, you know, I think there's, again, there's so much ground to be gained by doing that, that I don't think we need to be overly concerned about what capabilities exist from other countries right now. Because I don't think anybody has really fully and completely garnered the value that is available from technology like this yet. So I think the people that will be best off are the ones who integrate it best, train with it best, and teach people how to use it best to do their jobs, as opposed to a 20% performance improvement on an ARC, you know, baseline. I think one of the things that you mentioned to me earlier was also that you, I think you said you wrote like a white paper on getting people to use AI better or something like that. That actually, we didn't find that in our research on you, but that seemed really interesting. Can you describe a little bit of what that was? And maybe is there a pointer to that that we could find and put in the show notes if it's publicly available? Yeah, thanks. I appreciate that. It's a paper that's under review right now with an academic publication. I like to do some academic publications with some colleagues in the Navy, and so that's a work that's still happening now. So that's probably why you didn't notice it. I did have a publication in Technologies earlier about using AI metrics in continuous improvement and to take a look at where we're at and to make advances there.
And there's an upcoming publication in IEEE about an agility readiness level, using AI and contracting approaches and organizational agility to get to the correct contracting approach. But the one that you're referring to, which I'm really excited about, is on leadership behaviors that will enable organizations to adopt AI and technology better. And it really happened because we started bringing some early pilot programs into our particular team, and adoption was slow at first. What was happening is that my teammates were concerned that it would be negatively perceived if they were using AI to do, or help do, their job better. And so there was an adoption problem that I recognized early on. Now, I've seen the research that shows something like a 25% improvement for consultants that use AI tools to do their job. And these are highly trained, really intelligent people that are getting, you know, double-digit boosts in productivity by doing it. So I knew it was important for us to do. And so I started just asking the question as a leader: what can I do to improve the adoption of AI throughout my organization?

And as funny as it sounds, the most effective thing I found was this. I like to interface with people one on one, or with groups in my office, in person or over Teams. And if a question came up or there was something that I didn't know, I used to ask the people in the office, hey, what is this thing? We would talk about it and they would bring me up to speed on a particular aspect. I started just asking our AI assistants for that instead. But I wouldn't do it in a way where I would obscure the fact that I was asking these questions to the tutor, assistant, AI agent that I interface with. I would turn the screen towards my team in the middle of the meeting. I'd type the question in, in some cases while the conversation was still happening. I would generate the response. And then I would say, hey, there are these things that we hadn't considered, and they would read it with me. And all of a sudden, within a week or two of starting this very simple behavior, my other senior leaders who were in these meetings with me started doing the same thing with their teams, and started sending me products that were better organized, that were better structured, that had better, you know, review and thought behind them. And I would ask them about it, and they said, well, we just started using the same tools you did. And what they realized is that instead of looking down on them as teammates and leaders for using AI, I was encouraging them to do it, and I wanted to see the output of it, right? Again, they're responsible for their work. They can use the tools as they want. They have to cover all the right policy things. But it all started with me asking the question: what are the leadership tools that we can use and demonstrate as leaders so that our teams feel safe, and have the psychological safety and our backing, to adopt these tools?

So there's actually one of the AI researchers, she's at Georgetown and she's at AWS right now, and we're co-authoring a paper that we hope to present in Switzerland on these leadership behaviors. It's under review right now. When it's published, we'll send you a link to it. Hopefully it gets accepted for publication. But it's really around the human aspects of adopting AI in teams, and incentivizing people to do that in a way that keeps them accountable.
And so we're continuing to explore these kinds of behaviors, whether they be training sessions for everybody to understand what the capabilities are, rolling out new tools and expressing excitement and enthusiasm about that, demonstrating leadership behaviors where we're using these tools and encouraging our teams to see us using them, or integrating them into existing workflows. You know, this is going to seem silly because probably most people are doing it already, but I ask everybody to make a transcript of every Teams call that we have so we can download the text. Now we have NLP. And my thought was, if we ever did need to build a repository and perform RAG, we wouldn't have to download all of our program documents. We'd just pull our transcripts, and it would have our conversations, and maybe it would know us. And now we could interface with agents who had access to everything that we had said in these transcripts that we controlled. And the compression of information is so amazing when you go from spoken language and video into a text transcript. This is just another one of those leadership behaviors where you say, hey, where's the transcript? Where do they go in the repository? How do we structure our file folders so that we can get to these, and so that an agent can make sense of what program we're talking about based on just where we put the transcript? And so all of these were things that I started kind of unpacking as a team, and I've seen some positive responses. So if you've got some best practices there, maybe we partner on that paper together in the future.

I mean, it sounds like I could learn from you, I don't know. That seems fantastic, and really interesting and cool that you have space to both innovate in your leadership in that way and also space to publish research about it so that people can learn from it.

I'm really fortunate. Part of my portfolio is medical, so I have these PhDs in physiology that are part of my team and work for me. And one of them was really big into AI. And when I started talking about publishing in peer-reviewed journals, their eyes lit up. And so we would come up with these topics and submit them for publication. We'd partner with academic institutions, and it would go through the review that the Navy required. And I've actually published more academic papers now, 20 years out of grad school, than I did when I was getting my master's degree, and I'm going to get into IEEE, which I'm super thrilled about. So I'm, you know, really proud of that.

That's really, really cool. And it's amazing the breadth of topics, right? From LLMs and leadership, to contracting, to, I think you have a paper on the underwater mine finding also, right?

For sure. Yeah, absolutely. And so I love this job because it's so broad in application. We get to look at so many things. And, you know, it's funny, I swim in this every day, so I understand how it works in the government: how to put things on contract, to test things, to structure a contract, to award mods and options, to work with the Defense Innovation Unit, to evaluate commercial solutions. I'm like, yeah, of course, everybody knows this; I want to learn the tech part of it. So I turn out to be more technical than most PMs. And when I talk to technical people, they're like, oh my gosh, you understand how the process works.
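Returning briefly to the transcript-repository idea a few exchanges back, here is a minimal sketch of what retrieval over a folder of meeting transcripts could look like. The folder layout, file naming, and the naive keyword scoring are illustrative assumptions only; a production retrieval-augmented setup would typically use embeddings and a vector store rather than word overlap.

```python
# A minimal sketch of retrieval over meeting transcripts stored as plain-text
# files under per-program folders (layout and scoring are illustrative
# assumptions, not a description of any fielded system).
from pathlib import Path

def load_transcripts(root: str) -> dict[str, str]:
    """Map 'program_folder/meeting.txt' paths to transcript text."""
    base = Path(root)
    return {str(p.relative_to(base)): p.read_text(encoding="utf-8")
            for p in base.rglob("*.txt")}

def retrieve(query: str, transcripts: dict[str, str], k: int = 3) -> list[str]:
    """Rank transcripts by naive keyword overlap; real systems use embeddings."""
    terms = set(query.lower().split())
    ranked = sorted(transcripts.items(),
                    key=lambda item: len(terms & set(item[1].lower().split())),
                    reverse=True)
    # The folder path doubles as context about which program is being discussed.
    return [f"[{name}]\n{text[:500]}" for name, text in ranked[:k]]

# The top-k snippets would then be prepended to the prompt sent to an LLM or
# agent, so its answers are grounded in what the team actually said.
```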
And so I'm in this really great space where I can draw in experts from these different domains and make a contribution, whether it be to academic work or to advancing technologies within the system. And I just let my creativity guide me there and find interested partners who are willing to put in some work with me. And it scratches a lot of itches. And honestly, one of the things I'm most nervous about in retirement, when that comes, is that I won't get to have such broad breadth and explore things as freely as I do now, because I love it.

I mean, it sounds like you could spin out a company that uses AI to help with contracting with the government, and I think it would be wildly successful. You could sign me up first.

I'm ready when you are, Lucas.

I guess as a final question, are there other topics in AI that you're interested in exploring? Is there any kind of research that's percolating in your mind that you'd want to talk about, or applications that you think are coming that you're looking into but haven't had a chance to publish?

Multi-agent collaborative autonomy, to me, is such an interesting topic that's so underserved. I would love to take a look at how information interchangeability and interoperability could work with robotic systems on the battlefield. And so, you know, I think about all these robots that are going to come, and all these sensors that are all in different file formats, generating streams and streams of information that we're generally not paying attention to. How do you sift through that and find the signal in the noise? And then how do you build the right agents at the right levels to perform the right functions, with interoperability between the agents and with data interoperability as well? I'm so fascinated by this idea because, in my mind, I picture a four-person team out on a boat that's going to use these vehicles to run a mission, right? They have to plan the mission. That's done by hand right now, by entering a bunch of data points. It's pre-programmed and it runs. But what if the data that's being generated during the mission dynamically, with an AI agent, helps the human retask and replan that mission more effectively, right? And then that data percolates up to the next level in the chain of command, who's making decisions about those missions across many ships. And the data interoperability allows that to seamlessly flow in, be compressed properly, and the agents can now talk to each other. So the next agent understands what's happening at the individual missions and what it means to the larger operation. And that just keeps flowing up to agents who can communicate and interoperate with each other, and they're tuned to the right level of the chain of command they have to support. So the agent that's trying to figure out whether the thing you're looking at is a mine or a rock shares the same data, but does a different function than the agent that says, here's where we need to put all the ships across the entire battlefield right now. And so I think it's so interesting to look at multi-agent collaborative autonomy, where the agents have interoperability of the data and the information, perform different tasks, but can feed into each other's decision-making process in a dynamic, real-time, connected way. That would just be amazing to explore, unpack, uncover, and develop. That's the first.
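As a rough illustration of that data-interoperability point, the sketch below shows one way a shared detection record could be consumed by agents at two different echelons: a mission-level agent normalizing raw sensor contacts, and a higher-level agent summarizing mine-like contacts across vehicles. The schema, threshold, and agent roles are hypothetical assumptions for illustration, not a description of any fielded system.

```python
# Hypothetical sketch: two agents at different echelons sharing one data schema.
from dataclasses import dataclass

@dataclass
class Contact:
    vehicle_id: str     # which AUV reported the contact
    lat: float
    lon: float
    score: float        # classifier confidence that the contact is mine-like

def mission_agent(raw: list[dict]) -> list[Contact]:
    """Mission level: normalize raw sensor records into the shared schema."""
    return [Contact(r["vehicle"], r["lat"], r["lon"], r["score"]) for r in raw]

def classify(contact: Contact, threshold: float = 0.8) -> str:
    """Tactical function: mine or rock? (illustrative threshold)"""
    return "mine-like" if contact.score >= threshold else "non-mine"

def operational_agent(contacts: list[Contact]) -> dict[str, int]:
    """Higher echelon: same data, different function. Count mine-like
    contacts per vehicle to inform where ships should and shouldn't go."""
    summary: dict[str, int] = {}
    for c in contacts:
        if classify(c) == "mine-like":
            summary[c.vehicle_id] = summary.get(c.vehicle_id, 0) + 1
    return summary

if __name__ == "__main__":
    raw = [{"vehicle": "auv-1", "lat": 26.1, "lon": 56.3, "score": 0.91},
           {"vehicle": "auv-1", "lat": 26.2, "lon": 56.4, "score": 0.40}]
    print(operational_agent(mission_agent(raw)))  # {'auv-1': 1}
```

The design point is that both agents read the same record; what differs is the function each performs and the echelon it serves, which is the interoperability idea described above.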
The second would be an LLM specifically designed to support the use cases within these kinds of operations, tailored to do just that. So that example of Nimitz to Halsey, that was a message. Those were words that were improperly written, that were improperly understood, and the mission never happened. Imagine if that could never happen again, because these intelligent assistants were perfectly tuned to how to do that, and the person who receives the message could ask the agent exactly what was happening. And that was the same agent that wrote the mission on the other end, right? The data compression happened at the agent level, not in the communication of the words itself. And so I think those two things are kind of what's on my mind right now, and I would love to explore them more.

You know, I think you really bring an interesting lens to that, because it seems like you're very oriented towards the way humans collaborate and work together, and you clearly view that as more important than the technology. But then I feel like you're applying that focus on communication to technology, getting agents to collaborate well and work together as a team, and also a focus on message passing with clarity. It seems like it's a real thread.

It's the fog of war. It is. And it's the von Clausewitz thing. The fog of war is so difficult to get through. You're tired. You're exhausted. The adrenaline is pumping. It's broken messages. Data is sparse. You're trying to make sense of that on the battlefield. How much progress could we make by solving that problem? And everything is a human problem at the end of the day, right? And so that's why technology is meant to enable and empower humans, not replace us, particularly on the battlefield, no matter how good it gets. And so making that more efficient, faster, and easier is super interesting to me, and there are outsized returns from doing it, completely outsized, when you compare that to, you know, Mach whatever on an airplane. It's not even close to the benefits you would get.

You know, I have to ask you. I feel like a lot of my friends, entrepreneur CEOs that have never been anywhere near a battlefield and don't know anything about it, love this quote, I think it was a general who said it, that amateurs talk strategy and professionals talk logistics. Is that a real quote? Does that ring true to you in your field?

It's used all the time. Whether it's an actual quote or not, it's a truism that's used all the time. That's absolutely the case. So in my world, I develop capability and then I field capability out to the Navy, right? And so the analogy that I make is: amateurs get excited about working prototypes.

Nice.

Like, oh my gosh, we found this thing that works. It's like, hey, stop. Everything works at some point. It doesn't work well enough for us. It's not hardened yet. Something that works is maybe 5% of the problem. And so it is so true that amateurs get excited about, oh, hey, there's this thing, should I turn left or right? The tactic that's involved. And the professionals are like, we have an army to resupply, how do we get enough food so we don't starve? Questions that you just wouldn't think about if you're so excited about the tactical part.
And the way that translates into my world is: hey, tech that might work might not work for us. And understanding that difference is a huge gap, and the skill set to close it is really difficult to get and takes a lot of time and energy and effort. And so professionals ask about sustainment, fielding, cybersecurity, and policy compliance, not just about how effective the system is. Effective systems are table stakes, but that alone doesn't get you there.

Awesome. And right there, that's a great one to end on. Thanks, Jon.

Thanks, Lucas. That was fantastic. Really appreciate it.

Thanks so much for listening to this episode of Gradient Dissent. Please stay tuned for future episodes.
