

Amperity Reimagines Data and Developer Workflows with AI - Ep. 271
The AI Podcast (NVIDIA)
What You'll Learn
- Amperity is an AI-powered customer data cloud that helps brands unify, understand, and activate customer data at scale
- The company was founded in 2016 with the goal of solving the problem of enterprises struggling to unify their customer data, which they believed AI could help address
- Amperity's core capabilities include stitching customer data from multiple sources, creating a high-quality customer 360 view, and activating that data across the marketing ecosystem
- The speaker defines 'agentic AI' as AI systems that can impact control flow, such as retrying actions, calling other tools, or interacting with other agents
- Amperity is seeing agentic AI used in customer support, analytics, and data introspection, and some early prototypes for direct-to-consumer applications
- There is a wide range of reactions from Amperity's clients, from highly skeptical to fully embracing agentic AI, but the trend is towards more widespread adoption
AI Summary
The podcast episode discusses Amperity, a company that is using AI to help enterprises unify and activate their customer data. The co-founder, Derek Slager, talks about how Amperity's mission is to help brands better understand and serve their customers by improving data quality and enabling more advanced analytics and activation. He also discusses the company's approach to 'agentic AI', where AI systems can impact control flow and take on more autonomous tasks like customer support and data analysis.
Frequently Asked Questions
What is "Amperity Reimagines Data and Developer Workflows with AI - Ep. 271" about?
The podcast episode discusses Amperity, a company that is using AI to help enterprises unify and activate their customer data. The co-founder, Derek Slager, talks about how Amperity's mission is to help brands better understand and serve their customers by improving data quality and enabling more advanced analytics and activation. He also discusses the company's approach to 'agentic AI', where AI systems can impact control flow and take on more autonomous tasks like customer support and data analysis.
What topics are discussed in this episode?
This episode covers the following topics: Customer data unification, AI-powered analytics and activation, Agentic AI, Conversational interfaces, Data-driven customer experience.
Who should listen to this episode?
This episode is recommended for anyone interested in Customer data unification, AI-powered analytics and activation, Agentic AI, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Derek Slager, co-founder and CTO of Amperity, explores how agentic AI and vibe coding are reshaping enterprise data management and the developer experience on the NVIDIA AI Podcast. Hear how Amperity's platform unifies customer data, powers advanced analytics, and brings conversational interfaces to every part of the organization—helping brands activate, segment, and leverage insights at scale. Discover why data quality matters more than ever, how agentic AI transforms workflows, and why human accountability stays central in the age of automation. Learn more at ai-podcast.nvidia.com.
Full Transcript
Hello, and welcome to the NVIDIA AI podcast. I'm your host, Noah Kravitz. From agentic AI to vibe coding, our guest has been at the forefront of showing how AI-powered systems can empower engineers while delivering measurable business value. Derek Slager is co-founder and chief technology officer of Amperity, a company that's redefining how enterprises use data to better understand and serve their customers. Derek's here to talk about his journey founding Amperity, the tools his team is building, like Chuck Data, an AI agent for data engineers, and his perspective on how AI is reshaping not just the enterprise landscape, but the developer experience itself. So let's get to it. Welcome, Derek, and thanks so much for joining the NVIDIA AI podcast. Thanks, Noah. Great to be here. Great to have you. So let's start at the beginning. Tell us a little bit about Amperity. Yeah, so Amperity is an AI-powered customer data cloud, and we help brands unify, understand, and activate customer data at scale, which is a lot of things. And, you know, we're really particularly focused on data quality, right? Because we're big believers that better data equals better results, right? Whether you're funneling that through agentic use cases or otherwise. And so we have a lot of great capabilities to help people all over a consumer business take advantage of that data, but it's all about getting the data right. So what inspired you to co-found the company, and how does your background, your own journey as an engineer, kind of shape the direction of Amperity? Yeah, for sure. So we started in 2016, right? Which, you know, I sometimes refer to as like the false start AI era. Right, sure. There was a lot of excitement about kind of deep neural networks and a lot of other kind of innovation happening in the AI space. But certainly it was nothing in comparison to the current AI wave. But nonetheless, right, like that was kind of, you know, it was on a lot of people's minds.
And, you know, what we observed in kind of researching Amperity was almost every consumer brand had a project to unify all their customer data. And yet we couldn't find a single one. And we tried pretty hard. We couldn't find a single one that said, yeah, we solved it. Right. Despite all that effort. And we just found that ridiculous. And so, you know, what inspired us to start Amperity was really the opportunity to help all these people who were trying and failing to solve a problem that was really important to their business actually succeed in doing that. And of course, like the key point there was, you know, all these people were smart. They were trying really hard to solve it. Right. So clearly the existing tools were insufficient, because if they were sufficient, people would be achieving more success. And so, you know, the big idea of Amperity was really how do we bring AI to this problem? And we felt like, again, this is through a 2016 lens. We felt like AI was sort of a perfect fit because a lot of the reasons that people were struggling with this problem was that they were trying to make the data perfect. Right. And yeah. AI kind of lives in that space where maybe it doesn't give you a perfect answer, but it gives you a really high-quality answer. And when we applied it to that problem, it worked really well. And so we've kind of carried that forward into the modern AI era, and that's allowed us to kind of bring acceleration to a whole bunch of other things, which I'm sure we'll talk a bunch about today. Maybe before we dive in a little deeper, can you give us kind of a high-level overview of some of Amperity's offerings, products, services, how you work with customers? Yeah, so we have kind of a core capability around helping users organize and unify data. So a big component of that, and kind of our initial R&D innovation, was around stitching. And that's about kind of finding all the Noah Kravitzes all over these different data sources.
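The "stitching" idea Derek describes is essentially entity resolution: linking records that refer to the same person across many data sources. The toy sketch below is not Amperity's actual Stitch algorithm (which is ML-based and probabilistic); it's a deliberately simple rule-based version, with made-up field names, just to show the shape of the problem.

```python
# Illustrative sketch of identity "stitching": grouping customer records
# from different sources under one normalized identity key. A real system
# would use probabilistic/ML matching rather than exact-key grouping.
from collections import defaultdict

def normalize(record):
    """Build a crude matching key from lowercased name + email local part."""
    name = record.get("name", "").lower().replace(".", "").strip()
    email = record.get("email", "").lower().strip()
    return (name, email.split("@")[0] if email else "")

def stitch(sources):
    """Group records from many named sources by their normalized key."""
    clusters = defaultdict(list)
    for source_name, records in sources.items():
        for rec in records:
            clusters[normalize(rec)].append((source_name, rec))
    return clusters

# Two sources spell the same person differently; they land in one cluster.
crm = [{"name": "Noah Kravitz", "email": "noah@example.com"}]
pos = [{"name": "noah kravitz", "email": "NOAH@example.com"}]
clusters = stitch({"crm": crm, "pos": pos})
```

In practice exact-key matching like this breaks on typos and nicknames, which is exactly why Derek frames the problem as one where AI gives "a really high-quality answer" rather than a perfect one.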
A typical Amperity customer will have 25, maybe 30 different data inputs that contain information about customers. And so, you know, we're looking for all the Noah Kravitz needles across those various haystacks. So it's a, you know, it's a challenging at-scale problem. And so on the other side of that, then we can sort of utilize the outputs of Stitch to create a really high-quality customer 360. And that allows people to kind of, you know, ask questions and understand their data better. And so we have a bunch of tools, you know, that allow people to, you know, kind of introspect and slice and dice. And, you know, many of those are, you know, powered, you know, in 2025 by modern conversational interfaces, which is really exciting because it empowers many different people from the organization. And then we have a lot of capabilities to take the data, and take the kind of different segments of that data, and then get it out into the ecosystem. Right. There's thousands of marketing tools and thousands of customer support tools and thousands of all these, you know, kind of tools that all are hungry for better data. So Amperity is really focused on getting that data right and then activating that ecosystem. And we have a number of capabilities, you know, from kind of campaigns with A/B testing to journeys and many other kind of, you know, pieces of capability to sort of allow that to kind of integrate with the ecosystem. Right. Fantastic. One of the things, one of the, I don't know if you'd call it a tool per se, but one of the things that's getting a lot of attention currently in the world of AI is AI agents, agentic systems. We've talked a lot about it on the podcast, talked about it from the NVIDIA perspective of building the blocks that allow folks like you at Amperity to build the tools to serve your customers. What does agentic AI look like from your point of view when you're out working in real-life business situations and developing applications?
What does agentic AI mean right now? Yeah. That puts you right on the spot. There are so many definitions of agentic AI. And I'll admit sometimes, you know, I'm a little flexible with my definition. Sure, sure. Sure. I can rephrase and say, how do you approach it? What do you think about? What's the lens? Yeah, sure. So, you know, I'll start with my own definition, and I've heard many definitions that are very complex. You know, my own definition is very simple and, I think, pretty expansive: agentic is just a program where the LLM can impact control flow. And so, you know, impacting control flow might be retrying, might be calling a tool, it might be interacting with another agent. And so, you know, it's pretty open-ended in terms of what could fit that definition of agentic. The reason I use that as the definition is because, you know, when the LLM is dictating control flow, that's very different from a traditional program, right? A traditional program that makes a call to an LLM is still a traditional program, and your eval metrics and other things look awfully familiar, you know, relative to, you know, some other things that you would do. But once you kind of, you know, in essence, let the algorithm take the wheel, things change a lot. And it changes kind of how you build systems, how you monitor systems, how you evaluate systems, and more importantly, it changes what those systems can do, you know, for your business. And so, you know, obviously we're working a lot with that technology, but also I have the opportunity to talk to a lot of customers who are, you know, working with that, you know, technology in their own organizations in various ways, oftentimes using, you know, kind of Amperity data assets. But we see use cases, I think customer support is one of the biggest use cases.
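Derek's definition, a program where the LLM can impact control flow, can be sketched in a few lines. In the sketch below, `call_llm` and the tool registry are hypothetical stand-ins (the model call is faked with canned logic so the example runs); the point is only that the model's returned decision, not the program, picks the next branch: retry, call a tool, or finish.

```python
# Toy "agentic" loop per the definition above: the model's output decides
# what happens next. `call_llm` is a stand-in for a real model call.
def call_llm(prompt):
    # Fake decision logic: ask for a tool until the weather is in context.
    if "weather" in prompt and "sunny" not in prompt:
        return ("tool", "lookup_weather")   # model decides it needs a tool
    return ("finish", prompt)               # model decides it's done

TOOLS = {"lookup_weather": lambda: "72F and sunny"}

def run_agent(task, max_steps=5):
    context = task
    for _ in range(max_steps):
        action, payload = call_llm(context)
        if action == "tool":            # LLM chose to call a tool
            context += " | " + TOOLS[payload]()
        elif action == "retry":         # LLM chose to try again
            continue
        else:                           # LLM chose to finish
            return payload
    return context                      # step budget exhausted

result = run_agent("what's the weather?")
```

Contrast this with a "traditional program that makes a call to an LLM," where the branching above would be fixed in code and the model output would only fill in content.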
And I think, you know, you know, one of the interesting things is, you know, a lot of people think, yeah, of course, like, you know, agentic for customer support, that makes sense as a cost cutting exercise, but that's terrible customer experience. I think what I've found talking to customers about it is their customers like it. And so even when they're kind of fully eyes wide open, hey, I'm interacting with an agent, there's just, it actually yields a good customer experience. So, you know, I think that's one I'm seeing a lot, you know, analytics, introspection, kind of understanding the business. There's a lot of agentic use cases that I see there. And many customers are experimenting with bringing agentic use cases to the end consumer. And I don't see quite as many of those. But ultimately, I think that's, you know, what we're going to start to see a lot more of as time goes on. I think, you know, I'm seeing a lot of those kind of up close and personal in prototype stage. And a lot of them are a little bit stuck there at the moment as they kind of work through that kind of, you know, long tail of challenges. But I think over the next year, we'll see a big increase. And that's becoming a prominent component of customer experience for consumer brands. Right. How do your clients, the users you work with, how do they feel about letting the LLM take the wheel? Is there, I'm sure there's a mix of things, but what's the sense you're getting and how are our companies adapting? It's a really interesting question. I would have given you a boring answer six months ago, you know, but like, it's amazing how much it's changed in six months. And I'll start by saying like, there's huge variance. And I don't always know, right? It just kind of depends. 
Probably the major variable there is kind of, you know, how much AI experimentation the leadership team has done. You know, because I think once they get it, it flows down really, really quickly. Yeah. But yeah, I have conversations with people who are like, yeah, we want everything in our organization to work this way yesterday. How fast can we get there? And I work with other people who are like, yeah, we're still kicking the tires. Right. And so, you know, it's incredibly, you know, diverse, the attitude, but boy, the trend line is clear. Right. And in the last six months, it's amazing how many people have gone from kicking the tires to all in. And I think after another six months passes, I think we're going to see probably, you know, 80% of people are going to be in the all-in mode. It feels like time has sped up over the past few years. Yeah. Like extremely. Yeah. Yeah. Can you share a story of delivering an agentic-based, an agent-based experience application for a large client, a Fortune 500 client, something like that, and kind of tell us about how it went, if there was a challenge or kind of a surprise to overcome and, you know, how the customer responded. Yeah, absolutely. So, you know, it's interesting. I'll maybe give an Amperity-centric story since, you know, those are the ones I'm closest to. We started, like a lot of companies, our, you know, LLM-in-the-product journey with text-to-SQL, you know, and that was kind of, you know, in a part of the product surface that was, you know, for people who already knew SQL, and it was, you know, kind of a nice augment of the experience. But then we challenged ourselves. We said, hey, what if we built something that was for people who don't know SQL, right? People who want to ask questions of their data and better understand and make decisions about their data, but don't have that skill set. Like, what could we build for them? And so, you know, we built a capability called AMP AI.
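The text-to-SQL pattern described here, a model turning a plain-language question into a query over the customer 360, could look roughly like the sketch below. This is not AMP AI's actual implementation: `ask_llm` is a hypothetical stand-in (faked with a canned response so the example runs), and the table is invented for illustration.

```python
# Rough sketch of a natural-language-to-SQL flow. `ask_llm` fakes the
# model call; a real system would prompt a model with schema + question.
import sqlite3

def ask_llm(question, schema):
    # Canned response standing in for a generated query.
    return "SELECT COUNT(*) FROM customers WHERE lifetime_value > 500"

def answer(question, conn):
    schema = "customers(id, name, lifetime_value)"
    sql = ask_llm(question, schema)
    # Guardrail: only allow read-only queries before executing.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("only SELECT statements are allowed")
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, lifetime_value REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ada", 900.0), (2, "Ben", 120.0), (3, "Cy", 640.0)])
count = answer("How many high-value customers do we have?", conn)
# count == 2: Ada and Cy exceed the 500 threshold
```

The interesting product move Derek describes is hiding even this SQL layer, so non-technical users only ever see the question and the answer.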
And to the point about a curveball, I'm going to admit I was a little skeptical, because what I thought was we're going to build the surface, people would try it a little bit and then be like, well, I'm not sure if the data is right, so I'm just going to go back to what I did before, which is, you know, ask one of these SQL people to give me the answer. And I suppose I could say I was pleasantly surprised, because what we found is when we gave people AMP AI, there was a lot more energy for people to kind of really get in and introspect the data than I had thought. Right. And it was surprising to me how much that kind of SQL interface between the user's question and the answer was a barrier. And so, you know, when we created the experience that was for them, I was amazed at how many people used it, and not just used it a couple of times to try it. But like, you know, we did a cohort analysis and we looked at usage over time. And, you know, people who used the product, you know, a little bit, once they started kind of using the AI interface, went up to using it a lot and stayed there. And so it was really fascinating to me to see, you know, a whole bunch of people without a very technical skill set all of a sudden become more data-informed. Right. And all we had to do was put this, you know, little surface on top of the data. It was really, you know, kind of a great surprise and an exciting surprise, and certainly has given us the confidence to, you know, lean more into these AI use cases. Yeah, no, that's what you want to see. Right. You get that adoption and it sticks. For sure. Yeah. Along those lines of opening up more technical capabilities, possibilities for the less technical people, let's talk about vibe coding. Yeah. Maybe nine months, 12 months ago, whenever it was, I saw a video, and forgive me, I can't remember the name of the company, where somebody, I think they were doing it on a mobile phone, said to the phone, create an app that does XYZ.
And we watched as, on the screen, it spit out the code and ran the app. And it was, oh my goodness, like, what's going to happen to the world? Now we fast forward. And the other day, I asked a coding tool like this to recreate one of the arcade games I grew up with. And, you know, as you alluded to before, the results might not have been perfect, but it sure wrote a whole lot of code that I couldn't have written myself. This is amazing. It opens up possibilities. And this is more from what I hear from people who are more versed in coding and security and things like that than myself. It also opens up some potential issues. Talk to us about vibe coding. What does it mean to you? What does it mean for engineers, people who are actually steeped in what they're doing? And how is this changing the landscape now and going forward? Yeah. So vibe coding, I love that it has a name now. Right. And I didn't even explain what it meant. I just assumed at this point. Yeah, I think if they're listening to the NVIDIA AI podcast, they're up on things. Right, right. Yeah, I think that makes sense. Yeah, I think vibe coding has, you know, impacted everybody. And you gave the examples of kind of these, you know, increasingly impressive one-shot examples where you give a single prompt and something pretty amazing comes out the other side. Right. That's maybe one category of vibe coding. And then I think, like, vibe coding for the, you know, professional software developer set looks very, very different, right? And I think part of that is, you know, people are working with very large code bases and, you know, just the properties of the system and what you get out are very different than the big one-shot thing, where it's generating lots of code and things need to integrate with an existing workflow. I think what's very interesting is vibe coding changes the workflow of programming.
And I think this is something that, you know, I've really seen: the people who in particular get tons of value out of vibe coding are not just doing what they've always done, but doing it a little bit faster because they're involving an LLM. They've kind of re-architected how they write code. And specifically, a lot of times they're taking advantage of asynchronicity. So, you know, some engineers will launch 10 different, you know, LLM-based processes at the same time. And then they may go off and do something that, frankly, if you were sitting there watching them code, would look exactly like what they've always done. And then, you know, at the end of that session, they're going to go back in and check on the results of those agents. And, you know, they might take three or four of those and just say, ugh, garbage, delete it. You know, or just start from scratch, right? And then they might take a couple more of them, make a few refinements, you know, check in on the code. And then there's a couple others. Maybe it just nailed it. Right. And those will kind of go straight through following a quick review. But if you think about it, like, you know, they've essentially, you know, put kind of, you know, 10 additional engineers on their desk. And the engineers have some interesting properties. Right. Sometimes they are really impressive and sometimes they're astonishingly terrible. But they're, you know, they're effectively free, you know, up to the cost of some token generation. And so I think, you know, there's a very different workflow that comes with vibe coding. And, you know, obviously that has a kind of profound impact on what a team can accomplish. I think the other interesting thing about vibe coding is we're increasingly starting to see people with different skill set profiles contribute directly to the product. Right. So, you know, rewind a year. If you're a designer or you're a product manager, you're probably not impacting production code except by influence.
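The asynchronous workflow described here, kick off several generation attempts in parallel, then come back and keep only the good ones, can be sketched with a thread pool. Everything in the sketch is a stand-in: `generate_patch` fakes an LLM coding task with a deterministic pseudo-quality score, and the "review" step is reduced to a threshold filter.

```python
# Sketch of the "launch N attempts, review later" pattern described above.
# A real setup would dispatch prompts to a model or coding agent instead.
from concurrent.futures import ThreadPoolExecutor

def generate_patch(task_id):
    # Stand-in: pretend some attempts come back good, others are garbage.
    quality = (task_id * 37) % 10          # deterministic pseudo-score, 0-9
    return {"task": task_id, "quality": quality}

def run_batch(n_attempts=10, keep_threshold=7):
    # Launch all attempts concurrently, like 10 agents working in parallel.
    with ThreadPoolExecutor(max_workers=n_attempts) as pool:
        results = list(pool.map(generate_patch, range(n_attempts)))
    # "Review" phase: discard weak attempts, keep the promising ones.
    return [r for r in results if r["quality"] >= keep_threshold]

kept = run_batch()
```

The economics Derek points at fall out of this shape: discarding a bad attempt costs only the tokens it consumed, so over-generating and filtering becomes a reasonable default.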
Right. But that's changing. Right. You can say, I think this should be this way and you can go and try it. And, you know, we've built a lot of infrastructure here to enable these vibe coding use cases to be accessible to a lot of people. So you can try something and, you know, it'll put it in an environment where you can test it so you don't have to spend, you know, two days setting up your local machine. And then you can say, yeah, this is good. And then have an engineer review the code and put it straight into production. When it comes to things like quality assurance, security, other concerns around putting code into production, is the answer, and I'm speaking from my own experience kind of as a content creator and using AI systems to help in a very kind of similar way to what you're talking about, just my words are content, not instructions to a machine. And is the answer as simple as having the right human in the loop to review the output? And like you said, you know, know how to incorporate this into the workflow, launch 10 processes, review them, you know, discard, use, adapt. Is it that simple and sort of at a high level? And what are some of the things kind of maybe more nitty gritty that you think about when it comes to vibe coding, quality assurance, security, you know, and working with these large code bases? Yeah. Yeah. You know, it's interesting. I think there's a lot of discussion about this. The way I see it, it doesn't change much. Right. Like and what I mean by that is, you know, human beings are typically the, you know, the weak link in a system. Right. I think building systems that are robust from an infrastructure perspective, from a scale perspective, from a security perspective requires you to be resilient to error. And when designed intentionally, you expect error. And so, you know, I do think that, you know, good system design looks the same, you know, when vibe coding is involved versus not. And arguably it's more important, right? 
Because, you know, the impact is larger. So, yes, I think you still need accountability mechanisms, right? Like, you know, at the end of the day, you know, we were very clear: engineers are responsible for what comes out. And, you know, just like, rewind 10 years, somebody might have copied and pasted, you know, some code from Stack Overflow. They're accountable, you know, for that, whether they used that as a shortcut or not. This is exactly the same in vibe coding, at least in terms of kind of creating the surfaces that we have. Now, there's security concerns, obviously, that are new and different, you know, and, you know, when we do, you know, penetration testing in 2025, you know, we're looking at things like prompt injection and other things, you know, in terms of the operational surface. But if we really kind of zoom in on the vibe coding era, it's like, yeah, absolutely, humans are still accountable for sure. Right. I'm speaking with Derek Slager. Derek is co-founder and CTO at Amperity, a company that is doing, I think it's safe to say, a variety of things to help their customers, other companies, users understand and make the most of their data. We've been talking about engineering, the developer experience. I want to ask you about one of Amperity's tools, Chuck Data. Tell us about Chuck Data. Yeah. So Chuck Data is something that we launched. And, you know, it's such an interesting extension of our vibe coding conversation because, you know, we were, you know, experimenting and understanding and learning the best ways to, you know, extract value out of the various kind of interaction patterns around vibe coding. And then we asked ourselves what seemed like sort of an obvious question: why is this only for software engineers? Like, you know, and, you know, we looked at some of the things that, you know, our customers or even some of our own teammates were doing related to customer data engineering, right?
Where, you know, they're pushing around, you know, big, big mountains of SQL or, you know, trying to kind of manually build, you know, workflow coordination patterns or, you know, maintaining, you know, huge kind of dbt code bases. And we thought, whoa, there's an opportunity here. Sure. And so the idea behind Chuck Data is, hey, what if we created a vibe coding tool for data engineers that's specifically focused on customer data use cases? Okay. And so, you know, Chuck was kind of packaged up to be a very, very easy way for people to, you know, in essence, have these kind of, like, you know, one-shot experiences that you described earlier in sort of the classic software engineering sense, but for data engineering use cases. Are there competitors? Are there other, when you were building Chuck Data, were there reference points? And less asking about what those were, but what's different about Chuck Data? What sets it apart? Yeah, it's such an interesting question, because the thought I had, you know, as we were kind of forming Chuck, and no doubt we took a lot of inspiration from Claude Code. You know, that's definitely kind of, you know, number one in the Amperity rankings of vibe coding tools. And so we took a little bit of inspiration from that, though Claude Code, of course, is aimed at more, you know, traditional software engineering. But I expected, you know, there was a point in time where I was like, okay, let me go find what other people are doing. I'm sure other people have thought of this. And I couldn't find them, you know, and perhaps they did exist and hadn't made their way up the, uh, you know, SEO and GEO rankings, but I literally couldn't find them trying that. And so, which is always kind of an exciting and scary feeling. That's exactly what, yeah, yeah. I was like, hooray. And then I was like, uh-oh. What am I missing? Yeah. Are we too early? Yeah. Right.
And so, yeah, we leaned in. And I think, you know, at Amperity, we always really think about things starting from first principles. Right. And it's like, hey, we can help people really accelerate some things that we know they're going to be challenged with. And of course, that's valuable. So let's lean into that. And yeah, I'm sure the competitors will follow eventually. But yeah, I was a little bit surprised to have trouble finding them. Where does the name come from? The name, it's inspired by, you know, one of our very early engineers who, you know, kind of did a lot of work on some of our, you know, initial R&D. And so we kind of wanted to, you know, have a little nostalgic throwback. Yeah, awesome. Yeah. Very cool. So talking about vibe coding and AI-powered workflows and the experience of a developer, and, you know, a new developer kind of learning the trade and getting their feet wet, an experienced developer, as you said, kind of rethinking their workflows and how they do things. There's concern in some circles that, you know, again, I'll relate it back to my own experience, similar to how there's concern that students, adults, professionals for that matter, are using AI to do their writing for them. Concern around vibe coding, AI-powered tools, kind of enabling programmers, if you will, to skip the process of really learning to understand how the code works, how to structure applications, how to do all those things. What's your take on that? Are there risks of over-reliance? Is it just kind of where the wind's blowing and we'll adapt? You know, what's your take, and what's Amperity doing to make sure that, you know, programming doesn't turn into just hitting tab repeatedly and then hitting ship? Yeah, the concerns sound familiar to me. You know, I've been around a little while and, you know, my career started in the late 90s.
And, you know, a lot of the conversation in the late 90s was like, ah, you know, these kids today with their high-level languages, right? They don't even know how to do pointer math. And then, you know, it sort of moved on to all these kids with their IDEs, with their fancy autocomplete, these kids with their, you know, garbage-collected languages. Back in my day, you know, we used to manage memory ourselves, et cetera, et cetera, et cetera. And so, you know, it sounds a little bit like the grumbly old person rant, you know, a little bit. Sure, vibe coding has a bunch of properties that allow you to create a bunch of code quickly. And I think, you know, there are many prominent examples that aren't hard to find of people who didn't understand what was happening there, and then really bad things happened, right? And so in some sense, back to maybe the accountability point from before, nothing really changes, right? You've got some tools. It allows you to work differently, to work faster. But like, you're ultimately accountable. And it's not optional, you know, to understand how that code works. It really isn't. And I think even, you know, if we look over the horizon and imagine a step change in the models and, you know, kind of more, you know, agentic sorts of verification and validation workflows, it'll get easier, it'll get faster. But at the end of the day, you know, I think society is built around the notion that, you know, humans are going to have accountability for the things they do, right? And, you know, I don't really think that changes. And so I think it's a great new set of tools. I celebrate the great new set of tools. It allows us to, you know, build more faster for our customers. And I think that's amazing and awesome. So along those lines then, what will the engineers, what will the coders of tomorrow need? How does the skill set change?
How does the mindset, the approach to constructing something new or working from an existing code base, change for the folks coming up now?

I think it's going to be really different. I think we're designing a new way to build software, and I really mean that. The workflow is different, and the skill sets that matter are also different. Maybe the best engineers of 2015 won't be the best engineers of 2026, if that makes sense.

It does, but why?

Because once upon a time, the developer who could master that algorithm, or who had deep arcane knowledge of how a particular subsystem worked, was especially valuable. But that skill set doesn't automatically adapt to, how do I go and build a whole network of different processes that are happening, right? It's almost like going from being an expert artisan to a general contractor. That's a different job. They're both important jobs, but it's a very different job. And being a great general contractor in a large, complex problem space, where you have lots and lots of subcontractors and you need to orchestrate that all together, is just different from the core craft. And look, we're still learning as an industry what that skill set ultimately looks like. But I think it's going to create a lot of opportunities for people who maybe wrote themselves out of the engineering game once upon a time. I'm seeing some early evidence of that, and I expect it will continue to evolve, at Amperity and in the industry as a whole.

So how does Amperity approach keeping engineers at the center of the process as these tools change?
But as you said, there's the human accountability, and then, not the other side really, but maybe at the beginning of that process, the spark of innovation, the idea of humans innovating on our own, working together, using tools. How does Amperity approach this? Is there a philosophy around keeping the engineers at the forefront and not having them sidestepped by increasingly advanced automation?

Yeah, I think it's a fair question. And admittedly, today that's pretty easy, because there are only very small components of the engineering lifecycle and the product design lifecycle, to your point, the ideation and things like that, that could, at least in theory, be automated away. So today, obviously, humans are still firmly planted in the feedback loop, and they're driving that process entirely. I can close my eyes, look over the horizon, and imagine some pieces of that evolving, but I think it still largely rounds to the same. It's sort of like, if I look at the backlog of ideas that people have for how to make Amperity better, there are thousands of good ideas. So in some sense, the challenge is always curation, and better tools to help with that are wonderful. I think we're all still going to value that human touch, and we'll expect and appreciate that the human touch is going to be enhanced, augmented, and made better by the inclusion of AI tools. But I think we're a long way out from having humans out of the loop, particularly in software engineering and customer data engineering use cases, because the difference between being right and being accountable, I think, is just fundamental to the model.

Well said.
So shifting from the future and coming back for a moment: if you look across the work Amperity is doing, and the work Amperity has done with clients, specifically around agents and agentic systems, any big wins, any really exciting stories you can share, or learning moments, if not a win, something that stuck out and is really informing how you look ahead?

Yeah, I think the biggest wins really fit the category of enabling people to do things they didn't think they could do. That might be the examples from earlier, where people can now analyze data who weren't able to before, or people who always told themselves they couldn't do really sophisticated segmentation because they were bottlenecked on creative resources. Being able to empower people to do things they couldn't do before, things that are ultimately good for their customers and that they're motivated to do, those are the big wins for me, and they get me excited. In terms of learnings, business context matters a ton. It's sort of an obvious thing, but it's really important to design for it. I was recently talking to one of our customers that sells cars, right? And if you're a customer that sells cars and you refer to a product as a taco, you're probably talking about a Toyota Tacoma, because that's their shorthand for Toyota Tacoma. And if you're in a quick-serve restaurant and you say taco, it means something completely different, right? Just to choose a silly example. But there are so many of those things, and we see them when we watch people interact with the system. There's a language of every company and the way they think about things, and their questions about customer data in particular are really infused with that language.
And so a big learning is that the faster we can bootstrap the LLMs with customer-specific knowledge, the bigger the step change in efficacy and empowerment, which feeds more of those wins.

Yeah. So you've touched on this, but I'm going to ask you to, to use your word, curate some of the things we've talked about, and some of the other things I'm sure you're seeing day to day. What gets you excited about what's coming next, specifically in the enterprise? And when you're working with, say, new clients, or talking to folks who are on the forefront as new adopters, what are some of the things that not only excite you about what's coming, but that you think are really key for folks to keep in mind as they start exploring these increasingly advanced tools, with everything, as we said before, moving so fast?

Yeah. What's exciting to me, and what we're really seeing, is that people started out by asking, how can AI make the thing I'm already doing go faster? And that's a great place to start. But what's exciting is that people are starting to rethink their businesses, right? Vibe coding causes you to rethink your developer workflow. But what if you're planning a marketing campaign? Today, in 2025, that probably still looks pretty similar. But when we look ahead, and particularly at some of the things we're working on with our customers, we can go from making tasks go faster to really helping reinform the strategy and how it comes to life. And obviously, anybody can go to their favorite LLM and ask it some questions about marketing strategy, and that's fine.
But the thing that's so exciting is integrating that experience into a system and a platform that has the data, that has the business context, that knows what's happening, that can close the loop. You start to close your eyes and think about that, and wow, that's really exciting. At that point, it's not just about making a task go faster. It's about making your business go faster. And I don't think that's hyperbole at all. It's right around the corner for us. So we're extremely excited about some of the opportunities ahead, and extremely excited that our customers are really pushing us to get there as quickly as possible.

So along those lines, best practices, words of advice for listeners out there, for teams who want to harness agentic AI in particular, but want to be mindful of the things we've talked about: innovating, being accountable, being responsible?

Yeah, I would say the one piece of advice, and I give this advice a lot, is start now.

We've heard that one before.

Yeah, and it's so important, because it's early, right? We're still figuring out the patterns and the practices. As an industry, we're learning a lot about how to put these incredible new technologies together in ways that really move the needle. And right now you just have a choice: you can be a doer who's in that learning loop, or you can be an observer and wait and see. We talk a lot about this here; speed is the only thing that matters. So I don't think it's viable in the current market to be outside that learning loop. And the good news is it's early, right? You're not too late. Not at all. But it's getting to the point where pretty soon you will be. And I think we're certainly past the point.
And again, this is something that's changed in the last six months. We're past the point where people say, well, we'll see if this AI thing plays out or not. It's overwhelmingly obvious where things are going. So yeah, get off the sidelines, get in there, try stuff, learn. It's easier than ever to do that. There's more information out there. And of course, AI feeds itself, right? AI can also help people figure out where to start and how to get through it. So yeah, start now and go really fast. That's the path to success.

Fantastic. So to put a point on that, for folks listening who are thinking, yeah, I'm ready to go, I want to start by learning more about Amperity: best places for them to go? Website, social media, where should they start?

Yeah, go to amperity.com. I hope many of those people are thinking to themselves, wow, I'd love to work at Amperity. So hit that careers page. We're in growth mode and hiring across a variety of roles, especially for people who have passion about bringing AI to customers in ways that really move the needle. So we'd love to see people visit that page. And we're fairly open with information on our website about what we do and how we do it. We've got a lot of work we've done over the years, published papers, patents, and other things. So we love engaging with people who are just interested in the space, and, to use a phrase that gets used a lot around Amperity, we love nerding out with people.

Awesome. Derek Slager, this has been a great conversation, with a lot of wisdom and a lot of really practical advice that resonates: how quickly things are moving, how important it is to get started.
And just for all that we can do now, the possibilities, even in the very near term. Or, as you said, close your eyes and imagine. It's really something.

Yeah, for sure. Well, I appreciate you having me, Noah. It was a fun conversation.

Appreciate you coming on. Let's do it again down the road.

Sounds great. Thank you.