The AI in Business Podcast

Turning Consumer Goods Data into Real-Time Business Decisions - with Michael Finley of AnswerRocket

The AI in Business Podcast • Daniel Faggella (Emerj)

Wednesday, November 19, 2025 • 26m

What You'll Learn

  • Agents need to be designed with a clear understanding of the business goals and policies to take meaningful actions, not just provide generic responses.
  • Providing valid, well-structured data is crucial for agents to make accurate, non-hallucinated decisions.
  • Agents should be packaged with the right tools, tests, and accessibility features to make them usable and impactful for enterprise users.
  • Overly constraining agent behavior or tightly coupling it to a specific model/provider can lead to watered-down, less useful capabilities.
  • Enterprises should focus on the overall concept of generative AI rather than getting locked into specialized models or APIs from a single vendor.

Episode Chapters

1. Introduction

Overview of the discussion on building enterprise-ready AI agents.

2. Understanding Business Strategy

The importance of aligning agent design with the organization's goals and policies.

3. Providing Valid Data

How to ensure agents have access to the right data in the right format to make accurate decisions.

4. Designing a Complete Agent Package

The key elements needed to make agents usable and impactful for enterprise users.

5. Avoiding Common Pitfalls

Discussion of pitfalls like over-constraining agent behavior and over-reliance on specific models or providers.

6. Focusing on Generative AI Concepts

The importance of maintaining flexibility and independence from vendor-specific offerings.

AI Summary

This episode explores the key factors that make AI agents truly enterprise-ready, including understanding the business strategy, providing valid data, and designing the agent as a complete package with the right tools and workflows. The discussion highlights common pitfalls, such as over-constraining agent behavior, binding the application too tightly to a specific model or provider, and getting lost in the variety of specialized models offered by vendors.


Topics Discussed

Enterprise AI agents, Data governance, Tool integration, AI deployment and iteration, Avoiding AI pitfalls


Episode Description

Today's guest is Michael Finley, Chief Technology Officer at AnswerRocket. Founded in 2013, AnswerRocket builds enterprise AI agents delivering measurable outcomes for Fortune 2000 clients across consumer goods, financial services, construction, real estate, and beyond. Finley joins Emerj Editorial Director Matthew DeMello to discuss how enterprises can move beyond AI experimentation toward scalable, agent-driven systems that deliver measurable business value. Finley also explores what makes AI agents truly enterprise-ready — from data governance and tool integration to deployment and iteration. This episode is sponsored by AnswerRocket. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!

Full Transcript

Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Michael Finley, Chief Technology Officer at AnswerRocket. Founded in 2013, AnswerRocket builds enterprise AI agents delivering measurable outcomes for Fortune 2000 clients across consumer goods, financial services, construction, real estate, and beyond. On today's episode, Michael explores what makes AI agents truly enterprise-ready, from data governance and tool integration to deployment and iteration. Our conversation also examines how aligning agent design with existing workflows can reduce time to insight, improve cross-department collaboration, and accelerate ROI on AI investments. Today's episode is sponsored by AnswerRocket. But first, are you driving AI transformation at your organization, or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our Thought Leader submission form. That's emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform. That's emerj.com slash ExpertOne. Again, that's emerj.com slash ExpertOne. Without further ado, here's our conversation with Michael.

Michael, welcome to the program. It's a pleasure having you.

Thanks, Matthew. It's great to be here.

Absolutely. I'm really excited for this episode. I know we're continuing the conversation from our first episode with Jim Johnson of AnswerRocket. We've been staying in the CPG space, but I think really even pulling apart what makes pilots succeed is of incredible interest across industries. Many pilots look great in a demo, of course, but fail in production. To scale agents, our latest technology of 2025, to scale these products, these capabilities, as digital workers, enterprises need software-grade design, policy grounding, and clear maturity paths. But let's talk about the patterns that make agents reliable and how to carry wins from one domain across the rest of the business. Just to start, there's been a lot in the media about agentic AI. There's more to come; we're hardly at the top of this hype cycle. CIOs and CTOs have heard a lot of different things. What do they need to hear about building agents that actually work?

Yeah, so fantastic question, and it's one that I run into every day. It has to start with the idea of really what is an agent. For a lot of our customers, a lot of the folks that are in the chief seats for CPG companies, they're working with this idea that an agent is a chatbot, that an agent is a tool call, that it's something about MCP.
I just want to emphasize that what makes something an agent, and this is tied to having it do useful work, is its ability to have agency, its ability to actually make decisions and take actions on your behalf. And so in that context, in the context of saying, how do you give it agency and make it able to take actions, what really makes an agent great is a couple of important things. One is understanding the strategy of your business. And I don't mean that it has a RAG index database to 10,000 documents. No, no, it needs to understand what you want it to do in your business. Is it helping you set your prices correctly? Is it helping you allocate advertising dollars? Is it helping you reduce waste in your supply chain? What is the goal that it has, and how does your enterprise have a policy? A lot of our customers in CPG have been in business for over a hundred years. They know how to do these things. Tell the agent how to do that. And telling it, by the way, is as simple as training a human that you would have doing that job. It can be videos, it can be audio, but the point is to get that information across to it.

The second key point is to give it valid data, to give it access to valid data. Now, a lot of my customers, again, in that sort of naive, why doesn't AI do this for me mode, just want to say, I'll dump my data in and it will tell me the optimal price. And the fact is, there is no example that's ever been provided by anybody at OpenAI or Anthropic or Google where they trained a language model to understand how to price lollipops. I'll use lollipops as an easy example, right? That does not exist in the model's pre-training. That's the P in GPT. Therefore, it does not know how to do it. And on the other hand, it has been trained on so many different things that it will try. It will try to answer that question on your behalf. And so there, by giving it good data, you're also saying, do it in a way that the model can consume it, in a way that its attention heads, in the AI world, are focused on the right parts of the data, and in a way that it doesn't want to hallucinate about the answer that comes back, right? And that takes place in the form of tools, these famous tools that models use. You know, we've gotten really focused around this three-letter MCP standard. It's a great thing that's come about in AI because it does provide a standard for how language models can use tools, but that's like saying pages are a great standard for a book, right? What's actually in the book is what matters, not the fact that it is a book, right? So those are some of the important factors. But at the end of the day, we just have to think of agents not as an LLM with a key. We have to think of an agent as a packaged set of things: a description of what it's supposed to do, the tools that it's supposed to work with, the tests that prove that it's doing a good job, the packaging that makes it accessible to users, right? Those things turn the idea of an agent, a thing that's producing AI tokens for you, into a moneymaker for your enterprise.

So some really fascinating stuff there. Just pulling this apart: you need the understanding of the business goals, and this can be as simple as the human training that you can do. The products are right there; you can have it go through videos, you can have it read documents and get a good idea.
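To make that "packaged set of things" concrete, here is a minimal Python sketch of an agent defined as a bundle of description, tools, and tests. Everything in it (the AgentSpec class, the lookup_price tool, the pricing-policy wording) is an illustrative assumption, not AnswerRocket's product or any vendor's API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentSpec:
    """An agent is not 'an LLM with a key' but a packaged set of things."""
    description: str                      # what the agent is supposed to do, in business terms
    tools: dict[str, Callable] = field(default_factory=dict)    # governed actions it may take
    tests: list[tuple[str, str]] = field(default_factory=list)  # (input, expected behavior) eval cases

def lookup_price(sku: str) -> float:
    """Hypothetical governed tool: reads from a blessed pricing source, not from model memory."""
    prices = {"LOLLIPOP-12CT": 3.49}      # stand-in for a real data service
    return prices[sku]

pricing_agent = AgentSpec(
    description=("Recommend shelf prices for confectionery SKUs. Follow company pricing "
                 "policy: never price below cost; match competitor moves within 5%."),
    tools={"lookup_price": lookup_price},
    tests=[("What should LOLLIPOP-12CT sell for?", "calls lookup_price and cites the policy")],
)
```

The tests field is Finley's last item made explicit: the agent ships with evidence that it does its job, not just a prompt.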
Really, when you had mentioned that, it occurred to me as maybe the age-old advice about avoiding micromanagement. If you want entry-level employees of any kind to have a sixth sense of how to operate when they haven't been given the individual nuance of their actions or their tasks, it's to give them what does done look like, what does complete look like, what does the goal look like, and then trust their judgment.

And the tools to do it. In fact, one of the pitfalls that we see all the time is this idea that somebody has an idea, they build an agent, and they go to demonstrate it, and it does something slightly different than it did the first time. It's still correct, but it's not the same. So they get nervous, so they put guardrails on it that narrow it down. It was an agent that did, you know, let's say 100 percent of some scope, but they narrow it down, so now it's only doing 30 percent. So it's already pretty watered down. And then they repeat that same demo two or three times. And every time it does something slightly different, or maybe one time it slips and does something wrong because they didn't have guardrails there, so it went outside the guardrails. Now they get really nervous, and they narrow it down to 1% of what it used to do, right? And this is how you get to this watered-down effect, where somebody had a brilliant idea, but by the time it actually got in the hands of the people that are going to use it, it's completely watered down and almost useless. And then people will look at it and say, at home I can do so much more with GPT. Why is it that I'm stuck with these inferior tools at work, right? It's not a question of competence. It's a question of engineering: the process of taking this amorphous capability, this AI, this intelligence on tap that are these language models, and structuring that in a way that can become battle-hardened, ready for use in the enterprise, right? You know, the same as you wouldn't download a $2 app and then give it a million-dollar responsibility within your enterprise, it's exactly the same. For the two cents that you pay for the tokens to AI, you're not going to get a million-dollar-responsibility capability coming out of it, right? You're going to have to build that battle-hardened, ready to go.

So we've addressed a couple of the pitfalls so far, particularly in the CPG area, for deploying agents. What are some other pitfalls that you're seeing out there, the pitfalls related to these model deployments?

Yeah. So certainly another great example is binding the application too tightly. We're in a rarefied space where these intelligences on tap from the major providers are very much neck and neck, right? We see new models being released every day. We see prices falling, reliability increasing. It is cutthroat competition, right? Being, you know, OpenAI, being Gemini, being whichever model you choose. What that means is that they're all trying to differentiate themselves by adding layers above their model, right? By making new APIs, by making more comprehensive capabilities, by basically trying to lure you into their ecosystem, right? Because as soon as they do that, then you're a more assured source of ongoing token consumption for them. We see that as a major problem. A lot of folks in CPG have rightly committed long-term to one of the hyperscaler providers because they got big discounts for doing that. And they know they're going to continue using that service day in and day out.
Well, if you now just kind of dot-dot-dot continue that and say, I'm also going to commit to that company's API, to that company's language model, well, you are almost certainly making yourself subject to not having the best language model, at least in some timeframe in the future. So staying independent, staying separate from those very fine-grained, kind of specific capabilities that the models do, and instead subscribing to the overall concept of AI, of gen AI, which is very simple: I give it prompts, I give it tools, and it does things for me. If that's the only requirement, you can put a lot of different choices at that layer in your stack. If you require more above that, then you're going to be stuck there. So that's another common pitfall.

In a similar vein, many of the model providers have 15, 20, 30 different models that they provide. They'll have one for reasoning. They'll have one for chat conversations. The same as in the consumer space, they have models for video, they have models for audio, they have models for images. In the enterprise space, they'll have one for embeddings. There are all sorts of different options for those models. A lot of times what I'm seeing is that the customers get lost in what the differences are between those, and they'll just pick one. And the model providers happen to have one, typically called latest, that is just their most recent version. And so they'll use that. Well, that means that, just like we saw happen not too long ago, a major provider puts out a new version of the latest model, and the consumers of that model suddenly see something different, right? Because it got swapped out from under them. So the same way that you wouldn't say, yes, I'll just leave my servers on auto-subscribe to the newest version of an operating system, you want to do those changes in a governed way. Same thing with the models. You want to do those changes in a governed way, which involves having the ability to regression test these things. They are software. The language models in the enterprise context are software. It's important to remember that they're not additional human beings in the room, and they're not some special new thing. They are software, and we have to treat them like that. That means they have to be packaged, they have to be tested, they have to be released, they have to be monitored on an ongoing basis.

A lot of guests have come on the show and talked a lot about data governance. Yet a lot of what you're describing, especially for the longtime listeners of the show (and almost no one's been listening as long as me, because of just doing hosting duties), points at a big difference between data governance and data harmonization. I'm wondering, just in that last answer, if you can differentiate a little bit for this audience, in case they start hearing those terms thrown around interchangeably.

Yeah, yeah. So no, it's a great question. Look, I put data, which, by the way, is databases but also documents, right? Let's call it the blessed corporate documents. That's one category. And I put the tools or the services of the organization in that same kind of category. We have to have some governance over those tools and services in the same way that we have it over data. So what's an example? In a traditional ERP situation, you might have the one source of truth for who your employees are.
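A sketch of the governed-upgrade discipline Finley describes: pin an explicit model version rather than a "latest" alias, and gate any swap behind a regression suite. The version strings, the call_model stub, and the substring check below are all placeholders, not any real provider's SDK:

```python
# Pin an exact model version; never deploy against a "latest" alias
# that can be swapped out from under you.
PINNED_MODEL = "some-model-2025-01-15"     # hypothetical version string
CANDIDATE_MODEL = "some-model-2025-06-01"  # hypothetical version under evaluation

GOLDEN_CASES = [
    ("Summarize Q3 lollipop sales by region.", "southeast"),
    # ... more cases drawn from real usage ...
]

def call_model(model: str, prompt: str) -> str:
    """Placeholder for whichever provider SDK you use, kept behind one thin interface
    so the model choice stays a swappable layer in the stack."""
    raise NotImplementedError

def passes(output: str, expectation: str) -> bool:
    """Real suites use graded rubrics or judge models; a substring check keeps the sketch short."""
    return expectation.lower() in output.lower()

def regression_gate(candidate: str) -> bool:
    """Promote the candidate model only if it clears every golden case."""
    return all(passes(call_model(candidate, prompt), expected)
               for prompt, expected in GOLDEN_CASES)
```

The point is the shape, not the specifics: the model is treated as software, so a version change goes through packaging, testing, and release like any other dependency.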
That data: who works in what department, who their boss is, what their last performance review was. And then you might also have one way of getting to that data, a microservice, right, that the enterprise offers, that allows access to that data in a controlled and governed way, right? So both of those are really important in the language model world, in the LLM world. You have to provide not just those sources of data, which need to be properly provisioned for whoever's using the AI, right? The AI itself has no intent of its own. It has no identity. So we have to track who is using the AI, to make sure that it is accessing the data in an authorized way, that the data it's accessing is the data that you want to be representative of the version of the truth that's going to be used to make decisions, and that it's being accessed through the method that you've designed as an enterprise to be the correct place to get that data. Now, all of that said, these are just good practices, whether you're using LLMs or ERPs or any other three-letter acronym that we might choose, because software people love three-letter acronyms. So it's always important to follow those rules regardless of which technique you're using to access that data.

That being said, I also want to emphasize something that is interesting. There's a network effect. We're all familiar with the network effect. When two people get together, well, that's one relationship. But if there's three people, well, that's three relationships. And if there's four people, well, now there's 12 relationships, right? So that expands out, right? And that happens with data, too. If I have sales data and I have competitor data and I have warehouse data and I have distributor data as a packaged goods company, add the weather to that. All of those things relate to each other, right? The value builds exponentially when I can relate the weather to my distribution, to my manufacturing, to my demand, right? Especially if I know what my competitor is doing. Okay, well, in the traditional data world, we would say, oh no, there's going to be an 18-month project that's going to involve five different scales of normalization, and we're going to have a grand unified cloud that's going to have all that stuff in it. In the language model world, that's no longer necessary. And why is it not necessary? It's simply because, the same as a human would access those five different sources that a user might want to understand, a human is going to go read those five sources separately. And then in their brain, which in the LLM world is called the context, they're going to put all that together and make some conclusions. The LLM is going to do the same thing. I don't need one SQL query that the DBA knows how to do that joins all those things together. I can simply ask for the data from each source. And like a Harvard MBA, I can assimilate it in my LLM brain and I can do useful things with it. So, tying it back to your question, while I do want to see very good governance and very high data quality around both the data and the documents and the data sources, I don't require them to be as harmonized and as carefully curated and as pure and as, you know, denormalized, and all these words that we've learned to create as a data organization. I don't need all those things to be true in an LLM world. I need to be able to add incrementally new data sets without adding an exponentially large amount of work that goes with them.
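As a rough illustration of that pattern, assembling separately fetched sources in the model's context instead of building one grand harmonized join, consider this sketch. The source names, sample rows, and fetch stub are invented for the example:

```python
def fetch(source: str, user: str) -> str:
    """Placeholder for a governed microservice call. The 'user' argument matters because
    the AI has no identity of its own; access is checked against the human behind it."""
    samples = {
        "sales": "Week 32: lollipop sales up 14% in the Southeast.",
        "weather": "Week 32: heat wave across the Southeast.",
        "distribution": "Week 32: two distributor trucks down in Atlanta.",
    }
    return samples[source]

def build_context(sources: list[str], user: str) -> str:
    # Each source is read separately, then assembled; no unified schema required.
    sections = [f"[{s}]\n{fetch(s, user)}" for s in sources]
    return "\n\n".join(sections)

context = build_context(["sales", "weather", "distribution"], user="analyst@example.com")
prompt = context + "\n\nRelate the weather to demand and distribution, and flag any risks."
# The assembled prompt would then go to the model, which draws the cross-source links
# in its context window, the way a human analyst would after reading each report.
```

Adding a fourth source here is one more fetch call, not an 18-month normalization project, which is the incremental-cost point Finley closes on.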
And that's what we used to see. And I think there's a lot of leaders in this space that are hearing about these new technologies. There's shiny-toy syndrome: maybe we need to get this. Or competition with the Joneses, competition in their own markets: you know, our competitors have this; we want to make sure we're keeping pace with the technology. But we've talked a lot about the pre-production. We've talked a lot about what goes into the foundations. As you're moving from the foundations to really deployment, what signs should they be looking for, especially to ensure that their architecture and governance are in a proper place and they're going to get good outputs by the time this is really touching customers?

Yeah, no, fantastic question. So first of all, I would say, look, the net output that we're all after is usage, right? We're looking for, you know, this thing to be so valuable that it gets ripped out of your proverbial hands, right? Because so many people want it. And I think that's an important thing to look for. What you don't want to do is spend six months and then look for that, right? You want to be in people's hands in six weeks and see how they respond and understand what they can do and what they can't do. And then in six months, hopefully you're on, like, the tenth iteration of that, right? That's a way to know that things are going well. Another good example would be to monitor the flip side of that, which is what is not being used. Now that the AI is in place, what is it that's not being used? Now, a lot of enterprises have a strategy of a chatbot. The chatbot is an obvious use case, because we all learned about this stuff with ChatGPT. But what's not obvious about a chatbot is exactly how you measure the return on that investment, which means: how do you invest more when it works, right? So again, I would look for cases where there is value coming out of the result. That starts with choosing the right case to begin with, right? Choose the right problem and it'll be valuable. If it's not something you're spending money on today, that's not the problem you want to solve with AI, because you're not spending any money on it, so you're not going to save any money when you use AI to do it, right? So choose things in the enterprise that are jobs AI can do, which is honestly most jobs, with some partial human interaction, these days. So choose things that the AI can do, put it to work, measure the results, and cycle really quickly through that, so you can add value and continue to develop it.

Absolutely. You had mentioned, I think it was an answer or two ago, that between lines of business functions within an organization, a two-way street is actually, when you think about it, a four-way street, and a three-way street is actually a 12-way street. I had to take everything in me not to make a Three's Company joke when you had mentioned that, and I like to think that was good discipline as a podcast host. But really, I want to go back there just for a moment, because we've been talking about executing these capabilities really in that silo of the CPG workflows. But as even a lot of business leaders know from many deterministic use cases, even going back more than a half decade, there are so many instances where you can deploy even lower-grade versions of this technology in a small beachhead, really see the results, do 10% of an important job extremely well,
which seems to be the recurring chorus of the ratio that you're looking for for success. And then sit back and say, ah, we have this technology deployed in this line of business, but now, since we're collecting from that line of business, it's touching these different areas, because of course all the divisions really need to talk to each other, and there are places where use cases we only thought would make sense in this particular spot actually make sense elsewhere. Really wondering, what are the cross-industry, cross-division patterns that best showcase that scalable value, and where are those use cases?

Yeah. So the way that I think of this, and it's the recommendation that I make to my customers, is following that same vein of, look, AI came about, at least in this generative form, three years ago. It came into our hands. You've been running your business typically for 50 or 60 or 100 years, right? So you know a lot more about your business than they do. That means you've got tools built up. I don't care if it's Excel sheets or some report that you've got built out somewhere, or maybe it's a utility, maybe it's a notebook that a data scientist has. Taking that and leveraging it into your enterprise as part of your language model success is really important, right? So when you talk about, you know, if there is a tool that's been built in the supply chain team that understands the risks of commodity pricing, for example, and now you're doing another task, you're asking a model to look at, you know, how should we allocate revenue, allocate spending for advertising. Well, you should be using that tool that's going to take into account whether or not you can even produce the stuff that you're about to advertise, right? And so that cross has been missed in the past, or it's a very human function, or many large organizations have a 12-gate process that gets them to look at all of those considerations, but it also means that it takes them 90 days to release a new product, right? The 90 days is probably a good example, a good case, right? So in this model-based world, you can just give the LLM those tools that have been built in each of those areas, and potentially their agents, maybe, if there is an existing model that someone's built that forecasts, for example, or assesses risk, right? Well, that's a tool in the LLM world. If somebody's built a chatbot that knows how to answer a risk question, well, that's an agent, right? It's as simple as that. So one way or another, an agent, let's say, in one department needs to be able to access those other agents from those other locations. There are lots of different techniques for doing that that are available. This is nothing new for enterprise IT, right? They've been doing enterprise service bus communications since the JavaBeans days, right? That's been out there a long time. It's just that there's a new form of it now, right? It's not people communicating, it's not applications communicating, it's language models communicating, agents communicating. But it's some of the same kinds of techniques. And while the user experience for now might look very similar, and that's where you get those confusions with chatbots and folks really wanting to see in a user experience the difference between agents, these are fundamentally much more powerful tools than what was rolling out for banks.

Very much so. Very much so. Yeah. More powerful, smarter, deeper, and just simply more cognitively loaded. Right.
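A hedged sketch of that reuse pattern: an existing supply-chain forecast, here just a stand-in function, exposed to another department's agent through a JSON-schema tool description of the kind most model providers accept. All names and return values are hypothetical:

```python
def commodity_risk_forecast(commodity: str, horizon_days: int) -> dict:
    """Stand-in for a model the supply-chain team already built, now reused as a tool."""
    return {"commodity": commodity, "horizon_days": horizon_days,
            "supply_risk": "elevated", "driver": "sugar futures volatility"}

# Tool description in the JSON-schema style commonly accepted by model providers,
# so an advertising agent can consult supply-chain risk before allocating spend.
COMMODITY_RISK_TOOL = {
    "name": "commodity_risk_forecast",
    "description": "Forecast supply risk for a commodity before committing ad spend.",
    "parameters": {
        "type": "object",
        "properties": {
            "commodity": {"type": "string"},
            "horizon_days": {"type": "integer"},
        },
        "required": ["commodity", "horizon_days"],
    },
}
```

The same wrapping works whether the asset is a spreadsheet export, a data scientist's notebook function, or another department's chatbot; in the LLM world each becomes a callable tool or a peer agent.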
They're helping you make those decisions and then take the actions. They're always going to come back, if it's designed right, to a human for an authorization, but it should be going as far as it can without that human authorization, right? So, borrowing a term from Andrej Karpathy, the idea of vibe admin, right? It's one that we use a lot. So there's vibe coding, right? That's where LLMs are writing code supervised by a human programmer, right? We don't want them to take over the world; the humans have to look at what they're doing. Well, there's vibe admin, which says, okay, you've got a workflow. Maybe you're onboarding a new supplier, all right? And that's a 90-day process that involves all sorts of credit terms and vetting and the initial deliveries of supplies, and you've got to do some quality tests, right? Well, you can have an agent that watches over that process, right? From soup to nuts. And all it has to do is escalate to a human.

Funny, soup to nuts, in the consumer packaged goods world.

All it's going to do is escalate to a human when something breaks, right? It's been too long since we heard from them. We got bad feedback from, you know, the survey responses for this new product. Or the invoice came in and it does not match the terms that they said were in the contract, right? All of these things can go to a human when it's time to actually take that sort of next-step escalation action, but it can all happen automatically if the agent has the tools and has its fingers in all the right places to see what's going on in the enterprise.

Absolutely. Absolutely. I know we're a little bit over on time. I really appreciate a little extra insight there before we close things out. But I think there's so much for folks at home to pull apart from today's episode across industries, especially as we're seeing these new agent capabilities really sweep their way across the global economy. Michael, thank you so much for being with us on today's episode. It's been very insightful.

Appreciate the chance. Thanks, Matthew. Great questions.

Wrapping up today's episode, I think there were three highlights for enterprise leaders looking to apply AI and data to key business decisions to take from our conversation with Michael today. First, success with AI agents begins with clarity of purpose: defining the business outcomes they're meant to drive and ensuring they align with established policies and decision-making processes. Second, reliable performance depends on governance: valid, well-structured data and a disciplined approach to testing, monitoring, and iterating on models as enterprise-grade software. Finally, scalability comes from integration: connecting agentic systems across business functions so insights and automation compound value rather than remain trapped in silos. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit emerj.com slash sponsor for more information. Again, that's emerj.com slash S-P-O-N-S-O-R. Take care.
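For readers who want the "vibe admin" idea from the close of the conversation in code, a minimal sketch: an agent watches a long-running workflow (supplier onboarding, in Finley's example) and escalates only exceptions to a human, who keeps the authorization. The event records and escalation triggers are invented for illustration:

```python
import datetime

# Hypothetical events from a 90-day supplier-onboarding workflow.
EVENTS = [
    {"step": "credit_terms", "status": "ok", "detail": ""},
    {"step": "invoice_match", "status": "mismatch", "detail": "invoice differs from contracted terms"},
    {"step": "supplier_reply", "status": "stale", "detail": "no response in 14 days"},
]

ESCALATE_ON = {"mismatch", "stale", "bad_feedback"}

def watch(events: list[dict]) -> list[str]:
    """The agent handles the routine flow itself and surfaces only exceptions;
    the actual next-step decision stays with a human."""
    escalations = []
    for event in events:
        if event["status"] in ESCALATE_ON:
            escalations.append(f"[{datetime.date.today()}] {event['step']}: {event['detail']}")
    return escalations

for item in watch(EVENTS):
    print("ESCALATE TO HUMAN:", item)
```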
