
Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer
The AI in Business Podcast • Daniel Faggella (Emerj)

What You'll Learn
- Bayer is using AI-enabled breeding with genotyping and phenotyping data to accelerate crop development, but maintains human breeder oversight to capture step-change innovations
- There is a balance to strike between innovation and caution, as the public can be skeptical of new technologies like AI due to biases and concerns about safety
- Transparency is crucial, both in terms of the AI systems' outputs and the decision-making processes of the humans operating them, to build trust with the public
- Scientists and companies like Bayer have a shared interest in ensuring the safety and responsible development of new technologies, as public trust is essential for their success
- The old narratives of regulations being the enemy of innovation may need more nuance, as some innovation can indeed be harmful and requires careful consideration
Episode Chapters
Introduction
The host introduces the guest, Kun He from Bayer, and the topic of how AI is reshaping human talent and workforce development in agricultural manufacturing.
AI-Enabled Breeding and Crop Development
Kun discusses how Bayer is using AI-enabled breeding with genotyping and phenotyping data to accelerate crop development, while maintaining human breeder oversight to capture step-change innovations.
Balancing Innovation and Caution
The conversation explores the need to balance innovation and caution, as the public can be skeptical of new technologies like AI due to biases and concerns about safety.
Transparency and Building Trust
Kun emphasizes the importance of transparency, both in terms of the AI systems' outputs and the decision-making processes of the humans operating them, to build trust with the public.
Shared Interests and Responsible Development
The discussion touches on how scientists and companies like Bayer have a shared interest in ensuring the safety and responsible development of new technologies, as public trust is essential for their success.
Nuancing the Innovation Narrative
The host suggests that the old narratives of regulations being the enemy of innovation may need more nuance, as some innovation can indeed be harmful and requires careful consideration.
AI Summary
This episode explores the role of transparency and human oversight in the use of AI systems in agricultural manufacturing. The guest, Kun He from Bayer, discusses how Bayer is leveraging AI for faster crop development while maintaining breeder oversight to capture innovative breakthroughs. He also highlights the importance of balancing innovation with caution, as the public can be skeptical of new technologies like AI. The conversation emphasizes the need for transparency from both the AI systems and the humans operating them, to build trust and ensure responsible development of these technologies.
Key Points
1. Bayer is using AI-enabled breeding with genotyping and phenotyping data to accelerate crop development, but maintains human breeder oversight to capture step-change innovations
2. There is a balance to strike between innovation and caution, as the public can be skeptical of new technologies like AI due to biases and concerns about safety
3. Transparency is crucial, both in terms of the AI systems' outputs and the decision-making processes of the humans operating them, to build trust with the public
4. Scientists and companies like Bayer have a shared interest in ensuring the safety and responsible development of new technologies, as public trust is essential for their success
5. The old narratives of regulations being the enemy of innovation may need more nuance, as some innovation can indeed be harmful and requires careful consideration
Topics Discussed
- AI in agricultural manufacturing
- Transparency and trust in AI systems
- Balancing innovation and caution in new technologies
- Public perception and bias towards agricultural companies and AI
- Responsible development of AI and new technologies
Frequently Asked Questions
What is "Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer" about?
This episode explores the role of transparency and human oversight in the use of AI systems in agricultural manufacturing. The guest, Kun He from Bayer, discusses how Bayer is leveraging AI for faster crop development while maintaining breeder oversight to capture innovative breakthroughs. He also highlights the importance of balancing innovation with caution, as the public can be skeptical of new technologies like AI. The conversation emphasizes the need for transparency from both the AI systems and the humans operating them, to build trust and ensure responsible development of these technologies.
What topics are discussed in this episode?
This episode covers the following topics: AI in agricultural manufacturing, Transparency and trust in AI systems, Balancing innovation and caution in new technologies, Public perception and bias towards agricultural companies and AI, Responsible development of AI and new technologies.
What is key insight #1 from this episode?
Bayer is using AI-enabled breeding with genotyping and phenotyping data to accelerate crop development, but maintains human breeder oversight to capture step-change innovations
What is key insight #2 from this episode?
There is a balance to strike between innovation and caution, as the public can be skeptical of new technologies like AI due to biases and concerns about safety
What is key insight #3 from this episode?
Transparency is crucial, both in terms of the AI systems' outputs and the decision-making processes of the humans operating them, to build trust with the public
What is key insight #4 from this episode?
Scientists and companies like Bayer have a shared interest in ensuring the safety and responsible development of new technologies, as public trust is essential for their success
Who should listen to this episode?
This episode is recommended for anyone interested in AI in agricultural manufacturing, Transparency and trust in AI systems, Balancing innovation and caution in new technologies, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Today's guest is Kun He, Lead Scientific Advisor at Bayer Crop Science. He joins Emerj Editorial Director Matthew DeMello to discuss how AI is transforming human talent and workforce development in agricultural manufacturing, balancing data-driven efficiency with the irreplaceable role of human gut instinct. Kun also explores practical takeaways, such as integrating genotyping and phenotyping data to accelerate crop-breeding workflows, empowering breeders to drive "step change" innovations, and treating AI as a co-pilot to check biases while prioritizing customer needs for blockbuster R&D outcomes. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!
Full Transcript
Welcome, everyone, to the AI in Business Podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Kun He, Lead Scientific Advisor at Bayer Crop Science. Kun joins us on today's program to explore how AI is reshaping human talent and workforce development in agricultural manufacturing, emphasizing the irreplaceable value of human gut instinct alongside data-driven decision-making. Our conversation also highlights practical workflow changes, such as AI-enabled breeding using genotyping and phenotyping data for faster hybrid crop development, while maintaining breeder oversight to capture step-change innovations, and Bayer's bold R&D investments yielding 10 blockbuster products in the pipeline to improve ROI through balanced risk-taking. Just a quick note for our audience that the views expressed by Kun on today's program do not reflect those of Bayer or its leadership. But first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business Podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon, and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the program. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit emerj.com and fill out our Thought Leader submission form. That's emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform.
That's emerj.com/expert1. Again, that's emerj.com/expert1. The leaders who stand out today are those who transform data into better business outcomes. NYU Stern's one-year part-time MS in Business Analytics and AI prepares professionals to do exactly that. This segment is sponsored by NYU Stern's MS in Business Analytics and AI. Visit their website to start your application today. Without further ado, here's our conversation with Kun. Kun, welcome back to the program. It was a pleasure having you last time, and I'm excited for this time. Same here. It was a fun conversation we just had. Absolutely. Absolutely. We spoke a lot about human bias, the difference between bias and gut. We got really existential there. We went deep into Plato's cave, but always using agricultural examples across manufacturing spaces. And I think it's really great to have that conversation alongside the one we're going to have today, thinking about how human beings should approach technological bias. What should they be thinking about? Especially since we established in the last episode that in agricultural spaces, we expect human beings to be in the driver's seat for quite a while, if not forever. Going forward, especially even with the incredible step-level changes we're seeing in AI, from agentic and upwards, agricultural spaces, much like other industries we see out there, are still not making the most of even deterministic use cases and very simple first-generation applications of AI, even before we get to the new fancy hype-cycle stuff in agentic AI. But even just starting from there, even where we're seeing those very simple applications — they don't have to talk to us in copilots; we're seeing simple machine learning, predictive analytics deployments. We talked so much in the last episode about how predictive analytics can give human beings this false confidence.
How are you seeing human beings in agricultural spaces approach the problem of assessing bias in these systems and ensuring that they're transparent in their outputs? So that's a very good question. I want to start internally, right, with people in the industry. Again, based on my observation, I feel like we are not giving up the decision powers we have, right? Plenty of us — most or all of us — are trying to figure out how we leverage the technology breakthrough of AI to make our work more efficient and faster, to bring better products faster to the market, to meet our customers' needs. So in that sense, I think everybody is just naturally trying to consider multiple inputs, right? Different data will give us different recommendations. Then some other input could be brainstorming, right? Just a discussion with other scientists or breeders. That's how the best decisions are being made in the business world. So I think my conclusion is basically that no one is just blindly saying, oh, let's just give the driver's seat to the AI models and they can help us develop the best products ever. No, I don't see that kind of agreement at all. Right? Sure. So then I will shift to the general audience here. I don't know who is actually listening to this podcast, but a lot of you definitely are our consumers, right? Because we all eat food, and food is produced by our farmers. And in many cases, this is our vegetable seed or broad-acre crops. So here, actually, I want to call out the major bias from the general public. Because Monsanto has a very negative reputation. I think I'm just stating a fact, right? Working in such a company, sometimes my neighbor questions, why do you work in that company? And because I want to help the world have more food, have a better living standard. But the bias is real. I don't want to diminish or blame such a bias. Basically, it's the same kind of reaction towards AI.
Any kind of new technology will trigger at least roughly half of our population to have that kind of fear. Right? Is this something really good for us? Should we be careful about what we are doing? Are we producing a Frankenstein? Yeah, I mean, you know, here's my point. This is the beauty of being inclusive, because we are all naturally biased, right? Some of us are jumping on new things no matter what. That's me, right? I just want to go wherever the new shiny pebble is. Oh, this is my dish, right? I want to go. Quite some others, like I said, are very careful, very cautious. You need both. I really appreciate that you brought up Frankenstein. Last time we were mostly diving into Plato's Republic, and we've moved on to Mary Shelley. As a matter of fact, leading editorial efforts here at Emerj, whenever we onboard anybody in our department, or a freelance writer, they ask me, oh, what should I be reading to do the best at this job? And I always say, read Frankenstein — and you only need to read the first 50 or so pages. Because I think, especially thanks to Hollywood, everybody has this idea of the story. A huge part of the story — usually the scene of Frankenstein on the rooftop with the lightning, flipping the switch — takes about 15 minutes to portray. And usually there aren't words. There's lightning. And everybody knows the line. I don't even think it's in the movie. It's like, "Luke, I am your father." Like, no, that's not how it's said. But every version of Frankenstein in film has an "it's alive" moment, right? Read the book. There's no such scene. There's no such scene. There is a journal entry from Dr. Frankenstein about 40, 50 pages in, very quickly in the pace of the novel. And the journal entry is not transparent — this after all of the previous journal entries in the novel that go deep into the science of reanimating dead tissue.
When he actually flips the switch and encounters the monster alive, the journal entry is — I'm paraphrasing — "Flipped the switch last night, got real interesting." Two sentences. And I encourage all of our writers to understand that lesson, which is: when things go wrong, the scientists don't want to talk about it. And they are not documenting their results as effectively. And that is a very human bias. So the transparency is not only about the systems; it's how honest do we want to be with ourselves? How honest do we want to be? And that runs into some questions in publicity terms. You don't want to induce panic. And you talked about the trustworthiness we have with agricultural manufacturers. This runs straight into pharmaceuticals as well. I don't need to explain to anybody coming on the show, or listening, that we're in an age of misinformation where a lot of people don't trust big-name brands who develop agricultural and pharmaceutical products. I want to nail down: what's the sense of responsibility to build transparency into the AI systems, and what transparency do we need from the humans operating those systems, to be clear not just internally within the organization, but also as honest as possible with the public? For sure. So you have a very good point, right? Misinformation is actually causing a lot of problems in the world. One thing is for sure: I think it's slowing down the pace of innovations. But on the other hand, I want to re-emphasize, I think that kind of caution is very much needed. I want to highlight two facts. One is, if you are far from the scientific world, you know, it's easy to be influenced by those Hollywood movies. Sure — oh, scientists, the "it's alive" moment, right? Right, right. Albert Einstein, Eureka, right? It doesn't really happen that way. Kind of a crazy character, right? They do things because they want to prove they can do things, not because the things should be done.
No, that's not true. In reality, the scientists working in Bayer or any other major company are like all regular people. We have our own families, right? We have our kids. They do have different personalities, too. Some of the scientists are more daring, right? Like those kind of, God, let me do this, let me do that. But trust me, there are plenty of scientists that are similar to you. What if this goes wrong? Have you considered that? Those kinds of challenges and balances are happening every single day in the company. Because in this perspective, I think the big corps and the general public do have a shared common interest, which is — I guarantee you — no major company wants to roll out anything unsafe to any of us, because if that really happened, it's basically the end of the company, right? There is no more business. You lose the trust, you are done, right? That's why, in that perspective, we definitely are not doing anything in secret, right? That we have a secret agenda — that's not true. That's never true. Actually, transparency is a great way to introduce our effort. On our website, you can find a lot of information. We voluntarily disclose a lot of internal documents. But there's a balance to strike, as you can understand, because we cannot just share our playbook with all the competitors, right? That's the magic to get better products for all of you, because the competition is the key, right? That's why Google is being questioned by antitrust regulators. So, yeah, I will wrap that up here. Well, I think maybe even the subtext to a lot of what you're talking about, especially in terms of innovation — I think the old stories we've been telling ourselves for 50 years, that regulations are the enemy of innovation, that innovation is the golden calf, that we need to be going as fast as possible all the time, that there's no such thing as bad innovation — I think those are a little dated and need a lot more nuance.
Without getting too political about it, I think a lot of those narratives — Please, please. I want to stop you here. I have to say, no, some innovation is definitely, indeed, bad. Sure. That's more or less where I was going, but hit me. So, right. Now here, sorry to interrupt. You're fine. There are plenty of examples where, at the beginning, we thought, oh, this is a great thing. And then later on, when we do have more data, the valuable data, right, we realize, oh no, this has something that we didn't understand. In that kind of case, for sure, we should recognize it and correct it as soon as possible. In case I didn't introduce myself enough, I'm actually working in a regulatory organization. If you didn't know, though, regulation overall is a relatively new concept. Because before, we didn't have the EPA. We didn't have the USDA, right? All those regulatory bodies — the FDA, for example. Right, right. It happened, or was triggered, by a very tragic innovation. Right. It's a drug — I couldn't recall the name — that was helping pregnant women ease the feeling of vomiting, right? It was very effective, very, very effective, and was introduced to the UK market, the European market. I couldn't recall whether the USA opened the market back then, but there was no regulation back then. Way too late, in my opinion, we discovered, oh my goodness, there was suddenly an increase in children with abnormal development. I really feel sad when I talk about this. And that triggered the creation, across the world, of the official requirement that before you roll out any new innovation, any new products onto the market, you have to do animal testing to make sure there are no developmental abnormalities it will introduce — and to do even more. And the regulatory policy is actually getting heavier and heavier for a good reason, right? Because today we not only care about human beings ourselves; there's the Endangered Species Act that the USA rolled out.
We want to protect those endangered species. We don't want to roll out a product that's safe for humans but, at the end of the day, kills all the butterflies or honeybees, right? So that's what I'm trying to call out, which is, in this instance, we — first of all as human beings, secondly as scientists or breeders or whatever our roles — have the same common interest. I will never, ever support any product that's just for profit. Any company should, and actually, in reality, I think they do have that kind of social responsibility to make sure whatever they do is not going to cause major damage to us. Yeah. I mean, we hear this all the time in the healthcare space, but of course there's a thing in healthcare called the Hippocratic Oath. And I think to a certain extent, you know, our friends in agriculture have the same sense of duty. No one wants to sell food that poisons people, you know, and we want to make safe products. But as you've been articulating in this last answer, there are three pillars to that transparency, to that sense of security. There are human beings checking ourselves in the system. There are systems that are built to explain themselves and check on the humans, almost like we have in a democratic government, right? Checks and balances. And then third, we have regulations that make sure everybody's playing by the same rules. And I know, especially in the regulatory space — to speak a little bit more to your direct expertise at Bayer — this is a very uneven landscape across the world, especially when it comes to AI. We don't have anything as comprehensive as the recent EU AI Act, which only went into effect in February of this year, specifically made for agriculture. The EU AI Act is only categorizing AI systems based on risk. And of course, that has an effect on agricultural systems. But we've seen no specific AI act anywhere in the world for how we manage these systems in agriculture. Sure.
Just getting a sense of where — you were mentioning animal testing, you know, the different balances that regulations have to strike between using animals so that human beings aren't harmed by products, while also keeping in mind endangered species. Lots of different concerns here. I also think — forgive me, I had a little note here and I wanted to make sure I hit all my points in the lead-up to this question — even for animal testing, where we see that balance, it's an imperfect system. I think a lot of folks, as you're saying, kind of touching on the agricultural Hippocratic Oath that farmers feel — they want to feed the population. That goes back thousands of years. There's a sense that that responsibility maybe starts with the farmers, maybe starts with the human beings growing the food. And I'm wondering, from your perspective — we'll kind of put you in the farmer's role here, with that background in regulations — how do you look at the landscape of regulations right now for AI? If I was to christen you, maybe, God Emperor, President, UN Secretary-General — pass all the laws you want — what might those look like for a regulatory landscape that would match the demand we're seeing for transparency on both sides, the farmer and the AI systems? Very good question. You definitely did a lot of research. So I want to address the animal testing question first, because it's so related to AI technology. If you weren't aware yet, the EU is actually driving a no-animal-testing initiative, basically because we care about animals too.
So there's a strong push for us to stop doing animal testing without sacrificing on the safety side, and this is where AI is going to help, because no animal testing doesn't mean no testing, right? Digital twins. That was the note I was trying to read in my chicken scratch. Yeah, that's exactly where AI could help. Yeah. Okay, but that's not addressing the direct question you are asking, which is where regulation should catch up, right? We have to realize that with anything new, there will be a lag, right? Like you called out, there are not many reasonable rules being rolled out yet. We don't know what makes sense to regulate. Because here's the dilemma: if you overrule, overgovern something, you could actually kill the innovation, right? Why do I say so? The longer the development process is for a company, the more cost it will generate. And you all know, especially in the pharmaceutical area, you only have so long to sell your products at a rather high margin — the IP-protected period, right? Once that IP protection is over, it's all generic, and you no longer can make a lot of profit. So in this case, this is the reality. I'm not saying whether that makes sense or not, but if there's too heavy a regulation, a company has to elongate development by another 10 years. Think about the simple business decision to make: how much money you can make by developing this — you can easily estimate how much — then minus the cost you are going to spend to develop such a product. If it's negative, I don't think anyone will do it, because it doesn't make sense. I mean, if you are an investor, you don't want to invest in something you know for sure is going to lose money, right? So that's a balance to strike. That's my first point. And second point — this is probably too big of a jump — I want to introduce a book called Learned Optimism. Very interesting book.
It's by a professor from UPenn — that needs to be validated — but he did a bunch of experiments to show the difference between two types of personality. One is optimistic people; the other is pessimistic people. I don't know which type you are. I think historically pessimistic, and for my own mental health, I'm trying to kick down the door to optimism. I'll put it that way. You surprised me, because we are similar. I used to be typically pessimistic, but today I've transformed into an optimistic person. But again, it's not good or bad. Let me illustrate why this is relevant to the conversation here. So the study showed that optimistic people tend to overlook risks. They tend to overestimate what is under their control. The experiment I think the professor, the author, did is to ask two groups of people — one group optimistic, one group pessimistic — to turn a switch on and off. And actually they are controlling nothing. The switch connects to nothing. But the light will be on and off, controlled by the professor. And then they are asked later to estimate how many times they were really controlling the light with that fake switch. And the optimistic group definitely overshoots: 80% of the time, when in reality it's 50%. But the pessimistic group will tell you rather accurate data. They are very, very on the spot. The reason it's relevant is, back then, after I read that book, I suddenly realized I was still a pessimistic person, and I actually didn't want to stay as is, because being pessimistic is uneasy — you worry about this, you worry about that, right? But suddenly I realized, okay, optimistic people are useful when we end up in a situation where the hope is slim, like a very hard-to-survive situation. They won't lose hope easily. They will keep pushing us: let's try this, let's try that — no matter how little the chance is that we could get out of that situation. If it were me, pessimistic, we'd just give up, sit there, do nothing. But now, think about another scenario.
If the optimistic people just keep going — let's go, let's go — without even worrying about what could be on the path of the trajectory, we could easily be led into a trap, right? Because they didn't pay attention, there's a trap along the way, and we are all doomed again. So now, to wrap up — I hope I didn't use too long a story to make this point. You're totally fine. Balance is the key, right? I strongly believe that we need more regulatory rules and policies to roll out to make sure we don't do something stupid, that we don't overlook some of the risks of AI, of using AI for whatever things. But let's be careful, too. Let's not kill ideas. Let's not shut opportunities down by overruling right away. Give it a little bit of space to run, collect more data, be careful always along the way. But look, all my peers, scientists — they are not the crazy scientists showing up in Hollywood movies. We already have a very balanced group of people doing things in a very responsible way. I want to assure you of this. And again, our transparency program is just trying to share with you — not hiding any information that we could share with you — to make sure you trust us. Absolutely. I think balance is a fantastic single takeaway from this program, where we're asking questions about transparency. You can't ask for transparency only from the AI system, only from the human beings, or solely depend on the regulations and the rules. Going back to even your first answer: you will want to get a second opinion, and that might mean a second digital twin, a second AI system that comes in with maybe slightly different data. But also, in the last answer that you just gave: knowing that those systems are biased, the way to achieve balance is to understand the bias and where it gives an advantage based on the environment. Great if you're a glass-half-full person, but — not to be too graphic here — is the glass half full of blood? Does that really make you an optimist if you see it as half full? You know, what kind of blood?
It depends on the environment. It depends on ensuring that you're balancing human beings and AI systems that are optimistic and pessimistic. You want to hear from both sides. You want to hear from as many sides as possible. And that's always limited, as we are today, by the time that we have to ask these questions and the cost that it takes to really answer them. We'll have to fit it all in that funnel. And obviously that will mean our answers are always imperfect, but where we're at least dedicating the most we can to getting the right number of answers and different opinions in the room — from AI systems, from human beings, all bound by the same regulatory landscape, hopefully at a level that understands all of these biases — then maybe we can have the best possible outcomes. If I can put a nutshell on maybe the lessons we can take away from the show, that's how I'd do it. Anything else to add there, just for folks across industries, you know, trying to achieve that kind of balance in these systems? Sure thing. I do want to highlight one thing that I don't have an answer to myself, but I think it's a good question to ask. Because we all know AI can ingest data much faster and much better than we could. Our brains are limited. For example, I read an article talking about how the whole history, all the books humans have ever written, could be easily consumed by an AI model, general AI. And yeah, for sure, they will become more knowledgeable than any single individual among us. There's a caveat to that thinking, because I couldn't stop thinking about how many of those books are giving us very contradictory messages. You know where I'm going, I hope, right? I have a feeling, but go ahead. The real-world conflicts, you can see, right? You believe in this. I believe in that. It's not compatible.
Unfortunately, sometimes it's even contradictory. That's exactly why there are so many tragedies happening every day in this world, right? All those wars that are still ongoing, unfortunately — even worse, killing each other, innocent people, for nothing. Imagine: it's all rooted in beliefs. So if an AI model reads both books — all the books, we are saying, can fit into this model — what kind of conclusion will the AI model come to? How does that model deal with those kinds of dilemmas that no human being even knows how to answer today? Are they really able to give us a better decision? I really doubt it. Yes. So the problem of certainty, where it bleeds into dogma, where our biases become religious beliefs — always a dangerous territory and always the enemy of that balance. Well, we may have to save how to be more cognizant of where our sense of certainty, our biases, are becoming dogmatic beliefs, maybe for another program. But Kun, these last two conversations — absolutely fascinating. It's been a pleasure cave diving in Plato's cave with you these last few episodes and hopefully giving a lot of lessons for the folks at home who want to ensure the best possible outcomes from these systems. Thanks so much for being with us this week. We really appreciate it. Thank you, Matt. I definitely want to echo, you know, I recognize you did your research. I really enjoy that kind of idea bouncing, right? You definitely helped me push my own thinking into a deeper layer. So thank you very much for having me. Wrapping up today's episode, I think there were at least three critical takeaways for enterprise agricultural leaders to take from our conversation today with Kun He, Lead Scientific Advisor at Bayer Crop Science. First, integrate AI tools for genotyping and phenotyping to accelerate crop-breeding workflows, while retaining human breeders for step-change innovations that defy data predictions.
Second, champion human gut instinct for bold R&D decisions, like pioneering dwarf wheat varieties that historical data would dismiss, while balancing safe bets with courageous risks. Finally, empower teams to treat AI as a co-pilot, checking biases and prioritizing customer needs to drive informed decisions and blockbuster product pipelines. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision-makers turn for insight, research, and guidance. Visit emerj.com/sponsor for more information. Again, that's E-M-E-R-J dot com slash S-P-O-N-S-O-R. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and Head of Research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business Podcast. Bye.
Related Episodes

Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank
The AI in Business Podcast
22m

Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti
The AI in Business Podcast
30m

Rethinking Clinical Trials with Faster AI-Driven Decision Making - with Shefali Kakar of Novartis
The AI in Business Podcast
20m

Human-Centered Innovation Driving Better Nurse Experiences - with Umesh Rustogi of Microsoft
The AI in Business Podcast
27m

The Biggest Cybersecurity Challenges Facing Regulated and Mid-Market Sectors - with Cody Barrow of EclecticIQ
The AI in Business Podcast
18m

Overcoming Cloud Complexity in Mid Market Operations - with Dirk Michiels of Savaco
The AI in Business Podcast
22m